Science.gov

Sample records for amanda observations constrain

  1. Constraining sterile neutrinos with AMANDA and IceCube atmospheric neutrino data

    SciTech Connect

    Esmaili, Arman; Peres, O.L.G.; Halzen, Francis E-mail: halzen@icecube.wisc.edu

    2012-11-01

    We demonstrate that atmospheric neutrino data accumulated with the AMANDA and the partially deployed IceCube experiments constrain the allowed parameter space for a hypothesized fourth sterile neutrino beyond the reach of a combined analysis of all other experiments, for Δm²₄₁ ≲ 1 eV². Although the IceCube data dominate the statistics of the analysis, the advantage of a combined analysis of AMANDA and IceCube data is that it partially remedies as-yet-unknown instrumental systematic uncertainties. We also illustrate the sensitivity of the completed IceCube detector, which is now taking data, to the parameter space of the 3+1 model.
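    For orientation, in the vacuum two-flavor limit the muon-neutrino survival probability driven by the fourth mass state is (a standard textbook expression; the analysis summarized above relies on the full treatment including Earth matter effects, which resonantly enhance the signature at TeV energies):

    \[ P(\nu_\mu \to \nu_\mu) \simeq 1 - \sin^2 2\theta_{24}\,\sin^2\!\left(\frac{\Delta m^2_{41} L}{4E}\right) \]

    With baselines L up to the Earth's diameter and energies from ~100 GeV to tens of TeV, atmospheric samples are naturally sensitive to Δm²₄₁ ≲ 1 eV², as stated above.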

  2. Results from AMANDA

    NASA Astrophysics Data System (ADS)

    Wiebusch, Christopher; Ahrens, J.; Bai, X.; Barwick, S. W.; Becka, T.; Becker, K.-H.; Bertrand, D.; Bernadini, E.; Binon, F.; Biron, A.; Böser, S.; Botner, O.; Bouchta, A.; Bouhali, O.; Burgess, T.; Carius, S.; Castermans, T.; Chen, A.; Chirkin, D.; Conrad, J.; Cooley, J.; Cowen, D. F.; Davour, A.; de Clercq, C.; De Young, T.; Desiati, P.; Dewulf, J.-P.; Doksus, P.; Ekström, P.; Feser, T.; Gaisser, T. K.; Gaug, M.; Gerhardt, L.; Goldschmidt, A.; Hallgren, A.; Halzen, F.; Hanson, K.; Hardtke, R.; Hauschildt, T.; Hellwig, M.; Herquet, P.; Hill, G. C.; Hulth, P. O.; Hundertmark, S.; Jacobsen, J.; Karle, A.; Koci, B.; Köpke, L.; Kowalski, M.; Kuehn, K.; Lamoureux, J. I.; Leich, H.; Leuthold, M.; Lindahl, P.; Liubarsky, I.; Madsen, J.; Marciniewski, P.; Matis, H. S.; McParland, C. P.; Minaeva, Y.; Miočinović, P.; Mock, P. C.; Morse, R.; Nahnhauer, R.; Neunhöffer, T.; Niessen, P.; Nygren, D. R.; Ogelman, H.; Olbrechts, Ph.; Pérez de Los Heros, C.; Pohl, A. C.; Price, P. B.; Przybylski, G. T.; Rawlins, K.; Resconi, E.; Rhode, W.; Ribordy, M.; Richter, S.; Rodríguez Martino, J.; Ross, D.; Sander, H.-G.; Schmidt, T.; Schneider, D.; Schwarz, R.; Silvestri, A.; Solarz, M.; Spiczak, G. M.; Spiering, C.; Steele, D.; Steffen, P.; Stokstad, R. G.; Sudhoff, P.; Sulanke, K.-H.; Taboada, I.; Thollander, L.; Tilav, S.; Walck, C.; Weinheimer, C.; Wiebusch, C. H.; Wiedemann, C.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Yodh, G.; Young, S.

    The Antarctic Muon and Neutrino Detector Array (AMANDA) is a high-energy neutrino telescope operating at the geographic South Pole. It is a lattice of photomultiplier tubes buried deep in the polar ice. The primary goal of this detector is to discover astrophysical sources of high-energy neutrinos. We describe the detector and its methods of operation and present results from the AMANDA-B10 prototype. We demonstrate the improved sensitivity of the current AMANDA-II detector. We conclude with an outlook on the envisioned sensitivity of the future IceCube detector.

  3. Geometrically constrained observability. [control theory

    NASA Technical Reports Server (NTRS)

    Brammer, R. F.

    1974-01-01

    This paper deals with observed processes in situations in which observations are available only when the state vector lies in certain regions. For linear autonomous observed processes, necessary and sufficient conditions are obtained for half-space observation regions. These results are shown to contain a theorem dual to a controllability result proved by the author for a linear autonomous control system whose control restraint set does not contain the origin as an interior point. Observability results relating to continuous observation systems and sampled data systems are presented, and an example of observing the state of an electrical network is given.

  4. Constraining the braneworld with gravitational wave observations.

    PubMed

    McWilliams, Sean T

    2010-04-01

    Some braneworld models may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model (RS2), the AdS radius of curvature, l, of the extra dimension supports a single bound state of the massless graviton on the brane, thereby reproducing Newtonian gravity in the weak-field limit. However, using the AdS/CFT correspondence, it has been suggested that one possible consequence of RS2 is an enormous increase in Hawking radiation emitted by black holes. We utilize this possibility to derive two novel methods for constraining l via gravitational wave measurements. We show that the EMRI event rate detected by LISA can constrain l at the ~1 μm level for optimal cases, while the observation of a single galactic black hole binary with LISA results in an optimal constraint of l ≤ 5 μm. PMID:20481929

  5. Cloudsat Satellite Images of Amanda

    NASA Video Gallery

    NASA's CloudSat satellite flew over Hurricane Amanda on May 25, at 5 p.m. EDT and saw a deep area of moderate to heavy-moderate precipitation below the freezing level (where precipitation changes f...

  6. Constraining dark matter through 21-cm observations

    NASA Astrophysics Data System (ADS)

    Valdés, M.; Ferrara, A.; Mapelli, M.; Ripamonti, E.

    2007-05-01

    Beyond the reionization epoch, cosmic hydrogen is neutral and can be directly observed through its 21-cm line signal. If dark matter (DM) decays or annihilates, the corresponding energy input affects the hydrogen kinetic temperature and ionized fraction, and contributes to the Lyα background. The changes induced by these processes on the 21-cm signal can then be used to constrain the proposed DM candidates, among which we select the three most popular ones: (i) 25-keV decaying sterile neutrinos, (ii) 10-MeV decaying light dark matter (LDM) and (iii) 10-MeV annihilating LDM. Although we find that the DM effects are considerably smaller than found by previous studies (due to a more physical description of the energy transfer from DM to the gas), we conclude that combined observations of the 21-cm background and of its gradient should be able to put constraints at least on LDM candidates. In fact, LDM decays (annihilations) induce differential brightness temperature variations with respect to the non-decaying/annihilating DM case of up to ΔδTb = 8 (22) mK at about 50 (15) MHz. In principle, this signal could be detected both by current single-dish radio telescopes and by future facilities such as the Low Frequency Array; however, this assumes that ionospheric, interference and foreground issues can be properly taken care of.
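    For reference, the 21-cm differential brightness temperature against the CMB is commonly written in the linear-theory form (a standard expression, not necessarily the exact one used by the authors):

    \[ \delta T_b \simeq 27\, x_{\rm HI}(1+\delta)\left(1-\frac{T_{\rm CMB}}{T_S}\right)\left(\frac{1+z}{10}\right)^{1/2}\left(\frac{\Omega_b h^2}{0.023}\right)\left(\frac{0.15}{\Omega_m h^2}\right)^{1/2}\ {\rm mK} \]

    DM decays or annihilations enter through the spin temperature T_S, the neutral fraction x_HI and the Lyα coupling, which is how the energy injection maps onto the ΔδTb values quoted above.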

  7. Constraining the halo mass function with observations

    NASA Astrophysics Data System (ADS)

    Castro, Tiago; Marra, Valerio; Quartin, Miguel

    2016-08-01

    The abundances of dark matter halos in the universe are described by the halo mass function (HMF). It enters most cosmological analyses and parametrizes how the linear growth of primordial perturbations is connected to these abundances. Interestingly, this connection can be made approximately cosmology independent, which has made it possible to map the HMF's near-universal behavior in detail through large-scale simulations. However, such simulations may suffer from systematic effects, especially if baryonic physics is included. In this paper we ask how well observations can directly constrain the HMF. The observables we consider are galaxy cluster number counts, the galaxy cluster power spectrum and lensing of type Ia supernovae. Our results show that DES is capable of putting the first meaningful constraints on the HMF, while both Euclid and J-PAS can give stronger constraints, comparable to the ones from state-of-the-art simulations. We also find that an independent measurement of cluster masses is even more important for measuring the HMF than for constraining the cosmological parameters, and can vastly improve the determination of the halo mass function. Measuring the HMF could thus be used to cross-check simulations and their implementation of baryon physics. It could even, if deviations cannot be accounted for, hint at new physics.

  8. Constraining CO emission estimates using atmospheric observations

    NASA Astrophysics Data System (ADS)

    Hooghiemstra, P. B.

    2012-06-01

    We apply a four-dimensional variational (4D-Var) data assimilation system to optimize carbon monoxide (CO) emissions and to reduce the uncertainty of emission estimates from individual sources using the chemistry transport model TM5. In the first study only a limited amount of surface network observations from the National Oceanic and Atmospheric Administration Earth System Research Laboratory (NOAA/ESRL) Global Monitoring Division (GMD) is used to test the 4D-Var system. Uncertainty reductions of up to 60% in yearly emissions are observed over well-constrained regions, and the inferred emissions compare well with recent studies for 2004. However, since the observations only constrain total CO emissions, the 4D-Var system has difficulties separating anthropogenic and biogenic sources in particular. The inferred emissions are validated with NOAA aircraft data over North America, and the agreement is significantly improved from the prior to the posterior simulation. Validation with the Measurements Of Pollution In The Troposphere (MOPITT) instrument shows slightly improved agreement over the well-constrained Northern Hemisphere and in the tropics (except for the African continent). However, the model simulation with posterior emissions underestimates MOPITT CO total columns in the remote Southern Hemisphere (SH) by about 10%. This is caused by a reduction in SH CO sources, mainly due to surface stations at high southern latitudes. In the second study, we compare two global inversions to estimate CO emissions for 2004. Either surface flask observations from NOAA or CO total columns from the MOPITT instrument are assimilated in a 4D-Var framework. In the Southern Hemisphere (SH) three important findings are reported. First, due to their different vertical sensitivity, the stations-only inversion increases SH biomass burning emissions by 108 Tg CO/yr more than the MOPITT-only inversion. Conversely, the MOPITT-only inversion results in SH natural emissions
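    As a reminder of what a 4D-Var inversion minimizes (a generic statement of the method, not the specific TM5 implementation), the cost function balances departures from the prior emissions x_b against model-observation mismatches:

    \[ J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}\sum_i \bigl(H_i(\mathbf{x})-\mathbf{y}_i\bigr)^{\mathsf T}\mathbf{R}_i^{-1}\bigl(H_i(\mathbf{x})-\mathbf{y}_i\bigr) \]

    Here B and R_i are the prior and observation error covariances and H_i maps the emissions to the simulated observations at time i; minimizing J yields the posterior emissions and their reduced uncertainties.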

  9. Constraining Simulated Photosynthesis with Fluorescence Observations

    NASA Astrophysics Data System (ADS)

    Baker, I. T.; Berry, J. A.; Lee, J.; Frankenberg, C.; Denning, S.

    2012-12-01

    The measurement of chlorophyll fluorescence from satellites is an emerging technology. To date, most applications have compared fluorescence to light use efficiency models of Gross Primary Productivity (GPP). A close correspondence between fluorescence and GPP has been found in these comparisons. Here, we 'go the other way' and calculate fluorescence using an enzyme kinetic photosynthesis model (the Simple Biosphere Model; SiB), and compare to spectral retrievals. We utilize multiple representations for model phenology as a sensitivity test, obtaining leaf area index (LAI) and fraction of photosynthetically active radiation absorbed (fPAR) from both MODIS-derived products as well as a prognostic model of LAI/fPAR based on growing season index (PGSI). We find that bidirectional reflectance distribution function (BRDF), canopy radiative transfer, and leaf-to-canopy scaling all contribute to variability in simulated fluorescence. We use our results to evaluate discrepancies between light use efficiency and enzyme kinetic models across latitudinal, vegetation and climatological gradients. Satellite retrievals of fluorescence will provide insight into photosynthetic process and constrain simulations of the carbon cycle across multiple spatiotemporal scales.

  10. 3-D TRMM Flyby of Hurricane Amanda

    NASA Video Gallery

    The TRMM satellite flew over Hurricane Amanda on Tuesday, May 27 at 1049 UTC (6:49 a.m. EDT) and captured rainfall rates and cloud height data that was used to create this 3-D simulated flyby. Cred...

  11. Simulations of the Local Universe constrained by observational peculiar velocities

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.; Courtois, Hélène M.; Gottlöber, Stefan; Hoffman, Yehuda; Tully, R. Brent

    2014-02-01

    Peculiar velocities, obtained from direct distance measurements, are data of choice to achieve constrained simulations of the Local Universe reliable down to a scale of a few megaparsec. Unlike redshift surveys, peculiar velocities are direct tracers of the underlying gravitational field as they trace both baryonic and dark matter. This paper presents the first attempt to use solely observational peculiar velocities to constrain cosmological simulations of the nearby Universe. In order to set up initial conditions, a Reverse Zel'dovich Approximation (RZA) is used to displace constraints from their positions at z = 0 to their precursors' locations at higher redshifts. An additional new feature replaces original observed radial peculiar velocity vectors by their full 3D reconstructions provided by the Wiener-Filter (WF) estimator. Subsequently, the constrained realization (CR) of Gaussian fields technique is applied to build various realizations of the initial conditions. The WF/RZA/CR method is first tested on realistic mock catalogues built from a reference simulation similar to the Local Universe. These mocks include errors on peculiar velocities, on data point positions and a large continuous zone devoid of data in order to mimic galactic extinction. Large-scale structures are recovered with a typical accuracy of 5 h-1 Mpc in position, the best realizations reaching a 2-3 h-1 Mpc precision, the limit imposed by the RZA linear theory. Then, the method is applied to the first observational radial peculiar velocity catalogue of the project Cosmicflows. This paper is a proof of concept that the WF/RZA/CR method can be applied to observational peculiar velocities to successfully build constrained initial conditions.
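    The Reverse Zel'dovich Approximation step can be summarized compactly (a linear-theory sketch with standard notation, not a quotation from the paper): in the Zel'dovich approximation the peculiar velocity of a tracer at z = 0 is proportional to its displacement ψ from its initial (Lagrangian) position q, so

    \[ \mathbf{v}_{\rm pec} \simeq H_0 f(\Omega_m)\,\boldsymbol{\psi} \quad\Longrightarrow\quad \mathbf{q} \simeq \mathbf{x}(z=0) - \frac{\mathbf{v}_{\rm pec}}{H_0 f(\Omega_m)} \]

    which is why the Wiener-Filter reconstruction of the full 3D velocities directly supplies the displacements needed to place the constraints at their precursors' locations.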

  12. Constraining interacting dark energy models with latest cosmological observations

    NASA Astrophysics Data System (ADS)

    Xia, Dong-Mei; Wang, Sai

    2016-08-01

    The local measurement of H0 is in tension with the prediction of the ΛCDM model based on the Planck data. This tension may imply that dark energy is strengthened in the late-time Universe. We employ the latest cosmological observations of CMB, BAO, LSS, SNe, H(z) and H0 to constrain several interacting dark energy models. Our results show no significant indications for an interaction between dark energy and dark matter. The H0 tension can be moderately alleviated, but not completely removed.

  13. Search for point sources of high energy neutrinos with Amanda

    SciTech Connect

    Ahrens, J.

    2002-08-01

    This report describes a search for likely point sources of neutrinos observed by the AMANDA detector and places intensity limits on observable point sources. The search for astronomical sources of high-energy neutrinos uses the AMANDA-B10 detector, an array of 302 photomultiplier tubes buried deep in ice at the South Pole and used for the detection of Cherenkov light from upward-traveling neutrino-induced muons. The absolute pointing accuracy and angular resolution were studied by using coincident events between the AMANDA detector and two independent telescopes on the surface, the GASP air Cherenkov telescope and the SPASE extensive air shower array. Using data collected from April to October of 1997 (130.1 days of livetime), a general survey of the northern hemisphere revealed no statistically significant excess of events from any direction. The sensitivity for a flux of muon neutrinos is based on the effective detection area for through-going muons. Averaged over the northern sky, the effective detection area exceeds 10,000 m² for E_μ ≈ 10 TeV. Neutrinos generated in the atmosphere by cosmic-ray interactions were used to verify the predicted performance of the detector. For a source with a differential energy spectrum proportional to E_ν⁻² and declination larger than +40°, we obtain E² (dN_ν/dE) ≤ 10⁻⁶ GeV cm⁻² s⁻¹ for an energy threshold of 10 GeV.

  14. Constraining the volatile fraction of planets from transit observations

    NASA Astrophysics Data System (ADS)

    Alibert, Y.

    2016-06-01

    Context. The determination of the abundance of volatiles in extrasolar planets is very important as it can provide constraints on transport in protoplanetary disks and on the formation location of planets. However, constraining the internal structure of low-mass planets from transit measurements is known to be a degenerate problem. Aims: Using planetary structure and evolution models, we show how observations of transiting planets can be used to constrain their internal composition, in particular the amount of volatiles in the planetary interior, and consequently the amount of gas (defined in this paper to be only H and He) that the planet harbors. We first explore planets that are located close enough to their star to have lost their gas envelope. We then concentrate on planets at larger distances and show that the observation of transiting planets at different evolutionary ages can provide statistical information on their internal composition, in particular on their volatile fraction. Methods: We computed the evolution of low-mass planets (super-Earths to Neptune-like) for different fractions of volatiles and gas. We used a four-layer model (core, silicate mantle, icy mantle, and gas envelope) and computed the internal structure of planets for different luminosities. With this internal structure model, we computed the internal and gravitational energy of planets, which was then used to derive the time evolution of the planet. Since the total energy of a planet depends on its heat capacity and density distribution and therefore on its composition, planets with different ice fractions have different evolution tracks. Results: We show that for low-mass, gas-poor planets located close to their central star, assuming that evaporation has efficiently removed the entire gas envelope, it is possible to constrain the volatile fraction of close-in transiting planets. We illustrate this method on the example of 55 Cnc e and show that under the assumption of the absence of

  15. Thermal evolution of Mercury as constrained by MESSENGER observations

    NASA Astrophysics Data System (ADS)

    Michel, Nathalie C.; Hauck, Steven A.; Solomon, Sean C.; Phillips, Roger J.; Roberts, James H.; Zuber, Maria T.

    2013-05-01

    Observations of Mercury by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft provide new constraints on that planet's thermal and interior evolution. Specifically, MESSENGER observations have constrained the rate of radiogenic heat production via measurement of uranium, thorium, and potassium at the surface, and identified a range of surface compositions consistent with high-temperature, high-degree partial melts of the mantle. Additionally, MESSENGER data have placed new limits on the spatial and temporal variation in volcanic and tectonic activity and enabled determination that the planet's core is larger than previously estimated. Because Mercury's mantle layer is also thinner than previously thought, this result gives greater likelihood to the possibility that mantle convection is marginally supercritical or even that the mantle is not convecting. We simulate mantle convection and magma generation within Mercury's mantle under two-dimensional axisymmetry and a broad range of conditions to understand the implications of MESSENGER observations for the thermal evolution of the planet. These models demonstrate that mantle convection can persist in such a thin mantle for a substantial portion of Mercury's history, and often to the present, as long as the mantle is thicker than ~300 km. We also find that magma generation in Mercury's convecting mantle is capable of producing widespread magmas by large-degree partial melting, consistent with MESSENGER observations of the planet's surface chemistry and geology.

  16. Fast Emission Estimates in China Constrained by Satellite Observations (Invited)

    NASA Astrophysics Data System (ADS)

    Mijling, B.; van der A, R.

    2013-12-01

    Emission inventories of air pollutants are crucial information for policy makers and form important input data for air quality models. Unfortunately, bottom-up emission inventories, compiled from large quantities of statistical data, are easily outdated for an emerging economy such as China, where rapid economic growth changes emissions accordingly. Alternatively, top-down emission estimates from satellite observations of air constituents have the important advantages of being spatially consistent, having high temporal resolution, and enabling emission updates shortly after the satellite data become available. Constraining emissions from concentration measurements is, however, computationally challenging. Within the GlobEmission project of the European Space Agency (ESA) a new algorithm has been developed, specifically designed for fast daily emission estimates of short-lived atmospheric species on a mesoscopic scale (0.25 × 0.25 degree) from satellite observations of column concentrations. The algorithm needs only one forward model run from a chemical transport model to calculate the sensitivity of concentration to emission, using trajectory analysis to account for transport away from the source. By using a Kalman filter in the inverse step, optimal use is made of the a priori knowledge and the newly observed data. We apply the algorithm for NOx emission estimates in East China, using the CHIMERE model together with tropospheric NO2 column retrievals of the OMI and GOME-2 satellite instruments. The observations are used to construct a monthly emission time series, which reveals important emission trends such as the emission reduction measures during the Beijing Olympic Games, and the impact of and recovery from the global economic crisis. The algorithm is also able to detect emerging sources (e.g. new power plants) and improve emission information for areas where proxy data are poorly known or unavailable (e.g. shipping emissions). The new emission estimates result in a better
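    A minimal sketch of the kind of Kalman-filter inverse step described above is given below. It is illustrative only: the variable names, dimensions and numbers are invented, and the actual GlobEmission algorithm works on a 0.25 × 0.25 degree grid with trajectory-based sensitivities from a chemical transport model.

        import numpy as np

        def kalman_emission_update(x_prior, P_prior, y_obs, H, R):
            """One analysis step: posterior emissions and covariance, given prior
            emissions x_prior (covariance P_prior), observed columns y_obs
            (error covariance R) and a linear sensitivity H with y ~ H @ x."""
            innovation = y_obs - H @ x_prior                 # observation-minus-forecast
            S = H @ P_prior @ H.T + R                        # innovation covariance
            K = P_prior @ H.T @ np.linalg.inv(S)             # Kalman gain
            x_post = x_prior + K @ innovation
            P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
            return x_post, P_post

        # Toy usage: 3 emission grid cells constrained by 2 column observations.
        x_b = np.array([1.0, 2.0, 0.5])                      # prior emissions (arbitrary units)
        P_b = np.diag([0.5, 0.5, 0.5])                       # prior error covariance
        H   = np.array([[0.6, 0.3, 0.1],
                        [0.1, 0.4, 0.5]])                    # sensitivity of columns to emissions
        R   = 0.1 * np.eye(2)                                # observation error covariance
        y   = np.array([1.9, 1.4])                           # observed columns
        x_a, P_a = kalman_emission_update(x_b, P_b, y, H, R)

    Repeating such an update day by day, with the posterior of one day serving as the prior for the next, yields the kind of monthly emission time series described above.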

  17. Juno radio science observations to constrain Jupiter's moment of inertia

    NASA Astrophysics Data System (ADS)

    Le Maistre, S.; Folkner, W. M.; Jacobson, R. A.

    2015-10-01

    Through detailed and realistic numerical simulations, the present study assesses the precision with which Juno can measure the normalized polar moment of inertia (MOI) of Jupiter. Based on Ka-band Doppler and range data, this analysis shows that determining the precession rate of Jupiter is far more efficient than the previously proposed Lense-Thirring effect for determining the moment of inertia, and therefore for constraining the internal structure of the giant planet with Juno.

  18. Mars atmospheric escape constrained using MAVEN IUVS coronal observations

    NASA Astrophysics Data System (ADS)

    Chaffin, Michael S.; Deighan, Justin; Chaufray, Jean-Yves; Jain, Sonal; Stewart, Ian; McClintock, Bill; Crismani, Matteo; Stiepen, Arnaud; Holsclaw, Greg; Clarke, John; Montmessin, Franck; Eparvier, Frank; Thiemann, Ed; Chamberlain, Phil; Schneider, Nick; Jakosky, Bruce

    2015-11-01

    Every planetary atmosphere is capped by a corona: an extended, extremely tenuous region where collisions are negligible and particles follow ballistic trajectories. At Mars, the corona is especially extended due to the low gravity of the planet, and a large number of coronal particles are on escaping trajectories. Such escape has played a critical role in the history of the Mars system, likely removing a substantial fraction of the water initially present on the planet, but the mechanism and magnitude of this escape remains poorly constrained. Currently in orbit at Mars, MAVEN's Imaging Ultraviolet Spectrograph (IUVS) is mapping the distribution of oxygen and hydrogen above 200 km at a high spatial and temporal cadence, revealing a dynamic corona in unprecedented detail. Results will be presented demonstrating that the H in the corona is not spherically symmetric in its distribution, and can potentially be used as a tracer of thermospheric general circulation; and that non-thermal "hot" O (in contrast with more spatially confined "cold" thermal O) is ionospherically sourced with a characteristic energy of 1.1 eV and responds to solar EUV forcing. These results will be interpreted in terms of their impact on our current understanding of how atmospheric escape operates today. We will also discuss how these processes may have acted in the past to deplete Mars' initial water inventory, potentially altering the redox balance of the planet and atmosphere through differential escape of H and O.

  19. Mercury's thermo-chemical evolution constrained by MESSENGER observations

    NASA Astrophysics Data System (ADS)

    Tosi, Nicola; Grott, Matthias; Breuer, Doris; Plesa, Ana-Catalina

    2013-04-01

    Low-degree coefficients of Mercury's gravity field obtained from MESSENGER's Radio Science experiment, combined with estimates of Mercury's spin state, make it possible to compute the normalized polar moment of inertia of the planet (C/MR²) as well as the ratio of the moment of inertia of the mantle to that of the planet (C_m/C). These two parameters provide a strong constraint on the internal mass distribution. With C/MR² = 0.346 and C_m/C = 0.431 [1], interior structure models predict a large core radius but also a large mantle density. The latter requirement can be met with a relatively standard composition of the silicate mantle, for which a core radius of ~2000 km is expected [2]. Alternatively, the large density of the silicate shell has been interpreted as a consequence of the presence of a solid FeS layer that could form atop the liquid core under suitable temperature conditions [3]. According to this hypothesis, the thickness of the mantle would be reduced to only ~300 km. Additionally, the Gamma-Ray Spectrometer measured surface abundances of U, Th and K, which hint at a bulk mantle composition comparable to other terrestrial planets [4]. Geological evidence also suggests that volcanism was a globally extensive process even after the late heavy bombardment (LHB) and that the northern plains were likely emplaced in a flood lava mode by high-temperature, low-viscosity lava. Finally, the analysis of previously unrecognized compressional tectonic features revealed by recent MESSENGER images yielded revised estimates of the global planetary contraction, which is calculated to be as high as 4-5 km [5]. We employed the above pieces of information to constrain the thermal and magmatic history of Mercury with numerical simulations. Using 1D-parameterized thermo-chemical evolution models, we ran a large set of Monte-Carlo simulations (~10000) in which we systematically varied the thickness of the silicate shell, initial mantle and CMB temperatures, mantle rheology

  20. Observationally constraining the jet power extracted from spinning black holes

    NASA Astrophysics Data System (ADS)

    Markoff, Sera

    2014-03-01

    Black holes of all sizes, from stellar to supermassive, launch relativistic jets of magnetized plasma that can radiate across the entire electromagnetic spectrum. These flows originate from near-event horizon scales, where ordered magnetic fields threading the plasma likely play a defining role in their collimation and source of power. Depending on where the power is extracted from in the system, e.g., the inner accretion flow or the ergosphere of the black hole, there can be a markedly different dependence of observed power on black hole spin. Further complicating our ability to derive the spin from observations is the fact that the exact relationship between jet emission properties and spin will be very model dependent, and the fact that jets themselves evolve depending on the state of the accretion flow. I will present an overview of the current state of the art in understanding black hole jet observations and their relation to spin, as well as discuss some special cases like our Galactic center's supermassive black hole Sgr A*, and the evolving jets observed in X-ray binary systems.

  1. Constraining Galaxy Evolution Using Observed UV-Optical Spectra

    NASA Technical Reports Server (NTRS)

    Heap, Sally

    2007-01-01

    Our understanding of galaxy evolution depends on model spectra of stellar populations, and the models are only as good as the observed spectra and stellar parameters that go into them. We are therefore evaluating modern UV-optical model spectra using Hubble's Next Generation Spectral Library (NGSL) as the reference standard. The NGSL comprises intermediate-resolution (R is approximately 1000) STIS spectra of 378 stars having a wide range in metallicity and age. Unique features of the NGSL include its broad wavelength coverage (1,800-10,100 Å) and high-S/N, absolute spectrophotometry. We will report on a systematic comparison of model and observed UV-blue spectra, describe where on the HR diagram significant differences occur, and comment on current approaches to correct the models for these differences.

  2. HEATING OF FLARE LOOPS WITH OBSERVATIONALLY CONSTRAINED HEATING FUNCTIONS

    SciTech Connect

    Qiu Jiong; Liu Wenjuan; Longcope, Dana W.

    2012-06-20

    We analyze high-cadence, high-resolution observations of a C3.2 flare obtained by AIA/SDO on 2010 August 1. The flare is a long-duration event with soft X-ray and EUV radiation lasting for over 4 hr. Analysis suggests that magnetic reconnection and formation of new loops continue for more than 2 hr. Furthermore, the UV 1600 Å observations show that each of the individual pixels at the feet of flare loops brightens instantaneously with a timescale of a few minutes, and decays over a much longer timescale of more than 30 minutes. We use these spatially resolved UV light curves during the rise phase to construct empirical heating functions for individual flare loops, and model heating of coronal plasmas in these loops. The total coronal radiation of these flare loops is compared with soft X-ray and EUV radiation fluxes measured by GOES and AIA. This study presents a method to observationally infer heating functions in numerous flare loops that are formed and heated sequentially by reconnection throughout the flare, and provides a very useful constraint for coronal heating models.

  3. Constraining competing models of dark energy with cosmological observations

    NASA Astrophysics Data System (ADS)

    Pavlov, Anatoly

    The last decade of the 20th century was marked by the discovery of the accelerated expansion of the universe. This discovery puzzles physicists and has yet to be fully understood. It contradicts the conventional theory of gravity, i.e. Einstein's General Relativity (GR). According to GR, a universe filled with dark matter and ordinary matter, i.e. baryons, leptons, and photons, can only expand with deceleration. Two approaches have been developed to study this phenomenon. One attempt is to assume that GR might not be the correct description of gravity, hence a modified theory of gravity has to be developed to account for the observed acceleration of the universe's expansion. This approach is known as the "Modified Gravity Theory". The other way is to assume that the energy budget of the universe has one more component which causes expansion of space with acceleration on large scales. Dark Energy (DE) was introduced as a hypothetical type of energy homogeneously filling the entire universe and very weakly or not at all interacting with ordinary and dark matter. Observational data suggest that if DE is assumed then its contribution to the energy budget of the universe at the current epoch should be about 70% of the total energy density of the universe. In the standard cosmological model a DE term is introduced into the Einstein GR equations through the cosmological constant, a constant in time and space, and proportional to the metric tensor g_μν. While this model so far fits most available observational data, it has some significant conceptual shortcomings. Hence there are a number of alternative cosmological models of DE in which the dark energy density is allowed to vary in time and space.

  4. Observationally constrained projections of Antarctic ice sheet instability

    NASA Astrophysics Data System (ADS)

    Edwards, Tamsin; Ritz, Catherine; Durand, Gael; Payne, Anthony; Peyaud, Vincent; Hindmarsh, Richard

    2015-04-01

    Large parts of the Antarctic ice sheet lie on bedrock below sea level and may be vulnerable to a positive feedback known as Marine Ice Sheet Instability (MISI), a self-sustaining retreat of the grounding line triggered by oceanic or atmospheric changes. There is growing evidence that MISI may be underway throughout the Amundsen Sea Embayment (ASE) of West Antarctica, induced by circulation of warm Circumpolar Deep Water. If this retreat is sustained, the region could contribute up to 1-2 m to global mean sea level, and if triggered in other areas the potential contribution to sea level on centennial to millennial timescales could be two to three times greater. However, physically plausible projections of Antarctic MISI are challenging: numerical ice sheet models are too low in spatial resolution to resolve grounding line processes or else too computationally expensive to assess modelling uncertainties, and no dynamical models exist of the ocean-atmosphere-ice sheet system. Furthermore, previous numerical ice sheet model projections for Antarctica have not been calibrated with observations, which can reduce uncertainties. Here we estimate the probability of dynamic mass loss in the event of MISI under a medium climate scenario, assessing 16 modelling uncertainties and calibrating the projections with observed mass losses in the ASE from 1992-2011. We project losses of up to 30 cm sea level equivalent (SLE) by 2100 and 72 cm SLE by 2200 (95% credibility interval, CI). Our results are substantially lower than previous estimates. The ASE sustains substantial losses, 83% of the continental total by 2100 and 67% by 2200 (95% CI), but in other regions losses are limited by ice dynamical theory, observations, or a lack of projected triggers.

  5. Observations that Constrain the Scaling of Apparent Stress

    NASA Astrophysics Data System (ADS)

    McGarr, A.; Fletcher, J. B.

    2002-12-01

    Slip models developed for major earthquakes are composed of distributions of fault slip, rupture time, and slip velocity time function over the rupture surface, as divided into many smaller subfaults. Using a recently developed technique, the seismic energy radiated from each subfault can be estimated from the time history of slip there and the average rupture velocity. Total seismic energies, calculated by summing contributions from all of the subfaults, agree reasonably well with independent estimates based on seismic energy flux in the far field at regional or teleseismic distances. Two recent examples are the 1999 Izmit, Turkey and the 1999 Hector Mine, California earthquakes, for which the NEIS teleseismic measurements of radiated energy agree fairly closely with seismic energy estimates from several different slip models, developed by others, for each of these events. Similar remarks apply to the 1989 Loma Prieta, 1992 Landers, and 1995 Kobe earthquakes. Apparent stresses calculated from these energy and moment results do not indicate any moment or magnitude dependence. The distributions of both fault slip and seismic energy radiation over the rupture surfaces of earthquakes are highly inhomogeneous. These results from slip models, combined with underground and seismic observations of slip for much smaller mining-induced earthquakes, can provide a stronger constraint on the possible scaling of apparent stress with moment magnitude M or seismic moment. Slip models for major earthquakes in the range M6.2 to M7.4 show maximum slips ranging from 1.6 to 8 m. Mining-induced earthquakes at depths near 2000 m in South Africa are associated with peak slips of 0.2 to 0.37 m for events of M4.4 to M4.6. These maximum slips, whether derived from a slip model or directly observed underground in a deep gold mine, scale quite definitively as the cube root of the seismic moment. In contrast, peak slip rates (maximum subfault slip/rise time) appear to be scale invariant. A 1.25 m
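    For context, the quantities involved are related by standard definitions (not taken verbatim from the abstract): the apparent stress combines radiated energy and moment, and the reported slip scaling can be written as

    \[ \sigma_a = \frac{\mu\, E_R}{M_0}, \qquad d_{\max} \propto M_0^{1/3} \]

    where μ is the shear modulus, E_R the radiated seismic energy and M_0 the seismic moment; a scale-invariant apparent stress corresponds to E_R growing in proportion to M_0.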

  6. Observationally constrained estimates of carbonaceous aerosol radiative forcing.

    PubMed

    Chung, Chul E; Ramanathan, V; Decremer, Damien

    2012-07-17

    Carbonaceous aerosols (CA) emitted by fossil and biomass fuels consist of black carbon (BC), a strong absorber of solar radiation, and organic matter (OM). OM scatters as well as absorbs solar radiation. The absorbing component of OM, which is ignored in most climate models, is referred to as brown carbon (BrC). Model estimates of the global CA radiative forcing range from 0 to 0.7 W m⁻², to be compared with the Intergovernmental Panel on Climate Change's estimate for the pre-Industrial to the present net radiative forcing of about 1.6 W m⁻². This study provides a model-independent, observationally based estimate of the CA direct radiative forcing. Ground-based aerosol network data is integrated with field data and satellite-based aerosol observations to provide a decadal (2001 through 2009) global view of the CA optical properties and direct radiative forcing. The estimated global CA direct radiative effect is about 0.75 W m⁻² (0.5 to 1.0). This study identifies the global importance of BrC, which is shown to contribute about 20% to 550-nm CA solar absorption globally. Because of the inclusion of BrC, the net effect of OM is close to zero and the CA forcing is nearly equal to that of BC. The CA direct radiative forcing is estimated to be about 0.65 (0.5 to about 0.8) W m⁻², thus comparable to or exceeding that by methane. Caused in part by BrC absorption, CAs have a net warming effect even over open biomass-burning regions in Africa and the Amazon. PMID:22753522

  7. Using cm observations to constrain the abundance of very small dust grains in Galactic cold cores

    NASA Astrophysics Data System (ADS)

    Tibbs, C. T.; Paladini, R.; Cleary, K.; Muchovej, S. J. C.; Scaife, A. M. M.; Stevenson, M. A.; Laureijs, R. J.; Ysard, N.; Grainge, K. J. B.; Perrott, Y. C.; Rumsey, C.; Villadsen, J.

    2016-03-01

    In this analysis, we illustrate how the relatively new emission mechanism, known as spinning dust, can be used to characterize dust grains in the interstellar medium. We demonstrate this by using spinning dust emission observations to constrain the abundance of very small dust grains (a ≲ 10 nm) in a sample of Galactic cold cores. Using the physical properties of the cores in our sample as inputs to a spinning dust model, we predict the expected level of emission at a wavelength of 1 cm for four different very small dust grain abundances, which we constrain by comparing to 1 cm CARMA observations. For all of our cores, we find a depletion of very small grains, which we suggest is due to the process of grain growth. This work represents the first time that spinning dust emission has been used to constrain the physical properties of interstellar dust grains.

  8. Muon energy reconstruction in the Antarctic muon and neutrino detector array (AMANDA)

    NASA Astrophysics Data System (ADS)

    Miocinovic, Predrag

    AMANDA is an optical Cherenkov detector designed for observation of high-energy neutrinos (E ≳ 100 GeV) and is located deep inside the South Polar ice cap. The neutrinos that undergo charged-current interaction in or near the detector can be observed by the telltale Cherenkov light generated by the resulting lepton and its secondaries. The presence of insoluble particulates in the ice increases the light scattering, which in turn increases the light containment inside the detector. This enhances the light collection efficiency, allowing for a calorimetry-like measurement of the energy deposited by the neutrino-induced leptons. In this work, I developed a probabilistic method for measuring the energy of non-contained muons detected by AMANDA-B10 (1997 configuration). Knowledge of the muon energy opens a large window of discovery since it helps to determine whether the parent neutrino has a terrestrial or extraterrestrial origin. The method is based on finding the muon energy that will most likely produce the observed detector response. The energy likelihood is generated by combining the average light-emission profiles of muons with different energies and the models of light distribution in ice and detector response to light. Event reconstruction results in an energy resolution of ~0.35 in log(E/GeV) over the 1 TeV-1 PeV range. Below 1 TeV, the light produced is insufficient to reliably determine the muon energy, while above 1 PeV, the AMANDA-B10 response to energy saturates, due to the finite detector size and limitations in its hardware. Stochastic variations in muon energy loss and photon propagation are the dominant sources that limit the reconstructed energy resolution. I showed that in such a case, a Bayesian unfolding technique improves the reconstruction of the underlying muon energy spectrum. The unfolding also corrects for known systematic effects such as saturation, directional reconstruction bias, data "cleaning", and others. My analysis of 1997 data shows
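    The likelihood idea can be illustrated with a toy calculation. This is not the AMANDA reconstruction itself: the light-yield template and all numbers below are invented, and real per-module predictions come from detailed simulation of light propagation in the ice.

        import numpy as np

        def expected_npe(log10_E, distances):
            """Hypothetical light-yield template: expected photoelectrons per optical
            module, growing roughly linearly with muon energy (stochastic losses) and
            attenuated with distance from the track."""
            energy_gev = 10.0 ** log10_E
            return 5.0 * (energy_gev / 1e3) * np.exp(-distances / 30.0)

        def reconstruct_log_energy(observed_npe, distances,
                                   grid=np.linspace(3.0, 6.0, 301)):
            """Scan log10(E/GeV) and return the value maximizing the Poisson
            likelihood of the observed photoelectron counts."""
            best, best_llh = grid[0], -np.inf
            for log10_E in grid:
                mu = np.clip(expected_npe(log10_E, distances), 1e-6, None)
                llh = np.sum(observed_npe * np.log(mu) - mu)   # Poisson log-likelihood (up to a constant)
                if llh > best_llh:
                    best, best_llh = log10_E, llh
            return best

        # Toy event: 10 optical modules at various distances from a ~10 TeV muon track.
        rng = np.random.default_rng(1)
        d = rng.uniform(10.0, 80.0, size=10)
        obs = rng.poisson(expected_npe(4.0, d))
        print(reconstruct_log_energy(obs, d))

    A spectrum of such per-event estimates can then be corrected for resolution and bias with an unfolding step, which is the role the Bayesian unfolding plays in the work summarized above.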

  9. Multi-constrained fault estimation observer design with finite frequency specifications for continuous-time systems

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Jiang, Bin; Shi, Peng; Xu, Jinfa

    2014-08-01

    The design of a multi-constrained full-order fault estimation observer (FFEO) with finite frequency specifications is studied for continuous-time systems. By constructing an augmented system, a multi-constrained FFEO in the finite frequency domain is proposed to achieve fault estimation. Meanwhile, the presented FFEO avoids the overdesign problem that arises from working in the entire frequency domain, by means of the generalised Kalman-Yakubovich-Popov lemma. Furthermore, by introducing slack variables, improved results on FFEO design in different frequency domains are obtained such that different Lyapunov matrices can be designed separately for each constraint. Simulation results are presented to demonstrate the effectiveness and potential of the proposed techniques.

  10. The Search for Muon Neutrinos from Northern HemisphereGamma-Ray Bursts with AMANDA

    SciTech Connect

    IceCube Collaboration; Klein, Spencer; Achterberg, A.

    2007-05-08

    We present the results of the analysis of neutrino observations by the Antarctic Muon and Neutrino Detector Array (AMANDA) correlated with photon observations of more than 400 gamma-ray bursts (GRBs) in the Northern Hemisphere from 1997 to 2003. During this time period, AMANDA's effective collection area for muon neutrinos was larger than that of any other existing detector. Based on our observations of zero neutrinos during and immediately prior to the GRBs in the dataset, we set the most stringent upper limit on muon neutrino emission correlated with gamma-ray bursts. Assuming a Waxman-Bahcall spectrum and incorporating all systematic uncertainties, our flux upper limit has a normalization at 1 PeV of E²Φν ≤ 6.0 × 10⁻⁹ GeV cm⁻² s⁻¹ sr⁻¹, with 90% of the events expected within the energy range of ~10 TeV to ~3 PeV. The impact of this limit on several theoretical models of GRBs is discussed, as well as the future potential for detection of GRBs by next generation neutrino telescopes. Finally, we briefly describe several modifications to this analysis in order to apply it to other types of transient point sources.

  11. The Search for Muon Neutrinos from Northern Hemisphere Gamma-Ray Bursts with AMANDA

    NASA Astrophysics Data System (ADS)

    Achterberg, A.; Ackermann, M.; Adams, J.; Ahrens, J.; Andeen, K.; Auffenberg, J.; Bahcall, J. N.; Bai, X.; Baret, B.; Barwick, S. W.; Bay, R.; Beattie, K.; Becka, T.; Becker, J. K.; Becker, K.-H.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Bouchta, A.; Braun, J.; Burgess, C.; Burgess, T.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cowen, D. F.; D'Agostino, M. V.; Davour, A.; Day, C. T.; De Clercq, C.; Demirörs, L.; Descamps, F.; Desiati, P.; DeYoung, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Filimonov, K.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Geenen, H.; Gerhardt, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Griesel, T.; Gross, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hardtke, D.; Hardtke, R.; Hart, J. E.; Hasegawa, Y.; Hauschildt, T.; Hays, D.; Heise, J.; Helbing, K.; Hellwig, M.; Herquet, P.; Hill, G. C.; Hodges, J.; Hoffman, K. D.; Hommez, B.; Hoshina, K.; Hubert, D.; Hughey, B.; Hulth, P. O.; Hülss, J.-P.; Hultqvist, K.; Hundertmark, S.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Jones, A.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kawai, H.; Kelley, J. L.; Kitamura, N.; Klein, S. R.; Klepser, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kuehn, K.; Labare, M.; Landsman, H.; Leich, H.; Leier, D.; Liubarsky, I.; Lundberg, J.; Lünemann, J.; Madsen, J.; Mase, K.; Matis, H. S.; McCauley, T.; McParland, C. P.; Meli, A.; Messarius, T.; Mészáros, P.; Miyamoto, H.; Mokhtarani, A.; Montaruli, T.; Morey, A.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Niessen, P.; Nygren, D. R.; Ögelman, H.; Olivas, A.; Patton, S.; Peña-Garay, C.; Pérez de los Heros, C.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Pretz, J.; Price, P. B.; Przybylski, G. T.; Rawlins, K.; Razzaque, S.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Robbins, S.; Roth, P.; Rott, C.; Rutledge, D.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Seckel, D.; Semburg, B.; Seo, S. H.; Seunarine, S.; Silvestri, A.; Smith, A. J.; Solarz, M.; Song, C.; Sopher, J. E.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Steffen, P.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K.-H.; Sullivan, G. W.; Sumner, T. J.; Taboada, I.; Tarasova, O.; Tepe, A.; Thollander, L.; Tilav, S.; Tluczykont, M.; Toale, P. A.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; Viscomi, V.; Voigt, B.; Wagner, W.; Walck, C.; Waldmann, H.; Walter, M.; Wang, Y.-R.; Wendt, C.; Wiebusch, C. H.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Yoshida, S.; Zornoza, J. D.; Interplanetary Network, The

    2008-02-01

    We present the results of the analysis of neutrino observations by the Antarctic Muon and Neutrino Detector Array (AMANDA) correlated with photon observations of more than 400 gamma-ray bursts (GRBs) in the northern hemisphere from 1997 to 2003. During this time period, AMANDA's effective collection area for muon neutrinos was larger than that of any other existing detector. After the application of various selection criteria to our data, we expect ~1 neutrino event and <2 background events. Based on our observations of zero events during and immediately prior to the GRBs in the data set, we set the most stringent upper limit on muon neutrino emission correlated with GRBs. Assuming a Waxman-Bahcall spectrum and incorporating all systematic uncertainties, our flux upper limit has a normalization at 1 PeV of E²Φν ≤ 6.3 × 10⁻⁹ GeV cm⁻² s⁻¹ sr⁻¹, with 90% of the events expected within the energy range of ~10 TeV to ~3 PeV. The impact of this limit on several theoretical models of GRBs is discussed, as well as the future potential for detection of GRBs by next-generation neutrino telescopes. Finally, we briefly describe several modifications to this analysis in order to apply it to other types of transient point sources.

  12. Geochemical record of high emperor penguin populations during the Little Ice Age at Amanda Bay, Antarctica.

    PubMed

    Huang, Tao; Yang, Lianjiao; Chu, Zhuding; Sun, Liguang; Yin, Xijie

    2016-09-15

    Emperor penguins (Aptenodytes forsteri) are sensitive to Antarctic climate change because they breed on the fast sea ice. Studies of the paleohistory of the emperor penguin are rare, due to the lack of archives on land. In this study, we obtained an emperor penguin ornithogenic sediment profile (PI) and performed geochronological, geochemical and stable isotope analyses on the sediments and feather remains. Two radiocarbon dates of penguin feathers in PI indicate that emperor penguins colonized Amanda Bay as early as CE 1540. By using the bio-elements (P, Se, Hg, Zn and Cd) in sediments and stable isotope values (δ¹⁵N and δ¹³C) in feathers, we inferred the relative population size and dietary change of emperor penguins during the period CE 1540-2008, respectively. An increase in population size, with depleted N isotope ratios, for emperor penguins on N island at Amanda Bay during the Little Ice Age (CE 1540-1866) was observed, suggesting that cold climate affected the penguin's breeding habitat, prey availability and thus their population and dietary composition. PMID:27261428

  13. Constraining parameters of white-dwarf binaries using gravitational-wave and electromagnetic observations

    SciTech Connect

    Shah, Sweta; Nelemans, Gijs

    2014-08-01

    The space-based gravitational wave (GW) detector, evolved Laser Interferometer Space Antenna (eLISA), is expected to observe millions of compact Galactic binaries that populate our Milky Way. GW measurements obtained from the eLISA detector are in many cases complementary to possible electromagnetic (EM) data. In our previous papers, we have shown that the EM data can significantly enhance our knowledge of the astrophysically relevant GW parameters of Galactic binaries, such as the amplitude and inclination. This is possible due to the presence of some strong correlations between GW parameters that are measurable by both EM and GW observations, for example, the inclination and sky position. In this paper, we quantify the constraints on the physical parameters of the white-dwarf binaries, i.e., the individual masses, chirp mass, and the distance to the source, that can be obtained by combining the full set of EM measurements such as the inclination, radial velocities, distances, and/or individual masses with the GW measurements. We find the following 2σ fractional uncertainties in the parameters of interest. The EM observations of distance constrain the chirp mass to ∼15%-25%, whereas EM data of a single-lined spectroscopic binary constrain the secondary mass and the distance with factors of two to ∼40%. The single-line spectroscopic data complemented with distance constrain the secondary mass to ∼25%-30%. Finally, EM data on a double-lined spectroscopic binary constrain the distance to ∼30%. All of these constraints depend on the inclination and the signal strength of the binary systems. We also find that the EM information on distance and/or the radial velocity is the most useful for improving the estimate of the secondary mass, inclination, and/or distance.
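    The leverage of EM data can be seen from the standard relations for a nearly monochromatic binary (textbook formulas, not specific to this paper): the GW amplitude scales as

    \[ \mathcal{A} \propto \frac{\mathcal{M}^{5/3} f^{2/3}}{d}, \qquad \mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1+m_2)^{1/5}} \]

    so an independently measured distance d converts the well-determined amplitude and frequency f into a constraint on the chirp mass, which radial-velocity or single-/double-lined spectroscopic information can then help split into the individual masses.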

  14. An observationally constrained evaluation of the oxidative capacity in the tropical western Pacific troposphere

    NASA Astrophysics Data System (ADS)

    Nicely, Julie M.; Anderson, Daniel C.; Canty, Timothy P.; Salawitch, Ross J.; Wolfe, Glenn M.; Apel, Eric C.; Arnold, Steve R.; Atlas, Elliot L.; Blake, Nicola J.; Bresch, James F.; Campos, Teresa L.; Dickerson, Russell R.; Duncan, Bryan; Emmons, Louisa K.; Evans, Mathew J.; Fernandez, Rafael P.; Flemming, Johannes; Hall, Samuel R.; Hanisco, Thomas F.; Honomichl, Shawn B.; Hornbrook, Rebecca S.; Huijnen, Vincent; Kaser, Lisa; Kinnison, Douglas E.; Lamarque, Jean-Francois; Mao, Jingqiu; Monks, Sarah A.; Montzka, Denise D.; Pan, Laura L.; Riemer, Daniel D.; Saiz-Lopez, Alfonso; Steenrod, Stephen D.; Stell, Meghan H.; Tilmes, Simone; Turquety, Solene; Ullmann, Kirk; Weinheimer, Andrew J.

    2016-06-01

    Hydroxyl radical (OH) is the main daytime oxidant in the troposphere and determines the atmospheric lifetimes of many compounds. We use aircraft measurements of O3, H2O, NO, and other species from the Convective Transport of Active Species in the Tropics (CONTRAST) field campaign, which occurred in the tropical western Pacific (TWP) during January-February 2014, to constrain a photochemical box model and estimate concentrations of OH throughout the troposphere. We find that tropospheric column OH (OHCOL) inferred from CONTRAST observations is 12 to 40% higher than found in chemical transport models (CTMs), including CAM-chem-SD run with 2014 meteorology as well as eight models that participated in POLMIP (2008 meteorology). Part of this discrepancy is due to a clear-sky sampling bias that affects CONTRAST observations; accounting for this bias and also for a small difference in chemical mechanism results in our empirically based value of OHCOL being 0 to 20% larger than found within global models. While these global models simulate observed O3 reasonably well, they underestimate NOx (NO + NO2) by a factor of 2, resulting in OHCOL ~30% lower than box model simulations constrained by observed NO. Underestimations by CTMs of observed CH3CHO throughout the troposphere and of HCHO in the upper troposphere further contribute to differences between our constrained estimates of OH and those calculated by CTMs. Finally, our calculations do not support the prior suggestion of the existence of a tropospheric OH minimum in the TWP, because during January-February 2014 observed levels of O3 and NO were considerably larger than previously reported values in the TWP.

  15. Observationally constraining gravitational wave emission from short gamma-ray burst remnants

    NASA Astrophysics Data System (ADS)

    Lasky, Paul D.; Glampedakis, Kostas

    2016-05-01

    Observations of short gamma-ray bursts indicate ongoing energy injection following the prompt emission, with the most likely candidate being the birth of a rapidly rotating, highly magnetized neutron star. We utilize X-ray observations of the burst remnant to constrain properties of the nascent neutron star, including its magnetic field-induced ellipticity and the saturation amplitude of various oscillation modes. Moreover, we derive strict upper limits on the gravitational wave emission from these objects by looking only at the X-ray light curve, showing the burst remnants are unlikely to be detected in the near future using ground-based gravitational wave interferometers, such as Advanced LIGO.
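    The link between the inferred ellipticity and the gravitational-wave output is the standard expression for a rigidly rotating triaxial star, quoted here only for orientation:

    \[ \dot E_{\rm gw} = \frac{32\,G}{5\,c^5}\, I^2 \epsilon^2 \Omega^6 \]

    where I is the stellar moment of inertia, ε the ellipticity and Ω the spin frequency; because the X-ray light curve constrains the spin-down evolution of the remnant, it also bounds the energy that can be emitted through this channel.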

  16. Constraining Earth's Rheology of the Barents Sea Using Grace Gravity Change Observations

    NASA Astrophysics Data System (ADS)

    van der Wal, W.; Root, B. C.; Tarasov, L.

    2014-12-01

    The Barents Sea region was ice covered during the last glacial maximum and experiences Glacial Isostatic Adjustment (GIA). Because of the limited amount of relevant geological and geodetic observations, it is difficult to constrain GIA models for this region. With improved ice sheet models and gravity observations from GRACE, it is possible to better constrain Earth rheology. This study aims to constrain the upper mantle viscosity and elastic lithosphere thickness from GRACE data in the Barents Sea region. The GRACE observations are corrected for current ice melting on Svalbard, Novaya Zemlya and Frans Joseph Land. A secular gravity rate trend is estimated from the CSR release 5 GRACE data for the period of February 2003 to July 2013. Furthermore, long-wavelength effects from distant large mass balance signals, such as Greenland ice melting, are filtered out. A new high-variance set of ice loading histories from calibrated glaciological modeling is used in the GIA modeling, as it is found that ICE-5G over-estimates the observed GIA gravity change in the region. It is found that the rheology structure represented by VM5a results in over-estimation of the observed gravity change in the region for all ice sheet chronologies investigated. Therefore, other rheological Earth models were investigated. The best-fitting upper mantle viscosity and elastic lithosphere thickness in the Barents Sea region are 4 (±0.5) × 10²⁰ Pa s and 110 (±20) km, respectively. The GRACE satellite mission proves to be a useful constraint in the Barents Sea region for improving our knowledge of upper mantle rheology.

  17. CONSTRAINING THE DARK ENERGY EQUATION OF STATE USING LISA OBSERVATIONS OF SPINNING MASSIVE BLACK HOLE BINARIES

    SciTech Connect

    Petiteau, Antoine; Babak, Stanislav; Sesana, Alberto

    2011-05-10

    Gravitational wave (GW) signals from coalescing massive black hole (MBH) binaries could be used as standard sirens to measure cosmological parameters. The future space-based GW observatory Laser Interferometer Space Antenna (LISA) will detect up to a hundred such events, providing very accurate measurements of their luminosity distances. To constrain the cosmological parameters, we also need to measure the redshift of the galaxy (or cluster of galaxies) hosting the merger. This requires the identification of a distinctive electromagnetic event associated with the binary coalescence. However, putative electromagnetic signatures may be too weak to be observed. Instead, we study here the possibility of constraining the cosmological parameters by enforcing statistical consistency between all the possible hosts detected within the measurement error box of a few dozen low-redshift (z < 3) events. We construct MBH populations using merger tree realizations of the dark matter hierarchy in a ΛCDM universe, and we use data from the Millennium simulation to model the galaxy distribution in the LISA error box. We show that, assuming that all the other cosmological parameters are known, the parameter w describing the dark energy equation of state can be constrained to a 4%-8% level (2σ error), competitive with current uncertainties obtained by type Ia supernovae measurements, providing an independent test of our cosmological model.

  18. Measurements of atmospheric muons using AMANDA with emphasis on the prompt component

    NASA Astrophysics Data System (ADS)

    Ganugapati, Raghunath

    The main aim of the AMANDA neutrino telescope is to detect diffuse extraterrestrial neutrinos. While atmospheric muons can be easily filtered out, atmospheric neutrinos are an irreducible background for diffuse extraterrestrial neutrino fluxes. At GeV energies the atmospheric neutrino fluxes are dominated by conventional neutrinos. However, with increasing energy (>100 TeV), the harder "prompt" neutrinos that arise through semi-leptonic decays of hadrons containing heavy quarks, most notably charm, become dominant. Estimates of the magnitude of the prompt atmospheric fluxes differ by almost two orders of magnitude. The main principle in this thesis is that it is possible to overcome the theoretical uncertainty in the magnitude of the prompt neutrino fluxes by deriving their intensity from a measurement of the down-going prompt muon flux. An attempt to constrain this flux using this principle was made, and an analysis of the down-going muon data was performed to constrain the RPQM model of prompt muons by a factor of 3.67 under a strict set of simplifying assumptions.

  19. MULTI-WAVELENGTH OBSERVATIONS OF SOLAR FLARES WITH A CONSTRAINED PEAK X-RAY FLUX

    SciTech Connect

    Bowen, Trevor A.; Testa, Paola; Reeves, Katharine K.

    2013-06-20

    We present an analysis of soft X-ray (SXR) and extreme-ultraviolet (EUV) observations of solar flares with an approximate C8 Geostationary Operational Environmental Satellite (GOES) class. Our constraint on peak GOES SXR flux allows for the investigation of correlations between various flare parameters. We show that the duration of the decay phase of a flare is proportional to the duration of its rise phase. Additionally, we show significant correlations between the radiation emitted in the flare rise and decay phases. These results suggest that the total radiated energy of a given flare is proportional to the energy radiated during the rise phase alone. This partitioning of radiated energy between the rise and decay phases is observed in both SXR and EUV wavelengths. Though observations from the EUV Variability Experiment show significant variation in the behavior of individual EUV spectral lines during different C8 events, this work suggests that broadband EUV emission is well constrained. Furthermore, GOES and Atmospheric Imaging Assembly data allow us to determine several thermal parameters (e.g., temperature, volume, density, and emission measure) for the flares within our sample. Analysis of these parameters demonstrates that, within this constrained GOES class, the longer-duration solar flares are cooler events with larger volumes capable of emitting vast amounts of radiation. The shortest C8 flares are typically the hottest events, smaller in physical size, and have lower associated total energies. These relationships are directly comparable with several scaling laws and flare loop models.

  20. Future sea level rise constrained by observations and long-term commitment.

    PubMed

    Mengel, Matthias; Levermann, Anders; Frieler, Katja; Robinson, Alexander; Marzeion, Ben; Winkelmann, Ricarda

    2016-03-01

    Sea level has been steadily rising over the past century, predominantly due to anthropogenic climate change. The rate of sea level rise will keep increasing with continued global warming, and, even if temperatures are stabilized through the phasing out of greenhouse gas emissions, sea level is still expected to rise for centuries. This will affect coastal areas worldwide, and robust projections are needed to assess mitigation options and guide adaptation measures. Here we combine the equilibrium response of the main sea level rise contributions with their last century's observed contribution to constrain projections of future sea level rise. Our model is calibrated to a set of observations for each contribution, and the observational and climate uncertainties are combined to produce uncertainty ranges for 21st century sea level rise. We project anthropogenic sea level rise of 28-56 cm, 37-77 cm, and 57-131 cm in 2100 for the greenhouse gas concentration scenarios RCP26, RCP45, and RCP85, respectively. Our uncertainty ranges for total sea level rise overlap with the process-based estimates of the Intergovernmental Panel on Climate Change. The "constrained extrapolation" approach generalizes earlier global semiempirical models and may therefore lead to a better understanding of the discrepancies with process-based projections. PMID:26903648

  1. How precisely can neutrino emission from supernova remnants be constrained by gamma ray observations?

    SciTech Connect

    Villante, F. L.; Vissani, F.

    2008-11-15

    We propose a conceptually and computationally simple method to evaluate the neutrinos emitted by supernova remnants using the observed γ ray spectrum. The proposed method does not require any preliminary parametrization of the gamma ray flux; the gamma ray data can be used as an input. In this way, we are able to propagate easily the observational errors and to understand how well the neutrino flux and the signal in neutrino telescopes can be constrained by γ ray data. We discuss the various possible sources of theoretical and systematical uncertainties (e.g., hadronic modeling, neutrino oscillation parameters, etc.), obtaining an estimate of the accuracy of our calculation. Furthermore, we apply our approach to the supernova remnant RX J1713.7-3946, showing that neutrino emission is very well constrained by the H.E.S.S. γ ray data: indeed, the accuracy of our prediction is limited by theoretical uncertainties. The observation of neutrinos from RX J1713.7-3946 seems possible with an exposure of the order of a few km² × year, provided that the detection threshold in future neutrino telescopes is not higher than about 1 TeV.

  2. Future sea level rise constrained by observations and long-term commitment

    PubMed Central

    Mengel, Matthias; Levermann, Anders; Frieler, Katja; Robinson, Alexander; Marzeion, Ben; Winkelmann, Ricarda

    2016-01-01

    Sea level has been steadily rising over the past century, predominantly due to anthropogenic climate change. The rate of sea level rise will keep increasing with continued global warming, and, even if temperatures are stabilized through the phasing out of greenhouse gas emissions, sea level is still expected to rise for centuries. This will affect coastal areas worldwide, and robust projections are needed to assess mitigation options and guide adaptation measures. Here we combine the equilibrium response of the main sea level rise contributions with their last century's observed contribution to constrain projections of future sea level rise. Our model is calibrated to a set of observations for each contribution, and the observational and climate uncertainties are combined to produce uncertainty ranges for 21st century sea level rise. We project anthropogenic sea level rise of 28–56 cm, 37–77 cm, and 57–131 cm in 2100 for the greenhouse gas concentration scenarios RCP26, RCP45, and RCP85, respectively. Our uncertainty ranges for total sea level rise overlap with the process-based estimates of the Intergovernmental Panel on Climate Change. The “constrained extrapolation” approach generalizes earlier global semiempirical models and may therefore lead to a better understanding of the discrepancies with process-based projections. PMID:26903648

  3. Choosing a 'best' global aerosol model: Can observations constrain parametric uncertainty?

    NASA Astrophysics Data System (ADS)

    Browse, Jo; Reddington, Carly; Pringle, Kirsty; Regayre, Leighton; Lee, Lindsay; Schmidt, Anja; Field, Paul; Carslaw, Kenneth

    2015-04-01

    Anthropogenic aerosol has been shown to contribute to climate change via direct radiative forcing and cloud-aerosol interactions. While the role of aerosol as a climate agent is likely to diminish as CO2 emissions increase, recent studies suggest that uncertainty in modelled aerosol is likely to dominate uncertainty in future forcing projections. Uncertainty in modelled aerosol derives from uncertainty in the representation of emissions and aerosol processes (parametric uncertainty) as well as structural error. Here we utilise Latin hypercube sampling methods to produce an ensemble model (composed of 280 runs) of a global model of aerosol processes (GLOMAP) spanning 31 parametric ranges. Using an unprecedented number of observations made available by the GASSP project, we have evaluated our ensemble model against a multi-variable (CCN, BC mass, PM2.5) data-set to determine if an 'ideal' aerosol model exists. Ignoring structural errors, optimization of a global model against multiple data-sets to within a factor of 2 is possible, with multiple model runs identified. However, (even regionally) the parametric range of our 'best' model runs is very wide, with the same model skill arising from multiple parameter settings. Our results suggest that 'traditional' in-situ measurements are insufficient to constrain parametric uncertainty. Thus, to constrain aerosol in climate models, future evaluations must include process-based observations.
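
    A minimal sketch of the Latin hypercube design behind such a perturbed-parameter ensemble is shown below (the 31 placeholder ranges and the factor-of-2 skill test are illustrative assumptions, not the GLOMAP setup):

        import numpy as np
        from scipy.stats import qmc

        n_params, n_runs = 31, 280
        sampler = qmc.LatinHypercube(d=n_params, seed=42)
        unit = sampler.random(n=n_runs)                   # samples in the unit hypercube
        lower = np.full(n_params, 0.1)                    # placeholder parameter bounds
        upper = np.full(n_params, 10.0)
        design = qmc.scale(unit, lower, upper)            # one row per ensemble member

        def within_factor_of_two(simulated, observed):
            """Skill test: all simulated values within a factor of 2 of the observations."""
            ratio = np.asarray(simulated) / np.asarray(observed)
            return bool(np.all((ratio > 0.5) & (ratio < 2.0)))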

  4. Constraining mantle viscosity structure for a thermochemical mantle using the geoid observation

    NASA Astrophysics Data System (ADS)

    Liu, Xi; Zhong, Shijie

    2016-03-01

    Long-wavelength geoid anomalies provide important constraints on mantle dynamics and viscosity structure. Previous studies have successfully reproduced the observed geoid using seismically inferred buoyancy in whole-mantle convection models. However, it has been suggested that large low shear velocity provinces (LLSVPs) underneath Pacific and Africa in the lower mantle are chemically distinct and are likely denser than the ambient mantle. We formulate instantaneous flow models based on seismic tomographic models to compute the geoid and constrain mantle viscosity by assuming both thermochemical and whole-mantle convection. Geoid modeling for the thermochemical model is performed by considering the compensation effect of dense thermochemical piles and removing buoyancy structure of the compensation layer in the lower mantle. Thermochemical models well reproduce the observed geoid, thus reconciling the geoid with the interpretation of LLSVPs as dense thermochemical piles. The viscosity structure inverted for thermochemical models is nearly identical to that of whole-mantle models. In the preferred model, the lower mantle viscosity is ~10 times higher than the upper mantle viscosity that is ~10 times higher than the transition zone viscosity. The weak transition zone is consistent with the proposed high water content there. The geoid in thermochemical mantle models is sensitive to seismic structure at midmantle depths, suggesting a need to improve seismic imaging resolution there. The geoid modeling constrains the vertical extent of dense and stable chemical piles to be within ~500 km above CMB. Our results have implications for mineral physics, seismic tomographic studies, and mantle convection modeling.

  5. The impact of priors and observables on parameter inferences in the constrained MSSM

    NASA Astrophysics Data System (ADS)

    Trotta, Roberto; Feroz, Farhan; Hobson, Mike; Roszkowski, Leszek; Ruiz de Austri, Roberto

    2008-12-01

    We use a newly released version of the SuperBayeS code to analyze the impact of the choice of priors and the influence of various constraints on the statistical conclusions for the preferred values of the parameters of the Constrained MSSM. We assess the effect in a Bayesian framework and compare it with an alternative likelihood-based measure of a profile likelihood. We employ a new scanning algorithm (MultiNest) which increases the computational efficiency by a factor ~ 200 with respect to previously used techniques. We demonstrate that the currently available data are not yet sufficiently constraining to allow one to determine the preferred values of CMSSM parameters in a way that is completely independent of the choice of priors and statistical measures. While BR(B̄ → X_s γ) generally favors large m_0, this is in some contrast with the preference for low values of m_0 and m_1/2 that is almost entirely a consequence of a combination of prior effects and a single constraint coming from the anomalous magnetic moment of the muon, which remains somewhat controversial. Using an information-theoretical measure, we find that the cosmological dark matter abundance determination provides at least 80% of the total constraining power of all available observables. Despite the remaining uncertainties, prospects for direct detection in the CMSSM remain excellent, with the spin-independent neutralino-proton cross section almost guaranteed to lie above σ_p^SI ~ 10^-10 pb, independently of the choice of priors or statistics. Likewise, gluino and lightest Higgs discovery at the LHC remain highly encouraging. While in this work we have used the CMSSM as the particle physics model, our formalism and scanning technique can be readily applied to a wider class of models with several free parameters.

  6. Direct and Semi-direct Radiative Responses to Observation-Constrained Aerosol Absorption over S Asia

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Kotamarthi, V. R.; Manoharan, V.

    2013-12-01

    Climate impacts of aerosols over S. Asia have been studied extensively in both models and observations. However, discrepancies between observed and modeled aerosol concentrations and optical properties have hindered our understanding of the aerosol influences on the regional monsoon circulation and rainfall. We present an in-depth examination of direct and semi-direct radiative responses due to aerosols on the latitudinal heating gradient and cloud distribution, with observational constraints on solar absorption by aerosols. Regional distributions of aerosol concentration are simulated with a 12-km regional climate model (WRF-Chem) driven by the NCEP analysis data from August 2011 to March 2012. During this time period, the ground-based measurements of aerosols and clouds, surface radiation, water vapor, and temperature were taken at Nainital (29.38°N, 79.45°E) during the DOE Ganges Valley Experiment (GVAX). This data set, which is available at high temporal resolution (hourly), is used to evaluate and constrain the simulated wavelength dependence of aerosol absorption and the correlation with changes in surface radiation, cloud base height and liquid water content for the entire post-monsoon period. The analysis is extended to a regional scale by comparing with satellite observation of absorbing aerosol optical depth (OMI) and cloud properties (MODIS). Preliminary results show good agreement in monthly variations of simulated and observed aerosol optical depth (AOD) except during periods of high observed AOD. Initial analysis indicates a possible local origin for the aerosols that is not captured in the model at present. Furthermore, analysis of the spectrally resolved aerosol absorption measurements indicates that these local aerosols exhibit strong absorption in near-UV and visible wavelengths. A large fraction of increased absorption during October and November (local fall harvest season) is attributable to the super-micron sized aerosol particles. In

  7. Healing in forgiveness: A discussion with Amanda Lindhout and Katherine Porterfield, PhD

    PubMed Central

    Porterfield, Katherine A.; Lindhout, Amanda

    2014-01-01

    In 2008, Amanda Lindhout was kidnapped by a group of extremists while traveling as a freelance journalist in Somalia. She and a colleague were held captive for more than 15 months, released only after their families paid a ransom. In this interview, Amanda discusses her experiences in captivity and her ongoing recovery from this experience with Katherine Porterfield, Ph.D., a clinical psychologist at the Bellevue/NYU Program for Survivors of Torture. Specifically, Amanda describes the childhood experiences that shaped her thirst for travel and knowledge, the conditions of her kidnapping, and her experiences after she was released from captivity. Amanda outlines the techniques that she employed to survive in the early aftermath of her capture, and how these coping strategies changed as her captivity lengthened. She reflects on her transition home, her recovery process, and her experiences with mental health professionals. Amanda's insights provide an example of resilience in the face of severe, extended trauma to researchers, clinicians, and survivors alike. The article ends with a discussion of the ways that Amanda's coping strategies and recovery process are consistent with existing resilience literature. Amanda's experiences as a hostage, her astonishing struggle for physical and mental survival, and her life after being freed are documented in her book, co-authored with Sara Corbett, A House in the Sky. PMID:25317259

  8. EQUATION OF STATE AND NEUTRON STAR PROPERTIES CONSTRAINED BY NUCLEAR PHYSICS AND OBSERVATION

    SciTech Connect

    Hebeler, K.; Lattimer, J. M.; Pethick, C. J.; Schwenk, A.

    2013-08-10

    Microscopic calculations of neutron matter based on nuclear interactions derived from chiral effective field theory, combined with the recent observation of a 1.97 ± 0.04 M_⊙ neutron star, constrain the equation of state of neutron-rich matter at sub- and supranuclear densities. We discuss in detail the allowed equations of state and the impact of our results on the structure of neutron stars, the crust-core transition density, and the nuclear symmetry energy. In particular, we show that the predicted range for neutron star radii is robust. For use in astrophysical simulations, we provide detailed numerical tables for a representative set of equations of state consistent with these constraints.

  9. Deep source model for Nevado del Ruiz Volcano, Colombia, constrained by interferometric synthetic aperture radar observations

    NASA Astrophysics Data System (ADS)

    Lundgren, Paul; Samsonov, Sergey V.; López Velez, Cristian Mauricio; Ordoñez, Milton

    2015-06-01

    Nevado del Ruiz is part of a large volcano complex in the northern Andes of Colombia. Interferometric synthetic aperture radar observations from the RADARSAT-2 satellite since 2011 show steady inflation of the volcano since 2012 at 3-4 cm/yr. The broad (>20 km) deformation pattern from both ascending and descending track data constrains source models for either point or spheroidal sources, both located at >14 km beneath the surface (mean elevation 4.2 km) and 10 km SW of Nevado del Ruiz, below nearby Santa Isabel Volcano. Stress change computations for both sources in the context of a compressive regional stress indicate that dikes propagating from the source should become trapped in sills, possibly leading to a more complex pathway to the surface and explaining the significant lateral separation of the source and Nevado del Ruiz Volcano.

  10. Global fine-mode aerosol radiative effect, as constrained by comprehensive observations

    NASA Astrophysics Data System (ADS)

    Chung, Chul E.; Chu, Jung-Eun; Lee, Yunha; van Noije, Twan; Jeoung, Hwayoung; Ha, Kyung-Ja; Marks, Marguerite

    2016-07-01

    Aerosols directly affect the radiative balance of the Earth through the absorption and scattering of solar radiation. Although the contributions of absorption (heating) and scattering (cooling) of sunlight have proved difficult to quantify, the consensus is that anthropogenic aerosols cool the climate, partially offsetting the warming by rising greenhouse gas concentrations. Recent estimates of global direct anthropogenic aerosol radiative forcing (i.e., global radiative forcing due to aerosol-radiation interactions) are -0.35 ± 0.5 W m^-2, and these estimates depend heavily on aerosol simulation. Here, we integrate a comprehensive suite of satellite and ground-based observations to constrain total aerosol optical depth (AOD), its fine-mode fraction, the vertical distribution of aerosols and clouds, and the collocation of clouds and overlying aerosols. We find that the direct fine-mode aerosol radiative effect is -0.46 W m^-2 (-0.54 to -0.39 W m^-2). Fine-mode aerosols include sea salt and dust aerosols, and we find that these natural aerosols result in a very large cooling (-0.44 to -0.26 W m^-2) when constrained by observations. When the contribution of these natural aerosols is subtracted from the fine-mode radiative effect, the net becomes -0.11 (-0.28 to +0.05) W m^-2. This net arises from total (natural + anthropogenic) carbonaceous, sulfate and nitrate aerosols, which suggests that global direct anthropogenic aerosol radiative forcing is less negative than -0.35 W m^-2.

  11. Observational techniques for constraining hydraulic and hydrologic models for use in catchment scale flood impact assessment

    NASA Astrophysics Data System (ADS)

    Owen, Gareth; Wilkinson, Mark; Nicholson, Alex; Quinn, Paul; O'Donnell, Greg

    2015-04-01

    There is an increase in the use of Natural Flood Management (NFM) schemes to tackle excessive runoff in rural catchments, but direct evidence of their functioning during extreme events is often lacking. With the availability of low-cost sensors, a dense nested monitoring network can be established to provide near continuous optical and physical observations of hydrological processes. This paper will discuss findings for a number of catchments in the North of England where land use management and NFM have been implemented for flood risk reduction, and show how these observations have been used to inform both a hydraulic and a rainfall-runoff model. Observations are of fundamental importance for understanding how measures function, and collecting them is becoming increasingly viable and affordable. Open source electronic platforms such as Arduino and Raspberry Pi are being used with cheap sensors to perform these tasks. For example, a level gauge has been developed for approximately €110 and cameras capable of capturing still or moving pictures are available for approximately €120; these are being used to better understand the behaviour of NFM features such as ponds and woody debris. There is potential for networks of these instruments to be configured and data collected through Wi-Fi or other wireless networks. Expanding informative networks of data that can constrain models is now possible. The functioning of small scale runoff attenuation features, such as offline ponds, has been demonstrated at the local scale. Specifically, through the measurement of both instream and in-pond water levels, it has been possible to calculate the impact of storing/attenuating flood flows on the adjacent river flow. This information has been encapsulated in a hydraulic model that allows the extrapolation of impacts to the larger catchment scale, contributing to understanding of the scalability of such features. Using a dense network of level gauges located along the main

  12. Constraining cosmic reionization with quasar, gamma ray burst, and Lyalpha emitter observations

    NASA Astrophysics Data System (ADS)

    Gallerani, S.; Ferrara, A.; Choudhury, T. Roy; Fan, X.; Salvaterra, R.; Dayal, P.

    We investigate the cosmic reionization history by comparing semi-analytical models of the Lyα forest with observations of high-z quasar and gamma ray burst absorption spectra. In order to constrain the reionization epoch z_rei, we consider two physically motivated scenarios in which reionization ends either early (ERM, z_rei ≳ 7) or late (LRM, z_rei ≈ 6). We analyze the transmitted flux in a sample of 17 QSO spectra at 5.7 < z_em < 6.4 and in the spectrum of the GRB 050904 at z = 6.3, studying the wide dark portions (gaps) in the observed absorption spectra. By comparing the statistics of these spectral features with our models, we conclude that current observational data do not require any sudden change in the ionization state of the IGM at z ≈ 6, favouring indeed a highly ionized Universe at these epochs, as predicted by the ERM. Moreover, we test the predictions of this model through Lyα emitter observations, finding that the ERM provides a good fit to the evolution of the luminosity function of Lyα-emitting galaxies in the redshift range z = 5.7-6.5. The overall result points towards an extended reionization process which starts at z ≳ 11 and completes at z_rei ≳ 7, in agreement with the recent WMAP5 data.

  13. Predicting the future by explaining the past: constraining carbon-climate feedback using contemporary observations

    NASA Astrophysics Data System (ADS)

    Denning, S.

    2014-12-01

    The carbon-climate community has an historic opportunity to make a step-function improvement in climate prediction by using regional constraints to improve mechanistic model representation of carbon cycle processes. Interactions among atmospheric CO2, global biogeochemistry, and physical climate constitute leading sources of uncertainty in future climate. First-order differences among leading models of these processes produce differences in climate as large as differences in aerosol-cloud-radiation interactions and fossil fuel combustion. Emergent constraints based on global observations of interannual variations provide powerful constraints on model parameterizations. Additional constraints can be defined at regional scales. Organized intercomparison experiments have shown that uncertainties in future carbon-climate feedback arise primarily from model representations of the dependence of photosynthesis on CO2 and drought stress and the dependence of decomposition on temperature. Just as representations of net carbon fluxes have benefited from eddy flux, ecosystem manipulations, and atmospheric CO2, component carbon fluxes (photosynthesis, respiration, decomposition, disturbance) can be constrained at regional scales using new observations. Examples include biogeochemical tracers such as isotopes and carbonyl sulfide as well as remotely-sensed parameters such as chlorophyll fluorescence and biomass. Innovative model evaluation experiments will be needed to leverage the information content of new observations to improve process representations as well as to provide accurate initial conditions for coupled climate model simulations. Successful implementation of a comprehensive benchmarking program could have a huge impact on understanding and predicting future climate change.

  14. Constraining cloud lifetime effects of aerosols using A-Train satellite observations

    SciTech Connect

    Wang, Minghuai; Ghan, Steven J.; Liu, Xiaohong; Ecuyer, Tristan L.; Zhang, Kai; Morrison, H.; Ovchinnikov, Mikhail; Easter, Richard C.; Marchand, Roger; Chand, Duli; Qian, Yun; Penner, Joyce E.

    2012-08-15

    Aerosol indirect effects have remained the largest uncertainty in estimates of the radiative forcing of past and future climate change. Observational constraints on cloud lifetime effects are particularly challenging since it is difficult to separate aerosol effects from meteorological influences. Here we use three global climate models, including a multi-scale aerosol-climate model PNNL-MMF, to show that the dependence of the probability of precipitation on aerosol loading, termed the precipitation frequency susceptibility (S_pop), is a good measure of the liquid water path response to aerosol perturbation (λ), as both S_pop and λ strongly depend on the magnitude of autoconversion, a model representation of precipitation formation via collisions among cloud droplets. This provides a method to use satellite observations to constrain cloud lifetime effects in global climate models. S_pop in marine clouds estimated from CloudSat, MODIS and AMSR-E observations is substantially lower than that from global climate models and suggests a liquid water path increase of less than 5% from doubled cloud condensation nuclei concentrations. This implies a substantially smaller impact on shortwave cloud radiative forcing (SWCF) over ocean due to aerosol indirect effects than simulated by current global climate models (a reduction by one-third for one of the conventional aerosol-climate models). Further work is needed to quantify the uncertainties in satellite-derived estimates of S_pop and to examine S_pop in high-resolution models.
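
    The susceptibility metric can be sketched as a log-log slope of precipitation frequency against aerosol loading (a simplified illustration; the cited study conditions on cloud water and uses satellite-specific precipitation definitions not reproduced here):

        import numpy as np

        def precip_frequency_susceptibility(aerosol, precip_flag, n_bins=10):
            """Estimate S_pop as the slope of ln(POP) versus ln(aerosol loading)."""
            aerosol = np.asarray(aerosol, dtype=float)
            precip_flag = np.asarray(precip_flag, dtype=float)
            edges = np.quantile(aerosol, np.linspace(0.0, 1.0, n_bins + 1))
            ln_a, ln_pop = [], []
            for lo, hi in zip(edges[:-1], edges[1:]):
                sel = (aerosol >= lo) & (aerosol < hi)
                if not sel.any():
                    continue
                pop = precip_flag[sel].mean()          # probability of precipitation in the bin
                if pop > 0:
                    ln_a.append(np.log(aerosol[sel].mean()))
                    ln_pop.append(np.log(pop))
            slope, _ = np.polyfit(ln_a, ln_pop, 1)
            return slope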

  15. Combining Observations of Shock-induced Minerals with Calculations to Constrain the Shock History of Meteorites.

    NASA Astrophysics Data System (ADS)

    de Carli, P. S.; Xie, Z.; Sharp, T. G.

    2007-12-01

    All available evidence from shock Hugoniot and release adiabat measurements and from shock recovery experiments supports the hypothesis that the conditions for shock-induced phase transitions are similar to the conditions under which quasistatic phase transitions are observed. Transitions that require high temperatures under quasistatic pressures require high temperatures under shock pressures. The high-pressure phases found in shocked meteorites are almost invariably associated with shock melt veins. A shock melt vein is analogous to a pseudotachylite, a sheet of locally melted material that was quenched by conduction to surrounding cooler material. The mechanism by which shock melt veins form is not known; possible mechanisms include shock collisions, shock interactions with cracks and pores, and adiabatic shear. If one assumes that the phases within the vein crystallized in their stability fields, then available static high-pressure data constrain the shock pressure range over which the vein solidified. Since the veins have a sheet-like geometry, one may use one-dimensional heat flow calculations to constrain the cooling and crystallization history of the veins (Langenhorst and Poirier, 2000). Although the formation mechanism of a melt vein may involve transient pressure excursions, pressure equilibration of a mm-wide vein will be complete within about a microsecond, whereas thermal equilibration will require seconds. Some of our melt vein studies have indicated that the highly-shocked L chondrite meteorites were exposed to a narrow range of shock pressures, e.g., 18-25 GPa, over a minimum duration of the order of a second. We have used the Autodyn(TM) wave propagation code to calculate details of plausible impacts on the L-chondrite parent body for a variety of possible parent body stratigraphies. We infer that some meteorites probably represent material that was shocked at a depth of >10 km in their parent bodies.
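
    The conductive quench-time argument can be checked with a one-line scaling estimate (order of magnitude only; not the full one-dimensional heat-flow calculation cited above, and the diffusivity is an assumed typical value):

        # Characteristic conduction time for a melt vein of half-width w: tau ~ w**2 / kappa.
        kappa = 1.0e-6       # thermal diffusivity of silicate rock, m^2/s (assumed typical)
        w = 0.5e-3           # half-width of a 1 mm wide vein, m
        tau = w ** 2 / kappa
        print(f"quench time ~ {tau:.2f} s")   # ~0.25 s, i.e. of order a second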

  16. Constraining future terrestrial carbon cycle projections using observation-based water and carbon flux estimates.

    PubMed

    Mystakidis, Stefanos; Davin, Edouard L; Gruber, Nicolas; Seneviratne, Sonia I

    2016-06-01

    The terrestrial biosphere is currently acting as a sink for about a third of the total anthropogenic CO2 emissions. However, the future fate of this sink in the coming decades is very uncertain, as current earth system models (ESMs) simulate diverging responses of the terrestrial carbon cycle to upcoming climate change. Here, we use observation-based constraints of water and carbon fluxes to reduce uncertainties in the projected terrestrial carbon cycle response derived from simulations of ESMs conducted as part of the 5th phase of the Coupled Model Intercomparison Project (CMIP5). We find in the ESMs a clear linear relationship between present-day evapotranspiration (ET) and gross primary productivity (GPP), as well as between these present-day fluxes and projected changes in GPP, thus providing an emergent constraint on projected GPP. Constraining the ESMs based on their ability to simulate present-day ET and GPP leads to a substantial decrease in the projected GPP and to a ca. 50% reduction in the associated model spread in GPP by the end of the century. Given the strong correlation between projected changes in GPP and in NBP in the ESMs, applying the constraints on net biome productivity (NBP) reduces the model spread in the projected land sink by more than 30% by 2100. Moreover, the projected decline in the land sink is at least doubled in the constrained ensembles and the probability that the terrestrial biosphere turns into a net carbon source by the end of the century is strongly increased. This indicates that the decline in the future land carbon uptake might be stronger than previously thought, which would have important implications for the rate of increase in the atmospheric CO2 concentration and for future climate change. PMID:26732346
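
    The emergent-constraint step described above amounts to a cross-model regression evaluated at the observed present-day value; a sketch with synthetic numbers (not CMIP5 output) is:

        import numpy as np

        # Present-day GPP and projected GPP change for a toy set of models, PgC/yr.
        present_gpp = np.array([105.0, 118.0, 126.0, 132.0, 140.0, 151.0, 160.0])
        future_dgpp = np.array([8.0, 12.0, 15.0, 17.0, 21.0, 24.0, 28.0])

        slope, intercept = np.polyfit(present_gpp, future_dgpp, 1)
        obs_gpp, obs_err = 120.0, 8.0            # observation-based estimate (assumed values)
        constrained = slope * obs_gpp + intercept
        print(f"constrained dGPP ~ {constrained:.1f} +/- {abs(slope) * obs_err:.1f} PgC/yr")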

  17. Fast emission estimates in China and South Africa constrained by satellite observations

    NASA Astrophysics Data System (ADS)

    Mijling, Bas; van der A, Ronald

    2013-04-01

    Emission inventories of air pollutants are crucial information for policy makers and form important input data for air quality models. Unfortunately, bottom-up emission inventories, compiled from large quantities of statistical data, are easily outdated for emerging economies such as China and South Africa, where rapid economic growth changes emissions accordingly. Alternatively, top-down emission estimates from satellite observations of air constituents have the important advantages of being spatially consistent, having high temporal resolution, and enabling emission updates shortly after the satellite data become available. However, constraining emissions from observations of concentrations is computationally challenging. Within the GlobEmission project (part of the Data User Element programme of ESA) a new algorithm has been developed, specifically designed for fast daily emission estimates of short-lived atmospheric species on a mesoscopic scale (0.25 × 0.25 degree) from satellite observations of column concentrations. The algorithm needs only one forward model run from a chemical transport model to calculate the sensitivity of concentration to emission, using trajectory analysis to account for transport away from the source. By using a Kalman filter in the inverse step, optimal use of the a priori knowledge and the newly observed data is made. We apply the algorithm for NOx emission estimates in East China and South Africa, using the CHIMERE chemical transport model together with tropospheric NO2 column retrievals of the OMI and GOME-2 satellite instruments. The observations are used to construct a monthly emission time series, which reveals important emission trends such as the emission reduction measures during the Beijing Olympic Games, and the impact of and recovery from the global economic crisis. The algorithm is also able to detect emerging sources (e.g. new power plants) and improve emission information for areas where proxy data are not or badly known (e
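
    The Kalman update at the core of such a top-down scheme can be sketched for a single grid cell (a toy scalar version; the operational algorithm uses trajectory-based sensitivities and full error covariances that are not reproduced here):

        def kalman_emission_update(e_prior, p_prior, y_obs, h, r_obs):
            """One scalar Kalman-filter step for an emission estimate.

            e_prior: prior emission; p_prior: its error variance;
            y_obs: observed NO2 column; h: modeled column per unit emission;
            r_obs: observation error variance.
            """
            gain = p_prior * h / (h * p_prior * h + r_obs)
            e_post = e_prior + gain * (y_obs - h * e_prior)
            p_post = (1.0 - gain * h) * p_prior
            return e_post, p_post

        e, p = kalman_emission_update(e_prior=1.0, p_prior=0.25, y_obs=1.6, h=1.2, r_obs=0.1)
        print(f"posterior emission ~ {e:.2f} (relative units), variance ~ {p:.3f}")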

  18. Using modern stellar observables to constrain stellar parameters and the physics of the stellar interior

    NASA Astrophysics Data System (ADS)

    van Saders, Jennifer L.

    2014-05-01

    The current state and future evolution of a star are, in principle, specified by only a few physical quantities: the mass, age, and hydrogen, helium, and metal abundances. These same fundamental quantities are crucial for reconstructing the history of stellar systems ranging in scale from planetary systems to galaxies. However, the fundamental parameters are rarely directly observable, and we are forced to use proxies that are not always sensitive or unique functions of the stellar parameters we wish to determine. Imprecise or inaccurate determinations of the fundamental parameters often limit our ability to draw inferences about a given system. As new technologies, instruments, and observing techniques become available, the list of viable stellar observables increases, and we can explore new links between the observables and fundamental quantities in an effort to better characterize stellar systems. In the era of missions such as Kepler, time-domain observables such as the stellar rotation period and stellar oscillations are now available for an unprecedented number of stars, and future missions promise to further expand the sample. Furthermore, despite the successes of stellar evolution models, the processes and detailed structure of the deep stellar interior remain uncertain. Even in the case of well-measured, well-understood stellar observables, the link to the underlying parameters contains uncertainties due to our imperfect understanding of stellar interiors. Model uncertainties arise from sources such as the treatment of turbulent convection, transport of angular momentum and mixing, and assumptions about the physical conditions of stellar matter. By carefully examining the sensitivity of stellar observables to physical processes operating within the star and model assumptions, we can design observational tests for the theory of stellar interiors. I propose a series of tools based on new or revisited stellar observables that can be used both to constrain

  19. Ice loading model for Glacial Isostatic Adjustment in the Barents Sea constrained by GRACE gravity observations

    NASA Astrophysics Data System (ADS)

    Root, Bart; Tarasov, Lev; van der Wal, Wouter

    2014-05-01

    The global ice budget is still under discussion because the observed 120-130 m eustatic sea level equivalent since the Last Glacial Maximum (LGM) cannot be explained by the current knowledge of land-ice melt after the LGM. One possible location for the missing ice is the Barents Sea Region, which was completely covered with ice during the LGM. This is deduced from relative sea level observations on Svalbard, Novaya Zemlya and the North coast of Scandinavia. However, there are no observations in the middle of the Barents Sea that capture the post-glacial uplift. With increased precision and longer time series of monthly gravity observations of the GRACE satellite mission it is possible to constrain Glacial Isostatic Adjustment in the center of the Barents Sea. This study investigates the extra constraint provided by GRACE data for modeling the past ice geometry in the Barents Sea. We use CSR release 5 data from February 2003 to July 2013. The GRACE data is corrected for the past 10 years of secular decline of glacier ice on Svalbard, Novaya Zemlya and Franz Josef Land. With numerical GIA models for a radially symmetric Earth, we model the expected gravity changes and compare these with the GRACE observations after smoothing with a 250 km Gaussian filter. The comparisons show that for the viscosity profile VM5a, ICE-5G has too strong a gravity signal compared to GRACE. The regional calibrated ice sheet model (GLAC) of Tarasov appears to fit the amplitude of the GRACE signal. However, the GRACE data are very sensitive to the ice-melt correction, especially for Novaya Zemlya. Furthermore, the ice mass should be more concentrated to the middle of the Barents Sea. Alternative viscosity models confirm these conclusions.

  20. Constraining the Magnetic Fields of Transiting Exoplanets through Ground-based Near-UV Observations

    NASA Astrophysics Data System (ADS)

    Turner, Jake; Smart, B. M.; Pearson, K. A.; Biddle, L. I.; Cates, I. T.; Berube, M.; Thompson, R. M.; Smith, C. W.; Teske, J. K.; Hardegree-Ullman, K. K.; Robertson, A. N.; Crawfod, B. E.; Zellem, R.; Nieberding, M. N.; Raphael, B. A.; Tombleson, R.; Cook, K. L.; Hoglund, S.; Hofmann, R. A.; Jones, C.; Towner, A.; Small, L. C.; Walker-LaFollette, A. M.; Sanford, B.; Griffith, C. C.; Sagan, T.

    2013-10-01

    We observed the primary transits of the exoplanets CoRoT-1b, HAT-P-1b, HAT-P-13b, HAT-P-22b, TrES-2b, TrES-4b, WASP-12b, WASP-33b, WASP-44b, WASP-48b, and WASP77A-b in the near-ultraviolet photometric band in an attempt to detect their magnetic fields and update their planetary parameters. Vidotto et al. (2011) suggest that the magnetic fields of these targets could be constrained if their near-UV light curves show an early ingress compared to their optical light curves, while their egress remains unaffected. We do not observe this effect in any of our targets; however, we have determined upper limits on their magnetic field strengths. Our results are consistent with observations of TrES-3b and HAT-P-16b, for which upper limits on their magnetic fields have also been found using this method. We find abnormally low field strengths for all our targets. Due to this result, we advocate for follow-up studies on the magnetic fields of all our targets using other detection methods (such as radio emission and magnetic star-planet interactions) and other telescopes capable of achieving a better near-UV cadence to verify our findings and the techniques of Vidotto et al. (2011). We find that the near-UV planetary radii of all our targets are consistent within error with their optical radii. Our data include the only published near-UV light curves of CoRoT-1b, HAT-P-1b, HAT-P-13b, HAT-P-22b, TrES-2b, TrES-4b, WASP-33b, WASP-44b, WASP-48b, and WASP77A-b. We used an automated reduction pipeline, ExoDRPL, to perform aperture photometry on our data. In addition, we developed a modeling package called EXOMOP that utilizes the Levenberg-Marquardt minimization algorithm to find a least-squares best fit and a differential evolution Markov Chain Monte Carlo algorithm to find the best fit to the light curve. To constrain the red noise in both fitting models, we used the residual permutation (rosary bead), time-averaging, and wavelet methods.
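
    A minimal version of the light-curve fitting step is sketched below (a toy stand-in for EXOMOP, using a smoothed box-shaped transit and synthetic data; real fits use limb-darkened models and the MCMC and red-noise treatments described above):

        import numpy as np
        from scipy.optimize import least_squares

        def soft_transit(t, t0, depth, duration, baseline, tau=0.005):
            """Differentiable box-like transit: tanh ingress/egress of width tau (days)."""
            x1 = np.tanh((t - (t0 - 0.5 * duration)) / tau)
            x2 = np.tanh(((t0 + 0.5 * duration) - t) / tau)
            return baseline - 0.25 * depth * (1.0 + x1) * (1.0 + x2)

        rng = np.random.default_rng(1)
        t = np.linspace(-0.1, 0.1, 400)                       # days from mid-transit
        flux = soft_transit(t, 0.0, 0.012, 0.08, 1.0) + rng.normal(0.0, 1e-3, t.size)

        residuals = lambda p: soft_transit(t, *p) - flux
        fit = least_squares(residuals, x0=[0.01, 0.01, 0.06, 1.0], method="lm")
        t0, depth, duration, baseline = fit.x
        print(f"depth ~ {depth:.4f}, duration ~ {duration:.3f} d")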

  1. Constraining the Magnetic Fields of Transiting Exoplanets through Ground-based Near-UV Observations

    NASA Astrophysics Data System (ADS)

    Turner, Jake; Smart, B.; Pearson, K.; Biddle, L. I.; Cates, I.; Berube, M.; Thompson, R.; Smith, C.; Teske, J. K.; Hardegree-Ullman, K.; Robertson, A.; Crawfod, B.; Zellem, R.; Nieberding, M. N.; Raphael, B. A.; Tombleson, R.; Cook, K.; Hoglund, S.; Hofmann, R.; Jones, C.; Towner, A. P.; Small, L.; Walker-LaFollette, A.; Sanford, B.; Sagan, T.

    2014-01-01

    We observed the primary transits of the exoplanets CoRoT-1b, HAT-P-1b, HAT-P-13b, HAT-P-22b, TrES-2b, TrES-4b, WASP-12b, WASP-33b, WASP-44b, WASP-48b, and WASP77A-b in the near-ultraviolet photometric band in an attempt to detect their magnetic fields and update their planetary parameters. Vidotto et al. (2011) suggest that the magnetic fields of these targets could be constrained if their near-UV light curves show an early ingress compared to their optical light curves, while their egress remains unaffected. We do not observe this effect in any of our targets; however, we have determined upper limits on their magnetic field strengths. Our results are consistent with observations of TrES-3b and HAT-P-16b, for which upper limits on their magnetic fields have also been found using this method. We find abnormally low field strengths for all our targets. Due to this result, we advocate for follow-up studies on the magnetic fields of all our targets using other detection methods (such as radio emission and magnetic star-planet interactions) and other telescopes capable of achieving a better near-UV cadence to verify our findings and the techniques of Vidotto et al. (2011). We find that the near-UV planetary radii of all our targets are consistent within error with their optical radii. Our data include the only published near-UV light curves of CoRoT-1b, HAT-P-1b, HAT-P-13b, HAT-P-22b, TrES-2b, TrES-4b, WASP-33b, WASP-44b, WASP-48b, and WASP77A-b. We used an automated reduction pipeline, ExoDRPL, to perform aperture photometry on our data. In addition, we developed a modeling package called EXOMOP that utilizes the Levenberg-Marquardt minimization algorithm to find a least-squares best fit and a differential evolution Markov Chain Monte Carlo algorithm to find the best fit to the light curve. To constrain the red noise in both fitting models, we used the residual permutation (rosary bead), time-averaging, and wavelet methods.

  2. How essential are Argo observations to constrain a global ocean data assimilation system?

    NASA Astrophysics Data System (ADS)

    Turpin, V.; Remy, E.; Le Traon, P. Y.

    2016-02-01

    Observing system experiments (OSEs) are carried out over a 1-year period to quantify the impact of Argo observations on the Mercator Ocean 0.25° global ocean analysis and forecasting system. The reference simulation assimilates sea surface temperature (SST), SSALTO/DUACS (Segment Sol multi-missions dALTimetrie, d'orbitographie et de localisation précise/Data unification and Altimeter combination system) altimeter data and Argo and other in situ observations from the Coriolis data center. Two other simulations are carried out in which all of the Argo data and half of the Argo data, respectively, are withheld. Assimilating Argo observations has a significant impact on analyzed and forecast temperature and salinity fields at different depths. Without Argo data assimilation, large errors occur in analyzed fields as estimated from the differences when compared with in situ observations. For example, in the 0-300 m layer RMS (root mean square) differences between analyzed fields and observations reach 0.25 psu and 1.25 °C in the western boundary currents and 0.1 psu and 0.75 °C in the open ocean. The impact of the Argo data in reducing observation-model forecast differences is also significant from the surface down to a depth of 2000 m. Differences between in situ observations and forecast fields are thus reduced by 20 % in the upper layers and by up to 40 % at a depth of 2000 m when Argo data are assimilated. At depth, the most impacted regions in the global ocean are the Mediterranean outflow, the Gulf Stream region and the Labrador Sea. A significant degradation can be observed when only half of the data are assimilated. Therefore, Argo observations matter to constrain the model solution, even for an eddy-permitting model configuration. The impact of the Argo floats' data assimilation on other model variables is briefly assessed: the improvement of the fit to Argo profiles does not lead globally to unphysical corrections to the sea surface temperature and sea surface height. The main conclusion

  3. 2D inverse modeling for potential fields on rugged observation surface using constrained Delaunay triangulation

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Hu, Xiangyun; Xi, Yufei; Liu, Tianyou

    2015-03-01

    The regular grid discretization is prevalent in inverse modeling for gravity and magnetic data. However, this subdivision strategy offers only limited precision in representing a rugged observation surface. To deal with this problem, we evaluate a non-structured discretization method in which the subsurface beneath rolling terrain is divided into a number of Delaunay triangular cells, each with a uniform physical property distribution. The gravity and magnetic anomalies of a complex-shaped anomalous body are represented as the sums of the single anomalies produced by each triangular field source. When inverting the potential field data, we specify a minimization objective function composed of data constraints and then use the preconditioned conjugate gradient algorithm to iteratively solve the matrix minimization equations, where the preconditioner is determined by the distances between triangular cells and surface observers. We test our method using synthetic data; all tests return favorable results. In the case studies involving the gravity and magnetic anomalies of the Mengku and Pobei deposits in Xinjiang, northwest China, the inferred magnetite orebody and ultrabasic rock distributions are verified by additional drilling and geological information. The discretization of constrained Delaunay triangulation provides a useful approach for computing and inverting potential field data in situations of undulating topography and complicated objects.
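
    A compact sketch of the preconditioned conjugate-gradient solve is shown below (a toy dense example: the forward operator is a random stand-in for the triangle kernels, and a simple Jacobi preconditioner stands in for the distance-based one described above):

        import numpy as np
        from scipy.sparse.linalg import cg, LinearOperator

        rng = np.random.default_rng(0)
        n_obs, n_cells = 120, 80
        G = rng.normal(size=(n_obs, n_cells))            # forward operator: cell property -> data
        m_true = rng.normal(size=n_cells)
        d = G @ m_true + rng.normal(0.0, 0.01, n_obs)    # synthetic observed anomaly

        lam = 1e-2                                       # damping weight
        A = G.T @ G + lam * np.eye(n_cells)              # regularized normal-equations matrix
        b = G.T @ d

        diag = np.diag(A)
        M = LinearOperator((n_cells, n_cells), matvec=lambda x: x / diag)  # Jacobi preconditioner
        m_est, info = cg(A, b, M=M, atol=1e-8)
        print("converged" if info == 0 else f"cg returned info={info}",
              f"model error ~ {np.linalg.norm(m_est - m_true):.3f}")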

  4. Multiple Observation Types Jointly Constrain Australian Terrestrial Carbon and Water Cycles

    NASA Astrophysics Data System (ADS)

    Haverd, Vanessa; Raupach, Michael; Briggs, Peter; Canadell, Pep; Davis, Steven; Isaac, Peter; Law, Rachel; Meyer, Mick; Peters, Glenn; Pickett-Heaps, Christopher; Roxburgh, Stephen; Sherman, Bradford; van Gorsel, Eva; Viscarra Rossel, Raphael; Wang, Ziyuan

    2013-04-01

    Information about the carbon cycle potentially constrains the water cycle, and vice versa. This paper explores the utility of multiple observation sets to constrain carbon and water fluxes and stores in a land surface model, and a resulting determination of the Australian terrestrial carbon budget. Observations include streamflow from 416 gauged catchments, measurements of evapotranspiration (ET) and net ecosystem production (NEP) from 12 eddy-flux sites, litterfall data, and data on carbon pools. The model is a version of CABLE (the Community Atmosphere-Biosphere-Land Exchange model), coupled with CASAcnp (a biogeochemical model) and SLI (Soil-Litter-Iso, a soil hydrology model including liquid and vapour water fluxes and the effects of litter). By projecting observation-prediction residuals onto model uncertainty, we find that eddy flux measurements provide a significantly tighter constraint on Australian continental net primary production (NPP) than the other data types. However, simultaneous constraint by multiple data types is important for mitigating bias from any single type. Results emerging from the multiply-constrained model are as follows (with all values applying over 1990-2011 and all ranges denoting ±1 standard error): (1) on the Australian continent, a predominantly semi-arid region, over half (0.64±0.05) of the water loss through ET occurs through soil evaporation and bypasses plants entirely; (2) mean Australian NPP is 2200±400 TgC/y, making the NPP/precipitation ratio about the same for Australia as the global land average; (3) annually cyclic ("grassy") vegetation and persistent ("woody") vegetation respectively account for 0.56±0.14 and 0.43±0.14 of NPP across Australia; (4) the average interannual variability of Australia's NEP (±180 TgC/y) is larger than Australia's total anthropogenic greenhouse gas emissions in 2011 (149 TgCeq/y), and is dominated by variability in desert and savannah regions. The mean carbon budget over 1990

  5. Dielectric properties of Asteroid Vesta's surface as constrained by Dawn VIR observations

    NASA Astrophysics Data System (ADS)

    Palmer, Elizabeth M.; Heggy, Essam; Capria, Maria T.; Tosi, Federico

    2015-12-01

    Earth- and orbital-based radar observations of asteroids provide a unique opportunity to characterize the surface roughness and dielectric properties of their surfaces, as well as potentially explore some of their shallow subsurface physical properties. If the dielectric and topographic properties of an asteroid's surface are defined, one can constrain its surface textural characteristics as well as potential subsurface volatile enrichment using the observed radar backscatter. To achieve this objective, we establish the first dielectric model of asteroid Vesta for the case of a dry, volatile-poor regolith, employing an analogy to the dielectric properties of lunar soil, adjusted for the surface densities and temperatures deduced from Dawn's Visible and InfraRed mapping spectrometer (VIR). Our model suggests that the real part of the dielectric constant at the surface of Vesta is relatively constant, ranging from 2.3 to 2.5 from the night side to the day side of Vesta, while the loss tangent shows slight variation as a function of diurnal temperature, ranging from 6 × 10^-3 to 8 × 10^-3. We estimate the surface porosity to be ∼55% in the upper meter of the regolith, as derived from VIR observations. This is ∼12% higher than previous estimates of porosity derived from Earth-based X- and S-band radar observations. We suggest that the radar backscattering properties of asteroid Vesta will be mainly driven by changes in surface roughness rather than potential dielectric variations in the upper regolith in the X- and S-bands.
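
    For reference, the lunar-soil analogy can be checked against the quoted porosity with a commonly used empirical density-permittivity relation (whether the cited study uses exactly this form, and the grain density assumed here, are assumptions):

        # Empirical lunar-soil relation eps' ~ 1.919**(bulk density in g/cm^3).
        grain_density = 3.2                  # g/cm^3, HED-meteorite-like grain density (assumed)
        porosity = 0.55                      # upper-metre porosity quoted above
        bulk_density = grain_density * (1.0 - porosity)
        eps_real = 1.919 ** bulk_density
        print(f"bulk density ~ {bulk_density:.2f} g/cm^3, eps' ~ {eps_real:.2f}")   # ~2.6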

  6. Catching the fish - Constraining stellar parameters for TX Piscium using spectro-interferometric observations

    NASA Astrophysics Data System (ADS)

    Klotz, D.; Paladini, C.; Hron, J.; Aringer, B.; Sacuto, S.; Marigo, P.; Verhoelst, T.

    2013-02-01

    Context. Stellar parameter determination is a challenging task when dealing with galactic giant stars. The combination of different investigation techniques has proven to be a promising approach. Aims: We analyse archive spectra obtained with the Short Wavelength Spectrometer (SWS) onboard ISO, and new interferometric observations from the Very Large Telescope MID-infrared Interferometric instrument (VLTI/MIDI) of a very well-studied carbon-rich giant: TX Psc. The aim of this work is to determine stellar parameters using spectroscopy and interferometry. Methods: The observations are used to constrain the model atmosphere, and eventually the stellar evolutionary model in the region where the tracks map the beginning of the carbon star sequence. Two different approaches are used to determine stellar parameters: (i) the "classic" interferometric approach where the effective temperature is fixed by using the angular diameter in the N-band (from interferometry) and the apparent bolometric magnitude; (ii) parameters are obtained by fitting a grid of state-of-the-art hydrostatic models to spectroscopic and interferometric observations. Results: We find good agreement between the parameters of the two methods. The effective temperature and luminosity clearly place TX Psc in the carbon-rich AGB star domain in the H-R diagram. Current evolutionary tracks suggest that TX Psc became a C-star just recently, which means that the star is still in a "quiet" phase compared to the subsequent strong-wind regime. This agrees with the C/O ratio being only slightly greater than one. Based on observations made with ESO telescopes at Paranal Observatory under program IDs 74.D-0601, 60.A-9224, 77.C-0440, 60.A-9006, 78.D-0112, 84.D-0805.

  7. The satellite and chemical transport model tandem: constraining TM5 with AURA observations

    NASA Astrophysics Data System (ADS)

    Verstraeten, Willem W.; Neu, Jessica L.; Williams, Jason E.; Bowman, Kevin W.; Worden, John R.; (K. F.) Boersma, Folkert

    2015-04-01

    Satellite-based studies focusing on tropospheric ozone (O3) and nitrogen dioxide (NO2) have the potential to close the gap left by previous studies on air quality. After all, satellites can provide large-scale robust observational evidence that both O3 precursor concentrations and tropospheric O3 levels are rapidly changing over source receptor areas. Chemical transport models (CTMs) significantly contribute to our understanding of transport patterns and of the production and destruction of tropospheric air constituents, but the infrequent updating of emission inventories and the slow implementation of updates to chemical reactions and reaction rates slow down their widespread use. Satellite observations of tropospheric NO2 have the potential to improve and update anthropogenic NOx emissions in a near-continuous way and may provide information on the lifetime of NOx, impacting the production and destruction of many air constituents including O3. Here we show the increased ability of the CTM TM5 to reproduce the 2005-2010 observed strong and rapid rise in free tropospheric ozone of 0.8% per year over China from TES (Tropospheric Emission Spectrometer, onboard AURA), once OMI (Ozone Monitoring Instrument, onboard AURA) NO2 measurements were implemented in TM5 to update NOx emissions. What is more, MLS observations (Microwave Limb Sounder, onboard AURA) of stratospheric ozone demonstrate their potential to constrain the stratosphere-troposphere exchange (STE) in TM5, which is mainly driven by ECMWF meteorological fields. The use of MLS observations of stratospheric O3 improved the TM5 modelled trends in tropospheric O3 significantly. Thanks to the TM5 input updates from satellite observations, the impact of Asian O3 and its precursors on the western United States could be quantified, showing a large import from China to the West. Here we also show that deriving NOx lifetimes from OMI NO2 observations to evaluate new rate constants of the reaction NO2 + OH => HNO3 in TM5 is a

  8. Constraining the Properties of Small Stars and Small Planets Observed by K2

    NASA Astrophysics Data System (ADS)

    Dressing, Courtney D.; Newton, Elisabeth R.; Charbonneau, David; Schlieder, Josh; Hawaii/California/Arizona/Indiana K2 Follow-up Consortium, HARPS-N Consortium

    2016-01-01

    We are using the results of the NASA K2 mission (the second career of the Kepler spacecraft) to study how the frequency and architectures of planetary systems orbiting M dwarfs throughout the ecliptic plane compare to those of the early M dwarf planetary systems observed by Kepler. In a previous analysis of the Kepler data set, we found that planets orbiting early M dwarfs are common: we measured a cumulative planet occurrence rate of 2.45 +/- 0.22 planets per M dwarf with periods of 0.5-200 days and planet radii of 1-4 Earth radii. Within a conservative habitable zone based on the moist greenhouse inner limit and maximum greenhouse outer limit, we estimated an occurrence rate of 0.15 (+0.18/-0.06) Earth-size planets and 0.09 (+0.10/-0.04) super-Earths per M dwarf HZ. Applying these occurrence rates to the population of nearby stars and assuming that mid- and late-M dwarfs host planets at the same rate as early M dwarfs, we predicted that the nearest potentially habitable Earth-size planet likely orbits an M dwarf a mere 2.6 ± 0.4 pc away. We are now testing the assumption of equal planet occurrence rates for M dwarfs of all types by inspecting the population of planets detected by K2 and conducting follow-up observations of planet candidate host stars to identify false positives and better constrain system parameters. I will present the results of recent observing runs with SpeX on the IRTF to obtain near-infrared spectra of low-mass stars targeted by K2 and determine the radii, temperatures, and metallicities of our target stars using empirical relations. We gratefully acknowledge funding from the NASA XRP Program, the John Templeton Foundation, and the NASA Sagan Fellowship Program.
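
    The quoted distance of roughly 2.6 pc follows from combining the habitable-zone occurrence rate with the local space density of M dwarfs. The sketch below reproduces that scaling under a Poisson assumption; the M dwarf number density is an assumed round value, not a number from the abstract.

      import math

      # Expected distance to the nearest planet when hosts are Poisson-distributed
      # with volume density n_star * eta: d ~ (3 / (4 * pi * n_star * eta))**(1/3)
      n_mdwarf = 0.06   # M dwarfs per cubic parsec (assumed local density)
      eta_hz = 0.24     # HZ Earth-size + super-Earth planets per M dwarf (from the abstract)

      d_nearest = (3.0 / (4.0 * math.pi * n_mdwarf * eta_hz)) ** (1.0 / 3.0)
      print(f"Expected distance to the nearest HZ planet: ~{d_nearest:.1f} pc")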

  9. GRACE gravity observations constrain Weichselian ice thickness in the Barents Sea

    NASA Astrophysics Data System (ADS)

    Root, B. C.; Tarasov, L.; Wal, W.

    2015-05-01

    The Barents Sea has been subject to ongoing postglacial uplift since the melting of the Weichselian ice sheet that covered it. The regional ice sheet thickness history is not well known because data are available only at the periphery, at Franz Joseph Land, Svalbard, and Novaya Zemlya, which surround this paleo ice sheet. We show that the linear trend in the gravity rate derived from a decade of observations from the Gravity Recovery and Climate Experiment (GRACE) satellite mission can constrain the volume of the ice sheet after correcting for current ice melt, hydrology, and far-field gravitational effects. Regional ice-loading models based on new geologically inferred ice margin chronologies show a significantly better fit to the GRACE data than ICE-5G does. The regional ice models contain less ice in the Barents Sea than is present in ICE-5G (5-6.3 m equivalent sea level versus 8.5 m), which adds to the ongoing difficulty of closing the global sea level budget at the Last Glacial Maximum.
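
    The equivalent-sea-level figures above rest on a simple conversion between grounded-ice volume and metres of sea level. The sketch below shows that conversion with assumed round values for the densities and ocean area; none of these constants are taken from the paper.

      RHO_ICE = 917.0        # kg m^-3 (assumed ice density)
      RHO_WATER = 1000.0     # kg m^-3 (assumed fresh-water equivalent)
      OCEAN_AREA = 3.62e14   # m^2 (assumed global ocean area)

      def esl_from_ice_volume(volume_million_km3):
          """Metres of equivalent sea level for an ice volume given in 10^6 km^3."""
          volume_m3 = volume_million_km3 * 1.0e15
          return volume_m3 * (RHO_ICE / RHO_WATER) / OCEAN_AREA

      # ~3.4 million km^3 of grounded ice corresponds to roughly the 8.5 m ESL
      # attributed to ICE-5G in the abstract.
      print(f"{esl_from_ice_volume(3.4):.1f} m equivalent sea level")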

  10. Bioenergy potential of the United States constrained by satellite observations of existing productivity

    USGS Publications Warehouse

    Reed, Sasha C.; Smith, William K.; Cleveland, Cory C.; Miller, Norman L.; Running, Steven W.

    2012-01-01

    Background/Question/Methods Currently, the United States (U.S.) supplies roughly half the world’s biofuel (secondary bioenergy), with the Energy Independence and Security Act of 2007 (EISA) stipulating an additional three-fold increase in annual production by 2022. Implicit in such energy targets is an associated increase in annual biomass demand (primary bioenergy) from roughly 2.9 to 7.4 exajoules (EJ; 10^18 joules). Yet, many of the factors used to estimate future bioenergy potential are relatively unresolved, bringing into question the practicality of the EISA’s ambitious bioenergy targets. Here, our objective was to constrain estimates of primary bioenergy potential (PBP) for the conterminous U.S. using satellite-derived net primary productivity (NPP) data (measured for every 1 km^2 of the 7.2 million km^2 of vegetated land in the conterminous U.S.) as the most geographically explicit measure of terrestrial growth capacity. Results/Conclusions We show that the annual primary bioenergy potential (PBP) of the conterminous U.S. realistically ranges from approximately 5.9 (± 1.4) to 22.2 (± 4.4) EJ, depending on land use. The low end of this range represents current harvest residuals, an attractive potential energy source since no additional harvest land is required. In contrast, the high end represents an annual harvest over an additional 5.4 million km^2 or 75% of vegetated land in the conterminous U.S. While we identify EISA energy targets as achievable, our results indicate that meeting such targets using current technology would require either an 80% displacement of current croplands or the conversion of 60% of total rangelands. Our results differ from previous evaluations in that we use high resolution, satellite-derived NPP as an upper-envelope constraint on bioenergy potential, which removes the need for extrapolation of plot-level observed yields over large spatial areas. Establishing realistically constrained estimates of bioenergy potential seems a

  11. CONSTRAINING HIGH-SPEED WINDS IN EXOPLANET ATMOSPHERES THROUGH OBSERVATIONS OF ANOMALOUS DOPPLER SHIFTS DURING TRANSIT

    SciTech Connect

    Miller-Ricci Kempton, Eliza; Rauscher, Emily

    2012-06-01

    Three-dimensional (3D) dynamical models of hot Jupiter atmospheres predict very strong wind speeds. For tidally locked hot Jupiters, winds at high altitude in the planet's atmosphere advect heat from the day side to the cooler night side of the planet. Net wind speeds on the order of 1-10 km s^-1 directed towards the night side of the planet are predicted at mbar pressures, which is the approximate pressure level probed by transmission spectroscopy. These winds should result in an observed blueshift of spectral lines in transmission on the order of the wind speed. Indeed, Snellen et al. recently observed a 2 ± 1 km s^-1 blueshift of CO transmission features for HD 209458b, which has been interpreted as a detection of the day-to-night (substellar to anti-stellar) winds that have been predicted by 3D atmospheric dynamics modeling. Here, we present the results of a coupled 3D atmospheric dynamics and transmission spectrum model, which predicts the Doppler-shifted spectrum of a hot Jupiter during transit resulting from winds in the planet's atmosphere. We explore four different models for the hot Jupiter atmosphere using different prescriptions for atmospheric drag via interaction with planetary magnetic fields. We find that models with no magnetic drag produce net Doppler blueshifts in the transmission spectrum of ~2 km s^-1 and that lower Doppler shifts of ~1 km s^-1 are found for the higher drag cases, results consistent with, but not yet strongly constrained by, the Snellen et al. measurement. We additionally explore the possibility of recovering the average terminator wind speed as a function of altitude by measuring Doppler shifts of individual spectral lines and spatially resolving wind speeds across the leading and trailing terminators during ingress and egress.
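
    For scale, the ~2 km/s blueshift corresponds to a fractional wavelength shift of v/c, only a few parts in 10^5; the CO wavelength used below is an assumed illustrative value, not one taken from the paper.

      # Doppler shift of a CO line for a ~2 km/s line-of-sight wind (illustrative values)
      C = 2.998e5      # speed of light, km s^-1
      v_wind = 2.0     # day-to-night wind speed, km s^-1 (from the abstract)
      lam_co = 2.3     # CO band wavelength, micron (assumed for illustration)

      delta_lam = lam_co * v_wind / C   # non-relativistic Doppler shift
      print(f"Blueshift: {delta_lam * 1e6:.1f} pm ({v_wind / C:.1e} of the rest wavelength)")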

  12. Constraining strength/depth profiles using laboratory experiments and field structural observations

    NASA Astrophysics Data System (ADS)

    Evans, B.

    2012-04-01

    Strength/depth profiles are often used as standard models to constrain treatments of lithosphere-scale geodynamics. Such profiles have virtue because they are motivated by our understanding of inelastic deformation of rocks, and because they can be used in complex numerical calculations. But, by attempting to construct simple, generic mechanical models, often while lacking detailed descriptions of the sub-surface, such treatments may ignore important issues, including spatial heterogeneities in rock composition, in strain displacements, or in other thermodynamic parameters, including temperature, fluid pressure and composition. Further, these profiles usually assume constitutive equations that reflect combinations of a simple yield criterion with steady-state creep. Thus, transient mechanical behavior is neglected. Fortunately, a plethora of recent laboratory, field structural, and computational studies may now be used to shed light on mechanical behavior at a much broader range of temperature, pressure, strain rates, and strain. For example, new experiments provide a description of creep in minerals at pressures greater than 2 GPa, of friction at seismic velocities, and of strains larger than 5. Observations of field microstructures, coupled with mechanical descriptions gleaned from laboratory experiments and theoretical treatments of the thermodynamics and mechanics of deformation, provide important insights into the way that localization occurs in natural shear zones. Finally, Earth scientists have gained an improved understanding of the subtle, yet important, interplay among fluids, transport properties, and rock deformation, which are capable of producing rich patterns of deformation. Among several important and challenging issues that need work is spatial scaling of properties; it is particularly important to consider differences in length scales that are embedded in the various techniques of field and global geophysics, field geology, and experiments. Our

  13. The 2010 Haiti earthquake: A complex fault pattern constrained by seismologic and tectonic observations

    NASA Astrophysics Data System (ADS)

    Mercier de Lépinay, Bernard; Deschamps, Anne; Klingelhoefer, Frauke; Mazabraud, Yves; Delouis, Bertrand; Clouard, Valérie; Hello, Yann; Crozon, Jacques; Marcaillou, Boris; Graindorge, David; Vallée, Martin; Perrot, Julie; Bouin, Marie-Paule; Saurel, Jean-Marie; Charvis, Philippe; St-Louis, Mildor

    2011-11-01

    After the January 12, 2010, Haiti earthquake, we deployed a mainly offshore temporary network of seismologic stations around the damaged area. The distribution of the recorded aftershocks, together with morphotectonic observations and mainshock analysis, allows us to constrain a complex fault pattern in the area. Almost all of the aftershocks have a N-S compressive mechanism, and not the expected left-lateral strike-slip mechanism. A first-order slip model of the mainshock shows a N264°E north-dipping plane, with a major left-lateral component and a strong reverse component. As the aftershock distribution is sub-parallel and close to the Enriquillo fault, we assume that although the cause of the catastrophe was not a rupture along the Enriquillo fault, this fault had an important role as a mechanical boundary. The azimuths of the focal planes of the aftershocks are parallel to the north-dipping faults of the Transhaitian Belt, which suggests a triggering of failure on these discontinuities. In the western part, the aftershock distribution reflects the triggering of slip on similar faults and/or on south-dipping faults, such as the Trois-Baies submarine fault. These observations are in agreement with a model of an oblique collision of an indenter of the oceanic crust of the Southern Peninsula and the sedimentary wedge of the Transhaitian Belt: the rupture occurred on a wrench fault at the rheologic boundary on top of the under-thrusting rigid oceanic block, whereas the aftershocks were the result of the relaxation on the hanging wall along pre-existing discontinuities in the frontal part of the Transhaitian Belt.

  14. Constraining the temperature history of the past millennium using early instrumental observations

    NASA Astrophysics Data System (ADS)

    Brohan, P.

    2012-12-01

    The current assessment that twentieth-century global temperature change is unusual in the context of the last thousand years relies on estimates of temperature changes from natural proxies (tree-rings, ice-cores etc.) and climate model simulations. Confidence in such estimates is limited by difficulties in calibrating the proxies and systematic differences between proxy reconstructions and model simulations: notable examples are the large differences in multi-decadal variability between proxy reconstructions and the large uncertainties in the effect of volcanic eruptions. Because the difference between the estimates extends into the relatively recent period of the early nineteenth century, it is possible to compare them with a reliable instrumental estimate of the temperature change over that period, provided that enough early thermometer observations, covering a wide enough expanse of the world, can be collected. By constraining key aspects of the reconstructions and simulations, instrumental observations, inevitably from a limited period, can reduce reconstruction uncertainty throughout the millennium. A considerable quantity of early instrumental observations is preserved in the world's archives. One organisation which systematically made observations and collected the results was the English East-India Company (EEIC), and 900 log-books of EEIC ships containing daily instrumental measurements of temperature and pressure have been preserved in the British Library. Similar records from voyages of exploration and scientific investigation are preserved in the published literature and in national archives. Some of these records have been extracted and digitised, providing hundreds of thousands of new weather records offering an unprecedentedly detailed view of the weather and climate of the late eighteenth and early nineteenth centuries. The new thermometer observations demonstrate that the large-scale temperature response to the Tambora eruption and the 1809

  15. Change Semantic Constrained Online Data Cleaning Method for Real-Time Observational Data Stream

    NASA Astrophysics Data System (ADS)

    Ding, Yulin; Lin, Hui; Li, Rongrong

    2016-06-01

    to large estimation error. In order to achieve the best generalization error, it is an important challenge for the data cleaning methodology to be able to characterize the behavior of data stream distributions and adaptively update a model to include new information and remove old information. However, the complicated, changing nature of the data invalidates traditional data cleaning methods, which rely on the assumption of a stationary data distribution, and drives the need for more dynamic and adaptive online data cleaning methods. To overcome these shortcomings, this paper presents a change semantics constrained online filtering method for real-time observational data. Based on the principle that the filter parameter should vary in accordance with the data change patterns, this paper embeds a semantic description, which quantitatively depicts the change patterns in the data distribution, to adapt the filter parameter automatically. Real-time observational water level data streams of different precipitation scenarios are selected for testing. Experimental results show that this method yields more accurate and reliable water level information, a prerequisite for prompt, science-based flood assessment and decision-making.
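
    The core idea, a filter parameter that follows the change pattern of the stream, can be illustrated with a generic change-aware smoother: when the recent data vary strongly, the filter trusts new observations more, and when the stream is quiet it smooths harder. The sketch below is only a stand-in for that principle, with assumed window and gain limits; it is not the published algorithm.

      import numpy as np

      def adaptive_smooth(values, alpha_min=0.05, alpha_max=0.8, window=10):
          """Exponential smoother whose gain adapts to the local rate of change.

          Generic illustration of change-aware filtering: strong recent variation
          raises the gain (trust new data), quiet stretches lower it (smooth harder).
          """
          values = np.asarray(values, dtype=float)
          smoothed = np.empty_like(values)
          smoothed[0] = values[0]
          for i in range(1, len(values)):
              lo = max(0, i - window)
              local_change = np.std(np.diff(values[lo:i + 1])) if i - lo > 1 else 0.0
              scale = np.std(values[lo:i + 1]) + 1e-9
              w = min(1.0, local_change / scale)
              alpha = alpha_min + (alpha_max - alpha_min) * w
              smoothed[i] = alpha * values[i] + (1.0 - alpha) * smoothed[i - 1]
          return smoothed

      # Synthetic water-level stream: a quiet stage, then a rapid rise during rainfall
      t = np.arange(200)
      stream = np.where(t < 120, 2.0, 2.0 + 0.03 * (t - 120)) + 0.05 * np.random.randn(200)
      clean = adaptive_smooth(stream)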

  16. Constraining Ammonia in Air Quality Models with Remote Sensing Observations and Inverse Modeling

    NASA Astrophysics Data System (ADS)

    Zhu, Liye

    Ammonia is an important atmospheric species as it contributes to air pollution and climate change and affects environmental health. Ammonia emissions are known to be primarily from agricultural sources; however, there is persistent uncertainty in the magnitudes and seasonal trends of these sources, as ammonia has not traditionally been routinely monitored. The first detection of boundary layer ammonia from space by the NASA Tropospheric Emission Spectrometer (TES) satellite instrument has provided an exciting new means of reducing this uncertainty. In this thesis, I explore how forward and inverse modeling can be used with satellite observations to constrain ammonia emissions. Model simulations are used to build and validate the TES ammonia retrieval product. TES retrievals are then used to characterize global ammonia distributions and model estimates. Correlations between ammonia and carbon monoxide, observed simultaneously by TES, provide additional insight into observed and modeled ammonia from biomass burning. Next, through inverse modeling, I show that ammonia emissions are broadly underestimated throughout the U.S., particularly in the West. Optimized model simulations capture the range and variability of in-situ observations in April and October, while estimates in July are biased high. To understand these adjustments, several aspects of the retrieval are considered, such as spatial and temporal sampling biases. These investigations lead to revisions of fundamental aspects of how ammonia emissions are modeled, such as the diurnal variability of livestock ammonia emissions. While this improves comparison to hourly in situ measurements in the SE U.S., ammonia concentrations decrease throughout the globe, up to 17 ppb in India and Southeastern China. Lastly, the bi-directional air-surface exchange of ammonia is implemented for the first time in a global model and its adjoint. Ammonia bi-directional exchange generally increases ammonia gross emissions (10.9%) and surface

  17. Constraining atmospheric ammonia emissions through new observations with an open-path, laser-based sensor

    NASA Astrophysics Data System (ADS)

    Sun, Kang

    emission estimates. Finally, NH3 observations from the TES instrument on the NASA Aura satellite were validated with mobile measurements and aircraft observations. Improved validations will help to constrain NH3 emissions at continental to global scales. Ultimately, these efforts will improve the understanding of NH3 emissions across all scales, with implications for the global nitrogen cycle and atmospheric chemistry-climate interactions.

  18. Neutrino Data from IceCube and its Predecessor at the South Pole, the Antarctic Muon and Neutrino Detector Array (AMANDA)

    DOE Data Explorer

    Abbasi, R.

    IceCube is a neutrino observatory for astrophysics with parts buried below the surface of the ice at the South Pole and an air-shower detector array exposed above. The international group of sponsors, led by the National Science Foundation (NSF), that designed and implemented the experiment intends for IceCube to operate and provide data for 20 years. IceCube records the interactions produced by astrophysical neutrinos with energies above 100 GeV, observing the Cherenkov radiation from charged particles produced in neutrino interactions. Its goal is to discover the sources of high-energy cosmic rays. These sources may be active galactic nuclei (AGNs) or massive, collapsed stars where black holes have formed. [Taken from http://www.icecube.wisc.edu/] The data from AMANDA, IceCube's predecessor detector and experiment, are also available at this website. AMANDA pioneered neutrino detection in ice. Over a period of years in the 1990s, detector "strings" were buried and activated, and by 2000 AMANDA was successfully recording an average of 1,000 neutrino events per year. This site also makes available many images and videos from the two experiments.

  19. Potential sea-level rise from Antarctic ice-sheet instability constrained by observations.

    PubMed

    Ritz, Catherine; Edwards, Tamsin L; Durand, Gaël; Payne, Antony J; Peyaud, Vincent; Hindmarsh, Richard C A

    2015-12-01

    Large parts of the Antarctic ice sheet lying on bedrock below sea level may be vulnerable to marine-ice-sheet instability (MISI), a self-sustaining retreat of the grounding line triggered by oceanic or atmospheric changes. There is growing evidence that MISI may be underway throughout the Amundsen Sea embayment (ASE), which contains ice equivalent to more than a metre of global sea-level rise. If triggered in other regions, the centennial to millennial contribution could be several metres. Physically plausible projections are challenging: numerical models with sufficient spatial resolution to simulate grounding-line processes have been too computationally expensive to generate large ensembles for uncertainty assessment, and lower-resolution model projections rely on parameterizations that are only loosely constrained by present day changes. Here we project that the Antarctic ice sheet will contribute up to 30 cm sea-level equivalent by 2100 and 72 cm by 2200 (95% quantiles) where the ASE dominates. Our process-based, statistical approach gives skewed and complex probability distributions (single mode, 10 cm, at 2100; two modes, 49 cm and 6 cm, at 2200). The dependence of sliding on basal friction is a key unknown: nonlinear relationships favour higher contributions. Results are conditional on assessments of MISI risk on the basis of projected triggers under the climate scenario A1B (ref. 9), although sensitivity to these is limited by theoretical and topographical constraints on the rate and extent of ice loss. We find that contributions are restricted by a combination of these constraints, calibration with success in simulating observed ASE losses, and low assessed risk in some basins. Our assessment suggests that upper-bound estimates from low-resolution models and physical arguments (up to a metre by 2100 and around one and a half by 2200) are implausible under current understanding of physical mechanisms and potential triggers. PMID:26580020

  20. Constraining the origins of Neptune's carbon monoxide abundance with CARMA millimeter-wave observations

    NASA Astrophysics Data System (ADS)

    Luszcz-Cook, S. H.; de Pater, I.

    2013-01-01

    We present observations of Neptune's 1- and 3-mm spectrum from the Combined Array for Research in Millimeter-wave Astronomy (CARMA). Radiative transfer analysis of the CO (2-1) and (1-0) rotation lines was performed to constrain the CO vertical abundance profile. We find that the data are well matched by a CO mole fraction of 0.1 (+0.2/-0.1) parts per million (ppm) in the troposphere, and 1.1 (+0.2/-0.3) ppm in the stratosphere. A flux of 0.5-20 × 10^8 CO molecules cm^-2 s^-1 to the upper stratosphere is implied. Using the Zahnle et al. (Zahnle, K., Schenk, P., Levison, H., Dones, L. [2003]. Icarus 163, 263-289) estimate for cometary impact rates at Neptune, we calculate the CO flux that could be formed from (sub)kilometer-sized comets; we find that if the diffusion rate near the tropopause is small (200 cm^2 s^-1), these impacts could produce a flux as high as 0.5 (+0.8/-0.4) × 10^8 CO molecules cm^-2 s^-1. We also revisit the calculation of Neptune's internal CO contribution using revised calculations for the CO → CH4 conversion timescale in the deep atmosphere (Visscher, C., Moses, J.I. [2011]. Astrophys. J. 738, 72). We find that an upwelled CO mole fraction of 0.1 ppm implies a global O/H enrichment of at least 400, and likely more than 650, times the protosolar value.

  1. How wild is your model fire? Constraining WRF-Chem wildfire smoke simulations with satellite observations

    NASA Astrophysics Data System (ADS)

    Fischer, E. V.; Ford, B.; Lassman, W.; Pierce, J. R.; Pfister, G.; Volckens, J.; Magzamen, S.; Gan, R.

    2015-12-01

    Exposure to high concentrations of particulate matter (PM) present during acute pollution events is associated with adverse health effects. While many anthropogenic pollution sources are regulated in the United States, emissions from wildfires are difficult to characterize and control. With wildfire frequency and intensity in the western U.S. projected to increase, it is important to more precisely determine the effect that wildfire emissions have on human health, and whether improved forecasts of these air pollution events can mitigate the health risks associated with wildfires. One of the challenges associated with determining health risks associated with wildfire emissions is that the low spatial resolution of surface monitors means that surface measurements may not be representative of a population's exposure, due to steep concentration gradients. To obtain better estimates of ambient exposure levels for health studies, a chemical transport model (CTM) can be used to simulate the evolution of a wildfire plume as it travels over populated regions downwind. Improving the performance of a CTM would allow the development of a new forecasting framework that could better help decision makers estimate and potentially mitigate future health impacts. We use the Weather Research and Forecasting model with online chemistry (WRF-Chem) to simulate wildfire plume evolution. By varying the model resolution, meteorology reanalysis initial conditions, and biomass burning inventories, we are able to explore the sensitivity of model simulations to these various parameters. Satellite observations are used first to evaluate model skill, and then to constrain the model results. These data are then used to estimate population-level exposure, with the aim of better characterizing the effects that wildfire emissions have on human health.

  2. Potential sea-level rise from Antarctic ice-sheet instability constrained by observations

    NASA Astrophysics Data System (ADS)

    Ritz, Catherine; Edwards, Tamsin L.; Durand, Gaël; Payne, Antony J.; Peyaud, Vincent; Hindmarsh, Richard C. A.

    2015-12-01

    Large parts of the Antarctic ice sheet lying on bedrock below sea level may be vulnerable to marine-ice-sheet instability (MISI), a self-sustaining retreat of the grounding line triggered by oceanic or atmospheric changes. There is growing evidence that MISI may be underway throughout the Amundsen Sea embayment (ASE), which contains ice equivalent to more than a metre of global sea-level rise. If triggered in other regions, the centennial to millennial contribution could be several metres. Physically plausible projections are challenging: numerical models with sufficient spatial resolution to simulate grounding-line processes have been too computationally expensive to generate large ensembles for uncertainty assessment, and lower-resolution model projections rely on parameterizations that are only loosely constrained by present day changes. Here we project that the Antarctic ice sheet will contribute up to 30 cm sea-level equivalent by 2100 and 72 cm by 2200 (95% quantiles) where the ASE dominates. Our process-based, statistical approach gives skewed and complex probability distributions (single mode, 10 cm, at 2100; two modes, 49 cm and 6 cm, at 2200). The dependence of sliding on basal friction is a key unknown: nonlinear relationships favour higher contributions. Results are conditional on assessments of MISI risk on the basis of projected triggers under the climate scenario A1B (ref. 9), although sensitivity to these is limited by theoretical and topographical constraints on the rate and extent of ice loss. We find that contributions are restricted by a combination of these constraints, calibration with success in simulating observed ASE losses, and low assessed risk in some basins. Our assessment suggests that upper-bound estimates from low-resolution models and physical arguments (up to a metre by 2100 and around one and a half by 2200) are implausible under current understanding of physical mechanisms and potential triggers.

  3. Constraining the shape distribution and binary fractions of asteroids observed by NEOWISE

    NASA Astrophysics Data System (ADS)

    Sonnett, Sarah M.; Mainzer, Amy; Grav, Tommy; Masiero, Joseph; Bauer, James; Vernazza, Pierre; Ries, Judit Gyorgyey; Kramer, Emily

    2015-11-01

    Knowing the shape distribution of an asteroid population gives clues to its collisional and dynamical history. Constraining light curve amplitudes (brightness variations) offers a first-order approximation to the shape distribution, provided all asteroids in the distribution were subject to the same observing biases. Asteroids observed by the NEOWISE space mission at roughly the same heliocentric distances have essentially the same observing biases and can therefore be inter-compared. We used the archival NEOWISE photometry of a statistically significant sample of Jovian Trojans, Hildas, and Main belt asteroids to compare the amplitude (and by proxy, shape) distributions of L4 vs. L5 Trojans, Trojans vs. Hildas of the same size range, and several subpopulations of Main belt asteroids. For asteroids with near-fluid rubble pile structures, very large light curve amplitudes can only be explained by close or contact binary systems, offering the potential to catalog and characterize binaries within a population and to glean more information on its dynamical evolution. Because the structure of most asteroids is not known to a high confidence level, objects with very high light curve amplitudes can only be considered candidate binaries. In Sonnett et al. (2015), we identified several binary candidates in the Jovian Trojan and Hilda populations. We have since been conducting a follow-up campaign to obtain densely sampled light curves of the binary candidates to allow detailed shape and binary modeling, helping identify true binaries. Here, we present preliminary results from the follow-up campaign, including rotation properties. This research was carried out at the Jet Propulsion Laboratory (JPL), California Institute of Technology (CalTech) under a contract with the National Aeronautics and Space Administration (NASA) and was supported by the NASA Postdoctoral Program at JPL. We make use of data products from the Wide-field Infrared Survey Explorer, which is a joint project

  4. Paleoproterozoic Collisional Structures in the Hudson Bay Lithosphere Constrained by Multi-Observable Probabilistic Inversion

    NASA Astrophysics Data System (ADS)

    Darbyshire, F. A.; Afonso, J. C.; Porritt, R. W.

    2015-12-01

    The Paleozoic Hudson Bay intracratonic basin conceals a Paleoproterozoic Himalayan-scale continental collision, the Trans-Hudson Orogen (THO), which marks an important milestone in the assembly of the Canadian Shield. The geometry of the THO is complex due to the double-indentor geometry of the collision between the Archean Superior and Western Churchill cratons. Seismic observations at regional scale show a thick, seismically fast lithospheric keel beneath the entire region; an intriguing feature of recent models is a 'curtain' of slightly lower wavespeeds trending NE-SW beneath the Bay, which may represent the remnants of more juvenile material trapped between the two Archean continental cores. The seismic models alone, however, cannot constrain the nature of this anomaly. We investigate the thermal and compositional structure of the Hudson Bay lithosphere using a multi-observable probabilistic inversion technique. This joint inversion uses Rayleigh wave phase velocity data from teleseismic earthquakes and ambient noise, geoid anomalies, surface elevation and heat flow to construct a pseudo-3D model of the crust and upper mantle. Initially a wide range of possible mantle compositions is permitted, and tests are carried out to ascertain whether the lithosphere is stratified with depth. Across the entire Hudson Bay region, low temperatures and a high degree of chemical depletion characterise the mantle lithosphere. Temperature anomalies within the lithosphere are modest, as may be expected from a tectonically-stable region. The base of the thermal lithosphere lies at depths of >250 km, reaching to ~300 km depth in the centre of the Bay. Lithospheric stratification, with a more-depleted upper layer, is best able to explain the geophysical data sets and surface observables. Some regions, where intermediate-period phase velocities are high, require stronger mid-lithospheric depletion. In addition, a narrow region of less-depleted material extends NE-SW across the Bay

  5. Constraining the Evolution of Galaxies over the Interaction Sequence with Multiwavelength Observations and Simulations

    NASA Astrophysics Data System (ADS)

    Lanz, Lauranne

    2013-03-01

    Interactions are crucial for galaxy formation and profoundly affect galaxy evolution. However, our understanding of the impact of interactions on star formation and activity of the central supermassive black hole remains incomplete. In the canonical picture of the interaction process, both are expected to undergo a strong enhancement, but some recent studies have not found this prediction to hold in a statistically meaningful sense. This thesis uses a sample of local interactions observed from the ultraviolet to the far-infrared and a suite of N-body hydrodynamic simulations of interactions to examine the evolution of star formation, stellar mass, dust properties, and spectral energy distributions (SEDs) over the interaction sequence. First, we present the SEDs of 31 interactions in 14 systems, which we fit with stellar population synthesis models combined with a thermal dust model. We examine the differences between mildly, moderately, and strongly interacting systems. The star formation rate (SFR), dust luminosity, and the 15-25 K dust component temperature increase as the interaction progresses from moderately to strongly interacting. However, the SFR per stellar mass remains constant across the interaction stages. Second, we create 14 hydrodynamic simulations of isolated and interacting galaxies and calculate simulated photometry in 25 bands using the SUNRISE radiative transfer code. By comparing observed and simulated SEDs, we identify the simulation properties necessary to reproduce an interaction's SED. The best matches originate from simulated systems of similar stellar mass, infrared luminosities, dust mass, and SFR to the observed systems. Although an SED alone is insufficient to identify the interaction stage, strongly interacting systems preferentially match SEDs from times close to coalescence in the simulations. Third, we describe a case study of a post-merger system, Fornax A, for which we constrain the parameters of its progenitors

  6. Relative merits of different types of rest-frame optical observations to constrain galaxy physical parameters

    NASA Astrophysics Data System (ADS)

    Pacifici, Camilla; Charlot, Stéphane; Blaizot, Jérémy; Brinchmann, Jarle

    2012-04-01

    We present a new approach to constrain galaxy physical parameters from the combined interpretation of stellar and nebular emission in wide ranges of observations. This approach relies on the Bayesian analysis of any type of galaxy spectral energy distribution using a comprehensive library of synthetic spectra assembled using state-of-the-art models of star formation and chemical enrichment histories, stellar population synthesis, nebular emission and attenuation by dust. We focus on the constraints set by five-band ugriz photometry and low- and medium-resolution spectroscopy at rest wavelengths λ = 3600-7400 Å on a few physical parameters of galaxies: the observer-frame absolute r-band stellar mass-to-light ratio, M*/Lr; the fraction of a current galaxy stellar mass formed during the last 2.5 Gyr, fSFH; the specific star formation rate, ψS; the gas-phase oxygen abundance, 12 + log(O/H); the total effective V-band absorption optical depth of the dust, τ̂_V; and the fraction of this arising from dust in the ambient interstellar medium, μ. Since these parameters cannot be known a priori for any galaxy sample, we assess the accuracy to which they can be retrieved from observations by simulating 'pseudo-observations' using models with known parameters. Assuming that these models are good approximations of true galaxies, we find that the combined analysis of stellar and nebular emission in low-resolution [50 Å full width at half-maximum (FWHM)] galaxy spectra provides valuable constraints on all physical parameters. The typical uncertainties in high-quality spectra are about 0.13 dex for M*/Lr, 0.23 for fSFH, 0.24 dex for ψS, 0.28 for 12 + log(O/H), 0.64 for τ̂_V and 0.16 for μ. The uncertainties in 12 + log(O/H) and τ̂_V tighten by about 20 per cent for galaxies with detectable emission lines and by another 45 per cent when the spectral resolution is increased to 5 Å FWHM. At this spectral resolution, the analysis of the combined stellar and nebular emission in the high
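
    For Gaussian flux errors, a Bayesian library fit of the kind described reduces to weighting every model in the library by exp(-χ²/2) after scaling each model to the observed fluxes. The sketch below uses a random synthetic library and hypothetical five-band fluxes purely to show the mechanics; it is not the library or parameter set of the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical library: each row is a model SED in 5 bands (ugriz), each with
      # one associated physical parameter (illustratively, log10 of the specific SFR).
      n_models = 10000
      library_seds = rng.lognormal(mean=0.0, sigma=0.5, size=(n_models, 5))
      library_params = rng.uniform(-12.0, -8.0, size=n_models)

      observed = library_seds[1234] * rng.normal(1.0, 0.05, size=5)   # fake "observation"
      errors = 0.05 * observed

      # Scale each model to the observation (free amplitude, e.g. stellar mass),
      # then weight it by the Gaussian likelihood exp(-chi2 / 2).
      scale = (library_seds * observed / errors**2).sum(axis=1) / \
              (library_seds**2 / errors**2).sum(axis=1)
      chi2 = (((scale[:, None] * library_seds - observed) / errors)**2).sum(axis=1)
      weights = np.exp(-0.5 * (chi2 - chi2.min()))
      weights /= weights.sum()

      # Posterior mean and 1-sigma spread of the parameter
      mean = (weights * library_params).sum()
      std = np.sqrt((weights * (library_params - mean)**2).sum())
      print(f"log10(sSFR) = {mean:.2f} +/- {std:.2f}")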

  7. Constraining the parameters of the EAP sea ice rheology from satellite observations and discrete element model

    NASA Astrophysics Data System (ADS)

    Tsamados, Michel; Heorton, Harry; Feltham, Daniel; Muir, Alan; Baker, Steven

    2016-04-01

    The new elastic-plastic anisotropic (EAP) rheology that explicitly accounts for the sub-continuum anisotropy of the sea ice cover has been implemented into the latest version of the Los Alamos sea ice model CICE. The EAP rheology is widely used in the climate modeling scientific community (e.g., the CPOM stand-alone model, the RASM high-resolution regional ice-ocean model, the MetOffice fully coupled model). Early results from sensitivity studies (Tsamados et al., 2013) have shown the potential for an improved representation of the observed main sea ice characteristics with a substantial change of the spatial distribution of ice thickness and ice drift relative to model runs with the reference visco-plastic (VP) rheology. The model contains one new prognostic variable, the local structure tensor, which quantifies the degree of anisotropy of the sea ice, and two parameters that set the time scale of the evolution of this tensor. Observations from high resolution satellite SAR imagery as well as numerical simulation results from a discrete element model (DEM, see Wilchinsky, 2010) have shown that individual ice floes can organize under external wind and thermal forcing to form an emergent isotropic sea ice state (via thermodynamic healing, thermal cracking) or an anisotropic sea ice state (via Coulombic failure lines due to shear rupture). In this work we use for the first time in the context of sea ice research a mathematical metric, tensorial Minkowski functionals (Schroeder-Turk, 2010), to measure quantitatively the degree of anisotropy and alignment of the sea ice at different scales. We apply the methodology to the GlobICE Envisat satellite deformation product (www.globice.info), to a prototype modified version of GlobICE applied to Sentinel-1 Synthetic Aperture Radar (SAR) imagery, and to the DEM ice floe aggregates. By comparing these independent measurements of the sea ice anisotropy as well as its temporal evolution against the EAP model we are able to constrain the

  8. Photochemical modeling of H2O in Titan's atmosphere constrained by Herschel Observations

    NASA Astrophysics Data System (ADS)

    Lara, L. M.; Lellouch, E.; Moreno, R.; Courtin, R.; Hartogh, P.; Rengel, M.

    2012-04-01

    As a species subject to photolytic, chemical and condensation losses, H2O present in Titan's stratosphere must be of external origin. The discovery of CO2 by Voyager (Samuelson et al. 1981) pointed to an external supply of oxygen to Titan's atmosphere. Indeed, CO2, which also condenses, was recognized to be formed via CO+OH, where OH was likely produced by H2O photolysis. This view was supported by the ground-based discovery of CO (Lutz et al. 1983) and subsequent measurements confirming an abundance of ~50 ppm. The source of CO itself remained elusive, but inspired by the Cassini/CAPS discovery of an O+ influx rate (Hartle et al. 2006), Hörst et al. (2008) showed that an external source of O or O+ leads to the formation of CO, also pointing to the likely external origin of this compound. The most up-to-date model of Titan's oxygen chemistry by Hörst et al. (2008) adjusted the OH/H2O deposition rate as a function of the eddy diffusion coefficient below 200 km to match the observed CO2 mixing ratio (15 ppb, uniform over 100-200 km), producing a H2O profile that was deemed consistent with the ISO/SWS measurement of the H2O abundance at a nominal altitude of 400 km (Coustenis et al. 1998). Therefore, the Hörst et al. (2008) study provided an apparently self-consistent picture of the origin of oxygen compounds in Titan's atmosphere, with the three main species (CO, CO2 and H2O) being produced from a permanent external supply of oxygen in two distinct forms. However, recent measurements of several H2O lines by the HIFI and PACS instruments (Herschel Space Observatory) have shown that none of the H2O profiles calculated in Hörst et al. (2008) reproduces the observed lines (Moreno et al., this workshop), and neither does the Lara et al. (1996) H2O profile. Here we revisit the Lara et al. (1996) photochemical model by including (i) an updated eddy diffusion coefficient profile (K(z)), constrained by the C2H6 vertical distribution, and (ii) an adjustable O+/OH/H2O influx. Our

  9. Using geophysical observations to constrain dynamic models of large-scale continental deformation in Asia

    NASA Astrophysics Data System (ADS)

    Flesch, L. M.; Holt, W. E.; Haines, A. J.

    2003-04-01

    The deformation of continental lithosphere is controlled by a variety of factors, including (1) body forces, (2) basal tractions, (3) boundary forces, and (4) rheology. Obtaining unique solutions that describe the dynamics of continental lithosphere is extremely challenging. Limitations are associated with observations that are inadequate to uniquely constrain the dynamics, as well as with inadequate numerical methods. However, the compilation of space geodetic, seismic, and geologic data over the past 10-15 years has made it possible to make significant strides toward understanding the dynamics of large-scale continental deformation. The first step in making inferences about continental dynamics involves a quantification of the kinematics of active deformation (measurement of the velocity gradient tensor field). We interpolate both GPS velocity vectors and Quaternary strain rates with continuous spline functions (bi-cubic Bessel interpolation) to define a model velocity gradient tensor field solution (strain rates, rotation rates, and relative motions). In our methodology, grid areas can be defined to be small enough that fault zones are narrow and regions between faults (crustal blocks) behave rigidly. Our dynamic models are solutions to equations for a thin sheet, accounting for body forces associated with horizontal density variations and edge forces associated with accommodation of relative plate motion. The formalism can also include basal tractions associated with coupling between lithosphere and deeper mantle circulation. These dynamic models allow for lateral variations of viscosity and for different power-law rheologies, with exponents ranging from n = 1 to 9. Thus our dynamic models account for possible block-like behavior (high effective viscosity) as well as concentrated strain within shear zones. Kinematic results to date for central Asia show block-like behavior for large regions such as South China, Tarim Basin, Amurian block

  10. Constraining sterile neutrino warm dark matter with Chandra observations of the Andromeda galaxy

    SciTech Connect

    Watson, Casey R.; Polley, Nicholas K.; Li, Zhiyuan E-mail: zyli@astro.ucla.edu

    2012-03-01

    We use the Chandra unresolved X-ray emission spectrum from a 12'–28' (2.8–6.4 kpc) annular region of the Andromeda galaxy to constrain the radiative decay of sterile neutrino warm dark matter. By excising the most baryon-dominated, central 2.8 kpc of the galaxy, we reduce the uncertainties in our estimate of the dark matter mass within the field of view and improve the signal-to-noise ratio of prospective sterile neutrino decay signatures relative to hot gas and unresolved stellar emission. Our findings impose the most stringent limit on the sterile neutrino mass to date in the context of the Dodelson-Widrow model, m_s < 2.2 keV (95% C.L.). Our results also constrain alternative sterile neutrino production scenarios at very small active-sterile neutrino mixing angles.

  11. Constraining the unexplored period between the dark ages and reionization with observations of the global 21 cm signal

    SciTech Connect

    Pritchard, Jonathan R.; Loeb, Abraham

    2010-07-15

    Observations of the frequency dependence of the global brightness temperature of the redshifted 21 cm line of neutral hydrogen may be possible with single dipole experiments. In this paper, we develop a Fisher matrix formalism for calculating the sensitivity of such instruments to the 21 cm signal from reionization and the dark ages. We show that rapid reionization histories with duration Δz ≲ 2 can be constrained, provided that local foregrounds can be well modeled by low order polynomials. It is then shown that observations in the range ν = 50-100 MHz can feasibly constrain the Lyα and x-ray emissivity of the first stars forming at z ≈ 15-25, provided that systematic temperature residuals can be controlled to less than 1 mK. Finally, we demonstrate the difficulty of detecting the 21 cm signal from the dark ages before star formation.
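
    The Fisher-matrix calculation described amounts to differentiating the modelled sky temperature (signal plus smooth foreground) with respect to each parameter and summing products of derivatives over frequency channels, weighted by the noise. The toy version below assumes a Gaussian absorption trough and a log-quadratic foreground with made-up fiducial values; it illustrates the formalism only, not the models used in the paper.

      import numpy as np

      nu = np.linspace(50.0, 100.0, 200)   # observing band, MHz (from the abstract)
      sigma_T = 0.001                      # per-channel noise, K (assumed ~1 mK residual)

      def model(p):
          """Toy global 21 cm model: Gaussian absorption trough + log-quadratic foreground (K)."""
          amp, nu0, width, a0, a1, a2 = p
          x = np.log(nu / 75.0)
          foreground = np.exp(a0 + a1 * x + a2 * x**2)
          trough = -amp * np.exp(-0.5 * ((nu - nu0) / width)**2)
          return foreground + trough

      # Fiducial parameter values (assumed for illustration only)
      p0 = np.array([0.1, 70.0, 5.0, np.log(1600.0), -2.5, 0.1])

      # Central-difference derivatives and Fisher matrix F_ij = sum_nu (dT/dp_i)(dT/dp_j)/sigma^2
      derivs = []
      for i in range(len(p0)):
          step = 1e-4 * max(abs(p0[i]), 1.0)
          dp = np.zeros_like(p0)
          dp[i] = step
          derivs.append((model(p0 + dp) - model(p0 - dp)) / (2.0 * step))
      derivs = np.array(derivs)
      fisher = derivs @ derivs.T / sigma_T**2

      # Invert via the correlation matrix for numerical stability, then report the
      # marginalised 1-sigma error on the trough amplitude.
      d = 1.0 / np.sqrt(np.diag(fisher))
      cov = d[:, None] * np.linalg.inv(d[:, None] * fisher * d[None, :]) * d[None, :]
      print(f"sigma(amplitude) ~ {np.sqrt(cov[0, 0]) * 1000:.1f} mK")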

  12. Mercury's thermo-chemical evolution from numerical models constrained by Messenger observations

    NASA Astrophysics Data System (ADS)

    Tosi, N.; Breuer, D.; Plesa, A. C.; Wagner, F.; Laneuville, M.

    2012-04-01

    The Messenger spacecraft, in orbit around Mercury for almost one year, has been delivering a great deal of new information that is dramatically changing our understanding of the solar system's innermost planet. Tracking data of the Radio Science experiment yielded improved estimates of the first coefficients of the gravity field that make it possible to determine the normalized polar moment of inertia of the planet (C/MR^2) and the ratio of the moment of inertia of the mantle to that of the whole planet (C_m/C). These two parameters provide a strong constraint on the internal mass distribution and, in particular, on the core mass fraction. With C/MR^2 = 0.353 and C_m/C = 0.452 [1], interior structure models predict a core radius as large as 2000 km [2], leaving room for a silicate mantle shell with a thickness of only ~ 400 km, a value significantly smaller than that of 600 km usually assumed in parametrized [3] as well as in numerical models of Mercury's mantle dynamics and evolution [4]. Furthermore, the Gamma-Ray Spectrometer measured the surface abundance of radioactive elements, revealing, besides uranium and thorium, the presence of potassium. The latter, being moderately volatile, rules out traditional formation scenarios from highly refractory materials, favoring instead a composition not very different from a chondritic model. Considering a 400 km thick mantle, we carry out a large series of 2D and 3D numerical simulations of the thermo-chemical evolution of Mercury's mantle. We model in a self-consistent way the formation of crust through partial melting using Lagrangian tracers to account for the partitioning of radioactive heat sources between mantle and crust and variations of thermal conductivity. Assuming the relative surface abundance of radiogenic elements observed by Messenger to be representative of the bulk mantle composition, we attempt to constrain the degree to which uranium, thorium and potassium are concentrated in the silicate mantle through a broad

  13. The 2014 Napa valley earthquake constrained by InSAR and GNSS observations

    NASA Astrophysics Data System (ADS)

    Polcari, Marco; Fernández, José; Palano, Mimmo; Albano, Matteo; Samsonov, Sergey; Stramondo, Salvatore; Zerbini, Susanna

    2015-04-01

    loosely constrained station coordinates, and other parameters, along with the associated variance-covariance matrices. These solutions were used as quasi-observations in a Kalman filter to estimate a consistent set of daily coordinates (i.e. time-series) for all sites involved. The resulting time-series were aligned to a North American fixed reference frame. Visual inspection of the time-series for the stations located close to the epicentral area of the seismic event allowed us to detect a significant offset related to coseismic deformation. Both data sets have been integrated to determine the 3D displacement field produced by the earthquake. It shows clear characteristics of a strike-slip event with an approximately NW-striking fault plane.

  14. Local propagation speed constrained estimation of the slowness vector from non-planar array observations.

    PubMed

    Nouvellet, Adrien; Roueff, François; Le Pichon, Alexis; Charbit, Maurice; Vergoz, Julien; Kallel, Mohamed; Mejri, Chourouq

    2016-01-01

    The estimation of the slowness vector of infrasound waves propagating across an array is a critical process leading to the determination of parameters of interest such as the direction of arrival. The sensors of an array are often considered to be located in a horizontal plane. In practice, however, due to topography, the altitudes of the sensors are not identical and introduce a bias in the estimate if neglected. The unbiased 3D estimation procedure, while suppressing this bias, leads to an increase of the variance. Accounting for an a priori constraint on the slowness vector significantly reduces the variance and could therefore improve the performance of the estimation, provided the bias introduced by incorrect a priori information remains negligible. This study focuses on measuring the benefits of this approach with a thorough investigation of the bias and variance of the constrained 3D estimator, which is not available in the existing literature. This contribution provides such computations based on an asymptotic Gaussian approximation. Simulations are carried out to assess the theoretical results both with synthetic and real data. Thus, a constrained 3D estimator is proposed, yielding the best bias/variance compromise when good knowledge of the propagation wave speed is available. PMID:26827049
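
    For a plane wave, the delay at each sensor relative to a reference is the dot product of the sensor's position offset with the slowness vector, so an unconstrained estimate is a linear least-squares fit of delays on 3D sensor coordinates; the propagation-speed constraint then fixes the norm of the slowness vector. The sketch below uses a synthetic geometry and a simple rescaling onto the constraint surface, not the statistically optimal estimator analysed in the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic 5-sensor array with non-zero altitudes (metres, relative to sensor 0)
      positions = np.array([[0.0, 0.0, 0.0],
                            [800.0, 100.0, 12.0],
                            [-300.0, 900.0, -8.0],
                            [-700.0, -400.0, 25.0],
                            [200.0, -850.0, 5.0]])

      c = 340.0                                        # assumed local propagation speed, m/s
      true_s = np.array([np.cos(0.4), np.sin(0.4), 0.15])
      true_s = true_s / np.linalg.norm(true_s) / c     # slowness vector, s/m

      # Plane-wave delays relative to sensor 0, plus timing noise
      delays = (positions - positions[0]) @ true_s + 1e-3 * rng.standard_normal(5)

      # Unconstrained 3D least-squares estimate of the slowness vector
      s_hat, *_ = np.linalg.lstsq(positions - positions[0], delays, rcond=None)

      # Impose |s| = 1/c by rescaling (a simple projection onto the constraint)
      s_con = s_hat / (np.linalg.norm(s_hat) * c)

      direction = np.degrees(np.arctan2(s_con[1], s_con[0]))
      print(f"estimated horizontal direction: {direction:.1f} deg, "
            f"apparent speed: {1.0 / np.linalg.norm(s_hat):.0f} m/s")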

  15. An offline constrained data assimilation technique for aerosols: Improving GCM simulations over South Asia using observations from two satellite sensors

    NASA Astrophysics Data System (ADS)

    Baraskar, Ankit; Bhushan, Mani; Venkataraman, Chandra; Cherian, Ribu

    2016-05-01

    Aerosol properties simulated by general circulation models (GCMs) exhibit large uncertainties due to biases in model processes and inaccuracies in aerosol emission inputs. In this work, we propose an offline, constrained-optimization-based procedure to improve these simulations by assimilating them with observational data. The proposed approach explicitly incorporates the non-negativity constraint on the aerosol optical depth (AOD), which is a key metric to quantify aerosol distributions. The resulting optimization problem is a quadratic program and can be easily solved with available optimization routines. The utility of the approach is demonstrated by performing offline assimilation of GCM-simulated aerosol optical properties and radiative forcing over South Asia (40-120 E, 5-40 N), with satellite AOD measurements from two sensors, namely the Moderate Resolution Imaging SpectroRadiometer (MODIS) and the Multi-Angle Imaging SpectroRadiometer (MISR). Uncertainty in the observational data used in the assimilation is computed by developing different error bands around regional AOD observations, based on their quality assurance flags. The assimilation, evaluated on monthly and daily scales, compares well with Aerosol Robotic Network (AERONET) observations as determined by goodness-of-fit statistics. Assimilation increased both model-predicted atmospheric absorption and clear-sky radiative forcing by factors consistent with recent estimates in the literature. Thus, the constrained assimilation algorithm helps to systematically reduce uncertainties in aerosol simulations.
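
    Under diagonal error covariances, the assimilation step described is a small quadratic programme per grid cell: minimise the weighted misfit to both the model background and the satellite AOD subject to AOD ≥ 0. The sketch below casts that as a bounded least-squares problem with hypothetical values and scipy's generic solver; it is not the authors' implementation.

      import numpy as np
      from scipy.optimize import lsq_linear

      # Hypothetical grid-cell values
      aod_model = np.array([0.05, 0.32, 0.60, 0.02])   # GCM background AOD
      aod_obs = np.array([0.09, 0.25, 0.75, -0.01])    # satellite AOD (last value noisy/negative)
      sig_model = np.full(4, 0.15)                     # background error std (assumed)
      sig_obs = np.array([0.05, 0.05, 0.10, 0.05])     # observation error std from QA flags (assumed)

      # Minimise sum_i [ (x_i - model_i)^2 / sig_model_i^2 + (x_i - obs_i)^2 / sig_obs_i^2 ]
      # subject to x_i >= 0, by stacking the two weighted residual blocks into one
      # linear least-squares problem with a lower bound of zero.
      A = np.vstack([np.diag(1.0 / sig_model), np.diag(1.0 / sig_obs)])
      b = np.concatenate([aod_model / sig_model, aod_obs / sig_obs])
      result = lsq_linear(A, b, bounds=(0.0, np.inf))

      print("analysis AOD:", np.round(result.x, 3))   # last cell is clipped at zero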

  16. Constraining Very High-Energy Gamma Ray Sources Using IceCube Neutrino Observations

    NASA Astrophysics Data System (ADS)

    Vance, Gregory; Feintzeig, J.; Karle, A.; IceCube Collaboration

    2014-01-01

    Modern gamma ray astronomy has revealed the most violent, energetic objects in the known universe, from nearby supernova remnants to distant active galactic nuclei. In an effort to discover more about the fundamental nature of such objects, we present searches for astrophysical neutrinos in coincidence with known gamma ray sources. Searches were conducted using data from the IceCube Neutrino Observatory, a cubic-kilometer neutrino detector that is sensitive to astrophysical particles with energies above 1 TeV. The detector is situated at the South Pole, and uses more than 5,000 photomultiplier tubes to detect Cherenkov light from the interactions of particles within the ice. Existing models of proton-proton interactions allow us to link gamma ray fluxes to the production of high-energy neutrinos, so neutrino data from IceCube can be used to constrain the mechanisms by which gamma ray sources create such energetic photons. For a few particularly bright sources, such as the blazar Markarian 421, IceCube is beginning to reach the point where actual constraints can be made. As more years of data are analyzed, the limits will improve and stronger constraints will become possible. This work was supported in part by the National Science Foundation's REU Program through NSF Award AST-1004881 to the University of Wisconsin-Madison.

  17. Constraining parameters in marine pelagic ecosystem models - is it actually feasible with typical observations of standing stocks?

    NASA Astrophysics Data System (ADS)

    Löptien, U.; Dietze, H.

    2015-07-01

    In a changing climate, marine pelagic biogeochemistry may modulate the atmospheric concentrations of climate-relevant species such as CO2 and N2O. To date, projections rely on earth system models, featuring simple pelagic biogeochemical model components, embedded into 3-D ocean circulation models. Most of these biogeochemical model components rely on the hyperbolic Michaelis-Menten (MM) formulation, which specifies the limiting effect of light and nutrients on carbon assimilation by autotrophic phytoplankton. The respective MM constants of 3-D coupled biogeochemical ocean-circulation models, along with other model parameters, are usually tuned; the parameters are changed until a "reasonable" similarity to observed standing stocks is achieved. Here, we explore with twin experiments (or synthetic "observations") the demands on observations that allow for a more objective estimation of model parameters. We start with parameter retrieval experiments based on "perfect" (synthetic) observations which we distort, step by step, by low-frequency noise to approach realistic conditions. Finally, we confirm our findings with real-world observations. In summary, we find that MM constants are especially hard to constrain because even modest noise (10 %) inherent to observations may already hinder the parameter retrieval. This is of concern since the MM parameters are key to the model's sensitivity to anticipated changes in the external conditions. Furthermore, we illustrate problems caused by high-order parameter dependencies when parameter estimation is based on sparse observations of standing stocks. Somewhat counter to intuition, we find that more observational data can sometimes degrade the ability to constrain certain parameters.
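
    The MM formulation referred to limits growth hyperbolically in the ambient nutrient concentration. A one-line version with assumed parameter values also shows why the half-saturation constant is hard to constrain: once nutrient concentrations sit well above it, very different constants yield nearly identical rates.

      import numpy as np

      def mm_growth(nutrient, mu_max=1.0, k_half=0.5):
          """Michaelis-Menten limited growth rate (d^-1); parameter values are assumed."""
          return mu_max * nutrient / (k_half + nutrient)

      # When nutrient >> k_half the rate saturates near mu_max, so different
      # half-saturation constants produce nearly identical rates - one reason the
      # MM constants are weakly constrained by standing-stock observations alone.
      n = np.array([0.1, 1.0, 5.0, 20.0])   # mmol m^-3, illustrative concentrations
      for k in (0.25, 0.5, 1.0):
          print(f"k_half={k:4.2f}:", np.round(mm_growth(n, k_half=k), 3))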

  18. Bioenergy potential of the United States constrained by satellite observations of existing productivity

    USGS Publications Warehouse

    Smith, W. Kolby; Cleveland, Cory C.; Reed, Sasha C.; Miller, Norman L.; Running, Steven W.

    2012-01-01

    United States (U.S.) energy policy includes an expectation that bioenergy will be a substantial future energy source. In particular, the Energy Independence and Security Act of 2007 (EISA) aims to increase annual U.S. biofuel (secondary bioenergy) production by more than 3-fold, from 40 to 136 billion liters of ethanol, which implies an even larger increase in biomass demand (primary energy), from roughly 2.9 to 7.4 EJ yr^-1. However, our understanding of many of the factors used to establish such energy targets is far from complete, introducing significant uncertainty into the feasibility of current estimates of bioenergy potential. Here, we utilized satellite-derived net primary productivity (NPP) data (measured for every 1 km^2 of the 7.2 million km^2 of vegetated land in the conterminous U.S.) to estimate primary bioenergy potential (PBP). Our results indicate that PBP of the conterminous U.S. ranges from roughly 5.9 to 22.2 EJ yr^-1, depending on land use. The low end of this range represents the potential when harvesting residues only, while the high end would require an annual biomass harvest over an area more than three times current U.S. agricultural extent. While EISA energy targets are theoretically achievable, we show that meeting these targets utilizing current technology would require either an 80% displacement of current crop harvest or the conversion of 60% of rangeland productivity. Accordingly, realistically constrained estimates of bioenergy potential are critical for effective incorporation of bioenergy into the national energy portfolio.
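
    The link drawn between the EISA volume target and primary biomass demand can be checked with round numbers: 136 billion litres of ethanol times an assumed lower heating value of ~21 MJ per litre gives roughly 2.9 EJ of fuel energy, and dividing by an assumed field-to-fuel conversion efficiency of ~40% gives the ~7.4 EJ of primary biomass quoted. The heating value and efficiency below are assumptions for illustration, not values from the paper.

      ETHANOL_LHV = 21.2e6     # J per litre (assumed lower heating value)
      CONVERSION_EFF = 0.39    # primary biomass -> ethanol energy efficiency (assumed)

      target_litres = 136e9    # EISA 2022 volume target (from the abstract)

      secondary_ej = target_litres * ETHANOL_LHV / 1e18
      primary_ej = secondary_ej / CONVERSION_EFF
      print(f"secondary bioenergy ~ {secondary_ej:.1f} EJ, "
            f"implied primary biomass demand ~ {primary_ej:.1f} EJ")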

  19. Model-data assimilation of multiple phenological observations to constrain and predict leaf area index.

    PubMed

    Viskari, Toni; Hardiman, Brady; Desai, Ankur R; Dietze, Michael C

    2015-03-01

    Our limited ability to accurately simulate leaf phenology is a leading source of uncertainty in models of ecosystem carbon cycling. We evaluate if continuously updating canopy state variables with observations is beneficial for predicting phenological events. We employed an ensemble adjustment Kalman filter (EAKF) to update predictions of leaf area index (LAI) and leaf extension using tower-based photosynthetically active radiation (PAR) and Moderate Resolution Imaging Spectroradiometer (MODIS) data for 2002-2005 at Willow Creek, Wisconsin, USA, a mature, even-aged, northern hardwood, deciduous forest. The ecosystem demography model version 2 (ED2) was used as the prediction model, forced by offline climate data. EAKF successfully incorporated information from both the observations and model predictions weighted by their respective uncertainties. The resulting estimate reproduced the observed leaf phenological cycle in the spring and the fall better than a parametric model prediction. These results indicate that during spring the observations contribute most in determining the correct bud-burst date, after which the model performs well, but accurately modeling fall leaf senescence requires continuous model updating from observations. While the predicted net ecosystem exchange (NEE) of CO2 precedes tower observations and unassimilated model predictions in the spring, overall the prediction follows observed NEE better than the model alone. Our results show state data assimilation successfully simulates the evolution of plant leaf phenology and improves model predictions of forest NEE. PMID:26263674
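
    The core of the EAKF step is an uncertainty-weighted blend of the model ensemble and the observation; the sketch below shows the scalar version of such an update for LAI (hypothetical numbers, not the ED2/EAKF code used in the study).

    ```python
    # Scalar ensemble-adjustment-style update of LAI against one observation
    # (hedged sketch; the paper's multivariate EAKF with ED2 is far more involved).
    import numpy as np

    rng = np.random.default_rng(0)
    lai_forecast = rng.normal(3.5, 0.6, size=50)   # ensemble of model-predicted LAI
    lai_obs, obs_err = 4.2, 0.3                    # e.g. a MODIS-derived LAI and its error

    mean_f = lai_forecast.mean()
    var_f = lai_forecast.var(ddof=1)
    gain = var_f / (var_f + obs_err**2)            # weight by the respective uncertainties
    mean_a = mean_f + gain * (lai_obs - mean_f)    # analysis (posterior) mean
    shrink = np.sqrt(1.0 - gain)                   # deterministic spread adjustment
    lai_analysis = mean_a + shrink * (lai_forecast - mean_f)

    print(f"forecast {mean_f:.2f} +/- {np.sqrt(var_f):.2f}  ->  "
          f"analysis {lai_analysis.mean():.2f} +/- {lai_analysis.std(ddof=1):.2f}")
    ```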

  20. Constraining ammonia dairy emissions during NASA DISCOVER-AQ California: surface and airborne observation comparisons with CMAQ simulations

    NASA Astrophysics Data System (ADS)

    Miller, D. J.; Liu, Z.; Sun, K.; Tao, L.; Nowak, J. B.; Bambha, R.; Michelsen, H. A.; Zondlo, M. A.

    2014-12-01

    Agricultural ammonia (NH3) emissions are highly uncertain in current bottom-up inventories. Ammonium nitrate is a dominant component of fine aerosols in agricultural regions such as the Central Valley of California, especially during winter. Recent high resolution regional modeling efforts in this region have found significant ammonium nitrate and gas-phase NH3 biases during summer. We compare spatially-resolved surface and boundary layer gas-phase NH3 observations during NASA DISCOVER-AQ California with Community Multi-Scale Air Quality (CMAQ) regional model simulations driven by the EPA NEI 2008 inventory to constrain wintertime NH3 model biases. We evaluate model performance with respect to aerosol partitioning, mixing and deposition to constrain contributions to modeled NH3 concentration biases in the Central Valley Tulare dairy region. Ammonia measurements performed with a vehicle-based, open-path mobile platform are gridded to hourly background concentrations at 4 km resolution. A peak detection algorithm is applied to remove local feedlot emission peaks. Aircraft NH3, NH4+ and NO3- observations are also compared with simulations extracted along the flight tracks. We find NH3 background concentrations in the dairy region are underestimated by a factor of three to five during winter, and NH3 simulations are moderately correlated with observations (r = 0.36). Although model simulations capture NH3 enhancements in the dairy region, these simulations are biased low by 30-60 ppbv NH3. Aerosol NH4+ and NO3- are also biased low in CMAQ, by factors of three and four respectively. Unlike gas-phase NH3, CMAQ simulations do not capture typical NH4+ or NO3- enhancements observed in the dairy region. In contrast, boundary layer height simulations agree with observations to within 13%. We also address observational constraints on simulated NH3 deposition fluxes. These comparisons suggest that NEI 2008 wintertime dairy emissions are underestimated by a factor of three to five. We test
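
    The background-gridding step described above amounts to binning the mobile transects and keeping a robust low statistic per cell so that local feedlot plumes do not leak into the "background"; a hedged sketch with a hypothetical helper (not the DISCOVER-AQ processing code) is:

    ```python
    # Grid mobile NH3 transects to hourly background concentrations on a ~4 km grid,
    # using a low percentile per cell as a crude stand-in for the peak-removal algorithm.
    import numpy as np
    import pandas as pd

    def grid_background(df, cell_km=4.0, background_percentile=10):
        """df needs columns lon, lat, hour, nh3_ppbv; returns per-cell hourly backgrounds."""
        km_per_deg = 111.0                         # rough degree-to-km conversion, illustrative only
        binned = df.assign(
            ix=np.floor(df.lon * km_per_deg / cell_km).astype(int),
            iy=np.floor(df.lat * km_per_deg / cell_km).astype(int),
        )
        return (binned.groupby(["hour", "ix", "iy"])["nh3_ppbv"]
                      .quantile(background_percentile / 100.0)
                      .rename("nh3_background_ppbv")
                      .reset_index())
    ```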

  1. Constraining Atmospheric Particle Size in Gale Crater Using REMS UV Measurements and Mastcam Observations at 440 and 880 nm

    NASA Astrophysics Data System (ADS)

    Mason, E. L.; Lemmon, M. T.; de la Torre-Juárez, M.; Vicente-Retortillo, A.; Martinez, G.

    2015-12-01

    Optical depth measured in Gale crater has been shown to vary seasonally, and this variation is potentially linked to a change in dust size visible from the surface. The Mast Camera (Mastcam) on the Mars Science Laboratory (MSL) has performed cross-sky brightness surveys similar to those obtained at the Phoenix Lander site. Since particle size can be constrained by observing airborne dust across multiple wavelengths and angles, surveys at 440 and 880 nm can be used to characterize atmospheric dust within and above the crater. In addition, the Rover Environmental Monitoring Station (REMS) on MSL provides the downward radiation flux from 250 nm (UVD) to 340 nm (UVA), which would further constrain aerosol properties. The dust, which is not spherical and likely contains irregular particles, can be modeled using randomly oriented triaxial ellipsoids with predetermined microphysical optical properties and fit to sky survey observations to retrieve an effective radius. This work discusses constraints on the particle size distribution and particle shape in Gale crater obtained from REMS measurements in comparison with Mastcam observations at the specified wavelengths.

  2. CONSTRAINING THE PLANETARY SYSTEM OF FOMALHAUT USING HIGH-RESOLUTION ALMA OBSERVATIONS

    SciTech Connect

    Boley, A. C.; Payne, M. J.; Ford, E. B.; Shabram, M.; Corder, S.; Dent, W. R. F.

    2012-05-01

    The dynamical evolution of planetary systems leaves observable signatures in debris disks. Optical images trace micron-sized grains, which are strongly affected by stellar radiation and need not coincide with their parent body population. Observations of millimeter-sized grains accurately trace parent bodies, but previous images lack the resolution and sensitivity needed to characterize the ring's morphology. Here we present ALMA 350 GHz observations of the Fomalhaut debris ring. These observations demonstrate that the parent body population is 13-19 AU wide with a sharp inner and outer boundary. We discuss three possible origins for the ring and suggest that debris confined by shepherd planets is the most consistent with the ring's morphology.

  3. Constraining properties of GRB magnetar central engines using the observed plateau luminosity and duration correlation

    NASA Astrophysics Data System (ADS)

    Rowlinson, A.; Gompertz, B. P.; Dainotti, M.; O'Brien, P. T.; Wijers, R. A. M. J.; van der Horst, A. J.

    2014-09-01

    An intrinsic correlation has been identified between the luminosity and duration of plateaus in the X-ray afterglows of gamma-ray bursts (GRBs; Dainotti et al. 2008), suggesting a central engine origin. The magnetar central engine model predicts an observable plateau phase, with plateau durations and luminosities being determined by the magnetic fields and spin periods of the newly formed magnetar. This paper analytically shows that the magnetar central engine model can explain, within the 1σ uncertainties, the correlation between plateau luminosity and duration. The observed scatter in the correlation most likely originates in the spread of initial spin periods of the newly formed magnetar and provides an estimate of the maximum spin period of ˜35 ms (assuming a constant mass, efficiency and beaming across the GRB sample). Additionally, by combining the observed data and simulations, we show that the magnetar emission is most likely narrowly beamed and has ≲20 per cent efficiency in conversion of rotational energy from the magnetar into the observed plateau luminosity. The beaming angles and efficiencies obtained by this method are fully consistent with both predicted and observed values. We find that short GRBs and short GRBs with extended emission lie on the same correlation but are statistically inconsistent with being drawn from the same distribution as long GRBs; this is consistent with them having a wider beaming angle than long GRBs.
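
    For orientation, in the usual magnetar dipole spin-down picture (e.g. Zhang & Mészáros 2001) the plateau luminosity and duration scale with the dipole field strength B and initial spin period P roughly as

    $$ L_{\rm plateau} \propto B^{2} P^{-4}, \qquad T_{\rm plateau} \propto B^{-2} P^{2}, \qquad L_{\rm plateau}\,T_{\rm plateau} \propto P^{-2}, $$

    so the product of the two observables depends on the spin period alone (at fixed efficiency and beaming), which is why the scatter about the correlation can be read as a spread in initial spin periods.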

  4. Ability of the current global observing network to constrain N2O sources and sinks

    NASA Astrophysics Data System (ADS)

    Millet, D. B.; Wells, K. C.; Chaliyakunnel, S.; Griffis, T. J.; Henze, D. K.; Bousserez, N.

    2014-12-01

    The global observing network for atmospheric N2O combines flask and in-situ measurements at ground stations with sustained and campaign-based aircraft observations. In this talk we apply a new global model of N2O (based on GEOS-Chem) and its adjoint to assess the strengths and weaknesses of this network for quantifying N2O emissions. We employ an ensemble of pseudo-observation analyses to evaluate the relative constraints provided by ground-based (surface, tall tower) and airborne (HIPPO, CARIBIC) observations, and the extent to which variability (e.g. associated with pulsing or seasonality of emissions) not captured by the a priori inventory can bias the inferred fluxes. We find that the ground-based and HIPPO datasets each provide a stronger constraint on the distribution of global emissions than does the CARIBIC dataset on its own. Given appropriate initial conditions, we find that our inferred surface fluxes are insensitive to model errors in the stratospheric loss rate of N2O over the timescale of our analysis (2 years); however, the same is not necessarily true for model errors in stratosphere-troposphere exchange. Finally, we examine the a posteriori error reduction distribution to identify priority locations for future N2O measurements.

  5. Observationally-constrained estimates of aerosol optical depths (AODs) over East Asia via data assimilation techniques

    NASA Astrophysics Data System (ADS)

    Lee, K.; Lee, S.; Song, C. H.

    2015-12-01

    Aerosols not only affect climate directly by scattering and absorbing the incident solar radiation, but also perturb the radiation budget indirectly by influencing the microphysics and dynamics of clouds. Aerosols also have a significant adverse impact on human health. Given the importance of aerosols in climate, considerable research efforts have been made to quantify the amount of aerosols in the form of the aerosol optical depth (AOD). AOD is provided by ground-based aerosol networks such as the Aerosol Robotic NETwork (AERONET), and is derived from satellite measurements. However, these observational datasets have limited areal and temporal coverage. To compensate for the data gaps, there have been several studies to provide AOD without data gaps by assimilating observational data and model outputs. In this study, AODs over East Asia simulated with the Community Multi-scale Air Quality (CMAQ) model and derived from the Geostationary Ocean Color Imager (GOCI) observation are interpolated via different data assimilation (DA) techniques such as Cressman's method, Optimal Interpolation (OI), and Kriging for the period of the Distributed Regional Aerosol Gridded Observation Networks (DRAGON) Campaign (March-May 2012). Here, the interpolated results using the three DA techniques are validated intensively by comparison with AERONET AODs to determine the optimal DA method for providing the most reliable AODs over East Asia.
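
    Of the three DA techniques named above, Cressman's method is the simplest to write down: each grid value is nudged toward nearby observations with distance-dependent weights. The sketch below is illustrative only (hypothetical arguments; the study's OI and Kriging configurations, error covariances, and GOCI/CMAQ inputs are not reproduced).

    ```python
    # Single-pass Cressman correction of a background AOD field using point observations.
    import numpy as np

    def cressman(obs_xy, obs_val, grid_xy, bg_val, radius):
        """Correct background AODs bg_val at grid points grid_xy using observations.

        Simplification: the background at each observation site is approximated by the
        background at the grid point being corrected.
        """
        corrected = bg_val.copy()
        for k, (gx, gy) in enumerate(grid_xy):
            d2 = (obs_xy[:, 0] - gx) ** 2 + (obs_xy[:, 1] - gy) ** 2
            inside = d2 < radius ** 2
            if not inside.any():
                continue                         # no observation within the influence radius
            w = (radius ** 2 - d2[inside]) / (radius ** 2 + d2[inside])   # Cressman weights
            corrected[k] = bg_val[k] + np.sum(w * (obs_val[inside] - bg_val[k])) / np.sum(w)
        return corrected
    ```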

  6. Constraining Middle Atmospheric Moisture in GEOS-5 Using EOS-MLS Observations

    NASA Technical Reports Server (NTRS)

    Jin, Jianjun; Pawson, Steven; Wargan, Krzysztof; Livesey, Nathaniel

    2012-01-01

    Middle atmospheric water vapor plays an important role in climate and atmospheric chemistry. In the middle atmosphere, water vapor, after ozone and carbon dioxide, is an important radiatively active gas that impacts climate forcing and the energy balance. It is also the source of the hydroxyl radical (OH), whose abundances affect ozone and other constituents. The abundance of water vapor in the middle atmosphere is determined by upward transport of dehydrated air through the tropical tropopause layer, by the middle atmospheric circulation, by production from the photolysis of methane (CH4), and by other physical and chemical processes in the stratosphere and mesosphere. The Modern-Era Retrospective analysis for Research and Applications (MERRA) reanalysis with GEOS-5 did not assimilate any moisture observations in the middle atmosphere. The plan is to use such observations, available sporadically from research satellites, in future GEOS-5 reanalyses. An overview will be provided of the progress to date with assimilating the EOS-Aura Microwave Limb Sounder (MLS) moisture retrievals, alongside ozone and temperature, into GEOS-5. Initial results demonstrate that the MLS observations can significantly improve the middle atmospheric moisture field in GEOS-5, although this result depends on introducing a physically meaningful representation of background error covariances for middle atmospheric moisture into the system. High-resolution features in the new moisture field, and their relationships with ozone, will be examined in a two-year assimilation experiment with GEOS-5. Discussion will focus on how Aura MLS moisture observations benefit the analyses.

  7. Constraining the Sources and Sinks of Atmospheric Methane Using Stable Isotope Observations and Chemistry Climate Modeling

    NASA Astrophysics Data System (ADS)

    Feinberg, A.; Coulon, A.; Stenke, A.; Peter, T.

    2015-12-01

    Methane acts as both a greenhouse gas and a driver of atmospheric chemistry. There is a lack of consensus on the explanation for the atmospheric methane trend in recent years (1980-2010). High uncertainties are associated with the magnitudes of individual methane source and sink processes. Methane isotopes have the potential to distinguish between the different methane fluxes, as each flux is characterized by an isotopic signature. Methane emissions from each source category, including wetlands, rice paddies, biomass burning, industry, etc., are represented explicitly in the chemistry-climate model SOCOL. The model includes 48 methane tracers based on source type and geographical origin in order to track methane after it has been emitted. SOCOL simulations for the years 1980-2010 are performed in "nudged mode", so that model dynamics reflect observed meteorology. Available database estimates of the various surface emission fluxes are used as input to SOCOL. The model diagnostic methane tracers are compared to methane isotope observations from measurement networks. Inconsistencies between the model results and observations point to deficiencies in the available emission estimates or model sink processes. Because of their dependence on the OH sink, deuterated methane observations and methyl chloroform tracers are used to investigate the variability of OH mixing ratios in the model and the real world. The analysis examines the validity of the methane source and sink category estimates over the last 30 years.

  8. Constraining the Assimilation of SWOT Satellite Observations Using Hydraulic Geometry Relationships

    NASA Astrophysics Data System (ADS)

    Andreadis, K.; Mersel, M. K.; Durand, M. T.; Smith, L. C.; Alsdorf, D. E.

    2011-12-01

    The Surface Water and Ocean Topography (SWOT) satellite mission, planned for launch in 2019, will offer measurements of the spatial and temporal variability of surface water with unprecedented accuracy. These observations will include surface water elevation, slope, and river channel top width along with estimates of river discharge globally at a spatial resolution of about 50 m. One potential source of uncertainty for estimating discharge is the inability of SWOT to measure the baseflow depth, i.e. the depth of flow beneath the lowest water surface elevation observed during the mission lifetime. This study evaluates the potential of a data assimilation algorithm to reduce that uncertainty by estimating river channel bathymetry. A synthetic experiment is performed wherein a detailed hydraulic model is used to simulate river discharge and water surface elevations over two study areas: a 172 km reach in the middle Rio Grande River, and a 180 km reach in the Upper Mississippi River. These simulations are designated as "truth", and are then used to generate "virtual" SWOT observations with the correct orbital and error characteristics. Appropriate errors are added primarily to the "true" river channel bathymetry among other parameters (e.g. bank widths) to emulate data availability and accuracy globally for hydraulic modeling. Two assimilation techniques are evaluated that merge SWOT observations with a simple gradually-varied flow model to correct river bed topography: variational assimilation and a two-stage Ensemble Kalman Filter. Classic at-a-station hydraulic geometry theory, which posits the interrelationship of hydraulic characteristics as power functions of discharge, can be adapted to SWOT observations. Initial work has shown the potential value of these relationships for detecting in-channel versus overbank flow and for deducing a relationship between SWOT observables and local discharge. These relationships are used as additional constraints to the assimilation
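
    The hydraulic geometry relationships referred to above are conventionally written as power laws in discharge Q (Leopold and Maddock 1953); continuity then ties the exponents and coefficients together, which is what allows SWOT-observable width and depth changes to be related to discharge:

    $$ w = aQ^{b}, \quad d = cQ^{f}, \quad v = kQ^{m}, \qquad Q = w\,d\,v \;\Rightarrow\; ack = 1,\; b + f + m = 1. $$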

  9. Chemical Nature Of Titan’s Organic Aerosols Constrained from Spectroscopic and Mass Spectrometric Observations

    NASA Astrophysics Data System (ADS)

    Imanaka, Hiroshi; Cruikshank, D. P.

    2012-10-01

    The Cassini-Huygens observations greately extend our knowledge about Titan’s organic aerosols. The Cassini INMS and CAPS observations clearly demonstrate the formation of large organic molecules in the ionosphere [1, 2]. The VIMS and CIRS instruments have revealed spectral features of the haze covering the mid-IR and far-IR wavelengths [3, 4, 5, 6]. This study attempts to speculate the possible chemical nature of Titan’s aerosols by comparing the currently available observations with our laboratory study. We have conducted a series of cold plasma experiment to investigate the mass spectrometric and spectroscopic properties of laboratory aerosol analogs [7, 8]. Titan tholins and C2H2 plasma polymer are generated with cold plasma irradiations of N2/CH4 and C2H2, respectively. Laser desorption mass spectrum of the C2H2 plasma polymer shows a reasonable match with the CAPS positive ion mass spectrum. Furthermore, spectroscopic features of the the C2H2 plasma polymer in mid-IR and far-IR wavelegths qualitatively show reasonable match with the VIMS and CIRS observations. These results support that the C2H2 plasma polymer is a good candidate material for Titan’s aerosol particles at the altitudes sampled by the observations. We acknowledge funding supports from the NASA Cassini Data Analysis Program, NNX10AF08G, and from the NASA Exobiology Program, NNX09AM95G, and the Cassini Project. [1] Waite et al. (2007) Science 316, 870-875. [2] Crary et al. (2009) Planet. Space Sci. 57, 1847-1856. [3] Bellucci et al. (2009) Icarus 201, 198-216. [4] Anderson and Samuelson (2011) Icarus 212, 762-778. [5] Vinatier et al. (2010) Icarus 210, 852-866. [6] Vinatier et al. (2012) Icarus 219, 5-12. [7] Imanaka et al. (2004) Icarus 168, 344-366. [8] Imanaka et al. (2012) Icarus 218, 247-261.

  10. Constraining Methane Emissions from Natural Gas Production in Northeastern Pennsylvania Using Aircraft Observations and Mesoscale Modeling

    NASA Astrophysics Data System (ADS)

    Barkley, Z.; Davis, K.; Lauvaux, T.; Miles, N.; Richardson, S.; Martins, D. K.; Deng, A.; Cao, Y.; Sweeney, C.; Karion, A.; Smith, M. L.; Kort, E. A.; Schwietzke, S.

    2015-12-01

    Leaks in natural gas infrastructure release methane (CH4), a potent greenhouse gas, into the atmosphere. The estimated fugitive emission rate associated with the production phase varies greatly between studies, hindering our understanding of the natural gas energy efficiency. This study presents a new application of inverse methodology for estimating regional fugitive emission rates from natural gas production. Methane observations across the Marcellus region in northeastern Pennsylvania were obtained during a three week flight campaign in May 2015 performed by a team from the National Oceanic and Atmospheric Administration (NOAA) Global Monitoring Division and the University of Michigan. In addition to these data, CH4 observations were obtained from automobile campaigns during various periods from 2013-2015. An inventory of CH4 emissions was then created for various sources in Pennsylvania, including coalmines, enteric fermentation, industry, waste management, and unconventional and conventional wells. As a first-guess emission rate for natural gas activity, a leakage rate equal to 2% of the natural gas production was emitted at the locations of unconventional wells across PA. These emission rates were coupled to the Weather Research and Forecasting model with the chemistry module (WRF-Chem) and atmospheric CH4 concentration fields at 1km resolution were generated. Projected atmospheric enhancements from WRF-Chem were compared to observations, and the emission rate from unconventional wells was adjusted to minimize errors between observations and simulation. We show that the modeled CH4 plume structures match observed plumes downwind of unconventional wells, providing confidence in the methodology. In all cases, the fugitive emission rate was found to be lower than our first guess. In this initial emission configuration, each well has been assigned the same fugitive emission rate, which can potentially impair our ability to match the observed spatial variability
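
    Because the modeled enhancements scale essentially linearly with the assumed leak rate, the adjustment described above can be illustrated as a one-parameter least-squares fit; the sketch below uses synthetic numbers (not the WRF-Chem output or the aircraft data).

    ```python
    # Scale a first-guess (2 % of production) fugitive-emission simulation to match
    # observed CH4 enhancements; illustrative synthetic data only.
    import numpy as np

    rng = np.random.default_rng(1)
    sim_2pct = rng.gamma(2.0, 15.0, size=40)             # simulated enhancements for a 2 % leak rate (ppb)
    obs = 0.4 * sim_2pct + rng.normal(0, 3.0, size=40)   # pretend the true leak rate is 0.8 %

    # One-parameter least squares: obs ~ s * sim, so s = <sim, obs> / <sim, sim>
    s = np.dot(sim_2pct, obs) / np.dot(sim_2pct, sim_2pct)
    print(f"adjusted leak rate ~ {2.0 * s:.2f} % of production (first guess: 2 %)")
    ```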

  11. Constraining Methane Flux Estimates Using Atmospheric Observations of Methane and ¹³C in Methane

    NASA Astrophysics Data System (ADS)

    Mikaloff Fletcher, S. E.; Tans, P. P.; Miller, J. B.; Bruhwiler, L. M.

    2002-12-01

    Understanding the budget of methane is crucial to predicting climate change and managing earth's carbon reservoirs. Methane is responsible for approximately 15% of the anthropogenic greenhouse forcing and has a large impact on the oxidative capacity of Earth's atmosphere due to its reaction with hydroxyl radical. At present, many of the sources and sinks of methane are poorly understood due in part to the large spatial and temporal variability of the methane flux. Model simulations of methane mixing ratios using most process-based source estimates typically over-predict the latitudinal gradient of atmospheric methane relative to the observations; however, the specific source processes responsible for this discrepancy have not been identified definitively. The aim of this work is to use the isotopic signatures of the sources to attribute these discrepancies to a source process or group of source processes and create global and regional budget estimates that are in agreement with both the atmospheric observations of methane and ¹³C in methane. To this end, observations of isotopic ratios of ¹³C in methane and isotopic signatures of methane source processes are used in conjunction with an inverse model of the methane budget. Inverse modeling is a top-down approach which uses observations of trace gases in the atmosphere, an estimate of the spatial pattern of trace gas fluxes, and a model of atmospheric transport to estimate the sources and sinks. The atmospheric transport was represented by the TM3 three-dimensional transport model. The GLOBALVIEW 2001 methane observations were used along with flask measurements of ¹³C in methane at six of the CMDL-NOAA stations by INSTAAR. Initial results imply interesting differences from previous methane budget estimates. For example, the ¹³C isotope observations in methane call for an increase in southern hemisphere sources with a bacterial isotopic signature such as wetlands, rice paddies, termites, and ruminant animals. The
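
    The leverage of the isotope data comes from a simple flux-weighted mass balance: at approximate steady state the atmospheric δ¹³C reflects the emission-weighted mean of the source signatures plus the enrichment left behind by the kinetically fractionating sinks (a schematic relation, not the TM3 inversion itself),

    $$ \delta^{13}\mathrm{C}_{\rm atm} \;\approx\; \frac{\sum_i F_i\,\delta^{13}\mathrm{C}_i}{\sum_i F_i} \;+\; \varepsilon_{\rm sink}, $$

    so shifting emissions toward isotopically light ("bacterial") sources such as wetlands lowers the simulated atmospheric δ¹³C even when the total methane flux is unchanged.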

  12. A Bayesian observer model constrained by efficient coding can explain 'anti-Bayesian' percepts.

    PubMed

    Wei, Xue-Xin; Stocker, Alan A

    2015-10-01

    Bayesian observer models provide a principled account of the fact that our perception of the world rarely matches physical reality. The standard explanation is that our percepts are biased toward our prior beliefs. However, reported psychophysical data suggest that this view may be simplistic. We propose a new model formulation based on efficient coding that is fully specified for any given natural stimulus distribution. The model makes two new and seemingly anti-Bayesian predictions. First, it predicts that perception is often biased away from an observer's prior beliefs. Second, it predicts that stimulus uncertainty differentially affects perceptual bias depending on whether the uncertainty is induced by internal or external noise. We found that both model predictions match reported perceptual biases in perceived visual orientation and spatial frequency, and were able to explain data that have not been explained before. The model is general and should prove applicable to other perceptual variables and tasks. PMID:26343249

  13. A source representation of microseisms constrained by HV spectral ratio observations

    NASA Astrophysics Data System (ADS)

    Dreger, D.; Rhie, J.

    2006-12-01

    Microseisms are generated by pressure variations on the sea floor caused by incident and reflected ocean waves, and they are the dominant background noise at short periods. The observations of microseism wave fields in deep sedimentary basins (e.g., Santa Clara Valley) show that the maximum period of the horizontal-to-vertical (H/V) spectral ratio correlates with basin thickness. A similar correlation has been found in teleseismic arrival times and P-wave amplitude as well as local-earthquake S-wave relative amplification [Dolenc et al., 2005]. This observation implies that a study of the microseism wave field, combined with other seismic data sets, can probably be used to invert for the velocity structures of deep basins. To make this inversion possible, it is necessary to understand the excitation and propagation characteristics of microseisms. We will perform forward computations of microseism wave fields for source representations such as CLVDs and single forces with the USGS 3D velocity model. Various spatial extents as well as the frequency content of the source will be tested to match observed shifts in the dominant H/V spectral ratio. The optimal source representation of the microseisms will be the first step toward inversions for 3D seismic velocity structure in sedimentary basins using microseisms.
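
    The empirical link between the H/V peak period and basin thickness is usually rationalized with the quarter-wavelength resonance of the sediment column (a schematic relation, not the 3-D forward computation described above): for a layer of thickness H and average shear-wave velocity V_S over stiffer bedrock,

    $$ T_{0} \;\approx\; \frac{4H}{V_{S}}, $$

    so thicker basin fill shifts the dominant H/V period to longer values.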

  14. Toward observationally constrained high space and time resolution CO2 urban emission inventories

    NASA Astrophysics Data System (ADS)

    Maness, H.; Teige, V. E.; Wooldridge, P. J.; Weichsel, K.; Holstius, D.; Hooker, A.; Fung, I. Y.; Cohen, R. C.

    2013-12-01

    The spatial patterns of greenhouse gas (GHG) emission and sequestration are currently studied primarily by sensor networks and modeling tools that were designed for global and continental scale investigations of sources and sinks. In urban contexts, by design, there has been very limited investment in observing infrastructure, making it difficult to demonstrate that we have an accurate understanding of the mechanism of emissions or the ability to track processes causing changes in those emissions. Over the last few years, our team has built a new high-resolution observing instrument to address urban CO2 emissions, the BErkeley Atmospheric CO2 Observing Network (BEACON). The 20-node network is constructed on a roughly 2 km grid, permitting direct characterization of the internal structure of emissions within the San Francisco East Bay. Here we present a first assessment of BEACON's promise for evaluating the effectiveness of current and upcoming local emissions policy. Within the next several years, a variety of locally important changes are anticipated--including widespread electrification of the motor vehicle fleet and implementation of a new power standard for ships at the port of Oakland. We describe BEACON's expected performance for detecting these changes, based on results from regional forward modeling driven by a suite of projected inventories. We will further describe the network's current change detection capabilities by focusing on known high temporal frequency changes that have already occurred; examples include a week of significant freeway traffic congestion following the temporary shutdown of the local commuter rail (the Bay Area Rapid Transit system).

  15. Constraining magnetic fields morphologies using mid-IR polarization: observations and modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Li, Dan; Pantin, Eric; Telesco, Charles M.

    2016-01-01

    Polarization arises from aligned dust grains in magnetic fields, and thus the direction of polarization can trace the direction of B fields. We present the mid-IR imaging and spectropolarimetry observations made with the GTC's CanariCam of the Herbig Ae star WL 16. WL 16 is embedded in/behind the ρ Ophiuchus molecular cloud with visual extinction of ~31 mag. It exhibits large and extended (~900 AU) emission, which is believed to come from the emission of PAHs and very small dust grains. Uniform polarization vectors from imaging polarization and the absorption-dominated polarization profile from spectropolarimetry consistently indicate a uniform foreground magnetic field oriented at about 30 deg from the North. We also model the predicted polarization patterns expected to arise from different magnetic field morphologies, which can be distinguished by high-resolution observations. As an example, we present the mid-IR polarization modeling of AB Aur, a well-studied Herbig Ae star. We incorporate polarization from dichroic absorption, emission and scattering in the modeling. The observed polarization structures are well reproduced by two components: emissive polarization arising from a poloidal B field and scattering polarization by 0.01-1 μm dust grains.

  16. Role of Stratospheric Water Vapor in Global Warming from GCM Simulations Constrained by MLS Observation

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Stek, P. C.; Su, H.; Jiang, J. H.; Livesey, N. J.; Santee, M. L.

    2014-12-01

    Over the past century, global average surface temperature has warmed by about 0.16°C/decade, largely due to anthropogenic increases in well-mixed greenhouse gases. However, the trend in global surface temperatures has been nearly flat since 2000, raising questions about the drivers of climate change. Water vapor is a strong greenhouse gas in the atmosphere. Previous studies suggested that the sudden decrease of stratospheric water vapor (SWV) around 2000 may have contributed to the stall of global warming. Since 2004, the SWV observed by the Microwave Limb Sounder (MLS) on the Aura satellite has shown a slow recovery. The role of recent SWV variations in global warming has not been quantified. We employ a coupled atmosphere-ocean climate model, the NCAR CESM, to address this issue. It is found that the CESM underestimates the stratospheric water vapor by about 1 ppmv due to limited representation of the stratospheric dynamical and chemical processes important for water vapor variability. By nudging the modeled SWV to the MLS observation, we find that increasing SWV by 1 ppmv produces a robust surface warming of about 0.2°C in the global mean when the model reaches equilibrium. Conversely, the sudden drop of SWV from 2000 to 2004 would cause a global-mean surface cooling of about 0.08°C. On the other hand, imposing the observed linear trend of SWV based on the 10-year observation of MLS in the CESM yields a rather slow surface warming, about 0.04°C/decade. Our model experiments suggest that SWV contributes positively to the global surface temperature variation, although it may not be the dominant factor that drives the recent global warming hiatus. Additional sensitivity experiments show that the impact of SWV on surface climate is mostly governed by the SWV amount at 100 hPa in the tropics. Furthermore, the atmospheric model simulations driven by observed sea surface temperature (SST) show that the inter-annual variation of SWV follows that of SST

  17. Fermi Large Area Telescope observation of high-energy solar flares: constraining emission scenarios

    NASA Astrophysics Data System (ADS)

    Omodei, Nicola; Pesce-Rollins, Melissa; Petrosian, Vahe; Liu, Wei; Rubio da Costa, Fatima

    2015-08-01

    The Fermi Large Area Telescope (LAT) is the most sensitive instrument ever deployed in space for observing gamma-ray emission >100 MeV. This has also been demonstrated by its detection of quiescent gamma-ray emission from pions produced by cosmic-ray protons interacting in the solar atmosphere, and from cosmic-ray electron interactions with solar optical photons. The Fermi LAT has also detected high-energy gamma-ray emission associated with GOES M-class and X-class X-ray flares, each accompanied by a coronal mass ejection and a solar energetic particle event increasing the number of detected solar flares by almost a factor of 10 with respect to previous space observations. During the impulsive phase, gamma rays with energies up to several hundreds of MeV have been recorded by the LAT. Emission up to GeV energies lasting several hours after the flare has also been recorded by the LAT. Of particular interest are the recent detections of two solar flares whose position behind the limb was confirmed by the STEREO-B satellite. While gamma-ray emission up to tens of MeV resulting from proton interactions has been detected before from occulted solar flares, the significance of these particular events lies in the fact that these are the first detections of >100 MeV gamma-ray emission from footpoint-occulted flares. We will present the Fermi-LAT, RHESSI and STEREO observations of these flares and discuss the various emission scenarios for these sources.

  18. Observationally-constrained carbonaceous aerosol source estimates for the Pearl River Delta area of China

    NASA Astrophysics Data System (ADS)

    Li, N.; Fu, T.-M.; Cao, J. J.; Zheng, J. Y.; He, Q. Y.; Long, X.; Zhao, Z. Z.; Cao, N. Y.; Fu, J. S.; Lam, Y. F.

    2015-11-01

    We simulated elemental carbon (EC) and organic carbon (OC) aerosols over the Pearl River Delta (PRD) area of China and compared the results to seasonal surface measurements, with the aim of quantifying carbonaceous aerosol sources from a "top-down" perspective. Our regional model was driven by current-best estimates of PRD EC (39.5 Gg C yr-1) and OC (32.8 Gg C yr-1) emissions and included updated secondary organic aerosol formation pathways. The simulated annual mean EC and OC concentrations were 4.0 and 7.7 μg C m-3, respectively, lower than the observed annual mean EC and OC concentrations (4.5 and 13.1 μg C m-3, respectively). We used multiple regression to match the simulated EC against seasonal mean observations. The resulting top-down estimate for EC emission in the PRD area was 52.9 ± 8.0 Gg C yr-1. We estimated the OC emission in the PRD area to be 60.2 ± 10.3 Gg C yr-1, based on the top-down EC emission estimate and the primary OC / EC ratios derived from bottom-up statistics. Using these top-down emission estimates, the simulated average annual mean EC and OC concentrations were improved to 4.4 and 9.5 μg C m-3, respectively, closer to the observations. Secondary sources accounted for 42 % of annual mean surface OC in our top-down simulations, with biogenic VOCs being the most important precursors.
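
    The "multiple regression" step can be illustrated with a toy version: scale the simulated seasonal-mean EC contributions of a few tagged source groups so that they best match the observed seasonal means (all numbers below are hypothetical, not the paper's model output).

    ```python
    # Toy top-down scaling of tagged EC source contributions by least squares.
    import numpy as np

    # rows = seasons; columns = simulated EC from two hypothetical source groups (ug C m-3)
    sim = np.array([[2.0, 0.5],
                    [1.0, 1.5],
                    [1.5, 1.0],
                    [2.5, 0.5]])
    obs = np.array([3.3, 3.0, 3.2, 3.9])      # observed seasonal-mean EC (ug C m-3)

    scales, *_ = np.linalg.lstsq(sim, obs, rcond=None)
    print("per-source scaling factors:", np.round(scales, 2))
    # top-down emissions = bottom-up emissions of each group multiplied by these factors
    ```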

  19. PANCHROMATIC OBSERVATIONS OF THE TEXTBOOK GRB 110205A: CONSTRAINING PHYSICAL MECHANISMS OF PROMPT EMISSION AND AFTERGLOW

    SciTech Connect

    Zheng, W.; Shen, R. F.; Sakamoto, T.; Beardmore, A. P.; De Pasquale, M.; Wu, X. F.; Zhang, B.; Gorosabel, J.; Urata, Y.; Sugita, S.; Pozanenko, A.; Sahu, D. K.; Im, M.; Ukwatta, T. N.; Andreev, M.; Klunko, E. E-mail: rfshen@astro.utoronto.ca; and others

    2012-06-01

    We present a comprehensive analysis of a bright, long-duration (T_90 ≈ 257 s) GRB 110205A at redshift z = 2.22. The optical prompt emission was detected by Swift/UVOT, ROTSE-IIIb, and BOOTES telescopes when the gamma-ray burst (GRB) was still radiating in the γ-ray band, with optical light curve showing correlation with γ-ray data. Nearly 200 s of observations were obtained simultaneously from optical, X-ray, to γ-ray (1 eV to 5 MeV), which makes it one of the exceptional cases to study the broadband spectral energy distribution during the prompt emission phase. In particular, we clearly identify, for the first time, an interesting two-break energy spectrum, roughly consistent with the standard synchrotron emission model in the fast cooling regime. Shortly after prompt emission (≈1100 s), a bright (R = 14.0) optical emission hump with very steep rise (α ≈ 5.5) was observed, which we interpret as the reverse shock (RS) emission. It is the first time that the rising phase of an RS component has been closely observed. The full optical and X-ray afterglow light curves can be interpreted within the standard reverse shock (RS) + forward shock (FS) model. In general, the high-quality prompt and afterglow data allow us to apply the standard fireball model to extract valuable information, including the radiation mechanism (synchrotron), radius of prompt emission (R_GRB ≈ 3 × 10^13 cm), initial Lorentz factor of the outflow (Γ_0 ≈ 250), the composition of the ejecta (mildly magnetized), the collimation angle, and the total energy budget.

  20. Panchromatic Observations of the Textbook GRB 110205A: Constraining Physical Mechanisms of Prompt Emission and Afterglow

    NASA Technical Reports Server (NTRS)

    Zheng, W.; Shen, R. F.; Sakamoto, T.; Beardmore, A. P.; De Pasquale, M.; Wu, X. F.; Gorosabel, J.; Urata, Y.; Sugita, S.; Zhang, B.; Pozanenko, A.; Nissinen, M.; Sahu, D. K.; Im, M.; Ukwatta, T. N.; Andreev, M.; Klunko, E.; Volnova, A.; Akerlof, C. W.; Anto, P.; Barthelmy, S. D.; Breeveld, A.; Carsenty, U.; Gehrels, N.; Sonbas, E.

    2011-01-01

    We present a comprehensive analysis of a bright, long duration (T(sub 90) approx. 257 s) GRB 110205A at redshift z = 2.22. The optical prompt emission was detected by Swift/UVOT, ROTSE-IIIb and BOOTES telescopes when the GRB was still radiating in the gamma-ray band. Thanks to its long duration, nearly 200 s of observations were obtained simultaneously from optical, X-ray to gamma-ray (1 eV - 5 MeV), which makes it one of the exceptional cases to study the broadband spectral energy distribution across 6 orders of magnitude in energy during the prompt emission phase. In particular, by fitting the time resolved prompt spectra, we clearly identify, for the first time, an interesting two-break energy spectrum, roughly consistent with the standard GRB synchrotron emission model in the fast cooling regime. Although the prompt optical emission is brighter than the extrapolation of the best fit X/gamma-ray spectra, it traces the gamma-ray light curve shape, suggesting a relation to the prompt high energy emission. The synchrotron + synchrotron self-Compton (SSC) scenario is disfavored by the data, but the models invoking a pair of internal shocks or having two emission regions can interpret the data well. Shortly after prompt emission (approx. 1100 s), a bright (R = 14.0) optical emission hump with very steep rise (alpha approx. 5.5) was observed, which we interpret as the emission from the reverse shock. It is the first time that the rising phase of a reverse shock component has been closely observed.

  1. Constraining the temperature history of the past millennium using early instrumental observations

    NASA Astrophysics Data System (ADS)

    Brohan, P.; Allan, R.; Freeman, E.; Wheeler, D.; Wilkinson, C.; Williamson, F.

    2012-05-01

    The current assessment that twentieth-century global temperature change is unusual in the context of the last thousand years relies on estimates of temperature changes from natural proxies (tree-rings, ice-cores etc.) and climate model simulations. Confidence in such estimates is limited by difficulties in calibrating the proxies and systematic differences between proxy reconstructions and model simulations. As the difference between the estimates extends into the relatively recent period of the early nineteenth century it is possible to compare them with a reliable instrumental estimate of the temperature change over that period, provided that enough early thermometer observations, covering a wide enough expanse of the world, can be collected. One organisation which systematically made observations and collected the results was the English East-India Company (EEIC), and their archives have been preserved in the British Library. Inspection of those archives revealed 900 log-books of EEIC ships containing daily instrumental measurements of temperature and pressure, and subjective estimates of wind speed and direction, from voyages across the Atlantic and Indian Oceans between 1789 and 1834. Those records have been extracted and digitised, providing 273 000 new weather records offering an unprecedentedly detailed view of the weather and climate of the late eighteenth and early nineteenth centuries. The new thermometer observations demonstrate that the large-scale temperature response to the Tambora eruption and the 1809 eruption was modest (perhaps 0.5 °C). This provides a powerful out-of-sample validation for the proxy reconstructions - supporting their use for longer-term climate reconstructions. However, some of the climate model simulations in the CMIP5 ensemble show much larger volcanic effects than this - such simulations are unlikely to be accurate in this respect.

  2. Constraining the temperature history of the past millennium using early instrumental observations

    NASA Astrophysics Data System (ADS)

    Brohan, P.; Allan, R.; Freeman, E.; Wheeler, D.; Wilkinson, C.; Williamson, F.

    2012-10-01

    The current assessment that twentieth-century global temperature change is unusual in the context of the last thousand years relies on estimates of temperature changes from natural proxies (tree-rings, ice-cores, etc.) and climate model simulations. Confidence in such estimates is limited by difficulties in calibrating the proxies and systematic differences between proxy reconstructions and model simulations. As the difference between the estimates extends into the relatively recent period of the early nineteenth century it is possible to compare them with a reliable instrumental estimate of the temperature change over that period, provided that enough early thermometer observations, covering a wide enough expanse of the world, can be collected. One organisation which systematically made observations and collected the results was the English East India Company (EEIC), and their archives have been preserved in the British Library. Inspection of those archives revealed 900 log-books of EEIC ships containing daily instrumental measurements of temperature and pressure, and subjective estimates of wind speed and direction, from voyages across the Atlantic and Indian Oceans between 1789 and 1834. Those records have been extracted and digitised, providing 273 000 new weather records offering an unprecedentedly detailed view of the weather and climate of the late eighteenth and early nineteenth centuries. The new thermometer observations demonstrate that the large-scale temperature response to the Tambora eruption and the 1809 eruption was modest (perhaps 0.5 °C). This provides an out-of-sample validation for the proxy reconstructions - supporting their use for longer-term climate reconstructions. However, some of the climate model simulations in the CMIP5 ensemble show much larger volcanic effects than this - such simulations are unlikely to be accurate in this respect.

  3. Constraining friction laws by experimental observations and numerical simulations of various rupture modes

    NASA Astrophysics Data System (ADS)

    Lu, X.; Lapusta, N.; Rosakis, A. J.

    2006-12-01

    Several different types of friction laws, such as the linear slip-weakening law and variants of rate- and state-dependent friction laws, are widely used in earthquake modeling. It is important to understand how much complexity one needs to include in a friction law to properly capture the dynamics of frictional rupture. Observations suggest that earthquake ruptures propagate as slip pulses (Heaton, 1990). In the absence of local heterogeneities and the bimaterial effect, only one mechanism, namely strong rate-weakening friction, is shown, theoretically and numerically, to be capable of generating pulses on homogeneous interfaces separating two identical materials. We have observed pulses in our recent experiments designed to reproduce such a setting (Rosakis, Lu, Lapusta, AGU, 2006). By exploring experimental parameter space, we have identified different dynamic rupture modes including pulse-like, crack-like, and mixed modes. This suggests that rate weakening may play an important role in rupture dynamics. The systematic transition between rupture modes in the experiments is consistent with the theoretical and numerical study of Zheng and Rice (1998), who studied the behavior of rate-weakening interfaces. They concluded that whether strong rate weakening results in a pulse-like or crack-like behavior depends on the combination of two parameters: the level of prestress before rupture propagation and the amount of rate weakening on the fault. If we use Dieterich-Ruina rate-and-state friction laws with enhanced rate weakening at high slip rates, as appropriate for flash heating, to describe frictional properties of Homalite, use reasonable friction parameters motivated by previous studies, and apply the Zheng and Rice analysis, we can qualitatively explain the rupture modes observed in experiments. Our current work is focused on modeling the experimental setup numerically to confirm that one indeed requires rate dependence of friction to reproduce experimental results. This
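
    For reference, the Dieterich-Ruina rate-and-state law mentioned above writes the frictional strength in terms of the slip rate V and a state variable θ (aging-law form),

    $$ \tau = \sigma\!\left[f_{0} + a\ln\frac{V}{V_{0}} + b\ln\frac{V_{0}\theta}{D_{c}}\right], \qquad \dot{\theta} = 1 - \frac{V\theta}{D_{c}}, $$

    and the enhanced weakening at coseismic slip rates (e.g. flash heating) is commonly represented by letting the steady-state friction drop toward a lower value above a characteristic weakening velocity; the exact form adopted for Homalite is a modeling choice not specified in this abstract.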

  4. Convective thinning of the lithosphere: a model constrained by geoid observations

    NASA Astrophysics Data System (ADS)

    Dalloubeix, C.; Fleitout, L.

    1989-11-01

    Geoid observations in the oceans suggest that lithospheric thinning is effected by small-scale convection within hot-spot material trapped in dimples a few tens of kilometers deep at the base of the lithosphere. Significant thinning can occur within 5 Ma if the viscosity of the convecting material is 10^16 Pa s. Partial melting can enhance considerably the vigour of convection and the same rate of lithospheric thinning is obtained for a viscosity about five times higher. These results are derived from convection models with a realistic temperature-dependent rheology using the mean field approximation.

  5. Observation-constrained Estimation of Aerosol Climate Impacts over S Asia

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Kotamarthi, V. R.; Jefferson, A.; Wilcox, E. M.; Bender, F.; Pistone, K.; Praveen, P. S.; Thomas, R. M.; Ramanathan, V.

    2012-12-01

    Climate impacts of elevated aerosols over S. Asia have been studied extensively. Despite different methods employed and uncertainties, one clear message is that these aerosols have a large impact on the regional energy balance. However, uncertainty in the elevated aerosol absorption, as well as poor fidelity in model representations of aerosol-cloud interactions contribute to the discrepancies in quantifying the aerosol influences on monsoon circulation and rainfall. The main goal of this study is to examine the latitudinal heating gradient and the aerosol impact on the hydrological cycle during the pre-monsoon, with observational constraints on the aerosol vertical distribution. We run a 12-km regional climate model (WRF-Chem) driven by the NCEP analysis data from August 2011 to March 2012. During this time period, the ground-based profiling of aerosol extinction, cloud liquid water, water vapor, and temperature were taken at Nainital (29.38°N, 79.45°E) as part of the Ganges Valley Experiment (GVAX). It is the first time that such vertical profiling data sets are available in the northern Indian subcontinent for such a long period. In the pre-monsoon season (Feb-Mar), the regional model simulations show good agreement in the aerosol optical depth (AOD; 0.1~0.2) and black carbon (BC; ~0.8 ug/m3) concentrations compared with the surface observations at Nainital. The observed diurnal variation in BC concentration, peaking in the afternoon and lowering at night, is also captured by the model as a result of the thermal convection from the polluted valley. The simulated OC/BC ratio is about 2~4 near the surface, which is lower than observations, implying that we may underestimate the secondary organic formation from the biomass burning or biogenic sources. Spectral measurements of aerosol absorption will be used to investigate the absorption of OC in the UV and visible bands. During this time, surface and in situ profiling of aerosols and clouds were also made during

  6. A NEW METHOD TO CONSTRAIN SUPERNOVA FRACTIONS USING X-RAY OBSERVATIONS OF CLUSTERS OF GALAXIES

    SciTech Connect

    Bulbul, Esra; Smith, Randall K.; Loewenstein, Michael

    2012-07-01

    Supernova (SN) explosions enrich the intracluster medium (ICM) both by creating and dispersing metals. We introduce a method to measure the number of SNe and relative contribution of Type Ia supernovae (SNe Ia) and core-collapse supernovae (SNe cc) by directly fitting X-ray spectral observations. The method has been implemented as an XSPEC model called snapec. snapec utilizes a single-temperature thermal plasma code (apec) to model the spectral emission based on metal abundances calculated using the latest SN yields from SN Ia and SN cc explosion models. This approach provides a self-consistent single set of uncertainties on the total number of SN explosions and relative fraction of SN types in the ICM over the cluster lifetime by directly allowing these parameters to be determined by SN yields provided by simulations. We apply our approach to XMM-Newton European Photon Imaging Camera (EPIC), Reflection Grating Spectrometer (RGS), and 200 ks simulated Astro-H observations of a cooling flow cluster, A3112. We find that various sets of SN yields present in the literature produce an acceptable fit to the EPIC and RGS spectra of A3112. We infer that 30.3% ± 5.4% to 37.1% ± 7.1% of the total SN explosions are SNe Ia, and the total number of SN explosions required to create the observed metals is in the range of (1.06 ± 0.34) × 10^9 to (1.28 ± 0.43) × 10^9, from snapec fits to RGS spectra. These values may be compared to the enrichment expected based on well-established empirically measured SN rates per star formed. The proportions of SNe Ia and SNe cc inferred to have enriched the ICM in the inner 52 kpc of A3112 is consistent with these specific rates, if one applies a correction for the metals locked up in stars. At the same time, the inferred level of SN enrichment corresponds to a star-to-gas mass ratio that is several times greater than the 10% estimated globally for clusters in the A3112 mass range.

  7. A New Method to Constrain Supernova Fractions Using X-ray Observations of Clusters of Galaxies

    NASA Technical Reports Server (NTRS)

    Bulbul, Esra; Smith, Randall K.; Loewenstein, Michael

    2012-01-01

    Supernova (SN) explosions enrich the intracluster medium (ICM) both by creating and dispersing metals. We introduce a method to measure the number of SNe and relative contribution of Type Ia supernovae (SNe Ia) and core-collapse supernovae (SNe cc) by directly fitting X-ray spectral observations. The method has been implemented as an XSPEC model called snapec. snapec utilizes a single-temperature thermal plasma code (apec) to model the spectral emission based on metal abundances calculated using the latest SN yields from SN Ia and SN cc explosion models. This approach provides a self-consistent single set of uncertainties on the total number of SN explosions and relative fraction of SN types in the ICM over the cluster lifetime by directly allowing these parameters to be determined by SN yields provided by simulations. We apply our approach to XMM-Newton European Photon Imaging Camera (EPIC), Reflection Grating Spectrometer (RGS), and 200 ks simulated Astro-H observations of a cooling flow cluster, A3112. We find that various sets of SN yields present in the literature produce an acceptable fit to the EPIC and RGS spectra of A3112. We infer that 30.3% plus or minus 5.4% to 37.1% plus or minus 7.1% of the total SN explosions are SNe Ia, and the total number of SN explosions required to create the observed metals is in the range of (1.06 plus or minus 0.34) x 10(exp 9) to (1.28 plus or minus 0.43) x 10(exp 9), from snapec fits to RGS spectra. These values may be compared to the enrichment expected based on well-established empirically measured SN rates per star formed. The proportions of SNe Ia and SNe cc inferred to have enriched the ICM in the inner 52 kiloparsecs of A3112 is consistent with these specific rates, if one applies a correction for the metals locked up in stars. At the same time, the inferred level of SN enrichment corresponds to a star-to-gas mass ratio that is several times greater than the 10% estimated globally for clusters in the A3112 mass range.

  8. Using Surface Observations to Constrain the Direction and Magnitude of Mantle Flow Beneath Western North America

    NASA Astrophysics Data System (ADS)

    Holt, W. E.; Silver, P. G.

    2001-12-01

    While the motions of the surface tectonic plates are well determined, the accompanying horizontal mantle flow is not. Observations of surface deformation (GPS velocities and Quaternary fault slip rates) and upper mantle seismic anisotropy are combined for the first time, to provide a direct estimate of this flow field. We apply our investigation to western North America where seismic tomography shows a relatively thin lithosphere. Here the likely source of shear wave anisotropy results from a deformation fabric associated with the differential horizontal motion between the base of the lithosphere and the underlying mantle. For a vertically propagating shear wave recorded at a single station, and for mantle strains of order unity, the fast polarization direction, φ, of a split shear wave will be parallel to the direction of progressive simple shear, defined by this differential motion between lithosphere and underlying mantle. If the motion of the overlying lithosphere is known both within and across a plate boundary zone, such as western North America, then the direction and magnitude of mantle flow beneath the plate boundary zone can be uniquely determined with three or more observations of fast polarization directions. Within the Pacific-North American Plate boundary zone in western North America we find that the mantle velocity is 5.0+/-1.5 cm/yr and directed E-NE in a hotspot frame, nearly opposite to the direction of North American plate motion (WSW). The flow is only weakly coupled to the motion of the surface plates, producing a weak drag force. This flow field is most likely due to mantle density heterogeneity associated with the sinking of the old Farallon slab beneath North America. The last few decades have seen the development of two basically incompatible views of the plate-mantle system. The tectonophysical view assumes effective decoupling between the plate and a stationary mantle by a well developed asthenosphere. The plates are essentially 'self

  9. Constraining hot plasma in a non-flaring solar active region with FOXSI hard X-ray observations

    NASA Astrophysics Data System (ADS)

    Ishikawa, Shin-nosuke; Glesener, Lindsay; Christe, Steven; Ishibashi, Kazunori; Brooks, David H.; Williams, David R.; Shimojo, Masumi; Sako, Nobuharu; Krucker, Säm

    2014-12-01

    We present new constraints on the high-temperature emission measure of a non-flaring solar active region using observations from the recently flown Focusing Optics X-ray Solar Imager (FOXSI) sounding rocket payload. FOXSI has performed the first focused hard X-ray (HXR) observation of the Sun in its first successful flight on 2012 November 2. Focusing optics, combined with small strip detectors, enable high-sensitivity observations with respect to previous indirect imagers. This capability, along with the sensitivity of the HXR regime to high-temperature emission, offers the potential to better characterize high-temperature plasma in the corona as predicted by nanoflare heating models. We present a joint analysis of the differential emission measure (DEM) of active region 11602 using coordinated observations by FOXSI, Hinode/XRT, and Hinode/EIS. The Hinode-derived DEM predicts significant emission measure between 1 MK and 3 MK, with a peak in the DEM predicted at 2.0-2.5 MK. The combined XRT and EIS DEM also shows emission from a smaller population of plasma above 8 MK. This is contradicted by FOXSI observations that significantly constrain emission above 8 MK. This suggests that the Hinode DEM analysis has larger uncertainties at higher temperatures and that > 8 MK plasma above an emission measure of 3 × 10^44 cm^-3 is excluded in this active region.

  10. Observation-constrained modeling of the ionospheric impact of negative sprites

    NASA Astrophysics Data System (ADS)

    Liu, Ningyu; Boggs, Levi D.; Cummer, Steven A.

    2016-03-01

    This paper reports observation and modeling of five negative sprites occurring above two Florida thunderstorms. The sprites were triggered by unusual types of negative cloud-to-ground (CG) lightning discharges with impulse charge moment change ranging from 600 to 1300 C km and charge transfer characterized by a timescale of 0.1-0.2 ms. The negative sprite typically consists of a few generally vertical elements that each contain a bright core and dimmer streamers extending from the core in both downward and upward directions. Modeling results using the measured charge moment change waveforms indicate that the lower ionosphere was significantly modified by the CGs and the lower ionospheric density might have been increased by nearly 4 orders of magnitude due to the most intense CG. Finally, streamer modeling results show that the ionospheric inhomogeneities produced by atmospheric gravity waves can initiate negative sprite streamers, assuming that they can modulate the ionization coefficient.
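
    The quantity used to characterize these discharges, the impulse charge moment change, is just the time integral of the current-moment waveform. A minimal sketch with a synthetic waveform (the decay constant and amplitude are assumptions, not the measured data):

```python
import numpy as np

# The impulse charge moment change is the time integral of the current moment
# M(t) = I(t) * h (A km).  The waveform below is synthetic, not a measured one.
dt = 1e-5                              # sampling interval [s]
t = np.arange(0.0, 2e-3, dt)           # 2 ms record
tau = 0.15e-3                          # ~0.15 ms charge-transfer timescale (assumed)
M = 8e6 * np.exp(-t / tau)             # current moment [A km] (synthetic)

cmc = np.cumsum(M) * dt                # running charge moment change [C km]
print("impulse charge moment change after 2 ms: %.0f C km" % cmc[-1])
```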

  11. Using Two-Ribbon Flare Observations and MHD Simulations to Constrain Flare Properties

    NASA Astrophysics Data System (ADS)

    Kazachenko, Maria D.; Lynch, Benjamin J.; Welsch, Brian

    2016-05-01

    Flare ribbons are emission structures that are frequently observed during flares in transition-region and chromospheric radiation. These typically straddle a polarity inversion line (PIL) of the radial magnetic field at the photosphere and move apart as the flare progresses. The ribbon flux - the amount of unsigned photospheric magnetic flux swept out by flare ribbons - is thought to be related to the amount of coronal magnetic reconnection, and hence provides a key diagnostic for understanding the physical processes at work in flares and CMEs. Previous measurements of the magnetic flux swept out by flare ribbons required time-consuming co-alignment between magnetograph and intensity data from different instruments, explaining why those studies only analyzed, at most, a few events. The launch of the Helioseismic and Magnetic Imager (HMI) and the Atmospheric Imaging Assembly (AIA), both aboard the Solar Dynamics Observatory (SDO), presented a rare opportunity to compile a much larger sample of flare-ribbon events than could readily be assembled before. We created a dataset of 363 events containing both flare ribbon positions and fluxes, as a function of time, for all C9-class and greater flares within 45 degrees of disk center observed by SDO from June 2010 until April 2015. For this purpose, we used vector magnetograms (2D magnetic field maps) from HMI and UV images from AIA. A critical problem with using unprocessed AIA data is the existence of spurious intensities associated with strong flare emission, most notably "blooming" (spurious smearing of saturated signal into neighboring pixels, often in streaks). To overcome this difficulty, we have developed an algorithmic procedure that effectively excludes artifacts like blooming. We present our database and compare statistical properties of flare ribbons, e.g. the evolution of ribbon reconnection fluxes, reconnection flux rates, and vertical currents, with the properties from MHD simulations.
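
    The ribbon-flux measurement itself amounts to summing unsigned radial flux over the cumulative set of pixels that have brightened in the UV images, after discarding likely-bloomed pixels. A minimal sketch with synthetic arrays (the threshold, saturation level, and pixel scale are assumptions, and this is not the authors' algorithm):

```python
import numpy as np

# Accumulate all pixels that ever brighten above a threshold in the UV images,
# exclude saturated pixels (a crude stand-in for blooming removal), and sum
# |Br| * pixel_area over the swept mask.  All arrays below are synthetic.
ny, nx, nt = 128, 128, 20
rng = np.random.default_rng(0)
uv = rng.random((nt, ny, nx)) * 500.0            # UV intensity cube [DN]
br = rng.normal(0.0, 150.0, (ny, nx))            # radial field map [G]
pixel_area = (0.5 * 725e5) ** 2                  # 0.5 arcsec ~ 363 km, in cm, squared

threshold = 450.0                                # ribbon brightening threshold [DN]
saturation = 16000.0                             # exclude saturated (bloomed) pixels

swept = np.zeros((ny, nx), dtype=bool)
flux_vs_time = []
for k in range(nt):
    frame = uv[k]
    good = frame < saturation                    # drop likely-bloomed pixels
    swept |= (frame > threshold) & good          # cumulative ribbon mask
    flux_vs_time.append(np.sum(np.abs(br[swept])) * pixel_area)   # Mx (G cm^2)

print("ribbon flux at last step: %.2e Mx" % flux_vs_time[-1])
```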

  12. Coupled Decadal Variability in the North Pacific: An Observationally-Constrained Idealized Model

    NASA Astrophysics Data System (ADS)

    Qiu, B.; Schneider, N.; Chen, S.

    2006-12-01

    Air-sea coupled variability is investigated in this study by focusing on the observed sea surface temperature signals in the Kuroshio Extension (KE) region of 32°--38°N and 142°E--180°. This region corresponds to where both the oceanic circulation variability and the heat exchange variability across the air-sea interface are the largest in the midlatitude North Pacific. SST variability in the KE region has a dominant timescale of ~ 10 yr and this decadal variation is caused largely by the regional, wind-induced sea surface height changes that represent the lateral migration and strengthening/weakening of the KE jet. The importance of the air-sea coupling in influencing the KE jet is explored by dividing the large-scale wind forcing into a component associated with the intrinsic atmospheric variability and a component induced by the SST changes in the KE region. The latter signals are extracted from the NCEP-NCAR reanalysis data using a lagged correlation analysis. In the absence of the SST feedback, the intrinsic atmospheric forcing enhances the decadal and longer timescale SST variance through oceanic advection, but fails to capture the observed decadal spectral peak. When the SST feedback is present, a warm (cold) KE SST anomaly works to generate a positive (negative) wind stress curl in the eastern North Pacific basin, resulting in negative (positive) local SSH anomalies through Ekman divergence (convergence). As these wind-forced SSH anomalies propagate into the KE region in the west, they shift the KE jet and alter the sign of the pre-existing SST anomalies. Given the spatial pattern of the SST-induced wind stress curl forcing, the optimal coupling in the midlatitude North Pacific occurs at the period of ~ 10 yr, slightly longer than the basin crossing time of the baroclinic Rossby waves along the KE latitude.

  13. Constraining GRB as Source for UHE Cosmic Rays through Neutrino Observations

    NASA Astrophysics Data System (ADS)

    Chen, P.

    2013-07-01

    The origin of ultra-high energy cosmic rays (UHECR) has been widely regarded as one of the major questions at the frontiers of particle astrophysics. Gamma-ray bursts (GRB), the most violent explosions in the universe second only to the Big Bang, have been a popular candidate site for UHECR production. The recent IceCube report on the non-observation of GRB-induced neutrinos therefore attracts wide attention. This dilemma requires a resolution: either the assumption that GRBs are UHECR accelerators must be abandoned, or the expected GRB-induced neutrino yield was wrong. It has been pointed out that IceCube has overestimated the neutrino flux at the GRB site by a factor of ~5. In this paper we point out that, in addition to the issue of neutrino production at the source, neutrino oscillation and possible neutrino decay during the flight from GRB to Earth should further reduce the detectability for IceCube, which is most sensitive to the muon-neutrino flavor as far as point-source identification is concerned. Specifically, neutrino oscillation will reduce the muon-neutrino fraction from 2/3 per neutrino at the GRB source to 1/3 at Earth, while neutrino decay, if it exists and under the assumption of a normal hierarchy of mass eigenstates, would result in a further reduction of the muon-neutrino fraction to 1/8. With these in mind, we note that there have been efforts in recent years to pursue other types of neutrino telescopes based on the Askaryan effect, which can in principle observe and distinguish all three flavors with comparable sensitivities. Such a new approach may therefore be complementary to IceCube in shedding more light on this cosmic accelerator question.
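
    The flavor-ratio arithmetic quoted above follows from averaged oscillation probabilities built from the PMNS matrix. The sketch below assumes approximate mixing angles with zero CP phase and treats the decay case crudely by keeping only the lightest mass eigenstate; the exact numbers depend on those assumptions.

```python
import numpy as np

# Approximate mixing angles (assumed values; CP phase set to zero for simplicity).
th12, th23, th13 = np.radians([33.5, 45.0, 8.5])
s12, c12 = np.sin(th12), np.cos(th12)
s23, c23 = np.sin(th23), np.cos(th23)
s13, c13 = np.sin(th13), np.cos(th13)

# PMNS matrix, rows = flavors (e, mu, tau), columns = mass states (1, 2, 3), delta = 0.
U = np.array([
    [ c12 * c13,                    s12 * c13,                   s13      ],
    [-s12 * c23 - c12 * s23 * s13,  c12 * c23 - s12 * s23 * s13, s23 * c13],
    [ s12 * s23 - c12 * c23 * s13, -c12 * s23 - s12 * c23 * s13, c23 * c13],
])

# Averaged oscillation probabilities P(alpha -> beta) = sum_i |U_ai|^2 |U_bi|^2.
P = (U**2) @ (U**2).T
source = np.array([1 / 3, 2 / 3, 0.0])      # (nu_e, nu_mu, nu_tau) from pion/muon decay
print("at Earth (e, mu, tau):", np.round(source @ P, 3))

# Crude decay scenario: only the lightest mass state (nu_1, normal ordering) survives,
# so the arriving composition is |U_alpha1|^2.  With delta = 0 and these angles the
# muon fraction comes out larger than the ~1/8 quoted above, which assumes different
# mixing parameters; the point is only to show where the reduction comes from.
print("decay limit (e, mu, tau):", np.round(U[:, 0]**2, 3))
```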

  14. Comparing inversion techniques for constraining CO2 fluxes in the Brazilian Amazon Basin with aircraft observations

    NASA Astrophysics Data System (ADS)

    Chow, V. Y.; Gerbig, C.; Longo, M.; Koch, F.; Nehrkorn, T.; Eluszkiewicz, J.; Ceballos, J. C.; Longo, K.; Wofsy, S. C.

    2012-12-01

    Aircraft mixing ratios are applied as a top-down constraint in Maximum Likelihood Estimation (MLE) and Bayesian inversion frameworks that solve for parameters controlling the flux. Posterior parameter estimates are used to estimate the carbon budget of the BAB. Preliminary results show that the STILT-VPRM model simulates the net emission of CO2 during both transition periods reasonably well. There is significant enhancement from biomass burning during the November 2008 profiles and some from fossil fuel combustion during the May 2009 flights. ΔCO/ΔCO2 emission ratios are used in combination with continuous observations of CO to remove the CO2 contributions from biomass burning and fossil fuel combustion from the observed CO2 measurements, resulting in better agreement between observed and modeled aircraft data. Comparing column calculations for each of the vertical profiles shows that our model represents the variability in the diurnal cycle. The high-altitude CO2 values from above 3500 m are similar to the lateral boundary conditions from CarbonTracker 2010 and GEOS-Chem, indicating little influence from surface fluxes at these levels. The MLE inversion provides scaling factors for GEE and R for each of the 8 vegetation types, and a Bayesian inversion is being conducted. Our initial inversion results suggest the BAB represents a small net source of CO2 during both of the BARCA intensives.
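
    The ΔCO/ΔCO2 correction described above subtracts the combustion contribution to CO2 inferred from the CO enhancement over background. A minimal sketch with placeholder numbers (the background value and emission ratio are assumptions, not BARCA values):

```python
import numpy as np

# Subtract the combustion contribution to CO2 using the CO enhancement above
# background and an assumed dCO/dCO2 emission ratio.  Numbers are illustrative.
co2_obs = np.array([392.5, 394.1, 396.8])     # ppm
co_obs = np.array([120.0, 180.0, 260.0])      # ppb
co_background = 90.0                          # ppb, assumed regional background
emission_ratio = 0.08                         # dCO/dCO2 (mol/mol), assumed

d_co = co_obs - co_background                             # combustion CO enhancement [ppb]
d_co2_combustion = (d_co / emission_ratio) * 1e-3         # ppb of CO2 -> ppm
co2_biogenic = co2_obs - d_co2_combustion
print("combustion-corrected CO2 (ppm):", np.round(co2_biogenic, 2))
```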

  15. Constraining methane release due to serpentinization by the observed D/H ratio on Mars

    NASA Astrophysics Data System (ADS)

    Chassefière, Eric; Leblanc, François

    2011-10-01

    It has been suggested that Mars' atmospheric CH4 could be produced by crustal hydrothermal systems. The two most plausible mechanisms proposed so far, not exclusive of each other, are homogeneous formation by fluid-rock interaction during magmatic events and serpentinization of ultramafic rocks. The first goal of the present paper is to provide an upper limit on the release rate of serpentinization-derived CH4. Due to the release of numerous H2 molecules together with one CH4 molecule, followed by thermal escape of all released H atoms to space and subsequent H isotopic fractionation, even a relatively modest serpentinization-derived CH4 release acting over geological time scales may result in a significant enrichment of D with respect to H in Mars' cryo-hydrosphere, including atmosphere, polar caps and subsurface reservoirs. By assuming that the CH4 release rate has been proportional to the volcanic extrusion rate during the last 4 billion years, we calculate the present D/H ratio resulting from the crustal oxidation due to serpentinization, including the additional effect of sulfur oxidation. We show that this rate does not exceed 20% (within a factor of 2) of the estimated present value of the CH4 release rate. If it did, the present D/H ratio on Mars would be larger than observed (~5 × SMOW). This result suggests that either the production of CH4 is sporadic with a present release rate larger than the average rate, or there are other significant sources of CH4 like homogeneous formation from mantle carbon degassing or bacterial activity. Second, assuming further that most of the H isotopic fractionation observed today is due to serpentinization, we show that a ~400 m thick global equivalent layer of water may have been stored in serpentine since the late Noachian. This result does not depend on the chemical form of the released hydrogen (H2 or CH4). Such a quantity is generally considered as the amount required for explaining the formation of valley networks on
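
    The link between hydrogen escape and D/H enrichment can be illustrated with simple Rayleigh fractionation, in which preferential loss of H enriches the remaining reservoir in D. The sketch below uses an assumed escape fractionation factor and assumed loss fractions; it is only a schematic of the bookkeeping, not the paper's time-dependent model:

```python
# Rayleigh-fractionation sketch: preferential escape of H enriches the remaining
# water reservoir in D.  Both parameters below are assumptions for illustration.

def dh_enrichment(fraction_remaining, escape_fractionation=0.02):
    """D/H of the remaining reservoir relative to its initial value.

    fraction_remaining   : fraction of the exchangeable water reservoir left
    escape_fractionation : D/H of the escaping flux relative to the reservoir
                           (<1 means H escapes preferentially; illustrative value)
    """
    return fraction_remaining ** (escape_fractionation - 1.0)

for f in (0.5, 0.3, 0.2):
    print("fraction remaining %.1f -> D/H enrichment x%.1f over the initial ratio"
          % (f, dh_enrichment(f)))
```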

  16. Transient Earth system responses to cumulative carbon dioxide emissions: linearities, uncertainties, and probabilities in an observation-constrained model ensemble

    NASA Astrophysics Data System (ADS)

    Steinacher, M.; Joos, F.

    2016-02-01

    Information on the relationship between cumulative fossil CO2 emissions and multiple climate targets is essential to design emission mitigation and climate adaptation strategies. In this study, the transient response of a climate or environmental variable per trillion tonnes of CO2 emissions, termed TRE, is quantified for a set of impact-relevant climate variables and from a large set of multi-forcing scenarios extended to year 2300 towards stabilization. An ~1000-member ensemble of the Bern3D-LPJ carbon-climate model is applied and model outcomes are constrained by 26 physical and biogeochemical observational data sets in a Bayesian, Monte Carlo-type framework. Uncertainties in TRE estimates include both scenario uncertainty and model response uncertainty. Cumulative fossil emissions of 1000 Gt C result in a global mean surface air temperature change of 1.9 °C (68 % confidence interval (c.i.): 1.3 to 2.7 °C), a decrease in surface ocean pH of 0.19 (0.18 to 0.22), and a steric sea level rise of 20 cm (13 to 27 cm until 2300). Linearity between cumulative emissions and transient response is high for pH and reasonably high for surface air and sea surface temperatures, but less pronounced for changes in Atlantic meridional overturning, Southern Ocean and tropical surface water saturation with respect to biogenic structures of calcium carbonate, and carbon stocks in soils. The constrained model ensemble is also applied to determine the response to a pulse-like emission and in idealized CO2-only simulations. The transient climate response is constrained, primarily by long-term ocean heat observations, to 1.7 °C (68 % c.i.: 1.3 to 2.2 °C) and the equilibrium climate sensitivity to 2.9 °C (2.0 to 4.2 °C). This is consistent with results by CMIP5 models but inconsistent with recent studies that relied on short-term air temperature data affected by natural climate variability.
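
    The observational constraint in such ensemble studies is typically applied by weighting each member by its likelihood given the data and then reading off weighted percentiles of the quantity of interest. A minimal sketch of that step with a synthetic ensemble and a synthetic observation (none of the numbers are from the paper):

```python
import numpy as np

# Weight each ensemble member by a Gaussian likelihood of its modeled observable,
# then compute weighted percentiles of a TRE-like quantity.  All values synthetic.
rng = np.random.default_rng(1)
n = 1000
tre = rng.normal(2.0, 0.6, n)                   # degC per 1000 Gt C, synthetic prior ensemble
sim_obs = tre * 0.12 + rng.normal(0.0, 0.05, n) # a synthetic modeled observable
obs, obs_err = 0.23, 0.04                       # synthetic observation and 1-sigma error

weights = np.exp(-0.5 * ((sim_obs - obs) / obs_err) ** 2)
weights /= weights.sum()

def weighted_quantile(x, q, w):
    order = np.argsort(x)
    cdf = np.cumsum(w[order])
    return np.interp(q, cdf, x[order])

print("posterior median TRE: %.2f degC / 1000 Gt C" % weighted_quantile(tre, 0.5, weights))
print("68%% interval: %.2f to %.2f" % (weighted_quantile(tre, 0.16, weights),
                                       weighted_quantile(tre, 0.84, weights)))
```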

  17. A model of earthquake triggering probabilities and application to dynamic deformations constrained by ground motion observations

    USGS Publications Warehouse

    Gomberg, J.; Felzer, K.

    2008-01-01

    We have used observations from Felzer and Brodsky (2006) of the variation of linear aftershock densities (i.e., aftershocks per unit length) with the magnitude of and distance from the main shock fault to derive constraints on how the probability of a main shock triggering a single aftershock at a point, P(r, D), varies as a function of distance, r, and main shock rupture dimension, D. We find that P(r, D) becomes independent of D as the triggering fault is approached. When r >> D, P(r, D) scales as D^m with m ~ 2 and decays with distance approximately as r^-n with n = 2, with a possible change to r^-(n-1) at r > h, where h is the closest distance between the fault and the boundaries of the seismogenic zone. These constraints may be used to test hypotheses about the types of deformations and mechanisms that trigger aftershocks. We illustrate this using dynamic deformations (i.e., radiated seismic waves) and a posited proportionality with P(r, D). Deformation characteristics examined include peak displacements, peak accelerations and velocities (proportional to strain rates and strains, respectively), and two measures that account for cumulative deformations. Our model indicates that either peak strains alone or strain rates averaged over the duration of rupture may be responsible for aftershock triggering.
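
    One simple functional form consistent with the scalings described above (independent of D near the fault, falling off roughly as D^2/r^2 far from it) can be written down directly; the implementation below is an illustration under that assumed form, not the authors' parameterization:

```python
import numpy as np

# Illustrative triggering-probability function: constant for r << D and
# proportional to (D/r)^n for r >> D, with an arbitrary normalization p0.

def trigger_probability(r, D, p0=1.0, n=2.0):
    """Relative probability of triggering a single aftershock at distance r [km]
    from a main shock of rupture dimension D [km] (assumed functional form)."""
    r = np.asarray(r, dtype=float)
    return p0 * np.minimum(1.0, (D / r) ** n)

for D in (1.0, 10.0):
    print("D = %4.1f km:" % D,
          ["%.3f" % trigger_probability(r, D) for r in (0.5, 5.0, 50.0)])
```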

  18. Constraining model transient climate response using independent observations of solar-cycle forcing and response

    NASA Astrophysics Data System (ADS)

    Tung, Ka Kit; Zhou, Jiansong; Camp, Charles D.

    2008-09-01

    The phenomenon of 11-year solar cycles has a well-measured forcing, and the response in surface temperature is confirmed using multiple datasets, including reanalysis (NCEP/NCAR and ERA-40) and blended in situ land-ocean data (GISS and HadCRUT3). Missing coverage in the historical in situ station data reduces the amplitude of the response compared to the geographically complete reanalysis data, but all extracted signals are statistically robust. A transient climate sensitivity parameter can be defined once forcing and response are known. The coupled atmosphere-ocean models participating in the 4th Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC) span a large range in their transient climate response (TCR). Using observational results on the response to the 11-year solar variation, we derive a constraint for the TCR. It is seen that, compared with our derived constraint, most models assessed by IPCC AR4 have too low a TCR, even lower than that derived from the station data.
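
    The underlying scaling argument is simple: a transient sensitivity is the ratio of the solar-cycle temperature response to the solar-cycle forcing, and multiplying by the CO2-doubling forcing gives an implied TCR. The numbers below are assumed, round values for illustration only:

```python
# Back-of-envelope version of the scaling: sensitivity = response / forcing for the
# 11-yr cycle, then TCR = sensitivity * F_2xCO2.  Values are illustrative assumptions,
# not the paper's derived numbers.
delta_T_solar = 0.17      # K, solar-cycle surface temperature response (assumed)
delta_F_solar = 0.21      # W m^-2, effective solar-cycle forcing (assumed)
f_2xco2 = 3.7             # W m^-2, canonical forcing for doubled CO2

sensitivity = delta_T_solar / delta_F_solar      # K per W m^-2, transient
tcr = sensitivity * f_2xco2
print("implied transient climate response: %.1f K" % tcr)
```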

  19. Interannual and Seasonal Variability of Biomass Burning Emissions Constrained by Satellite Observations

    NASA Technical Reports Server (NTRS)

    Duncan, Bryan N.; Martin, Randall V.; Staudt, Amanda C.; Yevich, Rosemarie; Logan, Jennifer A.

    2003-01-01

    We present a methodology for estimating the seasonal and interannual variation of biomass burning designed for use in global chemical transport models. The average seasonal variation is estimated from 4 years of fire-count data from the Along Track Scanning Radiometer (ATSR) and 1-2 years of similar data from the Advanced Very High Resolution Radiometer (AVHRR) World Fire Atlases. We use the Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI) data product as a surrogate to estimate interannual variability in biomass burning for six regions: Southeast Asia, Indonesia and Malaysia, Brazil, Central America and Mexico, Canada and Alaska, and Asiatic Russia. The AI data set is available from 1979 to the present with an interruption in satellite observations from mid-1993 to mid-1996; this data gap is filled where possible with estimates of area burned from the literature for different regions. Between August 1996 and July 2000, the ATSR fire-counts are used to provide specific locations of emissions and a record of interannual variability throughout the world. We use our methodology to estimate mean seasonal and interannual variations for emissions of carbon monoxide from biomass burning, and we find that no trend is apparent in these emissions over the last two decades, but that there is significant interannual variability.

  20. Deep source model for Nevado del Ruiz Volcano, Colombia, constrained by interferometric synthetic aperture radar observations

    NASA Astrophysics Data System (ADS)

    Lundgren, P.; Samsonov, S. V.; López, C. M.; Ordoñez, M.

    2015-12-01

    Nevado del Ruiz (NRV) is part of a large volcano complex in the northern Andes of Colombia with a large glacier; its 1985 eruption generated a lahar that killed over 23,000 people in the city of Armero and 2,000 people in the town of Chinchina. NRV is the most active volcano in Colombia and since 2012 has generated small eruptions, with no casualties, and constant gas and ash emissions. Interferometric synthetic aperture radar (InSAR) observations from ascending and descending track RADARSAT-2 data show a large (>20 km wide) inflation pattern apparently starting in late 2011 to early 2012 and continuing to the time of this study in early 2015 at a LOS rate of over 3-4 cm/yr. Volcano pressure-volume models for both a point source (Mogi) and a spheroidal (Yang) source find solutions over 14 km beneath the surface, or 10 km below sea level, and centered 10 km to the SW of Nevado del Ruiz volcano. The spheroidal source has a roughly horizontal long axis oriented parallel to the Santa Isabel - Nevado del Ruiz volcanic line and perpendicular to the ambient compressive stress direction. Its solution provides a statistically significant improvement in fit compared to the point source, though consideration of spatially correlated noise sources may diminish this significance. Stress change computations do not favor one model over the other but show that propagating dikes would become trapped in sills, leading to a more complex pathway to the surface and possibly explaining the significant lateral distance between the modeled sources and Nevado del Ruiz volcano.
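
    For the point-source case, surface deformation follows the standard Mogi solution for an elastic half-space, projected into the radar line of sight. A minimal sketch with assumed depth, volume-change rate, and look vector (illustrative values only, not the paper's model parameters):

```python
import numpy as np

# Mogi point source in an elastic half-space (Poisson ratio 0.25), projected into a
# radar line of sight.  Depth, volume-change rate and look vector are assumptions.

def mogi_displacement(x, y, depth, dV, nu=0.25):
    """Surface displacement (east, north, up) in metres for a volume change dV [m^3]
    at depth [m] below the origin; x, y are east/north offsets in metres."""
    r2 = x**2 + y**2 + depth**2
    c = (1.0 - nu) * dV / np.pi
    uz = c * depth / r2**1.5
    ur = c * np.sqrt(x**2 + y**2) / r2**1.5
    theta = np.arctan2(y, x)
    return ur * np.cos(theta), ur * np.sin(theta), uz

# Example: 14 km deep source, ~0.05 km^3/yr volume change, point 10 km east of the source.
ue, un, uz = mogi_displacement(10e3, 0.0, 14e3, 5e7)
look = np.array([0.38, -0.09, 0.92])            # approximate unit LOS vector (assumed)
los = np.dot([ue, un, uz], look)
print("LOS displacement rate: %.0f mm/yr" % (los * 1e3))
```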

  1. Constraining the variation of the fine-structure constant with observations of narrow quasar absorption lines

    SciTech Connect

    Songaila, A.; Cowie, L. L.

    2014-10-01

    The unequivocal demonstration of temporal or spatial variability in a fundamental constant of nature would be of enormous significance. Recent attempts to measure the variability of the fine-structure constant α over cosmological time, using high-resolution spectra of high-redshift quasars observed with 10 m class telescopes, have produced conflicting results. We use the many multiplet (MM) method with Mg II and Fe II lines on very high signal-to-noise, high-resolution (R = 72,000) Keck HIRES spectra of eight narrow quasar absorption systems. We consider both systematic uncertainties in spectrograph wavelength calibration and also velocity offsets introduced by complex velocity structure in even apparently simple and weak narrow lines and analyze their effect on claimed variations in α. We find no significant change in α, Δα/α = (0.43 ± 0.34) × 10^-5, in the redshift range z = 0.7-1.5, where this includes both statistical and systematic errors. We also show that the scatter in measurements of Δα/α arising from absorption line structure can be considerably larger than assigned statistical errors even for apparently simple and narrow absorption systems. We find a null result of Δα/α = (-0.59 ± 0.55) × 10^-5 in a system at z = 1.7382 using lines of Cr II, Zn II, and Mn II, whereas using Cr II and Zn II lines in a system at z = 1.6614 we find a systematic velocity trend that, if interpreted as a shift in α, would correspond to Δα/α = (1.88 ± 0.47) × 10^-5, where both results include both statistical and systematic errors. This latter result is almost certainly caused by varying ionic abundances in subcomponents of the line: using Mn II, Ni II, and Cr II in the analysis changes the result to Δα/α = (-0.47 ± 0.53) × 10^-5. Combining the Mg II and Fe II results with estimates based on Mn II, Ni II, and Cr II gives Δα/α = (-0.01 ± 0.26) × 10^-5. We conclude that spectroscopic measurements of

  2. Constraining the Variation of the Fine-structure Constant with Observations of Narrow Quasar Absorption Lines

    NASA Astrophysics Data System (ADS)

    Songaila, A.; Cowie, L. L.

    2014-10-01

    The unequivocal demonstration of temporal or spatial variability in a fundamental constant of nature would be of enormous significance. Recent attempts to measure the variability of the fine-structure constant α over cosmological time, using high-resolution spectra of high-redshift quasars observed with 10 m class telescopes, have produced conflicting results. We use the many multiplet (MM) method with Mg II and Fe II lines on very high signal-to-noise, high-resolution (R = 72,000) Keck HIRES spectra of eight narrow quasar absorption systems. We consider both systematic uncertainties in spectrograph wavelength calibration and also velocity offsets introduced by complex velocity structure in even apparently simple and weak narrow lines and analyze their effect on claimed variations in α. We find no significant change in α, Δα/α = (0.43 ± 0.34) × 10^-5, in the redshift range z = 0.7-1.5, where this includes both statistical and systematic errors. We also show that the scatter in measurements of Δα/α arising from absorption line structure can be considerably larger than assigned statistical errors even for apparently simple and narrow absorption systems. We find a null result of Δα/α = (-0.59 ± 0.55) × 10^-5 in a system at z = 1.7382 using lines of Cr II, Zn II, and Mn II, whereas using Cr II and Zn II lines in a system at z = 1.6614 we find a systematic velocity trend that, if interpreted as a shift in α, would correspond to Δα/α = (1.88 ± 0.47) × 10^-5, where both results include both statistical and systematic errors. This latter result is almost certainly caused by varying ionic abundances in subcomponents of the line: using Mn II, Ni II, and Cr II in the analysis changes the result to Δα/α = (-0.47 ± 0.53) × 10^-5. Combining the Mg II and Fe II results with estimates based on Mn II, Ni II, and Cr II gives Δα/α = (-0.01 ± 0.26) × 10^-5. We conclude that spectroscopic measurements of quasar absorption lines are not yet capable of

  3. Modeling of the Inner Coma of Comet 67P/Churyumov-Gerasimenko Constrained by VIRTIS and ROSINA Observations

    NASA Astrophysics Data System (ADS)

    Fougere, N.; Combi, M. R.; Tenishev, V.; Bieler, A. M.; Migliorini, A.; Bockelée-Morvan, D.; Toth, G.; Huang, Z.; Gombosi, T. I.; Hansen, K. C.; Capaccioni, F.; Filacchione, G.; Piccioni, G.; Debout, V.; Erard, S.; Leyrat, C.; Fink, U.; Rubin, M.; Altwegg, K.; Tzou, C. Y.; Le Roy, L.; Calmonte, U.; Berthelier, J. J.; Rème, H.; Hässig, M.; Fuselier, S. A.; Fiethe, B.; De Keyser, J.

    2015-12-01

    As it orbits around comet 67P/Churyumov-Gerasimenko (CG), the Rosetta spacecraft acquires more information about its main target. The numerous observations made at various geometries and at different times enable good spatial and temporal coverage of the evolution of CG's cometary coma. However, the question regarding the link between the coma measurements and the nucleus activity remains relatively open, notably due to gas expansion and strong kinetic effects in the comet's rarefied atmosphere. In this work, we use coma observations made by the ROSINA-DFMS instrument to constrain the activity at the surface of the nucleus. The distribution of the H2O and CO2 outgassing is described with the use of spherical harmonics. The coordinates in the orthogonal system represented by the spherical harmonics are computed using a least-squares method, minimizing the sum of the squared residuals between an analytical coma model and the DFMS data. Then, the previously deduced activity distributions are used in a Direct Simulation Monte Carlo (DSMC) model to compute a full description of the H2O and CO2 coma of comet CG from the nucleus' surface up to several hundreds of kilometers. The DSMC outputs are used to create synthetic images, which can be directly compared with VIRTIS measurements. The good agreement between the VIRTIS observations and the DSMC model, itself constrained with ROSINA data, provides a compelling juxtaposition of the measurements from these two instruments. Acknowledgements: Work at UofM was supported by contracts JPL#1266313, JPL#1266314 and NASA grant NNX09AB59G. Work at UoB was funded by the State of Bern, the Swiss National Science Foundation and by the ESA PRODEX Program. Work at Southwest Research Institute was supported by subcontract #1496541 from the JPL. Work at BIRA-IASB was supported by the Belgian Science Policy Office via PRODEX/ROSINA PEA 90020. The authors would like to thank ASI, CNES, DLR, NASA for supporting this research. VIRTIS was built
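
    The least-squares step described above can be illustrated with a small design matrix of real spherical harmonics evaluated at the observation geometries. The sketch below uses only degree <= 1 terms and synthetic data, so it is a schematic of the fitting step rather than the actual DFMS analysis:

```python
import numpy as np

# Fit spherical-harmonic coefficients of a surface activity pattern to synthetic
# coma densities by linear least squares.  Only degree <= 1 terms are used here.
rng = np.random.default_rng(2)
n_obs = 200
lon = rng.uniform(0.0, 2 * np.pi, n_obs)          # sub-spacecraft longitude [rad]
colat = np.arccos(rng.uniform(-1.0, 1.0, n_obs))  # sub-spacecraft colatitude [rad]

def design_matrix(lon, colat):
    # Real spherical harmonics up to degree 1 (unnormalized for simplicity).
    return np.column_stack([
        np.ones_like(lon),                 # Y_00
        np.cos(colat),                     # Y_10
        np.sin(colat) * np.cos(lon),       # Y_11 (cosine term)
        np.sin(colat) * np.sin(lon),       # Y_11 (sine term)
    ])

A = design_matrix(lon, colat)
true_coeffs = np.array([1.0, 0.6, 0.3, -0.2])                 # synthetic activity pattern
density = A @ true_coeffs + rng.normal(0.0, 0.05, n_obs)      # synthetic DFMS-like data

coeffs, *_ = np.linalg.lstsq(A, density, rcond=None)
print("recovered coefficients:", np.round(coeffs, 2))
```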

  4. CONSTRAINING A MODEL OF TURBULENT CORONAL HEATING FOR AU MICROSCOPII WITH X-RAY, RADIO, AND MILLIMETER OBSERVATIONS

    SciTech Connect

    Cranmer, Steven R.; Wilner, David J.; MacGregor, Meredith A.

    2013-08-01

    Many low-mass pre-main-sequence stars exhibit strong magnetic activity and coronal X-ray emission. Even after the primordial accretion disk has been cleared out, the star's high-energy radiation continues to affect the formation and evolution of dust, planetesimals, and large planets. Young stars with debris disks are thus ideal environments for studying the earliest stages of non-accretion-driven coronae. In this paper we simulate the corona of AU Mic, a nearby active M dwarf with an edge-on debris disk. We apply a self-consistent model of coronal loop heating that was derived from numerical simulations of solar field-line tangling and magnetohydrodynamic turbulence. We also synthesize the modeled star's X-ray luminosity and thermal radio/millimeter continuum emission. A realistic set of parameter choices for AU Mic produces simulated observations that agree with all existing measurements and upper limits. This coronal model thus represents an alternative explanation for a recently discovered ALMA central emission peak that was suggested to be the result of an inner 'asteroid belt' within 3 AU of the star. However, it is also possible that the central 1.3 mm peak is caused by a combination of active coronal emission and a bright inner source of dusty debris. Additional observations of this source's spatial extent and spectral energy distribution at millimeter and radio wavelengths will better constrain the relative contributions of the proposed mechanisms.

  5. Combined assimilation of IASI and MLS observations to constrain tropospheric and stratospheric ozone in a global chemical transport model

    NASA Astrophysics Data System (ADS)

    Emili, E.; Barret, B.; Massart, S.; Le Flochmoen, E.; Piacentini, A.; El Amraoui, L.; Pannekoucke, O.; Cariolle, D.

    2014-01-01

    Accurate and temporally resolved fields of free-troposphere ozone are of major importance to quantify the intercontinental transport of pollution and the ozone radiative forcing. We consider a global chemical transport model (MOdèle de Chimie Atmosphérique à Grande Échelle, MOCAGE) in combination with a linear ozone chemistry scheme to examine the impact of assimilating observations from the Microwave Limb Sounder (MLS) and the Infrared Atmospheric Sounding Interferometer (IASI). The assimilation of the two instruments is performed by means of a variational algorithm (4D-VAR) and allows us to constrain stratospheric and tropospheric ozone simultaneously. The analysis is first computed for the months of August and November 2008 and validated against ozonesonde measurements to check for observation and model biases. Furthermore, a longer analysis of 6 months (July-December 2008) showed that the combined assimilation of MLS and IASI is able to globally reduce the uncertainty (root mean square error, RMSE) of the modeled ozone columns from 30% to 15% in the upper troposphere/lower stratosphere (UTLS, 70-225 hPa). The assimilation of IASI tropospheric ozone observations (TOC, tropospheric O3 column, 1000-225 hPa) decreases the RMSE of the model from 40% to 20% in the tropics (30° S-30° N), whereas it is not effective at higher latitudes. Results are confirmed by a comparison with additional ozone data sets like the Measurements of OZone and wAter vapour by aIrbus in-service airCraft (MOZAIC) data, the Ozone Monitoring Instrument (OMI) total ozone columns and several high-altitude surface measurements. Finally, the analysis is found to be insensitive to the assimilation parameters. We conclude that the combination of a simplified ozone chemistry scheme with frequent satellite observations is a valuable tool for the long-term analysis of stratospheric and free-tropospheric ozone.
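
    The variational analysis minimizes a cost function combining a background term and an observation term. A toy, drastically simplified sketch (a static 3D-Var-like cost on a five-element ozone column with a made-up averaging-kernel operator; all values are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Toy variational analysis: minimize J(x) = 0.5*(x-xb)' Binv (x-xb) + 0.5*(Hx-y)' Rinv (Hx-y).
# The "state" is a short ozone partial-column vector; everything is synthetic.
n = 5
xb = np.array([30.0, 35.0, 45.0, 60.0, 80.0])     # background partial columns (DU)
B_inv = np.diag(1.0 / np.full(n, 25.0))           # inverse background error covariance (diag)
H = np.array([[0.4, 0.3, 0.2, 0.1, 0.0],          # averaging-kernel-like observation operator
              [0.0, 0.1, 0.2, 0.3, 0.4]])
y = np.array([40.0, 70.0])                        # synthetic satellite column observations
R_inv = np.diag(1.0 / np.array([4.0, 4.0]))       # inverse observation error covariance

def cost(x):
    dxb = x - xb
    dy = H @ x - y
    return 0.5 * dxb @ B_inv @ dxb + 0.5 * dy @ R_inv @ dy

result = minimize(cost, xb, method="L-BFGS-B")
print("analysis increment (DU):", np.round(result.x - xb, 2))
```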

  6. An Experimental Path to Constraining the Origins of the Jupiter Trojans Using Observations, Theoretical Predictions, and Laboratory Simulants

    NASA Astrophysics Data System (ADS)

    Blacksberg, Jordana; Eiler, John; Brown, Mike; Ehlmann, Bethany; Hand, Kevin; Hodyss, Robert; Mahjoub, Ahmed; Poston, Michael; Liu, Yang; Choukroun, Mathieu; Carey, Elizabeth; Wong, Ian

    2014-11-01

    Hypotheses based on recent dynamical models (e.g. the Nice Model) shape our current understanding of solar system evolution, suggesting radical rearrangement in the first hundreds of millions of years of its history, changing the orbital distances of Jupiter, Saturn, and a large number of small bodies. The goal of this work is to build a methodology to concretely tie individual solar system bodies to dynamical models using observables, providing evidence for their origins and evolutionary pathways. Ultimately, one could imagine identifying a set of chemical or mineralogical signatures that could quantitatively and predictably measure the radial distance at which icy and rocky bodies first accreted. The target of the work presented here is the Jupiter Trojan asteroids, predicted by the Nice Model to have initially formed in the Kuiper belt and later been scattered inward to co-orbit with Jupiter. Here we present our strategy which is fourfold: (1) Generate predictions about the mineralogical, chemical, and isotopic compositions of materials accreted in the early solar system as a function of distance from the Sun. (2) Use temperature and irradiation to simulate evolutionary processing of ices and silicates, and measure the alteration in spectral properties from the UV to mid-IR. (3) Characterize simulants to search for potential fingerprints of origin and processing pathways, and (4) Use telescopic observations to increase our knowledge of the Trojan asteroids, collecting data on populations and using spectroscopy to constrain their compositions. In addition to the overall strategy, we will present preliminary results on compositional modeling, observations, and the synthesis, processing, and characterization of laboratory simulants including ices and silicates. This work has been supported by the Keck Institute for Space Studies (KISS). The research described here was carried out at the Jet Propulsion Laboratory, Caltech, under a contract with the National

  7. Searches for neutrinos from gamma ray bursts with the AMANDA-II and IceCube detectors

    NASA Astrophysics Data System (ADS)

    Strahler, Erik Albert

    2009-11-01

    Gamma-ray bursts (GRBs) are among the most energetic phenomena in the universe, releasing isotropic equivalent energies of [Special characters omitted.] ergs over short time scales. While it is possible to wholly explain the observed keV-GeV photons by purely electromagnetic processes, it is natural to consider the implications of concurrent hadronic (proton) acceleration in these sources. Such processes make GRBs one of the leading candidates for the sources of the ultra-high-energy cosmic rays as well as sources of associated high-energy (TeV-PeV) neutrinos. We have performed searches for such neutrinos from 85 northern-sky GRBs with the AMANDA-II neutrino detector. No signal is observed and upper limits are set on the emission from these sources. Additionally, we have performed a search for 41 northern-sky GRBs using the 22-string configuration of the IceCube neutrino telescope, employing an unbinned maximum-likelihood method and individual modeling of the predicted emission from each burst. This search is consistent with the background-only hypothesis and we set upper limits on the emission.
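
    Unbinned likelihood searches of this kind maximize, over the number of signal events, a per-event mixture of signal and background probability densities. A minimal sketch with synthetic per-event PDF values (not the actual IceCube likelihood code or PDFs):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Unbinned likelihood: each event i has a signal PDF value S_i (direction/time/energy)
# and a background PDF value B_i; the likelihood is maximized over the number of
# signal events n_s.  The per-event PDF values below are synthetic placeholders.
rng = np.random.default_rng(3)
N = 200
S = rng.exponential(0.2, N)              # synthetic signal PDF values per event
B = np.full(N, 0.1)                      # synthetic (flat) background PDF values

def neg_log_likelihood(ns):
    return -np.sum(np.log(ns / N * S + (1.0 - ns / N) * B))

res = minimize_scalar(neg_log_likelihood, bounds=(0.0, N), method="bounded")
ts = 2.0 * (neg_log_likelihood(0.0) - res.fun)   # likelihood-ratio test statistic
print("best-fit n_s = %.1f, TS = %.2f" % (res.x, ts))
```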

  8. Constraining lightning channel growth dynamics by comparison of time domain electromagnetic simulations to Huntsville Alabama Marx Meter Array observations

    NASA Astrophysics Data System (ADS)

    Carlson, B. E.; Bitzer, P. M.; Burchfield, J.

    2015-12-01

    Major unknowns in lightning research include the mechanism and dynamics of lightning channel extension. Such processes are simplest during the initial growth of the channel, when the channel is relatively short and has not yet branched extensively throughout the cloud. During this initial growth phase, impulsive electromagnetic emissions (preliminary breakdown pulses) can be well described as produced by current pulses generated as the channel extends, but the overall growth rate, channel geometry, and degree of branching are not known. We approach these issues by examining electric field change measurements made with the Huntsville Alabama Marx Meter Array (HAMMA) during the first few milliseconds of growth of a lightning discharge. We compare HAMMA observations of electromagnetic emissions and overall field change to models of lightning channel growth and development and attempt to constrain the channel growth rate, degree of branching, channel physical properties, and uniformity of the thunderstorm electric field. Preliminary comparisons suggest that the lightning channel branches relatively early in the discharge, though more complete and detailed analysis will be presented.

  9. Constraining the Dense Matter Equation of State with ATHENA-WFI observations of Neutron Stars in Quiescent LMXBs

    NASA Astrophysics Data System (ADS)

    Guillot, Sebastien; Oezel, F.

    2015-09-01

    The study of neutron star quiescent low-mass X-ray binaries (qLMXBs) will address one of the main science goals of the Athena X-ray observatory. The study of the soft X-ray thermal emission from the neutron star surface in qLMXBs is a crucial tool to place constraints on the dense matter equation of state. I will briefly review this method, its strengths, current weaknesses and limitations, as well as the current constraints on the equation of state from qLMXBs. The superior sensitivity of Athena will permit the acquisition of unprecedentedly high signal-to-noise spectra from these sources. It has been demonstrated that a single qLMXB, even with high S/N, will not place useful constraints on the EoS. However, a combination of qLMXB spectra has shown promise of obtaining tight constraints on the equation of state. I will discuss the expected prospects for observations of qLMXBs inside globular clusters -- those that Athena will be able to resolve. I will also present the constraints on the equation of state that Athena will be able to obtain from these qLMXBs and from a population of qLMXBs in the field of the Galaxy, with distance measurements provided by Gaia.

  10. Constraining the highly eccentric orbit of the companion of the binary TNO 1998 WW31 by observations at its pericenter

    NASA Astrophysics Data System (ADS)

    Veillet, Christian

    2001-07-01

    The discovery that the Trans-Neptunian Object (TNO) 1998 WW31 has a satellite was announced on IAU Circular 7610 (16 April 2001). 1998 WW31 has been observed on three HST orbits, thanks to an allocation of DD time. The combination of all the ground-based observations and these three high-precision HST positions allowed a first determination of the motion of the faint component with respect to the primary body. Instead of the circular orbit assumed before the HST observations, we found a highly eccentric orbit whose eccentricity, 0.7, is poorly constrained by the observations, mainly made far from the pericenter. Some models could even accommodate an eccentricity as high as 0.9. These results will be presented at the AAS DPS meeting on Nov 27 and a paper is being submitted to Nature. A normal proposal has been made for the next cycle, but we now know that the companion will pass at the pericenter between January and March of 2002, and the next occurrence will be only in September 2003 (+/- a couple of months with the current uncertainties). With a separation of the two components at pericenter close to 0.15", there is no way it can be observed from the ground with any sufficient accuracy. Hubble's unparalleled resolution will enable us to assess the ellipticity of the orbit in a definite way, providing an important constraint to the models proposed for the creation of such a binary system. It is our intent, as soon as our Nature paper is published, to implement an outreach site showing the evolution of our knowledge of the system with the acquisition of new data from HST and from the ground, as a "case study" demonstrating the "prediction-correction" scheme widely used in science, with the advantage of simple basic physics (Keplerian motion, simple assumptions on physical characteristics like density or albedo) easily accessible to young students, but still bringing important conclusions on the nature of the objects themselves.

  11. Combined assimilation of IASI and MLS observations to constrain tropospheric and stratospheric ozone in a global chemical transport model

    NASA Astrophysics Data System (ADS)

    Emili, E.; Barret, B.; Massart, S.; Le Flochmoen, E.; Piacentini, A.; El Amraoui, L.; Pannekoucke, O.; Cariolle, D.

    2013-08-01

    Accurate and temporally resolved fields of free-troposphere ozone are of major importance to quantify the intercontinental transport of pollution and the ozone radiative forcing. In this study we examine the impact of assimilating ozone observations from the Microwave Limb Sounder (MLS) and the Infrared Atmospheric Sounding Interferometer (IASI) in a global chemical transport model (MOdèle de Chimie Atmosphérique à Grande Échelle, MOCAGE). The assimilation of the two instruments is performed by means of a variational algorithm (4-D-VAR) and allows us to constrain stratospheric and tropospheric ozone simultaneously. The analysis is first computed for the months of August and November 2008 and validated against ozonesonde measurements to check for observation and model biases. It is found that the IASI Tropospheric Ozone Column (TOC, 1000-225 hPa) should be bias-corrected prior to assimilation and the MLS lowermost level (215 hPa) excluded from the analysis. Furthermore, a longer analysis of 6 months (July-August 2008) showed that the combined assimilation of MLS and IASI is able to globally reduce the uncertainty (Root Mean Square Error, RMSE) of the modeled ozone columns from 30% to 15% in the Upper-Troposphere/Lower-Stratosphere (UTLS, 70-225 hPa) and from 25% to 20% in the free troposphere. The positive effect of assimilating IASI tropospheric observations is very significant at low latitudes (30° S-30° N), whereas it is not demonstrated at higher latitudes. Results are confirmed by a comparison with additional ozone datasets like the Measurements of OZone and wAter vapour by aIrbus in-service airCraft (MOZAIC) data, the Ozone Monitoring Instrument (OMI) total ozone columns and several high-altitude surface measurements. Finally, the analysis is found to be only weakly sensitive to the assimilation parameters and the model chemical scheme, due to the high frequency of satellite observations compared to the average life-time of free

  12. Analysis and implications of the miscarriages of justice of Amanda Knox and Raffaele Sollecito.

    PubMed

    Gill, Peter

    2016-07-01

    The case of the 'murder of Meredith Kercher' has been the subject of intense media scrutiny since 2007, when the offence was committed. Three individuals were arrested and accused of the crime. Amanda Knox and Raffaele Sollecito were exonerated in March 2015. Another defendant, Rudy Guede, remains convicted as the sole perpetrator. He was implicated by multiple DNA profiles recovered from the murder room and the bathroom. However, the evidence against Guede contrasted strongly with the limited evidence against the two co-defendants, Amanda Knox and Raffaele Sollecito. There were no DNA profiles pertaining to Amanda Knox in the murder room itself. She was separately implicated by a knife recovered remote from the crime scene (discovered in a cutlery drawer at Sollecito's apartment), along with DNA profiles in a bathroom that she had shared with the victim. Upon analysis, a low-level trace of DNA attributed to the murder victim was found on the blade of the knife, along with DNA profiles attributed to Amanda Knox on the handle. However, there was no evidence of blood on the knife blade itself. A separate key piece of evidence was a DNA profile attributed to Raffaele Sollecito recovered from a forcibly removed bra-clasp found in the murder room. There followed an extraordinary series of trials and retrials in which the pair were convicted, exonerated, re-convicted and, in March 2015, finally exonerated (no further appeal is possible). Since Knox and Sollecito have been found innocent, it is opportune to carry out an extensive review of the case to discover the errors that led to conviction so that similar mistakes do not occur in the future. It is accepted that the DNA profiles attributed to them were transferred by methods unrelated to the crime event itself. There is a wealth of material available from the judgements and other reports which can be analysed in order to show the errors of thinking. The final judgement of the case - the Marasca-Bruno motivation

  13. Using seismic array-processing to enhance observations of PcP waves to constrain lowermost mantle structure

    NASA Astrophysics Data System (ADS)

    Ventosa, S.; Romanowicz, B. A.

    2014-12-01

    The topography of the core-mantle boundary (CMB) and the structure and composition of the D" region are essential to understand the interaction between the Earth's mantle and core. A variety of seismic data-processing techniques have been used to detect and measure travel-times and amplitudes of weak short-period teleseismic body-wave phases that interact with the CMB and D", which is crucial for constraining properties of the lowermost mantle at short wavelengths. Major challenges in enhancing these observations are: (1) increasing the signal-to-noise ratio of target phases and (2) isolating them from unwanted neighboring phases. Seismic array-processing can address these problems by combining signals from groups of seismometers and exploiting information that allows the coherent signals to be separated from the noise. Here, we focus on the study of the Pacific large-low shear-velocity province (LLSVP) and surrounding areas using differential travel-times and amplitude ratios of the P and PcP phases, and their depth phases. In particular, we design scale-dependent slowness filters that do not compromise time-space resolution. This is a local delay-and-sum (i.e. slant-stack) approach implemented in the time-scale domain using the wavelet transform to enhance time-space resolution (i.e. reduce array aperture). We group stations from USArray and other nearby networks, and from Hi-Net and F-net in Japan, to define many overlapping local arrays. The aperture of each array varies mainly according to (1) the spatial resolution target and (2) the slowness resolution required to isolate the target phases at each period. Once the target phases are well separated, we measure their differential travel-times and amplitude ratios, and we project these to the CMB. In this process, we carefully analyze and, when possible and significant, correct for the main sources of bias, i.e., mantle heterogeneities, earthquake mislocation and intrinsic attenuation. We illustrate our approach in a series of
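
    The core of the delay-and-sum (slant-stack) step is shifting each trace by slowness times station offset and stacking, scanning over trial slownesses. A minimal time-domain sketch with synthetic traces (the wavelet-domain, scale-dependent filtering described above is omitted):

```python
import numpy as np

# Delay-and-sum (slant-stack): shift each trace by slowness * offset and stack,
# scanning over trial slownesses.  Traces are synthetic.
dt = 0.05                                       # sampling interval [s]
nt = 400
t = np.arange(nt) * dt
x = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])   # station offsets along the profile [km]
p_true = 0.044                                  # slowness of the buried arrival [s/km]
rng = np.random.default_rng(0)

traces = np.zeros((x.size, nt))
for i, xi in enumerate(x):
    arrival = 10.0 + p_true * xi                # moveout across the array
    traces[i] = np.exp(-0.5 * ((t - arrival) / 0.2) ** 2) + 0.3 * rng.normal(size=nt)

slowness_grid = np.linspace(0.0, 0.08, 81)
stack_energy = np.zeros_like(slowness_grid)
for k, p in enumerate(slowness_grid):
    stack = np.zeros(nt)
    for i, xi in enumerate(x):
        shift = int(round(p * xi / dt))         # delay in samples for this station
        stack += np.roll(traces[i], -shift)     # align the trial moveout before stacking
    stack_energy[k] = np.max(stack**2)

print("best-fit slowness: %.3f s/km" % slowness_grid[np.argmax(stack_energy)])
```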

  14. Combined assimilation of IASI and MLS observations to constrain tropospheric and stratospheric ozone in a global chemical transport model

    NASA Astrophysics Data System (ADS)

    Emili, Emanuele; Barret, Brice; Massart, Sebastien; Piacentini, Andrea; Pannekoucke, Olivier; Cariolle, Daniel

    2013-04-01

    Ozone acts as the main shield against UV radiation in the stratosphere, contributes to the greenhouse effect in the troposphere, and is a major pollutant in the planetary boundary layer. In recent decades, models and satellite observations have reached a mature level, providing estimates of ozone with an accuracy of a few percent in the stratosphere. On the other hand, tropospheric ozone still represents a challenge, because its signal is less detectable by space-borne sensors, its modelling depends on the knowledge of gaseous emissions at the surface, and stratosphere/troposphere exchanges might rapidly increase its abundance by several times. Moreover, there is generally a lack of in-situ observations of tropospheric ozone in many regions of the world. For these reasons the assimilation of satellite data into chemical transport models represents a promising technique to overcome limitations of both satellites and models. The objective of this study is to assess the value of vertically resolved observations from the Infrared Atmospheric Sounding Interferometer (IASI) and the Microwave Limb Sounder (MLS) to constrain both the tropospheric and stratospheric ozone profile in a global model. While ozone total columns and stratospheric profiles from UV and microwave sensors are nowadays routinely assimilated in operational models, few studies have yet explored the assimilation of ozone products from IR sensors such as IASI, which provide better sensitivity in the troposphere. We assimilate both MLS ozone profiles and IASI tropospheric (1000-225 hPa) ozone columns in the Météo France chemical transport model MOCAGE for 2008. The model predicts ozone concentrations on a 2x2 degree global grid and for 60 vertical levels, ranging from the surface up to 0.1 hPa. The assimilation is based on a 4D-VAR algorithm, employs a linear chemistry scheme and accounts for the satellite vertical sensitivity via the averaging kernels. The assimilation of the two products is first tested

  15. Limits on the high-energy gamma and neutrino fluxes from the SGR 1806-20 giant flare of 27 December 2004 with the AMANDA-II detector.

    PubMed

    Achterberg, A; Ackermann, M; Adams, J; Ahrens, J; Andeen, K; Atlee, D W; Bahcall, J N; Bai, X; Baret, B; Bartelt, M; Barwick, S W; Bay, R; Beattie, K; Becka, T; Becker, J K; Becker, K-H; Berghaus, P; Berley, D; Bernardini, E; Bertrand, D; Besson, D Z; Blaufuss, E; Boersma, D J; Bohm, C; Bolmont, J; Böser, S; Botner, O; Bouchta, A; Braun, J; Burgess, C; Burgess, T; Castermans, T; Chirkin, D; Christy, B; Clem, J; Cowen, D F; D'Agostino, M V; Davour, A; Day, C T; De Clercq, C; Demirörs, L; Descamps, F; Desiati, P; Deyoung, T; Diaz-Velez, J C; Dreyer, J; Dumm, J P; Duvoort, M R; Edwards, W R; Ehrlich, R; Eisch, J; Ellsworth, R W; Evenson, P A; Fadiran, O; Fazely, A R; Feser, T; Filimonov, K; Fox, B D; Gaisser, T K; Gallagher, J; Ganugapati, R; Geenen, H; Gerhardt, L; Goldschmidt, A; Goodman, J A; Gozzini, R; Grullon, S; Gross, A; Gunasingha, R M; Gurtner, M; Hallgren, A; Halzen, F; Han, K; Hanson, K; Hardtke, D; Hardtke, R; Harenberg, T; Hart, J E; Hauschildt, T; Hays, D; Heise, J; Helbing, K; Hellwig, M; Herquet, P; Hill, G C; Hodges, J; Hoffman, K D; Hommez, B; Hoshina, K; Hubert, D; Hughey, B; Hulth, P O; Hultqvist, K; Hundertmark, S; Hülss, J-P; Ishihara, A; Jacobsen, J; Japaridze, G S; Jones, A; Joseph, J M; Kampert, K-H; Karle, A; Kawai, H; Kelley, J L; Kestel, M; Kitamura, N; Klein, S R; Klepser, S; Kohnen, G; Kolanoski, H; Köpke, L; Krasberg, M; Kuehn, K; Landsman, H; Leich, H; Liubarsky, I; Lundberg, J; Madsen, J; Mase, K; Matis, H S; McCauley, T; McParland, C P; Meli, A; Messarius, T; Mészáros, P; Miyamoto, H; Mokhtarani, A; Montaruli, T; Morey, A; Morse, R; Movit, S M; Münich, K; Nahnhauer, R; Nam, J W; Niessen, P; Nygren, D R; Ogelman, H; Olbrechts, Ph; Olivas, A; Patton, S; Peña-Garay, C; Pérez de Los Heros, C; Piegsa, A; Pieloth, D; Pohl, A C; Porrata, R; Pretz, J; Price, P B; Przybylski, G T; Rawlins, K; Razzaque, S; Refflinghaus, F; Resconi, E; Rhode, W; Ribordy, M; Rizzo, A; Robbins, S; Roth, P; Rott, C; Rutledge, D; Ryckbosch, D; Sander, H-G; Sarkar, S; Schlenstedt, S; Schmidt, T; Schneider, D; Seckel, D; Seo, S H; Seunarine, S; Silvestri, A; Smith, A J; Solarz, M; Song, C; Sopher, J E; Spiczak, G M; Spiering, C; Stamatikos, M; Stanev, T; Steffen, P; Stezelberger, T; Stokstad, R G; Stoufer, M C; Stoyanov, S; Strahler, E A; Straszheim, T; Sulanke, K-H; Sullivan, G W; Sumner, T J; Taboada, I; Tarasova, O; Tepe, A; Thollander, L; Tilav, S; Toale, P A; Turcan, D; van Eijndhoven, N; Vandenbroucke, J; Van Overloop, A; Voigt, B; Wagner, W; Walck, C; Waldmann, H; Walter, M; Wang, Y-R; Wendt, C; Wiebusch, C H; Wikström, G; Williams, D R; Wischnewski, R; Wissing, H; Woschnagg, K; Xu, X W; Yodh, G; Yoshida, S; Zornoza, J D

    2006-12-01

    On 27 December 2004, a giant gamma flare from the Soft Gamma-Ray Repeater 1806-20 saturated many satellite gamma-ray detectors, being the brightest transient event ever observed in the Galaxy. AMANDA-II was used to search for down-going muons indicative of high-energy gammas and/or neutrinos from this object. The data revealed no significant signal, so upper limits (at 90% C.L.) on the normalization constant were set: 0.05 (0.5) TeV^-1 m^-2 s^-1 for gamma = -1.47 (-2) in the gamma flux and 0.4 (6.1) TeV^-1 m^-2 s^-1 for gamma = -1.47 (-2) in the high-energy neutrino flux. PMID:17155787

  16. Limits on the High-Energy Gamma and Neutrino Fluxes from the SGR 1806-20 Giant Flare of 27 December 2004 with the AMANDA-II Detector

    NASA Astrophysics Data System (ADS)

    Achterberg, A.; Ackermann, M.; Adams, J.; Ahrens, J.; Andeen, K.; Atlee, D. W.; Bahcall, J. N.; Bai, X.; Baret, B.; Bartelt, M.; Barwick, S. W.; Bay, R.; Beattie, K.; Becka, T.; Becker, J. K.; Becker, K.-H.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Bouchta, A.; Braun, J.; Burgess, C.; Burgess, T.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cowen, D. F.; D'Agostino, M. V.; Davour, A.; Day, C. T.; de Clercq, C.; Demirörs, L.; Descamps, F.; Desiati, P.; De Young, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feser, T.; Filimonov, K.; Fox, B. D.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Geenen, H.; Gerhardt, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grullon, S.; Groß, A.; Gunasingha, R. M.; Gurtner, M.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hardtke, D.; Hardtke, R.; Harenberg, T.; Hart, J. E.; Hauschildt, T.; Hays, D.; Heise, J.; Helbing, K.; Hellwig, M.; Herquet, P.; Hill, G. C.; Hodges, J.; Hoffman, K. D.; Hommez, B.; Hoshina, K.; Hubert, D.; Hughey, B.; Hulth, P. O.; Hultqvist, K.; Hundertmark, S.; Hülß, J.-P.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Jones, A.; Joseph, J. M.; Kampert, K.-H.; Karle, A.; Kawai, H.; Kelley, J. L.; Kestel, M.; Kitamura, N.; Klein, S. R.; Klepser, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Krasberg, M.; Kuehn, K.; Landsman, H.; Leich, H.; Liubarsky, I.; Lundberg, J.; Madsen, J.; Mase, K.; Matis, H. S.; McCauley, T.; McParland, C. P.; Meli, A.; Messarius, T.; Mészáros, P.; Miyamoto, H.; Mokhtarani, A.; Montaruli, T.; Morey, A.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Ögelman, H.; Olbrechts, Ph.; Olivas, A.; Patton, S.; Peña-Garay, C.; Pérez de Los Heros, C.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Pretz, J.; Price, P. B.; Przybylski, G. T.; Rawlins, K.; Razzaque, S.; Refflinghaus, F.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Robbins, S.; Roth, P.; Rott, C.; Rutledge, D.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Seckel, D.; Seo, S. H.; Seunarine, S.; Silvestri, A.; Smith, A. J.; Solarz, M.; Song, C.; Sopher, J. E.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Steffen, P.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K.-H.; Sullivan, G. W.; Sumner, T. J.; Taboada, I.; Tarasova, O.; Tepe, A.; Thollander, L.; Tilav, S.; Toale, P. A.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; van Overloop, A.; Voigt, B.; Wagner, W.; Walck, C.; Waldmann, H.; Walter, M.; Wang, Y.-R.; Wendt, C.; Wiebusch, C. H.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Yoshida, S.; Zornoza, J. D.

    2006-12-01

    On 27 December 2004, a giant γ flare from the Soft Gamma-Ray Repeater 1806-20 saturated many satellite gamma-ray detectors, being the brightest transient event ever observed in the Galaxy. AMANDA-II was used to search for down-going muons indicative of high-energy gammas and/or neutrinos from this object. The data revealed no significant signal, so upper limits (at 90% C.L.) on the normalization constant were set: 0.05 (0.5) TeV^-1 m^-2 s^-1 for γ = -1.47 (-2) in the gamma flux and 0.4 (6.1) TeV^-1 m^-2 s^-1 for γ = -1.47 (-2) in the high-energy neutrino flux.

  17. Constraining U.S. ammonia emissions using TES remote sensing observations and the GEOS-Chem adjoint model

    EPA Science Inventory

    Ammonia (NH3) has significant impacts on biodiversity, eutrophication, and acidification. Widespread uncertainty in the magnitude and seasonality of NH3 emissions hinders efforts to address these issues. In this work, we constrain U.S. NH3 sources using obse...

  18. Search for Ultra High-Energy Neutrinos with AMANDA-II

    SciTech Connect

    IceCube Collaboration; Klein, Spencer; Ackermann, M.

    2007-11-19

    A search for diffuse neutrinos with energies in excess of 10^5 GeV is conducted with AMANDA-II data recorded between 2000 and 2002. Above 10^7 GeV, the Earth is essentially opaque to neutrinos. This fact, combined with the limited overburden of the AMANDA-II detector (roughly 1.5 km), concentrates these ultra high-energy neutrinos at the horizon. The primary background for this analysis is bundles of downgoing, high-energy muons from the interaction of cosmic rays in the atmosphere. No statistically significant excess above the expected background is seen in the data, and an upper limit is set on the diffuse all-flavor neutrino flux of E^2 Φ_90%CL < 2.7 × 10^-7 GeV cm^-2 s^-1 sr^-1, valid over the energy range of 2 × 10^5 GeV to 10^9 GeV. A number of models which predict neutrino fluxes from active galactic nuclei are excluded at the 90% confidence level.

  19. MS Amanda, a Universal Identification Algorithm Optimized for High Accuracy Tandem Mass Spectra

    PubMed Central

    2014-01-01

    Today’s highly accurate spectra provided by modern tandem mass spectrometers offer considerable advantages for the analysis of proteomic samples of increased complexity. Among other factors, the quantity of reliably identified peptides is considerably influenced by the peptide identification algorithm. While most widely used search engines were developed when high-resolution mass spectrometry data were not readily available for fragment ion masses, we have designed a scoring algorithm particularly suitable for high mass accuracy. Our algorithm, MS Amanda, is generally applicable to HCD, ETD, and CID fragmentation type data. The algorithm confidently explains more spectra at the same false discovery rate than Mascot or SEQUEST on examined high mass accuracy data sets, with excellent overlap and identical peptide sequence identification for most spectra also explained by Mascot or SEQUEST. MS Amanda, available at http://ms.imp.ac.at/?goto=msamanda, is provided free of charge both as standalone version for integration into custom workflows and as a plugin for the Proteome Discoverer platform. PMID:24909410
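
    The abstract does not spell out the scoring function, so the following Python fragment is only a generic illustration of why tight fragment-mass tolerances matter for such search engines: it counts theoretical fragment ions matched within a parts-per-million window. It is explicitly not the published MS Amanda score; all names and peak values are hypothetical.

    ```python
    # Illustrative sketch only: a generic fragment-matching count with a ppm tolerance,
    # NOT the published MS Amanda scoring function. All peak values are hypothetical.

    def ppm_diff(observed_mz, theoretical_mz):
        return abs(observed_mz - theoretical_mz) / theoretical_mz * 1e6

    def count_matches(observed_peaks, theoretical_fragments, tol_ppm=10.0):
        """Count theoretical fragment m/z values matched by any observed peak
        within tol_ppm; tighter tolerances exploit high-accuracy spectra."""
        matched = 0
        for frag in theoretical_fragments:
            if any(ppm_diff(p, frag) <= tol_ppm for p in observed_peaks):
                matched += 1
        return matched

    # Example: a few observed peaks vs. hypothetical b/y-ion masses
    obs = [147.1128, 276.1554, 389.2395]
    theo = [147.1134, 276.1553, 405.2244]
    print(count_matches(obs, theo, tol_ppm=10.0))  # -> 2
    ```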

  20. Search for Ultra-High-Energy Neutrinos with AMANDA-II

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Adams, J.; Ahrens, J.; Andeen, K.; Auffenberg, J.; Bai, X.; Baret, B.; Barwick, S. W.; Bay, R.; Beattie, K.; Becka, T.; Becker, J. K.; Becker, K.-H.; Beimforde, M.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Bouchta, A.; Braun, J.; Burgess, T.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cowen, D. F.; D'Agostino, M. V.; Davour, A.; Day, C. T.; De Clercq, C.; Demirörs, L.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; DeYoung, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Geenen, H.; Gerhardt, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hardtke, D.; Hardtke, R.; Hasegawa, Y.; Hauschildt, T.; Heise, J.; Helbing, K.; Hellwig, M.; Herquet, P.; Hill, G. C.; Hodges, J.; Hoffman, K. D.; Hommez, B.; Hoshina, K.; Hubert, D.; Hughey, B.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hundertmark, S.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kawai, H.; Kelley, J. L.; Kiryluk, J.; Kislat, F.; Kitamura, N.; Klein, S. R.; Klepser, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kuehn, K.; Kuwabara, T.; Labare, M.; Laihem, K.; Landsman, H.; Lauer, R.; Leich, H.; Leier, D.; Liubarsky, I.; Lundberg, J.; Lünemann, J.; Madsen, J.; Maruyama, R.; Mase, K.; Matis, H. S.; McCauley, T.; McParland, C. P.; Meagher, K.; Meli, A.; Messarius, T.; Mészáros, P.; Miyamoto, H.; Montaruli, T.; Morey, A.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Olivas, A.; Ono, M.; Patton, S.; Pérez de los Heros, C.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Pretz, J.; Price, P. B.; Przybylski, G. T.; Rawlins, K.; Razzaque, S.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Robbins, S.; Robbins, W. J.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Rutledge, D.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Satalecka, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schultz, O.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Smith, A. J.; Song, C.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K.-H.; Sullivan, G. W.; Sumner, T. J.; Swillens, Q.; Taboada, I.; Tarasova, O.; Tepe, A.; Thollander, L.; Tilav, S.; Tluczykont, M.; Toale, P. A.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; Viscomi, V.; Vogt, C.; Voigt, B.; Wagner, W.; Walck, C.; Waldmann, H.; Waldenmaier, T.; Walter, M.; Wang, Y.-R.; Wendt, C.; Wiebusch, C. H.; Wiedemann, C.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Yoshida, S.; Zornoza, J. D.; IceCube Collaboration

    2008-03-01

    A search for diffuse neutrinos with energies in excess of 10⁵ GeV is conducted with AMANDA-II data recorded between 2000 and 2002. Above 10⁷ GeV, the Earth is essentially opaque to neutrinos. This fact, combined with the limited overburden of the AMANDA-II detector (roughly 1.5 km), concentrates these ultra-high-energy neutrinos at the horizon. The primary background for this analysis is bundles of downgoing, high-energy muons from the interaction of cosmic rays in the atmosphere. No statistically significant excess above the expected background is seen in the data, and an upper limit is set on the diffuse all-flavor neutrino flux of E²Φ90%CL < 2.7 × 10⁻⁷ GeV cm⁻² s⁻¹ sr⁻¹, valid over the energy range of 2 × 10⁵ to 10⁹ GeV. A number of models that predict neutrino fluxes from active galactic nuclei are excluded at the 90% confidence level.

  1. A model of Greenland ice sheet deglaciation constrained by observations of relative sea level and ice extent

    NASA Astrophysics Data System (ADS)

    Lecavalier, Benoit S.; Milne, Glenn A.; Simpson, Matthew J. R.; Wake, Leanne; Huybrechts, Philippe; Tarasov, Lev; Kjeldsen, Kristian K.; Funder, Svend; Long, Antony J.; Woodroffe, Sarah; Dyke, Arthur S.; Larsen, Nicolaj K.

    2014-10-01

    An ice sheet model was constrained to reconstruct the evolution of the Greenland Ice Sheet (GrIS) from the Last Glacial Maximum (LGM) to present to improve our understanding of its response to climate change. The study involved applying a glaciological model in series with a glacial isostatic adjustment and relative sea-level (RSL) model. The model reconstruction builds upon the work of Simpson et al. (2009) through four main extensions: (1) a larger constraint database consisting of RSL and ice extent data; model improvements to the (2) climate and (3) sea-level forcing components; (4) accounting for uncertainties in non-Greenland ice. The research was conducted primarily to address data-model misfits and to quantify inherent model uncertainties with the Earth structure and non-Greenland ice. Our new model (termed Huy3) fits the majority of observations and is characterised by a number of defining features. During the LGM, the ice sheet had an excess of 4.7 m ice-equivalent sea-level (IESL), which reached a maximum volume of 5.1 m IESL at 16.5 cal ka BP. Modelled retreat of ice from the continental shelf progressed at different rates and timings in different sectors. Southwest and Southeast Greenland began to retreat from the continental shelf by ˜16 to 14 cal ka BP, thus responding in part to the Bølling-Allerød warm event (c. 14.5 cal ka BP); subsequently ice at the southern tip of Greenland readvanced during the Younger Dryas cold event. In northern Greenland the ice retreated rapidly from the continental shelf upon the climatic recovery out of the Younger Dryas to present-day conditions. Upon entering the Holocene (11.7 cal ka BP), the ice sheet soon became land-based. During the Holocene Thermal Maximum (HTM; 9-5 cal ka BP), air temperatures across Greenland were marginally higher than those at present and the GrIS margin retreated inland of its present-day southwest position by 40-60 km at 4 cal ka BP which produced a deficit volume of 0.16 m IESL

  2. Searching for quantum gravity with high-energy atmospheric neutrinos and AMANDA-II

    NASA Astrophysics Data System (ADS)

    Kelley, John Lawrence

    2008-06-01

    The AMANDA-II detector, operating since 2000 in the deep ice at the geographic South Pole, has accumulated a large sample of atmospheric muon neutrinos in the 100 GeV to 10 TeV energy range. The zenith angle and energy distribution of these events can be used to search for various phenomenological signatures of quantum gravity in the neutrino sector, such as violation of Lorentz invariance (VLI) or quantum decoherence (QD). Analyzing a set of 5511 candidate neutrino events collected during 1387 days of livetime from 2000 to 2006, we find no evidence for such effects and set upper limits on VLI and QD parameters using a maximum likelihood method. Given the absence of new flavor-changing physics, we use the same methodology to determine the conventional atmospheric muon neutrino flux above 100 GeV.
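
    As a rough illustration of the statistical machinery described (a likelihood scan over a single new-physics parameter with a 90% C.L. limit from the likelihood-ratio criterion), here is a hedged Python sketch on a toy binned data set. The "distortion" model and all numbers are invented stand-ins, not the AMANDA VLI/QD prediction.

    ```python
    # Hedged sketch: a generic one-parameter scan of a binned Poisson likelihood,
    # with a 90% C.L. upper limit from the likelihood-ratio criterion (Wilks, 1 d.o.f.).
    # The "distortion" model below is a toy stand-in, not the AMANDA VLI/QD prediction.
    import numpy as np

    rng = np.random.default_rng(1)
    background = np.array([120., 80., 50., 30., 15.])   # expected counts per zenith/energy bin
    observed = rng.poisson(background)                  # pseudo-data with no new physics

    def expected(theta):
        # toy distortion: new physics suppresses the highest-energy bins
        suppression = np.exp(-theta * np.arange(len(background)))
        return background * suppression

    def neg2_log_likelihood(theta):
        mu = expected(theta)
        return 2.0 * np.sum(mu - observed * np.log(mu))  # constant terms dropped

    thetas = np.linspace(0.0, 0.5, 501)
    curve = np.array([neg2_log_likelihood(t) for t in thetas])
    delta = curve - curve.min()
    limit = thetas[delta <= 2.71].max()                  # 90% C.L. for one parameter
    print(f"90% C.L. upper limit on toy parameter: {limit:.3f}")
    ```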

  3. Determination of the Atmospheric Neutrino Flux and Searches for New Physics with AMANDA-II

    SciTech Connect

    IceCube Collaboration; Klein, Spencer; Collaboration, IceCube

    2009-06-02

    The AMANDA-II detector, operating since 2000 in the deep ice at the geographic South Pole, has accumulated a large sample of atmospheric muon neutrinos in the 100 GeV to 10 TeV energy range. The zenith angle and energy distribution of these events can be used to search for various phenomenological signatures of quantum gravity in the neutrino sector, such as violation of Lorentz invariance (VLI) or quantum decoherence (QD). Analyzing a set of 5511 candidate neutrino events collected during 1387 days of livetime from 2000 to 2006, we find no evidence for such effects and set upper limits on VLI and QD parameters using a maximum likelihood method. Given the absence of evidence for new flavor-changing physics, we use the same methodology to determine the conventional atmospheric muon neutrino flux above 100 GeV.

  4. A search for neutrino-induced electromagnetic showers in the 2008 combined IceCube and AMANDA detectors

    NASA Astrophysics Data System (ADS)

    Rutledge, Douglas Lowery

    The Antarctic Muon and Neutrino Detector Array (AMANDA) and its successor experiment, IceCube, are both Cherenkov detectors deployed very near the geographic South Pole. The Cherenkov technique uses the light emitted by charged particles that travel faster than the propagation velocity of light in the detector medium. This light can be used to detect the daughter particles from interactions of neutrinos of all flavors in the ice. The topology of neutrino interaction events is strongly dependent on the neutrino flavor, allowing separate measurements to be made. Electrons resulting from neutrino interactions leave spherical events by depositing all of their energy within a small region; events of this type are often referred to as "cascades." Muons propagate over long distances, leaving Cherenkov light distributed along a line. The principal event topology for taus, called a "double bang," consists of two spatially separated cascades. There are many potential benefits to running a search for neutrino-induced cascades using the combined readout from both the IceCube and the AMANDA detectors. AMANDA is sensitive to lower energies, owing to its denser distribution of PMTs. IceCube has a much larger volume, allowing it to make better measurements of the background. This allows for better background rejection techniques, and thus a higher final signal rate. This work presents a search for cascades from the atmospheric neutrino flux using the combined data from AMANDA's Transient Waveform Recorder (TWR) data acquisition system and IceCube's 40-string detector configuration. After the 200 Hz background rate is removed, the final measured rate of cascade candidates is 2.5 × 10⁻⁷ Hz (+3.8 × 10⁻⁷ / −9.9 × 10⁻⁸ Hz stat., ± 9.8 × 10⁻⁸ Hz syst.). The dataset used in this work was collected over 187 days from April to November 2008.

  5. On the convergence of ionospheric constrained precise point positioning (IC-PPP) based on undifferential uncombined raw GNSS observations.

    PubMed

    Zhang, Hongping; Gao, Zhouzheng; Ge, Maorong; Niu, Xiaoji; Huang, Ling; Tu, Rui; Li, Xingxing

    2013-01-01

    Precise Point Positioning (PPP) has become a very hot topic in GNSS research and applications. However, it usually takes several tens of minutes to obtain positions with better than 10 cm accuracy. This prevents PPP from being widely used in real-time kinematic positioning services; therefore, a large effort has been made to tackle the convergence problem. One of the recent approaches is the ionospheric delay constrained precise point positioning (IC-PPP) that uses the spatial and temporal characteristics of ionospheric delays and also delays from an a priori model. In this paper, the impact of the quality of ionospheric models on the convergence of IC-PPP is evaluated using the IGS global ionospheric map (GIM) updated every two hours and a regional satellite-specific correction model. Furthermore, the effect of the receiver differential code bias (DCB) is investigated by comparing the convergence time for IC-PPP with and without estimation of the DCB parameter. From the results of processing a large amount of data, on the one hand, the quality of the a priori ionospheric delays plays a very important role in IC-PPP convergence. Generally, regional dense GNSS networks can provide more precise ionospheric delays than GIM and can consequently reduce the convergence time. On the other hand, ignoring the receiver DCB may considerably extend convergence, and the larger the DCB, the longer the convergence time. Estimating the receiver DCB in IC-PPP is a proper way to overcome this problem. Therefore, current IC-PPP should be enhanced by estimating the receiver DCB and employing regional satellite-specific ionospheric correction models in order to speed up its convergence for more practical applications. PMID:24253190
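
    A hedged toy sketch of the role the receiver DCB plays in ionosphere-constrained processing: geometry-free code observations carry (1 − γ)·I plus the receiver DCB, and a-priori ionospheric delays enter as weighted pseudo-observations; estimating the DCB column alongside the slant delays removes the bias the abstract describes. Everything below (numbers, noise levels) is synthetic and is not the IC-PPP implementation.

    ```python
    # Hedged toy sketch (not the IC-PPP estimator): geometry-free code observations
    # P_GF = P1 - P2 = (1 - gamma) * I + DCB_r, combined with a-priori ionospheric
    # delays used as pseudo-observations, solved by weighted least squares.
    # All numbers are synthetic.
    import numpy as np

    gamma = (1575.42 / 1227.60) ** 2                   # (f_L1 / f_L2)^2 for GPS
    true_iono = np.array([4.0, 6.5, 3.2, 8.1, 5.4])    # slant delays on L1 [m]
    true_dcb = 2.3                                     # receiver DCB in metres

    rng = np.random.default_rng(0)
    p_gf = (1.0 - gamma) * true_iono + true_dcb + rng.normal(0, 0.3, true_iono.size)
    iono_prior = true_iono + rng.normal(0, 0.5, true_iono.size)    # e.g. GIM values

    n = true_iono.size
    # unknowns: [I_1 ... I_n, DCB_r]
    A = np.zeros((2 * n, n + 1))
    y = np.zeros(2 * n)
    A[:n, :n] = (1.0 - gamma) * np.eye(n); A[:n, n] = 1.0; y[:n] = p_gf       # code obs
    A[n:, :n] = np.eye(n);                                 y[n:] = iono_prior  # pseudo-obs
    w = np.concatenate([np.full(n, 1 / 0.3**2), np.full(n, 1 / 0.5**2)])       # weights

    x = np.linalg.lstsq(A * np.sqrt(w)[:, None], y * np.sqrt(w), rcond=None)[0]
    print("estimated DCB:", x[-1], " true:", true_dcb)
    ```

    In this toy model, dropping the pseudo-observation rows makes the DCB column and a common shift of all slant delays inseparable, which is the slow-convergence problem the abstract attributes to an unestimated receiver DCB.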

  6. On the Convergence of Ionospheric Constrained Precise Point Positioning (IC-PPP) Based on Undifferential Uncombined Raw GNSS Observations

    PubMed Central

    Zhang, Hongping; Gao, Zhouzheng; Ge, Maorong; Niu, Xiaoji; Huang, Ling; Tu, Rui; Li, Xingxing

    2013-01-01

    Precise Point Positioning (PPP) has become a very hot topic in GNSS research and applications. However, it usually takes several tens of minutes to obtain positions with better than 10 cm accuracy. This prevents PPP from being widely used in real-time kinematic positioning services; therefore, a large effort has been made to tackle the convergence problem. One of the recent approaches is the ionospheric delay constrained precise point positioning (IC-PPP) that uses the spatial and temporal characteristics of ionospheric delays and also delays from an a priori model. In this paper, the impact of the quality of ionospheric models on the convergence of IC-PPP is evaluated using the IGS global ionospheric map (GIM) updated every two hours and a regional satellite-specific correction model. Furthermore, the effect of the receiver differential code bias (DCB) is investigated by comparing the convergence time for IC-PPP with and without estimation of the DCB parameter. From the results of processing a large amount of data, on the one hand, the quality of the a priori ionospheric delays plays a very important role in IC-PPP convergence. Generally, regional dense GNSS networks can provide more precise ionospheric delays than GIM and can consequently reduce the convergence time. On the other hand, ignoring the receiver DCB may considerably extend convergence, and the larger the DCB, the longer the convergence time. Estimating the receiver DCB in IC-PPP is a proper way to overcome this problem. Therefore, current IC-PPP should be enhanced by estimating the receiver DCB and employing regional satellite-specific ionospheric correction models in order to speed up its convergence for more practical applications. PMID:24253190

  7. The Global Aerosol Synthesis and Science Project (GASSP): Using a Comprehensive Synthesis of Aerosol Observations and Statistical Modelling to Constrain Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Reddington, C.; Lee, L.; Carslaw, K. S.; Liu, D.; Allan, J. D.; Coe, H.; Pringle, K.; Stier, P.; Partridge, D.; Schutgens, N.

    2014-12-01

    Over the past few decades there has been enormous investment in atmospheric aerosol measurements across the globe. However, ultimately only a small fraction of these measurements are used to test and improve models. GASSP aims to bring together as much aerosol measurement data as possible, in combination with a novel application of statistical methods, to test and improve atmospheric model processes and improve our understanding of global aerosol and climate. Presently, we have synthesised a vast array of diverse aerosol measurements from aircraft, ground stations and ships, combining campaign and long-term measurements conducted over the past two decades. These data include in-situ measurements of cloud condensation nuclei and aerosol particle number concentrations, sizes and chemical composition. By combining different aerosol measurements we can ensure that the model skill is consistent across a range of aerosol properties in a range of environments. We will present spatial maps and time series of these data, identifying key regions where gaps currently exist in the dataset and where future contributions from the measurement community will be most crucial. We have also performed a sensitivity analysis of the output from a global aerosol model, which has identified the important sources of parameter uncertainty in all model grid cells throughout a single year. Cluster analysis of these data shows which model uncertainties can be constrained by observations in any particular global region during the year. Similarities and distinctions between clusters allow us to identify how observations made around the globe have the potential to constrain the global aerosol model and identify which model uncertainties will remain irreducible with the current suite of observations. As a first step we have used synthetic observations to constrain the model uncertainties and quantify the potential of real observations for model constraint. We then use these results to target real
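
    A minimal Python sketch of the clustering idea described above: per-grid-cell vectors of parameter-sensitivity fractions (synthetic here, standing in for the emulator-based variance decomposition) are grouped with k-means so that cells sharing the same dominant uncertainties fall in the same cluster and can, in principle, be constrained by the same observations.

    ```python
    # Hedged sketch: cluster grid cells by which model parameters dominate their
    # output variance, so observations in one cluster member can speak for the others.
    # Synthetic sensitivity fractions stand in for the real emulator-based analysis.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(42)
    n_cells, n_params = 500, 8
    # each row: fraction of output variance attributed to each parameter in one grid cell
    raw = rng.gamma(0.5, size=(n_cells, n_params))
    sensitivity = raw / raw.sum(axis=1, keepdims=True)

    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(sensitivity)
    labels = km.labels_

    for k in range(5):
        dominant = km.cluster_centers_[k].argmax()
        size = (labels == k).sum()
        print(f"cluster {k}: {size} cells, dominant parameter index {dominant}")
    ```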

  8. Explaining postseismic and aseismic transient deformation in subduction zones with rate and state friction modeling constrained by lab and geodetic observations

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Dedontney, N. L.; Rice, J. R.

    2007-12-01

    Rate and state friction, as applied to modeling subduction earthquake sequences, routinely predicts postseismic slip. It also predicts spontaneous aseismic slip transients, at least when pore pressure p is highly elevated near and downdip from the stability transition [Liu and Rice, 2007]. Here we address how to make such postseismic and transient predictions more fully compatible with geophysical observations. For example, lab observations can determine the a, b parameters and state evolution slip L of rate and state friction as functions of lithology and temperature and, with aid of a structural and thermal model of the subduction zone, as functions of downdip distance. Geodetic observations constrain interseismic, postseismic and aseismic transient deformations, which are controlled in the modeling by the distributions of aσ̄ and bσ̄ (parameters which also partly control the seismic rupture phase), where σ̄ = σ − p. Elevated p, controlled by tectonic compression and dehydration, may be constrained by petrologic and seismic observations. The amount of deformation and downdip extent of the slipping zone associated with the spontaneous quasi-periodic transients, as thus far modeled [Liu and Rice, 2007], is generally smaller than that observed during episodes of slow slip events in northern Cascadia and SW Japan subduction zones. However, the modeling was based on lab data for granite gouge under hydrothermal conditions because data is most complete for that case. We here report modeling based on lab data on dry granite gouge [Stesky, 1975; Lockner et al., 1986], involving no or lessened chemical interaction with water and hence being a possibly closer analog to dehydrated oceanic crust, and limited data on gabbro gouge [He et al., 2007], an expected lithology. Both data sets show a much less rapid increase of a−b with temperature above the stability transition (~ 350 °C) than does wet granite gouge; a−b increases to ~ 0.08 for wet granite at 600
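
    For readers unfamiliar with the notation, the following short Python sketch evaluates the textbook steady-state rate-and-state relations behind the abstract's a, b, and σ̄ = σ − p: steady-state friction μss(V) = μ0 + (a − b) ln(V/V0) and the effective normal stress. Parameter values are illustrative only, not the values used in the modeling.

    ```python
    # Hedged sketch of textbook rate-and-state relations referenced in the abstract:
    # steady-state friction mu_ss(V) = mu0 + (a - b) * ln(V / V0) and the effective
    # normal stress sigma_bar = sigma - p. Parameter values below are illustrative only.
    import numpy as np

    def mu_steady_state(v, mu0=0.6, a=0.010, b=0.014, v0=1e-6):
        """Steady-state friction coefficient; a - b < 0 means velocity weakening."""
        return mu0 + (a - b) * np.log(v / v0)

    def effective_normal_stress(sigma_mpa, pore_pressure_mpa):
        """sigma_bar = sigma - p; elevated p lowers (a - b)*sigma_bar and favours transients."""
        return sigma_mpa - pore_pressure_mpa

    v = np.logspace(-9, -3, 5)                      # slip rates, m/s
    print(mu_steady_state(v))                       # friction at each rate
    print(effective_normal_stress(100.0, 95.0))     # near-lithostatic pore pressure -> 5 MPa
    ```

    The sign of (a − b)·σ̄ separates the velocity-weakening (potentially unstable) regime from the velocity-strengthening (stable) regime discussed in the abstract.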

  9. Search for high energy neutrino induced cascades with the AMANDA-B10 detector

    NASA Astrophysics Data System (ADS)

    Toboada Fermin, Ignacio Jose

    2002-08-01

    The Antarctic Muon And Neutrino Detector Array, AMANDA, is a Cherenkov detector deployed deep in the ice cap at the South Pole. Charged particles traveling faster than the speed of light in ice produce Cherenkov radiation that is detected by photomultiplier tubes. Using the information obtained by the photomultiplier tubes, the physical characteristics of the particles, such as direction and energy, can be reconstructed. High energy neutrinos of all flavors can produce particle cascades when interacting with matter. In ice, cascades are typically a few meters long, much smaller than the dimensions of AMANDA. Electron neutrinos produce cascades via both the charged and neutral current interactions. Muon and tau neutrinos produce cascades via the neutral current interaction. Isolated cascades are also produced by tau neutrinos via charged current interactions, because the resulting tau, at energies below a few hundred TeV, will travel only a few meters before decaying. Advantages of the cascade channel, compared to neutrino-induced muons, are better energy resolution and an order of magnitude lower background from atmospheric neutrinos when searching for extraterrestrial neutrinos. Data collected in 1997 were searched for high energy neutrino-induced cascades. A total of 1.18 × 10⁹ events were recorded for an effective live-time of 130.1 days. The overwhelming majority of the events recorded were produced by down-going cosmic-ray induced muons. Bright muon energy losses are the main background when searching for high energy extraterrestrial neutrino-induced cascades. The sensitivity of the detector to cascades has been studied using in-situ light sources. No evidence for the existence of a diffuse flux of high energy neutrinos has been found. Limits have been set for fluxes following an E⁻² power law spectrum. For νe+ν̄e the limit is E²Φ < 5.7–7.1 × 10⁻⁶ GeV cm⁻² s⁻¹ sr⁻¹ (90% C.L.). For νe+ν̄e+νμ+ν̄μ+ντ+ν̄τ the limit is

  10. Constraining precipitation initiation in marine stratocumulus using aircraft observations and LES with high spectral resolution bin microphysics

    NASA Astrophysics Data System (ADS)

    Witte, M.; Chuang, P. Y.; Rossiter, D.; Ayala, O.; Wang, L. P.

    2015-12-01

    Turbulence has been suggested as one possible mechanism to accelerate the onset of autoconversion and widen the process "bottleneck" in the formation of warm rain. While direct observation of the collision-coalescence process remains beyond the reach of present-day instrumentation, co-located sampling of atmospheric motion and the drop size spectrum allows for comparison of in situ observations with simulation results to test representations of drop growth processes. This study evaluates whether observations of drops in the autoconversion regime can be replicated using our best theoretical understanding of collision-coalescence. A state-of-the-art turbulent collisional growth model is applied to a bin microphysics scheme within a large-eddy simulation such that the full range of cloud drop growth mechanisms is represented (i.e. CCN activation, condensation, collision-coalescence, mixing, etc.) at realistic atmospheric conditions. The spectral resolution of the microphysics scheme has been quadrupled in order to (a) more closely match the resolution of the observational instrumentation and (b) limit numerical diffusion, which leads to spurious broadening of the drop size spectrum at standard mass-doubling resolution. We compare simulated cloud drop spectra with those obtained from aircraft observations to assess the quality and limits of our theoretical knowledge. The comparison is performed for two observational cases from the Physics of Stratocumulus Top (POST) field campaign: 12 August 2008 (drizzling night flight, Rmax ~2 mm/d) and 15 August 2008 (nondrizzling day flight, Rmax <0.5 mm/d). Both flights took place off the coast of Monterey, CA, and the two cases differ in their radiative cooling rates, shear, cloud-top temperature and moisture jumps, and entrainment rates. Initial results from a collision box model suggest that enhancements of approximately 2 orders of magnitude over theoretical turbulent collision rates may be necessary to reproduce the

  11. Gravitational-wave Observations May Constrain Gamma-Ray Burst Models: The Case of GW150914–GBM

    NASA Astrophysics Data System (ADS)

    Veres, P.; Preece, R. D.; Goldstein, A.; Mészáros, P.; Burns, E.; Connaughton, V.

    2016-08-01

    The possible short gamma-ray burst (GRB) observed by Fermi/GBM in coincidence with the first gravitational-wave (GW) detection offers new ways to test GRB prompt emission models. GW observations provide previously inaccessible physical parameters for the black hole central engine such as its horizon radius and rotation parameter. Using a minimum jet launching radius from the Advanced LIGO measurement of GW150914, we calculate photospheric and internal shock models and find that they are marginally inconsistent with the GBM data, but cannot be definitely ruled out. Dissipative photosphere models, however, have no problem explaining the observations. Based on the peak energy and the observed flux, we find that the external shock model gives a natural explanation, suggesting a low interstellar density (∼10⁻³ cm⁻³) and a high Lorentz factor (∼2000). We only speculate on the exact nature of the system producing the gamma-rays, and study the parameter space of a generic Blandford–Znajek model. If future joint observations confirm the GW–short-GRB association we can provide similar but more detailed tests for prompt emission models.

  12. Gravitational-wave Observations May Constrain Gamma-Ray Burst Models: The Case of GW150914–GBM

    NASA Astrophysics Data System (ADS)

    Veres, P.; Preece, R. D.; Goldstein, A.; Mészáros, P.; Burns, E.; Connaughton, V.

    2016-08-01

    The possible short gamma-ray burst (GRB) observed by Fermi/GBM in coincidence with the first gravitational-wave (GW) detection offers new ways to test GRB prompt emission models. GW observations provide previously inaccessible physical parameters for the black hole central engine such as its horizon radius and rotation parameter. Using a minimum jet launching radius from the Advanced LIGO measurement of GW150914, we calculate photospheric and internal shock models and find that they are marginally inconsistent with the GBM data, but cannot be definitely ruled out. Dissipative photosphere models, however, have no problem explaining the observations. Based on the peak energy and the observed flux, we find that the external shock model gives a natural explanation, suggesting a low interstellar density (∼10⁻³ cm⁻³) and a high Lorentz factor (∼2000). We only speculate on the exact nature of the system producing the gamma-rays, and study the parameter space of a generic Blandford–Znajek model. If future joint observations confirm the GW–short-GRB association we can provide similar but more detailed tests for prompt emission models.

  13. Fault and anthropogenic processes in central California constrained by satellite and airborne InSAR and in-situ observations

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Lundgren, Paul

    2016-07-01

    , but are subject to severe decorrelation. The L-band ALOS and UAVSAR SAR sensors provide improved coherence compared to the shorter wavelength radar data. Joint analysis of UAVSAR and ALOS interferometry measurements show clear variability in deformation along the fault strike, suggesting variable fault creep and locking at depth and along strike. Modeling selected fault transects reveals a distinct change in surface creep and shallow slip deficit from the central creeping section towards the Parkfield transition. In addition to fault creep, the L-band ALOS, and especially ALOS-2 ScanSAR interferometry, show large-scale ground subsidence in the SJV due to over-exploitation of groundwater. Groundwater related deformation is spatially and temporally variable and is composed of both recoverable elastic and non-recoverable inelastic components. InSAR time series are compared to GPS and well-water hydraulic head in-situ time series to understand water storage processes and mass loading changes. We are currently developing poroelastic finite element method models to assess the influence of anthropogenic processes on surface deformation and fault mechanics. Ongoing work is to better constrain both tectonic and non-tectonic processes and understand their interaction and implication for regional earthquake hazard.

  14. Constraining the dark fluid

    SciTech Connect

    Kunz, Martin; Liddle, Andrew R.; Parkinson, David; Gao Changjun

    2009-10-15

    Cosmological observations are normally fit under the assumption that the dark sector can be decomposed into dark matter and dark energy components. However, as long as the probes remain purely gravitational, there is no unique decomposition and observations can only constrain a single dark fluid; this is known as the dark degeneracy. We use observations to directly constrain this dark fluid in a model-independent way, demonstrating, in particular, that the data cannot be fit by a dark fluid with a single constant equation of state. Parametrizing the dark fluid equation of state by a variety of polynomials in the scale factor a, we use current kinematical data to constrain the parameters. While the simplest interpretation of the dark fluid remains that it is comprised of separate dark matter and cosmological constant contributions, our results cover other model types including unified dark energy/matter scenarios.
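
    A minimal sketch of what "parametrizing the dark fluid equation of state by a polynomial in a" implies for the expansion history: integrate the continuity equation to get ρ(a) and hence H(a)/H0. The specific polynomial form and coefficients below are illustrative assumptions, and baryons and radiation are ignored for brevity; this is not the analysis of the paper.

    ```python
    # Hedged sketch: turn a polynomial equation of state w(a) = w0 + w1*(1-a) + w2*(1-a)^2
    # into a dark-fluid density history and an expansion rate, ignoring baryons and
    # radiation for brevity. The polynomial form and coefficients are illustrative only.
    import numpy as np
    from scipy.integrate import quad

    def w_of_a(a, coeffs=(-0.8, 0.3, 0.0)):
        w0, w1, w2 = coeffs
        return w0 + w1 * (1.0 - a) + w2 * (1.0 - a) ** 2

    def rho_ratio(a, coeffs=(-0.8, 0.3, 0.0)):
        """rho(a)/rho(1) = exp(3 * integral_a^1 (1 + w(a'))/a' da') from the continuity equation."""
        integral, _ = quad(lambda ap: (1.0 + w_of_a(ap, coeffs)) / ap, a, 1.0)
        return np.exp(3.0 * integral)

    def hubble_ratio(a, coeffs=(-0.8, 0.3, 0.0)):
        """E(a) = H(a)/H0 for a universe dominated by the single dark fluid."""
        return np.sqrt(rho_ratio(a, coeffs))

    for a in (1.0, 0.5, 0.25):
        print(a, hubble_ratio(a))
    ```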

  15. Effect of time-varying tropospheric models on near-regional and regional infrasound propagation as constrained by observational data

    NASA Astrophysics Data System (ADS)

    McKenna, Mihan H.; Stump, Brian W.; Hayward, Chris

    2008-06-01

    The Chulwon Seismo-Acoustic Array (CHNAR) is a regional seismo-acoustic array with co-located seismometers and infrasound microphones on the Korean peninsula. Data from forty-two days over the course of a year between October 1999 and August 2000 were analyzed; 2052 infrasound-only arrivals and 23 seismo-acoustic arrivals were observed over the six-week study period. A majority of the signals occur during local working hours, hour 0 to hour 9 UT, and appear to be the result of cultural activity located within a 250 km radius. Atmospheric modeling is presented for four sample days during the study period, one in each of November, February, April, and August. Local meteorological data sampled at six-hour intervals are needed to accurately model the observed arrivals, and these data produced highly temporally variable thermal ducts that propagated infrasound signals within 250 km, matching the temporal variation in the observed arrivals. These ducts change dramatically on the order of hours, and meteorological data from the appropriate sampled time frame were necessary to interpret the observed arrivals.

  16. A maximum-likelihood search for neutrino point sources with the AMANDA-II detector

    NASA Astrophysics Data System (ADS)

    Braun, James R.

    Neutrino astronomy offers a new window to study the high energy universe. The AMANDA-II detector records neutrino-induced muon events in the ice sheet beneath the geographic South Pole, and has accumulated 3.8 years of livetime from 2000 - 2006. After reconstructing muon tracks and applying selection criteria, we arrive at a sample of 6595 events originating from the Northern Sky, predominantly atmospheric neutrinos with primary energy 100 GeV to 8 TeV. We search these events for evidence of astrophysical neutrino point sources using a maximum-likelihood method. No excess above the atmospheric neutrino background is found, and we set upper limits on neutrino fluxes. Finally, a well-known potential dark matter signature is emission of high energy neutrinos from annihilation of WIMPs gravitationally bound to the Sun. We search for high energy neutrinos from the Sun and find no excess. Our limits on WIMP-nucleon cross section set new constraints on MSSM parameter space.
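
    A hedged sketch of the generic unbinned point-source likelihood such searches maximize, L(ns) = Πi [(ns/N) Si + (1 − ns/N) Bi], with a Gaussian spatial signal PDF and a flat background PDF standing in for the detector-specific PDFs. Synthetic events only; this is not the AMANDA analysis code.

    ```python
    # Hedged sketch of a generic unbinned point-source likelihood
    #   L(n_s) = prod_i [ (n_s/N) S_i + (1 - n_s/N) B_i ],
    # with a Gaussian spatial signal PDF and a flat background PDF standing in for the
    # detector-specific PDFs. Synthetic events; not the AMANDA analysis code.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(3)
    N = 1000
    psi = 5.0 * np.sqrt(rng.uniform(0.0, 1.0, N))   # angular distance to source, deg (uniform per unit area)
    sigma = 1.0                                     # per-event angular resolution, deg

    S = np.exp(-0.5 * (psi / sigma) ** 2) / (2 * np.pi * sigma ** 2)   # 2-D Gaussian signal PDF
    B = np.full(N, 1.0 / (np.pi * 5.0 ** 2))                           # uniform over the 5-deg disc

    def neg_log_l(ns):
        f = ns / N
        return -np.sum(np.log(f * S + (1.0 - f) * B))

    res = minimize_scalar(neg_log_l, bounds=(0.0, 100.0), method="bounded")
    ts = max(0.0, 2.0 * (neg_log_l(0.0) - res.fun))   # likelihood-ratio test statistic
    print(f"best-fit n_s = {res.x:.2f}, TS = {ts:.2f}")
    ```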

  17. Multi-year search for a diffuse flux of muon neutrinos with AMANDA-II

    SciTech Connect

    IceCube Collaboration; Klein, Spencer; Achterberg, A.; Collaboration, IceCube

    2008-04-13

    A search for TeV-PeV muon neutrinos from unresolved sources was performed on AMANDA-II data collected between 2000 and 2003 with an equivalent livetime of 807 days. This diffuse analysis sought to find an extraterrestrial neutrino flux from sources with non-thermal components. The signal is expected to have a harder spectrum than the atmospheric muon and neutrino backgrounds. Since no excess of events was seen in the data over the expected background, an upper limit of E²Φ90%C.L. < 7.4 × 10⁻⁸ GeV cm⁻² s⁻¹ sr⁻¹ is placed on the diffuse flux of muon neutrinos with a Φ ∝ E⁻² spectrum in the energy range 16 TeV to 2.5 PeV. This is currently the most sensitive Φ ∝ E⁻² diffuse astrophysical neutrino limit. We also set upper limits for astrophysical and prompt neutrino models, all of which have spectra different from Φ ∝ E⁻².

  18. Multiyear search for a diffuse flux of muon neutrinos with AMANDA-II

    NASA Astrophysics Data System (ADS)

    Achterberg, A.; Ackermann, M.; Adams, J.; Ahrens, J.; Andeen, K.; Auffenberg, J.; Bai, X.; Baret, B.; Barwick, S. W.; Bay, R.; Beattie, K.; Becka, T.; Becker, J. K.; Becker, K.-H.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Bouchta, A.; Braun, J.; Burgess, T.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cowen, D. F.; D'Agostino, M. V.; Davour, A.; Day, C. T.; de Clercq, C.; Demirörs, L.; Descamps, F.; Desiati, P.; De Young, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Geenen, H.; Gerhardt, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hardtke, D.; Hardtke, R.; Hart, J. E.; Hasegawa, Y.; Hauschildt, T.; Hays, D.; Heise, J.; Helbing, K.; Hellwig, M.; Herquet, P.; Hill, G. C.; Hodges, J.; Hoffman, K. D.; Hommez, B.; Hoshina, K.; Hubert, D.; Hughey, B.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hundertmark, S.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Jones, A.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kawai, H.; Kelley, J. L.; Kislat, F.; Kitamura, N.; Klein, S. R.; Klepser, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kuehn, K.; Labare, M.; Landsman, H.; Lauer, R.; Leich, H.; Leier, D.; Liubarsky, I.; Lundberg, J.; Lünemann, J.; Madsen, J.; Maruyama, R.; Mase, K.; Matis, H. S.; McCauley, T.; McParland, C. P.; Meli, A.; Messarius, T.; Mészáros, P.; Miyamoto, H.; Mokhtarani, A.; Montaruli, T.; Morey, A.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Ögelman, H.; Olivas, A.; Patton, S.; Peña-Garay, C.; Pérez de Los Heros, C.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Pretz, J.; Price, P. B.; Przybylski, G. T.; Rawlins, K.; Razzaque, S.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Robbins, S.; Roth, P.; Rothmaier, F.; Rott, C.; Rutledge, D.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Satalecka, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Smith, A. J.; Solarz, M.; Song, C.; Sopher, J. E.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K.-H.; Sullivan, G. W.; Sumner, T. J.; Taboada, I.; Tarasova, O.; Tepe, A.; Thollander, L.; Tilav, S.; Tluczykont, M.; Toale, P. A.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; van Overloop, A.; Viscomi, V.; Voigt, B.; Wagner, W.; Walck, C.; Waldmann, H.; Walter, M.; Wang, Y.-R.; Wendt, C.; Wiebusch, C. H.; Wiedemann, C.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Yoshida, S.; Zornoza, J. D.

    2007-08-01

    A search for TeV-PeV muon neutrinos from unresolved sources was performed on AMANDA-II data collected between 2000 and 2003 with an equivalent live time of 807 days. This diffuse analysis sought to find an extraterrestrial neutrino flux from sources with nonthermal components. The signal is expected to have a harder spectrum than the atmospheric muon and neutrino backgrounds. Since no excess of events was seen in the data over the expected background, an upper limit of E²Φ90%C.L. < 7.4 × 10⁻⁸ GeV cm⁻² s⁻¹ sr⁻¹ is placed on the diffuse flux of muon neutrinos with a Φ ∝ E⁻² spectrum in the energy range 16 TeV to 2.5 PeV. This is currently the most sensitive Φ ∝ E⁻² diffuse astrophysical neutrino limit. We also set upper limits for astrophysical and prompt neutrino models, all of which have spectra different from Φ ∝ E⁻².

  19. Survey Observation of S-bearing Species toward Neptune's Atmosphere to Constrain the Origin of Abundant Volatile Gases

    NASA Astrophysics Data System (ADS)

    Iino, T.; Mizuno, A.; Nagahama, T.; Nakajima, T.

    2013-09-01

    We present our recent submillimeter observations of CS, CO, and HCN in Neptune's atmosphere. The obtained abundances of both CO and HCN were comparable to previous observations. In contrast, CS gas, which was produced in large quantities after the impact of comet Shoemaker-Levy 9 on Jupiter in 1994, was not detected. The obtained [CS]/[CO] value was at least 300 times lower than in the SL9 event, even though thermochemical simulations give CS a considerably longer lifetime than other S-bearing species. The absence of CS raises a new question about the origin of trace gases in Neptune's atmosphere.

  20. Five years of searches for point sources of astrophysical neutrinos with the AMANDA-II neutrino telescope

    NASA Astrophysics Data System (ADS)

    Achterberg, A.; Ackermann, M.; Adams, J.; Ahrens, J.; Andeen, K.; Atlee, D. W.; Bahcall, J. N.; Bai, X.; Baret, B.; Barwick, S. W.; Bay, R.; Beattie, K.; Becka, T.; Becker, J. K.; Becker, K.-H.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Bouchta, A.; Braun, J.; Burgess, C.; Burgess, T.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cowen, D. F.; D'Agostino, M. V.; Davour, A.; Day, C. T.; de Clercq, C.; Demirörs, L.; Descamps, F.; Desiati, P.; De Young, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feser, T.; Filimonov, K.; Fox, B. D.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Geenen, H.; Gerhardt, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grullon, S.; Groß, A.; Gunasingha, R. M.; Gurtner, M.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hardtke, D.; Hardtke, R.; Harenberg, T.; Hart, J. E.; Hauschildt, T.; Hays, D.; Heise, J.; Helbing, K.; Hellwig, M.; Herquet, P.; Hill, G. C.; Hodges, J.; Hoffman, K. D.; Hommez, B.; Hoshina, K.; Hubert, D.; Hughey, B.; Hulth, P. O.; Hultqvist, K.; Hundertmark, S.; Hülß, J.-P.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Jones, A.; Joseph, J. M.; Kampert, K.-H.; Karle, A.; Kawai, H.; Kelley, J. L.; Kestel, M.; Kitamura, N.; Klein, S. R.; Klepser, S.; Kohnen, G.; Kolanoski, H.; Kowalski, M.; Köpke, L.; Krasberg, M.; Kuehn, K.; Landsman, H.; Leich, H.; Leier, D.; Leuthold, M.; Liubarsky, I.; Lundberg, J.; Lünemann, J.; Madsen, J.; Mase, K.; Matis, H. S.; McCauley, T.; McParland, C. P.; Meli, A.; Messarius, T.; Mészáros, P.; Miyamoto, H.; Mokhtarani, A.; Montaruli, T.; Morey, A.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Ögelman, H.; Olivas, A.; Patton, S.; Peña-Garay, C.; Pérez de Los Heros, C.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Pretz, J.; Price, P. B.; Przybylski, G. T.; Rawlins, K.; Razzaque, S.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Robbins, S.; Roth, P.; Rott, C.; Rutledge, D.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Seckel, D.; Seo, S. H.; Seunarine, S.; Silvestri, A.; Smith, A. J.; Solarz, M.; Song, C.; Sopher, J. E.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Steffen, P.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K.-H.; Sullivan, G. W.; Sumner, T. J.; Taboada, I.; Tarasova, O.; Tepe, A.; Thollander, L.; Tilav, S.; Tluczykont, M.; Toale, P. A.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; van Overloop, A.; Voigt, B.; Wagner, W.; Walck, C.; Waldmann, H.; Walter, M.; Wang, Y.-R.; Wendt, C.; Wiebusch, C. H.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Yoshida, S.; Zornoza, J. D.

    2007-05-01

    We report the results of a five-year survey of the northern sky to search for point sources of high energy neutrinos. The search was performed on the data collected with the AMANDA-II neutrino telescope in the years 2000 to 2004, with a live time of 1001 days. The sample of selected events consists of 4282 upward going muon tracks with high reconstruction quality and an energy larger than about 100 GeV. We found no indication of point sources of neutrinos and set 90% confidence level flux upper limits for an all-sky search and also for a catalog of 32 selected sources. For the all-sky search, our average (over declination and right ascension) experimentally observed upper limit Φ⁰ = (E/1 TeV)^γ · dΦ/dE to a point source flux of muon and tau neutrinos (detected as muons arising from taus) is Φ⁰(νμ+ν̄μ) + Φ⁰(ντ+ν̄τ) = 11.1 × 10⁻¹¹ TeV⁻¹ cm⁻² s⁻¹, in the energy range between 1.6 TeV and 2.5 PeV for a flavor ratio Φ⁰(νμ+ν̄μ)/Φ⁰(ντ+ν̄τ) = 1 and assuming a spectral index γ = 2. It should be noted that this is the first time we set upper limits on the flux of muon and tau neutrinos; in previous papers we provided muon neutrino upper limits only, neglecting the sensitivity to a signal from tau neutrinos, which improves the limits by 10% to 16%. The value of the average upper limit presented in this work corresponds to twice the limit on the muon neutrino flux, Φ⁰(νμ+ν̄μ) = 5.5 × 10⁻¹¹ TeV⁻¹ cm⁻² s⁻¹. A stacking analysis for preselected active galactic nuclei and a search based on the angular separation of the events were also performed. We report the most stringent flux upper limits to date, including the results of a detailed assessment of systematic uncertainties.

  1. The linkage between stratospheric water vapor and surface temperature in an observation-constrained coupled general circulation model

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Su, Hui; Jiang, Jonathan H.; Livesey, Nathaniel J.; Santee, Michelle L.; Froidevaux, Lucien; Read, William G.; Anderson, John

    2016-06-01

    We assess the interactions between stratospheric water vapor (SWV) and surface temperature during the past two decades using satellite observations and the Community Earth System Model (CESM). From 1992 to 2013, to first order, the observed SWV exhibited three distinct piece-wise trends: a steady increase from 1992 to 2000, an abrupt drop from 2000 to 2004, and a gradual recovery after 2004, while the global-mean surface temperature experienced a strong increase until 2000 and a warming hiatus after 2000. The atmosphere-only CESM shows that the seasonal variation of tropical-mean (30°S-30°N) SWV is anticorrelated with that of the tropical-mean sea surface temperature (SST), while the correlation between the tropical SWV and SST anomalies on the interannual time scale is rather weak. By nudging the modeled SWV to prescribed profiles in coupled atmosphere-slab ocean experiments, we investigate the impact of SWV variations on surface temperature change. We find that a uniform 1 ppmv (0.5 ppmv) SWV increase (decrease) leads to an equilibrium global mean surface warming (cooling) of 0.12 ± 0.05 °C (-0.07 ± 0.05 °C). Sensitivity experiments show that the equilibrium response of global mean surface temperature to SWV perturbations over the extratropics is larger than that over the tropics. The observed sudden drop of SWV from 2000 to 2004 produces a global mean surface cooling of about -0.048 ± 0.041 °C, which suggests that a persistent change in SWV would make an imprint on long-term variations of global-mean surface temperature. A constant linear increase in SWV based on the satellite-observed rate of SWV change yields a global mean surface warming of 0.03 ± 0.01 °C/decade over a 50-year period, which accounts for about 19 % of the observed surface temperature increase prior to the warming hiatus. In the same experiment, trend analyses during different periods reveal a multi-year adjustment of surface temperature before the response to SWV forcing becomes

  2. Constraining the recent increase of the North Atlantic CO2 uptake by bringing together observations and models

    NASA Astrophysics Data System (ADS)

    Lebehot, Alice; Halloran, Paul; Watson, Andrew; McNeall, Doug; Schuster, Ute

    2016-04-01

    The North Atlantic Ocean is one of the strongest sinks for anthropogenic carbon dioxide (CO2) on the planet. To predict the North Atlantic response to the ongoing increase in atmospheric CO2, we need to understand, with robust estimates of uncertainty, how it has changed in the recent past. Although the number of sea surface pCO2 observations has increased by about a factor of 5 since 2002, the non-uniform temporal and spatial distribution of these measurements makes it difficult to estimate basin-wide CO2 uptake variability. To fill the observation gaps, and generate basin-wide pCO2 estimates, Multi Linear Regression (MLR) mapping techniques have been used (e.g. Watson et al., 2009). While this approach is powerful, it does not allow one to directly estimate the uncertainty in predictions away from the locations of observations. To overcome this challenge we subsample, and then predict using the MLR approach, the CMIP5 model data, for which we know the 'true' pCO2 and can therefore quantify the error in the prediction. Making the assumption that the CMIP5 models are a set of equally plausible realisations of reality, we use this approach to assign an uncertainty to a new basin-wide estimate of North Atlantic CO2 uptake over the past 20 years. Examining this time series, we find that the real world exhibits a strong increase in CO2 uptake which is not matched by any of the CMIP5 models.
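
    A small Python sketch of the evaluation strategy described above: treat a model pCO2 field as "truth", sample it only where observations exist, fit a multiple linear regression on predictor fields, map pCO2 everywhere, and measure the error of the reconstruction against the known field. All fields, predictors and numbers here are synthetic placeholders, not the GASSP/CMIP5 data.

    ```python
    # Hedged sketch of the subsample-then-map evaluation idea: fit an MLR at "observed"
    # locations of a synthetic truth field and quantify the basin-wide mapping error.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(7)
    n_grid = 5000
    sst = rng.normal(15.0, 5.0, n_grid)                  # predictor 1 (synthetic SST)
    mld = rng.lognormal(3.0, 0.5, n_grid)                # predictor 2 (synthetic mixed-layer depth)
    truth = 340.0 + 1.8 * sst - 0.05 * mld + rng.normal(0, 5.0, n_grid)   # "model truth" pCO2

    observed_idx = rng.choice(n_grid, size=300, replace=False)   # sparse, uneven sampling
    X = np.column_stack([sst, mld])

    mlr = LinearRegression().fit(X[observed_idx], truth[observed_idx])
    mapped = mlr.predict(X)

    basin_mean_error = mapped.mean() - truth.mean()
    rmse = np.sqrt(np.mean((mapped - truth) ** 2))
    print(f"basin-mean error: {basin_mean_error:.2f} uatm, RMSE: {rmse:.2f} uatm")
    ```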

  3. Constraining magnetic-activity modulations in three solar-like stars observed by CoRoT and NARVAL

    NASA Astrophysics Data System (ADS)

    Mathur, S.; García, R. A.; Morgenthaler, A.; Salabert, D.; Petit, P.; Ballot, J.; Régulo, C.; Catala, C.

    2013-02-01

    Context. Stellar activity cycles are the manifestation of dynamo processes running in stellar interiors. They have been observed over years to decades thanks to the measurement of stellar magnetic proxies on the surfaces of stars, such as chromospheric and X-ray emission, and to the measurement of the magnetic field with spectropolarimetry. However, all of these measurements rely on external features that cannot be visible during, for example, a Maunder-type minimum. With the advent of long observations provided by space asteroseismic missions, it has been possible to penetrate the stars and study their properties. Moreover, the acoustic-mode properties are also perturbed by the presence of these dynamos. Aims: We track the temporal variations of the amplitudes and frequencies of acoustic modes, allowing us to search for signatures of magnetic activity cycles, as has already been done in the Sun and in the CoRoT target HD 49933. Methods: We used asteroseismic tools and more classical spectroscopic measurements performed with the NARVAL spectropolarimeter to check whether there are hints of any activity cycle in three solar-like stars observed continuously for more than 117 days by the CoRoT satellite: HD 49385, HD 181420, and HD 52265. To consider that we have found a hint of magnetic activity in a star we require finding a change in the amplitude of the p modes that is anti-correlated with a change in their frequency shifts, as well as a change in the spectroscopic observations in the same direction as the asteroseismic data. Results: Our analysis gives very small variations in the seismic parameters, preventing us from detecting any magnetic modulation. However, we are able to provide a lower limit on any magnetic-activity change in the three stars, which should be longer than 120 days, the length of the time series. Moreover, we computed upper limits for the line-of-sight magnetic field component of 1, 3, and 0.6 G for HD 49385, HD 181420

  4. A Synthesized Model-Observation Approach to Constraining Gross Urban CO2 Fluxes Using 14CO2 and carbonyl sulfide

    NASA Astrophysics Data System (ADS)

    LaFranchi, B. W.; Campbell, J. E.; Cameron-Smith, P. J.; Bambha, R.; Michelsen, H. A.

    2013-12-01

    Urbanized regions are responsible for a disproportionately large percentage (30-40%) of global anthropogenic greenhouse gas (GHG) emissions, despite covering only 2% of the Earth's surface area [Satterthwaite, 2008]. As a result, policies enacted at the local level in these urban areas can, in aggregate, have a large global impact, both positive and negative. In order to address the scientific questions that are required to drive these policy decisions, methods are needed that resolve gross CO2 flux components from the net flux. Recent work suggests that the critical knowledge gaps in CO2 surface fluxes could be addressed through the combined analysis of atmospheric carbonyl sulfide (COS) and radiocarbon in atmospheric CO2 (14CO2) [e.g. Campbell et al., 2008; Graven et al., 2009]. The 14CO2 approach relies on mass balance assumptions about atmospheric CO2 and the large differences in 14CO2 abundance between fossil and natural sources of CO2 [Levin et al., 2003]. COS, meanwhile, is a potentially transformative tracer of photosynthesis because its variability in the atmosphere has been found to be influenced primarily by vegetative uptake, scaling linearly with gross primary production (GPP) [Kettle et al., 2002]. Taken together, these two observations provide constraints on two of the three main components of the CO2 budget at the urban scale: photosynthesis and fossil fuel emissions. The third component, respiration, can then be determined by difference if the net flux is known. Here we present a general overview of our synthesized model-observation approach for improving surface flux estimates of CO2 for the upwind fetch of a ~30 m tower located in Livermore, CA, USA, a suburb (pop. ~80,000) at the eastern edge of the San Francisco Bay Area. Additionally, we will present initial results from a one-week observational intensive, which includes continuous CO2, CH4, CO, SO2, NOx, and O3 observations in addition to measurements of 14CO2 and COS from air samples
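
    The 14CO2 mass balance mentioned above is often written in a simple two-component form (background air plus 14C-free fossil CO2). The Python sketch below implements only that simplest form; published implementations add correction terms (nuclear, biospheric and oceanic 14C fluxes) that are omitted here, so treat it as a schematic rather than the authors' method.

    ```python
    # Hedged sketch: the simplest two-component (background air + 14C-free fossil CO2)
    # mass balance often used with Delta14C observations. Published implementations add
    # correction terms (e.g. nuclear and biospheric 14C) omitted here.
    def fossil_co2_ppm(co2_obs_ppm, d14c_obs_permil, d14c_bg_permil):
        """Fossil-fuel CO2 mole fraction implied by the observed 14C depletion.

        Fossil carbon is 14C-free (Delta14C = -1000 permil), so the two-member mixing gives
        C_ff = C_obs * (Delta_bg - Delta_obs) / (Delta_bg + 1000).
        """
        return co2_obs_ppm * (d14c_bg_permil - d14c_obs_permil) / (d14c_bg_permil + 1000.0)

    # Example: an urban sample 2.8 permil more depleted than background air
    print(fossil_co2_ppm(410.0, 17.2, 20.0))   # ~1.1 ppm of fossil-fuel CO2
    ```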

  5. Magnetotelluric observations over the Rhine Graben, France: a simple impedance tensor analysis helps constrain the dominant electrical features

    NASA Astrophysics Data System (ADS)

    Mareschal, M.; Jouanne, V.; Menvielle, M.; Chouteau, M.; Grandis, H.; Tarits, P.

    1992-12-01

    A simple impedance tensor analysis of four magnetotelluric soundings recorded over the ECORS section of the Rhine Graben shows that for periods shorter than about 30 s, induction dominates over channelling. For longer periods, 2-D induction galvanically distorted by surface heterogeneities and/or current channelled in the Graben can explain the observations; the role of channelling becomes dominant at periods of the order of a few hundred seconds. In the area considered, induction appears to be controlled by inclusions of saline water in a porous limestone layer (Grande Oolithe) and not by the limits of the Graben with its crystalline shoulder (Vosges). The simple analysis is supported by tipper analyses and by the results of schematic 2-D modelling.
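
    As one concrete example of a "simple impedance tensor analysis", the Python sketch below computes the Swift skew, a rotational invariant of the 2×2 impedance tensor commonly used as a first dimensionality check; the tensors here are synthetic, and this is not the authors' processing code.

    ```python
    # Hedged sketch: one simple rotational invariant used in impedance-tensor analysis,
    # the Swift skew |Zxx + Zyy| / |Zxy - Zyx|. Small values are consistent with 1-D/2-D
    # induction; larger values point to 3-D structure or current channelling.
    # The tensors below are synthetic examples.
    import numpy as np

    def swift_skew(Z):
        """Z is a 2x2 complex impedance tensor [[Zxx, Zxy], [Zyx, Zyy]]."""
        return abs(Z[0, 0] + Z[1, 1]) / abs(Z[0, 1] - Z[1, 0])

    # near-2-D example: diagonal elements small compared with the off-diagonals
    Z_2d = np.array([[0.02 + 0.01j, 1.10 - 0.40j],
                     [-0.95 + 0.35j, -0.03 + 0.02j]])
    # distorted example: appreciable diagonal elements
    Z_3d = np.array([[0.40 + 0.20j, 1.10 - 0.40j],
                     [-0.95 + 0.35j, 0.55 - 0.25j]])

    print(swift_skew(Z_2d))   # small
    print(swift_skew(Z_3d))   # larger
    ```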

  6. THE POWER OF IMAGING: CONSTRAINING THE PLASMA PROPERTIES OF GRMHD SIMULATIONS USING EHT OBSERVATIONS OF Sgr A*

    SciTech Connect

    Chan, Chi-Kwan; Psaltis, Dimitrios; Özel, Feryal; Narayan, Ramesh; Sadowski, Aleksander

    2015-01-20

    Recent advances in general relativistic magnetohydrodynamic simulations have expanded and improved our understanding of the dynamics of black-hole accretion disks. However, current simulations do not capture the thermodynamics of electrons in the low density accreting plasma. This poses a significant challenge in predicting accretion flow images and spectra from first principles. Because of this, simplified emission models have often been used, with widely different configurations (e.g., disk- versus jet-dominated emission), and were able to account for the observed spectral properties of accreting black holes. Exploring the large parameter space introduced by such models, however, requires significant computational power that exceeds conventional computational facilities. In this paper, we use GRay, a fast graphics processing unit (GPU) based ray-tracing algorithm, on the GPU cluster El Gato, to compute images and spectra for a set of six general relativistic magnetohydrodynamic simulations with different magnetic field configurations and black-hole spins. We also employ two different parametric models for the plasma thermodynamics in each of the simulations. We show that, if only the spectral properties of Sgr A* are used, all 12 models tested here can fit the spectra equally well. However, when combined with the measurement of the image size of the emission using the Event Horizon Telescope, current observations rule out all models with strong funnel emission, because the funnels are typically very extended. Our study shows that images of accretion flows with horizon-scale resolution offer a powerful tool in understanding accretion flows around black holes and their thermodynamic properties.

  7. Combining physical galaxy models with radio observations to constrain the SFRs of high-z dusty star-forming galaxies

    NASA Astrophysics Data System (ADS)

    Lo Faro, B.; Silva, L.; Franceschini, A.; Miller, N.; Efstathiou, A.

    2015-03-01

    We complement our previous analysis of a sample of z ˜ 1-2 luminous and ultraluminous infrared galaxies [(U)LIRGs] by adding deep Very Large Array radio observations at 1.4 GHz to a large data set from the far-UV to the submillimetre, including Spitzer and Herschel data. Given the relatively small number of (U)LIRGs in our sample with high signal-to-noise (S/N) radio data, and to extend our study to a different family of galaxies, we also include six well-sampled near-infrared (near-IR)-selected BzK galaxies at z ˜ 1.5. From our analysis based on the radiative transfer spectral synthesis code GRASIL, we find that, while the IR luminosity may be a biased tracer of the star formation rate (SFR) depending on the age of the stars dominating the dust heating, the inclusion of the radio flux offers significantly tighter constraints on the SFR. Our predicted SFRs are in good agreement with the estimates based on rest-frame radio luminosity and the Bell calibration. The extensive spectrophotometric coverage of our sample allows us to set important constraints on the star formation (SF) history of individual objects. For essentially all galaxies, we find evidence for a rather continuous SFR and a peak epoch of SF preceding that of the observation by a few Gyr. This seems to correspond to a formation redshift of z ˜ 5-6. We finally show that our physical analysis may affect the interpretation of the SFR-M⋆ diagram, by possibly shifting, with respect to previous works, the position of the most dust obscured objects to higher M⋆ and lower SFRs.

  8. The Power of Imaging: Constraining the Plasma Properties of GRMHD Simulations using EHT Observations of Sgr A*

    NASA Astrophysics Data System (ADS)

    Chan, Chi-Kwan; Psaltis, Dimitrios; Özel, Feryal; Narayan, Ramesh; Sądowski, Aleksander

    2015-01-01

    Recent advances in general relativistic magnetohydrodynamic simulations have expanded and improved our understanding of the dynamics of black-hole accretion disks. However, current simulations do not capture the thermodynamics of electrons in the low density accreting plasma. This poses a significant challenge in predicting accretion flow images and spectra from first principles. Because of this, simplified emission models have often been used, with widely different configurations (e.g., disk- versus jet-dominated emission), and were able to account for the observed spectral properties of accreting black holes. Exploring the large parameter space introduced by such models, however, requires significant computational power that exceeds conventional computational facilities. In this paper, we use GRay, a fast graphics processing unit (GPU) based ray-tracing algorithm, on the GPU cluster El Gato, to compute images and spectra for a set of six general relativistic magnetohydrodynamic simulations with different magnetic field configurations and black-hole spins. We also employ two different parametric models for the plasma thermodynamics in each of the simulations. We show that, if only the spectral properties of Sgr A* are used, all 12 models tested here can fit the spectra equally well. However, when combined with the measurement of the image size of the emission using the Event Horizon Telescope, current observations rule out all models with strong funnel emission, because the funnels are typically very extended. Our study shows that images of accretion flows with horizon-scale resolution offer a powerful tool in understanding accretion flows around black holes and their thermodynamic properties.
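
    The model-selection logic described above can be illustrated with a toy comparison: each emission model is scored on how well it fits the observed spectrum and on whether its predicted image size is consistent with an interferometric size measurement. The Python sketch below uses entirely hypothetical fluxes, uncertainties, and image sizes; it is not the GRay pipeline.

```python
# Hedged sketch: ranking emission models by a spectral chi-square plus an
# image-size criterion, in the spirit of combining SED fits with an EHT size
# measurement. All numbers below are hypothetical placeholders.
import numpy as np

obs_flux = np.array([1.0, 2.5, 3.2, 1.8])      # observed fluxes (arbitrary units)
obs_err = np.array([0.2, 0.3, 0.3, 0.25])
size_obs, size_err = 40.0, 5.0                  # "measured" image FWHM (micro-arcsec)

models = {
    "disk-dominated": {"flux": np.array([1.1, 2.4, 3.0, 1.9]), "fwhm": 42.0},
    "funnel-dominated": {"flux": np.array([0.9, 2.6, 3.3, 1.7]), "fwhm": 75.0},
}

for name, m in models.items():
    chi2_spec = np.sum(((m["flux"] - obs_flux) / obs_err) ** 2)
    chi2_size = ((m["fwhm"] - size_obs) / size_err) ** 2
    verdict = "ok" if chi2_size < 9.0 else "ruled out by image size (>3 sigma)"
    print(f"{name}: chi2_spec={chi2_spec:.2f}, chi2_size={chi2_size:.2f} -> {verdict}")
```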

  9. Incorporating Sedimentological Observations, Hydrogeophysics and conceptual Knowledge to Constrain 3D Numerical Heterogeneity Models of Coarse Alluvial Systems

    NASA Astrophysics Data System (ADS)

    Huber, E.; Huggenberger, P.

    2012-12-01

    Accurate predictions on groundwater flow and transport behavior within fluvial and glaciofluvial sediments, but also interaction with surface water bodies, rely on knowledge of distributed aquifer properties. The complexity of the depositional and erosional processes in fluvial systems leads to highly heterogeneous distributions of hydrogeological parameters. The topic of this presentation is the system dynamics, such as the aggradation rates and channel mobility of alluvial systems; their influence on the preservation potential of the key depositional elements in the geological record; and their influence on the heterogeneity scales and the relevance for groundwater hydraulics. The aims of our work are to find a relation between surface morphological structures and the sedimentary structures in vertical profiles (i.e. gravel pits or GPR sections) and to derive rules for the interpretation of horizontal time-slices from 3D GPR data. Based on these data we set up conceptual models of the structures of coarse alluvial systems at different scales which can be tested by stochastic methods. Relevant depositional elements and a hierarchy or genetic relationship of such elements will be defined based on the knowledge of depositional processes in alluvial systems inferred from: field observations after major flood events; 2D and 3D GPR data; and from existing data derived from laboratory flumes. Extensive geophysical field experiments within the Tagliamento alluvial system gave new insights into the sedimentary structures developing at high flows. Owing to the fact that rivers often destroy at least part of their bed during or shortly after large floods and subsequently rebuild it, it is not easy to establish a simple relationship between surface morphology and the sedimentary structures found in vertical sections of many alluvial outcrops. According to these findings we suppose that surface or near-surface structures will not catch the essence of heterogeneity of alluvial aquifers

  10. Modelled Black Carbon Radiative Forcing and Atmospheric Lifetime in AeroCom Phase II Constrained by Aircraft Observations

    SciTech Connect

    Samset, B. H.; Myhre, G.; Herber, Andreas; Kondo, Yutaka; Li, Shao-Meng; Moteki, N.; Koike, Makoto; Oshima, N.; Schwarz, Joshua P.; Balkanski, Y.; Bauer, S.; Bellouin, N.; Berntsen, T.; Bian, Huisheng; Chin, M.; Diehl, Thomas; Easter, Richard C.; Ghan, Steven J.; Iversen, T.; Kirkevag, A.; Lamarque, Jean-Francois; Lin, Guang; Liu, Xiaohong; Penner, Joyce E.; Schulz, M.; Seland, O.; Skeie, R. B.; Stier, P.; Takemura, T.; Tsigaridis, Kostas; Zhang, Kai

    2014-11-27

    Black carbon (BC) aerosols absorb solar radiation, and are generally held to exacerbate global warming through exerting a positive radiative forcing. However, the total contribution of BC to the ongoing changes in global climate is presently under debate. Both anthropogenic BC emissions and the resulting spatial and temporal distribution of BC concentration are highly uncertain. In particular, long range transport and processes affecting BC atmospheric lifetime are poorly understood, leading to large estimated uncertainty in BC concentration at high altitudes and far from emission sources. These uncertainties limit our ability to quantify the historical, present, and future anthropogenic climate impact of BC. Here we compare vertical profiles of BC concentration from four recent aircraft measurement campaigns with 13 state-of-the-art aerosol models, and show that recent assessments may have overestimated present-day BC radiative forcing. Further, an atmospheric lifetime of BC of less than 5 days is shown to be essential for reproducing observations in transport-dominated remote regions. Adjusting model results to measurements in remote regions, and at high altitudes, leads to a 25% reduction in the multi-model median direct BC forcing from fossil fuel and biofuel burning over the industrial era.

  11. Constraining the recent mass balance of Pine Island and Thwaites glaciers, West Antarctica, with airborne observations of snow accumulation

    NASA Astrophysics Data System (ADS)

    Medley, B.; Joughin, I.; Smith, B. E.; Das, S. B.; Steig, E. J.; Conway, H.; Gogineni, S.; Lewis, C.; Criscitiello, A. S.; McConnell, J. R.; van den Broeke, M. R.; Lenaerts, J. T. M.; Bromwich, D. H.; Nicolas, J. P.; Leuschen, C.

    2014-07-01

    In Antarctica, uncertainties in mass input and output translate directly into uncertainty in glacier mass balance and thus in sea level impact. While remotely sensed observations of ice velocity and thickness over the major outlet glaciers have improved our understanding of ice loss to the ocean, snow accumulation over the vast Antarctic interior remains largely unmeasured. Here, we show that an airborne radar system, combined with ice-core glaciochemical analysis, provide the means necessary to measure the accumulation rate at the catchment-scale along the Amundsen Sea coast of West Antarctica. We used along-track radar-derived accumulation to generate a 1985-2009 average accumulation grid that resolves moderate- to large-scale features (>25 km) over the Pine Island-Thwaites glacier drainage system. Comparisons with estimates from atmospheric models and gridded climatologies generally show our results as having less accumulation in the lower-elevation coastal zone but greater accumulation in the interior. Ice discharge, measured over discrete time intervals between 1994 and 2012, combined with our catchment-wide accumulation rates provide an 18-year mass balance history for the sector. While Thwaites Glacier lost the most ice in the mid-1990s, Pine Island Glacier's losses increased substantially by 2006, overtaking Thwaites as the largest regional contributor to sea-level rise. The trend of increasing discharge for both glaciers, however, appears to have leveled off since 2008.
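
    The bookkeeping behind such a mass-balance history is simple: input from snow accumulation minus output through ice discharge, accumulated over time and converted to sea-level equivalent. A hedged Python sketch with hypothetical numbers (and the commonly used ~362 Gt per mm of global sea level) is given below.

```python
# Hedged sketch: catchment mass balance as accumulation input minus ice discharge,
# converted to a sea-level contribution. All numbers are hypothetical; the
# Gt-to-mm conversion (~362 Gt per mm) is the commonly used approximation.
import numpy as np

years = np.arange(1994, 2012)
accumulation = np.full(years.size, 80.0)               # Gt/yr, catchment input
discharge = np.linspace(85.0, 110.0, years.size)       # Gt/yr, ice flux to the ocean

mass_balance = accumulation - discharge                 # Gt/yr (negative = loss)
cumulative_loss = np.cumsum(-mass_balance)              # Gt lost since 1994
sea_level_mm = cumulative_loss / 362.0                  # mm of global sea level

print(f"net balance {mass_balance.mean():.1f} Gt/yr; "
      f"total contribution {sea_level_mm[-1]:.2f} mm by {years[-1]}")
```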

  12. Constraining the recent mass balance of Pine Island and Thwaites glaciers, West Antarctica with airborne observations of snow accumulation

    NASA Astrophysics Data System (ADS)

    Medley, B.; Joughin, I.; Smith, B. E.; Das, S. B.; Steig, E. J.; Conway, H.; Gogineni, S.; Lewis, C.; Criscitiello, A. S.; McConnell, J. R.; van den Broeke, M. R.; Lenaerts, J. T. M.; Bromwich, D. H.; Nicolas, J. P.; Leuschen, C.

    2014-02-01

    In Antarctica, uncertainties in mass input and output translate directly into uncertainty in glacier mass balance and thus in sea level impact. While remotely sensed observations of ice velocity and thickness over the major outlet glaciers have improved our understanding of ice loss to the ocean, snow accumulation over the vast Antarctic interior remains largely unmeasured. Here, we show that an airborne radar system, combined with ice-core glaciochemical analysis, provide the means necessary to measure the accumulation rate at the catchment-scale along the Amundsen Sea Coast of West Antarctica. We used along-track radar-derived accumulation to generate a 1985-2009 average accumulation grid that resolves moderate- to large-scale features (> 25 km) over the Pine Island-Thwaites glacier drainage system. Comparisons with estimates from atmospheric models and gridded climatologies generally show our results as having less accumulation in lower-elevation coastal zone but greater accumulation in the interior. Ice discharge, measured over discrete time intervals between 1994 and 2012, combined with our catchment-wide accumulation rates provide an 18 yr mass balance history for the sector. While Thwaites Glacier lost the most ice in the mid-1990s, Pine Island Glacier's losses increased substantially by 2006, overtaking Thwaites as the largest regional contributor to sea-level rise. The trend of increasing discharge for both glaciers, however, appears to have leveled off since 2008.

  13. Modeled black carbon radiative forcing and atmospheric lifetime in AeroCom Phase II constrained by aircraft observations

    NASA Astrophysics Data System (ADS)

    Samset, B. H.; Myhre, G.; Herber, A.; Kondo, Y.; Li, S.-M.; Moteki, N.; Koike, M.; Oshima, N.; Schwarz, J. P.; Balkanski, Y.; Bauer, S. E.; Bellouin, N.; Berntsen, T. K.; Bian, H.; Chin, M.; Diehl, T.; Easter, R. C.; Ghan, S. J.; Iversen, T.; Kirkevåg, A.; Lamarque, J.-F.; Lin, G.; Liu, X.; Penner, J. E.; Schulz, M.; Seland, Ø.; Skeie, R. B.; Stier, P.; Takemura, T.; Tsigaridis, K.; Zhang, K.

    2014-08-01

    Atmospheric black carbon (BC) absorbs solar radiation, and exacerbates global warming through exerting positive radiative forcing (RF). However, the contribution of BC to ongoing changes in global climate is under debate. Anthropogenic BC emissions, and the resulting distribution of BC concentration, are highly uncertain. In particular, long range transport and processes affecting BC atmospheric lifetime are poorly understood. Here we discuss whether recent assessments may have overestimated present day BC radiative forcing in remote regions. We compare vertical profiles of BC concentration from four recent aircraft measurement campaigns to simulations by 13 aerosol models participating in the AeroCom Phase II intercomparison. An atmospheric lifetime of BC of less than 5 days is shown to be essential for reproducing observations in remote ocean regions, in line with other recent studies. Adjusting model results to measurements in remote regions, and at high altitudes, leads to a 25% reduction in AeroCom Phase II median direct BC forcing, from fossil fuel and biofuel burning, over the industrial era. The sensitivity of modeled forcing to BC vertical profile and lifetime highlights an urgent need for further flight campaigns, close to sources and in remote regions, to provide improved quantification of BC effects for use in climate policy.

  14. Modelled black carbon radiative forcing and atmospheric lifetime in AeroCom Phase II constrained by aircraft observations

    NASA Astrophysics Data System (ADS)

    Samset, B. H.; Myhre, G.; Herber, A.; Kondo, Y.; Li, S.-M.; Moteki, N.; Koike, M.; Oshima, N.; Schwarz, J. P.; Balkanski, Y.; Bauer, S. E.; Bellouin, N.; Berntsen, T. K.; Bian, H.; Chin, M.; Diehl, T.; Easter, R. C.; Ghan, S. J.; Iversen, T.; Kirkevåg, A.; Lamarque, J.-F.; Lin, G.; Liu, X.; Penner, J. E.; Schulz, M.; Seland, Ø.; Skeie, R. B.; Stier, P.; Takemura, T.; Tsigaridis, K.; Zhang, K.

    2014-11-01

    Atmospheric black carbon (BC) absorbs solar radiation, and exacerbates global warming through exerting positive radiative forcing (RF). However, the contribution of BC to ongoing changes in global climate is under debate. Anthropogenic BC emissions, and the resulting distribution of BC concentration, are highly uncertain. In particular, long-range transport and processes affecting BC atmospheric lifetime are poorly understood. Here we discuss whether recent assessments may have overestimated present-day BC radiative forcing in remote regions. We compare vertical profiles of BC concentration from four recent aircraft measurement campaigns to simulations by 13 aerosol models participating in the AeroCom Phase II intercomparison. An atmospheric lifetime of BC of less than 5 days is shown to be essential for reproducing observations in remote ocean regions, in line with other recent studies. Adjusting model results to measurements in remote regions, and at high altitudes, leads to a 25% reduction in AeroCom Phase II median direct BC forcing, from fossil fuel and biofuel burning, over the industrial era. The sensitivity of modelled forcing to BC vertical profile and lifetime highlights an urgent need for further flight campaigns, close to sources and in remote regions, to provide improved quantification of BC effects for use in climate policy.
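
    The adjustment described above can be caricatured as scaling a modelled BC profile to aircraft observations and propagating the change in column burden to the forcing, assuming forcing scales with burden. The Python sketch below uses hypothetical profiles and a hypothetical forcing value; it is not the AeroCom analysis.

```python
# Hedged sketch: scale a modelled BC vertical profile to aircraft observations in a
# remote region and propagate the scaling to a crude column-forcing estimate.
# The profiles, forcing value, and pressure levels are hypothetical.
import numpy as np

p = np.array([900.0, 700.0, 500.0, 300.0, 200.0])       # hPa
bc_model = np.array([30.0, 20.0, 15.0, 12.0, 10.0])      # ng/m3, modelled
bc_obs = np.array([25.0, 15.0, 8.0, 4.0, 2.0])            # ng/m3, aircraft

scale = bc_obs / bc_model                                 # per-level adjustment factor
# Column burden change, weighted by layer pressure thickness (proxy for air mass)
dp = np.abs(np.gradient(p))
burden_model = np.sum(bc_model * dp)
burden_adj = np.sum(bc_model * scale * dp)

forcing_model = 0.2                                        # W/m2, hypothetical regional DRF
forcing_adj = forcing_model * burden_adj / burden_model    # assumes forcing ~ burden
print(f"burden reduced by {100 * (1 - burden_adj / burden_model):.0f}%, "
      f"forcing {forcing_model:.2f} -> {forcing_adj:.2f} W/m2")
```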

  15. Postglacial Rebound Model ICE-6G_C (VM5a) Constrained by Geodetic and Geologic Observations

    NASA Astrophysics Data System (ADS)

    Peltier, W. R.; Argus, D. F.; Drummond, R.

    2014-12-01

    We fit the revised global model of glacial isostatic adjustment ICE-6G_C (VM5a) to all available data, consisting of several hundred GPS uplift rates, a similar number of 14C-dated relative sea level histories, and 62 geologic estimates of changes in Antarctic ice thickness. The mantle viscosity profile, VM5a, is a simple multi-layer fit to prior model VM2 of Peltier (1996, Science). However, the revised deglaciation history, ICE-6G (VM5a), differs significantly from previous models in the Toronto series. (1) In North America, GPS observations of vertical uplift of Earth's surface from the Canadian Base Network require the thickness of the Laurentide ice sheet at Last Glacial Maximum to be significantly revised. At Last Glacial Maximum the new model ICE-6G_C in this region is, relative to ICE-5G, roughly 50 percent thicker east of Hudson Bay (in the northern Quebec and Labrador region) and roughly 30 percent thinner west of Hudson Bay (in Manitoba, Saskatchewan, and the Northwest Territories); the net change in mass, however, is small. We find that rates of gravity change determined by GRACE, when corrected for the predictions of ICE-6G_C (VM5a), are significantly smaller than residuals determined on the basis of earlier models. (2) In Antarctica, we fit GPS uplift rates, geologic estimates of changes in ice thickness, and geologic constraints on the timing of ice loss. The resulting deglaciation history also differs significantly from prior models. The contribution of Antarctic ice loss to global sea level rise since Last Glacial Maximum in ICE-6G_C is 13.6 meters, less than in ICE-5G (17.5 m), but significantly larger than in both the W12A model of Whitehouse et al. [2012] (8 m) and the IJ05 R02 model of Ivins et al. [2013] (7.5 m). In ICE-6G_C rapid ice loss occurs in Antarctica from 11.5 to 8 thousand years ago, with a rapid onset at 11.5 ka, thereby contributing significantly to Meltwater Pulse 1B. In ICE-6G_C (VM5a), viscous uplift of Antarctica is increasing

  16. Constraining the Properties of the Eta Carinae System via 3-D SPH Models of Space-Based Observations: The Absolute Orientation of the Binary Orbit

    NASA Astrophysics Data System (ADS)

    Madura, Thomas I.; Gull, Theodore R.; Owocki, Stanley P.; Okazaki, Atsuo T.; Russell, Christopher M. P.

    2011-01-01

    The extremely massive (> 90 M_⊙) and luminous (> 5 × 10^{6} L_⊙) star Eta Carinae, with its spectacular bipolar ``Homunculus'' nebula, comprises one of the most remarkable and intensely observed stellar systems in the Galaxy. However, many of its underlying physical parameters remain unknown. Multiwavelength variations observed to occur every 5.54 years are interpreted as being due to the collision of a massive wind from the primary star with the fast, less dense wind of a hot companion star in a highly elliptical (e ˜ 0.9) orbit. Using three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) simulations of the binary wind-wind collision, together with radiative transfer codes, we compute synthetic spectral images of [Fe III] emission line structures and compare them to existing Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS) observations. We are thus able, for the first time, to tightly constrain the absolute orientation of the binary orbit on the sky. An orbit with an inclination of i ˜ 40°, an argument of periapsis ω ˜ 255°, and a projected orbital axis with a position angle of ˜ 312° east of north provides the best fit to the observations, implying that the orbital axis is closely aligned in 3-D space with the Homunculus symmetry axis, and that the companion star orbits clockwise on the sky relative to the primary.

  17. Constraining the Properties of the Eta Carinae System via 3-D SPH Models of Space-Based Observations: The Absolute Orientation of the Binary Orbit

    NASA Technical Reports Server (NTRS)

    Madura, Thomas I.; Gull, Theodore R.; Owocki, Stanley P.; Okazaki, Atsuo T.; Russell, Christopher M. P.

    2011-01-01

    The extremely massive (> 90 Solar Mass) and luminous (> 5 x 10(exp 6) Solar Luminosity) star Eta Carinae, with its spectacular bipolar "Homunculus" nebula, comprises one of the most remarkable and intensely observed stellar systems in the Galaxy. However, many of its underlying physical parameters remain unknown. Multiwavelength variations observed to occur every 5.54 years are interpreted as being due to the collision of a massive wind from the primary star with the fast, less dense wind of a hot companion star in a highly elliptical (e approx. 0.9) orbit. Using three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) simulations of the binary wind-wind collision, together with radiative transfer codes, we compute synthetic spectral images of [Fe III] emission line structures and compare them to existing Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS) observations. We are thus able, for the first time, to tightly constrain the absolute orientation of the binary orbit on the sky. An orbit with an inclination of approx. 40deg, an argument of periapsis omega approx. 255deg, and a projected orbital axis with a position angle of approx. 312deg east of north provides the best fit to the observations, implying that the orbital axis is closely aligned in 3-D space with the Homunculus symmetry axis, and that the companion star orbits clockwise on the sky relative to the primary.

  18. Constraining the Properties of the Eta Carinae System via 3-D SPH Models of Space-Based Observations: The Absolute Orientation of the Binary Orbit

    NASA Technical Reports Server (NTRS)

    Madura, Thomas I.; Gull, Theodore R.; Owocki, Stanley P.; Okazaki, Atsuo T.; Russell, Christopher M. P.

    2010-01-01

    The extremely massive (> 90 Solar Mass) and luminous (> 5 x 10(exp 6) Solar Luminosity) star Eta Carinae, with its spectacular bipolar "Homunculus" nebula, comprises one of the most remarkable and intensely observed stellar systems in the galaxy. However, many of its underlying physical parameters remain a mystery. Multiwavelength variations observed to occur every 5.54 years are interpreted as being due to the collision of a massive wind from the primary star with the fast, less dense wind of a hot companion star in a highly elliptical (e approx. 0.9) orbit. Using three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) simulations of the binary wind-wind collision in Eta Car, together with radiative transfer codes, we compute synthetic spectral images of [Fe III] emission line structures and compare them to existing Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS) observations. We are thus able, for the first time, to constrain the absolute orientation of the binary orbit on the sky. An orbit with an inclination of i approx. 40deg, an argument of periapsis omega approx. 255deg, and a projected orbital axis with a position angle of approx. 312deg east of north provides the best fit to the observations, implying that the orbital axis is closely aligned in 3-D space with the Homunculus symmetry axis, and that the companion star orbits clockwise on the sky relative to the primary.

  19. 3D thermo-mechanical model of the orogeny in Pamir constrained by geological and geophysical observations

    NASA Astrophysics Data System (ADS)

    Sobolev, S. V.; Tympel, J.; Ratschbacher, L.

    2015-12-01

    geological observations. The model also replicates the evolution of surface topography, including the collapse of the high Pamir Plateau in the N-S and E-W directions, resulting in the exhumation of gneiss domes. We demonstrate that extensive westward outflow of material and the relatively small initial width of the Pamir are the key factors that controlled its evolution.

  20. Joint inversions of three types of electromagnetic data explicitly constrained by seismic observations: results from the central Okavango Delta, Botswana

    NASA Astrophysics Data System (ADS)

    Kalscheuer, Thomas; Blake, Sarah; Podgorski, Joel E.; Wagner, Frederic; Green, Alan G.; Maurer, Hansruedi; Jones, Alan G.; Muller, Mark; Ntibinyane, Ongkopotse; Tshoso, Gomotsang

    2015-09-01

    The Okavango Delta of northern Botswana is one of the world's largest inland deltas or megafans. To obtain information on the character of sediments and basement depths, audiomagnetotelluric (AMT), controlled-source audiomagnetotelluric (CSAMT) and central-loop transient electromagnetic (TEM) data were collected on the largest island within the delta. The data were inverted individually and jointly for 1-D models of electric resistivity. Distortion effects in the AMT and CSAMT data were accounted for by including galvanic distortion tensors as free parameters in the inversions. By employing Marquardt-Levenberg inversion, we found that a 3-layer model comprising a resistive layer overlying sequentially a conductive layer and a deeper resistive layer was sufficient to explain all of the electromagnetic data. However, the top of the basal resistive layer from electromagnetic-only inversions was much shallower than the well-determined basement depth observed in high-quality seismic reflection images and seismic refraction velocity tomograms. To resolve this discrepancy, we jointly inverted the electromagnetic data for 4-layer models by including seismic depths to an interface between sedimentary units and to basement as explicit a priori constraints. We have also estimated the interconnected porosities, clay contents and pore-fluid resistivities of the sedimentary units from their electrical resistivities and seismic P-wave velocities using appropriate petrophysical models. In the interpretation of our preferred model, a shallow ˜40 m thick freshwater sandy aquifer with 85-100 Ωm resistivity, 10-32 per cent interconnected porosity and <13 per cent clay content overlies a 105-115 m thick conductive sequence of clay and intercalated salt-water-saturated sands with 15-20 Ωm total resistivity, 1-27 per cent interconnected porosity and 15-60 per cent clay content. A third ˜60 m thick sandy layer with 40-50 Ωm resistivity, 10-33 per cent interconnected porosity and <15
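
    The Marquardt-Levenberg step used in such 1-D inversions is a damped least-squares update. The Python sketch below shows that update on a toy problem in which the interface depths are held fixed (standing in for the seismic a priori constraints) and only the layer resistivities are free; the "forward model" is a placeholder, not a real AMT/CSAMT/TEM response.

```python
# Hedged sketch: a Marquardt-Levenberg (damped least-squares) update for a layered
# resistivity model with fixed interface depths. The forward model is a toy placeholder.
import numpy as np

depths_fixed = np.array([40.0, 150.0, 210.0])   # m, a priori from seismics (hypothetical);
                                                 # a real forward model would use these.

def forward(log_rho):
    # Placeholder "response": a smooth mixing of layer log-resistivities.
    w = np.array([[0.7, 0.2, 0.1, 0.0],
                  [0.3, 0.5, 0.2, 0.0],
                  [0.1, 0.3, 0.4, 0.2],
                  [0.0, 0.1, 0.4, 0.5]])
    return w @ log_rho

def jacobian(log_rho, eps=1e-4):
    f0 = forward(log_rho)
    return np.column_stack([(forward(log_rho + eps * e) - f0) / eps
                            for e in np.eye(log_rho.size)])

d_obs = forward(np.log10(np.array([90.0, 18.0, 45.0, 300.0]))) + 0.01
m = np.log10(np.array([50.0, 50.0, 50.0, 50.0]))   # starting model (log10 ohm m)
lam = 1.0                                           # Marquardt damping
for _ in range(20):
    r = d_obs - forward(m)
    J = jacobian(m)
    dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
    m += dm
    lam *= 0.5                                      # reduce damping as the fit improves
print("recovered resistivities (ohm m):", np.round(10 ** m, 1))
```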

  1. Constraining the Lyα escape fraction with far-infrared observations of Lyα emitters

    SciTech Connect

    Wardlow, Julie L.; Calanog, J.; Cooray, A.; Malhotra, S.; Zheng, Z.; Rhoads, J.; Finkelstein, S.; Bock, J.; Bridge, C.; Ciardullo, R.; Gronwall, C.; Conley, A.; Farrah, D.; Gawiser, E.; Heinis, S.; Ibar, E.; Ivison, R. J.; Marsden, G.; Oliver, S. J.; Riechers, D.; and others

    2014-05-20

    We study the far-infrared properties of 498 Lyα emitters (LAEs) at z = 2.8, 3.1, and 4.5 in the Extended Chandra Deep Field-South, using 250, 350, and 500 μm data from the Herschel Multi-tiered Extragalactic Survey and 870 μm data from the LABOCA ECDFS Submillimeter Survey. None of the 126, 280, or 92 LAEs at z = 2.8, 3.1, and 4.5, respectively, are individually detected in the far-infrared data. We use stacking to probe the average emission to deeper flux limits, reaching 1σ depths of ∼0.1 to 0.4 mJy. The LAEs are also undetected at ≥3σ in the stacks, although a 2.5σ signal is observed at 870 μm for the z = 2.8 sources. We consider a wide range of far-infrared spectral energy distributions (SEDs), including an M82 and an Sd galaxy template, to determine upper limits on the far-infrared luminosities and far-infrared-derived star formation rates of the LAEs. These star formation rates are then combined with those inferred from the Lyα and UV emission to determine lower limits on the LAEs' Lyα escape fraction (f_esc(Lyα)). For the Sd SED template, the inferred LAE f_esc(Lyα) values are ≳ 30% (1σ) at z = 2.8, 3.1, and 4.5, which are all significantly higher than the global f_esc(Lyα) at these redshifts. Thus, if the LAE f_esc(Lyα) follows the global evolution, then they have warmer far-infrared SEDs than the Sd galaxy template. The average and M82 SEDs produce lower limits on the LAE f_esc(Lyα) of ∼10%-20% (1σ), all of which are slightly higher than the global evolution of f_esc(Lyα), but consistent with it at the 2σ-3σ level.
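
    Stacking, as used above, amounts to averaging map cutouts at the positions of individually undetected sources so that the noise integrates down roughly as 1/√N. The Python sketch below demonstrates this on a synthetic noise map; the map size, source count, and noise level are arbitrary.

```python
# Hedged sketch: mean-stack submillimetre map cutouts at the positions of
# individually undetected sources and compare the central signal with the
# stacked 1-sigma depth. The map, positions, and noise level are synthetic.
import numpy as np

rng = np.random.default_rng(0)
npix, sigma_map = 512, 0.3                      # map size and per-pixel noise (mJy)
image = rng.normal(0.0, sigma_map, (npix, npix))
xs = rng.integers(32, npix - 32, 100)           # 100 hypothetical source positions
ys = rng.integers(32, npix - 32, 100)

half = 8
cutouts = np.array([image[y - half:y + half + 1, x - half:x + half + 1]
                    for x, y in zip(xs, ys)])
stack = cutouts.mean(axis=0)

central_flux = stack[half, half]
stack_depth = sigma_map / np.sqrt(len(xs))      # expected 1-sigma depth of the stack
print(f"stacked central flux {central_flux:.3f} mJy vs 1-sigma depth {stack_depth:.3f} mJy")
```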

  2. Search for Neutrino-induced Cascades from Gamma-Ray Bursts with AMANDA

    NASA Astrophysics Data System (ADS)

    Achterberg, A.; Ackermann, M.; Adams, J.; Ahrens, J.; Andeen, K.; Auffenberg, J.; Bahcall, J. N.; Bai, X.; Baret, B.; Barwick, S. W.; Bay, R.; Beattie, K.; Becka, T.; Becker, J. K.; Becker, K.-H.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Bouchta, A.; Braun, J.; Burgess, C.; Burgess, T.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cowen, D. F.; D'Agostino, M. V.; Davour, A.; Day, C. T.; De Clercq, C.; Demirörs, L.; Descamps, F.; Desiati, P.; De Young, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Filimonov, K.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Geenen, H.; Gerhardt, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Griesel, T.; Grullon, S.; Groß, A.; Gunasingha, R. M.; Gurtner, M.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hardtke, D.; Hardtke, R.; Hart, J. E.; Hasegawa, Y.; Hauschildt, T.; Hays, D.; Heise, J.; Helbing, K.; Hellwig, M.; Herquet, P.; Hill, G. C.; Hodges, J.; Hoffman, K. D.; Hommez, B.; Hoshina, K.; Hubert, D.; Hughey, B.; Hulth, P. O.; Hultqvist, K.; Hülß, J.-P.; Hundertmark, S.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Jones, A.; Joseph, J. M.; Kampert, K.-H.; Karg, T.; Karle, A.; Kawai, H.; Kelley, J. L.; Kitamura, N.; Klein, S. R.; Klepser, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kuehn, K.; Labare, M.; Landsman, H.; Leich, H.; Leier, D.; Liubarsky, I.; Lundberg, J.; Lünemann, J.; Madsen, J.; Mase, K.; Matis, H. S.; McCauley, T.; McParland, C. P.; Meli, A.; Messarius, T.; Mészáros, P.; Miyamoto, H.; Mokhtarani, A.; Montaruli, T.; Morey, A.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Ögelman, H.; Olivas, A.; Patton, S.; Peña-Garay, C.; Pérez de los Heros, C.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Pretz, J.; Price, P. B.; Przybylski, G. T.; Rawlins, K.; Razzaque, S.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Robbins, S.; Roth, P.; Rott, C.; Rutledge, D.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Seckel, D.; Semburg, B.; Seo, S. H.; Seunarine, S.; Silvestri, A.; Smith, A. J.; Solarz, M.; Song, C.; Sopher, J. E.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Steffen, P.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K.-H.; Sullivan, G. W.; Sumner, T. J.; Taboada, I.; Tarasova, O.; Tepe, A.; Thollander, L.; Tilav, S.; Tluczykont, M.; Toale, P. A.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; Viscomi, V.; Voigt, B.; Wagner, W.; Walck, C.; Waldmann, H.; Walter, M.; Wang, Y.-R.; Wendt, C.; Wiebusch, C. H.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Yoshida, S.; Zornoza, J. D.

    2007-07-01

    Using the neutrino telescope AMANDA-II, we have conducted two analyses searching for neutrino-induced cascades from gamma-ray bursts. No evidence of astrophysical neutrinos was found, and limits are presented for several models. We also present neutrino effective areas which allow the calculation of limits for any neutrino production model. The first analysis looked for a statistical excess of events within a sliding window of 1 or 100 s (for short and long burst classes, respectively) during the years 2001-2003. The resulting upper limit on the diffuse flux normalization times E² for the Waxman-Bahcall model at 1 PeV is 1.6×10⁻⁶ GeV cm⁻² s⁻¹ sr⁻¹ (a factor of 120 above the theoretical prediction). For this search 90% of the neutrinos would fall in the energy range 50 TeV to 7 PeV. The second analysis looked for neutrino-induced cascades in coincidence with 73 bursts detected by BATSE in the year 2000. The resulting upper limit on the diffuse flux normalization times E², also at 1 PeV, is 1.5×10⁻⁶ GeV cm⁻² s⁻¹ sr⁻¹ (a factor of 110 above the theoretical prediction) for the same energy range. The neutrino-induced cascade channel is complementary to the up-going muon channel. We comment on its advantages for searches of neutrinos from GRBs and its future use with IceCube.
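
    The first analysis described above is, at its core, a counting experiment in time windows around burst times. The Python sketch below mimics that bookkeeping with synthetic event and burst times; converting the counts into a flux limit (e.g. with Feldman-Cousins intervals) is beyond this sketch.

```python
# Hedged sketch: count detector events falling inside a +/-50 s window around each
# burst time (a stand-in for the sliding-window coincidence search described above)
# and compare with the background expectation. Event and burst times are synthetic.
import numpy as np

rng = np.random.default_rng(1)
livetime = 86400.0 * 30                          # 30 days of synthetic data (s)
event_times = np.sort(rng.uniform(0.0, livetime, 5000))   # background-only events
burst_times = np.sort(rng.uniform(0.0, livetime, 73))     # 73 hypothetical burst times

half_window = 50.0
on_counts = sum(np.sum(np.abs(event_times - t) <= half_window) for t in burst_times)
rate = event_times.size / livetime
expected_bg = rate * 2.0 * half_window * burst_times.size

print(f"on-source counts: {on_counts}, expected background: {expected_bg:.1f}")
# A proper limit would convert this into a flux bound; that step is not shown here.
```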

  3. Thermal-based modeling of coupled carbon, water, and energy fluxes using nominal light use efficiencies constrained by leaf chlorophyll observations

    NASA Astrophysics Data System (ADS)

    Schull, M. A.; Anderson, M. C.; Houborg, R.; Gitelson, A.; Kustas, W. P.

    2015-03-01

    Recent studies have shown that estimates of leaf chlorophyll content (Chl), defined as the combined mass of chlorophyll a and chlorophyll b per unit leaf area, can be useful for constraining estimates of canopy light use efficiency (LUE). Canopy LUE describes the amount of carbon assimilated by a vegetative canopy for a given amount of absorbed photosynthetically active radiation (APAR) and is a key parameter for modeling land-surface carbon fluxes. A carbon-enabled version of the remote-sensing-based two-source energy balance (TSEB) model simulates coupled canopy transpiration and carbon assimilation using an analytical sub-model of canopy resistance constrained by inputs of nominal LUE (βn), which is modulated within the model in response to varying conditions in light, humidity, ambient CO2 concentration, and temperature. Soil moisture constraints on water and carbon exchange are conveyed to the TSEB-LUE indirectly through thermal infrared measurements of land-surface temperature. We investigate the capability of using Chl estimates for capturing seasonal trends in the canopy βn from in situ measurements of Chl acquired in irrigated and rain-fed fields of soybean and maize near Mead, Nebraska. The results show that field-measured Chl is nonlinearly related to βn, with variability primarily related to phenological changes during early growth and senescence. Utilizing seasonally varying βn inputs based on an empirical relationship with in situ measured Chl resulted in improvements in carbon flux estimates from the TSEB model, while adjusting the partitioning of total water loss between plant transpiration and soil evaporation. The observed Chl-βn relationship provides a functional mechanism for integrating remotely sensed Chl into the TSEB model, with the potential for improved mapping of coupled carbon, water, and energy fluxes across vegetated landscapes.
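
    The role of the Chl-βn relationship can be illustrated with a toy light-use-efficiency calculation: a nominal LUE derived from leaf chlorophyll is down-regulated by an environmental factor and multiplied by absorbed PAR. The functional form and every coefficient in the Python sketch below are hypothetical, not the empirical fit reported above.

```python
# Hedged sketch: an empirical beta_n(Chl) curve modulating a nominal light use
# efficiency, then a simple carbon-assimilation estimate A = LUE * APAR. The
# functional form and all coefficients are hypothetical placeholders.
import numpy as np

def beta_n_from_chl(chl, beta_max=0.07, k=0.03):
    """Saturating (nonlinear) response of nominal LUE to leaf chlorophyll (ug/cm2)."""
    return beta_max * (1.0 - np.exp(-k * chl))

def assimilation(chl, apar, f_env=0.8):
    """Carbon uptake in arbitrary units; purely illustrative."""
    lue = beta_n_from_chl(chl) * f_env          # environmental down-regulation factor
    return lue * apar

chl = np.array([15.0, 35.0, 55.0])              # early growth, peak, senescence
apar = np.array([300.0, 900.0, 500.0])          # absorbed PAR (umol m-2 s-1)
print(np.round(assimilation(chl, apar), 2))
```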

  4. Thermal-based modeling of coupled carbon, water and energy fluxes using nominal light use efficiencies constrained by leaf chlorophyll observations

    NASA Astrophysics Data System (ADS)

    Schull, M. A.; Anderson, M. C.; Houborg, R.; Gitelson, A.; Kustas, W. P.

    2014-10-01

    Recent studies have shown that estimates of leaf chlorophyll content (Chl), defined as the combined mass of chlorophyll a and chlorophyll b per unit leaf area, can be useful for constraining estimates of canopy light-use-efficiency (LUE). Canopy LUE describes the amount of carbon assimilated by a vegetative canopy for a given amount of Absorbed Photosynthetically Active Radiation (APAR) and is a key parameter for modeling land-surface carbon fluxes. A carbon-enabled version of the remote sensing-based Two-Source Energy Balance (TSEB) model simulates coupled canopy transpiration and carbon assimilation using an analytical sub-model of canopy resistance constrained by inputs of nominal LUE (βn), which is modulated within the model in response to varying conditions in light, humidity, ambient CO2 concentration and temperature. Soil moisture constraints on water and carbon exchange are conveyed to the TSEB-LUE indirectly through thermal infrared measurements of land-surface temperature. We investigate the capability of using Chl estimates for capturing seasonal trends in the canopy βn from in situ measurements of Chl acquired in irrigated and rain-fed fields of soybean and maize near Mead, Nebraska. The results show that field-measured Chl is non-linearly related to βn, with variability primarily related to phenological changes during early growth and senescence. Utilizing seasonally varying βn inputs based on an empirical relationship with in-situ measured Chl resulted in improvements in carbon flux estimates from the TSEB model, while adjusting the partitioning of total water loss between plant transpiration and soil evaporation. The observed Chl-βn relationship provides a functional mechanism for integrating remotely sensed Chl into the TSEB model, with the potential for improved mapping of coupled carbon, water, and energy fluxes across vegetated landscapes.

  5. The Energy Spectrum of Atmospheric Neutrinos between 2 and 200 TeV with the AMANDA-II Detector

    SciTech Connect

    IceCube Collaboration; Abbasi, R.

    2010-05-11

    The muon and anti-muon neutrino energy spectrum is determined from 2000-2003 AMANDA telescope data using regularised unfolding. This is the first measurement of atmospheric neutrinos in the energy range 2-200 TeV. The result is compared to different atmospheric neutrino models and it is compatible with the atmospheric neutrinos from pion and kaon decays. No significant contribution from charm hadron decays or extraterrestrial neutrinos is detected. The capabilities to improve the measurement of the neutrino spectrum with the successor experiment IceCube are discussed.
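
    Regularised unfolding of the kind referred to above can be written as a penalised least-squares problem, min_x ||Ax − b||² + λ||Lx||², where A is the detector response, b the reconstructed distribution, and L a smoothness operator. The Python sketch below solves this for a toy response matrix; it is not the AMANDA response or the actual regularisation scheme used.

```python
# Hedged sketch: Tikhonov-regularised unfolding of a smeared spectrum,
#   minimise ||A x - b||^2 + lambda ||L x||^2,
# solved via the normal equations, with a toy response matrix and spectrum.
import numpy as np

rng = np.random.default_rng(2)
n = 20
true = np.exp(-0.3 * np.arange(n))                       # steeply falling "spectrum"
# Toy response: each true bin leaks into neighbouring reconstructed bins.
A = np.array([[np.exp(-0.5 * (i - j) ** 2 / 2.0**2) for j in range(n)] for i in range(n)])
A /= A.sum(axis=1, keepdims=True)
b = A @ true + rng.normal(0.0, 0.01, n)                   # smeared, noisy observation

L = np.diff(np.eye(n), 2, axis=0)                         # second-difference smoothness operator
lam = 1e-2
x = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)     # regularised solution
print("max relative deviation from truth:", np.max(np.abs(x - true) / true))
```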

  6. Constraining the structure of the transition disk HD 135344B (SAO 206462) by simultaneous modeling of multiwavelength gas and dust observations

    NASA Astrophysics Data System (ADS)

    Carmona, A.; Pinte, C.; Thi, W. F.; Benisty, M.; Ménard, F.; Grady, C.; Kamp, I.; Woitke, P.; Olofsson, J.; Roberge, A.; Brittain, S.; Duchêne, G.; Meeus, G.; Martin-Zaïdi, C.; Dent, B.; Le Bouquin, J. B.; Berger, J. P.

    2014-07-01

    Context. Constraining the gas and dust disk structure of transition disks, particularly in the inner dust cavity, is a crucial step toward understanding the link between them and planet formation. HD 135344B is an accreting (pre-)transition disk that displays the CO 4.7 μm emission extending tens of AU inside its 30 AU dust cavity. Aims: We constrain HD 135344B's disk structure from multi-instrument gas and dust observations. Methods: We used the dust radiative transfer code MCFOST and the thermochemical code ProDiMo to derive the disk structure from the simultaneous modeling of the spectral energy distribution (SED), VLT/CRIRES CO P(10) 4.75 μm, Herschel/PACS [O i] 63 μm, Spitzer/IRS, and JCMT 12CO J = 3-2 spectra, VLTI/PIONIER H-band visibilities, and constraints from (sub-)mm continuum interferometry and near-IR imaging. Results: We found a disk model able to describe the current gas and dust observations simultaneously. This disk has the following structure. (1) To simultaneously reproduce the SED, the near-IR interferometry data, and the CO ro-vibrational emission, refractory grains (we suggest carbon) are present inside the silicate sublimation radius (0.08 < R < 0.2 AU). (2) The dust cavity (R < 30 AU) is filled with gas; the surface density of the gas inside the cavity must increase with radius to fit the CO ro-vibrational line profile, a small gap of a few AU in the gas distribution is compatible with current data, and a large gap of tens of AU in the gas does not appear likely. (4) The gas-to-dust ratio inside the cavity is >100 to account for the 870 μm continuum upper limit and the CO P(10) line flux. (5) The gas-to-dust ratio in the outer disk (30 < R < 200 AU) is <10 to simultaneously describe the [O i] 63 μm line flux and the CO P(10) line profile. (6) In the outer disk, most of the gas and dust mass should be located in the midplane

  7. Comparison of Satellite-Derived TOA Shortwave Clear-Sky Fluxes to Estimates from GCM Simulations Constrained by Satellite Observations of Land Surface Characteristics

    NASA Technical Reports Server (NTRS)

    Anantharaj, Valentine G.; Nair, Udaysankar S.; Lawrence, Peter; Chase, Thomas N.; Christopher, Sundar; Jones, Thomas

    2010-01-01

    Clear-sky, upwelling shortwave flux at the top of the atmosphere (S_TOA↑), simulated using the atmospheric and land model components of the Community Climate System Model 3 (CCSM3), is compared to corresponding observational estimates from the Clouds and Earth's Radiant Energy System (CERES) sensor. Improvements resulting from the use of land surface albedo derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) to constrain the simulations are also examined. Compared to CERES observations, CCSM3 overestimates global, annual averaged S_TOA↑ over both land and oceans. However, regionally, CCSM3 overestimates S_TOA↑ over some land and ocean areas while underestimating it over other sites. CCSM3 underestimates S_TOA↑ over the Saharan and Arabian Deserts and substantial differences exist between CERES observations and CCSM3 over agricultural areas. Over selected sites, after using ground-based observations to remove systematic biases that exist in the CCSM computation of S_TOA↑, it is found that use of MODIS albedo improves the simulation of S_TOA↑. The inability of the coarse-resolution CCSM3 simulation to resolve the spatial heterogeneity of snowfall over high-altitude sites such as the Tibetan Plateau causes overestimation of S_TOA↑ in these areas. Discrepancies also exist in the simulation of S_TOA↑ over ocean areas as CCSM3 does not account for the effect of wind speed on ocean surface albedo. This study shows that the radiative energy budget at the TOA is improved through the use of MODIS albedo in Global Climate Models.

  8. Inclusion of In-Situ Velocity Measurements into the UCSD Time-Dependent Tomography to Constrain and Better-Forecast Remote-Sensing Observations

    NASA Astrophysics Data System (ADS)

    Jackson, B. V.; Hick, P. P.; Bisi, M. M.; Clover, J. M.; Buffington, A.

    2010-08-01

    The University of California, San Diego (UCSD) three-dimensional (3-D) time-dependent tomography program has been used successfully for a decade to reconstruct and forecast coronal mass ejections from interplanetary scintillation observations. More recently, we have extended this tomography technique to use remote-sensing data from the Solar Mass Ejection Imager (SMEI) on board the Coriolis spacecraft; from the Ootacamund (Ooty) radio telescope in India; and from the European Incoherent SCATter (EISCAT) radar telescopes in northern Scandinavia. Finally, we intend these analyses to be used with observations from the Murchison Widefield Array (MWA), or the LOw Frequency ARray (LOFAR) now being developed respectively in Australia and Europe. In this article we demonstrate how in-situ velocity measurements from the Advanced Composition Explorer (ACE) space-borne instrumentation can be used in addition to remote-sensing data to constrain the time-dependent tomographic solution. Supplementing the remote-sensing observations with in-situ measurements provides additional information to construct an iterated solar-wind parameter that is propagated outward from near the solar surface past the measurement location, and throughout the volume. While the largest changes within the volume are close to the radial directions that incorporate the in-situ measurements, their inclusion significantly reduces the uncertainty in extending these measurements to global 3-D reconstructions that are distant in time and space from the spacecraft. At Earth, this can provide a finely-tuned real-time measurement up to the latest time for which in-situ measurements are available, and enables more-accurate forecasting beyond this than remote-sensing observations alone allow.

  9. Mothers, daughters and midlife (self)-discoveries: gender and aging in the Amanda Cross' Kate Fansler series.

    PubMed

    Domínguez-Rué, Emma

    2012-12-01

    In the same way that many aspects of gender cannot be understood aside from their relationship to race, class, culture, nationality and/or sexuality, the interactions between gender and aging constitute an interesting field for academic research, without which we cannot gain full insight into the complex and multi-faceted nature of gender studies. Although the American writer and Columbia professor Carolyn Gold Heilbrun (1926-2003) is more widely known for her best-selling mystery novels, published under the pseudonym of Amanda Cross, she also authored remarkable pieces of non-fiction in which she asserted her long-standing commitment to feminism, while she also challenged established notions on women and aging and advocated for a reassessment of those negative views. To my mind, the Kate Fansler novels became an instrument to reach a massive audience of female readers who might not have read her non-fiction, but who were perhaps finding it difficult to reach fulfillment as women under patriarchy, especially upon reaching middle age. Taking her essays in feminism and literary criticism as a basis and her later fiction as substantiation to my argument, this paper will try to reveal the ways in which Heilbrun's seemingly more superficial and much more commercial mystery novels as Amanda Cross were used as a catalyst that informed her feminist principles while vindicating the need to rethink issues concerning literary representations of mature women and cultural stereotypes about motherhood. PMID:22939539

  10. Constraining the Structure of the Transition Disk HD 135344B (SAO 206462) by Simultaneous Modeling of Multiwavelength Gas and Dust Observations

    NASA Technical Reports Server (NTRS)

    Carmona, A.; Pinte, C.; Thi, W. F.; Benisty, M.; Menard, F.; Grady, C.; Kamp, I.; Woitke, P.; Olofsson, J.; Roberge, A.; Brittain, S.; Duchene, G.; Meeus, G.; Martin-Zaidi, C.; Dent, B.; Le Bouquin, J. E.; Berger, J. P.

    2014-01-01

    Context: Constraining the gas and dust disk structure of transition disks, particularly in the inner dust cavity, is a crucial step toward understanding the link between them and planet formation. HD 135344B is an accreting (pre-)transition disk that displays the CO 4.7 micrometer emission extending tens of AU inside its 30 AU dust cavity. Aims: We constrain HD 135344B's disk structure from multi-instrument gas and dust observations. Methods: We used the dust radiative transfer code MCFOST and the thermochemical code ProDiMo to derive the disk structure from the simultaneous modeling of the spectral energy distribution (SED), VLT/CRIRES CO P(10) 4.75 Micrometers, Herschel/PACS [O(sub I)] 63 Micrometers, Spitzer/IRS, and JCMT CO-12 J = 3-2 spectra, VLTI/PIONIER H-band visibilities, and constraints from (sub-)mm continuum interferometry and near-IR imaging. Results: We found a disk model able to describe the current gas and dust observations simultaneously. This disk has the following structure. (1) To simultaneously reproduce the SED, the near-IR interferometry data, and the CO ro-vibrational emission, refractory grains (we suggest carbon) are present inside the silicate sublimation radius (0.08 is less than R less than 0.2 AU). (2) The dust cavity (R is less than 30 AU) is filled with gas, the surface density of the gas inside the cavity must increase with radius to fit the CO ro-vibrational line profile, a small gap of a few AU in the gas distribution is compatible with current data, and a large gap of tens of AU in the gas does not appear likely. (4) The gas-to-dust ratio inside the cavity is >100 to account for the 870 Micrometers continuum upper limit and the CO P(10) line flux. (5) The gas-to-dust ratio in the outer disk (30 is less than R less than 200 AU) is less than 10 to simultaneously describe the [O(sub I)] 63 Micrometers line flux and the CO P(10) line profile. (6) In the outer disk, most of the gas and dust mass should be located in the midplane, and

  11. Fermi/LAT observations of dwarf galaxies highly constrain a dark matter interpretation of excess positrons seen in AMS-02, HEAT, and PAMELA

    NASA Astrophysics Data System (ADS)

    López, Alejandro; Savage, Christopher; Spolyar, Douglas; Adams, Douglas Q.

    2016-03-01

    It is shown that a Weakly Interacting Massive dark matter Particle (WIMP) interpretation for the positron excess observed in a variety of experiments, HEAT, PAMELA, and AMS-02, is highly constrained by the Fermi/LAT observations of dwarf galaxies. In particular, this paper examines the annihilation channels that best fit the current AMS-02 data (Boudaud et al., 2014), specifically focusing on channels and parameter space not previously explored by the Fermi/LAT collaboration. The Fermi satellite has surveyed the γ-ray sky, and its observations of dwarf satellites are used to place strong bounds on the annihilation of WIMPs into a variety of channels. For the single channel case, we find that dark matter annihilation into {bb̄, e⁺e⁻, μ⁺μ⁻, τ⁺τ⁻, 4-e, or 4-τ} is ruled out as an explanation of the AMS positron excess (here b quarks are a proxy for all quarks, gauge and Higgs bosons). In addition, we find that the Fermi/LAT 2σ upper limits, assuming the best-fit AMS-02 branching ratios, exclude multichannel combinations into bb̄ and leptons. The tension between the results might relax if the branching ratios are allowed to deviate from their best-fit values, though a substantial change would be required. Of all the channels we considered, the only viable channel that survives the Fermi/LAT constraint and produces a good fit to the AMS-02 data is annihilation (via a mediator) to 4-μ, or mainly to 4-μ in the case of multichannel combinations.

  12. Pn wave geometrical spreading and attenuation in Northeast China and the Korean Peninsula constrained by observations from North Korean nuclear explosions

    NASA Astrophysics Data System (ADS)

    Zhao, Lian-Feng; Xie, Xiao-Bi; Tian, Bao-Feng; Chen, Qi-Fu; Hao, Tian-Yao; Yao, Zhen-Xing

    2015-11-01

    We investigate the geometric spreading and attenuation of seismic Pn waves in Northeast China and the Korean Peninsula. A high-quality broadband Pn wave data set generated by North Korean nuclear tests is used to constrain the parameters of a frequency-dependent log-quadratic geometric spreading function and a power law Pn Q model. The geometric spreading function and apparent Pn wave Q are obtained for Northeast China and the Korean Peninsula between 2.0 and 10.0 Hz. Using the two-station amplitude ratios of the Pn spectra and correcting them with the known spreading function, we remove the contributions of the source and crust from the apparent Pn Q and retrieve the P wave attenuation information along the pure upper mantle path. We then use both Pn amplitudes and amplitude ratios in a tomographic approach to obtain the upper mantle P wave attenuation in the studied area. The Pn wave spectra observed in China are compared with those recorded in Japan, and the result reveals that the high-frequency Pn signal across the oceanic path attenuated faster compared with those through the continental path.
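
    A hedged sketch of the kind of amplitude model described above is given below: a log-quadratic geometrical-spreading term plus a power-law Q attenuation term, with the two-station spectral ratio cancelling the unknown source term. All coefficients are hypothetical placeholders, not the values derived in the study.

```python
# Hedged sketch: a frequency-dependent Pn amplitude model with a log-quadratic
# geometrical-spreading term and a power-law Q, plus the two-station spectral
# ratio that cancels the (unknown) source term. Coefficients are hypothetical.
import numpy as np

def spreading(r, f, r0=1.0, c=(-1.3, 0.1, -0.05)):
    """log10 G(r, f): quadratic in log10(r/r0), with a mild frequency dependence."""
    x = np.log10(r / r0)
    return (c[0] + 0.02 * np.log10(f)) * x + c[1] * x**2 + c[2]

def q_pn(f, q0=250.0, eta=0.4):
    return q0 * f**eta                                    # power-law Q model

def log10_amplitude(r, f, log10_source, v=8.0):
    attenuation = -np.pi * f * r / (v * q_pn(f) * np.log(10.0))
    return log10_source + spreading(r, f) + attenuation

f = 4.0                                                    # Hz
r1, r2 = 400.0, 900.0                                      # km, two stations on one path
ratio = log10_amplitude(r2, f, 0.0) - log10_amplitude(r1, f, 0.0)
# The source term cancels in the ratio, isolating spreading + upper-mantle attenuation.
print(f"log10 amplitude ratio (station 2 / station 1) at {f} Hz: {ratio:.3f}")
```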

  13. Improved western U.S. background ozone estimates via constraining nonlocal and local source contributions using Aura TES and OMI observations

    NASA Astrophysics Data System (ADS)

    Huang, Min; Bowman, Kevin W.; Carmichael, Gregory R.; Lee, Meemong; Chai, Tianfeng; Spak, Scott N.; Henze, Daven K.; Darmenov, Anton S.; Silva, Arlindo M.

    2015-04-01

    Western U.S. near-surface ozone (O3) concentrations are sensitive to transported background O3 from the eastern Pacific free troposphere, as well as U.S. anthropogenic and natural emissions. The current 75 ppbv U.S. O3 primary standard may be lowered soon, hence accurately estimating O3 source contributions, especially background O3, in this region has growing policy-relevant significance. In this study, we improve the modeled total and background O3 via repartitioning and redistributing the contributions from nonlocal and local anthropogenic/wildfire sources in a multi-scale satellite data assimilation system containing the global Goddard Earth Observing System-Chemistry model (GEOS-Chem) and the regional Sulfur Transport and dEposition Model (STEM). Focusing on NASA's ARCTAS (Arctic Research of the Composition of the Troposphere from Aircraft and Satellites) field campaign period in June-July 2008, we first demonstrate that the negative biases in the free-running GEOS-Chem simulation in the eastern Pacific at 400-900 hPa are reduced via assimilating Aura-Tropospheric Emission Spectrometer (TES) O3 profiles. Using the TES-constrained boundary conditions, we then assimilated into STEM the tropospheric nitrogen dioxide (NO2) columns from the Aura-Ozone Monitoring Instrument to indicate U.S. nitrogen oxides (NOx = NO2 + NO) emissions at 12 × 12 km² grid scale. Improved model skills are indicated from cross validation against independent ARCTAS measurements. Leveraging Aura observations, we show anomalously high wildfire NOx emissions in this summer in Northern California and the Central Valley while lower anthropogenic emissions in multiple urban areas than those representing the year of 2005. We found strong spatial variability of the daily maximum 8 h average background O3 and its contribution to the modeled total O3, with a mean value of ~48 ppbv (~77% of the total).

  14. Strong-lensing analysis of MACS J0717.5+3745 from Hubble Frontier Fields observations: How well can the mass distribution be constrained?

    NASA Astrophysics Data System (ADS)

    Limousin, M.; Richard, J.; Jullo, E.; Jauzac, M.; Ebeling, H.; Bonamigo, M.; Alavi, A.; Clément, B.; Giocoli, C.; Kneib, J.-P.; Verdugo, T.; Natarajan, P.; Siana, B.; Atek, H.; Rexroth, M.

    2016-04-01

    We present a strong-lensing analysis of MACSJ0717.5+3745 (hereafter MACS J0717), based on the full depth of the Hubble Frontier Field (HFF) observations, which brings the number of multiply imaged systems to 61, ten of which have been spectroscopically confirmed. The total number of images comprised in these systems rises to 165, compared to 48 images in 16 systems before the HFF observations. Our analysis uses a parametric mass reconstruction technique, as implemented in the Lenstool software, and the subset of the 132 most secure multiple images to constrain a mass distribution composed of four large-scale mass components (spatially aligned with the four main light concentrations) and a multitude of galaxy-scale perturbers. We find a superposition of cored isothermal mass components to provide a good fit to the observational constraints, resulting in a very shallow mass distribution for the smooth (large-scale) component. Given the implications of such a flat mass profile, we investigate whether a model composed of "peaky" non-cored mass components can also reproduce the observational constraints. We find that such a non-cored mass model reproduces the observational constraints equally well, in the sense that both models give comparable total rms. Although the total (smooth dark matter component plus galaxy-scale perturbers) mass distributions of both models are consistent, as are the integrated two-dimensional mass profiles, we find that the smooth and the galaxy-scale components are very different. We conclude that, even in the HFF era, the generic degeneracy between smooth and galaxy-scale components is not broken, in particular in such a complex galaxy cluster. Consequently, insights into the mass distribution of MACS J0717 remain limited, emphasizing the need for additional probes beyond strong lensing. Our findings also have implications for estimates of the lensing magnification. We show that the amplification difference between the two models is larger

  15. CO2 emissions from wildfires in Siberia: FRP measurement based estimates constrained by satellite and ground based observations of co-emitted species

    NASA Astrophysics Data System (ADS)

    Berezin, Evgeny V.; Konovalov, Igor B.; Ciais, Philippe; Broquet, Gregoire; Wu, Lin; Beekmann, Matthias; Hadji-Lazaro, Juliette; Clerbaux, Cathy; Andreae, Meinrat O.; Kaiser, Johannes W.; Schulze, Ernst-Detlef

    2013-04-01

    Wildfires play an important role in the global carbon balance, being one of the major processes of the carbon cycle and by providing a considerable contribution to the global carbon dioxide emissions. Meanwhile, significant discrepancies (especially on a regional scale) between the available wildfire emission estimates provided by different global and regional emission inventories indicate that the current knowledge of wildfire emissions and related processes is still deficient. Although studies of wildfire emissions of several important species have greatly benefited from the recent advent of satellite measurements of the tropospheric composition, the informativeness of direct satellite measurements of CO2 in such a context still remains rather limited. We develop a new approach [1] to estimate CO2 emissions, which is based on the use of satellite measurements of co-emitted species in combination with chemistry transport model simulations and "bottom-up" emission inventory data. In this study we apply this approach together with the earlier developed method [2] for estimation of wildfire emissions to constrain the parameters of a fire emission model and to estimate emissions of CO2, CO and aerosol from wildfires in such an important carbon-rich region of the world as Siberia. We employ the MODIS fire radiative power (FRP) measurements to obtain spatial-temporal fields of fire activity and other (IASI CO and MODIS AOD) satellite observations in combination with simulations performed with the CHIMERE chemistry transport model to estimate the FRP to biomass burning rate conversion factors for different vegetative land cover types. The conversion factors (which are believed to be much more uncertain than the available estimates of the emission factors for major species) derived from the CO and AOD measurements are additionally evaluated with independent ground based measurements in Zotino and Tomsk and are combined in the Bayesian way to obtain CO2 emission estimates
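
    The FRP-based emission bookkeeping referred to above is essentially: biomass burning rate proportional to fire radiative power, and species emissions equal to emission factors times biomass burned. The Python sketch below illustrates this chain with hypothetical conversion and emission factors.

```python
# Hedged sketch: FRP-based fire emissions. Biomass burning rate is taken
# proportional to fire radiative power, and species emissions follow from
# emission factors; the conversion factor and emission factors below are
# hypothetical placeholders, not the values derived in the study.
import numpy as np

frp_mw = np.array([120.0, 340.0, 80.0])        # FRP per fire pixel (MW)
alpha = 0.37                                    # kg dry matter per MJ (hypothetical)
dt = 3600.0                                     # assume each observation represents 1 h (s)

biomass_kg = alpha * frp_mw * dt                # MW * s = MJ, times kg/MJ
ef = {"CO2": 1600.0, "CO": 110.0, "PM2.5": 15.0}  # g species per kg dry matter (hypothetical)
emissions_kg = {sp: (factor / 1000.0) * biomass_kg.sum() for sp, factor in ef.items()}
print({sp: round(v, 1) for sp, v in emissions_kg.items()})
```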

  16. Comparing Simulations of Rising Flux Tubes Through the Solar Convection Zone with Observations of Solar Active Regions: Constraining the Dynamo Field Strength

    NASA Astrophysics Data System (ADS)

    Weber, M. A.; Fan, Y.; Miesch, M. S.

    2013-10-01

    We study how active-region-scale flux tubes rise buoyantly from the base of the convection zone to near the solar surface by embedding a thin flux tube model in a rotating spherical shell of solar-like turbulent convection. These toroidal flux tubes that we simulate range in magnetic field strength from 15 kG to 100 kG at initial latitudes of 1∘ to 40∘ in both hemispheres. This article expands upon Weber, Fan, and Miesch ( Astrophys. J. 741, 11, 2011) (Article 1) with the inclusion of tubes with magnetic flux of 1020 Mx and 1021 Mx, and more simulations of the previously investigated case of 1022 Mx, sampling more convective flows than the previous article, greatly improving statistics. Observed properties of active regions are compared to properties of the simulated emerging flux tubes, including: the tilt of active regions in accordance with Joy's Law as in Article 1, and in addition the scatter of tilt angles about the Joy's Law trend, the most commonly occurring tilt angle, the rotation rate of the emerging loops with respect to the surrounding plasma, and the nature of the magnetic field at the flux tube apex. We discuss how these diagnostic properties constrain the initial field strength of the active-region flux tubes at the bottom of the solar convection zone, and suggest that flux tubes of initial magnetic field strengths of ≥ 40 kG are good candidates for the progenitors of large (1021 Mx to 1022 Mx) solar active regions, which agrees with the results from Article 1 for flux tubes of 1022 Mx. With the addition of more magnetic flux values and more simulations, we find that for all magnetic field strengths, the emerging tubes show a positive Joy's Law trend, and that this trend does not show a statistically significant dependence on the magnetic flux.
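
    One of the diagnostics above, the Joy's-law trend and its scatter, can be extracted from a set of emergence latitudes and tilt angles with a one-parameter least-squares fit of tilt = T0 sin(latitude). The Python sketch below does this on synthetic data standing in for the simulation output.

```python
# Hedged sketch: fit a Joy's-law-like trend, tilt = T0 * sin(latitude), by least
# squares and report the scatter about the trend. Synthetic data stand in for
# the simulated emergence latitudes and tilt angles.
import numpy as np

rng = np.random.default_rng(3)
lat = rng.uniform(0.0, 40.0, 200)                          # emergence latitudes (deg)
tilt = 15.0 * np.sin(np.radians(lat)) + rng.normal(0.0, 6.0, lat.size)

x = np.sin(np.radians(lat))
T0 = np.sum(x * tilt) / np.sum(x * x)                       # 1-parameter least squares
scatter = np.std(tilt - T0 * x)
print(f"fitted Joy's law amplitude T0 = {T0:.1f} deg, rms scatter = {scatter:.1f} deg")
```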

  17. Southern San Andreas-San Jacinto fault system slip rates estimated from earthquake cycle models constrained by GPS and interferometric synthetic aperture radar observations

    NASA Astrophysics Data System (ADS)

    Lundgren, Paul; Hetland, Eric A.; Liu, Zhen; Fielding, Eric J.

    2009-02-01

    We use ground geodetic and interferometric synthetic aperture radar satellite observations across the southern San Andreas (SAF)-San Jacinto (SJF) fault systems to constrain their slip rates and the viscosity structure of the lower crust and upper mantle on the basis of periodic earthquake cycle, Maxwell viscoelastic, finite element models. Key questions for this system are the SAF and SJF slip rates, the slip partitioning between the two main branches of the SJF, and the dip of the SAF. The best-fitting models generally have a high-viscosity lower crust (η = 10^21 Pa s) overlying a lower-viscosity upper mantle (η = 10^19 Pa s). We find considerable trade-offs between the relative time into the current earthquake cycle of the San Jacinto fault and the upper mantle viscosity. With reasonable assumptions for the relative time in the earthquake cycle, the partition of slip is fairly robust at around 24-26 mm/a for the San Jacinto fault system and 16-18 mm/a for the San Andreas fault. Models for two subprofiles across the SAF-SJF systems suggest that slip may transfer from the western (Coyote Creek) branch to the eastern (Clark-Superstition hills) branch of the SJF from NW to SE. Across the entire system our best-fitting model gives slip rates of 2 ± 3, 12 ± 9, 12 ± 9, and 17 ± 3 mm/a for the Elsinore, Coyote Creek, Clark, and San Andreas faults, respectively, where the large uncertainties in the slip rates for the SJF branches reflect the large uncertainty in the slip rate partitioning within the SJF system.

  18. Constraining inflation

    SciTech Connect

    Adshead, Peter; Easther, Richard E-mail: richard.easther@yale.edu

    2008-10-15

    We analyze the theoretical limits on slow roll reconstruction, an optimal algorithm for recovering the inflaton potential (assuming a single-field slow roll scenario) from observational data. Slow roll reconstruction is based upon the Hamilton-Jacobi formulation of the inflationary dynamics. We show that at low inflationary scales the Hamilton-Jacobi equations simplify considerably. We provide a new classification scheme for inflationary models, based solely on the number of parameters needed to specify the potential, and provide forecasts for the bounds on the slow roll parameters from future data sets. A minimal running of the spectral index, induced solely by the first two slow roll parameters ({epsilon} and {eta}), appears to be effectively undetectable by realistic cosmic microwave background (CMB) experiments. However, since the ability to detect any running increases with the lever arm in comoving wavenumber, we conjecture that high redshift 21 cm data may allow tests of second-order consistency conditions on inflation. Finally, we point out that the second-order corrections to the spectral index are correlated with the inflationary scale, and thus the amplitude of the CMB B mode.
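
    For orientation, the lowest-order relations between the first two slow roll parameters and the basic observables can be written down directly. The sketch below assumes the Hubble-flow convention (n_s - 1 ≈ 2η - 4ε, r ≈ 16ε), which is one common choice and not necessarily the exact convention used in the paper; the numerical values are illustrative.

      def slow_roll_observables(epsilon, eta):
          """First-order Hubble slow-roll expressions (one common convention):
          n_s - 1 ~= 2*eta - 4*epsilon and r ~= 16*epsilon. The running of the
          spectral index is second order in the slow-roll parameters."""
          n_s = 1.0 + 2.0 * eta - 4.0 * epsilon
          r = 16.0 * epsilon
          return n_s, r

      # Illustrative values only, not fitted to any data set.
      n_s, r = slow_roll_observables(epsilon=0.01, eta=-0.01)
      print(f"n_s = {n_s:.3f}, r = {r:.3f}")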

  19. Quantifying the benefit of GOSAT total column CO2 observations for constraining the global carbon budget: An inter-comparison study with bottom-up CO2 flux estimates from MsTMIP (Invited)

    NASA Astrophysics Data System (ADS)

    Chatterjee, A.; Michalak, A. M.; O'Dell, C.; Huntzinger, D. N.; Kawa, S. R.; Oda, T.; Qiu, X.; Schwalm, C. R.; Yadav, V.

    2013-12-01

    Space-based remote sensing observations, such as those available from the Greenhouse gases Observing SATellite 'IBUKI' (GOSAT), hold great promise for improving the scientific understanding of carbon cycle processes and budgets at regional and global scales. The degree to which the GOSAT CO2 total column (XCO2) observations can constrain global fine-scale fluxes with reasonable precision and accuracy, and the degree to which the dense but lower precision GOSAT data provide additional information relative to the high precision but sparse in situ observations, remain topics of ongoing research. In this study, XCO2 observations retrieved via the GOSAT-ACOS B3.3 algorithm, the Total Column Carbon Observing Network (TCCON) XCO2 retrievals, and CO2 measurements from surface flask sites are assimilated using a geostatistical ensemble square root filter (GEnSRF) to estimate global surface fluxes at high spatial and temporal resolutions (spatial: 1° × 1.25°; temporal: daily). Fluxes are estimated over a period of four consecutive years (June 2008 - May 2012), with only the in situ and TCCON observations constraining the first year surface fluxes, while fluxes for the remaining estimation periods are constrained by all three sets of observations. The estimated fluxes are compared with a suite of bottom-up estimates based on a combination of biospheric fluxes from models participating in the Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP) plus anthropogenic flux estimates from the Open-source Data Inventory for Anthropogenic CO2 (ODIAC). Because GEnSRF has been designed to estimate fluxes independently of any a priori flux estimates from flux models and/or inventories, this data assimilation tool allows for a completely independent comparison with the bottom-up estimates. GOSAT observations are found to be particularly valuable for constraining fluxes: (a) during the summer season over the land, and (b) across all seasons over the oceans; in

  20. A province-scale block model of Walker Lane and western Basin and Range crustal deformation constrained by GPS observations (Invited)

    NASA Astrophysics Data System (ADS)

    Hammond, W. C.; Bormann, J.; Blewitt, G.; Kreemer, C.

    2013-12-01

    The Walker Lane in the western Great Basin of the western United States is an 800 km long and 100 km wide zone of active intracontinental transtension that absorbs ~10 mm/yr, about 20% of the Pacific/North America plate boundary relative motion. It lies west of the Sierra Nevada/Great Valley microplate (SNGV) and adjoins the Basin and Range Province to the east; deformation is predominantly shear strain overprinted with a minor component of extension. The Walker Lane responds with faulting, block rotations, and structural step-overs, and has distinct and varying partitioned domains of shear and extension. Resolving these complex deformation patterns requires a long term observation strategy with a dense network of GPS stations (spacing ~20 km). The University of Nevada, Reno operates the 373 station Mobile Array of GPS for Nevada transtension (MAGNET) semi-continuous network that supplements coverage by other networks such as EarthScope's Plate Boundary Observatory, which alone has insufficient density to resolve the deformation patterns. Uniform processing of data from these GPS mega-networks provides a synoptic view and new insights into the kinematics and mechanics of Walker Lane tectonics. We present velocities for thousands of stations with time series between 3 and 17 years in duration, aligned to our new GPS-based North America fixed reference frame NA12. The velocity field shows a rate budget across the southern Walker Lane of ~10 mm/yr, decreasing northward to ~7 mm/yr at the latitude of the Mohawk Valley and Pyramid Lake. We model the data with a new block model that estimates rotations and slip rates of known active faults between the Mojave Desert and northern Nevada and northeast California. The density of active faults in the region requires including a relatively large number of blocks in the model to accurately estimate deformation patterns. With 49 blocks, our model captures structural detail not represented in previous province-scale models, and

  1. Three-dimensional constrained variational analysis: Approach and application to analysis of atmospheric diabatic heating and derivative fields during an ARM SGP intensive observational period

    NASA Astrophysics Data System (ADS)

    Tang, Shuaiqi; Zhang, Minghua

    2015-08-01

    Atmospheric vertical velocities and advective tendencies are essential large-scale forcing data to drive single-column models (SCMs), cloud-resolving models (CRMs), and large-eddy simulations (LESs). However, they cannot be measured directly in the field or easily calculated with great accuracy. In the Atmospheric Radiation Measurement Program (ARM), a constrained variational algorithm (1-D constrained variational analysis (1DCVA)) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). The 1DCVA algorithm is now extended into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data, diabatic heating sources (Q1), and moisture sinks (Q2). Results are presented for a midlatitude cyclone case study on 3 March 2000 at the ARM Southern Great Plains site. These results are used to evaluate the diabatic heating fields in available products such as the Rapid Update Cycle, ERA-Interim, the National Centers for Environmental Prediction Climate Forecast System Reanalysis, the Modern-Era Retrospective Analysis for Research and Applications, the Japanese 55-year Reanalysis, and the North American Regional Reanalysis. We show that although the analyses/reanalyses generally capture the atmospheric state of the cyclone, their biases in the derivative terms (Q1 and Q2) at the regional scale of a few hundred kilometers are large, and all analyses/reanalyses tend to underestimate the subgrid-scale upward transport of moist static energy in the lower troposphere. The 3DCVA-gridded large-scale forcing data are physically consistent with the spatial distribution of surface and TOA measurements of radiation, precipitation, latent and sensible heat fluxes, and clouds, and are therefore better suited to force SCMs, CRMs, and LESs. Possible applications of the 3DCVA are discussed.
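
    The derived quantities Q1 and Q2 follow standard (Yanai-type) heat and moisture budget definitions; a hedged finite-difference sketch of those budgets is given below. The variational adjustment that enforces consistency with surface and TOA fluxes, which is the substance of 3DCVA, is not reproduced here, and all array names and grid spacings are generic placeholders.

      import numpy as np

      cp, g, L = 1004.0, 9.81, 2.5e6   # J kg^-1 K^-1, m s^-2, J kg^-1

      def q1_q2(T, q, z, u, v, omega, dt, dp, dy, dx):
          """Apparent heat source Q1 and moisture sink Q2 by finite differences.
          T (K), q (kg/kg), z (m), u, v (m/s), omega (Pa/s); all fields are
          shaped (time, pressure, y, x); dt (s), dp (Pa), dy, dx (m)."""
          s = cp * T + g * z                               # dry static energy
          def budget(f):
              local = np.gradient(f, dt, axis=0)           # d f / d t
              adv = u * np.gradient(f, dx, axis=3) + v * np.gradient(f, dy, axis=2)
              vert = omega * np.gradient(f, dp, axis=1)    # omega * d f / d p
              return local + adv + vert
          Q1 = budget(s)           # J kg^-1 s^-1
          Q2 = -L * budget(q)      # J kg^-1 s^-1
          return Q1, Q2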

  2. Constraining Dark Energy

    NASA Astrophysics Data System (ADS)

    Abrahamse, Augusta

    2010-12-01

    Future advances in cosmology will depend on the next generation of cosmological observations and how they shape our theoretical understanding of the universe. Current theoretical ideas, however, have an important role to play in guiding the design of such observational programs. The work presented in this thesis concerns the intersection of observation and theory, particularly as it relates to advancing our understanding of the accelerated expansion of the universe (or the dark energy). Chapters 2 - 4 make use of the simulated data sets developed by the Dark Energy Task Force (DETF) for a number of cosmological observations currently in the experimental pipeline. We use these forecast data in the analysis of four quintessence models of dark energy: the PNGB, Exponential, Albrecht-Skordis and Inverse Power Law (IPL) models. Using Markov Chain Monte Carlo sampling techniques we examine the ability of each simulated data set to constrain the parameter space of these models. We examine the potential of the data for differentiating time-varying models from a pure cosmological constant. Additionally, we introduce an abstract parameter space to facilitate comparison between models and investigate the ability of future data to distinguish between these quintessence models. In Chapter 5 we present work towards understanding the effects of systematic errors associated with photometric redshift estimates. Due to the need to sample a vast number of deep and faint galaxies, photometric redshifts will be used in a wide range of future cosmological observations including weak gravitational lensing, baryon acoustic oscillations and type Ia supernovae observations. The uncertainty in the redshift distributions of galaxies has a significant potential impact on the cosmological parameter values inferred from such observations. We introduce a method for parameterizing uncertainties in modeling assumptions affecting photometric redshift calculations and for propagating these
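
    A minimal Metropolis-Hastings sketch of the kind of Markov Chain Monte Carlo sampling described above; the two-parameter Gaussian "likelihood" in (w0, wa) is a toy stand-in for the actual DETF simulated-data likelihoods, and the step sizes and starting point are arbitrary.

      import numpy as np

      rng = np.random.default_rng(1)

      def log_likelihood(theta):
          # Toy stand-in for a dark-energy likelihood in the (w0, wa) plane;
          # a real analysis would compare model predictions with the DETF data.
          w0, wa = theta
          return -0.5 * (((w0 + 1.0) / 0.1) ** 2 + (wa / 0.3) ** 2)

      def metropolis(theta0, n_steps=20000, step=(0.05, 0.15)):
          step = np.asarray(step, dtype=float)
          theta = np.asarray(theta0, dtype=float)
          logl = log_likelihood(theta)
          chain = []
          for _ in range(n_steps):
              proposal = theta + step * rng.standard_normal(theta.size)
              logl_prop = log_likelihood(proposal)
              if np.log(rng.uniform()) < logl_prop - logl:   # accept/reject
                  theta, logl = proposal, logl_prop
              chain.append(theta.copy())
          return np.array(chain)

      chain = metropolis([-0.9, 0.2])
      print("posterior means (after burn-in):", chain[len(chain) // 2:].mean(axis=0))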

  3. Constraining Distance and Inclination Angle of V4641 Sgr Using Swift and NuSTAR Observations during Low Soft Spectral State

    NASA Astrophysics Data System (ADS)

    Pahari, Mayukh; Misra, Ranjeev; Dewangan, Gulab C.; Pawar, Pramod

    2015-12-01

    We present results from NuSTAR and Swift/XRT joint spectral analysis of V4641 Sgr during a disk dominated or soft state, as well as a power law dominated or hard state. The soft state spectrum is well modeled by a relativistically blurred disk emission, a power law, a broad iron line, two narrow emission lines, and two edges. The Markov Chain Monte Carlo simulation technique and the relativistic effects seen in the disk and broad iron line allow us to self-consistently constrain the inner disk radius, disk inclination angle, and distance to the source at 2.43 (+0.39/-0.17) Rg (GM/c^2), 69.5 (+12.8/-4.2) degrees and 10.8 (+1.6/-2.5) kpc respectively. For the hard state, the spectrum is a power law with a weakly broad iron line and an edge. The distance estimate gives a measure of the Eddington fraction, L_(2.0-80.0 keV)/L_Edd, to be ˜1.3 × 10^-2 and ˜1.9 × 10^-3 for the soft and hard states, respectively. Unlike many other typical black hole systems, which are always in a hard state at such a low Eddington fraction, V4641 Sgr shows a soft, disk dominated state. The soft state spectrum shows narrow emission lines at ˜6.95 and ˜8.31 keV which can be identified as being due to emission from highly ionized iron and nickel in an X-ray irradiated wind respectively. If this is not due to instrumental effect or calibration error, this would be the first detection of a Ni fluorescent line in a black hole X-ray binary.

  4. Development of a Standardized Screening Rule for Tuberculosis in People Living with HIV in Resource-Constrained Settings: Individual Participant Data Meta-analysis of Observational Studies

    PubMed Central

    Getahun, Haileyesus; Kittikraisak, Wanitchaya; Heilig, Charles M.; Corbett, Elizabeth L.; Ayles, Helen; Cain, Kevin P.; Grant, Alison D.; Churchyard, Gavin J.; Kimerling, Michael; Shah, Sarita; Lawn, Stephen D.; Wood, Robin; Maartens, Gary; Granich, Reuben; Date, Anand A.; Varma, Jay K.

    2011-01-01

    loss can identify a subset of people living with HIV who have a very low probability of having TB disease. A simplified screening rule using any one of these symptoms can be used in resource-constrained settings to identify people living with HIV in need of further diagnostic assessment for TB. Use of this algorithm should result in earlier TB diagnosis and treatment, and should allow for substantial scale-up of IPT. Please see later in the article for the Editors' Summary PMID:21267059

  5. Constraining and validating the Oct/Nov 2003 X-class EUV flare enhancements with observations of FUV dayglow and E-region electron densities

    NASA Astrophysics Data System (ADS)

    Strickland, D. J.; Lean, J. L.; Daniell, R. E.; Knight, H. K.; Woo, W. K.; Meier, R. R.; Straus, P. R.; Woods, T. N.; Eparvier, F. G.; McMullin, D. R.; Christensen, A. B.; Morrison, D.; Paxton, L. J.

    2007-06-01

    Near peak activity of two X-class solar flares, on 28 October and 4 November 2003, the Thermosphere Ionosphere Mesosphere Energetics and Dynamics (TIMED)/Solar EUV Experiment (SEE) instrument recorded order of magnitude increases in solar EUV irradiance, the TIMED/Global Ultraviolet Imager (GUVI) observed simultaneous increases in upper atmosphere far ultraviolet (FUV) dayglow, and the European Incoherent Scatter Scientific Association (EISCAT) radar and the Ionospheric Occultation Experiment onboard the PICOSat spacecraft recorded corresponding changes in E-region electron densities. Calculations of the FUV dayglow and electron density profiles using Version 8 SEE flare spectra overestimate the actual observed increases by more than a factor of 2.0. This prompted the development of an alternative approach that uses the FUV dayglow and associated E-layer electron density profiles to derive and validate, respectively, the increases in the solar EUV irradiance spectrum. The solar EUV spectrum required to produce the FUV dayglow is specified between 45 and 27 nm by SEE's EGS measurements, between 27 and 5 nm by GUVI dayglow measurements, and between 5 and 1 nm using a combination of the GOES X-ray data and the NRLEUV model. The energy fluxes in the 5- to 27-nm bands (at 5-10, 10-15, 15-20, and 20-27 nm) are randomly varied in search of combinations such that the full spectrum (λ < 45 nm) replicates the GUVI dayglow observations. In contrast to the Version 8 SEE XPS observations, solar EUV spectra derived using the multiband yield approach produce electron densities that are consistent with those observed independently. The new multiband yield algorithm thus provides a unique tool for independent validation of solar EUV spectral irradiance measurements using FUV dayglow observations.

  6. Constraining Slab Sinking on a Whole-Mantle Scale: Quantitative Integration of Surface and Sub-Surface Observations from Geophysics and Geology

    NASA Astrophysics Data System (ADS)

    Sigloch, K.; Mihalynuk, M. G.

    2014-12-01

    How rapidly slabs sink, which trajectories they follow, and how they deform in the process, presents an inferential challenge to geophysics. Mantle rheologies remain highly uncertain, and seismic tomography can merely offer present-day snapshots of a process defined by temporal evolution. Thus observational constraints on slab sinking have tended to remain non-unique. Subduction zones are complex litho-consumers whose time-variant activity can be reconstructed from geological observations on paleo-arcs, but the association of arcs with their subducted, tomographically imaged lithosphere is uncertain. Except for young slabs that can be reliably linked with coeval paleo-arc activity a priori, deeper geological time information cannot be exploited with certainty. As long as slab geometries remain "undated", few constraints on slab sinking behavior and hence mantle rheology can be extracted. Sigloch & Mihalynuk (2013) demonstrated a quantitative method to tighten constraints on slab sinking in the lower mantle by investigating the least ambiguous slab geometries observed. Extremely massive and almost vertical slab walls should have been deposited by vertical sinking beneath (intra-oceanic) trenches that remained stationary for a long time (~100 m.y.). We showed how this hypothesis of vertical sinking can be tested quantitatively and successfully, making only minimal assumptions on mantle rheology, and with proper error propagation for all observations (tomography, plate reconstructions, geology). Here the discussion of sinking trajectories and rates is extended to more challenging geometries. Dipping slabs in the lower mantle, and laterally extensive "stagnant slabs" in the transition zone can also be rendered dateable and trackable by (re-)investigation of their paleo-trenches. We discuss examples and link to recent geodynamic modeling of viscous sheet sinking. Reference: Sigloch K & Mihalynuk MG (2013), Intra-oceanic subduction shaped the assembly of Cordilleran North

  7. Joint modeling of teleseismic and tsunami wave observations to constrain the 16 September 2015 Illapel, Chile, Mw 8.3 earthquake rupture process

    NASA Astrophysics Data System (ADS)

    Li, Linyan; Lay, Thorne; Cheung, Kwok Fai; Ye, Lingling

    2016-05-01

    The 16 September 2015 Illapel, Chile, Mw 8.3 earthquake ruptured ~170 km along the plate boundary megathrust fault from 30.0°S to 31.6°S. A patch of offshore slip of up to 10 m extended to near the trench, and a patch of ~3 m slip occurred downdip below the coast. Aftershocks fringe the large-slip zone, extending along the coast from 29.5°S to 32.5°S between the 1922 and 1971/1985 ruptures. The coseismic slip distribution is determined by iterative modeling of teleseismic body waves as well as tsunami signals recorded at three regional DART stations and tide gauges immediately north and south of the rupture. The tsunami observations tightly delimit the rupture length, suppressing bilateral southward extension of slip found in unconstrained teleseismic-wave inversions. The spatially concentrated rupture area, with a stress drop of ~3.2 MPa, is validated by modeling DART and tide gauge observations in Hawaii, which also prove sensitive to the along-strike length of the rupture.

  8. The formation of peak-ring basins: Working hypotheses and path forward in using observations to constrain models of impact-basin formation

    NASA Astrophysics Data System (ADS)

    Baker, David M. H.; Head, James W.; Collins, Gareth S.; Potter, Ross W. K.

    2016-07-01

    Impact basins provide windows into the crustal structure and stratigraphy of planetary bodies; however, interpreting the stratigraphic origin of basin materials requires an understanding of the processes controlling basin formation and morphology. Peak-ring basins (exhibiting a rim crest and single interior ring of peaks) provide important insight into the basin-formation process, as they are transitional between complex craters with central peaks and larger multi-ring basins. New image and altimetry data from the Lunar Reconnaissance Orbiter as well as a suite of remote sensing datasets have permitted a reassessment of the origin of lunar peak-ring basins. We synthesize morphometric, spectroscopic, and gravity observations of lunar peak-ring basins and describe two working hypotheses for the formation of peak rings that involve interactions between inward collapsing walls of the transient cavity and large central uplifts of the crust and mantle. Major facets of our observations are then compared and discussed in the context of numerical simulations of peak-ring basin formation in order to plot a course for future model refinement and development.

  9. Multiple-scale hydrothermal circulation in 135 Ma oceanic crust of the Japan Trench outer rise: Numerical models constrained with heat flow observations

    NASA Astrophysics Data System (ADS)

    Ray, Labani; Kawada, Yoshifumi; Hamamoto, Hideki; Yamano, Makoto

    2015-09-01

    Anomalous high heat flow is observed within 150 km seaward of the trench axis at the Japan Trench offshore of Sanriku, where the old Pacific Plate (˜135 Ma) is subducting. Individual heat flow values range between 42 and 114 mW m^-2, with an average of ˜70 mW m^-2. These values are higher than those expected from the seafloor age based on thermal models of the oceanic plate, i.e., ˜50 mW m^-2. The heat flow exhibits spatial variations at multiple scales: regional high average heat flow (˜100 km) and smaller-scale heat flow peaks (˜1 km). We found that hydrothermal mining of heat from depth due to gradual thickening of an aquifer in the oceanic crust toward the trench axis can yield elevated heat flow of the spatial scale of ˜100 km. Topographic effects combined with hydrothermal circulation may account for the observed smaller-scale heat flow variations. Hydrothermal circulation in high-permeability faults may result in heat flow peaks of a subkilometer spatial scale. Volcanic intrusions are unlikely to be a major source of heat flow variations at any scale because of limited occurrence of young volcanoes in the study area. Hydrothermal heat transport may work at various scales on outer rises of other subduction zones as well, since fractures and faults have been well developed due to bending of the incoming plate.

  10. Synergy of short gamma ray burst and gravitational wave observations: Constraining the inclination angle of the binary and possible implications for off-axis gamma ray bursts

    NASA Astrophysics Data System (ADS)

    Arun, K. G.; Tagoshi, Hideyuki; Pai, Archana; Mishra, Chandra Kant

    2014-07-01

    Compact binary mergers are the strongest candidates for the progenitors of short gamma ray bursts (SGRBs). If a gravitational wave signal from the compact binary merger is observed in association with a SGRB, such a synergy can help us understand many interesting aspects of these bursts. We examine the accuracies with which a worldwide network of gravitational wave interferometers would measure the inclination angle (the angle between the angular momentum axis of the binary and the observer's line of sight) of the binary. We compare the projected accuracies of gravitational wave detectors to measure the inclination angle of double neutron star and neutron star-black hole binaries for different astrophysical scenarios. We find that a five-detector network can measure the inclination angle to an accuracy of ˜5.1 (2.2) deg for a double neutron star (neutron star-black hole) system at 200 Mpc if the direction of the source as well as the redshift is known electromagnetically. We argue as to how an accurate estimation of the inclination angle of the binary can prove to be crucial in understanding off-axis GRBs, the dynamics and the energetics of their jets, and help the searches for (possible) orphan afterglows of the SGRBs.

  11. Constraining Galileon inflation

    SciTech Connect

    Regan, Donough; Anderson, Gemma J.; Hull, Matthew; Seery, David E-mail: G.Anderson@sussex.ac.uk E-mail: D.Seery@sussex.ac.uk

    2015-02-01

    In this short paper, we present constraints on the Galileon inflationary model from the CMB bispectrum. We employ a principal-component analysis of the independent degrees of freedom constrained by data and apply this to the WMAP 9-year data to constrain the free parameters of the model. A simple Bayesian comparison establishes that support for the Galileon model from bispectrum data is at best weak.

  12. Constraining Predicted Secondary Organic Aerosol Formation and Processing Using Real-Time Observations of Aging Urban Emissions in an Oxidation Flow Reactor

    NASA Astrophysics Data System (ADS)

    Ortega, A. M.; Palm, B. B.; Hayes, P. L.; Day, D. A.; Cubison, M.; Brune, W. H.; Hu, W.; Graus, M.; Warneke, C.; Gilman, J.; De Gouw, J. A.; Jimenez, J. L.

    2014-12-01

    To investigate atmospheric processing of urban emissions, we deployed an oxidation flow reactor with measurements of size-resolved chemical composition of submicron aerosol during CalNex-LA, a field study investigating air quality and climate change at a receptor site in the Los Angeles Basin. The reactor produces OH concentrations up to 4 orders of magnitude higher than in ambient air, achieving equivalent atmospheric aging of hours to ~2 weeks in 5 minutes of processing. The OH exposure (OHexp) was stepped every 20 min to survey the effects of a range of oxidation exposures on gases and aerosols. This approach is a valuable tool for in-situ evaluation of changes in organic aerosol (OA) concentration and composition due to photochemical processing over a range of ambient atmospheric conditions and composition. Combined with collocated gas-phase measurements of volatile organic compounds, this novel approach enables the comparison of measured SOA to predicted SOA formation from a prescribed set of precursors. Results from CalNex-LA show enhancements of OA and inorganic aerosol from gas-phase precursors. The OA mass enhancement from aging was highest at night and correlated with trimethylbenzene, indicating the importance of relatively short-lived VOCs (OH lifetime of ~12 hrs or less) as SOA precursors in the LA Basin. Maximum net SOA production is observed between 3 and 6 days of aging and decreases at higher exposures. Aging in the reactor shows similar behavior to atmospheric processing; the elemental composition of ambient and reactor measurements follows similar slopes when plotted in a Van Krevelen diagram. Additionally, for air processed in the reactor, oxygen-to-carbon ratios (O/C) of aerosol extended over a larger range compared to ambient aerosol observed in the LA Basin. While reactor aging always increases O/C, often beyond maximum observed ambient levels, a transition from net OA production to destruction occurs at intermediate OHexp, suggesting a transition
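
    The equivalent-age arithmetic behind the flow reactor can be illustrated with a back-of-the-envelope calculation; the reactor OH level, residence time, and assumed ambient OH below are illustrative values rather than the CalNex-LA numbers.

      # Equivalent photochemical age from an oxidation flow reactor exposure:
      #   OH exposure = [OH]_reactor * residence time
      #   age (days)  = OH exposure / [OH]_ambient / 86400
      oh_reactor = 6.0e9       # molec cm^-3 in the reactor (illustrative)
      residence_s = 5 * 60     # ~5 minutes of processing
      oh_ambient = 1.5e6       # molec cm^-3, a commonly assumed 24-h average

      oh_exposure = oh_reactor * residence_s
      age_days = oh_exposure / oh_ambient / 86400.0
      print(f"OHexp = {oh_exposure:.1e} molec cm^-3 s ~ {age_days:.0f} equivalent days")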

  13. Updated Rupture Model for the M7.8 October 28, 2012, Haida Gwaii Earthquake as Constrained by GPS-Observed Displacements

    NASA Astrophysics Data System (ADS)

    Nykolaishen, L.; Dragert, H.; Wang, K.; James, T. S.; de Lange Boom, B.; Schmidt, M.; Sinnott, D.

    2014-12-01

    The M7.8 low-angle thrust earthquake off the west coast of southern Haida Gwaii on October 28, 2012, provided Canadian scientists the opportunity to study a local large thrust earthquake and has provided important information towards an improved understanding of geohazards in coastal British Columbia. Most large events along the Pacific-North America boundary in this region have involved strike-slip motion, such as the 1949 M8.1 earthquake on the Queen Charlotte Fault. In contrast, along the southern portion of Haida Gwaii the young (~8 Ma) Pacific plate crust also underthrusts North America and has been viewed as a small-scale analogy of the Cascadia Subduction Zone. Initial seismic-based rupture models for this event were improved through inclusion of GPS observed coseismic displacements, which are as large as 115 cm of horizontal motion (SSW) and 30 cm of subsidence. Additional campaign-style GPS surveys have since been repeated by the Canadian Hydrographic Service (CHS) at seven vertical reference benchmarks throughout Haida Gwaii, significantly improving the coverage of coseismic displacement observations in the region. These added offsets were typically calculated by differencing a single occupation before and after the earthquake, and preliminary displacement estimates are consistent with previous GPS observations from the Geological Survey of Canada. Addition of the CHS coseismic offset estimates may allow direct inversion of the GPS data to derive a purely GPS-based rupture model. To date, cumulative postseismic displacements at six sites indicate up to 6 cm of motion, varying in azimuth between SSW and SE. Preliminary postseismic time series curve fitting to date has utilized a double exponential function characteristic of mantle relaxation. The current postseismic trends also suggest afterslip on the deeper plate interface beneath central Haida Gwaii along with possible induced aseismic slip on a deeper segment of the Queen Charlotte Fault located offshore

  14. Finding consistency between different views of the absorption enhancement of black carbon: An observationally constrained hybrid model to support a transition in optical properties with mass fraction

    NASA Astrophysics Data System (ADS)

    Coe, H.; Allan, J. D.; Whitehead, J.; Alfarra, M. R. R.; Villegas, E.; Kong, S.; Williams, P. I.; Ting, Y. C.; Haslett, S.; Taylor, J.; Morgan, W.; McFiggans, G.; Spracklen, D. V.; Reddington, C.

    2015-12-01

    The mixing state of black carbon is uncertain yet has a significant influence on the efficiency with which a particle absorbs light. In turn, this may make a significant contribution to the uncertainty in global model predictions of the black carbon radiative budget. Previous modelling studies that have represented this mixing state using a core-shell approach have shown that absorption by aged black carbon particles may be considerably enhanced compared to that by freshly emitted black carbon, due to the addition of co-emitted, weakly absorbing species. However, recent field results have demonstrated that any enhancement of absorption is minor in the ambient atmosphere. Resolving these differences in absorption efficiency is important as they will have a major impact on the extent to which black carbon heats the atmospheric column. We have made morphology-independent measurements of refractory black carbon mass and associated weakly absorbing material in single particles from laboratory-generated diesel soot and from black carbon particles in ambient air influenced by traffic and wood burning sources, and related these to the optical properties of the particles. We compared our calculated optical properties with optical models that use varying mixing state assumptions and, by characterising the behaviour in terms of the relative amounts of weakly absorbing material and black carbon in a particle, we show that a sharp transition in mixing state occurs. We show that the majority of black carbon particles from traffic-dominated sources can be treated as externally mixed and show no absorption enhancement, whereas models assuming internal mixing tend to give the best estimate of the absorption enhancement of thickly coated black carbon particles from biofuel or biomass burning. This approach reconciles the differences in absorption enhancement previously observed and offers a systematic way of treating the differences in behaviour observed.

  15. Source Attribution and Interannual Variability of Arctic Pollution in Spring Constrained by Aircraft (ARCTAS, ARCPAC) and Satellite (AIRS) Observations of Carbon Monoxide

    NASA Technical Reports Server (NTRS)

    Fisher, J. A.; Jacob, D. J.; Purdy, M. T.; Kopacz, M.; LeSager, P.; Carouge, C.; Holmes, C. D.; Yantosca, R. M.; Batchelor, R. L.; Strong, K.; Diskin, G. S.; Fuelberg, H. E.; Holloway, J. S.; McMillan, W. W.; Warner, J.; Streets, D. G.; Zhang, Q.; Wang, Y.; Wu, S.

    2009-01-01

    We use aircraft observations of carbon monoxide (CO) from the NASA ARCTAS and NOAA ARCPAC campaigns in April 2008 together with multiyear (2003-2008) CO satellite data from the AIRS instrument and a global chemical transport model (GEOS-Chem) to better understand the sources, transport, and interannual variability of pollution in the Arctic in spring. Model simulation of the aircraft data gives best estimates of CO emissions in April 2008 of 26 Tg month^-1 for Asian anthropogenic, 9.1 for European anthropogenic, 4.2 for North American anthropogenic, 9.3 for Russian biomass burning (anomalously large that year), and 21 for Southeast Asian biomass burning. We find that Asian anthropogenic emissions are the dominant source of Arctic CO pollution everywhere except in surface air where European anthropogenic emissions are of similar importance. Synoptic pollution influences in the Arctic free troposphere include contributions of comparable magnitude from Russian biomass burning and from North American, European, and Asian anthropogenic sources. European pollution dominates synoptic variability near the surface. Analysis of two pollution events sampled by the aircraft demonstrates that AIRS is capable of observing pollution transport to the Arctic in the mid-troposphere. The 2003-2008 record of CO from AIRS shows that interannual variability averaged over the Arctic cap is very small. AIRS CO columns over Alaska are highly correlated with the Oceanic Niño Index, suggesting a link between El Niño and northward pollution transport. AIRS shows lower-than-average CO columns over Alaska during April 2008, despite the Russian fires, due to a weakened Aleutian Low hindering transport from Asia and associated with the moderate 2007-2008 La Niña. This suggests that Asian pollution influence over the Arctic may be particularly large under strong El Niño conditions.

  16. Rupture processes of the 2012 September 5 Mw 7.6 Nicoya, Costa Rica earthquake constrained by improved geodetic and seismological observations

    NASA Astrophysics Data System (ADS)

    Liu, Chengli; Zheng, Yong; Xiong, Xiong; Wang, Rongjiang; López, Allan; Li, Jun

    2015-10-01

    On 2012 September 5, the anticipated interplate thrust earthquake ruptured beneath the Nicoya peninsula in northwestern Costa Rica close to the Middle America trench, with a magnitude Mw 7.6. Extensive co-seismic observations were provided by dense near-field strong ground motion and Global Positioning System (GPS) networks and by teleseismic recordings from global seismic networks. The wealth of data available for the 2012 Mw 7.6 Nicoya earthquake provides a unique opportunity to investigate the details of the rupture process of this earthquake. By implementing a non-linear joint inversion with high-rate GPS waveforms, additional static GPS offsets, strong-motion data and teleseismic body waveforms, we obtained a robust and accurate rupture model of the 2012 Mw 7.6 Nicoya earthquake. The earthquake is dominated by pure thrust motion with a maximum slip of 3.5 m, and the main large slip patch is located below the hypocentre, spanning ˜50 km along dip and ˜110 km along strike. The static stress drop is about 3.4 MPa. The total seismic moment of our preferred model is 3.46 × 10^20 N m, which gives Mw = 7.6. Due to the fast rupture velocity, most of the seismic moment was released within 70 s. The largest slip patch directly overlaps the interseismic locked region identified by geodetic observations and extends downdip to the intersection with the upper plate Moho. We also find that there is a complementary pattern between the distribution of aftershocks and the co-seismic rupture; most aftershocks are located in the crust of the upper plate and are possibly induced by the stress change caused by the large slip patch.

  17. Shallow Chamber & Conduit Behavior of Silicic Magma: A Thermo- and Fluid- Dynamic Parameterization Model of Physical Deformation as Constrained by Geodetic Observations: Case Study; Soufriere Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Gunn de Rosas, C. L.

    2013-12-01

    The Soufrière Hills Volcano, Montserrat (SHV) is an active, mainly andesitic and well-studied stratovolcano situated at the northern end of the Lesser Antilles Arc subduction zone in the Caribbean Sea. The goal of our research is to create a high resolution 3D subsurface model of the shallow and deeper aspects of the magma storage and plumbing system at SHV. Our model will integrate inversions using continuous and campaign geodetic observations at SHV from 1995 to the present as well as local seismic records taken at various unrest intervals to construct a best-fit geometry, pressure point source and inflation rate and magnitude. We will also incorporate a heterogeneous media in the crust and use the most contemporary understanding of deep crustal- or even mantle-depth 'hot-zone' genesis and chemical evolution of silicic and intermediate magmas to inform the character of the deep edifice influx. Our heat transfer model will be constructed with a modified 'thin shell' enveloping the magma chamber to simulate the insulating or conducting influence of heat-altered chamber boundary conditions. The final forward model should elucidate observational data preceding and proceeding unrest events, the behavioral suite of magma transport in the subsurface environment and the feedback mechanisms that may contribute to eruption triggering. Preliminary hypotheses suggest wet, low-viscosity residual melts derived from 'hot zones' will ascend rapidly to shallower stall-points and that their products (eventually erupted lavas as well as stalled plutonic masses) will experience and display two discrete periods of shallow evolution; a rapid depressurization crystallization event followed by a slower conduction-controlled heat transfer and cooling crystallization. These events have particular implications for shallow magma behaviors, notably inflation, compressibility and pressure values. Visualization of the model with its inversion constraints will be affected with Com

  18. Dynamic triggering of creep events in the Salton Trough, Southern California by regional M ≥ 5.4 earthquakes constrained by geodetic observations and numerical simulations

    NASA Astrophysics Data System (ADS)

    Wei, Meng; Liu, Yajing; Kaneko, Yoshihiro; McGuire, Jeffrey J.; Bilham, Roger

    2015-10-01

    Since a regional earthquake in 1951, shallow creep events on strike-slip faults within the Salton Trough, Southern California have been triggered at least 10 times by M ≥ 5.4 earthquakes within 200 km. The high earthquake and creep activity and the long history of digital recording within the Salton Trough region provide a unique opportunity to study the mechanism of creep event triggering by nearby earthquakes. Here, we document the history of fault creep events on the Superstition Hills Fault based on data from creepmeters, InSAR, and field surveys since 1988. We focus on a subset of these creep events that were triggered by significant nearby earthquakes. We model these events by adding realistic static and dynamic perturbations to a theoretical fault model based on rate- and state-dependent friction. We find that the static stress changes from the causal earthquakes are less than 0.1 MPa and too small to instantaneously trigger creep events. In contrast, we can reproduce the characteristics of triggered slip with dynamic perturbations alone. The instantaneous triggering of creep events depends on the peak and the time-integrated amplitudes of the dynamic Coulomb stress change. Based on observations and simulations, the stress change amplitude required to trigger a creep event of a 0.01-mm surface slip is about 0.6 MPa. This threshold is at least an order of magnitude larger than the reported triggering threshold of non-volcanic tremors (2-60 kPa) and earthquakes in geothermal fields (5 kPa) and near shale gas production sites (0.2-0.4 kPa), which may result from differences in effective normal stress, fault friction, the density of nucleation sites in these systems, or triggering mechanisms. We conclude that shallow frictional heterogeneity can explain both the spontaneous and dynamically triggered creep events on the Superstition Hills Fault.
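
    A small sketch of the two dynamic-stress metrics the triggering threshold is framed in terms of, the peak and the time-integrated amplitude of the Coulomb stress change; the decaying sinusoid below is a synthetic stand-in for a passing seismic wave train, not a modelled seismogram.

      import numpy as np

      def coulomb_stress_metrics(delta_cfs, dt):
          """Peak (Pa) and time-integrated amplitude (Pa s) of a dynamic
          Coulomb stress change time series sampled at interval dt (s)."""
          peak = np.max(np.abs(delta_cfs))
          integrated = np.sum(np.abs(delta_cfs)) * dt   # simple Riemann sum
          return peak, integrated

      # Synthetic passing-wave perturbation (decaying sinusoid), standing in
      # for the dynamic Coulomb stress change from a regional earthquake.
      dt = 0.05                                   # s
      t = np.arange(0.0, 60.0, dt)
      delta_cfs = 0.6e6 * np.exp(-t / 15.0) * np.sin(2 * np.pi * 0.2 * t)  # Pa

      peak, integ = coulomb_stress_metrics(delta_cfs, dt)
      print(f"peak |dCFS| = {peak / 1e6:.2f} MPa, integral = {integ / 1e6:.1f} MPa s")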

  19. Choosing health, constrained choices.

    PubMed

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease. PMID:20028669

  20. Constrained Canonical Correlation.

    ERIC Educational Resources Information Center

    DeSarbo, Wayne S.; And Others

    1982-01-01

    A variety of problems associated with the interpretation of traditional canonical correlation are discussed. A response surface approach is developed which allows for investigation of changes in the coefficients while maintaining an optimum canonical correlation value. Also, a discrete or constrained canonical correlation method is presented. (JKS)

  1. Observation of high energy atmospheric neutrinos with the Antarctic Muon and Neutrino Detector Array

    SciTech Connect

    Ahrens, J.; Andres, E.; Bai, X.; Barouch, G.; Barwick, S.W.; Bay, R.C.; Becka, T.; Becker, K.-H.; Bertrand, D.; Binon, F.; Biron, A.; Booth, J.; Botner, O.; Bouchta, A.; Bouhali, O.; Boyce, M.M.; Carius, S.; Chen, A.; Chirkin, D.; Conrad, J.; Cooley, J.; Costa, C.G.S.; Cowen, D.F.; Dalberg, E.; De Clercq, C.; DeYoung, T.; Desiati, P.; Dewulf, J.-P.; Doksus, P.; Edsjo, J.; Ekstrom, P.; Feser, T.; Frere, J.-M.; Gaisser, T.K.; Gaug, M.; Goldschmidt, A.; Hallgren, A.; Halzen, F.; Hanson, K.; Hardtke, R.; Hauschildt, T.; Hellwig, M.; Heukenkamp, H.; Hill, G.C.; Hulth, P.O.; Hundertmark, S.; Jacobsen, J.; Karle, A.; Kim, J.; Koci, B.; Kopke, L.; Kowalski, M.; Lamoureux, J.I.; Leich, H.; Leuthold, M.; Lindahl, P.; Liubarsky, I.; Loaiza, P.; Lowder, D.M.; Madsen, J.; Marciniewski, P.; Matis, H.S.; McParland, C.P.; Miller, T.C.; Minaeva, Y.; Miocinovic, P.; Mock, P.C.; Morse, R.; Neunhoffer, T.; Niessen, P.; Nygren, D.R.; Ogelman, H.; Olbrechts, Ph.; Perez de los Heros, C.; Pohl, A.C.; Porrata, R.; Price, P.B.; Przybylski, G.T.; Rawlins, K.; Reed, C.; Rhode, W.; Ribordy, M.; Richter, S.; Rodriguez Martino, J.; Romenesko, P.; Ross, D.; Sander, H.-G.; Schmidt, T.; Schneider, D.; Schwarz, R.; Silvestri, A.; Solarz, M.; Spiczak, G.M.; Spiering, C.; Starinsky, N.; Steele, D.; Steffen, P.; Stokstad, R.G.; Streicher, O.; Sudhoff, P.; Sulanke, K.-H.; Taboada, I.; Thollander, L.; Thon, T.; Tilav, S.; Vander Donckt, M.; Walck, C.; Weinheimer, C.; Wiebusch, C.H.; Wiedeman, C.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Wu, W.; Yodh, G.; Young, S.

    2002-05-07

    The Antarctic Muon and Neutrino Detector Array (AMANDA) began collecting data with ten strings in 1997. Results from the first year of operation are presented. Neutrinos coming through the Earth from the Northern Hemisphere are identified by secondary muons moving upward through the array. Cosmic rays in the atmosphere generate a background of downward moving muons, which are about 10{sup 6} times more abundant than the upward moving muons. Over 130 days of exposure, we observed a total of about 300 neutrino events. In the same period, a background of 1.05 x 10{sup 9} cosmic ray muon events was recorded. The observed neutrino flux is consistent with atmospheric neutrino predictions. Monte Carlo simulations indicate that 90 percent of these events lie in the energy range 66 GeV to 3.4 TeV. The observation of atmospheric neutrinos consistent with expectations establishes AMANDA-B10 as a working neutrino telescope.

  2. Constraining Lorentz violation with cosmology.

    PubMed

    Zuntz, J A; Ferreira, P G; Zlosnik, T G

    2008-12-31

    The Einstein-aether theory provides a simple, dynamical mechanism for breaking Lorentz invariance. It does so within a generally covariant context and may emerge from quantum effects in more fundamental theories. The theory leads to a preferred frame and can have distinct experimental signatures. In this Letter, we perform a comprehensive study of the cosmological effects of the Einstein-aether theory and use observational data to constrain it. Allied to previously determined consistency and experimental constraints, we find that an Einstein-aether universe can fit experimental data over a wide range of its parameter space, but requires a specific rescaling of the other cosmological densities. PMID:19113765

  3. Constraining Lorentz Violation with Cosmology

    SciTech Connect

    Zuntz, J. A.; Ferreira, P. G.; Zlosnik, T. G.

    2008-12-31

    The Einstein-aether theory provides a simple, dynamical mechanism for breaking Lorentz invariance. It does so within a generally covariant context and may emerge from quantum effects in more fundamental theories. The theory leads to a preferred frame and can have distinct experimental signatures. In this Letter, we perform a comprehensive study of the cosmological effects of the Einstein-aether theory and use observational data to constrain it. Allied to previously determined consistency and experimental constraints, we find that an Einstein-aether universe can fit experimental data over a wide range of its parameter space, but requires a specific rescaling of the other cosmological densities.

  4. Image compression using constrained relaxation

    NASA Astrophysics Data System (ADS)

    He, Zhihai

    2007-01-01

    In this work, we develop a new data representation framework, called constrained relaxation, for image compression. Our basic observation is that an image is not a random 2-D array of pixels: the pixels must satisfy a set of imaging constraints in order to form a natural image. Therefore, one of the major tasks in image representation and coding is to efficiently encode these imaging constraints. The proposed data representation and image compression method not only achieves more efficient data compression than state-of-the-art H.264 Intra frame coding, but also provides much more resilience to wireless transmission errors with an internal error-correction capability.

  5. Constrained space camera assembly

    DOEpatents

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  6. Constrained space camera assembly

    DOEpatents

    Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.

    1999-05-11

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.

  7. Power-constrained supercomputing

    NASA Astrophysics Data System (ADS)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound
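
    A hedged sketch of the kind of linear program described above: choose how long to run in each hypothetical (DVFS state, thread count) configuration so that a fixed amount of work finishes as quickly as possible while the time-averaged power stays under the cap. The throughput and power numbers are invented and the formulation is illustrative, not the dissertation's exact LP.

      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical configurations (DVFS state x thread count):
      # throughput s_j (work units / s) and average power p_j (W).
      s = np.array([1.0, 1.6, 2.0])      # work rate in each configuration
      p = np.array([60.0, 95.0, 140.0])  # power draw in each configuration
      work = 1000.0                      # total work units to complete
      p_cap = 100.0                      # enforced power bound (W)

      # Variables t_j >= 0: time spent in configuration j.
      #   minimize  sum_j t_j                      (total time)
      #   s.t.     -sum_j s_j t_j        <= -work  (all work gets done)
      #             sum_j (p_j - p_cap) t_j <= 0   (average power <= p_cap)
      res = linprog(c=np.ones_like(s),
                    A_ub=np.vstack([-s, p - p_cap]),
                    b_ub=np.array([-work, 0.0]),
                    bounds=[(0, None)] * len(s))
      print("optimal schedule (s per configuration):", res.x)
      print("minimum makespan (s):", res.fun)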

  8. BICEP2 constrains composite inflation

    NASA Astrophysics Data System (ADS)

    Channuie, Phongpichit

    2014-07-01

    In light of BICEP2, we re-examine single field inflationary models in which the inflaton is a composite state stemming from various four-dimensional strongly coupled theories. We study in the Einstein frame a set of cosmological parameters, the primordial spectral index n_s and tensor-to-scalar ratio r, predicted by such models. We confront the predicted results with the joint Planck data, and with the recent BICEP2 data. We constrain the number of e-foldings for composite models of inflation in order to obtain successful inflation. We find that the minimal composite inflationary model is fully consistent with the Planck data. However, it is in tension with the recent BICEP2 data. The observables predicted by the glueball inflationary model can be consistent with both the Planck and BICEP2 contours if a suitable number of e-foldings is chosen. Surprisingly, the super Yang-Mills inflationary prediction is significantly consistent with the Planck and BICEP2 observations.

  9. Constrained Vapor Bubble

    NASA Technical Reports Server (NTRS)

    Huang, J.; Karthikeyan, M.; Plawsky, J.; Wayner, P. C., Jr.

    1999-01-01

    The nonisothermal Constrained Vapor Bubble, CVB, is being studied to enhance the understanding of passive systems controlled by interfacial phenomena. The study is multifaceted: 1) it is a basic scientific study in interfacial phenomena, fluid physics and thermodynamics; 2) it is a basic study in thermal transport; and 3) it is a study of a heat exchanger. The research is synergistic in that CVB research requires a microgravity environment and the space program needs thermal control systems like the CVB. Ground-based studies are being done as a precursor to the flight experiment. The results demonstrate that experimental techniques for the direct measurement of the fundamental operating parameters (temperature, pressure, and interfacial curvature fields) have been developed. Fluid flow and change-of-phase heat transfer are a function of the temperature field and the vapor bubble shape, which can be measured using an Image Analyzing Interferometer. The CVB for a microgravity environment has various thin film regions that are of both basic and applied interest. Generically, a CVB is formed by underfilling an evacuated enclosure with a liquid. Classification depends on shape and Bond number. The specific CVB discussed herein was formed in a fused silica cell with inside dimensions of 3x3x40 mm and, therefore, can be viewed as a large version of a micro heat pipe. Since the dimensions are relatively large for a passive system, most of the liquid flow occurs under a small capillary pressure difference. Therefore, we can classify the discussed system as a low capillary pressure system. The studies discussed herein were done in a 1-g environment (Bond Number = 3.6) to obtain experience to design a microgravity experiment for a future NASA flight where low capillary pressure systems should prove more useful. The flight experiment is tentatively scheduled for the year 2000. The SCR was passed on September 16, 1997. The RDR is tentatively scheduled for October, 1998.

  10. Cosmicflows Constrained Local UniversE Simulations

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo

    2016-01-01

    This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h^-1 Mpc scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s^-1, i.e. the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h^-1 Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h^-1 Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.
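
    A minimal sketch of the cell-to-cell scatter statistic used above to quantify cosmic variance, assuming two gridded fields on the same mesh; the 64^3 velocity grids here are random stand-ins, not CLUES output.

      import numpy as np

      def cell_scatter(field_a, field_b):
          """One-sigma scatter of a cell-to-cell comparison between two gridded
          fields (e.g. velocity fields on the same mesh): the standard deviation
          of the cell-wise differences."""
          return np.std(np.ravel(field_a) - np.ravel(field_b))

      # Illustrative 64^3 velocity-component grids standing in for a constrained
      # simulation and the observation-reconstructed field.
      rng = np.random.default_rng(2)
      sim = rng.normal(0.0, 300.0, size=(64, 64, 64))
      recon = sim + rng.normal(0.0, 100.0, size=sim.shape)
      print(f"cell-to-cell scatter: {cell_scatter(sim, recon):.0f} km/s")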

  11. Gyrification from constrained cortical expansion

    PubMed Central

    Tallinen, Tuomas; Chung, Jun Young; Biggins, John S.; Mahadevan, L.

    2014-01-01

    The exterior of the mammalian brain—the cerebral cortex—has a conserved layered structure whose thickness varies little across species. However, selection pressures over evolutionary time scales have led to cortices that have a large surface area to volume ratio in some organisms, with the result that the brain is strongly convoluted into sulci and gyri. Here we show that the gyrification can arise as a nonlinear consequence of a simple mechanical instability driven by tangential expansion of the gray matter constrained by the white matter. A physical mimic of the process using a layered swelling gel captures the essence of the mechanism, and numerical simulations of the brain treated as a soft solid lead to the formation of cusped sulci and smooth gyri similar to those in the brain. The resulting gyrification patterns are a function of relative cortical expansion and relative thickness (compared with brain size), and are consistent with observations of a wide range of brains, ranging from smooth to highly convoluted. Furthermore, this dependence on two simple geometric parameters that characterize the brain also allows us to qualitatively explain how variations in these parameters lead to anatomical anomalies in such situations as polymicrogyria, pachygyria, and lissencephalia. PMID:25136099

  12. Constrained Allocation Flux Balance Analysis.

    PubMed

    Mori, Matteo; Hwa, Terence; Martin, Olivier C; De Martino, Andrea; Marinari, Enzo

    2016-06-01

    New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, making it possible to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an "ensemble averaging" procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions on the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws. PMID:27355325
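
    As a rough illustration of how a single genome-wide allocation constraint can be added to an ordinary FBA linear program, consider the toy sketch below. The stoichiometric matrix, flux bounds, and proteome weights are invented for illustration; this is not the CAFBA E. coli model or the authors' solution method.

        import numpy as np
        from scipy.optimize import linprog

        # Toy network: 3 reactions in a chain, 2 internal metabolites (hypothetical).
        S = np.array([[1.0, -1.0, 0.0],     # metabolite A
                      [0.0, 1.0, -1.0]])    # metabolite B
        bounds = [(0, 10), (0, 10), (0, 10)]
        c = np.array([0.0, 0.0, -1.0])      # maximize the last flux (linprog minimizes)

        # Plain FBA: maximize growth flux subject to S v = 0 and flux bounds.
        fba = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")

        # CAFBA-like variant: one extra genome-wide inequality sum_i w_i v_i <= phi,
        # standing in for a proteome-allocation budget (weights are placeholders).
        w = np.array([[0.05, 0.02, 0.10]])
        cafba = linprog(c, A_ub=w, b_ub=[0.5], A_eq=S, b_eq=np.zeros(2),
                        bounds=bounds, method="highs")

        print("FBA growth flux:  ", -fba.fun)     # 10, limited only by the flux bounds
        print("CAFBA growth flux:", -cafba.fun)   # ~2.9, limited by the allocation budget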

  13. Constrained Allocation Flux Balance Analysis

    PubMed Central

    Mori, Matteo; Hwa, Terence; Martin, Olivier C.

    2016-01-01

    New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, making it possible to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an “ensemble averaging” procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions on the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws. PMID:27355325

  14. Using outcrop observations, 3D discrete feature network (DFN) fluid-flow simulations, and subsurface data to constrain the impact of normal faults and opening mode fractures on fluid flow in an active asphalt mine

    NASA Astrophysics Data System (ADS)

    Wilson, C. E.; Aydin, A.; Durlofsky, L.; Karimi-Fard, M.; Brownlow, D. T.

    2008-12-01

    An active quarry near Uvalde, TX, which mines asphaltic limestone from the Anacacho Formation offers an ideal setting to study fluid-flow in fractured and faulted carbonate rocks. Semi-3D exposures of normal faults and fractures in addition to visual evidence of asphalt concentrations in the quarry help constrain relationships between geologic structures and the flow and transport of hydrocarbons. Furthermore, a subsurface dataset which includes thin sections and measured asphalt concentration from the surrounding region provides a basis to estimate asphalt concentrations and constrain the depositional architecture of both the previously mined portions of the quarry and the un-mined surrounding rock volume. We characterized a series of normal faults and opening mode fractures at the quarry and documented a correlation between the intensity and distribution of these structures with increased concentrations of asphalt. The three-dimensional depositional architecture of the Anacacho Formation was characterized using the subsurface thin sections. Then outcrop exposures of faults, fractured beds, and stratigraphic contacts were mapped and their three-dimensional positions were recorded with differential GPS devices. These two datasets were assimilated and a quarry-scale, geologically realistic, three-dimensional Discrete Feature Network (DFN) which represents the geometries and material properties of the matrix, normal faults, and fractures within the quarry was constructed. We then performed two-point flux, control-volume finite-difference fluid-flow simulations with the DFN to investigate the 3D flow and transport of fluids. The results were compared and contrasted with available asphalt concentration estimates from the mine and the aforementioned data from the surrounding drill cores.

  15. A constrained supersymmetric left-right model

    NASA Astrophysics Data System (ADS)

    Hirsch, Martin; Krauss, Manuel E.; Opferkuch, Toby; Porod, Werner; Staub, Florian

    2016-03-01

    We present a supersymmetric left-right model which predicts gauge coupling unification close to the string scale and extra vector bosons at the TeV scale. The subtleties in constructing a model which is in agreement with the measured quark masses and mixing for such a low left-right breaking scale are discussed. It is shown that in the constrained version of this model radiative breaking of the gauge symmetries is possible and a SM-like Higgs is obtained. Additional CP-even scalars of a similar mass or even much lighter are possible. The expected mass hierarchies for the supersymmetric states differ clearly from those of the constrained MSSM. In particular, the lightest down-type squark, which is a mixture of the sbottom and extra vector-like states, is always lighter than the stop. We also comment on the model's capability to explain current anomalies observed at the LHC.

  16. Constrained simulation of the Bullet Cluster

    SciTech Connect

    Lage, Craig; Farrar, Glennys

    2014-06-01

    In this work, we report on a detailed simulation of the Bullet Cluster (1E0657-56) merger, including magnetohydrodynamics, plasma cooling, and adaptive mesh refinement. We constrain the simulation with data from gravitational lensing reconstructions and the 0.5-2 keV Chandra X-ray flux map, then compare the resulting model to higher energy X-ray fluxes, the extracted plasma temperature map, Sunyaev-Zel'dovich effect measurements, and cluster halo radio emission. We constrain the initial conditions by minimizing the chi-squared figure of merit between the full two-dimensional (2D) observational data sets and the simulation, rather than comparing only a few features such as the location of subcluster centroids, as in previous studies. A simple initial configuration of two triaxial clusters with Navarro-Frenk-White dark matter profiles and physically reasonable plasma profiles gives a good fit to the current observational morphology and X-ray emissions of the merging clusters. There is no need for unconventional physics or extreme infall velocities. The study gives insight into the astrophysical processes at play during a galaxy cluster merger, and constrains the strength and coherence length of the magnetic fields. The techniques developed here to create realistic, stable, triaxial clusters, and to utilize the totality of the 2D image data, will be applicable to future simulation studies of other merging clusters. This approach of constrained simulation, when applied to well-measured systems, should be a powerful complement to present tools for understanding X-ray clusters and their magnetic fields, and the processes governing their formation.

  17. Constraining curvatonic reheating

    NASA Astrophysics Data System (ADS)

    Hardwick, Robert J.; Vennin, Vincent; Koyama, Kazuya; Wands, David

    2016-08-01

    We derive the first systematic observational constraints on reheating in models of inflation where an additional light scalar field contributes to primordial density perturbations and affects the expansion history during reheating. This encompasses the original curvaton model but also covers a larger class of scenarios. We find that, compared to the single-field case, lower values of the energy density at the end of inflation and of the reheating temperature are preferred when an additional scalar field is introduced. For instance, if inflation is driven by a quartic potential, which is one of the most favoured models when a light scalar field is added, the upper bound T_reh < 5 × 10^4 GeV on the reheating temperature T_reh is derived, and the implications of this value on post-inflationary physics are discussed. The information gained about reheating is also quantified and it is found that it remains modest in plateau inflation (though still larger than in the single-field version of the model) but can become substantial in quartic inflation. The role played by the vacuum expectation value (vev) of the additional scalar field at the end of inflation is highlighted, and opens interesting possibilities for exploring stochastic inflation effects that could determine its distribution.

  18. CONSTRAINING SOURCE REDSHIFT DISTRIBUTIONS WITH GRAVITATIONAL LENSING

    SciTech Connect

    Wittman, D.; Dawson, W. A.

    2012-09-10

    We introduce a new method for constraining the redshift distribution of a set of galaxies, using weak gravitational lensing shear. Instead of using observed shears and redshifts to constrain cosmological parameters, we ask how well the shears around clusters can constrain the redshifts, assuming fixed cosmological parameters. This provides a check on photometric redshifts, independent of source spectral energy distribution properties and therefore free of confounding factors such as misidentification of spectral breaks. We find that {approx}40 massive ({sigma}{sub v} = 1200 km s{sup -1}) cluster lenses are sufficient to determine the fraction of sources in each of six coarse redshift bins to {approx}11%, given weak (20%) priors on the masses of the highest-redshift lenses, tight (5%) priors on the masses of the lowest-redshift lenses, and only modest (20%-50%) priors on calibration and evolution effects. Additional massive lenses drive down uncertainties as N{sub lens}{sup -1/2}, but the improvement slows as one is forced to use lenses further down the mass function. Future large surveys contain enough clusters to reach 1% precision in the bin fractions if the tight lens-mass priors can be maintained for large samples of lenses. In practice this will be difficult to achieve, but the method may be valuable as a complement to other more precise methods because it is based on different physics and therefore has different systematic errors.

  19. Constrained Clustering With Imperfect Oracles.

    PubMed

    Zhu, Xiatian; Loy, Chen Change; Gong, Shaogang

    2016-06-01

    While clustering is usually an unsupervised operation, there are circumstances where we have access to prior belief that pairs of samples should (or should not) be assigned with the same cluster. Constrained clustering aims to exploit this prior belief as constraint (or weak supervision) to influence the cluster formation so as to obtain a data structure more closely resembling human perception. Two important issues remain open: 1) how to exploit sparse constraints effectively and 2) how to handle ill-conditioned/noisy constraints generated by imperfect oracles. In this paper, we present a novel pairwise similarity measure framework to address the above issues. Specifically, in contrast to existing constrained clustering approaches that blindly rely on all features for constraint propagation, our approach searches for neighborhoods driven by discriminative feature selection for more effective constraint diffusion. Crucially, we formulate a novel approach to handling the noisy constraint problem, which has been unrealistically ignored in the constrained clustering literature. Extensive comparative results show that our method is superior to the state-of-the-art constrained clustering approaches and can generally benefit existing pairwise similarity-based data clustering algorithms, such as spectral clustering and affinity propagation. PMID:25622327

  20. Generalized Constrained Multiple Correspondence Analysis.

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Takane, Yoshio

    2002-01-01

    Proposes a comprehensive approach, generalized constrained multiple correspondence analysis, for imposing both row and column constraints on multivariate discrete data. Each set of discrete data is decomposed into several submatrices and then multiple correspondence analysis is applied to explore relationships among the decomposed submatrices.…

  1. Constrained Multiobjective Biogeography Optimization Algorithm

    PubMed Central

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved to feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems and experimental results show that the CMBOA performs better than or similar to the classical NSGA-II and IS-MOEA. PMID:25006591

  2. Constrained multiobjective biogeography optimization algorithm.

    PubMed

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved to feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems and experimental results show that the CMBOA performs better than or similar to the classical NSGA-II and IS-MOEA. PMID:25006591

  3. Constraining torsion with Gravity Probe B

    SciTech Connect

    Mao Yi; Guth, Alan H.; Cabi, Serkan; Tegmark, Max

    2007-11-15

    It is well-entrenched folklore that all torsion gravity theories predict observationally negligible torsion in the solar system, since torsion (if it exists) couples only to the intrinsic spin of elementary particles, not to rotational angular momentum. We argue that this assumption has a logical loophole which can and should be tested experimentally, and consider nonstandard torsion theories in which torsion can be generated by macroscopic rotating objects. In the spirit of action=reaction, if a rotating mass like a planet can generate torsion, then a gyroscope would be expected to feel torsion. An experiment with a gyroscope (without nuclear spin) such as Gravity Probe B (GPB) can test theories where this is the case. Using symmetry arguments, we show that to lowest order, any torsion field around a uniformly rotating spherical mass is determined by seven dimensionless parameters. These parameters effectively generalize the parametrized post-Newtonian formalism and provide a concrete framework for further testing Einstein's general theory of relativity (GR). We construct a parametrized Lagrangian that includes both standard torsion-free GR and Hayashi-Shirafuji maximal torsion gravity as special cases. We demonstrate that classic solar system tests rule out the latter and constrain two observable parameters. We show that Gravity Probe B is an ideal experiment for further constraining nonstandard torsion theories, and work out the most general torsion-induced precession of its gyroscope in terms of our torsion parameters.

  4. Constraining torsion with Gravity Probe B

    NASA Astrophysics Data System (ADS)

    Mao, Yi; Tegmark, Max; Guth, Alan H.; Cabi, Serkan

    2007-11-01

    It is well-entrenched folklore that all torsion gravity theories predict observationally negligible torsion in the solar system, since torsion (if it exists) couples only to the intrinsic spin of elementary particles, not to rotational angular momentum. We argue that this assumption has a logical loophole which can and should be tested experimentally, and consider nonstandard torsion theories in which torsion can be generated by macroscopic rotating objects. In the spirit of action=reaction, if a rotating mass like a planet can generate torsion, then a gyroscope would be expected to feel torsion. An experiment with a gyroscope (without nuclear spin) such as Gravity Probe B (GPB) can test theories where this is the case. Using symmetry arguments, we show that to lowest order, any torsion field around a uniformly rotating spherical mass is determined by seven dimensionless parameters. These parameters effectively generalize the parametrized post-Newtonian formalism and provide a concrete framework for further testing Einstein’s general theory of relativity (GR). We construct a parametrized Lagrangian that includes both standard torsion-free GR and Hayashi-Shirafuji maximal torsion gravity as special cases. We demonstrate that classic solar system tests rule out the latter and constrain two observable parameters. We show that Gravity Probe B is an ideal experiment for further constraining nonstandard torsion theories, and work out the most general torsion-induced precession of its gyroscope in terms of our torsion parameters.

  5. How alive is constrained SUSY really?

    DOE PAGESBeta

    Bechtle, Philip; Desch, Klaus; Dreiner, Herbert K.; Hamer, Matthias; Kramer, Michael; O'Leary, Ben; Porod, Werner; Sarrazin, Bjorn; Stefaniak, Tim; Uhlenbrock, Mathias; et al

    2016-05-31

    Constrained supersymmetric models like the CMSSM might look less attractive nowadays because of fine tuning arguments. They also might look less probable in terms of Bayesian statistics. The question how well the model under study describes the data, however, is answered by frequentist p-values. Thus, for the first time, we calculate a p-value for a supersymmetric model by performing dedicated global toy fits. We combine constraints from low-energy and astrophysical observables, Higgs boson mass and rate measurements as well as the non-observation of new physics in searches for supersymmetry at the LHC. Furthermore, using the framework Fittino, we perform global fits of the CMSSM to the toy data and find that this model is excluded at the 90% confidence level.
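
    The 90% CL statement rests on a frequentist p-value derived from dedicated toy fits. The logic can be sketched as follows; the chi-square values, degrees of freedom, and toy count are placeholders and not the Fittino analysis itself.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical best-fit chi2 of the model on the real data set.
        chi2_observed = 30.0

        # Hypothetical chi2 values obtained by refitting the model to toy data sets
        # generated at the model's own best-fit point (stand-in for global toy fits).
        chi2_toys = rng.chisquare(df=20, size=10_000)

        # p-value: fraction of toys that describe their data at least as badly.
        p_value = np.mean(chi2_toys >= chi2_observed)
        verdict = "excluded" if p_value < 0.10 else "not excluded"
        print(f"p = {p_value:.3f} -> {verdict} at the 90% confidence level")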

  6. Constraining dark sectors with monojets and dijets

    NASA Astrophysics Data System (ADS)

    Chala, Mikael; Kahlhoefer, Felix; McCullough, Matthew; Nardini, Germano; Schmidt-Hoberg, Kai

    2015-07-01

    We consider dark sector particles (DSPs) that obtain sizeable interactions with Standard Model fermions from a new mediator. While these particles can avoid observation in direct detection experiments, they are strongly constrained by LHC measurements. We demonstrate that there is an important complementarity between searches for DSP production and searches for the mediator itself, in particular bounds on (broad) dijet resonances. This observation is crucial not only in the case where the DSP is all of the dark matter but whenever — precisely due to its sizeable interactions with the visible sector — the DSP annihilates away so efficiently that it only forms a dark matter subcomponent. To highlight the different roles of DSP direct detection and LHC monojet and dijet searches, as well as perturbativity constraints, we first analyse the exemplary case of an axial-vector mediator and then generalise our results. We find important implications for the interpretation of LHC dark matter searches in terms of simplified models.

  7. Using In Situ Observations and Satellite Retrievals to Constrain Large-Eddy Simulations and Single-Column Simulations: Implications for Boundary-Layer Cloud Parameterization in the NASA GISS GCM

    NASA Astrophysics Data System (ADS)

    Remillard, J.

    2015-12-01

    Two low-cloud periods from the CAP-MBL deployment of the ARM Mobile Facility at the Azores are selected through a cluster analysis of ISCCP cloud property matrices, so as to represent two low-cloud weather states that the GISS GCM severely underpredicts not only in that region but also globally. The two cases represent (1) shallow cumulus clouds occurring in a cold-air outbreak behind a cold front, and (2) stratocumulus clouds occurring when the region was dominated by a high-pressure system. Observations and MERRA reanalysis are used to derive specifications used for large-eddy simulations (LES) and single-column model (SCM) simulations. The LES captures the major differences in horizontal structure between the two low-cloud fields, but there are unconstrained uncertainties in cloud microphysics and challenges in reproducing W-band Doppler radar moments. The SCM run on the vertical grid used for CMIP-5 runs of the GCM does a poor job of representing the shallow cumulus case and is unable to maintain an overcast deck in the stratocumulus case, providing some clues regarding problems with low-cloud representation in the GCM. SCM sensitivity tests with a finer vertical grid in the boundary layer show substantial improvement in the representation of cloud amount for both cases. GCM simulations with CMIP-5 versus finer vertical gridding in the boundary layer are compared with observations. The adoption of a two-moment cloud microphysics scheme in the GCM is also tested in this framework. The methodology followed in this study, with the process-based examination of different time and space scales in both models and observations, represents a prototype for GCM cloud parameterization improvements.

  8. Spacetime-constrained oblivious transfer

    NASA Astrophysics Data System (ADS)

    Pitalúa-García, Damián

    2016-06-01

    In 1-out-of-2 oblivious transfer (OT), Alice inputs numbers x_0, x_1, Bob inputs a bit b and outputs x_b. Secure OT requires that Alice learns nothing about b and that Bob learns nothing about the unchosen number x_(1-b). We define spacetime-constrained oblivious transfer (SCOT) as OT in Minkowski spacetime in which Bob must output x_b within R_b, where R_0 and R_1 are fixed spacelike separated spacetime regions. We show that unconditionally secure SCOT is impossible with classical protocols in Minkowski (or Galilean) spacetime, or with quantum protocols in Galilean spacetime. We describe a quantum SCOT protocol in Minkowski spacetime, and we show it is unconditionally secure.

  9. Constraining relativistic viscous hydrodynamical evolution

    SciTech Connect

    Martinez, Mauricio; Strickland, Michael

    2009-04-15

    We show that by requiring positivity of the longitudinal pressure it is possible to constrain the initial conditions one can use in second-order viscous hydrodynamical simulations of ultrarelativistic heavy-ion collisions. We demonstrate this explicitly for (0+1)-dimensional viscous hydrodynamics and discuss how the constraint extends to higher dimensions. Additionally, we present an analytic approximation to the solution of (0+1)-dimensional second-order viscous hydrodynamical evolution equations appropriate to describe the evolution of matter in an ultrarelativistic heavy-ion collision.

  10. Constrained Stochastic Extended Redundancy Analysis.

    PubMed

    DeSarbo, Wayne S; Hwang, Heungsun; Stadler Blank, Ashley; Kappe, Eelco

    2015-06-01

    We devise a new statistical methodology called constrained stochastic extended redundancy analysis (CSERA) to examine the comparative impact of various conceptual factors, or drivers, as well as the specific predictor variables that contribute to each driver on designated dependent variable(s). The technical details of the proposed methodology, the maximum likelihood estimation algorithm, and model selection heuristics are discussed. A sports marketing consumer psychology application is provided in a Major League Baseball (MLB) context where the effects of six conceptual drivers of game attendance and their defining predictor variables are estimated. Results compare favorably to those obtained using traditional extended redundancy analysis (ERA). PMID:24327066

  11. Porosity and water ice content of the sub-surface material in the Imhotep region of 67P/Churyumov-Gerasimenko constrained with the Microwave Instrument on the Rosetta Orbiter (MIRO) observations

    NASA Astrophysics Data System (ADS)

    von Allmen, Paul

    2016-04-01

    In late October 2014, the Rosetta spacecraft orbited around 67P/Churyumov-Gerasimenko at a distance less than 10 km, the closest orbit in the mission so far. During this close approach, the Microwave Instrument on the Rosetta Orbiter (MIRO) observed an 800-meter long swath in the Imhotep region on October 27, 2014. Continuum and spectroscopic data were obtained. These data provided the highest spatial resolution obtained to date with the MIRO instrument. The footprint diameter of MIRO on the surface of the nucleus was about 20 meters in the sub-millimeter band at λ=0.5 mm, and 60 meters in the millimeter band at λ=1.6 mm. The swath transitions from a relatively flat area of the Imhotep region to a topographically more diverse area, still making the data relatively easy to analyze. We used a thermal model of the nucleus, including water ice sublimation to analyze the continuum data. The sub-surface material of the nucleus is described in terms of its porosity, grain size and water ice content, in addition to assumptions for the dust bulk density and grain packing geometry. We used the optimal estimation algorithm to fit the material parameters for the best agreement between the observations and the simulation results. We will present the material parameters determined from our analysis.

  12. A stochastic framework for inequality constrained estimation

    NASA Astrophysics Data System (ADS)

    Roese-Koerner, Lutz; Devaraju, Balaji; Sneeuw, Nico; Schuh, Wolf-Dieter

    2012-11-01

    Quality description is one of the key features of geodetic inference. This is even more true if additional information about the parameters is available that could improve the accuracy of the estimate. However, if such additional information is provided in the form of inequality constraints, most of the standard tools of quality description (variance propagation, confidence ellipses, etc.) cannot be applied, as there is no analytical relationship between parameters and observations. Some analytical methods have been developed for describing the quality of inequality constrained estimates. However, these methods either ignore the probability mass in the infeasible region or the influence of inactive constraints and therefore yield only approximate results. In this article, a frequentist framework for quality description of inequality constrained least-squares estimates is developed, based on the Monte Carlo method. The quality is described in terms of highest probability density regions. Beyond this accuracy estimate, the proposed method allows one to determine the influence and contribution of each constraint on each parameter using Lagrange multipliers. Plausibility of the constraints is checked by hypothesis testing and estimating the probability mass in the infeasible region. As more probability mass concentrates in less space, applying the proposed method results in smaller confidence regions compared to the unconstrained ordinary least-squares solution. The method is applied to describe the quality of estimates in the problem of approximating a time series with positive definite functions.
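
    A minimal Monte Carlo sketch of the idea is given below, assuming a toy linear model with a non-negativity constraint. The design matrix, noise level, and the use of simple percentiles in place of proper highest-probability-density regions are all simplifications for illustration.

        import numpy as np
        from scipy.optimize import lsq_linear

        rng = np.random.default_rng(42)

        # Toy problem: y = A x + noise, with the prior knowledge x >= 0.
        A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
        x_true = np.array([0.2, 0.5])
        sigma = 0.3

        estimates = []
        for _ in range(2000):                              # Monte Carlo replications
            y = A @ x_true + rng.normal(0.0, sigma, size=4)
            res = lsq_linear(A, y, bounds=(0.0, np.inf))   # inequality-constrained LSQ
            estimates.append(res.x)
        estimates = np.array(estimates)

        lo, hi = np.percentile(estimates, [2.5, 97.5], axis=0)   # crude 95% regions
        active = np.mean(estimates[:, 0] < 1e-9)                 # how often x1 >= 0 binds
        print("95% intervals:", list(zip(lo.round(3), hi.round(3))))
        print("fraction of runs with the first constraint active:", round(active, 3))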

  13. Constraining the braking indices of magnetars

    NASA Astrophysics Data System (ADS)

    Gao, Z. F.; Li, X.-D.; Wang, N.; Yuan, J. P.; Wang, P.; Peng, Q. H.; Du, Y. J.

    2016-02-01

    Because of the lack of long-term pulsed emission in quiescence and the strong timing noise, it is impossible to directly measure the braking index n of a magnetar. Based on the estimated ages of their potentially associated supernova remnants (SNRs), we estimate the values of the mean braking indices of eight magnetars with SNRs, and find that they cluster in the range of 1-42. Five magnetars have smaller mean braking indices of 1 < n < 3, and we interpret them within a combination of magneto-dipole radiation and wind-aided braking. The larger mean braking indices of n > 3 for the other three magnetars are attributed to the decay of external braking torque, which might be caused by magnetic field decay. We estimate the possible wind luminosities for the magnetars with 1 < n < 3, and the dipolar magnetic field decay rates for the magnetars with n > 3, within the updated magneto-thermal evolution models. Although the constrained range of the magnetars' braking indices is tentative, as a result of the uncertainties in the SNR ages due to distance uncertainties and the unknown conditions of the expanding shells, our method provides an effective way to constrain the magnetars' braking indices if the measurements of the SNR ages are reliable, which can be improved by future observations.
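
    For reference, the kind of estimate described above typically assumes a spin-down law with constant torque coefficient and a negligible initial spin period, in which case the SNR age t_SNR and the timing parameters give a mean braking index (a standard relation, not necessarily the exact form adopted by the authors):

        \[
          t_{\rm SNR} \simeq \frac{P}{(n-1)\,\dot{P}}
          \quad\Longrightarrow\quad
          n \simeq 1 + \frac{P}{\dot{P}\,t_{\rm SNR}} = 1 + \frac{2\tau_c}{t_{\rm SNR}},
          \qquad \tau_c \equiv \frac{P}{2\dot{P}} .
        \]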

  14. Quantum Annealing for Constrained Optimization

    NASA Astrophysics Data System (ADS)

    Hen, Itay; Spedalieri, Federico M.

    2016-03-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealers that promise to solve certain combinatorial optimization problems of practical relevance faster than their classical analogues. The applicability of such devices for many theoretical and real-world optimization problems, which are often constrained, is severely limited by the sparse, rigid layout of the devices' quantum bits. Traditionally, constraints are addressed by the addition of penalty terms to the Hamiltonian of the problem, which, in turn, requires prohibitively increasing physical resources while also restricting the dynamical range of the interactions. Here, we propose a method for encoding constrained optimization problems on quantum annealers that eliminates the need for penalty terms and thereby reduces the number of required couplers and removes the need for minor embedding, greatly reducing the number of required physical qubits. We argue the advantages of the proposed technique and illustrate its effectiveness. We conclude by discussing the experimental feasibility of the suggested method as well as its potential to appreciably reduce the resource requirements for implementing optimization problems on quantum annealers and its significance in the field of quantum computing.
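
    The traditional penalty-term encoding that the proposed method seeks to avoid can be written down in a few lines. The cost vector, penalty strength, and the one-hot constraint below are purely illustrative; a real annealer workflow would hand the resulting QUBO matrix to the device rather than brute-forcing it.

        import numpy as np
        from itertools import product

        # Constrained toy problem: minimize c.x over binary x subject to sum(x) == 1.
        c = np.array([0.7, 0.3, 0.9])
        lam = 5.0   # penalty strength; must dominate the objective scale

        # Penalty encoding: E(x) = c.x + lam*(sum(x) - 1)^2, expanded into a QUBO
        # x^T Q x + offset using x_i^2 = x_i for binary variables.
        n = len(c)
        Q = lam * np.ones((n, n))        # off-diagonal terms give 2*lam*x_i*x_j
        np.fill_diagonal(Q, c - lam)     # diagonal terms c_i - lam
        offset = lam

        # Brute-force check that the unconstrained QUBO minimum obeys the constraint.
        energy = lambda x: np.array(x) @ Q @ np.array(x) + offset
        best = min(product([0, 1], repeat=n), key=energy)
        print("optimal bit string:", best)          # (0, 1, 0): picks the cheapest item
        print("energy:", round(energy(best), 3))    # equals c[1] = 0.3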

  15. Quantum Annealing for Constrained Optimization

    NASA Astrophysics Data System (ADS)

    Hen, Itay; Spedalieri, Federico

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealers that could potentially solve certain quadratic unconstrained binary optimization problems faster than their classical analogues. The applicability of such devices for many theoretical and practical optimization problems, which are often constrained, is severely limited by the sparse, rigid layout of the devices' quantum bits. Traditionally, constraints are addressed by the addition of penalty terms to the Hamiltonian of the problem, which in turn requires prohibitively increasing physical resources while also restricting the dynamical range of the interactions. Here we propose a method for encoding constrained optimization problems on quantum annealers that eliminates the need for penalty terms and thereby removes many of the obstacles associated with the implementation of these. We argue the advantages of the proposed technique and illustrate its effectiveness. We then conclude by discussing the experimental feasibility of the suggested method as well as its potential to boost the encodability of other optimization problems.

  16. Constrained Local UniversE Simulations: a Local Group factory

    NASA Astrophysics Data System (ADS)

    Carlesi, Edoardo; Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I.; Pilipenko, Sergey V.; Knebe, Alexander; Courtois, Hélène; Tully, R. Brent; Steinmetz, Matthias

    2016-05-01

    Near-field cosmology is practised by studying the Local Group (LG) and its neighbourhood. This paper describes a framework for simulating the `near field' on the computer. Assuming the Λ cold dark matter (ΛCDM) model as a prior and applying the Bayesian tools of the Wiener filter and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the ΛCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of haloes must obey specific isolation, mass and separation criteria. At the second level, the orbital angular momentum and energy are constrained, and on the third one the phase of the orbit is constrained. Out of the 300 constrained simulations, 146 LGs obey the first set of criteria, 51 the second and 6 the third. The robustness of our LG `factory' enables the construction of a large ensemble of simulated LGs. Suitable candidates for high-resolution hydrodynamical simulations of the LG can be drawn from this ensemble, which can be used to perform comprehensive studies of the formation of the LG.

  17. Time-dependent response of hydrogels under constrained swelling

    NASA Astrophysics Data System (ADS)

    Drozdov, A. D.; Sommer-Larsen, P.; Christiansen, J. deClaville; Sanporean, C.-G.

    2014-06-01

    Constitutive equations are developed for the viscoplastic behavior of covalently cross-linked hydrogels subjected to swelling. The ability of the model to describe the time-dependent response is confirmed by comparison of results of simulation with observations on partially swollen poly(2-hydroxyethyl methacrylate) gel specimens in uniaxial tensile tests with a constant strain rate and tensile relaxation tests. The stress-strain relations are applied to study the kinetics of unconstrained and constrained swelling. The following conclusions are drawn from numerical analysis: (i) maximum water uptake under constrained swelling of a viscoplastic hydrogel is lower than that for unconstrained swelling of its elastic counterpart and exceeds maximum water uptake under constrained swelling of the elastic gel, (ii) when the rate of water diffusion exceeds the rate of plastic flow in a polymer network, swelling curves (mass uptake versus time) for viscoplastic gels under constraints demonstrate characteristic features of non-Fickian diffusion.

  18. Slow Solar Wind: Observable Characteristics for Constraining Modelling

    NASA Astrophysics Data System (ADS)

    Ofman, L.; Abbo, L.; Antiochos, S. K.; Hansteen, V. H.; Harra, L.; Ko, Y. K.; Lapenta, G.; Li, B.; Riley, P.; Strachan, L.; von Steiger, R.; Wang, Y. M.

    2015-12-01

    The origin of the Slow Solar Wind (SSW) is an open issue in the post-SOHO era and forms a major objective for planned future missions such as Solar Orbiter and Solar Probe Plus. Results from spacecraft data, combined with theoretical modeling, have helped to investigate many aspects of the SSW. Fundamental physical properties of the coronal plasma have been derived from spectroscopic and imaging remote-sensing data and in-situ data, and these results have provided crucial insights for a deeper understanding of the origin and acceleration of the SSW. Advanced models of the SSW in coronal streamers and other structures have been developed using 3D MHD and multi-fluid equations. Nevertheless, there are still debated questions, such as: What are the source regions of the SSW? What are their contributions to the SSW? What is the role of the magnetic topology in the corona for the origin, acceleration and energy deposition of the SSW? What are the possible acceleration and heating mechanisms for the SSW? The aim of this study is to present the insights on the SSW origin and formation that arose during the discussions at the International Space Science Institute (ISSI) by the Team entitled ''Slow solar wind sources and acceleration mechanisms in the corona'' held in Bern (Switzerland) in March 2014-2015. The attached figure will be presented to summarize the different hypotheses of SSW formation.

  19. Constraining Intracluster Gas Models with AMiBA13

    NASA Astrophysics Data System (ADS)

    Molnar, Sandor M.; Umetsu, Keiichi; Birkinshaw, Mark; Bryan, Greg; Haiman, Zoltán; Hearn, Nathan; Shang, Cien; Ho, Paul T. P.; Locutus Huang, Chih-Wei; Koch, Patrick M.; Liao, Yu-Wei Victor; Lin, Kai-Yang; Liu, Guo-Chin; Nishioka, Hiroaki; Wang, Fu-Cheng; Proty Wu, Jiun-Huei

    2010-11-01

    Clusters of galaxies have been extensively used to determine cosmological parameters. A major difficulty in making the best use of Sunyaev-Zel'dovich (SZ) and X-ray observations of clusters for cosmology is that using X-ray observations it is difficult to measure the temperature distribution and therefore determine the density distribution in individual clusters of galaxies out to the virial radius. Observations with the new generation of SZ instruments are a promising alternative approach. We use clusters of galaxies drawn from high-resolution adaptive mesh refinement cosmological simulations to study how well we should be able to constrain the large-scale distribution of the intracluster gas (ICG) in individual massive relaxed clusters using AMiBA in its configuration with 13 1.2 m diameter dishes (AMiBA13) along with X-ray observations. We show that non-isothermal β models provide a good description of the ICG in our simulated relaxed clusters. We use simulated X-ray observations to estimate the quality of constraints on the distribution of gas density, and simulated SZ visibilities (AMiBA13 observations) for constraints on the large-scale temperature distribution of the ICG. We find that AMiBA13 visibilities should constrain the scale radius of the temperature distribution to about 50% accuracy. We conclude that the upgraded AMiBA, AMiBA13, should be a powerful instrument to constrain the large-scale distribution of the ICG.
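
    The β-model family referred to above has, in its isothermal form, the familiar density profile below; the non-isothermal variants used in the paper add a parametrized temperature profile T_e(r) with its own scale radius, which is what the AMiBA13 visibilities are meant to constrain:

        \[
          n_e(r) = n_{e0}\left[1 + \left(\frac{r}{r_c}\right)^{2}\right]^{-3\beta/2},
          \qquad
          \Delta T_{\rm SZ} \propto \int n_e\,T_e\,d\ell,
          \qquad
          S_X \propto \int n_e^{2}\,\Lambda(T_e)\,d\ell .
        \]

    The different density and temperature weightings of the two integrals are what make the data sets complementary: the X-ray surface brightness mainly fixes the density profile, while the SZ signal retains sensitivity to the temperature at large radii.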

  20. Constrained Peptides as Miniature Protein Structures

    PubMed Central

    Yin, Hang

    2012-01-01

    This paper discusses the recent developments in protein engineering using both covalent and noncovalent bonds to constrain peptides, forcing them into designed protein secondary structures. These constrained peptides subsequently can be used as peptidomimetics for biological functions such as regulation of protein-protein interactions. PMID:25969758

  1. Constraining the Evolution of ZZ Ceti

    NASA Technical Reports Server (NTRS)

    Mukadam, Anjum S.; Kepler, S. O.; Winget, D. E.; Nather, R. E.; Kilic, M.; Mullally, F.; vonHippel, T.; Kleinman, S. J.; Nitta, A.; Guzik, J. A.

    2003-01-01

    We report our analysis of the stability of pulsation periods in the DAV star (pulsating hydrogen atmosphere white dwarf) ZZ Ceti, also called R548. On the basis of observations that span 31 years, we conclude that the period 213.13 s observed in ZZ Ceti drifts at a rate dP/dt = (5.5 ± 1.9) x 10^-15 s s^-1, after correcting for proper motion. Our results are consistent with previous dP/dt values for this mode and are an improvement over them because of the larger time base. The characteristic stability timescale implied for the pulsation period is |P/(dP/dt)| ≥ 1.2 Gyr, comparable to the theoretical cooling timescale for the star. Our current stability limit for the period 213.13 s is only slightly less than the present measurement for another DAV, G117-B15A, for the period 215.2 s, establishing this mode in ZZ Ceti as the second most stable optical clock known, comparable to atomic clocks and more stable than most pulsars. Constraining the cooling rate of ZZ Ceti aids theoretical evolutionary models and white dwarf cosmochronology. The drift rate of this clock is small enough that we can set interesting limits on reflex motion due to planetary companions.
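
    The quoted stability timescale follows directly from the measured period and drift rate:

        \[
          \left|\frac{P}{dP/dt}\right| \simeq
          \frac{213.13\ \mathrm{s}}{5.5\times 10^{-15}\ \mathrm{s\,s^{-1}}}
          \approx 3.9\times 10^{16}\ \mathrm{s} \approx 1.2\ \mathrm{Gyr}.
        \]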

  2. CONSTRAINING RADIO EMISSION FROM MAGNETARS

    SciTech Connect

    Lazarus, P.; Kaspi, V. M.; Dib, R.; Champion, D. J.; Hessels, J. W. T.

    2012-01-10

    We report on radio observations of five magnetars and two magnetar candidates carried out at 1950 MHz with the Green Bank Telescope in 2006-2007. The data from these observations were searched for periodic emission and bright single pulses. Also, monitoring observations of magnetar 4U 0142+61 following its 2006 X-ray bursts were obtained. No radio emission was detected for any of our targets. The non-detections allow us to place luminosity upper limits of L{sub 1950} {approx}< 1.60 mJy kpc{sup 2} for periodic emission and L{sub 1950,single} {approx}< 7.6 Jy kpc{sup 2} for single pulse emission. These are the most stringent limits yet for the magnetars observed. The resulting luminosity upper limits together with previous results are discussed, as is the importance of further radio observations of radio-loud and radio-quiet magnetars.

  3. Constrained Deformable-Layer Tomography

    NASA Astrophysics Data System (ADS)

    Zhou, H.

    2006-12-01

    The improvement on traveltime tomography depends on improving data coverage and tomographic methodology. The data coverage depends on the spatial distribution of sources and stations, as well as the extent of lateral velocity variation that may alter the raypaths locally. A reliable tomographic image requires large enough ray hit count and wide enough angular range between traversing rays over the targeted anomalies. Recent years have witnessed the advancement of traveltime tomography in two aspects. One is the use of finite frequency kernels, and the other is the improvement on model parameterization, particularly that allows the use of a priori constraints. A new way of model parameterization is the deformable-layer tomography (DLT), which directly inverts for the geometry of velocity interfaces by varying the depths of grid points to achieve a best traveltime fit. In contrast, conventional grid or cell tomography seeks to determine velocity values of a mesh of fixed-in-space grids or cells. In this study, the DLT is used to map crustal P-wave velocities with first arrival data from local earthquakes and two LARSE active surveys in southern California. The DLT solutions along three profiles are constrained using known depth ranges of the Moho discontinuity at 21 sites from a previous receiver function study. The DLT solutions are generally well resolved according to restoration resolution tests. The patterns of 2D DLT models of different profiles match well at their intersection locations. In comparison with existing 3D cell tomography models in southern California, the new DLT models significantly improve the data fitness. In comparison with the multi-scale cell tomography conducted for the same data, while the data fitting levels of the DLT and the multi-scale cell tomography models are compatible, the DLT provides much higher vertical resolution and more realistic description of the undulation of velocity discontinuities. The constraints on the Moho depth

  4. Discussion of ``Void nucleation in constrained silver interlayers'' and ``Void growth and coalescence in constrained silver interlayers''

    SciTech Connect

    Kassner, M.E.; Tolle, M.C. . Dept. of Mechanical Engineering); Rosen, R.S.; Henshall, G.A.; Elmer, J.W. )

    1993-08-01

    The authors have read with some concern the two articles by Klassen, Weatherly, and Ramaswami (KWR) entitled ``Void Nucleation in Constrained Silver Interlayers'' and ``Void Growth and Coalescence in Constrained Silver Interlayers'' published recently in this journal. They have several comments on these articles. First, substantial portions of these articles appear to closely reaffirm experiments and stress analyses on fracture and other mechanical behavior of constrained silver interlayers already published. KWR appeared to be unaware of (or disregarded) much of these works and this communication is partly intended to direct KWR and perhaps others to these works. Next, although there are many scientific aspects of the articles that warrant discussion, they have focused on two principal points. First, there appear to be some odd aspects of the Nucleation (KWR) article. The authors suggest nucleation and unstable growth occur only near the fracture stress (S[sub f]). This clearly is in contradiction to their careful work, where nucleation is shown to occur at very low stress (S[sub f]/5), just above the uniaxial yield stress of the interlayer silver. Second, and more importantly, KWR do not report any void growth. This, also, is in contradiction to earlier work on void growth in constrained silver interlayers. In the case of brazed silver joints, the shrinkage voids are observed to grow until a critical void separation is reached and instability occurs. In their work, voids appear to grow from small to larger cavities with small overall plastic strain in the interlayer, including at the base-metal/silver interface. In summary, although the KWR articles reasonably reproduced some established experimental trends for constrained interlayers and observed some other phenomena particularly relevant to the case with a substantial volume fraction of dispersions, other more basic conclusions relating to final fracture do not appear to consider more reasonable approaches.

  5. Constraining Binary Stellar Evolution With Pulsar Timing

    NASA Astrophysics Data System (ADS)

    Ferdman, Robert D.; Stairs, I. H.; Backer, D. C.; Burgay, M.; Camilo, F.; D'Amico, N.; Demorest, P.; Faulkner, A.; Hobbs, G.; Kramer, M.; Lorimer, D. R.; Lyne, A. G.; Manchester, R.; McLaughlin, M.; Nice, D. J.; Possenti, A.

    2006-06-01

    The Parkes Multibeam Pulsar Survey has yielded a significant number of very interesting binary and millisecond pulsars. Two of these objects are part of an ongoing timing study at the Green Bank Telescope (GBT). PSR J1756-2251 is a double-neutron star (DNS) binary system. It is similar to the original Hulse-Taylor binary pulsar system PSR B1913+16 in its orbital properties, thus providing another important opportunity to test the validity of General Relativity, as well as the evolutionary history of DNS systems through mass measurements. PSR J1802-2124 is part of the relatively new and unstudied "intermediate-mass" class of binary system, which typically have spin periods in the tens of milliseconds, and/or relatively massive (> 0.7 solar masses) white dwarf companions. With our GBT observations, we have detected the Shapiro delay in this system, allowing us to constrain the individual masses of the neutron star and white dwarf companion, and thus the mass-transfer history, in this unusual system.

  6. CONSTRAINING SOLAR FLARE DIFFERENTIAL EMISSION MEASURES WITH EVE AND RHESSI

    SciTech Connect

    Caspi, Amir; McTiernan, James M.; Warren, Harry P.

    2014-06-20

    Deriving a well-constrained differential emission measure (DEM) distribution for solar flares has historically been difficult, primarily because no single instrument is sensitive to the full range of coronal temperatures observed in flares, from ≲2 to ≳50 MK. We present a new technique, combining extreme ultraviolet (EUV) spectra from the EUV Variability Experiment (EVE) onboard the Solar Dynamics Observatory with X-ray spectra from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), to derive, for the first time, a self-consistent, well-constrained DEM for jointly observed solar flares. EVE is sensitive to ∼2-25 MK thermal plasma emission, and RHESSI to ≳10 MK; together, the two instruments cover the full range of flare coronal plasma temperatures. We have validated the new technique on artificial test data, and apply it to two X-class flares from solar cycle 24 to determine the flare DEM and its temporal evolution; the constraints on the thermal emission derived from the EVE data also constrain the low energy cutoff of the non-thermal electrons, a crucial parameter for flare energetics. The DEM analysis can also be used to predict the soft X-ray flux in the poorly observed ∼0.4-5 nm range, with important applications for geospace science.
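
    The joint EVE-RHESSI fit rests on the usual DEM forward model, in which the flux observed in each spectral line or energy channel i with temperature response R_i(T) is predicted as (the standard definition; the specific response functions are instrument-dependent):

        \[
          F_i = \int R_i(T)\,\mathrm{DEM}(T)\,dT,
          \qquad
          \mathrm{DEM}(T) = n_e^{2}\,\frac{dV}{dT},
        \]

    so that EVE constrains the roughly 2-25 MK part of DEM(T), RHESSI constrains the hotter tail, and only their combination pins down the full distribution.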

  7. Constrained crosstalk resistant adaptive noise canceller

    NASA Astrophysics Data System (ADS)

    Parsa, V.; Parker, P.

    1994-08-01

    The performance of an adaptive noise canceller (ANC) is sensitive to the presence of signal `crosstalk' in the reference channel. The authors propose a novel approach to crosstalk resistant adaptive noise cancellation, namely the constrained crosstalk resistant adaptive noise canceller (CCRANC). The theoretical analysis of the CCRANC along with the constrained algorithm is presented. The performance of the CCRANC in recovering somatosensory evoked potentials (SEPs) from myoelectric interference is then evaluated through simulations.

  8. Constraining the Evolution of Poor Clusters

    NASA Astrophysics Data System (ADS)

    Broming, Emma J.; Fuse, C. R.

    2012-01-01

    There currently exists no method by which to quantify the evolutionary state of poor clusters (PCs). Research by Broming & Fuse (2010) demonstrated that the evolution of Hickson compact groups (HCGs) is constrained by the correlation between the X-ray luminosities of point sources and diffuse gas. The current investigation adopts an analogous approach to understanding PCs. Plionis et al. (2009) proposed a theory to define the evolution of poor clusters. The theory asserts that cannibalism of galaxies causes a cluster to become more spherical, develop increased velocity dispersion and increased X-ray temperature and gas luminosity. Data used to quantify the evolution of the poor clusters were compiled across multiple wavelengths. The sample includes 162 objects from the WBL catalogue (White et al. 1999), 30 poor clusters in the Chandra X-ray Observatory archive, and 15 Abell poor clusters observed with BAX (Sadat et al. 2004). Preliminary results indicate that the cluster velocity dispersion and X-ray gas and point source luminosities can be used to highlight a weak correlation. An evolutionary trend was observed for multiple correlations detailed herein. The current study is a continuation of the work by Broming & Fuse examining point sources and their properties to determine the evolutionary stage of compact groups, poor clusters, and their proposed remnants, isolated ellipticals and fossil groups. Preliminary data suggests that compact groups and their high-mass counterpart, poor clusters, evolve along tracks identified in the X-ray gas - X-ray point source relation. While compact groups likely evolve into isolated elliptical galaxies, fossil groups display properties that suggest they are the remains of fully coalesced poor clusters.

  9. Constraining blazar physics with polarization signatures

    NASA Astrophysics Data System (ADS)

    Zhang, Haocheng; Boettcher, Markus; Li, Hui

    2016-01-01

    Blazars are active galactic nuclei whose jets are directed very close to our line of sight. They emit nonthermal-dominated emission from radio to gamma-rays, with the radio to optical emissions known to be polarized. Both radiation and polarization signatures can be strongly variable. Observations have shown that sometimes strong multiwavelength flares are accompanied by drastic polarization variations, indicating active participation of the magnetic field during flares. We have developed a 3D multi-zone time-dependent polarization-dependent radiation transfer code, which enables us to study the spectral and polarization signatures of blazar flares simultaneously. By combining this code with a Fokker-Planck nonthermal particle evolution scheme, we are able to derive simultaneous fits to time-dependent spectra, multiwavelength light curves, and time-dependent optical polarization signatures of a well-known multiwavelength flare with 180 degree polarization angle swing of the blazar 3C279. Our work shows that with detailed consideration of light travel time effects, the apparently symmetric time-dependent radiation and polarization signatures can be naturally explained by a straight, helically symmetric jet pervaded by a helical magnetic field, without the need of any asymmetric structures. Also our model suggests that the excess in the nonthermal particles during flares can originate from magnetic reconnection events, initiated by a shock propagating through the emission region. Additionally, the magnetic field should generally revert to its initial topology after the flare. We conclude that such shock-initiated magnetic reconnection event in an emission environment with relatively strong magnetic energy can be the driver of multiwavelength flares with polarization angle swings. Future statistics on such observations will constrain general features of such events, while magneto-hydrodynamic simulations will provide physical scenarios for the magnetic field evolution

  10. Constraining duty cycles through a Bayesian technique

    NASA Astrophysics Data System (ADS)

    Romano, P.; Guidorzi, C.; Segreto, A.; Ducci, L.; Vercellone, S.

    2014-12-01

    The duty cycle (DC) of astrophysical sources is generally defined as the fraction of time during which the sources are active. It is used both to characterize their central engine and to plan further observing campaigns to study them. However, DCs are generally not provided with statistical uncertainties, since the standard approach is to perform Monte Carlo bootstrap simulations to evaluate them, which can be quite time consuming for a large sample of sources. As an alternative, considerably less time-consuming approach, we derived the theoretical expectation value for the DC and its error for sources whose state is one of two possible, mutually exclusive states, inactive (off) or flaring (on), based on a finite set of independent observational data points. Following a Bayesian approach, we derived the analytical expression for the posterior, the conjugate distribution adopted as the prior, and the expectation value and variance. We applied our method to the specific case of the inactivity duty cycle (IDC) for supergiant fast X-ray transients, a subclass of flaring high mass X-ray binaries characterized by large dynamical ranges. We also studied IDC as a function of the number of observations in the sample. Finally, we compare the results with the theoretical expectations. We found excellent agreement with our findings based on the standard bootstrap method. Our Bayesian treatment can be applied to all sets of independent observations of two-state sources, such as active galactic nuclei, X-ray binaries, etc. In addition to being far less time consuming than bootstrap methods, the additional strength of this approach becomes obvious when considering a well-populated class of sources (Nsrc ≥ 50) for which the prior can be fully characterized by fitting the distribution of the observed DCs for all sources in the class, so that, through the prior, one can further constrain the DC of a new source by exploiting the information acquired on the DC distribution derived
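
    For a two-state source observed n times, with k observations in the inactive state, the conjugate treatment sketched above reduces to a Beta posterior. The counts and prior hyperparameters below are placeholders; in the paper the prior would instead be characterized from the class-wide DC distribution.

        from scipy.stats import beta

        n, k = 40, 28                 # hypothetical campaign: 28 of 40 pointings inactive
        a0, b0 = 1.0, 1.0             # Beta prior hyperparameters (flat placeholder prior)

        posterior = beta(a0 + k, b0 + n - k)   # conjugate posterior for the duty cycle
        dc_mean = posterior.mean()             # expectation value
        dc_std = posterior.std()               # one-sigma uncertainty
        lo, hi = posterior.interval(0.68)      # central 68% credible interval

        print(f"IDC = {dc_mean:.2f} +/- {dc_std:.2f} (68% interval {lo:.2f}-{hi:.2f})")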

  11. The study of microstructure and mechanical properties of twin-roll cast AZ31 magnesium alloy after constrained groove pressing

    NASA Astrophysics Data System (ADS)

    Zimina, M.; Bohlen, J.; Letzig, D.; Kurz, G.; Cieslar, M.; Zník, J.

    2014-08-01

    Microstructure investigation and microhardness mapping were performed on ultra-fine-grained material prepared by constrained groove pressing of twin-roll cast AZ31 magnesium strips. The microstructure observations showed a significant drop in grain size, from about 200 μm to 20 μm, after constrained groove pressing. Moreover, the heterogeneities in microhardness along the cross-section observed in the as-cast strip were replaced by bands of different microhardness in the constrained groove pressed material. It is shown that the constrained groove pressing technique is a good tool for the grain refinement of magnesium alloys.

  12. Spectrum reconstruction based on the constrained optimal linear inverse methods.

    PubMed

    Ren, Wenyi; Zhang, Chunmin; Mu, Tingkui; Dai, Haishan

    2012-07-01

    The dispersion of the birefringent material results in a spectrally varying Nyquist frequency for a Fourier transform spectrometer based on a birefringent prism. Correct spectral information cannot be retrieved from the observed interferogram if the dispersion effect is not appropriately compensated. Some methods, such as nonuniform fast Fourier transforms and a compensation method, have been proposed to reconstruct the spectrum. In this Letter, an alternative constrained spectrum reconstruction method is suggested for the stationary polarization interference imaging spectrometer (SPIIS) based on the Savart polariscope. The theoretical model of the interferogram includes the noise and the total measurement error, and the spectrum reconstruction is performed using constrained optimal linear inverse methods. Numerical simulation shows that the proposed method is much more effective and robust than the unconstrained spectrum reconstruction method proposed by Jian, and provides a useful spectrum reconstruction approach for the SPIIS. PMID:22743461
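
    The following sketch illustrates the general idea of constrained linear inversion of an interferogram; it is not the authors' specific constrained optimal linear inverse formulation. A toy spectrum is pushed through a cosine forward model sampled at nonuniform path differences, and the reconstruction is compared with and without a nonnegativity constraint. All names and parameter values are illustrative assumptions.

      import numpy as np
      from scipy.optimize import lsq_linear

      rng = np.random.default_rng(1)
      wavenumbers = np.linspace(1.0, 2.0, 80)        # arbitrary units
      opd = np.sort(rng.uniform(0.0, 30.0, 120))     # nonuniform optical path differences
      A = np.cos(2 * np.pi * np.outer(opd, wavenumbers))   # forward (cosine) model

      spectrum = np.exp(-0.5 * ((wavenumbers - 1.4) / 0.05) ** 2)   # toy spectral line
      interferogram = A @ spectrum + 0.5 * rng.normal(size=opd.size)

      # Unconstrained least squares vs. a nonnegativity-constrained inversion.
      unconstrained = np.linalg.lstsq(A, interferogram, rcond=None)[0]
      constrained = lsq_linear(A, interferogram, bounds=(0.0, np.inf)).x

      print("min of unconstrained solution:", unconstrained.min())   # typically negative
      print("min of constrained solution:  ", constrained.min())     # nonnegative by construction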

  13. Lilith: a tool for constraining new physics from Higgs measurements

    NASA Astrophysics Data System (ADS)

    Bernon, Jérémy; Dumont, Béranger

    2015-09-01

    The properties of the observed Higgs boson with mass around 125 GeV can be affected in a variety of ways by new physics beyond the Standard Model (SM). The wealth of experimental results, targeting the different combinations for the production and decay of a Higgs boson, makes it a non-trivial task to assess the compatibility of a non-SM-like Higgs boson with all available results. In this paper we present Lilith, a new public tool for constraining new physics from signal strength measurements performed at the LHC and the Tevatron. Lilith is a Python library that can also be used in C and C++/ROOT programs. The Higgs likelihood is based on experimental results stored in an easily extensible XML database, and is evaluated from the user input, given in XML format in terms of reduced couplings or signal strengths. The results of Lilith can be used to constrain a wide class of new physics scenarios.

  14. Gyrification from constrained cortical expansion

    NASA Astrophysics Data System (ADS)

    Tallinen, Tuomas

    The convolutions of the human brain are a symbol of its functional complexity. But how does the outer surface of the brain, the layered cortex of neuronal gray matter, get its folds? In this talk, we ask to what extent folding of the brain can be explained as a purely mechanical consequence of unpatterned growth of the cortical layer relative to the sublayers. Modeling the growing brain as a soft layered solid leads to elastic instabilities and the formation of cusped sulci and smooth gyri consistent with observations across species in both normal and pathological situations. Furthermore, we apply initial geometries obtained from fetal brain MRI to address the question of how brain geometry and folding patterns may be coupled via mechanics.

  15. Mars, Moon, Mercury: Magnetometry Constrains Planetary Evolution

    NASA Astrophysics Data System (ADS)

    Connerney, John E. P.

    2015-04-01

    We have long appreciated that magnetic measurements obtained about a magnetized planet are of great value in probing the deep interior. The existence of a substantial planetary magnetic field implies dynamo action, requiring an electrically conducting fluid core in convective motion and a source of energy to maintain it. Application of the well-known Lowes spectrum may in some cases identify the dynamo outer radius; where secular variation can be measured, the outer radius can be estimated using the frozen-flux approximation. Magnetic induction may be used to probe the electrical conductivity of the mantle and crust. These are useful constraints from which, together with gravity and/or other observables, we may infer the state of the interior and gain insight into planetary evolution. But only recently has it become clear that space magnetometry can do much more, particularly about a planet that once sustained a dynamo that has since disappeared. Mars is the best example of this class: the Mars Global Surveyor spacecraft globally mapped a remanent crustal field left behind after the demise of the dynamo. This map is a magnetic record of the planet's evolution. I will argue that this map may be interpreted to constrain the era of dynamo activity within Mars; to establish the reversal history of the Mars dynamo; to infer the magnetization intensity of Mars crustal rock and the depth of the magnetized crustal layer; and to establish that plate tectonics is not unique to planet Earth, as has so often been claimed. The lunar magnetic record is, in contrast, one of weakly magnetized and scattered sources, not yet easily interpreted in terms of the interior. Magnetometry about Mercury is more difficult to interpret owing to the relatively weak field and proximity to the Sun, but MESSENGER (and ultimately BepiColombo) may yet map crustal anomalies (induced and/or remanent).

  16. Constraining Cosmic Evolution of Type Ia Supernovae

    SciTech Connect

    Foley, Ryan J.; Filippenko, Alexei V.; Aguilera, C.; Becker, A.C.; Blondin, S.; Challis, P.; Clocchiatti, A.; Covarrubias, R.; Davis, T.M.; Garnavich, P.M.; Jha, S.; Kirshner, R.P.; Krisciunas, K.; Leibundgut, B.; Li, W.; Matheson, T.; Miceli, A.; Miknaitis, G.; Pignata, G.; Rest, A.; Riess, A.G.; /UC, Berkeley, Astron. Dept. /Cerro-Tololo InterAmerican Obs. /Washington U., Seattle, Astron. Dept. /Harvard-Smithsonian Ctr. Astrophys. /Chile U., Catolica /Bohr Inst. /Notre Dame U. /KIPAC, Menlo Park /Texas A-M /European Southern Observ. /NOAO, Tucson /Fermilab /Chile U., Santiago /Harvard U., Phys. Dept. /Baltimore, Space Telescope Sci. /Johns Hopkins U. /Res. Sch. Astron. Astrophys., Weston Creek /Stockholm U. /Hawaii U. /Illinois U., Urbana, Astron. Dept.

    2008-02-13

    We present the first large-scale effort of creating composite spectra of high-redshift type Ia supernovae (SNe Ia) and comparing them to low-redshift counterparts. Through the ESSENCE project, we have obtained 107 spectra of 88 high-redshift SNe Ia with excellent light-curve information. In addition, we have obtained 397 spectra of low-redshift SNe through a multiple-decade effort at Lick and Keck Observatories, and we have used 45 ultraviolet spectra obtained by HST/IUE. The low-redshift spectra act as a control sample when comparing to the ESSENCE spectra. In all instances, the ESSENCE and Lick composite spectra appear very similar. The addition of galaxy light to the Lick composite spectra allows a nearly perfect match of the overall spectral-energy distribution with the ESSENCE composite spectra, indicating that the high-redshift SNe are more contaminated with host-galaxy light than their low-redshift counterparts. This is caused by observing objects at all redshifts with similar slit widths, which correspond to different projected distances. After correcting for the galaxy-light contamination, subtle differences in the spectra remain. We have estimated the systematic errors when using current spectral templates for K-corrections to be ~0.02 mag. The variance in the composite spectra gives an estimate of the intrinsic variance in low-redshift maximum-light SN spectra of ~3% in the optical, growing toward the ultraviolet. The difference between the maximum-light low- and high-redshift spectra constrains SN evolution between our samples to be < 10% in the rest-frame optical.

  17. Motor Demands Constrain Cognitive Rule Structures.

    PubMed

    Collins, Anne Gabrielle Eva; Frank, Michael Joshua

    2016-03-01

    Study of human executive function focuses on our ability to represent cognitive rules independently of stimulus or response modality. However, recent findings suggest that executive functions cannot be modularized separately from perceptual and motor systems, and that they instead scaffold on top of motor action selection. Here we investigate whether patterns of motor demands influence how participants choose to implement abstract rule structures. In a learning task that requires integrating two stimulus dimensions for determining appropriate responses, subjects typically structure the problem hierarchically, using one dimension to cue the task-set and the other to cue the response given the task-set. However, the choice of which dimension to use at each level can be arbitrary. We hypothesized that the specific structure subjects adopt would be constrained by the motor patterns afforded within each rule. Across four independent data-sets, we show that subjects create rule structures that afford motor clustering, preferring structures in which adjacent motor actions are valid within each task-set. In a fifth data-set using instructed rules, this bias was strong enough to counteract the well-known task switch-cost when instructions were incongruent with motor clustering. Computational simulations confirm that observed biases can be explained by leveraging overlap in cortical motor representations to improve outcome prediction and hence infer the structure to be learned. These results highlight the importance of sensorimotor constraints in abstract rule formation and shed light on why humans have strong biases to invent structure even when it does not exist. PMID:26966909

  18. Motor Demands Constrain Cognitive Rule Structures

    PubMed Central

    Collins, Anne Gabrielle Eva; Frank, Michael Joshua

    2016-01-01

    Study of human executive function focuses on our ability to represent cognitive rules independently of stimulus or response modality. However, recent findings suggest that executive functions cannot be modularized separately from perceptual and motor systems, and that they instead scaffold on top of motor action selection. Here we investigate whether patterns of motor demands influence how participants choose to implement abstract rule structures. In a learning task that requires integrating two stimulus dimensions for determining appropriate responses, subjects typically structure the problem hierarchically, using one dimension to cue the task-set and the other to cue the response given the task-set. However, the choice of which dimension to use at each level can be arbitrary. We hypothesized that the specific structure subjects adopt would be constrained by the motor patterns afforded within each rule. Across four independent data-sets, we show that subjects create rule structures that afford motor clustering, preferring structures in which adjacent motor actions are valid within each task-set. In a fifth data-set using instructed rules, this bias was strong enough to counteract the well-known task switch-cost when instructions were incongruent with motor clustering. Computational simulations confirm that observed biases can be explained by leveraging overlap in cortical motor representations to improve outcome prediction and hence infer the structure to be learned. These results highlight the importance of sensorimotor constraints in abstract rule formation and shed light on why humans have strong biases to invent structure even when it does not exist. PMID:26966909

  19. Towards weakly constrained double field theory

    NASA Astrophysics Data System (ADS)

    Lee, Kanghoon

    2016-08-01

    We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using the strong constraint of double field theory. We show that the X-ray (Radon) transform on a torus is well suited for describing weakly constrained double fields, and that any weakly constrained field can be represented as a sum of strongly constrained fields. Using the inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transformation and a gauge-invariant action without using the strong constraint. We then discuss the relation of our result to closed string field theory. Our construction suggests that there exists an effective field theory description for the massless sector of closed string field theory on a torus in an associative truncation.

  20. Constraining weak annihilation using semileptonic D decays

    SciTech Connect

    Ligeti, Zoltan; Luke, Michael; Manohar, Aneesh V.

    2010-08-01

    The recently measured semileptonic D_s decay rate can be used to constrain weak annihilation (WA) effects in semileptonic D and B decays. We revisit the theoretical predictions for inclusive semileptonic D_(s) decays using a variety of quark mass schemes. The most reliable results are obtained if the fits to B decay distributions are used to eliminate the charm quark mass dependence, without using any specific charm mass scheme. Our fit to the available data shows that WA is smaller than commonly assumed. There is no indication that the WA octet contribution (which is better constrained than the singlet contribution) dominates. The results constrain an important source of uncertainty in the extraction of |V_ub| from inclusive semileptonic B decays.

  1. Constrained simultaneous stitching measurement for aspheric surface

    NASA Astrophysics Data System (ADS)

    Wang, Weibo; Fan, Zhigang

    2013-01-01

    Significant errors can result from multiple data sets due to error transfer and accumulation in each sub-aperture. A constrained simultaneous stitching method with error calibration is proposed to increase the stability of the numerical solution of the stitching algorithm. Global error averaging and constrained optimization are applied to simultaneous stitching after the alignment errors are calibrated. The merit function minimizes the discrepancy between the multiple data sets and includes components related to the various alignment errors. The stitching coefficients that fall within the unit sphere and minimize the mean square difference between overlapping values can be found by iterative constrained optimization. Finally, the full-aperture wavefront is reconstructed by simultaneous stitching, with the stitching coefficients required to remain within meaningful bounds.

  2. Pattern Search Algorithms for Bound Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1996-01-01

    We present a convergence theory for pattern search methods for solving bound constrained nonlinear programs. The analysis relies on the abstract structure of pattern search methods and an understanding of how the pattern interacts with the bound constraints. This analysis makes it possible to develop pattern search methods for bound constrained problems while only slightly restricting the flexibility present in pattern search methods for unconstrained problems. We prove global convergence despite the fact that pattern search methods do not have explicit information concerning the gradient and its projection onto the feasible region and consequently are unable to enforce explicitly a notion of sufficient feasible decrease.
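
    To make the idea concrete, here is a generic bound constrained coordinate pattern search in the spirit described above; it is a textbook-style sketch rather than the exact algorithm analysed in the paper, and the function and variable names are ours. Poll points that violate the bounds are simply skipped, and the step size is halved whenever no feasible polling direction decreases the objective.

      import numpy as np

      def pattern_search_bounds(f, x0, lower, upper, step=0.5, tol=1e-6, max_iter=1000):
          """Coordinate pattern search for min f(x) subject to lower <= x <= upper."""
          x = np.clip(np.asarray(x0, dtype=float), lower, upper)
          fx = f(x)
          for _ in range(max_iter):
              improved = False
              for i in range(x.size):
                  for sign in (+1.0, -1.0):
                      trial = x.copy()
                      trial[i] += sign * step            # poll a coordinate direction
                      if np.all(trial >= lower) and np.all(trial <= upper):
                          ft = f(trial)
                          if ft < fx:                    # accept the first improving point
                              x, fx, improved = trial, ft, True
                              break
                  if improved:
                      break
              if not improved:                           # no poll point improved: contract
                  step *= 0.5
                  if step < tol:
                      break
          return x, fx

      # Hypothetical example: a quadratic whose unconstrained minimum lies outside the box [0, 1]^2.
      f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 0.3) ** 2
      x_opt, f_opt = pattern_search_bounds(f, x0=[0.5, 0.5], lower=np.zeros(2), upper=np.ones(2))
      print(x_opt, f_opt)   # expect x_opt close to [1.0, 0.3]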

  3. Pattern Search Methods for Linearly Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush-Kuhn-Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative. Key to the analysis of the algorithms is the way in which the local search patterns conform to the geometry of the boundary of the feasible region.

  4. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
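
    The essence of this formulation can be sketched with a generic convex modeling tool such as CVXPY (not necessarily the solver setup used by the authors). Here A and b are random placeholders; in the real problem each row would be assembled from measured angular rates, accelerations, and applied torques so that A @ theta is approximately b, with theta holding the six independent inertia entries, and the bounds on the inertia matrix enter as LMIs.

      import cvxpy as cp
      import numpy as np

      # Placeholder regression data (assumed setup, not the authors' test data).
      rng = np.random.default_rng(0)
      A = rng.normal(size=(60, 6))
      b = rng.normal(size=60)

      J = cp.Variable((3, 3), symmetric=True)           # inertia matrix
      theta = cp.hstack([J[0, 0], J[1, 1], J[2, 2],     # its six independent entries,
                         J[0, 1], J[0, 2], J[1, 2]])    # ordered to match the columns of A

      # Explicit bounds on the inertia matrix expressed as linear matrix inequalities.
      constraints = [J >> 1e-3 * np.eye(3), J << 10.0 * np.eye(3)]

      problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ theta - b)), constraints)
      problem.solve()
      print("estimated inertia matrix:\n", J.value)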

  5. Performance enhancement for GPS positioning using constrained Kalman filtering

    NASA Astrophysics Data System (ADS)

    Guo, Fei; Zhang, Xiaohong; Wang, Fuhong

    2015-08-01

    Over the past decades Kalman filtering (KF) algorithms have been extensively investigated and applied in the area of kinematic positioning. When KF is applied to kinematic precise point positioning (PPP), it is often the case that some known functional or theoretical relations exist among the unknown state parameters, which can and should be exploited to enhance the performance of kinematic PPP, especially in urban and forest environments. The central task of this paper is to effectively blend the commonly used GNSS data and internal/external additional constraint information to generate an optimal PPP solution. This paper first investigates the basic algorithm of constrained Kalman filtering. Then two types of PPP model, with speed constraints and trajectory constraints respectively, are proposed. Further validation tests based on a variety of situations show that the positioning performance (positioning accuracy, reliability, and continuity) of the constrained Kalman filter is significantly superior to that of the conventional Kalman filter, particularly under extremely poor observation conditions.
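
    One standard way to impose such relations, projecting the unconstrained filter estimate onto a linear constraint surface after the measurement update, is sketched below. This is a generic textbook construction, not necessarily the exact scheme of the paper, and the toy model (a velocity state tied to a known heading) is our own.

      import numpy as np

      def kf_update(x, P, z, H, R):
          """Standard Kalman measurement update."""
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P

      def project_onto_constraint(x, P, D, d):
          """Project the estimate onto the linear constraint D @ x = d.

          Uses the usual constrained-KF projection
          x_c = x - P D^T (D P D^T)^-1 (D x - d).
          """
          W = P @ D.T @ np.linalg.inv(D @ P @ D.T)
          return x - W @ (D @ x - d)

      # Toy example: state [v_east, v_north]; a known heading forces v_north = 2 * v_east.
      x = np.array([1.0, 1.5])
      P = np.diag([0.5, 0.5])
      z = np.array([1.2])                 # noisy measurement of the east component
      H = np.array([[1.0, 0.0]])
      R = np.array([[0.1]])

      x, P = kf_update(x, P, z, H, R)
      x_c = project_onto_constraint(x, P, D=np.array([[-2.0, 1.0]]), d=np.array([0.0]))
      print("unconstrained:", x, " constrained:", x_c)   # x_c satisfies v_north = 2 v_east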

  6. Towards better constrained models of the solar magnetic cycle

    NASA Astrophysics Data System (ADS)

    Munoz-Jaramillo, Andres

    2010-12-01

    The best tools we have for understanding the origin of solar magnetic variability are kinematic dynamo models. During the last decade, this type of model has seen continuous evolution and has become increasingly successful at reproducing solar cycle characteristics. The basic ingredients of these models are: the solar differential rotation -- which acts as the main source of energy for the system by shearing the magnetic field; the meridional circulation -- which plays a crucial role in magnetic field transport; the turbulent diffusivity -- which attempts to capture the effect of convective turbulence on the large scale magnetic field; and the poloidal field source -- which closes the cycle by regenerating the poloidal magnetic field. However, most of these ingredients remain poorly constrained, which allows one to obtain solar-like solutions by "tuning" the input parameters, leading to controversy regarding which parameter set is more appropriate. In this thesis we revisit each of these ingredients in an attempt to constrain them better by using observational data and theoretical considerations, reducing the number of free parameters in the model. For the meridional flow and differential rotation we use helioseismic data to constrain free parameters and find that the differential rotation is well determined, but the available data can only constrain the latitudinal dependence of the meridional flow. For the turbulent magnetic diffusivity we show that combining mixing-length theory estimates with magnetic quenching allows us to obtain viable magnetic cycles, and that the commonly used diffusivity profiles can be understood as a spatiotemporal average of this process. For the poloidal source we introduce a more realistic way of modeling active region emergence and decay and find that this resolves existing discrepancies between kinematic dynamo models and surface flux transport simulations. We also study the physical mechanisms behind the unusually long minimum of

  7. Constrained tri-sphere kinematic positioning system

    DOEpatents

    Viola, Robert J

    2010-12-14

    A scalable and adaptable, six-degree-of-freedom, kinematic positioning system is described. The system can position objects supported on top of, or suspended from, jacks comprising constrained joints. The system is compatible with extreme low temperature or high vacuum environments. When constant adjustment is not required a removable motor unit is available.

  8. Rhythmic Grouping Biases Constrain Infant Statistical Learning

    ERIC Educational Resources Information Center

    Hay, Jessica F.; Saffran, Jenny R.

    2012-01-01

    Linguistic stress and sequential statistical cues to word boundaries interact during speech segmentation in infancy. However, little is known about how the different acoustic components of stress constrain statistical learning. The current studies were designed to investigate whether intensity and duration each function independently as cues to…

  9. Constrained Subjective Assessment of Student Learning

    ERIC Educational Resources Information Center

    Saliu, Sokol

    2005-01-01

    Student learning is a complex incremental cognitive process; assessment needs to parallel this, reporting the results in similar terms. Application of fuzzy sets and logic to the criterion-referenced assessment of student learning is considered here. The constrained qualitative assessment (CQA) system was designed, and then applied in assessing a…

  10. Automation of constrained-value business forms

    SciTech Connect

    Carson, M.L.; Beaumariage, T.G.; Greitzer, F.L.

    1993-05-01

    Expert systems can improve many business tasks. However, the nature of a constrained-value business form can result in a rule base that contains circular reasoning, unsuitable for expert system implementation. A methodology is presented for restructuring such a rule base for compatibility with a backward-chaining expert system.

  11. Analytical solutions to constrained hypersonic flight trajectories

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    The flight trajectory of aerospace vehicles subject to a class of path constraints is considered. The constrained dynamics is shown to be a natural two-time-scale system. Asymptotic analytical solutions are obtained. Problems of trajectory optimization and guidance can be dramatically simplified with these solutions. Applications in trajectory design for an aerospace plane strongly support the theoretical development.

  12. Constraining portals with displaced Higgs decay searches at the LHC

    NASA Astrophysics Data System (ADS)

    Clarke, Jackson D.

    2015-10-01

    It is very easy to write down models in which long-lived particles decaying to standard model states are pair-produced via Higgs decays, resulting in the signature of approximately back-to-back pairs of displaced narrow hadronic jets and/or lepton jets at the LHC. The LHC collaborations have already searched for such signatures with no observed excess. This paper describes a Monte Carlo method to reinterpret the searches. The method relies on (ideally multidimensional) efficiency tables, thus we implore collaborations to include them in any future work. Exclusion regions in mixing-mass parameter space are presented which constrain portal models.

  13. Vibrational pooling and constrained equilibration on surfaces

    NASA Astrophysics Data System (ADS)

    Boney, E. T. D.

    In this thesis, we provide a statistical theory for the vibrational pooling and fluorescence time dependence observed in infrared laser excitation of CO on an NaCl surface. The pooling is seen in experiment and in computer simulations. In the theory, we assume a rapid equilibration of the quanta in the substrate and minimize the free energy subject to the constraint, at any time t, of a fixed number of vibrational quanta N(t). At low incident intensity, the distribution is limited to one-quantum exchanges with the solid, so the Debye frequency of the solid plays a key role in limiting the range of this one-quantum domain. The resulting inverted vibrational equilibrium population depends only on fundamental parameters of the oscillator (ωe and ωeχe) and the surface (ωD and T). Possible applications and the relation to the Treanor gas phase treatment are discussed. Unlike the solid phase system, the gas phase system has no Debye-constraining maximum. We discuss the possible distributions for arbitrary N-conserving diatom-surface pairs, and include application to H:Si(111) as an example. Computations are presented to describe and analyze the high levels of infrared laser induced vibrational excitation of a monolayer of adsorbed 13CO on a NaCl(100) surface. The calculations confirm that, for situations where the Debye frequency limited n-domain restriction approximately holds, the vibrational state population deviates from a Boltzmann population linearly in n, a result that we derived earlier theoretically for a domain of n restricted to one-phonon transfers. This theoretically understood term, linear in n, dominates the Boltzmann term and is responsible for the inversion of the population of vibrational states, Pn. We discuss the one-to-one relationship between N and γ and examine the state space of the new distribution function for varied γ. We derive the free energy and effective chemical potential for the vibrational pool. We also find the anti

  14. Elastic Domain Architectures in Constrained Layers

    NASA Astrophysics Data System (ADS)

    Slutsker, J.; Artemev, A.; Roytburd, A. L.

    2002-08-01

    The formation of elastic domains in transforming constrained films is a mechanism of relaxation of the internal stresses caused by the misfit between a film and a substrate. The formation and evolution of polydomain microstructure as a result of the cubic-tetragonal transformation in a constrained layer are investigated by phase-field simulation. It is shown that a three-domain hierarchical structure can form in epitaxial films. As the fraction of the out-of-plane domain changes, two types of morphological transitions occur: from the three-domain structure to the two-domain one, and from the hierarchical three-domain structure to the cellular three-domain structure. The results of the phase-field simulation are compared with available experimental data on 90° domain structures in epitaxial ferroelectric films.

  15. Constraining SUSY GUTs and Inflation with Cosmology

    SciTech Connect

    Rocher, Jonathan

    2006-11-03

    In the framework of Supersymmetric Grand Unified Theories (SUSY GUTs), the universe undergoes a cascade of symmetry breakings, during which topological defects can be formed. We address the question of the probability of cosmic string formation after a phase of hybrid inflation within a large number of SUSY GUT models in agreement with particle and cosmological data. We show that cosmic strings are extremely generic and should be used to relate cosmology and high energy physics. This conclusion is employed together with the WMAP CMB data to strongly constrain SUSY hybrid inflation models. F-term and D-term inflation are studied in the SUSY and minimal SUGRA framework. They are both found to agree with data but suffer from fine tuning of their superpotential coupling (λ ≲ 3 × 10^-5). The mass scale of inflation is also constrained to M ≲ 3 × 10^15 GeV.

  16. Constrained optimization via artificial immune system.

    PubMed

    Zhang, Weiwei; Yen, Gary G; He, Zhongshi

    2014-02-01

    An artificial immune system, inspired by the fundamental principle of the vertebrate immune system, is proposed for solving constrained optimization problems. The analogy between the mechanism of the biological immune response and the constrained optimization formulation is drawn. Individuals in the population are classified into feasible and infeasible groups according to their constraint violations, which closely match the two states, inactivated and activated, of B-cells in the immune response. The feasible group focuses on exploitation in the feasible areas through clonal selection, recombination, and hypermutation, while the infeasible group facilitates exploration along the feasibility boundary via location updates. Direction information is extracted to promote the interactions between these two groups. This approach is validated on recently proposed benchmark functions and compared with state-of-the-art algorithms from various branches of evolutionary computation. The performance achieved is considered fairly competitive and promising. PMID:23757542

  17. Compilation for critically constrained knowledge bases

    SciTech Connect

    Schrag, R.

    1996-12-31

    We show that many "critically constrained" Random 3SAT knowledge bases (KBs) can be compiled into disjunctive normal form easily by using a variant of the "Davis-Putnam" proof procedure. From these compiled KBs we can answer all queries about entailment of conjunctive normal formulas, also easily - compared to a "brute-force" approach to approximate knowledge compilation into unit clauses for the same KBs. We exploit this fact to develop an aggressive hybrid approach which attempts to compile a KB exactly until a given resource limit is reached, then falls back to approximate compilation into unit clauses. The resulting approach handles all of the critically constrained Random 3SAT KBs with average savings of an order of magnitude over the brute-force approach.

  18. Synthesis of constrained analogues of tryptophan

    PubMed Central

    Negrato, Marco; Abbiati, Giorgio; Dell’Acqua, Monica

    2015-01-01

    A Lewis acid-catalysed diastereoselective [4 + 2] cycloaddition of vinylindoles and methyl 2-acetamidoacrylate, leading to methyl 3-acetamido-1,2,3,4-tetrahydrocarbazole-3-carboxylate derivatives, is described. Treatment of the obtained cycloadducts under hydrolytic conditions yields a small library of compounds bearing the free amino acid function at C-3 and belonging to the class of constrained tryptophan analogues. PMID:26664620

  19. Hybrid evolutionary programming for heavily constrained problems.

    PubMed

    Myung, H; Kim, J H

    1996-01-01

    A hybrid of evolutionary programming (EP) and a deterministic optimization procedure is applied to a series of non-linear and quadratic optimization problems. The hybrid scheme is compared with other existing schemes such as EP alone, two-phase (TP) optimization, and EP with a non-stationary penalty function (NS-EP). The results indicate that the hybrid method can outperform the other methods when addressing heavily constrained optimization problems in terms of computational efficiency and solution accuracy. PMID:8833746

  20. Constraining RRc candidates using SDSS colours

    NASA Astrophysics Data System (ADS)

    Banyai, E.; Plachy, E.; Molnar, L.; Dobos, L.; Szabo, R.

    2016-05-01

    The light variations of first-overtone RR Lyrae stars and contact eclipsing binaries can be difficult to distinguish. The Catalina Periodic Variable Star catalog contains several misclassified objects, despite the classification efforts by Drake et al. (2014). They used metallicity and surface gravity derived from spectroscopic data (from the SDSS database) to rule out binaries. Our aim is to further constrain the catalog using SDSS colours to estimate physical parameters for stars that did not have spectroscopic data.

  1. An English language interface for constrained domains

    NASA Technical Reports Server (NTRS)

    Page, Brenda J.

    1989-01-01

    The Multi-Satellite Operations Control Center (MSOCC) Jargon Interpreter (MJI) demonstrates an English language interface for a constrained domain. A constrained domain is defined as one with a small and well delineated set of actions and objects. The set of actions chosen for the MJI is from the domain of MSOCC Applications Executive (MAE) Systems Test and Operations Language (STOL) directives and contains directives for signing a cathode ray tube (CRT) on or off, calling up or clearing a display page, starting or stopping a procedure, and controlling history recording. The set of objects chosen consists of CRTs, display pages, STOL procedures, and history files. Translation from English sentences to STOL directives is done in two phases. In the first phase, an augmented transition net (ATN) parser and dictionary are used for determining grammatically correct parsings of input sentences. In the second phase, grammatically typed sentences are submitted to a forward-chaining rule-based system for interpretation and translation into equivalent MAE STOL directives. Tests of the MJI show that it is able to translate individual clearly stated sentences into the subset of directives selected for the prototype. This approach to an English language interface may be used for similarly constrained situations by modifying the MJI's dictionary and rules to reflect the change of domain.

  2. Constrained optimum trajectories with specified range

    NASA Technical Reports Server (NTRS)

    Erzberger, H.; Lee, H.

    1980-01-01

    The characteristics of optimum fixed-range trajectories whose structure is constrained to climb, steady cruise, and descent segments are derived by application of optimal control theory. The performance function consists of the sum of fuel and time costs, referred to as direct operating costs (DOC). The state variable is range-to-go and the independent variable is energy. In this formulation a cruise segment always occurs at the optimum cruise energy for sufficiently large range. At short ranges (500 n. mi. and less) a cruise segment may also occur below the optimum cruise energy. The existence of such a cruise segment depends primarily on the fuel flow vs thrust characteristics and on thrust constraints. If thrust is a free control variable along with airspeed, it is shown that such cruise segments will not generally occur. If thrust is constrained to some maximum value in climb and to some minimum in descent, such cruise segments generally will occur. The performance difference between free thrust and constrained thrust trajectories has been determined in computer calculations for an example transport aircraft.

  3. Constrained Implants in Total Knee Replacement.

    PubMed

    Touzopoulos, Panagiotis; Drosos, Georgios I; Ververidis, Athanasios; Kazakos, Konstantinos

    2015-05-01

    Total knee replacement (TKR) is a successful procedure for pain relief and functional restoration in patients with advanced osteoarthritis. The number of TKRs is increasing, and this has led to an increase in revision surgeries. The key to long-term success in both primary and revision TKR is stability, as well as adequate and stable fixation between components and underlying bone. In the vast majority of primary TKRs and in some revision cases, a posterior cruciate retaining or a posterior cruciate substituting device can be used. In some primary cases with severe deformity or ligamentous instability, and in most revision cases, a more constrained implant is required. The purpose of this paper is to review the literature concerning the use of condylar constrained knee (CCK) and rotating hinge (RH) implants in primary and revision cases, focusing on the indications and results. According to this review, although excellent and very good results have been reported, the existing literature is limited with respect to the indications for the use of constrained implants, the absence of long-term results, and the scarcity of comparative studies. PMID:26055025

  4. A Method to Constrain the Size of the Protosolar Nebula

    NASA Astrophysics Data System (ADS)

    Kretke, K. A.; Levison, H. F.; Buie, M. W.; Morbidelli, A.

    2012-04-01

    Observations indicate that the gaseous circumstellar disks around young stars vary significantly in size, ranging from tens to thousands of AU. Models of planet formation depend critically upon the properties of these primordial disks, yet in general it is impossible to connect an existing planetary system with an observed disk. We present a method by which we can constrain the size of our own protosolar nebula using the properties of the small body reservoirs in the solar system. In standard planet formation theory, after Jupiter and Saturn formed they scattered a significant number of remnant planetesimals into highly eccentric orbits. In this paper, we show that if there had been a massive, extended protoplanetary disk at that time, then the disk would have excited Kozai oscillations in some of the scattered objects, driving them into high-inclination (i >~ 50°), low-eccentricity orbits (q >~ 30 AU). The dissipation of the gaseous disk would strand a subset of objects in these high-inclination orbits; orbits that are stable on Gyr timescales. To date, surveys have not detected any Kuiper-belt objects with orbits consistent with this dynamical mechanism. Using these non-detections by the Deep Ecliptic Survey and the Palomar Distant Solar System Survey we are able to rule out an extended gaseous protoplanetary disk (RD >~ 80 AU) in our solar system at the time of Jupiter's formation. Future deep all sky surveys such as the Large Synoptic Survey Telescope will allow us to further constrain the size of the protoplanetary disk.

  5. Constraining f (T ,T ) gravity models using type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Sáez-Gómez, Diego; Carvalho, C. Sofia; Lobo, Francisco S. N.; Tereno, Ismael

    2016-07-01

    We present an analysis of an f(T,𝒯) extension of the Teleparallel Equivalent of General Relativity, where T denotes the torsion scalar and 𝒯 denotes the trace of the energy-momentum tensor. This extension includes nonminimal couplings between torsion and matter. In particular, we construct two specific models that recover the usual continuity equation, namely f(T,𝒯) = T + g(𝒯) and f(T,𝒯) = T × g(𝒯). We then constrain the parameters of each model by fitting the predicted distance modulus to that measured from type Ia supernovae and find that both models can reproduce the late-time cosmic acceleration. We also observe that one of the models satisfies the observational constraints well and yields a goodness-of-fit similar to the ΛCDM model, thus demonstrating that f(T,𝒯) gravity theory encompasses viable models that can be an alternative to ΛCDM.

  6. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Finger joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device...

  7. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Finger joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device...

  8. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  9. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  10. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  11. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  12. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  13. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  14. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Finger joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device...

  15. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  16. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  17. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  18. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Finger joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device...

  19. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  20. Incomplete Dirac reduction of constrained Hamiltonian systems

    SciTech Connect

    Chandre, C.

    2015-10-15

    First-class constraints constitute a potential obstacle to the computation of a Poisson bracket in Dirac’s theory of constrained Hamiltonian systems. Using the pseudoinverse instead of the inverse of the matrix defined by the Poisson brackets between the constraints, we show that a Dirac–Poisson bracket can be constructed, even if it corresponds to an incomplete reduction of the original Hamiltonian system. The uniqueness of Dirac brackets is discussed. The relevance of this procedure for infinite dimensional Hamiltonian systems is exemplified.
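
    The pseudoinverse-based bracket can be illustrated numerically for linear functions on a small canonical phase space. The example below is our own, not from the paper: it includes a redundant constraint so that the matrix of constraint brackets is singular and an ordinary inverse would fail, while the Moore-Penrose pseudoinverse still yields the expected Dirac brackets.

      import numpy as np

      # Canonical phase space z = (q1, p1, q2, p2); for linear functions f = a.z,
      # g = b.z the Poisson bracket is {f, g} = a^T J b with the symplectic matrix J.
      J = np.array([[0, 1, 0, 0],
                    [-1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, -1, 0]], dtype=float)

      def pb(a, b):
          return a @ J @ b

      def dirac_bracket(a, b, constraints):
          """Dirac bracket of two linear functions, using the pseudoinverse of the
          constraint matrix C_ab = {phi_a, phi_b} so that redundant constraints
          (a singular C) do not break the construction."""
          C = np.array([[pb(pi, pj) for pj in constraints] for pi in constraints])
          v = np.array([pb(a, phi) for phi in constraints])
          w = np.array([pb(phi, b) for phi in constraints])
          return pb(a, b) - v @ np.linalg.pinv(C) @ w

      # Constraints phi1 = q2, phi2 = p2, plus a redundant copy of phi1 (C is singular).
      phis = [np.array([0, 0, 1, 0.]), np.array([0, 0, 0, 1.]), np.array([0, 0, 1, 0.])]

      q1, p1 = np.array([1, 0, 0, 0.]), np.array([0, 1, 0, 0.])
      q2, p2 = np.array([0, 0, 1, 0.]), np.array([0, 0, 0, 1.])
      print(dirac_bracket(q1, p1, phis))   # 1.0: unconstrained pair is untouched
      print(dirac_bracket(q2, p2, phis))   # 0.0: constrained pair is projected out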

  1. Equilibria of three constrained point charges

    NASA Astrophysics Data System (ADS)

    Khimshiashvili, G.; Panina, G.; Siersma, D.

    2016-08-01

    We study the critical points of the Coulomb energy considered as a function on configuration spaces associated with certain geometric constraints. Two settings of this kind are discussed in some detail. The first setting arises by considering polygons of fixed perimeter with freely sliding positively charged vertices. The second is concerned with triples of positive charges constrained to three concentric circles. In each of these cases the Coulomb energy is generically a Morse function. We describe the minima and other stationary points of the Coulomb energy and show that, for three charges, a pitchfork bifurcation takes place, accompanied by an effect of the Euler buckling-beam type.
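
    For the second setting, stationary points can also be explored numerically by minimizing the Coulomb energy over the angular positions of three unit charges, each confined to its own circle; the radii, the choice of unit charges, and the optimizer below are illustrative assumptions, not values from the paper.

      import numpy as np
      from scipy.optimize import minimize

      radii = np.array([1.0, 2.0, 3.0])   # three concentric circles (illustrative values)

      def coulomb_energy(angles, r=radii):
          """Coulomb energy of three unit charges, charge i constrained to circle r[i]."""
          pts = np.column_stack([r * np.cos(angles), r * np.sin(angles)])
          e = 0.0
          for i in range(3):
              for j in range(i + 1, 3):
                  e += 1.0 / np.linalg.norm(pts[i] - pts[j])
          return e

      # The overall rotation is a trivial degeneracy, so pin the first charge at
      # angle 0 and optimize the remaining two angles.
      res = minimize(lambda a: coulomb_energy(np.concatenate(([0.0], a))),
                     x0=np.array([2.0, 4.0]), method="Nelder-Mead")
      print("stationary angles:", np.concatenate(([0.0], res.x)))
      print("energy:", res.fun)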

  2. Constrained inflaton due to a complex scalar

    NASA Astrophysics Data System (ADS)

    Budhi, Romy H. S.; Kashiwase, Shoichi; Suematsu, Daijiro

    2015-09-01

    We reexamine inflation due to a constrained inflaton in the model of a complex scalar. The inflaton evolves along a spiral-like valley of a special scalar potential in the scalar field space, just like in single field inflation. A sub-Planckian inflaton can induce sufficient e-foldings because of the long slow-roll path. In a special limit, the scalar spectral index and the tensor-to-scalar ratio have expressions equivalent to those of inflation with the monomial potential φ^n. Favorable values for them can be obtained by varying the parameters of the potential. This model could be embedded in a certain radiative neutrino mass model.

  3. Local structure of equality constrained NLP problems

    SciTech Connect

    Mari, J.

    1994-12-31

    We show that, locally around a feasible point, the behavior of an equality constrained nonlinear program is described by the gradient and the Hessian of the Lagrangian on the tangent subspace. In particular this holds true for reduced gradient approaches. Applying the same ideas to the control of nonlinear ODEs, one can devise first- and second-order methods that can be applied also to stiff problems. We finally describe an application of these ideas to the optimization of the production of human growth factor by fed-batch fermentation.

  4. Constrained inflaton due to a complex scalar

    SciTech Connect

    Budhi, Romy H. S.; Kashiwase, Shoichi; Suematsu, Daijiro

    2015-09-14

    We reexamine inflation due to a constrained inflaton in the model of a complex scalar. The inflaton evolves along a spiral-like valley of a special scalar potential in the scalar field space, just like in single field inflation. A sub-Planckian inflaton can induce sufficient e-foldings because of the long slow-roll path. In a special limit, the scalar spectral index and the tensor-to-scalar ratio have expressions equivalent to those of inflation with the monomial potential φ^n. Favorable values for them can be obtained by varying the parameters of the potential. This model could be embedded in a certain radiative neutrino mass model.

  5. Quantization of soluble classical constrained systems

    SciTech Connect

    Belhadi, Z.; Menas, F.; Bérard, A.; Mohrbach, H.

    2014-12-15

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac's formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them, all brackets of the dynamical variables of the system can be deduced in a straightforward way.

  6. Constraining the level density using fission of lead projectiles

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, J. L.; Benlliure, J.; Álvarez-Pol, H.; Audouin, L.; Ayyad, Y.; Bélier, G.; Boutoux, G.; Casarejos, E.; Chatillon, A.; Cortina-Gil, D.; Gorbinet, T.; Heinz, A.; Kelić-Heil, A.; Laurent, B.; Martin, J.-F.; Paradela, C.; Pellereau, E.; Pietras, B.; Ramos, D.; Rodríguez-Tajes, C.; Rossi, D. M.; Simon, H.; Taïeb, J.; Vargas, J.; Voss, B.

    2015-10-01

    The nuclear level density is one of the main ingredients for the statistical description of the fission process. In this work, we propose to constrain the description of this parameter by using fission reactions induced by protons and light ions on 208Pb at high kinetic energies. The experiment was performed at GSI (Darmstadt), where the combined use of the inverse kinematics technique with an efficient detection setup allowed us to measure the atomic numbers of the two fission fragments in coincidence. This measurement permitted us to obtain, with high precision, the partial fission cross sections and the width of the charge distribution as a function of the atomic number of the fissioning system. These data and others previously measured, covering a large range in fissility, are compared to state-of-the-art calculations. The results reveal that total and partial fission cross sections cannot unambiguously constrain the level density at ground-state and saddle-point deformations, and that additional observables, such as the width of the charge distribution of the final fission fragments, are required.

  7. 3D constrained inversion of geophysical and geological information applying Spatial Mutually Constrained Inversion.

    NASA Astrophysics Data System (ADS)

    Nielsen, O. F.; Ploug, C.; Mendoza, J. A.; Martínez, K.

    2009-05-01

    The need for increased accuracy and reduced ambiguity in inversion results has focused attention on the development of more advanced inversion methods for geophysical data. Over the past few years more advanced inversion techniques have been developed to improve the results. Full 3D inversion is time consuming and therefore often not the best solution from a cost-efficiency perspective. This has motivated the development of 3D constrained inversions, in which 1D models are constrained in 3D, also known as Spatially Constrained Inversion (SCI). Moreover, inversion of several different data types in one inversion has been developed, known as Mutually Constrained Inversion (MCI). In this paper a Spatial Mutually Constrained Inversion (SMCI) method is presented. This method applies 1D inversion to different geophysical data sets and geological information, constrained in 3D. Application of two or more types of geophysical methods in the inversion has proved to reduce the equivalence problem and to increase the resolution of the inversion results. The use of geological information from borehole data or digital geological models can be integrated in the inversion. In the SMCI, a 1D inversion code is used to model soundings that are constrained in three dimensions according to their relative position in space. This solution enhances the accuracy of the inversion and produces distinct layer thicknesses and resistivities. It is very efficient in the mapping of a layered geology, but is also capable of mapping layer discontinuities that are, in many cases, related to fracturing and faulting or to valley fills. Geological information may be included in the inversion directly or used only to form a starting model for the individual soundings in the inversion. In order to show the effectiveness of the method, examples are presented from both synthetic and real data. The examples include DC soundings as well as land-based and airborne TEM

  8. Regular language constrained sequence alignment revisited.

    PubMed

    Kucherov, Gregory; Pinhas, Tamar; Ziv-Ukelson, Michal

    2011-05-01

    Imposing constraints in the form of a finite automaton or a regular expression is an effective way to incorporate additional a priori knowledge into sequence alignment procedures. With this motivation, the Regular Expression Constrained Sequence Alignment Problem was introduced, together with an O(n²t⁴) time and O(n²t²) space algorithm for solving it, where n is the length of the input strings and t is the number of states in the input non-deterministic automaton. A faster O(n²t³) time algorithm for the same problem was subsequently proposed. In this article, we further speed up the algorithms for Regular Language Constrained Sequence Alignment by reducing their worst case time complexity bound to O(n²t³/log t). This is done by establishing an optimal bound on the size of Straight-Line Programs solving the maxima computation subproblem of the basic dynamic programming algorithm. We also study another solution based on a Steiner Tree computation. While it does not improve the worst case, our simulations show that both approaches are efficient in practice, especially when the input automata are dense. PMID:21554020

  9. Multiple Manifold Clustering Using Curvature Constrained Path

    PubMed Central

    Babaeian, Amir; Bayestehtashk, Alireza; Bandarabadi, Mojtaba

    2015-01-01

    The problem of multiple surface clustering is a challenging task, particularly when the surfaces intersect. Available methods such as Isomap fail to capture the true shape of the surface near the intersection and result in incorrect clustering. The Isomap algorithm uses shortest paths between points. The main drawback of the shortest-path approach is the lack of a curvature constraint, which allows paths to connect points lying on different surfaces. In this paper we tackle this problem by imposing a curvature constraint on the shortest-path algorithm used in Isomap. The algorithm chooses several landmark nodes at random and then checks whether there is a curvature-constrained path between each landmark node and every other node in the neighborhood graph. We build a binary feature vector for each point, where each entry represents the connectivity of that point to a particular landmark. The binary feature vectors can then be used as input to a conventional clustering algorithm such as hierarchical clustering. We apply our method to simulated and real data sets and show that it performs comparably to the best methods, such as K-manifold and spectral multi-manifold clustering. PMID:26375819

  10. Intersecting transcription networks constrain gene regulatory evolution.

    PubMed

    Sorrells, Trevor R; Booth, Lauren N; Tuch, Brian B; Johnson, Alexander D

    2015-07-16

    Epistasis-the non-additive interactions between different genetic loci-constrains evolutionary pathways, blocking some and permitting others. For biological networks such as transcription circuits, the nature of these constraints and their consequences are largely unknown. Here we describe the evolutionary pathways of a transcription network that controls the response to mating pheromone in yeast. A component of this network, the transcription regulator Ste12, has evolved two different modes of binding to a set of its target genes. In one group of species, Ste12 binds to specific DNA binding sites, while in another lineage it occupies DNA indirectly, relying on a second transcription regulator to recognize DNA. We show, through the construction of various possible evolutionary intermediates, that evolution of the direct mode of DNA binding was not directly accessible to the ancestor. Instead, it was contingent on a lineage-specific change to an overlapping transcription network with a different function, the specification of cell type. These results show that analysing and predicting the evolution of cis-regulatory regions requires an understanding of their positions in overlapping networks, as this placement constrains the available evolutionary pathways. PMID:26153861

  11. Trajectory generation and constrained control of quadrotors

    NASA Astrophysics Data System (ADS)

    Tule, Carlos Alberto

    Unmanned Aerial Systems, although still in early development, are expected to grow in both the military and civil sectors. Within the UAV sector, the quadrotor helicopter platform has been receiving a great deal of interest from academic and research institutions because of its simple design and low manufacturing cost, while remaining a challenging platform to control. Four different controllers were derived for the trajectory generation and constrained control of a quadrotor platform. The first approach uses the linear version of the Model Predictive Control (MPC) algorithm to solve the state-constrained optimization problem. The second approach uses the State Dependent Coefficient (SDC) form to capture the system nonlinearities in a pseudo-linear system matrix, which is used to derive the State Dependent Riccati Equation (SDRE) based optimal control. In the third approach, the SDC form is exploited to obtain a nonlinear equivalent of model predictive control. Lastly, a combination of the nonlinear MPC and SDRE optimal control algorithms is used to explore the feasibility of a near real-time nonlinear optimization technique.
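
    As a hedged illustration of the SDRE idea, the sketch below applies a state-dependent Riccati equation controller to a toy two-state nonlinear system rather than a full quadrotor model; the SDC factorization, weights and dynamics are assumptions made for the example.

```python
# Minimal SDRE control sketch on a toy 2-state nonlinear system.
import numpy as np
from scipy.linalg import solve_continuous_are

Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])
B = np.array([[0.0], [1.0]])

def A_sdc(x):
    # Pseudo-linear (state-dependent coefficient) form of
    #   x1' = x2,   x2' = -x1**3 + u,   using  -x1**3 = (-x1**2) * x1
    return np.array([[0.0, 1.0],
                     [-x[0]**2, 0.0]])

def sdre_control(x):
    A = A_sdc(x)
    P = solve_continuous_are(A, B, Q, R)          # state-dependent Riccati solution
    K = np.linalg.solve(R, B.T @ P)               # u = -K x
    return -(K @ x)

# Simple Euler roll-out from an initial condition.
x, dt = np.array([1.0, 0.0]), 0.01
for _ in range(500):
    u = sdre_control(x)
    xdot = np.array([x[1], -x[0]**3 + u[0]])
    x = x + dt * xdot
print(x)    # should approach the origin
```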

  12. Intersecting transcription networks constrain gene regulatory evolution

    PubMed Central

    Sorrells, Trevor R; Booth, Lauren N; Tuch, Brian B; Johnson, Alexander D

    2015-01-01

    Epistasis—the non-additive interactions between different genetic loci—constrains evolutionary pathways, blocking some and permitting others. For biological networks such as transcription circuits, the nature of these constraints and their consequences are largely unknown. Here we describe the evolutionary pathways of a transcription network that controls the response to mating pheromone in yeasts. A component of this network, the transcription regulator Ste12, has evolved two different modes of binding to a set of its target genes. In one group of species, Ste12 binds to specific DNA binding sites, while in another lineage it occupies DNA indirectly, relying on a second transcription regulator to recognize DNA. We show, through the construction of various possible evolutionary intermediates, that evolution of the direct mode of DNA binding was not directly accessible to the ancestor. Instead, it was contingent on a lineage-specific change to an overlapping transcription network with a different function, the specification of cell type. These results show that analyzing and predicting the evolution of cis-regulatory regions requires an understanding of their positions in overlapping networks, as this placement constrains the available evolutionary pathways. PMID:26153861

  13. Using Simple Shapes to Constrain Asteroid Thermal Inertia

    NASA Astrophysics Data System (ADS)

    MacLennan, Eric M.; Emery, Joshua P.

    2015-11-01

    With the use of remote thermal infrared observations and a thermophysical model (TPM), the thermal inertia of an asteroid surface can be determined. The thermal inertia, in turn, can be used to infer physical properties of the surface, specifically to estimate the average regolith grain size. Since asteroids are often non-spherical, techniques for incorporating modeled (non-spherical) shapes into thermal inertia calculations have been established. However, using a sphere as input for the TPM is beneficial in reducing run time, and shape models are not generally available for all (or most) objects that are observed in the thermal IR. This is particularly true as the pace of infrared observations has recently increased dramatically, notably due to the WISE mission, while the time needed to acquire sufficient light curves for accurate shape inversion remains relatively long. Here, we investigate the accuracy of using both a spherical and an ellipsoidal TPM, with infrared observations obtained at pre- and post-opposition (hereafter multi-epoch) geometries, to constrain the thermal inertias of a large number of asteroids. We test whether using multi-epoch observations combined with a spherical or ellipsoidal shape TPM can constrain the thermal inertia of an object without a priori knowledge of its shape or spin state. The effectiveness of this technique is tested for 16 objects with shape models from DAMIT and WISE multi-epoch observations. For each object, the shape model is used as input for the TPM to generate synthetic fluxes for different values of thermal inertia. The input spherical and ellipsoidal shapes are then stepped through different spin vectors as the TPM is used to find the best-fit thermal inertia and diameter for the synthetically generated fluxes, allowing for a direct test of the approach's effectiveness. We will discuss whether the precision of the thermal inertia constraints from the spherical TPM analysis of multi-epoch observations is comparable to works
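
    Schematically, the best-fit search described above amounts to a chi-square scan over a grid of thermal inertia and diameter values. The sketch below uses a stand-in flux function in place of a real TPM (which would solve the 1D heat diffusion problem over the shape) purely to show the mechanics of the grid fit; every number in it is illustrative.

```python
# Schematic grid search for best-fit thermal inertia and diameter.
import numpy as np

def tpm_flux(thermal_inertia, diameter, epochs):
    # Hypothetical placeholder with qualitative behaviour only; a real TPM
    # would compute thermal emission from surface temperature distributions.
    return diameter**2 / (1.0 + 0.01 * thermal_inertia) * np.ones(len(epochs))

epochs = np.arange(6)                       # pre- and post-opposition epochs
observed = tpm_flux(150.0, 1.0, epochs) * (1 + 0.02 * np.random.default_rng(2).normal(size=6))
sigma = 0.02 * observed

best = None
for gamma in np.arange(0.0, 500.0, 10.0):       # thermal inertia grid (SI units)
    for d in np.arange(0.5, 2.0, 0.05):         # diameter grid (km)
        chi2 = np.sum(((tpm_flux(gamma, d, epochs) - observed) / sigma) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, gamma, d)
print("best-fit chi2, thermal inertia, diameter:", best)
```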

  14. The effect of humor on memory: constrained by the pun.

    PubMed

    Summerfelt, Hannah; Lippman, Louis; Hyman, Ira E

    2010-01-01

    In a series of experiments, we investigated the effect of pun humor on memory. In all experiments, the participants were exposed to knock-knock jokes in either the original form retaining the pun or in a modified form that removed the pun. In Experiment 1, the authors found that pun humor improved both recall and recognition memory following incidental encoding. In Experiment 2, they found evidence that rehearsal is not the cause of the humor effect on memory. In Experiments 3 and 4, the authors found that the constraints imposed by puns and incongruity may account for the humor effects observed. Puns constrain and limit the information that can fit in the final line of a joke and thus make recall easier. PMID:21086859

  15. Constraining black-hole spin using disc tomography

    NASA Astrophysics Data System (ADS)

    Middleton, Matthew

    2013-10-01

    The emission from the inner accretion disc of low-mass, high-accretion-rate AGN extends into the soft X-ray bandpass. Where orbiting material covers or reveals each side of the disc in turn, we can study the region of strong gravity and determine the spin and inclination directly. RX J1301.9+2747 shows flares as a long-lived feature of its lightcurve, which are most likely due to gaps in an obscuring shroud. By comparing to the predictions of ray-tracing codes, the data imply that the spin and inclination are low. We propose a 210 ks observation (including overheads and background flaring) to more robustly test our new technique and tightly constrain the spin and inclination, or otherwise provide a unique view of the mechanism responsible for the soft excess in this source.

  16. Drying-induced cavitation in a constrained hydrogel.

    PubMed

    Wang, Huiming; Cai, Shengqiang

    2015-02-14

    Cavitation can often be observed in soft materials. Most previous studies focused on cavitation in elastomers under various mechanical loadings. In this paper, we investigate cavitation in a constrained hydrogel induced by drying. Taking account of surface tension and the chemo-mechanics of gels, we calculate the free energy of the system as a function of cavity size. The free energy landscape shows a double-well structure, analogous to a first-order phase transition. Above the critical humidity, a cavity inside the gel remains tiny. Below the critical humidity, the size of the cavity is large. At the critical humidity, the cavity size grows suddenly and discontinuously. We further show that large local stretches can be induced in the gel during the drying process, which may result in fracture. PMID:25592184

  17. An approach to constrained aerodynamic design with application to airfoils

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.

    1992-01-01

    An approach was developed for incorporating flow and geometric constraints into the Direct Iterative Surface Curvature (DISC) design method. In this approach, an initial target pressure distribution is developed using a set of control points. The chordwise locations and pressure levels of these points are initially estimated either from empirical relationships and observed characteristics of pressure distributions for a given class of airfoils or by fitting the points to an existing pressure distribution. These values are then automatically adjusted during the design process to satisfy the flow and geometric constraints. The flow constraints currently available are lift, wave drag, pitching moment, pressure gradient, and local pressure levels. The geometric constraint options include maximum thickness, local thickness, leading-edge radius, and a 'glove' constraint involving inner and outer bounding surfaces. This design method was also extended to include the successive constraint release (SCR) approach to constrained minimization.

  18. Constraining the Ensemble Kalman Filter for improved streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Maxwell, Deborah; Jackson, Bethanna; McGregor, James

    2016-04-01

    Data assimilation techniques such as the Kalman Filter and its variants are often applied to hydrological models with minimal state volume/capacity constraints. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this presentation, we investigate the effect of constraining the Ensemble Kalman Filter (EnKF) on forecast performance. An EnKF implementation with no constraints is compared to model output with no assimilation, followed by a 'typical' hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then a more tightly constrained implementation where flux as well as mass constraints are imposed to limit the rate of water movement within a state. A three year period (2008-2010) with no significant data gaps and representative of the range of flows observed over the fuller 1976-2010 record was selected for analysis. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Overall, neither the unconstrained nor the "typically" mass-constrained forecasts were significantly better than the non-filtered forecasts; in fact several were significantly degraded. Flux constraints (in conjunction with mass constraints) significantly improved the forecast performance of six events relative to all other implementations, while the remaining two events showed no significant difference in performance. We conclude that placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state updating and results in more accurate and reliable forward predictions of streamflow for robust decision-making. We also experiment with the observation error, and find that this
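
    A minimal numerical sketch of the "typical" constrained implementation, i.e. an EnKF analysis step followed by clipping of the analysed states to their physical bounds, is given below; the toy model, observation operator and capacities are assumptions, and a flux constraint would additionally cap state changes between time steps.

```python
# EnKF analysis step with mass (bound) constraints applied afterwards.
import numpy as np

rng = np.random.default_rng(3)
n_ens, n_state = 50, 3
capacity = np.array([100.0, 50.0, 200.0])     # storage capacities (mass constraint)

X = rng.uniform(0, 50, size=(n_state, n_ens))      # forecast ensemble (columns = members)
H = np.array([[1.0, 0.0, 0.0]])                    # observe the first store only
obs, obs_err = 30.0, 2.0

# Standard stochastic EnKF analysis.
Xm = X.mean(axis=1, keepdims=True)
A = X - Xm
Pf = A @ A.T / (n_ens - 1)
S = H @ Pf @ H.T + obs_err**2
K = Pf @ H.T / S                                    # Kalman gain (scalar observation)
perturbed_obs = obs + obs_err * rng.normal(size=n_ens)
Xa = X + K @ (perturbed_obs[None, :] - H @ X)

# Mass constraints: clip analysed states to the physically admissible range.
Xa = np.clip(Xa, 0.0, capacity[:, None])
print(Xa.mean(axis=1))
```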

  19. Constraining Earthquake Source Properties Based on Array Waveform Coherency

    NASA Astrophysics Data System (ADS)

    Zhang, A.; Meng, L.

    2014-12-01

    Ever since the deployment of large regional seismic arrays (e.g. USArray), numerous contributions have been made to develop refined structural models of the Earth's interior. However, this dataset has not been exploited in earthquake source studies except for back-projections of large earthquakes. Waveform coherence across a seismic array is crucial for back-projection earthquake source imaging. While previous studies indicate that waveform coherency decays dramatically with distance and frequency, their adoption of time windows with fixed duration may naturally degrade the coherence at high frequency. In this study, we measure the correlation coefficients of teleseismic waveforms recorded by USArray using window lengths proportional to 1/frequency. Based on the coherency curve of USArray as a function of inter-station distance, we may constrain earthquake source properties through data mining. Preliminary results show that the coherency is high across the USArray over inter-station distances >10 wavelengths and up to 5 Hz. The coherence of large/shallow earthquakes decays faster with distance than that of small/deep earthquakes. For the same earthquake, coherence falls off more slowly along the ray path than across it. One possible explanation for such patterns is a finite-source effect, including scattering near the source. We seek to systematically measure the waveform coherency of earthquakes with different properties, for example magnitude, focal depth, faulting type, rupture size and aspect ratio, some of which are hard to resolve with conventional observations. By establishing a multi-variable dependence of the source properties on the USArray coherence, we may reduce the scatter of stress drop calculations and constrain other source properties that are difficult to determine by conventional approaches. Such new observations may shed light on the long-standing debate over earthquake self-similarity.
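
    The measurement at the heart of this study, a correlation coefficient computed in a band around frequency f using a window of a fixed number of cycles (length proportional to 1/f), can be sketched as follows on synthetic traces; the filter design and the number of cycles per window are assumptions.

```python
# Inter-station coherence with window length proportional to 1/frequency.
import numpy as np
from scipy.signal import butter, filtfilt

fs, n = 20.0, 4000                       # sampling rate (Hz), samples
rng = np.random.default_rng(4)
common = rng.normal(size=n)              # coherent part shared by both stations
tr1 = common + 0.3 * rng.normal(size=n)
tr2 = common + 0.3 * rng.normal(size=n)

def coherence_at(f, tr_a, tr_b, cycles=10):
    """Correlation coefficient in a narrow band around f, window = cycles/f."""
    b, a = butter(4, [0.8 * f, 1.2 * f], btype="bandpass", fs=fs)
    fa, fb = filtfilt(b, a, tr_a), filtfilt(b, a, tr_b)
    nwin = int(cycles / f * fs)                   # window length proportional to 1/f
    fa, fb = fa[:nwin], fb[:nwin]
    return np.corrcoef(fa, fb)[0, 1]

for f in [0.5, 1.0, 2.0, 5.0]:
    print(f, coherence_at(f, tr1, tr2))
```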

  20. Modeling Atmospheric CO2 Processes to Constrain the Missing Sink

    NASA Technical Reports Server (NTRS)

    Kawa, S. R.; Denning, A. S.; Erickson, D. J.; Collatz, J. C.; Pawson, S.

    2005-01-01

    We report on a NASA supported modeling effort to reduce uncertainty in carbon cycle processes that create the so-called missing sink of atmospheric CO2. Our overall objective is to improve characterization of CO2 source/sink processes globally with improved formulations for atmospheric transport, terrestrial uptake and release, biomass and fossil fuel burning, and observational data analysis. The motivation for this study follows from the perspective that progress in determining CO2 sources and sinks beyond the current state of the art will rely on utilization of more extensive and intensive CO2 and related observations including those from satellite remote sensing. The major components of this effort are: 1) Continued development of the chemistry and transport model using analyzed meteorological fields from the Goddard Global Modeling and Assimilation Office, with comparison to real time data in both forward and inverse modes; 2) An advanced biosphere model, constrained by remote sensing data, coupled to the global transport model to produce distributions of CO2 fluxes and concentrations that are consistent with actual meteorological variability; 3) Improved remote sensing estimates for biomass burning emission fluxes to better characterize interannual variability in the atmospheric CO2 budget and to better constrain the land use change source; 4) Evaluating the impact of temporally resolved fossil fuel emission distributions on atmospheric CO2 gradients and variability; 5) Testing the impact of existing and planned remote sensing data sources (e.g., AIRS, MODIS, OCO) on inference of CO2 sources and sinks, and using the model to help establish measurement requirements for future remote sensing instruments. The results will help to prepare for the use of OCO and other satellite data in a multi-disciplinary carbon data assimilation system for analysis and prediction of carbon cycle changes and carbon-climate interactions.

  1. A METHOD TO CONSTRAIN THE SIZE OF THE PROTOSOLAR NEBULA

    SciTech Connect

    Kretke, K. A.; Levison, H. F.; Buie, M. W.; Morbidelli, A.

    2012-04-15

    Observations indicate that the gaseous circumstellar disks around young stars vary significantly in size, ranging from tens to thousands of AU. Models of planet formation depend critically upon the properties of these primordial disks, yet in general it is impossible to connect an existing planetary system with an observed disk. We present a method by which we can constrain the size of our own protosolar nebula using the properties of the small body reservoirs in the solar system. In standard planet formation theory, after Jupiter and Saturn formed they scattered a significant number of remnant planetesimals into highly eccentric orbits. In this paper, we show that if there had been a massive, extended protoplanetary disk at that time, then the disk would have excited Kozai oscillations in some of the scattered objects, driving them into high-inclination (i ≳ 50°), low-eccentricity orbits (q ≳ 30 AU). The dissipation of the gaseous disk would strand a subset of objects in these high-inclination orbits; orbits that are stable on Gyr timescales. To date, surveys have not detected any Kuiper-belt objects with orbits consistent with this dynamical mechanism. Using these non-detections by the Deep Ecliptic Survey and the Palomar Distant Solar System Survey we are able to rule out an extended gaseous protoplanetary disk (R_D ≳ 80 AU) in our solar system at the time of Jupiter's formation. Future deep all sky surveys such as the Large Synoptic Survey Telescope will allow us to further constrain the size of the protoplanetary disk.

  2. Asymmetric Exclusion Process with Constrained Hopping and Parallel Dynamics at a Junction

    NASA Astrophysics Data System (ADS)

    Liu, Mingzhe; Tuo, Xianguo; Li, Zhe; Yang, Jianbo

    In this article the totally asymmetric simple exclusion process (TASEP) with constrained hopping and parallel dynamics at a junction is investigated using a mean-field approximation and Monte Carlo simulations. The constrained particle hopping probability r (r ≤ 1) at the junction may correspond to a delay caused by a driver choosing the right direction, or a delay while waiting for a green traffic light, in the real world. There are six stationary phases in the system, which can reflect free-flow and congested traffic situations. Correlations at the junction point are investigated via simulations. It is observed that small r leads to stronger correlations. The theoretical results agree well with the computer simulations.
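
    A toy Monte Carlo version of such a model, a single open TASEP chain with parallel updates and a reduced hopping probability r at one junction site, is easy to write down; the sketch below uses illustrative rates and does not reproduce the multi-lane geometry of the paper.

```python
# Toy TASEP chain with open boundaries, parallel updates and a constrained
# hopping probability r at a single "junction" site.
import numpy as np

rng = np.random.default_rng(5)
L, alpha, beta, r, p = 200, 0.3, 0.3, 0.5, 1.0
junction = L // 2
tau = np.zeros(L, dtype=int)            # 1 = occupied
density = np.zeros(L)

steps, measure_after = 20000, 5000
for t in range(steps):
    hop_prob = np.full(L, p)
    hop_prob[junction] = r              # constrained hopping at the junction

    # Parallel bulk update: a particle hops right if the next site is empty.
    can_hop = (tau[:-1] == 1) & (tau[1:] == 0)
    do_hop = can_hop & (rng.random(L - 1) < hop_prob[:-1])
    tau[:-1][do_hop] = 0
    tau[1:][do_hop] = 1

    # Open boundaries: injection with rate alpha, extraction with rate beta.
    if tau[0] == 0 and rng.random() < alpha:
        tau[0] = 1
    if tau[-1] == 1 and rng.random() < beta:
        tau[-1] = 0

    if t >= measure_after:
        density += tau

print("mean density left/right of junction:",
      density[:junction].mean() / (steps - measure_after),
      density[junction + 1:].mean() / (steps - measure_after))
```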

  3. Arithmetic coding with constrained carry operations

    NASA Astrophysics Data System (ADS)

    Mahfoodh, Abo-Talib; Said, Amir; Yea, Sehoon

    2015-03-01

    Buffer- or counter-based techniques are adequate for dealing with carry propagation in software implementations of arithmetic coding, but create problems in hardware implementations due to the difficulty of handling worst-case scenarios, defined by very long propagations. We propose a new technique for constraining the carry propagation, similar to "bit-stuffing," but designed for encoders that generate data as bytes instead of individual bits. It is based on the fact that the encoder and decoder can maintain the same state, and both can identify the situations in which it is desirable to limit carry propagation. The new technique adjusts the coding interval in a way that corresponds to coding an unused data symbol, but selected to minimize overhead. Our experimental results demonstrate that the loss in compression can be made very small using regular precision for arithmetic operations.

  4. Constraining condensate dark matter in galaxy clusters

    NASA Astrophysics Data System (ADS)

    de Souza, J. C. C.; Ujevic, M.

    2015-09-01

    We constrain the scattering length parameters of a Bose-Einstein condensate dark matter model using galaxy cluster radii, implementing a method previously applied to galaxies. In the present work, we use a sample of 114 cluster radii in order to obtain the scattering lengths associated with a dark matter particle mass in the range - eV. We obtain scattering lengths that are five orders of magnitude larger than the ones found in the galactic case, even when taking into account the cosmological expansion on the cluster scale by introducing a small cosmological constant. We also construct and compare curves for the orbital velocity of a test particle in the vicinity of a dark matter cluster in both the expanding and the non-expanding cases.

  5. Constraining cosmology with pairwise velocity estimator

    NASA Astrophysics Data System (ADS)

    Ma, Yin-Zhe; Li, Min; He, Ping

    2015-11-01

    In this paper, we develop a full statistical method for the previously proposed pairwise velocity estimator and apply it to the Cosmicflows-2 catalogue to constrain cosmology. We first calculate the covariance matrix of line-of-sight velocities for a given catalogue, simulate mock full-sky surveys from it, and then calculate the variance of the pairwise velocity field. Applying the 8315 independent galaxy samples and the compressed 5224 group samples from the Cosmicflows-2 catalogue to this statistical method, we find that the joint constraint on Ω_m^0.6 h and σ8 is completely consistent with the WMAP 9-year and Planck 2015 best-fitting cosmology. Currently, there is no evidence for modified gravity models or any dynamical dark energy models from this analysis, and the error bars need to be reduced in order to provide concrete evidence against or in support of ΛCDM cosmology.

  6. Statistical mechanics of budget-constrained auctions

    NASA Astrophysics Data System (ADS)

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-07-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.

  7. Constrained least squares estimation incorporating wavefront sensing

    NASA Astrophysics Data System (ADS)

    Ford, Stephen D.; Welsh, Byron M.; Roggemann, Michael C.

    1998-11-01

    We address the optimal processing of astronomical images using the deconvolution from wavefront sensing (DWFS) technique. A constrained least-squares (CLS) solution that incorporates ensemble-averaged DWFS data is derived using Lagrange minimization. The new estimator requires DWFS data, noise statistics, optical transfer function statistics, and a constraint. The constraint can be chosen such that the algorithm selects a conventional regularization constant automatically; no ad hoc parameter tuning is necessary. The algorithm uses an iterative Newton-Raphson minimization to determine the optimal Lagrange multiplier. Computer simulation of a 1 m telescope imaging through atmospheric turbulence is used to test the estimation scheme. CLS object estimates are compared with the corresponding long-exposure images. The CLS algorithm provides images with superior resolution and is computationally inexpensive, converging to a solution in fewer than 10 iterations.
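
    The constraint-based choice of the regularization constant can be illustrated with a generic Fourier-domain CLS restoration: the regularizer gamma is picked so that the data misfit matches the expected noise power (the paper uses a Newton-Raphson search; a simple grid scan is used below for brevity). This is not the DWFS estimator itself, and the PSF, noise level and Laplacian constraint are assumptions.

```python
# Generic constrained least-squares (CLS) image restoration sketch.
import numpy as np

rng = np.random.default_rng(6)
n = 64
obj = np.zeros((n, n)); obj[20:28, 30:44] = 1.0            # toy object

y, x = np.indices((n, n)) - n // 2
psf = np.exp(-(x**2 + y**2) / (2 * 1.5**2)); psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))
sigma = 0.02
img = np.real(np.fft.ifft2(H * np.fft.fft2(obj))) + sigma * rng.normal(size=(n, n))
G = np.fft.fft2(img)

lap = np.zeros((n, n)); lap[0, 0] = -4
lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = 1         # Laplacian constraint operator
C = np.fft.fft2(lap)

def restore(gamma):
    F = np.conj(H) * G / (np.abs(H)**2 + gamma * np.abs(C)**2)
    return np.real(np.fft.ifft2(F))

def misfit(gamma):
    return np.sum((img - np.real(np.fft.ifft2(H * np.fft.fft2(restore(gamma)))))**2)

# Pick gamma whose residual power is closest to the expected noise power n^2*sigma^2.
gammas = np.logspace(-6, 4, 41)
gamma_opt = min(gammas, key=lambda g: abs(misfit(g) - n * n * sigma**2))
estimate = restore(gamma_opt)
print("selected gamma:", gamma_opt,
      "rms error:", np.sqrt(np.mean((estimate - obj)**2)))
```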

  8. Mixed-Strategy Chance Constrained Optimal Control

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.

    2013-01-01

    This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.

  9. Perceived visual speed constrained by image segmentation

    NASA Technical Reports Server (NTRS)

    Verghese, P.; Stone, L. S.

    1996-01-01

    Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.

  10. The asymptotics of large constrained graphs

    NASA Astrophysics Data System (ADS)

    Radin, Charles; Ren, Kui; Sadun, Lorenzo

    2014-05-01

    We show, through local estimates and simulation, that if one constrains simple graphs by their densities ɛ of edges and τ of triangles, then asymptotically (in the number of vertices) for over 95% of the possible range of those densities there is a well-defined typical graph, and it has a very simple structure: the vertices are decomposed into two subsets V1 and V2 of fixed relative size c and 1 - c, and there are well-defined probabilities of edges, gjk, between vj ∈ Vj, and vk ∈ Vk. Furthermore the four parameters c, g11, g22 and g12 are smooth functions of (ɛ, τ) except at two smooth ‘phase transition’ curves.

  11. Traveltime tomography and nonlinear constrained optimization

    SciTech Connect

    Berryman, J.G.

    1988-10-01

    Fermat's principle of least traveltime states that the first arrivals follow ray paths with the smallest overall traveltime from the point of transmission to the point of reception. This principle determines a definite convex set of feasible slowness models - depending only on the traveltime data - for the fully nonlinear traveltime inversion problem. The existence of such a convex set allows us to transform the inversion problem into a nonlinear constrained optimization problem. Fermat's principle also shows that the standard undamped least-squares solution to the inversion problem always produces a slowness model with many ray paths having traveltime shorter than the measured traveltime (an impossibility even if the trial ray paths are not the true ray paths). In a damped least-squares inversion, the damping parameter may be varied to allow efficient location of a slowness model on the feasibility boundary. 13 refs., 1 fig., 1 tab.

  12. Multiplier-continuation algorithms for constrained optimization

    NASA Technical Reports Server (NTRS)

    Lundberg, Bruce N.; Poore, Aubrey B.; Bing, Yang

    1989-01-01

    Several path-following algorithms are described that combine three smooth penalty functions (the quadratic penalty for equality constraints, and the quadratic loss and log barrier for inequality constraints), their modern counterparts (augmented Lagrangian or multiplier methods), sequential quadratic programming, and predictor-corrector continuation. In the first phase of this methodology, one minimizes the unconstrained or linearly constrained penalty function or augmented Lagrangian. A homotopy path generated from the functions is then followed to optimality using efficient predictor-corrector continuation methods. The continuation steps are asymptotic to those taken by sequential quadratic programming, which can be used in the final steps. Numerical test results show the method to be efficient, robust, and a competitive alternative to sequential quadratic programming.

  13. Sampling Motif-Constrained Ensembles of Networks

    NASA Astrophysics Data System (ADS)

    Fischer, Rico; Leitão, Jorge C.; Peixoto, Tiago P.; Altmann, Eduardo G.

    2015-10-01

    The statistical significance of network properties is conditioned on null models that satisfy specified properties but are otherwise random. Exponential random graph models are a principled theoretical framework for generating such constrained ensembles, but they often fail in practice, either due to model inconsistency or due to the impossibility of sampling networks from them. These problems affect the important case of networks with prescribed clustering coefficient or number of small connected subgraphs (motifs). In this Letter we use the Wang-Landau method to obtain a multicanonical sampling that overcomes both of these problems. We sample, in polynomial time, networks with arbitrary degree sequences from ensembles with imposed motif counts. Applying this method to social networks, we investigate the relation between transitivity and homophily, and we quantify the correlation between different types of motifs, finding that single motifs can explain up to 60% of the variation of motif profiles.

  14. A new constrained fixed-point algorithm for ordering independent components

    NASA Astrophysics Data System (ADS)

    Zhang, Hongjuan; Guo, Chonghui; Shi, Zhenwei; Feng, Enmin

    2008-10-01

    Independent component analysis (ICA) aims to recover a set of unknown, mutually independent components (ICs) from their observed mixtures without knowledge of the mixing coefficients. In the classical ICA model there exists an indeterminacy of the ICs with respect to permutation and dilation. Constrained ICA is one method for solving this problem by introducing constraints into the classical ICA model. In this paper we first present a new constrained ICA model composed of three parts: a maximum likelihood criterion as the objective function, statistical measures as inequality constraints, and the normalization of the demixing matrix as equality constraints. Next, we incorporate the new fixed-point (newFP) algorithm into this constrained ICA model to construct a new constrained fixed-point algorithm. Simulations on synthesized signals and speech signals demonstrate that this combination can both eliminate the ICs' indeterminacy to a certain extent and provide better performance. Moreover, comparisons with the existing algorithm further verify the efficiency of our new algorithm and show that it is simpler to implement, since it does not require a learning rate. Finally, the new algorithm is also applied to real-world fetal ECG data; the experimental results further indicate the efficiency of the new constrained fixed-point algorithm.

  15. Fast algorithm for the solution of large-scale non-negativity constrained least squares problems.

    SciTech Connect

    Van Benthem, Mark Hilary; Keenan, Michael Robert

    2004-06-01

    Algorithms for multivariate image analysis and other large-scale applications of multivariate curve resolution (MCR) typically employ constrained alternating least squares (ALS) procedures in their solution. The solution to a least squares problem under general linear equality and inequality constraints can be reduced to the solution of a non-negativity-constrained least squares (NNLS) problem. Thus the efficiency of the solution to any constrained least squares problem rests heavily on the underlying NNLS algorithm. We present a new NNLS solution algorithm that is appropriate to large-scale MCR and other ALS applications. Our new algorithm rearranges the calculations in the standard active-set NNLS method on the basis of combinatorial reasoning. This rearrangement serves to substantially reduce the computational burden required for NNLS problems having large numbers of observation vectors.
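
    For orientation, the column-by-column NNLS sub-problem that such MCR-ALS schemes rely on can be run directly with SciPy's active-set solver, as in the short example below; the paper's contribution is a combinatorial rearrangement that amortizes this work over many observation vectors, which is not reproduced here.

```python
# Column-by-column NNLS: concentrations constrained to be non-negative.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
n_channels, n_components, n_pixels = 100, 3, 500

S = np.abs(rng.normal(size=(n_channels, n_components)))      # pure-component spectra
C_true = np.abs(rng.normal(size=(n_components, n_pixels)))    # true concentrations
D = S @ C_true + 0.01 * rng.normal(size=(n_channels, n_pixels))

C_est = np.column_stack([nnls(S, D[:, j])[0] for j in range(n_pixels)])
print("max abs error:", np.abs(C_est - C_true).max())
```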

  16. Newton's method for large bound-constrained optimization problems.

    SciTech Connect

    Lin, C.-J.; More, J. J.; Mathematics and Computer Science

    1999-01-01

    We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
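
    A small bound-constrained example in the same spirit, solved here with SciPy's trust-region solver rather than the authors' implementation, is shown below; the objective and bounds are illustrative.

```python
# Bound-constrained minimization of the Rosenbrock function.
import numpy as np
from scipy.optimize import minimize, Bounds

def rosen(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1 - x[0])**2

def rosen_grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

bounds = Bounds([-2.0, -2.0], [0.5, 0.5])     # the unconstrained minimum (1, 1) is infeasible
res = minimize(rosen, x0=np.array([-1.5, -1.0]), jac=rosen_grad,
               method="trust-constr", bounds=bounds)
print(res.x, res.fun)     # the solution lands on the active bound x[0] = 0.5
```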

  17. Lithium in halo stars - Constraining the effects of helium diffusion on globular cluster ages and cosmology

    NASA Technical Reports Server (NTRS)

    Deliyannis, Constantine P.; Demarque, Pierre

    1991-01-01

    Stellar evolutionary models with diffusion are used to show that observations of lithium in extreme halo stars provide crucial constraints on the magnitude of the effects of helium diffusion. The flatness of the observed Li-T(eff) relation severely constrains diffusion Li isochrones, which tend to curve downward toward higher T(eff). It is argued that Li observations at the hot edge of the plateau are particularly important in constraining the effects of helium diffusion; yet, they are currently few in number. It is proposed that additional observations are required there, as well as below 5500 K, to define more securely the morphology of the halo Li abundances. Implications for the primordial Li abundance are considered. It is suggested that a conservative upper limit to the initial Li abundance, due to diffusive effects alone, is 2.35.

  18. Constraining Upper Mantle Azimuthal Anisotropy With Free Oscillation Data (Invited)

    NASA Astrophysics Data System (ADS)

    Beghein, C.; Resovsky, J. S.; van der Hilst, R. D.

    2009-12-01

    We investigate the potential of Earth's free oscillation coupled modes as a tool to constrain large-scale seismic anisotropy in the transition zone and in the bulk of the lower mantle. While the presence of seismic anisotropy is widely documented in the uppermost and lowermost mantle, its observation at intermediate depths remains a formidable challenge. We show that several coupled modes of oscillation are sensitive to radial and azimuthal anisotropy throughout the mantle. In particular, modes of the type 0S_l-0T_(l+1) have high sensitivity to shear-wave radial anisotropy and to six elastic parameters describing azimuthal anisotropy in the 200 km-1000 km depth range. The use of such data thus enables us to extend the sensitivity of traditionally used fundamental-mode surface waves to depths corresponding to the transition zone and the top of the lower mantle. In addition, these modes have the potential to provide new and unique constraints on several elastic parameters to which surface waves are not sensitive. We attempted to fit degree-two splitting measurements of 0S_l-0T_(l+1) coupled modes using previously published isotropic and transversely isotropic mantle models, but we could not explain the entire signal. We then explored the model space with a forward modeling approach and determined that, after correction for the effect of the crust and mantle radial anisotropy, the remaining signal can be explained by the presence of azimuthal anisotropy in the upper mantle. When we allow the azimuthal anisotropy to extend below 400 km depth, the data fit is slightly better and the model space search leads to a better-resolved model than when we force the anisotropy to lie in the top 400 km of the mantle. Its depth extent and distribution are, however, still not well constrained by the data due to parameter tradeoffs and a limited coupled-mode data set. It is thus clear that mode coupling measurements have the potential to constrain upper-mantle azimuthal anisotropy

  19. Magma chamber dynamics constrained by crystal isotope stratigraphy

    NASA Astrophysics Data System (ADS)

    Davidson, J. P.; Tepley, F. J., III; Hora, J. M.

    2003-04-01

    Geochemical observations from many volcanic systems suggest that the crystalline components are mechanical aggregates of crystals grown in different places and times in the magma storage and delivery system. Indeed, the occurrence of true phenocrysts in volcanic rocks is arguably rare. Repeated remobilization of crystalline material by recharge magma indicates that the solids exist in a sufficiently weak state (as a crystal mush or framework) that an injection of magma can cause disaggregation. This in turn limits the cooling time available for solidification between recharge episodes. The potential now exists for diffusional treatment of trace element and isotopic profiles from mineral phases to constrain effective residence times, and thereby determine crystallization and differentiation rates.

  20. Constraining t-T conditions during palaeoseismic events - constraining the viscous brake phenomena in nature.

    NASA Astrophysics Data System (ADS)

    Dobson, Katherine J.; Kirkpatrick, James D.; Mark, Darren F.; Stuart, Finlay M.

    2010-05-01

    Observations of the fault rock assemblage indicate that the pseudotachylytes formed at temperatures of < 300°C; the depth of formation, and therefore the normal stress, are poorly constrained. In this study we exploit the relationship between the normal stress and the mass (i.e. thickness) of the rocks above the earthquake. We present data from standard thermochronological techniques (Ar/Ar, apatite and zircon (U-Th)/He, and apatite fission track) applied to a vertical profile through the pseudotachylyte-bearing granite. This enables the complete time-temperature cooling path of the host rock to be determined and the geothermal gradient to be assessed, which in turn allows us to calculate the depth at which rupture occurred. We use these results to test the hypothesis that the Sierra Nevada pseudotachylyte acted as a viscous brake. This will ultimately improve understanding of earthquake ruptures by identifying an intrinsic control on the magnitude of earthquakes. References: 1. Di Toro et al., 2006, Science 311, 647-649. 2. Fialko & Khazan, 2005, J. Geophys. Res. 110, B12407.

  1. Eulerian Formulation of Spatially Constrained Elastic Rods

    NASA Astrophysics Data System (ADS)

    Huynen, Alexandre

    Slender elastic rods are ubiquitous in nature and technology. For a vast majority of applications, the rod deflection is restricted by an external constraint and a significant part of the elastic body is in contact with a stiff constraining surface. The research work presented in this doctoral dissertation formulates a computational model for the solution of elastic rods constrained inside or around frictionless tube-like surfaces. The segmentation strategy adopted to cope with this complex class of problems consists of partitioning the global problem into comparatively simpler elementary problems that are either in continuous contact with the constraint or contact-free between their extremities. Within the conventional Lagrangian formulation of elastic rods, this approach is, however, associated with two major drawbacks. First, the boundary conditions specifying the locations of the rod centerline at both extremities of each elementary problem lead to isoperimetric constraints, i.e., integral constraints on the unknown length of the rod. Second, the assessment of the unilateral contact condition requires, in principle, the comparison of two curves parametrized by distinct curvilinear coordinates, viz. the rod centerline and the constraint axis. Both conspire to burden the computations associated with the method. To streamline the solution of the elementary problems and rationalize the assessment of the unilateral contact condition, the rod governing equations are reformulated within the Eulerian framework of the constraint. The methodical exploration of both types of elementary problems leads to specific formulations of the rod governing equations that stress the profound connection between the mechanics of the rod and the geometry of the constraint surface. The proposed Eulerian reformulation, which restates the rod local equilibrium in terms of the curvilinear coordinate associated with the constraint axis, describes the rod deformed configuration

  2. Asynchronous parallel generating set search for linearly-constrained optimization.

    SciTech Connect

    Lewis, Robert Michael; Griffin, Joshua D.; Kolda, Tamara Gibson

    2006-08-01

    Generating set search (GSS) is a family of direct search methods that encompasses generalized pattern search and related methods. We describe an algorithm for asynchronous linearly-constrained GSS, which has some complexities that make it different from both the asynchronous bound-constrained case and the synchronous linearly-constrained case. The algorithm has been implemented in the APPSPACK software framework, and we present results from an extensive numerical study using CUTEr test problems. We discuss the results, both positive and negative, and conclude that GSS is a reliable method for solving small-to-medium sized linearly-constrained optimization problems without derivatives.

  3. Explaining evolution via constrained persistent perfect phylogeny

    PubMed Central

    2014-01-01

    Background: The perfect phylogeny is an often-used model in phylogenetics since it provides an efficient basic procedure for representing the evolution of genomic binary characters in several frameworks, such as, for example, haplotype inference. The model, which is conceptually the simplest, is based on the infinite sites assumption, that is, no character can mutate more than once in the whole tree. A main open problem regarding the model is finding generalizations that retain the computational tractability of the original model but are more flexible in modeling biological data when the infinite sites assumption is violated because of, for example, back mutations. A special case of back mutation that has been considered in the study of the evolution of protein domains (where a domain is acquired and then lost) is persistency, that is, the fact that a character is allowed to return to the ancestral state. In this model characters can be gained and lost at most once. In this paper we consider the computational problem of explaining binary data by the Persistent Perfect Phylogeny model (referred to as PPP) and for this purpose we investigate the problem of reconstructing an evolution where some constraints are imposed on the paths of the tree. Results: We define a natural generalization of the PPP problem obtained by requiring that for some pairs (character, species), neither the species nor any of its ancestors can have the character. In other words, some characters cannot be persistent for some species. This new problem is called Constrained PPP (CPPP). Based on a graph formulation of the CPPP problem, we are able to provide a polynomial time solution for the CPPP problem for matrices whose conflict graph has no edges. Using this result, we develop a parameterized algorithm for solving the CPPP problem where the parameter is the number of characters. Conclusions: A preliminary experimental analysis shows that the constrained persistent perfect phylogeny model allows to

  4. Constraining decaying dark matter with Fermi LAT gamma-rays

    SciTech Connect

    Zhang, Le; Sigl, Günter; Weniger, Christoph; Maccione, Luca; Redondo, Javier E-mail: christoph.weniger@desy.de E-mail: redondo@mppmm.mpg.de

    2010-06-01

    High energy electrons and positrons from decaying dark matter can produce a significant flux of gamma rays by inverse Compton scattering off low energy photons in the interstellar radiation field. This possibility is inevitably related to the dark matter interpretation of the observed PAMELA and FERMI excesses. The aim of this paper is to provide a simple and universal method to constrain dark matter models which produce electrons and positrons in their decay, using the Fermi LAT gamma-ray observations in the energy range between 0.5 GeV and 300 GeV. We provide a set of universal response functions that, once convolved with a specific dark matter model, produce the desired constraints. Our response functions contain all the astrophysical inputs such as the electron propagation in the galaxy, the dark matter profile, the gamma-ray fluxes of known origin, and the Fermi LAT data. We study the uncertainties in the determination of the response functions and apply them to place constraints on some specific dark matter decay models that can fit well the positron and electron fluxes observed by PAMELA and Fermi LAT. To this end we also take into account prompt radiation from the dark matter decay. We find that with the available data decaying dark matter cannot be excluded as the source of the PAMELA positron excess.

  5. Constraining scalar fields with stellar kinematics and collisional dark matter

    SciTech Connect

    Amaro-Seoane, Pau; Barranco, Juan; Bernal, Argelia; Rezzolla, Luciano E-mail: jbarranc@aei.mpg.de E-mail: rezzolla@aei.mpg.de

    2010-11-01

    The existence and detection of scalar fields could provide solutions to long-standing puzzles about the nature of dark matter, the dark compact objects at the centre of most galaxies, and other phenomena. Yet, self-interacting scalar fields are very poorly constrained by astronomical observations, leading to great uncertainties in estimates of the mass m{sub φ} and the self-interacting coupling constant λ of these fields. To counter this, we have systematically employed available astronomical observations to develop new constraints, considerably restricting this parameter space. In particular, by exploiting precise observations of stellar dynamics at the centre of our Galaxy and assuming that these dynamics can be explained by a single boson star, we determine an upper limit for the boson star compactness and impose significant limits on the values of the properties of possible scalar fields. Requiring the scalar field particle to follow a collisional dark matter model further narrows these constraints. Most importantly, we find that if a scalar dark matter particle does exist, then it cannot account for both the dark-matter halos and the existence of dark compact objects in galactic nuclei.

  6. Constrained Supersymmetric Flipped SU(5) GUT Phenomenology

    SciTech Connect

    Ellis, John; Mustafayev, Azar; Olive, Keith A.; /Minnesota U., Theor. Phys. Inst. /Minnesota U. /Stanford U., Phys. Dept. /SLAC

    2011-08-12

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M_in, above the GUT scale, M_GUT. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino χ and the lighter stau τ̃_1 is sensitive to M_in, as is the relationship between m_χ and the masses of the heavier Higgs bosons A, H. For these reasons, prominent features in generic (m_1/2, m_0) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M_in, as we illustrate for several cases with tan β = 10 and 55. However, these features do not necessarily disappear at large M_in, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses.

  7. Optimization of constrained density functional theory

    NASA Astrophysics Data System (ADS)

    O'Regan, David D.; Teobaldi, Gilberto

    2016-07-01

    Constrained density functional theory (cDFT) is a versatile electronic structure method that enables ground-state calculations to be performed subject to physical constraints. It thereby broadens their applicability and utility. Automated Lagrange multiplier optimization is necessary for multiple constraints to be applied efficiently in cDFT, for it to be used in tandem with geometry optimization, or with molecular dynamics. In order to facilitate this, we comprehensively develop the connection between cDFT energy derivatives and response functions, providing a rigorous assessment of the uniqueness and character of cDFT stationary points while accounting for electronic interactions and screening. In particular, we provide a nonperturbative proof that stable stationary points of linear density constraints occur only at energy maxima with respect to their Lagrange multipliers. We show that multiple solutions, hysteresis, and energy discontinuities may occur in cDFT. Expressions are derived, in terms of convenient by-products of cDFT optimization, for quantities such as the dielectric function and a condition number quantifying ill definition in multiple constraint cDFT.

  8. Constrained filter optimization for subsurface landmine detection

    NASA Astrophysics Data System (ADS)

    Torrione, Peter A.; Collins, Leslie; Clodfelter, Fred; Lulich, Dan; Patrikar, Ajay; Howard, Peter; Weaver, Richard; Rosen, Erik

    2006-05-01

    Previous large-scale blind tests of anti-tank landmine detection utilizing the NIITEK ground penetrating radar indicated the potential for very high anti-tank landmine detection probabilities at very low false alarm rates for algorithms based on adaptive background cancellation schemes. Recent data collections under more heterogeneous, multi-layered road scenarios indicate that although adaptive solutions to background cancellation are effective, the adaptive solutions obtained under different road conditions can differ significantly, and misapplication of these solutions can reduce landmine detection performance in terms of PD/FAR. In this work we present a framework for the constrained optimization of background-estimation filters that specifically seeks to optimize PD/FAR performance as measured by the area under the ROC curve between two FARs. We also consider the application of genetic algorithms to the problem of filter optimization for landmine detection. Results indicate robust performance for both static and adaptive background cancellation schemes, and possible real-world advantages and disadvantages of static and adaptive approaches are discussed.

  9. Constrained bounds on measures of entanglement

    SciTech Connect

    Datta, Animesh; Flammia, Steven T.; Shaji, Anil; Caves, Carlton M.

    2007-06-15

    Entanglement measures constructed from two positive, but not completely positive, maps on density operators are used as constraints in placing bounds on the entanglement of formation, the tangle, and the concurrence of 4N mixed states. The maps are the partial transpose map and the phi map introduced by Breuer [H.-P. Breuer, Phys. Rev. Lett. 97, 080501 (2006)]. The norm-based entanglement measures constructed from these two maps, called negativity and phi negativity, respectively, lead to two sets of bounds on the entanglement of formation, the tangle, and the concurrence. We compare these bounds and identify the sets of 4N density operators for which the bounds from one constraint are better than the bounds from the other. In the process, we present a derivation of the already known bound on the concurrence based on the negativity. We compute bounds on the three measures of entanglement using both the constraints simultaneously. We demonstrate how such doubly constrained bounds can be constructed. We discuss extensions of our results to bipartite states of higher dimensions and with more than two constraints.

  10. Constraining the oblateness of Kepler planets

    SciTech Connect

    Zhu, Wei; Huang, Chelsea X.; Zhou, George; Lin, D. N. C.

    2014-11-20

    We use Kepler short-cadence light curves to constrain the oblateness of planet candidates in the Kepler sample. The transits of rapidly rotating planets that are deformed in shape will lead to distortions in the ingress and egress of their light curves. We report the first tentative detection of an oblate planet outside the solar system, measuring an oblateness of 0.22 ± 0.11 for the 18 M_J brown dwarf Kepler 39b (KOI 423.01). We also provide constraints on the oblateness of the planets (candidates) HAT-P-7b, KOI 686.01, and KOI 197.01 to be <0.067, <0.251, and <0.186, respectively. Using the Q' values from Jupiter and Saturn, we expect tidal synchronization for the spins of HAT-P-7b, KOI 686.01, and KOI 197.01, and for their rotational oblateness signatures to be undetectable in the current data. The potentially large oblateness of KOI 423.01 (Kepler 39b) suggests that the Q' value of the brown dwarf needs to be two orders of magnitude larger than that of the solar system gas giants to avoid being tidally spun down.

  11. Constraining the Properties of Cold Interstellar Clouds

    NASA Astrophysics Data System (ADS)

    Spraggs, Mary Elizabeth; Gibson, Steven J.

    2016-01-01

    Since the interstellar medium (ISM) plays an integral role in star formation and galactic structure, it is important to understand the evolution of clouds over time, including the processes of cooling and condensation that lead to the formation of new stars. This work aims to constrain and better understand the physical properties of the cold ISM by utilizing large surveys of neutral atomic hydrogen (HI) 21cm spectral line emission and absorption, carbon monoxide (CO) 2.6mm line emission, and multi-band infrared dust thermal continuum emission. We identify areas where the gas may be cooling and forming molecules using HI self-absorption (HISA), in which cold foreground HI absorbs radiation from warmer background HI emission. We are developing an algorithm that uses total gas column densities inferred from Planck and other FIR/sub-mm data in parallel with CO and HISA spectral line data to determine the gas temperature, density, molecular abundance, and other properties as functions of position. We can then map these properties to study their variation throughout an individual cloud as well as any dependencies on location or environment within the Galaxy. Funding for this work was provided by the National Science Foundation, the NASA Kentucky Space Grant Consortium, the WKU Ogden College of Science and Engineering, and the Carol Martin Gatton Academy for Mathematics and Science in Kentucky.

  12. Computational studies of spatially constrained DNA

    SciTech Connect

    Olson, W.K.; Westcott, T.P.; Liu, Guo-Hua

    1996-12-31

    Closed loops of double stranded DNA are ubiquitous in nature, occurring in systems ranging from plasmids, bacterial chromosomes, and many viral genomes, which form single closed loops, to eukaryotic chromosomes and other linear DNAs, which appear to be organized into topologically constrained domains by DNA-binding proteins. The topological constraints in the latter systems are determined by the spacing of the bound proteins along the contour of the double helix along with the imposed turns and twists of DNA in the intermolecular complexes. As long as the duplex remains unbroken, the linking number Lk, or number of times the two strands of the DNA wrap around one another, is conserved. If one of the strands is nicked and later re-sealed, the change in overall folding that accompanies DNA-protein interactions leads to a change in Lk. The supercoiling brought about by such protein action, in turn, determines a number of key biological events, including replication, transcription, and recombination. 51 refs., 5 figs., 1 tab.

  13. FPGA design for constrained energy minimization

    NASA Astrophysics Data System (ADS)

    Wang, Jianwei; Chang, Chein-I.; Cao, Mang

    2004-02-01

    The Constrained Energy Minimization (CEM) has been widely used for hyperspectral detection and classification. The feasibility of implementing the CEM as a real-time processing algorithm in systolic arrays has also been demonstrated. The main challenge of realizing the CEM in hardware architecture is the computation of the inverse of the data correlation matrix performed in the CEM, which requires a complete set of data samples. In order to cope with this problem, the data correlation matrix must be calculated in a causal manner, using only the data samples received up to the sample being processed. This paper presents a Field Programmable Gate Array (FPGA) design of such a causal CEM. The main feature of the proposed FPGA design is the use of the COordinate Rotation DIgital Computer (CORDIC) algorithm, which can convert a Givens rotation of a vector to a set of shift-add operations. As a result, the CORDIC algorithm can be easily implemented in hardware architecture, and therefore in FPGA. Since the computation of the inverse of the data correlation matrix involves a series of Givens rotations, the use of the CORDIC algorithm allows the causal CEM to perform real-time processing in FPGA. In this paper, an FPGA implementation of the causal CEM is studied and its detailed architecture is described.
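    To make the causality requirement concrete, the following is a minimal software sketch (not the FPGA/CORDIC implementation described in the record) of a causal CEM detector: the correlation matrix is accumulated sample by sample, so the filter applied at pixel n uses only the first n samples. The array shapes and the use of a pseudo-inverse are illustrative assumptions.

```python
import numpy as np

def causal_cem(X, d):
    """Causal Constrained Energy Minimization detector (illustrative sketch).

    X : (num_pixels, num_bands) hyperspectral samples in processing order
    d : (num_bands,) target signature
    Returns the detector output per pixel using only the samples seen so far.
    """
    num_pixels, num_bands = X.shape
    R = np.zeros((num_bands, num_bands))
    out = np.zeros(num_pixels)
    for n in range(num_pixels):
        x = X[n]
        R += np.outer(x, x)                   # running (unnormalized) sample correlation
        R_inv = np.linalg.pinv(R / (n + 1))   # causal correlation matrix inverse
        w = R_inv @ d / (d @ R_inv @ d)       # CEM weights: min w^T R w  s.t.  w^T d = 1
        out[n] = w @ x
    return out
```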

  14. Constraining Particle Sizes of Saturn's F Ring

    NASA Astrophysics Data System (ADS)

    Becker, T. M.; Colwell, J.; Esposito, L. W.

    2011-12-01

    Saturn's beauty is often attributed to the magnificent rings that encircle the planet. Although admired for hundreds of years, we are now just beginning to understand the complexity of the rings as a result of new data from the Cassini orbiter. Studying occultations of the rings provides information about the distribution and sizes of the particles that define the rings. During one solar occultation, the Ultraviolet Imaging Spectrograph (UVIS) on board Cassini was slightly misaligned with the Sun, decreasing the amount of direct solar signal to ~1% of the normal value. As a result, UVIS detected a peak in photon counts above the non-occulted signal due to forward-scattered light diffracted by the small particles in the F Ring. There is a direct relationship between the size of the particles and the intensity of the light scattered. We utilize this relationship in a model that replicates the misalignment and calculates the amount of light that would be detected as a function of the particle sizes in the ring. We present new results from the model that constrain the size distribution of the dynamically active F Ring, contributing to the study of the origin and evolution of Saturn's ring system.

  15. Dynamic Nuclear Polarization as Kinetically Constrained Diffusion

    NASA Astrophysics Data System (ADS)

    Karabanov, A.; Wiśniewski, D.; Lesanovsky, I.; Köckenberger, W.

    2015-07-01

    Dynamic nuclear polarization (DNP) is a promising strategy for generating a significantly increased nonthermal spin polarization in nuclear magnetic resonance (NMR) and its applications that range from medicine diagnostics to material science. Being a genuine nonequilibrium effect, DNP circumvents the need for strong magnetic fields. However, despite intense research, a detailed theoretical understanding of the precise mechanism behind DNP is currently lacking. We address this issue by focusing on a simple instance of DNP—so-called solid effect DNP—which is formulated in terms of a quantum central spin model where a single electron is coupled to an ensemble of interacting nuclei. We show analytically that the nonequilibrium buildup of polarization heavily relies on a mechanism which can be interpreted as kinetically constrained diffusion. Beyond revealing this insight, our approach furthermore permits numerical studies of ensembles containing thousands of spins that are typically intractable when formulated in terms of a quantum master equation. We believe that this represents an important step forward in the quest of harnessing nonequilibrium many-body quantum physics for technological applications.

  16. Acoustic characteristics of listener-constrained speech

    NASA Astrophysics Data System (ADS)

    Ashby, Simone; Cummins, Fred

    2003-04-01

    Relatively little is known about the acoustical modifications speakers employ to meet the various constraints (auditory, linguistic, and otherwise) of their listeners. Similarly, the manner by which perceived listener constraints interact with speakers' adoption of specialized speech registers is poorly understood. Hyper- and Hypo-speech (H&H) theory offers a framework for examining the relationship between speech production and output-oriented goals for communication, suggesting that under certain circumstances speakers may attempt to minimize phonetic ambiguity by employing a "hyperarticulated" speaking style (Lindblom, 1990). It remains unclear, however, what the acoustic correlates of hyperarticulated speech are, and how, if at all, we might expect phonetic properties to change with respect to different listener-constrained conditions. This paper is part of a preliminary investigation concerned with comparing the prosodic characteristics of speech produced across a range of listener constraints. Analyses are drawn from a corpus of read hyperarticulated speech data comprising eight adult, female speakers of English. Specialized registers include speech to foreigners, infant-directed speech, speech produced under noisy conditions, and human-machine interaction. The authors gratefully acknowledge financial support of the Irish Higher Education Authority, allocated to Fred Cummins for collaborative work with Media Lab Europe.

  17. Constraining inflation with future galaxy redshift surveys

    SciTech Connect

    Huang, Zhiqi; Vernizzi, Filippo; Verde, Licia E-mail: liciaverde@icc.ub.edu

    2012-04-01

    With future galaxy surveys, a huge number of Fourier modes of the distribution of the large scale structures in the Universe will become available. These modes are complementary to those of the CMB and can be used to set constraints on models of the early universe, such as inflation. Using a MCMC analysis, we compare the power of the CMB with that of the combination of CMB and galaxy survey data, to constrain the power spectrum of primordial fluctuations generated during inflation. We base our analysis on the Planck satellite and a spectroscopic redshift survey with configuration parameters close to those of the Euclid mission as examples. We first consider models of slow-roll inflation, and show that the inclusion of large scale structure data improves the constraints by nearly halving the error bars on the scalar spectral index and its running. If we attempt to reconstruct the inflationary single-field potential, a similar conclusion can be reached on the parameters characterizing the potential. We then study models with features in the power spectrum. In particular, we consider ringing features produced by a break in the potential and oscillations such as in axion monodromy. Adding large scale structures improves the constraints on features by more than a factor of two. In axion monodromy we show that there are oscillations with small amplitude and frequency in momentum space that are undetected by CMB alone but can be measured by including galaxy surveys in the analysis.

  18. Scheduling Aircraft Landings under Constrained Position Shifting

    NASA Technical Reports Server (NTRS)

    Balakrishnan, Hamsa; Chandran, Bala

    2006-01-01

    Optimal scheduling of airport runway operations can play an important role in improving the safety and efficiency of the National Airspace System (NAS). Methods that compute the optimal landing sequence and landing times of aircraft must accommodate practical issues that affect the implementation of the schedule. One such practical consideration, known as Constrained Position Shifting (CPS), is the restriction that each aircraft must land within a pre-specified number of positions of its place in the First-Come-First-Served (FCFS) sequence. We consider the problem of scheduling landings of aircraft in a CPS environment in order to maximize runway throughput (minimize the completion time of the landing sequence), subject to operational constraints such as FAA-specified minimum inter-arrival spacing restrictions, precedence relationships among aircraft that arise either from airline preferences or air traffic control procedures that prevent overtaking, and time windows (representing possible control actions) during which each aircraft landing can occur. We present a Dynamic Programming-based approach that scales linearly in the number of aircraft, and describe our computational experience with a prototype implementation on realistic data for Denver International Airport.
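    The sketch below illustrates the CPS idea on a toy scale: each aircraft may land at most k positions away from its FCFS slot, and the makespan is minimized subject to pairwise separation times. It is a memoized enumeration that is exponential in the worst case, not the linear-scaling network DP of the record; the separation matrix and example values are hypothetical.

```python
from functools import lru_cache

def cps_min_makespan(sep, k):
    """Minimum-makespan landing sequence under Constrained Position Shifting (toy sketch).

    sep[i][j] : required time separation when aircraft j lands immediately after i
    k         : maximum number of positions an aircraft may shift from its
                FCFS position (its index in `sep`)
    """
    n = len(sep)

    @lru_cache(maxsize=None)
    def best(landed, last):
        pos = bin(landed).count("1")              # next landing position to fill
        if pos == n:
            return 0.0, ()
        result = (float("inf"), None)
        for j in range(n):
            if landed >> j & 1 or abs(j - pos) > k:
                continue                          # already landed, or outside the CPS window
            step = sep[last][j] if last >= 0 else 0.0
            tail_cost, tail_seq = best(landed | (1 << j), j)
            if tail_seq is not None and step + tail_cost < result[0]:
                result = (step + tail_cost, (j,) + tail_seq)
        return result

    return best(0, -1)

# Hypothetical example: three aircraft in FCFS order 0, 1, 2, at most one position of shift.
sep = [[0, 90, 120], [60, 0, 90], [60, 72, 0]]
print(cps_min_makespan(sep, k=1))
```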

  19. Autonomy, constraining options, and organ sales.

    PubMed

    Taylor, James Stacey

    2002-01-01

    Although there continues to be a chronic shortage of transplant organs the suggestion that we should try to alleviate it through allowing a current market in them continues to be morally condemned, usually on the grounds that such a market would undermine the autonomy of those who would participate in it as vendors. Against this objection Gerald Dworkin has argued that such markets would enhance the autonomy of the vendors through providing them with more options, thus enabling them to exercise a greater degree of control over their bodies. Paul Hughes and T.L. Zutlevics have recently criticized Dworkin's argument, arguing that the option to sell an organ is unusual in that it is an autonomy-undermining "constraining option" whose presence in a person's choice set is likely to undermine her autonomy rather than enhance it. I argue that although Hughes' and Zutlevics' arguments are both innovative and persuasive they are seriously flawed--and that allowing a market in human organs is more likely to enhance vendor autonomy than diminish it. Thus, given that autonomy is the preeminent value in contemporary medical ethics this provides a strong prima facie case for recognizing the moral legitimacy of such markets. PMID:12747360

  20. Testing constrained sequential dominance models of neutrinos

    NASA Astrophysics Data System (ADS)

    Björkeroth, Fredrik; King, Stephen F.

    2015-12-01

    Constrained sequential dominance (CSD) is a natural framework for implementing the see-saw mechanism of neutrino masses which allows the mixing angles and phases to be accurately predicted in terms of relatively few input parameters. We analyze a class of CSD(n) models where, in the flavour basis, two right-handed neutrinos are dominantly responsible for the 'atmospheric' and 'solar' neutrino masses with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0, 1, 1) and (1, n, n−2), respectively, where n is a positive integer. These coupling patterns may arise in indirect family symmetry models based on A4. With two right-handed neutrinos, using a χ² test, we find a good agreement with data for CSD(3) and CSD(4), where the entire Pontecorvo-Maki-Nakagawa-Sakata mixing matrix is controlled by a single phase η, which takes simple values, leading to accurate predictions for mixing angles and the magnitude of the oscillation phase |δ_CP|. We carefully study the perturbing effect of a third 'decoupled' right-handed neutrino, leading to a bound on the lightest physical neutrino mass m_1 ≲ 1 meV for the viable cases, corresponding to a normal neutrino mass hierarchy. We also discuss a direct link between the oscillation phase δ_CP and leptogenesis in CSD(n) due to the same see-saw phase η appearing in both the neutrino mass matrix and leptogenesis.
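    As a hedged illustration of how the quoted coupling patterns enter, the light neutrino mass matrix in a two-right-handed-neutrino CSD(n) see-saw takes the rank-two form below; the overall mass scales m_a, m_b and the exact normalization are assumptions here, while η is the single see-saw phase referred to in the abstract.

```latex
% Light neutrino mass matrix built from the quoted Yukawa patterns
% (0,1,1) and (1,n,n-2) in the flavour basis (illustrative conventions).
\[
  m^{\nu} \;=\; m_a
  \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}
  \begin{pmatrix} 0 & 1 & 1 \end{pmatrix}
  \;+\;
  m_b\, e^{i\eta}
  \begin{pmatrix} 1 \\ n \\ n-2 \end{pmatrix}
  \begin{pmatrix} 1 & n & n-2 \end{pmatrix}.
\]
```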

  1. Constrained Graph Optimization: Interdiction and Preservation Problems

    SciTech Connect

    Schild, Aaron V

    2012-07-30

    The maximum flow, shortest path, and maximum matching problems are a set of basic graph problems that are critical in theoretical computer science and applications. Constrained graph optimization, a variation of these basic graph problems involving modification of the underlying graph, is equally important but sometimes significantly harder. In particular, one can explore these optimization problems with additional cost constraints. In the preservation case, the optimizer has a budget to preserve vertices or edges of a graph, preventing them from being deleted. The optimizer wants to find the best set of preserved edges/vertices in which the cost constraints are satisfied and the basic graph problems are optimized. For example, in shortest path preservation, the optimizer wants to find a set of edges/vertices within which the shortest path between two predetermined points is smallest. In interdiction problems, one deletes vertices or edges from the graph with a particular cost in order to impede the basic graph problems as much as possible (for example, delete edges/vertices to maximize the shortest path between two predetermined vertices). Applications of preservation problems include optimal road maintenance, power grid maintenance, and job scheduling, while interdiction problems are related to drug trafficking prevention, network stability assessment, and counterterrorism. Computational hardness results are presented, along with heuristic methods for approximating solutions to the matching interdiction problem. Also, efficient algorithms are presented for special cases of graphs, including planar graphs. The graphs in many of the listed applications are planar, so these algorithms have important practical implications.

  2. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for satisfiability (SAT) constraint satisfaction problems and for unconstrained minimization of NK functions.

  3. Joint Chance-Constrained Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
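    A toy finite-state sketch of the dual approach described above: for a fixed multiplier, the indicator-penalized cost is minimized by ordinary backward DP, and the multiplier is then adjusted by a root-finding loop (plain bisection here, standing in for the record's exponentially converging scheme) until the resulting risk meets the bound. The model arrays, the greedy-policy risk recursion, and the use of an expected-unsafe-visit count as a Boole-type bound on the joint failure probability are all illustrative assumptions.

```python
import numpy as np

def chance_constrained_dp(P, cost, unsafe, horizon, risk_bound, lam_max=1e3, iters=50):
    """Toy joint chance-constrained DP via dualization (illustrative sketch).

    P          : list of (S, S) transition matrices, one per action
    cost       : (S, A) one-stage costs
    unsafe     : boolean (S,) flags for the infeasible region
    risk_bound : bound on the expected number of unsafe visits from state 0
                 (an upper bound on the joint failure probability)
    """
    S, A = cost.shape
    penalty = unsafe.astype(float)

    def dp(lam):
        V = np.zeros(S)          # optimal dualized cost-to-go
        R = np.zeros(S)          # expected unsafe visits under the greedy policy
        for _ in range(horizon):
            Q = cost + lam * penalty[:, None] + np.stack([P[a] @ V for a in range(A)], axis=1)
            pi = Q.argmin(axis=1)
            V = Q[np.arange(S), pi]
            R = penalty + np.array([P[pi[s]][s] @ R for s in range(S)])
        return V, R

    lo, hi = 0.0, lam_max
    for _ in range(iters):       # bisection on the dual variable
        lam = 0.5 * (lo + hi)
        _, R = dp(lam)
        lo, hi = (lam, hi) if R[0] > risk_bound else (lo, lam)
    return dp(hi)                # value and risk at the (approximately) optimal multiplier
```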

  4. Constrained spheroids for prolonged hepatocyte culture.

    PubMed

    Tong, Wen Hao; Fang, Yu; Yan, Jie; Hong, Xin; Hari Singh, Nisha; Wang, Shu Rui; Nugraha, Bramasta; Xia, Lei; Fong, Eliza Li Shan; Iliescu, Ciprian; Yu, Hanry

    2016-02-01

    Liver-specific functions in primary hepatocytes can be maintained over extended duration in vitro using spheroid culture. However, the undesired loss of cells over time is still a major unaddressed problem, which consequently generates large variations in downstream assays such as drug screening. In static culture, the turbulence generated by medium change can cause spheroids to detach from the culture substrate. Under perfusion, the momentum generated by Stokes force similarly results in spheroid detachment. To overcome this problem, we developed a Constrained Spheroids (CS) culture system that immobilizes spheroids between a glass coverslip and an ultra-thin porous Parylene C membrane, both surface-modified with poly(ethylene glycol) and galactose ligands for optimum spheroid formation and maintenance. In this configuration, cell loss was minimized even when perfusion was introduced. When compared to the standard collagen sandwich model, hepatocytes cultured as CS under perfusion exhibited significantly enhanced hepatocyte functions such as urea secretion, and CYP1A1 and CYP3A2 metabolic activity. We propose the use of the CS culture as an improved culture platform to current hepatocyte spheroid-based culture systems. PMID:26708088

  5. Using infrasound to constrain ash plume height

    NASA Astrophysics Data System (ADS)

    Lamb, Oliver; De Angelis, Silvio; Lavallée, Yan

    2016-04-01

    Airborne volcanic ash advisories are currently based on analyses of satellite imagery with relatively low temporal resolution, and numerical simulations of atmospheric plume dispersion. These simulations rely on key input parameters such as the maximum height of eruption plumes and the mass eruption rate at the vent, which remain loosely constrained. In this study, we present a proof-of-concept workflow that incorporates the analysis of volcanic infrasound with numerical modelling of volcanic plume rise in a realistic atmosphere. We analyse acoustic infrasound records from two explosions during the 2009 eruption of Mt. Redoubt, USA, that produced plumes reaching heights of 12-14 km. We model the infrasonic radiation at the source under the assumptions of linear acoustic theory and calculate variations in mass ejection velocity at the vent. The estimated eruption velocities serve as the input for numerical models of plume rise. The encouraging results highlight the potential for infrasound measurements to be incorporated into numerical modelling of ash dispersion, and confirm their value for volcano monitoring operations.

  6. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Finger joint polymer constrained prosthesis. 888.3230 Section 888.3230 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification....

  7. The Constraining of Parameters in Restricted Factor Analysis.

    ERIC Educational Resources Information Center

    Hattie, John; Fraser, Colin

    1988-01-01

    In restricted factor analysis, each element of the matrices of factor loadings and correlations and unique variances and covariances can be constrained. It is argued that the practice of constraining some parameters at zero is not psychologically meaningful. Alternative procedures are presented and illustrated. (TJH)

  8. The Pendulum: From Constrained Fall to the Concept of Potential

    ERIC Educational Resources Information Center

    Bevilacqua, Fabio; Falomo, Lidia; Fregonese, Lucio; Giannetto, Enrico; Giudice, Franco; Mascheretti, Paolo

    2006-01-01

    Kuhn underlined the relevance of Galileo's gestalt switch in the interpretation of a swinging body from constrained fall to time metre. But the new interpretation did not eliminate the older one. The constrained fall, both in the motion of pendulums and along inclined planes, led Galileo to the law of free fall. Experimenting with physical…

  9. CONSTRAINING DARK ENERGY WITH GAMMA-RAY BURSTS

    SciTech Connect

    Samushia, Lado; Ratra, Bharat E-mail: ratra@phys.ksu.ed

    2010-05-10

    We use the measurement of gamma-ray burst (GRB) distances to constrain dark energy cosmological model parameters. We employ two methods for analyzing GRB data: fitting the luminosity relation of GRBs in each cosmology, and using distance measures computed from binned GRB data. Current GRB data alone cannot tightly constrain cosmological parameters and allow for a wide range of dark energy models.

  10. Probability Statements Extraction with Constrained Conditional Random Fields.

    PubMed

    Deleris, Léa A; Jochim, Charles

    2016-01-01

    This paper investigates how to extract probability statements from academic medical papers. In previous work we have explored traditional classification methods which led to numerous false negatives. This current work focuses on constraining classification output obtained from a Conditional Random Field (CRF) model to allow for domain knowledge constraints. Our experimental results indicate constraining leads to a significant improvement in performance. PMID:27577439

  11. Titan's interior constrained from its obliquity and tidal Love number

    NASA Astrophysics Data System (ADS)

    Baland, Rose-Marie; Coyette, Alexis; Yseboodt, Marie; Beuthe, Mikael; Van Hoolst, Tim

    2016-04-01

    In the last few years, the Cassini-Huygens mission to the Saturn system has measured the shape, the obliquity, the static gravity field, and the tidally induced gravity field of Titan. The large values of the obliquity and of the k2 Love number both point to the existence of a global internal ocean below the icy crust. In order to constrain interior models of Titan, we combine the above-mentioned data as follows: (1) we build four-layer density profiles consistent with Titan's bulk properties; (2) we determine the corresponding internal flattening compatible with the observed gravity and topography; (3) we compute the obliquity and tidal Love number for each interior model; (4) we compare these predictions with the observations. Previously, we found that Titan is more differentiated than expected (assuming hydrostatic equilibrium), and that its ocean is dense and less than 100 km thick. Here, we revisit these conclusions using a more complete Cassini state model, including: (1) gravitational and pressure torques due to internal tidal deformations; (2) atmosphere/lakes-surface exchange of angular momentum; (3) inertial torque due to Poincaré flow. We also adopt faster methods to evaluate Love numbers (i.e. the membrane approach) in order to explore a larger parameter space.

  12. A Bayesian Approach to Constraining Dwarf Galaxy Evolution

    NASA Astrophysics Data System (ADS)

    Lotz, J. M.; Ferguson, H. C.

    2001-12-01

    We use a Bayesian - maximum likelihood analysis of the Hubble Deep Field to constrain the epoch of dwarf galaxy formation. Late formation of dwarf galaxies arises as a natural consequence of proposed solutions to the "over-cooling" problem in hierarchical structure formation. Although dwarf-sized halos are among the first objects to collapse out of a cold dark matter dominated universe, photo-ionization from the inter-galactic UV background and stellar feedback at early epochs may suppress or delay significant star formation in dwarf galaxies until redshifts ~ 1. Such late-forming dwarf galaxies may make up a portion of the population of the faint blue galaxies observed at intermediate redshifts. Previous attempts to understand the nature of the faint blue galaxy population have fit the binned number counts, luminosity functions, color and size distributions and compared the results to a handful of possible scenarios. Our approach sums the likelihood of observing each object in the HDF catalog given a dwarf galaxy formation scenario and computes the total likelihood of the given dwarf formation scenario. The parameters of the input model are then varied, and the model with the maximum total likelihood is determined. This technique does not bin the data in any way, tests a wide range of input model parameters, and allows us to quantify the goodness-of-fit and constraints on dwarf galaxy evolution.
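    The core of the approach, stripped to a sketch: each catalog object contributes its own likelihood under a candidate formation scenario, the per-object log-likelihoods are summed with no binning, and the scenario parameters are varied to find the maximum. The likelihood function and parameter grid below are placeholders, not the model used in the record.

```python
import numpy as np

def total_log_likelihood(objects, log_like_fn, params):
    """Sum per-object log-likelihoods for one dwarf-formation scenario.

    objects     : iterable of observed quantities (e.g. magnitude, color, size)
    log_like_fn : callable(obj, params) -> log p(obj | scenario params)  [placeholder]
    """
    return sum(log_like_fn(obj, params) for obj in objects)

def best_scenario(objects, log_like_fn, param_grid):
    """Grid search over scenario parameters; returns the maximum-likelihood scenario."""
    scored = [(total_log_likelihood(objects, log_like_fn, p), p) for p in param_grid]
    return max(scored, key=lambda t: t[0])
```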

  13. Informed constrained spherical deconvolution (iCSD).

    PubMed

    Roine, Timo; Jeurissen, Ben; Perrone, Daniele; Aelterman, Jan; Philips, Wilfried; Leemans, Alexander; Sijbers, Jan

    2015-08-01

    Diffusion-weighted (DW) magnetic resonance imaging (MRI) is a noninvasive imaging method, which can be used to investigate neural tracts in the white matter (WM) of the brain. However, the voxel sizes used in DW-MRI are relatively large, making DW-MRI prone to significant partial volume effects (PVE). These PVEs can be caused both by complex (e.g. crossing) WM fiber configurations and non-WM tissue, such as gray matter (GM) and cerebrospinal fluid. High angular resolution diffusion imaging methods have been developed to correctly characterize complex WM fiber configurations, but significant non-WM PVEs are also present in a large proportion of WM voxels. In constrained spherical deconvolution (CSD), the full fiber orientation distribution function (fODF) is deconvolved from clinically feasible DW data using a response function (RF) representing the signal of a single coherently oriented population of fibers. Non-WM PVEs cause a loss of precision in the detected fiber orientations and an emergence of false peaks in CSD, more prominently in voxels with GM PVEs. We propose a method, informed CSD (iCSD), to improve the estimation of fODFs under non-WM PVEs by modifying the RF to account for non-WM PVEs locally. In practice, the RF is modified based on tissue fractions estimated from high-resolution anatomical data. Results from simulation and in-vivo bootstrapping experiments demonstrate a significant improvement in the precision of the identified fiber orientations and in the number of false peaks detected under GM PVEs. Probabilistic whole brain tractography shows fiber density is increased in the major WM tracts and decreased in subcortical GM regions. The iCSD method significantly improves the fiber orientation estimation at the WM-GM interface, which is especially important in connectomics, where the connectivity between GM regions is analyzed. PMID:25660002

  14. Laterally constrained inversion for CSAMT data interpretation

    NASA Astrophysics Data System (ADS)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this improves the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent noise insensitive. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison with the previous inversion in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global searching algorithm of simulated annealing (SA) in the watershed shows that both methods deliver very similar, good results, but the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
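    The following is a minimal Gauss-Newton sketch of the scheme described above: the Jacobian columns are preconditioned by a weighting matrix that evens out parameter sensitivities, and a lateral roughness operator ties the same layer parameter at neighbouring stations together. The matrix shapes, the damping choice, and the specific operators are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def lci_update(J, W, L, m, d_obs, d_pred, alpha=1e-2, beta=1.0):
    """One laterally constrained Gauss-Newton step (illustrative sketch).

    J     : (n_data, n_param) Jacobian for all stations stacked together
    W     : (n_param, n_param) diagonal weighting balancing parameter sensitivities
    L     : lateral roughness operator (differences of the same layer parameter
            at neighbouring stations)
    m     : current model vector; d_obs, d_pred : observed / predicted data
    alpha : damping weight; beta : lateral-constraint weight (assumed tunable)
    """
    Jp = J @ W                                    # preconditioned Jacobian
    A = Jp.T @ Jp + alpha * np.eye(J.shape[1]) + beta * (W @ L.T @ L @ W)
    g = Jp.T @ (d_obs - d_pred) - beta * (W @ L.T @ (L @ m))
    dm_p = np.linalg.solve(A, g)                  # step in the preconditioned variables
    return m + W @ dm_p                           # map back to physical parameters
```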

  15. Constraining the source of mantle plumes

    NASA Astrophysics Data System (ADS)

    Cagney, N.; Crameri, F.; Newsome, W. H.; Lithgow-Bertelloni, C.; Cotel, A.; Hart, S. R.; Whitehead, J. A.

    2016-02-01

    In order to link the geochemical signature of hot spot basalts to Earth's deep interior, it is first necessary to understand how plumes sample different regions of the mantle. Here, we investigate the relative amounts of deep and shallow mantle material that are entrained by an ascending plume and constrain its source region. The plumes are generated in a viscous syrup using an isolated heater for a range of Rayleigh numbers. The velocity fields are measured using stereoscopic Particle-Image Velocimetry, and the concept of the 'vortex ring bubble' is used to provide an objective definition of the plume geometry. Using this plume geometry, the plume composition can be analysed in terms of the proportion of material that has been entrained from different depths. We show that the plume composition can be well described using a simple empirical relationship, which depends only on a single parameter, the sampling coefficient, sc. High-sc plumes are composed of material which originated from very deep in the fluid domain, while low-sc plumes contain material entrained from a range of depths. The analysis is also used to show that the geometry of the plume can be described using a similarity solution, in agreement with previous studies. Finally, numerical simulations are used to vary both the Rayleigh number and viscosity contrast independently. The simulations allow us to predict the value of the sampling coefficient for mantle plumes; we find that as a plume reaches the lithosphere, 90% of its composition has been derived from the lowermost 260-750 km in the mantle, and negligible amounts are derived from the shallow half of the lower mantle. This result implies that isotope geochemistry cannot provide direct information about this unsampled region, and that the various known geochemical reservoirs must lie in the deepest few hundred kilometres of the mantle.

  16. The cost-constrained traveling salesman problem

    SciTech Connect

    Sokkappa, P.R.

    1990-10-01

    The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
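    As a hedged sketch of the selection-plus-sequencing trade-off described above, here is a simple greedy insertion heuristic for the cost-constrained subtour: it repeatedly adds the node with the best value per unit of extra tour cost while the closed subtour stays within budget. It deliberately ignores the neighborhood effects and randomized repetitions discussed in the record, and is not the report's branch-and-bound or knapsack-bound machinery.

```python
def cctsp_greedy(values, dist, budget, start=0):
    """Greedy insertion heuristic for the cost-constrained TSP (illustrative sketch).

    values : values[j] is the value collected by visiting node j
    dist   : dist[i][j] is the travel cost between nodes i and j
    budget : maximum allowed cost of the closed subtour starting/ending at `start`
    """
    tour = [start, start]
    visited = {start}

    def tour_cost(t):
        return sum(dist[a][b] for a, b in zip(t, t[1:]))

    while True:
        best = None
        for j in set(range(len(values))) - visited:
            for pos in range(1, len(tour)):
                extra = (dist[tour[pos - 1]][j] + dist[j][tour[pos]]
                         - dist[tour[pos - 1]][tour[pos]])
                if tour_cost(tour) + extra <= budget:
                    score = values[j] / (extra + 1e-12)   # value gained per unit of added cost
                    if best is None or score > best[0]:
                        best = (score, j, pos)
        if best is None:
            break
        _, j, pos = best
        tour.insert(pos, j)
        visited.add(j)
    return tour, sum(values[i] for i in visited)
```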

  17. Constraining Dark Matter Through the Study of Merging Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Dawson, William Anthony

    2013-03-01

    gravitational lensing observations to map and weigh the mass (i.e., dark matter, which comprises ~85% of the mass) of the cluster, Sunyaev-Zel'dovich effect and X-ray observations to map and quantify the intracluster gas, and finally radio observations to search for associated radio relics, which, had they been observed, would have helped constrain the properties of the merger. Using this information in conjunction with a Monte Carlo analysis model I quantify the dynamic properties of the merger, necessary to properly interpret constraints on the SIDM cross-section. I compare the locations of the galaxies, dark matter and gas to constrain the SIDM cross-section. This dissertation presents this work. Findings: We find that the Musket Ball is a merger with total mass of 4.8 (+3.2/−1.5) × 10^14 M_sun. However, the dynamic analysis shows that the Musket Ball is being observed 1.1 (+1.3/−0.4) Gyr after first pass-through and is much further progressed in its merger process than previously identified dissociative mergers (for example, it is 3.4 (+3.8/−1.4) times further progressed than the Bullet Cluster). By observing that the dark matter is significantly offset from the gas we are able to place an upper limit on the dark matter cross-section of σ_SIDM/m_DM < 8 cm² g⁻¹. However, we find that the galaxies appear to be leading the weak lensing (WL) mass distribution by 20.5" (129 kpc at z=0.53) in the southern subcluster, which might be expected to occur if dark matter self-interacts. Contrary to this finding, though, the WL mass centroid appears to be leading the galaxy centroid by 7.4" (47 kpc at z=0.53) in the northern subcluster. Conclusion: The southern offset alone suggests that dark matter self-interacts with ~83% confidence. However, when we account for the observation that the galaxy centroid appears to trail the WL centroid in the north, the confidence falls to ~55%. While the SIDM scenario is slightly preferred over the CDM scenario it is not significantly so. Perspectives: The galaxy

  18. Constraining Anthropogenic and Biogenic Emissions Using Chemical Ionization Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Spencer, Kathleen M.

    Numerous gas-phase anthropogenic and biogenic compounds are emitted into the atmosphere. These gases undergo oxidation to form other gas-phase species and particulate matter. Whether directly or indirectly, primary pollutants, secondary gas-phase products, and particulate matter all pose health and environmental risks. In this work, ambient measurements conducted using chemical ionization mass spectrometry are used as a tool for investigating regional air quality. Ambient measurements of peroxynitric acid (HO2NO2) were conducted in Mexico City. A method of inferring the rate of ozone production, P(O3), is developed based on observations of HO2NO2, NO, and NO2. Comparison of this observationally based P(O3) to a highly constrained photochemical box model indicates that regulations aimed at reducing ozone levels in Mexico City by reducing NOx concentrations may be effective at higher NOx levels than predicted using accepted photochemistry. Measurements of SO2 and particulate sulfate were conducted over the Los Angeles basin in 2008 and are compared to measurements made in 2002. A large decrease in SO2 concentration and a change in spatial distribution are observed. Nevertheless, only a modest reduction in sulfate concentration is observed at ground sites within the basin. Possible explanations for these trends are investigated. Two techniques, single and triple quadrupole chemical ionization mass spectrometry, were used to quantify ambient concentrations of biogenic oxidation products, hydroxyacetone and glycolaldehyde. The use of these techniques demonstrates the advantage of triple quadrupole mass spectrometry for separation of mass analogues, provided the collision-induced daughter ions are sufficiently distinct. Enhancement ratios of hydroxyacetone and glycolaldehyde in Californian biomass burning plumes are presented as are concentrations of these compounds at a rural ground site downwind of Sacramento.

  19. Constraining canopy biophysical simulations with MODIS reflectance data

    NASA Astrophysics Data System (ADS)

    Drewry, D. T.; Duveiller, G.

    2013-05-01

    Modern vegetation models incorporate ecophysiological details that allow for accurate estimates of carbon dioxide uptake, water use and energy exchange, but require knowledge of dynamic structural and biochemical traits. Variations in these traits are controlled by genetic factors as well as growth stage and nutrient and moisture availability, making them difficult to predict and prone to significant error. Here we explore the use of MODIS optical reflectance data for constraining key canopy- and leaf-level traits required by forward biophysical models. A multi-objective optimization algorithm is used to invert the PROSAIL canopy radiation transfer model, which accounts for the effects of leaf-level optical properties, foliage distribution and orientation on canopy reflectance across the optical range. Inversions are conducted for several growing seasons for both soybean and maize at several sites in the Central US agro-ecosystem. These inversions provide estimates of seasonal variations, and associated uncertainty, of variables such as leaf area index (LAI) that are then used as inputs into the MLCan biophysical model to conduct forward simulations. MLCan characterizes the ecophysiological functioning of a plant canopy at a half-hourly timestep, and has been rigorously validated for both C3 and C4 crops against observations of canopy CO2 uptake, evapotranspiration and sensible heat exchange across a wide range of meteorological conditions. The inversion-derived canopy properties are used to examine the ability of MODIS data to characterize seasonal variations in canopy properties in the context of a detailed forward canopy biophysical model, and the uncertainty induced in forward model estimates as a function of the uncertainty in the inverted parameters. Special care is made to ensure that the satellite observations match adequately, in both time and space, with the coupled model simulations. To do so, daily MODIS observations are used and a validated model of
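    A minimal sketch of the inversion step described above, with a plain least-squares fit standing in for the multi-objective optimizer and a placeholder `canopy_reflectance(params, wavelengths)` standing in for the PROSAIL forward model (both are assumptions, not the record's implementation):

```python
import numpy as np
from scipy.optimize import least_squares

def invert_canopy_model(canopy_reflectance, wavelengths, observed, x0, bounds):
    """Fit canopy/leaf parameters (e.g. LAI, chlorophyll) to MODIS-like reflectance.

    canopy_reflectance : callable(params, wavelengths) -> modeled reflectance
                         (placeholder for a PROSAIL-type radiative transfer model)
    observed           : reflectance observed in the satellite bands
    x0, bounds         : initial guess and physical bounds on the parameters
    """
    def residual(params):
        return canopy_reflectance(params, wavelengths) - observed

    fit = least_squares(residual, x0, bounds=bounds)
    # Crude per-parameter uncertainty from the Jacobian at the solution, to be
    # propagated into forward biophysical simulations (e.g. as MLCan inputs).
    cov = np.linalg.pinv(fit.jac.T @ fit.jac) * np.var(fit.fun)
    return fit.x, np.sqrt(np.diag(cov))
```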

  20. Constraining Cometary Crystal Shapes from IR Spectral Features

    NASA Technical Reports Server (NTRS)

    Wooden, Diane H.; Lindsay, Sean; Harker, David E.; Kelley, Michael S. P.; Woodward, Charles E.; Murphy, James Richard

    2013-01-01

    A major challenge in deriving the silicate mineralogy of comets is ascertaining how the anisotropic nature of forsterite crystals affects the spectral features' wavelength, relative intensity, and asymmetry. Forsterite features are identified in cometary comae near 10, 11.05-11.2, 16, 19, 23.5, 27.5 and 33 microns [1-10], so accurate models for forsterite's absorption efficiency (Q_abs) are a primary requirement to compute IR spectral energy distributions (SEDs, λF_λ vs. λ) and constrain the silicate mineralogy of comets. Forsterite is an anisotropic crystal, with three crystallographic axes with distinct indices of refraction for the a-, b-, and c-axis. The shape of a forsterite crystal significantly affects its spectral features [13-16]. We need models that account for crystal shape. The IR absorption efficiencies of forsterite are computed using the discrete dipole approximation (DDA) code DDSCAT [11,12]. Starting from a fiducial crystal shape of a cube, we systematically elongate/reduce one of the crystallographic axes. Also, we elongate/reduce one axis while the lengths of the other two axes are slightly asymmetric (0.8:1.2). The most significant grain shape characteristic that affects the crystalline spectral features is the relative lengths of the crystallographic axes. The second significant grain shape characteristic is breaking the symmetry of all three axes [17]. Synthetic spectral energy distributions using seven crystal shape classes [17] are fit to the observed SED of comet C/1995 O1 (Hale-Bopp). The Hale-Bopp crystalline residual better matches equant, b-platelets, c-platelets, and b-columns spectral shape classes, while a-platelets, a-columns and c-columns worsen the spectral fits. Forsterite condensation and partial evaporation experiments demonstrate that environmental temperature and grain shape are connected [18-20]. Thus, grain shape is a potential probe for protoplanetary disk temperatures where the cometary crystalline

  1. How We Can Constrain Aerosol Type Globally

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph

    2016-01-01

    In addition to aerosol number concentration, aerosol size and composition are essential attributes needed to adequately represent aerosol-cloud interactions (ACI) in models. As the nature of ACI varies enormously with environmental conditions, global-scale constraints on particle properties are indicated. And although advanced satellite remote-sensing instruments can provide categorical aerosol-type classification globally, detailed particle microphysical properties are unobtainable from space with currently available or planned technologies. For the foreseeable future, only in situ measurements can constrain particle properties at the level-of-detail required for ACI, as well as reduce uncertainties in regional-to-global-scale direct aerosol radiative forcing (DARF). The limitation of in situ measurements for this application is sampling. However, there is a simplifying factor: for a given aerosol source, in a given season, particle microphysical properties tend to be repeatable, even if the amount varies from day-to-day and year-to-year, because the physical nature of the particles is determined primarily by the regional environment. So, if the PDFs of particle properties from major aerosol sources can be adequately characterized, they can be used to add the missing microphysical detail to the better-sampled satellite aerosol-type maps. This calls for Systematic Aircraft Measurements to Characterize Aerosol Air Masses (SAM-CAAM). We are defining a relatively modest and readily deployable, operational aircraft payload capable of measuring key aerosol absorption, scattering, and chemical properties in situ, and a program for characterizing statistically these properties for the major aerosol air mass types, at a level-of-detail unobtainable from space. It is aimed at: (1) enhancing satellite aerosol-type retrieval products with better aerosol climatology assumptions, and (2) improving the translation between satellite-retrieved aerosol optical properties and

  2. Topology of Wrinklons in Graphene Nanoribbons in the Vicinity of Constrained Edge

    NASA Astrophysics Data System (ADS)

    Korznikova, E. A.; Baimova, J. A.; Dmitriev, S. V.

    2015-10-01

    Most two-dimensional materials with low bending stiffness tend to lose their flat shape and form topological defects in the form of wrinkles and folds under the action of external factors. A striking example of such a material is graphene, where the presence of wrinkles leads to changes in the physical, mechanical, and chemical properties of the material. Thus, by changing the geometry of wrinkles, one can purposefully control the properties of graphene. In this paper, we studied the characteristics of wrinkles appearing in graphene under the influence of elastic deformation, as well as the evolution of the configuration of wrinkles in the vicinity of the constrained edge of a graphene nanoribbon under different initial conditions. It is found that near the constrained edges of deformed graphene nanoribbons it is energetically favorable to form wrinklons, that is, transition regions where two or more wrinkles merge into one. The stability of two types of wrinklons, formed by merging two or three wrinkles into one, is shown. It is shown that in the process of structure relaxation of uniformly deformed graphene, depending on the initial configuration of wrinkles, a hierarchy of wrinkles containing wrinklons of one or another type is formed near the constrained edges. The results allow us to explain the experimentally observed topology of a graphene sheet in the vicinity of a constrained edge.

  3. Anomalous ion heating from ambipolar-constrained magnetic fluctuation-induced transport

    SciTech Connect

    Gatto, R.; Terry, P. W.

    2001-01-01

    A kinetic theory for the anomalous heating of ions from energy stored in magnetic turbulence is presented. Imposing self consistency through the constitutive relations between particle distributions and fields, a turbulent Kirchhoff's law is derived that expresses a direct connection between rates of ion heating and electron thermal transport. This connection arises from the kinematics of electron motion along turbulent fields, which results in granular structures in the electron distribution. The drag exerted on these structures through emission into collective modes mediates ambipolar-constrained transport. Resonant damping of the collective modes by ions produces the heating. In collisionless plasmas the rate of ion damping controls the rate of emission, and hence the ambipolar-constrained electron heat flux. The heating rate is calculated for both a resonant and non-resonant magnetic fluctuation spectrum and compared with observations. The theoretical heating rate is sufficient to account for the observed two-fold rise in ion temperature during sawtooth events in experimental discharges.

  4. Age and mass of solar twins constrained by lithium abundance

    NASA Astrophysics Data System (ADS)

    Do Nascimento, J. D., Jr.; Castro, M.; Meléndez, J.; Bazot, M.; Théado, S.; Porto de Mello, G. F.; de Medeiros, J. R.

    2009-07-01

    Aims: We analyze the non-standard mixing history of the solar twins HIP 55459, HIP 79672, HIP 56948, HIP 73815, and HIP 100963 to determine their mass and age as precisely as possible. Methods: We computed a grid of evolutionary models with non-standard mixing at several metallicities with the Toulouse-Geneva code for a range of stellar masses, assuming an error bar of ±50 K in T_eff. We choose the evolutionary model that accurately reproduces the low lithium abundances observed in the solar twins. Results: Our best-fit model for each solar twin provides a mass and age solution constrained by its Li content and T_eff determination. HIP 56948 is the most likely solar-twin candidate at the present time, and our analysis infers a mass of 0.994 ± 0.004 M_⊙ and an age of 4.71 ± 1.39 Gyr. Conclusions: Non-standard mixing is required to explain the low Li abundances observed in solar twins. Li depletion due to additional mixing in solar twins is strongly mass dependent. An accurate lithium abundance measurement, combined with non-standard models, constrains the age and mass more precisely and robustly than classical methods alone. The models are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/501/687 or via http://andromeda.dfte.ufrn.br

  5. Constraining Magnetosphere Storm Drivers Through Analysis of the Atmospheric Response

    NASA Astrophysics Data System (ADS)

    Fedrizzi, M.; Fuller-Rowell, T. J.; Codrescu, M.

    2009-12-01

    The effects of Joule heating on neutral wind, composition, temperature and density of the upper atmosphere are known qualitatively but a quantitative characterization is still missing. A step towards such a quantitative analysis requires detailed observations of the dynamics itself and its impacts on the thermosphere-ionosphere system, as well as an adequate numerical model. For this research we use the global, three-dimensional, time-dependent, non-linear coupled model of the thermosphere, ionosphere, plasmasphere and electrodynamics (CTIPe), a self-consistent physics-based model that solves the momentum, energy, and composition equations for the neutral and ionized atmosphere. The F10.7 index is used to define solar EUV heating, ionization, and dissociation. Propagating tidal modes are imposed at 80 km altitude with a prescribed amplitude and phase. The magnetospheric energy input into the system is characterized by the time variations of the solar wind velocity and the interplanetary magnetic field (IMF) magnitude and direction, whereas the auroral precipitation is derived either from the TIROS/NOAA satellite observations or from ACE solar wind and IMF data. During geomagnetic storms the temperature of the Earth’s upper atmosphere can be substantially increased mainly due to high-latitude Joule heating induced by magnetospheric convection and auroral particle precipitation. This heating drives rapid increases in temperature inducing upwelling of the neutral atmosphere. The enhanced density results in a subsequent increase of atmospheric drag on satellites and large-scale ionospheric storm effects. The storm energy input drives changes in global circulation, neutral composition, plasma density, and electrodynamics. One full year comparison between ground and space observations with CTIPe results during solar minimum conditions shows that the model captures the daily space weather and the year-long climatology not only in a qualitative but in a quantitative way

  6. Quantum dynamics by the constrained adiabatic trajectory method

    SciTech Connect

    Leclerc, A.; Jolicard, G.; Guerin, S.; Killingbeck, J. P.

    2011-03-15

    We develop the constrained adiabatic trajectory method (CATM), which allows one to solve the time-dependent Schroedinger equation constraining the dynamics to a single Floquet eigenstate, as if it were adiabatic. This constrained Floquet state (CFS) is determined from the Hamiltonian modified by an artificial time-dependent absorbing potential whose forms are derived according to the initial conditions. The main advantage of this technique for practical implementation is that the CFS is easy to determine even for large systems since its corresponding eigenvalue is well isolated from the others through its imaginary part. The properties and limitations of the CATM are explored through simple examples.

  7. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    NASA Astrophysics Data System (ADS)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since such characters have a nearly constant stroke width in many cases. An image was segmented with a constrained Delaunay triangulation. Connected component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. Stroke width calculation of the connected components was conducted based on the altitude of the triangles generated with the constrained Delaunay triangulation. The experimental results proved the effectiveness of the proposed method.

  8. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
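    A small sketch of the combinatorial reorganization at the heart of this kind of algorithm: columns of the observation matrix that share the same passive (unconstrained) variable set are grouped, so each distinct least-squares subproblem is factored once for the whole group rather than once per column. The surrounding active-set iterations of the full algorithm are omitted, and the API below is purely illustrative.

```python
import numpy as np

def grouped_passive_ls(A, B, passive_sets):
    """Solve per-column passive-variable least-squares subproblems, factoring each
    distinct passive set only once (the combinatorial grouping idea).

    A            : (m, n) design matrix
    B            : (m, p) matrix of observation vectors (one column per problem)
    passive_sets : list of length p; passive_sets[k] holds the indices of the
                   variables currently unconstrained for column k
    """
    X = np.zeros((A.shape[1], B.shape[1]))
    groups = {}
    for k, p_set in enumerate(passive_sets):
        groups.setdefault(tuple(sorted(p_set)), []).append(k)
    for p_set, cols in groups.items():
        if not p_set:
            continue                        # all variables constrained to zero for these columns
        idx = list(p_set)
        sol, *_ = np.linalg.lstsq(A[:, idx], B[:, cols], rcond=None)
        X[np.ix_(idx, cols)] = sol
    return X
```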

  9. Constraining the mass of Galactic black hole GX 339-4

    NASA Astrophysics Data System (ADS)

    Sreehari, H.; Iyer, N.; Nandi, Anuj

    Observations of Galactic black hole candidates during their outburst phases have revealed Quasi-Periodic Oscillations (QPOs) and their dependence on the spectral parameters. We used the 2010 outburst data of GX 339-4 from Rossi X-ray Timing Explorer (RXTE) and modeled it based on evolution of QPO frequencies and spectro-temporal correlation to constrain the mass of the compact object.

  10. Progress in constraining the asymmetry dependence of the nuclear caloric curve

    NASA Astrophysics Data System (ADS)

    McIntosh, Alan B.; Yennello, Sherry J.

    2016-05-01

    The nuclear equation of state is a basic emergent property of nuclear material. Despite its importance in nuclear physics and astrophysics, aspects of it are still poorly constrained. Our research focuses on answering the question: How does the nuclear caloric curve depend on the neutron-proton asymmetry? We briefly describe our initial observation that increasing neutron-richness leads to lower temperatures. We then discuss the status of our recently executed experiment to independently measure the asymmetry dependence of the caloric curve.

  11. Ultraconservation identifies a small subset of extremely constrained developmental enhancers

    SciTech Connect

    Pennacchio, Len A.; Visel, Axel; Prabhakar, Shyam; Akiyama, Jennifer A.; Shoukry, Malak; Lewis, Keith D.; Holt, Amy; Plajzer-Frick, Ingrid; Afzal, Veena; Rubin, Edward M.; Pennacchio, Len A.

    2007-10-01

    While experimental studies have suggested that non-coding ultraconserved DNA elements are central nodes in the regulatory circuitry that specifies mammalian embryonic development, the possible functional relevance of their >200 bp of perfect sequence conservation between human, mouse, and rat remains obscure [1,2]. Here we have compared the in vivo enhancer activity of a genome-wide set of 231 non-exonic sequences with ultraconserved cores to that of 206 sequences that are under equivalently severe human-rodent constraint (ultra-like), but lack perfect sequence conservation. In transgenic mouse assays, 50% of the ultraconserved and 50% of the ultra-like conserved elements reproducibly functioned as tissue-specific enhancers at embryonic day 11.5. In this in vivo assay, we observed that ultraconserved enhancers and constrained non-ultraconserved enhancers targeted expression to a similar spectrum of tissues, with a particular enrichment in the developing central nervous system. A human genome-wide comparative screen uncovered ~2,600 non-coding elements that evolved under ultra-like human-rodent constraint and are similarly enriched near transcriptional regulators and developmental genes as the much smaller number of ultraconserved elements. These data indicate that ultraconserved elements possessing absolute human-rodent sequence conservation are not distinct from other non-coding elements that are under comparable purifying selection in mammals and suggest they are principal constituents of the cis-regulatory framework of mammalian development.

  12. Geometric momentum for a particle constrained on a curved hypersurface

    SciTech Connect

    Liu, Q. H.

    2013-12-15

    The canonical quantization is a procedure for quantizing a classical theory while preserving the formal algebraic structure among observables in the classical theory to the extent possible. For a system without constraint, we have the so-called fundamental commutation relations (CRs) among positions and momenta, whose algebraic relations are the same as those given by the Poisson brackets in classical mechanics. For the constrained motion on a curved hypersurface, we need more fundamental CRs otherwise neither momentum nor kinetic energy can be properly quantized, and we propose an enlarged canonical quantization scheme with introduction of the second category of fundamental CRs between Hamiltonian and positions, and those between Hamiltonian and momenta, whereas the original ones are categorized into the first. As an N − 1 (N ⩾ 2) dimensional hypersurface is embedded in an N dimensional Euclidean space, we obtain the proper momentum that depends on the mean curvature. For the spherical surface, a long-standing problem in the form of the geometric potential is resolved in a lucid and unambiguous manner, which turns out to be identical to that given by the so-called confining potential technique. In addition, a new dynamical group SO(N, 1) symmetry for the motion on the sphere is demonstrated.

  13. A new seismically constrained subduction interface model for Central America

    NASA Astrophysics Data System (ADS)

    Kyriakopoulos, C.; Newman, A. V.; Thomas, A. M.; Moore-Driskell, M.; Farmer, G. T.

    2015-08-01

    We provide a detailed, seismically defined three-dimensional model for the subducting plate interface along the Middle America Trench between northern Nicaragua and southern Costa Rica. The model uses data from a weighted catalog of about 30,000 earthquake hypocenters compiled from nine catalogs to constrain the interface through a process we term the "maximum seismicity method." The method determines the average position of the largest cluster of microseismicity beneath an a priori functional surface above the interface. This technique is applied to all seismicity above 40 km depth, the approximate intersection of the hanging wall Mohorovičić discontinuity, where seismicity likely lies along the plate interface. Below this depth, an envelope above 90% of seismicity approximates the slab surface. Because of station proximity to the interface, this model provides highest precision along the interface beneath the Nicoya Peninsula of Costa Rica, an area where marked geometric changes coincide with crustal transitions and topography observed seaward of the trench. The new interface is useful for a number of geophysical studies that aim to understand subduction zone earthquake behavior and geodynamic and tectonic development of convergent plate boundaries.

  14. Two-dimensional topological order of kinetically constrained quantum particles

    NASA Astrophysics Data System (ADS)

    Kourtis, Stefanos

    2015-03-01

    Motivated by recent experimental and theoretical work on driven optical lattices, we investigate how imposing kinetic restrictions on quantum particles that would otherwise hop freely on a two-dimensional lattice can lead to topologically ordered states. The kinetically constrained models introduced here are derived as an approximate generalization of strongly interacting particles hopping on Haldane and equivalent lattices and are pertinent to systems irradiated by circularly polarized light. After introducing a broad class of models, we focus on particular realizations and show numerically that they exhibit topological order, by observing topological ground-state degeneracies and the quantization of corresponding invariants. Apart from potentially being crucial for the interpretation of forthcoming cold-atom experiments, our results also hint at unexplored possibilities for the realization of topologically ordered matter. A further implication, relevant to fractional quantum Hall (FQH) physics, is that the correlations responsible for FQH-like states can arise from processes other than density-density interactions. Financial support from EPSRC (Grant No. EP/K028960/1) and ICAM Branch Contributions.

  15. Constraining MHD Disk-Winds with X-ray Absorbers

    NASA Astrophysics Data System (ADS)

    Fukumura, Keigo; Tombesi, F.; Shrader, C. R.; Kazanas, D.; Contopoulos, J.; Behar, E.

    2014-01-01

    From the state-of-the-art spectroscopic observations of active galactic nuclei (AGNs), robust absorption-line features (most notably from H/He-like ions), called warm absorbers (WAs), have often been detected in soft X-rays (< 2 keV). While the identified WAs are often mildly blueshifted to yield line-of-sight velocities up to ~100-3,000 km/sec in typical X-ray-bright Seyfert 1 AGNs, a fraction of Seyfert galaxies such as PG 1211+143 exhibits even faster absorbers (v/c ~ 0.1-0.2) called ultra-fast outflows (UFOs), whose physical condition is much more extreme compared with the WAs. Motivated by these recent X-ray data we show that the magnetically-driven accretion-disk wind model is a plausible scenario to explain the characteristic properties of these X-ray absorbers. As a preliminary case study we demonstrate that the wind model parameters (e.g. viewing angle and wind density) can be constrained by data from PG 1211+143 at a statistically significant level with chi-squared spectral analysis. Our wind models can thus be implemented into the standard analysis package, XSPEC, as a table spectrum model for general analysis of X-ray absorbers.

  16. Could the Pliocene constrain the equilibrium climate sensitivity?

    NASA Astrophysics Data System (ADS)

    Hargreaves, J. C.; Annan, J. D.

    2016-08-01

    The mid-Pliocene Warm Period (mPWP) is the most recent interval in which atmospheric carbon dioxide was substantially higher than in modern pre-industrial times. It is, therefore, a potentially valuable target for testing the ability of climate models to simulate climates warmer than the pre-industrial state. The recent Pliocene Model Intercomparison Project (PlioMIP) presented boundary conditions for the mPWP and a protocol for climate model experiments. Here we analyse results from the PlioMIP and, for the first time, discuss the potential for this interval to usefully constrain the equilibrium climate sensitivity. We observe a correlation across the model ensemble between the simulated tropical temperature anomaly at the mPWP and the equilibrium sensitivity. If the real world is assumed to obey the same relationship, then the reconstructed tropical temperature anomaly at the mPWP can in principle provide a constraint on the true sensitivity. Directly applying this methodology to the available data yields a range for the equilibrium sensitivity of 1.9-3.7 °C, but there are considerable additional uncertainties surrounding the analysis which are not included in this estimate. We consider the extent to which these uncertainties may be better quantified and perhaps lessened in the next few years.
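    A schematic of the emergent-constraint calculation described above, with made-up ensemble numbers standing in for the PlioMIP models: regress equilibrium sensitivity on the simulated mPWP tropical warming, then read off the sensitivity implied by a hypothetical reconstructed anomaly and its uncertainty.

        # Toy emergent-constraint regression (illustrative numbers, not PlioMIP data).
        import numpy as np

        dT_tropics = np.array([1.1, 1.4, 1.6, 1.9, 2.1, 2.4, 2.8])   # model mPWP anomalies (K)
        ecs        = np.array([2.1, 2.6, 2.9, 3.2, 3.6, 3.9, 4.4])   # model sensitivities (K)

        slope, intercept = np.polyfit(dT_tropics, ecs, 1)

        dT_recon, dT_err = 1.7, 0.3          # hypothetical proxy reconstruction (K)
        ecs_central = slope * dT_recon + intercept
        ecs_spread  = abs(slope) * dT_err    # propagates only the reconstruction error

        print(f"implied ECS ~ {ecs_central:.1f} +/- {ecs_spread:.1f} K "
              "(regression scatter and other uncertainties not included)")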

  17. On Optimizing Joint Inversion of Constrained Geophysical Data Sets

    NASA Astrophysics Data System (ADS)

    Sosa Aguirre, U. A.; Velazquez, L.; Argaez, M.; Velasco, A. A.; Romero, R.

    2010-12-01

    We implemented a joint inversion least-squares (LSQ) algorithm to characterize 1-D crustal velocity structure of the Earth using geophysical data sets, with two different optimization methods: truncated singular value decomposition (TSVD) and primal-dual interior-point (PDIP). We used receiver function and surface wave dispersion velocity observations, and created a framework to incorporate other data sets. An improvement in the final outcome (model) is expected because the joint data provide better physical constraints than any single data set alone. The TSVD and PDIP methods solve a regularized unconstrained minimization problem and an inherently regularized constrained minimization problem, respectively. The two techniques incorporate bounds on the layered shear velocities in different ways. We conduct numerical experiments with synthetic data and find that the PDIP method's solution is more robust than the TSVD approach in terms of satisfying geophysical constraints, accuracy, and efficiency. Finally, we apply the PDIP method to characterize material properties of the Rio Grande Rift region using real recorded seismic data, with promising numerical results.
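    A compact illustration of the TSVD side of the comparison, assuming a generic ill-conditioned linear forward problem Gm = d (not the receiver-function/dispersion kernels themselves): the singular value decomposition is truncated at rank k to regularize the solution. The PDIP branch would instead solve a bound-constrained quadratic program, which is not sketched here.

        # Truncated SVD solution of an ill-conditioned linear inverse problem.
        import numpy as np

        rng = np.random.default_rng(1)
        G = np.vander(np.linspace(0, 1, 40), 12, increasing=True)  # ill-conditioned kernel
        m_true = rng.standard_normal(12)
        d = G @ m_true + 1e-3 * rng.standard_normal(40)            # noisy data

        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        k = 6                                                      # truncation level
        m_tsvd = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])

        print("residual norm:", np.linalg.norm(G @ m_tsvd - d))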

  18. Constraining planetary interiors with the Love number k2

    NASA Astrophysics Data System (ADS)

    Kramm, Ulrike; Nettelmann, Nadine; Redmer, Ronald

    2011-11-01

    For the solar system giant planets the measurement of the gravitational moments J2 and J4 provided valuable information about the interior structure. However, for extrasolar planets the gravitational moments are not accessible. Nevertheless, an additional constraint for extrasolar planets can be obtained from the tidal Love number k2, which, to first order, is equivalent to J2. k2 quantifies the quadrupole deformation of the gravity field at the surface of the planet in response to an external perturbing body and depends solely on the planet's internal density distribution. On the other hand, the inverse deduction of the density distribution of the planet from k2 is non-unique. The Love number k2 is a potentially observable parameter that can be obtained from tidally induced apsidal precession of close-in planets (Ragozzine & Wolf 2009) or from the orbital parameters of specific two-planet systems in apsidal alignment (Mardling 2007). We find that for a given k2, a precise value for the core mass cannot be derived. However, a maximum core mass can be inferred, which equals the core mass predicted by models with a homogeneous zero-metallicity envelope. Using the example of the extrasolar transiting planet HAT-P-13b we show to what extent planetary models can be constrained by taking into account the tidal Love number k2.

  19. Transverse gravity versus observations

    SciTech Connect

    Álvarez, Enrique; Faedo, Antón F.; López-Villarejo, J.J. E-mail: anton.fernandez@uam.es

    2009-07-01

    Theories of gravity invariant under those diffeomorphisms generated by transverse vectors, ∂_μξ^μ = 0, are considered. Such theories are dubbed transverse, and differ from General Relativity in that the determinant of the metric, g, is a transverse scalar. We comment on diverse ways in which these models can be constrained using a variety of observations. Generically, an additional scalar degree of freedom mediates the interaction, so the usual constraints on scalar-tensor theories have to be imposed. If the purely gravitational part is Einstein-Hilbert but the matter action is transverse, the models predict that the three a priori different concepts of mass (gravitational active and gravitational passive as well as inertial) are not equivalent anymore. These transverse deviations from General Relativity are therefore tightly constrained, actually correlated with existing bounds on violations of the equivalence principle, local violations of Newton's third law and/or violation of Local Position Invariance.

  20. Transverse gravity versus observations

    NASA Astrophysics Data System (ADS)

    Álvarez, Enrique; Faedo, Antón F.; López-Villarejo, J. J.

    2009-07-01

    Theories of gravity invariant under those diffeomorphisms generated by transverse vectors, ∂μξμ = 0 are considered. Such theories are dubbed transverse, and differ from General Relativity in that the determinant of the metric, g, is a transverse scalar. We comment on diverse ways in which these models can be constrained using a variety of observations. Generically, an additional scalar degree of freedom mediates the interaction, so the usual constraints on scalar-tensor theories have to be imposed. If the purely gravitational part is Einstein-Hilbert but the matter action is transverse, the models predict that the three a priori different concepts of mass (gravitational active and gravitational passive as well as inertial) are not equivalent anymore. These transverse deviations from General Relativity are therefore tightly constrained, actually correlated with existing bounds on violations of the equivalence principle, local violations of Newton's third law and/or violation of Local Position Invariance.

  1. Constrained minimization of smooth functions using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
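    A minimal sketch of the general idea, assuming a quadratic-penalty treatment of the constraint rather than the necessary-condition reformulation used in the paper; the objective, constraint, and GA settings below are illustrative only.

        # Toy real-coded GA minimizing f(x) = x1^2 + x2^2 subject to x1 + x2 = 1,
        # handled with a quadratic penalty (not the authors' reformulation).
        import numpy as np

        rng = np.random.default_rng(2)

        def fitness(x):
            f = x[0]**2 + x[1]**2
            penalty = 1e3 * (x[0] + x[1] - 1.0)**2
            return f + penalty

        pop = rng.uniform(-2, 2, size=(60, 2))
        for gen in range(200):
            scores = np.apply_along_axis(fitness, 1, pop)
            parents = pop[np.argsort(scores)[:30]]          # truncation selection
            children = []
            for _ in range(30):
                a, b = parents[rng.integers(30, size=2)]
                child = 0.5 * (a + b)                       # arithmetic crossover
                child += rng.normal(scale=0.05, size=2)     # Gaussian mutation
                children.append(child)
            pop = np.vstack([parents, children])

        best = pop[np.argmin(np.apply_along_axis(fitness, 1, pop))]
        print("best point:", best, "(analytic optimum is [0.5, 0.5])")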

  2. Constraining a Distributed Hydrologic Model Using Process Constraints derived from a Catchment Perceptual Model

    NASA Astrophysics Data System (ADS)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei; Duffy, Chris; Musuuza, Jude; Zhang, Jun

    2015-04-01

    The increased availability of spatial datasets and hydrological monitoring techniques improves the potential to apply distributed hydrologic models robustly to simulate catchment systems. However, distributed catchment modelling remains problematic for several reasons, including the mismatch between the scale of process equations and observations, and the scale at which equations (and parameters) are applied at the model grid resolution. A key problem is that when equations are solved over a distributed grid of the catchment system, models contain a considerable number of distributed parameters, and therefore degrees of freedom, that need to be constrained through calibration. Often computational limitations alone prohibit a full search of the multidimensional parameter space. However, even when possible, insufficient data results in model parameter and/or structural equifinality. Calibration approaches therefore attempt to reduce the dimensions of parameter space to constrain model behaviour, typically by fixing, lumping or relating model parameters in some way when calibrating the model to time-series of response data. An alternative approach to help reduce the space of feasible models has been applied to lumped and semi-distributed models, where additional, often semi-qualitative information is used to constrain the internal states and fluxes of the model, which in turn helps to identify feasible sets of model structures and parameters. Such process constraints have not been widely applied to distributed hydrological models, despite the fact that distributed models make more predictions of distributed states and fluxes that can potentially be constrained. This paper presents a methodology for deriving process and parameter constraints through development of a perceptual model for a given catchment system, which can then be applied in distributed model calibration and sensitivity analysis to constrain feasible parameter and model structural space. We argue that

  3. Constraining the initial entropy of directly detected exoplanets

    NASA Astrophysics Data System (ADS)

    Marleau, G.-D.; Cumming, A.

    2014-01-01

    The post-formation, initial entropy Si of a gas giant planet is a key witness to its mass-assembly history and a crucial quantity for its early evolution. However, formation models are not yet able to predict reliably Si, making unjustified the use solely of traditional, `hot-start' cooling tracks to interpret direct-imaging results and calling for an observational determination of initial entropies to guide formation scenarios. Using a grid of models in mass and entropy, we show how to place joint constraints on the mass and initial entropy of an object from its observed luminosity and age. This generalizes the usual estimate of only a lower bound on the real mass, through hot-start tracks. Moreover, we demonstrate that with mass information, e.g. from dynamical-stability analyses or radial velocity, tighter bounds can be set on the initial entropy. We apply this procedure to 2M1207 b and find that its initial entropy is at least 9.2 kB/baryon, assuming that it does not burn deuterium. For the planets of the HR 8799 system, we infer that they must have formed with Si > 9.2 kB/baryon, independent of uncertainties about the age of the star. Finally, a similar analysis for β Pic b reveals that it must have formed with Si > 10.5 kB/baryon, using the radial-velocity mass upper limit. These initial entropy values are, respectively, ca. 0.7, 0.5 and 1.5 kB/baryon higher than the ones obtained from core-accretion models by Marley et al., thereby quantitatively ruling out the coldest starts for these objects and constraining warm starts, especially for β Pic b.

  4. Constraining the initial entropy of directly-detected exoplanets

    NASA Astrophysics Data System (ADS)

    Marleau, Gabriel-Dominique; Cumming, Andrew

    2013-07-01

    The post-mass-assembly, initial entropy Si of a gas giant planet is a key witness to its formation history and a crucial quantity for its early evolution. However, formation models are not yet able to predict reliably Si, making unjustified the use of traditional cooling tracks ("hot starts") to interpret direct imaging results and calling for an observational determination of initial entropies to guide formation scenarios. Using a grid of models in mass and entropy, we show how to place joint constraints on the mass and initial entropy of an object from its observed luminosity and age, highlighting that hot-start tracks only provide a lower limit on the real mass. Moreover, we demonstrate that with mass information, e.g. from dynamical stability analyses or radial velocity, tighter bounds can be set on the initial entropy. We apply this procedure to 2M1207 b and find that its initial entropy is at least 9.2 kB/baryon, assuming that it does not burn deuterium. For the planets of the HR 8799 system, we infer that they must have formed with Si > 9.2 kB/baryon, independent of uncertainties about the age of the star. Finally, a similar analysis for beta Pic b reveals that it must have formed with Si > 10.5 kB/baryon, using the radial-velocity mass upper limit. These initial entropy values are respectively ca. 0.7, 0.5, and 1.5 kB/baryon higher than the ones obtained from core accretion models by Marley et al. (2007), thereby quantitatively ruling out the coldest starts for these objects and constraining warm starts, especially for beta Pic b (Marleau & Cumming 2013, arXiv:1302.1517).

  5. Constraining primordial non-Gaussianity with future galaxy surveys

    NASA Astrophysics Data System (ADS)

    Giannantonio, Tommaso; Porciani, Cristiano; Carron, Julien; Amara, Adam; Pillepich, Annalisa

    2012-06-01

    We study the constraining power on primordial non-Gaussianity of future surveys of the large-scale structure of the Universe, for both near-term surveys (such as the Dark Energy Survey, DES) and longer-term projects such as Euclid and WFIRST. Specifically we perform a Fisher matrix analysis forecast for such surveys, using DES-like and Euclid-like configurations as examples, and take account of any expected photometric and spectroscopic data. We focus on two-point statistics and consider three observables: the 3D galaxy power spectrum in redshift space, the angular galaxy power spectrum and the projected weak-lensing shear power spectrum. We study the effects of adding a few extra parameters to the basic Λ cold dark matter (ΛCDM) set. We include the two standard parameters to model the current value for the dark-energy equation of state and its time derivative, w0, wa, and we account for the possibility of primordial non-Gaussianity of the local, equilateral and orthogonal types, characterized by the parameter fNL and, optionally, by its spectral index. We present forecasted constraints on these parameters using the different observational probes. We show that accounting for models that include primordial non-Gaussianity does not degrade the constraint on the standard ΛCDM set nor on the dark-energy equation of state. By combining the weak-lensing data and the information on projected galaxy clustering, consistently including all two-point functions and their covariance, we find a forecasted marginalized error σ(fNL) ~ 3 from a Euclid-like survey for the local shape of primordial non-Gaussianity, while the orthogonal and equilateral constraints are weakened for the galaxy clustering case, due to the weaker scale dependence of the bias. In the lensing case, the constraints remain instead similar in all configurations.
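    A stripped-down Fisher forecast for a single power-spectrum-like observable, assuming Gaussian, uncorrelated errors per bin and a toy model; the survey-specific ingredients (bias, shot noise, lensing kernels) are omitted.

        # Minimal Fisher-matrix forecast: F_ij = sum_k dmu/dp_i * dmu/dp_j / sigma_k^2.
        import numpy as np

        k_bins = np.linspace(0.02, 0.2, 20)
        sigma = 0.05 * np.ones_like(k_bins)          # per-bin measurement errors (toy)

        def model(p, k):
            amp, tilt, fnl = p
            # toy observable: a power law plus a scale-dependent-bias-like term in fNL
            return amp * k**tilt * (1.0 + fnl * 0.01 / k)

        p_fid = np.array([1.0, -1.0, 0.0])
        eps = 1e-4
        derivs = []
        for i in range(len(p_fid)):
            dp = np.zeros_like(p_fid); dp[i] = eps
            derivs.append((model(p_fid + dp, k_bins) - model(p_fid - dp, k_bins)) / (2 * eps))
        derivs = np.array(derivs)

        F = np.einsum('ik,jk->ij', derivs / sigma, derivs / sigma)
        marginalized = np.sqrt(np.diag(np.linalg.inv(F)))
        print("sigma(amp, tilt, fNL) =", marginalized)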

  6. Constraining dark energy fluctuations with supernova correlations

    SciTech Connect

    Blomqvist, Michael; Enander, Jonas; Mörtsell, Edvard E-mail: enander@fysik.su.se

    2010-10-01

    We investigate constraints on dark energy fluctuations using type Ia supernovae. If dark energy is not in the form of a cosmological constant, that is if the equation of state w≠−1, we expect not only temporal, but also spatial variations in the energy density. Such fluctuations would cause local variations in the universal expansion rate and directional dependences in the redshift-distance relation. We present a scheme for relating a power spectrum of dark energy fluctuations to an angular covariance function of standard candle magnitude fluctuations. The predictions for a phenomenological model of dark energy fluctuations are compared to observational data in the form of the measured angular covariance of Hubble diagram magnitude residuals for type Ia supernovae in the Union2 compilation. The observational result is consistent with zero dark energy fluctuations. However, due to the limitations in statistics, current data still allow for quite general dark energy fluctuations as long as they are in the linear regime.

  7. Prediction of noise constrained optimum takeoff procedures

    NASA Technical Reports Server (NTRS)

    Padula, S. L.

    1980-01-01

    An optimization method is used to predict safe, maximum-performance takeoff procedures which satisfy noise constraints at multiple observer locations. The takeoff flight is represented by two-degree-of-freedom dynamical equations with aircraft angle-of-attack and engine power setting as control functions. The engine thrust, mass flow and noise source parameters are assumed to be given functions of the engine power setting and aircraft Mach number. Effective Perceived Noise Levels at the observers are treated as functionals of the control functions. The method is demonstrated by applying it to an Advanced Supersonic Transport aircraft design. The results indicate that automated takeoff procedures (continuously varying controls) can be used to significantly reduce community and certification noise without jeopardizing safety or degrading performance.

  8. Building a predictive model of galaxy formation - I. Phenomenological model constrained to the z = 0 stellar mass function

    NASA Astrophysics Data System (ADS)

    Benson, Andrew J.

    2014-11-01

    We constrain a highly simplified semi-analytic model of galaxy formation using the z ≈ 0 stellar mass function of galaxies. Particular attention is paid to assessing the role of random and systematic errors in the determination of stellar masses, to systematic uncertainties in the model, and to correlations between bins in the measured and modelled stellar mass functions, in order to construct a realistic likelihood function. We derive constraints on model parameters and explore which aspects of the observational data constrain particular parameter combinations. We find that our model, once constrained, provides a remarkable match to the measured evolution of the stellar mass function to z = 1, although it fails dramatically to match the local galaxy H I mass function. Several `nuisance parameters' contribute significantly to uncertainties in model predictions. In particular, systematic errors in stellar mass estimates are the dominant source of uncertainty in model predictions at z ≈ 1, with additional, non-negligible contributions arising from systematic uncertainties in halo mass functions and the residual uncertainties in cosmological parameters. Ignoring any of these sources of uncertainty could lead to viable models being erroneously ruled out. Additionally, we demonstrate that ignoring the significant covariance between bins of the observed stellar mass function leads to significant biases in the constraints derived on model parameters. Careful treatment of systematic and random errors in the constraining data, and in the model being constrained, is crucial if this methodology is to be used to test hypotheses relating to the physics of galaxy formation.
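    The covariance point can be illustrated with a short sketch, assuming a hypothetical binned residual vector and a simple nearest-neighbour correlation: neglecting the off-diagonal terms changes the chi-squared, and hence the inferred parameter constraints.

        # Chi-squared of a binned observable with and without bin-to-bin covariance.
        import numpy as np

        rng = np.random.default_rng(3)
        nbin = 10
        residual = rng.normal(scale=0.1, size=nbin)         # model minus data, per bin

        sigma = 0.1 * np.ones(nbin)
        rho = 0.5                                           # hypothetical neighbour correlation
        C = np.diag(sigma**2)
        for i in range(nbin - 1):
            C[i, i + 1] = C[i + 1, i] = rho * sigma[i] * sigma[i + 1]

        chi2_full = residual @ np.linalg.solve(C, residual)
        chi2_diag = np.sum((residual / sigma)**2)
        print(f"chi2 with covariance: {chi2_full:.2f},  diagonal-only: {chi2_diag:.2f}")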

  9. Constraining East Antarctic mass trends using a Bayesian inference approach

    NASA Astrophysics Data System (ADS)

    Martin-Español, Alba; Bamber, Jonathan L.

    2016-04-01

    East Antarctica is an order of magnitude larger than its western neighbour and the Greenland ice sheet. It has the greatest potential to contribute to sea level rise of any source, including non-glacial contributors. It is, however, the most challenging ice mass to constrain because of a range of factors including the relative paucity of in-situ observations and the poor signal to noise ratio of Earth Observation data such as satellite altimetry and gravimetry. A recent study using satellite radar and laser altimetry (Zwally et al. 2015) concluded that the East Antarctic Ice Sheet (EAIS) had been accumulating mass at a rate of 136±28 Gt/yr for the period 2003-08. Here, we use a Bayesian hierarchical model, which has been tested on, and applied to, the whole of Antarctica, to investigate the impact of different assumptions regarding the origin of elevation changes of the EAIS. We combined GRACE, satellite laser and radar altimeter data and GPS measurements to solve simultaneously for surface processes (primarily surface mass balance, SMB), ice dynamics and glacio-isostatic adjustment over the period 2003-13. The hierarchical model partitions mass trends between SMB and ice dynamics based on physical principles and measures of statistical likelihood. Without imposing the division between these processes, the model apportions about a third of the mass trend to ice dynamics, +18 Gt/yr, and two thirds, +39 Gt/yr, to SMB. The total mass trend for that period for the EAIS was 57±20 Gt/yr. Over the period 2003-08, we obtain an ice dynamic trend of 12 Gt/yr and a SMB trend of 15 Gt/yr, with a total mass trend of 27 Gt/yr. We then imposed the condition that the surface mass balance is tightly constrained by the regional climate model RACMO2.3 and allowed height changes due to ice dynamics to occur in areas of low surface velocities (<10 m/yr) , such as those in the interior of East Antarctica (a similar condition as used in Zwally 2015). The model must find a solution that

  10. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    PubMed Central

    Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J

    2011-01-01

    In this work, tensile tests and one-dimensional constitutive modeling are performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigate the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles are performed during each test. The material is observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5 MPa to 4.2 MPa is observed for the constrained displacement recovery experiments. After performing the experiments, the Chen and Lagoudas model is used to simulate and predict the experimental results. The material properties used in the constitutive model – namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction – are calibrated from a single 10% extension free recovery experiment. The model is then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data. PMID:22003272

  11. Constraining the Magnetic Field of HAT-P-16b via Near-UV Photometry

    NASA Astrophysics Data System (ADS)

    Pearson, Kyle; Turner, J. D.; Sagan, T.

    2013-10-01

    We present the first primary transit light curve of the hot Jupiter HAT-P-16b in the near-UV photometric band. We observed this object on December 29, 2012 in order to update the transit ephemeris, constrain its planetary parameters and search for magnetic field interference. Vidotto et al. (2011a) postulate that the magnetic field of HAT-P-16b can be constrained if its near-UV light curve shows an early ingress compared to its optical light curve, while its egress remains unchanged. However, we did not detect an early ingress on our night of observing, using a cadence of 60 seconds and achieving an average photometric precision of 2.26 mmag. We find a near-UV planetary radius of Rp = 1.274 ± 0.057 RJup, consistent with its near-IR radius of Rp = 1.289 ± 0.066 RJup (Buchhave et al., 2010). We developed an automated reduction pipeline (ExoDRPL) and a modeling package (EXOMOP) to process our data. The data reduction package synthesizes a set of IRAF scripts to calibrate images and perform aperture photometry. The modeling package utilizes the Levenberg-Marquardt minimization algorithm to find a least-squares best fit and a differential evolution Markov Chain Monte Carlo algorithm to find the best fit to the light curve. To constrain the red noise in both fitting models we use the residual permutation (rosary bead) and time-averaging methods.
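    A hedged sketch of the residual-permutation ("rosary bead") idea on a generic smooth dip model with synthetic data; it is not the EXOMOP implementation. Residuals from the best fit are cyclically shifted, re-added to the model, and refit, and the scatter of the refit parameters estimates the red-noise contribution.

        # Residual-permutation (rosary bead) error estimate on a toy transit-like dip.
        import numpy as np
        from scipy.optimize import curve_fit

        def dip(t, depth, t0, width):
            return 1.0 - depth * np.exp(-0.5 * ((t - t0) / width)**2)

        rng = np.random.default_rng(4)
        t = np.linspace(-0.1, 0.1, 200)
        flux = dip(t, 0.01, 0.0, 0.02) + rng.normal(scale=0.002, size=t.size)

        p0 = (0.01, 0.0, 0.02)
        popt, _ = curve_fit(dip, t, flux, p0=p0)
        resid = flux - dip(t, *popt)

        depths = []
        for shift in range(0, t.size, 10):              # cyclic shifts of the residuals
            shifted = dip(t, *popt) + np.roll(resid, shift)
            p_shift, _ = curve_fit(dip, t, shifted, p0=popt)
            depths.append(p_shift[0])

        print(f"depth = {popt[0]:.4f}, red-noise scatter ~ {np.std(depths):.4f}")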

  12. Constrained growth flips the direction of optimal phenological responses among annual plants.

    PubMed

    Lindh, Magnus; Johansson, Jacob; Bolmgren, Kjell; Lundström, Niklas L P; Brännström, Åke; Jonzén, Niclas

    2016-03-01

    Phenological changes among plants due to climate change are well documented, but often hard to interpret. In order to assess the adaptive value of observed changes, we study how annual plants with and without growth constraints should optimize their flowering time when productivity and season length change. We consider growth constraints that depend on the plant's vegetative mass: self-shading, costs for nonphotosynthetic structural tissue and sibling competition. We derive the optimal flowering time from a dynamic energy allocation model using optimal control theory. We prove that an immediate switch (bang-bang control) from vegetative to reproductive growth is optimal with constrained growth and constant mortality. Increasing mean productivity, while keeping season length constant and growth unconstrained, delayed the optimal flowering time. When growth was constrained and productivity was relatively high, the optimal flowering time advanced instead. When the growth season was extended equally at both ends, the optimal flowering time was advanced under constrained growth and delayed under unconstrained growth. Our results suggest that growth constraints are key factors to consider when interpreting phenological flowering responses. They can help to explain phenological patterns along productivity gradients and link empirical observations made on calendar scales with life-history theory. PMID:26548947

  13. Constrained postures and spatial S-R compatibility as measured by the Simon effect.

    PubMed

    Kreutzfeldt, Magali; Leisten, Marco; Müsseler, Jochen

    2015-07-01

    Whereas working under constrained postures is known to influence the worker's perceived comfort and health, little is known about its influence on performance. Employing an auditory Simon task while varying posture, we investigated the relationship between constrained postures and cognitive processes in three experiments. In Experiments 1 and 2, participants operated a rocker switch or a control knob with one hand, either in front of or behind their body, while either sitting or kneeling. Perceived musculoskeletal exertion was gathered with a questionnaire. Results of the first two experiments showed differences in perceived comfort and a minor effect of constrained posture on cognitive performance. However, results indicated that spatial coding in the back compares to either a virtual turn of the observer towards the control device (front-device coding) or coding along the observer's hand (effector coding). To clarify this issue, the rocker switch was operated with one or two hands in Experiment 3, showing a comparable coding only in the one-hand condition and indicating evidence for the effector-coding hypothesis in the back. PMID:25139464

  14. Energy losses in thermally cycled optical fibers constrained in small bend radii

    SciTech Connect

    Guild, Eric; Morelli, Gregg

    2012-09-23

    High energy laser pulses were fired into a 365 μm diameter fiber optic cable constrained in small radii of curvature bends, resulting in a catastrophic failure. Q-switched laser pulses from a flashlamp-pumped Nd:YAG laser were injected into the cables, and the spatial intensity profile at the exit face of the fiber was observed using an infrared camera. The transmission of the radiation through the tight radii resulted in an asymmetric intensity profile, with one half of the fiber core having a higher peak-to-average energy distribution. Prior to testing, the cables were thermally conditioned while constrained in the small radii of curvature bends. Single-bend, double-bend, and U-shaped geometries were tested to characterize various cable routing scenarios.

  15. Improving Ocean Angular Momentum Estimates Using a Model Constrained by Data

    NASA Technical Reports Server (NTRS)

    Ponte, Rui M.; Stammer, Detlef; Wunsch, Carl

    2001-01-01

    Ocean angular momentum (OAM) calculations using forward model runs without any data constraints have recently revealed the effects of OAM variability on the Earth's rotation. Here we use an ocean model and its adjoint to estimate OAM values by constraining the model to available oceanic data. The optimization procedure yields substantial changes in OAM, related to adjustments in both motion and mass fields, as well as in the wind stress torques acting on the ocean. Constrained and unconstrained OAM values are discussed in the context of closing the planet's angular momentum budget. The estimation procedure yields noticeable improvements in the agreement with the observed Earth rotation parameters, particularly at the seasonal timescale. The comparison with Earth rotation measurements provides an independent consistency check on the estimated ocean state and underlines the importance of ocean state estimation for quantitative studies of the variable large-scale oceanic mass and circulation fields, including studies of OAM.

  16. Gravitational-wave limits from pulsar timing constrain supermassive black hole evolution.

    PubMed

    Shannon, R M; Ravi, V; Coles, W A; Hobbs, G; Keith, M J; Manchester, R N; Wyithe, J S B; Bailes, M; Bhat, N D R; Burke-Spolaor, S; Khoo, J; Levin, Y; Osłowski, S; Sarkissian, J M; van Straten, W; Verbiest, J P W; Wang, J-B

    2013-10-18

    The formation and growth processes of supermassive black holes (SMBHs) are not well constrained. SMBH population models, however, provide specific predictions for the properties of the gravitational-wave background (GWB) from binary SMBHs in merging galaxies throughout the universe. Using observations from the Parkes Pulsar Timing Array, we constrain the fractional GWB energy density (Ω_GW) with 95% confidence to be Ω_GW (H0/73 kilometers per second per megaparsec)^2 < 1.3 × 10^-9 (where H0 is the Hubble constant) at a frequency of 2.8 nanohertz, which is approximately a factor of 6 more stringent than previous limits. We compare our limit to models of the SMBH population and find inconsistencies at confidence levels between 46 and 91%. For example, the standard galaxy formation model implemented in the Millennium Simulation Project is inconsistent with our limit with 50% probability. PMID:24136962

  17. CONSTRAINING THREE-DIMENSIONAL MAGNETIC FIELD EXTRAPOLATIONS USING THE TWIN PERSPECTIVES OF STEREO

    SciTech Connect

    Conlon, Paul A.; Gallagher, Peter T.

    2010-05-20

    The three-dimensional magnetic topology of a solar active region (NOAA 10956) was reconstructed using a linear force-free field extrapolation constrained using the twin perspectives of STEREO. A set of coronal field configurations was initially generated from extrapolations of the photospheric magnetic field observed by the Michelson Doppler Imager on SOHO. Using an EUV intensity-based cost function, the extrapolated field lines that were most consistent with 171 Å passband images from the Extreme UltraViolet Imager on STEREO were identified. This facilitated quantitative constraints to be placed on the twist (α) of the extrapolated field lines, where ∇ × B = αB. Using the constrained values of α, the evolution in time of twist, connectivity, and magnetic energy were then studied. A flux emergence event was found to result in significant changes in the magnetic topology and total magnetic energy of the region.

  18. Constraining the Masses and the Non-radial Drag Coefficient of a Solar Coronal Mass Ejection

    NASA Astrophysics Data System (ADS)

    Kay, C.; dos Santos, L. F. G.; Opher, M.

    2015-03-01

    Decades of observations show that coronal mass ejections (CMEs) can deflect from a purely radial trajectory; however, no consensus exists as to the cause of these deflections. Many theories attribute CME deflection to magnetic forces. We developed Forecasting a CME's Altered Trajectory (ForeCAT), a model for CME deflections based solely on magnetic forces, neglecting any reconnection effects. Here, we compare ForeCAT predictions to the observed deflection of the 2008 December 12 CME and find that ForeCAT can accurately reproduce the observations. Multiple observations show that this CME deflected nearly 30° in latitude and 4.4° in longitude. From the observations, we are able to constrain all of the ForeCAT input parameters (initial position, radial propagation speed, and expansion) except the CME mass and the drag coefficient that affects the CME motion. By minimizing the reduced chi-squared, χ²_ν, between the ForeCAT results and the observations, we determine an acceptable mass range between 4.5 × 10^14 and 1 × 10^15 g and a drag coefficient less than 1.4, with a best fit at 7.5 × 10^14 g and 0 for the mass and drag coefficient, respectively. ForeCAT is sensitive to the magnetic background, and we are also able to constrain the rate at which the quiet-Sun magnetic field falls off to be similar to, or slightly slower than, that of the Potential Field Source Surface model.
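    A schematic of the grid search described above, assuming a toy forward model in place of ForeCAT: the reduced chi-squared between predicted and observed deflection is evaluated over a (mass, drag-coefficient) grid and the acceptable region read off. All numbers and the toy parameterization are illustrative.

        # Reduced chi-squared grid over two model parameters (toy stand-in for ForeCAT).
        import numpy as np

        obs_times = np.linspace(1.0, 10.0, 8)                       # hours, illustrative
        obs_deflection = 30.0 * (1 - np.exp(-obs_times / 4.0))      # degrees, illustrative
        obs_err = 1.5 * np.ones_like(obs_times)

        def toy_deflection(mass_1e14g, drag, t):
            # Lighter CMEs and stronger drag deflect more in this toy parameterization.
            return 30.0 * (1 - np.exp(-t / 4.0)) * (7.5 / mass_1e14g)**0.3 * (1 + 0.05 * drag)

        masses = np.linspace(3.0, 12.0, 60)        # in units of 1e14 g
        drags = np.linspace(0.0, 2.0, 60)
        dof = obs_times.size - 2

        chi2_nu = np.empty((masses.size, drags.size))
        for i, m in enumerate(masses):
            for j, c in enumerate(drags):
                r = (toy_deflection(m, c, obs_times) - obs_deflection) / obs_err
                chi2_nu[i, j] = np.sum(r**2) / dof

        i_best, j_best = np.unravel_index(np.argmin(chi2_nu), chi2_nu.shape)
        print(f"best fit: mass ~ {masses[i_best]:.1f}e14 g, drag ~ {drags[j_best]:.2f}, "
              f"chi2_nu = {chi2_nu[i_best, j_best]:.2f}")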

  19. Using seismically constrained magnetotelluric inversion to recover velocity structure in the shallow lithosphere

    NASA Astrophysics Data System (ADS)

    Moorkamp, M.; Fishwick, S.; Jones, A. G.

    2015-12-01

    Typical surface wave tomography can recover the velocity structure of the upper mantle well in the depth range between 70-200 km. For a successful inversion, we have to constrain the crustal structure and assess the impact on the resulting models. In addition, we often observe potentially interesting features in the uppermost lithosphere which are poorly resolved, and thus their interpretation has to be approached with great care. We are currently developing a seismically constrained magnetotelluric (MT) inversion approach with the aim of better recovering the lithospheric properties (and thus seismic velocities) in these problematic areas. We perform a 3D MT inversion constrained by a fixed seismic velocity model from surface wave tomography. In order to avoid strong bias, we only utilize information on structural boundaries to combine these two methods. Within the region that is well resolved by both methods, we can then extract a velocity-conductivity relationship. By translating the conductivities retrieved from MT into velocities in areas where the velocity model is poorly resolved, we can generate an updated velocity model and test what impact the updated velocities have on the predicted data. We test this new approach using an MT dataset acquired in central Botswana over the Okwa terrane and the adjacent Kaapvaal and Zimbabwe Cratons, together with tomographic models for the region. Here, both datasets have previously been used to constrain lithospheric structure and show some similarities. We carefully assess the validity of our results by comparing with observations and petrophysical predictions for the conductivity-velocity relationship.

  20. Constraining f(R) theories with cosmography

    SciTech Connect

    Pannia, Florencia Anabella Teppa

    2013-08-01

    A method to set constraints on the parameters of extended theories of gravitation is presented. It is based on the comparison of two series expansions of any observable that depends on H(z). The first expansion is of the cosmographical type, while the second uses the dependence of H with z furnished by a given type of extended theory. When applied to f(R) theories together with the redshift drift, the method yields limits on the parameters of two examples (the theory of Hu and Sawicki [1], and the exponential gravity introduced by Linder [2]) that are compatible with or more stringent than the existing ones, as well as a limit for a previously unconstrained parameter.

  1. Constraining Emission Models of Luminous Blazar Sources

    SciTech Connect

    Sikora, Marek; Stawarz, Lukasz; Moderski, Rafal; Nalewajko, Krzysztof; Madejski, Greg; /KIPAC, Menlo Park /SLAC

    2009-10-30

    Many luminous blazars which are associated with quasar-type active galactic nuclei display broad-band spectra characterized by a large luminosity ratio of their high-energy (γ-ray) and low-energy (synchrotron) spectral components. This large ratio, reaching values up to 100, challenges the standard synchrotron self-Compton models by means of substantial departures from the minimum power condition. Luminous blazars also typically have very hard X-ray spectra, and those in turn seem to challenge hadronic scenarios for the high-energy blazar emission. As shown in this paper, no such problems are faced by the models which involve Comptonization of radiation provided by a broad-line region or dusty molecular torus. The lack or weakness of bulk Compton and Klein-Nishina features indicated by the presently available data favors production of γ-rays via up-scattering of infrared photons from hot dust. This implies that the blazar emission zone is located at parsec-scale distances from the nucleus, and as such is possibly associated with the extended, quasi-stationary reconfinement shocks formed in relativistic outflows. This scenario predicts characteristic timescales for flux changes in luminous blazars of days to weeks, consistent with the variability patterns observed in such systems at infrared, optical and γ-ray frequencies. We also propose that the parsec-scale blazar activity can occasionally be accompanied by dissipative events taking place at sub-parsec distances and powered by internal shocks and/or reconnection of magnetic fields. These could account for the multiwavelength intra-day flares occasionally observed in powerful blazar sources.

  2. Viscoelasticity Using Reactive Constrained Solid Mixtures

    PubMed Central

    Ateshian, Gerard A.

    2015-01-01

    This study presents a framework for viscoelasticity where the free energy density depends on the stored energy of intact strong and weak bonds, where weak bonds break and reform in response to loading. The stress is evaluated by differentiating the free energy density with respect to the deformation gradient, similar to the conventional approach for hyperelasticity. The breaking and reformation of weak bonds is treated as a reaction governed by the axiom of mass balance, where the constitutive relation for the mass supply governs the bond kinetics. The evolving mass contents of these weak bonds serve as observable state variables. Weak bonds reform in an energy-free and stress-free state, therefore their reference configuration is given by the current configuration at the time of their reformation. A principal advantage of this formulation is the availability of a strain energy density function that depends only on observable state variables, also allowing for a separation of the contributions of strong and weak bonds. The Clausius-Duhem inequality is satisfied by requiring that the net free energy from all breaking bonds must be decreasing at all times. In the limit of infinitesimal strains, linear stress-strain responses and first-order kinetics for breaking and reforming of weak bonds, the reactive framework reduces exactly to classical linear viscoelasticity. For large strains, the reactive and classical quasilinear viscoelasticity theories produce different equations, though responses to standard loading configurations behave similarly. This formulation complements existing tools for modeling the nonlinear viscoelastic response of biological soft tissues under large deformations. PMID:25757663

  3. Viscoelasticity using reactive constrained solid mixtures.

    PubMed

    Ateshian, Gerard A

    2015-04-13

    This study presents a framework for viscoelasticity where the free energy density depends on the stored energy of intact strong and weak bonds, where weak bonds break and reform in response to loading. The stress is evaluated by differentiating the free energy density with respect to the deformation gradient, similar to the conventional approach for hyperelasticity. The breaking and reformation of weak bonds is treated as a reaction governed by the axiom of mass balance, where the constitutive relation for the mass supply governs the bond kinetics. The evolving mass contents of these weak bonds serve as observable state variables. Weak bonds reform in an energy-free and stress-free state, therefore their reference configuration is given by the current configuration at the time of their reformation. A principal advantage of this formulation is the availability of a strain energy density function that depends only on observable state variables, also allowing for a separation of the contributions of strong and weak bonds. The Clausius-Duhem inequality is satisfied by requiring that the net free energy from all breaking bonds must be decreasing at all times. In the limit of infinitesimal strains, linear stress-strain responses and first-order kinetics for breaking and reforming of weak bonds, the reactive framework reduces exactly to classical linear viscoelasticity. For large strains, the reactive and classical quasilinear viscoelasticity theories produce different equations, though responses to standard loading configurations behave similarly. This formulation complements existing tools for modeling the nonlinear viscoelastic response of biological soft tissues under large deformations. PMID:25757663

  4. Constraining Cometary Crystal Shapes from IR Spectral Features

    NASA Astrophysics Data System (ADS)

    Wooden, D. H.; Lindsay, S.; Harker, D. E.; Kelley, M. S.; Woodward, C. E.; Murphy, J. R.

    2013-12-01

    A major challenge in deriving the silicate mineralogy of comets is ascertaining how the anisotropic nature of forsterite crystals affects the spectral features' wavelength, relative intensity, and asymmetry. Forsterite features are identified in cometary comae near 10, 11.05-11.2, 16, 19, 23.5, 27.5 and 33 μm [1-10], so accurate models for forsterite's absorption efficiency (Qabs) are a primary requirement to compute IR spectral energy distributions (SEDs, λFλ vs. λ) and constrain the silicate mineralogy of comets. Forsterite is an anisotropic crystal, with three crystallographic axes with distinct indices of refraction for the a-, b-, and c-axis. The shape of a forsterite crystal significantly affects its spectral features [13-16]. We need models that account for crystal shape. The IR absorption efficiencies of forsterite are computed using the discrete dipole approximation (DDA) code DDSCAT [11,12]. Starting from a fiducial crystal shape of a cube, we systematically elongate/reduce one of the crystallographic axes. Also, we elongate/reduce one axis while the lengths of the other two axes are slightly asymmetric (0.8:1.2). The most significant grain shape characteristic that affects the crystalline spectral features is the relative lengths of the crystallographic axes. The second significant grain shape characteristic is breaking the symmetry of all three axes [17]. Synthetic spectral energy distributions using seven crystal shape classes [17] are fit to the observed SED of comet C/1995 O1 (Hale-Bopp). The Hale-Bopp crystalline residual better matches equant, b-platelets, c-platelets, and b-columns spectral shape classes, while a-platelets, a-columns and c-columns worsen the spectral fits. Forsterite condensation and partial evaporation experiments demonstrate that environmental temperature and grain shape are connected [18-20]. Thus, grain shape is a potential probe for protoplanetary disk temperatures where the cometary crystalline forsterite formed. The

  5. Right-Left Approach and Reaching Arm Movements of 4-Month Infants in Free and Constrained Conditions

    ERIC Educational Resources Information Center

    Morange-Majoux, Francoise; Dellatolas, Georges

    2010-01-01

    Recent theories on the evolution of language (e.g. Corballis, 2009) emphazise the interest of early manifestations of manual laterality and manual specialization in human infants. In the present study, left- and right-hand movements towards a midline object were observed in 24 infants aged 4 months in a constrained condition, in which the hands…

  6. Constrained iterations for blind deconvolution and convexity issues

    NASA Astrophysics Data System (ADS)

    Spaletta, Giulia; Caucci, Luca

    2006-12-01

    The need for image restoration arises in many applications of various scientific disciplines, such as medicine and astronomy and, in general, whenever an unknown image must be recovered from blurred and noisy data [M. Bertero, P. Boccacci, Introduction to Inverse Problems in Imaging, Institute of Physics Publishing, Philadelphia, PA, USA, 1998]. The algorithm studied in this work restores the image without the knowledge of the blur, using little a priori information and a blind inverse filter iteration. It represents a variation of the methods proposed in Kundur and Hatzinakos [A novel blind deconvolution scheme for image restoration using recursive filtering, IEEE Trans. Signal Process. 46(2) (1998) 375-390] and Ng et al. [Regularization of RIF blind image deconvolution, IEEE Trans. Image Process. 9(6) (2000) 1130-1134]. The problem of interest here is an inverse one, that cannot be solved by simple filtering since it is ill-posed. The imaging system is assumed to be linear and space-invariant: this allows a simplified relationship between unknown and observed images, described by a point spread function modeling the distortion. The blurring, though, makes the restoration ill-conditioned: regularization is therefore also needed, obtained by adding constraints to the formulation. The restoration is modeled as a constrained minimization: particular attention is given here to the analysis of the objective function and on establishing whether or not it is a convex function, whose minima can be located by classic optimization techniques and descent methods. Numerical examples are applied to simulated data and to real data derived from various applications. Comparison with the behavior of methods [D. Kundur, D. Hatzinakos, A novel blind deconvolution scheme for image restoration using recursive filtering, IEEE Trans. Signal Process. 46(2) (1998) 375-390] and [M. Ng, R.J. Plemmons, S. Qiao, Regularization of RIF Blind Image Deconvolution, IEEE Trans. Image Process. 9

  7. Constraining The Reionization History With QSO Absorption Spectra

    NASA Astrophysics Data System (ADS)

    Gallerani, S.; Choudhury, T. R.; Ferrara, A.

    2006-08-01

    We use a semi-analytical approach to simulate absorption spectra of QSOs at high redshifts with the aim of constraining the cosmic reionization history. We consider two physically motivated and detailed reionization histories: (i) an Early Reionization Model (ERM) in which the intergalactic medium is reionized by PopIII stars at z~14, and (ii) a more standard Late Reionization Model (LRM) in which overlapping, induced by QSOs and normal galaxies, occurs at z~6. An example of simulated spectra is provided in Fig. 1. From the analysis of current Lyα forest data at z<6, we conclude that it is impossible to disentangle the two scenarios, which fit equally well the observed Gunn-Peterson optical depth, flux probability distribution function and dark gap width distribution. At z>6, however, clear differences start to emerge which are best quantified by the dark gap width distribution. We find that 35 (zero) per cent of the lines of sight within 5.7<z<6.3 show dark gaps of widths >50Å in the rest frame of the QSO if reionization is not (is) complete at z>~6 (Fig. 2). Similarly, the ERM predicts peaks of width ~1Å in 40 per cent of the lines of sight in the redshift range 6.0-6.6; in the same range, the LRM predicts no peaks of width >0.8Å (Fig. 3). We conclude that the dark gap and peak width statistics represent superb probes of cosmic reionization if about ten QSOs can be found at z>6.

  8. Constraining the reionization history with QSO absorption spectra

    NASA Astrophysics Data System (ADS)

    Gallerani, S.; Choudhury, T. Roy; Ferrara, A.

    2006-08-01

    We use a semi-analytical approach to simulate absorption spectra of QSOs at high redshifts with the aim of constraining the cosmic reionization history. We consider two physically motivated and detailed reionization histories: (i) an early reionization model (ERM) in which the intergalactic medium is reionized by Pop III stars at z ~ 14, and (ii) a more standard late reionization model (LRM) in which overlapping, induced by QSOs and normal galaxies, occurs at z ~ 6. From the analysis of current Lyα forest data at z < 6, we conclude that it is impossible to disentangle the two scenarios, which fit equally well the observed Gunn-Peterson optical depth, flux probability distribution function and dark gap width distribution. At z > 6, however, clear differences start to emerge which are best quantified by the dark gap and peak width distributions. We find that 35 (0) per cent of the lines of sight (LOS) within 5.7 < z < 6.3 show dark gaps of widths >50Å in the rest frame of the QSO if reionization is not (is) complete at z >~ 6. Similarly, the ERM predicts peaks of width ~1Å in 40 per cent of the LOS in the redshift range 6.0-6.6; in the same range, the LRM predicts no peaks of width >0.8Å. We conclude that the dark gap and peak width statistics represent superb probes of cosmic reionization if about ten QSOs can be found at z > 6. We finally discuss strengths and limitations of our method.
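    A small sketch of the dark-gap statistic on a synthetic normalized flux array, assuming a simple flux threshold to define "dark" pixels (the threshold, wavelength grid, and spectrum are illustrative): contiguous runs below the threshold are collected and their widths measured.

        # Dark-gap widths in a synthetic normalized QSO spectrum.
        import numpy as np

        rng = np.random.default_rng(5)
        wavelength = np.linspace(8000.0, 8500.0, 5000)          # Angstrom, illustrative
        flux = np.clip(rng.normal(0.2, 0.15, wavelength.size), 0, None)
        flux[1200:1900] = 0.0                                   # inject a long absorbed stretch

        dark = flux < 0.08                                      # threshold defining "dark" pixels
        edges = np.flatnonzero(np.diff(dark.astype(int)))
        bounds = np.concatenate(([0], edges + 1, [dark.size]))
        gap_widths = [wavelength[hi - 1] - wavelength[lo]
                      for lo, hi in zip(bounds[:-1], bounds[1:]) if dark[lo]]

        print(f"{len(gap_widths)} gaps, widest ~ {max(gap_widths):.1f} Angstrom")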

  9. Analyzing and constraining signaling networks: parameter estimation for the user.

    PubMed

    Geier, Florian; Fengos, Georgios; Felizzi, Federico; Iber, Dagmar

    2012-01-01

    The behavior of most dynamical models not only depends on the wiring but also on the kind and strength of interactions which are reflected in the parameter values of the model. The predictive value of mathematical models therefore critically hinges on the quality of the parameter estimates. Constraining a dynamical model by an appropriate parameterization follows a 3-step process. In an initial step, it is important to evaluate the sensitivity of the parameters of the model with respect to the model output of interest. This analysis points to the identifiability of model parameters and can guide the design of experiments. In the second step, the actual fitting needs to be carried out. This step requires special care as, on the one hand, noisy as well as partial observations can corrupt the identification of system parameters. On the other hand, the solution of the dynamical system usually depends in a highly nonlinear fashion on its parameters and, as a consequence, parameter estimation procedures get easily trapped in local optima. Therefore any useful parameter estimation procedure has to be robust and efficient with respect to both challenges. In the final step, it is important to assess the validity of the optimized model. A number of reviews have been published on the subject. A good, nontechnical overview is provided by Jaqaman and Danuser (Nat Rev Mol Cell Biol 7(11):813-819, 2006) and a classical introduction, focussing on the algorithmic side, is given in Press (Numerical recipes: The art of scientific computing, Cambridge University Press, 3rd edn., 2007, Chapters 10 and 15). We will focus on the practical issues related to parameter estimation and use a model of the TGFβ-signaling pathway as an educative example. Corresponding parameter estimation software and models based on MATLAB code can be downloaded from the authors' web page ( http://www.bsse.ethz.ch/cobi ). PMID:23361979
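    A compact illustration of the fitting step and the advice about local optima, assuming a toy two-parameter ODE rather than the TGFβ pathway model: several random initial guesses are passed to a least-squares fit of the simulated trajectory, and the best run is kept to reduce the risk of stopping in a local optimum.

        # Multi-start least-squares fit of a toy ODE model (not the TGF-beta model).
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        def simulate(params, t_eval):
            k_on, k_off = params
            rhs = lambda t, y: [k_on * (1.0 - y[0]) - k_off * y[0]]
            sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [0.0], t_eval=t_eval)
            return sol.y[0]

        rng = np.random.default_rng(6)
        t = np.linspace(0, 10, 30)
        data = simulate([0.8, 0.3], t) + rng.normal(scale=0.02, size=t.size)

        best = None
        for _ in range(10):                                   # multi-start to dodge local optima
            x0 = rng.uniform(0.05, 2.0, size=2)
            fit = least_squares(lambda p: simulate(p, t) - data, x0, bounds=(1e-6, 10.0))
            if best is None or fit.cost < best.cost:
                best = fit

        print("estimated (k_on, k_off):", np.round(best.x, 3))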

  10. A moving mesh unstaggered constrained transport scheme for magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Mocz, Philip; Pakmor, Rüdiger; Springel, Volker; Vogelsberger, Mark; Marinacci, Federico; Hernquist, Lars

    2016-08-01

    We present a constrained transport (CT) algorithm for solving the 3D ideal magnetohydrodynamic (MHD) equations on a moving mesh, which maintains the divergence-free condition on the magnetic field to machine precision. Our CT scheme uses an unstructured representation of the magnetic vector potential, making the numerical method simple and computationally efficient. The scheme is implemented in the moving mesh code AREPO. We demonstrate the performance of the approach with simulations of driven MHD turbulence, a magnetized disc galaxy, and a cosmological volume with a primordial magnetic field. We compare the outcomes of these experiments to those obtained with a previously implemented Powell divergence-cleaning scheme. While CT and the Powell technique yield similar results in idealized test problems, some differences are seen in situations more representative of astrophysical flows. In the turbulence simulations, the Powell cleaning scheme artificially grows the mean magnetic field, while CT maintains this conserved quantity of ideal MHD. In the disc simulation, CT gives a slower magnetic field growth rate and saturates at equipartition between the turbulent kinetic energy and magnetic energy, whereas Powell cleaning produces a dynamically dominant magnetic field. Such a difference has been observed in adaptive-mesh-refinement codes with CT and smoothed-particle hydrodynamics codes with divergence cleaning. In the cosmological simulation, both approaches give similar magnetic amplification, but Powell cleaning exhibits more cell-level noise. CT methods in general are more accurate than divergence-cleaning techniques and, when coupled to a moving mesh, can exploit the advantages of automatic spatial/temporal adaptivity and reduced advection errors, allowing for improved astrophysical MHD simulations.
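
    The core property of constrained transport, that a magnetic field defined as the discrete curl of a vector potential is divergence-free to round-off, can be seen in a few lines on a toy staggered Cartesian grid (Python/NumPy sketch of the underlying identity only, not the paper's unstructured moving-mesh scheme).

        import numpy as np

        # 2D toy: B = curl(A) with A = A_z(x, y) e_z, so Bx = dA_z/dy and By = -dA_z/dx.
        n, h = 64, 1.0 / 64
        xc = np.arange(n + 1) * h
        X, Y = np.meshgrid(xc, xc, indexing="ij")
        Az = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)   # A_z on cell corners

        Bx = (Az[:, 1:] - Az[:, :-1]) / h      # x-face field from edge differences of A_z
        By = -(Az[1:, :] - Az[:-1, :]) / h     # y-face field

        # Discrete divergence per cell: net flux through the four faces.
        divB = (Bx[1:, :] - Bx[:-1, :]) / h + (By[:, 1:] - By[:, :-1]) / h
        print("max |div B| =", np.abs(divB).max())   # zero up to round-off error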

  11. Constraining the nature of the asthenosphere

    NASA Astrophysics Data System (ADS)

    Fahy, E. H.; Hall, P.; Faul, U.

    2010-12-01

    Geophysical observations indicate that the oceanic upper mantle has relatively low seismic velocities, high seismic attenuation, and high electrical conductivity at depths of ~80-200 km. These depths coincide with the rheologically weak layer known as the asthenosphere. Three hypotheses have been proposed to account for these observations: 1) the presence of volatiles, namely water, in the oceanic upper mantle; 2) the presence of small-degree partial melts in the oceanic upper mantle; and 3) variations in the physical properties of dry, melt-free peridotite with temperature and pressure. Each of these hypotheses suggests a characteristic distribution of volatiles and melt in the upper mantle, resulting in corresponding spatial variations in viscosity, density, seismic structure, and electrical conductivity. These viscosity and density scenarios can also lead to variations in the onset time and growth rate of thermal instabilities at the base of the overriding lithosphere, which can in turn affect heat flow, bathymetry, and seismic structure. We report on the results of a series of computational geodynamic experiments that evaluate the dynamical consequences of each of the three proposed scenarios. Experiments were conducted using the CitcomCU finite element package to model the evolution of the oceanic lithosphere and flow in the underlying mantle. Our model domain consists of 2048x256x64 elements, corresponding to physical dimensions of 12,800x1600x400 km. These dimensions allow us to consider oceanic lithosphere to ages of ~150 Ma. We adopt the composite rheology law from Billen & Hirth (2007), which combines diffusion and dislocation creep mechanisms, and consider a range of rheological parameters (e.g., activation energy, activation volume, grain size) as obtained from laboratory deformation experiments [e.g. Hirth & Kohlstedt, 2003]. Melting and volatile content within the model domain are tracked using a Lagrangian particle scheme. Variations in depletion
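
    The composite diffusion-plus-dislocation creep rheology referred to above can be sketched as follows (Python/NumPy; the strain-rate form of the flow law is the standard one, but the parameter values are illustrative placeholders, not the published values of Billen & Hirth (2007) or Hirth & Kohlstedt (2003)).

        import numpy as np

        R = 8.314  # gas constant, J/(mol K)

        def creep_viscosity(T, P, edot, A, n, E, V, d=1e-3, m=0.0):
            """Viscosity of one creep mechanism in the strain-rate form
            eta = A**(-1/n) * d**(m/n) * edot**((1-n)/n) * exp((E + P*V)/(n*R*T)).
            Diffusion creep: n = 1, m > 0 (grain-size sensitive).
            Dislocation creep: n > 1, m = 0."""
            return (A ** (-1.0 / n) * d ** (m / n) * edot ** ((1.0 - n) / n)
                    * np.exp((E + P * V) / (n * R * T)))

        def composite_viscosity(T, P, edot, diff, disl):
            """Strain rates of the two mechanisms add at a common stress,
            so their inverse viscosities add (harmonic combination)."""
            return 1.0 / (1.0 / creep_viscosity(T, P, edot, **diff)
                          + 1.0 / creep_viscosity(T, P, edot, **disl))

        # Illustrative numbers only (roughly dry-olivine-like orders of magnitude)
        diff = dict(A=1.5e9, n=1.0, E=375e3, V=5e-6, d=1e-2, m=3.0)
        disl = dict(A=1.1e5, n=3.5, E=530e3, V=14e-6)
        print(composite_viscosity(T=1600.0, P=5e9, edot=1e-15, diff=diff, disl=disl))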

  12. Quantum field theory constrains traversable wormhole geometries

    SciTech Connect

    Ford, L.H.; Roman, T.A.

    1996-05-01

    Recently a bound on negative energy densities in four-dimensional Minkowski spacetime was derived for a minimally coupled, quantized, massless, scalar field in an arbitrary quantum state. The bound has the form of an uncertainty-principle-type constraint on the magnitude and duration of the negative energy density seen by a timelike geodesic observer. When spacetime is curved and/or has boundaries, we argue that the bound should hold in regions small compared to the minimum local characteristic radius of curvature or the distance to any boundaries, since spacetime can be considered approximately Minkowski on these scales. We apply the bound to the stress-energy of static traversable wormhole spacetimes. Our analysis implies that either the wormhole must be only a little larger than Planck size or that there is a large discrepancy in the length scales which characterize the wormhole. In the latter case, the negative energy must typically be concentrated in a thin band many orders of magnitude smaller than the throat size. These results would seem to make the existence of macroscopic traversable wormholes very improbable. © 1996 The American Physical Society.
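
    For orientation, the uncertainty-principle-type bound referred to above is usually quoted, in the commonly cited Ford-Roman form for a geodesic observer sampling with a Lorentzian function of width \tau_0 (stated here as the standard textbook expression rather than as an excerpt from this paper), as

        \hat{\rho} \;\equiv\; \frac{\tau_0}{\pi}\int_{-\infty}^{+\infty}
            \frac{\langle T_{\mu\nu} u^{\mu} u^{\nu}\rangle}{\tau^{2}+\tau_0^{2}}\, d\tau
            \;\ge\; -\,\frac{3}{32\pi^{2}\tau_0^{4}},

    i.e. roughly \hat{\rho}\,\tau_0^{4} \gtrsim -3/(32\pi^{2}): the more negative the sampled energy density along the observer's worldline, the shorter the time \tau_0 over which it can persist.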

  13. Constraining the Statistics of Population III Binaries

    NASA Technical Reports Server (NTRS)

    Stacy, Athena; Bromm, Volker

    2012-01-01

    We perform a cosmological simulation in order to model the growth and evolution of Population III (Pop III) stellar systems in a range of host minihalo environments. A Pop III multiple system forms in each of the ten minihaloes, and the overall mass function is top-heavy compared to the currently observed initial mass function in the Milky Way. Using a sink particle to represent each growing protostar, we examine the binary characteristics of the multiple systems, resolving orbits on scales as small as 20 AU. We find a binary fraction of approx. 36 per cent, with semi-major axes as large as 3000 AU. The distribution of orbital periods is slightly peaked at less than approx. 900 yr, while the distribution of mass ratios is relatively flat. Of all sink particles formed within the ten minihaloes, approx. 50 per cent are lost to mergers with larger sinks, and approx. 50 per cent of the remaining sinks are ejected from their star-forming disks. The large binary fraction may have important implications for Pop III evolution and nucleosynthesis, as well as the final fate of the first stars.

  14. Constraining the aerosol influence on cloud fraction

    NASA Astrophysics Data System (ADS)

    Gryspeerdt, E.; Quaas, J.; Bellouin, N.

    2016-04-01

    Aerosol-cloud interactions have the potential to modify many different cloud properties. There is significant uncertainty in the strength of these aerosol-cloud interactions in analyses of observational data, partly due to the difficulty in separating aerosol effects on clouds from correlations generated by local meteorology. The relationship between aerosol and cloud fraction (CF) is particularly important to determine, due to the strong correlation of CF to other cloud properties and its large impact on radiation. It has also been one of the hardest to quantify from satellites due to the strong meteorological covariations involved. This work presents a new method to analyze the relationship between aerosol optical depth (AOD) and CF. By including information about the cloud droplet number concentration (CDNC), the impact of the meteorological covariations is significantly reduced. This method shows that much of the AOD-CF correlation is explained by relationships other than that mediated by CDNC. By accounting for these, the strength of the global mean AOD-CF relationship is reduced by around 80%. This suggests that the majority of the AOD-CF relationship is due to meteorological covariations, especially in the shallow cumulus regime. Requiring CDNC to mediate the AOD-CF relationship implies an effective anthropogenic radiative forcing from an aerosol influence on liquid CF of -0.48 W m-2 (-0.1 to -0.64 W m-2), although some uncertainty remains due to possible biases in the CDNC retrievals in broken cloud scenes.
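
    A minimal illustration of the mediation idea, splitting the raw AOD-CF sensitivity into a CDNC-mediated part and a residual, is sketched below (Python/NumPy; the simple global linear regressions and variable names are illustrative assumptions, not the published, regime-resolved analysis).

        import numpy as np

        def mediated_fraction(ln_aod, ln_cdnc, cf):
            """Fraction of the total d(CF)/d(ln AOD) sensitivity routed through
            droplet number: (dCF/dlnN) * (dlnN/dlnAOD) / (dCF/dlnAOD)."""
            total = np.polyfit(ln_aod, cf, 1)[0]              # raw AOD-CF slope
            dcf_dlnN = np.polyfit(ln_cdnc, cf, 1)[0]          # CF response to CDNC
            dlnN_dlnaod = np.polyfit(ln_aod, ln_cdnc, 1)[0]   # CDNC response to AOD
            return dcf_dlnN * dlnN_dlnaod / total

        # A value of ~0.2 would mean ~80 per cent of the apparent AOD-CF
        # relationship is not mediated by CDNC (cf. the reduction quoted above).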

  15. Constraining Deep Earth Structure Using Tidal Tomography

    NASA Astrophysics Data System (ADS)

    Lau, H. C.; Tromp, J.; Ishii, M.; Mitrovica, J. X.; Yang, H.; Davis, J. L.; Latychev, K.

    2013-12-01

    The solid Earth's response to luni-solar tidal forcing, as measured by space-geodetic and/or seismic techniques, has the potential to provide important and novel constraints on the long-wavelength density and elastic structure of the deep mantle, as well as on anelastic behavior at tidal frequencies. Here we describe a normal-mode theory for computing the semi-diurnal and long-period body tide response of a 3-D, rotating and anelastic Earth. The new theory provides a framework for incorporating body tide observations to infer deep Earth structure using tomographic methods, and, in this regard, it extends our earlier numerical formulation of this problem (Latychev et al., EPSL, 2008). To begin, we use normal-mode theory to treat the response of spherically symmetric, elastic Earth models, and demonstrate that the theory accurately reproduces the tidal Love numbers widely used in body-tide calculations. Next, we compute the body tide response of 3-D Earth models and benchmark these results against a finite-element formulation of this response. We also present results of an analysis which explores the sensitivity of the predictions to different models of mantle Q. Finally, we present preliminary inferences of deep mantle elastic and density structure based, in part, on the semi-diurnal tidal (radial displacement) response estimated from a network of GPS stations. We discuss the implications of these results for the structure and buoyancy of deep mantle LLSVPs.
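
    For reference, the GPS-measured quantity entering such an inference reduces, in the spherically symmetric limit, to the familiar Love-number response (a standard relation, not a formula taken from this abstract): the radial body-tide displacement at colatitude and longitude (\theta, \phi) is

        u_r(\theta,\phi) \;=\; h_2\,\frac{V_2(\theta,\phi)}{g},

    with V_2 the degree-2 tidal potential, g the surface gravity, and h_2 the degree-2 Love number; lateral (3-D) density and elastic structure perturbs h_2 site by site, and it is these perturbations that the normal-mode theory described above is built to predict.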

  16. Constraining the physics of jet quenching

    NASA Astrophysics Data System (ADS)

    Renk, Thorsten

    2012-04-01

    Hard probes in the context of ultrarelativistic heavy-ion collisions represent a key class of observables studied to gain information about the QCD medium created in such collisions. However, in practice, so-called jet tomography has turned out to be more difficult than initially expected. One of the major obstacles in extracting reliable tomographic information from the data is that neither the parton-medium interaction nor the medium geometry is known with great precision, and thus a difference in model assumptions in the hard perturbative quantum chromodynamics (pQCD) modeling can usually be compensated by a corresponding change of assumptions in the soft bulk medium sector and vice versa. The only way to overcome this problem is to study the full systematics of combinations of parton-medium interaction and bulk medium evolution models. This work presents a meta-analysis summarizing results from a number of such systematic studies and discusses in detail how certain data sets provide specific constraints for models. Combining all available information, only a small group of models exhibiting certain characteristic features consistent with a pQCD picture of parton-medium interaction is found to be viable given the data. In this picture, the dominant mechanism is medium-induced radiation combined with a surprisingly small component of elastic energy transfer into the medium.

  17. Constraining the statistics of Population III binaries

    NASA Astrophysics Data System (ADS)

    Stacy, Athena; Bromm, Volker

    2013-08-01

    We perform a cosmological simulation in order to model the growth and evolution of Population III (Pop III) stellar systems in a range of host minihalo environments. A Pop III multiple system forms in each of the 10 minihaloes, and the overall mass function is top-heavy compared to the currently observed initial mass function in the Milky Way. Using a sink particle to represent each growing protostar, we examine the binary characteristics of the multiple systems, resolving orbits on scales as small as 20 au. We find a binary fraction of ˜35 per cent, with semi-major axes as large as 3000 au. The distribution of orbital periods is slightly peaked at ≲ 900 yr, while the distribution of mass ratios is relatively flat. Of all sink particles formed within the 10 minihaloes, ˜50 per cent are lost to mergers with larger sinks, and ˜50 per cent of the remaining sinks are ejected from their star-forming discs. The large binary fraction may have important implications for Pop III evolution and nucleosynthesis, as well as the final fate of the first stars.

  18. Observation of high-energy neutrinos using Cerenkov detectors embedded deep in Antarctic ice.

    PubMed

    Andrés, E; Askebjer, P; Bai, X; Barouch, G; Barwick, S W; Bay, R C; Becker, K H; Bergström, L; Bertrand, D; Bierenbaum, D; Biron, A; Booth, J; Botner, O; Bouchta, A; Boyce, M M; Carius, S; Chen, A; Chirkin, D; Conrad, J; Cooley, J; Costa, C G; Cowen, D F; Dailing, J; Dalberg, E; DeYoung, T; Desiati, P; Dewulf, J P; Doksus, P; Edsjö, J; Ekström, P; Erlandsson, B; Feser, T; Gaug, M; Goldschmidt, A; Goobar, A; Gray, L; Haase, H; Hallgren, A; Halzen, F; Hanson, K; Hardtke, R; He, Y D; Hellwig, M; Heukenkamp, H; Hill, G C; Hulth, P O; Hundertmark, S; Jacobsen, J; Kandhadai, V; Karle, A; Kim, J; Koci, B; Köpke, L; Kowalski, M; Leich, H; Leuthold, M; Lindahl, P; Liubarsky, I; Loaiza, P; Lowder, D M; Ludvig, J; Madsen, J; Marciniewski, P; Matis, H S; Mihalyi, A; Mikolajski, T; Miller, T C; Minaeva, Y; Miocinović, P; Mock, P C; Morse, R; Neunhöffer, T; Newcomer, F M; Niessen, P; Nygren, D R; Ogelman, H; Pérez de los Heros, C; Porrata, R; Price, P B; Rawlins, K; Reed, C; Rhode, W; Richards, A; Richter, S; Martino, J R; Romenesko, P; Ross, D; Rubinstein, H; Sander, H G; Scheider, T; Schmidt, T; Schneider, D; Schneider, E; Schwarz, R; Silvestri, A; Solarz, M; Spiczak, G M; Spiering, C; Starinsky, N; Steele, D; Steffen, P; Stokstad, R G; Streicher, O; Sun, Q; Taboada, I; Thollander, L; Thon, T; Tilav, S; Usechak, N; Vander Donckt, M; Walck, C; Weinheimer, C; Wiebusch, C H; Wischnewski, R; Wissing, H; Woschnagg, K; Wu, W; Yodh, G; Young, S

    2001-03-22

    Neutrinos are elementary particles that carry no electric charge and have little mass. As they interact only weakly with other particles, they can penetrate enormous amounts of matter, and therefore have the potential to directly convey astrophysical information from the edge of the Universe and from deep inside the most cataclysmic high-energy regions. The neutrino's great penetrating power, however, also makes this particle difficult to detect. Underground detectors have observed low-energy neutrinos from the Sun and a nearby supernova, as well as neutrinos generated in the Earth's atmosphere. But the very low fluxes of high-energy neutrinos from cosmic sources can be observed only by much larger, expandable detectors in, for example, deep water or ice. Here we report the detection of upwardly propagating atmospheric neutrinos by the ice-based Antarctic muon and neutrino detector array (AMANDA). These results establish a technology with which to build a kilometre-scale neutrino observatory necessary for astrophysical observations. PMID:11260705

  19. Constraining scalar-tensor theories of gravity from the most massive neutron stars

    NASA Astrophysics Data System (ADS)

    Palenzuela, Carlos; Liebling, Steven L.

    2016-02-01

    Scalar-tensor (ST) theories of gravity are natural phenomenological extensions to general relativity. Although these theories are severely constrained both by solar system experiments and by binary pulsar observations, a large set of ST families remain consistent with these observations. Recent work has suggested probing the unconstrained region of the parameter space of ST theories based on the stability properties of highly compact neutron stars. Here, the dynamical evolution of very compact stars in a fully nonlinear code demonstrates that the stars do become unstable and that the instability, in some cases, drives the stars to collapse. We discuss the implications of these results in light of recent observations of