Science.gov

Sample records for amanda observations constrain

  1. Geometrically constrained observability [control theory]

    NASA Technical Reports Server (NTRS)

    Brammer, R. F.

    1974-01-01

    This paper deals with observed processes in situations in which observations are available only when the state vector lies in certain regions. For linear autonomous observed processes, necessary and sufficient conditions are obtained for half-space observation regions. These results are shown to contain a theorem dual to a controllability result proved by the author for a linear autonomous control system whose control restraint set does not contain the origin as an interior point. Observability results relating to continuous observation systems and sampled data systems are presented, and an example of observing the state of an electrical network is given.
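
    For reference, the unconstrained notion underlying this record is rank observability of a linear autonomous system x' = Ax, y = Cx; the sketch below shows only that standard rank test (with arbitrary illustrative A and C), not Brammer's half-space construction.

```python
import numpy as np

# Standard observability rank test for x' = A x, y = C x.
# A and C are arbitrary illustrative values, not taken from the paper.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
# Observability matrix O = [C; CA; ...; C A^(n-1)]
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print("rank(O) =", np.linalg.matrix_rank(O), "of", n)
# Full rank means y(t) determines the state; the paper treats the harder
# case where y is available only while the state lies in a half-space.
```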

  2. Cloudsat Satellite Images of Amanda

    NASA Video Gallery

    NASA's CloudSat satellite flew over Hurricane Amanda on May 25, at 5 p.m. EDT and saw a deep area of moderate to heavy-moderate precipitation below the freezing level (where precipitation changes f...

  3. Constraining the braneworld with gravitational wave observations.

    PubMed

    McWilliams, Sean T

    2010-04-01

    Some braneworld models may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model (RS2), the AdS radius of curvature, l, of the extra dimension supports a single bound state of the massless graviton on the brane, thereby reproducing Newtonian gravity in the weak-field limit. However, using the AdS/CFT correspondence, it has been suggested that one possible consequence of RS2 is an enormous increase in Hawking radiation emitted by black holes. We utilize this possibility to derive two novel methods for constraining l via gravitational wave measurements. We show that the EMRI event rate detected by LISA can constrain l at the ~1 μm level for optimal cases, while the observation of a single galactic black hole binary with LISA results in an optimal constraint of l ≤ 5 μm. PMID:20481929

  4. Constraining dark matter through 21-cm observations

    NASA Astrophysics Data System (ADS)

    Valdés, M.; Ferrara, A.; Mapelli, M.; Ripamonti, E.

    2007-05-01

    Beyond the reionization epoch, cosmic hydrogen is neutral and can be directly observed through its 21-cm line signal. If dark matter (DM) decays or annihilates, the corresponding energy input affects the hydrogen kinetic temperature and ionized fraction, and contributes to the Lyα background. The changes induced by these processes on the 21-cm signal can then be used to constrain the proposed DM candidates, among which we select the three most popular ones: (i) 25-keV decaying sterile neutrinos, (ii) 10-MeV decaying light dark matter (LDM) and (iii) 10-MeV annihilating LDM. Although we find that the DM effects are considerably smaller than found by previous studies (due to a more physical description of the energy transfer from DM to the gas), we conclude that combined observations of the 21-cm background and of its gradient should be able to put constraints at least on LDM candidates. In fact, LDM decays (annihilations) induce differential brightness temperature variations with respect to the non-decaying/annihilating DM case up to ΔδTb = 8 (22) mK at about 50 (15) MHz. In principle, this signal could be detected both by current single-dish radio telescopes and future facilities such as the Low Frequency Array; however, this assumes that ionospheric, interference and foreground issues can be properly taken care of.
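
    For orientation, the 21-cm signal discussed here is usually expressed through the differential brightness temperature against the CMB; the following is the standard textbook form (not quoted from this paper), through which DM heating and ionization enter via the neutral fraction x_HI, the spin temperature T_S, and the Lyα coupling:

```latex
\[
\delta T_b \simeq 27\, x_{\rm HI}\,(1+\delta)
\left(1-\frac{T_{\rm CMB}}{T_S}\right)
\left(\frac{1+z}{10}\right)^{1/2}
\left(\frac{0.15}{\Omega_m h^2}\right)^{1/2}
\left(\frac{\Omega_b h^2}{0.023}\right)\ \mathrm{mK}
\]
```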

  5. Constraining the halo mass function with observations

    NASA Astrophysics Data System (ADS)

    Castro, Tiago; Marra, Valerio; Quartin, Miguel

    2016-08-01

    The abundances of dark matter halos in the universe are described by the halo mass function (HMF). It enters most cosmological analyses and parametrizes how the linear growth of primordial perturbations is connected to these abundances. Interestingly, this connection can be made approximately cosmology independent. This has made it possible to map its near-universal behavior in detail through large-scale simulations. However, such simulations may suffer from systematic effects, especially if baryonic physics is included. In this paper we ask how well observations can directly constrain the HMF. The observables we consider are galaxy cluster number counts, the galaxy cluster power spectrum and lensing of type Ia supernovae. Our results show that DES is capable of putting the first meaningful constraints on the HMF, while both Euclid and J-PAS can give stronger constraints, comparable to the ones from state-of-the-art simulations. We also find that an independent measurement of cluster masses is even more important for measuring the HMF than for constraining the cosmological parameters, and can vastly improve the determination of the halo mass function. Measuring the HMF could thus be used to cross-check simulations and their implementation of baryon physics. It could even, if deviations cannot be accounted for, hint at new physics.
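
    For context, the near-universal behavior referred to here is conventionally expressed by writing the HMF through a multiplicity function f(σ) of the linear mass variance σ(M, z) (standard convention, not specific to this paper):

```latex
\[
\frac{dn}{dM} = f(\sigma)\,\frac{\bar{\rho}_m}{M}\,\frac{d\ln\sigma^{-1}}{dM},
\qquad
\sigma^2(M,z) = \frac{1}{2\pi^2}\int_0^{\infty} k^2\,P(k,z)\,W^2(kR)\,dk
\]
```

    Approximate universality means f(σ) is nearly the same function across cosmologies and redshifts, which is what allows simulation-calibrated fits and, here, direct observational constraints to be compared on a common footing.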

  6. Constraining CO emission estimates using atmospheric observations

    NASA Astrophysics Data System (ADS)

    Hooghiemstra, P. B.

    2012-06-01

    We apply a four-dimensional variational (4D-Var) data assimilation system to optimize carbon monoxide (CO) emissions and to reduce the uncertainty of emission estimates from individual sources using the chemistry transport model TM5. In the first study only a limited set of surface network observations from the National Oceanic and Atmospheric Administration Earth System Research Laboratory (NOAA/ESRL) Global Monitoring Division (GMD) is used to test the 4D-Var system. An uncertainty reduction of up to 60% in yearly emissions is observed over well-constrained regions, and the inferred emissions compare well with recent studies for 2004. However, since the observations only constrain total CO emissions, the 4D-Var system has difficulties separating anthropogenic and biogenic sources in particular. The inferred emissions are validated with NOAA aircraft data over North America and the agreement is significantly improved from the prior to the posterior simulation. Validation with the Measurements Of Pollution In The Troposphere (MOPITT) instrument shows a slightly improved agreement over the well-constrained Northern Hemisphere and in the tropics (except for the African continent). However, the model simulation with posterior emissions underestimates MOPITT CO total columns in the remote Southern Hemisphere (SH) by about 10%. This is caused by a reduction in SH CO sources, mainly due to surface stations at high southern latitudes. In the second study, we compare two global inversions to estimate CO emissions for 2004. Either surface flask observations from NOAA or CO total columns from the MOPITT instrument are assimilated in a 4D-Var framework. In the SH, three important findings are reported. First, due to their different vertical sensitivity, the stations-only inversion increases SH biomass burning emissions by 108 Tg CO/yr more than the MOPITT-only inversion. Conversely, the MOPITT-only inversion results in SH natural emissions
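
    The 4D-Var step referred to above minimizes, schematically (generic form, not the TM5-specific implementation), a cost function of the type:

```latex
\[
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
 + \tfrac{1}{2}\sum_i \bigl(H_i(\mathbf{x})-\mathbf{y}_i\bigr)^{\mathrm T}\mathbf{R}_i^{-1}\bigl(H_i(\mathbf{x})-\mathbf{y}_i\bigr)
\]
```

    Here x holds the CO emissions, x_b is the prior inventory, B and R_i are the prior and observation error covariances, and H_i maps emissions to the observed quantities (surface mixing ratios or MOPITT columns) at time i; the uncertainty reductions quoted above follow from the curvature of J at its minimum.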

  7. Constraining Numerical Geodynamo Modeling with Surface Observations

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Tangborn, Andrew

    2006-01-01

    Numerical dynamo solutions have traditionally been generated entirely by a set of self-consistent differential equations that govern the spatial-temporal variation of the magnetic field, velocity field and other fields related to dynamo processes. In particular, those solutions are obtained with parameters very different from those appropriate for the Earth's core. Geophysical application of the numerical results therefore depends on correct understanding of the differences (errors) between the model outputs and the true states (truth) in the outer core. Part of the truth can be observed at the surface in the form of the poloidal magnetic field. To understand these differences, or errors, we generate a new initial model state (the analysis) by sequentially assimilating the model outputs with the surface geomagnetic observations using an optimal interpolation scheme. The time evolution of the core state is then controlled by our MoSST core dynamics model. The final outputs (forecasts) are then compared with the surface observations as a means to test the success of the assimilation. We use surface geomagnetic data back to the year 1900 for our studies, with 5-year forecast and 20-year analysis periods. We intend to use the results to understand the time variation of the errors with the assimilation sequences, and the impact of the assimilation on other unobservable quantities, such as the toroidal field and the fluid velocity in the core.
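
    The optimal-interpolation analysis step mentioned here has, in generic form (a schematic, not the MoSST-specific operators), the structure:

```latex
\[
\mathbf{x}_a = \mathbf{x}_f + \mathbf{K}\,(\mathbf{y} - \mathbf{H}\mathbf{x}_f),
\qquad
\mathbf{K} = \mathbf{P}_f\mathbf{H}^{\mathrm T}\bigl(\mathbf{H}\mathbf{P}_f\mathbf{H}^{\mathrm T} + \mathbf{R}\bigr)^{-1}
\]
```

    Here x_f is the model forecast of the core state, y the observed surface poloidal field, H the observation operator, and P_f, R the forecast and observation error covariances; the analysis x_a is the new initial state that the dynamo model then evolves forward.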

  8. 3-D TRMM Flyby of Hurricane Amanda

    NASA Video Gallery

    The TRMM satellite flew over Hurricane Amanda on Tuesday, May 27 at 1049 UTC (6:49 a.m. EDT) and captured rainfall rates and cloud height data that was used to create this 3-D simulated flyby. Cred...

  9. Constraining the noncommutative spectral action via astrophysical observations.

    PubMed

    Nelson, William; Ochoa, Joseph; Sakellariadou, Mairi

    2010-09-01

    The noncommutative spectral action extends our familiar notion of commutative spaces, using the data encoded in a spectral triple on an almost commutative space. Varying a rather simple action, one can derive all of the standard model of particle physics in this setting, in addition to a modified version of Einstein-Hilbert gravity. In this Letter we use observations of pulsar timings, assuming that no deviation from general relativity has been observed, to constrain the gravitational sector of this theory. While the bounds on the coupling constants remain rather weak, they are comparable to existing bounds on deviations from general relativity in other settings and are likely to be further constrained by future observations.

  10. Search for point sources of high energy neutrinos with Amanda

    SciTech Connect

    Ahrens, J.

    2002-08-01

    Report of a search for likely point sources of neutrinos observed by the AMANDA detector; places intensity limits on observable point sources. This paper describes the search for astronomical sources of high-energy neutrinos using the AMANDA-B10 detector, an array of 302 photomultiplier tubes, used for the detection of Cherenkov light from upward traveling neutrino-induced muons, buried deep in ice at the South Pole. The absolute pointing accuracy and angular resolution were studied by using coincident events between the AMANDA detector and two independent telescopes on the surface, the GASP air Cherenkov telescope and the SPASE extensive air shower array. Using data collected from April to October of 1997 (130.1 days of livetime), a general survey of the northern hemisphere revealed no statistically significant excess of events from any direction. The sensitivity for a flux of muon neutrinos is based on the effective detection area for through-going muons. Averaged over the Northern sky, the effective detection area exceeds 10,000 m^2 for E_μ ~ 10 TeV. Neutrinos generated in the atmosphere by cosmic ray interactions were used to verify the predicted performance of the detector. For a source with a differential energy spectrum proportional to E_ν^-2 and declination larger than +40°, we obtain E^2 (dN_ν/dE) ≤ 10^-6 GeV cm^-2 s^-1 for an energy threshold of 10 GeV.

  11. Constraining interacting dark energy models with latest cosmological observations

    NASA Astrophysics Data System (ADS)

    Xia, Dong-Mei; Wang, Sai

    2016-11-01

    The local measurement of H0 is in tension with the prediction of the Λ cold dark matter (ΛCDM) model based on the Planck data. This tension may imply that dark energy is strengthened in the late-time Universe. We employ the latest cosmological observations of the cosmic microwave background, baryon acoustic oscillations, large-scale structure, supernovae, H(z) and H0 to constrain several interacting dark energy models. Our results show no significant indications for an interaction between dark energy and dark matter. The H0 tension can be moderately alleviated, but not fully resolved.

  12. Constraining interacting dark energy models with latest cosmological observations

    NASA Astrophysics Data System (ADS)

    Xia, Dong-Mei; Wang, Sai

    2016-08-01

    The local measurement of H0 is in tension with the prediction of the ΛCDM model based on the Planck data. This tension may imply that dark energy is strengthened in the late-time Universe. We employ the latest cosmological observations of the CMB, BAO, LSS, SNe, H(z) and H0 to constrain several interacting dark energy models. Our results show no significant indications for an interaction between dark energy and dark matter. The H0 tension can be moderately alleviated, but not fully resolved.

  13. Traversable geometric dark energy wormholes constrained by astrophysical observations

    NASA Astrophysics Data System (ADS)

    Wang, Deng; Meng, Xin-he

    2016-09-01

    In this paper, we introduce astrophysical observations into wormhole research. We investigate the evolution of the dark energy equation-of-state parameter ω by constraining the dark energy model, so that we can determine in which stage of the universe wormholes can exist, using the condition ω < -1. As a concrete instance, we study Ricci dark energy (RDE) traversable wormholes constrained by astrophysical observations. In particular, we find (Fig. 5 of this work) that when the effective equation-of-state parameter ω_X < -1 (i.e., z < 0.109), so that the null energy condition (NEC) is clearly violated, wormholes can exist (open). Subsequently, six specific solutions of static, spherically symmetric traversable wormholes supported by the RDE fluid are obtained. Except for the case of a constant redshift function, whose solution is not only asymptotically flat but also traversable, the five remaining solutions are all non-asymptotically flat; therefore, the exotic matter from the RDE fluid is spatially distributed in the vicinity of the throat. Furthermore, we analyze the physical characteristics and properties of the RDE traversable wormholes. It is worth noting that, using the astrophysical observations, we obtain constraints on the parameters of the RDE model, explore the types of exotic RDE fluids in different stages of the universe, limit the number of models available for wormhole research, reduce theoretically the number of wormholes corresponding to different parameters of the RDE model, and provide a clearer picture for wormhole investigations from the new perspective of observational cosmology.

  14. Constraining the volatile fraction of planets from transit observations

    NASA Astrophysics Data System (ADS)

    Alibert, Y.

    2016-06-01

    Context. The determination of the abundance of volatiles in extrasolar planets is very important as it can provide constraints on transport in protoplanetary disks and on the formation location of planets. However, constraining the internal structure of low-mass planets from transit measurements is known to be a degenerate problem. Aims: Using planetary structure and evolution models, we show how observations of transiting planets can be used to constrain their internal composition, in particular the amount of volatiles in the planetary interior, and consequently the amount of gas (defined in this paper to be only H and He) that the planet harbors. We first explore planets that are located close enough to their star to have lost their gas envelope. We then concentrate on planets at larger distances and show that the observation of transiting planets at different evolutionary ages can provide statistical information on their internal composition, in particular on their volatile fraction. Methods: We computed the evolution of low-mass planets (super-Earths to Neptune-like) for different fractions of volatiles and gas. We used a four-layer model (core, silicate mantle, icy mantle, and gas envelope) and computed the internal structure of planets for different luminosities. With this internal structure model, we computed the internal and gravitational energy of planets, which was then used to derive the time evolution of the planet. Since the total energy of a planet depends on its heat capacity and density distribution and therefore on its composition, planets with different ice fractions have different evolution tracks. Results: We show for low-mass gas-poor planets that are located close to their central star that assuming evaporation has efficiently removed the entire gas envelope, it is possible to constrain the volatile fraction of close-in transiting planets. We illustrate this method on the example of 55 Cnc e and show that under the assumption of the absence of

  15. Multiple Observation Types Jointly Constrain Terrestrial Carbon and Water Cycles

    NASA Astrophysics Data System (ADS)

    Raupach, M. R.; Haverd, V.; Briggs, P. R.; Canadell, J.; Davis, S. J.; Isaac, P. R.; Law, R.; Meyer, M.; Peters, G. P.; Pickett Heaps, C.; Roxburgh, S. H.; Sherman, B.; van Gorsel, E.; Viscarra Rossel, R.; Wang, Z.

    2012-12-01

    Information about the carbon cycle potentially constrains the water cycle, and vice versa. This paper explores the utility of multiple observation sets to constrain carbon and water fluxes and stores in a land surface model, and a resulting determination of the Australian terrestrial carbon budget. Observations include streamflow from 416 gauged catchments, measurements of evapotranspiration (ET) and net ecosystem production (NEP) from 12 eddy-flux sites, litterfall data, and data on carbon pools. The model is a version of CABLE (the Community Atmosphere-Biosphere-Land Exchange model), coupled with CASAcnp (a biogeochemical model) and SLI (Soil-Litter-Iso, a soil hydrology model including liquid and vapour water fluxes and the effects of litter). By projecting observation-prediction residuals onto model uncertainty, we find that eddy flux measurements provide a significantly tighter constraint on Australian continental net primary production (NPP) than the other data types. However, simultaneous constraint by multiple data types is important for mitigating bias from any single type. Results emerging from the multiply-constrained model are as follows (with all values applying over 1990-2011 and all ranges denoting ±1 standard error): (1) on the Australian continent, a predominantly semi-arid region, over half (0.64±0.05) of the water loss through ET occurs through soil evaporation and bypasses plants entirely; (2) mean Australian NPP is 2200±400 TgC/y, making the NPP/precipitation ratio about the same for Australia as the global land average; (3) annually cyclic ("grassy") vegetation and persistent ("woody") vegetation respectively account for 0.56±0.14 and 0.43±0.14 of NPP across Australia; (4) the average interannual variability of Australia's NEP (±180 TgC/y) is larger than Australia's total anthropogenic greenhouse gas emissions in 2011 (149 TgCeq/y), and is dominated by variability in desert and savannah regions. The mean carbon budget over 1990

  16. Fast Emission Estimates in China Constrained by Satellite Observations (Invited)

    NASA Astrophysics Data System (ADS)

    Mijling, B.; van der A, R.

    2013-12-01

    Emission inventories of air pollutants are crucial information for policy makers and form important input data for air quality models. Unfortunately, bottom-up emission inventories, compiled from large quantities of statistical data, are easily outdated for an emerging economy such as China, where rapid economic growth changes emissions accordingly. Alternatively, top-down emission estimates from satellite observations of air constituents have the important advantages of being spatially consistent, having high temporal resolution, and enabling emission updates shortly after the satellite data become available. Constraining emissions from concentration measurements is, however, computationally challenging. Within the GlobEmission project of the European Space Agency (ESA) a new algorithm has been developed, specifically designed for fast daily emission estimates of short-lived atmospheric species on a mesoscopic scale (0.25 × 0.25 degree) from satellite observations of column concentrations. The algorithm needs only one forward model run from a chemical transport model to calculate the sensitivity of concentration to emission, using trajectory analysis to account for transport away from the source. By using a Kalman filter in the inverse step, optimal use of the a priori knowledge and the newly observed data is made. We apply the algorithm for NOx emission estimates in East China, using the CHIMERE model together with tropospheric NO2 column retrievals of the OMI and GOME-2 satellite instruments. The observations are used to construct a monthly emission time series, which reveals important emission trends such as the emission reduction measures during the Beijing Olympic Games, and the impact of and recovery from the global economic crisis. The algorithm is also able to detect emerging sources (e.g. new power plants) and improve emission information for areas where proxy data are poorly known or unavailable (e.g. shipping emissions). The new emission estimates result in a better
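
    A minimal sketch of the kind of sequential Kalman-filter update described here (not the GlobEmission code; the dimensions, the sensitivity matrix H, and all numbers below are made-up placeholders, with H standing in for the CTM-plus-trajectory sensitivity of columns to emissions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_emis, n_obs = 4, 3                         # tiny illustrative dimensions

H = rng.uniform(0.1, 1.0, (n_obs, n_emis))   # assumed column-to-emission sensitivity

e = np.ones(n_emis)                          # a priori emissions (normalized)
P = 0.25 * np.eye(n_emis)                    # prior emission error covariance
Q = 0.01 * np.eye(n_emis)                    # process noise: emissions may drift
R = 0.04 * np.eye(n_obs)                     # column retrieval error covariance

for day in range(5):                         # loop over daily satellite overpasses
    P = P + Q                                # forecast: persistence + growing uncertainty
    # synthetic "observed" columns for the sketch (real input: OMI / GOME-2 NO2)
    y = H @ (1.2 * np.ones(n_emis)) + 0.02 * rng.standard_normal(n_obs)
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    e = e + K @ (y - H @ e)                  # analysis: updated emissions
    P = (np.eye(n_emis) - K @ H) @ P         # updated uncertainty

print("posterior emission scaling factors:", np.round(e, 3))
```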

  17. Using Jet Observations to Constrain Enceladus' Rotation State

    NASA Technical Reports Server (NTRS)

    Hurford, Terry A.; Porco, C. C.

    2011-01-01

    Observations of Enceladus have revealed active jets of material erupting from cracks on its surface. It has been proposed that diurnal tidal stress may open these cracks daily when they experience tensile stresses across them, allowing eruptions to occur. An analysis of the tidal stress on jet source regions, as identified by the triangulation of jet observations, finds that there is a correlation between observations and tensile stress on the cracks. However, not all regions are predicted to be in tension when jets were observed to be active. Enceladus' rotation state, such as a physical libration or obliquity, will affect the diurnal stresses on these cracks, changing when in its orbit they experience tension and compression. We will use observations of jet activity from 2005-2007 to place constraints on rotation states of Enceladus.

  18. Mercury's thermo-chemical evolution constrained by MESSENGER observations

    NASA Astrophysics Data System (ADS)

    Tosi, Nicola; Grott, Matthias; Breuer, Doris; Plesa, Ana-Catalina

    2013-04-01

    Low-degree coefficients of Mercury's gravity field as obtained from the MESSENGER Radio Science experiment, combined with estimates of Mercury's spin state, make it possible to compute the normalized polar moment of inertia of the planet (C/MR^2) as well as the ratio of the moment of inertia of the mantle to that of the planet (Cm/C). These two parameters provide a strong constraint on the internal mass distribution. With C/MR^2 = 0.346 and Cm/C = 0.431 [1], interior structure models predict a large core radius but also a large mantle density. The latter requirement can be met with a relatively standard composition of the silicate mantle for which a core radius of ~ 2000 km is expected [2]. Alternatively, the large density of the silicate shell has been interpreted as a consequence of the presence of a solid FeS layer that could form atop the liquid core under suitable temperature conditions [3]. According to this hypothesis, the thickness of the mantle would be reduced to only ~ 300 km. Additionally, the Gamma-Ray Spectrometer measured a surface abundance of U, Th and K, which hints at a bulk mantle composition comparable to other terrestrial planets [4]. Geological evidence also suggests that volcanism was a globally extensive process even after the late heavy bombardment (LHB) and that the northern plains were likely emplaced in a flood lava mode by high-temperature, low-viscosity lava. Finally, the analysis of previously unrecognized compressional tectonic features as revealed by recent MESSENGER images yielded revised estimates of the global planetary contraction, which is calculated to be as high as 4-5 km [5]. We employed the above pieces of information to constrain the thermal and magmatic history of Mercury with numerical simulations. Using 1D-parameterized thermo-chemical evolution models, we ran a large set of Monte-Carlo simulations (~ 10000) in which we systematically varied the thickness of the silicate shell, initial mantle and CMB temperatures, mantle rheology

  19. Constraining Galaxy Evolution Using Observed UV-Optical Spectra

    NASA Technical Reports Server (NTRS)

    Heap, Sally

    2007-01-01

    Our understanding of galaxy evolution depends on model spectra of stellar populations, and the models are only as good as the observed spectra and stellar parameters that go into them. We are therefore evaluating modern UV-optical model spectra using Hubble's Next Generation Spectral Library (NGSL) as the reference standard. The NGSL comprises intermediate-resolution (R ≈ 1000) STIS spectra of 378 stars having a wide range in metallicity and age. Unique features of the NGSL include its broad wavelength coverage (1,800-10,100 Å) and high-S/N, absolute spectrophotometry. We will report on a systematic comparison of model and observed UV-blue spectra, describe where on the HR diagram significant differences occur, and comment on current approaches to correct the models for these differences.

  20. HEATING OF FLARE LOOPS WITH OBSERVATIONALLY CONSTRAINED HEATING FUNCTIONS

    SciTech Connect

    Qiu Jiong; Liu Wenjuan; Longcope, Dana W.

    2012-06-20

    We analyze high-cadence high-resolution observations of a C3.2 flare obtained by AIA/SDO on 2010 August 1. The flare is a long-duration event with soft X-ray and EUV radiation lasting for over 4 hr. Analysis suggests that magnetic reconnection and formation of new loops continue for more than 2 hr. Furthermore, the UV 1600 Å observations show that each of the individual pixels at the feet of flare loops is brightened instantaneously on a timescale of a few minutes, and decays over a much longer timescale of more than 30 minutes. We use these spatially resolved UV light curves during the rise phase to construct empirical heating functions for individual flare loops, and model heating of coronal plasmas in these loops. The total coronal radiation of these flare loops is compared with soft X-ray and EUV radiation fluxes measured by GOES and AIA. This study presents a method to observationally infer heating functions in numerous flare loops that are formed and heated sequentially by reconnection throughout the flare, and provides a very useful constraint on coronal heating models.

  1. Observations that Constrain the Scaling of Apparent Stress

    NASA Astrophysics Data System (ADS)

    McGarr, A.; Fletcher, J. B.

    2002-12-01

    Slip models developed for major earthquakes are composed of distributions of fault slip, rupture time, and slip velocity time function over the rupture surface, as divided into many smaller subfaults. Using a recently-developed technique, the seismic energy radiated from each subfault can be estimated from the time history of slip there and the average rupture velocity. Total seismic energies, calculated by summing contributions from all of the subfaults, agree reasonably well with independent estimates based on seismic energy flux in the far-field at regional or teleseismic distances. Two recent examples are the 1999 Izmit, Turkey and the 1999 Hector Mine, California earthquakes for which the NEIS teleseismic measurements of radiated energy agree fairly closely with seismic energy estimates from several different slip models, developed by others, for each of these events. Similar remarks apply to the 1989 Loma Prieta, 1992 Landers, and 1995 Kobe earthquakes. Apparent stresses calculated from these energy and moment results do not indicate any moment or magnitude dependence. The distributions of both fault slip and seismic energy radiation over the rupture surfaces of earthquakes are highly inhomogeneous. These results from slip models, combined with underground and seismic observations of slip for much smaller mining-induced earthquakes, can provide stronger constraint on the possible scaling of apparent stress with moment magnitude M or seismic moment. Slip models for major earthquakes in the range M6.2 to M7.4 show maximum slips ranging from 1.6 to 8 m. Mining-induced earthquakes at depths near 2000 m in South Africa are associated with peak slips of 0.2 to 0.37 m for events of M4.4 to M4.6. These maximum slips, whether derived from a slip model or directly observed underground in a deep gold mine, scale quite definitively as the cube root of the seismic moment. In contrast, peak slip rates (maximum subfault slip/rise time) appear to be scale invariant. A 1.25 m

  2. Observationally constrained estimates of carbonaceous aerosol radiative forcing.

    PubMed

    Chung, Chul E; Ramanathan, V; Decremer, Damien

    2012-07-17

    Carbonaceous aerosols (CA) emitted by fossil and biomass fuels consist of black carbon (BC), a strong absorber of solar radiation, and organic matter (OM). OM scatters as well as absorbs solar radiation. The absorbing component of OM, which is ignored in most climate models, is referred to as brown carbon (BrC). Model estimates of the global CA radiative forcing range from 0 to 0.7 W m^-2, to be compared with the Intergovernmental Panel on Climate Change's estimate for the pre-Industrial to the present net radiative forcing of about 1.6 W m^-2. This study provides a model-independent, observationally based estimate of the CA direct radiative forcing. Ground-based aerosol network data is integrated with field data and satellite-based aerosol observations to provide a decadal (2001 through 2009) global view of the CA optical properties and direct radiative forcing. The estimated global CA direct radiative effect is about 0.75 W m^-2 (0.5 to 1.0). This study identifies the global importance of BrC, which is shown to contribute about 20% to 550-nm CA solar absorption globally. Because of the inclusion of BrC, the net effect of OM is close to zero and the CA forcing is nearly equal to that of BC. The CA direct radiative forcing is estimated to be about 0.65 (0.5 to about 0.8) W m^-2, thus comparable to or exceeding that by methane. Caused in part by BrC absorption, CAs have a net warming effect even over open biomass-burning regions in Africa and the Amazon. PMID:22753522

  3. Observationally constrained estimates of carbonaceous aerosol radiative forcing.

    PubMed

    Chung, Chul E; Ramanathan, V; Decremer, Damien

    2012-07-17

    Carbonaceous aerosols (CA) emitted by fossil and biomass fuels consist of black carbon (BC), a strong absorber of solar radiation, and organic matter (OM). OM scatters as well as absorbs solar radiation. The absorbing component of OM, which is ignored in most climate models, is referred to as brown carbon (BrC). Model estimates of the global CA radiative forcing range from 0 to 0.7 W m^-2, to be compared with the Intergovernmental Panel on Climate Change's estimate for the pre-Industrial to the present net radiative forcing of about 1.6 W m^-2. This study provides a model-independent, observationally based estimate of the CA direct radiative forcing. Ground-based aerosol network data is integrated with field data and satellite-based aerosol observations to provide a decadal (2001 through 2009) global view of the CA optical properties and direct radiative forcing. The estimated global CA direct radiative effect is about 0.75 W m^-2 (0.5 to 1.0). This study identifies the global importance of BrC, which is shown to contribute about 20% to 550-nm CA solar absorption globally. Because of the inclusion of BrC, the net effect of OM is close to zero and the CA forcing is nearly equal to that of BC. The CA direct radiative forcing is estimated to be about 0.65 (0.5 to about 0.8) W m^-2, thus comparable to or exceeding that by methane. Caused in part by BrC absorption, CAs have a net warming effect even over open biomass-burning regions in Africa and the Amazon.

  4. Can observations of earthquake scaling constrain slip weakening?

    NASA Astrophysics Data System (ADS)

    Abercrombie, Rachel E.; Rice, James R.

    2005-08-01

    We use observations of earthquake source parameters over a wide magnitude range (M_W ~ 0-7) to place constraints on constitutive fault weakening. The data suggest a scale dependence of apparent stress and stress drop; both may increase slightly with earthquake size. We show that this scale dependence need not imply any difference in fault zone properties for different sized earthquakes. We select 30 earthquakes well-recorded at 2.5 km depth at Cajon Pass, California. We use individual and empirical Green's function spectral analysis to improve the resolution of source parameters, including static stress drop (Δσ) and total slip (S). We also measure the radiated energy E_S. We compare the Cajon Pass results with those from larger California earthquakes including aftershocks of the 1994 Northridge earthquake and confirm the results of Abercrombie (1995): μE_S/M_0 << Δσ (where μ = rigidity) and both E_S/M_0 and Δσ increase as M_0 (and S) increases. Uncertainties remain large due to model assumptions and variations between possible models, and earthquake scale independence is possible within the resolution. Assuming that the average trends are real, we define a quantity G' = (Δσ - 2μE_S/M_0)S/2, which is the total energy dissipation in friction and fracture minus σ_1 S, where σ_1 is the final static stress. If σ_1 = σ_d, the dynamic shear strength during the last increments of seismic slip, then G' = G, the fracture energy in a slip-weakening interpretation of dissipation. We find that G' increases with S, from ~10^3 J m^-2 at S = 1 mm (M1 earthquakes) to 10^6-10^7 J m^-2 at S = 1 m (M6). We tentatively interpret these results within slip-weakening theory, assuming G' ~ G. We consider the common assumption of a linear decrease of strength from the yield stress (σ_p) with slip (s), up to a slip D_c. In this case, if either D_c, or more generally (σ_p - σ_d)D_c, increases with the final slip S we can match the observations, but this implies the unlikely result that the early weakening behaviour of

  5. Models for Near-Ridge Seamounts Constrained by Gravity Observations

    NASA Astrophysics Data System (ADS)

    Kostlan, M.; McClain, J. S.

    2009-12-01

    In an analysis of the seamount chain centered at 105°20’W, 9°05’N, west of the East Pacific Rise and south of the Clipperton transform fault, we compared measured free air gravity anomaly values with modeled gravity anomaly values. The seamount chain contains approximately ten seamounts trending roughly east-west, perpendicular to the mid-ocean ridge axis. They lie on lithosphere between 1.5 and 2.7 Ma in age. Based on its position and age, the seamount chain appears to be associated with the 9°03’N overlapping spreading center (OSC). This OSC includes several associated seamount chains, aligned generally east-west and of varying ages. The observed data include both free air gravity anomalies and bathymetry of the seamount chain, provided by the National Geophysical Data Center (NGDC), and were selected because the gravity coverage is relatively good. We used a series of different structural models of the oceanic crust and mantle to generate gravity anomalies associated with the seamounts. The models utilize Parker’s algorithm to generate these free air gravity anomalies. We compute a gravity residual by subtracting the calculated anomalies from the observed anomalies. The models include one with a crust of constant thickness (6 km), while another introduces a constant-thickness Layer 2A. In contrast, a third model includes a variable-thickness crust, where the thickness is governed by Airy compensation. The calculations show that the seamounts must be partly compensated, because the constant-thickness models predict a high negative residual (i.e., they produce an anomaly that is too high). In contrast, the Airy compensation model produces an anomaly that is too low at the longer wavelengths, indicating that the lithosphere must have some strength and that flexure must be supporting part of the load of the seamount chain. This contrasts with earlier studies that indicate young, near-ridge seamounts do not result in flexure of the thin
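
    A minimal 1-D sketch of Parker's (1973) spectral forward calculation for the gravity anomaly of a density interface (the study models 2-D bathymetry and Moho surfaces; the topography, density contrast, and depth below are made-up illustrative values, not the NGDC survey data):

```python
import math
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def parker_gravity(h, dx, drho, z0, nterms=4):
    """Gravity anomaly (m/s^2) at the observation level produced by an
    interface with topography h (m) about a mean depth z0 (m) and density
    contrast drho (kg/m^3), using Parker's spectral series truncated at
    nterms terms. 1-D illustration only."""
    n = len(h)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)         # wavenumber (rad/m)
    series = np.zeros(n, dtype=complex)
    for m in range(1, nterms + 1):
        series += (np.abs(k) ** (m - 1) / math.factorial(m)) * np.fft.fft(h ** m)
    dg = np.fft.ifft(2.0 * np.pi * G * drho * np.exp(-np.abs(k) * z0) * series)
    return dg.real

# Illustrative seamount-like bump on a 200 km profile (placeholder numbers):
x = np.linspace(0.0, 200e3, 1024)
h = 2000.0 * np.exp(-((x - 100e3) / 10e3) ** 2)       # 2 km high, ~10 km wide
dg = parker_gravity(h, dx=x[1] - x[0], drho=1800.0, z0=4000.0)
print("peak modeled anomaly: %.1f mGal" % (1e5 * dg.max()))
```

    A residual of the kind described above would then be the observed free-air anomaly minus such a forward-modeled anomaly, computed for each crustal model in turn.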

  6. Constraining Gravitational-Wave Propagation Speed with Multimessenger Observations

    NASA Astrophysics Data System (ADS)

    Nishizawa, Atsushi; Nakamura, Takashi

    2015-04-01

    Detection of gravitational waves (GWs) provides us with an opportunity to test general relativity in strong and dynamical regimes of gravity. One such test is checking whether GWs propagate at the speed of light or not. This test is crucial because the velocity of GWs has never been directly measured. The propagation speed of a GW can deviate from the speed of light due to modifications of gravity, graviton mass, and nontrivial spacetime structure such as extra dimensions and quantum gravity effects. Here we report a simple method to measure the propagation speed of a GW by directly comparing arrival times between gravitational waves and neutrinos from supernovae or photons from short gamma-ray bursts. As a result, we found that future multimessenger observations of a GW, neutrinos, and photons can test the GW propagation speed with a precision of 10^-16, improving the previous suggestions by 8-10 orders of magnitude. We also propose a novel method that distinguishes the true signal due to the deviation of the GW propagation speed from the speed of light from the intrinsic time delay of the emission at the source by looking at the redshift dependence. A. N. is supported by JSPS Postdoctoral Fellowships for Research Abroad.
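
    The quoted precision can be understood from a back-of-envelope restatement (not the paper's full analysis): a fractional speed deviation shows up as an arrival-time offset Δt accumulated over the light-travel time D/c,

```latex
\[
\frac{|v_{\rm GW} - c|}{c} \;\lesssim\; \frac{c\,\Delta t}{D}
\]
```

    For example, Δt ≈ 1 s over D ≈ 200 Mpc (a light-travel time of roughly 2 x 10^16 s) gives |v_GW - c|/c ≲ 5 x 10^-17, of the order of the 10^-16 precision quoted above, provided the intrinsic emission delay at the source can be separated out, which is what the proposed redshift-dependence test addresses.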

  7. Using cm observations to constrain the abundance of very small dust grains in Galactic cold cores

    NASA Astrophysics Data System (ADS)

    Tibbs, C. T.; Paladini, R.; Cleary, K.; Muchovej, S. J. C.; Scaife, A. M. M.; Stevenson, M. A.; Laureijs, R. J.; Ysard, N.; Grainge, K. J. B.; Perrott, Y. C.; Rumsey, C.; Villadsen, J.

    2016-03-01

    In this analysis, we illustrate how the relatively new emission mechanism, known as spinning dust, can be used to characterize dust grains in the interstellar medium. We demonstrate this by using spinning dust emission observations to constrain the abundance of very small dust grains (a ≲ 10 nm) in a sample of Galactic cold cores. Using the physical properties of the cores in our sample as inputs to a spinning dust model, we predict the expected level of emission at a wavelength of 1 cm for four different very small dust grain abundances, which we constrain by comparing to 1 cm CARMA observations. For all of our cores, we find a depletion of very small grains, which we suggest is due to the process of grain growth. This work represents the first time that spinning dust emission has been used to constrain the physical properties of interstellar dust grains.

  8. The Search for Muon Neutrinos from Northern HemisphereGamma-Ray Bursts with AMANDA

    SciTech Connect

    IceCube Collaboration; Klein, Spencer; Achterberg, A.

    2007-05-08

    We present the results of the analysis of neutrino observations by the Antarctic Muon and Neutrino Detector Array (AMANDA) correlated with photon observations of more than 400 gamma-ray bursts (GRBs) in the Northern Hemisphere from 1997 to 2003. During this time period, AMANDA's effective collection area for muon neutrinos was larger than that of any other existing detector. Based on our observations of zero neutrinos during and immediately prior to the GRBs in the dataset, we set the most stringent upper limit on muon neutrino emission correlated with gamma-ray bursts. Assuming a Waxman-Bahcall spectrum and incorporating all systematic uncertainties, our flux upper limit has a normalization at 1 PeV of E^2 Φ_ν ≤ 6.0 x 10^-9 GeV cm^-2 s^-1 sr^-1, with 90% of the events expected within the energy range of ~10 TeV to ~3 PeV. The impact of this limit on several theoretical models of GRBs is discussed, as well as the future potential for detection of GRBs by next generation neutrino telescopes. Finally, we briefly describe several modifications to this analysis in order to apply it to other types of transient point sources.

  9. Geochemical record of high emperor penguin populations during the Little Ice Age at Amanda Bay, Antarctica.

    PubMed

    Huang, Tao; Yang, Lianjiao; Chu, Zhuding; Sun, Liguang; Yin, Xijie

    2016-09-15

    Emperor penguins (Aptenodytes forsteri) are sensitive to Antarctic climate change because they breed on the fast sea ice. Studies of the paleohistory of the emperor penguin are rare, due to the lack of archives on land. In this study, we obtained an emperor penguin ornithogenic sediment profile (PI) and performed geochronological, geochemical and stable isotope analyses on the sediments and feather remains. Two radiocarbon dates of penguin feathers in PI indicate that emperor penguins colonized Amanda Bay as early as CE 1540. By using the bio-elements (P, Se, Hg, Zn and Cd) in sediments and stable isotope values (δ^15N and δ^13C) in feathers, we inferred the relative population size and dietary change of emperor penguins during the period CE 1540-2008, respectively. An increase in population size with depleted N isotope ratios for emperor penguins on N island at Amanda Bay during the Little Ice Age (CE 1540-1866) was observed, suggesting that cold climate affected the penguins' breeding habitat, prey availability and thus their population and dietary composition.

  10. Geochemical record of high emperor penguin populations during the Little Ice Age at Amanda Bay, Antarctica.

    PubMed

    Huang, Tao; Yang, Lianjiao; Chu, Zhuding; Sun, Liguang; Yin, Xijie

    2016-09-15

    Emperor penguins (Aptenodytes forsteri) are sensitive to Antarctic climate change because they breed on the fast sea ice. Studies of the paleohistory of the emperor penguin are rare, due to the lack of archives on land. In this study, we obtained an emperor penguin ornithogenic sediment profile (PI) and performed geochronological, geochemical and stable isotope analyses on the sediments and feather remains. Two radiocarbon dates of penguin feathers in PI indicate that emperor penguins colonized Amanda Bay as early as CE 1540. By using the bio-elements (P, Se, Hg, Zn and Cd) in sediments and stable isotope values (δ^15N and δ^13C) in feathers, we inferred the relative population size and dietary change of emperor penguins during the period CE 1540-2008, respectively. An increase in population size with depleted N isotope ratios for emperor penguins on N island at Amanda Bay during the Little Ice Age (CE 1540-1866) was observed, suggesting that cold climate affected the penguins' breeding habitat, prey availability and thus their population and dietary composition. PMID:27261428

  11. Constraining parameters of white-dwarf binaries using gravitational-wave and electromagnetic observations

    SciTech Connect

    Shah, Sweta; Nelemans, Gijs

    2014-08-01

    The space-based gravitational wave (GW) detector, evolved Laser Interferometer Space Antenna (eLISA), is expected to observe millions of compact Galactic binaries that populate our Milky Way. GW measurements obtained from the eLISA detector are in many cases complementary to possible electromagnetic (EM) data. In our previous papers, we have shown that the EM data can significantly enhance our knowledge of the astrophysically relevant GW parameters of Galactic binaries, such as the amplitude and inclination. This is possible due to the presence of some strong correlations between GW parameters that are measurable by both EM and GW observations, for example, the inclination and sky position. In this paper, we quantify the constraints on the physical parameters of the white-dwarf binaries, i.e., the individual masses, chirp mass, and the distance to the source, that can be obtained by combining the full set of EM measurements such as the inclination, radial velocities, distances, and/or individual masses with the GW measurements. We find the following 2σ fractional uncertainties in the parameters of interest. The EM observations of distance constrain the chirp mass to ∼15%-25%, whereas EM data of a single-lined spectroscopic binary constrain the secondary mass and the distance with factors of two to ∼40%. The single-line spectroscopic data complemented with distance constrains the secondary mass to ∼25%-30%. Finally, EM data on a double-lined spectroscopic binary constrain the distance to ∼30%. All of these constraints depend on the inclination and the signal strength of the binary systems. We also find that the EM information on distance and/or the radial velocity is the most useful in improving the estimate of the secondary mass, inclination, and/or distance.
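
    Much of the leverage from combining the two data types comes through the chirp mass, which the GW signal determines from the measured frequency and its drift (standard relations, not specific to this paper):

```latex
\[
\mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}},
\qquad
\dot{f}_{\rm GW} = \frac{96}{5}\,\pi^{8/3}\left(\frac{G\mathcal{M}}{c^3}\right)^{5/3} f_{\rm GW}^{11/3}
\]
```

    A GW measurement of f and its derivative therefore fixes the chirp mass, and an EM measurement of, e.g., the secondary mass (single-lined spectroscopy) or the distance then breaks the remaining degeneracies in the way quantified above.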

  12. An observationally constrained evaluation of the oxidative capacity in the tropical western Pacific troposphere

    NASA Astrophysics Data System (ADS)

    Nicely, Julie M.; Anderson, Daniel C.; Canty, Timothy P.; Salawitch, Ross J.; Wolfe, Glenn M.; Apel, Eric C.; Arnold, Steve R.; Atlas, Elliot L.; Blake, Nicola J.; Bresch, James F.; Campos, Teresa L.; Dickerson, Russell R.; Duncan, Bryan; Emmons, Louisa K.; Evans, Mathew J.; Fernandez, Rafael P.; Flemming, Johannes; Hall, Samuel R.; Hanisco, Thomas F.; Honomichl, Shawn B.; Hornbrook, Rebecca S.; Huijnen, Vincent; Kaser, Lisa; Kinnison, Douglas E.; Lamarque, Jean-Francois; Mao, Jingqiu; Monks, Sarah A.; Montzka, Denise D.; Pan, Laura L.; Riemer, Daniel D.; Saiz-Lopez, Alfonso; Steenrod, Stephen D.; Stell, Meghan H.; Tilmes, Simone; Turquety, Solene; Ullmann, Kirk; Weinheimer, Andrew J.

    2016-06-01

    Hydroxyl radical (OH) is the main daytime oxidant in the troposphere and determines the atmospheric lifetimes of many compounds. We use aircraft measurements of O3, H2O, NO, and other species from the Convective Transport of Active Species in the Tropics (CONTRAST) field campaign, which occurred in the tropical western Pacific (TWP) during January-February 2014, to constrain a photochemical box model and estimate concentrations of OH throughout the troposphere. We find that tropospheric column OH (OHCOL) inferred from CONTRAST observations is 12 to 40% higher than found in chemical transport models (CTMs), including CAM-chem-SD run with 2014 meteorology as well as eight models that participated in POLMIP (2008 meteorology). Part of this discrepancy is due to a clear-sky sampling bias that affects CONTRAST observations; accounting for this bias and also for a small difference in chemical mechanism results in our empirically based value of OHCOL being 0 to 20% larger than found within global models. While these global models simulate observed O3 reasonably well, they underestimate NOx (NO + NO2) by a factor of 2, resulting in OHCOL ~30% lower than box model simulations constrained by observed NO. Underestimations by CTMs of observed CH3CHO throughout the troposphere and of HCHO in the upper troposphere further contribute to differences between our constrained estimates of OH and those calculated by CTMs. Finally, our calculations do not support the prior suggestion of the existence of a tropospheric OH minimum in the TWP, because during January-February 2014 observed levels of O3 and NO were considerably larger than previously reported values in the TWP.

  13. Constraining Earth's Rheology of the Barents Sea Using Grace Gravity Change Observations

    NASA Astrophysics Data System (ADS)

    van der Wal, W.; Root, B. C.; Tarasov, L.

    2014-12-01

    The Barents Sea region was ice covered during the Last Glacial Maximum and experiences Glacial Isostatic Adjustment (GIA). Because of the limited amount of relevant geological and geodetic observations, it is difficult to constrain GIA models for this region. With improved ice sheet models and gravity observations from GRACE, it is possible to better constrain Earth rheology. This study aims to constrain the upper mantle viscosity and elastic lithosphere thickness from GRACE data in the Barents Sea region. The GRACE observations are corrected for current ice melting on Svalbard, Novaya Zemlya and Franz Josef Land. A secular gravity rate trend is estimated from the CSR release 5 GRACE data for the period of February 2003 to July 2013. Furthermore, long wavelength effects from distant large mass balance signals such as Greenland ice melting are filtered out. A new high-variance set of ice loading histories from calibrated glaciological modeling is used in the GIA modeling, as it is found that ICE-5G over-estimates the observed GIA gravity change in the region. It is found that the rheology structure represented by VM5a results in over-estimation of the observed gravity change in the region for all ice sheet chronologies investigated. Therefore, other rheological Earth models were investigated. The best fitting upper mantle viscosity and elastic lithosphere thickness in the Barents Sea region are 4 (±0.5)*10^20 Pa s and 110 (±20) km, respectively. The GRACE satellite mission proves to be a useful constraint in the Barents Sea region for improving our knowledge of upper mantle rheology.

  14. Constraining snow model choices in a transitional snow environment with intensive observations

    NASA Astrophysics Data System (ADS)

    Wayand, N. E.; Massmann, A.; Clark, M. P.; Lundquist, J. D.

    2014-12-01

    The performance of existing energy balance snow models exhibits a large spread in the simulated snow water equivalent, snow depth, albedo, and surface temperature. Identifying poor model representations of physical processes within intercomparison studies is difficult due to multiple differences between models as well as the non-orthogonal metrics used. Efforts to overcome these obstacles for model development have focused on a modeling framework that allows multiple representations of each physical process within one structure. However, there still exists a need for snow study sites within complex terrain that observe enough model states and fluxes to constrain model choices. In this study we focus on an intensive snow observational site located in the maritime-transitional snow climate of Snoqualmie Pass, WA (Figure 1). The transitional zone has been previously identified as a difficult climate in which to simulate snow processes; therefore, it represents an ideal model-vetting site. From two water years of intensive observational data, we have learned that a more honest comparison with observations requires that the modeled states or fluxes be matched as closely as possible to the spatial and temporal domain of the instrument, even if it means changing the model to match what is being observed. For example, 24-hour snow board observations do not capture compaction of the underlying snow; therefore, a modeled "snow board" was created that only includes new snow accumulation and new snow compaction. We extend this method of selective model validation to all available Snoqualmie observations to constrain model choices within the Structure for Understanding Multiple Modeling Alternatives (SUMMA) framework. Our end goal is to provide a more rigorous and systematic method for diagnosing problems within snow models at a site given numerous snow observations.

  15. Future sea level rise constrained by observations and long-term commitment.

    PubMed

    Mengel, Matthias; Levermann, Anders; Frieler, Katja; Robinson, Alexander; Marzeion, Ben; Winkelmann, Ricarda

    2016-03-01

    Sea level has been steadily rising over the past century, predominantly due to anthropogenic climate change. The rate of sea level rise will keep increasing with continued global warming, and, even if temperatures are stabilized through the phasing out of greenhouse gas emissions, sea level is still expected to rise for centuries. This will affect coastal areas worldwide, and robust projections are needed to assess mitigation options and guide adaptation measures. Here we combine the equilibrium response of the main sea level rise contributions with their last century's observed contribution to constrain projections of future sea level rise. Our model is calibrated to a set of observations for each contribution, and the observational and climate uncertainties are combined to produce uncertainty ranges for 21st century sea level rise. We project anthropogenic sea level rise of 28-56 cm, 37-77 cm, and 57-131 cm in 2100 for the greenhouse gas concentration scenarios RCP26, RCP45, and RCP85, respectively. Our uncertainty ranges for total sea level rise overlap with the process-based estimates of the Intergovernmental Panel on Climate Change. The "constrained extrapolation" approach generalizes earlier global semiempirical models and may therefore lead to a better understanding of the discrepancies with process-based projections. PMID:26903648

  16. Future sea level rise constrained by observations and long-term commitment

    PubMed Central

    Mengel, Matthias; Levermann, Anders; Frieler, Katja; Robinson, Alexander; Marzeion, Ben; Winkelmann, Ricarda

    2016-01-01

    Sea level has been steadily rising over the past century, predominantly due to anthropogenic climate change. The rate of sea level rise will keep increasing with continued global warming, and, even if temperatures are stabilized through the phasing out of greenhouse gas emissions, sea level is still expected to rise for centuries. This will affect coastal areas worldwide, and robust projections are needed to assess mitigation options and guide adaptation measures. Here we combine the equilibrium response of the main sea level rise contributions with their last century's observed contribution to constrain projections of future sea level rise. Our model is calibrated to a set of observations for each contribution, and the observational and climate uncertainties are combined to produce uncertainty ranges for 21st century sea level rise. We project anthropogenic sea level rise of 28–56 cm, 37–77 cm, and 57–131 cm in 2100 for the greenhouse gas concentration scenarios RCP26, RCP45, and RCP85, respectively. Our uncertainty ranges for total sea level rise overlap with the process-based estimates of the Intergovernmental Panel on Climate Change. The “constrained extrapolation” approach generalizes earlier global semiempirical models and may therefore lead to a better understanding of the discrepancies with process-based projections. PMID:26903648

  17. MULTI-WAVELENGTH OBSERVATIONS OF SOLAR FLARES WITH A CONSTRAINED PEAK X-RAY FLUX

    SciTech Connect

    Bowen, Trevor A.; Testa, Paola; Reeves, Katharine K.

    2013-06-20

    We present an analysis of soft X-ray (SXR) and extreme-ultraviolet (EUV) observations of solar flares with an approximate C8 Geostationary Operational Environmental Satellite (GOES) class. Our constraint on peak GOES SXR flux allows for the investigation of correlations between various flare parameters. We show that the duration of the decay phase of a flare is proportional to the duration of its rise phase. Additionally, we show significant correlations between the radiation emitted in the flare rise and decay phases. These results suggest that the total radiated energy of a given flare is proportional to the energy radiated during the rise phase alone. This partitioning of radiated energy between the rise and decay phases is observed in both SXR and EUV wavelengths. Though observations from the EUV Variability Experiment show significant variation in the behavior of individual EUV spectral lines during different C8 events, this work suggests that broadband EUV emission is well constrained. Furthermore, GOES and Atmospheric Imaging Assembly data allow us to determine several thermal parameters (e.g., temperature, volume, density, and emission measure) for the flares within our sample. Analysis of these parameters demonstrates that, within this constrained GOES class, the longer duration solar flares are cooler events with larger volumes capable of emitting vast amounts of radiation. The shortest C8 flares are typically the hottest events, smaller in physical size, and have lower associated total energies. These relationships are directly comparable with several scaling laws and flare loop models.

  18. Future sea level rise constrained by observations and long-term commitment.

    PubMed

    Mengel, Matthias; Levermann, Anders; Frieler, Katja; Robinson, Alexander; Marzeion, Ben; Winkelmann, Ricarda

    2016-03-01

    Sea level has been steadily rising over the past century, predominantly due to anthropogenic climate change. The rate of sea level rise will keep increasing with continued global warming, and, even if temperatures are stabilized through the phasing out of greenhouse gas emissions, sea level is still expected to rise for centuries. This will affect coastal areas worldwide, and robust projections are needed to assess mitigation options and guide adaptation measures. Here we combine the equilibrium response of the main sea level rise contributions with their last century's observed contribution to constrain projections of future sea level rise. Our model is calibrated to a set of observations for each contribution, and the observational and climate uncertainties are combined to produce uncertainty ranges for 21st century sea level rise. We project anthropogenic sea level rise of 28-56 cm, 37-77 cm, and 57-131 cm in 2100 for the greenhouse gas concentration scenarios RCP26, RCP45, and RCP85, respectively. Our uncertainty ranges for total sea level rise overlap with the process-based estimates of the Intergovernmental Panel on Climate Change. The "constrained extrapolation" approach generalizes earlier global semiempirical models and may therefore lead to a better understanding of the discrepancies with process-based projections.
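
    The "constrained extrapolation" idea summarized above can be illustrated numerically. The sketch below is not the authors' calibrated model: it assumes a generic pursuit-of-equilibrium form in which each contribution relaxes toward a temperature-dependent equilibrium, and the warming pathway, alpha, and tau values are placeholders chosen for illustration only.

        import numpy as np

        def contribution(temps, alpha, tau, dt=1.0, s0=0.0):
            """Integrate dS/dt = (alpha*T - S)/tau with forward Euler (dt in years).

            alpha: assumed equilibrium sea level per degree of warming (m/K)
            tau:   assumed response time scale (years)
            """
            s, out = s0, []
            for t in temps:
                s += dt * (alpha * t - s) / tau
                out.append(s)
            return np.array(out)

        # hypothetical warming pathway, 2000-2100 (K above pre-industrial)
        temps = np.linspace(0.8, 2.0, 101)

        # placeholder (alpha, tau) pairs for the main contributions -- not calibrated values
        params = {"thermal expansion": (0.4, 300.0),
                  "glaciers":          (0.2, 150.0),
                  "ice sheets":        (1.2, 1000.0)}

        total = sum(contribution(temps, a, tau) for a, tau in params.values())
        print(f"illustrative rise over 2000-2100: {total[-1] * 100:.0f} cm")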

  19. Choosing a 'best' global aerosol model: Can observations constrain parametric uncertainty?

    NASA Astrophysics Data System (ADS)

    Browse, Jo; Reddington, Carly; Pringle, Kirsty; Regayre, Leighton; Lee, Lindsay; Schmidt, Anja; Field, Paul; Carslaw, Kenneth

    2015-04-01

    Anthropogenic aerosol has been shown to contribute to climate change via direct radiative forcing and cloud-aerosol interactions. While the role of aerosol as a climate agent is likely to diminish as CO2 emissions increase, recent studies suggest that uncertainty in modelled aerosol is likely to dominate uncertainty in future forcing projections. Uncertainty in modelled aerosol derives from uncertainty in the representation of emissions and aerosol processes (parametric uncertainty) as well as structural error. Here we utilise Latin hypercube sampling methods to produce an ensemble model (composed of 280 runs) of a global model of aerosol processes (GLOMAP) spanning 31 parametric ranges. Using an unprecedented number of observations made available by the GASSP project, we have evaluated our ensemble model against a multi-variable (CCN, BC mass, PM2.5) data-set to determine if 'an ideal' aerosol model exists. Ignoring structural errors, optimization of a global model against multiple data-sets to within a factor of 2 is possible, with multiple model runs identified. However, (even regionally) the parametric range of our 'best' model runs is very wide, with the same model skill arising from multiple parameter settings. Our results suggest that 'traditional' in-situ measurements are insufficient to constrain parametric uncertainty. Thus, to constrain aerosol in climate models, future evaluations must include process based observations.
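
    A minimal sketch of the sampling-and-screening strategy described above, with SciPy's Latin hypercube sampler standing in for the GLOMAP perturbed-parameter ensemble; the toy_model function, parameter ranges, and observations are invented stand-ins, not values from the study.

        import numpy as np
        from scipy.stats import qmc

        n_params, n_runs = 31, 280

        # Latin hypercube sample of 280 members across 31 parameter ranges
        # (here every range is a multiplicative factor in [0.5, 2.0])
        sampler = qmc.LatinHypercube(d=n_params, seed=0)
        params = qmc.scale(sampler.random(n=n_runs),
                           l_bounds=[0.5] * n_params, u_bounds=[2.0] * n_params)

        def toy_model(p):
            """Stand-in for a GLOMAP run: returns CCN-, BC-mass- and PM2.5-like outputs."""
            return np.array([p[:5].prod() ** 0.2, p[5:10].mean(), p[10:15].mean()])

        obs = np.array([1.1, 1.2, 1.3])   # hypothetical multi-variable observations

        # "within a factor of 2" skill criterion applied to every variable at once
        ratios = np.array([toy_model(p) / obs for p in params])
        ok = np.all((ratios > 0.5) & (ratios < 2.0), axis=1)
        print(f"{ok.sum()} of {n_runs} members match all variables within a factor of 2")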

  20. Constraining mantle viscosity structure for a thermochemical mantle using the geoid observation

    NASA Astrophysics Data System (ADS)

    Liu, Xi; Zhong, Shijie

    2016-03-01

    Long-wavelength geoid anomalies provide important constraints on mantle dynamics and viscosity structure. Previous studies have successfully reproduced the observed geoid using seismically inferred buoyancy in whole-mantle convection models. However, it has been suggested that large low shear velocity provinces (LLSVPs) underneath Pacific and Africa in the lower mantle are chemically distinct and are likely denser than the ambient mantle. We formulate instantaneous flow models based on seismic tomographic models to compute the geoid and constrain mantle viscosity by assuming both thermochemical and whole-mantle convection. Geoid modeling for the thermochemical model is performed by considering the compensation effect of dense thermochemical piles and removing buoyancy structure of the compensation layer in the lower mantle. Thermochemical models well reproduce the observed geoid, thus reconciling the geoid with the interpretation of LLSVPs as dense thermochemical piles. The viscosity structure inverted for thermochemical models is nearly identical to that of whole-mantle models. In the preferred model, the lower mantle viscosity is ~10 times higher than the upper mantle viscosity that is ~10 times higher than the transition zone viscosity. The weak transition zone is consistent with the proposed high water content there. The geoid in thermochemical mantle models is sensitive to seismic structure at midmantle depths, suggesting a need to improve seismic imaging resolution there. The geoid modeling constrains the vertical extent of dense and stable chemical piles to be within ~500 km above the CMB. Our results have implications for mineral physics, seismic tomographic studies, and mantle convection modeling.

  1. Healing in forgiveness: A discussion with Amanda Lindhout and Katherine Porterfield, PhD.

    PubMed

    Porterfield, Katherine A; Lindhout, Amanda

    2014-01-01

    In 2008, Amanda Lindhout was kidnapped by a group of extremists while traveling as a freelance journalist in Somalia. She and a colleague were held captive for more than 15 months, released only after their families paid a ransom. In this interview, Amanda discusses her experiences in captivity and her ongoing recovery from this experience with Katherine Porterfield, Ph.D., a clinical psychologist at the Bellevue/NYU Program for Survivors of Torture. Specifically, Amanda describes the childhood experiences that shaped her thirst for travel and knowledge, the conditions of her kidnapping, and her experiences after she was released from captivity. Amanda outlines the techniques that she employed to survive in the early aftermath of her capture, and how these coping strategies changed as her captivity lengthened. She reflects on her transition home, her recovery process, and her experiences with mental health professionals. Amanda's insights provide an example of resilience in the face of severe, extended trauma to researchers, clinicians, and survivors alike. The article ends with a discussion of the ways that Amanda's coping strategies and recovery process are consistent with existing resilience literature. Amanda's experiences as a hostage, her astonishing struggle for physical and mental survival, and her life after being freed are documented in her book, co-authored with Sara Corbett, A House in the Sky.

  2. Healing in forgiveness: A discussion with Amanda Lindhout and Katherine Porterfield, PhD

    PubMed Central

    Porterfield, Katherine A.; Lindhout, Amanda

    2014-01-01

    In 2008, Amanda Lindhout was kidnapped by a group of extremists while traveling as a freelance journalist in Somalia. She and a colleague were held captive for more than 15 months, released only after their families paid a ransom. In this interview, Amanda discusses her experiences in captivity and her ongoing recovery from this experience with Katherine Porterfield, Ph.D., a clinical psychologist at the Bellevue/NYU Program for Survivors of Torture. Specifically, Amanda describes the childhood experiences that shaped her thirst for travel and knowledge, the conditions of her kidnapping, and her experiences after she was released from captivity. Amanda outlines the techniques that she employed to survive in the early aftermath of her capture, and how these coping strategies changed as her captivity lengthened. She reflects on her transition home, her recovery process, and her experiences with mental health professionals. Amanda's insights provide an example of resilience in the face of severe, extended trauma to researchers, clinicians, and survivors alike. The article ends with a discussion of the ways that Amanda's coping strategies and recovery process are consistent with existing resilience literature. Amanda's experiences as a hostage, her astonishing struggle for physical and mental survival, and her life after being freed are documented in her book, co-authored with Sara Corbett, A House in the Sky. PMID:25317259

  3. EQUATION OF STATE AND NEUTRON STAR PROPERTIES CONSTRAINED BY NUCLEAR PHYSICS AND OBSERVATION

    SciTech Connect

    Hebeler, K.; Lattimer, J. M.; Pethick, C. J.; Schwenk, A.

    2013-08-10

    Microscopic calculations of neutron matter based on nuclear interactions derived from chiral effective field theory, combined with the recent observation of a 1.97 ± 0.04 M_Sun neutron star, constrain the equation of state of neutron-rich matter at sub- and supranuclear densities. We discuss in detail the allowed equations of state and the impact of our results on the structure of neutron stars, the crust-core transition density, and the nuclear symmetry energy. In particular, we show that the predicted range for neutron star radii is robust. For use in astrophysical simulations, we provide detailed numerical tables for a representative set of equations of state consistent with these constraints.
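
    The way a single high-mass pulsar measurement prunes candidate equations of state can be sketched as a simple filter: reject any equation of state whose maximum supported mass falls below the observed mass minus its uncertainty. The candidate table below is invented for illustration, not taken from the paper's numerical tables.

        # invented candidate EOS summaries: (name, maximum mass in solar masses, R_1.4 in km)
        candidates = [("soft", 1.85, 10.8), ("medium", 2.10, 11.9), ("stiff", 2.45, 13.4)]

        m_obs, m_err = 1.97, 0.04            # observed pulsar mass quoted in the abstract
        allowed = [c for c in candidates if c[1] >= m_obs - m_err]
        print("EOS consistent with the mass measurement:", [c[0] for c in allowed])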

  4. Global fine-mode aerosol radiative effect, as constrained by comprehensive observations

    NASA Astrophysics Data System (ADS)

    Chung, Chul E.; Chu, Jung-Eun; Lee, Yunha; van Noije, Twan; Jeoung, Hwayoung; Ha, Kyung-Ja; Marks, Marguerite

    2016-07-01

    Aerosols directly affect the radiative balance of the Earth through the absorption and scattering of solar radiation. Although the contributions of absorption (heating) and scattering (cooling) of sunlight have proved difficult to quantify, the consensus is that anthropogenic aerosols cool the climate, partially offsetting the warming by rising greenhouse gas concentrations. Recent estimates of global direct anthropogenic aerosol radiative forcing (i.e., global radiative forcing due to aerosol-radiation interactions) are -0.35 ± 0.5 W m^-2, and these estimates depend heavily on aerosol simulation. Here, we integrate a comprehensive suite of satellite and ground-based observations to constrain total aerosol optical depth (AOD), its fine-mode fraction, the vertical distribution of aerosols and clouds, and the collocation of clouds and overlying aerosols. We find that the direct fine-mode aerosol radiative effect is -0.46 W m^-2 (-0.54 to -0.39 W m^-2). Fine-mode aerosols include sea salt and dust aerosols, and we find that these natural aerosols result in a very large cooling (-0.44 to -0.26 W m^-2) when constrained by observations. When the contribution of these natural aerosols is subtracted from the fine-mode radiative effect, the net becomes -0.11 (-0.28 to +0.05) W m^-2. This net arises from total (natural + anthropogenic) carbonaceous, sulfate and nitrate aerosols, which suggests that global direct anthropogenic aerosol radiative forcing is less negative than -0.35 W m^-2.

  5. Observational techniques for constraining hydraulic and hydrologic models for use in catchment scale flood impact assessment

    NASA Astrophysics Data System (ADS)

    Owen, Gareth; Wilkinson, Mark; Nicholson, Alex; Quinn, Paul; O'Donnell, Greg

    2015-04-01

    There is an increase in the use of Natural Flood Management (NFM) schemes to tackle excessive runoff in rural catchments, but direct evidence of their functioning during extreme events is often lacking. With the availability of low cost sensors, a dense nested monitoring network can be established to provide near continuous optical and physical observations of hydrological processes. This paper will discuss findings for a number of catchments in the North of England where land use management and NFM have been implemented for flood risk reduction, and show how these observations have been used to inform both a hydraulic and a rainfall-runoff model. Observations are of fundamental importance for understanding how such measures function, and collecting them is becoming increasingly viable and affordable. Open source electronic platforms such as Arduino and Raspberry Pi are being used with cheap sensors to perform these tasks. For example, a level gauge has been developed for approximately €110, and cameras capable of capturing still or moving pictures are available for approximately €120; these are being used to better understand the behaviour of NFM features such as ponds and woody debris. There is potential for networks of these instruments to be configured and data collected through Wi-Fi or other wireless networks. It is now feasible to expand informative networks of data that can constrain models. The functioning of small scale runoff attenuation features, such as offline ponds, has been demonstrated at the local scale. Specifically, through the measurement of both instream and in-pond water levels, it has been possible to calculate the impact of storing/attenuating flood flows on the adjacent river flow. This information has been encapsulated in a hydraulic model that allows the extrapolation of impacts to the larger catchment scale, contributing to understanding of the scalability of such features. Using a dense network of level gauges located along the main

  6. Constrained simulations of the Antennae galaxies: comparison with Herschel-PACS observations

    NASA Astrophysics Data System (ADS)

    Karl, S. J.; Lunttila, T.; Naab, T.; Johansson, P. H.; Klaas, U.; Juvela, M.

    2013-09-01

    We present a set of hydro-dynamical numerical simulations of the Antennae galaxies in order to understand the origin of the central overlap starburst. Our dynamical model provides a good match to the observed nuclear and overlap star formation, especially when using a range of rather inefficient stellar feedback efficiencies (0.01 ≲ q_EoS ≲ 0.1). In this case a simple conversion of local star formation to molecular hydrogen surface density motivated by observations accounts well for the observed distribution of CO. Using radiative transfer post-processing we model synthetic far-infrared spectral energy distributions (SEDs) and two-dimensional emission maps for direct comparison with Herschel-PACS observations. For a gas-to-dust ratio of 62:1 and the best matching range of stellar feedback efficiencies the synthetic far-infrared SEDs of the central star-forming region peak at values of ~65-81 Jy at 99-116 μm, similar to a three-component modified blackbody fit to infrared observations. Also the spatial distribution of the far-infrared emission at 70 μm, 100 μm and 160 μm compares well with the observations: ≳50 per cent (≳35 per cent) of the emission in each band is concentrated in the overlap region while only <30 per cent (<15 per cent) is distributed to the combined emission from the two galactic nuclei in the simulations (observations). As a proof of principle we show that parameter variations in the feedback model result in unambiguous changes both in the global and in the spatially resolved observable far-infrared properties of Antennae galaxy models. Our results strengthen the importance of direct, spatially resolved comparative studies of matched galaxy merger simulations as a valuable tool to constrain the fundamental star formation and feedback physics.

  7. Constraining cosmic reionization with quasar, gamma ray burst, and Lyα emitter observations

    NASA Astrophysics Data System (ADS)

    Gallerani, S.; Ferrara, A.; Choudhury, T. Roy; Fan, X.; Salvaterra, R.; Dayal, P.

    We investigate the cosmic reionization history by comparing semi-analytical models of the Lyα forest with observations of high-z quasar and gamma ray burst absorption spectra. In order to constrain the reionization epoch z_rei, we consider two physically motivated scenarios in which reionization ends either early (ERM, z_rei ≳ 7) or late (LRM, z_rei ≈ 6). We analyze the transmitted flux in a sample of 17 QSO spectra at 5.7 < z_em < 6.4 and in the spectrum of the GRB 050904 at z = 6.3, studying the wide dark portions (gaps) in the observed absorption spectra. By comparing the statistics of these spectral features with our models, we conclude that current observational data do not require any sudden change in the ionization state of the IGM at z ≈ 6, favouring indeed a highly ionized Universe at these epochs, as predicted by the ERM. Moreover, we test the predictions of this model through Lyα emitter observations, finding that the ERM provides a good fit to the evolution of the luminosity function of Lyα emitting galaxies in the redshift range z = 5.7-6.5. The overall result points towards an extended reionization process which starts at z ≳ 11 and completes at z_rei ≳ 7, in agreement with the recent WMAP5 data.
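
    A hedged sketch of the dark-gap statistic referred to above: given a transmitted-flux array, find the contiguous stretches where the flux stays below a detection threshold and tabulate their widths. The threshold, wavelength grid, and toy spectrum are illustrative choices, not the values used in the study.

        import numpy as np

        def dark_gaps(flux, wavelengths, threshold=0.05):
            """Return the widths (in the wavelength units given) of contiguous
            stretches where the transmitted flux is below `threshold`."""
            dark = flux < threshold
            widths, start = [], None
            for i, d in enumerate(dark):
                if d and start is None:
                    start = i
                elif not d and start is not None:
                    widths.append(wavelengths[i - 1] - wavelengths[start])
                    start = None
            if start is not None:
                widths.append(wavelengths[-1] - wavelengths[start])
            return np.array(widths)

        # toy spectrum: mostly absorbed, with a few transmission spikes
        wl = np.linspace(8400.0, 8800.0, 4000)                 # Angstrom
        flux = np.abs(np.random.default_rng(1).normal(0.02, 0.02, wl.size))
        flux[::500] = 0.5
        print("largest dark gap (Angstrom):", dark_gaps(flux, wl).max())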

  8. Predicting the future by explaining the past: constraining carbon-climate feedback using contemporary observations

    NASA Astrophysics Data System (ADS)

    Denning, S.

    2014-12-01

    The carbon-climate community has an historic opportunity to make a step-function improvement in climate prediction by using regional constraints to improve mechanistic model representation of carbon cycle processes. Interactions among atmospheric CO2, global biogeochemistry, and physical climate constitute leading sources of uncertainty in future climate. First-order differences among leading models of these processes produce differences in climate as large as differences in aerosol-cloud-radiation interactions and fossil fuel combustion. Emergent constraints based on global observations of interannual variations provide powerful constraints on model parameterizations. Additional constraints can be defined at regional scales. Organized intercomparison experiments have shown that uncertainties in future carbon-climate feedback arise primarily from model representations of the dependence of photosynthesis on CO2 and drought stress and the dependence of decomposition on temperature. Just as representations of net carbon fluxes have benefited from eddy flux, ecosystem manipulations, and atmospheric CO2, component carbon fluxes (photosynthesis, respiration, decomposition, disturbance) can be constrained at regional scales using new observations. Examples include biogeochemical tracers such as isotopes and carbonyl sulfide as well as remotely-sensed parameters such as chlorophyll fluorescence and biomass. Innovative model evaluation experiments will be needed to leverage the information content of new observations to improve process representations as well as to provide accurate initial conditions for coupled climate model simulations. Successful implementation of a comprehensive benchmarking program could have a huge impact on understanding and predicting future climate change.

  9. Constraining cloud lifetime effects of aerosols using A-Train satellite observations

    SciTech Connect

    Wang, Minghuai; Ghan, Steven J.; Liu, Xiaohong; Ecuyer, Tristan L.; Zhang, Kai; Morrison, H.; Ovchinnikov, Mikhail; Easter, Richard C.; Marchand, Roger; Chand, Duli; Qian, Yun; Penner, Joyce E.

    2012-08-15

    Aerosol indirect effects have remained the largest uncertainty in estimates of the radiative forcing of past and future climate change. Observational constraints on cloud lifetime effects are particularly challenging since it is difficult to separate aerosol effects from meteorological influences. Here we use three global climate models, including a multi-scale aerosol-climate model PNNL-MMF, to show that the dependence of the probability of precipitation on aerosol loading, termed the precipitation frequency susceptibility (Spop), is a good measure of the liquid water path response to aerosol perturbation (λ), as both Spop and λ strongly depend on the magnitude of autoconversion, a model representation of precipitation formation via collisions among cloud droplets. This provides a method to use satellite observations to constrain cloud lifetime effects in global climate models. Spop in marine clouds estimated from CloudSat, MODIS and AMSR-E observations is substantially lower than that from global climate models and suggests a liquid water path increase of less than 5% from doubled cloud condensation nuclei concentrations. This implies a substantially smaller impact on shortwave cloud radiative forcing (SWCF) over ocean due to aerosol indirect effects than simulated by current global climate models (a reduction by one-third for one of the conventional aerosol-climate models). Further work is needed to quantify the uncertainties in satellite-derived estimates of Spop and to examine Spop in high-resolution models.

  10. Constraining cloud lifetime effects of aerosols using A-Train satellite observations

    NASA Astrophysics Data System (ADS)

    Wang, Minghuai; Ghan, Steven; Liu, Xiaohong; L'Ecuyer, Tristan S.; Zhang, Kai; Morrison, Hugh; Ovchinnikov, Mikhail; Easter, Richard; Marchand, Roger; Chand, Duli; Qian, Yun; Penner, Joyce E.

    2012-08-01

    Aerosol indirect effects have remained the largest uncertainty in estimates of the radiative forcing of past and future climate change. Observational constraints on cloud lifetime effects are particularly challenging since it is difficult to separate aerosol effects from meteorological influences. Here we use three global climate models, including a multi-scale aerosol-climate model PNNL-MMF, to show that the dependence of the probability of precipitation on aerosol loading, termed the precipitation frequency susceptibility (Spop), is a good measure of the liquid water path response to aerosol perturbation (λ), as both Spop and λ strongly depend on the magnitude of autoconversion, a model representation of precipitation formation via collisions among cloud droplets. This provides a method to use satellite observations to constrain cloud lifetime effects in global climate models. Spop in marine clouds estimated from CloudSat, MODIS and AMSR-E observations is substantially lower than that from global climate models and suggests a liquid water path increase of less than 5% from doubled cloud condensation nuclei concentrations. This implies a substantially smaller impact on shortwave cloud radiative forcing over ocean due to aerosol indirect effects than simulated by current global climate models (a reduction by one-third for one of the conventional aerosol-climate models). Further work is needed to quantify the uncertainties in satellite-derived estimates of Spop and to examine Spop in high-resolution models.
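
    The precipitation frequency susceptibility can be estimated with a simple log-log regression: bin cloudy scenes by aerosol loading, compute the precipitating fraction (POP) per bin, and regress ln(POP) on ln(aerosol). The sketch below uses synthetic scenes and assumes the usual minus-sign convention for susceptibilities; the paper's exact estimator may differ.

        import numpy as np

        def precip_frequency_susceptibility(aerosol, precipitating, n_bins=10):
            """Estimate S_pop = -d ln(POP) / d ln(aerosol) by binning scenes by
            aerosol loading and regressing ln(POP) on ln(mean aerosol per bin)."""
            edges = np.quantile(aerosol, np.linspace(0.0, 1.0, n_bins + 1))
            ln_a, ln_pop = [], []
            for lo, hi in zip(edges[:-1], edges[1:]):
                sel = (aerosol >= lo) & (aerosol <= hi)
                pop = precipitating[sel].mean()
                if pop > 0:
                    ln_a.append(np.log(aerosol[sel].mean()))
                    ln_pop.append(np.log(pop))
            slope, _ = np.polyfit(ln_a, ln_pop, 1)
            return -slope

        # synthetic scenes with precipitation probability falling off as aerosol^-0.3
        rng = np.random.default_rng(2)
        aerosol = rng.lognormal(mean=0.0, sigma=0.8, size=20000)
        p_rain = np.clip(0.3 * aerosol ** -0.3, 0.0, 1.0)
        precipitating = rng.random(aerosol.size) < p_rain
        print("estimated S_pop ~", round(precip_frequency_susceptibility(aerosol, precipitating), 2))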

  11. Constraining future terrestrial carbon cycle projections using observation-based water and carbon flux estimates.

    PubMed

    Mystakidis, Stefanos; Davin, Edouard L; Gruber, Nicolas; Seneviratne, Sonia I

    2016-06-01

    The terrestrial biosphere is currently acting as a sink for about a third of the total anthropogenic CO2  emissions. However, the future fate of this sink in the coming decades is very uncertain, as current earth system models (ESMs) simulate diverging responses of the terrestrial carbon cycle to upcoming climate change. Here, we use observation-based constraints of water and carbon fluxes to reduce uncertainties in the projected terrestrial carbon cycle response derived from simulations of ESMs conducted as part of the 5th phase of the Coupled Model Intercomparison Project (CMIP5). We find in the ESMs a clear linear relationship between present-day evapotranspiration (ET) and gross primary productivity (GPP), as well as between these present-day fluxes and projected changes in GPP, thus providing an emergent constraint on projected GPP. Constraining the ESMs based on their ability to simulate present-day ET and GPP leads to a substantial decrease in the projected GPP and to a ca. 50% reduction in the associated model spread in GPP by the end of the century. Given the strong correlation between projected changes in GPP and in NBP in the ESMs, applying the constraints on net biome productivity (NBP) reduces the model spread in the projected land sink by more than 30% by 2100. Moreover, the projected decline in the land sink is at least doubled in the constrained ensembles and the probability that the terrestrial biosphere is turned into a net carbon source by the end of the century is strongly increased. This indicates that the decline in the future land carbon uptake might be stronger than previously thought, which would have important implications for the rate of increase in the atmospheric CO2 concentration and for future climate change. PMID:26732346

  12. Constraining future terrestrial carbon cycle projections using observation-based water and carbon flux estimates.

    PubMed

    Mystakidis, Stefanos; Davin, Edouard L; Gruber, Nicolas; Seneviratne, Sonia I

    2016-06-01

    The terrestrial biosphere is currently acting as a sink for about a third of the total anthropogenic CO2  emissions. However, the future fate of this sink in the coming decades is very uncertain, as current earth system models (ESMs) simulate diverging responses of the terrestrial carbon cycle to upcoming climate change. Here, we use observation-based constraints of water and carbon fluxes to reduce uncertainties in the projected terrestrial carbon cycle response derived from simulations of ESMs conducted as part of the 5th phase of the Coupled Model Intercomparison Project (CMIP5). We find in the ESMs a clear linear relationship between present-day evapotranspiration (ET) and gross primary productivity (GPP), as well as between these present-day fluxes and projected changes in GPP, thus providing an emergent constraint on projected GPP. Constraining the ESMs based on their ability to simulate present-day ET and GPP leads to a substantial decrease in the projected GPP and to a ca. 50% reduction in the associated model spread in GPP by the end of the century. Given the strong correlation between projected changes in GPP and in NBP in the ESMs, applying the constraints on net biome productivity (NBP) reduces the model spread in the projected land sink by more than 30% by 2100. Moreover, the projected decline in the land sink is at least doubled in the constrained ensembles and the probability that the terrestrial biosphere is turned into a net carbon source by the end of the century is strongly increased. This indicates that the decline in the future land carbon uptake might be stronger than previously thought, which would have important implications for the rate of increase in the atmospheric CO2 concentration and for future climate change.
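
    A generic emergent-constraint calculation of the kind described above: regress the projected change across the model ensemble against a present-day observable, then read off the constrained projection at the observed value while propagating the observational uncertainty. The ensemble values and observation below are invented for illustration, not CMIP5 results.

        import numpy as np

        def emergent_constraint(x_models, y_models, x_obs, x_obs_sigma, n_draws=100000):
            """Constrain projection y using the across-ensemble relation y ~ a*x + b
            and an observed value of the present-day quantity x."""
            a, b = np.polyfit(x_models, y_models, 1)
            resid_sigma = np.std(y_models - (a * x_models + b), ddof=2)
            rng = np.random.default_rng(3)
            x_draws = rng.normal(x_obs, x_obs_sigma, n_draws)
            y_draws = a * x_draws + b + rng.normal(0.0, resid_sigma, n_draws)
            return np.percentile(y_draws, [5, 50, 95])

        # hypothetical ensemble: present-day GPP (PgC/yr) vs. projected change in GPP (%)
        gpp_present = np.array([110, 118, 125, 131, 140, 147, 155, 162])
        gpp_change  = np.array([22, 20, 17, 16, 13, 11, 9, 7])
        print(emergent_constraint(gpp_present, gpp_change, x_obs=123.0, x_obs_sigma=8.0))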

  13. Combining Observations of Shock-induced Minerals with Calculations to Constrain the Shock History of Meteorites.

    NASA Astrophysics Data System (ADS)

    de Carli, P. S.; Xie, Z.; Sharp, T. G.

    2007-12-01

    All available evidence from shock Hugoniot and release adiabat measurements and from shock recovery experiments supports the hypothesis that the conditions for shock-induced phase transitions are similar to the conditions under which quasistatic phase transitions are observed. Transitions that require high temperatures under quasistatic pressures require high temperatures under shock pressures. The high-pressure phases found in shocked meteorites are almost invariably associated with shock melt veins. A shock melt vein is analogous to a pseudotachylite, a sheet of locally melted material that was quenched by conduction to surrounding cooler material. The mechanism by which shock melt veins form is not known; possible mechanisms include shock collisions, shock interactions with cracks and pores, and adiabatic shear. If one assumes that the phases within the vein crystallized in their stability fields, then available static high-pressure data constrain the shock pressure range over which the vein solidified. Since the veins have a sheet-like geometry, one may use one-dimensional heat flow calculations to constrain the cooling and crystallization history of the veins (Langenhorst and Poirier, 2000). Although the formation mechanism of a melt vein may involve transient pressure excursions, pressure equilibration of a mm-wide vein will be complete within about a microsecond, whereas thermal equilibration will require seconds. Some of our melt vein studies have indicated that the highly-shocked L chondrite meteorites were exposed to a narrow range of shock pressures, e.g., 18-25 GPa, over a minimum duration of the order of a second. We have used the Autodyn(TM) wave propagation code to calculate details of plausible impacts on the L-chondrite parent body for a variety of possible parent body stratigraphies. We infer that some meteorites probably represent material that was shocked at a depth of >10 km in their parent bodies.
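
    The one-dimensional heat-flow argument can be reproduced with a short explicit finite-difference calculation: a hot vein of given half-width embedded in cooler host rock cools conductively until its centre drops below an assumed crystallization temperature. All material properties and temperatures below are illustrative, not measurements for any particular meteorite.

        import numpy as np

        # illustrative properties (not measured values for any specific meteorite)
        kappa = 1.0e-6                    # thermal diffusivity, m^2/s
        t_vein, t_host = 2200.0, 900.0    # initial temperatures, K
        t_crystallize = 1800.0            # assumed solidus of the vein melt, K
        half_width = 0.5e-3               # half-width of a 1 mm wide vein, m

        dx = 5.0e-5
        x = np.arange(0.0, 0.02, dx)                  # 2 cm of host rock on one side
        T = np.where(x <= half_width, t_vein, t_host) # model half the vein by symmetry
        dt = 0.4 * dx**2 / kappa                      # stable explicit time step

        time = 0.0
        while T[0] > t_crystallize:
            lap = np.empty_like(T)
            lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
            lap[0] = 2 * (T[1] - T[0]) / dx**2        # symmetry at the vein centre
            lap[-1] = 0.0                             # far-field host held fixed
            T = T + kappa * dt * lap
            time += dt
        print(f"vein centre reaches {t_crystallize} K after ~{time:.2f} s")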

  14. Fast emission estimates in China and South Africa constrained by satellite observations

    NASA Astrophysics Data System (ADS)

    Mijling, Bas; van der A, Ronald

    2013-04-01

    Emission inventories of air pollutants are crucial information for policy makers and form important input data for air quality models. Unfortunately, bottom-up emission inventories, compiled from large quantities of statistical data, are easily outdated for emerging economies such as China and South Africa, where rapid economic growth changes emissions accordingly. Alternatively, top-down emission estimates from satellite observations of air constituents have the important advantages of being spatially consistent, having high temporal resolution, and enabling emission updates shortly after the satellite data become available. However, constraining emissions from observations of concentrations is computationally challenging. Within the GlobEmission project (part of the Data User Element programme of ESA) a new algorithm has been developed, specifically designed for fast daily emission estimates of short-lived atmospheric species on a mesoscopic scale (0.25 × 0.25 degree) from satellite observations of column concentrations. The algorithm needs only one forward model run from a chemical transport model to calculate the sensitivity of concentration to emission, using trajectory analysis to account for transport away from the source. By using a Kalman filter in the inverse step, optimal use of the a priori knowledge and the newly observed data is made. We apply the algorithm for NOx emission estimates in East China and South Africa, using the CHIMERE chemical transport model together with tropospheric NO2 column retrievals of the OMI and GOME-2 satellite instruments. The observations are used to construct a monthly emission time series, which reveal important emission trends such as the emission reduction measures during the Beijing Olympic Games, and the impact and recovery from the global economic crisis. The algorithm is also able to detect emerging sources (e.g. new power plants) and improve emission information for areas where proxy data are not or badly known (e
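
    The inverse step described above can be illustrated with a scalar Kalman filter for a single grid cell: the state is the emission, a linear sensitivity (obtained from one chemical transport model run plus trajectory analysis in the actual algorithm) maps emission to an expected column, and each day's satellite column updates the estimate. The sensitivity, error variances, and synthetic observations below are invented.

        import numpy as np

        def kalman_emission_update(x, p, y_obs, h, r, q):
            """One daily update of an emission estimate x (variance p) from an
            observed column y_obs, given sensitivity h = d(column)/d(emission),
            observation-error variance r and model-error (inflation) variance q."""
            p = p + q                        # forecast step: persistence + inflation
            k = p * h / (h * h * p + r)      # Kalman gain
            x = x + k * (y_obs - h * x)      # analysis
            p = (1.0 - k * h) * p
            return x, p

        # synthetic experiment: true emission drifts, sensitivity h from one model run
        rng = np.random.default_rng(4)
        h, r, q = 2.0, 0.4 ** 2, 0.05 ** 2
        x, p = 1.0, 0.5 ** 2                 # prior emission and its variance
        true = 1.5
        for day in range(60):
            true += 0.01                     # slow emission growth to be tracked
            y = h * true + rng.normal(0.0, np.sqrt(r))
            x, p = kalman_emission_update(x, p, y, h, r, q)
        print(f"final estimate {x:.2f} vs. true {true:.2f}")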

  15. Using modern stellar observables to constrain stellar parameters and the physics of the stellar interior

    NASA Astrophysics Data System (ADS)

    van Saders, Jennifer L.

    2014-05-01

    The current state and future evolution of a star are, in principle, specified by only a few physical quantities: the mass, age, hydrogen, helium, and metal abundance. These same fundamental quantities are crucial for reconstructing the history of stellar systems ranging in scale from planetary systems to galaxies. However, the fundamental parameters are rarely directly observable, and we are forced to use proxies that are not always sensitive or unique functions of the stellar parameters we wish to determine. Imprecise or inaccurate determinations of the fundamental parameters often limit our ability to draw inferences about a given system. As new technologies, instruments, and observing techniques become available, the list of viable stellar observables increases, and we can explore new links between the observables and fundamental quantities in an effort to better characterize stellar systems. In the era of missions such as Kepler, time-domain observables such as the stellar rotation period and stellar oscillations are now available for an unprecedented number of stars, and future missions promise to further expand the sample. Furthermore, despite the successes of stellar evolution models, the processes and detailed structure of the deep stellar interior remain uncertain. Even in the case of well-measured, well understood stellar observables, the link to the underlying parameters contains uncertainties due to our imperfect understanding of stellar interiors. Model uncertainties arise from sources such as the treatment of turbulent convection, transport of angular momentum and mixing, and assumptions about the physical conditions of stellar matter. By carefully examining the sensitivity of stellar observables to physical processes operating within the star and model assumptions, we can design observational tests for the theory of stellar interiors. I propose a series of tools based on new or revisited stellar observables that can be used both to constrain

  16. How essential are Argo observations to constrain a global ocean data assimilation system?

    NASA Astrophysics Data System (ADS)

    Turpin, V.; Remy, E.; Le Traon, P. Y.

    2016-02-01

    Observing system experiments (OSEs) are carried out over a 1-year period to quantify the impact of Argo observations on the Mercator Ocean 0.25° global ocean analysis and forecasting system. The reference simulation assimilates sea surface temperature (SST), SSALTO/DUACS (Segment Sol multi-missions dALTimetrie, d'orbitographie et de localisation précise/Data unification and Altimeter combination system) altimeter data and Argo and other in situ observations from the Coriolis data center. Two other simulations are carried out in which all of the Argo data and half of the Argo data, respectively, are withheld. Assimilating Argo observations has a significant impact on analyzed and forecast temperature and salinity fields at different depths. Without Argo data assimilation, large errors occur in analyzed fields as estimated from the differences when compared with in situ observations. For example, in the 0-300 m layer RMS (root mean square) differences between analyzed fields and observations reach 0.25 psu and 1.25 °C in the western boundary currents and 0.1 psu and 0.75 °C in the open ocean. The impact of the Argo data in reducing observation-model forecast differences is also significant from the surface down to a depth of 2000 m. Differences between in situ observations and forecast fields are thus reduced by 20 % in the upper layers and by up to 40 % at a depth of 2000 m when Argo data are assimilated. At depth, the most impacted regions in the global ocean are the Mediterranean outflow, the Gulf Stream region and the Labrador Sea. A significant degradation can be observed when only half of the data are assimilated. Therefore, Argo observations matter to constrain the model solution, even for an eddy-permitting model configuration. The impact of the Argo floats' data assimilation on other model variables is briefly assessed: the improvement of the fit to Argo profiles does not lead globally to unphysical corrections on the sea surface temperature and sea surface height. The main conclusion

  17. Constraining a Martian general circulation model with the MAVEN/IUVS observations in the thermosphere

    NASA Astrophysics Data System (ADS)

    Moeckel, Chris; Medvedev, Alexander; Nakagawa, Hiromu; Evans, Scott; Kuroda, Takeshi; Hartogh, Paul; Yiğit, Erdal; Jain, Sonal; Lo, Daniel; Schneider, Nicholas M.; Jakosky, Bruce

    2016-10-01

    The recent measurements of the number density of atomic oxygen by Mars Atmosphere and Volatile EvolutioN/ Imaging UltraViolet Spectrograph (MAVEN/IUVS) have been implemented for the first time into a global circulation model to quantify the effect on the Martian thermosphere. The number density has been converted to 1D volume mixing ratio and this profile is compared to the atomic oxygen scenarios based on chemical models. Simulations were performed with the Max Planck Institute Martian General Circulation Model (MPI-MGCM). The simulations closely emulate the conditions at the time of observations. The results are compared to the IUVS-measured CO2 number density and temperature above 130 km to gain knowledge of the processes in the upper atmosphere and further constrain them in MGCMs. The presentation will discuss the role and importance in the thermosphere of the following aspects: (a) impact of the observed atomic oxygen, (b) 27-day solar cycle variations, (c) varying dust load in the lower atmosphere, and (d) gravity waves.

  18. Catching the fish - Constraining stellar parameters for TX Piscium using spectro-interferometric observations

    NASA Astrophysics Data System (ADS)

    Klotz, D.; Paladini, C.; Hron, J.; Aringer, B.; Sacuto, S.; Marigo, P.; Verhoelst, T.

    2013-02-01

    Context. Stellar parameter determination is a challenging task when dealing with galactic giant stars. The combination of different investigation techniques has proven to be a promising approach. Aims: We analyse archive spectra obtained with the Short Wavelength Spectrometer (SWS) onboard ISO, and new interferometric observations from the Very Large Telescope MID-infrared Interferometric instrument (VLTI/MIDI) of a very well studied carbon-rich giant: TX Psc. The aim of this work is to determine stellar parameters using spectroscopy and interferometry. Methods: The observations are used to constrain the model atmosphere, and eventually the stellar evolutionary model in the region where the tracks map the beginning of the carbon star sequence. Two different approaches are used to determine stellar parameters: (i) the "classic" interferometric approach where the effective temperature is fixed by using the angular diameter in the N-band (from interferometry) and the apparent bolometric magnitude; (ii) parameters are obtained by fitting a grid of state-of-the-art hydrostatic models to spectroscopic and interferometric observations. Results: We find good agreement between the parameters of the two methods. The effective temperature and luminosity clearly place TX Psc in the carbon-rich AGB star domain in the H-R-diagram. Current evolutionary tracks suggest that TX Psc became a C-star just recently, which means that the star is still in a "quiet" phase compared to the subsequent strong-wind regime. This agrees with the C/O ratio being only slightly greater than one. Based on observations made with ESO telescopes at Paranal Observatory under program IDs 74.D-0601, 60.A-9224, 77.C-0440, 60.A-9006, 78.D-0112, 84.D-0805.

  19. Observing transiting exoplanets: Removing systematic errors to constrain atmospheric chemistry and dynamics

    NASA Astrophysics Data System (ADS)

    Zellem, Robert Thomas

    2015-03-01

    The > 1500 confirmed exoplanets span a wide range of planetary masses (~1 M_Earth to 20 M_Jupiter), radii (~0.3 R_Earth to 2 R_Jupiter), semi-major axes (~0.005-100 AU), orbital periods (~0.3 to 1 x 10^5 days), and host star spectral types. The effects of a widely-varying parameter space on a planetary atmosphere's chemistry and dynamics can be determined through transiting exoplanet observations. An exoplanet's atmospheric signal, either in absorption or emission, is on the order of 0.1%, which is dwarfed by telescope-specific systematic error sources of up to 60%. This thesis explores some of the major sources of error and their removal from space- and ground-based observations, specifically Spitzer/IRAC single-object photometry, IRTF/SpeX and Palomar/TripleSpec low-resolution single-slit near-infrared spectroscopy, and Kuiper/Mont4k multi-object photometry. The errors include pointing-induced uncertainties, airmass variations, seeing-induced signal loss, telescope jitter, and system variability. They are treated with detector efficiency pixel-mapping, normalization routines, a principal component analysis, binning with the geometric mean in Fourier-space, characterization by a comparison star, repeatability, and stellar monitoring to get within a few times the photon noise limit. As a result, these observations provide strong measurements of an exoplanet's dynamical day-to-night heat transport, constrain its CH4 abundance, investigate emission mechanisms, and develop an observing strategy with smaller telescopes. The reduction methods presented here can also be applied to other existing and future platforms to identify and remove systematic errors. Until such sources of uncertainty are characterized with bright systems with large planetary signals for platforms such as the James Webb Space Telescope, for example, one cannot resolve smaller objects with more subtle spectral features, as expected of exo-Earths.

  20. Dielectric properties of Asteroid Vesta's surface as constrained by Dawn VIR observations

    NASA Astrophysics Data System (ADS)

    Palmer, Elizabeth M.; Heggy, Essam; Capria, Maria T.; Tosi, Federico

    2015-12-01

    Earth and orbital-based radar observations of asteroids provide a unique opportunity to characterize surface roughness and the dielectric properties of their surfaces, as well as potentially explore some of their shallow subsurface physical properties. If the dielectric and topographic properties of asteroids' surfaces are defined, one can constrain their surface textural characteristics as well as potential subsurface volatile enrichment using the observed radar backscatter. To achieve this objective, we establish the first dielectric model of asteroid Vesta for the case of a dry, volatile-poor regolith, employing an analogy to the dielectric properties of lunar soil, and adjusted for the surface densities and temperatures deduced from Dawn's Visible and InfraRed mapping spectrometer (VIR). Our model suggests that the real part of the dielectric constant at the surface of Vesta is relatively constant, ranging from 2.3 to 2.5 from the night- to day-side of Vesta, while the loss tangent shows slight variation as a function of diurnal temperature, ranging from 6 × 10^-3 to 8 × 10^-3. We estimate the surface porosity to be ∼55% in the upper meter of the regolith, as derived from VIR observations. This is ∼12% higher than previous estimates of porosity derived from Earth-based X- and S-band radar observations. We suggest that the radar backscattering properties of asteroid Vesta will be mainly driven by the changes in surface roughness rather than potential dielectric variations in the upper regolith in the X- and S-band.
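
    The density dependence of the real permittivity is often approximated, by analogy with lunar soil measurements, as eps' ≈ 1.919^rho with rho the bulk density in g/cm^3. The sketch below combines that analogy, used here as an assumption rather than the authors' exact formulation, with a simple porosity-density relation and an assumed howardite-like grain density.

        def bulk_density(grain_density, porosity):
            """Bulk density (g/cm^3) of a regolith with the given fractional porosity."""
            return grain_density * (1.0 - porosity)

        def real_permittivity(rho_bulk):
            """Lunar-soil analogy for the real part of the dielectric constant
            (power-law in bulk density, used here as an assumption)."""
            return 1.919 ** rho_bulk

        grain_density = 3.2          # g/cm^3, assumed howardite-like grain density
        for porosity in (0.40, 0.55):
            rho = bulk_density(grain_density, porosity)
            print(f"porosity {porosity:.0%}: rho = {rho:.2f} g/cm^3, "
                  f"eps' = {real_permittivity(rho):.2f}")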

  1. Constraining the Properties of Small Stars and Small Planets Observed by K2

    NASA Astrophysics Data System (ADS)

    Dressing, Courtney D.; Newton, Elisabeth R.; Charbonneau, David; Schlieder, Josh; Hawaii/California/Arizona/Indiana K2 Follow-up Consortium, HARPS-N Consortium

    2016-01-01

    We are using the results of the NASA K2 mission (the second career of the Kepler spacecraft) to study how the frequency and architectures of planetary systems orbiting M dwarfs throughout the ecliptic plane compare to those of the early M dwarf planetary systems observed by Kepler. In a previous analysis of the Kepler data set, we found that planets orbiting early M dwarfs are common: we measured a cumulative planet occurrence rate of 2.45 +/- 0.22 planets per M dwarf with periods of 0.5-200 days and planet radii of 1-4 Earth radii. Within a conservative habitable zone based on the moist greenhouse inner limit and maximum greenhouse outer limit, we estimated an occurrence rate of 0.15 (+0.18/-0.06) Earth-size planets and 0.09 (+0.10/-0.04) super-Earths per M dwarf HZ. Applying these occurrence rates to the population of nearby stars and assuming that mid- and late-M dwarfs host planets at the same rate as early M dwarfs, we predicted that the nearest potentially habitable Earth-size planet likely orbits an M dwarf a mere 2.6 ± 0.4 pc away. We are now testing the assumption of equal planet occurrence rates for M dwarfs of all types by inspecting the population of planets detected by K2 and conducting follow-up observations of planet candidate host stars to identify false positives and better constrain system parameters. I will present the results of recent observing runs with SpeX on the IRTF to obtain near-infrared spectra of low-mass stars targeted by K2 and determine the radii, temperatures, and metallicities of our target stars using empirical relations. We gratefully acknowledge funding from the NASA XRP Program, the John Templeton Foundation, and the NASA Sagan Fellowship Program.
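
    The "nearest potentially habitable planet" distance quoted above follows from combining an occurrence rate with a local stellar number density: the radius of a sphere expected to contain one qualifying planet is d ≈ (3 / (4π n η))^(1/3). The M dwarf number density used below is an assumed illustrative value, not a figure taken from the abstract.

        import math

        def nearest_planet_distance(stellar_density, occurrence_rate):
            """Radius (pc) of the sphere that contains, on average, one qualifying
            planet, for stars of number density `stellar_density` (pc^-3) hosting
            `occurrence_rate` such planets each."""
            return (3.0 / (4.0 * math.pi * stellar_density * occurrence_rate)) ** (1.0 / 3.0)

        n_mdwarf = 0.057        # assumed local M dwarf number density, pc^-3 (illustrative)
        eta_hz = 0.15 + 0.09    # Earth-size + super-Earth HZ occurrence rates from the abstract
        print(f"nearest M dwarf HZ planet: ~{nearest_planet_distance(n_mdwarf, eta_hz):.1f} pc")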

  2. Constraining the low-cloud optical depth feedback at middle and high latitudes using satellite observations

    DOE PAGES

    Terai, C. R.; Klein, S. A.; Zelinka, M. D.

    2016-08-26

    The increase in cloud optical depth with warming at middle and high latitudes is a robust cloud feedback response found across all climate models. This study builds on results that suggest the optical depth response to temperature is timescale invariant for low-level clouds. The timescale invariance allows one to use satellite observations to constrain the models' optical depth feedbacks. Three passive-sensor satellite retrievals are compared against simulations from eight models from the Atmosphere Model Intercomparison Project (AMIP) of the 5th Coupled Model Intercomparison Project (CMIP5). This study confirms that the low-cloud optical depth response is timescale invariant in the AMIP simulations, generally at latitudes higher than 40°. Compared to satellite estimates, most models overestimate the increase in optical depth with warming at the monthly and interannual timescales. Many models also do not capture the increase in optical depth with estimated inversion strength that is found in all three satellite observations and in previous studies. The discrepancy between models and satellites exists in both hemispheres and in most months of the year. A simple replacement of the models' optical depth sensitivities with the satellites' sensitivities reduces the negative shortwave cloud feedback by at least 50% in the 40°–70°S latitude band and by at least 65% in the 40°–70°N latitude band. Furthermore, based on this analysis of satellite observations, we conclude that the low-cloud optical depth feedback at middle and high latitudes is likely too negative in climate models.
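
    The "simple replacement" step mentioned above amounts to rescaling each model's shortwave low-cloud feedback by the ratio of the satellite-derived to the model-derived optical depth sensitivity, assuming the feedback scales linearly with that sensitivity. The numbers below are invented for illustration.

        def constrained_feedback(model_feedback, model_dlntau_dT, obs_dlntau_dT):
            """Rescale a model's shortwave low-cloud optical depth feedback (W m^-2 K^-1)
            by the ratio of the satellite-derived to the model-derived sensitivity
            d ln(tau)/dT, assuming the feedback scales linearly with that sensitivity."""
            return model_feedback * (obs_dlntau_dT / model_dlntau_dT)

        # invented example: a -0.6 W m^-2 K^-1 model feedback with a satellite sensitivity
        # about a third of the model's gives roughly -0.2 W m^-2 K^-1
        print(constrained_feedback(-0.6, 0.09, 0.03))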

  3. GRACE gravity observations constrain Weichselian ice thickness in the Barents Sea

    NASA Astrophysics Data System (ADS)

    Root, B. C.; Tarasov, L.; Wal, W.

    2015-05-01

    The Barents Sea is subject to ongoing postglacial uplift since the melting of the Weichselian ice sheet that covered it. The regional ice sheet thickness history is not well known because there is only data at the periphery due to the locations of Franz Joseph Land, Svalbard, and Novaya Zemlya surrounding this paleo ice sheet. We show that the linear trend in the gravity rate derived from a decade of observations from the Gravity Recovery and Climate Experiment (GRACE) satellite mission can constrain the volume of the ice sheet after correcting for current ice melt, hydrology, and far-field gravitational effects. Regional ice-loading models based on new geologically inferred ice margin chronologies show a significantly better fit to the GRACE data than that of ICE-5G. The regional ice models contain less ice in the Barents Sea than present in ICE-5G (5-6.3 m equivalent sea level versus 8.5 m), which increases the ongoing difficulty in closing the global sea level budget at the Last Glacial Maximum.
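
    The core of the constraint above is the linear trend in the corrected gravity time series over the Barents Sea. A minimal sketch of such a trend fit is given below; the time axis, anomalies, and corrections for present-day ice melt, hydrology, and far-field effects are placeholders, not GRACE data.

        import numpy as np

        # placeholder monthly time axis (decimal years) and gravity anomaly (microGal)
        t = np.arange(2003.0, 2013.0, 1.0 / 12.0)
        rng = np.random.default_rng(5)
        observed = 0.6 * (t - t[0]) + 0.3 * np.sin(2 * np.pi * t) + rng.normal(0, 0.2, t.size)
        corrections = 0.2 * (t - t[0])      # stand-in for ice melt + hydrology + far field

        residual = observed - corrections   # signal attributed to glacial isostatic adjustment
        rate, offset = np.polyfit(t, residual, 1)
        print(f"GIA gravity rate: {rate:.2f} microGal/yr")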

  4. Global estimate of submarine groundwater discharge based on an observationally constrained radium isotope model

    NASA Astrophysics Data System (ADS)

    Kwon, Eun Young; Kim, Guebuem; Primeau, Francois; Moore, Willard S.; Cho, Hyung-Mi; DeVries, Timothy; Sarmiento, Jorge L.; Charette, Matthew A.; Cho, Yang-Ki

    2014-12-01

    Along the continental margins, rivers and submarine groundwater supply nutrients, trace elements, and radionuclides to the coastal ocean, supporting coastal ecosystems and, increasingly, causing harmful algal blooms and eutrophication. While the global magnitude of gauged riverine water discharge is well known, the magnitude of submarine groundwater discharge (SGD) is poorly constrained. Using an inverse model combined with a global compilation of ^228Ra observations, we show that the SGD integrated over the Atlantic and Indo-Pacific Oceans between 60°S and 70°N is (12 ± 3) × 10^13 m^3 yr^-1, which is 3 to 4 times greater than the freshwater fluxes into the oceans by rivers. Unlike the rivers, where more than half of the total flux is discharged into the Atlantic, about 70% of SGD flows into the Indo-Pacific Oceans. We suggest that SGD is the dominant pathway for dissolved terrestrial materials to the global ocean, and this necessitates revisions for the budgets of chemical elements including carbon.

  5. Bioenergy potential of the United States constrained by satellite observations of existing productivity

    USGS Publications Warehouse

    Reed, Sasha C.; Smith, William K.; Cleveland, Cory C.; Miller, Norman L.; Running, Steven W.

    2012-01-01

    Background/Question/Methods Currently, the United States (U.S.) supplies roughly half the world’s biofuel (secondary bioenergy), with the Energy Independence and Security Act of 2007 (EISA) stipulating an additional three-fold increase in annual production by 2022. Implicit in such energy targets is an associated increase in annual biomass demand (primary bioenergy) from roughly 2.9 to 7.4 exajoules (EJ; 10^18 J). Yet, many of the factors used to estimate future bioenergy potential are relatively unresolved, bringing into question the practicality of the EISA’s ambitious bioenergy targets. Here, our objective was to constrain estimates of primary bioenergy potential (PBP) for the conterminous U.S. using satellite-derived net primary productivity (NPP) data (measured for every 1 km^2 of the 7.2 million km^2 of vegetated land in the conterminous U.S.) as the most geographically explicit measure of terrestrial growth capacity. Results/Conclusions We show that the annual primary bioenergy potential (PBP) of the conterminous U.S. realistically ranges from approximately 5.9 (± 1.4) to 22.2 (± 4.4) EJ, depending on land use. The low end of this range represents current harvest residuals, an attractive potential energy source since no additional harvest land is required. In contrast, the high end represents an annual harvest over an additional 5.4 million km^2 or 75% of vegetated land in the conterminous U.S. While we identify EISA energy targets as achievable, our results indicate that meeting such targets using current technology would require either an 80% displacement of current croplands or the conversion of 60% of total rangelands. Our results differ from previous evaluations in that we use high resolution, satellite-derived NPP as an upper-envelope constraint on bioenergy potential, which removes the need for extrapolation of plot-level observed yields over large spatial areas. Establishing realistically constrained estimates of bioenergy potential seems a
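
    The unit conversion behind such primary-bioenergy numbers can be made explicit: NPP (grams of carbon per square metre per year) is converted to dry biomass, multiplied by an energy content and a recoverable fraction, and summed over the harvested area. Every factor below is a round, assumed value chosen for illustration, not one used in the study.

        # illustrative back-of-envelope conversion from satellite NPP to primary bioenergy
        NPP_GC_PER_M2_YR = 500.0     # assumed mean NPP over harvested land, gC m^-2 yr^-1
        CARBON_FRACTION = 0.5        # dry biomass is roughly half carbon by mass
        ENERGY_MJ_PER_KG = 18.0      # assumed heating value of dry biomass, MJ/kg
        RECOVERABLE = 0.2            # assumed fraction of NPP actually harvestable

        def primary_bioenergy_ej(area_km2):
            """Primary bioenergy potential in EJ/yr (1 EJ = 10^18 J) for a given area."""
            dry_kg_per_m2 = NPP_GC_PER_M2_YR / CARBON_FRACTION / 1000.0
            joules = dry_kg_per_m2 * ENERGY_MJ_PER_KG * 1e6 * RECOVERABLE * area_km2 * 1e6
            return joules / 1e18

        # 75% of the 7.2 million km^2 of vegetated conterminous U.S. land (from the abstract)
        print(f"~{primary_bioenergy_ej(0.75 * 7.2e6):.0f} EJ/yr")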

  6. CONSTRAINING HIGH-SPEED WINDS IN EXOPLANET ATMOSPHERES THROUGH OBSERVATIONS OF ANOMALOUS DOPPLER SHIFTS DURING TRANSIT

    SciTech Connect

    Miller-Ricci Kempton, Eliza; Rauscher, Emily

    2012-06-01

    Three-dimensional (3D) dynamical models of hot Jupiter atmospheres predict very strong wind speeds. For tidally locked hot Jupiters, winds at high altitude in the planet's atmosphere advect heat from the day side to the cooler night side of the planet. Net wind speeds on the order of 1-10 km s^-1 directed towards the night side of the planet are predicted at mbar pressures, which is the approximate pressure level probed by transmission spectroscopy. These winds should result in an observed blueshift of spectral lines in transmission on the order of the wind speed. Indeed, Snellen et al. recently observed a 2 ± 1 km s^-1 blueshift of CO transmission features for HD 209458b, which has been interpreted as a detection of the day-to-night (substellar to anti-stellar) winds that have been predicted by 3D atmospheric dynamics modeling. Here, we present the results of a coupled 3D atmospheric dynamics and transmission spectrum model, which predicts the Doppler-shifted spectrum of a hot Jupiter during transit resulting from winds in the planet's atmosphere. We explore four different models for the hot Jupiter atmosphere using different prescriptions for atmospheric drag via interaction with planetary magnetic fields. We find that models with no magnetic drag produce net Doppler blueshifts in the transmission spectrum of ~2 km s^-1 and that lower Doppler shifts of ~1 km s^-1 are found for the higher drag cases, results consistent with, but not yet strongly constrained by, the Snellen et al. measurement. We additionally explore the possibility of recovering the average terminator wind speed as a function of altitude by measuring Doppler shifts of individual spectral lines and spatially resolving wind speeds across the leading and trailing terminators during ingress and egress.
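
    The observable in this study is a simple kinematic effect: a line-of-sight wind speed v shifts a spectral line by Δλ = λ v / c. The snippet below evaluates this for winds of a few km/s at a wavelength near the 2.3 μm CO band; the choice of wavelength is ours, for illustration.

        C_KM_S = 299792.458                      # speed of light, km/s

        def doppler_shift_nm(wavelength_nm, v_km_s):
            """Wavelength shift (nm) for a line-of-sight velocity v_km_s
            (negative = motion toward the observer = blueshift)."""
            return wavelength_nm * v_km_s / C_KM_S

        for v in (-1.0, -2.0, -3.0):             # km/s, day-to-night flow toward us
            shift_pm = doppler_shift_nm(2300.0, v) * 1000.0
            print(f"{v:+.0f} km/s wind -> {shift_pm:+.1f} pm shift near 2.3 micron")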

  7. Neutrino Data from IceCube and its Predecessor at the South Pole, the Antarctic Muon and Neutrino Detector Array (AMANDA)

    DOE Data Explorer

    Abbasi, R.

    IceCube is a neutrino observatory for astrophysics with parts buried below the surface of the ice at the South Pole and an air-shower detector array exposed above. The international group of sponsors, led by the National Science Foundation (NSF), that designed and implemented the experiment intends for IceCube to operate and provide data for 20 years. IceCube records the interactions produced by astrophysical neutrinos with energies above 100 GeV, observing the Cherenkov radiation from charged particles produced in neutrino interactions. Its goal is to discover the sources of high-energy cosmic rays. These sources may be active galactic nuclei (AGNs) or massive, collapsed stars where black holes have formed. [Taken from http://www.icecube.wisc.edu/] Data from AMANDA, IceCube's predecessor detector and experiment, are also available at this website. AMANDA pioneered neutrino detection in ice. Over a period of years in the 1990s, detector “strings” were buried and activated, and by 2000 AMANDA was successfully recording an average of 1,000 neutrino events per year. This site also makes available many images and videos from the two experiments.

  8. Constraining the temperature history of the past millennium using early instrumental observations

    NASA Astrophysics Data System (ADS)

    Brohan, P.

    2012-12-01

    The current assessment that twentieth-century global temperature change is unusual in the context of the last thousand years relies on estimates of temperature changes from natural proxies (tree-rings, ice-cores etc.) and climate model simulations. Confidence in such estimates is limited by difficulties in calibrating the proxies and systematic differences between proxy reconstructions and model simulations - notable differences include large differences in multi-decadal variability between proxy reconstructions, and big uncertainties in the effect of volcanic eruptions. Because the difference between the estimates extends into the relatively recent period of the early nineteenth century it is possible to compare them with a reliable instrumental estimate of the temperature change over that period, provided that enough early thermometer observations, covering a wide enough expanse of the world, can be collected. By constraining key aspects of the reconstructions and simulations, instrumental observations, inevitably from a limited period, can reduce reconstruction uncertainty throughout the millennium. A considerable quantity of early instrumental observations are preserved in the world's archives. One organisation which systematically made observations and collected the results was the English East-India Company (EEIC), and 900 log-books of EEIC ships containing daily instrumental measurements of temperature and pressure have been preserved in the British Library. Similar records from voyages of exploration and scientific investigation are preserved in published literature and the records in National Archives. Some of these records have been extracted and digitised, providing hundreds of thousands of new weather records offering an unprecedentedly detailed view of the weather and climate of the late eighteenth and early nineteenth centuries. The new thermometer observations demonstrate that the large-scale temperature response to the Tambora eruption and the 1809

  9. Change Semantic Constrained Online Data Cleaning Method for Real-Time Observational Data Stream

    NASA Astrophysics Data System (ADS)

    Ding, Yulin; Lin, Hui; Li, Rongrong

    2016-06-01

    to large estimation error. In order to achieve the best generalization error, an important challenge for any data cleaning methodology is to characterize the behavior of data stream distributions and to adaptively update a model so that it includes new information and removes old information. However, the complicated, changing character of the data invalidates traditional data cleaning methods, which rely on the assumption of a stationary data distribution, and drives the need for more dynamic and adaptive online data cleaning methods. To overcome these shortcomings, this paper presents a change-semantics-constrained online filtering method for real-time observational data. Based on the principle that the filter parameter should vary in accordance with the data change patterns, the method embeds a semantic description that quantitatively depicts the change patterns in the data distribution, so that the filter parameter adapts automatically. Real-time observational water level data streams from different precipitation scenarios are selected for testing. Experimental results show that this method yields more accurate and reliable water level information, which supports prompt, science-based flood assessment and decision-making.
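
    The following minimal sketch (not the authors' algorithm) illustrates the general idea of an online filter whose smoothing parameter adapts to a change statistic computed from the incoming stream, so that genuine level shifts are tracked while isolated spikes are damped; the thresholds and the synthetic water-level series are assumptions for illustration only.

      # Illustrative sketch only: an exponential filter whose weight adapts to a
      # crude change score; not the change-semantics method of the paper.
      import numpy as np

      def adaptive_filter(stream, alpha_min=0.05, alpha_max=0.8, window=20):
          estimate = stream[0]
          innovations = []
          cleaned = [estimate]
          for z in stream[1:]:
              innovations.append(abs(z - estimate))
              scale = np.median(innovations[-window:]) + 1e-9
              # Change score in [0, 1]: how unusual is the latest innovation?
              score = min(innovations[-1] / (3.0 * scale), 1.0)
              alpha = alpha_min + (alpha_max - alpha_min) * score
              estimate = (1.0 - alpha) * estimate + alpha * z
              cleaned.append(estimate)
          return np.asarray(cleaned)

      # Synthetic water-level stream: steady, then a genuine rise after t = 60,
      # plus one spurious spike at t = 30 that the filter should suppress.
      t = np.arange(120)
      truth = np.where(t < 60, 2.0, 2.0 + 0.02 * (t - 60))
      obs = truth + 0.05 * np.random.randn(t.size)
      obs[30] += 0.6
      print(adaptive_filter(obs)[25:35].round(2))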

  10. Constraining Ammonia in Air Quality Models with Remote Sensing Observations and Inverse Modeling

    NASA Astrophysics Data System (ADS)

    Zhu, Liye

    Ammonia is an important species in the atmosphere as it contributes to air pollution and climate change and affects environmental health. Ammonia emissions are known to be primarily from agricultural sources; however, there is persistent uncertainty in the magnitudes and seasonal trends of these sources, as ammonia has not traditionally been routinely monitored. The first detection of boundary layer ammonia from space by NASA's Tropospheric Emission Spectrometer (TES) has provided an exciting new means of reducing this uncertainty. In this thesis, I explore how forward and inverse modeling can be used with satellite observations to constrain ammonia emissions. Model simulations are used to build and validate the TES ammonia retrieval product. TES retrievals are then used to characterize global ammonia distributions and model estimates. Correlations between ammonia and carbon monoxide, observed simultaneously by TES, provide additional insight into observed and modeled ammonia from biomass burning. Next, through inverse modeling, I show that ammonia emissions are broadly underestimated throughout the U.S., particularly in the West. Optimized model simulations capture the range and variability of in-situ observations in April and October, while estimates in July are biased high. To understand these adjustments, several aspects of the retrieval are considered, such as spatial and temporal sampling biases. These investigations lead to revisions of fundamental aspects of how ammonia emissions are modeled, such as the diurnal variability of livestock ammonia emissions. While this improves comparison to hourly in situ measurements in the SE U.S., ammonia concentrations decrease throughout the globe, by up to 17 ppb in India and Southeastern China. Lastly, the bi-directional air-surface exchange of ammonia is implemented for the first time in a global model and its adjoint. Ammonia bi-directional exchange generally increases ammonia gross emissions (10.9%) and surface

  11. Constraining atmospheric ammonia emissions through new observations with an open-path, laser-based sensor

    NASA Astrophysics Data System (ADS)

    Sun, Kang

    emission estimates. Finally, NH3 observations from the TES instrument on the NASA Aura satellite were validated with mobile measurements and aircraft observations. Improved validation will help to constrain NH3 emissions at continental to global scales. Ultimately, these efforts will improve the understanding of NH3 emissions at all scales, with implications for the global nitrogen cycle and atmospheric chemistry-climate interactions.

  12. Cosmic ray energy spectrum measurement with the Antarctic Muon and Neutrino Detector Array (AMANDA)

    NASA Astrophysics Data System (ADS)

    Chirkin, Dmitry Aleksandrovich

    AMANDA-II is a neutrino telescope composed of 677 optical sensors organized along 19 strings buried deep in the Antarctic ice cap. It is designed to detect Cherenkov light produced by cosmic-ray- and neutrino-induced charged leptons. The majority of events recorded by AMANDA-II are caused by muons which are produced in the atmosphere by high-energy cosmic rays. The leading uncertainties in simulating such events come from the choice of the high-energy model used to describe the first interaction of the cosmic rays, uncertainties in our knowledge and implementation of the ice properties at the depth of the detector, and individual optical module sensitivities. Contributions from uncertainties in the atmospheric conditions and muon cross sections in ice are smaller. The downgoing muon simulation was substantially improved by using the extensive air shower generator CORSIKA to describe the shower development in the atmosphere, and by writing a new software package for the muon propagation (MMC), which reduced computational and algorithmic errors below the level of the uncertainties of the muon cross sections in ice. A method was developed that resulted in a flux measurement of cosmic rays with energies of 1.5-200 TeV per nucleon (95% of primaries causing low-multiplicity events in AMANDA-II have energies in this range), independent of ice model and optical module sensitivities. Predictions of six commonly used high-energy interaction models (QGSJET, VENUS, NEXUS, DPMJET, HDPM, and SIBYLL) are compared to data. The best agreement with direct measurements is achieved with QGSJET, VENUS, and NEXUS. Assuming a power-law energy spectrum φ0,i · E^(−γi) for cosmic-ray components from hydrogen to iron (i = H, ..., Fe) and their mass distribution according to Wiebel-Sooth (Wiebel-Sooth & Biermann, 1999), φ0,i and γi were corrected to achieve the best description of the data. For the hydrogen component, values of φ0,H = 0.106 ± 0.007 m⁻² sr⁻¹ s⁻¹ TeV⁻¹, γH = 2
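
    As a toy illustration of the single-component fit described above, the sketch below recovers φ0 and γ from a synthetic flux assumed to follow φ(E) = φ0 · E^(−γ); the numbers are invented and are not AMANDA data.

      # Fit phi(E) = phi0 * E**(-gamma) by linear least squares in log-log space.
      import numpy as np

      rng = np.random.default_rng(0)
      E = np.array([2., 5., 10., 20., 50., 100.])            # TeV per nucleon
      flux = 0.106 * E**-2.7 * (1 + 0.05 * rng.standard_normal(E.size))

      slope, intercept = np.polyfit(np.log(E), np.log(flux), 1)
      gamma_fit, phi0_fit = -slope, np.exp(intercept)
      print(f"phi0 ~ {phi0_fit:.3f} m^-2 sr^-1 s^-1 TeV^-1, gamma ~ {gamma_fit:.2f}")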

  13. How wild is your model fire? Constraining WRF-Chem wildfire smoke simulations with satellite observations

    NASA Astrophysics Data System (ADS)

    Fischer, E. V.; Ford, B.; Lassman, W.; Pierce, J. R.; Pfister, G.; Volckens, J.; Magzamen, S.; Gan, R.

    2015-12-01

    Exposure to high concentrations of particulate matter (PM) during acute pollution events is associated with adverse health effects. While many anthropogenic pollution sources are regulated in the United States, emissions from wildfires are difficult to characterize and control. With wildfire frequency and intensity in the western U.S. projected to increase, it is important to determine more precisely the effect that wildfire emissions have on human health, and whether improved forecasts of these air pollution events can mitigate the health risks associated with wildfires. One challenge in determining the health risks of wildfire emissions is that the low spatial resolution of the surface monitoring network means that surface measurements may not be representative of a population's exposure, owing to steep concentration gradients. To obtain better estimates of ambient exposure levels for health studies, a chemical transport model (CTM) can be used to simulate the evolution of a wildfire plume as it travels over populated regions downwind. Improving the performance of a CTM would allow the development of a new forecasting framework that could better help decision makers estimate and potentially mitigate future health impacts. We use the Weather Research and Forecasting model with online chemistry (WRF-Chem) to simulate wildfire plume evolution. By varying the model resolution, meteorology reanalysis initial conditions, and biomass burning inventories, we explore the sensitivity of model simulations to these parameters. Satellite observations are used first to evaluate model skill, and then to constrain the model results. These data are then used to estimate population-level exposure, with the aim of better characterizing the effects that wildfire emissions have on human health.

  15. Potential sea-level rise from Antarctic ice-sheet instability constrained by observations.

    PubMed

    Ritz, Catherine; Edwards, Tamsin L; Durand, Gaël; Payne, Antony J; Peyaud, Vincent; Hindmarsh, Richard C A

    2015-12-01

    Large parts of the Antarctic ice sheet lying on bedrock below sea level may be vulnerable to marine-ice-sheet instability (MISI), a self-sustaining retreat of the grounding line triggered by oceanic or atmospheric changes. There is growing evidence that MISI may be underway throughout the Amundsen Sea embayment (ASE), which contains ice equivalent to more than a metre of global sea-level rise. If triggered in other regions, the centennial to millennial contribution could be several metres. Physically plausible projections are challenging: numerical models with sufficient spatial resolution to simulate grounding-line processes have been too computationally expensive to generate large ensembles for uncertainty assessment, and lower-resolution model projections rely on parameterizations that are only loosely constrained by present day changes. Here we project that the Antarctic ice sheet will contribute up to 30 cm sea-level equivalent by 2100 and 72 cm by 2200 (95% quantiles) where the ASE dominates. Our process-based, statistical approach gives skewed and complex probability distributions (single mode, 10 cm, at 2100; two modes, 49 cm and 6 cm, at 2200). The dependence of sliding on basal friction is a key unknown: nonlinear relationships favour higher contributions. Results are conditional on assessments of MISI risk on the basis of projected triggers under the climate scenario A1B (ref. 9), although sensitivity to these is limited by theoretical and topographical constraints on the rate and extent of ice loss. We find that contributions are restricted by a combination of these constraints, calibration with success in simulating observed ASE losses, and low assessed risk in some basins. Our assessment suggests that upper-bound estimates from low-resolution models and physical arguments (up to a metre by 2100 and around one and a half by 2200) are implausible under current understanding of physical mechanisms and potential triggers. PMID:26580020

  16. Constraining the Properties of Dark Matter with Observations of the Cosmic Microwave Background

    NASA Astrophysics Data System (ADS)

    Thomas, Daniel B.; Kopp, Michael; Skordis, Constantinos

    2016-10-01

    We examine how the properties of dark matter, parameterized by an equation-of-state parameter w and two perturbative generalized dark matter (GDM) parameters, c_s^2 (the sound speed) and c_vis^2 (the viscosity), are constrained by existing cosmological data, particularly the Planck 2015 data release. We find that the GDM parameters are consistent with zero, and are strongly constrained, showing no evidence for extending the model of dark matter beyond the cold dark matter (CDM) paradigm. The equation of state of dark matter is constrained to be within −0.000896 < w < 0.00238 at the 99.7% confidence level (CL), which is several times stronger than constraints found previously using WMAP data. The parameters c_s^2 and c_vis^2 are constrained to be less than 3.21 × 10^-6 and 6.06 × 10^-6, respectively, at the 99.7% CL. The inclusion of the GDM parameters does significantly affect the error bars on several ΛCDM parameters, notably the dimensionless dark matter density ω_g and the derived parameters σ_8 and H_0. This can be partially alleviated with the inclusion of data constraining the expansion history of the universe.

  17. Constraining the shape distribution and binary fractions of asteroids observed by NEOWISE

    NASA Astrophysics Data System (ADS)

    Sonnett, Sarah M.; Mainzer, Amy; Grav, Tommy; Masiero, Joseph; Bauer, James; Vernazza, Pierre; Ries, Judit Gyorgyey; Kramer, Emily

    2015-11-01

    Knowing the shape distribution of an asteroid population gives clues to its collisional and dynamical history. Constraining light curve amplitudes (brightness variations) offers a first-order approximation to the shape distribution, provided all asteroids in the distribution were subject to the same observing biases. Asteroids observed by the NEOWISE space mission at roughly the same heliocentric distances have essentially the same observing biases and can therefore be inter-compared. We used the archival NEOWISE photometry of a statistically significant sample of Jovian Trojans, Hildas, and Main belt asteroids to compare the amplitude (and by proxy, shape) distributions of L4 vs. L5 Trojans, Trojans vs. Hildas of the same size range, and several subpopulations of Main belt asteroids. For asteroids with near-fluid rubble pile structures, very large light curve amplitudes can only be explained by close or contact binary systems, offering the potential to catalog and characterize binaries within a population and to glean more information about its dynamical evolution. Because the structure of most asteroids is not known to a high confidence level, objects with very high light curve amplitudes can only be considered candidate binaries. In Sonnett et al. (2015), we identified several binary candidates in the Jovian Trojan and Hilda populations. We have since been conducting a follow-up campaign to obtain densely sampled light curves of the binary candidates to allow detailed shape and binary modeling, helping to identify true binaries. Here, we present preliminary results from the follow-up campaign, including rotation properties. This research was carried out at the Jet Propulsion Laboratory (JPL), California Institute of Technology (CalTech), under a contract with the National Aeronautics and Space Administration (NASA) and was supported by the NASA Postdoctoral Program at JPL. We make use of data products from the Wide-field Infrared Survey Explorer, which is a joint project

  18. Constraining Climate Forcing of Ice Nucleation with SPartICus/MACPEX Observations

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, K.; Wang, M.; Comstock, J. M.; Mitchell, D. L.; Mace, G. G.; Jensen, E. J.

    2012-12-01

    Cirrus clouds composed of ice crystals play an important role in modifying the global radiative balance through scattering shortwave (SW) radiation and absorbing and emitting longwave (LW) terrestrial radiation. Cirrus clouds also modulate water vapor in the upper troposphere and lower stratosphere, which is an important greenhouse gas. Although cirrus clouds are an important player in the global climate system, there are still large uncertainties in the understanding of cirrus cloud properties and processes and their treatment in global climate models, due to the scarcity of cirrus measurements and to instrument artifacts in in situ ice crystal number measurements. The DOE Atmospheric Radiation Measurement (ARM) program's Small Particles in Cirrus (SPartICus) campaign (http://campaign.arm.gov/sparticus/) and NASA's Mid-latitude Airborne Cirrus Properties Experiment (MACPEX, http://www.espo.nasa.gov/macpex/) conducted airborne measurements over central North America, with special emphasis on the vicinity of the DOE ARM Southern Great Plains (SGP) site, to investigate the properties of mid-latitude cirrus clouds, the processes affecting these properties and their impact on radiation. With a new generation of probes designed to minimize artifacts due to ice shattering, SPartICus and MACPEX provide unprecedented datasets characterizing cirrus microphysical properties and dynamics. In this study we use the SPartICus/MACPEX observations to constrain the parameterizations of the formation and growth of ice crystals in the Community Atmosphere Model version 5 (CAM5). This is achieved by comparing modeled ice crystal number concentration, ice water content, updraft velocity and relative humidity in- and outside cirrus, and their covariance with temperature, with the statistics from the SPartICus/MACPEX observations. Model sensitivity tests are performed with different ice nucleation mechanisms (homogeneous versus heterogeneous nucleation) and different vapor deposition coefficients to

  19. Paleoproterozoic Collisional Structures in the Hudson Bay Lithosphere Constrained by Multi-Observable Probabilistic Inversion

    NASA Astrophysics Data System (ADS)

    Darbyshire, F. A.; Afonso, J. C.; Porritt, R. W.

    2015-12-01

    The Paleozoic Hudson Bay intracratonic basin conceals a Paleoproterozoic Himalayan-scale continental collision, the Trans-Hudson Orogen (THO), which marks an important milestone in the assembly of the Canadian Shield. The geometry of the THO is complex due to the double-indentor geometry of the collision between the Archean Superior and Western Churchill cratons. Seismic observations at regional scale show a thick, seismically fast lithospheric keel beneath the entire region; an intriguing feature of recent models is a 'curtain' of slightly lower wavespeeds trending NE-SW beneath the Bay, which may represent the remnants of more juvenile material trapped between the two Archean continental cores. The seismic models alone, however, cannot constrain the nature of this anomaly. We investigate the thermal and compositional structure of the Hudson Bay lithosphere using a multi-observable probabilistic inversion technique. This joint inversion uses Rayleigh wave phase velocity data from teleseismic earthquakes and ambient noise, geoid anomalies, surface elevation and heat flow to construct a pseudo-3D model of the crust and upper mantle. Initially a wide range of possible mantle compositions is permitted, and tests are carried out to ascertain whether the lithosphere is stratified with depth. Across the entire Hudson Bay region, low temperatures and a high degree of chemical depletion characterise the mantle lithosphere. Temperature anomalies within the lithosphere are modest, as may be expected from a tectonically-stable region. The base of the thermal lithosphere lies at depths of >250 km, reaching to ~300 km depth in the centre of the Bay. Lithospheric stratification, with a more-depleted upper layer, is best able to explain the geophysical data sets and surface observables. Some regions, where intermediate-period phase velocities are high, require stronger mid-lithospheric depletion. In addition, a narrow region of less-depleted material extends NE-SW across the Bay

  20. Relative merits of different types of rest-frame optical observations to constrain galaxy physical parameters

    NASA Astrophysics Data System (ADS)

    Pacifici, Camilla; Charlot, Stéphane; Blaizot, Jérémy; Brinchmann, Jarle

    2012-04-01

    We present a new approach to constrain galaxy physical parameters from the combined interpretation of stellar and nebular emission in wide ranges of observations. This approach relies on the Bayesian analysis of any type of galaxy spectral energy distribution using a comprehensive library of synthetic spectra assembled using state-of-the-art models of star formation and chemical enrichment histories, stellar population synthesis, nebular emission and attenuation by dust. We focus on the constraints set by five-band ugriz photometry and low- and medium-resolution spectroscopy at rest wavelengths λ = 3600-7400 Å on a few physical parameters of galaxies: the observer-frame absolute r-band stellar mass-to-light ratio, M*/L_r; the fraction of a current galaxy stellar mass formed during the last 2.5 Gyr, f_SFH; the specific star formation rate, ψ_S; the gas-phase oxygen abundance, 12 + log(O/H); the total effective V-band absorption optical depth of the dust, τ_V; and the fraction of this arising from dust in the ambient interstellar medium, μ. Since these parameters cannot be known a priori for any galaxy sample, we assess the accuracy to which they can be retrieved from observations by simulating 'pseudo-observations' using models with known parameters. Assuming that these models are good approximations of true galaxies, we find that the combined analysis of stellar and nebular emission in low-resolution [50 Å full width at half-maximum (FWHM)] galaxy spectra provides valuable constraints on all physical parameters. The typical uncertainties in high-quality spectra are about 0.13 dex for M*/L_r, 0.23 for f_SFH, 0.24 dex for ψ_S, 0.28 for 12 + log(O/H), 0.64 for τ_V and 0.16 for μ. The uncertainties in 12 + log(O/H) and τ_V tighten by about 20 per cent for galaxies with detectable emission lines and by another 45 per cent when the spectral resolution is increased to 5 Å FWHM. At this spectral resolution, the analysis of the combined stellar and nebular emission in the high
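
    The library-based Bayesian estimation described above can be sketched very compactly: weight every model in a synthetic spectral library by its likelihood given the observed fluxes and summarize each physical parameter from those weights. The sketch below is a generic illustration with invented array shapes and a placeholder parameter, not the paper's model library or parameter set.

      # Library-based Bayesian parameter estimation (generic sketch).
      import numpy as np

      def posterior_estimate(obs_flux, obs_err, lib_flux, lib_param):
          """obs_flux, obs_err: (n_bands,); lib_flux: (n_models, n_bands);
          lib_param: (n_models,) parameter value attached to each model."""
          # Best-fitting amplitude of each model, then chi-square against the data
          scale = (np.sum(lib_flux * obs_flux / obs_err**2, axis=1)
                   / np.sum(lib_flux**2 / obs_err**2, axis=1))
          resid = (obs_flux - scale[:, None] * lib_flux) / obs_err
          chi2 = np.sum(resid**2, axis=1)
          w = np.exp(-0.5 * (chi2 - chi2.min()))        # relative likelihoods
          w /= w.sum()
          mean = np.sum(w * lib_param)
          std = np.sqrt(np.sum(w * (lib_param - mean)**2))
          return mean, std

      rng = np.random.default_rng(1)
      lib_flux = rng.uniform(0.5, 2.0, size=(5000, 5))   # toy 5-band library
      lib_param = rng.normal(0.0, 0.3, size=5000)        # e.g. a log mass-to-light ratio
      obs = 1.3 * lib_flux[42]
      print(posterior_estimate(obs, 0.05 * obs, lib_flux, lib_param))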

  1. Constraining the parameters of the EAP sea ice rheology from satellite observations and discrete element model

    NASA Astrophysics Data System (ADS)

    Tsamados, Michel; Heorton, Harry; Feltham, Daniel; Muir, Alan; Baker, Steven

    2016-04-01

    The new elastic-plastic anisotropic (EAP) rheology that explicitly accounts for the sub-continuum anisotropy of the sea ice cover has been implemented into the latest version of the Los Alamos sea ice model CICE. The EAP rheology is widely used in the climate modeling scientific community (e.g. the CPOM stand-alone model, the RASM high-resolution regional ice-ocean model, the Met Office fully coupled model). Early results from sensitivity studies (Tsamados et al., 2013) have shown the potential for an improved representation of the observed main sea ice characteristics, with a substantial change in the spatial distribution of ice thickness and ice drift relative to model runs with the reference visco-plastic (VP) rheology. The model contains one new prognostic variable, the local structure tensor, which quantifies the degree of anisotropy of the sea ice, and two parameters that set the time scale of the evolution of this tensor. Observations from high-resolution satellite SAR imagery as well as numerical simulation results from a discrete element model (DEM, see Wilchinsky, 2010) have shown that individual floes can organize under external wind and thermal forcing to form an emergent isotropic sea ice state (via thermodynamic healing, thermal cracking) or an anisotropic sea ice state (via Coulombic failure lines due to shear rupture). In this work we use for the first time in the context of sea ice research a mathematical metric, the tensorial Minkowski functionals (Schroeder-Turk, 2010), to measure quantitatively the degree of anisotropy and alignment of the sea ice at different scales. We apply the methodology to the GlobICE Envisat satellite deformation product (www.globice.info), to a prototype modified version of GlobICE applied to Sentinel-1 Synthetic Aperture Radar (SAR) imagery, and to the DEM ice floe aggregates. By comparing these independent measurements of the sea ice anisotropy, as well as its temporal evolution, against the EAP model we are able to constrain the

  2. Photochemical modeling of H2O in Titan's atmosphere constrained by Herschel Observations

    NASA Astrophysics Data System (ADS)

    Lara, L. M.; Lellouch, E.; Moreno, R.; Courtin, R.; Hartogh, P.; Rengel, M.

    2012-04-01

    As a species subject to photolytic, chemical and condensation losses, the H2O present in Titan's stratosphere must be of external origin. The discovery of CO2 by Voyager (Samuelson et al. 1981) pointed to an external supply of oxygen to Titan's atmosphere. Indeed, CO2, which also condenses, was recognized to be formed via CO+OH, where OH was likely produced by H2O photolysis. This view was supported by the ground-based discovery of CO (Lutz et al. 1983) and subsequent measurements confirming an abundance of ~50 ppm. The source of CO itself remained elusive, but inspired by the Cassini/CAPS discovery of an O+ influx rate (Hartle et al. 2006), Hörst et al. (2008) showed that an external source of O or O+ leads to the formation of CO, also pointing to the likely external origin of this compound. The most up-to-date model of Titan's oxygen chemistry by Hörst et al. (2008) adjusted the OH/H2O deposition rate as a function of the eddy diffusion coefficient below 200 km to match the observed CO2 mixing ratio (15 ppb, uniform over 100-200 km), producing an H2O profile that was deemed consistent with the ISO/SWS measurement of the H2O abundance at a nominal altitude of 400 km (Coustenis et al. 1998). Therefore, the Hörst et al. (2008) study provided an apparently self-consistent picture of the origin of oxygen compounds in Titan's atmosphere, with the three main species (CO, CO2 and H2O) being produced from a permanent external supply of oxygen in two distinct forms. However, recent measurements of several H2O lines by the HIFI and PACS instruments (Herschel Space Observatory) have shown that none of the H2O profiles calculated in Hörst et al. (2008) reproduces the observed lines (Moreno et al., this workshop), and neither does the Lara et al. (1996) H2O profile. Here we revisit the Lara et al. (1996) photochemical model by including (i) an updated eddy diffusion coefficient profile, K(z), constrained by the C2H6 vertical distribution, and (ii) an adjustable O+/OH/H2O influx. Our

  3. Constraining sterile neutrino warm dark matter with Chandra observations of the Andromeda galaxy

    SciTech Connect

    Watson, Casey R.; Polley, Nicholas K.; Li, Zhiyuan

    2012-03-01

    We use the Chandra unresolved X-ray emission spectrum from a 12'–28' (2.8–6.4 kpc) annular region of the Andromeda galaxy to constrain the radiative decay of sterile neutrino warm dark matter. By excising the most baryon-dominated, central 2.8 kpc of the galaxy, we reduce the uncertainties in our estimate of the dark matter mass within the field of view and improve the signal-to-noise ratio of prospective sterile neutrino decay signatures relative to hot gas and unresolved stellar emission. Our findings impose the most stringent limit on the sterile neutrino mass to date in the context of the Dodelson-Widrow model, m_s < 2.2 keV (95% C.L.). Our results also constrain alternative sterile neutrino production scenarios at very small active-sterile neutrino mixing angles.

  4. Mercury's thermo-chemical evolution from numerical models constrained by Messenger observations

    NASA Astrophysics Data System (ADS)

    Tosi, N.; Breuer, D.; Plesa, A. C.; Wagner, F.; Laneuville, M.

    2012-04-01

    The Messenger spacecraft, in orbit around Mercury for almost one year, has been delivering a great deal of new information that is dramatically changing our understanding of the solar system's innermost planet. Tracking data from the Radio Science experiment yielded improved estimates of the first coefficients of the gravity field that make it possible to determine the normalized polar moment of inertia of the planet (C/MR^2) and the ratio of the moment of inertia of the mantle to that of the whole planet (Cm/C). These two parameters provide a strong constraint on the internal mass distribution and, in particular, on the core mass fraction. With C/MR^2 = 0.353 and Cm/C = 0.452 [1], interior structure models predict a core radius as large as 2000 km [2], leaving room for a silicate mantle shell with a thickness of only ~400 km, a value significantly smaller than the 600 km usually assumed in parametrized [3] as well as in numerical models of Mercury's mantle dynamics and evolution [4]. Furthermore, the Gamma-Ray Spectrometer measured the surface abundance of radioactive elements, revealing, besides uranium and thorium, the presence of potassium. The latter, being moderately volatile, rules out traditional formation scenarios from highly refractory materials, favoring instead a composition not too dissimilar from a chondritic model. Considering a 400 km thick mantle, we carry out a large series of 2D and 3D numerical simulations of the thermo-chemical evolution of Mercury's mantle. We model in a self-consistent way the formation of crust through partial melting, using Lagrangian tracers to account for the partitioning of radioactive heat sources between mantle and crust and for variations of thermal conductivity. Assuming the relative surface abundance of radiogenic elements observed by Messenger to be representative of the bulk mantle composition, we attempt to constrain the degree to which uranium, thorium and potassium are concentrated in the silicate mantle through a broad

  5. Local propagation speed constrained estimation of the slowness vector from non-planar array observations.

    PubMed

    Nouvellet, Adrien; Roueff, François; Le Pichon, Alexis; Charbit, Maurice; Vergoz, Julien; Kallel, Mohamed; Mejri, Chourouq

    2016-01-01

    The estimation of the slowness vector of infrasound waves propagating across an array is a critical process leading to the determination of parameters of interest such as the direction of arrival. The sensors of an array are often considered to be located in a horizontal plane. However, due to topography, the altitudes of the sensors are not identical and, if neglected, introduce a bias in the estimate. The unbiased 3D estimation procedure, while suppressing this bias, leads to an increase of the variance. Accounting for an a priori constraint on the slowness vector significantly reduces the variance and could therefore improve the performance of the estimation, provided the bias introduced by incorrect a priori information remains negligible. This study focuses on measuring the benefits of this approach with a thorough investigation of the bias and variance of the constrained 3D estimator, which is not available in the existing literature. This contribution provides such computations based on an asymptotic Gaussian approximation. Simulations are carried out to assess the theoretical results both with synthetic and real data. Thus, a constrained 3D estimator is proposed yielding the best bias/variance compromise when good knowledge of the propagation wave speed is available. PMID:26827049
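
    For reference, the unconstrained part of the problem reduces to ordinary least squares on plane-wave arrival times; the sketch below (not the paper's constrained estimator) shows how including sensor altitudes in the regression avoids the planar-array bias, with array coordinates and the wave speed chosen arbitrarily for illustration.

      # Plane-wave model: t_i = t0 + r_i . s, solved by least squares for (t0, s).
      import numpy as np

      def estimate_slowness(coords, arrival_times):
          """coords: (n, 2) or (n, 3) sensor positions in km; times in s."""
          A = np.hstack([np.ones((coords.shape[0], 1)), coords])
          sol, *_ = np.linalg.lstsq(A, arrival_times, rcond=None)
          return sol[0], sol[1:]                          # origin time, slowness (s/km)

      coords = np.array([[0., 0., 0.00], [1., 0., 0.05], [0., 1., 0.02],
                         [-1., 0., 0.08], [0., -1., 0.01]])   # non-planar array
      n = np.array([0.80, 0.55, 0.23]); n /= np.linalg.norm(n)
      s_true = n / 0.34                                   # 0.34 km/s propagation speed
      times = coords @ s_true + 10.0
      t0, s_hat = estimate_slowness(coords, times)
      print(t0, s_hat, 1.0 / np.linalg.norm(s_hat))       # recovers ~0.34 km/s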

  6. An offline constrained data assimilation technique for aerosols: Improving GCM simulations over South Asia using observations from two satellite sensors

    NASA Astrophysics Data System (ADS)

    Baraskar, Ankit; Bhushan, Mani; Venkataraman, Chandra; Cherian, Ribu

    2016-05-01

    Aerosol properties simulated by general circulation models (GCMs) exhibit large uncertainties due to biases in model processes and inaccuracies in aerosol emission inputs. In this work, we propose an offline, constrained-optimization-based procedure to improve these simulations by assimilating them with observational data. The proposed approach explicitly incorporates the non-negativity constraint on the aerosol optical depth (AOD), which is a key metric to quantify aerosol distributions. The resulting optimization problem is quadratic programming in nature and can be easily solved by available optimization routines. The utility of the approach is demonstrated by performing offline assimilation of GCM-simulated aerosol optical properties and radiative forcing over South Asia (40-120 E, 5-40 N) with satellite AOD measurements from two sensors, namely the Moderate Resolution Imaging SpectroRadiometer (MODIS) and the Multi-Angle Imaging SpectroRadiometer (MISR). Uncertainty in the observational data used in the assimilation is computed by developing different error bands around regional AOD observations, based on their quality assurance flags. The assimilation, evaluated on monthly and daily scales, compares well with Aerosol Robotic Network (AERONET) observations as determined by goodness-of-fit statistics. Assimilation increased both model-predicted atmospheric absorption and clear-sky radiative forcing by factors consistent with recent estimates in the literature. Thus, the constrained assimilation algorithm helps in systematically reducing uncertainties in aerosol simulations.
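
    The non-negativity-constrained blending described above can be written as a bounded linear least-squares problem. The sketch below is a minimal illustration on a few grid cells with invented background and observation values and diagonal error weights; the paper's cost function, error model and grid are of course more elaborate.

      # Blend background (model) AOD with satellite AOD subject to AOD >= 0.
      import numpy as np
      from scipy.optimize import lsq_linear

      x_b = np.array([0.35, 0.10, 0.02, 0.60])       # background AOD
      sig_b = np.full(4, 0.15)                       # background error std
      y = np.array([0.50, np.nan, -0.05, 0.45])      # satellite AOD (nan = no retrieval)
      sig_o = np.array([0.05, np.nan, 0.05, 0.05])   # observation error std

      have_obs = ~np.isnan(y)
      # Stack the background and observation terms of the quadratic cost into one
      # weighted least-squares system, then solve with the bound x >= 0.
      A = np.vstack([np.diag(1.0 / sig_b),
                     np.eye(x_b.size)[have_obs] / sig_o[have_obs, None]])
      b = np.concatenate([x_b / sig_b, y[have_obs] / sig_o[have_obs]])
      analysis = lsq_linear(A, b, bounds=(0.0, np.inf)).x
      print(np.round(analysis, 3))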

  7. Bioenergy potential of the United States constrained by satellite observations of existing productivity

    USGS Publications Warehouse

    Smith, W. Kolby; Cleveland, Cory C.; Reed, Sasha C.; Miller, Norman L.; Running, Steven W.

    2012-01-01

    United States (U.S.) energy policy includes an expectation that bioenergy will be a substantial future energy source. In particular, the Energy Independence and Security Act of 2007 (EISA) aims to increase annual U.S. biofuel (secondary bioenergy) production by more than 3-fold, from 40 to 136 billion liters of ethanol, which implies an even larger increase in biomass demand (primary energy), from roughly 2.9 to 7.4 EJ yr⁻¹. However, our understanding of many of the factors used to establish such energy targets is far from complete, introducing significant uncertainty into the feasibility of current estimates of bioenergy potential. Here, we utilized satellite-derived net primary productivity (NPP) data—measured for every 1 km² of the 7.2 million km² of vegetated land in the conterminous U.S.—to estimate primary bioenergy potential (PBP). Our results indicate that the PBP of the conterminous U.S. ranges from roughly 5.9 to 22.2 EJ yr⁻¹, depending on land use. The low end of this range represents the potential when harvesting residues only, while the high end would require an annual biomass harvest over an area more than three times the current U.S. agricultural extent. While EISA energy targets are theoretically achievable, we show that meeting these targets utilizing current technology would require either an 80% displacement of current crop harvest or the conversion of 60% of rangeland productivity. Accordingly, realistically constrained estimates of bioenergy potential are critical for effective incorporation of bioenergy into the national energy portfolio.
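
    As a rough illustration of the scale of such numbers, the sketch below converts a satellite NPP estimate into an energy figure; every factor (mean NPP, harvestable fraction, carbon content, energy density) is an assumption chosen only to show the arithmetic, not a value used in the study.

      # Order-of-magnitude conversion of NPP to primary bioenergy potential.
      vegetated_area_m2 = 7.2e6 * 1e6     # 7.2 million km^2 (from the abstract)
      mean_npp_gC_m2_yr = 500.0           # assumed mean NPP, g C m^-2 yr^-1
      harvest_fraction = 0.10             # assumed harvestable fraction
      carbon_fraction = 0.5               # g C per g dry biomass (typical)
      energy_MJ_per_kg = 18.0             # energy content of dry biomass (typical)

      biomass_kg = (vegetated_area_m2 * mean_npp_gC_m2_yr * harvest_fraction
                    / carbon_fraction / 1000.0)
      energy_EJ = biomass_kg * energy_MJ_per_kg * 1e6 / 1e18   # MJ -> J -> EJ
      print(f"~{energy_EJ:.0f} EJ per year")                   # ~13 EJ, within the quoted range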

  8. Model-data assimilation of multiple phenological observations to constrain and predict leaf area index.

    PubMed

    Viskari, Toni; Hardiman, Brady; Desai, Ankur R; Dietze, Michael C

    2015-03-01

    Our limited ability to accurately simulate leaf phenology is a leading source of uncertainty in models of ecosystem carbon cycling. We evaluate whether continuously updating canopy state variables with observations is beneficial for predicting phenological events. We employed an ensemble adjustment Kalman filter (EAKF) to update predictions of leaf area index (LAI) and leaf extension using tower-based photosynthetically active radiation (PAR) and Moderate Resolution Imaging Spectroradiometer (MODIS) data for 2002-2005 at Willow Creek, Wisconsin, USA, a mature, even-aged, northern hardwood, deciduous forest. The Ecosystem Demography model version 2 (ED2) was used as the prediction model, forced by offline climate data. The EAKF successfully incorporated information from both the observations and the model predictions, weighted by their respective uncertainties. The resulting estimate reproduced the observed leaf phenological cycle in the spring and the fall better than a parametric model prediction. These results indicate that during spring the observations contribute most in determining the correct bud-burst date, after which the model performs well, but accurately modeling fall leaf senescence requires continuous model updating from observations. While the predicted net ecosystem exchange (NEE) of CO2 precedes tower observations and unassimilated model predictions in the spring, overall the prediction follows observed NEE better than the model alone. Our results show that state data assimilation successfully simulates the evolution of plant leaf phenology and improves model predictions of forest NEE.
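
    The weighting logic behind such an ensemble update can be sketched in a few lines. The example below uses the stochastic (perturbed-observation) ensemble Kalman update for a scalar LAI state purely for illustration; the EAKF used in the study is a deterministic variant with different implementation details, and all numbers are invented.

      # Generic stochastic ensemble Kalman update for a scalar state (H = identity).
      import numpy as np

      def enkf_update(ensemble, obs, obs_var, rng):
          prior_var = np.var(ensemble, ddof=1)
          gain = prior_var / (prior_var + obs_var)       # Kalman gain
          perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=ensemble.size)
          return ensemble + gain * (perturbed - ensemble)

      rng = np.random.default_rng(0)
      forecast = rng.normal(3.0, 0.6, size=50)           # model-predicted LAI members
      analysis = enkf_update(forecast, obs=3.8, obs_var=0.2**2, rng=rng)
      print(round(forecast.mean(), 2), round(analysis.mean(), 2), round(analysis.var(ddof=1), 3))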

  9. Constraining Atmospheric Particle Size in Gale Crater Using REMS UV Measurements and Mastcam Observations at 440 and 880 nm

    NASA Astrophysics Data System (ADS)

    Mason, E. L.; Lemmon, M. T.; de la Torre-Juárez, M.; Vicente-Retortillo, A.; Martinez, G.

    2015-12-01

    Optical depth measured in Gale crater has been shown to vary seasonally, and this variation is potentially linked to a change in the dust size visible from the surface. The Mast Camera (Mastcam) on the Mars Science Laboratory (MSL) has performed cross-sky brightness surveys similar to those obtained at the Phoenix Lander site. Since particle size can be constrained by observing airborne dust across multiple wavelengths and angles, surveys at 440 and 880 nm can be used to characterize atmospheric dust within and above the crater. In addition, the Rover Environmental Monitoring Station (REMS) on MSL provides the downward radiation flux from 250 nm (UVD) to 340 nm (UVA), which would further constrain aerosol properties. The dust, which is not spherical and likely contains irregular particles, can be modeled using randomly oriented triaxial ellipsoids with predetermined microphysical optical properties and fit to sky survey observations to retrieve an effective radius. This work discusses the constraints on the particle size distribution from REMS measurements, as well as on particle shape in Gale crater, in comparison with Mastcam observations at the specified wavelengths.

  10. CONSTRAINING THE PLANETARY SYSTEM OF FOMALHAUT USING HIGH-RESOLUTION ALMA OBSERVATIONS

    SciTech Connect

    Boley, A. C.; Payne, M. J.; Ford, E. B.; Shabram, M.; Corder, S.; Dent, W. R. F.

    2012-05-01

    The dynamical evolution of planetary systems leaves observable signatures in debris disks. Optical images trace micron-sized grains, which are strongly affected by stellar radiation and need not coincide with their parent body population. Observations of millimeter-sized grains accurately trace parent bodies, but previous images lack the resolution and sensitivity needed to characterize the ring's morphology. Here we present ALMA 350 GHz observations of the Fomalhaut debris ring. These observations demonstrate that the parent body population is 13-19 AU wide with a sharp inner and outer boundary. We discuss three possible origins for the ring and suggest that debris confined by shepherd planets is the most consistent with the ring's morphology.

  11. Task performance on constrained reconstructions: human observer performance compared with suboptimal Bayesian performance

    NASA Astrophysics Data System (ADS)

    Wagner, Robert F.; Myers, Kyle J.; Hanson, Kenneth M.

    1992-06-01

    We have previously described how imaging systems and image reconstruction algorithms can be evaluated on the basis of how well binary-discrimination tasks can be performed by a machine algorithm that 'views' the reconstructions. Algorithms used in these investigations have been based on approximations to the ideal observer of Bayesian statistical decision theory. The present work examines the performance of an extended family of such algorithmic observers viewing tomographic images reconstructed from a small number of views using the Cambridge Maximum Entropy software, MEMSYS 3. We investigate the effects on the performance of these observers due to varying the parameter α; this parameter controls the stopping point of the iterative reconstruction technique and effectively determines the smoothness of the reconstruction. For the detection task considered here, performance is maximum at the lowest values of α studied; these values are encountered as one moves toward the limit of maximum likelihood estimation while maintaining the positivity constraint intrinsic to entropic priors. A breakdown in the validity of a Gaussian approximation used by one of the machine algorithms (the posterior probability) was observed in this region. Measurements on human observers performing the same task show that they perform comparably to the best machine observers in the region of highest machine scores, i.e., smallest values of α. For increasing values of α, both human and machine observer performance degrade. The falloff in human performance is more rapid than that of the machine observer at the largest values of α (lowest performance) studied. This behavior is common to all such studies of the so-called psychometric function.

  12. Constraining properties of GRB magnetar central engines using the observed plateau luminosity and duration correlation

    NASA Astrophysics Data System (ADS)

    Rowlinson, A.; Gompertz, B. P.; Dainotti, M.; O'Brien, P. T.; Wijers, R. A. M. J.; van der Horst, A. J.

    2014-09-01

    An intrinsic correlation has been identified between the luminosity and duration of plateaus in the X-ray afterglows of gamma-ray bursts (GRBs; Dainotti et al. 2008), suggesting a central engine origin. The magnetar central engine model predicts an observable plateau phase, with plateau durations and luminosities being determined by the magnetic fields and spin periods of the newly formed magnetar. This paper analytically shows that the magnetar central engine model can explain, within the 1σ uncertainties, the correlation between plateau luminosity and duration. The observed scatter in the correlation most likely originates in the spread of initial spin periods of the newly formed magnetar and provides an estimate of the maximum spin period of ~35 ms (assuming a constant mass, efficiency and beaming across the GRB sample). Additionally, by combining the observed data and simulations, we show that the magnetar emission is most likely narrowly beamed and has ≲20 per cent efficiency in the conversion of rotational energy from the magnetar into the observed plateau luminosity. The beaming angles and efficiencies obtained by this method are fully consistent with both predicted and observed values. We find that short GRBs and short GRBs with extended emission lie on the same correlation but are statistically inconsistent with being drawn from the same distribution as long GRBs; this is consistent with their having a wider beaming angle than long GRBs.
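
    The scaling behind this result can be seen from the standard magnetic-dipole spin-down formulae often quoted for millisecond magnetars (e.g. Zhang & Mészáros 2001), where B_{p,15} is the dipole field in units of 10^15 G, P_{0,-3} the initial spin period in milliseconds, I_45 the moment of inertia in 10^45 g cm^2 and R_6 the stellar radius in 10^6 cm; these are textbook relations quoted here only to make the luminosity-duration argument explicit, not a derivation taken from this paper:

      L_0 \simeq 1.0 \times 10^{49}\, B_{p,15}^{2}\, P_{0,-3}^{-4}\, R_6^{6}\ \mathrm{erg\,s^{-1}},
      \qquad
      T_{\mathrm{em}} \simeq 2.05 \times 10^{3}\, I_{45}\, B_{p,15}^{-2}\, P_{0,-3}^{2}\, R_6^{-6}\ \mathrm{s},
      \qquad
      L_0\, T_{\mathrm{em}} \simeq E_{\mathrm{rot}} \approx 2 \times 10^{52}\, I_{45}\, P_{0,-3}^{-2}\ \mathrm{erg}.

    The product of plateau luminosity and duration thus depends only on the initial spin period (for a fixed moment of inertia), so points scatter about an L_0 ∝ T_em^-1 correlation according to the spread in P_0, which is what lets the width of the correlation bound the maximum spin period.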

  13. Constraining the Parameterization of Polar Inertia Gravity Waves in WACCM with Observations

    NASA Astrophysics Data System (ADS)

    Smith, A. K.; Murphy, D. J.; Garcia, R. R.; Kinnison, D. E.

    2014-12-01

    A discrepancy that has been seen in a number of climate models is that simulated temperatures in the Antarctic lower stratosphere during winter and spring are much lower than observed; this is referred to as the "cold pole" problem. Recent simulations with the NCAR Whole Atmosphere Community Climate Model have shown that polar stratospheric temperatures are much improved by including a parameterization of gravity waves, which have inertial periods, longer horizontal wavelengths and shorter vertical wavelengths than the mesoscale gravity waves already parameterized in this and most other middle atmosphere models. Improvements include a more realistic seasonal development of the ozone hole and somewhat better timing for the winter to summer transition in the zonal winds and Brewer-Dobson Circulation. Although the availability and quality of observations of gravity waves in the middle atmosphere has been increasing, there are still not sufficient observations to validate the inertial gravity wave morphology and distribution in the model. Here, we use constraints from new analyses of radiosonde observations to provide guidance for the horizontal and vertical wavelengths of the waves, their seasonal variability, and their potential sources such as fronts or flow imbalance. Tighter observational constraints remove an element of arbitrary "tuning" and tie the model simulations of the middle atmosphere more closely to the simulated climate.

  14. Model-data assimilation of multiple phenological observations to constrain and forecast leaf area

    NASA Astrophysics Data System (ADS)

    Viskari, T.; Dietze, M.; Desai, A. R.

    2013-12-01

    For deciduous trees, the spring leaf-out and the fall senescence are the defining characteristics of the phenological cycle. This annual cycle has a large impact on the ecological carbon cycle, water and heat transfer, as well as the radiation balance. Phenology remains an important source of uncertainty due to large errors in the phenological cycles predicted by the current generation of ecosystem models. Here, state data assimilation is introduced as a method to produce improved phenological predictions. In data assimilation, neither the model nor the data is accepted as truth; instead, the new state is estimated by combining information from both model predictions and observations based on their respective reliabilities. This state estimate is then used as the basis for the next prediction. Thus the state estimate contains information from the previous states, understanding of the dynamical system, and current observations. With data collected from 2002-2005 at the Willow Creek, Wisconsin, AmeriFlux site, we used the Ensemble Kalman Filter (EnKF) to improve predictions of leaf area index (LAI). We used the Ecosystem Demography model version 2 (ED2) as the prediction model, forced by offline climate data. Observations included Leaf Area Index (LAI) measurements from the Moderate Resolution Imaging Spectroradiometer (MODIS) and within-canopy Photosynthetically Active Radiation (PAR) measurements from flux towers belonging to the Chequamegon Ecosystem Atmospheric Study (ChEAS). The EnKF successfully combined information from the observations and model predictions based on their respective reliabilities. The state estimate reproduced the observed leaf phenological cycle in the spring and the fall better than a parametric model prediction fitted to observations. The ensemble spread converges during the summer and winter as the leaves are either fully leafed-out or dropped. The next step will be to estimate and forecast phenological cycles at a regional scale.

  15. Constraining sediment thickness in the San Francisco Bay area using observed resonances and P-to-S conversions

    SciTech Connect

    Hough, S.E.

    1990-08-01

    Between October 20, 1989, and October 27, 1989, aftershocks of the Loma Prieta earthquake were recorded at five locations on a variety of site conditions in the West Oakland, California region. The author shows evidence of P-to-S conversions that have comparable characteristics on both mud and alluvium sites, suggesting that these conversions are generated at the base of the alluvium layer. The inferred transfer function for the sedimentary layer and the travel-time observations for direct and converted phases can be used to constrain the depth of the sedimentary layers. Both the amplitude and many characteristics of the observed transfer functions on mud and alluvium are well predicted with one-dimensional modelling. The results suggest that, in some cases, separate resonance peaks result from multiple sediment layers.
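
    Two simple relations underlie this kind of thickness constraint: the quarter-wavelength resonance of a low-velocity layer, f0 = Vs/(4H), and the extra travel time of a P-to-S conversion from the base of the layer, Δt = H(1/Vs − 1/Vp). The sketch below plugs in assumed velocities and hypothetical observed values purely to show the arithmetic; none of the numbers are from the study.

      # Back-of-the-envelope sediment thickness from a resonance and a Ps-P delay.
      Vs = 0.20          # km/s, assumed shear-wave speed in the sediments
      Vp = 1.50          # km/s, assumed P-wave speed in the sediments
      f0 = 0.50          # Hz, hypothetical fundamental resonance frequency
      dt_ps = 0.43       # s, hypothetical delay of the converted S behind direct P

      H_resonance = Vs / (4.0 * f0)                  # quarter-wavelength relation
      H_conversion = dt_ps / (1.0 / Vs - 1.0 / Vp)   # converted-phase delay relation
      print(f"H from resonance:  {H_resonance:.3f} km")
      print(f"H from conversion: {H_conversion:.3f} km")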

  16. Ability of the current global observing network to constrain N2O sources and sinks

    NASA Astrophysics Data System (ADS)

    Millet, D. B.; Wells, K. C.; Chaliyakunnel, S.; Griffis, T. J.; Henze, D. K.; Bousserez, N.

    2014-12-01

    The global observing network for atmospheric N2O combines flask and in-situ measurements at ground stations with sustained and campaign-based aircraft observations. In this talk we apply a new global model of N2O (based on GEOS-Chem) and its adjoint to assess the strengths and weaknesses of this network for quantifying N2O emissions. We employ an ensemble of pseudo-observation analyses to evaluate the relative constraints provided by ground-based (surface, tall tower) and airborne (HIPPO, CARIBIC) observations, and the extent to which variability (e.g. associated with pulsing or seasonality of emissions) not captured by the a priori inventory can bias the inferred fluxes. We find that the ground-based and HIPPO datasets each provide a stronger constraint on the distribution of global emissions than does the CARIBIC dataset on its own. Given appropriate initial conditions, we find that our inferred surface fluxes are insensitive to model errors in the stratospheric loss rate of N2O over the timescale of our analysis (2 years); however, the same is not necessarily true for model errors in stratosphere-troposphere exchange. Finally, we examine the a posteriori error reduction distribution to identify priority locations for future N2O measurements.

  17. Constraining the Sources and Sinks of Atmospheric Methane Using Stable Isotope Observations and Chemistry Climate Modeling

    NASA Astrophysics Data System (ADS)

    Feinberg, A.; Coulon, A.; Stenke, A.; Peter, T.

    2015-12-01

    Methane acts as both a greenhouse gas and a driver of atmospheric chemistry. There is a lack of consensus on the explanation for the atmospheric methane trend in recent years (1980-2010). High uncertainties are associated with the magnitudes of individual methane source and sink processes. Methane isotopes have the potential to distinguish between the different methane fluxes, as each flux is characterized by an isotopic signature. Methane emissions from each source category are expressed explicitly in the chemistry-climate model SOCOL, including wetlands, rice paddies, biomass burning, industry, etc. The model includes 48 methane tracers based on source type and geographical origin in order to track methane after it has been emitted. SOCOL simulations for the years 1980-2010 are performed in "nudged mode", so that the model dynamics reflect observed meteorology. Available database estimates of the various surface emission fluxes are input into SOCOL. The model's diagnostic methane tracers are compared to methane isotope observations from measurement networks. Inconsistencies between the model results and observations point to deficiencies in the available emission estimates or in the model sink processes. Because of their dependence on the OH sink, deuterated methane observations and methyl chloroform tracers are used to investigate the variability of OH mixing ratios in the model and the real world. The analysis examines the validity of the methane source and sink category estimates over the last 30 years.

  18. Constraining Middle Atmospheric Moisture in GEOS-5 Using EOS-MLS Observations

    NASA Technical Reports Server (NTRS)

    Jin, Jianjun; Pawson, Steven; Wargan, Krzysztof; Livesey, Nathaniel

    2012-01-01

    Middle atmospheric water vapor plays an important role in climate and atmospheric chemistry. In the middle atmosphere, water vapor is, after ozone and carbon dioxide, an important radiatively active gas that impacts climate forcing and the energy balance. It is also the source of the hydroxyl radical (OH), whose abundance affects ozone and other constituents. The abundance of water vapor in the middle atmosphere is determined by upward transport of dehydrated air through the tropical tropopause layer, by the middle atmospheric circulation, by production from the photolysis of methane (CH4), and by other physical and chemical processes in the stratosphere and mesosphere. The Modern-Era Retrospective analysis for Research and Applications (MERRA) reanalysis with GEOS-5 did not assimilate any moisture observations in the middle atmosphere. The plan is to use such observations, available sporadically from research satellites, in future GEOS-5 reanalyses. An overview will be provided of the progress to date with assimilating the EOS-Aura Microwave Limb Sounder (MLS) moisture retrievals, alongside ozone and temperature, into GEOS-5. Initial results demonstrate that the MLS observations can significantly improve the middle atmospheric moisture field in GEOS-5, although this result depends on introducing a physically meaningful representation of background error covariances for middle atmospheric moisture into the system. High-resolution features in the new moisture field, and their relationships with ozone, will be examined in a two-year assimilation experiment with GEOS-5. Discussion will focus on how Aura MLS moisture observations benefit the analyses.

  19. Observationally-constrained estimates of aerosol optical depths (AODs) over East Asia via data assimilation techniques

    NASA Astrophysics Data System (ADS)

    Lee, K.; Lee, S.; Song, C. H.

    2015-12-01

    Aerosols not only affect climate directly by scattering and absorbing incident solar radiation, but also perturb the radiation budget indirectly by influencing the microphysics and dynamics of clouds. Aerosols also have a significant adverse impact on human health. Given the importance of aerosols in climate, considerable research effort has been devoted to quantifying the amount of aerosols in the form of the aerosol optical depth (AOD). AOD is provided by ground-based aerosol networks such as the Aerosol Robotic NETwork (AERONET) and is derived from satellite measurements. However, these observational datasets have limited areal and temporal coverage. To compensate for the data gaps, several studies have provided gap-free AOD fields by assimilating observational data with model outputs. In this study, AODs over East Asia simulated with the Community Multi-scale Air Quality (CMAQ) model and derived from Geostationary Ocean Color Imager (GOCI) observations are interpolated via different data assimilation (DA) techniques, namely Cressman's method, Optimal Interpolation (OI), and Kriging, for the period of the Distributed Regional Aerosol Gridded Observation Networks (DRAGON) campaign (March-May 2012). The interpolated results from the three DA techniques are then validated extensively against AERONET AODs to determine which DA method provides the most reliable AODs over East Asia.
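
    Of the three techniques, Cressman's successive-correction method is the simplest to sketch: grid values are nudged toward nearby observation-minus-background increments with distance-dependent weights. The toy example below uses a one-dimensional transect, a uniform background and an arbitrary influence radius; it only illustrates the weighting, not the configuration used in the study.

      # One Cressman successive-correction pass on a toy AOD transect.
      import numpy as np

      def cressman_pass(grid_xy, background, obs_xy, innovations, radius):
          analysis = background.copy()
          for i, x in enumerate(grid_xy):
              d2 = np.sum((obs_xy - x)**2, axis=1)
              w = np.clip((radius**2 - d2) / (radius**2 + d2), 0.0, None)
              if w.sum() > 0:
                  analysis[i] += np.sum(w * innovations) / w.sum()
          return analysis

      grid_xy = np.array([[x, 0.0] for x in np.arange(0.0, 5.0, 0.5)])  # degrees
      background = np.full(len(grid_xy), 0.3)        # model AOD along the transect
      obs_xy = np.array([[1.2, 0.0], [3.7, 0.0]])    # satellite retrieval locations
      innovations = np.array([0.55, 0.20]) - 0.3     # obs minus (uniform) background
      print(np.round(cressman_pass(grid_xy, background, obs_xy, innovations, radius=1.0), 3))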

  20. Constraining the Assimilation of SWOT Satellite Observations Using Hydraulic Geometry Relationships

    NASA Astrophysics Data System (ADS)

    Andreadis, K.; Mersel, M. K.; Durand, M. T.; Smith, L. C.; Alsdorf, D. E.

    2011-12-01

    The Surface Water and Ocean Topography (SWOT) satellite mission, to be launched in 2019, will offer measurements of the spatial and temporal variability of surface water with unprecedented accuracy. These observations will include surface water elevation, slope, and river channel top width, along with estimates of river discharge globally at a spatial resolution of about 50 m. One potential source of uncertainty for estimating discharge is the inability of SWOT to measure the baseflow depth, i.e., the depth of flow beneath the lowest water surface elevation observed during the mission lifetime. This study evaluates the potential of a data assimilation algorithm to reduce that uncertainty by estimating river channel bathymetry. A synthetic experiment is performed wherein a detailed hydraulic model is used to simulate river discharge and water surface elevations over two study areas: a 172 km reach of the middle Rio Grande River, and a 180 km reach of the Upper Mississippi River. These simulations are designated as "truth" and are then used to generate "virtual" SWOT observations with the correct orbital and error characteristics. Appropriate errors are added primarily to the "true" river channel bathymetry, among other parameters (e.g. bank widths), to emulate data availability and accuracy globally for hydraulic modeling. Two assimilation techniques are evaluated that merge SWOT observations with a simple gradually-varied flow model to correct river bed topography: variational assimilation and a two-stage Ensemble Kalman Filter. Classic at-a-station hydraulic geometry theory, which posits the interrelationship of hydraulic characteristics as power functions of discharge, can be adapted to SWOT observations. Initial work has shown the potential value of these relationships for detecting in-channel versus overbank flow and deducing a relationship between SWOT observables and local discharge. These relationships are used as additional constraints to the assimilation
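
    The at-a-station hydraulic geometry relations referred to here are the classic power laws of Leopold & Maddock (1953); they are quoted below only to make the constraint explicit, with continuity tying the coefficients and exponents together:

      w = a Q^{b}, \qquad d = c Q^{f}, \qquad v = k Q^{m},
      \qquad Q = w\,d\,v \;\Rightarrow\; a\,c\,k = 1,\quad b + f + m = 1.

    Because SWOT observes width and water-surface elevation (and hence changes in depth) directly, fitting such power laws to repeated overpasses offers an extra relationship between the observables and local discharge that the assimilation can exploit.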

  1. Chemical Nature Of Titan’s Organic Aerosols Constrained from Spectroscopic and Mass Spectrometric Observations

    NASA Astrophysics Data System (ADS)

    Imanaka, Hiroshi; Cruikshank, D. P.

    2012-10-01

    The Cassini-Huygens observations greatly extend our knowledge of Titan’s organic aerosols. The Cassini INMS and CAPS observations clearly demonstrate the formation of large organic molecules in the ionosphere [1, 2]. The VIMS and CIRS instruments have revealed spectral features of the haze covering the mid-IR and far-IR wavelengths [3, 4, 5, 6]. This study explores the possible chemical nature of Titan’s aerosols by comparing the currently available observations with our laboratory studies. We have conducted a series of cold plasma experiments to investigate the mass spectrometric and spectroscopic properties of laboratory aerosol analogs [7, 8]. Titan tholins and C2H2 plasma polymer are generated by cold plasma irradiation of N2/CH4 and C2H2, respectively. The laser desorption mass spectrum of the C2H2 plasma polymer shows a reasonable match with the CAPS positive ion mass spectrum. Furthermore, the spectroscopic features of the C2H2 plasma polymer at mid-IR and far-IR wavelengths show a qualitatively reasonable match with the VIMS and CIRS observations. These results suggest that the C2H2 plasma polymer is a good candidate material for Titan’s aerosol particles at the altitudes sampled by the observations. We acknowledge funding support from the NASA Cassini Data Analysis Program, NNX10AF08G, from the NASA Exobiology Program, NNX09AM95G, and from the Cassini Project. [1] Waite et al. (2007) Science 316, 870-875. [2] Crary et al. (2009) Planet. Space Sci. 57, 1847-1856. [3] Bellucci et al. (2009) Icarus 201, 198-216. [4] Anderson and Samuelson (2011) Icarus 212, 762-778. [5] Vinatier et al. (2010) Icarus 210, 852-866. [6] Vinatier et al. (2012) Icarus 219, 5-12. [7] Imanaka et al. (2004) Icarus 168, 344-366. [8] Imanaka et al. (2012) Icarus 218, 247-261.

  2. Constraining Methane Emissions from Natural Gas Production in Northeastern Pennsylvania Using Aircraft Observations and Mesoscale Modeling

    NASA Astrophysics Data System (ADS)

    Barkley, Z.; Davis, K.; Lauvaux, T.; Miles, N.; Richardson, S.; Martins, D. K.; Deng, A.; Cao, Y.; Sweeney, C.; Karion, A.; Smith, M. L.; Kort, E. A.; Schwietzke, S.

    2015-12-01

    Leaks in natural gas infrastructure release methane (CH4), a potent greenhouse gas, into the atmosphere. The estimated fugitive emission rate associated with the production phase varies greatly between studies, hindering our understanding of the energy efficiency of natural gas. This study presents a new application of inverse methodology for estimating regional fugitive emission rates from natural gas production. Methane observations across the Marcellus region in northeastern Pennsylvania were obtained during a three-week flight campaign in May 2015 performed by a team from the National Oceanic and Atmospheric Administration (NOAA) Global Monitoring Division and the University of Michigan. In addition to these data, CH4 observations were obtained from automobile campaigns during various periods from 2013 to 2015. An inventory of CH4 emissions was then created for various sources in Pennsylvania, including coal mines, enteric fermentation, industry, waste management, and unconventional and conventional wells. As a first-guess emission rate for natural gas activity, a leakage rate equal to 2% of natural gas production was emitted at the locations of unconventional wells across PA. These emission rates were coupled to the Weather Research and Forecasting model with its chemistry module (WRF-Chem), and atmospheric CH4 concentration fields at 1 km resolution were generated. Projected atmospheric enhancements from WRF-Chem were compared to observations, and the emission rate from unconventional wells was adjusted to minimize errors between observations and simulation. We show that the modeled CH4 plume structures match observed plumes downwind of unconventional wells, providing confidence in the methodology. In all cases, the fugitive emission rate was found to be lower than our first guess. In this initial emission configuration, each well has been assigned the same fugitive emission rate, which can potentially impair our ability to match the observed spatial variability
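
    A minimal sketch of the adjustment step described above: if the simulated enhancements scale linearly with the assumed leak rate, the single scaling factor that minimizes the model-observation mismatch has a closed-form least-squares solution. All values are illustrative placeholders, not campaign data.

```python
import numpy as np

def optimal_scaling(modeled_enhancement, observed, background):
    """Return the factor s minimizing ||(observed - background) - s * modeled||^2."""
    y = observed - background                     # observed enhancement above background
    x = modeled_enhancement                       # enhancement predicted with the prior leak rate
    return float(np.dot(x, y) / np.dot(x, x))

# hypothetical CH4 values (ppb) at flight observation points
modeled = np.array([45.0, 60.0, 30.0, 80.0, 55.0])       # WRF-Chem with the 2% prior
observed = np.array([1910.0, 1930.0, 1900.0, 1945.0, 1925.0])
background = 1880.0

s = optimal_scaling(modeled, observed, background)
print(f"posterior leak rate ~ {2.0 * s:.2f}% of production")   # < 2% if s < 1
```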

  3. Constraining Methane Flux Estimates Using Atmospheric Observations of Methane and 13C in Methane

    NASA Astrophysics Data System (ADS)

    Mikaloff Fletcher, S. E.; Tans, P. P.; Miller, J. B.; Bruhwiler, L. M.

    2002-12-01

    Understanding the budget of methane is crucial to predicting climate change and managing Earth's carbon reservoirs. Methane is responsible for approximately 15% of the anthropogenic greenhouse forcing and has a large impact on the oxidative capacity of Earth's atmosphere due to its reaction with the hydroxyl radical. At present, many of the sources and sinks of methane are poorly understood due in part to the large spatial and temporal variability of the methane flux. Model simulations of methane mixing ratios using most process-based source estimates typically over-predict the latitudinal gradient of atmospheric methane relative to the observations; however, the specific source processes responsible for this discrepancy have not been identified definitively. The aim of this work is to use the isotopic signatures of the sources to attribute these discrepancies to a source process or group of source processes and create global and regional budget estimates that are in agreement with both the atmospheric observations of methane and 13C in methane. To this end, observations of isotopic ratios of 13C in methane and isotopic signatures of methane source processes are used in conjunction with an inverse model of the methane budget. Inverse modeling is a top-down approach which uses observations of trace gases in the atmosphere, an estimate of the spatial pattern of trace gas fluxes, and a model of atmospheric transport to estimate the sources and sinks. The atmospheric transport was represented by the TM3 three-dimensional transport model. The GLOBALVIEW 2001 methane observations were used along with flask measurements of 13C in methane at six of the CMDL-NOAA stations by INSTAAR. Initial results imply interesting differences from previous methane budget estimates. For example, the 13C isotope observations in methane call for an increase in southern hemisphere sources with a bacterial isotopic signature such as wetlands, rice paddies, termites, and ruminant animals. The
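
    As a cartoon of how the isotopic constraint works, the sketch below solves a two-source mass balance: the total CH4 source and its flux-weighted 13C signature together determine how much of the flux must carry a bacterial (isotopically light) signature. The signatures and totals are rough illustrative values, not the inversion's inputs or results.

```python
import numpy as np

total_flux = 550.0        # Tg CH4/yr, assumed total source
delta_weighted = -53.0    # permil, assumed flux-weighted delta-13C of all sources

delta_bacterial = -60.0   # permil, typical of wetlands/rice/ruminants (assumed)
delta_other = -40.0       # permil, typical of fossil + pyrogenic sources (assumed)

# Solve:  F_b + F_o = total_flux
#         d_b*F_b + d_o*F_o = delta_weighted * total_flux
A = np.array([[1.0, 1.0],
              [delta_bacterial, delta_other]])
b = np.array([total_flux, delta_weighted * total_flux])
f_bacterial, f_other = np.linalg.solve(A, b)
print(f"bacterial sources ~ {f_bacterial:.0f} Tg/yr, other sources ~ {f_other:.0f} Tg/yr")
```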

  4. Comparing Simulations and Observations of Galaxy Evolution: Methods for Constraining the Nature of Stellar Feedback

    NASA Astrophysics Data System (ADS)

    Hummels, Cameron

    Computational hydrodynamical simulations are a very useful tool for understanding how galaxies form and evolve over cosmological timescales not easily revealed through observations. However, they are only useful if they reproduce the sorts of galaxies that we see in the real universe. One of the ways in which simulations of this sort tend to fail is in the prescription of stellar feedback, the process by which nascent stars return material and energy to their immediate environments. Careful treatment of this interaction in subgrid models, so-called because they operate on scales below the resolution of the simulation, is crucial for the development of realistic galaxy models. Equally important is developing effective methods for comparing simulation data against observations to ensure that galaxy models mimic reality and inform us about natural phenomena. This thesis examines the formation and evolution of galaxies and the observable characteristics of the resulting systems. We make extensive use of cosmological hydrodynamical simulations in order to simulate and interpret the evolution of massive spiral galaxies like our own Milky Way. First, we create a method for producing synthetic photometric images of grid-based hydrodynamical models for use in a direct comparison against observations in a variety of filter bands. We apply this method to a simulation of a cluster of galaxies to investigate the nature of the red-sequence/blue-cloud dichotomy in the galaxy color-magnitude diagram. Second, we implement several subgrid models governing the complex behavior of gas and stars on small scales in our galaxy models. Several numerical simulations are conducted with similar initial conditions, where we systematically vary the subgrid models, afterward assessing their efficacy through comparisons of their internal kinematics with observed systems. Third, we generate an additional method to compare observations with simulations, focusing on the tenuous circumgalactic

  5. CONSTRAINING THE ENVIRONMENT OF CH{sup +} FORMATION WITH CH{sup +}{sub 3} OBSERVATIONS

    SciTech Connect

    Indriolo, Nick; McCall, Benjamin J.; Oka, Takeshi; Geballe, T. R.

    2010-03-10

    The formation of CH+ in the interstellar medium (ISM) has long been an outstanding problem in chemical models. In order to probe the physical conditions of the ISM in which CH+ forms, we propose the use of CH3+ observations. The pathway to forming CH3+ begins with CH+, and a steady-state analysis of CH3+ and the reaction intermediary CH2+ results in a relationship between the CH+ and CH3+ abundances. This relationship depends on the molecular hydrogen fraction, f_H2, and gas temperature, T, so observations of CH+ and CH3+ can be used to infer the properties of the gas in which both species reside. We present observations of both molecules along the diffuse cloud sight line toward Cyg OB2 No. 12. Using our computed column densities and upper limits, we put constraints on the f_H2 versus T parameter space in which CH+ and CH3+ form. We find that average, static, diffuse molecular cloud conditions (i.e., f_H2 ≳ 0.2, T ≈ 60 K) are excluded by our analysis. However, current theory suggests that non-equilibrium effects drive the reaction C+ + H2 -> CH+ + H, which is endothermic by 4640 K. If we consider a higher effective temperature due to collisions between neutrals and accelerated ions, the CH3+ partition function predicts that the overall population will be spread out into several excited rotational levels. As a result, observations of more CH3+ transitions with higher signal-to-noise ratios are necessary to place any constraints on models where magnetic acceleration of ions drives the formation of CH+.

  6. Toward observationally constrained high space and time resolution CO2 urban emission inventories

    NASA Astrophysics Data System (ADS)

    Maness, H.; Teige, V. E.; Wooldridge, P. J.; Weichsel, K.; Holstius, D.; Hooker, A.; Fung, I. Y.; Cohen, R. C.

    2013-12-01

    The spatial patterns of greenhouse gas (GHG) emission and sequestration are currently studied primarily by sensor networks and modeling tools that were designed for global and continental scale investigations of sources and sinks. In urban contexts, by design, there has been very limited investment in observing infrastructure, making it difficult to demonstrate that we have an accurate understanding of the mechanism of emissions or the ability to track processes causing changes in those emissions. Over the last few years, our team has built a new high-resolution observing instrument to address urban CO2 emissions, the BErkeley Atmospheric CO2 Observing Network (BEACON). The 20-node network is constructed on a roughly 2 km grid, permitting direct characterization of the internal structure of emissions within the San Francisco East Bay. Here we present a first assessment of BEACON's promise for evaluating the effectiveness of current and upcoming local emissions policy. Within the next several years, a variety of locally important changes are anticipated, including widespread electrification of the motor vehicle fleet and implementation of a new power standard for ships at the port of Oakland. We describe BEACON's expected performance for detecting these changes, based on results from regional forward modeling driven by a suite of projected inventories. We will further describe the network's current change detection capabilities by focusing on known high temporal frequency changes that have already occurred; examples include a week of significant freeway traffic congestion following the temporary shutdown of the local commuter rail (the Bay Area Rapid Transit system).

  7. Constraining the temperature history of the past millennium using early instrumental observations

    NASA Astrophysics Data System (ADS)

    Brohan, P.; Allan, R.; Freeman, E.; Wheeler, D.; Wilkinson, C.; Williamson, F.

    2012-10-01

    The current assessment that twentieth-century global temperature change is unusual in the context of the last thousand years relies on estimates of temperature changes from natural proxies (tree-rings, ice-cores, etc.) and climate model simulations. Confidence in such estimates is limited by difficulties in calibrating the proxies and systematic differences between proxy reconstructions and model simulations. As the difference between the estimates extends into the relatively recent period of the early nineteenth century it is possible to compare them with a reliable instrumental estimate of the temperature change over that period, provided that enough early thermometer observations, covering a wide enough expanse of the world, can be collected. One organisation which systematically made observations and collected the results was the English East India Company (EEIC), and their archives have been preserved in the British Library. Inspection of those archives revealed 900 log-books of EEIC ships containing daily instrumental measurements of temperature and pressure, and subjective estimates of wind speed and direction, from voyages across the Atlantic and Indian Oceans between 1789 and 1834. Those records have been extracted and digitised, providing 273 000 new weather records offering an unprecedentedly detailed view of the weather and climate of the late eighteenth and early nineteenth centuries. The new thermometer observations demonstrate that the large-scale temperature response to the Tambora eruption and the 1809 eruption was modest (perhaps 0.5 °C). This provides an out-of-sample validation for the proxy reconstructions - supporting their use for longer-term climate reconstructions. However, some of the climate model simulations in the CMIP5 ensemble show much larger volcanic effects than this - such simulations are unlikely to be accurate in this respect.

  8. PANCHROMATIC OBSERVATIONS OF THE TEXTBOOK GRB 110205A: CONSTRAINING PHYSICAL MECHANISMS OF PROMPT EMISSION AND AFTERGLOW

    SciTech Connect

    Zheng, W.; Shen, R. F.; Sakamoto, T.; Beardmore, A. P.; De Pasquale, M.; Wu, X. F.; Zhang, B.; Gorosabel, J.; Urata, Y.; Sugita, S.; Pozanenko, A.; Sahu, D. K.; Im, M.; Ukwatta, T. N.; Andreev, M.; Klunko, E. E-mail: rfshen@astro.utoronto.ca; and others

    2012-06-01

    We present a comprehensive analysis of a bright, long-duration (T90 ≈ 257 s) GRB 110205A at redshift z = 2.22. The optical prompt emission was detected by Swift/UVOT, ROTSE-IIIb, and BOOTES telescopes when the gamma-ray burst (GRB) was still radiating in the γ-ray band, with the optical light curve showing correlation with the γ-ray data. Nearly 200 s of observations were obtained simultaneously from optical, X-ray, to γ-ray (1 eV to 5 MeV), which makes it one of the exceptional cases to study the broadband spectral energy distribution during the prompt emission phase. In particular, we clearly identify, for the first time, an interesting two-break energy spectrum, roughly consistent with the standard synchrotron emission model in the fast cooling regime. Shortly after the prompt emission (~1100 s), a bright (R = 14.0) optical emission hump with a very steep rise (α ~ 5.5) was observed, which we interpret as the reverse shock (RS) emission. It is the first time that the rising phase of an RS component has been closely observed. The full optical and X-ray afterglow light curves can be interpreted within the standard reverse shock (RS) + forward shock (FS) model. In general, the high-quality prompt and afterglow data allow us to apply the standard fireball model to extract valuable information, including the radiation mechanism (synchrotron), the radius of the prompt emission (R_GRB ~ 3 × 10^13 cm), the initial Lorentz factor of the outflow (Γ0 ~ 250), the composition of the ejecta (mildly magnetized), the collimation angle, and the total energy budget.

  9. Panchromatic Observations of the Textbook GRB 110205A: Constraining Physical Mechanisms of Prompt Emission and Afterglow

    NASA Technical Reports Server (NTRS)

    Zheng, W.; Shen, R. F.; Sakamoto, T.; Beardmore, A. P.; De Pasquale, M.; Wu, X. F.; Gorosabel, J.; Urata, Y.; Sugita, S.; Zhang, B.; Pozanenko, A.; Nissinen, M.; Sahu, D. K.; Im, M.; Ukwatta, T. N.; Andreev, M.; Klunko, E.; Volnova, A.; Akerlof, C. W.; Anto, P.; Barthelmy, S. D.; Breeveld, A.; Carsenty, U.; Gehrels, N.; Sonbas, E.

    2011-01-01

    We present a comprehensive analysis of a bright, long-duration (T90 approx. 257 s) GRB 110205A at redshift z = 2.22. The optical prompt emission was detected by the Swift/UVOT, ROTSE-IIIb, and BOOTES telescopes when the GRB was still radiating in the gamma-ray band. Thanks to its long duration, nearly 200 s of observations were obtained simultaneously from optical, X-ray, to gamma-ray (1 eV - 5 MeV), which makes it one of the exceptional cases to study the broadband spectral energy distribution across 6 orders of magnitude in energy during the prompt emission phase. In particular, by fitting the time-resolved prompt spectra, we clearly identify, for the first time, an interesting two-break energy spectrum, roughly consistent with the standard GRB synchrotron emission model in the fast cooling regime. Although the prompt optical emission is brighter than the extrapolation of the best-fit X/gamma-ray spectra, it traces the gamma-ray light curve shape, suggesting a relation to the prompt high energy emission. The synchrotron + synchrotron self-Compton (SSC) scenario is disfavored by the data, but models invoking a pair of internal shocks or having two emission regions can interpret the data well. Shortly after the prompt emission (approx. 1100 s), a bright (R = 14.0) optical emission hump with a very steep rise (alpha approx. 5.5) was observed, which we interpret as the emission from the reverse shock. It is the first time that the rising phase of a reverse shock component has been closely observed.

  10. Constraining the temperature history of the past millennium using early instrumental observations

    NASA Astrophysics Data System (ADS)

    Brohan, P.; Allan, R.; Freeman, E.; Wheeler, D.; Wilkinson, C.; Williamson, F.

    2012-05-01

    The current assessment that twentieth-century global temperature change is unusual in the context of the last thousand years relies on estimates of temperature changes from natural proxies (tree-rings, ice-cores etc.) and climate model simulations. Confidence in such estimates is limited by difficulties in calibrating the proxies and systematic differences between proxy reconstructions and model simulations. As the difference between the estimates extends into the relatively recent period of the early nineteenth century it is possible to compare them with a reliable instrumental estimate of the temperature change over that period, provided that enough early thermometer observations, covering a wide enough expanse of the world, can be collected. One organisation which systematically made observations and collected the results was the English East-India Company (EEIC), and their archives have been preserved in the British Library. Inspection of those archives revealed 900 log-books of EEIC ships containing daily instrumental measurements of temperature and pressure, and subjective estimates of wind speed and direction, from voyages across the Atlantic and Indian Oceans between 1789 and 1834. Those records have been extracted and digitised, providing 273 000 new weather records offering an unprecedentedly detailed view of the weather and climate of the late eighteenth and early nineteenth centuries. The new thermometer observations demonstrate that the large-scale temperature response to the Tambora eruption and the 1809 eruption was modest (perhaps 0.5 °C). This provides a powerful out-of-sample validation for the proxy reconstructions - supporting their use for longer-term climate reconstructions. However, some of the climate model simulations in the CMIP5 ensemble show much larger volcanic effects than this - such simulations are unlikely to be accurate in this respect.

  11. Constraining friction laws by experimental observations and numerical simulations of various rupture modes

    NASA Astrophysics Data System (ADS)

    Lu, X.; Lapusta, N.; Rosakis, A. J.

    2006-12-01

    Several different types of friction laws, such as the linear slip-weakening law and variants of rate- and state-dependent friction laws, are widely used in earthquake modeling. It is important to understand how much complexity one needs to include in a friction law to properly capture the dynamics of frictional rupture. Observations suggest that earthquake ruptures propagate as slip pulses (Heaton, 1990). In the absence of local heterogeneities and the bimaterial effect, only one mechanism, namely strong rate-weakening friction, is shown, theoretically and numerically, to be capable of generating pulses on homogeneous interfaces separating two identical materials. We have observed pulses in our recent experiments designed to reproduce such a setting (Rosakis, Lu, Lapusta, AGU, 2006). By exploring the experimental parameter space, we have identified different dynamic rupture modes including pulse-like, crack-like, and mixed modes. This suggests that rate weakening may play an important role in rupture dynamics. The systematic transition between rupture modes in the experiments is consistent with the theoretical and numerical study of Zheng and Rice (1998), who studied the behavior of rate-weakening interfaces. They concluded that whether strong rate weakening results in a pulse-like or crack-like behavior depends on the combination of two parameters: the level of prestress before rupture propagation and the amount of rate weakening on the fault. If we use Dieterich-Ruina rate-and-state friction laws with enhanced rate weakening at high slip rates, as appropriate for flash heating, to describe the frictional properties of Homalite, use reasonable friction parameters motivated by previous studies, and apply the Zheng and Rice analysis, we can qualitatively explain the rupture modes observed in experiments. Our current work is focused on modeling the experimental setup numerically to confirm that one indeed requires rate dependence of friction to reproduce experimental results. This

  12. Observing cirrus halos to constrain in-situ measurements of ice crystal size

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.; Kimball, M. B.; Mace, G. G.; Baumgardner, D. G.

    2007-01-01

    In this study, characteristic optical sizes of ice crystals in synoptic cirrus are determined using airborne measurements of ice crystal size distributions, optical extinction and water content. The measurements are compared with coincident visual observations of ice cloud optical phenomena, in particular the 22° and 46° halos. In general, the scattering profiles derived from the in-situ cloud probe measurements are consistent with the observed halo characteristics. It is argued that this implies that the measured ice crystals were small, probably with characteristic optical radii between 10 and 20 μm. There is a current contention that in-situ measurements of high concentrations of small ice crystals reflect artifacts from the shattering of large ice crystals on instrument inlets. Significant shattering cannot be entirely excluded using this approximate technique, but it is not indicated. On the basis of the in-situ measurements, a parameterization is provided that relates the optical effective radius of ice crystals to the temperature in mid-latitude synoptic cirrus.

  13. A generic method to constrain the dark matter model parameters from Fermi observations of dwarf spheroids

    NASA Astrophysics Data System (ADS)

    Sming Tsai, Yue-Lin; Yuan, Qiang; Huang, Xiaoyuan

    2013-03-01

    Observation of γ-rays from dwarf galaxies is an effective way to search for particle dark matter. Using 4 years of Fermi-LAT observations of a series of Milky Way satellites, we develop a general way to search for signals from dark matter annihilation in such objects. Instead of assuming prior information about the energy spectrum of dark matter annihilation, we bin the Fermi-LAT data into several energy bins and build a likelihood map in the "energy bin - flux" plane. The final likelihood of any spectrum can then be derived by combining the likelihoods of all the energy bins. This gives results consistent with those directly calculated using the Fermi Scientific Tool. The method is very efficient for the study of any specific dark matter model with γ-rays. We use the new likelihood map with the Fermi-LAT 4 year data to fit the parameter space of three representative dark matter models: (i) a toy dark matter model, (ii) effective dark matter operators, and (iii) supersymmetric neutralino dark matter.
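
    The sketch below illustrates the "energy bin - flux" likelihood-map idea: tabulate the per-bin log-likelihood as a function of flux once, then score any candidate annihilation spectrum by interpolating and summing over bins. The tabulated curves here are synthetic Gaussians standing in for the real per-bin profiles, not Fermi-LAT products.

```python
import numpy as np

# hypothetical likelihood map: for each energy bin, logL as a function of flux
flux_grid = np.logspace(-14, -9, 60)                    # ph cm^-2 s^-1 per bin
n_bins = 5
rng = np.random.default_rng(0)
best_flux = rng.uniform(1e-12, 1e-11, n_bins)           # stand-in for the per-bin best fits
logL_map = -0.5 * ((np.log10(flux_grid)[None, :]
                    - np.log10(best_flux)[:, None]) / 0.3) ** 2

def spectrum_logL(predicted_flux_per_bin):
    """Combine per-bin likelihoods for one candidate dark-matter spectrum."""
    total = 0.0
    for i, f in enumerate(predicted_flux_per_bin):
        total += np.interp(np.log10(f), np.log10(flux_grid), logL_map[i])
    return total

candidate = 0.5 * best_flux                              # some model's predicted fluxes
print(f"log-likelihood of candidate spectrum: {spectrum_logL(candidate):.2f}")
```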

  14. A generic method to constrain the dark matter model parameters from Fermi observations of dwarf spheroids

    SciTech Connect

    Tsai, Yue-Lin Sming; Yuan, Qiang; Huang, Xiaoyuan E-mail: yuanq@ihep.ac.cn

    2013-03-01

    Observation of γ-rays from dwarf galaxies is an effective way to search for particle dark matter. Using 4 years of Fermi-LAT observations of a series of Milky Way satellites, we develop a general way to search for signals from dark matter annihilation in such objects. Instead of assuming prior information about the energy spectrum of dark matter annihilation, we bin the Fermi-LAT data into several energy bins and build a likelihood map in the "energy bin - flux" plane. The final likelihood of any spectrum can then be derived by combining the likelihoods of all the energy bins. This gives results consistent with those directly calculated using the Fermi Scientific Tool. The method is very efficient for the study of any specific dark matter model with γ-rays. We use the new likelihood map with the Fermi-LAT 4 year data to fit the parameter space of three representative dark matter models: (i) a toy dark matter model, (ii) effective dark matter operators, and (iii) supersymmetric neutralino dark matter.

  15. Observationally Constrained Metal Signatures of Galaxy Evolution in the Stars and Gas of Cosmological Simulations

    NASA Astrophysics Data System (ADS)

    Corlies, Lauren N.

    The halos of galaxies - consisting of gas, stars, and satellite galaxies - are formed and shaped by the most fundamental processes: hierarchical merging and the flow of gas into and out of galaxies. While these processes are hard to disentangle, metals are tied to the gas that fuels star formation and entrained in the wind that the deaths of these stars generate. As such, they can act as important indicators of the star formation, the chemical enrichment, and the outflow histories of galaxies. Thus, this thesis aims to take advantage of such metal signatures in the stars and gas to place observational constraints on current theories of galaxy evolution as implemented in cosmological simulations. The first two chapters consider the metallicities of stars in the stellar halo of the Milky Way and its surviving satellite dwarf galaxies. Chapter 2 pairs an N-body simulation with a semi-analytic model for supernova-driven winds to examine the early environment of a Milky Way-like galaxy. At z = 10, progenitors of surviving z = 0 satellite galaxies are found to sit preferentially on the outskirts of progenitor halos of the eventual main halo. The consequence of these positions is that main halo progenitors are found to more effectively cross-pollute each other than satellite progenitors. Thus, inhomogeneous cross-pollution as a result of different high-z spatial locations of different progenitors can help to explain observed differences in abundance patterns measured today. Chapter 3 expands this work into the analysis of a cosmological, hydrodynamical simulation of dwarf galaxies in the early universe. We find that simple assumptions for modeling the extent of supernova-driven winds used in Chapter 2 agree well with the simulation whereas the presence of inhomogeneous mixing in the simulation has a large effect on the stellar metallicities. Furthermore, the star-forming halos show both bursty and continuous SFHs, two scenarios proposed by stellar metallicity data

  16. A New Method to Constrain Supernova Fractions Using X-ray Observations of Clusters of Galaxies

    NASA Technical Reports Server (NTRS)

    Bulbul, Esra; Smith, Randall K.; Loewenstein, Michael

    2012-01-01

    Supernova (SN) explosions enrich the intracluster medium (ICM) both by creating and by dispersing metals. We introduce a method to measure the number of SNe and the relative contribution of Type Ia supernovae (SNe Ia) and core-collapse supernovae (SNe cc) by directly fitting X-ray spectral observations. The method has been implemented as an XSPEC model called snapec. snapec utilizes a single-temperature thermal plasma code (apec) to model the spectral emission based on metal abundances calculated using the latest SN yields from SN Ia and SN cc explosion models. This approach provides a self-consistent single set of uncertainties on the total number of SN explosions and the relative fraction of SN types in the ICM over the cluster lifetime by allowing these parameters to be determined directly by the SN yields provided by simulations. We apply our approach to XMM-Newton European Photon Imaging Camera (EPIC), Reflection Grating Spectrometer (RGS), and 200 ks simulated Astro-H observations of a cooling flow cluster, A3112. We find that various sets of SN yields present in the literature produce an acceptable fit to the EPIC and RGS spectra of A3112. We infer that 30.3% ± 5.4% to 37.1% ± 7.1% of the total SN explosions are SNe Ia, and the total number of SN explosions required to create the observed metals is in the range (1.06 ± 0.34) × 10^9 to (1.28 ± 0.43) × 10^9, from snapec fits to the RGS spectra. These values may be compared to the enrichment expected based on well-established, empirically measured SN rates per star formed. The proportions of SNe Ia and SNe cc inferred to have enriched the ICM in the inner 52 kiloparsecs of A3112 are consistent with these specific rates, if one applies a correction for the metals locked up in stars. At the same time, the inferred level of SN enrichment corresponds to a star-to-gas mass ratio that is several times greater than the 10% estimated globally for clusters in the A3112 mass range.
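
    The sketch below illustrates the core idea behind a snapec-style fit in its simplest linear form: model the observed ICM metal masses as a combination of per-event SN Ia and core-collapse yields and solve for the two event numbers. The yield vectors and metal masses are hypothetical placeholders, not the yield tables or spectra used in the paper.

```python
import numpy as np

elements = ["Si", "S", "Fe", "Ni"]
yield_Ia = np.array([0.15, 0.08, 0.74, 0.06])      # Msun per SN Ia (illustrative)
yield_cc = np.array([0.10, 0.04, 0.09, 0.007])     # Msun per SN cc, IMF-averaged (illustrative)
observed_mass = np.array([1.2e8, 5.2e7, 2.9e8, 2.3e7])   # Msun of each metal in the ICM (toy)

# observed_mass ~ N_Ia * yield_Ia + N_cc * yield_cc, solved by linear least squares
A = np.column_stack([yield_Ia, yield_cc])
(n_Ia, n_cc), *_ = np.linalg.lstsq(A, observed_mass, rcond=None)
f_Ia = n_Ia / (n_Ia + n_cc)
print(f"N_Ia ~ {n_Ia:.2e}, N_cc ~ {n_cc:.2e}, SN Ia fraction ~ {100 * f_Ia:.0f}%")
```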

  17. Observation-constrained modeling of the ionospheric impact of negative sprites

    NASA Astrophysics Data System (ADS)

    Liu, Ningyu; Boggs, Levi D.; Cummer, Steven A.

    2016-03-01

    This paper reports observation and modeling of five negative sprites occurring above two Florida thunderstorms. The sprites were triggered by unusual types of negative cloud-to-ground (CG) lightning discharges with impulse charge moment change ranging from 600 to 1300 C km and charge transfer characterized by a timescale of 0.1-0.2 ms. The negative sprite typically consists of a few generally vertical elements that each contain a bright core and dimmer streamers extending from the core in both downward and upward directions. Modeling results using the measured charge moment change waveforms indicate that the lower ionosphere was significantly modified by the CGs and the lower ionospheric density might have been increased by nearly 4 orders of magnitude due to the most intense CG. Finally, streamer modeling results show that the ionospheric inhomogeneities produced by atmospheric gravity waves can initiate negative sprite streamers, assuming that they can modulate the ionization coefficient.

  18. Iapetus' near surface thermal emission modeled and constrained using Cassini RADAR Radiometer microwave observations

    NASA Astrophysics Data System (ADS)

    Le Gall, A.; Leyrat, C.; Janssen, M. A.; Keihm, S.; Wye, L. C.; West, R.; Lorenz, R. D.; Tosi, F.

    2014-10-01

    Since its arrival at Saturn, the Cassini spacecraft has had only a few opportunities to observe Iapetus, Saturn's most distant regular satellite. These observations were all made from long ranges (>100,000 km) except on September 10, 2007, during Cassini orbit 49, when the spacecraft encountered the two-toned moon during its closest flyby so far. In this pass it collected spatially resolved data on the object's leading side, mainly over the equatorial dark terrains of Cassini Regio (CR). In this paper, we examine the radiometry data acquired by the Cassini RADAR during both this close-targeted flyby (referred to as IA49-3) and the distant Iapetus observations. In the RADAR's passive mode, the receiver functions as a radiometer to record the thermal emission from planetary surfaces at a wavelength of 2.2 cm. On the cold icy surfaces of Saturn's moons, the measured brightness temperatures depend both on the microwave emissivity and the physical temperature profile below the surface down to a depth that is likely to be tens of centimeters or even a few meters. Combined with the concurrent active data, passive measurements can shed light on the composition, structure and thermal properties of planetary regoliths and thus on the processes from which they have formed and evolved. The model we propose for Iapetus' microwave thermal emission is fitted to the IA49-3 observations and reveals that the thermal inertias sensed by the Cassini Radiometer over both CR and the bright mid-to-high latitude terrains, namely Ronceveaux Terra (RT) in the North and Saragossa Terra (ST) in the South, significantly exceed those measured by Cassini's CIRS (Composite Infrared Spectrometer), which is sensitive to much smaller depths, generally the first few millimeters of the surface. This implies that the subsurface of Iapetus sensed at 2.2-cm wavelength is more consolidated than the uppermost layers of the surface. In the case of CR, a thermal inertia of at least 50 J m^-2 K^-1 s^-1/2, and

  19. Asteroid Properties from Photometric Observations: Constraining Non-Gravitational Processes in Asteroids

    NASA Astrophysics Data System (ADS)

    Pravec, P.

    2013-05-01

    Since October 2012 we have been running our NEOSource project on the Danish 1.54-m telescope at La Silla. The primary aim of the project is to study non-gravitational processes in asteroids near the Earth and in their source regions in the main asteroid belt. In my talk, I will give a brief overview of our current knowledge of asteroidal non-gravitational processes and how we study them with photometric observations. I will talk especially about binary and paired asteroids that appear to have formed by rotational fission, about detecting the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) and BYORP (binary YORP) effects of anisotropic thermal emission from asteroids, which change their spins and satellite orbits, and about non-principal-axis rotators (the so-called "tumblers") among the smallest, super-critically rotating asteroids with sizes < 100 meters.

  20. Constraining hot plasma in a non-flaring solar active region with FOXSI hard X-ray observations

    NASA Astrophysics Data System (ADS)

    Ishikawa, Shin-nosuke; Glesener, Lindsay; Christe, Steven; Ishibashi, Kazunori; Brooks, David H.; Williams, David R.; Shimojo, Masumi; Sako, Nobuharu; Krucker, Säm

    2014-12-01

    We present new constraints on the high-temperature emission measure of a non-flaring solar active region using observations from the recently flown Focusing Optics X-ray Solar Imager (FOXSI) sounding rocket payload. FOXSI has performed the first focused hard X-ray (HXR) observation of the Sun in its first successful flight on 2012 November 2. Focusing optics, combined with small strip detectors, enable high-sensitivity observations with respect to previous indirect imagers. This capability, along with the sensitivity of the HXR regime to high-temperature emission, offers the potential to better characterize high-temperature plasma in the corona as predicted by nanoflare heating models. We present a joint analysis of the differential emission measure (DEM) of active region 11602 using coordinated observations by FOXSI, Hinode/XRT, and Hinode/EIS. The Hinode-derived DEM predicts significant emission measure between 1 MK and 3 MK, with a peak in the DEM predicted at 2.0-2.5 MK. The combined XRT and EIS DEM also shows emission from a smaller population of plasma above 8 MK. This is contradicted by FOXSI observations that significantly constrain emission above 8 MK. This suggests that the Hinode DEM analysis has larger uncertainties at higher temperatures and that > 8 MK plasma above an emission measure of 3 × 10^44 cm^-3 is excluded in this active region.
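
    The kind of consistency check described above can be cartooned by integrating a differential emission measure over temperature and comparing the emission measure above 8 MK with an upper limit. The DEM shape and emitting area below are illustrative assumptions; only the 3 × 10^44 cm^-3 threshold is taken from the abstract.

```python
import numpy as np

logT = np.linspace(5.8, 7.3, 200)                       # log10 temperature [K]
T = 10**logT
# toy column DEM [cm^-5 K^-1]: main peak near 2 MK plus a weak hot shoulder near 10 MK
dem = 3e21 * np.exp(-0.5 * ((logT - 6.35) / 0.12) ** 2) \
    + 1e19 * np.exp(-0.5 * ((logT - 7.00) / 0.15) ** 2)

hot = T > 8e6
em_hot_column = np.trapz(dem[hot], T[hot])              # column EM above 8 MK [cm^-5]
area_cm2 = (3e9) ** 2                                   # assumed ~30,000 km x 30,000 km region
em_hot_volume = em_hot_column * area_cm2                # volume EM [cm^-3]

limit = 3e44                                            # cm^-3, the constraint quoted above
verdict = "consistent with" if em_hot_volume < limit else "excluded by"
print(f"EM(>8 MK) ~ {em_hot_volume:.2e} cm^-3, {verdict} the hard X-ray limit")
```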

  1. Using Two-Ribbon Flare Observations and MHD Simulations to Constrain Flare Properties

    NASA Astrophysics Data System (ADS)

    Kazachenko, Maria D.; Lynch, Benjamin J.; Welsch, Brian

    2016-05-01

    Flare ribbons are emission structures that are frequently observed during flares in transition-region and chromospheric radiation. These typically straddle a polarity inversion line (PIL) of the radial magnetic field at the photosphere and move apart as the flare progresses. The ribbon flux - the amount of unsigned photospheric magnetic flux swept out by flare ribbons - is thought to be related to the amount of coronal magnetic reconnection, and hence provides a key diagnostic for understanding the physical processes at work in flares and CMEs. Previous measurements of the magnetic flux swept out by flare ribbons required time-consuming co-alignment between magnetograph and intensity data from different instruments, explaining why those studies analyzed, at most, a few events. The launch of the Helioseismic and Magnetic Imager (HMI) and the Atmospheric Imaging Assembly (AIA), both aboard the Solar Dynamics Observatory (SDO), presented a rare opportunity to compile a much larger sample of flare-ribbon events than could readily be assembled before. We created a dataset of 363 events, containing both flare ribbon positions and fluxes as a function of time, for all C9.-class and greater flares within 45 degrees of disk center observed by SDO from June 2010 until April 2015. For this purpose, we used vector magnetograms (2D magnetic field maps) from HMI and UV images from AIA. A critical problem with using unprocessed AIA data is the existence of spurious intensities associated with strong flare emission, most notably "blooming" (spurious smearing of saturated signal into neighboring pixels, often in streaks). To overcome this difficulty, we have developed an algorithmic procedure that effectively excludes artifacts like blooming. We present our database and compare statistical properties of flare ribbons, e.g., the evolution of ribbon reconnection fluxes, reconnection flux rates, and vertical currents, with the properties from MHD simulations.
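
    A minimal sketch of the ribbon-flux measurement itself: sum the unsigned radial magnetic flux over the photospheric pixels swept by the ribbons. The pixel scale, field map, and ribbon mask below are toy placeholders rather than HMI/AIA data products.

```python
import numpy as np

def ribbon_flux(b_radial_gauss, swept_mask, pixel_area_cm2):
    """Unsigned magnetic flux [Mx] under ribbon pixels (swept_mask is boolean)."""
    return np.sum(np.abs(b_radial_gauss[swept_mask])) * pixel_area_cm2

# hypothetical 0.5-arcsec pixels (1 arcsec ~ 725 km ~ 7.25e7 cm at disk center)
pixel_area = (0.5 * 7.25e7) ** 2                   # cm^2 per pixel
rng = np.random.default_rng(1)
bz = rng.normal(0.0, 300.0, size=(200, 200))       # G, toy radial field map
mask = np.zeros_like(bz, dtype=bool)
mask[90:110, 40:160] = True                        # toy cumulative ribbon footprint

phi = ribbon_flux(bz, mask, pixel_area)
print(f"ribbon reconnection flux ~ {phi:.2e} Mx")
```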

  2. Constraining GRB as Source for UHE Cosmic Rays through Neutrino Observations

    NASA Astrophysics Data System (ADS)

    Chen, P.

    2013-07-01

    The origin of ultra-high energy cosmic rays (UHECR) has been widely regarded as one of the major questions at the frontier of particle astrophysics. Gamma-ray bursts (GRBs), the most violent explosions in the universe second only to the Big Bang, have been a popular candidate site for UHECR production. The recent IceCube report on the non-observation of GRB-induced neutrinos has therefore attracted wide attention. This dilemma requires a resolution: either the assumption that GRBs act as UHECR accelerators must be abandoned, or the expected GRB-induced neutrino yield was wrong. It has been pointed out that IceCube overestimated the neutrino flux at the GRB site by a factor of ~5. In this paper we point out that, in addition to the issue of neutrino production at the source, neutrino oscillation and possible neutrino decay during the flight from the GRB to Earth should further reduce the detectability by IceCube, which is most sensitive to the muon-neutrino flavor as far as point-source identification is concerned. Specifically, neutrino oscillation will reduce the muon-neutrino flavor ratio from 2/3 per neutrino at the GRB source to 1/3 on Earth, while neutrino decay, if it exists and under the assumption of a normal hierarchy of mass eigenstates, would result in a further reduction of the muon-neutrino ratio to 1/8. With these in mind, we note that there have been efforts in recent years to pursue other types of neutrino telescopes based on the Askaryan effect, which can in principle observe and distinguish all three flavors with comparable sensitivities. Such a new approach may therefore be complementary to IceCube in shedding more light on this cosmic accelerator question.
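
    The flavor-ratio bookkeeping quoted above can be reproduced in a few lines. Assuming pion-decay production (nu_e : nu_mu : nu_tau = 1 : 2 : 0) and, for illustration, tri-bimaximal mixing for the averaged oscillation probabilities, the muon-neutrino fraction drops from 2/3 at the source to 1/3 at Earth; the further reduction to ~1/8 in the decay scenario is taken from the abstract rather than computed here.

```python
import numpy as np

# |U_alpha,i|^2 under tri-bimaximal mixing (an assumed, illustrative mixing matrix)
U2 = np.array([[2/3, 1/3, 0.0],
               [1/6, 1/3, 1/2],
               [1/6, 1/3, 1/2]])
# averaged oscillation probabilities P(alpha -> beta) = sum_i |U_alpha,i|^2 |U_beta,i|^2
P = U2 @ U2.T

source = np.array([1.0, 2.0, 0.0]) / 3.0     # (nu_e, nu_mu, nu_tau) fractions from pion decay
at_earth = source @ P

print(f"muon-neutrino fraction at source: {source[1]:.3f}")   # 2/3
print(f"muon-neutrino fraction at Earth : {at_earth[1]:.3f}")  # 1/3 after averaged oscillations
```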

  3. EVLA OBSERVATIONS CONSTRAIN THE ENVIRONMENT AND PROGENITOR SYSTEM OF Type Ia SUPERNOVA 2011fe

    SciTech Connect

    Chomiuk, Laura; Soderberg, Alicia M.; Moe, Maxwell; Margutti, Raffaella; Fong, Wen-fai; Dittmann, Jason A.; Chevalier, Roger A.; Rupen, Michael P.; Badenes, Carles; Fransson, Claes

    2012-05-10

    We report unique Expanded Very Large Array observations of SN 2011fe representing the most sensitive radio study of a Type Ia supernova to date. Our data place direct constraints on the density of the surrounding medium at radii ~10^15-10^16 cm, implying an upper limit on the mass loss rate from the progenitor system of Mdot ≲ 6 × 10^-10 M_sun yr^-1 (assuming a wind speed of 100 km s^-1) or expansion into a uniform medium with density n_CSM ≲ 6 cm^-3. Drawing from the observed properties of non-conservative mass transfer among accreting white dwarfs, we use these limits on the density of the immediate environs to exclude a phase space of possible progenitor systems for SN 2011fe. We rule out a symbiotic progenitor system and also a system characterized by a high accretion rate onto the white dwarf that is expected to give rise to optically thick accretion winds. Assuming that a small fraction, 1%, of the mass accreted is lost from the progenitor system, we also eliminate much of the potential progenitor parameter space for white dwarfs hosting recurrent novae or undergoing stable nuclear burning. Therefore, we rule out much of the parameter space associated with popular single degenerate progenitor models for SN 2011fe, leaving a limited phase space largely inhabited by some double degenerate systems, as well as exotic single degenerates with a sufficient time delay between mass accretion and SN explosion.
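
    The wind-density constraint quoted above follows from the steady-wind relation rho(r) = Mdot / (4 pi r^2 v_wind). The sketch below evaluates the corresponding number density at 10^16 cm for the abstract's upper limit on Mdot; the mean molecular weight and the evaluation radius are assumptions.

```python
import numpy as np

MSUN_G = 1.989e33       # g per solar mass
YR_S = 3.156e7          # seconds per year
MH_G = 1.67e-24         # hydrogen mass in g

def wind_number_density(mdot_msun_yr, v_wind_kms, r_cm, mu=1.4):
    """Number density [cm^-3] of a steady wind rho = Mdot / (4 pi r^2 v)."""
    mdot = mdot_msun_yr * MSUN_G / YR_S                    # g/s
    rho = mdot / (4.0 * np.pi * r_cm**2 * (v_wind_kms * 1e5))
    return rho / (mu * MH_G)

# the abstract's upper limit (6e-10 Msun/yr at 100 km/s), evaluated at r = 1e16 cm
n = wind_number_density(6e-10, 100.0, 1e16)
print(f"n(1e16 cm) ~ {n:.1f} cm^-3")
```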

  4. Comparing inversion techniques for constraining CO2 fluxes in the Brazilian Amazon Basin with aircraft observations

    NASA Astrophysics Data System (ADS)

    Chow, V. Y.; Gerbig, C.; Longo, M.; Koch, F.; Nehrkorn, T.; Eluszkiewicz, J.; Ceballos, J. C.; Longo, K.; Wofsy, S. C.

    2012-12-01

    aircraft mixing ratios are applied as a top-down constraint in Maximum Likelihood Estimation (MLE) and Bayesian inversion frameworks that solve for parameters controlling the flux. Posterior parameter estimates are used to estimate the carbon budget of the BAB. Preliminary results show that the STILT-VPRM model simulates the net emission of CO2 during both transition periods reasonably well. There is significant enhancement from biomass burning during the November 2008 profiles and some from fossil fuel combustion during the May 2009 flights. ΔCO/ΔCO2 emission ratios are used in combination with continuous observations of CO to remove the CO2 contributions from biomass burning and fossil fuel combustion from the observed CO2 measurements, resulting in better agreement between observed and modeled aircraft data. Comparing column calculations for each of the vertical profiles shows that our model represents the variability in the diurnal cycle. The high-altitude CO2 values from above 3500 m are similar to the lateral boundary conditions from CarbonTracker 2010 and GEOS-Chem, indicating little influence from surface fluxes at these levels. The MLE inversion provides scaling factors for GEE and R for each of the 8 vegetation types, and a Bayesian inversion is being conducted. Our initial inversion results suggest the BAB represents a small net source of CO2 during both of the BARCA intensives.
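
    A minimal sketch of the CO-based correction mentioned above: the CO2 enhancement attributable to biomass burning is estimated from the observed CO enhancement divided by an assumed fire ΔCO/ΔCO2 emission ratio, and subtracted before comparison with the biospheric model. All numbers are illustrative placeholders, not BARCA data.

```python
# hypothetical values at one aircraft profile level
co2_obs = 392.4      # ppm, observed CO2
co2_bg = 388.0       # ppm, lateral boundary (background) CO2
co_obs = 165.0       # ppb, observed CO
co_bg = 95.0         # ppb, background CO
emission_ratio = 60.0   # assumed fire emission ratio: ppb CO per ppm CO2

co2_fire = (co_obs - co_bg) / emission_ratio            # ppm of CO2 attributed to burning
co2_residual = (co2_obs - co2_bg) - co2_fire            # enhancement left for biosphere/fossil terms
print(f"fire CO2 ~ {co2_fire:.2f} ppm, residual enhancement ~ {co2_residual:.2f} ppm")
```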

  5. Observationally Constrained Estimates of Nitryl Chloride Production on Regional and Global Scales

    NASA Astrophysics Data System (ADS)

    Riedel, T.; Thornton, J. A.; Kercher, J. P.; Wagner, N.; Dubé, W. P.; Cozic, J.; Holloway, J.; Wolfe, G. M.; Quinn, P.; Middlebrook, A. M.; Brown, S. S.

    2009-12-01

    Nitryl chloride (ClNO2) is a Cl atom source produced during the night by heterogeneous reactions of dinitrogen pentoxide (N2O5) on chloride-containing aerosol particles. We observed ClNO2 in Boulder, CO, about 1400 km from any source of sea salt, on nearly every night when we intercepted the urban NOx-rich plume. We use these observations as motivation to investigate the potential importance of ClNO2 formation over the contiguous United States and extend the relationships to the global scale. For this purpose, we use available ambient aerosol composition data from the IMPROVE network and from large-scale field intensives, precipitation composition from the NADP, nitrogen oxide emission databases, and global model outputs of the fraction of nitrogen oxide radicals that react on aerosol particles via N2O5. Uncertainty concerning the reactivity of N2O5 and the distribution of chloride mass across the particle size distribution ultimately limits the accuracy of these extrapolations, further supporting the need for better constraints on N2O5 reactivity and for single particle composition measurements. Nonetheless, a number of different approaches suggest that between 1 and 5 Tg Cl of ClNO2 is produced each year across the U.S. alone. This source is similar to that estimated for global coastal and marine boundary layer regions. Based on the results for the United States, we extrapolate to the global scale under the assumption that NOx and particulate chloride sources are distributed similarly to those in the U.S. We predict that continental scale ClNO2 production leads to a global Cl atom source of 8-22 Tg Cl per year, which is of the same order as that inferred from methane isotopes over the remote MBL. These estimates indicate that ClNO2 plays a substantial role in the global chlorine atom budget, much larger than was previously thought, with implications for the role of human activities on the tropospheric halogen budget.

  6. Transient Earth system responses to cumulative carbon dioxide emissions: linearities, uncertainties, and probabilities in an observation-constrained model ensemble

    NASA Astrophysics Data System (ADS)

    Steinacher, M.; Joos, F.

    2016-02-01

    Information on the relationship between cumulative fossil CO2 emissions and multiple climate targets is essential to design emission mitigation and climate adaptation strategies. In this study, the transient response of a climate or environmental variable per trillion tonnes of CO2 emissions, termed TRE, is quantified for a set of impact-relevant climate variables and from a large set of multi-forcing scenarios extended to year 2300 towards stabilization. An ~1000-member ensemble of the Bern3D-LPJ carbon-climate model is applied and model outcomes are constrained by 26 physical and biogeochemical observational data sets in a Bayesian, Monte Carlo-type framework. Uncertainties in TRE estimates include both scenario uncertainty and model response uncertainty. Cumulative fossil emissions of 1000 Gt C result in a global mean surface air temperature change of 1.9 °C (68 % confidence interval (c.i.): 1.3 to 2.7 °C), a decrease in surface ocean pH of 0.19 (0.18 to 0.22), and a steric sea level rise of 20 cm (13 to 27 cm until 2300). Linearity between cumulative emissions and transient response is high for pH and reasonably high for surface air and sea surface temperatures, but less pronounced for changes in Atlantic meridional overturning, Southern Ocean and tropical surface water saturation with respect to biogenic structures of calcium carbonate, and carbon stocks in soils. The constrained model ensemble is also applied to determine the response to a pulse-like emission and in idealized CO2-only simulations. The transient climate response is constrained, primarily by long-term ocean heat observations, to 1.7 °C (68 % c.i.: 1.3 to 2.2 °C) and the equilibrium climate sensitivity to 2.9 °C (2.0 to 4.2 °C). This is consistent with results by CMIP5 models but inconsistent with recent studies that relied on short-term air temperature data affected by natural climate variability.

  7. Interannual and Seasonal Variability of Biomass Burning Emissions Constrained by Satellite Observations

    NASA Technical Reports Server (NTRS)

    Duncan, Bryan N.; Martin, Randall V.; Staudt, Amanda C.; Yevich, Rosemarie; Logan, Jennifer A.

    2003-01-01

    We present a methodology for estimating the seasonal and interannual variation of biomass burning designed for use in global chemical transport models. The average seasonal variation is estimated from 4 years of fire-count data from the Along Track Scanning Radiometer (ATSR) and 1-2 years of similar data from the Advanced Very High Resolution Radiometer (AVHRR) World Fire Atlases. We use the Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI) data product as a surrogate to estimate interannual variability in biomass burning for six regions: Southeast Asia, Indonesia and Malaysia, Brazil, Central America and Mexico, Canada and Alaska, and Asiatic Russia. The AI data set is available from 1979 to the present with an interruption in satellite observations from mid-1993 to mid-1996; this data gap is filled where possible with estimates of area burned from the literature for different regions. Between August 1996 and July 2000, the ATSR fire-counts are used to provide specific locations of emissions and a record of interannual variability throughout the world. We use our methodology to estimate mean seasonal and interannual variations for emissions of carbon monoxide from biomass burning, and we find that no trend is apparent in these emissions over the last two decades, but that there is significant interannual variability.

  8. A model of earthquake triggering probabilities and application to dynamic deformations constrained by ground motion observations

    USGS Publications Warehouse

    Gomberg, J.; Felzer, K.

    2008-01-01

    We have used observations from Felzer and Brodsky (2006) of the variation of linear aftershock densities (i.e., aftershocks per unit length) with the magnitude of and distance from the main shock fault to derive constraints on how the probability of a main shock triggering a single aftershock at a point, P(r, D), varies as a function of distance, r, and main shock rupture dimension, D. We find that P(r, D) becomes independent of D as the triggering fault is approached. When r >> D, P(r, D) scales as D^m where m ≈ 2 and decays with distance approximately as r^-n with n = 2, with a possible change to r^-(n-1) at r > h, where h is the closest distance between the fault and the boundaries of the seismogenic zone. These constraints may be used to test hypotheses about the types of deformations and mechanisms that trigger aftershocks. We illustrate this using dynamic deformations (i.e., radiated seismic waves) and a posited proportionality with P(r, D). Deformation characteristics examined include peak displacements, peak accelerations and velocities (proportional to strain rates and strains, respectively), and two measures that account for cumulative deformations. Our model indicates that either peak strains alone or strain rates averaged over the duration of rupture may be responsible for aftershock triggering.
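
    The scaling summarized above can be written as a schematic piecewise function: P is independent of D close to the fault, falls off roughly as (D/r)^n out to the seismogenic-zone distance h, and decays more slowly beyond it. The amplitude, transition radii, and parameter values in the sketch are illustrative choices consistent with the abstract's m ≈ 2, n = 2, not the authors' calibrated model.

```python
import numpy as np

def trigger_probability(r_km, D_km, h_km=15.0, n=2.0, p0=1e-3):
    """Schematic P(r, D): constant for r <= D, ~ (D/r)^n out to h, ~ r^-(n-1) beyond h."""
    r = np.atleast_1d(r_km).astype(float)
    p = np.where(r <= D_km, p0, p0 * (D_km / r) ** n)
    beyond = r > h_km
    # switch to the shallower r^-(n-1) decay past h, matched continuously at r = h
    p[beyond] = p0 * (D_km / h_km) ** n * (h_km / r[beyond]) ** (n - 1.0)
    return p

# probability (arbitrary amplitude p0) versus distance for a 5-km rupture
print(trigger_probability([1.0, 5.0, 20.0, 60.0], D_km=5.0))
```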

  9. Leveraging atmospheric CO2 observations to constrain the climate sensitivity of terrestrial ecosystems

    NASA Astrophysics Data System (ADS)

    Kaiser, C.; Richter, A.; Franklin, O.; Evans, S. E.; Dieckmann, U.

    2014-12-01

    A significant challenge in understanding, and therefore modeling, the response of terrestrial carbon cycling to climate and environmental drivers is that vegetation varies on spatial scales of order a few kilometers whereas Earth system models (ESMs) are run with characteristic length scales of order 100 km. Atmospheric CO2 provides a constraint on carbon fluxes at spatial scales compatible with the resolution of ESMs due to the fact that atmospheric mixing renders a single site representative of fluxes within a large spatial footprint. The variations in atmospheric CO2 at both seasonal and interannual timescales largely reflect terrestrial influence. I discuss the use of atmospheric CO2 observations to benchmark model carbon fluxes over a range of spatial scales. I also discuss how simple models can be used to test functional relationships between the CO2 growth rate and climate variations. In particular, I show how atmospheric CO2 provides constraints on ecosystem sensitivity to climate drivers in the tropics, where tropical forests and semi-arid ecosystems are thought to account for much of the variability in the contemporary carbon sink.

  10. Deep source model for Nevado del Ruiz Volcano, Colombia, constrained by interferometric synthetic aperture radar observations

    NASA Astrophysics Data System (ADS)

    Lundgren, P.; Samsonov, S. V.; López, C. M.; Ordoñez, M.

    2015-12-01

    Nevado del Ruiz (NRV) is part of a large, glacier-clad volcanic complex in the northern Andes of Colombia; its 1985 eruption generated a lahar that killed over 23,000 people in the city of Armero and 2,000 people in the town of Chinchina. NRV is the most active volcano in Colombia and since 2012 has generated small eruptions, with no casualties, and constant gas and ash emissions. Interferometric synthetic aperture radar (InSAR) observations from ascending- and descending-track RADARSAT-2 data show a large (>20 km wide) inflation pattern apparently starting in late 2011 to early 2012 and continuing to the time of this study in early 2015 at a line-of-sight (LOS) rate of over 3-4 cm/yr (Fig. 1). Volcano pressure-volume models for both a point source (Mogi) and a spheroidal (Yang) source find solutions over 14 km beneath the surface, or 10 km below sea level, centered 10 km to the SW of Nevado del Ruiz volcano. The spheroidal source has a roughly horizontal long axis oriented parallel to the Santa Isabel - Nevado del Ruiz volcanic line and perpendicular to the ambient compressive stress direction. Its solution provides a statistically significant improvement in fit compared to the point source, though consideration of spatially correlated noise sources may diminish this significance. Stress change computations do not favor one model over the other but show that propagating dikes would become trapped in sills, leading to a more complex pathway to the surface and possibly explaining the significant lateral distance between the modeled sources and Nevado del Ruiz volcano.
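
    For the point-source case, the surface displacement can be sketched with the standard Mogi formulas, u_z = (1 - nu) dV d / (pi R^3) and u_r = (1 - nu) dV r / (pi R^3), projected onto the radar line of sight. The volume-change rate, source position, and look vector below are illustrative assumptions, not the paper's inversion results.

```python
import numpy as np

def mogi_displacement(x_km, y_km, depth_km, dV_km3, nu=0.25):
    """Surface displacements (ux, uy, uz) in meters for a Mogi source at the origin."""
    x, y, d = x_km * 1e3, y_km * 1e3, depth_km * 1e3      # to meters
    dV = dV_km3 * 1e9                                      # to cubic meters
    R3 = (x**2 + y**2 + d**2) ** 1.5
    c = (1.0 - nu) / np.pi * dV
    return c * x / R3, c * y / R3, c * d / R3

# LOS-projected uplift rate ~10 km SW of an assumed 0.06 km^3/yr source at 14 km depth
ux, uy, uz = mogi_displacement(-7.07, -7.07, 14.0, 0.06)
look = np.array([0.38, -0.09, 0.92])                       # assumed unit look vector (E, N, Up)
los = ux * look[0] + uy * look[1] + uz * look[2]
print(f"LOS rate ~ {100 * los:.1f} cm/yr")
```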

  11. Constraining the variation of the fine-structure constant with observations of narrow quasar absorption lines

    SciTech Connect

    Songaila, A.; Cowie, L. L.

    2014-10-01

    The unequivocal demonstration of temporal or spatial variability in a fundamental constant of nature would be of enormous significance. Recent attempts to measure the variability of the fine-structure constant α over cosmological time, using high-resolution spectra of high-redshift quasars observed with 10 m class telescopes, have produced conflicting results. We use the many multiplet (MM) method with Mg II and Fe II lines on very high signal-to-noise, high-resolution (R = 72,000) Keck HIRES spectra of eight narrow quasar absorption systems. We consider both systematic uncertainties in spectrograph wavelength calibration and also velocity offsets introduced by complex velocity structure in even apparently simple and weak narrow lines and analyze their effect on claimed variations in α. We find no significant change in α, Δα/α = (0.43 ± 0.34) × 10^-5, in the redshift range z = 0.7-1.5, where this includes both statistical and systematic errors. We also show that the scatter in measurements of Δα/α arising from absorption line structure can be considerably larger than assigned statistical errors even for apparently simple and narrow absorption systems. We find a null result of Δα/α = (-0.59 ± 0.55) × 10^-5 in a system at z = 1.7382 using lines of Cr II, Zn II, and Mn II, whereas using Cr II and Zn II lines in a system at z = 1.6614 we find a systematic velocity trend that, if interpreted as a shift in α, would correspond to Δα/α = (1.88 ± 0.47) × 10^-5, where both results include both statistical and systematic errors. This latter result is almost certainly caused by varying ionic abundances in subcomponents of the line: using Mn II, Ni II, and Cr II in the analysis changes the result to Δα/α = (-0.47 ± 0.53) × 10^-5. Combining the Mg II and Fe II results with estimates based on Mn II, Ni II, and Cr II gives Δα/α = (-0.01 ± 0.26) × 10^-5. We conclude that spectroscopic measurements of

  12. Modeling of the Inner Coma of Comet 67P/Churyumov-Gerasimenko Constrained by VIRTIS and ROSINA Observations

    NASA Astrophysics Data System (ADS)

    Fougere, N.; Combi, M. R.; Tenishev, V.; Bieler, A. M.; Migliorini, A.; Bockelée-Morvan, D.; Toth, G.; Huang, Z.; Gombosi, T. I.; Hansen, K. C.; Capaccioni, F.; Filacchione, G.; Piccioni, G.; Debout, V.; Erard, S.; Leyrat, C.; Fink, U.; Rubin, M.; Altwegg, K.; Tzou, C. Y.; Le Roy, L.; Calmonte, U.; Berthelier, J. J.; Rème, H.; Hässig, M.; Fuselier, S. A.; Fiethe, B.; De Keyser, J.

    2015-12-01

    As it orbits around comet 67P/Churyumov-Gerasimenko (CG), the Rosetta spacecraft acquires more information about its main target. The numerous observations made at various geometries and at different times provide good spatial and temporal coverage of the evolution of CG's cometary coma. However, the question regarding the link between the coma measurements and the nucleus activity remains relatively open, notably due to gas expansion and strong kinetic effects in the comet's rarefied atmosphere. In this work, we use coma observations made by the ROSINA-DFMS instrument to constrain the activity at the surface of the nucleus. The distribution of the H2O and CO2 outgassing is described with the use of spherical harmonics. The coordinates in the orthogonal system represented by the spherical harmonics are computed using a least-squares method, minimizing the sum of the squared residuals between an analytical coma model and the DFMS data. Then, the previously deduced activity distributions are used in a Direct Simulation Monte Carlo (DSMC) model to compute a full description of the H2O and CO2 coma of comet CG from the nucleus' surface up to several hundreds of kilometers. The DSMC outputs are used to create synthetic images, which can be directly compared with VIRTIS measurements. The good agreement between the VIRTIS observations and the DSMC model, itself constrained with ROSINA data, provides a compelling juxtaposition of the measurements from these two instruments. Acknowledgements: Work at UofM was supported by contracts JPL#1266313, JPL#1266314 and NASA grant NNX09AB59G. Work at UoB was funded by the State of Bern, the Swiss National Science Foundation and by the ESA PRODEX Program. Work at Southwest Research Institute was supported by subcontract #1496541 from the JPL. Work at BIRA-IASB was supported by the Belgian Science Policy Office via PRODEX/ROSINA PEA 90020. The authors would like to thank ASI, CNES, DLR, NASA for supporting this research. VIRTIS was built
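
    A minimal sketch of the least-squares step described above, expanding a surface activity map in real spherical harmonics and solving for the coefficients, is given below. It assumes, purely for illustration, that each measurement is proportional to the local outgassing flux at the sub-spacecraft point; this stands in for the analytical coma model actually used with the DFMS data.

```python
import numpy as np
from scipy.special import sph_harm

def real_sph_harm(m, l, lon, colat):
    """Real-valued spherical harmonic of degree l and order m at (lon, colat) in radians."""
    Y = sph_harm(abs(m), l, lon, colat)
    if m < 0:
        return np.sqrt(2.0) * Y.imag
    if m > 0:
        return np.sqrt(2.0) * Y.real
    return Y.real

def fit_activity(lon, colat, measured, lmax):
    """Least-squares spherical-harmonic coefficients of a surface activity map.

    lon, colat : sub-spacecraft longitude/colatitude of each measurement (rad)
    measured   : quantity assumed proportional to the local outgassing flux
    lmax       : maximum spherical-harmonic degree
    """
    terms = [(l, m) for l in range(lmax + 1) for m in range(-l, l + 1)]
    A = np.column_stack([real_sph_harm(m, l, lon, colat) for l, m in terms])
    coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return dict(zip(terms, coeffs))

# synthetic check: recover a dipole-like activity pattern from noisy samples
rng = np.random.default_rng(0)
lon = rng.uniform(0.0, 2.0 * np.pi, 500)
colat = np.arccos(rng.uniform(-1.0, 1.0, 500))
truth = 1.0 + 0.6 * np.cos(colat)                 # stronger activity toward the north pole
coeffs = fit_activity(lon, colat, truth + 0.05 * rng.normal(size=500), lmax=3)
print(coeffs[(0, 0)], coeffs[(1, 0)])
```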

  13. CONSTRAINING A MODEL OF TURBULENT CORONAL HEATING FOR AU MICROSCOPII WITH X-RAY, RADIO, AND MILLIMETER OBSERVATIONS

    SciTech Connect

    Cranmer, Steven R.; Wilner, David J.; MacGregor, Meredith A.

    2013-08-01

    Many low-mass pre-main-sequence stars exhibit strong magnetic activity and coronal X-ray emission. Even after the primordial accretion disk has been cleared out, the star's high-energy radiation continues to affect the formation and evolution of dust, planetesimals, and large planets. Young stars with debris disks are thus ideal environments for studying the earliest stages of non-accretion-driven coronae. In this paper we simulate the corona of AU Mic, a nearby active M dwarf with an edge-on debris disk. We apply a self-consistent model of coronal loop heating that was derived from numerical simulations of solar field-line tangling and magnetohydrodynamic turbulence. We also synthesize the modeled star's X-ray luminosity and thermal radio/millimeter continuum emission. A realistic set of parameter choices for AU Mic produces simulated observations that agree with all existing measurements and upper limits. This coronal model thus represents an alternative explanation for a recently discovered ALMA central emission peak that was suggested to be the result of an inner 'asteroid belt' within 3 AU of the star. However, it is also possible that the central 1.3 mm peak is caused by a combination of active coronal emission and a bright inner source of dusty debris. Additional observations of this source's spatial extent and spectral energy distribution at millimeter and radio wavelengths will better constrain the relative contributions of the proposed mechanisms.

  14. The numerical simulation on the seismogenic mechanism of the Lushan Ms 7.0 earthquake constrained by deformation observation

    NASA Astrophysics Data System (ADS)

    zhu, aiyu; zhang, dongning

    2016-04-01

    We established a two-dimensional finite element model of the Lushan earthquake and its adjacent region. The model is based on natural-source seismic imaging, magnetotelluric sounding, artificial seismic sounding profiles, precise aftershock locations, focal rupture inversion, geological surveys, GPS observations and the tectonic stress field. Using deformation observations of the Lushan Ms 7.0 earthquake of April 20, 2013 as constraints, we explore possible factors that control the earthquake preparation and the rupture character of the Lushan earthquake, such as the eastward extrusion of the Qinghai-Tibet Plateau, the character of the regional topography, the low-velocity zone, the detachment surface and tectonic faults. The numerical results show that the movement rate of material in the eastern part of the Qinghai-Tibet Plateau increased after the Wenchuan earthquake, which is the main dynamic factor causing or accelerating the Lushan earthquake; that the existence of a low-velocity zone and a detachment surface in the upper-middle crust of the Longmen mountain fault zone is an important condition controlling the location of the Lushan epicenter; and that the other factors control the tectonic activity of the Longmen mountain fault zone on long time scales. This paper also gives the simulated coseismic displacement caused by the complex "y"-type rupture, which further supports the inference that the Lushan mainshock rupture surface has a "y"-type geometry.

  15. Constraining Volcano Source Rheology and Mechanisms: 3D Full Wavefield Simulations and Very High Resolution Observations From Mt Etna.

    NASA Astrophysics Data System (ADS)

    Bean, C. J.; O'Brien, G.; de Barros, L.; Murphy, S.; Lokmer, I.; Saccorotti, G.; Patane, D.; Metaxian, J.

    2009-05-01

    Recent field observations and laboratory experiments have demonstrated a broad range of deformation mechanisms in volcanic rocks, and a juxtaposition of brittle and ductile deformation in both space and time. On the other hand, seismological observations of transient deformation at volcanoes yield an equally wide variety of signal types including Volcano Tectonic (VT), Long Period (LP), Very Long Period (VLP) and tremor. A clear goal is to find robust connections between these independent sets of observations, linking detailed field studies, well-controlled laboratory experiments and volcano seismology. In volcano seismology, VT events are usually interpreted as the brittle response of the edifice to stressing, whereas LP and VLP events are thought to result from fluid-filled conduit dynamics. However, strong wave propagation path effects and a large number of possible source mechanisms make it difficult to find a quantitative interpretation of mechanism/rheology. Numerical simulations have a key role to play in making the connection between well-controlled laboratory experiments and the field. Furthermore, many of the features seen in real volcano seismograms can be reproduced in 3D full wavefield simulations of both wet (coupled multiphase fluids and solids) and dry (rupture propagation) models. Even in simulated data, the underlying rheology/source mechanisms are difficult to determine from an inversion of the synthetic seismograms, especially for sparse data with poor velocity control. With this in mind, a detailed field experiment was undertaken on Mt Etna in June 2008, comprising 30+ stations in the summit area. Aided by simulated data in realistic velocity models, this has given us an unprecedented picture of shallow LP activity on Etna. These high-resolution observations will be compared with recent results from laboratory experiments and with numerical simulations in an effort to better constrain the rheology/mechanism of the sources.
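
    As a toy counterpart to the full-wavefield simulations discussed above, the sketch below implements a minimal 2-D acoustic finite-difference scheme with a slow near-surface layer. The velocity model, grid and source parameters are arbitrary illustrative choices (with periodic rather than absorbing boundaries), not the Etna models used in the study.

```python
import numpy as np

def ricker(t, f0):
    """Ricker wavelet with peak frequency f0 (Hz), delayed by 1/f0."""
    a = (np.pi * f0 * (t - 1.0 / f0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def acoustic_2d(vel, dx, dt, nt, src, receivers, f0=2.0):
    """Minimal 2nd-order 2-D acoustic finite-difference forward model.

    vel : (nz, nx) velocity model (m/s); dx: grid spacing (m)
    dt  : time step (s), must satisfy the CFL condition dt < dx / (v_max * sqrt(2))
    src : (iz, ix) source indices; receivers: list of (iz, ix) receiver indices
    Note: np.roll gives periodic (wrap-around) boundaries, acceptable only for a short toy run.
    """
    p_prev = np.zeros_like(vel)
    p = np.zeros_like(vel)
    c2 = (vel * dt / dx) ** 2
    seis = np.zeros((nt, len(receivers)))
    for it in range(nt):
        lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p)
        p_next = 2.0 * p - p_prev + c2 * lap
        p_next[src] += ricker(it * dt, f0) * dt ** 2      # inject the source wavelet
        p_prev, p = p, p_next
        seis[it] = [p[r] for r in receivers]
    return seis

# toy "edifice": a slow near-surface layer over a faster half-space
vel = np.full((200, 300), 3000.0)
vel[:40, :] = 1800.0
dx = 25.0
dt = 0.4 * dx / (vel.max() * np.sqrt(2.0))                # conservative CFL choice
seis = acoustic_2d(vel, dx, dt, nt=1500, src=(60, 150),
                   receivers=[(2, 80), (2, 150), (2, 220)])
print(seis.shape, float(np.abs(seis).max()))
```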

  16. An Experimental Path to Constraining the Origins of the Jupiter Trojans Using Observations, Theoretical Predictions, and Laboratory Simulants

    NASA Astrophysics Data System (ADS)

    Blacksberg, Jordana; Eiler, John; Brown, Mike; Ehlmann, Bethany; Hand, Kevin; Hodyss, Robert; Mahjoub, Ahmed; Poston, Michael; Liu, Yang; Choukroun, Mathieu; Carey, Elizabeth; Wong, Ian

    2014-11-01

    Hypotheses based on recent dynamical models (e.g. the Nice Model) shape our current understanding of solar system evolution, suggesting radical rearrangement in the first hundreds of millions of years of its history, changing the orbital distances of Jupiter, Saturn, and a large number of small bodies. The goal of this work is to build a methodology to concretely tie individual solar system bodies to dynamical models using observables, providing evidence for their origins and evolutionary pathways. Ultimately, one could imagine identifying a set of chemical or mineralogical signatures that could quantitatively and predictably measure the radial distance at which icy and rocky bodies first accreted. The target of the work presented here is the Jupiter Trojan asteroids, predicted by the Nice Model to have initially formed in the Kuiper belt and later been scattered inward to co-orbit with Jupiter. Here we present our strategy which is fourfold: (1) Generate predictions about the mineralogical, chemical, and isotopic compositions of materials accreted in the early solar system as a function of distance from the Sun. (2) Use temperature and irradiation to simulate evolutionary processing of ices and silicates, and measure the alteration in spectral properties from the UV to mid-IR. (3) Characterize simulants to search for potential fingerprints of origin and processing pathways, and (4) Use telescopic observations to increase our knowledge of the Trojan asteroids, collecting data on populations and using spectroscopy to constrain their compositions. In addition to the overall strategy, we will present preliminary results on compositional modeling, observations, and the synthesis, processing, and characterization of laboratory simulants including ices and silicates. This work has been supported by the Keck Institute for Space Studies (KISS). The research described here was carried out at the Jet Propulsion Laboratory, Caltech, under a contract with the National

  17. Constraining Annual Water Balance Estimates with Basin-Scale Observations from the Airborne Snow Observatory during the Current Californian Drought

    NASA Astrophysics Data System (ADS)

    Bormann, K.; Painter, T. H.; Marks, D. G.; Hedrick, A. R.; Deems, J. S.; Patterson, V.; McGurk, B. J.

    2015-12-01

    One of the great unknowns in mountain hydrology is how much water is stored within a seasonal snowpack at the basin scale. Quantifying mountain water resources is critical for assisting with water resource management, but has proven elusive due to high spatial and temporal variability of mountain snow cover, complex terrain, accessibility constraints and limited in-situ networks. The Airborne Snow Observatory (ASO, aso.jpl.nasa.gov) uses coupled airborne LiDAR and spectrometer instruments for high-resolution snow depth retrievals, which are used to derive unprecedented basin-wide estimates of snow water mass (snow water equivalent, SWE). ASO has been operational over key basins in the Sierra Nevada Mountains in California since 2013. Each operational year has been very dry, with precipitation in 2013 at 75% of average, 2014 at 50% of average, and 2015 the lowest snow year on record for the region. With vastly improved estimates of the snowpack water content from ASO, we can now for the first time conduct observation-based mass balance accounting of surface water in snow-dominated basins, and reconcile these estimates with observed reservoir inflows. In this study we use ASO SWE data to constrain mass balance accounting of basin annual water storages to quantify the water contained within the snowpack above the Hetch Hetchy water supply reservoir (Tuolumne River basin, California). The analysis compares and contrasts annual snow water volumes from observed reservoir inflows, snow water volume estimates from ASO, a physically based model that simulates the snowpack from meteorological inputs and a semi-distributed hydrological model. The study provides invaluable insight into the overall volume of water contained within a seasonal snowpack during a severe drought and how these quantities are simulated in our modelling systems. We envisage that this research will be of great interest to snowpack modellers, hydrologists, dam operators and water managers worldwide.
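
    A stripped-down version of the mass-balance accounting described above can be written as a single bookkeeping step: snowpack water volume plus rain, minus evapotranspiration and storage change, compared against observed reservoir inflow. The numbers in the example below are placeholders for illustration, not actual Tuolumne/Hetch Hetchy values.

```python
def basin_water_balance(swe_volume, rainfall, evapotranspiration,
                        storage_change, observed_inflow):
    """Toy annual water-balance accounting for a snow-dominated basin.

    All terms are volumes in the same units (e.g. million cubic metres). The
    residual against observed reservoir inflow indicates unmeasured terms or
    model/measurement error.
    """
    expected_runoff = swe_volume + rainfall - evapotranspiration - storage_change
    residual = observed_inflow - expected_runoff
    return expected_runoff, residual

# illustrative numbers only
expected, residual = basin_water_balance(swe_volume=600.0, rainfall=150.0,
                                         evapotranspiration=250.0,
                                         storage_change=40.0,
                                         observed_inflow=480.0)
print(f"expected runoff: {expected:.0f}, residual: {residual:+.0f}")
```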

  18. Analysis and implications of the miscarriages of justice of Amanda Knox and Raffaele Sollecito.

    PubMed

    Gill, Peter

    2016-07-01

    The case of the 'murder of Meredith Kercher' has been the subject of intense media scrutiny since 2007 when the offence was committed. Three individuals were arrested and accused of the crime. Amanda Knox and Raffaele Sollecito were exonerated in March 2015. Another defendant, Rudy Guede, remains convicted as the sole perpetrator. He was implicated by multiple DNA profiles recovered from the murder room and the bathroom. However, the evidence against Guede contrasted strongly with the limited evidence against two co-defendants, Amanda Knox and Raffaele Sollecito. There were no DNA profiles pertaining to Amanda Knox in the murder room itself. She was separately implicated by a knife recovered at a location remote from the crime scene (discovered in a cutlery drawer at Sollecito's apartment), along with DNA profiles in a bathroom that she had shared with the victim. Upon analysis, a low-level trace of DNA attributed to the murder victim was found on the blade of the knife, along with DNA profiles attributed to Amanda Knox on the handle. However, there was no evidence of blood on the knife blade itself. A separate key piece of evidence was a DNA profile attributed to Raffaele Sollecito recovered from a forcibly removed bra-clasp found in the murder room. There followed an extraordinary series of trials and retrials in which the pair were convicted, exonerated, re-convicted and, in March 2015, finally exonerated (no further appeal is possible). Since Knox and Sollecito have been found innocent, it is opportune to carry out an extensive review of the case to discover the errors that led to conviction so that similar mistakes do not occur in the future. It is accepted that the DNA profiles attributed to them were transferred by methods unrelated to the crime event itself. There is a wealth of material available from the judgements and other reports which can be analysed in order to show the errors of thinking. The final judgement of the case - the Marasca-Bruno motivation

  19. Combined assimilation of IASI and MLS observations to constrain tropospheric and stratospheric ozone in a global chemical transport model

    NASA Astrophysics Data System (ADS)

    Emili, E.; Barret, B.; Massart, S.; Le Flochmoen, E.; Piacentini, A.; El Amraoui, L.; Pannekoucke, O.; Cariolle, D.

    2013-08-01

    Accurate and temporally resolved fields of free-troposphere ozone are of major importance to quantify the intercontinental transport of pollution and the ozone radiative forcing. In this study we examine the impact of assimilating ozone observations from the Microwave Limb Sounder (MLS) and the Infrared Atmospheric Sounding Interferometer (IASI) in a global chemical transport model (MOdèle de Chimie Atmosphérique à Grande Échelle, MOCAGE). The assimilation of the two instruments is performed by means of a variational algorithm (4-D-VAR) and makes it possible to constrain stratospheric and tropospheric ozone simultaneously. The analysis is first computed for the months of August and November 2008 and validated against ozonesonde measurements to check for observation and model biases. It is found that the IASI Tropospheric Ozone Column (TOC, 1000-225 hPa) should be bias-corrected prior to assimilation and the MLS lowermost level (215 hPa) excluded from the analysis. Furthermore, a longer analysis of 6 months (July-December 2008) showed that the combined assimilation of MLS and IASI is able to globally reduce the uncertainty (Root Mean Square Error, RMSE) of the modeled ozone columns from 30% to 15% in the Upper-Troposphere/Lower-Stratosphere (UTLS, 70-225 hPa) and from 25% to 20% in the free troposphere. The positive effect of assimilating IASI tropospheric observations is very significant at low latitudes (30° S-30° N), whereas it is not demonstrated at higher latitudes. Results are confirmed by a comparison with additional ozone datasets like the Measurements of OZone and wAter vapour by aIrbus in-service airCraft (MOZAIC) data, the Ozone Monitoring Instrument (OMI) total ozone columns and several high-altitude surface measurements. Finally, the analysis is found to be only weakly sensitive to the assimilation parameters and the model chemical scheme, due to the high frequency of satellite observations compared to the average life-time of free
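
    The variational analysis underlying this kind of assimilation minimizes a cost function balancing departures from the model background against departures from the observations. The sketch below is a heavily simplified, static (3D-Var-like) version with a linear observation operator; the profile, covariances and layer-mean "retrievals" are invented for illustration and omit the averaging kernels, transport and bias correction used in the actual MOCAGE 4D-Var system.

```python
import numpy as np
from scipy.optimize import minimize

def make_cost(xb, B_inv, H, y, R_inv):
    """Quadratic variational cost J(x) = 0.5*(x-xb)^T B^-1 (x-xb) + 0.5*(y-Hx)^T R^-1 (y-Hx)
    and its gradient, for a linear observation operator H."""
    def cost(x):
        dxb = x - xb
        dy = y - H @ x
        return 0.5 * dxb @ B_inv @ dxb + 0.5 * dy @ R_inv @ dy
    def grad(x):
        return B_inv @ (x - xb) - H.T @ R_inv @ (y - H @ x)
    return cost, grad

# toy column of 10 model levels observed by two broad layer-mean "retrievals"
n = 10
xb = np.full(n, 50.0)                                       # background ozone profile (arbitrary units)
corr = np.exp(-np.abs(np.subtract.outer(range(n), range(n))) / 2.0)
B_inv = np.linalg.inv(25.0 * corr)                          # background-error covariance inverse
H = np.zeros((2, n))
H[0, :5] = 0.2                                              # mean of the lower 5 levels
H[1, 5:] = 0.2                                              # mean of the upper 5 levels
y = np.array([58.0, 47.0])                                  # synthetic observations
R_inv = np.diag([0.25, 0.25])                               # observation-error covariance inverse
cost, grad = make_cost(xb, B_inv, H, y, R_inv)
xa = minimize(cost, xb, jac=grad, method="L-BFGS-B").x      # analysis profile
print(np.round(xa, 1))
```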

  20. MS Amanda, a Universal Identification Algorithm Optimized for High Accuracy Tandem Mass Spectra

    PubMed Central

    2014-01-01

    Today’s highly accurate spectra provided by modern tandem mass spectrometers offer considerable advantages for the analysis of proteomic samples of increased complexity. Among other factors, the quantity of reliably identified peptides is considerably influenced by the peptide identification algorithm. While most widely used search engines were developed when high-resolution mass spectrometry data were not readily available for fragment ion masses, we have designed a scoring algorithm particularly suitable for high mass accuracy. Our algorithm, MS Amanda, is generally applicable to HCD, ETD, and CID fragmentation type data. The algorithm confidently explains more spectra at the same false discovery rate than Mascot or SEQUEST on examined high mass accuracy data sets, with excellent overlap and identical peptide sequence identification for most spectra also explained by Mascot or SEQUEST. MS Amanda, available at http://ms.imp.ac.at/?goto=msamanda, is provided free of charge both as standalone version for integration into custom workflows and as a plugin for the Proteome Discoverer platform. PMID:24909410

  1. Search for Ultra High-Energy Neutrinos with AMANDA-II

    SciTech Connect

    IceCube Collaboration; Klein, Spencer; Ackermann, M.

    2007-11-19

    A search for diffuse neutrinos with energies in excess of 10^5 GeV is conducted with AMANDA-II data recorded between 2000 and 2002. Above 10^7 GeV, the Earth is essentially opaque to neutrinos. This fact, combined with the limited overburden of the AMANDA-II detector (roughly 1.5 km), concentrates these ultra high-energy neutrinos at the horizon. The primary background for this analysis is bundles of downgoing, high-energy muons from the interaction of cosmic rays in the atmosphere. No statistically significant excess above the expected background is seen in the data, and an upper limit is set on the diffuse all-flavor neutrino flux of E^2 Φ_90%CL < 2.7 x 10^-7 GeV cm^-2 s^-1 sr^-1, valid over the energy range of 2 x 10^5 GeV to 10^9 GeV. A number of models which predict neutrino fluxes from active galactic nuclei are excluded at the 90% confidence level.

  2. Mechanisms of postseismic relaxation after a great subduction earthquake constrained by cross-scale thermomechanical model and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    According to the conventional view, the postseismic relaxation process after a great megathrust earthquake is dominated by fault-controlled afterslip during the first few months to a year, and later by visco-elastic relaxation in the mantle wedge. We test this idea with cross-scale thermomechanical models of the seismic cycle that employ elasticity, a mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. As initial conditions for the models we use thermomechanical models of subduction zones at geological time scales, including a narrow subduction channel with low static friction, for two settings similar to Southern Chile in the region of the great Chile earthquake of 1960 and Japan in the region of the Tohoku earthquake of 2011. We then introduce in the same models a classic rate-and-state friction law in the subduction channels, leading to stick-slip instability. The models generate spontaneous earthquake sequences, and the model parameters are set to closely replicate the coseismic deformation of the Chile and Japan earthquakes. In order to follow the deformation process in detail during the entire seismic cycle, and over multiple seismic cycles, we use an adaptive time-step algorithm that changes the integration step from 40 s during the earthquake to minutes-to-5 years during the postseismic and interseismic phases. We show that for the Chile earthquake, visco-elastic relaxation in the mantle wedge becomes the dominant relaxation process within about 1 hour of the earthquake, while for the smaller Tohoku earthquake this happens some days after the earthquake. We also show that our model for the Tohoku earthquake is consistent with the geodetic observations over the day-to-4-year time range. We will demonstrate and discuss the modeled deformation patterns during seismic cycles and identify the regions where the effects of afterslip and visco-elastic relaxation can best be distinguished.
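
    The rate-and-state friction ingredient referenced above can be illustrated with a standard quasi-dynamic spring-slider that produces spontaneous stick-slip cycles. The sketch below uses the aging law and radiation damping with generic, laboratory-style parameters; it is not the cross-scale subduction model of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# quasi-dynamic spring-slider with rate-and-state friction (aging law);
# parameter values are illustrative only
a, b = 0.010, 0.015           # rate-and-state parameters (b > a: velocity weakening)
d_c = 0.01                    # characteristic slip distance (m)
sigma = 50e6                  # effective normal stress (Pa)
k = 1e7                       # loading-system stiffness (Pa/m), below the critical stiffness
v_load = 1e-9                 # loading rate, roughly 3 cm/yr (m/s)
v0, mu0 = 1e-6, 0.6           # reference slip rate and friction coefficient
eta = 30e9 / (2 * 3000.0)     # radiation-damping term G / (2 * c_s), Pa s/m

def rhs(t, y):
    lnv, theta = y
    v = np.exp(lnv)
    dtheta = 1.0 - v * theta / d_c                       # aging-law state evolution
    # stress balance d/dt[sigma*mu(v,theta) + eta*v] = k*(v_load - v), solved for dln(v)/dt
    dlnv = (k * (v_load - v) - sigma * b * dtheta / theta) / (sigma * a + eta * v)
    return [dlnv, dtheta]

# start near (but not exactly at) steady sliding so the instability can develop
y0 = [np.log(v_load), 1.1 * d_c / v_load]
sol = solve_ivp(rhs, (0.0, 100 * 365.25 * 86400.0), y0, method="Radau",
                rtol=1e-8, atol=1e-10)
print(f"peak slip rate over {sol.t[-1] / 3.156e7:.0f} yr: {np.exp(sol.y[0]).max():.2e} m/s")
```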

  3. Limits on the high-energy gamma and neutrino fluxes from the SGR 1806-20 giant flare of 27 December 2004 with the AMANDA-II detector.

    PubMed

    Achterberg, A; Ackermann, M; Adams, J; Ahrens, J; Andeen, K; Atlee, D W; Bahcall, J N; Bai, X; Baret, B; Bartelt, M; Barwick, S W; Bay, R; Beattie, K; Becka, T; Becker, J K; Becker, K-H; Berghaus, P; Berley, D; Bernardini, E; Bertrand, D; Besson, D Z; Blaufuss, E; Boersma, D J; Bohm, C; Bolmont, J; Böser, S; Botner, O; Bouchta, A; Braun, J; Burgess, C; Burgess, T; Castermans, T; Chirkin, D; Christy, B; Clem, J; Cowen, D F; D'Agostino, M V; Davour, A; Day, C T; De Clercq, C; Demirörs, L; Descamps, F; Desiati, P; Deyoung, T; Diaz-Velez, J C; Dreyer, J; Dumm, J P; Duvoort, M R; Edwards, W R; Ehrlich, R; Eisch, J; Ellsworth, R W; Evenson, P A; Fadiran, O; Fazely, A R; Feser, T; Filimonov, K; Fox, B D; Gaisser, T K; Gallagher, J; Ganugapati, R; Geenen, H; Gerhardt, L; Goldschmidt, A; Goodman, J A; Gozzini, R; Grullon, S; Gross, A; Gunasingha, R M; Gurtner, M; Hallgren, A; Halzen, F; Han, K; Hanson, K; Hardtke, D; Hardtke, R; Harenberg, T; Hart, J E; Hauschildt, T; Hays, D; Heise, J; Helbing, K; Hellwig, M; Herquet, P; Hill, G C; Hodges, J; Hoffman, K D; Hommez, B; Hoshina, K; Hubert, D; Hughey, B; Hulth, P O; Hultqvist, K; Hundertmark, S; Hülss, J-P; Ishihara, A; Jacobsen, J; Japaridze, G S; Jones, A; Joseph, J M; Kampert, K-H; Karle, A; Kawai, H; Kelley, J L; Kestel, M; Kitamura, N; Klein, S R; Klepser, S; Kohnen, G; Kolanoski, H; Köpke, L; Krasberg, M; Kuehn, K; Landsman, H; Leich, H; Liubarsky, I; Lundberg, J; Madsen, J; Mase, K; Matis, H S; McCauley, T; McParland, C P; Meli, A; Messarius, T; Mészáros, P; Miyamoto, H; Mokhtarani, A; Montaruli, T; Morey, A; Morse, R; Movit, S M; Münich, K; Nahnhauer, R; Nam, J W; Niessen, P; Nygren, D R; Ogelman, H; Olbrechts, Ph; Olivas, A; Patton, S; Peña-Garay, C; Pérez de Los Heros, C; Piegsa, A; Pieloth, D; Pohl, A C; Porrata, R; Pretz, J; Price, P B; Przybylski, G T; Rawlins, K; Razzaque, S; Refflinghaus, F; Resconi, E; Rhode, W; Ribordy, M; Rizzo, A; Robbins, S; Roth, P; Rott, C; Rutledge, D; Ryckbosch, D; Sander, H-G; Sarkar, S; Schlenstedt, S; Schmidt, T; Schneider, D; Seckel, D; Seo, S H; Seunarine, S; Silvestri, A; Smith, A J; Solarz, M; Song, C; Sopher, J E; Spiczak, G M; Spiering, C; Stamatikos, M; Stanev, T; Steffen, P; Stezelberger, T; Stokstad, R G; Stoufer, M C; Stoyanov, S; Strahler, E A; Straszheim, T; Sulanke, K-H; Sullivan, G W; Sumner, T J; Taboada, I; Tarasova, O; Tepe, A; Thollander, L; Tilav, S; Toale, P A; Turcan, D; van Eijndhoven, N; Vandenbroucke, J; Van Overloop, A; Voigt, B; Wagner, W; Walck, C; Waldmann, H; Walter, M; Wang, Y-R; Wendt, C; Wiebusch, C H; Wikström, G; Williams, D R; Wischnewski, R; Wissing, H; Woschnagg, K; Xu, X W; Yodh, G; Yoshida, S; Zornoza, J D

    2006-12-01

    On 27 December 2004, a giant gamma flare from the Soft Gamma-Ray Repeater 1806-20 saturated many satellite gamma-ray detectors, being the brightest transient event ever observed in the Galaxy. AMANDA-II was used to search for down-going muons indicative of high-energy gammas and/or neutrinos from this object. The data revealed no significant signal, so upper limits (at 90% C.L.) on the normalization constant were set: 0.05 (0.5) TeV^-1 m^-2 s^-1 for gamma = -1.47 (-2) in the gamma flux and 0.4 (6.1) TeV^-1 m^-2 s^-1 for gamma = -1.47 (-2) in the high-energy neutrino flux.

  4. Using seismic array-processing to enhance observations of PcP waves to constrain lowermost mantle structure

    NASA Astrophysics Data System (ADS)

    Ventosa, S.; Romanowicz, B. A.

    2014-12-01

    The topography of the core-mantle boundary (CMB) and the structure and composition of the D" region are essential for understanding the interaction between the Earth's mantle and core. A variety of seismic data-processing techniques have been used to detect and measure travel-times and amplitudes of weak short-period teleseismic body-wave phases that interact with the CMB and D", which is crucial for constraining properties of the lowermost mantle at short wavelengths. Major challenges in enhancing these observations are: (1) increasing the signal-to-noise ratio of target phases and (2) isolating them from unwanted neighboring phases. Seismic array-processing can address these problems by combining signals from groups of seismometers and exploiting information that allows the coherent signals to be separated from the noise. Here, we focus on the study of the Pacific large-low shear-velocity province (LLSVP) and surrounding areas using differential travel-times and amplitude ratios of the P and PcP phases, and their depth phases. In particular, we design scale-dependent slowness filters that do not compromise time-space resolution. This is a local delay-and-sum (i.e. slant-stack) approach implemented in the time-scale domain using the wavelet transform to enhance time-space resolution (i.e. reduce array aperture). We group stations from USArray and other nearby networks, and from Hi-Net and F-net in Japan, to define many overlapping local arrays. The aperture of each array varies mainly according to (1) the spatial resolution target and (2) the slowness resolution required to isolate the target phases at each period. Once the target phases are well separated, we measure their differential travel-times and amplitude ratios, and we project these to the CMB. In this process, we carefully analyze and, when possible and significant, correct for the main sources of bias, i.e., mantle heterogeneities, earthquake mislocation and intrinsic attenuation. We illustrate our approach in a series of
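
    A bare-bones version of the delay-and-sum (slant-stack) step described above is sketched here: traces are time-shifted according to a trial horizontal slowness vector, stacked, and the beam power is mapped over slowness and back-azimuth. The wavelet-domain, scale-dependent filtering of the actual approach is omitted, and the synthetic array geometry is invented for the example.

```python
import numpy as np

def delay_and_sum(traces, dt, x, y, slownesses, back_azimuths):
    """Plane-wave delay-and-sum (slant-stack) beamforming over a small array.

    traces        : (n_stations, n_samples) waveforms
    dt            : sample interval (s)
    x, y          : station offsets from the array centre (km, east and north)
    slownesses    : trial horizontal slownesses (s/km)
    back_azimuths : trial back-azimuths (degrees)
    Returns beam power with shape (len(slownesses), len(back_azimuths)).
    """
    n_sta, n_samp = traces.shape
    t = np.arange(n_samp) * dt
    power = np.zeros((len(slownesses), len(back_azimuths)))
    for i, p in enumerate(slownesses):
        for j, baz in enumerate(back_azimuths):
            az = np.deg2rad(baz)
            sx, sy = -p * np.sin(az), -p * np.cos(az)    # slowness vector along propagation
            beam = np.zeros(n_samp)
            for k in range(n_sta):
                shift = sx * x[k] + sy * y[k]            # predicted delay at station k
                beam += np.interp(t + shift, t, traces[k], left=0.0, right=0.0)
            power[i, j] = np.mean((beam / n_sta) ** 2)
    return power

# synthetic check: a single plane wave crossing a 5-station array
rng = np.random.default_rng(1)
x, y = rng.uniform(-8, 8, 5), rng.uniform(-8, 8, 5)      # station offsets (km)
dt, n = 0.05, 600
t = np.arange(n) * dt
true_sx, true_sy = 0.04, 0.02                            # s/km
wavelet = np.exp(-(((t - 15.0) / 0.25) ** 2))
traces = np.array([np.interp(t - (true_sx * xi + true_sy * yi), t, wavelet)
                   for xi, yi in zip(x, y)])
slow = np.linspace(0.0, 0.08, 17)
power = delay_and_sum(traces, dt, x, y, slow, np.arange(0, 360, 10))
i, j = np.unravel_index(power.argmax(), power.shape)
print("recovered slowness (s/km):", slow[i])
```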

  5. Combined assimilation of IASI and MLS observations to constrain tropospheric and stratospheric ozone in a global chemical transport model

    NASA Astrophysics Data System (ADS)

    Emili, Emanuele; Barret, Brice; Massart, Sebastien; Piacentini, Andrea; Pannekoucke, Olivier; Cariolle, Daniel

    2013-04-01

    Ozone acts as the main shield against UV radiation in the stratosphere, contributes to the greenhouse effect in the troposphere and is a major pollutant in the planetary boundary layer. In recent decades, models and satellite observations have reached a mature level, providing estimates of stratospheric ozone with an accuracy of a few percent. On the other hand, tropospheric ozone still represents a challenge, because its signal is less detectable by space-borne sensors, its modelling depends on the knowledge of gaseous emissions at the surface, and stratosphere-troposphere exchanges can rapidly increase its abundance severalfold. Moreover, there is a general lack of in-situ observations of tropospheric ozone in many regions of the world. For these reasons the assimilation of satellite data into chemical transport models represents a promising technique to overcome limitations of both satellites and models. The objective of this study is to assess the value of vertically resolved observations from the Infrared Atmospheric Sounding Interferometer (IASI) and the Microwave Limb Sounder (MLS) to constrain both the tropospheric and stratospheric ozone profile in a global model. While ozone total columns and stratospheric profiles from UV and microwave sensors are nowadays routinely assimilated in operational models, few studies have yet explored the assimilation of ozone products from IR sensors such as IASI, which provide better sensitivity in the troposphere. We assimilate both MLS ozone profiles and IASI tropospheric (1000-225 hPa) ozone columns in the Météo France chemical transport model MOCAGE for 2008. The model predicts ozone concentrations on a 2x2 degree global grid and for 60 vertical levels, ranging from the surface up to 0.1 hPa. The assimilation is based on a 4D-VAR algorithm, employs a linear chemistry scheme and accounts for the satellite vertical sensitivity via the averaging kernels. The assimilation of the two products is first tested

  6. Constraining U.S. ammonia emissions using TES remote sensing observations and the GEOS-Chem adjoint model

    EPA Science Inventory

    Ammonia (NH3) has significant impacts on biodiversity, eutrophication, and acidification. Widespread uncertainty in the magnitude and seasonality of NH3 emissions hinders efforts to address these issues. In this work, we constrain U.S. NH3 sources using obse...

  7. Determination of the Atmospheric Neutrino Flux and Searches for New Physics with AMANDA-II

    SciTech Connect

    IceCube Collaboration; Klein, Spencer; Collaboration, IceCube

    2009-06-02

    The AMANDA-II detector, operating since 2000 in the deep ice at the geographic South Pole, has accumulated a large sample of atmospheric muon neutrinos in the 100 GeV to 10 TeV energy range. The zenith angle and energy distribution of these events can be used to search for various phenomenological signatures of quantum gravity in the neutrino sector, such as violation of Lorentz invariance (VLI) or quantum decoherence (QD). Analyzing a set of 5511 candidate neutrino events collected during 1387 days of livetime from 2000 to 2006, we find no evidence for such effects and set upper limits on VLI and QD parameters using a maximum likelihood method. Given the absence of evidence for new flavor-changing physics, we use the same methodology to determine the conventional atmospheric muon neutrino flux above 100 GeV.

  8. Constraining a land-surface model with multiple observations by application of the MPI-Carbon Cycle Data Assimilation System V1.0

    NASA Astrophysics Data System (ADS)

    Schürmann, Gregor J.; Kaminski, Thomas; Köstler, Christoph; Carvalhais, Nuno; Voßbeck, Michael; Kattge, Jens; Giering, Ralf; Rödenbeck, Christian; Heimann, Martin; Zaehle, Sönke

    2016-09-01

    We describe the Max Planck Institute Carbon Cycle Data Assimilation System (MPI-CCDAS) built around the tangent-linear version of the JSBACH land-surface scheme, which is part of the MPI-Earth System Model v1. The simulated phenology and net land carbon balance were constrained by globally distributed observations of the fraction of absorbed photosynthetically active radiation (FAPAR, using the TIP-FAPAR product) and atmospheric CO2 at a global set of monitoring stations for the years 2005 to 2009. When constrained by FAPAR observations alone, the system successfully, and computationally efficiently, improved simulated growing-season average FAPAR, as well as its seasonality in the northern extra-tropics. When constrained by atmospheric CO2 observations alone, global net and gross carbon fluxes were improved, despite a tendency of the system to underestimate tropical productivity. Assimilating both data streams jointly allowed the MPI-CCDAS to match both observations (TIP-FAPAR and atmospheric CO2) as well as in the single-data-stream assimilation cases, thereby increasing the overall appropriateness of the simulated biosphere dynamics and underlying parameter values. Our study thus demonstrates the value of multiple-data-stream assimilation for the simulation of terrestrial biosphere dynamics. It further highlights the potential role of remote sensing data, here the TIP-FAPAR product, in stabilising the strongly underdetermined atmospheric inversion problem posed by atmospheric transport and CO2 observations alone. Notwithstanding these advances, the constraint that the observations place on regional gross and net CO2 flux patterns in the MPI-CCDAS is limited by the coarse-scale parametrisation of the biosphere model. We expect improvement through a refined initialisation strategy and inclusion of further biosphere observations as constraints.

  9. A search for neutrino-induced electromagnetic showers in the 2008 combined IceCube and AMANDA detectors

    NASA Astrophysics Data System (ADS)

    Rutledge, Douglas Lowery

    The Antarctic Muon and Neutrino Detector Array (AMANDA) and its successor experiment, IceCube, are both Cherenkov detectors deployed very near the geographic South Pole. The Cherenkov technique uses the light emitted by charged particles that travel faster than the propagation velocity of light in the detector medium. This can be used to detect the daughter particles produced when neutrinos of any flavor interact in the ice. The topology of neutrino interaction events is strongly dependent on the neutrino flavor, allowing separate measurements to be made. Electrons resulting from neutrino interactions leave spherical events by depositing all of their energy within a small region. Events of this type are often referred to as "Cascades." Muons propagate over long distances, leaving Cherenkov light distributed over a line. The principal event topology for taus is called "Double Bangs," with two spatially separated cascades. There are many potential benefits to running a search for neutrino-induced cascades using the combined readout from both the IceCube and the AMANDA detectors. AMANDA is sensitive to lower energies, owing to its denser distribution of PMTs. IceCube has a much larger volume, allowing it to make better measurements of the background. This allows for better background rejection techniques, and thus a higher final signal rate. This work presents a search for cascades from the atmospheric neutrino flux using the combined data from AMANDA's Transient Waveform Recorder (TWR) data acquisition system, and IceCube's 40 string detector configuration. After the 200 Hz background rate is removed, the final measured rate of cascade candidates is 2.5 x 10^-7 (+3.8 x 10^-7 / -9.9 x 10^-8) Hz (stat) +/- 9.8 x 10^-8 Hz (syst). The dataset used in this work was collected over 187 days from April to November in 2008.

  10. Aerosol optical depth assimilation for a size-resolved sectional model: impacts of observationally constrained, multi-wavelength and fine mode retrievals on regional scale analyses and forecasts

    NASA Astrophysics Data System (ADS)

    Saide, P. E.; Carmichael, G. R.; Liu, Z.; Schwartz, C. S.; Lin, H. C.; da Silva, A. M.; Hyer, E.

    2013-10-01

    An aerosol optical depth (AOD) three-dimensional variational data assimilation technique is developed for the Gridpoint Statistical Interpolation (GSI) system for which WRF-Chem forecasts are performed with a detailed sectional model, the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC). Within GSI, forward AOD and adjoint sensitivity computations are performed using Mie computations from the WRF-Chem optical properties module, providing consistency with the forecast. GSI tools such as recursive filters and weak constraints are used to provide correlation within aerosol size bins and upper and lower bounds for the optimization. The system is used to perform assimilation experiments with fine vertical structure and no data thinning or re-gridding on a 12 km horizontal grid over the region of California, USA, where improvement in analyses and forecasts is demonstrated. A first set of simulations was performed, comparing the assimilation impacts of using the operational MODIS (Moderate Resolution Imaging Spectroradiometer) dark target retrievals to those using observationally constrained ones, i.e., calibrated with AERONET (Aerosol RObotic NETwork) data. It was found that using the observationally constrained retrievals produced the best results when evaluated against ground-based monitors, with the error in PM2.5 predictions reduced at over 90% of the stations and AOD errors reduced at 100% of the monitors, along with larger overall error reductions when grouping all sites. A second set of experiments reveals that the use of fine mode fraction AOD and ocean multi-wavelength retrievals can improve the representation of the aerosol size distribution, while assimilating only 550 nm AOD retrievals produces no impact or, at times, a degraded one. While assimilation of multi-wavelength AOD shows positive impacts on all analyses performed, future work is needed to generate observationally constrained multi-wavelength retrievals, which when assimilated will generate size

  11. On the convergence of ionospheric constrained precise point positioning (IC-PPP) based on undifferential uncombined raw GNSS observations.

    PubMed

    Zhang, Hongping; Gao, Zhouzheng; Ge, Maorong; Niu, Xiaoji; Huang, Ling; Tu, Rui; Li, Xingxing

    2013-01-01

    Precise Point Positioning (PPP) has become a very active topic in GNSS research and applications. However, it usually takes several tens of minutes to obtain positions with better than 10 cm accuracy. This prevents PPP from being widely used in real-time kinematic positioning services; therefore, a large effort has been made to tackle the convergence problem. One of the recent approaches is the ionospheric delay constrained precise point positioning (IC-PPP), which uses the spatial and temporal characteristics of ionospheric delays and also delays from an a priori model. In this paper, the impact of the quality of ionospheric models on the convergence of IC-PPP is evaluated using the IGS global ionospheric map (GIM) updated every two hours and a regional satellite-specific correction model. Furthermore, the effect of the receiver differential code bias (DCB) is investigated by comparing the convergence time for IC-PPP with and without estimation of the DCB parameter. The results of processing a large amount of data show that, on the one hand, the quality of the a priori ionospheric delays plays a very important role in IC-PPP convergence. Generally, dense regional GNSS networks can provide more precise ionospheric delays than GIM and can consequently reduce the convergence time. On the other hand, ignoring the receiver DCB may considerably extend the convergence, and the larger the DCB, the longer the convergence time. Estimating the receiver DCB in IC-PPP is an effective way to overcome this problem. Therefore, current IC-PPP should be enhanced by estimating the receiver DCB and employing regional satellite-specific ionospheric correction models in order to speed up its convergence for more practical applications. PMID:24253190

  12. Optimal locations for absolute gravity measurements and sensitivity of GRACE observations for constraining glacial isostatic adjustment on the northern hemisphere

    NASA Astrophysics Data System (ADS)

    Steffen, Holger; Wu, Patrick; Wang, Hansheng

    2012-09-01

    Gravity rate of change is an important quantity in the investigation of glacial isostatic adjustment (GIA). However, measurements with absolute and relative gravimeters are laborious and time-consuming, especially in the vast GIA-affected regions of high latitudes with insufficient infrastructure. Results of the Gravity Recovery And Climate Experiment (GRACE) satellite mission have thus provided tremendous new insight as they fully cover those areas. To better constrain the GIA model (i.e. improve the glaciation history and Earth parameters) with new gravity data, we analyse the currently determined errors in gravity rate of change from absolute gravity (AG) and GRACE measurements in North America and Fennoscandia to test their sensitivity to different ice models, lithospheric thickness, background viscosity and lateral mantle viscosity variations. We provide detailed sensitivity maps for these four parameters and highlight areas that need more AG measurements to further improve our understanding of GIA. The parameter best detected with both methods in both regions is the ice model, whose sensitivity covers large areas in the sensitivity maps. Also, most of these areas are isolated from sensitive areas of the other three parameters. The latter mainly overlap with ice model sensitivity and each other. Regarding existing AG stations, more stations are urgently needed in northwestern and Arctic Canada. In contrast, a quite dense network of stations already exists in Fennoscandia. With an extension to a few sites in northwestern Russia, a complete station network is provided to study the GIA parameters. The data of dense networks would yield a comprehensive picture of gravity change, which can be further used for studies of the Earth's interior and geodynamic processes.

  13. On the convergence of ionospheric constrained precise point positioning (IC-PPP) based on undifferential uncombined raw GNSS observations.

    PubMed

    Zhang, Hongping; Gao, Zhouzheng; Ge, Maorong; Niu, Xiaoji; Huang, Ling; Tu, Rui; Li, Xingxing

    2013-11-18

    Precise Point Positioning (PPP) has become a very active topic in GNSS research and applications. However, it usually takes several tens of minutes to obtain positions with better than 10 cm accuracy. This prevents PPP from being widely used in real-time kinematic positioning services; therefore, a large effort has been made to tackle the convergence problem. One of the recent approaches is the ionospheric delay constrained precise point positioning (IC-PPP), which uses the spatial and temporal characteristics of ionospheric delays and also delays from an a priori model. In this paper, the impact of the quality of ionospheric models on the convergence of IC-PPP is evaluated using the IGS global ionospheric map (GIM) updated every two hours and a regional satellite-specific correction model. Furthermore, the effect of the receiver differential code bias (DCB) is investigated by comparing the convergence time for IC-PPP with and without estimation of the DCB parameter. The results of processing a large amount of data show that, on the one hand, the quality of the a priori ionospheric delays plays a very important role in IC-PPP convergence. Generally, dense regional GNSS networks can provide more precise ionospheric delays than GIM and can consequently reduce the convergence time. On the other hand, ignoring the receiver DCB may considerably extend the convergence, and the larger the DCB, the longer the convergence time. Estimating the receiver DCB in IC-PPP is an effective way to overcome this problem. Therefore, current IC-PPP should be enhanced by estimating the receiver DCB and employing regional satellite-specific ionospheric correction models in order to speed up its convergence for more practical applications.

  14. On the Convergence of Ionospheric Constrained Precise Point Positioning (IC-PPP) Based on Undifferential Uncombined Raw GNSS Observations

    PubMed Central

    Zhang, Hongping; Gao, Zhouzheng; Ge, Maorong; Niu, Xiaoji; Huang, Ling; Tu, Rui; Li, Xingxing

    2013-01-01

    Precise Point Positioning (PPP) has become a very active topic in GNSS research and applications. However, it usually takes several tens of minutes to obtain positions with better than 10 cm accuracy. This prevents PPP from being widely used in real-time kinematic positioning services; therefore, a large effort has been made to tackle the convergence problem. One of the recent approaches is the ionospheric delay constrained precise point positioning (IC-PPP), which uses the spatial and temporal characteristics of ionospheric delays and also delays from an a priori model. In this paper, the impact of the quality of ionospheric models on the convergence of IC-PPP is evaluated using the IGS global ionospheric map (GIM) updated every two hours and a regional satellite-specific correction model. Furthermore, the effect of the receiver differential code bias (DCB) is investigated by comparing the convergence time for IC-PPP with and without estimation of the DCB parameter. The results of processing a large amount of data show that, on the one hand, the quality of the a priori ionospheric delays plays a very important role in IC-PPP convergence. Generally, dense regional GNSS networks can provide more precise ionospheric delays than GIM and can consequently reduce the convergence time. On the other hand, ignoring the receiver DCB may considerably extend the convergence, and the larger the DCB, the longer the convergence time. Estimating the receiver DCB in IC-PPP is an effective way to overcome this problem. Therefore, current IC-PPP should be enhanced by estimating the receiver DCB and employing regional satellite-specific ionospheric correction models in order to speed up its convergence for more practical applications. PMID:24253190

  15. Multi-year search for a diffuse flux of muon neutrinos with AMANDA-II

    SciTech Connect

    IceCube Collaboration; Klein, Spencer; Achterberg, A.; Collaboration, IceCube

    2008-04-13

    A search for TeV-PeV muon neutrinos from unresolved sources was performed on AMANDA-II data collected between 2000 and 2003 with an equivalent livetime of 807 days. This diffuse analysis sought to find an extraterrestrial neutrino flux from sources with non-thermal components. The signal is expected to have a harder spectrum than the atmospheric muon and neutrino backgrounds. Since no excess of events was seen in the data over the expected background, an upper limit of E^2 Φ_90%C.L. < 7.4 x 10^-8 GeV cm^-2 s^-1 sr^-1 is placed on the diffuse flux of muon neutrinos with a Φ ∝ E^-2 spectrum in the energy range 16 TeV to 2.5 PeV. This is currently the most sensitive Φ ∝ E^-2 diffuse astrophysical neutrino limit. We also set upper limits for astrophysical and prompt neutrino models, all of which have spectra different than Φ ∝ E^-2.
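
    Diffuse limits of this kind are commonly quoted by scaling a test flux with the ratio of the event upper limit to the number of signal events the test flux predicts (the model rejection factor). The sketch below uses a simple classical Poisson upper limit in place of the Feldman-Cousins construction with systematics used in the real analyses, and all the counts and the test normalization are illustrative only.

```python
from scipy.stats import poisson
from scipy.optimize import brentq

def upper_limit_counts(n_obs, background, cl=0.9):
    """Classical Poisson upper limit on the mean signal s: the value of s for which
    P(N <= n_obs | background + s) = 1 - cl. A simplified stand-in for the
    Feldman-Cousins construction used in the actual analyses."""
    f = lambda s: poisson.cdf(n_obs, background + s) - (1.0 - cl)
    return brentq(f, 0.0, 100.0 + 10.0 * n_obs)

def flux_upper_limit(test_norm, n_expected_signal, n_obs, background, cl=0.9):
    """Scale a test flux normalization by the model rejection factor
    mu_cl / n_expected_signal to obtain the flux upper limit."""
    mu = upper_limit_counts(n_obs, background, cl)
    return test_norm * mu / n_expected_signal

# illustrative numbers: a test E^-2 flux predicting 12 events, with 4 expected
# background events and 5 observed
limit = flux_upper_limit(test_norm=1e-6, n_expected_signal=12.0,
                         n_obs=5, background=4.0)
print(f"E^2 Phi_90 < {limit:.1e} GeV cm^-2 s^-1 sr^-1")
```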

  16. Constraining precipitation initiation in marine stratocumulus using aircraft observations and LES with high spectral resolution bin microphysics

    NASA Astrophysics Data System (ADS)

    Witte, M.; Chuang, P. Y.; Rossiter, D.; Ayala, O.; Wang, L. P.

    2015-12-01

    Turbulence has been suggested as one possible mechanism to accelerate the onset of autoconversion and widen the process "bottleneck" in the formation of warm rain. While direct observation of the collision-coalescence process remains beyond the reach of present-day instrumentation, co-located sampling of atmospheric motion and the drop size spectrum allows for comparison of in situ observations with simulation results to test representations of drop growth processes. This study evaluates whether observations of drops in the autoconversion regime can be replicated using our best theoretical understanding of collision-coalescence. A state-of-the-art turbulent collisional growth model is applied to a bin microphysics scheme within a large-eddy simulation such that the full range of cloud drop growth mechanisms (i.e. CCN activation, condensation, collision-coalescence, mixing, etc.) is represented at realistic atmospheric conditions. The spectral resolution of the microphysics scheme has been quadrupled in order to (a) more closely match the resolution of the observational instrumentation and (b) limit numerical diffusion, which leads to spurious broadening of the drop size spectrum at standard mass-doubling resolution. We compare simulated cloud drop spectra with those obtained from aircraft observations to assess the quality and limits of our theoretical knowledge. The comparison is performed for two observational cases from the Physics of Stratocumulus Top (POST) field campaign: 12 August 2008 (drizzling night flight, Rmax~2 mm/d) and 15 August 2008 (nondrizzling day flight, Rmax<0.5 mm/d). Both flights took place off the coast of Monterey, CA, and the two cases differ in their radiative cooling rates, shear, cloud-top temperature and moisture jumps, and entrainment rates. Initial results from a collision box model suggest that enhancements of approximately 2 orders of magnitude over theoretical turbulent collision rates may be necessary to reproduce the

  17. Gravitational-wave Observations May Constrain Gamma-Ray Burst Models: The Case of GW150914-GBM

    NASA Astrophysics Data System (ADS)

    Veres, P.; Preece, R. D.; Goldstein, A.; Mészáros, P.; Burns, E.; Connaughton, V.

    2016-08-01

    The possible short gamma-ray burst (GRB) observed by Fermi/GBM in coincidence with the first gravitational-wave (GW) detection offers new ways to test GRB prompt emission models. GW observations provide previously inaccessible physical parameters for the black hole central engine such as its horizon radius and rotation parameter. Using a minimum jet launching radius from the Advanced LIGO measurement of GW 150914, we calculate photospheric and internal shock models and find that they are marginally inconsistent with the GBM data, but cannot be definitely ruled out. Dissipative photosphere models, however, have no problem explaining the observations. Based on the peak energy and the observed flux, we find that the external shock model gives a natural explanation, suggesting a low interstellar density (~10^-3 cm^-3) and a high Lorentz factor (~2000). We only speculate on the exact nature of the system producing the gamma-rays, and study the parameter space of a generic Blandford-Znajek model. If future joint observations confirm the GW-short-GRB association we can provide similar but more detailed tests for prompt emission models.

  18. Gravitational-wave Observations May Constrain Gamma-Ray Burst Models: The Case of GW150914–GBM

    NASA Astrophysics Data System (ADS)

    Veres, P.; Preece, R. D.; Goldstein, A.; Mészáros, P.; Burns, E.; Connaughton, V.

    2016-08-01

    The possible short gamma-ray burst (GRB) observed by Fermi/GBM in coincidence with the first gravitational-wave (GW) detection offers new ways to test GRB prompt emission models. GW observations provide previously inaccessible physical parameters for the black hole central engine such as its horizon radius and rotation parameter. Using a minimum jet launching radius from the Advanced LIGO measurement of GW 150914, we calculate photospheric and internal shock models and find that they are marginally inconsistent with the GBM data, but cannot be definitely ruled out. Dissipative photosphere models, however, have no problem explaining the observations. Based on the peak energy and the observed flux, we find that the external shock model gives a natural explanation, suggesting a low interstellar density (~10^-3 cm^-3) and a high Lorentz factor (~2000). We only speculate on the exact nature of the system producing the gamma-rays, and study the parameter space of a generic Blandford–Znajek model. If future joint observations confirm the GW–short-GRB association we can provide similar but more detailed tests for prompt emission models.

  19. Fault and anthropogenic processes in central California constrained by satellite and airborne InSAR and in-situ observations

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Lundgren, Paul

    2016-07-01

    , but are subject to severe decorrelation. The L-band ALOS and UAVSAR SAR sensors provide improved coherence compared to the shorter wavelength radar data. Joint analysis of UAVSAR and ALOS interferometry measurements shows clear variability in deformation along the fault strike, suggesting variable fault creep and locking at depth and along strike. Modeling selected fault transects reveals a distinct change in surface creep and shallow slip deficit from the central creeping section towards the Parkfield transition. In addition to fault creep, the L-band ALOS, and especially ALOS-2 ScanSAR interferometry, show large-scale ground subsidence in the SJV due to over-exploitation of groundwater. Groundwater-related deformation is spatially and temporally variable and is composed of both recoverable elastic and non-recoverable inelastic components. InSAR time series are compared to GPS and well-water hydraulic head in-situ time series to understand water storage processes and mass loading changes. We are currently developing poroelastic finite element method models to assess the influence of anthropogenic processes on surface deformation and fault mechanics. Ongoing work aims to better constrain both tectonic and non-tectonic processes and to understand their interaction and implications for regional earthquake hazard.

  20. Effect of time-varying tropospheric models on near-regional and regional infrasound propagation as constrained by observational data

    NASA Astrophysics Data System (ADS)

    McKenna, Mihan H.; Stump, Brian W.; Hayward, Chris

    2008-06-01

    The Chulwon Seismo-Acoustic Array (CHNAR) is a regional seismo-acoustic array with co-located seismometers and infrasound microphones on the Korean peninsula. Data from forty-two days over the course of a year between October 1999 and August 2000 were analyzed; 2052 infrasound-only arrivals and 23 seismo-acoustic arrivals were observed over the six-week study period. A majority of the signals occur during local working hours, hour 0 to hour 9 UT, and appear to be the result of cultural activity located within a 250 km radius. Atmospheric modeling is presented for four sample days during the study period, one in each of November, February, April, and August. Local meteorological data sampled at six-hour intervals are needed to accurately model the observed arrivals, and these data produced highly temporally variable thermal ducts that propagate infrasound signals within 250 km, matching the temporal variation in the observed arrivals. These ducts change dramatically on the order of hours, and meteorological data from the appropriately sampled time frame were necessary to interpret the observed arrivals.

  1. Constraining the dark fluid

    SciTech Connect

    Kunz, Martin; Liddle, Andrew R.; Parkinson, David; Gao Changjun

    2009-10-15

    Cosmological observations are normally fit under the assumption that the dark sector can be decomposed into dark matter and dark energy components. However, as long as the probes remain purely gravitational, there is no unique decomposition and observations can only constrain a single dark fluid; this is known as the dark degeneracy. We use observations to directly constrain this dark fluid in a model-independent way, demonstrating, in particular, that the data cannot be fit by a dark fluid with a single constant equation of state. Parametrizing the dark fluid equation of state by a variety of polynomials in the scale factor a, we use current kinematical data to constrain the parameters. While the simplest interpretation of the dark fluid remains that it is comprised of separate dark matter and cosmological constant contributions, our results cover other model types including unified dark energy/matter scenarios.
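
    To make the parametrization concrete, the sketch below evaluates the expansion history and distance modulus for baryons plus a single dark fluid whose equation of state is a polynomial in the scale factor, which is the kind of quantity compared against kinematical data. The specific coefficients and the values of Ω_b and h are arbitrary illustrations, not the constraints derived in the paper.

```python
import numpy as np
from scipy.integrate import quad

def dark_fluid_E(a, w_coeffs, omega_b=0.05):
    """Dimensionless Hubble rate E(a) = H(a)/H0 for baryons plus a single dark fluid
    whose equation of state is a polynomial w(a) = sum_i c_i * a^i."""
    w = lambda x: np.polyval(w_coeffs[::-1], x)
    integrand = lambda x: 3.0 * (1.0 + w(x)) / x
    rho_df = np.exp(quad(integrand, a, 1.0)[0])            # rho_df(a) / rho_df(a = 1)
    return np.sqrt(omega_b * a ** -3 + (1.0 - omega_b) * rho_df)

def distance_modulus(z, w_coeffs, h=0.7, omega_b=0.05):
    """Distance modulus for a flat universe containing the dark fluid above."""
    c_km_s = 299792.458
    integrand = lambda zp: 1.0 / dark_fluid_E(1.0 / (1.0 + zp), w_coeffs, omega_b)
    d_c = c_km_s / (100.0 * h) * quad(integrand, 0.0, z)[0]   # comoving distance (Mpc)
    d_l = (1.0 + z) * d_c                                     # luminosity distance (Mpc)
    return 5.0 * np.log10(d_l) + 25.0

# a crude, purely illustrative linear equation of state w(a) = -0.8 * a
print(distance_modulus(0.5, w_coeffs=[0.0, -0.8]))
```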

  2. Survey Observation of S-bearing Species toward Neptune's Atmosphere to Constrain the Origin of Abundant Volatile Gases

    NASA Astrophysics Data System (ADS)

    Iino, T.; Mizuno, A.; Nagahama, T.; Nakajima, T.

    2013-09-01

    We present our recent submillimeter observations of CS, CO and HCN in Neptune's atmosphere. The retrieved abundances of both CO and HCN were comparable to previous observations. In contrast, CS, which was produced in large quantities after the impact of comet Shoemaker-Levy 9 on Jupiter in 1994, was not detected. The obtained [CS]/[CO] value was at least 300 times lower than in the SL9 event, even though the lifetime of CS calculated by thermochemical simulation is considerably longer than that of other S-bearing species. The absence of CS raises a new question about the origin of trace gases in Neptune's atmosphere.

  3. Using Observations of Deep Convective Systems to Constrain Atmospheric Column Absorption of Solar Radiation in the Optically Thick Limit

    NASA Technical Reports Server (NTRS)

    Dong, Xiquan; Wielicki, Bruce A.; Xi, Baike; Hu, Yongxiang; Mace, Gerald G.; Benson, Sally; Rose, Fred; Kato, Seiji; Charlock, Thomas; Minnis, Patrick

    2008-01-01

    Atmospheric column absorption of solar radiation (A_col) is a fundamental part of the Earth's energy cycle but is an extremely difficult quantity to measure directly. To investigate A_col, we have collocated satellite-surface observations for the optically thick Deep Convective Systems (DCS) at the Department of Energy Atmosphere Radiation Measurement (ARM) Tropical Western Pacific (TWP) and Southern Great Plains (SGP) sites during the period of March 2000 to December 2004. The surface data were averaged over a 2-h interval centered at the time of the satellite overpass, and the satellite data were averaged within a 1 deg X 1 deg area centered on the ARM sites. In the DCS, cloud particle size is important for top-of-atmosphere (TOA) albedo and A_col, although the surface absorption is independent of cloud particle size. In this study, we find that A_col in the tropics is approximately 0.011 more than that in the middle latitudes. This difference, however, disappears, i.e., the A_col values in both regions converge to the same value (approximately 0.27 of the total incoming solar radiation) in the optically thick limit (tau greater than 80). Comparing the observations with the NASA Langley modified Fu-Liou two-stream radiative transfer model for optically thick cases, the difference between observed and model-calculated surface absorption is, on average, less than 0.01, but the model-calculated TOA albedo and A_col differ by 0.01 to 0.04, depending primarily on the cloud particle size observation used. The model versus observation discrepancies found are smaller than in many previous studies and are just within the estimated error bounds. We did not find evidence for a large cloud absorption anomaly in the optically thick limit of extensive ice cloud layers. A more modest cloud absorption difference of 0.01 to 0.04 cannot yet be ruled out. The remaining uncertainty could be reduced with additional cases, and by reducing the current
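    The quantity A_col above is the residual of a simple flux budget: what enters at the top of the atmosphere, minus what leaves at the top, minus what the surface keeps. A minimal Python sketch of that bookkeeping follows; it is not the authors' processing code, and the flux values are illustrative placeholders chosen so the optically thick case lands near the converged value of 0.27 quoted in the abstract.

```python
# Minimal sketch (not the authors' code) of the column-absorption bookkeeping
# described above: the solar energy absorbed by the atmospheric column is the
# TOA net downward flux minus what the surface absorbs, expressed as a
# fraction of the incoming solar radiation. All numbers are illustrative.

def column_absorption(f_toa_down, f_toa_up, f_sfc_down, f_sfc_up):
    """Return A_col as a fraction of the incoming TOA solar flux (W m^-2 inputs)."""
    toa_net = f_toa_down - f_toa_up          # absorbed by surface + atmosphere
    sfc_net = f_sfc_down - f_sfc_up          # absorbed by the surface alone
    return (toa_net - sfc_net) / f_toa_down  # absorbed within the column

# Illustrative optically thick DCS case: high TOA albedo, little surface absorption.
a_col = column_absorption(f_toa_down=1000.0, f_toa_up=650.0,
                          f_sfc_down=90.0, f_sfc_up=10.0)
print(f"A_col = {a_col:.2f} of incoming solar")   # -> 0.27 with these placeholder fluxes
```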

  4. The linkage between stratospheric water vapor and surface temperature in an observation-constrained coupled general circulation model

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Su, Hui; Jiang, Jonathan H.; Livesey, Nathaniel J.; Santee, Michelle L.; Froidevaux, Lucien; Read, William G.; Anderson, John

    2016-06-01

    We assess the interactions between stratospheric water vapor (SWV) and surface temperature during the past two decades using satellite observations and the Community Earth System Model (CESM). From 1992 to 2013, to first order, the observed SWV exhibited three distinct piece-wise trends: a steady increase from 1992 to 2000, an abrupt drop from 2000 to 2004, and a gradual recovery after 2004, while the global-mean surface temperature experienced a strong increase until 2000 and a warming hiatus after 2000. The atmosphere-only CESM shows that the seasonal variation of tropical-mean (30°S-30°N) SWV is anticorrelated with that of the tropical-mean sea surface temperature (SST), while the correlation between the tropical SWV and SST anomalies on the interannual time scale is rather weak. By nudging the modeled SWV to prescribed profiles in coupled atmosphere-slab ocean experiments, we investigate the impact of SWV variations on surface temperature change. We find that a uniform 1 ppmv (0.5 ppmv) SWV increase (decrease) leads to an equilibrium global mean surface warming (cooling) of 0.12 ± 0.05 °C (-0.07 ± 0.05 °C). Sensitivity experiments show that the equilibrium response of global mean surface temperature to SWV perturbations over the extratropics is larger than that over the tropics. The observed sudden drop of SWV from 2000 to 2004 produces a global mean surface cooling of about -0.048 ± 0.041 °C, which suggests that a persistent change in SWV would make an imprint on long-term variations of global-mean surface temperature. A constant linear increase in SWV based on the satellite-observed rate of SWV change yields a global mean surface warming of 0.03 ± 0.01 °C/decade over a 50-year period, which accounts for about 19 % of the observed surface temperature increase prior to the warming hiatus. In the same experiment, trend analyses during different periods reveal a multi-year adjustment of surface temperature before the response to SWV forcing becomes
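    The headline numbers above amount to a linear scaling of a stratospheric-water-vapor (SWV) trend by an equilibrium sensitivity of roughly 0.12 °C per +1 ppmv. The short sketch below only illustrates that arithmetic; the SWV trend value used is a hypothetical placeholder, not the satellite-observed rate.

```python
# Hedged arithmetic sketch: scale a stratospheric-water-vapor (SWV) trend into a
# surface-temperature trend using the equilibrium sensitivity quoted above
# (~0.12 degC per +1 ppmv). The SWV trend value is an illustrative placeholder,
# not a number taken from the study.

sensitivity_degC_per_ppmv = 0.12      # equilibrium warming per 1 ppmv SWV increase (from the abstract)
swv_trend_ppmv_per_decade = 0.25      # hypothetical satellite-derived SWV trend

warming_trend = sensitivity_degC_per_ppmv * swv_trend_ppmv_per_decade
print(f"Implied surface warming: {warming_trend:.3f} degC per decade")  # 0.030 with this placeholder trend
```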

  5. Constraining the recent increase of the North Atlantic CO2 uptake by bringing together observations and models

    NASA Astrophysics Data System (ADS)

    Lebehot, Alice; Halloran, Paul; Watson, Andrew; McNeall, Doug; Schuster, Ute

    2016-04-01

    The North Atlantic Ocean is one of the strongest sinks for anthropogenic carbon dioxide (CO2) on the planet. To predict the North Atlantic response to the on-going increase in atmospheric CO2, we need to understand, with robust estimates of uncertainty, how it has changed in the recent past. Although the number of sea surface pCO2 observations has increased by about a factor of 5 since 2002, the non-uniform temporal and spatial distribution of these measurements makes it difficult to estimate basin-wide CO2 uptake variability. To fill the observation gaps, and generate basin-wide pCO2 estimates, Multi Linear Regression (MLR) mapping techniques have been used (e.g. Watson et al., 2009). While this approach is powerful, it does not allow one to directly estimate the uncertainty in predictions away from the location of observations. To overcome this challenge we subsample the CMIP5 model data and then predict the full fields using the MLR approach; for the model data we know the 'true' pCO2 and can therefore quantify the error in the prediction. Making the assumption that the CMIP5 models are a set of equally plausible realisations of reality, we use this approach to assign an uncertainty to a new basin-wide estimate of North Atlantic CO2 uptake over the past 20 years. Examining this time-series we find that the real world exhibits a strong increase in CO2 uptake, which is not matched by any of the CMIP5 models.
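    The subsample-and-predict test described above can be illustrated with a toy version: generate a field with known "true" pCO2, sample it sparsely, fit a multiple linear regression against a few predictors, and score the basin-wide reconstruction against the truth. The sketch below uses synthetic data and arbitrary predictors (SST, latitude, time), not CMIP5 output or the study's regression variables.

```python
# Minimal, self-contained sketch (synthetic data, not CMIP5 output) of the
# subsample-and-predict idea: fit a multiple linear regression (MLR) to sparse
# "observed" pCO2 samples and score the reconstruction against the known truth.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
sst = rng.uniform(5.0, 25.0, n)              # hypothetical predictor fields
lat = rng.uniform(30.0, 65.0, n)
year = rng.uniform(0.0, 20.0, n)

# Synthetic "true" basin pCO2 standing in for a model field (uatm).
pco2_true = 280.0 + 1.8 * year + 4.0 * (sst - 15.0) - 0.5 * (lat - 45.0) + rng.normal(0, 5, n)

# Subsample at a small fraction of points, mimicking sparse ship-based coverage.
obs = rng.choice(n, size=200, replace=False)
X = np.column_stack([np.ones(n), sst, lat, year])
coef, *_ = np.linalg.lstsq(X[obs], pco2_true[obs], rcond=None)

pco2_mlr = X @ coef                           # basin-wide MLR reconstruction
rmse = np.sqrt(np.mean((pco2_mlr - pco2_true) ** 2))
print(f"Reconstruction RMSE vs known truth: {rmse:.1f} uatm")
```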

  6. Global scale river network extraction based on high-resolution topography, constrained by lithology, climate, and observed drainage density

    NASA Astrophysics Data System (ADS)

    Schneider, Ana; Ducharne, Agnès; Jost, Anne; Coulon, Cécile; Silvestre, Marie; Théry, Sylvain

    2016-04-01

    To improve the representation of surface and groundwater flows, global land surface models have started to rely on high-resolution parameters, which can provide a more realistic description of the hydrological cycle. To fulfill this demand, several studies have focused on algorithms to produce hydrologically conditioned digital elevation models and corrected flow directions. River pixels are routinely defined as the pixels with sufficient flow accumulation, usually with a unique flow accumulation threshold over the globe. This takes into consideration the first-order control of topography on the river network, its length, and the resulting drainage density, but it overlooks the effects of lithology and local climate. This work proposes a calibration of the flow accumulation threshold based on global lithology and precipitation. We use the high-resolution hydrologically corrected flow directions and flow accumulations from HydroSHEDS, where threshold values are calibrated to match good-quality observed river networks (from national databases in France and the USA), by distinguishing several precipitation and lithological classes. The calibrated thresholds are then used for river network extraction over the globe. All threshold values remain under 5 km², with higher thresholds, corresponding to smaller drainage density, in areas with carbonate and unconsolidated sediments and/or low precipitation. The results are presented at 15 arc-seconds resolution (~ 500 m), with global river network and drainage density information. All drainage density results remain in the same order of magnitude as the observations, with an error under 1%. Independent validation will be presented against observed river networks from Brazil. The resulting thresholds provide a tool to extract a more realistic river network from any digital elevation model, and the global river network presented here can be incorporated into land surface and climate models.
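    The extraction step itself reduces to a per-pixel comparison of flow accumulation against a class-dependent threshold. The sketch below illustrates that rule only; the lithology/precipitation class names and threshold values are hypothetical placeholders, not the calibrated values of this work.

```python
# Hedged sketch of the extraction rule described above: a pixel becomes a river
# pixel when its flow accumulation (km^2) exceeds a threshold chosen by its
# lithology and precipitation class. Class names and threshold values are
# illustrative placeholders, not the calibrated values of the study.
import numpy as np

# Calibrated thresholds (km^2) per (lithology, precipitation) class -- hypothetical.
thresholds = {
    ("carbonate", "dry"): 4.5,
    ("carbonate", "wet"): 2.0,
    ("crystalline", "dry"): 1.5,
    ("crystalline", "wet"): 0.8,
}

def extract_rivers(flow_acc_km2, litho_class, precip_class):
    """Boolean river mask from flow accumulation and per-pixel class maps."""
    thr = np.vectorize(lambda l, p: thresholds[(l, p)])(litho_class, precip_class)
    return flow_acc_km2 >= thr

# Tiny 2x2 example grid.
flow_acc = np.array([[0.5, 5.0], [1.5, 1.0]])
litho = np.array([["crystalline", "carbonate"], ["carbonate", "crystalline"]])
precip = np.array([["wet", "dry"], ["wet", "wet"]])
print(extract_rivers(flow_acc, litho, precip))
```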

  7. A Synthesized Model-Observation Approach to Constraining Gross Urban CO2 Fluxes Using 14CO2 and carbonyl sulfide

    NASA Astrophysics Data System (ADS)

    LaFranchi, B. W.; Campbell, J. E.; Cameron-Smith, P. J.; Bambha, R.; Michelsen, H. A.

    2013-12-01

    Urbanized regions are responsible for a disproportionately large percentage (30-40%) of global anthropogenic greenhouse gas (GHG) emissions, despite covering only 2% of the Earth's surface area [Satterthwaite, 2008]. As a result, policies enacted at the local level in these urban areas can, in aggregate, have a large global impact, both positive and negative. In order to address the scientific questions that are required to drive these policy decisions, methods are needed that resolve gross CO2 flux components from the net flux. Recent work suggests that the critical knowledge gaps in CO2 surface fluxes could be addressed through the combined analysis of atmospheric carbonyl sulfide (COS) and radiocarbon in atmospheric CO2 (14CO2) [e.g. Campbell et al., 2008; Graven et al., 2009]. The 14CO2 approach relies on mass balance assumptions about atmospheric CO2 and the large differences in 14CO2 abundance between fossil and natural sources of CO2 [Levin et al., 2003]. COS, meanwhile, is a potentially transformative tracer of photosynthesis because its variability in the atmosphere has been found to be influenced primarily by vegetative uptake, scaling linearly with gross primary production (GPP) [Kettle et al., 2002]. Taken together, these two observations provide constraints on two of the three main components of the CO2 budget at the urban scale: photosynthesis and fossil fuel emissions. The third component, respiration, can then be determined by difference if the net flux is known. Here we present a general overview of our synthesized model-observation approach for improving surface flux estimates of CO2 for the upwind fetch of a ~30 m tower located in Livermore, CA, USA, a suburb (pop. ~80,000) at the eastern edge of the San Francisco Bay Area. Additionally, we will present initial results from a one week observational intensive, which includes continuous CO2, CH4, CO, SO2, NOx, and O3 observations in addition to measurements of 14CO2 and COS from air samples
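    The two tracer constraints described above can be written as a small budget: the 14C mass balance isolates the fossil-fuel CO2 enhancement (fossil carbon carries no 14C, i.e. Δ14C = -1000 permil), and respiration follows by difference once the net enhancement and a COS-derived photosynthetic uptake are in hand. The sketch below is a hedged illustration of that bookkeeping with invented numbers, not campaign data or the project's actual inversion.

```python
# Hedged sketch (illustrative numbers, not campaign data) of the two tracer
# constraints described above. Fossil-fuel CO2 is isolated via the 14C mass
# balance, assuming fossil carbon has Delta14C = -1000 permil; respiration then
# follows by difference once a COS-based photosynthetic uptake estimate is given.

def fossil_co2(co2_obs_ppm, d14c_obs, d14c_bg):
    """Fossil-fuel CO2 enhancement (ppm) from the standard 14C mass balance."""
    return co2_obs_ppm * (d14c_bg - d14c_obs) / (d14c_bg + 1000.0)

co2_obs, co2_bg = 415.0, 400.0          # observed and background CO2 (ppm), hypothetical
d14c_obs, d14c_bg = 20.0, 45.0          # Delta14C of observed and background air (permil), hypothetical
gpp_uptake = 6.0                        # photosynthetic drawdown inferred from COS (ppm), hypothetical

c_ff = fossil_co2(co2_obs, d14c_obs, d14c_bg)
c_resp = (co2_obs - co2_bg) - c_ff + gpp_uptake   # respiration closes the budget
print(f"fossil CO2 ~ {c_ff:.1f} ppm, respiration ~ {c_resp:.1f} ppm")
```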

  8. Constraining magnetic-activity modulations in three solar-like stars observed by CoRoT and NARVAL

    NASA Astrophysics Data System (ADS)

    Mathur, S.; García, R. A.; Morgenthaler, A.; Salabert, D.; Petit, P.; Ballot, J.; Régulo, C.; Catala, C.

    2013-02-01

    Context. Stellar activity cycles are the manifestation of dynamo processes operating in stellar interiors. They have been observed over years to decades thanks to the measurement of stellar magnetic proxies on the surfaces of stars, such as the chromospheric and X-ray emissions, and to the measurement of the magnetic field with spectropolarimetry. However, all of these measurements rely on external features that would not be visible during, for example, a Maunder-type minimum. With the advent of long observations provided by space asteroseismic missions, it has become possible to probe the interiors of stars and study their properties. Moreover, the acoustic-mode properties are also perturbed by the presence of these dynamos. Aims: We track the temporal variations of the amplitudes and frequencies of acoustic modes, allowing us to search for signatures of magnetic activity cycles, as has already been done for the Sun and for the CoRoT target HD 49933. Methods: We used asteroseismic tools and more classical spectroscopic measurements performed with the NARVAL spectropolarimeter to search for hints of an activity cycle in three solar-like stars observed continuously for more than 117 days by the CoRoT satellite: HD 49385, HD 181420, and HD 52265. To claim a hint of magnetic activity in a star we require a change in the amplitude of the p modes that is anti-correlated with a change in their frequency shifts, as well as a change in the spectroscopic observations in the same direction as the asteroseismic data. Results: Our analysis yields only very small variations in the seismic parameters, preventing us from detecting any magnetic modulation. However, we are able to place a lower limit on the period of any magnetic-activity change in the three stars, which should be longer than 120 days, the length of the time series. Moreover, we computed upper limits for the line-of-sight magnetic field component of 1, 3, and 0.6 G for HD 49385, HD 181420

  9. Observationally constrained modeling of sound in curved ocean internal waves: examination of deep ducting and surface ducting at short range.

    PubMed

    Duda, Timothy F; Lin, Ying-Tsong; Reeder, D Benjamin

    2011-09-01

    A study of 400 Hz sound focusing and ducting effects in a packet of curved nonlinear internal waves in shallow water is presented. Sound propagation roughly along the crests of the waves is simulated with a three-dimensional parabolic equation computational code, and the results are compared to measured propagation along fixed 3 and 6 km source/receiver paths. The measurements were made on the shelf of the South China Sea northeast of Tung-Sha Island. Construction of the time-varying three-dimensional sound-speed fields used in the modeling simulations was guided by environmental data collected concurrently with the acoustic data. Computed three-dimensional propagation results compare well with field observations. The simulations allow identification of time-dependent sound forward scattering and ducting processes within the curved internal gravity waves. Strong acoustic intensity enhancement was observed during passage of high-amplitude nonlinear waves over the source/receiver paths, and is replicated in the model. The waves were typical of the region (35 m vertical displacement). Two types of ducting are found in the model, which occur asynchronously. One type is three-dimensional modal trapping in deep ducts within the wave crests (shallow thermocline zones). The second type is surface ducting within the wave troughs (deep thermocline zones).

  10. Weak ductile shear zone beneath the western North Anatolian Fault Zone: inferences from earthquake cycle model constrained by geodetic observations

    NASA Astrophysics Data System (ADS)

    Yamasaki, T.; Wright, T. J.; Houseman, G. A.

    2013-12-01

    After large earthquakes, rapid postseismic transient motions are commonly observed. Later in the loading cycle, strain is typically focused in narrow regions around the fault. In simple two-layer models of the loading cycle for strike-slip faults, rapid post-seismic transients require low viscosities beneath the elastic layer, but localized strain later in the cycle implies high viscosities in the crust. To explain this apparent paradox, complex transient rheologies have been invoked. Here we test an alternative hypothesis in which spatial variations in material properties of the crust can explain the geodetic observations. We use a 3D viscoelastic finite element code to examine two simple models of periodic fault slip: a stratified model in which crustal viscosity decreases exponentially with depth below an upper elastic layer, and a block model in which a low viscosity domain centered beneath the fault is embedded in a higher viscosity background representing normal crust. We test these models using GPS data acquired before and after the 1999 Izmit/Duzce earthquakes on the North Anatolian Fault Zone (Turkey). The model with depth-dependent viscosity can show both high postseismic velocities, and preseismic localization of the deformation, if the viscosity contrast from top to bottom of layer exceeds a factor of about 10^4. However, with no lateral variations in viscosity, this model cannot explain the proximity to the fault of maximum postseismic velocities. In contrast, the model which includes a localized weak zone beneath the faulted elastic lid can explain all the observations, if the weak zone extends down to mid-crustal levels and outward to 10 or 20 km from the fault. The non-dimensional ratio of relaxation time to earthquake repeat time, τ/Δt, is the critical parameter in controlling the observed deformation. In the weak-zone model, τ/Δt should be in the range 0.005 to 0.01 in the weak domain, and larger than ~ 1.0 elsewhere. This implies a viscosity

  11. Magnetotelluric observations over the Rhine Graben, France: a simple impedance tensor analysis helps constrain the dominant electrical features

    NASA Astrophysics Data System (ADS)

    Mareschal, M.; Jouanne, V.; Menvielle, M.; Chouteau, M.; Grandis, H.; Tarits, P.

    1992-12-01

    A simple impedance tensor analysis of four magnetotelluric soundings recorded over the ECORS section of the Rhine Graben shows that for periods shorter than about 30 s, induction dominates over channelling. For longer periods, 2-D induction galvanically distorted by surface heterogeneities and/or current channelled in the Graben can explain the observations; the role of channelling becomes dominant at periods of the order of a few hundred seconds. In the area considered, induction appears to be controlled by inclusions of saline water in a porous limestone layer (Grande Oolithe) and not by the limits of the Graben with its crystalline shoulder (Vosges). The simple analysis is supported by tipper analyses and by the results of schematic 2-D modelling.

  12. The evolution of the diffuse cosmic ultraviolet background constrained by the Hubble Space Telescope observations of 3C 273

    NASA Technical Reports Server (NTRS)

    Ikeuchi, Satoru; Turner, Edwin L.

    1991-01-01

    Results are presented of recent HST UV spectroscopy of 3C 273, which revealed more low-redshift Lyman-alpha absorption lines (IGM clouds) than expected from the extrapolation of high-redshift (z ≥ 1.6) observations. It is shown, on the basis of the standard pressure-confined cloud model of the Lyman-alpha forest, that this result indicates a sharp drop in the diffuse cosmic UV background from redshift 2 to redshift 0. It is predicted that the H I optical depth will drop slowly, or perhaps even increase, with decreasing redshift at redshifts less than 2. The implied constraints on the density and pressure of the diffuse IGM at redshift 0 are also derived. The inferred evolution of the diffuse UV flux bears a striking resemblance to the most recent direct determinations of the volume emissivity of the quasar population.

  13. The Power of Imaging: Constraining the Plasma Properties of GRMHD Simulations using EHT Observations of Sgr A*

    NASA Astrophysics Data System (ADS)

    Chan, Chi-Kwan; Psaltis, Dimitrios; Özel, Feryal; Narayan, Ramesh; Saḑowski, Aleksander

    2015-01-01

    Recent advances in general relativistic magnetohydrodynamic simulations have expanded and improved our understanding of the dynamics of black-hole accretion disks. However, current simulations do not capture the thermodynamics of electrons in the low density accreting plasma. This poses a significant challenge in predicting accretion flow images and spectra from first principles. Because of this, simplified emission models have often been used, with widely different configurations (e.g., disk- versus jet-dominated emission), and were able to account for the observed spectral properties of accreting black holes. Exploring the large parameter space introduced by such models, however, requires significant computational power that exceeds conventional computational facilities. In this paper, we use GRay, a fast graphics processing unit (GPU) based ray-tracing algorithm, on the GPU cluster El Gato, to compute images and spectra for a set of six general relativistic magnetohydrodynamic simulations with different magnetic field configurations and black-hole spins. We also employ two different parametric models for the plasma thermodynamics in each of the simulations. We show that, if only the spectral properties of Sgr A* are used, all 12 models tested here can fit the spectra equally well. However, when combined with the measurement of the image size of the emission using the Event Horizon Telescope, current observations rule out all models with strong funnel emission, because the funnels are typically very extended. Our study shows that images of accretion flows with horizon-scale resolution offer a powerful tool in understanding accretion flows around black holes and their thermodynamic properties.

  14. THE POWER OF IMAGING: CONSTRAINING THE PLASMA PROPERTIES OF GRMHD SIMULATIONS USING EHT OBSERVATIONS OF Sgr A*

    SciTech Connect

    Chan, Chi-Kwan; Psaltis, Dimitrios; Özel, Feryal; Narayan, Ramesh; Sadowski, Aleksander

    2015-01-20

    Recent advances in general relativistic magnetohydrodynamic simulations have expanded and improved our understanding of the dynamics of black-hole accretion disks. However, current simulations do not capture the thermodynamics of electrons in the low density accreting plasma. This poses a significant challenge in predicting accretion flow images and spectra from first principles. Because of this, simplified emission models have often been used, with widely different configurations (e.g., disk- versus jet-dominated emission), and were able to account for the observed spectral properties of accreting black holes. Exploring the large parameter space introduced by such models, however, requires significant computational power that exceeds conventional computational facilities. In this paper, we use GRay, a fast graphics processing unit (GPU) based ray-tracing algorithm, on the GPU cluster El Gato, to compute images and spectra for a set of six general relativistic magnetohydrodynamic simulations with different magnetic field configurations and black-hole spins. We also employ two different parametric models for the plasma thermodynamics in each of the simulations. We show that, if only the spectral properties of Sgr A* are used, all 12 models tested here can fit the spectra equally well. However, when combined with the measurement of the image size of the emission using the Event Horizon Telescope, current observations rule out all models with strong funnel emission, because the funnels are typically very extended. Our study shows that images of accretion flows with horizon-scale resolution offer a powerful tool in understanding accretion flows around black holes and their thermodynamic properties.

  15. The Energy Spectrum of Atmospheric Neutrinos between 2 and 200 TeV with the AMANDA-II Detector

    SciTech Connect

    IceCube Collaboration; Abbasi, R.

    2010-05-11

    The muon and anti-muon neutrino energy spectrum is determined from 2000-2003 AMANDA telescope data using regularised unfolding. This is the first measurement of atmospheric neutrinos in the energy range 2-200 TeV. The result is compared to different atmospheric neutrino models and it is compatible with the atmospheric neutrinos from pion and kaon decays. No significant contribution from charm hadron decays or extraterrestrial neutrinos is detected. The capabilities to improve the measurement of the neutrino spectrum with the successor experiment IceCube are discussed.

  16. Analysis of long-term observations of NOx and CO in megacities and application to constraining emissions inventories

    NASA Astrophysics Data System (ADS)

    Hassler, Birgit; McDonald, Brian C.; Frost, Gregory J.; Borbon, Agnes; Carslaw, David C.; Civerolo, Kevin; Granier, Claire; Monks, Paul S.; Monks, Sarah; Parrish, David D.; Pollack, Ilana B.; Rosenlof, Karen H.; Ryerson, Thomas B.; Schneidemesser, Erika; Trainer, Michael

    2016-09-01

    Long-term atmospheric NOx/CO enhancement ratios in megacities provide evaluations of emission inventories. A fuel-based emission inventory approach that diverges from conventional bottom-up inventory methods explains 1970-2015 trends in NOx/CO enhancement ratios in Los Angeles. Combining this comparison with similar measurements in other U.S. cities demonstrates that motor vehicle emissions controls were largely responsible for U.S. urban NOx/CO trends over the past half century. Differing NOx/CO enhancement ratio trends in U.S. and European cities over the past 25 years highlight alternative strategies for mitigating transportation emissions, reflecting Europe's increased use of light-duty diesel vehicles and correspondingly slower decreases in NOx emissions compared to the U.S. A global inventory widely used by global chemistry models fails to capture these long-term trends and regional differences in U.S. and European megacity NOx/CO enhancement ratios, possibly contributing to these models' inability to accurately reproduce observed long-term changes in tropospheric ozone.
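    An enhancement ratio of the kind analyzed above is commonly estimated as the regression slope of background-subtracted NOx against background-subtracted CO. The sketch below shows that calculation on synthetic data; the background values and the underlying slope are placeholders, not measurements from any of the cities discussed.

```python
# Hedged sketch: estimate an urban NOx/CO enhancement ratio as the regression
# slope of background-subtracted NOx on background-subtracted CO. Data are
# synthetic; the "true" slope of 0.08 ppb/ppb below is only for illustration.
import numpy as np

rng = np.random.default_rng(1)
co_bg, nox_bg = 120.0, 0.5                   # assumed background mixing ratios (ppb)
co = co_bg + rng.exponential(80.0, 500)      # synthetic urban CO observations
nox = nox_bg + 0.08 * (co - co_bg) + rng.normal(0.0, 1.0, 500)

dco, dnox = co - co_bg, nox - nox_bg
slope = np.polyfit(dco, dnox, 1)[0]          # enhancement ratio (ppb NOx per ppb CO)
print(f"NOx/CO enhancement ratio ~ {slope:.3f}")
```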

  17. Modelled Black Carbon Radiative Forcing and Atmospheric Lifetime in AeroCom Phase II Constrained by Aircraft Observations

    SciTech Connect

    Samset, B. H.; Myhre, G.; Herber, Andreas; Kondo, Yutaka; Li, Shao-Meng; Moteki, N.; Koike, Makoto; Oshima, N.; Schwarz, Joshua P.; Balkanski, Y.; Bauer, S.; Bellouin, N.; Berntsen, T.; Bian, Huisheng; Chin, M.; Diehl, Thomas; Easter, Richard C.; Ghan, Steven J.; Iversen, T.; Kirkevag, A.; Lamarque, Jean-Francois; Lin, Guang; Liu, Xiaohong; Penner, Joyce E.; Schulz, M.; Seland, O.; Skeie, R. B.; Stier, P.; Takemura, T.; Tsigaridis, Kostas; Zhang, Kai

    2014-11-27

    Black carbon (BC) aerosols absorb solar radiation and are generally held to exacerbate global warming by exerting a positive radiative forcing [1]. However, the total contribution of BC to the ongoing changes in global climate is presently under debate [2-8]. Both anthropogenic BC emissions and the resulting spatial and temporal distribution of BC concentration are highly uncertain [2, 9]. In particular, long-range transport and the processes affecting BC atmospheric lifetime are poorly understood, leading to large estimated uncertainty in BC concentration at high altitudes and far from emission sources [10]. These uncertainties limit our ability to quantify the historical, present, and future anthropogenic climate impact of BC. Here we compare vertical profiles of BC concentration from four recent aircraft measurement campaigns with 13 state-of-the-art aerosol models, and show that recent assessments may have overestimated present-day BC radiative forcing. Further, an atmospheric lifetime of BC of less than 5 days is shown to be essential for reproducing observations in transport-dominated remote regions. Adjusting model results to measurements in remote regions, and at high altitudes, leads to a 25% reduction in the multi-model median direct BC forcing from fossil fuel and biofuel burning over the industrial era.

  18. Postglacial Rebound Model ICE-6G_C (VM5a) Constrained by Geodetic and Geologic Observations

    NASA Astrophysics Data System (ADS)

    Peltier, W. R.; Argus, D. F.; Drummond, R.

    2014-12-01

    We fit the revised global model of glacial isostatic adjustment ICE-6G_C (VM5a) to all available data, consisting of several hundred GPS uplift rates, a similar number of 14C-dated relative sea level histories, and 62 geologic estimates of changes in Antarctic ice thickness. The mantle viscosity profile, VM5a, is a simple multi-layer fit to the prior model VM2 of Peltier (1996, Science). However, the revised deglaciation history, ICE-6G (VM5a), differs significantly from previous models in the Toronto series. (1) In North America, GPS observations of vertical uplift of Earth's surface from the Canadian Base Network require the thickness of the Laurentide ice sheet at Last Glacial Maximum to be significantly revised. At Last Glacial Maximum the new model ICE-6G_C in this region is, relative to ICE-5G, roughly 50 percent thicker east of Hudson Bay (in the northern Quebec and Labrador region) and roughly 30 percent thinner west of Hudson Bay (in Manitoba, Saskatchewan, and the Northwest Territories); the net change in mass, however, is small. We find that rates of gravity change determined by GRACE, when corrected for the predictions of ICE-6G_C (VM5a), are significantly smaller than residuals determined on the basis of earlier models. (2) In Antarctica, we fit GPS uplift rates, geologic estimates of changes in ice thickness, and geologic constraints on the timing of ice loss. The resulting deglaciation history also differs significantly from prior models. The contribution of Antarctic ice loss to global sea level rise since Last Glacial Maximum in ICE-6G_C is 13.6 meters, less than in ICE-5G (17.5 m), but significantly larger than in both the W12A model of Whitehouse et al. [2012] (8 m) and the IJ05 R02 model of Ivins et al. [2013] (7.5 m). In ICE-6G_C rapid ice loss occurs in Antarctica from 11.5 to 8 thousand years ago, with a rapid onset at 11.5 ka, thereby contributing significantly to Meltwater Pulse 1B. In ICE-6G_C (VM5a), viscous uplift of Antarctica is increasing

  19. CONSTRAINING THE OUTBURST PROPERTIES OF THE SMBH IN FORNAX A THROUGH X-RAY, INFRARED, AND RADIO OBSERVATIONS

    SciTech Connect

    Lanz, Lauranne; Jones, Christine; Forman, William R.; Ashby, Matthew L. N.; Kraft, Ralph; Hickox, Ryan C.

    2010-10-01

    Combined Spitzer, Chandra, XMM-Newton, and VLA observations of the giant radio galaxy NGC 1316 (Fornax A) show a radio jet and X-ray cavities from active galactic nucleus (AGN) outbursts most likely triggered by a merger with a late-type galaxy at least 0.4 Gyr ago. We detect a weak nucleus with a spectral energy distribution typical of a low-luminosity AGN with a bolometric luminosity of 2.4 x 10^42 erg s^-1. We examine the Spitzer IRAC and MIPS images of NGC 1316. We find that the dust emission is strongest in regions with little or no radio emission and that the particularly large infrared luminosity relative to the galaxy's K-band luminosity implies an external origin for the dust. The inferred dust mass implies that the merger spiral galaxy had a stellar mass of (1-6) x 10^10 M_sun and a gas mass of (2-4) x 10^9 M_sun. X-ray cavities in the Chandra and XMM-Newton images likely result from the expansion of relativistic plasma ejected by the AGN. The soft (0.5-2.0 keV) Chandra images show a small ~15'' (1.6 kpc) cavity coincident with the radio jet, while the XMM-Newton image shows two large X-ray cavities lying 320'' (34.8 kpc) east and west of the nucleus, each approximately 230'' (25 kpc) in radius. Current radio observations do not show emission within these cavities. The radio lobes lie at radii of 14.3' (93.3 kpc) and 15.6' (101 kpc), more distant from the nucleus than the detected X-ray cavities. The relative morphology of the large-scale 1.4 GHz and X-ray emission suggests they were products of two distinct outbursts, an earlier one creating the radio lobes and a later one producing the X-ray cavities. Alternatively, if a single outburst created both the X-ray cavities and the radio lobes, this would require that the radio morphology is not fully defined by the 1.4 GHz emission. For the more likely two-outburst scenario, we use the buoyancy rise times to estimate an age for the more recent outburst that created the X

  20. Constraining the Properties of the Eta Carinae System via 3-D SPH Models of Space-Based Observations: The Absolute Orientation of the Binary Orbit

    NASA Technical Reports Server (NTRS)

    Madura, Thomas I.; Gull, Theodore R.; Owocki, Stanley P.; Okazaki, Atsuo T.; Russell, Christopher M. P.

    2010-01-01

    The extremely massive (> 90 solar masses) and luminous (= 5 x 10^6 solar luminosities) star Eta Carinae, with its spectacular bipolar "Homunculus" nebula, comprises one of the most remarkable and intensely observed stellar systems in the galaxy. However, many of its underlying physical parameters remain a mystery. Multiwavelength variations observed to occur every 5.54 years are interpreted as being due to the collision of a massive wind from the primary star with the fast, less dense wind of a hot companion star in a highly elliptical (e approx. 0.9) orbit. Using three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) simulations of the binary wind-wind collision in Eta Car, together with radiative transfer codes, we compute synthetic spectral images of [Fe III] emission line structures and compare them to existing Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS) observations. We are thus able, for the first time, to constrain the absolute orientation of the binary orbit on the sky. An orbit with an inclination of i approx. 40deg, an argument of periapsis omega approx. 255deg, and a projected orbital axis with a position angle of approx. 312deg east of north provides the best fit to the observations, implying that the orbital axis is closely aligned in 3-D space with the Homunculus symmetry axis, and that the companion star orbits clockwise on the sky relative to the primary.

  1. Constraining the Properties of the Eta Carinae System via 3-D SPH Models of Space-Based Observations: The Absolute Orientation of the Binary Orbit

    NASA Technical Reports Server (NTRS)

    Madura, Thomas I.; Gull, Theodore R.; Owocki, Stanley P.; Okazaki, Atsuo T.; Russell, Christopher M. P.

    2011-01-01

    The extremely massive (> 90 solar masses) and luminous (= 5 x 10^6 solar luminosities) star Eta Carinae, with its spectacular bipolar "Homunculus" nebula, comprises one of the most remarkable and intensely observed stellar systems in the Galaxy. However, many of its underlying physical parameters remain unknown. Multiwavelength variations observed to occur every 5.54 years are interpreted as being due to the collision of a massive wind from the primary star with the fast, less dense wind of a hot companion star in a highly elliptical (e approx. 0.9) orbit. Using three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) simulations of the binary wind-wind collision, together with radiative transfer codes, we compute synthetic spectral images of [Fe III] emission line structures and compare them to existing Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS) observations. We are thus able, for the first time, to tightly constrain the absolute orientation of the binary orbit on the sky. An orbit with an inclination of approx. 40deg, an argument of periapsis omega approx. 255deg, and a projected orbital axis with a position angle of approx. 312deg east of north provides the best fit to the observations, implying that the orbital axis is closely aligned in 3-D space with the Homunculus symmetry axis, and that the companion star orbits clockwise on the sky relative to the primary.

  2. 3D thermo-mechanical model of the orogeny in Pamir constrained by geological and geophysical observations

    NASA Astrophysics Data System (ADS)

    Sobolev, S. V.; Tympel, J.; Ratschbacher, L.

    2015-12-01

    geological observations. The model also replicates the evolution of surface topography, including the collapse of the high Pamir Plateau in the N-S and E-W directions, resulting in the exhumation of gneiss domes. We demonstrate that extensive westward outflow of material and the relatively small initial width of the Pamir are the key factors that controlled its evolution.

  3. Joint inversions of three types of electromagnetic data explicitly constrained by seismic observations: results from the central Okavango Delta, Botswana

    NASA Astrophysics Data System (ADS)

    Kalscheuer, Thomas; Blake, Sarah; Podgorski, Joel E.; Wagner, Frederic; Green, Alan G.; Maurer, Hansruedi; Jones, Alan G.; Muller, Mark; Ntibinyane, Ongkopotse; Tshoso, Gomotsang

    2015-09-01

    The Okavango Delta of northern Botswana is one of the world's largest inland deltas or megafans. To obtain information on the character of sediments and basement depths, audiomagnetotelluric (AMT), controlled-source audiomagnetotelluric (CSAMT) and central-loop transient electromagnetic (TEM) data were collected on the largest island within the delta. The data were inverted individually and jointly for 1-D models of electric resistivity. Distortion effects in the AMT and CSAMT data were accounted for by including galvanic distortion tensors as free parameters in the inversions. By employing Marquardt-Levenberg inversion, we found that a 3-layer model comprising a resistive layer overlying sequentially a conductive layer and a deeper resistive layer was sufficient to explain all of the electromagnetic data. However, the top of the basal resistive layer from electromagnetic-only inversions was much shallower than the well-determined basement depth observed in high-quality seismic reflection images and seismic refraction velocity tomograms. To resolve this discrepancy, we jointly inverted the electromagnetic data for 4-layer models by including seismic depths to an interface between sedimentary units and to basement as explicit a priori constraints. We have also estimated the interconnected porosities, clay contents and pore-fluid resistivities of the sedimentary units from their electrical resistivities and seismic P-wave velocities using appropriate petrophysical models. In the interpretation of our preferred model, a shallow ˜40 m thick freshwater sandy aquifer with 85-100 Ωm resistivity, 10-32 per cent interconnected porosity and <13 per cent clay content overlies a 105-115 m thick conductive sequence of clay and intercalated salt-water-saturated sands with 15-20 Ωm total resistivity, 1-27 per cent interconnected porosity and 15-60 per cent clay content. A third ˜60 m thick sandy layer with 40-50 Ωm resistivity, 10-33 per cent interconnected porosity and <15
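    As one hedged illustration of how resistivities can be turned into porosity estimates of the kind quoted above, the sketch below inverts a simple Archie-type relation for a clean, fully saturated sand; the exponents and the assumed pore-water resistivity are placeholders, and the study's actual petrophysical models (which also handle clay content and P-wave velocity) are more elaborate.

```python
# Hedged sketch, not the study's petrophysical model: invert a simple Archie
# relation rho_bulk = a * rho_w * phi**(-m) for interconnected porosity, given
# an assumed pore-water resistivity. Clay effects (important in the conductive
# middle layer above) are ignored here, so this is only indicative.

def archie_porosity(rho_bulk, rho_water, a=1.0, m=2.0):
    """Porosity from bulk and pore-water resistivity (ohm-m), fully saturated."""
    return (a * rho_water / rho_bulk) ** (1.0 / m)

# Illustrative freshwater aquifer layer: 90 ohm-m bulk, 5 ohm-m pore water (assumed).
phi = archie_porosity(rho_bulk=90.0, rho_water=5.0)
print(f"Estimated interconnected porosity ~ {phi:.0%}")
```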

  4. Constraining the Lyα escape fraction with far-infrared observations of Lyα emitters

    SciTech Connect

    Wardlow, Julie L.; Calanog, J.; Cooray, A.; Malhotra, S.; Zheng, Z.; Rhoads, J.; Finkelstein, S.; Bock, J.; Bridge, C.; Ciardullo, R.; Gronwall, C.; Conley, A.; Farrah, D.; Gawiser, E.; Heinis, S.; Ibar, E.; Ivison, R. J.; Marsden, G.; Oliver, S. J.; Riechers, D.; and others

    2014-05-20

    We study the far-infrared properties of 498 Lyα emitters (LAEs) at z = 2.8, 3.1, and 4.5 in the Extended Chandra Deep Field-South, using 250, 350, and 500 μm data from the Herschel Multi-tiered Extragalactic Survey and 870 μm data from the LABOCA ECDFS Submillimeter Survey. None of the 126, 280, or 92 LAEs at z = 2.8, 3.1, and 4.5, respectively, are individually detected in the far-infrared data. We use stacking to probe the average emission to deeper flux limits, reaching 1σ depths of ∼0.1 to 0.4 mJy. The LAEs are also undetected at ≥3σ in the stacks, although a 2.5σ signal is observed at 870 μm for the z = 2.8 sources. We consider a wide range of far-infrared spectral energy distributions (SEDs), including an M82 and an Sd galaxy template, to determine upper limits on the far-infrared luminosities and far-infrared-derived star formation rates of the LAEs. These star formation rates are then combined with those inferred from the Lyα and UV emission to determine lower limits on the LAEs' Lyα escape fraction (f_esc(Lyα)). For the Sd SED template, the inferred LAE f_esc(Lyα) values are ≳ 30% (1σ) at z = 2.8, 3.1, and 4.5, which are all significantly higher than the global f_esc(Lyα) at these redshifts. Thus, if the LAE f_esc(Lyα) follows the global evolution, then they have warmer far-infrared SEDs than the Sd galaxy template. The average and M82 SEDs produce lower limits on the LAE f_esc(Lyα) of ∼10%-20% (1σ), all of which are slightly higher than the global evolution of f_esc(Lyα), but consistent with it at the 2σ-3σ level.
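    The escape-fraction bookkeeping is a ratio of star formation rates: the SFR implied by the observed Lyα line over the total SFR from UV plus far-infrared, which becomes a lower limit when only an IR upper limit is available. The sketch below uses standard Kennicutt (1998)-style calibrations and a Case B Lyα/Hα ratio of 8.7 as stand-ins; the paper's exact calibrations and SED templates may differ, and the luminosities are illustrative placeholders.

```python
# Hedged sketch of the escape-fraction bookkeeping: f_esc(Lya) is the star
# formation rate implied by the observed Lya line divided by the total SFR
# (UV + IR). Calibrations below are standard Kennicutt (1998)-style factors and
# Case B Lya/Halpha = 8.7; the paper's exact calibrations and SEDs may differ,
# and all luminosities are illustrative placeholders.

LYA_TO_HA = 8.7                  # Case B intrinsic Lya/Halpha ratio
SFR_PER_LHA = 7.9e-42            # Msun/yr per (erg/s) of Halpha
SFR_PER_LUV = 1.4e-28            # Msun/yr per (erg/s/Hz) of rest-frame UV
SFR_PER_LIR = 4.5e-44            # Msun/yr per (erg/s) of 8-1000 um IR

def f_esc_lya(l_lya, l_uv_nu, l_ir_upper):
    sfr_lya = SFR_PER_LHA * (l_lya / LYA_TO_HA)        # SFR implied by observed Lya
    sfr_tot = SFR_PER_LUV * l_uv_nu + SFR_PER_LIR * l_ir_upper
    return sfr_lya / sfr_tot                           # lower limit if l_ir is an upper limit

print(f"f_esc(Lya) >~ {f_esc_lya(5.0e42, 1.0e28, 3.0e44):.2f}")
```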

  5. Thermal-based modeling of coupled carbon, water and energy fluxes using nominal light use efficiencies constrained by leaf chlorophyll observations

    NASA Astrophysics Data System (ADS)

    Schull, M. A.; Anderson, M. C.; Houborg, R.; Gitelson, A.; Kustas, W. P.

    2014-10-01

    Recent studies have shown that estimates of leaf chlorophyll content (Chl), defined as the combined mass of chlorophyll a and chlorophyll b per unit leaf area, can be useful for constraining estimates of canopy light-use-efficiency (LUE). Canopy LUE describes the amount of carbon assimilated by a vegetative canopy for a given amount of Absorbed Photosynthetically Active Radiation (APAR) and is a key parameter for modeling land-surface carbon fluxes. A carbon-enabled version of the remote sensing-based Two-Source Energy Balance (TSEB) model simulates coupled canopy transpiration and carbon assimilation using an analytical sub-model of canopy resistance constrained by inputs of nominal LUE (βn), which is modulated within the model in response to varying conditions in light, humidity, ambient CO2 concentration and temperature. Soil moisture constraints on water and carbon exchange are conveyed to the TSEB-LUE indirectly through thermal infrared measurements of land-surface temperature. We investigate the capability of using Chl estimates for capturing seasonal trends in the canopy βn from in situ measurements of Chl acquired in irrigated and rain-fed fields of soybean and maize near Mead, Nebraska. The results show that field-measured Chl is non-linearly related to βn, with variability primarily related to phenological changes during early growth and senescence. Utilizing seasonally varying βn inputs based on an empirical relationship with in-situ measured Chl resulted in improvements in carbon flux estimates from the TSEB model, while adjusting the partitioning of total water loss between plant transpiration and soil evaporation. The observed Chl-βn relationship provides a functional mechanism for integrating remotely sensed Chl into the TSEB model, with the potential for improved mapping of coupled carbon, water, and energy fluxes across vegetated landscapes.

  6. Thermal-based modeling of coupled carbon, water, and energy fluxes using nominal light use efficiencies constrained by leaf chlorophyll observations

    NASA Astrophysics Data System (ADS)

    Schull, M. A.; Anderson, M. C.; Houborg, R.; Gitelson, A.; Kustas, W. P.

    2015-03-01

    Recent studies have shown that estimates of leaf chlorophyll content (Chl), defined as the combined mass of chlorophyll a and chlorophyll b per unit leaf area, can be useful for constraining estimates of canopy light use efficiency (LUE). Canopy LUE describes the amount of carbon assimilated by a vegetative canopy for a given amount of absorbed photosynthetically active radiation (APAR) and is a key parameter for modeling land-surface carbon fluxes. A carbon-enabled version of the remote-sensing-based two-source energy balance (TSEB) model simulates coupled canopy transpiration and carbon assimilation using an analytical sub-model of canopy resistance constrained by inputs of nominal LUE (βn), which is modulated within the model in response to varying conditions in light, humidity, ambient CO2 concentration, and temperature. Soil moisture constraints on water and carbon exchange are conveyed to the TSEB-LUE indirectly through thermal infrared measurements of land-surface temperature. We investigate the capability of using Chl estimates for capturing seasonal trends in the canopy βn from in situ measurements of Chl acquired in irrigated and rain-fed fields of soybean and maize near Mead, Nebraska. The results show that field-measured Chl is nonlinearly related to βn, with variability primarily related to phenological changes during early growth and senescence. Utilizing seasonally varying βn inputs based on an empirical relationship with in situ measured Chl resulted in improvements in carbon flux estimates from the TSEB model, while adjusting the partitioning of total water loss between plant transpiration and soil evaporation. The observed Chl-βn relationship provides a functional mechanism for integrating remotely sensed Chl into the TSEB model, with the potential for improved mapping of coupled carbon, water, and energy fluxes across vegetated landscapes.
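    The carbon component described above follows a light-use-efficiency form: assimilation is the nominal LUE βn times APAR, with βn tied to leaf chlorophyll through an empirical relation. The sketch below illustrates that structure with an invented saturating Chl-βn function; the functional form and coefficients are placeholders, not the relation fitted at the Mead sites.

```python
# Hedged sketch of the light-use-efficiency bookkeeping described above:
# assimilation = beta_n * APAR, with the nominal LUE beta_n taken from an
# empirical, saturating function of leaf chlorophyll. The functional form and
# coefficients are illustrative placeholders, not the fitted Chl-beta_n relation.
import numpy as np

def beta_n_from_chl(chl_ug_cm2, beta_max=2.5, k=0.04):
    """Nominal LUE (g C per MJ APAR) from leaf chlorophyll (ug cm^-2), saturating form."""
    return beta_max * (1.0 - np.exp(-k * chl_ug_cm2))

def assimilation(apar_mj_m2_day, chl_ug_cm2):
    """Daily carbon assimilation (g C m^-2 day^-1) before environmental down-regulation."""
    return beta_n_from_chl(chl_ug_cm2) * apar_mj_m2_day

# Early-season (low Chl) vs peak-season (high Chl) canopy under the same light.
for chl in (15.0, 55.0):
    print(f"Chl={chl:4.0f} ug/cm2 -> A ~ {assimilation(10.0, chl):.1f} g C/m2/day")
```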

  7. Mothers, daughters and midlife (self)-discoveries: gender and aging in the Amanda Cross' Kate Fansler series.

    PubMed

    Domínguez-Rué, Emma

    2012-12-01

    In the same way that many aspects of gender cannot be understood apart from their relationship to race, class, culture, nationality and/or sexuality, the interactions between gender and aging constitute an interesting field for academic research, without which we cannot gain full insight into the complex and multi-faceted nature of gender studies. Although the American writer and Columbia professor Carolyn Gold Heilbrun (1926-2003) is more widely known for her best-selling mystery novels, published under the pseudonym of Amanda Cross, she also authored remarkable pieces of non-fiction in which she asserted her long-standing commitment to feminism, challenged established notions about women and aging, and advocated for a reassessment of those negative views. To my mind, the Kate Fansler novels became an instrument to reach a massive audience of female readers who might not have read her non-fiction, but who were perhaps finding it difficult to reach fulfillment as women under patriarchy, especially upon reaching middle age. Taking her essays in feminism and literary criticism as a basis and her later fiction as substantiation for my argument, this paper will try to reveal the ways in which Heilbrun's seemingly more superficial and much more commercial mystery novels as Amanda Cross were used as a catalyst that informed her feminist principles while vindicating the need to rethink issues concerning literary representations of mature women and cultural stereotypes about motherhood. PMID:22939539

  8. Inclusion of In-Situ Velocity Measurements into the UCSD Time-Dependent Tomography to Constrain and Better-Forecast Remote-Sensing Observations

    NASA Astrophysics Data System (ADS)

    Jackson, B. V.; Hick, P. P.; Bisi, M. M.; Clover, J. M.; Buffington, A.

    2010-08-01

    The University of California, San Diego (UCSD) three-dimensional (3-D) time-dependent tomography program has been used successfully for a decade to reconstruct and forecast coronal mass ejections from interplanetary scintillation observations. More recently, we have extended this tomography technique to use remote-sensing data from the Solar Mass Ejection Imager (SMEI) on board the Coriolis spacecraft; from the Ootacamund (Ooty) radio telescope in India; and from the European Incoherent SCATter (EISCAT) radar telescopes in northern Scandinavia. Finally, we intend these analyses to be used with observations from the Murchison Widefield Array (MWA), or the LOw Frequency ARray (LOFAR) now being developed respectively in Australia and Europe. In this article we demonstrate how in-situ velocity measurements from the Advanced Composition Explorer (ACE) space-borne instrumentation can be used in addition to remote-sensing data to constrain the time-dependent tomographic solution. Supplementing the remote-sensing observations with in-situ measurements provides additional information to construct an iterated solar-wind parameter that is propagated outward from near the solar surface past the measurement location, and throughout the volume. While the largest changes within the volume are close to the radial directions that incorporate the in-situ measurements, their inclusion significantly reduces the uncertainty in extending these measurements to global 3-D reconstructions that are distant in time and space from the spacecraft. At Earth, this can provide a finely-tuned real-time measurement up to the latest time for which in-situ measurements are available, and enables more-accurate forecasting beyond this than remote-sensing observations alone allow.

  9. Comparison of Satellite-Derived TOA Shortwave Clear-Sky Fluxes to Estimates from GCM Simulations Constrained by Satellite Observations of Land Surface Characteristics

    NASA Technical Reports Server (NTRS)

    Anantharaj, Valentine G.; Nair, Udaysankar S.; Lawrence, Peter; Chase, Thomas N.; Christopher, Sundar; Jones, Thomas

    2010-01-01

    Clear-sky, upwelling shortwave flux at the top of the atmosphere (S_TOA↑), simulated using the atmospheric and land model components of the Community Climate System Model 3 (CCSM3), is compared to corresponding observational estimates from the Clouds and Earth's Radiant Energy System (CERES) sensor. Improvements resulting from the use of land surface albedo derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) to constrain the simulations are also examined. Compared to CERES observations, CCSM3 overestimates the global, annually averaged S_TOA↑ over both land and oceans. However, regionally, CCSM3 overestimates S_TOA↑ over some land and ocean areas while underestimating it over other sites. CCSM3 underestimates S_TOA↑ over the Saharan and Arabian Deserts, and substantial differences exist between CERES observations and CCSM3 over agricultural areas. Over selected sites, after using ground-based observations to remove systematic biases that exist in the CCSM computation of S_TOA↑, it is found that use of the MODIS albedo improves the simulation of S_TOA↑. The inability of the coarse-resolution CCSM3 simulation to resolve the spatial heterogeneity of snowfall over high-altitude sites such as the Tibetan Plateau causes overestimation of S_TOA↑ in these areas. Discrepancies also exist in the simulation of S_TOA↑ over ocean areas, as CCSM3 does not account for the effect of wind speed on ocean surface albedo. This study shows that the radiative energy budget at the TOA is improved through the use of the MODIS albedo in global climate models.
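    Comparisons like this ultimately reduce model and satellite S_TOA↑ fields to area-weighted means, where each grid cell is weighted by the cosine of its latitude. The sketch below shows that weighting on synthetic fields; the imposed 4 W m^-2 model bias is arbitrary and not a result of the study.

```python
# Minimal sketch of the area-weighted global-mean comparison implied above:
# average model and satellite clear-sky upwelling shortwave TOA flux fields with
# cosine-of-latitude weights and report the bias. Fields here are synthetic.
import numpy as np

lat = np.linspace(-89.0, 89.0, 90)                 # grid-cell center latitudes (deg)
lon = np.linspace(0.0, 358.0, 180)
w = np.cos(np.deg2rad(lat))[:, None] * np.ones((1, lon.size))   # area weights

rng = np.random.default_rng(2)
s_up_ceres = 100.0 + 30.0 * np.abs(np.sin(np.deg2rad(lat)))[:, None] + rng.normal(0, 5, (90, 180))
s_up_model = s_up_ceres + 4.0 + rng.normal(0, 5, (90, 180))      # model biased high by ~4 W m^-2

bias = np.average(s_up_model - s_up_ceres, weights=w)
print(f"Global-mean clear-sky SW up bias (model - CERES): {bias:+.1f} W m^-2")
```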

  10. Constraining the Structure of the Transition Disk HD 135344B (SAO 206462) by Simultaneous Modeling of Multiwavelength Gas and Dust Observations

    NASA Technical Reports Server (NTRS)

    Carmona, A.; Pinte, C.; Thi, W. F.; Benisty, M.; Menard, F.; Grady, C.; Kamp, I.; Woitke, P.; Olofsson, J.; Roberge, A.; Brittain, S.; Duchene, G.; Meeus, G.; Martin-Zaidi, C.; Dent, B.; Le Bouquin, J. E.; Berger, J. P.

    2014-01-01

    Context: Constraining the gas and dust disk structure of transition disks, particularly in the inner dust cavity, is a crucial step toward understanding the link between them and planet formation. HD 135344B is an accreting (pre-)transition disk that displays CO 4.7 μm emission extending tens of AU inside its 30 AU dust cavity. Aims: We constrain HD 135344B's disk structure from multi-instrument gas and dust observations. Methods: We used the dust radiative transfer code MCFOST and the thermochemical code ProDiMo to derive the disk structure from the simultaneous modeling of the spectral energy distribution (SED), VLT/CRIRES CO P(10) 4.75 μm, Herschel/PACS [O I] 63 μm, Spitzer/IRS, and JCMT 12CO J = 3-2 spectra, VLTI/PIONIER H-band visibilities, and constraints from (sub-)mm continuum interferometry and near-IR imaging. Results: We found a disk model able to describe the current gas and dust observations simultaneously. This disk has the following structure. (1) To simultaneously reproduce the SED, the near-IR interferometry data, and the CO ro-vibrational emission, refractory grains (we suggest carbon) are present inside the silicate sublimation radius (0.08 < R < 0.2 AU). (2) The dust cavity (R < 30 AU) is filled with gas; the surface density of the gas inside the cavity must increase with radius to fit the CO ro-vibrational line profile; a small gap of a few AU in the gas distribution is compatible with current data, and a large gap of tens of AU in the gas does not appear likely. (4) The gas-to-dust ratio inside the cavity is >100 to account for the 870 μm continuum upper limit and the CO P(10) line flux. (5) The gas-to-dust ratio in the outer disk (30 < R < 200 AU) is less than 10 to simultaneously describe the [O I] 63 μm line flux and the CO P(10) line profile. (6) In the outer disk, most of the gas and dust mass should be located in the midplane, and

  11. Improved western U.S. background ozone estimates via constraining nonlocal and local source contributions using Aura TES and OMI observations

    NASA Astrophysics Data System (ADS)

    Huang, Min; Bowman, Kevin W.; Carmichael, Gregory R.; Lee, Meemong; Chai, Tianfeng; Spak, Scott N.; Henze, Daven K.; Darmenov, Anton S.; Silva, Arlindo M.

    2015-04-01

    Western U.S. near-surface ozone (O3) concentrations are sensitive to transported background O3 from the eastern Pacific free troposphere, as well as to U.S. anthropogenic and natural emissions. The current 75 ppbv U.S. O3 primary standard may be lowered soon; hence, accurately estimating O3 source contributions, especially background O3, in this region has growing policy-relevant significance. In this study, we improve the modeled total and background O3 via repartitioning and redistributing the contributions from nonlocal and local anthropogenic/wildfire sources in a multi-scale satellite data assimilation system containing the global Goddard Earth Observing System-Chemistry model (GEOS-Chem) and the regional Sulfur Transport and dEposition Model (STEM). Focusing on NASA's ARCTAS (Arctic Research of the Composition of the Troposphere from Aircraft and Satellites) field campaign period in June-July 2008, we first demonstrate that the negative biases in the free-running GEOS-Chem simulation in the eastern Pacific at 400-900 hPa are reduced by assimilating Aura-Tropospheric Emission Spectrometer (TES) O3 profiles. Using the TES-constrained boundary conditions, we then assimilated into STEM the tropospheric nitrogen dioxide (NO2) columns from the Aura-Ozone Monitoring Instrument to constrain U.S. nitrogen oxides (NOx = NO2 + NO) emissions at 12 × 12 km² grid scale. Improved model skill is indicated by cross validation against independent ARCTAS measurements. Leveraging the Aura observations, we show anomalously high wildfire NOx emissions in this summer in Northern California and the Central Valley, and lower anthropogenic emissions in multiple urban areas than those representing the year 2005. We found strong spatial variability of the daily maximum 8 h average background O3 and its contribution to the modeled total O3, with a mean value of ~48 ppbv (~77% of the total).

  12. Strong-lensing analysis of MACS J0717.5+3745 from Hubble Frontier Fields observations: How well can the mass distribution be constrained?

    NASA Astrophysics Data System (ADS)

    Limousin, M.; Richard, J.; Jullo, E.; Jauzac, M.; Ebeling, H.; Bonamigo, M.; Alavi, A.; Clément, B.; Giocoli, C.; Kneib, J.-P.; Verdugo, T.; Natarajan, P.; Siana, B.; Atek, H.; Rexroth, M.

    2016-04-01

    We present a strong-lensing analysis of MACSJ0717.5+3745 (hereafter MACS J0717), based on the full depth of the Hubble Frontier Field (HFF) observations, which brings the number of multiply imaged systems to 61, ten of which have been spectroscopically confirmed. The total number of images comprised in these systems rises to 165, compared to 48 images in 16 systems before the HFF observations. Our analysis uses a parametric mass reconstruction technique, as implemented in the Lenstool software, and the subset of the 132 most secure multiple images to constrain a mass distribution composed of four large-scale mass components (spatially aligned with the four main light concentrations) and a multitude of galaxy-scale perturbers. We find a superposition of cored isothermal mass components to provide a good fit to the observational constraints, resulting in a very shallow mass distribution for the smooth (large-scale) component. Given the implications of such a flat mass profile, we investigate whether a model composed of "peaky" non-cored mass components can also reproduce the observational constraints. We find that such a non-cored mass model reproduces the observational constraints equally well, in the sense that both models give comparable total rms. Although the total (smooth dark matter component plus galaxy-scale perturbers) mass distributions of both models are consistent, as are the integrated two-dimensional mass profiles, we find that the smooth and the galaxy-scale components are very different. We conclude that, even in the HFF era, the generic degeneracy between smooth and galaxy-scale components is not broken, in particular in such a complex galaxy cluster. Consequently, insights into the mass distribution of MACS J0717 remain limited, emphasizing the need for additional probes beyond strong lensing. Our findings also have implications for estimates of the lensing magnification. We show that the amplification difference between the two models is larger

  13. CO2 emissions from wildfires in Siberia: FRP measurement based estimates constrained by satellite and ground based observations of co-emitted species

    NASA Astrophysics Data System (ADS)

    Berezin, Evgeny V.; Konovalov, Igor B.; Ciais, Philippe; Broquet, Gregoire; Wu, Lin; Beekmann, Matthias; Hadji-Lazaro, Juliette; Clerbaux, Cathy; Andreae, Meinrat O.; Kaiser, Johannes W.; Schulze, Ernst-Detlef

    2013-04-01

    Wildfires play an important role in the global carbon balance, being one of the major processes of the carbon cycle and a considerable contributor to global carbon dioxide emissions. Meanwhile, significant discrepancies (especially on a regional scale) between the available wildfire emission estimates provided by different global and regional emission inventories indicate that the current knowledge of wildfire emissions and related processes is still deficient. Although studies of wildfire emissions of several important species have greatly benefited from the recent advent of satellite measurements of the tropospheric composition, the usefulness of direct satellite measurements of CO2 in this context still remains rather limited. We develop a new approach [1] to estimate CO2 emissions, which is based on the use of satellite measurements of co-emitted species in combination with chemistry transport model simulations and "bottom-up" emission inventory data. In this study we apply this approach together with the earlier developed method [2] for estimating wildfire emissions to constrain the parameters of a fire emission model and to estimate emissions of CO2, CO, and aerosol from wildfires in Siberia, an important carbon-rich region of the world. We employ MODIS fire radiative power (FRP) measurements to obtain spatiotemporal fields of fire activity, and use other satellite observations (IASI CO and MODIS AOD) in combination with simulations performed with the CHIMERE chemistry transport model to estimate the FRP-to-biomass-burning-rate conversion factors for different vegetative land cover types. The conversion factors (which are believed to be much more uncertain than the available estimates of the emission factors for major species) derived from the CO and AOD measurements are additionally evaluated with independent ground-based measurements in Zotino and Tomsk and are combined in a Bayesian way to obtain CO2 emission estimates.

  14. Southern San Andreas-San Jacinto fault system slip rates estimated from earthquake cycle models constrained by GPS and interferometric synthetic aperture radar observations

    NASA Astrophysics Data System (ADS)

    Lundgren, Paul; Hetland, Eric A.; Liu, Zhen; Fielding, Eric J.

    2009-02-01

    We use ground geodetic and interferometric synthetic aperture radar satellite observations across the southern San Andreas (SAF)-San Jacinto (SJF) fault systems to constrain their slip rates and the viscosity structure of the lower crust and upper mantle on the basis of periodic earthquake cycle, Maxwell viscoelastic, finite element models. Key questions for this system are the SAF and SJF slip rates, the slip partitioning between the two main branches of the SJF, and the dip of the SAF. The best-fitting models generally have a high-viscosity lower crust (η = 10^21 Pa s) overlying a lower-viscosity upper mantle (η = 10^19 Pa s). We find considerable trade-offs between the relative time into the current earthquake cycle of the San Jacinto fault and the upper mantle viscosity. With reasonable assumptions for the relative time in the earthquake cycle, the partition of slip is fairly robust at around 24-26 mm/a for the San Jacinto fault system and 16-18 mm/a for the San Andreas fault. Models for two subprofiles across the SAF-SJF systems suggest that slip may transfer from the western (Coyote Creek) branch to the eastern (Clark-Superstition hills) branch of the SJF from NW to SE. Across the entire system our best-fitting model gives slip rates of 2 ± 3, 12 ± 9, 12 ± 9, and 17 ± 3 mm/a for the Elsinore, Coyote Creek, Clark, and San Andreas faults, respectively, where the large uncertainties in the slip rates for the SJF branches reflect the large uncertainty in the slip rate partitioning within the SJF system.
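
    The record's viscoelastic finite element models cannot be reproduced from the abstract alone; as a much simpler point of reference, the sketch below evaluates the classic elastic screw-dislocation (Savage-Burford) profile for interseismic velocity across two parallel strike-slip faults. Fault positions, slip rates, and locking depths are hypothetical placeholders, not values from the study.

```python
import numpy as np

def interseismic_velocity(x_km, faults):
    """Fault-parallel surface velocity (mm/yr) at distances x_km from a reference point.

    faults: list of (position_km, slip_rate_mm_yr, locking_depth_km) tuples.
    Classic Savage & Burford deep-dislocation model; purely illustrative and
    much simpler than the viscoelastic finite element models in the study.
    """
    x = np.asarray(x_km, dtype=float)
    v = np.zeros_like(x)
    for x0, rate, depth in faults:
        v += (rate / np.pi) * np.arctan((x - x0) / depth)
    return v

# Hypothetical geometry: two sub-parallel faults ~35 km apart
faults = [(0.0, 25.0, 15.0), (35.0, 17.0, 15.0)]
profile = np.linspace(-100, 150, 6)
print(np.round(interseismic_velocity(profile, faults), 1))
```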

  15. Comparing Simulations of Rising Flux Tubes Through the Solar Convection Zone with Observations of Solar Active Regions: Constraining the Dynamo Field Strength

    NASA Astrophysics Data System (ADS)

    Weber, M. A.; Fan, Y.; Miesch, M. S.

    2013-10-01

    We study how active-region-scale flux tubes rise buoyantly from the base of the convection zone to near the solar surface by embedding a thin flux tube model in a rotating spherical shell of solar-like turbulent convection. These toroidal flux tubes that we simulate range in magnetic field strength from 15 kG to 100 kG at initial latitudes of 1° to 40° in both hemispheres. This article expands upon Weber, Fan, and Miesch (Astrophys. J. 741, 11, 2011) (Article 1) with the inclusion of tubes with magnetic flux of 10^20 Mx and 10^21 Mx, and more simulations of the previously investigated case of 10^22 Mx, sampling more convective flows than the previous article, greatly improving statistics. Observed properties of active regions are compared to properties of the simulated emerging flux tubes, including: the tilt of active regions in accordance with Joy's Law as in Article 1, and in addition the scatter of tilt angles about the Joy's Law trend, the most commonly occurring tilt angle, the rotation rate of the emerging loops with respect to the surrounding plasma, and the nature of the magnetic field at the flux tube apex. We discuss how these diagnostic properties constrain the initial field strength of the active-region flux tubes at the bottom of the solar convection zone, and suggest that flux tubes of initial magnetic field strengths of ≥ 40 kG are good candidates for the progenitors of large (10^21 Mx to 10^22 Mx) solar active regions, which agrees with the results from Article 1 for flux tubes of 10^22 Mx. With the addition of more magnetic flux values and more simulations, we find that for all magnetic field strengths, the emerging tubes show a positive Joy's Law trend, and that this trend does not show a statistically significant dependence on the magnetic flux.
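
    The Joy's Law comparison above amounts to fitting a tilt-versus-latitude trend. Below is a minimal illustrative fit of tilt = A sin(latitude) to synthetic tilt angles with large scatter; the amplitude and noise level are invented, not taken from the simulations or observations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic active-region tilt angles following a Joy's-Law-like trend
latitudes_deg = rng.uniform(1, 40, 200)
true_amplitude = 32.1                      # degrees, hypothetical
tilts = (true_amplitude * np.sin(np.radians(latitudes_deg))
         + rng.normal(0, 10, latitudes_deg.size))   # large scatter about the trend

# Least-squares fit of tilt = A * sin(latitude), i.e. a single free parameter A
s = np.sin(np.radians(latitudes_deg))
A_fit = np.sum(s * tilts) / np.sum(s * s)
print(f"fitted Joy's Law amplitude ~ {A_fit:.1f} degrees")
```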

  16. Constraining inflation

    SciTech Connect

    Adshead, Peter; Easther, Richard E-mail: richard.easther@yale.edu

    2008-10-15

    We analyze the theoretical limits on slow roll reconstruction, an optimal algorithm for recovering the inflaton potential (assuming a single-field slow roll scenario) from observational data. Slow roll reconstruction is based upon the Hamilton-Jacobi formulation of the inflationary dynamics. We show that at low inflationary scales the Hamilton-Jacobi equations simplify considerably. We provide a new classification scheme for inflationary models, based solely on the number of parameters needed to specify the potential, and provide forecasts for the bounds on the slow roll parameters from future data sets. A minimal running of the spectral index, induced solely by the first two slow roll parameters (ε and η), appears to be effectively undetectable by realistic cosmic microwave background (CMB) experiments. However, since the ability to detect any running increases with the lever arm in comoving wavenumber, we conjecture that high redshift 21 cm data may allow tests of second-order consistency conditions on inflation. Finally, we point out that the second-order corrections to the spectral index are correlated with the inflationary scale, and thus the amplitude of the CMB B mode.
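
    For readers unfamiliar with the slow roll parameters, the sketch below evaluates the standard first-order Hubble slow-roll expressions for the spectral index and tensor-to-scalar ratio. Sign conventions for η differ between papers and the parameter values are hypothetical, so treat this as illustrative context, not the paper's reconstruction algorithm.

```python
def slow_roll_observables(epsilon, eta):
    """First-order estimates of the scalar spectral index and tensor-to-scalar
    ratio from the Hubble slow-roll parameters (one common convention:
    n_s = 1 - 4*eps + 2*eta, r = 16*eps). Conventions vary; illustrative only."""
    n_s = 1.0 - 4.0 * epsilon + 2.0 * eta
    r = 16.0 * epsilon
    return n_s, r

# Hypothetical parameter values
for eps, eta in [(0.01, 0.0), (0.005, -0.01)]:
    n_s, r = slow_roll_observables(eps, eta)
    print(f"epsilon={eps}, eta={eta} -> n_s~{n_s:.3f}, r~{r:.2f}")
```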

  17. Constraining primordial magnetism

    NASA Astrophysics Data System (ADS)

    Shaw, J. Richard; Lewis, Antony

    2012-08-01

    Primordial magnetic fields could provide an explanation for the galactic magnetic fields observed today; in which case, they may leave interesting signals in the CMB and the small-scale matter power spectrum. We discuss how to approximately calculate the important nonlinear magnetic effects within the guise of linear perturbation theory and calculate the matter and CMB power spectra including the Sunyaev-Zel’dovich contribution. We then use various cosmological data sets to constrain the form of the magnetic field power spectrum. Using solely large-scale CMB data (WMAP7, QUaD, and ACBAR) we find a 95% C.L. on the variance of the magnetic field at 1 Mpc of Bλ < 6.4 nG. When we include South Pole Telescope data to constrain the Sunyaev-Zel’dovich effect, we find a revised limit of Bλ < 4.1 nG. The addition of Sloan Digital Sky Survey Lyman-α data lowers this limit even further, roughly constraining the magnetic field to Bλ < 1.3 nG.

  18. A province-scale block model of Walker Lane and western Basin and Range crustal deformation constrained by GPS observations (Invited)

    NASA Astrophysics Data System (ADS)

    Hammond, W. C.; Bormann, J.; Blewitt, G.; Kreemer, C.

    2013-12-01

    The Walker Lane in the western Great Basin of the western United States is an 800 km long and 100 km wide zone of active intracontinental transtension that absorbs ~10 mm/yr, about 20% of the Pacific/North America plate boundary relative motion. The Walker Lane lies west of the Sierra Nevada/Great Valley microplate (SNGV) and adjoins the Basin and Range Province to the east; its deformation is predominantly shear strain overprinted with a minor component of extension. The Walker Lane responds with faulting, block rotations, and structural step-overs, and has distinct and varying partitioned domains of shear and extension. Resolving these complex deformation patterns requires a long term observation strategy with a dense network of GPS stations (spacing ~20 km). The University of Nevada, Reno operates the 373 station Mobile Array of GPS for Nevada transtension (MAGNET) semi-continuous network that supplements coverage by other networks such as EarthScope's Plate Boundary Observatory, which alone has insufficient density to resolve the deformation patterns. Uniform processing of data from these GPS mega-networks provides a synoptic view and new insights into the kinematics and mechanics of Walker Lane tectonics. We present velocities for thousands of stations with time series between 3 and 17 years in duration, aligned to our new GPS-based North America fixed reference frame NA12. The velocity field shows a rate budget across the southern Walker Lane of ~10 mm/yr, decreasing northward to ~7 mm/yr at the latitude of the Mohawk Valley and Pyramid Lake. We model the data with a new block model that estimates rotations and slip rates of known active faults between the Mojave Desert and northern Nevada and northeast California. The density of active faults in the region requires including a relatively large number of blocks in the model to accurately estimate deformation patterns. With 49 blocks, our model captures structural detail not represented in previous province-scale models, and

  19. Three-dimensional constrained variational analysis: Approach and application to analysis of atmospheric diabatic heating and derivative fields during an ARM SGP intensive observational period

    NASA Astrophysics Data System (ADS)

    Tang, Shuaiqi; Zhang, Minghua

    2015-08-01

    Atmospheric vertical velocities and advective tendencies are essential large-scale forcing data to drive single-column models (SCMs), cloud-resolving models (CRMs), and large-eddy simulations (LESs). However, they cannot be measured directly in the field or easily calculated with great accuracy. In the Atmospheric Radiation Measurement Program (ARM), a constrained variational algorithm (1-D constrained variational analysis (1DCVA)) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). The 1DCVA algorithm is now extended into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data, diabatic heating sources (Q1), and moisture sinks (Q2). Results are presented for a midlatitude cyclone case study on 3 March 2000 at the ARM Southern Great Plains site. These results are used to evaluate the diabatic heating fields in the available products such as Rapid Update Cycle, ERA-Interim, National Centers for Environmental Prediction Climate Forecast System Reanalysis, Modern-Era Retrospective Analysis for Research and Applications, Japanese 55-year Reanalysis, and North American Regional Reanalysis. We show that although the analyses/reanalyses generally capture the atmospheric state of the cyclone, their biases in the derivative terms (Q1 and Q2) at the regional scale of a few hundred kilometers are large, and all analyses/reanalyses tend to underestimate the subgrid-scale upward transport of moist static energy in the lower troposphere. The 3DCVA-gridded large-scale forcing data are physically consistent with the spatial distribution of surface and TOA measurements of radiation, precipitation, latent and sensible heat fluxes, and clouds, and are thus better suited to force SCMs, CRMs, and LESs. Possible applications of the 3DCVA are discussed.

  20. Transport of nutrients and contaminants from ocean to island by emperor penguins from Amanda Bay, East Antarctic.

    PubMed

    Huang, Tao; Sun, Liguang; Wang, Yuhong; Chu, Zhuding; Qin, Xianyan; Yang, Lianjiao

    2014-01-15

    Penguins play important roles in the biogeochemical cycle between Antarctic Ocean and land ecosystems. The roles of emperor penguin Aptenodytes forsteri, however, are usually ignored because emperor penguin breeds in fast sea ice. In this study, we collected two sediment profiles (EPI and PI) from the N island near a large emperor penguin colony at Amanda Bay, East Antarctic and performed stable isotope and element analyses. The organic C/N ratios and carbon and nitrogen isotopes suggested an autochthonous source of organic materials for the sediments of EPI (C/N = 10.21 ± 0.28, n = 17; δ(13)C = -13.48 ± 0.50‰, δ(15)N = 8.35 ± 0.55‰, n = 4) and an allochthonous source of marine-derived organic materials for the sediments of PI (C/N = 6.15 ± 0.08, δ(13)C = -26.85 ± 0.11‰, δ(15)N = 21.21 ± 2.02‰, n = 20). The concentrations of total phosphorus (TP), selenium (Se), mercury (Hg) and zinc (Zn) in PI sediments were much higher than those in EPI, the concentration of copper (Cu) in PI was a little lower, and the concentration of element lead (Pb) showed no difference. As measured by the geoaccumulation indexes, Zn, TP, Hg and Se were from moderately to very strongly enriched in PI, relative to local mother rock, due to the guano input from juvenile emperor penguins. Because of its high trophic level and transfer efficiency, emperor penguin can transport a large amount of nutrients and contaminants from ocean to land even with a relatively small population, and its roles in the biogeochemical cycle between ocean and terrestrial environment should not be ignored. PMID:24056448
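
    The geoaccumulation indexes mentioned above follow the standard Müller definition, Igeo = log2(Cn / (1.5 Bn)), comparing a measured sediment concentration Cn with a geochemical background Bn (here the local mother rock). The sketch below uses hypothetical concentrations, not the paper's data.

```python
import math

def geoaccumulation_index(concentration, background):
    """Mueller geoaccumulation index: Igeo = log2(Cn / (1.5 * Bn)).

    Cn is the measured sediment concentration and Bn the geochemical background;
    the factor 1.5 absorbs natural variability in the background."""
    return math.log2(concentration / (1.5 * background))

# Hypothetical concentrations (mg/kg) for a guano-affected sediment vs. bedrock
samples = {"Zn": (350.0, 60.0), "Hg": (0.45, 0.03), "Se": (5.0, 0.2)}
for element, (c_sample, c_background) in samples.items():
    print(element, round(geoaccumulation_index(c_sample, c_background), 2))
```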

  2. Structural basins, terrain contacts, and large fault displacements on the central California continental margin, constrained by seismic data and submersible observations

    NASA Astrophysics Data System (ADS)

    Ramirez, T. M.; Caress, D.; Aiello, I.; Greene, G.; Lewis, S.; Paull, C.; Silver, E.; Stakes, D.

    2002-05-01

    A synthesis of reprocessed multichannel seismic data and lithologies based on ROV sampling defines a series of block faulted basement rocks offshore central California in the Monterey Bay region with lithologies associated with either Salinia or Franciscan microterranes and their overlying sediments. In 1990, the USGS conducted a multi-channel seismic (MCS) reflection survey (cruise L-3-90-NC) off the central California coast between Monterey Bay and Bodega Bay. Sixty-two MCS lines were collected on the R/V S. P. Lee using a 2.6 km long, 48-channel streamer and a tuned 2400 cubic inch array of ten airguns. We have reprocessed several critical lines and reviewed the entire dataset to map basement structures and the lithostratigraphy of sediments. From preliminary analyses of the MCS data, the development of Smooth Ridge appears controlled by two prominent basement highs forming a trapped basin for sediment accumulation. Meanders in the lower Monterey Canyon, seen in bathymetric data, are constrained by these uplifted basement blocks. The lithologies of basement samples collected by ROV show spatial relationships that correlate with the seismic character. Faulted contacts between the blocks of the Franciscan and Salinia microterranes are consistent with 100+ km of right-slip displacement on the San Gregorio fault zone. These contacts are onlapped by Tertiary sediments forming a series of basins such as Smooth Ridge aligned along the continental margin. Ongoing analyses of these data will allow for a better understanding of the Monterey Bay regional tectonics and contribute to the mapping of the western edge of the paleo-subduction zone along central California.

  3. Multi-Resolution Long-Term Satellite Observations of Declines in Photosynthetic Capacity: Constraining Abiotic and Biophysical Disturbances to Plant Productivity in North America

    NASA Astrophysics Data System (ADS)

    Neigh, C. S.; Bolton, D. K.; Diabate, M. A.; Tucker, C. J.

    2009-12-01

    Declines in vegetation growth driven by disturbance from climate or biophysical processes throughout North America have been observed with coarse-resolution remote sensing data over nearly three decades, yet many of the direct local-scale disturbance dynamics are not included as spatially explicit information in simulated estimates of ecosystem carbon balance. We used multi-temporal, multi-spectral remote sensing data to understand whether fine-scale disturbance dynamics impact broad-scale declines in vegetation growth that are relevant to regional carbon budgets. Long-term NOAA Advanced Very High Resolution Radiometer normalized difference vegetation index (NDVI) data from 1982-2007, from the NASA Goddard Global Inventory Modeling and Mapping Studies (GIMMS) group version 'g' dataset, were investigated to understand whether anomalous negative trends were due to remotely observable fine-scale disturbance phenomena, climate change, or data processing artifacts. Three regions that showed marked declines over the long-term data record were selected for investigation: interior Alaska, western Oregon, and northern Wisconsin, all of which have reported forest declines from climate and/or insect disturbance. Landsat data were classified and validated with historical air photos, and the resultant fine-scale change maps were geospatially linked to coarse-scale trends to understand multi-scale observations of long-term forest dynamics. Three goals related to the scale of observations were addressed: 1) confirm disturbance events with high-resolution data in regions with declines of photosynthetic capacity; 2) develop fine-scale multi-temporal LCLUC information needed for production efficiency models; and 3) derive scale aspects of disturbance in coarse-grid satellite-driven simulations of net primary production, exploring whether widespread fine-scale sub-grid disturbance is overlooked. Our multi-temporal, multi-spectral investigation aimed to improve local-scale biogeochemistry

  4. Weak ductile shear zone beneath a major strike-slip fault: Inferences from earthquake cycle model constrained by geodetic observations of the western North Anatolian Fault Zone

    NASA Astrophysics Data System (ADS)

    Yamasaki, Tadashi; Wright, Tim J.; Houseman, Gregory A.

    2014-04-01

    GPS data before and after the 1999 İzmit/Düzce earthquakes on the North Anatolian Fault Zone (Turkey) reveal a preseismic strain localization within about 25 km of the fault and a rapid postseismic transient. Using 3-D finite element calculations of the earthquake cycle in an idealized model of the crust, comprising elastic above Maxwell viscoelastic layers, we show that spatially varying viscosity in the crust can explain these observations. Depth-dependent viscosity without lateral variations can reproduce some of the observations but cannot explain the proximity to the fault of maximum postseismic velocities. A localized weak zone beneath the faulted elastic lid satisfactorily explains the observations if the weak zone extends down to midcrustal depths, and the ratio of relaxation time to earthquake repeat time ranges from ~0.005 to ~0.01 (for weak-zone widths of ~24 and 40 km, respectively) in the weakened domain and greater than ~1.0 elsewhere, corresponding to viscosities of ~10^(18±0.3) Pa s and greater than ~10^20 Pa s. Models with sharp weak-zone boundaries fit the data better than those with a smooth viscosity increase away from the fault, implying that the weak zone may be bounded by a relatively abrupt change in material properties. Such a change might result from lithological contrast, grain size reduction, fabric development, or water content, in addition to any effects from shear heating. Our models also imply that viscosities inferred from postseismic studies primarily reflect the rheology of the weak zone and should not be used to infer the mechanical properties of normal crust.
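
    The ratio of relaxation time to earthquake repeat time quoted above can be related to viscosity through the Maxwell relaxation time. The sketch below assumes a shear modulus of 3 × 10^10 Pa and a repeat time of 250 years, both placeholder values chosen only to show how the conversion works.

```python
def maxwell_relaxation_time_years(viscosity_pa_s, shear_modulus_pa=3e10):
    """Maxwell relaxation time tau = eta / mu (one common convention; some
    authors use 2*eta/mu), converted from seconds to years."""
    seconds_per_year = 3.15576e7
    return viscosity_pa_s / shear_modulus_pa / seconds_per_year

# Hypothetical weak-zone and background viscosities spanning the quoted range
repeat_time_years = 250.0          # assumed earthquake repeat time
for eta in (1e18, 1e20):
    tau = maxwell_relaxation_time_years(eta)
    print(f"eta={eta:.0e} Pa s -> tau~{tau:.1f} yr, tau/T~{tau / repeat_time_years:.3f}")
```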

  5. The formation of peak-ring basins: Working hypotheses and path forward in using observations to constrain models of impact-basin formation

    NASA Astrophysics Data System (ADS)

    Baker, David M. H.; Head, James W.; Collins, Gareth S.; Potter, Ross W. K.

    2016-07-01

    Impact basins provide windows into the crustal structure and stratigraphy of planetary bodies; however, interpreting the stratigraphic origin of basin materials requires an understanding of the processes controlling basin formation and morphology. Peak-ring basins (exhibiting a rim crest and single interior ring of peaks) provide important insight into the basin-formation process, as they are transitional between complex craters with central peaks and larger multi-ring basins. New image and altimetry data from the Lunar Reconnaissance Orbiter as well as a suite of remote sensing datasets have permitted a reassessment of the origin of lunar peak-ring basins. We synthesize morphometric, spectroscopic, and gravity observations of lunar peak-ring basins and describe two working hypotheses for the formation of peak rings that involve interactions between inward collapsing walls of the transient cavity and large central uplifts of the crust and mantle. Major facets of our observations are then compared and discussed in the context of numerical simulations of peak-ring basin formation in order to plot a course for future model refinement and development.

  6. Joint modeling of teleseismic and tsunami wave observations to constrain the 16 September 2015 Illapel, Chile, Mw 8.3 earthquake rupture process

    NASA Astrophysics Data System (ADS)

    Li, Linyan; Lay, Thorne; Cheung, Kwok Fai; Ye, Lingling

    2016-05-01

    The 16 September 2015 Illapel, Chile, Mw 8.3 earthquake ruptured ~170 km along the plate boundary megathrust fault from 30.0°S to 31.6°S. A patch of offshore slip of up to 10 m extended to near the trench, and a patch of ~3 m slip occurred downdip below the coast. Aftershocks fringe the large-slip zone, extending along the coast from 29.5°S to 32.5°S between the 1922 and 1971/1985 ruptures. The coseismic slip distribution is determined by iterative modeling of teleseismic body waves as well as tsunami signals recorded at three regional DART stations and tide gauges immediately north and south of the rupture. The tsunami observations tightly delimit the rupture length, suppressing bilateral southward extension of slip found in unconstrained teleseismic-wave inversions. The spatially concentrated rupture area, with a stress drop of ~3.2 MPa, is validated by modeling DART and tide gauge observations in Hawaii, which also prove sensitive to the along-strike length of the rupture.
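
    As a rough consistency check on the quoted stress drop, the sketch below combines the standard moment-magnitude relation with a circular-crack stress-drop formula. The assumed rupture width (~100 km) is a placeholder; the study's ~3.2 MPa value comes from its finite-fault slip model, not from this formula.

```python
import math

def moment_from_mw(mw):
    """Seismic moment (N m) from moment magnitude: M0 = 10**(1.5*Mw + 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

def circular_crack_stress_drop(m0, area_m2):
    """Static stress drop for a circular crack of equivalent area:
    d_sigma = (7/16) * M0 / R^3, with R = sqrt(area / pi)."""
    radius = math.sqrt(area_m2 / math.pi)
    return (7.0 / 16.0) * m0 / radius ** 3

m0 = moment_from_mw(8.3)
# Hypothetical rupture area: ~170 km along strike by an assumed ~100 km width
area = 170e3 * 100e3
print(f"M0 ~ {m0:.2e} N m, stress drop ~ {circular_crack_stress_drop(m0, area) / 1e6:.1f} MPa")
```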

  7. Synergy of short gamma ray burst and gravitational wave observations: Constraining the inclination angle of the binary and possible implications for off-axis gamma ray bursts

    NASA Astrophysics Data System (ADS)

    Arun, K. G.; Tagoshi, Hideyuki; Pai, Archana; Mishra, Chandra Kant

    2014-07-01

    Compact binary mergers are the strongest candidates for the progenitors of short gamma ray bursts (SGRBs). If a gravitational wave signal from the compact binary merger is observed in association with an SGRB, such a synergy can help us understand many interesting aspects of these bursts. We examine the accuracies with which a worldwide network of gravitational wave interferometers would measure the inclination angle (the angle between the angular momentum axis of the binary and the observer's line of sight) of the binary. We compare the projected accuracies of gravitational wave detectors to measure the inclination angle of double neutron star and neutron star-black hole binaries for different astrophysical scenarios. We find that a five-detector network can measure the inclination angle to an accuracy of ~5.1 (2.2) deg for a double neutron star (neutron star-black hole) system at 200 Mpc if the direction of the source as well as the redshift is known electromagnetically. We argue as to how an accurate estimation of the inclination angle of the binary can prove to be crucial in understanding off-axis GRBs, the dynamics and the energetics of their jets, and help the searches for (possible) orphan afterglows of the SGRBs.

  8. Constraining Galileon inflation

    SciTech Connect

    Regan, Donough; Anderson, Gemma J.; Hull, Matthew; Seery, David E-mail: G.Anderson@sussex.ac.uk E-mail: D.Seery@sussex.ac.uk

    2015-02-01

    In this short paper, we present constraints on the Galileon inflationary model from the CMB bispectrum. We employ a principal-component analysis of the independent degrees of freedom constrained by data and apply this to the WMAP 9-year data to constrain the free parameters of the model. A simple Bayesian comparison establishes that support for the Galileon model from bispectrum data is at best weak.

  9. Constraining Predicted Secondary Organic Aerosol Formation and Processing Using Real-Time Observations of Aging Urban Emissions in an Oxidation Flow Reactor

    NASA Astrophysics Data System (ADS)

    Ortega, A. M.; Palm, B. B.; Hayes, P. L.; Day, D. A.; Cubison, M.; Brune, W. H.; Hu, W.; Graus, M.; Warneke, C.; Gilman, J.; De Gouw, J. A.; Jimenez, J. L.

    2014-12-01

    To investigate atmospheric processing of urban emissions, we deployed an oxidation flow reactor with measurements of size-resolved chemical composition of submicron aerosol during CalNex-LA, a field study investigating air quality and climate change at a receptor site in the Los Angeles Basin. The reactor produces OH concentrations up to 4 orders of magnitude higher than in ambient air, achieving equivalent atmospheric aging of hours to ~2 weeks in 5 minutes of processing. The OH exposure (OHexp) was stepped every 20 min to survey the effects of a range of oxidation exposures on gases and aerosols. This approach is a valuable tool for in-situ evaluation of changes in organic aerosol (OA) concentration and composition due to photochemical processing over a range of ambient atmospheric conditions and composition. Combined with collocated gas-phase measurements of volatile organic compounds, this novel approach enables the comparison of measured SOA to predicted SOA formation from a prescribed set of precursors. Results from CalNex-LA show enhancements of OA and inorganic aerosol from gas-phase precursors. The OA mass enhancement from aging was highest at night and correlated with trimethylbenzene, indicating the importance of relatively short-lived VOC (OH lifetime of ~12 hrs or less) as SOA precursors in the LA Basin. Maximum net SOA production is observed between 3-6 days of aging and decreases at higher exposures. Aging in the reactor shows similar behavior to atmospheric processing; the elemental composition of ambient and reactor measurements follow similar slopes when plotted in a Van Krevelen diagram. Additionally, for air processed in the reactor, oxygen-to-carbon ratios (O/C) of aerosol extended over a larger range compared to ambient aerosol observed in the LA Basin. While reactor aging always increases O/C, often beyond maximum observed ambient levels, a transition from net OA production to destruction occurs at intermediate OHexp, suggesting a transition
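
    The "equivalent atmospheric aging" quoted above is conventionally obtained by dividing the reactor OH exposure by an assumed ambient OH concentration. The sketch below uses a generic 1.5 × 10^6 molecules cm⁻³ daily-average OH, which is an assumption for illustration rather than a CalNex-LA value.

```python
def equivalent_aging_days(oh_exposure_molec_s_cm3, ambient_oh=1.5e6):
    """Convert an OH exposure (molecules cm^-3 s) into equivalent days of
    ambient atmospheric aging, assuming a 24-h average ambient OH concentration
    (1.5e6 molecules cm^-3 here is a generic assumption)."""
    return oh_exposure_molec_s_cm3 / (ambient_oh * 86400.0)

# Example: OH exposures spanning a few days to roughly two weeks of aging
for oh_exp in (5e11, 2e12):
    print(f"OHexp={oh_exp:.0e} -> ~{equivalent_aging_days(oh_exp):.1f} equivalent days")
```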

  10. Updated Rupture Model for the M7.8 October 28, 2012, Haida Gwaii Earthquake as Constrained by GPS-Observed Displacements

    NASA Astrophysics Data System (ADS)

    Nykolaishen, L.; Dragert, H.; Wang, K.; James, T. S.; de Lange Boom, B.; Schmidt, M.; Sinnott, D.

    2014-12-01

    The M7.8 low-angle thrust earthquake off the west coast of southern Haida Gwaii on October 28, 2012, provided Canadian scientists with the opportunity to study a local large thrust earthquake and has provided important information towards an improved understanding of geohazards in coastal British Columbia. Most large events along the Pacific-North America boundary in this region have involved strike-slip motion, such as the 1949 M8.1 earthquake on the Queen Charlotte Fault. In contrast along the southern portion of Haida Gwaii, the young (~8 Ma) Pacific plate crust also underthrusts North America and has been viewed as a small-scale analog of the Cascadia Subduction Zone. Initial seismic-based rupture models for this event were improved through inclusion of GPS-observed coseismic displacements, which are as large as 115 cm of horizontal motion (SSW) and 30 cm of subsidence. Additional campaign-style GPS surveys have since been repeated by the Canadian Hydrographic Service (CHS) at seven vertical reference benchmarks throughout Haida Gwaii, significantly improving the coverage of coseismic displacement observations in the region. These added offsets were typically calculated by differencing a single occupation before and after the earthquake, and preliminary displacement estimates are consistent with previous GPS observations from the Geological Survey of Canada. Addition of the CHS coseismic offset estimates may allow direct inversion of the GPS data to derive a purely GPS-based rupture model. To date, cumulative postseismic displacements at six sites indicate up to 6 cm of motion, varying in azimuth between SSW and SE. Preliminary fitting of the postseismic time series has utilized a double exponential function characteristic of mantle relaxation. The current postseismic trends also suggest afterslip on the deeper plate interface beneath central Haida Gwaii along with possible induced aseismic slip on a deeper segment of the Queen Charlotte Fault located offshore
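
    The double exponential fit described above can be illustrated with a few lines of curve fitting; the synthetic time series, amplitudes, and decay times below are invented for demonstration and are not the Haida Gwaii data.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exponential(t, a1, tau1, a2, tau2):
    """Postseismic displacement model: two relaxing terms with different time
    constants (e.g., a fast transient plus slower viscoelastic relaxation)."""
    return a1 * (1.0 - np.exp(-t / tau1)) + a2 * (1.0 - np.exp(-t / tau2))

# Synthetic post-earthquake time series (days, cm); purely illustrative values
rng = np.random.default_rng(1)
t = np.linspace(1, 700, 120)
obs = double_exponential(t, 3.0, 30.0, 3.0, 400.0) + rng.normal(0, 0.2, t.size)

popt, _ = curve_fit(double_exponential, t, obs, p0=[1.0, 10.0, 1.0, 100.0])
print("fitted (a1, tau1, a2, tau2):", np.round(popt, 1))
```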

  11. Source Attribution and Interannual Variability of Arctic Pollution in Spring Constrained by Aircraft (ARCTAS, ARCPAC) and Satellite (AIRS) Observations of Carbon Monoxide

    NASA Technical Reports Server (NTRS)

    Fisher, J. A.; Jacob, D. J.; Purdy, M. T.; Kopacz, M.; LeSager, P.; Carouge, C.; Holmes, C. D.; Yantosca, R. M.; Batchelor, R. L.; Strong, K.; Diskin, G. S.; Fuelberg, H. E.; Holloway, J. S.; McMillan, W. W.; Warner, J.; Streets, D. G.; Zhang, Q.; Wang, Y.; Wu, S.

    2009-01-01

    We use aircraft observations of carbon monoxide (CO) from the NASA ARCTAS and NOAA ARCPAC campaigns in April 2008 together with multiyear (2003-2008) CO satellite data from the AIRS instrument and a global chemical transport model (GEOS-Chem) to better understand the sources, transport, and interannual variability of pollution in the Arctic in spring. Model simulation of the aircraft data gives best estimates of CO emissions in April 2008 of 26 Tg month⁻¹ for Asian anthropogenic, 9.1 for European anthropogenic, 4.2 for North American anthropogenic, 9.3 for Russian biomass burning (anomalously large that year), and 21 for Southeast Asian biomass burning. We find that Asian anthropogenic emissions are the dominant source of Arctic CO pollution everywhere except in surface air where European anthropogenic emissions are of similar importance. Synoptic pollution influences in the Arctic free troposphere include contributions of comparable magnitude from Russian biomass burning and from North American, European, and Asian anthropogenic sources. European pollution dominates synoptic variability near the surface. Analysis of two pollution events sampled by the aircraft demonstrates that AIRS is capable of observing pollution transport to the Arctic in the mid-troposphere. The 2003-2008 record of CO from AIRS shows that interannual variability averaged over the Arctic cap is very small. AIRS CO columns over Alaska are highly correlated with the Oceanic Niño Index, suggesting a link between El Niño and northward pollution transport. AIRS shows lower-than-average CO columns over Alaska during April 2008, despite the Russian fires, due to a weakened Aleutian Low hindering transport from Asia and associated with the moderate 2007-2008 La Niña. This suggests that Asian pollution influence over the Arctic may be particularly large under strong El Niño conditions.

  12. Finding consistency between different views of the absorption enhancement of black carbon: An observationally constrained hybrid model to support a transition in optical properties with mass fraction

    NASA Astrophysics Data System (ADS)

    Coe, H.; Allan, J. D.; Whitehead, J.; Alfarra, M. R. R.; Villegas, E.; Kong, S.; Williams, P. I.; Ting, Y. C.; Haslett, S.; Taylor, J.; Morgan, W.; McFiggans, G.; Spracklen, D. V.; Reddington, C.

    2015-12-01

    The mixing state of black carbon is uncertain yet has a significant influence on the efficiency with which a particle absorbs light. In turn, this may make a significant contribution to the uncertainty in global model predictions of the black carbon radiative budget. Previous modelling studies that have represented this mixing state using a core-shell approach have shown that absorption by aged black carbon particles may be considerably enhanced compared to that by freshly emitted black carbon, due to the addition of co-emitted, weakly absorbing species. However, recent field results have demonstrated that any enhancement of absorption is minor in the ambient atmosphere. Resolving these differences in absorption efficiency is important as they will have a major impact on the extent to which black carbon heats the atmospheric column. We have made morphology-independent measurements of refractory black carbon mass and associated weakly absorbing material in single particles from laboratory-generated diesel soot and black carbon particles in ambient air influenced by traffic and wood-burning sources, and related these to the optical properties of the particles. We compared our calculated optical properties with optical models that use varying mixing state assumptions and, by characterising the behaviour in terms of the relative amounts of weakly absorbing material and black carbon in a particle, we show that a sharp transition in mixing occurs. We show that the majority of black carbon particles from traffic-dominated sources can be treated as externally mixed and show no absorption enhancement, whereas models assuming internal mixing tend to give the best estimate of the absorption enhancement of thickly coated black carbon particles from biofuel or biomass burning. This approach reconciles the differences in absorption enhancement previously observed and offers a systematic way of treating the differences in behaviour observed.

  13. Long-term observations of black carbon mass concentrations at Fukue Island, western Japan, during 2009-2015: constraining wet removal rates and emission strengths from East Asia

    NASA Astrophysics Data System (ADS)

    Kanaya, Yugo; Pan, Xiaole; Miyakawa, Takuma; Komazaki, Yuichi; Taketani, Fumikazu; Uno, Itsushi; Kondo, Yutaka

    2016-08-01

    Long-term (2009-2015) observations of atmospheric black carbon (BC) mass concentrations were performed using a continuous soot-monitoring system (COSMOS) at Fukue Island, western Japan, to provide information on wet removal rate constraints and the emission strengths of important source regions in East Asia (China and others). The annual average mass concentration was 0.36 µg m⁻³, with distinct seasonality; high concentrations were recorded during autumn, winter, and spring and were caused by Asian continental outflows, which reached Fukue Island in 6-46 h. The observed data were categorized into two classes, i.e., with and without a wet removal effect, using the accumulated precipitation along a backward trajectory (APT) for the last 3 days as an index. Statistical analysis of the observed ΔBC / ΔCO ratios was performed to obtain information on the emission ratios (from data with zero APT only) and wet removal rates (including data with nonzero APTs). The estimated emission ratios (5.2-6.9 ng m⁻³ ppb⁻¹) varied over the six air mass origin areas; the higher ratios for south-central East China (30-35° N) than for north-central East China (35-40° N) indicated the relative importance of domestic emissions and/or biomass burning sectors. The significantly higher BC / CO emission ratios adopted in the bottom-up Regional Emission inventory in Asia (REAS) version 2 (8.3-23 ng m⁻³ ppb⁻¹) over central East China and Korea needed to be reduced at least by factors of 1.3 and 2.8 for central East China and Korea, respectively, but the ratio for Japan was reasonable. The wintertime enhancement of the BC emission from China, predicted by REAS2, was verified for air masses from south-central East China but not for those from north-central East China. Wet removal of BC was clearly identified as a decrease in the ΔBC / ΔCO ratio against APT. The transport efficiency (TE), defined as the ratio of the ΔBC / ΔCO ratio with precipitation to that without precipitation, was
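
    The transport efficiency (TE) defined above is a ratio of ΔBC / ΔCO values for precipitation-affected and dry air masses. The sketch below shows one minimal way to compute it from categorized observations; the APT threshold, the use of medians, and the numbers themselves are illustrative assumptions, not the Fukue Island dataset.

```python
import numpy as np

def transport_efficiency(bc_co_ratio, apt_mm, apt_threshold=1.0):
    """Transport efficiency (TE): median dBC/dCO of air masses that experienced
    precipitation along the trajectory divided by the median ratio of dry
    (APT ~ 0) air masses."""
    bc_co_ratio = np.asarray(bc_co_ratio, dtype=float)
    apt_mm = np.asarray(apt_mm, dtype=float)
    dry = bc_co_ratio[apt_mm < apt_threshold]
    wet = bc_co_ratio[apt_mm >= apt_threshold]
    return np.median(wet) / np.median(dry)

# Hypothetical observations: ratios in ng m^-3 ppb^-1, APT in mm over 3 days
ratios = [6.1, 5.8, 6.4, 3.2, 2.5, 4.0, 1.8]
apts = [0.0, 0.2, 0.5, 12.0, 30.0, 8.0, 55.0]
print(f"TE ~ {transport_efficiency(ratios, apts):.2f}")
```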

  14. Shallow Chamber & Conduit Behavior of Silicic Magma: A Thermo- and Fluid- Dynamic Parameterization Model of Physical Deformation as Constrained by Geodetic Observations: Case Study; Soufriere Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Gunn de Rosas, C. L.

    2013-12-01

    The Soufrière Hills Volcano, Montserrat (SHV) is an active, mainly andesitic and well-studied stratovolcano situated at the northern end of the Lesser Antilles Arc subduction zone in the Caribbean Sea. The goal of our research is to create a high resolution 3D subsurface model of the shallow and deeper aspects of the magma storage and plumbing system at SHV. Our model will integrate inversions using continuous and campaign geodetic observations at SHV from 1995 to the present as well as local seismic records taken at various unrest intervals to construct a best-fit geometry, pressure point source and inflation rate and magnitude. We will also incorporate a heterogeneous media in the crust and use the most contemporary understanding of deep crustal- or even mantle-depth 'hot-zone' genesis and chemical evolution of silicic and intermediate magmas to inform the character of the deep edifice influx. Our heat transfer model will be constructed with a modified 'thin shell' enveloping the magma chamber to simulate the insulating or conducting influence of heat-altered chamber boundary conditions. The final forward model should elucidate observational data preceding and proceeding unrest events, the behavioral suite of magma transport in the subsurface environment and the feedback mechanisms that may contribute to eruption triggering. Preliminary hypotheses suggest wet, low-viscosity residual melts derived from 'hot zones' will ascend rapidly to shallower stall-points and that their products (eventually erupted lavas as well as stalled plutonic masses) will experience and display two discrete periods of shallow evolution; a rapid depressurization crystallization event followed by a slower conduction-controlled heat transfer and cooling crystallization. These events have particular implications for shallow magma behaviors, notably inflation, compressibility and pressure values. Visualization of the model with its inversion constraints will be affected with Com

  15. Sensitivity testing of a 1-D calving criterion numerical model constrained by observations of post-LIA fluctuations of Kangiata Nunaata Sermia, SW Greenland

    NASA Astrophysics Data System (ADS)

    Lea, J. M.; Mair, D.; Nick, F. M.; Rea, B. R.; Schofield, E.; Nienow, P. W.

    2012-12-01

    The ability to successfully model the behaviour of Greenlandic tidewater glaciers is pivotal for the prediction of future behaviour and potential impact on global sea level. However, to have confidence in the results of numerical models, they must be capable of replicating the full range of observed glacier behaviour (i.e. both advance and retreat) when realistic forcings are applied. Due to the paucity of observational records of this behaviour, it is necessary to verify calving models against reconstructions of glacier dynamics. The dynamics of Kangiata Nunaata Sermia (KNS) can be reconstructed with a high degree of detail using a combination of sedimentological and geomorphological evidence, photographs, historical sources and satellite imagery. Since the LIA maximum, KNS has retreated a total of 21 km, with multiple phases of rapid retreat evident between topographic pinning points. A readvance associated with the '1920 stade', attaining a position 9 km from the current terminus, is also identified. KNS therefore represents an ideal test location for calving models since it has both advanced and retreated over known timescales, while the scale of fluctuations implies KNS is sensitive to parameter(s) controlling terminus stability. Using the known stable positions for verification, we present the results of an array of sensitivity tests conducted on KNS using the 1-D flowband calving model of Nick et al. (2009). The model is initially tuned to an historically stable position where the glacier configuration is accurately known (in this case 1985), and forced by varying surface mass balance, crevasse water depth, and submarine melt rate at the calving front, in addition to the strength and pervasiveness of sikussak in the fjord. Successive series of experiments were run using each parameter to test model sensitivity to the initial conditions of each variable. Results indicate that the model is capable of stabilising at locations that are in agreement with

  16. Dynamic triggering of creep events in the Salton Trough, Southern California by regional M ≥ 5.4 earthquakes constrained by geodetic observations and numerical simulations

    NASA Astrophysics Data System (ADS)

    Wei, Meng; Liu, Yajing; Kaneko, Yoshihiro; McGuire, Jeffrey J.; Bilham, Roger

    2015-10-01

    Since a regional earthquake in 1951, shallow creep events on strike-slip faults within the Salton Trough, Southern California have been triggered at least 10 times by M ≥ 5.4 earthquakes within 200 km. The high earthquake and creep activity and the long history of digital recording within the Salton Trough region provide a unique opportunity to study the mechanism of creep event triggering by nearby earthquakes. Here, we document the history of fault creep events on the Superstition Hills Fault based on data from creepmeters, InSAR, and field surveys since 1988. We focus on a subset of these creep events that were triggered by significant nearby earthquakes. We model these events by adding realistic static and dynamic perturbations to a theoretical fault model based on rate- and state-dependent friction. We find that the static stress changes from the causal earthquakes are less than 0.1 MPa and too small to instantaneously trigger creep events. In contrast, we can reproduce the characteristics of triggered slip with dynamic perturbations alone. The instantaneous triggering of creep events depends on the peak and the time-integrated amplitudes of the dynamic Coulomb stress change. Based on observations and simulations, the stress change amplitude required to trigger a creep event of a 0.01-mm surface slip is about 0.6 MPa. This threshold is at least an order of magnitude larger than the reported triggering threshold of non-volcanic tremors (2-60 kPa) and earthquakes in geothermal fields (5 kPa) and near shale gas production sites (0.2-0.4 kPa), which may result from differences in effective normal stress, fault friction, the density of nucleation sites in these systems, or triggering mechanisms. We conclude that shallow frictional heterogeneity can explain both the spontaneous and dynamically triggered creep events on the Superstition Hills Fault.
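
    The triggering analysis above rests on the Coulomb failure stress change, ΔCFS = Δτ + μ′Δσn. The sketch below evaluates it for hypothetical peak dynamic stresses and an assumed effective friction coefficient; only the ~0.6 MPa threshold is taken from the abstract.

```python
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, effective_friction=0.4):
    """Coulomb failure stress change: dCFS = d_tau + mu' * d_sigma_n, with the
    normal-stress change taken positive for unclamping. The effective friction
    coefficient here is an assumed value, not one from the study."""
    return d_shear_mpa + effective_friction * d_normal_mpa

# Hypothetical peak dynamic stresses carried by a passing seismic wave
peak = coulomb_stress_change(d_shear_mpa=0.55, d_normal_mpa=0.20)
print(f"peak dynamic dCFS ~ {peak:.2f} MPa "
      f"({'above' if peak >= 0.6 else 'below'} the ~0.6 MPa triggering level)")
```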

  17. Choosing health, constrained choices.

    PubMed

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.

  18. Observation of high energy atmospheric neutrinos with antarctic muon and neutrino detector array

    SciTech Connect

    Ahrens, J.; Andres, E.; Bai, X.; Barouch, G.; Barwick, S.W.; Bay, R.C.; Becka, T.; Becker, K.-H.; Bertrand, D.; Binon, F.; Biron, A.; Booth, J.; Botner, O.; Bouchta, A.; Bouhali, O.; Boyce, M.M.; Carius, S.; Chen, A.; Chirkin, D.; Conrad, J.; Cooley, J.; Costa, C.G.S.; Cowen, D.F.; Dalberg, E.; De Clercq, C.; DeYoung, T.; Desiati, P.; Dewulf, J.-P.; Doksus, P.; Edsjo, J.; Ekstrom, P.; Feser, T.; Frere, J.-M.; Gaisser, T.K.; Gaug, M.; Goldschmidt, A.; Hallgren, A.; Halzen, F.; Hanson, K.; Hardtke, R.; Hauschildt, T.; Hellwig, M.; Heukenkamp, H.; Hill, G.C.; Hulth, P.O.; Hundertmark, S.; Jacobsen, J.; Karle, A.; Kim, J.; Koci, B.; Kopke, L.; Kowalski, M.; Lamoureux, J.I.; Leich, H.; Leuthold, M.; Lindahl, P.; Liubarsky, I.; Loaiza, P.; Lowder, D.M.; Madsen, J.; Marciniewski, P.; Matis, H.S.; McParland, C.P.; Miller, T.C.; Minaeva, Y.; Miocinovic, P.; Mock, P.C.; Morse, R.; Neunhoffer, T.; Niessen, P.; Nygren, D.R.; Ogelman, H.; Olbrechts, Ph.; Perez de los Heros, C.; Pohl, A.C.; Porrata, R.; Price, P.B.; Przybylski, G.T.; Rawlins, K.; Reed, C.; Rhode, W.; Ribordy, M.; Richter, S.; Rodriguez Martino, J.; Romenesko, P.; Ross, D.; Sander, H.-G.; Schmidt, T.; Schneider, D.; Schwarz, R.; Silvestri, A.; Solarz, M.; Spiczak, G.M.; Spiering, C.; Starinsky, N.; Steele, D.; Steffen, P.; Stokstad, R.G.; Streicher, O.; Sudhoff, P.; Sulanke, K.-H.; Taboada, I.; Thollander, L.; Thon, T.; Tilav, S.; Vander Donckt, M.; Walck, C.; Weinheimer, C.; Wiebusch, C.H.; Wiedeman, C.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Wu, W.; Yodh, G.; Young, S.

    2002-05-07

    The Antarctic Muon and Neutrino Detector Array (AMANDA) began collecting data with ten strings in 1997. Results from the first year of operation are presented. Neutrinos coming through the Earth from the Northern Hemisphere are identified by secondary muons moving upward through the array. Cosmic rays in the atmosphere generate a background of downward moving muons, which are about 10^6 times more abundant than the upward moving muons. Over 130 days of exposure, we observed a total of about 300 neutrino events. In the same period, a background of 1.05 × 10^9 cosmic ray muon events was recorded. The observed neutrino flux is consistent with atmospheric neutrino predictions. Monte Carlo simulations indicate that 90 percent of these events lie in the energy range 66 GeV to 3.4 TeV. The observation of atmospheric neutrinos consistent with expectations establishes AMANDA-B10 as a working neutrino telescope.

  19. Constrained Adaptive Sensing

    NASA Astrophysics Data System (ADS)

    Davenport, Mark A.; Massimino, Andrew K.; Needell, Deanna; Woolf, Tina

    2016-10-01

    Suppose that we wish to estimate a vector $\mathbf{x} \in \mathbb{C}^n$ from a small number of noisy linear measurements of the form $\mathbf{y} = \mathbf{A x} + \mathbf{z}$, where $\mathbf{z}$ represents measurement noise. When the vector $\mathbf{x}$ is sparse, meaning that it has only $s$ nonzeros with $s \ll n$, one can obtain a significantly more accurate estimate of $\mathbf{x}$ by adaptively selecting the rows of $\mathbf{A}$ based on the previous measurements provided that the signal-to-noise ratio (SNR) is sufficiently large. In this paper we consider the case where we wish to realize the potential of adaptivity but where the rows of $\mathbf{A}$ are subject to physical constraints. In particular, we examine the case where the rows of $\mathbf{A}$ are constrained to belong to a finite set of allowable measurement vectors. We demonstrate both the limitations and advantages of adaptive sensing in this constrained setting. We prove that for certain measurement ensembles, the benefits offered by adaptive designs fall far short of the improvements that are possible in the unconstrained adaptive setting. On the other hand, we also provide both theoretical and empirical evidence that in some scenarios adaptivity does still result in substantial improvements even in the constrained setting. To illustrate these potential gains, we propose practical algorithms for constrained adaptive sensing by exploiting connections to the theory of optimal experimental design and show that these algorithms exhibit promising performance in some representative applications.
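
    As a toy illustration of sensing with rows restricted to a finite set, the sketch below greedily picks the unused candidate row most aligned with the current least-squares estimate. This heuristic is an assumption made for illustration; the paper's algorithms are based on optimal experimental design and are not reproduced here.

```python
import numpy as np

def constrained_adaptive_sensing(x, candidate_rows, m, noise_std=0.01, seed=0):
    """Greedy sketch: each step measures with the unused candidate row most
    aligned with the current estimate, then re-solves a least-squares problem.
    Illustrative heuristic only, not the algorithms analyzed in the paper."""
    rng = np.random.default_rng(seed)
    n_candidates, n = candidate_rows.shape
    x_hat = np.zeros(n)
    chosen, y = [], []
    for step in range(m):
        scores = np.abs(candidate_rows @ x_hat)
        if step == 0:
            scores = rng.random(n_candidates)      # nothing known yet
        scores[chosen] = -1.0                      # do not reuse a row
        k = int(np.argmax(scores))
        chosen.append(k)
        y.append(candidate_rows[k] @ x + noise_std * rng.normal())
        A = candidate_rows[chosen]                 # measurements so far
        x_hat, *_ = np.linalg.lstsq(A, np.array(y), rcond=None)
    return x_hat, chosen

# Tiny example: a sparse x, candidate rows drawn once from a random ensemble
rng = np.random.default_rng(3)
n, s = 30, 2
x = np.zeros(n); x[rng.choice(n, s, replace=False)] = rng.normal(size=s)
candidates = rng.normal(size=(200, n)) / np.sqrt(n)
x_hat, rows = constrained_adaptive_sensing(x, candidates, m=20)
print("recovery error:", round(float(np.linalg.norm(x - x_hat)), 3))
```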

  20. Constrained space camera assembly

    DOEpatents

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  1. Constraining the Europa Neutral Torus

    NASA Astrophysics Data System (ADS)

    Smith, Howard T.; Mitchell, Donald; Mauk, Barry; Johnson, Robert E.; Clark, George

    2016-10-01

    "Neutral tori" consist of neutral particles that usually co-orbit along with their source forming a toroidal (or partial toroidal) feature around the planet. The distribution and composition of these features can often provide important, if not unique, insight into magnetospheric particles sources, mechanisms and dynamics. However, these features can often be difficult to directly detect. One innovative method for detecting neutral tori is by observing Energetic Neutral Atoms (ENAs) that are generally considered produced as a result of charge exchange interactions between charged and neutral particles.Mauk et al. (2003) reported the detection of a Europa neutral particle torus using ENA observations. The presence of a Europa torus has extremely large implications for upcoming missions to Jupiter as well as understanding possible activity at this moon and providing critical insight into what lies beneath the surface of this icy ocean world. However, ENAs can also be produced as a result of charge exchange interactions between two ionized particles and in that case cannot be used to infer the presence of neutral particle population. Thus, a detailed examination of all possible source interactions must be considered before one can confirm that likely original source population of these ENA images is actually a Europa neutral particle torus. For this talk, we examine the viability that the Mauk et al. (2003) observations were actually generated from a neutral torus emanating from Europa as opposed to charge particle interactions with plasma originating from Io. These results help constrain such a torus as well as Europa source processes.

  2. Constraining anisotropic baryon oscillations

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin

    2008-06-01

    We present an analysis of anisotropic baryon acoustic oscillations and elucidate how a mis-estimation of the cosmology, which leads to incorrect values of the angular diameter distance, dA, and Hubble parameter, H, manifests itself in changes to the monopole and quadrupole power spectrum of biased tracers of the density field. Previous work has focused on the monopole power spectrum, and shown that the isotropic dilation combination dA²H⁻¹ is robustly constrained by an overall shift in the scale of the baryon feature. We extend this by demonstrating that the quadrupole power spectrum is sensitive to an anisotropic warping mode dAH, allowing one to break the degeneracy between dA and H. We describe a method for measuring this warping, explicitly marginalizing over the form of redshift-space distortions. We verify this method on N-body simulations and estimate that dAH can be measured with a fractional accuracy of ~(3/V)% where the survey volume is measured in h⁻³ Gpc³.
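
    The two distortion parameters discussed above can be formed directly from dA and H relative to a fiducial cosmology. The sketch below computes an isotropic dilation (via the dA²H⁻¹ combination) and the anisotropic warping dAH for hypothetical distance and Hubble values; it is illustrative only, not the paper's power spectrum analysis.

```python
def bao_distortion_parameters(d_a, h, d_a_fid, h_fid):
    """Isotropic dilation and anisotropic warping of the BAO feature relative to
    a fiducial cosmology: the monopole constrains (d_A^2 / H) and the quadrupole
    adds sensitivity to (d_A * H)."""
    isotropic = (d_a ** 2 / h) / (d_a_fid ** 2 / h_fid)
    warping = (d_a * h) / (d_a_fid * h_fid)
    return isotropic ** (1.0 / 3.0), warping   # cube root ~ per-axis dilation

# Hypothetical true vs. fiducial values at some redshift (Mpc, km/s/Mpc)
alpha_iso, warp = bao_distortion_parameters(d_a=1390.0, h=92.0,
                                            d_a_fid=1420.0, h_fid=90.0)
print(f"isotropic scale shift ~ {alpha_iso:.3f}, anisotropic warping ~ {warp:.3f}")
```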

  3. Constrained space camera assembly

    DOEpatents

    Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.

    1999-05-11

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.

  4. Power-constrained supercomputing

    NASA Astrophysics Data System (ADS)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound
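
    The LP idea summarized above can be illustrated with a toy schedule optimization. The sketch below (Python with SciPy) chooses how long to run in each DVFS/thread configuration so that a fixed amount of work finishes as quickly as possible while the time-averaged power stays under a cap; the configurations, power draws, and speeds are invented for illustration and are not taken from the dissertation.

        # Toy power-constrained schedule LP: choose the time t_c spent in each
        # configuration c to finish W work units as fast as possible while the
        # time-averaged power stays under a cap.  All numbers are invented.
        import numpy as np
        from scipy.optimize import linprog

        speed = np.array([1.0, 1.6, 2.0])    # work units per second per config
        power = np.array([60., 95., 140.])   # watts drawn per config
        work  = 1000.0                       # total work units to complete
        p_cap = 100.0                        # average power cap [W]

        # minimize sum(t_c) subject to:
        #   -sum(speed_c * t_c)          <= -work   (all work gets done)
        #    sum((power_c - p_cap)*t_c)  <= 0       (average power <= cap)
        #    t_c >= 0
        res = linprog(c=np.ones_like(speed),
                      A_ub=np.vstack([-speed, power - p_cap]),
                      b_ub=np.array([-work, 0.0]),
                      bounds=[(0, None)] * len(speed))

        print("time per configuration [s]:", np.round(res.x, 1))
        print("total runtime [s]:", round(res.fun, 1))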

  5. Constrained Vapor Bubble

    NASA Technical Reports Server (NTRS)

    Huang, J.; Karthikeyan, M.; Plawsky, J.; Wayner, P. C., Jr.

    1999-01-01

    The nonisothermal Constrained Vapor Bubble, CVB, is being studied to enhance the understanding of passive systems controlled by interfacial phenomena. The study is multifaceted: 1) it is a basic scientific study in interfacial phenomena, fluid physics and thermodynamics; 2) it is a basic study in thermal transport; and 3) it is a study of a heat exchanger. The research is synergistic in that CVB research requires a microgravity environment and the space program needs thermal control systems like the CVB. Ground-based studies are being done as a precursor to the flight experiment. The results demonstrate that experimental techniques for the direct measurement of the fundamental operating parameters (temperature, pressure, and interfacial curvature fields) have been developed. Fluid flow and change-of-phase heat transfer are functions of the temperature field and the vapor bubble shape, which can be measured using an Image Analyzing Interferometer. The CVB for a microgravity environment has various thin-film regions that are of both basic and applied interest. Generically, a CVB is formed by underfilling an evacuated enclosure with a liquid. Classification depends on shape and Bond number. The specific CVB discussed herein was formed in a fused silica cell with inside dimensions of 3x3x40 mm and, therefore, can be viewed as a large version of a micro heat pipe. Since the dimensions are relatively large for a passive system, most of the liquid flow occurs under a small capillary pressure difference. Therefore, we can classify the discussed system as a low capillary pressure system. The studies discussed herein were done in a 1-g environment (Bond number = 3.6) to obtain experience for designing a microgravity experiment for a future NASA flight where low capillary pressure systems should prove more useful. The flight experiment is tentatively scheduled for the year 2000. The SCR was passed on September 16, 1997. The RDR is tentatively scheduled for October 1998.
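
    For readers unfamiliar with the Bond number quoted above, the short sketch below evaluates Bo = ρ g L^2 / σ for the 3 mm cell dimension. The working fluid and its properties are assumptions for illustration (the abstract does not state them); pentane-like values happen to land near the quoted value of 3.6.

        # Bond number Bo = rho * g * L^2 / sigma for a 3 mm square CVB cell.
        # The fluid properties are assumed (pentane-like); they are not taken
        # from the abstract.
        rho   = 626.0    # liquid density [kg/m^3], assumed
        sigma = 0.0157   # surface tension [N/m], assumed
        g     = 9.81     # gravitational acceleration [m/s^2]
        L     = 3.0e-3   # characteristic cell dimension [m]

        bond = rho * g * L**2 / sigma
        print(f"Bond number ~ {bond:.1f}")   # ~3.5, close to the quoted 3.6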

  6. BICEP2 constrains composite inflation

    NASA Astrophysics Data System (ADS)

    Channuie, Phongpichit

    2014-07-01

    In light of BICEP2, we re-examine single field inflationary models in which the inflaton is a composite state stemming from various four-dimensional strongly coupled theories. We study in the Einstein frame a set of cosmological parameters, the primordial spectral index ns and tensor-to-scalar ratio r, predicted by such models. We confront the predicted results with the joint Planck data, and with the recent BICEP2 data. We constrain the number of e-foldings for composite models of inflation in order to obtain a successful inflation. We find that the minimal composite inflationary model is fully consistent with the Planck data. However it is in tension with the recent BICEP2 data. The observables predicted by the glueball inflationary model can be consistent with both Planck and BICEP2 contours if a suitable number of e-foldings is chosen. Surprisingly, the super Yang-Mills inflationary prediction is significantly consistent with the Planck and BICEP2 observations.

  7. Cosmicflows Constrained Local UniversE Simulations

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo

    2016-01-01

    This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h^-1 Mpc scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s^-1, i.e. the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h^-1 Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h^-1 Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.
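
    The cosmic-variance measure used above, the mean one-sigma scatter of a cell-to-cell comparison between two fields, reduces to a one-line computation; the sketch below is illustrative only, with random arrays standing in for the simulated and reconstructed velocity grids.

        # One-sigma cell-to-cell scatter between two gridded fields, as used to
        # compare constrained/random simulations with the reconstructed velocity
        # field.  Random arrays stand in for the real gridded fields here.
        import numpy as np

        rng = np.random.default_rng(0)
        field_a = rng.normal(0.0, 300.0, size=(32, 32, 32))    # e.g. v_x [km/s]
        field_b = field_a + rng.normal(0.0, 100.0, size=field_a.shape)

        def cell_to_cell_scatter(a, b):
            """One-sigma scatter of the cell-wise difference of two fields."""
            return np.std(a - b)

        print(f"scatter = {cell_to_cell_scatter(field_a, field_b):.0f} km/s")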

  8. Constrained Allocation Flux Balance Analysis.

    PubMed

    Mori, Matteo; Hwa, Terence; Martin, Olivier C; De Martino, Andrea; Marinari, Enzo

    2016-06-01

    New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing one to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an "ensemble averaging" procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions on the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws.
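
    To make the single extra genome-wide constraint concrete, the sketch below sets up a deliberately tiny toy network (one internal metabolite and three irreversible reactions) and solves standard FBA with one additional weighted-allocation inequality, in the spirit of CAFBA. The stoichiometry, weights, and bounds are all invented; this is not the authors' model or code.

        # Toy "constrained allocation" FBA: maximize growth subject to mass
        # balance plus ONE genome-wide allocation constraint sum_i w_i v_i <= phi.
        # Network, weights and bounds are invented for illustration.
        import numpy as np
        from scipy.optimize import linprog

        # Irreversible reactions: v0 uptake (-> A), v1 growth (A ->), v2 overflow (A ->)
        S = np.array([[1.0, -1.0, -1.0]])            # mass balance of metabolite A
        w = np.array([0.02, 0.10, 0.01])             # proteome cost per unit flux (assumed)
        phi_max = 0.5                                # allocation budget (assumed)
        bounds = [(0, 10.0), (0, None), (0, None)]   # uptake capped at 10

        res = linprog(c=[0.0, -1.0, 0.0],            # maximize growth flux v1
                      A_eq=S, b_eq=[0.0],
                      A_ub=w.reshape(1, -1), b_ub=[phi_max],
                      bounds=bounds)

        print("uptake, growth, overflow fluxes:", np.round(res.x, 2))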

  9. Constrained Allocation Flux Balance Analysis.

    PubMed

    Mori, Matteo; Hwa, Terence; Martin, Olivier C; De Martino, Andrea; Marinari, Enzo

    2016-06-01

    New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing one to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an "ensemble averaging" procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions on the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws. PMID:27355325

  10. Gyrification from constrained cortical expansion

    PubMed Central

    Tallinen, Tuomas; Chung, Jun Young; Biggins, John S.; Mahadevan, L.

    2014-01-01

    The exterior of the mammalian brain—the cerebral cortex—has a conserved layered structure whose thickness varies little across species. However, selection pressures over evolutionary time scales have led to cortices that have a large surface area to volume ratio in some organisms, with the result that the brain is strongly convoluted into sulci and gyri. Here we show that the gyrification can arise as a nonlinear consequence of a simple mechanical instability driven by tangential expansion of the gray matter constrained by the white matter. A physical mimic of the process using a layered swelling gel captures the essence of the mechanism, and numerical simulations of the brain treated as a soft solid lead to the formation of cusped sulci and smooth gyri similar to those in the brain. The resulting gyrification patterns are a function of relative cortical expansion and relative thickness (compared with brain size), and are consistent with observations of a wide range of brains, ranging from smooth to highly convoluted. Furthermore, this dependence on two simple geometric parameters that characterize the brain also allows us to qualitatively explain how variations in these parameters lead to anatomical anomalies in such situations as polymicrogyria, pachygyria, and lissencephalia. PMID:25136099

  11. Constrained Allocation Flux Balance Analysis

    PubMed Central

    Mori, Matteo; Hwa, Terence; Martin, Olivier C.

    2016-01-01

    New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing one to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an “ensemble averaging” procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions on the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws. PMID:27355325

  12. Constrained simulation of the Bullet Cluster

    SciTech Connect

    Lage, Craig; Farrar, Glennys

    2014-06-01

    In this work, we report on a detailed simulation of the Bullet Cluster (1E0657-56) merger, including magnetohydrodynamics, plasma cooling, and adaptive mesh refinement. We constrain the simulation with data from gravitational lensing reconstructions and the 0.5-2 keV Chandra X-ray flux map, then compare the resulting model to higher energy X-ray fluxes, the extracted plasma temperature map, Sunyaev-Zel'dovich effect measurements, and cluster halo radio emission. We constrain the initial conditions by minimizing the chi-squared figure of merit between the full two-dimensional (2D) observational data sets and the simulation, rather than comparing only a few features such as the location of subcluster centroids, as in previous studies. A simple initial configuration of two triaxial clusters with Navarro-Frenk-White dark matter profiles and physically reasonable plasma profiles gives a good fit to the current observational morphology and X-ray emissions of the merging clusters. There is no need for unconventional physics or extreme infall velocities. The study gives insight into the astrophysical processes at play during a galaxy cluster merger, and constrains the strength and coherence length of the magnetic fields. The techniques developed here to create realistic, stable, triaxial clusters, and to utilize the totality of the 2D image data, will be applicable to future simulation studies of other merging clusters. This approach of constrained simulation, when applied to well-measured systems, should be a powerful complement to present tools for understanding X-ray clusters and their magnetic fields, and the processes governing their formation.
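
    The figure-of-merit idea, comparing full 2D maps rather than a few centroid positions, reduces to a pixel-by-pixel chi-squared summed over data sets. The snippet below is a hedged illustration with random arrays standing in for the lensing and 0.5-2 keV X-ray maps; it is not the authors' pipeline.

        # Pixel-wise chi-squared figure of merit between observed and simulated
        # 2D maps (e.g. a lensing mass map and an X-ray flux map).
        # Random arrays stand in for the real maps.
        import numpy as np

        rng = np.random.default_rng(1)

        def chi2_map(obs, model, sigma):
            """Chi-squared contribution of a single 2D map."""
            return np.sum(((obs - model) / sigma) ** 2)

        obs_lensing = rng.normal(1.0, 0.1, (64, 64))
        obs_xray    = rng.normal(5.0, 0.5, (64, 64))
        sim_lensing = np.ones((64, 64))              # placeholder simulated maps
        sim_xray    = np.full((64, 64), 5.0)

        chi2 = (chi2_map(obs_lensing, sim_lensing, sigma=0.1) +
                chi2_map(obs_xray,    sim_xray,    sigma=0.5))
        print(f"total chi-squared over {2 * 64 * 64} pixels: {chi2:.0f}")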

  13. Constrained evolution of nanocrystallites

    NASA Astrophysics Data System (ADS)

    Degawa, M.; Thürmer, K.; Williams, E. D.

    2006-10-01

    Deviations from the universal predictions of shape-preserving structure evolution have been investigated in the context of realistic physical boundary conditions for supported nanoscale crystallites. Structural evolution was simulated using the continuum step model with volume conservation, variable interface free energy, and incorporating analytical solutions for equilibrium and metastable crystallite shapes. Early stages of evolution following a simulated temperature drop are consistent with the kinetics of shape-preserving evolution, e.g., not limited by the constant volume constraint. Later stages of decay show a distinct slowdown, with an empirically determined exponential form. The time constant of the slow final evolution increases linearly with the length scale of the crystallite, and also increases monotonically with interface adhesion strength. Under normal evolution, where the interface area between the crystallite and substrate is constant or increasing, the evolution progresses through the metastable states accessible to the volume. If a decreasing interface area can be induced, an alternative progression ending much closer to equilibrium is possible. The late-stage slowdown provides additional kinetic information that allows the nonuniqueness of early-stage modeling to be resolved. The slowdown observed in the late stages of relaxation of Pb crystallites has been fit, with a unique determination of the relative values of the terrace diffusion constant and step attachment constant.
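
    The empirically determined exponential form of the late-stage slowdown can be fitted in a few lines; the sketch below fits A exp(-t/tau) + C to synthetic decay data. The functional form follows the description above, but the data and parameter values are invented.

        # Fit an exponential late-stage relaxation with scipy.
        # The synthetic "decay" data below are invented for illustration.
        import numpy as np
        from scipy.optimize import curve_fit

        def late_stage(t, amplitude, tau, offset):
            """Exponential relaxation toward a final value, time constant tau."""
            return amplitude * np.exp(-t / tau) + offset

        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 50.0, 200)              # arbitrary time units
        height = late_stage(t, 12.0, 15.0, 3.0) + rng.normal(0.0, 0.2, t.size)

        popt, _ = curve_fit(late_stage, t, height, p0=(10.0, 10.0, 1.0))
        print("fitted amplitude, tau, offset:", np.round(popt, 2))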

  14. Constraining curvatonic reheating

    NASA Astrophysics Data System (ADS)

    Hardwick, Robert J.; Vennin, Vincent; Koyama, Kazuya; Wands, David

    2016-08-01

    We derive the first systematic observational constraints on reheating in models of inflation where an additional light scalar field contributes to primordial density perturbations and affects the expansion history during reheating. This encompasses the original curvaton model but also covers a larger class of scenarios. We find that, compared to the single-field case, lower values of the energy density at the end of inflation and of the reheating temperature are preferred when an additional scalar field is introduced. For instance, if inflation is driven by a quartic potential, which is one of the most favoured models when a light scalar field is added, the upper bound T_reh < 5 × 10^4 GeV on the reheating temperature T_reh is derived, and the implications of this value on post-inflationary physics are discussed. The information gained about reheating is also quantified and it is found that it remains modest in plateau inflation (though still larger than in the single-field version of the model) but can become substantial in quartic inflation. The role played by the vev of the additional scalar field at the end of inflation is highlighted, and opens interesting possibilities for exploring stochastic inflation effects that could determine its distribution.

  15. Optimization of retinotopy constrained source estimation constrained by prior

    PubMed Central

    Hagler, Donald J.

    2015-01-01

    Studying how the timing and amplitude of visual evoked responses (VERs) vary between visual areas is important for understanding visual processing but is complicated by difficulties in reliably estimating VERs in individual visual areas using non-invasive brain measurements. Retinotopy constrained source estimation (RCSE) addresses this challenge by using multiple, retinotopically-mapped stimulus locations to simultaneously constrain estimates of VERs in visual areas V1, V2, and V3, taking advantage of the spatial precision of fMRI retinotopy and the temporal resolution of magnetoencephalography (MEG) or electroencephalography (EEG). Nonlinear optimization of dipole locations, guided by a group-constrained RCSE solution as a prior, improved the robustness of RCSE. This approach facilitated the analysis of differences in timing and amplitude of VERs between V1, V2, and V3, elicited by stimuli with varying luminance contrast in a sample of eight adult humans. The V1 peak response was 37% larger than that of V2 and 74% larger than that of V3, and also ~10–20 msec earlier. Normalized contrast response functions were nearly identical for the three areas. Results without dipole optimization, or with other nonlinear methods not constrained by prior estimates were similar but suffered from greater between-subject variability. The increased reliability of estimates offered by this approach may be particularly valuable when using a smaller number of stimulus locations, enabling a greater variety of stimulus and task manipulations. PMID:23868690

  16. Constraining Source Redshift Distributions with Gravitational Lensing

    NASA Astrophysics Data System (ADS)

    Wittman, D.; Dawson, W. A.

    2012-09-01

    We introduce a new method for constraining the redshift distribution of a set of galaxies, using weak gravitational lensing shear. Instead of using observed shears and redshifts to constrain cosmological parameters, we ask how well the shears around clusters can constrain the redshifts, assuming fixed cosmological parameters. This provides a check on photometric redshifts, independent of source spectral energy distribution properties and therefore free of confounding factors such as misidentification of spectral breaks. We find that ~40 massive (σ_v = 1200 km s^-1) cluster lenses are sufficient to determine the fraction of sources in each of six coarse redshift bins to ~11%, given weak (20%) priors on the masses of the highest-redshift lenses, tight (5%) priors on the masses of the lowest-redshift lenses, and only modest (20%-50%) priors on calibration and evolution effects. Additional massive lenses drive down uncertainties as N_lens^(-1/2), but the improvement slows as one is forced to use lenses further down the mass function. Future large surveys contain enough clusters to reach 1% precision in the bin fractions if the tight lens-mass priors can be maintained for large samples of lenses. In practice this will be difficult to achieve, but the method may be valuable as a complement to other more precise methods because it is based on different physics and therefore has different systematic errors.
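
    Because the bin-fraction uncertainties shrink as N_lens^(-1/2), one can estimate how many equally constraining lenses a survey would need for a target precision. The sketch below scales the quoted ~11% from ~40 lenses; it is a back-of-the-envelope extrapolation that ignores the caveat about moving down the mass function.

        # Scale the per-bin precision with the number of cluster lenses,
        # assuming the stated N_lens^(-1/2) behaviour and equally massive lenses.
        n_ref, sigma_ref = 40, 0.11      # ~40 lenses give ~11% per bin (from abstract)

        def lenses_needed(sigma_target, n0=n_ref, sigma0=sigma_ref):
            """Number of lenses required for a target fractional uncertainty."""
            return n0 * (sigma0 / sigma_target) ** 2

        for target in (0.05, 0.02, 0.01):
            print(f"target {target:.0%} per bin -> ~{lenses_needed(target):.0f} lenses")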

  17. Haplotype inference constrained by plausible haplotype data.

    PubMed

    Fellows, Michael R; Hartman, Tzvika; Hermelin, Danny; Landau, Gad M; Rosamond, Frances; Rozenberg, Liat

    2011-01-01

    The haplotype inference problem (HIP) asks to find a set of haplotypes which resolve a given set of genotypes. This problem is important in practical fields such as the investigation of diseases or other types of genetic mutations. In order to find the haplotypes which are as close as possible to the real set of haplotypes that comprise the genotypes, two models have been suggested which are by now well-studied: the perfect phylogeny model and the pure parsimony model. All haplotype inference algorithms known to date may find haplotypes that are not necessarily plausible, i.e., very rare haplotypes or haplotypes that were never observed in the population. In order to overcome this disadvantage, we study in this paper a new constrained version of HIP under the above-mentioned models. In this new version, a pool of plausible haplotypes H is given together with the set of genotypes G, and the goal is to find a subset H' ⊆ H that resolves G. For constrained perfect phylogeny haplotyping (CPPH), we provide initial insights and polynomial-time algorithms for some restricted cases of the problem. For constrained parsimony haplotyping (CPH), we show that the problem is fixed parameter tractable when parameterized by the size of the solution set of haplotypes.
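
    The notion of a haplotype pair "resolving" a genotype can be made concrete in a few lines. The sketch below uses a common per-site encoding (0 and 1 for the two homozygous states, 2 for heterozygous), which is an assumption since the abstract does not fix one, and brute-forces whether a small plausible pool H' resolves every genotype in G.

        # Check whether a pool of plausible haplotypes resolves a set of genotypes.
        # Encoding (assumed): per site, genotype 0/1 = homozygous 0/1, 2 = heterozygous.
        from itertools import combinations_with_replacement

        def resolves(h1, h2, genotype):
            """True if the haplotype pair h1, h2 explains the genotype."""
            for a, b, g in zip(h1, h2, genotype):
                if g == 2:
                    if a == b:
                        return False
                elif not (a == b == g):
                    return False
            return True

        def pool_resolves(pool, genotypes):
            """True if every genotype is resolved by some pair from the pool."""
            return all(any(resolves(h1, h2, g)
                           for h1, h2 in combinations_with_replacement(pool, 2))
                       for g in genotypes)

        pool = [(0, 0, 1), (0, 1, 1), (1, 1, 0)]     # plausible haplotypes H'
        genotypes = [(0, 2, 1), (2, 1, 2)]           # observed genotypes G
        print(pool_resolves(pool, genotypes))        # True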

  18. Compositionally Constraining Elysium Lava Fields

    NASA Astrophysics Data System (ADS)

    Karunatillake, S.; Button, N. E.; Skok, J. R.

    2013-12-01

    Chemical provinces of Mars defined recently [1-3] became possible with the maps of elemental mass fractions generated with Mars Odyssey Gamma and Neutron Spectrometer (GS) data [4,5]. These provide a unique perspective by representing compositional signatures distinctive of the regolith vertically at decimeter depths and laterally at hundreds of kilometer scale. Some provinces overlap compellingly with regions highlighted by other remote sensing observations, such as the Mars Radar Stealth area [3]. The spatial convergence of mutually independent data with the consequent highlight of a region provides a unique opportunity of insight not possible with a single type of remote sensing observation. Among such provinces, previous work [3] highlighted Elysium lava flows as a promising candidate on the basis of convergence with mapped geologic units identifying Elysium's lava fields generally, and Amazonian-aged lava flows specifically. The South Eastern lava flows of Elysium Mons, dating to the recent Amazonian epoch, overlap compellingly with a chemical province of K and Th depletion relative to the Martian midlatitudes. We characterize the composition, geology, and geomorphology of the SE Elysium province to constrain the confluence of geologic and alteration processes that may have contributed to its evolution. We compare this with the North Western lava fields, extending the discussion on chemical products from the thermal evolution of Martian volcanism as discussed by Baratoux et al. [6]. The chemical province, by regional proximity to Cerberus Fossae, may also reflect the influence of recently identified buried flood channels [7] in the vicinity of Orcus Patera. Despite the compelling chemical signature from γ spectra, fine grained unconsolidated sediment hampers regional VNTIR (Visible, Near, and Thermal Infrared) spectral analysis. But some observations near scarps and fresh craters allow a view of small scale mineral content. The judicious synthesis of

  19. Constraining torsion with Gravity Probe B

    NASA Astrophysics Data System (ADS)

    Mao, Yi; Tegmark, Max; Guth, Alan H.; Cabi, Serkan

    2007-11-01

    It is well-entrenched folklore that all torsion gravity theories predict observationally negligible torsion in the solar system, since torsion (if it exists) couples only to the intrinsic spin of elementary particles, not to rotational angular momentum. We argue that this assumption has a logical loophole which can and should be tested experimentally, and consider nonstandard torsion theories in which torsion can be generated by macroscopic rotating objects. In the spirit of action=reaction, if a rotating mass like a planet can generate torsion, then a gyroscope would be expected to feel torsion. An experiment with a gyroscope (without nuclear spin) such as Gravity Probe B (GPB) can test theories where this is the case. Using symmetry arguments, we show that to lowest order, any torsion field around a uniformly rotating spherical mass is determined by seven dimensionless parameters. These parameters effectively generalize the parametrized post-Newtonian formalism and provide a concrete framework for further testing Einstein’s general theory of relativity (GR). We construct a parametrized Lagrangian that includes both standard torsion-free GR and Hayashi-Shirafuji maximal torsion gravity as special cases. We demonstrate that classic solar system tests rule out the latter and constrain two observable parameters. We show that Gravity Probe B is an ideal experiment for further constraining nonstandard torsion theories, and work out the most general torsion-induced precession of its gyroscope in terms of our torsion parameters.

  20. Constraining torsion with Gravity Probe B

    SciTech Connect

    Mao Yi; Guth, Alan H.; Cabi, Serkan; Tegmark, Max

    2007-11-15

    It is well-entrenched folklore that all torsion gravity theories predict observationally negligible torsion in the solar system, since torsion (if it exists) couples only to the intrinsic spin of elementary particles, not to rotational angular momentum. We argue that this assumption has a logical loophole which can and should be tested experimentally, and consider nonstandard torsion theories in which torsion can be generated by macroscopic rotating objects. In the spirit of action=reaction, if a rotating mass like a planet can generate torsion, then a gyroscope would be expected to feel torsion. An experiment with a gyroscope (without nuclear spin) such as Gravity Probe B (GPB) can test theories where this is the case. Using symmetry arguments, we show that to lowest order, any torsion field around a uniformly rotating spherical mass is determined by seven dimensionless parameters. These parameters effectively generalize the parametrized post-Newtonian formalism and provide a concrete framework for further testing Einstein's general theory of relativity (GR). We construct a parametrized Lagrangian that includes both standard torsion-free GR and Hayashi-Shirafuji maximal torsion gravity as special cases. We demonstrate that classic solar system tests rule out the latter and constrain two observable parameters. We show that Gravity Probe B is an ideal experiment for further constraining nonstandard torsion theories, and work out the most general torsion-induced precession of its gyroscope in terms of our torsion parameters.

  1. Constraining the Interior Earth Objects population

    NASA Astrophysics Data System (ADS)

    Fuentes, Cesar; Trilling, David E.; Knight, Matthew M.; Mommert, Michael; Hechenleitner, Federico

    2016-10-01

    Interior Earth Objects (IEOs) are among the least known populations in the Solar System. Ground-based surveys are extremely inefficient at finding them because IEOs spend most of the time inside the orbit of the Earth. We present observational constraints on the IEO population from STEREO (Solar TErrestrial RElations Observatory). This is the first result of searching through the archival STEREO data. Although after analyzing a year's worth of data we found no new IEOs, we observed hundreds of known asteroids. Our survey efficiency is computed with known and implanted synthetic objects, yielding a limiting magnitude of V~14.5. We constrain different IEO population models, yielding an upper limit for the total number of IEOs in line with previous estimates.

  2. How alive is constrained SUSY really?

    DOE PAGES

    Bechtle, Philip; Desch, Klaus; Dreiner, Herbert K.; Hamer, Matthias; Kramer, Michael; O'Leary, Ben; Porod, Werner; Sarrazin, Bjorn; Stefaniak, Tim; Uhlenbrock, Mathias; et al

    2016-05-31

    Constrained supersymmetric models like the CMSSM might look less attractive nowadays because of fine tuning arguments. They also might look less probable in terms of Bayesian statistics. The question of how well the model under study describes the data, however, is answered by frequentist p-values. Thus, for the first time, we calculate a p-value for a supersymmetric model by performing dedicated global toy fits. We combine constraints from low-energy and astrophysical observables, Higgs boson mass and rate measurements as well as the non-observation of new physics in searches for supersymmetry at the LHC. Furthermore, using the framework Fittino, we perform global fits of the CMSSM to the toy data and find that this model is excluded at the 90% confidence level.

  3. Using In Situ Observations and Satellite Retrievals to Constrain Large-Eddy Simulations and Single-Column Simulations: Implications for Boundary-Layer Cloud Parameterization in the NASA GISS GCM

    NASA Astrophysics Data System (ADS)

    Remillard, J.

    2015-12-01

    Two low-cloud periods from the CAP-MBL deployment of the ARM Mobile Facility at the Azores are selected through a cluster analysis of ISCCP cloud property matrices, so as to represent two low-cloud weather states that the GISS GCM severely underpredicts, not only in that region but also globally. The two cases represent (1) shallow cumulus clouds occurring in a cold-air outbreak behind a cold front, and (2) stratocumulus clouds occurring when the region was dominated by a high-pressure system. Observations and MERRA reanalysis are used to derive the specifications for large-eddy simulations (LES) and single-column model (SCM) simulations. The LES captures the major differences in horizontal structure between the two low-cloud fields, but there are unconstrained uncertainties in cloud microphysics and challenges in reproducing W-band Doppler radar moments. The SCM run on the vertical grid used for CMIP-5 runs of the GCM does a poor job of representing the shallow cumulus case and is unable to maintain an overcast deck in the stratocumulus case, providing some clues regarding problems with low-cloud representation in the GCM. SCM sensitivity tests with a finer vertical grid in the boundary layer show substantial improvement in the representation of cloud amount for both cases. GCM simulations with CMIP-5 versus finer vertical gridding in the boundary layer are compared with observations. The adoption of a two-moment cloud microphysics scheme in the GCM is also tested in this framework. The methodology followed in this study, with its process-based examination of different time and space scales in both models and observations, represents a prototype for GCM cloud parameterization improvements.

  4. Constrained Stochastic Extended Redundancy Analysis.

    PubMed

    DeSarbo, Wayne S; Hwang, Heungsun; Stadler Blank, Ashley; Kappe, Eelco

    2015-06-01

    We devise a new statistical methodology called constrained stochastic extended redundancy analysis (CSERA) to examine the comparative impact of various conceptual factors, or drivers, as well as the specific predictor variables that contribute to each driver on designated dependent variable(s). The technical details of the proposed methodology, the maximum likelihood estimation algorithm, and model selection heuristics are discussed. A sports marketing consumer psychology application is provided in a Major League Baseball (MLB) context where the effects of six conceptual drivers of game attendance and their defining predictor variables are estimated. Results compare favorably to those obtained using traditional extended redundancy analysis (ERA). PMID:24327066

  5. Spacetime-constrained oblivious transfer

    NASA Astrophysics Data System (ADS)

    Pitalúa-García, Damián

    2016-06-01

    In 1-out-of-2 oblivious transfer (OT), Alice inputs numbers x_0, x_1; Bob inputs a bit b and outputs x_b. Secure OT requires that Alice learns nothing about b and that Bob learns nothing about x_(1-b). We define spacetime-constrained oblivious transfer (SCOT) as OT in Minkowski spacetime in which Bob must output x_b within R_b, where R_0 and R_1 are fixed, spacelike separated spacetime regions. We show that unconditionally secure SCOT is impossible with classical protocols in Minkowski (or Galilean) spacetime, or with quantum protocols in Galilean spacetime. We describe a quantum SCOT protocol in Minkowski spacetime, and we show that it is unconditionally secure.

  6. Constrained Stochastic Extended Redundancy Analysis.

    PubMed

    DeSarbo, Wayne S; Hwang, Heungsun; Stadler Blank, Ashley; Kappe, Eelco

    2015-06-01

    We devise a new statistical methodology called constrained stochastic extended redundancy analysis (CSERA) to examine the comparative impact of various conceptual factors, or drivers, as well as the specific predictor variables that contribute to each driver on designated dependent variable(s). The technical details of the proposed methodology, the maximum likelihood estimation algorithm, and model selection heuristics are discussed. A sports marketing consumer psychology application is provided in a Major League Baseball (MLB) context where the effects of six conceptual drivers of game attendance and their defining predictor variables are estimated. Results compare favorably to those obtained using traditional extended redundancy analysis (ERA).

  7. Constraining relativistic viscous hydrodynamical evolution

    SciTech Connect

    Martinez, Mauricio; Strickland, Michael

    2009-04-15

    We show that by requiring positivity of the longitudinal pressure it is possible to constrain the initial conditions one can use in second-order viscous hydrodynamical simulations of ultrarelativistic heavy-ion collisions. We demonstrate this explicitly for (0+1)-dimensional viscous hydrodynamics and discuss how the constraint extends to higher dimensions. Additionally, we present an analytic approximation to the solution of (0+1)-dimensional second-order viscous hydrodynamical evolution equations appropriate to describe the evolution of matter in an ultrarelativistic heavy-ion collision.

  8. Porosity and water ice content of the sub-surface material in the Imhotep region of 67P/Churyumov-Gerasimenko constrained with the Microwave Instrument on the Rosetta Orbiter (MIRO) observations

    NASA Astrophysics Data System (ADS)

    von Allmen, Paul

    2016-04-01

    In late October 2014, the Rosetta spacecraft orbited 67P/Churyumov-Gerasimenko at a distance of less than 10 km, the closest orbit in the mission so far. During this close approach, the Microwave Instrument on the Rosetta Orbiter (MIRO) observed an 800-meter long swath in the Imhotep region on October 27, 2014. Continuum and spectroscopic data were obtained. These data provided the highest spatial resolution obtained to date with the MIRO instrument. The footprint diameter of MIRO on the surface of the nucleus was about 20 meters in the sub-millimeter band at λ=0.5 mm, and 60 meters in the millimeter band at λ=1.6 mm. The swath transitions from a relatively flat area of the Imhotep region to a topographically more diverse area, while still keeping the data relatively easy to analyze. We used a thermal model of the nucleus, including water ice sublimation, to analyze the continuum data. The sub-surface material of the nucleus is described in terms of its porosity, grain size and water ice content, in addition to assumptions for the dust bulk density and grain packing geometry. We used an optimal estimation algorithm to fit the material parameters for the best agreement between the observations and the simulation results. We will present the material parameters determined from our analysis.

  9. The Hamburg/ESO R-process Enhanced Star survey (HERES). IX. Constraining pure r-process Ba/Eu abundance ratio from observations of r-II stars

    NASA Astrophysics Data System (ADS)

    Mashonkina, L.; Christlieb, N.

    2014-05-01

    Context. The oldest stars born before the onset of the main s-process are expected to have a pure r-process Ba/Eu abundance ratio. Aims: We revised barium and europium abundances of selected very metal-poor (VMP) and strongly r-process enhanced (r-II) stars to evaluate an empirical r-process Ba/Eu ratio. Methods: Our calculations were based on non-local thermodynamic equilibrium (NLTE) line formation for Ba II and Eu II in the classical 1D MARCS model atmospheres. Homogeneous stellar abundances were determined from the Ba II subordinate and resonance lines by applying a common Ba isotope mixture. We used high-quality VLT/UVES spectra and observational material from the literature. Results: For most investigated stars, NLTE leads to a lower Ba, but a higher Eu abundance. The resulting elemental ratio of the NLTE abundances amounts to, on average, log(Ba/Eu) = 0.78 ± 0.06. This is a new constraint on pure r-process production of Ba and Eu. The obtained Ba/Eu abundance ratio of the r-II stars supports the corresponding solar system r-process ratio as predicted by recent Galactic chemical evolution calculations of Bisterzo, Travaglio, Gallino, Wiescher, and Käppeler. We present the NLTE abundance corrections for the Ba II and Eu II lines in the grid of VMP model atmospheres. Based on observations collected at the European Southern Observatory, Paranal, Chile (Proposal numbers 170.D-0010 and 280.D-5011). Tables 7 and 8 are available in electronic form at http://www.aanda.org

  10. Constrained multibody system dynamics - An automated approach

    NASA Astrophysics Data System (ADS)

    Kamman, J. W.; Huston, R. L.

    The governing equations for constrained multibody systems are formulated in a manner suitable for their automated, numerical development and solution. Specifically, the 'closed loop' problem of multibody chain systems is addressed. The governing equations are developed by modifying dynamical equations obtained from Lagrange's form of d'Alembert's principle. This modification, which is based upon a solution of the constraint equations obtained through a 'zero eigenvalues theorem', is, in effect, a contraction of the dynamical equations. It is observed that, for a system with n generalized coordinates and m constraint equations, the coefficients in the constraint equations may be viewed as 'constraint vectors' in n-dimensional space. Then, in this setting the system itself is free to move in the n-m directions which are 'orthogonal' to the constraint vectors.
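
    The "contraction" described above, projecting the dynamical equations onto the n-m directions orthogonal to the constraint vectors, can be sketched numerically with a null-space basis. The toy below assumes a constant constraint Jacobian and invented mass matrix and forces; it shows one standard way to realize the idea, not necessarily the authors' exact algorithm.

        # Project M qdd = f + A^T lambda onto the null space of the constraint
        # Jacobian A (taken constant here), solving only the n - m free directions.
        # Mass matrix, forces and constraints are invented for illustration.
        import numpy as np
        from scipy.linalg import null_space

        M = np.diag([2.0, 1.0, 1.0, 3.0])            # generalized mass matrix (n = 4)
        f = np.array([0.0, -9.81, 1.0, 0.0])         # generalized applied forces
        A = np.array([[1.0, -1.0, 0.0, 0.0],         # m = 2 constraint rows,
                      [0.0,  0.0, 1.0, -1.0]])       # enforcing A qdd = 0

        N = null_space(A)                            # n x (n - m) basis orthogonal to constraints
        u_dd = np.linalg.solve(N.T @ M @ N, N.T @ f) # reduced ("contracted") equations
        q_dd = N @ u_dd                              # accelerations consistent with constraints

        print("q_dd =", np.round(q_dd, 3))
        print("constraint residual A q_dd =", np.round(A @ q_dd, 12))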

  11. Two algorithms for fitting constrained marginal models

    PubMed Central

    Evans, R.J.; Forcina, A.

    2013-01-01

    The two main algorithms that have been considered for fitting constrained marginal models to discrete data, one based on Lagrange multipliers and the other on a regression model, are studied in detail. It is shown that the updates produced by the two methods are identical, but that the Lagrangian method is more efficient in the case of identically distributed observations. A generalization is given of the regression algorithm for modelling the effect of exogenous individual-level covariates, a context in which the use of the Lagrangian algorithm would be infeasible for even moderate sample sizes. An extension of the method to likelihood-based estimation under L1-penalties is also considered. PMID:23794772

  12. Optimum constrained image restoration filters

    NASA Technical Reports Server (NTRS)

    Riemer, T. E.; Mcgillem, C. D.

    1977-01-01

    The research described centered on the development of an optimum image restoration filter (IRF) that minimizes the radius of gyration of the corrected or composite system point-spread function (P-SF) subject to constraints, thereby reducing two-dimensional spatial smearing or blurring of an image. The constraints are imposed on the radius of gyration of the IRF P-SF, the total restored image noise power, and the shape of the composite system frequency spectrum. The image degradation corresponds to mapping many points from the original image into a single resolution element. The P-SF is obtained as the solution to a set of simultaneous differential equations obeying nonlinear integral constraints. Truncation errors due to edge effects are controlled by constraining the radius of gyration of the IRF P-SF. An iterative technique suppresses sidelobes of the composite system P-SF.

  13. Constrained Local UniversE Simulations: a Local Group factory

    NASA Astrophysics Data System (ADS)

    Carlesi, Edoardo; Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I.; Pilipenko, Sergey V.; Knebe, Alexander; Courtois, Hélène; Tully, R. Brent; Steinmetz, Matthias

    2016-05-01

    Near-field cosmology is practised by studying the Local Group (LG) and its neighbourhood. This paper describes a framework for simulating the `near field' on the computer. Assuming the Λ cold dark matter (ΛCDM) model as a prior and applying the Bayesian tools of the Wiener filter and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the ΛCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of haloes must obey specific isolation, mass and separation criteria. At the second level, the orbital angular momentum and energy are constrained, and on the third one the phase of the orbit is constrained. Out of the 300 constrained simulations, 146 LGs obey the first set of criteria, 51 the second and 6 the third. The robustness of our LG `factory' enables the construction of a large ensemble of simulated LGs. Suitable candidates for high-resolution hydrodynamical simulations of the LG can be drawn from this ensemble, which can be used to perform comprehensive studies of the formation of the LG.

  14. The constrained NMSSM: mSUGRA and GMSB

    SciTech Connect

    Ellwanger, Ulrich

    2008-11-23

    We review different constrained versions of the NMSSM: the fully constrained cNMSSM with universal boundary conditions for gauginos and all soft scalar masses and trilinear couplings, and the NMSSM with soft terms from Gauge Mediated Supersymmetry Breaking. Regarding the fully constrained cNMSSM, after imposing LEP constraints and the correct dark matter relic density, one single parameter is sufficient to describe the entire Higgs and sparticle spectrum of the model, which then contains always a singlino LSP. The NMSSM with soft terms from GMSB is phenomenologically viable if (and only if) the singlet is allowed to couple directly to the messenger sector; then various ranges in parameter space satisfy constraints from colliders and precision observables. Motivations for and phenomenological features of extra U(1)' gauge symmetries are briefly reviewed.

  15. Constraining sterile neutrino dark matter with phase space density observations

    SciTech Connect

    Gorbunov, D; Khmelnitsky, A; Rubakov, V E-mail: khmeln@ms2.inr.ac.ru

    2008-10-15

    We apply phase space density considerations to obtain lower bounds on the mass of the sterile neutrino as a dark matter candidate. The bounds are different for non-resonant production, resonant production in the presence of lepton asymmetry and production in decays of heavier particles. In the former case our bound is comparable to but independent of the Lyman-α bound, and together with the X-ray upper limit it disfavors non-resonantly produced sterile neutrino dark matter. An interesting feature of the latter case is that warm dark matter may be composed of heavy particles.

  16. A Path Algorithm for Constrained Estimation.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2013-01-01

    Many least-squares problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online.
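
    The contrast between quadratic and exact (absolute-value) penalties can be seen in a one-dimensional toy problem: minimize (x - 2)^2 subject to x <= 1. The sketch below scans both penalized objectives over a grid for several penalty constants; the example problem is ours, not the article's.

        # Quadratic vs exact (absolute-value) penalties for  min (x-2)^2  s.t.  x <= 1.
        # The exact penalty recovers x = 1 at a finite penalty constant, while the
        # quadratic penalty only approaches it as the constant grows.
        import numpy as np

        x = np.linspace(0.0, 3.0, 30001)
        violation = np.maximum(0.0, x - 1.0)

        for rho in (1.0, 2.0, 10.0, 100.0):
            quad_pen  = (x - 2.0) ** 2 + rho * violation ** 2
            exact_pen = (x - 2.0) ** 2 + rho * violation
            print(f"rho = {rho:6.1f}:  quadratic argmin = {x[np.argmin(quad_pen)]:.3f}, "
                  f"exact argmin = {x[np.argmin(exact_pen)]:.3f}")

    With the exact penalty the constrained solution x = 1 appears already at rho = 2, whereas the quadratic penalty still sits above the constraint boundary even at rho = 100.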

  17. Constraining Paleoearthquake Slip Distributions with Coral Microatolls

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; nic Bhloscaidh, M.; Murphy, S.

    2014-12-01

    Key to understanding the threat posed by large megathrust earthquakes is identifying where the potential for these destructive events exists. By studying extended sequences of earthquakes, Slip Deficit and Stress Evolution modelling techniques may hold the key to locating areas of concern. However, as well as using recent, instrumentally constrained slip distributions, they require the production of high-resolution source models for pre-instrumental events. One place we can attempt this longer-term modelling is along the Sunda Trench, with its record of large megathrust earthquakes dating back centuries. Coral microatolls populating the intertidal areas of the Sumatran Forearc act as long-term geodetic recorders of tectonic activity. Repeated cycles of stress accumulation and release alter relative sea levels around these islands. Coral growth, controlled by the level of the lowest tide, exploits interseismic rises in sea level. In turn, corals experience die-offs when coseismic drops in sea level lead to subaerial exposure. Examination of coral stratigraphy reveals a history of displacements from which information about past earthquakes can be inferred. We have developed a Genetic Algorithm Slip Estimator (GASE) to rapidly produce high-resolution slip distributions from coral displacement data. GASE recombines information held in populations of randomly generated slip distributions to create superior models that satisfy the observed displacements. Non-unique solutions require multiple iterations of the algorithm, producing a suite of models from which an ensemble slip distribution is drawn. Systematic testing of the algorithm demonstrates its ability to reliably estimate both known synthetic and instrumentally constrained slip distributions based on surface displacements. We will present high-resolution source models satisfying published displacement data for a number of recent and paleo-earthquakes along the Sunda Trench, including the great 1797 and 1833 events.

  18. Constraining Intracluster Gas Models with AMiBA13

    NASA Astrophysics Data System (ADS)

    Molnar, Sandor M.; Umetsu, Keiichi; Birkinshaw, Mark; Bryan, Greg; Haiman, Zoltán; Hearn, Nathan; Shang, Cien; Ho, Paul T. P.; Locutus Huang, Chih-Wei; Koch, Patrick M.; Liao, Yu-Wei Victor; Lin, Kai-Yang; Liu, Guo-Chin; Nishioka, Hiroaki; Wang, Fu-Cheng; Proty Wu, Jiun-Huei

    2010-11-01

    Clusters of galaxies have been extensively used to determine cosmological parameters. A major difficulty in making the best use of Sunyaev-Zel'dovich (SZ) and X-ray observations of clusters for cosmology is that using X-ray observations it is difficult to measure the temperature distribution and therefore determine the density distribution in individual clusters of galaxies out to the virial radius. Observations with the new generation of SZ instruments are a promising alternative approach. We use clusters of galaxies drawn from high-resolution adaptive mesh refinement cosmological simulations to study how well we should be able to constrain the large-scale distribution of the intracluster gas (ICG) in individual massive relaxed clusters using AMiBA in its configuration with 13 1.2 m diameter dishes (AMiBA13) along with X-ray observations. We show that non-isothermal β models provide a good description of the ICG in our simulated relaxed clusters. We use simulated X-ray observations to estimate the quality of constraints on the distribution of gas density, and simulated SZ visibilities (AMiBA13 observations) for constraints on the large-scale temperature distribution of the ICG. We find that AMiBA13 visibilities should constrain the scale radius of the temperature distribution to about 50% accuracy. We conclude that the upgraded AMiBA, AMiBA13, should be a powerful instrument to constrain the large-scale distribution of the ICG.
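
    As a reference for the β-model language used above, the sketch below evaluates the standard β-model gas density profile n(r) = n0 [1 + (r/rc)^2]^(-3β/2) together with a generic declining temperature profile that has its own scale radius. The temperature form and all parameter values are illustrative assumptions, not the fitted cluster models from the paper.

        # Standard beta-model density profile plus a generic large-scale
        # temperature profile with its own scale radius.  The temperature form
        # and all parameter values are illustrative assumptions only.
        import numpy as np

        def beta_model_density(r, n0=1e-3, r_core=0.2, beta=0.7):
            """n(r) = n0 * [1 + (r/r_core)^2]^(-3*beta/2)   [cm^-3, r in Mpc]."""
            return n0 * (1.0 + (r / r_core) ** 2) ** (-1.5 * beta)

        def temperature_profile(r, t0=8.0, r_temp=1.0):
            """Assumed example form: T(r) = T0 / [1 + (r/r_temp)^2]   [keV]."""
            return t0 / (1.0 + (r / r_temp) ** 2)

        r = np.array([0.1, 0.5, 1.0, 2.0])   # radius in Mpc
        print("n(r) [cm^-3]:", beta_model_density(r))
        print("T(r) [keV]  :", temperature_profile(r))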

  19. Wavelet library for constrained devices

    NASA Astrophysics Data System (ADS)

    Ehlers, Johan Hendrik; Jassim, Sabah A.

    2007-04-01

    The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a fast wavelet transform (FWT) implementation and of several wavelet filters that are more suitable for constrained devices. Such constraints are typically found on mobile (cell) phones or personal digital assistants (PDAs). These constraints can be a combination of limited memory, slow floating-point operations (compared to integer operations, most often as a result of no hardware support), and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capture sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing and analysis tasks on mobile phones and PDAs. We will demonstrate that HeatWave is suitable for real-time applications, with fine control and range to suit transform demands. We shall present experimental results to substantiate these claims. Finally, since this library is intended for real use, we considered several well-known and common embedded operating system platform differences, such as a lack of common routines or functions, stack limitations, etc. This makes HeatWave suitable for a range of applications and research projects.
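
    A minimal example of the kind of integer-only transform suited to such constrained devices is one lifting step of the Haar wavelet (the S-transform), which needs no floating-point hardware and is exactly invertible. The sketch below is a generic illustration and is not code from the HeatWave library.

        # One level of an integer-to-integer Haar wavelet transform via lifting
        # (the S-transform): exactly invertible using only integer adds and shifts.
        # Generic illustration; not code from the HeatWave library.

        def haar_forward(signal):
            """Split an even-length integer signal into (approximation, detail)."""
            approx, detail = [], []
            for a, b in zip(signal[0::2], signal[1::2]):
                d = a - b                 # detail (difference)
                s = b + (d >> 1)          # approximation = floor((a + b) / 2)
                approx.append(s)
                detail.append(d)
            return approx, detail

        def haar_inverse(approx, detail):
            """Exactly reconstruct the original integer signal."""
            signal = []
            for s, d in zip(approx, detail):
                b = s - (d >> 1)
                a = d + b
                signal.extend((a, b))
            return signal

        data = [12, 10, 7, 9, 30, 28, 1, 0]
        approx, detail = haar_forward(data)
        assert haar_inverse(approx, detail) == data
        print("approx:", approx, " detail:", detail)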

  20. Constrained orbital intercept-evasion

    NASA Astrophysics Data System (ADS)

    Zatezalo, Aleksandar; Stipanovic, Dusan M.; Mehra, Raman K.; Pham, Khanh

    2014-06-01

    An effective characterization of intercept-evasion confrontations in various space environments and a derivation of corresponding solutions considering a variety of real-world constraints are daunting theoretical and practical challenges. Current and future space-based platforms have to simultaneously operate as components of satellite formations and/or systems and at the same time, have a capability to evade potential collisions with other maneuver constrained space objects. In this article, we formulate and numerically approximate solutions of a Low Earth Orbit (LEO) intercept-maneuver problem in terms of game-theoretic capture-evasion guaranteed strategies. The space intercept-evasion approach is based on Liapunov methodology that has been successfully implemented in a number of air and ground based multi-player multi-goal game/control applications. The corresponding numerical algorithms are derived using computationally efficient and orbital propagator independent methods that are previously developed for Space Situational Awareness (SSA). This game theoretical but at the same time robust and practical approach is demonstrated on a realistic LEO scenario using existing Two Line Element (TLE) sets and Simplified General Perturbation-4 (SGP-4) propagator.

  1. Constrained Peptides as Miniature Protein Structures

    PubMed Central

    Yin, Hang

    2012-01-01

    This paper discusses recent developments in protein engineering that use both covalent and noncovalent bonds to constrain peptides, forcing them into designed protein secondary structures. These constrained peptides can subsequently be used as peptidomimetics for biological functions such as the regulation of protein-protein interactions. PMID:25969758

  2. Determination of optimal gains for constrained controllers

    SciTech Connect

    Kwan, C.M.; Mestha, L.K.

    1993-08-01

    In this report, we consider the determination of optimal gains, with respect to a certain performance index, for state feedback controllers where some elements in the gain matrix are constrained to be zero. Two iterative schemes for systematically finding the constrained gain matrix are presented. An example is included to demonstrate the procedures.

  3. Constraining the Evolution of ZZ Ceti

    NASA Technical Reports Server (NTRS)

    Mukadam, Anjum S.; Kepler, S. O.; Winget, D. E.; Nather, R. E.; Kilic, M.; Mullally, F.; vonHippel, T.; Kleinman, S. J.; Nitta, A.; Guzik, J. A.

    2003-01-01

    We report our analysis of the stability of pulsation periods in the DAV star (pulsating hydrogen atmosphere white dwarf) ZZ Ceti, also called R548. On the basis of observations that span 31 years, we conclude that the period 213.13 s observed in ZZ Ceti drifts at a rate dP/dt = (5.5 ± 1.9) × 10^-15 s s^-1, after correcting for proper motion. Our results are consistent with previous dP/dt values for this mode and are an improvement over them because of the larger time base. The characteristic stability timescale implied for the pulsation period is |P/Ṗ| ≥ 1.2 Gyr, comparable to the theoretical cooling timescale for the star. Our current stability limit for the period 213.13 s is only slightly less than the present measurement for another DAV, G117-B15A, for the period 215.2 s, establishing this mode in ZZ Ceti as the second most stable optical clock known, comparable to atomic clocks and more stable than most pulsars. Constraining the cooling rate of ZZ Ceti aids theoretical evolutionary models and white dwarf cosmochronology. The drift rate of this clock is small enough that we can set interesting limits on reflex motion due to planetary companions.
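
    The quoted stability timescale follows directly from the period and its drift rate; the arithmetic check below reproduces the >= 1.2 Gyr figure from the numbers in the abstract.

        # Characteristic stability timescale |P / Pdot| from the quoted values.
        period = 213.13          # pulsation period [s]
        pdot   = 5.5e-15         # drift rate [s/s]
        seconds_per_gyr = 3.156e16

        timescale_gyr = period / pdot / seconds_per_gyr
        print(f"|P/Pdot| ~ {timescale_gyr:.2f} Gyr")   # ~1.2 Gyr, as quoted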

  4. CONSTRAINING RADIO EMISSION FROM MAGNETARS

    SciTech Connect

    Lazarus, P.; Kaspi, V. M.; Dib, R.; Champion, D. J.; Hessels, J. W. T.

    2012-01-10

    We report on radio observations of five magnetars and two magnetar candidates carried out at 1950 MHz with the Green Bank Telescope in 2006-2007. The data from these observations were searched for periodic emission and bright single pulses. Also, monitoring observations of magnetar 4U 0142+61 following its 2006 X-ray bursts were obtained. No radio emission was detected for any of our targets. The non-detections allow us to place luminosity upper limits of L_1950 ≲ 1.60 mJy kpc² for periodic emission and L_1950,single ≲ 7.6 Jy kpc² for single pulse emission. These are the most stringent limits yet for the magnetars observed. The resulting luminosity upper limits together with previous results are discussed, as is the importance of further radio observations of radio-loud and radio-quiet magnetars.

  5. Constrained Deformable-Layer Tomography

    NASA Astrophysics Data System (ADS)

    Zhou, H.

    2006-12-01

    The improvement on traveltime tomography depends on improving data coverage and tomographic methodology. The data coverage depends on the spatial distribution of sources and stations, as well as the extent of lateral velocity variation that may alter the raypaths locally. A reliable tomographic image requires large enough ray hit count and wide enough angular range between traversing rays over the targeted anomalies. Recent years have witnessed the advancement of traveltime tomography in two aspects. One is the use of finite frequency kernels, and the other is the improvement on model parameterization, particularly that allows the use of a priori constraints. A new way of model parameterization is the deformable-layer tomography (DLT), which directly inverts for the geometry of velocity interfaces by varying the depths of grid points to achieve a best traveltime fit. In contrast, conventional grid or cell tomography seeks to determine velocity values of a mesh of fixed-in-space grids or cells. In this study, the DLT is used to map crustal P-wave velocities with first arrival data from local earthquakes and two LARSE active surveys in southern California. The DLT solutions along three profiles are constrained using known depth ranges of the Moho discontinuity at 21 sites from a previous receiver function study. The DLT solutions are generally well resolved according to restoration resolution tests. The patterns of 2D DLT models of different profiles match well at their intersection locations. In comparison with existing 3D cell tomography models in southern California, the new DLT models significantly improve the data fitness. In comparison with the multi-scale cell tomography conducted for the same data, while the data fitting levels of the DLT and the multi-scale cell tomography models are compatible, the DLT provides much higher vertical resolution and more realistic description of the undulation of velocity discontinuities. The constraints on the Moho depth

  6. CONSTRAINING SOLAR FLARE DIFFERENTIAL EMISSION MEASURES WITH EVE AND RHESSI

    SciTech Connect

    Caspi, Amir; McTiernan, James M.; Warren, Harry P.

    2014-06-20

    Deriving a well-constrained differential emission measure (DEM) distribution for solar flares has historically been difficult, primarily because no single instrument is sensitive to the full range of coronal temperatures observed in flares, from ≲2 to ≳50 MK. We present a new technique, combining extreme ultraviolet (EUV) spectra from the EUV Variability Experiment (EVE) onboard the Solar Dynamics Observatory with X-ray spectra from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), to derive, for the first time, a self-consistent, well-constrained DEM for jointly observed solar flares. EVE is sensitive to ∼2-25 MK thermal plasma emission, and RHESSI to ≳10 MK; together, the two instruments cover the full range of flare coronal plasma temperatures. We have validated the new technique on artificial test data, and apply it to two X-class flares from solar cycle 24 to determine the flare DEM and its temporal evolution; the constraints on the thermal emission derived from the EVE data also constrain the low energy cutoff of the non-thermal electrons, a crucial parameter for flare energetics. The DEM analysis can also be used to predict the soft X-ray flux in the poorly observed ∼0.4-5 nm range, with important applications for geospace science.

  7. Active constrained clustering by examining spectral Eigenvectors

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; desJardins, Marie; Xu, Qianjun

    2005-01-01

    This work focuses on the active selection of pairwise constraints for spectral clustering. We develop and analyze a technique for Active Constrained Clustering by Examining Spectral eigenvectorS (ACCESS) derived from a similarity matrix.

  8. Constraining the Evolution of Poor Clusters

    NASA Astrophysics Data System (ADS)

    Broming, Emma J.; Fuse, C. R.

    2012-01-01

    There currently exists no method by which to quantify the evolutionary state of poor clusters (PCs). Research by Broming & Fuse (2010) demonstrated that the evolution of Hickson compact groups (HCGs) is constrained by the correlation between the X-ray luminosities of point sources and diffuse gas. The current investigation adopts an analogous approach to understanding PCs. Plionis et al. (2009) proposed a theory to define the evolution of poor clusters. The theory asserts that cannibalism of galaxies causes a cluster to become more spherical, develop increased velocity dispersion and increased X-ray temperature and gas luminosity. Data used to quantify the evolution of the poor clusters were compiled across multiple wavelengths. The sample includes 162 objects from the WBL catalogue (White et al. 1999), 30 poor clusters in the Chandra X-ray Observatory archive, and 15 Abell poor clusters observed with BAX (Sadat et al. 2004). Preliminary results indicate that the cluster velocity dispersion and X-ray gas and point source luminosities can be used to highlight a weak correlation. An evolutionary trend was observed for multiple correlations detailed herein. The current study is a continuation of the work by Broming & Fuse examining point sources and their properties to determine the evolutionary stage of compact groups, poor clusters, and their proposed remnants, isolated ellipticals and fossil groups. Preliminary data suggest that compact groups and their high-mass counterparts, poor clusters, evolve along tracks identified in the X-ray gas - X-ray point source relation. While compact groups likely evolve into isolated elliptical galaxies, fossil groups display properties that suggest they are the remains of fully coalesced poor clusters.

  9. Constraining blazar physics with polarization signatures

    NASA Astrophysics Data System (ADS)

    Zhang, Haocheng; Boettcher, Markus; Li, Hui

    2016-01-01

    Blazars are active galactic nuclei whose jets are directed very close to our line of sight. They emit nonthermal-dominated emission from radio to gamma-rays, with the radio to optical emissions known to be polarized. Both radiation and polarization signatures can be strongly variable. Observations have shown that sometimes strong multiwavelength flares are accompanied by drastic polarization variations, indicating active participation of the magnetic field during flares. We have developed a 3D multi-zone time-dependent polarization-dependent radiation transfer code, which enables us to study the spectral and polarization signatures of blazar flares simultaneously. By combining this code with a Fokker-Planck nonthermal particle evolution scheme, we are able to derive simultaneous fits to time-dependent spectra, multiwavelength light curves, and time-dependent optical polarization signatures of a well-known multiwavelength flare with a 180 degree polarization angle swing of the blazar 3C279. Our work shows that with detailed consideration of light travel time effects, the apparently symmetric time-dependent radiation and polarization signatures can be naturally explained by a straight, helically symmetric jet pervaded by a helical magnetic field, without the need for any asymmetric structures. Our model also suggests that the excess in the nonthermal particles during flares can originate from magnetic reconnection events, initiated by a shock propagating through the emission region. Additionally, the magnetic field should generally revert to its initial topology after the flare. We conclude that such a shock-initiated magnetic reconnection event in an emission environment with relatively strong magnetic energy can be the driver of multiwavelength flares with polarization angle swings. Future statistics on such observations will constrain general features of such events, while magneto-hydrodynamic simulations will provide physical scenarios for the magnetic field evolution

  10. Constraining duty cycles through a Bayesian technique

    NASA Astrophysics Data System (ADS)

    Romano, P.; Guidorzi, C.; Segreto, A.; Ducci, L.; Vercellone, S.

    2014-12-01

    The duty cycle (DC) of astrophysical sources is generally defined as the fraction of time during which the sources are active. It is used both to characterize their central engine and to plan further observing campaigns to study them. However, DCs are generally not provided with statistical uncertainties, since the standard approach is to perform Monte Carlo bootstrap simulations to evaluate them, which can be quite time consuming for a large sample of sources. As an alternative, considerably less time-consuming approach, we derived the theoretical expectation value for the DC and its error for sources whose state is one of two possible, mutually exclusive states, inactive (off) or flaring (on), based on a finite set of independent observational data points. Following a Bayesian approach, we derived the analytical expression for the posterior, the conjugate distribution adopted as prior, and the expectation value and variance. We applied our method to the specific case of the inactivity duty cycle (IDC) for supergiant fast X-ray transients, a subclass of flaring high mass X-ray binaries characterized by large dynamical ranges. We also studied the IDC as a function of the number of observations in the sample. Finally, we compared the results with the theoretical expectations. We found excellent agreement with our findings based on the standard bootstrap method. Our Bayesian treatment can be applied to all sets of independent observations of two-state sources, such as active galactic nuclei, X-ray binaries, etc. In addition to being far less time consuming than bootstrap methods, the additional strength of this approach becomes obvious when considering a well-populated class of sources (Nsrc ≥ 50) for which the prior can be fully characterized by fitting the distribution of the observed DCs for all sources in the class, so that, through the prior, one can further constrain the DC of a new source by exploiting the information acquired on the DC distribution derived
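    A minimal sketch of the Bayesian calculation described above, assuming the standard Beta conjugate prior for a two-state (on/off) source; the prior hyperparameters and observation counts below are illustrative, not taken from the paper.

    ```python
    # Posterior for an inactivity duty cycle: with a Beta(a, b) prior and k
    # "inactive" detections out of n independent pointings, the posterior is
    # Beta(a + k, b + n - k). Numbers here are made up for the example.
    from scipy.stats import beta

    a, b = 1.0, 1.0          # flat prior
    n, k = 20, 14            # 20 pointings, 14 with the source inactive

    post = beta(a + k, b + n - k)
    mean = post.mean()                   # (a + k) / (a + b + n)
    sigma = post.std()
    lo, hi = post.interval(0.68)         # 68% credible interval
    print(f"IDC = {mean:.2f} +/- {sigma:.2f}  (68% interval {lo:.2f}-{hi:.2f})")
    ```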

  11. Constrained output feedback control of flexible rotor-bearing systems

    NASA Astrophysics Data System (ADS)

    Kim, Jong-Sun; Lee, Chong-Won

    1990-04-01

    The design of an optimal constrained output feedback controller for a rotor-bearing system is described, based on a reduced order model. The aims are to stabilize the unstable or marginally stable motion and to control the large build-up of periodic disturbances occurring during operation. The reduced order model is constructed on the basis of a modal model and singular perturbation, retaining the advantages of the two methods. The onset of instability due to spillover is prevented by the constrained optimization, and the robustness and pole assignability are improved by designing not merely a static output feedback but a dynamic compensator. The periodic disturbances, usually caused by rotation, are reduced by using the disturbance observer and feed-forward compensation. The efficiency of the proposed method is demonstrated through two simulation models, a rigid shaft supported by soft bearings at its ends and an overhung rotor system with a tip disk, under both transient vibration and sudden imbalance situations.

  12. Lilith: a tool for constraining new physics from Higgs measurements

    NASA Astrophysics Data System (ADS)

    Bernon, Jérémy; Dumont, Béranger

    2015-09-01

    The properties of the observed Higgs boson with mass around 125 GeV can be affected in a variety of ways by new physics beyond the Standard Model (SM). The wealth of experimental results, targeting the different combinations for the production and decay of a Higgs boson, makes it a non-trivial task to assess the compatibility of a non-SM-like Higgs boson with all available results. In this paper we present Lilith, a new public tool for constraining new physics from signal strength measurements performed at the LHC and the Tevatron. Lilith is a Python library that can also be used in C and C++/ROOT programs. The Higgs likelihood is based on experimental results stored in an easily extensible XML database, and is evaluated from the user input, given in XML format in terms of reduced couplings or signal strengths. The results of Lilith can be used to constrain a wide class of new physics scenarios.

  13. Gyrification from constrained cortical expansion

    NASA Astrophysics Data System (ADS)

    Tallinen, Tuomas

    The convolutions of the human brain are a symbol of its functional complexity. But how does the outer surface of the brain, the layered cortex of neuronal gray matter get its folds? In this talk, we ask to which extent folding of the brain can be explained as a purely mechanical consequence of unpatterned growth of the cortical layer relative to the sublayers. Modeling the growing brain as a soft layered solid leads to elastic instabilities and the formation of cusped sulci and smooth gyri consistent with observations across species in both normal and pathological situations. Furthermore, we apply initial geometries obtained from fetal brain MRI to address the question of how the brain geometry and folding patterns may be coupled via mechanics.

  14. Motor Demands Constrain Cognitive Rule Structures.

    PubMed

    Collins, Anne Gabrielle Eva; Frank, Michael Joshua

    2016-03-01

    Study of human executive function focuses on our ability to represent cognitive rules independently of stimulus or response modality. However, recent findings suggest that executive functions cannot be modularized separately from perceptual and motor systems, and that they instead scaffold on top of motor action selection. Here we investigate whether patterns of motor demands influence how participants choose to implement abstract rule structures. In a learning task that requires integrating two stimulus dimensions for determining appropriate responses, subjects typically structure the problem hierarchically, using one dimension to cue the task-set and the other to cue the response given the task-set. However, the choice of which dimension to use at each level can be arbitrary. We hypothesized that the specific structure subjects adopt would be constrained by the motor patterns afforded within each rule. Across four independent data-sets, we show that subjects create rule structures that afford motor clustering, preferring structures in which adjacent motor actions are valid within each task-set. In a fifth data-set using instructed rules, this bias was strong enough to counteract the well-known task switch-cost when instructions were incongruent with motor clustering. Computational simulations confirm that observed biases can be explained by leveraging overlap in cortical motor representations to improve outcome prediction and hence infer the structure to be learned. These results highlight the importance of sensorimotor constraints in abstract rule formation and shed light on why humans have strong biases to invent structure even when it does not exist.

  15. Constraining Cosmic Evolution of Type Ia Supernovae

    SciTech Connect

    Foley, Ryan J.; Filippenko, Alexei V.; Aguilera, C.; Becker, A.C.; Blondin, S.; Challis, P.; Clocchiatti, A.; Covarrubias, R.; Davis, T.M.; Garnavich, P.M.; Jha, S.; Kirshner, R.P.; Krisciunas, K.; Leibundgut, B.; Li, W.; Matheson, T.; Miceli, A.; Miknaitis, G.; Pignata, G.; Rest, A.; Riess, A.G.; /UC, Berkeley, Astron. Dept. /Cerro-Tololo InterAmerican Obs. /Washington U., Seattle, Astron. Dept. /Harvard-Smithsonian Ctr. Astrophys. /Chile U., Catolica /Bohr Inst. /Notre Dame U. /KIPAC, Menlo Park /Texas A-M /European Southern Observ. /NOAO, Tucson /Fermilab /Chile U., Santiago /Harvard U., Phys. Dept. /Baltimore, Space Telescope Sci. /Johns Hopkins U. /Res. Sch. Astron. Astrophys., Weston Creek /Stockholm U. /Hawaii U. /Illinois U., Urbana, Astron. Dept.

    2008-02-13

    We present the first large-scale effort of creating composite spectra of high-redshift type Ia supernovae (SNe Ia) and comparing them to low-redshift counterparts. Through the ESSENCE project, we have obtained 107 spectra of 88 high-redshift SNe Ia with excellent light-curve information. In addition, we have obtained 397 spectra of low-redshift SNe through a multiple-decade effort at Lick and Keck Observatories, and we have used 45 ultraviolet spectra obtained by HST/IUE. The low-redshift spectra act as a control sample when comparing to the ESSENCE spectra. In all instances, the ESSENCE and Lick composite spectra appear very similar. The addition of galaxy light to the Lick composite spectra allows a nearly perfect match of the overall spectral-energy distribution with the ESSENCE composite spectra, indicating that the high-redshift SNe are more contaminated with host-galaxy light than their low-redshift counterparts. This is caused by observing objects at all redshifts with similar slit widths, which corresponds to different projected distances. After correcting for the galaxy-light contamination, subtle differences in the spectra remain. We have estimated the systematic errors when using current spectral templates for K-corrections to be ~0.02 mag. The variance in the composite spectra gives an estimate of the intrinsic variance in low-redshift maximum-light SN spectra of ~3% in the optical and growing toward the ultraviolet. The difference between the maximum-light low- and high-redshift spectra constrains SN evolution between our samples to be < 10% in the rest-frame optical.

  16. Motor Demands Constrain Cognitive Rule Structures

    PubMed Central

    Collins, Anne Gabrielle Eva; Frank, Michael Joshua

    2016-01-01

    Study of human executive function focuses on our ability to represent cognitive rules independently of stimulus or response modality. However, recent findings suggest that executive functions cannot be modularized separately from perceptual and motor systems, and that they instead scaffold on top of motor action selection. Here we investigate whether patterns of motor demands influence how participants choose to implement abstract rule structures. In a learning task that requires integrating two stimulus dimensions for determining appropriate responses, subjects typically structure the problem hierarchically, using one dimension to cue the task-set and the other to cue the response given the task-set. However, the choice of which dimension to use at each level can be arbitrary. We hypothesized that the specific structure subjects adopt would be constrained by the motor patterns afforded within each rule. Across four independent data-sets, we show that subjects create rule structures that afford motor clustering, preferring structures in which adjacent motor actions are valid within each task-set. In a fifth data-set using instructed rules, this bias was strong enough to counteract the well-known task switch-cost when instructions were incongruent with motor clustering. Computational simulations confirm that observed biases can be explained by leveraging overlap in cortical motor representations to improve outcome prediction and hence infer the structure to be learned. These results highlight the importance of sensorimotor constraints in abstract rule formation and shed light on why humans have strong biases to invent structure even when it does not exist. PMID:26966909

  17. Motor Demands Constrain Cognitive Rule Structures.

    PubMed

    Collins, Anne Gabrielle Eva; Frank, Michael Joshua

    2016-03-01

    Study of human executive function focuses on our ability to represent cognitive rules independently of stimulus or response modality. However, recent findings suggest that executive functions cannot be modularized separately from perceptual and motor systems, and that they instead scaffold on top of motor action selection. Here we investigate whether patterns of motor demands influence how participants choose to implement abstract rule structures. In a learning task that requires integrating two stimulus dimensions for determining appropriate responses, subjects typically structure the problem hierarchically, using one dimension to cue the task-set and the other to cue the response given the task-set. However, the choice of which dimension to use at each level can be arbitrary. We hypothesized that the specific structure subjects adopt would be constrained by the motor patterns afforded within each rule. Across four independent data-sets, we show that subjects create rule structures that afford motor clustering, preferring structures in which adjacent motor actions are valid within each task-set. In a fifth data-set using instructed rules, this bias was strong enough to counteract the well-known task switch-cost when instructions were incongruent with motor clustering. Computational simulations confirm that observed biases can be explained by leveraging overlap in cortical motor representations to improve outcome prediction and hence infer the structure to be learned. These results highlight the importance of sensorimotor constraints in abstract rule formation and shed light on why humans have strong biases to invent structure even when it does not exist. PMID:26966909

  18. Observation of high-energy neutrinos using Cerenkov detectors embedded deep in Antarctic ice.

    PubMed

    Andrés, E; Askebjer, P; Bai, X; Barouch, G; Barwick, S W; Bay, R C; Becker, K H; Bergström, L; Bertrand, D; Bierenbaum, D; Biron, A; Booth, J; Botner, O; Bouchta, A; Boyce, M M; Carius, S; Chen, A; Chirkin, D; Conrad, J; Cooley, J; Costa, C G; Cowen, D F; Dailing, J; Dalberg, E; DeYoung, T; Desiati, P; Dewulf, J P; Doksus, P; Edsjö, J; Ekström, P; Erlandsson, B; Feser, T; Gaug, M; Goldschmidt, A; Goobar, A; Gray, L; Haase, H; Hallgren, A; Halzen, F; Hanson, K; Hardtke, R; He, Y D; Hellwig, M; Heukenkamp, H; Hill, G C; Hulth, P O; Hundertmark, S; Jacobsen, J; Kandhadai, V; Karle, A; Kim, J; Koci, B; Köpke, L; Kowalski, M; Leich, H; Leuthold, M; Lindahl, P; Liubarsky, I; Loaiza, P; Lowder, D M; Ludvig, J; Madsen, J; Marciniewski, P; Matis, H S; Mihalyi, A; Mikolajski, T; Miller, T C; Minaeva, Y; Miocinović, P; Mock, P C; Morse, R; Neunhöffer, T; Newcomer, F M; Niessen, P; Nygren, D R; Ogelman, H; Pérez de los Heros, C; Porrata, R; Price, P B; Rawlins, K; Reed, C; Rhode, W; Richards, A; Richter, S; Martino, J R; Romenesko, P; Ross, D; Rubinstein, H; Sander, H G; Scheider, T; Schmidt, T; Schneider, D; Schneider, E; Schwarz, R; Silvestri, A; Solarz, M; Spiczak, G M; Spiering, C; Starinsky, N; Steele, D; Steffen, P; Stokstad, R G; Streicher, O; Sun, Q; Taboada, I; Thollander, L; Thon, T; Tilav, S; Usechak, N; Vander Donckt, M; Walck, C; Weinheimer, C; Wiebusch, C H; Wischnewski, R; Wissing, H; Woschnagg, K; Wu, W; Yodh, G; Young, S

    2001-03-22

    Neutrinos are elementary particles that carry no electric charge and have little mass. As they interact only weakly with other particles, they can penetrate enormous amounts of matter, and therefore have the potential to directly convey astrophysical information from the edge of the Universe and from deep inside the most cataclysmic high-energy regions. The neutrino's great penetrating power, however, also makes this particle difficult to detect. Underground detectors have observed low-energy neutrinos from the Sun and a nearby supernova, as well as neutrinos generated in the Earth's atmosphere. But the very low fluxes of high-energy neutrinos from cosmic sources can be observed only by much larger, expandable detectors in, for example, deep water or ice. Here we report the detection of upwardly propagating atmospheric neutrinos by the ice-based Antarctic muon and neutrino detector array (AMANDA). These results establish a technology with which to build a kilometre-scale neutrino observatory necessary for astrophysical observations.

  19. Constraining weak annihilation using semileptonic D decays

    SciTech Connect

    Ligeti, Zoltan; Luke, Michael; Manohar, Aneesh V.

    2010-08-01

    The recently measured semileptonic D_s decay rate can be used to constrain weak annihilation (WA) effects in semileptonic D and B decays. We revisit the theoretical predictions for inclusive semileptonic D_(s) decays using a variety of quark mass schemes. The most reliable results are obtained if the fits to B decay distributions are used to eliminate the charm quark mass dependence, without using any specific charm mass scheme. Our fit to the available data shows that WA is smaller than commonly assumed. There is no indication that the WA octet contribution (which is better constrained than the singlet contribution) dominates. The results constrain an important source of uncertainty in the extraction of |V_ub| from inclusive semileptonic B decays.

  20. Towards weakly constrained double field theory

    NASA Astrophysics Data System (ADS)

    Lee, Kanghoon

    2016-08-01

    We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using the strong constraint in double field theory. We show that the X-ray (Radon) transform on a torus is well suited for describing weakly constrained double fields, and that any weakly constrained field can be represented as a sum of strongly constrained fields. Using the inverse X-ray transform we define a novel binary operation which is compatible with the level-matching constraint. Based on this formalism, we construct a consistent gauge transformation and a gauge-invariant action without using the strong constraint. We then discuss the relation of our result to closed string field theory. Our construction suggests that there exists an effective field theory description for the massless sector of closed string field theory on a torus in an associative truncation.

  1. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as linear matrix inequalities (LMIs). The resulting minimization problem is a semidefinite optimization that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
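    The general formulation described above can be sketched with cvxpy: estimate a symmetric inertia matrix from rigid-body torque/rate data by least squares, with positive definiteness imposed as an LMI. The data below are synthetic and the setup is illustrative, not the paper's testbed pipeline.

    ```python
    # Constrained least squares inertia estimation (generic sketch, synthetic data).
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    J_true = np.array([[10.0, 1.0, 0.5],
                       [ 1.0, 8.0, 0.2],
                       [ 0.5, 0.2, 6.0]])     # kg m^2 (made up)

    def skew(w):
        return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

    # Euler's equation: tau = J wdot + w x (J w); simulate noisy measurements.
    N = 50
    w_s  = rng.normal(size=(N, 3))
    wd_s = rng.normal(size=(N, 3))
    tau_s = np.array([J_true @ wd + np.cross(w, J_true @ w) for w, wd in zip(w_s, wd_s)])
    tau_s += 0.01 * rng.normal(size=tau_s.shape)

    J = cp.Variable((3, 3), symmetric=True)
    residuals = [J @ wd + skew(w) @ (J @ w) - tau for w, wd, tau in zip(w_s, wd_s, tau_s)]
    cost = sum(cp.sum_squares(r) for r in residuals)
    prob = cp.Problem(cp.Minimize(cost), [J >> 1e-3 * np.eye(3)])  # LMI: J positive definite
    prob.solve()
    print("estimated inertia:\n", J.value)
    ```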

  2. Pattern Search Methods for Linearly Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush-Kuhn-Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative. Key to the analysis of the algorithms is the way in which the local search patterns conform to the geometry of the boundary of the feasible region.

  3. Pattern Search Algorithms for Bound Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1996-01-01

    We present a convergence theory for pattern search methods for solving bound constrained nonlinear programs. The analysis relies on the abstract structure of pattern search methods and an understanding of how the pattern interacts with the bound constraints. This analysis makes it possible to develop pattern search methods for bound constrained problems while only slightly restricting the flexibility present in pattern search methods for unconstrained problems. We prove global convergence despite the fact that pattern search methods do not have explicit information concerning the gradient and its projection onto the feasible region and consequently are unable to enforce explicitly a notion of sufficient feasible decrease.
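    A minimal feasible-point compass (pattern) search for a bound-constrained problem, in the spirit of the methods described above though far simpler than the algorithms analyzed in the report; the objective and bounds are made up for the example.

    ```python
    # Compass search with bound constraints: poll along coordinate directions,
    # accept only feasible improving points, shrink the step when the poll fails.
    import numpy as np

    def compass_search(f, x0, lower, upper, step=0.5, tol=1e-6, max_iter=1000):
        x, fx = np.asarray(x0, float), f(x0)
        for _ in range(max_iter):
            improved = False
            for i in range(len(x)):
                for sign in (+1.0, -1.0):
                    trial = x.copy()
                    trial[i] += sign * step
                    if np.all(trial >= lower) and np.all(trial <= upper):  # feasibility
                        ft = f(trial)
                        if ft < fx:
                            x, fx, improved = trial, ft, True
            if not improved:
                step *= 0.5            # unsuccessful poll: refine the step
                if step < tol:
                    break
        return x, fx

    rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
    x_best, f_best = compass_search(rosen, x0=[0.0, 0.0], lower=[-2, -2], upper=[0.8, 2])
    print(x_best, f_best)   # the upper bound on the first variable is active here
    ```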

  4. Pattern recognition constrains mantle properties, past and present

    NASA Astrophysics Data System (ADS)

    Atkins, S.; Rozel, A. B.; Valentine, A. P.; Tackley, P.; Trampert, J.

    2015-12-01

    Understanding and modelling mantle convection requires knowledge of many mantle properties, such as viscosity, chemical structure and thermal properties such as the radiogenic heating rate. However, many of these parameters are only poorly constrained. We demonstrate a new method for inverting present day Earth observations for mantle properties. We use neural networks to represent the posterior probability density functions of many different mantle properties given the present structure of the mantle. We construct these probability density functions by sampling a wide range of possible mantle properties and running forward simulations, using the convection code StagYY. Our approach is particularly powerful because of its flexibility. Our samples are selected in the prior space, rather than being targeted towards a particular observation, as would normally be the case for probabilistic inversion. This means that the same suite of simulations can be used for inversions using a wide range of geophysical observations without the need to resample. Our method is probabilistic and non-linear and is therefore compatible with non-linear convection, avoiding some of the limitations associated with other methods for inverting mantle flow. This allows us to consider the entire history of the mantle. We also need relatively few samples for our inversion, making our approach computationally tractable when considering long periods of mantle history. Using the present thermal and density structure of the mantle, we can constrain rheological and compositional parameters such as viscosity and yield stress. We can also use the present day mantle structure to make inferences about the initial conditions for convection 4.5 Gyr ago. We can constrain initial mantle conditions including the initial concentration of heat producing elements in the mantle and the initial thickness of primordial material at the CMB. Currently we use density and temperature structure for our inversions, but we can

  5. Towards better constrained models of the solar magnetic cycle

    NASA Astrophysics Data System (ADS)

    Munoz-Jaramillo, Andres

    2010-12-01

    The best tools we have for understanding the origin of solar magnetic variability are kinematic dynamo models. During the last decade, this type of models has seen a continuous evolution and has become increasingly successful at reproducing solar cycle characteristics. The basic ingredients of these models are: the solar differential rotation -- which acts as the main source of energy for the system by shearing the magnetic field; the meridional circulation -- which plays a crucial role in magnetic field transport; the turbulent diffusivity -- which attempts to capture the effect of convective turbulence on the large scale magnetic field; and the poloidal field source -- which closes the cycle by regenerating the poloidal magnetic field. However, most of these ingredients remain poorly constrained which allows one to obtain solar-like solutions by "tuning" the input parameters, leading to controversy regarding which parameter set is more appropriate. In this thesis we revisit each of those ingredients in an attempt to constrain them better by using observational data and theoretical considerations, reducing the amount of free parameters in the model. For the meridional flow and differential rotation we use helioseismic data to constrain free parameters and find that the differential rotation is well determined, but the available data can only constrain the latitudinal dependence of the meridional flow. For the turbulent magnetic diffusivity we show that combining mixing-length theory estimates with magnetic quenching allows us to obtain viable magnetic cycles and that the commonly used diffusivity profiles can be understood as a spatiotemporal average of this process. For the poloidal source we introduce a more realistic way of modeling active region emergence and decay and find that this resolves existing discrepancies between kinematic dynamo models and surface flux transport simulations. We also study the physical mechanisms behind the unusually long minimum of

  6. Constrained optimization of image restoration filters

    NASA Technical Reports Server (NTRS)

    Riemer, T. E.; Mcgillem, C. D.

    1973-01-01

    A linear shift-invariant preprocessing technique is described which requires no specific knowledge of the image parameters and which is sufficiently general to allow the effective radius of the composite imaging system to be minimized while constraining other system parameters to remain within specified limits.

  7. Rhythmic Grouping Biases Constrain Infant Statistical Learning

    ERIC Educational Resources Information Center

    Hay, Jessica F.; Saffran, Jenny R.

    2012-01-01

    Linguistic stress and sequential statistical cues to word boundaries interact during speech segmentation in infancy. However, little is known about how the different acoustic components of stress constrain statistical learning. The current studies were designed to investigate whether intensity and duration each function independently as cues to…

  8. Analytical solutions to constrained hypersonic flight trajectories

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1993-01-01

    The flight trajectory of aerospace vehicles subject to a class of path constraints is considered. The constrained dynamics is shown to be a natural two-time-scale system. Asymptotic analytical solutions are obtained. Problems of trajectory optimization and guidance can be dramatically simplified with these solutions. Applications in trajectory design for an aerospace plane strongly support the theoretical development.

  9. Analytical solutions to constrained hypersonic flight trajectories

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    The flight trajectory of aerospace vehicles subject to a class of path constraints is considered. The constrained dynamics is shown to be a natural two-time-scale system. Asymptotic analytical solutions are obtained. Problems of trajectory optimization and guidance can be dramatically simplified with these solutions. Applications in trajectory design for an aerospace plane strongly support the theoretical development.

  10. Constrained tri-sphere kinematic positioning system

    DOEpatents

    Viola, Robert J

    2010-12-14

    A scalable and adaptable, six-degree-of-freedom, kinematic positioning system is described. The system can position objects supported on top of, or suspended from, jacks comprising constrained joints. The system is compatible with extreme low temperature or high vacuum environments. When constant adjustment is not required a removable motor unit is available.

  11. Constrained Subjective Assessment of Student Learning

    ERIC Educational Resources Information Center

    Saliu, Sokol

    2005-01-01

    Student learning is a complex incremental cognitive process; assessment needs to parallel this, reporting the results in similar terms. Application of fuzzy sets and logic to the criterion-referenced assessment of student learning is considered here. The constrained qualitative assessment (CQA) system was designed, and then applied in assessing a…

  12. Thermoregulation constrains effective warning signal expression.

    PubMed

    Lindstedt, Carita; Lindström, Leena; Mappes, Johanna

    2009-02-01

    Evolution of conspicuous signals may be constrained if animal coloration has nonsignaling as well as signaling functions. In aposematic wood tiger moth (Parasemia plantaginis) larvae, the size of a warning signal (orange patch on black body) varies phenotypically and genetically. Although a large warning signal is favored as an antipredator defense, we hypothesized that thermoregulation may constrain the signal size in colder habitats. To test this hypothesis, we conducted a factorial rearing experiment with two selection lines for larval coloration (small and large signal) and with two temperature manipulations (high and low temperature environment). Temperature constrained the size and brightness of the warning signal. Larvae with a small signal had an advantage in the colder environment, which was demonstrated by a faster development time and growth rate in the low temperature treatment, compared to larvae with a large signal. Interestingly, the larvae with a small signal were found more often on the plant than the ones with a large signal, suggesting higher basking activity of the melanic (small signal) individuals in the low temperature. We conclude that the expression of aposematic display is not only defined by its efficacy against predators; variation in temperature may constrain evolution of a conspicuous warning signal and maintain variation in it.

  13. Mantle Convection Models Constrained by Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Durbin, C. J.; Shahnas, M.; Peltier, W. R.; Woodhouse, J. H.

    2011-12-01

    Perovskite-post-Perovskite transition (Murakami et al., 2004, Science) that appears to define the D" layer at the base of the mantle. In this initial phase of what will be a longer term project we are assuming that the internal mantle viscosity structure is spherically symmetric and compatible with the recent inferences of Peltier and Drummond (2010, Geophys. Res. Lett.) based upon glacial isostatic adjustment and Earth rotation constraints. The internal density structure inferred from the tomography model is assimilated into the convection model by continuously "nudging" the modification to the input density structure predicted by the convection model back towards the tomographic constraint at the long wavelengths that the tomography specifically resolves, leaving the shorter wavelength structure free to evolve, essentially "slaved" to the large scale structure. We focus upon the ability of the nudged model to explain observed plate velocities, including both their poloidal (divergence related) and toroidal (strike slip fault related) components. The true plate velocity field is then used as an additional field towards which the tomographically constrained solution is nudged.

  14. Generation of Granulites Constrained by Thermal Modeling

    NASA Astrophysics Data System (ADS)

    Depine, G. V.; Andronicos, C. L.; Phipps-Morgan, J.

    2006-12-01

    The heat source needed to generate granulites facies metamorphism is still an unsolved problem in geology. There is a close spatial relationship between granulite terrains and extensive silicic plutonism, suggesting heat advection by melts is critical to their formation. To investigate the role of heat advection by melt in the generation of granulites we use numerical 1-D models which include the movement of melt from the base of the crust to the middle crust. The model is in part constrained by petrological observations from the Coast Plutonic Complex (CPC) in British Columbia, Canada at ~ 54° N where migmatite and granulite are widespread. The model takes into account time dependent heat conduction and advection of melts generated at the base of the crust. The model starts with a crust of 55 km, consistent with petrologic and geochemical data from the CPC. The lower crust is assumed to be amphibolite in composition, consistent with seismologic and geochemical constraints for the CPC. An initial geothermal gradient estimated from metamorphic P-T-t paths in this region is ~37°C/km, hotter than normal geothermal gradients. The parameters used for the model are a coefficient of thermal conductivity of 2.5 W/m°C, a density for the crust of 2700 kg/m3 and a heat capacity of 1170 J/Kg°C. Using the above starting conditions, a temperature of 1250°C is assumed for the mantle below 55 km, equivalent to placing asthenosphere in contact with the base of the crust to simulate delamination, basaltic underplating and/or asthenospheric exposure by a sudden steepening of slab. This condition at 55 km results in melting the amphibolite in the lower crust. Once a melt fraction of 10% is reached the melt is allowed to migrate to a depth of 13 km, while material at 13 km is displaced downwards to replace the ascending melts. The steady-state profile has a very steep geothermal gradient of more than 50°C/km from the surface to 13 km, consistent with the generation of andalusite
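    An order-of-magnitude check using the parameters quoted in the record (thermal conductivity, density, heat capacity, crustal thickness): the purely conductive timescale across the crust comes out at roughly a hundred million years, consistent with the abstract's emphasis on melt advection as the fast heat carrier. This is only a back-of-the-envelope illustration, not a reproduction of the 1-D model itself.

    ```python
    # Thermal diffusivity and conductive timescale from the stated parameters.
    k, rho, c_p = 2.5, 2700.0, 1170.0     # W/(m K), kg/m^3, J/(kg K)
    kappa = k / (rho * c_p)               # thermal diffusivity, m^2/s
    L = 55e3                              # crustal thickness, m
    t_cond = L**2 / kappa                 # conductive diffusion timescale, s
    print(f"kappa = {kappa:.2e} m^2/s, t ~ {t_cond / 3.156e13:.0f} Myr")
    ```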

  15. Compilation for critically constrained knowledge bases

    SciTech Connect

    Schrag, R.

    1996-12-31

    We show that many "critically constrained" Random 3SAT knowledge bases (KBs) can be compiled into disjunctive normal form easily by using a variant of the "Davis-Putnam" proof procedure. From these compiled KBs we can answer all queries about entailment of conjunctive normal formulas, also easily, compared to a "brute-force" approach to approximate knowledge compilation into unit clauses for the same KBs. We exploit this fact to develop an aggressive hybrid approach which attempts to compile a KB exactly until a given resource limit is reached, then falls back to approximate compilation into unit clauses. The resulting approach handles all of the critically constrained Random 3SAT KBs with average savings of an order of magnitude over the brute-force approach.
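    The basic idea of compiling a CNF knowledge base into DNF can be illustrated with a toy enumerator that lists satisfying assignments (one DNF term per model); this is purely illustrative and far simpler than the Davis-Putnam variant the report describes, with clauses and variables made up for the example.

    ```python
    # Toy CNF-to-DNF "compilation" by backtracking enumeration of models.
    def compile_to_dnf(clauses, variables):
        """clauses: list of sets of signed ints (e.g. {1, -3}); variables: ordered ints."""
        models = []

        def backtrack(assignment):
            for clause in clauses:
                if all(-lit in assignment for lit in clause):
                    return                      # clause falsified: prune this branch
            if len(assignment) == len(variables):
                models.append(sorted(assignment, key=abs))  # complete model = DNF term
                return
            var = variables[len(assignment)]
            for lit in (var, -var):
                backtrack(assignment | {lit})

        backtrack(frozenset())
        return models

    # Example KB: (x1 or x2) and (not x1 or x3)
    kb = [{1, 2}, {-1, 3}]
    for term in compile_to_dnf(kb, [1, 2, 3]):
        print(term)
    ```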

  16. Maximum constrained sparse coding for image representation

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Zhao, Danpei; Jiang, Zhiguo

    2015-12-01

    Sparse coding exhibits good performance in many computer vision applications by finding bases which capture high-level semantics of the data and learning sparse coefficients in terms of the bases. However, due to the fact that bases are non-orthogonal, sparse coding can hardly preserve the samples' similarity, which is important for discrimination. In this paper, a new image representing method called maximum constrained sparse coding (MCSC) is proposed. Sparse representation with more active coefficients means more similarity information, and the infinite norm is added to the solution for this purpose. We solve the optimization problem by constraining the codes' maximum and releasing the residual to other dictionary atoms. Experimental results on image clustering show that our method can preserve the similarity of adjacent samples and maintain the sparsity of code simultaneously.

  17. An autonomous vehicle: Constrained test and evaluation

    NASA Astrophysics Data System (ADS)

    Griswold, Norman C.

    1991-11-01

    The objective of the research is to develop an autonomous vehicle which utilizes stereo camera sensors (using ambient light) to follow complex paths at speeds up to 35 mph with consideration of moving vehicles within the path. The task is intended to demonstrate the contribution to safety of a vehicle under automatic control. All of the long-term scenarios investigating future reduction in congestion involve an automatic system taking control, or partial control, of the vehicle. A vehicle which includes a collision avoidance system is a prerequisite to an automatic control system. The report outlines the results of a constrained test of a vision controlled vehicle. In order to demonstrate its ability to perform on the current street system the vehicle was constrained to recognize, approach, and stop at an ordinary roadside stop sign.

  18. Global marine primary production constrains fisheries catches.

    PubMed

    Chassot, Emmanuel; Bonhommeau, Sylvain; Dulvy, Nicholas K; Mélin, Frédéric; Watson, Reg; Gascuel, Didier; Le Pape, Olivier

    2010-04-01

    Primary production must constrain the amount of fish and invertebrates available to expanding fisheries; however, the degree of limitation has only been demonstrated at regional scales to date. Here we show that phytoplanktonic primary production, estimated from an ocean-colour satellite (SeaWiFS), is related to global fisheries catches at the scale of Large Marine Ecosystems, while accounting for temperature and ecological factors such as ecosystem size and type, species richness, animal body size, and the degree and nature of fisheries exploitation. Indeed, we show that global fisheries catches since 1950 have been increasingly constrained by the amount of primary production. The primary production appropriated by current global fisheries is 17-112% higher than that appropriated by sustainable fisheries. Global primary production appears to be declining, in some part due to climate variability and change, with consequences for the near future fisheries catches.

  19. Synthesis of constrained analogues of tryptophan

    PubMed Central

    Negrato, Marco; Abbiati, Giorgio; Dell’Acqua, Monica

    2015-01-01

    Summary A Lewis acid-catalysed diastereoselective [4 + 2] cycloaddition of vinylindoles and methyl 2-acetamidoacrylate, leading to methyl 3-acetamido-1,2,3,4-tetrahydrocarbazole-3-carboxylate derivatives, is described. Treatment of the obtained cycloadducts under hydrolytic conditions results in the preparation of a small library of compounds bearing the free amino acid function at C-3 and pertaining to the class of constrained tryptophan analogues. PMID:26664620

  20. Constraining RRc candidates using SDSS colours

    NASA Astrophysics Data System (ADS)

    Banyai, E.; Plachy, E.; Molnar, L.; Dobos, L.; Szabo, R.

    2016-05-01

    The light variations of first-overtone RR Lyrae stars and contact eclipsing binaries can be difficult to distinguish. The Catalina Periodic Variable Star catalog contains several misclassified objects, despite the classification efforts by Drake et al. (2014). They used metallicity and surface gravity derived from spectroscopic data (from the SDSS database) to rule out binaries. Our aim is to further constrain the catalog using SDSS colours to estimate physical parameters for stars that did not have spectroscopic data.

  1. Constrained Multi-View Video Face Clustering.

    PubMed

    Cao, Xiaochun; Zhang, Changqing; Zhou, Chengju; Fu, Huazhu; Foroosh, Hassan

    2015-11-01

    In this paper, we focus on face clustering in videos. To promote the performance of video clustering by multiple intrinsic cues, i.e., pairwise constraints and multiple views, we propose a constrained multi-view video face clustering method under a unified graph-based model. First, unlike most existing video face clustering methods which only employ these constraints in the clustering step, we strengthen the pairwise constraints through the whole video face clustering framework, both in sparse subspace representation and spectral clustering. In the constrained sparse subspace representation, the sparse representation is forced to explore unknown relationships. In the constrained spectral clustering, the constraints are used to guide for learning more reasonable new representations. Second, our method considers both the video face pairwise constraints as well as the multi-view consistence simultaneously. In particular, the graph regularization enforces the pairwise constraints to be respected and the co-regularization penalizes the disagreement among different graphs of multiple views. Experiments on three real-world video benchmark data sets demonstrate the significant improvements of our method over the state-of-the-art methods. PMID:26259245
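    A common, simple baseline for constrained clustering (not the paper's unified multi-view graph model) is to fold must-link/cannot-link pairs directly into the affinity matrix before spectral clustering; the sketch below uses synthetic data, made-up constraints, and scikit-learn.

    ```python
    # Constrained spectral clustering baseline: edit the affinity with pairwise
    # constraints, then run standard spectral clustering on the result.
    import numpy as np
    from sklearn.cluster import SpectralClustering
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])

    W = rbf_kernel(X, gamma=2.0)          # base affinity
    must_link   = [(0, 1), (5, 25)]       # (5, 25) deliberately crosses clusters
    cannot_link = [(2, 39)]
    for i, j in must_link:
        W[i, j] = W[j, i] = 1.0           # force high affinity
    for i, j in cannot_link:
        W[i, j] = W[j, i] = 0.0           # force low affinity

    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(W)
    print(labels)
    ```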

  2. An English language interface for constrained domains

    NASA Technical Reports Server (NTRS)

    Page, Brenda J.

    1989-01-01

    The Multi-Satellite Operations Control Center (MSOCC) Jargon Interpreter (MJI) demonstrates an English language interface for a constrained domain. A constrained domain is defined as one with a small and well delineated set of actions and objects. The set of actions chosen for the MJI is from the domain of MSOCC Applications Executive (MAE) Systems Test and Operations Language (STOL) directives and contains directives for signing a cathode ray tube (CRT) on or off, calling up or clearing a display page, starting or stopping a procedure, and controlling history recording. The set of objects chosen consists of CRTs, display pages, STOL procedures, and history files. Translation from English sentences to STOL directives is done in two phases. In the first phase, an augmented transition net (ATN) parser and dictionary are used for determining grammatically correct parsings of input sentences. In the second phase, grammatically typed sentences are submitted to a forward-chaining rule-based system for interpretation and translation into equivalent MAE STOL directives. Tests of the MJI show that it is able to translate individual clearly stated sentences into the subset of directives selected for the prototype. This approach to an English language interface may be used for similarly constrained situations by modifying the MJI's dictionary and rules to reflect the change of domain.

  3. Calcium constrains plant control over forest ecosystem nitrogen cycling.

    PubMed

    Groffman, Peter M; Fisk, Melany C

    2011-11-01

    Forest ecosystem nitrogen (N) cycling is a critical controller of the ability of forests to prevent the movement of reactive N to receiving waters and the atmosphere and to sequester elevated levels of atmospheric carbon dioxide (CO2). Here we show that calcium (Ca) constrains the ability of northern hardwood forest trees to control the availability and loss of nitrogen. We evaluated soil N-cycling response to Ca additions in the presence and absence of plants and observed that when plants were present, Ca additions "tightened" the ecosystem N cycle, with decreases in inorganic N levels, potential net N mineralization rates, microbial biomass N content, and denitrification potential. In the absence of plants, Ca additions induced marked increases in nitrification (the key process controlling ecosystem N losses) and inorganic N levels. The observed "tightening" of the N cycle when Ca was added in the presence of plants suggests that the capacity of forests to absorb elevated levels of atmospheric N and CO2 is fundamentally constrained by base cations, which have been depleted in many areas of the globe by acid rain and forest harvesting.

  4. The Application of Optimisation Methods to Constrain Absolute Plate Motions

    NASA Astrophysics Data System (ADS)

    Tetley, M. G.; Williams, S.; Hardy, S.; Müller, D.

    2015-12-01

    Plate tectonic reconstructions are an excellent tool for understanding the configuration and behaviour of continents through time on both global and regional scales, and are relatively well understood back to ~200 Ma. However, many of these models represent only relative motions between continents, providing little information of absolute tectonic motions and their relationship with the deep Earth. Significant issues exist in solving this problem, including how to combine constraints from multiple, diverse data into a unified model of absolute plate motions; and how to address uncertainties both in the available data, and in the assumptions involved in this process (e.g. hotspot motion, true polar wander). In deep time (pre-Pangea breakup), plate reconstructions rely more heavily on paleomagnetism, but these data often imply plate velocities much larger than those observed since the breakup of the supercontinent Pangea where plate velocities are constrained by the seafloor spreading record. Here we present two complementary techniques to address these issues, applying parallelized numerical methods to quantitatively investigate absolute plate motions through time. Firstly, we develop a data-fit optimized global absolute reference frame constrained by kinematic reconstruction data, hotspot-trail observations, and trench migration statistics. Secondly we calculate optimized paleomagnetic data-derived apparent polar wander paths (APWPs) for both the Phanerozoic and Precambrian. Paths are generated from raw pole data with optimal spatial and temporal pole configurations calculated using all known uncertainties and quality criteria to produce velocity-optimized absolute motion paths through deep time.

  5. A Method to Constrain the Size of the Protosolar Nebula

    NASA Astrophysics Data System (ADS)

    Kretke, K. A.; Levison, H. F.; Buie, M. W.; Morbidelli, A.

    2012-04-01

    Observations indicate that the gaseous circumstellar disks around young stars vary significantly in size, ranging from tens to thousands of AU. Models of planet formation depend critically upon the properties of these primordial disks, yet in general it is impossible to connect an existing planetary system with an observed disk. We present a method by which we can constrain the size of our own protosolar nebula using the properties of the small body reservoirs in the solar system. In standard planet formation theory, after Jupiter and Saturn formed they scattered a significant number of remnant planetesimals into highly eccentric orbits. In this paper, we show that if there had been a massive, extended protoplanetary disk at that time, then the disk would have excited Kozai oscillations in some of the scattered objects, driving them into high-inclination (i >~ 50°), low-eccentricity orbits (q >~ 30 AU). The dissipation of the gaseous disk would strand a subset of objects in these high-inclination orbits; orbits that are stable on Gyr timescales. To date, surveys have not detected any Kuiper-belt objects with orbits consistent with this dynamical mechanism. Using these non-detections by the Deep Ecliptic Survey and the Palomar Distant Solar System Survey we are able to rule out an extended gaseous protoplanetary disk (RD >~ 80 AU) in our solar system at the time of Jupiter's formation. Future deep all sky surveys such as the Large Synoptic Survey Telescope will allow us to further constrain the size of the protoplanetary disk.

  6. Constrained motion control on a hemispherical surface: path planning.

    PubMed

    Berman, Sigal; Liebermann, Dario G; McIntyre, Joseph

    2014-03-01

    Surface-constrained motion, i.e., motion constraint by a rigid surface, is commonly found in daily activities. The current work investigates the choice of hand paths constrained to a concave hemispherical surface. To gain insight regarding paths and their relationship with task dynamics, we simulated various control policies. The simulations demonstrated that following a geodesic path (the shortest path between 2 points on a sphere) is advantageous not only in terms of path length but also in terms of motor planning and sensitivity to motor command errors. These stem from the fact that the applied forces lie in a single plane (that of the geodesic path). To test whether human subjects indeed follow the geodesic, and to see how such motion compares to other paths, we recorded movements in a virtual haptic-visual environment from 11 healthy subjects. The task comprised point-to-point motion between targets at two elevations (30° and 60°). Three typical choices of paths were observed from a frontal plane projection of the paths: circular arcs, straight lines, and arcs close to the geodesic path for each elevation. Based on the measured hand paths, we applied k-means blind separation to divide the subjects into three groups and compared performance indicators. The analysis confirmed that subjects who followed paths closest to the geodesic produced faster and smoother movements compared with the others. The "better" performance reflects the dynamical advantages of following the geodesic path and may also reflect invariant features of control policies used to produce such a surface-constrained motion.
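    Generating points along the geodesic (great-circle) path referred to above is straightforward with spherical linear interpolation; the start and end elevations in the sketch below echo the task geometry (30° and 60°) but are otherwise arbitrary.

    ```python
    # Geodesic (great-circle) path between two points on a unit sphere via slerp.
    import numpy as np

    def sph(elev_deg, azim_deg, radius=1.0):
        e, a = np.radians(elev_deg), np.radians(azim_deg)
        return radius * np.array([np.cos(e) * np.cos(a), np.cos(e) * np.sin(a), np.sin(e)])

    def geodesic(p0, p1, n=50):
        """Points along the great-circle arc between unit vectors p0 and p1."""
        omega = np.arccos(np.clip(p0 @ p1, -1.0, 1.0))
        t = np.linspace(0.0, 1.0, n)[:, None]
        return (np.sin((1 - t) * omega) * p0 + np.sin(t * omega) * p1) / np.sin(omega)

    p_start, p_end = sph(30, 0), sph(60, 40)
    path = geodesic(p_start, p_end)
    print(path[0], path[-1])   # endpoints recovered; all points lie on the sphere
    ```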

  7. Late time CMB anisotropies constrain mini-charged particles

    SciTech Connect

    Burrage, C.; Redondo, J.; Ringwald, A.; Jaeckel, J. E-mail: joerg.jaeckel@durham.ac.uk E-mail: andreas.ringwald@desy.de

    2009-11-01

    Observations of the temperature anisotropies induced as light from the CMB passes through large scale structures in the late universe are a sensitive probe of the interactions of photons in such environments. In extensions of the Standard Model which give rise to mini-charged particles, photons propagating through transverse magnetic fields can be lost to pair production of such particles. Such a decrement in the photon flux would occur as photons from the CMB traverse the magnetic fields of galaxy clusters. Therefore late time CMB anisotropies can be used to constrain the properties of mini-charged particles. We outline how this test is constructed, and present new constraints on mini-charged particles from observations of the Sunyaev-Zel'dovich effect in the Coma cluster.

  8. Constraining f(T,𝒯) gravity models using type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Sáez-Gómez, Diego; Carvalho, C. Sofia; Lobo, Francisco S. N.; Tereno, Ismael

    2016-07-01

    We present an analysis of an f(T,𝒯) extension of the Teleparallel Equivalent of General Relativity, where T denotes the torsion scalar and 𝒯 denotes the trace of the energy-momentum tensor. This extension includes nonminimal couplings between torsion and matter. In particular, we construct two specific models that recover the usual continuity equation, namely, f(T,𝒯) = T + g(𝒯) and f(T,𝒯) = T × g(𝒯). We then constrain the parameters of each model by fitting the predicted distance modulus to that measured from type Ia supernovae and find that both models can reproduce the late-time cosmic acceleration. We also observe that one of the models satisfies the observational constraints well and yields a goodness-of-fit similar to the ΛCDM model, thus demonstrating that f(T,𝒯) gravity theory encompasses viable models that can be an alternative to ΛCDM.
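
    As a rough illustration of the fitting procedure described above (not the authors' pipeline), the sketch below performs a chi-square fit of a predicted distance modulus to mock type Ia supernova data. A flat ΛCDM-style H(z) with a single free parameter stands in for the f(T,𝒯) model predictions; the Hubble constant, redshift range and 0.1 mag scatter are assumptions made for the example.

      # Illustrative sketch only: chi-square fit of a predicted distance modulus to
      # (mock) type Ia supernova data. The simple flat-LambdaCDM H(z) used here is a
      # stand-in for the modified-gravity model predictions discussed in the abstract.
      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import minimize_scalar

      C_KM_S = 299792.458  # speed of light [km/s]
      H0 = 70.0            # assumed Hubble constant [km/s/Mpc]

      def distance_modulus(z, omega_m):
          """mu(z) = 5 log10(d_L / 10 pc) for a flat universe with matter density omega_m."""
          integrand = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp) ** 3 + (1 - omega_m))
          d_c = C_KM_S / H0 * quad(integrand, 0.0, z)[0]   # comoving distance [Mpc]
          d_l = (1 + z) * d_c                              # luminosity distance [Mpc]
          return 5.0 * np.log10(d_l) + 25.0                # +25 converts Mpc to 10 pc

      # Mock data: fiducial omega_m = 0.3 with 0.1 mag Gaussian scatter.
      rng = np.random.default_rng(1)
      z_obs = np.linspace(0.05, 1.0, 30)
      mu_obs = np.array([distance_modulus(z, 0.3) for z in z_obs]) + rng.normal(0, 0.1, 30)

      def chi2(omega_m):
          mu_model = np.array([distance_modulus(z, omega_m) for z in z_obs])
          return np.sum(((mu_obs - mu_model) / 0.1) ** 2)

      best = minimize_scalar(chi2, bounds=(0.05, 0.95), method="bounded")
      print(f"best-fit omega_m = {best.x:.3f}, chi2 = {best.fun:.1f}")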

  9. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  10. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Finger joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device...

  11. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Finger joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device...

  12. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  13. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  14. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Finger joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device...

  15. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  16. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  17. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  18. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  19. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Finger joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device...

  20. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  1. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  2. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Finger joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device...

  3. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  4. Constraining the African pole of rotation

    NASA Astrophysics Data System (ADS)

    Asfaw, Laike M.

    1992-08-01

    In the absence of well-defined transform faults in the East African rift system for constraining the plate kinematic reconstruction, the pole of relative motion for the African (Nubian) and Somalian plates has been determined from residual motion. If Africa and Somalia are to continue to drift apart along the East African rift system (which would then evolve into a series of ridges offset by transform faults), then incipient transform faults that may reflect the direction of relative motion should already be in place along the East African rift system. The incipient transforms along the East African rift system are characterized by shear zones, such as the Zambezi shear zone in the south and the Aswa and Hamer shear zones in the north. Some of these shear zones have been associated with recent strike-slip faulting in the NW-SE direction during periods of earthquakes. Provided that these consistently NW-SE oriented strike-slip movements in the shear zones give the direction of relative motion of the adjacent plates, they can be used to constrain the position of the Africa-Somalia Euler pole. Because identifying transform faults in the East African rift system is difficult, and because the genesis of transform faults characterizing a plate boundary at an inception stage is not well known, the discussion here is limited to the northern segment of the East African rift system, where shear zones are better characterized by the existing geophysical data. The characterizing features vary with latitude, indicating the complexity of the problem of the genesis of transform faults. I believe, however, that the relatively well defined intra-continental transform fault in the northern East African rift system, which is characterized by strike-slip faulting and earthquakes, constrains the pole of relative motion for the African and Somalian plates to a position near 1.5°S and 29.0°E.

  5. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic, computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigation and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift-free navigation is achieved with respect to the environment.

  6. Quantization of soluble classical constrained systems

    SciTech Connect

    Belhadi, Z.; Menas, F.; Bérard, A.; Mohrbach, H.

    2014-12-15

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them, all brackets of the dynamical variables of the system can be deduced in a straightforward way.

  7. Charged particles constrained to a curved surface

    NASA Astrophysics Data System (ADS)

    Müller, Thomas; Frauendiener, Jörg

    2013-01-01

    We study the motion of charged particles constrained to arbitrary two-dimensional curved surfaces but interacting in three-dimensional space via the Coulomb potential. To speed up the interaction calculations, we use the parallel compute capability of the Compute Unified Device Architecture of today's graphics boards. The particles and the curved surfaces are shown using the Open Graphics Library. This paper is intended to give graduate students, who have basic experience with electrostatics and the Lagrangian formalism, a deeper understanding of charged particle interactions and a short introduction to handling a many-particle system using parallel computing on a single home computer.

  8. Local structure of equality constrained NLP problems

    SciTech Connect

    Mari, J.

    1994-12-31

    We show that locally around a feasible point, the behavior of an equality constrained nonlinear program is described by the gradient and the Hessian of the Lagrangian on the tangent subspace. In particular this holds true for reduced gradient approaches. Applying the same ideas to the control of nonlinear ODEs, one can devise first- and second-order methods that can be applied also to stiff problems. We finally describe an application of these ideas to the optimization of the production of human growth factor by fed-batch fermentation.

  9. Quantization of soluble classical constrained systems

    NASA Astrophysics Data System (ADS)

    Belhadi, Z.; Menas, F.; Bérard, A.; Mohrbach, H.

    2014-12-01

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac's formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them, all brackets of the dynamical variables of the system can be deduced in a straightforward way.
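
    A minimal worked example of this idea, for the one-dimensional harmonic oscillator, can be written down with a computer algebra system. In the sketch below (my illustration, not the authors' code) the integration constants A and B of the general solution x(t) = A cos(ωt) + B sin(ωt) are expressed through (x, p), their bracket is found to be the time-independent quantity 1/(mω), and reassembling x(t) and p(t) from A and B returns the canonical bracket, illustrating how the brackets of the dynamical variables follow from those of the constants of integration.

      # Worked example (illustration only): brackets between the constants of
      # integration of the 1-D harmonic oscillator, computed symbolically.
      import sympy as sp

      x, p, t = sp.symbols("x p t", real=True)
      m, w = sp.symbols("m w", positive=True)

      def pb(f, g):
          """Poisson bracket with respect to the canonical pair (x, p)."""
          return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

      # Constants of integration written in terms of the phase-space variables at time t.
      A = x * sp.cos(w * t) - p * sp.sin(w * t) / (m * w)
      B = x * sp.sin(w * t) + p * sp.cos(w * t) / (m * w)

      print(sp.simplify(pb(A, B)))      # -> 1/(m*w), independent of t

      # Consistency check: rebuilding x(t), p(t) from A and B collapses back to x and p,
      # so the canonical bracket {x, p} = 1 is recovered.
      x_t = A * sp.cos(w * t) + B * sp.sin(w * t)
      p_t = m * w * (-A * sp.sin(w * t) + B * sp.cos(w * t))
      print(sp.simplify(pb(x_t, p_t)))  # -> 1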

  10. Constrained inflaton due to a complex scalar

    SciTech Connect

    Budhi, Romy H. S.; Kashiwase, Shoichi; Suematsu, Daijiro

    2015-09-14

    We reexamine inflation due to a constrained inflaton in the model of a complex scalar. The inflaton evolves along a spiral-like valley of a special scalar potential in the scalar field space, just like single field inflation. A sub-Planckian inflaton can induce sufficient e-foldings because of a long slow-roll path. In a special limit, the scalar spectral index and the tensor-to-scalar ratio have expressions equivalent to those of inflation with a monomial potential φ^n. Favorable values for them could be obtained by varying parameters in the potential. This model could be embedded in a certain radiative neutrino mass model.

  11. Medical knowledge evolution query constraining aspects.

    PubMed

    Eklund, Ann-Marie

    2011-01-01

    In this paper we present a first analysis towards better understanding of the query constraining aspects of knowledge, as expressed in the most used public medical bibliographic database, MEDLINE. Our results indicate, perhaps not surprisingly, that new terms occur, but also that traditional terms are replaced by more specific ones or even go out of use as they become common knowledge. Hence, as knowledge evolves over time, search methods may benefit from becoming more sensitive to how knowledge is expressed, to enable finding new, as well as older, relevant database contents.

  12. Quantum annealing in a kinetically constrained system.

    PubMed

    Das, Arnab; Chakrabarti, Bikas K; Stinchcombe, Robin B

    2005-08-01

    Classical and quantum annealing is discussed in the case of a generalized kinetically constrained model, where the relaxation dynamics of a system with a trivial ground state is retarded by the appearance of energy barriers in the relaxation path, following a local kinetic rule. The effectiveness of thermal and quantum fluctuations in overcoming these kinetic barriers to reach the ground state is studied. It has been shown that for certain barrier characteristics, quantum annealing might by far surpass its thermal counterpart in reaching the ground state faster.

  13. Incomplete Dirac reduction of constrained Hamiltonian systems

    SciTech Connect

    Chandre, C.

    2015-10-15

    First-class constraints constitute a potential obstacle to the computation of a Poisson bracket in Dirac’s theory of constrained Hamiltonian systems. Using the pseudoinverse instead of the inverse of the matrix defined by the Poisson brackets between the constraints, we show that a Dirac–Poisson bracket can be constructed, even if it corresponds to an incomplete reduction of the original Hamiltonian system. The uniqueness of Dirac brackets is discussed. The relevance of this procedure for infinite dimensional Hamiltonian systems is exemplified.

  14. Constrained optimization in human walking: cost minimization and gait plasticity.

    PubMed

    Bertram, John E A

    2005-03-01

    As walking speed increases, consistent relationships emerge between the three determinant parameters of walking, speed, step frequency and step length. However, when step length or step frequency are predetermined rather than speed, different relationships are spontaneously selected. This result is expected if walking parameters are selected to optimize to an underlying objective function, known as the constrained optimization hypothesis. The most likely candidate for the objective function is metabolic cost per distance traveled, where the hypothesis predicts that the subject will minimize the cost of travel under a given gait constraint even if this requires an unusual step length and frequency combination. In the current study this is tested directly by measuring the walking behavior of subjects constrained systematically to determined speeds, step frequencies or step lengths and comparing behavior to predictions derived directly from minimization of measured metabolic cost. A metabolic cost surface in speed-frequency space is derived from metabolic rate for 10 subjects walking at 49 speed-frequency conditions. Optimization is predicted from the iso-energetic cost contours derived from this surface. Substantial congruence is found between the predicted and observed behavior using the cost of walking per unit distance. Although minimization of cost per distance appears to dominate walking control, certain notable differences from predicted behavior suggest that other factors must also be considered. The results of these studies provide a new perspective on the integration of walking cost with neuromuscular control, and provide a novel approach to the investigation of the control features involved in gait parameter selection.
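
    The prediction step of the constrained optimization hypothesis can be illustrated with a toy calculation. The sketch below is illustrative only; the quadratic cost-rate surface and its coefficients are invented stand-ins for the measured surface in the study. It minimizes cost per distance, E(v, f)/v, first over the whole speed-frequency plane and then along a line of fixed step frequency, mimicking a constrained-frequency trial.

      # Toy numerical illustration (not the study's measured surface): constrained
      # optimization predicts the behaviour that minimizes cost per distance E/v
      # along the imposed constraint line in speed-frequency space.
      import numpy as np

      def cost_rate(v, f):
          """Hypothetical metabolic rate [W/kg] with a preferred (v, f) around (1.3, 1.9)."""
          return 2.0 + 1.5 * (v - 1.3) ** 2 + 2.5 * (f - 1.9) ** 2 + 1.0 * (v - 1.3) * (f - 1.9)

      v = np.linspace(0.4, 2.2, 400)   # speed grid [m/s]
      f = np.linspace(1.0, 3.0, 400)   # step-frequency grid [Hz]
      V, F = np.meshgrid(v, f)
      cost_per_distance = cost_rate(V, F) / V   # J/(kg*m), the hypothesized objective

      # Unconstrained (free walking) optimum:
      i, j = np.unravel_index(np.argmin(cost_per_distance), cost_per_distance.shape)
      print(f"free walking: v = {V[i, j]:.2f} m/s, f = {F[i, j]:.2f} Hz")

      # Constrained trial: step frequency fixed at 2.4 Hz -> minimize along that row only.
      row = np.argmin(np.abs(f - 2.4))
      col = np.argmin(cost_per_distance[row, :])
      print(f"f fixed at 2.4 Hz: predicted v = {v[col]:.2f} m/s")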

  15. Constraining the level density using fission of lead projectiles

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, J. L.; Benlliure, J.; Álvarez-Pol, H.; Audouin, L.; Ayyad, Y.; Bélier, G.; Boutoux, G.; Casarejos, E.; Chatillon, A.; Cortina-Gil, D.; Gorbinet, T.; Heinz, A.; Kelić-Heil, A.; Laurent, B.; Martin, J.-F.; Paradela, C.; Pellereau, E.; Pietras, B.; Ramos, D.; Rodríguez-Tajes, C.; Rossi, D. M.; Simon, H.; Taïeb, J.; Vargas, J.; Voss, B.

    2015-10-01

    The nuclear level density is one of the main ingredients for the statistical description of the fission process. In this work, we propose to constrain the description of this parameter by using fission reactions induced by protons and light ions on 208Pb at high kinetic energies. The experiment was performed at GSI (Darmstadt), where the combined use of the inverse kinematics technique with an efficient detection setup allowed us to measure the atomic number of the two fission fragments in coincidence. This measurement permitted us to obtain with high precision the partial fission cross sections and the width of the charge distribution as a function of the atomic number of the fissioning system. These data and others previously measured, covering a large range in fissility, are compared to state-of-the-art calculations. The results reveal that total and partial fission cross sections cannot unambiguously constrain the level density at ground-state and saddle-point deformations and additional observables, such as the width of the charge distribution of the final fission fragments, are required.

  16. Trajectory generation and constrained control of quadrotors

    NASA Astrophysics Data System (ADS)

    Tule, Carlos Alberto

    Unmanned Aerial Systems, although still in early development, are expected to grow in both the military and civil sectors. Within the UAV sector, the quadrotor helicopter platform has been receiving a lot of interest from academic and research institutions because of its simple design and low manufacturing cost, even though it remains a challenging platform to control. Four different controllers were derived for the trajectory generation and constrained control of a quadrotor platform. The first approach involves the linear version of the Model Predictive Control (MPC) algorithm to solve the state-constrained optimization problem. The second approach uses the State Dependent Coefficient (SDC) form to capture the system non-linearities in a pseudo-linear system matrix, which is used to derive the State Dependent Riccati Equation (SDRE) based optimal control. For the third approach, the SDC form is exploited to obtain a nonlinear equivalent of model predictive control. Lastly, a combination of the nonlinear MPC and SDRE optimal control algorithms is used to explore the feasibility of a near real-time nonlinear optimization technique.

  17. Constraining supersymmetric dark matter with synchrotron measurements

    SciTech Connect

    Hooper, Dan

    2008-06-15

    The annihilations of neutralino dark matter (or other dark matter candidate) generate, among other standard model states, electrons and positrons. These particles emit synchrotron photons as a result of their interaction with the galactic magnetic field. In this paper, we use the measurements of the Wilkinson Microwave Anisotropy Probe satellite to constrain the intensity of this synchrotron emission and, in turn, the annihilation cross section of the lightest neutralino. We find this constraint to be more stringent than that provided by any other current indirect detection channel. In particular, the neutralino annihilation cross section must be less than ≈3×10⁻²⁶ cm³/s (1×10⁻²⁵ cm³/s) for 100 GeV (500 GeV) neutralinos distributed with a Navarro-Frenk-White halo profile. For the conservative case of an entirely flat dark matter distribution within the inner 8 kiloparsecs of the Milky Way, the constraint is approximately a factor of 30 less stringent. Even in this conservative case, synchrotron measurements strongly constrain, for example, the possibility of wino or Higgsino neutralino dark matter produced nonthermally in the early universe.

  18. Nonstationary sparsity-constrained seismic deconvolution

    NASA Astrophysics Data System (ADS)

    Sun, Xue-Kai; Sam, Zandong Sun; Xie, Hui-Wen

    2014-12-01

    The Robinson convolution model is mainly restricted by three inappropriate assumptions, i.e., statistically white reflectivity, a minimum-phase wavelet, and stationarity. Modern reflectivity inversion methods (e.g., sparsity-constrained deconvolution) generally attempt to suppress the problems associated with the first two assumptions but often ignore that seismic traces are nonstationary signals, which undermines the basic assumption of an unchanging wavelet in reflectivity inversion. Through tests on reflectivity series, we confirm the effects of nonstationarity on reflectivity estimation and the loss of significant information, especially in deep layers. To overcome the problems caused by nonstationarity, we propose a nonstationary convolutional model, and then use the attenuation curve in log spectra to detect and correct the influences of nonstationarity. We use Gabor deconvolution to handle nonstationarity and sparsity-constrained deconvolution to separate reflectivity and wavelet. The combination of the two deconvolution methods effectively handles nonstationarity and greatly reduces the problems associated with the unreasonable assumptions regarding reflectivity and wavelet. Using marine seismic data, we show that correcting for nonstationarity helps recover subtle reflectivity information and enhances the characterization of details with respect to the geological record.

  19. Physically constrained maximum likelihood mode filtering.

    PubMed

    Papp, Joseph C; Preisig, James C; Morozov, Andrey K

    2010-04-01

    Mode filtering is most commonly implemented using the sampled mode shapes or pseudoinverse algorithms. Buck et al. [J. Acoust. Soc. Am. 103, 1813-1824 (1998)] placed these techniques in the context of a broader maximum a posteriori (MAP) framework. However, the MAP algorithm requires that the signal and noise statistics be known a priori. Adaptive array processing algorithms are candidates for improving performance without the need for a priori signal and noise statistics. A variant of the physically constrained, maximum likelihood (PCML) algorithm [A. L. Kraay and A. B. Baggeroer, IEEE Trans. Signal Process. 55, 4048-4063 (2007)] is developed for mode filtering that achieves the same performance as the MAP mode filter yet does not need a priori knowledge of the signal and noise statistics. The central innovation of this adaptive mode filter is that the received signal's sample covariance matrix, as estimated by the algorithm, is constrained to be that which can be physically realized given a modal propagation model and an appropriate noise model. Shallow water simulation results are presented showing the benefit of using the PCML method in adaptive mode filtering.

  20. The Constrained Crystallization of Nylon-6

    NASA Astrophysics Data System (ADS)

    Mohan, Anushree; Tonelli, Alan

    2008-10-01

    Non-covalently bonded crystalline inclusion compounds (ICs) have been formed by threading host cyclic starches, cyclodextrins (CDs), onto guest nylon 6 (N6) chains. When excess N6 is employed, non-stoichiometric (n-s)-N6-CD-ICs with partially uncovered and dangling N6 chains result. While the crystalline CD lattice is stable to ~300 °C, the uncovered and dangling, yet constrained, N6 chains may crystallize below or, as shown below, be molten above ~225 °C. We have been studying the constrained crystallization of the dangling N6 chains in (n-s)-N6-CD-ICs, with comparison to bulk N6 samples, as a function of N6 molecular weights, lengths of uncovered N6 chains, and the CD host used. In the IC channels formed with host α- and γ-CDs, containing 6 and 8 glucose units, respectively, single and pairs of side-by-side N6 chains are threaded and included. In the α-CD-ICs the ~0.5 nm channels are separated by ~1.4 nm, while in γ-CD-ICs the ~1 nm channels are ~1.7 nm apart, with each γ-CD channel including 2 N6 chains. N6 chains in the bulk and in the dense (n-s)-N6-CD-IC brushes show distinctly different kinetic and thermodynamic crystallization behaviors.

  1. Multiple Manifold Clustering Using Curvature Constrained Path

    PubMed Central

    Babaeian, Amir; Bayestehtashk, Alireza; Bandarabadi, Mojtaba

    2015-01-01

    The problem of multiple surface clustering is a challenging task, particularly when the surfaces intersect. Available methods such as Isomap fail to capture the true shape of the surface near the intersection, resulting in incorrect clustering. The Isomap algorithm uses the shortest path between points. The main drawback of the shortest-path algorithm is the lack of a curvature constraint, which allows paths to run between points on different surfaces. In this paper we tackle this problem by imposing a curvature constraint on the shortest-path algorithm used in Isomap. The algorithm chooses several landmark nodes at random and then checks whether there is a curvature-constrained path between each landmark node and every other node in the neighborhood graph. We build a binary feature vector for each point in which each entry represents the connectivity of that point to a particular landmark. The binary feature vectors can then be used as input to a conventional clustering algorithm such as hierarchical clustering. We apply our method to simulated and some real datasets and show that it performs comparably to the best methods, such as K-manifold and spectral multi-manifold clustering. PMID:26375819
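
    A stripped-down version of the described pipeline is sketched below (my simplification, not the paper's implementation): points sampled from two noisy intersecting lines are connected in a k-nearest-neighbour graph, reachability from random landmarks is computed under a turning-angle limit standing in for the curvature constraint, and the resulting binary feature vectors are fed to hierarchical clustering. The neighbourhood size, angle threshold, noise level and number of landmarks are arbitrary choices for the example.

      # Minimal sketch of the landmark-connectivity idea under a simple turning-angle
      # constraint (a stand-in for the paper's curvature-constrained paths).
      import numpy as np
      from collections import deque
      from scipy.spatial import cKDTree
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(0)
      t = np.linspace(-1, 1, 100)
      line1 = np.c_[t, t] + rng.normal(0, 0.005, (100, 2))    # y = x
      line2 = np.c_[t, -t] + rng.normal(0, 0.005, (100, 2))   # y = -x, intersects line1 at the origin
      X = np.vstack([line1, line2])

      k, max_turn = 6, np.radians(45)                          # kNN size, turning-angle limit
      tree = cKDTree(X)
      _, nbrs = tree.query(X, k + 1)                           # first neighbour is the point itself
      nbrs = nbrs[:, 1:]

      def reachable(start):
          """Nodes reachable from `start` along paths whose turning angle stays below max_turn."""
          seen_states, reached = set(), {start}
          queue = deque((start, n) for n in nbrs[start])
          while queue:
              prev, cur = queue.popleft()
              if (prev, cur) in seen_states:
                  continue
              seen_states.add((prev, cur))
              reached.add(cur)
              d_in = X[cur] - X[prev]
              for nxt in nbrs[cur]:
                  d_out = X[nxt] - X[cur]
                  cosang = np.dot(d_in, d_out) / (np.linalg.norm(d_in) * np.linalg.norm(d_out) + 1e-12)
                  if np.arccos(np.clip(cosang, -1, 1)) < max_turn:
                      queue.append((cur, nxt))
          return reached

      # Binary feature vectors: entry j is 1 if the point is curvature-reachable from landmark j.
      landmarks = rng.choice(len(X), size=12, replace=False)
      features = np.zeros((len(X), len(landmarks)))
      for j, lm in enumerate(landmarks):
          for node in reachable(lm):
              features[node, j] = 1.0

      labels = fcluster(linkage(features, method="ward"), t=2, criterion="maxclust")
      true = np.r_[np.zeros(100, int), np.ones(100, int)]
      agreement = max(np.mean((labels - 1) == true), np.mean((labels - 1) == 1 - true))
      print(f"cluster sizes: {np.bincount(labels)[1:]}, agreement with generating lines: {agreement:.2f}")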

  2. Constraining the topology of reionization through Lyα absorption

    NASA Astrophysics Data System (ADS)

    Furlanetto, S. R.; Hernquist, L.; Zaldarriaga, M.

    2004-11-01

    The reionization of hydrogen in the intergalactic medium (IGM) is a crucial landmark in the history of the Universe, but the processes through which it occurs remain mysterious. In particular, recent numerical and analytic work suggests that reionization by stellar sources is driven by large-scale density fluctuations and must be inhomogeneous on scales of many comoving Mpc. We examine the prospects for constraining the topology of neutral and ionized gas through Lyα absorption of high-redshift sources. One method is to search for gaps in the Gunn-Peterson absorption troughs of luminous sources. These could occur if the line of sight passes sufficiently close to the centre of a large HII region. In contrast to previous work, we find a non-negligible (though still small) probability of observing such a gap before reionization is complete. In our model the transmission spike at z= 6.08 in the spectrum of SDSS J1148+5251 does not necessarily require overlap to have been completed at an earlier epoch. We also examine the IGM damping wing absorption of the Lyα emission lines of star-forming galaxies. Because most galaxies sit inside large HII regions, we find that the severity of absorption is significantly smaller than previously thought and decoupled from the properties of the observed galaxy. While this limits our ability to constrain the mean neutral fraction of the IGM from observations of individual galaxies, it presents the exciting possibility of measuring the size distribution and evolution of the ionized bubbles by examining the distribution of damping wing optical depths in a large sample of galaxies.

  3. CONSTRAINING INTERMEDIATE-MASS BLACK HOLES IN GLOBULAR CLUSTERS

    SciTech Connect

    Umbreit, Stefan; Rasio, Frederic A. E-mail: rasio@northwestern.edu

    2013-05-01

    Decades after the first predictions of intermediate-mass black holes (IMBHs) in globular clusters (GCs) there is still no unambiguous observational evidence for their existence. The most promising signatures for IMBHs are found in the cores of GCs, where the evidence now comes from the stellar velocity distribution, the surface density profile, and, for very deep observations, the mass-segregation profile near the cluster center. However, interpretation of the data, and, in particular, constraints on central IMBH masses, require the use of detailed cluster dynamical models. Here we present results from Monte Carlo cluster simulations of GCs that harbor IMBHs. As an example of application, we compare velocity dispersion, surface brightness and mass-segregation profiles with observations of the GC M10, and constrain the mass of a possible central IMBH in this cluster. We find that, although M10 does not seem to possess a cuspy surface density profile, the presence of an IMBH with a mass up to 0.75% of the total cluster mass, corresponding to about 600 M⊙, cannot be excluded. This is also in agreement with the surface brightness profile, although we find it to be less constraining, as it is dominated by the light of giants, causing it to fluctuate significantly. We also find that the mass-segregation profile cannot be used to discriminate between models with and without IMBH. The reason is that M10 is not yet dynamically evolved enough for the quenching of mass segregation to take effect. Finally, detecting a velocity dispersion cusp in clusters with central densities as low as in M10 is extremely challenging, and has to rely on only 20-40 bright stars. It is only when stars with masses down to 0.3 M⊙ are included that the velocity cusp is sampled close enough to the IMBH for a significant increase above the core velocity dispersion to become detectable.

  4. Using Simple Shapes to Constrain Asteroid Thermal Inertia

    NASA Astrophysics Data System (ADS)

    MacLennan, Eric M.; Emery, Joshua P.

    2015-11-01

    With the use of remote thermal infrared observations and a thermophysical model (TPM), the thermal inertia of an asteroid surface can be determined. The thermal inertia, in turn, can be used to infer physical properties of the surface, specifically to estimate the average regolith grain size. Since asteroids are often non-spherical, techniques for incorporating modeled (non-spherical) shapes into thermal inertia calculations have been established. However, using a sphere as input for the TPM is beneficial in reducing running time, and shape models are not generally available for all (or most) objects that are observed in the thermal-IR. This is particularly true as the pace of infrared observations has recently increased dramatically, notably due to the WISE mission, while the time to acquire sufficient light curves for accurate shape inversion remains relatively long. Here, we investigate the accuracy of using both a spherical and an ellipsoidal TPM, with infrared observations obtained at pre- and post-opposition (hereafter multi-epoch) geometries, to constrain the thermal inertias of a large number of asteroids. We test whether using multi-epoch observations combined with a spherical and ellipsoidal shape TPM can constrain the thermal inertia of an object without a priori knowledge of its shape or spin state. The effectiveness of this technique is tested for 16 objects with shape models from DAMIT and WISE multi-epoch observations. For each object, the shape model is used as input for the TPM to generate synthetic fluxes for different values of thermal inertia. The input spherical and ellipsoidal shapes are then stepped through different spin vectors as the TPM is used to generate best-fit thermal inertia and diameter to the synthetically generated fluxes, allowing for a direct test of the approach’s effectiveness. We will discuss whether the precision of the thermal inertia constraints from the spherical TPM analysis of multi-epoch observations is comparable to works

  5. An approach to constrained aerodynamic design with application to airfoils

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.

    1992-01-01

    An approach was developed for incorporating flow and geometric constraints into the Direct Iterative Surface Curvature (DISC) design method. In this approach, an initial target pressure distribution is developed using a set of control points. The chordwise locations and pressure levels of these points are initially estimated either from empirical relationships and observed characteristics of pressure distributions for a given class of airfoils or by fitting the points to an existing pressure distribution. These values are then automatically adjusted during the design process to satisfy the flow and geometric constraints. The flow constraints currently available are lift, wave drag, pitching moment, pressure gradient, and local pressure levels. The geometric constraint options include maximum thickness, local thickness, leading-edge radius, and a 'glove' constraint involving inner and outer bounding surfaces. This design method was also extended to include the successive constraint release (SCR) approach to constrained minimization.

  6. Constraining the Ensemble Kalman Filter for improved streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Maxwell, Deborah; Jackson, Bethanna; McGregor, James

    2016-04-01

    Data assimilation techniques such as the Kalman Filter and its variants are often applied to hydrological models with minimal state volume/capacity constraints. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this presentation, we investigate the effect of constraining the Ensemble Kalman Filter (EnKF) on forecast performance. An EnKF implementation with no constraints is compared to model output with no assimilation, followed by a 'typical' hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then a more tightly constrained implementation where flux as well as mass constraints are imposed to limit the rate of water movement within a state. A three year period (2008-2010) with no significant data gaps and representative of the range of flows observed over the fuller 1976-2010 record was selected for analysis. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Overall, neither the unconstrained nor the "typically" mass-constrained forecasts were significantly better than the non-filtered forecasts; in fact several were significantly degraded. Flux constraints (in conjunction with mass constraints) significantly improved the forecast performance of six events relative to all other implementations, while the remaining two events showed no significant difference in performance. We conclude that placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state updating and results in more accurate and reliable forward predictions of streamflow for robust decision-making. We also experiment with the observation error, and find that this
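
    The distinction between the implementations compared above can be illustrated with a single scalar example. The sketch below is schematic only; the one-bucket storage model, linear outflow operator and all numbers are invented. It performs a standard perturbed-observation EnKF analysis step and then applies, in turn, a flux constraint that caps the size of the update and a mass constraint that keeps the state within [0, capacity].

      # Schematic illustration (not the authors' model or code): one EnKF analysis
      # step for a single storage state, followed by flux and mass constraints.
      import numpy as np

      rng = np.random.default_rng(42)
      n_ens, capacity, max_flux = 50, 100.0, 15.0   # ensemble size, store capacity [mm], max update [mm/step]
      obs, obs_var = 8.0, 1.0                        # observed outflow and its error variance
      k_out = 0.1                                    # linear outflow coefficient: H(x) = k_out * x

      # Forecast ensemble of storage states [mm].
      x_f = rng.normal(70.0, 20.0, n_ens)

      # Standard (unconstrained) EnKF analysis with perturbed observations.
      y_f = k_out * x_f
      K = np.cov(x_f, y_f)[0, 1] / (np.var(y_f, ddof=1) + obs_var)   # scalar Kalman gain
      y_pert = obs + rng.normal(0.0, np.sqrt(obs_var), n_ens)
      x_a = x_f + K * (y_pert - y_f)

      # Flux constraint: each member's increment may not exceed max_flux per step.
      increment = np.clip(x_a - x_f, -max_flux, max_flux)
      # Mass constraint: updated storage must stay within physical bounds.
      x_a_constrained = np.clip(x_f + increment, 0.0, capacity)

      print(f"forecast mean {x_f.mean():.1f}, unconstrained analysis {x_a.mean():.1f}, "
            f"constrained analysis {x_a_constrained.mean():.1f}")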

  7. Constraining the Interior Geophysics of Rubble Pile Asteroids

    NASA Astrophysics Data System (ADS)

    Scheeres, D. J.; Jacobson, S.; McMahon, J.; Hirabayashi, M.

    2013-12-01

    The internal geophysics of small rubble pile asteroids are largely unknown, and standard geophysical theories are not well matched to the extreme environment these bodies exist in. Interior pressures within rapidly spinning rubble piles are predicted to be as small as a few Pascals, a regime in which small non-gravitational forces not considered for larger bodies may become important. Previous research has suggested that the standard geophysical models for internal energy dissipation in this regime require modification (Goldreich and Sari, ApJ 2009), adding additional uncertainty in the geophysics. We report on new theoretical and observational results that suggest a direct way in which fundamental geophysical parameters of small rubble pile asteroids can be constrained. Specifically, we will discuss how the ratio Q/k, tidal dissipation number over tidal Love number, can be inferred and more strictly constrained for primaries in small binary asteroid systems where the secondary is spin-synchronized and the primary is super-synchronous, the most common class of small asteroid binary systems. Jacobson & Scheeres (ApJ 2011) proposed that many of these binary asteroid systems may be in an equilibrium state where contractive Binary YORP forces balance against expansive tidal torques due to tidal distortion of the primary body. The predicted equilibrium semi-major axes for such binary asteroid systems (based on presumed values for the Binary YORP force and Q/k values) has been seen to be consistent with the observed sizes of many of these systems (see figure). Recently, it has also been reported that the spacecraft-accessible binary asteroid 1996 FG3 is in such an equilibrium state (Scheirich et al., Binaries Workshop 2013). The combined detection of such an equilibrium coupled with their theoretical model makes it feasible to sharply constrain the Q/k parameter for the primary asteroid in the 1996 FG3 system and extrapolate its functional form to other such systems. We

  8. A METHOD TO CONSTRAIN THE SIZE OF THE PROTOSOLAR NEBULA

    SciTech Connect

    Kretke, K. A.; Levison, H. F.; Buie, M. W.; Morbidelli, A.

    2012-04-15

    Observations indicate that the gaseous circumstellar disks around young stars vary significantly in size, ranging from tens to thousands of AU. Models of planet formation depend critically upon the properties of these primordial disks, yet in general it is impossible to connect an existing planetary system with an observed disk. We present a method by which we can constrain the size of our own protosolar nebula using the properties of the small body reservoirs in the solar system. In standard planet formation theory, after Jupiter and Saturn formed they scattered a significant number of remnant planetesimals into highly eccentric orbits. In this paper, we show that if there had been a massive, extended protoplanetary disk at that time, then the disk would have excited Kozai oscillations in some of the scattered objects, driving them into high-inclination (i ≳ 50°), low-eccentricity orbits (q ≳ 30 AU). The dissipation of the gaseous disk would strand a subset of objects in these high-inclination orbits; orbits that are stable on Gyr timescales. To date, surveys have not detected any Kuiper-belt objects with orbits consistent with this dynamical mechanism. Using these non-detections by the Deep Ecliptic Survey and the Palomar Distant Solar System Survey we are able to rule out an extended gaseous protoplanetary disk (R_D ≳ 80 AU) in our solar system at the time of Jupiter's formation. Future deep all sky surveys such as the Large Synoptic Survey Telescope will allow us to further constrain the size of the protoplanetary disk.

  9. Modeling Atmospheric CO2 Processes to Constrain the Missing Sink

    NASA Technical Reports Server (NTRS)

    Kawa, S. R.; Denning, A. S.; Erickson, D. J.; Collatz, J. C.; Pawson, S.

    2005-01-01

    We report on a NASA supported modeling effort to reduce uncertainty in carbon cycle processes that create the so-called missing sink of atmospheric CO2. Our overall objective is to improve characterization of CO2 source/sink processes globally with improved formulations for atmospheric transport, terrestrial uptake and release, biomass and fossil fuel burning, and observational data analysis. The motivation for this study follows from the perspective that progress in determining CO2 sources and sinks beyond the current state of the art will rely on utilization of more extensive and intensive CO2 and related observations including those from satellite remote sensing. The major components of this effort are: 1) Continued development of the chemistry and transport model using analyzed meteorological fields from the Goddard Global Modeling and Assimilation Office, with comparison to real time data in both forward and inverse modes; 2) An advanced biosphere model, constrained by remote sensing data, coupled to the global transport model to produce distributions of CO2 fluxes and concentrations that are consistent with actual meteorological variability; 3) Improved remote sensing estimates for biomass burning emission fluxes to better characterize interannual variability in the atmospheric CO2 budget and to better constrain the land use change source; 4) Evaluating the impact of temporally resolved fossil fuel emission distributions on atmospheric CO2 gradients and variability; 5) Testing the impact of existing and planned remote sensing data sources (e.g., AIRS, MODIS, OCO) on inference of CO2 sources and sinks, and use of the model to help establish measurement requirements for future remote sensing instruments. The results will help to prepare for the use of OCO and other satellite data in a multi-disciplinary carbon data assimilation system for analysis and prediction of carbon cycle changes and carbon-climate interactions.

  10. Traveltime tomography and nonlinear constrained optimization

    SciTech Connect

    Berryman, J.G.

    1988-10-01

    Fermat's principle of least traveltime states that the first arrivals follow ray paths with the smallest overall traveltime from the point of transmission to the point of reception. This principle determines a definite convex set of feasible slowness models - depending only on the traveltime data - for the fully nonlinear traveltime inversion problem. The existence of such a convex set allows us to transform the inversion problem into a nonlinear constrained optimization problem. Fermat's principle also shows that the standard undamped least-squares solution to the inversion problem always produces a slowness model with many ray paths having traveltime shorter than the measured traveltime (an impossibility even if the trial ray paths are not the true ray paths). In a damped least-squares inversion, the damping parameter may be varied to allow efficient location of a slowness model on the feasibility boundary. 13 refs., 1 fig., 1 tab.
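
    The feasibility idea can be illustrated with a toy linear tomography problem. In the sketch below (invented geometry and noise levels; straight trial rays through a handful of cells) the measured first-arrival times t satisfy L s ≥ t for the true slowness s along the trial paths, an unconstrained least-squares fit typically violates some of these inequalities, and a constrained fit enforcing Fermat feasibility does not.

      # Toy sketch of the Fermat-feasibility constraint (numbers and geometry invented):
      # for fixed trial ray paths with path-length matrix L and measured times t,
      # any admissible slowness model s must satisfy L @ s >= t.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)
      n_rays, n_cells = 12, 6
      L = rng.uniform(0.5, 2.0, (n_rays, n_cells))               # path lengths of trial rays in each cell [km]
      s_true = rng.uniform(0.2, 0.4, n_cells)                    # true slowness [s/km]
      t_obs = L @ s_true - np.abs(rng.normal(0, 0.02, n_rays))   # first arrivals can only be earlier than trial-path times

      # Unconstrained least squares: typically leaves some predicted times below t_obs.
      s_ls, *_ = np.linalg.lstsq(L, t_obs, rcond=None)
      print("violations (unconstrained):", int(np.sum(L @ s_ls < t_obs - 1e-9)))

      # Constrained fit: minimize misfit subject to Fermat feasibility L s >= t and s >= 0.
      res = minimize(lambda s: np.sum((L @ s - t_obs) ** 2), x0=np.full(n_cells, 0.3),
                     method="SLSQP",
                     constraints=[{"type": "ineq", "fun": lambda s: L @ s - t_obs}],
                     bounds=[(0.0, None)] * n_cells)
      print("violations (constrained):  ", int(np.sum(L @ res.x < t_obs - 1e-9)))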

  11. Statistical mechanics of budget-constrained auctions

    NASA Astrophysics Data System (ADS)

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-07-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.

  12. Multiplier-continuation algorithms for constrained optimization

    NASA Technical Reports Server (NTRS)

    Lundberg, Bruce N.; Poore, Aubrey B.; Bing, Yang

    1989-01-01

    Several path following algorithms based on the combination of three smooth penalty functions, the quadratic penalty for equality constraints and the quadratic loss and log barrier for inequality constraints, their modern counterparts, augmented Lagrangian or multiplier methods, sequential quadratic programming, and predictor-corrector continuation are described. In the first phase of this methodology, one minimizes the unconstrained or linearly constrained penalty function or augmented Lagrangian. A homotopy path generated from the functions is then followed to optimality using efficient predictor-corrector continuation methods. The continuation steps are asymptotic to those taken by sequential quadratic programming which can be used in the final steps. Numerical test results show the method to be efficient, robust, and a competitive alternative to sequential quadratic programming.
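
    The flavour of the first phase of such a path-following scheme can be conveyed with a small example. The sketch below uses a toy problem of my own, not the paper's algorithm or test set: a quadratic penalty handles an equality constraint, a log barrier handles an inequality constraint, and each minimization is warm-started from the previous one as the continuation parameter μ is reduced.

      # Rough sketch of penalty/barrier continuation on an invented two-variable problem.
      import numpy as np
      from scipy.optimize import minimize

      def f(x):
          return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

      def penalized(x, mu):
          eq = x[0] + x[1] - 1.0            # equality constraint: x1 + x2 = 1
          ineq = x[1] - 0.5                 # inequality constraint: x2 >= 0.5
          if ineq <= 0:
              return np.inf                 # log barrier is only defined in the interior
          return f(x) + eq ** 2 / (2.0 * mu) - mu * np.log(ineq)

      x = np.array([0.3, 1.0])              # interior starting point
      for mu in [1.0, 0.3, 0.1, 0.03, 0.01, 0.003]:
          x = minimize(lambda z: penalized(z, mu), x, method="Nelder-Mead").x  # warm start along the path
          print(f"mu = {mu:<6} x = ({x[0]:.4f}, {x[1]:.4f})")
      # The iterates approach the constrained minimizer (0.5, 0.5) as mu decreases.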

  13. Arithmetic coding with constrained carry operations

    NASA Astrophysics Data System (ADS)

    Mahfoodh, Abo-Talib; Said, Amir; Yea, Sehoon

    2015-03-01

    Buffer or counter-based techniques are adequate for dealing with carry propagation in software implementations of arithmetic coding, but create problems in hardware implementations due to the difficulty of handling worst-case scenarios, defined by very long propagations. We propose a new technique for constraining the carry propagation, similar to "bit-stuffing," but designed for encoders that generate data as bytes instead of individual bits. It is based on the fact that the encoder and decoder can maintain the same state, and both can identify the situations when it is desirable to limit carry propagation. The new technique adjusts the coding interval in a way that corresponds to coding an unused data symbol, but selected to minimize overhead. Our experimental results demonstrate that the loss in compression can be made very small using regular precision for arithmetic operations.

  14. Constraining nonstandard neutrino-electron interactions

    SciTech Connect

    Barranco, J.; Miranda, O. G.; Moura, C. A.; Valle, J. W. F.

    2008-05-01

    We present a detailed analysis on nonstandard neutrino interactions (NSI) with electrons including all muon and electron (anti)-neutrino data from existing accelerators and reactors, in conjunction with the 'neutrino counting' data (e⁺e⁻ → ννγ) from the four LEP collaborations. First we perform a one-parameter-at-a-time analysis, showing how most constraints improve with respect to previous results reported in the literature. We also present more robust results where the NSI parameters are allowed to vary freely in the analysis. We show the importance of combining LEP data with the other experiments in removing degeneracies in the global analysis constraining flavor-conserving NSI parameters which, at 90% and 95% C.L., must lie within unique allowed regions. Despite such improved constraints, there is still substantial room for improvement, posing a big challenge for upcoming experiments.

  15. Mixed-Strategy Chance Constrained Optimal Control

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.

    2013-01-01

    This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.
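
    The benefit of mixing can be seen in a two-action example (numbers invented for illustration): when the cheap action alone violates the risk bound and the safe action is expensive, randomizing between them meets the bound exactly at a lower expected cost than the best deterministic choice.

      # Small numerical illustration (my example, not from the paper): mixing two
      # deterministic actions under a chance constraint on the violation probability.
      risk_bound = 0.05                       # maximum allowed violation probability
      actions = {"safe": (1.0, 0.001),        # (cost, violation probability)
                 "cheap": (0.2, 0.080)}

      # Best purely deterministic choice: the cheapest action that satisfies the bound on its own.
      feasible = {k: c for k, (c, p) in actions.items() if p <= risk_bound}
      best_det = min(feasible, key=feasible.get)
      print(f"deterministic: pick '{best_det}' at cost {feasible[best_det]:.3f}")

      # Mixed strategy over the two actions: pick 'cheap' with probability w,
      # choosing w so the expected violation probability exactly meets the bound.
      (c_s, p_s), (c_c, p_c) = actions["safe"], actions["cheap"]
      w = (risk_bound - p_s) / (p_c - p_s)
      expected_cost = w * c_c + (1 - w) * c_s
      print(f"mixed: P(cheap) = {w:.3f}, expected cost {expected_cost:.3f} "
            f"(risk = {w * p_c + (1 - w) * p_s:.3f})")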

  16. Remote gaming on resource-constrained devices

    NASA Astrophysics Data System (ADS)

    Reza, Waazim; Kalva, Hari; Kaufman, Richard

    2010-08-01

    Games have become important applications on mobile devices. A mobile gaming approach known as remote gaming is being developed to support games on low cost mobile devices. In the remote gaming approach, the responsibility of rendering a game and advancing the game play is put on remote servers instead of the resource constrained mobile devices. The games rendered on the servers are encoded as video and streamed to mobile devices. Mobile devices gather user input and stream the commands back to the servers to advance game play. With this solution, mobile devices with video playback and network connectivity can become game consoles. In this paper we present the design and development of such a system and evaluate the performance and design considerations to maximize the end user gaming experience.

  17. Sampling Motif-Constrained Ensembles of Networks

    NASA Astrophysics Data System (ADS)

    Fischer, Rico; Leitão, Jorge C.; Peixoto, Tiago P.; Altmann, Eduardo G.

    2015-10-01

    The statistical significance of network properties is conditioned on null models which satisfy specified properties but that are otherwise random. Exponential random graph models are a principled theoretical framework to generate such constrained ensembles, but which often fail in practice, either due to model inconsistency or due to the impossibility to sample networks from them. These problems affect the important case of networks with prescribed clustering coefficient or number of small connected subgraphs (motifs). In this Letter we use the Wang-Landau method to obtain a multicanonical sampling that overcomes both these problems. We sample, in polynomial time, networks with arbitrary degree sequences from ensembles with imposed motifs counts. Applying this method to social networks, we investigate the relation between transitivity and homophily, and we quantify the correlation between different types of motifs, finding that single motifs can explain up to 60% of the variation of motif profiles.

  18. Ice Sheet Stratigraphy Can Constrain Basal Slip

    NASA Astrophysics Data System (ADS)

    Wolovick, M.; Creyts, T. T.; Buck, W. R.; Bell, R. E.

    2014-12-01

    Basal slip is an important component of ice sheet mass flux and dynamics. Basal slip varies over time due to variations in basal temperature, water pressure, and sediment cover. All of these factors can create coherent patterns of basal slip that migrate over time. Our knowledge of the spatial variability in basal slip comes from inversions of driving stress, ice thickness, and surface velocity, but these inversions contain no information about temporal variability. We do not know if the patterns in slip revealed by those inversions move over time. While englacial stratigraphy has classically been used to constrain surface accumulation and geothermal flux, it is also sensitive to horizontal gradients in basal slip. Here we show that englacial stratigraphy can constrain the velocity of basal slip patterns. Englacial stratigraphy responds strongly to patterns of basal slip that move downstream over time close to the ice sheet velocity. In previous work, we used a thermomechanical model to discover that thermally controlled slip patterns migrate downstream and create stratigraphic structures, but we were unable to directly control the pattern velocity, as that arose naturally out of the model physics. Here, we use a kinematic flowline model that allows us to directly control pattern velocity, and thus is applicable to a wide variety of slip mechanisms in addition to basal temperature. We find that the largest and most intricate stratigraphic structures develop when the pattern moves at the column-average ice velocity. Patterns that move slower than the column-average ice velocity produce overturned stratigraphy in the lower part of the ice sheet, while patterns moving at the column-average eventually cause the entire ice sheet to overturn if they persist long enough. Based on these forward models, we develop an interpretive guide for deducing moving patterns in basal slip from ice sheet internal layers. Ice sheet internal stratigraphy represents a potentially vast

  19. Newton's method for large bound-constrained optimization problems.

    SciTech Connect

    Lin, C.-J.; More, J. J.; Mathematics and Computer Science

    1999-01-01

    We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
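
    The projection-plus-Newton idea underlying such methods can be sketched for a convex quadratic with simple bounds. The code below is a bare-bones illustration of an active-set/projected-Newton iteration, not the trust-region algorithm analysed in the paper; the matrix, bounds and iteration limit are arbitrary.

      # Schematic projected-Newton iteration for min 0.5 x'Ax - b'x subject to l <= x <= u.
      import numpy as np

      def projected_newton(A, b, lower, upper, n_iter=50):
          x = np.clip(np.zeros_like(b), lower, upper)
          for _ in range(n_iter):
              grad = A @ x - b
              # Variables held at a bound with the gradient pushing further outward are fixed ("active").
              active = ((x <= lower) & (grad > 0)) | ((x >= upper) & (grad < 0))
              free = ~active
              if not free.any():
                  break
              # Newton step on the free variables only, then project back into the box.
              step = np.zeros_like(x)
              step[free] = np.linalg.solve(A[np.ix_(free, free)], grad[free])
              x = np.clip(x - step, lower, upper)
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
      b = np.array([1.0, 2.0])
      lower, upper = np.zeros(2), np.full(2, 0.4)
      x_star = projected_newton(A, b, lower, upper)
      print("constrained minimizer:", x_star, "gradient:", A @ x_star - b)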

  20. Lithium in halo stars - Constraining the effects of helium diffusion on globular cluster ages and cosmology

    NASA Technical Reports Server (NTRS)

    Deliyannis, Constantine P.; Demarque, Pierre

    1991-01-01

    Stellar evolutionary models with diffusion are used to show that observations of lithium in extreme halo stars provide crucial constraints on the magnitude of the effects of helium diffusion. The flatness of the observed Li-T(eff) relation severely constrains diffusion Li isochrones, which tend to curve downward toward higher T(eff). It is argued that Li observations at the hot edge of the plateau are particularly important in constraining the effects of helium diffusion; yet, they are currently few in number. It is proposed that additional observations are required there, as well as below 5500 K, to define more securely the morphology of the halo Li abundances. Implications for the primordial Li abundance are considered. It is suggested that a conservative upper limit to the initial Li abundance, due to diffusive effects alone, is 2.35.

  1. Constraining t-T conditions during palaeoseismic events - constraining the viscous brake phenomena in nature.

    NASA Astrophysics Data System (ADS)

    Dobson, Katherine J.; Kirkpatrick, James D.; Mark, Darren F.; Stuart, Finlay M.

    2010-05-01

    observations of the fault rock assemblage indicate that the pseudotachylytes formed at temperatures of < 300°C; the depth of formation, and therefore the normal stress, are poorly constrained. In this study we exploit the relationship between the normal stress and the mass (i.e. thickness) of the rocks above the earthquake. We present data from standard thermochronological techniques (Ar/Ar, apatite and zircon (U-Th)/He and apatite fission track) applied to a vertical profile through the pseudotachylyte-bearing granite. This enables the complete time-temperature cooling path of the host rock to be determined and the geothermal gradient to be assessed, which in turn allows us to calculate the depth at which rupture occurred. We use these results to test the hypothesis that the Sierra Nevada pseudotachylyte acted as a viscous brake. This will ultimately improve understanding of earthquake ruptures by identifying an intrinsic control on the magnitude of earthquakes. References: 1. Di Toro et al. (2006), Science 311, 647-649. 2. Fialko & Khazan (2005), J. Geophys. Res. 110, B12407.

  2. Eulerian Formulation of Spatially Constrained Elastic Rods

    NASA Astrophysics Data System (ADS)

    Huynen, Alexandre

    Slender elastic rods are ubiquitous in nature and technology. For a vast majority of applications, the rod deflection is restricted by an external constraint and a significant part of the elastic body is in contact with a stiff constraining surface. The research work presented in this doctoral dissertation formulates a computational model for the solution of elastic rods constrained inside or around frictionless tube-like surfaces. The segmentation strategy adopted to cope with this complex class of problems consists in sequencing the global problem into, comparatively simpler, elementary problems either in continuous contact with the constraint or contact-free between their extremities. Within the conventional Lagrangian formulation of elastic rods, this approach is however associated with two major drawbacks. First, the boundary conditions specifying the locations of the rod centerline at both extremities of each elementary problem lead to the establishment of isoperimetric constraints, i.e., integral constraints on the unknown length of the rod. Second, the assessment of the unilateral contact condition requires, in principle, the comparison of two curves parametrized by distinct curvilinear coordinates, viz. the rod centerline and the constraint axis. Both conspire to burden the computations associated with the method. To streamline the solution along the elementary problems and rationalize the assessment of the unilateral contact condition, the rod governing equations are reformulated within the Eulerian framework of the constraint. The methodical exploration of both types of elementary problems leads to specific formulations of the rod governing equations that stress the profound connection between the mechanics of the rod and the geometry of the constraint surface. The proposed Eulerian reformulation, which restates the rod local equilibrium in terms of the curvilinear coordinate associated with the constraint axis, describes the rod deformed configuration

  3. Asynchronous parallel generating set search for linearly-constrained optimization.

    SciTech Connect

    Lewis, Robert Michael; Griffin, Joshua D.; Kolda, Tamara Gibson

    2006-08-01

    Generating set search (GSS) is a family of direct search methods that encompasses generalized pattern search and related methods. We describe an algorithm for asynchronous linearly-constrained GSS, which has some complexities that make it different from both the asynchronous bound-constrained case as well as the synchronous linearly-constrained case. The algorithm has been implemented in the APPSPACK software framework and we present results from an extensive numerical study using CUTEr test problems. We discuss the results, both positive and negative, and conclude that GSS is a reliable method for solving small-to-medium sized linearly-constrained optimization problems without derivatives.

  4. Sequential unconstrained minimization algorithms for constrained optimization

    NASA Astrophysics Data System (ADS)

    Byrne, Charles

    2008-02-01

    The problem of minimizing a function $f(x): \mathbb{R}^J \to \mathbb{R}$, subject to constraints on the vector variable $x$, occurs frequently in inverse problems. Even without constraints, finding a minimizer of $f(x)$ may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the $k$th step we minimize the function $G_k(x) = f(x) + g_k(x)$ to obtain $x^k$. The auxiliary functions $g_k(x): D \subseteq \mathbb{R}^J \to \mathbb{R}_+$ are nonnegative on the set $D$, each $x^k$ is assumed to lie within $D$, and the objective is to minimize the continuous function $f: \mathbb{R}^J \to \mathbb{R}$ over $x$ in the set $C = \overline{D}$, the closure of $D$. We assume that such minimizers exist, and denote one such by $\hat{x}$. We assume that the functions $g_k(x)$ satisfy the inequalities $0 \leq g_k(x) \leq G_{k-1}(x) - G_{k-1}(x^{k-1})$, for $k = 2, 3, \ldots$. Using this assumption, we show that the sequence $\{f(x^k)\}$ is decreasing and converges to $f(\hat{x})$. If the restriction of $f(x)$ to $D$ has bounded level sets, which happens if $\hat{x}$ is unique and $f(x)$ is closed, proper and convex, then the sequence $\{x^k\}$ is bounded and $f(x^*) = f(\hat{x})$ for any cluster point $x^*$. Therefore, if $\hat{x}$ is unique, $x^* = \hat{x}$ and $\{x^k\} \rightarrow \hat{x}$. When $\hat{x}$ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results for the induced proximal
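
    As a worked instance of the inequality above (our illustration; the abstract itself lists the proximal minimization algorithm among SUMMA's special cases), take the Euclidean proximal choice of auxiliary function and assume $f$ convex and $D$ convex:

        \[
        g_k(x) = \tfrac{1}{2}\lVert x - x^{k-1}\rVert^2,
        \qquad
        G_k(x) = f(x) + \tfrac{1}{2}\lVert x - x^{k-1}\rVert^2 .
        \]
        Since $G_{k-1}(x) = f(x) + \tfrac{1}{2}\lVert x - x^{k-2}\rVert^2$ is $1$-strongly convex and is minimized over $D$ at $x^{k-1}$,
        \[
        G_{k-1}(x) - G_{k-1}(x^{k-1}) \;\ge\; \tfrac{1}{2}\lVert x - x^{k-1}\rVert^2 \;=\; g_k(x) \;\ge\; 0
        \qquad \text{for all } x \in D,
        \]

    which is exactly the condition $0 \leq g_k(x) \leq G_{k-1}(x) - G_{k-1}(x^{k-1})$, so the proximal point iteration inherits the monotone decrease of $\{f(x^k)\}$.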

  5. Constraining warm dark matter using QSO gravitational lensing

    NASA Astrophysics Data System (ADS)

    Miranda, Marco; Macciò, Andrea V.

    2007-12-01

    Warm dark matter (WDM) has been invoked to resolve apparent conflicts of cold dark matter (CDM) models with observations on subgalactic scales. In this work, we provide a new and independent lower limit for the WDM particle mass (e.g. sterile neutrino) through the analysis of image fluxes in gravitationally lensed quasi-stellar objects (QSOs). Starting from a theoretical unperturbed cusp configuration, we analyse the effects of intergalactic haloes in modifying the fluxes of QSO multiple images, giving rise to the so-called anomalous flux ratio. We found that the global effect of such haloes strongly depends on their mass/abundance ratio and it is maximized for haloes in the mass range $10^6$-$10^8\,M_\odot$. This result opens up a new possibility to constrain CDM predictions on small scales and test different warm candidates, since free streaming of WDM particles can considerably dampen the matter power spectrum in this mass range. As a consequence, while a (Λ)CDM model is able to produce flux anomalies at a level similar to those observed, a WDM model, with an insufficiently massive particle, fails to reproduce the observational evidence. Our analysis suggests a lower limit of a few keV ($m_\nu \sim 10$ keV) for the mass of WDM candidates in the form of a sterile neutrino. This result makes sterile neutrino WDM less attractive as an alternative to CDM, in good agreement with previous findings from Lyman α forest and cosmic microwave background analysis.

  6. Constraining decaying dark matter with Fermi LAT gamma-rays

    SciTech Connect

    Zhang, Le; Sigl, Günter; Weniger, Christoph; Maccione, Luca; Redondo, Javier

    2010-06-01

    High energy electrons and positrons from decaying dark matter can produce a significant flux of gamma rays by inverse Compton scattering off low energy photons in the interstellar radiation field. This possibility is inevitably related to the dark matter interpretation of the observed PAMELA and FERMI excesses. The aim of this paper is to provide a simple and universal method to constrain dark matter models which produce electrons and positrons in their decay by using the Fermi LAT gamma-ray observations in the energy range between 0.5 GeV and 300 GeV. We provide a set of universal response functions that, once convolved with a specific dark matter model, produce the desired constraints. Our response functions contain all the astrophysical inputs such as the electron propagation in the galaxy, the dark matter profile, the gamma-ray fluxes of known origin, and the Fermi LAT data. We study the uncertainties in the determination of the response functions and apply them to place constraints on some specific dark matter decay models that can well fit the positron and electron fluxes observed by PAMELA and Fermi LAT. To this end we also take into account prompt radiation from the dark matter decay. We find that with the available data decaying dark matter cannot be excluded as the source of the PAMELA positron excess.

  7. Using infrasound to constrain ash plume height

    NASA Astrophysics Data System (ADS)

    Lamb, Oliver; De Angelis, Silvio; Lavallée, Yan

    2016-04-01

    Airborne volcanic ash advisories are currently based on analyses of satellite imagery with relatively low temporal resolution, and numerical simulations of atmospheric plume dispersion. These simulations rely on key input parameters such as the maximum height of eruption plumes and the mass eruption rate at the vent, which remain loosely constrained. In this study, we present a proof-of-concept workflow that incorporates the analysis of volcanic infrasound with numerical modelling of volcanic plume rise in a realistic atmosphere. We analyse acoustic infrasound records from two explosions during the 2009 eruption of Mt. Redoubt, USA, that produced plumes reaching heights of 12-14 km. We model the infrasonic radiation at the source under the assumptions of linear acoustic theory and calculate variations in mass ejection velocity at the vent. The estimated eruption velocities serve as the input for numerical models of plume rise. The encouraging results highlight the potential for infrasound measurements to be incorporated into numerical modelling of ash dispersion, and confirm their value for volcano monitoring operations.

  8. Excessive homoplasy in an evolutionarily constrained protein.

    PubMed

    Wells, R S

    1996-04-22

    The evolution of monomorphic proteins among closely related species has not been examined in detail. To investigate this phenomenon, the glycerol-3-phosphate dehydrogenase (Gpdh) locus was sequenced in a broad range of Drosophila species. Although purifying selection to remove amino acid variation is the dominant force in the evolution of Gpdh, some replacements have occurred. The sequences were compared in the context of the phylogeny of the genus, revealing a high proportion of amino acid parallelism and reversal (homoplasy) at four sites. The level of homoplasy is significantly greater than that seen in other proteins for which multiple sequences are available, showing that Gpdh is strongly constrained in both the number of amino acid differences and the types of changes allowed. These four sites evolve at a much higher rate than do the other variable positions in the protein, accounting for half of the interspecific amino acid replacements. However, unlike typical hypervariable sites, where multiple changes to several different amino acids are seen, evolutionary 'flip-flopping' between two amino acid states defines this new class of hypervariable site.

  9. String theory origin of constrained multiplets

    NASA Astrophysics Data System (ADS)

    Kallosh, Renata; Vercnocke, Bert; Wrase, Timm

    2016-09-01

    We study the non-linearly realized spontaneously broken supersymmetry of the (anti-)D3-brane action in type IIB string theory. The worldvolume fields are one vector $A_\mu$, three complex scalars $\phi^i$ and four 4d fermions $\lambda^0$, $\lambda^i$. These transform, in addition to the more familiar $\mathcal{N}=4$ linear supersymmetry, also under 16 spontaneously broken, non-linearly realized supersymmetries. We argue that the worldvolume fields can be packaged into the following constrained 4d non-linear $\mathcal{N}=1$ multiplets: four chiral multiplets $S$, $Y^i$ that satisfy $S^2 = S Y^i = 0$ and contain the worldvolume fermions $\lambda^0$ and $\lambda^i$; and four chiral multiplets $W_\alpha$, $H^i$ that satisfy $S W_\alpha = S \bar{D}_{\dot{\alpha}} \bar{H}^{\bar{\imath}} = 0$ and contain the vector $A_\mu$ and the scalars $\phi^i$. We also discuss how placing an anti-D3-brane on top of intersecting O7-planes can lead to an orthogonal multiplet $\Phi$ that satisfies $S(\Phi - \bar{\Phi}) = 0$, which is particularly interesting for inflationary cosmology.

  10. Constrained resistivity inversion using seismic data

    NASA Astrophysics Data System (ADS)

    Saunders, J. H.; Herwanger, J. V.; Pain, C. C.; Worthington, M. H.; de Oliveira, C. R. E.

    2005-03-01

    In this paper we describe and apply a method for constraining structure in anisotropic electrical resistivity inversion. Structural constraints are routinely used to achieve improved model inversion. Here, a second-order (curvature-based) regularization tensor (model covariance) is used to build structure in the model. This structure could be obtained from other imaging methods such as seismic tomography, core samples or otherwise known structure in the model. Our method allows the incorporation of existing geophysical data into the inversion, in a general form that does not rely on any one-to-one correlation between data sets or material properties. Ambiguities in the resistivity distribution from electrical inversion, and in particular anisotropic inversion, may be reduced with this approach. To demonstrate the approach we invert a synthetic data set, showing the regularization tensor explicitly in different locations. We then apply the method to field data where we have some knowledge of the subsurface from seismic imaging. Our results show that it is possible to achieve a high level of convergence while using spatially varying structural constraints. Common problems associated with resistivity inversion such as source/receiver effects and false imaging of strongly resistive or conductive zones may also be reduced. As part of the inversion method we show how the magnitude of the constraints in the form of penalty parameters appropriate to an inversion may be estimated, reducing the computational expense of resistivity inversion.
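
    In generic notation (ours, not necessarily the paper's), the structurally constrained inversion described above can be summarized as a regularized least-squares objective,

        \[
        \Phi(\mathbf{m}) \;=\; \big\lVert \mathbf{W}_d\,\big(\mathbf{d} - F(\mathbf{m})\big) \big\rVert^2
        \;+\; \mu\, (\mathbf{m}-\mathbf{m}_0)^{\mathsf T}\, \mathbf{C}_m^{-1}\, (\mathbf{m}-\mathbf{m}_0),
        \]

    where $F$ is the anisotropic resistivity forward operator, $\mathbf{W}_d$ weights the data, and the model covariance $\mathbf{C}_m$ is assembled from the second-order (curvature-based) regularization tensor carrying the structure imported from seismic tomography, core samples or other prior knowledge; $\mu$ stands for the penalty parameters whose magnitude the paper shows how to estimate.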

  11. Constrained length minimum inductance gradient coil design.

    PubMed

    Chronik, B A; Rutt, B K

    1998-02-01

    A gradient coil design algorithm capable of controlling the position of the homogeneous region of interest (ROI) with respect to the current-carrying wires is required for many advanced imaging and spectroscopy applications. A modified minimum inductance target field method that allows the placement of a set of constraints on the final current density is presented. This constrained current minimum inductance method is derived in the context of previous target field methods. Complete details are shown and all equations required for implementation of the algorithm are given. The method has been implemented on computer and applied to the design of both a 1:1 aspect ratio (length:diameter) central ROI and a 2:1 aspect ratio edge ROI gradient coil. The 1:1 design demonstrates that a general analytic method can be used to easily obtain very short gradient coil designs for use with specialized magnet systems. The edge gradient design demonstrates that designs that allow imaging of the neck region with a head sized gradient coil can be obtained, as well as other applications requiring edge-of-cylinder regions of uniformity.

  12. Constrained Graph Optimization: Interdiction and Preservation Problems

    SciTech Connect

    Schild, Aaron V

    2012-07-30

    The maximum flow, shortest path, and maximum matching problems are a set of basic graph problems that are critical in theoretical computer science and applications. Constrained graph optimization, a variation of these basic graph problems involving modification of the underlying graph, is equally important but sometimes significantly harder. In particular, one can explore these optimization problems with additional cost constraints. In the preservation case, the optimizer has a budget to preserve vertices or edges of a graph, preventing them from being deleted. The optimizer wants to find the best set of preserved edges/vertices for which the cost constraints are satisfied and the basic graph problems are optimized. For example, in shortest path preservation, the optimizer wants to find a set of edges/vertices within which the shortest path between two predetermined points is as short as possible. In interdiction problems, one deletes vertices or edges from the graph with a particular cost in order to impede the basic graph problems as much as possible (for example, delete edges/vertices to maximize the shortest path between two predetermined vertices). Applications of preservation problems include optimal road maintenance, power grid maintenance, and job scheduling, while interdiction problems are related to drug trafficking prevention, network stability assessment, and counterterrorism. Computational hardness results are presented, along with heuristic methods for approximating solutions to the matching interdiction problem. Also, efficient algorithms are presented for special classes of graphs, including planar graphs. The graphs in many of the listed applications are planar, so these algorithms have important practical implications.

  13. Acoustic characteristics of listener-constrained speech

    NASA Astrophysics Data System (ADS)

    Ashby, Simone; Cummins, Fred

    2003-04-01

    Relatively little is known about the acoustical modifications speakers employ to meet the various constraints-auditory, linguistic and otherwise-of their listeners. Similarly, the manner by which perceived listener constraints interact with speakers' adoption of specialized speech registers is poorly understood. Hyper and Hypo (H&H) theory offers a framework for examining the relationship between speech production and output-oriented goals for communication, suggesting that under certain circumstances speakers may attempt to minimize phonetic ambiguity by employing a "hyperarticulated" speaking style (Lindblom, 1990). It remains unclear, however, what the acoustic correlates of hyperarticulated speech are, and how, if at all, we might expect phonetic properties to change with respect to different listener-constrained conditions. This paper is part of a preliminary investigation concerned with comparing the prosodic characteristics of speech produced across a range of listener constraints. Analyses are drawn from a corpus of read hyperarticulated speech data comprising eight adult, female speakers of English. Specialized registers include speech to foreigners, infant-directed speech, speech produced under noisy conditions, and human-machine interaction. The authors gratefully acknowledge financial support of the Irish Higher Education Authority, allocated to Fred Cummins for collaborative work with Media Lab Europe.

  14. Constraining inflation with future galaxy redshift surveys

    SciTech Connect

    Huang, Zhiqi; Vernizzi, Filippo; Verde, Licia

    2012-04-01

    With future galaxy surveys, a huge number of Fourier modes of the distribution of the large scale structures in the Universe will become available. These modes are complementary to those of the CMB and can be used to set constraints on models of the early universe, such as inflation. Using a MCMC analysis, we compare the power of the CMB with that of the combination of CMB and galaxy survey data, to constrain the power spectrum of primordial fluctuations generated during inflation. We base our analysis on the Planck satellite and a spectroscopic redshift survey with configuration parameters close to those of the Euclid mission as examples. We first consider models of slow-roll inflation, and show that the inclusion of large scale structure data improves the constraints by nearly halving the error bars on the scalar spectral index and its running. If we attempt to reconstruct the inflationary single-field potential, a similar conclusion can be reached on the parameters characterizing the potential. We then study models with features in the power spectrum. In particular, we consider ringing features produced by a break in the potential and oscillations such as in axion monodromy. Adding large scale structures improves the constraints on features by more than a factor of two. In axion monodromy we show that there are oscillations with small amplitude and frequency in momentum space that are undetected by CMB alone but can be measured by including galaxy surveys in the analysis.

  15. Contact symmetries of constrained quadratic Lagrangians

    NASA Astrophysics Data System (ADS)

    Dimakis, N.; Terzis, Petros A.; Christodoulakis, T.

    2016-01-01

    The conditions for the existence of (polynomial in the velocities) contact symmetries of constrained systems that are described by quadratic Lagrangians are presented. These Lagrangians mainly appear in mini-superspace reductions of gravitational plus matter actions. In the literature, one usually adopts a gauge condition (mostly for the lapse N) prior to searching for symmetries. This, however, is an unnecessary restriction which may lead to a loss of symmetries and consequently of the respective integrals of motion. A generalization of the usual procedure rests on the identification of the lapse function N as an equivalent degree of freedom and the corresponding extension of the infinitesimal generator. As a result, conformal Killing tensors (with appropriate conformal factors) can define integrals of motion (instead of just Killing tensors used in the regular gauge fixed case). Additionally, rheonomic integrals of motion - whose existence is unique in this type of singular system - of various orders in the momenta can be constructed. An example of a relativistic particle in a pp-wave space-time and under the influence of a quadratic potential is illustrated.

  16. Optimization of constrained density functional theory

    NASA Astrophysics Data System (ADS)

    O'Regan, David D.; Teobaldi, Gilberto

    2016-07-01

    Constrained density functional theory (cDFT) is a versatile electronic structure method that enables ground-state calculations to be performed subject to physical constraints. It thereby broadens their applicability and utility. Automated Lagrange multiplier optimization is necessary for multiple constraints to be applied efficiently in cDFT, for it to be used in tandem with geometry optimization, or with molecular dynamics. In order to facilitate this, we comprehensively develop the connection between cDFT energy derivatives and response functions, providing a rigorous assessment of the uniqueness and character of cDFT stationary points while accounting for electronic interactions and screening. In particular, we provide a nonperturbative proof that stable stationary points of linear density constraints occur only at energy maxima with respect to their Lagrange multipliers. We show that multiple solutions, hysteresis, and energy discontinuities may occur in cDFT. Expressions are derived, in terms of convenient by-products of cDFT optimization, for quantities such as the dielectric function and a condition number quantifying ill definition in multiple constraint cDFT.
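
    In generic cDFT notation (ours; the paper's conventions may differ), the Lagrange-multiplier optimization referred to above can be summarized as

        \[
        W[\lambda] \;=\; \min_{\rho}\;\Big\{ E[\rho] \;+\; \lambda\,\big(N_c[\rho] - N_c^{\mathrm{target}}\big) \Big\},
        \qquad
        \frac{dW}{d\lambda} \;=\; N_c[\rho_\lambda] - N_c^{\mathrm{target}},
        \]

    so the constraint is met exactly at a stationary point of $W$ with respect to $\lambda$ and, consistent with the nonperturbative result quoted above, stable solutions of a linear density constraint sit at maxima of $W[\lambda]$; the multiplier (one per constraint) can therefore be located by maximizing $W$, which is what makes automated optimization practical alongside geometry optimization or molecular dynamics.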

  17. Testing constrained sequential dominance models of neutrinos

    NASA Astrophysics Data System (ADS)

    Björkeroth, Fredrik; King, Stephen F.

    2015-12-01

    Constrained sequential dominance (CSD) is a natural framework for implementing the see-saw mechanism of neutrino masses which allows the mixing angles and phases to be accurately predicted in terms of relatively few input parameters. We analyze a class of CSD(n) models where, in the flavour basis, two right-handed neutrinos are dominantly responsible for the ‘atmospheric’ and ‘solar’ neutrino masses with Yukawa couplings to $(\nu_e, \nu_\mu, \nu_\tau)$ proportional to $(0,1,1)$ and $(1,n,n-2)$, respectively, where n is a positive integer. These coupling patterns may arise in indirect family symmetry models based on $A_4$. With two right-handed neutrinos, using a $\chi^2$ test, we find a good agreement with data for CSD(3) and CSD(4) where the entire Pontecorvo-Maki-Nakagawa-Sakata mixing matrix is controlled by a single phase η, which takes simple values, leading to accurate predictions for mixing angles and the magnitude of the oscillation phase $|\delta_{CP}|$. We carefully study the perturbing effect of a third ‘decoupled’ right-handed neutrino, leading to a bound on the lightest physical neutrino mass $m_1 \lesssim 1$ meV for the viable cases, corresponding to a normal neutrino mass hierarchy. We also discuss a direct link between the oscillation phase $\delta_{CP}$ and leptogenesis in CSD(n) due to the same see-saw phase η appearing in both the neutrino mass matrix and leptogenesis.
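
    For orientation (our notation; overall mass scales and conventions differ between CSD papers), combining the quoted coupling patterns through the standard two-right-handed-neutrino see-saw formula gives a light neutrino mass matrix of the form

        \[
        m_\nu \;=\; m_a
        \begin{pmatrix} 0\\ 1\\ 1 \end{pmatrix}
        \begin{pmatrix} 0 & 1 & 1 \end{pmatrix}
        \;+\; m_b\, e^{i\eta}
        \begin{pmatrix} 1\\ n\\ n-2 \end{pmatrix}
        \begin{pmatrix} 1 & n & n-2 \end{pmatrix},
        \]

    so that once $m_a$ and $m_b$ are fixed by the atmospheric and solar mass splittings, the whole PMNS matrix is controlled by the single relative phase $\eta$, which is the origin of the predictivity described above.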

  18. Optimal performance of constrained control systems

    NASA Astrophysics Data System (ADS)

    Harvey, P. Scott, Jr.; Gavin, Henri P.; Scruggs, Jeffrey T.

    2012-08-01

    This paper presents a method to compute optimal open-loop trajectories for systems subject to state and control inequality constraints in which the cost function is quadratic and the state dynamics are linear. For the case in which inequality constraints are decentralized with respect to the controls, optimal Lagrange multipliers enforcing the inequality constraints may be found at any time through Pontryagin’s minimum principle. In so doing, the set of differential algebraic Euler-Lagrange equations is transformed into a nonlinear two-point boundary-value problem for states and costates whose solution meets the necessary conditions for optimality. The optimal performance of inequality constrained control systems is calculable, allowing for comparison to previous, sub-optimal solutions. The method is applied to the control of damping forces in a vibration isolation system subjected to constraints imposed by the physical implementation of a particular controllable damper. An outcome of this study is the best performance achievable given a particular objective, isolation system, and semi-active damper constraints.
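
    Schematically (our notation, not the paper's), the problem class and the role of the minimum principle are

        \[
        \min_{u(\cdot)} \int_0^T \tfrac{1}{2}\big( x^{\mathsf T} Q x + u^{\mathsf T} R u \big)\,dt
        \quad \text{s.t.} \quad \dot{x} = A x + B u, \qquad c(x,u) \le 0,
        \]
        \[
        H = \tfrac{1}{2}\big( x^{\mathsf T} Q x + u^{\mathsf T} R u \big) + \lambda^{\mathsf T}(Ax+Bu) + \mu^{\mathsf T} c(x,u),
        \qquad \mu \ge 0, \quad \mu^{\mathsf T} c(x,u) = 0 .
        \]

    Pontryagin's minimum principle requires the optimal control to minimize $H$ over the admissible set at every instant; when the inequality constraints are decentralized in the controls, $u$ and the multipliers $\mu$ can be eliminated pointwise in time, leaving the nonlinear two-point boundary-value problem in the state $x$ and costate $\lambda$ described above.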

  19. Scheduling Aircraft Landings under Constrained Position Shifting

    NASA Technical Reports Server (NTRS)

    Balakrishnan, Hamsa; Chandran, Bala

    2006-01-01

    Optimal scheduling of airport runway operations can play an important role in improving the safety and efficiency of the National Airspace System (NAS). Methods that compute the optimal landing sequence and landing times of aircraft must accommodate practical issues that affect the implementation of the schedule. One such practical consideration, known as Constrained Position Shifting (CPS), is the restriction that each aircraft must land within a pre-specified number of positions of its place in the First-Come-First-Served (FCFS) sequence. We consider the problem of scheduling landings of aircraft in a CPS environment in order to maximize runway throughput (minimize the completion time of the landing sequence), subject to operational constraints such as FAA-specified minimum inter-arrival spacing restrictions, precedence relationships among aircraft that arise either from airline preferences or air traffic control procedures that prevent overtaking, and time windows (representing possible control actions) during which each aircraft landing can occur. We present a Dynamic Programming-based approach that scales linearly in the number of aircraft, and describe our computational experience with a prototype implementation on realistic data for Denver International Airport.

  20. Autonomy, constraining options, and organ sales.

    PubMed

    Taylor, James Stacey

    2002-01-01

    Although there continues to be a chronic shortage of transplant organs the suggestion that we should try to alleviate it through allowing a current market in them continues to be morally condemned, usually on the grounds that such a market would undermine the autonomy of those who would participate in it as vendors. Against this objection Gerald Dworkin has argued that such markets would enhance the autonomy of the vendors through providing them with more options, thus enabling them to exercise a greater degree of control over their bodies. Paul Hughes and T.L. Zutlevics have recently criticized Dworkin's argument, arguing that the option to sell an organ is unusual in that it is an autonomy-undermining "constraining option" whose presence in a person's choice set is likely to undermine her autonomy rather than enhance it. I argue that although Hughes' and Zutlevics' arguments are both innovative and persuasive they are seriously flawed--and that allowing a market in human organs is more likely to enhance vendor autonomy than diminish it. Thus, given that autonomy is the preeminent value in contemporary medical ethics this provides a strong prima facie case for recognizing the moral legitimacy of such markets. PMID:12747360

  1. Dynamic Nuclear Polarization as Kinetically Constrained Diffusion

    NASA Astrophysics Data System (ADS)

    Karabanov, A.; Wiśniewski, D.; Lesanovsky, I.; Köckenberger, W.

    2015-07-01

    Dynamic nuclear polarization (DNP) is a promising strategy for generating a significantly increased nonthermal spin polarization in nuclear magnetic resonance (NMR) and its applications that range from medicine diagnostics to material science. Being a genuine nonequilibrium effect, DNP circumvents the need for strong magnetic fields. However, despite intense research, a detailed theoretical understanding of the precise mechanism behind DNP is currently lacking. We address this issue by focusing on a simple instance of DNP—so-called solid effect DNP—which is formulated in terms of a quantum central spin model where a single electron is coupled to an ensemble of interacting nuclei. We show analytically that the nonequilibrium buildup of polarization heavily relies on a mechanism which can be interpreted as kinetically constrained diffusion. Beyond revealing this insight, our approach furthermore permits numerical studies of ensembles containing thousands of spins that are typically intractable when formulated in terms of a quantum master equation. We believe that this represents an important step forward in the quest of harnessing nonequilibrium many-body quantum physics for technological applications.

  2. Constraining the Properties of Cold Interstellar Clouds

    NASA Astrophysics Data System (ADS)

    Spraggs, Mary Elizabeth; Gibson, Steven J.

    2016-01-01

    Since the interstellar medium (ISM) plays an integral role in star formation and galactic structure, it is important to understand the evolution of clouds over time, including the processes of cooling and condensation that lead to the formation of new stars. This work aims to constrain and better understand the physical properties of the cold ISM by utilizing large surveys of neutral atomic hydrogen (HI) 21 cm spectral line emission and absorption, carbon monoxide (CO) 2.6 mm line emission, and multi-band infrared dust thermal continuum emission. We identify areas where the gas may be cooling and forming molecules using HI self-absorption (HISA), in which cold foreground HI absorbs radiation from warmer background HI emission. We are developing an algorithm that uses total gas column densities inferred from Planck and other FIR/sub-mm data in parallel with CO and HISA spectral line data to determine the gas temperature, density, molecular abundance, and other properties as functions of position. We can then map these properties to study their variation throughout an individual cloud as well as any dependencies on location or environment within the Galaxy. Funding for this work was provided by the National Science Foundation, the NASA Kentucky Space Grant Consortium, the WKU Ogden College of Science and Engineering, and the Carol Martin Gatton Academy for Mathematics and Science in Kentucky.

  3. Technologies for a greenhouse-constrained society

    SciTech Connect

    Kuliasha, M.A.; Zucker, A.; Ballew, K.J.

    1992-05-01

    This conference explored how three technologies might help society adjust to life in a greenhouse-constrained environment. Technology experts and policy makers from around the world met June 11--13, 1991, in Oak Ridge, Tennessee, to address questions about how energy efficiency, biomass, and nuclear technologies can mitigate the greenhouse effect and to explore energy production and use in countries in various stages of development. The conference was organized by Oak Ridge National Laboratory and sponsored by the US Department of Energy. Energy efficiency, biomass, and nuclear energy are potential substitutes for fossil fuels that might help slow or even reverse the global warming changes that may result from mankind's thirst for energy. Many other conferences have questioned whether the greenhouse effect is real and what reductions in greenhouse gas emissions might be necessary to avoid serious ecological consequences; this conference studied how these reductions might actually be achieved. For these conference proceedings, individual papers are processed separately for the Energy Data Base.

  4. Joint Chance-Constrained Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
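
    In generic notation (ours; the union-bound step is our gloss on the reformulation into an expectation over a sum of indicator functions), the construction reads

        \[
        \min_{\pi}\; \mathbb{E}\Big[\sum_{t=0}^{T} g_t(x_t,u_t)\Big]
        \quad\text{s.t.}\quad
        \Pr\big(\exists\, t:\; x_t \notin \mathcal{F}\big) \le \Delta .
        \]
        Bounding the joint chance constraint by $\mathbb{E}\big[\sum_t \mathbf{1}\{x_t \notin \mathcal{F}\}\big] \le \Delta$ (Boole's inequality) and dualizing gives
        \[
        \max_{\lambda \ge 0}\;\Big\{ \min_{\pi}\;
        \mathbb{E}\Big[\sum_{t=0}^{T} \big( g_t(x_t,u_t) + \lambda\, \mathbf{1}\{x_t \notin \mathcal{F}\} \big)\Big] - \lambda \Delta \Big\},
        \]

    where the inner minimization is a standard dynamic program for each fixed $\lambda$ and the scalar dual variable is then adjusted by the root-finding step described above.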

  5. Constrained Supersymmetric Flipped SU(5) GUT Phenomenology

    SciTech Connect

    Ellis, John; Mustafayev, Azar; Olive, Keith A.

    2011-08-12

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, $M_{\rm in}$, above the GUT scale, $M_{\rm GUT}$. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino $\chi$ and the lighter stau $\tilde{\tau}_1$ is sensitive to $M_{\rm in}$, as is the relationship between $m_\chi$ and the masses of the heavier Higgs bosons $A, H$. For these reasons, prominent features in generic $(m_{1/2}, m_0)$ planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to $M_{\rm in}$, as we illustrate for several cases with $\tan\beta = 10$ and 55. However, these features do not necessarily disappear at large $M_{\rm in}$, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses.

  6. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.

  7. Proximity Navigation of Highly Constrained Spacecraft

    NASA Technical Reports Server (NTRS)

    Scarritt, S.; Swartwout, M.

    2007-01-01

    Bandit is a 3-kg automated spacecraft in development at Washington University in St. Louis. Bandit's primary mission is to demonstrate proximity navigation, including docking, around a 25-kg student-built host spacecraft. However, because of extreme constraints in mass, power and volume, traditional sensing and actuation methods are not available. In particular, Bandit carries only 8 fixed-magnitude cold-gas thrusters to control its 6 DOF motion. Bandit lacks true inertial sensing, and the ability to sense position relative to the host has error bounds that approach the size of the Bandit itself. Some of the navigation problems are addressed through an extremely robust, error-tolerant soft dock. In addition, we have identified a control methodology that performs well in this constrained environment: behavior-based velocity potential functions, which use a minimum-seeking method similar to Lyapunov functions. We have also adapted the discrete Kalman filter for use on Bandit for position estimation and have developed a similar measurement vs. propagation weighting algorithm for attitude estimation. This paper provides an overview of Bandit and describes the control and estimation approach. Results using our 6DOF flight simulator are provided, demonstrating that these methods show promise for flight use.

  8. Constrained bounds on measures of entanglement

    SciTech Connect

    Datta, Animesh; Flammia, Steven T.; Shaji, Anil; Caves, Carlton M.

    2007-06-15

    Entanglement measures constructed from two positive, but not completely positive, maps on density operators are used as constraints in placing bounds on the entanglement of formation, the tangle, and the concurrence of 4N mixed states. The maps are the partial transpose map and the phi map introduced by Breuer [H.-P. Breuer, Phys. Rev. Lett. 97, 080501 (2006)]. The norm-based entanglement measures constructed from these two maps, called negativity and phi negativity, respectively, lead to two sets of bounds on the entanglement of formation, the tangle, and the concurrence. We compare these bounds and identify the sets of 4N density operators for which the bounds from one constraint are better than the bounds from the other. In the process, we present a derivation of the already known bound on the concurrence based on the negativity. We compute bounds on the three measures of entanglement using both the constraints simultaneously. We demonstrate how such doubly constrained bounds can be constructed. We discuss extensions of our results to bipartite states of higher dimensions and with more than two constraints.

  9. Dynamic Nuclear Polarization as Kinetically Constrained Diffusion.

    PubMed

    Karabanov, A; Wiśniewski, D; Lesanovsky, I; Köckenberger, W

    2015-07-10

    Dynamic nuclear polarization (DNP) is a promising strategy for generating a significantly increased nonthermal spin polarization in nuclear magnetic resonance (NMR) and its applications that range from medicine diagnostics to material science. Being a genuine nonequilibrium effect, DNP circumvents the need for strong magnetic fields. However, despite intense research, a detailed theoretical understanding of the precise mechanism behind DNP is currently lacking. We address this issue by focusing on a simple instance of DNP-so-called solid effect DNP-which is formulated in terms of a quantum central spin model where a single electron is coupled to an ensemble of interacting nuclei. We show analytically that the nonequilibrium buildup of polarization heavily relies on a mechanism which can be interpreted as kinetically constrained diffusion. Beyond revealing this insight, our approach furthermore permits numerical studies of ensembles containing thousands of spins that are typically intractable when formulated in terms of a quantum master equation. We believe that this represents an important step forward in the quest of harnessing nonequilibrium many-body quantum physics for technological applications. PMID:26207453

  10. Constraining the oblateness of Kepler planets

    SciTech Connect

    Zhu, Wei; Huang, Chelsea X.; Zhou, George; Lin, D. N. C.

    2014-11-20

    We use Kepler short-cadence light curves to constrain the oblateness of planet candidates in the Kepler sample. The transits of rapidly rotating planets that are deformed in shape will lead to distortions in the ingress and egress of their light curves. We report the first tentative detection of an oblate planet outside the solar system, measuring an oblateness of $0.22^{+0.11}_{-0.11}$ for the $18\,M_{\rm J}$ brown dwarf Kepler 39b (KOI 423.01). We also provide constraints on the oblateness of the planets (candidates) HAT-P-7b, KOI 686.01, and KOI 197.01 to be <0.067, <0.251, and <0.186, respectively. Using the Q' values from Jupiter and Saturn, we expect tidal synchronization for the spins of HAT-P-7b, KOI 686.01, and KOI 197.01, and for their rotational oblateness signatures to be undetectable in the current data. The potentially large oblateness of KOI 423.01 (Kepler 39b) suggests that the Q' value of the brown dwarf needs to be two orders of magnitude larger than that of the solar system gas giants to avoid being tidally spun down.

  11. Regularized Partial and/or Constrained Redundancy Analysis

    ERIC Educational Resources Information Center

    Takane, Yoshio; Jung, Sunho

    2008-01-01

    Methods of incorporating a ridge type of regularization into partial redundancy analysis (PRA), constrained redundancy analysis (CRA), and partial and constrained redundancy analysis (PCRA) were discussed. The usefulness of ridge estimation in reducing mean square error (MSE) has been recognized in multiple regression analysis for some time,…

  12. Second-order neural nets for constrained optimization.

    PubMed

    Zhang, S; Zhu, X; Zou, L H

    1992-01-01

    Analog neural nets for constrained optimization are proposed as an analogue of Newton's algorithm in numerical analysis. The neural model is globally stable and can converge to the constrained stationary points. Nonlinear neurons are introduced into the net, making it possible to solve optimization problems where the variables take discrete values, i.e., combinatorial optimization.

  13. The Pendulum: From Constrained Fall to the Concept of Potential

    ERIC Educational Resources Information Center

    Bevilacqua, Fabio; Falomo, Lidia; Fregonese, Lucio; Giannetto, Enrico; Giudice, Franco; Mascheretti, Paolo

    2006-01-01

    Kuhn underlined the relevance of Galileo's gestalt switch in the interpretation of a swinging body from constrained fall to time metre. But the new interpretation did not eliminate the older one. The constrained fall, both in the motion of pendulums and along inclined planes, led Galileo to the law of free fall. Experimenting with physical…

  14. Probability Statements Extraction with Constrained Conditional Random Fields.

    PubMed

    Deleris, Léa A; Jochim, Charles

    2016-01-01

    This paper investigates how to extract probability statements from academic medical papers. In previous work we have explored traditional classification methods which led to numerous false negatives. This current work focuses on constraining classification output obtained from a Conditional Random Field (CRF) model to allow for domain knowledge constraints. Our experimental results indicate constraining leads to a significant improvement in performance. PMID:27577439

  15. Binding of flexible and constrained ligands to the Grb2 SH2 domain: structural effects of ligand preorganization

    SciTech Connect

    Clements, John H.; DeLorbe, John E.; Benfield, Aaron P.; Martin, Stephen F.

    2010-10-01

    Structures of the Grb2 SH2 domain complexed with a series of flexible and constrained replacements of the phosphotyrosine residue in tripeptides derived from Ac-pYXN (where X = V, I, E and Q) were compared to determine what, if any, structural differences arise as a result of ligand preorganization. Structures of the Grb2 SH2 domain complexed with a series of pseudopeptides containing flexible (benzyl succinate) and constrained (aryl cyclopropanedicarboxylate) replacements of the phosphotyrosine (pY) residue in tripeptides derived from Ac-pYXN-NH$_2$ (where X = V, I, E and Q) were elucidated by X-ray crystallography. Complexes of flexible/constrained pairs having the same pY + 1 amino acid were analyzed in order to ascertain what structural differences might be attributed to constraining the phosphotyrosine replacement. In this context, a given structural dissimilarity between complexes was considered to be significant if it was greater than the corresponding difference in complexes coexisting within the same asymmetric unit. The backbone atoms of the domain generally adopt a similar conformation and orientation relative to the ligands in the complexes of each flexible/constrained pair, although there are some significant differences in the relative orientations of several loop regions, most notably in the BC loop that forms part of the binding pocket for the phosphate group in the tyrosine replacements. These variations are greater in the set of complexes of constrained ligands than in the set of complexes of flexible ligands. The constrained ligands make more direct polar contacts to the domain than their flexible counterparts, whereas the more flexible ligand of each pair makes more single-water-mediated contacts to the domain; there was no correlation between the total number of protein–ligand contacts and whether the phosphotyrosine replacement of the ligand was preorganized. The observed differences in hydrophobic interactions between the complexes of

  16. The cost-constrained traveling salesman problem

    SciTech Connect

    Sokkappa, P.R.

    1990-10-01

    The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
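
    To make the "selection plus sequencing" character of the problem concrete, here is a small greedy sketch (ours, not one of the report's algorithms): grow a closed subtour from a depot by cheapest insertion, always taking the node with the best value per unit of added tour cost that still fits the budget. Python; all names and the test instance are illustrative.

        import math

        def cctsp_greedy(coords, values, budget, depot=0):
            """Greedy heuristic for the cost-constrained TSP: cheapest-insertion growth of a
            closed subtour, ranking candidate nodes by value gained per unit of added cost."""
            dist = lambda i, j: math.dist(coords[i], coords[j])
            tour, cost = [depot, depot], 0.0    # closed tour starting and ending at the depot
            unvisited = set(range(len(coords))) - {depot}
            while True:
                best = None
                for n in unvisited:
                    # cheapest position at which node n could be inserted
                    inc, pos = min(
                        (dist(tour[k], n) + dist(n, tour[k + 1]) - dist(tour[k], tour[k + 1]), k + 1)
                        for k in range(len(tour) - 1))
                    ratio = values[n] / (inc + 1e-9)
                    if cost + inc <= budget and (best is None or ratio > best[0]):
                        best = (ratio, n, pos, inc)
                if best is None:                # nothing else fits within the cost constraint
                    return tour, cost, sum(values[i] for i in tour[:-1])
                _, n, pos, inc = best
                tour.insert(pos, n)
                cost += inc
                unvisited.remove(n)

        # Toy instance: five cities, the depot has value 0, travel budget 9.0
        tour, cost, value = cctsp_greedy([(0, 0), (1, 0), (2, 1), (0, 2), (3, 3)],
                                         values=[0, 4, 5, 3, 9], budget=9.0)

    A knapsack-style bound or the neighborhood-aware selection discussed above would replace the simple value-to-cost ratio; the sketch only shows how the budget check interacts with tour construction.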

  17. Quasi-optical constrained lens amplifiers

    NASA Astrophysics Data System (ADS)

    Schoenberg, Jon S.

    1995-09-01

    A major goal in the field of quasi-optics is to increase the power available from solid state sources by combining the power of individual devices in free space, as demonstrated with grid oscillators and grid amplifiers. Grid amplifiers and most amplifier arrays require a plane wave feed, provided by a far field source or at the beam waist of a dielectric lens pair. These feed approaches add considerable loss and size, which is usually greater than the quasi-optical amplifier gain. In addition, grid amplifiers require external polarizers for stability, further increasing size and complexity. This thesis describes using constrained lens theory in the design of quasi optical amplifier arrays with a focal point feed, improving the power coupling between the feed and the amplifier for increased gain. Feed and aperture arrays of elements, input/output isolation and stability, amplifier circuitry, delay lines and bias distribution are all contained on a single planar substrate, making monolithic circuit integration possible. Measured results of X band transmission lenses and a low noise receive lens are presented, including absolute power gain up to 13 dB, noise figure as low as 1.7 dB, beam scanning to +/-30 deg, beam forming and beam switching of multiple sources, and multiple level quasi-optical power combining. The design and performance of millimeter wave power combining amplifier arrays is described, including a Ka Band hybrid array with 1 watt output power, and a V Band 36 element monolithic array with a 5 dB on/off ratio.

  18. Laterally constrained inversion for CSAMT data interpretation

    NASA Astrophysics Data System (ADS)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are, to some extent, insensitive to noise. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison with the previous inversion in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while comparison with a global-search simulated annealing (SA) algorithm in the watershed shows that both methods deliver very similar, good results, but the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
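
    Schematically (our notation, not the paper's), an LCI-style objective ties together the 1D models beneath neighbouring stations while all stations are inverted simultaneously,

        \[
        \Phi(\mathbf{m}) \;=\; \sum_{i} \big\lVert \mathbf{W}_d^{(i)} \big( \mathbf{d}_i - F(\mathbf{m}_i) \big) \big\rVert^2
        \;+\; \lambda \sum_{i} \big\lVert \mathbf{R}\,(\mathbf{m}_{i+1} - \mathbf{m}_i) \big\rVert^2 ,
        \]

    where $\mathbf{m}_i$ is the layered model under station $i$, $F$ is the 1D CSAMT forward operator and the second term penalises lateral jumps between adjacent stations. The preconditioning described above can be read as a column rescaling of the Jacobian (replacing $\mathbf{J}$ by $\mathbf{J}\mathbf{P}$ with a diagonal $\mathbf{P}$) so that resistivities and layer thicknesses enter the Gauss-Newton update with comparable sensitivity.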

  19. Constraining the source of mantle plumes

    NASA Astrophysics Data System (ADS)

    Cagney, N.; Crameri, F.; Newsome, W. H.; Lithgow-Bertelloni, C.; Cotel, A.; Hart, S. R.; Whitehead, J. A.

    2016-02-01

    In order to link the geochemical signature of hot spot basalts to Earth's deep interior, it is first necessary to understand how plumes sample different regions of the mantle. Here, we investigate the relative amounts of deep and shallow mantle material that are entrained by an ascending plume and constrain its source region. The plumes are generated in a viscous syrup using an isolated heater for a range of Rayleigh numbers. The velocity fields are measured using stereoscopic Particle-Image Velocimetry, and the concept of the 'vortex ring bubble' is used to provide an objective definition of the plume geometry. Using this plume geometry, the plume composition can be analysed in terms of the proportion of material that has been entrained from different depths. We show that the plume composition can be well described using a simple empirical relationship, which depends only on a single parameter, the sampling coefficient, sc. High-sc plumes are composed of material which originated from very deep in the fluid domain, while low-sc plumes contain material entrained from a range of depths. The analysis is also used to show that the geometry of the plume can be described using a similarity solution, in agreement with previous studies. Finally, numerical simulations are used to vary both the Rayleigh number and viscosity contrast independently. The simulations allow us to predict the value of the sampling coefficient for mantle plumes; we find that as a plume reaches the lithosphere, 90% of its composition has been derived from the lowermost 260-750 km in the mantle, and negligible amounts are derived from the shallow half of the lower mantle. This result implies that isotope geochemistry cannot provide direct information about this unsampled region, and that the various known geochemical reservoirs must lie in the deepest few hundred kilometres of the mantle.

  20. Constrained layer damping of a tennis racket

    NASA Astrophysics Data System (ADS)

    Harms, Michael R.; Gopal, H. S.; Lai, Ming-Lai; Cheng, Po-Jen

    1996-05-01

    When a tennis ball strikes a racket the impact causes vibrations which are distracting and undesirable to the player. In this work a passive damping system used to reduce vibration is described. The damping system uses a viscoelastic material along with a stiff composite constraining layer which is molded on the inner surface of the tennis racket frame. When a ball strikes a racket with this damping system the vibration causes shearing strain in the viscoelastic material. This strain energy is partially dissipated by the viscoelastic material, thereby increasing the racket damping. An analysis of the design was performed by creating a solid CAD model of the racket using Pro/Engineer. A finite element mesh was created and the mesh was then exported to ANSYS for the finite element modal analysis. The technique used to determine the damping ratio is the modal strain energy method. Experimental testing using accelerometers was conducted to determine the natural frequency and the damping ratio of rackets with and without the damping system. The natural frequency of the finite element model was benchmarked to the experimental data and damping ratios were compared. The modal strain energy method was found to be a very effective means of determining the damping ratio, and the frequencies and damping ratios correlated well with the experimental data. Using this analysis method, the effectiveness of the damping ratio to the change in key variables can be studied, minimizing the need for prototypes. This method can be used to determine an optimum design by maximizing the damping ratio with minimal weight addition.
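    A minimal sketch of the modal strain energy estimate referred to above: the modal loss factor is approximated by the material loss factor of the viscoelastic layer times the fraction of the modal strain energy stored in that layer, and the equivalent viscous damping ratio is roughly half the loss factor. The strain-energy fraction and loss factor below are illustrative placeholders, not values from the paper.

```python
def mse_damping_ratio(U_visco, U_total, eta_visco=1.0):
    """Modal strain energy (MSE) estimate: loss factor of a mode is
    approximately eta_visco * (strain energy in the viscoelastic layer) /
    (total modal strain energy); the damping ratio is about half of that."""
    eta_mode = eta_visco * U_visco / U_total
    return 0.5 * eta_mode

# example: 6% of the modal strain energy in the viscoelastic layer,
# material loss factor of 0.9 (both hypothetical)
zeta = mse_damping_ratio(U_visco=0.06, U_total=1.0, eta_visco=0.9)
print(f"estimated modal damping ratio: {zeta:.3f}")
```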

  1. Titan's interior constrained from its obliquity and tidal Love number

    NASA Astrophysics Data System (ADS)

    Baland, Rose-Marie; Coyette, Alexis; Yseboodt, Marie; Beuthe, Mikael; Van Hoolst, Tim

    2016-04-01

    In the last few years, the Cassini-Huygens mission to the Saturn system has measured the shape, the obliquity, the static gravity field, and the tidally induced gravity field of Titan. The large values of the obliquity and of the k2 Love number both point to the existence of a global internal ocean below the icy crust. In order to constrain interior models of Titan, we combine the above-mentioned data as follows: (1) we build four-layer density profiles consistent with Titan's bulk properties; (2) we determine the corresponding internal flattening compatible with the observed gravity and topography; (3) we compute the obliquity and tidal Love number for each interior model; (4) we compare these predictions with the observations. Previously, we found that Titan is more differentiated than expected (assuming hydrostatic equilibrium), and that its ocean is dense and less than 100 km thick. Here, we revisit these conclusions using a more complete Cassini state model, including: (1) gravitational and pressure torques due to internal tidal deformations; (2) atmosphere/lakes-surface exchange of angular momentum; (3) inertial torque due to Poincaré flow. We also adopt faster methods to evaluate Love numbers (i.e. the membrane approach) in order to explore a larger parameter space.

  2. Constraining Dark Matter Through the Study of Merging Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Dawson, William Anthony

    2013-03-01

    gravitational lensing observations to map and weigh the mass (i.e., dark matter, which comprises ~85% of the mass) of the cluster, Sunyaev-Zel'dovich effect and X-ray observations to map and quantify the intracluster gas, and finally radio observations to search for associated radio relics, which, had they been observed, would have helped constrain the properties of the merger. Using this information in conjunction with a Monte Carlo analysis model I quantify the dynamic properties of the merger, necessary to properly interpret constraints on the SIDM cross-section. I compare the locations of the galaxies, dark matter and gas to constrain the SIDM cross-section. This dissertation presents this work. Findings: We find that the Musket Ball is a merger with total mass of 4.8(+3.2/-1.5) x 10^14 Msun. However, the dynamic analysis shows that the Musket Ball is being observed 1.1(+1.3/-0.4) Gyr after first pass-through and is much further progressed in its merger process than previously identified dissociative mergers (for example, it is 3.4(+3.8/-1.4) times further progressed than the Bullet Cluster). By observing that the dark matter is significantly offset from the gas we are able to place an upper limit on the dark matter cross-section of sigma_SIDM/m_DM < 8 cm^2 g^-1. However, we find that the galaxies appear to lead the weak lensing (WL) mass distribution by 20.5" (129 kpc at z=0.53) in the southern subcluster, which might be expected to occur if dark matter self-interacts. Contrary to this finding, though, the WL mass centroid appears to lead the galaxy centroid by 7.4" (47 kpc at z=0.53) in the northern subcluster. Conclusion: The southern offset alone suggests that dark matter self-interacts with ~83% confidence. However, when we account for the observation that the galaxy centroid appears to trail the WL centroid in the north, the confidence falls to ~55%. While the SIDM scenario is slightly preferred over the CDM scenario, it is not significantly so. Perspectives: The galaxy

  3. Constraining Anthropogenic and Biogenic Emissions Using Chemical Ionization Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Spencer, Kathleen M.

    Numerous gas-phase anthropogenic and biogenic compounds are emitted into the atmosphere. These gases undergo oxidation to form other gas-phase species and particulate matter. Whether directly or indirectly, primary pollutants, secondary gas-phase products, and particulate matter all pose health and environmental risks. In this work, ambient measurements conducted using chemical ionization mass spectrometry are used as a tool for investigating regional air quality. Ambient measurements of peroxynitric acid (HO2NO2) were conducted in Mexico City. A method of inferring the rate of ozone production, PO3, is developed based on observations of HO2NO2, NO, and NO2. Comparison of this observationally based PO3 to a highly constrained photochemical box model indicates that regulations aimed at reducing ozone levels in Mexico City by reducing NOx concentrations may be effective at higher NOx levels than predicted using accepted photochemistry. Measurements of SO2 and particulate sulfate were conducted over the Los Angeles basin in 2008 and are compared to measurements made in 2002. A large decrease in SO2 concentration and a change in spatial distribution are observed. Nevertheless, only a modest reduction in sulfate concentration is observed at ground sites within the basin. Possible explanations for these trends are investigated. Two techniques, single and triple quadrupole chemical ionization mass spectrometry, were used to quantify ambient concentrations of biogenic oxidation products, hydroxyacetone and glycolaldehyde. The use of these techniques demonstrates the advantage of triple quadrupole mass spectrometry for separation of mass analogues, provided the collision-induced daughter ions are sufficiently distinct. Enhancement ratios of hydroxyacetone and glycolaldehyde in Californian biomass burning plumes are presented, as are concentrations of these compounds at a rural ground site downwind of Sacramento.

  4. Constraining depth of anisotropy in the Amazon region (Northern Brasil)

    NASA Astrophysics Data System (ADS)

    Bianchi, Irene; Willy Corrêa Rosa, João; Bokelmann, Götz

    2014-05-01

    Seismic data recorded between November 2009 and September 2013 at the permanent station PTGA of the Brazilian seismic network were used to constrain the depth of anisotropy in the lithosphere beneath the station. 90 receiver functions (RF) have been computed, covering backazimuthal directions from 0° to 180°. Both radial (R) and transverse (T) components of the RF contain useful information about the subsurface structure. The isotropic part of the seismic velocity profile at depth mainly affects the R-RF component, while anisotropy and dipping structures produce P-to-S conversions recorded on the T-RF component (Levin and Park, 1998; Savage, 1998). The incoming (radially polarized) S waves, when passing through an anisotropic crust, split, and part of their energy is projected onto the transverse component. The anisotropy symmetry orientations (Φ) can be estimated from the polarity changes of the observed phases, while the arrival times of the phases are related to the depth of the conversion. Depth and Φ are estimated by isolating phases at certain arrival times. SKS shear-wave splitting results from previous studies in this area (Krüger et al., 2002; Rosa et al., 2014) suggest the presence of anisotropy in the mantle, with the orientation of the fast splitting axis (about E-W) following major deep tectonic structures. The observed splitting orientation correlates well with the current South America plate motion (i.e. relative to the mesosphere) and with observed aeromagnetic trends. This similarity leaves open the possibility of a linkage between the upper mantle fabric imaged by shear wave splitting analysis and the lower crustal structure imaged by aeromagnetometry. In this study we unravel, from RF data, two layers in which anisotropy concentrates, i.e. the lower crust and the upper mantle. The lower crustal and upper mantle anisotropy retrieved by RFs provides new hints for interpreting the previously observed anisotropic orientations from SKS and the aeromagnetic anomalies.

  5. Constraining Cometary Crystal Shapes from IR Spectral Features

    NASA Technical Reports Server (NTRS)

    Wooden, Diane H.; Lindsay, Sean; Harker, David E.; Kelley, Michael S. P.; Woodward, Charles E.; Murphy, James Richard

    2013-01-01

    A major challenge in deriving the silicate mineralogy of comets is ascertaining how the anisotropic nature of forsterite crystals affects the spectral features' wavelength, relative intensity, and asymmetry. Forsterite features are identified in cometary comae near 10, 11.05-11.2, 16, 19, 23.5, 27.5 and 33 microns [1-10], so accurate models for forsterite's absorption efficiency (Qabs) are a primary requirement to compute IR spectral energy distributions (SEDs, λF_λ vs. λ) and constrain the silicate mineralogy of comets. Forsterite is an anisotropic crystal, with three crystallographic axes with distinct indices of refraction for the a-, b-, and c-axis. The shape of a forsterite crystal significantly affects its spectral features [13-16]. We need models that account for crystal shape. The IR absorption efficiencies of forsterite are computed using the discrete dipole approximation (DDA) code DDSCAT [11,12]. Starting from a fiducial crystal shape of a cube, we systematically elongate/reduce one of the crystallographic axes. Also, we elongate/reduce one axis while the lengths of the other two axes are slightly asymmetric (0.8:1.2). The most significant grain shape characteristic that affects the crystalline spectral features is the relative lengths of the crystallographic axes. The second significant grain shape characteristic is breaking the symmetry of all three axes [17]. Synthetic spectral energy distributions using seven crystal shape classes [17] are fit to the observed SED of comet C/1995 O1 (Hale-Bopp). The Hale-Bopp crystalline residual better matches equant, b-platelets, c-platelets, and b-columns spectral shape classes, while a-platelets, a-columns and c-columns worsen the spectral fits. Forsterite condensation and partial evaporation experiments demonstrate that environmental temperature and grain shape are connected [18-20]. Thus, grain shape is a potential probe for protoplanetary disk temperatures where the cometary crystalline

  6. How We Can Constrain Aerosol Type Globally

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph

    2016-01-01

    In addition to aerosol number concentration, aerosol size and composition are essential attributes needed to adequately represent aerosol-cloud interactions (ACI) in models. As the nature of ACI varies enormously with environmental conditions, global-scale constraints on particle properties are indicated. And although advanced satellite remote-sensing instruments can provide categorical aerosol-type classification globally, detailed particle microphysical properties are unobtainable from space with currently available or planned technologies. For the foreseeable future, only in situ measurements can constrain particle properties at the level-of-detail required for ACI, as well as to reduce uncertainties in regional-to-global-scale direct aerosol radiative forcing (DARF). The limitation of in situ measurements for this application is sampling. However, there is a simplifying factor: for a given aerosol source, in a given season, particle microphysical properties tend to be repeatable, even if the amount varies from day-to-day and year-to-year, because the physical nature of the particles is determined primarily by the regional environment. So, if the PDFs of particle properties from major aerosol sources can be adequately characterized, they can be used to add the missing microphysical detail to the better-sampled satellite aerosol-type maps. This calls for Systematic Aircraft Measurements to Characterize Aerosol Air Masses (SAM-CAAM). We are defining a relatively modest and readily deployable, operational aircraft payload capable of measuring key aerosol absorption, scattering, and chemical properties in situ, and a program for characterizing statistically these properties for the major aerosol air mass types, at a level-of-detail unobtainable from space. It is aimed at: (1) enhancing satellite aerosol-type retrieval products with better aerosol climatology assumptions, and (2) improving the translation between satellite-retrieved aerosol optical properties and

  7. Folding of Small Proteins Using Constrained Molecular Dynamics

    PubMed Central

    Balaraman, Gouthaman S.; Park, In-Hee; Jain, Abhinandan; Vaidehi, Nagarajan

    2011-01-01

    The focus of this paper is to examine whether conformational search using constrained molecular dynamics (MD) method is more enhanced and enriched towards “native-like” structures compared to all-atom MD for the protein folding as a model problem. Constrained MD methods provide an alternate MD tool for protein structure prediction and structure refinement. It is computationally expensive to perform all-atom simulations of protein folding because the processes occur on a timescale of microseconds. Compared to the all-atom MD simulation, constrained MD methods have the advantage that stable dynamics can be achieved for larger time steps and the number of degrees of freedom is an order of magnitude smaller, leading to a decrease in computational cost. We have developed a generalized constrained MD method that allows the user to “freeze and thaw” torsional degrees of freedom as fit for the problem studied. We have used this method to perform all-torsion constrained MD in implicit solvent coupled with the replica exchange method to study folding of small proteins with various secondary structural motifs such as, α-helix (polyalanine, WALP16), β-turn (1E0Q), and a mixed motif protein (Trp-cage). We demonstrate that constrained MD replica exchange method exhibits a wider conformational search than all-atom MD with increased enrichment of near native structures. “Hierarchical” constrained MD simulations, where the partially formed helical regions in the initial stretch of the all-torsion folding simulation trajectory of Trp-cage were frozen, showed a better sampling of near native structures than all-torsion constrained MD simulations. This is in agreement with the zipping-and-assembly folding model put forth by Dill and coworkers for folding proteins. The use of hierarchical “freeze and thaw” clustering schemes in constrained MD simulation can be used to sample conformations that contribute significantly to folding of proteins. PMID:21591767

  8. Residual flexibility test method for verification of constrained structural models

    NASA Technical Reports Server (NTRS)

    Admire, John R.; Tinker, Michael L.; Ivey, Edward W.

    1992-01-01

    A method is presented for deriving constrained modes and frequencies from a model correlated to a set of free-free test modes and a set of measured residual flexibilities. The method involves a simple modification of the MacNeal and Rubin component mode representation to allow verification of a constrained structural model. Results for two spaceflight structures show quick convergence of constrained modes using an easily measurable set of free-free modes plus the residual flexibility matrix or its boundary partition. This paper further validates the residual flexibility approach as an alternative test/analysis method when fixed-base testing proves impractical.

  9. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    NASA Astrophysics Data System (ADS)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since in many cases they have a nearly constant stroke width. An image was segmented with a constrained Delaunay triangulation. Connected-component grouping was performed based on the triangles generated by the constrained Delaunay triangulation, and the stroke width of each connected component was calculated from the altitudes of those triangles. The experimental results proved the effectiveness of the proposed method.
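    The per-triangle measure behind this approach can be illustrated with a small geometric sketch: the shortest altitude of a thin triangle spanning a stroke approximates the local stroke width. This is only an illustration of the altitude computation, not the paper's full constrained-triangulation pipeline; the example points are hypothetical.

```python
def triangle_altitude(p0, p1, p2):
    """Shortest altitude of a triangle (2 * area / longest side), used here
    as a local stroke-width proxy for a triangle lying inside a character
    stroke."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    area = 0.5 * abs((x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0))
    sides = [
        ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5,
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5,
        ((x0 - x2) ** 2 + (y0 - y2) ** 2) ** 0.5,
    ]
    return 2.0 * area / max(sides)

# a thin triangle spanning a stroke about 4 pixels wide
print(triangle_altitude((0, 0), (30, 0), (15, 4)))   # -> 4.0
```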

  10. Binding of flexible and constrained ligands to the Grb2 SH2 domain: structural effects of ligand preorganization

    PubMed Central

    Clements, John H.; DeLorbe, John E.; Benfield, Aaron P.; Martin, Stephen F.

    2010-01-01

    Structures of the Grb2 SH2 domain complexed with a series of pseudopeptides containing flexible (benzyl succinate) and constrained (aryl cyclopropanedicarboxylate) replacements of the phosphotyrosine (pY) residue in tripeptides derived from Ac-pYXN-NH2 (where X = V, I, E and Q) were elucidated by X-ray crystallography. Complexes of flexible/constrained pairs having the same pY + 1 amino acid were analyzed in order to ascertain what structural differences might be attributed to constraining the phosphotyrosine replacement. In this context, a given structural dissimilarity between complexes was considered to be significant if it was greater than the corresponding difference in complexes coexisting within the same asymmetric unit. The backbone atoms of the domain generally adopt a similar conformation and orientation relative to the ligands in the complexes of each flexible/constrained pair, although there are some significant differences in the relative orientations of several loop regions, most notably in the BC loop that forms part of the binding pocket for the phosphate group in the tyrosine replacements. These variations are greater in the set of complexes of constrained ligands than in the set of complexes of flexible ligands. The constrained ligands make more direct polar contacts to the domain than their flexible counterparts, whereas the more flexible ligand of each pair makes more single-water-mediated contacts to the domain; there was no correlation between the total number of protein–ligand contacts and whether the phosphotyrosine replacement of the ligand was preorganized. The observed differences in hydrophobic interactions between the complexes of each flexible/constrained ligand pair were generally similar to those observed upon comparing such contacts in coexisting complexes. The average adjusted B factors of the backbone atoms of the domain and loop regions are significantly greater in the complexes of constrained ligands than in the complexes of

  11. Progress in constraining the asymmetry dependence of the nuclear caloric curve

    NASA Astrophysics Data System (ADS)

    McIntosh, Alan B.; Yennello, Sherry J.

    2016-05-01

    The nuclear equation of state is a basic emergent property of nuclear material. Despite its importance in nuclear physics and astrophysics, aspects of it are still poorly constrained. Our research focuses on answering the question: How does the nuclear caloric curve depend on the neutron-proton asymmetry? We briefly describe our initial observation that increasing neutron-richness leads to lower temperatures. We then discuss the status of our recently executed experiment to independently measure the asymmetry dependence of the caloric curve.

  12. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
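    The patent abstract does not spell out the reorganization, so the sketch below only illustrates the combinatorial idea it describes, under the usual active-set assumption: observation columns that share the same passive (unconstrained) variable set are solved together, so one least-squares factorization serves many right-hand sides instead of one solve per column. The helper name and the boolean passive-set input are hypothetical.

```python
import numpy as np
from collections import defaultdict

def grouped_passive_ls(A, B, passive):
    """Solve many least-squares problems A x_j ~ b_j, where `passive` is a
    boolean array (n_vars, n_obs) marking the unconstrained variables of
    each column. Columns with identical passive sets are grouped so each
    group needs only one factorization."""
    X = np.zeros((A.shape[1], B.shape[1]))
    groups = defaultdict(list)
    for j in range(B.shape[1]):
        groups[tuple(passive[:, j])].append(j)      # group identical patterns
    for pattern, cols in groups.items():
        idx = np.flatnonzero(pattern)
        if idx.size == 0:
            continue
        # one factorization of A[:, idx] reused for every column in the group
        sol, *_ = np.linalg.lstsq(A[:, idx], B[:, cols], rcond=None)
        X[np.ix_(idx, cols)] = sol
    return X
```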

  13. Constraining MHD Disk-Winds with X-ray Absorbers

    NASA Astrophysics Data System (ADS)

    Fukumura, Keigo; Tombesi, F.; Shrader, C. R.; Kazanas, D.; Contopoulos, J.; Behar, E.

    2014-01-01

    From state-of-the-art spectroscopic observations of active galactic nuclei (AGNs), robust absorption-line features (most notably from H/He-like ions), called warm absorbers (WAs), have often been detected in soft X-rays (< 2 keV). While the identified WAs are often mildly blueshifted, yielding line-of-sight velocities up to ~100-3,000 km/sec in typical X-ray-bright Seyfert 1 AGNs, a fraction of Seyfert galaxies such as PG 1211+143 exhibits even faster absorbers (v/c ~ 0.1-0.2) called ultra-fast outflows (UFOs), whose physical conditions are much more extreme compared with the WAs. Motivated by these recent X-ray data, we show that the magnetically-driven accretion-disk wind model is a plausible scenario to explain the characteristic properties of these X-ray absorbers. As a preliminary case study we demonstrate that the wind model parameters (e.g. viewing angle and wind density) can be constrained by data from PG 1211+143 at a statistically significant level with chi-squared spectral analysis. Our wind models can thus be implemented into the standard analysis package, XSPEC, as a table spectrum model for general analysis of X-ray absorbers.
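    As a concrete illustration of the last sentence, a minimal PyXspec sketch of fitting an additive table model with chi-squared statistics is shown below. The table file name and spectrum file are hypothetical placeholders; the actual table produced for the wind model is not part of this record.

```python
# Minimal PyXspec sketch of fitting an additive table model to a spectrum.
# "mhd_wind.fits" and "pg1211_src.pha" are placeholder file names.
from xspec import AllData, Model, Fit

AllData("pg1211_src.pha")                 # load a source spectrum
m = Model("phabs*atable{mhd_wind.fits}")  # Galactic absorption x wind table
Fit.statMethod = "chi"                    # chi-squared, as in the abstract
Fit.perform()
m.show()                                  # best-fit table parameters (e.g. angle, density)
```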

  14. How tightly does calcite e-twin constrain stress?

    NASA Astrophysics Data System (ADS)

    Yamaji, Atsushi

    2015-03-01

    Mechanical twinning along calcite e-planes has been used for paleostress analyses. Since the twinning has a critical resolved shear stress of ~10 MPa, not only the principal stress axes but also the differential stress can be determined from the twins. In this article, the five-dimensional stress space used in plasticity theory was introduced to describe the yield loci of calcite e-twinning. The constraints on paleostress from twin and untwin data, and from calcite grains twinned on 0, 1, 2 and 3 e-planes, were quantified by using their information contents, which were defined in the stress space. The orientations of twinned and untwinned e-planes are known to constrain not only the stress axes but also the differential stress, D, but they lose the resolution of D if the twin lamellae were formed at D greater than 50-100 MPa. On the other hand, it is difficult to observe twin lamellae subparallel to a thin section. Stochastic modeling of this effect showed that 20-25% of twin lamellae can be overlooked. The degradation of the constraints by this sampling bias can be serious, especially for the determination of D.
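    A minimal sketch of the quantity underlying this kind of analysis: the resolved shear stress on an e-plane for a given stress tensor, compared against the ~10 MPa critical value. The plane normal, glide direction and stress state below are illustrative, not taken from the paper.

```python
import numpy as np

CRSS = 10.0  # MPa, approximate critical resolved shear stress for e-twinning

def resolved_shear_stress(sigma, n, g):
    """tau = g . sigma . n for a unit plane normal n and a unit glide
    (twinning) direction g lying in that plane."""
    n = n / np.linalg.norm(n)
    g = g / np.linalg.norm(g)
    return g @ sigma @ n

# illustrative uniaxial compression of 60 MPa along x (compression negative)
sigma = np.diag([-60.0, 0.0, 0.0])
n = np.array([1.0, 1.0, 0.0])      # hypothetical e-plane normal
g = np.array([1.0, -1.0, 0.0])     # glide direction within that plane
tau = resolved_shear_stress(sigma, n, g)
print(abs(tau) >= CRSS, tau)       # would this grain twin on that plane?
```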

  15. Ultraconservation identifies a small subset of extremely constrained developmental enhancers

    SciTech Connect

    Pennacchio, Len A.; Visel, Axel; Prabhakar, Shyam; Akiyama, Jennifer A.; Shoukry, Malak; Lewis, Keith D.; Holt, Amy; Plajzer-Frick, Ingrid; Afzal, Veena; Rubin, Edward M.; Pennacchio, Len A.

    2007-10-01

    While experimental studies have suggested that non-coding ultraconserved DNA elements are central nodes in the regulatory circuitry that specifies mammalian embryonic development, the possible functional relevance of their >200 bp of perfect sequence conservation between human, mouse and rat remains obscure [1,2]. Here we have compared the in vivo enhancer activity of a genome-wide set of 231 non-exonic sequences with ultraconserved cores to that of 206 sequences that are under equivalently severe human-rodent constraint (ultra-like), but lack perfect sequence conservation. In transgenic mouse assays, 50% of the ultraconserved and 50% of the ultra-like conserved elements reproducibly functioned as tissue-specific enhancers at embryonic day 11.5. In this in vivo assay, we observed that ultraconserved enhancers and constrained non-ultraconserved enhancers targeted expression to a similar spectrum of tissues with a particular enrichment in the developing central nervous system. A human genome-wide comparative screen uncovered ~2,600 non-coding elements that evolved under ultra-like human-rodent constraint and are similarly enriched near transcriptional regulators and developmental genes as the much smaller number of ultraconserved elements. These data indicate that ultraconserved elements possessing absolute human-rodent sequence conservation are not distinct from other non-coding elements that are under comparable purifying selection in mammals and suggest they are principal constituents of the cis-regulatory framework of mammalian development.

  16. Could the Pliocene constrain the equilibrium climate sensitivity?

    NASA Astrophysics Data System (ADS)

    Hargreaves, J. C.; Annan, J. D.

    2016-08-01

    The mid-Pliocene Warm Period (mPWP) is the most recent interval in which atmospheric carbon dioxide was substantially higher than in modern pre-industrial times. It is, therefore, a potentially valuable target for testing the ability of climate models to simulate climates warmer than the pre-industrial state. The recent Pliocene Model Intercomparison Project (PlioMIP) presented boundary conditions for the mPWP and a protocol for climate model experiments. Here we analyse results from the PlioMIP and, for the first time, discuss the potential for this interval to usefully constrain the equilibrium climate sensitivity. We observe a correlation across the ensemble between the models' tropical temperature anomalies at the mPWP and their equilibrium sensitivities. If the real world is assumed to also obey this relationship, then the reconstructed tropical temperature anomaly at the mPWP can in principle generate a constraint on the true sensitivity. Directly applying this methodology using available data yields a range for the equilibrium sensitivity of 1.9-3.7 °C, but there are considerable additional uncertainties surrounding the analysis which are not included in this estimate. We consider the extent to which these uncertainties may be better quantified and perhaps lessened in the next few years.
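    The emergent-constraint calculation described here reduces to a simple regression across the ensemble, which the hedged sketch below illustrates with entirely synthetic numbers: regress equilibrium climate sensitivity (ECS) on the mPWP tropical temperature anomaly across the models, then read off the ECS implied by a reconstructed anomaly. None of the values are from the PlioMIP analysis.

```python
import numpy as np

# synthetic ensemble values (one entry per model)
ecs     = np.array([2.1, 2.7, 3.0, 3.4, 3.9, 4.4])   # K
dT_trop = np.array([0.8, 1.1, 1.3, 1.6, 1.9, 2.2])   # K, mPWP tropical anomaly

slope, intercept = np.polyfit(dT_trop, ecs, 1)        # emergent relationship

dT_reconstructed = 1.4                                # K, hypothetical proxy value
ecs_constrained = slope * dT_reconstructed + intercept
print(f"constrained ECS ~ {ecs_constrained:.1f} K")
```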

  17. A new seismically constrained subduction interface model for Central America

    NASA Astrophysics Data System (ADS)

    Kyriakopoulos, C.; Newman, A. V.; Thomas, A. M.; Moore-Driskell, M.; Farmer, G. T.

    2015-08-01

    We provide a detailed, seismically defined three-dimensional model for the subducting plate interface along the Middle America Trench between northern Nicaragua and southern Costa Rica. The model uses data from a weighted catalog of about 30,000 earthquake hypocenters compiled from nine catalogs to constrain the interface through a process we term the "maximum seismicity method." The method determines the average position of the largest cluster of microseismicity beneath an a priori functional surface above the interface. This technique is applied to all seismicity above 40 km depth, the approximate intersection of the hanging wall Mohorovičić discontinuity, where seismicity likely lies along the plate interface. Below this depth, an envelope above 90% of seismicity approximates the slab surface. Because of station proximity to the interface, this model provides highest precision along the interface beneath the Nicoya Peninsula of Costa Rica, an area where marked geometric changes coincide with crustal transitions and topography observed seaward of the trench. The new interface is useful for a number of geophysical studies that aim to understand subduction zone earthquake behavior and geodynamic and tectonic development of convergent plate boundaries.
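    The "maximum seismicity method" is only summarized above, so the following is a deliberately simplified, hedged sketch for a single map-view cell: above 40 km the interface is placed at the densest depth bin of hypocenters, and where the seismicity is dominantly deeper, an envelope above 90% of the events stands in for the slab surface. The bin size and toy catalogue are illustrative.

```python
import numpy as np

def interface_depth(depths, shallow_cut=40.0, bin_km=2.0):
    """Simplified stand-in for the maximum-seismicity idea in one cell:
    if most events are shallower than `shallow_cut` km, return the centre of
    the densest depth bin; otherwise return the depth above which 90% of the
    seismicity lies (an envelope on the slab surface)."""
    depths = np.asarray(depths)
    shallow = depths[depths <= shallow_cut]
    if shallow.size >= depths.size / 2:
        bins = np.arange(0.0, shallow_cut + bin_km, bin_km)
        counts, edges = np.histogram(shallow, bins=bins)
        i = np.argmax(counts)
        return 0.5 * (edges[i] + edges[i + 1])   # centre of densest depth bin
    return np.percentile(depths, 10.0)           # 90% of events lie below this

# toy catalogue for one cell: interface seismicity near ~22 km plus scatter
rng = np.random.default_rng(0)
depths = np.concatenate([rng.normal(22, 2, 200), rng.uniform(5, 40, 50)])
print(interface_depth(depths))   # ~22 km
```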

  18. FXR agonist activity of conformationally constrained analogs of GW 4064

    SciTech Connect

    Akwabi-Ameyaw, Adwoa; Bass, Jonathan Y.; Caldwell, Richard D.; Caravella, Justin A.; Chen, Lihong; Creech, Katrina L.; Deaton, David N.; Madauss, Kevin P.; Marr, Harry B.; McFadyen, Robert B.; Miller, Aaron B.; Navas, III, Frank; Parks, Derek J.; Spearing, Paul K.; Todd, Dan; Williams, Shawn P.; Wisely, G. Bruce

    2010-09-27

    Two series of conformationally constrained analogs of the FXR agonist GW 4064 1 were prepared. Replacement of the metabolically labile stilbene with either benzothiophene or naphthalene rings led to the identification of potent full agonists 2a and 2g.

  19. FXR agonist activity of conformationally constrained analogs of GW 4064.

    PubMed

    Akwabi-Ameyaw, Adwoa; Bass, Jonathan Y; Caldwell, Richard D; Caravella, Justin A; Chen, Lihong; Creech, Katrina L; Deaton, David N; Madauss, Kevin P; Marr, Harry B; McFadyen, Robert B; Miller, Aaron B; Navas, Frank; Parks, Derek J; Spearing, Paul K; Todd, Dan; Williams, Shawn P; Bruce Wisely, G

    2009-08-15

    Two series of conformationally constrained analogs of the FXR agonist GW 4064 1 were prepared. Replacement of the metabolically labile stilbene with either benzothiophene or naphthalene rings led to the identification of potent full agonists 2a and 2g.

  20. Geometric constrained variational calculus. II: The second variation (Part I)

    NASA Astrophysics Data System (ADS)

    Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico

    2016-10-01

    Within the geometrical framework developed in [Geometric constrained variational calculus. I: Piecewise smooth extremals, Int. J. Geom. Methods Mod. Phys. 12 (2015) 1550061], the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A fully covariant representation of the second variation of the action functional, based on a suitable gauge transformation of the Lagrangian, is explicitly worked out. Both necessary and sufficient conditions for minimality are proved, and reinterpreted in terms of Jacobi fields.

  1. Constrained minimization of smooth functions using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
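    A hedged sketch of the conversion described above: the first-order necessary conditions for min f(x) subject to g(x) = 0 are folded into a single unconstrained objective (the squared residual of the Lagrangian stationarity plus the constraint violation), which is then minimized by an evolutionary search. SciPy's differential evolution stands in here for the genetic algorithm of the paper, and the toy problem is illustrative only.

```python
import numpy as np
from scipy.optimize import differential_evolution

f      = lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2        # objective
grad_f = lambda x: np.array([2 * (x[0] - 2), 2 * (x[1] + 1)])
g      = lambda x: x[0] + x[1] - 2.0                        # equality constraint
grad_g = lambda x: np.array([1.0, 1.0])

def kkt_residual(z):
    """Squared norm of the necessary conditions; zero only at a KKT point."""
    x, lam = z[:2], z[2]
    stationarity = grad_f(x) + lam * grad_g(x)   # gradient of the Lagrangian
    return np.sum(stationarity ** 2) + g(x) ** 2

bounds = [(-5, 5), (-5, 5), (-10, 10)]           # x1, x2, multiplier
result = differential_evolution(kkt_residual, bounds, seed=1)
print(result.x[:2])                              # approx [2.5, -0.5]
```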

  2. Constraining a Distributed Hydrologic Model Using Process Constraints derived from a Catchment Perceptual Model

    NASA Astrophysics Data System (ADS)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei; Duffy, Chris; Musuuza, Jude; Zhang, Jun

    2015-04-01

    The increased availability of spatial datasets and hydrological monitoring techniques improves the potential to apply distributed hydrologic models robustly to simulate catchment systems. However, distributed catchment modelling remains problematic for several reasons, including the mismatch between the scale of process equations and observations, and the scale at which equations (and parameters) are applied at the model grid resolution. A key problem is that when equations are solved over a distributed grid of the catchment system, models contain a considerable number of distributed parameters, and therefore degrees of freedom, that need to be constrained through calibration. Often computational limitations alone prohibit a full search of the multidimensional parameter space. However, even when possible, insufficient data results in model parameter and/or structural equifinality. Calibration approaches therefore attempt to reduce the dimensions of parameter space to constrain model behaviour, typically by fixing, lumping or relating model parameters in some way when calibrating the model to time-series of response data. An alternative approach to help reduce the space of feasible models has been applied to lumped and semi-distributed models, where additional, often semi-qualitative information is used to constrain the internal states and fluxes of the model, which in turn helps to identify feasible sets of model structures and parameters. Such process constraints have not been widely applied to distributed hydrological models, despite the fact that distributed models make more predictions of distributed states and fluxes that can potentially be constrained. This paper presents a methodology for deriving process and parameter constraints through development of a perceptual model for a given catchment system, which can then be applied in distributed model calibration and sensitivity analysis to constrain feasible parameter and model structural space. We argue that

  3. Constraining the initial entropy of directly detected exoplanets

    NASA Astrophysics Data System (ADS)

    Marleau, G.-D.; Cumming, A.

    2014-01-01

    The post-formation, initial entropy Si of a gas giant planet is a key witness to its mass-assembly history and a crucial quantity for its early evolution. However, formation models are not yet able to predict reliably Si, making unjustified the use solely of traditional, `hot-start' cooling tracks to interpret direct-imaging results and calling for an observational determination of initial entropies to guide formation scenarios. Using a grid of models in mass and entropy, we show how to place joint constraints on the mass and initial entropy of an object from its observed luminosity and age. This generalizes the usual estimate of only a lower bound on the real mass, through hot-start tracks. Moreover, we demonstrate that with mass information, e.g. from dynamical-stability analyses or radial velocity, tighter bounds can be set on the initial entropy. We apply this procedure to 2M1207 b and find that its initial entropy is at least 9.2 kB/baryon, assuming that it does not burn deuterium. For the planets of the HR 8799 system, we infer that they must have formed with Si > 9.2 kB/baryon, independent of uncertainties about the age of the star. Finally, a similar analysis for β Pic b reveals that it must have formed with Si > 10.5 kB/baryon, using the radial-velocity mass upper limit. These initial entropy values are, respectively, ca. 0.7, 0.5 and 1.5 kB/baryon higher than the ones obtained from core-accretion models by Marley et al., thereby quantitatively ruling out the coldest starts for these objects and constraining warm starts, especially for β Pic b.
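    The grid-based constraint described above can be illustrated with a hedged numerical sketch: given cooling tracks L(M, S_i, t) on a (mass, initial entropy) grid, keep the grid points whose predicted luminosity at the system age matches the observed luminosity within its uncertainty. The track function below is a synthetic placeholder, not a real evolution model, and all observed values are hypothetical.

```python
import numpy as np

def synthetic_log_lum(mass_mj, s_init, age_myr):
    """Toy stand-in for a cooling track: brighter for higher mass and
    initial entropy, fading with age (not a physical model)."""
    return -5.5 + 0.08 * mass_mj + 0.3 * (s_init - 9.0) - 0.4 * np.log10(age_myr / 10.0)

masses    = np.linspace(1, 15, 57)          # M_Jup
entropies = np.linspace(8, 13, 51)          # kB / baryon
M, S = np.meshgrid(masses, entropies, indexing="ij")

log_lum_obs, sigma = -4.7, 0.1              # hypothetical observed log(L/Lsun)
age_myr = 30.0

consistent = np.abs(synthetic_log_lum(M, S, age_myr) - log_lum_obs) < sigma
print("minimum initial entropy allowed:", S[consistent].min())
```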

  4. Prediction of noise constrained optimum takeoff procedures

    NASA Technical Reports Server (NTRS)

    Padula, S. L.

    1980-01-01

    An optimization method is used to predict safe, maximum-performance takeoff procedures which satisfy noise constraints at multiple observer locations. The takeoff flight is represented by two-degree-of-freedom dynamical equations with aircraft angle-of-attack and engine power setting as control functions. The engine thrust, mass flow and noise source parameters are assumed to be given functions of the engine power setting and aircraft Mach number. Effective Perceived Noise Levels at the observers are treated as functionals of the control functions. The method is demonstrated by applying it to an Advanced Supersonic Transport aircraft design. The results indicate that automated takeoff procedures (continuously varying controls) can be used to significantly reduce community and certification noise without jeopardizing safety or degrading performance.

  5. ALMA Observations of TNOs

    NASA Astrophysics Data System (ADS)

    Butler, Bryan J.; Brown, Michael E.

    2016-10-01

    Some of the most fundamental properties of TNOs are still quite poorly constrained, including diameter and density. Observations at long thermal wavelengths, in the millimeter and submillimeter, hold promise for determining these quantities, at least for the largest of these bodies (and notably for those with companions). Knowing this information can then yield clues as to the formation mechanism of these bodies, allowing us to distinguish between pairwise accretion and other formation scenarios.We have used the Atacama Large Millimeter/Submillimeter Array (ALMA) to observe Orcus, Quaoar, Salacia, and 2002 UX25 at wavelengths of 1.3 and 0.8 mm, in order to constrain the sizes of these bodies. We have also used ALMA to make astrometric observations of the Eris-Dysnomia system, in an attempt to measure the wobble of Eris and hence accurately determine its density. Dysnomia should also be directly detectable in those data, separate from Eris (ALMA has sufficient resolution in the configuration in which the observations were made). Results from these observations will be presented and discussed.

  6. Constraining East Antarctic mass trends using a Bayesian inference approach

    NASA Astrophysics Data System (ADS)

    Martin-Español, Alba; Bamber, Jonathan L.

    2016-04-01

    East Antarctica is an order of magnitude larger than its western neighbour and the Greenland ice sheet. It has the greatest potential to contribute to sea level rise of any source, including non-glacial contributors. It is, however, the most challenging ice mass to constrain because of a range of factors including the relative paucity of in-situ observations and the poor signal to noise ratio of Earth Observation data such as satellite altimetry and gravimetry. A recent study using satellite radar and laser altimetry (Zwally et al. 2015) concluded that the East Antarctic Ice Sheet (EAIS) had been accumulating mass at a rate of 136±28 Gt/yr for the period 2003-08. Here, we use a Bayesian hierarchical model, which has been tested on, and applied to, the whole of Antarctica, to investigate the impact of different assumptions regarding the origin of elevation changes of the EAIS. We combined GRACE, satellite laser and radar altimeter data and GPS measurements to solve simultaneously for surface processes (primarily surface mass balance, SMB), ice dynamics and glacio-isostatic adjustment over the period 2003-13. The hierarchical model partitions mass trends between SMB and ice dynamics based on physical principles and measures of statistical likelihood. Without imposing the division between these processes, the model apportions about a third of the mass trend to ice dynamics, +18 Gt/yr, and two thirds, +39 Gt/yr, to SMB. The total mass trend over that period for the EAIS was 57±20 Gt/yr. Over the period 2003-08, we obtain an ice dynamic trend of 12 Gt/yr and an SMB trend of 15 Gt/yr, with a total mass trend of 27 Gt/yr. We then imposed the condition that the surface mass balance is tightly constrained by the regional climate model RACMO2.3 and allowed height changes due to ice dynamics to occur in areas of low surface velocities (<10 m/yr), such as those in the interior of East Antarctica (a condition similar to that used in Zwally et al. 2015). The model must find a solution that

  7. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    PubMed Central

    Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J

    2011-01-01

    In this work, tensile tests and one-dimensional constitutive modeling are performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigate the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles are performed during each test. The material is observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5 MPa to 4.2 MPa is observed for the constrained displacement recovery experiments. After performing the experiments, the Chen and Lagoudas model is used to simulate and predict the experimental results. The material properties used in the constitutive model – namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction – are calibrated from a single 10% extension free recovery experiment. The model is then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data. PMID:22003272

  8. Constrained growth flips the direction of optimal phenological responses among annual plants.

    PubMed

    Lindh, Magnus; Johansson, Jacob; Bolmgren, Kjell; Lundström, Niklas L P; Brännström, Åke; Jonzén, Niclas

    2016-03-01

    Phenological changes among plants due to climate change are well documented, but often hard to interpret. In order to assess the adaptive value of observed changes, we study how annual plants with and without growth constraints should optimize their flowering time when productivity and season length change. We consider growth constraints that depend on the plant's vegetative mass: self-shading, costs for nonphotosynthetic structural tissue and sibling competition. We derive the optimal flowering time from a dynamic energy allocation model using optimal control theory. We prove that an immediate switch (bang-bang control) from vegetative to reproductive growth is optimal with constrained growth and constant mortality. Increasing mean productivity, while keeping season length constant and growth unconstrained, delayed the optimal flowering time. When growth was constrained and productivity was relatively high, the optimal flowering time advanced instead. When the growth season was extended equally at both ends, the optimal flowering time was advanced under constrained growth and delayed under unconstrained growth. Our results suggest that growth constraints are key factors to consider when interpreting phenological flowering responses. This can help to explain phenological patterns along productivity gradients, and links empirical observations made on calendar scales with life-history theory.

  9. Constrained growth flips the direction of optimal phenological responses among annual plants.

    PubMed

    Lindh, Magnus; Johansson, Jacob; Bolmgren, Kjell; Lundström, Niklas L P; Brännström, Åke; Jonzén, Niclas

    2016-03-01

    Phenological changes among plants due to climate change are well documented, but often hard to interpret. In order to assess the adaptive value of observed changes, we study how annual plants with and without growth constraints should optimize their flowering time when productivity and season length change. We consider growth constraints that depend on the plant's vegetative mass: self-shading, costs for nonphotosynthetic structural tissue and sibling competition. We derive the optimal flowering time from a dynamic energy allocation model using optimal control theory. We prove that an immediate switch (bang-bang control) from vegetative to reproductive growth is optimal with constrained growth and constant mortality. Increasing mean productivity, while keeping season length constant and growth unconstrained, delayed the optimal flowering time. When growth was constrained and productivity was relatively high, the optimal flowering time advanced instead. When the growth season was extended equally at both ends, the optimal flowering time was advanced under constrained growth and delayed under unconstrained growth. Our results suggest that growth constraints are key factors to consider when interpreting phenological flowering responses. This can help to explain phenological patterns along productivity gradients, and links empirical observations made on calendar scales with life-history theory. PMID:26548947

  10. Energy losses in thermally cycled optical fibers constrained in small bend radii

    SciTech Connect

    Guild, Eric; Morelli, Gregg

    2012-09-23

    High energy laser pulses were fired into a 365μm diameter fiber optic cable constrained in small radii of curvature bends, resulting in a catastrophic failure. Q-switched laser pulses from a flashlamp-pumped Nd:YAG laser were injected into the cables, and the spatial intensity profile at the exit face of the fiber was observed using an infrared camera. The transmission of the radiation through the tight radii resulted in an asymmetric intensity profile, with one half of the fiber core having a higher peak-to-average energy distribution. Prior to testing, the cables were thermally conditioned while constrained in the small radii of curvature bends. Single-bend, double-bend, and U-shaped geometries were tested to characterize various cable routing scenarios.

  11. Gravitational-wave limits from pulsar timing constrain supermassive black hole evolution.

    PubMed

    Shannon, R M; Ravi, V; Coles, W A; Hobbs, G; Keith, M J; Manchester, R N; Wyithe, J S B; Bailes, M; Bhat, N D R; Burke-Spolaor, S; Khoo, J; Levin, Y; Osłowski, S; Sarkissian, J M; van Straten, W; Verbiest, J P W; Wang, J-B

    2013-10-18

    The formation and growth processes of supermassive black holes (SMBHs) are not well constrained. SMBH population models, however, provide specific predictions for the properties of the gravitational-wave background (GWB) from binary SMBHs in merging galaxies throughout the universe. Using observations from the Parkes Pulsar Timing Array, we constrain the fractional GWB energy density (Ω(GW)) with 95% confidence to be Ω(GW)(H0/73 kilometers per second per megaparsec)(2) < 1.3 × 10(-9) (where H0 is the Hubble constant) at a frequency of 2.8 nanohertz, which is approximately a factor of 6 more stringent than previous limits. We compare our limit to models of the SMBH population and find inconsistencies at confidence levels between 46 and 91%. For example, the standard galaxy formation model implemented in the Millennium Simulation Project is inconsistent with our limit with 50% probability.

  12. Gravitational-wave limits from pulsar timing constrain supermassive black hole evolution.

    PubMed

    Shannon, R M; Ravi, V; Coles, W A; Hobbs, G; Keith, M J; Manchester, R N; Wyithe, J S B; Bailes, M; Bhat, N D R; Burke-Spolaor, S; Khoo, J; Levin, Y; Osłowski, S; Sarkissian, J M; van Straten, W; Verbiest, J P W; Wang, J-B

    2013-10-18

    The formation and growth processes of supermassive black holes (SMBHs) are not well constrained. SMBH population models, however, provide specific predictions for the properties of the gravitational-wave background (GWB) from binary SMBHs in merging galaxies throughout the universe. Using observations from the Parkes Pulsar Timing Array, we constrain the fractional GWB energy density (Ω(GW)) with 95% confidence to be Ω(GW)(H0/73 kilometers per second per megaparsec)(2) < 1.3 × 10(-9) (where H0 is the Hubble constant) at a frequency of 2.8 nanohertz, which is approximately a factor of 6 more stringent than previous limits. We compare our limit to models of the SMBH population and find inconsistencies at confidence levels between 46 and 91%. For example, the standard galaxy formation model implemented in the Millennium Simulation Project is inconsistent with our limit with 50% probability. PMID:24136962

  13. Improving Ocean Angular Momentum Estimates Using a Model Constrained by Data

    NASA Technical Reports Server (NTRS)

    Ponte, Rui M.; Stammer, Detlef; Wunsch, Carl

    2001-01-01

    Ocean angular momentum (OAM) calculations using forward model runs without any data constraints have recently revealed the effects of OAM variability on the Earth's rotation. Here we use an ocean model and its adjoint to estimate OAM values by constraining the model to available oceanic data. The optimization procedure yields substantial changes in OAM, related to adjustments in both motion and mass fields, as well as in the wind stress torques acting on the ocean. Constrained and unconstrained OAM values are discussed in the context of closing the planet's angular momentum budget. The estimation procedure yields noticeable improvements in the agreement with the observed Earth rotation parameters, particularly at the seasonal timescale. The comparison with Earth rotation measurements provides an independent consistency check on the estimated ocean state and underlines the importance of ocean state estimation for quantitative studies of the variable large-scale oceanic mass and circulation fields, including studies of OAM.

  14. Constraining and Interpreting Titan's Methane Hydrologic Cycle

    NASA Astrophysics Data System (ADS)

    Lora, Juan; Mitchell, Jonathan; Adamkovics, Mate

    2016-06-01

    Titan's surface supports large reservoirs of stable hydrocarbon liquids, while active weather and a seasonal cycle operate in the troposphere. Titan's hydrologic cycle transports methane between the atmosphere, the surface, and, potentially, the sub-surface on various timescales. Yet the detailed distribution of methane both in the lower atmosphere and in the surface is essentially unknown, though studies of the processes that control it, from seasonal to orbital timescales, are now relatively mature. Many conundrums remain regarding observed hydrologic phenomena. For example, why have widely-expected cloud outbursts at the North pole failed to materialize? Why are there extensive fluvial surface features at the dry low latitudes? Using a combination of general circulation, radiative transfer, and surface modeling, in conjunction with ground-based and Cassini observations, we are working to better characterize and predict tropospheric cloud occurrence, measure and interpret the variability of lower-atmosphere humidity, and clarify the distribution of lakes and surface features and their connection to the atmosphere. Within this context, I will summarize our efforts to improve our understanding of the atmospheric circulation, dynamics, and surface-atmosphere interactions that affect the hydrologic cycle, to gain a better insight into Titan's climate system and its evolution.

  15. Structural Transitions in Topologically Constrained DNA

    NASA Astrophysics Data System (ADS)

    Leger, J.; Romano, G.; Sarkar, A.; Robert, J.; Bourdieu, L.; Chatenay, D.; Marko, J. F.

    2000-03-01

    We propose a theoretical explanation for results of recent single molecule micromanipulation experiments (Leger et al, PRL 83, 1066, 1999) on double-stranded DNA with fixed linking number. The topological constraint leads to novel structural transitions, including a shift of the usual 60 pN B-form to S-form transition force plateau up to a force of 100 pN when linking is fixed at zero. Our model needs five distinct states to explain the four different observed transitions. The various constant-force plateaus observed for different fixed values of linking correspond to a mixture of different pairs of states, weighted to satisfy the topological constraint. Our model allows us to conclude that sufficiently overtwisted DNA (positive linkage number) undergoes a transition from B-form DNA to a mixture of S-form and P-form DNA at a force plateau near 45 pN, and then to homogeneous P-form DNA at a force plateau near 110 pN. A similar two-step transition occurs for undertwisted DNA, and by analysing the twisting necessary to produce pure S-form DNA we conclude that the S-state has helix repeat of 38 bp. Support from the Whitaker Foundation, the NSF, the ACS-PRF and Research Corporation is gratefully acknowledged.

  16. Using seismically constrained magnetotelluric inversion to recover velocity structure in the shallow lithosphere

    NASA Astrophysics Data System (ADS)

    Moorkamp, M.; Fishwick, S.; Jones, A. G.

    2015-12-01

    Typical surface wave tomography can recover the velocity structure of the upper mantle well in the depth range between 70-200 km. For a successful inversion, we have to constrain the crustal structure and assess the impact on the resulting models. In addition, we often observe potentially interesting features in the uppermost lithosphere which are poorly resolved and whose interpretation therefore has to be approached with great care. We are currently developing a seismically constrained magnetotelluric (MT) inversion approach with the aim of better recovering the lithospheric properties (and thus seismic velocities) in these problematic areas. We perform a 3D MT inversion constrained by a fixed seismic velocity model from surface wave tomography. In order to avoid strong bias, we only utilize information on structural boundaries to combine these two methods. Within the region that is well resolved by both methods, we can then extract a velocity-conductivity relationship. By translating the conductivities retrieved from MT into velocities in areas where the velocity model is poorly resolved, we can generate an updated velocity model and test what impact the updated velocities have on the predicted data. We test this new approach using an MT dataset acquired in central Botswana over the Okwa terrane and the adjacent Kaapvaal and Zimbabwe Cratons, together with tomographic models for the region. Here, both datasets have previously been used to constrain lithospheric structure and show some similarities. We carefully assess the validity of our results by comparing with observations and petrophysical predictions for the conductivity-velocity relationship.
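    The translation step described above amounts to fitting a simple resistivity-velocity relationship in the well-resolved depth range and applying it where the tomography is poorly resolved. The hedged sketch below uses a linear fit in log-resistivity with entirely synthetic values; the actual form of the relationship used by the authors is not specified in this record.

```python
import numpy as np

# co-located samples from the well-resolved depth range (synthetic values)
log10_rho_resolved = np.array([2.0, 2.4, 2.8, 3.1, 3.5, 3.9])       # log10 Ohm-m
vs_resolved        = np.array([4.35, 4.42, 4.50, 4.55, 4.62, 4.68])  # km/s

coeff = np.polyfit(log10_rho_resolved, vs_resolved, 1)   # simple linear relationship

# resistivities recovered by MT in the poorly resolved shallow lithosphere
log10_rho_shallow = np.array([1.8, 2.2, 3.0])
vs_predicted = np.polyval(coeff, log10_rho_shallow)
print(vs_predicted)   # updated velocity estimates to test against the data
```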

  17. Constraining Emission Models of Luminous Blazar Sources

    SciTech Connect

    Sikora, Marek; Stawarz, Lukasz; Moderski, Rafal; Nalewajko, Krzysztof; Madejski, Greg; /KIPAC, Menlo Park /SLAC

    2009-10-30

    Many luminous blazars which are associated with quasar-type active galactic nuclei display broad-band spectra characterized by a large luminosity ratio of their high-energy (γ-ray) and low-energy (synchrotron) spectral components. This large ratio, reaching values up to 100, challenges the standard synchrotron self-Compton models by means of substantial departures from the minimum power condition. Luminous blazars also typically have very hard X-ray spectra, and those in turn seem to challenge hadronic scenarios for the high-energy blazar emission. As shown in this paper, no such problems are faced by models which involve Comptonization of radiation provided by a broad-line region or dusty molecular torus. The lack or weakness of bulk Compton and Klein-Nishina features indicated by the presently available data favors production of γ-rays via up-scattering of infrared photons from hot dust. This implies that the blazar emission zone is located at parsec-scale distances from the nucleus, and as such is possibly associated with the extended, quasi-stationary reconfinement shocks formed in relativistic outflows. This scenario predicts characteristic timescales for flux changes in luminous blazars of days to weeks, consistent with the variability patterns observed in such systems at infrared, optical and γ-ray frequencies. We also propose that the parsec-scale blazar activity can occasionally be accompanied by dissipative events taking place at sub-parsec distances and powered by internal shocks and/or reconnection of magnetic fields. These could account for the multiwavelength intra-day flares occasionally observed in powerful blazar sources.

  18. Constraining Cometary Crystal Shapes from IR Spectral Features

    NASA Astrophysics Data System (ADS)

    Wooden, D. H.; Lindsay, S.; Harker, D. E.; Kelley, M. S.; Woodward, C. E.; Murphy, J. R.

    2013-12-01

    A major challenge in deriving the silicate mineralogy of comets is ascertaining how the anisotropic nature of forsterite crystals affects the spectral features' wavelength, relative intensity, and asymmetry. Forsterite features are identified in cometary comae near 10, 11.05-11.2, 16, 19, 23.5, 27.5 and 33 μm [1-10], so accurate models for forsterite's absorption efficiency (Qabs) are a primary requirement to compute IR spectral energy distributions (SEDs, λFλ vs. λ) and constrain the silicate mineralogy of comets. Forsterite is an anisotropic crystal, with three crystallographic axes with distinct indices of refraction for the a-, b-, and c-axis. The shape of a forsterite crystal significantly affects its spectral features [13-16]. We need models that account for crystal shape. The IR absorption efficiencies of forsterite are computed using the discrete dipole approximation (DDA) code DDSCAT [11,12]. Starting from a fiducial crystal shape of a cube, we systematically elongate/reduce one of the crystallographic axes. Also, we elongate/reduce one axis while the lengths of the other two axes are slightly asymmetric (0.8:1.2). The most significant grain shape characteristic that affects the crystalline spectral features is the relative lengths of the crystallographic axes. The second significant grain shape characteristic is breaking the symmetry of all three axes [17]. Synthetic spectral energy distributions using seven crystal shape classes [17] are fit to the observed SED of comet C/1995 O1 (Hale-Bopp). The Hale-Bopp crystalline residual better matches equant, b-platelets, c-platelets, and b-columns spectral shape classes, while a-platelets, a-columns and c-columns worsen the spectral fits. Forsterite condensation and partial evaporation experiments demonstrate that environmental temperature and grain shape are connected [18-20]. Thus, grain shape is a potential probe for protoplanetary disk temperatures where the cometary crystalline forsterite formed. The

  19. Right-Left Approach and Reaching Arm Movements of 4-Month Infants in Free and Constrained Conditions

    ERIC Educational Resources Information Center

    Morange-Majoux, Francoise; Dellatolas, Georges

    2010-01-01

    Recent theories on the evolution of language (e.g. Corballis, 2009) emphazise the interest of early manifestations of manual laterality and manual specialization in human infants. In the present study, left- and right-hand movements towards a midline object were observed in 24 infants aged 4 months in a constrained condition, in which the hands…

  20. Constraining the reionization history with QSO absorption spectra

    NASA Astrophysics Data System (ADS)

    Gallerani, S.; Choudhury, T. Roy; Ferrara, A.

    2006-08-01

    We use a semi-analytical approach to simulate absorption spectra of QSOs at high redshifts with the aim of constraining the cosmic reionization history. We consider two physically motivated and detailed reionization histories: (i) an early reionization model (ERM) in which the intergalactic medium is reionized by Pop III stars at z ~ 14, and (ii) a more standard late reionization model (LRM) in which overlapping, induced by QSOs and normal galaxies, occurs at z ~ 6. From the analysis of current Lyα forest data at z < 6, we conclude that it is impossible to disentangle the two scenarios, which fit equally well the observed Gunn-Peterson optical depth, flux probability distribution function and dark gap width distribution. At z > 6, however, clear differences start to emerge which are best quantified by the dark gap and peak width distributions. We find that 35 (0) per cent of the lines of sight (LOS) within 5.7 < z < 6.3 show dark gaps of widths >50Å in the rest frame of the QSO if reionization is not (is) complete at z >~ 6. Similarly, the ERM predicts peaks of width ~1Å in 40 per cent of the LOS in the redshift range 6.0-6.6; in the same range, the LRM predicts no peaks of width >0.8Å. We conclude that the dark gap and peak width statistics represent superb probes of cosmic reionization if about ten QSOs can be found at z > 6. We finally discuss strengths and limitations of our method.
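    The dark-gap statistic quoted above is straightforward to compute from a simulated or observed transmitted-flux array; the sketch below measures the widths of contiguous low-flux stretches along one line of sight. The flux threshold and the toy spectrum are illustrative assumptions, not the values used in the paper.

```python
# Sketch of a dark-gap statistic: given a transmitted-flux array along one
# line of sight, measure the widths of contiguous regions where the flux
# drops below a detection threshold.  Threshold and data are illustrative.
import numpy as np

def dark_gap_widths(wavelength, flux, threshold=0.1):
    """Return widths (in wavelength units) of contiguous low-flux gaps."""
    dark = flux < threshold
    widths, start = [], None
    for i, d in enumerate(dark):
        if d and start is None:
            start = i
        elif not d and start is not None:
            widths.append(wavelength[i - 1] - wavelength[start])
            start = None
    if start is not None:
        widths.append(wavelength[-1] - wavelength[start])
    return np.array(widths)

rng = np.random.default_rng(3)
wl = np.linspace(8500.0, 8800.0, 3000)   # Angstrom, toy wavelength grid
flux = rng.random(3000)                  # toy transmitted flux
flux[1000:1600] = 0.0                    # inject a ~60 A dark gap
print(dark_gap_widths(wl, flux).max())   # ~60
```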

  1. A moving mesh unstaggered constrained transport scheme for magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Mocz, Philip; Pakmor, Rüdiger; Springel, Volker; Vogelsberger, Mark; Marinacci, Federico; Hernquist, Lars

    2016-11-01

    We present a constrained transport (CT) algorithm for solving the 3D ideal magnetohydrodynamic (MHD) equations on a moving mesh, which maintains the divergence-free condition on the magnetic field to machine-precision. Our CT scheme uses an unstructured representation of the magnetic vector potential, making the numerical method simple and computationally efficient. The scheme is implemented in the moving mesh code AREPO. We demonstrate the performance of the approach with simulations of driven MHD turbulence, a magnetized disc galaxy, and a cosmological volume with primordial magnetic field. We compare the outcomes of these experiments to those obtained with a previously implemented Powell divergence-cleaning scheme. While CT and the Powell technique yield similar results in idealized test problems, some differences are seen in situations more representative of astrophysical flows. In the turbulence simulations, the Powell cleaning scheme artificially grows the mean magnetic field, while CT maintains this conserved quantity of ideal MHD. In the disc simulation, CT gives slower magnetic field growth rate and saturates to equipartition between the turbulent kinetic energy and magnetic energy, whereas Powell cleaning produces a dynamically dominant magnetic field. Such difference has been observed in adaptive-mesh refinement codes with CT and smoothed-particle hydrodynamics codes with divergence-cleaning. In the cosmological simulation, both approaches give similar magnetic amplification, but Powell exhibits more cell-level noise. CT methods in general are more accurate than divergence-cleaning techniques, and, when coupled to a moving mesh can exploit the advantages of automatic spatial/temporal adaptivity and reduced advection errors, allowing for improved astrophysical MHD simulations.
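    The moving-mesh, vector-potential formulation used in AREPO is beyond a short sketch, but the core constrained-transport property is easy to demonstrate: if the face-centred magnetic field is defined as a discrete curl of a vector potential, the discrete divergence vanishes to machine precision no matter how the potential is updated. The toy below shows this on a fixed 2D staggered grid; the grid layout and names are illustrative assumptions, not the paper's unstructured scheme.

```python
# Toy illustration of the constrained-transport idea on a fixed 2D staggered
# grid.  B is defined as the discrete curl of a vector potential A_z living
# on cell corners, so the discrete divergence of B vanishes to machine
# precision by construction, for any A_z.
import numpy as np

nx, ny, dx, dy = 64, 64, 1.0, 1.0
rng = np.random.default_rng(0)
Az = rng.standard_normal((nx + 1, ny + 1))   # arbitrary potential values

# Face-centred fields from the discrete curl: Bx on x-faces, By on y-faces.
Bx = (Az[:, 1:] - Az[:, :-1]) / dy           # Bx =  dAz/dy, shape (nx+1, ny)
By = -(Az[1:, :] - Az[:-1, :]) / dx          # By = -dAz/dx, shape (nx, ny+1)

# Discrete divergence per cell.
divB = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
print("max |div B| =", np.abs(divB).max())   # ~1e-15, i.e. machine precision
```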

  2. A moving mesh unstaggered constrained transport scheme for magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Mocz, Philip; Pakmor, Rüdiger; Springel, Volker; Vogelsberger, Mark; Marinacci, Federico; Hernquist, Lars

    2016-08-01

    We present a constrained transport (CT) algorithm for solving the 3D ideal magnetohydrodynamic (MHD) equations on a moving mesh, which maintains the divergence-free condition on the magnetic field to machine-precision. Our CT scheme uses an unstructured representation of the magnetic vector potential, making the numerical method simple and computationally efficient. The scheme is implemented in the moving mesh code AREPO. We demonstrate the performance of the approach with simulations of driven MHD turbulence, a magnetized disc galaxy, and a cosmological volume with primordial magnetic field. We compare the outcomes of these experiments to those obtained with a previously implemented Powell divergence-cleaning scheme. While CT and the Powell technique yield similar results in idealized test problems, some differences are seen in situations more representative of astrophysical flows. In the turbulence simulations, the Powell cleaning scheme artificially grows the mean magnetic field, while CT maintains this conserved quantity of ideal MHD. In the disc simulation, CT gives slower magnetic field growth rate and saturates to equipartition between the turbulent kinetic energy and magnetic energy, whereas Powell cleaning produces a dynamically dominant magnetic field. Such difference has been observed in adaptive-mesh refinement codes with CT and smoothed-particle hydrodynamics codes with divergence-cleaning. In the cosmological simulation, both approaches give similar magnetic amplification, but Powell exhibits more cell-level noise. CT methods in general are more accurate than divergence-cleaning techniques, and, when coupled to a moving mesh can exploit the advantages of automatic spatial/temporal adaptivity and reduced advection errors, allowing for improved astrophysical MHD simulations.

  3. Constraining The Reionization History With QSO Absorption Spectra

    NASA Astrophysics Data System (ADS)

    Gallerani, S.; Choudhury, T. R.; Ferrara, A.

    2006-08-01

    We use a semi-analytical approach to simulate absorption spectra of QSOs at high redshifts with the aim of constraining the cosmic reionization history. We consider two physically motivated and detailed reionization histories: (i) an Early Reionization Model (ERM) in which the intergalactic medium is reionized by PopIII stars at z~14, and (ii) a more standard Late Reionization Model (LRM) in which overlapping, induced by QSOs and normal galaxies, occurs at z~6. An example of simulated spectra is provided by FIG.1. From the analysis of current Lyα forest data at z<6, we conclude that it is impossible to disentangle the two scenarios, which fit equally well the observed Gunn-Peterson optical depth, flux probability distribution function and dark gap width distribution. At z>6, however, clear differences start to emerge which are best quantified by the dark gap width distribution. We find that 35 (zero) per cent of the lines of sight within 5.7<z<6.3 show dark gaps of widths >50Å in the rest frame of the QSO if re-ionization is not (is) complete at z>~6 (FIG.2). Similarly, the ERM predicts peaks of width ~1Å in 40 per cent of the lines of sight in the redshift range 6.0-6.6; in the same range, LRM predicts no peaks of width >0.8Å (FIG.3). We conclude that the dark gap and peak width statistics represent superb probes of cosmic reionization if about ten QSOs can be found at z>6.

  4. Constraining the Mean Crustal Thickness on Mercury

    NASA Technical Reports Server (NTRS)

    Nimmo, F.

    2001-01-01

    The topography of Mercury is poorly known, with only limited radar and stereo coverage available. However, radar profiles reveal topographic contrasts of several kilometers over wavelengths of approximately 1000 km. The bulk of Mercury's geologic activity took place within the first 1 Ga of the planet's history, and it is therefore likely that these topographic features derive from this period. On Earth, long wavelength topographic features are supported either convectively, or through some combination of isostasy and flexure. Photographic images show no evidence for plume-like features, nor for plate tectonics; I therefore assume that neither convective support nor Pratt isostasy is operating. The composition and structure of the crust of Mercury are almost unknown. The reflectance spectrum of the surface of Mercury is similar to that of the lunar highlands, which are predominantly plagioclase. Anderson et al. used the observed offset between the center of mass and the center of figure together with an assumption of Airy isostasy to infer a crustal thickness of 100-300 km. Based on tidal despinning arguments, the early elastic thickness (T_e) of the (unfractured) lithosphere was approximately equal to or less than 100 km. Thrust faults with lengths of up to 500 km and ages of about 4 Ga B.P. are known to exist on Mercury. Assuming a semicircular slip distribution and a typical thrust fault angle of 10 degrees, the likely vertical depth to the base of these faults is about 45 km. More sophisticated modelling gives similar or slightly smaller answers. The depth to the base of faulting and the elastic layer are usually similar on Earth, and both are thought to be thermally controlled. Assuming that the characteristic temperature is about 750 K, the observed fault depth implies that the heat flux at 4 Ga B.P. is unlikely to be less than 20 mW m^-2 for a linear temperature gradient. For an elastic thickness of 45 km, topography at 1000 km wavelength is likely to be about 60
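    The quoted ~45 km fault depth follows from the stated geometric assumptions alone; one minimal reading of that step (not taken from the paper itself) is

```latex
% Depth to the base of a thrust fault of surface length L, modelled as a
% semicircular slip patch of radius L/2 dipping at angle theta:
d \;\simeq\; \frac{L}{2}\,\sin\theta
  \;\approx\; \frac{500~\mathrm{km}}{2}\,\sin 10^{\circ}
  \;\approx\; 43~\mathrm{km},
```

    consistent with the ~45 km value quoted above.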

  5. Constraining the nature of the asthenosphere

    NASA Astrophysics Data System (ADS)

    Fahy, E. H.; Hall, P.; Faul, U.

    2010-12-01

    Geophysical observations indicate that the oceanic upper mantle has relatively low seismic velocities, high seismic attenuation, and high electrical conductivity at depths of ~80-200km. These depths coincide with the rheologically weak layer known as the asthenosphere . Three hypotheses have been proposed to account for these observations: 1) the presence of volatiles, namely water, in the oceanic upper mantle; 2) the presence of small-degree partial melts in the oceanic upper mantle; and 3) variations in the physical properties of dry, melt-free peridotite with temperature and pressure. Each of these hypotheses suggests a characteristic distribution of volatiles and melt in the upper mantle, resulting in corresponding spatial variations in viscosity, density, seismic structure, and electrical conductivity. These viscosity and density scenarios can also lead to variations in the onset time and growth rate of thermal instabilities at the base of the overriding lithosphere, which can in turn affect heat flow, bathymetry, and seismic structure. We report on the results of a series of computational geodynamic experiments that evaluate the dynamical consequences of each of the three proposed scenarios. Experiments were conducted using the CitcomCU finite element package to model the evolution of the oceanic lithosphere and flow in the underlying mantle. Our model domain consists of 2048x256x64 elements corresponding, to physical dimensions of 12,800x1600x400km. These dimensions allow us to consider oceanic lithosphere to ages of ~150Ma. We adopt the composite rheology law from Billen & Hirth (2007), which combines both diffusion and dislocation creep mechanisms, and consider a range of rheological parameters (e.g., activation energy, activation volume, grain size) as obtained from laboratory deformation experiments [e.g. Hirth & Kohlstedt, 2003]. Melting and volatile content within the model domain are tracked using a Lagrangian particle scheme. Variations in depletion

  6. Can we constrain the interior structure of rocky exoplanets from mass and radius measurements?

    NASA Astrophysics Data System (ADS)

    Dorn, Caroline; Khan, Amir; Heng, Kevin; Connolly, James A. D.; Alibert, Yann; Benz, Willy; Tackley, Paul

    2015-05-01

    Aims: We present an inversion method based on Bayesian analysis to constrain the interior structure of terrestrial exoplanets, in the form of chemical composition of the mantle and core size. Specifically, we identify what parts of the interior structure of terrestrial exoplanets can be determined from observations of mass, radius, and stellar elemental abundances. Methods: We perform a full probabilistic inverse analysis to formally account for observational and model uncertainties and obtain confidence regions of interior structure models. This enables us to characterize how model variability depends on data and associated uncertainties. Results: We test our method on terrestrial solar system planets and find that our model predictions are consistent with independent estimates. Furthermore, we apply our method to synthetic exoplanets up to 10 Earth masses and up to 1.7 Earth radii, and to exoplanet Kepler-36b. Importantly, the inversion strategy proposed here provides a framework for understanding the level of precision required to characterize the interior of exoplanets. Conclusions: Our main conclusions are (1) observations of mass and radius are sufficient to constrain core size; (2) stellar elemental abundances (Fe, Si, Mg) are principal constraints to reduce degeneracy in interior structure models and to constrain mantle composition; (3) the inherent degeneracy in determining interior structure from mass and radius observations does not only depend on measurement accuracies, but also on the actual size and density of the exoplanet. We argue that precise observations of stellar elemental abundances are central in order to place constraints on planetary bulk composition and to reduce model degeneracy. We provide a general methodology of analyzing interior structures of exoplanets that may help to understand how interior models are distributed among star systems. The methodology we propose is sufficiently general to allow its future extension to more complex
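    A toy version of the mass-radius inversion helps make the first conclusion concrete: for a two-layer planet with fixed layer densities, a single mass-radius pair already yields a posterior on the core size. The constant densities, the grid prior and the "observed" planet below are illustrative assumptions; the paper itself uses full equations of state and stellar-abundance priors.

```python
# Toy Bayesian inversion for the core size of a two-layer rocky planet from a
# mass and radius measurement.  Constant layer densities and all numbers are
# illustrative assumptions only.
import numpy as np

M_EARTH, R_EARTH = 5.972e24, 6.371e6          # kg, m
RHO_CORE, RHO_MANTLE = 10_000.0, 4_500.0      # kg/m^3 (toy values)

def mass_from_model(radius, core_frac):
    """Planet mass for a given total radius and core radius fraction."""
    r_core = core_frac * radius
    return (4 / 3) * np.pi * (RHO_CORE * r_core**3
                              + RHO_MANTLE * (radius**3 - r_core**3))

# hypothetical "observed" planet: 1.5 R_Earth, 5 +/- 0.5 M_Earth
radius_obs = 1.5 * R_EARTH
mass_obs, mass_err = 5.0 * M_EARTH, 0.5 * M_EARTH

core_frac = np.linspace(0.0, 1.0, 2001)       # uniform prior on core radius fraction
model_mass = mass_from_model(radius_obs, core_frac)
log_like = -0.5 * ((model_mass - mass_obs) / mass_err) ** 2
posterior = np.exp(log_like - log_like.max())
posterior /= np.trapz(posterior, core_frac)

mean = np.trapz(core_frac * posterior, core_frac)
print(f"posterior mean core radius fraction: {mean:.2f}")
```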

  7. Constraining the Statistics of Population III Binaries

    NASA Astrophysics Data System (ADS)

    Stacy, Athena

    2013-01-01

    We perform a cosmological simulation in order to model the growth and evolution of Population III (Pop III) stellar systems in a range of host minihalo environments. A Pop III multiple system forms in each of the ten minihaloes, and the overall mass function is top-heavy compared to the currently observed initial mass function in the Milky Way. Using a sink particle to represent each growing protostar, we examine the binary characteristics of the multiple systems, resolving orbits on scales as small as 20 AU. We find a binary fraction of ~36%, with semi-major axes as large as 3000 AU. The distribution of orbital periods is slightly peaked at ~900 yr, while the distribution of mass ratios is relatively flat. Of all sink particles formed within the ten minihaloes, ~50% are lost to mergers with larger sinks, and ~50% of the remaining sinks are ejected from their star-forming disks. The large binary fraction may have important implications for Pop III evolution and nucleosynthesis, as well as the final fate of the first stars.

  8. Constraining the statistics of Population III binaries

    NASA Astrophysics Data System (ADS)

    Stacy, Athena; Bromm, Volker

    2013-08-01

    We perform a cosmological simulation in order to model the growth and evolution of Population III (Pop III) stellar systems in a range of host minihalo environments. A Pop III multiple system forms in each of the 10 minihaloes, and the overall mass function is top-heavy compared to the currently observed initial mass function in the Milky Way. Using a sink particle to represent each growing protostar, we examine the binary characteristics of the multiple systems, resolving orbits on scales as small as 20 au. We find a binary fraction of ˜35 per cent, with semi-major axes as large as 3000 au. The distribution of orbital periods is slightly peaked at ≲ 900 yr, while the distribution of mass ratios is relatively flat. Of all sink particles formed within the 10 minihaloes, ˜50 per cent are lost to mergers with larger sinks, and ˜50 per cent of the remaining sinks are ejected from their star-forming discs. The large binary fraction may have important implications for Pop III evolution and nucleosynthesis, as well as the final fate of the first stars.

  9. Quantum field theory constrains traversable wormhole geometries

    SciTech Connect

    Ford, L.H.; Roman, T.A.

    1996-05-01

    Recently a bound on negative energy densities in four-dimensional Minkowski spacetime was derived for a minimally coupled, quantized, massless, scalar field in an arbitrary quantum state. The bound has the form of an uncertainty-principle-type constraint on the magnitude and duration of the negative energy density seen by a timelike geodesic observer. When spacetime is curved and/or has boundaries, we argue that the bound should hold in regions small compared to the minimum local characteristic radius of curvature or the distance to any boundaries, since spacetime can be considered approximately Minkowski on these scales. We apply the bound to the stress-energy of static traversable wormhole spacetimes. Our analysis implies that either the wormhole must be only a little larger than Planck size or that there is a large discrepancy in the length scales which characterize the wormhole. In the latter case, the negative energy must typically be concentrated in a thin band many orders of magnitude smaller than the throat size. These results would seem to make the existence of macroscopic traversable wormholes very improbable. © 1996 The American Physical Society.

  10. Constraining the Statistics of Population III Binaries

    NASA Technical Reports Server (NTRS)

    Stacy, Athena; Bromm, Volker

    2012-01-01

    We perform a cosmological simulation in order to model the growth and evolution of Population III (Pop III) stellar systems in a range of host minihalo environments. A Pop III multiple system forms in each of the ten minihaloes, and the overall mass function is top-heavy compared to the currently observed initial mass function in the Milky Way. Using a sink particle to represent each growing protostar, we examine the binary characteristics of the multiple systems, resolving orbits on scales as small as 20 AU. We find a binary fraction of approx. 36%, with semi-major axes as large as 3000 AU. The distribution of orbital periods is slightly peaked at approx. < 900 yr, while the distribution of mass ratios is relatively flat. Of all sink particles formed within the ten minihaloes, approx. 50% are lost to mergers with larger sinks, and 50% of the remaining sinks are ejected from their star-forming disks. The large binary fraction may have important implications for Pop III evolution and nucleosynthesis, as well as the final fate of the first stars.

  11. Residual flexibility test method for verification of constrained structural models

    NASA Technical Reports Server (NTRS)

    Admire, John R.; Tinker, Michael L.; Ivey, Edward W.

    1994-01-01

    A method is described for deriving constrained modes and frequencies from a reduced model based on a subset of the free-free modes plus the residual effects of neglected modes. The method involves a simple modification of the MacNeal and Rubin component mode representation to allow development of a verified constrained (fixed-base) structural model. Results for two spaceflight structures having translational boundary degrees of freedom show quick convergence of constrained modes using a measurable number of free-free modes plus the boundary partition of the residual flexibility matrix. This paper presents the free-free residual flexibility approach as an alternative test/analysis method when fixed-base testing proves impractical.

  12. Linearly-Constrained Adaptive Signal Processing Methods

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.

    1988-01-01

    In adaptive least-squares estimation problems, a desired signal d(n) is estimated using a linear combination of L observation samples x1(n), x2(n), ..., xL(n), denoted by the vector X(n). The estimate is formed as the inner product of this vector with a corresponding L-dimensional weight vector W. One particular weight vector of interest is Wopt which minimizes the mean-square difference between d(n) and the estimate. In this context, the term `mean-square difference' is a quadratic measure such as statistical expectation or time average. The specific value of W which achieves the minimum is given by the product of the inverse data covariance matrix and the cross-correlation between the data vector and the desired signal. The latter is often referred to as the P-vector. For those cases in which time samples of both the desired and data vector signals are available, a variety of adaptive methods have been proposed which will guarantee that an iterative weight vector Wa(n) converges (in some sense) to the optimal solution. Two which have been extensively studied are the recursive least-squares (RLS) method and the LMS gradient approximation approach. There are several problems of interest in the communication and radar environment in which the optimal least-squares weight set is of interest and in which time samples of the desired signal are not available. Examples can be found in array processing in which only the direction of arrival of the desired signal is known and in single channel filtering where the spectrum of the desired response is known a priori. One approach to these problems which has been suggested is the P-vector algorithm which is an LMS-like approximate gradient method. Although it is easy to derive the mean and variance of the weights which result with this algorithm, there has never been an identification of the corresponding underlying error surface which the procedure searches. The purpose of this paper is to suggest an alternative
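    A minimal sketch of the P-vector idea described above is given below: an LMS-like stochastic-gradient update in which the cross-correlation vector P is assumed known a priori and no samples of d(n) are ever used. The toy data and step size are assumptions, not the paper's experiments.

```python
# Sketch of a "P-vector" LMS-like update: the desired signal d(n) is never
# observed, only its cross-correlation vector P with the data is assumed
# known a priori.  Toy data only.
import numpy as np

rng = np.random.default_rng(1)
L, n_steps, mu = 4, 20000, 1e-3

# Synthetic wide-sense-stationary data and a "true" optimal weight vector.
w_true = np.array([1.0, -0.5, 0.25, 0.1])
R = np.eye(L)                         # white data for simplicity
P = R @ w_true                        # cross-correlation assumed known

w = np.zeros(L)
for _ in range(n_steps):
    x = rng.standard_normal(L)        # data snapshot X(n)
    # stochastic gradient of the mean-square error using P instead of d(n)*x:
    w += mu * (P - np.outer(x, x) @ w)

print("adapted :", np.round(w, 3))
print("optimal :", np.linalg.solve(R, P))   # Wopt = R^{-1} P
```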

  13. Fast Energy Minimization of large Polymers Using Constrained Optimization

    SciTech Connect

    Todd D. Plantenga

    1998-10-01

    A new computational technique is described that uses distance constraints to calculate empirical potential energy minima of partially rigid molecules. A constrained minimization algorithm that works entirely in Cartesian coordinates is used. The algorithm does not obey the constraints until convergence, a feature that reduces ill-conditioning and allows constrained local minima to be computed more quickly than unconstrained minima. Computational speedup exceeds the 3-fold factor commonly obtained in constrained molecular dynamics simulations, where the constraints must be strictly obeyed at all times.

  14. Pseudo-updated constrained solution algorithm for nonlinear heat conduction

    NASA Technical Reports Server (NTRS)

    Tovichakchaikul, S.; Padovan, J.

    1983-01-01

    This paper develops efficiency and stability improvements in the incremental successive substitution (ISS) procedure commonly used to generate the solution to nonlinear heat conduction problems. This is achieved by employing the pseudo-update scheme of Broyden, Fletcher, Goldfarb and Shanno in conjunction with the constrained version of the ISS. The resulting algorithm retains the formulational simplicity associated with ISS schemes while incorporating the enhanced convergence properties of slope driven procedures as well as the stability of constrained approaches. To illustrate the enhanced operating characteristics of the new scheme, the results of several benchmark comparisons are presented.

  15. A lexicographic approach to constrained MDP admission control

    NASA Astrophysics Data System (ADS)

    Panfili, Martina; Pietrabissa, Antonio; Oddi, Guido; Suraci, Vincenzo

    2016-02-01

    This paper proposes a reinforcement learning-based lexicographic approach to the call admission control problem in communication networks. The admission control problem is modelled as a multi-constrained Markov decision process. To overcome the problems of the standard approaches to the solution of constrained Markov decision processes, based on the linear programming formulation or on a Lagrangian approach, a multi-constraint lexicographic approach is defined, and an online implementation based on reinforcement learning techniques is proposed. Simulations validate the proposed approach.

  16. A filter-based evolutionary algorithm for constrained optimization.

    SciTech Connect

    Clevenger, Lauren M.; Hart, William Eugene; Ferguson, Lauren Ann

    2004-02-01

    We introduce a filter-based evolutionary algorithm (FEA) for constrained optimization. The filter used by an FEA explicitly imposes the concept of dominance on a partially ordered solution set. We show that the algorithm is provably robust for both linear and nonlinear problems and constraints. FEAs use a finite pattern of mutation offsets, and our analysis is closely related to recent convergence results for pattern search methods. We discuss how properties of this pattern impact the ability of an FEA to converge to a constrained local optimum.
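    The filter's dominance test is simple to state in code: a trial point, summarized by its objective value f and constraint violation h, is accepted only if no stored point is at least as good in both. The sketch below illustrates that acceptance rule only; it is not the authors' FEA implementation.

```python
# Minimal sketch of a filter acceptance test for constrained optimization:
# a trial point is kept only if no stored point dominates it in both
# objective value f and constraint violation h.
def dominates(a, b):
    """(f, h) pair a dominates b if it is no worse in both and not equal."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def filter_accept(filter_set, trial):
    if any(dominates(entry, trial) for entry in filter_set):
        return False                      # trial is dominated: reject
    # accept, and drop any stored entries the trial now dominates
    filter_set[:] = [e for e in filter_set if not dominates(trial, e)]
    filter_set.append(trial)
    return True

flt = []
for point in [(3.0, 0.5), (2.5, 0.7), (2.0, 0.0), (2.2, 0.1)]:
    print(point, "accepted" if filter_accept(flt, point) else "rejected")
```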

  17. Robust head pose estimation using locality-constrained sparse coding

    NASA Astrophysics Data System (ADS)

    Kim, Hyunduk; Lee, Sang-Heon; Sohn, Myoung-Kyu

    2015-12-01

    The sparse coding (SC) method has been shown to deliver successful results in a variety of computer vision applications. However, it does not consider the underlying structure of the data in the feature space. On the other hand, locality-constrained linear coding (LLC) utilizes a locality constraint to project each input datum into its local coordinate system. Based on the recent success of LLC, we propose a novel locality-constrained sparse coding (LSC) method to overcome the limitation of SC. In experiments, the proposed algorithms were applied to head pose estimation applications. Experimental results demonstrated that the LSC method is better than state-of-the-art methods.
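    For reference, the sketch below follows the commonly used approximated LLC solution (restrict the code to the k nearest codebook atoms, solve a small regularized least-squares system, renormalize), which is the locality-constrained ingredient the proposed LSC builds on. The regularization constant and toy data are assumptions; this is not the authors' LSC code.

```python
# Minimal sketch of locality-constrained coding for one feature vector,
# following the commonly used approximated LLC solution.
import numpy as np

def llc_code(x, codebook, k=5, reg=1e-4):
    """x: (d,) feature; codebook: (M, d) dictionary; returns sparse (M,) code."""
    dists = np.linalg.norm(codebook - x, axis=1)
    idx = np.argsort(dists)[:k]                  # locality: k nearest atoms
    z = codebook[idx] - x                        # shift atoms to the origin of x
    C = z @ z.T                                  # local covariance (k x k)
    C += reg * np.trace(C) * np.eye(k)           # conditioning
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                                 # sum-to-one constraint
    code = np.zeros(len(codebook))
    code[idx] = w
    return code

rng = np.random.default_rng(0)
codebook = rng.standard_normal((64, 16))         # toy dictionary
x = rng.standard_normal(16)                      # toy feature vector
print(np.nonzero(llc_code(x, codebook))[0])      # only k atoms are active
```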

  18. Augmented Lagrangian Method for Constrained Nuclear Density Functional Theory

    SciTech Connect

    Staszczak, A.; Stoitsov, Mario; Baran, Andrzej K; Nazarewicz, Witold

    2010-01-01

    The augmented Lagrangian method (ALM), widely used in quantum chemistry constrained optimization problems, is applied in the context of the nuclear Density Functional Theory (DFT) in the self-consistent constrained Skyrme Hartree-Fock-Bogoliubov (CHFB) variant. The ALM allows precise calculations of multidimensional energy surfaces in the space of collective coordinates that are needed to, e.g., determine fission pathways and saddle points; it improves accuracy of computed derivatives with respect to collective variables that are used to determine collective inertia and is well adapted to supercomputer applications.
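    The generic ALM iteration (an inner unconstrained minimization of the augmented functional, followed by a multiplier update) can be sketched on a toy problem as below; this illustrates the method itself, not the CHFB solver. The objective, constraint and penalty value are illustrative assumptions.

```python
# Generic augmented-Lagrangian sketch for minimising f(x) subject to an
# equality constraint c(x) = 0, of the kind used to fix a collective
# coordinate.  Toy quadratic problem only.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2   # objective
c = lambda x: x[0] + x[1] - 1.0                        # constraint c(x) = 0

lam, mu = 0.0, 10.0                                    # multiplier and penalty
x = np.zeros(2)
for _ in range(20):
    aug = lambda y: f(y) + lam * c(y) + 0.5 * mu * c(y) ** 2
    x = minimize(aug, x).x                             # inner unconstrained solve
    lam += mu * c(x)                                   # multiplier update
print("solution:", np.round(x, 4), " c(x) =", f"{c(x):.1e}")
```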

  19. Constrained modes in control theory - Transmission zeros of uniform beams

    NASA Technical Reports Server (NTRS)

    Williams, T.

    1992-01-01

    Mathematical arguments are presented demonstrating that the well-established control system concept of the transmission zero is very closely related to the structural concept of the constrained mode. It is shown that the transmission zeros of a flexible structure form a set of constrained natural frequencies for it, with the constraints depending explicitly on the locations and the types of sensors and actuators used for control. Based on this formulation, an algorithm is derived and used to produce dimensionless plots of the zeros of a uniform beam with a compatible sensor/actuator pair.

  20. Constrained control landscape for population transfer in a two-level system.

    PubMed

    Moore Tibbetts, Katharine; Rabitz, Herschel

    2015-02-01

    The growing success of controlling the dynamics of quantum systems has been ascribed to the favorable topology of the quantum control landscape, which represents the physical observable as a function of the control field. The landscape contains no suboptimal trapping extrema when reasonable physical assumptions are satisfied, including that no significant constraints are placed on the control resources. A topic of prime interest is understanding the effects of control field constraints on the apparent landscape topology, as constraints on control resources are inevitable in the laboratory. This work particularly explores the effects of constraining the control field fluence on the topology and features of the control landscape for pure-state population transfer in a two-level system through numerical simulations, where unit probability population transfer in the system is only accessible in the strong coupling regime within the model explored here. With the fluence and three phase variables used for optimization, no local optima are found on the landscape, although saddle features are widespread at low fluence values. Global landscape optima are found to exist at two disconnected regions of the fluence that possess distinct topologies and structures. Broad scale connected optimal level sets are found when the fluence is sufficiently large, while the connectivity is reduced as the fluence becomes more constrained. These results suggest that seeking optimal fields with constrained fluence or other resources may encounter complex landscape features, calling for sophisticated algorithms that can efficiently find optimal controls. PMID:25515970

  1. Outcomes of Varus Valgus Constrained Versus Rotating-Hinge Implants in Total Knee Arthroplasty.

    PubMed

    Malcolm, Tennison L; Bederman, S Samuel; Schwarzkopf, Ran

    2016-01-01

    The stability of a total knee arthroplasty is determined by the ability of the prosthesis components in concert with supportive bone and soft tissue structures to sufficiently resist deforming forces transmitted across the knee joint. Constrained prostheses are used in unstable knees due to their ability to resist varus and valgus transformative forces across the knee. Constraint requires inherent rigidity, which can facilitate early implant failure. The purpose of this study was to describe the comparative indications for surgery and postoperative outcomes of varus valgus constrained knee (VVK) and rotating-hinge knee (RHK) total knee arthroplasty prostheses. Seven retrospective observational studies describing 544 VVK and 254 RHK patients with an average follow-up of 66 months (range, 7-197 months) were evaluated. Patients in both groups experienced similar failure rates (P=.74), ranges of motion (P=.81), and Knee Society function scores (P=.29). Average Knee Society knee scores were 4.2 points higher in VVK patients compared with RHK patients, indicating minimal mid-term clinical differences may exist (P<.0001). Absent collateral ligament support is an almost universal indication for RHK implantation vs VVK. Constrained device implantation is routinely guided by inherent stability of the knee, and, when performed, similar postoperative outcomes can be achieved with VVK and RHK prostheses.

  2. Constraining the Cosmic-ray Acceleration Efficiency in the Supernova Remnant IC 443

    NASA Astrophysics Data System (ADS)

    Ritchey, Adam Michael; Federman, Steven R.; Jenkins, Edward B.; Caprioli, Damiano; Wallerstein, George

    2015-08-01

    Supernova remnants are widely believed to be the sources responsible for the acceleration of Galactic cosmic rays. Over the last several years, observations made with the Fermi Gamma-ray Space Telescope have confirmed that cosmic-ray nuclei are indeed accelerated in some supernova remnants, including IC 443, which is a prototype for supernova remnants interacting with molecular clouds. However, the details concerning the particle acceleration processes in middle aged remnants are not fully understood, in part because the basic model parameters are not always well constrained. Here, we present preliminary results of a Hubble Space Telescope investigation into the physical conditions in diffuse molecular gas interacting with IC 443. We examine high-resolution FUV spectra of two stars, one that probes the interior region of the supernova remnant, and the other located just outside the visible edge of IC 443. With this arrangement, we are able to evaluate the densities and temperatures in neutral gas clumps positioned both ahead of and behind the supernova shock front. From these measurements, we obtain estimates for the post-shock temperature and the shock velocity in the interclump medium. We discuss the efficacy of these results for constraining both the age of IC 443, and also the cosmic-ray acceleration efficiency. Finally, we report the first detection of boron in a supernova remnant, and discuss the usefulness of the B/O ratio in constraining the cosmic-ray content of the gas interacting with IC 443.

  3. Constraining primordial magnetic fields with future cosmic shear surveys

    SciTech Connect

    Fedeli, C.; Moscardini, L. E-mail: lauro.moscardini@unibo.it

    2012-11-01

    The origin of astrophysical magnetic fields observed in galaxies and clusters of galaxies is still unclear. One possibility is that primordial magnetic fields generated in the early Universe provide seeds that grow through compression and turbulence during structure formation. A cosmological magnetic field present prior to recombination would produce substantial matter clustering at intermediate/small scales, on top of the standard inflationary power spectrum. In this work we study the effect of this alteration on one particular cosmological observable, cosmic shear. We adopt the semi-analytic halo model in order to describe the non-linear clustering of matter, and feed it with the altered mass variance induced by primordial magnetic fields. We find that the convergence power spectrum is, as expected, substantially enhanced at intermediate/small angular scales, with the exact amplitude of the enhancement depending on the magnitude and power-law index of the magnetic field power spectrum. Specifically, for a fixed amplitude, the effect of magnetic fields is larger for larger spectral indices. We use the predicted statistical errors for a future wide-field cosmic shear survey, on the model of the ESA Cosmic Vision mission Euclid, in order to forecast constraints on the amplitude of primordial magnetic fields as a function of the spectral index. We find that the amplitude will be constrained at the level of ∼ 0.1 nG for n_B ∼ −3, and at the level of ∼ 10^−7 nG for n_B ∼ 3. The latter is at the same level of lower bounds coming from the secondary emission of gamma-ray sources, implying that for high spectral indices Euclid will certainly be able to detect primordial magnetic fields, if they exist. The present study shows how large-scale structure surveys can be used for both understanding the origins of astrophysical magnetic fields and shedding new light on the physics of the pre-recombination Universe.

  4. The Emergence of Solar Supergranulation as a Natural Consequence of Rotationally Constrained Interior Convection

    NASA Astrophysics Data System (ADS)

    Featherstone, Nicholas A.; Hindman, Bradley W.

    2016-10-01

    We investigate how rotationally constrained, deep convection might give rise to supergranulation, the largest distinct spatial scale of convection observed in the solar photosphere. While supergranulation is only weakly influenced by rotation, larger spatial scales of convection sample the deep convection zone and are presumably rotationally influenced. We present numerical results from a series of nonlinear, 3D simulations of rotating convection and examine the velocity power distribution realized under a range of Rossby numbers. When rotation is present, the convective power distribution possesses a pronounced peak, at characteristic wavenumber ℓ_peak, whose value increases as the Rossby number is decreased. This distribution of power contrasts with that realized in non-rotating convection, where power increases monotonically from high to low wavenumbers. We find that spatial scales smaller than ℓ_peak behave in analogy to non-rotating convection. Spatial scales larger than ℓ_peak are rotationally constrained and possess substantially reduced power relative to the non-rotating system. We argue that the supergranular scale emerges due to a suppression of power on spatial scales larger than ℓ ≈ 100 owing to the presence of deep, rotationally constrained convection. Supergranulation thus represents the largest non-rotationally constrained mode of solar convection. We conclude that the characteristic spatial scale of supergranulation bounds that of the deep convective motions from above, making supergranulation an indirect measure of the deep-seated dynamics at work in the solar dynamo. Using the spatial scale of supergranulation in conjunction with our numerical results, we estimate an upper bound of 10 m s^-1 for the Sun’s bulk rms convective velocity.

  5. Multi-asset Black-Scholes model as a variable second class constrained dynamical system

    NASA Astrophysics Data System (ADS)

    Bustamante, M.; Contreras, M.

    2016-09-01

    In this paper, we study the multi-asset Black-Scholes model from a structural point of view. For this, we interpret the multi-asset Black-Scholes equation as a multidimensional Schrödinger one-particle equation. The analysis of the classical Hamiltonian and Lagrangian mechanics associated with this quantum model implies that, in this system, the canonical momenta cannot always be written in terms of the velocities. This feature is a typical characteristic of the constrained systems that appear in high-energy physics. To study this model in the proper form, one must apply Dirac's method for constrained systems. The results of Dirac's analysis indicate that in the correlation parameter space of the multi-asset model, there exists a surface (called the Kummer surface ΣK, where the determinant of the correlation matrix is null) on which the constraint number can vary. We study in detail the cases with N = 2 and N = 3 assets. For these cases, we calculate the propagator of the multi-asset Black-Scholes equation and show that inside the Kummer surface ΣK the propagator is well defined, but outside ΣK the propagator diverges and the option price is not well defined. On ΣK the propagator is obtained as a constrained path integral, and its form depends on which region of the Kummer surface the correlation parameters lie in. Thus, the multi-asset Black-Scholes model is an example of a variable constrained dynamical system, and this is a new and beautiful property that had not been previously observed.
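    For orientation, the starting point of such an analysis is the standard N-asset Black-Scholes equation (quoted here in its usual textbook form, not from the paper), whose correlation matrix ρ_ij defines the Kummer surface mentioned above:

```latex
% Standard N-asset Black-Scholes equation; \rho_{ij} is the correlation
% matrix whose vanishing determinant defines the Kummer surface \Sigma_K.
\frac{\partial V}{\partial t}
  + \frac{1}{2}\sum_{i,j=1}^{N}\rho_{ij}\,\sigma_i\sigma_j S_i S_j
      \frac{\partial^{2} V}{\partial S_i\,\partial S_j}
  + r\sum_{i=1}^{N} S_i \frac{\partial V}{\partial S_i} - rV = 0,
\qquad
\Sigma_K \colon\ \det\left(\rho_{ij}\right) = 0 .
```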

  6. Inflation model of Uzon caldera, Kamchatka, constrained by satellite radar interferometry observations

    USGS Publications Warehouse

    Lundgren, P.; Lu, Zhiming

    2006-01-01

    We analyzed RADARSAT-1 synthetic aperture radar (SAR) data to compute interferometric SAR (InSAR) images of surface deformation at Uzon caldera, Kamchatka, Russia. From 2000 to 2003 approximately 0.15 m of inflation occurred at Uzon caldera, extending beneath adjacent Kikhpinych volcano. This contrasts with InSAR data showing no significant deformation during either the 1999 to 2000, or 2003 to 2004, time periods. We performed three sets of numerical source inversions to fit InSAR data from three different swaths spanning 2000 to 2003. The preferred source model is an irregularly shaped, pressurized crack, dipping ~20° to the NW, 4 km below the surface. The geometry of this solution is similar to the upper boundary of the geologically inferred magma chamber. Extension of the surface deformation and source to adjacent Kikhpinych volcano, without an eruption, suggests that the deformation is more likely of hydrothermal origin, possibly driven by recharge of the magma chamber. Copyright 2006 by the American Geophysical Union.

  7. HERSCHEL OBSERVATIONS OF THE T CHA TRANSITION DISK: CONSTRAINING THE OUTER DISK PROPERTIES

    SciTech Connect

    Cieza, Lucas A.; Olofsson, Johan; Henning, Thomas; Harvey, Paul M.; Evans II, Neal J.; Pinte, Christophe; Augereau, Jean-Charles; Menard, Francois; Najita, Joan

    2011-11-10

    T Cha is a nearby (d ≈ 100 pc) transition disk known to have an optically thin gap separating optically thick inner and outer disk components. Huelamo et al. recently reported the presence of a low-mass object candidate within the gap of the T Cha disk, giving credence to the suspected planetary origin of this gap. Here we present the Herschel photometry (70, 160, 250, 350, and 500 μm) of T Cha from the 'Dust, Ice, and Gas in Time' Key Program, which bridges the wavelength range between existing Spitzer and millimeter data and provides important constraints on the outer disk properties of this extraordinary system. We model the entire optical to millimeter wavelength spectral energy distribution (SED) of T Cha (19 data points between 0.36 and 3300 μm without any major gaps in wavelength coverage). T Cha shows a steep spectral slope in the far-IR, which we find clearly favors models with outer disks containing little or no dust beyond ≈40 AU. The full SED can be modeled equally well with either an outer disk that is very compact (only a few AU wide) or a much larger one that has a very steep surface density profile. That is, T Cha's outer disk seems to be either very small or very tenuous. Both scenarios suggest a highly unusual outer disk and have important but different implications for the nature of T Cha. Spatially resolved images are needed to distinguish between the two scenarios.

  8. Constraining Large-Scale Solar Magnetic Field Models with Optical Coronal Observations

    NASA Astrophysics Data System (ADS)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.

    2015-12-01

    Scientific success of the Solar Probe Plus (SPP) and Solar Orbiter (SO) missions will depend to a large extent on the accuracy of the available coronal magnetic field models describing the connectivity of plasma disturbances in the inner heliosphere with their source regions. We argue that ground-based and satellite coronagraph images can provide robust geometric constraints for the next generation of improved coronal magnetic field extrapolation models. In contrast to the previously proposed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions located at significant radial distances from the solar surface. Details on the new feature detection algorithms will be presented. By applying the developed image processing methodology to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code presented in a companion talk by S. Jones et al. Tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona. Subsequent phases of the project and the related data products for SPP and SO missions as well as the supporting global heliospheric simulations will be discussed.

  9. Anelastic Modeling of Upper Mantle Seismic Observations: Explaining Rocky Mountain Isostasy and Constraining Rheological Uncertainty

    NASA Astrophysics Data System (ADS)

    Boyd, O. S.; Sheehan, A. F.

    2001-12-01

    Utilizing tomographic models of attenuation and velocity derived using the Rocky Mountain Front (RMF) broadband seismic dataset acquired in 1992, this study models the relationships of attenuation to velocity to identify regions of elevated temperature and anomalous rheology. Studies of the area include P, S and surface wave velocity tomography and all indicate slow upper mantle velocities below the Rocky Mountain region. Recent attenuation measurements exhibit a similar trend. The coupling of attenuation and velocity measurements provides an indication of the change in temperature (Karato, 1993). A more vigorous examination of the relationships between attenuation and velocity can provide insight into the rheological parameters of the mantle. The theoretical basis of the modeling is the complex modulus of the standard anelastic solid under the influence of a thermally activated process (Nowick and Berry, 1972). An activation energy of 500 kJ/mol, the diffusion of oxygen through olivine, is assumed. An integral part of the modeling is the assumption that the thermally activated process has a normal distribution of activation energies. A greater variance of this distribution is an indication of the material's inability to equilibrate differential stress. Relationships between attenuation and velocity suggest temperatures elevated by up to 300 K at depths of 100 to 150 km beneath the Colorado Rocky Mountains. This temperature difference is enough to cause density changes partly responsible for isostasy of the mountains. Additional findings include a significant reduction in variance of the activation energies in the upper mantle coincident with the region of elevated temperature. This is due to a softening of the mantle material and may imply the existence of partial melt. http://ucsu.colorado.edu/~oliverb/AnMod.html

  10. Constraining sources of ultra high energy cosmic rays using high energy observations with the Fermi satellite

    SciTech Connect

    Pe'er, Asaf; Loeb, Abraham E-mail: aloeb@cfa.harvard.edu

    2012-03-01

    We analyze the conditions that enable acceleration of particles to ultra-high energies, ∼10^20 eV (UHECRs). We show that broad band photon data recently provided by WMAP, ISOCAM, Swift and Fermi satellites, yield constraints on the ability of active galactic nuclei (AGN) to produce UHECRs. The high energy (MeV–GeV) photons are produced by Compton scattering of the emitted low energy photons and the cosmic microwave background or extra-galactic background light. The ratio of the luminosities at high and low photon energies can therefore be used as a probe of the physical conditions in the acceleration site. We find that existing data excludes core regions of nearby radio-loud AGN as possible acceleration sites of UHECR protons. However, we show that giant radio lobes are not excluded. We apply our method to Cen A, and show that acceleration of protons to ∼10^20 eV can only occur at distances ≳100 kpc from the core.

  11. Combining Bayesian methods and aircraft observations to constrain the HO. + NO2 reaction rate

    EPA Science Inventory

    Tropospheric ozone is the third strongest greenhouse gas, and has the highest uncertainty in radiative forcing of the top five greenhouse gases. Throughout the troposphere, ozone is produced by radical oxidation of nitrogen oxides (NOx = NO + NO2). In the uppe...

  12. How well can future CMB missions constrain cosmic inflation?

    SciTech Connect

    Martin, Jérôme; Vennin, Vincent; Ringeval, Christophe E-mail: christophe.ringeval@uclouvain.be

    2014-10-01

    We study how the next generation of Cosmic Microwave Background (CMB) measurement missions (such as EPIC, LiteBIRD, PRISM and COrE) will be able to constrain the inflationary landscape in the hardest to disambiguate situation in which inflation is simply described by single-field slow-roll scenarios. Considering the proposed PRISM and LiteBIRD satellite designs, we simulate mock data corresponding to five different fiducial models having values of the tensor-to-scalar ratio ranging from 10^-1 down to 10^-7. We then compute the Bayesian evidences and complexities of all Encyclopædia Inflationaris models in order to assess the constraining power of PRISM alone and LiteBIRD complemented with the Planck 2013 data. Within slow-roll inflation, both designs have comparable constraining power and can rule out about three quarters of the inflationary scenarios, compared to one third for Planck 2013 data alone. However, we also show that PRISM can constrain the scalar running and has the capability to detect a violation of slow roll at second order. Finally, our results suggest that describing an inflationary model by its potential shape only, without specifying a reheating temperature, will no longer be possible given the accuracy level reached by the future CMB missions.

  13. Extracting electron transfer coupling elements from constrained density functional theory

    NASA Astrophysics Data System (ADS)

    Wu, Qin; Van Voorhis, Troy

    2006-10-01

    Constrained density functional theory (DFT) is a useful tool for studying electron transfer (ET) reactions. It can straightforwardly construct the charge-localized diabatic states and give a direct measure of the inner-sphere reorganization energy. In this work, a method is presented for calculating the electronic coupling matrix element (Hab) based on constrained DFT. This method completely avoids the use of ground-state DFT energies because they are known to irrationally predict fractional electron transfer in many cases. Instead it makes use of the constrained DFT energies and the Kohn-Sham wave functions for the diabatic states in a careful way. Test calculations on the Zn2+ and the benzene-Cl atom systems show that the new prescription yields reasonable agreement with the standard generalized Mulliken-Hush method. We then proceed to produce the diabatic and adiabatic potential energy curves along the reaction pathway for intervalence ET in the tetrathiafulvalene-diquinone (Q-TTF-Q) anion. While the unconstrained DFT curve has no reaction barrier and gives Hab ≈ 17 kcal/mol, which qualitatively disagrees with experimental results, the Hab calculated from constrained DFT is about 3 kcal/mol and the generated ground state has a barrier height of 1.70 kcal/mol, successfully predicting (Q-TTF-Q)- to be a class II mixed-valence compound.
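    For reference, the benchmark against which the constrained-DFT couplings are compared is the two-state generalized Mulliken-Hush expression (quoted here in its usual textbook form, not from the paper), built from the adiabatic energy gap ΔE12, transition dipole μ12 and dipole-moment difference Δμ12:

```latex
% Two-state generalized Mulliken-Hush coupling (standard form), the benchmark
% against which the constrained-DFT values are compared:
H_{ab} \;=\; \frac{\mu_{12}\,\Delta E_{12}}
                  {\sqrt{(\Delta\mu_{12})^{2} + 4\,\mu_{12}^{2}}}
```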

  14. Excision technique in constrained formulations of Einstein equations: collapse scenario

    NASA Astrophysics Data System (ADS)

    Cordero-Carrión, I.; Vasset, N.; Novak, J.; Jaramillo, J. L.

    2015-04-01

    We present a new excision technique used in constrained formulations of Einstein equations to deal with black holes in numerical simulations. We show the applicability of this scheme in several scenarios. In particular, we present the dynamical evolution of the collapse of a neutron star to a black hole, using the CoCoNuT code and this excision technique.

  15. Applications of a Constrained Mechanics Methodology in Economics

    ERIC Educational Resources Information Center

    Janova, Jitka

    2011-01-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the…

  16. Constrained variational calculus for higher order classical field theories

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; de León, Manuel; Martín de Diego, David

    2010-11-01

    We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular to the geometrical description of optimal control theory for partial differential equations.

  17. Multiply-Constrained Semantic Search in the Remote Associates Test

    ERIC Educational Resources Information Center

    Smith, Kevin A.; Huber, David E.; Vul, Edward

    2013-01-01

    Many important problems require consideration of multiple constraints, such as choosing a job based on salary, location, and responsibilities. We used the Remote Associates Test to study how people solve such multiply-constrained problems by asking participants to make guesses as they came to mind. We evaluated how people generated these guesses…

  18. How well can future CMB missions constrain cosmic inflation?

    NASA Astrophysics Data System (ADS)

    Martin, Jérôme; Ringeval, Christophe; Vennin, Vincent

    2014-10-01

    We study how the next generation of Cosmic Microwave Background (CMB) measurement missions (such as EPIC, LiteBIRD, PRISM and COrE) will be able to constrain the inflationary landscape in the hardest to disambiguate situation in which inflation is simply described by single-field slow-roll scenarios. Considering the proposed PRISM and LiteBIRD satellite designs, we simulate mock data corresponding to five different fiducial models having values of the tensor-to-scalar ratio ranging from 10^-1 down to 10^-7. We then compute the Bayesian evidences and complexities of all Encyclopædia Inflationaris models in order to assess the constraining power of PRISM alone and LiteBIRD complemented with the Planck 2013 data. Within slow-roll inflation, both designs have comparable constraining power and can rule out about three quarters of the inflationary scenarios, compared to one third for Planck 2013 data alone. However, we also show that PRISM can constrain the scalar running and has the capability to detect a violation of slow roll at second order. Finally, our results suggest that describing an inflationary model by its potential shape only, without specifying a reheating temperature, will no longer be possible given the accuracy level reached by the future CMB missions.

  19. Constrained Transport vs. Divergence Cleanser Options in Astrophysical MHD Simulations

    NASA Astrophysics Data System (ADS)

    Lindner, Christopher C.; Fragile, P.

    2009-01-01

    In previous work, we presented results from global numerical simulations of the evolution of black hole accretion disks using the Cosmos++ GRMHD code. In those simulations we solved the magnetic induction equation using an advection-split form, which is known not to satisfy the divergence-free constraint. To minimize the build-up of divergence error, we used a hyperbolic cleanser function that simultaneously damped the error and propagated it off the grid. We have since found that this method produces qualitatively and quantitatively different behavior in high magnetic field regions than results published by other research groups, particularly in the evacuated funnels of black-hole accretion disks where Poynting-flux jets are reported to form. The main difference between our earlier work and that of our competitors is their use of constrained-transport schemes to preserve a divergence-free magnetic field. Therefore, to study these differences directly, we have implemented a constrained transport scheme into Cosmos++. Because Cosmos++ uses a zone-centered, finite-volume method, we can not use the traditional staggered-mesh constrained transport scheme of Evans & Hawley. Instead we must implement a more general scheme; we chose the Flux-CT scheme as described by Toth. Here we present comparisons of results using the divergence-cleanser and constrained transport options in Cosmos++.

  20. Testing a Constrained MPC Controller in a Process Control Laboratory

    ERIC Educational Resources Information Center

    Ricardez-Sandoval, Luis A.; Blankespoor, Wesley; Budman, Hector M.

    2010-01-01

    This paper describes an experiment performed by the fourth-year chemical engineering students in the process control laboratory at the University of Waterloo. The objective of this experiment is to test the capabilities of a constrained Model Predictive Controller (MPC) to control the operation of a Double Pipe Heat Exchanger (DPHE) in real time…
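
    As a rough illustration of what a constrained MPC does at each sampling instant, the sketch below drives a first-order discrete model (a stand-in for the heat exchanger) to a setpoint while respecting actuator bounds, re-optimizing over a receding horizon. The model coefficients, limits and weights are hypothetical and are not the Waterloo apparatus or its controller.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Discrete first-order plant: y[k+1] = a*y[k] + b*u[k]  (toy stand-in for the DPHE).
    a, b = 0.9, 0.1
    horizon, setpoint = 10, 50.0
    u_min, u_max = 0.0, 100.0              # actuator limits (e.g. a heating-duty bound)

    def predict(y0, u_seq):
        # Roll the model forward over the horizon for a candidate input sequence.
        y, traj = y0, []
        for u in u_seq:
            y = a * y + b * u
            traj.append(y)
        return np.array(traj)

    def cost(u_seq, y0):
        # Setpoint tracking plus a small move-suppression penalty.
        y = predict(y0, u_seq)
        return np.sum((y - setpoint) ** 2) + 0.01 * np.sum(np.diff(u_seq, prepend=u_seq[0]) ** 2)

    def mpc_step(y0, u_guess):
        # Solve the constrained finite-horizon problem; apply only the first move.
        res = minimize(cost, u_guess, args=(y0,), bounds=[(u_min, u_max)] * horizon,
                       method="L-BFGS-B")
        return res.x[0], res.x

    y, u_plan = 20.0, np.full(horizon, 50.0)
    for _ in range(30):
        u_now, u_plan = mpc_step(y, u_plan)
        y = a * y + b * u_now              # plant update (no model mismatch in this toy)
    print(f"output after 30 steps: {y:.2f} (setpoint {setpoint})")
    ```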

  1. Bayesian Item Selection in Constrained Adaptive Testing Using Shadow Tests

    ERIC Educational Resources Information Center

    Veldkamp, Bernard P.

    2010-01-01

    Applying Bayesian item selection criteria in computerized adaptive testing might improve the bias and MSE of the ability estimates. The question remains how to apply these criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item…
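
    A minimal sketch of posterior-weighted item selection under a simple content constraint, in the spirit of the abstract: the ability posterior is maintained on a grid, and the next item maximizes posterior-expected Fisher information among items still satisfying the constraint. The 2PL item pool and the single content rule are illustrative, and the full shadow-test approach, which solves an integer program at every step, is not reproduced here.

    ```python
    import numpy as np

    def p_correct(theta, a, b):
        # 2PL item response function.
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def fisher_info(theta, a, b):
        p = p_correct(theta, a, b)
        return a ** 2 * p * (1 - p)

    def posterior(theta_grid, responses, items, prior_sd=1.0):
        # Grid posterior over ability given responses (0/1) to previously given items.
        logp = -0.5 * (theta_grid / prior_sd) ** 2
        for (a, b, _), u in zip(items, responses):
            p = p_correct(theta_grid, a, b)
            logp += u * np.log(p) + (1 - u) * np.log(1 - p)
        w = np.exp(logp - logp.max())
        return w / w.sum()

    def select_item(theta_grid, post, pool, administered, content_needed):
        # Maximize posterior-expected information over the constraint-respecting items.
        best, best_val = None, -np.inf
        for idx, (a, b, content) in enumerate(pool):
            if idx in administered or content not in content_needed:
                continue
            val = np.sum(post * fisher_info(theta_grid, a, b))
            if val > best_val:
                best, best_val = idx, val
        return best

    rng = np.random.default_rng(0)
    pool = [(rng.uniform(0.8, 2.0), rng.normal(), rng.choice(["algebra", "geometry"]))
            for _ in range(50)]
    theta_grid = np.linspace(-4, 4, 161)
    post = posterior(theta_grid, [], [])
    pick = select_item(theta_grid, post, pool, administered=set(), content_needed={"algebra"})
    print("first item selected:", pick, pool[pick])
    ```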

  2. Constrained Quantum Mechanics: Chaos in Non-Planar Billiards

    ERIC Educational Resources Information Center

    Salazar, R.; Tellez, G.

    2012-01-01

    We illustrate some of the techniques used to identify chaos signatures at the quantum level, using as guiding examples systems where a particle is constrained to move on a radially symmetric, but non-planar, surface. In particular, two systems are studied: the case of a cone with an arbitrary contour, or "dunce hat billiard", and the rectangular…
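
    One widely used quantum-chaos signature in such billiard studies is the statistics of nearest-neighbor level spacings: uncorrelated (Poisson-like) spectra and random-matrix (Wigner/GOE-like) spectra give distinctly different mean spacing ratios. The sketch below reproduces the two reference values with synthetic spectra; it does not compute the dunce-hat billiard eigenvalues themselves.

    ```python
    import numpy as np

    def mean_spacing_ratio(levels):
        # <r> with r_i = min(s_i, s_{i+1}) / max(s_i, s_{i+1}), s_i consecutive level spacings.
        s = np.diff(np.sort(levels))
        return np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:]))

    rng = np.random.default_rng(1)

    # Uncorrelated (Poisson) spectrum, typical of integrable systems: <r> ~ 0.39.
    poisson_levels = np.cumsum(rng.exponential(size=5000))

    # GOE random-matrix spectrum, typical of chaotic systems: <r> ~ 0.53.
    n = 1000
    m = rng.normal(size=(n, n))
    goe_levels = np.linalg.eigvalsh(m + m.T)

    print("Poisson-like <r>:", round(mean_spacing_ratio(poisson_levels), 3))
    print("GOE-like     <r>:", round(mean_spacing_ratio(goe_levels), 3))
    ```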

  3. Constraining the structure of the Narrow-Line Region of nearby QSO2s

    NASA Astrophysics Data System (ADS)

    Storchi-Bergmann, Thaisa

    2014-10-01

    The Narrow-Line Region (NLR) is the only resolved region of Active Galactic Nuclei (AGN), observed via high-excitation ionized gas emission that extends from hundred-parsec to kiloparsec scales in the host galaxies. In nearby AGN (z < 0.03), the NLR is known to present an elongated or cone-like morphology in type 2 AGN and a circular morphology in type 1 AGN, supporting the Unified Model. Nevertheless, at somewhat higher redshifts (z ~ 0.5), recent ground-based studies have found mostly circular morphologies in observations of QSO2s (obscured QSOs). At the corresponding distances, however, ground-based observations lack the angular resolution to fully resolve the NLRs, so it is not clear whether the intrinsic NLR morphology changes for more luminous AGN or whether this is an effect of atmospheric seeing. Only with HST will we be able to resolve the NLR morphology down to a few-hundred-parsec scales. We thus propose a "mini-survey" of the NLRs, obtaining narrow-band images in [OIII] and Halpha+[NII] of a sample of nearby QSO2s spanning the redshift range 0.05 < z < … to constrain the extent, morphology and excitation of the NLR. These data will complement a homogeneous database of HST narrow-band images of ~100 Seyfert galaxies at z < 0.03, and will allow us to constrain the relation between the radius and luminosity of the NLR over a luminosity range 39 < log L < …, to constrain the nature of the emission, and to provide a census of recent star formation in the host galaxies.
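
    The radius-luminosity relation mentioned above is usually quantified as a power law, log R = alpha + beta * log L, fit to the measured NLR sizes and [OIII] luminosities. The sketch below shows that reduction step on synthetic data; the points, slope and scatter are placeholders, not the proposed HST measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    log_L = rng.uniform(39.0, 43.0, size=40)          # synthetic log L[OIII] values (erg/s)
    true_slope, scatter = 0.5, 0.15
    log_R = true_slope * (log_L - 41.0) + rng.normal(0.0, scatter, size=40)   # synthetic log NLR radii

    coeffs, cov = np.polyfit(log_L - 41.0, log_R, deg=1, cov=True)
    slope, intercept = coeffs
    print(f"beta = {slope:.2f} +/- {np.sqrt(cov[0, 0]):.2f}, alpha = {intercept:.2f}")
    ```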

  4. Sensitivity Analysis Tailored to Constrain 21st Century Terrestrial Carbon-Uptake

    NASA Astrophysics Data System (ADS)

    Muller, S. J.; Gerber, S.

    2013-12-01

    The long-term fate of terrestrial carbon (C) in response to climate change remains a dominant source of uncertainty in Earth-system model projections. Increasing atmospheric CO2 could be mitigated by long-term net uptake of C, through processes such as increased plant productivity due to "CO2-fertilization". Conversely, atmospheric conditions could be exacerbated by long-term net release of C, through processes such as increased decomposition due to higher temperatures. This balance is an important area of study, and a major source of uncertainty in long-term (beyond 2050) projections of planetary response to climate change. We present results from an innovative application of sensitivity analysis to LM3V, a dynamic global vegetation model (DGVM), intended to identify observed/observable variables that are useful for constraining long-term projections of C-uptake. We analyzed the sensitivity of cumulative C-uptake by 2100, as modeled by LM3V in response to IPCC AR4 scenario climate data (1860-2100), to perturbations in over 50 model parameters. We concurrently analyzed the sensitivity of over 100 observable model variables, during the extant record period (1970-2010), to the same parameter changes. By correlating the sensitivities of observable variables with the sensitivity of long-term C-uptake, we identified model calibration variables that would also constrain long-term C-uptake projections. LM3V employs a coupled carbon-nitrogen cycle to account for N-limitation, and we find that N-related variables have an important role to play in constraining long-term C-uptake. This work has implications for prioritizing field campaigns to collect global data that can help reduce uncertainties in the long-term land-atmosphere C-balance. Though the results of this study are specific to LM3V, the processes that characterize this model are not completely divorced from other DGVMs (or reality), and our approach provides valuable insights into how data can be leveraged to be better…
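
    The core idea, correlating the parameter sensitivities of short-record observables with the parameter sensitivity of the long-term target, can be sketched with one-at-a-time perturbations of a toy linear model. LM3V itself is replaced here by an arbitrary stand-in, and all dimensions and weights are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_params, n_obs = 20, 30
    W = rng.standard_normal((n_obs, n_params))        # fixed linear response of the observables
    w_target = np.concatenate([np.ones(5), 0.1 * np.ones(n_params - 5)])  # response of 2100 C-uptake

    def run_model(params):
        # Stand-in for a full DGVM run: returns (observable diagnostics, cumulative C-uptake by 2100).
        return W @ params, float(w_target @ params)

    base = np.ones(n_params)
    obs0, target0 = run_model(base)

    # One-at-a-time perturbations give the sensitivity of every output to every parameter.
    eps = 0.01
    obs_sens = np.zeros((n_obs, n_params))
    target_sens = np.zeros(n_params)
    for j in range(n_params):
        p = base.copy()
        p[j] += eps
        obs_j, target_j = run_model(p)
        obs_sens[:, j] = (obs_j - obs0) / eps
        target_sens[j] = (target_j - target0) / eps

    # Rank observables by how strongly their sensitivity pattern correlates with the target's.
    corr = np.array([np.corrcoef(obs_sens[i], target_sens)[0, 1] for i in range(n_obs)])
    best = np.argsort(-np.abs(corr))[:5]
    print("most informative observables:", best, corr[best].round(2))
    ```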

  5. Stop as a next-to-lightest supersymmetric particle in constrained MSSM

    SciTech Connect

    Huitu, Katri; Leinonen, Lasse; Laamanen, Jari

    2011-10-01

    So far, squarks have not been detected at the LHC, indicating that, if they exist, they are heavier than a few hundred GeV. The lighter stop, however, can be considerably lighter than the other squarks. We study the possibility that the supersymmetric partner of the top quark, the stop, is the next-to-lightest supersymmetric particle in the constrained minimal supersymmetric standard model. Various constraints, on top of the mass limits, are taken into account, and the allowed parameter space for this scenario is determined. Observing a stop that is the next-to-lightest supersymmetric particle at the LHC may be difficult.

  6. Constraining X-ray-Induced Photoevaporation of Protoplanetary Disks Orbiting Low-Mass Stars

    NASA Astrophysics Data System (ADS)

    Punzi, Kristina M.; Kastner, Joel H.; Rodriguez, David; Principe, David A.; Vican, Laura

    2016-01-01

    Low-mass, pre-main sequence stars possess intense high-energy radiation fields as a result of their strong stellar magnetic activity. This stellar UV and X-ray radiation may have a profound impact on the lifetimes of protoplanetary disks. We aim to constrain the X-ray-induced photoevaporation rates of protoplanetary disks orbiting low-mass stars by analyzing serendipitous XMM-Newton and Chandra X-ray observations of candidate nearby (D < 100 pc), young (age < 100 Myr) M stars identified in the GALEX Nearby Young-Star Survey (GALNYSS).

  7. Neutrino-Electron Scattering in MINERvA for Constraining the NuMI Neutrino Flux

    SciTech Connect

    Park, Jaewon

    2013-01-01

    Neutrino-electron elastic scattering is used as a reference process to constrain the neutrino flux at the Main Injector (NuMI) beam observed by the MINERvA experiment. Prediction of the neutrino flux at accelerator experiments by other methods has a large uncertainty, and this uncertainty degrades measurements of neutrino oscillations and neutrino cross-sections. Neutrino-electron elastic scattering is a rare process, but its cross-section is precisely known. With data corresponding to 3.5×10²⁰ protons on target in the NuMI low-energy neutrino beam, a sample of 120 …
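
    The logic of the constraint is a counting argument: because the neutrino-electron elastic cross-section is precisely known, the background-subtracted event count fixes the integrated flux through N = Φ σ N_e ε. The sketch below shows that inversion with placeholder numbers; none of the values are MINERvA's.

    ```python
    # Back-of-the-envelope flux constraint: N_events = flux * cross_section * N_electrons * efficiency.
    # All numbers below are placeholders, not MINERvA values.
    N_events = 120                   # observed nu-e elastic candidates (placeholder)
    background = 20.0                # estimated background events (placeholder)
    efficiency = 0.7                 # selection efficiency (placeholder)
    sigma_per_electron = 1.0e-42     # cm^2, flux-averaged nu-e elastic cross section (placeholder)
    n_electrons = 3.0e30             # electrons in the fiducial volume (placeholder)

    denom = efficiency * sigma_per_electron * n_electrons
    flux = (N_events - background) / denom
    stat_err = N_events ** 0.5 / denom
    print(f"integrated flux ~ {flux:.3g} +/- {stat_err:.3g} neutrinos / cm^2")
    ```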

  8. Constraining the Antarctic contribution to interglacial sea-level rise

    NASA Astrophysics Data System (ADS)

    Naish, T.; Mckay, R. M.; Barrett, P. J.; Levy, R. H.; Golledge, N. R.; Deconto, R. M.; Horgan, H. J.; Dunbar, G. B.

    2015-12-01

    Observations, models and paleoclimate reconstructions suggest that Antarctica's marine-based ice sheets behave in an unstable manner, with episodes of rapid retreat in response to warming climate. Understanding the processes involved in this "marine ice sheet instability" is key for improving estimates of the Antarctic ice sheet contribution to future sea-level rise. Another motivating factor is that far-field sea-level reconstructions and ice sheet models imply global mean sea level (GMSL) was up to 20 m and 10 m higher, respectively, compared with present day, during the interglacials of the warm Pliocene (~4-3 Ma) and Late Pleistocene (at ~400 ka and 125 ka). This was when atmospheric CO2 was between 280 and 400 ppm and global average surface temperatures were 1-3°C warmer, suggesting polar ice sheets are highly sensitive to relatively modest increases in climate forcing. Such magnitudes of GMSL rise not only require near-complete melt of the Greenland Ice Sheet and the West Antarctic Ice Sheet, but also a substantial retreat of marine-based sectors of the East Antarctic Ice Sheet. Recent geological drilling initiatives on the continental margin of Antarctica from both ship-based (e.g. IODP, the International Ocean Discovery Program) and ice-based (e.g. ANDRILL, Antarctic Geological Drilling) platforms have provided evidence supporting retreat of marine-based ice. However, without direct access through the ice sheet to archives preserved within sub-glacial sedimentary basins, the volume and extent of ice sheet retreat during past interglacials cannot be directly constrained. Sediment cores have been successfully recovered from beneath ice shelves by the ANDRILL Program and from beneath ice streams by the WISSARD (Whillans Ice Stream Sub-glacial Access Research Drilling) Project. Together with the potential of the new RAID (Rapid Access Ice Drill) initiative, these demonstrate the technological feasibility of accessing the subglacial bed and deeper sedimentary archives. In this talk I will outline the …

  9. Comparative Analysis of Uninhibited and Constrained Avian Wing Aerodynamics

    NASA Astrophysics Data System (ADS)

    Cox, Jordan A.

    The flight of birds has intrigued and motivated man for many years. Bird flight served as the primary inspiration for flying machines developed by Leonardo Da Vinci, Otto Lilienthal, and even the Wright brothers. Avian flight has once again drawn the attention of the scientific community as unmanned aerial vehicles (UAVs) are not only becoming more popular, but smaller. Birds are once again influencing the designs of aircraft. Small UAVs operating within flight conditions and low Reynolds numbers common to birds are not yet capable of the high levels of control and agility that birds display with ease. Many researchers believe the potential to improve small UAV performance can be obtained by applying features common to birds, such as feathers and flapping flight, to small UAVs. Although the effects of feathers on a wing have received some attention, the effects of localized transient feather motion and surface geometry on the flight performance of a wing have been largely overlooked. In this research, the effects of freely moving feathers on a preserved red-tailed hawk wing were studied. A series of experiments was conducted to measure the aerodynamic forces on a hawk wing with varying levels of feather movement permitted. Angle of attack and airspeed were varied within the natural flight envelope of the hawk. Subsequent identical tests were performed with the feather motion constrained through the use of externally applied surface treatments. Additional tests involved the study of an absolutely fixed-geometry mold-and-cast wing model of the original bird wing. Final tests were also performed after applying surface coatings to the cast wing. High-speed videos taken during tests revealed the extent of the feather movement between wing models. Images of the microscopic surface structure of each wing model were analyzed to establish variations in surface geometry between models. Recorded aerodynamic forces were then compared to the known feather motion and surface …
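
    Measured forces from runs like these are normally reduced to non-dimensional lift and drag coefficients before comparing wing treatments, via C_L = L / (0.5 * rho * V^2 * S). The sketch below applies that standard reduction to placeholder readings; the area, speeds and forces are invented, not the hawk-wing data.

    ```python
    import numpy as np

    rho = 1.225                        # air density, kg/m^3 (sea level)
    S = 0.12                           # wing planform area, m^2 (placeholder)
    V = np.array([8.0, 10.0, 12.0])    # test airspeeds, m/s (placeholders)
    lift = np.array([1.9, 3.1, 4.5])   # measured lift forces, N (placeholders)
    drag = np.array([0.22, 0.35, 0.52])

    q = 0.5 * rho * V ** 2             # dynamic pressure
    CL, CD = lift / (q * S), drag / (q * S)
    print("C_L:", CL.round(3), " C_D:", CD.round(3), " L/D:", (CL / CD).round(1))
    ```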

  10. Experimentally constraining the boundary conditions for volcanic ash aggregation

    NASA Astrophysics Data System (ADS)

    Kueppers, U.; Auer, B.; Cimarelli, C.; Scolamacchia, T.; Guenthel, M.; Dingwell, D. B.

    2011-12-01

    Volcanic ash is the primary product of various volcanic processes. Due to its size, ash can remain in the atmosphere for a prolonged period of time. Aggregation processes are a first-order influence on the residence time of ash in the atmosphere and its dispersion from the vent. Based on their internal structure, ash aggregates have been classified as ash pellets or accretionary lapilli. Although several concomitant factors may play a role during aggregation, there is a broad consensus that both 1) particle collision and 2) humidity are required for particles to aggregate. However, direct observations of settling aggregates and records of the boundary conditions favourable to their formation are rare, thereby limiting our understanding of the key processes that determine ash aggregate formation. Here, we present the first results from experiments aimed at reproducing ash aggregation by constraining the required boundary conditions. We used a ProCell Lab System from Glatt Ingenieurtechnik GmbH that is conventionally used for food and chemical applications. We varied the following parameters: 1) air flow speed [40-120 m³/h], 2) air temperature [30-60°C], 3) relative humidity [20-50%], and 4) liquid droplet composition [water, and a 25% water-glass (Na2SiO3) solution]. The starting material (90-125 μm) was obtained by milling natural basaltic lapilli (Etna, Italy). We found that the experimental duration and the chosen conditions were not favourable for the production of stable aggregates when using water as the spraying liquid. Using a 25% water-glass solution as a binder, we could successfully generate and investigate aggregates of up to 2 mm in size. Many aggregates are spherical and resemble ash pellets. In nature, ash pellets and accretionary lapilli are the product of complex processes taking place at very different conditions (temperature, humidity, ash concentration, degree of turbulence). These experiments shed some first light on the ash agglomeration process for which direct …

  11. Evolution and constraints on the star formation histories of IR-bright star-forming galaxies at high redshift

    NASA Astrophysics Data System (ADS)

    Sklias, Panos; Schaerer, Daniel; Elbaz, David

    2015-08-01

    Understanding and constraining the early cosmic star formation history of the Universe is a key question of galaxy evolution. A large fraction of star formation is dust-obscured, so it is crucial to have access to the IR emission of galaxies to properly study them. Utilizing the multi-wavelength photometry from GOODS-Herschel, we perform SED fitting with different variable star formation histories (SFHs), which we constrain with the observed IR luminosities, on a large sample of individually IR-detected sources from z~1 to 4. We explore how (and to what extent) constraining dust attenuation with the IR luminosities reduces the scatter (expected when using variable SFHs, in contrast to standard IR+UV calibrations) in physical properties and relations such as mass-SFR and the so-called star-forming Main Sequence (MS). Although limited at the high-z end, our analysis shows a change of trends in SFHs between low and high z that follows the established cosmic SFR density, with galaxies found to prefer rising SFRs at z~3-4 and declining SFRs at z≤1. We show that a fraction of galaxies (~20%), mainly at z≤2, can have lower SFRs than IR-inferred while still being compatible with the observations, indicative of their being post-starbursts or undergoing quenching while bright in the IR, in agreement with theoretical work. The IR-constrained stellar population models we obtain also indicate that the two main modes of star formation - MS and starburst - evolve differently with time, with the former being mostly slowly evolving and lying on the MS for long-lasting periods, and the latter being very recent, rapidly increasing bursts (or on the decline, when belonging to the aforementioned "quenched" category). Finally, we illustrate how spectroscopic observations of nebular emission lines further enable us to effectively constrain the SFHs of galaxies.
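
    The contrast drawn above, rising exponential SFHs preferred at z~3-4 versus declining ones at z≤1, can be sketched by anchoring both shapes to the same present-day SFR, here derived from an IR luminosity with the Kennicutt (1998) calibration SFR ≈ 4.5e-44 L_IR[erg/s]. The coefficient is quoted from memory, and the luminosity and timescales are placeholders; none of these values come from the paper.

    ```python
    import numpy as np

    def sfh(t_gyr, tau_gyr, rising, sfr_now, t_now_gyr):
        # Exponential SFR(t), normalized so that SFR(t_now) = sfr_now.
        sign = 1.0 if rising else -1.0
        return sfr_now * np.exp(sign * (t_gyr - t_now_gyr) / tau_gyr)

    L_IR = 3e45                        # erg/s, placeholder luminous IR galaxy
    sfr_now = 4.5e-44 * L_IR           # ~135 Msun/yr from the calibration quoted above
    t = np.linspace(0.0, 2.0, 201)     # Gyr since the onset of star formation

    for rising, label in [(True, "rising (z~3-4 like)"), (False, "declining (z<~1 like)")]:
        sfr = sfh(t, tau_gyr=0.5, rising=rising, sfr_now=sfr_now, t_now_gyr=2.0)
        mass = np.sum(sfr) * (t[1] - t[0]) * 1e9    # crude time integral, Msun
        print(f"{label}: SFR(now) = {sfr[-1]:.0f} Msun/yr, formed mass ~ {mass:.2e} Msun")
    ```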

  12. Mercury's interior structure constrained by geodesy and present-day thermal state

    NASA Astrophysics Data System (ADS)

    Rivoldini, Attilio; Van Hoolst, Tim; Beuthe, Mikael; Deproost, Marie-Hélène

    2016-10-01

    Recent measurements of Mercury's spin state and gravitational field strongly constrain Mercury's core radius and core density, but provide little information about the size of its inner core. Both a fully molten core and a core differentiated into a large solid inner core and a liquid outer part are consistent with the observations, although the observed tides seem to exclude …