Sample records for source model derived

  1. Quasi-homogeneous partial coherent source modeling of multimode optical fiber output using the elementary source method

    NASA Astrophysics Data System (ADS)

    Fathy, Alaa; Sabry, Yasser M.; Khalil, Diaa A.

    2017-10-01

Multimode fibers (MMF) have many applications in illumination, spectroscopy, sensing and even in optical communication systems. In this work, we present a model for the MMF output field assuming the fiber end as a quasi-homogeneous source. The fiber end is modeled by a group of partially coherent elementary sources, spatially shifted and uncorrelated with each other. The elementary source distribution is derived from the far field intensity measurement, while the weighting function of the sources is derived from the fiber end intensity measurement. The model is compared with practical measurements for fibers with different core/cladding diameters at different propagation distances and for different input excitations: laser, white light and LED. The obtained results show normalized root mean square error less than 8% in the intensity profile in most cases, even when the fiber end surface is not perfectly cleaved. In addition, compared with the Gaussian-Schell model, the proposed model shows better agreement with the measurements. Furthermore, the complex degree of coherence derived from the model results is compared with the theoretical predictions of the modified van Cittert-Zernike theorem, showing very good agreement, which strongly supports the assumption that the large-core MMF can be considered a quasi-homogeneous source.
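The incoherent superposition at the heart of the elementary source method can be sketched in a few lines: the output intensity is the weight function convolved with the elementary intensity profile. The Gaussian elementary profile, flat-top weight, and all numerical values below are illustrative assumptions, not the authors' measured distributions.

```python
import numpy as np

def elementary_source_intensity(w, i_elem, dx):
    """Incoherent superposition of shifted, mutually uncorrelated
    elementary sources: I(x) = sum_i w(x_i) * I_e(x - x_i) * dx,
    i.e. a discrete convolution of the weighting function with the
    elementary intensity profile."""
    return np.convolve(w, i_elem, mode="same") * dx

# Hypothetical 1-D example: a 5 um Gaussian elementary intensity and a
# 100 um flat-top weight standing in for the measured near-field profile.
x = np.linspace(-100e-6, 100e-6, 2001)          # metres
dx = x[1] - x[0]
i_elem = np.exp(-x**2 / (2 * (5e-6) ** 2))      # elementary source profile
w = np.where(np.abs(x) < 50e-6, 1.0, 0.0)       # core weighting function
i_out = elementary_source_intensity(w, i_elem, dx)
i_out /= i_out.max()                            # normalized output intensity
```

The convolution smooths the flat-top by the elementary width, mimicking the partially coherent blur of the fiber-end intensity.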

  2. A simulation-based analytic model of radio galaxies

    NASA Astrophysics Data System (ADS)

    Hardcastle, M. J.

    2018-04-01

    I derive and discuss a simple semi-analytical model of the evolution of powerful radio galaxies which is not based on assumptions of self-similar growth, but rather implements some insights about the dynamics and energetics of these systems derived from numerical simulations, and can be applied to arbitrary pressure/density profiles of the host environment. The model can qualitatively and quantitatively reproduce the source dynamics and synchrotron light curves derived from numerical modelling. Approximate corrections for radiative and adiabatic losses allow it to predict the evolution of radio spectral index and of inverse-Compton emission both for active and `remnant' sources after the jet has turned off. Code to implement the model is publicly available. Using a standard model with a light relativistic (electron-positron) jet, subequipartition magnetic fields, and a range of realistic group/cluster environments, I simulate populations of sources and show that the model can reproduce the range of properties of powerful radio sources as well as observed trends in the relationship between jet power and radio luminosity, and predicts their dependence on redshift and environment. I show that the distribution of source lifetimes has a significant effect on both the source length distribution and the fraction of remnant sources expected in observations, and so can in principle be constrained by observations. The remnant fraction is expected to be low even at low redshift and low observing frequency due to the rapid luminosity evolution of remnants, and to tend rapidly to zero at high redshift due to inverse-Compton losses.

  3. Characterizing CO and NOy Sources and Relative Ambient Ratios in the Baltimore Area Using Ambient Measurements and Source Attribution Modeling

    EPA Science Inventory

    Modeled source attribution information from the Community Multiscale Air Quality model was coupled with ambient data from the 2011 Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality Baltimore field study. We assess ...

  4. Solomon Islands 2007 Tsunami Near-Field Modeling and Source Earthquake Deformation

    NASA Astrophysics Data System (ADS)

    Uslu, B.; Wei, Y.; Fritz, H.; Titov, V.; Chamberlin, C.

    2008-12-01

The earthquake of 1 April 2007 left behind extensive evidence of crustal rupture and tsunami impact along the coastline of Solomon Islands (Fritz and Kalligeris, 2008; Taylor et al., 2008; McAdoo et al., 2008; PARI, 2008), while the undisturbed tsunami signals were also recorded at nearby deep-ocean tsunameters and coastal tide stations. These multi-dimensional measurements provide valuable datasets for characterizing the tsunami source directly by inversion of tsunameter records in real time (available in a time frame of minutes), and its relationship with the seismic source derived either from the seismometer records (available in a time frame of hours or days) or from the crust rupture measurements (available in a time frame of months or years). The tsunami measurements in the near field, including the complex vertical crust motion and tsunami runup, are particularly critical to help interpret the tsunami source. This study develops high-resolution inundation models for the Solomon Islands to compute the near-field tsunami impact. Using these models, this research compares the tsunameter-derived tsunami source with the seismic-derived earthquake sources from multiple perspectives, including vertical uplift and subsidence, tsunami runup heights and their distributional pattern among the islands, deep-ocean tsunameter measurements, and near- and far-field tide gauge records. The present study stresses the significance of the tsunami magnitude, source location, bathymetry and topography in accurately modeling the generation, propagation and inundation of the tsunami waves. This study highlights the accuracy and efficiency of the tsunameter-derived tsunami source in modeling the near-field tsunami impact.
As the high-resolution models developed in this study will become part of NOAA's tsunami forecast system, these results also suggest expanding the system for potential applications in tsunami hazard assessment, search and rescue operations, as well as event and post-event planning in the Solomon Islands.

  5. Estimating rupture distances without a rupture

    USGS Publications Warehouse

    Thompson, Eric M.; Worden, Charles

    2017-01-01

    Most ground motion prediction equations (GMPEs) require distances that are defined relative to a rupture model, such as the distance to the surface projection of the rupture (RJB) or the closest distance to the rupture plane (RRUP). There are a number of situations in which GMPEs are used where it is either necessary or advantageous to derive rupture distances from point-source distance metrics, such as hypocentral (RHYP) or epicentral (REPI) distance. For ShakeMap, it is necessary to provide an estimate of the shaking levels for events without rupture models, and before rupture models are available for events that eventually do have rupture models. In probabilistic seismic hazard analysis, it is often convenient to use point-source distances for gridded seismicity sources, particularly if a preferred orientation is unknown. This avoids the computationally cumbersome task of computing rupture-based distances for virtual rupture planes across all strikes and dips for each source. We derive average rupture distances conditioned on REPI, magnitude, and (optionally) back azimuth, for a variety of assumed seismological constraints. Additionally, we derive adjustment factors for GMPE standard deviations that reflect the added uncertainty in the ground motion estimation when point-source distances are used to estimate rupture distances.
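The conditioning of rupture distance on point-source metrics can be sketched as a Monte Carlo estimate of E[RJB | REPI, M]: average the Joyner-Boore distance over virtual ruptures with random strike and random epicentre position along the trace. The vertical line-source geometry and the Wells-Coppersmith-style length scaling below are simplifying assumptions for illustration, not the paper's calibrated relations.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_rjb(repi, mag, n=20000):
    """Monte Carlo estimate of E[RJB | REPI, M]: average the distance
    from the site to the surface trace of virtual vertical ruptures
    with uniform random strike and epicentre position."""
    length = 10.0 ** (-2.44 + 0.59 * mag)      # rupture length (km), illustrative
    strike = rng.uniform(0.0, np.pi, n)
    frac = rng.uniform(0.0, 1.0, n)            # epicentre position along the trace
    # Site at the origin; epicentre at (repi, 0); trace endpoints a, b.
    dx, dy = np.cos(strike), np.sin(strike)
    a = np.stack([repi - frac * length * dx, -frac * length * dy], axis=1)
    b = np.stack([repi + (1 - frac) * length * dx, (1 - frac) * length * dy], axis=1)
    ab = b - a
    t = np.clip(-(a * ab).sum(1) / (ab * ab).sum(1), 0.0, 1.0)
    closest = a + t[:, None] * ab              # nearest point on each segment
    return np.hypot(closest[:, 0], closest[:, 1]).mean()

# E[RJB] is bounded above by REPI and shrinks as the rupture grows.
r65 = mean_rjb(repi=30.0, mag=6.5)
r75 = mean_rjb(repi=30.0, mag=7.5)
```

Because the epicentre always lies on the virtual trace, RJB never exceeds REPI, and larger magnitudes (longer ruptures) pull the expected rupture distance further below the epicentral distance.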

  6. Use of stable isotope signatures to determine mercury sources in the Great Lakes

    USGS Publications Warehouse

    Lepak, Ryan F.; Yin, Runsheng; Krabbenhoft, David P.; Ogorek, Jacob M.; DeWild, John F.; Holsen, Thomas M.; Hurley, James P.

    2015-01-01

Sources of mercury (Hg) in Great Lakes sediments were assessed with stable Hg isotope ratios using multicollector inductively coupled plasma mass spectrometry. An isotopic mixing model based on mass-dependent (MDF) and mass-independent fractionation (MIF) (δ202Hg and Δ199Hg) identified three primary Hg sources for sediments: atmospheric, industrial, and watershed-derived. Results indicate atmospheric sources dominate in Lakes Huron, Superior, and Michigan sediments while watershed-derived and industrial sources dominate in Lakes Erie and Ontario sediments. Anomalous Δ200Hg signatures, also apparent in sediments, provided independent validation of the model. Comparison of Δ200Hg signatures in predatory fish from three lakes reveals that bioaccumulated Hg is more isotopically similar to atmospherically derived Hg than a lake’s sediment. Previous research suggests Δ200Hg is conserved during biogeochemical processing and odd mass-independent fractionation (MIF) is conserved during metabolic processing, so it is suspected that even-mass MIF (Δ200Hg) is similarly conserved. Given these assumptions, our data suggest that in some cases, atmospherically derived Hg may be a more important source of MeHg to higher trophic levels than legacy sediments in the Great Lakes.
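The two-signature, three-end-member mixing algebra behind such a model can be sketched as a small linear system: two isotope balances plus mass conservation determine the three source fractions. The end-member values below are placeholders for illustration, not the study's calibrated signatures.

```python
import numpy as np

# Hypothetical end-member signatures (per mil); rows are d202Hg (MDF),
# D199Hg (MIF), and the mass-balance constraint. Columns are the sources:
#   atmospheric, industrial, watershed-derived.
endmembers = np.array([
    [-1.5,  -0.6,  -1.0],   # d202Hg
    [ 0.20,  0.00,  0.05],  # D199Hg
    [ 1.0,   1.0,   1.0],   # fractions sum to 1
])

def source_fractions(d202, D199):
    """Solve the 3-end-member isotope mixing model: each sediment
    sample is treated as a convex combination of the three sources."""
    return np.linalg.solve(endmembers, np.array([d202, D199, 1.0]))

# A hypothetical sediment sample:
f = source_fractions(d202=-1.2, D199=0.12)   # [atmospheric, industrial, watershed]
```

A physically admissible solution has all fractions in [0, 1]; samples violating that indicate a missing end-member or signature overlap.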

  7. Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy

    USGS Publications Warehouse

    Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.

    1998-01-01

    We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.
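The search for source locations minimizing the slowness-vector misfit under a Gaussian error model can be sketched as a grid search over candidate positions. The one-parameter slowness profile and toy geometry below are assumptions for illustration, not the layered Stromboli model.

```python
import numpy as np

def locate_source(s_obs, cov, array_xy, grid, slowness_of_depth):
    """Grid search maximizing a Gaussian likelihood of the observed
    horizontal slowness vector given a candidate source position.
    slowness_of_depth(z) stands in for the layered velocity model."""
    inv_cov = np.linalg.inv(cov)
    best, best_logp = None, -np.inf
    for (x, y, z) in grid:
        dx, dy = array_xy[0] - x, array_xy[1] - y
        back = np.array([dx, dy]) / np.hypot(dx, dy)   # propagation direction
        s_pred = slowness_of_depth(z) * back
        r = s_obs - s_pred
        logp = -0.5 * r @ inv_cov @ r                  # Gaussian log-likelihood
        if logp > best_logp:
            best, best_logp = (x, y, z), logp
    return best

# Toy setup: array at the origin, candidate sources 500 m south at
# several depths; velocity increases (slowness decreases) with depth.
s_of_z = lambda z: 1.0 / (1.5 + 0.002 * z)             # s/km, z in metres
grid = [(0.0, -0.5, z) for z in (50.0, 100.0, 150.0, 200.0, 300.0)]
s_true = s_of_z(150.0) * np.array([0.0, 1.0])          # "observed" slowness
loc = locate_source(s_true, 0.01**2 * np.eye(2), (0.0, 0.0), grid, s_of_z)
```

With noise-free data the maximum-likelihood grid point recovers the true 150 m depth; in practice the spread of log-likelihood over the grid gives the confidence region described in the abstract.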

  8. Shuttle derived atmospheric density model. Part 1: Comparisons of the various ambient atmospheric source data with derived parameters from the first twelve STS entry flights, a data package for AOTV atmospheric development

    NASA Technical Reports Server (NTRS)

    Findlay, J. T.; Kelly, G. M.; Troutman, P. A.

    1984-01-01

The ambient atmospheric parameter comparisons versus derived values from the first twelve Space Shuttle Orbiter entry flights are presented. Available flights, flight data products, and data sources utilized are reviewed. Comparisons are presented based on remote meteorological measurements as well as two comprehensive models which incorporate latitudinal and seasonal effects. These are the Air Force 1978 Reference Atmosphere and the Marshall Space Flight Center Global Reference Model (GRAM). Atmospheric structure sensible in the Shuttle flight data is shown and discussed. A model for consideration in Aero-assisted Orbital Transfer Vehicle (AOTV) trajectory analysis is proposed that modifies the GRAM data to emulate the Shuttle flight experience.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, Daniel C.; Bowman, Judd; Parsons, Aaron R.

We present a catalog of spectral measurements covering a 100-200 MHz band for 32 sources, derived from observations with a 64 antenna deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) in South Africa. For transit telescopes such as PAPER, calibration of the primary beam is a difficult endeavor and errors in this calibration are a major source of error in the determination of source spectra. In order to decrease our reliance on an accurate beam calibration, we focus on calibrating sources in a narrow declination range from –46° to –40°. Since sources at similar declinations follow nearly identical paths through the primary beam, this restriction greatly reduces errors associated with beam calibration, yielding a dramatic improvement in the accuracy of derived source spectra. Extrapolating from higher frequency catalogs, we derive the flux scale using a Monte Carlo fit across multiple sources that includes uncertainty from both catalog and measurement errors. Fitting spectral models to catalog data and these new PAPER measurements, we derive new flux models for Pictor A and 31 other sources at nearby declinations; 90% are found to confirm and refine a power-law model for flux density. Of particular importance is the new Pictor A flux model, which is accurate to 1.4% and shows that between 100 MHz and 2 GHz, in contrast with previous models, the spectrum of Pictor A is consistent with a single power law given by a flux at 150 MHz of 382 ± 5.4 Jy and a spectral index of –0.76 ± 0.01. This accuracy represents an order of magnitude improvement over previous measurements in this band and is limited by the uncertainty in the catalog measurements used to estimate the absolute flux scale. The simplicity and improved accuracy of Pictor A's spectrum make it an excellent calibrator in a band important for experiments seeking to measure 21 cm emission from the epoch of reionization.

  10. Precision Orbit Derived Atmospheric Density: Development and Performance

    NASA Astrophysics Data System (ADS)

    McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.

    2012-09-01

Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities, taking into consideration ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities; in particular, catalog maintenance. Generally, density is the largest error source in satellite drag calculations and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available using Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers.
The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
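The linear weighted blending used to stitch overlapping density arcs can be illustrated with a simple ramp; the overlap length and constant-density arcs below are toy inputs, not CHAMP/GRACE data.

```python
import numpy as np

def blend(seg_a, seg_b, overlap):
    """Stitch two overlapping density arcs with a linear weight ramp:
    over the last `overlap` samples of seg_a and the first `overlap`
    samples of seg_b, the weight of seg_a falls linearly from 1 to 0."""
    w = np.linspace(1.0, 0.0, overlap)
    merged = w * seg_a[-overlap:] + (1.0 - w) * seg_b[:overlap]
    return np.concatenate([seg_a[:-overlap], merged, seg_b[overlap:]])

# Toy example: two constant-density arcs with a 4-sample overlap.
a = np.full(10, 2.0)
b = np.full(10, 4.0)
out = blend(a, b, overlap=4)
```

The blended series transitions smoothly from the first arc's value to the second's across the overlap, avoiding the step discontinuity a simple concatenation would produce.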

  11. Groundwater Source Identification Using Backward Fractional-Derivative Models

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Sun, H.; Zheng, C.

    2017-12-01

    The forward Fractional Advection Dispersion Equation (FADE) provides a useful model for non-Fickian transport in heterogeneous porous media. This presentation introduces the corresponding backward FADE model, to identify groundwater source location and release time. The backward method is developed from the theory of inverse problems, and the resultant backward FADE differs significantly from the traditional backward ADE because the fractional derivative is not self-adjoint and the probability density function for backward locations is highly skewed. Finally, the method is validated using tracer data from well-known field experiments.

  12. Development of surrogate models for the prediction of the flow around an aircraft propeller

    NASA Astrophysics Data System (ADS)

    Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros

    2018-05-01

In the present work, the derivation of two surrogate models (SMs) for modelling the flow around a propeller for small aircraft is presented. Both methodologies use derived functions based on computations with the detailed propeller geometry. The computations were performed using the k-ω shear stress transport model for turbulence. In the SMs, the propeller was modelled in a computational domain of disk-like geometry, where source terms were introduced in the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence relative to the detailed model, while also providing results closer to the available operational data. The regression-based model was the most accurate and required less computational time for convergence.
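The idea of representing the propeller by momentum source terms over a disk-like domain can be sketched with an actuator-disk stand-in: distribute the thrust over the disk volume with some radial loading shape, normalized so the volume integral recovers the total thrust. The loading shape below is hypothetical, not one of the paper's fitted polynomial or regression functions.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (np.trapz was removed in NumPy 2.0)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def axial_source_term(r, r_hub, r_tip, thrust, thickness):
    """Axial momentum source (N/m^3) for an actuator-disk propeller
    stand-in: a hypothetical radial loading ~ r* sqrt(1 - r*), scaled so
    the volume integral over the disk recovers the total thrust."""
    def shape(rr):
        rs = (rr - r_hub) / (r_tip - r_hub)
        return rs * np.sqrt(np.clip(1.0 - rs, 0.0, None))
    rr = np.linspace(r_hub, r_tip, 4001)
    norm = trapz(shape(rr) * 2.0 * np.pi * rr, rr) * thickness
    return thrust * shape(r) / norm

# 1 kN of thrust over a 0.05 m thick disk, hub radius 0.1 m, tip 1.0 m.
r = np.linspace(0.1, 1.0, 2001)
s = axial_source_term(r, r_hub=0.1, r_tip=1.0, thrust=1000.0, thickness=0.05)
recovered = trapz(s * 2.0 * np.pi * r, r) * 0.05   # volume integral of the source
```

The normalization is the key design point: whatever loading shape is chosen (polynomial or regression-derived), integrating the momentum source over the disk volume must return the propeller's thrust.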

  13. An alternative approach to probabilistic seismic hazard analysis in the Aegean region using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Weatherill, Graeme; Burton, Paul W.

    2010-09-01

The Aegean is the most seismically active and tectonically complex region in Europe. Damaging earthquakes have occurred here throughout recorded history, often resulting in considerable loss of life. The Monte Carlo method of probabilistic seismic hazard analysis (PSHA) is used to determine the level of ground motion likely to be exceeded in a given time period. Multiple random simulations of seismicity are generated to calculate, directly, the ground motion for a given site. Within the seismic hazard analysis we explore the impact of different seismic source models, incorporating both uniform zones and distributed seismicity. A new, simplified, seismic source model, derived from seismotectonic interpretation, is presented for the Aegean region. This is combined into the epistemic uncertainty analysis alongside existing source models for the region, and models derived by a K-means cluster analysis approach. Seismic source models derived using the K-means approach offer a degree of objectivity and reproducibility into the otherwise subjective approach of delineating seismic sources using expert judgment. Similar review and analysis is undertaken for the selection of peak ground acceleration (PGA) attenuation models, incorporating into the epistemic analysis Greek-specific models, European models and a Next Generation Attenuation model. Hazard maps for PGA on a "rock" site with a 10% probability of being exceeded in 50 years are produced and different source and attenuation models are compared. These indicate that Greek-specific attenuation models, with their smaller aleatory variability terms, produce lower PGA hazard, whilst recent European models and the Next Generation Attenuation (NGA) model produce similar results. The Monte Carlo method is extended further to assimilate epistemic uncertainty into the hazard calculation, thus integrating across several appropriate source and PGA attenuation models.
Site condition and fault-type are also integrated into the hazard mapping calculations. These hazard maps are in general agreement with previous maps for the Aegean, recognising the highest hazard in the Ionian Islands, Gulf of Corinth and Hellenic Arc. Peak ground accelerations for some sites in these regions reach as high as 500-600 cm s^-2 using European/NGA attenuation models, and 400-500 cm s^-2 using Greek attenuation models.
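The Monte Carlo PSHA loop described above (simulate synthetic catalogues, apply an attenuation relation with aleatory scatter, read off annual maxima) can be sketched as follows. The toy GMPE, activity rate, and magnitude bounds are illustrative assumptions, not the Aegean models used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_psha(n_years, rate, b, m_min, m_max, dist_fn, gmpe, sigma):
    """One synthetic history: Poisson event counts per year, truncated
    Gutenberg-Richter magnitudes (inverse-CDF sampling), site distances
    from dist_fn, and log-normally scattered PGA from the attenuation
    model. Returns the annual-maximum PGA series."""
    pga_max = np.zeros(n_years)
    beta = b * np.log(10.0)
    for y in range(n_years):
        n = rng.poisson(rate)
        if n == 0:
            continue
        u = rng.uniform(size=n)
        m = m_min - np.log(1 - u * (1 - np.exp(-beta * (m_max - m_min)))) / beta
        r = dist_fn(n)
        ln_pga = gmpe(m, r) + sigma * rng.normal(size=n)   # aleatory scatter
        pga_max[y] = np.exp(ln_pga).max()
    return pga_max

# Hypothetical inputs: 10 events/yr, b = 1, toy GMPE ln PGA = -1 + M - ln r.
gmpe = lambda m, r: -1.0 + 1.0 * m - 1.0 * np.log(r)
dist = lambda n: rng.uniform(10.0, 100.0, n)
annual_max = monte_carlo_psha(20000, 10.0, 1.0, 4.5, 7.5, dist, gmpe, 0.6)
# PGA with 10% chance of exceedance in 50 yr (~475-yr return period):
pga_475 = np.quantile(annual_max, 1.0 - 1.0 / 475.0)
```

Reading the design ground motion directly off the simulated annual maxima is what lets this approach absorb epistemic uncertainty: each simulated year can draw its source and attenuation model from the logic-tree alternatives.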

  14. Invariant models in the inversion of gravity and magnetic fields and their derivatives

    NASA Astrophysics Data System (ADS)

    Ialongo, Simone; Fedi, Maurizio; Florio, Giovanni

    2014-11-01

In potential field inversion problems we usually solve underdetermined systems, and realistic solutions may be obtained by introducing a depth-weighting function in the objective function. The choice of the exponent of this power law is crucial. It has been suggested to determine it from the field decay due to a single source block; alternatively, it has been defined as the structural index of the investigated source distribution. In both cases, when k-order derivatives of the potential field are considered, the depth-weighting exponent has to be increased by k with respect to that of the potential field itself, in order to obtain consistent source model distributions. We show instead that invariant and realistic source-distribution models are obtained using the same depth-weighting exponent for the magnetic field and for its k-order derivatives. A similar behavior also occurs in the gravity case. In practice we found that the depth-weighting exponent is invariant for a given source model and equal to that of the corresponding magnetic field, in the magnetic case, and of the 1st derivative of the gravity field, in the gravity case. In the case of the regularized inverse problem, with depth weighting and general constraints, the mathematical demonstration of such invariance is difficult, because of its non-linearity, and of its variable form, due to the different constraints used. However, tests performed on a variety of synthetic cases seem to confirm the invariance of the depth-weighting exponent. A final consideration regards the role of the regularization parameter; we show that the regularization can severely affect the depth to the source because the estimated depth tends to increase proportionally with the size of the regularization parameter. Hence, some care is needed in handling the combined effect of the regularization parameter and depth weighting.
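The depth-weighting function at issue can be written down directly. The sketch below uses the common (z + z0)^(-beta/2) form, with beta = 2 matched to a gravity-like 1/z^2 kernel decay purely for illustration; the paper's point is precisely about how beta should be chosen when field derivatives are inverted.

```python
import numpy as np

def depth_weighting(z, z0, beta):
    """Depth-weighting function w(z) = (z + z0)^(-beta/2), used to
    counteract the natural decay of potential-field kernels so that
    recovered sources are not all concentrated at the surface."""
    return (z + z0) ** (-beta / 2.0)

# Toy check of the compensation idea: directly below a point mass the
# gravity kernel decays as 1/z^2, so beta = 2 makes w(z)^2 match the
# kernel decay and flattens the depth sensitivity.
z = np.linspace(1.0, 10.0, 10)
kernel = 1.0 / z**2                      # gravity-like kernel decay
w = depth_weighting(z, z0=0.0, beta=2.0)
compensated = kernel / w**2              # uniform sensitivity across depth
```

When the exponent is chosen consistently, the weighted kernel is depth-neutral; the abstract's finding is that for field derivatives the same exponent as for the field itself keeps the recovered source model invariant.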

  15. Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration

    NASA Technical Reports Server (NTRS)

    Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)

    1981-01-01

The anomalous potentials of gravity and magnetic fields, and their spatial derivatives, were calculated on a spherical Earth for an arbitrary body represented by an equivalent point-source distribution of gravity poles or magnetic dipoles. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.
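The core of the Gauss-Legendre quadrature approach, replacing the volume integral by a weighted sum over equivalent point sources at the quadrature nodes, can be sketched in flat-Earth Cartesian form for the vertical gravity effect of a prism. The geometry and density contrast are illustrative; the paper's implementation works in spherical coordinates.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def gz_prism_glq(xyz_obs, bounds, density, order=8):
    """Vertical gravity of a rectangular prism by Gauss-Legendre
    quadrature: the volume integral becomes a weighted sum over
    equivalent point masses at the quadrature nodes (z positive down)."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    (x1, x2), (y1, y2), (z1, z2) = bounds
    # Map nodes from [-1, 1] onto each coordinate interval.
    gx = 0.5 * (x2 - x1) * nodes + 0.5 * (x2 + x1)
    gy = 0.5 * (y2 - y1) * nodes + 0.5 * (y2 + y1)
    gz = 0.5 * (z2 - z1) * nodes + 0.5 * (z2 + z1)
    jac = 0.125 * (x2 - x1) * (y2 - y1) * (z2 - z1)
    X, Y, Z = np.meshgrid(gx, gy, gz, indexing="ij")
    WX, WY, WZ = np.meshgrid(weights, weights, weights, indexing="ij")
    dx, dy, dz = X - xyz_obs[0], Y - xyz_obs[1], Z - xyz_obs[2]
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    return G * density * jac * np.sum(WX * WY * WZ * dz / r**3)

# 200 m cube of +300 kg/m^3 density contrast, top at 100 m depth,
# observed from a point directly above it on the surface.
gz = gz_prism_glq((0.0, 0.0, 0.0), ((-100, 100), (-100, 100), (100, 300)), 300.0)
```

For this geometry the result is close to the point-mass value G*M/r^2 with r at the cube's centre (about 4e-6 m/s^2, i.e. ~0.4 mGal), which is a quick sanity check on the quadrature.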

  16. Automation for System Safety Analysis

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  17. Computing the Sensitivity Kernels for 2.5-D Seismic Waveform Inversion in Heterogeneous, Anisotropic Media

    NASA Astrophysics Data System (ADS)

    Zhou, Bing; Greenhalgh, S. A.

    2011-10-01

2.5-D modeling and inversion techniques are much closer to reality than the simple and traditional 2-D seismic wave modeling and inversion. The sensitivity kernels required in full waveform seismic tomographic inversion are the Fréchet derivatives of the displacement vector with respect to the independent anisotropic model parameters of the subsurface. They give the sensitivity of the seismograms to changes in the model parameters. This paper applies two methods, called `the perturbation method' and `the matrix method', to derive the sensitivity kernels for 2.5-D seismic waveform inversion. We show that the two methods yield the same explicit expressions for the Fréchet derivatives using a constant-block model parameterization, and are applicable to both the line-source (2-D) and the point-source (2.5-D) cases. The method involves two Green's function vectors and their gradients, as well as the derivatives of the elastic modulus tensor with respect to the independent model parameters. The two Green's function vectors are the responses of the displacement vector to the two directed unit vectors located at the source and geophone positions, respectively; they can be generally obtained by numerical methods. The gradients of the Green's function vectors may be approximated in the same manner as the differential computations in the forward modeling. The derivatives of the elastic modulus tensor with respect to the independent model parameters can be obtained analytically, dependent on the class of medium anisotropy. Explicit expressions are given for two special cases—isotropic and tilted transversely isotropic (TTI) media. Numerical examples are given for the latter case, which involves five independent elastic moduli (or Thomsen parameters) plus one angle defining the symmetry axis.
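The perturbation method's view of the Fréchet derivatives can be sketched numerically: column j of the Jacobian is the change in the seismogram per unit perturbation of model parameter j. A stand-in linear forward operator is used below so the numerical Jacobian can be checked against the exact one; this is an illustration of the definition, not the paper's analytic kernels.

```python
import numpy as np

def frechet_kernels(forward, model, h=1e-6):
    """Perturbation-method approximation of the Frechet derivatives:
    J[:, j] = (d(m + h e_j) - d(m)) / h, i.e. the sensitivity of each
    data sample to model parameter j."""
    d0 = forward(model)
    J = np.empty((d0.size, model.size))
    for j in range(model.size):
        m = model.copy()
        m[j] += h
        J[:, j] = (forward(m) - d0) / h
    return J

# Toy forward operator standing in for the 2.5-D solver: a known linear
# map, for which the numerical Jacobian must recover the matrix itself.
A = np.array([[1.0, 2.0], [0.5, -1.0], [3.0, 0.0]])
fwd = lambda m: A @ m
J = frechet_kernels(fwd, np.array([0.2, 0.4]))
```

For a genuinely nonlinear wave solver the same construction applies per parameter block, which is why the matrix method's closed-form expressions matter: they avoid one extra forward solve per model parameter.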

  18. A semi-empirical analysis of strong-motion peaks in terms of seismic source, propagation path, and local site conditions

    NASA Astrophysics Data System (ADS)

    Kamiyama, M.; Orourke, M. J.; Flores-Berrones, R.

    1992-09-01

A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on the peak velocity, which is a key ground motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong motions obtained in Japan. In the derivation, statistical considerations are used in the selection of the model itself and the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables, and dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for the peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.
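The regression setup with magnitude, distance, and site dummy variables can be sketched on synthetic data: one dummy column per site (minus one absorbed in the intercept) identifies each site's amplification term. The ln PV functional form and all coefficients below are illustrative, not the fitted Japanese-data model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Semi-empirical form: ln PV = a + b*M + c*ln(r) + sum_k d_k * S_k,
# where S_k are 0/1 dummy variables identifying each recording site.
n, n_sites = 300, 3
M = rng.uniform(5.0, 7.5, n)
r = rng.uniform(10.0, 200.0, n)
site = rng.integers(0, n_sites, n)
d_true = np.array([0.0, 0.4, 0.8])       # site amplification terms (synthetic)
ln_pv = 1.0 + 0.9 * M - 1.1 * np.log(r) + d_true[site] + 0.1 * rng.normal(size=n)

# Design matrix: intercept, M, ln r, and dummies for sites 2..n_sites
# (site 1 is absorbed into the intercept to avoid collinearity).
X = np.column_stack([np.ones(n), M, np.log(r)] +
                    [(site == k).astype(float) for k in range(1, n_sites)])
coef, *_ = np.linalg.lstsq(X, ln_pv, rcond=None)
```

Least squares recovers the magnitude and distance coefficients along with the per-site amplification offsets, mirroring how the dummy-variable scheme separates local site effects from source and path scaling.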

  19. Population at risk: using areal interpolation and Twitter messages to create population models for burglaries and robberies

    PubMed Central

    2018-01-01

Population at risk of crime varies due to the characteristics of a population as well as the crime generator and attractor places where crime is located. This establishes different crime opportunities for different crimes. However, there have been few modeling efforts that derive spatiotemporal population models to allow accurate assessment of population exposure to crime. This study develops population models to depict the spatial distribution of people who have a heightened crime risk for burglaries and robberies. The data used in the study include: Census data as source data for the existing population, Twitter geo-located data and locations of schools as ancillary data to redistribute the source data more accurately in space, and finally gridded population and crime data to evaluate the derived population models. To create the models, a density-weighted areal interpolation technique was used that disaggregates the source data into smaller spatial units considering the spatial distribution of the ancillary data. The models were evaluated with validation data that assess the interpolation error and spatial statistics that examine their relationship with the crime types. Our approach derived population models of a finer resolution that can assist in more precise spatial crime analyses and also provide accurate information about crime rates to the public. PMID:29887766
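The density-weighted (dasymetric) redistribution step can be sketched as follows: each source zone's count is split across its grid cells in proportion to an ancillary density surface, while each zone's total is preserved. The zone counts and ancillary values are toy inputs standing in for Census tracts and Twitter message densities.

```python
import numpy as np

def density_weighted_interpolation(source_counts, ancillary_density, membership):
    """Disaggregate coarse source-zone populations onto a fine grid in
    proportion to an ancillary density surface, preserving each zone's
    total (the pycnophylactic property)."""
    est = np.zeros_like(ancillary_density, dtype=float)
    for zone, count in enumerate(source_counts):
        cells = membership == zone
        w = ancillary_density[cells]
        if w.sum() == 0:                 # no ancillary signal: uniform split
            est[cells] = count / cells.sum()
        else:
            est[cells] = count * w / w.sum()
    return est

# Two census tracts (zones 0 and 1) covering six grid cells:
membership = np.array([0, 0, 0, 1, 1, 1])
tweets = np.array([5.0, 0.0, 5.0, 1.0, 1.0, 2.0])   # ancillary density
pop = density_weighted_interpolation([100.0, 40.0], tweets, membership)
```

Each tract's population is preserved exactly while being pushed toward cells with ancillary activity, which is what lets the resulting surface track where people actually are during the day.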

  20. A comparison of U.S. Geological Survey three-dimensional model estimates of groundwater source areas and velocities to independently derived estimates, Idaho National Laboratory and vicinity, Idaho

    USGS Publications Warehouse

    Fisher, Jason C.; Rousseau, Joseph P.; Bartholomay, Roy C.; Rattray, Gordon W.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Energy, evaluated a three-dimensional model of groundwater flow in the fractured basalts and interbedded sediments of the eastern Snake River Plain aquifer at and near the Idaho National Laboratory to determine if model-derived estimates of groundwater movement are consistent with (1) results from previous studies on water chemistry type, (2) the geochemical mixing at an example well, and (3) independently derived estimates of the average linear groundwater velocity. Simulated steady-state flow fields were analyzed using backward particle-tracking simulations that were based on a modified version of the particle tracking program MODPATH. Model results were compared to the 5-microgram-per-liter lithium contour interpreted to represent the transition from a water type that is primarily composed of tributary valley underflow and streamflow-infiltration recharge to a water type primarily composed of regional aquifer water. This comparison indicates several shortcomings in the way the model represents flow in the aquifer. The eastward movement of tributary valley underflow and streamflow-infiltration recharge is overestimated in the north-central part of the model area and underestimated in the central part of the model area. Model inconsistencies can be attributed to large contrasts in hydraulic conductivity between hydrogeologic zones. Sources of water at well NPR-W01 were identified using backward particle tracking, and they were compared to the relative percentages of source water chemistry determined using geochemical mass balance and mixing models. 
The particle tracking results compare reasonably well with the chemistry results for groundwater derived from surface-water sources (-28 percent error), but overpredict the proportion of groundwater derived from regional aquifer water (108 percent error) and underpredict the proportion of groundwater derived from tributary valley underflow from the Little Lost River valley (-74 percent error). These large discrepancies may be attributed to large contrasts in hydraulic conductivity between hydrogeologic zones and (or) a short-circuiting of underflow from the Little Lost River valley to an area of high hydraulic conductivity. Independently derived estimates of the average groundwater velocity at 12 well locations within the upper 100 feet of the aquifer were compared to model-derived estimates. Agreement between velocity estimates was good at wells with travel paths located in areas of sediment-rich rock (root-mean-square error [RMSE] = 5.2 feet per day [ft/d]) and poor in areas of sediment-poor rock (RMSE = 26.2 ft/d); simulated velocities in sediment-poor rock were 2.5 to 4.5 times larger than independently derived estimates at wells USGS 1 (less than 14 ft/d) and USGS 100 (less than 21 ft/d). The model's overprediction of groundwater velocities in sediment-poor rock may be attributed to large contrasts in hydraulic conductivity and a very large, model-wide estimate of vertical anisotropy (14,800).
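    The well-by-well velocity comparison above reduces to a root-mean-square error over wells; a minimal sketch with invented velocities in feet per day (not the report's data):

```python
import numpy as np

# Independently derived vs. model-derived average linear groundwater
# velocities at four hypothetical wells (ft/d).
v_independent = np.array([4.0, 9.0, 13.0, 20.0])
v_model = np.array([6.0, 8.0, 40.0, 55.0])

rmse = np.sqrt(np.mean((v_model - v_independent) ** 2))
overprediction = v_model / v_independent   # per-well ratio
```

    A large RMSE driven by a few wells, as in the sediment-poor case above, shows up directly in the per-well ratios.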

  1. Time-dependent Modeling of Pulsar Wind Nebulae

    NASA Astrophysics Data System (ADS)

    Vorster, M. J.; Tibolla, O.; Ferreira, S. E. S.; Kaufmann, S.

    2013-08-01

    A spatially independent model that calculates the time evolution of the electron spectrum in a spherically expanding pulsar wind nebula (PWN) is presented, allowing one to make broadband predictions for the PWN's non-thermal radiation. The source spectrum of electrons injected at the termination shock of the PWN is chosen to be a broken power law. In contrast to previous PWN models of a similar nature, the source spectrum has a discontinuity in intensity at the transition between the low- and high-energy components. To test the model, it is applied to the young PWN G21.5-0.9, where it is found that a discontinuous source spectrum can model the emission at all wavelengths better than a continuous one. The model is also applied to the unidentified sources HESS J1427-608 and HESS J1507-622. Parameters are derived for these two candidate nebulae that are consistent with the values predicted for other PWNe. For HESS J1427-608, a present day magnetic field of B_age = 0.4 μG is derived. As a result of the small present day magnetic field, this source has a low synchrotron luminosity, while remaining bright at GeV/TeV energies. It is therefore possible to interpret HESS J1427-608 within the ancient PWN scenario. For the second candidate PWN HESS J1507-622, a present day magnetic field of B_age = 1.7 μG is derived. Furthermore, for this candidate PWN a scenario is favored in the present paper in which HESS J1507-622 has been compressed by the reverse shock of the supernova remnant.
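    The discontinuous broken power-law injection spectrum described above can be sketched as follows; the break energy, spectral indices, and the size of the intensity jump are invented for illustration.

```python
import numpy as np

def source_spectrum(E, E_break=1e5, q1=1.5, q2=2.5, jump=0.1):
    """dN/dE of electrons injected at the termination shock: a broken
    power law whose high-energy branch is scaled so that its value at
    E_break is `jump` times the low-energy branch (the discontinuity)."""
    E = np.asarray(E, dtype=float)
    low = E ** (-q1)
    high = jump * E_break ** (q2 - q1) * E ** (-q2)
    return np.where(E < E_break, low, high)

E = np.logspace(2, 8, 7)      # electron energies (arbitrary units)
spec = source_spectrum(E)
```

    Setting `jump = 1` recovers the continuous broken power law that previous models of this kind used.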

  2. Clonal analysis of synovial fluid stem cells to characterize and identify stable mesenchymal stromal cell/mesenchymal progenitor cell phenotypes in a porcine model: a cell source with enhanced commitment to the chondrogenic lineage.

    PubMed

    Ando, Wataru; Kutcher, Josh J; Krawetz, Roman; Sen, Arindom; Nakamura, Norimasa; Frank, Cyril B; Hart, David A

    2014-06-01

    Previous studies have demonstrated that porcine synovial membrane stem cells can adhere to a cartilage defect in vivo through the use of a tissue-engineered construct approach. To optimize this model, we wanted to compare effectiveness of tissue sources to determine whether porcine synovial fluid, synovial membrane, bone marrow and skin sources replicate our understanding of synovial fluid mesenchymal stromal cells or mesenchymal progenitor cells from humans both at the population level and the single-cell level. Synovial fluid clones were subsequently isolated and characterized to identify cells with a highly characterized optimal phenotype. The chondrogenic, osteogenic and adipogenic potentials were assessed in vitro for skin, bone marrow, adipose, synovial fluid and synovial membrane-derived stem cells. Synovial fluid cells then underwent limiting dilution analysis to isolate single clonal populations. These clonal populations were assessed for proliferative and differentiation potential by use of standardized protocols. Porcine-derived cells demonstrated the same relationship between cell sources as that demonstrated previously for humans, suggesting that the pig may be an ideal preclinical animal model. Synovial fluid cells demonstrated the highest chondrogenic potential that was further characterized, demonstrating the existence of a unique clonal phenotype with enhanced chondrogenic potential. Porcine stem cells demonstrate characteristics similar to those in human-derived mesenchymal stromal cells from the same sources. Synovial fluid-derived stem cells contain an inherent phenotype that may be optimal for cartilage repair. This must be more fully investigated for future use in the in vivo tissue-engineered construct approach in this physiologically relevant preclinical porcine model. Copyright © 2014 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  3. SPE-5 Ground-Motion Prediction at Far-Field Geophone and Accelerometer Array Sites and SPE-5 Moment and Corner-Frequency Prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaoning; Patton, Howard John; Chen, Ting

    2016-03-25

    This report offers predictions for SPE-5 ground motion at far-field geophone and accelerometer array sites. The predictions comprise waveforms and spectral amplitudes at selected geophone sites using the Denny & Johnson source model and a source model derived from SPE data; waveforms, peak velocities, and peak accelerations at accelerometer sites using the SPE source model and finite-difference simulations with the LLNL 3D velocity model; and the SPE-5 moment and corner frequency.

  4. On the derivation of approximations to cellular automata models and the assumption of independence.

    PubMed

    Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V

    2014-07-01

    Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes; however, these models only hold under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model. These sources are the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model and the assumption of independence between the state of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the inaccuracy between our approximation and the cellular automata is completely based on the assumption of independence. Copyright © 2014 Elsevier Inc. All rights reserved.
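    The independence assumption can be probed with a toy experiment: a discrete-time proliferation CA on a periodic 1-D lattice versus the logistic mean-field approximation that ignores spatial correlations. The lattice size, proliferation probability, and initial condition are invented; this is not the paper's exact automaton.

```python
import numpy as np

# CA: each agent attempts, with probability p, to place a daughter on a
# randomly chosen neighbouring site; the attempt fails if the site is full.
rng = np.random.default_rng(1)
L, p, steps = 1000, 0.05, 200
occ = np.zeros(L, dtype=bool)
occ[rng.choice(L, size=50, replace=False)] = True   # initial density 0.05

density_ca = []
for _ in range(steps):
    agents = np.flatnonzero(occ)
    for i in rng.permutation(agents):
        if rng.random() < p:                    # proliferation attempt
            j = (i + rng.choice([-1, 1])) % L   # random neighbour
            if not occ[j]:
                occ[j] = True                   # place daughter if empty
    density_ca.append(occ.mean())

# Mean-field logistic map under the independence assumption.
c = 0.05
density_mf = []
for _ in range(steps):
    c = c + p * c * (1.0 - c)
    density_mf.append(c)
```

    Because daughters cluster next to their parents, occupancy of neighbouring sites is correlated, and the independence-assuming mean-field model overestimates the growth of the CA, illustrating the source of error the paper isolates.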

  5. Comparing geological and statistical approaches for element selection in sediment tracing research

    NASA Astrophysics Data System (ADS)

    Laceby, J. Patrick; McMahon, Joe; Evrard, Olivier; Olley, Jon

    2015-04-01

    Elevated suspended sediment loads reduce reservoir capacity and significantly increase the cost of operating water treatment infrastructure, making the management of sediment supply to reservoirs of increasing importance. Sediment fingerprinting techniques can be used to determine the relative contributions of different sources of sediment accumulating in reservoirs. The objective of this research is to compare geological and statistical approaches to element selection for sediment fingerprinting modelling. Time-integrated samplers (n=45) were used to obtain source samples from four major subcatchments flowing into the Baroon Pocket Dam in South East Queensland, Australia. The geochemistry of potential sources was compared to the geochemistry of sediment cores (n=12) sampled in the reservoir. The geochemical approach selected elements for modelling that provided expected, observed and statistical discrimination between sediment sources. Two statistical approaches selected elements for modelling with the Kruskal-Wallis H-test and Discriminant Function Analysis (DFA). In particular, two different significance levels (0.05 & 0.35) for the DFA were included to investigate the importance of element selection on modelling results. A distribution model determined the relative contributions of different sources to sediment sampled in the Baroon Pocket Dam. Elemental discrimination was expected between one subcatchment (Obi Obi Creek) and the remaining subcatchments (Lexys, Falls and Bridge Creek). Six major elements were expected to provide discrimination. Of these six, only Fe2O3 and SiO2 provided expected, observed and statistical discrimination. Modelling results with this geological approach indicated 36% (+/- 9%) of sediment sampled in the reservoir cores was from mafic-derived sources and 64% (+/- 9%) was from felsic-derived sources. 
The geological and the first statistical approach (DFA0.05) differed by only 1% (σ 5%) for 5 out of 6 model groupings, with only the Lexys Creek modelling results differing significantly (35%). The statistical model with expanded elemental selection (DFA0.35) differed from the geological model by an average of 30% across all 6 models. Elemental selection for sediment fingerprinting therefore has the potential to affect modelling results. Accordingly, it is important to incorporate both robust geological and statistical approaches when selecting elements for sediment fingerprinting. For the Baroon Pocket Dam, management should focus on reducing the supply of sediments derived from felsic sources in each of the subcatchments.
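    The Kruskal-Wallis screening step can be sketched directly; the H statistic below is the standard tie-free form, and the element concentrations for the three source groups are invented.

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction)."""
    data = np.concatenate(groups)
    ranks = np.empty(data.size)
    ranks[np.argsort(data)] = np.arange(1, data.size + 1)
    n_total = data.size
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + g.size]
        h += g.size * (r.mean() - (n_total + 1) / 2.0) ** 2
        start += g.size
    return 12.0 * h / (n_total * (n_total + 1))

# Invented Fe2O3 concentrations for three sediment source groups.
fe_a = np.array([5.1, 5.3, 5.2, 5.4])
fe_b = np.array([7.0, 7.2, 6.9, 7.1])
fe_c = np.array([6.0, 6.1, 5.9, 6.2])
H = kruskal_h(fe_a, fe_b, fe_c)
```

    H is compared against the chi-square critical value (here df = 2) to decide whether the element discriminates between source groups and should enter the mixing model.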

  6. Modeling of the dolphin's clicking sound source: The influence of the critical parameters

    NASA Astrophysics Data System (ADS)

    Dubrovsky, N. A.; Gladilin, A.; Møhl, B.; Wahlberg, M.

    2004-07-01

    A physical model and a mathematical model of the dolphin’s source of echolocation clicks have recently been proposed. The physical model includes a bottle of pressurized air connected to the atmosphere with an underwater rubber tube. A compressing rubber ring is placed on the underwater portion of the tube. The ring blocks the air jet passing through the tube from the bottle. This ring can be brought into self-oscillation by the air jet. In the simplest case, the ring displacement follows a repeated triangular waveform. Because the acoustic pressure gradient is proportional to the second time derivative of the displacement, clicks arise at the bends of the displacement waveform. The mathematical model describes the dipole oscillations of a sphere “frozen” in the ring and calculates the waveform and the sound pressure of the generated clicks. The critical parameters of the mathematical model are the radius of the sphere and the peak value and duration of the triangular displacement curve. This model allows one to solve both the forward (deriving the properties of acoustic clicks from the known source parameters) and the inverse (calculating the source parameters from the acoustic data) problems. Data from click records of Odontocetes were used to derive both the displacement waveforms and the size of the “frozen sphere” or a structure functionally similar to it. The mathematical model predicts a maximum source level of up to 235 dB re 1 μPa at 1-m range when using a 5-cm radius of the “frozen” sphere and a 4-mm maximal displacement. The predicted sound pressure level is similar to that of the clicks produced by Odontocetes.
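    The click-generation mechanism (pressure proportional to the second time derivative of displacement) can be illustrated numerically; the waveform parameters are invented, and `np.gradient` stands in for analytic differentiation.

```python
import numpy as np

# Triangular displacement waveform of the compressing ring; the radiated
# pressure is proportional to the second time derivative, so it vanishes
# on the linear segments and spikes at the bends. Parameters are invented.
dt = 1e-5                                   # sample interval, s
t = np.arange(0.0, 2e-3, dt)
period, amp = 5e-4, 4e-3                    # repetition period (s), peak (m)
phase = (t % period) / period
disp = amp * 2.0 * np.abs(phase - 0.5)      # triangle, bends at phase 0, 0.5

velocity = np.gradient(disp, dt)
accel = np.gradient(velocity, dt)           # spikes = the "clicks"
```

    The largest values of |accel| fall exactly at the bends of the triangle, which is where the model places the clicks.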

  7. Algorithms for System Identification and Source Location.

    NASA Astrophysics Data System (ADS)

    Nehorai, Arye

    This thesis deals with several topics in least squares estimation and applications to source location. It begins with a derivation of a mapping between Wiener theory and Kalman filtering for nonstationary autoregressive moving average (ARMA) processes. Applying time domain analysis, connections are found between time-varying state space realizations and input-output impulse response by matrix fraction description (MFD). Using these connections, the whitening filters are derived by the two approaches, and the Kalman gain is expressed in terms of Wiener theory. Next, fast estimation algorithms are derived in a unified way as special cases of the Conjugate Direction Method. The fast algorithms included are the block Levinson, fast recursive least squares, ladder (or lattice) and fast Cholesky algorithms. The results give a novel derivation and interpretation for all these methods, which are efficient alternatives to available recursive system identification algorithms. Multivariable identification algorithms are usually designed only for left MFD models. In this work, recursive multivariable identification algorithms are derived for right MFD models with diagonal denominator matrices. The algorithms are of prediction error and model reference type. Convergence analysis results obtained by the Ordinary Differential Equation (ODE) method are presented along with simulations. Sources of energy can be located by estimating time differences of arrival (TDOA's) of waves between the receivers. A new method for TDOA estimation is proposed for multiple unknown ARMA sources and additive correlated receiver noise. The method is based on a formula that uses only the receiver cross-spectra and the source poles. Two algorithms are suggested that allow tradeoffs between computational complexity and accuracy. A new time delay model is derived and used to show the applicability of the methods for non-integer TDOA's. 
Results from simulations illustrate the performance of the algorithms. The last chapter analyzes the response of exact least squares predictors for enhancement of sinusoids with additive colored noise. Using the matrix inversion lemma and the Christoffel-Darboux formula, the frequency response and amplitude gain of the sinusoids are expressed as functions of the signal and noise characteristics. The results generalize the available white noise case.
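    The TDOA idea can be sketched with the classical cross-correlation estimator, a simpler relative of the cross-spectral method proposed in the thesis; the signal, noise level, and delay below are invented.

```python
import numpy as np

# Two receivers observe the same broadband source with a relative delay;
# the delay is recovered from the peak of the circular cross-correlation
# computed via FFT. Signal, noise level, and delay are invented.
rng = np.random.default_rng(2)
n, true_delay = 4096, 37
s = rng.normal(size=n)                                  # source signal
x1 = s + 0.1 * rng.normal(size=n)                       # receiver 1
x2 = np.roll(s, true_delay) + 0.1 * rng.normal(size=n)  # receiver 2, delayed

xc = np.fft.ifft(np.fft.fft(x2) * np.conj(np.fft.fft(x1))).real
est_delay = int(np.argmax(xc))
```

    The thesis's cross-spectral formulation works in the frequency domain as well, but additionally exploits the source poles to handle multiple ARMA sources and correlated receiver noise, which this plain estimator does not.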

  8. Information Extraction for System-Software Safety Analysis: Calendar Year 2008 Year-End Report

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2009-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  9. Sediment delivery estimates in water quality models altered by resolution and source of topographic data.

    PubMed

    Beeson, Peter C; Sadeghi, Ali M; Lang, Megan W; Tomer, Mark D; Daughtry, Craig S T

    2014-01-01

    Moderate-resolution (30-m) digital elevation models (DEMs) are normally used to estimate slope for the parameterization of non-point source, process-based water quality models. These models, such as the Soil and Water Assessment Tool (SWAT), use the Universal Soil Loss Equation (USLE) and Modified USLE to estimate sediment loss. The slope length and steepness factor, a critical parameter in USLE, significantly affects sediment loss estimates. Depending on slope range, a twofold difference in slope estimation potentially results in as little as 50% change or as much as 250% change in the LS factor and subsequent sediment estimation. Recently, the availability of much finer-resolution (∼3 m) DEMs derived from Light Detection and Ranging (LiDAR) data has increased. However, the use of these data may not always be appropriate because slope values derived from fine spatial resolution DEMs are usually significantly higher than slopes derived from coarser DEMs. This increased slope results in considerable variability in modeled sediment output. This paper addresses the implications of parameterizing models using slope values calculated from DEMs with different spatial resolutions (90, 30, 10, and 3 m) and sources. Overall, we observed over a 2.5-fold increase in slope when using a 3-m instead of a 90-m DEM, which increased modeled soil loss using the USLE calculation by 130%. Care should be taken when using LiDAR-derived DEMs to parameterize water quality models because doing so can result in significantly higher slopes, which considerably alter modeled sediment loss. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
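    The slope sensitivity can be made concrete with a common form of the USLE LS factor (the Wischmeier-Smith S polynomial with a fixed slope-length exponent). This is an illustrative assumption, not the exact formulation used by SWAT, and the two slope values are invented.

```python
import numpy as np

def ls_factor(slope_pct, length_m=50.0):
    # Wischmeier-Smith S polynomial with a fixed slope-length exponent;
    # an illustrative form, not necessarily the one SWAT applies.
    theta = np.arctan(slope_pct / 100.0)
    s = 65.41 * np.sin(theta) ** 2 + 4.56 * np.sin(theta) + 0.065
    return (length_m / 22.13) ** 0.5 * s

ls_coarse = ls_factor(4.0)    # slope estimated from a coarse 90-m DEM
ls_fine = ls_factor(10.0)     # slope estimated from a 3-m LiDAR DEM
amplification = ls_fine / ls_coarse
```

    Because S grows faster than linearly with slope, a 2.5-fold increase in estimated slope more than triples the LS factor in this sketch, which is the mechanism behind the large modeled sediment differences reported above.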

  10. Use of MODIS Satellite Data to Evaluate Juniperus spp. Pollen Phenology to Support a Pollen Dispersal Model, PREAM, to Support Public Health Allergy Alerts

    NASA Technical Reports Server (NTRS)

    Luvall, J. C.; Sprigg, W.; Levetin, E.; Huete, A.; Nickovic, S.; Pejanovic, G. A.; Vukovic, A.; VandeWater, P.; Budge, A.; Hudspeth, W.; hide

    2012-01-01

    Juniperus spp. pollen is a significant aeroallergen that can be transported 200-600 km from the source. Local observations of Juniperus spp. phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. Methods: The Dust REgional Atmospheric Model (DREAM) is a verified model for atmospheric dust transport modeling using MODIS data products to identify source regions and quantities of dust. We successfully modified the DREAM model to incorporate pollen transport (PREAM) and used MODIS satellite images to develop Juniperus ashei pollen input source masks. The Pollen Release Potential Source Map, also referred to as a source mask in model applications, may use different satellite platforms and sensors and a variety of data sets other than the USGS GAP data we used to map J. ashei cover type. MODIS-derived percent tree cover is obtained from the MODIS Vegetation Continuous Fields (VCF) product (collections 3 and 4, MOD44B, 500 and 250 m grid resolution). We use updated 2010 values to calculate pollen concentration at the source (J. ashei). The original MODIS-derived values are converted from the native approx. 250 m resolution to 990 m (approx. 1 km) for the calculation of a mask that fits the model (PREAM) resolution. Results: The simulation period was chosen to cover the last 2 weeks of December 2010. The PREAM-modeled near-surface concentrations (N m-3) show the transport patterns of J. ashei pollen over a 5 day period (Fig. 2). Typical scales of the simulated transport process are regional.

  11. Latent Heating Retrieval from TRMM Observations Using a Simplified Thermodynamic Model

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Olson, William S.

    2003-01-01

    A procedure for the retrieval of hydrometeor latent heating from TRMM active and passive observations is presented. The procedure is based on current methods for estimating multiple-species hydrometeor profiles from TRMM observations. The species include: cloud water, cloud ice, rain, and graupel (or snow). A three-dimensional wind field is prescribed based on the retrieved hydrometeor profiles, and, assuming a steady-state, the sources and sinks in the hydrometeor conservation equations are determined. Then, the momentum and thermodynamic equations, in which the heating and cooling are derived from the hydrometeor sources and sinks, are integrated one step forward in time. The hydrometeor sources and sinks are reevaluated based on the new wind field, and the momentum and thermodynamic equations are integrated one more step. The reevaluation-integration process is repeated until a steady state is reached. The procedure is tested using cloud model simulations. Cloud-model derived fields are used to synthesize TRMM observations, from which hydrometeor profiles are derived. The procedure is applied to the retrieved hydrometeor profiles, and the latent heating estimates are compared to the actual latent heating produced by the cloud model. Examples of the procedure's application to real TRMM data are also provided.
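    The reevaluate-and-integrate loop above is, in essence, a fixed-point iteration run to steady state; here is a generic sketch with a stand-in one-step update (not the paper's momentum and thermodynamic equations):

```python
import numpy as np

def step(state):
    # Stand-in for "integrate the equations one time step with sources
    # recomputed from the current state"; a contraction, so it converges.
    return 0.9 * state + 0.1 * np.tanh(state) + 0.05

state = np.zeros(3)
for iteration in range(1000):
    new_state = step(state)
    if np.max(np.abs(new_state - state)) < 1e-10:
        break                      # steady state reached
    state = new_state
```

    The retrieval terminates when one more reevaluation-integration cycle no longer changes the state, exactly as in this loop's stopping test.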

  12. Moment-Tensor Spectra of Source Physics Experiments (SPE) Explosions in Granite

    NASA Astrophysics Data System (ADS)

    Yang, X.; Cleveland, M.

    2016-12-01

    We perform frequency-domain moment tensor inversions of Source Physics Experiments (SPE) explosions conducted in granite during Phase I of the experiment. We test the sensitivity of source moment-tensor spectra to factors such as the velocity model, selected dataset and smoothing and damping parameters used in the inversion to constrain the error bound of inverted source spectra. Using source moments and corner frequencies measured from inverted source spectra of these explosions, we develop a new explosion P-wave source model that better describes observed source spectra of these small and over-buried chemical explosions detonated in granite than classical explosion source models derived mainly from nuclear-explosion data. In addition to source moment and corner frequency, we analyze other features in the source spectra to investigate their physical causes.

  13. Modeling synchronous voltage source converters in transmission system planning studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosterev, D.N.

    1997-04-01

    A Voltage Source Converter (VSC) can benefit power utilities in many ways. To evaluate VSC performance in potential applications, the device has to be represented appropriately in planning studies. This paper addresses VSC modeling for EMTP, powerflow, and transient stability studies. First, the VSC operating principles are reviewed, and the device model for EMTP studies is presented. The ratings of VSC components are discussed, and the device operating characteristics are derived based on these ratings. A powerflow model is presented and various control modes are proposed. A detailed stability model is developed, and its step-by-step initialization procedure is described. A simplified stability model is also derived under stated assumptions. Finally, validation studies are performed to demonstrate the performance of the developed stability models and to compare them with EMTP simulations.

  14. Characterizing CO and NOy Sources and Relative Ambient Ratios in the Baltimore Area Using Ambient Measurements and Source Attribution Modeling

    NASA Astrophysics Data System (ADS)

    Simon, Heather; Valin, Luke C.; Baker, Kirk R.; Henderson, Barron H.; Crawford, James H.; Pusede, Sally E.; Kelly, James T.; Foley, Kristen M.; Chris Owen, R.; Cohen, Ronald C.; Timin, Brian; Weinheimer, Andrew J.; Possiel, Norm; Misenis, Chris; Diskin, Glenn S.; Fried, Alan

    2018-03-01

    Modeled source attribution information from the Community Multiscale Air Quality model was coupled with ambient data from the 2011 Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality Baltimore field study. We assess source contributions and evaluate the utility of using aircraft measured CO and NOy relationships to constrain emission inventories. We derive ambient and modeled ΔCO:ΔNOy ratios that have previously been interpreted to represent CO:NOy ratios in emissions from local sources. Modeled and measured ΔCO:ΔNOy are similar; however, measured ΔCO:ΔNOy has much more daily variability than modeled values. Sector-based tagging shows that regional transport, on-road gasoline vehicles, and nonroad equipment are the major contributors to modeled CO mixing ratios in the Baltimore area. In addition to those sources, on-road diesel vehicles, soil emissions, and power plants also contribute substantially to modeled NOy in the area. The sector mix is important because emitted CO:NOx ratios vary by several orders of magnitude among the emission sources. The model-predicted gasoline/diesel split remains constant across all measurement locations in this study. Comparison of ΔCO:ΔNOy to emitted CO:NOy is challenged by ambient and modeled evidence that free tropospheric entrainment, and atmospheric processing elevates ambient ΔCO:ΔNOy above emitted ratios. Specifically, modeled ΔCO:ΔNOy from tagged mobile source emissions is enhanced 5-50% above the emitted ratios at times and locations of aircraft measurements. We also find a correlation between ambient formaldehyde concentrations and measured ΔCO:ΔNOy suggesting that secondary CO formation plays a role in these elevated ratios. This analysis suggests that ambient urban daytime ΔCO:ΔNOy values are not reflective of emitted ratios from individual sources.
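    A ΔCO:ΔNOy ratio of the kind analyzed above can be sketched as the slope of a linear fit of CO against NOy, with the intercept absorbing the background; the mixing ratios below are invented (ppb).

```python
import numpy as np

# Ambient CO regressed against NOy: the slope estimates the ΔCO:ΔNOy
# enhancement ratio, while the intercept absorbs background CO.
rng = np.random.default_rng(3)
noy = rng.uniform(2.0, 40.0, 150)                    # NOy, ppb
co = 120.0 + 6.0 * noy + rng.normal(0.0, 5.0, 150)   # CO, ppb

slope, intercept = np.polyfit(noy, co, 1)
```

    The study's caveat applies to exactly this quantity: entrainment and secondary CO production push the fitted slope above the emitted CO:NOx ratios, so it should not be read directly as an emission ratio.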

  15. Lunar Neutral Exposphere Properties from Pickup Ion Analysis

    NASA Technical Reports Server (NTRS)

    Hartle, R. E.; Sarantos, M.; Killen, R.; Sittler, E. C. Jr.; Halekas, J.; Yokota, S.; Saito, Y.

    2009-01-01

    Composition and structure of neutral constituents in the lunar exosphere can be determined through measurements of phase space distributions of pickup ions borne from the exosphere [1]. An essential point made in an early study [1] and inferred by recent pickup ion measurements [2, 3] is that much lower neutral exosphere densities can be derived from ion mass spectrometer measurements of pickup ions than can be determined by conventional neutral mass spectrometers or remote sensing instruments. One approach for deriving properties of neutral exospheric source gases is to first compare observed ion spectra with pickup ion model phase space distributions. Neutral exosphere properties are then inferred by adjusting exosphere model parameters to obtain the best fit between the resulting model pickup ion distributions and the observed ion spectra. Adopting this path, we obtain ion distributions from a new general pickup ion model, an extension of a simpler analytic description obtained from the Vlasov equation with an ion source [4]. In turn, the ion source is formed from a three-dimensional exospheric density distribution, which can range from the classical Chamberlain type distribution to one with variable exobase temperatures and nonthermal constituents as well as those empirically derived. The initial stage of this approach uses the Moon's known neutral He and Na exospheres to derive He+ and Na+ pickup ion exospheres, including their phase space distributions, densities and fluxes. The neutral exospheres used are those based on existing models and remote sensing studies. As mentioned, future ion measurements can be used to constrain the pickup ion model and subsequently improve the neutral exosphere descriptions. The pickup ion model is also used to estimate the exosphere sources of recently observed pickup ions on KAGUYA [3]. 
Future missions carrying ion spectrometers (e.g., ARTEMIS) will be able to study the lunar neutral exosphere with great sensitivity, yielding the ion velocity spectra needed to further the analysis of parent neutral exosphere properties.

  16. Derivation of revised formulae for eddy viscous forces used in the ocean general circulation model

    NASA Technical Reports Server (NTRS)

    Chou, Ru Ling

    1988-01-01

    Presented is a re-derivation of the eddy viscous dissipation tensor commonly used in present oceanographic general circulation models. When isotropy is imposed, the currently used form of the tensor fails to reduce to the Laplacian operator. In this paper, the source of this error is identified through a consistent derivation of the tensor in both rectangular and earth spherical coordinates, and the correct form of the eddy viscous tensor is presented.

  17. PM2.5 pollution from household solid fuel burning practices in Central India: 2. Application of receptor models for source apportionment.

    PubMed

    Matawle, Jeevan Lal; Pervez, Shamsh; Deb, Manas Kanti; Shrivastava, Anjali; Tiwari, Suresh

    2018-02-01

    USEPA's UNMIX, positive matrix factorization (PMF) and effective variance-chemical mass balance (EV-CMB) receptor models were applied to chemically speciated profiles of 125 indoor PM 2.5 measurements, sampled longitudinally during 2012-2013 in low-income-group households of Central India that use solid fuels for cooking. A three-step source apportionment study was carried out to generate more confident source characterization. First, UNMIX6.0 extracted an initial number of source factors, which were used to execute PMF5.0 to extract source-factor profiles in the second step. Finally, locally derived source profiles analogous to those factors were supplied to EV-CMB8.2, together with the indoor receptor PM 2.5 chemical profile, to evaluate source contribution estimates (SCEs). The results of the combined use of the three receptor models show that UNMIX and PMF are useful tools for extracting the types of source categories within a small receptor dataset, and that EV-CMB can select the locally derived source profiles analogous to the PMF-extracted source categories for source apportionment. The source apportionment results also showed a threefold higher relative contribution of solid fuel burning emissions to indoor PM 2.5 compared with measurements reported for normal households with LPG stoves. The previously reported influential source marker species were found to be comparable to those extracted from PMF fingerprint plots. The PMF and CMB SCE results were also qualitatively similar. The performance fit measures of all three receptor models were cross-verified and validated and support each other, lending confidence to the source apportionment results.
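    The chemical mass balance step can be sketched as a linear least-squares problem: the receptor profile is a source-profile matrix times the source contribution vector (effective-variance weighting omitted here). The species fractions and contributions below are invented.

```python
import numpy as np

# Columns: solid-fuel burning, vehicle exhaust, crustal dust.
# Rows: mass fraction of each chemical species in each source profile.
F = np.array([[0.40, 0.05, 0.02],
              [0.10, 0.30, 0.01],
              [0.02, 0.05, 0.35],
              [0.08, 0.20, 0.10]])
true_s = np.array([30.0, 8.0, 5.0])   # source contributions, ug/m3
receptor = F @ true_s                 # synthesized receptor profile

# Source contribution estimates by ordinary least squares.
s_hat, *_ = np.linalg.lstsq(F, receptor, rcond=None)
```

    EV-CMB additionally weights each species by the combined uncertainty of its receptor and source-profile measurements, which this unweighted sketch leaves out.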

  18. Numerical simulation of time fractional dual-phase-lag model of heat transfer within skin tissue during thermal therapy.

    PubMed

    Kumar, Dinesh; Rai, K N

    2017-07-01

    In this paper, we investigated the thermal behavior of living biological tissues using a time fractional dual-phase-lag bioheat transfer (DPLBHT) model subjected to a Dirichlet boundary condition in the presence of metabolic and electromagnetic heat sources during thermal therapy. We solved this bioheat transfer model using a finite element Legendre wavelet Galerkin method (FELWGM) with the help of block pulse functions, in the sense of the Caputo fractional order derivative. We compared the results obtained from the FELWGM with the exact method in a specific case and found high accuracy. Results are interpreted as standard and anomalous cases corresponding to different orders of the time fractional DPLBHT model. The time needed to reach the hyperthermia temperature is discussed for both the standard and the time-fractional cases. The success of thermal therapy in the treatment of metastatic cancerous cells depends on the time fractional order derivative for precise prediction and control of temperature. The effect of variability of parameters such as the time fractional derivative, lagging times, blood perfusion coefficient, metabolic heat source and transmitted power on the dimensionless temperature distribution in skin tissue is discussed in detail. The physiological parameters have been estimated, corresponding to the value of the fractional order derivative, for hyperthermia treatment therapy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Perturbations of the seismic reflectivity of a fluid-saturated depth-dependent poroelastic medium.

    PubMed

    de Barros, Louis; Dietrich, Michel

    2008-03-01

    Analytical formulas are derived to compute the first-order effects produced by plane inhomogeneities on the point source seismic response of a fluid-filled stratified porous medium. The derivation is achieved by a perturbation analysis of the poroelastic wave equations in the plane-wave domain using the Born approximation. This approach yields the Frechet derivatives of the P-SV- and SH-wave responses in terms of the Green's functions of the unperturbed medium. The accuracy and stability of the derived operators are checked by comparing, in the time-distance domain, differential seismograms computed from these analytical expressions with complete solutions obtained by introducing discrete perturbations into the model properties. For vertical and horizontal point forces, it is found that the Frechet derivative approach is remarkably accurate for small and localized perturbations of the medium properties which are consistent with the Born approximation requirements. Furthermore, the first-order formulation appears to be stable at all source-receiver offsets. The porosity, consolidation parameter, solid density, and mineral shear modulus emerge as the most sensitive parameters in forward and inverse modeling problems. Finally, the amplitude-versus-angle response of a thin layer shows strong coupling effects between several model parameters.

  20. A hybrid phase-space and histogram source model for GPU-based Monte Carlo radiotherapy dose calculation

    NASA Astrophysics Data System (ADS)

    Townson, Reid W.; Zavgorodni, Sergei

    2014-12-01

    In GPU-based Monte Carlo simulations for radiotherapy dose calculation, source modelling from a phase-space source can be an efficiency bottleneck. Previously, this has been addressed using phase-space-let (PSL) sources, which provided significant efficiency enhancement. We propose that additional speed-up can be achieved through the use of a hybrid primary photon point source model combined with a secondary PSL source. A novel phase-space-derived and histogram-based implementation of this model has been integrated into gDPM v3.0. Additionally, a simple method for approximately deriving target photon source characteristics from a phase-space that does not contain inheritable particle history variables (LATCH) has been demonstrated to select over 99% of the true target photons with only ~0.3% contamination (for a Varian 21EX 18 MV machine). The hybrid source model was tested using an array of open fields for various Varian 21EX and TrueBeam energies, and all cases achieved greater than 97% chi-test agreement (mean 99%) above the 2% isodose with 1%/1 mm criteria. The root mean square deviations (RMSDs) were less than 1%, with a mean of 0.5%, and source generation was 4-5 times faster. A seven-field intensity modulated radiation therapy patient treatment achieved 95% chi-test agreement above the 10% isodose with 1%/1 mm criteria, 99.8% for 2%/2 mm, an RMSD of 0.8%, and a source generation speed-up factor of 2.5. Presented as part of the International Workshop on Monte Carlo Techniques in Medical Physics.

  1. A comparison of the structureborne and airborne paths for propfan interior noise

    NASA Technical Reports Server (NTRS)

    Eversman, W.; Koval, L. R.; Ramakrishnan, J. V.

    1986-01-01

    A comparison is made between the relative levels of aircraft interior noise related to structureborne and airborne paths for the same propeller source. A simple, but physically meaningful, model of the structure treats the fuselage interior as a rectangular cavity with five rigid walls. The sixth wall, the fuselage sidewall, is a stiffened panel. The wing is modeled as a simple beam carried into the fuselage by a large discrete stiffener representing the carry-through structure. The fuselage interior is represented by analytically-derived acoustic cavity modes and the entire structure is represented by structural modes derived from a finite element model. The noise source for structureborne noise is the unsteady lift generation on the wing due to the rotating trailing vortex system of the propeller. The airborne noise source is the acoustic field created by a propeller model consistent with the vortex representation. Comparisons are made on the basis of interior noise over a range of propeller rotational frequencies at a fixed thrust.

  2. The spectra of ten galactic X-ray sources in the southern sky

    NASA Technical Reports Server (NTRS)

    Cruddace, R.; Bowyer, S.; Lampton, M.; Mack, J. E., Jr.; Margon, B.

    1971-01-01

    Data on ten galactic X-ray sources were obtained during a rocket flight from Brazil in June 1969. Detailed spectra of these sources have been compared with bremsstrahlung, black body, and power law models, each including interstellar absorption. Six of the sources were fitted well by one or more of these models. In only one case were the data sufficient to distinguish the best model. Three of the sources were not fitted by any of the models, which suggests that more complex emission mechanisms are applicable. A comparison of our results with those of previous investigations provides evidence that five of the sources vary in intensity by a factor of 2 or more, and that three have variable spectra. New or substantially improved positions have been derived for four of the sources observed.
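    The model comparison described above can be sketched in code: fit an absorbed power-law spectrum to a measured spectrum and recover the photon index and hydrogen column. This is a minimal illustration with synthetic data; the `E**-3` absorption cross-section scaling, the parameter values, and the noise level are assumptions for the example, not the rocket data or the authors' fitting code.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def absorbed_power_law(E, norm, gamma, nh21):
        """Photon power law times interstellar absorption exp(-tau);
        nh21 is the hydrogen column in units of 1e21 cm^-2 (toy scaling)."""
        tau = 0.2 * nh21 * E**-3.0          # crude E^-3 photoelectric opacity
        return norm * E**-gamma * np.exp(-tau)

    E = np.linspace(1.0, 10.0, 40)                   # energy grid, keV
    truth = absorbed_power_law(E, 10.0, 1.7, 3.0)    # hidden "source" spectrum
    rng = np.random.default_rng(0)
    data = truth * (1 + 0.02 * rng.standard_normal(E.size))

    popt, _ = curve_fit(absorbed_power_law, E, data, p0=[5.0, 1.0, 1.0])
    norm_fit, gamma_fit, nh21_fit = popt
    print(f"photon index {gamma_fit:.2f}, N_H ~ {nh21_fit:.1f}e21 cm^-2")
    ```

    Fitting the same data with bremsstrahlung or black-body shapes and comparing residuals is the same exercise with a different model function.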

  3. Toward better public health reporting using existing off the shelf approaches: The value of medical dictionaries in automated cancer detection using plaintext medical data.

    PubMed

    Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J

    2017-05-01

    Existing approaches to derive decision models from plaintext clinical data frequently depend on medical dictionaries as the sources of potential features. Prior research suggests that decision models developed using non-dictionary based feature sourcing approaches and "off the shelf" tools could predict cancer with performance metrics between 80% and 90%. We sought to compare non-dictionary based models to models built using features derived from medical dictionaries. We evaluated the detection of cancer cases from free text pathology reports using decision models built with combinations of dictionary or non-dictionary based feature sourcing approaches, 4 feature subset sizes, and 5 classification algorithms. Each decision model was evaluated using the following performance metrics: sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristics (ROC) curve. Decision models parameterized using dictionary and non-dictionary feature sourcing approaches produced performance metrics between 70 and 90%. The source of features and feature subset size had no impact on the performance of a decision model. Our study suggests there is little value in leveraging medical dictionaries for extracting features for decision model building. Decision models built using features extracted from the plaintext reports themselves achieve comparable results to those built using medical dictionaries. Overall, this suggests that existing "off the shelf" approaches can be leveraged to perform accurate cancer detection using less complex Named Entity Recognition (NER) based feature extraction, automated feature selection and modeling approaches. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    PubMed Central

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2018-01-01

    The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and of the dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed from a set of measurement data alone and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology. PMID:28079526

  5. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    NASA Astrophysics Data System (ADS)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and of the dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed from a set of measurement data alone and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.
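    The spectrum-unfolding step described in this record can be sketched as a Levenberg-Marquardt-style least-squares fit: model the central-axis depth dose as a weighted sum of mono-energetic depth-dose kernels and recover the spectral weights. The exponential kernels, attenuation coefficients and two-decimal "spectrum" below are toy assumptions for illustration, not the actual GSM unfolding.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    depths = np.linspace(0.0, 20.0, 50)          # depth in water, cm
    mu = np.array([0.25, 0.15, 0.08])            # cm^-1, assumed energy bins
    kernels = np.exp(-np.outer(depths, mu))      # idealized per-bin depth dose

    w_true = np.array([0.2, 0.5, 0.3])           # hidden spectral weights
    pdd = kernels @ w_true                       # "measured" PDD curve

    def residual(w):
        # Difference between modelled and measured depth dose.
        return kernels @ w - pdd

    # method="lm" selects a Levenberg-Marquardt solver.
    fit = least_squares(residual, x0=np.ones(3) / 3, method="lm")
    w_fit = fit.x / fit.x.sum()                  # normalize recovered spectrum
    print(np.round(w_fit, 3))
    ```

    A real unfolding would use Monte Carlo-computed kernels per energy bin and noisy measured PDDs, but the fitting structure is the same.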

  6. Effects of source shape on the numerical aperture factor with a geometrical-optics model.

    PubMed

    Wan, Der-Shen; Schmit, Joanna; Novak, Erik

    2004-04-01

    We study the effects of an extended light source on the calibration of an interference microscope, also referred to as an optical profiler. Theoretical and experimental numerical aperture (NA) factors for circular and linear light sources along with collimated laser illumination demonstrate that the shape of the light source or effective aperture cone is critical for a correct NA factor calculation. In practice, more-accurate results for the NA factor are obtained when a linear approximation to the filament light source shape is used in a geometric model. We show that previously measured and derived NA factors show some discrepancies because a circular rather than linear approximation to the filament source was used in the modeling.

  7. Influences of system uncertainties on the numerical transfer path analysis of engine systems

    NASA Astrophysics Data System (ADS)

    Acri, A.; Nijman, E.; Acri, A.; Offner, G.

    2017-10-01

    Practical mechanical systems operate with some degree of uncertainty. In numerical models, uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs, or from rapidly changing forcing that is best described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In particular, in this paper Wishart random matrix theory is applied to a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool widely used during the design of new engines. In this paper the influence of model parameter variability on the results obtained from multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and assessment of the different engine vibration sources. The effects of different levels of uncertainty are illustrated using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and the statistical distribution of results. The derived statistical information can be used to advance the multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.
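    The Wishart sampling step described above can be illustrated in a few lines: draw random realizations of a symmetric positive-definite property matrix whose mean is the nominal matrix, with dispersion controlled by the degrees of freedom. The nominal matrix and dispersion level here are invented for the sketch, not the paper's powertrain model.

    ```python
    import numpy as np
    from scipy.stats import wishart

    K_nominal = np.array([[4.0, -1.0],
                          [-1.0, 3.0]])   # nominal SPD property matrix (toy)
    dof = 30                              # Wishart degrees of freedom:
                                          # higher -> less dispersion

    rng = np.random.default_rng(1)
    # With scale = K_nominal / dof, the Wishart mean df * scale equals
    # K_nominal, so samples scatter around the nominal matrix.
    samples = wishart.rvs(df=dof, scale=K_nominal / dof, size=500,
                          random_state=rng)

    print(np.round(samples.mean(axis=0), 1))   # close to K_nominal
    ```

    Each sample would then be fed through the deterministic multi-body model, and the spread of the outputs gives the statistical distribution of results.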

  8. The sound field of a rotating dipole in a plug flow.

    PubMed

    Wang, Zhao-Huan; Belyaev, Ivan V; Zhang, Xiao-Zheng; Bi, Chuan-Xing; Faranosov, Georgy A; Dowell, Earl H

    2018-04-01

    An analytical far field solution for a rotating point dipole source in a plug flow is derived. The shear layer of the jet is modelled as an infinitely thin cylindrical vortex sheet and the far field integral is calculated by the stationary phase method. Four numerical tests are performed to validate the derived solution as well as to assess the effects of sound refraction from the shear layer. First, the calculated results using the derived formulations are compared with the known solution for a rotating dipole in a uniform flow to validate the present model in this fundamental test case. After that, the effects of sound refraction for different rotating dipole sources in the plug flow are assessed. Then the refraction effects on different frequency components of the signal at the observer position, as well as the effects of the motion of the source and of the type of source are considered. Finally, the effect of different sound speeds and densities outside and inside the plug flow is investigated. The solution obtained may be of particular interest for propeller and rotor noise measurements in open jet anechoic wind tunnels.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friberg, Ari T.; Visser, Taco D.; Wolf, Emil

    A reciprocity inequality is derived, involving the effective size of a planar, secondary, Gaussian Schell-model source and the effective angular spread of the beam that the source generates. The analysis is shown to imply that a fully spatially coherent source of that class (which generates the lowest-order Hermite-Gaussian laser mode) has certain minimal properties. (c) 2000 Optical Society of America.

  10. Phase equilibria constraints on models of subduction zone magmatism

    NASA Astrophysics Data System (ADS)

    Myers, James D.; Johnston, Dana A.

    Petrologic models of subduction zone magmatism can be grouped into three broad classes: (1) predominantly slab-derived, (2) mainly mantle-derived, and (3) multi-source. Slab-derived models assume high-alumina basalt (HAB) approximates primary magma and is derived by partial fusion of the subducting slab. Such melts must, therefore, be saturated with some combination of eclogite phases, e.g. cpx, garnet, qtz, at the pressures, temperatures and water contents of magma generation. In contrast, mantle-dominated models suggest partial melting of the mantle wedge produces primary high-magnesia basalts (HMB) which fractionate to yield derivative HAB magmas. In this context, HMB melts should be saturated with a combination of peridotite phases, i.e. ol, cpx and opx, and have liquid-lines-of-descent that produce high-alumina basalts. HAB generated in this manner must be saturated with a mafic phase assemblage at the intensive conditions of fractionation. Multi-source models combine slab and mantle components in varying proportions to generate the four main lava types (HMB, HAB, high-magnesia andesites (HMA) and evolved lavas) characteristic of subduction zones. The mechanism of mass transfer from slab to wedge as well as the nature and fate of primary magmas vary considerably among these models. Because of their complexity, these models imply a wide range of phase equilibria. 
Although the experiments conducted on calc-alkaline lavas are limited, they place the following limitations on arc petrologic models: (1) HAB cannot be derived from HMB by crystal fractionation at the intensive conditions thus far investigated, (2) HAB could be produced by anhydrous partial fusion of eclogite at high pressure, (3) HMB liquids can be produced by peridotite partial fusion 50-60 km above the slab-mantle interface, (4) HMA cannot be primary magmas derived by partial melting of the subducted slab, but could have formed by slab melt-peridotite interaction, and (5) many evolved calc-alkaline lavas could have been formed by crystal fractionation at a range of crustal pressures.

  11. Evaluation of Stem Cell-Derived Red Blood Cells as a Transfusion Product Using a Novel Animal Model.

    PubMed

    Shah, Sandeep N; Gelderman, Monique P; Lewis, Emily M A; Farrel, John; Wood, Francine; Strader, Michael Brad; Alayash, Abdu I; Vostal, Jaroslav G

    2016-01-01

    Reliance on volunteer blood donors can lead to transfusion product shortages, and current liquid storage of red blood cells (RBCs) is associated with biochemical changes over time, known as 'the storage lesion'. Thus, there is a need for alternative sources of transfusable RBCs to supplement conventional blood donations. Extracorporeal production of stem cell-derived RBCs (stemRBCs) is a potential and yet untapped source of fresh, transfusable RBCs. A number of groups have attempted RBC differentiation from CD34+ cells. However, it is still unclear whether these stemRBCs could eventually be effective substitutes for traditional RBCs due to potential differences in oxygen carrying capacity, viability, deformability, and other critical parameters. We have generated ex vivo stemRBCs from primary human cord blood CD34+ cells and compared them to donor-derived RBCs based on a number of in vitro parameters. In vivo, we assessed stemRBC circulation kinetics in an animal model of transfusion and oxygen delivery in a mouse model of exercise performance. Our novel, chronically anemic, SCID mouse model can evaluate the potential of stemRBCs to deliver oxygen to tissues (muscle) under resting and exercise-induced hypoxic conditions. Based on our data, stem cell-derived RBCs have a similar biochemical profile compared to donor-derived RBCs. While certain key differences remain between donor-derived RBCs and stemRBCs, the ability of stemRBCs to deliver oxygen in a living organism provides support for further development as a transfusion product.

  12. Constructing Ebola transmission chains from West Africa and estimating model parameters using internet sources.

    PubMed

    Pettey, W B P; Carter, M E; Toth, D J A; Samore, M H; Gundlapalli, A V

    2017-07-01

    During the recent Ebola crisis in West Africa, individual person-level details of disease onset, transmissions, and outcomes such as survival or death were reported in online news media. We set out to document disease transmission chains for Ebola, with the goal of generating a timely account that could be used for surveillance, mathematical modeling, and public health decision-making. By accessing public web pages only, such as locally produced newspapers and blogs, we created a transmission chain involving two Ebola clusters in West Africa that compared favorably with other published transmission chains, and derived parameters for a mathematical model of Ebola disease transmission that were not statistically different from those derived from published sources. We present a protocol for responsibly gleaning epidemiological facts, transmission model parameters, and useful details from affected communities using mostly indigenously produced sources. After comparing our transmission parameters to published parameters, we discuss additional benefits of our method, such as gaining practical information about the affected community, its infrastructure, politics, and culture. We also briefly compare our method to similar efforts that used mostly non-indigenous online sources to generate epidemiological information.
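    Once a transmission chain is assembled from such reports, deriving a model parameter is largely bookkeeping. The sketch below computes a mean serial interval (days between symptom onset in infector and infectee) from a hand-built chain; the cases, dates, and who-infected-whom links are invented for illustration, not the West Africa data.

    ```python
    from datetime import date
    from statistics import mean

    # Symptom-onset dates gleaned from (hypothetical) news reports.
    onset = {
        "case1": date(2014, 6, 1),
        "case2": date(2014, 6, 10),
        "case3": date(2014, 6, 14),
        "case4": date(2014, 6, 25),
    }
    # Transmission chain: infectee -> inferred infector.
    infected_by = {"case2": "case1", "case3": "case1", "case4": "case2"}

    intervals = [(onset[c] - onset[s]).days for c, s in infected_by.items()]
    print(f"mean serial interval: {mean(intervals):.1f} days")
    ```

    The same chain structure also yields offspring counts per case, from which a reproduction number estimate can be derived.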

  13. Microbial water pollution: a screening tool for initial catchment-scale assessment and source apportionment.

    PubMed

    Kay, D; Anthony, S; Crowther, J; Chambers, B J; Nicholson, F A; Chadwick, D; Stapleton, C M; Wyer, M D

    2010-11-01

    The European Union Water Framework Directive requires that Management Plans are developed for individual River Basin Districts. From the point of view of faecal indicator organisms (FIOs), there is a critical need for screening tools that can provide a rapid assessment of the likely FIO concentrations and fluxes within catchments under base- and high-flow conditions, and of the balance ('source apportionment') between agriculture- and sewage-derived sources. Accordingly, the present paper reports on: (1) the development of preliminary generic models, using water quality and land cover data from previous UK catchment studies for assessing FIO concentrations, fluxes and source apportionment within catchments during the summer bathing season; (2) the calibration of national land use data, against data previously used in the models; and (3) provisional FIO concentration and source-apportionment assessments for England and Wales. The models clearly highlighted the crucial importance of high-flow conditions for the flux of FIOs within catchments. At high flow, improved grassland (and associated livestock) was the key FIO source; FIO loadings derived from catchments with high proportions of improved grassland were shown to be as high as from urbanized catchments; and in many rural catchments, especially in NW and SW England and Wales, which are important areas of lowland livestock (especially dairy) farming, ≥ 40% of FIOs was assessed to be derived from agricultural sources. In contrast, under base-flow conditions, when there was little or no runoff from agricultural land, urban (i.e. sewerage-related) sources were assessed to dominate, and even in rural areas the majority of FIOs were attributed to urban sources. The results of the study demonstrate the potential of this type of approach, particularly in light of climate change and the likelihood of more high-flow events, in underpinning informed policy development and prioritization of investment. 
Copyright © 2009 Elsevier B.V. All rights reserved.

  14. Source apportionment of PM2.5 organic carbon in the San Joaquin Valley using monthly and daily observations and meteorological clustering.

    PubMed

    Skiles, Matthew J; Lai, Alexandra M; Olson, Michael R; Schauer, James J; de Foy, Benjamin

    2018-06-01

    Two hundred sixty-three fine particulate matter (PM2.5) samples collected at 3-day intervals over a 14-month period at two sites in the San Joaquin Valley (SJV) were analyzed for organic carbon (OC), elemental carbon (EC), water-soluble organic carbon (WSOC), and organic molecular markers. A unique source profile library was applied in a chemical mass balance (CMB) source apportionment model to develop monthly and seasonally averaged source apportionment results. Five major OC sources were identified: mobile sources, biomass burning, meat smoke, vegetative detritus, and secondary organic carbon (SOC), the last inferred from OC not apportioned by CMB. The SOC factor was the largest source contributor at Fresno and Bakersfield, contributing 44% and 51% of PM mass, respectively. Biomass burning was the only source with a statistically different average mass contribution (95% CI) between the two sites. Wintertime peaks of biomass burning, meat smoke, and total OC were observed at both sites, with SOC peaking during the summer months. The exceptionally strong seasonal variation in apportioned meat smoke mass could potentially be explained by oxidation of cholesterol between source and receptor and by trends in wind transport outlined in a Residence Time Analysis (RTA). Fast-moving nighttime winds prevalent during the warmer months caused local emissions to be replaced by air masses transported from the San Francisco Bay Area, carrying mostly diluted, oxidized concentrations of molecular markers. Good agreement was observed between SOC derived from the CMB model and from non-biomass-burning WSOC mass, suggesting the CMB model is sufficiently accurate to assist in policy development. In general, uncertainty in monthly mass values derived from daily CMB apportionments was lower than that of CMB results produced with monthly marker composites, further validating daily sampling methodologies. 
    Strong seasonal trends were observed for biomass and meat smoke OC apportionment, and monthly mass averages had the lowest uncertainty when derived from daily CMB apportionments. Copyright © 2018 Elsevier Ltd. All rights reserved.
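    The core of a chemical mass balance model is a constrained linear solve: ambient species concentrations are modelled as a mix of fixed source profiles, and non-negative source contributions are recovered by least squares. The species, profiles and contributions below are made up for illustration; EV-CMB additionally weights the fit by measurement uncertainties in both the ambient data and the profiles.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Rows: marker species; columns: mobile, biomass burning, meat smoke.
    # Fractional abundance of each species in each source profile (invented).
    profiles = np.array([
        [0.50, 0.55, 0.40],   # OC
        [0.30, 0.05, 0.02],   # EC
        [0.00, 0.15, 0.00],   # levoglucosan (biomass burning marker)
        [0.00, 0.00, 0.10],   # cholesterol (meat smoke marker)
    ])
    true_contrib = np.array([3.0, 5.0, 1.0])   # ug/m^3, hidden "truth"
    ambient = profiles @ true_contrib          # synthetic receptor sample

    # Non-negative least squares recovers the source contributions.
    contrib, rnorm = nnls(profiles, ambient)
    for name, s in zip(["mobile", "biomass burning", "meat smoke"], contrib):
        print(f"{name}: {s:.2f} ug/m^3")
    ```

    Unapportioned mass (here zero by construction) is what the study interprets as secondary organic carbon.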

  15. Combining controlled-source seismology and receiver function information to derive 3-D Moho topography for Italy

    NASA Astrophysics Data System (ADS)

    Spada, M.; Bianchi, I.; Kissling, E.; Agostinetti, N. Piana; Wiemer, S.

    2013-08-01

    The accurate definition of 3-D crustal structures and, first and foremost, the Moho depth is the most important requirement for seismological, geophysical and geodynamic modelling in complex tectonic regions. In such areas, like the Mediterranean region, various active and passive seismic experiments have been performed, locally revealing information on Moho depth, average and gradient crustal Vp velocities, and average Vp/Vs velocity ratios. Until now, the most reliable information on crustal structure has come from controlled-source seismology experiments. In most parts of the Alpine region, a relatively large amount of controlled-source seismology information is available, though overall coverage in the central Mediterranean area is still sparse owing to the high cost of such experiments. Thus, results from other seismic methodologies, such as local earthquake tomography, receiver functions and ambient noise tomography, can complement the controlled-source seismology information to increase coverage and thus the quality of 3-D crustal models. In this paper, we introduce a methodology to directly combine controlled-source seismology and receiver function information, relying on the strengths of each method and on quantitative uncertainty estimates for all data, to derive a well-resolved Moho map for Italy. To treat controlled-source seismology and receiver function results homogeneously, we introduce a new classification/weighting scheme based on uncertainty assessment of the receiver function data. To tune the receiver function information quality, we compare local receiver function Moho depths and uncertainties with a recently derived, well-resolved local earthquake tomography Moho map and with controlled-source seismology information. We find an excellent correlation among the Moho information obtained by these three methodologies in Italy. 
In the final step, we interpolate the controlled-source seismology and receiver functions information to derive the map of Moho topography in Italy and surrounding regions. Our results show high-frequency undulation in the Moho topography of three different Moho interfaces, the European, the Adriatic-Ionian, and the Liguria-Corsica-Sardinia-Tyrrhenia, reflecting the complexity of geodynamical evolution.

  16. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    NASA Astrophysics Data System (ADS)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, challenges include a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong. To this end, we apply the CASCADE modeling framework (Schmitt et al., 2016). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes at the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1% of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could be coupled to more detailed models of hillslope processes in the future to derive integrated models of hillslope production and fluvial transport processes, which would be particularly useful for identifying sediment provenance in poorly monitored river basins.
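The supply-inference step described above can be sketched in a few lines. This is an illustrative toy, not the CASCADE implementation: the capacity profile, grain-size range, and acceptance tolerance are all hypothetical.

```python
import random

def infer_supply(capacities_downstream):
    # A source can deliver no more than the minimum transport
    # capacity encountered along its downstream path.
    return min(capacities_downstream)

def toy_capacity(d50_m):
    # Hypothetical capacity profile: decreases downstream and with grain size.
    return [f / (d50_m * 1e3) for f in (1.0, 0.8, 0.5)]

def inverse_monte_carlo(observed_flux, n_trials=7500, tol=0.05, seed=1):
    # Randomly initialize source grain sizes; keep only realizations whose
    # inferred flux reproduces the (sparse) sedimentary record.
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_trials):
        d50 = rng.uniform(0.1e-3, 64e-3)  # grain size in metres
        flux = infer_supply(toy_capacity(d50))
        if abs(flux - observed_flux) / observed_flux < tol:
            accepted.append((d50, flux))
    return accepted

accepted = inverse_monte_carlo(observed_flux=2.0)
```

Only the small fraction of random initializations whose implied flux matches the record survives, mirroring the 1% acceptance reported above.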

  17. An improved model to predict bandwidth enhancement in an inductively tuned common source amplifier.

    PubMed

    Reza, Ashif; Misra, Anuraag; Das, Parnika

    2016-05-01

This paper presents an improved model for predicting the bandwidth enhancement factor (BWEF) of an inductively tuned common source amplifier. In this model, we include the effect of the drain-source channel resistance of the field effect transistor, along with the load inductance and output capacitance, on the BWEF of the amplifier. A frequency domain analysis of the model is performed and a closed-form expression is derived for the BWEF. A prototype common source amplifier is designed and tested, and its BWEF is obtained from the measured frequency response as a function of drain current and load inductance. We demonstrate that including the drain-source channel resistance in the proposed model yields BWEF estimates accurate to within 5% of the measured results.
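The quantity being modelled can be illustrated numerically with the classical shunt-peaked load (an ideal transistor with no drain-source resistance, i.e. the baseline the paper improves upon; component values are arbitrary):

```python
import math

def shunt_peak_bandwidth(R, C, L):
    # -3 dB bandwidth of the shunt-peaked load Z(s) = (R + sL) || 1/(sC),
    # found by bisection on |Z(jw)| = R / sqrt(2).
    def mag(w):
        return abs(complex(R, w * L) / complex(1.0 - w * w * L * C, w * R * C))
    target = R / math.sqrt(2.0)
    lo, hi = 0.0, 100.0 / (R * C)   # |Z| has fallen below target by hi here
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mag(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

R, C = 1e3, 1e-12                                    # illustrative values
bw_plain = shunt_peak_bandwidth(R, C, 0.0)           # no inductor: 1/(RC)
m = 0.41                                             # L = m * R^2 * C, near maximally flat
bw_peaked = shunt_peak_bandwidth(R, C, m * R * R * C)
bwef = bw_peaked / bw_plain                          # ~1.7 for this m
```

The classical result is a BWEF of roughly 1.7 for the near-maximally-flat inductance; the paper's refinement accounts for the finite channel resistance that this idealized sketch omits.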

  18. A spatial model to aggregate point-source and nonpoint-source water-quality data for large areas

    USGS Publications Warehouse

    White, D.A.; Smith, R.A.; Price, C.V.; Alexander, R.B.; Robinson, K.W.

    1992-01-01

More objective and consistent methods are needed to assess water quality for large areas. A spatial model, one that capitalizes on the topologic relationships among spatial entities, is described for aggregating pollution sources from upstream drainage areas; it can be implemented on land surfaces having heterogeneous water-pollution effects. An infrastructure of stream networks and drainage basins, derived from 1:250,000-scale digital elevation models, defines the hydrologic system in this spatial model. The spatial relationships between point- and nonpoint-pollution sources and measurement locations are referenced to the hydrologic infrastructure with the aid of a geographic information system. A maximum-branching algorithm has been developed to simulate the effect of distance from a pollutant source to an arbitrary downstream location, a function traditionally employed in deterministic water-quality models.
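The aggregation idea can be sketched as follows. The exponential distance decay used here is a common stand-in for the attenuation function described above, and the network, loads, and decay constant are invented:

```python
import math

def aggregate_loads(upstream, loads, distance_km, k=0.2):
    # Sum pollutant loads from all upstream sources of each outlet node,
    # attenuated exponentially with downstream travel distance.
    return {node: sum(loads[u] * math.exp(-k * distance_km[(u, node)])
                      for u in sources)
            for node, sources in upstream.items()}

# Toy network: outlet C receives load from sources A (10 km) and B (5 km).
upstream = {"C": ["A", "B"]}
loads = {"A": 100.0, "B": 50.0}                      # e.g. kg/yr at each source
distance_km = {("A", "C"): 10.0, ("B", "C"): 5.0}
agg = aggregate_loads(upstream, loads, distance_km)
```

In a GIS implementation the `upstream` and `distance_km` tables would be derived from the stream-network topology rather than entered by hand.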

  19. Quasar microlensing models with constraints on the Quasar light curves

    NASA Astrophysics Data System (ADS)

    Tie, S. S.; Kochanek, C. S.

    2018-01-01

Quasar microlensing analyses implicitly generate a model of the variability of the source quasar. The implied source variability may be unrealistic, yet its likelihood is generally not evaluated. We used the damped random walk (DRW) model for quasar variability to evaluate the likelihood of the source variability and applied the revised algorithm to a microlensing analysis of the lensed quasar RX J1131-1231. We compared estimates of the size of the quasar disc and the average stellar mass of the lens galaxy with and without applying the DRW likelihoods for the source variability model and found no significant effect on the estimated physical parameters. The most likely explanation is that unrealistic source light-curve models are generally associated with poor microlensing fits that already make a negligible contribution to the probability distributions of the derived parameters.
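Evaluating a DRW likelihood for a light curve is a standard Gaussian-process computation; a minimal sketch (not the authors' code, on synthetic data) is:

```python
import numpy as np

def drw_loglike(t, y, sigma, tau):
    # Gaussian log-likelihood of a mean-subtracted light curve y(t) under a
    # damped random walk with covariance sigma^2 * exp(-|t_i - t_j| / tau).
    dt = np.abs(t[:, None] - t[None, :])
    cov = sigma**2 * np.exp(-dt / tau)
    _, logdet = np.linalg.slogdet(cov)
    alpha = np.linalg.solve(cov, y)
    return -0.5 * (y @ alpha + logdet + len(y) * np.log(2.0 * np.pi))

# Synthetic light curve drawn from a DRW with sigma = 0.3, tau = 20 days.
t = np.linspace(0.0, 100.0, 50)
rng = np.random.default_rng(0)
cov_true = 0.3**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / 20.0)
y = np.linalg.cholesky(cov_true) @ rng.standard_normal(50)

ll_true = drw_loglike(t, y, 0.3, 20.0)   # likelihood at the true parameters
ll_bad = drw_loglike(t, y, 0.3, 0.1)     # nearly uncorrelated: a poor model
```

A microlensing trial whose implied source light curve scores poorly under such a likelihood can be down-weighted, which is the check the abstract describes.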

  20. A Comparison Between Gravity Wave Momentum Fluxes in Observations and Climate Models

    NASA Technical Reports Server (NTRS)

Geller, Marvin A.; Alexander, M. Joan; Love, Peter T.; Bacmeister, Julio; Ern, Manfred; Hertzog, Albert; Manzini, Elisa; Preusse, Peter; Sato, Kaoru; Scaife, Adam A.; et al.

    2013-01-01

For the first time, a formal comparison is made between gravity wave momentum fluxes in models and those derived from observations. Although gravity waves occur over a wide range of spatial and temporal scales, the focus of this paper is on scales that are being parameterized in present climate models, sub-1000-km scales. Only observational methods that permit derivation of gravity wave momentum fluxes over large geographical areas are discussed, and these are from satellite temperature measurements, constant-density long-duration balloons, and high-vertical-resolution radiosonde data. The models discussed include two high-resolution models in which gravity waves are explicitly modeled, Kanto and the Community Atmosphere Model, version 5 (CAM5), and three climate models containing gravity wave parameterizations, MAECHAM5, Hadley Centre Global Environmental Model 3 (HadGEM3), and the Goddard Institute for Space Studies (GISS) model. Measurements generally show flux magnitudes similar to those in models, except that the fluxes derived from satellite measurements fall off more rapidly with height. This is likely due to limitations on the observable range of wavelengths, although other factors may contribute. When one accounts for this more rapid falloff, the geographical distribution of the fluxes from observations and models compares reasonably well, except for certain features that depend on the specification of the nonorographic gravity wave source functions in the climate models. For instance, both the observed fluxes and those in the high-resolution models are very small at summer high latitudes, but this is not the case for some of the climate models. This comparison between gravity wave fluxes from climate models, high-resolution models, and fluxes derived from observations indicates that such efforts offer a promising path toward improving specifications of gravity wave sources in climate models.

  1. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general, we expect to obtain a higher quality source image by improving the observational input data (e.g., using more, higher-quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters: slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Monelli et al. 2008) and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) for the single-window slip velocity function. We search for local rupture velocity directly in the inversion and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that the kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the rupture time derived from it is smoother than the one obtained by direct search. By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.
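The core regularization, drawing kinematic parameters with a prescribed cross-correlation structure rather than smoothing them independently, can be sketched via a Cholesky factor. The correlation values below are hypothetical, not those of Song et al.:

```python
import numpy as np

def sample_correlated(n, corr, seed=0):
    # Draw n joint samples of three standard-normal kinematic parameters
    # (slip, rupture velocity, peak slip velocity) whose cross-correlation
    # matrix is `corr`, via its Cholesky factor.
    rng = np.random.default_rng(seed)
    return np.linalg.cholesky(corr) @ rng.standard_normal((3, n))

# Hypothetical cross-correlations; in the pseudo-dynamic approach these
# would come from dynamic rupture modelling.
corr = np.array([[1.0, 0.5, 0.7],
                 [0.5, 1.0, 0.4],
                 [0.7, 0.4, 1.0]])
samples = sample_correlated(20000, corr)
empirical = np.corrcoef(samples)
```

Samples drawn this way populate the prior in the Bayesian inversion, so trial models that violate the dynamically derived correlations are penalized automatically.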

  2. Neural Differentiation of Human Pluripotent Stem Cells for Nontherapeutic Applications: Toxicology, Pharmacology, and In Vitro Disease Modeling.

    PubMed

    Yap, May Shin; Nathan, Kavitha R; Yeo, Yin; Lim, Lee Wei; Poh, Chit Laa; Richards, Mark; Lim, Wei Ling; Othman, Iekhsan; Heng, Boon Chin

    2015-01-01

Human pluripotent stem cells (hPSCs) derived from either blastocyst stage embryos (hESCs) or reprogrammed somatic cells (iPSCs) can provide an abundant source of human neuronal lineages that were previously sourced from human cadavers, abortuses, and discarded surgical waste. In addition to the well-known potential therapeutic application of these cells in regenerative medicine, there are also various promising nontherapeutic applications in toxicological and pharmacological screening of neuroactive compounds, as well as for in vitro modeling of neurodegenerative and neurodevelopmental disorders. Compared to alternative research models based on laboratory animals and immortalized cancer-derived human neural cell lines, neuronal cells differentiated from hPSCs possess the advantages of species specificity together with genetic and physiological normality, which could more closely recapitulate in vivo conditions within the human central nervous system. This review critically examines the various potential nontherapeutic applications of hPSC-derived neuronal lineages and gives a brief overview of differentiation protocols utilized to generate these cells from hESCs and iPSCs.

  3. [Comparison between administrative and clinical databases in the evaluation of cardiac surgery performance].

    PubMed

    Rosato, Stefano; D'Errigo, Paola; Badoni, Gabriella; Fusco, Danilo; Perucci, Carlo A; Seccareccia, Fulvia

    2008-08-01

The availability of two contemporary sources of information about coronary artery bypass graft (CABG) interventions made it possible 1) to verify the feasibility of performing outcome evaluation studies using administrative data sources, and 2) to compare hospital performance obtained using the CABG Project clinical database with hospital performance derived from current administrative data. Interventions recorded in the CABG Project were linked to the hospital discharge record (HDR) administrative database. Only the linked records were considered for subsequent analyses (46% of the total CABG Project). A new selected population, "clinical card-HDR", was then defined. Two independent risk-adjustment models were applied, each using information derived from one of the two sources. HDR information was then supplemented with some patient preoperative conditions from the CABG clinical database. The two models were compared in terms of their adaptability to the data, and hospital performances identified by the two models as significantly different from the mean were compared. In only 4 of the 13 hospitals considered for analysis did the results obtained using the HDR model not completely overlap with those obtained by the CABG model. When comparing statistical parameters of the HDR model and the HDR model + patient preoperative conditions, the latter showed the best adaptability to the data. In this "clinical card-HDR" population, hospital performance assessment obtained using information from the clinical database is similar to that derived from current administrative data. However, when risk-adjustment models built on administrative databases are supplemented with a few clinical variables, their statistical parameters improve and hospital performance assessment becomes more accurate.

  4. Population and Activity of On-road Vehicles in MOVES2014 ...

    EPA Pesticide Factsheets

This report describes the sources and derivation for on-road vehicle population and activity information and associated adjustments as stored in the MOVES2014 default databases. Motor Vehicle Emission Simulator, the MOVES2014 model, is a set of modeling tools for estimating emissions produced by on-road (cars, trucks, motorcycles, etc.) and nonroad (backhoes, lawnmowers, etc.) mobile sources. The national default activity information in MOVES2014 provides a reasonable basis for estimating national emissions. However, the uncertainties and variability in the default data contribute to the uncertainty in the resulting emission estimates. Properly characterizing emissions from the on-road vehicle subset requires a detailed understanding of the cars and trucks that make up the vehicle fleet and their patterns of operation. The MOVES model calculates emission inventories by multiplying emission rates by the appropriate emission-related activity, applying correction (adjustment) factors as needed to simulate specific situations, and then adding up the emissions from all sources (populations) and regions.
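The inventory arithmetic described, rate times activity times adjustment, summed over source types, reduces to a few lines. The values below are illustrative, not MOVES2014 defaults:

```python
def inventory(rates_g_per_mi, activity_vmt, adjustments):
    # Emissions = rate x activity x adjustment, summed over source types.
    # Missing adjustment factors default to 1.0 (no correction).
    return sum(rates_g_per_mi[s] * activity_vmt[s] * adjustments.get(s, 1.0)
               for s in rates_g_per_mi)

rates_g_per_mi = {"car": 0.2, "truck": 1.5}   # emission rates, g/mi (invented)
activity_vmt = {"car": 1e6, "truck": 2e5}     # vehicle-miles traveled (invented)
adjustments = {"truck": 1.1}                  # e.g. a temperature correction

total_g = inventory(rates_g_per_mi, activity_vmt, adjustments)
```

In MOVES itself the rates, activity, and adjustments are drawn from the default databases this report documents, disaggregated by vehicle type, age, road type, and operating mode.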

  5. Origins and Asteroid Main-Belt Stratigraphy for H-, L-, LL-Chondrite Meteorites

    NASA Astrophysics Data System (ADS)

    Binzel, Richard; DeMeo, Francesca; Burbine, Thomas; Polishook, David; Birlan, Mirel

    2016-10-01

We trace the origins of ordinary chondrite meteorites to their main-belt sources using their (presumably) larger counterparts observable as near-Earth asteroids (NEAs). We find the ordinary chondrite stratigraphy in the main belt to be LL, H, L (increasing distance from the Sun). We derive this result using spectral information from more than 1000 near-Earth asteroids [1]. Our methodology is to correlate each NEA's main-belt source region [2] with its modeled mineralogy [3]. We find LL chondrites predominantly originate from the inner edge of the asteroid belt (nu6 region at 2.1 AU), H chondrites from the 3:1 resonance region (2.5 AU), and the L chondrites from the outer belt 5:2 resonance region (2.8 AU). Each of these source regions has been cited by previous researchers [e.g. 4, 5, 6], but this work uses an independent methodology that simultaneously solves for the LL, H, L stratigraphy. We seek feedback from the planetary origins and meteoritical communities on the viability or implications of this stratigraphy. Methodology: Spectroscopic and taxonomic data are from the NASA IRTF MIT-Hawaii Near-Earth Object Spectroscopic Survey (MITHNEOS) [1]. For each near-Earth asteroid, we use the Bottke source model [2] to assign a probability that the object is derived from five different main-belt source regions. For each spectrum, we apply the Shkuratov radiative-transfer model [3] for compositional mixing to derive estimates for the ol / (ol+px) ratio (and its uncertainty). The Bottke source region model [2] and the Shkuratov mineralogic model [3] each deliver a probability distribution. For each NEA, we convolve its source region probability distribution with its meteorite class distribution to yield a likelihood for where that class originates. Acknowledgements: This work was supported by National Science Foundation Grant 0907766 and NASA Grant NNX10AG27G. References: [1] Binzel et al. (2005), LPSC XXXVI, 36.1817. [2] Bottke et al. (2002), Icarus 156, 399. [3] Shkuratov et al. (1999), Icarus 137, 222. [4] Vernazza et al. (2008), Nature 454, 858. [5] Thomas et al. (2010), Icarus 205, 419. [6] Nesvorný et al. (2009), Icarus 200, 698.
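The convolution of each NEA's source-region probabilities with its meteorite-class probabilities can be sketched as follows, with toy probabilities standing in for the Bottke-model and mineralogic-model outputs:

```python
def class_source_likelihood(neas):
    # For each meteorite class, accumulate probability-weighted counts of
    # main-belt source regions over all NEAs. Each NEA carries a dict of
    # source-region probabilities and a dict of meteorite-class probabilities.
    out = {}
    for nea in neas:
        for cls, p_cls in nea["classes"].items():
            acc = out.setdefault(cls, {})
            for region, p_src in nea["sources"].items():
                acc[region] = acc.get(region, 0.0) + p_cls * p_src
    return out

# Two toy NEAs with invented probabilities.
neas = [
    {"sources": {"nu6": 0.7, "3:1": 0.3}, "classes": {"LL": 0.8, "H": 0.2}},
    {"sources": {"nu6": 0.2, "3:1": 0.8}, "classes": {"H": 1.0}},
]
likelihood = class_source_likelihood(neas)
```

Normalizing each class's accumulated weights then gives the likelihood of each source region for that class, which is how a stratigraphy like LL-H-L emerges from the full NEA sample.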

  6. Simulation of future groundwater recharge using a climate model ensemble and SAR-image based soil parameter distributions - A case study in an intensively-used Mediterranean catchment.

    PubMed

    Herrmann, Frank; Baghdadi, Nicolas; Blaschek, Michael; Deidda, Roberto; Duttmann, Rainer; La Jeunesse, Isabelle; Sellami, Haykel; Vereecken, Harry; Wendland, Frank

    2016-02-01

We used observed climate data, an ensemble of four GCM-RCM combinations (global and regional climate models) and the water balance model mGROWA to estimate present and future groundwater recharge for the intensively-used Thau lagoon catchment in southern France. In addition to a highly resolved soil map, soil moisture distributions obtained from SAR-images (Synthetic Aperture Radar) were used to derive the spatial distribution of soil parameters covering the full simulation domain. Doing so helped us to assess the impact of different soil parameter sources on the modelled groundwater recharge levels. Groundwater recharge was simulated in monthly time steps using the ensemble approach and analysed in its spatial and temporal variability. The soil parameters originating from both sources led to very similar groundwater recharge rates, proving that soil parameters derived from SAR images may replace traditionally used soil maps in regions where soil maps are sparse or missing. Additionally, we showed that the variance in different GCM-RCMs influences the projected magnitude of future groundwater recharge change significantly more than the variance in the soil parameter distributions derived from the two different sources. For the period between 1950 and 2100, climate change impacts based on the climate model ensemble indicated that overall groundwater recharge will possibly show a low to moderate decrease in the Thau catchment. However, as no clear trend resulted from the ensemble simulations, reliable recommendations for adapting the regional groundwater management to changed available groundwater volumes could not be derived.

  7. An application of a hydraulic model simulator in flood risk assessment under changing climatic conditions

    NASA Astrophysics Data System (ADS)

    Doroszkiewicz, J. M.; Romanowicz, R. J.

    2016-12-01

The standard procedure of climate change impact assessment on future hydrological extremes consists of a chain of consecutive actions, starting from the choice of GCM driven by an assumed CO2 scenario, through downscaling of climatic forcing to a catchment scale, estimation of hydrological extreme indices using hydrological modelling tools and subsequent derivation of flood risk maps with the help of a hydraulic model. Among the many possible sources of uncertainty, the main ones are those related to future climate scenarios, climate models, downscaling techniques, and hydrological and hydraulic models. Unfortunately, we cannot directly assess the impact of these different sources of uncertainty on future flood risk due to the lack of observations of future climate realizations. The aim of this study is an assessment of the relative impact of different sources of uncertainty on the uncertainty of flood risk maps. Due to the complexity of the processes involved, an assessment of the total uncertainty of maps of inundation probability might be very computationally demanding. As a way forward, we present an application of a hydraulic model simulator based on a nonlinear transfer function model for the chosen locations along the river reach. The transfer function model parameters are estimated based on the simulations of the hydraulic model at each of the model cross-sections. The study shows that the application of a simulator substantially reduces the computational requirements related to the derivation of flood risk maps under future climatic conditions. The Biala Tarnowska catchment, situated in southern Poland, is used as a case study. Future discharges at the input to the hydraulic model are obtained using the HBV model and climate projections obtained from the EURO-CORDEX project. The study describes a cascade of uncertainty related to different stages of the process of derivation of flood risk maps under changing climate conditions. In this context it takes into account the uncertainty of future climate projections, the uncertainty of the flow routing model, the propagation of that uncertainty through the hydraulic model, and finally, the uncertainty related to the derivation of flood risk maps.

  8. Improved radial dose function estimation using current version MCNP Monte-Carlo simulation: Model 6711 and ISC3500 125I brachytherapy sources.

    PubMed

    Duggan, Dennis M

    2004-12-01

    Improved cross-sections in a new version of the Monte-Carlo N-particle (MCNP) code may eliminate discrepancies between radial dose functions (as defined by American Association of Physicists in Medicine Task Group 43) derived from Monte-Carlo simulations of low-energy photon-emitting brachytherapy sources and those from measurements on the same sources with thermoluminescent dosimeters. This is demonstrated for two 125I brachytherapy seed models, the Implant Sciences Model ISC3500 (I-Plant) and the Amersham Health Model 6711, by simulating their radial dose functions with two versions of MCNP, 4c2 and 5.

  9. Impact of external sources of infection on the dynamics of bovine tuberculosis in modelled badger populations.

    PubMed

    Hardstaff, Joanne L; Bulling, Mark T; Marion, Glenn; Hutchings, Michael R; White, Piran C L

    2012-06-27

    The persistence of bovine TB (bTB) in various countries throughout the world is enhanced by the existence of wildlife hosts for the infection. In Britain and Ireland, the principal wildlife host for bTB is the badger (Meles meles). The objective of our study was to examine the dynamics of bTB in badgers in relation to both badger-derived infection from within the population and externally-derived, trickle-type, infection, such as could occur from other species or environmental sources, using a spatial stochastic simulation model. The presence of external sources of infection can increase mean prevalence and reduce the threshold group size for disease persistence. Above the threshold equilibrium group size of 6-8 individuals predicted by the model for bTB persistence in badgers based on internal infection alone, external sources of infection have relatively little impact on the persistence or level of disease. However, within a critical range of group sizes just below this threshold level, external infection becomes much more important in determining disease dynamics. Within this critical range, external infection increases the ratio of intra- to inter-group infections due to the greater probability of external infections entering fully-susceptible groups. The effect is to enable bTB persistence and increase bTB prevalence in badger populations which would not be able to maintain bTB based on internal infection alone. External sources of bTB infection can contribute to the persistence of bTB in badger populations. In high-density badger populations, internal badger-derived infections occur at a sufficient rate that the additional effect of external sources in exacerbating disease is minimal. However, in lower-density populations, external sources of infection are much more important in enhancing bTB prevalence and persistence. 
In such circumstances, it is particularly important that control strategies to reduce bTB in badgers include efforts to minimise such external sources of infection.
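The interplay of within-group and external "trickle" infection described above can be sketched with a deliberately simple stochastic SI model (no recovery, demography, or spatial structure; all rates are invented, not fitted to badger data):

```python
import math
import random

def simulate_group(n=8, beta=0.3, external=0.01, steps=200, seed=1):
    # Discrete-time stochastic SI model for a single badger group: the
    # per-step infection hazard combines within-group transmission
    # (beta * I / n) with a constant external "trickle" hazard.
    rng = random.Random(seed)
    infected = 0
    history = []
    for _ in range(steps):
        p_infect = 1.0 - math.exp(-(beta * infected / n + external))
        new = sum(1 for _ in range(n - infected) if rng.random() < p_infect)
        infected += new
        history.append(infected)
    return history

no_external = simulate_group(external=0.0)     # disease can never be seeded
with_external = simulate_group(external=0.01)  # trickle infection seeds the group
```

Even this toy shows the qualitative point of the abstract: with no external hazard an initially susceptible group stays disease-free, while a small trickle hazard can establish infection that internal transmission then amplifies.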

  10. Impact of external sources of infection on the dynamics of bovine tuberculosis in modelled badger populations

    PubMed Central

    2012-01-01

    Background The persistence of bovine TB (bTB) in various countries throughout the world is enhanced by the existence of wildlife hosts for the infection. In Britain and Ireland, the principal wildlife host for bTB is the badger (Meles meles). The objective of our study was to examine the dynamics of bTB in badgers in relation to both badger-derived infection from within the population and externally-derived, trickle-type, infection, such as could occur from other species or environmental sources, using a spatial stochastic simulation model. Results The presence of external sources of infection can increase mean prevalence and reduce the threshold group size for disease persistence. Above the threshold equilibrium group size of 6–8 individuals predicted by the model for bTB persistence in badgers based on internal infection alone, external sources of infection have relatively little impact on the persistence or level of disease. However, within a critical range of group sizes just below this threshold level, external infection becomes much more important in determining disease dynamics. Within this critical range, external infection increases the ratio of intra- to inter-group infections due to the greater probability of external infections entering fully-susceptible groups. The effect is to enable bTB persistence and increase bTB prevalence in badger populations which would not be able to maintain bTB based on internal infection alone. Conclusions External sources of bTB infection can contribute to the persistence of bTB in badger populations. In high-density badger populations, internal badger-derived infections occur at a sufficient rate that the additional effect of external sources in exacerbating disease is minimal. However, in lower-density populations, external sources of infection are much more important in enhancing bTB prevalence and persistence. 
In such circumstances, it is particularly important that control strategies to reduce bTB in badgers include efforts to minimise such external sources of infection. PMID:22738118

  11. Project on Elite Athlete Commitment (PEAK): IV. identification of new candidate commitment sources in the sport commitment model.

    PubMed

    Scanlan, Tara K; Russell, David G; Scanlan, Larry A; Klunchoo, Tatiana J; Chow, Graig M

    2013-10-01

    Following a thorough review of the current updated Sport Commitment Model, new candidate commitment sources for possible future inclusion in the model are presented. They were derived from data obtained using the Scanlan Collaborative Interview Method. Three elite New Zealand teams participated: amateur All Black rugby players, amateur Silver Fern netball players, and professional All Black rugby players. An inductive content analysis of these players' open-ended descriptions of their sources of commitment identified four unique new candidate commitment sources: Desire to Excel, Team Tradition, Elite Team Membership, and Worthy of Team Membership. A detailed definition of each candidate source is included along with example quotes from participants. Using a mixed-methods approach, these candidate sources provide a basis for future investigations to test their viability and generalizability for possible expansion of the Sport Commitment Model.

  12. Identification of Dust Source Regions at High-Resolution and Dynamics of Dust Source Mask over Southwest United States Using Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Sprigg, W. A.; Sahoo, S.; Prasad, A. K.; Venkatesh, A. S.; Vukovic, A.; Nickovic, S.

    2015-12-01

Identification and evaluation of sources of aeolian mineral dust is a critical task in the simulation of dust. Recently, time series of space-based multi-sensor satellite images have been used to identify and monitor changes in land surface characteristics. Modeling of windblown dust requires precise delineation of mineral dust sources and their strength, which vary over a region as well as seasonally and inter-annually due to changes in land use and land cover. Southwest USA is one of the major dust-emission-prone zones of the North American continent, where dust is generated from low-lying, dried-up areas with bare ground surfaces that may be scattered or appear as point sources on high-resolution satellite images. In the current research, various satellite-derived variables have been integrated to produce a high-resolution dust source mask, at a grid size of 250 m, using data such as a digital elevation model, surface reflectance, vegetation cover, land cover class, and surface wetness. Previous dust source models have been adapted to produce a multi-parameter dust source mask using data from satellites such as Terra (Moderate Resolution Imaging Spectroradiometer - MODIS) and Landsat. The dust source mask model captures the topographically low regions with bare soil surfaces, dried-up river plains, and lakes, which form important sources of dust in southwest USA. The study region is also one of the hottest regions of the USA, where surface dryness, land use (agricultural use), and vegetation cover change significantly, leading to major changes in the areal coverage of potential dust source regions. A dynamic high-resolution dust source mask has been produced to address intra-annual change in the areal extent of bare dry surfaces. Time series of satellite-derived data have been used to create dynamic dust source masks. A new dust source mask at a 16-day interval allows enhanced detection of potential dust source regions and can be employed in dust emission and transport pathway models for better estimation of dust emission during dust storms, as well as in particulate air pollution studies, public health risk assessment tools, and decision support systems.
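A multi-parameter mask of this kind is essentially a logical intersection of thresholded layers. The sketch below uses synthetic arrays and invented thresholds, not the paper's criteria:

```python
import numpy as np

def dust_source_mask(elev, ndvi, wetness, bare,
                     low_pct=20, ndvi_max=0.15, wet_max=0.2):
    # Flag potential dust-source pixels: topographically low, sparsely
    # vegetated, dry, and bare ground. All thresholds are illustrative only.
    low = elev <= np.percentile(elev, low_pct)
    return low & (ndvi < ndvi_max) & (wetness < wet_max) & bare

# Synthetic 50 x 50 "scene" standing in for satellite-derived layers.
rng = np.random.default_rng(0)
elev = rng.uniform(0.0, 1000.0, (50, 50))     # digital elevation model
ndvi = rng.uniform(0.0, 0.6, (50, 50))        # vegetation index
wetness = rng.uniform(0.0, 1.0, (50, 50))     # surface wetness
bare = rng.random((50, 50)) < 0.5             # bare-ground land cover flag
mask = dust_source_mask(elev, ndvi, wetness, bare)
```

Recomputing the mask for each 16-day composite of the input layers yields the dynamic mask described above.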

  13. Constraining the Dust Opacity Law in Three Small and Isolated Molecular Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webb, K. A.; Thanjavur, K.; Di Francesco, J.

Density profiles of isolated cores derived from thermal dust continuum emission rely on models of dust properties, such as mass opacity, that are poorly constrained. With complementary measures from near-infrared extinction maps, we can assess the reliability of commonly used dust models. In this work, we compare Herschel-derived maps of the optical depth with equivalent maps derived from CFHT WIRCAM near-infrared observations for three isolated cores: CB 68, L 429, and L 1552. We assess the dust opacities provided from four models: OH1a, OH5a, Orm1, and Orm4. Although the consistency of the models differs between the three sources, the results suggest that the optical properties of dust in the envelopes of the cores are best described by either silicate and bare graphite grains (e.g., Orm1) or carbonaceous grains with some coagulation and either thin or no ice mantles (e.g., OH5a). None of the models, however, individually produced the most consistent optical depth maps for every source. The results suggest that either the dust in the cores is not well described by any one dust property model, the application of the dust models cannot be extended beyond the very center of the cores, or more complex SED fitting functions are necessary.

  14. A Comparison between Predicted and Observed Atmospheric States and their Effects on Infrasonic Source Time Function Inversion at Source Physics Experiment 6

    NASA Astrophysics Data System (ADS)

    Aur, K. A.; Poppeliers, C.; Preston, L. A.

    2017-12-01

    The Source Physics Experiment (SPE) consists of a series of underground chemical explosions at the Nevada National Security Site (NNSS) designed to gain an improved understanding of the generation and propagation of physical signals in the near and far field. Characterizing the acoustic and infrasound source mechanism from underground explosions is of great importance to underground explosion monitoring. To this end we perform full waveform source inversion of infrasound data collected from the SPE-6 experiment at distances from 300 m to 6 km and frequencies up to 20 Hz. Our method requires estimating the state of the atmosphere at the time of each experiment, computing Green's functions through these atmospheric models, and subsequently inverting the observed data in the frequency domain to obtain a source time function. To estimate the state of the atmosphere at the time of the experiment, we utilize the Weather Research and Forecasting - Data Assimilation (WRF-DA) modeling system to derive a unified atmospheric state model by combining Global Energy and Water Cycle Experiment (GEWEX) Continental-scale International Project (GCIP) data and locally obtained sonde and surface weather observations collected at the time of the experiment. We synthesize Green's functions through these atmospheric models using Sandia's moving media acoustic propagation simulation suite (TDAAPS). These models include 3-D variations in topography, temperature, pressure, and wind. We compare inversion results using the atmospheric models derived from the unified weather models versus previous modeling results and discuss how these differences affect computed source waveforms with respect to observed waveforms at various distances. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. 
Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
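    The frequency-domain inversion step described above can be illustrated with a damped spectral division: given an observed spectrum D(ω) and a Green's function spectrum G(ω), a least-squares source estimate follows from water-level deconvolution. This is a minimal sketch, not the actual TDAAPS/SPE processing chain; the damping parameter `eps` and the synthetic signals are invented for illustration.

```python
import numpy as np

def invert_source_spectrum(d_obs, green, eps=1e-6):
    """Water-level (damped) deconvolution: recover the source spectrum
    S(w) from an observed spectrum D(w) = G(w) * S(w)."""
    return np.conj(green) * d_obs / (np.abs(green) ** 2 + eps)

# synthetic check: forward-model a known source, then recover it
n = 256
true_src = np.fft.rfft(np.exp(-np.linspace(0.0, 5.0, n)))           # decaying pulse
green = np.fft.rfft(np.concatenate(([1.0, 0.5], np.zeros(n - 2))))  # |G| stays > 0.5
d_obs = green * true_src
recovered = invert_source_spectrum(d_obs, green, eps=1e-12)
```

    In practice the damping (or a frequency-dependent water level) trades resolution against noise amplification at frequencies where the Green's function carries little energy.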

  15. Mineralogies and source regions of near-Earth asteroids

    NASA Astrophysics Data System (ADS)

    Dunn, Tasha L.; Burbine, Thomas H.; Bottke, William F.; Clark, John P.

    2013-01-01

Near-Earth Asteroids (NEAs) offer insight into a size range of objects that are not easily observed in the main asteroid belt. Previous studies on the diversity of the NEA population have relied primarily on modeling and statistical analysis to determine asteroid compositions. Olivine and pyroxene, the dominant minerals in most asteroids, have characteristic absorption features in the visible and near-infrared (VISNIR) wavelengths that can be used to determine their compositions and abundances. However, formulas previously used for deriving compositions do not work very well for ordinary chondrite assemblages. Because two-thirds of NEAs have ordinary chondrite-like spectral parameters, it is essential to determine accurate mineralogies. Here we determine the band area ratios and Band I centers of 72 NEAs with visible and near-infrared spectra and use new calibrations to derive the mineralogies of 47 of these NEAs with ordinary chondrite-like spectral parameters. Our results indicate that the majority of NEAs have LL-chondrite mineralogies. This is consistent with results from previous studies but continues to be in conflict with the population of recovered ordinary chondrites, of which H chondrites are the most abundant. To look for potential correlations between asteroid size, composition, and source region, we use a dynamical model to determine the most probable source region of each NEA. Model results indicate that NEAs with LL chondrite mineralogies appear to be preferentially derived from the ν6 secular resonance. This supports the hypothesis that the Flora family, which lies near the ν6 resonance, is the source of the LL chondrites. With the exception of basaltic achondrites, NEAs with non-chondrite spectral parameters are slightly less likely to be derived from the ν6 resonance than NEAs with chondrite-like mineralogies. The population of NEAs with H, L, and LL chondrite mineralogies does not appear to be influenced by size, which would suggest that ordinary chondrites are not preferentially sourced from meter-sized objects due to the Yarkovsky effect.
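    A band center of the kind used in such calibrations can be estimated by fitting a parabola around the reflectance minimum. The snippet below is a hypothetical simplification (real pipelines first remove the continuum slope); the `fit_halfwidth` parameter and the synthetic olivine-like band are invented for illustration.

```python
import numpy as np

def band_center(wavelength, reflectance, fit_halfwidth=3):
    """Estimate an absorption band center by fitting a parabola
    to the points around the reflectance minimum."""
    i = int(np.argmin(reflectance))
    lo, hi = max(0, i - fit_halfwidth), min(len(wavelength), i + fit_halfwidth + 1)
    a, b, c = np.polyfit(wavelength[lo:hi], reflectance[lo:hi], 2)
    return -b / (2 * a)   # vertex of the fitted parabola

# synthetic olivine-like band centered at 1.05 um
wl = np.linspace(0.7, 1.6, 91)
refl = 1.0 - 0.3 * np.exp(-((wl - 1.05) / 0.15) ** 2)
center = band_center(wl, refl)   # recovers ~1.05 um
```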

  16. The Herschel-ATLAS: magnifications and physical sizes of 500-μm-selected strongly lensed galaxies

    NASA Astrophysics Data System (ADS)

    Enia, A.; Negrello, M.; Gurwell, M.; Dye, S.; Rodighiero, G.; Massardi, M.; De Zotti, G.; Franceschini, A.; Cooray, A.; van der Werf, P.; Birkinshaw, M.; Michałowski, M. J.; Oteo, I.

    2018-04-01

We perform lens modelling and source reconstruction of Sub-millimetre Array (SMA) data for a sample of 12 strongly lensed galaxies selected at 500 μm in the Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS). A previous analysis of the same data set used a single Sérsic profile to model the light distribution of each background galaxy. Here we model the source brightness distribution with an adaptive pixel scale scheme, extended to work in the Fourier visibility space of interferometry. We also present new SMA observations for seven other candidate lensed galaxies from the H-ATLAS sample. Our derived lens model parameters are in general consistent with previous findings. However, our estimated magnification factors, ranging from 3 to 10, are lower. The discrepancies are observed in particular where the reconstructed source hints at the presence of multiple knots of emission. We define an effective radius of the reconstructed sources based on the area in the source plane where emission is detected above 5σ. We also fit the reconstructed source surface brightness with an elliptical Gaussian model. We derive a median value r_eff ~ 1.77 kpc and a median Gaussian full width at half-maximum ~1.47 kpc. After correction for magnification, our sources have intrinsic star formation rates (SFR) ~ 900-3500 M⊙ yr⁻¹, resulting in a median SFR surface density Σ_SFR ~ 132 M⊙ yr⁻¹ kpc⁻² (or ~218 M⊙ yr⁻¹ kpc⁻² for the Gaussian fit). This is consistent with that observed for other star-forming galaxies at similar redshifts, and is significantly below the Eddington limit for a radiation-pressure-regulated starburst.
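    As a sanity check on the quoted numbers, the median surface density and effective radius imply a median SFR via Σ_SFR = SFR / (π r_eff²). The aperture convention is an assumption here (the paper's exact definition may differ), but the implied value lands inside the quoted 900-3500 M⊙ yr⁻¹ range.

```python
import math

r_eff = 1.77        # kpc, median effective radius (from the abstract)
sigma_sfr = 132.0   # Msun / yr / kpc^2, median SFR surface density (from the abstract)

# assumed aperture convention: Sigma_SFR = SFR / (pi * r_eff**2)
implied_sfr = sigma_sfr * math.pi * r_eff ** 2   # ~1.3e3 Msun/yr
```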

  17. BIASES IN PHYSICAL PARAMETER ESTIMATES THROUGH DIFFERENTIAL LENSING MAGNIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Er Xinzhong; Ge Junqiang; Mao Shude, E-mail: xer@nao.cas.cn

    2013-06-20

We study the lensing magnification effect on background galaxies. Differential magnification due to different magnifications of different source regions of a galaxy will change the lensed composite spectra. The derived properties of the background galaxies are therefore biased. For simplicity, we model galaxies as a superposition of an axisymmetric bulge and a face-on disk in order to study the differential magnification effect on the composite spectra. We find that some properties derived from the spectra (e.g., velocity dispersion, star formation rate, and metallicity) are modified. Depending on the relative positions of the source and the lens, the inferred results can be either over- or underestimates of the true values. In general, for an extended source in strong lensing regions with high magnifications, the inferred physical parameters (e.g., metallicity) can be strongly biased. Therefore, detailed lens modeling is necessary to obtain the true properties of the lensed galaxies.
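    The bias mechanism reduces to a two-component toy model: when bulge and disk carry different spectral features and receive different magnifications, the flux-weighted composite changes. All numbers below are illustrative, not fits to the paper's models.

```python
import numpy as np

# [continuum, line] fluxes for two source regions (illustrative numbers)
bulge = np.array([1.0, 0.2])
disk = np.array([1.0, 0.8])
mu_bulge, mu_disk = 8.0, 3.0   # hypothetical region magnifications

true_ratio = (bulge[1] + disk[1]) / (bulge[0] + disk[0])  # unlensed line/continuum
lensed = mu_bulge * bulge + mu_disk * disk                # flux-weighted composite
lensed_ratio = lensed[1] / lensed[0]                      # biased by lensing
```

    Here the more-magnified bulge has the weaker line, so the composite line-to-continuum ratio is underestimated; swapping the magnifications would bias it the other way.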

  18. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA.

    PubMed

    Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M

    2017-10-01

    Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.
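    A distributed inverse of the sLORETA family can be illustrated with its minimum-norm core; sLORETA proper additionally standardizes each estimate by its resolution-derived variance. The random lead field below is a stand-in, not a realistic head model, and the regularization value is illustrative.

```python
import numpy as np

def min_norm_inverse(L, d, lam=1e-2):
    """Minimum-norm distributed estimate J = L^T (L L^T + lam I)^{-1} d;
    sLORETA additionally standardizes each entry by its estimated variance."""
    K = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(K, d)

rng = np.random.default_rng(1)
L = rng.standard_normal((64, 500))   # toy lead field: 64 sensors, 500 sources
j_true = np.zeros(500)
j_true[100] = 1.0                    # single focal source
d = L @ j_true
j_hat = min_norm_inverse(L, d)
peak = int(np.argmax(np.abs(j_hat)))  # localizes the source, but spread out
```

    The reconstruction peaks at the true source yet distributes amplitude over many dipoles, which is the spatial-extent overestimation the study quantifies.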

  19. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    NASA Astrophysics Data System (ADS)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  20. Geochemistry of amphibolites from the Kolar Schist Belt

    NASA Technical Reports Server (NTRS)

    Balakrishnan, S.; Hanson, G. N.; Rajamani, V.

    1988-01-01

How the Nd isotope data suggest that the amphibolites from the schist belt were derived from long-term depleted mantle sources at about 2.7 Ga is described. Trace element and Pb isotope data also suggest that the amphibolites on the western and eastern sides of the narrow schist belt were derived from different sources. The Pb data from one outcrop of the central tholeiitic amphibolites lie on a 2.7 Ga isochron with a low model. The other amphibolites (W komatiitic, E komatiitic, and E tholeiitic) do not define isochrons, but suggest that they were derived from sources with distinct U/Pb histories. There is some suggestion that the E komatiitic amphibolites may have been contaminated by fluids carrying Pb from a long-term, high-U/Pb source, such as the old granitic crust on the west side of the schist belt. This is consistent with published galena Pb isotope data from the ore lodes within the belt, which also show a history of long-term U/Pb enrichment.

  1. Spatially Resolved Isotopic Source Signatures of Wetland Methane Emissions

    NASA Astrophysics Data System (ADS)

    Ganesan, A. L.; Stell, A. C.; Gedney, N.; Comyn-Platt, E.; Hayman, G.; Rigby, M.; Poulter, B.; Hornibrook, E. R. C.

    2018-04-01

We present the first spatially resolved wetland δ13C(CH4) source signature map based on data characterizing wetland ecosystems and demonstrate good agreement with wetland signatures derived from atmospheric observations. The source signature map resolves a latitudinal difference of 10‰ between northern high-latitude (mean -67.8‰) and tropical (mean -56.7‰) wetlands and shows significant regional variations on top of the latitudinal gradient. We assess the errors in inverse modeling studies aiming to separate CH4 sources and sinks by comparing atmospheric δ13C(CH4) derived using our spatially resolved map against the common assumption of a globally uniform wetland δ13C(CH4) signature. We find a larger interhemispheric gradient, a larger high-latitude seasonal cycle, and a smaller trend over the period 2000-2012. The implication is that erroneous CH4 fluxes would be derived to compensate for the biases imposed by not utilizing spatially resolved signatures for the largest source of CH4 emissions. These biases are significant when compared to the size of observed signals.
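    The inverse-modeling implication follows from simple flux-weighted mixing of δ13C: the latitudinal mean signatures from the abstract, combined with a hypothetical flux split, give a global wetland signature that a uniform-signature assumption would misstate.

```python
# regional signatures from the abstract; the flux split is a hypothetical example
fluxes = {"high_lat": 40.0, "tropical": 120.0}   # Tg CH4 / yr, assumed split
d13c = {"high_lat": -67.8, "tropical": -56.7}    # permil

# flux-weighted global wetland signature
mixed = sum(fluxes[k] * d13c[k] for k in fluxes) / sum(fluxes.values())
# any uniform value that ignores the regional weighting shifts the
# modeled atmospheric d13C(CH4) and hence the inferred fluxes
```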

  2. Precise Absolute Astrometry from the VLBA Imaging and Polarimetry Survey at 5 GHz

    NASA Technical Reports Server (NTRS)

    Petrov, L.; Taylor, G. B.

    2011-01-01

    We present accurate positions for 857 sources derived from the astrometric analysis of 16 eleven-hour experiments from the Very Long Baseline Array imaging and polarimetry survey at 5 GHz (VIPS). Among the observed sources, positions of 430 objects were not previously determined at milliarcsecond-level accuracy. For 95% of the sources the uncertainty of their positions ranges from 0.3 to 0.9 mas, with a median value of 0.5 mas. This estimate of accuracy is substantiated by the comparison of positions of 386 sources that were previously observed in astrometric programs simultaneously at 2.3/8.6 GHz. Surprisingly, the ionosphere contribution to group delay was adequately modeled with the use of the total electron content maps derived from GPS observations and only marginally affected estimates of source coordinates.
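    The scale of the ionospheric term can be checked with the standard first-order dispersion formula, τ ≈ 40.3·TEC/(c·f²). The 10 TECU slant column below is an assumed, illustrative value; the quadratic frequency dependence is why the dual-band 2.3/8.6 GHz astrometry can remove the term while single-band 5 GHz observing must model it from GPS-derived TEC maps.

```python
C = 299792458.0   # speed of light, m/s

def iono_delay_ns(tec_tecu, freq_hz):
    """First-order ionospheric group delay (ns) for a slant TEC given in
    TEC units (1 TECU = 1e16 electrons/m^2): tau = 40.3 * TEC / (c * f^2)."""
    return 40.3 * tec_tecu * 1e16 / (C * freq_hz ** 2) * 1e9

d5 = iono_delay_ns(10.0, 5.0e9)   # VIPS observing band; 10 TECU is illustrative
d8 = iono_delay_ns(10.0, 8.6e9)   # upper band of the 2.3/8.6 GHz pairs
```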

  3. The Life of Meaning: A Model of the Positive Contributions to Well-Being from Veterinary Work.

    PubMed

    Cake, Martin A; Bell, Melinda A; Bickley, Naomi; Bartram, David J

    2015-01-01

    We present a veterinary model of work-derived well-being, and argue that educators should not only present a (potentially self-fulfilling) stress management model of future wellness, but also balance this with a positive psychology-based approach depicting a veterinary career as a richly generative source of satisfaction and fulfillment. A review of known sources of satisfaction for veterinarians finds them to be based mostly in meaningful purpose, relationships, and personal growth. This positions veterinary well-being within the tradition of eudaimonia, an ancient concept of achieving one's best possible self, and a term increasingly employed to describe well-being derived from living a life that is engaging, meaningful, and deeply fulfilling. The theory of eudaimonia for workplace well-being should inform development of personal resources that foster resilience in undergraduate and graduate veterinarians.

  4. New empirically-derived solar radiation pressure model for GPS satellites

    NASA Technical Reports Server (NTRS)

    Bar-Sever, Y.; Kuang, D.

    2003-01-01

    Solar radiation pressure force is the second largest perturbation acting on GPS satellites, after the gravitational attraction from the Earth, Sun, and Moon. It is the largest error source in the modeling of GPS orbital dynamics.

  5. DESIGN OF AQUIFER REMEDIATION SYSTEMS: (2) Estimating site-specific performance and benefits of partial source removal

    EPA Science Inventory

    A Lagrangian stochastic model is proposed as a tool that can be utilized in forecasting remedial performance and estimating the benefits (in terms of flux and mass reduction) derived from a source zone remedial effort. The stochastic functional relationships that describe the hyd...

  6. Global epidemic invasion thresholds in directed cattle subpopulation networks having source, sink, and transit nodes

    USDA-ARS?s Scientific Manuscript database

    Through the characterization of a metapopulation cattle disease model on a directed network having source, transit, and sink nodes, we derive two global epidemic invasion thresholds. The first threshold defines the conditions necessary for an epidemic to successfully spread at the global scale. The ...

  7. Lenstronomy: Multi-purpose gravitational lens modeling software package

    NASA Astrophysics Data System (ADS)

    Birrer, Simon; Amara, Adam

    2018-04-01

    Lenstronomy is a multi-purpose open-source gravitational lens modeling python package. Lenstronomy reconstructs the lens mass and surface brightness distributions of strong lensing systems using forward modelling and supports a wide range of analytic lens and light models in arbitrary combination. The software is also able to reconstruct complex extended sources as well as point sources. Lenstronomy is flexible and numerically accurate, with a clear user interface that could be deployed across different platforms. Lenstronomy has been used to derive constraints on dark matter properties in strong lenses, measure the expansion history of the universe with time-delay cosmography, measure cosmic shear with Einstein rings, and decompose quasar and host galaxy light.
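    This is not the lenstronomy API, but a standalone sketch of the kind of forward computation such packages wrap: image positions and magnifications for a singular isothermal sphere (SIS) lens in one dimension, with illustrative angles.

```python
def sis_image_positions(beta, theta_e):
    """1D image positions for a source at angle beta behind a singular
    isothermal sphere with Einstein radius theta_e (angles in arcsec)."""
    imgs = [beta + theta_e]
    if abs(beta) < theta_e:   # second image exists only inside the Einstein radius
        imgs.append(beta - theta_e)
    return imgs

def sis_magnification(theta, theta_e):
    """Unsigned SIS magnification |theta| / ||theta| - theta_e|."""
    return abs(1.0 / (1.0 - theta_e / abs(theta)))

imgs = sis_image_positions(0.3, 1.0)              # two images at 1.3 and -0.7
mags = [sis_magnification(t, 1.0) for t in imgs]  # [13/3, 7/3]
```

    Real packages generalize this to elliptical mass profiles, extended sources, and pixelized reconstructions, but the lens-equation-plus-Jacobian structure is the same.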

  8. Estimating Coastal Digital Elevation Model (DEM) Uncertainty

    NASA Astrophysics Data System (ADS)

    Amante, C.; Mesick, S.

    2017-12-01

    Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.
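    One cell-level error estimate of the kind described can be obtained by withholding measurements and predicting them from their neighbours. The snippet below uses linear interpolation and a smooth synthetic profile as stand-ins for the spline/kriging interpolators and real soundings; it is a sketch of the cross-validation idea, not the NCEI methodology.

```python
import numpy as np

def loo_errors(x, z):
    """Leave-one-out errors: withhold each interior point and predict it
    by linear interpolation from its two neighbours."""
    errs = []
    for i in range(1, len(x) - 1):
        z_hat = z[i - 1] + (z[i + 1] - z[i - 1]) * (x[i] - x[i - 1]) / (x[i + 1] - x[i - 1])
        errs.append(z[i] - z_hat)
    return np.array(errs)

x = np.linspace(0.0, 10.0, 21)   # along-track positions of soundings
z = np.sin(x)                    # smooth synthetic seabed profile
rmse = float(np.sqrt(np.mean(loo_errors(x, z) ** 2)))
```

    Binning such errors by distance to the nearest measurement yields an uncertainty surface that grows in sparsely constrained cells.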

  9. Astrometric light-travel time signature of sources in nonlinear motion. I. Derivation of the effect and radial motion

    NASA Astrophysics Data System (ADS)

    Anglada-Escudé, G.; Torra, J.

    2006-04-01

Context: Very precise planned space astrometric missions and recent improvements in imaging capabilities require a detailed review of the assumptions of classical astrometric modeling. Aims: We show that Light-Travel Time must be taken into account in modeling the kinematics of astronomical objects in nonlinear motion, even at stellar distances. Methods: A closed expression to include Light-Travel Time in the current astrometric models with nonlinear motion is provided. Using a perturbative approach, the expression of the Light-Travel Time signature is derived. We propose a practical form of the astrometric modelling to be applied in astrometric data reduction of sources at stellar distances (d > 1 pc). Results: We show that the Light-Travel Time signature is relevant at μas accuracy (or even at mas) depending on the time span of the astrometric measurements. We explain how information on the radial motion of a source can be obtained. Some estimates are provided for known nearby binary systems. Conclusions: Given the obtained results, it is clear that this effect must be taken into account in interpreting precise astrometric measurements. The effect is particularly relevant in measurements performed by the planned astrometric space missions (GAIA, SIM, JASMINE, TPF/DARWIN). An objective criterion is provided to quickly evaluate whether the Light-Travel Time modeling is required for a given source or system.
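    For the radial-motion case the effect reduces to solving the retarded-time equation t_e = t_obs - d(t_e)/c, and a fixed-point iteration converges quickly because v_r/c is tiny. The distance, velocity, and time span below are illustrative, not values from the paper.

```python
# constants: speed of light (km/s), km per parsec, seconds per Julian year
C_KM_S = 299792.458
KM_PER_PC = 3.0857e13
SEC_PER_YR = 3.1557e7

def emission_epoch(t_obs_yr, d0_pc, vr_km_s, n_iter=5):
    """Fixed-point solution of t_e = t_obs - d(t_e)/c for linear radial
    motion d(t) = d0 + vr * t; times in years."""
    t_e = t_obs_yr
    for _ in range(n_iter):
        d_km = d0_pc * KM_PER_PC + vr_km_s * t_e * SEC_PER_YR
        t_e = t_obs_yr - d_km / C_KM_S / SEC_PER_YR
    return t_e

# a source at 10 pc receding at 50 km/s, observed 10 yr apart; only the
# *change* in light-travel time across the baseline matters astrometrically
dt0 = 0.0 - emission_epoch(0.0, 10.0, 50.0)      # ~32.6 yr of light travel
dt10 = 10.0 - emission_epoch(10.0, 10.0, 50.0)   # slightly larger
```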

  10. New constraints on neutron star models of gamma-ray bursts. II - X-ray observations of three gamma-ray burst error boxes

    NASA Technical Reports Server (NTRS)

    Boer, M.; Hurley, K.; Pizzichini, G.; Gottardi, M.

    1991-01-01

Exosat observations are presented for three gamma-ray burst error boxes, one of which may be associated with an optical flash. No point sources were detected at the 3-sigma level. A comparison with Einstein data (Pizzichini et al., 1986) is made for the March 5b, 1979 source. The data are interpreted in the framework of neutron star models, and upper limits are derived for the neutron star surface temperatures, accretion rates, and surface densities of an accretion disk. Apart from the March 5b, 1979 source, consistency is found with each model.

  11. Analysis of structural dynamic data from Skylab. Volume 1: Technical discussion

    NASA Technical Reports Server (NTRS)

    Demchak, L.; Harcrow, H.

    1976-01-01

    The results of a study to analyze data and document dynamic program highlights of the Skylab Program are presented. Included are structural model sources, illustration of the analytical models, utilization of models and the resultant derived data, data supplied to organization and subsequent utilization, and specifications of model cycles.

  12. Neural Differentiation of Human Pluripotent Stem Cells for Nontherapeutic Applications: Toxicology, Pharmacology, and In Vitro Disease Modeling

    PubMed Central

    Yap, May Shin; Nathan, Kavitha R.; Yeo, Yin; Poh, Chit Laa; Richards, Mark; Lim, Wei Ling; Othman, Iekhsan; Heng, Boon Chin

    2015-01-01

Human pluripotent stem cells (hPSCs) derived from either blastocyst stage embryos (hESCs) or reprogrammed somatic cells (iPSCs) can provide an abundant source of human neuronal lineages that were previously sourced from human cadavers, abortuses, and discarded surgical waste. In addition to the well-known potential therapeutic application of these cells in regenerative medicine, there are also various promising nontherapeutic applications in toxicological and pharmacological screening of neuroactive compounds, as well as for in vitro modeling of neurodegenerative and neurodevelopmental disorders. Compared to alternative research models based on laboratory animals and immortalized cancer-derived human neural cell lines, neuronal cells differentiated from hPSCs possess the advantages of species specificity together with genetic and physiological normality, which could more closely recapitulate in vivo conditions within the human central nervous system. This review critically examines the various potential nontherapeutic applications of hPSC-derived neuronal lineages and gives a brief overview of differentiation protocols utilized to generate these cells from hESCs and iPSCs. PMID:26089911

  13. A Kalman filter approach for the determination of celestial reference frames

    NASA Astrophysics Data System (ADS)

    Soja, Benedikt; Gross, Richard; Jacobs, Christopher; Chin, Toshio; Karbon, Maria; Nilsson, Tobias; Heinkelmann, Robert; Schuh, Harald

    2017-04-01

The coordinate model of radio sources in International Celestial Reference Frames (ICRF), such as the ICRF2, has traditionally been a constant offset. While sufficient for a large fraction of radio sources given current accuracy requirements, several sources exhibit significant temporal coordinate variations. In particular, the group of the so-called special handling sources is characterized by large fluctuations in the source positions. For these sources, and for several from the "others" category of radio sources, a coordinate model that goes beyond a constant offset would be beneficial. However, due to the sheer number of radio sources in catalogs like the ICRF2, and even more so in the upcoming ICRF3, it is difficult to find the most appropriate coordinate model for every single radio source. For this reason, we have developed a time series approach to the determination of celestial reference frames (CRF). We feed the radio source coordinates derived from single very long baseline interferometry (VLBI) sessions sequentially into a Kalman filter and smoother, retaining their full covariances. The estimation of the source coordinates is carried out with a temporal resolution identical to the input data, i.e. usually 1-4 days. The coordinates are assumed to behave like random walk processes, an assumption which has already been made successfully for the determination of terrestrial reference frames such as the JTRF2014. To be able to apply the most suitable process noise value for every single radio source, their statistical properties are analyzed by computing their Allan standard deviations (ADEV). In addition to the determination of process noise values, the ADEV allows drawing conclusions as to whether the variations in certain radio source positions deviate significantly from random walk processes. Our investigations also deal with other means of source characterization, such as the structure index, in order to derive a suitable process noise model. 
The Kalman filter CRFs resulting from the different approaches are compared among each other, to the original radio source position time series, as well as to a traditional CRF solution, in which the constant source positions are estimated in a global least squares adjustment.
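    The random-walk assumption maps onto a one-dimensional Kalman filter with identity dynamics followed by a Rauch-Tung-Striebel (RTS) smoothing pass; the per-source process noise q would come from the ADEV analysis. The noise levels and series length below are synthetic, chosen only to show the filter at work.

```python
import numpy as np

def rw_kalman_smoother(y, r, q, x0=0.0, p0=1e6):
    """Kalman filter + RTS smoother for a random-walk coordinate model:
    x_k = x_{k-1} + w_k (variance q), y_k = x_k + v_k (variance r)."""
    n = len(y)
    xf, pf = np.empty(n), np.empty(n)   # filtered mean / variance
    xp, pp = np.empty(n), np.empty(n)   # predicted mean / variance
    x, p = x0, p0
    for k in range(n):
        xp[k], pp[k] = x, p + q         # predict (identity dynamics)
        g = pp[k] / (pp[k] + r)         # Kalman gain
        x = xp[k] + g * (y[k] - xp[k])
        p = (1.0 - g) * pp[k]
        xf[k], pf[k] = x, p
    xs = xf.copy()                      # RTS backward pass
    for k in range(n - 2, -1, -1):
        a = pf[k] / pp[k + 1]
        xs[k] = xf[k] + a * (xs[k + 1] - xp[k + 1])
    return xs

rng = np.random.default_rng(2)
truth = np.cumsum(rng.normal(0.0, 0.05, 300))   # random-walk source position
obs = truth + rng.normal(0.0, 0.5, 300)         # noisy session-wise estimates
xs = rw_kalman_smoother(obs, r=0.5**2, q=0.05**2)
rmse_obs = float(np.sqrt(np.mean((obs - truth) ** 2)))
rmse_smooth = float(np.sqrt(np.mean((xs - truth) ** 2)))
```

    Larger q lets the smoothed coordinates track genuine position variations (special handling sources); smaller q pulls them towards a constant offset, recovering the traditional model as a limiting case.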

  14. Estimating spatially distributed turbulent heat fluxes from high-resolution thermal imagery acquired with a UAV system

    PubMed Central

    Brenner, Claire; Thiem, Christina Elisabeth; Wizemann, Hans-Dieter; Bernhardt, Matthias; Schulz, Karsten

    2017-01-01

In this study, high-resolution thermal imagery acquired with a small unmanned aerial vehicle (UAV) is used to map evapotranspiration (ET) at a grassland site in Luxembourg. The land surface temperature (LST) information from the thermal imagery is the key input to a one-source and two-source energy balance model. While the one-source model treats the surface as a single uniform layer, the two-source model partitions the surface temperature and fluxes into soil and vegetation components. It thus explicitly accounts for the different contributions of both components to surface temperature as well as turbulent flux exchange with the atmosphere. Contrary to the two-source model, the one-source model requires an empirical adjustment parameter in order to account for the effect of the two components. Turbulent heat flux estimates of both modelling approaches are compared to eddy covariance (EC) measurements using the high-resolution input imagery UAVs provide. In this comparison, the effect of different methods for energy balance closure of the EC data on the agreement between modelled and measured fluxes is also analysed. Additionally, the sensitivity of the one-source model to the derivation of the empirical adjustment parameter is tested. Due to the very dry and hot conditions during the experiment, pronounced thermal patterns developed over the grassland site. These patterns result in spatially variable turbulent heat fluxes. The model comparison indicates that both models are able to derive ET estimates that compare well with EC measurements under these conditions. However, the two-source model, with a more complex treatment of the energy and surface temperature partitioning between the soil and vegetation, outperformed the simpler one-source model in estimating sensible and latent heat fluxes. This is consistent with findings from prior studies. 
For the one-source model, a time-variant expression of the adjustment parameter (to account for the difference between aerodynamic and radiometric temperature) that depends on the surface-to-air temperature gradient yielded the best agreement with EC measurements. This study showed that the applied UAV system equipped with a dual-camera set-up allows for the acquisition of thermal imagery with high spatial and temporal resolution that illustrates the small-scale heterogeneity of thermal surface properties. The UAV-based thermal imagery therefore provides the means for analysing patterns of LST and other surface properties with a high level of detail that cannot be obtained by traditional remote sensing methods. PMID:28515537
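    The one-source bulk formulation can be sketched as H = ρ·cp·(T_rad - T_air)/(r_a + r_excess), where the extra resistance stands in for the empirical adjustment between aerodynamic and radiometric temperature the abstract refers to. All input values below are illustrative, not measurements from the study.

```python
RHO = 1.15    # air density, kg/m^3 (illustrative)
CP = 1005.0   # specific heat of air, J/(kg K)

def sensible_heat(t_rad, t_air, r_a, r_excess):
    """One-source sensible heat flux (W/m^2) from radiometric surface
    temperature; r_excess models the aerodynamic-vs-radiometric
    temperature adjustment (both resistances in s/m)."""
    return RHO * CP * (t_rad - t_air) / (r_a + r_excess)

# hot, dry midday case with a 10 K surface-to-air gradient (illustrative)
h = sensible_heat(t_rad=308.0, t_air=298.0, r_a=50.0, r_excess=10.0)
```

    Making `r_excess` depend on the surface-to-air gradient, as the best-performing variant in the study does, makes H nonlinear in (T_rad - T_air); the latent heat flux then follows as the energy-balance residual.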

  15. Estimating spatially distributed turbulent heat fluxes from high-resolution thermal imagery acquired with a UAV system.

    PubMed

    Brenner, Claire; Thiem, Christina Elisabeth; Wizemann, Hans-Dieter; Bernhardt, Matthias; Schulz, Karsten

    2017-05-19

    In this study, high-resolution thermal imagery acquired with a small unmanned aerial vehicle (UAV) is used to map evapotranspiration (ET) at a grassland site in Luxembourg. The land surface temperature (LST) information from the thermal imagery is the key input to a one-source and two-source energy balance model. While the one-source model treats the surface as a single uniform layer, the two-source model partitions the surface temperature and fluxes into soil and vegetation components. It thus explicitly accounts for the different contributions of both components to surface temperature as well as turbulent flux exchange with the atmosphere. Contrary to the two-source model, the one-source model requires an empirical adjustment parameter in order to account for the effect of the two components. Turbulent heat flux estimates of both modelling approaches are compared to eddy covariance (EC) measurements using the high-resolution input imagery UAVs provide. In this comparison, the effect of different methods for energy balance closure of the EC data on the agreement between modelled and measured fluxes is also analysed. Additionally, the sensitivity of the one-source model to the derivation of the empirical adjustment parameter is tested. Due to the very dry and hot conditions during the experiment, pronounced thermal patterns developed over the grassland site. These patterns result in spatially variable turbulent heat fluxes. The model comparison indicates that both models are able to derive ET estimates that compare well with EC measurements under these conditions. However, the two-source model, with a more complex treatment of the energy and surface temperature partitioning between the soil and vegetation, outperformed the simpler one-source model in estimating sensible and latent heat fluxes. This is consistent with findings from prior studies. 
For the one-source model, a time-variant expression of the adjustment parameter (to account for the difference between aerodynamic and radiometric temperature) that depends on the surface-to-air temperature gradient yielded the best agreement with EC measurements. This study showed that the applied UAV system equipped with a dual-camera set-up allows for the acquisition of thermal imagery with high spatial and temporal resolution that illustrates the small-scale heterogeneity of thermal surface properties. The UAV-based thermal imagery therefore provides the means for analysing patterns of LST and other surface properties with a high level of detail that cannot be obtained by traditional remote sensing methods.

  16. A reduced-form intensity-based model under fuzzy environments

    NASA Astrophysics Data System (ADS)

    Wu, Liang; Zhuang, Yaming

    2015-05-01

External shocks and internal contagion are important sources of default events. However, the effect of external shocks and internal contagion on a company is not directly observed, so the exact size of the shocks cannot be determined; the information available to investors about the default process exhibits a certain fuzziness. There is therefore a practical need to use both randomness and fuzziness to study problems such as derivative pricing or default probability. Yet the idea of fuzzifying credit risk models is little exploited, especially in reduced-form models. This paper proposes a new default intensity model with fuzziness, presents a fuzzy default probability and default loss rate, and applies them to the pricing of defaultable debt and credit derivatives. Finally, a simulation analysis verifies the rationality of the model. Using fuzzy numbers and random analysis, one can account for more sources of uncertainty in the default process and for investors' subjective judgment of the financial markets at various levels of fuzzy reliability, thereby broadening the range of possible credit spreads.
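    The fuzzy-intensity idea can be illustrated with alpha-cuts: a triangular fuzzy default intensity (l, m, u) yields, at each membership level alpha, an interval of survival probabilities exp(-λT). This is a generic sketch of fuzzy reduced-form machinery, not the paper's specific model, and the numbers are invented.

```python
import math

def survival_cut(l, m, u, alpha, T):
    """Alpha-cut of the survival probability exp(-lambda*T) for a
    triangular fuzzy intensity (l, m, u); survival is decreasing in
    lambda, so the interval endpoints swap."""
    lam_lo = l + alpha * (m - l)
    lam_hi = u - alpha * (u - m)
    return math.exp(-lam_hi * T), math.exp(-lam_lo * T)

lo, hi = survival_cut(0.01, 0.02, 0.04, alpha=0.5, T=5.0)
crisp = math.exp(-0.02 * 5.0)   # alpha = 1 collapses to the crisp intensity
```

    The resulting interval translates directly into an interval of defaultable bond prices and credit spreads, which is the "broadened range" the abstract describes.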

  17. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors have aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows the processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inference are becoming more advanced. By now, data error propagation is widely implemented, and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, InSAR-derived surface displacements and seismological waveforms are combined more regularly, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences between geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join these community efforts with the particular goal of improving crustal earthquake source inferences in areas that are generally not well instrumented, where often only the global backbone of earthquake observations is available, provided by seismological broadband sensor networks and, more recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimation as well as model uncertainty estimation.
Rectangular dislocations and moment-tensor point sources are replaced by simple planar finite rupture models. 1d-layered medium models are implemented for both near- and far-field data predictions. A highlight of our approach is its weak dependence on earthquake bulletin information: hypocenter locations and source origin times are relatively free source model parameters. We present this harmonized source modelling environment based on example earthquake studies, e.g. the 2010 Haiti earthquake, the 2009 L'Aquila earthquake and others. We discuss the benefit of combined-data non-linear modelling for the resolution of first-order rupture parameters, e.g. location, size, orientation, mechanism, moment/slip and rupture propagation. The presented studies apply our newly developed software tools, which build on the open-source seismological software toolbox pyrocko (www.pyrocko.org) in the form of modules. We aim to facilitate a better exploitation of open global data sets for a wide community studying tectonics, but the tools are also applicable to a large range of regional to local earthquake studies. Our developments therefore ensure great flexibility in the parametrization of medium models (e.g. 1d to 3d medium models), source models (e.g. explosion sources, full moment tensor sources, heterogeneous slip models, etc.) and of the predicted data (e.g. (high-rate) GPS, strong motion, tilt). This work is conducted within the project "Bridging Geodesy and Seismology" (www.bridges.uni-kiel.de), funded by the German Research Foundation DFG through an Emmy Noether grant.

  18. Petrogenesis of Igneous-Textured Clasts in Martian Meteorite Northwest Africa 7034

    NASA Technical Reports Server (NTRS)

    Santos, A. R.; Agee, C. B.; Humayun, M.; McCubbin, F. M.; Shearer, C. K.

    2016-01-01

    The martian meteorite Northwest Africa 7034 (and pairings) is a breccia that samples a variety of materials from the martian crust. Several previous studies have identified multiple types of igneous-textured clasts within the breccia [1-3], and these clasts have the potential to provide insight into the igneous evolution of Mars. One challenge presented by studying these small rock fragments is the lack of field context for this breccia (i.e., where on Mars it formed), so we do not know how many sources these small rock fragments are derived from or the exact formation history of these sources (i.e., are the sources mantle-derived melt or melts contaminated by a meteorite impactor on Mars). Our goal in this study is to examine specific igneous-textured clast groups to determine if they are petrogenetically related (i.e., from the same igneous source) and determine more information about their formation history, then use them to derive new insights about the igneous history of Mars. We will focus on the basalt clasts, FTP clasts (named due to their high concentration of iron, titanium, and phosphorous), and mineral fragments described by [1] (Fig. 1). We will examine these materials for evidence of impactor contamination (as proposed for some materials by [2]) or mantle melt derivation. We will also test the petrogenetic models proposed in [1], which are igneous processes that could have occurred regardless of where the melt parental to the clasts was formed. These models include 1) derivation of the FTP clasts from a basalt clast melt through silicate liquid immiscibility (SLI), 2) derivation of the FTP clasts from a basalt clast melt through fractional crystallization, and 3) a lack of petrogenetic relationship between these clast groups. The relationship between the clast groups and the mineral fragments will also be explored.

  19. Forecasting future phosphorus export to the Laurentian Great Lakes from land-derived nutrient inputs

    NASA Astrophysics Data System (ADS)

    LaBeau, M. B.; Robertson, D. M.; Mayer, A. S.; Pijanowski, B. C.

    2011-12-01

    Anthropogenic use of the land through agricultural and urban activities has significantly increased phosphorus loading to rivers that flow to the Great Lakes. Phosphorus (P) is a critical element in the eutrophication of freshwater ecosystems, most notably the Great Lakes. To better understand the factors influencing P delivery to aquatic systems, and thus its potential harmful effects on lake ecosystems, models that predict P export should account for changes in anthropogenic activities. Land-derived P from high-yielding sources, such as agriculture and urban areas, affects eutrophication at various scales (e.g., specific bays to all of Lake Erie). SPARROW (SPAtially Referenced Regression On Watershed attributes) is a spatially explicit watershed model that has been used to understand linkages between land-derived sources and nutrient transport to the Great Lakes. The Great Lakes region is expected to experience a doubling of urbanized areas along with a ten percent increase in agricultural use over the next 40 years, which is likely to increase P loading. To determine how these changes will impact P loading, SPARROW models have been developed that relate changes in land use to changes in nutrient sources, including relationships between row crop acreage and fertilizer intensity, and between urban land use and point source intensity. We used land use projections from the Land Transformation Model, a spatially explicit, neural-net-based land change model. Land use patterns from the present to 2040 were used as input to HydroSPARROW, a forecasting tool that enables SPARROW to simulate the effects of various land-use and climate scenarios. This work thus focuses on understanding how specific agricultural and urbanization activities affect P loading in the watersheds of the Laurentian Great Lakes, to potentially identify strategies for reducing the extent and severity of future eutrophication.
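The land-use-to-loading link can be caricatured with a simple export-coefficient calculation; the coefficients and areas below are invented for illustration and are not SPARROW or Land Transformation Model values:

```python
# Illustrative export coefficients, kg P per hectare per year.
EXPORT_COEFF = {"row_crop": 2.0, "urban": 1.2, "forest": 0.1}

def annual_p_load(areas_ha):
    """Total annual P export (kg/yr) for a dict of land-use areas (ha)."""
    return sum(EXPORT_COEFF[lu] * a for lu, a in areas_ha.items())

# Hypothetical watershed: today vs. a 2040 scenario with urban growth
# and row-crop expansion at the expense of forest.
current = {"row_crop": 10000.0, "urban": 2000.0, "forest": 30000.0}
future = {"row_crop": 11000.0, "urban": 4000.0, "forest": 27000.0}
```

Even with total area unchanged, shifting land from forest to urban and row-crop classes raises the predicted load, which is the qualitative effect the SPARROW-based forecasts probe in far more detail.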

  20. Sources of black carbon to the Himalayan-Tibetan Plateau glaciers

    NASA Astrophysics Data System (ADS)

    Li, Chaoliu; Bosch, Carme; Kang, Shichang; Andersson, August; Chen, Pengfei; Zhang, Qianggong; Cong, Zhiyuan; Chen, Bing; Qin, Dahe; Gustafsson, Örjan

    2016-08-01

    Combustion-derived black carbon (BC) aerosols accelerate glacier melting in the Himalayas and in Tibet (the Third Pole (TP)), thereby limiting the sustainable freshwater supplies for billions of people. However, the sources of BC reaching the TP remain uncertain, hindering both process understanding and efficient mitigation. Here we present the source-diagnostic Δ14C/δ13C compositions of BC isolated from aerosol and snowpit samples in the TP. For the Himalayas, we found equal contributions from fossil fuel (46+/-11%) and biomass (54+/-11%) combustion, consistent with BC source fingerprints from the Indo-Gangetic Plain, whereas BC in the remote northern TP predominantly derives from fossil fuel combustion (66+/-16%), consistent with Chinese sources. The fossil fuel contributions to BC in the snowpits of the inner TP are lower (30+/-10%), implying contributions from internal Tibetan sources (for example, yak dung combustion). Constraints on BC sources facilitate improved modelling of climatic patterns, hydrological effects and provide guidance for effective mitigation actions.
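The fossil/biomass split behind figures like 46+/-11% follows from a two-endmember radiocarbon mass balance; the biomass endmember value below is an assumed representative figure (contemporary biomass spans roughly +50 to +225 per mil), not the study's calibration:

```python
D14C_FOSSIL = -1000.0  # per mil; fossil carbon is 14C-free by definition

def fossil_fraction(d14c_sample, d14c_biomass=112.0):
    """Fraction of BC carbon from fossil fuel combustion, via the
    isotope mass balance
    d14C_sample = f_bio * d14C_bio + (1 - f_bio) * d14C_fossil."""
    f_bio = (d14c_sample - D14C_FOSSIL) / (d14c_biomass - D14C_FOSSIL)
    return 1.0 - f_bio
```

A sample with a strongly depleted Δ14C thus plots close to the fossil endmember, while a modern-carbon signature plots close to the biomass endmember.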

  1. Multimedia Model for Polycyclic Aromatic Hydrocarbons (PAHs) and Nitro-PAHs in Lake Michigan

    PubMed Central

    2015-01-01

    Polycyclic aromatic hydrocarbon (PAH) contamination in the U.S. Great Lakes has long been of concern, but information regarding the current sources, distribution, and fate of PAH contamination is lacking, and very little information exists for the potentially more toxic nitro-derivatives of PAHs (NPAHs). This study uses fugacity, food web, and Monte Carlo models to examine 16 PAHs and five NPAHs in Lake Michigan, and to derive PAH and NPAH emission estimates. Good agreement was found between predicted and measured PAH concentrations in air, but concentrations in water and sediment were generally under-predicted, possibly due to incorrect parameter estimates for degradation rates, discharges to water, or inputs from tributaries. The food web model matched measurements of heavier PAHs (≥5 rings) in lake trout, but lighter PAHs (≤4 rings) were overpredicted, possibly due to overestimates of metabolic half-lives or gut/gill absorption efficiencies. Derived PAH emission rates peaked in the 1950s, and rates now approach those in the mid-19th century. The derived emission rates far exceed those in the source inventories, suggesting the need to reconcile differences and reduce uncertainties. Although additional measurements and physiochemical data are needed to reduce uncertainties and for validation purposes, the models illustrate the behavior of PAHs and NPAHs in Lake Michigan, and they provide useful and potentially diagnostic estimates of emission rates. PMID:25373871

  2. The Potential Effects of Minimum Wage Changes on Naval Accessions

    DTIC Science & Technology

    2017-03-01

    …price floor affects the market's demand for labor, and utilizes the two-sector and search models to demonstrate how the minimum wage market correlates to military accessions. Finally, the report examines studies that show the … that Derives from a Price Floor. Source: "Price Floor" (n.d.). … Figure 3. Price Floor below the Market. Source: "Price Floor" (n.d.).

  3. Microseismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-07-01

    At the heart of microseismic event measurement is the task of estimating the location of microseismic events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional microseismic source location methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction, but is also prone to errors. Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, FWI of microseismic events faces considerable nonlinearity due to the unknown source locations (space) and source functions (time). We developed a source-function-independent FWI of microseismic events to invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modelled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
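The convolution trick that removes the unknown source function can be demonstrated in a few lines; the trace construction below is a toy example, not the authors' implementation:

```python
import numpy as np

def misfit(d_obs, d_syn, ref_obs, ref_syn):
    """Source-independent L2 misfit: each dataset is convolved with a
    reference trace from the other, so the (unknown) source wavelet,
    common to all traces of a dataset, cancels in the comparison."""
    r = np.convolve(d_obs, ref_syn) - np.convolve(d_syn, ref_obs)
    return 0.5 * float(np.dot(r, r))

# Toy check: same Green's functions, different source wavelets w1, w2.
g, g_ref = np.array([1.0, 0.5]), np.array([0.3, 1.0])
w1, w2 = np.array([1.0, -0.2]), np.array([0.7, 0.1])
d_obs, ref_obs = np.convolve(w1, g), np.convolve(w1, g_ref)
d_syn, ref_syn = np.convolve(w2, g), np.convolve(w2, g_ref)
```

Because convolution commutes, w1*g*w2*g_ref equals w2*g*w1*g_ref, so the misfit vanishes whenever the modelled Green's functions match the true ones, regardless of source wavelet or ignition time.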

  4. Simulation of the Tsunami Resulting from the M 9.2 2004 Sumatra-Andaman Earthquake - Dynamic Rupture vs. Seismic Inversion Source Model

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Behrens, Jörn

    2017-04-01

    Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Also, other data from ocean buoys etc. is sometimes included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time it allows for high local resolution and geometric accuracy. The results are compared to measured data and results using earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough comparison and validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.

  5. Gummel Symmetry Test on charge based drain current expression using modified first-order hyperbolic velocity-field expression

    NASA Astrophysics Data System (ADS)

    Singh, Kirmender; Bhattacharyya, A. B.

    2017-03-01

    The Gummel Symmetry Test (GST) has been a benchmark industry standard for MOSFET models and is considered one of the important tests by the modeling community. The BSIM4 MOSFET model fails to pass the GST because its drain current equation is not symmetrical, as the drain and source potentials are not referenced to the bulk. The BSIM6 MOSFET model overcomes this limitation by taking all terminal biases with reference to the bulk and using a proper velocity saturation (v-E) model. The drain current equation in BSIM6 is charge based and continuous in all regions of operation. It, however, adopts a complicated method to compute the source and drain charges. In this work we propose to use the conventional charge-based method formulated by Enz to obtain a simpler analytical drain current expression that passes the GST. For this purpose we adopt two steps: (i) in the first step we use a modified first-order hyperbolic v-E model with adjustable coefficients, which is integrable, simple and accurate; and (ii) in the second we use a multiplying factor in the modified first-order hyperbolic v-E expression to obtain the correct monotonic asymptotic behavior around the origin of the lateral electric field. This factor is of empirical form and is a function of the drain voltage (vd) and source voltage (vs). After both of the above steps we obtain a drain current expression whose accuracy is similar to that obtained from a second-order hyperbolic v-E model. If, in the modified first-order hyperbolic v-E expression, vd and vs are replaced by smoothing functions for the effective drain voltage (vdeff) and effective source voltage (vseff), it will also take care of the discontinuity between the linear and saturation regions of operation. The condition of symmetry is shown to be satisfied by the drain current and its higher-order derivatives, as both are odd functions and their even-order derivatives pass smoothly through the origin.
In the strong inversion region, at the 22 nm technology node, the GST is shown to pass up to the sixth-order derivative, and for weak inversion up to the fifth-order derivative. The expression for the drain current takes into consideration the major short-channel phenomena: vertical-field mobility reduction, velocity saturation and velocity overshoot.
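The symmetry requirement on the velocity-field characteristic can be illustrated with a minimal sketch of a first-order hyperbolic model; the parameter values are arbitrary and the |E| form is one simple way to enforce odd symmetry, not the paper's exact expression:

```python
def velocity(e_lat, mu=0.05, e_c=5.0e6):
    """First-order hyperbolic v-E characteristic,
    v = mu * E / (1 + |E| / Ec).
    Using |E| keeps v an odd function of the lateral field, a necessary
    condition for Gummel symmetry of the resulting drain current."""
    return mu * e_lat / (1.0 + abs(e_lat) / e_c)
```

The characteristic is linear (slope mu) near the origin and saturates at mu*Ec for large fields, which is the behavior the adjustable coefficients tune against a second-order model.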

  6. Delineating sources of groundwater recharge in an arsenic-affected Holocene aquifer in Cambodia using stable isotope-based mixing models

    NASA Astrophysics Data System (ADS)

    Richards, Laura A.; Magnone, Daniel; Boyce, Adrian J.; Casanueva-Marenco, Maria J.; van Dongen, Bart E.; Ballentine, Christopher J.; Polya, David A.

    2018-02-01

    Chronic exposure to arsenic (As) through the consumption of contaminated groundwaters is a major threat to public health in South and Southeast Asia. The source of As-affected groundwaters is important to the fundamental understanding of the controls on As mobilization and subsequent transport throughout shallow aquifers. Using the stable isotopes of hydrogen and oxygen, the source of groundwater and the interactions between various water bodies were investigated in Cambodia's Kandal Province, an area which is heavily affected by As and typical of many circum-Himalayan shallow aquifers. Two-point mixing models based on δD and δ18O allowed the relative extent of evaporation of groundwater sources to be estimated and allowed various water bodies to be broadly distinguished within the aquifer system. Model limitations are discussed, including the spatial and temporal variation in end member compositions. The conservative tracer Cl/Br is used to further discriminate between groundwater bodies. The stable isotopic signatures of groundwaters containing high As and/or high dissolved organic carbon plot both near the local meteoric water line and near more evaporative lines. The varying degrees of evaporation of high As groundwater sources are indicative of differing recharge contributions (and thus indirectly inferred associated organic matter contributions). The presence of high As groundwaters with recharge derived from both local precipitation and relatively evaporated surface water sources, such as ponds or flooded wetlands, are consistent with (but do not provide direct evidence for) models of a potential dual role of surface-derived and sedimentary organic matter in As mobilization.
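A two-point mixing model of this kind reduces to a single linear mass balance per tracer; the endmember values in the usage note are placeholders, not the Kandal Province endmembers:

```python
def mixing_fraction(delta_sample, delta_a, delta_b):
    """Fraction of endmember A in a two-endmember mix for a conserved
    isotopic tracer (e.g. d18O), from the mass balance
    delta_sample = f * delta_a + (1 - f) * delta_b."""
    return (delta_sample - delta_b) / (delta_a - delta_b)
```

For example, with a hypothetical precipitation endmember of -8 per mil and an evaporated surface-water endmember of -4 per mil, a groundwater sample at -6 per mil would imply a 50/50 recharge mix.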

  7. Into the deep: Evaluation of SourceTracker for assessment of faecal contamination of coastal waters.

    PubMed

    Henry, Rebekah; Schang, Christelle; Coutts, Scott; Kolotelo, Peter; Prosser, Toby; Crosbie, Nick; Grant, Trish; Cottam, Darren; O'Brien, Peter; Deletic, Ana; McCarthy, David

    2016-04-15

    Faecal contamination of recreational waters is an increasing global health concern. Tracing the source of the contaminant is a vital step towards mitigation and disease prevention. Total 16S rRNA amplicon data for a specific environment (faeces, water, soil) and computational tools such as the Markov chain Monte Carlo based SourceTracker can be applied to microbial source tracking (MST) and attribution studies. The current study applied artificial and laboratory-derived bacterial communities to define the potential and limitations associated with the use of SourceTracker, prior to its application for faecal source tracking at three recreational beaches near Port Phillip Bay (Victoria, Australia). The results demonstrated that, at a minimum, multiple runs of the SourceTracker modelling tool (i.e. technical replicates) were required to identify potential false positive predictions. The calculation of relative standard deviations (RSDs) for each attributed source improved overall predictive confidence in the results. In general, default parameter settings provided high sensitivity, specificity, accuracy and precision. Application of SourceTracker to recreational beach samples identified treated effluent as the major source of human-derived faecal contamination, present in 69% of samples. Site-specific sources, such as raw sewage, stormwater and bacterial populations associated with the Yarra River estuary, were also identified. Rainfall and associated sand resuspension at each location correlated with observed human faecal indicators. The results of the optimised SourceTracker analysis suggest that local sources of contamination have the greatest effect on recreational coastal water quality. Copyright © 2016 Elsevier Ltd. All rights reserved.
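The replicate-RSD screen described above is straightforward to implement; the 30% threshold and the example proportions are illustrative choices, not values from the study:

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation (%) of a source's attributed
    proportion across repeated SourceTracker runs."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def stable_sources(replicates, threshold=30.0):
    """Keep only sources whose replicate RSD is at or below the
    threshold, screening out likely false-positive attributions.
    replicates: dict mapping source name -> list of proportions."""
    return {src: vals for src, vals in replicates.items()
            if rsd_percent(vals) <= threshold}
```

A source attributed consistently across technical replicates survives the screen; one whose proportion swings wildly between runs is flagged as a probable false positive.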

  8. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation

    PubMed Central

    Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    Purpose: To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. Materials and methods: A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. Results: The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. Conclusion: A new method of deriving a virtual photon source model of a linear accelerator from a PSF for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm. PMID:28886048
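Inverse transform sampling from a tabulated PDF, as used here for particle positions and energies, can be sketched as follows; the histogram is a stand-in, not data from an actual phase space file:

```python
import bisect
import random

def make_sampler(bin_edges, counts, rng=random.Random(0)):
    """Return a sampler drawing from the empirical distribution given by
    histogram counts over bin_edges, via inversion of the discrete CDF."""
    total = float(sum(counts))
    cdf, acc = [], 0.0
    for c in counts:
        acc += c / total
        cdf.append(acc)
    def sample():
        u = rng.random()
        i = bisect.bisect_left(cdf, u)        # invert the discrete CDF
        lo, hi = bin_edges[i], bin_edges[i + 1]
        return lo + rng.random() * (hi - lo)  # uniform within the bin
    return sample
```

Drawing many samples reproduces the tabulated bin probabilities, which is all the VSM needs from each PDF.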

  9. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    PubMed

    Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.

  10. Reciprocity relationships in vector acoustics and their application to vector field calculations.

    PubMed

    Deal, Thomas J; Smith, Kevin B

    2017-08-01

    The reciprocity equation commonly stated in underwater acoustics relates pressure fields and monopole sources. It is often used to predict the pressure measured by a hydrophone for multiple source locations by placing a source at the hydrophone location and calculating the field everywhere for that source. A similar equation that governs the orthogonal components of the particle velocity field is needed to enable this computational method to be used for acoustic vector sensors. This paper derives a general reciprocity equation that accounts for both monopole and dipole sources. This vector-scalar reciprocity equation can be used to calculate individual components of the received vector field by altering the source type used in the propagation calculation. This enables a propagation model to calculate the received vector field components for an arbitrary number of source locations with a single model run for each vector field component instead of requiring one model run for each source location. Application of the vector-scalar reciprocity principle is demonstrated with analytic solutions for a range-independent environment and with numerical solutions for a range-dependent environment using a parabolic equation model.

  11. Development of High-Resolution Dynamic Dust Source Function - A Case Study with a Strong Dust Storm in a Regional Model

    NASA Technical Reports Server (NTRS)

    Kim, Dongchul; Chin, Mian; Kemp, Eric M.; Tao, Zhining; Peters-Lidard, Christa D.; Ginoux, Paul

    2017-01-01

    A high-resolution dynamic dust source has been developed in the NASA Unified-Weather Research and Forecasting (NU-WRF) model to improve the existing coarse static dust source. In the new dust source map, topographic depression is in 1-km resolution and surface bareness is derived using the Normalized Difference Vegetation Index (NDVI) data from Moderate Resolution Imaging Spectroradiometer (MODIS). The new dust source better resolves the complex topographic distribution over the Western United States where its magnitude is higher than the existing, coarser resolution static source. A case study is conducted with an extreme dust storm that occurred in Phoenix, Arizona in 02-03 UTC July 6, 2011. The NU-WRF model with the new high-resolution dynamic dust source is able to successfully capture the dust storm, which was not achieved with the old source identification. However the case study also reveals several challenges in reproducing the time evolution of the short-lived, extreme dust storm events.

  12. Development of High-Resolution Dynamic Dust Source Function -A Case Study with a Strong Dust Storm in a Regional Model

    PubMed Central

    Kim, Dongchul; Chin, Mian; Kemp, Eric M.; Tao, Zhining; Peters-Lidard, Christa D.; Ginoux, Paul

    2018-01-01

    A high-resolution dynamic dust source has been developed in the NASA Unified-Weather Research and Forecasting (NU-WRF) model to improve the existing coarse static dust source. In the new dust source map, topographic depression is in 1-km resolution and surface bareness is derived using the Normalized Difference Vegetation Index (NDVI) data from Moderate Resolution Imaging Spectroradiometer (MODIS). The new dust source better resolves the complex topographic distribution over the Western United States where its magnitude is higher than the existing, coarser resolution static source. A case study is conducted with an extreme dust storm that occurred in Phoenix, Arizona in 02-03 UTC July 6, 2011. The NU-WRF model with the new high-resolution dynamic dust source is able to successfully capture the dust storm, which was not achieved with the old source identification. However the case study also reveals several challenges in reproducing the time evolution of the short-lived, extreme dust storm events. PMID:29632432

  13. Development of High-Resolution Dynamic Dust Source Function -A Case Study with a Strong Dust Storm in a Regional Model.

    PubMed

    Kim, Dongchul; Chin, Mian; Kemp, Eric M; Tao, Zhining; Peters-Lidard, Christa D; Ginoux, Paul

    2017-06-01

    A high-resolution dynamic dust source has been developed in the NASA Unified-Weather Research and Forecasting (NU-WRF) model to improve the existing coarse static dust source. In the new dust source map, topographic depression is in 1-km resolution and surface bareness is derived using the Normalized Difference Vegetation Index (NDVI) data from Moderate Resolution Imaging Spectroradiometer (MODIS). The new dust source better resolves the complex topographic distribution over the Western United States where its magnitude is higher than the existing, coarser resolution static source. A case study is conducted with an extreme dust storm that occurred in Phoenix, Arizona in 02-03 UTC July 6, 2011. The NU-WRF model with the new high-resolution dynamic dust source is able to successfully capture the dust storm, which was not achieved with the old source identification. However the case study also reveals several challenges in reproducing the time evolution of the short-lived, extreme dust storm events.

  14. Continuous-variable quantum key distribution with Gaussian source noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen Yujie; Peng Xiang; Yang Jian

    2011-05-15

    Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.

  15. From OSS CAD to BIM for Cultural Heritage Digital Representation

    NASA Astrophysics Data System (ADS)

    Logothetis, S.; Karachaliou, E.; Stylianidis, E.

    2017-02-01

    The paper illustrates the use of open source Computer-aided design (CAD) environments in order to develop Building Information Modelling (BIM) tools able to manage 3D models in the field of cultural heritage. Nowadays, the development of Free and Open Source Software (FOSS) has been rapidly growing and their use tends to be consolidated. Although BIM technology is widely known and used, there is a lack of integrated open source platforms able to support all stages of Historic Building Information Modelling (HBIM) processes. The present research aims to use a FOSS CAD environment in order to develop BIM plug-ins which will be able to import and edit digital representations of cultural heritage models derived by photogrammetric methods.

  16. Problems encountered with the use of simulation in an attempt to enhance interpretation of a secondary data source in epidemiologic mental health research

    PubMed Central

    2010-01-01

    Background: The longitudinal epidemiology of major depressive episodes (MDE) is poorly characterized in most countries. Some potentially relevant data sources may be underutilized because they are not conducive to estimating the most salient epidemiologic parameters. An available data source in Canada provides estimates that are potentially valuable, but that are difficult to apply in clinical or public health practice. For example, weeks depressed in the past year is assessed in this data source whereas episode duration would be of more interest. The goal of this project was to derive, using simulation, more readily interpretable parameter values from the available data. Findings: The data source was a Canadian longitudinal study called the National Population Health Survey (NPHS). A simulation model representing the course of depressive episodes was used to reshape estimates deriving from binary and ordinal logistic models (fit to the NPHS data) into equations more capable of informing clinical and public health decisions. Discrete event simulation was used for this purpose. Whereas the intention was to clarify a complex epidemiology, the models themselves needed to become excessively complex in order to provide an accurate description of the data. Conclusions: Simulation methods are useful in circumstances where a representation of a real-world system has practical value. In this particular scenario, the usefulness of simulation was limited both by problems with the data source and by inherent complexity of the underlying epidemiology. PMID:20796271

  17. Absolute measurement of LDR brachytherapy source emitted power: Instrument design and initial measurements.

    PubMed

    Malin, Martha J; Palmer, Benjamin R; DeWerd, Larry A

    2016-02-01

    Energy-based source strength metrics may find use with model-based dose calculation algorithms, but no instruments exist that can measure the energy emitted from low-dose rate (LDR) sources. This work developed a calorimetric technique for measuring the power emitted from encapsulated low-dose rate, photon-emitting brachytherapy sources. This quantity is called emitted power (EP). The measurement methodology, instrument design and performance, and EP measurements made with the calorimeter are presented in this work. A calorimeter operating with a liquid helium thermal sink was developed to measure EP from LDR brachytherapy sources. The calorimeter employed an electrical substitution technique to determine the power emitted from the source. The calorimeter's performance and thermal system were characterized. EP measurements were made using four (125)I sources with air-kerma strengths ranging from 2.3 to 5.6 U and corresponding EPs of 0.39-0.79 μW, respectively. Three Best Medical 2301 sources and one Oncura 6711 source were measured. EP was also computed by converting measured air-kerma strengths to EPs through Monte Carlo-derived conversion factors. The measured EP and derived EPs were compared to determine the accuracy of the calorimeter measurement technique. The calorimeter had a noise floor of 1-3 nW and a repeatability of 30-60 nW. The calorimeter was stable to within 5 nW over a 12 h measurement window. All measured values agreed with derived EPs to within 10%, with three of the four sources agreeing to within 4%. Calorimeter measurements had uncertainties ranging from 2.6% to 4.5% at the k = 1 level. The values of the derived EPs had uncertainties ranging from 2.9% to 3.6% at the k = 1 level. A calorimeter capable of measuring the EP from LDR sources has been developed and validated for (125)I sources with EPs between 0.43 and 0.79 μW.
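The comparison between measured and air-kerma-derived emitted power reduces to a conversion and a tolerance check; a sketch, where the conversion factor is a hypothetical value (the paper's Monte Carlo-derived factors are source-model specific) and only the 10% criterion and the unit U = uGy m^2/h are standard:

```python
def derived_ep_uW(air_kerma_strength_U, conversion_factor_uW_per_U):
    """Convert a measured air-kerma strength (U = uGy m^2 / h) to emitted
    power (uW) via a Monte Carlo-derived conversion factor. The factor used
    in the example below is hypothetical, not the paper's value."""
    return air_kerma_strength_U * conversion_factor_uW_per_U

def within_tolerance(measured_uW, derived_uW, tol=0.10):
    """Agreement criterion reported in the abstract: measured EP within
    10% of the air-kerma-derived EP."""
    return abs(measured_uW - derived_uW) / derived_uW <= tol

# Illustrative numbers spanning the reported range (0.39-0.79 uW)
derived = derived_ep_uW(5.6, 0.141)  # hypothetical conversion factor
print(within_tolerance(0.79, derived))
```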

  18. Sources and Sinks: A Stochastic Model of Evolution in Heterogeneous Environments

    NASA Astrophysics Data System (ADS)

    Hermsen, Rutger; Hwa, Terence

    2010-12-01

    We study evolution driven by spatial heterogeneity in a stochastic model of source-sink ecologies. A sink is a habitat where mortality exceeds reproduction so that a local population persists only due to immigration from a source. Immigrants can, however, adapt to conditions in the sink by mutation. To characterize the adaptation rate, we derive expressions for the first arrival time of adapted mutants. The joint effects of migration, mutation, birth, and death result in two distinct parameter regimes. These results may pertain to the rapid evolution of drug-resistant pathogens and insects.
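The first-arrival time of adapted mutants in a sink can be illustrated with a small Gillespie-type stochastic simulation of immigration, birth, death, and mutation; all rate values below are illustrative assumptions, not the paper's parameters:

```python
import random

def first_arrival_time(m=1.0, b=0.9, d=1.0, u=1e-3, t_max=1e6, rng=random):
    """Gillespie-style simulation of a sink habitat (death rate d > birth
    rate b) refilled by immigration at rate m; each birth mutates with
    probability u. Returns the first time an adapted mutant appears.
    Rates are illustrative, not the paper's parameter values."""
    t, n = 0.0, 0  # time, number of wild-type individuals in the sink
    while t < t_max:
        rate = m + n * (b + d)
        t += rng.expovariate(rate)
        r = rng.random() * rate
        if r < m:
            n += 1                   # immigration from the source
        elif r < m + n * b:
            if rng.random() < u:     # birth with mutation -> adapted
                return t
            n += 1                   # birth without mutation
        else:
            n -= 1                   # death
    return t_max

random.seed(0)
print(first_arrival_time())
```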

  19. Dementia and Depression: A Process Model for Differential Diagnosis.

    ERIC Educational Resources Information Center

    Hill, Carrie L.; Spengler, Paul M.

    1997-01-01

    Delineates a process model for mental-health counselors to follow in formulating a differential diagnosis of dementia and depression in adults 65 years and older. The model is derived from empirical, theoretical, and clinical sources of evidence. Explores components of the clinical interview, of hypothesis formation, and of hypothesis testing.…

  20. Population and Activity of On-road Vehicles in MOVES2014

    EPA Science Inventory

    This report describes the sources and derivation for on-road vehicle population and activity information and associated adjustments as stored in the MOVES2014 default databases. Motor Vehicle Emission Simulator, the MOVES2014 model, is a set of modeling tools for estimating emiss...

  1. Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry

    NASA Technical Reports Server (NTRS)

    Brown, Denise L.; Munoz, Jean-Philippe; Gay, Robert

    2011-01-01

    The EFT-1 mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on onboard altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data is not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogues and main parachutes. Therefore it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. There are four primary error sources impacting the sensed pressure: sensor errors, Analog to Digital conversion errors, aerodynamic errors, and atmosphere modeling errors. This last error source is induced by the conversion from pressure to altitude in the vehicle flight software, which requires an atmosphere model such as the US Standard 1976 Atmosphere model. There are several secondary error sources as well, such as waves, tides, and latencies in data transmission. Typically, for error budget calculations it is assumed that all error sources are independent, normally distributed variables. Thus, the initial approach to developing the EFT-1 barometric altimeter altitude error budget was to create an itemized error budget under these assumptions. This budget was to be verified by simulation using high fidelity models of the vehicle hardware and software. The simulation barometric altimeter model includes hardware error sources and a data-driven model of the aerodynamic errors expected to impact the pressure in the midbay compartment in which the sensors are located. 
The aerodynamic model includes the pressure difference between the midbay compartment and the free stream pressure as a function of altitude, oscillations in sensed pressure due to wake effects, and an acoustics model capturing fluctuations in pressure due to motion of the passive vents separating the barometric altimeters from the outside of the vehicle.
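The pressure-to-altitude conversion the flight software performs with the US Standard 1976 Atmosphere can be sketched for the troposphere layer by inverting the barometric relation:

```python
# US Standard Atmosphere 1976, troposphere layer (0-11 km) constants
P0 = 101325.0    # sea-level pressure, Pa
T0 = 288.15      # sea-level temperature, K
L  = 0.0065      # temperature lapse rate, K/m
G  = 9.80665     # gravitational acceleration, m/s^2
M  = 0.0289644   # molar mass of dry air, kg/mol
R  = 8.31447     # universal gas constant, J/(mol K)

def pressure_to_altitude(p_pa):
    """Invert the US Standard 1976 troposphere relation
    p = P0 * (1 - L*h/T0)**(G*M/(R*L)) to obtain geopotential
    altitude h (m) from sensed pressure. Valid below ~11 km."""
    return (T0 / L) * (1.0 - (p_pa / P0) ** (R * L / (G * M)))

print(round(pressure_to_altitude(101325.0)))  # 0 m at sea level
print(round(pressure_to_altitude(54023.0)))   # ~5000 m
```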

  2. Cosine-Gaussian Schell-model sources.

    PubMed

    Mei, Zhangrong; Korotkova, Olga

    2013-07-15

    We introduce a new class of partially coherent sources of Schell type with cosine-Gaussian spectral degree of coherence and confirm that such sources are physically genuine. Further, we derive the expression for the cross-spectral density function of a beam generated by the novel source propagating in free space and analyze the evolution of the spectral density and the spectral degree of coherence. It is shown that at sufficiently large distances from the source the degree of coherence of the propagating beam assumes Gaussian shape while the spectral density takes on the dark-hollow profile.

  3. Stochastic point-source modeling of ground motions in the Cascadia region

    USGS Publications Warehouse

    Atkinson, G.M.; Boore, D.M.

    1997-01-01

    A stochastic model is used to develop preliminary ground motion relations for the Cascadia region for rock sites. The model parameters are derived from empirical analyses of seismographic data from the Cascadia region. The model is based on a Brune point-source characterized by a stress parameter of 50 bars. The model predictions are compared to ground-motion data from the Cascadia region and to data from large earthquakes in other subduction zones. The point-source simulations match the observations from moderate events (M < 7) but overpredict motions from larger events at distances greater than 100 km. The discrepancy at large magnitudes suggests further work on modeling finite-fault effects and regional attenuation is warranted. In the meantime, the preliminary equations are satisfactory for predicting motions from events of M < 7 and provide conservative estimates of motions from larger events at distances less than 100 km.
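The Brune point-source underlying the stochastic model has a standard omega-squared spectrum with a corner frequency set by the stress parameter; a sketch using the abstract's 50-bar value (the shear-wave speed is an assumed crustal value):

```python
def moment_dyne_cm(mw):
    """Seismic moment from moment magnitude (Hanks & Kanamori relation)."""
    return 10.0 ** (1.5 * mw + 16.05)

def brune_corner_freq(mw, stress_bars=50.0, beta_km_s=3.7):
    """Brune corner frequency f_c = 4.9e6 * beta * (stress / M0)^(1/3),
    with beta in km/s, stress in bars, M0 in dyne-cm. The 50-bar stress
    parameter is from the abstract; beta is an assumed shear-wave speed."""
    m0 = moment_dyne_cm(mw)
    return 4.9e6 * beta_km_s * (stress_bars / m0) ** (1.0 / 3.0)

def brune_spectrum(f, mw, stress_bars=50.0, beta_km_s=3.7):
    """Omega-squared source displacement spectrum shape M0 / (1 + (f/fc)^2)."""
    fc = brune_corner_freq(mw, stress_bars, beta_km_s)
    return moment_dyne_cm(mw) / (1.0 + (f / fc) ** 2)

print(brune_corner_freq(7.0))  # ~0.09 Hz for an M 7 event
```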

  4. Can microbes compete with cows for sustainable protein production - A feasibility study on high quality protein

    NASA Astrophysics Data System (ADS)

    Vestergaard, Mike; Chan, Siu Hung Joshua; Jensen, Peter Ruhdal

    2016-11-01

    An increasing population and their increased demand for high-protein diets will require dramatic changes in the food industry, as limited resources and environmental issues will make animal derived foods and proteins, gradually more unsustainable to produce. To explore alternatives to animal derived proteins, an economic model was built around the genome-scale metabolic network of E. coli to study the feasibility of recombinant protein production as a food source. Using a novel model, we predicted which microbial production strategies are optimal for economic return, by capturing the tradeoff between the market prices of substrates, product output and the efficiency of microbial production. A case study with the food protein, Bovine Alpha Lactalbumin was made to evaluate the upstream economic feasibilities. Simulations with different substrate profiles at maximum productivity were used to explore the feasibility of recombinant Bovine Alpha Lactalbumin production coupled with market prices of utilized materials. We found that recombinant protein production could be a feasible food source and an alternative to traditional sources.

  5. Micro-seismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-03-01

    At the heart of micro-seismic event measurements is the task to estimate the location of the source micro-seismic events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, the conventional micro-seismic source locating methods require, in many cases manual picking of traveltime arrivals, which do not only lead to manual effort and human interaction, but also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces incredible nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with these observed and modeled to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet in Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers is calculated to assess the quality of the long wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for synthetic examples used here, like those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
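The convolution trick that removes the unknown source term rests on commutativity: (w_true conv g) conv (w_trial conv g_ref) equals (w_trial conv g) conv (w_true conv g_ref) whenever the modeled Earth responses g match, so the wavelets cancel. A minimal numerical illustration of the idea (a sketch, not the authors' exact objective functional):

```python
import numpy as np

def source_independent_misfit(d_obs, d_syn, ref_obs, ref_syn):
    """Convolved-trace objective: each observed trace convolved with a
    synthetic reference trace is compared against the synthetic trace
    convolved with the observed reference. Because convolution commutes,
    the unknown source wavelet (and ignition time) cancels when the
    modeled Earth response matches."""
    r = np.convolve(d_obs, ref_syn) - np.convolve(d_syn, ref_obs)
    return 0.5 * float(np.dot(r, r))

# Toy demo: same Green's functions, different wavelets -> misfit ~ 0
rng = np.random.default_rng(0)
g, g_ref = rng.normal(size=50), rng.normal(size=50)          # Earth responses
w_true, w_trial = rng.normal(size=20), rng.normal(size=20)   # source wavelets
d_obs, ref_obs = np.convolve(w_true, g), np.convolve(w_true, g_ref)
d_syn, ref_syn = np.convolve(w_trial, g), np.convolve(w_trial, g_ref)
print(source_independent_misfit(d_obs, d_syn, ref_obs, ref_syn))  # ~0
```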

  6. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST/1991

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1991-01-01

    A revision is presented of MASTERFIT-1987, which it supersedes. Changes during 1988 to 1991 included introduction of the octupole component of solid Earth tides, the NUVEL tectonic motion model, partial derivatives for the precession constant and source position rates, the option to correct for source structure, a refined model for antenna offsets, modeling the unique antenna at Richmond, FL, improved nutation series due to Zhu, Groten, and Reigber, and reintroduction of the old (Woolard) nutation series for simulation purposes. Text describing the relativistic transformations and gravitational contributions to the delay model was also revised in order to reflect the computer code more faithfully.

  7. The Commercial Open Source Business Model

    NASA Astrophysics Data System (ADS)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  8. A Mueller matrix model of Haidinger's brushes.

    PubMed

    Misson, Gary P

    2003-09-01

    Stokes vectors and Mueller matrices are used to model the polarisation properties (birefringence, dichroism and depolarisation) of any optical system, in particular the human eye. An explanation of the form and behaviour of the entoptic phenomenon of Haidinger's brushes is derived that complements and expands upon a previous study. The relationship between the appearance of Haidinger's brushes and intrinsic ocular retardation is quantified and the model allows prediction of the effect of any retarder of any orientation placed between a source of polarised light and the eye. The simple relationship of minimum contrast of Haidinger's brushes to the cosine of total retardation is derived.

  9. The timing and sources of information for the adoption and implementation of production innovations

    NASA Technical Reports Server (NTRS)

    Ettlie, J. E.

    1976-01-01

    Two dimensions (personal-impersonal and internal-external) are used to characterize information sources as they become important during the interorganizational transfer of production innovations. The results of three studies are reviewed for the purpose of deriving a model of the timing and importance of different information sources and the utilization of new technology. Based on the findings of two retrospective studies, it was concluded that the pattern of information seeking behavior in user organizations during the awareness stage of adoption is not a reliable predictor of the eventual utilization rate. Using the additional findings of a real-time study, an empirical model of the relative importance of information sources for successful user organizations is presented. These results are extended and integrated into a theoretical model consisting of a time-profile of successful implementations and the relative importance of four types of information sources during seven stages of the adoption-implementation process.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. L. Lewicki; G. E. Hilley; L. Dobeck

    A set of CO2 flux, geochemical, and hydrologic measurement techniques was used to characterize the source of and quantify gaseous and dissolved CO2 discharges from the area of Soda Springs, southeastern Idaho. An eddy covariance system was deployed for approximately one month near a bubbling spring and measured net CO2 fluxes from -74 to 1147 g m-2 d-1. An inversion of measured eddy covariance CO2 fluxes and corresponding modeled source weight functions mapped the surface CO2 flux distribution within and quantified CO2 emission rate (24.9 t d-1) from a 0.05 km2 area surrounding the spring. Soil CO2 fluxes (< 1 to 52,178 g m-2 d-1) were measured within a 0.05 km2 area of diffuse degassing using the accumulation chamber method. The estimated CO2 emission rate from this area was 49 t d-1. A carbon mass balance approach was used to estimate dissolved CO2 discharges from contributing sources at nine springs and the Soda Springs geyser. Total dissolved inorganic carbon (as CO2) discharge for all sampled groundwater features was 57.1 t d-1. Of this quantity, approximately 3% was derived from biogenic carbon dissolved in infiltrating groundwater, 35% was derived from carbonate mineral dissolution within the aquifer(s), and 62% was derived from deep source(s). Isotopic compositions of helium (1.74-2.37 Ra) and deeply derived carbon (δ13C approximately 3‰) suggested contribution of volatiles from mantle and carbonate sources. Assuming that the deeply derived CO2 discharge estimated for sampled groundwater features (approximately 35 t d-1) is representative of springs throughout the study area, the total rate of deeply derived CO2 input into the groundwater system within this area could be ~350 t d-1, similar to CO2 emission rates from a number of quiescent volcanoes.
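The carbon mass balance in the record is simple arithmetic on the reported total and source fractions; reproducing it:

```python
# Carbon mass balance for dissolved CO2 discharge (numbers from the record)
total_dic_t_per_d = 57.1          # total dissolved inorganic carbon as CO2
fractions = {"biogenic": 0.03, "carbonate_dissolution": 0.35, "deep": 0.62}

by_source = {k: round(total_dic_t_per_d * f, 1) for k, f in fractions.items()}
print(by_source)                  # deep component ~35 t/d, as in the record

# Scaling the deep component from the sampled springs to the whole study
# area (a factor of ~10, per the record's assumption) reproduces the
# ~350 t/d estimate.
print(round(by_source["deep"] * 10))
```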

  11. Balancing Information Analysis and Decision Value: A Model to Exploit the Decision Process

    DTIC Science & Technology

    2011-12-01

    technical intelligence, e.g. signals and sensors (SIGINT and MASINT), imagery (IMINT), as well as human and open source intelligence (HUMINT and OSINT ...Clark 2006). The ability to capture large amounts of data and the plenitude of modern intelligence information sources provides a rich cache of...many techniques for managing information collected and derived from these sources, the exploitation of intelligence assets for decision-making

  12. Radio astronomy aspects of the NASA SETI Sky Survey

    NASA Technical Reports Server (NTRS)

    Klein, Michael J.

    1986-01-01

    The application of SETI data to radio astronomy is studied. The number of continuum radio sources in the 1-10 GHz region to be counted and cataloged is predicted. The radio luminosity functions for steep and flat spectrum sources at 2, 8, and 22 GHz are derived using the model of Peacock and Gull (1981). The relation between source number and flux density is analyzed and the sensitivity of the system is evaluated.
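The number-flux relation mentioned above is conventionally benchmarked against the Euclidean slope for a uniform, non-evolving population; a sketch (the normalization is an arbitrary illustrative value):

```python
def euclidean_counts(s_jy, n0=60.0):
    """Integral source counts for a uniform (Euclidean) population:
    N(>S) = N0 * S^(-3/2). Real counts at GHz frequencies deviate from
    this slope, which is what source-count analyses probe. N0
    (sources/sr above 1 Jy) is an arbitrary illustrative normalization."""
    return n0 * s_jy ** -1.5

# Halving the flux-density limit multiplies expected counts by 2^1.5
print(euclidean_counts(1.0), euclidean_counts(0.5))
```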

  13. Integrating global satellite-derived data products as a pre-analysis for hydrological modelling studies: a case study for the Red River Basin

    USDA-ARS?s Scientific Manuscript database

    With changes in weather patterns and intensifying anthropogenic water use, there is an increasing need for spatio-temporal information on water fluxes and stocks in river basins. The assortment of satellite-derived open-access information sources on rainfall (P) and land use / land cover (LULC) is c...

  14. Functional Comparison of Induced Pluripotent Stem Cell- and Blood-Derived GPIIbIIIa Deficient Platelets

    PubMed Central

    Haas, Jessica; Sandrock-Lang, Kirstin; Gärtner, Florian; Jung, Christian Billy; Zieger, Barbara; Parrotta, Elvira; Kurnik, Karin; Sinnecker, Daniel; Wanner, Gerhard; Laugwitz, Karl-Ludwig; Massberg, Steffen; Moretti, Alessandra

    2015-01-01

    Human induced pluripotent stem cells (hiPSCs) represent a versatile tool to model genetic diseases and are a potential source for cell transfusion therapies. However, it remains elusive to which extent patient-specific hiPSC-derived cells functionally resemble their native counterparts. Here, we generated a hiPSC model of the primary platelet disease Glanzmann thrombasthenia (GT), characterized by dysfunction of the integrin receptor GPIIbIIIa, and compared side-by-side healthy and diseased hiPSC-derived platelets with peripheral blood platelets. Both GT-hiPSC-derived platelets and their peripheral blood equivalents showed absence of membrane expression of GPIIbIIIa, a reduction of PAC-1 binding, surface spreading and adherence to fibrinogen. We demonstrated that GT-hiPSC-derived platelets recapitulate molecular and functional aspects of the disease and show comparable behavior to their native counterparts encouraging the further use of hiPSC-based disease models as well as the transition towards a clinical application. PMID:25607928

  15. Urban nonpoint source pollution buildup and washoff models for simulating storm runoff quality in the Los Angeles County.

    PubMed

    Wang, Long; Wei, Jiahua; Huang, Yuefei; Wang, Guangqian; Maqsood, Imran

    2011-07-01

    Many urban nonpoint source pollution models utilize pollutant buildup and washoff functions to simulate storm runoff quality of urban catchments. In this paper, two urban pollutant washoff load models are derived using pollutant buildup and washoff functions. The first model assumes that there is no residual pollutant after a storm event while the second one assumes that there is always residual pollutant after each storm event. The developed models are calibrated and verified with observed data from an urban catchment in the Los Angeles County. The application results show that the developed model with consideration of residual pollutant is more capable of simulating nonpoint source pollution from urban storm runoff than that without consideration of residual pollutant. For the study area, residual pollutant should be considered in pollutant buildup and washoff functions for simulating urban nonpoint source pollution when the total runoff volume is less than 30 mm. Copyright © 2011 Elsevier Ltd. All rights reserved.
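The abstract does not give the buildup and washoff functional forms; a common exponential formulation, with illustrative (uncalibrated) parameters, shows how a residual load arises after small runoff depths:

```python
import math

def buildup(days_dry, b_max=50.0, k_b=0.4):
    """Exponential pollutant buildup over dry days (kg/ha). Parameter
    values are illustrative, not calibrated to the Los Angeles data."""
    return b_max * (1.0 - math.exp(-k_b * days_dry))

def washoff_load(b0, runoff_mm, k_w=0.18):
    """First-order washoff: load removed by a storm of total runoff depth
    runoff_mm from an initial surface load b0."""
    return b0 * (1.0 - math.exp(-k_w * runoff_mm))

# Model 1 (no residual): every event starts from full buildup.
# Model 2 (residual): the un-washed remainder carries into the next event.
b = buildup(days_dry=5.0)
load = washoff_load(b, runoff_mm=12.0)
residual = b - load
print(b, load, residual)
```

With a 30 mm runoff depth the washed fraction exceeds 99% in this toy parameterization, echoing the finding that residual pollutant matters mainly when total runoff is below about 30 mm.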

  16. Constraining Distributed Catchment Models by Incorporating Perceptual Understanding of Spatial Hydrologic Behaviour

    NASA Astrophysics Data System (ADS)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models tend to contain a large number of poorly defined and spatially varying model parameters which are therefore computationally expensive to calibrate. Insufficient data can result in model parameter and structural equifinality, particularly when calibration is reliant on catchment outlet discharge behaviour alone. Evaluating spatial patterns of internal hydrological behaviour has the potential to reveal simulations that, whilst consistent with measured outlet discharge, are qualitatively dissimilar to our perceptual understanding of how the system should behave. We argue that such understanding, which may be derived from stakeholder knowledge across different catchments for certain process dynamics, is a valuable source of information to help reject non-behavioural models, and therefore identify feasible model structures and parameters. The challenge, however, is to convert different sources of often qualitative and/or semi-qualitative information into robust quantitative constraints of model states and fluxes, and combine these sources of information together to reject models within an efficient calibration framework. Here we present the development of a framework to incorporate different sources of data to efficiently calibrate distributed catchment models. For each source of information, an interval or inequality is used to define the behaviour of the catchment system. These intervals are then combined to produce a hyper-volume in state space, which is used to identify behavioural models. We apply the methodology to calibrate the Penn State Integrated Hydrological Model (PIHM) at the Wye catchment, Plynlimon, UK. 
Outlet discharge behaviour is successfully simulated when perceptual understanding of relative groundwater levels between lowland peat, upland peat and valley slopes within the catchment are used to identify behavioural models. The process of converting qualitative information into quantitative constraints forces us to evaluate the assumptions behind our perceptual understanding in order to derive robust constraints, and therefore fairly reject models and avoid type II errors. Likewise, consideration needs to be given to the commensurability problem when mapping perceptual understanding to constrain model states.

  17. Effect of polarization on the evolution of electromagnetic hollow Gaussian Schell-model beam

    NASA Astrophysics Data System (ADS)

    Long, Xuewen; Lu, Keqing; Zhang, Yuhong; Guo, Jianbang; Li, Kehao

    2011-02-01

    Based on the theory of coherence, an analytical propagation formula for partially polarized and partially coherent hollow Gaussian Schell-model beams (HGSMBs) passing through a paraxial optical system is derived. Furthermore, we show that the degree of polarization of the source may affect the evolution of HGSMBs and that a tunable dark region may exist. For the two special cases of fully coherent beams and partially coherent beams with δxx = δyy, the normalized intensity distributions are independent of the polarization of the source.

  18. The Velocity and Density Distribution of Earth-Intersecting Meteoroids: Implications for Environment Models

    NASA Technical Reports Server (NTRS)

    Moorhead, A. V.; Brown, P. G.; Campbell-Brown, M. D.; Moser, D. E.; Blaauw, R. C.; Cooke, W. J.

    2017-01-01

    Meteoroids are known to damage spacecraft: they can crater or puncture components, disturb a spacecraft's attitude, and potentially create secondary electrical effects. Because the damage done depends on the speed, size, density, and direction of the impactor, accurate environment models are critical for mitigating meteoroid-related risks. Yet because meteoroid properties are derived from indirect observations such as meteors and impact craters, many characteristics of the meteoroid environment are uncertain. In this work, we present recent improvements to the meteoroid speed and density distributions. Our speed distribution is derived from observations made by the Canadian Meteor Orbit Radar. These observations are de-biased using modern descriptions of the ionization efficiency. Our approach yields a slower meteoroid population than previous analyses (see Fig. 1 for an example) and we compute the uncertainties associated with our derived distribution. We adopt a higher fidelity density distribution than that used by many older models. In our distribution, meteoroids with T_J less than 2 are assigned to a low-density population, while those with T_J greater than 2 have higher densities (see Fig. 2). This division and the distributions themselves are derived from the densities reported by Kikwaya et al. These changes have implications for the environment: for instance, the helion/antihelion sporadic sources have lower speeds than the apex and toroidal sources and originate from high-T_J parent bodies. Our on-average slower and denser distributions thus imply that the helion and antihelion sources dominate the meteoroid environment even more completely than previously thought. Finally, for a given near-Earth meteoroid cratering rate, a slower meteoroid population produces a comparatively higher rate of satellite attitude disturbances.
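The T_J division can be made concrete with the standard Tisserand parameter with respect to Jupiter; the example orbits below are illustrative, not from the paper:

```python
import math

A_JUPITER = 5.204  # semi-major axis of Jupiter, AU

def tisserand(a_au, e, inc_deg):
    """Tisserand parameter with respect to Jupiter for an orbit with
    semi-major axis a (AU), eccentricity e, and inclination (deg)."""
    return (A_JUPITER / a_au
            + 2.0 * math.cos(math.radians(inc_deg))
            * math.sqrt((a_au / A_JUPITER) * (1.0 - e * e)))

def density_class(a_au, e, inc_deg):
    """Two-population density assignment used in the abstract:
    T_J < 2 -> low-density population, T_J > 2 -> higher density."""
    return "low" if tisserand(a_au, e, inc_deg) < 2.0 else "high"

print(density_class(2.2, 0.5, 5.0))     # asteroidal-like orbit -> "high"
print(density_class(15.0, 0.9, 120.0))  # retrograde, Halley-type -> "low"
```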

  19. The sources of atmospheric black carbon at a European gateway to the Arctic

    NASA Astrophysics Data System (ADS)

    Winiger, P.; Andersson, A.; Eckhardt, S.; Stohl, A.; Gustafsson, Ö.

    2016-09-01

    Black carbon (BC) aerosols from incomplete combustion of biomass and fossil fuel contribute to Arctic climate warming. Models seeking to advise mitigation policy are challenged in reproducing observations of seasonally varying BC concentrations in the Arctic air. Here we compare year-round observations of BC and its δ13C/Δ14C-diagnosed sources in Arctic Scandinavia, with tailored simulations from an atmospheric transport model. The model predictions for this European gateway to the Arctic are greatly improved when the emission inventory of anthropogenic sources is amended by satellite-derived estimates of BC emissions from fires. Both BC concentrations (R2=0.89, P<0.05) and source contributions (R2=0.77, P<0.05) are accurately mimicked and linked to predominantly European emissions. This improved model skill allows for more accurate assessment of sources and effects of BC in the Arctic, and a more credible scientific underpinning of policy efforts aimed at efficiently reducing BC emissions reaching the European Arctic.

  20. Exploring the Dynamics of the August 2010 Mount Meager Rock Slide-Debris Flow Jointly with Seismic Source Inversion and Numerical Landslide Modeling

    NASA Astrophysics Data System (ADS)

    Allstadt, K.; Moretti, L.; Mangeney, A.; Stutzmann, E.; Capdeville, Y.

    2014-12-01

    The time series of forces exerted on the earth by a large and rapid landslide derived remotely from the inversion of seismic records can be used to tie post-slide evidence to what actually occurred during the event and can be used to tune numerical models and test theoretical methods. This strategy is applied to the 48.5 Mm3 August 2010 Mount Meager rockslide-debris flow in British Columbia, Canada. By inverting data from just five broadband seismic stations less than 300 km from the source, we reconstruct the time series of forces that the landslide exerted on the Earth as it occurred. The result illuminates a complex retrogressive initiation sequence and features attributable to flow over a complicated path including several curves and runup against a valley wall. The seismically derived force history also allows for the estimation of the horizontal acceleration (0.39 m/s^2) and average apparent coefficient of basal friction (0.38) of the rockslide, and the speed of the center of mass of the debris flow (peak of 92 m/s). To extend beyond these simple calculations and to test the interpretation, we also use the seismically derived force history to guide numerical modeling of the event - seeking to simulate the landslide in a way that best fits both the seismic and field constraints. This allows for a finer reconstruction of the volume, timing, and sequence of events, estimates of friction, and spatiotemporal variations in speed and flow thickness. The modeling allowed us to analyze the sensitivity of the force to the different parameters involved in the landslide modeling to better understand what can and cannot be constrained from seismic source inversions of landslide signals.
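The simple kinematic estimates quoted (acceleration, speed of the center of mass) follow from Newton's second law applied to the inverted force history; a sketch with a toy force pulse, where the bulk density is an assumed value not given in the record:

```python
import numpy as np

def com_speed_from_force(force_n, dt_s, mass_kg):
    """Integrate the (negated) seismically derived force history to get the
    landslide centre-of-mass velocity change: a = -F/m, since the force on
    the Earth is the reaction to the force on the slide mass. Trapezoidal
    integration of the acceleration time series."""
    accel = -np.asarray(force_n, dtype=float) / mass_kg
    return np.concatenate(
        ([0.0], np.cumsum(0.5 * (accel[1:] + accel[:-1]) * dt_s)))

# Illustrative: a half-sine force pulse (acceleration phase only), with
# mass = 48.5e6 m^3 x an assumed bulk density of 2000 kg/m^3.
mass = 48.5e6 * 2000.0
t = np.linspace(0.0, 100.0, 1001)
force = -2.0e10 * np.sin(np.pi * t / 100.0)  # toy force on the Earth, N
v = com_speed_from_force(force, dt_s=t[1] - t[0], mass_kg=mass)
print(v.max())
```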

  1. Performance evaluation of WAVEWATCH III model in the Persian Gulf using different wind resources

    NASA Astrophysics Data System (ADS)

    Kazeminezhad, Mohammad Hossein; Siadatmousavi, Seyed Mostafa

    2017-07-01

    The third-generation wave model, WAVEWATCH III, was employed to simulate bulk wave parameters in the Persian Gulf using three different wind sources: ERA-Interim, CCMP, and GFS-Analysis. Different formulations for the whitecapping term and the energy transfer from wind to waves were used, namely the Tolman and Chalikov (J Phys Oceanogr 26:497-518, 1996), WAM cycle 4 (BJA and WAM4), and Ardhuin et al. (J Phys Oceanogr 40(9):1917-1941, 2010) (TEST405 and TEST451 parameterizations) source term packages. The results obtained from the numerical simulations were compared to altimeter-derived significant wave heights and measured wave parameters at two stations in the northern part of the Persian Gulf through statistical indicators and the Taylor diagram. Comparison of the bulk wave parameters with measured values showed underestimation of wave height for all wind sources. However, the performance of the model was best when GFS-Analysis wind data were used. In general, when wind veering from southeast to northwest occurred, and wind speed was high during the rotation, the model's underestimation of wave height was severe. Except for the Tolman and Chalikov (J Phys Oceanogr 26:497-518, 1996) source term package, which severely underestimated the bulk wave parameters during stormy conditions, the performances of the other formulations were practically similar. However, in statistical terms, the Ardhuin et al. (J Phys Oceanogr 40(9):1917-1941, 2010) source terms with the TEST405 parameterization were the most successful formulation in the Persian Gulf when compared to in situ and altimeter-derived observations.
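    The statistical indicators behind such a comparison, including the quantities plotted on a Taylor diagram (correlation and normalized standard deviation), are standard. A small sketch with hypothetical significant-wave-height series; the numbers are illustrative, not data from the study:

    ```python
    import numpy as np

    def taylor_stats(model, obs):
        """Bias, RMSE, correlation and normalized std. dev. of model vs. observations."""
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        bias = np.mean(model - obs)
        rmse = np.sqrt(np.mean((model - obs) ** 2))
        corr = np.corrcoef(model, obs)[0, 1]
        sigma_ratio = np.std(model) / np.std(obs)  # radial coordinate on a Taylor diagram
        return {"bias": bias, "rmse": rmse, "corr": corr, "sigma_ratio": sigma_ratio}

    # Hypothetical Hs series (m): a model underestimating during a storm peak.
    obs   = [0.8, 1.1, 1.9, 2.6, 3.4, 2.2, 1.3]
    model = [0.7, 1.0, 1.6, 2.1, 2.7, 1.9, 1.2]
    stats = taylor_stats(model, obs)
    print({k: round(v, 3) for k, v in stats.items()})
    ```

    A negative bias and a standard-deviation ratio below one are the signatures of the underestimation described in the abstract.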

  2. Software Toolbox Development for Rapid Earthquake Source Optimisation Combining InSAR Data and Seismic Waveforms

    NASA Astrophysics Data System (ADS)

    Isken, Marius P.; Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Bathke, Hannes M.

    2017-04-01

    We present a modular open-source software framework (pyrocko, kite, grond; http://pyrocko.org) for rapid InSAR data post-processing and modelling of tectonic and volcanic displacement fields derived from satellite data. Our aim is to ease and streamline the joint optimisation of earthquake observations from InSAR and GPS data together with seismological waveforms for an improved estimation of the ruptures' parameters. Through this approach we can provide finite models of earthquake ruptures and therefore contribute to a timely and better understanding of earthquake kinematics. The new kite module enables fast processing of unwrapped InSAR scenes for source modelling: the spatial sub-sampling and data error/noise estimation for the interferogram are evaluated automatically and interactively. The rupture's near-field surface displacement data are then combined with seismic far-field waveforms and jointly modelled using the pyrocko.gf framework, which allows for fast forward modelling based on pre-calculated elastodynamic and elastostatic Green's functions. Lastly, the grond module supplies a bootstrap-based probabilistic (Monte Carlo) joint optimisation to estimate the parameters and uncertainties of a finite-source earthquake rupture model. We describe the developed and applied methods as an effort to establish a semi-automatic processing and modelling chain. The framework is applied to Sentinel-1 data from the 2016 Central Italy earthquake sequence, where we present the earthquake mechanism and rupture model from which we derive regions of increased Coulomb stress. The open-source software framework is developed at GFZ Potsdam and at the University of Kiel, Germany; it is written in the Python and C programming languages. The toolbox architecture is modular and independent, and can be utilized flexibly for a variety of geophysical problems.
This work is conducted within the BridGeS project (http://www.bridges.uni-kiel.de) funded by the German Research Foundation DFG through an Emmy-Noether grant.
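    The bootstrap-based probabilistic optimisation idea mentioned above can be illustrated generically (this is not the grond API, just the principle): re-weight the data misfits randomly many times and collect the best-fit parameter under each weighting, so that the spread of the collected estimates maps the parameter uncertainty. A toy one-parameter sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy "observations": displacements from a source with true parameter 3.0.
    x = np.linspace(0.0, 1.0, 50)
    obs = 3.0 * x + rng.normal(0.0, 0.1, x.size)

    def best_fit(weights):
        """Weighted least-squares slope for the linear model d = m * x."""
        return np.sum(weights * x * obs) / np.sum(weights * x * x)

    # Bootstrap: one random re-weighting of the residual contributions per chain.
    estimates = []
    for _ in range(500):
        w = rng.exponential(1.0, x.size)
        estimates.append(best_fit(w))
    estimates = np.array(estimates)

    print(f"m = {estimates.mean():.2f} +/- {estimates.std():.2f}")
    ```

    In a real joint inversion the "parameter" is the full finite-source model and the misfit combines InSAR and waveform contributions, but the uncertainty bookkeeping follows the same pattern.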

  3. DERIVATION OF STOCHASTIC ACCELERATION MODEL CHARACTERISTICS FOR SOLAR FLARES FROM RHESSI HARD X-RAY OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrosian, Vahe; Chen Qingrong

    2010-04-01

    The model of stochastic acceleration of particles by turbulence has been successful in explaining many observed features of solar flares. Here, we demonstrate a new method to obtain the accelerated electron spectrum and important acceleration model parameters from the high-resolution hard X-ray (HXR) observations provided by RHESSI. In our model, electrons accelerated at or very near the loop top (LT) produce thin target bremsstrahlung emission there and then escape downward producing thick target emission at the loop footpoints (FPs). Based on the electron flux spectral images obtained by the regularized spectral inversion of the RHESSI count visibilities, we derive several important parameters for the acceleration model. We apply this procedure to the 2003 November 3 solar flare, which shows an LT source up to 100-150 keV in HXR with a relatively flat spectrum in addition to two FP sources. The results imply the presence of strong scattering and a high density of turbulence energy with a steep spectrum in the acceleration region.

  4. Sourcing of an alternative pericyte-like cell type from peripheral blood in clinically relevant numbers for therapeutic angiogenic applications.

    PubMed

    Blocki, Anna; Wang, Yingting; Koch, Maria; Goralczyk, Anna; Beyer, Sebastian; Agarwal, Nikita; Lee, Michelle; Moonshi, Shehzahdi; Dewavrin, Jean-Yves; Peh, Priscilla; Schwarz, Herbert; Bhakoo, Kishore; Raghunath, Michael

    2015-03-01

    Autologous cells hold great potential for personalized cell therapy, reducing immunological complications and the risk of infections. However, low cell counts at harvest, with subsequently long expansion times and associated loss of cell function, currently impede the advancement of autologous cell therapy approaches. Here, we aimed to source clinically relevant numbers of proangiogenic cells from an easily accessible cell source, namely peripheral blood. Using macromolecular crowding (MMC) as a biotechnological platform, we derived a novel cell type from peripheral blood that is generated within 5 days in large numbers (10-40 million cells per 100 ml of blood). This blood-derived angiogenic cell (BDAC) type is of monocytic origin, but exhibits the pericyte markers PDGFR-β and NG2 and demonstrates strong angiogenic activity, hitherto ascribed only to MSC-like pericytes. Our findings suggest that BDACs represent an alternative pericyte-like cell population of hematopoietic origin that is involved in promoting early stages of microvasculature formation. As a proof of principle of BDAC efficacy in an ischemic disease model, BDAC injection rescued affected tissues in a murine hind limb ischemia model by accelerating and enhancing revascularization. Derived from a renewable tissue that is easy to collect, BDACs overcome current shortcomings of autologous cell therapy, in particular for tissue repair strategies.

  5. Sourcing of an Alternative Pericyte-Like Cell Type from Peripheral Blood in Clinically Relevant Numbers for Therapeutic Angiogenic Applications

    PubMed Central

    Blocki, Anna; Wang, Yingting; Koch, Maria; Goralczyk, Anna; Beyer, Sebastian; Agarwal, Nikita; Lee, Michelle; Moonshi, Shehzahdi; Dewavrin, Jean-Yves; Peh, Priscilla; Schwarz, Herbert; Bhakoo, Kishore; Raghunath, Michael

    2015-01-01

    Autologous cells hold great potential for personalized cell therapy, reducing immunological complications and the risk of infections. However, low cell counts at harvest, with subsequently long expansion times and associated loss of cell function, currently impede the advancement of autologous cell therapy approaches. Here, we aimed to source clinically relevant numbers of proangiogenic cells from an easily accessible cell source, namely peripheral blood. Using macromolecular crowding (MMC) as a biotechnological platform, we derived a novel cell type from peripheral blood that is generated within 5 days in large numbers (10–40 million cells per 100 ml of blood). This blood-derived angiogenic cell (BDAC) type is of monocytic origin, but exhibits the pericyte markers PDGFR-β and NG2 and demonstrates strong angiogenic activity, hitherto ascribed only to MSC-like pericytes. Our findings suggest that BDACs represent an alternative pericyte-like cell population of hematopoietic origin that is involved in promoting early stages of microvasculature formation. As a proof of principle of BDAC efficacy in an ischemic disease model, BDAC injection rescued affected tissues in a murine hind limb ischemia model by accelerating and enhancing revascularization. Derived from a renewable tissue that is easy to collect, BDACs overcome current shortcomings of autologous cell therapy, in particular for tissue repair strategies. PMID:25582709

  6. (U) An Analytic Examination of Piezoelectric Ejecta Mass Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tregillis, Ian Lee

    2017-02-02

    Ongoing efforts to validate a Richtmyer-Meshkov instability (RMI) based ejecta source model [1, 2, 3] in LANL ASC codes use ejecta areal masses derived from piezoelectric sensor data [4, 5, 6]. However, the standard technique for inferring masses from sensor voltages implicitly assumes instantaneous ejecta creation [7], which is not a feature of the RMI source model. To investigate the impact of this discrepancy, we define separate “areal mass functions” (AMFs) at the source and sensor in terms of typically unknown distribution functions for the ejecta particles, and derive an analytic relationship between them. Then, for the case of single-shock ejection into vacuum, we use the AMFs to compare the analytic (or “true”) accumulated mass at the sensor with the value that would be inferred from piezoelectric voltage measurements. We confirm the inferred mass is correct when creation is instantaneous, and furthermore prove that when creation is not instantaneous, the inferred values will always overestimate the true mass. Finally, we derive an upper bound for the error imposed on a perfect system by the assumption of instantaneous ejecta creation. When applied to shots in the published literature, this bound is frequently less than several percent. Errors exceeding 15% may require velocities or timescales at odds with experimental observations.
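    The instantaneous-creation assumption can be sketched numerically: a particle arriving at a sensor at standoff d at time t is assigned launch velocity v = d/t, and the measured momentum flux p(t) is converted to an areal-mass arrival rate p/v. This is a hedged toy illustration of that reduction logic (hypothetical pressure pulse and standoff; not the cited analysis or any production reduction code):

    ```python
    import numpy as np

    d = 0.01  # sensor standoff from the free surface (m), hypothetical

    # Hypothetical piezoelectric pressure history p(t) (Pa) on a time grid (s).
    t = np.linspace(1e-6, 50e-6, 500)
    p = 1e5 * np.exp(-((t - 10e-6) / 5e-6) ** 2)   # toy pressure pulse

    # Instantaneous-creation assumption: a particle arriving at time t left
    # the surface at t = 0 with velocity v = d / t.
    v = d / t

    # Momentum flux p = (areal-mass arrival rate) * v, so the accumulated
    # areal mass is the running integral of p / v = p * t / d.
    areal_mass = np.cumsum(p * t / d) * (t[1] - t[0])

    print(f"inferred accumulated areal mass: {areal_mass[-1]:.2e} kg/m^2")
    ```

    The paper's point is that when creation is not instantaneous the v = d/t assignment is wrong for every particle, and the resulting accumulated mass systematically overestimates the true value.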

  7. A hydrologic network supporting spatially referenced regression modeling in the Chesapeake Bay watershed

    USGS Publications Warehouse

    Brakebill, J.W.; Preston, S.D.

    2003-01-01

    The U.S. Geological Survey has developed a methodology for statistically relating nutrient sources and land-surface characteristics to nutrient loads of streams. The methodology is referred to as SPAtially Referenced Regressions On Watershed attributes (SPARROW), and relates measured stream nutrient loads to nutrient sources using nonlinear statistical regression models. A spatially detailed digital hydrologic network of stream reaches, stream-reach characteristics such as mean streamflow, water velocity, reach length, and travel time, and their associated watersheds supports the regression models. This network serves as the primary framework for spatially referencing potential nutrient source information such as atmospheric deposition, septic systems, point sources, and agricultural sources, and land-surface characteristics such as land use, land cover, average-annual precipitation and temperature, slope, and soil permeability. In the Chesapeake Bay watershed, which covers parts of Delaware, Maryland, Pennsylvania, New York, Virginia, West Virginia, and Washington D.C., SPARROW was used to generate models estimating loads of total nitrogen and total phosphorus representing 1987 and 1992 land-surface conditions. The 1987 models used a hydrologic network derived from an enhanced version of the U.S. Environmental Protection Agency's digital River Reach File and coarse-resolution Digital Elevation Models (DEMs). A new hydrologic network was created to support the 1992 models by generating stream reaches representing surface-water pathways defined by flow direction and flow accumulation algorithms from higher resolution DEMs. On a reach-by-reach basis, stream reach characteristics essential to the modeling were transferred to the newly generated pathways or reaches from the enhanced River Reach File used to support the 1987 models.
To complete the new network, watersheds for each reach were generated using the direction of surface-water flow derived from the DEMs. This network improves upon existing digital stream data by increasing the level of spatial detail and providing consistency between the reach locations and topography. The hydrologic network also aids in illustrating the spatial patterns of predicted nutrient loads and sources contributed locally to each stream, and the percentages of nutrient load that reach Chesapeake Bay.
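    A SPARROW-style model regresses measured loads nonlinearly on source inputs attenuated by delivery and in-stream loss terms. The idea can be sketched in miniature with a hypothetical one-source, one-loss-term form fitted by grid search (illustrative data and functional form; not the USGS implementation):

    ```python
    import numpy as np

    # Hypothetical reach data: source input (kg/yr), in-stream travel time (days),
    # and the measured load at the downstream monitoring station (kg/yr).
    source = np.array([120.0, 300.0, 80.0, 500.0, 210.0, 350.0])
    ttime  = np.array([0.5, 1.2, 0.3, 2.0, 0.8, 1.5])
    load   = np.array([100.0, 210.0, 72.0, 290.0, 165.0, 235.0])

    def predict(beta, k):
        """Load = beta * source * exp(-k * travel_time): delivery with first-order in-stream loss."""
        return beta * source * np.exp(-k * ttime)

    # Coarse grid search for the nonlinear least-squares minimum.
    betas = np.linspace(0.5, 1.5, 201)
    ks = np.linspace(0.0, 1.0, 201)
    sse = np.array([[np.sum((predict(b, k) - load) ** 2) for k in ks] for b in betas])
    ib, ik = np.unravel_index(np.argmin(sse), sse.shape)
    print(f"beta = {betas[ib]:.2f}, in-stream loss rate k = {ks[ik]:.3f} per day")
    ```

    The real model sums many source terms, uses land-to-water delivery factors built from the land-surface characteristics listed above, and accumulates loads down the reach network, but each reach contributes a term of essentially this shape.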

  8. Source and Message Factors in Persuasion: A Reply to Stiff's Critique of the Elaboration Likelihood Model.

    ERIC Educational Resources Information Center

    Petty, Richard E.; And Others

    1987-01-01

    Answers James Stiff's criticism of the Elaboration Likelihood Model (ELM) of persuasion. Corrects certain misperceptions of the ELM and criticizes Stiff's meta-analysis that compares ELM predictions with those derived from Kahneman's elastic capacity model. Argues that Stiff's presentation of the ELM and the conclusions he draws based on the data…

  9. A Comprehensive Model of the Near-Earth Magnetic Field. Phase 3

    NASA Technical Reports Server (NTRS)

    Sabaka, Terence J.; Olsen, Nils; Langel, Robert A.

    2000-01-01

    The near-Earth magnetic field is due to sources in Earth's core, ionosphere, magnetosphere, and lithosphere, and from coupling currents between the ionosphere and magnetosphere and between hemispheres. Traditionally, the main field (low-degree internal field) and the magnetospheric field have been modeled simultaneously, and fields from other sources modeled separately. Such a scheme, however, can introduce spurious features. A new model, designated CMP3 (Comprehensive Model: Phase 3), has been derived from quiet-time Magsat and POGO satellite measurements and observatory hourly and annual mean measurements as part of an effort to coestimate the fields from all of these sources. This model represents a significant advancement in the treatment of the aforementioned field sources over previous attempts, and includes an accounting for main field influences on the magnetosphere, main field and solar activity influences on the ionosphere, seasonal influences on the coupling currents, a priori characterization of ionospheric and magnetospheric influence on Earth-induced fields, and an explicit parameterization and estimation of the lithospheric field. The result of this effort is a model whose fits to the data are generally superior to those of previous models and whose parameter states for the various constituent sources are very reasonable.

  10. Numerical calculations of spectral turnover and synchrotron self-absorption in CSS and GPS radio sources

    NASA Astrophysics Data System (ADS)

    Jeyakumar, S.

    2016-06-01

    The dependence of the turnover frequency on linear size is presented for a sample of Giga-hertz Peaked Spectrum and Compact Steep Spectrum radio sources derived from complete samples. The dependence of the luminosity of the emission at the peak frequency on the linear size and the peak frequency is also presented for the galaxies in the sample. The luminosity of the smaller sources evolves strongly with linear size. Optical depth effects have been included in the 3D radio source model of Kaiser to study the spectral turnover. Using this model, the observed trend can be explained by synchrotron self-absorption. The observed trend in the peak-frequency-linear-size plane is not affected by the luminosity evolution of the sources.
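    The spectral turnover from synchrotron self-absorption can be illustrated with the standard homogeneous-source SSA spectrum (a generic textbook form, not Kaiser's 3D model): flux rises as nu^2.5 where the source is optically thick and falls as nu^-alpha where it is thin, peaking near the frequency where the optical depth is of order unity.

    ```python
    import numpy as np

    def ssa_spectrum(nu, nu_t, alpha=0.7):
        """Flux density (arbitrary units) of a homogeneous synchrotron
        self-absorbed source: optically thick rise ~ nu^2.5 below the
        turnover, optically thin decline ~ nu^-alpha above it."""
        tau = (nu / nu_t) ** (-(alpha + 2.5))   # optical depth, unity at nu_t
        return (nu / nu_t) ** 2.5 * (1.0 - np.exp(-tau))

    nu = np.logspace(-2, 2, 401)   # frequency in units of the turnover scale
    s = ssa_spectrum(nu, 1.0)
    nu_peak = nu[np.argmax(s)]
    print(f"spectral peak near nu/nu_t = {nu_peak:.2f}")
    ```

    Because nu_t grows with magnetic field and particle density, smaller (denser) sources peak at higher frequencies, which is the peak-frequency versus linear-size trend the abstract describes.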

  11. First Measurements of 15N Fractionation in N2H+ toward High-mass Star-forming Cores

    NASA Astrophysics Data System (ADS)

    Fontani, F.; Caselli, P.; Palau, A.; Bizzocchi, L.; Ceccarelli, C.

    2015-08-01

    We report on the first measurements of the isotopic ratio 14N/15N in N2H+ toward a statistically significant sample of high-mass star-forming cores. The sources belong to the three main evolutionary categories of the high-mass star formation process: high-mass starless cores, high-mass protostellar objects, and ultracompact H ii regions. Simultaneous measurements of the 14N/15N ratio in CN have been made. The 14N/15N ratios derived from N2H+ show a large spread (from ∼180 up to ∼1300), while those derived from CN are, within the errors, between the value measured in the terrestrial atmosphere (∼270) and that of the proto-solar nebula (∼440) for the large majority of the sources. However, this different spread might simply reflect the fact that more sources were detected in the N2H+ isotopologues than in the CN ones. The 14N/15N ratio does not change significantly with the source evolutionary stage, which indicates that time seems to be irrelevant for the fractionation of nitrogen. We also find a possible anticorrelation between the 14N/15N (as derived from N2H+) and the H/D isotopic ratios. This suggests that 15N enrichment may not be linked to the parameters that cause D enrichment, in agreement with the predictions of recent chemical models. These models, however, are not able to reproduce the observed large spread in 14N/15N, pointing out that some important routes of nitrogen fractionation could still be missing in the models. Based on observations carried out with the IRAM-30 m Telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany), and IGN (Spain).

  12. Constraints on Janus Cosmological model from recent observations of supernovae type Ia

    NASA Astrophysics Data System (ADS)

    D'Agostini, G.; Petit, J. P.

    2018-07-01

    From our exact solution of the Janus Cosmological equation we derive the relation between the predicted magnitude of distant sources and their redshift. The comparison, through this one-free-parameter model, to the available data from 740 distant supernovae shows an excellent fit.

  13. Cost Effectiveness of On-Line Retrieval System.

    ERIC Educational Resources Information Center

    King, Donald W.; Neel, Peggy W.

    A recently developed cost-effectiveness model for on-line retrieval systems is discussed through use of an example utilizing performance results collected from several independent sources and cost data derived for a recently completed study for the American Psychological Association. One of the primary attributes of the model rests in its great…

  14. A Qualitative Approach to Portfolios: The Early Assessment for Exceptional Potential Model.

    ERIC Educational Resources Information Center

    Shaklee, Beverly D.; Viechnicki, Karen J.

    1995-01-01

    The Early Assessment for Exceptional Potential portfolio assessment model assesses children as exceptional learners, users, generators, and pursuers of knowledge. It is based on use of authentic learning opportunities; interaction of assessment, curriculum, and instruction; multiple criteria derived from multiple sources; and systematic teacher…

  15. Generation of functional islets from human umbilical cord and placenta derived mesenchymal stem cells.

    PubMed

    Kadam, Sachin; Govindasamy, Vijayendran; Bhonde, Ramesh

    2012-01-01

    Bone marrow-derived mesenchymal stem cells (BM-MSCs) have been used for allogeneic application in tissue engineering but have certain drawbacks. Therefore, mesenchymal stem cells (MSCs) derived from other adult tissue sources have been considered as an alternative. The human umbilical cord and placenta are easily available, noncontroversial sources of human tissue, which are often discarded as biological waste, and their collection is noninvasive. These sources of MSCs are not subject to ethical constraints, as in the case of embryonic stem cells. MSCs derived from umbilical cord and placenta are multipotent and have the ability to differentiate into various cell types, crossing the lineage boundary toward the endodermal lineage. The aim of this chapter is to provide a detailed, reproducible cookbook protocol for the isolation, propagation, characterization, and differentiation of MSCs derived from human umbilical cord and placenta, with special reference to harnessing their potential toward the pancreatic/islet lineage for utilization as a cell therapy product. We show here that mesenchymal stromal cells can be extensively expanded from umbilical cord and placenta of human origin while retaining their multilineage differentiation potential in vitro. Our report indicates that postnatal tissues obtained as delivery waste represent a rich source of mesenchymal stromal cells, which can be differentiated into functional islets employing a three-stage protocol developed by our group. These islets could be used as a novel in vitro model for screening hypoglycemics/insulin secretagogues, thus reducing animal experimentation for this purpose, and for future human islet transplantation programs to treat diabetes.

  16. Planck intermediate results. VII. Statistical properties of infrared and radio extragalactic sources from the Planck Early Release Compact Source Catalogue at frequencies between 100 and 857 GHz

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Argüeso, F.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Balbi, A.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Bernard, J.-P.; Bersanelli, M.; Bethermin, M.; Bhatia, R.; Bonaldi, A.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Cabella, P.; Cardoso, J.-F.; Catalano, A.; Cayón, L.; Chamballu, A.; Chary, R.-R.; Chen, X.; Chiang, L.-Y.; Christensen, P. R.; Clements, D. L.; Colafrancesco, S.; Colombi, S.; Colombo, L. P. L.; Coulais, A.; Crill, B. P.; Cuttaia, F.; Danese, L.; Davis, R. J.; de Bernardis, P.; de Gasperis, G.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Dörl, U.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Fosalba, P.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Jaffe, T. R.; Jaffe, A. H.; Jagemann, T.; Jones, W. C.; Juvela, M.; Keihänen, E.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurinsky, N.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Lilje, P. B.; López-Caniego, M.; Macías-Pérez, J. F.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Mitra, S.; Miville-Deschènes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. 
U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sajina, A.; Sandri, M.; Savini, G.; Scott, D.; Smoot, G. F.; Starck, J.-L.; Sudiwala, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Türler, M.; Valenziano, L.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; White, M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2013-02-01

    We make use of the Planck all-sky survey to derive number counts and spectral indices of extragalactic sources - infrared and radio sources - from the Planck Early Release Compact Source Catalogue (ERCSC) at 100 to 857 GHz (3 mm to 350 μm). Three zones (deep, medium and shallow) of approximately homogeneous coverage are used to permit a clean and controlled correction for incompleteness, which was explicitly not done for the ERCSC, as it was aimed at providing lists of sources to be followed up. Our sample, prior to the 80% completeness cut, contains between 217 sources at 100 GHz and 1058 sources at 857 GHz over about 12 800 to 16 550 deg2 (31 to 40% of the sky). After the 80% completeness cut, between 122 and 452 sources remain, with flux densities above 0.3 and 1.9 Jy at 100 and 857 GHz, respectively. The sample so defined can be used for statistical analysis. Using the multi-frequency coverage of the Planck High Frequency Instrument, all the sources have been classified as either dust-dominated (infrared galaxies) or synchrotron-dominated (radio galaxies) on the basis of their spectral energy distributions (SED). Our sample is thus complete, flux-limited and color-selected to differentiate between the two populations. We find an approximately equal number of synchrotron and dusty sources between 217 and 353 GHz; at 353 GHz or higher (or 217 GHz and lower) frequencies, the number is dominated by dusty (synchrotron) sources, as expected. For most of the sources, the spectral indices are also derived. We provide for the first time counts of bright sources from 353 to 857 GHz and the contributions from dusty and synchrotron sources at all HFI frequencies in the key spectral range where these spectra are crossing. The observed counts are in the Euclidean regime.
    The number counts are compared to previously published data (from earlier Planck results, Herschel, BLAST, SCUBA, LABOCA, SPT, and ACT) and to models taking into account both radio and infrared galaxies, covering a large range of flux densities. We derive the multi-frequency Euclidean level - the plateau in the normalised differential counts at high flux density - and compare it to WMAP, Spitzer and IRAS results. The submillimetre number counts are not well reproduced by current evolution models of dusty galaxies, whereas the millimetre part appears reasonably well fitted by the most recent model for synchrotron-dominated sources. Finally we provide estimates of the local luminosity density of dusty galaxies, providing the first such measurements at 545 and 857 GHz. Appendices are available in electronic form at http://www.aanda.org. Corresponding author: herve.dole@ias.u-psud.fr

  17. The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates

    PubMed Central

    Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin

    2011-01-01

    A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). By using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of the SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is also reduced, from 0.66 K to 0.44 K, and the MAPE is 1.3%. PMID:22164030
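    The two error metrics reported (RMSE and MAPE) have their usual definitions; a quick sketch with hypothetical SST values in kelvin (illustrative numbers, not the GOES data):

    ```python
    import numpy as np

    def rmse(est, truth):
        """Root mean square error."""
        est, truth = np.asarray(est, float), np.asarray(truth, float)
        return np.sqrt(np.mean((est - truth) ** 2))

    def mape(est, truth):
        """Mean absolute percentage error (%)."""
        est, truth = np.asarray(est, float), np.asarray(truth, float)
        return 100.0 * np.mean(np.abs((est - truth) / truth))

    truth = [300.1, 301.0, 299.4, 300.6]   # in situ SST (K), hypothetical
    raw   = [300.8, 301.5, 298.7, 301.3]   # satellite estimate before correction
    corr  = [300.3, 301.2, 299.2, 300.9]   # after a BPN-style correction

    print(f"RMSE: {rmse(raw, truth):.2f} K -> {rmse(corr, truth):.2f} K")
    print(f"MAPE: {mape(raw, truth):.2f}% -> {mape(corr, truth):.2f}%")
    ```

    The improvement reported in the abstract (daily RMSE falling from 0.58 K to 0.38 K) is exactly this kind of before/after comparison against in situ truth.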

  18. Asymptotic solutions for the case of nearly symmetric gravitational lens systems

    NASA Astrophysics Data System (ADS)

    Wertz, O.; Pelgrims, V.; Surdej, J.

    2012-08-01

    Gravitational lensing provides a powerful tool to determine the Hubble parameter H0 from the measurement of the time delay Δt between two lensed images of a background variable source. Nevertheless, knowledge of the deflector mass distribution constitutes a hurdle. We propose in the present work interesting solutions for the case of nearly symmetric gravitational lens systems. For the case of a small misalignment between the source, the deflector and the observer, we first consider power-law (ɛ) axially symmetric models for which we derive an analytical relation between the amplification ratio and source position which is independent of the power-law slope ɛ. According to this relation, we deduce an expression for H0 also irrespective of the value ɛ. Secondly, we consider the power-law axially symmetric lens models with an external large-scale gravitational field, the shear γ, resulting in the so-called ɛ-γ models, for which we deduce simple first-order equations linking the model parameters and the lensed image positions, the latter being observable quantities. We also deduce simple relations between H0 and observable quantities only. From these equations, we may estimate the value of the Hubble parameter in a robust way. Nevertheless, comparison between the ɛ-γ and singular isothermal ellipsoid (SIE) models leads to the conclusion that these models remain most often distinct. Therefore, even for the case of a small misalignment, use of the first-order equations and precise astrometric measurements of the positions of the lensed images with respect to the centre of the deflector enables one to discriminate between these two families of models. Finally, we confront the models with numerical simulations to evaluate the intrinsic error of the first-order expressions used when deriving the model parameters under the assumption of a quasi-alignment between the source, the deflector and the observer.
    From these same simulations, we estimate for the case of the ɛ-γ family of models that the standard deviation affecting H0 is ? which merely reflects the adopted astrometric uncertainties on the relative image positions, typically ? arcsec. In conclusion, we stress the importance of getting very accurate measurements of the relative positions of the multiple lensed images and of the time delays for the case of nearly symmetric gravitational lens systems, in order to derive robust and precise values of the Hubble parameter.

  19. Xiphoid Process-Derived Chondrocytes: A Novel Cell Source for Elastic Cartilage Regeneration

    PubMed Central

    Nam, Seungwoo; Cho, Wheemoon; Cho, Hyunji; Lee, Jungsun

    2014-01-01

    Reconstruction of elastic cartilage requires a source of chondrocytes that display a reliable differentiation tendency. Predetermined tissue progenitor cells are ideal candidates for meeting this need; however, it is difficult to obtain donor elastic cartilage tissue because most elastic cartilage serves important functions or forms external structures, making these tissues indispensable. We found vestigial cartilage tissue in xiphoid processes and characterized it as hyaline cartilage in the proximal region and elastic cartilage in the distal region. Xiphoid process-derived chondrocytes (XCs) showed superb in vitro expansion ability based on colony-forming unit fibroblast assays, cell yield, and cumulative cell growth. On induction of differentiation into mesenchymal lineages, XCs showed a strong tendency toward chondrogenic differentiation. An examination of the tissue-specific regeneration capacity of XCs in a subcutaneous-transplantation model and autologous chondrocyte implantation model confirmed reliable regeneration of elastic cartilage regardless of the implantation environment. On the basis of these observations, we conclude that xiphoid process cartilage, the only elastic cartilage tissue source that can be obtained without destroying external shape or function, is a source of elastic chondrocytes that show superb in vitro expansion and reliable differentiation capacity. These findings indicate that XCs could be a valuable cell source for reconstruction of elastic cartilage. PMID:25205841

  20. Transient pressure analysis of fractured well in bi-zonal gas reservoirs

    NASA Astrophysics Data System (ADS)

    Zhao, Yu-Long; Zhang, Lie-Hui; Liu, Yong-hui; Hu, Shu-Yong; Liu, Qi-Guo

    2015-05-01

For a hydraulically fractured well, evaluating the properties of the fracture and the formation is a difficult task, and conventional methods are very complex to apply, especially for partially penetrating fractured wells. Although the source function is a very powerful tool for analyzing the transient pressure of wells with complex structures, corresponding reports for gas reservoirs are rare. In this paper, continuous point source functions in anisotropic reservoirs are derived on the basis of source function theory, the Laplace transform and the Duhamel principle. By applying the construction method, continuous point source functions in a bi-zonal gas reservoir with closed upper and lower boundaries are obtained. Physical models and transient pressure solutions are then developed for fully and partially penetrating fractured vertical wells in this reservoir. Type curves of dimensionless pseudo-pressure and its derivative as functions of dimensionless time are plotted using a numerical inversion algorithm, and the flow periods and sensitivity factors are analyzed. The source functions and fractured-well solutions have both theoretical and practical application in well test interpretation for such gas reservoirs, especially for wells with a stimulated reservoir volume created by massive hydraulic fracturing in unconventional gas reservoirs, which can often be described with the composite model.
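The pseudo-pressure solutions described above are obtained in Laplace space and inverted numerically. The abstract does not name the inversion algorithm, but the Stehfest method is the customary choice in well-test analysis; the sketch below is a generic implementation, not the paper's code, and the transform `1/s**2` used in the usage note is only a check function, not the reservoir solution:

```python
from math import factorial, log

def stehfest_weights(n):
    """Stehfest weights V_i (n must be even); larger n trades truncation
    error against round-off, n = 10-16 is typical."""
    v = []
    for i in range(1, n + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, n // 2) + 1):
            s += (k ** (n // 2) * factorial(2 * k)) / (
                factorial(n // 2 - k) * factorial(k) * factorial(k - 1)
                * factorial(i - k) * factorial(2 * k - i))
        v.append((-1) ** (i + n // 2) * s)
    return v

def stehfest_invert(F, t, n=12):
    """Approximate f(t) given the Laplace transform F(s), for t > 0."""
    a = log(2.0) / t
    v = stehfest_weights(n)
    return a * sum(v[i] * F((i + 1) * a) for i in range(n))
```

For instance, `stehfest_invert(lambda s: 1.0 / s**2, t)` recovers f(t) = t to high accuracy; in the well-test setting, F would be the Laplace-space dimensionless pseudo-pressure evaluated from the point source solution.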

  1. GeoDataspaces: Simplifying Data Management Tasks with Globus

    NASA Astrophysics Data System (ADS)

    Malik, T.; Chard, K.; Tchoua, R. B.; Foster, I.

    2014-12-01

Data and its management are central to the modern scientific enterprise. Typically, geoscientists rely on observations and model output data from several disparate sources (file systems, RDBMS, spreadsheets, remote data sources). Integrated data management solutions that provide intuitive semantics and uniform interfaces, irrespective of the kind of data source, are, however, lacking. Consequently, geoscientists are left to conduct low-level and time-consuming data management tasks, individually and repeatedly for each data source, often resulting in handling errors. In this talk we will describe how the EarthCube GeoDataspace project is improving this situation for seismologists, hydrologists, and space scientists by simplifying some of the existing data management tasks that arise when developing computational models. We will demonstrate a GeoDataspace, bootstrapped with "geounits", which are self-contained metadata packages that provide a complete description of all data elements associated with a model run, including input/output and parameter files, the model executable, and any associated libraries. Geounits link raw and derived data as well as associated provenance information describing how data were derived. We will discuss challenges in establishing geounits and describe machine learning and human annotation approaches that can be used for extracting and associating ad hoc and unstructured scientific metadata hidden in binary formats with data resources and models. We will show how geounits can improve search and discoverability of data associated with model runs. To support this model, we will describe efforts toward creating a scalable metadata catalog that helps to maintain, search, and discover geounits within the Globus network of accessible endpoints.
This talk will focus on the issue of creating comprehensive personal inventories of data assets for computational geoscientists, and describe a publishing mechanism, which can be used to feed into national, international, or thematic discovery portals.

  2. Virtual-source diffusion approximation for enhanced near-field modeling of photon-migration in low-albedo medium.

    PubMed

    Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng

    2015-01-26

Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we herein report an improved explicit model for a semi-infinite geometry, referred to as the "Virtual Source" (VS) diffusion approximation (DA), suited to low-albedo media and short source-detector separations. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. This parameterized scheme is proved to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. The superiority of the proposed VS-DA method over established ones is demonstrated by comparison with Monte Carlo simulations over wide ranges of the source-detector separation and the medium optical properties.
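The building block of such a model is the diffuse reflectance of a single buried isotropic point source. The sketch below uses the well-known extrapolated-boundary dipole solution for a semi-infinite medium and sums it over virtual sources; the weights and depths passed to the second function are whatever the fitting procedure delivers, and the placeholder values in the usage note are illustrative, not the paper's optimized 2VS-DA parameters:

```python
from math import exp, pi, sqrt

def point_source_reflectance(rho, z0, mua, musp, A=1.0):
    """Diffuse reflectance at radial distance rho from an isotropic point
    source buried at depth z0 in a semi-infinite medium (extrapolated-
    boundary dipole; mua/musp are absorption and reduced scattering)."""
    D = 1.0 / (3.0 * (mua + musp))          # diffusion coefficient
    mueff = sqrt(3.0 * mua * (mua + musp))  # effective attenuation
    zb = 2.0 * A * D                        # extrapolated-boundary offset

    def dipole_term(z):
        r = sqrt(rho * rho + z * z)
        return z * (mueff + 1.0 / r) * exp(-mueff * r) / (r * r)

    # real source at depth z0, image source at -(z0 + 2*zb)
    return (dipole_term(z0) + dipole_term(z0 + 2.0 * zb)) / (4.0 * pi)

def vs_reflectance(rho, mua, musp, weights, depths):
    """VS-type reflectance: weighted sum of isotropic point sources
    placed along the incident direction."""
    return sum(w * point_source_reflectance(rho, z, mua, musp)
               for w, z in zip(weights, depths))
```

For example, `vs_reflectance(1.0, 0.01, 1.0, [0.7, 0.3], [0.5, 1.5])` (units of mm and mm⁻¹) returns a positive reflectance that decays with rho.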

  3. Effective pollutant emission heights for atmospheric transport modelling based on real-world information.

    PubMed

    Pregger, Thomas; Friedrich, Rainer

    2009-02-01

Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height, which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available, and simple assumptions are often used in atmospheric models. As a contribution to improving knowledge of emission heights, this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of probably the most comprehensive database of real-world stack information existing in Europe, based on German industrial data. A bottom-up calculation of effective emission heights applying equations used for Gaussian dispersion models shows significant differences depending on source and air pollutant, and compared to approaches currently used for atmospheric transport modelling.
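A bottom-up estimate of this kind adds a plume rise to the physical stack height. Purely as an illustration (the paper's exact equations and meteorological assumptions are not quoted in the abstract), the widely used Briggs final-rise formulas for a buoyant plume in neutral conditions look like this; the default ambient temperature and wind speed are arbitrary example values:

```python
def briggs_effective_height(h_stack, d, v_exit, t_stack, t_amb=288.0, u=5.0):
    """Effective emission height (m) = stack height + final Briggs plume rise.

    h_stack: stack height (m), d: stack diameter (m),
    v_exit: flue gas exit velocity (m/s),
    t_stack/t_amb: flue gas / ambient temperature (K), u: wind speed (m/s).
    """
    g = 9.81
    # Buoyancy flux parameter (m^4/s^3)
    f = g * v_exit * d * d / 4.0 * (t_stack - t_amb) / t_stack
    # Final rise for a buoyancy-dominated plume, neutral/unstable conditions
    if f >= 55.0:
        dh = 38.7 * f ** 0.6 / u
    else:
        dh = 21.4 * f ** 0.75 / u
    return h_stack + dh
```

For a 100 m stack with d = 5 m, v_exit = 15 m/s and a 400 K flue gas, this gives an effective height of roughly 315 m, illustrating why effective heights can differ strongly from physical stack heights.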

  4. Saint: a lightweight integration environment for model annotation.

    PubMed

    Lister, Allyson L; Pocock, Matthew; Taschuk, Morgan; Wipat, Anil

    2009-11-15

    Saint is a web application which provides a lightweight annotation integration environment for quantitative biological models. The system enables modellers to rapidly mark up models with biological information derived from a range of data sources. Saint is freely available for use on the web at http://www.cisban.ac.uk/saint. The web application is implemented in Google Web Toolkit and Tomcat, with all major browsers supported. The Java source code is freely available for download at http://saint-annotate.sourceforge.net. The Saint web server requires an installation of libSBML and has been tested on Linux (32-bit Ubuntu 8.10 and 9.04).

  5. Noble gas models of mantle structure and reservoir mass transfer

    NASA Astrophysics Data System (ADS)

    Harrison, Darrell; Ballentine, Chris J.

    Noble gas observations from different mantle samples have provided some of the key observational data used to develop and support the geochemical "layered" mantle model. This model has dominated our conceptual understanding of mantle structure and evolution for the last quarter of a century. Refinement in seismic tomography and numerical models of mantle convection have clearly shown that geochemical layering, at least at the 670 km phase change in the mantle, is no longer tenable. Recent adaptations of the mantle-layering model that more successfully reconcile whole-mantle convection with the simplest data have two common features: (i) the requirement for the noble gases in the convecting mantle to be sourced, or "fluxed", by a deep long-lived volatile-rich mantle reservoir; and (ii) the requirement for the deep mantle reservoirs to be seismically invisible. The fluxing requirement is derived from the low mid-ocean ridge basalt (MORB)-source mantle 3He concentration, in turn calculated from the present day 3He flux from mid-ocean ridges into the oceans (T½ ˜ 1,000 yr) and the ocean crust generation rate (T½ ˜ 108 yr). Because of these very different residence times we consider the 3He concentration constraint to be weak. Furthermore, data show 3He/22Ne ratios derived from different mantle reservoirs to be distinct and require additional complexities to be added to any model advocating fluxing of the convecting mantle from a volatile-rich mantle reservoir. Recent work also shows that the convecting mantle 20Ne/22Ne isotopic composition is derived from an implanted meteoritic source and is distinct from at least one plume source system. If Ne isotope heterogeneity between convecting mantle and plume source mantle is confirmed, this result then excludes all mantle fluxing models. 
While isotopic heterogeneity requires further quantification, it has been shown that higher 3He concentrations in the convecting mantle, by a factor of 3.5, remove the need for the noble gases in the convecting mantle to be sourced from such a deep hidden reservoir. This "zero paradox" concentration [Ballentine et al., 2002] is then consistent with the different mantle source 3He/22Ne and 20Ne/22Ne heterogeneities. Higher convecting mantle noble gas concentrations also eliminate the requirement for a hidden mantle 40Ar rich-reservoir and enables the heat/4He imbalance to be explained by temporal variance in the different mechanisms of heat vs. He removal from the mantle system—two other key arguments for mantle layering. Confirmation of higher average convecting mantle noble gas concentrations remains the key test of such a concept.

  6. Boundedness and global existence in the higher-dimensional parabolic-parabolic chemotaxis system with/without growth source

    NASA Astrophysics Data System (ADS)

    Xiang, Tian

    2015-06-01

In this paper, we are concerned with a general class of quasilinear parabolic-parabolic chemotaxis systems with/without growth source, under homogeneous Neumann boundary conditions in a smooth bounded domain Ω ⊂Rn with n ≥ 2. It has recently been shown that blowup is possible even in the presence of superlinear growth restrictions. Here, we derive new and interesting characterizations of growth versus boundedness. We show that the hard task of proving the L∞-boundedness of the cell density can be reduced to proving its Lr-boundedness. In other words, we show that the Lr-boundedness of the cell density guarantees its L∞-boundedness and hence its global boundedness, where r = n + ɛ or n/2 + ɛ depending on whether the growth restriction is essentially linear (including no growth) or superlinear. Hence, a blowup solution also blows up in Lp-norm for any suitably large p. More detailed information on how the growth source affects the boundedness of the solution is derived. These results provide a deeper understanding of the blowup mechanism for chemotaxis models. We then use these criteria to establish uniform boundedness and hence global existence for the underlying models: logistic source in 2-D, cubic source as initially proposed by Mimura and Tsujikawa in 3-D, [ (n - 1) + ɛ ]st source in n-D with n ≥ 4. As a consequence, in a chemotaxis-growth model, blowup is impossible if the growth effect is suitably strong. Finally, we underline that our results remove the commonly assumed convexity of the domain Ω.

  7. Can we identify source lithology of basalt?

    PubMed

    Yang, Zong-Feng; Zhou, Jun-Hong

    2013-01-01

The nature of the source rocks of basaltic magmas plays a fundamental role in understanding the composition, structure and evolution of the solid earth. However, identification of the source lithology of basalts remains uncertain. Using a parameterization of multi-decadal melting experiments on a variety of peridotites and pyroxenites, we show here that a parameter called the FC3MS value (FeO/CaO-3*MgO/SiO2, all in wt%) can identify most pyroxenite-derived basalts. The continental oceanic island basalt-like volcanic rocks (MgO>7.5%) (C-OIB) in eastern China and Mongolia are too high in FC3MS value to be derived from a peridotite source. The majority of the C-OIB in phase diagrams are in equilibrium with garnet and clinopyroxene, indicating that garnet pyroxenite is the dominant source lithology. Our results demonstrate that many reputedly evolved low-magnesian C-OIBs in fact represent primary pyroxenite melts, suggesting that many previous geological and petrological interpretations of basalts based on the single peridotite model need to be reconsidered.
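The discriminant itself is a one-line computation from oxide concentrations; a trivial helper (the threshold separating peridotite- from pyroxenite-derived melts is established in the paper and is not reproduced here):

```python
def fc3ms(feo_wt, cao_wt, mgo_wt, sio2_wt):
    """FC3MS = FeO/CaO - 3*(MgO/SiO2), all oxides in wt%."""
    return feo_wt / cao_wt - 3.0 * mgo_wt / sio2_wt
```

For example, `fc3ms(10.0, 10.0, 10.0, 50.0)` gives 0.4.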

  8. Can we identify source lithology of basalt?

    PubMed Central

    Yang, Zong-Feng; Zhou, Jun-Hong

    2013-01-01

The nature of the source rocks of basaltic magmas plays a fundamental role in understanding the composition, structure and evolution of the solid earth. However, identification of the source lithology of basalts remains uncertain. Using a parameterization of multi-decadal melting experiments on a variety of peridotites and pyroxenites, we show here that a parameter called the FC3MS value (FeO/CaO-3*MgO/SiO2, all in wt%) can identify most pyroxenite-derived basalts. The continental oceanic island basalt-like volcanic rocks (MgO>7.5%) (C-OIB) in eastern China and Mongolia are too high in FC3MS value to be derived from a peridotite source. The majority of the C-OIB in phase diagrams are in equilibrium with garnet and clinopyroxene, indicating that garnet pyroxenite is the dominant source lithology. Our results demonstrate that many reputedly evolved low-magnesian C-OIBs in fact represent primary pyroxenite melts, suggesting that many previous geological and petrological interpretations of basalts based on the single peridotite model need to be reconsidered. PMID:23676779

  9. Time-dependent clustering analysis of the second BATSE gamma-ray burst catalog

    NASA Technical Reports Server (NTRS)

    Brainerd, J. J.; Meegan, C. A.; Briggs, Michael S.; Pendleton, G. N.; Brock, M. N.

    1995-01-01

    A time-dependent two-point correlation-function analysis of the Burst and Transient Source Experiment (BATSE) 2B catalog finds no evidence of burst repetition. As part of this analysis, we discuss the effects of sky exposure on the observability of burst repetition and present the equation describing the signature of burst repetition in the data. For a model of all burst repetition from a source occurring in less than five days we derive upper limits on the number of bursts in the catalog from repeaters and model-dependent upper limits on the fraction of burst sources that produce multiple outbursts.
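The raw ingredient of a two-point correlation analysis is the count of close pairs relative to the chance expectation. A toy version, which ignores the time ordering and the sky-exposure weighting that the BATSE analysis must account for, simply measures the fraction of pairs within a chosen angular separation:

```python
from math import cos, radians, sin

def pair_fraction_within(theta_max_deg, ra_deg, dec_deg):
    """Fraction of all source pairs separated by less than theta_max degrees
    (equatorial coordinates given in degrees)."""
    pts = []
    for ra, dec in zip(ra_deg, dec_deg):
        ra_r, dec_r = radians(ra), radians(dec)
        # unit vector on the celestial sphere
        pts.append((cos(dec_r) * cos(ra_r), cos(dec_r) * sin(ra_r), sin(dec_r)))
    cos_max = cos(radians(theta_max_deg))
    total, close = 0, 0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            total += 1
            dot = sum(a * b for a, b in zip(pts[i], pts[j]))
            if dot >= cos_max:  # angular separation <= theta_max
                close += 1
    return close / total
```

Repetition would show up as an excess of this fraction over that expected for an isotropic catalogue with the same sky exposure, restricted to pairs within the chosen time window (five days in the model above).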

  10. DAMIT: a database of asteroid models

    NASA Astrophysics Data System (ADS)

    Durech, J.; Sidorin, V.; Kaasalainen, M.

    2010-04-01

    Context. Apart from a few targets that were directly imaged by spacecraft, remote sensing techniques are the main source of information about the basic physical properties of asteroids, such as the size, the spin state, or the spectral type. The most widely used observing technique - time-resolved photometry - provides us with data that can be used for deriving asteroid shapes and spin states. In the past decade, inversion of asteroid lightcurves has led to more than a hundred asteroid models. In the next decade, when data from all-sky surveys are available, the number of asteroid models will increase. Combining photometry with, e.g., adaptive optics data produces more detailed models. Aims: We created the Database of Asteroid Models from Inversion Techniques (DAMIT) with the aim of providing the astronomical community access to reliable and up-to-date physical models of asteroids - i.e., their shapes, rotation periods, and spin axis directions. Models from DAMIT can be used for further detailed studies of individual objects, as well as for statistical studies of the whole set. Methods: Most DAMIT models were derived from photometric data by the lightcurve inversion method. Some of them have been further refined or scaled using adaptive optics images, infrared observations, or occultation data. A substantial number of the models were derived also using sparse photometric data from astrometric databases. Results: At present, the database contains models of more than one hundred asteroids. For each asteroid, DAMIT provides the polyhedral shape model, the sidereal rotation period, the spin axis direction, and the photometric data used for the inversion. The database is updated when new models are available or when already published models are updated or refined. We have also released the C source code for the lightcurve inversion and for the direct problem (updates and extensions will follow).

  11. Investigation of hydraulic transmission noise sources

    NASA Astrophysics Data System (ADS)

    Klop, Richard J.

    Advanced hydrostatic transmissions and hydraulic hybrids show potential in new market segments such as commercial vehicles and passenger cars. Such new applications regard low noise generation as a high priority, thus, demanding new quiet hydrostatic transmission designs. In this thesis, the aim is to investigate noise sources of hydrostatic transmissions to discover strategies for designing compact and quiet solutions. A model has been developed to capture the interaction of a pump and motor working in a hydrostatic transmission and to predict overall noise sources. This model allows a designer to compare noise sources for various configurations and to design compact and inherently quiet solutions. The model describes dynamics of the system by coupling lumped parameter pump and motor models with a one-dimensional unsteady compressible transmission line model. The model has been verified with dynamic pressure measurements in the line over a wide operating range for several system structures. Simulation studies were performed illustrating sensitivities of several design variables and the potential of the model to design transmissions with minimal noise sources. A semi-anechoic chamber has been designed and constructed suitable for sound intensity measurements that can be used to derive sound power. Measurements proved the potential to reduce audible noise by predicting and reducing both noise sources. Sound power measurements were conducted on a series hybrid transmission test bench to validate the model and compare predicted noise sources with sound power.

  12. Teaching Image Formation by Extended Light Sources: The Use of a Model Derived from the History of Science

    ERIC Educational Resources Information Center

    Dedes, Christos; Ravanis, Konstantinos

    2009-01-01

    This research, carried out in Greece on pupils aged 12-16, focuses on the transformation of their representations concerning light emission and image formation by extended light sources. The instructive process was carried out in two stages, each one having a different, distinct target set. During the first stage, the appropriate conflict…

  13. A moving medium formulation for prediction of propeller noise at incidence

    NASA Astrophysics Data System (ADS)

    Ghorbaniasl, Ghader; Lacor, Chris

    2012-01-01

This paper presents a time domain formulation for the sound field radiated by moving bodies in a uniform steady flow with arbitrary orientation. The aim is to provide a formulation for predicting noise from a body so that the effects of crossflow on a propeller can be modeled in the time domain. An established theory of noise generation by a moving source is combined with the moving-medium Green's function to derive the formulation. A formula with a Doppler factor is developed because it is more easily interpreted and more helpful in examining the physics of the system. Based on the technique presented, the source of asymmetry of the sound field can be explained in terms of the physics of a moving source. It is shown that the derived formulation can be interpreted as an extension of formulations 1 and 1A of Farassat, based on the Ffowcs Williams and Hawkings (FW-H) equation, to moving-medium problems. Computational results for stationary monopole and dipole point sources in a moving medium, a rotating point force in crossflow, a model helicopter blade at incidence, and a propeller with subsonic tips at incidence verify the formulation.

  14. Laurentide Ice-Sheet Meltwater Sources to the Gulf of Mexico During the Last Deglaciation: Assessing Data Reconstructions Using Water Isotope Enabled Simulations

    NASA Astrophysics Data System (ADS)

    Vetter, L.; LeGrande, A. N.; Ullman, D. J.; Carlson, A. E.

    2017-12-01

    Sediment cores from the Gulf of Mexico show evidence of meltwater derived from the Laurentide Ice Sheet during the last deglaciation. Recent studies using geochemical measurements of individual foraminifera suggest changes in the oxygen isotopic composition of the meltwater as deglaciation proceeded. Here we use the water isotope enabled climate model simulations (NASA GISS ModelE-R) to investigate potential sources of meltwater within the ice sheet. We find that initial melting of the ice sheet from the southern margin contributed an oxygen isotope value reflecting a low-elevation, local precipitation source. As deglacial melting proceeded, meltwater delivered to the Gulf of Mexico had a more negative oxygen isotopic value, which the climate model simulates as being sourced from the high-elevation, high-latitude interior of the ice sheet. This study demonstrates the utility of combining stable isotope analyses with climate model simulations to investigate past changes in the hydrologic cycle.

  15. Nonlinear optimal control policies for buoyancy-driven flows in the built environment

    NASA Astrophysics Data System (ADS)

    Nabi, Saleh; Grover, Piyush; Caulfield, Colm

    2017-11-01

We consider optimal control of turbulent buoyancy-driven flows in the built environment, focusing on a model test case of displacement ventilation with a time-varying heat source. The flow is modeled using the unsteady Reynolds-averaged equations (URANS). To better understand the stratification dynamics, we derive a low-order partial-mixing ODE model extending the buoyancy-driven emptying-filling box problem to the case where both the heat source and the (controlled) inlet flow are time-varying. In the limit of a single step change in the heat source strength, our model is consistent with that of Bower et al. Our model considers the dynamics of both `filling' and `intruding' added layers due to a time-varying source and inlet flow. A nonlinear direct-adjoint-looping optimal control formulation yields time-varying values of the temperature and velocity of the inlet flow that lead to `optimal' time-averaged temperature relative to appropriate objective functionals in a region of interest.

  16. Geochemical and Nd-Sr-Pb isotope characteristics of synorogenic lower crust-derived granodiorites (Central Damara orogen, Namibia)

    NASA Astrophysics Data System (ADS)

    Simon, I.; Jung, S.; Romer, R. L.; Garbe-Schönberg, D.; Berndt, J.

    2017-03-01

The 547 ± 7 Ma old Achas intrusion (Damara orogen, Namibia) includes magnesian, metaluminous to slightly peraluminous, calcic to calc-alkalic granodiorites and ferroan, metaluminous to slightly peraluminous, calc-alkalic to alkali-calcic leucogranites. For the granodiorites, major and trace element variations show weak if any evidence for fractional crystallization, whereas some leucogranites are highly fractionated. Both granodiorites and leucogranites are isotopically evolved (granodiorites: εNdinit: - 12.4 to - 20.5; TDM: 2.4-1.9; leucogranites: εNdinit: - 12.1 to - 20.6, TDM: 2.5-2.0), show similar Pb isotopic compositions, and may be derived from late Archean to Paleoproterozoic crustal source rocks. Comparison with melting experiments and simple partial melting modeling indicates that the granodiorites may be derived by extensive melting (> 40%) at 900-950 °C under water-undersaturated conditions (< 5 wt.% H2O) of felsic gneisses. Al-Ti and zircon saturation thermometry of the most primitive granodiorite sample yielded temperatures of ca. 930 °C and ca. 800 °C. In contrast to other lower crust-derived granodiorites and granites of the Central Damara orogen, the composition of the magma source is considered the first-order cause of the compositional diversity of the Achas granite. Second-order processes such as fractional crystallization were minor, at least for the granodiorites, and evidence for coupled assimilation-fractional crystallization processes is lacking. The most likely petrogenetic model involves high-temperature partial melting of a Paleoproterozoic felsic source in the lower crust ca. 10-20 Ma before the first peak of regional high-temperature metamorphism. Underplating of the lower crust by magmas derived from the lithospheric mantle may have provided the heat for melting of the basement to produce anhydrous granodioritic melts.

  17. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    NASA Astrophysics Data System (ADS)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported an improved explicit model, referred to as the "Virtual Source" (VS) diffusion approximation (DA), which inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations and is further introduced in the image reconstruction of the Laminar Optical Tomography system.

  18. Magnitudes and Sources of Catchment Sediment: When A + B Doesn't Equal C

    NASA Astrophysics Data System (ADS)

    Simon, A.

    2015-12-01

The export of land-based sediments to receiving waters can cause degradation of water quality and habitat, loss of reservoir capacity, and damage to reef ecosystems. Predictions of sources and magnitudes generally come from simulations using catchment models that focus on overland flow processes at the expense of gully and channel processes. This is not appropriate for many catchments where recent research has shown that the dominant erosion sources have shifted from the uplands and fields following European settlement to the alluvial valleys today. Yet catchment models that fail to adequately address channel and bank processes remain the overwhelming choice of resource agencies for managing sediment export. These models often utilize measured values of sediment load at the river mouth to "calibrate" the magnitude of loads emanating from uplands and fields. The difference between the sediment load at the mouth and the simulated upland loading is then apportioned to channel sources. Bank erosion from the Burnett River (a "Reef Catchment" in eastern Queensland) was quantified by comparisons of bank-top locations and by numerical modeling using BSTEM. Results show that bank-derived sediment contributes between 44 and 73% of the sediment load being exported to the Coral Sea. In comparison, reported results from a catchment model showed bank contributions of 8%. In absolute terms, this is an increase in the reported average annual rate of bank erosion from 0.175 Mt/y to 2.0 Mt/y. In the Hoteo River, New Zealand, a rural North Island catchment characterized by resistant cohesive sediments, bank erosion was found to contribute at least 48% of the total specific sediment yield. Combining the bank-derived, fine-grained loads from some of the major tributaries gives a total average annual loading rate for fine material of about 10,900 t/y for the studied reaches in the Hoteo River system.
If the study was extended to include the lower reaches of the main stem channel and other tributary reaches, this percentage would be higher. Similar studies in the United States using combinations of empirical and numerical modeling techniques have also disclosed that bank-derived sediment can be the major source of sediment in many catchments. An approach to improve the accuracy of predictions is proposed.

  19. Engineering applications of strong ground motion simulation

    NASA Astrophysics Data System (ADS)

    Somerville, Paul

    1993-02-01

    The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip of the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. 
We then show examples of the application of the simulation procedure to the estimation of design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.

  20. Influence of Elevation Data Source on 2D Hydraulic Modelling

    NASA Astrophysics Data System (ADS)

    Bakuła, Krzysztof; Stępnik, Mateusz; Kurczyński, Zdzisław

    2016-08-01

    The aim of this paper is to analyse the influence of various sources of elevation data on hydraulic modelling in open channels. In the research, digital terrain models from different datasets were evaluated and used in two-dimensional hydraulic models. The following aerial and satellite elevation data were used to create the digital terrain model representing the terrain: airborne laser scanning (ALS), image matching, elevation data collected in the LPIS, EuroDEM, and ASTER GDEM. From the results of five 2D hydrodynamic models with different input elevation data, the maximum depth and flow velocity of water were derived and compared with the results obtained with the most accurate ALS data. For this analysis, a statistical evaluation of the differences between the hydraulic modelling results was prepared. The presented research proved the importance of the quality of elevation data in hydraulic modelling and showed that only ALS and photogrammetric data can be considered reliable elevation data sources for accurate 2D hydraulic modelling.
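A statistical evaluation of this kind typically compares gridded model outputs (e.g. maximum water depth) from each elevation source against the reference run. A minimal sketch, using synthetic stand-in grids rather than the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Reference result from the most accurate (ALS-based) model run
depth_als = rng.uniform(0.0, 3.0, size=(100, 100))
# Result from a model run on a less accurate DTM (e.g. image matching),
# simulated here as the reference plus noise
depth_other = depth_als + rng.normal(0.0, 0.15, size=(100, 100))

diff = depth_other - depth_als
rmse = float(np.sqrt(np.mean(diff ** 2)))   # root-mean-square difference (m)
bias = float(np.mean(diff))                 # systematic offset (m)
print(f"RMSE = {rmse:.3f} m, bias = {bias:+.3f} m")
```

Differences in maximum flow velocity can be evaluated the same way, grid cell by grid cell.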

  1. A global reconstruction of climate-driven subdecadal water storage variability

    NASA Astrophysics Data System (ADS)

    Humphrey, V.; Gudmundsson, L.; Seneviratne, S. I.

    2017-03-01

    Since 2002, the Gravity Recovery and Climate Experiment (GRACE) mission has provided unprecedented observations of global mass redistribution caused by hydrological processes. However, there are still few sources on pre-2002 global terrestrial water storage (TWS). Classical approaches to retrieve past TWS rely on either land surface models (LSMs) or basin-scale water balance calculations. Here we propose a new approach which statistically relates anomalies in atmospheric drivers to monthly GRACE anomalies. Gridded subdecadal TWS changes and time-dependent uncertainty intervals are reconstructed for the period 1985-2015. Comparisons with model results demonstrate the performance and robustness of the derived data set, which represents a new and valuable source for studying subdecadal TWS variability, closing the ocean/land water budgets and assessing GRACE uncertainties. At midpoint between GRACE observations and LSM simulations, the statistical approach provides TWS estimates (doi:10.5905/ethz-1007-85) that are essentially derived from observations and are based on a limited number of transparent model assumptions.
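The statistical idea sketched in the abstract (relating anomalies in atmospheric drivers to observed TWS anomalies during the GRACE era, then applying the fitted relation to earlier decades) can be illustrated with a simple least-squares regression on synthetic data. The driver names, coefficients and noise level below are assumptions for illustration, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_hindcast = 150, 200               # months with / without "GRACE" data
precip = rng.normal(size=n_train + n_hindcast)   # precipitation anomaly
temp = rng.normal(size=n_train + n_hindcast)     # temperature anomaly
tws_true = 2.0 * precip - 0.5 * temp             # "true" TWS anomaly
tws_obs = tws_true[-n_train:] + rng.normal(0.0, 0.1, n_train)  # noisy GRACE era

# Fit the driver-TWS relation on the observed period
X = np.column_stack([precip, temp, np.ones(n_train + n_hindcast)])
coef, *_ = np.linalg.lstsq(X[-n_train:], tws_obs, rcond=None)

# Reconstruct the pre-"GRACE" period from the drivers alone
tws_reconstructed = X[:n_hindcast] @ coef
max_err = float(np.max(np.abs(tws_reconstructed - tws_true[:n_hindcast])))
```

A real reconstruction would be done per grid cell with time-dependent uncertainty intervals, as described above.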

  2. Update on GOCART Model Development and Applications

    NASA Technical Reports Server (NTRS)

    Kim, Dongchul

    2013-01-01

    Recent results from the GOCART and GMI models are reported. They include: updated emission inventories for anthropogenic and volcanic sources, a satellite-derived vegetation index for seasonal variations of dust emission, MODIS-derived smoke AOT for assessing uncertainties in biomass-burning emissions, long-range transport of aerosol across the Pacific Ocean, and model studies of the multi-decadal trend of regional and global aerosol distributions from 1980 to 2010, volcanic aerosols, and nitrate aerosols. The document was presented at the 2013 AeroCenter Annual Meeting held at the GSFC Visitors Center, May 31, 2013. The organizers of the meeting are posting the talks to the public AeroCenter website after the meeting.

  3. Modeling Insights into Deuterium Excess as an Indicator of Water Vapor Source Conditions

    NASA Technical Reports Server (NTRS)

    Lewis, Sophie C.; Legrande, Allegra Nicole; Kelley, Maxwell; Schmidt, Gavin A.

    2013-01-01

    Deuterium excess (d) is interpreted in conventional paleoclimate reconstructions as a tracer of oceanic source region conditions, such as temperature, where precipitation originates. Previous studies have adopted co-isotopic approaches to estimate past changes in both site and oceanic source temperatures for ice core sites using empirical relationships derived from conceptual distillation models, particularly Mixed Cloud Isotopic Models (MCIMs). However, the relationship between d and oceanic surface conditions remains unclear in past contexts. We investigate this climate-isotope relationship for sites in Greenland and Antarctica using multiple simulations of the water isotope-enabled Goddard Institute for Space Studies (GISS) ModelE-R general circulation model and apply a novel suite of model vapor source distribution (VSD) tracers to assess d as a proxy for source temperature variability under a range of climatic conditions. Simulated average source temperatures determined by the VSDs are compared to synthetic source temperature estimates calculated using MCIM equations linking d to source region conditions. We show that although deuterium excess is generally a faithful tracer of source temperatures as estimated by the MCIM approach, large discrepancies in the isotope-climate relationship occur around Greenland during the Last Glacial Maximum simulation, when precipitation seasonality and moisture source regions were notably different from present. This identified sensitivity in d as a source temperature proxy suggests that quantitative climate reconstructions from deuterium excess should be treated with caution for some sites when boundary conditions are significantly different from the present day. Also, the exclusion of the influence of humidity and other evaporative source changes in MCIM regressions may be a limitation of quantifying source temperature fluctuations from deuterium excess in some instances.
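Deuterium excess itself has a simple standard definition (Dansgaard, 1964): d = δD − 8·δ¹⁸O, the deviation from the global meteoric water line δD = 8·δ¹⁸O + 10. A quick computation with illustrative ice-core-like values:

```python
def deuterium_excess(delta_D, delta_18O):
    """d-excess in per mil, given δD and δ18O in per mil."""
    return delta_D - 8.0 * delta_18O

# Illustrative values only, not from a specific core
d_excess = deuterium_excess(-250.0, -32.0)
print(d_excess)   # → 6.0 per mil
```

It is this derived quantity whose link to source-region temperature the MCIM regressions attempt to quantify.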

  4. Implementation of remote sensing data for flood forecasting

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Li, Y.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2016-12-01

    Flooding is one of the most frequent and destructive natural disasters. A timely, accurate and reliable flood forecast can provide vital information for flood preparedness, warning delivery, and emergency response. An operational flood forecasting system typically consists of a hydrologic model, which simulates runoff generation and concentration, and a hydraulic model, which models riverine flood wave routing and floodplain inundation. However, these two types of models suffer from various sources of uncertainty, e.g., forcing data, initial conditions, model structure and parameters. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed Remote Sensing (RS) data offers new opportunities for flood event investigation and forecasting. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture data to constrain a hydrologic model, and 2) RS-derived flood extent and level to constrain a hydraulic model. The hydrological model is based on a semi-distributed system coupling the two-soil-layer rainfall-runoff model GRKAL with a linear Muskingum routing model. Model calibration was performed using either 1) streamflow data only or 2) both streamflow and RS soil moisture data. The model was then further constrained through the integration of real-time soil moisture data. The hydraulic model is based on LISFLOOD-FP, which solves the 2D inertial approximation of the Shallow Water Equations. Streamflow data and RS-derived flood extent and levels were used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space was quantified and discussed.
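The linear Muskingum routing scheme named in the abstract is a classical recurrence, O₂ = C₀I₂ + C₁I₁ + C₂O₁, with coefficients derived from the storage constant K and weighting factor X. A minimal sketch (the K, X values and inflow hydrograph are illustrative; the GRKAL coupling is not reproduced):

```python
import numpy as np

def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
    """Route an inflow hydrograph through a linear Muskingum reach."""
    denom = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / denom
    c1 = (dt + 2.0 * K * X) / denom
    c2 = (2.0 * K * (1.0 - X) - dt) / denom    # note c0 + c1 + c2 == 1
    outflow = np.zeros_like(inflow, dtype=float)
    outflow[0] = inflow[0]
    for i in range(1, len(inflow)):
        outflow[i] = c0 * inflow[i] + c1 * inflow[i - 1] + c2 * outflow[i - 1]
    return outflow

hydrograph = np.array([10, 30, 60, 45, 30, 20, 15, 12, 10, 10], dtype=float)
routed = muskingum_route(hydrograph)
```

The routed peak is attenuated and delayed relative to the inflow peak, which is the behaviour a channel-routing component contributes to a forecasting chain.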

  5. Gaussian Mixture Models of Between-Source Variation for Likelihood Ratio Computation from Multivariate Data

    PubMed Central

    Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
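A one-dimensional toy version of the idea can make the structure concrete: the numerator of the likelihood ratio evaluates a within-source normal centred on the suspect-item mean, while the denominator evaluates a between-source Gaussian mixture. All parameters below are assumed to have been fitted beforehand and are purely illustrative, not the paper's multivariate derivation.

```python
import numpy as np

def normal_pdf(y, mean, var):
    return np.exp(-0.5 * (y - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def gmm_pdf(y, weights, means, variances):
    """Density of a 1-D Gaussian mixture at y."""
    return sum(w * normal_pdf(y, m, v)
               for w, m, v in zip(weights, means, variances))

y = 1.9                 # measurement on the crime-scene trace (illustrative)
suspect_mean = 2.0      # mean of measurements on the suspect's item
within_var = 0.04       # within-source variance, assumed known and constant

# Between-source model: a two-component GMM standing in for a fitted model
weights, means, variances = [0.6, 0.4], [0.0, 2.5], [1.0, 0.5]

numerator = normal_pdf(y, suspect_mean, within_var)      # same-source hypothesis
denominator = gmm_pdf(y, weights, means, variances)      # different-source hypothesis
LR = float(numerator / denominator)
print(f"LR = {LR:.2f}")   # LR > 1 supports the same-source hypothesis
```

In the paper's setting the observations are multivariate and the GMM is fitted to the background population of source means, with calibration assessed via Cllr.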

  6. Uncertainty of Passive Imager Cloud Optical Property Retrievals to Instrument Radiometry and Model Assumptions: Examples from MODIS

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Meyer, Kerry; Amarasinghe, Nandana; Arnold, G. Thomas; Zhang, Zhibo; King, Michael D.

    2013-01-01

    The optical and microphysical structure of clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS on the NASA EOS Terra and Aqua platforms, simultaneous global-daily 1 km retrievals of cloud optical thickness (COT) and effective particle radius (CER) are provided, as well as the derived water path (WP). The cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate retrieval datasets for various two-channel retrievals, typically a VIS/NIR channel paired with a 1.6, 2.1, and 3.7 μm spectral channel. The MOD06 forward model is derived from a homogeneous plane-parallel cloud. In Collection 5 processing (completed in 2007 with a modified Collection 5.1 completed in 2010), pixel-level retrieval uncertainties were calculated for the following non-3-D error sources: radiometry, surface spectral albedo, and atmospheric corrections associated with model analysis uncertainties (water vapor only). The latter error source includes error correlation across the retrieval spectral channels. Estimates of uncertainty in 1° aggregated (Level-3) means were also provided assuming unity correlation between error sources for all pixels in a grid for a single day, and zero correlation of error sources from one day to the next.
In Collection 6 (expected to begin in late summer 2013) we expanded the uncertainty analysis to include: (a) scene-dependent calibration uncertainty that depends on new band- and detector-specific Level 1B uncertainties, (b) new model error sources derived from the look-up tables, which include sensitivities associated with wind direction over the ocean and uncertainties in liquid water and ice effective variance, (c) thermal emission uncertainties in the 3.7 μm band associated with cloud and surface temperatures that are needed to extract reflected solar radiation from the total radiance signal, (d) uncertainty in the solar spectral irradiance at 3.7 μm, and (e) addition of stratospheric ozone uncertainty in visible atmospheric corrections. A summary of the approach and example Collection 6 results will be shown.

  7. Uncertainty of passive imager cloud retrievals to instrument radiometry and model assumptions: Examples from MODIS Collection 6

    NASA Astrophysics Data System (ADS)

    Platnick, S.; Wind, G.; Amarasinghe, N.; Arnold, G. T.; Zhang, Z.; Meyer, K.; King, M. D.

    2013-12-01

    The optical and microphysical structure of clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness (COT) and effective particle radius (CER) are provided, as well as the derived water path (WP). The cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate retrieval datasets for various two-channel retrievals, typically a VIS/NIR channel paired with a 1.6, 2.1, and 3.7 μm spectral channel. The MOD06 forward model is derived from a homogeneous plane-parallel cloud. In Collection 5 processing (completed in 2007 with a modified Collection 5.1 completed in 2010), pixel-level retrieval uncertainties were calculated for the following non-3-D error sources: radiometry, surface spectral albedo, and atmospheric corrections associated with model analysis uncertainties (water vapor only). The latter error source includes error correlation across the retrieval spectral channels. Estimates of uncertainty in 1° aggregated (Level-3) means were also provided assuming unity correlation between error sources for all pixels in a grid for a single day, and zero correlation of error sources from one day to the next. 
In Collection 6 (expected to begin in late summer 2013) we expanded the uncertainty analysis to include: (a) scene-dependent calibration uncertainty that depends on new band and detector-specific Level 1B uncertainties, (b) new model error sources derived from the look-up tables which includes sensitivities associated with wind direction over the ocean and uncertainties in liquid water and ice effective variance, (c) thermal emission uncertainties in the 3.7 μm band associated with cloud and surface temperatures that are needed to extract reflected solar radiation from the total radiance signal, (d) uncertainty in the solar spectral irradiance at 3.7 μm, and (e) addition of stratospheric ozone uncertainty in visible atmospheric corrections. A summary of the approach and example Collection 6 results will be shown.
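The correlation assumption in the Level-3 aggregation matters a great deal: with unity correlation between pixel-level errors, the uncertainty of a grid-box mean does not shrink with the number of pixels, whereas with zero correlation it falls as 1/√N. A small numeric illustration (the 5% pixel uncertainty and pixel count are assumed values):

```python
import numpy as np

sigma_pixel = np.full(10000, 0.05)   # 5% pixel-level uncertainty, N = 10000 pixels

# Unity correlation: errors add coherently, so the mean's uncertainty
# is just the mean of the pixel uncertainties
sigma_mean_correlated = float(sigma_pixel.mean())

# Zero correlation: errors add in quadrature, so the mean's uncertainty
# shrinks as 1/sqrt(N)
sigma_mean_uncorrelated = float(np.sqrt(np.sum(sigma_pixel ** 2)) / len(sigma_pixel))

print(sigma_mean_correlated, sigma_mean_uncorrelated)
```

The two assumptions differ here by a factor of 100, which is why the within-day/between-day correlation treatment is stated explicitly in the product description.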

  8. Alternative functional in vitro models of human intestinal epithelia

    PubMed Central

    Kauffman, Amanda L.; Gyurdieva, Alexandra V.; Mabus, John R.; Ferguson, Chrissa; Yan, Zhengyin; Hornby, Pamela J.

    2013-01-01

    Physiologically relevant sources of absorptive intestinal epithelial cells are crucial for human drug transport studies. Human adenocarcinoma-derived intestinal cell lines, such as Caco-2, offer conveniences of easy culture maintenance and scalability, but do not fully recapitulate in vivo intestinal phenotypes. Additional sources of renewable physiologically relevant human intestinal cells would provide a much needed tool for drug discovery and intestinal physiology. We compared two alternative sources of human intestinal cells, commercially available primary human intestinal epithelial cells (hInEpCs) and induced pluripotent stem cell (iPSC)-derived intestinal cells, to Caco-2 for use in in vitro transwell monolayer intestinal transport assays. To achieve this for iPSC-derived cells, intestinal organogenesis was adapted to transwell differentiation. Intestinal cells were assessed by marker expression through immunocytochemical and mRNA expression analyses, monolayer integrity through Transepithelial Electrical Resistance (TEER) measurements and molecule permeability, and functionality by taking advantage of the well-characterized intestinal transport mechanisms. In most cases, marker expression for primary hInEpCs and iPSC-derived cells appeared to be as good as or better than Caco-2. Furthermore, transwell monolayers exhibited high TEER with low permeability. Primary hInEpCs showed molecule efflux indicative of P-glycoprotein (Pgp) transport. Primary hInEpCs and iPSC-derived cells also showed neonatal Fc receptor-dependent binding of immunoglobulin G variants. Primary hInEpCs and iPSC-derived intestinal cells exhibit expected marker expression and demonstrate basic functional monolayer formation, similar to or better than Caco-2. These cells could offer an alternative source of human intestinal cells for understanding normal intestinal epithelial physiology and drug transport. PMID:23847534

  9. Alternative functional in vitro models of human intestinal epithelia.

    PubMed

    Kauffman, Amanda L; Gyurdieva, Alexandra V; Mabus, John R; Ferguson, Chrissa; Yan, Zhengyin; Hornby, Pamela J

    2013-01-01

    Physiologically relevant sources of absorptive intestinal epithelial cells are crucial for human drug transport studies. Human adenocarcinoma-derived intestinal cell lines, such as Caco-2, offer conveniences of easy culture maintenance and scalability, but do not fully recapitulate in vivo intestinal phenotypes. Additional sources of renewable physiologically relevant human intestinal cells would provide a much needed tool for drug discovery and intestinal physiology. We compared two alternative sources of human intestinal cells, commercially available primary human intestinal epithelial cells (hInEpCs) and induced pluripotent stem cell (iPSC)-derived intestinal cells, to Caco-2 for use in in vitro transwell monolayer intestinal transport assays. To achieve this for iPSC-derived cells, intestinal organogenesis was adapted to transwell differentiation. Intestinal cells were assessed by marker expression through immunocytochemical and mRNA expression analyses, monolayer integrity through Transepithelial Electrical Resistance (TEER) measurements and molecule permeability, and functionality by taking advantage of the well-characterized intestinal transport mechanisms. In most cases, marker expression for primary hInEpCs and iPSC-derived cells appeared to be as good as or better than Caco-2. Furthermore, transwell monolayers exhibited high TEER with low permeability. Primary hInEpCs showed molecule efflux indicative of P-glycoprotein (Pgp) transport. Primary hInEpCs and iPSC-derived cells also showed neonatal Fc receptor-dependent binding of immunoglobulin G variants. Primary hInEpCs and iPSC-derived intestinal cells exhibit expected marker expression and demonstrate basic functional monolayer formation, similar to or better than Caco-2. These cells could offer an alternative source of human intestinal cells for understanding normal intestinal epithelial physiology and drug transport.

  10. Discrete time modeling and stability analysis of TCP Vegas

    NASA Astrophysics Data System (ADS)

    You, Byungyong; Koo, Kyungmo; Lee, Jin S.

    2007-12-01

    This paper presents an analysis method for a TCP Vegas network model with a single link and a single source. Some papers have shown global stability of several network models, but those models are not dual problems in which dynamics exist in both sources and links, as in TCP Vegas. Other papers have studied TCP Vegas as a dual problem, but did not fully derive an asymptotic stability region. We therefore analyze TCP Vegas with Jury's criterion, which provides a necessary and sufficient condition. Using a discrete-time state-space model and Jury's criterion, we find an asymptotic stability region of the TCP Vegas network model. This result is verified by ns-2 simulation, and comparison with other results shows that our method performs well.
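Jury's criterion decides whether all roots of a discrete-time characteristic polynomial lie strictly inside the unit circle (Schur stability). A quick numerical equivalent, shown here as an illustration rather than the paper's symbolic tabular test, is to compute the roots directly:

```python
import numpy as np

def is_schur_stable(coeffs):
    """True if all roots of the polynomial (highest degree first)
    lie strictly inside the unit circle."""
    return bool(np.all(np.abs(np.roots(coeffs)) < 1.0))

# z^2 - 0.5 z + 0.06 = (z - 0.2)(z - 0.3): both roots inside the unit circle
stable = is_schur_stable([1.0, -0.5, 0.06])
# z^2 - 2.5 z + 1 = (z - 2)(z - 0.5): one root outside the unit circle
unstable = is_schur_stable([1.0, -2.5, 1.0])
print(stable, unstable)   # → True False
```

The advantage of Jury's tabular form in the paper is that it yields symbolic inequalities on the model parameters, which is what defines the asymptotic stability region.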

  11. Costs and benefits

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Two models of cost-benefit analysis are illustrated, and the application of these models to assessing the economic scope of space applications programs is discussed. Four major areas cited as improvable through space-derived information - food supply and distribution, energy sources, mineral reserves, and communication and navigation - are discussed. Specific illustrations are given for agriculture and maritime traffic.

  12. Anti-aging effects of vitamin C on human pluripotent stem cell-derived cardiomyocytes.

    PubMed

    Kim, Yoon Young; Ku, Seung-Yup; Huh, Yul; Liu, Hung-Ching; Kim, Seok Hyun; Choi, Young Min; Moon, Shin Yong

    2013-10-01

    Human pluripotent stem cells (hPSCs) have arisen as a source of cells for biomedical research due to their developmental potential. Stem cells possess the promise of providing clinicians with novel treatments for disease as well as allowing researchers to generate human-specific cellular metabolism models. Aging is a natural process of living organisms, yet aging in human heart cells is difficult to study due to the ethical considerations regarding human experimentation as well as a current lack of alternative experimental models. hPSC-derived cardiomyocytes (CMs) bear a resemblance to human cardiac cells and thus hPSC-derived CMs are considered to be a viable alternative model to study human heart cell aging. In this study, we used hPSC-derived CMs as an in vitro aging model. We generated cardiomyocytes from hPSCs and demonstrated the process of aging in both human embryonic stem cell (hESC)- and induced pluripotent stem cell (hiPSC)-derived CMs. Aging in hESC-derived CMs correlated with reduced membrane potential in mitochondria, the accumulation of lipofuscin, a slower beating pattern, and the downregulation of human telomerase RNA (hTR) and cell cycle regulating genes. Interestingly, the expression of hTR in hiPSC-derived CMs was not significantly downregulated, unlike in hESC-derived CMs. In order to delay aging, vitamin C was added to the cultured CMs. When cells were treated with 100 μM of vitamin C for 48 h, anti-aging effects, specifically on the expression of telomere-related genes and their functionality in aging cells, were observed. Taken together, these results suggest that hPSC-derived CMs can be used as a unique human cardiomyocyte aging model in vitro and that vitamin C shows anti-aging effects in this model.

  13. The Chandra Source Catalog: Spectral Properties

    NASA Astrophysics Data System (ADS)

    Doe, Stephen; Siemiginowska, Aneta L.; Refsdal, Brian L.; Evans, Ian N.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Glotfelty, Kenny J.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Primini, Francis A.; Rots, Arnold H.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2009-09-01

    The first release of the Chandra Source Catalog (CSC) contains all sources identified from eight years' worth of publicly accessible observations. The vast majority of these sources have been observed with the ACIS detector and have spectral information in the 0.5-7 keV energy range. Here we describe the methods used to automatically derive spectral properties for each source detected by the standard processing pipeline and included in the final CSC. Hardness ratios were calculated for each source between pairs of energy bands (soft, medium and hard) using a Bayesian algorithm (BEHR, Park et al. 2006). The sources with high signal-to-noise ratio (exceeding 150 net counts) were fit in Sherpa (the modeling and fitting application from the Chandra Interactive Analysis of Observations package, developed by the Chandra X-ray Center; see Freeman et al. 2001). Two models were fit to each source: an absorbed power law and blackbody emission. The fitted parameter values for the power-law and blackbody models were included in the catalog with the calculated flux for each model. The CSC also provides the source energy flux computed from the normalizations of predefined power-law and blackbody models needed to match the observed net X-ray counts. In addition, we provide access to data products for each source: a file with the source spectrum, the background spectrum, and the spectral response of the detector. This work is supported by NASA contract NAS8-03060 (CXC).
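A hardness ratio between two energy bands is commonly defined as HR = (H − S)/(H + S). The naive version below illustrates the quantity; the catalog itself uses the Bayesian BEHR estimator, which handles the low-count Poisson regime properly where this simple ratio breaks down.

```python
def hardness_ratio(soft_counts, hard_counts):
    """Naive hardness ratio (H - S) / (H + S); ranges from -1 to +1."""
    return (hard_counts - soft_counts) / (hard_counts + soft_counts)

# Illustrative counts: a soft-dominated source
hr = hardness_ratio(120, 80)
print(hr)   # → -0.2
```

Negative values indicate a soft spectrum, positive values a hard (or heavily absorbed) one.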

  14. Using Sediment Records to Reconstruct Historical Inputs Combustion-Derived Contaminants to Urban Airsheds/Watersheds: A Case Study From the Puget Sound

    NASA Astrophysics Data System (ADS)

    Louchouarn, P. P.; Kuo, L.; Brandenberger, J.; Marcantonio, F.; Wade, T. L.; Crecelius, E.; Gobeil, C.

    2008-12-01

    Urban centers are major sources of combustion-derived particulate matter (e.g. black carbon (BC), polycyclic aromatic hydrocarbons (PAH), anhydrosugars) and volatile organic compounds to the atmosphere. Evidence is mounting that atmospheric emissions from combustion sources remain major contributors to air pollution of urban systems. For example, recent historical reconstructions of depositional fluxes for pyrogenic PAHs close to urban systems have shown an unanticipated reversal in the trends of decreasing emissions initiated during the mid-20th Century. Here we compare a series of historical reconstructions of combustion emission in urban and rural airsheds over the last century using sedimentary records. A complex suite of combustion proxies (BC, PAHs, anhydrosugars, stable lead concentrations and isotope signatures) assisted in elucidating major changes in the type of atmospheric aerosols originating from specific processes (i.e. biomass burning vs. fossil fuel combustion) or fuel sources (wood vs. coal vs. oil). In all studied locations, coal continues to be a major source of combustion-derived aerosols since the early 20th Century. Recently, however, oil and biomass combustion have become substantial additional sources of atmospheric contamination. In the Puget Sound basin, along the Pacific Northwest region of the U.S., rural locations not impacted by direct point sources of contamination have helped assess the influence of catalytic converters on concentrations of oil-derived PAH and lead inputs since the early 1970s. Although atmospheric deposition of lead has continued to drop since the introduction of catalytic converters and ban on leaded gasoline, PAH inputs have "rebounded" in the last decade. A similar steady and recent rise in PAH accumulations in urban systems has been ascribed to continued urban sprawl and increasing vehicular traffic. 
In the U.S., automotive emissions, whether from gasoline or diesel combustion, are becoming a major source of combustion-derived PM and BC to the atmosphere and have started to replace coal as the major source in some surficial reservoirs. This increased urban influence of gasoline and diesel combustion on BC emissions was also observed in Europe both from model estimates as well as from measured fluxes in recent lake sediments.

  15. Calculation and Analysis of magnetic gradient tensor components of global magnetic models

    NASA Astrophysics Data System (ADS)

    Schiffler, Markus; Queitsch, Matthias; Schneider, Michael; Stolz, Ronny; Krech, Wolfram; Meyer, Hans-Georg; Kukowski, Nina

    2014-05-01

    Magnetic mapping missions like SWARM and its predecessors, e.g. the CHAMP and MAGSAT programs, offer high-resolution data on the Earth's magnetic field. These datasets are usually combined with magnetic observatory and survey data, and subjected to harmonic analysis. The derived spherical harmonic coefficients enable magnetic field modelling using a potential series expansion. Recently, new instruments like the JeSSY STAR Full Tensor Magnetic Gradiometry system, equipped with very high-sensitivity sensors, can directly measure the magnetic field gradient tensor components. Fully understanding the quality of the measured data requires the extension of magnetic field models to gradient tensor components. In this study, we focus on extending the derivation of the magnetic field from the potential series to the magnetic field gradient tensor components, and apply the new theoretical framework to the International Geomagnetic Reference Field (IGRF) and the High Definition Geomagnetic Model (HDGM). The gradient tensor component maps produced for the entire Earth's surface for the IGRF show low values and smooth variations reflecting the core and mantle contributions, whereas those for the HDGM provide a novel tool to unravel crustal structure and deep-seated ore bodies. For example, the Thor Suture and the Sorgenfrei-Tornquist Zone in Europe are delineated by a strong northward gradient. Derived from an eigenvalue decomposition of the magnetic gradient tensor, the scaled magnetic moment, the normalized source strength (NSS) and the bearing of the lithospheric sources are presented. The NSS serves as a tool for estimating the lithosphere-asthenosphere boundary as well as the depth of plutons and ore bodies.
Furthermore changes in magnetization direction parallel to the mid-ocean ridges can be obtained from the scaled magnetic moment and the normalized source strength discriminates the boundaries between the anomalies of major continental provinces like southern Africa or the Eastern European Craton.
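One published definition of the normalized source strength (Beiki et al., 2012) builds it from the eigenvalues λ₁ ≥ λ₂ ≥ λ₃ of the symmetric, trace-free magnetic gradient tensor as NSS = √(−λ₂² − λ₁λ₃). The sketch below uses an illustrative tensor, not values derived from IGRF or HDGM coefficients, and should be read as an assumption-laden example of the eigenvalue step only.

```python
import numpy as np

def normalized_source_strength(tensor):
    """NSS = sqrt(-lam2^2 - lam1*lam3) with lam1 >= lam2 >= lam3,
    one common definition for a symmetric, trace-free gradient tensor."""
    lam = np.sort(np.linalg.eigvalsh(tensor))[::-1]   # descending eigenvalues
    return float(np.sqrt(max(-lam[1] ** 2 - lam[0] * lam[2], 0.0)))

# Illustrative symmetric, trace-free tensor (units e.g. nT/km)
G = np.diag([2.0, -1.0, -1.0])
nss = normalized_source_strength(G)
print(nss)   # → 1.0
```

Because the NSS is rotation-invariant and insensitive to magnetization direction, it is well suited to source-depth estimation, as the abstract describes.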

  16. Probabilistic Seismic Hazard Maps for Ecuador

    NASA Astrophysics Data System (ADS)

    Mariniere, J.; Beauval, C.; Yepes, H. A.; Laurence, A.; Nocquet, J. M.; Alvarado, A. P.; Baize, S.; Aguilar, J.; Singaucho, J. C.; Jomard, H.

    2017-12-01

    A probabilistic seismic hazard study is conducted for Ecuador, a country facing high seismic hazard both from megathrust subduction earthquakes and from shallow crustal moderate-to-large earthquakes. Building on the knowledge produced in recent years on historical seismicity, earthquake catalogs, active tectonics, geodynamics, and geodesy, several alternative earthquake recurrence models are developed. An area source model is first proposed, based on the seismogenic crustal and inslab sources defined in Yepes et al. (2016). A slightly different segmentation is proposed for the subduction interface with respect to Yepes et al. (2016). Three earthquake catalogs are used to account for the numerous uncertainties in the modeling of frequency-magnitude distributions. The hazard maps obtained highlight several source zones enclosing fault systems that exhibit low seismic activity, not representative of the geological and/or geodetic slip rates. Consequently, a fault model is derived, including faults with an earthquake recurrence model inferred from geological and/or geodetic slip rate estimates. The geodetic slip rates on the set of simplified faults are estimated from a GPS horizontal velocity field (Nocquet et al. 2014). Assumptions on the aseismic component of the deformation are required. Combining these alternative earthquake models in a logic tree, and using a set of selected ground-motion prediction equations adapted to Ecuador's different tectonic contexts, a mean hazard map is obtained. Hazard maps corresponding to the 16th and 84th percentiles are also derived, highlighting the zones where uncertainties on the hazard are highest.
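The frequency-magnitude distributions at the centre of such recurrence models are usually of the Gutenberg-Richter form, log₁₀ N(m) = a − b·m, where N(m) is the annual rate of events of magnitude ≥ m. The a and b values below are generic illustrations, not those of the Ecuador model:

```python
def annual_rate(m, a=4.0, b=1.0):
    """Annual number of earthquakes with magnitude >= m
    (Gutenberg-Richter, illustrative a and b values)."""
    return 10.0 ** (a - b * m)

rate_m5 = annual_rate(5.0)          # → 0.1 per year, one M>=5 per decade
recurrence_m7 = 1.0 / annual_rate(7.0)   # mean recurrence interval for M>=7, in years
print(rate_m5, recurrence_m7)
```

In a fault model, the a-value (overall rate) can be anchored to the geologic or geodetic moment rate implied by the slip-rate estimates, which is the step requiring assumptions about the aseismic fraction of deformation.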

  17. Earthquake Source Inversion Blindtest: Initial Results and Further Developments

    NASA Astrophysics Data System (ADS)

    Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.

    2007-12-01

    Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation, and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well-resolved, robust, and hence reliable source-rupture models are integral to better understanding earthquake source physics and to improving seismic hazard assessment. It is therefore timely to conduct a large-scale validation exercise comparing the methods, parameterization and data handling used in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally, we present new blind-test models with increasing source complexity and ambient noise on the synthetics.
The goal is to attract a large group of source modelers to join this source-inversion blindtest in order to conduct a large-scale validation exercise to rigorously asses the performance and reliability of current inversion methods and to discuss future developments.
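    The quantitative (dis-)similarity assessment mentioned above can be illustrated with a zero-lag normalized correlation between an input and an inverted slip map. The function name and the toy models below are our own illustrative constructions, not output of the blind test:

```python
import numpy as np

def slip_correlation(input_slip, inverted_slip):
    """Zero-lag normalized correlation between two fault-slip maps:
    1.0 means an identical spatial pattern, values near 0 no similarity."""
    a = np.asarray(input_slip, dtype=float) - np.mean(input_slip)
    b = np.asarray(inverted_slip, dtype=float) - np.mean(inverted_slip)
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

# Toy 10 x 20 fault grid: heterogeneous "input" slip and a noisy "inverted" model
rng = np.random.default_rng(42)
true_slip = rng.lognormal(sigma=1.0, size=(10, 20))
inverted = true_slip + rng.normal(scale=0.5, size=(10, 20))
score = slip_correlation(true_slip, inverted)
```

Applied across all submitted models, a matrix of such pairwise scores gives a simple picture of which inversions agree with the input and with each other.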

  18. Finite-frequency tomography using adjoint methods-Methodology and examples using membrane surface waves

    NASA Astrophysics Data System (ADS)

    Tape, Carl; Liu, Qinya; Tromp, Jeroen

    2007-03-01

    We employ adjoint methods in a series of synthetic seismic tomography experiments to recover surface wave phase-speed models of southern California. Our approach involves computing the Fréchet derivative for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a 2-D spectral-element method (SEM) and a phase-speed model for southern California. A `target' phase-speed model is used to generate the `data' at the receivers. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the remaining differences between data and synthetics are time-reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. An event kernel may be thought of as a weighted sum of phase-specific (e.g. P) banana-doughnut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, that is, the Fréchet derivative. A non-linear conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. We illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions and joint source-structure inversions. Finally, we draw connections between classical Hessian-based tomography and gradient-based adjoint tomography.
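    The kernel bookkeeping described above (event kernels summed into a misfit kernel, then multiplied by orthonormal basis functions to form the gradient) can be sketched with placeholder arrays; none of the numbers below come from an actual SEM run:

```python
import numpy as np

# Toy setup: three "event kernels" on an 8 x 8 phase-speed grid (random
# placeholders for the sensitivity kernels a wave simulation would produce)
rng = np.random.default_rng(1)
nx, ny = 8, 8
event_kernels = rng.normal(size=(3, nx, ny))

# The misfit kernel is simply the sum of the event kernels ...
misfit_kernel = event_kernels.sum(axis=0)

# ... and projecting it onto orthonormal basis functions yields the gradient
# of the misfit function (the Frechet derivative) in coefficient space
n_basis = 4
basis = np.linalg.qr(rng.normal(size=(nx * ny, n_basis)))[0]  # orthonormal columns
gradient = basis.T @ misfit_kernel.ravel()
```

The gradient in this reduced coefficient space is what a conjugate-gradient scheme would then use to update the phase-speed model.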

  19. Diverse Effects of Phytoestrogens on the Reproductive Performance: Cow as a Model

    PubMed Central

    Wocławek-Potocka, Izabela; Mannelli, Chiara; Boruszewska, Dorota; Kowalczyk-Zieba, Ilona; Waśniewski, Tomasz; Skarżyński, Dariusz J.

    2013-01-01

    Phytoestrogens, polyphenolic compounds derived from plants, are increasingly common constituents of human and animal diets. In most cases, these chemicals are much less potent than endogenous estrogens but exert their biological effects via similar mechanisms of action. The most common source of phytoestrogen exposure for humans as well as ruminants is soybean-derived foods, which are rich in the isoflavones genistein and daidzein; these are metabolized in the digestive tract to even more potent metabolites: para-ethyl-phenol and equol. Phytoestrogens have recently attracted considerable interest due to increasing evidence of their adverse effects on human and animal reproduction and to the growing number of people substituting plant-derived proteins for animal proteins. Moreover, soybean has become the main source of protein in animal fodder since the absolute prohibition of bone-meal use in animal feeding in Europe in 1995. The review describes how exposure to soybean-derived phytoestrogens can have adverse effects on reproductive performance in female adults. PMID:23710176

  20. Evaluation of the inverse dispersion modelling method for estimating ammonia multi-source emissions using low-cost long time averaging sensor

    NASA Astrophysics Data System (ADS)

    Loubet, Benjamin; Carozzi, Marco

    2015-04-01

    Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat for the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emissions from a homogeneous source of known geometry. When the emissions derive from different sources inside the measured footprint, the problem must be treated as a multi-source one. This work aims at assessing whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares of 25 m side) located close to each other, using low-cost NH3 measurements (diffusion samplers). To that end, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2 each) and a set of sensors placed at the centre of each field at several heights, as well as 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic model (WindTrax) and a Gaussian-like model (FIDES). The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion-sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of NH3 emissions to surface temperature. A combination of emission patterns (constant, linearly decreasing, exponentially decreasing and Gaussian) and strengths was used to evaluate the uncertainty of the inversion method. 
Each numerical experiment covered a period of 28 days. The meteorological dataset of the fluxnet FR-Gri site (Grignon, France) in 2008 was employed. Several sensor heights were tested, from 0.25 m to 2 m. The multi-source inverse problem was solved under several sampling and field-trial strategies: considering one or two heights over each field, considering the background concentration as known or unknown, and considering block repetitions in the field set-up (three repetitions). The inverse modelling approach proved suitable for discriminating large differences in NH3 emissions from small agronomic plots using integrating sensors. The method is sensitive to sensor height. The uncertainties and systematic biases are evaluated and discussed.
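    A minimal sketch of the multi-source inversion step, assuming a known background concentration and a known dispersion matrix (the values below are random placeholders standing in for WindTrax/FIDES output):

```python
import numpy as np

# Hypothetical dispersion matrix D[i, j]: concentration at sensor i per unit
# emission from field j, as a forward dispersion model would provide.
rng = np.random.default_rng(7)
n_sensors, n_sources = 12, 9               # e.g. 3 x 3 fields, sensors at centres
D = rng.uniform(0.1, 1.0, size=(n_sensors, n_sources))
S_true = np.array([5.0, 3.0, 1.0, 4.0, 2.0, 0.5, 6.0, 1.5, 2.5])  # emission strengths
C_bg = 0.8                                  # background concentration, assumed known
C_obs = D @ S_true + C_bg + rng.normal(scale=0.001, size=n_sensors)

# Multi-source inversion: least-squares solve for the nine source strengths
S_est, *_ = np.linalg.lstsq(D, C_obs - C_bg, rcond=None)
```

Treating the background as unknown simply adds one more column (of ones) to the matrix, which is one of the strategies compared in the study.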

  1. Estimating the susceptibility of surface water in Texas to nonpoint-source contamination by use of logistic regression modeling

    USGS Publications Warehouse

    Battaglin, William A.; Ulery, Randy L.; Winterstein, Thomas; Welborn, Toby

    2003-01-01

    In the State of Texas, surface water (streams, canals, and reservoirs) and ground water are used as sources of public water supply. Surface-water sources of public water supply are susceptible to contamination from point and nonpoint sources. To help protect sources of drinking water and to aid water managers in designing protective yet cost-effective and risk-mitigated monitoring strategies, the Texas Commission on Environmental Quality and the U.S. Geological Survey developed procedures to assess the susceptibility of public water-supply source waters in Texas to the occurrence of 227 contaminants. One component of the assessments is the determination of susceptibility of surface-water sources to nonpoint-source contamination. To accomplish this, water-quality data at 323 monitoring sites were matched with geographic information system-derived watershed-characteristic data for the watersheds upstream from the sites. Logistic regression models then were developed to estimate the probability that a particular contaminant will exceed a threshold concentration specified by the Texas Commission on Environmental Quality. Logistic regression models were developed for 63 of the 227 contaminants. Of the remaining contaminants, 106 were not modeled because monitoring data were available at less than 10 percent of the monitoring sites; 29 were not modeled because the contaminant was detected in less than 15 percent of the monitoring data; 27 were not modeled because of the lack of any monitoring data; and 2 were not modeled because threshold values were not specified.
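    The exceedance-probability modelling can be sketched with a small logistic regression fitted by gradient ascent; the single covariate and threshold below are illustrative toys, not the report's actual watershed characteristics:

```python
import numpy as np

def fit_logistic(X, y, n_iter=2000, lr=1.0):
    """Fit P(exceedance) = 1 / (1 + exp(-(b0 + X b))) by gradient ascent."""
    Xb = np.c_[np.ones(len(X)), X]          # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)   # gradient of the log-likelihood
    return w

def predict_prob(w, X):
    Xb = np.c_[np.ones(len(X)), X]
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Toy data: one watershed characteristic (say, fraction of agricultural land),
# with threshold exceedance more likely at high values
x = np.linspace(0, 1, 100).reshape(-1, 1)
y = ((x.ravel() + np.random.default_rng(3).normal(0, 0.15, 100)) > 0.5).astype(float)
w = fit_logistic(x, y)
```

The fitted model then converts a watershed's characteristics into an exceedance probability that can rank sites for monitoring priority.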

  2. A time reversal algorithm in acoustic media with Dirac measure approximations

    NASA Astrophysics Data System (ADS)

    Bretin, Élie; Lucas, Carine; Privat, Yannick

    2018-04-01

    This article is devoted to the study of a photoacoustic tomography model in which one considers the solution of the acoustic wave equation with a source term written as a separated-variables function of time and space, whose temporal component is in some sense close to the derivative of the Dirac distribution at t = 0. This models a continuous-wave laser illumination performed during a short interval of time. We introduce an algorithm for reconstructing the space component of the source term from measurements of the solution recorded by sensors during a time T all along the boundary of a connected bounded domain. It is based on the introduction of an auxiliary equivalent Cauchy problem, which allows us to derive an explicit reconstruction formula, followed by a deconvolution procedure. Numerical simulations illustrate our approach. Finally, the algorithm is also extended to elasticity wave systems.

  3. Comparison of different Kalman filter approaches in deriving time varying connectivity from EEG data.

    PubMed

    Ghumare, Eshwar; Schrooten, Maarten; Vandenberghe, Rik; Dupont, Patrick

    2015-08-01

    Kalman filter approaches are widely applied to derive time-varying effective connectivity from electroencephalographic (EEG) data. For multi-trial data, a classical Kalman filter (CKF), designed for the estimation of single-trial data, can be implemented by trial-averaging the data or by averaging single-trial estimates. A general linear Kalman filter (GLKF) provides an extension for multi-trial data. In this work, we studied the performance of the different Kalman filtering approaches for different values of signal-to-noise ratio (SNR), numbers of trials and numbers of EEG channels. We used a simulated model from which we calculated scalp recordings. From these recordings, we estimated cortical sources. Multivariate autoregressive model parameters and partial directed coherence were calculated for these estimated sources and compared with the ground truth. The results showed an overall superior performance of the GLKF except for low levels of SNR and low numbers of trials.
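    As a minimal illustration of the single-trial (CKF-style) idea, the following scalar Kalman filter tracks a time-varying AR(1) coefficient under a random-walk state model. The function and simulation are our toy construction, not the paper's multichannel implementation:

```python
import numpy as np

def track_ar1_coefficient(x, q=1e-3, r=1.0):
    """Scalar Kalman filter for a time-varying AR(1) coefficient a_t in
    x_t = a_t * x_{t-1} + noise, with a random-walk model for a_t."""
    a, P = 0.0, 1.0
    est = np.zeros(len(x))
    for t in range(1, len(x)):
        P += q                        # predict: random-walk state variance grows
        H = x[t - 1]                  # "observation matrix" is the previous sample
        K = P * H / (H * P * H + r)   # Kalman gain
        a += K * (x[t] - H * a)       # update with the innovation
        P *= (1.0 - K * H)
        est[t] = a
    return est

# Simulate a coefficient drifting from 0.2 to 0.8 over 2000 samples
rng = np.random.default_rng(5)
n = 2000
a_true = np.linspace(0.2, 0.8, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a_true[t] * x[t - 1] + rng.normal()
a_hat = track_ar1_coefficient(x)
```

In the multivariate EEG case the state vector holds all MVAR coefficients, and the GLKF stacks the trials into the observation equation instead of averaging them.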

  4. Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode

    NASA Astrophysics Data System (ADS)

    Seibert, P.; Frank, A.

    2003-04-01

    A method for the calculation of source-receptor (s-r) relationships (sensitivity of a trace substance concentration at some place and time to emission at some place and time) with Lagrangian particle models has been derived and presented previously (Air Pollution Modeling and its Application XIV, Proc. of ITM Boulder 2000). Now, the generalisation to any linear s-r relationship, including dry and wet deposition, decay etc., is presented. It was implemented in the model FLEXPART and tested extensively in idealised set-ups. These tests turned out to be very useful for finding minor model bugs and inaccuracies, and can be recommended generally for model testing. Recently, a convection scheme has been integrated in FLEXPART which was also tested. Both source and receptor can be specified in mass mixing ratio or mass units. Properly taking care of this is quite relevant for sources and receptors at different levels in the atmosphere. Furthermore, we present a test with the transport of aerosol-bound Caesium-137 from the areas contaminated by the Chernobyl disaster to Stockholm during one month.
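    Once computed, any linear source-receptor relationship reduces to a matrix-vector product; in the sketch below the sensitivity values are invented placeholders, not FLEXPART output:

```python
import numpy as np

# Hypothetical source-receptor sensitivity matrix M[i, j]: response of the
# mixing ratio at receptor i to a unit emission from source j.
M = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.5, 0.3]])          # 2 receptors, 3 sources
emissions = np.array([10.0, 4.0, 2.0])   # source strengths

# Any linear s-r relationship (deposition and decay folded into M) is then:
receptor_values = M @ emissions
```

A backward (receptor-oriented) run fills one row of M per receptor, whereas a forward run fills one column per source, which is why the backward mode pays off when receptors are few and sources are many.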

  5. Domain-Invariant Partial-Least-Squares Regression.

    PubMed

    Nikzad-Langerodi, Ramin; Zellinger, Werner; Lughofer, Edwin; Saminger-Platz, Susanne

    2018-05-11

    Multivariate calibration models often fail to extrapolate beyond the calibration samples because of changes associated with the instrumental response, environmental condition, or sample matrix. Most of the current methods used to adapt a source calibration model to a target domain exclusively apply to calibration transfer between similar analytical devices, while generic methods for calibration-model adaptation are largely missing. To fill this gap, we here introduce domain-invariant partial-least-squares (di-PLS) regression, which extends ordinary PLS by a domain regularizer in order to align the source and target distributions in the latent-variable space. We show that a domain-invariant weight vector can be derived in closed form, which allows the integration of (partially) labeled data from the source and target domains as well as entirely unlabeled data from the latter. We test our approach on a simulated data set where the aim is to desensitize a source calibration model to an unknown interfering agent in the target domain (i.e., unsupervised model adaptation). In addition, we demonstrate unsupervised, semisupervised, and supervised model adaptation by di-PLS on two real-world near-infrared (NIR) spectroscopic data sets.

  6. Evaluation of bias associated with capture maps derived from nonlinear groundwater flow models

    USGS Publications Warehouse

    Nadler, Cara; Allander, Kip K.; Pohll, Greg; Morway, Eric D.; Naranjo, Ramon C.; Huntington, Justin

    2018-01-01

    The impact of groundwater withdrawal on surface water is a concern of water users and water managers, particularly in the arid western United States. Capture maps are useful tools to spatially assess the impact of groundwater pumping on water sources (e.g., streamflow depletion) and are being used more frequently for conjunctive management of surface water and groundwater. Capture maps have been derived using linear groundwater flow models and rely on the principle of superposition to demonstrate the effects of pumping in various locations on resources of interest. However, nonlinear models are often necessary to simulate head-dependent boundary conditions and unconfined aquifers. Capture maps developed using nonlinear models with the principle of superposition may over- or underestimate capture magnitude and spatial extent. This paper presents new methods for generating capture difference maps, which assess spatial effects of model nonlinearity on capture fraction sensitivity to pumping rate, and for calculating the bias associated with capture maps. The sensitivity of capture map bias to selected parameters related to model design and conceptualization for the arid western United States is explored. This study finds that the simulation of stream continuity, pumping rates, stream incision, well proximity to capture sources, aquifer hydraulic conductivity, and groundwater evapotranspiration extinction depth substantially affect capture map bias. Capture difference maps demonstrate that regions with large capture fraction differences are indicative of greater potential capture map bias. Understanding both spatial and temporal bias in capture maps derived from nonlinear groundwater flow models improves their utility and defensibility as conjunctive-use management tools.

  7. Genome-Scale, Constraint-Based Modeling of Nitrogen Oxide Fluxes during Coculture of Nitrosomonas europaea and Nitrobacter winogradskyi

    PubMed Central

    Giguere, Andrew T.; Murthy, Ganti S.; Bottomley, Peter J.; Sayavedra-Soto, Luis A.

    2018-01-01

    ABSTRACT Nitrification, the aerobic oxidation of ammonia to nitrate via nitrite, emits nitrogen (N) oxide gases (NO, NO2, and N2O), which are potentially hazardous compounds that contribute to global warming. To better understand the dynamics of nitrification-derived N oxide production, we conducted culturing experiments and used an integrative genome-scale, constraint-based approach to model N oxide gas sources and sinks during complete nitrification in an aerobic coculture of two model nitrifying bacteria, the ammonia-oxidizing bacterium Nitrosomonas europaea and the nitrite-oxidizing bacterium Nitrobacter winogradskyi. The model includes biotic genome-scale metabolic models (iFC578 and iFC579) for each nitrifier and abiotic N oxide reactions. Modeling suggested both biotic and abiotic reactions are important sources and sinks of N oxides, particularly under microaerobic conditions predicted to occur in coculture. In particular, integrative modeling suggested that previous models might have underestimated gross NO production during nitrification due to not taking into account its rapid oxidation in both aqueous and gas phases. The integrative model may be found at https://github.com/chaplenf/microBiome-v2.1. IMPORTANCE Modern agriculture is sustained by application of inorganic nitrogen (N) fertilizer in the form of ammonium (NH4+). Up to 60% of NH4+-based fertilizer can be lost through leaching of nitrifier-derived nitrate (NO3−), and through the emission of N oxide gases (i.e., nitric oxide [NO], N dioxide [NO2], and nitrous oxide [N2O] gases), the latter being a potent greenhouse gas. Our approach to modeling of nitrification suggests that both biotic and abiotic mechanisms function as important sources and sinks of N oxides during microaerobic conditions and that previous models might have underestimated gross NO production during nitrification. PMID:29577088

  8. Genome-Scale, Constraint-Based Modeling of Nitrogen Oxide Fluxes during Coculture of Nitrosomonas europaea and Nitrobacter winogradskyi.

    PubMed

    Mellbye, Brett L; Giguere, Andrew T; Murthy, Ganti S; Bottomley, Peter J; Sayavedra-Soto, Luis A; Chaplen, Frank W R

    2018-01-01

    Nitrification, the aerobic oxidation of ammonia to nitrate via nitrite, emits nitrogen (N) oxide gases (NO, NO2, and N2O), which are potentially hazardous compounds that contribute to global warming. To better understand the dynamics of nitrification-derived N oxide production, we conducted culturing experiments and used an integrative genome-scale, constraint-based approach to model N oxide gas sources and sinks during complete nitrification in an aerobic coculture of two model nitrifying bacteria, the ammonia-oxidizing bacterium Nitrosomonas europaea and the nitrite-oxidizing bacterium Nitrobacter winogradskyi. The model includes biotic genome-scale metabolic models (iFC578 and iFC579) for each nitrifier and abiotic N oxide reactions. Modeling suggested both biotic and abiotic reactions are important sources and sinks of N oxides, particularly under microaerobic conditions predicted to occur in coculture. In particular, integrative modeling suggested that previous models might have underestimated gross NO production during nitrification due to not taking into account its rapid oxidation in both aqueous and gas phases. The integrative model may be found at https://github.com/chaplenf/microBiome-v2.1. IMPORTANCE Modern agriculture is sustained by application of inorganic nitrogen (N) fertilizer in the form of ammonium (NH4+). Up to 60% of NH4+-based fertilizer can be lost through leaching of nitrifier-derived nitrate (NO3-), and through the emission of N oxide gases (i.e., nitric oxide [NO], N dioxide [NO2], and nitrous oxide [N2O] gases), the latter being a potent greenhouse gas. Our approach to modeling of nitrification suggests that both biotic and abiotic mechanisms function as important sources and sinks of N oxides during microaerobic conditions and that previous models might have underestimated gross NO production during nitrification.

  9. Marine-Sourced Anti-Cancer and Cancer Pain Control Agents in Clinical and Late Preclinical Development †

    PubMed Central

    Newman, David J.; Cragg, Gordon M.

    2014-01-01

    The marine habitat has produced a significant number of very potent marine-derived agents that have the potential to inhibit the growth of human tumor cells in vitro and, in a number of cases, in both in vivo murine models and in humans. Although many agents have entered clinical trials in cancer, to date only Cytarabine, Yondelis® (ET743), Eribulin (a synthetic derivative based on the structure of halichondrin B), and Adcetris® (which carries the dolastatin 10 derivative monomethylauristatin E, MMAE or vedotin, as a warhead) have been approved for use in humans. In this review, we cover the compounds derived from marine sources that are currently in clinical trials against cancer. We include brief discussions of the approved agents, where they are in trials to extend their initial approved activity (a common practice once an agent is approved), an extensive discussion of the use of auristatin derivatives as warheads, and an area that has rarely been covered: the use of marine-derived agents to ameliorate the pain from cancers in humans and to act as adjuvants in immunological therapies. PMID:24424355

  10. Inference of emission rates from multiple sources using Bayesian probability theory.

    PubMed

    Yee, Eugene; Flesch, Thomas K

    2010-03-01

    The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
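    Under a Gaussian likelihood and a Gaussian prior, the posterior for the emission rates is available in closed form; this sketch conveys the inference idea with invented numbers, and is not the paper's exact formulation (which also treats dispersion-model error explicitly):

```python
import numpy as np

def posterior_emission_rates(A, c_obs, sigma_obs, sigma_prior):
    """Posterior mean and covariance of emission rates q in c = A q + noise,
    with iid Gaussian observation noise and a zero-mean Gaussian prior on q."""
    n = A.shape[1]
    precision = A.T @ A / sigma_obs**2 + np.eye(n) / sigma_prior**2
    cov = np.linalg.inv(precision)
    mean = cov @ (A.T @ c_obs) / sigma_obs**2
    return mean, cov

# Toy geometry: 8 concentration sensors, 4 area sources; A[i, j] is the modelled
# concentration at sensor i per unit emission rate from source j (placeholders)
rng = np.random.default_rng(11)
A = rng.uniform(0.05, 0.5, size=(8, 4))
q_true = np.array([2.0, 1.0, 3.0, 0.5])
c = A @ q_true + rng.normal(scale=0.01, size=8)
q_mean, q_cov = posterior_emission_rates(A, c, sigma_obs=0.01, sigma_prior=10.0)
```

Unlike a bare least-squares inversion, the posterior covariance quantifies how well each source is constrained, which is the practical payoff of the inferential viewpoint.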

  11. The sources of atmospheric black carbon at a European gateway to the Arctic

    PubMed Central

    Winiger, P; Andersson, A; Eckhardt, S; Stohl, A; Gustafsson, Ö.

    2016-01-01

    Black carbon (BC) aerosols from incomplete combustion of biomass and fossil fuel contribute to Arctic climate warming. Models—seeking to advise mitigation policy—are challenged in reproducing observations of seasonally varying BC concentrations in the Arctic air. Here we compare year-round observations of BC and its δ13C/Δ14C-diagnosed sources in Arctic Scandinavia, with tailored simulations from an atmospheric transport model. The model predictions for this European gateway to the Arctic are greatly improved when the emission inventory of anthropogenic sources is amended by satellite-derived estimates of BC emissions from fires. Both BC concentrations (R2=0.89, P<0.05) and source contributions (R2=0.77, P<0.05) are accurately mimicked and linked to predominantly European emissions. This improved model skill allows for more accurate assessment of sources and effects of BC in the Arctic, and a more credible scientific underpinning of policy efforts aimed at efficiently reducing BC emissions reaching the European Arctic. PMID:27627859

  12. Differentiation and characterization of human pluripotent stem cell-derived brain microvascular endothelial cells.

    PubMed

    Stebbins, Matthew J; Wilson, Hannah K; Canfield, Scott G; Qian, Tongcheng; Palecek, Sean P; Shusta, Eric V

    2016-05-15

    The blood-brain barrier (BBB) is a critical component of the central nervous system (CNS) that regulates the flux of material between the blood and the brain. Because of its barrier properties, the BBB creates a bottleneck to CNS drug delivery. Human in vitro BBB models offer a potential tool to screen pharmaceutical libraries for CNS penetration as well as for BBB modulators in development and disease, yet primary and immortalized models respectively lack scalability and robust phenotypes. Recently, in vitro BBB models derived from human pluripotent stem cells (hPSCs) have helped overcome these challenges by providing a scalable and renewable source of human brain microvascular endothelial cells (BMECs). We have demonstrated that hPSC-derived BMECs exhibit robust structural and functional characteristics reminiscent of the in vivo BBB. Here, we provide a detailed description of the methods required to differentiate and functionally characterize hPSC-derived BMECs to facilitate their widespread use in downstream applications. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Modeling of negative ion extraction from a magnetized plasma source: Derivation of scaling laws and description of the origins of aberrations in the ion beam

    NASA Astrophysics Data System (ADS)

    Fubiani, G.; Garrigues, L.; Boeuf, J. P.

    2018-02-01

    We model the extraction of negative ions from a high brightness high power magnetized negative ion source. The model is a Particle-In-Cell (PIC) algorithm with Monte-Carlo Collisions. The negative ions are generated only on the plasma grid surface (which separates the plasma from the electrostatic accelerator downstream). The scope of this work is to derive scaling laws for the negative ion beam properties versus the extraction voltage (potential of the first grid of the accelerator) and plasma density and investigate the origins of aberrations on the ion beam. We show that a given value of the negative ion beam perveance correlates rather well with the beam profile on the extraction grid independent of the simulated plasma density. Furthermore, the extracted beam current may be scaled to any value of the plasma density. The scaling factor must be derived numerically but the overall gain of computational cost compared to performing a PIC simulation at the real plasma density is significant. Aberrations appear for a meniscus curvature radius of the order of the radius of the grid aperture. These aberrations cannot be cancelled out by switching to a chamfered grid aperture (as in the case of positive ions).

  14. Derivation of the linear-logistic model and Cox's proportional hazard model from a canonical system description.

    PubMed

    Voit, E O; Knapp, R G

    1997-08-15

    The linear-logistic regression model and Cox's proportional hazard model are widely used in epidemiology. Their successful application leaves no doubt that they are accurate reflections of observed disease processes and their associated risks or incidence rates. In spite of their prominence, it is not a priori evident why these models work. This article presents a derivation of the two models from the framework of canonical modeling. It begins with a general description of the dynamics between risk sources and disease development, formulates this description in the canonical representation of an S-system, and shows how the linear-logistic model and Cox's proportional hazard model follow naturally from this representation. The article interprets the model parameters in terms of epidemiological concepts as well as in terms of general systems theory and explains the assumptions and limitations generally accepted in the application of these epidemiological models.

  15. An open source multivariate framework for n-tissue segmentation with evaluation on public data.

    PubMed

    Avants, Brian B; Tustison, Nicholas J; Wu, Jue; Cook, Philip A; Gee, James C

    2011-12-01

    We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs ( http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool.
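    The parametric finite-mixture option can be illustrated with a two-class EM segmentation of 1-D intensities. This is a standalone sketch of the idea, not Atropos' actual ITK implementation (no spatial priors or MRF term):

```python
import numpy as np

def em_two_class(intensities, n_iter=50):
    """EM for a two-class Gaussian mixture over voxel intensities."""
    x = np.asarray(intensities, dtype=float)
    mu = np.array([x.min(), x.max()])       # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior class responsibilities (shared constants cancel)
        lik = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update class weights, means and standard deviations
        w = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / w
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / w)
        pi = w / len(x)
    return resp.argmax(axis=1), mu

# Synthetic intensities for two well-separated tissue classes
rng = np.random.default_rng(2)
voxels = np.r_[rng.normal(30, 5, 500), rng.normal(80, 5, 500)]
labels, means = em_two_class(voxels)
```

Spatial prior probability maps and MRF smoothing enter this same E-step as additional multiplicative factors on the class likelihoods.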

  16. An Open Source Multivariate Framework for n-Tissue Segmentation with Evaluation on Public Data

    PubMed Central

    Tustison, Nicholas J.; Wu, Jue; Cook, Philip A.; Gee, James C.

    2012-01-01

    We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs (http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool. PMID:21373993

  17. Coincident Detection Significance in Multimessenger Astronomy

    NASA Astrophysics Data System (ADS)

    Ashton, G.; Burns, E.; Dal Canton, T.; Dent, T.; Eggenstein, H.-B.; Nielsen, A. B.; Prix, R.; Was, M.; Zhu, S. J.

    2018-06-01

    We derive a Bayesian criterion for assessing whether signals observed in two separate data sets originate from a common source. The Bayes factor for a common versus unrelated origin of signals includes an overlap integral of the posterior distributions over the common-source parameters. Focusing on multimessenger gravitational-wave astronomy, we apply the method to the spatial and temporal association of independent gravitational-wave and electromagnetic (or neutrino) observations. As an example, we consider the coincidence between the recently discovered gravitational-wave signal GW170817 from a binary neutron star merger and the gamma-ray burst GRB 170817A: we find that the common-source model is enormously favored over a model describing them as unrelated signals.
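    For a single common parameter with Gaussian posteriors from each data set and a flat prior, the posterior overlap integral entering the Bayes factor has a closed form; the numbers below are illustrative only, not the GW170817/GRB 170817A values:

```python
import numpy as np

def common_source_bayes_factor(mu1, sigma1, mu2, sigma2, prior_width):
    """B = prior_width * integral of p1(theta) p2(theta) dtheta, for Gaussian
    posteriors on one shared parameter and a flat prior of width prior_width."""
    s = np.hypot(sigma1, sigma2)
    overlap = np.exp(-0.5 * ((mu1 - mu2) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return prior_width * overlap

# Arrival times that agree within uncertainties, against a day-wide prior
B_coinc = common_source_bayes_factor(100.0, 1.0, 100.5, 1.0, prior_width=86400.0)
# Arrival times thousands of seconds apart
B_far = common_source_bayes_factor(100.0, 1.0, 5000.0, 1.0, prior_width=86400.0)
```

Consistent measurements concentrated in a small fraction of the prior volume drive the Bayes factor far above unity, while inconsistent ones suppress it to essentially zero.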

  18. Eolian Dust and the Origin of Sedimentary Chert

    USGS Publications Warehouse

    Cecil, C. Blaine

    2004-01-01

This paper proposes an alternative model for the primary source of silica contained in bedded sedimentary chert. The proposed model is derived from three principal observations: (1) eolian processes in warm-arid climates produce copious amounts of highly reactive fine-grained quartz particles (dust), (2) eolian processes in warm-arid climates export enormous quantities of quartzose dust to marine environments, and (3) bedded sedimentary cherts generally occur in marine strata that were deposited in warm-arid paleoclimates where dust was a potential source of silica. An empirical integration of these observations suggests that eolian dust is the most plausible primary and predominant source of silica for most bedded sedimentary cherts.

  19. Estimating nitrate emissions to surface water at regional and national scale: comparison of models using detailed regional and national-wide databases (France)

    NASA Astrophysics Data System (ADS)

    Dupas, R.; Gascuel-Odoux, C.; Durand, P.; Parnaudeau, V.

    2012-04-01

The European Union (EU) Water Framework Directive (WFD) requires River Basin District managers to carry out an analysis of nutrient pressures and impacts, in order to evaluate the risk of water bodies failing to reach "good ecological status" and to identify those catchments where prioritized nonpoint-source control measures should be implemented. A model has been developed to estimate nitrate nonpoint-source emissions to surface water, using readily available data in France. It was inspired by the US model SPARROW (Smith et al., 1997) and the European model GREEN (Grizzetti et al., 2008), i.e. statistical approaches that link nitrogen sources to catchment land and river characteristics. The N-nitrate load (L) at the outlet of a catchment is expressed as: L = R*(B*Lsgw + Ldgw + PS) - denitlake, where denitlake is a denitrification factor for lakes and reservoirs, Lsgw is the shallow groundwater discharge to streams (derived from the base flow index and N surplus, in kgN.ha-1.yr-1), Ldgw is the deep groundwater discharge to streams (derived from total runoff, the base flow index and deep groundwater N concentration), PS is point sources of domestic and industrial origin (kgN.ha-1.yr-1), and R and B are the river system and basin reduction factors, respectively. Besides calibrating and evaluating the model at the national scale, its predictive quality was compared with that of regionalized models in Brittany (western France) and in the Seine river basin (Paris basin), where detailed regional databases are available. The national-scale model proved to provide robust predictions in most conditions encountered in France, as it fitted observed N-nitrate loads with an efficiency of 0.69. Regionalization of the model reduced the standard error in the prediction of N-nitrate loads by about 19%. Hence, the development of regionalized models should be advocated only after the trade-off between improvement of fit and degradation of parameter estimation has come under scrutiny.
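
    The load equation above can be evaluated directly once its terms are known. The parameter values below are hypothetical placeholders, not fitted French values:

```python
def nitrate_load(Lsgw, Ldgw, PS, R, B, denit_lake):
    """N-nitrate load at the catchment outlet, L = R*(B*Lsgw + Ldgw + PS) - denitlake.

    All nitrogen fluxes are in kgN/ha/yr; R and B are dimensionless
    river-system and basin reduction factors.
    """
    return R * (B * Lsgw + Ldgw + PS) - denit_lake

# Illustrative (made-up) inputs for a single catchment
L = nitrate_load(Lsgw=25.0, Ldgw=8.0, PS=2.0, R=0.7, B=0.6, denit_lake=1.5)
```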

  20. A Markov model for blind image separation by a mean-field EM algorithm.

    PubMed

    Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele

    2006-02-01

This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have proved very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean-field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even when the noise is space-variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.

  1. Automatic Classification of Time-variable X-Ray Sources

    NASA Astrophysics Data System (ADS)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.

    2014-05-01

To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources, and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy on the training data is ~97% for a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest-derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7-500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
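
    The margin-based screening described above can be sketched independently of the classifier: given per-class probabilities from any ensemble, the classification margin is the gap between the top two class probabilities, and small margins flag ambiguous sources. The probabilities below are synthetic, not 2XMM results:

```python
import numpy as np

# Each row: per-class probabilities for one source (synthetic example values)
proba = np.array([
    [0.90, 0.05, 0.05],   # confidently classified source
    [0.40, 0.38, 0.22],   # ambiguous source -> anomaly candidate
    [0.70, 0.20, 0.10],
])

# Margin = highest minus second-highest class probability
top2 = np.sort(proba, axis=1)[:, -2:]
margin = top2[:, 1] - top2[:, 0]

# Flag sources whose margin falls below an (illustrative) threshold
anomalies = np.where(margin < 0.1)[0]
```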

  2. A Model Evaluation Data Set for the Tropical ARM Sites

    DOE Data Explorer

    Jakob, Christian

    2008-01-15

    This data set has been derived from various ARM and external data sources with the main aim of providing modelers easy access to quality controlled data for model evaluation. The data set contains highly aggregated (in time) data from a number of sources at the tropical ARM sites at Manus and Nauru. It spans the years of 1999 and 2000. The data set contains information on downward surface radiation; surface meteorology, including precipitation; atmospheric water vapor and cloud liquid water content; hydrometeor cover as a function of height; and cloud cover, cloud optical thickness and cloud top pressure information provided by the International Satellite Cloud Climatology Project (ISCCP).

  3. Chemical transport model simulations of organic aerosol in southern California: model evaluation and gasoline and diesel source contributions

    NASA Astrophysics Data System (ADS)

    Jathar, Shantanu H.; Woody, Matthew; Pye, Havala O. T.; Baker, Kirk R.; Robinson, Allen L.

    2017-03-01

Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate-volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., the POA-SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data. Mobile sources were predicted to contribute 30-40% of the OA in southern California (half of which was SOA), making them the single largest source contributor to OA in the region. The remainder of the OA was attributed to non-mobile anthropogenic sources (e.g., cooking, biomass burning), with biogenic sources contributing less than 5% of the total OA. Gasoline sources were predicted to contribute about 13 times more OA than diesel sources; this difference was driven by differences in SOA production. Model predictions highlighted the need to better constrain multi-generational oxidation reactions in chemical transport models.

  4. Constitutive law for seismicity rate based on rate and state friction: Dieterich 1994 revisited.

    NASA Astrophysics Data System (ADS)

    Heimisson, E. R.; Segall, P.

    2017-12-01

Dieterich [1994] derived a constitutive law for seismicity rate based on rate and state friction, which has been applied widely to aftershocks, earthquake triggering, and induced seismicity in various geological settings. Here, this influential work is revisited and re-derived in a more straightforward manner. By virtue of this new derivation the model is generalized to include changes in effective normal stress associated with background seismicity. Furthermore, the general case in which seismicity rate is not constant under a constant stressing rate is formulated. The new derivation directly provides practical integral expressions for the cumulative number of events and the rate of seismicity for an arbitrary stressing history. Arguably, the most prominent limitation of Dieterich's 1994 theory is the assumption that seismic sources do not interact. Here we derive a constitutive relationship that considers source interactions between sub-volumes of the crust, where the stress in each sub-volume is assumed constant. Interactions are considered both under constant stressing-rate conditions and for arbitrary stressing history. This theory can be used to model seismicity rate due to stress changes, or to estimate stress changes using observed seismicity from triggered earthquake swarms, where earthquake interactions and magnitudes are taken into account. We identify special conditions under which the influence of interactions cancels and the predictions reduce to those of Dieterich [1994]. This remarkable result may explain the apparent success of the model when applied to observations of triggered seismicity. This approach has application to understanding and modeling induced and triggered seismicity, and to the quantitative interpretation of geodetic and seismic data. It enables simultaneous modeling of geodetic and seismic data in a self-consistent framework. 
To date, physics-based modeling of seismicity, with or without geodetic data, has given insight into various processes related to aftershocks, volcano-tectonic (VT) seismicity, and injection-induced seismicity. However, the role of processes such as earthquake interactions, magnitudes, and effective normal stress has been unclear. The new theory presented here resolves some of the pertinent issues raised in the literature regarding application of the Dieterich 1994 model.
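
    Dieterich's (1994) original law, which the paper above generalizes, can be sketched numerically: the seismicity state variable γ evolves as dγ/dt = (1 - γ·Ṡ)/(Aσ), and the rate is R = r/(γ·Ṡr), where r is the background rate and Ṡr the reference stressing rate. A step stress increase ΔS maps γ to γ·exp(-ΔS/Aσ), producing an Omori-like aftershock decay. All parameter values below are illustrative only:

```python
import numpy as np

A_sigma = 0.1          # MPa, constitutive parameter A times normal stress
s_dot = 1e-3           # MPa/yr, constant background (reference) stressing rate
r = 1.0                # background seismicity rate (events/yr)
dt = 0.01              # yr, Euler time step

gamma = 1.0 / s_dot                 # steady-state initial condition (R = r)
gamma *= np.exp(-0.5 / A_sigma)     # instantaneous 0.5 MPa stress step

rates = []
for _ in range(10000):              # integrate 100 yr of relaxation
    rates.append(r / (gamma * s_dot))
    gamma += dt * (1.0 - gamma * s_dot) / A_sigma
rates = np.array(rates)
```

    The rate jumps by a factor exp(ΔS/Aσ) immediately after the step and relaxes back toward the background rate over the characteristic aftershock duration ta = Aσ/Ṡ (100 yr with these numbers).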

  5. Average stopping powers for electron and photon sources for radiobiological modeling and microdosimetric applications

    NASA Astrophysics Data System (ADS)

    Vassiliev, Oleg N.; Kry, Stephen F.; Grosshans, David R.; Mohan, Radhe

    2018-03-01

This study concerns calculation of the average electronic stopping power for photon and electron sources. It addresses two problems that have not yet been fully resolved. The first is defining the electron spectrum used for averaging in a way that is most suitable for radiobiological modeling. We define it as the spectrum of electrons entering the radiation-sensitive volume (SV) within the cell nucleus, at the moment they enter the SV. For this spectrum we derive a formula that linearly combines the fluence spectrum and the source spectrum, the latter being the distribution of initial energies of electrons produced by a source. Previous studies used either the fluence or the source spectrum, but not both, thereby neglecting part of the complete spectrum. Our derived formula reduces to these two prior methods in the cases of high- and low-energy sources, respectively. The second problem is extending electron spectra to low energies. Previous studies used an energy cut-off on the order of 1 keV. However, as we show, even for high-energy sources such as 60Co, electrons with energies below 1 keV contribute about 30% of the dose. In this study all the spectra were calculated with the Geant4-DNA code and a cut-off energy of only 11 eV. We present formulas for calculating frequency- and dose-averaged stopping powers, numerical results for several important electron and photon sources, and tables with all the data needed to apply our formulas to arbitrary electron and photon sources producing electrons with initial energies up to ∼1 MeV.
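
    The basic averaging operation is a fluence-weighted mean, S_avg = ∫ S(E) φ(E) dE / ∫ φ(E) dE. A toy numerical sketch follows; the spectrum and stopping-power curves below are invented for illustration and are not Geant4-DNA results or the paper's actual combined fluence/source spectrum:

```python
import numpy as np

E = np.linspace(0.1, 100.0, 1000)   # electron energy grid, keV (uniform spacing)
phi = np.exp(-E / 20.0)             # mock electron fluence spectrum
S = 10.0 / np.sqrt(E)               # mock stopping power curve, keV/um

# Frequency- (fluence-) averaged stopping power on a uniform grid:
# the grid spacing cancels in the ratio of Riemann sums
S_freq_avg = float(np.sum(S * phi) / np.sum(phi))
```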

  6. Magnetoacoustic Tomography with Magnetic Induction (MAT-MI) for Breast Tumor Imaging: Numerical Modeling and Simulation

    PubMed Central

    Zhou, Lian; Li, Xu; Zhu, Shanan; He, Bin

    2011-01-01

Magnetoacoustic tomography with magnetic induction (MAT-MI) was recently introduced as a noninvasive electrical conductivity imaging approach with high spatial resolution close to that of ultrasound imaging. In the present study, we test the feasibility of the MAT-MI method for breast tumor imaging using numerical modeling and computer simulation. Using the finite element method, we have built three-dimensional numerical breast models with a variety of embedded tumors for this simulation study. In order to obtain an accurate and stable forward solution free of the numerical errors caused by singular MAT-MI acoustic sources at conductivity boundaries, we first derive an integral forward method for calculating MAT-MI acoustic sources over the entire imaging volume. An inverse algorithm for reconstructing the MAT-MI acoustic source is also derived for a spherical measurement aperture, which simulates a practical setup for breast imaging. With the numerical breast models, we have conducted computer simulations under different imaging parameter setups, and all the results suggest that breast tumors with the large conductivity contrast to surrounding tissues reported in the literature may be readily detected in the reconstructed MAT-MI images. In addition, our simulations suggest that the sensitivity of imaging breast tumors using the presented MAT-MI setup depends more on the tumor location and the conductivity contrast between the tumor and its surrounding tissues than on the tumor size. PMID:21364262

  7. A boundary condition to the Khokhlov-Zabolotskaya equation for modeling strongly focused nonlinear ultrasound fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosnitskiy, P., E-mail: pavrosni@yandex.ru; Yuldashev, P., E-mail: petr@acs366.phys.msu.ru; Khokhlova, V., E-mail: vera@acs366.phys.msu.ru

    2015-10-28

An equivalent source model was proposed as a boundary condition to the nonlinear parabolic Khokhlov-Zabolotskaya (KZ) equation to simulate high intensity focused ultrasound (HIFU) fields generated by medical ultrasound transducers with the shape of a spherical shell. The boundary condition was set in the initial plane; the aperture, the focal distance, and the initial pressure of the source were chosen based on the best match of the axial pressure amplitude and phase distributions in the Rayleigh integral analytic solution for a spherical transducer and the linear parabolic approximation solution for the equivalent source. Analytic expressions for the equivalent source parameters were derived. It was shown that the proposed approach allowed us to transfer the boundary condition from the spherical surface to the plane and to achieve a very good match between the linear field solutions of the parabolic and full diffraction models even for highly focused sources with F-number less than unity. The proposed method can be further used to expand the capabilities of the KZ nonlinear parabolic equation for efficient modeling of HIFU fields generated by strongly focused sources.

  8. KINETICS OF LOW SOURCE REACTOR STARTUPS. PART I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurwitz, H. Jr.; MacMillan, D.B.; Smith, J.H.

    1962-06-01

Statistical fluctuations of neutron populations in reactors are analyzed by means of an approximate theoretical model. Development of the model is given in detail; also included are extensive numerical results derived from its application to systems with time-dependent reactivity, namely, a reactor during start-up. The special relationships of fluctuations to safety considerations are discussed. (auth)

  9. Amnion-derived stem cells: in quest of clinical applications

    PubMed Central

    2011-01-01

In the promising field of regenerative medicine, human perinatal stem cells are of great interest as potential stem cells with clinical applications. Perinatal stem cells can be isolated from normally discarded human placentae, which are an ideal cell source in terms of availability, fewer ethical concerns, less DNA damage, and so on. Numerous studies have demonstrated that some placenta-derived cells possess stem cell characteristics such as pluripotent differentiation ability, particularly amniotic epithelial (AE) cells. Term human amniotic epithelium contains a relatively large number of stem cell marker-positive cells as an adult stem cell source. In this review, we introduce a model theory of why so many AE cells possess stem cell characteristics. We also describe previous work concerning therapeutic applications, and discuss the pluripotency of AE cells and potential pitfalls for amnion-derived stem cell research. PMID:21596003

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Xiao-Feng; Yu, Yun-Wei; Dai, Zi-Gao, E-mail: yuyw@mail.ccnu.edu.cn

The similarity of the host galaxy of FRB 121102 with those of long gamma-ray bursts and Type I superluminous supernovae suggests that this fast radio burst (FRB) could be associated with a young magnetar. By assuming the FRB emission is produced within the magnetosphere, we derive a lower limit on the age of the magnetar, after which GHz emission is able to escape freely from the dense relativistic wind of the magnetar. Another lower limit is obtained by requiring the dispersion measure contributed by the electron/positron pair wind to be consistent with the observations of the host galaxy. Furthermore, we also derive some upper limits on the magnetar age with discussions on possible energy sources of the FRB emission and the recently discovered persistent radio counterpart. As a result, some constraints on model parameters are addressed by reconciling the lower limits with the possible upper limits that are derived under the assumption of a rotational energy source.

  11. The Chandra Source Catalog 2.0: Spectral Properties

    NASA Astrophysics Data System (ADS)

    McCollough, Michael L.; Siemiginowska, Aneta; Burke, Douglas; Nowak, Michael A.; Primini, Francis Anthony; Laurino, Omar; Nguyen, Dan T.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Paxson, Charles; Plummer, David A.; Rots, Arnold H.; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula; Chandra Source Catalog Team

    2018-01-01

The second release of the Chandra Source Catalog (CSC) contains all sources identified from sixteen years' worth of publicly accessible observations. The vast majority of these sources have been observed with the ACIS detector and have spectral information in the 0.5-7 keV energy range. Here we describe the methods used to automatically derive spectral properties for each source detected by the standard processing pipeline and included in the final CSC. Sources with a high signal-to-noise ratio (exceeding 150 net counts) were fit in Sherpa (the modeling and fitting application from the Chandra Interactive Analysis of Observations package) using wstat as the fit statistic and a Bayesian draws method to determine errors. Three models were fit to each source: absorbed power-law, blackbody, and bremsstrahlung emission. The fitted parameter values for these models were included in the catalog along with the calculated flux for each model. The CSC also provides source energy fluxes computed from the normalizations of predefined absorbed power-law, blackbody, bremsstrahlung, and APEC models needed to match the observed net X-ray counts. For sources that have been observed multiple times, a Bayesian Blocks analysis will have been performed (see the Primini et al. poster) and a joint fit of the aforementioned spectral models carried out on the most significant block. In addition, we provide access to data products for each source: a file with the source spectrum, the background spectrum, and the spectral response of the detector. Hardness ratios were calculated for each source between pairs of energy bands (soft, medium, and hard). This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
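
    One common convention for a hardness ratio between two energy bands is the fractional difference of net counts, which is bounded in [-1, 1]. A minimal sketch (the count values are invented, and the CSC's exact definition and error treatment may differ):

```python
def hardness_ratio(hard_counts, soft_counts):
    """HR = (H - S) / (H + S): -1 for a purely soft source, +1 for purely hard."""
    return (hard_counts - soft_counts) / (hard_counts + soft_counts)

# Illustrative net counts in a harder and a softer band
hr_hm = hardness_ratio(120.0, 300.0)
```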

  12. Scaling isotopic emissions and microbes across a permafrost thaw landscape

    NASA Astrophysics Data System (ADS)

    Varner, R. K.; Palace, M. W.; Saleska, S. R.; Bolduc, B.; Braswell, B. H., Jr.; Crill, P. M.; Chanton, J.; DelGreco, J.; Deng, J.; Frolking, S. E.; Herrick, C.; Hines, M. E.; Li, C.; McArthur, K. J.; McCalley, C. K.; Persson, A.; Roulet, N. T.; Torbick, N.; Tyson, G. W.; Rich, V. I.

    2017-12-01

High latitude peatlands are a significant source of atmospheric methane. This source is spatially and temporally heterogeneous, resulting in a wide range of emission estimates for the atmospheric budget. Increasing atmospheric temperatures are causing degradation of the underlying permafrost, creating changes in surface soil moisture, surface and sub-surface hydrological patterns, vegetation, and microbial communities, but the consequences for the rates and magnitudes of methane production and emission are poorly accounted for in global budgets. We combined field observations, multi-source remote sensing data, and biogeochemical modeling to predict methane dynamics, including the fraction derived from hydrogenotrophic versus acetoclastic microbial methanogenesis, across Stordalen mire, a heterogeneous discontinuous-permafrost wetland located in northernmost Sweden. Using the field-validated Wetland-DNDC biogeochemical model, we estimated mire-wide CH4 and δ13CH4 production and emissions for 2014, with input from field- and unmanned aerial system (UAS) image-derived vegetation maps, local climatology, and water table data from in situ and remotely sensed observations. Model-simulated methanogenic pathways correlate with sequence-based observations of methanogen community composition in samples collected from across the permafrost thaw landscape. This approach enables us to link belowground microbial community composition with emissions and indicates a potential for scaling across broad areas of the Arctic region.

  13. Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions

    NASA Technical Reports Server (NTRS)

    Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.

    2011-01-01

    A surrogate model methodology is described for predicting in real time the residual strength of flight structures with discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. A residual strength test of a metallic, integrally-stiffened panel is simulated to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data would, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.

  14. Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions

    NASA Technical Reports Server (NTRS)

    Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.

    2011-01-01

A surrogate model methodology is described for predicting, during flight, the residual strength of aircraft structures that sustain discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. Two ductile fracture simulations are presented to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data does, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.
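
    The surrogate idea in these two abstracts, training a small neural network on expensive fracture-simulation outputs so predictions become cheap at run time, can be sketched with a one-hidden-layer network in plain NumPy. The (damage size → residual strength) data below are synthetic stand-ins for finite element results, and the architecture is illustrative, not the authors':

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)[:, None]   # normalized damage-size parameter
y = 1.0 - 0.7 * x**2                     # mock normalized residual strength

# One hidden layer of 16 tanh units, trained by full-batch gradient descent
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 1.0, (16, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # gradients of the mean-squared-error loss
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
```

    Once trained, evaluating the network is a handful of matrix products, which is what makes real-time residual-strength prediction feasible compared with rerunning a 3D fracture simulation.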

  15. Modelling of piezoelectric actuator dynamics for active structural control

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.; Chung, Walter H.; Von Flotow, Andreas

    1990-01-01

    The paper models the effects of dynamic coupling between a structure and an electrical network through the piezoelectric effect. The coupled equations of motion of an arbitrary elastic structure with piezoelectric elements and passive electronics are derived. State space models are developed for three important cases: direct voltage driven electrodes, direct charge driven electrodes, and an indirect drive case where the piezoelectric electrodes are connected to an arbitrary electrical circuit with embedded voltage and current sources. The equations are applied to the case of a cantilevered beam with surface mounted piezoceramics and indirect voltage and current drive. The theoretical derivations are validated experimentally on an actively controlled cantilevered beam test article with indirect voltage drive.

  16. Scoping Calculations of Power Sources for Nuclear Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Difilippo, F. C.

    1994-01-01

    This technical memorandum describes models and calculational procedures to fully characterize the nuclear island of power sources for nuclear electric propulsion. Two computer codes were written: one for the gas-cooled NERVA derivative reactor and the other for liquid metal-cooled fuel pin reactors. These codes are going to be interfaced by NASA with the balance of plant in order to make scoping calculations for mission analysis.

  17. Limiting cases in relativistic field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitney, C.K.

    1988-05-01

    For nearly ninety years, electromagnetic fields caused by relativistically moving sources have been modeled according to formulas derived at the turn of the present century by Lienard and Wiechert. Recently, questions have started to surface about the Lienard-Wiechert derivation method, about all the subsequent modern rederivation methods, and about the results themselves. The present paper continues this critique. The field results in various idealized limiting cases are examined for plausibility and absurdities are revealed.

  18. Duty-cycle and energetics of remnant radio-loud AGN

    NASA Astrophysics Data System (ADS)

    Turner, Ross J.

    2018-05-01

Deriving the energetics of remnant and restarted active galactic nuclei (AGNs) is much more challenging than for active sources due to the complexity of accurately determining the time since the nucleus switched off. I resolve this problem using a new approach that combines spectral ageing and dynamical models to tightly constrain the energetics and duty cycles of dying sources. Fitting the shape of the integrated radio spectrum yields the fraction of the source age for which the nucleus was active; this, in addition to the flux density, source size, axis ratio, and properties of the host environment, provides a constraint on dynamical models describing the remnant radio source. This technique is used to derive the intrinsic properties of the well-studied remnant radio source B2 0924+30. This object is found to spend 50_{-12}^{+14} Myr in the active phase and a further 28_{-5}^{+6} Myr in the quiescent phase, to have a jet kinetic power of 3.6_{-1.7}^{+3.0}× 10^{37} W, and a lobe magnetic field strength below equipartition at the 8σ level. The integrated spectra of restarted and intermittent radio sources are found to yield a `steep-shallow' shape when the previous outburst occurred within 100 Myr. The duty cycle of B2 0924+30 is hence constrained to be δ < 0.15 by fitting the shortest time to the previous comparable outburst that does not appreciably modify the remnant spectrum. The time-averaged feedback energy imparted by AGNs into their host galaxy environments can in this manner be quantified.

  19. Comparisons of thermospheric density data sets and models

    NASA Astrophysics Data System (ADS)

    Doornbos, Eelco; van Helleputte, Tom; Emmert, John; Drob, Douglas; Bowman, Bruce R.; Pilinski, Marcin

During the past decade, continuous long-term data sets of thermospheric density have become available to researchers. These data sets have been derived from accelerometer measurements made by the CHAMP and GRACE satellites and from Space Surveillance Network (SSN) tracking data and related Two-Line Element (TLE) sets. These data have already resulted in a large number of publications on physical interpretation and improvement of empirical density modelling. This study compares four different density data sets and two empirical density models for the period 2002-2009. These data sources are the CHAMP (1) and GRACE (2) accelerometer measurements, the long-term database of densities derived from TLE data (3), the High Accuracy Satellite Drag Model (4) run by Air Force Space Command, calibrated using SSN data, and the NRLMSISE-00 (5) and Jacchia-Bowman 2008 (6) empirical models. In describing these data sets and models, specific attention is given to differences in the geometrical and aerodynamic satellite modelling applied in the conversion from drag to density measurements, which are main sources of density biases. The differences in temporal and spatial resolution of the density data sources are also described and taken into account. With these aspects in mind, statistics of density comparisons have been computed, both as a function of solar and geomagnetic activity levels and as a function of latitude and local solar time. These statistics give a detailed view of the relative accuracy of the different data sets and of the biases between them. The differences are analysed with the aim of providing rough error bars on the data and models and of pinpointing issues which could receive attention in future iterations of data processing algorithms and in future model development.

  20. Preliminary Results of the first European Source Apportionment intercomparison for Receptor and Chemical Transport Models

    NASA Astrophysics Data System (ADS)

    Belis, Claudio A.; Pernigotti, Denise; Pirovano, Guido

    2017-04-01

    Source Apportionment (SA) is the identification of ambient air pollution sources and the quantification of their contributions to pollution levels. This task can be accomplished using different approaches: chemical transport models and receptor models. Receptor models are derived from measurements and are therefore considered a reference for the urban background levels of primary sources. Chemical transport models estimate secondary (inorganic) pollutants better and can provide gridded results with high time resolution. Assessing the performance of SA model results is essential to guarantee reliable information on source contributions to be used for reporting to the Commission and in the development of pollution abatement strategies. This is the first intercomparison ever designed to test both receptor-oriented models (receptor models) and chemical transport models (source-oriented models) using a comprehensive method based on model quality indicators and pre-established criteria. The target pollutant of this exercise, organised in the frame of FAIRMODE WG 3, is PM10. Both receptor models and chemical transport models perform well when evaluated against their respective references. Both types of models estimate the yearly source contributions quite satisfactorily, while the estimation of source contributions at the daily level (time series) is more critical. Chemical transport models showed a tendency to underestimate the contribution of some single sources when compared to receptor models. For receptor models the most critical source category is industry, probably due to the variety of single sources with different characteristics that belong to this category. 
Dust is the most problematic source for chemical transport models, likely due to the poor information about this kind of source in the emission inventories, particularly concerning road dust re-suspension, and consequently the little detail about the chemical components of this source in the models. The sensitivity tests show that chemical transport models perform better with a detailed set of sources (14) than with a simplified one (only 8). It was also observed that enhanced vertical profiling can improve the estimation of specific sources, such as industry, under complex meteorological conditions, and that insufficient spatial resolution in urban areas can limit the ability of models to estimate the contribution of diffuse primary sources (e.g. traffic). Both families of models identify traffic and biomass burning as the first and second most contributing categories, respectively, to elemental carbon. The results of this study demonstrate that the source apportionment assessment methodology developed by the JRC is applicable to any kind of SA model. The same methodology is implemented in the on-line DeltaSA tool to support source apportionment model evaluation (http://source-apportionment.jrc.ec.europa.eu/).

  1. Mass loss from alpha Cyg /A2Ia/ derived from the profiles of low excitation Fe II lines

    NASA Technical Reports Server (NTRS)

    Hensberge, H.; De Loore, C.; Lamers, H. J. G. L. M.; Bruhweiler, F. C.

    1982-01-01

    The low-excitation Fe II lines in the spectral region 2000-3000 A are studied in the spectrum of alpha Cyg. The profiles of the resonance lines are described by four representative parameters, and a preliminary model is derived from the dependence of these parameters on theoretical line strength, taking into account the influence of blending photospheric lines in an overall and qualitative way. At least 11% of all iron in the wind is singly ionized, unless a non-thermal heating source enhances the Fe(++) fraction without destroying much Al(+). It is shown that the contribution of blending photospheric absorption lines to weaker P Cygni profiles has previously been largely underestimated. The mass loss rate corresponding to the model is derived; it is smaller by a factor of 500 than the rate derived from the infrared excess by Barlow and Cohen (1977).

  2. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    NASA Astrophysics Data System (ADS)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier; the results show the expected seasonal melt evolution trends, and the statistical relevance of the resulting fraction estimates is rigorously assessed. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
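The core of such a BMC mixing model can be illustrated with a minimal two-end-member sketch: end-member isotopic signatures are drawn from distributions encoding their uncertainty, the mass balance is solved for the mixing fraction, and physically impossible draws are rejected. All numbers below are hypothetical, chosen only to mimic a glacial-melt scenario; the published model handles multiple isotopes and sources.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical d18O end-members (mean, std), per mil -- illustrative only
a = rng.normal(-22.0, 1.0, n)   # e.g. snowmelt signature
b = rng.normal(-18.0, 0.8, n)   # e.g. glacial ice signature
sample = -19.5                  # measured mixture

f = (sample - b) / (a - b)      # two-component isotopic mass balance
f = f[(f >= 0.0) & (f <= 1.0)]  # reject physically impossible mixtures

print(f"snow fraction: {f.mean():.2f} +/- {f.std():.2f}")
```

The spread of the retained fractions directly quantifies the error induced by end-member variability, which is the advantage the abstract highlights over fixed-end-member approaches.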

  3. Automatic landslide detection from LiDAR DTM derivatives by geographic-object-based image analysis based on open-source software

    NASA Astrophysics Data System (ADS)

    Knevels, Raphael; Leopold, Philip; Petschko, Helene

    2017-04-01

    With high-resolution airborne Light Detection and Ranging (LiDAR) data more commonly available, many studies have been performed to exploit the detailed information on the earth's surface and to analyse its limitations. Specifically in the field of natural hazards, digital terrain models (DTM) have been used to map hazardous processes such as landslides, mainly by visual interpretation of LiDAR DTM derivatives. However, new approaches are striving towards automatic detection of landslides to speed up the process of generating landslide inventories. These studies usually use a combination of optical imagery and terrain data, and are designed in commercial software packages such as ESRI ArcGIS, Definiens eCognition, or MathWorks MATLAB. The objective of this study was to investigate the potential of open-source software for automatic landslide detection based only on high-resolution LiDAR DTM derivatives in a study area within the federal state of Burgenland, Austria. The study area is very prone to landslides, which have been mapped with different methodologies in recent years. The free development environment R was used to integrate open-source geographic information system (GIS) software such as SAGA (System for Automated Geoscientific Analyses), GRASS (Geographic Resources Analysis Support System), and TauDEM (Terrain Analysis Using Digital Elevation Models). The implemented geographic-object-based image analysis (GEOBIA) consisted of (1) derivation of land surface parameters, such as slope, surface roughness, curvature, or flow direction, (2) finding the optimal scale parameter by the use of an objective function, (3) multi-scale segmentation, (4) classification of landslide parts (main scarp, body, flanks) by k-means thresholding, (5) assessment of the classification performance using a pre-existing landslide inventory, and (6) post-processing analysis for further use in landslide inventories. 
The results of the developed open-source approach demonstrated good success rates in objectively detecting landslides in high-resolution topography data by GEOBIA.
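Two of the GEOBIA steps, derivation of land-surface parameters from a DTM and two-class k-means thresholding, can be sketched on a synthetic grid. The study itself chains R, SAGA, GRASS and TauDEM; this standalone Python sketch only illustrates the idea, with a made-up "scarp" band:

```python
import numpy as np

def surface_params(dtm, cell=1.0):
    """Land-surface parameters from a DTM: slope (degrees) and 3x3 roughness."""
    gy, gx = np.gradient(dtm, cell)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    p = np.pad(dtm, 1, mode="edge")
    win = np.stack([p[i:i + dtm.shape[0], j:j + dtm.shape[1]]
                    for i in range(3) for j in range(3)])
    return slope, win.std(axis=0)

def kmeans_mask(x, iters=25):
    """Two-class 1-D k-means; True marks cells in the high-valued cluster."""
    c = np.array([x.min(), x.max()], dtype=float)
    for _ in range(iters):
        assign = np.abs(x[..., None] - c).argmin(axis=-1)
        for k in (0, 1):
            if np.any(assign == k):
                c[k] = x[assign == k].mean()
    return assign == int(np.argmax(c))

# Synthetic DTM: a gentle plane with a steep band mimicking a scarp
y, x = np.mgrid[0:50, 0:50]
dtm = 0.05 * x.astype(float)
dtm[:, 20:25] += (x[:, 20:25] - 20) * 2.0

slope, rough = surface_params(dtm)
scarp = kmeans_mask(slope)
print("candidate scarp cells:", int(scarp.sum()))
```

In the real workflow these layers would feed multi-scale segmentation rather than a per-cell mask, and the classification would be validated against the landslide inventory.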

  4. The provenance and chemical variation of sandstones associated with the Mid-continent Rift System, U.S.A.

    USGS Publications Warehouse

    Cullers, R.L.; Berendsen, P.

    1998-01-01

    Sandstones along the northern portion of the Precambrian Mid-continent Rift System (MRS) have been petrographically and chemically analyzed for major elements and a variety of trace elements, including the REE. After the initial extrusion of the abundant basalts along the MRS, dominantly volcaniclastic sandstones of the Oronto Group were deposited. These volcaniclastic sandstones are covered by quartzose and subarkosic sandstones of the Bayfield Group. Thus the sandstones of the Oronto Group were derived from previously extruded basalts, whereas the sandstones of the Bayfield Group were derived from Precambrian granitic gneisses located on the rift flanks. The chemical variation of these sandstones closely reflects the changing detrital modes with time. The elemental composition of the sandstones confirms the source lithologies suggested by the mineralogy and clasts. The Oronto Group sandstones contain lower ratios of elements concentrated in silicic source rocks (La or Th) relative to elements concentrated in basic source rocks (Co, Cr, or Sc) than the Bayfield Group. Also, the average size of the negative Eu anomaly of the sandstones of the Oronto Group is significantly less (Eu/Eu* mean ± standard deviation = 0.79 ± 0.13) than that of the Bayfield Group (mean ± standard deviation = 0.57 ± 0.09), also suggesting a more basic source for the former than the latter. Mixing models of elemental ratios give added insight into the evolution of the rift. These models suggest that the volcaniclastic sandstones of the lower portion of the Oronto Group are derived from about 80 to 90 percent basalt and 10 to 20 percent granitoids. The rest of the Oronto Group and the lower to middle portion of the Bayfield Group could have formed by mixing of about 30 to 60 percent basalt and 40 to 70 percent granitoids. The upper portion of the Bayfield Group is likely derived from 80 to 100 percent granitoids and zero to 20 percent basalt.
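Two-component mixing models of this kind reduce to a linear mass balance on element concentrations, with the silicic/basic ratio (e.g. La/Sc) tracking the basalt fraction. The end-member concentrations below are illustrative placeholders, not the paper's data:

```python
# Hypothetical end-member concentrations in ppm (illustrative values only)
basalt  = {"La": 5.0,  "Sc": 35.0}
granite = {"La": 40.0, "Sc": 5.0}

def mix(f_basalt, elem):
    """Element concentration in a sediment from f_basalt basalt + granitoid rest."""
    return f_basalt * basalt[elem] + (1.0 - f_basalt) * granite[elem]

# La/Sc rises as the granitoid (silicic) share of the source grows
for f in (0.9, 0.5, 0.1):
    print(f"{f:.0%} basalt: La/Sc = {mix(f, 'La') / mix(f, 'Sc'):.2f}")
```

Comparing measured ratios against such mixing curves is how the basalt/granitoid percentages quoted above are constrained.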

  5. A Parametric Study of Fine-scale Turbulence Mixing Noise

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James; Freund, Jonathan B.

    2002-01-01

    The present paper is a study of aerodynamic noise spectra from model functions that describe the source. The study is motivated by the need to improve the spectral shape of the MGBK jet noise prediction methodology at high frequency. The predicted spectral shape usually appears less broadband than measurements and faster decaying at high frequency. Theoretical representation of the source is based on Lilley's equation. Numerical simulations of high-speed subsonic jets as well as some recent turbulence measurements reveal a number of interesting statistical properties of turbulence correlation functions that may have a bearing on radiated noise. These studies indicate that an exponential spatial function may be a more appropriate representation of a two-point correlation compared to its Gaussian counterpart. The effect of source non-compactness on spectral shape is discussed. It is shown that source non-compactness could well be the differentiating factor between the Gaussian and exponential model functions. In particular, the fall-off of the noise spectra at high frequency is studied and it is shown that a non-compact source with an exponential model function results in a broader spectrum and better agreement with data. An alternate source model that represents the source as a covariance of the convective derivative of fine-scale turbulence kinetic energy is also examined.

  6. A stable computation of log-derivatives from noisy drawdown data

    NASA Astrophysics Data System (ADS)

    Ramos, Gustavo; Carrera, Jesus; Gómez, Susana; Minutti, Carlos; Camacho, Rodolfo

    2017-09-01

    Pumping test interpretation is an art that involves dealing with noise from multiple sources and with conceptual model uncertainty. Interpretation is greatly helped by diagnostic plots, which include drawdown data and their derivative with respect to log-time, called the log-derivative. Log-derivatives are especially useful to complement geological understanding in identifying the underlying model of fluid flow, because they are sensitive to subtle variations in the response of aquifers and oil reservoirs to pumping. The main problem with their use lies in the calculation of the log-derivatives themselves, which may display fluctuations when data are noisy. To overcome this difficulty, we propose a variational regularization approach based on the minimization of a functional consisting of two terms: one ensuring that the computed log-derivatives honor the measurements, and another penalizing fluctuations. The minimization leads to a diffusion-like differential equation in the log-derivatives, with boundary conditions that are appropriate for well hydraulics (i.e., radial flow, wellbore storage, fractal behavior, etc.). We solve this equation by finite differences. We tested the methodology on two synthetic examples, showing that a robust solution is obtained. We also report the resulting log-derivative for a real case.
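A simplified version of the variational scheme can be sketched as follows: penalizing the squared slope of the log-derivative leads to a linear system (I + alpha*L) d = d_raw, with L a second-difference (diffusion-like) operator discretized by finite differences. This sketch uses natural end conditions rather than the well-hydraulics boundary conditions discussed in the abstract:

```python
import numpy as np

def smooth_log_derivative(t, s, alpha=0.5):
    """Regularized log-derivative ds/d(ln t) of drawdown s(t).

    Minimizes ||d - d_raw||^2 + alpha * ||d'||^2 over ln(t), giving the
    linear system (I + alpha * L) d = d_raw, with L the one-dimensional
    second-difference operator (finite differences).
    """
    x = np.log(t)
    d_raw = np.gradient(s, x)               # noisy pointwise log-derivative
    n = x.size
    h = np.diff(x).mean()
    L = np.zeros((n, n))
    for i in range(1, n - 1):               # interior rows; ends left natural
        L[i, i - 1:i + 2] = [-1.0, 2.0, -1.0]
    L /= h ** 2
    return np.linalg.solve(np.eye(n) + alpha * L, d_raw)

# Theis-like late-time drawdown s = ln(t) has an exact log-derivative of 1
rng = np.random.default_rng(1)
t = np.logspace(0, 3, 200)
s = np.log(t) + rng.normal(0.0, 0.02, t.size)
d = smooth_log_derivative(t, s)
print(f"mean log-derivative: {d[20:-20].mean():.2f}")
```

Even modest measurement noise makes the raw finite-difference log-derivative fluctuate wildly; the regularized solve recovers the flat plateau expected for radial flow.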

  7. The role of anthropogenic species in Biogenic aerosol formation

    EPA Science Inventory

    Isoprene is a widely recognized source of organic aerosol in the southeastern United States. Models have traditionally represented isoprene-derived aerosol as semivolatile species formed from the initial isoprene + OH reaction. Recent laboratory and field studies indicate later g...

  8. IDENTIFICATION AND COMPILATION OF UNSATURATED/VADOSE ZONE MODELS

    EPA Science Inventory

    Many ground-water contamination problems are derived from sources at or near the soil surface. Consequently, the physical and (bio-)chemical behavior of contaminants in the shallow subsurface is of critical importance to the development of protection and remediation strategies. M...

  9. A method to derive vegetation distribution maps for pollen dispersion models using birch as an example

    NASA Astrophysics Data System (ADS)

    Pauling, A.; Rotach, M. W.; Gehrig, R.; Clot, B.

    2012-09-01

    Detailed knowledge of the spatial distribution of sources is a crucial prerequisite for the application of pollen dispersion models such as COSMO-ART (COnsortium for Small-scale MOdeling - Aerosols and Reactive Trace gases). However, this input is not available for allergy-relevant species such as hazel, alder, birch, grass or ragweed. Hence, plant distribution datasets need to be derived from suitable sources. We present an approach to produce such a dataset from existing sources, using birch as an example. The basic idea is to construct a birch dataset using a region with good data coverage for calibration and then to extrapolate this relationship to a larger area by using land use classes. We use the Swiss forest inventory (1 km resolution) in combination with a 74-category land use dataset that also covers the non-forested areas of Switzerland (resolution 100 m). We then assign birch density categories of 0%, 0.1%, 0.5% and 2.5% to each of the 74 land use categories. The combination of this derived dataset with the birch distribution from the forest inventory yields a fairly accurate birch distribution encompassing all of Switzerland. The land use categories of the Global Land Cover 2000 (GLC2000; Global Land Cover 2000 database, 2003, European Commission, Joint Research Centre; resolution 1 km) are then calibrated with the Swiss dataset in order to derive a Europe-wide birch distribution dataset, which is aggregated onto the 7 km COSMO-ART grid. This procedure thus assumes that a given GLC2000 land use category has the same birch density wherever it occurs in Europe. To soften this strong assumption, the birch density distribution obtained from the previous steps is weighted using the mean Seasonal Pollen Index (SPI; yearly sums of daily pollen concentrations). For future improvement, region-specific birch densities for the GLC2000 categories could be integrated into the mapping procedure.
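The category-to-density assignment and the SPI weighting can be sketched in a few lines. The category names, density classes and reference SPI below are placeholders, not the actual GLC2000 classes or calibrated values:

```python
# Illustrative sketch: assign birch-density classes to land-use categories
# and rescale by a regional Seasonal Pollen Index (all numbers hypothetical).
density_by_class = {          # % birch cover per land-use category
    "broadleaf_forest": 2.5,
    "mixed_forest":     0.5,
    "shrubland":        0.1,
    "urban":            0.0,
}

def birch_density(landuse_grid, spi, spi_ref=1000.0):
    """Map land-use codes to birch densities, weighted by SPI / reference SPI."""
    w = spi / spi_ref
    return [[density_by_class[c] * w for c in row] for row in landuse_grid]

grid = [["broadleaf_forest", "urban"], ["mixed_forest", "shrubland"]]
print(birch_density(grid, spi=1500.0))
```

The SPI weight plays the role described in the abstract: a region with higher observed pollen load gets its nominal per-category densities scaled up, softening the "same density everywhere" assumption.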

  10. Spectral identification of a 90Sr source in the presence of masking nuclides using Maximum-Likelihood deconvolution

    NASA Astrophysics Data System (ADS)

    Neuer, Marcus J.

    2013-11-01

    A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β- detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model for a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources like 40K, 226Ra and 131I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
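The abstract does not spell out its Maximum-Likelihood deconvolution algorithm; a generic iterative ML scheme (EM / Richardson-Lucy type) for counting spectra with a response matrix gives the flavor. The response matrix and "nuclide" columns below are toy values, not a Geant4-derived response:

```python
import numpy as np

def ml_deconvolve(measured, R, iters=500):
    """Iterative maximum-likelihood (EM-type) deconvolution for counting spectra.

    Solves measured ~ R @ x with x >= 0, where R is the detector response
    matrix (one column per source nuclide). Multiplicative updates keep the
    estimate non-negative, as appropriate for Poisson-distributed counts.
    """
    x = np.ones(R.shape[1])
    col_sum = R.sum(axis=0)
    for _ in range(iters):
        model = np.maximum(R @ x, 1e-12)
        x *= (R.T @ (measured / model)) / col_sum
    return x

# Toy 3-channel spectrum mixing two "nuclide" response columns
R = np.array([[0.7, 0.1],
              [0.2, 0.3],
              [0.1, 0.6]])
true_activity = np.array([100.0, 50.0])
measured = R @ true_activity
est = ml_deconvolve(measured, R)
print(np.round(est, 1))
```

With a response column for 90Sr and columns for masking nuclides such as 40K or 226Ra, the recovered coefficients play the role of the per-nuclide contributions used in the identification step.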

  11. Investigation of Greenhouse Gas Emissions by Surface, Airborne, and Satellite on Local to Continental-Scale

    NASA Astrophysics Data System (ADS)

    Leifer, I.; Tratt, D. M.; Egland, E. T.; Gerilowski, K.; Vigil, S. A.; Buchwitz, M.; Krings, T.; Bovensmann, H.; Krautwurst, S.; Burrows, J. P.

    2013-12-01

    In situ meteorological observations, including 10-m winds (U), in conjunction with greenhouse gas (GHG - methane, carbon dioxide, water vapor) measurements by continuous wave Cavity Enhanced Absorption Spectroscopy (CEAS) were conducted onboard two specialized platforms: MACLab (Mobile Atmospheric Composition Laboratory in a RV) and AMOG Surveyor (AutoMObile Greenhouse gas) - a converted commuter automobile. AMOG Surveyor data were collected for numerous southern California sources including megacity, geology, fossil fuel industrial, animal husbandry, and landfill operations. MACLab investigated similar sources along with wetlands on a transcontinental scale from California to Florida to Nebraska covering more than 15,000 km. Custom software allowing real-time, multi-parameter data visualization (GHGs, water vapor, temperature, U, etc.) improved plume characterization and was applied to large urban area and regional-scale sources. The capabilities demonstrated permit calculation of source emission strength, as well as enable documenting microclimate variability. GHG transect data were compared with airborne HyperSpectral Imaging data to understand temporal and spatial variability and to ground-truth emission strength derived from airborne imagery. These data also were used to validate satellite GHG products from SCIAMACHY (2003-2005) and GOSAT (2009-2013) that are currently being analyzed to identify significant decadal-scale changes in North American GHG emission patterns resulting from changes in anthropogenic and natural sources. These studies lay the foundation for the joint ESA/NASA COMEX campaign that will map GHG plumes by remote sensing and in situ measurements for a range of strong sources to derive emission strength through inverse plume modeling. COMEX is in support of the future GHG monitoring satellites, such as CarbonSat and HyspIRI. 

  12. A hybrid probabilistic/spectral model of scalar mixing

    NASA Astrophysics Data System (ADS)

    Vaithianathan, T.; Collins, Lance

    2002-11-01

    In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentrations are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model in predicting even the closed chemical source terms rests heavily on the mixing model. We will present a new closure for mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent ``transfer'' while scalar exchanges between particles represent ``mixing.'' The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (EDQNM) theory. The model correctly predicts the evolution of an initial double-delta-function PDF into a Gaussian, as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts that the scalar gradient distribution (which is available in this representation) approaches log-normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.

  13. Planetary Sources for Reducing Sulfur Compounds for Cyanosulfidic Origins of Life Chemistry

    NASA Astrophysics Data System (ADS)

    Ranjan, S.; Todd, Z. R.; Sutherland, J.; Sasselov, D. D.

    2017-12-01

    A key challenge in origin-of-life studies is understanding the chemistry that led to the origin of the key biomolecules of life, such as the components of nucleic acids, sugars, lipids, and proteins. Prebiotic reaction networks based upon reductive homologation of nitriles (e.g., Patel et al. 2015) are building a tantalizing picture of sustained abiotic synthesis of activated ribonucleotides, amino acids and lipid precursors under environmental conditions thought to have been available on early Earth. Sulfidic anions in aqueous solution (e.g., HS-, HSO3-) under near-UV irradiation play important roles in these chemical pathways. However, the sources and availability of these anions on early Earth have not yet been quantitatively constrained. Here, we evaluate the potential for the atmosphere to serve as a source of sulfidic anions, via dissolution of volcanically outgassed SO2 and H2S into water reservoirs. We combine photochemical modeling from the literature (Hu et al. 2013) with equilibrium chemistry calculations to place constraints on the partial pressures of SO2 and H2S required to reach the elevated concentrations of sulfidic anions (≥1 μM) thought to be necessary for prebiotic chemistry. We find that micromolar levels of SO2-derived anions (HSO3-, SO3(2-)) are possible through simple exposure of aqueous reservoirs, like shallow lakes, to the atmosphere, assuming a total sulfur emission flux comparable to today's. Millimolar levels of these compounds are available during epochs of elevated volcanism, due to the elevated sulfur emission flux. Radiative transfer modeling suggests that atmospheric sulfur will not block the near-UV radiation also required for the cyanosulfidic chemistry. However, H2S-derived anions (e.g., HS-) reach only sub-micromolar levels from atmospheric sources, meaning that prebiotic chemistry invoking such molecules must rely on specialized, local sources. 
Prebiotic chemistry invoking SO2-derived anions may be considered more robust than chemistry invoking H2S-derived anions. In general, epochs of moderately high volcanism may have been especially conducive to cyanosulfidic prebiotic chemistry.

  14. Determination of X-ray flux using silicon pin diodes

    PubMed Central

    Owen, Robin L.; Holton, James M.; Schulze-Briese, Clemens; Garman, Elspeth F.

    2009-01-01

    Accurate measurement of photon flux from an X-ray source, a parameter required to calculate the dose absorbed by the sample, is not yet routinely available at macromolecular crystallography beamlines. The development of a model for determining the photon flux incident on pin diodes is described here, and has been tested on the macromolecular crystallography beamlines at both the Swiss Light Source, Villigen, Switzerland, and the Advanced Light Source, Berkeley, USA, at energies between 4 and 18 keV. These experiments have shown that a simple model based on energy deposition in silicon is sufficient for determining the flux incident on high-quality silicon pin diodes. The derivation and validation of this model is presented, and a web-based tool for the use of the macromolecular crystallography and wider synchrotron community is introduced. PMID:19240326
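A sketch of the energy-deposition idea: the measured photocurrent is converted to photon flux via the electron-hole pair-creation energy of silicon (about 3.66 eV per pair) and the fraction of photons absorbed in the diode. The attenuation-length approximation below is a crude assumed placeholder for the tabulated silicon attenuation data a real implementation (such as the web tool mentioned) would use:

```python
import math

ELEMENTARY_CHARGE = 1.602176634e-19   # C
W_PAIR_SI_EV = 3.66                   # eV per electron-hole pair in silicon

def photon_flux(current_A, energy_keV, si_thickness_um=300.0):
    """Photon flux (ph/s) estimated from the photocurrent of a silicon pin diode.

    Assumes every absorbed photon deposits its full energy. The absorbed
    fraction uses a rough E^3 scaling of the attenuation length around
    ~130 um at 10 keV; tabulated silicon attenuation coefficients should
    be substituted for quantitative work.
    """
    att_len_um = 130.0 * (energy_keV / 10.0) ** 3
    absorbed = 1.0 - math.exp(-si_thickness_um / att_len_um)
    pairs_per_photon = energy_keV * 1e3 / W_PAIR_SI_EV
    return current_A / (ELEMENTARY_CHARGE * pairs_per_photon * absorbed)

flux = photon_flux(current_A=1e-6, energy_keV=12.0)
print(f"flux: {flux:.2e} ph/s")
```

The flux obtained this way is the input needed for absorbed-dose calculations on the crystal, which is the motivation the abstract gives.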

  15. Non-line-of-sight ultraviolet link loss in noncoplanar geometry.

    PubMed

    Wang, Leijie; Xu, Zhengyuan; Sadler, Brian M

    2010-04-15

    Various path loss models have been developed for solar blind non-line-of-sight UV communication links under an assumption of coplanar source beam axis and receiver pointing direction. This work further extends an existing single-scattering coplanar analytical model to noncoplanar geometry. The model is derived as a function of geometric parameters and atmospheric characteristics. Its behavior is numerically studied in different noncoplanar geometric settings.

  16. Patient-derived xenografts as preclinical neuroblastoma models.

    PubMed

    Braekeveldt, Noémie; Bexell, Daniel

    2018-05-01

    The prognosis for children with high-risk neuroblastoma is often poor and survivors can suffer from severe side effects. Predictive preclinical models and novel therapeutic strategies for high-risk disease are therefore a clinical imperative. However, conventional cancer cell line-derived xenografts can deviate substantially from patient tumors in terms of their molecular and phenotypic features. Patient-derived xenografts (PDXs) recapitulate many biologically and clinically relevant features of human cancers. Importantly, PDXs can closely parallel clinical features and outcome and serve as excellent models for biomarker and preclinical drug development. Here, we review progress in and applications of neuroblastoma PDX models. Neuroblastoma orthotopic PDXs share the molecular characteristics, neuroblastoma markers, invasive properties and tumor stroma of aggressive patient tumors and retain spontaneous metastatic capacity to distant organs including bone marrow. The recent identification of genomic changes in relapsed neuroblastomas opens up opportunities to target treatment-resistant tumors in well-characterized neuroblastoma PDXs. We highlight and discuss the features and various sources of neuroblastoma PDXs, methodological considerations when establishing neuroblastoma PDXs, in vitro 3D models, current limitations of PDX models and their application to preclinical drug testing.

  17. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
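The essence of ACLS can be sketched with NumPy: estimate pure-component spectra by classical least squares, then augment the spectral model with the leading components of the calibration residuals so that unmodeled variation no longer biases prediction. This is a schematic reading of the patent's idea, not its exact procedure; all spectra and concentrations below are synthetic:

```python
import numpy as np

def acls_fit(Y, C, n_aug=1):
    """Augmented classical least squares (sketch).

    Y: (samples x channels) calibration spectra; C: (samples x components)
    known concentrations. CLS estimates pure-component spectra
    K = pinv(C) @ Y; ACLS appends the leading principal components of the
    CLS residuals, capturing unmodeled spectral variation.
    """
    K = np.linalg.pinv(C) @ Y                 # CLS pure-component estimates
    resid = Y - C @ K                         # unmodeled variation
    _, _, Vt = np.linalg.svd(resid, full_matrices=False)
    return np.vstack([K, Vt[:n_aug]])         # augmented spectral model

def acls_predict(y_new, K_aug, n_comp):
    """Predict concentrations for a new spectrum; discard augmentation scores."""
    coef, *_ = np.linalg.lstsq(K_aug.T, y_new, rcond=None)
    return coef[:n_comp]

# Toy example: 2 analytes plus an unmodeled interferent
rng = np.random.default_rng(0)
channels = np.linspace(0.0, 1.0, 50)
pure = np.vstack([np.exp(-((channels - m) / 0.08) ** 2) for m in (0.3, 0.7)])
interferent = np.exp(-((channels - 0.5) / 0.05) ** 2)
C = rng.uniform(0.1, 1.0, (20, 2))
Y = C @ pure + rng.uniform(0.0, 0.5, (20, 1)) * interferent

K_aug = acls_fit(Y, C, n_aug=1)
y_new = 0.4 * pure[0] + 0.6 * pure[1] + 0.3 * interferent
print(np.round(acls_predict(y_new, K_aug, n_comp=2), 2))
```

Because the interferent's shape is recovered from the residuals, the predicted analyte concentrations stay accurate even though the interferent concentrations were never supplied, which is the qualitative behavior the patent claims.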

  18. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  19. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  20. Assimilating satellite-based canopy height within an ecosystem model to estimate aboveground forest biomass

    NASA Astrophysics Data System (ADS)

    Joetzjer, E.; Pillet, M.; Ciais, P.; Barbier, N.; Chave, J.; Schlund, M.; Maignan, F.; Barichivich, J.; Luyssaert, S.; Hérault, B.; von Poncet, F.; Poulter, B.

    2017-07-01

    Despite advances in Earth observation and modeling, estimating tropical biomass remains a challenge. Recent work suggests that integrating satellite measurements of canopy height within ecosystem models is a promising approach to infer biomass. We tested the feasibility of this approach to retrieve aboveground biomass (AGB) at three tropical forest sites by assimilating remotely sensed canopy height, derived from a texture analysis algorithm applied to the high-resolution Pleiades imager, into the Organizing Carbon and Hydrology in Dynamic Ecosystems Canopy (ORCHIDEE-CAN) ecosystem model. While mean AGB could be estimated to within 10% of census-derived AGB on average across sites, the canopy height derived from the Pleiades product was spatially too smooth to accurately resolve the large height (and biomass) variations within each site. The error budget was evaluated in detail; systematic errors related to the ORCHIDEE-CAN structure contribute a secondary source of error and could be overcome by using improved allometric equations.

  1. Modeling the Capacitive Deionization Process in Dual-Porosity Electrodes

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2016-04-28

    In many areas of the world, there is a need to increase water availability. Capacitive deionization (CDI) is an electrochemical water treatment process that can be a viable alternative for treating water and for saving energy. A model is presented to simulate the CDI process in heterogeneous porous media comprising two different pore sizes. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without Faradaic reactions or specific adsorption of ions. A two-step volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. A one-equation model based on the principle of local equilibrium is derived, and the constraints determining its range of application are presented. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. The source terms that appear in the averaged equations are calculated using theoretical derivations, and the global diffusivity is calculated by solving the closure problem.

  2. Electromagnetic sinc Schell-model beams and their statistical properties.

    PubMed

    Mei, Zhangrong; Mao, Yonghua

    2014-09-22

    A class of electromagnetic sources with sinc Schell-model correlations is introduced. The conditions on the source parameters guaranteeing that the source generates a physical beam are derived. The evolution of the statistical properties of the electromagnetic stochastic beams generated by this new source, propagating in free space and in atmospheric turbulence, is investigated with the help of the weighted superposition method and numerical simulations. It is demonstrated that the intensity distributions of such beams exhibit unique features on propagation in free space, producing a double-layer flat-top profile that is shape-invariant in the far field. This feature makes the new beam particularly suitable for some special laser processing applications. The influence of atmospheric turbulence with a non-Kolmogorov power spectrum on the statistical properties of the new beams is analyzed in detail.
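A Schell-model cross-spectral density of the kind described — a spectral density modulated by a degree of coherence that depends only on the coordinate difference, here a sinc — can be constructed numerically. This is a generic scalar sketch with invented parameters, not the electromagnetic model of the paper:

```python
import numpy as np

x = np.linspace(-2e-3, 2e-3, 201)              # transverse coordinate (m), illustrative
S = np.exp(-x ** 2 / (2 * (0.5e-3) ** 2))      # Gaussian spectral density profile
delta = 0.2e-3                                 # coherence width, illustrative

X1, X2 = np.meshgrid(x, x, indexing="ij")
mu = np.sinc((X1 - X2) / delta)                # sinc degree of coherence; np.sinc(u) = sin(pi u)/(pi u)
W = np.sqrt(np.outer(S, S)) * mu               # Schell-model cross-spectral density W(x1, x2)
```

The diagonal of W recovers the spectral density S, and |mu| ≤ 1 everywhere, as a degree of coherence must satisfy; the genuineness conditions on the source parameters derived in the paper are a separate, stronger constraint not checked here.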

  3. Prestack reverse time migration for tilted transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Jang, Seonghyung; Hien, Doan Huy

    2013-04-01

    With growing interest in unconventional resource plays, anisotropy is naturally considered an important factor in improving seismic image quality. Although prestack depth migration of seismic reflection data is one of the most powerful tools for imaging complex geological structures, it may introduce migration errors if anisotropy is ignored. Asymptotic analysis of wave propagation in transversely isotropic (TI) media yields a dispersion relation for the coupled P- and SV-wave modes that can be converted to a fourth-order scalar partial differential equation (PDE). By setting the shear-wave velocity to zero, this fourth-order PDE, called the acoustic wave equation for TI media, can be reduced to a coupled system of second-order PDEs, which we solve by the finite-difference method (FDM). The resulting P-wavefield simulation is kinematically similar to an elastic, anisotropic wavefield simulation. We develop a prestack depth migration algorithm for tilted transversely isotropic (TTI) media using reverse time migration (RTM). RTM images the subsurface using the inner product of a source wavefield extrapolated forward in time and a receiver wavefield extrapolated backward in time. We show the subsurface image in TTI media using the inner product of the partial derivative wavefields with respect to the physical parameters and the observed data. Since computing the partial derivative wavefields is extremely expensive, we instead implemented the imaging condition as the zero-lag cross-correlation of a virtual source and the back-propagated wavefield. The virtual source is calculated directly by solving the anisotropic acoustic wave equation; the back-propagated wavefield is calculated using the shot gather as the source function in the same equation. In a numerical test on a simple geological model including a syncline and an anticline, prestack depth migration using TTI-RTM in weakly anisotropic media produces a subsurface image similar to the true geological model used to generate the shot gathers.
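The zero-lag cross-correlation imaging condition used above can be sketched compactly. The toy below uses synthetic wavefield arrays rather than TTI finite-difference simulations, and adds a source-illumination normalization, a common stabilization choice rather than necessarily the authors' exact implementation:

```python
import numpy as np

def zero_lag_image(src_wavefield, rcv_wavefield, eps=1e-12):
    """Zero-lag cross-correlation imaging condition for wavefields of shape (nt, nz, nx)."""
    image = np.sum(src_wavefield * rcv_wavefield, axis=0)   # sum over time samples
    illum = np.sum(src_wavefield ** 2, axis=0)              # source illumination
    return image / (illum + eps)                            # illumination-normalized image

# Sanity check: if the back-propagated field is a scaled copy of the source field,
# the normalized image recovers that constant "reflectivity" at every grid point.
rng = np.random.default_rng(1)
S = rng.normal(size=(100, 8, 8))
img = zero_lag_image(S, 0.5 * S)
```

In a real RTM workflow the two wavefields come from forward simulation of the source and reverse-time injection of the recorded shot gather; only the correlation step is shown here.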

  4. Regularized inversion of controlled source audio-frequency magnetotelluric data in horizontally layered transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Zhou, Jianmei; Wang, Jianxun; Shang, Qinglong; Wang, Hongnian; Yin, Changchun

    2014-04-01

    We present an algorithm for inverting controlled source audio-frequency magnetotelluric (CSAMT) data in horizontally layered transversely isotropic (TI) media. Popular inversion methods (e.g. Occam's inversion) parameterize the medium into a large number of fixed-thickness layers and reconstruct only the conductivities, which prevents recovery of the sharp interfaces between layers. In this paper, we simultaneously reconstruct all the model parameters, including both the horizontal and vertical conductivities and the layer depths. Applying the perturbation principle and the dyadic Green's function in TI media, we derive analytic expressions for the Fréchet derivatives of the CSAMT responses with respect to all the model parameters in the form of Sommerfeld integrals. A regularized iterative inversion method is established to simultaneously reconstruct all the model parameters. Numerical results show that including the depths of the layer interfaces in the inversion significantly improves the results: the algorithm not only reconstructs the sharp interfaces between layers but also recovers conductivities close to their true values.
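In outline, a regularized iterative inversion of this kind is a damped Gauss-Newton update built from the Fréchet derivatives. A generic sketch follows, with a toy forward model and analytic Jacobian standing in for the CSAMT responses and their Sommerfeld-integral derivatives; the regularization here is a simple Tikhonov damping, one of several possible choices:

```python
import numpy as np

def gauss_newton(forward, jacobian, d_obs, m0, lam=1e-2, n_iter=20):
    """Damped Gauss-Newton: m <- m + (J^T J + lam I)^{-1} J^T (d_obs - forward(m))."""
    m = m0.astype(float).copy()
    for _ in range(n_iter):
        r = d_obs - forward(m)
        J = jacobian(m)
        dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
        m += dm
    return m

# Toy two-parameter forward model with an analytic Jacobian (the analogue of
# the Fréchet derivatives derived in the paper).
def fwd(m):
    return np.array([m[0] * np.exp(-m[1]), m[0] + m[1], m[0] * m[1]])

def jac(m):
    return np.array([[np.exp(-m[1]), -m[0] * np.exp(-m[1])],
                     [1.0, 1.0],
                     [m[1], m[0]]])

m_true = np.array([2.0, 0.5])
m_est = gauss_newton(fwd, jac, fwd(m_true), np.array([1.0, 1.0]))
```

Note that the damping term stabilizes each step without biasing the converged answer: at a model that fits the data exactly, the residual (and hence the update) vanishes.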

  5. New England SPARROW Water-Quality Modeling to Assist with the Development of Total Maximum Daily Loads in the Connecticut River Basin

    NASA Astrophysics Data System (ADS)

    Moore, R. B.; Robinson, K. W.; Simcox, A. C.; Johnston, C. M.

    2002-05-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Environmental Protection Agency (USEPA) and the New England Interstate Water Pollution Control Commission (NEIWPCC), is currently preparing a water-quality model, called SPARROW, to assist in regional total maximum daily load (TMDL) studies in New England. A model is required to provide estimates of nutrient loads and confidence intervals at unmonitored stream reaches. SPARROW (Spatially Referenced Regressions on Watershed Attributes) is a spatially detailed, statistical model that uses regression equations to relate total phosphorus and nitrogen (nutrient) stream loads to pollution sources and watershed characteristics. These statistical relations are then used to predict nutrient loads in unmonitored streams. The New England SPARROW model is based on a hydrologic network of 42,000 stream reaches and associated watersheds. Point-source data are derived from USEPA's Permit Compliance System (PCS). Information about nonpoint sources is derived from data such as fertilizer use, livestock wastes, and atmospheric deposition. Watershed characteristics include land use, streamflow, time-of-travel, stream density, percent wetlands, slope of the land surface, and soil permeability. Preliminary SPARROW results are expected in Spring 2002. The New England SPARROW model is proposed for use in the TMDL determination for nutrients in the Connecticut River Basin, upstream of Connecticut. The model will be used to estimate nitrogen loads from each of the upstream states to Long Island Sound. It will provide estimates and confidence intervals of phosphorus and nitrogen loads, area-weighted yields of nutrients by watershed, sources of nutrients, and the downstream movement of nutrients. 
This information will be used to (1) understand ranges in nutrient levels in surface waters, (2) identify the environmental factors that affect nutrient levels in streams, (3) evaluate monitoring efforts for better determination of nutrient loads, and (4) evaluate management options for reducing nutrient loads to achieve water-quality goals.
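The SPARROW regression structure — reach-level loads expressed as source inputs scaled by estimated coefficients and attenuated by watershed characteristics — can be sketched on synthetic data. The variable names, the single attenuation factor, and the fixed attenuation coefficient below are illustrative simplifications of the actual model, which estimates the attenuation terms as well:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
point = rng.uniform(0, 5, n)     # point-source input per reach
fert = rng.uniform(0, 10, n)     # fertilizer input
atmo = rng.uniform(0, 2, n)      # atmospheric deposition
slope = rng.uniform(0, 1, n)     # land-surface slope (attenuation proxy)

# Synthetic "truth": load = (sum of scaled sources) * watershed attenuation.
true_beta = np.array([1.0, 0.4, 2.0])
load = (point * true_beta[0] + fert * true_beta[1] + atmo * true_beta[2]) * np.exp(-0.3 * slope)

# With the attenuation held fixed, the source coefficients follow from linear
# least squares on the attenuated source inputs.
X = np.column_stack([point, fert, atmo]) * np.exp(-0.3 * slope)[:, None]
beta_hat, *_ = np.linalg.lstsq(X, load, rcond=None)
```

The fitted coefficients can then be applied to the attenuated source inputs of unmonitored reaches to predict their loads, which is the use the abstract describes.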

  6. The derived population of luminous supersoft X-ray sources

    NASA Technical Reports Server (NTRS)

    Di Stefano, R.; Rappaport, S.

    1994-01-01

    The existence of a new class of astrophysical object, luminous supersoft X-ray sources, has been established through ROSAT satellite observations and analysis during the past approximately 3 yr. Because most of the radiation emitted by supersoft sources spans a range of wavelengths readily absorbed by interstellar gas, a substantial fraction of these sources may not be detectable with present satellite instrumentation. It is therefore important to derive a reliable estimate of the underlying population, based on the approximately 30 sources that have been observed to date. The work reported here combines the observational results with a theoretical analysis, to obtain an estimate of the total number of sources likely to be present in M31, the Magellanic Clouds, and in our own Galaxy. We find that in M31, where approximately 15 supersoft sources have been identified and roughly an equal number of sources are being investigated as supersoft candidates, there are likely to be approximately 2500 active supersoft sources at the present time. In our own Galaxy, where about four supersoft sources have been detected in the Galactic plane, there are likely to be approximately 1000 active sources. Similarly, with about six and about four (nonforeground) sources observed in the Large (LMC) and Small Magellanic Clouds (SMC), respectively, there should be approximately 30 supersoft sources in the LMC, and approximately 20 in the SMC. The likely uncertainties in the numbers quoted above, and the properties of observable sources relative to those of the total underlying population, are also derived in detail. These results can be scaled to estimate the numbers of supersoft sources likely to be present in other galaxies. The results reported here on the underlying population of supersoft X-ray sources are in good agreement with the results of a prior population synthesis study of the white dwarf accretor model for luminous supersoft X-ray sources. 
It should be emphasized, however, that the questions asked in these two investigations are distinct, that the approaches taken to answer these questions are largely independent, and that the findings of these two studies could in principle have been quite different.

  7. Origin of primitive ocean island basalts by crustal gabbro assimilation and multiple recharge of plume-derived melts

    NASA Astrophysics Data System (ADS)

    Borisova, Anastassia Y.; Bohrson, Wendy A.; Grégoire, Michel

    2017-07-01

    Chemical geodynamics relies on the paradigm that the isotopic composition of ocean island basalt (OIB) represents equilibrium with its primary mantle sources. However, the discovery of huge isotopic heterogeneity within olivine-hosted melt inclusions in primitive basalts from Kerguelen, Iceland, Hawaii and South Pacific Polynesia islands implies open-system behavior of OIBs, in which, during magma residence and transport, basaltic melts are contaminated by the surrounding lithosphere. To constrain the processes of crustal assimilation by OIBs, we employed the Magma Chamber Simulator (MCS), an energy-constrained thermodynamic model of recharge, assimilation and fractional crystallization. For a case study of the 21-19 Ma basaltic series, the most primitive series ever found among the Kerguelen OIBs, we performed sixty-seven simulations in the pressure range from 0.2 to 1.0 GPa using compositions of olivine-hosted melt inclusions as parental magmas, and metagabbro xenoliths from the Kerguelen Archipelago as wallrock. MCS modeling requires that the assimilant be anatectic crustal melt (P2O5 ≤ 0.4 wt.%) derived from the Kerguelen oceanic metagabbro wallrock. To best fit the phenocryst assemblage observed in the investigated basaltic series, recharge of relatively large masses of hydrous primitive basaltic melts (H2O = 2-3 wt.%; MgO = 7-10 wt.%) into a middle crustal chamber at 0.2 to 0.3 GPa is required. Our results thus highlight the important impact that crustal gabbro assimilation and mantle recharge can have on the geochemistry of mantle-derived olivine-phyric OIBs. The importance of crustal assimilation affecting primitive plume-derived basaltic melts underscores that isotopic and chemical equilibrium between ocean island basalts and associated deep plume mantle source(s) may be the exception rather than the rule.

  8. Accuracy-preserving source term quadrature for third-order edge-based discretization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Liu, Yi

    2017-09-01

    In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.

  9. Geochemical constraints on the spatial distribution of recycled oceanic crust in the mantle source of late Cenozoic basalts, Vietnam

    NASA Astrophysics Data System (ADS)

    Hoang, Thi Hong Anh; Choi, Sung Hi; Yu, Yongjae; Pham, Trung Hieu; Nguyen, Kim Hoang; Ryu, Jong-Sik

    2018-01-01

    This study presents a comprehensive analysis of the major and trace element, mineral, and Sr, Nd, Pb and Mg isotopic compositions of late Cenozoic intraplate basaltic rocks from central and southern Vietnam. The Sr, Nd, and Pb isotopic compositions of these basalts define a tight linear array between Indian mid-ocean-ridge basalt (MORB)-like mantle and enriched mantle type 2 (EM2) components. These basaltic rocks contain low concentrations of CaO (6.4-9.7 wt%) and have high Fe/Mn ratios (> 60) and FeO/CaO-3MgO/SiO2 values (> 0.54), similar to partial melts derived from pyroxenite/eclogite sources. This similarity is also supported by the composition of olivine within these samples, which contains low concentrations of Ca and high concentrations of Ni and shows high Fe/Mn ratios. The basaltic rocks have elevated Dy/Yb ratios that fall within the range of melts derived from garnet lherzolite material, although their Yb contents are much higher than those of modeled melts derived from only garnet lherzolite material and instead plot near the modeled composition of eclogite-derived melts. The Vietnamese basaltic rocks have lighter δ26Mg values (- 0.38 ± 0.06‰) than is expected for the normal mantle (- 0.25 ± 0.07‰), and these values decrease with decreasing Hf/Hf* and Ti/Ti* ratios, indicating that these basalts were derived from a source containing carbonate material. On primitive mantle-normalized multi-element variation diagrams, the central Vietnamese basalts are characterized by positive Sr, Eu, and Ba anomalies. These basalts also plot within the pelagic sediment field in Pb-Pb isotopic space. This suggests that the mantle source of the basalts contained both garnet peridotite and recycled oceanic crust. 
A systematic analysis of variations in geochemical composition in basalts from southern to central Vietnam indicates that the recycled oceanic crust (possibly the paleo-Pacific slab) source material contains varying proportions of gabbro, basalt, and sediment. The basalts from south-central Vietnam (12°N-14°N) may be dominated by the lowest portion of the residual slab that contains rutile-bearing plagioclase-rich gabbroic eclogite, whereas the uppermost portion of the recycled slab, including sediment and basaltic material with small amounts of gabbro, may be a major constituent of the source for the basalts within the central region of Vietnam (14°N-16°N). Finally, the southern region (10°N-12°N) contains basalts sourced mainly from recycled upper oceanic crust that is basalt-rich and contains little or no sediment.

  10. Testing the Accuracy of Data-driven MHD Simulations of Active Region Evolution and Eruption

    NASA Astrophysics Data System (ADS)

    Leake, J. E.; Linton, M.; Schuck, P. W.

    2017-12-01

    Models for the evolution of the solar coronal magnetic field are vital for understanding solar activity, yet the best measurements of the magnetic field lie at the photosphere, necessitating the recent development of coronal models that are "data-driven" at the photosphere. Using magnetohydrodynamic simulations of active region formation and our recently created validation framework, we investigate the sources of error in data-driven models that use surface measurements of the magnetic field, and derived MHD quantities, to model the coronal magnetic field. The primary sources of error in these studies are the temporal and spatial resolution of the surface measurements. We will discuss the implications of these studies for accurately modeling the build-up and release of coronal magnetic energy based on photospheric magnetic field observations.

  11. A Requirements-Driven Optimization Method for Acoustic Liners Using Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.; Lopes, Leonard V.

    2017-01-01

    More than ever, there is flexibility and freedom in acoustic liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum, but the characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. In a previous paper on this subject, a method for deriving the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground was described. A simple code-wrapping approach was used to evaluate a community noise objective function for an external optimizer, with gradients evaluated using a finite difference formula. The subject of this paper is an application of analytic derivatives that supply precise gradients to an optimization process. Analytic derivatives improve the efficiency and accuracy of gradient-based optimization methods and allow consideration of more design variables. In addition, the benefit of variable-impedance liners is explored using a multi-objective optimization.
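The trade between analytic and finite-difference gradients is easy to demonstrate on a toy objective: the finite-difference gradient costs one extra function evaluation per design variable and carries truncation error, while the analytic gradient is exact and drives the optimizer directly. The objective below is an illustrative stand-in, not the certification noise model:

```python
import numpy as np

def objective(x):
    """Smooth toy objective in the design variables x (illustrative only)."""
    return np.sum(x ** 2) + np.sin(x[0]) * np.cos(x[1])

def grad_analytic(x):
    """Exact gradient of the toy objective."""
    g = 2.0 * x
    g[0] += np.cos(x[0]) * np.cos(x[1])
    g[1] -= np.sin(x[0]) * np.sin(x[1])
    return g

def grad_fd(f, x, h=1e-6):
    """One-sided finite differences: n extra evaluations and O(h) truncation error."""
    g = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - f(x)) / h
    return g

x0 = np.array([0.8, -0.3])
fd_error = np.abs(grad_fd(objective, x0) - grad_analytic(x0)).max()

# Fixed-step gradient descent driven by the analytic gradient.
x = x0.copy()
for _ in range(500):
    x = x - 0.1 * grad_analytic(x)
```

A production optimizer would of course use a quasi-Newton or SQP method rather than fixed-step descent; the point is only that exact gradients remove the step-size tuning and truncation error inherent in finite differencing.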

  12. Updating the USGS seismic hazard maps for Alaska

    USGS Publications Warehouse

    Mueller, Charles; Briggs, Richard; Wesson, Robert L.; Petersen, Mark D.

    2015-01-01

    The U.S. Geological Survey makes probabilistic seismic hazard maps and engineering design maps for building codes, emergency planning, risk management, and many other applications. The methodology considers all known earthquake sources with their associated magnitude and rate distributions. Specific faults can be modeled if slip-rate or recurrence information is available. Otherwise, areal sources are developed from earthquake catalogs or GPS data. Sources are combined with ground-motion estimates to compute the hazard. The current maps for Alaska were developed in 2007, and included modeled sources for the Alaska-Aleutian megathrust, a few crustal faults, and areal seismicity sources. The megathrust was modeled as a segmented dipping plane with segmentation largely derived from the slip patches of past earthquakes. Some megathrust deformation is aseismic, so recurrence was estimated from seismic history rather than plate rates. Crustal faults included the Fairweather-Queen Charlotte system, the Denali–Totschunda system, the Castle Mountain fault, two faults on Kodiak Island, and the Transition fault, with recurrence estimated from geologic data. Areal seismicity sources were developed for Benioff-zone earthquakes and for crustal earthquakes not associated with modeled faults. We review the current state of knowledge in Alaska from a seismic-hazard perspective, in anticipation of future updates of the maps. Updated source models will consider revised seismicity catalogs, new information on crustal faults, new GPS data, and new thinking on megathrust recurrence, segmentation, and geometry. Revised ground-motion models will provide up-to-date shaking estimates for crustal earthquakes and subduction earthquakes in Alaska.
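In its simplest form, the hazard computation described — combining earthquake sources with ground-motion estimates — reduces to summing annual exceedance rates over sources and converting to a Poisson probability of exceedance. The rates, medians, and dispersions below are invented for illustration and do not represent the Alaska model:

```python
import math

def p_exceed_lognormal(a, median, sigma_ln):
    """P(ground motion > a) under a lognormal ground-motion model."""
    z = (math.log(a) - math.log(median)) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Illustrative sources: annual occurrence rate, median ground motion (g), log-std.
sources = [
    {"rate": 0.01, "median": 0.30, "sigma_ln": 0.6},   # megathrust-like source
    {"rate": 0.10, "median": 0.08, "sigma_ln": 0.7},   # crustal fault
    {"rate": 0.50, "median": 0.02, "sigma_ln": 0.7},   # areal seismicity
]

a = 0.1  # ground-motion level of interest (g)
lam = sum(s["rate"] * p_exceed_lognormal(a, s["median"], s["sigma_ln"]) for s in sources)
p50 = 1.0 - math.exp(-lam * 50.0)   # Poisson probability of exceedance in 50 years
```

Real PSHA integrates over full magnitude-distance distributions for each source rather than a single scenario per source, but the combination step has exactly this additive structure.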

  13. Evaluation of linear induction motor characteristics : the Yamamura model

    DOT National Transportation Integrated Search

    1975-04-30

    The Yamamura theory of the double-sided linear induction motor (LIM) excited by a constant current source is discussed in some detail. The report begins with a derivation of thrust and airgap power using the method of vector potentials and theorem of...

  14. Evaluation Of The Potential Of Gravity Anomalies From Satellite Altimetry By Merging With Gravity Data From Various Sources - Application To Coastal Areas

    NASA Astrophysics Data System (ADS)

    Fernandes, M. J.; Bastos, L.; Tomé, P.

    The region of the Azores archipelago is a natural laboratory for gravity field studies, due to its peculiar geodynamic and oceanographic features, related to rough structures in the gravity field. As a consequence, gravity data from various sources have been collected in the scope of various observation campaigns. The available data set comprises marine, airborne and satellite derived gravity anomalies. The satellite data have been derived by altimetric inversion of satellite altimeter data (Topex/Poseidon and ERS), to which processing methods tuned for optimal data recovery in coastal areas have been applied. Marine and airborne data along coincident profiles, some of them coincident with satellite tracks, were collected during an observation campaign that took place in the Azores in 1997, in the scope of the European Union project AGMASCO. In addition, gravity anomalies from an integrated GPS/INS system installed aboard an aircraft have also been computed from the position and navigation data collected during the AGMASCO campaign. This paper presents a comparison study between all available data sets. In particular, the improvement of the satellite derived anomalies near the shoreline is assessed with respect to existing satellite derived models and with the high resolution geopotential model GPM98. The impact of these data sets in the regional geoid improvement will also be presented.

  15. Near-earth asteroids - Possible sources from reflectance spectroscopy

    NASA Technical Reports Server (NTRS)

    Mcfadden, L. A.; Gaffey, M. J.; Mccord, T. B.

    1985-01-01

    The diversity of reflectance spectra among near-earth asteroids, compared with spectra of selected asteroids, planets, and satellites to determine possible source regions, indicates differing mineralogical compositions and, accordingly, more than one source region. Spectral signatures similar to those of main belt asteroids support models deriving some of these asteroids from the 5:2 Kirkwood gap and the Flora family by way of gravitational perturbations. The differences in composition found between near-earth asteroids and planetary and satellite surfaces are in keeping with theoretical arguments that such bodies should not be sources. While some near-earth asteroids furnish portions of the earth's meteorite flux, other sources must also contribute.

  16. Interactions of donor sources and media influence the histo-morphological quality of full-thickness skin models.

    PubMed

    Lange, Julia; Weil, Frederik; Riegler, Christoph; Groeber, Florian; Rebhan, Silke; Kurdyn, Szymon; Alb, Miriam; Kneitz, Hermann; Gelbrich, Götz; Walles, Heike; Mielke, Stephan

    2016-10-01

    Human artificial skin models are increasingly employed as non-animal test platforms for research and medical purposes. However, the overall histopathological quality of such models may vary significantly. Therefore, the effects of manufacturing protocols and donor sources on the quality of skin models built up from fibroblasts and keratinocytes derived from juvenile foreskins are studied. Histo-morphological parameters such as epidermal thickness, number of epidermal cell layers, dermal thickness, dermo-epidermal adhesion and absence of cellular nuclei in the corneal layer are obtained and scored accordingly. In total, 144 full-thickness skin models derived from 16 different donors, built up in triplicate using three different culture conditions, were successfully generated. In univariate analysis, both media and donor age affected the quality of skin models significantly. Both parameters remained statistically significant in multivariate analyses. Performing general linear model analyses, we could show that individual medium-donor interactions influence the quality. These observations suggest that the optimal choice of media may differ from donor to donor, and coincide with findings in which significant inter-individual variations of growth rates in keratinocytes and fibroblasts have been described. Thus, the consideration of individual medium-donor interactions may improve the overall quality of human organ models, thereby forming a reproducible test platform for sophisticated clinical research. Copyright © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Modeling Secondary Organic Aerosol Formation From Emissions of Combustion Sources

    NASA Astrophysics Data System (ADS)

    Jathar, Shantanu Hemant

    Atmospheric aerosols exert a large influence on the Earth's climate and cause adverse public health effects, reduced visibility and material degradation. Secondary organic aerosol (SOA), defined as the aerosol mass arising from the oxidation products of gas-phase organic species, accounts for a significant fraction of the submicron atmospheric aerosol mass. Yet, there are large uncertainties surrounding the sources, atmospheric evolution and properties of SOA. This thesis combines laboratory experiments, extensive data analysis and global modeling to investigate the contribution of semi-volatile and intermediate-volatility organic compounds (SVOC and IVOC) from combustion sources to SOA formation. The goals are to quantify the contribution of these emissions to ambient PM and to evaluate and improve models to simulate its formation. To create a database for model development and evaluation, a series of smog chamber experiments were conducted on evaporated fuels, which served as surrogates for real-world combustion emissions. Diesel formed the most SOA, followed by conventional jet fuel and jet fuel derived from natural gas, gasoline, and jet fuel derived from coal. The variability in SOA formation from actual combustion emissions can be partially explained by the composition of the fuel. Several models were developed and tested along with existing models using SOA data from smog chamber experiments conducted using evaporated fuels (this work: gasoline, Fischer-Tropsch fuel, jet fuel, diesel) and published data on dilute combustion emissions (aircraft, on- and off-road gasoline, on- and off-road diesel, wood burning, biomass burning). For all of the SOA data, existing models under-predicted SOA formation if SVOC/IVOC were not included. For the evaporated fuel experiments, when SVOC/IVOC were included, predictions using the existing SOA model were brought to within a factor of two of measurements with minor adjustments to model parameterizations. 
Further, a volatility-only model suggested that differences in the volatility of the precursors can explain most of the variability observed in SOA formation. For aircraft exhaust, previous methods to simulate SOA formation from SVOC and IVOC performed poorly. A more physically realistic modeling framework was developed, which was then used to show that SOA formation from aircraft exhaust was (a) higher for petroleum-based than for synthetically derived jet fuel and (b) higher at lower engine loads and lower at higher loads. All of the SOA data from combustion emissions experiments were used to determine source-specific parameterizations for modeling SOA formation from SVOC, IVOC and other unspeciated emissions. The new parameterizations were used to investigate their influence on the OA budget in the United States. Combustion sources were estimated to emit about 2.61 Tg yr-1 of SVOC, IVOC and other unspeciated organics (a sixth of the total anthropogenic organic emissions), which are predicted to double SOA production from combustion sources in the United States. The contribution of SVOC and IVOC emissions to global SOA formation was assessed using a global climate model. Simulations were performed using a modified version of the GISS GCM II-prime. The modified model predicted that SVOC and IVOC contributed half of the OA mass in the atmosphere. Their inclusion improved OA model-measurement comparisons for absolute concentrations, the POA-SOA split, and volatility (gas-particle partitioning) globally, suggesting that atmospheric models need to incorporate SOA formation from SVOC and IVOC if they are to reasonably predict the abundance and properties of aerosols. This thesis demonstrates that SVOC/IVOC and possibly other unspeciated organics emitted by combustion sources are very important precursors of SOA and potentially large contributors to the atmospheric aerosol mass. 
Models used for research and policy applications need to represent these precursors to improve predictions of aerosol effects on climate and health. The improved modeling frameworks developed in this dissertation are suitable for implementation into chemical transport models.
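    The absorptive-partitioning calculation at the core of volatility-basis-set SOA models can be sketched in a few lines. This is a generic illustration, not the thesis's parameterization; the bin concentrations and saturation concentrations below are hypothetical.

```python
# Volatility-basis-set partitioning sketch (generic, with hypothetical
# bin concentrations): xi_i = 1 / (1 + C*_i / C_OA), solved for the
# total organic aerosol mass C_OA by fixed-point iteration.

def partition(C, Cstar, seed=0.5, tol=1e-9, max_iter=1000):
    """Return (C_OA, xi): total OA mass (ug/m3) and particle fractions."""
    C_OA = seed + 0.5 * sum(C)                 # initial guess
    for _ in range(max_iter):
        xi = [1.0 / (1.0 + cs / C_OA) for cs in Cstar]
        new = seed + sum(x * c for x, c in zip(xi, C))
        if abs(new - C_OA) < tol:
            return new, xi
        C_OA = new
    return C_OA, xi

# Total (gas + particle) concentrations, ug/m3, in bins with saturation
# concentrations C* = 1, 10, 100, 1000 ug/m3 (all values hypothetical).
C_OA, xi = partition(C=[2.0, 4.0, 8.0, 16.0],
                     Cstar=[1.0, 10.0, 100.0, 1000.0])
print(C_OA, xi)
```

    Less volatile bins (smaller C*) end up with larger particle fractions, which is the behaviour the basis-set fits exploit.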

  18. Origin and Role of Recycled Crust in Flood Basalt Magmatism: Case Study of the Central East Greenland Rifted Margin

    NASA Astrophysics Data System (ADS)

    Brown, E.; Lesher, C. E.

    2015-12-01

    Continental flood basalts (CFB) are extreme manifestations of mantle melting derived from chemically/isotopically heterogeneous mantle. Much of this heterogeneity comes from lithospheric material recycled into the convecting mantle by a range of mechanisms (e.g. subduction, delamination). The abundance and petrogenetic origins of these lithologies thus provide important constraints on the geodynamical origins of CFB magmatism, and the timescales of lithospheric recycling in the mantle. Basalt geochemistry has long been used to constrain the compositions and mean ages of recycled lithologies in the mantle. Typically, this work assumes the isotopic compositions of the basalts are the same as their mantle source(s). However, because basalts are mixtures of melts derived from different sources (having different fusibilities) generated over ranges of P and T, their isotopic compositions only indirectly represent the isotopic compositions of their mantle sources[1]. Thus, relating basalt compositions to mantle source compositions requires information about the melting process itself. To investigate the nature of lithologic source heterogeneity while accounting for the effects of melting during CFB magmatism, we utilize the REEBOX PRO forward melting model[2], which simulates adiabatic decompression melting in lithologically heterogeneous mantle. We apply the model to constrain the origins and abundance of mantle heterogeneity associated with Paleogene flood basalts erupted during the rift-to-drift transition of Pangea breakup along the Central East Greenland rifted margin of the North Atlantic igneous province. We show that these basalts were derived by melting of a hot, lithologically heterogeneous source containing depleted, subduction-modified lithospheric mantle, and <10% recycled oceanic crust.
The Paleozoic mean age we calculate for this recycled crust is consistent with an origin in the region's prior subduction history, and with estimates for the mean age of recycled crust in the modern Iceland plume[3]. These results suggest that this lithospheric material was not recycled into the lower mantle before becoming entrained in the Iceland plume. [1] Rudge et al. (2013). GCA, 114, p112-143; [2] Brown & Lesher (2014). Nat. Geo., 7, p820-824; [3] Thirlwall et al. (2004). GCA, 68, p361-386

  19. Comparative evaluation of statistical and mechanistic models of Escherichia coli at beaches in southern Lake Michigan

    USGS Publications Warehouse

    Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.

    2016-01-01

    Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models, which were further improved using probability distributions to generate high-resolution time series data at the source, long-term “tracer” transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes, including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions, resulting in more accurate estimates of beach closures.
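    A parsimonious turbidity-based statistical model of the kind described can be sketched as a log-linear least-squares fit. The data and coefficients below are entirely synthetic, purely to illustrate the model form, not the study's fitted values.

```python
import numpy as np

# Log-linear statistical model sketch: log10(E. coli) ~ a + b*turbidity,
# fit by ordinary least squares. All data here are synthetic.
rng = np.random.default_rng(0)
turbidity = rng.uniform(1.0, 60.0, 200)                    # NTU, synthetic
log_ecoli = 1.2 + 0.03 * turbidity + rng.normal(0.0, 0.2, 200)

b, a = np.polyfit(turbidity, log_ecoli, 1)                 # slope, intercept
pred_cfu = 10 ** (a + b * 30.0)   # back-transformed prediction at 30 NTU
print(a, b, pred_cfu)
```

    Fitting on the log scale keeps the multiplicative error structure typical of bacteria counts; predictions are back-transformed for reporting.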

  20. Automatic classification of time-variable X-ray sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara

    2014-05-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy of the training data is ∼97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest-derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
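    The train/cross-validate/probabilistically-classify pipeline described can be sketched with a Random Forest on synthetic features (assuming scikit-learn is available). The data are stand-ins; the real 2XMMi-DR2 feature table is not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the training set (the real one has 873 sources,
# 7 classes, and time-series/spectral/multi-wavelength features).
X, y = make_classification(n_samples=900, n_features=12, n_informative=8,
                           n_classes=7, n_clusters_per_class=1,
                           random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)     # 10-fold cross-validation
clf.fit(X, y)
proba = clf.predict_proba(X[:5])               # probabilistic classifications
print(scores.mean(), proba.shape)
```

    The per-class probabilities returned by `predict_proba` are what allow a probabilistically classified catalog rather than hard labels.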

  1. Trophic coupling between two adjacent benthic food webs within a man-made intertidal area: A stable isotopes evidence

    NASA Astrophysics Data System (ADS)

    Schaal, Gauthier; Riera, Pascal; Leroux, Cédric

    2008-04-01

    This study aimed at establishing the effects of human-made physical modifications on the trophic structure and functioning of an intertidal benthic food web in Arcachon Bay (France). The main food sources and the most representative consumers were sampled on an artificial rocky dyke and its adjacent seagrass meadow. The food sources of consumers were inferred through the use of carbon and nitrogen stable isotopes. The contributions of the different food sources to the diets of the consumers were established using the IsoSource mixing model. In order to reduce the range of feasible contributions, additional non-isotopic constraints were added when necessary to the outputs of this model. We observed a more complex food web than previously shown for artificial habitats. Moreover, it appears that several consumers inhabiting the artificial environment base most of their diet on allochthonous eelgrass-derived detritus. In turn, several consumers inhabiting the eelgrass meadow consumed significant amounts of macroalgae-derived material originating from the adjacent artificial rocky area. These results point out that the food webs associated with adjacent habitats can influence each other through the utilisation of exported organic matter.
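    The IsoSource approach of enumerating feasible source contributions on a grid over the mixing simplex can be sketched as follows. The isotope signatures, consumer value, and tolerance below are hypothetical, not the Arcachon Bay measurements.

```python
# IsoSource-style enumeration of feasible source contributions for a
# two-isotope, three-source mixing model (all values hypothetical).
sources = {"eelgrass detritus": (-11.0, 7.0),   # (d13C, d15N), per mil
           "macroalgae":        (-17.0, 9.0),
           "SPOM":              (-22.0, 5.0)}
consumer = (-15.0, 7.6)
tol = 0.3                      # per-mil tolerance on the mass balance

names = list(sources)
feasible = []
for f1 in range(101):          # 1% grid over the mixing simplex
    for f2 in range(101 - f1):
        f3 = 100 - f1 - f2
        fr = (f1 / 100, f2 / 100, f3 / 100)
        mix = [sum(f * sources[n][k] for f, n in zip(fr, names))
               for k in (0, 1)]
        if all(abs(m - c) <= tol for m, c in zip(mix, consumer)):
            feasible.append(fr)

# Report the range of feasible contributions for each source
for i, n in enumerate(names):
    vals = [f[i] for f in feasible]
    print(n, min(vals), max(vals))
```

    The reported min-max ranges are what the non-isotopic constraints mentioned in the abstract would further narrow.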

  2. Speciation and formation of iodinated trihalomethane from microbially derived organic matter during the biological treatment of micro-polluted source water.

    PubMed

    Wei, Yuanyuan; Liu, Yan; Ma, Luming; Wang, Hongwu; Fan, Jinhong; Liu, Xiang; Dai, Rui-Hua

    2013-09-01

    Water sources are micro-polluted by the increasing range of anthropogenic activities around them. Disinfection byproduct (DBP) precursors in water have gradually expanded from humic acid (HA) and fulvic acid to other important sources of potential organic matter. This study aimed to provide further insights into the effects of microbially derived organic matter as precursors on iodinated trihalomethane (I-THM) speciation and formation during the biological treatment of micro-polluted source water. The occurrence of I-THMs in drinking water treated by biological processes was investigated. The results showed for the first time that CHCl2I and CHBrClI are emerging DBPs in China. Biological pre-treatment and biological activated carbon can increase levels of microbes, which could serve as DBP precursors. Chlorination experiments with bovine serum albumin (BSA), starch, HA, deoxyribonucleic acid (DNA), and fish oil confirmed the close correlation between the I-THM species identified during the treatment processes and those predicted from the model compounds. The effects of iodide and bromide on the I-THM speciation and formation were related to the biochemical composition of microbially derived organic precursors. Lipids produced up to 16.98 μg L-1 of CHCl2I at an initial iodide concentration of 2 mg L-1. HA and starch produced less CHCl2I at 3.88 and 3.54 μg L-1, respectively, followed by BSA (1.50 μg L-1) and DNA (1.35 μg L-1). Only fish oil produced I-THMs when iodide and bromide were both present in solution; the four other model compounds formed brominated species. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Formation of South Pole-Aitken Basin as the Result of an Oblique Impact: Implications for Melt Volume and Source of Exposed Materials

    NASA Technical Reports Server (NTRS)

    Petro, N. E.

    2012-01-01

    The South Pole-Aitken Basin (SPA) is the largest, deepest, and oldest identified basin on the Moon and contains surfaces that are unique due to their age, composition, and depth of origin in the lunar crust [1-3] (Figure 1). SPA has been a target of interest as an area for robotic sample return in order to determine the age of the basin and the composition and origin of its interior [3-6]. As part of the investigation into the origin of SPA materials there have been several efforts to estimate the likely provenance of regolith material in central SPA [5, 6]. These model estimates suggest that, despite the formation of basins and craters following SPA, the regolith within SPA is dominated by locally derived material. An assumption inherent in these models has been that the locally derived material is primarily SPA impact melt as opposed to local basement material (e.g. unmelted lower crust). However, the definitive identification of SPA-derived impact melt on the basin floor, either by remote sensing [2, 7] or via photogeology [8], is extremely difficult due to the number of subsequent impacts and volcanic activity [3, 4]. In order to identify where SPA-produced impact melt may be located, it is important to constrain both how much melt would have been produced in a basin-forming impact and the likely source of such melted material. Models of crater and basin formation [9, 10] present clear rationale for estimating the possible volumes and sources of impact melt produced during SPA formation. However, if SPA formed as the result of an oblique impact [11, 12], the volume and depth of origin of melted material could be distinct from similar material in a vertical impact [13].

  4. Skin-derived neural precursors competitively generate functional myelin in adult demyelinated mice

    PubMed Central

    Mozafari, Sabah; Laterza, Cecilia; Roussel, Delphine; Bachelin, Corinne; Marteyn, Antoine; Deboux, Cyrille; Martino, Gianvito; Evercooren, Anne Baron-Van

    2015-01-01

    Induced pluripotent stem cell-derived (iPS-derived) neural precursor cells may represent the ideal autologous cell source for cell-based therapy to promote remyelination and neuroprotection in myelin diseases. So far, the therapeutic potential of reprogrammed cells has been evaluated in neonatal demyelinating models. However, the repair efficacy and safety of these cells has not been well addressed in the demyelinated adult CNS, which has decreased cell plasticity and scarring. Moreover, it is not clear if these induced pluripotent stem cell-derived cells have the same reparative capacity as physiologically committed CNS-derived precursors. Here, we performed a side-by-side comparison of CNS-derived and skin-derived neural precursors in culture and following engraftment in murine models of adult spinal cord demyelination. Grafted induced neural precursors exhibited a high capacity for survival, safe integration, migration, and timely differentiation into mature bona fide oligodendrocytes. Moreover, grafted skin-derived neural precursors generated compact myelin around host axons and restored nodes of Ranvier and conduction velocity as efficiently as CNS-derived precursors while outcompeting endogenous cells. Together, these results provide important insights into the biology of reprogrammed cells in adult demyelinating conditions and support the use of these cells for regenerative biomedicine of myelin diseases that affect the adult CNS. PMID:26301815

  5. The X-ray Spectral Evolution of eta Carinae as Seen by ASCA

    NASA Technical Reports Server (NTRS)

    Corcoran, M. F.; Fredericks, A. C.; Petre, R.; Swank, J. H.; Drake, S. A.; White, Nicholas E. (Technical Monitor)

    2000-01-01

    Using data from the ASCA X-ray observatory, we examine the variations in the X-ray spectrum of the supermassive star eta Carinae with an unprecedented combination of spatial and spectral resolution. We include data taken during the recent X-ray eclipse in 1997-1998, after recovery from the eclipse, and during and after an X-ray flare. We show that the eclipse variation in the X-ray spectrum is apparently confined to a decrease in the emission measure of the source. We compare our results with a simple colliding wind binary model and find that the observed spectral variations are only consistent with the binary model if there is significant high-temperature emission far from the star and/or a substantial change in the temperature distribution of the hot plasma. If contamination in the 2-10 keV band is important, the observed eclipse spectrum requires an absorbing column in excess of 10(exp 24)/sq cm for consistency with the binary model, which may indicate an increase in the mass-loss rate from eta Carinae near the time of periastron passage. The flare spectra are consistent with the variability seen in nearly simultaneous RXTE observations and thus confirm that eta Carinae itself is the source of the flare emission. The variation in the spectrum during the flare seems confined to a change in the source emission measure. By comparing 2 observations obtained at the same phase in different X-ray cycles, we find that the current X-ray brightness of the source is slightly higher than the brightness of the source during the last cycle, perhaps indicative of a long-term increase in the mass-loss rate not associated with the X-ray cycle.

  6. A new DG nanoscale TFET based on MOSFETs by using source gate electrode: 2D simulation and an analytical potential model

    NASA Astrophysics Data System (ADS)

    Ramezani, Zeinab; Orouji, Ali A.

    2017-08-01

    This paper proposes and investigates a double-gate (DG) MOSFET that emulates a tunnel field-effect transistor (M-TFET) through work-function engineering. In the proposed structure, in addition to the main gate, another gate is placed over the source region with zero applied voltage and a proper work function to convert the source region from N+ to P+. We examine the impact of varying the source-gate work function and the source doping on the device parameters. The simulation results indicate that the M-TFET is well suited to switching applications. We also present a two-dimensional analytical potential model of the proposed structure, obtained by solving Poisson's equation in the x and y directions; the electric field is then obtained by differentiating the potential profile. To validate the model, the analytical results are compared against simulations with the SILVACO ATLAS device simulator.
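    The potential-then-field step described above (solve Poisson's/Laplace's equation in x and y, then differentiate the potential) can be sketched with a finite-difference relaxation. The geometry, electrode layout, and voltages below are illustrative, not the M-TFET parameters.

```python
import numpy as np

# 2-D Laplace sketch: fixed (Dirichlet) electrode potentials on the top
# boundary, grounded bottom and sides; Jacobi relaxation for phi, then
# E = -grad(phi) by differencing. Geometry/voltages are hypothetical.
nx, nz = 60, 30
phi = np.zeros((nz, nx))
phi[0, :nx // 2] = 1.0       # "main gate" electrode at 1 V (top, left half)
phi[0, nx // 2:] = 0.0       # "source gate" electrode at 0 V (top, right)
phi[-1, :] = 0.0             # grounded substrate; side columns stay at 0

for _ in range(5000):        # Jacobi iterations (RHS evaluated first)
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:])

Ez, Ex = np.gradient(-phi)   # electric field components, grid units
print(phi[nz // 2, nx // 4], np.hypot(Ex, Ez).max())
```

    The analytical model in the paper does this in closed form; the relaxation above just illustrates the same potential-to-field chain numerically.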

  7. Modeling the Rock Glacier Cycle

    NASA Astrophysics Data System (ADS)

    Anderson, R. S.; Anderson, L. S.

    2016-12-01

    Rock glaciers are common in many mountain ranges in which the ELA lies above the peaks. They represent some of the most identifiable components of today's cryosphere in these settings. Their oversteepened snouts pose often-overlooked hazards to travel in alpine terrain. Rock glaciers are supported by avalanches and by rockfall from steep headwalls. The winter's avalanche cone must be sufficiently thick not to melt entirely in the summer. The spatial distribution of rock glaciers reflects this dependence on avalanche sources; they are most common on lee sides of ridges where wind-blown snow augments the avalanche source. In the absence of rockfall, this would support a short, cirque glacier. Depending on the relationship between rockfall and avalanche patterns, "talus-derived" and "glacier-derived" rock glaciers are possible. Talus-derived: If the spatial distribution of rock delivery is similar to the avalanche pattern, the rock-ice mixture will travel an englacial path that is downward through the short accumulation zone before turning upward in the ablation zone. Advected debris is then delivered to the base of a growing surface debris layer that reduces the ice melt rate. The physics is identical to the debris-covered glacier case. Glacier-derived: If on the other hand rockfall from the headwall rolls beyond the avalanche cone, it is added directly to the ablation zone of the glacier. The avalanche accumulation zone then supports a pure ice core to the rock glacier. We have developed numerical models designed to capture the full range of glacier to debris-covered glacier to rock glacier behavior. The hundreds of meter lengths, tens of meters thicknesses, and meter per year speeds of rock glaciers are well described by the models. The model can capture both "talus-derived" and "glacier-derived" rock glaciers. We explore the dependence of glacier behavior on climate histories. 
As climate warms, a pure ice debris-covered glacier can transform to a much shorter rock glacier, leaving in its wake a thinning ice-cored moraine. Rock glaciers have much longer response times to climate change than their pure ice cousins.

  8. Scattering and Delay, Scale, and Sum Migration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, S K

    How do we see? What is the mechanism? Consider standing in an open field on a clear sunny day. In the field are a yellow dog and a blue ball. From a wave-based remote sensing point of view the sun is a source of radiation. It is a broadband electromagnetic source of which, for the purposes of this introduction, only the visible spectrum is considered (approximately 390 to 750 nanometers or 400 to 769 TeraHertz). The source emits an incident field into the known background environment which, for this example, is free space. The incident field propagates until it strikes an object or target, either the yellow dog or the blue ball. The interaction of the incident field with an object results in a scattered field. The scattered field arises from a mis-match between the background refractive index, considered to be unity, and the scattering object refractive index ('yellow' for the case of the dog, and 'blue' for the ball). This is also known as an impedance mis-match. The scattering objects are referred to as secondary sources of radiation, that radiation being the scattered field which propagates until it is measured by the two receivers known as 'eyes'. The eyes focus the measured scattered field to form images which are processed by the 'wetware' of the brain for detection, identification, and localization. When time series representations of the measured scattered field are available, the image forming focusing process can be mathematically modeled by delayed, scaled, and summed migration. These concepts of optical propagation, scattering, and focusing have one-to-one equivalents in the acoustic realm. This document is intended to present the basic concepts of scalar scattering and migration used in wide band wave-based remote sensing and imaging.
The terms beamforming and (delayed, scaled, and summed) migration are used interchangeably but are to be distinguished from narrow band (frequency domain) beamforming used to determine the direction of arrival of a signal, and from seismic migration in which wide band time series are shifted but not to form images per se. Section 3 presents a mostly graphically-based motivation and summary of delay, scale, and sum beamforming. The model for incident field propagation in free space is derived in Section 4 under specific assumptions. General object scattering is derived in Section 5 and simplified under the Born approximation in Section 6. The model of this section serves as the basis in the derivation of time-domain migration. The Foldy-Lax, full point scatterer scattering, method is derived in Section 7. With the previous forward models in hand, delay, scale, and sum beamforming is derived in Section 8. Finally, proof-of-principle experiments are presented in Section 9.

  9. Modeling Volcanic Eruption Parameters by Near-Source Internal Gravity Waves.

    PubMed

    Ripepe, M; Barfucci, G; De Angelis, S; Delle Donne, D; Lacanna, G; Marchetti, E

    2016-11-10

    Volcanic explosions release large amounts of hot gas and ash into the atmosphere to form plumes rising several kilometers above eruptive vents, which can pose serious risks to human health and aviation even several thousand kilometers from the volcanic source. However, even the most sophisticated atmospheric and eruptive plume dynamics models require input parameters such as the duration of the ejection phase and the total mass erupted to constrain the quantity of ash dispersed in the atmosphere and to efficiently evaluate the related hazard. The sudden ejection of this large quantity of ash can perturb the equilibrium of the whole atmosphere, triggering oscillations well below the frequencies of acoustic waves, down to the much longer periods typical of gravity waves. We show that atmospheric gravity oscillations induced by volcanic eruptions and recorded by pressure sensors can be modeled as a compact source representing the rate of erupted volcanic mass. We demonstrate the feasibility of using gravity waves to derive eruption source parameters such as the duration of the injection and the total erupted mass, with direct application in constraining plume and ash dispersal models.

  11. Calculating the Malliavin derivative of some stochastic mechanics problems

    PubMed Central

    Hauseux, Paul; Hale, Jack S.

    2017-01-01

    The Malliavin calculus is an extension of the classical calculus of variations from deterministic functions to stochastic processes. In this paper we aim to show in a practical and didactic way how to calculate the Malliavin derivative, the derivative of the expectation of a quantity of interest of a model with respect to its underlying stochastic parameters, for four problems found in mechanics. The non-intrusive approach uses the Malliavin Weight Sampling (MWS) method in conjunction with a standard Monte Carlo method. The models are expressed as ODEs or PDEs and discretised using the finite difference or finite element methods. Specifically, we consider stochastic extensions of: a 1D Kelvin-Voigt viscoelastic model discretised with finite differences, a 1D linear elastic bar, a hyperelastic bar undergoing buckling, and incompressible Navier-Stokes flow around a cylinder, all discretised with finite elements. A further contribution of this paper is an extension of the MWS method to the more difficult case of non-Gaussian random variables and the calculation of second-order derivatives. We provide open-source code for the numerical examples in this paper. PMID:29261776
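    A minimal instance of the Malliavin-weight idea, reduced to the simplest possible case rather than the paper's finite-element examples: the drift sensitivity of a Gaussian SDE, where the exact answer is known in closed form.

```python
import numpy as np

# Malliavin-weight sensitivity sketch for dX = a*dt + s*dW on [0, T]:
# d/da E[f(X_T)] = E[f(X_T) * W] with weight W = B_T / s, where B_T is
# the terminal Brownian value. For f(x) = x the exact answer is T.
rng = np.random.default_rng(1)
a, s, T, n, paths = 0.7, 0.4, 1.0, 100, 200_000
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), (paths, n))
B_T = dW.sum(axis=1)
X_T = a * T + s * B_T           # endpoint of this exactly integrable SDE
weight = B_T / s                # Malliavin weight for the drift parameter

sensitivity = float(np.mean(X_T * weight))   # estimates d/da E[X_T] = T
print(sensitivity)
```

    The same paths used for the expectation also deliver the derivative, which is what makes the approach non-intrusive: no model re-runs at perturbed parameters are needed.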

  12. Generic Assessment Criteria for human health risk assessment of potentially contaminated land in China.

    PubMed

    Cheng, Yuanyuan; Nathanail, Paul C

    2009-12-20

    Generic Assessment Criteria (GAC) are derived using widely applicable assumptions about the characteristics and behaviour of contaminant sources, pathways and receptors. GAC provide nationally consistent guidance, thereby saving money and time. Currently, there are no human health based GAC for contaminated sites in China. Protection of human health is therefore difficult to ensure and demonstrate, and the lack of GAC makes it difficult to tell if there is a potentially significant risk to human health unless site-specific criteria are derived. This paper derives Chinese GAC for five inorganic and eight organic substances for three regions in China and three land uses: urban residential without plant uptake, Chinese cultivated land, and commercial/industrial, using the SNIFFER model. The SNIFFER model was extended with a dermal absorption algorithm, and its default input values were changed to reflect Chinese exposure scenarios. It is envisaged that the modified SNIFFER model could be used to derive GAC for more contaminants, more regions, and more land uses. Further research, including regional/national surveys of diet and working patterns, is needed to enhance the reliability and acceptability of the GAC.

  13. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    PubMed

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. The secondary photons, in turn, were derived from the decrement of the primary photons in the attenuation process. This design allows these sources to share the free parameters of the filter shape and relates them to each other through photon interactions in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD.
The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4 × 4, 10 × 10, and 20 × 20 cm2 fields at multiple depths. For the 2D dose distributions calculated in the heterogeneous lung phantom, the 2D gamma pass rate was 100% for 6 and 15 MV beams. The model optimization time was less than 4 min. The proposed source model optimization process accurately produces photon fluence spectra from a linac using valid physical properties, without detailed knowledge of the geometry of the linac head, and with minimal optimization time. © 2018 American Association of Physicists in Medicine.
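    The 1%/1 mm gamma analysis used for verification can be sketched in 1D. The depth-dose curves below are synthetic exponentials, not linac data, and global normalisation to the reference maximum is assumed.

```python
import numpy as np

# 1D gamma analysis sketch (1% dose difference / 1 mm distance-to-
# agreement): gamma_i = min_j sqrt((dose diff / (dd*Dmax))^2
#                                  + (distance / dta)^2); pass if <= 1.
def gamma_1d(x, ref, x_eval, eval_dose, dd=0.01, dta=1.0):
    dmax = ref.max()
    g = np.empty_like(eval_dose)
    for i, (xe, de) in enumerate(zip(x_eval, eval_dose)):
        dose_term = (de - ref) / (dd * dmax)
        dist_term = (xe - x) / dta
        g[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return g

x = np.linspace(0.0, 100.0, 1001)      # depth, mm (0.1 mm spacing)
ref = np.exp(-x / 60.0)                # reference depth-dose curve
evl = np.exp(-(x - 0.3) / 60.0)        # evaluated curve, shifted 0.3 mm
g = gamma_1d(x, ref, x, evl)
print((g <= 1.0).mean() * 100.0)       # percentage of points passing
```

    A 0.3 mm shift sits well inside the 1 mm distance-to-agreement, so every point passes; shifting by more than 1 mm would start to fail points on the steep part of the curve.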

  14. United States‐Mexican border watershed assessment: Modeling nonpoint source pollution in Ambos Nogales

    USGS Publications Warehouse

    Norman, Laura M.

    2007-01-01

    Ecological considerations need to be interwoven with economic policy and planning along the United States‐Mexican border. Non‐point source pollution can have significant implications for the availability of potable water and the continued health of borderland ecosystems in arid lands. However, environmental assessments in this region present a host of unique issues and problems. A common obstacle to the solution of these problems is the integration of data with different resolutions, naming conventions, and quality to create a consistent database across the binational study area. This report presents a simple modeling approach to predict nonpoint source pollution that can be used for border watersheds. The modeling approach links a hillslope‐scale erosion‐prediction model and a spatially derived sediment‐delivery model within a geographic information system to estimate erosion, sediment yield, and sediment deposition across the Ambos Nogales watershed in Sonora, Mexico, and Arizona. This paper discusses the procedures used for creating a watershed database to apply the models and presents an example of the modeling approach applied to a conservation‐planning problem.

  15. Urban air quality estimation study, phase 1

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1976-01-01

    Possibilities are explored for applying estimation theory to the analysis, interpretation, and use of air quality measurements in conjunction with simulation models to provide a cost effective method of obtaining reliable air quality estimates for wide urban areas. The physical phenomenology of real atmospheric plumes from elevated localized sources is discussed. A fluctuating plume dispersion model is derived. Individual plume parameter formulations are developed along with associated a priori information. Individual measurement models are developed.
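    A minimal Gaussian plume evaluation illustrates the kind of elevated localized-source dispersion model the fluctuating-plume formulation generalises. All parameters (emission rate, wind speed, stack height, dispersion coefficients) are hypothetical.

```python
import math

# Gaussian plume sketch for an elevated point source with ground
# reflection (image source at -H). Steady-state form; a fluctuating-
# plume model lets the centreline and spreads vary stochastically.
def plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Concentration (g/m3) at crosswind offset y, height z."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2)) +
                math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# 100 g/s source, 5 m/s wind, 50 m stack; sigmas typical of ~1 km downwind
c_ground = plume(Q=100.0, u=5.0, y=0.0, z=0.0, H=50.0,
                 sigma_y=80.0, sigma_z=40.0)
print(c_ground)
```

    The dispersion coefficients sigma_y and sigma_z would normally be looked up as functions of downwind distance and atmospheric stability class.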

  16. Application of a combined approach including contamination indexes, geographic information system and multivariate statistical models in levels, distribution and sources study of metals in soils in Northern China

    PubMed Central

    Huang, Kuixian; Luo, Xingzhang

    2018-01-01

The purpose of this study is to recognize the contamination characteristics of trace metals in soils and apportion their potential sources in Northern China, providing a scientific basis for soil environment management and pollution control. The data set of metals for 12 elements in surface soil samples was collected. The enrichment factor and geoaccumulation index were used to identify the general geochemical characteristics of trace metals in soils. The UNMIX and positive matrix factorization (PMF) models were comparatively applied to apportion their potential sources. Furthermore, geostatistical tools were used to study the spatial distribution of pollution characteristics and to identify the affected regions of sources that were derived from apportionment models. The soils were contaminated by Cd, Hg, Pb and Zn to varying degrees. Industrial activities, agricultural activities and natural sources were identified as the potential sources determining the contents of trace metals in soils, with contributions of 24.8%–24.9%, 33.3%–37.2% and 38.0%–41.8%, respectively. The slightly different results obtained from UNMIX and PMF might be caused by the estimations of uncertainty and different algorithms within the models. PMID:29474412

  17. New methods for interpretation of magnetic vector and gradient tensor data I: eigenvector analysis and the normalised source strength

    NASA Astrophysics Data System (ADS)

    Clark, David A.

    2012-09-01

    Acquisition of magnetic gradient tensor data is likely to become routine in the near future. New methods for inverting gradient tensor surveys to obtain source parameters have been developed for several elementary, but useful, models. These include point dipole (sphere), vertical line of dipoles (narrow vertical pipe), line of dipoles (horizontal cylinder), thin dipping sheet, and contact models. A key simplification is the use of eigenvalues and associated eigenvectors of the tensor. The normalised source strength (NSS), calculated from the eigenvalues, is a particularly useful rotational invariant that peaks directly over 3D compact sources, 2D compact sources, thin sheets and contacts, and is independent of magnetisation direction. In combination the NSS and its vector gradient determine source locations uniquely. NSS analysis can be extended to other useful models, such as vertical pipes, by calculating eigenvalues of the vertical derivative of the gradient tensor. Inversion based on the vector gradient of the NSS over the Tallawang magnetite deposit obtained good agreement between the inferred geometry of the tabular magnetite skarn body and drill hole intersections. Besides the geological applications, the algorithms for the dipole model are readily applicable to the detection, location and characterisation (DLC) of magnetic objects, such as naval mines, unexploded ordnance, shipwrecks, archaeological artefacts, and buried drums.
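
The NSS reduces to a simple function of the ordered eigenvalues l1 >= l2 >= l3 of the (symmetric, traceless) gradient tensor: NSS = sqrt(-l2^2 - l1*l3). A minimal sketch of that computation, assuming this standard form; the example tensor is illustrative, not survey data:

```python
import numpy as np

def normalised_source_strength(G):
    """NSS from the eigenvalues l1 >= l2 >= l3 of the symmetric,
    traceless magnetic gradient tensor G:
        NSS = sqrt(-l2**2 - l1*l3)
    A rotational invariant that peaks over compact sources and is
    independent of magnetisation direction."""
    G = 0.5 * (G + G.T)                          # enforce symmetry
    l3, l2, l1 = np.sort(np.linalg.eigvalsh(G))  # ascending order
    return float(np.sqrt(max(-l2**2 - l1 * l3, 0.0)))

# Axially symmetric tensor (e.g. on-axis above a dipole), eigenvalues (2, -1, -1)
G = np.diag([2.0, -1.0, -1.0])
nss = normalised_source_strength(G)
```

Because NSS depends only on eigenvalues, rotating G (i.e. changing sensor orientation) leaves the result unchanged, which is the property exploited for source location.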

  18. Automated identification of stream-channel geomorphic features from high‑resolution digital elevation models in West Tennessee watersheds

    USGS Publications Warehouse

    Cartwright, Jennifer M.; Diehl, Timothy H.

    2017-01-17

    High-resolution digital elevation models (DEMs) derived from light detection and ranging (lidar) enable investigations of stream-channel geomorphology with much greater precision than previously possible. The U.S. Geological Survey has developed the DEM Geomorphology Toolbox, containing seven tools to automate the identification of sites of geomorphic instability that may represent sediment sources and sinks in stream-channel networks. These tools can be used to modify input DEMs on the basis of known locations of stormwater infrastructure, derive flow networks at user-specified resolutions, and identify possible sites of geomorphic instability including steep banks, abrupt changes in channel slope, or areas of rough terrain. Field verification of tool outputs identified several tool limitations but also demonstrated their overall usefulness in highlighting likely sediment sources and sinks within channel networks. In particular, spatial clusters of outputs from multiple tools can be used to prioritize field efforts to assess and restore eroding stream reaches.
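
Two of the quantities targeted by such tools, bank steepness and terrain roughness, can be sketched from a DEM with finite differences and a moving-window standard deviation. This generic illustration does not reproduce the USGS DEM Geomorphology Toolbox itself; the function name and toy DEM are hypothetical:

```python
import numpy as np

def slope_and_roughness(dem, cell_size=1.0):
    """Per-cell slope (degrees) from finite differences, plus a simple
    roughness measure (standard deviation of elevation in a 3x3
    edge-padded window)."""
    dem = np.asarray(dem, dtype=float)
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    padded = np.pad(dem, 1, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    roughness = windows.std(axis=(-2, -1))
    return slope, roughness

dem = [[0.0, 1.0, 2.0],
       [0.0, 1.0, 2.0],
       [0.0, 1.0, 2.0]]      # uniform ramp: 1 m rise per 1 m cell
slope, rough = slope_and_roughness(dem)
```

Cells exceeding chosen slope or roughness thresholds would then be flagged as candidate sites of geomorphic instability for field verification.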

  19. An engineering study of hybrid adaptation of wind tunnel walls for three-dimensional testing

    NASA Technical Reports Server (NTRS)

    Brown, Clinton; Kalumuck, Kenneth; Waxman, David

    1987-01-01

Solid wall tunnels having only upper and lower walls flexing are described. An algorithm for selecting the wall contours for both 2- and 3-dimensional wall flexure is presented, and numerical experiments are used to validate its applicability to the general test case of 3-dimensional lifting aircraft models in rectangular cross-section wind tunnels. The method requires an initial approximate representation of the model flow field at a given lift with walls absent. The numerical methods utilized are derived by use of Green's source solutions obtained using the method of images; first-order linearized flow theory is employed with Prandtl-Glauert compressibility transformations. Equations are derived for the flexed shape of a simple constant-thickness plate wall under the influence of a finite number of jacks in an axial row along the plate centerline. The Green's source methods are developed to provide estimations of residual flow distortion (interferences) with measured wall pressures and wall flow inclinations as inputs.

  20. Transportation Sector Model of the National Energy Modeling System. Volume 2 -- Appendices: Part 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

The attachments contained within this appendix provide additional details about the model development and estimation process which do not easily lend themselves to incorporation in the main body of the model documentation report. The information provided in these attachments is not integral to the understanding of the model's operation, but provides the reader with opportunity to gain a deeper understanding of some of the model's underlying assumptions. There will be a slight degree of replication of materials found elsewhere in the documentation, made unavoidable by the dictates of internal consistency. Each attachment is associated with a specific component of the transportation model; the presentation follows the same sequence of modules employed in Volume 1. The following attachments are contained in Appendix F: Fuel Economy Model (FEM)--provides a discussion of the FEM vehicle demand and performance by size class models; Alternative Fuel Vehicle (AFV) Model--describes data input sources and extrapolation methodologies; Light-Duty Vehicle (LDV) Stock Model--discusses the fuel economy gap estimation methodology; Light Duty Vehicle Fleet Model--presents the data development for business, utility, and government fleet vehicles; Light Commercial Truck Model--describes the stratification methodology and data sources employed in estimating the stock and performance of LCTs; Air Travel Demand Model--presents the derivation of the demographic index, used to modify estimates of personal travel demand; and Airborne Emissions Model--describes the derivation of emissions factors used to associate transportation measures to levels of airborne emissions of several pollutants.

  1. FUSION++: A New Data Assimilative Model for Electron Density Forecasting

    NASA Astrophysics Data System (ADS)

    Bust, G. S.; Comberiate, J.; Paxton, L. J.; Kelly, M.; Datta-Barua, S.

    2014-12-01

There is a continuing need within the operational space weather community, both civilian and military, for accurate, robust data assimilative specifications and forecasts of the global electron density field, as well as derived RF application product specifications and forecasts obtained from the electron density field. The spatial scales of interest range from a hundred to a few thousand kilometers horizontally (synoptic large-scale structuring) and meters to kilometers (small-scale structuring that causes scintillations). RF space weather applications affected by electron density variability on these scales include navigation, communication and geo-location at RF frequencies ranging from hundreds of Hz to GHz. For many of these applications, the necessary forecast time periods range from nowcasts to 1-3 hours. For more "mission planning" applications, necessary forecast times can range from hours to days. In this paper we present a new ionosphere-thermosphere (IT) specification and forecast model being developed at JHU/APL based upon the well-known data assimilation algorithms Ionospheric Data Assimilation Four Dimensional (IDA4D) and Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). This new forecast model, "Forward Update Simple IONosphere model Plus IDA4D Plus EMPIRE" (FUSION++), ingests data from observations related to electron density, winds, electric fields and neutral composition and provides improved specification and forecast of electron density. In addition, the new model provides improved specification of winds, electric fields and composition. We will present a short overview and derivation of the methodology behind FUSION++, some preliminary results using real observational sources, example derived RF application products such as HF bi-static propagation, and initial comparisons with independent data sources for validation.

  2. Steady induction effects in geomagnetism. Part 1C: Geomagnetic estimation of steady surficial core motions: Application to the definitive geomagnetic reference field models

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1993-01-01

In the source-free mantle/frozen-flux core magnetic earth model, the non-linear inverse steady motional induction problem was solved using the method presented in Part 1B. How that method was applied to estimate steady, broad-scale fluid velocity fields near the top of Earth's core that induce the secular change indicated by the Definitive Geomagnetic Reference Field (DGRF) models from 1945 to 1980 is described. Special attention is given to the derivation of weight matrices for the DGRF models because the weights determine the apparent significance of the residual secular change. The derived weight matrices also enable estimation of the secular change signal-to-noise ratio characterizing the DGRF models. Two types of weights were derived in 1987-88: radial field weights for fitting the evolution of the broad-scale portion of the radial geomagnetic field component at Earth's surface implied by the DGRF's, and general weights for fitting the evolution of the broad-scale portion of the scalar potential specified by these models. The difference is non-trivial because not all the geomagnetic data represented by the DGRF's constrain the radial field component. For radial field weights (or general weights), a quantitatively acceptable explication of broad-scale secular change relative to the 1980 Magsat epoch must account for 99.94271 percent (or 99.98784 percent) of the total weighted variance accumulated therein. Tolerable normalized root-mean-square weighted residuals of 2.394 percent (or 1.103 percent) are less than the 7 percent errors expected in the source-free mantle/frozen-flux core approximation.

  3. Effect of atmospherics on beamforming accuracy

    NASA Technical Reports Server (NTRS)

    Alexander, Richard M.

    1990-01-01

    Two mathematical representations of noise due to atmospheric turbulence are presented. These representations are derived and used in computer simulations of the Bartlett Estimate implementation of beamforming. Beamforming is an array processing technique employing an array of acoustic sensors used to determine the bearing of an acoustic source. Atmospheric wind conditions introduce noise into the beamformer output. Consequently, the accuracy of the process is degraded and the bearing of the acoustic source is falsely indicated or impossible to determine. The two representations of noise presented here are intended to quantify the effects of mean wind passing over the array of sensors and to correct for these effects. The first noise model is an idealized case. The effect of the mean wind is incorporated as a change in the propagation velocity of the acoustic wave. This yields an effective phase shift applied to each term of the spatial correlation matrix in the Bartlett Estimate. The resultant error caused by this model can be corrected in closed form in the beamforming algorithm. The second noise model acts to change the true direction of propagation at the beginning of the beamforming process. A closed form correction for this model is not available. Efforts to derive effective means to reduce the contributions of the noise have not been successful. In either case, the maximum error introduced by the wind is a beam shift of approximately three degrees. That is, the bearing of the acoustic source is indicated at a point a few degrees from the true bearing location. These effects are not quite as pronounced as those seen in experimental results. Sidelobes are false indications of acoustic sources in the beamformer output away from the true bearing angle. The sidelobes that are observed in experimental results are not caused by these noise models. 
The effects of mean wind passing over the sensor array as modeled here do not alter the beamformer output as significantly as expected.
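
The Bartlett Estimate referred to above scans candidate bearings with a steering vector and reads power off the spatial correlation matrix, P(theta) = a^H(theta) R a(theta). A minimal sketch for a uniform linear array, without the wind-noise terms of the study (all values illustrative):

```python
import numpy as np

def bartlett_spectrum(R, d, wavelength, angles_deg):
    """Bartlett (conventional) beamformer: P(theta) = a^H R a scanned
    over candidate bearings, for a uniform linear array of spacing d;
    R is the spatial correlation matrix of the sensor outputs."""
    n = R.shape[0]
    k = 2.0 * np.pi / wavelength
    power = []
    for th in np.radians(angles_deg):
        a = np.exp(1j * k * d * np.arange(n) * np.sin(th))   # steering vector
        power.append(np.real(a.conj() @ R @ a) / n**2)
    return np.array(power)

# Noise-free plane wave arriving from 20 degrees on an 8-sensor array
n, d, lam = 8, 0.5, 1.0                        # half-wavelength spacing
a0 = np.exp(1j * 2.0 * np.pi / lam * d * np.arange(n) * np.sin(np.radians(20.0)))
R = np.outer(a0, a0.conj())                    # spatial correlation matrix
angles = np.arange(-90, 91)
P = bartlett_spectrum(R, d, lam, angles)
bearing = int(angles[np.argmax(P)])
```

The phase-shift correction described for the first noise model would enter here as a multiplicative adjustment to each term of R before the scan.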

  4. Efficient Generation of β-Globin-Expressing Erythroid Cells Using Stromal Cell-Derived Induced Pluripotent Stem Cells from Patients with Sickle Cell Disease.

    PubMed

    Uchida, Naoya; Haro-Mora, Juan J; Fujita, Atsushi; Lee, Duck-Yeon; Winkler, Thomas; Hsieh, Matthew M; Tisdale, John F

    2017-03-01

Human embryonic stem (ES) cells and induced pluripotent stem (iPS) cells represent an ideal source for in vitro modeling of erythropoiesis and a potential alternative source for red blood cell transfusions. However, iPS cell-derived erythroid cells predominantly produce ε- and γ-globin without β-globin production. We recently demonstrated that ES cell-derived sacs (ES sacs), known to express hemangioblast markers, allow for efficient erythroid cell generation with β-globin production. In this study, we generated several iPS cell lines derived from bone marrow stromal cells (MSCs) and peripheral blood erythroid progenitors (EPs) from sickle cell disease patients, and evaluated hematopoietic stem/progenitor cell (HSPC) generation after iPS sac induction as well as subsequent erythroid differentiation. MSC-derived iPS sacs yielded greater amounts of immature hematopoietic progenitors (VEGFR2+GPA-), definitive HSPCs (CD34+CD45+), and megakaryoerythroid progenitors (GPA+CD41a+), as compared to EP-derived iPS sacs. Erythroid differentiation from MSC-derived iPS sacs resulted in greater amounts of erythroid cells (GPA+) and higher β-globin (and βS-globin) expression, comparable to ES sac-derived cells. These data demonstrate that human MSC-derived iPS sacs allow for more efficient erythroid cell generation with higher β-globin production, likely due to heightened emergence of immature progenitors. Our findings should be important for iPS cell-derived erythroid cell generation. Stem Cells 2017;35:586-596. © 2016 AlphaMed Press.

  5. Fully probabilistic earthquake source inversion on teleseismic scales

    NASA Astrophysics Data System (ADS)

    Stähler, Simon; Sigloch, Karin

    2017-04-01

    Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. 
From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.
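
The decorrelation misfit used above is simply D = 1 - CC for each observed/modelled waveform pair. A sketch with the zero-lag normalised correlation coefficient; the study's CC is measured over teleseismic body-wave windows, so this toy version is only illustrative:

```python
import numpy as np

def decorrelation(obs, syn):
    """Waveform misfit D = 1 - CC, with CC the normalised zero-lag
    cross-correlation coefficient of the two (demeaned) traces."""
    obs = np.asarray(obs, float) - np.mean(obs)
    syn = np.asarray(syn, float) - np.mean(syn)
    cc = np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))
    return 1.0 - cc

t = np.linspace(0.0, 1.0, 200)
wave = np.sin(2.0 * np.pi * 5.0 * t)
d_same = decorrelation(wave, wave)     # identical waveforms: D = 0
d_anti = decorrelation(wave, -wave)    # anticorrelated waveforms: D = 2
```

D lies in [0, 2]; the paper's empirical finding is that, over many event-station pairs, D approximately follows a log-normal distribution, which is what makes it convenient as a likelihood variable.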

  6. Towards full waveform ambient noise inversion

    NASA Astrophysics Data System (ADS)

    Sager, Korbinian; Ermert, Laura; Boehm, Christian; Fichtner, Andreas

    2018-01-01

    In this work we investigate fundamentals of a method—referred to as full waveform ambient noise inversion—that improves the resolution of tomographic images by extracting waveform information from interstation correlation functions that cannot be used without knowing the distribution of noise sources. The fundamental idea is to drop the principle of Green function retrieval and to establish correlation functions as self-consistent observables in seismology. This involves the following steps: (1) We introduce an operator-based formulation of the forward problem of computing correlation functions. It is valid for arbitrary distributions of noise sources in both space and frequency, and for any type of medium, including 3-D elastic, heterogeneous and attenuating media. In addition, the formulation allows us to keep the derivations independent of time and frequency domain and it facilitates the application of adjoint techniques, which we use to derive efficient expressions to compute first and also second derivatives. The latter are essential for a resolution analysis that accounts for intra- and interparameter trade-offs. (2) In a forward modelling study we investigate the effect of noise sources and structure on different observables. Traveltimes are hardly affected by heterogeneous noise source distributions. On the other hand, the amplitude asymmetry of correlations is at least to first order insensitive to unmodelled Earth structure. Energy and waveform differences are sensitive to both structure and the distribution of noise sources. (3) We design and implement an appropriate inversion scheme, where the extraction of waveform information is successively increased. We demonstrate that full waveform ambient noise inversion has the potential to go beyond ambient noise tomography based on Green function retrieval and to refine noise source location, which is essential for a better understanding of noise generation. 
Inherent trade-offs between source and structure are quantified using Hessian-vector products.
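
The basic observable, an interstation correlation function, can be illustrated by correlating two synthetic records of a common noise source: the propagation delay appears as the position of the correlation peak. This is a generic sketch, not the operator-based formulation of the paper:

```python
import numpy as np

def interstation_correlation(u1, u2):
    """Cross-correlation of two station records, computed in the
    frequency domain with zero padding and returned for lags
    -(n-1)..(n-1) samples."""
    n = len(u1)
    U1 = np.fft.rfft(u1, 2 * n)
    U2 = np.fft.rfft(u2, 2 * n)
    c = np.fft.irfft(np.conj(U1) * U2)
    return np.concatenate((c[-(n - 1):], c[:n]))

rng = np.random.default_rng(0)
src = rng.standard_normal(500)                     # common noise record
lag_true = 30                                      # propagation delay (samples)
u1 = src
u2 = np.concatenate((np.zeros(lag_true), src[:-lag_true]))
corr = interstation_correlation(u1, u2)
lags = np.arange(-(len(u1) - 1), len(u1))
lag_est = int(lags[np.argmax(corr)])
```

In full waveform ambient noise inversion the asymmetry and shape of this function, not just the peak time, carry information about both the noise-source distribution and Earth structure.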

  7. P and S wave Coda Calibration in Central Asia and South Korea

    NASA Astrophysics Data System (ADS)

    Kim, D.; Mayeda, K.; Gok, R.; Barno, J.; Roman-Nieves, J. I.

    2017-12-01

Empirically derived coda source spectra provide unbiased, absolute moment magnitude (Mw) estimates for events that are normally too small for accurate long-period waveform modeling. In this study, we obtain coda-derived source spectra using data from Central Asia (Kyrgyzstan networks - KN and KR, and Tajikistan - TJ) and South Korea (Korea Meteorological Administration, KMA). We used a recently developed coda calibration module of Seismic WaveForm Tool (SWFT). Seismic activities during this recording period include the recent Gyeongju earthquake of Mw=5.3 and its aftershocks, two nuclear explosions from 2009 and 2013 in North Korea, and a small number of construction and mining-related explosions. For calibration, we calculated synthetic coda envelopes for both P and S waves based on a simple analytic expression that fits the observed narrowband filtered envelopes using the method outlined in Mayeda et al. (2003). To provide an absolute scale of the resulting source spectra, path and site corrections are applied using independent spectral constraints (e.g., Mw and stress drop) from three Kyrgyzstan events and the largest events of the Gyeongju sequence in Central Asia and South Korea, respectively. In spite of major tectonic differences, stable source spectra were obtained in both regions. We validated the resulting spectra by comparing the ratio of raw envelopes and source spectra from calibrated envelopes. Spectral shapes of earthquakes and explosions show different patterns in both regions. We also find that (1) the source spectra derived from the S-coda are more robust than those from the P-coda at low frequencies; (2) unlike earthquake events, the source spectra of explosions show a large disagreement between P and S waves; and (3) similarity is observed between the 2016 Gyeongju sequence and the 2011 Virginia earthquake sequence in the eastern U.S.
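
The narrowband envelopes to which the synthetic coda shapes are fitted are conventionally formed by band-pass filtering and taking the Hilbert-transform magnitude. A generic sketch of that step only; the filter design, band edges, and synthetic trace below are illustrative choices, not those of the study:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def narrowband_envelope(trace, fs, f_lo, f_hi):
    """Band-pass filter a trace (zero-phase) and return its amplitude
    envelope via the Hilbert transform, as commonly done before
    fitting coda decay in each frequency band."""
    b, a = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, trace)
    return np.abs(hilbert(filtered))

fs = 100.0
t = np.arange(0.0, 20.0, 1.0 / fs)
trace = np.exp(-0.3 * t) * np.sin(2.0 * np.pi * 3.0 * t)  # decaying, coda-like
env = narrowband_envelope(trace, fs, 2.0, 4.0)
```

Repeating this over a suite of frequency bands yields the set of narrowband envelopes whose calibrated levels define the coda source spectrum.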

  8. Modular model for Mercury's magnetospheric magnetic field confined within the average observed magnetopause.

    PubMed

    Korth, Haje; Tsyganenko, Nikolai A; Johnson, Catherine L; Philpott, Lydia C; Anderson, Brian J; Al Asad, Manar M; Solomon, Sean C; McNutt, Ralph L

    2015-06-01

Accurate knowledge of Mercury's magnetospheric magnetic field is required to understand the sources of the planet's internal field. We present the first model of Mercury's magnetospheric magnetic field confined within a magnetopause shape derived from Magnetometer observations by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging spacecraft. The field of internal origin is approximated by a dipole of magnitude 190 nT RM³, where RM is Mercury's radius, offset northward by 479 km along the spin axis. External field sources include currents flowing on the magnetopause boundary and in the cross-tail current sheet. The cross-tail current is described by a disk-shaped current near the planet and a sheet current at larger (≳5 RM) antisunward distances. The tail currents are constrained by minimizing the root-mean-square (RMS) residual between the model and the magnetic field observed within the magnetosphere. The magnetopause current contributions are derived by shielding the field of each module external to the magnetopause by minimizing the RMS normal component of the magnetic field at the magnetopause. The new model yields improvements over the previously developed paraboloid model in regions that are close to the magnetopause and the nightside magnetic equatorial plane. Magnetic field residuals remain that are distributed systematically over large areas and vary monotonically with magnetic activity. Further advances in empirical descriptions of Mercury's magnetospheric external field will need to account for the dependence of the tail and magnetopause currents on magnetic activity and additional sources within the magnetosphere associated with Birkeland currents and plasma distributions near the dayside magnetopause.
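
The internal-field module alone, an axial dipole of moment 190 nT·RM³ offset 479 km northward, is straightforward to evaluate. The sketch below covers only that term; the external magnetopause and tail-current modules are omitted, and the sign convention of the moment is left aside:

```python
import numpy as np

RM_KM = 2440.0              # Mercury radius in km
M_DIP = 190.0               # dipole moment in nT * RM^3
Z_OFF = 479.0 / RM_KM       # northward dipole offset in RM

def offset_dipole_field(pos):
    """Internal (offset axial dipole) field, in nT, at a planet-centred
    position given in units of RM, via B = (3(m.r)r - m)/r^3 about the
    offset dipole centre."""
    m = np.array([0.0, 0.0, M_DIP])   # axial moment (sign convention aside)
    r_vec = np.asarray(pos, float) - np.array([0.0, 0.0, Z_OFF])
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return (3.0 * np.dot(m, r_hat) * r_hat - m) / r**3

# Field magnitude on the offset-dipole equator, 1 RM from the dipole centre
B_eq = offset_dipole_field([1.0, 0.0, Z_OFF])
```

In the full model the magnetopause-shielding and tail-current contributions would be added to this internal term before comparing with spacecraft data.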

  9. Modular model for Mercury's magnetospheric magnetic field confined within the average observed magnetopause

    PubMed Central

    Tsyganenko, Nikolai A.; Johnson, Catherine L.; Philpott, Lydia C.; Anderson, Brian J.; Al Asad, Manar M.; Solomon, Sean C.; McNutt, Ralph L.

    2015-01-01

Accurate knowledge of Mercury's magnetospheric magnetic field is required to understand the sources of the planet's internal field. We present the first model of Mercury's magnetospheric magnetic field confined within a magnetopause shape derived from Magnetometer observations by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging spacecraft. The field of internal origin is approximated by a dipole of magnitude 190 nT RM³, where RM is Mercury's radius, offset northward by 479 km along the spin axis. External field sources include currents flowing on the magnetopause boundary and in the cross‐tail current sheet. The cross‐tail current is described by a disk‐shaped current near the planet and a sheet current at larger (≳5 RM) antisunward distances. The tail currents are constrained by minimizing the root‐mean‐square (RMS) residual between the model and the magnetic field observed within the magnetosphere. The magnetopause current contributions are derived by shielding the field of each module external to the magnetopause by minimizing the RMS normal component of the magnetic field at the magnetopause. The new model yields improvements over the previously developed paraboloid model in regions that are close to the magnetopause and the nightside magnetic equatorial plane. Magnetic field residuals remain that are distributed systematically over large areas and vary monotonically with magnetic activity. Further advances in empirical descriptions of Mercury's magnetospheric external field will need to account for the dependence of the tail and magnetopause currents on magnetic activity and additional sources within the magnetosphere associated with Birkeland currents and plasma distributions near the dayside magnetopause. PMID:27656335

  10. Analysis of seismic sources for different mechanisms of fracture growth for microseismic monitoring applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchkov, A. A., E-mail: DuchkovAA@ipgg.sbras.ru; Novosibirsk State University, Novosibirsk, 630090; Stefanov, Yu. P., E-mail: stefanov@ispms.tsc.ru

    2015-10-27

We have developed and illustrated an approach for geomechanical modeling of elastic wave generation (microseismic event occurrence) during incremental fracture growth. We then derived properties of effective point seismic sources (radiation patterns) approximating the obtained wavefields. These results establish a connection between geomechanical models of hydraulic fracturing and microseismic monitoring. Thus, the results of the moment tensor inversion of microseismic data can be related to different geomechanical scenarios of hydraulic fracture growth. In the future, the results can be used for calibrating hydrofrac models. We carried out a series of numerical simulations and made some observations about wave generation during fracture growth. In particular, when the growing fracture hits a pre-existing crack it generates a much stronger microseismic event than fracture growth in a homogeneous medium does (the radiation pattern is very close to the theoretical dipole-type source mechanism).

  11. Importance of seagrass as a carbon source for heterotrophic bacteria in a subtropical estuary (Florida Bay)

    NASA Astrophysics Data System (ADS)

    Williams, Clayton J.; Jaffé, Rudolf; Anderson, William T.; Jochem, Frank J.

    2009-11-01

A stable carbon isotope approach was taken to identify potential organic matter sources incorporated into biomass by the heterotrophic bacterial community of Florida Bay, a subtropical estuary with a recent history of seagrass loss and phytoplankton blooms. To gain a more complete understanding of bacterial carbon cycling in seagrass estuaries, this study focused on the importance of seagrass-derived organic matter to pelagic, seagrass epiphytic, and sediment surface bacteria. Particulate organic matter (POM), seagrass epiphytic, seagrass (Thalassia testudinum) leaf, and sediment surface samples were collected from four Florida Bay locations with historically different organic matter inputs, macrophyte densities, and primary productivities. Bulk (observed and those reported previously) and compound-specific bacterial fatty acid δ13C values were used to determine important carbon sources to the estuary and benthic and pelagic heterotrophic bacteria. The δ13C values of T. testudinum green leaves with epiphytes removed ranged from -9.9 to -6.9‰. Thalassia testudinum δ13C values were significantly more enriched in 13C than POM, epiphytic, and sediment samples, which ranged from -16.4 to -13.5, -16.2 to -9.6, and -16.7 to -11.0‰, respectively. Bacterial fatty acid δ13C values (measured for br14:0, 15:0, i15:0, a15:0, br17:0, and 17:0) ranged from -25.5 to -8.2‰. Assuming a -3‰ carbon source fractionation from fatty acid to whole bacteria, pelagic, epiphytic, and sediment bacterial δ13C values were generally more depleted in 13C than T. testudinum δ13C values, more enriched in 13C than reported δ13C values for mangroves, and similar to reported δ13C values for algae. IsoSource mixing model results indicated that organic matter derived from T. testudinum was incorporated by both benthic and pelagic bacterial communities, where 13-67% of bacterial δ13C values could arise from consumption of seagrass-derived organic matter.
The IsoSource model, however, failed to discriminate clearly the fraction of algal (0-86%) and mangrove (0-42%) organic matter incorporated by bacterial communities. These results indicate that pelagic, epiphytic, and sediment surface bacteria consumed organic matter from a variety of sources. Bacterial communities consistently incorporated organic matter derived from seagrass, the dominant macrophyte in Florida Bay, but seagrass δ13C values alone could not account fully for bacterial δ13C values.
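
IsoSource-style mixing models enumerate all source-fraction combinations summing to one and keep those whose mass-balance δ13C matches the mixture within a tolerance, which is why they return feasible ranges rather than unique fractions. A sketch of that enumeration with hypothetical end-member values, not the study's measured ranges:

```python
import numpy as np
from itertools import product

def feasible_mixtures(source_d13c, mixture_d13c, increment=0.05, tol=0.2):
    """IsoSource-style enumeration: all source-fraction combinations
    (summing to 1, stepped by `increment`) whose mass-balance d13C
    matches the mixture within `tol` permil."""
    n = len(source_d13c)
    fracs = np.linspace(0.0, 1.0, int(round(1.0 / increment)) + 1)
    out = []
    for combo in product(fracs, repeat=n - 1):
        last = 1.0 - sum(combo)
        if last < -1e-9:
            continue                       # fractions exceed 1: skip
        f = np.array(combo + (max(last, 0.0),))
        if abs(np.dot(f, source_d13c) - mixture_d13c) <= tol:
            out.append(f)
    return np.array(out)

# Hypothetical end-members: seagrass (-8), algae (-17), mangrove (-28) permil
mixes = feasible_mixtures([-8.0, -17.0, -28.0], mixture_d13c=-15.0)
```

Reporting the minimum-to-maximum of each column of `mixes` reproduces the kind of feasible ranges quoted above (e.g. 0-86% algal), which overlap precisely because distinct mixtures can yield the same bulk δ13C.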

  12. Multiregion bicentric-spheres models of the head for the simulation of bioelectric phenomena.

    PubMed

    Vatta, Federica; Bruno, Paolo; Inchingolo, Paolo

    2005-03-01

Equations are derived for the electric potentials [electroencephalogram (EEG)] produced by dipolar sources in a multiregion bicentric-spheres volume-conductor head model. Because the equations are valid for an arbitrary number of regions, our proposal generalizes many spherical models presented so far in the literature, each of which can be regarded as a particular case of the multiregion model. Moreover, our approach allows new features of the head volume conductor to be considered, better approximating the electrical properties of the real head.
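
The building block of any such spherical head model is the potential of a current dipole in an unbounded homogeneous conductor, V = p·r̂ / (4πσr²), to which the boundary corrections for the concentric or bicentric sphere interfaces are then added. A sketch of that infinite-medium term with illustrative values:

```python
import numpy as np

def infinite_medium_potential(p, r_dipole, r_obs, sigma=0.33):
    """Electric potential (V) of a current dipole p (A*m) located at
    r_dipole, observed at r_obs (both in metres), in an unbounded
    homogeneous conductor of conductivity sigma (S/m)."""
    d = np.asarray(r_obs, float) - np.asarray(r_dipole, float)
    r = np.linalg.norm(d)
    return float(np.dot(p, d) / (4.0 * np.pi * sigma * r**3))

# Radial 10 nA*m dipole 1 cm below an observation point ("scalp")
v = infinite_medium_potential(p=[0.0, 0.0, 1e-8],
                              r_dipole=[0.0, 0.0, 0.08],
                              r_obs=[0.0, 0.0, 0.09])
```

With these toy values the result lands in the tens of microvolts, the right order of magnitude for scalp EEG; the multiregion sphere equations modify this value through the conductivity contrasts of the intervening shells.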

  13. Magnetic field of longitudinal gradient bend

    NASA Astrophysics Data System (ADS)

    Aiba, Masamitsu; Böge, Michael; Ehrlichman, Michael; Streun, Andreas

    2018-06-01

    The longitudinal gradient bend is an effective method for reducing the natural emittance in light sources. It is, however, not a common element. We have analyzed its magnetic field and derived a set of formulae. Based on the derivation, we discuss how to model the longitudinal gradient bend in accelerator codes that are used for designing electron storage rings. Strengths of multipole components can also be evaluated from the formulae, and we investigate the impact of higher order multipole components in a very low emittance lattice.

  14. On butterfly effect in higher derivative gravities

    NASA Astrophysics Data System (ADS)

    Alishahiha, Mohsen; Davody, Ali; Naseh, Ali; Taghavi, Seyed Farid

    2016-11-01

    We study the butterfly effect in D-dimensional gravitational theories containing terms quadratic in the Ricci scalar and Ricci tensor. One observes that, due to the higher order derivatives in the corresponding equations of motion, there are two butterfly velocities. The velocities are determined by the dimension of operators whose sources are provided by the metric. The three-dimensional TMG model is also studied, where we get two butterfly velocities at a generic point of the moduli space of parameters. At the critical point the two velocities coincide.

  15. Hf isotope compositions in detrital zircons as a new tool for provenance studies

    NASA Astrophysics Data System (ADS)

    Jacobsen, Y. J.; Münker, C.; Mezger, K.

    2003-04-01

    Identifying the provenance of continental sediments is a major issue in palaeo-tectonic studies, providing important information for paleogeographic reconstructions. Isotope studies, e.g. those of whole rock Sm-Nd or detrital zircon U-Pb dating, have widely been used for this purpose. Here we assess the potential of combined Lu-Hf data and U-Pb ages determined on the same single detrital zircons as a new tool for provenance studies. Due to the low Lu/Hf ratios in zircons, the Hf isotope composition of a zircon changes insignificantly after its crystallization. Thus each particular grain preserves information on the Hf-isotope composition of its source and the age of this source. Provided that both the U-Pb and Lu-Hf isotope systems have not been disturbed, this information can be used to constrain the sources of each individual zircon. In order to demonstrate the capability of Hf isotope studies on detrital zircons for provenance studies, we obtained combined U-Pb ages and Lu-Hf isotope data for zircons from the Cambrian Junction Formation in New Zealand. The Junction Formation was deposited on the (present) SE margin of Gondwana near the Australian continent and consists of turbidites, siltstones and conglomerates [1]. Typical continent-derived Paleozoic sediments in SE Gondwana generally show characteristic age maxima at 500-600 Ma, 1000-1200 Ma (Grenvillian) and additional older peaks (early Proterozoic to Archean) [2]. We focused on two groups of detrital zircons with Grenvillian and Proterozoic to Late Archean ages. The initial ɛHf values for these zircons range from 0.7 to -15.5 for the Grenvillian and from -5.2 to -14.1 for the Proterozoic/Archean zircons. Corresponding two-stage Hf model ages range from ca. 1500 to 2500 Ma for the Grenvillian and from ca. 3200 to 3600 Ma for the Proterozoic/Archean zircons. Furthermore it can be shown that the Grenvillian zircons must have been derived from recycled Grenvillian provinces.
Comparison of these Hf model ages with Nd crustal residence ages from the possible sources in Australia, Antarctica and Laurentia reveals the possible sources of the zircons. Based on the paleogeographic setting in Cambrian time, the Grenville-age zircons were most likely derived from Dronning Maud Land (Antarctica), thus confirming earlier models by [1] and [3]. The Archean zircons were most likely derived from W-Australia (Yilgarn or Pilbara Craton) or E-Antarctica (Miller Range). [1] Wombacher and Münker 2000: J. Geol. 108, [2] Ireland et al. 1998: Geology 26, [3] Flöttmann et al. 1998: J. Geol. Soc. 155.

  16. Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.

    1981-01-01

    Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
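The equivalent-point-source idea can be sketched in a few lines: place point masses at Gauss-Legendre quadrature nodes inside the body and sum their attractions at an observation point. The cube geometry, density, and observation point below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Gauss-Legendre quadrature gravity sketch: a uniform cube represented by
# equivalent point masses at the quadrature nodes (flat-earth geometry here;
# the paper works on a sphere).  All numbers are illustrative.
G = 6.674e-11
a, rho, n = 1000.0, 2700.0, 16      # cube half-width (m), density (kg/m^3), nodes/axis
obs_z = 5000.0                      # observation point height on the z-axis (m)

x, w = np.polynomial.legendre.leggauss(n)
X, Y, Z = np.meshgrid(a * x, a * x, a * x, indexing="ij")
dm = rho * a**3 * np.einsum("i,j,k->ijk", w, w, w)   # equivalent point masses

dx, dy, dz = X, Y, obs_z - Z
gz = np.sum(G * dm * dz / (dx**2 + dy**2 + dz**2) ** 1.5)

# far-field check: a cube has no quadrupole moment, so from five half-widths
# away it is very close to a point mass at its centre
gz_point = G * rho * (2 * a) ** 3 / obs_z**2
print(abs(gz / gz_point - 1.0) < 0.02)
```

The same node-and-weight machinery extends to variable density and to spherical shells by mapping the nodes through the body's volume limits, which is the paper's key step.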

  17. The Importance of Electron Source Population to the Remarkable Enhancement of Radiation belt Electrons during the October 2012 Storm

    NASA Astrophysics Data System (ADS)

    Tu, W.; Cunningham, G.; Reeves, G. D.; Chen, Y.; Henderson, M. G.; Blake, J. B.; Baker, D. N.; Spence, H.

    2013-12-01

    During the October 8-9 2012 storm, the MeV electron fluxes in the heart of the outer radiation belt are first wiped out and then exhibit a three-orders-of-magnitude increase on the timescale of hours, as observed by the MagEIS and REPT instruments aboard the Van Allen Probes. There is strong observational evidence that the remarkable enhancement is due to local acceleration by chorus waves, as shown in the recent Science paper by Reeves et al. [1]. However, the importance of the dynamic electron source population, transported in from the plasma sheet, to the observed remarkable enhancement has not been studied. We illustrate the importance of the source population with our simulation of the event using the DREAM 3D diffusion model. Three new modifications have been implemented in the model: 1) incorporating a realistic and time-dependent low-energy boundary condition at 100 keV obtained from the MagEIS data; 2) utilizing event-specific chorus wave distributions derived from the low-energy electron precipitation observed by POES and validated against the in situ wave data from EMFISIS; 3) using an 'open' boundary condition at L*=11 and implementing electron lifetimes on the order of the drift period outside the solar-wind driven last closed drift shell. The model quantitatively reproduces the MeV electron dynamics during this event, including the fast dropout at the start of Oct. 8th, low electron flux during the first Dst dip, and the remarkable enhancement peaked at L*=4.2 during the second Dst dip. By comparing the model results with realistic source population against those with constant low-energy boundary (see figure), we find that the realistic electron source population is critical to reproduce the observed fast and significant increase of MeV electrons. [1] Reeves, G. D., et al. (2013), Electron Acceleration in the Heart of the Van Allen Radiation Belts, Science, DOI:10.1126/science.1237743.
Comparison between data and model results during the October 2012 storm for electrons at μ=3168 MeV/G and K=0.1 G^(1/2) Re. The top plot is the electron phase space density data measured by the two Van Allen Probes; the middle plot shows the results from the DREAM 3D diffusion model with a realistic electron source population derived from MagEIS data; and the bottom plot is the model results with a constant source population.

  18. High-energy neutrino fluxes from AGN populations inferred from X-ray surveys

    NASA Astrophysics Data System (ADS)

    Jacobsen, Idunn B.; Wu, Kinwah; On, Alvina Y. L.; Saxton, Curtis J.

    2015-08-01

    High-energy neutrinos and photons are complementary messengers, probing violent astrophysical processes and structural evolution of the Universe. X-ray and neutrino observations jointly constrain conditions in active galactic nuclei (AGN) jets: their baryonic and leptonic contents, and particle production efficiency. Testing two standard neutrino production models for the local source Cen A (Koers & Tinyakov and Becker & Biermann), we calculate the high-energy neutrino spectra of single AGN sources and derive the flux of high-energy neutrinos expected for the current epoch. Assuming that accretion determines both X-rays and particle creation, our parametric scaling relations predict neutrino yield in various AGN classes. We derive redshift-dependent number densities of each class, from Chandra and Swift/BAT X-ray luminosity functions (Silverman et al. and Ajello et al.). We integrate the neutrino spectrum expected from the cumulative history of AGN (correcting for cosmological and source effects, e.g. jet orientation and beaming). Both emission scenarios yield neutrino fluxes well above limits set by IceCube (by ~4-10^6 × at 1 PeV, depending on the assumed jet models for neutrino production). This implies that: (i) Cen A might not be a typical neutrino source as commonly assumed; (ii) both neutrino production models overestimate the efficiency; (iii) neutrino luminosity scales with accretion power differently among AGN classes and hence does not follow X-ray luminosity universally; (iv) some AGN are neutrino-quiet (e.g. below a power threshold for neutrino production); (v) neutrino and X-ray emission have different duty cycles (e.g. jets alternate between baryonic and leptonic flows); or (vi) some combination of the above.

  19. Perceived sources of work stress and satisfaction among hospital and community mental health staff, and their relation to mental health, burnout and job satisfaction.

    PubMed

    Prosser, D; Johnson, S; Kuipers, E; Szmukler, G; Bebbington, P; Thornicroft, G

    1997-07-01

    This questionnaire study examined perceived sources of stress and satisfaction at work among 121 mental health staff members. Five factors were derived from principal component analysis of sources of work stress items (stress from: role, poor support, clients, future, and overload), and accounted for 70% of the total variance. Four factors were derived from the items related to sources of job satisfaction (satisfaction from: career, working with people, management, and money), accounting for 68% of the variance. The associations of these factors with sociodemographic and job characteristics were examined, and they were entered as explanatory variables into regression models predicting mental health, burnout, and job satisfaction. Stress from "overload" was associated with being based outside an in-patient ward, and with emotional exhaustion and worse mental health. Stress related to the "future" was associated with not being white. Stress from "clients" was associated with the "depersonalization" component of burnout. Higher job satisfaction was associated with "management" and "working with people" as sources of satisfaction, whereas emotional exhaustion and poorer mental health were associated with less "career" satisfaction.
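Factor derivation by principal component analysis, as used here, can be sketched on synthetic questionnaire data. Only the sample size of 121 is taken from the study; the item and factor counts are illustrative:

```python
import numpy as np

# PCA via SVD on synthetic questionnaire items: two latent factors
# generate ten items for 121 respondents (illustrative dimensions).
rng = np.random.default_rng(0)
latent = rng.normal(size=(121, 2))            # two hypothetical stress factors
loadings = rng.normal(size=(2, 10))           # ten questionnaire items
items = latent @ loadings + 0.3 * rng.normal(size=(121, 10))

X = items - items.mean(axis=0)                # centre before PCA
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)               # variance explained per component

# with two real factors, the leading two components dominate
print(float(explained[:2].sum()))
```

In the study's terms, the rows of Vt give the item loadings that are inspected to name factors such as "overload" or "poor support", and the cumulative explained variance corresponds to the 70% and 68% figures reported.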

  20. Chemical transport model simulations of organic aerosol in ...

    EPA Pesticide Factsheets

    Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., POA–SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data.

  1. Improved alternating gradient transport and focusing of neutral molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalnins, Juris; Lambertson, Glen; Gould, Harvey

    2001-12-02

    Polar molecules, in strong-field seeking states, can be transported and focused by an alternating sequence of electric field gradients that focus in one transverse direction while defocusing in the other. We show, by calculation and numerical simulation, how one may greatly improve the alternating gradient transport and focusing of molecules. We use a new optimized multipole lens design, a FODO lattice beam transport line, and lenses to match the beam transport line to the beam source and the final focus. We derive analytic expressions for the potentials, fields, and gradients that may be used to design these lenses. We describe a simple lens optimization procedure and derive the equations of motion for tracking molecules through a beam transport line. As an example, we model a straight beamline that transports a 560 m/s jet-source beam of methyl fluoride molecules 15 m from its source and focuses it to 2 mm diameter. We calculate the beam transport line acceptance and transmission, for a beam with velocity spread, and estimate the transmitted intensity for specified source conditions. Possible applications are discussed.
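The alternating-gradient principle can be illustrated with thin-lens transfer matrices for one focus-drift-defocus-drift (FODO) cell; the focal length and drift length below are arbitrary, not the paper's optimized lens parameters:

```python
import numpy as np

# Thin-lens sketch of one alternating-gradient (FODO) cell: focusing lens,
# drift, defocusing lens of equal strength, drift.  Values are illustrative.
def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

f, L = 2.0, 1.0                                          # focal and drift lengths (m)
M = drift(L) @ thin_lens(-f) @ drift(L) @ thin_lens(f)   # one cell, one transverse plane

# the cell transports particles stably when |trace(M)| < 2
print(np.trace(M), abs(np.trace(M)) < 2.0)
```

The net focusing in both planes, despite each lens defocusing in one of them, is what makes alternating-gradient transport of strong-field seekers possible.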

  2. Investigating Causality Between Interacting Brain Areas with Multivariate Autoregressive Models of MEG Sensor Data

    PubMed Central

    Michalareas, George; Schoffelen, Jan-Mathijs; Paterson, Gavin; Gross, Joachim

    2013-01-01

    Abstract In this work, we investigate the feasibility of estimating causal interactions between brain regions based on multivariate autoregressive models (MAR models) fitted to magnetoencephalographic (MEG) sensor measurements. We first demonstrate the theoretical feasibility of estimating source-level causal interactions after projection of the sensor-level model coefficients onto the locations of the neural sources. Next, we show with simulated MEG data that causality, as measured by partial directed coherence (PDC), can be correctly reconstructed if the locations of the interacting brain areas are known. We further demonstrate that, if a very large number of brain voxels is considered as potential activation sources, PDC as a measure to reconstruct causal interactions is less accurate. In such cases the MAR model coefficients alone contain the meaningful causality information. The proposed method overcomes the problems of model nonrobustness and large computation times encountered during causality analysis by existing methods. These methods first project MEG sensor time-series onto a large number of brain locations, after which the MAR model is built on this large number of source-level time-series. Instead, through this work, we demonstrate that by building the MAR model on the sensor level and then projecting only the MAR coefficients into source space, the true causal pathways are recovered even when a very large number of locations are considered as sources. The main contribution of this work is that by this methodology entire brain causality maps can be efficiently derived without any a priori selection of regions of interest. Hum Brain Mapp, 2013. © 2012 Wiley Periodicals, Inc. PMID:22328419
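A sensor-level MAR fit of the kind described can be sketched with ordinary least squares on a simulated two-channel system. This is purely illustrative; real MEG analyses use higher model orders and many channels:

```python
import numpy as np

# Fit a two-channel MAR(1) model by least squares and read directed
# influence from the off-diagonal coefficients.  The simulated system
# drives channel 1 from channel 0 but not the reverse.
rng = np.random.default_rng(1)
A_true = np.array([[0.5, 0.0],
                   [0.4, 0.3]])      # entry (i, j): influence of channel j on i
T = 5000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + 0.1 * rng.normal(size=2)

# least squares for x[t] ~= A x[t-1]
C, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
A_hat = C.T

# the 0 -> 1 coupling is recovered; the absent 1 -> 0 coupling stays near zero
print(A_hat)
```

In the paper's scheme, it is this coefficient matrix (fitted on sensors) that gets projected into source space, rather than the time series themselves.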

  3. Empirical wind model for the middle and lower atmosphere. Part 2: Local time variations

    NASA Technical Reports Server (NTRS)

    Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Clark, R. R.; Franke, S. J.; Fraser, G. J.; Tsuda, T.; Vial, F.

    1993-01-01

    The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Local time variations in the mesosphere are derived from rocket soundings, incoherent scatter radar, MF radar, and meteor radar. Low-order spherical harmonics and Fourier series are used to describe these variations as a function of latitude and day of year with cubic spline interpolation in altitude. The model represents a smoothed compromise between the original data sources. Although agreement between various data sources is generally good, some systematic differences are noted. Overall root mean square differences between measured and model tidal components are on the order of 5 to 10 m/s.
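The Fourier-series representation of local time (tidal) variations can be sketched as a least-squares fit of diurnal and semidiurnal harmonics; the amplitudes and phases below are synthetic, not HWM93 coefficients:

```python
import numpy as np

# Least-squares Fourier fit of mean, diurnal and semidiurnal wind
# components from hourly values (synthetic data, illustrative amplitudes).
t = np.arange(0.0, 24.0, 1.0)                          # local time (hours)
wind = (10.0 * np.cos(2 * np.pi * (t - 6.0) / 24.0)    # diurnal tide, 10 m/s
        + 5.0 * np.cos(2 * np.pi * (t - 3.0) / 12.0)   # semidiurnal tide, 5 m/s
        + 2.0)                                         # mean wind, 2 m/s

cols = [np.ones_like(t)]
for period in (24.0, 12.0):
    cols += [np.cos(2 * np.pi * t / period), np.sin(2 * np.pi * t / period)]
coef, *_ = np.linalg.lstsq(np.column_stack(cols), wind, rcond=None)

amp_diurnal = np.hypot(coef[1], coef[2])
amp_semidiurnal = np.hypot(coef[3], coef[4])
print(round(float(amp_diurnal), 1), round(float(amp_semidiurnal), 1))  # -> 10.0 5.0
```

The 5 to 10 m/s root mean square differences quoted above are of the same order as such fitted tidal amplitudes, which is why the model is described as a smoothed compromise between data sources.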

  4. Impact of geophysical model error for recovering temporal gravity field model

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Luo, Zhicai; Wu, Yihao; Li, Qiong; Xu, Chuang

    2016-07-01

    The impact of geophysical model error on recovered temporal gravity field models with both real and simulated GRACE observations is assessed in this paper. With real GRACE observations, we build four temporal gravity field models, i.e., HUST08a, HUST11a, HUST04 and HUST05. HUST08a and HUST11a are derived from different ocean tide models (EOT08a and EOT11a), while HUST04 and HUST05 are derived from different non-tidal models (AOD RL04 and AOD RL05). The statistical result shows that the discrepancies of the annual mass variability amplitudes in six river basins between the HUST08a and HUST11a models, and between the HUST04 and HUST05 models, are all smaller than 1 cm, which demonstrates that geophysical model error only slightly affects the current GRACE solutions. The impact of geophysical model error for future missions with more accurate satellite ranging is also assessed by simulation. The simulation results indicate that for the current mission with range rate accuracy of 2.5 × 10^-7 m/s, observation error is the main reason for stripe error. However, when the range rate accuracy improves to 5.0 × 10^-8 m/s in the future mission, geophysical model error will be the main source of stripe error, which will limit the accuracy and spatial resolution of the temporal gravity model. Therefore, observation error should be the primary error source taken into account at the current range rate accuracy level, while more attention should be paid to improving the accuracy of background geophysical models for the future mission.

  5. Identification of Geologic and Anthropogenic Sources of Phosphorus to Streams in California and Portions of Adjacent States, U.S.A., Using SPARROW Modeling

    NASA Astrophysics Data System (ADS)

    Domagalski, J. L.

    2013-12-01

    The SPARROW (Spatially Referenced Regressions On Watershed Attributes) model allows for the simulation of nutrient transport at un-gauged catchments on a regional scale. The model was used to understand natural and anthropogenic factors affecting phosphorus transport in developed, undeveloped, and mixed watersheds. The SPARROW model is a statistical tool that allows for mass balance calculation of constituent sources, transport, and aquatic decay based upon a calibration of a subset of stream networks, where concentrations and discharge have been measured. Calibration is accomplished using potential sources for a given year and may include fertilizer, geological background (based on bed-sediment samples and aggregated with geochemical map units), point source discharge, and land use categories. NHD Plus version 2 was used to model the hydrologic system. Land to water transport variables tested were precipitation, permeability, soil type, tile drains, and irrigation. For this study area, point sources, cultivated land, and geological background are significant phosphorus sources to streams. Precipitation and clay content of soil are significant land to water transport variables and various stream sizes show significance with respect to aquatic decay. Specific rock types result in different levels of phosphorus loading and watershed yield. Some important geological sources are volcanic rocks (andesite and basalt), granodiorite, glacial deposits, and Mesozoic to Cenozoic marine deposits. Marine sediments vary in their phosphorus content, but are responsible for some of the highest natural phosphorus yields, especially along the Central and Southern California coast. The Miocene Monterey Formation was found to be an especially important local source in southern California. In contrast, mixed metamorphic and igneous assemblages such as argillites, peridotite, and shales of the Trinity Mountains of northern California result in some of the lowest phosphorus yields. 
The agriculturally productive Central Valley of California has a low amount of background phosphorus in spite of inputs from streams draining upland areas. Many years of intensive agriculture may be responsible for the decrease of soil phosphorus in that area. Watersheds with significant background sources of phosphorus and large amounts of cultivated land had some of the highest per hectare yields. Seven different stream systems important for water management, or to describe transport processes, were investigated in detail for downstream changes in sources and loads. For example, the Klamath River (Oregon and California) has intensive agriculture and andesite-derived phosphorus in the upper reach. The proportion of agricultural-derived phosphorus decreases as the river flows into California before discharge to the ocean. The river flows through at least three different types of geological background sources from high to intermediate to very low. Knowledge of the role of natural sources in developed watersheds is critical for developing nutrient management strategies and these model results will have applicability for the establishment of realistic nutrient criteria.

  6. A comparison of scoring weights for the EuroQol derived from patients and the general public.

    PubMed

    Polsky, D; Willke, R J; Scott, K; Schulman, K A; Glick, H A

    2001-01-01

    General health state classification systems, such as the EuroQol instrument, have been developed to improve the systematic measurement and comparability of health state preferences. In this paper we generate valuations for EuroQol health states using responses to this instrument's visual analogue scale made by patients enrolled in a randomized clinical trial evaluating tirilazad mesylate, a new drug used to treat subarachnoid haemorrhage. We then compare these valuations derived from patients with published valuations derived from responses made by a sample from the general public. The data were derived from two sources: (1) responses to the EuroQol instrument from 649 patients 3 months after enrollment in the clinical trial, and (2) a published study reporting a scoring rule for the EuroQol instrument that was based upon responses made by the general public. We used a linear regression model to develop an additive scoring rule. This rule enables direct valuation of all 243 EuroQol health states using patients' scores for their own health states elicited using a visual analogue scale. We then compared predicted scores generated using our scoring rule with predicted scores derived from a sample from the general public. The predicted scores derived using the additive scoring rules met convergent validity criteria and explained a substantial amount of the variation in visual analogue scale scores (R^2 = 0.57). In the pairwise comparison of the predicted scores derived from the study sample with those derived from the general public, we found that the former set of scores was higher for 223 of the 243 states. Despite the low level of correspondence in the pairwise comparison, the overall correlation between the two sets of scores was 87%.
The model presented in this paper demonstrated that scoring weights for the EuroQol instrument can be derived directly from patient responses from a clinical trial and that these weights can explain a substantial amount of variation in health valuations. Scoring weights based on patient responses are significantly higher than those derived from the general public. Further research is required to understand the source of these differences. Copyright 2001 John Wiley & Sons, Ltd.
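The additive scoring rule via linear regression can be sketched on synthetic data: regress visual analogue scores on per-dimension level scores and read the decrements off the coefficients. The dimension count and decrements below are illustrative, not the EuroQol tariff:

```python
import numpy as np

# Additive scoring-rule sketch: visual analogue scores regressed on the
# level of each health dimension.  Decrements and noise are illustrative.
rng = np.random.default_rng(2)
n, dims = 400, 5
states = rng.integers(0, 3, size=(n, dims))        # level 0/1/2 per dimension
true_decrement = np.array([0.05, 0.10, 0.08, 0.12, 0.06])
vas = 1.0 - states @ true_decrement + 0.02 * rng.normal(size=n)

X = np.column_stack([np.ones(n), states])          # intercept + level scores
coef, *_ = np.linalg.lstsq(X, vas, rcond=None)

# the fitted coefficients recover the generating decrements
print(np.allclose(coef[1:], -true_decrement, atol=0.01))
```

Once fitted, the rule scores any of the 3^5 = 243 states by summing the intercept and the relevant decrements, which is exactly what makes the direct valuation of all EuroQol states possible.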

  7. Point-source stochastic-method simulations of ground motions for the PEER NGA-East Project

    USGS Publications Warehouse

    Boore, David

    2015-01-01

    Ground-motions for the PEER NGA-East project were simulated using a point-source stochastic method. The simulated motions are provided for distances between 0 and 1200 km, M from 4 to 8, and 25 ground-motion intensity measures: peak ground velocity (PGV), peak ground acceleration (PGA), and 5%-damped pseudoabsolute response spectral acceleration (PSA) for 23 periods ranging from 0.01 s to 10.0 s. Tables of motions are provided for each of six attenuation models. The attenuation-model-dependent stress parameters used in the stochastic-method simulations were derived from inversion of PSA data from eight earthquakes in eastern North America.
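The core of the point-source stochastic method is shaping windowed Gaussian noise to a target Fourier amplitude spectrum. A bare-bones sketch with an illustrative omega-squared (Brune) source spectrum, omitting the path and site terms a real simulation includes:

```python
import numpy as np

# Stochastic-method sketch: window Gaussian noise, shape its spectrum to an
# omega-squared source model, measure the peak of the resulting series.
# Corner frequency, duration and scaling are illustrative assumptions.
rng = np.random.default_rng(3)
dt, n = 0.01, 4096
f = np.fft.rfftfreq(n, dt)
f0 = 1.0                                      # corner frequency (Hz)
spec = f**2 / (1.0 + (f / f0) ** 2)           # acceleration spectrum, flat above f0

noise = rng.normal(size=n) * np.hanning(n)    # windowed white noise
shaped = np.fft.irfft(np.fft.rfft(noise) * spec, n)
pga = np.abs(shaped).max()                    # peak-motion analogue (unscaled)
print(pga)
```

The stress parameter inverted from the eight eastern North America earthquakes enters through the corner frequency and overall spectral level, which is why it is the key tuning quantity in these simulations.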

  8. Multi-response calibration of a conceptual hydrological model in the semiarid catchment of Wadi al Arab, Jordan

    NASA Astrophysics Data System (ADS)

    Rödiger, T.; Geyer, S.; Mallast, U.; Merz, R.; Krause, P.; Fischer, C.; Siebert, C.

    2014-02-01

    A key factor for sustainable management of groundwater systems is the accurate estimation of groundwater recharge. Hydrological models are common and widely used tools for such estimations. As such models need to be calibrated against measured values, the absence of adequate data can be problematic. We present a nested multi-response calibration approach for a semi-distributed hydrological model in the semi-arid catchment of Wadi al Arab in Jordan, with sparsely available runoff data. The basic idea of the calibration approach is to use diverse observations in a nested strategy, in which sub-parts of the model are calibrated to various observation data types in a consecutive manner. First, the available data sources have to be screened for their information content on processes, e.g. whether a data source contains information on mean values or on spatial or temporal variability, and whether it covers the entire catchment or only sub-catchments. In a second step, the information content has to be mapped to the relevant model components, which represent these processes. Then the data source is used to calibrate the respective subset of model parameters, while the remaining model parameters are kept unchanged. This mapping is repeated for the other available data sources. In this study, the gauged spring discharge (GSD) method, flash flood observations and data from the chloride mass balance (CMB) are used to derive plausible parameter ranges for the conceptual hydrological model J2000g. The water table fluctuation (WTF) method is used to validate the model, and results from a model run using a priori parameter values from the literature serve as a benchmark. The estimated recharge rates of the calibrated model deviate less than ±10% from the estimates derived from the WTF method. Larger differences are visible in the years with high uncertainties in rainfall input data.
The performance of the calibrated model during validation is better than that of the model with only a priori parameter values. The model with a priori parameter values from the literature tends to overestimate recharge rates by up to 30%, particularly in the wet winter of 1991/1992. An overestimation of groundwater recharge, and hence of available water resources, clearly endangers reliable water resource management in water-scarce regions. The proposed nested multi-response approach may help to better predict water resources despite data scarcity.
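One of the calibration data sources, the chloride mass balance, is simple enough to state in one line: recharge equals precipitation scaled by the ratio of chloride concentrations. A sketch with illustrative numbers, not Wadi al Arab observations:

```python
# Chloride mass balance (CMB) sketch, one of the nested calibration's data
# sources: recharge = precipitation * Cl(precipitation) / Cl(groundwater).
# Input values are illustrative placeholders.
def cmb_recharge(precip_mm, cl_precip, cl_groundwater):
    """Annual recharge (mm) from the chloride mass balance."""
    return precip_mm * cl_precip / cl_groundwater

print(round(cmb_recharge(450.0, 5.0, 60.0), 1))   # -> 37.5 (mm/yr)
```

Because each such method constrains a different subset of processes, the nested strategy assigns each one to the model parameters it actually informs.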

  9. Central Compact Objects: some of them could be spinning up?

    NASA Astrophysics Data System (ADS)

    Benli, O.; Ertan, Ü.

    2018-05-01

    Among confirmed central compact objects (CCOs), only three sources have measured period and period derivatives. We have investigated possible evolutionary paths of these three CCOs in the fallback disc model. The model can account for the individual X-ray luminosities and rotational properties of the sources consistently with their estimated supernova ages. For these sources, reasonable model curves can be obtained with dipole field strengths ˜ a few × 109 G on the surface of the star. The model curves indicate that these CCOs were in the spin-up state in the early phase of evolution. The spin-down starts, while accretion is going on, at a time t ˜ 103 - 104 yr depending on the current accretion rate, period and the magnetic dipole moment of the star. This implies that some of the CCOs with relatively long periods, weak dipole fields and high X-ray luminosities could be strong candidates to show spin-up behavior if they indeed evolve with fallback discs.

  10. Analysis of earth rotation solution from Starlette

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Cheng, M. K.; Shum, C. K.; Eanes, R. J.; Tapley, B. D.

    1989-01-01

    Earth rotation parameter (ERP) solutions were derived from the Starlette orbit analysis during the Main MERIT Campaign, using a consider-covariance analysis to assess the effects of errors on the polar motion solutions. The polar motion solution was then improved through the simultaneous adjustment of some dynamical parameters representing identified dominant perturbing sources (such as the geopotential and ocean-tide coefficients) on the polar motion solutions. Finally, an improved ERP solution was derived using the gravity field model, PTCF1, described by Tapley et al. (1986). The accuracy of the Starlette ERP solution was assessed by a comparison with the LAGEOS-derived ERP solutions.

  11. Fermi Large Area Telescope Second Source Catalog

    NASA Technical Reports Server (NTRS)

    Nolan, P. L.; Abdo, A. A.; Ackermann, M.; Ajello, M.; Allafort, A.; Antolini, E.; Bonnell, J.; Cannon, A.; Celik, O.; Corbet, R.; et al.

    2012-01-01

    We present the second catalog of high-energy gamma-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), derived from data taken during the first 24 months of the science phase of the mission, which began on 2008 August 4. Source detection is based on the average flux over the 24-month period. The Second Fermi-LAT catalog (2FGL) includes source location regions, defined in terms of elliptical fits to the 95% confidence regions, and spectral fits in terms of power-law, exponentially cutoff power-law, or log-normal forms. Also included are flux measurements in 5 energy bands and light curves on monthly intervals for each source. Twelve sources in the catalog are modeled as spatially extended. We provide a detailed comparison of the results from this catalog with those from the first Fermi-LAT catalog (1FGL). Although the diffuse Galactic and isotropic models used in the 2FGL analysis are improved compared to the 1FGL catalog, we attach caution flags to 162 of the sources to indicate possible confusion with residual imperfections in the diffuse model. The 2FGL catalog contains 1873 sources detected and characterized in the 100 MeV to 100 GeV range, of which we consider 127 as being firmly identified and 1171 as being reliably associated with counterparts of known or likely gamma-ray-producing source classes.
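The three spectral forms named above can be written out directly; the normalization, pivot energy and indices below are illustrative placeholders (the log-normal form is the LogParabola parameterization commonly used for LAT sources):

```python
import numpy as np

# The three 2FGL-style spectral shapes; K, E0 and the indices are
# illustrative placeholders, not values for any catalog source.
def power_law(E, K, E0, gamma):
    return K * (E / E0) ** (-gamma)

def cutoff_power_law(E, K, E0, gamma, Ec):
    return K * (E / E0) ** (-gamma) * np.exp(-(E - E0) / Ec)

def log_parabola(E, K, E0, alpha, beta):
    return K * (E / E0) ** (-(alpha + beta * np.log(E / E0)))

E = np.logspace(2, 5, 31)            # 100 MeV to 100 GeV, in MeV
flux = power_law(E, K=1e-11, E0=1000.0, gamma=2.2)
print(bool(np.all(np.diff(flux) < 0)))   # a pure power law falls monotonically
```

All three forms coincide at the pivot energy E0, where the curvature (beta) and cutoff (Ec) terms vanish; this is what makes the pivot a convenient reference for the catalog's flux normalizations.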

  12. A review of catalytic hydrodeoxygenation of lignin-derived phenols from biomass pyrolysis.

    PubMed

    Bu, Quan; Lei, Hanwu; Zacher, Alan H; Wang, Lu; Ren, Shoujie; Liang, Jing; Wei, Yi; Liu, Yupeng; Tang, Juming; Zhang, Qin; Ruan, Roger

    2012-11-01

    Catalytic hydrodeoxygenation (HDO) of lignin-derived phenols, which are the least reactive chemical compounds in biomass pyrolysis oils, is reviewed. HDO catalysts are discussed, including traditional catalysts such as CoMo/Al(2)O(3) and NiMo/Al(2)O(3) and transition-metal (noble metal) catalysts. The mechanism of HDO of lignin-derived phenols is analyzed on the basis of different model compounds, and the kinetics of HDO of different lignin-derived model compounds is examined. The diversity of bio-oils leads to the complexity of HDO kinetics. Techno-economic analysis indicates that a series of major technical and economic issues must still be investigated in detail before HDO of lignin-derived phenols can be scaled up in existing refinery infrastructure. Key directions for future investigation include the significant challenges of improving catalysts and optimizing operating conditions, further understanding the kinetics of complex bio-oils, and securing a sustainable and cost-effective hydrogen source. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Assessing water resources in Azerbaijan using a local distributed model forced and constrained with global data

    NASA Astrophysics Data System (ADS)

    Bouaziz, Laurène; Hegnauer, Mark; Schellekens, Jaap; Sperna Weiland, Frederiek; ten Velden, Corine

    2017-04-01

    In many countries, data is scarce, incomplete and often not easily shared. In these cases, global satellite and reanalysis data provide an alternative to assess water resources. To assess water resources in Azerbaijan, a completely distributed and physically based hydrological wflow-sbm model was set up for the entire Kura basin. We used SRTM elevation data, a locally available river map and one from OpenStreetMap to derive the drainage direction network at the model resolution of approximately 1x1 km. OpenStreetMap data was also used to derive the fraction of paved area per cell to account for the reduced infiltration capacity (cf. Schellekens et al. 2014). We used the results of a global study to derive root zone capacity based on climate data (Wang-Erlandsson et al., 2016). To account for the variation in vegetation cover over the year, monthly averages of Leaf Area Index, based on MODIS data, were used. For the soil-related parameters, we used global estimates as provided by Dai et al. (2013). This enabled the rapid derivation of a first estimate of parameter values for our hydrological model. Digitized local meteorological observations were scarce and available only for a limited time period. Therefore several sources of global meteorological data were evaluated: (1) EU-WATCH global precipitation, temperature and derived potential evaporation for the period 1958-2001 (Harding et al., 2011), (2) WFDEI precipitation, temperature and derived potential evaporation for the period 1979-2014 (Weedon et al., 2014), (3) MSWEP precipitation (Beck et al., 2016) and (4) local precipitation data from more than 200 stations in the Kura basin, available from the NOAA website for a period up to 1991. The latter, together with data archives from Azerbaijan, were used as a benchmark to evaluate the global precipitation datasets for the overlapping period 1958-1991.
    By comparing the datasets, we found that monthly mean precipitation of EU-WATCH and WFDEI coincided well with the NOAA stations and that MSWEP slightly overestimated precipitation amounts. On a daily basis, there were discrepancies in peak timing and magnitude between measured precipitation and the global products. A bias between EU-WATCH and WFDEI temperature and potential evaporation was observed, and to model the water balance correctly EU-WATCH had to be corrected to the WFDEI mean monthly values. Overall, the available global sources enabled the rapid set-up and forcing of a hydrological model that assesses water resources in Azerbaijan with relatively good performance and limited calibration effort, and they allow for a similar set-up anywhere in the world. Timing and quantification of peak volume remain a weakness of global data, making it difficult to use for some applications (flooding) and for detailed calibration. Selecting and comparing different sources of global meteorological data is important to obtain a reliable set that improves model performance. - Beck et al., 2016. MSWEP: 3-hourly 0.25° global gridded precipitation (1979-2014) by merging gauge, satellite, and reanalysis data. Hydrol. Earth Syst. Sci. Discuss. - Dai, Y. et al., 2013. Development of a China Dataset of Soil Hydraulic Parameters Using Pedotransfer Functions for Land Surface Modeling. Journal of Hydrometeorology. - Harding, R. et al., 2011. WATCH: Current knowledge of the Terrestrial global water cycle. J. Hydrometeorol. - Schellekens, J. et al., 2014. Rapid setup of hydrological and hydraulic models using OpenStreetMap and the SRTM derived digital elevation model. Environmental Modelling & Software. - Wang-Erlandsson, L. et al., 2016. Global Root Zone Storage Capacity from Satellite-Based Evaporation. Hydrology and Earth System Sciences. - Weedon, G. et al., 2014. The WFDEI meteorological forcing data set: WATCH Forcing Data methodology applied to ERA-Interim reanalysis data. Water Resources Research.
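    The correction of EU-WATCH toward WFDEI mean monthly values described above amounts to a monthly bias correction. A minimal sketch of the multiplicative variant is given below; the function name and array layout are assumptions for illustration, and for temperature an additive shift is the more common choice than the scaling shown here.

```python
import numpy as np

def monthly_bias_correction(values, months, ref_values, ref_months):
    """Scale a daily series so its monthly means match a reference series.

    values/ref_values: daily data; months/ref_months: month index (1-12) per day.
    """
    corrected = values.astype(float).copy()
    for m in range(1, 13):
        sel, ref_sel = months == m, ref_months == m
        if sel.any() and ref_sel.any() and values[sel].mean() != 0:
            corrected[sel] *= ref_values[ref_sel].mean() / values[sel].mean()
    return corrected
```

After correction, each calendar month of the target series has the same mean as the corresponding month of the reference, while the daily variability pattern is preserved.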

  14. Controlled-source seismic interferometry with one way wave fields

    NASA Astrophysics Data System (ADS)

    van der Neut, J.; Wapenaar, K.; Thorbecke, J. W.

    2008-12-01

    In Seismic Interferometry we generally cross-correlate registrations at two receiver locations and sum over an array of sources to retrieve a Green's function as if one of the receiver locations hosts a (virtual) source and the other receiver location hosts an actual receiver. One application of this concept is to redatum an array of surface sources to a downhole receiver location, without requiring information about the medium between the sources and receivers, thus providing an effective tool for imaging below complex overburden; this is also known as the Virtual Source method. We demonstrate how elastic wavefield decomposition can be effectively combined with controlled-source Seismic Interferometry to generate virtual sources in a downhole receiver array that radiate only down- or upgoing P- or S-waves, with receivers sensing only down- or upgoing P- or S-waves. For this purpose we derive exact Green's matrix representations from a reciprocity theorem for decomposed wavefields. This requires the deployment of multi-component sources at the surface and multi-component receivers in a horizontal borehole. The theory is supported with a synthetic elastic model, where redatumed traces are compared with those of a directly modeled reflection response, generated by placing active sources at the virtual source locations and applying elastic wavefield decomposition on both source and receiver side.
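    The correlate-and-stack operation at the heart of the virtual-source idea can be sketched in a few lines. This is a schematic scalar version only: it ignores the elastic wavefield decomposition and multi-component aspects that are the subject of the abstract, and the array shapes are assumptions.

```python
import numpy as np

def virtual_source_trace(rec_a, rec_b):
    """Retrieve a virtual-source response at receiver B as if A were a source.

    rec_a, rec_b: arrays of shape (n_sources, n_samples), the recordings at
    receivers A and B from each surface source. Cross-correlate per source
    and stack over sources; the stack approximates the response A -> B.
    """
    n = rec_a.shape[1]
    stack = np.zeros(2 * n - 1)
    for a, b in zip(rec_a, rec_b):
        stack += np.correlate(b, a, mode="full")
    return stack  # zero lag sits at index n - 1
```

For an impulsive arrival at time t_a on A and t_b on B, the stacked correlation peaks at lag t_b - t_a, i.e. the traveltime from the virtual source at A to the receiver at B.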

  15. An integrated WRF/HYSPLIT modeling approach for the assessment of PM(2.5) source regions over the Mississippi Gulf Coast region.

    PubMed

    Yerramilli, Anjaneyulu; Dodla, Venkata Bhaskar Rao; Challa, Venkata Srinivas; Myles, Latoya; Pendergrass, William R; Vogel, Christoph A; Dasari, Hari Prasad; Tuluri, Francis; Baham, Julius M; Hughes, Robert L; Patrick, Chuck; Young, John H; Swanier, Shelton J; Hardy, Mark G

    2012-12-01

    Fine particulate matter (PM(2.5)) is formed largely from precursor gases, such as sulfur dioxide (SO(2)) and nitrogen oxides (NO(x)), which are emitted by intense industrial operations and transportation activities. PM(2.5) has been shown to affect respiratory health in humans. Evaluation of source regions and assessment of emission source contributions in the Gulf Coast region of the USA will be useful for the development of PM(2.5) regulatory and mitigation strategies. In the present study, the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model driven by the Weather Research & Forecasting (WRF) model is used to identify emission source locations and transport trends. Meteorological observations as well as PM(2.5) sulfate and nitric acid concentrations were collected at two sites during the Mississippi Coastal Atmospheric Dispersion Study, a summer 2009 field experiment along the Mississippi Gulf Coast. Meteorological fields during the campaign were simulated using WRF with three nested domains of 36, 12, and 4 km horizontal resolution and 43 vertical levels, and were validated against the North American Mesoscale Analysis. The HYSPLIT model was driven by meteorological fields derived from the WRF model to identify source locations using backward trajectory analysis. Backward trajectories for a 24-h period were plotted at 1-h intervals starting from the two observation locations to identify probable sources. The back trajectories distinctly indicated sources in directions between south and west, i.e., originating from local Mississippi, neighboring Louisiana, and the Gulf of Mexico. Of the eight power plants located within a 300-km radius of the two monitoring sites examined as sources, only the Watson, Cajun, and Morrow power plants fall in the path of the derived back trajectories.
    Forward dispersion patterns computed using HYSPLIT were plotted from each of these source locations, using hourly mean emission concentrations computed from past annual emission strength data, to assess the extent of their contributions. An assessment of the relative contributions from the eight sources reveals that only the Cajun and Morrow power plants contribute to the observations at the Wiggins Airport to a certain extent, while none of the eight power plants contribute to the observations at Harrison Central High School. As these observations represent a moderate event, with daily average values of 5-8 μg m(-3) for sulfate and 1-3 μg m(-3) for HNO(3) and differences between the two spatially separated sites, local sources may also be significant contributors to the observed PM(2.5) values.

  16. Human Oral Mucosa and Gingiva

    PubMed Central

    Zhang, Q.Z.; Nguyen, A.L.; Yu, W.H.; Le, A.D.

    2012-01-01

    Mesenchymal stem cells (MSCs) represent a heterogeneous population of progenitor cells with self-renewal and multipotent differentiation potential. Aside from their regenerative role, extensive in vitro and in vivo studies have demonstrated that MSCs are capable of potent immunomodulatory effects on a variety of innate and adaptive immune cells. In this article, we will review recent experimental studies on the characterization of a unique population of MSCs derived from human oral mucosa and gingiva, especially their immunomodulatory and anti-inflammatory functions and their application in the treatment of several in vivo models of inflammatory diseases. The ease of isolation, accessible tissue source, and rapid ex vivo expansion, with maintenance of stable stem-cell-like phenotypes, render oral mucosa- and gingiva-derived MSCs a promising alternative cell source for MSC-based therapies. PMID:22988012

  17. A new data-based model of the global magnetospheric B-field: Modular structure, parameterization, first results.

    NASA Astrophysics Data System (ADS)

    Tsyganenko, Nikolai

    2013-04-01

    A new advanced model of the dynamical geomagnetosphere is presented, based on a large set of data from the Geotail, Cluster, Polar, and THEMIS missions, taken during 138 storm events with SYM-H from -40 to -487 nT over the period from 1996 through 2012 in the range of geocentric distances from ~3 Re to ~60 Re. The model magnetic field is confined within a realistic magnetopause, based on the Lin et al. [JGRA, v.115, A04207, 2010] empirical boundary, driven by the dipole tilt angle, solar wind pressure, and IMF Bz. The magnetic field is modeled as a flexible combination of several modules, representing contributions from principal magnetospheric current systems such as the symmetric and partial ring currents (SRC/PRC), Region 1 and 2 field-aligned currents (FAC), and the equatorial tail current sheet (TCS). In the inner magnetosphere the model field is dominated by contributions from the SRC and PRC, derived from realistic particle pressure models and represented by four modules, providing a variable degree of dawn-dusk and noon-midnight asymmetry. The TCS field is comprised of several independent modules, ensuring sufficient flexibility of the model field and correct asymptotic values in the distant tail. The Region 2 FAC is an inherent part of the PRC, derived from the continuity of the azimuthal current. The Region 1 FAC is modulated by the diurnal and seasonal variations of the dipole tilt angle, in agreement with earlier statistical studies [Ohtani et al., JGRA, v.110, A09230, 2005]. Following the approach introduced in our earlier TS05 model [Tsyganenko and Sitnov, JGRA, v.110, A03208, 2005], contributions from all individual field sources are parameterized by external driving functions, derived from the solar wind/IMF OMNI database as solutions of dynamic equations with source and loss terms in the right-hand side. Global magnetic configurations and their evolution during magnetospheric storms are analyzed and discussed in the context of the model results.

  18. A flexible Monte Carlo tool for patient or phantom specific calculations: comparison with preliminary validation measurements

    NASA Astrophysics Data System (ADS)

    Davidson, S.; Cui, J.; Followill, D.; Ibbott, G.; Deasy, J.

    2008-02-01

    The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce an accurate dose calculation for advanced clinical applications. We have developed a flexible machine modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut off by a multiplying Fermi function, which effectively regularizes the difficult energy spectrum determination process. Commonly used functions are applied to represent the off-axis softening, increasing primary fluence with increasing angle ('the horn effect'), and electron contamination. The patient-dependent aspect of the MC dose calculation utilizes the multi-leaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to derive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth dose, dose profiles, and output factors. A 3-D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and measurements made using radiochromic film and thermoluminescent detectors (TLD).
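    The "Fatigue function cut off by a Fermi function" spectrum model can be sketched numerically. This is a hedged illustration only: it assumes the Fatigue function is the Birnbaum-Saunders (fatiguelife) distribution available in scipy, and every parameter value below is invented for the sketch, not a commissioned value for any machine.

```python
import numpy as np
from scipy.stats import fatiguelife

def fermi_cutoff(E, E_max, width):
    # smooth high-energy cutoff: ~1 well below E_max, -> 0 above it
    return 1.0 / (1.0 + np.exp((E - E_max) / width))

def photon_spectrum(E, shape=0.8, scale=2.0, E_max=6.0, width=0.2):
    # Fatigue (Birnbaum-Saunders) shape multiplied by a Fermi function
    return fatiguelife.pdf(E, shape, scale=scale) * fermi_cutoff(E, E_max, width)

E = np.linspace(0.05, 7.0, 200)  # MeV, for a nominal 6 MV beam
w = photon_spectrum(E)
```

The Fermi factor pins the spectrum to zero above the nominal accelerating potential, which is what regularizes the otherwise ill-posed spectrum fit.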

  19. Overview of the Mathematical and Empirical Receptor Models Workshop (Quail Roost II)

    NASA Astrophysics Data System (ADS)

    Stevens, Robert K.; Pace, Thompson G.

    On 14-17 March 1982, the U.S. Environmental Protection Agency sponsored the Mathematical and Empirical Receptor Models Workshop (Quail Roost II) at the Quail Roost Conference Center, Rougemont, NC. Thirty-five scientists were invited to participate. The objective of the workshop was to document and compare results of source apportionment analyses of simulated and real aerosol data sets. The simulated data set was developed by scientists from the National Bureau of Standards. It consisted of elemental mass data generated using a dispersion model that simulated transport of aerosols from a variety of sources to a receptor site. The real data set contained the mass, elemental, and ionic species concentrations of samples obtained in 18 consecutive 12-h sampling periods in Houston, TX. Some participants performed additional analyses of the Houston filters by X-ray powder diffraction, scanning electron microscopy, or light microscopy. Ten groups analyzed these data sets using a variety of modeling procedures. The results of the modeling exercises were evaluated and structured in a manner that permitted model intercomparisons. 
    The major conclusions and recommendations derived from the intercomparisons were: (1) using aerosol elemental composition data, receptor models can resolve major emission sources, but additional analyses (including light microscopy and X-ray diffraction) significantly increase the number of sources that can be resolved; (2) simulated data sets that contain up to 6 dissimilar emission sources need to be generated so that different receptor models can be adequately compared; (3) source apportionment methods need to be modified to incorporate a means of apportioning aerosol species such as sulfate and nitrate, formed from SO2 and NOx respectively, because current models tend to resolve particles into chemical species rather than to deduce their sources; and (4) a source signature library may need to be compiled for each airshed in order to improve the resolving capabilities of receptor models.

  20. Neutral and Plasma Sources in the Saturn's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Jurac, S.; Johnson, R. E.

    1999-05-01

    The heavy ion plasma in Saturn's inner magnetosphere is derived from the icy satellites and ring particles embedded in the plasma. Recent Hubble Space Telescope measurements of the densities of neutral OH molecules, which co-exist with and are precursors of the plasma ions, have constrained models for the plasma sources. Richardson et al. (1998) considered all existing HST observations, derived water-like neutral densities, and estimated the sources required to maintain equilibrium. Their neutral densities show a maximum close to Enceladus (where the E-ring density peaks), and the total neutral source rate needed to maintain the neutrals in steady state is an order of magnitude larger than the source rate given by Shi et al. (1995). We model the sputtering of water ice using a recently developed Monte Carlo collisional transport code and calculate neutral supply rates from sputtering of Enceladus and the E-ring. This collisional code, used previously to evaluate sputtering from interstellar grains (Jurac et al., 1998), is modified to include electronic processes relevant to water-ice sputtering and then applied to the E-ring grains. It is shown that the grain erosion rate increases substantially when the ion penetration depth becomes comparable to the grain radius. The sputtering and collection rates for plasma ions and neutrals are evaluated, and it is shown that the E-ring might be the dominant source of water-like neutrals in the Saturnian magnetosphere. We also describe the competition between grain growth and erosion and discuss implications for existing E-ring evolutionary models. References: Jurac, S., R. E. Johnson, B. Donn; Astrophys. J. 503, 247, 1998. Richardson, J. D., A. Eviatar, M. A. McGrath, V. M. Vasyliunas; J. Geophys. Res., 103, 20245, 1998. Shi, M., R. A. Baragiola, D. E. Grosjean, R. E. Johnson, S. Jurac and J. Schou; J. Geophys. Res., 100, 26387, 1995.

  1. Non-Poissonian Distribution of Tsunami Waiting Times

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2007-12-01

    Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from the exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog, γ=0.63-0.67 [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo in comparison to the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution that is the exponential distribution augmented by a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicates that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source.
Triggered seismogenic tsunamis do not necessarily originate from the same fault zone, however. For example, subduction-thrust and outer-rise earthquake pairs are evident, such as the November 2006 and January 2007 Kuril Islands tsunamigenic pair. Because of variations in tsunami source parameters, such as water depth above the source, triggered tsunami events with short waiting times are not systematically smaller than the primary tsunami.
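    The gamma-distribution test for clustering described in model (1) is straightforward to reproduce: fit a gamma distribution to the waiting times and check whether the shape parameter falls below 1 (a Poisson process gives exactly 1). The sketch below uses synthetic waiting times, not the tsunami catalog values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic clustered waiting times: gamma with shape < 1
# (an exponential/Poissonian process corresponds to shape = 1)
waits = rng.gamma(shape=0.7, scale=30.0, size=5000)

# fit with the location fixed at zero, as appropriate for waiting-time data
shape, loc, scale = stats.gamma.fit(waits, floc=0)

# shape < 1 means an over-abundance of short waits relative to Poisson
clustered = shape < 1.0
```

Fixing `floc=0` matters: letting the location float when the data are strictly positive waiting times typically degrades the shape estimate.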

  2. Uncertainties of fluxes and 13C / 12C ratios of atmospheric reactive-gas emissions

    NASA Astrophysics Data System (ADS)

    Gromov, Sergey; Brenninkmeijer, Carl A. M.; Jöckel, Patrick

    2017-07-01

    We provide a comprehensive review of the proxy data on the 13C / 12C ratios and uncertainties of emissions of reactive carbonaceous compounds into the atmosphere, with a focus on CO sources. Based on an evaluated set-up of the EMAC model, we derive the isotope-resolved data set of its emission inventory for the 1997-2005 period. Additionally, we revisit the calculus required for the correct derivation of uncertainties associated with isotope ratios of emission fluxes. The resulting δ13C of overall surface CO emission in 2000 of -(25.2 ± 0.7)‰ is in line with previous bottom-up estimates and is less uncertain by a factor of 2. In contrast to this, we find that uncertainties of the respective inverse modelling estimates may be substantially larger due to the correlated nature of their derivation. We reckon the δ13C values of surface emissions of higher hydrocarbons to be within -24 to -27‰ (uncertainty typically below ±1‰), with the exception of isoprene and methanol emissions, which are close to -30 and -60‰, respectively. The isotope signature of ethane surface emission coincides with earlier estimates, but integrates very different source inputs. δ13C values are reported relative to V-PDB.
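    The overall source signature referred to above is a flux-weighted mean of the component δ13C values. A minimal sketch of that mean and a simple uncorrelated-error propagation is given below; the three-component source mix and all error values are hypothetical, and the quadrature formula assumes independent errors, which is precisely the assumption the abstract warns breaks down for inverse-modelling estimates.

```python
import numpy as np

def weighted_delta(fluxes, deltas, flux_err, delta_err):
    """Flux-weighted mean delta13C and its uncertainty.

    Assumes uncorrelated errors: partial derivatives of
    d_tot = sum(f_i d_i) / sum(f_i) are propagated in quadrature.
    """
    F = fluxes.sum()
    d_tot = (fluxes * deltas).sum() / F
    var = (((fluxes / F) * delta_err) ** 2).sum() \
        + ((((deltas - d_tot) / F) * flux_err) ** 2).sum()
    return d_tot, np.sqrt(var)

# hypothetical CO source mix (flux units arbitrary, delta in permil)
f = np.array([500.0, 400.0, 100.0])
d = np.array([-27.0, -24.0, -21.0])
d_tot, sigma = weighted_delta(f, d, 0.1 * f, np.array([1.0, 1.0, 2.0]))
```

Note that a flux error only shifts the mean insofar as that component's δ13C differs from the total, which is why the second term carries the factor (δ_i - δ_tot).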

  3. Derivation and characterization of human fetal MSCs: an alternative cell source for large-scale production of cardioprotective microparticles.

    PubMed

    Lai, Ruenn Chai; Arslan, Fatih; Tan, Soon Sim; Tan, Betty; Choo, Andre; Lee, May May; Chen, Tian Sheng; Teh, Bao Ju; Eng, John Kun Long; Sidik, Harwin; Tanavde, Vivek; Hwang, Wei Sek; Lee, Chuen Neng; El Oakley, Reida Menshawe; Pasterkamp, Gerard; de Kleijn, Dominique P V; Tan, Kok Hian; Lim, Sai Kiang

    2010-06-01

    The therapeutic effects of mesenchymal stem cell (MSC) transplantation are increasingly thought to be mediated by MSC secretion. We have previously demonstrated that human ESC-derived MSCs (hESC-MSCs) produce cardioprotective microparticles in a pig model of myocardial ischemia/reperfusion (MI/R) injury. As the safety and availability of clinical grade human ESCs remain a concern, MSCs from fetal tissue sources were evaluated as alternatives. Here we derived five MSC cultures from limb, kidney and liver tissues of three first-trimester aborted fetuses and, like our previously described hESC-derived MSCs, they were highly expandable and had similar telomerase activities. Each line has the potential to generate at least 10(16-19) cells or 10(7-10) doses of cardioprotective secretion for a pig model of MI/R injury. Unlike previously described fetal MSCs, they did not express pluripotency-associated markers such as Oct4, Nanog or Tra1-60. They displayed a typical MSC surface antigen profile and differentiated into adipocytes, osteocytes and chondrocytes in vitro. Global gene expression analysis by microarray and qRT-PCR revealed a typical MSC gene expression profile that was highly correlated among the five fetal MSC cultures and with that of hESC-MSCs (r(2)>0.90). Like hESC-MSCs, they produced secretion that was cardioprotective in a mouse model of MI/R injury. HPLC analysis of the secretion revealed the presence of a population of microparticles with a hydrodynamic radius of 50-65 nm. This purified population of microparticles was cardioprotective at approximately 1/10 the dosage of the crude secretion. (c) 2009 Elsevier Ltd. All rights reserved.

  4. Modeling and identifying the sources of radiocesium contamination in separate sewerage systems.

    PubMed

    Pratama, Mochamad Adhiraga; Yoneda, Minoru; Yamashiki, Yosuke; Shimada, Yoko; Matsui, Yasuto

    2018-05-01

    The Fukushima Dai-ichi nuclear power plant accident released radiocesium in large amounts. The released radionuclides contaminated much of the surrounding environment, including sewers in urban areas of Fukushima prefecture. In this study we attempted to identify and quantify the sources of radiocesium contamination in separate sewerage systems and developed a compartment model based on the Radionuclide Migration in Urban Environments and Drainage Systems (MUD) model. Measurements of the time-dependent radiocesium concentration in sewer sludge combined with meteorological, demographic, and radiocesium dietary intake data indicated that rainfall-derived inflow and infiltration (RDII) and human excretion were the chief contributors of radiocesium contamination in a separate sewerage system. The quantities of contamination derived from RDII and human excretion were calculated and used in the modified MUD model to simulate radiocesium contamination in sewers in three urban areas in Fukushima prefecture: Fukushima, Koriyama, and Nihonmatsu Cities. The Nash efficiency coefficient (0.88-0.92) and determination coefficient (0.89-0.93) calculated in an evaluation of our compartment model indicated that the model produced satisfactory results. We also used the model to estimate the total volume of sludge with radiocesium concentrations in excess of the clearance level, based on the number of months elapsed after the accident. Estimations by our model suggested that wastewater treatment plants (WWTPs) in Fukushima, Koriyama, and Nihonmatsu generated about 1,750,000 m(3) of radioactive sludge in total, a level in good agreement with the real data. Copyright © 2017 Elsevier B.V. All rights reserved.
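    The two skill scores quoted above have standard definitions: the Nash(-Sutcliffe) efficiency compares model error against the variance of the observations, and the determination coefficient is the squared correlation between observed and simulated series. A minimal sketch with invented data:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

def determination_coefficient(obs, sim):
    # squared Pearson correlation between observed and simulated series
    return np.corrcoef(obs, sim)[0, 1] ** 2

obs = np.array([10.0, 12.0, 9.0, 14.0, 11.0])  # hypothetical observations
sim = np.array([10.5, 11.5, 9.5, 13.0, 11.0])  # hypothetical model output
nse = nash_sutcliffe(obs, sim)
```

An NSE of 1 is a perfect fit, 0 means the model is no better than predicting the observed mean, and negative values mean it is worse.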

  5. Simulation of the pulse propagation by the interacting mode parabolic equation method

    NASA Astrophysics Data System (ADS)

    Trofimov, M. Yu.; Kozitskiy, S. B.; Zakharenko, A. D.

    2018-07-01

    Broadband modeling of pulses has been performed by using the previously derived interacting mode parabolic equation through Fourier synthesis. Test examples on a wedge with angle 2.86° (known as the ASA benchmark) show excellent agreement with the method of source images.
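    The Fourier-synthesis step can be sketched as: solve the single-frequency problem at each frequency, weight by the source spectrum, and inverse-transform to obtain the time-domain pulse. In the sketch below the per-frequency field, which in practice would come from the interacting mode parabolic equation, is replaced by a stand-in transfer function that only applies a propagation delay; all numbers are illustrative.

```python
import numpy as np

# Fourier synthesis: weight per-frequency solutions by the source spectrum
# and inverse-transform to get the time-domain pulse at the receiver.
fs, n = 1000.0, 1024
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
f0 = 100.0
spectrum = np.exp(-((freqs - f0) / 25.0) ** 2)  # Gaussian-band source spectrum

# stand-in for the per-frequency CW field: a pure propagation delay tau
tau = 0.1
transfer = np.exp(-2j * np.pi * freqs * tau)

pulse = np.fft.irfft(spectrum * transfer, n)
t_peak = np.argmax(np.abs(pulse)) / fs
```

Because all spectral components are in phase at t = tau, the synthesized pulse peaks at the propagation delay, which is a convenient sanity check before substituting a real frequency-domain solver.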

  6. Habituation Is Not Enough: Novelty Preferences, Search, and Memory in Infancy.

    ERIC Educational Resources Information Center

    Sophian, Catherine

    1980-01-01

    Critically evaluates habituation and related models for studying infant memory, focusing on methodological and substantive limitations which restrict the derivation of information from them. The essay considers existing research on the development of object permanence as an alternative source of information about infant memory. (Author/DB)

  7. Formulation of consumables management models: Consumables flight planning worksheet update. [space shuttles

    NASA Technical Reports Server (NTRS)

    Newman, C. M.

    1977-01-01

    The updated consumables flight planning worksheet (CFPWS) is documented. The update includes: (1) additional consumables: ECLSS ammonia, APU propellant, HYD water; (2) additional on orbit activity for development flight instrumentation (DFI); (3) updated use factors for all consumables; and (4) sources and derivations of the use factors.

  8. [Marriage and divorce in Japan].

    PubMed

    Haderka, J

    1986-01-01

    Marriage patterns in Japan are analyzed using data from secondary sources. The author notes that although legislation affecting marriage and the family is derived from European models, traditional Japanese attitudes concerning the subservient role of women have a significant impact. The problems faced by women experiencing divorce are noted. (SUMMARY IN ENG AND RUS)

  9. Hybrid 3D printing: a game-changer in personalized cardiac medicine?

    PubMed

    Kurup, Harikrishnan K N; Samuel, Bennett P; Vettukattil, Joseph J

    2015-12-01

    Three-dimensional (3D) printing in congenital heart disease has the potential to increase procedural efficiency and patient safety by improving interventional and surgical planning and reducing radiation exposure. Cardiac magnetic resonance imaging and computed tomography are usually the source datasets to derive 3D printing. More recently, 3D echocardiography has been demonstrated to derive 3D-printed models. The integration of multiple imaging modalities for hybrid 3D printing has also been shown to create accurate printed heart models, which may prove to be beneficial for interventional cardiologists, cardiothoracic surgeons, and as an educational tool. Further advancements in the integration of different imaging modalities into a single platform for hybrid 3D printing and virtual 3D models will drive the future of personalized cardiac medicine.

  10. Irrigation water demand: A meta-analysis of price elasticities

    NASA Astrophysics Data System (ADS)

    Scheierling, Susanne M.; Loomis, John B.; Young, Robert A.

    2006-01-01

    Metaregression models are estimated to investigate sources of variation in empirical estimates of the price elasticity of irrigation water demand. Elasticity estimates are drawn from 24 studies reported in the United States since 1963, including mathematical programming, field experiments, and econometric studies. The mean price elasticity is 0.48. Long-run elasticities, those that are most useful for policy purposes, are likely larger than the mean estimate. Empirical results suggest that estimates may be more elastic if they are derived from mathematical programming or econometric studies and calculated at a higher irrigation water price. Less elastic estimates are found to be derived from models based on field experiments and in the presence of high-valued crops.
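    A metaregression of the kind estimated above regresses reported elasticities on study characteristics such as estimation method and the water price at which the elasticity was evaluated. The sketch below fits such a regression by ordinary least squares on synthetic data; the covariates, coefficients, and sample size are invented for illustration and are not the study's estimates.

```python
import numpy as np

# synthetic meta-data: 24 elasticity estimates with study characteristics
rng = np.random.default_rng(1)
n = 24
econometric = rng.integers(0, 2, n)   # 1 if econometric/programming study
price = rng.uniform(10.0, 60.0, n)    # water price at which elasticity evaluated
elasticity = 0.2 + 0.15 * econometric + 0.004 * price + rng.normal(0, 0.02, n)

# metaregression: elasticity ~ intercept + method dummy + price
X = np.column_stack([np.ones(n), econometric, price])
coef, *_ = np.linalg.lstsq(X, elasticity, rcond=None)
```

Positive coefficients on the method dummy and on price would correspond to the abstract's finding that estimates are more elastic for econometric/programming studies and at higher water prices.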

  11. Radiative transfer of HCN: interpreting observations of hyperfine anomalies

    NASA Astrophysics Data System (ADS)

    Mullins, A. M.; Loughnane, R. M.; Redman, M. P.; Wiles, B.; Guegan, N.; Barrett, J.; Keto, E. R.

    2016-07-01

    Molecules with hyperfine splitting of their rotational line spectra are useful probes of optical depth, via the relative line strengths of their hyperfine components. The hyperfine splitting is particularly advantageous in interpreting the physical conditions of the emitting gas because, with a second rotational transition, both gas density and temperature can be derived. For HCN, however, the relative strengths of the hyperfine lines are anomalous. They appear in ratios which can vary significantly from source to source, and are inconsistent with local thermodynamic equilibrium (LTE). This is the HCN hyperfine anomaly, and it prevents the use of simple LTE models of HCN emission to derive reliable optical depths. In this paper, we demonstrate how to model HCN hyperfine line emission, and derive accurate line ratios, spectral line shapes and optical depths. We show that by carrying out radiative transfer calculations over each hyperfine level individually, as opposed to summing them over each rotational level, the anomalous hyperfine emission emerges naturally. To do this requires not only accurate radiative rates between hyperfine states, but also accurate collisional rates. We investigate the effects of different sets of hyperfine collisional rates, derived via the proportional method and through direct recoupling calculations. Through an extensive parameter sweep over typical low-mass star-forming conditions, we show the HCN line ratios to be highly sensitive to optical depth. We also reproduce an observed effect whereby the red-blue asymmetry of the hyperfine lines (an infall signature) switches sense within a single rotational transition.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wosnik, Martin; Bachant, Pete; Neary, Vincent Sinclair

    CACTUS, developed by Sandia National Laboratories, is an open-source code for the design and analysis of wind and hydrokinetic turbines. While it has undergone extensive validation for both vertical-axis and horizontal-axis wind turbines, and has been demonstrated to accurately predict the performance of horizontal (axial-flow) hydrokinetic turbines, its ability to predict the performance of crossflow hydrokinetic turbines had yet to be tested. The present study addresses this problem by comparing the performance curves predicted by CACTUS simulations of the U.S. Department of Energy's 1:6 scale reference model crossflow turbine to those derived from experimental measurements of the same model turbine in a tow tank at the University of New Hampshire. It shows that CACTUS cannot accurately predict the performance of this crossflow turbine, raising concerns about its application to crossflow hydrokinetic turbines generally. The lack of quality data on NACA 0021 foil aerodynamic (hydrodynamic) characteristics over the wide range of angles of attack (AoA) and Reynolds numbers is identified as the main cause of the poor model prediction. A comparison of several NACA 0021 foil data sources, derived from both physical and numerical modeling experiments, indicates significant discrepancies at the high AoA experienced by foils on crossflow turbines. Users of CACTUS for crossflow hydrokinetic turbines are therefore advised to limit its application to higher tip speed ratios (lower AoA) and to carefully verify the reliability and accuracy of their foil data. Accurate empirical data on the aerodynamic characteristics of the foil are the greatest limitation to predicting the performance of crossflow turbines with semi-empirical models like CACTUS. Future improvements of CACTUS for crossflow turbine performance prediction will require the development of accurate foil aerodynamic characteristic data sets within the appropriate ranges of Reynolds numbers and AoA.
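The advice to stay at higher tip speed ratios follows from blade kinematics: ignoring induced velocities, a crossflow-turbine blade at azimuth theta sees alpha = atan(sin(theta) / (lambda + cos(theta))), whose peak value falls as the tip speed ratio lambda rises. This is a geometric sketch, not part of CACTUS.

```python
import math

def aoa_deg(tsr, theta):
    """Geometric angle of attack (degrees) of a crossflow-turbine blade
    at azimuth theta (radians), ignoring induced velocities:
    alpha = atan( sin(theta) / (tsr + cos(theta)) )."""
    return math.degrees(math.atan2(math.sin(theta), tsr + math.cos(theta)))

def max_aoa_deg(tsr, n=3600):
    """Peak |alpha| over one rotation; analytically asin(1/tsr) for tsr > 1."""
    return max(abs(aoa_deg(tsr, 2 * math.pi * i / n)) for i in range(n))
```

For example, max_aoa_deg(2.0) is about 30 degrees while max_aoa_deg(4.0) is about 14.5 degrees, so higher tip speed ratios keep the foil within the AoA range where tabulated data are more reliable.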

  13. AMI-LA observations of the SuperCLASS supercluster

    NASA Astrophysics Data System (ADS)

    Riseley, C. J.; Grainge, K. J. B.; Perrott, Y. C.; Scaife, A. M. M.; Battye, R. A.; Beswick, R. J.; Birkinshaw, M.; Brown, M. L.; Casey, C. M.; Demetroullas, C.; Hales, C. A.; Harrison, I.; Hung, C.-L.; Jackson, N. J.; Muxlow, T.; Watson, B.; Cantwell, T. M.; Carey, S. H.; Elwood, P. J.; Hickish, J.; Jin, T. Z.; Razavi-Ghods, N.; Scott, P. F.; Titterington, D. J.

    2018-03-01

    We present a deep survey of the Super-Cluster Assisted Shear Survey (SuperCLASS) supercluster - a region of sky known to contain five Abell clusters at redshift z ˜ 0.2 - performed using the Arcminute Microkelvin Imager (AMI) Large Array (LA) at 15.5 GHz. Our survey covers an area of approximately 0.9 deg2. We achieve a nominal sensitivity of 32.0 μJy beam-1 towards the field centre, finding 80 sources above a 5σ threshold. We derive the radio colour-colour distribution for sources common to three surveys that cover the field and identify three sources with strongly curved spectra - a high-frequency-peaked source and two GHz-peaked-spectrum sources. The differential source count (i) agrees well with previous deep radio source counts, (ii) exhibits no evidence of an emerging population of star-forming galaxies, down to a limit of 0.24 mJy, and (iii) disagrees with some models of the 15 GHz source population. However, our source count is in agreement with recent work that provides an analytical correction to the source count from the Square Kilometre Array Design Study (SKADS) Simulated Sky, supporting the suggestion that this discrepancy is caused by an abundance of flat-spectrum galaxy cores as yet not included in source population models.

  14. LCS-1: a high-resolution global model of the lithospheric magnetic field derived from CHAMP and Swarm satellite observations

    NASA Astrophysics Data System (ADS)

    Olsen, Nils; Ravat, Dhananjay; Finlay, Christopher C.; Kother, Livia K.

    2017-12-01

    We derive a new model, named LCS-1, of Earth's lithospheric field based on four years (2006 September-2010 September) of magnetic observations taken by the CHAMP satellite at altitudes lower than 350 km, as well as almost three years (2014 April-2016 December) of measurements taken by the two lower Swarm satellites Alpha and Charlie. The model is determined entirely from magnetic 'gradient' data (approximated by finite differences): the north-south gradient is approximated by first differences of 15 s along-track data (for CHAMP and each of the two Swarm satellites), while the east-west gradient is approximated by the difference between observations taken by Swarm Alpha and Charlie. In total, we used 6.2 million data points. The model is parametrized by 35 000 equivalent point sources located on an almost equal-area grid at a depth of 100 km below the surface (WGS84 ellipsoid). The amplitudes of these point sources are determined by minimizing the misfit to the magnetic satellite 'gradient' data together with the global average of |Br| at the ellipsoid surface (i.e. applying an L1 model regularization of Br). In a final step, we transform the point-source representation to a spherical harmonic expansion. The model shows very good agreement with previous satellite-derived lithospheric field models at low degree (degree correlation above 0.8 for degrees n ≤ 133). Comparison with independent near-surface aeromagnetic data from Australia yields good agreement (coherence >0.55) at horizontal wavelengths down to at least 250 km, corresponding to spherical harmonic degree n ≈ 160. The LCS-1 vertical component and field intensity anomaly maps at Earth's surface show similar features to those exhibited by the WDMAM2 and EMM2015 lithospheric field models truncated at degree 185 in regions where they include near-surface data and provide unprecedented detail where they do not.
Example regions of improvement include the Bangui anomaly region in central Africa, the west African cratons, the East African Rift region, the Bay of Bengal, the southern 90°E ridge, the Cretaceous quiet zone south of the Walvis Ridge and the younger parts of the South Atlantic.

  15. Simulation of ground-water flow in the St Peter aquifer in an area contaminated by coal-tar derivatives, St Louis Park, Minnesota

    USGS Publications Warehouse

    Lorenz, D.L.; Stark, J.R.

    1990-01-01

    Model simulations also indicated that drawdown caused by pumping two wells, each pumping at 75 gallons per minute and located about 1 mile southeast of the source of contamination, would be effective in controlling movement and volume of contaminated ground water in the immediate area of the source of contamination. Some contamination may already have moved beyond the influence of these wells, however, because of a complex set of hydraulic conditions.

  16. Cosmogenic 36Cl in karst waters: Quantifying contributions from atmospheric and bedrock sources

    NASA Astrophysics Data System (ADS)

    Johnston, V. E.; McDermott, F.

    2009-12-01

    Improved reconstructions of cosmogenic isotope production through time are crucial to understanding past solar variability. As a preliminary step towards deriving atmospheric 36Cl/Cl solar proxy time-series from speleothems, we quantify 36Cl sources in cave dripwaters. Atmospheric 36Cl fallout rates are a potential proxy for solar output; however, extraneous 36Cl derived from in-situ production in cave host-rocks could complicate the solar signal. Results from numerical modeling and preliminary geochemical data presented here show that the atmospheric 36Cl source dominates in many, but not all, cave dripwaters. At favorable low-elevation, mid-latitude sites, 36Cl-based speleothem solar irradiance reconstructions could extend back to 500 ka, with a possible centennial-scale temporal resolution. This would represent a marginal improvement in resolution compared with existing polar ice core records, with the added advantages of a wider geographic range, an independent U-series-constrained chronology, and the potential for contemporaneous climate signals within the same speleothem material.

  17. Modeling and Assimilating Ocean Color Radiances

    NASA Technical Reports Server (NTRS)

    Gregg, Watson

    2012-01-01

    Radiances are the source of information from ocean color sensors used to produce estimates of biological and geochemical constituents. They potentially provide information on various other aspects of global biological and chemical systems, and there is considerable work involved in deriving new information from these signals. Each derived product, however, contains errors introduced by the application of the radiances, above and beyond the radiance errors themselves. A global biogeochemical model with an explicit spectral radiative transfer model is used to investigate the potential of assimilating radiances. The results indicate gaps in our understanding of radiative processes in the oceans and their relationships with biogeochemical variables. Most important, detritus optical properties are not well characterized and produce important effects on the simulated radiances. Specifically, there does not appear to be a relationship between detrital biomass and its optical properties, as there is for chlorophyll. Approximations are necessary to get beyond this problem. In this report we discuss the challenges in modeling and assimilating water-leaving radiances and the prospects for improving our understanding of biogeochemical processes by utilizing these signals.

  18. A 1D-2D Shallow Water Equations solver for discontinuous porosity field based on a Generalized Riemann Problem

    NASA Astrophysics Data System (ADS)

    Ferrari, Alessia; Vacondio, Renato; Dazzi, Susanna; Mignosa, Paolo

    2017-09-01

    A novel augmented Riemann solver capable of handling porosity discontinuities in 1D and 2D Shallow Water Equation (SWE) models is presented. With the aim of accurately approximating the porosity source term, a Generalized Riemann Problem is derived by adding a fictitious equation to the SWE system and imposing mass and momentum conservation across the porosity discontinuity. The modified Shallow Water Equations are theoretically investigated, and the implementation of an augmented Roe solver in a 1D Godunov-type finite volume scheme is presented. Robust treatment of transonic flows is ensured by introducing an entropy fix based on the wave pattern of the Generalized Riemann Problem. An Exact Riemann Solver is also derived in order to validate the numerical model. As an extension of the 1D scheme, an analogous 2D numerical model is derived and validated through test cases with radial symmetry. Finally, the capability of the 1D and 2D numerical models to capture different wave patterns is assessed against several Riemann Problems.

  19. Rate-distortion analysis of dead-zone plus uniform threshold scalar quantization and its application--part II: two-pass VBR coding for H.264/AVC.

    PubMed

    Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming

    2013-01-01

    In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for generalized Gaussian distributions. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method offers high stability and low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) an average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB for JM 16.0's method, with an average rate control error of 1.95%; and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
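The qualitative rate-quantization and distortion-quantization behavior of such quantizers is easy to reproduce empirically. The sketch below quantizes Laplacian samples (a stand-in for the generalized Gaussian residual model) with a dead-zone quantizer and measures empirical level entropy and MSE; the rounding offset f = 1/6 is an assumption in the spirit of H.264 inter coding, not the paper's fitted model.

```python
import math
import random
from collections import Counter

def quantize(x, q, f=1/6):
    """Dead-zone plus uniform threshold quantizer with uniform
    reconstruction: level = sign(x) * floor(|x|/q + f), with f < 1/2."""
    lvl = int(abs(x) / q + f)
    return lvl if x >= 0 else -lvl

def rate_and_distortion(samples, q):
    """Empirical entropy of the levels (bits/sample, a rate proxy) and MSE."""
    levels = [quantize(x, q) for x in samples]
    n = len(levels)
    counts = Counter(levels)
    rate = -sum(c / n * math.log2(c / n) for c in counts.values())
    mse = sum((x - lvl * q) ** 2 for x, lvl in zip(samples, levels)) / n
    return rate, mse

random.seed(0)
# Laplacian residuals as a stand-in for the generalized Gaussian source
lap = [random.expovariate(1.0) * random.choice((-1, 1)) for _ in range(20000)]
r_fine, d_fine = rate_and_distortion(lap, 0.5)
r_coarse, d_coarse = rate_and_distortion(lap, 2.0)
# coarser quantization step: lower rate, higher distortion
```

Fitting parametric R-Q and D-Q curves to points generated this way is the empirical analogue of the model calibration described in the abstract.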

  20. Transcriptome comparison of human neurons generated using induced pluripotent stem cells derived from dental pulp and skin fibroblasts.

    PubMed

    Chen, Jian; Lin, Mingyan; Foxe, John J; Pedrosa, Erika; Hrabovsky, Anastasia; Carroll, Reed; Zheng, Deyou; Lachman, Herbert M

    2013-01-01

    Induced pluripotent stem cell (iPSC) technology provides an opportunity to study neuropsychiatric disorders through the capacity to grow patient-specific neurons in vitro. Skin fibroblasts obtained by biopsy have been the most reliable source of cells for reprogramming. However, using other somatic cells obtained by less invasive means would be ideal, especially in children with autism spectrum disorders (ASD) and other neurodevelopmental conditions. In addition to fibroblasts, iPSCs have been developed from cord blood, lymphocytes, hair keratinocytes, and dental pulp from deciduous teeth. Of these, dental pulp would be a good source for studying neurodevelopmental disorders in children because obtaining material is non-invasive. We investigated its suitability for disease modeling by carrying out gene expression profiling, using RNA-seq, on differentiated neurons derived from iPSCs made from dental pulp extracted from deciduous teeth (T-iPSCs) and fibroblasts (F-iPSCs). This is the first RNA-seq analysis comparing gene expression profiles in neurons derived from iPSCs made from different somatic cells. For the most part, gene expression profiles were quite similar, with only 329 genes showing differential expression at a nominally significant p-value (p < 0.05), of which 63 remained significant after genome-wide correction (FDR < 0.05). The most striking difference was the lower level of expression detected for numerous members of all four HOX gene families in neurons derived from T-iPSCs. In addition, an increased level of expression was seen for several transcription factors expressed in the developing forebrain (FOXP2, OTX1, and LHX2, for example). Overall, pathway analysis revealed that differentially expressed genes with higher levels of expression in neurons derived from T-iPSCs were enriched for genes implicated in schizophrenia (SZ). The findings suggest that neurons derived from T-iPSCs are suitable for disease modeling of neuropsychiatric disorders and may have some advantages over those derived from F-iPSCs.
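The genome-wide FDR correction mentioned above is commonly the Benjamini-Hochberg step-up procedure; a minimal sketch follows, assuming BH is the method used (the abstract does not name it).

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: mark each p-value
    significant at false-discovery rate alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank  # largest rank whose p-value passes its threshold
    significant = [False] * m
    for i in order[:k]:
        significant[i] = True
    return significant
```

For example, benjamini_hochberg([0.01, 0.02, 0.03, 0.5]) flags the first three tests and rejects the fourth, whereas a plain Bonferroni cutoff of 0.05/4 would flag only the first.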

  1. Preliminary Spreadsheet of Eruption Source Parameters for Volcanoes of the World

    USGS Publications Warehouse

    Mastin, Larry G.; Guffanti, Marianne; Ewert, John W.; Spiegel, Jessica

    2009-01-01

    Volcanic eruptions that spew tephra into the atmosphere pose a hazard to jet aircraft. For this reason, the International Civil Aviation Organization (ICAO) has designated nine Volcanic Ash Advisory Centers (VAACs) around the world whose purpose is to track ash clouds from eruptions and notify aircraft so that they may avoid them. During eruptions, VAACs and their collaborators run volcanic-ash transport and dispersion (VATD) models that forecast the location and movement of ash clouds. These models require as input parameters the plume height H, the mass-eruption rate, the duration D, the erupted volume V (in cubic kilometers of bubble-free or 'dense-rock-equivalent' [DRE] magma), and the mass fraction of erupted tephra with a particle size smaller than 63 µm (m63). Some parameters, such as the mass-eruption rate and the mass fraction of fine debris, are not obtainable by direct observation; others, such as plume height or duration, are observable but may be unavailable in the early hours of an eruption when VATD models are being initialized. For this reason, ash-cloud modelers need to have at their disposal source parameters for a particular volcano that are based on its recent eruptive history and represent the most likely anticipated eruption. They also need source parameters that encompass the range of uncertainty in eruption size or characteristics. In spring 2007, a workshop was held at the U.S. Geological Survey (USGS) Cascades Volcano Observatory to derive a protocol for assigning eruption source parameters to ash-cloud models during eruptions. The protocol derived from this effort was published by Mastin and others (in press), along with a world map displaying the assigned eruption type for each of the world's volcanoes. Their report, however, did not include the assigned eruption types in tabular form. Therefore, this Open-File Report presents that table in the form of an Excel spreadsheet. These assignments are preliminary and will be modified to follow upcoming recommendations by the volcanological and aviation communities.
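For context, the widely cited empirical fit from Mastin and others relates plume height to the DRE volumetric eruption rate as H ≈ 2.00 V^0.241 (H in km above the vent, V in m³/s). The sketch below applies that published relation to estimate a mass-eruption rate from an observed plume height; the DRE magma density of 2500 kg/m³ is an assumed typical value.

```python
def plume_height_km(volume_flux_m3s):
    """Empirical fit of Mastin and others (2009):
    H = 2.00 * V**0.241, with H in km above the vent and V in m^3/s DRE."""
    return 2.00 * volume_flux_m3s ** 0.241

def mass_eruption_rate(height_km, dre_density=2500.0):
    """Invert the fit for the mass eruption rate in kg/s; the DRE magma
    density of 2500 kg/m^3 is an assumed value, not part of the fit."""
    volume_flux = (height_km / 2.00) ** (1.0 / 0.241)
    return volume_flux * dre_density
```

This is exactly the kind of fallback relation a VAAC modeler can use when the mass-eruption rate is unobservable but the plume height has been reported.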

  2. Vector image method for the derivation of elastostatic solutions for point sources in a plane layered medium. Part 1: Derivation and simple examples

    NASA Technical Reports Server (NTRS)

    Fares, Nabil; Li, Victor C.

    1986-01-01

    An image method algorithm is presented for the derivation of elastostatic solutions for point sources in bonded halfspaces assuming the infinite space point source is known. Specific cases were worked out and shown to coincide with well known solutions in the literature.

  3. Application of positive matrix factorization to identify potential sources of PAHs in soil of Dalian, China.

    PubMed

    Wang, Degao; Tian, Fulin; Yang, Meng; Liu, Chenlin; Li, Yi-Fan

    2009-05-01

    Sources of polycyclic aromatic hydrocarbons (PAHs) in soil of the Dalian region, China were investigated using positive matrix factorization (PMF). Three factors were resolved by PMF from statistical analysis of both the summer and winter datasets. These factors were dominated by the pattern of single sources or groups of similar sources, showing seasonal and regional variations. The main sources of PAHs in Dalian soil in summer were emissions from coal combustion (46% on average), diesel engines (30%), and gasoline engines (24%). In winter, the main sources were emissions from coal-fired boilers (72%), traffic (20% on average), and gasoline engines (8%). The strong seasonality of these factors indicated that coal combustion in winter and traffic exhaust in summer dominated the sources of PAHs in soil. These results suggest that the PMF model is an appropriate approach for identifying the sources of PAHs in soil.
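PMF is a non-negative factorization of the sample-by-species concentration matrix into source profiles and source contributions. The sketch below uses plain (unweighted) Lee-Seung multiplicative-update NMF as an illustrative stand-in; real PMF additionally down-weights each residual by its measurement uncertainty, and the matrices here are invented.

```python
import random

def matmul(A, B):
    """Row-by-column product of two list-of-lists matrices."""
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def frob_err(X, Y):
    """Summed squared difference between two equal-shaped matrices."""
    return sum((x - y) ** 2 for rx, ry in zip(X, Y) for x, y in zip(rx, ry))

def nmf(X, k, iters=300, seed=0):
    """Lee-Seung multiplicative updates for X ~ W H with W, H >= 0."""
    rng = random.Random(seed)
    n, m = len(X), len(X[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    eps = 1e-9
    for _ in range(iters):
        Wt = [list(r) for r in zip(*W)]
        num, den = matmul(Wt, X), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
        Ht = [list(r) for r in zip(*H)]
        num, den = matmul(X, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H

# Invented 6-sample x 4-species data built from two non-negative factors
W_true = [[1, 0], [0, 1], [1, 1], [2, 0], [0, 2], [1, 2]]
H_true = [[0.5, 0.3, 0.1, 0.1], [0.1, 0.1, 0.4, 0.4]]
X = matmul(W_true, H_true)
err_short = frob_err(X, matmul(*nmf(X, 2, iters=5)))
err_long = frob_err(X, matmul(*nmf(X, 2, iters=300)))
# the multiplicative updates monotonically reduce the reconstruction error
```

In a source-apportionment setting, the rows of H would be interpreted as source profiles (e.g. combustion vs. traffic signatures) and W as per-sample contributions.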

  4. Inversion of sonobuoy data from shallow-water sites with simulated annealing.

    PubMed

    Lindwall, Dennis; Brozena, John

    2005-02-01

    An enhanced simulated annealing algorithm is used to invert sparsely sampled seismic data collected with sonobuoys to obtain seafloor geoacoustic properties at two littoral marine environments, as well as for a synthetic data set. Inversion of field data from a 750-m water-depth site using a water-gun sound source found a good solution, which included a pronounced subbottom reflector, after 6483 iterations over seven variables. Field data from a 250-m water-depth site using an air-gun source required 35,421 iterations for a good inversion solution because 30 variables had to be solved for, including the shot-to-receiver offsets. The sonobuoy-derived compressional-wave velocity-depth (Vp-Z) models compare favorably with Vp-Z models derived from nearby, high-quality, multichannel seismic data. There are, however, substantial differences between seafloor reflection coefficients calculated from the field models and those based on commonly used Vp regression curves (gradients). Reflection loss is higher at one field site and lower at the other than predicted from commonly used Vp gradients for terrigenous sediments. In addition, there are strong effects on reflection loss due to the subseafloor interfaces that are also not predicted by Vp gradients.
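A minimal simulated-annealing loop of the kind used for such inversions (Metropolis acceptance with geometric cooling) can be sketched as follows. The two-parameter "geoacoustic" misfit and all numbers are invented, and the toy problem is far simpler than the paper's 7- and 30-variable inversions.

```python
import math
import random

def simulated_annealing(misfit, bounds, steps=20000, t0=1.0, seed=1):
    """Metropolis acceptance with geometric cooling; perturbations are
    Gaussian with a scale tied to each parameter's bounded range."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    fx = misfit(x)
    best, fbest = x[:], fx
    for i in range(steps):
        t = t0 * 0.999 ** i
        cand = [min(hi, max(lo, xi + rng.gauss(0, 0.1 * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]
        fc = misfit(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
    return best, fbest

# Invented two-parameter target: sediment Vp (m/s) and layer thickness (m)
TRUTH = (1600.0, 40.0)

def misfit(m):
    """Quadratic stand-in for a real waveform/traveltime data misfit."""
    return ((m[0] - TRUTH[0]) / 1000.0) ** 2 + ((m[1] - TRUTH[1]) / 100.0) ** 2

model, err = simulated_annealing(misfit, [(1450.0, 1800.0), (5.0, 100.0)])
```

A real inversion would replace the quadratic misfit with the discrepancy between observed and forward-modeled seismograms, which is why iteration counts grow so quickly with the number of free variables.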

  5. Tectonic isolation from regional sediment sourcing of the Paradox Basin

    NASA Astrophysics Data System (ADS)

    Smith, T. M.; Saylor, J.; Sundell, K. E.; Lapen, T. J.

    2017-12-01

    The Appalachian and Ouachita-Marathon mountain ranges were created by a series of tectonic collisions that occurred through the middle and late Paleozoic along North America's eastern and southern margins, respectively. Previous work employing detrital zircon U-Pb geochronology has demonstrated that fluvial and eolian systems transported Appalachian-derived sediment across the continent to North America's Paleozoic western margin. However, contemporaneous intraplate deformation of the Ancestral Rocky Mountains (ARM) compartmentalized much of the North American western interior and mid-continent. We employ lithofacies characterization, stratigraphic thickness, paleocurrent data, sandstone petrography, and detrital zircon U-Pb geochronology to evaluate source-sink relationships of the Paradox Basin, which is one of the most prominent ARM basins. Evaluation of provenance is conducted through quantitative comparison of detrital zircon U-Pb distributions from basin samples and potential sources via detrital zircon mixture modeling, and is augmented with sandstone petrography. Mixing model results provide a measure of individual source contributions to basin stratigraphy, and are combined with outcrop and subsurface data (e.g., stratigraphic thickness and facies distributions) to create tectonic isolation maps. These maps elucidate drainage networks and the degree to which local versus regional sources influence sediment character within a single basin, or multiple depocenters. Results show that despite the cross-continental ubiquity of Appalachian-derived sediment, fluvial and deltaic systems throughout much of the Paradox Basin do not record their influence. Instead, sediment sourcing from the Uncompahgre Uplift, which has been interpreted to drive tectonic subsidence and formation of the Paradox Basin, completely dominated its sedimentary record. 
Further, the strong degree of tectonic isolation experienced by the Paradox Basin appears to be an emerging, yet common feature among other intraplate, tectonically active basins.
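Detrital zircon mixture modeling amounts to finding source weights whose combined age distribution best matches the sample's. A one-fraction grid-search sketch follows; the binned age spectra and the two source labels are invented for illustration, not data from the study.

```python
def mix(f, a, b):
    """Convex combination f*a + (1-f)*b of two binned distributions."""
    return [f * x + (1.0 - f) * y for x, y in zip(a, b)]

def best_mixing_fraction(sample, src1, src2, n=1000):
    """Grid-search the fraction of src1 minimizing the summed squared
    misfit between the sample spectrum and the mixed spectrum."""
    def sse(f):
        return sum((s - m) ** 2 for s, m in zip(sample, mix(f, src1, src2)))
    return min((i / n for i in range(n + 1)), key=sse)

# Invented binned age spectra (normalized densities over five age bins)
uncompahgre = [0.05, 0.10, 0.30, 0.40, 0.15]
appalachian = [0.35, 0.30, 0.20, 0.10, 0.05]
observed = mix(0.9, uncompahgre, appalachian)  # locally dominated basin fill
f_hat = best_mixing_fraction(observed, uncompahgre, appalachian)
```

Recovering a fraction near 1 for the local source is the quantitative signature of the tectonic isolation the abstract describes; published mixture models generalize this to many sources and use statistical distance metrics on full U-Pb age distributions.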

  6. Force on Force Modeling with Formal Task Structures and Dynamic Geometry

    DTIC Science & Technology

    2017-03-24

    task framework, derived using the MMF methodology to structure a complex mission. It further demonstrated the integration of effects from a range of...application methodology was intended to support a combined developmental testing (DT) and operational testing (OT) strategy for selected systems under test... methodology to develop new or modify existing Models and Simulations (M&S) to: • Apply data from multiple, distributed sources (including test

  7. Harmonic regression of Landsat time series for modeling attributes from national forest inventory data

    Treesearch

    Barry T. Wilson; Joseph F. Knight; Ronald E. McRoberts

    2018-01-01

    Imagery from the Landsat Program has been used frequently as a source of auxiliary data for modeling land cover, as well as a variety of attributes associated with tree cover. With ready access to all scenes in the archive since 2008 due to the USGS Landsat Data Policy, new approaches to deriving such auxiliary data from dense Landsat time series are required. Several...

  8. Visco-elastic controlled-source full waveform inversion without surface waves

    NASA Astrophysics Data System (ADS)

    Paschke, Marco; Krause, Martin; Bleibinhaus, Florian

    2016-04-01

    We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image-method to ensure a stress-free condition at the surface. The time-domain data is Fourier-transformed at every point in the model space during the forward modeling for a given set of frequencies. The motivation for this approach is the reduced amount of memory when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Frechet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows for computing the resolution matrix. To reduce the size of the Frechet derivative matrix, and to stabilize the inversion, an adapted inverse mesh is used. The node spacing is controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S) we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code with a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and muting surface waves in controlled-source full waveform inversion.

  9. Modeling of Dual Gate Material Hetero-dielectric Strained PNPN TFET for Improved ON Current

    NASA Astrophysics Data System (ADS)

    Kumari, Tripty; Saha, Priyanka; Dash, Dinesh Kumar; Sarkar, Subir Kumar

    2018-01-01

    The tunnel field effect transistor (TFET) is considered a promising alternative device for future low-power VLSI circuits due to its steep subthreshold slope, low leakage current and efficient performance at low supply voltage. However, the main challenge in realizing TFETs for wide-scale applications is their low ON current. To overcome this, a dual gate material with the concept of dielectric engineering has been incorporated into the conventional TFET structure to tune the tunneling width at the source-channel interface, allowing significant flow of carriers. In addition, an N+ pocket is implanted at the source-channel junction of the proposed structure, and the effect of strain is added to explore the performance of the model in the nanoscale regime. All these added features improve the device characteristics, leading to higher ON current, lower leakage and lower threshold voltage. The present work derives the surface potential, electric field expression and drain current by solving the 2D Poisson's equation under different boundary conditions. A comparative analysis of the proposed model with a conventional TFET has been performed to establish the superiority of the proposed structure. All analytical results have been compared with results obtained from the SILVACO ATLAS device simulator to establish the accuracy of the derived analytical model.

  10. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp; Zhang, Xu

    2015-07-07

    Recent research has shown that yield strength increases steeply in metals as sample size decreases. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation sources and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation "pile-up" effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts sample strengths that agree well with experimental data, and it gives a more precise prediction than the current single-arm source model, especially for materials with low stacking fault energy.
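The weakest-link role of stochastic source lengths can be caricatured in a Monte Carlo sketch: strength is set by the longest dislocation source available, and smaller pillars truncate the source-length distribution. The functional form tau = tau0 + k / L_max and every constant below are assumptions for illustration, not the paper's model.

```python
import random

def pillar_strength(diameter_nm, source_density, rng, tau0=10.0, k=1000.0):
    """Assumed weakest-link form: strength is controlled by the longest
    dislocation source, tau = tau0 + k / L_max, with source lengths
    capped by the pillar diameter and their count scaling with size."""
    n_sources = max(1, int(source_density * diameter_nm))
    lengths = [rng.uniform(1.0, diameter_nm) for _ in range(n_sources)]
    return tau0 + k / max(lengths)

rng = random.Random(42)
small = sum(pillar_strength(100.0, 0.05, rng) for _ in range(200)) / 200
large = sum(pillar_strength(1000.0, 0.05, rng) for _ in range(200)) / 200
# smaller pillars host fewer, shorter sources and so are stronger on average
```

The same truncation argument also illustrates source starvation: as the diameter shrinks, n_sources in this sketch eventually drops to the floor of one, and strength becomes dominated by a single, short, highly variable source.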

  11. Case study of dust event sources from the Gobi and Taklamakan deserts: An investigation of the horizontal evolution and topographical effect using numerical modeling and remote sensing.

    PubMed

    Fan, Jin; Yue, Xiaoying; Sun, Qinghua; Wang, Shigong

    2017-06-01

    A severe dust event occurred from April 23 to April 27, 2014, in East Asia. A state-of-the-art online atmospheric chemistry model, WRF/Chem, was combined with a dust model, GOCART, to better understand the entire process of this event. Natural color images and aerosol optical depth (AOD) over the dust source region were derived from moderate resolution imaging spectroradiometer (MODIS) datasets from NASA's Aqua satellite to trace the dust variation and to verify the model results. Several meteorological variables, such as pressure, temperature, wind vectors and relative humidity, were used to analyze the meteorological dynamics. The results suggest that dust emission occurred only on April 23 and 24, although the event lasted for 5 days. The Gobi Desert was the main source for this event, and the Taklamakan Desert played no important role. This study also suggests that the landform of the source region can markedly affect a dust event. The Tarim Basin has a topographical effect as a "dust reservoir" and can store unsettled dust, which can be released again as a second source, making a dust event longer and heavier.

  12. Preliminary Report on U-Th-Pb Isotope Systematics of the Olivine-Phyric Shergottite Tissint

    NASA Technical Reports Server (NTRS)

    Moriwaki, R.; Usui, T.; Yokoyama, T.; Simon, J. I.; Jones, J. H.

    2014-01-01

    Geochemical studies of shergottites suggest that their parental magmas reflect mixtures between at least two distinct geochemical source reservoirs, producing correlations between radiogenic isotope compositions and trace element abundances. These correlations have been interpreted as indicating the presence of a reduced, incompatible-element-depleted reservoir and an oxidized, incompatible-element-rich reservoir. The former is clearly a depleted mantle source, but there has been a long debate regarding the origin of the enriched reservoir. Two contrasting models have been proposed regarding the location and mixing process of the two geochemical source reservoirs: (1) assimilation of oxidized crust by mantle-derived, reduced magmas, or (2) mixing of two distinct mantle reservoirs during melting. The former clearly requires the ancient martian crust to be the enriched source (crustal assimilation), whereas the latter requires a long-lived enriched mantle domain that probably originated from residual melts formed during solidification of a magma ocean (heterogeneous mantle model). This study conducts Pb isotope and U-Th-Pb concentration analyses of the olivine-phyric shergottite Tissint because U-Th-Pb isotope systematics have been used intensively as a powerful radiogenic tracer to characterize old crust/sediment components in mantle-derived terrestrial ocean island basalts. The U-Th-Pb analyses are applied to sequential acid-leaching fractions obtained from Tissint whole-rock powder in order to search for Pb isotopic source components in the Tissint magma. Here we report preliminary results of the U-Th-Pb analyses of acid leachates and a residue, and propose the possibility that Tissint experienced minor assimilation of old martian crust.

  13. A behavioral choice model of the use of car-sharing and ride-sourcing services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dias, Felipe F.; Lavieri, Patrícia S.; Garikapati, Venu M.

    There are a number of disruptive mobility services that are increasingly finding their way into the marketplace. Two key examples of such services are car-sharing services and ride-sourcing services. In an effort to better understand the influence of various exogenous socio-economic and demographic variables on the frequency of use of ride-sourcing and car-sharing services, this paper presents a bivariate ordered probit model estimated on a survey data set derived from the 2014-2015 Puget Sound Regional Travel Study. Model estimation results show that users of these services tend to be young, well-educated, higher-income, working individuals residing in higher-density areas. There are significant interaction effects reflecting the influence of children and the built environment on disruptive mobility service usage. The model developed in this paper provides key insights into factors affecting market penetration of these services, and can be integrated into larger travel forecasting model systems to better predict the adoption and use of mobility-on-demand services.
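A minimal sketch of the ordered-probit data-generating process underlying such a model (univariate here for brevity; the paper's model is bivariate, and the coefficient and cutpoint values below are purely illustrative):

```python
import math
import random

def simulate_ordered_probit(x, beta, cutpoints, rng):
    """Draw one ordinal response (0..len(cutpoints)) from a latent-variable
    model: y* = beta*x + eps, eps ~ N(0,1); category set by the cutpoints."""
    y_star = beta * x + rng.gauss(0.0, 1.0)
    category = 0
    for c in cutpoints:
        if y_star > c:
            category += 1
    return category

rng = random.Random(42)
beta, cutpoints = 1.0, [-0.5, 0.5, 1.5]   # illustrative values only
low  = [simulate_ordered_probit(-1.0, beta, cutpoints, rng) for _ in range(2000)]
high = [simulate_ordered_probit(+1.0, beta, cutpoints, rng) for _ in range(2000)]
mean_low  = sum(low) / len(low)
mean_high = sum(high) / len(high)
# A larger covariate value shifts mass toward higher usage-frequency categories
print(mean_low, mean_high)
```

Estimation then recovers beta and the cutpoints by maximum likelihood; the bivariate version adds a correlated second latent equation for the other service.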

  14. High-accuracy 3D Fourier forward modeling of gravity field based on the Gauss-FFT technique

    NASA Astrophysics Data System (ADS)

    Zhao, Guangdong; Chen, Bo; Chen, Longwei; Liu, Jianxin; Ren, Zhengyong

    2018-03-01

    The 3D Fourier forward modeling of 3D density sources is capable of providing 3D gravity anomalies consistent with the meshed density distribution within the whole source region. This paper first derives a set of analytical expressions, employing 3D Fourier transforms, for calculating the gravity anomalies of a 3D density source approximated by right rectangular prisms. To reduce the errors due to aliasing, imposed periodicity and edge effects in Fourier-domain modeling, we apply the 3D Gauss-FFT technique to 3D forward modeling of gravity anomalies. The capability and adaptability of this scheme are tested on simple synthetic models. The results show that the accuracy of the Fourier forward methods using the Gauss-FFT with 4 Gaussian nodes (or more) is comparable to that of spatial-domain modeling. In addition, the "ghost" source effects in the 3D Fourier forward gravity field, due to the imposed periodicity of the standard FFT algorithm, are markedly suppressed by the application of the 3D Gauss-FFT algorithm. More importantly, the execution times of the 4-node Gauss-FFT modeling are reduced by two orders of magnitude compared with the spatial-domain forward method. This demonstrates that the improved Fourier method is an efficient and accurate forward modeling tool for the gravity field.
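The spatial-domain forward modelling that the Gauss-FFT method is benchmarked against can be sketched very naively by discretizing the density body into point masses and summing their vertical attractions (the grid and density contrast below are illustrative assumptions; the paper itself uses closed-form prism expressions and FFTs):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_point_masses(xs, ys, zs, cell, rho, xp, yp):
    """Vertical gravity anomaly at surface point (xp, yp, 0) from a density
    body discretized into cubic point-mass cells (z positive downward)."""
    dm = rho * cell**3           # mass of one cubic cell
    total = 0.0
    for x in xs:
        for y in ys:
            for z in zs:
                r2 = (x - xp)**2 + (y - yp)**2 + z**2
                total += dm * z / r2**1.5   # z-component of attraction
    return G * total

# A 100 m cube of +300 kg/m^3 density contrast, top buried 50 m deep
cell = 10.0
grid = [cell * (i + 0.5) for i in range(10)]
xs = [g - 50.0 for g in grid]          # centred on x = 0
ys = [g - 50.0 for g in grid]          # centred on y = 0
zs = [g + 50.0 for g in grid]          # depth 50..150 m
g_centre = gz_point_masses(xs, ys, zs, cell, 300.0, 0.0, 0.0)
g_offset = gz_point_masses(xs, ys, zs, cell, 300.0, 500.0, 0.0)
print(g_centre, g_offset)   # anomaly decays away from the body
```

The Fourier approach replaces this O(sources × stations) summation with FFT-based convolution, which is where the two-orders-of-magnitude speedup comes from.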

  15. In Vitro Enzymatic Depolymerization of Lignin with Release of Syringyl, Guaiacyl, and Tricin Units

    PubMed Central

    Gall, Daniel L.; Kontur, Wayne S.; Lan, Wu; Kim, Hoon; Li, Yanding; Ralph, John

    2017-01-01

    ABSTRACT New environmentally sound technologies are needed to derive valuable compounds from renewable resources. Lignin, an abundant polymer in terrestrial plants comprised predominantly of guaiacyl and syringyl monoaromatic phenylpropanoid units, is a potential natural source of aromatic compounds. In addition, the plant secondary metabolite tricin is a recently discovered and moderately abundant flavonoid in grasses. The most prevalent interunit linkage between guaiacyl, syringyl, and tricin units is the β-ether linkage. Previous studies have shown that bacterial β-etherase pathway enzymes catalyze glutathione-dependent cleavage of β-ether bonds in dimeric β-ether lignin model compounds. To date, however, it remains unclear whether the known β-etherase enzymes are active on lignin polymers. Here we report on enzymes that catalyze β-ether cleavage from bona fide lignin, under conditions that recycle the cosubstrates NAD+ and glutathione. Guaiacyl, syringyl, and tricin derivatives were identified as reaction products when different model compounds or lignin fractions were used as substrates. These results demonstrate an in vitro enzymatic system that can recycle cosubstrates while releasing aromatic monomers from model compounds as well as natural and engineered lignin oligomers. These findings can improve the ability to produce valuable aromatic compounds from a renewable resource like lignin. IMPORTANCE Many bacteria are predicted to contain enzymes that could convert renewable carbon sources into substitutes for compounds that are derived from petroleum. The β-etherase pathway present in sphingomonad bacteria could cleave the abundant β–O–4-aryl ether bonds in plant lignin, releasing a biobased source of aromatic compounds for the chemical industry. However, the activity of these enzymes on the complex aromatic oligomers found in plant lignin is unknown. 
Here we demonstrate biodegradation of lignin polymers using a minimal set of β-etherase pathway enzymes, the ability to recycle needed cofactors (glutathione and NAD+) in vitro, and the release of guaiacyl, syringyl, and tricin as depolymerized products from lignin. These observations provide critical evidence for the use and future optimization of these bacterial β-etherase pathway enzymes for industrial-level biotechnological applications designed to derive high-value monomeric aromatic compounds from lignin. PMID:29180366

  16. Second-order singular perturbative theory for gravitational lenses

    NASA Astrophysics Data System (ADS)

    Alard, C.

    2018-03-01

    The extension of the singular perturbative approach to the second order is presented in this paper. The general expansion to the second order is derived. The second-order expansion is considered as a small correction to the first-order expansion. Using this approach, it is demonstrated that in practice the second-order expansion is reducible to a first-order expansion via a re-definition of the first-order perturbative fields. Even if in usual applications the second-order correction is small, the reducibility of the second-order expansion to the first-order expansion indicates a potential degeneracy issue. In general, this degeneracy is hard to break. A useful and simple second-order approximation is the thin-source approximation, which offers a direct estimation of the correction. The practical application of the corrections derived in this paper is illustrated using an elliptical NFW lens model. The second-order perturbative expansion provides a noticeable improvement, even for the simplest case of the thin-source approximation. To conclude, it is clear that for accurate modelling of gravitational lenses using the perturbative method, the second-order perturbative expansion should be considered. In particular, an evaluation of the degeneracy due to the second-order term should be performed, for which the thin-source approximation is particularly useful.

  17. On the evolution of high-B radio pulsars with measured braking indices

    NASA Astrophysics Data System (ADS)

    Benli, O.; Ertan, Ü.

    2017-11-01

    We have investigated the long-term evolution of the high-magnetic-field radio pulsars (HBRPs) with measured braking indices in the same model that was applied earlier to individual anomalous X-ray pulsars (AXPs), soft gamma repeaters (SGRs) and dim isolated neutron stars (XDINs). We have shown that the rotational properties (period, period derivative and braking index) and the X-ray luminosity of individual HBRPs can be reproduced simultaneously by neutron stars evolving with fallback discs. The model sources reach the observed properties of HBRPs in the propeller phases, when pulsed radio emission is allowed, at ages consistent with the estimated ages of the supernova remnants of the sources. Our results indicate that the strengths of the magnetic dipole fields of HBRPs are comparable to, and even greater than, those of AXP/SGRs and XDINs, but still one or two orders of magnitude smaller than the values inferred from the magnetic dipole torque formula. The possible evolutionary paths of the sources imply that they will lose their apparent HBRP property after about a few 10^4 yr, because either their rapidly decreasing period derivatives will lead them into the normal radio pulsar population or they will evolve into the accretion phase, switching off the radio pulses.
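The rotational quantities the model must reproduce follow from standard pulsar spin-down relations; a small sketch with illustrative numbers (not those of any specific source in the paper):

```python
import math

def characteristic_age_yr(P, Pdot):
    """Spin-down (characteristic) age tau_c = P / (2 Pdot), in years."""
    return P / (2.0 * Pdot) / (365.25 * 24 * 3600)

def dipole_field_gauss(P, Pdot):
    """Surface dipole field inferred from the magnetic dipole torque
    formula, B ~ 3.2e19 * sqrt(P * Pdot) gauss."""
    return 3.2e19 * math.sqrt(P * Pdot)

def braking_index(nu, nudot, nuddot):
    """n = nu * nuddot / nudot^2; n = 3 for pure magnetic dipole braking."""
    return nu * nuddot / nudot**2

# Illustrative numbers of the order of a high-B radio pulsar
P, Pdot = 0.4, 4.0e-12          # period (s) and period derivative (s/s)
print(characteristic_age_yr(P, Pdot))   # ~1.6e3 yr
print(dipole_field_gauss(P, Pdot))      # ~4e13 G
```

Measured braking indices well below 3, as for the HBRPs discussed here, are one of the signatures that a pure dipole torque cannot explain and that the fallback-disc model addresses.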

  18. Origin and evolution of the Nakhla meteorite inferred from the Sm-Nd and U-Pb systematics and REE, Ba, Sr, Rb and K abundances

    NASA Technical Reports Server (NTRS)

    Nakamura, N.; Unruh, D. M.; Tatsumoto, M.; Hutchison, R.

    1982-01-01

    Analyses of whole-rock and mineral separates from the Nakhla meteorite were carried out by means of Sm-Nd and U-Th-Pb systematics and by determining their REE, Ba, Sr, Rb, and K concentrations. Results show that the Sm-Nd age of the meteorite is 1.26 ± 0.07 b.y., while the high initial epsilon(Nd) value of +16 suggests that Nakhla was derived from a light-REE-depleted, old planetary mantle source. A three-stage Sm-Nd evolution model is developed and used in combination with LIL element data and estimated partition coefficients in order to test partial melting and fractional crystallization models and to estimate LIL abundances in a possible Nakhla source. The calculations indicate that partial melting of the source followed by extensive fractional crystallization of the partial melt could account for the REE abundances in the Nakhla constituent minerals. It is concluded that the age of Nakhla, significantly younger than that of the youngest lunar rocks, the young differentiation age inferred from U-Th-Pb data, and the estimated LIL abundances suggest that this meteorite may have been derived from a relatively large, well-differentiated planetary body such as Mars.
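The epsilon(Nd) notation quoted above expresses the deviation of a 143Nd/144Nd ratio from the CHUR reference in parts per 10^4; a minimal sketch (the CHUR value is the standard present-day reference, and the sample ratio below is illustrative):

```python
CHUR_143ND_144ND = 0.512638   # present-day CHUR reference ratio

def epsilon_nd(ratio_143_144):
    """Deviation of a 143Nd/144Nd ratio from CHUR, in parts per 10^4
    (the epsilon notation used for initial Nd isotope compositions)."""
    return (ratio_143_144 / CHUR_143ND_144ND - 1.0) * 1.0e4

print(epsilon_nd(0.512638))   # 0.0 by definition (chondritic)
print(epsilon_nd(0.513458))   # ~ +16: a LREE-depleted source signature
```

A strongly positive initial epsilon(Nd), as for Nakhla, indicates a source that had a time-integrated depletion in light REE relative to chondrites.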

  19. A review of human pluripotent stem cell-derived cardiomyocytes for high-throughput drug discovery, cardiotoxicity screening, and publication standards.

    PubMed

    Mordwinkin, Nicholas M; Burridge, Paul W; Wu, Joseph C

    2013-02-01

    Drug attrition rates have increased in past years, resulting in growing costs for the pharmaceutical industry and consumers. The reasons for this include the lack of in vitro models that correlate with clinical results and poor preclinical toxicity screening assays. The in vitro production of human cardiac progenitor cells and cardiomyocytes from human pluripotent stem cells provides an amenable source of cells for applications in drug discovery, disease modeling, regenerative medicine, and cardiotoxicity screening. In addition, the ability to derive human-induced pluripotent stem cells from somatic tissues, combined with current high-throughput screening and pharmacogenomics, may help realize the use of these cells to fulfill the potential of personalized medicine. In this review, we discuss the use of pluripotent stem cell-derived cardiomyocytes for drug discovery and cardiotoxicity screening, as well as current hurdles that must be overcome for wider clinical applications of this promising approach.

  20. Divergence correction schemes in finite difference method for 3D tensor CSAMT in axial anisotropic media

    NASA Astrophysics Data System (ADS)

    Wang, Kunpeng; Tan, Handong; Zhang, Zhiyong; Li, Zhiqiang; Cao, Meng

    2017-05-01

    Resistivity anisotropy and full-tensor controlled-source audio-frequency magnetotellurics (CSAMT) have gradually become hot research topics. However, much of the current anisotropy research for tensor CSAMT focuses only on the one-dimensional (1D) solution. As the subsurface is rarely 1D, it is necessary to study the three-dimensional (3D) model response. The staggered-grid finite difference method is an effective simulation method for 3D electromagnetic forward modelling. Previous studies have suggested using a divergence correction to constrain the iterative process when using a staggered-grid finite difference model, so as to accelerate the 3D forward computation and enhance its accuracy. However, the traditional divergence correction method was developed assuming an isotropic medium. This paper improves the traditional isotropic divergence correction method and its derivation to meet the tensor CSAMT requirements for anisotropy, using the volume integral of the divergence equation. This approach is more intuitive, enabling a simple derivation of a discrete equation and then calculation of the coefficients of the anisotropic divergence correction equation. We validate our 3D computational results by comparing them to results computed using an anisotropic, controlled-source 2.5D program. The 3D resistivity anisotropy model allows us to evaluate the consequences of using the divergence correction at different frequencies and for two orthogonal finite-length sources. Our results show that the divergence correction plays an important role in 3D tensor CSAMT resistivity anisotropy research and offers a solid foundation for inversion of CSAMT data collected over an anisotropic body.

  1. Collaborative mining of graph patterns from multiple sources

    NASA Astrophysics Data System (ADS)

    Levchuk, Georgiy; Colonna-Romano, John

    2016-05-01

    Intelligence analysts require automated tools to mine multi-source data, including answering queries, learning patterns of life, and discovering malicious or anomalous activities. Graph mining algorithms have recently attracted significant attention in the intelligence community, because text-derived knowledge can be efficiently represented as graphs of entities and relationships. However, graph mining models are limited to use cases involving collocated data, and often make restrictive assumptions about the types of patterns that need to be discovered, the relationships between individual sources, and the availability of accurate data segmentation. In this paper we present a model to learn graph patterns from multiple relational data sources, when each source might have only a fragment (or subgraph) of the knowledge that needs to be discovered, and segmentation of data into training or testing instances is not available. Our model is based on distributed collaborative graph learning, and is effective in situations where the data are kept locally and cannot be moved to a centralized location. Our experiments show that the proposed collaborative learning achieves learning quality better than aggregated centralized graph learning, and has learning time comparable to traditional distributed learning, in which knowledge of data segmentation is needed.

  2. Double point source W-phase inversion: Real-time implementation and automated model selection

    USGS Publications Warehouse

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
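The AIC comparison between the single- and double-point-source solutions can be sketched as follows (the parameter counts and log-likelihoods below are hypothetical illustrations, not values from the W-phase inversion code):

```python
def aic(log_likelihood, k):
    """Akaike information criterion: AIC = 2k - 2 ln L (lower is better)."""
    return 2.0 * k - 2.0 * log_likelihood

def prefer_double(logL_single, k_single, logL_double, k_double):
    """Return True when the extra parameters of the double point source
    are justified by a sufficiently better fit (lower AIC)."""
    return aic(logL_double, k_double) < aic(logL_single, k_single)

# Hypothetical fits: the double source must improve ln L by more than its
# extra parameters cost (parameter counts here are assumed for illustration)
print(prefer_double(-120.0, 9, -118.0, 18))   # marginal gain: keep single
print(prefer_double(-120.0, 9, -90.0, 18))    # large gain: choose double
```

This penalized comparison is what prevents the richer double-source model from being selected for every event simply because it fits the waveforms slightly better.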

  3. Transport and dispersion of pollutants in surface impoundments: a finite difference model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, G.T.

    1980-07-01

    A surface impoundment model by finite difference (SIMFD) has been developed. SIMFD computes the flow rate, velocity field, and concentration distribution of pollutants in surface impoundments with any number of islands located within the region of interest. The theoretical derivations and numerical algorithm are described in detail. Instructions for the application of SIMFD and listings of the FORTRAN IV source program are provided. Two sample problems are given to illustrate the application and validity of the model.
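A toy version of the kind of finite-difference transport computation such a model performs, reduced to 1D advection-diffusion with an explicit upwind scheme (illustrative parameters; SIMFD's actual formulation is 2D and includes the flow solution):

```python
def advect_diffuse_1d(c, u, D, dx, dt, steps):
    """Explicit finite-difference update for 1D advection-diffusion:
    dc/dt = -u dc/dx + D d2c/dx2 (upwind advection for u > 0,
    central diffusion; boundary cells held at zero concentration)."""
    c = list(c)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            adv = -u * (c[i] - c[i - 1]) / dx
            dif = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2
            new[i] = c[i] + dt * (adv + dif)
        c = new
    return c

# A pollutant pulse released at cell 10 spreads and drifts downstream
n = 50
c0 = [0.0] * n
c0[10] = 1.0
c1 = advect_diffuse_1d(c0, u=0.5, D=0.1, dx=1.0, dt=0.5, steps=40)
peak = max(range(n), key=lambda i: c1[i])
print(peak)   # peak has moved downstream of cell 10
```

The chosen time step satisfies the explicit stability limits (Courant number 0.25, diffusion number 0.05), which is the same kind of constraint an explicit SIMFD-style scheme must respect.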

  4. Fourier plane modeling of the jet in the galaxy M81

    NASA Astrophysics Data System (ADS)

    Ramessur, Arvind; Bietenholz, Michael F.; Leeuw, Lerothodi L.; Bartel, Norbert

    2015-03-01

    The nearby spiral galaxy M81 has a low-luminosity Active Galactic Nucleus in its center with a core and a one-sided curved jet, dubbed M81*, that is barely resolved with VLBI. To derive basic parameters such as the length of the jet, its orientation and curvature, the usual method of model-fitting with point sources and elliptical Gaussians may not always be the most appropriate one. We are developing Fourier-plane models for such sources, in particular an asymmetric triangle model to fit the extensive set of VLBI data of M81* in the u-v plane. This method may have an advantage over conventional ones in extracting information close to the resolution limit to provide us with a more comprehensive picture of the structure and evolution of the jet. We report on preliminary results.

  5. Recognition and source memory as multivariate decision processes.

    PubMed

    Banks, W P

    2000-07-01

    Recognition memory, source memory, and exclusion performance are three important domains of study in memory, each with its own findings, its specific theoretical developments, and its separate research literature. It is proposed here that results from all three domains can be treated with a single analytic model. This article shows how to generate a comprehensive memory representation based on multidimensional signal detection theory and how to make predictions for each of these paradigms using decision axes drawn through the space. The detection model is simpler than the comparable multinomial model, is more easily generalizable, and does not make threshold assumptions. An experiment using the same memory set for all three tasks demonstrates the analysis and tests the model. The results show that some seemingly complex relations between the paradigms derive from an underlying simplicity of structure.
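The core idea, classifying bivariate memory strengths by projecting them onto task-specific decision axes, can be sketched as follows (the distribution means and the axes below are illustrative assumptions, not the paper's fitted values):

```python
import random

def classify(strengths, axis):
    """Project a 2D memory-strength vector onto a decision axis and
    respond 'old' when the projection exceeds a criterion of 0."""
    return strengths[0] * axis[0] + strengths[1] * axis[1] > 0.0

rng = random.Random(7)

def sample(mu):
    """One draw from a bivariate normal with unit variances, no correlation."""
    return (rng.gauss(mu[0], 1.0), rng.gauss(mu[1], 1.0))

# Dimension 1 = item strength, dimension 2 = source strength
new_items    = [sample((-1.0, 0.0)) for _ in range(2000)]
old_source_a = [sample((+1.0, +0.8)) for _ in range(2000)]

# Recognition uses the item-strength axis; source judgments would use
# a different axis through the same space
recognition_axis = (1.0, 0.0)
hit_rate = sum(classify(s, recognition_axis) for s in old_source_a) / 2000
fa_rate  = sum(classify(s, recognition_axis) for s in new_items) / 2000
print(hit_rate, fa_rate)
```

Rotating the decision axis within the same representation yields predictions for source and exclusion tasks without introducing thresholds, which is the unification the article argues for.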

  6. The effect of nonlinear propagation on heating of tissue: A numerical model of diagnostic ultrasound beams

    NASA Astrophysics Data System (ADS)

    Cahill, Mark D.; Humphrey, Victor F.; Doody, Claire

    2000-07-01

    Thermal safety indices for diagnostic ultrasound beams are calculated under the assumption that the sound propagates under linear conditions. A non-axisymmetric finite difference model is used to solve the KZK equation, and so to model the beam of a diagnostic scanner in pulsed Doppler mode. Beams from both a uniform focused rectangular source and a linear array are considered. Calculations are performed in water, and in attenuating media with tissue-like characteristics. Attenuating media are found to exhibit significant nonlinear effects for finite-amplitude beams. The resulting loss of intensity by the beam is then used as the source term in a model of tissue heating to estimate the maximum temperature rises. These are compared with the thermal indices, derived from the properties of the water-propagated beams.

  7. Integrating Stomach Content and Stable Isotope Analyses to Quantify the Diets of Pygoscelid Penguins

    PubMed Central

    Polito, Michael J.; Trivelpiece, Wayne Z.; Karnovsky, Nina J.; Ng, Elizabeth; Patterson, William P.; Emslie, Steven D.

    2011-01-01

    Stomach content analysis (SCA) and more recently stable isotope analysis (SIA) integrated with isotopic mixing models have become common methods for dietary studies and provide insight into the foraging ecology of seabirds. However, both methods have drawbacks and biases that may result in difficulties in quantifying inter-annual and species-specific differences in diets. We used these two methods to simultaneously quantify the chick-rearing diet of Chinstrap (Pygoscelis antarctica) and Gentoo (P. papua) penguins and highlight methods of integrating SCA data to increase accuracy of diet composition estimates using SIA. SCA biomass estimates were highly variable and underestimated the importance of soft-bodied prey such as fish. Two-source, isotopic mixing model predictions were less variable and identified inter-annual and species-specific differences in the relative amounts of fish and krill in penguin diets not readily apparent using SCA. In contrast, multi-source isotopic mixing models had difficulty estimating the dietary contribution of fish species occupying similar trophic levels without refinement using SCA-derived otolith data. Overall, our ability to track inter-annual and species-specific differences in penguin diets using SIA was enhanced by integrating SCA data into isotopic mixing models in three ways: 1) selecting appropriate prey sources, 2) weighting combinations of isotopically similar prey in two-source mixing models and 3) refining predicted contributions of isotopically similar prey in multi-source models. PMID:22053199
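A two-source, single-isotope linear mixing model of the kind referred to above can be sketched in a few lines (the delta values are illustrative, and real applications first correct source values for trophic discrimination):

```python
def two_source_fraction(d_mix, d_a, d_b):
    """Fraction of source A in a consumer's diet from a single isotope,
    assuming a linear two-source mixing model (trophic shifts already
    applied to the source values)."""
    return (d_mix - d_b) / (d_a - d_b)

# Illustrative delta-15N values for fish and krill end members
d_fish, d_krill = 10.0, 4.0
f_fish = two_source_fraction(8.5, d_fish, d_krill)
print(f_fish)   # 0.75: three-quarters of the diet attributed to fish
```

With more than two sources the system becomes under-determined for a single isotope, which is why the multi-source case in the study needed the SCA-derived otolith data as an additional constraint.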

  8. Space based inverse modeling of seasonal variations of anthropogenic and natural emissions of nitrogen oxides over China and effects of uncertainties in model meteorology and chemistry

    NASA Astrophysics Data System (ADS)

    Lin, J.

    2011-12-01

    Nitrogen oxides (NOx ≡ NO + NO2) are important atmospheric constituents affecting the tropospheric chemistry, surface air quality and climatic forcing. They are emitted both from anthropogenic and from natural (soil, lightning, biomass burning, etc.) sources, which can be estimated inversely from satellite remote sensing of the vertical column densities (VCDs) of nitrogen dioxide (NO2) in the troposphere. Based on VCDs of NO2 retrieved from OMI, a novel approach is developed in this study to separate anthropogenic emissions of NOx from natural sources over East China for 2006. It exploits the fact that anthropogenic and natural emissions vary with seasons with distinctive patterns. The global chemical transport model (CTM) GEOS-Chem is used to establish the relationship between VCDs of NO2 and emissions of NOx for individual sources. Derived soil emissions are compared to results from a newly developed bottom-up approach. Effects of uncertainties in model meteorology and chemistry over China, an important source of errors in the emission inversion, are evaluated systematically for the first time. Meteorological measurements from space and the ground are used to analyze errors in meteorological parameters driving the CTM.

  9. Use of MODIS Satellite Images and an Atmospheric Dust Transport Model to Evaluate Juniperus spp. Pollen Phenology and Dispersal

    NASA Technical Reports Server (NTRS)

    Luvall, J. C.; Sprigg, W. A.; Levetin, E.; Huete, A.; Nickovic, S.; Pejanovic, G. A.; Vukovic, A.; VandeWater, P. K.; Myers, O. B.; Budge, A. M.; hide

    2011-01-01

    Pollen can be transported great distances. Van de Water et al. reported that Juniperus spp. pollen was transported 200-600 km. Hence local observations of plant phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. DREAM (Dust REgional Atmospheric Model) is a verified model for atmospheric dust transport that uses MODIS data products to identify source regions and quantities of dust. We are modifying the DREAM model to incorporate pollen transport. Pollen release will be estimated based on MODIS-derived phenology of Juniperus spp. communities. Ground-based observational records of pollen release timing and quantities will be used as verification. This information will be used to support the Centers for Disease Control and Prevention's National Environmental Public Health Tracking Program and the State of New Mexico environmental public health decision support for asthma and allergy alerts.

  10. High Upward Fluxes of Formic Acid from a Boreal Forest Canopy

    NASA Technical Reports Server (NTRS)

    Schobesberger, Siegfried; Lopez-Hilifiker, Felipe D.; Taipale, Ditte; Millet, Dylan B.; D'Ambro, Emma L.; Rantala, Pekka; Mammarella, Ivan; Zhou, Putian; Wolfe, Glenn M.; Lee, Ben H.; hide

    2016-01-01

    Eddy covariance fluxes of formic acid, HCOOH, were measured over a boreal forest canopy in spring/summer 2014. The HCOOH fluxes were bidirectional but mostly upward during daytime, in contrast to studies elsewhere that reported mostly downward fluxes. Downward flux episodes were explained well by modeled dry deposition rates. The sum of the net observed flux and modeled dry deposition yields an upward gross flux of HCOOH, which could not be quantitatively explained by literature estimates of direct vegetation and soil emissions, nor by efficient chemical production from other volatile organic compounds, suggesting missing or greatly underestimated HCOOH sources in the boreal ecosystem. We implemented a vegetative HCOOH source into the GEOS-Chem chemical transport model to match our derived gross flux and evaluated the updated model against airborne and spaceborne observations. Model biases in the boundary layer were substantially reduced with this revised treatment, but biases in the free troposphere remain unexplained.

  11. Seismic hazard analysis for Jayapura city, Papua

    NASA Astrophysics Data System (ADS)

    Robiana, R.; Cipta, A.

    2015-04-01

    Jayapura city experienced a destructive earthquake on June 25, 1976, with a maximum intensity of VII on the MMI scale. Probabilistic methods are used to determine the earthquake hazard by considering all possible earthquakes that can occur in this region. Three types of earthquake source models are used: a subduction model, from the New Guinea Trench subduction zone (North Papuan Thrust); fault models, derived from the Yapen, Tarera-Aiduna, Wamena, Memberamo, Waipago, Jayapura, and Jayawijaya faults; and 7 background models to accommodate unknown earthquakes. Amplification factors obtained using geomorphological approaches are corrected by measurement data related to rock type and depth of soft soil. Site classes in Jayapura city can be grouped into classes B, C, D and E, with amplification between 0.5 and 6. Hazard maps are presented for a 10% probability of earthquake occurrence within a period of 500 years for the dominant periods of 0.0, 0.2, and 1.0 seconds.
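Under the Poisson occurrence assumption standard in probabilistic seismic hazard analysis, the quoted exceedance probability maps directly onto a return period:

```python
import math

def return_period_yr(p_exceed, t_years):
    """Return period implied by a Poisson occurrence model:
    P = 1 - exp(-t / T)  =>  T = -t / ln(1 - P)."""
    return -t_years / math.log(1.0 - p_exceed)

# The hazard maps use a 10% probability of exceedance in 500 years
T = return_period_yr(0.10, 500.0)
print(T)   # ~4746 yr return period for the mapped ground motions
```

The same relation is how the more common 10%-in-50-years criterion yields the familiar 475-year return period.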

  12. Modeling Local Interactions during the Motion of Cyanobacteria

    PubMed Central

    Galante, Amanda; Wisen, Susanne; Bhaya, Devaki; Levy, Doron

    2012-01-01

    Synechocystis sp., a common unicellular freshwater cyanobacterium, has been used as a model organism to study phototaxis, an ability to move in the direction of a light source. This microorganism displays a number of additional characteristics such as delayed motion, surface dependence, and a quasi-random motion, where cells move in a seemingly disordered fashion instead of in the direction of the light source, a global force on the system. These unexplained motions are thought to be modulated by local interactions between cells such as intercellular communication. In this paper, we consider only local interactions of these phototactic cells in order to mathematically model this quasi-random motion. We analyze an experimental data set to illustrate the presence of quasi-random motion and then derive a stochastic dynamic particle system modeling interacting phototactic cells. The simulations of our model are consistent with experimentally observed phototactic motion. PMID:22713858
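A minimal stochastic particle sketch of biased versus quasi-random motion (1D, with an assumed drift term standing in for the phototactic bias; the paper's model is richer and includes local cell-cell interactions):

```python
import random

def phototaxis_walk(steps, bias, rng):
    """1D stochastic particle model: each step combines a drift toward
    the light source (in the +x direction) with quasi-random motion."""
    x = 0.0
    for _ in range(steps):
        x += bias + rng.gauss(0.0, 1.0)
    return x

rng = random.Random(3)
biased   = [phototaxis_walk(200, 0.05, rng) for _ in range(500)]
unbiased = [phototaxis_walk(200, 0.0, rng) for _ in range(500)]
mean_biased = sum(biased) / len(biased)
mean_unbiased = sum(unbiased) / len(unbiased)
print(mean_biased, mean_unbiased)
```

Individual trajectories look disordered in both cases; only the ensemble statistics reveal the weak global bias, which mirrors the quasi-random motion described in the abstract.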

  13. Combining virtual observatory and equivalent source dipole approaches to describe the geomagnetic field with Swarm measurements

    NASA Astrophysics Data System (ADS)

    Saturnino, Diana; Langlais, Benoit; Amit, Hagay; Civet, François; Mandea, Mioara; Beucler, Éric

    2018-03-01

    A detailed description of the main geomagnetic field and of its temporal variations (i.e., the secular variation or SV) is crucial to understanding the geodynamo. Although the SV is known with high accuracy at ground magnetic observatory locations, the globally uneven distribution of the observatories hampers the determination of a detailed global pattern of the SV. Over the past two decades, satellites have provided global surveys of the geomagnetic field which have been used to derive global spherical harmonic (SH) models through some strict data selection schemes to minimise external field contributions. However, discrepancies remain between ground measurements and field predictions by these models; indeed the global models do not reproduce small spatial scales of the field temporal variations. To overcome this problem we propose to directly extract time series of the field and its temporal variation from satellite measurements as it is done at observatory locations. We follow a Virtual Observatory (VO) approach and define a global mesh of VOs at satellite altitude. For each VO and each given time interval we apply an Equivalent Source Dipole (ESD) technique to reduce all measurements to a unique location. Synthetic data are first used to validate the new VO-ESD approach. Then, we apply our scheme to data from the first two years of the Swarm mission. For the first time, a 2.5° resolution global mesh of VO time series is built. The VO-ESD derived time series are locally compared to ground observations as well as to satellite-based model predictions. Our approach is able to describe detailed temporal variations of the field at local scales. The VO-ESD time series are then used to derive global spherical harmonic models. For a simple SH parametrization the model describes well the secular trend of the magnetic field both at satellite altitude and at the surface. 
As more data become available, longer VO-ESD time series can be derived and used to study sharp temporal variation features, such as geomagnetic jerks.

  14. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    PubMed

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics-model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
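The iterative framework can be caricatured as a fixed-point loop that re-estimates scatter from the current primary-signal estimate and subtracts it from the measurement (the toy scatter model below is an assumption for illustration, not the paper's analytic physics model):

```python
def correct_scatter(measured, scatter_model, iterations=5):
    """Fixed-point scatter correction: estimate scatter from the current
    primary-signal estimate, subtract it from the measurement, repeat."""
    primary = list(measured)
    for _ in range(iterations):
        scatter = scatter_model(primary)
        primary = [m - s for m, s in zip(measured, scatter)]
    return primary

# Toy physics stand-in: scatter is 20% of the mean primary, spread evenly
def toy_scatter(primary):
    s = 0.2 * sum(primary) / len(primary)
    return [s] * len(primary)

true_primary = [10.0, 30.0, 60.0]
measured = [p + s for p, s in zip(true_primary, toy_scatter(true_primary))]
recovered = correct_scatter(measured, toy_scatter)
print(recovered)   # converges back toward [10, 30, 60]
```

Because the scatter operator here is a contraction, the loop converges geometrically, which parallels the fast convergence within a few iterations reported in the abstract.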

  15. Unique effects and moderators of effects of sources on self-efficacy: A model-based meta-analysis.

    PubMed

    Byars-Winston, Angela; Diestelmann, Jacob; Savoy, Julia N; Hoyt, William T

    2017-11-01

    Self-efficacy beliefs are strong predictors of academic pursuits, performance, and persistence, and in theory are developed and maintained by 4 classes of experiences Bandura (1986) referred to as sources: performance accomplishments (PA), vicarious learning (VL), social persuasion (SP), and affective arousal (AA). The effects of sources on self-efficacy vary by performance domain and individual difference factors. In this meta-analysis (k = 61 studies of academic self-efficacy; N = 8,965), we employed B. J. Becker's (2009) model-based approach to examine cumulative effects of the sources as a set and unique effects of each source, controlling for the others. Following Becker's recommendations, we used available data to create a correlation matrix for the 4 sources and self-efficacy, then used these meta-analytically derived correlations to test our path model. We further examined moderation of these associations by subject area (STEM vs. non-STEM), grade, sex, and ethnicity. PA showed by far the strongest unique association with self-efficacy beliefs. Subject area was a significant moderator, with sources collectively predicting self-efficacy more strongly in non-STEM (k = 14) compared with STEM (k = 47) subjects (R2 = .37 and .22, respectively). Within studies of STEM subjects, grade level was a significant moderator of the coefficients in our path model, as were 2 continuous study characteristics (percent non-White and percent female). Practical implications of the findings and future research directions are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
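Becker's model-based step described above can be sketched numerically: once a pooled correlation matrix among the four sources and self-efficacy is assembled, the standardized path coefficients are the solution of the normal equations. The correlation values below are invented for illustration, not the meta-analytic values from the study.

```python
import numpy as np

# Pooled correlations among the four sources, ordered PA, VL, SP, AA.
# These numbers are illustrative placeholders.
Rxx = np.array([
    [ 1.00,  0.45,  0.50, -0.30],
    [ 0.45,  1.00,  0.55, -0.20],
    [ 0.50,  0.55,  1.00, -0.25],
    [-0.30, -0.20, -0.25,  1.00],
])
# Correlation of each source with self-efficacy (same order).
rxy = np.array([0.55, 0.35, 0.40, -0.35])

beta = np.linalg.solve(Rxx, rxy)   # unique (partial) effect of each source
r_squared = float(beta @ rxy)      # variance explained by the set of sources
```

With these placeholder correlations, PA carries the largest unique coefficient, consistent with the abstract's main finding.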

  16. Source analysis of MEG activities during sleep (abstract)

    NASA Astrophysics Data System (ADS)

    Ueno, S.; Iramina, K.

    1991-04-01

The present study focuses on magnetic fields of brain activity during sleep, in particular on K-complexes, vertex waves, and sleep spindles in human subjects. We analyzed these waveforms based on both topographic EEG (electroencephalographic) maps and magnetic field measurements, called MEGs (magnetoencephalograms). The components of the magnetic field perpendicular to the surface of the head were measured using a dc SQUID magnetometer with a second-derivative gradiometer. In our computer simulation, the head is assumed to be a homogeneous spherical volume conductor, with electric sources of brain activity modeled as current dipoles. Comparison of computer simulations with the measured data, particularly the MEG, suggests that the source of K-complexes can be modeled by two current dipoles. A source for the vertex wave is modeled by a single current dipole oriented along the body axis, pointing out of the head. By measuring simultaneous MEG and EEG signals, it is possible to uniquely determine the orientation of this dipole, particularly when it is tilted slightly off-axis. In sleep stage 2, fast waves of magnetic fields consistently appeared, but EEG spindles appeared only intermittently. The results suggest that there exist sources which are undetectable by electrical measurement but detectable by magnetic-field measurement. Such a source can be described by a pair of current dipoles whose directions are oppositely oriented.
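The forward calculation behind such simulations can be sketched compactly. For a homogeneous spherical conductor, the volume currents contribute nothing to the radial field component, so the radial MEG signal outside the sphere comes from the primary current dipole alone (Biot-Savart term). This is a minimal sketch, not the authors' simulation code; the function names are illustrative.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def radial_field(q, r_q, r):
    """Radial component of B at sensor position r (m) from a current
    dipole with moment q (A*m) at position r_q inside a homogeneous
    spherical conductor centred at the origin."""
    d = r - r_q
    b_primary = MU0 / (4.0 * np.pi) * np.cross(q, d) / np.linalg.norm(d) ** 3
    return float(np.dot(b_primary, r / np.linalg.norm(r)))
```

A purely radial dipole produces zero radial field, which is the classic reason some sources are visible to EEG but not MEG; conversely, the abstract's MEG-visible, EEG-silent spindle sources require configurations (such as opposing dipole pairs) whose potentials cancel electrically.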

  17. Model falsifiability and climate slow modes

    NASA Astrophysics Data System (ADS)

    Essex, Christopher; Tsonis, Anastasios A.

    2018-07-01

The most advanced climate models are actually modified meteorological models attempting to capture climate in meteorological terms. This seems a straightforward matter of raw computing power applied to large enough sources of current data. Some believe that models have succeeded in capturing climate in this manner. But have they? This paper outlines difficulties with this picture that derive from the finite numerical representation in our computers and from the fundamental unavailability of future data. It suggests that alternative windows onto multi-decadal timescales are necessary in order to overcome the issues raised for practical problems of prediction.

  18. Examining the effects of anthropogenic emissions on isoprene-derived secondary organic aerosol formation during the 2013 Southern Oxidant and Aerosol Study (SOAS) at the Look Rock, Tennessee ground site

    NASA Astrophysics Data System (ADS)

    Budisulistiorini, S. H.; Li, X.; Bairai, S. T.; Renfro, J.; Liu, Y.; Liu, Y. J.; McKinney, K. A.; Martin, S. T.; McNeill, V. F.; Pye, H. O. T.; Nenes, A.; Neff, M. E.; Stone, E. A.; Mueller, S.; Knote, C.; Shaw, S. L.; Zhang, Z.; Gold, A.; Surratt, J. D.

    2015-08-01

    A suite of offline and real-time gas- and particle-phase measurements was deployed at Look Rock, Tennessee (TN), during the 2013 Southern Oxidant and Aerosol Study (SOAS) to examine the effects of anthropogenic emissions on isoprene-derived secondary organic aerosol (SOA) formation. High- and low-time-resolution PM2.5 samples were collected for analysis of known tracer compounds in isoprene-derived SOA by gas chromatography/electron ionization-mass spectrometry (GC/EI-MS) and ultra performance liquid chromatography/diode array detection-electrospray ionization-high-resolution quadrupole time-of-flight mass spectrometry (UPLC/DAD-ESI-HR-QTOFMS). Source apportionment of the organic aerosol (OA) was determined by positive matrix factorization (PMF) analysis of mass spectrometric data acquired on an Aerodyne Aerosol Chemical Speciation Monitor (ACSM). Campaign average mass concentrations of the sum of quantified isoprene-derived SOA tracers contributed to ~ 9 % (up to 28 %) of the total OA mass, with isoprene-epoxydiol (IEPOX) chemistry accounting for ~ 97 % of the quantified tracers. PMF analysis resolved a factor with a profile similar to the IEPOX-OA factor resolved in an Atlanta study and was therefore designated IEPOX-OA. This factor was strongly correlated (r2 > 0.7) with 2-methyltetrols, C5-alkene triols, IEPOX-derived organosulfates, and dimers of organosulfates, confirming the role of IEPOX chemistry as the source. On average, IEPOX-derived SOA tracer mass was ~ 26 % (up to 49 %) of the IEPOX-OA factor mass, which accounted for 32 % of the total OA. A low-volatility oxygenated organic aerosol (LV-OOA) and an oxidized factor with a profile similar to 91Fac observed in areas where emissions are biogenic-dominated were also resolved by PMF analysis, whereas no primary organic aerosol (POA) sources could be resolved. 
These findings were consistent with low levels of primary pollutants, such as nitric oxide (NO ~ 0.03 ppb), carbon monoxide (CO ~ 116 ppb), and black carbon (BC ~ 0.2 μg m-3). Particle-phase sulfate is fairly correlated (r2 ~ 0.3) with both methacrylic acid epoxide (MAE)/hydroxymethyl-methyl-α-lactone (HMML)-derived SOA tracers (henceforth called methacrolein (MACR)-derived SOA tracers) and IEPOX-derived SOA tracers, and more strongly correlated (r2 ~ 0.6) with the IEPOX-OA factor, in sum suggesting an important role of sulfate in isoprene SOA formation. Moderate correlations of the MACR-derived SOA tracer 2-methylglyceric acid with the sum of reactive and reservoir nitrogen oxides (NOy; r2 = 0.38) and with nitrate (r2 = 0.45) indicate the potential influence of anthropogenic emissions through long-range transport. Despite the lack of a clear association of IEPOX-OA with locally estimated aerosol acidity and liquid water content (LWC), box model calculations of IEPOX uptake using the simpleGAMMA model, accounting for the role of acidity and aerosol water, predicted the abundance of the IEPOX-derived SOA tracers 2-methyltetrols and the corresponding sulfates with good accuracy (r2 ~ 0.5 and ~ 0.7, respectively). The modeling and data combined suggest an anthropogenic influence on isoprene-derived SOA formation through acid-catalyzed heterogeneous chemistry of IEPOX in the southeastern US. However, it appears that this process was not limited by aerosol acidity or LWC at Look Rock during SOAS. Future studies should further explore the extent to which acidity and LWC, as well as aerosol viscosity and morphology, become limiting factors for IEPOX-derived SOA, and their modulation by anthropogenic emissions.

  19. Evaluating a Control System Architecture Based on a Formally Derived AOCS Model

    NASA Astrophysics Data System (ADS)

    Ilic, Dubravka; Latvala, Timo; Varpaaniemi, Kimmo; Vaisanen, Pauli; Troubitsyna, Elena; Laibinis, Linas

    2010-08-01

    Attitude & Orbit Control System (AOCS) refers to a wider class of control systems which are used to determine and control the attitude of the spacecraft while in orbit, based on the information obtained from various sensors. In this paper, we propose an approach to evaluate a typical (yet somewhat simplified) AOCS architecture using formal development - based on the Event-B method. As a starting point, an Ada specification of the AOCS is translated into a formal specification and further refined to incorporate all the details of its original source code specification. This way we are able not only to evaluate the Ada specification by expressing and verifying specific system properties in our formal models, but also to determine how well the chosen modelling framework copes with the level of detail required for an actual implementation and code generation from the derived models.

  20. Using an Altimeter-Derived Internal Tide Model to Remove Tides from in Situ Data

    NASA Technical Reports Server (NTRS)

    Zaron, Edward D.; Ray, Richard D.

    2017-01-01

    Internal waves at tidal frequencies, i.e., the internal tides, are a prominent source of variability in the ocean associated with significant vertical isopycnal displacements and currents. Because the isopycnal displacements are caused by ageostrophic dynamics, they contribute uncertainty to geostrophic transport inferred from vertical profiles in the ocean. Here it is demonstrated that a newly developed model of the main semidiurnal (M2) internal tide derived from satellite altimetry may be used to partially remove the tide from vertical profile data, as measured by the reduction of steric height variance inferred from the profiles. It is further demonstrated that the internal tide model can account for a component of the near-surface velocity as measured by drogued drifters. These comparisons represent a validation of the internal tide model using independent data and highlight its potential use in removing internal tide signals from in situ observations.
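The validation metric implied above — how much steric-height variance the altimeter-derived tide prediction removes from a profile series — can be sketched with synthetic data. The series below is an invented placeholder, not the study's observations.

```python
import numpy as np

def variance_reduction(observed, tide_model):
    """Fractional reduction in variance after removing the model tide."""
    residual = observed - tide_model
    return 1.0 - residual.var() / observed.var()

# Synthetic demonstration: an M2-like line plus measurement noise.
t = np.linspace(0.0, 30.0, 720)                  # days, roughly hourly
tide = 0.02 * np.sin(2.0 * np.pi * t / 0.5175)   # ~12.42 h period, metres
noise = 0.01 * np.random.default_rng(0).standard_normal(t.size)
obs = tide + noise
```

For this synthetic series, subtracting the exact tide removes roughly the tidal share of the total variance; with a real, imperfect tide model the reduction is partial, as the abstract reports.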

  1. Source and tectonic implications of tonalite-trondhjemite magmatism in the Klamath Mountains

    USGS Publications Warehouse

    Barnes, C.G.; Petersen, S.W.; Kistler, R.W.; Murray, R.; Kays, M.A.

    1996-01-01

In the Klamath Mountains, voluminous tonalite-trondhjemite magmatism was characteristic of a short period of time from about 144 to 136 Ma (Early Cretaceous). It occurred about 5 to 10 m.y. after the ~165 to 159 Ma Josephine ophiolite was thrust beneath older parts of the province during the Nevadan orogeny (thrusting from ~155 to 148 Ma). The magmatism also corresponds to a period of slow or no subduction. Most of the plutons crop out in the south-central Klamath Mountains in California, but one occurs in Oregon at the northern end of the province. Compositionally extended members of the suite consist of precursor gabbroic to dioritic rocks followed by later, more voluminous tonalitic and trondhjemitic intrusions. Most plutons consist almost entirely of tonalite and trondhjemite. Poorly defined concentric zoning is common. Tonalitic rocks are typically of the low-Al type but trondhjemites are generally of the high-Al type, even those that occur in the same pluton as low-Al tonalite. The suite is characterized by low abundances of K2O, Rb, Zr, and heavy rare earth elements. Sr contents are generally moderate (~450 ppm) by comparison with Sr-rich arc lavas interpreted to be slab melts (up to 2000 ppm). Initial 87Sr/86Sr, δ18O, and εNd are typical of mantle-derived magmas or of crustally derived magmas with a metabasic source. Compositional variation within plutons can be modeled by variable degrees of partial melting of a heterogeneous metabasaltic source (transitional mid-ocean ridge to island arc basalt), but not by fractional crystallization of a basaltic parent. Melting models require a residual assemblage of clinopyroxene + garnet ± plagioclase ± amphibole; residual plagioclase suggests a deep crustal origin rather than melting of a subducted slab. Such models are consistent with the metabasic part of the Josephine ophiolite as the source.
Because the Josephine ophiolite was at low T during Nevadan thrusting, an external heat source was probably necessary to achieve significant degrees of melting; heat was probably extracted from mantle-derived basaltic melts, which were parental to the mafic precursors of the tonalite-trondhjemite suite. Thus, under appropriate tectonic and thermal conditions, heterogeneous mafic crustal rocks can melt to form both low- and high-Al tonalitic and trondhjemitic magmas; slab melting is not necessary.
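The partial-melting calculation invoked above is commonly done with the batch melting relation, C_l/C_0 = 1 / (D + F(1 - D)), where F is the melt fraction and D the bulk partition coefficient of the residual assemblage. A hedged sketch, with illustrative coefficients rather than the paper's values:

```python
def batch_melt(c0, D, F):
    """Trace-element concentration in the liquid for batch partial melting.

    c0 -- concentration in the source
    D  -- bulk solid/liquid partition coefficient of the residue
    F  -- melt fraction (0..1)
    """
    return c0 / (D + F * (1.0 - D))
```

For an incompatible element (D << 1) the melt is strongly enriched at small F, while for an element held in the residue (D > 1, e.g. heavy REE in garnet) the melt stays depleted — the behavior that makes the suite's low HREE abundances diagnostic of a garnet-bearing residue.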

  2. Updates on Force Limiting Improvements

    NASA Technical Reports Server (NTRS)

    Kolaini, Ali R.; Scharton, Terry

    2013-01-01

The following conventional force limiting methods currently practiced in deriving force limiting specifications assume one-dimensional translational source and load apparent masses: the simple TDOF model; semi-empirical force limits; apparent mass, etc.; and the impedance method. Uncorrelated motion of the mounting points for components mounted on panels, and correlated but out-of-phase motions of the support structures, are important and should be considered in deriving force limiting specifications. In this presentation, "rock-n-roll" motions of components supported by panels, which lead to more realistic force limiting specifications, are discussed.

  3. BOREAS RSS-4 1994 Jack Pine Leaf Biochemistry and Modeled Spectra in the SSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Plummer, Stephen; Lucas, Neil; Dawson, Terry

    2000-01-01

    The BOREAS RSS-4 team focused its efforts on deriving estimates of LAI and leaf chlorophyll and nitrogen concentrations from remotely sensed data for input into the Forest BGC model. This data set contains measurements of jack pine (Pinus banksiana) needle biochemistry from the BOREAS SSA in July and August 1994. The data contain measurements of current and year-1 needle chlorophyll, nitrogen, lignin, cellulose, and water content for the OJP flux tower and nearby auxiliary sites. The data have been used to test a needle reflectance and transmittance model, LIBERTY (Dawson et al., in press). The source code for the model and modeled needle spectra for each of the sampled tower and auxiliary sites are provided as part of this data set. The LIBERTY model was developed and the predicted spectral data generated to parameterize a canopy reflectance model (North, 1996) for comparison with AVIRIS, POLDER, and PARABOLA data. The data and model source code are stored in ASCII files.

  4. Factors and processes governing the C-14 content of carbonate in desert soils

    NASA Technical Reports Server (NTRS)

    Amundson, Ronald; Wang, Yang; Chadwick, Oliver; Trumbore, Susan; Mcfadden, Leslie; Mcdonald, Eric; Wells, Steven; Deniro, Michael

    1994-01-01

A model is presented describing the factors and processes which determine the measured C-14 ages of soil calcium carbonate. Pedogenic carbonate forms in isotopic equilibrium with soil CO2. Carbon dioxide in soils is a mixture of CO2 derived from two biological sources: respiration by living plant roots and respiration of microorganisms decomposing soil humus. The relative proportion of these two CO2 sources can greatly affect the initial C-14 content of pedogenic carbonate: the greater the contribution of humus-derived CO2, the greater the initial C-14 age of the carbonate mineral. For any given mixture of CO2 sources, the steady-state (14)CO2 distribution vs. soil depth can be described by a production/diffusion model. As a soil ages, the C-14 age of soil humus increases, as does the steady-state C-14 age of soil CO2 and the initial C-14 age of any pedogenic carbonate which forms. The mean C-14 age of a complete pedogenic carbonate coating or nodule will underestimate the true age of the soil carbonate, and this discrepancy increases as the soil becomes older. Partial removal of outer (and younger) carbonate coatings greatly improves the relationship between measured C-14 age and true age. Although the production/diffusion model qualitatively explains the C-14 age of pedogenic carbonate vs. soil depth in many soils, other factors, such as climate change, may contribute to the observed trends, particularly in soils older than the Holocene.
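The mixing effect described above can be made concrete: the initial radiocarbon age of carbonate follows from the fraction-modern of the soil CO2, itself a mixture of root-respired CO2 (modern, Fm ≈ 1) and aged humus-derived CO2. A minimal sketch using the conventional Libby mean life of 8033 yr; the mixing fractions below are illustrative, not the paper's values.

```python
import math

def apparent_age(frac_root, fm_humus):
    """Conventional C-14 age (yr) of carbonate formed from a CO2 mixture.

    frac_root -- fraction of soil CO2 from living roots (fraction modern 1.0)
    fm_humus  -- fraction modern of the humus-derived CO2 component
    """
    fm_mix = frac_root * 1.0 + (1.0 - frac_root) * fm_humus
    return -8033.0 * math.log(fm_mix)
```

For example, 70% root CO2 mixed with humus CO2 at Fm = 0.8 yields an initial apparent age of roughly 500 yr: carbonate formed today already looks old, exactly the bias the production/diffusion model is built to quantify.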

  5. GPS-derived Coseismic deformations of the 2016 Aktao Ms6.7 earthquake and source modelling

    NASA Astrophysics Data System (ADS)

    Li, J.; Zhao, B.; Xiaoqiang, W.; Daiqing, L.; Yushan, A.

    2017-12-01

On 25 November 2016, an Ms 6.7 earthquake occurred in Aktao, a county of Xinjiang, China. This earthquake was the largest to occur on the northeastern margin of the Pamir Plateau in the last 30 years. From GPS observations, we obtained the coseismic displacements of this earthquake. The site of maximum displacement is located in the Muji Basin, 15 km south of the causative fault. The maximum deformation reaches 0.12 m of subsidence and 0.10 m of horizontal coseismic displacement; our results indicate that the earthquake had the characteristics of dextral strike-slip and normal-fault rupture. Based on the GPS results, we inverted for the rupture distribution of the earthquake. The source model consists of two approximately independent slip zones at depths of less than 20 km; the maximum displacement of one zone is 0.6 m and that of the other is 0.4 m. The total seismic moment calculated from the geodetic inversion corresponds to Mw 6.6. The GPS-derived source model is basically consistent with that from seismic waveform inversion, and with the surface rupture distribution obtained from field investigation. According to our inversion calculation, the recurrence period of strong earthquakes similar to this one should be 30-60 years, and the seismic risk of the eastern segment of the Muji fault is worthy of attention. This research is financially supported by the National Natural Science Foundation of China (Grant No. 41374030).
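The geodetic moment calculation behind the quoted magnitude can be sketched as follows: M0 = μ Σ A_i s_i summed over the fault patches of the slip model, converted to moment magnitude with the standard Hanks-Kanamori relation. The patch values in the test are invented placeholders, not the inversion's actual slip distribution.

```python
import math

MU = 3.0e10  # assumed crustal rigidity, Pa

def moment_magnitude(patch_areas_m2, patch_slips_m, mu=MU):
    """Moment magnitude from a discretized slip model.

    M0 = mu * sum(area_i * slip_i);  Mw = (2/3) * (log10(M0) - 9.1)
    """
    m0 = mu * sum(a * s for a, s in zip(patch_areas_m2, patch_slips_m))
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)
```

A single 20 km x 20 km patch with 0.5 m of slip, for instance, already gives a moment in the Mw 6.4-6.5 range, showing how modest slip over a fault of this size yields the reported magnitude.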

  6. Combined use of stable isotopes and hydrologic modeling to better understand nutrient sources and cycling in highly altered systems (Invited)

    NASA Astrophysics Data System (ADS)

    Young, M. B.; Kendall, C.; Guerin, M.; Stringfellow, W. T.; Silva, S. R.; Harter, T.; Parker, A.

    2013-12-01

    The Sacramento and San Joaquin Rivers provide the majority of freshwater for the San Francisco Bay Delta. Both rivers are important sources of drinking and irrigation water for California, and play critical roles in the health of California fisheries. Understanding the factors controlling water quality and primary productivity in these rivers and the Delta is essential for making sound economic and environmental water management decisions. However, these highly altered surface water systems present many challenges for water quality monitoring studies due to factors such as multiple potential nutrient and contaminant inputs, dynamic source water inputs, and changing flow regimes controlled by both natural and engineered conditions. The watersheds for both rivers contain areas of intensive agriculture along with many other land uses, and the Sacramento River receives significant amounts of treated wastewater from the large population around the City of Sacramento. We have used a multi-isotope approach combined with mass balance and hydrodynamic modeling in order to better understand the dominant nutrient sources for each of these rivers, and to track nutrient sources and cycling within the complex Delta region around the confluence of the rivers. High nitrate concentrations within the San Joaquin River fuel summer algal blooms, contributing to low dissolved oxygen conditions. High δ15N-NO3 values combined with the high nitrate concentrations suggest that animal manure is a significant source of nitrate to the San Joaquin River. In contrast, the Sacramento River has lower nitrate concentrations but elevated ammonium concentrations from wastewater discharge. Downstream nitrification of the ammonium can be clearly traced using δ15N-NH4. Flow conditions for these rivers and the Delta have strong seasonal and inter-annual variations, resulting in significant changes in nutrient delivery and cycling. 
Isotopic measurements and estimates of source water contributions derived from the DSM2-HYDRO hydrologic model demonstrate that mixing between San Joaquin and Sacramento River water can occur as far as 30 miles upstream of the confluence within the San Joaquin channel, and that San Joaquin-derived nitrate only reaches the western Delta during periods of high flow.
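The isotope-based source tracing described above typically rests on a two-endmember conservative mixing calculation: given endmember tracer values for the two rivers, the fraction of one source in a mixed sample is linear in the measured tracer. A minimal sketch with illustrative numbers (not the study's endmember values):

```python
def fraction_endmember_a(sample, a_value, b_value):
    """Fraction of endmember A (0..1) in a binary conservative mixture,
    given the tracer value of the sample and of each endmember."""
    return (sample - b_value) / (a_value - b_value)
```

A sample tracer value of 12 between endmembers at 15 (river A) and 5 (river B), for example, implies 70% river-A water; hydrodynamic models such as DSM2 provide the independent flow-based estimate against which such isotope mixing fractions are compared.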

  7. Electrical performance characteristics of high power converters for space power applications

    NASA Technical Reports Server (NTRS)

    Stuart, Thomas A.; King, Roger J.

    1989-01-01

    The first goal of this project was to investigate various converters that would be suitable for processing electric power derived from a nuclear reactor. The implementation is indicated of a 20 kHz system that includes a source converter, a ballast converter, and a fixed frequency converter for generating the 20 kHz output. This system can be converted to dc simply by removing the fixed frequency converter. This present study emphasized the design and testing of the source and ballast converters. A push-pull current-fed (PPCF) design was selected for the source converter, and a 2.7 kW version of this was implemented using three 900 watt modules in parallel. The characteristic equation for two converters in parallel was derived, but this analysis did not yield any experimental methods for measuring relative stability. The three source modules were first tested individually and then in parallel as a 2.7 kW system. All tests proved to be satisfactory; the system was stable; efficiency and regulation were acceptable; and the system was fault tolerant. The design of a ballast-load converter, which was operated as a shunt regulator, was investigated. The proposed power circuit is suitable for use with BJTs because proportional base drive is easily implemented. A control circuit which minimizes switching frequency ripple and automatically bypasses a faulty shunt section was developed. A nonlinear state-space-averaged model of the shunt regulator was developed and shown to produce an accurate incremental (small-signal) dynamic model, even though the usual state-space-averaging assumptions were not met. The nonlinear model was also shown to be useful for large-signal dynamic simulation using PSpice.

  8. Analysis the Source model of the 2009 Mw 7.6 Padang Earthquake in Sumatra Region using continuous GPS data

    NASA Astrophysics Data System (ADS)

    Amertha Sanjiwani, I. D. M.; En, C. K.; Anjasmara, I. M.

    2017-12-01

A seismic gap on the interface along the Sunda subduction zone has been proposed among the 2000, 2004, 2005 and 2007 great earthquakes. This seismic gap therefore plays an important role in the earthquake risk on the Sunda trench. The Mw 7.6 Padang earthquake, an intraslab event, occurred on September 30, 2009, about 250 km east of the Sunda trench, close to the seismic gap on the interface. To understand the interaction between the seismic gap and the Padang earthquake, data from twelve continuous GPS stations of SUGAR are adopted in this study to estimate the source model of this event. The daily GPS coordinates one month before and after the earthquake were calculated with the GAMIT software. The coseismic displacements were evaluated based on analysis of the coordinate time series in the Padang region. This geodetic network provides rather good spatial coverage for examining the seismic source along the Padang region in detail. The general pattern of coseismic horizontal displacement is motion toward the epicenter and the trench, and the coseismic vertical displacement pattern is uplift. The largest coseismic displacement, derived from the MSAI station, is 35.0 mm in the horizontal component toward S32.1°W and 21.7 mm in the vertical component. The second largest, derived from the LNNG station, is 26.6 mm in the horizontal component toward N68.6°W and 3.4 mm in the vertical component. Next, we will use a uniform stress drop inversion to invert the coseismic displacement field for the source model, and the relationship between the seismic gap on the interface and the intraslab Padang earthquake will then be discussed. Keywords: seismic gap, Padang earthquake, coseismic displacement.
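The coseismic offsets quoted above are obtained, in essence, by differencing mean daily positions across the event: one month of daily solutions before versus one month after. A minimal sketch; the arrays in the test are synthetic stand-ins for real GAMIT daily coordinates.

```python
import numpy as np

def coseismic_offset(pre_positions, post_positions):
    """Coseismic offset (same units as input) for one coordinate
    component, estimated as the difference of the mean daily position
    after and before the earthquake."""
    return float(np.mean(post_positions) - np.mean(pre_positions))
```

In practice the pre- and post-event windows are also detrended for secular motion before differencing; this sketch keeps only the core step.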

  9. High-resolution observations of low-luminosity gigahertz-peaked spectrum and compact steep-spectrum sources

    NASA Astrophysics Data System (ADS)

    Collier, J. D.; Tingay, S. J.; Callingham, J. R.; Norris, R. P.; Filipović, M. D.; Galvin, T. J.; Huynh, M. T.; Intema, H. T.; Marvil, J.; O'Brien, A. N.; Roper, Q.; Sirothia, S.; Tothill, N. F. H.; Bell, M. E.; For, B.-Q.; Gaensler, B. M.; Hancock, P. J.; Hindson, L.; Hurley-Walker, N.; Johnston-Hollitt, M.; Kapińska, A. D.; Lenc, E.; Morgan, J.; Procopio, P.; Staveley-Smith, L.; Wayth, R. B.; Wu, C.; Zheng, Q.; Heywood, I.; Popping, A.

    2018-06-01

We present very long baseline interferometry observations of a faint and low-luminosity (L1.4 GHz < 10^27 W Hz^-1) gigahertz-peaked spectrum (GPS) and compact steep-spectrum (CSS) sample. We select eight sources from deep radio observations that have radio spectra characteristic of a GPS or CSS source and an angular size of θ ≲ 2 arcsec, and detect six of them with the Australian Long Baseline Array. We determine their linear sizes, and model their radio spectra using synchrotron self-absorption (SSA) and free-free absorption (FFA) models. We derive statistical model ages, based on a fitted scaling relation, and spectral ages, based on the radio spectrum, which are generally consistent with the hypothesis that GPS and CSS sources are young and evolving. We resolve the morphology of one CSS source with a radio luminosity of 10^25 W Hz^-1, and find what appear to be two hotspots spanning 1.7 kpc. We find that our sources follow the turnover-linear size relation, and that both homogeneous SSA and an inhomogeneous FFA model can account for the spectra with observable turnovers. All but one of the FFA models do not require a spectral break to account for the radio spectrum, while all but one of the alternative SSA and power-law models do require a spectral break to account for the radio spectrum. We conclude that our low-luminosity sample is similar to brighter samples in terms of their spectral shape, turnover frequencies, linear sizes, and ages, but cannot test for a difference in morphology.
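The homogeneous SSA spectral form fitted to such sources can be sketched directly: flux rises as ν^2.5 in the optically thick regime below the turnover and falls with the intrinsic spectral index α above it. Parameter values in the test are illustrative, not fits from the paper.

```python
import numpy as np

def ssa_spectrum(nu, s_scale, nu_peak, alpha):
    """Homogeneous synchrotron self-absorption model (arbitrary units).

    Optical depth is of order unity near nu_peak; below it the spectrum
    rises as nu^2.5, above it it falls as nu^alpha.
    """
    tau = (nu / nu_peak) ** (alpha - 2.5)
    return s_scale * (nu / nu_peak) ** 2.5 * (1.0 - np.exp(-tau))
```

An inhomogeneous FFA model replaces the optical-depth term with a free-free form (τ ∝ ν^-2.1 weighted over the absorber distribution); both families can reproduce an observed turnover, which is why the abstract compares their need for an additional spectral break.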

  10. Understanding the electrical behavior of the action potential in terms of elementary electrical sources.

    PubMed

    Rodriguez-Falces, Javier

    2015-03-01

A concept of major importance in human electrophysiology studies is the process by which activation of an excitable cell results in a rapid rise and fall of the electrical membrane potential, the so-called action potential. Hodgkin and Huxley proposed a model to explain the ionic mechanisms underlying the formation of action potentials. However, this model is unsuitably complex for teaching purposes. In addition, the Hodgkin and Huxley approach describes the shape of the action potential only in terms of ionic currents, i.e., it is unable to explain the electrical significance of the action potential or describe the electrical field arising from this source using basic concepts of electromagnetic theory. The goal of the present report was to propose a new model to describe the electrical behaviour of the action potential in terms of elementary electrical sources (in particular, dipoles). The efficacy of this model was tested through a closed-book written exam. The proposed model increased the ability of students to appreciate the distributed character of the action potential and also to recognize that this source spreads out along the fiber as a function of space. In addition, the new approach allowed students to realize that the amplitude and sign of the extracellular electrical potential arising from the action potential are determined by the spatial derivative of this intracellular source. The proposed model, which incorporates intuitive graphical representations, has improved students' understanding of the electrical potentials generated by bioelectrical sources and has heightened their interest in bioelectricity. Copyright © 2015 The American Physiological Society.

  11. Sublimation rates of carbon monoxide and carbon dioxide from comets at large heliocentric distances

    NASA Technical Reports Server (NTRS)

    Sekanina, Zdenek

    1992-01-01

    Using a simple model for outgassing from a small flat surface area, the sublimation rates of carbon monoxide and carbon dioxide, two species more volatile than water ice that are known to be present in comets, are calculated for a suddenly activated discrete source on the rotating nucleus. The instantaneous sublimation rate depends upon the comet's heliocentric distance and the Sun's zenith angle at the location of the source. The values are derived for the constants of CO and CO2 in an expression that yields the local rotation-averaged sublimation rate as a function of the comet's spin parameters and the source's cometocentric latitude.

  12. Addendum to foundations of multidimensional wave field signal theory: Gaussian source function

    NASA Astrophysics Data System (ADS)

    Baddour, Natalie

    2018-02-01

    Many important physical phenomena are described by wave or diffusion-wave type equations. Recent work has shown that a transform domain signal description from linear system theory can give meaningful insight to multi-dimensional wave fields. In N. Baddour [AIP Adv. 1, 022120 (2011)], certain results were derived that are mathematically useful for the inversion of multi-dimensional Fourier transforms, but more importantly provide useful insight into how source functions are related to the resulting wave field. In this short addendum to that work, it is shown that these results can be applied with a Gaussian source function, which is often useful for modelling various physical phenomena.
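The property the addendum exploits is that the Fourier transform of a Gaussian is again a Gaussian, so a Gaussian source keeps the transform-domain wave-field expressions in closed form. A quick numerical check of the 1D pair FT{exp(-a x²)} = sqrt(π/a) exp(-k²/4a) (kernel exp(-i k x)); this is a generic verification sketch, not code from the paper.

```python
import numpy as np

def gaussian_ft_analytic(k, a):
    """Closed-form transform of exp(-a x^2) with kernel exp(-i k x)."""
    return np.sqrt(np.pi / a) * np.exp(-k ** 2 / (4.0 * a))

def gaussian_ft_numeric(k, a, x_max=20.0, n=40001):
    """Direct quadrature of the transform integral at a single k.
    The sine (odd) part vanishes for an even source, so only cos remains."""
    x = np.linspace(-x_max, x_max, n)
    dx = x[1] - x[0]
    f = np.exp(-a * x ** 2)
    return float(np.sum(f * np.cos(k * x)) * dx)
```

The quadrature and the closed form agree to high precision, illustrating why Gaussian sources make the multi-dimensional inversion results of the original paper directly applicable.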

  13. Global Inventory of Gas Geochemistry Data from Fossil Fuel, Microbial and Burning Sources, version 2017

    NASA Astrophysics Data System (ADS)

    Sherwood, Owen A.; Schwietzke, Stefan; Arling, Victoria A.; Etiope, Giuseppe

    2017-08-01

    The concentration of atmospheric methane (CH4) has more than doubled over the industrial era. To help constrain global and regional CH4 budgets, inverse (top-down) models incorporate data on the concentration and stable carbon (δ13C) and hydrogen (δ2H) isotopic ratios of atmospheric CH4. These models depend on accurate δ13C and δ2H end-member source signatures for each of the main emissions categories. Compared with meticulous measurement and calibration of isotopic CH4 in the atmosphere, there has been relatively less effort to characterize globally representative isotopic source signatures, particularly for fossil fuel sources. Most global CH4 budget models have so far relied on outdated source signature values derived from globally nonrepresentative data. To correct this deficiency, we present a comprehensive, globally representative end-member database of the δ13C and δ2H of CH4 from fossil fuel (conventional natural gas, shale gas, and coal), modern microbial (wetlands, rice paddies, ruminants, termites, and landfills and/or waste) and biomass burning sources. Gas molecular compositional data for fossil fuel categories are also included with the database. The database comprises 10 706 samples (8734 fossil fuel, 1972 non-fossil) from 190 published references. Mean (unweighted) δ13C signatures for fossil fuel CH4 are significantly lighter than values commonly used in CH4 budget models, thus highlighting potential underestimation of fossil fuel CH4 emissions in previous CH4 budget models. This living database will be updated every 2-3 years to provide the atmospheric modeling community with the most complete CH4 source signature data possible. Database digital object identifier (DOI): https://doi.org/10.15138/G3201T.

  14. Bounds on the dynamics of sink populations with noisy immigration.

    PubMed

    Eager, Eric Alan; Guiver, Chris; Hodgson, Dave; Rebarber, Richard; Stott, Iain; Townley, Stuart

    2014-03-01

    Sink populations are doomed to decline to extinction in the absence of immigration. The dynamics of sink populations are not easily modelled using the standard framework of per capita rates of immigration, because numbers of immigrants are determined by extrinsic sources (for example, source populations, or population managers). Here we appeal to a systems and control framework to place upper and lower bounds on both the transient and future dynamics of sink populations that are subject to noisy immigration. Immigration has a number of interpretations and can fit a wide variety of models found in the literature. We apply the results to case studies derived from published models for Chinook salmon (Oncorhynchus tshawytscha) and blowout penstemon (Penstemon haydenii). Copyright © 2013 Elsevier Inc. All rights reserved.
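
    The bounding idea can be sketched for the simplest scalar sink model: with per-step survival a < 1 and immigration confined to an interval, geometric-series bounds bracket every trajectory. A toy version (a hypothetical scalar caricature, not the paper's matrix models):

```python
import random

def simulate_sink(a, u_lo, u_hi, x0, steps, seed=0):
    """Scalar sink population x_{t+1} = a*x_t + u_t, with noisy immigration
    u_t drawn uniformly from [u_lo, u_hi]; returns the trajectory."""
    rng = random.Random(seed)
    x, traj = x0, [x0]
    for _ in range(steps):
        x = a * x + rng.uniform(u_lo, u_hi)
        traj.append(x)
    return traj

def asymptotic_bounds(a, u_lo, u_hi):
    # Geometric-series bounds on the long-run population size.
    return u_lo / (1 - a), u_hi / (1 - a)

lo, hi = asymptotic_bounds(a=0.6, u_lo=2.0, u_hi=5.0)
traj = simulate_sink(a=0.6, u_lo=2.0, u_hi=5.0, x0=8.0, steps=200)
```

    Any trajectory started inside [lo, hi] stays there, whatever the immigration noise does.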

  15. Theory of cosmological perturbations with cuscuton

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boruah, Supranta S.; Kim, Hyung J.; Geshnizjani, Ghazal, E-mail: ssarmabo@uwaterloo.ca, E-mail: h268kim@uwaterloo.ca, E-mail: ggeshniz@uwaterloo.ca

    2017-07-01

    This paper presents the first derivation of the quadratic action for curvature perturbations, ζ, within the framework of cuscuton gravity. We study the scalar cosmological perturbations sourced by a canonical single scalar field in the presence of a cuscuton field. We identify ζ as the comoving curvature with respect to the source field and we show that it retains its conservation characteristic on super-horizon scales. The result provides an explicit proof that the cuscuton modification of gravity around a Friedmann-Lemaitre-Robertson-Walker (FLRW) metric is ghost free. We also investigate the potential development of other instabilities in cuscuton models. We find that in a large class of these models there is no generic instability problem. However, depending on the details of slow-roll parameters, specific models may display gradient instabilities.

  16. Gaussian temporal modulation for the behavior of multi-sinc Schell-model pulses in dispersive media

    NASA Astrophysics Data System (ADS)

    Liu, Xiayin; Zhao, Daomu; Tian, Kehan; Pan, Weiqing; Zhang, Kouwen

    2018-06-01

    A new class of pulse source, whose correlation is modeled by the convolution of two legitimate temporal correlation functions, is proposed. In particular, analytical formulas are derived for the Gaussian temporally modulated multi-sinc Schell-model (MSSM) pulses generated by such a pulse source propagating in dispersive media. It is demonstrated that the average intensity of MSSM pulses on propagation is reshaped from a flat profile or a pulse train into a distribution with a Gaussian temporal envelope by adjusting the initial correlation width of the Gaussian pulse. The effects of the Gaussian temporal modulation on the temporal degree of coherence of the MSSM pulse are also analyzed. The results presented here show the potential of coherence modulation for pulse shaping and pulsed laser material processing.

  17. Evaluation of substitution monopole models for tire noise sound synthesis

    NASA Astrophysics Data System (ADS)

    Berckmans, D.; Kindt, P.; Sas, P.; Desmet, W.

    2010-01-01

    Due to the considerable efforts in engine noise reduction, tire noise has nowadays become one of the major sources of passenger car noise, and the demand for accurate prediction models is high. A rolling tire is therefore experimentally characterized by means of the substitution monopole technique, suiting a general sound synthesis approach with a focus on perceived sound quality. The running tire is substituted by a monopole distribution covering the static tire. All monopoles have mutual phase relationships and a well-defined volume velocity distribution, which is derived by means of the airborne source quantification technique, i.e., by combining static transfer function measurements with operating indicator pressure measurements close to the rolling tire. Models with varying numbers/locations of monopoles are discussed and the application of different regularization techniques is evaluated.
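
    The inverse step of airborne source quantification, recovering monopole volume velocities from measured transfer functions and operating pressures, is commonly posed as a regularized least-squares problem. A toy numerical sketch (the dimensions, noise level, and Tikhonov weighting are arbitrary choices, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 6 field microphones, 4 substitution monopoles.
H = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))  # measured FRFs
q_true = rng.standard_normal(4) + 1j * rng.standard_normal(4)       # volume velocities
p = H @ q_true + 0.001 * rng.standard_normal(6)                     # operating pressures

def tikhonov_asq(H, p, lam):
    """Airborne source quantification with Tikhonov regularization:
    minimize ||H q - p||^2 + lam * ||q||^2 for the volume velocities q."""
    A = H.conj().T @ H + lam * np.eye(H.shape[1])
    return np.linalg.solve(A, H.conj().T @ p)

q_est = tikhonov_asq(H, p, lam=1e-4)
```

    The regularization parameter trades fit to the measured pressures against amplification of measurement noise, which is the evaluation discussed in the abstract.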

  18. Biomedical data integration - capturing similarities while preserving disparities.

    PubMed

    Bianchi, Stefano; Burla, Anna; Conti, Costanza; Farkash, Ariel; Kent, Carmel; Maman, Yonatan; Shabo, Amnon

    2009-01-01

    One of the challenges of healthcare data processing, analysis and warehousing is the integration of data gathered from disparate and diverse data sources. Promoting the adoption of widely accepted information standards, along with common terminologies and the use of technologies derived from semantic web representation, is a suitable path to achieve that. To that end, the HL7 V3 Reference Information Model (RIM) [1] has been used as the underlying information model, coupled with the Web Ontology Language (OWL) [2] as the semantic data integration technology. In this paper we depict a biomedical data integration process and demonstrate how it was used for integrating various data sources, containing clinical, environmental and genomic data, within Hypergenes, a European Commission funded project exploring the Essential Hypertension [3] disease model.

  19. Fermi large area telescope second source catalog

    DOE PAGES

    Nolan, P. L.; Abdo, A. A.; Ackermann, M.; ...

    2012-03-28

    Here, we present the second catalog of high-energy γ-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), derived from data taken during the first 24 months of the science phase of the mission, which began on 2008 August 4. Source detection is based on the average flux over the 24 month period. The second Fermi-LAT catalog (2FGL) includes source location regions, defined in terms of elliptical fits to the 95% confidence regions and spectral fits in terms of power-law, exponentially cutoff power-law, or log-normal forms. Also included are flux measurements in five energy bands and light curves on monthly intervals for each source. Twelve sources in the catalog are modeled as spatially extended. Furthermore, we provide a detailed comparison of the results from this catalog with those from the first Fermi-LAT catalog (1FGL). Although the diffuse Galactic and isotropic models used in the 2FGL analysis are improved compared to the 1FGL catalog, we attach caution flags to 162 of the sources to indicate possible confusion with residual imperfections in the diffuse model. Finally, the 2FGL catalog contains 1873 sources detected and characterized in the 100 MeV to 100 GeV range of which we consider 127 as being firmly identified and 1171 as being reliably associated with counterparts of known or likely γ-ray-producing source classes.
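
    The three spectral forms used for 2FGL sources are simple closed-form flux densities. A sketch with illustrative parameter values (these numbers are invented for the example, not catalog entries):

```python
import math

# The three spectral shapes named in the 2FGL catalog, in a commonly
# used parametrization (power-law, exponentially cutoff power-law, and
# the log-normal/"LogParabola" form).
def power_law(E, K, E0, gamma):
    return K * (E / E0) ** (-gamma)

def cutoff_power_law(E, K, E0, gamma, Ecut):
    return K * (E / E0) ** (-gamma) * math.exp(-(E - E0) / Ecut)

def log_parabola(E, K, E0, alpha, beta):
    # Spectral slope steepens logarithmically with energy.
    return K * (E / E0) ** (-(alpha + beta * math.log(E / E0)))

f1 = power_law(1000.0, K=1e-11, E0=100.0, gamma=2.0)
```

    All three agree with the normalization K at the reference energy E0, which makes fitted parameters easy to compare across forms.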

  20. FERMI LARGE AREA TELESCOPE SECOND SOURCE CATALOG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nolan, P. L.; Ajello, M.; Allafort, A.

    We present the second catalog of high-energy γ-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), derived from data taken during the first 24 months of the science phase of the mission, which began on 2008 August 4. Source detection is based on the average flux over the 24 month period. The second Fermi-LAT catalog (2FGL) includes source location regions, defined in terms of elliptical fits to the 95% confidence regions and spectral fits in terms of power-law, exponentially cutoff power-law, or log-normal forms. Also included are flux measurements in five energy bands and light curves on monthly intervals for each source. Twelve sources in the catalog are modeled as spatially extended. We provide a detailed comparison of the results from this catalog with those from the first Fermi-LAT catalog (1FGL). Although the diffuse Galactic and isotropic models used in the 2FGL analysis are improved compared to the 1FGL catalog, we attach caution flags to 162 of the sources to indicate possible confusion with residual imperfections in the diffuse model. The 2FGL catalog contains 1873 sources detected and characterized in the 100 MeV to 100 GeV range of which we consider 127 as being firmly identified and 1171 as being reliably associated with counterparts of known or likely γ-ray-producing source classes.

  1. Coupling chemical transport model source attributions with positive matrix factorization: application to two IMPROVE sites impacted by wildfires.

    PubMed

    Sturtz, Timothy M; Schichtel, Bret A; Larson, Timothy V

    2014-10-07

    Source contributions to total fine particle carbon predicted by a chemical transport model (CTM) were incorporated into the positive matrix factorization (PMF) receptor model to form a receptor-oriented hybrid model. The level of influence of the CTM versus traditional PMF was varied using a weighting parameter applied to an objective function as implemented in the Multilinear Engine (ME-2). The methodology provides the ability to separate features that would not be identified using PMF alone, without sacrificing fit to observations. The hybrid model was applied to IMPROVE data taken from 2006 through 2008 at Monture and Sula Peak, Montana. It was able to separately identify major contributions of total carbon (TC) from wildfires and minor contributions from biogenic sources. The predictions of TC had a lower cross-validated RMSE than those from either PMF or CTM alone. Two unconstrained, minor features were identified at each site: a soil-derived feature with elevated summer impacts and a feature enriched in sulfate and nitrate with significant but sporadic contributions across the sampling period. The respective mean TC contributions from wildfires, biogenic emissions, and other sources were 1.18, 0.12, and 0.12 μgC/m³ at Monture and 1.60, 0.44, and 0.06 μgC/m³ at Sula Peak.

  2. Assessment of the Unintentional Reuse of Municipal Wastewater

    NASA Astrophysics Data System (ADS)

    Okasaki, S.; Fono, L.; Sedlak, D. L.; Dracup, J. A.

    2002-12-01

    Many surface waters that receive wastewater effluent also serve as source waters for drinking water treatment plants. Recent research has shown that a number of previously undiscovered wastewater-derived contaminants are present in these surface waters, including pharmaceuticals and human hormones, several of which are suspected carcinogens or endocrine disrupters and are, as of yet, unregulated through drinking water standards. This research has been designed to determine the extent of contamination of specific wastewater-derived contaminants in surface water bodies that both receive wastewater effluent and serve as a source of drinking water to a sizeable population. We are testing the hypothesis that surface water supplies during low flow are potentially of worse quality than carefully monitored reclaimed water. The first phase of our research involves: (1) the selection of sites for study; (2) a hydrologic analysis of the selected sites to determine average flow of the source water during median- and low-flow conditions; and (3) the development and testing of chemical analyses, including both conservative and reactive tracers that have been studied in microcosms and wetlands for attenuation rates. The second phase involves the development and use of the hydrologic model QUAL2E to simulate each of the selected watersheds in order to estimate potential stream water quality impairments at the drinking water intake at each site. The results of the model are verified with field sampling at designated locations at each site. We expect to identify several critical river basins where surface water at the drinking water intake contains sufficient wastewater-derived contaminants to warrant concern. If wastewater-derived contaminants are detected, we will estimate the average annual exposure of consumers of this water. 
We will compare these expected and actual concentrations with typical constituent concentrations found in wastewater that has undergone advanced treatment for reclamation. We may demonstrate that the surface water supplies during low flow are actually of worse quality than carefully monitored reclaimed water.
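
    The screening logic behind such an assessment can be illustrated with a dilution-plus-decay calculation (a hypothetical simplification for one contaminant and one effluent input; the study itself uses the QUAL2E water quality model):

```python
import math

def intake_concentration(C_eff, Q_eff, Q_up, k_per_day, travel_days):
    """Wastewater-derived contaminant at a downstream drinking-water intake:
    complete mixing of effluent (concentration C_eff, flow Q_eff) with
    upstream flow Q_up, then first-order attenuation over the travel time."""
    C_mixed = C_eff * Q_eff / (Q_eff + Q_up)
    return C_mixed * math.exp(-k_per_day * travel_days)

# Illustrative scenario: effluent is a large share of total flow at low flow.
c_low_flow  = intake_concentration(C_eff=1000.0, Q_eff=1.0, Q_up=2.0,
                                   k_per_day=0.2, travel_days=2.0)
c_high_flow = intake_concentration(C_eff=1000.0, Q_eff=1.0, Q_up=20.0,
                                   k_per_day=0.2, travel_days=2.0)
```

    The comparison captures the hypothesis in the abstract: the same effluent produces far higher intake concentrations under low-flow conditions.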

  3. Optimal use of EEG recordings to target active brain areas with transcranial electrical stimulation.

    PubMed

    Dmochowski, Jacek P; Koessler, Laurent; Norcia, Anthony M; Bikson, Marom; Parra, Lucas C

    2017-08-15

    To demonstrate causal relationships between brain and behavior, investigators would like to guide brain stimulation using measurements of neural activity. Particularly promising in this context are electroencephalography (EEG) and transcranial electrical stimulation (TES), as they are linked by a reciprocity principle which, despite being known for decades, has not led to a formalism for relating EEG recordings to optimal stimulation parameters. Here we derive a closed-form expression for the TES configuration that optimally stimulates (i.e., targets) the sources of recorded EEG, without making assumptions about source location or distribution. We also derive a duality between TES targeting and EEG source localization, and demonstrate that in cases where source localization fails, so does the proposed targeting. Numerical simulations with multiple head models confirm these theoretical predictions and quantify the achieved stimulation in terms of focality and intensity. We show that constraining the stimulation currents automatically selects optimal montages that involve only a few (4-7) electrodes, with only incremental loss in performance when targeting focal activations. The proposed technique allows brain scientists and clinicians to rationally target the sources of observed EEG and thus overcomes a major obstacle to the realization of individualized or closed-loop brain stimulation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
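
    The flavor of this targeting can be sketched as a regularized least-squares montage computed from a lead-field matrix (an illustrative stand-in with random matrices, not the paper's closed-form solution or its full constraint set):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward (lead-field) matrix: 16 electrodes x 30 brain sources.
A = rng.standard_normal((16, 30))
s = np.zeros(30)
s[7] = 1.0          # a focal source we wish to target
v = A @ s           # the EEG this source would produce at the electrodes

def target_currents(A, v, lam=1e-2):
    """Regularized least-squares stimulation pattern: choose electrode
    currents I so that the achieved field A.T @ I matches the activity
    implied by the recording v (a sketch of reciprocity-based targeting)."""
    M = A @ A.T + lam * np.eye(A.shape[0])
    I = np.linalg.solve(M, v)
    return I - I.mean()   # enforce zero net injected current

I = target_currents(A, v)
```

    Kirchhoff's law requires the injected currents to sum to zero, which is imposed here by mean subtraction; the paper's formalism additionally constrains total current, which is what drives its sparse 4-7 electrode montages.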

  4. Optimal use of EEG recordings to target active brain areas with transcranial electrical stimulation

    PubMed Central

    Dmochowski, Jacek P.; Koessler, Laurent; Norcia, Anthony M.; Bikson, Marom; Parra, Lucas C.

    2018-01-01

    To demonstrate causal relationships between brain and behavior, investigators would like to guide brain stimulation using measurements of neural activity. Particularly promising in this context are electroencephalography (EEG) and transcranial electrical stimulation (TES), as they are linked by a reciprocity principle which, despite being known for decades, has not led to a formalism for relating EEG recordings to optimal stimulation parameters. Here we derive a closed-form expression for the TES configuration that optimally stimulates (i.e., targets) the sources of recorded EEG, without making assumptions about source location or distribution. We also derive a duality between TES targeting and EEG source localization, and demonstrate that in cases where source localization fails, so does the proposed targeting. Numerical simulations with multiple head models confirm these theoretical predictions and quantify the achieved stimulation in terms of focality and intensity. We show that constraining the stimulation currents automatically selects optimal montages that involve only a few (4–7) electrodes, with only incremental loss in performance when targeting focal activations. The proposed technique allows brain scientists and clinicians to rationally target the sources of observed EEG and thus overcomes a major obstacle to the realization of individualized or closed-loop brain stimulation. PMID:28578130

  5. New ¹²⁵I brachytherapy source IsoSeed I25.S17plus: Monte Carlo dosimetry simulation and comparison to sources of similar design.

    PubMed

    Pantelis, Evaggelos; Papagiannis, Panagiotis; Anagnostopoulos, Giorgos; Baltas, Dimos

    2013-12-01

    To determine the relative dose rate distribution around the new ¹²⁵I brachytherapy source IsoSeed I25.S17plus and report results in a form suitable for clinical use. Results for the new source are also compared to corresponding results for other commercially available ¹²⁵I sources of similar design. Monte Carlo simulations were performed using the MCNP5 v.1.6 general purpose code. The model of the new source was prepared from information provided by the manufacturer and verified by imaging a sample of ten non-radioactive sources. Corresponding simulations were also performed for the 6711 ¹²⁵I brachytherapy source, using updated geometric information presented recently in the literature. The uncertainty of the dose distribution around the new source, as well as the dosimetric quantities derived from it according to the Task Group 43 formalism, were determined from the standard error of the mean of simulations for a sample of fifty source models. These source models were prepared by randomly selecting values of geometric parameters from uniform distributions defined by manufacturer stated tolerances. Results are presented in the form of the quantities defined in the update of the Task Group 43 report, as well as a relative dose rate table in Cartesian coordinates. The dose rate distribution of the new source is comparable to that of sources of similar design (IsoSeed I25.S17, Oncoseed 6711, SelectSeed 130.002, Advantage IAI-125A, I-Seed AgX100, Thinseed 9011). Noticeable differences were observed only for the IsoSeed I25.S06 and Best 2301 sources.
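
    Task Group 43 quantities of the kind tabulated in such studies combine into a dose rate via the standard formalism. In the point-source approximation (the constants and toy fit functions below are invented for illustration, not the I25.S17plus data):

```python
def tg43_point_dose_rate(r_cm, S_K, Lambda, g, phi_an):
    """TG-43 point-source approximation:
    D(r) = S_K * Lambda * (r0/r)^2 * g(r) * phi_an(r), with r0 = 1 cm.
    g is the radial dose function and phi_an the anisotropy function,
    both supplied by the caller."""
    r0 = 1.0
    return S_K * Lambda * (r0 / r_cm) ** 2 * g(r_cm) * phi_an(r_cm)

# Toy (hypothetical) fits: mild radial falloff beyond inverse square,
# constant anisotropy factor.
g = lambda r: max(0.0, 1.0 - 0.05 * (r - 1.0))
phi_an = lambda r: 0.94

d1 = tg43_point_dose_rate(1.0, S_K=0.5, Lambda=0.96, g=g, phi_an=phi_an)
d2 = tg43_point_dose_rate(2.0, S_K=0.5, Lambda=0.96, g=g, phi_an=phi_an)
```

    The "relative dose rate table in Cartesian coordinates" mentioned in the abstract is an alternative presentation of the same quantity, convenient for clinical treatment planning systems.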

  6. Well-to-refinery emissions and net-energy analysis of China's crude-oil supply

    NASA Astrophysics Data System (ADS)

    Masnadi, Mohammad S.; El-Houjeiri, Hassan M.; Schunack, Dominik; Li, Yunpo; Roberts, Samori O.; Przesmitzki, Steven; Brandt, Adam R.; Wang, Michael

    2018-03-01

    Oil is China's second-largest energy source, so it is essential to understand the country's greenhouse gas emissions from crude-oil production. Chinese crude supply is sourced from numerous major global petroleum producers. Here, we use a per-barrel well-to-refinery life-cycle analysis model with data derived from hundreds of public and commercial sources to model the Chinese crude mix and the upstream carbon intensities and energetic productivity of China's crude supply. We generate a carbon-denominated supply curve representing Chinese crude-oil supply from 146 oilfields in 20 countries. The selected fields are estimated to emit between 1.5 and 46.9 g CO2eq MJ⁻¹ of oil, with volume-weighted average emissions of 8.4 g CO2eq MJ⁻¹. These estimates are higher than those in some existing databases, illustrating the importance of bottom-up models to support life-cycle analysis databases. This study provides quantitative insight into China's energy policy and the economic and environmental implications of China's oil consumption.
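
    The headline figure is a volume-weighted average over the crude mix; the weighting itself is simple (the volumes and intensities below are invented for illustration, not the study's data):

```python
def volume_weighted_ci(volumes, intensities):
    """Volume-weighted average upstream carbon intensity (g CO2eq/MJ)
    of a crude mix: sum of volume * intensity over total volume."""
    total = sum(volumes)
    return sum(v * ci for v, ci in zip(volumes, intensities)) / total

# Hypothetical mix: two large low-intensity fields, one small high-intensity one.
ci = volume_weighted_ci([10.0, 5.0, 1.0], [6.0, 12.0, 40.0])
```

    A small volume of very carbon-intensive crude moves the average only modestly, which is why bottom-up, field-level data matter for the tails of the supply curve.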

  7. Systems biology derived source-sink mechanism of BMP gradient formation

    PubMed Central

    Zinski, Joseph; Bu, Ye; Wang, Xu; Dou, Wei

    2017-01-01

    A morphogen gradient of Bone Morphogenetic Protein (BMP) signaling patterns the dorsoventral embryonic axis of vertebrates and invertebrates. The prevailing view in vertebrates for BMP gradient formation is through a counter-gradient of BMP antagonists, often along with ligand shuttling to generate peak signaling levels. To delineate the mechanism in zebrafish, we precisely quantified the BMP activity gradient in wild-type and mutant embryos and combined these data with a mathematical model-based computational screen to test hypotheses for gradient formation. Our analysis ruled out a BMP shuttling mechanism and a bmp transcriptionally-informed gradient mechanism. Surprisingly, rather than supporting a counter-gradient mechanism, our analyses support a fourth model, a source-sink mechanism, which relies on a restricted BMP antagonist distribution acting as a sink that drives BMP flux dorsally and gradient formation. We measured Bmp2 diffusion and found that it supports the source-sink model, suggesting a new mechanism to shape BMP gradients during development. PMID:28826472

  8. Systems biology derived source-sink mechanism of BMP gradient formation.

    PubMed

    Zinski, Joseph; Bu, Ye; Wang, Xu; Dou, Wei; Umulis, David; Mullins, Mary C

    2017-08-09

    A morphogen gradient of Bone Morphogenetic Protein (BMP) signaling patterns the dorsoventral embryonic axis of vertebrates and invertebrates. The prevailing view in vertebrates for BMP gradient formation is through a counter-gradient of BMP antagonists, often along with ligand shuttling to generate peak signaling levels. To delineate the mechanism in zebrafish, we precisely quantified the BMP activity gradient in wild-type and mutant embryos and combined these data with a mathematical model-based computational screen to test hypotheses for gradient formation. Our analysis ruled out a BMP shuttling mechanism and a bmp transcriptionally-informed gradient mechanism. Surprisingly, rather than supporting a counter-gradient mechanism, our analyses support a fourth model, a source-sink mechanism, which relies on a restricted BMP antagonist distribution acting as a sink that drives BMP flux dorsally and gradient formation. We measured Bmp2 diffusion and found that it supports the source-sink model, suggesting a new mechanism to shape BMP gradients during development.
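
    The source-sink idea, a boundary source plus a spatially restricted absorber shaping a graded profile, can be caricatured in one dimension with a steady diffusion-degradation equation (a schematic sketch with arbitrary parameters, not the authors' calibrated model):

```python
import numpy as np

def steady_gradient(n=101, D=1.0, k_sink=50.0, sink_start=0.7):
    """Steady state of D*u'' - k(x)*u = 0 on [0,1]: a constant ligand level
    u = 1 is held at the source boundary (x = 0), a localized antagonist
    'sink' (k > 0 only for x > sink_start) absorbs ligand, and the far
    boundary is no-flux. Solved by finite differences."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    k = np.where(x > sink_start, k_sink, 0.0)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0; b[0] = 1.0                    # u(0) = 1 (source side)
    for i in range(1, n - 1):
        A[i, i - 1] = D / h**2
        A[i, i] = -2.0 * D / h**2 - k[i]
        A[i, i + 1] = D / h**2
    A[-1, -1] = 1.0; A[-1, -2] = -1.0; b[-1] = 0.0   # no-flux at x = 1
    return x, np.linalg.solve(A, b)

x, u = steady_gradient()
```

    The resulting profile decreases monotonically toward the sink region: the absorber drives a steady flux away from the source, which is the essence of the mechanism the screen selected.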

  9. Dual metal gate tunneling field effect transistors based on MOSFETs: A 2-D analytical approach

    NASA Astrophysics Data System (ADS)

    Ramezani, Zeinab; Orouji, Ali A.

    2018-01-01

    A novel 2-D analytical drain current model of a dual metal gate tunneling field effect transistor based on a MOSFET (DMG-TFET) is presented in this paper. The proposed tunneling FET is obtained from a MOSFET structure by employing an additional electrode in the source region with an appropriate work function to induce holes in the N+ source region, thereby converting it into a P+ source region. The electric field is derived and utilized to obtain the drain current by analytically integrating the band-to-band tunneling generation rate over the tunneling region, based on the potential profile obtained by solving Poisson's equation. Through this model, the effects of the thin film thickness and gate voltage on the potential and electric field, and the effect of the thin film thickness on the tunneling current, can be studied. To validate the present model we use the SILVACO ATLAS device simulator; the analytical results show good agreement with the simulations.
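
    The core of such models is Kane's band-to-band tunneling generation rate integrated over the tunneling region. A schematic sketch (the constants and the piecewise-constant field profile are placeholders, not the paper's derived expressions):

```python
import math

def kane_btbt_rate(E, A=4e14, B=1.9e7):
    """Kane band-to-band tunneling generation rate G = A * E^2 * exp(-B/E),
    with E the local electric field in V/cm; A and B are material-dependent
    constants (illustrative values here)."""
    return A * E * E * math.exp(-B / E)

def tunneling_current(field_profile, width_cm, q=1.602e-19):
    """Current per unit area from integrating the generation rate over the
    tunneling region (midpoint rule over a sampled field profile)."""
    dx = width_cm / len(field_profile)
    return q * sum(kane_btbt_rate(E) for E in field_profile) * dx

I_low  = tunneling_current([1.0e6] * 10, width_cm=5e-7)
I_high = tunneling_current([2.0e6] * 10, width_cm=5e-7)
```

    The exponential field dependence is why gate-controlled changes in the field translate into orders-of-magnitude changes in drain current.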

  10. New Hafnium Isotope and Trace Element Constraints on the Role of a Plume in Genesis of the Eastern Snake River Plain Basalts, Idaho

    NASA Astrophysics Data System (ADS)

    Taylor, R. D.; Reid, M. R.; Blichert-Toft, J.

    2009-12-01

    Bimodal volcanism associated with the eastern Snake River Plain (ESRP)-Yellowstone Plateau province has persisted since approximately 16 Ma. A time-transgressive track of rhyolitic eruptions, which young progressively to the east and parallel the motion of the North American plate, is overlain by younger basalts with no age progression. Interpretations for the origin of these basalts range from a thermo-chemical mantle plume to incipient melting of the shallow upper mantle, and remain controversial. The enigmatic ESRP basalts are characterized by high ³He/⁴He, diagnostic of a plume source, but also by lithophile radiogenic isotope signatures that are more enriched than expected for plume-derived OIBs. These features could be caused by isotopic decoupling associated with shallow melting of a hybridized upper mantle, by derivation from an atypical mantle plume, or by both by way of mixing. New Hf isotope and trace element data further constrain potential sources for the ESRP basalts. Their Hf isotopic signatures (ɛHf = +0.1 to -5.8) are moderately enriched and consistently fall above or in the upper part of the field of OIBs, with similar Nd isotope signatures (ɛNd = -2.0 to -5.8), indicating a source with high time-integrated Lu/Hf compared with Sm/Nd. The isotopic compositions of the basalts lie between those of Archean SCML and a more depleted end-member source, suggestive of contributions from at least two sources. The grouping of isotopic characteristics is compact compared to other regional volcanism, implying that the hybridization process is highly reproducible within the ESRP. Minor localized differences in isotopic composition may signify local variations in the relative proportions of the end-members. Trace element patterns also support genesis of the ESRP basalts from an enriched source.
Our data detect evidence of deeper contributions derived from the garnet-stability field, and a greater affinity of the trace element signatures to plume sources than to sources in the mantle lithosphere. The Hf isotope and trace element characteristics of the ESRP basalts thus support a model of derivation from a deep mantle plume with additional melt contributions and isotopic overprinting from SCML.

  11. THE FIRST BENT DOUBLE LOBE RADIO SOURCE IN A KNOWN CLUSTER FILAMENT: CONSTRAINTS ON THE INTRAFILAMENT MEDIUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Louise O. V.; Fadda, Dario; Frayer, David T., E-mail: louise@ipac.caltech.ed

    2010-12-01

    We announce the first discovery of a bent double lobe radio source (DLRS) in a known cluster filament. The bent DLRS is found at a distance of 3.4 Mpc from the center of the rich galaxy cluster A1763. We derive a bend angle α = 25° and infer that the source is most likely seen at a viewing angle of Φ = 10°. From measuring the flux in the jet between the core and the farther lobe, and assuming a spectral index of 1, we calculate the minimum pressure in the jet, (8.0 ± 3.2) × 10⁻¹³ dyn cm⁻², and derive constraints on the intrafilament medium (IFM) assuming the bend of the jet is due to ram pressure. We constrain the IFM density to be between (1-20) × 10⁻²⁹ g cm⁻³. This is consistent with recent direct probes of the IFM and theoretical models. These observations justify future searches for bent double lobe radio sources located several megaparsecs from cluster cores, as they may be good markers of supercluster filaments.
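
    Density constraints of this kind follow from a ram-pressure balance of the form ρ v² ≈ P_jet · (h/R), with h the jet scale height and R the bend's radius of curvature. A back-of-envelope sketch (the galaxy velocity and geometric factor below are assumed values, not the paper's):

```python
def ifm_density(P_jet_dyn_cm2, h_over_R, v_cm_s):
    """Solve the ram-pressure bending balance rho * v^2 ~ P_jet * (h/R)
    for the external (intrafilament) gas density in g/cm^3."""
    return P_jet_dyn_cm2 * h_over_R / v_cm_s**2

# Minimum jet pressure from the abstract; assumed h/R = 0.1 and
# galaxy velocity 500 km/s through the filament gas.
rho = ifm_density(P_jet_dyn_cm2=8.0e-13, h_over_R=0.1, v_cm_s=5.0e7)
```

    With these assumptions the inferred density lands inside the quoted (1-20) × 10⁻²⁹ g cm⁻³ range; the uncertainty is dominated by the poorly known velocity and geometry.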

  12. A weighted adjustment of a similarity transformation between two point sets containing errors

    NASA Astrophysics Data System (ADS)

    Marx, C.

    2017-10-01

    For an adjustment of a similarity transformation, it is often appropriate to consider that both the source and the target coordinates of the transformation are affected by errors. For the least squares adjustment of this problem, a direct solution is possible in the case of specific weighting schemes for the coordinates. Such a problem is considered in the present contribution and a direct solution is derived for the general m-dimensional case. The applied weighting scheme allows (fully populated) point-wise weight matrices for the source and target coordinates; both weight matrices have to be proportional to each other. Additionally, the solutions of two borderline cases of this weighting scheme are derived, which consider errors only in the source or only in the target coordinates. The derived solution for the rotation matrix of the adjustment is independent of the scaling between the weight matrices of the source and the target coordinates. The mentioned borderline cases therefore have the same solution for the rotation matrix. The direct solution method is successfully tested on an example of a 3D similarity transformation using a comparison with an iterative solution based on the Gauß-Helmert model.
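
    The unweighted special case of this adjustment has a classic direct SVD-based solution (the Kabsch/Umeyama algorithm), which the paper's weighted result generalizes. A sketch of that baseline:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transformation dst ~ s * R @ src_i + t for
    point sets given as (n, m) arrays (classic SVD solution for the
    unweighted, errors-in-target case only)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, Dc = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(Dc.T @ S)        # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    E = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = U @ E @ Vt
    s = np.trace(np.diag(sig) @ E) / (S ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Recover a known transform from noiseless points.
rng = np.random.default_rng(3)
src = rng.standard_normal((10, 3))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = 2.0 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = similarity_transform(src, dst)
```

    On noiseless data the scale, rotation, and translation are recovered exactly; the paper's contribution is retaining a direct solution when both point sets carry (proportionally weighted) errors.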

  13. Isotopic insights into sources of acid driving weathering across a mountain-floodplain transition in the Amazon headwaters of Peru

    NASA Astrophysics Data System (ADS)

    Torres, M. A.; Clark, K.; Paris, G.; Adkins, J. F.; West, A.

    2012-12-01

    The carbon budget associated with mineral weathering depends on the extent to which weathering is driven by strong acids (e.g., H2SO4, HNO3) as opposed to weak acids derived from atmospheric CO2 (e.g., H2CO3, organic acids). It has remained difficult to accurately partition acid sources associated with carbonate and silicate weathering, presenting an obstacle to quantifying weathering drawdown of CO2. Moreover, little is known about how acid sources change along material pathways from mountains, where rocks are eroded, producing reactive carbonate and silicate minerals, but also sulfides that generate H2SO4, to floodplains, where the resulting sediment is transported, deposited, and chemically reworked. Such mountain-floodplain transitions are increasingly recognized as important weathering reactors, making it important to quantify any associated variation in acid sources. In this study, these questions are addressed using the dissolved major element geochemistry, the carbon isotopic composition of dissolved inorganic carbon (δ13C DIC), and the sulfur isotopic composition of dissolved sulfate (δ34S) of rivers draining the Peruvian Andes and Madre de Dios floodplain. The dissolved major element geochemistry of the Andean headwater catchments suggests inputs of sulfuric acid (from the oxidation of sulfide minerals) but is also consistent with the weathering of sulfate minerals. The δ13C DIC values of river water samples from the Andean catchments provide key constraints and range from -18 to -5‰, which is consistent with the mixing of DIC derived from the weathering of silicates by respired CO2 and from the weathering of carbonates by either atmospheric CO2 or sulfuric acid. In order to distinguish between the two possible carbonate weathering agents, we calculated the fraction of carbonate-derived DIC using both an isotope mass balance model and a mineral mass balance model.
These results were compared assuming either pure sulfuric acid or atmospheric CO2 weathering. The results of the two models match only if carbonate weathering is driven by sulfuric acid, and if a significant portion of silicate mineral weathering is also driven by sulfuric acid. In the floodplain, low δ13C DIC values in river waters indicate that respired CO2 is the dominant weathering agent of both carbonate and silicate minerals. This indicates that there is a major change in the sources of acidity between the Andes and the Madre de Dios floodplain, which suggests that not only do floodplains promote silicate mineral weathering, as recently identified elsewhere, but this floodplain weathering is also driven to a greater extent by acids derived from CO2, when compared to weathering in the Andes. To further constrain the importance of sulfuric acid weathering in this system, the δ34S of sulfate will be measured and used to determine the source of sulfate and its role in mineral dissolution independently of the major element and δ13C DIC data.
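
    The isotope mass balance step can be illustrated as a two-end-member mixing calculation for DIC (the end-member δ13C values below are generic placeholders, not the study's):

```python
def carbonate_dic_fraction(delta_dic, delta_carb, delta_co2):
    """Two-end-member isotope mass balance for river DIC:
    delta_dic = f * delta_carb + (1 - f) * delta_co2, solved for f,
    the fraction of DIC carrying carbonate mineral carbon."""
    return (delta_dic - delta_co2) / (delta_carb - delta_co2)

# E.g. carbonate rock carbon near 0 permil, respired-CO2-derived DIC
# near -20 permil (illustrative end-members):
f_carb = carbonate_dic_fraction(delta_dic=-5.0, delta_carb=0.0, delta_co2=-20.0)
```

    Comparing this fraction with the one implied by the major-element (mineral) mass balance is the consistency check described in the abstract: the two agree only under the sulfuric acid weathering scenario.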

  14. Broadband Spectral Modeling of the Extreme Gigahertz-peaked Spectrum Radio Source PKS B0008-421

    NASA Astrophysics Data System (ADS)

    Callingham, J. R.; Gaensler, B. M.; Ekers, R. D.; Tingay, S. J.; Wayth, R. B.; Morgan, J.; Bernardi, G.; Bell, M. E.; Bhat, R.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Deshpande, A. A.; Ewall-Wice, A.; Feng, L.; Greenhill, L. J.; Hazelton, B. J.; Hindson, L.; Hurley-Walker, N.; Jacobs, D. C.; Johnston-Hollitt, M.; Kaplan, D. L.; Kudrayvtseva, N.; Lenc, E.; Lonsdale, C. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Oberoi, D.; Offringa, A. R.; Ord, S. M.; Pindor, B.; Prabu, T.; Procopio, P.; Riding, J.; Srivani, K. S.; Subrahmanyan, R.; Udaya Shankar, N.; Webster, R. L.; Williams, A.; Williams, C. L.

    2015-08-01

We present broadband observations and spectral modeling of PKS B0008-421 and identify it as an extreme gigahertz-peaked spectrum (GPS) source. PKS B0008-421 is characterized by the steepest known spectral slope below the turnover, close to the theoretical limit of synchrotron self-absorption, and the smallest known spectral width of any GPS source. Spectral coverage of the source spans from 0.118 to 22 GHz, which includes data from the Murchison Widefield Array and the wide bandpass receivers on the Australia Telescope Compact Array. We have implemented a Bayesian inference model fitting routine to fit the data with internal free-free absorption (FFA), single- and double-component FFA in an external homogeneous medium, FFA in an external inhomogeneous medium, or single- and double-component synchrotron self-absorption models, all with and without a high-frequency exponential break. We find that without the inclusion of a high-frequency break these models cannot accurately fit the data, with significant deviations above and below the peak in the radio spectrum. The addition of a high-frequency break provides acceptable spectral fits for the inhomogeneous FFA and double-component synchrotron self-absorption models, with the inhomogeneous FFA model statistically favored. The requirement of a high-frequency spectral break implies that the source has ceased injecting fresh particles. Additional support for the inhomogeneous FFA model as being responsible for the turnover in the spectrum is given by the consistency between the physical parameters derived from the model fit and the implications of the exponential spectral break, such as the necessity of the source being surrounded by a dense ambient medium to maintain the peak frequency near the gigahertz region. This implies that PKS B0008-421 should display an internal H I column density greater than 10²⁰ cm⁻². 
The discovery of PKS B0008-421 suggests that the next generation of low radio frequency surveys could reveal a large population of GPS sources that have ceased activity, and that a portion of the ultra-steep-spectrum source population could be composed of these GPS sources in a relic phase.
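The family of models being fit can be sketched with a generic single-component external FFA parameterization plus an optional exponential break; the parameter values below are illustrative, not the fitted values for PKS B0008-421:

```python
import numpy as np

# Sketch of a single-component free-free absorption (FFA) spectrum with an
# optional high-frequency exponential break, using the generic form
# S(nu) = a * nu**alpha * exp(-tau) * exp(-nu/nu_br), tau = (nu/nu_p)**-2.1.
# Parameter values are illustrative placeholders.

def ffa_spectrum(nu_ghz, a, alpha, nu_p, nu_br=None):
    tau = (nu_ghz / nu_p) ** -2.1          # optical depth, unity at nu_p
    s = a * nu_ghz ** alpha * np.exp(-tau)
    if nu_br is not None:
        s = s * np.exp(-nu_ghz / nu_br)    # exponential break (ceased injection)
    return s

nu = np.geomspace(0.118, 22.0, 50)         # MWA-to-ATCA frequency range in GHz
s_no_break = ffa_spectrum(nu, 1.0, -0.8, 0.6)
s_break = ffa_spectrum(nu, 1.0, -0.8, 0.6, nu_br=5.0)
# The break suppresses the high-frequency side, steepening the spectrum there.
print(bool(s_break[-1] < s_no_break[-1]))
```

In the paper these model families are compared within a Bayesian inference framework rather than by direct evaluation as here.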

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharif, M., E-mail: msharif.math@pu.edu.pk; Manzoor, Rubab, E-mail: rubab.manzoor@umt.edu.pk; Department of Mathematics, University of Management and Technology, Johar Town Campus, Lahore-54782

This paper explores the influences of dark energy on shear-free axially symmetric evolution by considering self-interacting Brans–Dicke (SBD) gravity as a dark energy candidate. We describe the energy source of the model and derive all the effective dynamical variables as well as effective structure scalars. It is found that the scalar field is one of the sources of anisotropy and dissipation. The resulting effective structure scalars help to study the dynamics associated with dark energy in any axial configuration. In order to investigate shear-free evolution, we formulate a set of governing equations along with a heat transport equation. We discuss consequences of the shear-free condition upon different SBD fluid models, such as dissipative non-geodesic and geodesic models. For the dissipative non-geodesic case, the rotational distribution turns out to be the necessary and sufficient condition for a radiating model. The dissipation depends upon inhomogeneous expansion. The geodesic model is found to be irrotational and non-radiating. The non-dissipative geodesic model leads to the FRW model for positive values of the expansion parameter.

  16. Using multi-date satellite imagery to monitor invasive grass species distribution in post-wildfire landscapes: An iterative, adaptable approach that employs open-source data and software

    USGS Publications Warehouse

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew; Chignell, Steve

    2017-01-01

    Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.
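The index-then-threshold workflow described above can be sketched with a toy NDVI example; the reflectance values, the 0.3 cover threshold, and the validation labels are all invented for illustration:

```python
import numpy as np

# Minimal sketch of the index-plus-threshold idea: derive NDVI from red and
# near-infrared reflectance (as from Landsat 8 OLI bands) and apply a cover
# threshold to produce a binary invasion map. All values are synthetic.

def ndvi(red, nir):
    return (nir - red) / (nir + red)

red = np.array([0.10, 0.08, 0.25, 0.30])
nir = np.array([0.45, 0.40, 0.28, 0.31])
index = ndvi(red, nir)
predicted_cover = index > 0.3     # hypothetical cheatgrass-cover threshold
observed = np.array([True, True, False, False])   # independent field data

# Sensitivity: fraction of observed occurrences the model correctly flags.
sensitivity = (predicted_cover & observed).sum() / observed.sum()
print(predicted_cover.tolist(), float(sensitivity))
```

The study's models use eight multi-date spectral indices and a fitted species distribution model rather than a single hand-set threshold; this only illustrates how a cover threshold turns a continuous index into an evaluable binary map.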

  17. Flood extent and water level estimation from SAR using data-model integration

    NASA Astrophysics Data System (ADS)

    Ajadi, O. A.; Meyer, F. J.

    2017-12-01

Synthetic Aperture Radar (SAR) images have long been recognized as a valuable data source for flood mapping. Compared to other sources, SAR's weather and illumination independence and large area coverage at high spatial resolution support reliable, frequent, and detailed observations of developing flood events. Accordingly, SAR has the potential to greatly aid in the near real-time monitoring of natural hazards such as floods, if combined with automated image processing. This research works towards increasing the reliability and temporal sampling of SAR-derived flood hazard information by integrating information from multiple SAR sensors and SAR modalities (images and Interferometric SAR (InSAR) coherence) and by combining SAR-derived change detection information with hydrologic and hydraulic flood forecast models. First, the combination of multi-temporal SAR intensity images and coherence information for generating flood extent maps is introduced. The application of least-squares estimation integrates flood information from multiple SAR sensors, thus increasing the temporal sampling. SAR-based flood extent information will be combined with a Digital Elevation Model (DEM) to reduce false alarms and to estimate water depth and flood volume. The SAR-based flood extent map is assimilated into the Hydrologic Engineering Center River Analysis System (HEC-RAS) model to aid in hydraulic model calibration. The developed technology improves the accuracy of flood information by exploiting information from both data and models. It also provides enhanced flood information to decision-makers, supporting flood response and improving emergency relief efforts.
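The DEM-based depth and false-alarm screening step might look like the following sketch, assuming a locally flat water surface; the elevations, mask, and water level are all synthetic:

```python
import numpy as np

# Illustrative sketch of combining a SAR-derived flood mask with a DEM: where
# the mask indicates water, depth is the (assumed locally flat) water-surface
# elevation minus ground elevation; non-positive depths flag likely false
# alarms. All values are synthetic.

dem = np.array([[101.0, 100.5, 100.2],
                [100.8, 100.1,  99.7],
                [100.4,  99.9,  99.5]])          # ground elevation (m)
flood_mask = np.array([[False, False,  True],
                       [False,  True,  True],
                       [ True,  True,  True]])   # SAR change-detection result
water_level = 100.3                               # hypothetical stage (m)

depth = np.where(flood_mask, water_level - dem, 0.0)
false_alarm = flood_mask & (depth <= 0)   # mask says water, DEM says dry land
flood_volume = depth[depth > 0].sum() * 1.0      # times cell area (1 m^2 here)
print(round(float(flood_volume), 2), int(false_alarm.sum()))
```

In practice the water surface is not flat, so stage would be interpolated along the channel (e.g. from the hydraulic model) before differencing with the DEM.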

  18. Using multi-date satellite imagery to monitor invasive grass species distribution in post-wildfire landscapes: An iterative, adaptable approach that employs open-source data and software

    NASA Astrophysics Data System (ADS)

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew W.; Chignell, Stephen M.

    2017-07-01

    Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.

  19. Combining archeomagnetic and volcanic data with historical geomagnetic observations to reconstruct global field evolution over the past 1000 years, including new paleomagnetic data from historical lava flows on Fogo, Cape Verde

    NASA Astrophysics Data System (ADS)

    Korte, M. C.; Senftleben, R.; Brown, M. C.; Finlay, C. C.; Feinberg, J. M.; Biggin, A. J.

    2016-12-01

    Geomagnetic field evolution of the recent past can be studied using different data sources: Jackson et al. (2000) combined historical observations with modern field measurements to derive a global geomagnetic field model (gufm1) spanning 1590 to 1990. Several published young archeo- and volcanic paleomagnetic data fall into this time interval. Here, we directly combine data from these different sources to derive a global field model covering the past 1000 years. We particularly focus on reliably recovering dipole moment evolution prior to the times of the first direct absolute intensity observations at around 1840. We first compared the different data types and their agreement with the gufm1 model to assess their compatibility and reliability. We used these results, in combination with statistical modelling tests, to obtain suitable uncertainty estimates as weighting factors for the data in the final model. In addition, we studied samples from seven lava flows from the island of Fogo, Cape Verde, erupted between 1664 and 1857. Oriented samples were available for two of them, providing declination and inclination results. Due to the complicated mineralogy of three of the flows, microwave paleointensity experiments using a modified version of the IZZI protocol were carried out on flows erupted in 1664, 1769, 1816 and 1847. The new directional results are compared with nearby historical data and the influence on, and agreement with, the new model are discussed.
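The use of per-data-type uncertainty estimates as weighting factors can be illustrated by a toy inverse-variance weighted average; the intensity values and uncertainties below are invented, not the study's data:

```python
import numpy as np

# Toy version of uncertainty-weighted data combination: estimate a single
# field value from three data types (e.g. historical, archeomagnetic, and
# volcanic records) whose assigned uncertainties set the weights
# w_i = 1/sigma_i^2. Numbers are illustrative only.

values = np.array([48.1, 47.2, 49.5])   # e.g. intensity estimates (microtesla)
sigmas = np.array([0.5, 2.0, 3.0])      # per-data-type uncertainty estimates

weights = 1.0 / sigmas**2
estimate = np.sum(weights * values) / np.sum(weights)
estimate_sigma = np.sqrt(1.0 / np.sum(weights))   # formal combined uncertainty
print(round(float(estimate), 2), round(float(estimate_sigma), 2))
```

The full model instead fits global spherical harmonic coefficients, but the same principle applies: the more precise historical data dominate where available, while archeomagnetic and volcanic data carry the solution before 1840.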

  20. Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)

    NASA Astrophysics Data System (ADS)

    Kasibhatla, P.

    2004-12-01

In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models for describing the measurement and chemical transport model error statistics and prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities, and inverse source estimates are derived for fixed values of pdf parameters. While the advantage of this approach is that closed-form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches which allow for a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that involves computing moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem that we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups and followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
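A minimal sketch of the MCMC idea, reduced to one scalar source strength with a hypothetical linear transport operator and Gaussian error and prior models (a random-walk Metropolis sampler, far simpler than a real TransCom3 inversion):

```python
import numpy as np

# Toy Metropolis-Hastings sampler for Bayesian source estimation: a single
# scalar source strength s, a linear "transport" operator G mapping source to
# observations, Gaussian observation error, and a weak Gaussian prior.
# All numbers are illustrative.

rng = np.random.default_rng(0)
G = np.array([0.8, 1.2, 0.5])               # transport sensitivities (made up)
s_true = 2.0
obs = G * s_true + rng.normal(0, 0.1, 3)    # synthetic observations

def log_post(s, prior_mean=0.0, prior_sigma=10.0, obs_sigma=0.1):
    misfit = np.sum((obs - G * s) ** 2) / (2 * obs_sigma**2)
    prior = (s - prior_mean) ** 2 / (2 * prior_sigma**2)
    return -(misfit + prior)                # log posterior up to a constant

samples, s = [], 0.0
for _ in range(5000):
    s_prop = s + rng.normal(0, 0.3)         # random-walk proposal
    if np.log(rng.uniform()) < log_post(s_prop) - log_post(s):
        s = s_prop                          # accept
    samples.append(s)
post = np.array(samples[1000:])             # discard burn-in
print(round(float(post.mean()), 2))         # posterior mean, near s_true
```

With non-Gaussian error models one simply changes `log_post`; nothing else in the sampler needs to change, which is the flexibility the abstract highlights.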

  1. Derivation and application of the reciprocity relations for radiative transfer with internal illumination

    NASA Technical Reports Server (NTRS)

    Cogley, A. C.

    1975-01-01

    A Green's function formulation is used to derive basic reciprocity relations for planar radiative transfer in a general medium with internal illumination. Reciprocity (or functional symmetry) allows an explicit and generalized development of the equivalence between source and probability functions. Assuming similar symmetry in three-dimensional space, a general relationship is derived between planar-source intensity and point-source total directional energy. These quantities are expressed in terms of standard (universal) functions associated with the planar medium, while all results are derived from the differential equation of radiative transfer.

  2. Advanced relativistic VLBI model for geodesy

    NASA Astrophysics Data System (ADS)

    Soffel, Michael; Kopeikin, Sergei; Han, Wen-Biao

    2017-07-01

Our present relativistic part of the geodetic VLBI model for Earthbound antennas is a consensus model which is considered a standard for processing high-precision VLBI observations. It was created as a compromise between a variety of relativistic VLBI models proposed by different authors, as documented in the IERS Conventions 2010. The accuracy of the consensus model is in the picosecond range for the group delay, but this is not sufficient for current geodetic purposes. This paper provides a fully documented derivation of a new relativistic model having an accuracy substantially higher than one picosecond and based upon a well-accepted formalism of relativistic celestial mechanics, astrometry and geodesy. Our new model fully confirms the consensus model at the picosecond level and in several respects goes substantially beyond it. More specifically, terms related to the acceleration of the geocenter are considered and kept in the model, the gravitational time delay due to a massive body (planet, Sun, etc.) with arbitrary mass and spin-multipole moments is derived taking into account the motion of the body, and a new formalism for the time-delay problem of radio sources located at a finite distance from VLBI stations is presented. Thus, the paper presents a substantially elaborated theoretical justification of the consensus model and a significant extension of it that allows researchers to make concrete estimates of the magnitude of residual terms of this model for any conceivable configuration of the source of light, massive bodies, and VLBI stations. The largest terms in the relativistic time delay which can affect current VLBI observations are from the quadrupole and the angular momentum of the gravitating bodies, as known from the literature. These terms should be included in the new geodetic VLBI model to improve its consistency.

  3. Modelling urban δ13C variations in the Greater Toronto Area

    NASA Astrophysics Data System (ADS)

    Pugliese, S.; Vogel, F. R.; Murphy, J. G.; Worthy, D. E. J.; Zhang, J.; Zheng, Q.; Moran, M. D.

    2015-12-01

Even in urbanized regions, carbon dioxide (CO2) emissions are derived from a variety of biogenic and anthropogenic sources and are influenced by atmospheric transport across borders. As policies are introduced to reduce the emission of CO2, there is a need for independent verification of emissions reporting. In this work, we aim to use carbon isotope (13CO2 and 12CO2) simulations in combination with atmospheric measurements to distinguish between CO2 sources in the Greater Toronto Area (GTA), Canada. This is being done by developing an urban δ13C framework based on existing CO2 emission data and forward modelling using a chemistry transport model, CHIMERE. The framework is designed to use region-specific δ13C signatures of the dominant CO2 sources together with a CO2 inventory at a fine spatial and temporal resolution; the product is compared against highly accurate 13CO2 and 12CO2 ambient data. The strength of this framework is its potential to estimate both locally produced and regionally transported CO2. Locally, anthropogenic CO2 in urban areas is often derived from natural gas combustion (for heating) and gasoline/diesel combustion (for transportation); the isotopic signatures of these processes are significantly different (approximately δ13CVPDB = -40 ‰ and -26 ‰, respectively) and can be used to infer their relative contributions. Furthermore, the contribution of transported CO2 can also be estimated, as nearby regions often rely on other sources of heating (e.g. coal combustion), which has a very different signature (approximately δ13CVPDB = -23 ‰). We present an analysis of the GTA in contrast to Paris, France, where atmospheric observations are also available and 13CO2 has been studied. 
Utilizing our δ13C framework and differences in sectoral isotopic signatures, we quantify the relative contribution of CO2 sources on the overall measured concentration and assess the ability of this framework as a tool for tracing the evolution of sector-specific emissions.
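The two-source attribution from sectoral δ13C signatures reduces to a linear mass balance. The sketch below uses the approximate signatures quoted above and an invented 20 ppm CO2 enhancement:

```python
# Hedged sketch of two-source isotopic attribution: partition an urban CO2
# enhancement between natural-gas combustion (about -40 permil) and traffic
# (about -26 permil) via a linear mass balance. The measured mixture value
# and enhancement are invented for illustration.

def partition_co2(c_total_ppm, d13c_mix, d13c_gas=-40.0, d13c_traffic=-26.0):
    """Solve c_gas + c_traffic = c_total and the delta13C mass balance:
    c_total * d13c_mix = c_gas * d13c_gas + c_traffic * d13c_traffic."""
    f_gas = (d13c_mix - d13c_traffic) / (d13c_gas - d13c_traffic)
    c_gas = f_gas * c_total_ppm
    return c_gas, c_total_ppm - c_gas

c_gas, c_traffic = partition_co2(20.0, -33.0)   # 20 ppm enhancement, mixed signature
print(c_gas, c_traffic)
```

Adding the transported coal-combustion component (about -23 ‰) turns this into a three-end-member balance, which needs a second tracer or the transport model to close.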

  4. Petrogenetic modeling of a potential uranium source rock, Granite Mountains, Wyoming

    USGS Publications Warehouse

    Stuckless, J.S.; Miesch, A.T.

    1981-01-01

Previous studies of the granite of Lankin Dome have led to the conclusion that this granite was a source for the sandstone-type uranium deposits in the basins that surround the Granite Mountains, Wyo. Q-mode factor analysis of 29 samples of this granite shows that five bulk compositions are required to explain the observed variances of 33 constituents in these samples. Models presented in this paper show that the origin of the granite can be accounted for by the mixing of a starting liquid with two ranges of solid compositions such that all five compositions are granitic. There are several features of the granite of Lankin Dome that suggest derivation by partial melting and, because the proposed source region was inhomogeneous, that more than one of the five end members may have been a liquid. Data for the granite are compatible with derivation from rocks similar to those of the metamorphic complex that the granite intrudes. Evidence for crustal derivation by partial melting includes a strongly peraluminous nature, extremely high differentiation indices, high contents of incompatible elements, generally large negative Eu anomalies, and high initial lead and strontium isotopic ratios. If the granite of Lankin Dome originated by partial melting of a heterogeneous metamorphic complex, the initial magma could reasonably have been composed of a range of granitic liquids. Five variables were not well accounted for by a five-end-member model. Water, CO2, and UO2 contents and the oxidation state of iron are all subject to variations caused by near-surface processes. The Q-mode factor analysis suggests that these four variables have a distribution determined by postmagmatic processes. The reason for the failure of Cs2O to vary systematically with the other 33 variables is not known. Other granites that have lost large amounts of uranium possibly can be identified by Q-mode factor analysis.
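The core question in Q-mode factor analysis, namely how many end members are needed to explain the compositional variance, can be illustrated with an SVD on a small synthetic mixture. This toy uses two end members and four constituents, not the study's 29 samples and 33 constituents:

```python
import numpy as np

# Toy illustration of the end-member-counting idea behind Q-mode factor
# analysis: decompose a samples-by-constituents matrix and ask how many
# factors carry non-trivial variance. The data are synthetic mixtures of two
# invented end members plus noise.

rng = np.random.default_rng(1)
end_members = np.array([[70.0, 14.0, 3.0, 4.0],    # hypothetical oxide wt%
                        [60.0, 16.0, 6.0, 2.0]])
mix = rng.uniform(0, 1, size=(6, 1))               # mixing proportions
data = mix @ end_members[:1] + (1 - mix) @ end_members[1:] \
       + rng.normal(0, 0.05, size=(6, 4))          # small analytical noise

u, s, vt = np.linalg.svd(data - data.mean(axis=0), full_matrices=False)
explained = s**2 / np.sum(s**2)                    # variance per factor
n_factors = int(np.sum(explained > 0.01))          # factors above 1% variance
print(n_factors)  # a binary mix leaves one factor after mean-centering
```

Variables dominated by unrelated (e.g. postmagmatic) processes show up as poorly reproduced by the retained factors, which is how the abstract's "not well accounted for" variables are flagged.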

  5. Evaluation of anticancer agents using patient-derived tumor organoids characteristically similar to source tissues.

    PubMed

    Tamura, Hirosumi; Higa, Arisa; Hoshi, Hirotaka; Hiyama, Gen; Takahashi, Nobuhiko; Ryufuku, Masae; Morisawa, Gaku; Yanagisawa, Yuka; Ito, Emi; Imai, Jun-Ichi; Dobashi, Yuu; Katahira, Kiyoaki; Soeda, Shu; Watanabe, Takafumi; Fujimori, Keiya; Watanabe, Shinya; Takagi, Motoki

    2018-06-18

    Patient-derived tumor xenograft models represent a promising preclinical cancer model that better replicates disease, compared with traditional cell culture; however, their use is low-throughput and costly. To overcome this limitation, patient-derived tumor organoids (PDOs) were established from human lung, ovarian and uterine tumor tissues, among others, to accurately and efficiently recapitulate the tissue architecture and function. PDOs were able to be cultured for >6 months, and formed cell clusters with similar morphologies to their source tumors. Comparative histological and comprehensive gene expression analyses proved that the characteristics of PDOs were similar to those of their source tumors, even following long-term expansion in culture. At present, 53 PDOs have been established by the Fukushima Translational Research Project, and were designated as Fukushima PDOs (F‑PDOs). In addition, the in vivo tumorigenesis of certain F‑PDOs was confirmed using a xenograft model. The present study represents a detailed analysis of three F‑PDOs (termed REME9, 11 and 16) established from endometrial cancer tissues. These were used for cell growth inhibition experiments using anticancer agents. A suitable high-throughput assay system, with 96- or 384‑well plates, was designed for each F‑PDO, and the efficacy of the anticancer agents was subsequently evaluated. REME9 and 11 exhibited distinct responses and increased resistance to the drugs, as compared with conventional cancer cell lines (AN3 CA and RL95-2). REME9 and 11, which were established from tumors that originated in patients who did not respond to paclitaxel and carboplatin (the standard chemotherapy for endometrial cancer), exhibited high resistance (half-maximal inhibitory concentration >10 µM) to the two agents. 
Therefore, assay systems using F‑PDOs may be utilized to evaluate anticancer agents using conditions that better reflect clinical conditions, compared with conventional methods using cancer cell lines, and to discover markers that identify the pharmacological effects of anticancer agents.

  6. Generation of functional cardiomyocytes from rat embryonic and induced pluripotent stem cells using feeder-free expansion and differentiation in suspension culture.

    PubMed

    Dahlmann, Julia; Awad, George; Dolny, Carsten; Weinert, Sönke; Richter, Karin; Fischer, Klaus-Dieter; Munsch, Thomas; Leßmann, Volkmar; Volleth, Marianne; Zenker, Martin; Chen, Yaoyao; Merkl, Claudia; Schnieke, Angelika; Baraki, Hassina; Kutschka, Ingo; Kensah, George

    2018-01-01

The possibility to generate cardiomyocytes from pluripotent stem cells in vitro has enormous significance for basic research, disease modeling, drug development and heart repair. The concept of heart muscle reconstruction has been studied and optimized in the rat model for years, using rat primary cardiovascular cells or xenogeneic pluripotent stem cell-derived cardiomyocytes. However, the lack of rat pluripotent stem cells (rPSCs) and their cardiovascular derivatives prevented the establishment of an authentic clinically relevant syngeneic or allogeneic rat heart regeneration model. In this study, we comparatively explored the potential of recently available rat embryonic stem cells (rESCs) and induced pluripotent stem cells (riPSCs) as a source for cardiomyocytes (CMs). We developed feeder cell-free culture conditions facilitating the expansion of undifferentiated rPSCs and initiated cardiac differentiation by embryoid body (EB) formation in agarose microwell arrays, which substituted the robust but labor-intensive hanging drop (HD) method. Ascorbic acid was identified as an efficient enhancer of cardiac differentiation in both rPSC types by significantly increasing the number of beating EBs (3.6 ± 1.6-fold for rESCs and 17.6 ± 3.2-fold for riPSCs). These optimizations resulted in a differentiation efficiency of up to 20% cTnTpos rPSC-derived CMs. CMs showed spontaneous contractions, expressed cardiac markers and had typical morphological features. Electrophysiology of riPSC-CMs revealed different cardiac subtypes and physiological responses to cardio-active drugs. In conclusion, we describe rPSCs as a robust source of CMs, which is a prerequisite for detailed preclinical studies of myocardial reconstruction in a physiologically and immunologically relevant small animal model.

  7. Examining the effects of anthropogenic emissions on isoprene-derived secondary organic aerosol formation during the 2013 Southern Oxidant and Aerosol Study (SOAS) at the Look Rock, Tennessee, ground site

    NASA Astrophysics Data System (ADS)

    Budisulistiorini, S. H.; Li, X.; Bairai, S. T.; Renfro, J.; Liu, Y.; Liu, Y. J.; McKinney, K. A.; Martin, S. T.; McNeill, V. F.; Pye, H. O. T.; Nenes, A.; Neff, M. E.; Stone, E. A.; Mueller, S.; Knote, C.; Shaw, S. L.; Zhang, Z.; Gold, A.; Surratt, J. D.

    2015-03-01

    A suite of offline and real-time gas- and particle-phase measurements was deployed at Look Rock, Tennessee (TN), during the 2013 Southern Oxidant and Aerosol Study (SOAS) to examine the effects of anthropogenic emissions on isoprene-derived secondary organic aerosol (SOA) formation. High- and low-time resolution PM2.5 samples were collected for analysis of known tracer compounds in isoprene-derived SOA by gas chromatography/electron ionization-mass spectrometry (GC/EI-MS) and ultra performance liquid chromatography/diode array detection-electrospray ionization-high-resolution quadrupole time-of-flight mass spectrometry (UPLC/DAD-ESI-HR-QTOFMS). Source apportionment of the organic aerosol (OA) was determined by positive matrix factorization (PMF) analysis of mass spectrometric data acquired on an Aerodyne Aerosol Chemical Speciation Monitor (ACSM). Campaign average mass concentrations of the sum of quantified isoprene-derived SOA tracers contributed to ~9% (up to 26%) of the total OA mass, with isoprene-epoxydiol (IEPOX) chemistry accounting for ~97% of the quantified tracers. PMF analysis resolved a factor with a profile similar to the IEPOX-OA factor resolved in an Atlanta study and was therefore designated IEPOX-OA. This factor was strongly correlated (r2>0.7) with 2-methyltetrols, C5-alkene triols, IEPOX-derived organosulfates, and dimers of organosulfates, confirming the role of IEPOX chemistry as the source. On average, IEPOX-derived SOA tracer mass was ~25% (up to 47%) of the IEPOX-OA factor mass, which accounted for 32% of the total OA. A low-volatility oxygenated organic aerosol (LV-OOA) and an oxidized factor with a profile similar to 91Fac observed in areas where emissions are biogenic-dominated were also resolved by PMF analysis, whereas no primary organic aerosol (POA) sources could be resolved. 
These findings were consistent with low levels of primary pollutants, such as nitric oxide (NO ~0.03 ppb), carbon monoxide (CO ~116 ppb), and black carbon (BC ~0.2 μg m-3). Particle-phase sulfate is fairly correlated (r2~0.3) with both MAE- and IEPOX-derived SOA tracers, and more strongly correlated (r2~0.6) with the IEPOX-OA factor, together suggesting an important role of sulfate in isoprene SOA formation. Moderate correlation of the methacrylic acid epoxide (MAE)-derived SOA tracer 2-methylglyceric acid with the sum of reactive and reservoir nitrogen oxides (NOy; r2=0.38) and with nitrate (r2=0.45) indicates the potential influence of anthropogenic emissions through long-range transport. Despite the lack of a clear association of IEPOX-OA with locally estimated aerosol acidity and liquid water content (LWC), box model calculations of IEPOX uptake using the simpleGAMMA model, accounting for the role of acidity and aerosol water, predicted the abundance of the IEPOX-derived SOA tracers 2-methyltetrols and the corresponding sulfates with good accuracy (r2~0.5 and ~0.7, respectively). The modeling and data combined suggest an anthropogenic influence on isoprene-derived SOA formation through acid-catalyzed heterogeneous chemistry of IEPOX in the southeastern US. However, it appears that this process was not limited by aerosol acidity or LWC at Look Rock during SOAS. Future studies should further explore the extent to which acidity and LWC become a limiting factor of IEPOX-derived SOA, and their modulation by anthropogenic emissions.
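The PMF source apportionment can be approximated conceptually with a plain non-negative matrix factorization (Lee-Seung multiplicative updates, i.e. without PMF's per-point uncertainty weighting); the time-by-m/z matrix below is entirely synthetic:

```python
import numpy as np

# Conceptual stand-in for PMF source apportionment: factorize a non-negative
# time-by-m/z matrix X into factor contributions G and factor profiles F
# (X ~ G @ F) with Lee-Seung multiplicative updates, a close cousin of PMF
# that omits uncertainty weighting. Data are synthetic.

rng = np.random.default_rng(2)
true_profiles = np.array([[0.7, 0.2, 0.1, 0.0],    # e.g. an "IEPOX-OA"-like profile
                          [0.1, 0.1, 0.3, 0.5]])   # e.g. an "LV-OOA"-like profile
true_contrib = rng.uniform(0, 5, size=(40, 2))     # time series of contributions
X = true_contrib @ true_profiles + 0.01            # small positive offset

G = rng.uniform(0.1, 1, size=(40, 2))              # random non-negative init
F = rng.uniform(0.1, 1, size=(2, 4))
for _ in range(500):
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)         # multiplicative updates
    G *= (X @ F.T) / (G @ F @ F.T + 1e-12)         # preserve non-negativity

residual = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(bool(residual < 0.05))   # two factors reconstruct the synthetic data
```

In the real analysis, factor identity (e.g. "IEPOX-OA") is assigned afterwards by correlating resolved factor time series with tracer measurements, as done in the abstract.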

  8. Occurrence of urea-based soluble epoxide hydrolase inhibitors from the plants in the order Brassicales

    PubMed Central

    Kitamura, Seiya; Morisseau, Christophe; Harris, Todd R.; Inceoglu, Bora

    2017-01-01

Recently, dibenzylurea-based potent soluble epoxide hydrolase (sEH) inhibitors were identified in Pentadiplandra brazzeana, a plant in the order Brassicales. In an effort to generalize the concept, we hypothesized that plants that produce benzyl glucosinolates and the corresponding isothiocyanates also produce these dibenzylurea derivatives. Our overall aim here was to examine the occurrence of urea derivatives in Brassicales, hoping to find biologically active urea derivatives from plants. First, plants in the order Brassicales were analyzed for the presence of 1,3-dibenzylurea (compound 1), showing that three additional plants in the order Brassicales produce the urea derivatives. Based on the hypothesis, three dibenzylurea derivatives with sEH inhibitory activity were isolated from maca (Lepidium meyenii) roots. Topical application of one of the identified compounds (compound 3, human sEH IC50 = 222 nM) effectively reduced pain in a rat inflammatory pain model, and this compound was bioavailable after oral administration in mice. The biosynthetic pathway of these urea derivatives was investigated using papaya (Carica papaya) seed as a model system. Finally, a small collection of plants from the Brassicales order was grown, collected, extracted and screened for sEH inhibitory activity. Results show that several plants of the Brassicales order could be potential sources of urea-based sEH inhibitors. PMID:28472063

  9. Human adipose stem cell and ASC-derived cardiac progenitor cellular therapy improves outcomes in a murine model of myocardial infarction

    PubMed Central

    Davy, Philip MC; Lye, Kevin D; Mathews, Juanita; Owens, Jesse B; Chow, Alice Y; Wong, Livingston; Moisyadi, Stefan; Allsopp, Richard C

    2015-01-01

    Background Adipose tissue is an abundant and potent source of adult stem cells for transplant therapy. In this study, we present our findings on the potential application of adipose-derived stem cells (ASCs) as well as induced cardiac-like progenitors (iCPs) derived from ASCs for the treatment of myocardial infarction. Methods and results Human bone marrow (BM)-derived stem cells, ASCs, and iCPs generated from ASCs using three defined cardiac lineage transcription factors were assessed in an immune-compromised mouse myocardial infarction model. Analysis of iCP prior to transplant confirmed changes in gene and protein expression consistent with a cardiac phenotype. Endpoint analysis was performed 1 month posttransplant. Significantly increased endpoint fractional shortening, as well as reduction in the infarct area at risk, was observed in recipients of iCPs as compared to the other recipient cohorts. Both recipients of iCPs and ASCs presented higher myocardial capillary densities than either recipients of BM-derived stem cells or the control cohort. Furthermore, mice receiving iCPs had a significantly higher cardiac retention of transplanted cells than all other groups. Conclusion Overall, iCPs generated from ASCs outperform BM-derived stem cells and ASCs in facilitating recovery from induced myocardial infarction in mice. PMID:26604802

  10. Constraining Source Locations of Shallow Subduction Megathrust Earthquakes in 1-D and 3-D Velocity Models - A Case Study of the 2002 Mw=6.4 Osa Earthquake, Costa Rica

    NASA Astrophysics Data System (ADS)

    Grevemeyer, I.; Arroyo, I. G.

    2015-12-01

    Earthquake source locations are generally constrained routinely using a global 1-D Earth model. However, such source locations can carry large uncertainties. This is certainly the case for earthquakes occurring at active continental margins, where thin oceanic crust subducts below thick continental crust and hence crustal thickness changes strongly with distance from the deep-sea trench. Here, we conducted a case study of the 2002 Mw 6.4 Osa thrust earthquake in Costa Rica, which was followed by an aftershock sequence. Initial relocations indicated that the main shock occurred considerably trenchward of most large earthquakes along the Middle America Trench off central Costa Rica. The earthquake sequence occurred while a temporary network of ocean-bottom hydrophones and land stations was deployed 80 km to the northwest. By adding readings from permanent Costa Rican stations, we obtain uncommonly good P-wave coverage of a large subduction zone earthquake. We relocated this catalog with a nonlinear probabilistic approach using one 1-D and two 3-D P-wave velocity models. The 3-D models were derived either from 3-D tomography based on onshore stations or from an a priori model based on seismic refraction data. All epicentres occurred close to the trench axis, but depth estimates vary by several tens of kilometres. Based on the epicentres and constraints from seismic reflection data, the main shock occurred 25 km from the trench and probably along the plate interface at 5-10 km depth. The source location that agreed best with the geology was based on the 3-D velocity model derived from a priori data. Aftershocks propagated downdip to the area of a 1999 Mw 6.9 sequence and partially overlapped it. The results indicate that underthrusting of the young and buoyant Cocos Ridge has created conditions for interplate seismogenesis shallower and closer to the trench axis than elsewhere along the central Costa Rica margin.
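    The relocation problem above can be caricatured with a toy epicentre grid search. This is a hedged sketch, not the authors' probabilistic code: the station geometry, the uniform velocity, and the "true" source are all invented, and a single constant-velocity layer stands in for the real 1-D/3-D models.

```python
# Illustrative epicentre location by grid search over a uniform velocity
# model, minimizing RMS travel-time residuals (invented synthetic data).
import numpy as np

stations = np.array([[0.0, 80.0], [20.0, 60.0], [40.0, 90.0], [60.0, 70.0]])  # km
v = 6.0                                   # assumed uniform P-wave speed, km/s
true_src = np.array([30.0, 10.0])

# Synthetic arrival times (origin time taken as zero for simplicity)
t_obs = np.linalg.norm(stations - true_src, axis=1) / v

# Grid search: the best source position minimizes the RMS residual
xs, ys = np.meshgrid(np.arange(0, 61, 1.0), np.arange(0, 61, 1.0))
rms = np.empty_like(xs)
for i in range(xs.shape[0]):
    for j in range(xs.shape[1]):
        t_pred = np.hypot(stations[:, 0] - xs[i, j], stations[:, 1] - ys[i, j]) / v
        rms[i, j] = np.sqrt(np.mean((t_pred - t_obs) ** 2))
i, j = np.unravel_index(np.argmin(rms), rms.shape)
best = (xs[i, j], ys[i, j])
```

    In a 3-D model the only change in principle is that `t_pred` comes from ray tracing or a travel-time table instead of straight-line distances.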

  11. Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode

    NASA Astrophysics Data System (ADS)

    Seibert, P.; Frank, A.

    2004-01-01

    The possibility of calculating linear source-receptor relationships for the transport of atmospheric trace substances with a Lagrangian particle dispersion model (LPDM) running in backward mode is demonstrated and illustrated with tests and examples. This mode requires only minor modifications of the forward LPDM. The derivation includes the action of sources and of any first-order processes (transformation with prescribed rates, dry and wet deposition, radioactive decay, etc.). The backward mode is computationally advantageous if the number of receptors is less than the number of sources considered. The combination of an LPDM with the backward (adjoint) methodology is especially attractive for application to point measurements, which can be handled without artificial numerical diffusion. Practical hints are provided for source-receptor calculations with different settings, both in forward and backward mode. The equivalence of forward and backward calculations is shown in simple tests for release and sampling of particles, pure wet deposition, pure convective redistribution and realistic transport over a short distance. Furthermore, an application example is included that explains measurements of Cs-137 in Stockholm as transport from areas heavily contaminated by the Chernobyl disaster.
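    The computational advantage described above comes from linearity: a forward run maps a whole emission field to all receptors (c = A s), while a single backward (adjoint) run recovers an entire row of the source-receptor matrix A. A minimal linear-algebra sketch, in which the random matrix A is only a stand-in for the transport operator, not an actual LPDM:

```python
# Forward/backward (adjoint) equivalence for a linear transport operator:
# one backward run yields one full row of the source-receptor matrix.
import numpy as np

rng = np.random.default_rng(0)
n_src, n_rec = 50, 3
A = rng.random((n_rec, n_src))          # source-receptor matrix (unknown in practice)
s = rng.random(n_src)                   # emission field

# Forward mode: one model run gives the values at all receptors
c_forward = A @ s

# Backward mode: one run per RECEPTOR gives that receptor's sensitivity to
# every source, i.e. row r of A, obtained as A^T applied to an indicator
r = 1
sensitivity = A.T @ np.eye(n_rec)[r]    # equals A[r, :]
c_backward = sensitivity @ s
```

    With 3 receptors and 50 sources, three backward runs replace fifty forward runs, which is exactly the receptor-count argument made in the abstract.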

  12. Using cGPS to estimate the magma budget for Soufrière Hills volcano, Montserrat, West Indies

    NASA Astrophysics Data System (ADS)

    Collinson, Amy; Neuberg, Jurgen; Pascal, Karen

    2017-04-01

    For over 20 years, Soufrière Hills Volcano, Montserrat has been in a state of volcanic unrest. Intermittent periods of dome building have been punctuated by explosive eruptions and dome collapse events, endangering the lives of the inhabitants of the island. The last episode of active magma extrusion was in February 2010, and the last explosive event (ash venting) in March 2012. Despite the lack of recent eruptive activity, the volcano continues to emit significant volumes of SO2 and shows an ongoing trend of island inflation as indicated by cGPS. Using three-dimensional finite element modelling, we model the cGPS data to explore the potential sources of the ongoing island deformation. We consider both magmatic (dykes and chambers) and tectonic sources, which lead to entirely different interpretations: whilst a magmatic source suggests the possibility of further eruption, a tectonic source may indicate cessation of volcanic activity. We investigate the effects that different sources (shapes, characters and depths) have on the surface displacement. We demonstrate that whilst a tectonic contribution cannot be completely discounted, the dominant source is magmatic. Consequently, we define a best-fit model which we use to assess the source volume change and, therefore, the potential current magma budget. Based on the similarity of the relative displacement between the cGPS stations in every episode of the eruption, we assume that the displacement for all phases and pauses can be explained by the same basic source. Therefore, we interpret the cGPS data with our source model for all the preceding pauses and phases to estimate the magma budget feeding the entire eruption. Subsequently, we derive important insights into the potential future eruptive behaviour of the volcano.

  13. In vitro generation of three-dimensional substrate-adherent embryonic stem cell-derived neural aggregates for application in animal models of neurological disorders.

    PubMed

    Hargus, Gunnar; Cui, Yi-Fang; Dihné, Marcel; Bernreuther, Christian; Schachner, Melitta

    2012-05-01

    In vitro-differentiated embryonic stem (ES) cells comprise a useful source for cell replacement therapy, but the efficiency and safety of a translational approach are highly dependent on optimized protocols for directed differentiation of ES cells into the desired cell types in vitro. Furthermore, the transplantation of three-dimensional ES cell-derived structures instead of a single-cell suspension may improve graft survival and function by providing a beneficial microenvironment for implanted cells. To this end, we have developed a new method to efficiently differentiate mouse ES cells into neural aggregates that consist predominantly (>90%) of postmitotic neurons, neural progenitor cells, and radial glia-like cells. When transplanted into the excitotoxically lesioned striatum of adult mice, these substrate-adherent embryonic stem cell-derived neural aggregates (SENAs) showed significant advantages over transplanted single-cell suspensions of ES cell-derived neural cells, including improved survival of GABAergic neurons, increased cell migration, and significantly decreased risk of teratoma formation. Furthermore, SENAs mediated functional improvement after transplantation into animal models of Parkinson's disease and spinal cord injury. This unit describes in detail how SENAs are efficiently derived from mouse ES cells in vitro and how SENAs are isolated for transplantation. Furthermore, methods are presented for successful implantation of SENAs into animal models of Huntington's disease, Parkinson's disease, and spinal cord injury to study the effects of stem cell-derived neural aggregates in a disease context in vivo.

  14. 3-D acoustic waveform simulation and inversion at Yasur Volcano, Vanuatu

    NASA Astrophysics Data System (ADS)

    Iezzi, A. M.; Fee, D.; Matoza, R. S.; Austin, A.; Jolly, A. D.; Kim, K.; Christenson, B. W.; Johnson, R.; Kilgour, G.; Garaebiti, E.; Kennedy, B.; Fitzgerald, R.; Key, N.

    2016-12-01

    Acoustic waveform inversion shows promise for improved eruption characterization that may inform volcano monitoring. Well-constrained inversion can provide robust estimates of volume and mass flux, increasing our ability to monitor volcanic emissions (potentially in real time). Previous studies have made assumptions about the multipole source mechanism, which can be thought of as the combination of pressure fluctuations from a volume change, directionality, and turbulence. Until now, this infrasound source could not be fully constrained because sensors have been deployed only on Earth's surface, forcing the assumption of no vertical dipole component. In this study we deploy a high-density seismo-acoustic network, including multiple acoustic sensors along a tethered balloon, around Yasur Volcano, Vanuatu. Yasur has frequent strombolian eruptions from any one of its three active vents within a 400 m diameter crater. The third (vertical) dimension of pressure sensor coverage allows us to begin constraining the acoustic source components, in particular the vertical dipole component, whose contribution to volcano infrasound has not previously been measured. The deployment also has a geochemical and visual component, including FLIR, FTIR, two scanning FLYSPECs, and a variety of visual imagery. Our analysis employs Finite-Difference Time-Domain (FDTD) modeling to obtain the full 3D Green's functions for each propagation path. This method, following Kim et al. (2015), takes into account realistic topographic scattering based on a digital elevation model created using structure-from-motion techniques. We then invert for the source location and source-time function, constraining the contribution of the vertical sound radiation to the source.
The final outcome of this inversion is an infrasound-derived volume flux as a function of time, which we then compare to those derived independently from geochemical techniques as well as the inversion of seismic data. Kim, K., Fee, D., Yokoo, A., & Lees, J. M. (2015). Acoustic source inversion to estimate volume flux from volcanic explosions. Geophysical Research Letters, 42(13), 5243-5249
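    At its core, the inversion step described above amounts to deconvolving the recorded pressure with the (FDTD-computed) Green's function. A hedged one-dimensional sketch with invented synthetic signals, not the Yasur data or the Kim et al. (2015) code:

```python
# Recover a source-time function s from p = G * s (a convolution) by least
# squares on the Toeplitz convolution matrix. All signals are synthetic toys.
import numpy as np

n = 60
g = np.exp(-np.arange(n) / 5.0)              # toy Green's function
s_true = np.zeros(n)
s_true[10:20] = 1.0                          # toy source-time function
p = np.convolve(g, s_true)[:n]               # "observed" pressure trace

# Build the convolution matrix G such that p = G @ s
G = np.array([[g[i - j] if 0 <= i - j < n else 0.0 for j in range(n)]
              for i in range(n)])
s_est, *_ = np.linalg.lstsq(G, p, rcond=None)
```

    In the real 3-D problem each station/path contributes its own Green's function and the rows are stacked into one overdetermined system, but the algebra is the same.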

  15. Human RPE Stem Cells Grown into Polarized RPE Monolayers on a Polyester Matrix Are Maintained after Grafting into Rabbit Subretinal Space

    PubMed Central

    Stanzel, Boris V.; Liu, Zengping; Somboonthanakij, Sudawadee; Wongsawad, Warapat; Brinken, Ralf; Eter, Nicole; Corneo, Barbara; Holz, Frank G.; Temple, Sally; Stern, Jeffrey H.; Blenkinsop, Timothy A.

    2014-01-01

    Summary: Transplantation of the retinal pigment epithelium (RPE) is being developed as a cell-replacement therapy for age-related macular degeneration. Human embryonic stem cell (hESC)- and induced pluripotent stem cell (iPSC)-derived RPE are currently translating toward the clinic. We introduce the adult human RPE stem cell (hRPESC) as an alternative RPE source. Polarized monolayers of adult hRPESC-derived RPE grown on polyester (PET) membranes had near-native characteristics. Trephined pieces of RPE monolayers on PET were transplanted subretinally in the rabbit, a large-eyed animal model. After 4 days, retinal edema was observed above the implant, detected by spectral domain optical coherence tomography (SD-OCT) and fundoscopy. At 1 week, retinal atrophy overlying the fetal or adult transplant was observed, remaining stable thereafter. Histology obtained 4 weeks after implantation confirmed a continuous polarized human RPE monolayer on PET. Taken together, the xeno-RPE survived with retained characteristics in the subretinal space. These experiments support adult hRPESC-derived RPE as a potential source for transplantation therapies. PMID:24511471

  16. A Review of Magnetic Anomaly Field Data for the Arctic Region: Geological Implications

    NASA Technical Reports Server (NTRS)

    Taylor, Patrick T.; vonFrese, Ralph; Roman, Daniel; Frawley, James J.

    1999-01-01

    Due to its inaccessibility and hostile physical environment, remote sensing data, from both airborne and satellite measurements, have been the main source of geopotential data over the entire Arctic region. Ubiquitous and significant external fields, however, hinder crustal magnetic field studies. These potential-field data have been used to derive tectonic models for the two major tectonic sectors of this region, the Amerasian and Eurasian Basins. The latter is dominated by the Nansen-Gakkel or Mid-Arctic Ocean Ridge and is relatively well known. The origin and nature of the Alpha and Mendeleev Ridges, Chukchi Borderland and Canada Basin of the former are less well known and a subject of controversy. The Lomonosov Ridge divides these large provinces. In this report we present a summary of the Arctic geopotential anomaly data derived from various sources by various groups in North America and Europe and show how these data help us unravel the last remaining major puzzle of the global plate tectonic framework. While magnetic anomaly data represent the main focus of this study, recently derived satellite gravity data are playing a major role in Arctic studies.

  17. Estimation of splitting functions from Earth's normal mode spectra using the neighbourhood algorithm

    NASA Astrophysics Data System (ADS)

    Pachhai, Surya; Tkalčić, Hrvoje; Masters, Guy

    2016-01-01

    The inverse problem for Earth structure from normal mode data is strongly non-linear and can be inherently non-unique. Traditionally, the inversion is linearized by taking partial derivatives of the complex spectra with respect to the model parameters (i.e. structure coefficients) and solved in an iterative fashion. This method requires that the earthquake source model is known. However, the release of energy in the large earthquakes used for the analysis of Earth's normal modes is not simple: a point source approximation is often inadequate, and a more complete account of energy release at the source is required. In addition, many earthquakes are required for the solution to be insensitive to the initial constraints and regularization. In contrast to an iterative approach, the autoregressive linear inversion technique conveniently avoids the need for earthquake source parameters, but it also requires a number of events to achieve full convergence when a single event does not excite all singlets well. To build on previous improvements, we develop a technique to estimate structure coefficients (and consequently the splitting functions) using a derivative-free parameter search known as the neighbourhood algorithm (NA). We implement an efficient forward method derived using the autoregression of receiver strips, which allows us to search over a multiplicity of structure coefficients in a relatively short time. After demonstrating the feasibility of NA in synthetic cases, we apply it to observations of the inner-core-sensitive mode 13S2. The splitting function of this mode is dominated by spherical harmonic degree 2 axisymmetric structure and is consistent with the results obtained from the autoregressive linear inversion. The sensitivity analysis of multiple events confirms the importance of the 1994 Bolivia earthquake. When this event is used in the analysis, as few as two events are sufficient to constrain the splitting functions of the 13S2 mode. Apart from not requiring knowledge of the earthquake source, the newly developed technique provides an approximate uncertainty measure of the structure coefficients and allows us to control the type of structure solved for, for example to establish whether elastic structure is sufficient.
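    The derivative-free search idea can be caricatured in a few lines. This is only an illustrative neighbourhood-style sampler with an invented quadratic misfit; the actual NA (Sambridge) resamples Voronoi cells of the parameter space rather than Gaussian neighbourhoods.

```python
# Cartoon of a derivative-free parameter search: repeatedly sample a
# shrinking neighbourhood of the current best model, keeping improvements.
import numpy as np

rng = np.random.default_rng(1)
target = np.array([0.3, -1.2])          # invented "true" structure coefficients

def misfit(m):
    # Stand-in for the data misfit computed via the forward method
    return float(np.sum((m - target) ** 2))

best = rng.uniform(-2, 2, size=2)
scale = 1.0
for _ in range(200):
    candidates = best + scale * rng.standard_normal((20, 2))
    cand_best = min(candidates, key=misfit)
    if misfit(cand_best) < misfit(best):
        best = cand_best
    scale *= 0.97                        # shrink the neighbourhood
```

    The appeal for splitting-function estimation is that only forward evaluations of the misfit are needed: no partial derivatives, no linearization, and hence no dependence on a starting model being close to the truth.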

  18. La Isla de Gorgona, Colombia: A petrological enigma?

    NASA Astrophysics Data System (ADS)

    Kerr, Andrew C.

    2005-09-01

    A wide range of intrusive (wehrlite, dunite, gabbro and olivine gabbro) and extrusive (komatiite, picrite and basalt) igneous rocks are found on the small Pacific island of Gorgona. The island is best known for its ˜90 Ma spinifex-textured komatiites: the only true Phanerozoic komatiites yet discovered. Early work led to suggestions that the rocks of the island formed at a mid-ocean ridge; however, more recent research supports an origin as part of a hot mantle plume-derived oceanic plateau. One of the main lines of evidence for this origin stems from the inferred high mantle source temperatures required to form the high-MgO (> 15 wt.%) komatiites and picrites. Another remarkable feature of the island, considering its small size (8 × 2.5 km), is the degree of chemical and radiogenic isotopic heterogeneity shown by the rocks. This heterogeneity requires at least three isotopically distinct mantle source regions (two depleted and one enriched). Although these mantle source regions appear to be derived in significant part from recycled oceanic crust and lithosphere, enrichments in 187Os, 186Os and 3He in Gorgona lavas and intrusive rocks suggest some degree of transfer of material from the outer core to the plume source region at D″. Modelling reveals that the komatiites probably formed by dynamic melting at an average pressure of 3-4 GPa, leaving residual harzburgite. Trace element depletion in Gorgona ultramafic rocks appears to be the result of earlier, deeper melting which produced high-MgO, trace element-enriched magmas. The discovery of a trace element-enriched picrite on the island has confirmed this model. Gorgona accreted onto the palaeocontinental margin of northwestern South America in the Eocene, and palaeomagnetic work reveals that it formed at ˜26 °S. It has been proposed that Gorgona is part of the Caribbean-Colombian Oceanic Plateau (CCOP); however, the CCOP accreted in the Late Cretaceous and was derived from a more equatorial palaeolatitude. This evidence, along with differing geochemical signatures, strongly suggests that Gorgona, and probably other coastal oceanic plateau sequences in Colombia and Ecuador, belongs to a completely different oceanic plateau from the CCOP.

  19. Triton: A hot potato

    NASA Technical Reports Server (NTRS)

    Kirk, R. L.; Brown, R. H.

    1991-01-01

    The effect of sunlight on the surface of Triton was studied. Widely disparate models of the active geysers observed during the Voyager 2 flyby have been proposed, with a solar energy source almost their only common feature. Yet Triton derives more of its heat from internal sources (energy released by radioactive decay) than any other icy satellite. The effect of this relatively large internal heat on the observable behavior of volatiles on Triton's surface is investigated. The following subject areas are covered: the global energy budget; insolation and polar caps; effects on frost stability; mantle convection; and cryovolcanism.

  20. AiResearch QCGAT engine: Acoustic test results

    NASA Technical Reports Server (NTRS)

    Kisner, L. S.

    1980-01-01

    The noise levels of the quiet, general aviation turbofan (QCGAT) engine were measured in ground static noise tests. The static noise levels were found to be markedly lower than the demonstrably quiet AiResearch model TFE731 engine. The measured QCGAT noise levels were correlated with analytical noise source predictions to derive free-field component noise predictions. These component noise sources were used to predict the QCGAT flyover noise levels at FAR Part 36 conditions. The predicted flyover noise levels are about 10 decibels lower than the current quietest business jets.

  1. 3XMM J185246.6+003317: Another Low Magnetic Field Magnetar

    NASA Astrophysics Data System (ADS)

    Rea, N.; Viganò, D.; Israel, G. L.; Pons, J. A.; Torres, D. F.

    2014-01-01

    We study the outburst of the newly discovered X-ray transient 3XMM J185246.6+003317, re-analyzing all available XMM-Newton observations of the source to perform a phase-coherent timing analysis, and derive updated values of the period and period derivative. We find the source rotating at P = 11.55871346(6) s (90% confidence level; at epoch MJD 54728.7) but no evidence for a period derivative in the seven months of outburst decay spanned by the observations. This translates to a 3σ upper limit for the period derivative of Ṗ < 1.4 × 10^-13 s s^-1, which, assuming the classical magneto-dipolar braking model, gives a limit on the dipolar magnetic field of B_dip < 4.1 × 10^13 G. The X-ray outburst and spectral characteristics of 3XMM J185246.6+003317 confirm its identification as a magnetar, but the magnetic field upper limit we derive defines it as the third "low-B" magnetar discovered in the past 3 yr, after SGR 0418+5729 and Swift J1822.3-1606. We have also obtained an upper limit to the quiescent luminosity (<4 × 10^33 erg s^-1), in line with the expectations for an old magnetar. The discovery of this new low-field magnetar reaffirms the prediction of about one outburst per year from the hidden population of aged magnetars.
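    The quoted field limit follows from the standard vacuum magneto-dipole convention B_dip ≈ 3.2 × 10^19 (P Ṗ)^1/2 G; a quick check with the values from the abstract:

```python
# Classical magneto-dipolar braking estimate of the surface dipole field
# (standard vacuum-dipole convention for the 3.2e19 prefactor).
import math

P = 11.55871346          # spin period, s
Pdot_max = 1.4e-13       # 3-sigma upper limit on the period derivative, s/s

B_max = 3.2e19 * math.sqrt(P * Pdot_max)   # gauss
# → about 4.1e13 G, matching the quoted limit B_dip < 4.1e13 G
```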

  2. You Can Run, But You Can't Hide Juniper Pollen Phenology and Dispersal

    NASA Technical Reports Server (NTRS)

    Luvall, Jeffrey C.

    2013-01-01

    Pollen can be transported great distances. Van de Water et al. (2003) reported that Juniperus spp. pollen was transported 200-600 km. Hence local observations of plant phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. DREAM (the Dust REgional Atmospheric Model; Nickovic et al. 2001) is a verified model for atmospheric dust transport that uses MODIS data products to identify source regions and quantities of dust. We modified the DREAM model to incorporate pollen transport. Pollen release is estimated based on MODIS-derived phenology of Juniperus spp. communities. Ground-based observational records of pollen release timing and quantities are used as verification. This information will be used to support the Centers for Disease Control and Prevention's National Environmental Public Health Tracking Program and the State of New Mexico's environmental public health decision support for asthma and allergy alerts.

  3. Improving Agent Based Models and Validation through Data Fusion

    PubMed Central

    Laskowski, Marek; Demianyk, Bryan C.P.; Friesen, Marcia R.; McLeod, Robert D.; Mukhi, Shamir N.

    2011-01-01

    This work is contextualized in research in modeling and simulation of infection spread within a community or population, with the objective to provide a public health and policy tool in assessing the dynamics of infection spread and the qualitative impacts of public health interventions. This work uses the integration of real data sources into an Agent Based Model (ABM) to simulate respiratory infection spread within a small municipality. Novelty is derived in that the data sources are not necessarily obvious within ABM infection spread models. The ABM is a spatial-temporal model inclusive of behavioral and interaction patterns between individual agents on a real topography. The agent behaviours (movements and interactions) are fed by census / demographic data, integrated with real data from a telecommunication service provider (cellular records) and person-person contact data obtained via a custom 3G Smartphone application that logs Bluetooth connectivity between devices. Each source provides data of varying type and granularity, thereby enhancing the robustness of the model. The work demonstrates opportunities in data mining and fusion that can be used by policy and decision makers. The data become real-world inputs into individual SIR disease spread models and variants, thereby building credible and non-intrusive models to qualitatively simulate and assess public health interventions at the population level. PMID:23569606
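    The ABM above realizes, agent by agent, the compartmental dynamics of the SIR models it feeds into. As a point of reference, a minimal deterministic SIR sketch with illustrative (uncalibrated) parameters:

```python
# Minimal deterministic SIR model (Euler integration); parameters and
# population are illustrative stand-ins, not fitted to any real data.
beta, gamma = 0.3, 0.1      # transmission and recovery rates, per day
S, I, R = 9999.0, 1.0, 0.0  # a town of 10,000 with one index case
N = S + I + R
dt = 0.1                    # time step, days

history = []
for _ in range(int(200 / dt)):          # simulate 200 days
    new_inf = beta * S * I / N * dt     # S -> I transitions
    new_rec = gamma * I * dt            # I -> R transitions
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    history.append(I)
```

    The ABM replaces the mean-field term `beta * S * I / N` with explicit contacts drawn from the census, cellular, and Bluetooth data, which is exactly where the data fusion enters.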

  4. Improving Agent Based Models and Validation through Data Fusion.

    PubMed

    Laskowski, Marek; Demianyk, Bryan C P; Friesen, Marcia R; McLeod, Robert D; Mukhi, Shamir N

    2011-01-01

    This work is contextualized in research in modeling and simulation of infection spread within a community or population, with the objective to provide a public health and policy tool in assessing the dynamics of infection spread and the qualitative impacts of public health interventions. This work uses the integration of real data sources into an Agent Based Model (ABM) to simulate respiratory infection spread within a small municipality. Novelty is derived in that the data sources are not necessarily obvious within ABM infection spread models. The ABM is a spatial-temporal model inclusive of behavioral and interaction patterns between individual agents on a real topography. The agent behaviours (movements and interactions) are fed by census / demographic data, integrated with real data from a telecommunication service provider (cellular records) and person-person contact data obtained via a custom 3G Smartphone application that logs Bluetooth connectivity between devices. Each source provides data of varying type and granularity, thereby enhancing the robustness of the model. The work demonstrates opportunities in data mining and fusion that can be used by policy and decision makers. The data become real-world inputs into individual SIR disease spread models and variants, thereby building credible and non-intrusive models to qualitatively simulate and assess public health interventions at the population level.

  5. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    NASA Astrophysics Data System (ADS)

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Zou Kuzyk, Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pascal; Semmens, Brice

    2016-04-01

    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes.
Illustrative examples using geochemical and compound specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling process, (3) deriving and using informative priors in sediment fingerprinting context and (4) transparency of the process and replication of model results by other users.
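    MixSIAR itself is an R package built around a full hierarchical Bayesian treatment; the deterministic tracer mass balance underlying all such unmixing models can, however, be sketched in a few lines. The tracer signatures and proportions below are invented for illustration.

```python
# Toy two-source, two-tracer mass-balance unmixing: solve for source
# proportions subject to the sum-to-one constraint (invented tracer values).
import numpy as np

sources = np.array([[10.0, 2.0],    # mean tracer signature of source A
                    [ 4.0, 8.0]])   # mean tracer signature of source B
p_true = np.array([0.7, 0.3])
mixture = p_true @ sources          # tracer signature of the sediment mixture

# mixture = p_A * A + (1 - p_A) * B  =>  mixture - B = p_A * (A - B),
# solved here by projection onto (A - B)
A, B = sources
p_A = (mixture - B) @ (A - B) / ((A - B) @ (A - B))
p_est = np.array([p_A, 1 - p_A])
```

    The Bayesian machinery adds what this sketch lacks: tracer covariance, within-source variability, hierarchical fixed/random effects, and posterior uncertainty on the proportions.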

  6. PDEPTH—A computer program for the geophysical interpretation of magnetic and gravity profiles through Fourier filtering, source-depth analysis, and forward modeling

    USGS Publications Warehouse

    Phillips, Jeffrey D.

    2018-01-10

    PDEPTH is an interactive, graphical computer program used to construct interpreted geological source models for observed potential-field geophysical profile data. The current version of PDEPTH has been adapted to the Windows platform from an earlier DOS-based version. The input total-field magnetic anomaly and vertical gravity anomaly profiles can be filtered to produce derivative products such as reduced-to-pole magnetic profiles, pseudogravity profiles, pseudomagnetic profiles, and upward-or-downward-continued profiles. A variety of source-location methods can be applied to the original and filtered profiles to estimate (and display on a cross section) the locations and physical properties of contacts, sheet edges, horizontal line sources, point sources, and interface surfaces. Two-and-a-half-dimensional source bodies having polygonal cross sections can be constructed using a mouse and keyboard. These bodies can then be adjusted until the calculated gravity and magnetic fields of the source bodies are close to the observed profiles. Auxiliary information such as the topographic surface, bathymetric surface, seismic basement, and geologic contact locations can be displayed on the cross section using optional input files. Test data files, used to demonstrate the source location methods in the report, and several utility programs are included.

  7. Constraining Biomarkers of Dissolved Organic Matter Sourcing Using Microbial Incubations of Vascular Plant Leachates of the California landscape

    NASA Astrophysics Data System (ADS)

    Harfmann, J.; Hernes, P.; Chuang, C. Y.; Kaiser, K.; Spencer, R. G.; Guillemette, F.

    2017-12-01

    Source origin of dissolved organic matter (DOM) is crucial in determining reactivity, driving chemical and biological processing of carbon. DOM source biomarkers such as lignin (a vascular plant marker) and D-amino acids (bacterial markers) are well-established tools in tracing DOM origin and fate. The development of high-resolution mass spectrometry and optical studies has expanded our toolkit; yet despite these advances, our understanding of DOM sources and fate remains largely qualitative. Quantitative data on DOM pools and fluxes become increasingly necessary as we refine our comprehension of its composition. In this study, we aim to calibrate and quantify DOM source endmembers by performing microbial incubations of multiple vascular plant leachates, where total DOM is constrained by initial vascular plant input and microbial production. Derived endmembers may be applied to endmember mixing models to quantify DOM source contributions in aquatic systems.

  8. Seismic hazard assessment over time: Modelling earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting

    2017-04-01

    To assess seismic hazard with temporal change in Taiwan, we develop a new approach combining the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters of the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault rupture, to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering the above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed times since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress triggering effect. The stress increase caused by the February 6th, 2016 Meinong ML 6.6 earthquake also significantly raised the rupture probabilities of several neighbouring seismogenic sources in southwestern Taiwan and elevated the hazard level for the near future. Our approach draws on the advantage of incorporating long- and short-term models to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than previously published models, and thus offers decision-makers and public officials an adequate basis for rapid evaluation of and response to future emergency scenarios such as victim relocation and sheltering.
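    The long-term component above can be illustrated with the BPT (inverse-Gaussian) renewal density: the conditional probability of rupture in the next exposure window grows with the time elapsed since the last rupture. The mean recurrence interval and aperiodicity below are invented illustrative values, not TEM parameters.

```python
# Conditional rupture probability from a BPT (inverse-Gaussian) renewal
# model, computed by brute-force numerical integration of the density.
import numpy as np

def bpt_pdf(t, mu, alpha):
    """BPT density with mean recurrence mu (yr) and aperiodicity alpha."""
    return np.sqrt(mu / (2 * np.pi * alpha**2 * t**3)) * \
        np.exp(-(t - mu) ** 2 / (2 * mu * alpha**2 * t))

def cond_prob(elapsed, window, mu, alpha, dt=0.01):
    """P(rupture within `window` yr | quiet for `elapsed` yr)."""
    t = np.arange(elapsed, elapsed + 3000.0, dt)   # truncate the far tail
    f = bpt_pdf(t, mu, alpha)
    num = f[t < elapsed + window].sum() * dt
    return num / (f.sum() * dt)

p_early = cond_prob(elapsed=10.0, window=30.0, mu=200.0, alpha=0.5)
p_late = cond_prob(elapsed=150.0, window=30.0, mu=200.0, alpha=0.5)
```

    The short-term Coulomb term then perturbs these probabilities, e.g. by shifting the effective elapsed time on stress-loaded faults.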

  9. The acoustic field of a point source in a uniform boundary layer over an impedance plane

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E.; Willshire, W. L., Jr.

    1986-01-01

    The acoustic field of a point source in a boundary layer above an impedance plane is investigated analytically using Obukhov quasi-potential functions, extending the normal-mode theory of Chunchuzov (1984) to account for the effects of finite ground-plane impedance and source height. The solution is found to be asymptotic to the surface-wave term studied by Wenzel (1974) in the limit of vanishing wind speed, suggesting that normal-mode theory can be used to model the effects of an atmospheric boundary layer on infrasonic sound radiation. Model predictions are derived for noise-generation data obtained by Willshire (1985) at the Medicine Bow wind-turbine facility. Long-range downwind propagation is found to behave as a cylindrical wave, with attenuation proportional to the wind speed, the boundary-layer displacement thickness, the real part of the ground admittance, and the square of the frequency.

  10. Variations in AmLi source spectra and their estimation utilizing the 5 Ring Multiplicity Counter

    DOE PAGES

    Weinmann-Smith, Robert; Beddingfield, David H.; Enqvist, Andreas; ...

    2017-02-28

    Active-mode assay systems are widely used for the safeguards of uranium items to verify compliance with the Non-Proliferation Treaty. Systems such as the Active-Well Coincidence Counter (AWCC) and the Uranium Neutron Coincidence Collar (UNCL) use americium-lithium (AmLi) neutron sources to induce fissions which are measured to determine the sample mass. These systems have historically relied on calibrations derived from well-defined standards. Recently, restricted access to standards or more difficult measurements have resulted in a reliance on modeling and simulation for the calibration of systems, which introduces potential simulation biases. Furthermore, the AmLi source energy spectra commonly used in the safeguards community do not accurately represent measurement results and the spectrum uncertainty can represent a large contribution to the total modeling uncertainty in active-mode systems.

  11. Joint three-dimensional inversion of coupled groundwater flow and heat transfer based on automatic differentiation: sensitivity calculation, verification, and synthetic examples

    NASA Astrophysics Data System (ADS)

    Rath, V.; Wolf, A.; Bücker, H. M.

    2006-10-01

    Inverse methods are useful tools not only for deriving estimates of unknown parameters of the subsurface, but also for the appraisal of the models thus obtained. While neither the most general nor the most efficient method, Bayesian inversion based on the calculation of the Jacobian of a given forward model can be used to evaluate many quantities useful in this process. The calculation of the Jacobian, however, is computationally expensive and, if done by divided differences, prone to truncation error. Here, automatic differentiation can be used to produce derivative code by source transformation of an existing forward model. We describe this process for a coupled fluid flow and heat transport finite difference code, which is used in a Bayesian inverse scheme to estimate thermal and hydraulic properties and boundary conditions from measured hydraulic potentials and temperatures. The resulting derivative code was validated by comparison with simple analytical solutions and divided differences. Synthetic examples from different flow regimes demonstrate the use of the inverse scheme and its behaviour in different configurations.
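    The contrast between divided differences and derivative code can be illustrated with a toy forward-mode automatic-differentiation sketch (dual numbers); the one-line "forward model" is a stand-in for illustration, not the coupled flow and transport code.

```python
# Forward-mode AD via dual numbers: carry (value, derivative) through the
# computation, so the derivative is exact to machine precision, with no
# step-size / truncation-error trade-off as in divided differences.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def f(x):
    # Stand-in "forward model": f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2.
    return x * x * x + 2 * x

x0 = 1.5
exact = f(Dual(x0, 1.0)).dot            # AD derivative: 3*1.5^2 + 2 = 8.75
h = 1e-6
fd = (f(x0 + h) - f(x0 - h)) / (2 * h)  # divided difference, step-size limited
```

    Source-transformation AD tools apply the same chain-rule bookkeeping to an entire Fortran or C forward model rather than to a one-liner.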

  12. Quantitative Structure-Activity Relationship Modeling Coupled with Molecular Docking Analysis in Screening of Angiotensin I-Converting Enzyme Inhibitory Peptides from Qula Casein Hydrolysates Obtained by Two-Enzyme Combination Hydrolysis.

    PubMed

    Lin, Kai; Zhang, Lanwei; Han, Xue; Meng, Zhaoxu; Zhang, Jianming; Wu, Yifan; Cheng, Dayou

    2018-03-28

    In this study, Qula casein derived from yak milk casein was hydrolyzed using a two-enzyme combination approach, and peptides with high angiotensin I-converting enzyme (ACE) inhibitory activity were screened by quantitative structure-activity relationship (QSAR) modeling integrated with molecular docking analysis. Hydrolysates (<3 kDa) derived from the combinations thermolysin + alcalase and thermolysin + proteinase K demonstrated high ACE inhibitory activities. Peptide sequences in hydrolysates derived from these two combinations were identified by liquid chromatography-tandem mass spectrometry (LC-MS/MS). On the basis of the QSAR modeling prediction, a total of 16 peptides were selected for molecular docking analysis. The docking study revealed that four of the peptides (KFPQY, MPFPKYP, MFPPQ, and QWQVL) bound the active site of ACE. These four novel peptides were chemically synthesized, and their IC50 values were determined. Among these peptides, KFPQY showed the highest ACE inhibitory activity (IC50 = 12.37 ± 0.43 μM). Our study indicates that Qula casein is an excellent source for producing ACE inhibitory peptides.

  13. Optimum load distribution between heat sources based on the Cournot model

    NASA Astrophysics Data System (ADS)

    Penkovskii, A. V.; Stennikov, V. A.; Khamisov, O. V.

    2015-08-01

    One of the widespread models of heat supply to consumers, represented in the "Single buyer" format, is considered. The proposed methodological basis for its description and investigation combines principles of game theory, basic propositions of microeconomics, and the models and methods of the theory of hydraulic circuits. The original mathematical model of a heat supply system operating under the "Single buyer" organizational structure yields a solution satisfying the market Nash equilibrium. The distinctive feature of the developed mathematical model is that, along with the problems traditionally solved within the bounds of bilateral relations between heat energy sources and heat consumers, it considers a network component, with the inherent physicotechnical properties of the heat network and the business factors connected with the costs of producing and transporting heat energy. This approach makes it possible to determine optimum load levels of the heat energy sources: levels that meet the given consumer demand for heat while maximizing the profit of the heat energy sources and minimizing heat network costs over a specified time. The practical search for the market equilibrium is illustrated by the example of a heat supply system with two heat energy sources operating on integrated heat networks. The solution procedure is represented graphically and illustrates computations based on a stepwise iteration procedure for optimizing the loads of the heat energy sources (Cournot's groping procedure), with the corresponding computation of the heat energy price for consumers.
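    The Cournot groping (best-response) iteration can be sketched for the simplest textbook case of linear inverse demand and constant marginal costs; this omits the paper's network component, and all numbers are made up for illustration.

```python
def cournot_equilibrium(a, b, c1, c2, iters=200):
    """Stepwise best-response ('groping') iteration for two heat sources
    facing inverse demand p = a - b*(q1 + q2) with marginal costs c1, c2.
    Each source repeatedly sets its profit-maximizing load given the other's
    current load; the iteration converges to the Nash equilibrium."""
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = max(0.0, (a - c1 - b * q2) / (2.0 * b))
        q2 = max(0.0, (a - c2 - b * q1) / (2.0 * b))
    price = a - b * (q1 + q2)
    return q1, q2, price

# Illustrative parameters: demand intercept 100, slope 1, costs 10 and 20.
q1, q2, price = cournot_equilibrium(100.0, 1.0, 10.0, 20.0)
```

    For this linear case the fixed point matches the closed-form Cournot solution q1 = (a - 2*c1 + c2)/(3*b), q2 = (a - 2*c2 + c1)/(3*b), p = (a + c1 + c2)/3.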

  14. Error decomposition and estimation of inherent optical properties.

    PubMed

    Salama, Mhd Suhyb; Stein, Alfred

    2009-09-10

    We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. The total error is decomposed into three sources: model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit is employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values, with correlation coefficients of 60-90% and 67-90% for the IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table for the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method performs better and is more appropriate for estimating actual errors of ocean-color derived products than previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the derivation method used.
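    Under the simplifying assumption that the three error sources are independent and zero-mean, their shares of the total error budget add in quadrature; a Monte Carlo sketch with illustrative sigmas (not values from the paper):

```python
import random

def error_budget(sigma_model, sigma_noise, sigma_atmos, n=100000, seed=1):
    """Monte Carlo sketch: three independent, zero-mean Gaussian error
    sources add in quadrature. Returns the simulated total error variance
    and each component's analytic share of the variance budget."""
    rng = random.Random(seed)
    total = [rng.gauss(0.0, sigma_model) + rng.gauss(0.0, sigma_noise)
             + rng.gauss(0.0, sigma_atmos) for _ in range(n)]
    var_total = sum(x * x for x in total) / n
    denom = sigma_model**2 + sigma_noise**2 + sigma_atmos**2
    shares = [s**2 / denom for s in (sigma_model, sigma_noise, sigma_atmos)]
    return var_total, shares

# Illustrative: model, sensor-noise, and atmospheric sigmas of an IOP error.
var_total, shares = error_budget(0.3, 0.1, 0.4)
```

    The paper's Bayesian treatment is richer than this independence sketch, but the quadrature budget is the intuition behind reporting per-component percentages of the total error.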

  15. Variations in AmLi source spectra and their estimation utilizing the 5 Ring Multiplicity Counter

    NASA Astrophysics Data System (ADS)

    Weinmann-Smith, R.; Beddingfield, D. H.; Enqvist, A.; Swinhoe, M. T.

    2017-06-01

    Active-mode assay systems are widely used for the safeguards of uranium items to verify compliance with the Non-Proliferation Treaty. Systems such as the Active-Well Coincidence Counter (AWCC) and the Uranium Neutron Coincidence Collar (UNCL) use americium-lithium (AmLi) neutron sources to induce fissions which are measured to determine the sample mass. These systems have historically relied on calibrations derived from well-defined standards. Recently, restricted access to standards or more difficult measurements have resulted in a reliance on modeling and simulation for the calibration of systems, which introduces potential simulation biases. The AmLi source energy spectra commonly used in the safeguards community do not accurately represent measurement results and the spectrum uncertainty can represent a large contribution to the total modeling uncertainty in active-mode systems. The 5-Ring Multiplicity Counter (5RMC) has been used to measure 17 AmLi sources. The measurements showed a significant spectral variation between different sources. Utilization of a spectrum that is specific to an individual source or a series of sources will give improved results over historical general spectra when modeling AmLi sources. Candidate AmLi neutron spectra were calculated in MCNP and SOURCES4C for a range of physical AmLi characteristics. The measurement and simulation data were used to fit reliable and accurate AmLi spectra for use in the simulation of active-mode systems. Spectra were created for average Gammatron C, Gammatron N, and MRC series sources, and for individual sources. The systematic uncertainty introduced by physical aspects of the AmLi source was characterized through simulations, and the accuracy of spectra from the literature was assessed.

  16. Inverse random source scattering for the Helmholtz equation in inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Li, Ming; Chen, Chuchu; Li, Peijun

    2018-01-01

    This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
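    A plain, row-by-row Kaczmarz iteration conveys the core of the method on a small well-posed system; the paper's regularized block variant for ill-posed integral equations is more elaborate, and the system below is purely illustrative.

```python
def kaczmarz(A, b, sweeps=500, relax=1.0):
    """Kaczmarz row-action iteration for A x = b: cyclically project the
    current iterate onto the hyperplane defined by each row. An illustrative
    stand-in for the regularized block variant described in the paper."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for ai, bi in zip(A, b):
            dot = sum(a * xi for a, xi in zip(ai, x))
            norm2 = sum(a * a for a in ai)
            coef = relax * (bi - dot) / norm2
            x = [xi + coef * a for xi, a in zip(x, ai)]
    return x

# Small consistent system with exact solution x = [1, 3].
A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
x = kaczmarz(A, b)
```

    For ill-posed discretized integral equations, the relaxation parameter and early stopping play the role of regularization, damping the amplification of measurement noise.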

  17. Isotopic evidence for the diversity of late Quaternary loess in Nebraska: Glaciogenic and nonglaciogenic sources

    USGS Publications Warehouse

    Aleinikoff, John N.; Muhs, Daniel R.; Bettis, E. Arthur; Johnson, William C.; Fanning, C. Mark; Benton, Rachel

    2008-01-01

    Pb isotope compositions of detrital K-feldspars and U-Pb ages of detrital zircons are used as indicators for determining the sources of Peoria Loess deposited during the last glacial period (late Wisconsin, ca. 25–14 ka) in Nebraska and western Iowa. Our new data indicate that only loess adjacent to the Platte River has Pb isotopic characteristics suggesting derivation from this river. Most Peoria Loess in central Nebraska (up to 20 m thick) is non-glaciogenic, on the basis of Pb isotope ratios in K-feldspars and the presence of 34-Ma detrital zircons. These isotopic characteristics suggest derivation primarily from the Oligocene White River Group in southern South Dakota, western Nebraska, southeastern Wyoming, and northeastern Colorado. The occurrence of 10–25 Ma detrital zircons suggests additional minor contributions of silt from the Oligocene-Miocene Arikaree Group and Miocene Ogallala Group. Isotopic data from detrital K-feldspar and zircon grains from Peoria Loess deposits in eastern Nebraska and western Iowa suggest that the immediate source of this loess was alluvium of the Missouri River. We conclude that this silt probably is of glaciogenic origin, primarily derived from outwash from the western margin of the Laurentide Ice Sheet. Identification of the White River Group as the main provenance of Peoria Loess of central Nebraska and the Missouri River valley as the immediate source of western Iowa Peoria Loess indicates that paleowind directions during the late Wisconsin were primarily from the northwest and west, in agreement with earlier studies of particle size and loess thickness variation. In addition, the results are in agreement with recent simulations of non-glaciogenic dust sources from linked climate-vegetation modeling, suggesting dry, windy, and minimally vegetated areas in parts of the Great Plains during the last glacial period.

  18. An Update to the NASA Reference Solar Sail Thrust Model

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Artusio-Glimpse, Alexandra B.

    2015-01-01

    An optical model of solar sail material originally derived at JPL in 1978 has since served as the de facto standard for NASA and other solar sail researchers. The optical model includes terms for specular and diffuse reflection, thermal emission, and non-Lambertian diffuse reflection. The standard coefficients for these terms are based on tests of 2.5 micrometer Kapton sail material coated with 100 nm of aluminum on the front side and chromium on the back side. The original derivation of these coefficients was documented in an internal JPL technical memorandum that is no longer available. Additionally more recent optical testing has taken place and different materials have been used or are under consideration by various researchers for solar sails. Here, where possible, we re-derive the optical coefficients from the 1978 model and update them to accommodate newer test results and sail material. The source of the commonly used value for the front side non-Lambertian coefficient is not clear, so we investigate that coefficient in detail. Although this research is primarily designed to support the upcoming NASA NEA Scout and Lunar Flashlight solar sail missions, the results are also of interest to the wider solar sail community.
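    A flat-plate optical sail force model of this general form (specular, non-Lambertian diffuse, and thermal-emission terms) can be sketched as follows. The default coefficient values are the commonly quoted set for aluminized Kapton and should be treated as placeholders for illustration, not the re-derived values discussed in the paper.

```python
import math

def sail_force_coeffs(alpha, rho=0.88, s=0.94, Bf=0.79, Bb=0.55,
                      ef=0.05, eb=0.55):
    """Normal and tangential force coefficients (per unit of solar pressure
    times sail area) for a flat sail at sun incidence angle alpha, given
    reflectivity rho, specular fraction s, front/back non-Lambertian
    coefficients Bf/Bb, and front/back emissivities ef/eb."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    fn = ((1.0 + rho * s) * ca**2                      # specular reflection
          + Bf * rho * (1.0 - s) * ca                  # diffuse reflection
          + (1.0 - rho) * (ef * Bf - eb * Bb) / (ef + eb) * ca)  # emission
    ft = (1.0 - rho * s) * ca * sa                     # tangential component
    return fn, ft
```

    A quick sanity check is the ideal-sail limit: with rho = s = 1 the model must reduce to purely normal force 2*cos(alpha)^2 with zero tangential component.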

  19. Profiles of equilibrium constants for self-association of aromatic molecules

    NASA Astrophysics Data System (ADS)

    Beshnova, Daria A.; Lantushenko, Anastasia O.; Davies, David B.; Evstigneev, Maxim P.

    2009-04-01

    Analysis of the noncovalent, noncooperative self-association of identical aromatic molecules assumes that the equilibrium self-association constants are either independent of the number of molecules (the EK-model) or change progressively with increasing aggregation (the AK-model). The dependence of the self-association constant on the number of molecules in the aggregate (i.e., the profile of the equilibrium constant) was empirically derived in the AK-model but, in order to provide some physical understanding of the profile, it is proposed that the sources for attenuation of the equilibrium constant are the loss of translational and rotational degrees of freedom, the ordering of molecules in the aggregates and the electrostatic contribution (for charged units). Expressions are derived for the profiles of the equilibrium constants for both neutral and charged molecules. Although the EK-model has been widely used in the analysis of experimental data, it is shown in this work that the derived equilibrium constant, KEK, depends on the concentration range used and hence, on the experimental method employed. The relationship has also been demonstrated between the equilibrium constant KEK and the real dimerization constant, KD, which shows that the value of KEK is always lower than KD.
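    The EK-model species distribution follows directly from a single association constant K; a small sketch with arbitrary illustrative values, checking the series against its closed form:

```python
def nmer_concentrations(m, K, nmax=60):
    """EK (isodesmic) model: with every association step sharing the same
    equilibrium constant K, the n-mer concentration is c_n = K**(n-1) * m**n,
    where m is the free monomer concentration."""
    return [K ** (n - 1) * m ** n for n in range(1, nmax + 1)]

def total_units(m, K):
    """Total concentration in monomer units: sum of n*c_n over n, which has
    the closed form m / (1 - K*m)**2, valid for K*m < 1."""
    return m / (1.0 - K * m) ** 2

K = 1.0e3   # association constant, per molar (illustrative)
m = 1.0e-4  # free monomer concentration, molar (illustrative)
series = sum(n * c for n, c in enumerate(nmer_concentrations(m, K), start=1))
closed = total_units(m, K)
```

    The concentration dependence is visible here: fitting an apparent constant over different total-concentration windows weights the n-mer terms differently, which is why the derived KEK varies with the experimental method.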

  20. Use of MODIS Satellite Data to Evaluate Juniperus spp. Pollen Phenology to Support a Pollen Dispersal Model, PREAM, to Support Public Health Allergy Alerts

    NASA Technical Reports Server (NTRS)

    Luvall, J. C.; Sprigg, W. A.; Levetin, E.; Huete, A.; Nickovic, S.; Prasad, A.; Pejanovic, G. A.; Vukovic, A.; VandeWater, P. K.; Budge, A. M.

    2013-01-01

    Pollen can be transported great distances. Van de Water et al. (2003) reported Juniperus spp. pollen transported 200-600 km. Hence, local observations of plant phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. The DREAM (Dust REgional Atmospheric Model) is a verified model for atmospheric dust transport that uses MODIS data products to identify source regions and concentrations of dust. We are modifying the DREAM model to incorporate pollen transport. Pollen emission is based on MODIS-derived phenology of Juniperus spp. communities. Ground-based observational records of pollen release timing and quantities will be used as model verification. This information will be used to support the Centers for Disease Control and Prevention's National Environmental Public Health Tracking Program and the State of New Mexico environmental public health decision support for asthma and allergy alerts.

  1. Use of MODIS Satellite Data to Evaluate Juniperus spp. Pollen Phenology to Support a Pollen Dispersal Model, PREAM, to Support Public Health Allergy Alerts

    NASA Technical Reports Server (NTRS)

    Luvall, J. C.; Sprigg, W. A.; Levetin, E.; Huete, A.; Nickovic, S.; Prasad, A.; Pejanovic, G. A.; Vukovic, A.; VandeWater, P. K.; Budge, A. M.

    2012-01-01

    Pollen can be transported great distances. Van de Water et al. (2003) reported Juniperus spp. pollen transported 200-600 km. Hence, local observations of plant phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. The DREAM (Dust REgional Atmospheric Model, Nickovic et al. 2001) is a verified model for atmospheric dust transport that uses MODIS data products to identify source regions and concentrations of dust. We are modifying the DREAM model to incorporate pollen transport. Pollen emission is based on MODIS-derived phenology of Juniperus spp. communities. Ground-based observational records of pollen release timing and quantities will be used as model verification. This information will be used to support the Centers for Disease Control and Prevention's National Environmental Public Health Tracking Program and the State of New Mexico environmental public health decision support for asthma and allergy alerts.

  2. Use of MODIS Satellite Data to Evaluate Juniperus spp. Pollen Phenology to Support a Pollen Dispersal Model, PREAM, to Support Public Health Allergy Alerts

    NASA Astrophysics Data System (ADS)

    Luvall, J. C.; Sprigg, W. A.; Levetin, E.; Huete, A. R.; Nickovic, S.; Prasad, A. K.; Pejanovic, G.; Vukovic, A.; Van De Water, P. K.; Budge, A.; Hudspeth, W. B.; Krapfl, H.; Toth, B.; Zelicoff, A.; Myers, O.; Bunderson, L.; Ponce-Campos, G.; Menache, M.; Crimmins, T. M.; Vujadinovic, M.

    2012-12-01

    Pollen can be transported great distances. Van de Water et al. (2003) reported Juniperus spp. pollen transported 200-600 km. Hence, local observations of plant phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. The DREAM (Dust REgional Atmospheric Model, Nickovic et al. 2001) is a verified model for atmospheric dust transport that uses MODIS data products to identify source regions and concentrations of dust. We are modifying the DREAM model to incorporate pollen transport. Pollen emission is based on MODIS-derived phenology of Juniperus spp. communities. Ground-based observational records of pollen release timing and quantities will be used as model verification. This information will be used to support the Centers for Disease Control and Prevention's National Environmental Public Health Tracking Program and the State of New Mexico environmental public health decision support for asthma and allergy alerts.

  3. Analysis of Seismic Moment Tensor and Finite-Source Scaling During EGS Resource Development at The Geysers, CA

    NASA Astrophysics Data System (ADS)

    Boyd, O. S.; Dreger, D. S.; Gritto, R.

    2015-12-01

    Enhanced Geothermal Systems (EGS) resource development requires knowledge of subsurface physical parameters to quantify the evolution of fracture networks. We investigate seismicity in the vicinity of the EGS development at The Geysers Prati-32 injection well to determine moment magnitude, focal mechanism, and kinematic finite-source models, with the goal of developing a rupture area scaling relationship for The Geysers and specifically for the Prati-32 EGS injection experiment. Thus far, we have analyzed moment tensors of M ≥ 2 events, and are developing the capability to analyze the large numbers of events occurring as a result of the fluid injection and to push the analysis to smaller magnitude earthquakes. We have also determined finite-source models for five events ranging in magnitude from M 3.7 to 4.5. The scaling relationship between rupture area and moment magnitude of these events resembles that of a published empirical relationship derived for events from M 4.5 to 8.3. We plan to develop a scaling relationship in which moment magnitude and corner frequency are predictor variables for source rupture area constrained by the finite-source modeling. Inclusion of corner frequency in the empirical scaling relationship is proposed to account for possible variations in stress drop. If successful, we will use this relationship to extrapolate to the large numbers of events in the EGS seismicity cloud to estimate the coseismic fracture density. We will present the moment tensor and corner frequency results for the microearthquakes, and for select events, finite-source models. Stress drops inferred from corner frequencies and from finite-source modeling will be compared.

  4. Rfam: Wikipedia, clans and the “decimal” release

    PubMed Central

    Gardner, Paul P.; Daub, Jennifer; Tate, John; Moore, Benjamin L.; Osuch, Isabelle H.; Griffiths-Jones, Sam; Finn, Robert D.; Nawrocki, Eric P.; Kolbe, Diana L.; Eddy, Sean R.; Bateman, Alex

    2011-01-01

    The Rfam database aims to catalogue non-coding RNAs through the use of sequence alignments and statistical profile models known as covariance models. In this contribution, we discuss the pros and cons of using the online encyclopedia, Wikipedia, as a source of community-derived annotation. We discuss the addition of groupings of related RNA families into clans and new developments to the website. Rfam is available on the Web at http://rfam.sanger.ac.uk. PMID:21062808

  5. Models of earth structure inferred from neodymium and strontium isotopic abundances

    PubMed Central

    Wasserburg, G. J.; DePaolo, D. J.

    1979-01-01

    A simplified model of earth structure based on the Nd and Sr isotopic characteristics of oceanic and continental tholeiitic flood basalts is presented, taking into account the motion of crustal plates and a chemical balance for trace elements. The resulting structure that is inferred consists of a lower mantle that is still essentially undifferentiated, overlain by an upper mantle that is the residue of the original source from which the continents were derived. PMID:16592688

  6. Dissecting the Gamma-Ray Background in Search of Dark Matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cholis, Ilias; Hooper, Dan; McDermott, Samuel D.

    2014-02-01

    Several classes of astrophysical sources contribute to the approximately isotropic gamma-ray background measured by the Fermi Gamma-Ray Space Telescope. In this paper, we use Fermi's catalog of gamma-ray sources (along with corresponding source catalogs at infrared and radio wavelengths) to build and constrain a model for the contributions to the extragalactic gamma-ray background from astrophysical sources, including radio galaxies, star-forming galaxies, and blazars. We then combine our model with Fermi's measurement of the gamma-ray background to derive constraints on the dark matter annihilation cross section, including contributions from both extragalactic and galactic halos and subhalos. The resulting constraints are competitive with the strongest current constraints from the Galactic Center and dwarf spheroidal galaxies. As Fermi continues to measure the gamma-ray emission from a greater number of astrophysical sources, it will become possible to more tightly constrain the astrophysical contributions to the extragalactic gamma-ray background. We project that with 10 years of data, Fermi's measurement of this background combined with the improved constraints on the astrophysical source contributions will yield a sensitivity to dark matter annihilations that exceeds the strongest current constraints by a factor of ~5-10.

  7. Estimation of Polychlorinated Biphenyl Sources in Industrial Port Sediments Using a Bayesian Semifactor Model Considering Unidentified Sources.

    PubMed

    Anezaki, Katsunori; Nakano, Takeshi; Kashiwagi, Nobuhisa

    2016-01-19

    Using the chemical mass balance method, and considering the presence of unidentified sources, we estimated the origins of PCB contamination in surface sediments of Muroran Port, Japan. It was assumed that these PCBs originated from four types of Kanechlor products (KC300, KC400, KC500, and KC600), combustion, and two kinds of pigments (azo and phthalocyanine). The characteristics of these congener patterns were summarized on the basis of principal component analysis, and explanatory variables were determined. A Bayesian semifactor model (CMBK2) was applied to the explanatory variables to analyze the sources of PCBs in the sediments. The resulting estimates of the contribution ratios for each sediment indicate that the existence of unidentified sources can be ignored and that the assumed seven sources are adequate to account for the contamination. Within the port, the contribution ratio of KC500 and KC600 (used as paints for ship hulls) was extremely high, but outside the port, the influence of azo pigments was observable to a limited degree. This indicates that environmental PCBs not derived from technical PCBs are present at levels that cannot be ignored.
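    A chemical-mass-balance fit of source profiles to an observed congener pattern can be sketched as a nonnegative least-squares problem; the profiles and observed vector below are synthetic stand-ins, not the Kanechlor or pigment profiles, and this omits the Bayesian treatment of unidentified sources.

```python
def cmb_fit(profiles, observed, iters=20000, lr=0.01):
    """Chemical-mass-balance sketch: find nonnegative source contributions s
    minimizing || sum_j s_j * profile_j - observed ||^2 by projected
    gradient descent (clip negative contributions to zero each step)."""
    k, m = len(profiles), len(observed)
    s = [1.0 / k] * k
    for _ in range(iters):
        # residual r = model - observation, with the current contributions
        r = [sum(s[j] * profiles[j][i] for j in range(k)) - observed[i]
             for i in range(m)]
        for j in range(k):
            grad = 2.0 * sum(r[i] * profiles[j][i] for i in range(m))
            s[j] = max(0.0, s[j] - lr * grad)
    return s

# Two synthetic 3-congener source profiles; the observation is an exact
# 70/30 mixture, so the fit should recover those contributions.
profiles = [[0.6, 0.3, 0.1], [0.1, 0.2, 0.7]]
observed = [0.45, 0.27, 0.28]
s = cmb_fit(profiles, observed)
```

    Real receptor-model codes additionally weight congeners by measurement uncertainty and report diagnostics on collinear profiles.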

  8. Gravimetric control of active volcanic processes

    NASA Astrophysics Data System (ADS)

    Saltogianni, Vasso; Stiros, Stathis

    2017-04-01

    Volcanic activity includes phases of magma chamber inflation and deflation produced by movement of magma and/or hydrothermal processes. Such effects usually leave their imprint as deformation of the ground surface, which can be recorded by GNSS and other methods, and they can be modeled as elastic deformation processes, with deformation produced by volcanic masses of finite dimensions such as spheres, ellipsoids and parallelepipeds. Such volumes are modeled on the basis of inversion (non-linear, numerical solution) of systems of equations relating the unknown dimensions and location of magma sources to observations, currently mostly GNSS and InSAR data. Inversion techniques depend on the misfit between model predictions and observations, but because the systems of equations are highly non-linear, and because the adopted models of magma-source geometry are simple, non-unique solutions constrained by local extrema can be derived. Assessment of the derived magma models can be provided by independent observations and models, such as micro-seismicity distribution and changes in geophysical parameters. In the simplest case, magmatic intrusions can be modeled as spheres with diameters of at least a few tens of meters at depths of a few kilometers; hence they are expected to have a gravimetric signature at permanent recording stations on the ground surface, while larger intrusions may also leave an imprint on sensors in orbit around the earth or along precisely defined air paths. Identification of such gravimetric signals and separation of the "true" signal from measurement and ambient noise requires fine forward modeling of the wider area based on realistic simulation of the ambient gravimetric field, followed by modeling of its possible distortion by magmatic anomalies. Such results are useful for removing ambiguities in inverse modeling of ground deformation, and also for detecting magmatic anomalies offshore.

  9. K-Rich Basaltic Sources beneath Ultraslow Spreading Central Lena Trough in the Arctic Ocean

    NASA Astrophysics Data System (ADS)

    Ling, X.; Snow, J. E.; Li, Y.

    2016-12-01

    Magma sources fundamentally influence accretion processes at ultraslow spreading ridges. Potassium-enriched Mid-Ocean Ridge Basalt (K-MORB) was dredged from the central Lena Trough (CLT) in the Arctic Ocean (Nauret et al., 2011). Its geochemical signatures indicate a heterogeneous mantle source, with garnet probably present at low pressure. To explore the basaltic mantle sources beneath the study area, multiple models predicting melt sources and melting P-T conditions are applied in this study. P-T conditions are estimated with the experimentally derived thermobarometer of Hoang and Flower (1998). A batch melting model and a major-element model (AlphaMELTs) are used to calculate the heterogeneous mantle sources. The modeling suggests phlogopite is the dominant H2O- and K-bearing mineral in the magma source. Five percent partial melting of phlogopite and amphibole, mixed with depleted mantle (DM) melt, is consistent with the incompatible element pattern of the CLT basalt. P-T estimation yields 1198-1212 °C at 4-7 kbar as the likely melting conditions for the CLT basalt, whereas the chemical composition of the north Lena Trough (NLT) basalt is similar to N-MORB, and its P-T estimate corresponds to the 1300 °C normal mantle adiabat. The CLT basalt bulk composition is a mixture of 40% of the K-MORB endmember and an N-MORB-like endmember similar to the NLT basalt; binary mixing of the two endmembers therefore occurs in the CLT region. This mixing bears on the tectonic evolution of the region, which was simultaneous with the opening of the Arctic Ocean.
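    The batch (equilibrium) melting relation used in such trace-element modeling is compact enough to state directly; the values below are illustrative, not those of the study.

```python
def batch_melt(c0, D, F):
    """Batch (equilibrium) melting: liquid concentration C_l for source
    concentration c0, bulk partition coefficient D, and melt fraction F:
    C_l = c0 / (D + F * (1 - D))."""
    return c0 / (D + F * (1.0 - D))

# Illustrative: a highly incompatible element (D = 0.01) at the 5% partial
# melting invoked above is enriched roughly 17x in the melt.
enrichment = batch_melt(1.0, 0.01, 0.05)
```

    Incompatible-element enrichment of this kind, combined with phase-specific partition coefficients for phlogopite- and amphibole-bearing assemblages, is what lets the modeled melts be compared with the observed K-MORB pattern.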

  10. Use of empirically derived source-destination models to map regional conservation corridors

    Treesearch

    Samuel A. Cushman; Kevin S. McKelvey; Michael K. Schwartz

    2008-01-01

    The ability of populations to be connected across large landscapes via dispersal is critical to longterm viability for many species. One means to mitigate population isolation is the protection of movement corridors among habitat patches. Nevertheless, the utility of small, narrow, linear features as habitat corridors has been hotly debated. Here, we argue that...

  11. Machine learning and hurdle models for improving regional predictions of stream water acid neutralizing capacity

    Treesearch

    Nicholas A. Povak; Paul F. Hessburg; Keith M. Reynolds; Timothy J. Sullivan; Todd C. McDonnell; R. Brion Salter

    2013-01-01

    In many industrialized regions of the world, atmospherically deposited sulfur derived from industrial, nonpoint air pollution sources reduces stream water quality and results in acidic conditions that threaten aquatic resources. Accurate maps of predicted stream water acidity are an essential aid to managers who must identify acid-sensitive streams, potentially...

  12. Partitioning error components for accuracy-assessment of near-neighbor methods of imputation

    Treesearch

    Albert R. Stage; Nicholas L. Crookston

    2007-01-01

    Imputation is applied for two quite different purposes: to supply missing data to complete a data set for subsequent modeling analyses or to estimate subpopulation totals. Error properties of the imputed values have different effects in these two contexts. We partition errors of imputation derived from similar observation units as arising from three sources:...

  13. Models of Vocabulary Acquisition: Direct Tests and Text-Derived Simulations of Vocabulary Growth

    ERIC Educational Resources Information Center

    Biemiller, Andrew; Rosenstein, Mark; Sparks, Randall; Landauer, Thomas K.; Foltz, Peter W.

    2014-01-01

    Determining word meanings that ought to be taught or introduced is important for educators. A sequence for vocabulary growth can be inferred from many sources, including testing children's knowledge of word meanings at various ages, predicting from print frequency, or adult-recalled Age of Acquisition. A new approach, Word Maturity, is based on…

  14. Endothelium-derived fibronectin regulates neonatal vascular morphogenesis in an autocrine fashion.

    PubMed

    Turner, Christopher J; Badu-Nkansah, Kwabena; Hynes, Richard O

    2017-11-01

    Fibronectin containing alternatively spliced EIIIA and EIIIB domains is largely absent from mature quiescent vessels in adults, but is highly expressed around blood vessels during developmental and pathological angiogenesis. The precise functions of fibronectin and its splice variants during developmental angiogenesis however remain unclear due to the presence of cardiac, somitic, mesodermal and neural defects in existing global fibronectin KO mouse models. Using a rare family of surviving EIIIA EIIIB double KO mice, as well as inducible endothelial-specific fibronectin-deficient mutant mice, we show that vascular development in the neonatal retina is regulated in an autocrine manner by endothelium-derived fibronectin, and requires both EIIIA and EIIIB domains and the RGD-binding α5 and αv integrins for its function. Exogenous sources of fibronectin do not fully substitute for the autocrine function of endothelial fibronectin, demonstrating that fibronectins from different sources contribute differentially to specific aspects of angiogenesis.

  15. Increased fluxes of shelf-derived materials to the central Arctic Ocean

    PubMed Central

    Kipp, Lauren E.; Charette, Matthew A.; Moore, Willard S.; Henderson, Paul B.; Rigor, Ignatius G.

    2018-01-01

    Rising temperatures in the Arctic Ocean region are responsible for changes such as reduced ice cover, permafrost thawing, and increased river discharge, which, together, alter nutrient and carbon cycles over the vast Arctic continental shelf. We show that the concentration of radium-228, sourced to seawater through sediment-water exchange processes, has increased substantially in surface waters of the central Arctic Ocean over the past decade. A mass balance model for 228Ra suggests that this increase is due to an intensification of shelf-derived material inputs to the central basin, a source that would also carry elevated concentrations of dissolved organic carbon and nutrients. Therefore, we suggest that significant changes in the nutrient, carbon, and trace metal balances of the Arctic Ocean are underway, with the potential to affect biological productivity and species assemblages in Arctic surface waters. PMID:29326980
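The kind of budget behind the 228Ra mass balance can be illustrated with a one-box steady-state model: the tracer inventory I is sustained by an input flux F against radioactive decay, dI/dt = F - λI = 0, so F = λI. The numbers here are illustrative, not the study's data; only the 228Ra half-life (~5.75 yr) is a physical constant.

```python
import math

HALF_LIFE_YR = 5.75                   # 228Ra half-life in years
lam = math.log(2) / HALF_LIFE_YR      # decay constant, 1/yr

def steady_state_flux(inventory):
    """Input flux needed to sustain a given tracer inventory at steady
    state, from dI/dt = F - lam*I = 0."""
    return lam * inventory

# If the observed surface-water inventory doubles, the implied
# shelf-derived input flux doubles as well:
f1 = steady_state_flux(1.0)
f2 = steady_state_flux(2.0)
```

This is why a substantial increase in surface-water 228Ra concentration implies an intensified shelf-derived source rather than a change in decay.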

  16. Concise Review: Stem Cell Trials Using Companion Animal Disease Models.

    PubMed

    Hoffman, Andrew M; Dow, Steven W

    2016-07-01

    Studies to evaluate the therapeutic potential of stem cells in humans would benefit from more realistic animal models. In veterinary medicine, companion animals naturally develop many diseases that resemble human conditions, therefore, representing a novel source of preclinical models. To understand how companion animal disease models are being studied for this purpose, we reviewed the literature between 2008 and 2015 for reports on stem cell therapies in dogs and cats, excluding laboratory animals, induced disease models, cancer, and case reports. Disease models included osteoarthritis, intervertebral disc degeneration, dilated cardiomyopathy, inflammatory bowel diseases, Crohn's fistulas, meningoencephalomyelitis (multiple sclerosis-like), keratoconjunctivitis sicca (Sjogren's syndrome-like), atopic dermatitis, and chronic (end-stage) kidney disease. Stem cells evaluated in these studies included mesenchymal stem-stromal cells (MSC, 17/19 trials), olfactory ensheathing cells (OEC, 1 trial), or neural lineage cells derived from bone marrow MSC (1 trial), and 16/19 studies were performed in dogs. The MSC studies (13/17) used adipose tissue-derived MSC from either allogeneic (8/13) or autologous (5/13) sources. The majority of studies were open label, uncontrolled studies. Endpoints and protocols were feasible, and the stem cell therapies were reportedly safe and elicited beneficial patient responses in all but two of the trials. In conclusion, companion animals with naturally occurring diseases analogous to human conditions can be recruited into clinical trials and provide realistic insight into feasibility, safety, and biologic activity of novel stem cell therapies. However, improvements in the rigor of manufacturing, study design, and regulatory compliance will be needed to better utilize these models. Stem Cells 2016;34:1709-1729. © 2016 AlphaMed Press.

  17. Dynamics of β-cell turnover: evidence for β-cell turnover and regeneration from sources of β-cells other than β-cell replication in the HIP rat

    PubMed Central

    Manesso, Erica; Toffolo, Gianna M.; Saisho, Yoshifumi; Butler, Alexandra E.; Matveyenko, Aleksey V.; Cobelli, Claudio; Butler, Peter C.

    2009-01-01

    Type 2 diabetes is characterized by hyperglycemia, a deficit in β-cells, increased β-cell apoptosis, and islet amyloid derived from islet amyloid polypeptide (IAPP). These characteristics are recapitulated in the human IAPP transgenic (HIP) rat. We developed a mathematical model to quantify β-cell turnover and applied it to nondiabetic wild type (WT) vs. HIP rats from age 2 days to 10 mo to establish 1) whether β-cell formation is derived exclusively from β-cell replication, or whether other sources of β-cells (OSB) are present, and 2) to what extent, if any, there is attempted β-cell regeneration in the HIP rat and if this is through β-cell replication or OSB. We conclude that formation and maintenance of adult β-cells depends largely (∼80%) on formation of β-cells independent from β-cell duplication. Moreover, this source adaptively increases in the HIP rat, implying attempted β-cell regeneration that substantially slows loss of β-cell mass. PMID:19470833

  18. Dynamics of beta-cell turnover: evidence for beta-cell turnover and regeneration from sources of beta-cells other than beta-cell replication in the HIP rat.

    PubMed

    Manesso, Erica; Toffolo, Gianna M; Saisho, Yoshifumi; Butler, Alexandra E; Matveyenko, Aleksey V; Cobelli, Claudio; Butler, Peter C

    2009-08-01

    Type 2 diabetes is characterized by hyperglycemia, a deficit in beta-cells, increased beta-cell apoptosis, and islet amyloid derived from islet amyloid polypeptide (IAPP). These characteristics are recapitulated in the human IAPP transgenic (HIP) rat. We developed a mathematical model to quantify beta-cell turnover and applied it to nondiabetic wild type (WT) vs. HIP rats from age 2 days to 10 mo to establish 1) whether beta-cell formation is derived exclusively from beta-cell replication, or whether other sources of beta-cells (OSB) are present, and 2) to what extent, if any, there is attempted beta-cell regeneration in the HIP rat and if this is through beta-cell replication or OSB. We conclude that formation and maintenance of adult beta-cells depends largely ( approximately 80%) on formation of beta-cells independent from beta-cell duplication. Moreover, this source adaptively increases in the HIP rat, implying attempted beta-cell regeneration that substantially slows loss of beta-cell mass.
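A minimal version of the turnover model described in records 17 and 18 tracks beta-cell mass B through replication (rate r), a second source of beta-cells (OSB, input s), and apoptosis (rate a): dB/dt = rB + s - aB. The rates below are illustrative placeholders, not the fitted values from the paper.

```python
def simulate(b0, r, s, a, dt=0.01, steps=1000):
    """Forward-Euler integration of dB/dt = r*B + s - a*B over
    steps*dt time units, starting from mass b0."""
    b = b0
    for _ in range(steps):
        b += dt * (r * b + s - a * b)
    return b

# When replication alone (r < a) cannot offset apoptosis, mass decays;
# a constant OSB input can hold it at the steady state B* = s / (a - r):
b_no_osb = simulate(b0=1.0, r=0.02, s=0.0, a=0.10)   # mass declines
b_osb = simulate(b0=1.0, r=0.02, s=0.08, a=0.10)     # mass maintained
```

The qualitative point matches the abstract's conclusion: a replication-independent source can dominate (~80%) the formation and maintenance of adult beta-cell mass.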

  19. Finite difference time domain (FDTD) method for modeling the effect of switched gradients on the human body in MRI.

    PubMed

    Zhao, Huawei; Crozier, Stuart; Liu, Feng

    2002-12-01

    Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that the correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model. Copyright 2002 Wiley-Liss, Inc.
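One generic motivation for derivative source injection, sketched here under assumptions not taken from the article itself: a raw Gaussian pulse carries a nonzero DC component, which in a time-stepping scheme can accumulate as a spurious static field, while its time derivative integrates to zero and therefore carries no DC term. The pulse parameters below are arbitrary.

```python
import math

dt = 1e-3
t = [i * dt for i in range(4000)]
t0, tau = 2.0, 0.3   # pulse center and width (illustrative)

# Gaussian pulse and its analytic time derivative
gauss = [math.exp(-((ti - t0) / tau) ** 2) for ti in t]
dgauss = [-2 * (ti - t0) / tau ** 2 * math.exp(-((ti - t0) / tau) ** 2)
          for ti in t]

dc_gauss = sum(gauss) * dt    # ~tau*sqrt(pi): the Gaussian has DC content
dc_dgauss = sum(dgauss) * dt  # ~0: the derivative pulse carries no DC
```

The vanishing integral of the derivative pulse is one plausible reason derivative injection behaves better at the low frequencies relevant to gradient-induced eddy currents.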

  20. Improved Bayesian Infrasonic Source Localization for regional infrasound

    DOE PAGES

    Blom, Philip S.; Marcillo, Omar; Arrowsmith, Stephen J.

    2015-10-20

    The Bayesian Infrasonic Source Localization (BISL) methodology is examined and simplified, providing a generalized method of estimating the source location and time for an infrasonic event, and the underlying mathematical framework is presented. The likelihood function describing an infrasonic detection used in BISL has been redefined to include the von Mises distribution developed in directional statistics, together with propagation-based, physically derived celerity-range and azimuth-deviation models. Frameworks for constructing propagation-based celerity-range and azimuth-deviation statistics are presented to demonstrate how stochastic propagation modelling methods can be used to improve the precision and accuracy of the posterior probability density function describing the source localization. Infrasonic signals recorded at a number of arrays in the western United States, produced by rocket motor detonations at the Utah Test and Training Range, are used to demonstrate the application of the new mathematical framework and to quantify the improvement obtained by using the stochastic propagation modelling methods. Using propagation-based priors, the spatial and temporal confidence bounds of the source decreased by more than 40 per cent in all cases, and by as much as 80 per cent in one case. Further, the accuracy of the estimates remained high, keeping the ground truth within the 99 per cent confidence bounds for all cases.
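The directional-statistics ingredient named in the abstract, a von Mises likelihood for an observed back-azimuth, can be sketched as follows. The concentration parameter kappa and the angles are illustrative choices, not values from the study.

```python
import math

def von_mises_pdf(theta, mu, kappa):
    """von Mises density on the circle (angles in radians):
    f(theta) = exp(kappa*cos(theta - mu)) / (2*pi*I0(kappa)),
    with I0 the modified Bessel function of order zero, evaluated
    here by its power series."""
    i0 = sum((kappa / 2) ** (2 * k) / math.factorial(k) ** 2
             for k in range(30))
    return math.exp(kappa * math.cos(theta - mu)) / (2 * math.pi * i0)

# The likelihood peaks when the observed azimuth matches the predicted
# one and falls off with angular misfit, controlled by kappa:
p_match = von_mises_pdf(0.0, 0.0, kappa=10.0)
p_off = von_mises_pdf(math.pi / 4, 0.0, kappa=10.0)
```

Because the density is periodic in theta, it handles azimuth wrap-around naturally, which is the reason circular distributions are preferred over a Gaussian on raw azimuth values.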
