Sample records for distributed source modeling

  1. Quantifying the uncertainty of nonpoint source attribution in distributed water quality models: A Bayesian assessment of SWAT's sediment export predictions

    NASA Astrophysics Data System (ADS)

    Wellen, Christopher; Arhonditsis, George B.; Long, Tanya; Boyd, Duncan

    2014-11-01

    Spatially distributed nonpoint source watershed models are essential tools to estimate the magnitude and sources of diffuse pollution. However, little work has been undertaken to understand the sources and ramifications of the uncertainty involved in their use. In this study we conduct the first Bayesian uncertainty analysis of the water quality components of the SWAT model, one of the most commonly used distributed nonpoint source models. Working in Southern Ontario, we apply three Bayesian configurations for calibrating SWAT to Redhill Creek, an urban catchment, and Grindstone Creek, an agricultural one. We answer four interrelated questions: can SWAT determine suspended sediment sources with confidence when end-of-basin data are used for calibration? How does uncertainty propagate from the discharge submodel to the suspended sediment submodels? Do the estimated sediment sources vary when different calibration approaches are used? Can we combine the knowledge gained from different calibration approaches? We show that: (i) despite reasonable fit at the basin outlet, the simulated sediment sources are subject to uncertainty sufficient to undermine the typical approach of reliance on a single, best-fit simulation; (ii) more than a third of the uncertainty of sediment load predictions may stem from the discharge submodel; (iii) estimated sediment sources do vary significantly across the three statistical configurations of model calibration despite end-of-basin predictions being virtually identical; and (iv) Bayesian model averaging is an approach that can synthesize predictions when a number of adequate distributed models make divergent source apportionments. We conclude with recommendations for future research to reduce the uncertainty encountered when using distributed nonpoint source models for source apportionment.
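
    The Bayesian model averaging (BMA) recommendation in point (iv) can be illustrated with a toy computation. This is a minimal sketch, not the paper's implementation: the model weights, source categories, and apportionment fractions below are invented.

    ```python
    import numpy as np

    # Toy Bayesian model averaging over source apportionments from three
    # calibration configurations. All numbers are illustrative placeholders.

    # Posterior model weights, e.g. derived from marginal likelihoods.
    log_evidence = np.array([-1250.3, -1248.1, -1249.0])   # assumed values
    w = np.exp(log_evidence - log_evidence.max())
    w /= w.sum()

    # Fraction of sediment attributed to each source, one row per model
    # (columns might be, e.g., channel, urban, and agricultural sources).
    apportionment = np.array([
        [0.55, 0.30, 0.15],
        [0.40, 0.35, 0.25],
        [0.48, 0.22, 0.30],
    ])

    bma_mean = w @ apportionment                  # BMA point estimate
    bma_var = w @ (apportionment - bma_mean)**2   # between-model variance
    print("BMA apportionment:", bma_mean)
    print("between-model variance:", bma_var)
    ```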

  2. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    NASA Astrophysics Data System (ADS)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

    Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, challenges include (a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and (b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km2) of the Mekong. To this end, we apply the CASCADE modeling framework (Schmitt et al. (2016)). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes on the network scale based on remotely sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to the sparse available sedimentary records. Only 1% of initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could be coupled to more detailed models of hillslope processes in the future to derive integrated models of hillslope production and fluvial transport, which would be particularly useful for identifying sediment provenance in poorly monitored river basins.
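
    The inverse Monte Carlo step described above amounts to rejection sampling over source properties. A minimal sketch, with an invented capacity law, reach slopes, and acceptance tolerance standing in for CASCADE's actual transport computation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def transport_capacity(d, reach_slope):
        # Placeholder capacity law: finer grains and steeper reaches move more.
        return 1e3 * reach_slope / d

    observed_load = 120.0                       # hypothetical gauge flux [kt/yr]
    slopes = np.array([0.004, 0.003, 0.0015])   # downstream reach slopes

    accepted = []
    for _ in range(7500):                       # one draw per initialization
        d = rng.uniform(0.1e-3, 50e-3)          # candidate grain size [m]
        supply = transport_capacity(d, slopes).min()  # limiting capacity
        if abs(supply - observed_load) / observed_load < 0.05:
            accepted.append((d, supply))        # consistent with the record

    print(f"accepted {len(accepted)} of 7500 realizations")
    ```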

  3. A Bayesian Framework of Uncertainties Integration in 3D Geological Model

    NASA Astrophysics Data System (ADS)

    Liang, D.; Liu, X.

    2017-12-01

    A 3D geological model can describe complicated geological phenomena in an intuitive way, but its application may be limited by uncertain factors. Although great progress has been made over the years, most studies decompose the uncertainties of a geological model and analyze them item by item, ignoring the combined impact of multi-source uncertainties. To evaluate this combined uncertainty, we quantify uncertainty with probability distributions and propose a Bayesian framework for uncertainty integration. Within this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution that evaluates the overall uncertainty of the geological model. Because uncertainties propagate and accumulate during the modeling process, the gradual integration of multi-source uncertainty serves as a simulation of uncertainty propagation, with Bayesian inference accomplishing the uncertainty updating at each modeling step. The maximum entropy principle is effective for estimating the prior probability distribution, ensuring that the prior satisfies the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that evaluates the combined uncertainty of the geological model and represents the joint impact of all uncertain factors on its spatial structure. The framework thus provides a way to evaluate the combined impact of multi-source uncertainties on a geological model and an approach to studying the mechanism of uncertainty propagation in geological modeling.
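
    A minimal sketch of the gradual-integration idea: start from a maximum-entropy (here, uniform) prior over a discretized quantity and fold in one likelihood per uncertainty source. The quantity (a horizon depth) and all numbers are hypothetical, not from the paper.

    ```python
    import numpy as np

    depth = np.linspace(90.0, 110.0, 201)              # candidate depths [m]
    posterior = np.full_like(depth, 1.0 / depth.size)  # max-entropy prior

    def gaussian_like(z, mu, sigma):
        return np.exp(-0.5 * ((z - mu) / sigma) ** 2)

    # Sequentially integrate a borehole measurement error model, then a
    # looser "cognitive" constraint from expert interpretation.
    for mu, sigma in [(100.0, 1.5), (101.0, 4.0)]:
        posterior *= gaussian_like(depth, mu, sigma)
        posterior /= posterior.sum()               # Bayesian update

    mean = (depth * posterior).sum()
    std = np.sqrt(((depth - mean) ** 2 * posterior).sum())
    print(f"posterior depth: {mean:.2f} +/- {std:.2f} m")
    ```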

  4. The Competition Between a Localised and Distributed Source of Buoyancy

    NASA Astrophysics Data System (ADS)

    Partridge, Jamie; Linden, Paul

    2012-11-01

    We propose a new mathematical model to study the competition between localised and distributed sources of buoyancy within a naturally ventilated filling box. The main controlling parameters in this configuration are the buoyancy fluxes of the distributed and local source, specifically their ratio Ψ. The steady state dynamics of the flow are heavily dependent on this parameter. For large Ψ, where the distributed source dominates, we find the space becomes well mixed, as expected if driven by a distributed source alone. Conversely, for small Ψ we find the space reaches a stable two-layer stratification. This is analogous to the classical case of a purely local source, but here the lower layer is buoyant compared to the ambient, due to the constant flux of buoyancy emanating from the distributed source. The ventilation flow rate, the buoyancy of the layers and also the location of the interface height, which separates the two-layer stratification, are obtainable from the model. To validate the theoretical model, small scale laboratory experiments were carried out. Water was used as the working medium, with buoyancy being driven directly by temperature differences. Theoretical results were compared with experimental data and overall good agreement was found. A CASE award project with Arup.

  5. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the χ² per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
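
    The inverse step can be sketched as a nonlinear least-squares fit of six dipole parameters (position and moment) to the electrode potentials. The kernel below assumes an infinite homogeneous medium for brevity, not the bounded spherical medium used in the study; the geometry, conductivity, and source values are all illustrative.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    sigma = 0.2  # assumed medium conductivity [S/m]

    def dipole_potential(params, electrodes):
        """Potential of a point dipole (r0, p) in an infinite medium."""
        r0, p = params[:3], params[3:]
        d = electrodes - r0                    # (n, 3) displacements
        r = np.linalg.norm(d, axis=1)
        return d @ p / (4 * np.pi * sigma * r**3)

    rng = np.random.default_rng(1)
    electrodes = rng.uniform(-0.1, 0.1, (32, 3)) + [0, 0, 0.15]

    # Synthetic "distributed" source: several small dipoles along a belt.
    true = [np.r_[x, 0.0, 0.0, 0.0, 1e-6, 0.0] for x in (-0.01, 0.0, 0.01)]
    v_obs = sum(dipole_potential(t, electrodes) for t in true)

    # Fit one equivalent dipole by minimizing the residual sum of squares.
    fit = least_squares(
        lambda q: dipole_potential(q, electrodes) - v_obs,
        x0=np.r_[0.0, 0.0, 0.01, 0.0, 1e-6, 0.0],
    )
    print("equivalent dipole location [m]:", fit.x[:3])
    ```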

  6. A mesostate-space model for EEG and MEG.

    PubMed

    Daunizeau, Jean; Friston, Karl J

    2007-10-15

    We present a multi-scale generative model for EEG that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources, (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates, (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other), and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and its evidence. Since the number of meso-sources specifies the model, the model evidence can be used to compare models and find the optimum number of meso-sources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of meso-sources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of meso-sources, while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamical causal model (e.g. a neural mass model) as a Markov process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques, using synthetic data and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.

  7. Ion-source modeling and improved performance of the CAMS high-intensity Cs-sputter ion source

    NASA Astrophysics Data System (ADS)

    Brown, T. A.; Roberts, M. L.; Southon, J. R.

    2000-10-01

    The interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS) has been computer modeled using the program NEDLab, with the aim of improving negative ion output. Space charge effects on ion trajectories within the source were modeled through a successive iteration process involving the calculation of ion trajectories through Poisson-equation-determined electric fields, followed by calculation of modified electric fields incorporating the charge distribution from the previously calculated ion trajectories. The program has several additional features that are useful in ion source modeling: (1) averaging of space charge distributions over successive iterations to suppress instabilities, (2) Child's Law modeling of space charge limited ion emission from surfaces, and (3) emission of particular ion groups with a thermal energy distribution and at randomized angles. The results of the modeling effort indicated that significant modification of the interior geometry of the source would double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source. The results of the implementation of the new geometry were found to be consistent with the model results.
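
    The successive-iteration scheme with iterate averaging can be abstracted as a damped fixed-point iteration. In the sketch below the Poisson solve and trajectory tracing are stand-in linear maps (not NEDLab calls) chosen so that the combined map has gain -1.5: the undamped iteration would oscillate and diverge, while averaging converges.

    ```python
    def solve_fields(rho):
        return 0.5 * rho + 1.0            # placeholder Poisson solve

    def trace_charge(field):
        return 2.0 - 3.0 * field          # placeholder trajectory tracing

    rho = 0.0
    for _ in range(50):
        rho_new = trace_charge(solve_fields(rho))
        rho = 0.5 * rho + 0.5 * rho_new   # averaging suppresses oscillation
    print("converged space-charge amplitude:", round(rho, 4))
    ```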

  8. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    PubMed Central

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2018-01-01

    The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology. PMID:28079526
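
    The spectrum-unfolding step can be sketched as a small Levenberg-Marquardt problem: express the PDD as a weighted sum of per-energy-bin depth-dose kernels and solve for the bin weights. The crude exponential kernels and attenuation coefficients below are placeholders, not the kernels an actual commissioning would use.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    depth = np.linspace(0.5, 15, 30)             # depth in water [cm]
    mu = np.array([0.25, 0.21, 0.18, 0.15])      # assumed per-bin atten. [1/cm]
    kernels = np.exp(-np.outer(mu, depth))       # (bins, depths)

    w_true = np.array([0.1, 0.3, 0.4, 0.2])
    pdd_meas = w_true @ kernels                  # stand-in for measured PDD

    fit = least_squares(
        lambda w: w @ kernels - pdd_meas,        # residuals vs. measurement
        x0=np.full(4, 0.25),
        method="lm",                             # Levenberg-Marquardt
    )
    w = fit.x / fit.x.sum()                      # normalized spectrum weights
    print("recovered bin weights:", np.round(w, 3))
    ```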

  9. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    NASA Astrophysics Data System (ADS)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.

  10. A GIS-based multi-source and multi-box modeling approach (GMSMB) for air pollution assessment--a North American case study.

    PubMed

    Wang, Bao-Zhen; Chen, Zhi

    2013-01-01

    This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data, including emission sources, air quality monitoring, meteorological data, and spatial location information required for air quality modeling, is brought into an integrated modeling environment. This allows more details of the spatial variation in source distribution and meteorological conditions to be quantitatively analyzed. The developed modeling approach has been examined to predict the spatial concentration distribution of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with the monitoring data. Good agreement is achieved, which demonstrates that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
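
    The building block that such an approach superposes over the GIS grid for each point source is the Gaussian plume. A minimal sketch with placeholder power-law dispersion coefficients (a real model would use stability-class curves):

    ```python
    import numpy as np

    def plume_concentration(q, u, x, y, z, h):
        """Ground-reflected Gaussian plume.

        q: emission rate [g/s], u: wind speed [m/s], x/y/z: downwind,
        crosswind, and vertical coordinates [m], h: stack height [m].
        """
        sigma_y = 0.08 * x**0.9      # assumed dispersion growth with x
        sigma_z = 0.06 * x**0.85
        lateral = np.exp(-y**2 / (2 * sigma_y**2))
        vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                    + np.exp(-(z + h)**2 / (2 * sigma_z**2)))  # reflection
        return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Concentration 2 km downwind, on the centerline at ground level [g/m^3]:
    print(plume_concentration(q=50.0, u=3.0, x=2000.0, y=0.0, z=0.0, h=30.0))
    ```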

  11. Dependence of Microlensing on Source Size and Lens Mass

    NASA Astrophysics Data System (ADS)

    Congdon, A. B.; Keeton, C. R.

    2007-11-01

    In gravitationally lensed quasars, the magnification of an image depends on the configuration of stars in the lensing galaxy. We study the statistics of the magnification distribution for random star fields. The width of the distribution characterizes the amount by which the observed magnification is likely to differ from models in which the mass is smoothly distributed. We use numerical simulations to explore how the width of the magnification distribution depends on the mass function of stars, and on the size of the source quasar. We then propose a semi-analytic model to describe the distribution width for different source sizes and stellar mass functions.

  12. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed, within an integrated GIS modeling environment, a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas, and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated into an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provides a physically based method that gives realistic results for watersheds with VSA hydrology.
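
    The runoff-volume piece of the method is the standard SCS-CN relation, Q = (P - Ia)² / (P - Ia + S) with S = 1000/CN - 10 and Ia = 0.2 S; the distributed CN-VSA step then spreads this volume over the saturated fraction ranked by the topographic index. A sketch in the customary inch units:

    ```python
    import numpy as np

    def scs_runoff(p_in, cn):
        """SCS-CN runoff depth [in] for storm rainfall p_in [in]."""
        s = 1000.0 / cn - 10.0      # potential maximum retention [in]
        ia = 0.2 * s                # initial abstraction [in]
        excess = np.maximum(p_in - ia, 0.0)
        return excess**2 / (excess + s)

    # Example: a 3-inch storm over areas with different curve numbers.
    for cn in (60, 75, 90):
        print(f"CN={cn}: Q={scs_runoff(3.0, cn):.2f} in")
    ```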

  13. Distributed source model for the full-wave electromagnetic simulation of nonlinear terahertz generation.

    PubMed

    Fumeaux, Christophe; Lin, Hungyen; Serita, Kazunori; Withayachumnankul, Withawat; Kaufmann, Thomas; Tonouchi, Masayoshi; Abbott, Derek

    2012-07-30

    The process of terahertz generation through optical rectification in a nonlinear crystal is modeled using discretized equivalent current sources. The equivalent terahertz sources are distributed in the active volume and computed based on a separately modeled near-infrared pump beam. This approach can be used to define an appropriate excitation for full-wave electromagnetic numerical simulations of the generated terahertz radiation. This enables predictive modeling of the near-field interactions of the terahertz beam with micro-structured samples, e.g. in a near-field time-resolved microscopy system. The distributed source model is described in detail, and an implementation in a particular full-wave simulation tool is presented. The numerical results are then validated through a series of measurements on square apertures. The general principle can be applied to other nonlinear processes with possible implementation in any full-wave numerical electromagnetic solver.

  14. Theoretical and measured electric field distributions within an annular phased array: consideration of source antennas.

    PubMed

    Zhang, Y; Joines, W T; Jirtle, R L; Samulski, T V

    1993-08-01

    The magnitude of E-field patterns generated by an annular array prototype device has been calculated and measured. Two models were used to describe the radiating sources: a simple linear dipole and a stripline antenna model. The stripline model includes detailed geometry of the actual antennas used in the prototype and an estimate of the antenna current based on microstrip transmission line theory. This more detailed model yields better agreement with the measured field patterns, reducing the rms discrepancy by a factor of about 6 (from approximately 23% to 4%) in the central region of interest where the SEM is within 25% of the maximum. We conclude that accurate modeling of source current distributions is important for determining SEM distributions associated with such heating devices.

  15. Computation of marginal distributions of peak-heights in electropherograms for analysing single source and mixture STR DNA samples.

    PubMed

    Cowell, Robert G

    2018-05-04

    Current models for single source and mixture samples, and probabilistic genotyping software based on them used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model the allelic peak height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
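
    The numerical core, recovering a probability mass function from a probability generating function (PGF) evaluated at roots of unity via a discrete Fourier transform, can be sketched as follows. The PGF composition below (binomial sampling followed by two duplication-style cycles) is an illustrative stand-in, not the paper's model.

    ```python
    import numpy as np

    N = 256                                    # bound on the amplicon count
    k = np.arange(N)
    omega = np.exp(2j * np.pi * k / N)         # N-th roots of unity

    def sampling_pgf(s, n0=20, p=0.3):
        # PGF of Binomial(n0, p): template molecules entering the PCR
        return (1 - p + p * s) ** n0

    def cycle_pgf(s, eff=0.9):
        # one PCR cycle: a molecule duplicates with probability eff
        return (1 - eff) * s + eff * s**2

    # Compose PGFs (sampling, then two cycles), evaluate at roots of unity.
    g = sampling_pgf(cycle_pgf(cycle_pgf(omega)))

    # p_n = (1/N) * sum_k G(omega^k) * omega^{-nk}: a forward DFT up to 1/N.
    pmf = np.clip(np.fft.fft(g).real / N, 0.0, None)

    print("mean amplicon count:", (np.arange(N) * pmf).sum())  # ~20*0.3*1.9^2
    ```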

  16. Non-Poissonian Distribution of Tsunami Waiting Times

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2007-12-01

    Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from the exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog (γ=0.63-0.67) [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo in comparison to the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution: the exponential distribution augmented by a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicates that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source. Triggered seismogenic tsunamis do not necessarily originate from the same fault zone, however. For example, subduction-thrust and outer-rise earthquake pairs are evident, such as the November 2006 and January 2007 Kuril Islands tsunamigenic pair. Because of variations in tsunami source parameters, such as water depth above the source, triggered tsunami events with short waiting times are not systematically smaller than the primary tsunami.
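
    The gamma-distribution comparison in Model (1) is straightforward to reproduce in outline: fit a gamma law to a waiting-time sample and inspect the shape parameter. Synthetic data stand in for the tsunami catalog here.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Synthetic waiting-time catalog [days] with clustering (shape < 1).
    waits = rng.gamma(shape=0.65, scale=30.0, size=500)

    # Fix the location at zero and fit shape and scale.
    shape, loc, scale = stats.gamma.fit(waits, floc=0.0)
    print(f"fitted gamma shape = {shape:.2f} "
          "(a Poisson process would give 1.0)")
    ```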

  17. Objective estimates of mantle 3He in the ocean and implications for constraining the deep ocean circulation

    NASA Astrophysics Data System (ADS)

    Holzer, Mark; DeVries, Timothy; Bianchi, Daniele; Newton, Robert; Schlosser, Peter; Winckler, Gisela

    2017-01-01

    Hydrothermal vents along the ocean's tectonic ridge systems inject superheated water and large amounts of dissolved metals that impact the deep ocean circulation and the oceanic cycling of trace metals. The hydrothermal fluid contains dissolved mantle helium that is enriched in 3He relative to the atmosphere, providing an isotopic tracer of the ocean's deep circulation and a marker of hydrothermal sources. This work investigates the potential for the 3He/4He isotope ratio to constrain the ocean's mantle 3He source and to provide constraints on the ocean's deep circulation. We use an ensemble of 11 data-assimilated steady-state ocean circulation models and a mantle helium source based on geographically varying sea-floor spreading rates. The global source distribution is partitioned into 6 regions, and the vertical profile and source amplitude of each region are varied independently to determine the optimal 3He source distribution that minimizes the mismatch between modeled and observed δ3He. In this way, we are able to fit the observed δ3He distribution to within a relative error of ∼15%, with a global 3He source that ranges from 640 to 850 mol yr-1, depending on circulation. The fit captures the vertical and interbasin gradients of the δ3He distribution very well and reproduces its jet-sheared saddle point in the deep equatorial Pacific. This demonstrates that the data-assimilated models have much greater fidelity to the deep ocean circulation than other coarse-resolution ocean models. Nonetheless, the modelled δ3He distributions still display some systematic biases, especially in the deep North Pacific where δ3He is overpredicted by our models, and in the southeastern tropical Pacific, where observed westward-spreading δ3He plumes are not well captured. Sources inferred by the data-assimilated transport with and without isopycnally aligned eddy diffusivity differ widely in the Southern Ocean, in spite of the ability to match the observed distributions of CFCs and radiocarbon for either eddy parameterization.

  18. Gaussian Mixture Models of Between-Source Variation for Likelihood Ratio Computation from Multivariate Data

    PubMed Central

    Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
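
    The swap of a KDF for a GMM in the between-source density can be sketched with scikit-learn. The sketch simplifies the multivariate kernel LR formula considerably: within-source variation is a fixed isotropic normal, and all data are synthetic stand-ins.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(7)
    # Background source means with two clusters (a shape a single kernel
    # bandwidth struggles to capture, motivating the mixture model).
    bg_means = np.vstack([rng.normal(0, 3, (200, 2)),
                          rng.normal(6, 1.5, (100, 2))])
    gmm = GaussianMixture(n_components=2, random_state=0).fit(bg_means)

    sigma_w = 0.5                    # assumed within-source std (constant)

    def log_lr(trace, ref_mean):
        # Numerator: the trace comes from the reference source.
        num = (-0.5 * np.sum((trace - ref_mean) ** 2) / sigma_w**2
               - trace.size * np.log(sigma_w * np.sqrt(2 * np.pi)))
        # Denominator: the trace comes from a random source under the GMM.
        den = gmm.score_samples(trace.reshape(1, -1))[0]
        return num - den

    trace = np.array([0.4, -0.2])    # measurement on the questioned trace
    print("log LR:", log_lr(trace, ref_mean=np.array([0.5, 0.0])))
    ```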

  19. Discontinuous model with semi analytical sheath interface for radio frequency plasma

    NASA Astrophysics Data System (ADS)

    Miyashita, Masaru

    2016-09-01

    Sumitomo Heavy Industries, Ltd. provides many products utilizing plasma. In this study, we focus on a radio frequency (RF) plasma source driven by an interior antenna. This plasma source is expected to offer high density and low metal contamination; however, sputtering of the antenna cover by high-energy ions accelerated across the sheath voltage remains problematic. We have developed a new model that can calculate the sheath voltage waveform in the RF plasma source within a realistic calculation time. The model is discontinuous in that the electron fluid equations in the plasma are connected to the usual Poisson equation in the antenna cover and chamber through a semi-analytical sheath interface. We estimate the sputtering distribution from the sheath voltage waveform calculated by this model, together with a sputtering yield model and an ion energy distribution function (IEDF) model. The estimated sputtering distribution reproduces the trend of the experimental results.

  20. A methodology for efficiency optimization of betavoltaic cell design using an isotropic planar source having an energy dependent beta particle distribution.

    PubMed

    Theirrattanakul, Sirichai; Prelas, Mark

    2017-09-01

    Nuclear batteries based on silicon carbide betavoltaic cells have been studied extensively in the literature. This paper describes an analysis of design parameters, which can be applied to a variety of materials, but is specific to silicon carbide. In order to optimize the interface between a beta source and silicon carbide p-n junction, it is important to account for the specific isotope, angular distribution of the beta particles from the source, the energy distribution of the source as well as the geometrical aspects of the interface between the source and the transducer. In this work, both the angular distribution and energy distribution of the beta particles are modeled using a thin planar beta source (e.g., H-3, Ni-63, S-35, Pm-147, Sr-90, and Y-90) with GEANT4. Previous studies of betavoltaics with various source isotopes have shown that Monte Carlo based codes such as MCNPX, GEANT4 and Penelope generate similar results. GEANT4 is chosen because it has important strengths for the treatment of electron energies below one keV and it is widely available. The model demonstrates the effects of angular distribution, the maximum energy of the beta particle and energy distribution of the beta source on the betavoltaic and it is useful in determining the spatial profile of the power deposition in the cell. Copyright © 2017. Published by Elsevier Ltd.

  1. Naima: a Python package for inference of particle distribution properties from nonthermal spectra

    NASA Astrophysics Data System (ADS)

    Zabalza, V.

    2015-07-01

    The ultimate goal of the observation of nonthermal emission from astrophysical sources is to understand the underlying particle acceleration and evolution processes, yet few tools are publicly available to infer the particle distribution properties from the observed photon spectra from X-ray to VHE gamma rays. Here I present naima, an open source Python package that provides models for nonthermal radiative emission from homogeneous distributions of relativistic electrons and protons. Contributions from synchrotron, inverse Compton, nonthermal bremsstrahlung, and neutral-pion decay can be computed for a series of functional shapes of the particle energy distributions, with the possibility of using user-defined particle distribution functions. In addition, naima provides a set of functions that allow one to use these models to fit observed nonthermal spectra through an MCMC procedure, obtaining probability distribution functions for the particle distribution parameters. Here I present the models and methods available in naima and an example of their application to the understanding of a galactic nonthermal source. naima's documentation, including how to install the package, is available at http://naima.readthedocs.org.
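
    A usage sketch of naima's radiative models. The class and method names below follow the package documentation as I recall it (ExponentialCutoffPowerLaw, InverseCompton with a seed_photon_fields argument, and the .sed() method); they may differ between versions, and the parameter values are arbitrary illustrations rather than a fit to any source.

    ```python
    import astropy.units as u
    import numpy as np
    from naima.models import ExponentialCutoffPowerLaw, InverseCompton

    # Parent electron distribution: exponential-cutoff power law.
    ecpl = ExponentialCutoffPowerLaw(
        amplitude=1e36 / u.eV, e_0=10 * u.TeV, alpha=2.1, e_cutoff=13 * u.TeV
    )

    # Inverse-Compton emission on the CMB seed photon field.
    ic = InverseCompton(ecpl, seed_photon_fields=["CMB"])

    # Spectral energy distribution at an assumed source distance.
    photon_energy = np.logspace(-1, 2, 40) * u.TeV
    sed = ic.sed(photon_energy, distance=1.5 * u.kpc)
    print(sed[:3])
    ```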

  2. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    NASA Astrophysics Data System (ADS)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated area locations in the watersheds agreed with the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.
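
    The topographic index used for the distribution step is λ = ln(a / tan β), with a the upslope contributing area per unit contour length and β the local slope. A toy ranking sketch (a real application derives a and β from a DEM flow-accumulation routine):

    ```python
    import numpy as np

    a = np.array([5.0, 40.0, 300.0, 2500.0])   # upslope area per contour [m]
    beta = np.radians([12.0, 8.0, 4.0, 1.5])   # local slope angles

    twi = np.log(a / np.tan(beta))             # topographic wetness index

    # Saturate (and generate runoff from) the highest-index fraction needed
    # to match the SCS-CN saturated-area extent, e.g. the top 25%:
    threshold = np.quantile(twi, 0.75)
    print("TWI:", np.round(twi, 2), "-> saturated:", twi >= threshold)
    ```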

  3. Impact of the differential fluence distribution of brachytherapy sources on the spectroscopic dose-rate constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malin, Martha J.; Bartol, Laura J.; DeWerd, Larry A.

    2015-05-15

    Purpose: To investigate why dose-rate constants for (125)I and (103)Pd seeds computed using the spectroscopic technique, Λ_spec, differ from those computed with standard Monte Carlo (MC) techniques. A potential cause of these discrepancies is the spectroscopic technique’s use of approximations of the true fluence distribution leaving the source, φ_full. In particular, the fluence distribution used in the spectroscopic technique, φ_spec, approximates the spatial, angular, and energy distributions of φ_full. This work quantified the extent to which each of these approximations affects the accuracy of Λ_spec. Additionally, this study investigated how the simplified water-only model used in the spectroscopic technique impacts the accuracy of Λ_spec. Methods: Dose-rate constants as described in the AAPM TG-43U1 report, Λ_full, were computed with MC simulations using the full source geometry for each of 14 different (125)I and 6 different (103)Pd source models. In addition, the spectrum emitted along the perpendicular bisector of each source was simulated in vacuum using the full source model and used to compute Λ_spec. Λ_spec was compared to Λ_full to verify the discrepancy reported by Rodriguez and Rogers. Using MC simulations, a phase space of the fluence leaving the encapsulation of each full source model was created. The spatial and angular distributions of φ_full were extracted from the phase spaces and were qualitatively compared to those used by φ_spec. Additionally, each phase space was modified to reflect one of the approximated distributions (spatial, angular, or energy) used by φ_spec. The dose-rate constant resulting from using approximated distribution i, Λ_approx,i, was computed using the modified phase space and compared to Λ_full. For each source, this process was repeated for each approximation in order to determine which approximations used in the spectroscopic technique affect the accuracy of Λ_spec. Results: For all sources studied, the angular and spatial distributions of φ_full were more complex than the distributions used in φ_spec. Differences between Λ_spec and Λ_full ranged from −0.6% to +6.4%, confirming the discrepancies found by Rodriguez and Rogers. The largest contribution to the discrepancy was the assumption of isotropic emission in φ_spec, which caused differences in Λ of up to +5.3% relative to Λ_full. Use of the approximated spatial and energy distributions caused smaller average discrepancies in Λ of −0.4% and +0.1%, respectively. The water-only model introduced an average discrepancy in Λ of −0.4%. Conclusions: The approximations used in φ_spec caused discrepancies between Λ_approx,i and Λ_full of up to 7.8%. With the exception of the energy distribution, the approximations used in φ_spec contributed to this discrepancy for all source models studied. To improve the accuracy of Λ_spec, the spatial and angular distributions of φ_full could be measured, with the measurements replacing the approximated distributions. The methodology used in this work could be used to determine the resolution that such measurements would require by computing the dose-rate constants from phase spaces modified to reflect φ_full binned at different spatial and angular resolutions.

  4. Performance metrics and variance partitioning reveal sources of uncertainty in species distribution models

    USGS Publications Warehouse

    Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romañach, Stephanie; Speroterra, Carolina

    2015-01-01

    Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.

  5. SU-C-BRC-04: Efficient Dose Calculation Algorithm for FFF IMRT with a Simplified Bivariate Gaussian Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, F; Park, J; Barraclough, B

    2016-06-15

    Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF)-IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and collimator exchange effect. The in-air fluence was firstly calculated by back-projecting the edges of beam defining devices onto the source plane and integrating the visible source distribution. The effect of the rounded MLC leaf end, tongue-and-groove and interleaf transmission was taken into account in the back-projection. The in-air fluence was then modified with a fourth degree polynomial modeling the cone-shaped dose distribution of FFF beams. Planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6MV and 10MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.

  6. Soundscapes

    DTIC Science & Technology

    Porter, Michael B.; Henderson, Laurel J.

    2015-09-30

    …hindcasts, nowcasts, and forecasts of the time-evolving soundscape. In terms of the types of sound sources, we will focus initially on commercial… Modeling of the soundscape due to noise involves running an acoustic model for a grid of source positions over latitude and longitude. Typically…

  7. Modeling of Optical Waveguide Poling and Thermally Stimulated Discharge (TSD) Charge and Current Densities for Guest/Host Electro Optic Polymers

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Ashley, Paul R.; Abushagur, Mustafa

    2004-01-01

    A charge density and current density model of a waveguide system has been developed to explore the effects of electric field electrode poling. An optical waveguide may be modeled during poling by considering the dielectric charge distribution, polarization charge distribution, and conduction charge generated by the poling field. These charge distributions are the source of poling current densities. The model shows that boundary charge current density and polarization current density are the major sources of currents measured during poling and thermally stimulated discharge. These charge distributions provide insight into the poling mechanisms and are directly related to E_A and α_r. Initial comparisons with experimental data show excellent correlation to the model results.

  8. SU-E-T-667: Radiosensitization Due to Gold Nanoparticles: A Monte Carlo Cellular Dosimetry Investigation of An Expansive Parameter Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinov, M; Thomson, R

    2015-06-15

    Purpose: To investigate dose enhancement to cellular compartments following gold nanoparticle (GNP) uptake in tissue, varying cell and tissue morphology, intra and extracellular GNP distribution, and source energy using Monte Carlo (MC) simulations. Methods: Models of single and multiple cells are developed for normal and cancerous tissues; cells (outer radii 5–10 µm) are modeled as concentric spheres comprising the nucleus (radii 2.5–7.5 µm) and cytoplasm. GNP distributions modeled include homogeneous distributions throughout the cytoplasm, variable numbers of GNP-containing endosomes within the cytoplasm, or distributed in a spherical shell about the nucleus. Gold concentrations range from 1 to 30 mg/g. Dose to nucleus and to cytoplasm for simulations including GNPs are compared to simulations without GNPs to compute Nuclear and Cytoplasm Dose Enhancement Factors (NDEF, CDEF). Photon source energies are between 20 keV and 1.25 MeV. Results: DEFs are highly sensitive to GNP intracellular distribution; for a 2.5 µm radius nucleus irradiated by a 30 keV source, NDEF varies from 1.2 for a single endosome containing all GNPs to 8.2 for GNPs distributed about the nucleus (7 mg/g). DEFs vary with cell dimensions and source energy: NDEFs vary from 2.5 (90 keV) to 8.2 (30 keV) for a 2.5 µm radius nucleus and from 1.1 (90 keV) to 1.3 (30 keV) for a 7.5 µm radius nucleus, both with GNPs in a spherical shell about the nucleus (7 mg/g). NDEF and CDEF are generally different within a single cell. For multicell models, the presence of gold within intervening tissues between source and target perturbs the fluence reaching cellular targets, resulting in DEF inhomogeneities within a population of irradiated cells. Conclusion: DEFs vary by an order of magnitude for different cell models, GNP distributions, and source energies, demonstrating the importance of detailed modelling for advancing GNP development for radiotherapy. Funding provided by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canada Research Chairs Program (CRC).

  9. 3-D time-domain induced polarization tomography: a new approach based on a source current density formulation

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Revil, A.

    2018-04-01

    Induced polarization (IP) of porous rocks can be associated with a secondary source current density, which is proportional to both the intrinsic chargeability and the primary (applied) current density. This gives the possibility of reformulating the time domain induced polarization (TDIP) problem as a time-dependent self-potential-type problem. This new approach implies a change of strategy regarding data acquisition and inversion, allowing major time savings for both. For inverting TDIP data, we first retrieve the electrical resistivity distribution. Then, we use this electrical resistivity distribution to reconstruct the primary current density during the injection/retrieval of the (primary) current between the current electrodes A and B. The time-lapse secondary source current density distribution is determined given the primary source current density and a distribution of chargeability (forward modelling step). The inverse problem is linear between the secondary voltages (measured at all the electrodes) and the computed secondary source current density. A kernel matrix relating the secondary observed voltage data to the source current density model is computed once (using the electrical conductivity distribution), and then used throughout the inversion process. This recovered source current density model is in turn used to estimate the time-dependent chargeability (normalized voltages) in each cell of the domain of interest. Assuming a Cole-Cole model for simplicity, we can reconstruct the 3-D distributions of the relaxation time τ and the Cole-Cole exponent c by fitting the intrinsic chargeability decay curve to a Cole-Cole relaxation model for each cell. Two simple cases are studied in detail to explain this new approach. In the first case, we estimate the Cole-Cole parameters as well as the source current density field from a synthetic TDIP data set. Our approach successfully reveals the presence of the anomaly and inverts its Cole-Cole parameters. In the second case, we perform a laboratory sandbox experiment in which we mix a volume of burning coal and sand. The algorithm is able to localize the burning coal both in terms of electrical conductivity and chargeability.
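
    The per-cell Cole-Cole fit can be sketched as a small nonlinear least-squares problem. For algebraic simplicity the sketch below fits the frequency-domain Pelton form of the Cole-Cole model, which shares the parameters (m, τ, c) with the time-domain decay the study actually fits; all data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def pelton(f, rho0, m, tau, c):
        """Pelton form of the Cole-Cole complex resistivity."""
        iwt = (2j * np.pi * f * tau) ** c
        return rho0 * (1 - m * (1 - 1 / (1 + iwt)))

    freqs = np.logspace(-2, 3, 30)                       # [Hz]
    rng = np.random.default_rng(3)
    truth = pelton(freqs, 100.0, 0.15, 0.6, 0.5)
    noise = rng.normal(0, 0.05, freqs.size) + 1j * rng.normal(0, 0.05, freqs.size)
    data = truth + noise                                 # synthetic spectrum

    def residuals(p):
        model = pelton(freqs, *p)
        return np.r_[model.real - data.real, model.imag - data.imag]

    fit = least_squares(residuals, x0=[80.0, 0.1, 1.0, 0.6],
                        bounds=([1, 0, 1e-3, 0.1], [1e4, 1, 1e2, 1.0]))
    print("rho0, m, tau, c =", np.round(fit.x, 3))
    ```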

  10. New (125)I brachytherapy source IsoSeed I25.S17plus: Monte Carlo dosimetry simulation and comparison to sources of similar design.

    PubMed

    Pantelis, Evaggelos; Papagiannis, Panagiotis; Anagnostopoulos, Giorgos; Baltas, Dimos

    2013-12-01

    To determine the relative dose rate distribution around the new (125)I brachytherapy source IsoSeed I25.S17plus and report results in a form suitable for clinical use. Results for the new source are also compared to corresponding results for other commercially available (125)I sources of similar design. Monte Carlo simulations were performed using the MCNP5 v.1.6 general purpose code. The model of the new source was prepared from information provided by the manufacturer and verified by imaging a sample of ten non-radioactive sources. Corresponding simulations were also performed for the 6711 (125)I brachytherapy source, using updated geometric information presented recently in the literature. The uncertainty of the dose distribution around the new source, as well as the dosimetric quantities derived from it according to the Task Group 43 formalism, were determined from the standard error of the mean of simulations for a sample of fifty source models. These source models were prepared by randomly selecting values of geometric parameters from uniform distributions defined by manufacturer stated tolerances. Results are presented in the form of the quantities defined in the update of the Task Group 43 report, as well as a relative dose rate table in Cartesian coordinates. The dose rate distribution of the new source is comparable to that of sources of similar design (IsoSeed I25.S17, Oncoseed 6711, SelectSeed 130.002, Advantage IAI-125A, I-Seed AgX100, Thinseed 9011). Noticeable differences were observed only for the IsoSeed I25.S06 and Best 2301 sources.

  11. SU-E-T-284: Revisiting Reference Dosimetry for the Model S700 Axxent 50 kVp Electronic Brachytherapy Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiatt, JR; Rivard, MJ

    2014-06-01

    Purpose: The model S700 Axxent electronic brachytherapy source by Xoft was characterized in 2006 by Rivard et al. The source design was modified in 2006 to include a plastic centering insert at the source tip to more accurately position the anode. The objectives of the current study were to establish an accurate Monte Carlo source model for simulation purposes, to dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and to determine dose differences between the source with and without the centering insert. Methods: Design information from dissected sources and vendor-supplied CAD drawings were used to devise the source model for radiation transport simulations of dose distributions in a water phantom. Collision kerma was estimated as a function of radial distance, r, and polar angle, θ, for determination of reference TG-43 dosimetry parameters. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.03% at r=1 cm and 0.08% at r=10 cm. Results: The dose rate distribution in the transverse plane did not change beyond 2% between the 2006 model and the current study. While differences exceeding 15% were observed near the source distal tip, these diminished to within 2% for r>1.5 cm. Differences exceeding a factor of two were observed near θ=150° and in contact with the source, but diminished to within 20% at r=10 cm. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% over a third of the available solid angle external to the source. For clinical applications using balloons or applicators with tissue located within 5 cm from the source, dose differences exceeding 2% were observed only for θ>110°. This study carefully examined the current source geometry and presents a modern reference TG-43 dosimetry dataset for the model S700 source.

  12. Towards a distributed information architecture for avionics data

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris; Freeborn, Dana; Crichton, Dan

    2003-01-01

    Avionics data at the National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL) consists of distributed, unmanaged, and heterogeneous information that is hard for flight system design engineers to find and use on new NASA/JPL missions. The development of a systematic approach for capturing, accessing and sharing avionics data critical to the support of NASA/JPL missions and projects is required. We propose a general information architecture for managing the existing distributed avionics data sources and a method for querying and retrieving avionics data using the Object Oriented Data Technology (OODT) framework. OODT uses an XML messaging infrastructure that profiles data products and their locations using the ISO-11179 data model for describing data products. Queries against a common data dictionary (which implements the ISO model) are translated to domain-dependent source data models, and distributed data products are returned asynchronously through the OODT middleware. Further work will include the ability to 'plug and play' new manufacturer data sources, which are distributed at avionics component manufacturer locations throughout the United States.

  13. Characterization and Remediation of Contaminated Sites:Modeling, Measurement and Assessment

    NASA Astrophysics Data System (ADS)

    Basu, N. B.; Rao, P. C.; Poyer, I. C.; Christ, J. A.; Zhang, C. Y.; Jawitz, J. W.; Werth, C. J.; Annable, M. D.; Hatfield, K.

    2008-05-01

    The complexity of natural systems makes it impossible to estimate parameters at the required level of spatial and temporal detail. Thus, it becomes necessary to transition from spatially distributed parameters to spatially integrated parameters that are capable of adequately capturing the system dynamics, without always accounting for local process behavior. Contaminant flux across the source control plane is proposed as an integrated metric that captures source behavior and links it to plume dynamics. Contaminant fluxes were measured using an innovative technology, the passive flux meter, at field sites contaminated with dense non-aqueous phase liquids (DNAPLs) in the US and Australia. Flux distributions were observed to be positively or negatively correlated with the conductivity distribution, depending on the source characteristics of the site. The impact of partial source depletion on the mean contaminant flux and flux architecture was investigated in three-dimensional complex heterogeneous settings using the multiphase transport code UTCHEM and the reactive transport code ISCO3D. Source mass depletion reduced the mean contaminant flux approximately linearly, while the contaminant flux standard deviation decreased proportionally with the mean (i.e., the coefficient of variation of the flux distribution is constant with time). Similar analysis was performed using data from field sites, and the results confirmed the numerical simulations. The linearity of the mass depletion-flux reduction relationship indicates the ability to design remediation systems that deplete mass to achieve a target reduction in source strength. Stability of the flux distribution indicates the ability to characterize the distributions in time once the initial distribution is known. Lagrangian techniques were used to predict contaminant flux behavior during source depletion in terms of the statistics of the hydrodynamic and DNAPL distribution. The advantage of the Lagrangian techniques lies in their small computation time and their inclusion of spatially integrated parameters that can be measured in the field using tracer tests. Analytical models that couple source depletion to plume transport were used for optimization of source and plume treatment. These models are being used for the development of decision and management tools (for DNAPL sites) that consider uncertainty assessments as an integral part of the decision-making process for contaminated site remediation.
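
    The approximately linear flux response to mass depletion reported above is commonly summarized in the DNAPL literature by a power-law source strength function (shown here in its generic form, not as this study's specific parametrization); the reported behavior corresponds to Γ ≈ 1:

```latex
\frac{J(t)}{J_0} = \left(\frac{M(t)}{M_0}\right)^{\Gamma}
```

    where J is the mean contaminant flux across the source control plane and M the remaining source mass, so a remediation target on J maps directly onto a required mass depletion.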

  14. The distribution of Enceladus water-group neutrals in Saturn’s Magnetosphere

    NASA Astrophysics Data System (ADS)

    Smith, Howard T.; Richardson, John D.

    2017-10-01

    Saturn’s magnetosphere is unique in that the plumes from the small icy moon, Enceladus, serve as the primary source for heavy particles in Saturn’s magnetosphere. The resulting co-orbiting neutral particles interact with ions, electrons, photons and other neutral particles to generate separate H2O, OH and O tori. Characterization of these toroidal distributions is essential for understanding Saturn magnetospheric sources, composition and dynamics. Unfortunately, limited direct observations of these features are available, so modeling is required. A significant modeling challenge involves ensuring that the plasma and neutral particle populations are not simply input conditions but can provide feedback to each other (i.e., are self-consistent). Jurac and Richardson (2005) executed such a self-consistent model; however, this research was performed prior to the return of Cassini data. In a similar fashion, we have coupled a 3-D neutral particle model (Smith et al. 2004, 2005, 2006, 2007, 2009, 2010) with a plasma transport model (Richardson 1998; Richardson & Jurac 2004) to develop a self-consistent model which is constrained by all available Cassini observations and current findings on Saturn’s magnetosphere and the Enceladus plume source, resulting in much more accurate neutral particle distributions. Here we present a new self-consistent model of the distribution of the Enceladus-generated neutral tori that is validated by all available observations. We also discuss the implications for source rate and variability.

  15. Radial Distribution of X-Ray Point Sources Near the Galactic Center

    NASA Astrophysics Data System (ADS)

    Hong, Jae Sub; van den Berg, Maureen; Grindlay, Jonathan E.; Laycock, Silas

    2009-11-01

    We present the log N-log S and spatial distributions of X-ray point sources in seven Galactic bulge (GB) fields within 4° from the Galactic center (GC). We compare the properties of 1159 X-ray point sources discovered in our deep (100 ks) Chandra observations of three low-extinction Window fields near the GC with the X-ray sources in the other GB fields centered around Sgr B2, Sgr C, the Arches Cluster, and Sgr A* using Chandra archival data. To reduce the systematic errors induced by the uncertain X-ray spectra of the sources coupled with field- and distance-dependent extinction, we classify the X-ray sources using quantile analysis and estimate their fluxes accordingly. The result indicates that the GB X-ray population is highly concentrated at the center, more heavily than the stellar distribution models predict. It extends out to more than 1.4° from the GC, and the projected density follows an empirical radial relation inversely proportional to the offset from the GC. We also compare the total X-ray and infrared surface brightness using the Chandra and Spitzer observations of the regions. The radial distribution of the total infrared surface brightness from the 3.6 μm band images appears to resemble the radial distribution of the X-ray point sources better than that predicted by the stellar distribution models. Assuming a simple power-law model for the X-ray spectra, the spectra appear intrinsically harder the closer the sources lie to the GC, but adding an iron emission line at 6.7 keV to the model allows the spectra of the GB X-ray sources to be largely consistent across the region. This implies that the majority of these GB X-ray sources may be of the same or similar type. Their X-ray luminosity and spectral properties support the idea that the most likely candidates are magnetic cataclysmic variables (CVs), primarily intermediate polars (IPs). Their observed number density is also consistent with the majority being IPs, provided the relative CV-to-star density in the GB is not smaller than the value in the local solar neighborhood.

  16. A three-dimensional point process model for the spatial distribution of disease occurrence in relation to an exposure source.

    PubMed

    Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten; Schüz, Joachim; Cardis, Elisabeth; Andersen, Per K

    2015-10-15

    We study methods for including the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial aggregation of a disease around a source of potential hazard in environmental epidemiology, where now the source is the preferred ear of each phone user. In this context, the spatial distribution is a distribution over a sample of patients rather than over multiple disease cases within one geographical area. We show how the distance relation between tumour and phone can be modelled nonparametrically and, with various parametric functions, how covariates can be included in the model and how to test for the effect of distance. To illustrate the models, we apply them to a subset of the data from the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use. Copyright © 2015 John Wiley & Sons, Ltd.
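
    For intuition (this is a generic form from the raised-incidence literature, not the study's actual parametrization), such models often write the intensity of cases at distance d from the putative source as a baseline modulated by a decaying excess-risk term:

```latex
\lambda(d) = \lambda_0 \left\{ 1 + \alpha \exp\!\left( -\frac{d^{2}}{\beta^{2}} \right) \right\}
```

    where α captures the excess risk at the source (here, the preferred ear) and β its spatial range; testing for an effect of distance then amounts to testing α = 0.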

  17. Towards Full-Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, Korbinian; Ermert, Laura; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas

    2017-04-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source distribution, and thereby to contribute to a better understanding of both Earth structure and noise generation. First, we develop an inversion strategy based on a 2D finite-difference code using adjoint techniques. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: i) the capability of different misfit functionals to image wave speed anomalies and source distribution and ii) possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus (http://salvus.io). It allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface and the corresponding sensitivity kernels for the distribution of noise sources and Earth structure. By studying the effect of noise sources on correlation functions in 3D, we validate the aforementioned inversion strategy and prepare the workflow necessary for the first application of full waveform ambient noise inversion to a global dataset, for which a model for the distribution of noise sources is already available.

  18. The innovative concept of three-dimensional hybrid receptor modeling

    NASA Astrophysics Data System (ADS)

    Stojić, A.; Stanišić Stojić, S.

    2017-09-01

    The aim of this study was to improve the current understanding of air pollution transport processes at regional and long-range scales. For this purpose, three-dimensional (3D) potential source contribution function and concentration weighted trajectory models, as well as a new hybrid receptor model, the concentration weighted boundary layer (CWBL), which uses a two-dimensional grid and the planetary boundary layer height as a frame of reference, are presented. The refined approach to hybrid receptor modeling has two advantages. First, it considers whether each trajectory endpoint meets an inclusion criterion based on planetary boundary layer height, which is expected to provide a more realistic representation of the spatial distribution of emission sources and pollutant transport pathways. Second, it includes pollutant time series preprocessing to make hybrid receptor models more applicable to suburban and urban locations. The 3D hybrid receptor models presented herein are designed to identify the altitude distribution of potential sources, whereas CWBL can be used for analyzing the vertical distribution of pollutant concentrations along the transport pathway.
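
    A minimal sketch of the underlying receptor calculation may help: below, a PSCF-style grid statistic is computed with a boundary-layer inclusion test in the spirit of CWBL. The grid resolution, threshold, and toy trajectories are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def pscf(endpoints, concentrations, threshold, nx=36, ny=18):
    """PSCF = m/n per grid cell; endpoints are per-trajectory lists of
    (lon, lat, altitude_m, boundary_layer_height_m)."""
    n = np.zeros((nx, ny))  # all endpoint counts per cell
    m = np.zeros((nx, ny))  # endpoints belonging to 'polluted' trajectories
    for pts, conc in zip(endpoints, concentrations):
        for lon, lat, alt, pblh in pts:
            if alt > pblh:          # CWBL-style criterion: keep only endpoints
                continue            # inside the planetary boundary layer
            i = int((lon + 180.0) / 360.0 * nx) % nx
            j = min(int((lat + 90.0) / 180.0 * ny), ny - 1)
            n[i, j] += 1
            if conc > threshold:
                m[i, j] += 1
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(n > 0, m / n, 0.0)

# Two toy trajectories: one 'clean', one above the concentration threshold.
traj1 = [(20.4, 44.8, 300.0, 800.0), (21.0, 45.2, 1500.0, 900.0)]
traj2 = [(19.9, 44.5, 200.0, 700.0)]
print(pscf([traj1, traj2], concentrations=[10.0, 95.0], threshold=50.0).max())
```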

  19. Integrating multiple data sources in species distribution modeling: A framework for data fusion

    USGS Publications Warehouse

    Pacifici, Krishna; Reich, Brian J.; Miller, David A.W.; Gardner, Beth; Stauffer, Glenn E.; Singh, Susheela; McKerrow, Alexa; Collazo, Jaime A.

    2017-01-01

    The last decade has seen a dramatic increase in the use of species distribution models (SDMs) to characterize patterns of species’ occurrence and abundance. Efforts to parameterize SDMs often create a tension between the quality and quantity of data available to fit models. Estimation methods that integrate both standardized and non-standardized data types offer a potential solution to the tradeoff between data quality and quantity. Recently, several authors have developed approaches for jointly modeling two sources of data (one of high quality and one of lesser quality). We extend their work by allowing for explicit spatial autocorrelation in occurrence and detection error using a Multivariate Conditional Autoregressive (MVCAR) model and develop three models that share information in a less direct manner, resulting in more robust performance when the auxiliary data are of lesser quality. We describe these three new approaches (“Shared,” “Correlation,” “Covariates”) for combining data sources and show their use in a case study of the Brown-headed Nuthatch in the Southeastern U.S. and through simulations. All three of the approaches that used the second data source improved out-of-sample predictions relative to a single data source (“Single”). When information in the second data source is of high quality, the Shared model performs the best, but the Correlation and Covariates models also perform well. When the second data source is of lesser quality, the Correlation and Covariates models performed better, suggesting they are robust alternatives when little is known about auxiliary data collected opportunistically or through citizen scientists. Methods that allow for both data types to be used will maximize the useful information available for estimating species distributions.

  20. Lunar Neutral Exosphere Properties from Pickup Ion Analysis

    NASA Technical Reports Server (NTRS)

    Hartle, R. E.; Sarantos, M.; Killen, R.; Sittler, E. C. Jr.; Halekas, J.; Yokota, S.; Saito, Y.

    2009-01-01

    Composition and structure of neutral constituents in the lunar exosphere can be determined through measurements of phase space distributions of pickup ions borne from the exosphere [1]. An essential point made in an early study [1] and inferred by recent pickup ion measurements [2, 3] is that much lower neutral exosphere densities can be derived from ion mass spectrometer measurements of pickup ions than can be determined by conventional neutral mass spectrometers or remote sensing instruments. One approach for deriving properties of neutral exospheric source gases is to first compare observed ion spectra with pickup ion model phase space distributions. Neutral exosphere properties are then inferred by adjusting exosphere model parameters to obtain the best fit between the resulting model pickup ion distributions and the observed ion spectra. Adopting this path, we obtain ion distributions from a new general pickup ion model, an extension of a simpler analytic description obtained from the Vlasov equation with an ion source [4]. In turn, the ion source is formed from a three-dimensional exospheric density distribution, which can range from the classical Chamberlain-type distribution to one with variable exobase temperatures and nonthermal constituents, as well as those empirically derived. The initial stage of this approach uses the Moon's known neutral He and Na exospheres to derive He+ and Na+ pickup ion exospheres, including their phase space distributions, densities and fluxes. The neutral exospheres used are those based on existing models and remote sensing studies. As mentioned, future ion measurements can be used to constrain the pickup ion model and subsequently improve the neutral exosphere descriptions. The pickup ion model is also used to estimate the exosphere sources of recently observed pickup ions on KAGUYA [3]. Future missions carrying ion spectrometers (e.g., ARTEMIS) will be able to study the lunar neutral exosphere with great sensitivity, yielding the ion velocity spectra needed to further the analysis of parent neutral exosphere properties.

  1. ALMA observations of lensed Herschel sources: testing the dark matter halo paradigm

    NASA Astrophysics Data System (ADS)

    Amvrosiadis, A.; Eales, S. A.; Negrello, M.; Marchetti, L.; Smith, M. W. L.; Bourne, N.; Clements, D. L.; De Zotti, G.; Dunne, L.; Dye, S.; Furlanetto, C.; Ivison, R. J.; Maddox, S. J.; Valiante, E.; Baes, M.; Baker, A. J.; Cooray, A.; Crawford, S. M.; Frayer, D.; Harris, A.; Michałowski, M. J.; Nayyeri, H.; Oliver, S.; Riechers, D. A.; Serjeant, S.; Vaccari, M.

    2018-04-01

    With the advent of wide-area submillimetre surveys, a large number of high-redshift gravitationally lensed dusty star-forming galaxies have been revealed. Because of the simplicity of the selection criteria for candidate lensed sources in such surveys, identified as those with S500 μm > 100 mJy, uncertainties associated with the modelling of the selection function are expunged. The combination of these attributes makes submillimetre surveys ideal for the study of strong lens statistics. We carried out a pilot study of the lensing statistics of submillimetre-selected sources by making observations with the Atacama Large Millimeter Array (ALMA) of a sample of strongly lensed sources selected from surveys carried out with the Herschel Space Observatory. We attempted to reproduce the distribution of image separations for the lensed sources using a halo mass function taken from a numerical simulation that contains both dark matter and baryons. We used three different density distributions, one based on analytical fits to the haloes formed in the EAGLE simulation and two density distributions [Singular Isothermal Sphere (SIS) and SISSA] that have been used before in lensing studies. We found that we could reproduce the observed distribution with all three density distributions, as long as we imposed an upper mass transition of ˜1013 M⊙ for the SIS and SISSA models, above which we assumed that the density distribution could be represented by a Navarro-Frenk-White profile. We show that we would need a sample of ˜500 lensed sources to distinguish between the density distributions, which is practical given the predicted number of lensed sources in the Herschel surveys.

  2. Azimuthal Dependence of the Ground Motion Variability from Scenario Modeling of the 2014 Mw6.0 South Napa, California, Earthquake Using an Advanced Kinematic Source Model

    NASA Astrophysics Data System (ADS)

    Gallovič, F.

    2017-09-01

    Strong ground motion simulations require a physically plausible earthquake source model. Here, I present the application of such a kinematic model introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay in the full frequency range. The source is composed of randomly distributed overlapping subsources with a fractal number-size distribution. The position of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions), or can be completely random. From an earthquake physics point of view, the model includes a positive correlation between slip and rise time, as found in dynamic source simulations. Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip rate functions, not requiring any stochastic Green's functions. The source model has been previously validated against the observed data from the very shallow unilateral 2014 Mw6 South Napa, California, earthquake; the model reproduces well the observed data including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is shown here on scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability. I propose a simple model reproducing the azimuthal variations of the between-event ground motion variability, providing insight into possible refinement of GMPEs' functional forms.
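
    The fractal number-size ingredient of such composite source models is easy to reproduce. The sketch below draws subsource radii from a truncated power law N(>R) ~ R^-D by inverse-transform sampling; the fractal dimension, radius bounds, and fault dimensions are illustrative choices, not the values of Ruiz et al.

```python
import numpy as np

rng = np.random.default_rng(1)
D, r_min, r_max = 2.0, 0.2, 5.0      # fractal dimension and radius bounds (km)
L_f, W_f = 30.0, 15.0                # fault length and width (km)
n_sub = 200

def sample_radii(n):
    """Inverse-transform sampling of the truncated power law p(R) ~ R^-(D+1),
    which yields the fractal number-size relation N(>R) ~ R^-D."""
    u = rng.random(n)
    return (r_min**-D - u * (r_min**-D - r_max**-D)) ** (-1.0 / D)

radii = sample_radii(n_sub)
xs = rng.uniform(0.0, L_f, n_sub)    # random subsource centers on the fault
ys = rng.uniform(0.0, W_f, n_sub)
print(f"{n_sub} subsources; largest R = {radii.max():.2f} km "
      f"at (x, y) = ({xs[radii.argmax()]:.1f}, {ys[radii.argmax()]:.1f}) km")
```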

  3. Incorporating uncertainty in predictive species distribution modelling.

    PubMed

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which are often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and assessing the significance of model covariates.

  4. A Bayesian approach to earthquake source studies

    NASA Astrophysics Data System (ADS)

    Minson, Sarah

    Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models sampled proportionally to how much each model is consistent with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems. Until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm called CATMIP for Cascading Adaptive Tempered Metropolis In Parallel. This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions, if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well-constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved. The rupture almost certainly propagated at sub-shear velocities. The full posterior distribution can be used not only to calculate source parameters but also to determine their uncertainties. So while kinematic source modeling and the estimation of source parameters are not new, with CATMIP I am able to use Bayesian sampling to determine which parts of the source process are well-constrained and which are not.
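
    To make the transitional (tempered) idea concrete, here is a didactic toy in one dimension, not CATMIP itself: the likelihood exponent β is raised from 0 to 1 in stages, with importance reweighting, resampling, and Metropolis rejuvenation at each stage. The schedule, proposal scale, and bimodal target are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_like(x):
    # Toy bimodal likelihood: mixture of two unit-variance Gaussians.
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

n = 2000
x = rng.uniform(-10.0, 10.0, n)                  # draws from the flat prior
betas = np.linspace(0.0, 1.0, 11)                # tempering schedule 0 -> 1
for b0, b1 in zip(betas[:-1], betas[1:]):
    w = np.exp((b1 - b0) * log_like(x))          # incremental importance weights
    x = rng.choice(x, size=n, p=w / w.sum())     # resample proportional to w
    for _ in range(10):                          # rejuvenate with Metropolis moves
        prop = x + rng.normal(0.0, 0.5, n)
        inside = np.abs(prop) <= 10.0            # respect the prior bounds
        logr = np.where(inside, b1 * (log_like(prop) - log_like(x)), -np.inf)
        x = np.where(np.log(rng.random(n)) < logr, prop, x)
print(f"tempered posterior: mean {x.mean():+.2f} (~0 expected), std {x.std():.2f}")
```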

  5. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    NASA Astrophysics Data System (ADS)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. We then adopt this model to carry out simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. And when taking source errors and statistical fluctuations into account, the performance of decoy-state QKD using HSPS suffers less than that of decoy-state QKD using WCS.
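
    For context, the objective of such full parameter optimizations is typically a GLLP-style lower bound on the secure key rate; a commonly used form is shown below (the paper's exact objective may differ):

```latex
R \;\geq\; q \left\{ -Q_\mu\, f(E_\mu)\, H_2(E_\mu) + Q_1 \left[ 1 - H_2(e_1) \right] \right\}
```

    where Q_μ and E_μ are the gain and error rate of the signal states, Q_1 and e_1 the single-photon contributions estimated from the decoy statistics, f the error-correction inefficiency, H_2 the binary entropy function, and q the sifting factor, which a biased basis choice pushes toward 1.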

  6. Computer Modeling of High-Intensity Cs-Sputter Ion Sources

    NASA Astrophysics Data System (ADS)

    Brown, T. A.; Roberts, M. L.; Southon, J. R.

    The grid-point mesh program NEDLab has been used to model the interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS), with the goal of improving negative ion output. NEDLab has several features that are important to realistic modeling of such sources. First, space-charge effects are incorporated in the calculations through an automated successive-iteration process over ion trajectories and Poisson electric fields. Second, space charge distributions can be averaged over successive iterations to suppress model instabilities. Third, space charge constraints on ion emission from surfaces can be incorporated under Child's-law-based algorithms. Fourth, the energy of ions emitted from a surface can be randomly chosen from within a thermal energy distribution. And finally, ions can be emitted from a surface at randomized angles. The results of our modeling effort indicate that significant modification of the interior geometry of the source will double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source.

  7. Pulling it all together: the self-consistent distribution of neutral tori in Saturn's Magnetosphere based on all Cassini observations

    NASA Astrophysics Data System (ADS)

    Smith, H. T.; Richardson, J. D.

    2017-12-01

    Saturn's magnetosphere is unique in that the plumes from the small icy moon, Enceladus, serve as the primary source for heavy particles in Saturn's magnetosphere. The resulting co-orbiting neutral particles interact with ions, electrons, photons and other neutral particles to generate separate H2O, OH and O tori. Characterization of these toroidal distributions is essential for understanding Saturn magnetospheric sources, composition and dynamics. Unfortunately, limited direct observations of these features are available, so modeling is required. A significant modeling challenge involves ensuring that the plasma and neutral particle populations are not simply input conditions but can provide feedback to each other (i.e., are self-consistent). Jurac and Richardson (2005) executed such a self-consistent model; however, this research was performed prior to the return of Cassini data. In a similar fashion, we have coupled a 3-D neutral particle model (Smith et al. 2004, 2005, 2006, 2007, 2009, 2010) with a plasma transport model (Richardson 1998; Richardson & Jurac 2004) to develop a self-consistent model which is constrained by all available Cassini observations and current findings on Saturn's magnetosphere and the Enceladus plume source, resulting in much more accurate neutral particle distributions. We present a new self-consistent model of the distribution of the Enceladus-generated neutral tori that is validated by all available observations. We also discuss the implications for source rate and variability.

  8. A general circulation model study of atmospheric carbon monoxide

    NASA Technical Reports Server (NTRS)

    Pinto, J. P.; Rind, D.; Russell, G. L.; Lerner, J. A.; Hansen, J. E.; Yung, Y. L.; Hameed, S.

    1983-01-01

    The carbon monoxide cycle is studied by incorporating the known and hypothetical sources and sinks in a tracer model that uses the winds generated by a general circulation model. Photochemical production and loss terms, which depend on OH radical concentrations, are calculated in an interactive fashion. The computed global distribution and seasonal variations of CO are compared with observations to obtain constraints on the distribution and magnitude of the sources and sinks of CO, and on the tropospheric abundance of OH. The simplest model that accounts for available observations requires a low-latitude plant source of about 1.3 × 10^15 g/yr, in addition to sources from incomplete combustion of fossil fuels and oxidation of methane. The globally averaged OH concentration calculated in the model is 7.5 × 10^5 cm^-3. Models that calculate globally averaged OH concentrations much lower than this nominal value are not consistent with the observed variability of CO. Such models are also inconsistent with measurements of CO isotopic abundances, which imply the existence of plant sources.

  9. Developing a Near Real-time System for Earthquake Slip Distribution Inversion

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen

    2016-04-01

    Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiations from moderate and large earthquakes often exhibit a strong finite-source directivity effect, which is critically important for accurate ground motion estimations and earthquake damage assessments. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for the purpose of solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the identified fault planes in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculations of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fitting with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determinations of finite-source solutions for seismic hazard mitigation purposes.
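
    The efficiency of the SGT approach rests on source-receiver reciprocity: the strain Green tensor is computed once per station and stored, after which the synthetic for any trial point source reduces to a lookup plus convolution rather than a new wavefield simulation. Schematically, for a point moment tensor M_pq at location ξ (standard representation-theorem form, not this paper's notation):

```latex
u_n(\mathbf{x}_r, t) \;=\; M_{pq}(\boldsymbol{\xi}) \,*\,
  \frac{\partial G_{np}(\mathbf{x}_r, \boldsymbol{\xi};\, t)}{\partial \xi_q}
```

    so the stored spatial derivatives of the Green functions, i.e. the SGT, are the only wavefield quantities needed during the inversion itself.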

  10. 3-D Modeling of Irregular Volcanic Sources Using Sparsity-Promoting Inversions of Geodetic Data and Boundary Element Method

    NASA Astrophysics Data System (ADS)

    Zhai, Guang; Shirzaei, Manoochehr

    2017-12-01

    Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, the results of which show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
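
    A minimal sketch of a hybrid L1/L2-regularized linear inversion of the kind described above is given below, solved with proximal gradient descent (ISTA). The dimensions, weights, and toy Green's function matrix are illustrative assumptions; the study's actual regularization scheme and the boundary element coupling are not shown.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_inversion(G, d, lam1=0.1, lam2=0.01, n_iter=1000):
    """min_m 0.5*||G m - d||^2 + lam1*||m||_1 + 0.5*lam2*||m||^2 via ISTA."""
    step = 1.0 / (np.linalg.norm(G, 2) ** 2 + lam2)   # Lipschitz-safe step size
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        grad = G.T @ (G @ m - d) + lam2 * m           # gradient of smooth part
        m = soft_threshold(m - step * grad, step * lam1)
    return m

rng = np.random.default_rng(3)
G = rng.normal(size=(60, 100))           # toy surface-deformation kernels
m_true = np.zeros(100)
m_true[[10, 40, 41]] = [1.0, -0.5, 0.8]  # sparse, well-localized volume change
d = G @ m_true + 0.01 * rng.normal(size=60)
m_hat = sparse_inversion(G, d)
print("nonzero patches recovered:", np.flatnonzero(np.abs(m_hat) > 0.1))
```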

  11. 2dFLenS and KiDS: determining source redshift distributions with cross-correlations

    NASA Astrophysics Data System (ADS)

    Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian

    2017-03-01

    We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.
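
    Schematically (not the estimator's exact form), such cross-correlation methods exploit the fact that the angular cross-correlation between the photometric sample and a narrow spectroscopic bin at redshift z_i scales with the photometric redshift distribution evaluated there:

```latex
w_{ps}(\theta, z_i) \;\propto\; \varphi_p(z_i)\, b_p(z_i)\, b_s(z_i)\, w_{DM}(\theta, z_i)
```

    which makes the bias degeneracy noted above explicit: φ_p and b_p enter only as a product, so the source galaxy bias must be constrained via additional assumptions.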

  12. The influence of boreal biomass burning emissions on the distribution of tropospheric ozone over North America and the North Atlantic during 2010

    NASA Astrophysics Data System (ADS)

    Parrington, M.; Palmer, P. I.; Henze, D. K.; Tarasick, D. W.; Hyer, E. J.; Owen, R. C.; Helmig, D.; Clerbaux, C.; Bowman, K. W.; Deeter, M. N.; Barratt, E. M.; Coheur, P.-F.; Hurtmans, D.; Jiang, Z.; George, M.; Worden, J. R.

    2012-02-01

    We have analysed the sensitivity of the tropospheric ozone distribution over North America and the North Atlantic to boreal biomass burning emissions during the summer of 2010 using the GEOS-Chem 3-D global tropospheric chemical transport model and observations from in situ and satellite instruments. We show that the model ozone distribution is consistent with observations from the Pico Mountain Observatory in the Azores, ozonesondes across Canada, and the Tropospheric Emission Spectrometer (TES) and Infrared Atmospheric Sounding Instrument (IASI) satellite instruments. Mean biases between the model and observed ozone mixing ratio in the free troposphere were less than 10 ppbv. We used the adjoint of GEOS-Chem to show the model ozone distribution in the free troposphere over Maritime Canada is largely sensitive to NOx emissions from biomass burning sources in Central Canada, lightning sources in the central US, and anthropogenic sources in the eastern US and south-eastern Canada. We also used the adjoint of GEOS-Chem to evaluate the Fire Locating And Monitoring of Burning Emissions (FLAMBE) inventory through assimilation of CO observations from the Measurements Of Pollution In The Troposphere (MOPITT) satellite instrument. The CO inversion showed that, on average, the FLAMBE emissions needed to be reduced to 89% of their original values, with scaling factors ranging from 12% to 102%, to fit the MOPITT observations in the boreal regions. Applying the CO scaling factors to all species emitted from boreal biomass burning sources led to a decrease of the model tropospheric distributions of CO, PAN, and NOx by as much as -20 ppbv, -50 pptv, and -20 pptv respectively. The modification of the biomass burning emission estimates reduced the model ozone distribution by approximately -3 ppbv (-8%) and on average improved the agreement of the model ozone distribution compared to the observations throughout the free troposphere, reducing the mean model bias from 5.5 to 4.0 ppbv for the Pico Mountain Observatory, 3.0 to 0.9 ppbv for ozonesondes, 2.0 to 0.9 ppbv for TES, and 2.8 to 1.4 ppbv for IASI.

  13. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

    Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general we expect to obtain a higher quality source image by improving the observational input data (e.g. using more higher quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters: slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Monelli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) was used for a single-window slip velocity function. We search for local rupture velocity directly in the inversion, and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one we searched directly. By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.
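
    The cross-correlation part of such a prior can be imposed with a Cholesky factor, as sketched below. The correlation matrix and the marginal distributions are illustrative assumptions, not the values of Song et al.; a full pseudo-dynamic prior would additionally need a spatial auto-correlation model along the fault, omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)
n_patches = 500

# Hypothetical target cross-correlation between slip, rupture velocity,
# and peak slip velocity (in standard-normal space).
C = np.array([[1.0, 0.5, 0.6],
              [0.5, 1.0, 0.4],
              [0.6, 0.4, 1.0]])
L = np.linalg.cholesky(C)

z = rng.normal(size=(3, n_patches))          # independent N(0,1) fields
slip_z, vr_z, vpeak_z = L @ z                # correlated N(0,1) fields

# Map to physical units (illustrative marginals: lognormal slip, Gaussian Vr).
slip = np.exp(0.0 + 0.7 * slip_z)            # m
vr = 2.7 + 0.2 * vr_z                        # km/s
vpeak = np.exp(0.5 + 0.5 * vpeak_z)          # m/s
print(f"achieved corr(slip_z, vr_z) = {np.corrcoef(slip_z, vr_z)[0, 1]:+.2f}")
```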

  14. A loosely coupled framework for terminology controlled distributed EHR search for patient cohort identification in clinical research.

    PubMed

    Zhao, Lei; Lim Choi Keung, Sarah N; Taweel, Adel; Tyler, Edward; Ogunsina, Ire; Rossiter, James; Delaney, Brendan C; Peterson, Kevin A; Hobbs, F D Richard; Arvanitis, Theodoros N

    2012-01-01

    Heterogeneous data models and coding schemes for electronic health records present challenges for automated search across distributed data sources. This paper describes a loosely coupled software framework based on the terminology-controlled approach to enable interoperation between the search interface and heterogeneous data sources. Software components interoperate via a common terminology service and an abstract criteria model, so as to promote component reuse and incremental system evolution.

  15. Calculated occultation profiles of Io and the hot spots

    NASA Technical Reports Server (NTRS)

    Mcewen, A. S.; Soderblom, L. A.; Matson, D. L.; Johnson, T. V.; Lunine, J. I.

    1986-01-01

    Occultations of Io by other Galilean satellites in 1985 provide a means to locate volcanic hot spots and to model their temperatures. The expected time variations in the integral reflected and emitted radiation of the occultations are computed as a function of wavelength (visual to 8.7 microns). The best current ephemerides were used to calculate the geometry of each event as viewed from earth. Visual reflectances were modeled from global mosaics of Io. Thermal emission from the hot spots was calculated from Voyager 1 IRIS observations and, for regions unobserved by IRIS, from a model based on the distribution of low-albedo features. The occultations may help determine (1) the location and temperature distribution of Loki; (2) the source(s) of excess emission in the region from longitude 50° to 200°; and (3) the distribution of small, high-temperature sources.

  16. Volcano deformation source parameters estimated from InSAR: Sensitivities to uncertainties in seismic tomography

    USGS Publications Warehouse

    Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matt; Thurber, Clifford H.; Tung, Sui

    2016-01-01

    The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.

  17. The Distribution of Carbon Monoxide in the GOCART Model

    NASA Technical Reports Server (NTRS)

    Fan, Xiaobiao; Chin, Mian; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Carbon monoxide (CO) is an important trace gas because it is a significant source of tropospheric ozone (O3) as well as a major sink for the atmospheric hydroxyl radical (OH). The distribution of CO is set by a balance between the emissions, transport, and chemical processes in the atmosphere. The Georgia Tech/Goddard Global Ozone Chemistry Aerosol Radiation and Transport (GOCART) model is used to simulate the atmospheric distribution of CO. The GOCART model is driven by the assimilated meteorological data from the Goddard Earth Observing System Data Assimilation System (GEOS DAS) in an off-line mode. We study the distribution of CO on three time scales: (1) day-to-day fluctuation produced by the synoptic waves; (2) seasonal changes due to the annual cycle of CO sources and sinks; and (3) interannual variability induced by dynamics. Comparison of model results with ground-based and remote sensing measurements will also be presented.

  18. Over-Distribution in Source Memory

    PubMed Central

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  19. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  20. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  1. A method for establishing constraints on galactic magnetic field models using ultra high energy cosmic rays and results from the data of the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Sutherland, Michael Stephen

    2010-12-01

    The Galactic magnetic field is poorly understood. Essentially the only reliable measurements of its properties are the local orientation and field strength. Its behavior at galactic scales is unknown. Historically, magnetic field measurements have been performed using radio astronomy techniques which are sensitive to certain regions of the Galaxy and rely upon models of the distribution of gas and dust within the disk. However, the deflection of trajectories of ultra high energy cosmic rays arriving from extragalactic sources depends only on the properties of the magnetic field. In this work, a method is developed for determining acceptable global models of the Galactic magnetic field by backtracking cosmic rays through the field model. This method constrains the parameter space of magnetic field models by comparing a test statistic between backtracked cosmic rays and isotropic expectations for assumed cosmic ray source and composition hypotheses. Constraints on Galactic magnetic field models are established using data from the southern site of the Pierre Auger Observatory under various source distribution and cosmic ray composition hypotheses. Field models possessing structure similar to the stellar spiral arms are found to be inconsistent with hypotheses of an iron cosmic ray composition and sources selected from catalogs tracing the local matter distribution in the universe. These field models are consistent with hypothesis combinations of proton composition and sources tracing the local matter distribution. In particular, strong constraints are found on the parameter space of bisymmetric magnetic field models scanned under hypotheses of proton composition and sources selected from the 2MRS-VS, Swift 39-month, and VCV catalogs. Assuming that the Galactic magnetic field is well-described by a bisymmetric model under these hypotheses, the magnetic field strength near the Sun is less than 3-4 μG and the magnetic pitch angle is less than -8°. These results comprise the first measurements of the Galactic magnetic field using ultra-high energy cosmic rays and supplement existing radio astronomical measurements of the Galactic magnetic field.

  2. Testing the uniqueness of mass models using gravitational lensing

    NASA Astrophysics Data System (ADS)

    Walls, Levi; Williams, Liliya L. R.

    2018-06-01

    The positions of images produced by the gravitational lensing of background sources provide insight into lens-galaxy mass distributions. Simple elliptical mass density profiles do not agree well with observations of the population of known quads. It has been shown that the most promising way to reconcile this discrepancy is via perturbations away from purely elliptical mass profiles, by assuming two superimposed, somewhat misaligned mass distributions: one is dark matter (DM), the other is a stellar distribution. In this work, we investigate whether mass modelling of individual lenses can reveal if the lenses have this type of complex structure, or a simpler elliptical structure. In other words, we test mass model uniqueness, or how well an extended source lensed by a non-trivial mass distribution can be modelled by a simple elliptical mass profile. We used the publicly available lensing software, Lensmodel, to generate and numerically model gravitational lenses and “observed” image positions. We then compared “observed” and modelled image positions via the root mean square (RMS) of their difference. We report that, in most cases, the RMS is ≤ 0.05 arcsec when averaged over an extended source. Thus, we show it is possible to fit a smooth mass model to a system that contains a stellar component with varying levels of misalignment with a DM component, and hence mass modelling cannot differentiate between simple elliptical and more complex lenses.

  3. Advection-diffusion model for the simulation of air pollution distribution from a point source emission

    NASA Astrophysics Data System (ADS)

    Ulfah, S.; Awalludin, S. A.; Wahidin

    2018-01-01

    The advection-diffusion model is one of the mathematical models which can be used to understand the distribution of air pollutants in the atmosphere. This study uses a time-dependent 2D advection-diffusion model to simulate the air pollution distribution, in order to find out whether the pollutants are more concentrated at ground level or near the source of emission under particular atmospheric conditions such as stable, unstable, and neutral conditions. Wind profile, eddy diffusivity, and temperature are considered in the model as parameters. The model is solved by using an explicit finite difference method, which is then visualized by a computer program developed using the Lazarus programming software. The results show that atmospheric conditions alone do not conclusively determine the concentration level of pollutants, as each parameter in the model has its own effect under each atmospheric condition.
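
    An explicit finite-difference treatment of this kind of model is compact enough to sketch in full. The snippet below advances a time-dependent 2D advection-diffusion equation with a continuous point-source emission; the wind speed, eddy diffusivity, grid, and source strength are illustrative values, not the study's parameters, and a single constant diffusivity stands in for the stability-class-dependent profiles.

```python
import numpy as np

nx, ny = 100, 50                     # grid cells in x (downwind) and y
dx, dy, dt = 10.0, 10.0, 0.5         # grid spacing (m) and time step (s)
u, K, q = 2.0, 5.0, 1.0              # wind (m/s), diffusivity (m^2/s), emission rate

C = np.zeros((ny, nx))               # concentration field
src = (ny // 2, 10)                  # point-source cell

# Explicit-scheme stability: CFL for upwind advection, diffusion number limit.
assert u * dt / dx <= 1.0
assert K * dt * (1.0 / dx**2 + 1.0 / dy**2) <= 0.5

for _ in range(2000):
    Cn = C.copy()
    adv = -u * (Cn[1:-1, 1:-1] - Cn[1:-1, :-2]) / dx            # upwind, u > 0
    difx = K * (Cn[1:-1, 2:] - 2 * Cn[1:-1, 1:-1] + Cn[1:-1, :-2]) / dx**2
    dify = K * (Cn[2:, 1:-1] - 2 * Cn[1:-1, 1:-1] + Cn[:-2, 1:-1]) / dy**2
    C[1:-1, 1:-1] = Cn[1:-1, 1:-1] + dt * (adv + difx + dify)
    C[src] += q * dt / (dx * dy)                                # continuous emission

print("peak concentration %.3e at cell %s"
      % (C.max(), np.unravel_index(C.argmax(), C.shape)))
```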

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, J.A.; Brasseur, G.P.; Zimmerman, P.R.

    Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source function models (SF1 and SF2) of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). With the average hydroxyl radical concentration fixed, the methane source term was computed as ~623 Tg CH4, giving an atmospheric lifetime for methane of ~8.3 years. The second model identified source regions for methane from rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land use data. This methane source distribution resulted in an estimate of the global total methane source of ~611 Tg CH4, giving an atmospheric lifetime for methane of ~8.5 years. The most significant difference between the two models was in the predicted methane fluxes over China and South East Asia, the location of most of the world's rice paddies. Using a recent measurement of the reaction rate of hydroxyl radical with methane leads to estimates of the global total methane source of ~524 Tg CH4 for SF1, giving an atmospheric lifetime of ~10.0 years, and ~514 Tg CH4 for SF2, yielding a lifetime of ~10.2 years.

  5. The Distribution of Cosmic-Ray Sources in the Galaxy, Gamma-Rays and the Gradient in the CO-to-H2 Relation

    NASA Technical Reports Server (NTRS)

    Strong, A. W.; Moskalenko, I. V.; Reimer, O.; Diehl, S.; Diehl, R.

    2004-01-01

    We present a solution to the apparent discrepancy between the radial gradient in the diffuse Galactic gamma-ray emissivity and the distribution of supernova remnants, believed to be the sources of cosmic rays. Recent determinations of the pulsar distribution have made the discrepancy even more apparent. The problem is shown to be plausibly solved by a variation in the Wco-to-N(H2) scaling factor. If this factor increases by a factor of 5-10 from the inner to the outer Galaxy, as expected from the Galactic metallicity gradient and supported by other evidence, we show that the source distribution required to match the radial gradient of gamma-rays can be reconciled with the distribution of supernova remnants as traced by current studies of pulsars. The resulting model fits the EGRET gamma-ray profiles extremely well in longitude, and reproduces the mid-latitude inner Galaxy intensities better than previous models.

  6. Source localization of rhythmic ictal EEG activity: a study of diagnostic accuracy following STARD criteria.

    PubMed

    Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders

    2013-10-01

    Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling the inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and the voltage distribution of the ictal activity. A distributed source model, the local autoregressive average (LAURA), was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard: the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcomes of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. The reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92%, and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for concordant results than for discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA) on ictal EEG signals selected with a standardized method is feasible in clinical practice and has good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
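
    For readers unfamiliar with the reported measures, the sketch below computes sensitivity, specificity, predictive values, the likelihood ratio, and Cohen's kappa from a 2x2 concordance table; the counts are hypothetical, not the study's data.

        # rows: localization concordant/discordant; columns: reference positive/negative
        tp, fn, fp, tn = 14, 6, 4, 13          # hypothetical counts, not the study's data
        n = tp + fn + fp + tn
        sens = tp / (tp + fn)                  # sensitivity
        spec = tn / (tn + fp)                  # specificity
        ppv = tp / (tp + fp)                   # positive predictive value
        npv = tn / (tn + fn)                   # negative predictive value
        lr_pos = sens / (1.0 - spec)           # likelihood ratio of a concordant result
        po = (tp + tn) / n                     # observed agreement
        pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
        kappa = (po - pe) / (1.0 - pe)         # chance-corrected agreement
        print(sens, spec, ppv, npv, lr_pos, kappa)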

  7. Toxic metals in Venice lagoon sediments: Model, observation, and possible removal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basu, A.; Molinaroli, E.

    1994-11-01

    We have modeled the distribution of nine toxic metals in the surface sediments from 163 stations in the Venice lagoon using published data. Three entrances from the Adriatic Sea control the circulation in the lagoon and divide it into three basins. We assume, for purposes of modeling, that Porto Marghera at the head of the Industrial Zone area is the single source of toxic metals in the Venice lagoon. In a standing body of lagoon water, the concentration C of a pollutant at distance x from the source may be given by C = C{sub 0}e{sup -kx}, where C{sub 0} is the concentration at the source and k is the rate constant of dispersal. We calculated k empirically using concentrations at the source and those farthest from it, that is, at the end points of the lagoon. Average k values (ppm/km) in the lagoon are: Zn 0.165, Cd 0.116, Hg 0.110, Cu 0.105, Co 0.072, Pb 0.058, Ni 0.008, Cr (0.011), and Fe (0.018 percent/km), and they have complex distributions. Given the k values, the concentration at the source (C{sub 0}), and the distance x of any point in the lagoon from the source, we have calculated the model concentrations of the nine metals at each sampling station. Tides, currents, floor morphology, additional sources, and continued dumping perturb the model distributions, causing anomalies (observed minus model concentrations). Positive anomalies are found near the source, where continued dumping perturbs the initial boundary conditions, and in areas of sluggish circulation. Negative anomalies are found in areas with strong currents that may flush sediments out of the lagoon. We have thus identified areas in the lagoon where higher rates of sediment removal and exchange may lessen pollution. 41 refs., 4 figs., 3 tabs.
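
    A minimal sketch of the dispersal model and the anomaly calculation follows; the concentrations and distances are illustrative, and note that in this exponential form k strictly carries units of 1/km (the abstract quotes k in the units of the measured concentrations).

        import numpy as np

        C0, C_end, L = 400.0, 40.0, 14.0        # source conc., far-end conc., lagoon length (km)
        k = np.log(C0 / C_end) / L              # empirical rate constant of dispersal (1/km)
        x = np.array([1.0, 3.5, 7.0, 12.0])     # station distances from source (km), assumed
        observed = np.array([420.0, 180.0, 60.0, 55.0])   # illustrative measurements
        model = C0 * np.exp(-k * x)             # C = C0 * exp(-k x)
        anomaly = observed - model              # >0 near dumping, <0 where currents flush
        print(round(k, 3), model.round(1), anomaly.round(1))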

  8. Evaluation of the AnnAGNPS model for predicting runoff and sediment yield in a small Mediterranean agricultural watershed in Navarre (Spain)

    USDA-ARS?s Scientific Manuscript database

    AnnAGNPS (Annualized Agricultural Non-Point Source Pollution Model) is a system of computer models developed to predict non-point source pollutant loadings within agricultural watersheds. It contains a daily time step distributed parameter continuous simulation surface runoff model designed to assis...

  9. Preliminary results concerning the simulation of beam profiles from extracted ion current distributions for mini-STRIKE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agostinetti, P., E-mail: piero.agostinetti@igi.cnr.it; Serianni, G.; Veltri, P.

    The Radio Frequency (RF) negative hydrogen ion source prototype has been chosen for the ITER neutral beam injectors due to its optimal performance and easier maintenance, as demonstrated at the Max-Planck-Institut für Plasmaphysik, Garching, in hydrogen and deuterium. One of the key pieces of information for better understanding the operating behavior of RF ion sources is the extracted negative ion current density distribution. This distribution—influenced by several factors such as source geometry, particle drifts inside the source, cesium distribution, and the layout of the cesium ovens—is not straightforward to evaluate. The main outcome of the present contribution is the development of a minimization method to estimate the extracted current distribution using the footprint of the beam recorded with mini-STRIKE (Short-Time Retractable Instrumented Kalorimeter). To accomplish this, a series of four computational models has been set up, where the output of one model is the input of the following one. These models compute the optics of the ion beam, evaluate the distribution of the heat deposited on the mini-STRIKE diagnostic calorimeter, and finally give an estimate of the temperature distribution on the back of mini-STRIKE. Several iterations with different extracted current profiles are necessary to identify the profile most compatible with the experimental data. A first test of the application of the method to the BAvarian Test Machine for Negative ions beam is given.
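
    The iteration loop can be pictured as a scalar minimization over the parameters of a trial current profile, with the four-model chain collapsed into a single forward function. The sketch below is only a stand-in: forward() is a hypothetical Gaussian surrogate, not the actual beam-optics and thermal models, and the "measured" footprint is synthetic.

        import numpy as np
        from scipy.optimize import minimize

        x = np.linspace(-0.1, 0.1, 200)                         # position on calorimeter (m)
        measured = 0.8 * np.exp(-0.5 * ((x - 0.01) / 0.03)**2)  # synthetic footprint

        def forward(params):
            """Stand-in for the model chain: optics -> heat deposition -> temperature."""
            amp, x0, sigma = params
            return amp * np.exp(-0.5 * ((x - x0) / sigma)**2)

        def residual(params):
            return np.sum((forward(params) - measured)**2)

        best = minimize(residual, x0=[1.0, 0.0, 0.02], method="Nelder-Mead")
        print(best.x)   # profile parameters most compatible with the footprint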

  10. The influence of boreal biomass burning emissions on the distribution of tropospheric ozone over North America and the North Atlantic during 2010

    NASA Astrophysics Data System (ADS)

    Parrington, M.; Palmer, P. I.; Henze, D. K.; Tarasick, D. W.; Hyer, E. J.; Owen, R. C.; Helmig, D.; Clerbaux, C.; Bowman, K. W.; Deeter, M. N.; Barratt, E. M.; Coheur, P.-F.; Hurtmans, D.; George, M.; Worden, J. R.

    2011-09-01

    We analyse the sensitivity of the tropospheric ozone distribution over North America and the North Atlantic to boreal biomass burning emissions during the summer of 2010, using the GEOS-Chem 3-D global tropospheric chemical transport model and observations from in situ and satellite instruments. In comparison to observations from the PICO-NARE observatory in the Azores, ozonesondes across Canada, and the Tropospheric Emission Spectrometer (TES) and Infrared Atmospheric Sounding Instrument (IASI) satellite instruments, the model ozone distribution is shown to be in reasonable agreement, with mean biases less than 10 ppbv. We use the adjoint of GEOS-Chem to show that the model ozone distribution in the free troposphere over Maritime Canada is largely sensitive to NOx emissions from biomass burning sources in Central Canada, lightning sources in the central US, and anthropogenic sources in the eastern US and south-eastern Canada. We also use the adjoint of GEOS-Chem to evaluate the Fire Locating And Monitoring of Burning Emissions (FLAMBE) inventory through assimilation of CO observations from the Measurements Of Pollution In The Troposphere (MOPITT) satellite instrument. The CO inversion showed that, on average, the FLAMBE emissions needed to be reduced to 89% of their original values, with scaling factors ranging from 12% to 102%, to fit the MOPITT observations in the boreal regions. Applying the CO scaling factors to all species emitted from boreal biomass burning sources led to a decrease of the model tropospheric distributions of CO, PAN, and NOx by as much as -20 ppbv, -50 ppbv, and -20 ppbv, respectively. The impact of optimizing the biomass burning emissions was to reduce the model ozone distribution by approximately -3 ppbv (-8%) and, on average, to improve the agreement of the model ozone distribution with the observations throughout the free troposphere, reducing the mean model bias from 5.5 to 4.0 ppbv for the PICO-NARE observatory, 3.0 to 0.9 ppbv for ozonesondes, 2.0 to 0.9 ppbv for TES, and 2.8 to 1.4 ppbv for IASI.

  11. Analysis of classical Fourier, SPL and DPL heat transfer model in biological tissues in presence of metabolic and external heat source

    NASA Astrophysics Data System (ADS)

    Kumar, Dinesh; Singh, Surjan; Rai, K. N.

    2016-06-01

    In this paper, the temperature distribution in a finite biological tissue in the presence of metabolic and external heat sources, with the surface subjected to different types of boundary conditions, is studied. Classical Fourier, single-phase-lag (SPL), and dual-phase-lag (DPL) models were developed for bio-heat transfer in biological tissues. Analytical solutions were obtained for all three models using the Laplace transform technique, and the results are compared. The effects of the variability of different parameters, such as relaxation time, metabolic heat source, spatial heat source, and the type of boundary condition, on the temperature distribution in different types of tissue, such as muscle, tumor, fat, dermis, and subcutaneous tissue, are analyzed and discussed in detail for the three models. The results obtained from the three models are compared with the experimental observations of Stolwijk and Hardy (Pflug Arch 291:129-162, 1966). It has been observed that the DPL bio-heat transfer model provides better results than the other two models. The values of the metabolic and spatial heat source in boundary conditions of the first, second, and third kind are evaluated for different types of thermal therapies.

  12. An improved source model for aircraft interior noise studies

    NASA Technical Reports Server (NTRS)

    Mahan, J. R.; Fuller, C. R.

    1985-01-01

    There is concern that advanced turboprop engines currently being developed may produce excessive aircraft cabin noise levels. This concern has stimulated renewed interest in developing aircraft interior noise reduction methods that do not significantly increase takeoff weight. An existing analytical model for noise transmission into aircraft cabins was utilized to investigate the behavior of an improved propeller source model for use in aircraft interior noise studies. The new source model, a virtually rotating dipole, is shown to adequately match measured fuselage sound pressure distributions, including the correct phase relationships, for published data. The virtually rotating dipole is used to study the sensitivity of synchrophasing effectiveness to the fuselage sound pressure trace velocity distribution. Results of calculations are presented which reveal the importance of correctly modeling the surface pressure phase relations in synchrophasing and other aircraft interior noise studies.

  14. Application of the Approximate Bayesian Computation methods in the stochastic estimation of atmospheric contamination parameters for mobile sources

    NASA Astrophysics Data System (ADS)

    Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw

    2016-11-01

    In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm input data are the concentrations of the released substance arriving on-line from the distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probabilistic distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: the starting position of the source (x,y), the direction of motion of the source (d), its velocity (v), the release rate (q), the start time of the release (ts), and its duration (td). Newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.
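
    The essence of ABC is to keep parameter draws whose simulated observations fall close to the measured ones. The sketch below uses the simpler rejection variant rather than the paper's Sequential ABC, with a toy forward model standing in for SCIPUFF and a static source for brevity; all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        observed = np.array([3.1, 5.6, 2.2])               # hypothetical sensor readings

        def forward(theta):
            """Toy dispersion model: source (x, y) and rate q -> 3 sensor values."""
            x, y, q = theta
            sensors = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, -0.5]])
            d2 = ((sensors - [x, y])**2).sum(axis=1)
            return q / (1.0 + d2)

        accepted = []
        for _ in range(100_000):
            theta = rng.uniform([-2.0, -2.0, 0.1], [4.0, 2.0, 10.0])  # prior draw
            if np.linalg.norm(forward(theta) - observed) < 0.5:       # tolerance test
                accepted.append(theta)
        post = np.array(accepted)
        if len(post):
            print(len(post), post.mean(axis=0))            # approximate posterior mean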

  15. Aeroacoustic catastrophes: upstream cusp beaming in Lilley's equation.

    PubMed

    Stone, J T; Self, R H; Howls, C J

    2017-05-01

    The downstream propagation of high-frequency acoustic waves from a point source in a subsonic jet obeying Lilley's equation is well known to be organized around the so-called 'cone of silence', a fold catastrophe across which the amplitude may be modelled uniformly using Airy functions. Here we show that acoustic waves not only unexpectedly propagate upstream, but also are organized at constant distance from the point source around a cusp catastrophe with amplitude modelled locally by the Pearcey function. Furthermore, the cone of silence is revealed to be a cross-section of a swallowtail catastrophe. One consequence of these discoveries is that the peak acoustic field upstream is not only structurally stable but also at a similar level to the known downstream field. The fine structure of the upstream cusp is blurred out by distributions of symmetric acoustic sources, but peak upstream acoustic beaming persists when asymmetries are introduced, from either arrays of discrete point sources or perturbed continuum ring source distributions. These results may pose interesting questions for future novel jet-aircraft engine designs where asymmetric source distributions arise.

  16. POI Summarization by Aesthetics Evaluation From Crowd Source Social Media.

    PubMed

    Qian, Xueming; Li, Cheng; Lan, Ke; Hou, Xingsong; Li, Zhetao; Han, Junwei

    2018-03-01

    Place-of-Interest (POI) summarization by aesthetics evaluation can recommend a set of POI images to the user and is significant in image retrieval. In this paper, we propose a system that summarizes a collection of POI images with regard to both aesthetics and the diversity of the camera distribution. First, we generate visual albums by a coarse-to-fine POI clustering approach and then generate 3D models for each album from the images collected from social media. Second, based on the 3D-to-2D projection relationship, we select candidate photos in terms of the proposed crowd-sourced saliency model. Third, in order to improve the performance of the aesthetic measurement model, we propose a crowd-sourced saliency detection approach that explores the distribution of salient regions in the 3D model. Then, we measure the composition aesthetics of each image and explore the crowd-sourced salient features to yield a saliency map, based on which we propose an adaptive image adoption approach. Finally, we combine diversity and aesthetics to recommend aesthetic pictures. Experimental results show that the proposed POI summarization approach can return images with diverse camera distributions and good aesthetics.

  17. LENSED: a code for the forward reconstruction of lenses and sources from strong lensing observations

    NASA Astrophysics Data System (ADS)

    Tessore, Nicolas; Bellagamba, Fabio; Metcalf, R. Benton

    2016-12-01

    Robust modelling of strong lensing systems is fundamental to exploit the information they contain about the distribution of matter in galaxies and clusters. In this work, we present LENSED, a new code which performs forward parametric modelling of strong lenses. LENSED takes advantage of a massively parallel ray-tracing kernel to perform the necessary calculations on a modern graphics processing unit (GPU). This makes the precise rendering of the background lensed sources much faster, and allows the simultaneous optimization of tens of parameters for the selected model. With a single run, the code is able to obtain the full posterior probability distribution for the lens light, the mass distribution and the background source at the same time. LENSED is first tested on mock images which reproduce realistic space-based observations of lensing systems. In this way, we show that it is able to recover unbiased estimates of the lens parameters, even when the sources do not follow exactly the assumed model. Then, we apply it to a subsample of the Sloan Lens ACS Survey lenses, in order to demonstrate its use on real data. The results generally agree with the literature, and highlight the flexibility and robustness of the algorithm.

  18. Continuous-variable quantum key distribution with Gaussian source noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen Yujie; Peng Xiang; Yang Jian

    2011-05-15

    Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.

  19. Dataset for Testing Contamination Source Identification Methods for Water Distribution Networks

    EPA Pesticide Factsheets

    This dataset includes the results of a simulation study using the source inversion techniques available in the Water Security Toolkit. The data were created to test the different techniques for accuracy, specificity, false positive rate, and false negative rate. The tests examined different parameters including measurement error, modeling error, injection characteristics, time horizon, network size, and sensor placement. The water distribution system network models that were used in the study are also included in the dataset. This dataset is associated with the following publication: Seth, A., K. Klise, J. Siirola, T. Haxton, and C. Laird. Testing Contamination Source Identification Methods for Water Distribution Networks. Journal of Environmental Division, Proceedings of the American Society of Civil Engineers. ASCE, Reston, VA, USA, 2016.

  20. Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration

    NASA Technical Reports Server (NTRS)

    Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)

    1981-01-01

    The anomalous potentials of gravity and magnetic fields and their spatial derivatives on a spherical Earth were calculated for an arbitrary body represented by an equivalent point source distribution of gravity poles or magnetic dipoles. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.
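
    A minimal flat-earth sketch of the quadrature idea follows: the gravity effect of a prism is evaluated by Gauss-Legendre quadrature, which in effect replaces the body with weighted point sources at the quadrature nodes (the paper works on a spherical Earth and also treats dipoles; the geometry and density here are assumptions).

        import numpy as np

        G = 6.674e-11                                        # m^3 kg^-1 s^-2
        nodes, weights = np.polynomial.legendre.leggauss(8)  # 8-point rule on [-1, 1]

        def gz_prism(obs, lim, rho):
            """Vertical gravity at obs = (x, y, z) of a prism lim = ((x0,x1),(y0,y1),(z0,z1)),
            z positive down, density contrast rho (kg/m^3)."""
            half = [0.5 * (b - a) for a, b in lim]
            mid = [0.5 * (a + b) for a, b in lim]
            jac = half[0] * half[1] * half[2]                # volume Jacobian of the mapping
            total = 0.0
            for i in range(8):
                for j in range(8):
                    for k in range(8):
                        p = (half[0] * nodes[i] + mid[0],    # node mapped into the prism
                             half[1] * nodes[j] + mid[1],
                             half[2] * nodes[k] + mid[2])
                        r = np.subtract(p, obs)
                        total += (weights[i] * weights[j] * weights[k] * jac
                                  * G * rho * r[2] / np.linalg.norm(r)**3)
            return total

        # 1 km x 1 km x 200 m prism, 300 kg/m^3 contrast, observed 1 km above the surface
        print(gz_prism((0.0, 0.0, -1000.0), ((-500, 500), (-500, 500), (0, 200)), 300.0))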

  1. Probing the Spatial Distribution of the Interstellar Dust Medium by High Angular Resolution X-ray Halos of Point Sources

    NASA Astrophysics Data System (ADS)

    Xiang, Jingen

    X-rays are absorbed and scattered by dust grains when they travel through the interstellar medium. The scattering within small angles results in an X-ray ``halo''. The halo properties are significantly affected by the energy of the radiation, the optical depth of the scattering, the grain size distributions and compositions, and the spatial distribution of dust along the line of sight (LOS). Therefore analyzing X-ray halo properties is an important tool for studying the size distribution and spatial distribution of interstellar grains, which play a central role in the astrophysical study of the interstellar medium, such as the thermodynamics and chemistry of the gas and the dynamics of star formation. With excellent angular resolution, good energy resolution, and a broad energy band, the Chandra ACIS is so far the best instrument for studying X-ray halos. But the direct images of bright sources obtained with ACIS usually suffer from severe pileup, which prevents us from obtaining the halos at small angles. We first improve the method proposed by Yao et al to resolve the X-ray dust scattering halos of point sources from the zeroth order data in CC-mode or the first order data in TE-mode with Chandra HETG/ACIS. Using this method we re-analyze the Cygnus X-1 data observed with Chandra. We then study the X-ray dust scattering halos around 17 bright X-ray point sources using Chandra data. All sources were observed with the HETG/ACIS in CC-mode or TE-mode. Using the interstellar grain models WD01 and MRN to fit the halo profiles, we obtain the hydrogen column densities and the spatial distributions of the scattering dust grains along the lines of sight (LOS) to these sources. We find a good linear correlation not only between the scattering hydrogen column density from the WD01 model and that from the MRN model, but also between N_{H} derived from spectral fits and that derived from the grain models WD01 and MRN (except for GX 301-2 and Vela X-1): N_{H,WD01} = (0.720±0.009) × N_{H,abs} + (0.051±0.013) and N_{H,MRN} = (1.156±0.016) × N_{H,abs} + (0.062±0.024), in units of 10^{22} cm^{-2}. The correlation between FHI and N_{H} is then obtained. Both the WD01 and MRN model fits show that the scattering dust density very close to these sources is much higher than in the normal interstellar medium, which we consider to be evidence of molecular clouds around these X-ray binaries. We also find a linear correlation between the effective distance through the galactic dust layer and the hydrogen scattering column density N_{H} excluding the portion in x=0.99-1.0, but the correlation does not exist between the effective distance and the N_{H} in x=0.99-1.0. This shows that the dust near the X-ray sources is not dust from the galactic disk. We then estimate the structure and density of the stellar wind around the X-ray pulsars Vela X-1 and GX 301-2. Finally we discuss the possibility of probing the three-dimensional structure of the interstellar medium using the X-ray halos of transient sources, probing the spatial distributions of the interstellar dust medium near point sources, and even the structure of stellar winds using higher angular resolution X-ray dust scattering halos, and of testing the model that a black hole can be formed from the direct collapse of a massive star without a supernova using the statistical distribution of the dust density near X-ray binaries.

  2. Simulated and measured neutron/gamma light output distribution for poly-energetic neutron/gamma sources

    NASA Astrophysics Data System (ADS)

    Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.

    2018-03-01

    In the present paper, the light output distribution due to a poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of the light output distribution includes modeling of the particle transport, calculation of the scintillation photons induced by charged particles, simulation of the scintillation photon transport, and inclusion of the light resolution obtained from experiment. The developed computational code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, neutron-gamma discrimination based on the light output distribution was performed using the zero crossing method. As a case study, a 241Am-9Be source was considered, and the simulated and measured neutron/gamma light output distributions were compared. There is acceptable agreement between the discriminated neutron/gamma light output distributions obtained from the simulation and the experiment.

  3. Statistical methods and neural network approaches for classification of data from multiple sources

    NASA Technical Reports Server (NTRS)

    Benediktsson, Jon Atli; Swain, Philip H.

    1990-01-01

    Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A problem with using conventional multivariate statistical approaches for classification of data of multiple types is in general that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution free since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of the problem involving how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.

  4. Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2005-01-01

    A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared to the two-dimensional plate using a steady mass flow boundary condition, which was used to simulate a steady micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.

  5. Estimating oxygen distribution from vasculature in three-dimensional tumour tissue

    PubMed Central

    Kannan, Pavitra; Warren, Daniel R.; Markelc, Bostjan; Bates, Russell; Muschel, Ruth; Partridge, Mike

    2016-01-01

    Regions of tissue which are well oxygenated respond better to radiotherapy than hypoxic regions by up to a factor of three. If these volumes could be accurately estimated, then it might be possible to selectively boost dose to radio-resistant regions, a concept known as dose-painting. While imaging modalities such as 18F-fluoromisonidazole positron emission tomography (PET) allow identification of hypoxic regions, they are intrinsically limited by the physics of such systems to the millimetre domain, whereas tumour oxygenation is known to vary over a micrometre scale. Mathematical modelling of microscopic tumour oxygen distribution therefore has the potential to complement and enhance macroscopic information derived from PET. In this work, we develop a general method of estimating oxygen distribution in three dimensions from a source vessel map. The method is applied analytically to line sources and quasi-linear idealized line source maps, and also applied to full three-dimensional vessel distributions through a kernel method and compared with oxygen distribution in tumour sections. The model outlined is flexible and stable, and can readily be applied to estimating likely microscopic oxygen distribution from any source geometry. We also investigate the problem of reconstructing three-dimensional oxygen maps from histological and confocal two-dimensional sections, concluding that two-dimensional histological sections are generally inadequate representations of the three-dimensional oxygen distribution. PMID:26935806
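
    A minimal sketch of the kernel approach follows: a binary vessel map is convolved with a radially decaying oxygen kernel to produce a relative oxygen map. The exponential kernel and its decay length are assumptions made for brevity; the paper derives its kernel from the underlying diffusion problem.

        import numpy as np
        from scipy.signal import fftconvolve

        rng = np.random.default_rng(1)
        vessels = (rng.random((64, 64, 64)) > 0.999).astype(float)  # sparse source voxels

        ax = np.arange(-16, 17)
        X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
        r = np.sqrt(X**2 + Y**2 + Z**2)
        kernel = np.exp(-r / 5.0)                        # assumed decay length of 5 voxels

        po2 = fftconvolve(vessels, kernel, mode="same")  # relative oxygen map
        print(po2.min(), po2.max())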

  6. Estimating oxygen distribution from vasculature in three-dimensional tumour tissue.

    PubMed

    Grimes, David Robert; Kannan, Pavitra; Warren, Daniel R; Markelc, Bostjan; Bates, Russell; Muschel, Ruth; Partridge, Mike

    2016-03-01

    Regions of tissue which are well oxygenated respond better to radiotherapy than hypoxic regions by up to a factor of three. If these volumes could be accurately estimated, then it might be possible to selectively boost dose to radio-resistant regions, a concept known as dose-painting. While imaging modalities such as 18F-fluoromisonidazole positron emission tomography (PET) allow identification of hypoxic regions, they are intrinsically limited by the physics of such systems to the millimetre domain, whereas tumour oxygenation is known to vary over a micrometre scale. Mathematical modelling of microscopic tumour oxygen distribution therefore has the potential to complement and enhance macroscopic information derived from PET. In this work, we develop a general method of estimating oxygen distribution in three dimensions from a source vessel map. The method is applied analytically to line sources and quasi-linear idealized line source maps, and also applied to full three-dimensional vessel distributions through a kernel method and compared with oxygen distribution in tumour sections. The model outlined is flexible and stable, and can readily be applied to estimating likely microscopic oxygen distribution from any source geometry. We also investigate the problem of reconstructing three-dimensional oxygen maps from histological and confocal two-dimensional sections, concluding that two-dimensional histological sections are generally inadequate representations of the three-dimensional oxygen distribution. © 2016 The Authors.

  7. Comparison of two propeller source models for aircraft interior noise studies

    NASA Technical Reports Server (NTRS)

    Mahan, J. R.; Fuller, C. R.

    1986-01-01

    The sensitivity of the predicted synchrophasing (SP) effectiveness trends to the propeller source model used is investigated, with reference to the development of advanced turboprop engines for transport aircraft. SP effectiveness is shown to be sensitive to the type of source model used. For the virtually rotating dipole source model, the SP effectiveness is sensitive to the direction of rotation at some frequencies but not at others. The SP effectiveness obtained from the virtually rotating dipole model is not very sensitive to the radial location of the source distribution within reasonable limits. Finally, the predicted SP effectiveness is shown to be more sensitive to the details of the source model used for the case of corotation than for the case of counterrotation.

  8. The impact of runoff generation mechanisms on the location of critical source areas

    USGS Publications Warehouse

    Lyon, S.W.; McHale, M.R.; Walter, M.T.; Steenhuis, T.S.

    2006-01-01

    Identifying phosphorus (P) source areas and transport pathways is a key step in decreasing P loading to natural water systems. This study compared the effects of two modeled runoff generation processes - saturation excess and infiltration excess - on total phosphorus (TP) and soluble reactive phosphorus (SRP) concentrations in 10 catchment streams of a Catskill mountain watershed in southeastern New York. The spatial distribution of runoff from forested land and agricultural land was generated for both runoff processes; results of both distributions were consistent with Soil Conservation Service-Curve Number (SCS-CN) theory. These spatial runoff distributions were then used to simulate stream concentrations of TP and SRP through a simple equation derived from an observed relation between P concentration and land use; empirical results indicate that TP and SRP concentrations increased with increasing percentage of agricultural land. Simulated TP and SRP stream concentrations predicted for the 10 catchments were strongly affected by the assumed runoff mechanism. The modeled TP and SRP concentrations produced by saturation excess distribution averaged 31 percent higher and 42 percent higher, respectively, than those produced by the infiltration excess distribution. Misrepresenting the primary runoff mechanism could not only produce erroneous concentrations, it could fail to correctly locate critical source areas for implementation of best management practices. Thus, identification of the primary runoff mechanism is critical in selection of appropriate models in the mitigation of nonpoint source pollution. Correct representation of runoff processes is also critical in the future development of biogeochemical transport models, especially those that address nutrient fluxes.
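
    For reference, the SCS-CN relation mentioned above computes runoff depth Q from precipitation P and a curve number CN; the sketch below uses the common assumption Ia = 0.2S, and the CN values are rough illustrations only.

        def scs_cn_runoff(P, CN):
            """Runoff depth (inches) from precipitation P (inches) and curve number CN."""
            S = 1000.0 / CN - 10.0           # potential maximum retention (inches)
            Ia = 0.2 * S                     # initial abstraction (common assumption)
            return 0.0 if P <= Ia else (P - Ia)**2 / (P - Ia + S)

        for CN in (55, 70, 85):              # e.g. forest, pasture, row crops (rough values)
            print(CN, round(scs_cn_runoff(2.5, CN), 2), "in of runoff from 2.5 in of rain")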

  9. Self-consistent multidimensional electron kinetic model for inductively coupled plasma sources

    NASA Astrophysics Data System (ADS)

    Dai, Fa Foster

    Inductively coupled plasma (ICP) sources have received increasing interest in microelectronics fabrication and lighting industry. In 2-D configuration space (r, z) and 2-D velocity domain (νθ,νz), a self- consistent electron kinetic analytic model is developed for various ICP sources. The electromagnetic (EM) model is established based on modal analysis, while the kinetic analysis gives the perturbed Maxwellian distribution of electrons by solving Boltzmann-Vlasov equation. The self- consistent algorithm combines the EM model and the kinetic analysis by updating their results consistently until the solution converges. The closed-form solutions in the analytical model provide rigorous and fast computing for the EM fields and the electron kinetic behavior. The kinetic analysis shows that the RF energy in an ICP source is extracted by a collisionless dissipation mechanism, if the electron thermovelocity is close to the RF phase velocities. A criterion for collisionless damping is thus given based on the analytic solutions. To achieve uniformly distributed plasma for plasma processing, we propose a novel discharge structure with both planar and vertical coil excitations. The theoretical results demonstrate improved uniformity for the excited azimuthal E-field in the chamber. Non-monotonic spatial decay in electric field and space current distributions was recently observed in weakly- collisional plasmas. The anomalous skin effect is found to be responsible for this phenomenon. The proposed model successfully models the non-monotonic spatial decay effect and achieves good agreements with the measurements for different applied RF powers. The proposed analytical model is compared with other theoretical models and different experimental measurements. The developed model is also applied to two kinds of ICP discharges used for electrodeless light sources. One structure uses a vertical internal coil antenna to excite plasmas and another has a metal shield to prevent the electromagnetic radiation. The theoretical results delivered by the proposed model agree quite well with the experimental measurements in many aspects. Therefore, the proposed self-consistent model provides an efficient and reliable means for designing ICP sources in various applications such as VLSI fabrication and electrodeless light sources.

  10. A virtual photon energy fluence model for Monte Carlo dose calculation.

    PubMed

    Fippel, Matthias; Haryanto, Freddy; Dohm, Oliver; Nüsslin, Fridtjof; Kriesen, Stephan

    2003-03-01

    The presented virtual energy fluence (VEF) model of the patient-independent part of medical linear accelerator heads consists of two Gaussian-shaped photon sources and one uniform electron source. The planar photon sources are located close to the bremsstrahlung target (primary source) and to the flattening filter (secondary source), respectively. The electron contamination source is located in the plane defining the lower end of the filter. The standard deviations or widths and the relative weights of each source are free parameters. Five other parameters correct for fluence variations, i.e., the horn or central depression effects. If these parameters and the field widths in the X and Y directions are given, the corresponding energy fluence distribution can be calculated analytically and compared to measured dose distributions in air. This provides a method of fitting the free parameters using measurements for various square and rectangular fields and a fixed number of monitor units. The next step in generating the whole set of base data is to calculate monoenergetic central axis depth dose distributions in water, which are used to derive the energy spectrum by deconvolving the measured depth dose curves. This spectrum is also corrected to take off-axis softening into account. The VEF model is implemented, together with geometry modules for the patient-specific part of the treatment head (jaws, multileaf collimator), into the XVMC dose calculation engine. Implementation into other Monte Carlo codes is possible based on the information in this paper. Experiments are performed to verify the model by comparing measured and calculated dose distributions and output factors in water. It is demonstrated that open photon beams of linear accelerators from two different vendors are accurately simulated using the VEF model. The commissioning procedure of the VEF model is clinically feasible because it is based on standard measurements in air and water. It is also useful for IMRT applications because a full Monte Carlo simulation of the treatment head would be too time-consuming for many small fields.
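
    The two-Gaussian photon source idea can be pictured in one dimension: each planar Gaussian source contributes the fraction of its distribution visible through the projected field edges, and the total fluence is a weighted sum. The sketch below ignores beam divergence and the horn corrections, and all widths and weights are illustrative rather than fitted values.

        import numpy as np
        from scipy.special import erf

        def gaussian_window(x, lo, hi, sigma):
            """Integral of a unit Gaussian source between the projected field edges."""
            s = np.sqrt(2.0) * sigma
            return 0.5 * (erf((hi - x) / s) - erf((lo - x) / s))

        x = np.linspace(-15.0, 15.0, 301)                  # off-axis position (cm)
        primary = gaussian_window(x, -5.0, 5.0, 0.1)       # narrow source near the target
        secondary = gaussian_window(x, -5.0, 5.0, 1.5)     # broad source at the filter
        fluence = 0.95 * primary + 0.05 * secondary        # weights are free parameters
        print(fluence[150], fluence[0])                    # central axis vs far off-axis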

  11. Characterization of continuously distributed cortical water diffusion rates with a stretched-exponential model.

    PubMed

    Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S

    2003-10-01

    Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm(2) in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
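
    A minimal sketch of fitting the stretched-exponential model S(b) = S0 exp[-(b*DDC)^alpha] to diffusion-weighted signals follows; the data are synthetic, and the parameter values are assumptions chosen to resemble cortical tissue.

        import numpy as np
        from scipy.optimize import curve_fit

        def stretched(b, S0, DDC, alpha):
            return S0 * np.exp(-(b * DDC)**alpha)

        b = np.linspace(500.0, 6500.0, 13)                  # s/mm^2, as in the study
        rng = np.random.default_rng(2)
        signal = stretched(b, 1.0, 0.8e-3, 0.75) + rng.normal(0.0, 0.005, b.size)

        (S0, DDC, alpha), _ = curve_fit(stretched, b, signal, p0=[1.0, 1.0e-3, 0.9])
        print(f"DDC = {DDC:.2e} mm^2/s, alpha = {alpha:.2f}")  # alpha < 1: heterogeneity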

  12. Research in atmospheric chemistry and transport

    NASA Technical Reports Server (NTRS)

    Yung, Y. L.

    1982-01-01

    The carbon monoxide cycle was studied by incorporating the known CO sources and sinks in a tracer model which used the winds generated by a general circulation model. The photochemical production and loss terms, which depended on OH radical concentrations, were calculated in an interactive fashion. Comparison of the computed global distribution and seasonal variations of CO with observations was used to yield constraints on the distribution and magnitude of the sources and sinks of CO, and the abundance of OH radicals in the troposphere.

  13. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-05-15

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  14. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  15. Importance of vesicle release stochasticity in neuro-spike communication.

    PubMed

    Ramezani, Hamideh; Akan, Ozgur B

    2017-07-01

    The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. Hence, we study the biological events occurring in this process and use microphysiological simulations to observe the functionality of these events. Since the most important source of variability in vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs), followed by the influx of calcium ions through these channels, we propose a stochastic model for this event, while using a deterministic model for the other sources of variability. To capture the stochasticity of the calcium influx to the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined in terms of the Normal and Logistic distributions.

  16. Statistical interpretation of pollution data from satellites. [for levels distribution over metropolitan area

    NASA Technical Reports Server (NTRS)

    Smith, G. L.; Green, R. N.; Young, G. R.

    1974-01-01

    The NIMBUS-G environmental monitoring satellite carries an instrument (a gas correlation spectrometer) for measuring the mass of a given pollutant within a gas volume. The present paper treats the problem of how this type of measurement can be used to estimate the distribution of pollutant levels in a metropolitan area. Estimation methods are used to develop this distribution. The pollution concentration caused by a point source is modeled as a Gaussian plume. The uncertainty in the measurements is used to determine the accuracy of estimating the source strength, the wind velocity, the diffusion coefficients, and the source location.
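
    For concreteness, a minimal Gaussian plume sketch follows: ground-level concentration from an elevated point source with reflection at the ground. The power-law dispersion coefficients are crude stand-ins for the stability-dependent sigma curves, and all inputs are illustrative.

        import numpy as np

        def plume_ground(x, y, Q, u, H, a=0.08, b=0.06):
            """Ground-level concentration at downwind distance x and crosswind offset y,
            for source strength Q, wind speed u, and effective stack height H."""
            sy = a * x**0.9                       # assumed sigma_y(x) power law
            sz = b * x**0.85                      # assumed sigma_z(x) power law
            return (Q / (2.0 * np.pi * u * sy * sz)
                    * np.exp(-y**2 / (2.0 * sy**2))
                    * 2.0 * np.exp(-H**2 / (2.0 * sz**2)))   # ground reflection term

        print(plume_ground(x=1000.0, y=50.0, Q=10.0, u=4.0, H=30.0))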

  17. Modeling of light distribution in the brain for topographical imaging

    NASA Astrophysics Data System (ADS)

    Okada, Eiji; Hayashi, Toshiyuki; Kawaguchi, Hiroshi

    2004-07-01

    A multi-channel optical imaging system can obtain a topographical distribution of activated regions in the brain cortex using a simple mapping algorithm. Near-infrared light is strongly scattered in the head, and the volume of tissue that contributes to the change in the optical signal detected by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of brain activity. We report theoretical investigations of the spatial resolution of topographic imaging of brain activity. The head model for the theoretical study consists of five layers that imitate the scalp, skull, subarachnoid space, gray matter, and white matter. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The source-detector pairs are arranged one-dimensionally on the surface of the model, and the distance between adjoining source-detector pairs is varied from 4 mm to 32 mm. The change in detected intensity caused by an absorption change is obtained by Monte Carlo simulation. The position of the absorption change is reconstructed by the conventional mapping algorithm and by a reconstruction algorithm using the spatial sensitivity profiles. We discuss the effective interval between source-detector pairs and the choice of reconstruction algorithm to improve the topographic images of brain activity.

  18. A tuneable approach to uniform light distribution for artificial daylight photodynamic therapy.

    PubMed

    O'Mahoney, Paul; Haigh, Neil; Wood, Kenny; Brown, C Tom A; Ibbotson, Sally; Eadie, Ewan

    2018-06-16

    Implementation of daylight photodynamic therapy (dPDT) is somewhat limited by variable weather conditions. Light sources have been employed to provide artificial dPDT indoors, with low irradiances and longer treatment times. Uniform light distribution across the target area is key to ensuring effective treatment, particularly for large areas. A novel light source with a tuneable direction of light emission is developed in order to meet this challenge. The wavelength composition of the novel light source is controlled such that the protoporphyrin-IX (PpIX)-weighted spectra of the light source and of daylight match. The uniformity of the light source is characterised on a flat surface, a model head, and a model leg. For context, a typical conventional PDT light source is also characterised. Additionally, the wavelength uniformity across the treatment site is characterised. The PpIX-weighted spectrum of the novel light source matches the PpIX-weighted daylight spectrum, with irradiance values within the bounds for effective dPDT. By tuning the direction of light emission, improvements are seen in the uniformity across large anatomical surfaces. Wavelength uniformity is discussed. We have developed a light source that addresses the challenges of uniform, multiwavelength light distribution for large-area artificial dPDT across curved anatomical surfaces. Copyright © 2018. Published by Elsevier B.V.

  19. Gravitational lens models of arcs in clusters

    NASA Technical Reports Server (NTRS)

    Bergmann, Anton G.; Petrosian, Vahe; Lynds, Roger

    1990-01-01

    It is now well established that the luminous arcs discovered in clusters of galaxies, in particular those in Abell 370 and Cluster 2244-02, are produced by gravitational lensing of background sources. The arcs are modeled and constraints are placed on the distribution of the mass in the clusters and the shape and size of the sources. The models require, as expected, a large amount of dark matter in the clusters and a mass-to-blue-light ratio for the cluster which exceeds 100 solar mass/solar luminosity and could be as high as 1000 solar mass/solar luminosity, depending on cosmological parameters and the distribution of the dark matter. Furthermore, it is found that in the case of the arc in A370 the dark matter must have a different distribution than the luminous galaxies, while for the arc in Cl 2244 the dark matter can have a distribution similar to that of the light matter (galaxies) or a separate distribution.

  20. KINETICS OF LOW SOURCE REACTOR STARTUPS. PART II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    hurwitz, H. Jr.; MacMillan, D.B.; Smith, J.H.

    1962-06-01

    A computational technique is described for obtaining the probability distribution of power level during a low-source reactor startup. The technique uses a mathematical model for the time-dependent probability distribution of neutron and precursor concentrations, having a finite neutron lifetime, one group of delayed neutron precursors, and no spatial dependence. Results obtained by the technique are given. (auth)

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purwaningsih, Anik

    Dosimetric data for a brachytherapy source should be known before it is used for clinical treatment. The Iridium-192 source type H01, manufactured by PRR-BATAN for brachytherapy, does not yet have known dosimetric data. The radial dose function and the anisotropic dose distribution are among the primary characteristics of a brachytherapy source. The dose distribution for the Iridium-192 source type H01 was obtained from the dose calculation formalism recommended in the AAPM TG-43U1 report using the MCNPX 2.6.0 Monte Carlo simulation code. To assess the effect of the cavity in the Iridium-192 type H01 source caused by the manufacturing process, calculations were also performed for an Iridium-192 type H01 source without the cavity. The calculated radial dose function and anisotropic dose distribution for the Iridium-192 source type H01 were compared with those of another model of Iridium-192 source.
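
    For orientation, the 1D (point-source) form of the TG-43 dose-rate formalism is sketched below; the radial dose function and anisotropy factor tables are placeholders, not the H01 data, and the dose-rate constant is a typical Ir-192 value rather than a measured one.

        import numpy as np

        # Ddot(r) = Sk * Lambda * (r0 / r)^2 * g(r) * phi_an(r), with r0 = 1 cm
        r_tab = np.array([0.5, 1.0, 2.0, 3.0, 5.0])        # cm
        g_tab = np.array([0.99, 1.00, 1.01, 1.00, 0.97])   # placeholder g(r), not H01 data
        phi_tab = np.array([0.97, 0.97, 0.98, 0.98, 0.98]) # placeholder phi_an(r)

        def dose_rate(r, Sk=40000.0, Lambda=1.12):         # air-kerma strength (U), cGy/(h U)
            g = np.interp(r, r_tab, g_tab)
            phi = np.interp(r, r_tab, phi_tab)
            return Sk * Lambda * (1.0 / r)**2 * g * phi    # dose rate (cGy/h) at r cm

        print(dose_rate(2.0))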

  2. Skyshine study for next generation of fusion devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Y.; Yang, S.

    1987-02-01

    A shielding analysis for the next generation of fusion devices (ETR/INTOR) was performed to study the dose equivalent outside the reactor building during operation, including the contribution from neutrons and photons scattered back by collisions with air nuclei (the skyshine component). Two different three-dimensional geometrical models for a tokamak fusion reactor based on INTOR design parameters were developed for this study. In the first geometrical model, the reactor geometry and the spatial distribution of the deuterium-tritium neutron source were simplified for a parametric survey. The second geometrical model employed an explicit representation of the toroidal geometry of the reactor chamber and the spatial distribution of the neutron source. The MCNP general Monte Carlo code for neutron and photon transport was used to perform all the calculations. The energy distribution of the neutron source was used explicitly in the calculations with ENDF/B-V data. The dose equivalent results were analyzed as a function of the concrete roof thickness of the reactor building and the location outside the reactor building.

  3. Theory for Deducing Volcanic Activity From Size Distributions in Plinian Pyroclastic Fall Deposits

    NASA Astrophysics Data System (ADS)

    Iriyama, Yu; Toramaru, Atsushi; Yamamoto, Tetsuo

    2018-03-01

    Stratigraphic variation in the grain size distribution (GSD) of plinian pyroclastic fall deposits reflects volcanic activity. To extract information on volcanic activity from the analyses of deposits, we propose a one-dimensional theory that provides a formula connecting the sediment GSD to the source GSD. As the simplest case, we develop a constant-source model (CS model), in which the source GSD and the source height are constant during the duration of release of particles. We assume power laws of particle radii for the terminal fall velocity and the source GSD. The CS model can describe an overall (i.e., entire vertically variable) feature of the GSD structure of the sediment. It is shown that the GSD structure is characterized by three parameters, that is, the duration of supply of particles to the source scaled by the fall time of the largest particle, ts/tM, and the power indices of the terminal fall velocity p and of the source GSD q. We apply the CS model to samples of the Worzel D ash layer and compare the sediment GSD structure calculated by using the CS model to the observed structure. The results show that the CS model reproduces the overall structure of the observed GSD. We estimate the duration of the eruption and the q value of the source GSD. Furthermore, a careful comparison of the observed and calculated GSDs reveals new interpretation of the original sediment GSD structure of the Worzel D ash layer.

  4. An equivalent source model of the satellite-altitude magnetic anomaly field over Australia

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Johnson, B. D.; Langel, R. A.

    1980-01-01

    The low-amplitude, long-wavelength magnetic anomaly field measured between 400 and 700 km elevation over Australia by the POGO satellites is modeled by means of the equivalent source technique. Magnetic dipole moments are computed for a latitude-longitude array of dipole sources on the earth's surface such that the dipoles collectively give rise to a field which makes a least squares best fit to that observed. The distribution of magnetic moments is converted to a model of apparent magnetization contrast in a layer of constant (40 km) thickness, which contains information equivalent to the lateral variation in the vertical integral of magnetization down to the Curie isotherm and can be transformed to a model of variable thickness magnetization. It is noted that the closest equivalent source spacing giving a stable solution is about 2.5 deg, corresponding to about half the mean data elevation, and that the magnetization distribution correlates well with some of the principal tectonic elements of Australia.
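
    The least-squares step of the equivalent source technique can be sketched compactly: build a design matrix whose columns are the fields of unit sources on a surface grid, then solve for the source strengths that best fit the observed anomaly. Monopole sources on a flat grid are used here for brevity (the paper fits dipole moments on a spherical Earth), and all geometry is illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        obs_xy = rng.uniform(0.0, 100.0, (200, 2))          # observation points (km)
        h = 45.0                                            # observation altitude (km)
        gx, gy = np.meshgrid(np.linspace(0, 100, 8), np.linspace(0, 100, 8))
        src_xy = np.column_stack([gx.ravel(), gy.ravel()])  # equivalent source grid

        # design matrix: vertical field of a unit point source at each grid node
        d = np.linalg.norm(obs_xy[:, None, :] - src_xy[None, :, :], axis=2)
        A = h / (d**2 + h**2)**1.5

        anomaly = A @ rng.normal(0.0, 1.0, len(src_xy))     # synthetic "observed" data
        m, *_ = np.linalg.lstsq(A, anomaly, rcond=None)     # fitted source strengths
        print(np.abs(anomaly - A @ m).max())                # residual of the least-squares fit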

  5. Assimilating multi-source uncertainties of a parsimonious conceptual hydrological model using hierarchical Bayesian modeling

    Treesearch

    Wei Wu; James Clark; James Vose

    2010-01-01

    Hierarchical Bayesian (HB) modeling allows for multiple sources of uncertainty by factoring complex relationships into conditional distributions that can be used to draw inference and make predictions. We applied an HB model to estimate the parameters and state variables of a parsimonious hydrological model – GR4J – by coherently assimilating the uncertainties from the...

  6. Discussion of Source Reconstruction Models Using 3D MCG Data

    NASA Astrophysics Data System (ADS)

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed source reconstruction of magnetocardiographic signals generated by human heart activity to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector component data of the MCG. The results show that a distributed source model achieves the best accuracy in the source reconstructions, and that 3D MCG data allow smaller differences between the source models to be resolved.

  7. A simulation-based analytic model of radio galaxies

    NASA Astrophysics Data System (ADS)

    Hardcastle, M. J.

    2018-04-01

    I derive and discuss a simple semi-analytical model of the evolution of powerful radio galaxies which is not based on assumptions of self-similar growth, but rather implements some insights about the dynamics and energetics of these systems derived from numerical simulations, and can be applied to arbitrary pressure/density profiles of the host environment. The model can qualitatively and quantitatively reproduce the source dynamics and synchrotron light curves derived from numerical modelling. Approximate corrections for radiative and adiabatic losses allow it to predict the evolution of radio spectral index and of inverse-Compton emission both for active and `remnant' sources after the jet has turned off. Code to implement the model is publicly available. Using a standard model with a light relativistic (electron-positron) jet, subequipartition magnetic fields, and a range of realistic group/cluster environments, I simulate populations of sources and show that the model can reproduce the range of properties of powerful radio sources as well as observed trends in the relationship between jet power and radio luminosity, and predicts their dependence on redshift and environment. I show that the distribution of source lifetimes has a significant effect on both the source length distribution and the fraction of remnant sources expected in observations, and so can in principle be constrained by observations. The remnant fraction is expected to be low even at low redshift and low observing frequency due to the rapid luminosity evolution of remnants, and to tend rapidly to zero at high redshift due to inverse-Compton losses.

  8. Distributed watershed modeling of design storms to identify nonpoint source loading areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endreny, T.A.; Wood, E.F.

    1999-03-01

    Watershed areas that generate nonpoint source (NPS) polluted runoff need to be identified prior to the design of basin-wide water quality projects. Current watershed-scale NPS models lack a variable source area (VSA) hydrology routine, and are therefore unable to identify spatially dynamic runoff zones. The TOPLATS model used a watertable-driven VSA hydrology routine to identify runoff zones in a 17.5 km^2 agricultural watershed in central Oklahoma. Runoff areas were identified in a static modeling framework as a function of prestorm watertable depth and also in a dynamic modeling framework by simulating basin response to 2, 10, and 25 yr return period 6 h design storms. Variable source area expansion occurred throughout the duration of each 6 h storm and total runoff area increased with design storm intensity. Basin-average runoff rates of 1 mm h^-1 provided little insight into runoff extremes while the spatially distributed analysis identified saturation excess zones with runoff rates equaling effective precipitation. The intersection of agricultural landcover areas with these saturation excess runoff zones targeted the priority potential NPS runoff zones that should be validated with field visits. These intersected areas, labeled as potential NPS runoff zones, were mapped within the watershed to demonstrate spatial analysis options available in TOPLATS for managing complex distributions of watershed runoff. TOPLATS concepts in spatial saturation excess runoff modeling should be incorporated into NPS management models.
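
    The saturation-excess logic can be caricatured in a few lines: a cell becomes a runoff source once its prestorm storage deficit is exhausted, so the source area expands as the storm proceeds. The deficits and storm depth below are invented numbers, not TOPLATS output.

        # Toy variable-source-area bookkeeping: a cell produces saturation-
        # excess runoff once accumulated rain has filled its watertable-
        # controlled storage deficit.
        deficits_mm = [0.0, 4.0, 12.0, 30.0, 2.0, 0.5]   # prestorm deficits, assumed
        rain_mm = 10.0                                    # storm depth so far, assumed

        saturated = [d <= rain_mm for d in deficits_mm]
        runoff_fraction = sum(saturated) / len(saturated)
        print(f"{runoff_fraction:.0%} of cells are saturation-excess source areas")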

  9. The Generalized Quantum Episodic Memory Model.

    PubMed

    Trueblood, Jennifer S; Hemmer, Pernille

    2017-11-01

    Recent evidence suggests that experienced events are often mapped to too many episodic states, including those that are logically or experimentally incompatible with one another. For example, episodic over-distribution patterns show that the probability of accepting an item under different mutually exclusive conditions violates the disjunction rule. A related example, called subadditivity, occurs when the probability of accepting an item under mutually exclusive and exhaustive instruction conditions sums to a number >1. Both the over-distribution effect and subadditivity have been widely observed in item and source-memory paradigms. These phenomena are difficult to explain using standard memory frameworks, such as signal-detection theory. A dual-trace model called the over-distribution (OD) model (Brainerd & Reyna, 2008) can explain the episodic over-distribution effect, but not subadditivity. Our goal is to develop a model that can explain both effects. In this paper, we propose the Generalized Quantum Episodic Memory (GQEM) model, which extends the Quantum Episodic Memory (QEM) model developed by Brainerd, Wang, and Reyna (2013). We test GQEM by comparing it to the OD model using data from a novel item-memory experiment and a previously published source-memory experiment (Kellen, Singmann, & Klauer, 2014) examining the over-distribution effect. Using the best-fit parameters from the over-distribution experiments, we conclude by showing that the GQEM model can also account for subadditivity. Overall these results add to a growing body of evidence suggesting that quantum probability theory is a valuable tool in modeling recognition memory. Copyright © 2016 Cognitive Science Society, Inc.
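
    The quantum-probability explanation of subadditivity can be made concrete with a toy calculation: when the mutually exclusive instruction conditions correspond to non-orthogonal projectors, acceptance probabilities can legitimately sum past 1. The sketch below is a generic two-dimensional illustration of this mechanism, not the GQEM model itself; the state and projector angles are invented.

        # Acceptance probability under each instruction condition is the
        # squared projection of the memory state onto that condition's
        # subspace.  With non-commuting projectors the three probabilities
        # need not sum to 1, which is the subadditive pattern.
        import numpy as np

        psi = np.array([1.0, 0.0])                 # memory state, assumed
        def projector(angle):
            v = np.array([np.cos(angle), np.sin(angle)])
            return np.outer(v, v)

        probs = [psi @ projector(a) @ psi for a in (0.0, np.pi / 5, 2 * np.pi / 5)]
        print(sum(probs))                          # about 1.75, i.e. > 1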

  10. Electric Transport Traction Power Supply System With Distributed Energy Sources

    NASA Astrophysics Data System (ADS)

    Abramov, E. Y.; Schurov, N. I.; Rozhkova, M. V.

    2016-04-01

    The paper states the problem of leveling the daily load curve of a traction substation (TSS) for urban electric transport. A circuit for a traction power supply system (TPSS) with a distributed autonomous energy source (AES) based on photovoltaic (PV) and energy storage (ES) units is presented. A distribution algorithm of power flow for leveling the daily traction load curve is also introduced. In addition, an implemented experimental model of the power supply system is described.
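
    A minimal sketch of the leveling idea, not the authors' algorithm: discharge the storage when the traction load is above its daily mean and recharge when it is below, subject to power and energy limits. All ratings and the load curve are invented.

        # Greedy load leveling with an energy-storage unit (1 h time steps).
        load_kw = [300, 280, 520, 760, 650, 900, 840, 500, 350, 300]  # hourly, assumed
        p_max, e_max, soc = 200.0, 600.0, 300.0   # kW, kWh, kWh; assumed ratings
        mean = sum(load_kw) / len(load_kw)

        levelled = []
        for p in load_kw:
            x = max(-p_max, min(p_max, p - mean))              # desired ES discharge (kW)
            x = min(x, soc) if x > 0 else max(x, soc - e_max)  # respect stored energy
            soc -= x
            levelled.append(p - x)                             # load seen by the TSS
        print([round(v) for v in levelled])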

  11. Incorporation of a spatial source distribution and a spatial sensor sensitivity in a laser ultrasound propagation model using a streamlined Huygens' principle.

    PubMed

    Laloš, Jernej; Babnik, Aleš; Možina, Janez; Požar, Tomaž

    2016-03-01

    The near-field, surface-displacement waveforms in plates are modeled using interwoven concepts of Green's function formalism and streamlined Huygens' principle. Green's functions resemble the building blocks of the sought displacement waveform, superimposed and weighted according to the simplified distribution. The approach incorporates an arbitrary circular spatial source distribution and an arbitrary circular spatial sensitivity in the area probed by the sensor. The displacement histories for uniform, Gaussian and annular normal-force source distributions and the uniform spatial sensor sensitivity are calculated, and the corresponding weight distributions are compared. To demonstrate the applicability of the developed scheme, measurements of laser ultrasound induced solely by the radiation pressure are compared with the calculated waveforms. The ultrasound is induced by laser pulse reflection from the mirror-surface of a glass plate. The measurements show excellent agreement not only with respect to various wave-arrivals but also in the shape of each arrival. Their shape depends on the beam profile of the excitation laser pulse and its corresponding spatial normal-force distribution. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Dispersion modeling of polycyclic aromatic hydrocarbons from combustion of biomass and fossil fuels and production of coke in Tianjin, China.

    PubMed

    Tao, Shu; Li, Xinrong; Yang, Yu; Coveney, Raymond M; Lu, Xiaoxia; Chen, Haitao; Shen, Weiran

    2006-08-01

    A USEPA procedure, ISCLT3 (Industrial Source Complex Long-Term), was applied to model the spatial distribution of polycyclic aromatic hydrocarbons (PAHs) emitted from various sources, including coal, petroleum, natural gas, and biomass, into the atmosphere of Tianjin, China. Benzo[a]pyrene equivalent concentrations (BaPeq) were calculated for risk assessment. Model results were provisionally validated for concentrations and profiles based on the observed data at two monitoring stations. The dominant emission sources in the area were domestic coal combustion, coke production, and biomass burning. Mainly because of differences in emission heights, the contributions of the various sources to the average concentrations at receptors differ from the proportions emitted. The share of domestic coal increased from approximately 43% at the sources to 56% at the receptors, while the contribution of the coking industry decreased from approximately 23% at the sources to 7% at the receptors. The spatial distributions of gaseous and particulate PAHs were similar, with higher concentrations occurring within urban districts because of domestic coal combustion. With relatively smaller contributions, the other minor sources had limited influence on the overall spatial distribution. The calculated average BaPeq value in air was 2.54 +/- 2.87 ng/m3 on an annual basis. Although only 2.3% of the area of Tianjin exceeded the national standard of 10 ng/m3, 41% of the entire population lives within this area.
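
    The BaPeq arithmetic itself is a weighted sum; a minimal sketch, with BaP's toxic equivalence factor equal to 1 by definition and all other TEFs and concentrations as illustrative placeholders:

        # BaP-equivalent (BaPeq) bookkeeping: weight each PAH concentration by
        # its toxic equivalence factor (TEF) relative to benzo[a]pyrene and sum.
        tef = {"benzo[a]pyrene": 1.0,        # 1 by definition
               "benzo[a]anthracene": 0.1,    # illustrative TEF
               "chrysene": 0.01}             # illustrative TEF
        conc_ng_m3 = {"benzo[a]pyrene": 1.2, # hypothetical ambient levels
                      "benzo[a]anthracene": 3.5,
                      "chrysene": 6.0}
        bapeq = sum(conc_ng_m3[pah] * tef[pah] for pah in conc_ng_m3)
        print(f"BaPeq = {bapeq:.2f} ng/m3")  # 1.2 + 0.35 + 0.06 = 1.61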

  13. Using WNTR to Model Water Distribution System Resilience

    EPA Science Inventory

    The Water Network Tool for Resilience (WNTR) is a new open source Python package developed by the U.S. Environmental Protection Agency and Sandia National Laboratories to model and evaluate resilience of water distribution systems. WNTR can be used to simulate a wide range of di...
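
    For orientation, WNTR's basic workflow (as documented for the package) loads an EPANET input file, runs a hydraulic simulation, and returns results as pandas objects; a minimal sketch, assuming a local EPANET file Net3.inp is available:

        # Build a network model, run a hydraulic simulation, pull node pressures.
        # Resilience metrics and disruption scenarios are layered on top of this.
        import wntr

        wn = wntr.network.WaterNetworkModel('Net3.inp')
        sim = wntr.sim.EpanetSimulator(wn)
        results = sim.run_sim()
        pressure = results.node['pressure']   # pandas DataFrame: time x node
        print(pressure.head())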

  14. Design methodology for micro-discrete planar optics with minimum illumination loss for an extended source.

    PubMed

    Shim, Jongmyeong; Park, Changsu; Lee, Jinhyung; Kang, Shinill

    2016-08-08

    Recently, studies have examined techniques for modeling the light distribution of light-emitting diodes (LEDs) for various applications owing to their low power consumption, longevity, and light weight. The energy mapping technique, a design method that matches the energy distributions of an LED light source and a target area, has been the focus of active research because of its design efficiency and accuracy. However, these studies have not considered the effects of the emitting area of the LED source, so there are limits to the design accuracy for small, high-power applications with a short distance between the light source and the optical system. A design method for compensating for the light distribution of an extended source after an initial optics design based on a point source was proposed to overcome such limits, but its time-consuming process and limited design accuracy with multiple iterations raised the need for a new design method that considers an extended source in the initial design stage. This study proposed a method for designing discrete planar optics that controls the light distribution and minimizes the optical loss with an extended source, and verified the proposed method experimentally. First, the extended source was modeled theoretically, and a design method for discrete planar optics with the optimum groove angle through energy mapping was proposed. To verify the design method, discrete planar optics were designed for LED flash illumination. In addition, discrete planar optics for LED illumination were designed and fabricated to create a uniform illuminance distribution. Optical characterization of these structures showed that the design was optimal; i.e., we plotted the optical losses as a function of the groove angle and found a clear minimum. Simulations and measurements showed that an efficient optical design was achieved for an extended source.
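
    A crude version of the energy-mapping step can be written in a few lines: match the cumulative energy of the source's angular emission to the cumulative energy required across the target, so each emission angle is assigned a target radius. The distributions below (a Lambertian-like point source and a uniformly illuminated disk) are illustrative placeholders, not the extended-source model of the paper.

        # Energy mapping by matching cumulative distribution functions:
        # equal fractions of emitted energy are routed to equal fractions
        # of required target energy.
        import numpy as np

        theta = np.linspace(0.0, np.pi / 2, 200)              # emission angles
        src_cdf = np.cumsum(np.sin(theta) * np.cos(theta))    # Lambertian-like
        src_cdf /= src_cdf[-1]

        r = np.linspace(0.0, 1.0, 200)                        # target radii (normalized)
        tgt_cdf = r**2                                        # uniform-irradiance disk

        r_of_theta = np.interp(src_cdf, tgt_cdf, r)           # angle -> radius mapping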

  15. Modeling of mineral dust in the atmosphere: Sources, transport, and optical thickness

    NASA Technical Reports Server (NTRS)

    Tegen, Ina; Fung, Inez

    1994-01-01

    A global three-dimensional model of the atmospheric mineral dust cycle is developed for the study of its impact on the radiative balance of the atmosphere. The model includes four size classes of mineral dust, whose source distributions are based on the distributions of vegetation, soil texture and soil moisture. Uplift and deposition are parameterized using analyzed winds and rainfall statistics that resolve high-frequency events. Dust transport in the atmosphere is simulated with the tracer transport model of the Goddard Institute for Space Studies. The simulated seasonal variations of dust concentrations show generally reasonable agreement with the observed distributions, as do the size distributions at several observing sites. The discrepancies between the simulated and the observed dust concentrations point to regions of significant land surface modification. Monthly distributions of aerosol optical depth are calculated from the distribution of dust particle sizes. The maximum optical depth due to dust is 0.4-0.5 in the seasonal mean. The main uncertainties, about a factor of 3-5, in calculating optical thicknesses arise from the crude resolution of soil particle sizes, from insufficient constraint by the total dust loading in the atmosphere, and from our ignorance about adhesion, agglomeration, uplift, and size distributions of fine dust particles (less than 1 micrometer).

  16. Use of the ventricular propagated excitation model in the magnetocardiographic inverse problem for reconstruction of electrophysiological properties.

    PubMed

    Ohyu, Shigeharu; Okamoto, Yoshiwo; Kuriki, Shinya

    2002-06-01

    A novel magnetocardiographic inverse method for reconstructing the action potential amplitude (APA) and the activation time (AT) on the ventricular myocardium is proposed. This method is based on the propagated excitation model, in which the excitation propagates through the ventricle with a nonuniform height of the action potential. A stepwise waveform of the transmembrane potential is assumed in the model. The spatial gradient of the transmembrane potential, which is defined by the APA and AT distributed in the ventricular wall, is used for the computation of a current source distribution. Based on this source model, the distributions of APA and AT are inversely reconstructed from the QRS interval of the magnetocardiogram (MCG) utilizing a maximum a posteriori approach. The proposed reconstruction method was tested through computer simulations. Stability of the method with respect to measurement noise was demonstrated. When the reference APA was provided as a uniform distribution, root-mean-square errors of the estimated APA were below 10 mV for MCG signal-to-noise ratios greater than or equal to 20 dB. Low-amplitude regions located at several sites in the reference APA distributions were correctly reproduced in the reconstructed APA distributions. The goal of our study is to develop a method for detecting myocardial ischemia through the depression of reconstructed APA distributions.
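
    Under a linear forward model b = L s + noise, Gaussian prior and noise terms make the maximum a posteriori estimate equivalent to Tikhonov-regularized least squares, s_hat = (L^T L + lam I)^-1 L^T b. The toy sketch below illustrates only this generic MAP step, not the full APA/AT parameterization; the lead-field matrix and regularization weight are invented.

        # MAP estimate for a linear inverse problem with Gaussian prior/noise.
        import numpy as np

        rng = np.random.default_rng(1)
        L = rng.normal(size=(64, 200))          # lead field: sources -> channels
        s_true = np.zeros(200); s_true[50:60] = 1.0
        b = L @ s_true + 0.05 * rng.normal(size=64)

        lam = 1.0                               # noise-to-prior ratio, assumed
        s_map = np.linalg.solve(L.T @ L + lam * np.eye(200), L.T @ b)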

  17. Accommodating Binary and Count Variables in Mediation: A Case for Conditional Indirect Effects

    ERIC Educational Resources Information Center

    Geldhof, G. John; Anthony, Katherine P.; Selig, James P.; Mendez-Luck, Carolyn A.

    2018-01-01

    The existence of several accessible sources has led to a proliferation of mediation models in the applied research literature. Most of these sources assume endogenous variables (e.g., M, and Y) have normally distributed residuals, precluding models of binary and/or count data. Although a growing body of literature has expanded mediation models to…

  18. Efficient Calibration of Distributed Catchment Models Using Perceptual Understanding and Hydrologic Signatures

    NASA Astrophysics Data System (ADS)

    Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.

    2015-12-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state-space, and to efficiently search a large, complex parameter space for behavioural parameter sets that produce predictions falling within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The search is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
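
    The interval logic is simple to state in code: a parameter set is behavioural only if every metric lands inside its acceptability interval, and the collection of intervals defines the hyper-volume to be found and populated. A schematic filter, with hypothetical metric names and bounds (not values from the Plynlimon study):

        # A candidate parameter set is "behavioural" when all of its
        # signature metrics fall inside their acceptability intervals.
        intervals = {"runoff_ratio": (0.4, 0.7),
                     "baseflow_index": (0.3, 0.6),
                     "peak_q_error": (-0.2, 0.2)}

        def is_behavioural(metrics):
            return all(lo <= metrics[name] <= hi
                       for name, (lo, hi) in intervals.items())

        candidate = {"runoff_ratio": 0.55, "baseflow_index": 0.41, "peak_q_error": 0.05}
        print(is_behavioural(candidate))   # True: inside the hyper-volume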

  19. Assessment of spatial distribution of soil heavy metals using ANN-GA, MSLR and satellite imagery.

    PubMed

    Naderi, Arman; Delavar, Mohammad Amir; Kaboudin, Babak; Askari, Mohammad Sadegh

    2017-05-01

    This study aims to assess and compare heavy metal distribution models developed using stepwise multiple linear regression (MSLR) and a neural network-genetic algorithm model (ANN-GA) based on satellite imagery. The sources of the heavy metals were also explored using the local Moran index. Soil samples (n = 300) were collected on a grid, and pH, organic matter, clay and iron oxide contents and cadmium (Cd), lead (Pb) and zinc (Zn) concentrations were determined for each sample. Visible/near-infrared reflectance (VNIR) within the electromagnetic ranges of the satellite imagery was applied to estimate heavy metal concentrations in the soil using the MSLR and ANN-GA models. The models were evaluated, and the ANN-GA model demonstrated higher accuracy; the autocorrelation results showed significant clusters of heavy metals around the industrial zone. Higher concentrations of Cd, Pb and Zn were noted under industrial land and irrigation farming in comparison to barren land and dryland farming. Accumulation of industrial wastes along roads and streams was identified as the main source of pollution, and the concentration of soil heavy metals decreased with increasing distance from these sources. In comparison to MSLR, ANN-GA provided a more accurate indirect assessment of heavy metal concentrations in highly polluted soils. The clustering analysis provided reliable information about the spatial distribution of soil heavy metals and their sources.

  20. [Nitrogen non-point source pollution identification based on ArcSWAT in Changle River].

    PubMed

    Deng, Ou-Ping; Sun, Si-Yang; Lü, Jun

    2013-04-01

    The ArcSWAT (Soil and Water Assessment Tool) model was adopted for non-point source (NPS) nitrogen pollution modeling and nitrogen source apportionment in the Changle River watershed, a typical agricultural watershed in Southeast China. Water quality and hydrological parameters were monitored, and the watershed's natural conditions (including soil, climate and land use) and pollution source information were investigated and collected for the SWAT database. The ArcSWAT model was established for the Changle River after calibration and validation of the model parameters. Based on the validated SWAT model, the contributions of different nitrogen sources to river TN loading were quantified, and the spatial-temporal distributions of NPS nitrogen export to rivers were addressed. The results showed that in the Changle River watershed, nitrogen fertilizer, atmospheric nitrogen deposition and the soil nitrogen pool were the prominent pollution sources, contributing 35%, 32% and 25% of the river TN loading, respectively. There were spatial-temporal variations in the critical sources of NPS TN export to the river. Natural sources, such as the soil nitrogen pool and atmospheric nitrogen deposition, should be targeted as the critical sources of river TN pollution during the rainy seasons, whereas chemical nitrogen fertilizer application should be targeted during the crop growing season. Chemical nitrogen fertilizer application, the soil nitrogen pool and atmospheric nitrogen deposition were the main sources of TN exported from garden plots, forest and residential land, respectively, and they were also the main sources of TN exported from both upland and paddy fields. These results reveal that NPS pollution control measures should account for the spatio-temporal distribution of NPS pollution sources.

  1. Numerical Simulations of Flow Separation Control in Low-Pressure Turbines using Plasma Actuators

    NASA Technical Reports Server (NTRS)

    Suzen, Y. B.; Huang, P. G.; Ashpis, D. E.

    2007-01-01

    A recently introduced phenomenological model to simulate flow control applications using plasma actuators has been further developed and improved in order to expand its use to complicated actuator geometries. The new modeling approach eliminates the requirement of an empirical charge density distribution shape by using the embedded electrode as a source for the charge density. The approach incorporates the effect of the plasma actuators on the external flow into Navier-Stokes computations as a body force vector which is obtained as the product of the net charge density and the electric field. The model solves the Maxwell equation to obtain the electric field due to the applied AC voltage at the electrodes, together with an additional equation for the charge density distribution representing the plasma density. The charge density equation is solved in the computational domain with the embedded electrode treated as a source, therefore automatically generating a charge density distribution on the surface exposed to the flow similar to that observed in the experiments, without explicitly specifying an empirical distribution. The resulting model is validated against a flat plate experiment in a quiescent environment.

  2. Systematically biological prioritizing remediation sites based on datasets of biological investigations and heavy metals in soil

    NASA Astrophysics Data System (ADS)

    Lin, Wei-Chih; Lin, Yu-Pin; Anthony, Johnathen

    2015-04-01

    Heavy metal pollution has adverse effects not only on the focal invertebrate species of this study, such as reduction in pupa weight and increased larval mortality, but also on the higher trophic level organisms which feed on them, either directly or indirectly, through the process of biomagnification. Despite this, few studies regarding remediation prioritization take species distributions or biological conservation priorities into consideration. This study develops a novel approach for delineating sites which are both contaminated by any of 5 readily bioaccumulated heavy metal soil contaminants and are of high ecological importance for the highly mobile, low trophic level focal species. The conservation priority of each site was based on the projected distributions of 6 moth species, simulated via the presence-only maximum entropy species distribution model followed by the application of a systematic conservation tool. In order to increase the number of available samples, we also integrated crowd-sourced data with professionally collected data via a novel optimization procedure based on a simulated annealing algorithm. This integration step matters because, while crowd-sourced data can drastically increase the number of samples available to ecologists, their quality and reliability can be called into question, adding yet another source of uncertainty to projections of species distributions. The optimization method screens crowd-sourced data in terms of the environmental variables that correspond to the professionally collected data. The sample distribution data were derived from two different sources: the EnjoyMoths project in Taiwan (crowd-sourced data) and Global Biodiversity Information Facility (GBIF) field data (professional data). The distributions of heavy metal concentrations were generated via 1000 iterations of a geostatistical co-simulation approach, and the uncertainties in the distributions of the heavy metals were then quantified based on the overall consistency between realizations. Finally, Information-Gap Decision Theory (IGDT) was applied to rank the remediation priorities of contaminated sites in terms of both the spatial consensus of multiple heavy metal realizations and the priority of specific conservation areas. Our results show that the crowd-sourced optimization algorithm developed in this study is effective at selecting suitable records from crowd-sourced data. Using this technique, the available samples increased to totals of 96, 162, 72, 62, 69 and 62 for the six species, i.e., 2.6, 1.6, 2.5, 1.6, 1.2 and 1.8 times the numbers originally available through the GBIF professionally assembled database. Additionally, for all species considered, the performance of models based on the combination of both data sources, in terms of test-AUC values, exceeded that of models based on a single data source. Furthermore, the additional optimization-selected data lowered the overall variability, and therefore uncertainty, of the model outputs. Based on the projected species distributions, around 30% of high species hotspot areas were also identified as contaminated. The decision-making tool, IGDT, successfully yielded remediation plans in terms of specific ecological value requirements, false positive tolerance rates of contaminated areas, and expected decision robustness. The proposed approach can be applied both to identify high conservation priority sites contaminated by heavy metals, based on the combination of screened crowd-sourced and professionally collected data, and to make robust remediation decisions.
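
    A generic simulated annealing skeleton of the kind such a screening step could use is sketched below: toggle one crowd-sourced record in or out of the accepted subset, accept cost increases with probability exp(-delta/T), and cool the temperature. The cost function (mismatch against the environmental variables of the professional data) is a stand-in, as is every name here.

        # Simulated annealing over subsets of crowd-sourced records.
        import math, random

        def anneal(records, cost, n_iter=10000, t0=1.0, cooling=0.999):
            subset = set()
            cur = cost(subset)
            best, best_subset = cur, set(subset)
            t = t0
            for _ in range(n_iter):
                cand = set(subset)
                cand.symmetric_difference_update({random.choice(records)})  # toggle one
                c = cost(cand)
                # Always accept improvements; accept worse subsets with
                # probability exp(-(c - cur) / t) so the search can escape
                # local minima while t is still high.
                if c < cur or random.random() < math.exp((cur - c) / t):
                    subset, cur = cand, c
                    if cur < best:
                        best, best_subset = cur, set(subset)
                t *= cooling
            return best_subset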

  3. Using discharge data to reduce structural deficits in a hydrological model with a Bayesian inference approach and the implications for the prediction of critical source areas

    NASA Astrophysics Data System (ADS)

    Frey, M. P.; Stamm, C.; Schneider, M. K.; Reichert, P.

    2011-12-01

    A distributed hydrological model was used to simulate the distribution of fast runoff formation as a proxy for critical source areas for herbicide pollution in a small agricultural catchment in Switzerland. We tested to what degree predictions based on prior knowledge without local measurements could be improved by relying on observed discharge. This learning process consisted of five steps: For the prior prediction (step 1), knowledge of the model parameters was coarse and predictions were fairly uncertain. In the second step, discharge data were used to update the prior parameter distribution. Effects of uncertainty in input data and model structure were accounted for by an autoregressive error model. This step decreased the width of the marginal distributions of parameters describing the lower boundary (percolation rates) but hardly affected soil hydraulic parameters. Residual analysis (step 3) revealed model structure deficits. We modified the model, and in the subsequent Bayesian updating (step 4) the widths of the posterior marginal distributions were reduced for most parameters compared to those of the prior. This incremental procedure led to a strong reduction in the uncertainty of the spatial prediction. Thus, despite only using spatially integrated data (discharge), the improved model structure can be expected to improve the spatially distributed predictions as well. The fifth step consisted of a test with independent spatial data on herbicide losses and revealed ambiguous results. The comparison depended critically on the ratio of event to preevent water that was discharged. This ratio cannot be estimated from hydrological data only. The results demonstrate that the value of local data is strongly dependent on a correct model structure. An iterative procedure of Bayesian updating, model testing, and model modification is suggested.
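
    Steps 2 and 4 are, in practice, posterior sampling. A one-parameter random-walk Metropolis sampler is the smallest possible stand-in: everything named below is illustrative, and a real application would sample the full parameter vector under the autoregressive error model.

        # Random-walk Metropolis: propose, then accept with probability
        # min(1, posterior ratio).
        import numpy as np

        def metropolis(log_post, theta0, step, n, seed=2):
            rng = np.random.default_rng(seed)
            chain, lp = [theta0], log_post(theta0)
            for _ in range(n):
                prop = chain[-1] + step * rng.normal()
                lp_prop = log_post(prop)
                if np.log(rng.uniform()) < lp_prop - lp:
                    chain.append(prop); lp = lp_prop
                else:
                    chain.append(chain[-1])
            return np.array(chain)

        # Usage with a standard-normal toy posterior:
        chain = metropolis(lambda th: -0.5 * th**2, 0.0, 0.5, 5000)
        print(chain.mean(), chain.std())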

  4. A GIS-based time-dependent seismic source modeling of Northern Iran

    NASA Astrophysics Data System (ADS)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 kilometers around Tehran. Previous studies and reports were reviewed to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale, and duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
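
    The time-independent half of such a model reduces to familiar arithmetic: a Gutenberg-Richter law log10 N(>=M) = a - b*M gives an annual exceedance rate, and the Poisson assumption turns that rate into an exceedance probability over an exposure time. The constants below are invented, not values for Northern Iran.

        # Poisson exceedance probability from a Gutenberg-Richter rate.
        import math

        a, b = 4.2, 1.0              # hypothetical regional G-R constants
        M, t = 6.0, 50.0             # target magnitude, exposure time (yr)
        rate = 10 ** (a - b * M)     # events per year with magnitude >= M
        p_exceed = 1.0 - math.exp(-rate * t)
        print(f"rate = {rate:.4f}/yr, P(at least one in {t:.0f} yr) = {p_exceed:.2f}")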

  5. Real-time identification of indoor pollutant source positions based on neural network locator of contaminant sources and optimized sensor networks.

    PubMed

    Vukovic, Vladimir; Tabares-Velasco, Paulo Cesar; Srebric, Jelena

    2010-09-01

    A growing interest in security and occupant exposure to contaminants revealed a need for fast and reliable identification of contaminant sources during incidental situations. To determine potential contaminant source positions in outdoor environments, current state-of-the-art modeling methods use computational fluid dynamic simulations on parallel processors. In indoor environments, current tools match accidental contaminant distributions with cases from precomputed databases of possible concentration distributions. These methods require intensive computations in pre- and postprocessing. On the other hand, neural networks emerged as a tool for rapid concentration forecasting of outdoor environmental contaminants such as nitrogen oxides or sulfur dioxide. All of these modeling methods depend on the type of sensors used for real-time measurements of contaminant concentrations. A review of the existing sensor technologies revealed that no perfect sensor exists, but the intensity of work in this area promises improved sensors in the near future. The main goal of the presented research study was to extend neural network modeling from outdoor to indoor identification of source positions, making this technology applicable to building indoor environments. The developed neural network Locator of Contaminant Sources was also used to optimize the number and allocation of contaminant concentration sensors for real-time prediction of indoor contaminant source positions. Such prediction should take place within seconds after receiving real-time contaminant concentration sensor data. For the purpose of neural network training, a multizone program provided distributions of contaminant concentrations for known source positions throughout a test building. Trained networks had an output indicating contaminant source positions based on measured concentrations in different building zones. A validation case based on a real building layout and experimental data demonstrated the ability of this method to identify contaminant source positions. Future research intentions are focused on integration with real sensor networks and model improvements for more complicated contamination scenarios.
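
    As an illustration of the locator concept (not the authors' network or training data), the sketch below trains a small classifier to map a vector of zone concentrations, of the kind a multizone model would generate, to the index of the source zone; scikit-learn's MLPClassifier is used for brevity.

        # Train on synthetic (concentration vector -> source zone) pairs.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(4)
        n_zones = 12
        X, y = [], []
        for src in range(n_zones):
            for _ in range(50):                   # 50 scenarios per source zone
                c = rng.exponential(0.1, n_zones)
                c[src] += 1.0                     # highest level near the source
                X.append(c); y.append(src)

        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        clf.fit(np.array(X), np.array(y))
        print("predicted source zone:", clf.predict([X[0]])[0])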

  6. Distributed Source Modeling of Language with Magnetoencephalography: Application to Patients with Intractable Epilepsy

    PubMed Central

    McDonald, Carrie R.; Thesen, Thomas; Hagler, Donald J.; Carlson, Chad; Devinsky, Orrin; Kuzniecky, Rubin; Barr, William; Gharapetian, Lusineh; Trongnetrpunya, Amy; Dale, Anders M.; Halgren, Eric

    2009-01-01

    Purpose: To examine distributed patterns of language processing in healthy controls and patients with epilepsy using magnetoencephalography (MEG), and to evaluate the concordance between laterality of distributed MEG sources and language laterality as determined by the intracarotid amobarbital procedure (IAP). Methods: MEG was performed in ten healthy controls using an anatomically constrained, noise-normalized distributed source solution (dSPM). Distributed source modeling of language was then applied to eight patients with intractable epilepsy. Average source strengths within temporoparietal and frontal lobe regions of interest (ROIs) were calculated, and the laterality of activity within ROIs during discrete time windows was compared to results from the IAP. Results: In healthy controls, dSPM revealed activity in visual cortex bilaterally from ~80-120 ms in response to novel words and sensory control stimuli (i.e., false fonts). Activity then spread to fusiform cortex at ~160-200 ms, and was dominated by left hemisphere activity in response to novel words. From ~240-450 ms, novel words produced activity that was left-lateralized in frontal and temporal lobe regions, including anterior and inferior temporal, temporal pole, and pars opercularis, as well as bilaterally in posterior superior temporal cortex. Analysis of patient data with dSPM demonstrated that from 350-450 ms, laterality of temporoparietal sources agreed with the IAP 75% of the time, whereas laterality of frontal MEG sources agreed with the IAP in all eight patients. Discussion: Our results reveal that dSPM can unveil the timing and spatial extent of language processes in patients with epilepsy and may enhance knowledge of language lateralization and localization for use in preoperative planning. PMID:19552656

  7. Biosecurity and Open-Source Biology: The Promise and Peril of Distributed Synthetic Biological Technologies.

    PubMed

    Evans, Nicholas G; Selgelid, Michael J

    2015-08-01

    In this article, we raise ethical concerns about the potential misuse of open-source biology (OSB): biological research and development that progresses through an organisational model of radical openness, deskilling, and innovation. We compare this organisational structure to that of the open-source software model, and detail salient ethical implications of this model. We demonstrate that OSB, in virtue of its commitment to openness, may be resistant to governance attempts.

  8. Simulating of the measurement-device independent quantum key distribution with phase randomized general sources

    PubMed Central

    Wang, Qin; Wang, Xiang-Bin

    2014-01-01

    We present a model for simulating the measurement-device independent quantum key distribution (MDI-QKD) with phase randomized general sources. It can be used to predict experimental observations of a MDI-QKD with linear channel loss, simulating corresponding values for the gains, the error rates in different bases, and also the final key rates. Our model can be applied to MDI-QKDs with an arbitrary probabilistic mixture of different photon states or using any coding scheme. Therefore, it is useful in characterizing and evaluating the performance of the MDI-QKD protocol, making it a valuable tool for studying quantum key distribution. PMID:24728000

  9. Long distance measurement-device-independent quantum key distribution with entangled photon sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Feihu; Qi, Bing; Liao, Zhongfa

    2013-08-05

    We present a feasible method that can make quantum key distribution (QKD) both ultra-long-distance and immune to all attacks in the detection system. This method is called measurement-device-independent QKD (MDI-QKD) with entangled photon sources in the middle. By proposing a model and simulating a QKD experiment, we find that MDI-QKD with one entangled photon source can tolerate 77 dB loss (367 km standard fiber) in the asymptotic limit and 60 dB loss (286 km standard fiber) in the finite-key case with state-of-the-art detectors. Our general model can also be applied to other non-QKD experiments involving entanglement and Bell state measurements.

  10. Calculation and analysis of the non-point source pollution in the upstream watershed of the Panjiakou Reservoir, People's Republic of China

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Tang, L.

    2007-05-01

    Panjiakou Reservoir is an important drinking water resource in the Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. Recently, the water pollution in the reservoir has become more serious owing to non-point source pollution as well as point source pollution in the upstream watershed. To effectively manage the reservoir and watershed and develop a plan to reduce pollutant loads, the loading of non-point and point pollution and their distribution on the upstream watershed must be understood fully. The SWAT model is used to simulate the production and transportation of the non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years, and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are all derived from the DEM with ArcGIS software. The soil and land use data are reclassified and the soil physical properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area. The results of the calibration show that the model performs fairly well. The calibrated model was then used to calculate the loadings of non-point source pollutants for a wet year, a normal year and a dry year, respectively. The time and space distributions of flow, sediment and non-point source pollution were analyzed based on the simulated results. The calculated loadings differ markedly among hydrologic years: the loading of non-point source pollution is relatively large in the wet year but small in the dry year, since the non-point source pollutants are mainly transported by runoff. The pollution loading within a year is mainly produced in the flood season. Because SWAT is a distributed model, it is possible to view model output as it varies across the basin, so the critical areas and reaches can be found in the study area. According to the simulation results, different land uses yield different results, and fertilization in the rainy season has an important impact on the non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution for the Panjiakou Reservoir are presented according to the analysis of the model calculation results.

  11. Comparison of two trajectory based models for locating particle sources for two rural New York sites

    NASA Astrophysics Data System (ADS)

    Zhou, Liming; Hopke, Philip K.; Liu, Wei

    Two back trajectory-based statistical models, simplified quantitative transport bias analysis (QTBA) and residence-time weighted concentrations (RTWC), have been compared for their capabilities of identifying likely locations of source emissions contributing to observed particle concentrations at Potsdam and Stockton, New York. Quantitative transport bias analysis attempts to take into account the distribution of concentrations around the directions of the back trajectories. In the full QTBA approach, deposition processes (wet and dry) are also considered; simplified QTBA omits the consideration of deposition. It is best used with multiple site data. Similarly, the RTWC approach uses concentrations measured at different sites along with the back trajectories to distribute the concentration contributions across the spatial domain of the trajectories. In this study, these models are used in combination with the source contribution values obtained by the previous positive matrix factorization analysis of particle composition data from Potsdam and Stockton. The six common sources for the two sites, sulfate, soil, zinc smelter, nitrate, wood smoke and copper smelter, were analyzed. The results of the two methods are consistent and locate large and clearly defined sources well. The RTWC approach can find more minor sources but may also give unrealistic estimations of the source locations.
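
    A bare-bones rendering of the residence-time weighting idea: every trajectory point falling in a grid cell credits the cell with the concentration observed at the receptor, and the cell estimate is the average credit per hour of residence. The trajectory format and the cell_of function are assumptions, not the published implementation.

        # Residence-time weighted concentrations on a grid.
        from collections import defaultdict

        def rtwc(trajectories, cell_of):
            # trajectories: iterable of (receptor_concentration, [points]),
            # where each point is one trajectory hour; cell_of maps a point
            # to a grid-cell key.
            num, den = defaultdict(float), defaultdict(float)
            for conc, points in trajectories:
                for pt in points:
                    c = cell_of(pt)
                    num[c] += conc      # concentration credited to the cell
                    den[c] += 1.0       # hours of residence in the cell
            return {c: num[c] / den[c] for c in num}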

  12. Estimation of in-canopy ammonia sources and sinks in a fertilized Zea mays field

    EPA Science Inventory

    An analytical model was developed that describes the in-canopy vertical distribution of NH3 source and sinks and vertical fluxes in a fertilized agricultural setting using measured in-canopy concentration and wind speed profiles. This model was applied to quantify in-canopy air-s...

  13. Capturing microbial sources distributed in a mixed-use watershed within an integrated environmental modeling workflow

    EPA Science Inventory

    Many watershed models simulate overland and instream microbial fate and transport, but few provide loading rates on land surfaces and point sources to the waterbody network. This paper describes the underlying equations for microbial loading rates associated with 1) land-applied ...

  14. Capturing microbial sources distributed in a mixed-use watershed within an integrated environmental modeling workflow

    USDA-ARS?s Scientific Manuscript database

    Many watershed models simulate overland and instream microbial fate and transport, but few provide loading rates on land surfaces and point sources to the waterbody network. This paper describes the underlying equations for microbial loading rates associated with 1) land-applied manure on undevelope...

  15. The pressure distribution for biharmonic transmitting array: theoretical study

    NASA Astrophysics Data System (ADS)

    Baranowska, A.

    2005-03-01

    The aim of the paper is a theoretical analysis of the finite amplitude wave interaction problem for a biharmonic transmitting array. We assume that the array consists of 16 circular pistons of the same dimensions that are grouped into two sections. Two different arrangements of radiating elements were considered. In this situation the radiating surface is non-continuous and without axial symmetry. The mathematical model was built on the basis of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation. To solve the problem the finite-difference method was applied. The on-axis pressure amplitude of waves of different frequencies as a function of distance from the source, the transverse pressure distribution of these waves at fixed distances from the source, and the pressure amplitude distribution at fixed planes were examined. In particular, changes of the normalized pressure amplitude of the difference-frequency wave were studied. The paper presents the mathematical model and some results of theoretical investigations obtained for different values of the source parameters.

  16. A revised dosimetric characterization of the model S700 electronic brachytherapy source containing an anode-centering plastic insert and other components not included in the 2006 model.

    PubMed

    Hiatt, Jessica R; Davis, Stephen D; Rivard, Mark J

    2015-06-01

    The model S700 Axxent electronic brachytherapy source by Xoft, Inc., was characterized by Rivard et al. in 2006. Since then, the source design was modified to include a new insert at the source tip. Current study objectives were to establish an accurate source model for simulation purposes, dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and determine dose differences between the original simulation model and the current model S700 source design. Design information from measurements of dissected model S700 sources and from vendor-supplied CAD drawings was used to aid establishment of an updated Monte Carlo source model, which included the complex-shaped plastic source-centering insert intended to promote water flow for cooling the source anode. These data were used to create a model for subsequent radiation transport simulations in a water phantom. Compared to the 2006 simulation geometry, the influence of volume averaging close to the source was substantially reduced. A track-length estimator was used to evaluate collision kerma as a function of radial distance and polar angle for determination of TG-43 dosimetry parameters. Results for the 50 kV source were determined every 0.1 cm from 0.3 to 15 cm and every 1° from 0° to 180°. Photon spectra in water with 0.1 keV resolution were also obtained from 0.5 to 15 cm and polar angles from 0° to 165°. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.04% at r = 1 cm and 0.06% at r = 5 cm. The dose-rate distribution ratio for the model S700 source as compared to the 2006 model exceeded unity by more than 5% for roughly one quarter of the solid angle surrounding the source, i.e., θ ≥ 120°. The radial dose function diminished in a similar manner as for an ^125I seed, with values of 1.434, 0.636, 0.283, and 0.0975 at 0.5, 2, 5, and 10 cm, respectively. The radial dose function ratio between the current and the 2006 model had a minimum of 0.980 at 0.4 cm, close to the source sheath, and for large distances approached 1.014. 2D anisotropy function ratios were close to unity for 50° ≤ θ ≤ 110°, but exceeded 5% for θ < 40° at close distances to the sheath and exceeded 15% for θ > 140°, even at large distances. Photon energy fluence of the updated model as compared to the 2006 model showed a decrease in output with increasing distance; this effect was pronounced at the lowest energies. A decrease in photon fluence with increase in polar angle was also observed and was attributed to the silver epoxy component. Changes in source design influenced the overall dose rate and distribution by more than 2% in several regions. This discrepancy is greater than the dose calculation acceptance criteria as recommended in the AAPM TG-56 report. The effect of the design change on the TG-43 parameters would likely not result in dose differences outside of patient applicators. Adoption of this new dataset is suggested for accurate depiction of model S700 source dose distributions.
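
    The four radial dose function values quoted above are enough for a quick log-log interpolation of g(r) at intermediate distances; a convenience sketch only, using no data beyond the numbers in the abstract.

        # Log-log interpolation of the radial dose function g(r).
        import numpy as np

        r_cm = np.array([0.5, 2.0, 5.0, 10.0])
        g = np.array([1.434, 0.636, 0.283, 0.0975])

        def g_of_r(r):
            return np.exp(np.interp(np.log(r), np.log(r_cm), np.log(g)))

        print(round(float(g_of_r(3.0)), 3))   # interpolated g(3 cm)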

  17. A revised dosimetric characterization of the model S700 electronic brachytherapy source containing an anode-centering plastic insert and other components not included in the 2006 model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiatt, Jessica R.; Davis, Stephen D.; Rivard, Mark J., E-mail: mark.j.rivard@gmail.com

    2015-06-15

    Purpose: The model S700 Axxent electronic brachytherapy source by Xoft, Inc., was characterized by Rivard et al. in 2006. Since then, the source design was modified to include a new insert at the source tip. Current study objectives were to establish an accurate source model for simulation purposes, dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and determine dose differences between the original simulation model and the current model S700 source design. Methods: Design information from measurements of dissected model S700 sources and from vendor-supplied CAD drawings was used to aid establishment of an updated Monte Carlo source model, which included the complex-shaped plastic source-centering insert intended to promote water flow for cooling the source anode. These data were used to create a model for subsequent radiation transport simulations in a water phantom. Compared to the 2006 simulation geometry, the influence of volume averaging close to the source was substantially reduced. A track-length estimator was used to evaluate collision kerma as a function of radial distance and polar angle for determination of TG-43 dosimetry parameters. Results for the 50 kV source were determined every 0.1 cm from 0.3 to 15 cm and every 1° from 0° to 180°. Photon spectra in water with 0.1 keV resolution were also obtained from 0.5 to 15 cm and polar angles from 0° to 165°. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.04% at r = 1 cm and 0.06% at r = 5 cm. Results: The dose-rate distribution ratio for the model S700 source as compared to the 2006 model exceeded unity by more than 5% for roughly one quarter of the solid angle surrounding the source, i.e., θ ≥ 120°. The radial dose function diminished in a similar manner as for an ^125I seed, with values of 1.434, 0.636, 0.283, and 0.0975 at 0.5, 2, 5, and 10 cm, respectively. The radial dose function ratio between the current and the 2006 model had a minimum of 0.980 at 0.4 cm, close to the source sheath, and for large distances approached 1.014. 2D anisotropy function ratios were close to unity for 50° ≤ θ ≤ 110°, but exceeded 5% for θ < 40° at close distances to the sheath and exceeded 15% for θ > 140°, even at large distances. Photon energy fluence of the updated model as compared to the 2006 model showed a decrease in output with increasing distance; this effect was pronounced at the lowest energies. A decrease in photon fluence with increase in polar angle was also observed and was attributed to the silver epoxy component. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% in several regions. This discrepancy is greater than the dose calculation acceptance criteria as recommended in the AAPM TG-56 report. The effect of the design change on the TG-43 parameters would likely not result in dose differences outside of patient applicators. Adoption of this new dataset is suggested for accurate depiction of model S700 source dose distributions.

  18. Analysis of an entrainment model of the jet in a crossflow

    NASA Technical Reports Server (NTRS)

    Chang, H. S.; Werner, J. E.

    1972-01-01

    A theoretical model has been proposed for the problem of a round jet in an incompressible cross-flow. The method of matched asymptotic expansions has been applied to this problem. For the solution to the flow problem in the inner region, the re-entrant wake flow model was used, with the re-entrant flow representing the fluid entrained by the jet. Higher order corrections are obtained in terms of this basic solution. The perturbation terms in the outer region were found to be a line distribution of doublets and sources. The line distribution of sources represents the combined effect of the entrainment and the displacement.

  19. Conceptual Model Scenarios for the Vapor Intrusion Pathway

    EPA Pesticide Factsheets

    This report provides simplified simulation examples to illustrate graphically how subsurface conditions and building-specific characteristics determine the chemical distribution and the indoor air concentration relative to a source concentration.

  20. Sources and distribution of NO(x) in the upper troposphere at northern midlatitudes

    NASA Technical Reports Server (NTRS)

    Rohrer, Franz; Ehhalt, Dieter H.; Wahner, Andreas

    1994-01-01

    A simple quasi 2-D model is used to study the zonal distribution of NO(x). The model includes vertical transport in the form of eddy diffusion and deep convection, zonal transport by a vertically uniform wind, and a simplified chemistry of NO, NO2 and HNO3. The NO(x) sources considered are surface emissions (mostly from the combustion of fossil fuel), lightning, aircraft emissions, and downward transport from the stratosphere. The model is applied to the latitude band of 40 deg N to 50 deg N during the month of June; the contributions to the zonal NO(x) distribution from the individual sources and transport processes are investigated. The model-predicted NO(x) concentration in the upper troposphere is dominated by air lofted from the polluted planetary boundary layer over the large industrial areas of Eastern North America and Europe. Aircraft emissions are also important and contribute on average 30 percent. Stratospheric input is minor, about 10 percent, even less than that from lightning. The model provides a clear indication of intercontinental transport of NO(x) and HNO3 in the upper troposphere. Comparison of the modelled NO profiles over the Western Atlantic with those measured during STRATOZ 3 in 1984 shows good agreement at all altitudes.

  1. Light source distribution and scattering phase function influence light transport in diffuse multi-layered media

    NASA Astrophysics Data System (ADS)

    Vaudelle, Fabrice; L'Huillier, Jean-Pierre; Askoura, Mohamed Lamine

    2017-06-01

    Red and near-infrared light is often used as a useful diagnostic and imaging probe for highly scattering media such as biological tissues, fruits and vegetables. Part of the diffusively reflected light gives interesting information related to the tissue subsurface, whereas light recorded at further distances may probe deeper into the interrogated turbid tissues. However, modelling diffusive events occurring at short source-detector distances requires consideration of both the distribution of the light sources and the scattering phase functions. In this report, a modified Monte Carlo model is used to compute light transport in curved and multi-layered tissue samples which are covered with a thin and highly diffusing tissue layer. Different light source distributions (ballistic, diffuse or Lambertian) are tested with specific scattering phase functions (modified or not modified Henyey-Greenstein, Gegenbauer and Mie) to compute the amount of backscattered and transmitted light in apple and human skin structures. Comparisons between simulation results and experiments carried out with a multispectral imaging setup confirm the soundness of the theoretical strategy and may explain the role of the skin on light transport in whole and half-cut apples. Other computational results show that a Lambertian source distribution combined with a Henyey-Greenstein phase function provides a higher photon density in the stratum corneum than in the upper dermis layer. Furthermore, it is also shown that the scattering phase function may affect the shape and the magnitude of the Bidirectional Reflectance Distribution Function (BRDF) exhibited at the skin surface.
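
    Monte Carlo codes of this kind typically draw the scattering angle from the Henyey-Greenstein phase function by the standard inversion cos(theta) = (1 + g^2 - ((1 - g^2) / (1 + g - 2*g*xi))^2) / (2*g) for anisotropy g != 0, with xi uniform on (0, 1). A quick check that the sampler reproduces the defining property, mean cos(theta) = g:

        # Henyey-Greenstein scattering-angle sampling (valid for g != 0).
        import numpy as np

        def sample_hg_costheta(g, n, rng=np.random.default_rng(3)):
            xi = rng.uniform(size=n)
            s = (1 - g * g) / (1 + g - 2 * g * xi)
            return (1 + g * g - s * s) / (2 * g)

        cos_t = sample_hg_costheta(g=0.9, n=100000)
        print("mean cos(theta) ~", round(float(cos_t.mean()), 3))  # close to g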

  2. Life and Death Near Zero: The distribution and evolution of NEA orbits of near-zero MOID, (e, i), and q

    NASA Astrophysics Data System (ADS)

    Harris, Alan W.; Morbidelli, Alessandro; Granvik, Mikael

    2016-10-01

    Modeling the distribution of orbits with near-zero orbital parameters requires special attention to the dimensionality of the parameters in question. This is all the more important because orbits of near-zero MOID, (e, i), or q are especially interesting as sources or sinks of NEAs. An essentially zero value of MOID (Minimum Orbital Intersection Distance) with respect to the Earth's orbit is a requirement for an impact trajectory, and initially also for ejecta from lunar impacts into heliocentric orbits. The collision cross section of the Earth goes up greatly with decreasing relative encounter velocity, venc; thus the impact flux onto the Earth is enhanced in such low-venc objects, which correspond to near-zero (e, i) orbits. Lunar ejecta that escape from the Earth-Moon system mostly do so at velocities only barely greater than the minimum escape velocity (Gladman, et al., 1995, Icarus 118, 302-321), so the Earth-Moon system is both a source and a sink of such low-venc orbits, and understanding the evolution of these populations requires accurately modeling the orbit distributions. Lastly, orbits of very low heliocentric perihelion distance, q, are particularly interesting as a "sink" in the NEA population as asteroids "fall into the sun" (Farinella, et al., 1994, Nature 371, 314-317). Understanding this process, and especially the role of disintegration of small asteroids as they evolve into low-q orbits (Granvik et al., 2016, Nature 530, 303-306), requires accurate modeling of the q distribution that would exist in the absence of a "sink" in the distribution. In this paper, we derive analytical expressions for the expected steady-state distributions near zero of MOID, (e, i), and q in the absence of sources or sinks, compare those to numerical simulations of orbit distributions, and lastly evaluate the distributions of discovered NEAs to try to understand the sources and sinks of NEAs "near zero" of these orbital parameters.

  3. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation.

    PubMed

    Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J

    2013-04-21

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of the spiral CT scan: scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angle. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. The acquired 2D and 3D dose distributions were then analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were measured for a patient CT scan protocol using radiochromic film and compared with the MC simulations. The new phase-space source model was found to simulate spiral CT scanning accurately in a single simulation run. It also produced a dose distribution equivalent to that of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles matched the film measurements overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating spiral CT scan dose in the BEAMnrc/EGSnrc system.
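
    The core geometric idea, emulating table motion by shifting the isocenter with the beam angle, can be sketched as follows. This is a toy illustration with assumed parameter names and values; the actual source model is hard-coded in Mortran inside DOSXYZnrc:

      import numpy as np

      def spiral_isocenter_z(angles_deg, z_start, pitch, collimation, ccw=True):
          """Isocenter z-position for each beam angle; the table advances by
          pitch * collimation per full 360-degree gantry rotation."""
          sign = 1.0 if ccw else -1.0
          rotations = sign * np.asarray(angles_deg) / 360.0
          return z_start + rotations * pitch * collimation

      angles = np.arange(0, 3 * 360, 10)            # three rotations, 10-deg steps
      z = spiral_isocenter_z(angles, z_start=0.0, pitch=1.0, collimation=2.4)
      print(z[:5], "... total travel:", z[-1] - z[0], "cm")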

  4. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation

    NASA Astrophysics Data System (ADS)

    Kim, Sangroh; Yoshizumi, Terry T.; Yin, Fang-Fang; Chetty, Indrin J.

    2013-04-01

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of the spiral CT scan: scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angle. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. The acquired 2D and 3D dose distributions were then analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were measured for a patient CT scan protocol using radiochromic film and compared with the MC simulations. The new phase-space source model was found to simulate spiral CT scanning accurately in a single simulation run. It also produced a dose distribution equivalent to that of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles matched the film measurements overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating spiral CT scan dose in the BEAMnrc/EGSnrc system.

  5. Exploring super-Gaussianity toward robust information-theoretical time delay estimation.

    PubMed

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee

    2013-03-01

    Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most existing systems are based on the generalized cross-correlation method, assuming Gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly variable, depending on the noise and reverberation conditions. Thus, the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competing background noise. This paper investigates the effect on TDE of modeling the source signal with different speech-based distributions. An information-theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of a Gaussian-distributed source has been replaced by that of a generalized Gaussian distribution, which allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
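
    The univariate entropy of the generalized Gaussian distribution, which underlies the evaluation above, has a simple closed form. A minimal sketch (standard GGD entropy formula; the variable names and test values are ours):

      import numpy as np
      from scipy.special import gammaln

      def ggd_entropy(alpha, beta):
          """Differential entropy (nats) of the univariate generalized Gaussian
          f(x) = beta / (2 alpha Gamma(1/beta)) * exp(-(|x - mu| / alpha)**beta):
          H = 1/beta + ln(2 alpha Gamma(1/beta) / beta)."""
          return 1.0 / beta + np.log(2.0 * alpha) + gammaln(1.0 / beta) - np.log(beta)

      sigma = 1.0
      # beta = 2 recovers the Gaussian (alpha = sigma * sqrt(2)); beta = 1 is
      # Laplacian; beta < 1 approaches heavier, speech-like (Gamma) shapes.
      print(ggd_entropy(sigma * np.sqrt(2.0), 2.0), 0.5 * np.log(2 * np.pi * np.e))
      print(ggd_entropy(1.0, 1.0))   # Laplacian with unit scale
      print(ggd_entropy(1.0, 0.5))   # heavy-tailed, speech-like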

  6. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
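
    The central ingredient, estimating a possibly non-Gaussian source pdf with a multivariate kernel density estimator, can be sketched as follows (a generic illustration with synthetic bimodal data, not the authors' MEG pipeline):

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(1)
      # Toy "source time courses": a bimodal (hence non-Gaussian) 2D signal.
      s = np.concatenate([rng.normal(-2, 0.5, (500, 2)),
                          rng.normal(+2, 0.5, (500, 2))])

      kde = gaussian_kde(s.T)                  # rows = dimensions, cols = samples
      grid = np.array([[-2.0, -2.0], [0.0, 0.0], [2.0, 2.0]]).T
      print(kde(grid))   # density is high at the two modes, low between them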

  7. On the 10 μm Silicate Feature in Active Galactic Nuclei

    NASA Astrophysics Data System (ADS)

    Nikutta, Robert; Elitzur, Moshe; Lacy, Mark

    2009-12-01

    The 10 μm silicate feature observed with Spitzer in active galactic nuclei (AGNs) reveals some puzzling behavior. It (1) has been detected in emission in type 2 sources, (2) shows broad, flat-topped emission peaks shifted toward long wavelengths in several type 1 sources, and (3) is not seen in deep absorption in any source observed so far. We solve all three puzzles with our clumpy dust radiative transfer formalism. Addressing (1), we present the spectral energy distribution (SED) of SST1721+6012, the first type 2 quasar observed to show a clear 10 μm silicate feature in emission. Such emission arises easily in models of the AGN torus when its clumpy nature is taken into account. We constructed a large database of clumpy torus models and performed extensive fitting of the observed SED. We find that the cloud radial distribution varies as r^-1.5 and the torus contains 2-4 clouds along radial equatorial rays, each with optical depth at visual wavelengths of ~60-80. The source bolometric luminosity is ~3 × 10^12 L_sun. Our modeling suggests that ≲35% of objects with tori sharing these characteristics and geometry would have their central engines obscured. This relatively low obscuration probability can explain the clear appearance of the 10 μm emission feature in SST1721+6012 together with its rarity among other QSO2. Investigating (2), we also fitted the SED of PG1211+143, one of the first type 1 QSOs with a 10 μm silicate feature detected in emission. Together with other similar sources, this QSO appears to display an unusually broadened feature whose peak is shifted toward longer wavelengths. Although this led to suggestions of non-standard dust chemistry in these sources, our analysis fits such SEDs with standard galactic dust; the apparent peak shifts arise from simple radiative transfer effects. Regarding (3), we find additionally that the distribution of silicate feature strengths among clumpy torus models closely resembles the observed distribution, and the feature never occurs deeply absorbed. Comparing such distributions in several AGN samples, we also show that the silicate emission feature becomes stronger in the transition from Seyfert to quasar luminosities.

  8. Monitoring and modeling as a continuing learning process: the use of hydrological models in a general probabilistic framework.

    NASA Astrophysics Data System (ADS)

    Baroni, G.; Gräff, T.; Reinstorf, F.; Oswald, S. E.

    2012-04-01

    Nowadays, uncertainty and sensitivity analysis are considered basic tools for the assessment of hydrological models and the evaluation of the most important sources of uncertainty. In this context, several methods have been developed and applied under different hydrological conditions in recent decades. However, most studies have investigated mainly the influence of parameter uncertainty on the simulated outputs, and few approaches have also tried to consider other sources of uncertainty, i.e., input and model structure. Moreover, several constraints arise when spatially distributed parameters are involved. To overcome these limitations, a general probabilistic framework based on Monte Carlo simulations and the Sobol method has been proposed. In this study, the general probabilistic framework was applied at the field scale using a 1D physically based hydrological model (SWAP). Furthermore, the framework was extended to the catchment scale in combination with a spatially distributed hydrological model (SHETRAN). The models are applied at two different experimental sites in Germany: a relatively flat cropped field close to Potsdam (Brandenburg) and a small mountainous catchment with agricultural land use (Schaefertal, Harz Mountains). For both cases, input and parameters are considered the major sources of uncertainty. Evaluation of the models was based on soil moisture detected at plot scale at different depths and, for the catchment site, also on daily discharge values. The study shows how the framework can take into account all the various sources of uncertainty, i.e., input data, parameters (either in scalar or spatially distributed form) and model structures. The framework can be used in a loop in order to optimize further monitoring activities aimed at improving the performance of the model. In these particular applications, the results show that the sources of uncertainty are specific to each process considered. The influence of the input data, as well as the presence of compensating errors, becomes clear from the different processes simulated.
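
    For readers unfamiliar with the Sobol method, the sketch below shows a first-order sensitivity estimator of the kind used inside such Monte Carlo frameworks. It uses a standard test function rather than SWAP or SHETRAN, and the Jansen-style estimator shown is only one of several common choices:

      import numpy as np

      rng = np.random.default_rng(2)

      def model(x):                            # Ishigami test function
          return (np.sin(x[:, 0]) + 7 * np.sin(x[:, 1])**2
                  + 0.1 * x[:, 2]**4 * np.sin(x[:, 0]))

      n, d = 100_000, 3
      A = rng.uniform(-np.pi, np.pi, (n, d))
      B = rng.uniform(-np.pi, np.pi, (n, d))
      yA, yB = model(A), model(B)
      var = np.var(np.concatenate([yA, yB]))

      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                  # swap column i (Saltelli design)
          # Jansen estimator for the first-order index S_i
          Si = 1.0 - np.mean((yB - model(ABi))**2) / (2.0 * var)
          print(f"S_{i + 1} ~ {Si:.2f}")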

  9. The Star Formation History of SHADES Sources

    NASA Astrophysics Data System (ADS)

    Aretxaga, Itziar; SHADES Consortium; AzTEC Team

    2006-12-01

    We present the redshift distribution of the SHADES 850 μm selected galaxy population based on the rest-frame radio-mm-FIR colours of 120 robustly detected sources in the Lockman Hole East (LH) and Subaru XMM-Newton Deep Field (SXDF). The redshift distribution of sources constrained with at least two photometric bands peaks at z ≈ 2.4 and is near-Gaussian. The inclusion of sources detected only at 850 μm, for which only very weak redshift constraints are available, leads to the possibility of a high-redshift tail. We find a small difference between the redshift distributions in the two fields, the SXDF peaking at a slightly lower redshift than the LH, which we mainly attribute to the noise properties of the photometry used. We discuss the impact of the AzTEC data on the further precision of these results. Finally, we present a brief comparison with sub-mm galaxy formation models and their predicted and assumed redshift distributions, and derive the contribution of these sources to the star formation rate density at different epochs.

  10. A study of the sources and sinks of methane and methyl chloroform using a global three-dimensional Lagrangian tropospheric tracer transport model

    NASA Technical Reports Server (NTRS)

    Taylor, John A.; Brasseur, G. P.; Zimmerman, P. R.; Cicerone, R. J.

    1991-01-01

    Sources and sinks of methane and methyl chloroform are investigated using a global three-dimensional Lagrangian tropospheric tracer transport model with parameterized hydroxyl and temperature fields. Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source function models of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). The second model identified source regions for methane from rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land use data. The most significant difference between the two models was in the predicted methane fluxes over China and South East Asia, the location of most of the world's rice paddies, indicating either that the assumption that a uniform fraction of NPP is converted to methane is not valid for rice paddies, or that NPP is underestimated for rice paddies, or that present methane emission estimates from rice paddies are too high.
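
    The first source-function model reduces to distributing a prescribed global release in proportion to NPP. A minimal sketch with a synthetic NPP field and an assumed global total (not the study's data):

      import numpy as np

      rng = np.random.default_rng(10)
      npp = rng.gamma(2.0, 1.0, (18, 36))          # toy NPP on a 10-degree grid
      global_release = 500.0                       # Tg CH4 / yr, assumed total

      flux = global_release * npp / npp.sum()      # Tg CH4 / yr per cell
      print("total:", flux.sum(), "max cell:", flux.max())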

  11. Multiple sparse volumetric priors for distributed EEG source reconstruction.

    PubMed

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-10-15

    We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region growing approach. This extension allows the reconstruction of brain structures besides the cortical surface and facilitates the use of more realistic volumetric head models including more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrated the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter for each of the subjects, cortical regions were created and introduced as source priors for MSP-inversion assuming two types of head models: the standard 3-layered scalp-skull-brain head models and extended 4-layered head models including CSF. We compared these models with the current implementation by assessing the free energy corresponding to each of the reconstructions, using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation as it allows the introduction of more complex head models and volumetric source priors in future studies. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    PubMed

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genomes. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which the expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real-world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. The model performed reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvements in the precision of sample processing and qPCR reactions would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator for which a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
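
    The Monte Carlo correction step can be sketched as follows. The parameterization below is a toy stand-in, not the authors' fitted distributions: the sensitivity, specificity, background level and precision error are all assumed priors:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 50_000
      measured = 1e4                           # marker copies observed (assumed)

      sens = rng.beta(90, 10, n)               # P(signal | marker present), assumed
      spec = rng.beta(95, 5, n)                # 1 - false-positive rate, assumed
      log_err = rng.normal(0.0, 0.1, n)        # qPCR precision error (log10 units)
      background = 10 ** rng.normal(2.0, 0.5, n)   # assumed nuisance level

      # Invert observed = true * sens + background * (1 - spec), with the
      # measurement error removed, for each Monte Carlo draw.
      true = (measured / 10 ** log_err - background * (1.0 - spec)) / sens
      true = np.clip(true, 0.0, None)

      print("median:", np.median(true),
            "95% interval:", np.percentile(true, [2.5, 97.5]))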

  13. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    NASA Astrophysics Data System (ADS)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-03-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10^-6 Mpc^-3 and neutrino luminosity Lν ≲ 10^42 erg s^-1 (10^41 erg s^-1) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.

  14. Disentangling the major source areas for an intense aerosol advection in the Central Mediterranean on the basis of Potential Source Contribution Function modeling of chemical and size distribution measurements

    NASA Astrophysics Data System (ADS)

    Petroselli, Chiara; Crocchianti, Stefano; Moroni, Beatrice; Castellini, Silvia; Selvaggi, Roberta; Nava, Silvia; Calzolai, Giulia; Lucarelli, Franco; Cappelletti, David

    2018-05-01

    In this paper, we combined a Potential Source Contribution Function (PSCF) analysis of daily chemical aerosol composition data with hourly aerosol size distributions, with the aim of disentangling the major source areas during a complex and fast-modulating advection event impacting Central Italy in 2013. Chemical data include an ample set of metals obtained by Proton Induced X-ray Emission (PIXE), main soluble ions from ionic chromatography, and elemental and organic carbon (EC, OC) obtained by thermo-optical measurements. Size distributions were recorded with an optical particle counter for eight calibrated size classes in the 0.27-10 μm range. We demonstrated the usefulness of the approach by the positive identification of two very different source areas impacting during the transport event. In particular, biomass burning from Eastern Europe and desert dust from Saharan sources were discriminated based on both chemistry and size distribution time evolution. Hourly back-trajectory (BT) calculations provided the best results in comparison to 6-h or 24-h based calculations.
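
    The PSCF statistic itself is simple: for each grid cell (i, j), PSCF_ij = m_ij / n_ij, where n_ij counts all back-trajectory endpoints falling in the cell and m_ij counts endpoints of "polluted" arrivals. A minimal sketch with synthetic trajectories (the grids, threshold and weight function are our assumptions):

      import numpy as np

      rng = np.random.default_rng(4)
      lon = rng.uniform(0, 40, (200, 24))      # 200 trajectories x 24 hourly points
      lat = rng.uniform(30, 60, (200, 24))
      conc = rng.lognormal(1.0, 0.5, 200)      # one concentration per arrival
      polluted = conc > np.percentile(conc, 75)

      bins = [np.linspace(0, 40, 21), np.linspace(30, 60, 16)]
      n_ij, _, _ = np.histogram2d(lon.ravel(), lat.ravel(), bins=bins)
      m_ij, _, _ = np.histogram2d(lon[polluted].ravel(), lat[polluted].ravel(),
                                  bins=bins)

      pscf = np.where(n_ij > 0, m_ij / np.maximum(n_ij, 1), 0.0)
      # A weight function is usually applied to damp cells with few endpoints:
      pscf *= np.clip(n_ij / 10.0, 0.0, 1.0)
      print("max PSCF:", pscf.max())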

  15. The embedded young stars in the Taurus-Auriga molecular cloud. I - Models for spectral energy distributions

    NASA Technical Reports Server (NTRS)

    Kenyon, Scott J.; Calvet, Nuria; Hartmann, Lee

    1993-01-01

    We describe radiative transfer calculations of infalling, dusty envelopes surrounding pre-main-sequence stars and use these models to derive physical properties for a sample of 21 heavily reddened young stars in the Taurus-Auriga molecular cloud. The density distributions needed to match the FIR peaks in the spectral energy distributions of these embedded sources suggest mass infall rates similar to those predicted for simple thermally supported clouds with temperatures of about 10 K. Unless the dust opacities are badly in error, our models require substantial departures from spherical symmetry in the envelopes of all sources. These flattened envelopes may be produced by a combination of rotation and cavities excavated by bipolar flows. The rotating infall models of Terebey et al. (1984) indicate a centrifugal radius of about 70 AU for many objects if rotation is the only important physical effect, and this radius is reasonably consistent with typical estimates for the sizes of circumstellar disks around T Tauri stars.

  16. Steady-state solution of the semi-empirical diffusion equation for area sources. [air pollution studies

    NASA Technical Reports Server (NTRS)

    Lebedeff, S. A.; Hameed, S.

    1975-01-01

    The problem investigated can be solved exactly in a simple manner if the equations are written in terms of a similarity variable. The exact solution is used to explore two questions of interest in the modelling of urban air pollution: the distribution of surface concentration downwind of an area source, and the distribution of concentration with height.

  17. SeaQuaKE: Sea-optimized Quantum Key Exchange

    DTIC Science & Technology

    2014-11-01

    In this technical report, prepared under Distribution Special Notice 13-SN-0004 (ONRBAA13-001), we describe modeling results for an entangled photon-pair source based on spontaneous four-wave mixing. Progress areas over the last quarter include (i) development of a wavelength-dependent, entangled photon-pair source model and (ii) end-to-end system modeling.

  18. Topographic filtering simulation model for sediment source apportionment

    NASA Astrophysics Data System (ADS)

    Cho, Se Jong; Wilcock, Peter; Hobbs, Benjamin

    2018-05-01

    We propose a Topographic Filtering simulation model (Topofilter) that can be used to identify the locations likely to contribute most of the sediment load delivered from a watershed. The reduced-complexity model links spatially distributed estimates of annual soil erosion, high-resolution topography, and observed sediment loading to determine the distribution of the sediment delivery ratio across a watershed. The model uses two simple two-parameter topographic transfer functions based on the distance and change in elevation from upland sources to the nearest stream channel and then down the stream network. The approach does not attempt to find a single best-calibrated solution of sediment delivery, but uses a model conditioning approach to develop a large number of possible solutions. For each model run, the locations that contribute 90% of the sediment loading are identified, and those locations that appear in this set in most of the 10,000 model runs are identified as the sources most likely to contribute most of the sediment delivered to the watershed outlet. Because the underlying model is quite simple and strongly anchored by reliable information on soil erosion, topography, and sediment load, we believe that the ensemble of simulation outputs provides a useful basis for identifying the dominant sediment sources in the watershed.
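
    The conditioning idea can be sketched in a few lines. The exponential transfer function, parameter priors and tolerance below are our own toy choices, not the Topofilter formulation:

      import numpy as np

      rng = np.random.default_rng(5)
      n_cells, n_runs = 1000, 10_000
      erosion = rng.lognormal(0.0, 1.0, n_cells)       # annual soil erosion
      dist = rng.uniform(0.1, 5.0, n_cells)            # distance to channel, km
      observed_load, tol = 300.0, 0.05                 # assumed outlet load

      hits = np.zeros(n_cells)
      kept = 0
      for _ in range(n_runs):
          k1, k2 = rng.uniform(0.1, 2.0, 2)            # transfer-function params
          delivered = erosion * np.exp(-k1 * dist) * k2
          if abs(delivered.sum() - observed_load) / observed_load < tol:
              kept += 1
              order = np.argsort(delivered)[::-1]
              cum = np.cumsum(delivered[order]) / delivered.sum()
              hits[order[cum <= 0.9]] += 1             # cells giving 90% of load

      likely = np.flatnonzero(hits > 0.9 * kept)       # recur in >90% of kept runs
      print(f"{kept} behavioural runs; {likely.size} persistent source cells")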

  19. Characterization of the ITER model negative ion source during long pulse operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemsworth, R.S.; Boilson, D.; Crowley, B.

    2006-03-15

    It is foreseen to operate the neutral beam system of the International Thermonuclear Experimental Reactor (ITER) for pulse lengths extending up to 1 h. The performance of the KAMABOKO III negative ion source, which is a model of the source designed for ITER, is being studied on the MANTIS test bed at Cadarache. This article reports the latest results from the characterization of the ion source, in particular electron energy distribution measurements and the comparison between positive ion and negative ion extraction from the source.

  20. Source detection in astronomical images by Bayesian model comparison

    NASA Astrophysics Data System (ADS)

    Frean, Marcus; Friedlander, Anna; Johnston-Hollitt, Melanie; Hollitt, Christopher

    2014-12-01

    The next generation of radio telescopes will generate exabytes of data on hundreds of millions of objects, making automated methods for the detection of astronomical objects ("sources") essential. Of particular importance are faint, diffuse objects embedded in noise. There is a pressing need for source finding software that identifies these sources, involves little manual tuning, yet is tractable to calculate. We first give a novel image discretisation method that incorporates uncertainty about how an image should be discretised. We then propose a hierarchical prior for astronomical images, which leads to a Bayes factor indicating how well a given region conforms to a source model that is exceptionally unconstrained, compared to a model of background. This enables the efficient localisation of regions that are "suspiciously different" from the background distribution, so our method looks not for brightness but for anomalous distributions of intensity, which is much more general. The model of background can be iteratively improved by removing from it the influence of sources as they are discovered. The approach is evaluated by identifying sources in real and simulated data, and performs well on these measures: the Bayes factor is maximized at most real objects, while returning only a moderate number of false positives. In comparison to a catalogue constructed by widely used source detection software with manual post-processing by an astronomer, our method found a number of dim sources that were missing from the "ground truth" catalogue.
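
    The Bayes-factor comparison of an almost unconstrained source model against a background model can be illustrated with a conjugate toy. The sketch below (our own gamma-Poisson stand-in, not the paper's hierarchical prior) contrasts a vague rate prior with a tight prior centred on the background rate:

      import numpy as np
      from scipy.special import gammaln

      def log_marginal_poisson(counts, a, b):
          """log p(counts) with a Gamma(a, b) (shape, rate) prior on the
          Poisson rate, marginalized analytically."""
          n, s = len(counts), np.sum(counts)
          return (a * np.log(b) - gammaln(a) + gammaln(a + s)
                  - (a + s) * np.log(b + n) - np.sum(gammaln(counts + 1)))

      rng = np.random.default_rng(6)
      background_rate = 5.0
      region = rng.poisson(8.0, 25)            # a faint source on the background

      log_bf = (log_marginal_poisson(region, a=1.0, b=0.1)        # vague "source"
                - log_marginal_poisson(region, a=100.0,
                                       b=100.0 / background_rate))  # tight bkg
      print("log Bayes factor (source vs background):", log_bf)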

  1. Application of a combined approach including contamination indexes, geographic information system and multivariate statistical models in levels, distribution and sources study of metals in soils in Northern China

    PubMed Central

    Huang, Kuixian; Luo, Xingzhang

    2018-01-01

    The purpose of this study is to recognize the contamination characteristics of trace metals in soils and apportion their potential sources in Northern China, to provide a scientific basis for soil environment management and pollution control. A data set of 12 metal elements in surface soil samples was collected. The enrichment factor and geoaccumulation index were used to identify the general geochemical characteristics of trace metals in soils. The UNMIX and positive matrix factorization (PMF) models were comparatively applied to apportion their potential sources. Furthermore, geostatistical tools were used to study the spatial distribution of pollution characteristics and to identify the regions affected by the sources derived from the apportionment models. The soils were contaminated by Cd, Hg, Pb and Zn to varying degrees. Industrial activities, agricultural activities and natural sources were identified as the potential sources determining the contents of trace metals in soils, with contributions of 24.8%–24.9%, 33.3%–37.2% and 38.0%–41.8%, respectively. The slightly different results obtained from UNMIX and PMF might be caused by the estimations of uncertainty and different algorithms within the models. PMID:29474412
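
    The two screening indexes used above have standard closed forms. A minimal sketch (the element contents and background values are illustrative, not the study's data):

      import numpy as np

      def geoaccumulation_index(c, b):
          """Igeo = log2(C / (1.5 * B)); Igeo > 0 flags enrichment over
          background B, with the 1.5 factor absorbing lithologic variability."""
          return np.log2(c / (1.5 * b))

      def enrichment_factor(c, c_ref, b, b_ref):
          """EF = (C / C_ref)_sample / (B / B_ref)_background, normalized by a
          conservative reference element such as Al or Fe."""
          return (c / c_ref) / (b / b_ref)

      cd_sample, cd_background = 0.6, 0.1      # mg/kg, assumed
      al_sample, al_background = 6.2, 6.5      # percent, assumed reference element
      print("Igeo(Cd):", geoaccumulation_index(cd_sample, cd_background))
      print("EF(Cd):  ", enrichment_factor(cd_sample, al_sample,
                                           cd_background, al_background))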

  2. Open-source framework for power system transmission and distribution dynamics co-simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Fan, Rui; Daily, Jeff

    The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, currently there are only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open source co-simulation framework, the "Framework for Network Co-Simulation" (FNCS), together with the decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.

  3. STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu

    2011-09-10

    An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.

  4. The Approximate Bayesian Computation methods in the localization of the atmospheric contamination source

    NASA Astrophysics Data System (ADS)

    Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.

    2015-09-01

    In many areas of application, a central problem is the solution of an inverse problem, especially the estimation of unknown model parameters so as to model the underlying dynamics of a physical system precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the searched-for parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing the atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems. Sequential methods can significantly increase the efficiency of the ABC. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the parameters of a model best fitted to the observable data must be found.
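
    A minimal rejection-ABC sketch of the source-tracing idea is shown below. The plume forward model, sensor layout, priors and tolerance are all our own toy assumptions, not the OLAD configuration, and no sequential refinement is included:

      import numpy as np

      rng = np.random.default_rng(7)
      sensors = np.array([[2.0, 0.5], [4.0, -0.5], [6.0, 1.0], [8.0, 0.0]])

      def plume(src, q, pts):
          """Crude steady-state plume: 1/x decay downwind, Gaussian crosswind."""
          dx = pts[:, 0] - src[0]
          dy = pts[:, 1] - src[1]
          c = q / np.maximum(dx, 0.1) * np.exp(-dy**2 / (0.2 * np.maximum(dx, 0.1)))
          return np.where(dx > 0, c, 0.0)

      obs = plume(np.array([0.0, 0.2]), 5.0, sensors)   # synthetic "measurements"

      accepted = []
      for _ in range(100_000):
          src = rng.uniform([-2.0, -2.0], [2.0, 2.0])   # prior on source location
          q = rng.uniform(0.1, 20.0)                    # prior on release rate
          if np.linalg.norm(plume(src, q, sensors) - obs) < 0.5:  # ABC tolerance
              accepted.append([src[0], src[1], q])

      post = np.array(accepted)
      if post.size:
          print(len(post), "accepted; posterior mean (x, y, q):", post.mean(axis=0))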

  5. Multi-Source Cooperative Data Collection with a Mobile Sink for the Wireless Sensor Network.

    PubMed

    Han, Changcai; Yang, Jinsheng

    2017-10-30

    The multi-source cooperation integrating distributed low-density parity-check codes is investigated to jointly collect data from multiple sensor nodes to the mobile sink in the wireless sensor network. One-round and two-round cooperative data collection schemes are proposed according to the moving trajectories of the sink node. Specifically, two sparse cooperation models are first formed based on the geographical locations of the sensor source nodes, the impairment of inter-node wireless channels, and the moving trajectories of the mobile sink. Then, distributed low-density parity-check codes are devised to match the directed graphs and cooperation matrices related to the cooperation models. In the proposed schemes, each source node has quite low complexity attributed to the sparse cooperation and the distributed processing. Simulation results reveal that the proposed cooperative data collection schemes obtain significant bit error rate performance and that the two-round cooperation exhibits better performance compared with the one-round scheme. The performance can be further improved when more source nodes participate in the sparse cooperation. For the two-round data collection schemes, the performance is evaluated for wireless sensor networks with different moving trajectories and varying data sizes.

  6. Multi-Source Cooperative Data Collection with a Mobile Sink for the Wireless Sensor Network

    PubMed Central

    Han, Changcai; Yang, Jinsheng

    2017-01-01

    The multi-source cooperation integrating distributed low-density parity-check codes is investigated to jointly collect data from multiple sensor nodes to the mobile sink in the wireless sensor network. One-round and two-round cooperative data collection schemes are proposed according to the moving trajectories of the sink node. Specifically, two sparse cooperation models are first formed based on the geographical locations of the sensor source nodes, the impairment of inter-node wireless channels, and the moving trajectories of the mobile sink. Then, distributed low-density parity-check codes are devised to match the directed graphs and cooperation matrices related to the cooperation models. In the proposed schemes, each source node has quite low complexity attributed to the sparse cooperation and the distributed processing. Simulation results reveal that the proposed cooperative data collection schemes obtain significant bit error rate performance and that the two-round cooperation exhibits better performance compared with the one-round scheme. The performance can be further improved when more source nodes participate in the sparse cooperation. For the two-round data collection schemes, the performance is evaluated for wireless sensor networks with different moving trajectories and varying data sizes. PMID:29084155

  7. A systematic evaluation of the dose-rate constant determined by photon spectrometry for 21 different models of low-energy photon-emitting brachytherapy sources.

    PubMed

    Chen, Zhe Jay; Nath, Ravinder

    2010-10-21

    The aim of this study was to perform a systematic comparison of the dose-rate constant (Λ) determined by the photon spectrometry technique (PST) with the consensus value ((CON)Λ) recommended by the American Association of Physicists in Medicine (AAPM) for 21 low-energy photon-emitting interstitial brachytherapy sources. A total of 63 interstitial brachytherapy sources (21 different models with 3 sources per model) containing either (125)I (14 models), (103)Pd (6 models) or (131)Cs (1 model) were included in this study. A PST described by Chen and Nath (2007 Med. Phys. 34 1412-30) was used to determine the dose-rate constant ((PST)Λ) for each source model. Source-dependent variations in (PST)Λ were analyzed systematically against the spectral characteristics of the emitted photons and the consensus values recommended by the AAPM brachytherapy subcommittee. The values of (PST)Λ for the encapsulated sources of (103)Pd, (125)I and (131)Cs varied from 0.661 to 0.678 cGy h^-1 U^-1, 0.959 to 1.024 cGy h^-1 U^-1 and 1.066 to 1.073 cGy h^-1 U^-1, respectively. The relative variation in (PST)Λ among the six (103)Pd source models, caused by variations in photon attenuation and in spatial distributions of radioactivity among the source models, was less than 3%. Greater variations in (PST)Λ were observed among the 14 (125)I source models; the maximum relative difference was over 6%. These variations were caused primarily by the presence of silver in some (125)I source models and, to a lesser degree, by the variations in photon attenuation and in spatial distribution of radioactivity among the source models. The presence of silver generates additional fluorescent x-rays with lower photon energies, which caused the (PST)Λ value to vary from 0.959 to 1.019 cGy h^-1 U^-1 depending on the amount of silver used in a given source model. For those (125)I sources that contain no silver, (PST)Λ was less variable and had values within 1% of 1.024 cGy h^-1 U^-1. For the 16 source models that currently have an AAPM-recommended (CON)Λ value, the difference between (PST)Λ and (CON)Λ was less than 2% for 15 models and 2.6% for 1 (103)Pd source model. Excellent agreement between (PST)Λ and (CON)Λ was observed for all source models that currently have an AAPM-recommended consensus dose-rate constant value. These results demonstrate that the PST is an accurate and robust technique for the determination of the dose-rate constant for low-energy brachytherapy sources.

  8. X-ray emission from galaxies - The distribution of low-luminosity X-ray sources in the Galactic Centre region

    NASA Astrophysics Data System (ADS)

    Heard, Victoria; Warwick, Robert

    2012-09-01

    We report a study of the extended X-ray emission observed in the Galactic Centre (GC) region based on archival XMM-Newton data. The GC diffuse emission can be decomposed into three distinct components: the emission from low-luminosity point sources; the fluorescence of (and reflection from) dense molecular material; and soft (kT ~1 keV), diffuse thermal plasma emission most likely energised by supernova explosions. Here, we examine the emission due to unresolved point sources. We show that this source component accounts for the bulk of the 6.7-keV and 6.9-keV line emission. We fit the surface brightness distribution evident in these lines with an empirical 2-d model, which we then compare with a prediction derived from a 3-d mass model for the old stellar population in the GC region. We find that the X-ray surface brightness declines more rapidly with angular offset from Sgr A* than the mass-model prediction. One interpretation is that the X-ray luminosity per solar mass characterising the GC source population is increasing towards the GC. Alternatively, some refinement of the mass-distribution within the nuclear stellar disc may be required. The unresolved X-ray source population is most likely dominated by magnetic CVs. We use the X-ray observations to set constraints on the number density of such sources in the GC region. Our analysis does not support the premise that the GC is pervaded by very hot (~ 7.5 keV) thermal plasma, which is truly diffuse in nature.

  9. Global excitation of wave phenomena in a dissipative multiconstituent medium. I - Transfer function of the earth's thermosphere. II - Impulsive perturbations in the earth's thermosphere

    NASA Technical Reports Server (NTRS)

    Mayr, H. G.; Harris, I.; Herrero, F. A.; Varosi, F.

    1984-01-01

    A transfer function approach is taken in constructing a spectral model of the acoustic-gravity wave response in a multiconstituent thermosphere. The model is then applied to describe the thermospheric response to various sources around the globe. Zonal spherical harmonics serve to model the horizontal variations in propagating waves which, when integrated with respect to height, generate a transfer function for a vertical source distribution in the thermosphere. Four wave components are characterized as resonance phenomena and are associated with magnetic activity and ionospheric disturbances. The waves are either trapped or propagate, the latter becoming significant at frequencies above 3 cycles/day. The energy input is distributed by thermospheric winds. The disturbances decay slowly, mainly due to heat conduction and diffusion. Gravity waves appear abruptly and are connected to a sudden switching on or off of a source. Turn-off of a source coincides with a reversal of the local atmospheric circulation.

  10. Photochemical grid model performance with varying horizontal grid resolution and sub-grid plume treatment for the Martins Creek near-field SO2 study

    NASA Astrophysics Data System (ADS)

    Baker, Kirk R.; Hawkins, Andy; Kelly, James T.

    2014-12-01

    Near-source modeling is needed to assess primary and secondary pollutant impacts from single sources and single-source complexes. Source-receptor relationships need to be resolved from tens of meters to tens of kilometers. Dispersion models are typically applied for near-source primary pollutant impacts but lack complex photochemistry. Photochemical models provide a realistic chemical environment but are typically applied using grid cell sizes that may be larger than the distance between sources and receptors. It is important to understand the impacts of grid resolution and sub-grid plume treatments on photochemical modeling of near-source primary pollution gradients. Here, the CAMx photochemical grid model is applied using multiple grid resolutions and sub-grid plume treatment for SO2 and compared with a receptor mesonet largely impacted by nearby sources approximately 3-17 km away in a complex terrain environment. Measurements are compared with model estimates of SO2 at 4- and 1-km resolution, both with and without sub-grid plume treatment and inclusion of finer two-way grid nests. Annual average estimated SO2 mixing ratios are highest nearest the sources and decrease as distance from the sources increases. In general, CAMx estimates of SO2 do not compare well with the near-source observations when paired in space and time. Given the proximity of these sources and receptors, accuracy in wind vector estimation is critical for applications that pair pollutant predictions and observations in time and space. In typical permit applications, predictions and observations are not paired in time and space and the entire distributions of each are directly compared. Using this approach, model estimates using 1-km grid resolution best match the distribution of observations and are most comparable to similar studies that used dispersion and Lagrangian modeling systems. Model-estimated SO2 increases as grid cell size decreases from 4 km to 250 m. However, it is notable that the 1-km model estimates using 1-km meteorological model input are higher than those from the 1-km model simulation that used interpolated 4-km meteorology. The inclusion of sub-grid plume treatment did not improve model skill in predicting SO2 in time and space and generally acts to keep emitted mass aloft.

  11. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    PubMed

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design allows these sources to share the free parameters of the filter shape and to be related to each other through the photon interactions in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth-dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4 × 4, 10 × 10, and 20 × 20 cm^2 fields at multiple depths. For the 2D dose distributions calculated in the heterogeneous lung phantom, the 2D gamma pass rate was 100% for 6 and 15 MV beams. The model optimization time was less than 4 min. The proposed source model optimization process accurately produces photon fluence spectra from a linac using valid physical properties, without detailed knowledge of the geometry of the linac head, and with minimal optimization time. © 2018 American Association of Physicists in Medicine.
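
    The primary/secondary split described above can be sketched directly. The conical filter shape and attenuation coefficient below are assumed toy values, not the paper's fitted parameters:

      import numpy as np

      def filter_path_length(r, apex_height=2.0, base_radius=5.0):
          """Path length (cm) through a conical flattening filter at
          off-axis distance r (cm): thickest on-axis, zero at the edge."""
          return np.maximum(apex_height * (1.0 - np.asarray(r) / base_radius), 0.0)

      def primary_and_secondary(fluence0, mu, r):
          """Attenuate bremsstrahlung fluence along the filter path; the
          fraction removed from the primary beam feeds the secondary source."""
          t = filter_path_length(r)
          primary = fluence0 * np.exp(-mu * t)
          secondary = fluence0 - primary        # decrement of the primary beam
          return primary, secondary

      r = np.linspace(0.0, 5.0, 6)              # off-axis positions, cm
      p, s = primary_and_secondary(fluence0=1.0, mu=0.5, r=r)
      print("primary:  ", np.round(p, 3))
      print("secondary:", np.round(s, 3))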

  12. Widely distributed SEP events and pseudostreamers

    NASA Astrophysics Data System (ADS)

    Panasenco, O.; Panasenco, A.; Velli, M.

    2017-12-01

    Our analysis of the pseudostreamer magnetic topology reveals interesting new implications for understanding SEP acceleration in CMEs. A possible reason for the wide distribution of some SEP events is the presence of pseudostreamers in the vicinity of the SEP source region, which creates conditions for a strong longitudinal spread of energetic particles as well as an anomalous longitudinal solar wind magnetic field component. We reconstructed the 3D magnetic configurations of pseudostreamers with a potential field source surface (PFSS) model, which uses as a lower boundary condition the magnetic field derived from an evolving surface-flux transport model. In order to estimate the possible magnetic connections between the spacecraft and the SEP source region, we used the Parker spiral, ENLIL and PFSS models. We found that, in cases of wide SEP distributions, a specific magnetic field configuration appears to exist at low solar latitudes all the way around the Sun; we name this phenomenon a pseudostreamer belt. It appears that the presence of a well-developed pseudostreamer or, rather, multiple pseudostreamers organized into the pseudostreamer belt can be considered a very favorable condition for wide SEP events.

  13. Contributions of solar wind and micrometeoroids to molecular hydrogen in the lunar exosphere

    NASA Astrophysics Data System (ADS)

    Hurley, Dana M.; Cook, Jason C.; Retherford, Kurt D.; Greathouse, Thomas; Gladstone, G. Randall; Mandt, Kathleen; Grava, Cesare; Kaufmann, David; Hendrix, Amanda; Feldman, Paul D.; Pryor, Wayne; Stickle, Angela; Killen, Rosemary M.; Stern, S. Alan

    2017-02-01

    We investigate the density and spatial distribution of the H2 exosphere of the Moon assuming various source mechanisms. Owing to its low mass, escape is non-negligible for H2. For high-energy source mechanisms, a high percentage of the released molecules escape lunar gravity. Thus, the H2 spatial distribution for high-energy release processes reflects the spatial distribution of the source. For low energy release mechanisms, the escape rate decreases and the H2 redistributes itself predominantly to reflect a thermally accommodated exosphere. However, a small dependence on the spatial distribution of the source is superimposed on the thermally accommodated distribution in model simulations, where density is locally enhanced near regions of higher source rate. For an exosphere accommodated to the local surface temperature, a source rate of 2.2 g s-1 is required to produce a steady state density at high latitude of 1200 cm-3. Greater source rates are required to produce the same density for more energetic release mechanisms. Physical sputtering by solar wind and direct delivery of H2 through micrometeoroid bombardment can be ruled out as mechanisms for producing and liberating H2 into the lunar exosphere. Chemical sputtering by the solar wind is the most plausible as a source mechanism and would require 10-50% of the solar wind H+ inventory to be converted to H2 to account for the observations.

  14. Contributions of Solar Wind and Micrometeoroids to Molecular Hydrogen in the Lunar Exosphere

    NASA Technical Reports Server (NTRS)

    Hurley, Dana M.; Cook, Jason C.; Retherford, Kurt D.; Greathouse, Thomas; Gladstone, G. Randall; Mandt, Kathleen; Grava, Cesare; Kaufmann, David; Hendrix, Amanda; Feldman, Paul D.

    2016-01-01

    We investigate the density and spatial distribution of the H2 exosphere of the Moon assuming various source mechanisms. Owing to its low mass, escape is non-negligible for H2. For high-energy source mechanisms, a high percentage of the released molecules escape lunar gravity. Thus, the H2 spatial distribution for high-energy release processes reflects the spatial distribution of the source. For low energy release mechanisms, the escape rate decreases and the H2 redistributes itself predominantly to reflect a thermally accommodated exosphere. However, a small dependence on the spatial distribution of the source is superimposed on the thermally accommodated distribution in model simulations, where density is locally enhanced near regions of higher source rate. For an exosphere accommodated to the local surface temperature, a source rate of 2.2 g s-1 is required to produce a steady state density at high latitude of 1200 cm-3. Greater source rates are required to produce the same density for more energetic release mechanisms. Physical sputtering by solar wind and direct delivery of H2 through micrometeoroid bombardment can be ruled out as mechanisms for producing and liberating H2 into the lunar exosphere. Chemical sputtering by the solar wind is the most plausible as a source mechanism and would require 10-50% of the solar wind H+ inventory to be converted to H2 to account for the observations.

  15. Source Identification and Apportionment of Trace Elements in Soils in the Yangtze River Delta, China.

    PubMed

    Shao, Shuai; Hu, Bifeng; Fu, Zhiyi; Wang, Jiayu; Lou, Ge; Zhou, Yue; Jin, Bin; Li, Yan; Shi, Zhou

    2018-06-12

    Trace elements pollution has attracted much attention worldwide. However, it is difficult to identify and apportion the sources of multiple element pollutants over large areas because of the considerable spatial complexity and variability in the distribution of trace elements in soil. In this study, we collected a total of 2051 topsoil (0-20 cm) samples, and analyzed the general pollution status of soils from the Yangtze River Delta, Southeast China. We applied principal component analysis (PCA), a finite mixture distribution model (FMDM), and geostatistical tools to identify and quantitatively apportion the sources of seven trace elements (chromium (Cr), cadmium (Cd), mercury (Hg), copper (Cu), zinc (Zn), nickel (Ni), and arsenic (As)) in soil. The PCA results indicated that the trace elements in soil in the study area were mainly from natural, multi-pollutant and industrial sources. The FMDM also fitted three sub-log-normal distributions. The results from the two models were quite similar: Cr, As, and Ni were mainly from natural sources caused by parent material weathering; Cd, Cu, and Zn were mainly from mixed sources, with a considerable portion from anthropogenic activities such as traffic pollutants, domestic garbage, and agricultural inputs; and Hg was mainly from industrial wastes and pollutants.
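
    The FMDM step amounts to fitting a mixture of log-normal sub-populations. A minimal sketch (synthetic contents, not the survey's samples): fitting a Gaussian mixture to log-transformed data is equivalent to fitting sub-log-normal distributions, whose weights suggest the source fractions:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(8)
      content = np.concatenate([
          rng.lognormal(np.log(0.10), 0.25, 800),   # natural background
          rng.lognormal(np.log(0.25), 0.30, 700),   # mixed sources
          rng.lognormal(np.log(0.60), 0.35, 551),   # anthropogenic
      ])

      gm = GaussianMixture(n_components=3, random_state=0)
      gm.fit(np.log(content).reshape(-1, 1))

      order = np.argsort(gm.means_.ravel())
      for k in order:
          print(f"median {np.exp(gm.means_.ravel()[k]):.2f} mg/kg, "
                f"weight {gm.weights_[k]:.2f}")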

  16. Constraining Distributed Catchment Models by Incorporating Perceptual Understanding of Spatial Hydrologic Behaviour

    NASA Astrophysics Data System (ADS)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models tend to contain a large number of poorly defined and spatially varying model parameters which are therefore computationally expensive to calibrate. Insufficient data can result in model parameter and structural equifinality, particularly when calibration relies on catchment outlet discharge behaviour alone. Evaluating spatial patterns of internal hydrological behaviour has the potential to reveal simulations that, whilst consistent with measured outlet discharge, are qualitatively dissimilar to our perceptual understanding of how the system should behave. We argue that such understanding, which may be derived from stakeholder knowledge across different catchments for certain process dynamics, is a valuable source of information to help reject non-behavioural models, and therefore identify feasible model structures and parameters. The challenge, however, is to convert different sources of often qualitative and/or semi-quantitative information into robust quantitative constraints on model states and fluxes, and to combine these sources of information to reject models within an efficient calibration framework. Here we present the development of a framework that incorporates different sources of data to efficiently calibrate distributed catchment models. For each source of information, an interval or inequality is used to define the behaviour of the catchment system. These intervals are then combined to produce a hyper-volume in state space, which is used to identify behavioural models. We apply the methodology to calibrate the Penn State Integrated Hydrological Model (PIHM) at the Wye catchment, Plynlimon, UK. Outlet discharge behaviour is successfully simulated when perceptual understanding of relative groundwater levels between lowland peat, upland peat and valley slopes within the catchment is used to identify behavioural models. The process of converting qualitative information into quantitative constraints forces us to evaluate the assumptions behind our perceptual understanding in order to derive robust constraints, and therefore to fairly reject models and avoid type II errors. Likewise, consideration needs to be given to the commensurability problem when mapping perceptual understanding to constrain model states.
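
    A minimal sketch of the rejection step described above, assuming each source of perceptual information reduces to an interval or inequality on a simulated state or flux; a run is behavioural only if it lies inside the resulting hyper-volume. All constraint names and values below are invented for illustration, not the paper's.

    ```python
    import numpy as np

    def behavioural(sim):
        """sim: dict of simulated summary states for one parameter set."""
        checks = [
            sim["outlet_nse"] > 0.6,                    # outlet discharge fit
            0.0 <= sim["lowland_peat_gwl"] <= 0.10,     # interval: near-surface water table (m)
            sim["upland_peat_gwl"] < sim["slope_gwl"],  # inequality: relative groundwater levels
        ]
        return all(checks)

    # Synthetic stand-in for Monte Carlo model runs
    rng = np.random.default_rng(1)
    runs = [{"outlet_nse": rng.uniform(0, 1),
             "lowland_peat_gwl": rng.uniform(0, 0.3),
             "upland_peat_gwl": rng.uniform(0, 0.5),
             "slope_gwl": rng.uniform(0, 0.5)} for _ in range(1000)]
    retained = [r for r in runs if behavioural(r)]
    print(len(retained), "behavioural runs kept")
    ```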

  17. Stability assessment of a multi-port power electronic interface for hybrid micro-grid applications

    NASA Astrophysics Data System (ADS)

    Shamsi, Pourya

    Industrialization increases the demand for electrical energy, while social pressure to preserve the environment and reduce pollution drives the search for cleaner energy sources. Therefore, there has been a growth in distributed generation from renewable sources in the past decade. Existing regulations and power system coordination do not allow for massive integration of distributed generation throughout the grid. Moreover, the current infrastructures are not designed for interfacing distributed and deregulated generation. In order to remedy this problem, a hybrid micro-grid based on nano-grids is introduced. This system consists of a reliable micro-grid structure that provides a smooth transition from the current distribution networks to smart micro-grid systems. Multi-port power electronic interfaces are introduced to manage the local generation, storage, and consumption. Afterwards, a model for this micro-grid is derived. Using this model, the stability of the system under a variety of source- and load-induced disturbances is studied. Moreover, a pole-zero study of the micro-grid is performed under various loading conditions. An experimental setup of this micro-grid is developed, and the validity of the model in emulating the dynamic behavior of the system is verified. This study provides a theory for a novel hybrid micro-grid as well as models for stability assessment of the proposed micro-grid.

  18. Probability model for atmospheric sulfur dioxide concentrations in the area of Venice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buttazzoni, C.; Lavagnini, I.; Marani, A.

    1986-09-01

    This paper deals with a comparative screening of existing air quality models based on their ability to simulate the distribution of sulfur dioxide data in the Venetian area. Investigations have been carried out on sulfur dioxide dispersion in the atmosphere of the Venetian area. The studies have been mainly focused on transport models (Gaussian plume and K-models) aiming at meaningful correlations of sources and receptors. Among the results, a noteworthy disagreement between simulated and experimental data, due to the lack of thorough knowledge of source field conditions and of the local meteorology of the sea-land transition area, has been shown. Investigations with receptor-oriented models (based, e.g., on time series analysis, Fourier analysis, or statistical distributions) have also been performed.

  19. Studies of the gas tori of Titan and Triton

    NASA Technical Reports Server (NTRS)

    Smyth, William H.

    1995-01-01

    Progress in the development of the model for the circumplanetary distribution of atomic hydrogen in the Saturn system produced by a Titan source is discussed. Because of the action of solar radiation acceleration and the obliquity of Saturn, the hydrogen distribution is shown to undergo seasonal changes as the planet moves about the Sun. Preliminary model calculations show that for a continuous Titan source, the H distribution is highly asymmetric about the planet and has a density maximum near the dusk side of Saturn, qualitatively similar to the pattern recently deduced by Shemansky and Hall from observations acquired by the UVS instruments aboard the Voyager spacecraft. The investigation of these Voyager data will be undertaken in the next project year.

  20. Target/error overlap in jargonaphasia: The case for a one-source model, lexical and non-lexical summation, and the special status of correct responses.

    PubMed

    Olson, Andrew; Halloran, Elizabeth; Romani, Cristina

    2015-12-01

    We present three jargonaphasic patients who made phonological errors in naming, repetition and reading. We analyse target/response overlap using statistical models to answer three questions: 1) Is there a single phonological source for errors or two sources, one for target-related errors and a separate source for abstruse errors? 2) Can correct responses be predicted by the same distribution used to predict errors or do they show a completion boost (CB)? 3) Is non-lexical and lexical information summed during reading and repetition? The answers were clear. 1) Abstruse errors did not require a separate distribution created by failure to access word forms. Abstruse and target-related errors were the endpoints of a single overlap distribution. 2) Correct responses required a special factor, e.g., a CB or lexical/phonological feedback, to preserve their integrity. 3) Reading and repetition required separate lexical and non-lexical contributions that were combined at output. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distribution as a function of flux density, and the spatial distribution of sources (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can bias the final power spectrum and underestimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it depends on the relative abundance of faint sources, such that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
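
    As an illustration of the first extension, a sketch that draws source flux densities from an arbitrarily broken power-law differential count dN/dS by inverse-CDF sampling per segment. The break positions and slopes here are invented, not fitted values from the paper.

    ```python
    import numpy as np

    def sample_broken_powerlaw(n, breaks, slopes, rng):
        """breaks: flux-density segment edges [S0, S1, ..., Sk];
        slopes: differential slopes per segment (dN/dS ~ S**-slope)."""
        edges = np.asarray(breaks, float)
        # integral of S**-a over each segment -> segment weights
        weights = [(hi**(1 - a) - lo**(1 - a)) / (1 - a)
                   for lo, hi, a in zip(edges[:-1], edges[1:], slopes)]
        weights = np.array(weights) / np.sum(weights)
        seg = rng.choice(len(slopes), size=n, p=weights)
        u = rng.uniform(size=n)
        lo, hi, a = edges[seg], edges[seg + 1], np.asarray(slopes)[seg]
        # inverse CDF within each segment
        return (lo**(1 - a) + u * (hi**(1 - a) - lo**(1 - a)))**(1 / (1 - a))

    rng = np.random.default_rng(0)
    fluxes = sample_broken_powerlaw(10**5, breaks=[1e-3, 1.0, 10.0],
                                    slopes=[1.6, 2.5], rng=rng)  # Jy, illustrative
    ```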

  2. Relativistic jet feedback - II. Relationship to gigahertz peak spectrum and compact steep spectrum radio galaxies

    NASA Astrophysics Data System (ADS)

    Bicknell, Geoffrey V.; Mukherjee, Dipanjan; Wagner, Alexander Y.; Sutherland, Ralph S.; Nesvadba, Nicole P. H.

    2018-04-01

    We propose that Gigahertz Peak Spectrum (GPS) and Compact Steep Spectrum (CSS) radio sources are the signposts of relativistic jet feedback in evolving galaxies. Our simulations of relativistic jets interacting with a warm, inhomogeneous medium, utilizing cloud densities and velocity dispersions in the range derived from optical observations, show that free-free absorption can account for the ~GHz peak frequencies and low-frequency power laws inferred from the radio observations. These new computational models replace a power-law model for the free-free optical depth with a more fundamental model involving disrupted log-normal distributions of warm gas. One feature of our new models is that at early stages, the low-frequency spectrum is steep but progressively flattens as a result of a broader distribution of optical depths, suggesting that the steep low-frequency spectra discovered by Callingham et al. may be attributed to young sources. We also investigate the inverse correlation between peak frequency and size and find that the initial location on this correlation is determined by the average density of the warm ISM. The simulated sources track this correlation initially but eventually fall below it, indicating the need for a more extended ISM than presently modelled. GPS and CSS sources can potentially provide new insights into the phenomenon of AGN feedback since their peak frequencies and spectra are indicative of the density, turbulent structure, and distribution of gas in the host galaxy.

  3. Ecological prognosis near intensive acoustic sources

    NASA Astrophysics Data System (ADS)

    Kostarev, Stanislav A.; Makhortykh, Sergey A.; Rybak, Samuil A.

    2002-11-01

    The problem of wave-field excitation in the ground by a quasiperiodic source, placed on the ground surface or at some depth in the soil, is investigated. The ecological situation in such cases is determined largely by the quality of the vibration and noise forecast. In the present work the distributed source is modeled by a set of statistically linked compact sources on the surface or in the ground. Changes of the media parameters along an axis and horizontal heterogeneity of the environment are taken into account. Both analytical and numerical approaches are developed. The latter are included in the software package VibraCalc, which calculates the distribution of the elastic wave field in the ground from quasilinear sources. Accurate evaluation of vibration levels in buildings from high-intensity underground sources is accomplished by modeling wave propagation in dissipative inhomogeneous elastic media. The model takes into account both bulk (longitudinal and shear) and surface Rayleigh waves. For verification of the approach, a series of measurements was carried out near the experimental section of the monorail road designed in Moscow. Both calculation and measurement results are presented in the paper.

  4. Ecological prognosis near intensive acoustic sources

    NASA Astrophysics Data System (ADS)

    Kostarev, Stanislav A.; Makhortykh, Sergey A.; Rybak, Samuil A.

    2003-04-01

    The problem of wave-field excitation in the ground by a quasi-periodic source, placed on the ground surface or at some depth in the soil, is investigated. The ecological situation in such cases is determined largely by the quality of the vibration and noise forecast. In the present work the distributed source is modeled by a set of statistically linked compact sources on the surface or in the ground. Changes of the media parameters along an axis and horizontal heterogeneity of the environment are taken into account. Both analytical and numerical approaches are developed. The latter are included in the software package VibraCalc, which calculates the distribution of the elastic wave field in the ground from quasilinear sources. Accurate evaluation of vibration levels in buildings from high-intensity underground sources is accomplished by modeling wave propagation in dissipative inhomogeneous elastic media. The model takes into account both bulk (longitudinal and shear) and surface Rayleigh waves. For verification of the approach used, a series of measurements was carried out near the experimental section of the monorail road designed in Moscow. Both calculation and measurement results are presented in the paper.

  5. Debiased estimates for NEO orbits, absolute magnitudes, and source regions

    NASA Astrophysics Data System (ADS)

    Granvik, Mikael; Morbidelli, Alessandro; Jedicke, Robert; Bolin, Bryce T.; Bottke, William; Beshore, Edward C.; Vokrouhlicky, David; Nesvorny, David; Michel, Patrick

    2017-10-01

    The debiased absolute-magnitude and orbit distributions as well as source regions for near-Earth objects (NEOs) provide a fundamental frame of reference for studies on individual NEOs as well as on more complex population-level questions. We present a new four-dimensional model of the NEO population that describes debiased steady-state distributions of semimajor axis (a), eccentricity (e), inclination (i), and absolute magnitude (H). We calibrate the model using NEO detections by the 703 and G96 stations of the Catalina Sky Survey (CSS) during 2005-2012 corresponding to objects with 17

  6. The Velocity and Density Distribution of Earth-Intersecting Meteoroids: Implications for Environment Models

    NASA Technical Reports Server (NTRS)

    Moorhead, A. V.; Brown, P. G.; Campbell-Brown, M. D.; Moser, D. E.; Blaauw, R. C.; Cooke, W. J.

    2017-01-01

    Meteoroids are known to damage spacecraft: they can crater or puncture components, disturb a spacecraft's attitude, and potentially create secondary electrical effects. Because the damage done depends on the speed, size, density, and direction of the impactor, accurate environment models are critical for mitigating meteoroid-related risks. Yet because meteoroid properties are derived from indirect observations such as meteors and impact craters, many characteristics of the meteoroid environment are uncertain. In this work, we present recent improvements to the meteoroid speed and density distributions. Our speed distribution is derived from observations made by the Canadian Meteor Orbit Radar. These observations are de-biased using modern descriptions of the ionization efficiency. Our approach yields a slower meteoroid population than previous analyses (see Fig. 1 for an example) and we compute the uncertainties associated with our derived distribution. We adopt a higher fidelity density distribution than that used by many older models. In our distribution, meteoroids with T_J less than 2 are assigned to a low-density population, while those with T_J greater than 2 have higher densities (see Fig. 2). This division and the distributions themselves are derived from the densities reported by Kikwaya et al. These changes have implications for the environment: for instance, the helion/antihelion sporadic sources have lower speeds than the apex and toroidal sources and originate from high-T_J parent bodies. Our on-average slower and denser distributions thus imply that the helion and antihelion sources dominate the meteoroid environment even more completely than previously thought. Finally, for a given near-Earth meteoroid cratering rate, a slower meteoroid population produces a comparatively higher rate of satellite attitude disturbances.

  7. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA.

    PubMed

    Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M

    2017-10-01

    Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.
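
    A minimal numpy sketch of the sLORETA estimator evaluated here, under simplifying assumptions (identity noise and source covariances, fixed source orientations, synthetic leadfield): a minimum-norm estimate standardised by the diagonal of the resolution matrix.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_sensors, n_sources = 32, 500
    G = rng.standard_normal((n_sensors, n_sources))            # leadfield (synthetic)
    phi = G[:, 123] + 0.05 * rng.standard_normal(n_sensors)    # one active source + noise

    lam = 1e-2 * np.trace(G @ G.T) / n_sensors                 # regularisation level
    T = G.T @ np.linalg.inv(G @ G.T + lam * np.eye(n_sensors)) # minimum-norm inverse operator
    j_mne = T @ phi                                            # minimum-norm current estimate
    R_diag = np.sum(T * G.T, axis=1)                           # diag of resolution matrix R = T @ G
    j_sloreta = j_mne**2 / R_diag                              # standardised (pseudo-)power per source

    print("peak at source index", np.argmax(j_sloreta))
    ```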

  8. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    NASA Astrophysics Data System (ADS)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  9. Integrated species distribution models: combining presence-background data and site-occupancy data with imperfect detection

    USGS Publications Warehouse

    Koshkina, Vira; Wang, Yang; Gordon, Ascelin; Dorazio, Robert; White, Matthew; Stone, Lewi

    2017-01-01

    Two main sources of data for species distribution models (SDMs) are site-occupancy (SO) data from planned surveys, and presence-background (PB) data from opportunistic surveys and other sources. SO surveys give high quality data about presences and absences of the species in a particular area. However, due to their high cost, they often cover a smaller area relative to PB data, and are usually not representative of the geographic range of a species. In contrast, PB data is plentiful, covers a larger area, but is less reliable due to the lack of information on species absences, and is usually characterised by biased sampling. Here we present a new approach for species distribution modelling that integrates these two data types. We have used an inhomogeneous Poisson point process as the basis for constructing an integrated SDM that fits both PB and SO data simultaneously. It is the first implementation of an Integrated SO–PB Model which uses repeated survey occupancy data and also incorporates detection probability. The Integrated Model's performance was evaluated using simulated data and compared to approaches using PB or SO data alone. It was found to be superior, improving the predictions of species spatial distributions, even when SO data is sparse and collected in a limited area. The Integrated Model was also found effective when environmental covariates were significantly correlated. Our method was demonstrated with real SO and PB data for the Yellow-bellied glider (Petaurus australis) in south-eastern Australia, with the predictive performance of the Integrated Model again found to be superior. PB models are known to produce biased estimates of species occupancy or abundance. The small sample size of SO datasets often results in poor out-of-sample predictions. Integrated models combine data from these two sources, providing superior predictions of species abundance compared to using either data source alone. Unlike conventional SDMs which have restrictive scale-dependence in their predictions, our Integrated Model is based on a point process model and has no such scale-dependency. It may be used for predictions of abundance at any spatial-scale while still maintaining the underlying relationship between abundance and area.
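
    A sketch of the presence-background half of the model under stated assumptions: an inhomogeneous Poisson point process with log-linear intensity, its integral over the region approximated with equal-area quadrature (background) cells. The covariate arrays and cell area are synthetic placeholders.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X_pres = rng.standard_normal((200, 3))    # covariates at presence points
    X_quad = rng.standard_normal((5000, 3))   # covariates at quadrature points
    cell_area = 0.01                          # area represented by each quadrature cell

    def neg_loglik(beta):
        # log L = sum_i log lambda(s_i) - integral of lambda over the region
        log_lam_pres = X_pres @ beta
        lam_quad = np.exp(X_quad @ beta)
        return -(log_lam_pres.sum() - cell_area * lam_quad.sum())

    fit = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
    print("estimated intensity coefficients:", fit.x)
    ```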

  10. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    NASA Astrophysics Data System (ADS)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

    During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred km for large landslides). The recorded signals depend on the landslide seismic source and the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus can be used to get information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography poorly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near-field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  11. Evaluation of an unsteady flamelet progress variable model for autoignition and flame development in compositionally stratified mixtures

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Saumyadip; Abraham, John

    2012-07-01

    The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds averaged simulations and large eddy simulations of reacting non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus in this work is primarily on the assessment of accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and the conserved scalars is evaluated. For unimodal distributions, it is observed that functions that need two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.
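
    A minimal sketch of the presumed-PDF averaging assessed above, using the standard two-moment β closure for the mixture-fraction PDF; the flamelet source-term profile S(Z) below is an arbitrary stand-in, not a chemistry-table entry.

    ```python
    import numpy as np
    from scipy.stats import beta as beta_dist

    def beta_pdf_average(S, z_mean, z_var, n=2001):
        """Two-moment beta closure: a = Z*g, b = (1-Z)*g, g = Z(1-Z)/var - 1."""
        g = z_mean * (1.0 - z_mean) / z_var - 1.0
        a, b = z_mean * g, (1.0 - z_mean) * g
        z = np.linspace(1e-6, 1.0 - 1e-6, n)
        pdf = beta_dist.pdf(z, a, b)
        # averaged source term: integral of S(Z) * P(Z) dZ
        return np.trapz(S(z) * pdf, z)

    # Stand-in flamelet source term peaking near a stoichiometric Z of 0.062
    S = lambda z: np.exp(-((z - 0.062) / 0.03) ** 2)
    print(beta_pdf_average(S, z_mean=0.05, z_var=1e-3))
    ```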

  12. A Community Terrain-Following Ocean Modeling System (ROMS/TOMS)

    DTIC Science & Technology

    2011-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Award Number: N00014-10-1-0322. Contact: arango@marine.rutgers.edu; http://ocean-modeling.org

  13. Emission Features and Source Counts of Galaxies in Mid-Infrared

    NASA Technical Reports Server (NTRS)

    Xu, C.; Hacking, P. B.; Fang, F.; Shupe, D. L.; Lonsdale, C. J.; Lu, N. Y.; Helou, G.; Stacey, G. J.; Ashby, M. L. N.

    1998-01-01

    In this work we incorporate the newest ISO results on the mid-infrared spectral energy distributions (MIR SEDs) of galaxies into models for the number counts and redshift distributions of MIR surveys.

  14. Correlation estimation and performance optimization for distributed image compression

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Cao, Lei; Cheng, Hui

    2006-01-01

    Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
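
    A small sketch of the Gray-code effect described above: mapping 8-bit quantization indices to Gray code before bit-plane extraction lowers the crossover probability between each source bit plane and its side information, and hence the Slepian-Wolf rate needed per plane. The source/side-information model is synthetic.

    ```python
    import numpy as np

    def binary_to_gray(x):
        # Standard binary-reflected Gray code: adjacent values differ in one bit
        return x ^ (x >> 1)

    rng = np.random.default_rng(0)
    source = rng.integers(0, 256, size=100_000)
    # Side information = source plus small correlated noise, clipped to 8 bits
    side_info = np.clip(source + rng.integers(-2, 3, size=source.size), 0, 255)

    for label, a, b in [("binary", source, side_info),
                        ("gray", binary_to_gray(source), binary_to_gray(side_info))]:
        for plane in range(8):
            crossover = np.mean(((a >> plane) ^ (b >> plane)) & 1)
            # fewer mismatches -> higher bit-plane correlation -> lower SW rate
            print(label, "plane", plane, "crossover", round(float(crossover), 4))
    ```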

  15. Thematic and spatial resolutions affect model-based predictions of tree species distribution.

    PubMed

    Liang, Yu; He, Hong S; Fraser, Jacob S; Wu, ZhiWei

    2013-01-01

    Subjective decisions of thematic and spatial resolutions in characterizing environmental heterogeneity may affect the characterizations of spatial pattern and the simulation of occurrence and rate of ecological processes, and in turn, model-based tree species distribution. Thus, this study quantified the importance of thematic and spatial resolutions, and their interaction, in predictions of tree species distribution (quantified by species abundance). We investigated how model-predicted species abundances changed and whether tree species with different ecological traits (e.g., seed dispersal distance, competitive capacity) had different responses to varying thematic and spatial resolutions. We used the LANDIS forest landscape model to predict tree species distribution at the landscape scale and designed a series of scenarios with different combinations of thematic resolution (number of land types) and spatial resolution, and then statistically examined the differences in species abundance among these scenarios. Results showed that both thematic and spatial resolutions affected model-based predictions of species distribution, but thematic resolution had a greater effect. Species ecological traits affected the predictions. For species with moderate dispersal distance and relatively abundant seed sources, predicted abundance increased as thematic resolution increased. However, for species with long seeding distance or high shade tolerance, thematic resolution had an inverse effect on predicted abundance. When seed sources and dispersal distance were not limiting, the predicted species abundance increased with spatial resolution and vice versa. Results from this study may provide insights into the choice of thematic and spatial resolutions for model-based predictions of tree species distribution.

  16. Thematic and Spatial Resolutions Affect Model-Based Predictions of Tree Species Distribution

    PubMed Central

    Liang, Yu; He, Hong S.; Fraser, Jacob S.; Wu, ZhiWei

    2013-01-01

    Subjective decisions of thematic and spatial resolutions in characterizing environmental heterogeneity may affect the characterizations of spatial pattern and the simulation of occurrence and rate of ecological processes, and in turn, model-based tree species distribution. Thus, this study quantified the importance of thematic and spatial resolutions, and their interaction, in predictions of tree species distribution (quantified by species abundance). We investigated how model-predicted species abundances changed and whether tree species with different ecological traits (e.g., seed dispersal distance, competitive capacity) had different responses to varying thematic and spatial resolutions. We used the LANDIS forest landscape model to predict tree species distribution at the landscape scale and designed a series of scenarios with different combinations of thematic resolution (number of land types) and spatial resolution, and then statistically examined the differences in species abundance among these scenarios. Results showed that both thematic and spatial resolutions affected model-based predictions of species distribution, but thematic resolution had a greater effect. Species ecological traits affected the predictions. For species with moderate dispersal distance and relatively abundant seed sources, predicted abundance increased as thematic resolution increased. However, for species with long seeding distance or high shade tolerance, thematic resolution had an inverse effect on predicted abundance. When seed sources and dispersal distance were not limiting, the predicted species abundance increased with spatial resolution and vice versa. Results from this study may provide insights into the choice of thematic and spatial resolutions for model-based predictions of tree species distribution. PMID:23861828

  17. Synoptic, Global Mhd Model For The Solar Corona

    NASA Astrophysics Data System (ADS)

    Cohen, Ofer; Sokolov, I. V.; Roussev, I. I.; Gombosi, T. I.

    2007-05-01

    The common techniques for mimicking solar corona heating and solar wind acceleration in global MHD models are as follows: 1) additional terms in the momentum and energy equations derived from the WKB approximation for Alfvén wave turbulence; 2) an empirical heat source in the energy equation; 3) a non-uniform distribution of the polytropic index, γ, used in the energy equation. In our model, we choose the latter approach. However, in order to get a more realistic distribution of γ, we use the empirical Wang-Sheeley-Arge (WSA) model to constrain the MHD solution. The WSA model provides the distribution of the asymptotic solar wind speed from the potential field approximation; therefore it also provides the distribution of the kinetic energy. Assuming that far from the Sun the total energy is dominated by the energy of the bulk motion, and assuming conservation of the Bernoulli integral, we can trace the total energy along a magnetic field line to the solar surface. On the surface the gravity is known and the kinetic energy is negligible. Therefore, we can get the surface distribution of γ as a function of the final speed originating from this point. By interpolating γ to a spherically uniform value at the source surface, we use this spatial distribution of γ in the energy equation to obtain a self-consistent, steady-state MHD solution for the solar corona. We present the model results for different Carrington Rotations.
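
    Schematically, the energy bookkeeping described above can be written as follows; the notation here is assumed for illustration, not taken from the paper. Conserving the Bernoulli integral along a field line and neglecting the surface kinetic energy ties the surface value of γ to the asymptotic wind speed predicted by WSA.

    ```latex
    % Bernoulli integral along a magnetic field line (schematic):
    % kinetic energy + enthalpy + gravitational potential = const
    \frac{v^2}{2} + \frac{\gamma}{\gamma-1}\frac{p}{\rho} - \frac{GM_\odot}{r}
      \approx \frac{v_\infty^2}{2}
    % At the surface (r = R_sun, v ~ 0), the enthalpy term, and hence gamma,
    % follows from the asymptotic speed supplied by the WSA model:
    \left.\frac{\gamma}{\gamma-1}\frac{p}{\rho}\right|_{R_\odot}
      = \frac{v_\infty^2}{2} + \frac{GM_\odot}{R_\odot}
    ```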

  18. Production of NOx by Lightning and its Effects on Atmospheric Chemistry

    NASA Technical Reports Server (NTRS)

    Pickering, Kenneth E.

    2009-01-01

    Production of NOx by lightning remains the NOx source with the greatest uncertainty. Current estimates of the global source strength range over a factor of four (from 2 to 8 TgN/year). Ongoing efforts to reduce this uncertainty through field programs, cloud-resolved modeling, global modeling, and satellite data analysis will be described in this seminar. Representation of the lightning source in global or regional chemical transport models requires three types of information: the distribution of lightning flashes as a function of time and space, the production of NOx per flash, and the effective vertical distribution of the lightning-injected NOx. Methods of specifying these items in a model will be discussed. For example, the current method of specifying flash rates in NASA's Global Modeling Initiative (GMI) chemical transport model will be discussed, as well as work underway in developing algorithms for use in the regional models CMAQ and WRF-Chem. A number of methods have been employed to estimate either the production per lightning flash or the production per unit flash length. Such estimates derived from cloud-resolved chemistry simulations and from satellite NO2 retrievals will be presented as well as the methodologies employed. Cloud-resolved model output has also been used in developing vertical profiles of lightning NOx for use in global models. Effects of lightning NOx on O3 and HOx distributions will be illustrated regionally and globally.

  19. Development of a Finite-Difference Time Domain (FDTD) Model for Propagation of Transient Sounds in Very Shallow Water.

    PubMed

    Sprague, Mark W; Luczkovich, Joseph J

    2016-01-01

    This finite-difference time domain (FDTD) model for sound propagation in very shallow water uses pressure and velocity grids with both 3-dimensional Cartesian and 2-dimensional cylindrical implementations. Parameters, including water and sediment properties, can vary in each dimension. Steady-state and transient signals from discrete and distributed sources, such as the surface of a vibrating pile, can be used. The cylindrical implementation uses less computation but requires axial symmetry. The Cartesian implementation allows asymmetry. FDTD calculations compare well with those of a split-step parabolic equation. Applications include modeling the propagation of individual fish sounds, fish aggregation sounds, and distributed sources.
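
    A compact sketch of a pressure-velocity staggered-grid FDTD update of the kind such a model implements, here in 2-D Cartesian form with a uniform water-like medium and a soft transient point source; sediment layers, spatially varying parameters, and absorbing boundaries are omitted.

    ```python
    import numpy as np

    nx, nz, steps = 200, 100, 500
    dx, c, rho = 0.1, 1500.0, 1000.0          # grid step (m), sound speed, density
    dt = 0.5 * dx / (c * np.sqrt(2))          # CFL-stable time step

    p = np.zeros((nx, nz))                    # pressure grid
    vx = np.zeros((nx + 1, nz))               # staggered particle velocities
    vz = np.zeros((nx, nz + 1))

    for n in range(steps):
        # update velocities from pressure gradients
        vx[1:-1, :] -= dt / (rho * dx) * (p[1:, :] - p[:-1, :])
        vz[:, 1:-1] -= dt / (rho * dx) * (p[:, 1:] - p[:, :-1])
        # update pressure from velocity divergence
        div_v = (vx[1:, :] - vx[:-1, :] + vz[:, 1:] - vz[:, :-1]) / dx
        p -= dt * rho * c**2 * div_v
        # soft transient (Gaussian-pulse) point source
        p[nx // 2, nz // 4] += np.exp(-((n * dt - 1e-2) / 2e-3) ** 2)
    ```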

  20. The spectral energy distributions of isolated neutron stars in the resonant cyclotron scattering model

    NASA Astrophysics Data System (ADS)

    Tong, Hao; Xu, Renxin

    2013-03-01

    The X-ray dim isolated neutron stars (XDINSs) are peculiar pulsar-like objects, characterized by their remarkably Planck-like spectra. In studying their spectral energy distributions, the optical/UV excess is a long-standing problem. Recently, Kaplan et al. (2011) measured the optical/UV excess for all seven sources, which is understandable in the resonant cyclotron scattering (RCS) model previously addressed. The RCS model calculations show that the RCS process can account for the observed optical/UV excess for most sources. The flat spectrum of RX J2143.0+0654 may be due to a contribution from bremsstrahlung emission of the electron system in addition to the RCS process.

  1. Invariant models in the inversion of gravity and magnetic fields and their derivatives

    NASA Astrophysics Data System (ADS)

    Ialongo, Simone; Fedi, Maurizio; Florio, Giovanni

    2014-11-01

    In potential field inversion problems we usually solve underdetermined systems, and realistic solutions may be obtained by introducing a depth-weighting function in the objective function. The choice of the exponent of such a power law is crucial. It has been suggested to determine it from the field decay due to a single source block; alternatively, it has been defined as the structural index of the investigated source distribution. In both cases, when k-order derivatives of the potential field are considered, the depth-weighting exponent has to be increased by k with respect to that of the potential field itself, in order to obtain consistent source model distributions. We show instead that invariant and realistic source-distribution models are obtained using the same depth-weighting exponent for the magnetic field and for its k-order derivatives. A similar behavior also occurs in the gravity case. In practice we found that the depth-weighting exponent is invariant for a given source model and equal to that of the corresponding magnetic field, in the magnetic case, and of the 1st derivative of the gravity field, in the gravity case. In the case of the regularized inverse problem, with depth weighting and general constraints, the mathematical demonstration of such invariance is difficult, because of its non-linearity and of its variable form, due to the different constraints used. However, tests performed on a variety of synthetic cases seem to confirm the invariance of the depth-weighting exponent. A final consideration regards the role of the regularization parameter; we show that the regularization can severely affect the depth to the source because the estimated depth tends to increase proportionally with the size of the regularization parameter. Hence, some care is needed in handling the combined effect of the regularization parameter and depth weighting.
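
    A toy sketch of where the depth-weighting exponent enters: a linear Tikhonov inversion of min ||d - Am||² + μ||Wm||² with W = diag((z + z0)^(β/2)), so that deeper cells are not penalised into vanishing by the kernel's decay. The kernel, β, z0 and μ are all illustrative values, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_data, n_cells = 40, 120
    z = np.linspace(0.5, 12.0, n_cells)                       # cell depths
    A = rng.standard_normal((n_data, n_cells)) / (1 + z)**3   # toy kernel decaying with depth
    m_true = np.zeros(n_cells); m_true[60:70] = 1.0           # buried source block
    d = A @ m_true + 0.01 * rng.standard_normal(n_data)

    beta, z0, mu = 3.0, 0.5, 1e-3                             # depth-weighting exponent, offset, reg.
    W = np.diag((z + z0) ** (beta / 2))
    # solve the regularised normal equations
    m_hat = np.linalg.solve(A.T @ A + mu * W.T @ W, A.T @ d)
    print("recovered peak depth:", z[np.argmax(np.abs(m_hat))])
    ```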

  2. The National Map seamless digital elevation model specifications

    USGS Publications Warehouse

    Archuleta, Christy-Ann M.; Constance, Eric W.; Arundel, Samantha T.; Lowe, Amanda J.; Mantey, Kimberly S.; Phillips, Lori A.

    2017-08-02

    This specification documents the requirements and standards used to produce the seamless elevation layers for The National Map of the United States. Seamless elevation data are available for the conterminous United States, Hawaii, Alaska, and the U.S. territories, in three different resolutions—1/3-arc-second, 1-arc-second, and 2-arc-second. These specifications include requirements and standards information about source data requirements, spatial reference system, distribution tiling schemes, horizontal resolution, vertical accuracy, digital elevation model surface treatment, georeferencing, data source and tile dates, distribution and supporting file formats, void areas, metadata, spatial metadata, and quality assurance and control.

  3. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.
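
    A sketch of the convolution-integral step mentioned above, assuming particle tracking supplies a unit-release breakthrough (transfer) function h(t) at a receptor; the concentration history for any source-release history q(t) then follows by superposition. Both curves below are synthetic stand-ins.

    ```python
    import numpy as np

    dt = 1.0                                   # time step (years)
    t = np.arange(0, 500, dt)
    h = (t / 80.0) * np.exp(-t / 80.0)         # toy unit-release breakthrough curve
    h /= np.trapz(h, t)                        # normalise to unit mass
    q = np.where(t < 50, 1.0, 0.0)             # toy release history (mass per year)

    # in situ concentration history C(t) = (q * h)(t), cheap enough to repeat
    # for thousands of Monte Carlo parameter realisations
    C = np.convolve(q, h)[: t.size] * dt
    print("peak concentration (arbitrary units):", C.max())
    ```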

  4. Numerical convergence and validation of the DIMP inverse particle transport model

    DOE PAGES

    Nelson, Noel; Azmy, Yousry

    2017-09-01

    The data integration with modeled predictions (DIMP) model is a promising inverse radiation transport method for solving the special nuclear material (SNM) holdup problem. Unlike previous methods, DIMP is a completely passive nondestructive assay technique that requires no initial assumptions regarding the source distribution or active measurement time. DIMP predicts the most probable source location and distribution through Bayesian inference and quasi-Newtonian optimization of predicted detector responses (using the adjoint transport solution) with measured responses. DIMP performs well with forward hemispherical collimation and unshielded measurements, but several considerations are required when using narrow-view collimated detectors. DIMP converged well to the correct source distribution as the number of synthetic responses increased. DIMP also performed well for the first experimental validation exercise after applying a collimation factor and sufficiently reducing the source search volume's extent to prevent the optimizer from getting stuck in local minima. DIMP's simple point detector response function (DRF) is being improved to address coplanar false positive/negative responses, and an angular DRF is being considered for integration with the next version of DIMP to account for highly collimated responses. Overall, DIMP shows promise for solving the SNM holdup inverse problem, especially once an improved optimization algorithm is implemented.

  5. The eGo grid model: An open-source and open-data based synthetic medium-voltage grid model for distribution power supply systems

    NASA Astrophysics Data System (ADS)

    Amme, J.; Pleßmann, G.; Bühler, J.; Hülk, L.; Kötter, E.; Schwaegerl, P.

    2018-02-01

    The increasing integration of renewable energy into the electricity supply system creates new challenges for distribution grids. The planning and operation of distribution systems requires appropriate grid models that consider the heterogeneity of existing grids. In this paper, we describe a novel method to generate synthetic medium-voltage (MV) grids, which we applied in our DIstribution Network GeneratOr (DINGO). DINGO is open-source software and uses freely available data. Medium-voltage grid topologies are synthesized based on location and electricity demand in defined demand areas. For this purpose, we use GIS data containing demand areas with high-resolution spatial data on physical properties, land use, energy, and demography. The grid topology is treated as a capacitated vehicle routing problem (CVRP) combined with a local-search metaheuristic. We also consider current planning principles for MV distribution networks, paying special attention to line congestion and voltage limit violations. In the modelling process, we included power flow calculations for validation. The resulting grid model datasets contain 3608 synthetic MV grids in high resolution, covering all of Germany and taking local characteristics into account. We compared the modelled networks with real network data. In terms of the number of transformers and total cable length, we conclude that the method presented in this paper generates realistic grids that could be used to implement a cost-optimised electrical energy system.
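
    A toy sketch of the CVRP construction step, assuming load areas act as customers (demand = load) around the HV/MV substation (depot) and a greedy nearest-neighbour heuristic opens a new feeder whenever capacity is exhausted. DINGO itself pairs such construction with local-search improvement, which is omitted here; all coordinates, demands and the capacity are invented.

    ```python
    import numpy as np

    def greedy_cvrp(xy, demand, depot, cap):
        """Return lists of customer indices, one route per feeder."""
        todo = set(range(len(xy)))
        routes = []
        while todo:
            pos, load, route = depot, 0.0, []
            while True:
                feasible = [i for i in todo if load + demand[i] <= cap]
                if not feasible:
                    break
                # visit the nearest still-feasible load area next
                i = min(feasible, key=lambda j: np.hypot(*(xy[j] - pos)))
                route.append(i); load += demand[i]; pos = xy[i]
                todo.remove(i)
            routes.append(route)
        return routes

    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 10, size=(30, 2))          # load-area locations (km)
    routes = greedy_cvrp(xy, demand=rng.uniform(0.1, 1.0, 30),
                         depot=np.array([5.0, 5.0]), cap=4.0)   # MVA, illustrative
    print(len(routes), "feeders constructed")
    ```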

  6. Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; Sigloch, Karin

    2016-11-01

    Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D = 1 - CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. By identifying and quantifying this likelihood function, we make D and thus waveform cross-correlation measurements usable for fully probabilistic sampling strategies, in source inversion and related applications such as seismic tomography.
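
    A minimal sketch of the misfit and likelihood described above: signal decorrelation D = 1 - CC between an observed and a modelled waveform, scored under a log-normal density. The moments used here are invented placeholders; in the paper they depend on the SNR of the CC measurement and on station back-azimuthal distances.

    ```python
    import numpy as np

    def decorrelation(obs, syn):
        cc = np.corrcoef(obs, syn)[0, 1]
        return 1.0 - cc

    def log_likelihood(D, mu_lnD=-2.0, sigma_lnD=0.6):
        # log of the log-normal pdf evaluated at D
        x = np.log(D)
        return (-0.5 * ((x - mu_lnD) / sigma_lnD) ** 2
                - x - np.log(sigma_lnD * np.sqrt(2.0 * np.pi)))

    rng = np.random.default_rng(0)
    obs = rng.standard_normal(400)                 # stand-in observed waveform
    syn = obs + 0.3 * rng.standard_normal(400)     # stand-in modelled waveform
    D = decorrelation(obs, syn)
    print("D =", round(D, 4), "log-likelihood =", round(float(log_likelihood(D)), 3))
    ```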

  7. White Dwarf Model Atmospheres: Synthetic Spectra for Supersoft Sources

    NASA Astrophysics Data System (ADS)

    Rauch, Thomas

    2013-01-01

    The Tübingen NLTE Model-Atmosphere Package (TMAP) calculates fully metal-line blanketed white dwarf model atmospheres and spectral energy distributions (SEDs) at a high level of sophistication. Such SEDs are easily accessible via the German Astrophysical Virtual Observatory (GAVO) service TheoSSA. We discuss applications of TMAP models to (pre) white dwarfs during the hottest stages of their stellar evolution, e.g. in the parameter range of novae and supersoft sources.

  8. Seasonal source-sink dynamics at the edge of a species' range

    USGS Publications Warehouse

    Kanda, L.L.; Fuller, T.K.; Sievert, P.R.; Kellogg, R.L.

    2009-01-01

    The roles of dispersal and population dynamics in determining species' range boundaries recently have received theoretical attention but little empirical work. Here we provide data on survival, reproduction, and movement for a Virginia opossum (Didelphis virginiana) population at a local distributional edge in central Massachusetts (USA). Most juvenile females that apparently exploited anthropogenic resources survived their first winter, whereas those using adjacent natural resources died of starvation. In spring, adult females recolonized natural areas. A life-table model suggests that a population exploiting anthropogenic resources may grow, acting as source to a geographically interlaced sink of opossums using only natural resources, and also providing emigrants for further range expansion to new human-dominated landscapes. In a geographical model, this source-sink dynamic is consistent with the local distribution identified through road-kill surveys. The Virginia opossum's exploitation of human resources likely ameliorates energetically restrictive winters and may explain both their local distribution and their northward expansion in unsuitable natural climatic regimes. Landscape heterogeneity, such as created by urbanization, may result in source-sink dynamics at highly localized scales. Differential fitness and individual dispersal movements within local populations are key to generating regional distributions, and thus species ranges, that exceed expectations. ?? 2009 by the Ecological Society of America.

  9. Inputs and spatial distribution patterns of Cr in Jiaozhou Bay

    NASA Astrophysics Data System (ADS)

    Yang, Dongfang; Miao, Zhenqing; Huang, Xinmin; Wei, Linzhen; Feng, Ming

    2018-03-01

    Cr pollution in marine bays has become one of the critical environmental issues, and understanding the input and spatial distribution patterns is essential to pollution control. According to the source strengths of the major pollution sources, the input patterns of pollutants to a marine bay can be classified as slight, moderate, or heavy, and the corresponding spatial distributions follow three block models, respectively. This paper analyzed the input patterns and distributions of Cr in Jiaozhou Bay, eastern China, based on investigations of Cr in surface waters during 1979-1983. Results showed that the input strengths of Cr in Jiaozhou Bay could be classified as moderate (32.32-112.30 μg L⁻¹) and slight (4.17-19.76 μg L⁻¹). The input patterns of Cr thus included the moderate and slight patterns, and the horizontal distributions could be described by Block Model 2 and Block Model 3, respectively. In the case of moderate input via overland runoff, Cr contents decreased from the estuaries to the bay mouth, and the distribution pattern was parallel. In the case of moderate input via marine currents, Cr contents decreased from the bay mouth into the bay, and the distribution pattern was parallel to circular. The block models were able to reveal the transfer processes of various pollutants and are helpful for understanding the distributions of pollutants in marine bays.

  10. An approach to a real-time distribution system

    NASA Technical Reports Server (NTRS)

    Kittle, Frank P., Jr.; Paddock, Eddie J.; Pocklington, Tony; Wang, Lui

    1990-01-01

    The requirements of a real-time data distribution system are to provide fast, reliable delivery of data from source to destination with little or no impact to the data source. In this particular case, the data sources are inside an operational environment, the Mission Control Center (MCC), and any workstation receiving data directly from the operational computer must conform to the software standards of the MCC. In order to supply data to development workstations outside of the MCC, it is necessary to use gateway computers that prevent unauthorized data transfer back to the operational computers. Many software programs produced on the development workstations are targeted for real-time operation. Therefore, these programs must migrate from the development workstation to the operational workstation. It is yet another requirement for the Data Distribution System to ensure smooth transition of the data interfaces for the application developers. A standard data interface model has already been set up for the operational environment, so the interface between the distribution system and the application software was developed to match that model as closely as possible. The system as a whole therefore allows the rapid development of real-time applications without impacting the data sources. In summary, this approach to a real-time data distribution system provides development users outside of the MCC with an interface to MCC real-time data sources. In addition, the data interface was developed with a flexible and portable software design. This design allows for the smooth transition of new real-time applications to the MCC operational environment.

  11. Laser induced heat source distribution in bio-tissues

    NASA Astrophysics Data System (ADS)

    Li, Xiaoxia; Fan, Shifu; Zhao, Youquan

    2006-09-01

    During numerical simulation of laser-tissue thermal interaction, the light fluence rate distribution must be formulated, as it constitutes the source term in the heat transfer equation. Usually the solution of the light radiative transport equation is given for extreme conditions such as full absorption (Lambert-Beer law), full scattering (Kubelka-Munk theory), or scattering-dominated media (diffusion approximation). Under other conditions, these solutions introduce errors of varying magnitude. The widely used Monte Carlo simulation (MCS) is more universal and exact but has difficulty handling dynamic parameters and fast simulation, and its area-partition pattern is limiting when the finite element method (FEM) is applied to solve the bio-heat transfer partial differential equation. Laser heat source plots from the above methods differ considerably from MCS. To address this problem, by analyzing the effects of different optical processes (reflection, scattering, and absorption) on laser-induced heat generation in bio-tissue, a new approach was developed that combines a modified beam-broadening model with the diffusion approximation model. First, the scattering coefficient in the beam-broadening model was replaced by the reduced scattering coefficient, which is more reasonable when scattering is treated as anisotropic. Second, the attenuation coefficient was replaced by the effective attenuation coefficient in scattering-dominated turbid bio-tissue. The results of the modified method were compared with Monte Carlo simulation and showed that the model provides more reasonable predictions of the heat source term distribution than previous methods. Such research is useful for explaining the physical characteristics of the heat source in the heat transfer equation, establishing an effective photo-thermal model, and providing a theoretical reference for related laser medicine experiments.
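
    As a hedged illustration of the two substitutions described in this record (a sketch, not code from the paper): the reduced scattering coefficient is μs' = μs(1 − g), and the effective attenuation coefficient of the diffusion approximation is μeff = √(3μa(μa + μs')). All tissue parameter values below are invented for illustration.

    ```python
    import numpy as np

    # Illustrative tissue optical properties (hypothetical values, not from the paper)
    mu_a = 0.3   # absorption coefficient, 1/mm
    mu_s = 10.0  # scattering coefficient, 1/mm
    g = 0.9      # scattering anisotropy factor

    # Substitution 1: reduced scattering coefficient for anisotropic scattering
    mu_s_red = mu_s * (1.0 - g)

    # Substitution 2: effective attenuation coefficient (diffusion approximation)
    mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_red))

    # Depth profile of the volumetric heat source for a broad beam:
    # Q(z) ~ mu_a * fluence(z), with the fluence decaying as exp(-mu_eff * z)
    z = np.linspace(0.0, 10.0, 101)  # depth, mm
    E0 = 1.0                         # surface fluence rate, arbitrary units
    heat_source = mu_a * E0 * np.exp(-mu_eff * z)

    print(f"mu_s' = {mu_s_red:.2f} 1/mm, mu_eff = {mu_eff:.2f} 1/mm")
    ```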

  12. Transport and solubility of Hetero-disperse dry deposition particulate matter subject to urban source area rainfall-runoff processes

    NASA Astrophysics Data System (ADS)

    Ying, G.; Sansalone, J.

    2010-03-01

    With respect to hydrologic processes, the impervious pavement interface significantly alters relationships between rainfall and runoff. Commensurate with this alteration of hydrologic processes, the pavement also facilitates transport and solubility of dry deposition particulate matter (PM) in runoff. This study examines dry depositional flux rates, granulometric modification by runoff transport, as well as generation of total dissolved solids (TDS), alkalinity and conductivity in source area runoff resulting from PM solubility. PM is collected from a paved source area transportation corridor (I-10) in Baton Rouge, Louisiana encompassing 17 dry deposition and 8 runoff events. The mass-based granulometric particle size distribution (PSD) is measured and modeled through a cumulative gamma function, while PM surface area distributions across the PSD follow a log-normal distribution. Dry deposition flux rates are modeled as separate first-order exponential functions of previous dry hours (PDH) for PM and its suspended, settleable and sediment fractions. When translocated from dry deposition into runoff, PSDs are modified, with the mass-based median diameter (d50) decreasing from 331 to 14 μm after transport and 60 min of settling. Solubility experiments as a function of pH, contact time and particle size using source area rainfall generate constitutive models to reproduce pH, alkalinity and TDS for historical events. Equilibrium pH, alkalinity and TDS are strongly influenced by particle size and contact time. The constitutive leaching models are combined with measured PSDs from a series of rainfall-runoff events to demonstrate that the model results replicate alkalinity and TDS in runoff from the subject watershed. Results illustrate the granulometry of dry deposition PM, the modification of PSDs along the drainage pathway, and the role of PM solubility in the generation of TDS, alkalinity and conductivity in urban source area rainfall-runoff.
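
    A minimal sketch (with invented data and parameters) of the two functional forms named above: a cumulative gamma function fitted to a mass-based PSD, and a first-order exponential dependence of deposited PM mass on previous dry hours (PDH). The saturating build-up form and all numbers are assumptions for illustration, not the paper's fitted values.

    ```python
    import numpy as np
    from scipy.stats import gamma
    from scipy.optimize import curve_fit

    # Hypothetical PSD data: particle diameter (um) vs cumulative mass fraction
    d = np.array([10, 30, 75, 150, 300, 600, 1200], dtype=float)
    cum_frac = np.array([0.05, 0.15, 0.30, 0.45, 0.62, 0.80, 0.95])

    # Fit a cumulative gamma function F(d) = GammaCDF(d; shape, scale)
    def gamma_cdf(d, shape, scale):
        return gamma.cdf(d, a=shape, scale=scale)

    (shape, scale), _ = curve_fit(gamma_cdf, d, cum_frac, p0=(1.0, 200.0))

    # First-order exponential build-up of dry-deposited mass with PDH:
    # M(PDH) = M_max * (1 - exp(-k * PDH)); parameters are illustrative only
    def deposited_mass(pdh, m_max=50.0, k=0.02):
        return m_max * (1.0 - np.exp(-k * pdh))

    print(f"fitted gamma shape = {shape:.2f}, scale = {scale:.0f} um")
    print(f"PM mass after 72 dry hours: {deposited_mass(72.0):.1f} (arbitrary units)")
    ```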

  13. Simulations of ultra-high energy cosmic rays in the local Universe and the origin of cosmic magnetic fields

    NASA Astrophysics Data System (ADS)

    Hackstein, S.; Vazza, F.; Brüggen, M.; Sorce, J. G.; Gottlöber, S.

    2018-04-01

    We simulate the propagation of cosmic rays at ultra-high energies, ≳10¹⁸ eV, in models of extragalactic magnetic fields in constrained simulations of the local Universe. We use constrained initial conditions with the cosmological magnetohydrodynamics code ENZO. The resulting models of the distribution of magnetic fields in the local Universe are used in the CRPROPA code to simulate the propagation of ultra-high energy cosmic rays. We investigate the impact of six different magneto-genesis scenarios, both primordial and astrophysical, on the propagation of cosmic rays over cosmological distances. Moreover, we study the influence of different source distributions around the Milky Way. Our study shows that different scenarios of magneto-genesis do not have a large impact on the anisotropy measurements of ultra-high energy cosmic rays. However, at high energies above the Greisen-Zatsepin-Kuzmin (GZK) limit, there is anisotropy caused by the distribution of nearby sources, independent of the magnetic field model. This provides a chance to identify cosmic ray sources with future full-sky measurements and high number statistics at the highest energies. Finally, we compare our results to the dipole signal measured by the Pierre Auger Observatory. All our source models and magnetic field models could reproduce the observed dipole amplitude with a pure iron injection composition. Our results indicate that the dipole is observed due to clustering of secondary nuclei in the direction of nearby sources of heavy nuclei. A light injection composition is disfavoured, since the increase in dipole angular power from 4 to 8 EeV is too slow compared to observations by the Pierre Auger Observatory.

  14. Modeling deep brain stimulation: point source approximation versus realistic representation of the electrode

    NASA Astrophysics Data System (ADS)

    Zhang, Tianhe C.; Grill, Warren M.

    2010-12-01

    Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogeneous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation in modeling several special cases (tissue anisotropy, a long active electrode and bipolar stimulation) was also examined. Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation of populations of neurons in response to DBS.
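
    A hedged sketch of the point-source approximation tested here: the extracellular potential of a monopolar point current source in a homogeneous, isotropic volume conductor is V(r) = I / (4πσr). The stimulus amplitude, conductivity, and distances below are illustrative values, not those of the study.

    ```python
    import numpy as np

    def point_source_potential(I, sigma, r):
        """Extracellular potential V = I / (4 * pi * sigma * r) of a point
        current source: current I (A), conductivity sigma (S/m), distance r (m)."""
        return I / (4.0 * np.pi * sigma * np.asarray(r))

    sigma = 0.2   # assumed bulk tissue conductivity, S/m
    I = -1.0e-3   # monopolar cathodic stimulus, A (illustrative)
    r = np.array([0.5e-3, 1e-3, 2e-3, 4e-3])  # distances from electrode, m

    for ri, vi in zip(r, point_source_potential(I, sigma, r)):
        print(f"r = {ri * 1e3:.1f} mm -> V = {vi * 1e3:.1f} mV")
    ```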

  15. Data format standard for sharing light source measurements

    NASA Astrophysics Data System (ADS)

    Gregory, G. Groot; Ashdown, Ian; Brandenburg, Willi; Chabaud, Dominique; Dross, Oliver; Gangadhara, Sanjay; Garcia, Kevin; Gauvin, Michael; Hansen, Dirk; Haraguchi, Kei; Hasna, Günther; Jiao, Jianzhong; Kelley, Ryan; Koshel, John; Muschaweck, Julius

    2013-09-01

    Optical design requires accurate characterization of light sources for computer aided design (CAD) software. Various methods have been used to model sources, from accurate physical models to measurement of light output. It has become common practice for designers to include measured source data for design simulations. Typically, a measured source will contain rays which sample the output distribution of the source. The ray data must then be exported to various formats suitable for import into optical analysis or design software. Source manufacturers are also making measurements of their products and supplying CAD models along with ray data sets for designers. The increasing availability of data has been beneficial to the design community but has caused a large expansion in storage needs for the source manufacturers since each software program uses a unique format to describe the source distribution. In 2012, the Illuminating Engineering Society (IES) formed a working group to understand the data requirements for ray data and recommend a standard file format. The working group included representatives from software companies supplying the analysis and design tools, source measurement companies providing metrology, source manufacturers creating the data and users from the design community. Within one year the working group proposed a file format which was recently approved by the IES for publication as TM-25. This paper will discuss the process used to define the proposed format, highlight some of the significant decisions leading to the format and list the data to be included in the first version of the standard.

  16. The proton and helium anomalies in the light of the Myriad model

    NASA Astrophysics Data System (ADS)

    Salati, Pierre; Génolini, Yoann; Serpico, Pasquale; Taillet, Richard

    2017-03-01

    A hardening of the proton and helium fluxes is observed above a few hundred GeV/nuc. The distribution of local sources of primary cosmic rays has been suggested as a potential solution to this puzzling behavior. Some authors even claim that a single source is responsible for the observed anomalies. But how probable are these explanations? To answer that question, our current description of cosmic ray Galactic propagation needs to be replaced by the Myriad model. In the former approach, sources of protons and helium nuclei are treated as a jelly continuously spread over space and time. A more accurate description is provided by the Myriad model, where sources are considered as point-like events. This leads to a probabilistic derivation of the fluxes of primary species, and opens the possibility that larger-than-average values may be observed at the Earth. For a long time though, a major obstacle has been the infinite variance associated with the probability distribution function which the fluxes follow. Several suggestions have been made to cure this problem, but none is entirely satisfactory. We go a step further here and solve the infinite variance problem of the Myriad model by making use of the generalized central limit theorem. We find that primary fluxes are distributed according to a stable law with heavy tail, well known to financial analysts. The probability that the proton and helium anomalies are sourced by local SNRs can then be calculated. The p-values associated with the CREAM measurements turn out to be small, unless somewhat unrealistic propagation parameters are assumed.
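
    A hedged numerical sketch of the generalized-central-limit idea invoked above: the summed flux from many point-like sources with a heavy-tailed single-source contribution follows an alpha-stable law, whose upper tail gives the probability of a larger-than-average flux. All parameter values are invented for illustration and are not the paper's fitted values.

    ```python
    import numpy as np
    from scipy.stats import levy_stable

    # Alpha-stable law for the summed primary flux (illustrative parameters):
    # alpha < 2 gives the heavy tail; beta = 1 gives maximal right skew.
    alpha, beta = 1.5, 1.0
    loc, scale = 1.0, 0.05  # in units of the mean flux (arbitrary)

    flux = levy_stable(alpha, beta, loc=loc, scale=scale)

    # "p-value" of observing a flux at least 20% above the ensemble mean
    # in this toy setup (a local-source excess)
    print(f"P(flux > 1.2 x mean) = {flux.sf(1.2):.3f}")

    # Monte Carlo check using draws from the same stable law
    rng = np.random.default_rng(0)
    samples = flux.rvs(size=100_000, random_state=rng)
    print(f"empirical tail fraction = {np.mean(samples > 1.2):.3f}")
    ```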

  17. SU-F-19A-05: Experimental and Monte Carlo Characterization of the 1 cm CivaString ¹⁰³Pd Brachytherapy Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J; Micka, J; Culberson, W

    Purpose: To determine the in-air azimuthal anisotropy and in-water dose distribution for the 1 cm length of the CivaString ¹⁰³Pd brachytherapy source through measurements and Monte Carlo (MC) simulations. American Association of Physicists in Medicine Task Group No. 43 (TG-43) dosimetry parameters were also determined for this source. Methods: The in-air azimuthal anisotropy of the source was measured with a NaI scintillation detector and simulated with the MCNP5 radiation transport code. Measured and simulated results were normalized to their respective mean values and compared. The TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function for this source were determined from LiF:Mg,Ti thermoluminescent dosimeter (TLD) measurements and MC simulations. The impact of ¹⁰³Pd well-loading variability on the in-water dose distribution was investigated using MC simulations by comparing the dose distribution for a source model with four wells of equal strength to that for a source model with strengths increased by 1% for two of the four wells. Results: NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy showed that ≥95% of the normalized data were within 1.2% of the mean value. TLD measurements and MC simulations of the TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function agreed to within the experimental TLD uncertainties (k=2). MC simulations showed that a 1% variability in ¹⁰³Pd well-loading resulted in changes of <0.1%, <0.1%, and <0.3% in the TG-43 dose-rate constant, radial dose distribution, and polar dose distribution, respectively. Conclusion: The CivaString source has a high degree of azimuthal symmetry as indicated by the NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy. TG-43 dosimetry parameters for this source were determined from TLD measurements and MC simulations. ¹⁰³Pd well-loading variability results in minimal variations in the in-water dose distribution according to MC simulations. This work was partially supported by CivaTech Oncology, Inc. through an educational grant for Joshua Reed, John Micka, Wesley Culberson, and Larry DeWerd and through research support for Mark Rivard.

  18. Source Distributions of Substorm Ions Observed in the Near-Earth Magnetotail

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, M.; El-Alaoui, M.; Peroomian, V.; Walker, R. J.; Raeder, J.; Frank, L. A.; Paterson, W. R.

    1999-01-01

    This study employs Geotail plasma observations and numerical modeling to determine sources of the ions observed in the near-Earth magnetotail near midnight during a substorm. The growth phase has the low-latitude boundary layer as its most important source of ions at Geotail, but during the expansion phase the plasma mantle is dominant. The mantle distribution shows evidence of two distinct entry mechanisms: entry through a high latitude reconnection region resulting in an accelerated component, and entry through open field lines traditionally identified with the mantle source. The two entry mechanisms are separated in time, with the high-latitude reconnection region disappearing prior to substorm onset.

  19. MODELING THE DISTRIBUTION OF NONPOINT NITROGEN SOURCES AND SINKS IN THE NEUSE RIVER BASIN OF NORTH CAROLINA, USA

    EPA Science Inventory

    This study quantified nonpoint nitrogen (N) sources and sinks across the 14,582 km² Neuse River Basin (NRB) located in North Carolina, to provide a tabular database to initialize in-stream N decay models and graphic overlay products for the development of management approaches to...

  20. Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.

    1981-01-01

    Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
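
    A hedged sketch of the quadrature idea (not the authors' code): the volume integral for a body's gravity anomaly is evaluated at Gauss-Legendre nodes treated as equivalent point sources. For brevity this toy version uses flat Cartesian geometry and a rectangular prism, whereas the paper works on a spherical earth with arbitrarily shaped bodies.

    ```python
    import numpy as np
    from numpy.polynomial.legendre import leggauss

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def gz_prism_glq(obs, bounds, density, n=8):
        """Vertical gravity anomaly (m/s^2) of a rectangular prism via
        Gauss-Legendre quadrature: the volume integral is sampled at n^3
        nodes, each acting as an equivalent point mass. z is positive down.
        obs = (x, y, z); bounds = (x1, x2, y1, y2, z1, z2) in meters."""
        x1, x2, y1, y2, z1, z2 = bounds
        nodes, weights = leggauss(n)  # nodes and weights on [-1, 1]
        def scale(a, b):              # map nodes/weights to [a, b]
            return 0.5 * (b - a) * nodes + 0.5 * (b + a), 0.5 * (b - a) * weights
        xs, wx = scale(x1, x2); ys, wy = scale(y1, y2); zs, wz = scale(z1, z2)
        gz = 0.0
        for xi, wxi in zip(xs, wx):
            for yi, wyi in zip(ys, wy):
                for zi, wzi in zip(zs, wz):
                    dx, dy, dz = xi - obs[0], yi - obs[1], zi - obs[2]
                    r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
                    gz += wxi * wyi * wzi * density * dz / r3  # point-mass kernel
        return G * gz

    # 1 km cube with +300 kg/m^3 density contrast, observed 500 m above its center
    print(gz_prism_glq((500.0, 500.0, -500.0), (0, 1000, 0, 1000, 0, 1000), 300.0))
    ```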

  1. Heterogeneity of direct aftershock productivity of the main shock rupture

    NASA Astrophysics Data System (ADS)

    Guo, Yicun; Zhuang, Jiancang; Hirata, Naoshi; Zhou, Shiyong

    2017-07-01

    The epidemic type aftershock sequence (ETAS) model is widely used to describe and analyze the clustering behavior of seismicity. Instead of regarding large earthquakes as point sources, the finite-source ETAS model treats them as ruptures that extend in space. Each earthquake rupture consists of many patches, and each patch triggers its own aftershocks isotropically. We design an iterative algorithm to invert the unobserved fault geometry based on the stochastic reconstruction method. This model is applied to analyze the Japan Meteorological Agency (JMA) catalog during 1964-2014. We take six great earthquakes with magnitudes >7.5 after 1980 as finite sources and reconstruct the aftershock productivity patterns on each rupture surface. Compared with results from the point-source ETAS model, we find the following: (1) the finite-source model improves the data fitting; (2) direct aftershock productivity is heterogeneous on the rupture plane; (3) the triggering abilities of M5.4+ events are enhanced; (4) the background rate is higher in the off-fault region and lower in the on-fault region for the Tohoku earthquake, while high probabilities of direct aftershocks are distributed all over the source region in the modified model; (5) the triggering abilities of five main shocks become 2-6 times higher after taking the rupture geometries into consideration; and (6) the trends of the cumulative background rate are similar in both models, indicating the same levels of detection ability for seismicity anomalies. Moreover, correlations between aftershock productivity and slip distributions imply that aftershocks within rupture faults are adjustments to coseismic stress changes due to slip heterogeneity.
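
    For context, a hedged sketch of the standard point-source ETAS conditional intensity that the finite-source model generalizes: λ(t) = μ + Σ_{t_i<t} κ e^{α(m_i − m₀)} (t − t_i + c)^{−p}. The parameter values are illustrative defaults, not those fitted to the JMA catalog.

    ```python
    import numpy as np

    def etas_intensity(t, event_times, event_mags, mu=0.2, kappa=0.05,
                       alpha=1.5, c=0.01, p=1.2, m0=4.0):
        """Temporal ETAS conditional intensity (events/day): background rate mu
        plus Omori-Utsu decay terms from all past events, each scaled by an
        exponential productivity law in magnitude."""
        t_i = np.asarray(event_times)
        m_i = np.asarray(event_mags)
        past = t_i < t
        trig = kappa * np.exp(alpha * (m_i[past] - m0)) \
               * (t - t_i[past] + c) ** (-p)
        return mu + trig.sum()

    # Toy catalog: occurrence times (days) and magnitudes of three past events
    times = [0.0, 2.5, 3.1]
    mags = [6.8, 5.1, 5.6]
    print(f"lambda(t = 4.0 d) = {etas_intensity(4.0, times, mags):.3f} events/day")
    ```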

  2. On performance of parametric and distribution-free models for zero-inflated and over-dispersed count responses.

    PubMed

    Tang, Wan; Lu, Naiji; Chen, Tian; Wang, Wenjuan; Gunzler, Douglas David; Han, Yu; Tu, Xin M

    2015-10-30

    Zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are widely used to model zero-inflated count responses. These models extend the Poisson and negative binomial (NB) models to address excessive zeros in the count response. By adding a degenerate distribution centered at 0 and interpreting it as describing a non-risk group in the population, the ZIP (ZINB) models a two-component population mixture. As in applications of the Poisson and NB, the key difference between ZIP and ZINB is the allowance for overdispersion by the ZINB in its NB component in modeling the count response for the at-risk group. Overdispersion arising in practice often does not follow the NB, and applications of ZINB to such data yield invalid inference. If the sources of overdispersion are known, other parametric models may be used to model the overdispersion directly, but such models are likewise subject to distributional assumptions, and this approach may not be applicable if information about the sources of overdispersion is unavailable. In this paper, we propose a distribution-free alternative and compare its performance with these popular parametric models as well as a moment-based approach proposed by Yu et al. [Statistics in Medicine 2013; 32: 2390-2405]. Like the generalized estimating equations, the proposed approach requires no elaborate distributional assumptions. Compared with the approach of Yu et al., it is more robust to overdispersed zero-inflated responses. We illustrate our approach with both simulated and real study data. Copyright © 2015 John Wiley & Sons, Ltd.
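
    As a hedged illustration of the two-component mixture described above (independent of the paper's distribution-free method): the ZIP probability mass function with non-risk probability π and Poisson mean λ is P(Y=0) = π + (1−π)e^{−λ} and P(Y=k) = (1−π)e^{−λ}λ^k/k! for k ≥ 1. The fitting code below is a minimal sketch with simulated data.

    ```python
    import numpy as np
    from scipy.stats import poisson
    from scipy.optimize import minimize

    def zip_logpmf(y, pi, lam):
        """Log-pmf of the zero-inflated Poisson mixture: a degenerate spike
        at 0 (non-risk group, probability pi) plus a Poisson(lam) component."""
        y = np.asarray(y)
        log_p0 = np.log(pi + (1 - pi) * np.exp(-lam))    # P(Y = 0)
        log_pk = np.log1p(-pi) + poisson.logpmf(y, lam)  # P(Y = k), k >= 1
        return np.where(y == 0, log_p0, log_pk)

    def fit_zip(y):
        """Maximum-likelihood fit of (pi, lam) on unconstrained scales."""
        def nll(theta):
            pi = 1 / (1 + np.exp(-theta[0]))  # inverse logit keeps pi in (0, 1)
            lam = np.exp(theta[1])            # log transform keeps lam positive
            return -zip_logpmf(y, pi, lam).sum()
        res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
        return 1 / (1 + np.exp(-res.x[0])), np.exp(res.x[1])

    rng = np.random.default_rng(1)
    y = np.where(rng.random(1000) < 0.3, 0, rng.poisson(2.5, 1000))  # ZIP data
    print("fitted (pi, lambda):", fit_zip(y))
    ```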

  3. Active source monitoring at the Wenchuan fault zone: coseismic velocity change associated with aftershock event and its implication

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Ge, Hongkui; Wang, Baoshan; Hu, Jiupeng; Yuan, Songyong; Qiao, Sen

    2014-12-01

    With the improvement of seismic observation systems, more and more observations indicate that earthquakes may cause seismic velocity changes. However, the amplitude and spatial distribution of the velocity variation remain controversial. Recent active source monitoring carried out adjacent to the Wenchuan Fault Scientific Drilling (WFSD) site revealed an unambiguous coseismic velocity change associated with a local Ms 5.5 earthquake. Here, we carry out forward modeling using a two-dimensional spectral element method to further investigate the amplitude and spatial distribution of the observed velocity change. The model is well constrained by results from seismic reflection and WFSD coring. Our model strongly suggests that the observed coseismic velocity change is localized within the fault zone, with a width of ~120 m, rather than being distributed broadly by dynamic strong ground shaking. A velocity decrease of ~2.0% within the fault zone is required to fit the observed travel time delay distribution, which is consistent with rock mechanics experiments and theoretical modeling.

  4. Development of thermal model to analyze thermal flux distribution in thermally enhanced machining of high chrome white cast iron

    NASA Astrophysics Data System (ADS)

    Ravi, A. M.; Murigendrappa, S. M.

    2018-04-01

    In recent times, thermally enhanced machining (TEM) has slowly been gearing up to cut hard metals such as high chrome white cast iron (HCWCI) that are impossible to machine by conventional procedures. Setting suitable cutting parameters and positioning the heat source against the work are critical to enhancing the machinability characteristics of the work material. In this research work, an oxy-LPG flame was used as the heat source and HCWCI as the workpiece. ANSYS-CFD-Flow software was used to develop a transient thermal model to analyze the thermal flux distribution on the work surface during TEM of HCWCI using cubic boron nitride (CBN) tools. A non-contact infrared thermal sensor was used to measure the surface temperature continuously at different positions, and the measurements validate the thermal model results. The results confirm that the thermal model is a reliable predictive tool for thermal flux distribution analysis in the TEM process.

  5. Recent Simulation Results on Ring Current Dynamics Using the Comprehensive Ring Current Model

    NASA Technical Reports Server (NTRS)

    Zheng, Yihua; Zaharia, Sorin G.; Lui, Anthony T. Y.; Fok, Mei-Ching

    2010-01-01

    Plasma sheet conditions and electromagnetic field configurations are both crucial in determining ring current evolution and connection to the ionosphere. In this presentation, we investigate how different conditions of plasma sheet distribution affect ring current properties. Results include comparative studies in 1) varying the radial distance of the plasma sheet boundary; 2) varying the local time distribution of the source population; 3) varying the source spectra. Our results show that a source located farther away leads to a stronger ring current than a source that is closer to the Earth. Local time distribution of the source plays an important role in determining both the radial and azimuthal (local time) location of the ring current peak pressure. We found that post-midnight source locations generally lead to a stronger ring current. This finding is in agreement with Lavraud et al. [2008]. However, our results do not exhibit any simple dependence of the local time distribution of the peak ring current (within the lower energy range) on the local time distribution of the source, as suggested by Lavraud et al. [2008]. In addition, we will show how different specifications of the magnetic field in the simulation domain affect ring current dynamics in reference to the 20 November 2007 storm, including initial results on coupling the CRCM with a three-dimensional (3-D) plasma force balance code to achieve self-consistency in the magnetic field.

  6. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    NASA Astrophysics Data System (ADS)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support catchment management decisions. As the questions asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results from the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, conservative geochemical behavior was assumed for all elements, even those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between actual and estimated values that did not exceed 6.7% and goodness-of-fit (GOF) values above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for the source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
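
    A hedged sketch of the unmixing step such experiments test: estimate source proportions f (f ≥ 0, Σf = 1) that best reproduce the mixture geochemistry from the source means. The tracer values are invented, the sum-to-one constraint is imposed softly via a heavily weighted extra equation, and the GOF measure is one common fingerprinting convention; the paper's model details may differ.

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    # Rows: tracer properties (e.g., Sr, Rb, Fe, Ti); columns: 3 sources (invented)
    A = np.array([[120.0,  80.0, 200.0],
                  [ 95.0, 140.0,  60.0],
                  [  3.1,   4.5,   2.2],
                  [  0.8,   1.9,   1.1]])
    c_mix = np.array([135.0, 100.0, 3.2, 1.2])  # measured mixture composition

    # Normalize each tracer row so no single element dominates the fit
    scale = A.mean(axis=1, keepdims=True)
    A_n, c_n = A / scale, c_mix / scale.ravel()

    # Soft sum-to-one constraint; non-negativity via the solver bounds
    w = 1e3
    A_aug = np.vstack([A_n, w * np.ones((1, A.shape[1]))])
    c_aug = np.append(c_n, w * 1.0)
    res = lsq_linear(A_aug, c_aug, bounds=(0.0, 1.0))

    f = res.x
    gof = 100.0 * (1.0 - np.abs(A @ f - c_mix).sum() / c_mix.sum())
    print("estimated source proportions:", np.round(f, 3), f"GOF = {gof:.1f}%")
    ```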

  7. Validation of a Sensor-Driven Modeling Paradigm for Multiple Source Reconstruction with FFT-07 Data

    DTIC Science & Technology

    2009-05-01

    operational warning and reporting (information) systems that combine automated data acquisition, analysis, source reconstruction, display and distribution of...report and to incorporate this operational capability into the integrative multiscale urban modeling system implemented in the computational...Journal of Fluid Mechanics, 180, 529–556. [27] Flesch, T., Wilson, J. D., and Yee, E. (1995), Backward-time Lagrangian stochastic dispersion models

  8. Linear Power-Flow Models in Multiphase Distribution Networks: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, Andrey; Dall'Anese, Emiliano

    This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus-voltages, line-currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally-affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
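
    A hedged single-phase sketch of the fixed-point idea (the paper's multiphase wye/delta treatment is considerably more general): load-bus voltages satisfy V = w + Y_LL⁻¹ conj(s / V), where w is the zero-injection voltage profile, and taking one fixed-point step from V = w yields a model affine in conj(s). The 3-bus feeder data below are invented.

    ```python
    import numpy as np

    # Toy 3-bus feeder; bus 0 is the slack. Y is the nodal admittance matrix (p.u.).
    v0 = 1.0 + 0.0j
    Y = np.array([[ 20 - 60j, -10 + 30j, -10 + 30j],
                  [-10 + 30j,  10 - 30j,   0 +  0j],
                  [-10 + 30j,   0 +  0j,  10 - 30j]])
    Y_LL = Y[1:, 1:]
    Y_L0 = Y[1:, :1]

    # Zero-injection voltage profile: w = -Y_LL^{-1} Y_L0 v0
    w = -np.linalg.solve(Y_LL, Y_L0 @ np.array([v0]))

    def fixed_point_pf(s, iters=20):
        """'Exact' power flow via the fixed-point map V <- w + Y_LL^{-1} conj(s/V)."""
        V = w.copy()
        for _ in range(iters):
            V = w + np.linalg.solve(Y_LL, np.conj(s / V))
        return V

    def linear_pf(s):
        """Linear model: one fixed-point step from V = w, so V is affine in conj(s)."""
        return w + np.linalg.solve(Y_LL, np.conj(s) / np.conj(w))

    s = np.array([-0.10 - 0.05j, -0.08 - 0.03j])  # net injections (loads), p.u.
    print("exact  |V|:", np.abs(fixed_point_pf(s)))
    print("linear |V|:", np.abs(linear_pf(s)))
    ```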

  9. Relative Contributions of the Saharan and Sahelian Sources to the Atmospheric Dust Load Over the North Atlantic

    NASA Technical Reports Server (NTRS)

    Ginoux, Paul; Chin, M.; Torres, O.; Prospero, J.; Dubovik, O.; Holben, B.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    It has long been recognized that the Saharan desert is the major source for long range transport of mineral dust over the Atlantic. The contribution from other natural sources to the dust load over the Atlantic has generally been ignored in previous model studies or been replaced by anthropogenically disturbed soil emissions. Recently, Prospero et al. have identified the major dust sources over the Earth using the TOMS aerosol index. They showed that these sources correspond to dry lakes with layers of sediment deposited in the late Holocene or Pleistocene. One of the most active of these sources seems to be the Bodele depression. Chiapello et al. have analyzed the mineralogical composition of dust on the West coast of Africa. They found that Sahelian dust events are the most intense but are less frequent than Saharan plumes. This suggests that the Bodele depression could contribute significantly to the dust load over the Atlantic. The relative contribution of the Sahel and Sahara dust sources is of importance for marine biogeochemistry and atmospheric radiation, because each source has a distinct mineralogical composition. We present here a model study of the relative contributions of the Sahara and Sahel sources to atmospheric dust aerosols over the North Atlantic. The Georgia Tech/Goddard Global Ozone Chemistry Aerosol Radiation and Transport (GOCART) model is used to simulate the dust distribution in 1996-1997. Dust particles are labeled depending on their sources. We will present the comparison between the model results and observations from ground based measurements (dust concentration, optical thickness and size distribution) and satellite data (TOMS aerosol index). The relative contribution of each source will then be analyzed spatially and temporally.

  10. A new Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin

    2017-04-01

    Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's Functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimations, we undertook the effort of producing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.

  11. Estimates of water source contributions in a dynamic urban water supply system inferred via a Bayesian stable isotope mixing model

    NASA Astrophysics Data System (ADS)

    Jameel, M. Y.; Brewer, S.; Fiorella, R.; Tipple, B. J.; Bowen, G. J.; Terry, S.

    2017-12-01

    Public water supply systems (PWSS) are complex distribution systems and critical infrastructure, making them vulnerable to physical disruption and contamination. Exploring the susceptibility of PWSS to such perturbations requires detailed knowledge of the supply system structure and operation. Although the physical structure of supply systems (i.e., pipeline connections) is usually well documented for developed cities, the actual flow patterns of water in these systems are typically unknown or estimated based on hydrodynamic models with limited observational validation. Here, we present a novel method for mapping the flow structure of water in a large, complex PWSS, building upon recent work highlighting the potential of stable isotopes of water (SIW) to document water management practices within complex PWSS. We sampled a major water distribution system of the Salt Lake Valley, Utah, measuring SIW of water sources, treatment facilities, and numerous sites within the supply system. We then developed a hierarchical Bayesian (HB) isotope mixing model to quantify the proportion of water supplied by different sources at sites within the supply system. Known production volumes and spatial distance effects were used to define the prior probabilities for each source; however, we did not include other physical information about the supply system. Our results were in general agreement with those obtained by hydrodynamic models and provide quantitative estimates of the contributions of different water sources to a given site along with robust estimates of uncertainty. Secondary properties of the supply system, such as regions of "static" and "dynamic" supply (e.g., regions supplied dominantly by one source vs. those experiencing active mixing between multiple sources), can be inferred from the results. The HB isotope mixing model offers a new investigative technique for analyzing PWSS and documenting aspects of supply system structure and operation that are otherwise challenging to observe. The method could allow water managers to document spatiotemporal variation in PWSS flow patterns, critical for interrogating the distribution system to inform operational decision making or disaster response, optimize water supply, and monitor and enforce water rights.
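
    A hedged toy version of the Bayesian mixing idea (the paper's model is hierarchical, with production-volume and distance priors): one tap-water sample, two sources, and a single tracer (δ²H), with the posterior over the source-1 fraction sampled by random-walk Metropolis. All numbers are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    d2h_sources = np.array([-120.0, -95.0])  # source mean delta-2H (per mil)
    d2h_sample, sigma = -103.0, 2.0          # measured site value and its sd

    def log_post(f):
        """Log-posterior of the fraction f of source 1 (uniform prior on [0, 1])."""
        if not 0.0 <= f <= 1.0:
            return -np.inf
        mu = f * d2h_sources[0] + (1 - f) * d2h_sources[1]
        return -0.5 * ((d2h_sample - mu) / sigma) ** 2

    # Random-walk Metropolis sampler over f
    f, chain = 0.5, []
    for _ in range(20_000):
        prop = f + 0.05 * rng.standard_normal()
        if np.log(rng.random()) < log_post(prop) - log_post(f):
            f = prop
        chain.append(f)

    post = np.array(chain[2_000:])  # discard burn-in
    print(f"source 1 fraction: {post.mean():.2f} +/- {post.std():.2f}")
    ```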

  12. The size distribution of Pacific Seamounts

    NASA Astrophysics Data System (ADS)

    Smith, Deborah K.; Jordan, Thomas H.

    1987-11-01

    An analysis of wide-beam, Sea Beam and map-count data in the eastern and southern Pacific confirms the hypothesis that the average number of "ordinary" seamounts with summit heights h ≥ H can be approximated by the exponential frequency-size distribution: ν(H) = ν₀ e^(−βH). The exponential model, characterized by the single scale parameter β⁻¹, is found to be superior to a power-law (self-similar) model. The exponential model provides a good first-order description of the summit-height distribution over a very broad spectrum of seamount sizes, from small cones (h < 300 m) to tall composite volcanoes (h > 3500 m). The distribution parameters obtained from 157,000 km of wide-beam profiles in the eastern and southern Pacific Ocean are ν₀ = (5.4 ± 0.65) × 10⁻⁹ m⁻² and β = (3.5 ± 0.21) × 10⁻³ m⁻¹, yielding an average of 5400 ± 650 seamounts per million square kilometers, of which 170 ± 17 are greater than one kilometer in height. The exponential distribution provides a reference for investigating the populations of not-so-ordinary seamounts, such as those on hotspot swells and near fracture zones, and seamounts in other ocean basins. If we assume that volcano height is determined by a hydraulic head proportional to the source depth of the magma column, then our observations imply an approximately exponential distribution of source depths. For reasonable values of magma and crustal densities, a volcano with the characteristic height β⁻¹ = 285 m has an apparent source depth on the order of the crustal thickness.
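
    A short worked check of the exponential model using the fitted parameters quoted above (ν₀ = 5.4 × 10⁻⁹ m⁻², β = 3.5 × 10⁻³ m⁻¹): the expected number of seamounts taller than H over an area A is A·ν₀·e^(−βH).

    ```python
    import numpy as np

    v0 = 5.4e-9            # seamounts per m^2 (fitted value from the abstract)
    beta = 3.5e-3          # 1/m
    area = 1.0e6 * 1.0e6   # 10^6 km^2 expressed in m^2

    def n_above(H):
        """Expected number of seamounts with summit height >= H per 10^6 km^2."""
        return area * v0 * np.exp(-beta * H)

    print(f"all seamounts (H >= 0): {n_above(0.0):.0f}")     # ~5400, as quoted
    print(f"taller than 1 km:       {n_above(1000.0):.0f}")  # ~160-170, as quoted
    ```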

  13. Quasar microlensing models with constraints on the Quasar light curves

    NASA Astrophysics Data System (ADS)

    Tie, S. S.; Kochanek, C. S.

    2018-01-01

    Quasar microlensing analyses implicitly generate a model of the variability of the source quasar. The implied source variability may be unrealistic, yet its likelihood is generally not evaluated. We used the damped random walk (DRW) model for quasar variability to evaluate the likelihood of the source variability and applied the revised algorithm to a microlensing analysis of the lensed quasar RX J1131-1231. We compared estimates of the size of the quasar disc and the average stellar mass of the lens galaxy with and without applying the DRW likelihoods for the source variability model and found no significant effect on the estimated physical parameters. The most likely explanation is that unrealistic source light-curve models are generally associated with poor microlensing fits that already make a negligible contribution to the probability distributions of the derived parameters.
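
    A hedged sketch of the DRW likelihood used to score candidate source light curves: the DRW is a Gaussian process with covariance S(Δt) = σ² exp(−|Δt|/τ). The direct dense-matrix evaluation below is for illustration; the epochs, magnitudes, and parameter values are invented.

    ```python
    import numpy as np

    def drw_loglike(t, y, sigma, tau, yerr=0.0):
        """Gaussian-process log-likelihood of a light curve under the damped
        random walk model, covariance S(dt) = sigma^2 * exp(-|dt| / tau)."""
        t, y = np.asarray(t), np.asarray(y)
        dt = np.abs(t[:, None] - t[None, :])
        C = sigma ** 2 * np.exp(-dt / tau) + (yerr ** 2) * np.eye(len(t))
        resid = y - y.mean()  # crude mean subtraction for this sketch
        _, logdet = np.linalg.slogdet(C)
        return -0.5 * (resid @ np.linalg.solve(C, resid)
                       + logdet + len(t) * np.log(2 * np.pi))

    rng = np.random.default_rng(3)
    t = np.sort(rng.uniform(0, 1000, 60))  # observation epochs, days
    y = 0.3 * rng.standard_normal(60)      # toy magnitudes
    print("log L(sigma=0.3, tau=200 d) =", drw_loglike(t, y, 0.3, 200.0, yerr=0.05))
    ```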

  14. A model-based analysis of extinction ratio effects on phase-OTDR distributed acoustic sensing system performance

    NASA Astrophysics Data System (ADS)

    Aktas, Metin; Maral, Hakan; Akgun, Toygar

    2018-02-01

    Extinction ratio (ER) is an inherent limiting factor that has a direct effect on the detection performance of phase-OTDR based distributed acoustic sensing systems. In this work we present a model-based analysis of Rayleigh scattering to simulate the effects of extinction ratio on the received signal under varying signal acquisition scenarios and system parameters. These signal acquisition scenarios are constructed to represent typically observed cases such as multiple vibration sources cluttered around the target vibration source to be detected, continuous wave light sources with center frequency drift, varying fiber optic cable lengths and varying ADC bit resolutions. Results show that an insufficient ER can result in a high optical noise floor and effectively hide the effects of elaborate system improvement efforts.

  15. Efficient measurement of large light source near-field color and luminance distributions for optical design and simulation

    NASA Astrophysics Data System (ADS)

    Kostal, Hubert; Kreysar, Douglas; Rykowski, Ronald

    2009-08-01

    The color and luminance distributions of large light sources are difficult to measure because of the size of the source and the physical space required for the measurement. We describe a method for the measurement of large light sources in a limited space that efficiently overcomes the physical limitations of traditional far-field measurement techniques. This method uses a calibrated, high dynamic range imaging colorimeter and a goniometric system to move the light source through an automated measurement sequence in the imaging colorimeter's field-of-view. The measurement is performed from within the near-field of the light source, enabling a compact measurement set-up. This method generates a detailed near-field color and luminance distribution model that can be directly converted to ray sets for optical design and that can be extrapolated to far-field distributions for illumination design. The measurements obtained show excellent correlation to traditional imaging colorimeter and photogoniometer measurement methods. The near-field goniometer approach that we describe is broadly applicable to general lighting systems, can be deployed in a compact laboratory space, and provides full near-field data for optical design and simulation.

  16. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    PubMed

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known at the early phase of the emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff-model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short range atmospheric dispersion using the off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
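
    A hedged sketch of one common way to make an EnKF estimate a source term (not necessarily the paper's exact formulation): each ensemble member carries an augmented state of concentrations plus the uncertain release rate, so the standard analysis step updates both. Dimensions, monitor locations, and values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_ens, n_conc = 50, 100

    # Ensemble matrix X: rows = state (100 concentrations + 1 release rate)
    X = np.vstack([rng.lognormal(0.0, 1.0, (n_conc, n_ens)),  # concentrations
                   rng.lognormal(2.0, 0.5, (1, n_ens))])      # release rate (aug.)

    # Observation operator: three monitoring stations sample three grid cells
    H = np.zeros((3, n_conc + 1))
    H[0, 10] = H[1, 40] = H[2, 80] = 1.0
    R = (0.1 ** 2) * np.eye(3)            # observation error covariance
    y_obs = np.array([1.2, 0.7, 0.3])     # measured concentrations

    # EnKF analysis: X_a = X + K (y + eps - H X), with K from ensemble covariances
    Xp = X - X.mean(axis=1, keepdims=True)
    P_HT = (Xp @ (H @ Xp).T) / (n_ens - 1)
    S = (H @ Xp) @ (H @ Xp).T / (n_ens - 1) + R
    K = P_HT @ np.linalg.inv(S)
    Y_pert = y_obs[:, None] + rng.multivariate_normal(np.zeros(3), R, n_ens).T
    X_a = X + K @ (Y_pert - H @ X)

    print(f"release-rate estimate: {X_a[-1].mean():.2f} +/- {X_a[-1].std():.2f}")
    ```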

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power spectrum and underestimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  18. Prediction of Down-Gradient Impacts of DNAPL Source Depletion Using Tracer Techniques

    NASA Astrophysics Data System (ADS)

    Basu, N. B.; Fure, A. D.; Jawitz, J. W.

    2006-12-01

    Four simplified DNAPL source depletion models that have been discussed in the literature recently are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. One of the source depletion models, the equilibrium streamtube model, is shown to be relatively easily parameterized using non-reactive and reactive tracers. Non-reactive tracers are used to characterize the aquifer heterogeneity while reactive tracers are used to describe the mean DNAPL mass and its distribution. This information is then used in a Lagrangian framework to predict source remediation performance. In a Lagrangian approach the source zone is conceptualized as a collection of non-interacting streamtubes with hydrodynamic and DNAPL heterogeneity represented by the variation of the travel time and DNAPL saturation among the streamtubes. The travel time statistics are estimated from the non-reactive tracer data while the DNAPL distribution statistics are estimated from the reactive tracer data. The combined statistics are used to define an analytical solution for contaminant dissolution under natural gradient flow. The tracer prediction technique compared favorably with results from a multiphase flow and transport simulator UTCHEM in domains with different hydrodynamic heterogeneity (variance of the log conductivity field = 0.2, 1 and 3).
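
    A hedged sketch of the power-function analog mentioned above: assuming the commonly used flux-mass relation J/J₀ = (M/M₀)^Γ, the source mass under constant-gradient dissolution obeys dM/dt = −J₀(M/M₀)^Γ, which integrates in closed form for Γ ≠ 1. All parameter values are invented for illustration.

    ```python
    import numpy as np

    def source_mass(t, M0=100.0, J0=5.0, gamma=0.8):
        """Remaining DNAPL source mass under the power-function depletion model
        dM/dt = -J0 * (M / M0)**gamma; closed-form solution for gamma != 1.
        Units are arbitrary (e.g., kg and kg/yr); tau = M0/J0 is the initial
        depletion timescale."""
        tau = M0 / J0
        base = 1.0 - (1.0 - gamma) * t / tau
        return M0 * np.clip(base, 0.0, None) ** (1.0 / (1.0 - gamma))

    for t in [0, 5, 10, 20, 40]:
        M = source_mass(t)
        flux = 5.0 * (M / 100.0) ** 0.8  # dissolved flux via J/J0 = (M/M0)^Gamma
        print(f"t = {t:2d} yr: mass = {M:6.1f}, dissolved flux = {flux:5.2f}")
    ```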

  19. Sunlamp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J

    The purpose of this model was to facilitate the design of a control system that uses fine-grained control of residential and small commercial HVAC loads to counterbalance voltage swings caused by intermittent solar power sources (e.g., rooftop panels) installed in a distribution circuit. Included are the source code and a pre-compiled 64-bit DLL for adding building HVAC loads to an OpenDSS distribution circuit. As written, the Makefile assumes you are using the Microsoft C++ development tools.

  20. Fermi-LAT observations of the diffuse γ-ray emission: Implications for cosmic rays and the interstellar medium

    DOE PAGES

    Ackermann, M.; Ajello, M.; Atwood, W. B.; ...

    2012-04-09

    The γ-ray sky >100 MeV is dominated by the diffuse emissions from interactions of cosmic rays with the interstellar gas and radiation fields of the Milky Way. Our observations of these diffuse emissions provide a tool to study cosmic-ray origin and propagation, and the interstellar medium. We present measurements from the first 21 months of the Fermi Large Area Telescope (Fermi-LAT) mission and compare with models of the diffuse γ-ray emission generated using the GALPROP code. The models are fitted to cosmic-ray data and incorporate astrophysical input for the distribution of cosmic-ray sources, interstellar gas, and radiation fields. In order to assess uncertainties associated with the astrophysical input, a grid of models is created by varying within observational limits the distribution of cosmic-ray sources, the size of the cosmic-ray confinement volume (halo), and the distribution of interstellar gas. An all-sky maximum-likelihood fit is used to determine the X_CO factor, the ratio between integrated CO-line intensity and H₂ column density, the fluxes and spectra of the γ-ray point sources from the first Fermi-LAT catalog, and the intensity and spectrum of the isotropic background including residual cosmic rays that were misclassified as γ-rays, all of which have some dependency on the assumed diffuse emission model. The models are compared on the basis of their maximum-likelihood ratios as well as spectra, longitude, and latitude profiles. Here, we provide residual maps for the data following subtraction of the diffuse emission models. The models are consistent with the data at high and intermediate latitudes but underpredict the data in the inner Galaxy for energies above a few GeV. Possible explanations for this discrepancy are discussed, including the contribution by undetected point-source populations and spectral variations of cosmic rays throughout the Galaxy. In the outer Galaxy, we find that the data prefer models with a flatter distribution of cosmic-ray sources, a larger cosmic-ray halo, or greater gas density than is usually assumed. Our results in the outer Galaxy are consistent with other Fermi-LAT studies of this region that used different analysis methods than employed in this paper.

  1. Fermi-LAT Observations of the Diffuse γ-Ray Emission: Implications for Cosmic Rays and the Interstellar Medium

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Ajello, M.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Blandford, R. D.; Bloom, E. D.; Bonamente, E.; Borgland, A. W.; Brandt, T. J.; Bregeon, J.; Brigida, M.; Bruel, P.; Buehler, R.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caraveo, P. A.; Cavazzuti, E.; Cecchi, C.; Charles, E.; Chekhtman, A.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Conrad, J.; Cutini, S.; de Angelis, A.; de Palma, F.; Dermer, C. D.; Digel, S. W.; Silva, E. do Couto e.; Drell, P. S.; Drlica-Wagner, A.; Falletti, L.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Focke, W. B.; Fortin, P.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gaggero, D.; Gargano, F.; Germani, S.; Giglietto, N.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Grove, J. E.; Guiriec, S.; Gustafsson, M.; Hadasch, D.; Hanabata, Y.; Harding, A. K.; Hayashida, M.; Hays, E.; Horan, D.; Hou, X.; Hughes, R. E.; Jóhannesson, G.; Johnson, A. S.; Johnson, R. P.; Kamae, T.; Katagiri, H.; Kataoka, J.; Knödlseder, J.; Kuss, M.; Lande, J.; Latronico, L.; Lee, S.-H.; Lemoine-Goumard, M.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Mazziotta, M. N.; McEnery, J. E.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Naumann-Godo, M.; Norris, J. P.; Nuss, E.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orlando, E.; Ormes, J. F.; Paneque, D.; Panetta, J. H.; Parent, D.; Pesce-Rollins, M.; Pierbattista, M.; Piron, F.; Pivato, G.; Porter, T. A.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Sadrozinski, H. F.-W.; Sgrò, C.; Siskind, E. J.; Spandre, G.; Spinelli, P.; Strong, A. W.; Suson, D. J.; Takahashi, H.; Tanaka, T.; Thayer, J. G.; Thayer, J. B.; Thompson, D. J.; Tibaldo, L.; Tinivella, M.; Torres, D. F.; Tosti, G.; Troja, E.; Usher, T. L.; Vandenbroucke, J.; Vasileiou, V.; Vianello, G.; Vitale, V.; Waite, A. P.; Wang, P.; Winer, B. L.; Wood, K. S.; Wood, M.; Yang, Z.; Ziegler, M.; Zimmer, S.

    2012-05-01

    The γ-ray sky >100 MeV is dominated by the diffuse emissions from interactions of cosmic rays with the interstellar gas and radiation fields of the Milky Way. Observations of these diffuse emissions provide a tool to study cosmic-ray origin and propagation, and the interstellar medium. We present measurements from the first 21 months of the Fermi Large Area Telescope (Fermi-LAT) mission and compare with models of the diffuse γ-ray emission generated using the GALPROP code. The models are fitted to cosmic-ray data and incorporate astrophysical input for the distribution of cosmic-ray sources, interstellar gas, and radiation fields. To assess uncertainties associated with the astrophysical input, a grid of models is created by varying within observational limits the distribution of cosmic-ray sources, the size of the cosmic-ray confinement volume (halo), and the distribution of interstellar gas. An all-sky maximum-likelihood fit is used to determine the X_CO factor, the ratio between integrated CO-line intensity and H₂ column density, the fluxes and spectra of the γ-ray point sources from the first Fermi-LAT catalog, and the intensity and spectrum of the isotropic background including residual cosmic rays that were misclassified as γ-rays, all of which have some dependency on the assumed diffuse emission model. The models are compared on the basis of their maximum-likelihood ratios as well as spectra, longitude, and latitude profiles. We also provide residual maps for the data following subtraction of the diffuse emission models. The models are consistent with the data at high and intermediate latitudes but underpredict the data in the inner Galaxy for energies above a few GeV. Possible explanations for this discrepancy are discussed, including the contribution by undetected point-source populations and spectral variations of cosmic rays throughout the Galaxy. In the outer Galaxy, we find that the data prefer models with a flatter distribution of cosmic-ray sources, a larger cosmic-ray halo, or greater gas density than is usually assumed. Our results in the outer Galaxy are consistent with other Fermi-LAT studies of this region that used different analysis methods than employed in this paper.

  2. FERMI-LAT OBSERVATIONS OF THE DIFFUSE γ-RAY EMISSION: IMPLICATIONS FOR COSMIC RAYS AND THE INTERSTELLAR MEDIUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ackermann, M.; Ajello, M.; Bechtol, K.

    The γ-ray sky >100 MeV is dominated by the diffuse emissions from interactions of cosmic rays with the interstellar gas and radiation fields of the Milky Way. Observations of these diffuse emissions provide a tool to study cosmic-ray origin and propagation, and the interstellar medium. We present measurements from the first 21 months of the Fermi Large Area Telescope (Fermi-LAT) mission and compare with models of the diffuse γ-ray emission generated using the GALPROP code. The models are fitted to cosmic-ray data and incorporate astrophysical input for the distribution of cosmic-ray sources, interstellar gas, and radiation fields. To assess uncertainties associated with the astrophysical input, a grid of models is created by varying within observational limits the distribution of cosmic-ray sources, the size of the cosmic-ray confinement volume (halo), and the distribution of interstellar gas. An all-sky maximum-likelihood fit is used to determine the X_CO factor, the ratio between integrated CO-line intensity and H₂ column density, the fluxes and spectra of the γ-ray point sources from the first Fermi-LAT catalog, and the intensity and spectrum of the isotropic background including residual cosmic rays that were misclassified as γ-rays, all of which have some dependency on the assumed diffuse emission model. The models are compared on the basis of their maximum-likelihood ratios as well as spectra, longitude, and latitude profiles. We also provide residual maps for the data following subtraction of the diffuse emission models. The models are consistent with the data at high and intermediate latitudes but underpredict the data in the inner Galaxy for energies above a few GeV. Possible explanations for this discrepancy are discussed, including the contribution by undetected point-source populations and spectral variations of cosmic rays throughout the Galaxy. In the outer Galaxy, we find that the data prefer models with a flatter distribution of cosmic-ray sources, a larger cosmic-ray halo, or greater gas density than is usually assumed. Our results in the outer Galaxy are consistent with other Fermi-LAT studies of this region that used different analysis methods than employed in this paper.

  3. Fermi-LAT observations of the diffuse γ-ray emission: Implications for cosmic rays and the interstellar medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ackermann, M.; Ajello, M.; Atwood, W. B.

    The γ-ray sky >100 MeV is dominated by the diffuse emissions from interactions of cosmic rays with the interstellar gas and radiation fields of the Milky Way. Our observations of these diffuse emissions provide a tool to study cosmic-ray origin and propagation, and the interstellar medium. We present measurements from the first 21 months of the Fermi Large Area Telescope (Fermi-LAT) mission and compare with models of the diffuse γ-ray emission generated using the GALPROP code. The models are fitted to cosmic-ray data and incorporate astrophysical input for the distribution of cosmic-ray sources, interstellar gas, and radiation fields. In order to assess uncertainties associated with the astrophysical input, a grid of models is created by varying within observational limits the distribution of cosmic-ray sources, the size of the cosmic-ray confinement volume (halo), and the distribution of interstellar gas. An all-sky maximum-likelihood fit is used to determine the X CO factor, the ratio between integrated CO-line intensity and H2 column density, the fluxes and spectra of the γ-ray point sources from the first Fermi-LAT catalog, and the intensity and spectrum of the isotropic background including residual cosmic rays that were misclassified as γ-rays, all of which have some dependency on the assumed diffuse emission model. The models are compared on the basis of their maximum-likelihood ratios as well as spectra, longitude, and latitude profiles. Here, we provide residual maps for the data following subtraction of the diffuse emission models. The models are consistent with the data at high and intermediate latitudes but underpredict the data in the inner Galaxy for energies above a few GeV. Possible explanations for this discrepancy are discussed, including the contribution by undetected point-source populations and spectral variations of cosmic rays throughout the Galaxy. In the outer Galaxy, we find that the data prefer models with a flatter distribution of cosmic-ray sources, a larger cosmic-ray halo, or greater gas density than is usually assumed. Our results in the outer Galaxy are consistent with other Fermi-LAT studies of this region that used different analysis methods than employed in this paper.

  4. Modeling the influence of coupled mass transfer processes on mass flux downgradient of heterogeneous DNAPL source zones

    NASA Astrophysics Data System (ADS)

    Yang, Lurong; Wang, Xinyu; Mendoza-Sanchez, Itza; Abriola, Linda M.

    2018-04-01

    Sequestered mass in low permeability zones has been increasingly recognized as an important source of organic chemical contamination that acts to sustain downgradient plume concentrations above regulated levels. However, few modeling studies have investigated the influence of this sequestered mass and associated (coupled) mass transfer processes on plume persistence in complex dense nonaqueous phase liquid (DNAPL) source zones. This paper employs a multiphase flow and transport simulator (a modified version of the modular transport simulator MT3DMS) to explore the two- and three-dimensional evolution of source zone mass distribution and near-source plume persistence for two ensembles of highly heterogeneous DNAPL source zone realizations. Simulations reveal the strong influence of subsurface heterogeneity on the complexity of DNAPL and sequestered (immobile/sorbed) mass distribution. Small zones of entrapped DNAPL are shown to serve as a persistent source of low concentration plumes, difficult to distinguish from other (sorbed and immobile dissolved) sequestered mass sources. Results suggest that the presence of DNAPL tends to control plume longevity in the near-source area; for the examined scenarios, a substantial fraction (43.3-99.2%) of plume life was sustained by DNAPL dissolution processes. The presence of sorptive media and the extent of sorption non-ideality are shown to greatly affect predictions of near-source plume persistence following DNAPL depletion, with plume persistence varying one to two orders of magnitude with the selected sorption model. Results demonstrate the importance of sorption-controlled back diffusion from low permeability zones and reveal the importance of selecting the appropriate sorption model for accurate prediction of plume longevity. Large discrepancies for both DNAPL depletion time and plume longevity were observed between 2-D and 3-D model simulations. Differences between 2- and 3-D predictions increased in the presence of sorption, especially for the case of non-ideal sorption, demonstrating the limitations of employing 2-D predictions for field-scale modeling.

  5. Evaluating agricultural nonpoint-source pollution using integrated geographic information systems and hydrologic/water quality model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tim, U.S.; Jolly, R.

    1994-01-01

    Considerable progress has been made in developing physically based, distributed parameter, hydrologic/water quality (HIWQ) models for planning and control of nonpoint-source pollution. The widespread use of these models is often constrained by the excessive and time-consuming input data demands and the lack of computing efficiencies necessary for iterative simulation of alternative management strategies. Recent developments in geographic information systems (GIS) provide techniques for handling large amounts of spatial data for modeling nonpoint-source pollution problems. Because a GIS can be used to combine information from several sources to form an array of model input data and to examine any combination of spatial input/output data, it represents a highly effective tool for HIWQ modeling. This paper describes the integration of a distributed-parameter model (AGNPS) with a GIS (ARC/INFO) to examine nonpoint sources of pollution in an agricultural watershed. The ARC/INFO GIS provided the tools to generate and spatially organize the disparate data to support modeling, while the AGNPS model was used to predict several water quality variables including soil erosion and sedimentation within a watershed. The integrated system was used to evaluate the effectiveness of several alternative management strategies in reducing sediment pollution in a 417-ha watershed located in southern Iowa. The implementation of vegetative filter strips and contour buffer (grass) strips resulted in a 41 and 47% reduction in sediment yield at the watershed outlet, respectively. In addition, the combination of the above management strategies resulted in a 71% reduction in sediment yield. In general, the study demonstrated the utility of integrating a simulation model with GIS for nonpoint-source pollution control and planning. Such techniques can help characterize the diffuse sources of pollution at the landscape level. 52 refs., 6 figs., 1 tab.

  6. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using elastic dislocation theory, for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Nonlinear long-wave theory governs the propagation and run-up of tsunamis. Because the physics that describes tsunamis from generation through run-up is complex, a parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis. Analysis of the source parameters of various tsunamigenic earthquakes has indicated that the details of the earthquake source, namely the nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.

  7. Monte Carlo modelling of large scale NORM sources using MCNP.

    PubMed

    Wallace, J D

    2013-12-01

    The representative Monte Carlo modelling of large scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial diameter and thin profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source to detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source to detector distances.

  8. Herschel-ATLAS: Dust Temperature and Redshift Distribution of SPIRE and PACS Detected Sources Using Submillimetre Colours

    NASA Technical Reports Server (NTRS)

    Amblard, A.; Cooray, Asantha; Serra, P.; Temi, P.; Barton, E.; Negrello, M.; Auld, R.; Baes, M.; Baldry, I. K.; Bamford, S.; hide

    2010-01-01

    We present colour-colour diagrams of detected sources in the Herschel-ATLAS Science Demonstration Field from 100 to 500 μm using both PACS and SPIRE. We fit isothermal modified-blackbody spectral energy distribution (SED) models in order to extract the dust temperature of sources with counterparts in GAMA or SDSS with either a spectroscopic or a photometric redshift. For a subsample of 331 sources detected in at least three FIR bands with significance greater than 3σ, we find an average dust temperature of (28 ± 8) K. For sources with no known redshifts, we populate the colour-colour diagram with a large number of SEDs generated with a broad range of dust temperatures and emissivity parameters and compare to colours of observed sources to establish the redshift distribution of those samples. For another subsample of 1686 sources with fluxes above 35 mJy at 350 μm and detected at 250 and 500 μm with a significance greater than 3σ, we find an average redshift of 2.2 ± 0.6.
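
    As a rough illustration of the isothermal modified-blackbody fitting described above, the sketch below recovers a dust temperature from hypothetical PACS/SPIRE flux densities using scipy; the band fluxes, the fixed emissivity index β = 1.5, and the amplitude parameterization are assumptions for illustration, not values from the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    h, k, c = 6.626e-34, 1.381e-23, 2.998e8  # SI constants

    def modified_blackbody(wavelength_um, T, logA, beta=1.5):
        """Isothermal modified blackbody: S_nu proportional to nu^beta * B_nu(T)."""
        nu = c / (wavelength_um * 1e-6)                          # Hz
        bnu = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))  # Planck function
        return 10**logA * nu**beta * bnu

    # Hypothetical flux densities (Jy) at the PACS/SPIRE bands, 100-500 um
    wl = np.array([100.0, 160.0, 250.0, 350.0, 500.0])
    flux = np.array([0.08, 0.12, 0.10, 0.06, 0.03])
    err = 0.1 * flux

    # Fit T and the amplitude; beta stays fixed at its default value
    popt, pcov = curve_fit(modified_blackbody, wl, flux, p0=(25.0, -5.0),
                           sigma=err, absolute_sigma=True)
    print(f"T_dust = {popt[0]:.1f} +/- {np.sqrt(pcov[0, 0]):.1f} K")
    ```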

  9. Collaborative mining of graph patterns from multiple sources

    NASA Astrophysics Data System (ADS)

    Levchuk, Georgiy; Colonna-Romano, John

    2016-05-01

    Intelligence analysts require automated tools to mine multi-source data, including answering queries, learning patterns of life, and discovering malicious or anomalous activities. Graph mining algorithms have recently attracted significant attention in the intelligence community, because text-derived knowledge can be efficiently represented as graphs of entities and relationships. However, graph mining models are limited to use-cases involving collocated data, and often make restrictive assumptions about the types of patterns that need to be discovered, the relationships between individual sources, and the availability of accurate data segmentation. In this paper we present a model to learn graph patterns from multiple relational data sources, when each source might have only a fragment (or subgraph) of the knowledge that needs to be discovered, and segmentation of data into training or testing instances is not available. Our model is based on distributed collaborative graph learning, and is effective in situations where the data is kept locally and cannot be moved to a centralized location. Our experiments show that the proposed collaborative learning achieves better learning quality than aggregated centralized graph learning, and has learning time comparable to traditional distributed learning, in which knowledge of data segmentation is needed.

  10. Queries over Unstructured Data: Probabilistic Methods to the Rescue

    NASA Astrophysics Data System (ADS)

    Sarawagi, Sunita

    Unstructured data like emails, addresses, invoices, call transcripts, reviews, and press releases are now an integral part of any large enterprise. A challenge of modern business intelligence applications is analyzing and querying data seamlessly across structured and unstructured sources. This requires the development of automated techniques for extracting structured records from text sources and resolving entity mentions in data from various sources. The success of any automated method for extraction and integration depends on how effectively it unifies diverse clues in the unstructured source and in existing structured databases. We argue that statistical learning techniques like Conditional Random Fields (CRFs) provide an accurate, elegant and principled framework for tackling these tasks. Given the inherent noise in real-world sources, it is important to capture the uncertainty of the above operations via imprecise data models. CRFs provide a sound probability distribution over extractions but are not easy to represent and query in a relational framework. We present methods of approximating this distribution to query-friendly row and column uncertainty models. Finally, we present models for representing the uncertainty of de-duplication and algorithms for various Top-K count queries on imprecise duplicates.

  11. On the use of an analytic source model for dose calculations in precision image-guided small animal radiotherapy.

    PubMed

    Granton, Patrick V; Verhaegen, Frank

    2013-05-21

    Precision image-guided small animal radiotherapy is rapidly advancing through the use of dedicated micro-irradiation devices. However, precise modeling of these devices in model-based dose-calculation algorithms such as Monte Carlo (MC) simulations continues to present challenges due to a combination of very small beams, low mechanical tolerances on beam collimation and positioning, and long calculation times. The specific intent of this investigation is to introduce and demonstrate the viability of a fast analytical source model (AM) for use either in investigating improvements in collimator design or in faster dose calculations. MC models using BEAMnrc were developed for circular and square field sizes from 1 to 25 mm in diameter (or side) that incorporated the intensity distribution of the focal spot, modeled after an experimental pinhole image. These MC models were used to generate phase space files (PSFMC) at the exit of the collimators. An AM was developed that included the intensity distribution of the focal spot, a pre-calculated x-ray spectrum, and the collimator-specific entrance and exit apertures. The AM was used to generate photon fluence intensity distributions (ΦAM) and PSFAM containing photons radiating at angles according to the focal spot intensity distribution. MC dose calculations using DOSXYZnrc in a water and mouse phantom differing only by the source used (PSFMC versus PSFAM) were found to agree within 7% and 4% for the smallest 1 and 2 mm collimators, respectively, and within 1% for all other field sizes, based on depth dose profiles. PSF generation times were approximately 1200 times faster for the smallest beam and 19 times faster for the largest beam. The influence of the focal spot intensity distribution on output and on beam shape was quantified and found to play a significant role in calculated dose distributions. Beam profile differences due to collimator alignment were found in both small and large collimators, which were sensitive to shifts of 1 mm with respect to the central axis.

  12. The interplay of various sources of noise on reliability of species distribution models hinges on ecological specialisation.

    PubMed

    Soultan, Alaaeldin; Safi, Kamran

    2017-01-01

    Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. The species distribution model (SDM) has become a popular method to utilise these data for understanding the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap), respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.
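
    For readers unfamiliar with the evaluation metrics named above, here is a minimal sketch of computing AUC and the True Skill Statistic for a fitted SDM; the labels, suitability scores, and 0.5 threshold are hypothetical.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, confusion_matrix

    # Hypothetical presence/absence labels and model-predicted suitabilities
    y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
    y_score = np.array([0.9, 0.7, 0.65, 0.4, 0.2, 0.8, 0.35, 0.5, 0.6, 0.1])

    auc = roc_auc_score(y_true, y_score)

    # True Skill Statistic = sensitivity + specificity - 1 at a chosen threshold
    y_pred = (y_score >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    tss = tp / (tp + fn) + tn / (tn + fp) - 1
    print(f"AUC = {auc:.2f}, TSS = {tss:.2f}")
    ```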

  13. Simulations of negative hydrogen ion sources

    NASA Astrophysics Data System (ADS)

    Demerdjiev, A.; Goutev, N.; Tonev, D.

    2018-05-01

    The development and optimisation of negative hydrogen/deuterium ion sources go hand in hand with modelling. In this paper a brief introduction is given to the physics and types of different sources, and to the kinetic and fluid theories for plasma description. Examples of some recent models are considered, with the main emphasis on the model behind the concept and design of a matrix source of negative hydrogen ions. At the Institute for Nuclear Research and Nuclear Energy of the Bulgarian Academy of Sciences a new cyclotron center is under construction, which opens new opportunities for research. One of them is the development of plasma sources for additional proton beam acceleration. We have applied the modelling technique implemented in the aforementioned model of the matrix source to a microwave plasma source exemplifying a plasma-filled array of cavities made of a dielectric material with high permittivity. Preliminary results for the distribution of the plasma parameters and the φ component of the electric field in the plasma are obtained.

  14. The Effect of Velocity Correlation on the Spatial Evolution of Breakthrough Curves in Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Massoudieh, A.; Dentz, M.; Le Borgne, T.

    2017-12-01

    In heterogeneous media, the velocity distribution and the spatial correlation structure of velocity for solute particles determine the breakthrough curves and how they evolve as one moves away from the solute source. The ability to predict such evolution can help relate the spatio-statistical hydraulic properties of the media to the transport behavior and travel time distributions. While commonly used non-local transport models such as anomalous dispersion and the classical continuous time random walk (CTRW) can reproduce breakthrough curves successfully by adjusting the model parameter values, they lack the ability to relate model parameters to the spatio-statistical properties of the media. This in turn limits the transferability of these models. In the research to be presented, we express the concentration or flux of solutes as a distribution over their velocity. We then derive an integrodifferential equation that governs the evolution of the particle distribution over velocity at given times and locations for a particle ensemble, based on a presumed velocity correlation structure and an ergodic cross-sectional velocity distribution. This way, the spatial evolution of breakthrough curves away from the source is predicted based on the cross-sectional velocity distribution and the connectivity, which is expressed by the velocity transition probability density. The transition probability is specified via a copula function that can help construct a joint distribution with a given correlation and given marginal velocities. Using this approach, we analyze the breakthrough curves depending on the velocity distribution and correlation properties. The model shows how the solute transport behavior evolves from ballistic transport at small spatial scales to Fickian dispersion at large length scales relative to the velocity correlation length.
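
    A minimal sketch of the copula idea described above: successive particle velocities are correlated through a Gaussian copula while the assumed ergodic marginal distribution is preserved. The lognormal marginal and the correlation value are illustrative assumptions, not the authors' choices.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    rho = 0.8                          # velocity correlation between steps (hypothetical)
    marginal = stats.lognorm(s=1.0)    # assumed cross-sectional velocity distribution

    def next_velocity(v, rho, marginal, rng):
        """One Gaussian-copula transition: map v to a Gaussian score, perturb it
        with correlation rho, and map back through the marginal. Repeated
        application leaves the marginal velocity distribution invariant."""
        z = stats.norm.ppf(marginal.cdf(v))
        z_new = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal(z.shape)
        return marginal.ppf(stats.norm.cdf(z_new))

    # Evolve an ensemble of particle velocities over successive spatial steps
    v = marginal.rvs(size=10000, random_state=rng)
    for _ in range(50):
        v = next_velocity(v, rho, marginal, rng)
    ```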

  15. Theoretical overview and modeling of the sodium and potassium atmospheres of the moon

    NASA Technical Reports Server (NTRS)

    Smyth, William H.; Marconi, M. L.

    1995-01-01

    A general theoretical overview for the sources, sinks, gas-surface interactions, and transport dynamics of sodium and potassium in the exospheric atmosphere of the Moon is given. These four factors, which control the spatial distribution of these two alkali-group gases about the Moon, are incorporated in numerical models. The spatial nature and relative importance of the initial source-atom atmosphere (which must be nonthermal to explain observational data) and the ambient (ballistic hopping) atom atmosphere are examined. The transport dynamics, atmospheric structure, and lunar escape of the nonthermal source atoms are time variable with season of the year and lunar phase because of their dependence on the radiation acceleration experienced by sodium and potassium atoms as they resonantly scatter solar photons. The dynamic transport time of fully thermally accommodated ambient atoms along the surface because of solar radiation acceleration (only several percent of surface gravity) is larger than the photoionization lifetimes and hence unimportant in determining the local density, although for potassium the situation is borderline. The sodium model was applied to analyze sodium observations of the sunward brightness profiles acquired near last quarter by Potter & Morgan (1988b), extending from the surface to an altitude of 1200 km, and near first quarter by Mendillo, Baumgardner, & Flynn (1991), extending in altitude from approximately 1430 to approximately 7000 km. The observations at larger altitudes could be fitted only for source atoms having a velocity distribution with a tail that is mildly nonthermal (like an approximately 1000 K Maxwell-Boltzmann distribution). Solar wind sputtering appears to be a viable source-atom mechanism for the sodium observations, with photon-stimulated desorption also possible but highly uncertain, although micrometeoroid impact vaporization appears to have a source that is too small and too hot, with likely an incorrect angular distribution about the Moon.

  16. THE ENVIRONMENT AND DISTRIBUTION OF EMITTING ELECTRONS AS A FUNCTION OF SOURCE ACTIVITY IN MARKARIAN 421

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mankuzhiyil, Nijil; Ansoldi, Stefano; Persic, Massimo

    2011-05-20

    For the high-frequency-peaked BL Lac object Mrk 421, we study the variation of the spectral energy distribution (SED) as a function of source activity, from quiescent to active. We use a fully automatized χ²-minimization procedure, instead of the 'eyeball' procedure more commonly used in the literature, to model nine SED data sets with a one-zone synchrotron self-Compton (SSC) model and examine how the model parameters vary with source activity. The latter issue can finally be addressed now, because simultaneous broadband SEDs (spanning from optical to very high energy photons) have finally become available. Our results suggest that in Mrk 421 the magnetic field (B) decreases with source activity, whereas the electron spectrum's break energy (γ_br) and the Doppler factor (δ) increase; the other SSC parameters turn out to be uncorrelated with source activity. In the SSC framework, these results are interpreted in a picture where the synchrotron power and peak frequency remain constant with varying source activity, through a combination of decreasing magnetic field and increasing number density of γ ≤ γ_br electrons: since this leads to an increased electron-photon scattering efficiency, the resulting Compton power increases, and so does the total (= synchrotron plus Compton) emission.

  17. Cometary atmospheres: Modeling the spatial distribution of observed neutral radicals

    NASA Technical Reports Server (NTRS)

    Combi, M. R.

    1986-01-01

    New data for the spatial distribution of cometary C2 are presented. A recompilation of the Haser scale lengths for C2 and CN resolves the previously held anomalous drop of the C2/CN ratio for heliocentric distances larger than 1 AU. Clues to the source of cometary C2 have been found through fitting the sunward-antisunward brightness profiles with the Monte Carlo particle-trajectory model. A source (parent) lifetime of 3.1 × 10^4 seconds is found, and an ejection speed for C2 radicals upon dissociation of the parent(s) of approximately 0.5 km s^-1 is calculated.
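
    For context, the Haser-model daughter-product density that underlies such scale-length fits can be written down in a few lines; the production rate, outflow speed, and scale lengths below are illustrative placeholders, not the fitted values from the record.

    ```python
    import numpy as np

    def haser_daughter_density(r, Q, v, Lp, Ld):
        """Haser-model number density of a daughter radical (e.g. C2) at
        cometocentric distance r, given parent scale length Lp and daughter
        scale length Ld (scale length = lifetime x outflow speed)."""
        return (Q / (4 * np.pi * v * r**2)) * (Ld / (Ld - Lp)) \
               * (np.exp(-r / Ld) - np.exp(-r / Lp))

    # Illustrative numbers (hypothetical, not values from the paper):
    v = 0.5e3                    # ejection speed ~0.5 km/s, in m/s
    Lp = 3.1e4 * v               # parent scale length from a 3.1e4 s lifetime
    Ld = 10 * Lp                 # assumed longer-lived daughter
    r = np.logspace(6, 9, 200)   # 1e3 to 1e6 km, in metres
    n = haser_daughter_density(r, Q=1e28, v=v, Lp=Lp, Ld=Ld)
    ```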

  18. Size distribution, directional source contributions and pollution status of PM from Chengdu, China during a long-term sampling campaign.

    PubMed

    Shi, Guo-Liang; Tian, Ying-Ze; Ma, Tong; Song, Dan-Lin; Zhou, Lai-Dong; Han, Bo; Feng, Yin-Chang; Russell, Armistead G

    2017-06-01

    Long-term and synchronous monitoring of PM10 and PM2.5 was conducted in Chengdu in China from 2007 to 2013. The levels, variations, compositions and size distributions were investigated. The sources were quantified by two-way and three-way receptor models (PMF2, ME2-2way and ME2-3way). Consistent results were found: the primary source categories contributed 63.4% (PMF2), 64.8% (ME2-2way) and 66.8% (ME2-3way) to PM10, and contributed 60.9% (PMF2), 65.5% (ME2-2way) and 61.0% (ME2-3way) to PM2.5. Secondary sources contributed 31.8% (PMF2), 32.9% (ME2-2way) and 31.7% (ME2-3way) to PM10, and 35.0% (PMF2), 33.8% (ME2-2way) and 36.0% (ME2-3way) to PM2.5. The size distribution of source categories was estimated better by the ME2-3way method. The three-way model can simultaneously consider chemical species, temporal variability and PM sizes, while a two-way model independently computes datasets of different sizes. A method called source directional apportionment (SDA) was employed to quantify the contributions from various directions for each source category. Crustal dust from east-north-east (ENE) contributed the most to both PM10 (12.7%) and PM2.5 (9.7%) in Chengdu, followed by crustal dust from south-east (SE) for PM10 (9.8%) and secondary nitrate & secondary organic carbon from ENE for PM2.5 (9.6%). Source contributions from different directions are associated with meteorological conditions, source locations and emission patterns during the sampling period. These findings and methods provide useful tools to better understand PM pollution status and to develop effective pollution control strategies.
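
    As a simplified analog of the receptor-model factorization used above (without the uncertainty weighting that PMF2/ME-2 apply), non-negative matrix factorization splits a samples-by-species matrix into source contributions and source profiles; the data here are synthetic and the factor count is an assumption.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # X: samples x chemical-species concentration matrix (synthetic stand-in)
    rng = np.random.default_rng(1)
    X = rng.gamma(shape=2.0, scale=1.0, size=(200, 15))

    model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
    G = model.fit_transform(X)      # source contributions (samples x factors)
    F = model.components_           # source profiles (factors x species)

    # Fractional contribution of each factor to total reconstructed mass
    contrib = (G * F.sum(axis=1)).sum(axis=0)
    print(contrib / contrib.sum())
    ```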

  19. Source apportionment of Baltimore aerosol from combined size distribution and chemical composition data

    NASA Astrophysics Data System (ADS)

    Ogulei, David; Hopke, Philip K.; Zhou, Liming; Patrick Pancras, J.; Nair, Narayanan; Ondov, John M.

    Several multivariate data analysis methods have been applied to a combination of particle size and composition measurements made at the Baltimore Supersite. Partial least squares (PLS) was used to investigate the relationship (linearity) between number concentrations and the measured PM2.5 mass concentrations of chemical species. The data were obtained at the Ponca Street site and consisted of six days' measurements: 6, 7, 8, 18, 19 July, and 21 August 2002. The PLS analysis showed that the covariance between the data could be explained by 10 latent variables (LVs), but only the first four of these were sufficient to establish the linear relationship between the two data sets; additional LVs did not improve the model. The four LVs were found to better explain the covariance between the large sized particles and the chemical species. A bilinear receptor model, PMF2, was then used to simultaneously analyze the size distribution and chemical composition data sets. The resolved sources were identified using information from number and mass contributions from each source (source profiles) as well as meteorological data. Twelve sources were identified: oil-fired power plant emissions, secondary nitrate I, local gasoline traffic, coal-fired power plant, secondary nitrate II, secondary sulfate, diesel emissions/bus maintenance, Quebec wildfire episode, nucleation, incinerator, airborne soil/road-way dust, and steel plant emissions. Local sources were mostly characterized by bi-modal number distributions. Regional sources were characterized by transport-mode particles (0.2-0.5 μm).
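
    A minimal sketch of the PLS step described above: regress species mass concentrations on size-resolved number concentrations and track how the explained variance grows with the number of latent variables. The matrices here are synthetic stand-ins for the Supersite data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(4)
    X = rng.lognormal(size=(150, 40))   # number conc. in 40 size bins (synthetic)
    Y = rng.lognormal(size=(150, 12))   # PM2.5 mass conc. of 12 species (synthetic)

    # Explained variance in Y as latent variables are added
    for n_lv in range(1, 11):
        pls = PLSRegression(n_components=n_lv).fit(X, Y)
        print(f"{n_lv} LVs: R^2 = {pls.score(X, Y):.3f}")
    ```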

  20. Modelling of low-temperature/large-area distributed antenna array microwave-plasma reactor used for nanocrystalline diamond deposition

    NASA Astrophysics Data System (ADS)

    Bénédic, Fabien; Baudrillart, Benoit; Achard, Jocelyn

    2018-02-01

    In this paper we investigate a distributed antenna array plasma-enhanced chemical vapor deposition system, composed of 16 microwave plasma sources arranged in a 2D matrix, which enables the growth of 4-in. diamond films at low pressure and low substrate temperature using H2/CH4/CO2 gas chemistry. A self-consistent two-dimensional plasma model developed for hydrogen discharges is used to study the discharge behavior. In particular, the gas temperature is estimated to be close to 350 K at the position corresponding to the substrate location during growth, which is suitable for low-temperature deposition. Multi-source discharge modeling shows that the uniformity of the plasma sheet formed by the individual plasmas ignited around each elementary microwave source strongly depends on the distance to the antennas. The radial profile of the film thickness homogeneity may thus be linked to the local variations of species density. Contribution to the topical issue "Plasma Sources and Plasma Processes (PSPP)", edited by Luis Lemos Alves, Thierry Belmonte and Tiberiu Minea.

  1. On the Vertical Distribution of Local and Remote Sources of Water for Precipitation

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.

    2001-01-01

    The vertical distribution of local and remote sources of water for precipitation and total column water over the United States are evaluated in a general circulation model simulation. The Goddard Earth Observing System (GEOS) general circulation model (GCM) includes passive constituent tracers to determine the geographical sources of the water in the column. Results show that the local percentage of precipitable water and local percentage of precipitation can be very different. The transport of water vapor from remote oceanic sources at mid and upper levels is important to the total water in the column over the central United States, while the access of locally evaporated water in convective precipitation processes is important to the local precipitation ratio. This result resembles the conceptual formulation of the convective parameterization. However, the formulations of simple models of precipitation recycling include the assumption that the ratio of the local water in the column is equal to the ratio of the local precipitation. The present results demonstrate the uncertainty in that assumption, as locally evaporated water is more concentrated near the surface.

  2. A simple-source model of military jet aircraft noise

    NASA Astrophysics Data System (ADS)

    Morgan, Jessica; Gee, Kent L.; Neilsen, Tracianne; Wall, Alan T.

    2010-10-01

    The jet plumes produced by military jet aircraft radiate significant amounts of noise. A need to better understand the characteristics of the turbulence-induced aeroacoustic sources has motivated the present study. The purpose of the study is to develop a simple-source model of jet noise that can be compared to the measured data. The study is based on acoustic data collected near a tied-down F-22 Raptor. The simplest model consisted of adjusting the origin of a monopole above a rigid planar reflector until the locations of the predicted and measured interference nulls matched. The model has since developed into an extended Rayleigh distribution of partially correlated monopoles, which fits the measured data from the F-22 significantly better. The results and basis for the model match the current prevailing theory that jet noise consists of both correlated and uncorrelated sources. In addition, this simple-source model conforms to the theory that the peak source location moves upstream with increasing frequency and lower engine conditions.
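
    The simplest model in this record, a monopole above a rigid planar reflector, reduces to a two-ray sum of the direct path and an in-phase image source; the sketch below shows how the interference nulls arise and why adjusting the source position moves them. The geometry values are hypothetical.

    ```python
    import numpy as np

    def monopole_above_plane(f, src_h, mic_h, dist, c=343.0):
        """Pressure magnitude at a microphone from a monopole plus its
        in-phase image in a rigid plane (two-ray model)."""
        k = 2 * np.pi * f / c
        r_direct = np.hypot(dist, mic_h - src_h)
        r_image = np.hypot(dist, mic_h + src_h)   # image source below the plane
        p = np.exp(1j * k * r_direct) / r_direct + np.exp(1j * k * r_image) / r_image
        return np.abs(p)

    # Interference nulls vs frequency at one microphone (hypothetical geometry)
    f = np.linspace(50, 5000, 1000)
    spectrum = monopole_above_plane(f, src_h=2.0, mic_h=1.5, dist=10.0)
    # Shifting src_h moves the nulls; matching measured null locations
    # is what fixes the effective source position in this kind of model.
    ```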

  3. Size and composition distribution of fine particulate matter emitted from wood burning, meat charbroiling, and cigarettes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleeman, M.J.; Schauer, J.J.; Cass, G.R.

    A dilution source sampling system is augmented to measure the size-distributed chemical composition of fine particle emissions from air pollution sources. Measurements are made using a laser optical particle counter (OPC), a differential mobility analyzer/condensation nucleus counter (DMA/CNC) combination, and a pair of microorifice uniform deposit impactors (MOUDIs). The sources tested with this system include wood smoke (pine, oak, eucalyptus), meat charbroiling, and cigarettes. The particle mass distributions from all wood smoke sources have a single mode that peaks at approximately 0.1-0.2 μm particle diameter. The smoke from meat charbroiling shows a major peak in the particle mass distribution at 0.1-0.2 μm particle diameter, with some material present at larger particle sizes. Particle mass distributions from cigarettes peak between 0.3 and 0.4 μm particle diameter. Chemical composition analysis reveals that particles emitted from the sources tested here are largely composed of organic compounds. Noticeable concentrations of elemental carbon are found in the particles emitted from wood burning. The size distributions of the trace species emissions from these sources also are presented, including data for Na, K, Ti, Fe, Br, Ru, Cl, Al, Zn, Ba, Sr, V, Mn, Sb, La, Ce, as well as sulfate, nitrate, and ammonium ion when present in statistically significant amounts. These data are intended for use with air quality models that seek to predict the size distribution of the chemical composition of atmospheric fine particles.

  4. Separation of the low-frequency atmospheric variability into non-Gaussian multidimensional sources by Independent Subspace Analysis

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Ribeiro, Andreia

    2016-04-01

    An efficient nonlinear method for statistical source separation of space-distributed, non-Gaussian data is proposed. The method relies on the so-called Independent Subspace Analysis (ISA) and is tested on a long time series of the stream-function field of an atmospheric quasi-geostrophic three-level model (QG3) simulating the winter monthly variability of the Northern Hemisphere. ISA generalizes Independent Component Analysis (ICA) by looking for multidimensional, minimally dependent, uncorrelated and non-Gaussian statistical sources among the rotated projections or subspaces of the multivariate probability distribution of the leading principal components of the working field, whereas ICA is restricted to scalar sources. The rationale relies upon projection pursuit: looking for data projections of enhanced interest. To accomplish the decomposition, we maximize measures of the sources' non-Gaussianity through contrast functions given by squares of nonlinear, cross-cumulant-based correlations among the variables spanning the sources. Sources are therefore sought that match certain nonlinear data structures. The maximized contrast function is built in such a way that it minimizes the mean square of the residuals of certain nonlinear regressions. The resulting residuals, after spherization, provide a new set of nonlinear variable changes that are at once uncorrelated, quasi-independent and quasi-Gaussian, an advantage with respect to the independent components (scalar sources) obtained by ICA, where the non-Gaussianity is concentrated into the non-Gaussian scalar sources. The new scalar sources obtained by the above process capture the attractor's curvature, thus providing improved nonlinear indices of the low-frequency atmospheric variability, which is useful since large-scale circulation indices are nonlinearly correlated. The tested non-Gaussian sources (dyads and triads, of two and three dimensions respectively) lead to a dense data concentration along certain curves or surfaces, near which the centroids of the clusters of the joint probability density function tend to be located. This favors a better splitting of the QG3 model's weather regimes: the positive and negative phases of the Arctic Oscillation and the positive and negative phases of the North Atlantic Oscillation. The leading non-Gaussian dyad of the model is associated with a positive correlation between (1) the squared anomaly of the extratropical jet stream and (2) the meridional jet-stream meandering. Triadic sources coming from maximized third-order cross-cumulants between pairwise uncorrelated components reveal situations of triadic wave resonance and nonlinear triadic teleconnections, only possible thanks to joint non-Gaussianity. Such triadic synergies are quantified by an information-theoretic measure: the interaction information. The dominant triad of the model occurs between anomalies of (1) the North Pole pressure, (2) the jet-stream intensity at the eastern North American boundary, and (3) the jet-stream intensity at the eastern Asian boundary. Publication supported by project FCT UID/GEO/50019/2013 - Instituto Dom Luiz.

  5. Using multi-date satellite imagery to monitor invasive grass species distribution in post-wildfire landscapes: An iterative, adaptable approach that employs open-source data and software

    USGS Publications Warehouse

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew; Chignell, Steve

    2017-01-01

    Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.

  6. Using multi-date satellite imagery to monitor invasive grass species distribution in post-wildfire landscapes: An iterative, adaptable approach that employs open-source data and software

    NASA Astrophysics Data System (ADS)

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew W.; Chignell, Stephen M.

    2017-07-01

    Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.

  7. Affordable non-traditional source data mining for context assessment to improve distributed fusion system robustness

    NASA Astrophysics Data System (ADS)

    Bowman, Christopher; Haith, Gary; Steinberg, Alan; Morefield, Charles; Morefield, Michael

    2013-05-01

    This paper describes methods to affordably improve the robustness of distributed fusion systems by opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models, and characterize model quality. These methods can also measure the conformity of this non-traditional data with fusion system products, including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and therefore can improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data, enabling operators to integrate the data with high-level fusion systems and ontologies. These techniques apply the extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture effectively supports assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms, including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods to discover unknown models from non-traditional and 'big data' sources are used to automatically learn entity behaviors and correlations with fusion products [14, 15]. This paper describes our context assessment software development and demonstrates context assessment of non-traditional data compared to an intelligence, surveillance, and reconnaissance fusion product based upon an IED POIs workflow.

  8. Modelling Seasonally Freezing Ground Conditions

    DTIC Science & Technology

    1989-05-01

    Fragments of the abstract: index models are used as the 'snow input' in larger hydrological models (e.g., Pangburn, 1987), with Anderson's (1973) model the most advanced index model; a table gives the percentage areas of Hydrologic Soil Groups, Land Use, and Slope Distribution over the W3 watershed.

  9. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system-specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable whose possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions, mainly from reliability predictions.
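
    One common heuristic consistent with this idea (though not necessarily the presentation's exact guidelines) is to treat the generic point estimate as the median of a lognormal prior and widen its error factor to reflect applicability uncertainty; all numbers below are assumed placeholders.

    ```python
    import numpy as np
    from scipy import stats

    # Generic-database point estimate for a failure rate (per hour), hypothetical
    median = 1e-6
    error_factor = 3.0       # EF = 95th percentile / median for the generic source
    applicability_ef = 2.0   # extra spread for questionable applicability (assumed)

    # For a lognormal, EF = exp(1.645 * sigma), so sigma = ln(EF) / 1.645;
    # multiplying the two EFs to combine spreads is itself a heuristic.
    sigma = np.log(error_factor * applicability_ef) / 1.645
    prior = stats.lognorm(s=sigma, scale=median)
    print(prior.ppf([0.05, 0.5, 0.95]))   # 5th percentile, median, 95th percentile
    ```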

  10. Development of Load Duration Curve System in Data Scarce Watersheds Based on a Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    WANG, J.

    2017-12-01

    In stream water quality control, the total maximum daily load (TMDL) program is very effective. However, the load duration curves (LDC) used in TMDL are difficult to establish in data-scarce watersheds, where no hydrological stations or long-term consecutive hydrological records are available. Moreover, although point and non-point sources of pollutants can be distinguished easily with the aid of LDC, LDC cannot trace where a pollutant comes from or to where it will be transported in the watershed. To find the best management practices (BMPs) for pollutants in a watershed, and to overcome these limitations of LDC, we propose developing LDC based on the distributed hydrological model SWAT for water quality management in data-scarce river basins. In this study, the SWAT model was first established with the scarce hydrological data. Long-term daily flows were then generated with the established SWAT model and rainfall data from the adjacent weather station, and a flow duration curve (FDC) was developed from the generated daily flows. Considering the goals of water quality management, LDC for different pollutants were then obtained from the FDC. With the monitored water quality data and the LDC, the water quality problems caused by point or non-point source pollutants in different seasons can be ascertained. Finally, the SWAT model was employed again to trace the spatial distribution and origin of the pollutants, i.e., the agricultural practices and/or other human activities from which they come. A case study was conducted in the Jian-jiang River, a tributary of the Yangtze River, in Duyun City, Guizhou Province. Results indicate that this method can support TMDL-based water quality management and identify suitable BMPs for reducing pollutants in a watershed.
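
    A minimal sketch of the FDC-to-LDC step described above, using synthetic daily flows in place of SWAT output; the criterion concentration is an assumed placeholder, not a value from the study.

    ```python
    import numpy as np

    def flow_duration_curve(daily_flow_m3s):
        """Return exceedance probability (%) and the flows sorted high to low."""
        q = np.sort(np.asarray(daily_flow_m3s))[::-1]
        exceed = 100.0 * np.arange(1, q.size + 1) / (q.size + 1)  # Weibull positions
        return exceed, q

    # Synthetic stand-in for SWAT-simulated daily flows (m^3/s), 10 years
    rng = np.random.default_rng(2)
    flow = rng.lognormal(mean=1.0, sigma=0.8, size=3650)
    exceed, q = flow_duration_curve(flow)

    # LDC: allowable daily load = flow x criterion concentration.
    # 86.4 = 86400 s/day * 1000 L/m^3 * 1e-6 kg/mg, so m^3/s * mg/L -> kg/day.
    criterion_mg_l = 10.0                      # assumed water-quality standard
    allowable_kg_day = q * criterion_mg_l * 86.4
    ```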

  11. Kinetic modeling of particle dynamics in H- negative ion sources (invited)

    NASA Astrophysics Data System (ADS)

    Hatayama, A.; Shibata, T.; Nishioka, S.; Ohta, M.; Yasumoto, M.; Nishida, K.; Yamamoto, T.; Miyamoto, K.; Fukano, A.; Mizuno, T.

    2014-02-01

    Progress in the kinetic modeling of particle dynamics in H- negative ion source plasmas, and comparisons with experiments, are reviewed and discussed together with some new results. The main focus is on the following two topics, which are important for the research and development of large negative ion sources and high power H- ion beams: (i) effects of non-equilibrium features of the EEDF (electron energy distribution function) on H- production, and (ii) extraction physics of H- ions and beam optics.

  12. Ground deposition of liquid droplets released from a point source in the atmospheric surface layer

    NASA Astrophysics Data System (ADS)

    Panneton, Bernard

    1989-01-01

    A series of field experiments is presented in which the ground deposition of liquid droplets, 120 and 150 microns in diameter, released from a point source at 7 m above ground level, was measured. A detailed description of the experimental technique is provided, and the results are presented and compared to the predictions of a few models. A new rotating droplet generator is described. Droplets are produced by the forced breakup of capillary liquid jets, and droplet coalescence is inhibited by the rotational motion of the spray head. The two-dimensional deposition patterns are presented in the form of plots of contours of constant density, normalized arcwise distributions, and crosswind-integrated distributions. The arcwise distributions follow a Gaussian distribution whose standard deviation is evaluated using a modified Pasquill's technique. Models of the crosswind-integrated deposit from Godson, Csanady, Walker, Bache and Sayer, and Wilson et al. are evaluated. The results indicate that the Wilson et al. random walk model is adequate for predicting the ground deposition of the 150 micron droplets. In one case, where the ratio of the droplet settling velocity to the mean wind speed was largest, Walker's model proved to be adequate. Otherwise, none of the models were acceptable in light of the experimental data.

  13. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    USGS Publications Warehouse

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities (WG02). The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined by the WG02 model. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions adopted in the loss calculations. This is a sensitivity study aimed at future regional earthquake source modelers, so that they may be informed of the effects on loss introduced by modeling assumptions and epistemic uncertainty in the WG02 earthquake source model.

  14. The spatial coherence function in scanning transmission electron microscopy and spectroscopy.

    PubMed

    Nguyen, D T; Findlay, S D; Etheridge, J

    2014-11-01

    We investigate the implications of the form of the spatial coherence function, also referred to as the effective source distribution, for quantitative analysis in scanning transmission electron microscopy, and in particular for interpreting the spatial origin of imaging and spectroscopy signals. These questions are explored using three different source distribution models applied to a GaAs crystal case study. The shape of the effective source distribution was found to have a strong influence not only on the scanning transmission electron microscopy (STEM) image contrast, but also on the distribution of the scattered electron wavefield and hence on the spatial origin of the detected electron intensities. The implications this has for measuring structure, composition and bonding at atomic resolution via annular dark field, X-ray and electron energy loss STEM imaging are discussed.

  15. Learning grammatical categories from distributional cues: flexible frames for language acquisition.

    PubMed

    St Clair, Michelle C; Monaghan, Padraic; Christiansen, Morten H

    2010-09-01

    Numerous distributional cues in the child's environment may potentially assist in language learning, but what cues are useful to the child and when are these cues utilised? We propose that the most useful source of distributional cue is a flexible frame surrounding the word, where the language learner integrates information from the preceding and the succeeding word for grammatical categorisation. In corpus analyses of child-directed speech together with computational models of category acquisition, we show that these flexible frames are computationally advantageous for language learning, as they benefit from the coverage of bigram information across a large proportion of the language environment as well as exploiting the enhanced accuracy of trigram information. Flexible frames are also consistent with the developmental trajectory of children's sensitivity to different sources of distributional information, and they are therefore a useful and usable information source for supporting the acquisition of grammatical categories.
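
    The flexible-frame idea, the preceding and succeeding words jointly conditioning the category of the middle word, can be illustrated in a few lines on a toy corpus (the corpus and the simple counting scheme are illustrative, not the paper's full model):

    ```python
    from collections import defaultdict

    corpus = "the dog chased a cat and the cat saw a dog".split()

    # Flexible frame: the pair (preceding word, succeeding word) around each token
    frame_counts = defaultdict(lambda: defaultdict(int))
    for prev, word, nxt in zip(corpus, corpus[1:], corpus[2:]):
        frame_counts[word][(prev, nxt)] += 1

    # Words sharing many frames are candidates for the same grammatical category
    for word, frames in frame_counts.items():
        print(word, dict(frames))
    ```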

  16. Integration of Reference Frames Using VLBI

    NASA Technical Reports Server (NTRS)

    Ma, Chopo; Smith, David E. (Technical Monitor)

    2001-01-01

    Very Long Baseline Interferometry (VLBI) has the unique potential to integrate the terrestrial and celestial reference frames through simultaneous estimation of positions and velocities of approx. 40 active VLBI stations and a similar number of stations/sites with sufficient historical data, the position and position stability of approx. 150 well-observed extragalactic radio sources and another approx. 500 sources distributed fairly uniformly on the sky, and the time series of the five parameters that specify the relative orientation of the two frames. The full realization of this potential is limited by a number of factors including the temporal and spatial distribution of the stations, uneven distribution of observations over the sources and the sky, variations in source structure, modeling of the solid/fluid Earth and troposphere, logistical restrictions on the daily observing network size, and differing strategies for optimizing analysis for TRF, for CRF and for EOP. The current status of separately optimized and integrated VLBI analysis will be discussed.

  17. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
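
    The core recursion can be sketched in a few lines: starting from a smooth (minimum-norm-like) estimate, each FOCUSS step re-weights the minimum-norm solution by the previous estimate, which progressively shrinks the active source space. This toy version omits the sLORETA standardization and temporal steps of the full algorithm; the lead-field and sources are random stand-ins, and on this small noise-free problem the iteration typically recovers the true support.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((8, 40))        # toy lead-field: 8 sensors, 40 sources
        x_true = np.zeros(40)
        x_true[[5, 23]] = [1.0, -0.8]           # two focal sources
        b = A @ x_true                          # noise-free measurements

        x = np.ones(40)                         # smooth initial estimate
        for _ in range(25):
            W = np.diag(np.abs(x))              # re-weighting by the previous estimate
            x = W @ np.linalg.pinv(A @ W) @ b   # weighted minimum-norm (FOCUSS) step

        print(np.flatnonzero(np.abs(x) > 1e-3)) # recovered support, ideally [5 23]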

  18. [Groundwater organic pollution source identification technology system research and application].

    PubMed

    Wang, Xiao-Hong; Wei, Jia-Hua; Cheng, Zhi-Neng; Liu, Pei-Bin; Ji, Yi-Qun; Zhang, Gan

    2013-02-01

    Groundwater organic pollution has been found at a large number of sites; once established, it spreads widely and is difficult to identify and control. The key to controlling and remediating groundwater pollution is to identify the pollution sources and reduce the hazard to groundwater. Taking typical contaminated sites as examples, this paper carries out source identification studies, establishes a technology system for identifying sources of groundwater organic pollution, and applies the system to typical contaminated sites. First, the geological and hydrogeological conditions of the contaminated sites are characterized, and carbon tetrachloride is determined to be the characteristic pollutant from a large body of groundwater analysis and test data. A solute transport model of the contaminated sites is then established and combined with compound-specific isotope techniques to determine the distribution and status of the organic pollution sources at a typical site; the potential sources identified in this way are investigated and soil samples are taken for analysis. The results show that the two identified historical pollution sources and the reconstructed pollutant concentration distribution are reliable, providing a basis for the treatment of groundwater pollution.

  19. Detecting black bear source-sink dynamics using individual-based genetic graphs.

    PubMed

    Draheim, Hope M; Moore, Jennifer A; Etter, Dwayne; Winterstein, Scott R; Scribner, Kim T

    2016-07-27

    Source-sink dynamics affects population connectivity, spatial genetic structure and population viability for many species. We introduce a novel approach that uses individual-based genetic graphs to identify source-sink areas within a continuously distributed population of black bears (Ursus americanus) in the northern lower peninsula (NLP) of Michigan, USA. Black bear harvest samples (n = 569, from 2002, 2006 and 2010) were genotyped at 12 microsatellite loci and locations were compared across years to identify areas of consistent occupancy over time. We compared graph metrics estimated for a genetic model with metrics from 10 ecological models to identify ecological factors that were associated with sources and sinks. We identified 62 source nodes, 16 of which represent important source areas (net flux > 0.7) and 79 sink nodes. Source strength was significantly correlated with bear local harvest density (a proxy for bear density) and habitat suitability. Additionally, resampling simulations showed our approach is robust to potential sampling bias from uneven sample dispersion. Findings demonstrate black bears in the NLP exhibit asymmetric gene flow, and individual-based genetic graphs can characterize source-sink dynamics in continuously distributed species in the absence of discrete habitat patches. Our findings warrant consideration of undetected source-sink dynamics and their implications on harvest management of game species.
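
    The source/sink classification itself reduces to a simple graph computation: given a directed gene-flow matrix, a node's net flux is its outgoing flow minus its incoming flow, and nodes above a flux threshold (the abstract uses net flux > 0.7) are flagged as important sources. The flow values below are invented for illustration.

        import numpy as np

        # hypothetical directed gene-flow matrix: flow[i, j] = flow from node i to j
        flow = np.array([[0.0, 0.9, 0.7],
                         [0.1, 0.0, 0.2],
                         [0.1, 0.3, 0.0]])

        net_flux = flow.sum(axis=1) - flow.sum(axis=0)   # outflow minus inflow per node
        for i, f in enumerate(net_flux):
            label = 'source' if f > 0.7 else ('sink' if f < 0 else 'intermediate')
            print(f"node {i}: net flux {f:+.2f} -> {label}")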

  20. Towards Full-Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.

    2016-12-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources, which are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source location, and thereby to contribute to a better understanding of noise generation. We introduce an operator-based formulation for the computation of correlation functions and apply the continuous adjoint method, which allows us to compute first and second derivatives of misfit functionals with respect to source distribution and Earth structure efficiently. Based on these developments we design an inversion scheme using a 2D finite-difference code. To enable a joint inversion for noise sources and Earth structure, we investigate (i) the capability of different misfit functionals to image wave speed anomalies and the source distribution, and (ii) possible source-structure trade-offs, especially the extent to which unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus, which allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface.

  1. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    PubMed

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

    Recently, there has been considerable research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we present an analytical field-independent source model specifically developed for GPU-based MC dose calculations, together with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) is proposed. Each PSR contains a group of particles that are of the same type, close in energy and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterizes the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Either a single 2D Gaussian distribution or a mixture of several Gaussian components is employed to represent the particle direction distribution of each PSR. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on GPU, we propose a GPU-friendly sampling strategy, which ensures that the particles sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum dose difference within 1.7%. The maximum relative difference of output factors was within 0.5%. Passing rates above 98.5% were achieved in 3D gamma-index tests with 2%/2 mm criteria in both an IMRT prostate patient case and a head-and-neck case. These results demonstrate the efficacy of our model in accurately representing a reference phase-space file. We also tested the efficiency gain of our source model over our previously developed phase-space-let file source model: the overall efficiency of dose calculation was improved by ~1.3-2.2 times in water and patient cases using our analytical model.
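
    A sketch of the GPU-friendly sampling idea under stated assumptions (the PSR parameters below are invented, and a single 2D Gaussian stands in for the paper's possibly multi-component direction models): particles are drawn one PSR at a time, so a batch is always of one type and a narrow energy range, which is what keeps GPU threads from diverging.

        import numpy as np

        rng = np.random.default_rng(1)

        # hypothetical PSRs: mean direction, Gaussian angular spread, energy window
        psrs = [dict(mu=(0.00, 0.00), sigma=0.01, e_lo=5.5, e_hi=6.0),
                dict(mu=(0.02, 0.00), sigma=0.03, e_lo=1.0, e_hi=5.5),
                dict(mu=(0.00, 0.02), sigma=0.05, e_lo=0.1, e_hi=1.0)]

        def sample_psr(psr, n):
            """One batch per PSR: same particle type, similar energies."""
            d = rng.normal(psr['mu'], psr['sigma'], size=(n, 2))  # direction components
            e = rng.uniform(psr['e_lo'], psr['e_hi'], size=n)     # energies in the ring
            return d, e

        for k, psr in enumerate(psrs):
            d, e = sample_psr(psr, 1000)
            print(f"PSR {k}: mean E = {e.mean():.2f} MeV, direction spread = {d.std():.3f}")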

  2. Approximate Bayesian estimation of extinction rate in the Finnish Daphnia magna metapopulation.

    PubMed

    Robinson, John D; Hall, David W; Wares, John P

    2013-05-01

    Approximate Bayesian computation (ABC) is useful for parameterizing complex models in population genetics. In this study, ABC was applied to simultaneously estimate parameter values for a model of metapopulation coalescence and test two alternatives to a strict metapopulation model in the well-studied network of Daphnia magna populations in Finland. The models shared four free parameters: the subpopulation genetic diversity (θS), the rate of gene flow among patches (4Nm), the founding population size (N0) and the metapopulation extinction rate (e) but differed in the distribution of extinction rates across habitat patches in the system. The three models had either a constant extinction rate in all populations (strict metapopulation), one population that was protected from local extinction (i.e. a persistent source), or habitat-specific extinction rates drawn from a distribution with specified mean and variance. Our model selection analysis favoured the model including a persistent source population over the two alternative models. Of the closest 750,000 data sets in Euclidean space, 78% were simulated under the persistent source model (estimated posterior probability = 0.769). This fraction increased to more than 85% when only the closest 150,000 data sets were considered (estimated posterior probability = 0.774). Approximate Bayesian computation was then used to estimate parameter values that might produce the observed set of summary statistics. Our analysis provided posterior distributions for e that included the point estimate obtained from previous data from the Finnish D. magna metapopulation. Our results support the use of ABC and population genetic data for testing the strict metapopulation model and parameterizing complex models of demography.
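
    The rejection flavour of ABC is easy to sketch: draw extinction rates from the prior, simulate a toy patch-occupancy metapopulation, and keep the draws whose summary statistic lands near the observed one. This stands in for the paper's coalescent simulations and genetic summary statistics, which are far richer; all parameter values below are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)

        def simulate(e, c=0.4, n_patch=50, t=100):
            occ = np.ones(n_patch, dtype=bool)
            for _ in range(t):
                survive = rng.random(n_patch) > e                # local extinction
                colonize = rng.random(n_patch) < c * occ.mean()  # recolonization
                occ = np.where(occ, survive, colonize)
            return occ.mean()                                    # summary statistic

        observed = simulate(0.15)                  # pseudo-data with true e = 0.15

        prior = rng.uniform(0.0, 0.5, 5000)        # prior draws of extinction rate e
        sims = np.array([simulate(e) for e in prior])
        posterior = prior[np.abs(sims - observed) < 0.02]        # rejection step

        print(f"posterior mean e = {posterior.mean():.3f} from {posterior.size} draws")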

  3. Origins and Asteroid Main-Belt Stratigraphy for H-, L-, LL-Chondrite Meteorites

    NASA Astrophysics Data System (ADS)

    Binzel, Richard; DeMeo, Francesca; Burbine, Thomas; Polishook, David; Birlan, Mirel

    2016-10-01

    We trace the origins of ordinary chondrite meteorites to their main-belt sources using their (presumably) larger counterparts observable as near-Earth asteroids (NEAs). We find the ordinary chondrite stratigraphy in the main belt to be LL, H, L (increasing distance from the Sun). We derive this result using spectral information from more than 1000 near-Earth asteroids [1]. Our methodology is to correlate each NEA's main-belt source region [2] with its modeled mineralogy [3]. We find LL chondrites predominantly originate from the inner edge of the asteroid belt (nu6 region at 2.1 AU), H chondrites from the 3:1 resonance region (2.5 AU), and the L chondrites from the outer belt 5:2 resonance region (2.8 AU). Each of these source regions has been cited by previous researchers [e.g. 4, 5, 6], but this work uses an independent methodology that simultaneously solves for the LL, H, L stratigraphy. We seek feedback from the planetary origins and meteoritical communities on the viability or implications of this stratigraphy. Methodology: Spectroscopic and taxonomic data are from the NASA IRTF MIT-Hawaii Near-Earth Object Spectroscopic Survey (MITHNEOS) [1]. For each near-Earth asteroid, we use the Bottke source model [2] to assign a probability that the object is derived from five different main-belt source regions. For each spectrum, we apply the Shkuratov model [3] for radiative transfer within compositional mixing to derive estimates for the ol / (ol+px) ratio (and its uncertainty). The Bottke source region model [2] and the Shkuratov mineralogic model [3] each deliver a probability distribution. For each NEA, we convolve its source region probability distribution with its meteorite class distribution to yield a likelihood for where that class originates. Acknowledgements: This work was supported by the National Science Foundation Grant 0907766 and NASA Grant NNX10AG27G. References: [1] Binzel et al. (2005), LPSC XXXVI, 36.1817. [2] Bottke et al. (2002). Icarus 156, 399. [3] Shkuratov et al. (1999). Icarus 137, 222. [4] Vernazza et al. (2008). Nature 454, 858. [5] Thomas et al. (2010). Icarus 205, 419. [6] Nesvorný et al. (2009). Icarus 200, 698.

  4. Estimation of In-Canopy Ammonia Sources and Sinks in a Fertilized Zea mays Field

    EPA Science Inventory

    An analytical model was developed that describes the in-canopy vertical distribution of NH3 sources and sinks and vertical fluxes in a fertilized agricultural setting using measured in-canopy concentration and wind speed profiles.

  5. Effect of high energy electrons on H⁻ production and destruction in a high current DC negative ion source for cyclotron.

    PubMed

    Onai, M; Etoh, H; Aoki, Y; Shibata, T; Mattei, S; Fujita, S; Hatayama, A; Lettry, J

    2016-02-01

    Recently, a filament-driven multi-cusp negative ion source has been developed for proton cyclotrons in medical applications. In this study, numerical modeling of the filament arc-discharge source plasma has been carried out, combining kinetic modeling of electrons in the ion source plasma by the multi-cusp arc-discharge code with zero-dimensional rate equations for hydrogen molecules and negative ions. The main focus is placed on the effects of the arc-discharge power on the electron energy distribution function and the resultant H(-) production. The modelling results reasonably explain the dependence of the H(-) extraction current on the arc-discharge power observed in the experiments.

  6. Distribution of tsunami interevent times

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Parsons, Tom

    2008-01-01

    The distribution of tsunami interevent times is analyzed using global and site-specific (Hilo, Hawaii) tsunami catalogs. An empirical probability density distribution is determined by binning the observed interevent times during a period in which the observation rate is approximately constant. The empirical distributions for both catalogs exhibit non-Poissonian behavior in which there is an abundance of short interevent times compared to an exponential distribution. Two types of statistical distributions are used to model this clustering behavior: (1) long-term clustering described by a universal scaling law, and (2) Omori law decay of aftershocks and triggered sources. The empirical and theoretical distributions all imply an increased hazard rate after a tsunami, followed by a gradual decrease with time approaching a constant hazard rate. Examination of tsunami sources suggests that many of the short interevent times are caused by triggered earthquakes, though the triggered events are not necessarily on the same fault.
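
    The non-Poissonian signature described here is simple to reproduce: for an exponential (Poisson) process the mean and standard deviation of interevent times are equal, while adding short "triggered" clusters pushes the standard deviation above the mean. The synthetic catalogue below is purely illustrative.

        import numpy as np

        rng = np.random.default_rng(3)

        mains = np.cumsum(rng.exponential(365.0, 200))        # Poisson background (days)
        triggered = (mains[:60, None] + rng.exponential(10.0, (60, 2))).ravel()
        times = np.sort(np.concatenate([mains, triggered]))   # clustered catalogue

        dt = np.diff(times)
        print(f"mean = {dt.mean():.1f} d, std = {dt.std():.1f} d "
              f"(std > mean -> excess of short interevent times)")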

  7. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    NASA Astrophysics Data System (ADS)

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Kuzyk, Zou Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pascal; Semmens, Brice

    2016-04-01

    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound-specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling process, (3) deriving and using informative priors in a sediment fingerprinting context and (4) transparency of the process and replication of model results by other users.
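
    A compact way to see the Bayesian unmixing machinery (a bare-bones stand-in for MixSIAR, without its hierarchical fixed/random effects): put a Dirichlet prior on the source proportions, score each draw against the mixture's tracer values, and form posterior summaries by importance weighting. Tracer values and errors below are invented.

        import numpy as np

        rng = np.random.default_rng(4)

        sources = np.array([[10.0, 2.0],      # tracer means: 3 sources x 2 tracers
                            [ 4.0, 8.0],
                            [ 1.0, 1.0]])
        mixture = np.array([5.5, 4.4])        # measured tracer values of the sediment mix
        sigma = 0.5                           # assumed measurement error

        p = rng.dirichlet(np.ones(3), 200_000)            # prior draws of proportions
        pred = p @ sources                                # predicted mixture tracers
        logw = -0.5 * ((pred - mixture)**2).sum(axis=1) / sigma**2
        w = np.exp(logw - logw.max())                     # importance weights

        post_mean = (w[:, None] * p).sum(axis=0) / w.sum()
        print("posterior mean source proportions:", post_mean.round(2))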

  8. Time-dependent source model of the Lusi mud volcano

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Rudolph, M. L.; Manga, M.

    2014-12-01

    The Lusi mud eruption, near Sidoarjo, East Java, Indonesia, began erupting in May 2006 and continues to erupt today. Previous analyses of surface deformation data suggested an exponential decay of the pressure in the mud source, but did not constrain the geometry and evolution of the source(s) from which the erupting mud and fluids ascend. To understand the spatiotemporal evolution of the mud and fluid sources, we apply a time-dependent inversion scheme to a densely populated InSAR time series of the surface deformation at Lusi. The SAR data set includes 50 images acquired on 3 overlapping tracks of the ALOS L-band satellite between May 2006 and April 2011. Following multitemporal analysis of this data set, the obtained surface deformation time series is inverted in a time-dependent framework to solve for the volume changes of distributed point sources in the subsurface. The volume change distribution resulting from this modeling scheme shows two zones of high volume change underneath Lusi at 0.5-1.5 km and 4-5.5 km depth as well as another shallow zone, 7 km to the west of Lusi and underneath the Wunut gas field. The cumulative volume change within the shallow source beneath Lusi is ~2-4 times larger than that of the deep source, whilst the ratio of the Lusi shallow source volume change to that of the Wunut gas field is ~1. This observation and model suggest that the Lusi shallow source played a key role in the eruption process and mud supply, but that additional fluids do ascend from depths >4 km on eruptive timescales.
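
    The "distributed point sources" forward model can be illustrated with the classic Mogi solution: each point volume change at depth d contributes surface vertical displacement u_z(r) = (1 - ν) ΔV d / (π (d² + r²)^(3/2)), and the predicted deformation is the sum over sources. Depths and volumes below are loosely inspired by the abstract but entirely illustrative.

        import numpy as np

        NU = 0.25                                        # Poisson's ratio

        def mogi_uz(r, depth, dV):
            """Surface uplift from a point volume-change source (Mogi model)."""
            return (1 - NU) * dV * depth / (np.pi * (depth**2 + r**2) ** 1.5)

        r = np.linspace(0.0, 10_000.0, 500)              # radial distance (m)
        # two deflating sources: shallow (1 km) and deep (4.5 km); dV in m^3
        uz = mogi_uz(r, 1_000.0, -3e6) + mogi_uz(r, 4_500.0, -1e6)
        print(f"peak subsidence: {uz.min() * 100:.1f} cm at r = {r[np.argmin(uz)]:.0f} m")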

  9. Viscous and Interacting Flow Field Effects.

    DTIC Science & Technology

    1980-06-01

    in the inviscid flow analysis using free vortex sheets whose shapes are determined by iteration. The outer iteration employs boundary layer...Methods, Inc. which replaces the source distribution in the separation zone by a vortex wake model. This model is described in some detail in (2), but...in the potential flow is obtained using linearly varying vortex singularities distributed on planar panels. The wake is represented by sheets of

  10. A 3D tomographic reconstruction method to analyze Jupiter's electron-belt emission observations

    NASA Astrophysics Data System (ADS)

    Santos-Costa, Daniel; Girard, Julien; Tasse, Cyril; Zarka, Philippe; Kita, Hajime; Tsuchiya, Fuminori; Misawa, Hiroaki; Clark, George; Bagenal, Fran; Imai, Masafumi; Becker, Heidi N.; Janssen, Michael A.; Bolton, Scott J.; Levin, Steve M.; Connerney, John E. P.

    2017-04-01

    Multi-dimensional reconstruction techniques of Jupiter's synchrotron radiation from radio-interferometric observations were first developed by Sault et al. [Astron. Astrophys., 324, 1190-1196, 1997]. The tomographic-like technique introduced 20 years ago permitted the first 3-dimensional mapping of the brightness distribution around the planet. This technique has the advantage of being only weakly dependent on planetary field models; it also requires no knowledge of the energy and spatial distributions of the radiating electrons. On the downside, it assumes that the volume emissivity of any point source around the planet is isotropic. This assumption becomes incorrect when mapping the brightness distribution for non-equatorial point sources or any point sources from Juno's perspective. In this paper, we present our modeling effort to bypass the isotropy issue. Our approach is to use radio-interferometric observations and determine the 3-D brightness distribution in a cylindrical coordinate system. For each set (z, r), we constrain the longitudinal distribution with a Fourier series, and the anisotropy is addressed with a simple periodic function when possible. We develop this new method over a wide range of frequencies using past VLA and LOFAR observations of Jupiter. We plan to test this reconstruction method with observations of Jupiter that are currently being carried out with LOFAR and GMRT in support of the Juno mission. We describe how this new 3D tomographic reconstruction method provides new model constraints on the energy and spatial distributions of Jupiter's ultra-relativistic electrons close to the planet, and how it can be used to interpret Juno MWR observations of Jupiter's electron-belt emission and to assist in evaluating the background noise from the radiation environment in the atmospheric measurements.

  11. Modeling of neutrals in the Linac4 H- ion source plasma: Hydrogen atom production density profile and Hα intensity by collisional radiative model

    NASA Astrophysics Data System (ADS)

    Yamamoto, T.; Shibata, T.; Ohta, M.; Yasumoto, M.; Nishida, K.; Hatayama, A.; Mattei, S.; Lettry, J.; Sawada, K.; Fantz, U.

    2014-02-01

    Controlling the H0 atom production profile in H- ion sources is one of the important issues for efficient and uniform surface H- production. The purpose of this study is to construct a collisional radiative (CR) model to calculate the effective production rate of H0 atoms from H2 molecules in the model geometry of the radio-frequency (RF) H- ion source for the Linac4 accelerator. In order to validate the CR model by comparison with experimental results from optical emission spectroscopy, the model must also calculate the Balmer photon emission rate in the source. As a basic test of the model, the time evolutions of H0 production and the Balmer Hα photon emission rate are calculated for given electron energy distribution functions in the Linac4 RF H- ion source. Reasonable test results are obtained, and a basis for detailed comparisons with experimental results has been established.

  12. A spatial individual-based model predicting a great impact of copious sugar sources and resting sites on survival of Anopheles gambiae and malaria parasite transmission

    USGS Publications Warehouse

    Zhu, Lin; Qualls, Whitney A.; Marshall, John M; Arheart, Kris L.; DeAngelis, Donald L.; McManus, John W.; Traore, Sekou F.; Doumbia, Seydou; Schlein, Yosef; Muller, Gunter C.; Beier, John C.

    2015-01-01

    Background: Agent-based modelling (ABM) has been used to simulate mosquito life cycles and to evaluate vector control applications. However, most models lack sugar-feeding and resting behaviours or are based on mathematical equations lacking individual level randomness and spatial components of mosquito life. Here, a spatial individual-based model (IBM) incorporating sugar-feeding and resting behaviours of the malaria vector Anopheles gambiae was developed to estimate the impact of environmental sugar sources and resting sites on survival and biting behaviour. Methods: A spatial IBM containing An. gambiae mosquitoes and humans, as well as the village environment of houses, sugar sources, resting sites and larval habitat sites was developed. Anopheles gambiae behaviour rules were attributed at each step of the IBM: resting, host seeking, sugar feeding and breeding. Each step represented one second of time, and each simulation was set to run for 60 days and repeated 50 times. Scenarios of different densities and spatial distributions of sugar sources and outdoor resting sites were simulated and compared. Results: When the number of natural sugar sources was increased from 0 to 100 while the number of resting sites was held constant, mean daily survival rate increased from 2.5% to 85.1% for males and from 2.5% to 94.5% for females, mean human biting rate increased from 0 to 0.94 bites per human per day, and mean daily abundance increased from 1 to 477 for males and from 1 to 1,428 for females. When the number of outdoor resting sites was increased from 0 to 50 while the number of sugar sources was held constant, mean daily survival rate increased from 77.3% to 84.3% for males and from 86.7% to 93.9% for females, mean human biting rate increased from 0 to 0.52 bites per human per day, and mean daily abundance increased from 62 to 349 for males and from 257 to 1120 for females. All increases were significant (P < 0.01). Survival was greater when sugar sources were randomly distributed in the whole village compared to clustering around outdoor resting sites or houses. Conclusions: Increases in densities of sugar sources or outdoor resting sites significantly increase the survival and human biting rates of An. gambiae mosquitoes. Survival of An. gambiae is more supported by random distribution of sugar sources than clustering of sugar sources around resting sites or houses. Density and spatial distribution of natural sugar sources and outdoor resting sites modulate vector populations and human biting rates, and thus malaria parasite transmission.

  13. A spatial individual-based model predicting a great impact of copious sugar sources and resting sites on survival of Anopheles gambiae and malaria parasite transmission.

    PubMed

    Zhu, Lin; Qualls, Whitney A; Marshall, John M; Arheart, Kris L; DeAngelis, Donald L; McManus, John W; Traore, Sekou F; Doumbia, Seydou; Schlein, Yosef; Müller, Günter C; Beier, John C

    2015-02-05

    Agent-based modelling (ABM) has been used to simulate mosquito life cycles and to evaluate vector control applications. However, most models lack sugar-feeding and resting behaviours or are based on mathematical equations lacking individual level randomness and spatial components of mosquito life. Here, a spatial individual-based model (IBM) incorporating sugar-feeding and resting behaviours of the malaria vector Anopheles gambiae was developed to estimate the impact of environmental sugar sources and resting sites on survival and biting behaviour. A spatial IBM containing An. gambiae mosquitoes and humans, as well as the village environment of houses, sugar sources, resting sites and larval habitat sites was developed. Anopheles gambiae behaviour rules were attributed at each step of the IBM: resting, host seeking, sugar feeding and breeding. Each step represented one second of time, and each simulation was set to run for 60 days and repeated 50 times. Scenarios of different densities and spatial distributions of sugar sources and outdoor resting sites were simulated and compared. When the number of natural sugar sources was increased from 0 to 100 while the number of resting sites was held constant, mean daily survival rate increased from 2.5% to 85.1% for males and from 2.5% to 94.5% for females, mean human biting rate increased from 0 to 0.94 bites per human per day, and mean daily abundance increased from 1 to 477 for males and from 1 to 1,428 for females. When the number of outdoor resting sites was increased from 0 to 50 while the number of sugar sources was held constant, mean daily survival rate increased from 77.3% to 84.3% for males and from 86.7% to 93.9% for females, mean human biting rate increased from 0 to 0.52 bites per human per day, and mean daily abundance increased from 62 to 349 for males and from 257 to 1120 for females. All increases were significant (P < 0.01). Survival was greater when sugar sources were randomly distributed in the whole village compared to clustering around outdoor resting sites or houses. Increases in densities of sugar sources or outdoor resting sites significantly increase the survival and human biting rates of An. gambiae mosquitoes. Survival of An. gambiae is more supported by random distribution of sugar sources than clustering of sugar sources around resting sites or houses. Density and spatial distribution of natural sugar sources and outdoor resting sites modulate vector populations and human biting rates, and thus malaria parasite transmission.

  14. Sodium D-line emission from Io - Comparison of observed and theoretical line profiles

    NASA Technical Reports Server (NTRS)

    Carlson, R. W.; Matson, D. L.; Johnson, T. V.; Bergstralh, J. T.

    1978-01-01

    High-resolution spectra of the D-line profiles have been obtained for Io's sodium emission cloud. These lines, which are produced through resonance scattering of sunlight, are broad and asymmetric and can be used to infer source and dynamical properties of the sodium cloud. In this paper we compare line profile data with theoretical line shapes computed for several assumed initial velocity distributions corresponding to various source mechanisms. We also examine the consequences of source distributions which are nonuniform over the surface of Io. It is found that the experimental data are compatible with escape of sodium atoms from the leading hemisphere of Io and with velocity distributions characteristic of sputtering processes. Thermal escape and simple models of plasma sweeping are found to be incompatible with the observations.

  15. NOTE: Development of modified voxel phantoms for the numerical dosimetric reconstruction of radiological accidents involving external sources: implementation in SESAME tool

    NASA Astrophysics Data System (ADS)

    Courageot, Estelle; Sayah, Rima; Huet, Christelle

    2010-05-01

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. When the dose distribution is evaluated with a numerical anthropomorphic model, the posture and morphology of the victim have to be reproduced as realistically as possible. Several years ago, IRSN developed a specific software application, called the simulation of external source accident with medical images (SESAME), for the dosimetric reconstruction of radiological accidents by numerical simulation. This tool combines voxel geometry and the MCNP(X) Monte Carlo computer code for radiation-material interaction. This note presents a new functionality in this software that enables the modelling of a victim's posture and morphology based on non-uniform rational B-spline (NURBS) surfaces. The procedure for constructing the modified voxel phantoms is described, along with a numerical validation of this new functionality using a voxel phantom of the RANDO tissue-equivalent physical model.

  16. Development of modified voxel phantoms for the numerical dosimetric reconstruction of radiological accidents involving external sources: implementation in SESAME tool.

    PubMed

    Courageot, Estelle; Sayah, Rima; Huet, Christelle

    2010-05-07

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. When the dose distribution is evaluated with a numerical anthropomorphic model, the posture and morphology of the victim have to be reproduced as realistically as possible. Several years ago, IRSN developed a specific software application, called the simulation of external source accident with medical images (SESAME), for the dosimetric reconstruction of radiological accidents by numerical simulation. This tool combines voxel geometry and the MCNP(X) Monte Carlo computer code for radiation-material interaction. This note presents a new functionality in this software that enables the modelling of a victim's posture and morphology based on non-uniform rational B-spline (NURBS) surfaces. The procedure for constructing the modified voxel phantoms is described, along with a numerical validation of this new functionality using a voxel phantom of the RANDO tissue-equivalent physical model.

  17. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene, E-mail: mertsch@nbi.ku.dk, E-mail: mohamed.rameez@nbi.ku.dk, E-mail: tamborra@nbi.ku.dk

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between 'warm' spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10⁻⁶ Mpc⁻³ and neutrino luminosity L_ν ≲ 10⁴² erg s⁻¹ (10⁴¹ erg s⁻¹) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.

  18. The effect of the charge exchange source on the velocity and 'temperature' distributions and their anisotropies in the earth's exosphere

    NASA Technical Reports Server (NTRS)

    Hodges, R. R., Jr.; Rohrbaugh, R. P.; Tinsley, B. A.

    1981-01-01

    The velocity distribution of atomic hydrogen in the earth's exosphere is calculated as a function of altitude and direction taking into account both the classic exobase source and the higher-altitude plasmaspheric charge exchange source. Calculations are performed on the basis of a Monte Carlo technique in which random ballistic trajectories of individual atoms are traced through a three-dimensional grid of audit zones, at which relative concentrations and momentum or energy fluxes are obtained. In the case of the classical exobase source alone, the slope of the velocity distribution is constant only for the upward radial velocity component and increases dramatically with altitude for the incoming radial and transverse velocity components, resulting in a temperature decrease. The charge exchange source, which produces the satellite hydrogen component and the hot ballistic and escape components of the exosphere, is found to enhance the wings of the velocity distributions; however, this effect is not sufficient to overcome the temperature decreases at altitudes above one earth radius. The resulting global model of the hydrogen exosphere may be used as a realistic basis for radiative transfer calculations.

  19. The Herschel-ATLAS: magnifications and physical sizes of 500-μm-selected strongly lensed galaxies

    NASA Astrophysics Data System (ADS)

    Enia, A.; Negrello, M.; Gurwell, M.; Dye, S.; Rodighiero, G.; Massardi, M.; De Zotti, G.; Franceschini, A.; Cooray, A.; van der Werf, P.; Birkinshaw, M.; Michałowski, M. J.; Oteo, I.

    2018-04-01

    We perform lens modelling and source reconstruction of Sub-millimetre Array (SMA) data for a sample of 12 strongly lensed galaxies selected at 500 μm in the Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS). A previous analysis of the same data set used a single Sérsic profile to model the light distribution of each background galaxy. Here we model the source brightness distribution with an adaptive pixel scale scheme, extended to work in the Fourier visibility space of interferometry. We also present new SMA observations for seven other candidate lensed galaxies from the H-ATLAS sample. Our derived lens model parameters are in general consistent with previous findings. However, our estimated magnification factors, ranging from 3 to 10, are lower. The discrepancies are observed in particular where the reconstructed source hints at the presence of multiple knots of emission. We define an effective radius of the reconstructed sources based on the area in the source plane where emission is detected above 5σ. We also fit the reconstructed source surface brightness with an elliptical Gaussian model. We derive a median value r_eff ~ 1.77 kpc and a median Gaussian full width at half-maximum ~1.47 kpc. After correction for magnification, our sources have intrinsic star formation rates (SFR) ~ 900-3500 M⊙ yr⁻¹, resulting in a median SFR surface density Σ_SFR ~ 132 M⊙ yr⁻¹ kpc⁻² (or ~218 M⊙ yr⁻¹ kpc⁻² for the Gaussian fit). This is consistent with that observed for other star-forming galaxies at similar redshifts, and is significantly below the Eddington limit for a radiation pressure regulated starburst.

  20. Population at risk: using areal interpolation and Twitter messages to create population models for burglaries and robberies

    PubMed Central

    2018-01-01

    Population at risk of crime varies due to the characteristics of a population as well as the crime generator and attractor places where crime is located, establishing different crime opportunities for different crimes. However, few modeling efforts derive spatiotemporal population models that allow accurate assessment of population exposure to crime. This study develops population models to depict the spatial distribution of people who have a heightened crime risk for burglaries and robberies. The data used in the study include Census data as source data for the existing population; Twitter geo-located data and locations of schools as ancillary data to redistribute the source data more accurately in space; and gridded population and crime data to evaluate the derived population models. To create the models, a density-weighted areal interpolation technique was used that disaggregates the source data into smaller spatial units considering the spatial distribution of the ancillary data. The models were evaluated with validation data that assess the interpolation error and with spatial statistics that examine their relationship with the crime types. Our approach derived population models of a finer resolution that can assist in more precise spatial crime analyses and also provide accurate information about crime rates to the public.
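
    The density-weighted areal interpolation step can be sketched directly: each tract's population is split among its grid cells in proportion to an ancillary weight (here standing in for geo-located Twitter densities), so tract totals are preserved while the within-tract distribution follows the ancillary data. All numbers are invented.

        import numpy as np

        tract_pop = np.array([1000.0, 600.0])               # source data: 2 census tracts
        cell_tract = np.array([0, 0, 0, 1, 1])              # tract id of each grid cell
        cell_weight = np.array([5.0, 1.0, 4.0, 2.0, 2.0])   # ancillary density per cell

        cell_pop = np.zeros_like(cell_weight)
        for t, pop in enumerate(tract_pop):
            m = cell_tract == t
            cell_pop[m] = pop * cell_weight[m] / cell_weight[m].sum()

        print(cell_pop)          # [500. 100. 400. 300. 300.]; tract sums are preserved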

  1. Evaluation of substitution monopole models for tire noise sound synthesis

    NASA Astrophysics Data System (ADS)

    Berckmans, D.; Kindt, P.; Sas, P.; Desmet, W.

    2010-01-01

    Due to the considerable efforts in engine noise reduction, tire noise has become one of the major sources of passenger car noise nowadays and the demand for accurate prediction models is high. A rolling tire is therefore experimentally characterized by means of the substitution monopole technique, suiting a general sound synthesis approach with a focus on perceived sound quality. The running tire is substituted by a monopole distribution covering the static tire. All monopoles have mutual phase relationships and a well-defined volume velocity distribution which is derived by means of the airborne source quantification technique; i.e. by combining static transfer function measurements with operating indicator pressure measurements close to the rolling tire. Models with varying numbers/locations of monopoles are discussed and the application of different regularization techniques is evaluated.

  2. Predicting properties of gas and solid streams by intrinsic kinetics of fast pyrolysis of wood

    DOE PAGES

    Klinger, Jordan; Bar-Ziv, Ezra; Shonnard, David; ...

    2015-12-12

    Pyrolysis has the potential to create a biocrude oil from biomass sources that can be used as fuel or as feedstock for subsequent upgrading to hydrocarbon fuels or other chemicals. The product distribution/composition, however, is linked to the biomass source. This work investigates the products formed from pyrolysis of woody biomass with a previously developed chemical kinetics model. Different woody feedstocks reported in prior literature are placed on a common basis (moisture, ash, fixed carbon free) and normalized by initial elemental composition through ultimate analysis. Observed product distributions over the full devolatilization range are explored, reconstructed by the model, and verified with independent experimental data collected with a microwave-assisted pyrolysis system. These trends include production of permanent gas (CO, CO2), char, and condensable (oil, water) species. Elemental compositions of these streams are also investigated. Close agreement between literature data, model predictions, and independent experimental data indicates that the proposed model/method is able to predict the product distribution from fast pyrolysis given reaction temperature, residence time, and feedstock composition.

  3. Gravitational Lenses and the Structure and Evolution of Galaxies

    NASA Technical Reports Server (NTRS)

    Oliversen, Ronald J. (Technical Monitor); Kochanek, Christopher

    2004-01-01

    During the first year of the project we completed five papers, each of which represents a new direction in the theory and interpretation of gravitational lenses. In the first paper, The Importance of Einstein Rings, we developed the first theory for the formation and structure of the Einstein rings formed by lensing extended sources like the host galaxies of quasar and radio sources. In the second paper, Cusped Mass Models Of Gravitational Lenses, we introduced a new class of lens models. In the third paper, Global Probes of the Impact of Baryons on Dark Matter Halos, we made the first globally consistent models for the separation distribution of gravitational lenses including both galaxy and cluster lenses. The last two papers explore the properties of two lenses in detail. During the second year we have focused more closely on the relationship of baryons and dark matter. In the third year we have been further examining the relationship between baryons and dark matter. In the present year we extended our statistical analysis of lens mass distributions using a self-similar model for the halo mass distribution as compared to the luminous galaxy.

  4. Modelling Greenland icebergs

    NASA Astrophysics Data System (ADS)

    Marson, Juliana M.; Myers, Paul G.; Hu, Xianmin

    2017-04-01

    The Atlantic Meridional Overturning Circulation (AMOC) is well known for carrying heat from low to high latitudes, moderating local temperatures. Numerical studies have examined the AMOC's variability under the influence of freshwater input to subduction and deep convection sites. However, an important source of freshwater has often been overlooked or misrepresented: icebergs. While liquid runoff decreases the ocean salinity near the coast, icebergs are a gradual and remote source of freshwater - a difference that affects sea ice cover, temperature, and salinity distribution in ocean models. Icebergs originating from the Greenland ice sheet, in particular, can affect the subduction process in the Labrador Sea by decreasing surface water density. Our study aims to evaluate the distribution of icebergs originating from Greenland and their contribution to freshwater input in the North Atlantic. To do that, we use an interactive iceberg module coupled with the Nucleus for European Modelling of the Ocean (NEMO v3.4), which will calve icebergs from Greenland according to rates established by Bamber et al. (2012). Details on the distribution and trajectory of icebergs within the model may also be of use for understanding potential navigation threats, as shipping increases in northern waters.

  5. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
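
    The flavour of the approach can be shown with a scalar quasi-1D analogue (a stand-in for LAPIN's full continuity/momentum/energy system): a component is represented not as a lumped jump condition but as a source term spread over several grid points of an upwind finite-difference solver, so its effect on the solution is distributed axially.

        import numpy as np

        nx = 200
        a, dx = 1.0, 1.0 / 200                      # advection speed, grid spacing
        dt = 0.5 * dx / a                           # CFL-stable time step
        u = np.ones(nx)

        S = np.zeros(nx)
        S[80:100] = 2.0                             # "component" spread over 20 grid points

        for _ in range(800):                        # march to (near) steady state
            u[1:] += -a * dt / dx * (u[1:] - u[:-1]) + dt * S[1:]   # upwind + source
            u[0] = 1.0                              # fixed inflow boundary

        print(f"property rise across the component: {u[150] - u[50]:.3f}")  # ~0.200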

  6. Distribution and trajectories of floating and benthic marine macrolitter in the south-eastern North Sea.

    PubMed

    Gutow, Lars; Ricker, Marcel; Holstein, Jan M; Dannheim, Jennifer; Stanev, Emil V; Wolff, Jörg-Olaf

    2018-06-01

    In coastal waters the identification of sources, trajectories and deposition sites of marine litter is often hampered by the complex oceanography of shallow shelf seas. We conducted a multi-annual survey on litter at the sea surface and on the seafloor in the south-eastern North Sea. Bottom trawling was identified as a major source of marine litter. Oceanographic modelling revealed that the distribution of floating litter in the North Sea is largely determined by the site of origin of floating objects whereas the trajectories are strongly influenced by wind drag. Methods adopted from species distribution modelling indicated that resuspension of benthic litter and near-bottom transport processes strongly influence the distribution of litter on the seafloor. Major sink regions for floating marine litter were identified at the west coast of Denmark and in the Skagerrak. Our results may support the development of strategies to reduce the pollution of the North Sea.

  7. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    PubMed

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
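
    The empirical test is straightforward to reproduce on synthetic data: for sizes drawn from a power law with tail exponent μ = 1, the log-log rank-size plot has slope close to -1, which is Zipf's law. Real package-size data would replace the Pareto draws below.

        import numpy as np

        rng = np.random.default_rng(5)

        sizes = (1.0 + rng.pareto(1.0, 10_000)) * 10.0    # tail exponent mu = 1
        ranked = np.sort(sizes)[::-1]                     # rank-size ordering
        ranks = np.arange(1, ranked.size + 1)

        # Zipf: size ~ rank**(-1); fit the slope over the top 1000 ranks
        slope = np.polyfit(np.log(ranks[:1000]), np.log(ranked[:1000]), 1)[0]
        print(f"log-log rank-size slope: {slope:.2f} (Zipf predicts -1)")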

  8. Simulation of future groundwater recharge using a climate model ensemble and SAR-image based soil parameter distributions - A case study in an intensively-used Mediterranean catchment.

    PubMed

    Herrmann, Frank; Baghdadi, Nicolas; Blaschek, Michael; Deidda, Roberto; Duttmann, Rainer; La Jeunesse, Isabelle; Sellami, Haykel; Vereecken, Harry; Wendland, Frank

    2016-02-01

    We used observed climate data, an ensemble of four GCM-RCM combinations (global and regional climate models) and the water balance model mGROWA to estimate present and future groundwater recharge for the intensively-used Thau lagoon catchment in southern France. In addition to a highly resolved soil map, soil moisture distributions obtained from SAR-images (Synthetic Aperture Radar) were used to derive the spatial distribution of soil parameters covering the full simulation domain. Doing so helped us to assess the impact of different soil parameter sources on the modelled groundwater recharge levels. Groundwater recharge was simulated in monthly time steps using the ensemble approach and analysed in its spatial and temporal variability. The soil parameters originating from both sources led to very similar groundwater recharge rates, proving that soil parameters derived from SAR images may replace traditionally used soil maps in regions where soil maps are sparse or missing. Additionally, we showed that the variance in different GCM-RCMs influences the projected magnitude of future groundwater recharge change significantly more than the variance in the soil parameter distributions derived from the two different sources. For the period between 1950 and 2100, climate change impacts based on the climate model ensemble indicated that overall groundwater recharge will possibly show a low to moderate decrease in the Thau catchment. However, as no clear trend resulted from the ensemble simulations, reliable recommendations for adapting the regional groundwater management to changed available groundwater volumes could not be derived.

  9. Near-field sound radiation of fan tones from an installed turbofan aero-engine.

    PubMed

    McAlpine, Alan; Gaffney, James; Kingan, Michael J

    2015-09-01

    The development of a distributed source model to predict fan tone noise levels of an installed turbofan aero-engine is reported. The key objective is to examine a canonical problem: how to predict the pressure field due to a distributed source located near an infinite, rigid cylinder. This canonical problem is a simple representation of an installed turbofan, where the distributed source is based on the pressure pattern generated by a spinning duct mode, and the rigid cylinder represents an aircraft fuselage. The radiation of fan tones can be modelled in terms of spinning modes. In this analysis, based on duct modes, theoretical expressions for the near-field acoustic pressures on the cylinder, or at the same locations without the cylinder, have been formulated. Simulations of the near-field acoustic pressures are compared against measurements obtained from a fan rig test. Also, the installation effect is quantified by calculating the difference in the sound pressure levels with and without the adjacent cylindrical fuselage. Results are shown for the blade passing frequency fan tone radiated at a supersonic fan operating condition.

  10. The calculating study of the moisture transfer influence at the temperature field in a porous wet medium with internal heat sources

    NASA Astrophysics Data System (ADS)

    Kuzevanov, V. S.; Garyaev, A. B.; Zakozhurnikova, G. S.; Zakozhurnikov, S. S.

    2017-11-01

    A porous wet medium with solid and gaseous components and with distributed or localized heat sources was considered. The temperature regimes during heating were studied for various initial moisture contents of the material. A mathematical model was developed for the investigated wet porous multicomponent medium with internal heat sources, taking into account heat transfer by conduction with variable thermal parameters and porosity, heat transfer by radiation, chemical reactions, drying and moistening of the solids, heat and mass transfer of volatile reaction products by filtration flows, and moisture transfer. A numerical algorithm and a computer program implementing the proposed mathematical model were created, allowing the dynamics of heating under local or distributed heat release to be studied, in particular the impact of moisture transfer on the temperature field in the medium. Temperature histories were obtained at different points of the medium for different initial moisture contents. It is concluded that the heating regimes of a solid porous body can be controlled through the initial moisture distribution.

  11. Activation Time of Cardiac Tissue In Response to a Linear Array of Spatial Alternating Bipolar Electrodes

    NASA Astrophysics Data System (ADS)

    Mashburn, David; Wikswo, John

    2007-11-01

    Prevailing theories about the response of the heart to high field shocks predict that local regions of high resistivity distributed throughout the heart create multiple small virtual electrodes that hyperpolarize or depolarize tissue and lead to widespread activation. This resetting of bulk tissue is responsible for the successful functioning of cardiac defibrillators. By activating cardiac tissue with regular linear arrays of spatially alternating bipolar currents, we can simulate these potentials locally. We have studied the activation time due to distributed currents in both a 1D Beeler-Reuter model and on the surface of the whole heart, varying the strength of each source and the separation between them. By comparison with activation time data from actual field shock of a whole heart in a bath, we hope to better understand these transient virtual electrodes. Our work was done on rabbit RV using fluorescent optical imaging and our Phased Array Stimulator for driving the 16 current sources. Our model shows that for a given total absolute current delivered to a region of tissue, the entire region activates faster if above-threshold sources are more distributed.

  12. The flow structure of jets from transient sources and implications for modeling short-duration explosive volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Chojnicki, K. N.; Clarke, A. B.; Adrian, R. J.; Phillips, J. C.

    2014-12-01

    We used laboratory experiments to examine the rise process in neutrally buoyant jets that resulted from an unsteady supply of momentum, a condition that defines plumes from discrete Vulcanian and Strombolian-style eruptions. We simultaneously measured the analog-jet discharge rate (the supply rate of momentum) and the analog-jet internal velocity distribution (a consequence of momentum transport and dilution). Then, we examined the changes in the analog-jet velocity distribution over time to assess the impact of the supply-rate variations on the momentum-driven rise dynamics. We found that the analog-jet velocity distribution changes significantly and quickly as the supply rate varied, such that the whole-field distribution at any instant differed considerably from the time average. We also found that entrainment varied in space and over time with instantaneous entrainment coefficient values ranging from 0 to 0.93 in an individual unsteady jet. Consequently, we conclude that supply-rate variations exert first-order control over jet dynamics, and therefore cannot be neglected in models without compromising their capability to predict large-scale eruption behavior. These findings emphasize the fundamental differences between unsteady and steady jet dynamics, and show clearly that: (i) variations in source momentum flux directly control the dynamics of the resulting flow; (ii) impulsive flows driven by sources of varying flux cannot reasonably be approximated by quasi-steady flow models. New modeling approaches capable of describing the time-dependent properties of transient volcanic eruption plumes are needed before their trajectory, dilution, and stability can be reliably computed for hazards management.

  13. Open-Source as a strategy for operational software - the case of Enki

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur; Bruland, Oddbjørn

    2014-05-01

    Since 2002, SINTEF Energy has been developing what is now known as the Enki modelling system. This development has been financed by Norway's largest hydropower producer, Statkraft, motivated by a desire for distributed hydrological models in operational use. As the owner of the source code, Statkraft has recently decided on Open Source as a strategy for further development, and for migration from an R&D context to operational use. A cooperation project is currently being carried out by SINTEF Energy, seven large Norwegian hydropower producers including Statkraft, three universities and one software company. The most immediate task is that of software maturing. A more important challenge, however, is one of gaining experience within the operational hydropower industry. A transition from lumped to distributed models is likely to also require revision of measurement programs, calibration strategy, and the use of GIS and modern data sources like weather radar and satellite imagery. On the other hand, map-based visualisations enable a richer information exchange between hydrologic forecasters and power market traders. The operating context of a distributed hydrology model within hydropower planning is far from settled. Being both a modelling framework and a library of plugin routines to build models from, Enki supports the flexibility needed in this situation. Recent development has separated the core from the user interface, paving the way for a scripting API, cross-platform compilation, and front-end programs serving different degrees of flexibility, robustness and security. The open source strategy invites anyone to use Enki and to develop and contribute new modules. Once tested, the same modules are available for the operational versions of the program. A core challenge is to offer rigid testing procedures and mechanisms to reject routines in an operational setting, without limiting experimentation with new modules. The Open Source strategy also has implications for building and maintaining competence around the source code and the advanced hydrological and statistical routines in Enki. Originally developed by hydrologists, the Enki code is now approaching a state where maintenance requires a background in professional software development. Without the advantage of proprietary source code, both hydrologic improvements and software maintenance depend on donations or development support on a case-by-case basis, a situation well known within the open source community. It remains to be seen whether these mechanisms suffice to keep Enki at the maintenance level required by the hydropower sector. Enki is available from www.opensource-enki.org.

  14. A class of ejecta transport test problems

    NASA Astrophysics Data System (ADS)

    Oro, David M.; Hammerberg, J. E.; Buttler, William T.; Mariam, Fesseha G.; Morris, Christopher L.; Rousculp, Chris; Stone, Joseph B.

    2012-03-01

    Hydro code implementations of ejecta dynamics at shocked interfaces presume a source distribution function of particulate masses and velocities, f0(m,u;t). Some properties of this source distribution function have been determined from Taylor- and supported-shockwave experiments. Such experiments measure the mass moment of f0 under vacuum conditions, assuming weak particle-particle interactions and, usually, fully inelastic scattering (capture) of ejecta particles by piezoelectric diagnostic probes. Recently, planar ejection of W particles into vacuum, Ar, and Xe gas atmospheres has been carried out to provide benchmark transport data for transport model development and validation. We present those experimental results and compare them with modeled transport of the W-ejecta particles in Ar and Xe.

  15. Predatory Publishing, Questionable Peer Review, and Fraudulent Conferences

    PubMed Central

    2014-01-01

    Open-access is a model for publishing scholarly, peer-reviewed journals on the Internet that relies on sources of funding other than subscription fees. Some publishers and editors have exploited the author-pays model of open-access publishing for their own profit. Submissions are encouraged through widely distributed e-mails on behalf of a growing number of journals that may accept many or all submissions and subject them to little, if any, peer review or editorial oversight. Bogus conference invitations are distributed in a similar fashion. The results of these less-than-ethical practices might include loss of faculty members' time and money, inappropriate article inclusions in curricula vitae, and costs to the college or funding source. PMID:25657363

  16. Dosimetric characterizations of GZP6 60Co high dose rate brachytherapy sources: application of superimposition method

    PubMed Central

    Bahreyni Toossi, Mohammad Taghi; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Meigooni, Ali Soleimani

    2012-01-01

    Background Dosimetric characteristics of a high dose rate (HDR) GZP6 60Co brachytherapy source have been evaluated following the American Association of Physicists in Medicine Task Group 43U1 (AAPM TG-43U1) recommendations for its clinical applications. Materials and methods The MCNP-4C and MCNPX Monte Carlo codes were utilized to calculate the dose rate constant, two-dimensional (2D) dose distribution, radial dose function and 2D anisotropy function of the source. These parameters are compared with the available data for the Ralstron 60Co and microSelectron 192Ir sources. In addition, a superimposition method was developed to extend the results obtained for GZP6 source No. 3 to the other GZP6 sources. Results The simulated dose rate constant for the GZP6 source was 1.104±0.03 cGy h-1 U-1. The radial dose function and 2D anisotropy function of this source are presented here in graphical and tabulated form. The results of these investigations show that the dosimetric parameters of the GZP6 source are comparable to those of the Ralstron source. While the dose rate constants for the two 60Co sources are similar to that of the microSelectron 192Ir source, there are differences between the radial dose functions and anisotropy functions. The radial dose function of the 192Ir source is less steep than those of both 60Co source models. In addition, the 60Co sources show a more isotropic dose distribution than the 192Ir source. Conclusions The superimposition method is applicable to produce dose distributions for other source arrangements from the dose distribution of a single source. The calculated dosimetric quantities of this new source can be introduced as input data to the GZP6 treatment planning system (TPS) and used to validate the performance of the TPS. PMID:23077455
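
    The superimposition idea itself is simple to state in code: the dose grid of a single source is shifted to each source position and summed, weighted by source strength. The sketch below uses a toy inverse-square kernel in place of the Monte Carlo-derived GZP6 dose data; the geometry, spacing and weights are hypothetical.

      import numpy as np

      def single_source_dose(shape=(101, 101), spacing=0.1):
          """Toy single-source dose grid centred in the array (arbitrary units)."""
          y, x = np.indices(shape)
          cy, cx = shape[0] // 2, shape[1] // 2
          r2 = ((y - cy) ** 2 + (x - cx) ** 2) * spacing ** 2
          return 1.0 / np.maximum(r2, spacing ** 2)   # cap the singularity at voxel scale

      def superimpose(kernel, offsets, weights):
          """Sum shifted copies of the single-source kernel at each source position.
          np.roll wraps at edges; adequate for sources far from the grid boundary."""
          total = np.zeros_like(kernel)
          for (dy, dx), w in zip(offsets, weights):
              total += w * np.roll(np.roll(kernel, dy, axis=0), dx, axis=1)
          return total

      # Three collinear sources, 0.5 cm apart (5 voxels at 0.1 cm), equal strengths.
      dose = superimpose(single_source_dose(),
                         offsets=[(0, -5), (0, 0), (0, 5)],
                         weights=[1.0, 1.0, 1.0])
      print("peak dose (arb. units):", dose.max())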

  17. A quasi-static model of global atmospheric electricity. I - The lower atmosphere

    NASA Technical Reports Server (NTRS)

    Hays, P. B.; Roble, R. G.

    1979-01-01

    A quasi-steady model of global lower atmospheric electricity is presented. The model considers thunderstorms as dipole electric generators that can be randomly distributed in various regions and that are the only source of atmospheric electricity and includes the effects of orography and electrical coupling along geomagnetic field lines in the ionosphere and magnetosphere. The model is used to calculate the global distribution of electric potential and current for model conductivities and assumed spatial distributions of thunderstorms. Results indicate that large positive electric potentials are generated over thunderstorms and penetrate to ionospheric heights and into the conjugate hemisphere along magnetic field lines. The perturbation of the calculated electric potential and current distributions during solar flares and subsequent Forbush decreases is discussed, and future measurements of atmospheric electrical parameters and modifications of the model which would improve the agreement between calculations and measurements are suggested.

  18. Quantum key distribution with entangled photon sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma Xiongfeng; Fung, Chi-Hang Fred; Lo, H.-K.

    2007-07-15

    A parametric down-conversion (PDC) source can be used as either a triggered single-photon source or an entangled-photon source in quantum key distribution (QKD). The triggering PDC QKD has already been studied in the literature. On the other hand, a model and a post-processing protocol for the entanglement PDC QKD are still missing. We fill in this important gap by proposing such a model and a post-processing protocol for the entanglement PDC QKD. Although the PDC model is proposed to study the entanglement-based QKD, we emphasize that our generic model may also be useful for other non-QKD experiments involving a PDC source. Since an entangled PDC source is a basis-independent source, we apply Koashi and Preskill's security analysis to the entanglement PDC QKD. We also investigate the entanglement PDC QKD with two-way classical communications. We find that the recurrence scheme increases the key rate and the Gottesman-Lo protocol helps tolerate higher channel losses. By simulating a recent 144-km open-air PDC experiment, we compare three implementations: entanglement PDC QKD, triggering PDC QKD, and coherent-state QKD. The simulation result suggests that the entanglement PDC QKD can tolerate higher channel losses than the coherent-state QKD. The coherent-state QKD with decoy states achieves the highest key rate in the low- and medium-loss regions. By applying the Gottesman-Lo two-way post-processing protocol, the entanglement PDC QKD can tolerate up to 70 dB combined channel losses (35 dB for each channel) provided that the PDC source is placed in between Alice and Bob. After considering statistical fluctuations, the PDC setup can tolerate up to 53 dB channel losses.
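
    To make the loss-tolerance discussion concrete, the sketch below computes a toy key-rate-versus-loss curve using the generic one-way bound R = q·Y·(1 - 2·H2(e)); the detector error, dark-count and sifting parameters are illustrative assumptions, not those of the cited experiment or protocols.

      import math

      def h2(p):
          """Binary entropy function."""
          if p <= 0.0 or p >= 1.0:
              return 0.0
          return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

      def key_rate(loss_db, e_detector=0.015, dark=1e-5, q=0.5):
          eta = 10 ** (-loss_db / 10)          # channel transmittance
          yield_pair = eta + dark              # probability a pair yields a detection
          # error rate: optical misalignment plus random dark-count errors
          e = (e_detector * eta + 0.5 * dark) / yield_pair
          return max(0.0, q * yield_pair * (1 - 2 * h2(e)))

      for loss in (0, 10, 20, 30, 40):
          print(f"{loss:2d} dB: R = {key_rate(loss):.3e} bits/pair")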

  19. Effects of Droplet Size on Intrusion of Sub-Surface Oil Spills

    NASA Astrophysics Data System (ADS)

    Adams, Eric; Chan, Godine; Wang, Dayang

    2014-11-01

    We explore effects of droplet size on droplet intrusion and transport in sub-surface oil spills. Negatively buoyant glass beads released continuously to a stratified ambient simulate oil droplets in a rising multiphase plume, and distributions of settled beads are used to infer signatures of surfacing oil. Initial tests used quiescent conditions, while ongoing tests simulate currents by towing the source and a bottom sled. Without current, deposited beads have a Gaussian distribution, with variance increasing with decreasing particle size. Distributions agree with a model assuming first order particle loss from an intrusion layer of constant thickness, and empirically determined flow rate. With current, deposited beads display a parabolic distribution similar to that expected from a source in uniform flow; we are currently comparing observed distributions with similar analytical models. Because chemical dispersants have been used to reduce oil droplet size, our study provides one measure of their effectiveness. Results are applied to conditions from the "Deep Spill" field experiment, and the recent Deepwater Horizon oil spill, and are being used to provide "inner boundary conditions" for subsequent far field modeling of these events. This research was made possible by grants from Chevron Energy Technology Co., through the Chevron-MITEI University Partnership Program, and BP/The Gulf of Mexico Research Initiative, GISR.
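
    The first-order-loss picture can be illustrated with a short Monte Carlo sketch: particles enter an intrusion layer of constant thickness, diffuse horizontally, and settle out at a rate set by their fall velocity, so slower-settling (smaller) particles spread further before depositing, consistent with the trend reported above. All parameter values are assumed for illustration.

      import numpy as np

      rng = np.random.default_rng(0)

      def deposit_spread(w_s, h=0.1, kappa=1e-4, n=5000, dt=1.0):
          """Std. dev. (m) of the settled-bead footprint for fall velocity w_s (m/s).
          h: intrusion layer thickness (m); kappa: horizontal diffusivity (m^2/s)."""
          x = np.zeros(n)                 # horizontal positions of beads in the layer
          alive = np.ones(n, dtype=bool)
          while alive.any():
              k = alive.sum()
              x[alive] += rng.normal(0.0, np.sqrt(2.0 * kappa * dt), k)  # lateral spreading
              alive[alive] &= rng.random(k) > w_s * dt / h  # first-order settling loss
          return x.std()

      for w_s in (1e-3, 5e-4, 2e-4):      # smaller particles -> smaller fall velocity
          print(f"w_s = {w_s:.0e} m/s -> deposit sigma = {deposit_spread(w_s):.2f} m")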

  20. Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.

    PubMed

    Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle

    2011-05-01

    We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem, when spatially-extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) the parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources and ii) the introduction of an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher order statistics (q ≥ 2) offers better robustness with respect to Gaussian noise of unknown spatial coherence and modeling errors. As a result, we reduced the penalizing effects of both the background cerebral activity, which can be seen as Gaussian, spatially correlated noise, and the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically-relevant models of both the sources and the volume conductor show a highly increased performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms. Copyright © 2011 Elsevier Inc. All rights reserved.
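
    For readers unfamiliar with subspace scanning, the sketch below implements plain second-order MUSIC (not the 2q-ExSo-MUSIC extension): candidate lead-field vectors are ranked by their correlation with the signal subspace estimated from the data covariance. The lead field here is a random toy matrix, not a physiological forward model.

      import numpy as np

      rng = np.random.default_rng(1)
      n_sensors, n_candidates, n_samples, n_sources = 32, 200, 500, 2

      G = rng.normal(size=(n_sensors, n_candidates))   # toy lead-field matrix
      true = [40, 130]                                 # hypothetical active sources
      S = rng.normal(size=(n_sources, n_samples))      # source time courses
      X = G[:, true] @ S + 0.3 * rng.normal(size=(n_sensors, n_samples))  # noisy data

      C = X @ X.T / n_samples                          # data covariance
      eigval, eigvec = np.linalg.eigh(C)
      Us = eigvec[:, -n_sources:]                      # signal subspace estimate

      # Subspace correlation of each candidate topography with the signal subspace.
      g = G / np.linalg.norm(G, axis=0)
      score = np.linalg.norm(Us.T @ g, axis=0)

      print("top candidates:", np.argsort(score)[-2:][::-1], "(true:", true, ")")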

  1. Application of Second-Moment Source Analysis to Three Problems in Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2011-12-01

    Though earthquake forecasting models have often represented seismic sources as space-time points (usually hypocenters), a more complete hazard analysis requires the consideration of finite-source effects, such as rupture extent, orientation, directivity, and stress drop. The most compact source representation that includes these effects is the finite moment tensor (FMT), which approximates the degree-two polynomial moments of the stress glut by its projection onto the seismic (degree-zero) moment tensor. This projection yields a scalar space-time source function whose degree-one moments define the centroid moment tensor (CMT) and whose degree-two moments define the FMT. We apply this finite-source parameterization to three forecasting problems. The first is the question of hypocenter bias: can we reject the null hypothesis that the conditional probability of hypocenter location is uniformly distributed over the rupture area? This hypothesis is currently used to specify rupture sets in the "extended" earthquake forecasts that drive simulation-based hazard models, such as CyberShake. Following McGuire et al. (2002), we test the hypothesis using the distribution of FMT directivity ratios calculated from a global data set of source slip inversions. The second is the question of source identification: given an observed FMT (and its errors), can we identify it with an FMT in the complete rupture set that represents an extended fault-based rupture forecast? Solving this problem will facilitate operational earthquake forecasting, which requires the rapid updating of earthquake triggering and clustering models. Our proposed method uses the second-order uncertainties as a norm on the FMT parameter space to identify the closest member of the hypothetical rupture set and to test whether this closest member is an adequate representation of the observed event. Finally, we address the aftershock excitation problem: given a mainshock, what is the spatial distribution of aftershock probabilities? The FMT representation allows us to generalize the models typically used for this purpose (e.g., marked point process models, such as ETAS), which will again be necessary in operational earthquake forecasting. To quantify aftershock probabilities, we compare mainshock FMTs with the first and second spatial moments of weighted aftershock hypocenters. We will describe applications of these results to the Uniform California Earthquake Rupture Forecast, version 3, which is now under development by the Working Group on California Earthquake Probabilities.

  2. MOVES-Matrix and distributed computing for microscale line source dispersion analysis.

    PubMed

    Liu, Haobing; Xu, Xiaodan; Rodgers, Michael O; Xu, Yanzhi Ann; Guensler, Randall L

    2017-07-01

    MOVES and AERMOD are the U.S. Environmental Protection Agency's recommended models for use in project-level transportation conformity and hot-spot analysis. However, the structure and algorithms involved in running MOVES make analyses cumbersome and time-consuming. Likewise, the modeling setup process in AERMOD, with its extensive data requirements and prescribed input formats, leads to a high potential for analysis error in dispersion modeling. This study presents a distributed computing method for line source dispersion modeling that integrates MOVES-Matrix, a high-performance emission modeling tool, with the microscale dispersion models CALINE4 and AERMOD. MOVES-Matrix was prepared by iteratively running MOVES across all possible combinations of vehicle source type, fuel, operating conditions, and environmental parameters to create a huge multi-dimensional emission rate lookup matrix. AERMOD and CALINE4 are connected with MOVES-Matrix in a distributed computing cluster using a series of Python scripts. This streamlined system built on MOVES-Matrix generates exactly the same emission rates and concentration results as using MOVES with AERMOD and CALINE4, which are regulatory models approved by the U.S. EPA for conformity analysis, but the approach is more than 200 times faster than using the MOVES graphical user interface. Because AERMOD requires detailed meteorological input, which is difficult to obtain, this study also recommends using CALINE4 as a screening tool for identifying areas that may exceed air quality standards (and areas that are exceedingly unlikely to exceed them) before running AERMOD. The CALINE4 worst-case method yields consistently higher concentration results than AERMOD for all comparisons in this paper, as expected given the nature of the meteorological data employed.
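
    The lookup-plus-distribution pattern described above can be sketched in a few lines: a precomputed emission-rate array is indexed by discretized operating conditions, and link-level dispersion jobs are farmed out to worker processes. The array axes, bin edges, link attributes and the dispersion kernel below are hypothetical stand-ins, not the actual MOVES-Matrix or AERMOD/CALINE4 interfaces.

      import numpy as np
      from multiprocessing import Pool

      # Hypothetical lookup matrix: axes = (speed bin, temperature bin) -> g/mile.
      SPEED_BINS = np.arange(0, 80, 5)
      TEMP_BINS = np.arange(-10, 45, 5)
      RATE = np.random.default_rng(2).uniform(0.5, 5.0,
                                              (len(SPEED_BINS), len(TEMP_BINS)))

      def emission_rate(speed_mph, temp_c):
          """O(1) lookup replacing a full emission-model run for these conditions."""
          i = np.clip(np.searchsorted(SPEED_BINS, speed_mph) - 1, 0, len(SPEED_BINS) - 1)
          j = np.clip(np.searchsorted(TEMP_BINS, temp_c) - 1, 0, len(TEMP_BINS) - 1)
          return RATE[i, j]

      def disperse(link):
          """Toy Gaussian line-source kernel standing in for CALINE4/AERMOD."""
          q = emission_rate(link["speed"], link["temp"]) * link["volume"]
          sigma_z, u = 2.0 + 0.1 * link["dist"], 2.0   # assumed dispersion and wind
          return q * np.sqrt(2.0 / np.pi) / (sigma_z * u)

      if __name__ == "__main__":
          links = [{"speed": s, "temp": 20, "volume": 1000, "dist": d}
                   for s in (25, 45, 65) for d in (10, 50, 100)]
          with Pool(4) as pool:                        # distribute links to workers
              print(pool.map(disperse, links))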

  3. Lenstronomy: Multi-purpose gravitational lens modeling software package

    NASA Astrophysics Data System (ADS)

    Birrer, Simon; Amara, Adam

    2018-04-01

    Lenstronomy is a multi-purpose open-source gravitational lens modeling python package. Lenstronomy reconstructs the lens mass and surface brightness distributions of strong lensing systems using forward modelling and supports a wide range of analytic lens and light models in arbitrary combination. The software is also able to reconstruct complex extended sources as well as point sources. Lenstronomy is flexible and numerically accurate, with a clear user interface that could be deployed across different platforms. Lenstronomy has been used to derive constraints on dark matter properties in strong lenses, measure the expansion history of the universe with time-delay cosmography, measure cosmic shear with Einstein rings, and decompose quasar and host galaxy light.

  4. Monte Carlo simulation for light propagation in 3D tooth model

    NASA Astrophysics Data System (ADS)

    Fu, Yongji; Jacques, Steven L.

    2011-03-01

    Monte Carlo (MC) simulation was implemented in a three-dimensional tooth model to simulate light propagation in the tooth for antibiotic photodynamic therapy (PDT) and other laser therapies. The goal of this research is to estimate the light energy deposition in the target region of the tooth given the light source information, tooth optical properties and tooth structure. Two use cases are presented to demonstrate the practical application of this model: one compares the dose distributions of an isotropic point source and a narrow beam, and the other compares different incidence points for the same light source. This model will help clinicians design PDT treatment in the tooth.
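
    A minimal sketch of the underlying Monte Carlo photon transport is given below: photons from an isotropic point source take exponentially distributed steps and deposit a fraction of their weight at each interaction. A homogeneous medium with isotropic scattering replaces the 3D tooth geometry, and the optical coefficients are illustrative, not measured tooth properties.

      import numpy as np

      rng = np.random.default_rng(3)
      mu_a, mu_s = 0.1, 10.0            # absorption / scattering coeffs (1/mm), assumed
      mu_t = mu_a + mu_s

      def propagate(n_photons=5000, n_steps=200):
          deposited = []                 # (radius, absorbed weight) per interaction
          for _ in range(n_photons):
              pos, w = np.zeros(3), 1.0
              d = rng.normal(size=3); d /= np.linalg.norm(d)  # isotropic launch
              for _ in range(n_steps):
                  pos += d * rng.exponential(1.0 / mu_t)      # free path length
                  absorbed = w * mu_a / mu_t                  # absorbed fraction
                  deposited.append((np.linalg.norm(pos), absorbed))
                  w -= absorbed
                  if w < 1e-3:                                # terminate faint photons
                      break
                  d = rng.normal(size=3); d /= np.linalg.norm(d)  # isotropic scatter
          return np.array(deposited)

      r, a = propagate().T
      print("mean deposition radius (mm):", np.average(r, weights=a))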

  5. Manpower Planning for Marketing and Distribution

    ERIC Educational Resources Information Center

    Eggland, Steven A.; Williams, John W.

    1975-01-01

    The article describes a planning model developed by the University of Nebraska for specialized distributive education programs at the postsecondary level that collects data from two sources of information--prospective students and potential employers--to determine the need for such programs as floristry, hardware marketing, advertising, and food…

  6. A novel model incorporating two variability sources for describing motor evoked potentials

    PubMed Central

    Goetz, Stefan M.; Luber, Bruce; Lisanby, Sarah H.; Peterchev, Angel V.

    2014-01-01

    Objective Motor evoked potentials (MEPs) play a pivotal role in transcranial magnetic stimulation (TMS), e.g., for determining the motor threshold and probing cortical excitability. Sampled across the range of stimulation strengths, MEPs outline an input–output (IO) curve, which is often used to characterize the corticospinal tract. More detailed understanding of the signal generation and variability of MEPs would provide insight into the underlying physiology and aid correct statistical treatment of MEP data. Methods A novel regression model is tested using measured IO data of twelve subjects. The model splits MEP variability into two independent contributions, acting on both sides of a strong sigmoidal nonlinearity that represents neural recruitment. Traditional sigmoidal regression with a single variability source after the nonlinearity is used for comparison. Results The distribution of MEP amplitudes varied across different stimulation strengths, violating statistical assumptions in traditional regression models. In contrast to the conventional regression model, the dual variability source model better described the IO characteristics including phenomena such as changing distribution spread and skewness along the IO curve. Conclusions MEP variability is best described by two sources that most likely separate variability in the initial excitation process from effects occurring later on. The new model enables more accurate and sensitive estimation of the IO curve characteristics, enhancing its power as a detection tool, and may apply to other brain stimulation modalities. Furthermore, it extracts new information from the IO data concerning the neural variability—information that has previously been treated as noise. PMID:24794287
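
    One plausible reading of the dual-source idea is sketched below: one noise term perturbs the effective stimulation strength before a sigmoidal recruitment curve, and a second, multiplicative term acts on its output, so the spread and skewness of simulated MEP amplitudes change along the IO curve. The parameterization is illustrative, not the authors' exact model.

      import numpy as np

      rng = np.random.default_rng(4)

      def sigmoid(x, x50=50.0, slope=0.2, top=5.0):
          """Toy recruitment curve: MEP amplitude (mV) vs stimulation strength."""
          return top / (1.0 + np.exp(-slope * (x - x50)))

      def simulate_mep(x, sd_in=3.0, sd_out=0.4):
          x_eff = x + rng.normal(0.0, sd_in, size=np.shape(x))   # input-side noise
          return sigmoid(x_eff) * np.exp(rng.normal(0.0, sd_out, np.shape(x)))

      # Spread and skewness of the MEP distribution change along the IO curve,
      # which a single additive output noise term cannot reproduce.
      for strength in (40, 50, 60, 70):
          amps = simulate_mep(np.full(2000, float(strength)))
          print(f"x={strength}: median={np.median(amps):.2f} mV, IQR="
                f"{np.percentile(amps, 75) - np.percentile(amps, 25):.2f} mV")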

  7. An improved DPSM technique for modelling ultrasonic fields in cracked solids

    NASA Astrophysics Data System (ADS)

    Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique

    2007-04-01

    In recent years, the Distributed Point Source Method (DPSM) has been used to model various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM, several point sources are placed near the transducer face, interfaces and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of the different, strategically placed layers of point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose, the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can mitigate the shadow region problem to some extent; complete removal of the problem is achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, are compared with those from the proposed modified technique, which nullifies their contributions. One application of this research is the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
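
    The core DPSM superposition step is easy to sketch: the field at a target point is the sum of spherical waves from a layer of point sources along the transducer aperture. The geometry, wavenumber and uniform source strengths below are toy values, and no shadow-region (CSR) correction is included.

      import numpy as np

      k = 2 * np.pi / 1.5e-3                 # wavenumber for a 1.5 mm wavelength, assumed

      def field(points, sources, strengths):
          """Complex field at `points`: superposed spherical waves e^{ikr}/r."""
          r = np.linalg.norm(points[:, None, :] - sources[None, :, :], axis=2)
          return (strengths[None, :] * np.exp(1j * k * r) / r).sum(axis=1)

      # 11 point sources along a 10 mm aperture in the z = 0 plane, uniform strength.
      xs = np.linspace(-5e-3, 5e-3, 11)
      sources = np.column_stack([xs, np.zeros_like(xs), np.zeros_like(xs)])
      strengths = np.ones(len(xs), dtype=complex)

      # Field magnitude along the beam axis.
      z = np.linspace(1e-3, 50e-3, 5)
      targets = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])
      print(np.abs(field(targets, sources, strengths)))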

  8. An avoidance behavior model for migrating whale populations

    NASA Astrophysics Data System (ADS)

    Buck, John R.; Tyack, Peter L.

    2003-04-01

    A new model is presented for the avoidance behavior of migrating marine mammals in the presence of a noise stimulus. This model assumes that each whale will adjust its movement pattern near a sound source to maintain its exposure below its own individually specific maximum received sound-pressure level, called its avoidance threshold. The probability distribution function (PDF) of this avoidance threshold across individuals characterizes the migrating population. The avoidance threshold PDF may be estimated by comparing the distribution of migrating whales during playback and control conditions at their closest point of approach to the sound source. The proposed model was applied to the January 1998 experiment which placed a single acoustic source from the U.S. Navy SURTASS-LFA system in the migration corridor of grey whales off the California coast. This analysis found that the median avoidance threshold for this migrating grey whale population was 135 dB, with 90% confidence that the median threshold was within +/-3 dB of this value. This value is less than the 141 dB value for 50% avoidance obtained when the 1984 "Probability of Avoidance" model of Malme et al. was applied to the same data. [Work supported by ONR.]

  9. Humpback whale-generated ambient noise levels provide insight into singers' spatial densities.

    PubMed

    Seger, Kerri D; Thode, Aaron M; Urbán-R, Jorge; Martínez-Loustalot, Pamela; Jiménez-López, M Esther; López-Arzate, Diana

    2016-09-01

    Baleen whale vocal activity can be the dominant underwater ambient noise source for certain locations and seasons. Previous wind-driven ambient-noise formulations have been adjusted to model ambient noise levels generated by random distributions of singing humpback whales in ocean waveguides and have been combined to a single model. This theoretical model predicts that changes in ambient noise levels with respect to fractional changes in singer population (defined as the noise "sensitivity") are relatively unaffected by the source level distributions and song spectra of individual humpback whales (Megaptera novaeangliae). However, the noise "sensitivity" does depend on frequency and on how the singers' spatial density changes with population size. The theoretical model was tested by comparing visual line transect surveys with bottom-mounted passive acoustic data collected during the 2013 and 2014 humpback whale breeding seasons off Los Cabos, Mexico. A generalized linear model (GLM) estimated the noise "sensitivity" across multiple frequency bands. Comparing the GLM estimates with the theoretical predictions suggests that humpback whales tend to maintain relatively constant spacing between one another while singing, but that individual singers either slightly increase their source levels or song duration, or cluster more tightly as the singing population increases.

  10. An equivalent body surface charge model representing three-dimensional bioelectrical activity

    NASA Technical Reports Server (NTRS)

    He, B.; Chernyak, Y. B.; Cohen, R. J.

    1995-01-01

    A new surface-source model has been developed to account for the bioelectrical potential on the body surface. A single-layer surface-charge model on the body surface has been developed to equivalently represent bioelectrical sources inside the body. The boundary conditions on the body surface are discussed in relation to the surface-charge in a half-space conductive medium. The equivalent body surface-charge is shown to be proportional to the normal component of the electric field on the body surface just outside the body. The spatial resolution of the equivalent surface-charge distribution appears intermediate between those of the body surface potential distribution and the body surface Laplacian distribution. An analytic relationship between the equivalent surface-charge and the surface Laplacian of the potential was found for a half-space conductive medium. The effects of finite spatial sampling and noise on the reconstruction of the equivalent surface-charge were evaluated by computer simulations. It was found through computer simulations that the reconstruction of the equivalent body surface-charge from the body surface Laplacian distribution is very stable against noise and finite spatial sampling. The present results suggest that the equivalent body surface-charge model may provide additional insight into our understanding of bioelectric phenomena.

  11. Outer satellite atmospheres: Their nature and planetary interactions

    NASA Technical Reports Server (NTRS)

    Smyth, W. H.; Combi, M. R.

    1984-01-01

    Significant insights regarding the nature and interactions of Io and the planetary magnetosphere were gained through modeling studies of the spatial morphology and brightness of the Io sodium cloud. East-west intensity asymmetries in Region A are consistent with an east-west electric field and the offset of the magnetic and planetary-spin axes. East-west orbital asymmetries and the absolute brightness of Region B suggest a low-velocity (3 km/sec) satellite source of 1 to 2 x 10(26) sodium atoms/sec. The time-varying spatial structure of the sodium directional features near Region C provides direct evidence for a magnetospheric-wind-driven escape mechanism with a high-velocity (20 km/sec) source of 1 x 10(26) atoms/sec and a flux distribution enhanced at the equator relative to the poles. A model for the Io potassium cloud is presented and analysis of data suggests a low velocity source rate of 5 x 10(24) atoms/sec. To understand the role of Titan and non-Titan sources for H atoms in the Saturn system, the lifetime of hydrogen in the planetary magnetosphere was incorporated into the earlier Titan torus model of Smyth (1981) and its expected impact discussed. A particle trajectory model for cometary hydrogen is presented and applied to the Lyman-alpha distribution of Comet Kohoutek (1973XII).

  12. OpenFLUID: an open-source software environment for modelling fluxes in landscapes

    NASA Astrophysics Data System (ADS)

    Fabre, Jean-Christophe; Rabotin, Michaël; Crevoisier, David; Libres, Aline; Dagès, Cécile; Moussa, Roger; Lagacherie, Philippe; Raclot, Damien; Voltz, Marc

    2013-04-01

    Integrative landscape functioning has become a common concept in environmental management. Landscapes are complex systems where many processes interact in time and space. In agro-ecosystems, these processes are mainly physical processes, including hydrological processes, biological processes and human activities. Modelling such systems requires an interdisciplinary approach, coupling models coming from different disciplines, developed by different teams. In order to support collaborative work involving many models coupled in time and space for integrative simulations, an open software modelling platform is a relevant answer. OpenFLUID is an open source software platform for modelling landscape functioning, mainly focused on spatial fluxes. It provides an advanced object-oriented architecture allowing users to i) couple models developed de novo or from existing source code, which are dynamically plugged into the platform, ii) represent landscapes as hierarchical graphs, taking into account multiple scales, spatial heterogeneities and the connectivity of landscape objects, iii) run and explore simulations in many ways: using the OpenFLUID user interfaces (command line interface, graphical user interface), or using external applications such as GNU R through the provided ROpenFLUID package. OpenFLUID is developed in C++ and relies on open source libraries only (Boost, libXML2, GLib/GTK, OGR/GDAL, …). For modelers and developers, OpenFLUID provides a dedicated environment for model development, based on an open source toolchain including the Eclipse editor, the GCC compiler and the CMake build system. OpenFLUID is distributed under the GPLv3 open source license, with a special exception allowing existing models licensed under any license to be plugged in. It is clearly in the spirit of sharing knowledge and favouring collaboration in a community of modelers. OpenFLUID has been involved in many research applications, such as modelling of hydrological network transfer, diagnosis and prediction of water quality taking into account human activities, study of the effect of spatial organization on hydrological fluxes, modelling of surface-subsurface water exchanges, … At the LISAH research unit, OpenFLUID is the supporting development platform of the MHYDAS model, a distributed model for agrosystems (Moussa et al., 2002, Hydrological Processes, 16, 393-412). OpenFLUID web site: http://www.openfluid-project.org

  13. Complex earthquake rupture and local tsunamis

    USGS Publications Warehouse

    Geist, E.L.

    2002-01-01

    In contrast to far-field tsunami amplitudes that are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. From a global catalog of tsunami runup observations, this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes in the magnitude range of 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns are generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized and the vertical displacement fields from point source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1). Analysis of the results indicates that for earthquakes of a fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a factor of 3 or more. These results indicate that there is substantially more variation in the local tsunami wave field derived from the inherent complexity of subduction zone earthquakes than predicted by a simple elastic dislocation model. Probabilistic methods that take into account variability in earthquake rupture processes are likely to yield more accurate assessments of tsunami hazards.
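
    A common way to generate such stochastic, self-affine slip distributions is spectral synthesis: filter white noise with a power-law wavenumber falloff and rescale to the target moment. The sketch below assumes this approach, with an illustrative decay exponent and grid rather than the paper's actual parameters.

      import numpy as np

      rng = np.random.default_rng(5)

      def self_affine_slip(nx=64, ny=32, decay=2.0, mean_slip=2.0):
          """2D slip field whose amplitude spectrum falls off as k^-decay."""
          kx = np.fft.fftfreq(nx)[None, :]
          ky = np.fft.fftfreq(ny)[:, None]
          kk = np.sqrt(kx ** 2 + ky ** 2)
          kk[0, 0] = 1.0                              # avoid divide-by-zero at DC
          spec = (kk ** -decay) * np.exp(2j * np.pi * rng.random((ny, nx)))
          slip = np.fft.ifft2(spec).real              # take the real part of the synthesis
          slip -= slip.min()                          # enforce non-negative slip
          return slip * mean_slip / slip.mean()       # rescale to the target mean (moment)

      slip = self_affine_slip()
      print(f"mean slip {slip.mean():.2f} m, peak slip {slip.max():.2f} m")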

  14. Vaginal drug distribution modeling.

    PubMed

    Katz, David F; Yuan, Andrew; Gao, Yajing

    2015-09-15

    This review presents and applies fundamental mass transport theory describing the diffusion and convection driven mass transport of drugs to the vaginal environment. It considers sources of variability in the predictions of the models. It illustrates use of model predictions of microbicide drug concentration distribution (pharmacokinetics) to gain insights about drug effectiveness in preventing HIV infection (pharmacodynamics). The modeling compares vaginal drug distributions after different gel dosage regimens, and it evaluates consequences of changes in gel viscosity due to aging. It compares vaginal mucosal concentration distributions of drugs delivered by gels vs. intravaginal rings. Finally, the modeling approach is used to compare vaginal drug distributions across species with differing vaginal dimensions. Deterministic models of drug mass transport into and throughout the vaginal environment can provide critical insights about the mechanisms and determinants of such transport. This knowledge, and the methodology that obtains it, can be applied and translated to multiple applications, involving the scientific underpinnings of vaginal drug distribution and the performance evaluation and design of products, and their dosage regimens, that achieve it. Copyright © 2015 Elsevier B.V. All rights reserved.
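
    The diffusion component of such transport models reduces, in its simplest form, to a 1D diffusion equation solved through the gel and tissue layers. The sketch below uses an explicit finite-difference scheme with assumed geometry and diffusivity, and omits convection, partitioning and clearance, so it is illustrative only.

      import numpy as np

      nx, dx = 200, 5e-4                  # 200 cells of 5 um -> 0.1 cm domain (cm)
      D = 5e-7                            # drug diffusivity (cm^2/s), assumed
      dt = 0.4 * dx * dx / D              # stable explicit time step (coefficient < 0.5)
      c = np.zeros(nx)
      c[:40] = 1.0                        # initial drug loading in the gel layer

      t_end = 3600.0                      # one hour of transport
      for _ in range(int(t_end / dt)):
          c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
          c[0], c[-1] = c[1], 0.0         # no-flux at the lumen, sink at deep tissue

      depth = np.argmax(c < 0.01 * c.max()) * dx * 1e4
      print(f"1% penetration depth after 1 h: {depth:.0f} um")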

  15. MEG (Magnetoencephalography) multipolar modeling of distributed sources using RAP-MUSIC (Recursively Applied and Projected Multiple Signal Characterization)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J. C.; Baillet, S.; Jerbi, K.

    2001-01-01

    We describe the use of truncated multipolar expansions for producing dynamic images of cortical neural activation from measurements of the magnetoencephalogram. We use a signal-subspace method to find the locations of a set of multipolar sources, each of which represents a region of activity in the cerebral cortex. Our method builds up an estimate of the sources in a recursive manner, i.e. we first search for point current dipoles, then magnetic dipoles, and finally first order multipoles. The dynamic behavior of these sources is then computed using a linear fit to the spatiotemporal data. The final step in the procedure is to map each of the multipolar sources into an equivalent distributed source on the cortical surface. The method is illustrated through an application to epileptic interictal MEG data.

  16. Distributed Seismic Moment Fault Model, Spectral Characteristics and Radiation Patterns

    NASA Astrophysics Data System (ADS)

    Shani-Kadmiel, Shahar; Tsesarsky, Michael; Gvirtzman, Zohar

    2014-05-01

    We implement a Distributed Seismic Moment (DSM) fault model, a physics-based representation of an earthquake source based on a skewed-Gaussian slip distribution over an elliptical rupture patch, for the purpose of forward modeling of seismic-wave propagation in 3-D heterogeneous medium. The elliptical rupture patch is described by 13 parameters: location (3), dimensions of the patch (2), patch orientation (1), focal mechanism (3), nucleation point (2), peak slip (1), and rupture velocity (1). A node-based, second-order finite difference approach is used to solve the seismic-wave equations in displacement formulation (WPP, Nilsson et al., 2007). Results of our DSM fault model are compared with three commonly used fault models: the Point Source Model (PSM), Haskell's fault Model (HM), and HM with Radial (HMR) rupture propagation. Spectral features of the waveforms and radiation patterns from these four models are investigated. The DSM fault model best incorporates the simplicity and symmetry of the PSM with the directivity effects of the HMR while satisfying the physical requirements, i.e., a smooth transition from peak slip at the nucleation point to zero at the rupture patch border. The implementation of the DSM in seismic-wave propagation forward models comes at negligible computational cost. Reference: Nilsson, S., Petersson, N. A., Sjogreen, B., and Kreiss, H.-O. (2007). Stable Difference Approximations for the Elastic Wave Equation in Second Order Formulation. SIAM Journal on Numerical Analysis, 45(5), 1902-1936.
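
    A slip distribution with the stated properties, peaking near the nucleation point and tapering smoothly to zero at the border of an elliptical patch, can be sketched as follows. A plain Gaussian taper is used here for brevity in place of the skewed-Gaussian form, and all parameter values are illustrative.

      import numpy as np

      def dsm_slip(nx=121, ny=61, a=50.0, b=20.0, peak_slip=3.0,
                   nuc=(-20.0, 0.0), width=25.0):
          """Slip (m) on a grid covering an a-by-b (km) elliptical rupture patch."""
          x = np.linspace(-a, a, nx)[None, :]
          y = np.linspace(-b, b, ny)[:, None]
          inside = (x / a) ** 2 + (y / b) ** 2 <= 1.0      # elliptical patch mask
          r2 = (x - nuc[0]) ** 2 + (y - nuc[1]) ** 2       # distance^2 from nucleation
          taper = 1.0 - (x / a) ** 2 - (y / b) ** 2        # goes to 0 at the border
          slip = peak_slip * np.exp(-r2 / (2 * width ** 2)) * np.clip(taper, 0.0, None)
          return np.where(inside, slip, 0.0)

      slip = dsm_slip()
      print(f"peak slip {slip.max():.2f} m over {np.count_nonzero(slip)} subfaults")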

  17. Observational Constraints on the Global Budget of Ethanol

    NASA Astrophysics Data System (ADS)

    Naik, V.; Fiore, A. M.; Horowitz, L. W.; Singh, H. B.; Wiedinmyer, C.; Guenther, A. B.; de Gouw, J.; Millet, D.; Levy, H.; Oppenheimer, M.

    2007-12-01

    Ethanol, an oxygenated volatile organic compound (OVOC), is used extensively as a motor fuel and fuel additive to promote clean combustion. Ethanol can affect the oxidizing capacity and the ozone-forming potential of the atmosphere. Limited available atmospheric observations suggest a global background atmospheric ethanol mixing ratio of about 20 pptv, with values up to 3 ppbv near source regions; however, the atmospheric distribution and budget of ethanol remain poorly understood. Here, we use the global three-dimensional chemical transport model MOZART-4 to investigate the global ethanol distribution and budget, and place constraints on the budget by evaluating the model with atmospheric observations. We implement a global ethanol source of 14.7 Tg yr-1 in the model consisting of biogenic emissions (9.2 Tg yr-1), industrial/anthropogenic emissions (3.2 Tg yr-1), emissions from biofuels (1.8 Tg yr-1), biomass burning emissions (0.5 Tg yr-1), and a secondary source from atmospheric production (0.056 Tg yr-1). Gas-phase oxidation by the hydroxyl radical accounts for 66% of the global sink of ethanol in the model, dry deposition 9%, and wet scavenging 25%. The simulation yields a global mean ethanol burden of 0.11 Tg and an atmospheric lifetime of 3 days. The simulated boundary layer mean ethanol concentrations underestimate observations from field campaigns over the United States by 50%, downwind of Asia by 76% and over the remote Pacific Ocean by 86%. Because of the short lifetime of ethanol, the model discrepancy over remote tropical regions cannot be attributed to an underestimate of surface emissions over continents. In these regions, the dominant model source is secondary atmospheric production, from the reaction of the ethyl peroxy radical (C2H5O2) either with itself or with the methyl peroxy radical (CH3O2). A ~500-fold increase in this diffuse source (to ~30 Tg yr-1) distributed uniformly throughout the troposphere would largely correct the observation-model mismatch, resulting in a best estimate of the global ethanol source of 44 Tg yr-1. This finding could indicate omission of other chemical species in the model that can provide additional sources of C2H5O2. Candidate OVOCs, such as propionaldehyde, and peroxypropionic nitric anhydride (PPN) that are precursors to C2H5O2, have been measured in the remote troposphere. This hypothesis, however, needs testing by direct measurements of C2H5O2 in the remote tropical troposphere.
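
    The quoted burden and source are mutually consistent with the stated lifetime, as the short steady-state check below shows (lifetime = burden / loss rate, with loss balancing the source at steady state).

      burden_tg = 0.11                       # global mean burden (Tg)
      source_tg_yr = 14.7                    # global source (Tg/yr)
      lifetime_days = burden_tg / source_tg_yr * 365.0
      print(f"implied lifetime: {lifetime_days:.1f} days")   # ~2.7, matching ~3 days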

  18. Noise-Induced Synchronization among Sub-RF CMOS Analog Oscillators for Skew-Free Clock Distribution

    NASA Astrophysics Data System (ADS)

    Utagawa, Akira; Asai, Tetsuya; Hirose, Tetsuya; Amemiya, Yoshihito

    We present on-chip oscillator arrays synchronized by random noise, aiming at skew-free clock distribution on synchronous digital systems. Nakao et al. recently reported that independent neural oscillators can be synchronized by applying temporal random impulses to the oscillators [1], [2]. We regard neural oscillators as independent clock sources on LSIs; i.e., clock sources are distributed on LSIs and forced to synchronize through the use of random noise. We designed neuron-based clock generators operating in the sub-RF region (<1 GHz) by modifying the original neuron model to a new model suitable for CMOS implementation, using 0.25-μm CMOS parameters. Through circuit simulations, we demonstrate that i) the clock generators are indeed synchronized by pseudo-random noise and ii) the clock generators exhibit phase-locked oscillations even when they have small device mismatches.
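
    The noise-induced synchronization mechanism can be demonstrated with two identical, uncoupled phase oscillators receiving the same random impulse train: a phase-dependent response makes their phase difference contract over time. The sinusoidal phase-response curve and impulse statistics below are illustrative, not the paper's circuit model.

      import numpy as np

      rng = np.random.default_rng(6)
      omega, dt, eps = 2 * np.pi * 1.0, 1e-3, 0.15   # 1 Hz oscillators, impulse gain

      theta = np.array([0.0, 2.0])                   # distinct initial phases
      for step in range(200_000):
          theta += omega * dt                        # free-running advance
          if rng.random() < 0.01:                    # common Poisson impulse train
              theta += eps * np.sin(theta)           # identical, phase-dependent kick
          if step % 50_000 == 0:
              diff = np.angle(np.exp(1j * (theta[0] - theta[1])))
              print(f"t={step * dt:6.1f} s  phase difference = {diff:+.4f} rad")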

  19. Packaging and distributing ecological data from multisite studies

    NASA Technical Reports Server (NTRS)

    Olson, R. J.; Voorhees, L. D.; Field, J. M.; Gentry, M. J.

    1996-01-01

    Studies of global change and other regional issues depend on ecological data collected at multiple study areas or sites. An information system model is proposed for compiling diverse data from dispersed sources so that the data are consistent, complete, and readily available. The model includes investigators who collect and analyze field measurements, science teams that synthesize data, a project information system that collates data, a data archive center that distributes data to secondary users, and a master data directory that provides broader searching opportunities. Special attention to format consistency is required, such as units of measure, spatial coordinates, dates, and notation for missing values. Often data may need to be enhanced by estimating missing values, aggregating to common temporal units, or adding other related data such as climatic and soils data. Full documentation, an efficient data distribution mechanism, and an equitable way to acknowledge the original source of data are also required.

  20. Charge state distribution and emission characteristics in a table top reflex discharge - Effect of ion confinement and electrons accelerated across the sheath

    DOE PAGES

    Kumar, Deepak; Englesbe, Alexander; Parman, Matthew; ...

    2015-11-05

    Tabletop reflex discharges in a Penning geometry have many applications including ion sources and eXtreme Ultra-Violet (XUV) sources. The presence of primary electrons accelerated across the cathode sheaths is responsible for the distribution of ion charge states and for the unusually high XUV brightness of these plasmas. Absolutely calibrated, space-resolved XUV spectra from a tabletop reflex discharge operating with Al cathodes and Ne gas are presented. The spectra are analyzed with a new and complete model for ion charge distribution in similar reflex discharges. The plasma in the discharge was found to have a density of ~10^18 m^-3 with a significant fraction (>0.01) of fast primary electrons. The implications of the new model for the ion states achievable in a tabletop reflex plasma discharge are also discussed.

  1. A spatial analysis of the dispersion of transportation induced carbon monoxide using the Gaussian line source method

    NASA Astrophysics Data System (ADS)

    Tarigan, A. P. M.; Suryati, I.; Gusrianti, D.

    2018-03-01

    The purpose of this study is to model the spatial distribution of transportation-induced carbon monoxide (CO) from a street, i.e. Jl. Singamangaraja in Medan City, using the Gaussian line source method with GIS. The observed traffic volume on Jl. Singamangaraja is 7,591 units/hour in the morning and 7,433 units/hour in the afternoon. The corresponding emission rates are 49,171.7 µg/(m·s) in the morning and 46,943.1 µg/(m·s) in the afternoon. Based on the Gaussian line source method, the highest CO concentration is found at the roadside, i.e. 20,340 µg/Nm3 in the morning and 18,340 µg/Nm3 in the afternoon, in fair agreement with in situ measurements. Using GIS, the CO spatial distribution can be visualized to delineate the affected area.
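
    For reference, the sketch below evaluates a standard infinite crosswind Gaussian line-source formula with the morning emission rate reported above; the wind speed, source height and vertical dispersion coefficients are assumed values, so the numbers are indicative only.

      import math

      def line_source_conc(q_ug_per_m_s, u_m_s, sigma_z_m, z_m=0.0, h_m=0.5):
          """Ground-level concentration (ug/m^3) from an infinite crosswind line
          source at effective height h, evaluated at receptor height z."""
          coef = q_ug_per_m_s * math.sqrt(2.0 / math.pi) / (sigma_z_m * u_m_s)
          return coef * math.exp(-((z_m - h_m) ** 2) / (2.0 * sigma_z_m ** 2))

      q = 49171.7          # ug/(m s), morning emission rate reported above
      for sigma_z, label in ((2.0, "roadside"), (10.0, "~100 m downwind")):
          print(f"{label}: {line_source_conc(q, u_m_s=2.0, sigma_z_m=sigma_z):,.0f} ug/m^3")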

  2. A Composite Source Model With Fractal Subevent Size Distribution

    NASA Astrophysics Data System (ADS)

    Burjanek, J.; Zahradnik, J.

    A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g. mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function, with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model in a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model on a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. Strong-ground-motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
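
    The stated subevent budget is straightforward to sample: radii drawn from N(>R) ∝ R^-2 (a Pareto distribution) are accumulated until the summed subevent area fills the target rupture area. Placement and overlap checks are omitted here, and the minimum radius is an assumed cutoff.

      import numpy as np

      rng = np.random.default_rng(7)

      def draw_subevents(target_area, r_min=0.5):
          """Subevent radii (km) with N(>R) ~ R^-2, filling the target rupture area."""
          radii, area = [], 0.0
          while area < target_area:
              r = r_min / np.sqrt(1.0 - rng.random())  # inverse-CDF Pareto(alpha=2) sample
              radii.append(r)
              area += np.pi * r * r                    # slight overshoot accepted on last draw
          return np.array(radii)

      radii = draw_subevents(target_area=20.0 * 10.0)  # e.g. a 20 km x 10 km target event
      print(f"{radii.size} subevents, largest R = {radii.max():.1f} km, "
            f"summed area = {np.pi * (radii ** 2).sum():.1f} km^2")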

  3. Modeling the Influence of Hemispheric Transport on Trends in ...

    EPA Pesticide Factsheets

    We describe the development and application of the hemispheric version of CMAQ to examine the influence of long-range pollutant transport on trends in surface-level O3 distributions. The WRF-CMAQ model is expanded to hemispheric scales, and multi-decadal model simulations were recently performed for the period spanning 1990-2010 to examine changes in hemispheric air pollution resulting from changes in emissions over this period. Simulated trends in ozone and precursor species concentrations across the U.S. and the northern hemisphere over the past two decades are compared with those inferred from available measurements during this period. Additionally, the decoupled direct method (DDM) in CMAQ is used to estimate the sensitivity of O3 to emissions from different source regions across the northern hemisphere. The seasonal variations in source region contributions to background O3 are then estimated from these sensitivity calculations and will be discussed. A reduced-form model combining these source region sensitivities estimated from DDM with the multi-decadal simulations of O3 distributions and emissions trends is then developed to characterize the changing contributions of different source regions to background O3 levels across North America.

  4. Oil source bed distribution in upper Tertiary of Gulf Coast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dow, W.G.

    1985-02-01

    Effective oil source beds have not been reported in Miocene and younger Gulf Coast sediments and the organic matter present is invariably immature and oxidized. Crude oil composition, however, indicates origin from mature source beds containing reduced kerogen. Oil distribution suggests extensive vertical migration through fracture systems from localized sources in deeply buried, geopressured shales. A model is proposed in which oil source beds were deposited in intraslope basins that formed behind salt ridges. The combination of silled basin topography, rapid sedimentation, and enhanced oxygen-minimum zones during global warmups resulted in periodic anoxic environments and preservation of oil-generating organic matter. Anoxia was most widespread during the middle Miocene and Pliocene transgressions and rare during regressive cycles when anoxia occurred primarily in hypersaline conditions such as exist today in the Orca basin.

  5. [Applications of GIS in biomass energy source research].

    PubMed

    Su, Xian-Ming; Wang, Wu-Kui; Li, Yi-Wei; Sun, Wen-Xiang; Shi, Hai; Zhang, Da-Hong

    2010-03-01

    Biomass resources are widespread but dispersed, and their distribution is closely related to environment, climate, soil, and land use. Geographic information systems (GIS) offer spatial analysis functions and the flexibility to integrate with other application models and algorithms, making them well suited to biomass energy research. This paper summarizes research on GIS applications in biomass energy, focusing on feasibility studies of bioenergy development, assessment of biomass resource amounts and distribution, layout of biomass exploitation and utilization, evaluation of gaseous emissions from biomass burning, and biomass energy information systems. Three perspectives for future GIS applications in biomass energy research are proposed: enriching data sources, improving data processing and decision-support capabilities, and developing online applications.

  6. Identification of land use and other anthropogenic impacts on nitrogen cycling using stable isotopes and distributed hydrologic modeling

    NASA Astrophysics Data System (ADS)

    O'Connell, M. T.; Macko, S. A.

    2017-12-01

    Reactive modeling of sources and processes affecting the concentration of NO3- and NH4+ in natural and anthropogenically influenced surface water can reveal unexpected characteristics of the systems. A distributed hydrologic model, TREX, is presented that provides opportunities to study multiscale effects of nitrogen inputs, outputs, and changes. The model is adapted to run on parallel computing architecture and includes the geochemical reaction module PhreeqcRM, which enables calculation of δ15N and δ18O from biologically mediated transformation reactions in addition to mixing and equilibration. Management practices intended to attenuate nitrate in surface and subsurface waters, in particular the establishment of riparian buffer zones, are variably effective due to spatial heterogeneity of soils and preferential flow through buffers. Accounting for this heterogeneity in a fully distributed biogeochemical model allows for more efficient planning and management practices. Highly sensitive areas within a watershed can be identified based on a number of spatially variable parameters, and by varying those parameters systematically to determine conditions under which those areas are under more or less critical stress. Responses can be predicted at various scales to stimuli ranging from local changes in cropping regimes to global shifts in climate. This work presents simulations of conditions showing low antecedent nitrogen retention versus significant contribution of old nitrate. Nitrogen sources are partitioned using dual isotope ratios and temporally varying concentrations. In these two scenarios, we can evaluate the efficiency of source identification based on spatially explicit information, and model effects of increasing urban land use on N biogeochemical cycling.

  7. A method to derive vegetation distribution maps for pollen dispersion models using birch as an example

    NASA Astrophysics Data System (ADS)

    Pauling, A.; Rotach, M. W.; Gehrig, R.; Clot, B.

    2012-09-01

    Detailed knowledge of the spatial distribution of sources is a crucial prerequisite for the application of pollen dispersion models such as, for example, COSMO-ART (COnsortium for Small-scale MOdeling - Aerosols and Reactive Trace gases). However, this input is not available for the allergy-relevant species such as hazel, alder, birch, grass or ragweed. Hence, plant distribution datasets need to be derived from suitable sources. We present an approach to produce such a dataset from existing sources using birch as an example. The basic idea is to construct a birch dataset using a region with good data coverage for calibration and then to extrapolate this relationship to a larger area by using land use classes. We use the Swiss forest inventory (1 km resolution) in combination with a 74-category land use dataset that covers the non-forested areas of Switzerland as well (resolution 100 m). Then we assign birch density categories of 0%, 0.1%, 0.5% and 2.5% to each of the 74 land use categories. The combination of this derived dataset with the birch distribution from the forest inventory yields a fairly accurate birch distribution encompassing the whole of Switzerland. The land use categories of the Global Land Cover 2000 (GLC2000; Global Land Cover 2000 database, 2003, European Commission, Joint Research Centre; resolution 1 km) are then calibrated with the Swiss dataset in order to derive a Europe-wide birch distribution dataset, which is aggregated onto the 7 km COSMO-ART grid. This procedure thus assumes that a certain GLC2000 land use category has the same birch density wherever it may occur in Europe. In order to reduce the strict application of this crucial assumption, the birch density distribution as obtained from the previous steps is weighted using the mean Seasonal Pollen Index (SPI; yearly sums of daily pollen concentrations). For future improvement, region-specific birch densities for the GLC2000 categories could be integrated into the mapping procedure.

  8. The distribution of meteoric 36Cl/Cl in the United States: A comparison of models

    USGS Publications Warehouse

    Moysey, S.; Davis, S.N.; Zreda, M.; Cecil, L.D.

    2003-01-01

    The natural distribution of 36Cl/Cl in groundwater across the continental United States has recently been reported by Davis et al. (2003). In this paper, the large-scale processes and atmospheric sources of 36Cl and chloride responsible for controlling the observed 36Cl/Cl distribution are discussed. The dominant process that affects 36Cl/Cl in meteoric groundwater at the continental scale is the fallout of stable chloride from the atmosphere, which is mainly derived from oceanic sources. Atmospheric circulation transports marine chloride to the continental interior, where distance from the coast, topography, and wind patterns define the chloride distribution. The only major deviation from this pattern is observed in northern Utah and southern Idaho where it is inferred that a continental source of chloride exists in the Bonneville Salt Flats, Utah. In contrast to previous studies, the atmospheric flux of 36Cl to the land surface was found to be approximately constant over the United States, without a strong correlation between local 36Cl fallout and annual precipitation. However, the correlation between these variables was significantly improved (R^2 = 0.15 to R^2 = 0.55) when data from the southeastern USA, which presumably have lower than average atmospheric 36Cl concentrations, were excluded. The total mean flux of 36Cl over the continental United States and the total global mean flux of 36Cl are calculated to be 30.5 ± 7.0 and 19.6 ± 4.5 atoms m^-2 s^-1, respectively. The 36Cl/Cl distribution calculated by Bentley et al. (1996) underestimates the magnitude and variability observed for the measured 36Cl/Cl distribution across the continental United States. The model proposed by Hainsworth (1994) provides the best overall fit to the observed 36Cl/Cl distribution in this study. A process-oriented model by Phillips (2000) generally overestimates 36Cl/Cl in most parts of the country and has several significant local departures from the empirical data.

  9. Modeling Soak-Time Distribution of Trips for Mobile Source Emissions Forecasting: Techniques and Applications

    DOT National Transportation Integrated Search

    2000-08-01

    The soak-time of vehicle trip starts is defined as the duration of time in which the vehicle's engine is not operating and that precedes a successful vehicle start. The temporal distribution of the soak-time in an area is an important determinant of ...

  10. Generalized Success-Breeds-Success Principle Leading to Time-Dependent Informetric Distributions.

    ERIC Educational Resources Information Center

    Egghe, Leo; Rousseau, Ronald

    1995-01-01

    Reformulates the success-breeds-success (SBS) principle in informetrics in order to generate a general theory of source-item relationships. Topics include a time-dependent probability, a new model for the expected probability that is compared with the SBS principle with exact combinatorial calculations, classical frequency distributions, and…

  11. Distributed Leadership and Organizational Change: Implementation of a Teaching Performance Measure

    ERIC Educational Resources Information Center

    Sloan, Tine

    2013-01-01

    This article explores leadership practice and change as evidenced in multiple data sources gathered during a self-study implementation of a teaching performance assessment. It offers promising models of distributed leadership and organizational change that can inform future program implementers and the field in general. Our experiences suggest…

  12. Modeling Atmospheric CO2 Processes to Constrain the Missing Sink

    NASA Technical Reports Server (NTRS)

    Kawa, S. R.; Denning, A. S.; Erickson, D. J.; Collatz, J. C.; Pawson, S.

    2005-01-01

    We report on a NASA supported modeling effort to reduce uncertainty in carbon cycle processes that create the so-called missing sink of atmospheric CO2. Our overall objective is to improve characterization of CO2 source/sink processes globally with improved formulations for atmospheric transport, terrestrial uptake and release, biomass and fossil fuel burning, and observational data analysis. The motivation for this study follows from the perspective that progress in determining CO2 sources and sinks beyond the current state of the art will rely on utilization of more extensive and intensive CO2 and related observations including those from satellite remote sensing. The major components of this effort are: 1) continued development of the chemistry and transport model using analyzed meteorological fields from the Goddard Global Modeling and Assimilation Office, with comparison to real time data in both forward and inverse modes; 2) an advanced biosphere model, constrained by remote sensing data, coupled to the global transport model to produce distributions of CO2 fluxes and concentrations that are consistent with actual meteorological variability; 3) improved remote sensing estimates for biomass burning emission fluxes to better characterize interannual variability in the atmospheric CO2 budget and to better constrain the land use change source; 4) evaluating the impact of temporally resolved fossil fuel emission distributions on atmospheric CO2 gradients and variability; and 5) testing the impact of existing and planned remote sensing data sources (e.g., AIRS, MODIS, OCO) on inference of CO2 sources and sinks, and using the model to help establish measurement requirements for future remote sensing instruments. The results will help to prepare for the use of OCO and other satellite data in a multi-disciplinary carbon data assimilation system for analysis and prediction of carbon cycle changes and carbon-climate interactions.

  13. Variational Bayesian Learning for Wavelet Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Roussos, E.; Roberts, S.; Daubechies, I.

    2005-11-01

    In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a "blind" source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.
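
    The core idea, ICA applied to wavelet coefficients where the sources are sparse, can be sketched with off-the-shelf tools; this uses plain FastICA rather than the paper's variational Bayesian inference, and all signals are synthetic:

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1024)
    S = np.c_[np.sign(np.sin(14 * np.pi * t)),        # toy source 1
              rng.laplace(size=t.size)]               # toy sparse source 2
    X = S @ np.array([[1.0, 0.4], [0.6, 1.0]])        # observed mixtures

    # Move each mixture to the wavelet domain, where sparsity is natural.
    W = np.array([np.concatenate(pywt.wavedec(x, 'db4', level=4))
                  for x in X.T]).T                    # (n_coeffs, n_mixtures)

    # Unmix the coefficients (stand-in for the variational Bayesian step).
    S_wav = FastICA(n_components=2, random_state=0).fit_transform(W)
    ```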

  14. Fully probabilistic earthquake source inversion on teleseismic scales

    NASA Astrophysics Data System (ADS)

    Stähler, Simon; Sigloch, Karin

    2017-04-01

    Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.
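
    A sketch of the misfit and noise model described above, assuming zero-lag correlation and log-normal parameters supplied from the empirical fits (the fitted values are not reproduced here):

    ```python
    import numpy as np

    def decorrelation(obs, syn):
        """D = 1 - CC, the misfit used instead of sample-wise Lp norms."""
        cc = np.corrcoef(obs, syn)[0, 1]   # zero-lag normalized correlation
        return 1.0 - cc

    def log_likelihood(d, mu, sigma):
        """Log-normal density for the decorrelation D; mu and sigma would
        come from the empirical noise fits described in the abstract."""
        return (-np.log(d * sigma * np.sqrt(2.0 * np.pi))
                - (np.log(d) - mu) ** 2 / (2.0 * sigma ** 2))
    ```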

  15. Development of High-Resolution Dynamic Dust Source Function - A Case Study with a Strong Dust Storm in a Regional Model

    NASA Technical Reports Server (NTRS)

    Kim, Dongchul; Chin, Mian; Kemp, Eric M.; Tao, Zhining; Peters-Lidard, Christa D.; Ginoux, Paul

    2017-01-01

    A high-resolution dynamic dust source has been developed in the NASA Unified-Weather Research and Forecasting (NU-WRF) model to improve the existing coarse static dust source. In the new dust source map, topographic depression is at 1-km resolution and surface bareness is derived using Normalized Difference Vegetation Index (NDVI) data from the Moderate Resolution Imaging Spectroradiometer (MODIS). The new dust source better resolves the complex topographic distribution over the Western United States, where its magnitude is higher than the existing, coarser resolution static source. A case study is conducted with an extreme dust storm that occurred in Phoenix, Arizona at 02-03 UTC on July 6, 2011. The NU-WRF model with the new high-resolution dynamic dust source is able to successfully capture the dust storm, which was not achieved with the old source identification. However, the case study also reveals several challenges in reproducing the time evolution of short-lived, extreme dust storm events.
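
    A plausible reading of such a source function, sketched under the assumption of a Ginoux-style topographic depression term modulated by an NDVI bareness mask (the exponent and threshold are illustrative assumptions):

    ```python
    import numpy as np

    def dynamic_dust_source(elevation, ndvi, ndvi_bare=0.15):
        """Erodible-fraction sketch: Ginoux-style topographic depression
        at 1-km resolution, masked by MODIS-NDVI-derived bareness."""
        zmax, zmin = elevation.max(), elevation.min()
        depression = ((zmax - elevation) / max(zmax - zmin, 1e-9)) ** 5
        bareness = np.where(ndvi < ndvi_bare, 1.0, 0.0)  # bare where NDVI low
        return depression * bareness
    ```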

  16. Studies of the Gas Tori of Titan and Triton

    NASA Technical Reports Server (NTRS)

    Smyth, William H.; Marconi, M. L.

    1997-01-01

    A model for the spatial distribution of hydrogen in the Saturn system including a Titan source, an interior source for the rings and inner icy satellites, and a Saturn source has been applied to the best available Voyager 1 and 2 UVS Lyman-alpha observations presented by Shemansky and Hall. Although the model-data comparison is limited by the quality of the observational data, source rates for a Titan source of 3.3-4.8 x 10^27 H atoms/s and, for the first time, source rates larger by about a factor of four for the interior source of 1.4-1.9 x 10^27 H atoms/s were determined. Outside the immediate location of the planet, the Saturn source is only a minor contribution of hydrogen. A paper describing this research in more detail has been submitted to The Astrophysical Journal for publication and is included in the Appendix. Limited progress in the development of a model for the collisional gas tori of Triton is also discussed.

  17. Development of High-Resolution Dynamic Dust Source Function -A Case Study with a Strong Dust Storm in a Regional Model

    PubMed Central

    Kim, Dongchul; Chin, Mian; Kemp, Eric M.; Tao, Zhining; Peters-Lidard, Christa D.; Ginoux, Paul

    2018-01-01

    A high-resolution dynamic dust source has been developed in the NASA Unified-Weather Research and Forecasting (NU-WRF) model to improve the existing coarse static dust source. In the new dust source map, topographic depression is at 1-km resolution and surface bareness is derived using Normalized Difference Vegetation Index (NDVI) data from the Moderate Resolution Imaging Spectroradiometer (MODIS). The new dust source better resolves the complex topographic distribution over the Western United States, where its magnitude is higher than the existing, coarser resolution static source. A case study is conducted with an extreme dust storm that occurred in Phoenix, Arizona at 02-03 UTC on July 6, 2011. The NU-WRF model with the new high-resolution dynamic dust source is able to successfully capture the dust storm, which was not achieved with the old source identification. However, the case study also reveals several challenges in reproducing the time evolution of short-lived, extreme dust storm events. PMID:29632432

  18. Development of High-Resolution Dynamic Dust Source Function -A Case Study with a Strong Dust Storm in a Regional Model.

    PubMed

    Kim, Dongchul; Chin, Mian; Kemp, Eric M; Tao, Zhining; Peters-Lidard, Christa D; Ginoux, Paul

    2017-06-01

    A high-resolution dynamic dust source has been developed in the NASA Unified-Weather Research and Forecasting (NU-WRF) model to improve the existing coarse static dust source. In the new dust source map, topographic depression is at 1-km resolution and surface bareness is derived using Normalized Difference Vegetation Index (NDVI) data from the Moderate Resolution Imaging Spectroradiometer (MODIS). The new dust source better resolves the complex topographic distribution over the Western United States, where its magnitude is higher than the existing, coarser resolution static source. A case study is conducted with an extreme dust storm that occurred in Phoenix, Arizona at 02-03 UTC on July 6, 2011. The NU-WRF model with the new high-resolution dynamic dust source is able to successfully capture the dust storm, which was not achieved with the old source identification. However, the case study also reveals several challenges in reproducing the time evolution of short-lived, extreme dust storm events.

  19. Understanding EROS2 observations toward the spiral arms within a classical Galactic model framework

    NASA Astrophysics Data System (ADS)

    Moniez, M.; Sajadian, S.; Karami, M.; Rahvar, S.; Ansari, R.

    2017-08-01

    Aims: EROS (Expérience de Recherche d'Objets Sombres) has searched for microlensing toward four directions in the Galactic plane away from the Galactic center. The interpretation of the catalog optical depth is complicated by the spread of the source distance distribution. We compare the EROS microlensing observations with Galactic models (including the Besançon model), tuned to fit the EROS source catalogs, and take into account all observational data such as the microlensing optical depth, the Einstein crossing durations, and the color and magnitude distributions of the catalogued stars. Methods: We simulated EROS-like source catalogs using the HIgh-Precision PARallax COllecting Satellite (Hipparcos) database, the Galactic mass distribution, and an interstellar extinction table. Taking into account the EROS star detection efficiency, we were able to produce simulated color-magnitude diagrams that fit the observed diagrams. This allows us to estimate average microlensing optical depths and event durations that are directly comparable with the measured values. Results: Both the Besançon model and our Galactic model allow us to fully understand the EROS color-magnitude data. The average optical depths and mean event durations calculated from these models are in reasonable agreement with the observations. Varying the Galactic structure parameters through simulation, we were also able to deduce constraints on the kinematics of the disk, the disk stellar mass function (at a few kpc distance from the Sun), and the maximum contribution of a thick disk of compact objects in the Galactic plane (Mthick < 5-7 × 10^10 M⊙ at 95%, depending on the model). We also show that the microlensing data toward one of our monitored directions are significantly sensitive to the Galactic bar parameters, although much larger statistics are needed to provide competitive constraints. Conclusions: Our simulation gives a better understanding of the lens and source spatial distributions in the microlensing events. The goodness of a global fit taking into account all the observables (from the color-magnitude diagrams and microlensing observations) shows the validity of the Galactic models. Our tests with the parameter excursions show the unique sensitivity of the microlensing data to the kinematical parameters and stellar initial mass function. http://www.lal.in2p3.fr/recherche/eros

  20. Models for Deploying Open Source and Commercial Software to Support Earth Science Data Processing and Distribution

    NASA Astrophysics Data System (ADS)

    Yetman, G.; Downs, R. R.

    2011-12-01

    Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and address internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. Vendor-supported software may also have the additional benefits of guaranteed up-time, bug fixes, and vendor-added enhancements. Deploying commercial software can be advantageous for obtaining system or software components offered by a vendor that meet in-house requirements. The vendor can be contracted to provide installation, support and maintenance services as needed. Combining these options offers a menu of choices, enabling selection of system components or software modules that meet the evolving requirements encountered throughout the scientific data lifecycle.

  1. Model uncertainties do not affect observed patterns of species richness in the Amazon.

    PubMed

    Sales, Lilian Patrícia; Neves, Olívia Viana; De Marco, Paulo; Loyola, Rafael

    2017-01-01

    Climate change is arguably a major threat to biodiversity conservation, and there are several methods to assess its impacts on species' potential distributions. Yet the extent to which different approaches to species distribution modeling affect species richness patterns at the biogeographical scale remains unaddressed in the literature. In this paper, we verified whether the expected responses to climate change at the biogeographical scale (patterns of species richness and species vulnerability to climate change) are affected by the inputs used to model and project species distributions. We modeled the distribution of 288 vertebrate species (amphibians, birds and mammals), all endemic to the Amazon basin, using different combinations of the following inputs known to affect the outcome of species distribution models (SDMs): 1) biological data type, 2) modeling methods, 3) greenhouse gas emission scenarios and 4) climate forecasts. We calculated uncertainty with a hierarchical ANOVA in which those different inputs were considered factors. The greatest source of variation was the modeling method. Model performance interacted with data type and modeling method. Absolute values of variation in suitable climate area were not equal among predictions, but some biological patterns were still consistent. All models predicted losses in the area that is climatically suitable for species, especially for amphibians and primates. All models also indicated a current east-west gradient in endemic species richness, from the foot of the Andes downstream along the Amazon River. Again, all models predicted future movements of species upward into the Andes Mountains and overall species richness losses. From a methodological perspective, our work highlights that SDMs are a useful tool for assessing impacts of climate change on biodiversity. Uncertainty exists, but biological patterns are still evident at large spatial scales. As modeling methods are the greatest source of variation, choosing the appropriate statistics according to the study objective is also essential for estimating the impacts of climate change on species distribution. Yet from a conservation perspective, we show that the Amazon endemic fauna is potentially vulnerable to climate change, due to expected reductions in suitable climate area. Climate-driven faunal movements are predicted towards the Andes Mountains, which might work as climate refugia for migrating species.

  2. Model uncertainties do not affect observed patterns of species richness in the Amazon

    PubMed Central

    Sales, Lilian Patrícia; Neves, Olívia Viana; De Marco, Paulo

    2017-01-01

    Background Climate change is arguably a major threat to biodiversity conservation, and there are several methods to assess its impacts on species' potential distributions. Yet the extent to which different approaches to species distribution modeling affect species richness patterns at the biogeographical scale remains unaddressed in the literature. In this paper, we verified whether the expected responses to climate change at the biogeographical scale (patterns of species richness and species vulnerability to climate change) are affected by the inputs used to model and project species distributions. Methods We modeled the distribution of 288 vertebrate species (amphibians, birds and mammals), all endemic to the Amazon basin, using different combinations of the following inputs known to affect the outcome of species distribution models (SDMs): 1) biological data type, 2) modeling methods, 3) greenhouse gas emission scenarios and 4) climate forecasts. We calculated uncertainty with a hierarchical ANOVA in which those different inputs were considered factors. Results The greatest source of variation was the modeling method. Model performance interacted with data type and modeling method. Absolute values of variation in suitable climate area were not equal among predictions, but some biological patterns were still consistent. All models predicted losses in the area that is climatically suitable for species, especially for amphibians and primates. All models also indicated a current east-west gradient in endemic species richness, from the foot of the Andes downstream along the Amazon River. Again, all models predicted future movements of species upward into the Andes Mountains and overall species richness losses. Conclusions From a methodological perspective, our work highlights that SDMs are a useful tool for assessing impacts of climate change on biodiversity. Uncertainty exists, but biological patterns are still evident at large spatial scales. As modeling methods are the greatest source of variation, choosing the appropriate statistics according to the study objective is also essential for estimating the impacts of climate change on species distribution. Yet from a conservation perspective, we show that the Amazon endemic fauna is potentially vulnerable to climate change, due to expected reductions in suitable climate area. Climate-driven faunal movements are predicted towards the Andes Mountains, which might work as climate refugia for migrating species. PMID:29023503

  3. Analysis and Application of Microgrids

    NASA Astrophysics Data System (ADS)

    Yue, Lu

    New trends toward generating electricity locally and utilizing non-conventional or renewable energy sources have attracted increasing interest due to the gradual depletion of conventional fossil fuel energy sources. This new type of power generation is called Distributed Generation (DG), and the energy sources utilized by Distributed Generation are termed Distributed Energy Sources (DERs). With DGs embedded in them, distribution networks evolve from passive networks into active networks that enable bidirectional power flows. By further incorporating flexible and intelligent controllers and employing future technologies, an active distribution network becomes a Microgrid. A Microgrid is a small-scale, low voltage Combined Heat and Power (CHP) supply network designed to supply electrical and heat loads for a small community. To implement Microgrids further, a sophisticated Microgrid Management System must be integrated. However, because a Microgrid integrates multiple DERs and is likely to be deregulated, the ability to perform real-time OPF and economic dispatch over a fast, advanced communication network is necessary. In this thesis, first, problems such as power system modelling, power flow solving and power system optimization are studied. Then, Distributed Generation and Microgrids are studied and reviewed, including a comprehensive review of current distributed generation technologies and Microgrid Management Systems. Finally, a computer-based AC optimization method which minimizes the total transmission loss and generation cost of a Microgrid is proposed, along with a wireless communication scheme based on synchronized Code Division Multiple Access (sCDMA). The algorithm is tested with a 6-bus power system and a 9-bus power system.
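
    The thesis's AC optimization is not reproduced here, but the flavor of economic dispatch can be shown with a loss-free linear program; the costs, load and unit limits below are invented:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    cost = np.array([30.0, 45.0, 80.0])          # $/MWh for three DERs (assumed)
    load = 5.0                                    # MW demand to be met
    bounds = [(0, 3.0), (0, 2.5), (0, 2.0)]       # unit output limits (MW)

    # Minimize total cost subject to generation meeting load exactly.
    res = linprog(cost, A_eq=[[1.0, 1.0, 1.0]], b_eq=[load], bounds=bounds)
    print(res.x)                                  # dispatch per DER
    ```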

  4. Survey of Large Methane Emitters in North America

    NASA Astrophysics Data System (ADS)

    Deiker, S.

    2017-12-01

    It has been theorized that methane emissions in the oil and gas industry follow log-normal or "fat tail" distributions, with large numbers of small sources for every very large source. Such distributions would have significant policy and operational implications. Unfortunately, by their very nature such distributions require large sample sizes to verify. Until recently, such large-scale studies would have been prohibitively expensive. The largest public study to date sampled 450 wells, an order of magnitude too few to effectively constrain these models. During 2016 and 2017, Kairos Aerospace conducted a series of surveys using the LeakSurveyor imaging spectrometer, mounted on light aircraft. This small, lightweight instrument was designed to rapidly locate large emission sources. The resulting survey covers over three million acres of oil and gas production, including over 100,000 wells, thousands of storage tanks and over 7,500 miles of gathering lines. This data set allows us to probe the distribution of large methane emitters. Results of this survey, and implications for the methane emission distribution, methane policy and LDAR, will be discussed.
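
    The policy relevance of a fat-tailed emitter population is easy to see numerically; this synthetic log-normal population (parameters invented) shows a small fraction of sources dominating the total:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical emitter population with log-normal rates (kg CH4/h).
    rates = np.sort(rng.lognormal(mean=0.0, sigma=2.0, size=100_000))
    top5_share = rates[-5_000:].sum() / rates.sum()
    print(f"Top 5% of sources emit {100 * top5_share:.0f}% of total emissions")
    ```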

  5. Obtaining source current density related to irregularly structured electromagnetic target field inside human body using hybrid inverse/FDTD method.

    PubMed

    Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang

    2017-01-01

    The inverse method is inherently suitable for calculating the distribution of source current density related to an irregularly structured electromagnetic target field. However, the present form of the inverse method cannot handle complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method that can account for complex field-tissue interactions in the inverse design of the source current density related to an irregularly structured electromagnetic target field is proposed. A Huygens' equivalent surface is established as a bridge to combine the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method by considering the complex field-tissue interactions within the human body model. The magnetic field obtained on the Huygens' equivalent surface is then regarded as the new target. The current density on the designated source surface is derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.
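
    The inverse step amounts to solving an ill-posed linear system for the source current density; a minimal sketch, assuming the transfer matrix G (mapping source currents to the field on the Huygens' surface) has already been computed, e.g. by FDTD:

    ```python
    import numpy as np

    def invert_currents(G, b_target, alpha=1e-3):
        """Tikhonov-regularized solution of G @ j = b_target, where j is the
        source current density and b_target the field on the Huygens' surface.
        G and alpha are assumed inputs; regularization tames ill-posedness."""
        n = G.shape[1]
        lhs = G.conj().T @ G + alpha * np.eye(n)
        rhs = G.conj().T @ b_target
        return np.linalg.solve(lhs, rhs)
    ```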

  6. Assessing the pollution risk of a groundwater source field at western Laizhou Bay under seawater intrusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Xiankui; Wu, Jichun; Wang, Dong, E-mail: wangdong@nju.edu.cn

    Coastal areas have great significance for human habitation, the economy and social development across the world. With rapidly increasing pressures from human activities and climate change, the safety of groundwater resources is threatened by seawater intrusion in coastal areas. The Laizhou Bay area is one of the most seriously seawater-intruded areas in China; seawater intrusion was first recognized there in the mid-1970s. This study assessed the pollution risk of a groundwater source field in the western Laizhou Bay area by inferring the probability distribution of groundwater Cl- concentration. The numerical model of the seawater intrusion process is built using SEAWAT4. The parameter uncertainty of this model is evaluated by Markov Chain Monte Carlo (MCMC) simulation, with DREAM(ZS) used as the sampling algorithm. The predictive distribution of Cl- concentration at the groundwater source field is then inferred using the samples of model parameters obtained from MCMC. After that, the pollution risk of the groundwater source field is assessed from the predictive quantiles of Cl- concentration. The results of model calibration and verification demonstrate that the DREAM(ZS)-based MCMC is efficient and reliable for estimating model parameters under the current observations. At the 95% confidence level, the groundwater source point will not be polluted by seawater intrusion in the next five years (2015-2019). In addition, the 2.5% and 97.5% predictive quantiles show that the Cl- concentration of the groundwater source field always varies between 175 mg/l and 200 mg/l. Highlights: the parameter uncertainty of the seawater intrusion model is evaluated by MCMC; the groundwater source field will not be polluted by seawater intrusion in the next five years; the pollution risk is assessed from the predictive quantiles of Cl- concentration.
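
    Once posterior parameter draws exist, the risk assessment reduces to predictive quantiles over the simulated concentrations; a sketch with synthetic draws standing in for the SEAWAT4/DREAM(ZS) output:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # (n_posterior_draws, n_years) simulated Cl- concentrations; synthetic
    # stand-in for concentrations simulated from MCMC parameter samples.
    cl_pred = rng.normal(loc=187.5, scale=6.0, size=(5000, 5))

    lo, hi = np.percentile(cl_pred, [2.5, 97.5], axis=0)  # predictive band
    p_exceed = (cl_pred > 250.0).mean(axis=0)  # 250 mg/l: common Cl- guideline
    ```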

  7. A shoreline fumigation model with wind shear

    NASA Astrophysics Data System (ADS)

    Zhibian, Li; Zengquan, Yao

    A fumigation model has been developed for a plume discharged from an elevated stack in a shoreline environment by introducing different wind directions above and within the thermal internal boundary layer (TIBL) into a dispersion model. When a continuous point source release occurs above the TIBL, pollutants disperse in the stable marine flow until the plume intersects the TIBL surface. The fumigation in the TIBL is interpreted as occurring from an area source on the imaginary surface of the TIBL. It is assumed that the wind direction varies with height above and below L(x) = Ax^2, the height of the TIBL at distance x. The change of wind direction above and within the TIBL causes the pollutants to change their direction of transport and leads to a curved ground-level concentration (glc) axis, a decreasing glc along the centreline of the fumigation, and a widening pollutant distribution in the transverse direction. Concentration distributions predicted with the wind shear model are compared with observations from an SF6 tracer experiment near Hangzhou Bay in May-June 1987. The comparison and an evaluation of the model performance show that the new model is not only more theoretically acceptable than those based on empirical coefficients but also provides concentration distributions that agree well with the SF6 tracer experiments.
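
    A sketch of the geometric core of the model, taking the TIBL height law exactly as stated in the abstract (the coefficient A is site-specific and assumed):

    ```python
    def tibl_height(x, A):
        """TIBL height as parameterized in the abstract: L(x) = A * x**2."""
        return A * x ** 2

    def fumigation_onset(stack_height, A, dx=1.0, x_max=1e5):
        """Downwind distance where the growing TIBL first reaches the stable
        plume height, i.e. where fumigation to the ground begins."""
        x = dx
        while x < x_max:
            if tibl_height(x, A) >= stack_height:
                return x
            x += dx
        return None
    ```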

  8. High-frequency predictions for number counts and spectral properties of extragalactic radio sources. New evidence of a break at mm wavelengths in spectra of bright blazar sources

    NASA Astrophysics Data System (ADS)

    Tucci, M.; Toffolatti, L.; de Zotti, G.; Martínez-González, E.

    2011-09-01

    We present models to predict high-frequency counts of extragalactic radio sources using physically grounded recipes to describe the complex spectral behaviour of blazars that dominate the mm-wave counts at bright flux densities. We show that simple power-law spectra are ruled out by high-frequency (ν ≥ 100 GHz) data. These data also strongly constrain models featuring the spectral breaks predicted by classical physical models for the synchrotron emission produced in jets of blazars. A model dealing with blazars as a single population is, at best, only marginally consistent with data coming from current surveys at high radio frequencies. Our most successful model assumes different distributions of break frequencies, νM, for BL Lacs and flat-spectrum radio quasars (FSRQs). The former objects have substantially higher values of νM, implying that the synchrotron emission comes from more compact regions; therefore, a substantial increase of the BL Lac fraction at high radio frequencies and at bright flux densities is predicted. Remarkably, our best model is able to give a very good fit to all the observed data on number counts and on distributions of spectral indices of extragalactic radio sources at frequencies above 5 and up to 220 GHz. Predictions for the forthcoming sub-mm blazar counts from Planck, at the highest HFI frequencies, and from Herschel surveys are also presented. Appendices are available in electronic form at http://www.aanda.org

  9. A simple 2-d thermal model for GMA welds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matteson, M.A.; Franke, G.L.; Vassilaros, M.G.

    1996-12-31

    The Rosenthal model of heat distribution from a moving source has been used in many applications to predict the temperature distribution during welding. The equation has performed well in its original form or as modified. The expression has a significant limitation for application to gas metal arc welds (GMAW) that have a papilla extending from the root of the weld bead. The shape of the fusion line between the papilla and the plate surface is concave rather than the expected convex shape. However, at some distance from the fusion line the heat affected zone (HAZ) made visible by etching has the expected convex shape predicted by the Rosenthal expression. This anomaly limits the use of the Rosenthal expression for predicting GMAW bead shapes or HAZ temperature histories. Current research at the Naval Surface Warfare Center--Carderock Division (NSWC--CD) to develop a computer-based model to predict the microstructure of multi-pass GMAW requires a simple expression to predict the fusion line and temperature history of the HAZ for each weld pass. The solution employed for the NSWC--CD research is a modified Rosenthal expression with a dual heat source. One heat source is a disk source above the plate surface supplying the majority of the heat. The second heat source is smaller and below the surface of the plate; it helps simulate the penetration power of many GMAW welds that produces the papilla. The assumptions, strengths and limitations of the model are presented along with some applications.
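
    A sketch of the dual-source idea using two classical Rosenthal point-source terms (the paper uses a disk source above the plate; the point sources, the 80/20 split and the 2 mm depth here are simplifying assumptions):

    ```python
    import numpy as np

    def rosenthal_dT(x, y, z, q, v, k=41.0, alpha=9e-6):
        """Excess temperature of a moving point source on a thick plate:
        dT = q / (2*pi*k*R) * exp(-v*(R + x) / (2*alpha)); steel properties
        (k in W/m/K, alpha in m^2/s) are assumed."""
        R = np.sqrt(x**2 + y**2 + z**2)
        return q / (2.0 * np.pi * k * R) * np.exp(-v * (R + x) / (2.0 * alpha))

    def dual_source_T(x, y, z, q_total, v, T0=298.0, split=0.8, depth=2e-3):
        """Surface-dominant source plus a small buried source that mimics
        the papilla-forming penetration of GMAW."""
        return (T0 + rosenthal_dT(x, y, z, split * q_total, v)
                   + rosenthal_dT(x, y, z - depth, (1.0 - split) * q_total, v))
    ```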

  10. Evidence for a scale-limited low-frequency earthquake source process

    NASA Astrophysics Data System (ADS)

    Chestler, S. R.; Creager, K. C.

    2017-04-01

    We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10^10 to 1.9 × 10^12 N m (Mw = 0.7-2.1). While regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit increase in Mw), we find that the b value is 6 for large LFEs and <1 for small LFEs. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10^11 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) the slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and the geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.
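
    For an exponential moment-frequency distribution the characteristic moment is simply the mean moment, so the estimate is one line; the catalog below is synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Synthetic LFE catalog drawn from an exponential moment distribution.
    moments = rng.exponential(scale=2.0e11, size=34_264)   # N m

    m_char = moments.mean()   # MLE of the characteristic moment (~2.0e11 N m)
    ```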

  11. Evaluation of probabilistic forecasts with the scoringRules package

    NASA Astrophysics Data System (ADS)

    Jordan, Alexander; Krüger, Fabian; Lerch, Sebastian

    2017-04-01

    Over the last decades, probabilistic forecasts in the form of predictive distributions have become popular in many scientific disciplines. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way, in order to better understand sources of prediction errors and to improve the models. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. In coherence with decision-theoretical principles, they allow the comparison of alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that is available in many situations. This contribution presents the software package scoringRules for the statistical programming language R, which provides functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. For univariate variables, two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically but are indirectly described through a sample of simulation draws; ensemble weather forecasts, for example, take this form. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices. Recent developments include the addition of scoring rules to evaluate multivariate forecast distributions. The use of the scoringRules package is illustrated in an example on post-processing ensemble forecasts of temperature.
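
    scoringRules itself is an R package; as a flavor of the closed-form formulas it implements, here is the standard analytic CRPS for a Gaussian forecast, ported to Python for illustration:

    ```python
    import numpy as np
    from scipy.stats import norm

    def crps_normal(mu, sigma, y):
        """CRPS of the forecast N(mu, sigma^2) against observation y
        (Gneiting-Raftery closed form)."""
        z = (y - mu) / sigma
        return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                        + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))
    ```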

  12. [Source apportionment of soil heavy metals in Jiapigou goldmine based on the UNMIX model].

    PubMed

    Ai, Jian-chao; Wang, Ning; Yang, Jing

    2014-09-01

    The paper determines the concentrations of 16 metal elements in soil samples collected at the Jiapigou goldmine on the upper Songhua River. The UNMIX model, recommended by the US EPA for source apportionment, was applied in this study, and Cd, Hg, Pb and Ag concentration contour maps were generated using the Kriging interpolation method to verify the results. The main conclusions are: (1) the concentrations of Cd, Hg, Pb and Ag exceed the Jilin Province soil background values and are markedly enriched in the soil samples; (2) the UNMIX model resolved four pollution sources: source 1 represents human activities including transportation, ore mining and garbage, with a contribution of 39.1%; source 2 represents the weathering of rocks and biological effects, with a contribution of 13.87%; source 3 is a composite source of soil parent material and chemical fertilizer, with a contribution of 23.93%; source 4 represents iron ore mining and transportation, with a contribution of 22.89%; (3) the UNMIX model results are in accordance with the survey of local land-use types, human activities, and the Cd, Hg and Pb content distributions.
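
    UNMIX itself is an EPA receptor model based on PCA and edge-finding; as a rough non-negative stand-in, a factorization of the concentration matrix into source profiles and contributions can be sketched with NMF (all data synthetic):

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(4)
    profiles = rng.dirichlet(np.ones(16), size=4)          # 4 sources x 16 metals
    contribs = rng.gamma(2.0, 1.0, size=(100, 4))          # sample contributions
    C = contribs @ profiles + rng.normal(0, 0.01, (100, 16)).clip(0)

    nmf = NMF(n_components=4, init='nndsvda', max_iter=1000, random_state=0)
    W = nmf.fit_transform(C)                               # estimated contributions
    share = W.sum(axis=0) / W.sum()                        # per-source share
    ```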

  13. Off-Grid Direction of Arrival Estimation Based on Joint Spatial Sparsity for Distributed Sparse Linear Arrays

    PubMed Central

    Liang, Yujie; Ying, Rendong; Lu, Zhenqi; Liu, Peilin

    2014-01-01

    In the design phase of sensor arrays for array signal processing, the estimation performance and system cost are largely determined by array aperture size. In this article, we address the problem of joint direction-of-arrival (DOA) estimation with distributed sparse linear arrays (SLAs) and propose an off-grid synchronous approach based on distributed compressed sensing to obtain a larger array aperture. We focus on the complex source distributions found in practical applications and classify the sources into common and innovation parts according to whether a signal of a source impinges on all the SLAs or on a specific one. For each SLA, we construct a corresponding virtual uniform linear array (ULA) to create a random linear mapping between the signals observed by these two arrays. The signal ensembles including the common/innovation sources for different SLAs are abstracted as a joint spatial sparsity model, and we use minimization of the concatenated atomic norm via semidefinite programming to solve the joint DOA estimation problem. Joint calculation of the signals observed by all the SLAs exploits the redundancy caused by the common sources and decreases the required array size. The numerical results illustrate the advantages of the proposed approach. PMID:25420150

  14. Detecting Shielded Special Nuclear Materials Using Multi-Dimensional Neutron Source and Detector Geometries

    NASA Astrophysics Data System (ADS)

    Santarius, John; Navarro, Marcos; Michalak, Matthew; Fancher, Aaron; Kulcinski, Gerald; Bonomo, Richard

    2016-10-01

    A newly initiated research project will be described that investigates methods for detecting shielded special nuclear materials by combining multi-dimensional neutron sources, forward/adjoint calculations modeling neutron and gamma transport, and sparse data analysis of detector signals. The key tasks for this project are: (1) developing a radiation transport capability for use in optimizing adaptive-geometry, inertial-electrostatic confinement (IEC) neutron source/detector configurations for neutron pulses distributed in space and/or phased in time; (2) creating distributed-geometry, gas-target, IEC fusion neutron sources; (3) applying sparse data and noise reduction algorithms, such as principal component analysis (PCA) and wavelet transform analysis, to enhance detection fidelity; and (4) educating graduate and undergraduate students. Funded by DHS DNDO Project 2015-DN-077-ARI095.

  15. Sky distribution of artificial sources in the galactic belt of advanced cosmic life.

    PubMed

    Heidmann, J

    1994-12-01

    In line with the concept of the galactic belt of advanced life, we evaluate the sky distribution of detectable artificial sources using a simple astrophysical model. The best region to search is the median band of the Milky Way in the Vulpecula-Cygnus region, together with a narrower one in Carina. Although this work was done in view of a proposal to send a SETI probe to a gravitational focus of the Sun, we recommend these sky regions particularly for searches of the sky survey type.

  16. Modeling of surface dust concentration in snow cover at industrial area using neural networks and kriging

    NASA Astrophysics Data System (ADS)

    Sergeev, A. P.; Tarasov, D. A.; Buevich, A. G.; Shichkin, A. V.; Tyagunov, A. G.; Medvedev, A. N.

    2017-06-01

    Modeling the spatial distribution of pollutants in urbanized territories is difficult, especially if there are multiple emission sources. When monitoring such territories, it is often impossible to arrange the necessary detailed sampling. Because of this, the usual methods of analysis and forecasting based on geostatistics are often less effective. Approaches based on artificial neural networks (ANNs) demonstrate the best results under these circumstances. This study compares two models based on ANNs, a multilayer perceptron (MLP) and generalized regression neural networks (GRNNs), with the base geostatistical method, kriging. Models of the spatial dust distribution in the snow cover around an operating copper quarry and in the emissions area of a nickel factory were created. To assess the effectiveness of the models, three indices were used: the mean absolute error (MAE), the root-mean-square error (RMSE), and the relative root-mean-square error (RRMSE). Taking all indices into account, the GRNN model, which included the coordinates of the sampling points and the distance to the likely emission source as input parameters, proved to be the most accurate. Maps of the spatial dust distribution in the snow cover were created for the study area. It has been shown that the models based on ANNs were more accurate than kriging, particularly in the context of a limited data set.
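
    A sketch of the winning input configuration (coordinates plus distance to the assumed source) with an MLP stand-in, since GRNN is not available in scikit-learn; the data are synthetic:

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_absolute_error, mean_squared_error

    rng = np.random.default_rng(5)
    xy = rng.uniform(0, 1000, size=(200, 2))               # sampling coordinates
    d_src = np.linalg.norm(xy - [500, 500], axis=1, keepdims=True)
    X = np.hstack([xy, d_src])                             # the abstract's inputs
    y = 50 * np.exp(-d_src[:, 0] / 300) + rng.normal(0, 2, 200)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                                       random_state=0)).fit(X[:150], y[:150])
    pred = model.predict(X[150:])
    mae = mean_absolute_error(y[150:], pred)
    rmse = mean_squared_error(y[150:], pred) ** 0.5
    ```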

  17. Artificial neural network application for space station power system fault diagnosis

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Oliver, Walter E.; Dias, Lakshman G.

    1995-01-01

    This study presents a methodology for fault diagnosis using a two-stage Artificial Neural Network Clustering Algorithm. Previously, SPICE models of a 5-bus DC power distribution system, with assumed constant output power from the DDCU during contingencies, were used to evaluate the ANN's fault diagnosis capabilities. This on-going study uses EMTP models of the components (distribution lines, SPDU, TPDU, loads) and power sources (DDCU) of Space Station Alpha's electrical Power Distribution System as the basis for the ANN fault diagnostic tool. The results from the two studies are contrasted. In the event of a major fault, ground controllers need the ability to identify the type of fault, isolate the fault to the orbital replaceable unit level, and provide the necessary information for the power management expert system to optimally determine a degraded-mode load schedule. To accomplish these goals, the electrical power distribution system's architecture can be subdivided into three major classes: DC-DC converter to loads, DC Switching Unit (DCSU) to Main Bus Switching Unit (MBSU), and power sources to DCSU. Each class has its own electrical characteristics and operations and requires a unique fault analysis philosophy. This study identifies these philosophies as Riddles 1, 2 and 3, respectively. The results of the on-going study address Riddle 1. It is concluded that the combination of the EMTP models of the DDCU, distribution cables and electrical loads yields a more accurate model of system behavior and, in addition, more accurate fault diagnosis using the ANN than was obtained with the SPICE models.

  18. Influence of ambient (outdoor) sources on residential indoor and personal PM2.5 concentrations: analyses of RIOPA data.

    PubMed

    Meng, Qing Yu; Turpin, Barbara J; Korn, Leo; Weisel, Clifford P; Morandi, Maria; Colome, Steven; Zhang, Junfeng Jim; Stock, Thomas; Spektor, Dalia; Winer, Arthur; Zhang, Lin; Lee, Jong Hoon; Giovanetti, Robert; Cui, William; Kwon, Jaymin; Alimokhtari, Shahnaz; Shendell, Derek; Jones, Jennifer; Farrar, Corice; Maberti, Silvia

    2005-01-01

    The Relationship of Indoor, Outdoor and Personal Air (RIOPA) study was designed to investigate residential indoor, outdoor and personal exposures to several classes of air pollutants, including volatile organic compounds, carbonyls and fine particles (PM2.5). Samples were collected from summer, 1999 to spring, 2001 in Houston (TX), Los Angeles (CA) and Elizabeth (NJ). Indoor, outdoor and personal PM2.5 samples were collected at 212 nonsmoking residences, 162 of which were sampled twice. Some homes were chosen due to close proximity to ambient sources of one or more target analytes, while others were farther from sources. Median indoor, outdoor and personal PM2.5 mass concentrations for these three sites were 14.4, 15.5 and 31.4 microg/m3, respectively. The contributions of ambient (outdoor) and nonambient sources to indoor and personal concentrations were quantified using a single compartment box model with measured air exchange rate and a random component superposition (RCS) statistical model. The median contribution of ambient sources to indoor PM2.5 concentrations using the mass balance approach was estimated to be 56% for all study homes (63%, 52% and 33% for California, New Jersey and Texas study homes, respectively). Reasonable variations in model assumptions alter median ambient contributions by less than 20%. The mean of the distribution of ambient contributions across study homes agreed well for the mass balance and RCS models, but the distribution was somewhat broader when calculated using the mass balance model with measured air exchange rates.
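
    The single-compartment box model reduces, at steady state, to an infiltration factor applied to the outdoor concentration; a sketch with typical PM2.5 penetration and deposition values (assumed, not the paper's fitted numbers):

    ```python
    def ambient_contribution(c_out, a, P=1.0, k=0.4):
        """Ambient part of indoor PM2.5: F_inf * C_out, with infiltration
        factor F_inf = P*a / (a + k). a is the measured air exchange rate
        (1/h); P (penetration) and k (deposition, 1/h) are assumed values."""
        f_inf = P * a / (a + k)
        return f_inf * c_out

    # Example: outdoor 15.5 ug/m3 at an air exchange rate of 0.8 1/h.
    print(ambient_contribution(15.5, 0.8))
    ```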

  19. CHARACTERIZING SPATIAL AND TEMPORAL DYNAMICS: DEVELOPMENT OF A GRID-BASED WATERSHED MERCURY LOADING MODEL

    EPA Science Inventory

    A distributed grid-based watershed mercury loading model has been developed to characterize spatial and temporal dynamics of mercury from both point and non-point sources. The model simulates flow, sediment transport, and mercury dynamics on a daily time step across a diverse lan...

  20. Probabilistic Models For Earthquakes With Large Return Periods In Himalaya Region

    NASA Astrophysics Data System (ADS)

    Chaudhary, Chhavi; Sharma, Mukat Lal

    2017-12-01

    Determination of the frequency of large earthquakes is of paramount importance for seismic risk assessment, as large events contribute a significant fraction of the total deformation, and these long-return-period events with low probability of occurrence are not easily captured by classical distributions. Generally, with a small catalogue, these larger events follow a different distribution function from the smaller and intermediate events. It is thus of special importance to use statistical methods that analyse as closely as possible the range of extreme values, or the tail of the distribution, in addition to the main distribution. The generalised Pareto distribution family is widely used for modelling events which cross a specified threshold value; the Pareto, Truncated Pareto, and Tapered Pareto are special cases of this family. In this work, the probability of earthquake occurrence has been estimated using the Pareto, Truncated Pareto, and Tapered Pareto distributions. As a case study we consider the Himalaya, whose orogeny gives rise to large earthquakes and which is one of the most active zones of the world. The whole Himalayan region has been divided into five seismic source zones according to seismotectonics and the clustering of events. Estimated probabilities of earthquake occurrence have also been compared with the modified Gutenberg-Richter distribution and the characteristic recurrence distribution. The statistical analysis reveals that the Tapered Pareto distribution describes seismicity in these seismic source zones better than the other distributions considered in the present study.
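
    For reference, the tapered Pareto survival function (Kagan-style), which combines a power-law body with an exponential taper at a corner moment, can be written directly:

    ```python
    import numpy as np

    def tapered_pareto_sf(m, m_t, beta, m_c):
        """P(M > m) for the tapered Pareto: power-law decay with exponent
        beta above the threshold m_t, softened by an exponential taper with
        corner moment m_c (all in consistent moment units)."""
        m = np.asarray(m, dtype=float)
        return (m_t / m) ** beta * np.exp((m_t - m) / m_c)
    ```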

  1. Aerial Measuring System Sensor Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. S. Detwiler

    2002-04-01

    This project deals with modeling the Aerial Measuring System (AMS) fixed-wing and rotary-wing sensor systems, which are critical U.S. Department of Energy National Nuclear Security Administration (NNSA) Consequence Management assets. The fixed-wing system is critical in detecting lost or stolen radiography or medical sources, or mixed fission products such as from a commercial power plant release, at high flying altitudes. The helicopter is typically used at lower altitudes to determine ground contamination, such as in measuring americium from a plutonium ground dispersal during a cleanup. Since the sensitivity of these instruments as a function of altitude is crucial in estimating detection limits for various ground contaminations and the necessary count times, a characterization of their sensitivity as a function of altitude and energy is needed. Experimental data at altitude as well as laboratory benchmarks are important to ensure that the strong effects of air attenuation are modeled correctly. The modeling presented here is the first attempt at such a characterization of the equipment at flying altitudes. The sodium iodide (NaI) sensors utilized with these systems were characterized using the Monte Carlo N-Particle code (MCNP) developed at Los Alamos National Laboratory. For the fixed-wing system, calculations modeled the spectral response of the 3-element NaI detector pod and High-Purity Germanium (HPGe) detector in the relevant energy range of 50 keV to 3 MeV. NaI detector responses were simulated for both point and distributed surface sources as a function of gamma energy and flying altitude. For point sources, photopeak efficiencies were calculated for a zero radial distance and an offset equal to the altitude. For distributed sources approximating an infinite plane, gross count efficiencies were calculated and normalized to a uniform surface deposition of 1 µCi/m^2. The helicopter calculations modeled the transport of americium-241 (241Am), as this is the "marker" isotope utilized by the system for Pu detection. The helicopter sensor array consists of two six-element NaI detector pods, and the NaI pod detector response was simulated for a distributed surface source of 241Am as a function of altitude.
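
    For the infinite-plane case, the uncollided flux above the plane has a classical closed form involving the exponential integral; a sketch with an assumed air attenuation coefficient:

    ```python
    import numpy as np
    from scipy.special import exp1

    def plane_source_flux(h, S=1.0, mu_air=0.009):
        """Uncollided photon flux at altitude h (m) above an infinite plane
        source of strength S (gammas m^-2 s^-1): phi = S/2 * E1(mu*h).
        mu_air (1/m) is an assumed value for air at ~662 keV."""
        return 0.5 * S * exp1(mu_air * np.asarray(h, dtype=float))
    ```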

  2. CLASH-VLT: A highly precise strong lensing model of the galaxy cluster RXC J2248.7-4431 (Abell S1063) and prospects for cosmography

    NASA Astrophysics Data System (ADS)

    Caminha, G. B.; Grillo, C.; Rosati, P.; Balestra, I.; Karman, W.; Lombardi, M.; Mercurio, A.; Nonino, M.; Tozzi, P.; Zitrin, A.; Biviano, A.; Girardi, M.; Koekemoer, A. M.; Melchior, P.; Meneghetti, M.; Munari, E.; Suyu, S. H.; Umetsu, K.; Annunziatella, M.; Borgani, S.; Broadhurst, T.; Caputi, K. I.; Coe, D.; Delgado-Correal, C.; Ettori, S.; Fritz, A.; Frye, B.; Gobat, R.; Maier, C.; Monna, A.; Postman, M.; Sartoris, B.; Seitz, S.; Vanzella, E.; Ziegler, B.

    2016-03-01

    Aims: We perform a comprehensive study of the total mass distribution of the galaxy cluster RXC J2248.7-4431 (z = 0.348) with a set of high-precision strong lensing models, which take advantage of extensive spectroscopic information on many multiply lensed systems. In an effort to understand and quantify inherent systematics in parametric strong lensing modelling, we explore a collection of 22 models in which we use different samples of multiple-image families, different parametrizations of the mass distribution, and different cosmological parameters. Methods: As input information for the strong lensing models, we use the Cluster Lensing And Supernova survey with Hubble (CLASH) imaging data and spectroscopic follow-up observations, with the VIsible Multi-Object Spectrograph (VIMOS) and the Multi Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope (VLT), to identify and characterize bona fide multiple-image families and measure their redshifts down to mF814W ≃ 26. A total of 16 background sources, over the redshift range 1.0-6.1, are multiply lensed into 47 images, 24 of which are spectroscopically confirmed and belong to ten individual sources. These also include a multiply lensed Lyman-α blob at z = 3.118. The cluster total mass distribution and the underlying cosmology in the models are optimized by matching the observed positions of the multiple images on the lens plane. Bayesian Markov chain Monte Carlo techniques are used to quantify errors and covariances of the best-fit parameters. Results: We show that with a careful selection of a large sample of spectroscopically confirmed multiple images, the best-fit model can reproduce their observed positions with an rms scatter of 0.3″ in a fixed flat ΛCDM cosmology, whereas the lack of spectroscopic information or the use of inaccurate photometric redshifts can lead to biases in the values of the model parameters. We find that the best-fit parametrization for the cluster total mass distribution is composed of an elliptical pseudo-isothermal mass distribution with a significant core for the overall cluster halo and truncated pseudo-isothermal mass profiles for the cluster galaxies. We show that by adding bona fide photometrically selected multiple images to the sample of spectroscopic families, one can slightly improve the constraints on the model parameters. In particular, we find that the degeneracy between the lens total mass distribution and the underlying geometry of the Universe, which is probed via the angular diameter distance ratios between the lens and the sources and between the observer and the sources, can be partially removed. Allowing cosmological parameters to vary together with the cluster parameters, we find (at the 68% confidence level) Ωm = 0.25 (+0.13/−0.16) and w = −1.07 (+0.16/−0.42) for a flat ΛCDM model, and Ωm = 0.31 (+0.12/−0.13) and ΩΛ = 0.38 (+0.38/−0.27) for a Universe with w = −1 and free curvature. Finally, using toy models mimicking the overall configuration of multiple images and the cluster total mass distribution, we estimate the impact of the line-of-sight mass structure on the positional rms to be 0.3″ ± 0. We argue that the apparent sensitivity of our lensing model to cosmography is due to the combination of the regular potential shape of RXC J2248, a large number of bona fide multiple images out to z = 6.1, and a relatively modest presence of intervening large-scale structure, as revealed by our spectroscopic survey.

  3. Synthetic neutron camera and spectrometer in JET based on AFSI-ASCOT simulations

    NASA Astrophysics Data System (ADS)

    Sirén, P.; Varje, J.; Weisen, H.; Koskela, T.; contributors, JET

    2017-09-01

    The ASCOT Fusion Source Integrator (AFSI) has been used to calculate neutron production rates and spectra corresponding to the JET 19-channel neutron camera (KN3) and the time-of-flight spectrometer (TOFOR) as ideal diagnostics, without detector-related effects. AFSI calculates fusion product distributions in 4D, based on Monte Carlo integration from arbitrary reactant distribution functions. The distribution functions were calculated by the ASCOT Monte Carlo particle orbit following code for thermal, NBI and ICRH particle reactions. Fusion cross-sections were defined based on the Bosch-Hale model, and both DD and DT reactions were included. Neutrons generated by AFSI-ASCOT simulations have already been applied as a neutron source in the Serpent neutron transport code in ITER studies. Additionally, AFSI has been selected as the main fusion product generator in the complete analysis calculation chain ASCOT - AFSI - SERPENT (neutron and gamma transport Monte Carlo code) - APROS (system and power plant modelling code), which encompasses the plasma as an energy source, heat deposition in plant structures, as well as cooling and balance-of-plant in DEMO applications and other reactor-relevant analyses. This conference paper presents the first results and validation of the AFSI DD fusion model for different auxiliary heating scenarios (NBI, ICRH) with very different fast-particle distribution functions. Both calculated quantities (production rates and spectra) have been compared with experimental data from KN3 and synthetic spectrometer data from the ControlRoom code; no unexplained differences have been observed. In future work, AFSI will be extended to synthetic gamma diagnostics, and it will be used as part of the neutron transport calculation chain to model real diagnostics, instead of ideal synthetic ones, for quantitative benchmarking.

  4. Long-distance practical quantum key distribution by entanglement swapping.

    PubMed

    Scherer, Artur; Sanders, Barry C; Tittel, Wolfgang

    2011-02-14

    We develop a model for practical, entanglement-based long-distance quantum key distribution employing entanglement swapping as a key building block. Relying only on existing off-the-shelf technology, we show how to optimize resources so as to maximize secret key distribution rates. The tools comprise lossy transmission links, such as telecom optical fibers or free space, parametric down-conversion sources of entangled photon pairs, and threshold detectors that are inefficient and have dark counts. Our analysis provides the optimal trade-off between detector efficiency and dark counts, which are usually competing, as well as the optimal source brightness that maximizes the secret key rate for specified distances (i.e. loss) between sender and receiver.
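
    A toy, numbers-only sketch of the trade-offs being optimized: fiber loss sets the click probability, dark counts set the error floor, and a binary-entropy bound gives a rough secret-key fraction. The formulas below are textbook BB84-style approximations, not the paper's entanglement-swapping analysis, and all parameter values are illustrative.

```python
import numpy as np

def h2(x):
    """Binary entropy function."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def toy_key_rate(distance_km, alpha_db_per_km=0.2, eta_det=0.2,
                 dark_prob=1e-6, pair_rate=1e6):
    """Crude secret-key-rate estimate for one photon of an entangled
    pair sent over fiber (small-probability approximations)."""
    eta_ch = 10 ** (-alpha_db_per_km * distance_km / 10)  # channel transmittance
    p_signal = eta_ch * eta_det          # detection of the transmitted photon
    p_click = p_signal + dark_prob       # plus dark counts
    qber = (0.5 * dark_prob) / p_click   # dark counts give random outcomes
    key_fraction = max(0.0, 1.0 - 2.0 * h2(qber))
    return pair_rate * p_click * key_fraction

for d in (50, 100, 200):
    print(d, "km:", f"{toy_key_rate(d):.1f} bits/s")
```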

  5. A simple theoretical model for ⁶³Ni betavoltaic battery.

    PubMed

    Zuo, Guoping; Zhou, Jianliang; Ke, Guotu

    2013-12-01

    A numerical simulation of the energy deposition distribution of ⁶³Ni beta particles in semiconductors is performed. The results show that the energy deposition distribution follows an approximately exponential decay law. A simple theoretical model of a ⁶³Ni betavoltaic battery is developed based on this distribution characteristic, and it is validated against two experiments from the literature. The theoretical short-circuit current agrees well with the experimental results, while the open-circuit voltage deviates from the experimental results owing to the influence of PN-junction defects and the simplified treatment of the source. The theoretical model can be applied to ⁶³Ni and ¹⁴⁷Pm betavoltaic batteries.
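
    A minimal sketch of the exponential-deposition idea follows; the decay length, charge-collection treatment, and all numeric constants are placeholders, not the paper's model.

```python
import numpy as np

def deposited_fraction(x1_um, x2_um, lam_um):
    """Fraction of the total beta energy deposited between depths x1
    and x2, assuming the deposition density follows f(x) ~ exp(-x/lam),
    the approximate law reported above."""
    return np.exp(-x1_um / lam_um) - np.exp(-x2_um / lam_um)

def short_circuit_current(activity_bq, e_avg_kev, lam_um,
                          x1_um, x2_um, e_pair_ev, cce=1.0):
    """Toy short-circuit current: beta power deposited in the active
    region divided by the electron-hole pair creation energy, times a
    charge collection efficiency (cce). Placeholder physics only."""
    q = 1.602e-19                              # elementary charge, C
    power_ev_per_s = activity_bq * e_avg_kev * 1e3
    frac = deposited_fraction(x1_um, x2_um, lam_um)
    pairs_per_s = power_ev_per_s * frac / e_pair_ev
    return q * pairs_per_s * cce               # amperes

# Example: 10 mCi 63Ni source (avg beta ~17 keV), Si (pair energy ~3.6 eV)
i_sc = short_circuit_current(3.7e8, 17.0, 2.0, 0.1, 3.0, 3.6, cce=0.7)
print(f"toy short-circuit current: {i_sc:.2e} A")
```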

  6. Measurement-device-independent entanglement-based quantum key distribution

    NASA Astrophysics Data System (ADS)

    Yang, Xiuqing; Wei, Kejin; Ma, Haiqiang; Sun, Shihai; Liu, Hongwei; Yin, Zhenqiang; Li, Zuohan; Lian, Shibin; Du, Yungang; Wu, Lingan

    2016-05-01

    We present a quantum key distribution protocol in a model in which the legitimate users gather statistics, as in the measurement-device-independent entanglement witness, to certify the sources and the measurement devices. We show that the task of measurement-device-independent quantum communication can be accomplished based on the monogamy of entanglement, and that it is fairly loss-tolerant, even in the presence of source and detector flaws. We derive a tight bound, under collective attacks, on the Holevo information between the authorized parties and the eavesdropper; with this bound, the final secret key rate in the presence of source flaws can be obtained. The results show that long-distance quantum cryptography over 144 km can be made secure using only standard threshold detectors.

  7. Global distribution of alkyl nitrates and their impacts on reactive nitrogen in remote regions constrained by aircraft observations and chemical transport modeling

    NASA Astrophysics Data System (ADS)

    Fisher, J. A.; Atlas, E. L.; Blake, D. R.; Barletta, B.; Thompson, C. R.; Peischl, J.; Tzompa Sosa, Z. A.; Ryerson, T. B.; Murray, L. T.

    2017-12-01

    Nitrogen oxides (NO + NO2 = NOx) are precursors in the formation of tropospheric ozone, contribute to the formation of aerosols, and enhance nitrogen deposition to ecosystems. While direct emissions tend to be localised over continental source regions, a significant source of NOx to the remote troposphere comes from the degradation of other forms of reactive nitrogen. Long-lived, short-chain alkyl nitrates (RONO2), including methyl, ethyl and propyl nitrates, may be particularly significant forms of reactive nitrogen in the remote atmosphere, as they are emitted directly by the ocean in regions where reactive nitrogen is otherwise very low. They also act as NOx reservoir species, sequestering NOx in source regions and releasing it far downwind; through this process they may become increasingly important reservoirs as methane, ethane, and propane emissions grow. However, small RONO2 are not consistently included in global atmospheric chemistry models, and their distributions and impacts remain poorly constrained. In this presentation, we describe a new RONO2 simulation in the GEOS-Chem chemical transport model, evaluated using a large ensemble of aircraft observations collected over a 20-year period. The observations are largely concentrated over the Pacific Ocean, beginning with PEM-Tropics in the late 1990s and continuing through the recent HIPPO and ATom campaigns. Both the observations and the model show enhanced RONO2 in the tropical Pacific boundary layer, consistent with a photochemical source in seawater. The model reproduces a similarly large enhancement over the Southern Ocean by assuming a large pool of oceanic RONO2 there, but the source of the seawater enhancement in this environment remains uncertain. We find that including marine RONO2 in the simulation is necessary to correct a large underestimate in simulated reactive nitrogen throughout the Pacific marine boundary layer. We also find that the impacts on NOx export from continental source regions are limited, as RONO2 formation competes with other NOx reservoirs such as PAN, leading to a re-partitioning of reactive nitrogen rather than a net reactive nitrogen source. Further implications for NOx and ozone, as well as the impacts of recent changes in the global distribution of methane, ethane, propane, and NOx emissions, will also be discussed.

  8. Ignition probability of polymer-bonded explosives accounting for multiple sources of material stochasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu

    2014-05-07

    Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in the material's microstructural morphology; the second involves random fluctuations in the grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and for bulk and interfacial dissipation, quantifying the time to criticality (t_c) of individual samples and allowing the probability distribution of the time to criticality arising from each source of stochastic variation to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity: the first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing the stochasticity in material behavior that arises from the multiple types of uncertainty associated with the structure, design, synthesis and processing of materials.
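
    A hedged sketch of the two superposition ideas, using two Weibull-distributed times-to-criticality; the shape/scale values and the "nested" form below are one plausible reading for illustration, not the paper's calibrated model.

```python
import numpy as np

def weibull_cdf(t, k, lam):
    """P(T <= t) for a Weibull distribution with shape k, scale lam."""
    return 1.0 - np.exp(-(t / lam) ** k)

def series_combination(t, k1, lam1, k2, lam2):
    """Combination model: ignition by time t if either source of
    stochasticity alone would have produced criticality."""
    p1, p2 = weibull_cdf(t, k1, lam1), weibull_cdf(t, k2, lam2)
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def nested_weibull(t, k_outer, k_inner, lam):
    """One plausible 'nested' form: a Weibull whose argument is itself
    a Weibull-type power of time."""
    inner = (t / lam) ** k_inner
    return 1.0 - np.exp(-(inner ** k_outer))

print(series_combination(5.0, 1.5, 8.0, 2.0, 12.0))
print(nested_weibull(5.0, 1.5, 2.0, 10.0))
```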

  9. Open Source Bayesian Models. 1. Application to ADME/Tox and Drug Discovery Datasets.

    PubMed

    Clark, Alex M; Dole, Krishna; Coulon-Spektor, Anna; McNutt, Andrew; Grass, George; Freundlich, Joel S; Reynolds, Robert C; Ekins, Sean

    2015-06-22

    On the order of hundreds of absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) models have been described in the literature in the past decade, more often than not inaccessible to anyone but their authors. Public accessibility is also an issue with computational models for bioactivity, and the ability to share such models remains a major challenge limiting drug discovery. We describe the creation of a reference implementation of a Bayesian model-building software module, which we have released as an open source component that is now included in the Chemistry Development Kit (CDK) project, as well as implemented in the CDD Vault and in several mobile apps. We use this implementation to build an array of Bayesian models for ADME/Tox, in vitro and in vivo bioactivity, and other physicochemical properties. We show that these models possess cross-validation receiver operating characteristic curve values comparable to those generated previously in prior publications using alternative tools. We describe how the implementation of Bayesian models with FCFP6 descriptors in the CDD Vault enables the rapid production of robust machine learning models from public data or the user's own datasets. The current study sets the stage for generating models in proprietary software (such as CDD) and exporting these models in a format that can be run in open source software using CDK components. This work also demonstrates that we can enable biocomputation across distributed private or public datasets to enhance drug discovery.
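
    The reference implementation itself is Java (CDK); as a rough open analogue in Python, the sketch below trains a Bernoulli naive Bayes model on binary fingerprint bits, mirroring the Laplacian-smoothed Bayesian/FCFP6 workflow in spirit. The dataset is random and all names are hypothetical stand-ins.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 1024))   # stand-in for folded FCFP6 bits
y = rng.integers(0, 2, size=500)           # stand-in activity labels

model = BernoulliNB(alpha=1.0)             # Laplace smoothing
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated ROC AUC: {auc:.2f}")  # ~0.5 on random labels
```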

  10. Open Source Bayesian Models. 1. Application to ADME/Tox and Drug Discovery Datasets

    PubMed Central

    2015-01-01

    On the order of hundreds of absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) models have been described in the literature in the past decade, more often than not inaccessible to anyone but their authors. Public accessibility is also an issue with computational models for bioactivity, and the ability to share such models remains a major challenge limiting drug discovery. We describe the creation of a reference implementation of a Bayesian model-building software module, which we have released as an open source component that is now included in the Chemistry Development Kit (CDK) project, as well as implemented in the CDD Vault and in several mobile apps. We use this implementation to build an array of Bayesian models for ADME/Tox, in vitro and in vivo bioactivity, and other physicochemical properties. We show that these models possess cross-validation receiver operating characteristic curve values comparable to those generated previously in prior publications using alternative tools. We describe how the implementation of Bayesian models with FCFP6 descriptors in the CDD Vault enables the rapid production of robust machine learning models from public data or the user's own datasets. The current study sets the stage for generating models in proprietary software (such as CDD) and exporting these models in a format that can be run in open source software using CDK components. This work also demonstrates that we can enable biocomputation across distributed private or public datasets to enhance drug discovery. PMID:25994950

  11. Integrating species distribution models (SDMs) and phylogeography for two species of Alpine Primula

    PubMed Central

    Schorr, G; Holstein, N; Pearman, P B; Guisan, A; Kadereit, J W

    2012-01-01

    The major intention of the present study was to investigate whether an approach combining niche-based palaeodistribution modeling and phylogeography would support or modify hypotheses about Quaternary distributional history derived from phylogeographic methods alone. Our study system comprised two closely related species of Alpine Primula. We used species distribution models based on the extant distribution of the species and last glacial maximum (LGM) climate models to predict the distribution of the two species during the LGM. Phylogeographic data were generated using amplified fragment length polymorphisms (AFLPs). In Primula hirsuta, models of past distribution and phylogeographic data are partly congruent and support the hypothesis of widespread nunatak survival in the Central Alps. Species distribution models (SDMs) allowed us to differentiate between alpine regions that harbor potential nunatak areas and regions that have been colonized from other areas. SDMs revealed that diversity is a good indicator for nunataks, while rarity is a good indicator for peripheral relict populations that were not a source for the recolonization of the inner Alps. In P. daonensis, palaeodistribution models and phylogeographic data are incongruent. Besides the uncertainty inherent in this type of modeling approach (e.g., the relatively coarse 1-km grain size), the disagreement between models and data may partly be caused by shifts of ecological niche in both species. Nevertheless, we demonstrate that combining palaeodistribution modeling with phylogeographic approaches provides a more differentiated picture of the distributional history of species, and partly supports (P. hirsuta) and partly modifies (P. daonensis and P. hirsuta) hypotheses of Quaternary distributional history. Some of the refugial areas indicated by palaeodistribution models could not have been identified with phylogeographic data. PMID:22833799

  12. Effect of high energy electrons on H⁻ production and destruction in a high current DC negative ion source for cyclotron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onai, M., E-mail: onai@ppl.appi.keio.ac.jp; Fujita, S.; Hatayama, A.

    2016-02-15

    Recently, a filament-driven multi-cusp negative ion source has been developed for proton cyclotrons in medical applications. In this study, numerical modeling of the filament arc-discharge source plasma has been performed, combining kinetic modeling of electrons in the ion source plasma by a multi-cusp arc-discharge code with zero-dimensional rate equations for hydrogen molecules and negative ions. The main focus is placed on the effects of the arc-discharge power on the electron energy distribution function and the resultant H⁻ production. The modelling results reasonably explain the dependence of the H⁻ extraction current on the arc-discharge power observed in the experiments.

  13. Present status of numerical modeling of hydrogen negative ion source plasmas and its comparison with experiments: Japanese activities and their collaboration with experimental groups

    NASA Astrophysics Data System (ADS)

    Hatayama, A.; Nishioka, S.; Nishida, K.; Mattei, S.; Lettry, J.; Miyamoto, K.; Shibata, T.; Onai, M.; Abe, S.; Fujita, S.; Yamada, S.; Fukano, A.

    2018-06-01

    The present status of kinetic modeling of particle dynamics in hydrogen negative ion (H⁻) source plasmas and their comparisons with experiments are reviewed and discussed with some new results. The main focus is placed on the following topics, which are important for the research and development of H⁻ sources for intense and high-quality H⁻ ion beams: (i) effects of non-equilibrium features of the electron energy distribution function on volume and surface H⁻ production, (ii) the origin of the spatial non-uniformity in giant multi-cusp arc-discharge H⁻ sources, (iii) capacitive to inductive (E to H) mode transition in radio-frequency inductively coupled plasma H⁻ sources and (iv) extraction physics of H⁻ ions and beam optics, especially the present understanding of the meniscus formation in strongly electronegative plasmas (so-called ion–ion plasmas) and its effect on beam optics. For these topics, mainly Japanese modeling activities, and their domestic and international collaborations with experimental studies, are introduced with some examples showing how models have been improved and to what extent the modeling studies can presently contribute to improving the source performance. Close collaboration between experimental and modeling activities is indispensable for the validation/improvement of the modeling and its contribution to the source design/development.

  14. Modular neuron-based body estimation: maintaining consistency over different limbs, modalities, and frames of reference

    PubMed Central

    Ehrenfeld, Stephan; Herbort, Oliver; Butz, Martin V.

    2013-01-01

    This paper addresses the question of how the brain maintains a probabilistic body state estimate over time from a modeling perspective. The neural Modular Modality Frame (nMMF) model simulates such a body state estimation process by continuously integrating redundant, multimodal body state information sources. The body state estimate itself is distributed over separate, but bidirectionally interacting modules. nMMF compares the incoming sensory and present body state information across the interacting modules and fuses the information sources accordingly. At the same time, nMMF enforces body state estimation consistency across the modules. nMMF is able to detect conflicting sensory information and to consequently decrease the influence of implausible sensor sources on the fly. In contrast to the previously published Modular Modality Frame (MMF) model, nMMF offers a biologically plausible neural implementation based on distributed, probabilistic population codes. Besides its neural plausibility, the neural encoding has the advantage of enabling (a) additional probabilistic information flow across the separate body state estimation modules and (b) the representation of arbitrary probability distributions of a body state. The results show that the neural estimates can detect and decrease the impact of false sensory information, can propagate conflicting information across modules, and can improve overall estimation accuracy due to additional module interactions. Even bodily illusions, such as the rubber hand illusion, can be simulated with nMMF. We conclude with an outlook on the potential of modeling human data and of invoking goal-directed behavioral control. PMID:24191151

  15. Adaptive distributed source coding.

    PubMed

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  16. Sources of lead and zinc associated with metal smelting activities in the Trail area, British Columbia, Canada.

    PubMed

    Goodarzi, Fariborz; Sanei, Hamed; Labonté, Marcel; Duncan, William F

    2002-06-01

    The spatial distribution and deposition of lead and zinc emitted from the Trail smelter, British Columbia, Canada, was studied by strategically locating moss bags in the area surrounding the smelter and monitoring the deposition of elements every three months. A combined diffusion/distribution model was applied to estimate the relative contribution of stack-emitted material and material emitted from the secondary sources (e.g., wind-blown dust from ore/slag storage piles, uncovered transportation/trucking of ore, and historical dust). The results indicate that secondary sources are the major contributor of lead and zinc deposited within a short distance from the smelter. Gradually, the stack emissions become the main source of Pb and Zn at greater distances from the smelter. Typical material originating from each source was characterized by SEM/EDX, which indicated a marked difference in their morphology and chemical composition.

  17. The critical role of uncertainty in projections of hydrological extremes

    NASA Astrophysics Data System (ADS)

    Meresa, Hadush K.; Romanowicz, Renata J.

    2017-08-01

    This paper aims to quantify the uncertainty in projections of future hydrological extremes in the Biala Tarnowska River at the Koszyce gauging station, southern Poland. The approach is based on several climate projections obtained from the EURO-CORDEX initiative, raw and bias-corrected realizations of catchment precipitation, and flow simulations derived using multiple hydrological model parameter sets. The projections cover the 21st century. Three sources of uncertainty are considered: the first related to the spread of the climate projection ensemble, the second related to the uncertainty in hydrological model parameters, and the third related to the error in fitting theoretical distribution models to annual extreme flow series. The uncertainty of projected extreme indices related to hydrological model parameters was conditioned on flow observations from the reference period using the generalized likelihood uncertainty estimation (GLUE) approach, with separate criteria for high- and low-flow extremes. Extreme (low and high) flow quantiles were estimated using the generalized extreme value (GEV) distribution at different return periods, based on two different lengths of the flow time series. A sensitivity analysis based on analysis of variance (ANOVA) shows that, for the low-flow extremes, the uncertainty introduced by the hydrological model parameters can be larger than the climate model variability and the distribution-fit uncertainty, whilst for the high-flow extremes greater uncertainty stems from the climate models than from the hydrological parameters and the distribution fit. This implies that ignoring any one of the three uncertainty sources may pose great risk to adaptation to future hydrological extremes and to water resource planning and management.
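
    The distribution-fit step can be sketched with scipy: fit a GEV to annual maxima and read off return levels. The series below is synthetic, and the GLUE conditioning and ensemble machinery of the study are not reproduced.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
annual_maxima = genextreme.rvs(c=-0.1, loc=100, scale=30,
                               size=40, random_state=rng)  # synthetic series

c, loc, scale = genextreme.fit(annual_maxima)   # maximum-likelihood GEV fit
for T in (10, 50, 100):                         # return periods in years
    q = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
    print(f"{T:>3}-year return level: {q:.1f} (units of the input series)")
```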

  18. Modeling field-scale cosolvent flooding for DNAPL source zone remediation

    NASA Astrophysics Data System (ADS)

    Liang, Hailian; Falta, Ronald W.

    2008-02-01

    A three-dimensional, compositional, multiphase flow simulator was used to model a field-scale test of DNAPL removal by cosolvent flooding. The DNAPL at this site was tetrachloroethylene (PCE), and the flooding solution was an ethanol/water mixture with up to 95% ethanol. The numerical model, UTCHEM, accounts for the equilibrium phase behavior and multiphase flow of a ternary ethanol-PCE-water system. Simulations of enhanced cosolvent flooding using a kinetic interphase mass transfer approach show that when a very high concentration of alcohol is injected, the DNAPL/water/alcohol mixture forms a single phase and local mass transfer limitations become irrelevant. The field simulations were carried out in three steps. First, a simple uncalibrated layered model was developed; this model is capable of roughly reproducing the production well concentrations of alcohol, but not of PCE. A more refined (but uncalibrated) permeability model is able to accurately simulate the breakthrough concentrations of injected alcohol from the production wells, but is unable to accurately predict the PCE removal. The final model uses a calibration of the initial PCE distribution to obtain good matches with the PCE effluent curves from the extraction wells. It is evident that the effectiveness of DNAPL source zone remediation is mainly affected by the characteristics of the spatial heterogeneity of the porous media and the variable (and unknown) DNAPL distribution. The inherent uncertainty in the DNAPL distribution at real field sites means that some form of calibration of the initial contaminant distribution will almost always be required to match contaminant effluent breakthrough curves.

  19. Modeling field-scale cosolvent flooding for DNAPL source zone remediation.

    PubMed

    Liang, Hailian; Falta, Ronald W

    2008-02-19

    A three-dimensional, compositional, multiphase flow simulator was used to model a field-scale test of DNAPL removal by cosolvent flooding. The DNAPL at this site was tetrachloroethylene (PCE), and the flooding solution was an ethanol/water mixture with up to 95% ethanol. The numerical model, UTCHEM, accounts for the equilibrium phase behavior and multiphase flow of a ternary ethanol-PCE-water system. Simulations of enhanced cosolvent flooding using a kinetic interphase mass transfer approach show that when a very high concentration of alcohol is injected, the DNAPL/water/alcohol mixture forms a single phase and local mass transfer limitations become irrelevant. The field simulations were carried out in three steps. First, a simple uncalibrated layered model was developed; this model is capable of roughly reproducing the production well concentrations of alcohol, but not of PCE. A more refined (but uncalibrated) permeability model is able to accurately simulate the breakthrough concentrations of injected alcohol from the production wells, but is unable to accurately predict the PCE removal. The final model uses a calibration of the initial PCE distribution to obtain good matches with the PCE effluent curves from the extraction wells. It is evident that the effectiveness of DNAPL source zone remediation is mainly affected by the characteristics of the spatial heterogeneity of the porous media and the variable (and unknown) DNAPL distribution. The inherent uncertainty in the DNAPL distribution at real field sites means that some form of calibration of the initial contaminant distribution will almost always be required to match contaminant effluent breakthrough curves.

  20. Satellite observations of tropospheric ammonia and carbon monoxide: Global distributions, regional correlations and comparisons to model simulations

    EPA Science Inventory

    Ammonia (NH3) and carbon monoxide (CO) are primary pollutants emitted to the Earth's atmosphere from common as well as distinct sources associated with anthropogenic and natural activities. The seasonal and global distributions and correlations of NH3 and CO from the Tropospheric...

  1. RF model of the distribution system as a communication channel, phase 2. Volume 4: Software source program and illustrations ASCII database listings

    NASA Technical Reports Server (NTRS)

    Rustay, R. C.; Gajjar, J. T.; Rankin, R. W.; Wentz, R. C.; Wooding, R.

    1982-01-01

    Listings of source programs and some illustrative examples of various ASCII data base files are presented. The listings are grouped into the following categories: main programs, subroutine programs, illustrative ASCII data base files. Within each category files are listed alphabetically.

  2. Estimation of gross land-use change and its uncertainty using a Bayesian data assimilation approach

    NASA Astrophysics Data System (ADS)

    Levy, Peter; van Oijen, Marcel; Buys, Gwen; Tomlinson, Sam

    2018-03-01

    We present a method for estimating land-use change using a Bayesian data assimilation approach. The approach provides a general framework for combining multiple disparate data sources with a simple model. This allows us to constrain estimates of gross land-use change with reliable national-scale census data, whilst retaining the detailed information available from several other sources. Eight different data sources, with three different data structures, were combined in our posterior estimate of land use and land-use change, and other data sources could easily be added in future. The tendency for observations to underestimate gross land-use change is accounted for by allowing for a skewed distribution in the likelihood function. The data structure produced has high temporal and spatial resolution, and is appropriate for dynamic process-based modelling. Uncertainty is propagated appropriately into the output, so we have a full posterior distribution of output and parameters. The data are available in the widely used netCDF file format from http://eidc.ceh.ac.uk/.
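
    One way to realize such a skewed likelihood is sketched below with scipy's skew-normal; the distribution family, its parameter values, and the function name are illustrative stand-ins, not the authors' formulation.

```python
import numpy as np
from scipy.stats import skewnorm

def log_likelihood(observed, predicted, scale=1.0, skew=-3.0):
    """Log-likelihood of observed gross land-use change given model
    predictions. The negative skew expresses the tendency of
    observations to underestimate the true change; parameter values
    are illustrative, not from the paper."""
    resid = np.asarray(observed) - np.asarray(predicted)
    return np.sum(skewnorm.logpdf(resid, a=skew, loc=0.0, scale=scale))

# Example: two observed transitions vs. model predictions
print(log_likelihood([1.2, 0.8], [1.5, 1.0]))
```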

  3. Study of the thermal distribution in vocal cords irradiated by an optical source for the treatment of voice disabilities

    NASA Astrophysics Data System (ADS)

    Arce-Diego, José L.; Fanjul-Vélez, Félix; Borragán-Torre, Alfonso

    2006-02-01

    Vocal cord disorders constitute an important problem for the people suffering from them. In particular, the reduction of mucosal wave movement is not adequately treated by conventional therapies such as drug administration or surgery. In this work an alternative therapy is proposed, consisting of controlled temperature increases induced by optical sources. The distribution of heat inside the vocal cords when an optical source illuminates them is studied. Optical and thermal properties of the tissue are discussed as a basis for an appropriate understanding of its behaviour. Propagation of light is modelled using the Radiation Transfer Theory (RTT) and a numerical Monte Carlo model. A thermal transfer model, which uses the results of the radiation propagation, determines the temperature distribution in the tissue. Two widely used lasers are considered, Nd:YAG (1064 nm) and KTP (532 nm). Adequate radiation doses, and the resulting temperature rises, must be chosen to avoid damage to the vocal cords and thus to ensure an improvement in the vocal function of the patient. The temperature limits should be assessed with a combined temperature-time and Arrhenius analysis.
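
    The Arrhenius analysis mentioned at the end can be sketched as a damage integral over the temperature history; the frequency factor and activation energy below are commonly quoted protein-coagulation values, used purely as placeholders.

```python
import numpy as np

def arrhenius_damage(temps_k, dt_s, a_freq=3.1e98, e_a=6.28e5):
    """Arrhenius thermal damage integral
        Omega = integral of A * exp(-Ea / (R * T(t))) dt
    evaluated on a sampled temperature history. Omega ~ 1 is the usual
    threshold for irreversible damage. A (1/s) and Ea (J/mol) are
    literature values for protein coagulation, placeholders here."""
    r_gas = 8.314  # J / (mol K)
    rates = a_freq * np.exp(-e_a / (r_gas * np.asarray(temps_k)))
    return np.sum(rates) * dt_s

# Example: 5 s at 70 C (343 K), sampled every 10 ms
omega = arrhenius_damage(np.full(500, 343.0), 0.01)
print(f"damage integral: {omega:.2f}")
```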

  4. Location error uncertainties - an advanced use of probabilistic inverse theory

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2016-04-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analyzed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. While estimating the location of earthquake foci is relatively simple, quantitative estimation of the location accuracy is a challenging task, even when the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling, and a priori uncertainties. In this presentation we address this task for the situation in which the statistics of observational and/or modeling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we illustrate an approach based on the Shannon entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries information on the uncertainties of the solution found.
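
    A minimal sketch of the entropy diagnostic: Shannon's entropy computed on a gridded a posteriori distribution over candidate source locations. The grid and the two posteriors below are synthetic placeholders.

```python
import numpy as np

def posterior_entropy(posterior):
    """Shannon entropy H = -sum p log p of a normalized, gridded
    a posteriori distribution; larger H means a more diffuse (less
    certain) location estimate."""
    p = np.asarray(posterior, dtype=float).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Synthetic example: a sharp vs. a diffuse posterior on a 50x50 grid
x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
sharp = np.exp(-(x**2 + y**2) / 0.01)
diffuse = np.exp(-(x**2 + y**2) / 0.5)
print(posterior_entropy(sharp), posterior_entropy(diffuse))
```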

  5. Iterative image reconstruction in elastic inhomogeneous media with application to transcranial photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Poudel, Joemini; Matthews, Thomas P.; Mitsuhashi, Kenji; Garcia-Uribe, Alejandro; Wang, Lihong V.; Anastasio, Mark A.

    2017-03-01

    Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the photoacoustically induced initial pressure distribution within tissue. The PACT reconstruction problem corresponds to a time-domain inverse source problem, where the initial pressure distribution is recovered from the measurements recorded on an aperture outside the support of the source. A major challenge in transcranial PACT brain imaging is to compensate for aberrations in the measured data due to the propagation of the photoacoustic wavefields through the skull. To properly account for these effects, a wave equation-based inversion method should be employed that can model the heterogeneous elastic properties of the medium. In this study, an iterative image reconstruction method for 3D transcranial PACT is developed based on the elastic wave equation. To accomplish this, a forward model based on a finite-difference time-domain discretization of the elastic wave equation is established. Subsequently, gradient-based methods are employed for computing penalized least squares estimates of the initial source distribution that produced the measured photoacoustic data. The developed reconstruction algorithm is validated and investigated through computer-simulation studies.

  6. Numerical modeling of heat transfer in the fuel oil storage tank at thermal power plant

    NASA Astrophysics Data System (ADS)

    Kuznetsova, Svetlana A.

    2015-01-01

    We present results of mathematical modeling of the convection of a viscous incompressible fluid in a rectangular cavity with conducting walls of finite thickness, with a local heat source at the bottom of the domain and convective heat exchange with the environment. A mathematical model is formulated in terms of the dimensionless variables "stream function - vorticity - temperature" in a Cartesian coordinate system. The results show the distributions of the hydrodynamic parameters and temperature obtained using different boundary conditions at the local heat source.

  7. Modeling the Absorbing Aerosol Index

    NASA Technical Reports Server (NTRS)

    Penner, Joyce; Zhang, Sophia

    2003-01-01

    We propose a scheme to model the absorbing aerosol index (AI) and improve biomass carbon inventories by optimizing the difference between the TOMS aerosol index and the modeled AI with an inverse model. Two absorbing aerosol types are considered: biomass carbon and mineral dust. The a priori biomass carbon source was generated by Liousse et al. [1996]. Mineral dust emission is parameterized according to surface wind and soil moisture using the method developed by Ginoux [2000]. In this initial study, the coupled CCM1 and GRANTOUR model was used to determine the aerosol spatial and temporal distribution. With the modeled aerosol concentrations and optical properties, we calculate the radiance at the top of the atmosphere at 340 nm and 380 nm with a radiative transfer model. The contrast of the radiance at these two wavelengths is used to calculate the AI, and we then compare the modeled AI with the TOMS AI. This paper reports our initial modeling of the AI and its comparison with TOMS Nimbus 7 AI. For our follow-on project we will model the global AI with the aerosol spatial and temporal distribution recomputed from the IMPACT model and DAO GEOS-1 meteorology fields. We will then build an inverse model, which applies a Bayesian inverse technique to optimize the agreement between model and observational data. The inverse model will tune the biomass burning source strength to reduce the difference between the modeled AI and the TOMS AI. Further simulations with a posteriori biomass carbon sources from the inverse model will be carried out, and the results will be compared to available observations such as surface concentrations and aerosol optical depth.

  8. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    NASA Astrophysics Data System (ADS)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
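
    As a stripped-down illustration of the multiplier idea, the sketch below samples a single recharge multiplier with a random-walk Metropolis step against a toy linear head model. The actual study couples MODFLOW with the DREAM sampler; nothing here reproduces either, and all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_heads(recharge_mult, base_recharge=1.0):
    """Toy stand-in for a groundwater model: head responds linearly
    to recharge. A real application would call MODFLOW here."""
    return 10.0 + 2.0 * recharge_mult * base_recharge

obs = 12.4            # observed head (synthetic)
sigma = 0.3           # observation error standard deviation

def log_post(mult):
    if not 0.1 <= mult <= 10.0:          # uniform prior bounds on the multiplier
        return -np.inf
    resid = obs - simulate_heads(mult)
    return -0.5 * (resid / sigma) ** 2   # Gaussian log-likelihood

chain, m = [], 1.0
lp = log_post(m)
for _ in range(5000):                    # random-walk Metropolis
    prop = m + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        m, lp = prop, lp_prop
    chain.append(m)
print("posterior mean multiplier:", np.mean(chain[1000:]))  # discard burn-in
```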

  9. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations.
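
    The canonical form described above can be illustrated numerically: weight model samples by exp(-βE) and solve for the sensitivity factor β that matches a specified expected error. All inputs below are synthetic.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(3)
errors = rng.gamma(2.0, 1.0, size=1000)  # error-function values of model samples
target = 1.5                             # specified expectation of the error

def mean_error(beta):
    """Expected error under p_i ~ exp(-beta * E_i)."""
    w = np.exp(-beta * (errors - errors.min()))  # shift for numerical stability
    w /= w.sum()
    return np.sum(w * errors)

# Solve <E>_beta = target for the sensitivity factor beta
beta = brentq(lambda b: mean_error(b) - target, 1e-6, 50.0)
weights = np.exp(-beta * (errors - errors.min()))
weights /= weights.sum()                 # canonical-form sample weights
print(f"beta = {beta:.3f}")
```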

  10. The ALMA-PILS survey: 3D modeling of the envelope, disks and dust filament of IRAS 16293-2422

    NASA Astrophysics Data System (ADS)

    Jacobsen, S. K.; Jørgensen, J. K.; van der Wiel, M. H. D.; Calcutt, H.; Bourke, T. L.; Brinch, C.; Coutens, A.; Drozdovskaya, M. N.; Kristensen, L. E.; Müller, H. S. P.; Wampfler, S. F.

    2018-04-01

    Context. The Class 0 protostellar binary IRAS 16293-2422 is an interesting target for (sub)millimeter observations due both to the rich chemistry toward the two main components of the binary and to its complex morphology. Its proximity to Earth allows the study of its physical and chemical structure on solar system scales using high angular resolution observations. Such data reveal a complex morphology that cannot be accounted for in traditional, spherical 1D models of the envelope. Aims: The purpose of this paper is to study the environment of the two components of the binary through 3D radiative transfer modeling and to compare the results with data from the Atacama Large Millimeter/submillimeter Array. Such comparisons can be used to constrain the protoplanetary disk structures, the luminosities of the two components of the binary and the chemistry of simple species. Methods: We present 13CO, C17O and C18O J = 3-2 observations from the ALMA Protostellar Interferometric Line Survey (PILS), together with a qualitative study of the dust and gas density distribution of IRAS 16293-2422. A 3D dust and gas model including disks and a dust filament between the two protostars is constructed that qualitatively reproduces the dust continuum and gas line emission. Results: Radiative transfer modeling in our sampled parameter space suggests that, while the disk around source A could not be constrained, the disk around source B has to be vertically extended. This puffed-up structure can be obtained both with a protoplanetary disk model with an unexpectedly high scale height and with the density solution of an infalling, rotating collapse. Combined constraints on our 3D model, from the observed dust continuum and CO isotopologue emission between the sources, corroborate that source A should be at least six times more luminous than source B. We also demonstrate that the volume of the high-temperature regions where complex organic molecules arise is sensitive to whether the total luminosity is in a single radiation source or distributed into two sources, affecting the interpretation of earlier chemical modeling efforts of the IRAS 16293-2422 hot corino, which used a single-source approximation. Conclusions: Radiative transfer modeling of sources A and B, with the density solution of an infalling, rotating collapse or a protoplanetary disk model, can match the constraints for the disk-like emission around sources A and B from the observed dust continuum and CO isotopologue gas emission. If a protoplanetary disk model is used around source B, it has to have an unusually high scale height in order to reach the dust continuum peak emission value while fulfilling the other observational constraints. Our 3D model requires source A to be much more luminous than source B; LA ≈ 18 L⊙ and LB ≈ 3 L⊙.

  11. A New Seismic Hazard Model for Mainland China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Xu, X.; Chen, G.; Cheng, J.; Magistrale, H.; Shen, Z. K.

    2017-12-01

    We are developing a new seismic hazard model for Mainland China by integrating historical earthquake catalogs, geological faults, geodetic GPS data, and geology maps. To build the model, we construct an Mw-based homogeneous historical earthquake catalog spanning from 780 B.C. to the present, create fault models from active fault data, and derive a strain rate model based on the most complete GPS measurements and a new strain derivation algorithm. We divide China and the surrounding regions into about 20 large seismic source zones. For each zone, a tapered Gutenberg-Richter (TGR) magnitude-frequency distribution is used to model the seismic activity rates. The a- and b-values of the TGR distribution are calculated using observed earthquake data, while the corner magnitude is constrained independently using the seismic moment rate inferred from the geodetically based strain rate model. Small and medium-sized earthquakes are distributed within the source zones following the location and magnitude patterns of historical earthquakes. Some of the larger earthquakes are distributed onto active faults, based on their geological characteristics such as slip rate, fault length, down-dip width, and various paleoseismic data. The remaining larger earthquakes are then placed into the background. A new set of magnitude-rupture scaling relationships is developed based on earthquake data from China and its vicinity. We evaluate and select appropriate ground motion prediction equations by comparing them with observed ground motion data and performing residual analysis. To implement the modeling workflow, we develop a tool that builds upon the functionalities of GEM's Hazard Modeler's Toolkit. The GEM OpenQuake software is used to calculate seismic hazard at various ground motion periods and various return periods. To account for site amplification, we construct a site condition map based on geology. The resulting new seismic hazard maps can be used for seismic risk analysis and management.
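
    As a rough illustration of the TGR bookkeeping, the sketch below (all a-, b- and corner-magnitude values invented) evaluates cumulative annual rates under a tapered Gutenberg-Richter distribution expressed in seismic moment, using the standard Hanks-Kanamori moment-magnitude conversion.

```python
import numpy as np

def moment_from_mw(mw):
    """Seismic moment (N*m) from moment magnitude (Hanks-Kanamori)."""
    return 10 ** (1.5 * mw + 9.05)

def tgr_cumulative_rate(mw, a, b, mw_corner, mw_min=4.0):
    """Annual rate of events with magnitude >= mw under a tapered
    Gutenberg-Richter distribution: a GR power law in moment
    (beta = 2b/3), tapered exponentially above the corner moment."""
    beta = 2.0 / 3.0 * b
    m, m_min, m_c = (moment_from_mw(x) for x in (mw, mw_min, mw_corner))
    rate_min = 10 ** (a - b * mw_min)   # GR rate at the minimum magnitude
    return rate_min * (m_min / m) ** beta * np.exp((m_min - m) / m_c)

# Illustrative zone: a = 4.5, b = 0.9, corner magnitude 8.0
for mw in (5.0, 6.0, 7.0, 8.0):
    print(f"M >= {mw}: {tgr_cumulative_rate(mw, 4.5, 0.9, 8.0):.4f} /yr")
```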

  12. Development of an atmospheric N2O isotopocule model and optimization procedure, and application to source estimation

    NASA Astrophysics Data System (ADS)

    Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.

    2015-07-01

    This paper presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.

  13. Development of an atmospheric N2O isotopocule model and optimization procedure, and application to source estimation

    NASA Astrophysics Data System (ADS)

    Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.

    2015-12-01

    This work presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.

  14. Development of a composite line source emission model for traffic interrupted microenvironments and its application in particle number emissions at a bus station

    NASA Astrophysics Data System (ADS)

    Wang, Lina; Jayaratne, Rohan; Heuff, Darlene; Morawska, Lidia

    A composite line source emission (CLSE) model was developed to quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. The model took into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to the various driving conditions (cruise, decelerate, idle and accelerate), and it utilised multiple representative segments to capture the emission distribution of real vehicle flow accurately. Hence, the model was able to quickly quantify the time spent in each segment within the considered zone, as well as the composition and position of the requisite segments based on the vehicle fleet information, which not only helped to quantify the enhanced emissions at critical locations but also helped to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bi-directional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance is of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, while no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. Although the CLSE model is intended to be applied in traffic management and transport analysis systems for the evaluation of exposure, as well as for the simulation of vehicle emissions in traffic-interrupted microenvironments, the bus station model can also be used to supply initial source definitions to future dispersion models.
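
    A toy sketch of the composite bookkeeping: each segment's emission is the sum, over driving modes, of the time spent in the mode times a mode-specific emission rate. All rates and times below are invented for illustration.

```python
# Mode-specific particle number emission rates (particles/s); invented values
RATES = {"cruise": 1e12, "decelerate": 5e11, "idle": 2e11, "accelerate": 5e12}

def segment_emission(mode_times):
    """Total particles emitted in one segment, given the seconds spent
    in each driving mode by all vehicles traversing it."""
    return sum(RATES[mode] * t for mode, t in mode_times.items())

# Platform divided into segments from rear to front; buses accelerate
# at the front end, which dominates the total.
segments = [
    {"idle": 120.0, "decelerate": 10.0},   # rear: mostly idling
    {"idle": 30.0, "accelerate": 5.0},     # middle
    {"accelerate": 20.0, "cruise": 5.0},   # front: acceleration dominates
]
for i, s in enumerate(segments):
    print(f"segment {i}: {segment_emission(s):.2e} particles")
```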

  15. Delineating floodplain and upland areas for hydrologic models: A comparison of methods

    USDA-ARS?s Scientific Manuscript database

    A spatially distributed representation of basin hydrology and transport processes in eco-hydrological models facilitates the identification of critical source areas and the placement of management and conservation measures. Floodplains are critical landscape features that differ from neighboring up...

  16. Watershed Management Tool for Selection and Spatial Allocation of Non-Point Source Pollution Control Practices

    EPA Science Inventory

    Distributed-parameter watershed models are often utilized for evaluating the effectiveness of sediment and nutrient abatement strategies through the traditional calibrate → validate → predict approach. The applicability of the method is limited due to modeling approximations. In ...

  17. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution rather than a general normal distribution, avoiding deviations in the solutions caused by unrealistic distributional assumptions. The agricultural system of the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The results show that the proposed model could help decision makers design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
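
    The key step can be illustrated with a short sketch: under a log-normal random variable, an individual chance constraint has a closed-form deterministic equivalent via the log-normal quantile function. The parameter values below are illustrative, not taken from the Erhai Lake case study.

        import numpy as np
        from scipy import stats

        # Chance constraint: P(load <= C) >= p, with capacity C ~ LogNormal.
        # Deterministic equivalent: load <= F_C^{-1}(1 - p), the (1-p)-quantile of C.
        # mu and sigma are the mean and std of log(C); values are illustrative.
        mu, sigma = np.log(50.0), 0.4        # e.g. allowable nutrient load, t/yr
        capacity = stats.lognorm(s=sigma, scale=np.exp(mu))

        for p in (0.80, 0.90, 0.95, 0.99):
            bound = capacity.ppf(1.0 - p)    # tightest admissible deterministic load
            print(f"satisfaction level p={p:.2f}: load <= {bound:.2f}")

    Higher satisfaction levels tighten the admissible load, which is the economy-reliability trade-off that the interval solutions explore.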

  18. The Mars water cycle

    NASA Technical Reports Server (NTRS)

    Davies, D. W.

    1981-01-01

    A model has been developed to test the hypothesis that the observed seasonal and latitudinal distribution of water on Mars is controlled by the sublimation and condensation of surface ice deposits in the Arctic and Antarctic, and the meridional transport of water vapor. Besides reproducing the observed water vapor distribution, the model correctly reproduces the presence of a large permanent ice cap in the Arctic and not in the Antarctic. No permanent ice reservoirs are predicted in the temperate or equatorial zones. Wintertime ice deposits in the Arctic are shown to be the source of the large water vapor abundances observed in the Arctic summertime, and the moderate water vapor abundances in the northern temperate region. Model calculations suggest that a year without dust storms results in very little change in the water vapor distribution. The current water distribution appears to be the equilibrium distribution for present atmospheric conditions.

  19. Experimental Verification of Modeled Thermal Distribution Produced by a Piston Source in Physiotherapy Ultrasound

    PubMed Central

    Lopez-Haro, S. A.; Leija, L.

    2016-01-01

    Objectives. To present a quantitative comparison of thermal patterns produced by the piston-in-a-baffle approach with those generated by a physiotherapy ultrasonic device, and to show the dependency between thermal patterns and acoustic intensity distributions. Methods. The finite element (FE) method was used to model an ideal acoustic field and the resulting thermal pattern, which were compared with the experimental acoustic and temperature distributions produced by a real ultrasonic applicator. A thermal model using the measured acoustic profile as input is also presented for comparison. Temperature measurements were carried out with thermocouples inserted in a muscle phantom; the insertion sites were monitored with ultrasound imaging. Results. Modeled and measured thermal profiles were compared within the first 10 cm of depth. The ideal acoustic field did not adequately represent the measured field, yielding different temperature profiles (errors of 10% to 20%). The experimental field was concentrated near the transducer, producing a region of higher temperatures, whereas the modeled ideal temperature was linearly distributed along the depth. The error was reduced to 7% when the measured acoustic field was introduced as the input variable in the FE temperature modeling. Conclusions. Temperature distributions are strongly related to the acoustic field distributions. PMID:27999801
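
    For reference, the ideal piston-in-a-baffle field has a closed-form on-axis pressure, p(z) = 2ρc·u0·|sin((k/2)(√(z² + a²) − z))| (a textbook result, e.g. Kinsler et al.), from which an intensity profile follows. The sketch below evaluates it for parameters typical of physiotherapy ultrasound; the values are assumptions, not those of the device studied.

        import numpy as np

        # On-axis pressure magnitude of an ideal baffled circular piston:
        #   p(z) = 2 * rho * c * u0 * |sin(k/2 * (sqrt(z^2 + a^2) - z))|
        # Parameter values are illustrative, not those of the study.
        rho, c = 1000.0, 1540.0        # tissue-like density (kg/m^3), sound speed (m/s)
        f, a, u0 = 1.0e6, 0.0125, 0.1  # 1 MHz, 12.5 mm piston radius, surface velocity (m/s)
        k = 2 * np.pi * f / c

        z = np.linspace(0.005, 0.10, 20)               # 0.5-10 cm depth
        p = 2 * rho * c * u0 * np.abs(np.sin(0.5 * k * (np.sqrt(z**2 + a**2) - z)))
        intensity = p**2 / (2 * rho * c)               # plane-wave intensity, W/m^2

        for zi, Ii in zip(z, intensity):
            print(f"z = {zi*100:5.2f} cm  I = {Ii/1.0e4:6.2f} W/cm^2")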

  20. Exact Solution of Population Redistributions in a Migration Model

    NASA Astrophysics Data System (ADS)

    Wang, Xue-Wen; Zhang, Li-Jie; Yang, Guo-Hong; Xu, Xin-Jian

    2013-10-01

    We study a migration model in which individuals migrate from one community to another. The source community i and the destination community j are chosen with probabilities proportional to powers of their populations, k_i^α and k_j^β, respectively. Both analytical calculation and numerical simulation show that the population distribution of communities in stationary states is determined by the parameters α and β. The distribution is broadly homogeneous, with a characteristic size, if α > β, whereas for α < β the distribution is highly heterogeneous, with the emergence of a condensation phenomenon. At the boundary between the two regimes, α = β, the distribution gradually shifts from nonmonotonic (α < 0) to scale-free (α > 0).
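
    A minimal agent-level simulation reproduces the two regimes; the community sizes, exponents, and step count below are illustrative choices, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def migrate(k, alpha, beta, steps=50000):
            """Monte Carlo migration: at each step one individual moves from a
            source community chosen with probability ~ k_i^alpha to a
            destination chosen with probability ~ k_j^beta."""
            k = k.astype(float).copy()
            for _ in range(steps):
                ps = k**alpha
                src = rng.choice(k.size, p=ps / ps.sum())
                pd = k**beta
                dst = rng.choice(k.size, p=pd / pd.sum())
                k[src] -= 1
                k[dst] += 1
            return k

        k0 = np.full(50, 20)   # 50 communities, 20 individuals each
        # alpha > beta: distribution stays homogeneous around a characteristic size.
        print("alpha > beta:", np.sort(migrate(k0, 1.5, 0.5))[-5:])
        # alpha < beta: condensation, one community accumulates most individuals.
        print("alpha < beta:", np.sort(migrate(k0, 0.5, 1.5))[-5:])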

  1. The chlorine budget of the present-day atmosphere - A modeling study

    NASA Technical Reports Server (NTRS)

    Weisenstein, Debra K.; Ko, Malcolm K. W.; Sze, Nien-Dak

    1992-01-01

    The contribution of source gases to the total amount of inorganic chlorine (ClY) is examined analytically with a time-dependent model employing 11 source gases. The source-gas emission data are described, and the modeling methodology is set forth with attention given to the data interpretation. The abundances and distributions are obtained for all 11 source gases, with corresponding ClY production rates and mixing ratios. It is shown that the ClY production rate and the ClY mixing ratio for each source gas are spatially dependent, and the change in the relative contributions from 1950 to 1990 is given. Ozone changes in the past decade are characterized by losses in the polar and midlatitude lower stratosphere. The values for CFC-11, CCl4, and CH3CCl3 suggest that their contributions in the lower stratosphere are larger than steady-state estimates based on surface concentrations would indicate.

  2. Sensitivity of the coastal tsunami simulation to the complexity of the 2011 Tohoku earthquake source model

    NASA Astrophysics Data System (ADS)

    Monnier, Angélique; Loevenbruck, Anne; Gailler, Audrey; Hébert, Hélène

    2016-04-01

    The 11 March 2011 Tohoku-Oki event, both the earthquake and the tsunami, is exceptionally well documented. A wide range of onshore and offshore data has been recorded by seismic, geodetic, ocean-bottom pressure and sea level sensors. Along with these numerous observations, advances in inversion techniques and computing facilities have led to many source studies. Inversion of rupture parameters, such as slip distribution and rupture history, permits estimation of the complex coseismic seafloor deformation. The most relevant coseismic source models from the numerous published seismic source studies are tested here. Comparing the signals predicted from both static and kinematic ruptures with the offshore and coastal measurements helps determine which source model should be used to obtain the most consistent coastal tsunami simulations. This work is funded by the TANDEM project, reference ANR-11-RSNR-0023-01 of the French Programme Investissements d'Avenir (PIA 2014-2018).

  3. EMITTING ELECTRONS AND SOURCE ACTIVITY IN MARKARIAN 501

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mankuzhiyil, Nijil; Ansoldi, Stefano; Persic, Massimo

    2012-07-10

    We study the variation of the broadband spectral energy distribution (SED) of the BL Lac object Mrk 501 as a function of source activity, from quiescent to flaring. Through χ²-minimization we model eight simultaneous SED data sets with a one-zone synchrotron self-Compton (SSC) model, and examine how model parameters vary with source activity. The emerging variability pattern of Mrk 501 is complex, with the Compton component arising from γ-e scatterings that sometimes are (mostly) Thomson and sometimes (mostly) extreme Klein-Nishina. This can be seen from the variation of the Compton to synchrotron peak distance according to source state. The underlying electron spectra are faint/soft in quiescent states and bright/hard in flaring states. A comparison with Mrk 421 suggests that the typical values of the SSC parameters are different in the two sources; however, in both jets the energy density is particle-dominated in all states.

  4. Size distribution and coating thickness of black carbon from the Canadian oil sands operations

    NASA Astrophysics Data System (ADS)

    Cheng, Yuan; Li, Shao-Meng; Gordon, Mark; Liu, Peter

    2018-02-01

    Black carbon (BC) plays an important role in the Earth's climate system. However, parameterizations of BC size and mixing state have not been well addressed in aerosol-climate models, introducing substantial uncertainties into the estimation of radiative forcing by BC. In this study, we focused on BC emissions from the oil sands (OS) surface mining activities in northern Alberta, based on an aircraft campaign conducted over the Athabasca OS region in 2013. A total of 14 flights were made over the OS source area, in which the aircraft was typically flown in a four- or five-sided polygon pattern along flight tracks encircling an OS facility. Another 3 flights were performed downwind of the OS source area, each of which involved at least three intercepting locations where the well-mixed OS plume was measured along flight tracks perpendicular to the wind direction. Comparable size distributions were observed for refractory black carbon (rBC) over and downwind of the OS facilities, with rBC mass median diameters (MMDs) between ~135 and 145 nm that were characteristic of fresh urban emissions. This MMD range corresponded to rBC number median diameters (NMDs) of ~60-70 nm, approximately 100% higher than the NMD settings in some aerosol-climate models. The typical in- and out-of-plume segments of a flight, which had different rBC concentrations and photochemical ages, showed consistent rBC size distributions in terms of MMD, NMD and the corresponding distribution widths. Moreover, rBC size distributions remained unchanged at different downwind distances from the source area, suggesting that atmospheric aging would not necessarily change the rBC size distribution. However, aging did influence the rBC mixing state: the coating thickness for rBC cores in the diameter range of 130-160 nm nearly doubled (from ~20 to 40 nm) within 3 h as the OS plume was transported 90 km from the source area.
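
    The MMD-to-NMD conversion quoted above follows the Hatch-Choate relation for a lognormal size distribution, NMD = MMD·exp(−3·ln²(GSD)). A short sketch, with an assumed geometric standard deviation (the abstract does not quote one):

        import numpy as np

        # Hatch-Choate relation for a lognormal size distribution.
        # The geometric standard deviation (GSD) below is an assumed value.
        def mmd_to_nmd(mmd_nm, gsd):
            return mmd_nm * np.exp(-3.0 * np.log(gsd)**2)

        for mmd in (135.0, 145.0):
            print(f"MMD {mmd:.0f} nm -> NMD {mmd_to_nmd(mmd, gsd=1.65):.0f} nm")

    With a GSD of about 1.65, MMDs of 135-145 nm map onto NMDs of roughly 64-68 nm, consistent with the ~60-70 nm range reported above.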

  5. The random energy model in a magnetic field and joint source channel coding

    NASA Astrophysics Data System (ADS)

    Merhav, Neri

    2008-09-01

    We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.

  6. Identification of potential rockfall source areas at a regional scale using a DEM-based geomorphometric analysis

    NASA Astrophysics Data System (ADS)

    Loye, A.; Jaboyedoff, M.; Pedrazzini, A.

    2009-10-01

    The availability of high resolution Digital Elevation Models (DEMs) at a regional scale enables the analysis of topography with a high level of detail, making a DEM-based geomorphometric approach more accurate for detecting potential rockfall sources. Potential rockfall source areas are identified according to the slope angle distribution deduced from the high resolution DEM, crossed with other information extracted from geological and topographic maps in GIS format. The slope angle distribution can be decomposed into several Gaussian distributions that can be considered characteristic of morphological units: rock cliffs, steep slopes, footslopes and plains. Terrain is considered a potential rockfall source when its slope angle lies above a threshold, defined where the Gaussian distribution of the morphological unit "rock cliffs" becomes dominant over that of "steep slopes". In addition to this analysis, the cliff outcrops indicated by the topographic maps were added. These outcrops contain "flat areas", however, so only slope angle values above the mode of the Gaussian distribution of the morphological unit "steep slopes" were considered. An application of this method over the entire Canton of Vaud (3200 km2), Switzerland, is presented. The results were compared with rockfall sources observed in the field and from orthophoto analysis in order to validate the method. Finally, the influence of the cell size of the DEM is inspected by applying the methodology at six different DEM resolutions.
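
    The decomposition can be sketched as a Gaussian mixture fit followed by a search for the crossover angle at which the "rock cliffs" component dominates the "steep slopes" one. The synthetic slope sample below stands in for a real DEM; the component count and angles are illustrative.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Synthetic slope-angle sample (degrees) standing in for DEM-derived slopes.
        rng = np.random.default_rng(1)
        slopes = np.concatenate([
            rng.normal(5, 3, 4000),    # plains
            rng.normal(25, 7, 3000),   # footslopes / steep slopes
            rng.normal(55, 8, 1000),   # rock cliffs
        ]).clip(0, 90).reshape(-1, 1)

        gmm = GaussianMixture(n_components=3, random_state=0).fit(slopes)
        order = np.argsort(gmm.means_.ravel())
        cliff, steep = order[-1], order[-2]

        # Scan angles for the crossover where the cliff component dominates.
        angles = np.linspace(0, 90, 901).reshape(-1, 1)
        resp = gmm.predict_proba(angles)
        threshold = angles[np.argmax(resp[:, cliff] > resp[:, steep])][0]
        print(f"rockfall source threshold ~ {threshold:.1f} degrees")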

  7. Physical transport properties of marine microplastic pollution

    NASA Astrophysics Data System (ADS)

    Ballent, A.; Purser, A.; Mendes, P. de Jesus; Pando, S.; Thomsen, L.

    2012-12-01

    Given the complexity of quantitative collection, knowledge of the distribution of microplastic pollution in many regions of the world ocean is patchy, both spatially and temporally, especially for the subsurface environment. However, with knowledge of the typical hydrodynamic behavior of waste plastic material, models predicting the dispersal of pelagic and benthic plastics from land sources into the ocean are possible. Here we investigate three aspects of plastic distribution and transport in European waters. First, we assess patterns in the distribution of plastics found in fluvial strandlines of the North Sea and how the distribution may be related to flow velocities and distance from source. Second, we model the transport of non-buoyant preproduction pellets in the Nazaré Canyon of Portugal using the MOHID system, after assessing the density, settling velocity, and critical and depositional shear stress characteristics of such waste plastics. Third, we investigate the effect of surface turbulence and high pressure on a range of marine plastic debris categories (various densities, degradation states and shapes tested) in an experimental water column simulator tank and pressure laboratory. Plastics deposited on North Sea strandlines varied greatly spatially, as a function of material composition and distance from source. Model outputs indicated that such dense preproduction pellets are likely transported up and down canyon by tidal forces, with only very minor net down-canyon movement. The behaviour of plastic fragments under turbulence varied greatly, with the dimensions of the material, as well as density, playing major determining roles. Pressure was shown to affect the hydrodynamic behaviour of only low density foam plastics, at pressures ≥ 60 bar.

  8. A diabatic circulation two-dimensional model with photochemistry - Simulations of ozone and long-lived tracers with surface sources

    NASA Technical Reports Server (NTRS)

    Stordal, F.; Isaksen, I. S. A.; Horntveth, K.

    1985-01-01

    Numerous studies have been concerned with the possibility of a reduction of the stratospheric ozone layer. Such a reduction could lead to enhanced penetration of ultraviolet (UV) radiation to the ground and, as a result, to damage to several biological processes. It is pointed out that the distributions of many trace gases, such as ozone, are governed in part by transport processes. The present investigation presents a two-dimensional photochemistry-transport model using the residual circulation. The global distributions of both ozone and components with ground sources computed with this model are in good agreement with observations, even though slow diffusion is adopted. The agreement is particularly good in the Northern Hemisphere. The results provide additional support for the idea that tracer transport in the stratosphere is mainly advective in nature.

  9. None of the above: A Bayesian account of the detection of novel categories.

    PubMed

    Navarro, Daniel J; Kemp, Charles

    2017-10-01

    Every time we encounter a new object, action, or event, there is some chance that we will need to assign it to a novel category. We describe and evaluate a class of probabilistic models that detect when an object belongs to a category that has not previously been encountered. The models incorporate a prior distribution that is influenced by the distribution of previous objects among categories, and we present 2 experiments that demonstrate that people are also sensitive to this distributional information. Two additional experiments confirm that distributional information is combined with similarity when both sources of information are available. We compare our approach to previous models of unsupervised categorization and to several heuristic-based models, and find that a hierarchical Bayesian approach provides the best account of our data. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
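
    The abstract does not spell out the prior, but a standard prior with the required property, namely sensitivity to how previous objects are distributed among categories, is the Chinese restaurant process. A minimal sketch of its predictive distribution (an illustration, not the authors' exact model):

        import numpy as np

        def crp_predictive(counts, alpha=1.0):
            """Chinese-restaurant-process predictive distribution: probability the
            next object joins each existing category, or a novel one (last entry).
            `alpha` controls the prior propensity for novel categories."""
            counts = np.asarray(counts, dtype=float)
            n = counts.sum()
            p_existing = counts / (n + alpha)
            p_novel = alpha / (n + alpha)
            return np.append(p_existing, p_novel)

        # Ten previous objects spread over three categories; the last entry is
        # the prior probability that the next object starts a novel category.
        print(crp_predictive([6, 3, 1]))   # p(novel) = 1/11

    In a full model of this kind, such a prior would be combined with a similarity-based likelihood, mirroring the finding above that people combine distributional and similarity information.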

  10. Global distribution and sources of dissolved inorganic nitrogen export to the coastal zone: Results from a spatially explicit, global model

    NASA Astrophysics Data System (ADS)

    Dumont, E.; Harrison, J. A.; Kroeze, C.; Bakker, E. J.; Seitzinger, S. P.

    2005-12-01

    Here we describe, test, and apply a spatially explicit, global model for predicting dissolved inorganic nitrogen (DIN) export by rivers to coastal waters (NEWS-DIN). NEWS-DIN was developed as part of an internally consistent suite of global nutrient export models. Modeled and measured DIN export values agree well (calibration R2 = 0.79), and NEWS-DIN is relatively free of bias. NEWS-DIN predicts: DIN yields ranging from 0.0004 to 5217 kg N km-2 yr-1 with the highest DIN yields occurring in Europe and South East Asia; global DIN export to coastal waters of 25 Tg N yr-1, with 16 Tg N yr-1 from anthropogenic sources; biological N2 fixation is the dominant source of exported DIN; and globally, and on every continent except Africa, N fertilizer is the largest anthropogenic source of DIN export to coastal waters.

  11. Modeling unsteady sound refraction by coherent structures in a high-speed jet

    NASA Astrophysics Data System (ADS)

    Kan, Pinqing; Lewalle, Jacques

    2011-11-01

    We construct a visual model for the unsteady refraction of sound waves from point sources in a Ma = 0.6 jet. The mass and inviscid momentum equations give an equation governing acoustic fluctuations, including anisotropic propagation, attenuation and sources; differences with Lighthill's equation will be discussed. On this basis, the theory of characteristics gives canonical equations for the acoustic paths from any source into the far field. We model a steady mean flow in the near-jet region including the potential core and the mixing region downstream of its collapse, and model the convection of coherent structures as traveling wave perturbations of this mean flow. For a regular distribution of point sources in this region, we present a visual rendition of fluctuating distortion, lensing and deaf spots from the viewpoint of a far-field observer. Supported in part by AFOSR Grant FA-9550-10-1-0536 and by a Syracuse University Graduate Fellowship.

  12. NLTE Model Atmospheres for Super-Soft X-ray Sources

    NASA Astrophysics Data System (ADS)

    Rauch, Thomas; Werner, Klaus

    2009-09-01

    Spectral analysis by means of fully line-blanketed Non-LTE model atmospheres has arrived at a high level of sophistication. The Tübingen NLTE Model Atmosphere Package (TMAP) is used to calculate plane-parallel NLTE model atmospheres which are in radiative and hydrostatic equilibrium. Although TMAP is not especially designed for the calculation of burst spectra of novae, spectral energy distributions (SEDs) calculated from TMAP models are well suited e.g. for abundance determinations of Super Soft X-ray Sources like nova V4743 Sgr or line identifications in observations of neutron stars with low magnetic fields in low-mass X-ray binaries (LMXBs) like EXO 0748-676.

  13. Comparing the contributions of ionospheric outflow and high-altitude production to O+ loss at Mars

    NASA Astrophysics Data System (ADS)

    Liemohn, Michael; Curry, Shannon; Fang, Xiaohua; Johnson, Blake; Fraenz, Markus; Ma, Yingjuan

    2013-04-01

    The Mars total O+ escape rate is highly dependent on both the ionospheric and high-altitude source terms. Because of their different source locations, they appear in velocity space distributions as distinct populations. The Mars Test Particle model is used (with background parameters from the BATS-R-US magnetohydrodynamic code) to simulate the transport of ions in the near-Mars space environment. Because it is a collisionless model, the MTP's inner boundary is placed at 300 km altitude for this study. The MHD values at this altitude are used to define an ionospheric outflow source of ions for the MTP. The resulting loss distributions (in both real and velocity space) from this ionospheric source term are compared against those from high-altitude ionization mechanisms, in particular photoionization, charge exchange, and electron impact ionization, each of which has its own (albeit overlapping) source region. In subsequent simulations, the MHD values defining the ionospheric outflow are systematically varied to parametrically explore possible ionospheric outflow scenarios. For the nominal MHD ionospheric outflow settings, this source contributes only 10% to the total O+ loss rate, nearly all via the central tail region. There is very little dependence of this percentage on the initial temperature, but a change in the initial density or bulk velocity directly alters this loss through the central tail. However, a density or bulk velocity increase of a factor of 10 makes the ionospheric outflow loss comparable in magnitude to the loss from the combined high-altitude sources. The spatial and velocity space distributions of escaping O+ are examined and compared for the various source terms, identifying features specific to each ion source mechanism. These results are applied to a specific Mars Express orbit and used to interpret high-altitude observations from the ion mass analyzer onboard MEX.

  14. Modeling responses of large-river fish populations to global climate change through downscaling and incorporation of predictive uncertainty

    USGS Publications Warehouse

    Wildhaber, Mark L.; Wikle, Christopher K.; Anderson, Christopher J.; Franz, Kristie J.; Moran, Edward H.; Dey, Rima; Mader, Helmut; Kraml, Julia

    2012-01-01

    Climate change operates over a broad range of spatial and temporal scales. Understanding its effects on ecosystems requires multi-scale models. For understanding effects on fish populations of riverine ecosystems, climate predicted by coarse-resolution Global Climate Models must be downscaled to Regional Climate Models, to watersheds, to river hydrology, and to population response. An additional challenge is quantifying sources of uncertainty given the highly nonlinear nature of interactions between climate variables and community level processes. We present a modeling approach for understanding and accommodating uncertainty by applying multi-scale climate models and a hierarchical Bayesian modeling framework to Midwest fish population dynamics and by linking models for system components together by formal rules of probability. The proposed hierarchical modeling approach will account for sources of uncertainty in forecasts of community or population response. The goal is to evaluate the potential distributional changes in an ecological system, given distributional changes implied by a series of linked climate and system models under various emissions/use scenarios. This understanding will aid evaluation of management options for coping with global climate change. In our initial analyses, we found that predicted pallid sturgeon population responses were dependent on the climate scenario considered.

  15. Full implementation of a distributed hydrological model based on check dam trapped sediment volumes

    NASA Astrophysics Data System (ADS)

    Bussi, Gianbattista; Francés, Félix

    2014-05-01

    Lack of hydrometeorological data is one of the most compelling limitations to the implementation of distributed environmental models. Mediterranean catchments, in particular, are characterised by high spatial variability of meteorological phenomena and soil characteristics, which may prevent transferring model calibrations from a fully gauged catchment to a totally or partially ungauged one. For this reason, new sources of data are required in order to extend the use of distributed models to non-monitored or low-monitored areas. An important source of information on the hydrological and sediment cycle is represented by sediment deposits accumulated at the bottom of reservoirs. Since the 1960s, reservoir sedimentation volumes have been used as proxy data for the estimation of inter-annual total sediment yield rates or, in more recent years, as a reference measure of sediment transport for sediment model calibration and validation. Nevertheless, the possibility of using such data to constrain the calibration of a hydrological model has not been exhaustively investigated so far. In this study, the use of nine check dam reservoir sedimentation volumes for hydrological and sedimentological model calibration and spatio-temporal validation was examined. Check dams are common structures in Mediterranean areas and are a potential source of spatially distributed information on both the hydrological and the sediment cycle. In this case study, the TETIS hydrological and sediment model was implemented in a medium-size Mediterranean catchment (Rambla del Poyo, Spain) by taking advantage of sediment deposits accumulated behind the check dams located in the catchment headwaters. Reservoir trap efficiency was taken into account by coupling the TETIS model with a pond trap efficiency model. The model was calibrated by adjusting some of its parameters in order to reproduce the total sediment volume accumulated behind one check dam. The model was then spatially validated by obtaining the simulated sedimentation volume at the other eight check dams and comparing it to the observed sedimentation volumes. Lastly, the simulated water discharge at the catchment outlet was compared with observed water discharge records in order to check the hydrological sub-model behaviour. Model results provided highly valuable information concerning the spatial distribution of soil erosion and sediment transport. Spatial validation of the sediment sub-model provided very good results at seven check dams out of nine. This study shows that check dams can also be a useful tool for constraining hydrological model calibration, as model results agree with water discharge observations; the hydrological model validation at a downstream water flow gauge obtained a Nash-Sutcliffe efficiency of 0.8. This technique is applicable to all catchments with check dams, and only requires rainfall and temperature data and soil characteristics maps.
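
    The validation statistic quoted above, the Nash-Sutcliffe efficiency, is computed as follows; the discharge series in this example are illustrative, not data from the Rambla del Poyo.

        import numpy as np

        def nash_sutcliffe(observed, simulated):
            """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
            1 is a perfect fit; 0 means no better than the observed mean."""
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
                (observed - observed.mean()) ** 2)

        # Illustrative discharge series (m^3/s), not data from the study.
        q_obs = np.array([1.2, 3.4, 8.9, 5.1, 2.2, 1.0])
        q_sim = np.array([1.0, 3.9, 8.1, 5.6, 2.6, 1.1])
        print(f"NSE = {nash_sutcliffe(q_obs, q_sim):.2f}")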

  16. Bayesian models for comparative analysis integrating phylogenetic uncertainty.

    PubMed

    de Villemereuil, Pierre; Wells, Jessie A; Edwards, Robert D; Blomberg, Simon P

    2012-06-28

    Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language.

  17. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    PubMed Central

    2012-01-01

    Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language. PMID:22741602

  18. Development of surrogate models for the prediction of the flow around an aircraft propeller

    NASA Astrophysics Data System (ADS)

    Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros

    2018-05-01

    In the present work, the derivation of two surrogate models (SMs) for the flow around a propeller for small aircraft is presented. Both methodologies use functions derived from computations with the detailed propeller geometry, performed using the k-ω shear stress transport model for turbulence. In the SMs, the propeller is modelled in a computational domain of disk-like geometry, with source terms introduced in the momentum equations. In the first SM, the source terms are polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis is used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence than the detailed model while also providing results closer to the available operational data. The regression-based model was the most accurate and required less computational time for convergence.
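
    The first surrogate can be sketched as an actuator-disk region whose momentum source terms are polynomials in radius. The coefficients below are hypothetical stand-ins for values that, in the paper, come from fits to detailed-geometry computations.

        import numpy as np

        # Illustrative actuator-disk momentum sources as polynomials of
        # normalised radius. Coefficients are hypothetical placeholders.
        thrust_coeffs = [0.0, 4.0e3, -2.5e3]   # axial source (N/m^3), per power of r
        swirl_coeffs  = [0.0, 1.5e3, -1.0e3]   # tangential source (N/m^3)

        def momentum_sources(r, r_tip=0.5):
            """Axial and tangential momentum sources at radius r (m) inside the disk."""
            x = r / r_tip                       # normalised radius
            f_axial = sum(c * x**n for n, c in enumerate(thrust_coeffs))
            f_swirl = sum(c * x**n for n, c in enumerate(swirl_coeffs))
            return f_axial, f_swirl

        for r in np.linspace(0.1, 0.5, 5):
            fa, fs = momentum_sources(r)
            print(f"r = {r:.2f} m  axial {fa:8.1f}  swirl {fs:8.1f} N/m^3")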

  19. Analysis of geodetic interseismic coupling models to estimate tsunami inundation and runup: a study case of Maule seismic gap, Chile

    NASA Astrophysics Data System (ADS)

    González-Carrasco, J. F.; Gonzalez, G.; Aránguiz, R.; Catalan, P. A.; Cienfuegos, R.; Urrutia, A.; Shrivastava, M. N.; Yagi, Y.; Moreno, M.

    2015-12-01

    Tsunami inundation maps are a powerful tool for designing evacuation plans for coastal communities, and they can additionally guide territorial planning and the assessment of structural damage to port facilities and critical infrastructure (Borrero et al., 2003; Barberopoulou et al., 2011; Power et al., 2012; Mueller et al., 2015). The accuracy of inundation estimates is highly correlated with the tsunami initial conditions, e.g. seafloor vertical deformation, displaced water volume and potential energy (Bolshakova et al., 2011). Usually, the initial conditions are estimated using homogeneous rupture models based on a historical worst-case scenario. However, tsunamigenic events on the central Chilean continental margin have shown heterogeneous source slip distributions, with patches of high slip correlated with fully coupled interseismic zones (Moreno et al., 2012). The main objective of this work is to evaluate the predictive capacity of interseismic coupling models based on geodetic data, comparing them with a homogeneous fault slip model constructed using scaling laws (Blaser et al., 2010), to estimate inundation and runup in coastal areas. To test our hypothesis we select the Maule seismic gap, where the last large tsunamigenic earthquake in the Chilean subduction zone occurred, using the interseismic coupling (ISC) models proposed by Moreno et al., 2011 and Métois et al., 2013. We generate a slip deficit distribution to build a tsunami source, supported by geological information such as slab depth (Hayes et al., 2012), strike, rake and dip (Dziewonski et al., 1981; Ekström et al., 2012), and model tsunami generation, propagation and shoreline impact using Neowave 2D (Yamazaki et al., 2009). We compare the Mw 8.8 Maule tsunami scenario based on the coseismic slip distribution proposed by Moreno et al., 2012 with the homogeneous and heterogeneous models, assessing the accuracy of our results against sea level time series and regional runup data (Figure 1). Estimating the tsunami source from ISC models can help improve tsunami threat analysis, as it rests on a more realistic slip distribution.

  20. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
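
    The way a calibrated TTD produces tracer predictions can be sketched as a convolution of the input history with the travel time distribution. Here an exponential TTD stands in for the paper's analytical models, and the NO3− input history is illustrative.

        import numpy as np

        # Conservative tracer output as the convolution of the input history
        # with the travel time distribution: c_out(t) = sum_k c_in(t - tau_k) g(tau_k) dt
        dt = 1.0                                # years
        tau = np.arange(0, 60, dt)
        mean_age = 15.0
        g = np.exp(-tau / mean_age) / mean_age  # exponential TTD, integrates to ~1

        years = np.arange(1950, 2011)
        c_in = np.interp(years, [1950, 1980, 2010], [1.0, 8.0, 6.0])  # NO3- input, mg/L

        c_out = [np.sum(np.interp(t - tau, years, c_in, left=0.0) * g * dt)
                 for t in years]
        for t, c in zip(years[::15], np.array(c_out)[::15]):
            print(t, f"{c:.2f} mg/L")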

  1. Sensitivity of stream water age to climatic variability and land use change: implications for water quality

    NASA Astrophysics Data System (ADS)

    Soulsby, Chris; Birkel, Christian; Geris, Josie; Tetzlaff, Doerthe

    2016-04-01

    Advances in the use of hydrological tracers and their integration into rainfall-runoff models are facilitating improved quantification of stream water age distributions. This is of fundamental importance to understanding water quality dynamics over both short and long time scales, particularly as water quality parameters are often associated with water sources of markedly different ages. For example, legacy nitrate pollution may reflect deeper waters that have resided in catchments for decades, whilst more dynamic parameters from anthropogenic sources (e.g. P, pathogens, etc.) are mobilised by very young (<1 day) near-surface water sources. It is increasingly recognised that stream water age distributions are non-stationary in both the short term (i.e. event dynamics) and the longer term (i.e. in relation to hydroclimatic variability). This provides a crucial context for interpreting water quality time series. Here, we use longer-term (>5 year), high resolution (daily) isotope time series in modelling studies for different catchments to show how variable stream water age distributions can result from hydroclimatic variability, and the implications for understanding water quality. We also use examples from catchments undergoing rapid urbanisation to show how the resulting age distributions of stream water change in a predictable way as a result of modified flow paths. The implications for the management of water quality in urban catchments are discussed.

  2. Modeling diffuse phosphorus emissions to assist in best management practice designing

    NASA Astrophysics Data System (ADS)

    Kovacs, Adam; Zessner, Matthias; Honti, Mark; Clement, Adrienne

    2010-05-01

    A diffuse emission modeling tool has been developed that is appropriate to support decision-making in watershed management. The PhosFate (Phosphorus Fate) tool allows planning best management practices (BMPs) in catchments and simulating their possible impacts on phosphorus (P) loads. PhosFate is a simple fate model for calculating diffuse P emissions and their transport within a catchment. The model is a semi-empirical, catchment-scale, distributed-parameter and long-term (annual) average model. It has two main parts: (a) the emission model and (b) the transport model. The main input data are digital maps (elevation, soil types and land-use categories), statistical data (crop yields, animal numbers, fertilizer amounts and precipitation distribution) and point information (precipitation, meteorology, soil humus content, point source emissions and reservoir data). The emission model calculates the diffuse P emissions at their source. It computes the basic elements of the hydrology as well as the soil loss, determines the accumulated P surplus of the topsoil, and distinguishes the dissolved and particulate P forms. Emissions are calculated for the different pathways (surface runoff, erosion and leaching). The main outputs are the spatial distributions (cell values) of the runoff components, the soil loss and the P emissions within the catchment. The transport model joins the independent cells based on the flow tree and follows the further fate of emitted P from each cell to the catchment outlets. Surface runoff and P fluxes are accumulated along the tree, and the field and in-stream retention of the particulate forms is computed. In the case of base flow and subsurface P loads, only channel transport is taken into account, owing to the poorly known hydrogeological conditions. During channel transport, point sources and reservoirs are also considered. The main results of the transport algorithm are the discharge, dissolved P and sediment-bound P load values at any arbitrary point within the catchment. Finally, a simple design procedure has been built up to plan BMPs in the catchments, simulate their possible impacts on diffuse P fluxes, and calculate their approximate costs. Both source- and transport-controlling measures have been included in the planning procedure. The model also allows examining the impacts of changes in fertilizer application, point source emissions and climate on the river loads. Besides this, a simple optimization algorithm has been developed to select the most effective source areas (real hot spots), which should be targeted by the interventions. The fate model performed well in Hungarian pilot catchments. Using the calibrated and validated model, different management scenarios were worked out and their effects and costs evaluated and compared to each other. The results show that the approach is suitable for effectively designing BMP measures at the local scale. Combined application of source- and transport-controlling BMPs can result in high P reduction efficiency. Optimization of the interventions can remarkably reduce the area demand of the necessary BMPs, so the establishment costs can be decreased. The model can be coupled with a larger-scale catchment model to form a "screening and planning" modeling system.
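
    The transport step can be sketched as an accumulation over the flow tree, with each cell's emission routed downstream and attenuated by a retention factor. The five-cell tree, emission values, and retention factor below are illustrative, not PhosFate's actual parameterisation.

        # Sketch of flow-tree accumulation: each cell's emitted P is pushed
        # downstream, attenuated by a per-cell retention factor.
        downstream = {0: 2, 1: 2, 2: 4, 3: 4, 4: None}       # cell -> receiving cell
        emission = {0: 1.0, 1: 0.5, 2: 0.2, 3: 0.8, 4: 0.1}  # kg P / yr per cell
        retention = 0.9   # fraction of P passed on per cell transition

        def depth(downstream, c):
            """Number of steps from cell c to the outlet."""
            n = 0
            while downstream[c] is not None:
                c = downstream[c]
                n += 1
            return n

        def accumulate(downstream, emission, retention):
            """Accumulated load reaching each cell, processing cells upstream-first."""
            load = dict(emission)
            for c in sorted(downstream, key=lambda c: -depth(downstream, c)):
                d = downstream[c]
                if d is not None:
                    load[d] += retention * load[c]
            return load

        print(accumulate(downstream, emission, retention))   # outlet load at cell 4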

  3. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy

    NASA Astrophysics Data System (ADS)

    Chamberland, Marc J. P.; Taylor, Randle E. P.; Rogers, D. W. O.; Thomson, Rowan M.

    2016-12-01

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.

  4. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy.

    PubMed

    Chamberland, Marc J P; Taylor, Randle E P; Rogers, D W O; Thomson, Rowan M

    2016-12-07

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.

  5. SHEDS-PM: A POPULATION EXPOSURE MODEL FOR PREDICTING DISTRIBUTIONS OF PM EXPOSURE AND DOSE FROM BOTH OUTDOOR AND INDOOR SOURCES

    EPA Science Inventory

    The US EPA National Exposure Research Laboratory (NERL) has developed a population exposure and dose model for particulate matter (PM), called the Stochastic Human Exposure and Dose Simulation (SHEDS) model. SHEDS-PM uses a probabilistic approach that incorporates both variabi...

  6. Analysis of skin tissues spatial fluorescence distribution by the Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Churmakov, D. Y.; Meglinski, I. V.; Piletsky, S. A.; Greenhalgh, D. A.

    2003-07-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores that would arise from the structure of collagen fibres, in contrast to the epidermis and stratum corneum, where the distribution of fluorophores is assumed to be homogeneous. The simulation results suggest that autofluorescence is significantly suppressed in the near-infrared spectral region, whereas the spatial distribution of fluorescence sources within a sensor layer embedded in the epidermis is localized at an 'effective' depth.

  7. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations.

    PubMed

    Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S

    2016-08-01

    A previously reported dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms, and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans submitted to the clinical trial community from any institution, providing a more meaningful outcome analysis. The analytical source model comprises a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects are also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations, with an offset to the MLC leaf positions used to correct for the rudimentary assumed primary point source. Calculations of depth dose and profiles for field sizes from 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested, and the model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLDs). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, it was developed specifically to provide a single methodology to independently assess treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.
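
    As a loose illustration of the model structure, not the authors' fitted functional forms, the sketch below sums a focal and an extra-focal fluence component and applies a hyperbolic (tanh) field-size output correction of the kind described; all coefficients are placeholders.

        import numpy as np

        # Illustrative multisource weighting; functional forms and coefficients
        # are placeholders, not the authors' fitted values.
        def relative_output(field_cm, a=0.95, b=0.05, c=0.15):
            """Output factor vs. square field size (cm), tanh field-size correction."""
            return a + b * np.tanh(c * (field_cm - 10.0))

        def fluence(r_cm, field_cm, extrafocal_sigma=3.0, extrafocal_weight=0.08):
            """In-field fluence at off-axis distance r: focal + extra-focal parts."""
            focal = 1.0 if abs(r_cm) <= field_cm / 2.0 else 0.0
            extra = extrafocal_weight * np.exp(-0.5 * (r_cm / extrafocal_sigma) ** 2)
            return float(relative_output(field_cm) * (focal + extra))

        for f in (4.0, 10.0, 40.0):
            print(f"{f:4.0f} cm field: output {relative_output(f):.3f}, "
                  f"central-axis fluence {fluence(0.0, f):.3f}")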

  8. A matrix-inversion method for gamma-source mapping from gamma-count data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adsley, Ian; Burgess, Claire; Bull, Richard K

    In a previous paper it was proposed that a simple matrix inversion method could be used to extract source distributions from gamma-count maps, using simple models to calculate the response matrix. The method was tested using numerically generated count maps. In the present work a 100 kBq ⁶⁰Co source has been placed on a gridded surface and the count rate measured using a NaI scintillation detector. The resulting map of gamma counts was used as input to the matrix inversion procedure and the source position recovered. A multi-source array was simulated by superposition of several single-source count maps, and the source distribution was again recovered using matrix inversion. The measurements were performed for several detector heights. The effects of uncertainties in source-detector distances on the matrix inversion method are also examined. The results from this work give confidence in the application of the method to practical applications, such as the segregation of highly active objects amongst fuel-element debris. (authors)
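
    The method itself is compact enough to sketch: build a response matrix from a simple detector-response model (here an inverse-square model with an efficiency constant, which is an assumption), then invert the count map. Non-negative least squares is used below in place of a plain matrix inverse for robustness; the geometry and numbers are illustrative.

        import numpy as np
        from scipy.optimize import nnls

        # counts = R @ sources; recover sources from a measured count map.
        grid = [(x, y) for x in range(5) for y in range(5)]   # source cells (m)
        det_h = 0.5                                           # detector height (m)
        eff = 1.0e-3                                          # counts/s per Bq at 1 m

        R = np.array([[eff / ((dx - sx) ** 2 + (dy - sy) ** 2 + det_h ** 2)
                       for (sx, sy) in grid] for (dx, dy) in grid])

        truth = np.zeros(len(grid))
        truth[12] = 1.0e5                                     # 100 kBq at grid centre
        counts = R @ truth + np.random.default_rng(2).normal(0, 0.1, len(grid))

        recovered, _ = nnls(R, counts)       # non-negative least squares
        print("peak recovered at cell", recovered.argmax(), "of", len(grid))
        print(f"recovered activity ~ {recovered.max():.3e} Bq")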

  9. Virtual-source diffusion approximation for enhanced near-field modeling of photon-migration in low-albedo medium.

    PubMed

    Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng

    2015-01-26

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the charge simulation method in electromagnetic theory, as well as established discrete-source-based modeling, we report an improved explicit model for a semi-infinite geometry, referred to as the "Virtual Source" (VS) diffusion approximation (DA), suited to low-albedo media and short source-detector separations. In this model, the collimated light in the standard DA is approximated as multiple isotropic point sources (VSs) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and measured reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for typical ranges of the optical parameters. This parameterized scheme is shown to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. The superiority of the proposed VS-DA method over established ones is demonstrated by comparison with Monte Carlo simulations over wide ranges of source-detector separation and medium optical properties.
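
    The core idea can be sketched by superposing isotropic point sources, each contributing the standard diffusion Green's function φ(r) = exp(−μ_eff·r)/(4πD·r). Boundary (image) terms are omitted for brevity, and the two source depths and weights below are illustrative, not the paper's fitted 2VS-DA parameters.

        import numpy as np

        # Superposition of isotropic virtual sources along the incident axis,
        # each contributing the infinite-medium diffusion Green's function.
        mua, musp = 0.1, 10.0              # absorption, reduced scattering (1/cm)
        D = 1.0 / (3.0 * (mua + musp))
        mu_eff = np.sqrt(mua / D)

        vs_depths = np.array([0.05, 0.3])  # cm along the incident direction (assumed)
        vs_weights = np.array([0.6, 0.4])  # relative source intensities (assumed)

        def fluence(rho):
            """Fluence at lateral distance rho (cm) on the surface plane."""
            r = np.sqrt(rho**2 + vs_depths**2)          # distance to each VS
            return float(np.sum(vs_weights * np.exp(-mu_eff * r) / (4 * np.pi * D * r)))

        for rho in (0.05, 0.1, 0.2, 0.5, 1.0):
            print(f"rho = {rho:4.2f} cm  phi = {fluence(rho):.4f}")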

  10. Quasi-homogeneous partial coherent source modeling of multimode optical fiber output using the elementary source method

    NASA Astrophysics Data System (ADS)

    Fathy, Alaa; Sabry, Yasser M.; Khalil, Diaa A.

    2017-10-01

    Multimode fibers (MMF) have many applications in illumination, spectroscopy, sensing and even in optical communication systems. In this work, we present a model for the MMF output field assuming the fiber end as a quasi-homogenous source. The fiber end is modeled by a group of partially coherent elementary sources, spatially shifted and uncorrelated with each other. The elementary source distribution is derived from the far field intensity measurement, while the weighting function of the sources is derived from the fiber end intensity measurement. The model is compared with practical measurements for fibers with different core/cladding diameters at different propagation distances and for different input excitations: laser, white light and LED. The obtained results show normalized root mean square error less than 8% in the intensity profile in most cases, even when the fiber end surface is not perfectly cleaved. Also, the comparison with the Gaussian-Schell model results shows a better agreement with the measurement. In addition, the complex degree of coherence, derived from the model results, is compared with the theoretical predictions of the modified Van Zernike equation showing very good agreement, which strongly supports the assumption that the large core MMF could be considered as a quasi-homogenous source.

  11. GLISSANDO: GLauber Initial-State Simulation AND mOre…

    NASA Astrophysics Data System (ADS)

    Broniowski, Wojciech; Rybczyński, Maciej; Bożek, Piotr

    2009-01-01

    We present a Monte Carlo generator for a variety of Glauber-like models (the wounded-nucleon model, binary collisions model, mixed model, model with hot spots). These models describe the early stages of relativistic heavy-ion collisions, in particular the spatial distribution of the transverse energy deposition which ultimately leads to production of particles from the interaction region. The original geometric distribution of sources in the transverse plane can be superimposed with a statistical distribution simulating the dispersion in the generated transverse energy in each individual collision. The program generates inter alia the fixed-axes (standard) and variable-axes (participant) two-dimensional profiles of the density of sources in the transverse plane and their azimuthal Fourier components. These profiles can be used in further analysis of physical phenomena, such as jet quenching, event-by-event hydrodynamics, or analysis of the elliptic flow and its fluctuations. Characteristics of the event (multiplicities, eccentricities, Fourier coefficients, etc.) are stored in a ROOT file and can be analyzed off-line. In particular, event-by-event studies can be carried out in a simple way. A number of ROOT scripts are provided for that purpose. Supplied variants of the code can also be used for proton-nucleus and deuteron-nucleus collisions.
    Program summary
    Program title: GLISSANDO
    Catalogue identifier: AEBS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBS_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 4452
    No. of bytes in distributed program, including test data, etc.: 34 766
    Distribution format: tar.gz
    Programming language: C++
    Computer: any computer with a C++ compiler and the ROOT environment [R. Brun, et al., Root Users Guide 5.16, CERN, 2007, http://root.cern.ch]
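
    The wounded-nucleon core of such a generator fits in a short Monte Carlo. A toy Python sketch (the program itself is C++/ROOT; the nuclear and cross-section parameters below are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        A = 208                     # nucleons per nucleus (Pb-like), illustrative
        R0, a = 6.62, 0.546         # Woods-Saxon radius and diffuseness (fm)
        sigma_nn = 6.4              # inelastic NN cross-section (fm^2, i.e. 64 mb)
        d2_max = sigma_nn / np.pi   # wounding criterion on transverse distance^2

        def sample_nucleus(n):
            # Sample transverse nucleon positions from a Woods-Saxon density
            # by rejection (acceptance probability stays below 1 here).
            pos = []
            while len(pos) < n:
                r = rng.uniform(0, 2.5 * R0)
                if rng.uniform() < r**2 / (1 + np.exp((r - R0) / a)) / R0**2:
                    cost = rng.uniform(-1, 1)
                    phi = rng.uniform(0, 2 * np.pi)
                    sint = np.sqrt(1 - cost**2)
                    pos.append([r * sint * np.cos(phi), r * sint * np.sin(phi)])
            return np.array(pos)

        b = 7.0                      # impact parameter (fm)
        nA = sample_nucleus(A) + [b / 2, 0]
        nB = sample_nucleus(A) - [b / 2, 0]

        # A nucleon is wounded if any nucleon of the other nucleus passes
        # within the distance set by the NN cross-section.
        d2 = ((nA[:, None, :] - nB[None, :, :])**2).sum(-1)
        wounded = (d2 < d2_max).any(1).sum() + (d2 < d2_max).any(0).sum()
        print("wounded nucleons:", wounded)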

  12. Use of Combined A-Train Observations to Validate GEOS Model Simulated Dust Distributions During NAMMA

    NASA Technical Reports Server (NTRS)

    Nowottnick, E.

    2007-01-01

    During August 2006, the NASA African Monsoon Multidisciplinary Analyses (NAMMA) field experiment was conducted to characterize the structure of African Easterly Waves and their evolution into tropical storms. Mineral dust aerosols affect tropical storm development, although their exact role remains to be understood. To better understand the role of dust in tropical cyclogenesis, we have implemented a dust source, transport, and optical model in the NASA Goddard Earth Observing System (GEOS) atmospheric general circulation model and data assimilation system. Our dust source scheme is more physically based than previous incarnations of the model, and we introduce improved dust optical and microphysical processes through inclusion of a detailed microphysical scheme. Here we use A-Train observations from MODIS, OMI, and CALIPSO with NAMMA DC-8 flight data to evaluate the simulated dust distributions and microphysical properties. Our goal is to synthesize the multi-spectral observations from the A-Train sensors to arrive at a consistent set of optical properties for the dust aerosols suitable for direct forcing calculations.

  13. Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework

    PubMed Central

    Talluto, Matthew V.; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C. Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A.; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique

    2016-01-01

    Aim Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location Eastern North America (as an example). Methods Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. Results For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. Main conclusions We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making. PMID:27499698

  14. Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework.

    PubMed

    Talluto, Matthew V; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique

    2016-02-01

    Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Eastern North America (as an example). Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making.
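
    A toy stand-in for the integration step (not the paper's hierarchical model): precision-weighted pooling of two sub-models' logit predictions with a between-model disagreement term, so that uncertainty shrinks where the sub-models agree and grows where they diverge:

        import numpy as np

        def pool_predictions(mu, sigma):
            # mu, sigma: each sub-model's logit prediction and its uncertainty
            # for one site. Pool as in a random-effects meta-analysis.
            mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
            w = 1.0 / sigma**2
            m = (w * mu).sum() / w.sum()                  # pooled logit
            within = 1.0 / w.sum()                        # shared-information variance
            between = ((mu - m)**2 * w).sum() / w.sum()   # disagreement penalty
            return m, np.sqrt(within + between)

        def prob(logit):
            return 1.0 / (1.0 + np.exp(-logit))

        m, s = pool_predictions([1.2, 1.0], [0.5, 0.6])   # sub-models agree
        print(prob(m), s)                                 # small uncertainty
        m, s = pool_predictions([2.0, -1.5], [0.5, 0.6])  # sub-models diverge
        print(prob(m), s)                                 # larger uncertainty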

  15. Evaluating Air-Quality Models: Review and Outlook.

    NASA Astrophysics Data System (ADS)

    Weil, J. C.; Sykes, R. I.; Venkatram, A.

    1992-10-01

    Over the past decade, much attention has been devoted to the evaluation of air-quality models, with emphasis on model performance in predicting the high concentrations that are important in air-quality regulations. This paper stems from our belief that this practice needs to be expanded to 1) evaluate model physics and 2) deal with the large natural or stochastic variability in concentration. The variability is represented by the root-mean-square fluctuating concentration (σc) about the mean concentration (C) over an ensemble, that is, a given set of meteorological, source, and other conditions. Most air-quality models used in applications predict C, whereas observations are individual realizations drawn from an ensemble. When σc is comparable to or larger than C, large residuals exist between predicted and observed concentrations, which confuse model evaluations. This paper addresses ways of evaluating model physics in light of the large σc; the focus is on elevated point-source models. Evaluation of model physics requires the separation of the mean model error, the difference between the predicted and observed C, from the natural variability. A residual analysis is shown to be an effective way of doing this. Several examples demonstrate the usefulness of residuals, as well as correlation analyses and laboratory data, in judging model physics. In general, σc models and predictions of the probability distribution of the fluctuating concentration (c) are in the developmental stage, with laboratory data playing an important role. Laboratory data from point-source plumes in a convection tank show that the distribution of c approximates a self-similar form along the plume center plane, a useful result in a residual analysis. At present, there is one model, ARAP, that predicts C, σc, and the distribution of c for point-source plumes. This model is more computationally demanding than other dispersion models (for C only) and must be demonstrated as a practical tool. However, it predicts an important quantity for applications: the uncertainty in the very high and infrequent concentrations. The uncertainty is large and is needed in evaluating operational performance and in predicting the attainment of air-quality standards.
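
    The residual-analysis step can be made concrete with a toy ensemble (synthetic numbers; the lognormal spread stands in for the stochastic variability):

        import numpy as np

        rng = np.random.default_rng(1)

        # Observations are single realizations scattering about the true
        # ensemble mean; the model predicts the ensemble mean with some bias.
        C_true = 10.0
        C_obs = C_true * rng.lognormal(0.0, 0.6, size=500)  # natural variability
        C_mod = 12.0                                        # model's prediction of C

        resid = np.log(C_obs / C_mod)
        print("mean model error:", resid.mean())  # isolates the bias ln(C_true/C_mod)
        print("stochastic spread:", resid.std())  # ~0.6, the variability component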

  16. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).
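
    The centroid-bias statement can be checked numerically: under a uniform CHD the hypocenter density equals the moment-release density, and the expected hypocenter-to-hypocentroid distance shrinks once probability is concentrated near the centroid. A toy 1-D sketch (the slip pattern and bias strength are illustrative, not from the study):

        import numpy as np

        x = np.linspace(0.0, 1.0, 200)           # normalized along-strike position
        slip = np.exp(-((x - 0.6) / 0.25)**2)    # illustrative slip distribution
        p = slip / slip.sum()                    # uniform CHD: P(hypocenter) ~ slip
        centroid = (x * p).sum()

        # Expected hypocenter-centroid distance under the uniform CHD ...
        d_uniform = (np.abs(x - centroid) * p).sum()

        # ... versus a centroid-biased CHD (beta is an illustrative knob).
        beta = 8.0
        pb = p * np.exp(-beta * np.abs(x - centroid))
        pb /= pb.sum()
        d_biased = (np.abs(x - centroid) * pb).sum()

        print(d_uniform, d_biased)               # biased < uniform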

  17. Numeric stratigraphic modeling: Testing sequence stratigraphic concepts using high resolution geologic examples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armentrout, J.M.; Smith-Rouch, L.S.; Bowman, S.A.

    1996-08-01

    Numeric simulations based on integrated data sets enhance our understanding of depositional geometry and facilitate quantification of depositional processes. Numeric values tested against well-constrained geologic data sets can then be used in iterations testing each variable, and in predicting lithofacies distributions under various depositional scenarios using the principles of sequence stratigraphic analysis. The stratigraphic modeling software provides a broad spectrum of techniques for modeling and testing elements of the petroleum system. Using well-constrained geologic examples, variations in depositional geometry and lithofacies distributions between different tectonic settings (passive vs. active margin) and climate regimes (hothouse vs. icehouse) can provide insight into potential source rock and reservoir rock distribution, maturation timing, migration pathways, and trap formation. Two data sets are used to illustrate such variations: both include a seismic reflection profile calibrated by multiple wells. The first is a Pennsylvanian mixed carbonate-siliciclastic system in the Paradox basin, and the second a Pliocene-Pleistocene siliciclastic system in the Gulf of Mexico. Numeric simulations result in geometry and facies distributions consistent with those interpreted using the integrated stratigraphic analysis of the calibrated seismic profiles. An exception occurs in the Gulf of Mexico study, where the simulated sediment thickness from 3.8 to 1.6 Ma within an upper slope minibasin was less than that mapped using a regional seismic grid. Regional depositional patterns demonstrate that this extra thickness was probably sourced from out of the plane of the modeled transect, illustrating the necessity for three-dimensional constraints on two-dimensional modeling.

  18. Relating Land Use and Human Intra-City Mobility

    PubMed Central

    Lee, Minjin; Holme, Petter

    2015-01-01

    Understanding human mobility patterns—how people move in their everyday lives—is an interdisciplinary research field. It is a question with roots back to the 19th century that has been dramatically revitalized with the recent increase in data availability. Models of human mobility often take the population distribution as a starting point. Another, sometimes more accurate, data source is land-use maps. In this paper, we discuss how the intra-city movement patterns, and consequently population distribution, can be predicted from such data sources. As a link between land use and mobility, we show that the purposes of people’s trips are strongly correlated with the land use of the trip’s origin and destination. We calibrate, validate and discuss our model using survey data. PMID:26445147

  19. Stratospheric water vapor in the NCAR CCM2

    NASA Technical Reports Server (NTRS)

    Mote, Philip W.; Holton, James R.

    1992-01-01

    Results are presented of the water vapor distribution in a 3D GCM with good vertical resolution, a state-of-the-art transport scheme, and a realistic water vapor source in the middle atmosphere. In addition to water vapor, the model transported methane and an idealized clock tracer, which provides transport times to and within the middle atmosphere. The water vapor and methane distributions are compared with Nimbus 7 SAMS and LIMS data and with in situ measurements. It is argued that the hygropause in the model is maintained not by 'freeze-drying' at the tops of tropical cumulonimbus, but by a balance between two sources and one sink. Since the southern winter dehydration is unrealistically intense, this balance most likely does not resemble the balance in the real atmosphere.

  20. Evaluating the impact of improvements to the FLAMBE smoke source model on forecasts of aerosol distribution from NAAPS

    NASA Astrophysics Data System (ADS)

    Hyer, E. J.; Reid, J. S.

    2006-12-01

    As more forecast models aim to include aerosol and chemical species, there is a need for source functions for biomass burning emissions that are accurate, robust, and operable in real-time. NAAPS is a global aerosol forecast model running every six hours and forecasting distributions of biomass burning, industrial sulfate, dust, and sea salt aerosols. This model is run operationally by the U.S. Navy as an aid to planning. The smoke emissions used as input to the model are calculated from the data collected by the FLAMBE system, driven by near-real-time active fire data from GOES WF_ABBA and MODIS Rapid Response. The smoke source function uses land cover data to predict properties of detected fires based on literature data from experimental burns. This scheme is very sensitive to the choice of land cover data sets. In areas of rapid land cover change, the use of static land cover data can produce artifactual changes in emissions unrelated to real changes in fire patterns. In South America, this change may be as large as 40% over five years. We demonstrate the impact of a modified land cover scheme on FLAMBE emissions and NAAPS forecasts, including a fire size algorithm developed using MODIS burned area data. We also describe the effects of corrections to emissions estimates for cloud and satellite coverage. We outline areas where existing data sources are incomplete and improvements are required to achieve accurate modeling of biomass burning emissions in real time.

  1. Turbulent Transport in a Three-dimensional Solar Wind

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiota, D.; Zank, G. P.; Adhikari, L.

    2017-03-01

    Turbulence in the solar wind can play essential roles in the heating of coronal and solar wind plasma and the acceleration of the solar wind and energetic particles. Turbulence sources are not well understood; they are thought to be partly enhanced by interaction with the large-scale inhomogeneity of the solar wind and the interplanetary magnetic field, and/or transported from the solar corona. To investigate the interaction with background inhomogeneity and the turbulence sources, we have developed a new 3D MHD model that includes the transport and dissipation of turbulence using the theoretical model of Zank et al. We solve for the temporal and spatial evolution of three moments or variables, the energy in the forward and backward fluctuating modes and the residual energy, together with their three corresponding correlation lengths. The transport model is coupled to our 3D model of the inhomogeneous solar wind. We present results of the coupled solar wind-turbulence model assuming a simple tilted dipole magnetic configuration that mimics solar minimum conditions, together with several comparative intermediate cases. By considering eight possible solar wind and turbulence source configurations, we show that the large-scale solar wind and IMF inhomogeneity and the strength of the turbulence sources significantly affect the distribution of turbulence in the heliosphere within 6 au. We compare the predicted turbulence distribution results from a complete solar minimum model with in situ measurements made by the Helios and Ulysses spacecraft, finding that the synthetic profiles of the turbulence intensities show reasonable agreement with observations.

  2. Global View of Aerosol Vertical Distributions from CALIPSO Lidar Measurements and GOCART Simulations: Regional and Seasonal Variations

    NASA Technical Reports Server (NTRS)

    Yu, Hongbin; Chin, Mian; Winker, David M.; Omar, Ali H.; Liu, Zhaoyan; Kittaka, Chieko; Diehl, Thomas

    2010-01-01

    This study examines seasonal variations of the vertical distribution of aerosols through a statistical analysis of Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar observations from June 2006 to November 2007. A data-screening scheme is developed to attain good-quality data in cloud-free conditions, and the polarization measurement is used to separate dust from non-dust aerosol. The CALIPSO aerosol observations are compared with aerosol simulations from the Goddard Chemistry Aerosol Radiation Transport (GOCART) model and aerosol optical depth (AOD) measurements from the MODerate resolution Imaging Spectroradiometer (MODIS). The CALIPSO observations of geographical patterns and seasonal variations of AOD are generally consistent with GOCART simulations and MODIS retrievals, especially near source regions, while the magnitude of AOD shows large discrepancies in most regions. Both the CALIPSO observations and the GOCART model show that aerosol extinction scale heights in major dust and smoke source regions are generally higher than those in industrial pollution source regions. The CALIPSO aerosol lidar ratio also generally agrees with the GOCART model within 30% on regional scales. Major differences between the satellite observations and the GOCART model are identified, including (1) an underestimate of aerosol extinction by GOCART over the Indian subcontinent, (2) much larger aerosol extinction calculated by GOCART than observed by CALIPSO in dust source regions, (3) aerosol that is weaker in magnitude and more concentrated in the lower atmosphere in the CALIPSO observations than in the GOCART model over transport regions in midlatitudes, and (4) consistently lower aerosol scale heights in the CALIPSO observations than in the GOCART model. Possible factors contributing to these differences are discussed.

  3. Optimal placement and sizing of wind / solar based DG sources in distribution system

    NASA Astrophysics Data System (ADS)

    Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng

    2017-06-01

    Proper placement and sizing of Distributed Generation (DG) in a distribution system can yield maximum potential benefits. This paper proposes a quantum particle swarm optimization (QPSO) based placement and sizing approach for wind turbine generation units (WTGU) and photovoltaic (PV) arrays, aimed at real power loss reduction and voltage stability improvement in distribution systems. Performance models of wind and solar generation systems are described and classified into PQ, PQ(V) and PI type models in power flow. Because the siting of WTGU- and PV-based DGs in a distribution system is geographically restricted, the optimal area and the DG capacity limits of each bus in the candidate area must be set before optimization; an area optimization method is therefore proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate its performance and effectiveness.
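
    The search itself reduces to a swarm optimization over (bus, size) pairs. The sketch below uses a plain PSO update and a hypothetical stand-in loss function, since the paper scores each candidate with a full distribution power flow and uses the quantum-behaved QPSO variant:

        import numpy as np

        rng = np.random.default_rng(3)

        N_BUS = 33      # IEEE 33-bus feeder; buses 2..33 are candidates
        P_MAX = 2.0     # DG size limit in MW (illustrative)

        def power_loss_stub(bus, p_mw):
            # Hypothetical smooth loss surface standing in for a power-flow
            # evaluation of real power losses; keeps the sketch runnable.
            return (1.0 + 0.05 * abs(bus - 18)) * (p_mw - 1.2)**2 + 0.1 * p_mw

        # Particles encode (bus index, DG size in MW).
        pos = np.column_stack([rng.uniform(2, N_BUS, 30), rng.uniform(0, P_MAX, 30)])
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_f = np.array([power_loss_stub(int(b), p) for b, p in pos])
        gbest = pbest[pbest_f.argmin()].copy()

        for _ in range(100):
            r1, r2 = rng.uniform(size=(2, *pos.shape))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, [2, 0.0], [N_BUS, P_MAX])
            f = np.array([power_loss_stub(int(b), p) for b, p in pos])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()

        print("best bus:", int(round(gbest[0])), "size (MW):", round(gbest[1], 2))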

  4. Geospatial modeling of plant stable isotope ratios - the development of isoscapes

    NASA Astrophysics Data System (ADS)

    West, J. B.; Ehleringer, J. R.; Hurley, J. M.; Cerling, T. E.

    2007-12-01

    Large-scale spatial variation in stable isotope ratios can yield critical insights into the spatio-temporal dynamics of biogeochemical cycles, animal movements, and shifts in climate, as well as anthropogenic activities such as commerce, resource utilization, and forensic investigation. Interpreting these signals requires that we understand and model the variation. We report progress in our development of plant stable isotope ratio landscapes (isoscapes). Our approach utilizes a GIS, gridded datasets, a range of modeling approaches, and spatially distributed observations. We synthesize findings from four studies to illustrate the general utility of the approach, its ability to represent observed spatio-temporal variability in plant stable isotope ratios, and also outline some specific areas of uncertainty. We also address two basic, but critical questions central to our ability to model plant stable isotope ratios using this approach: 1. Do the continuous precipitation isotope ratio grids represent reasonable proxies for plant source water?, and 2. Do continuous climate grids (as is or modified) represent a reasonable proxy for the climate experienced by plants? Plant components modeled include leaf water, grape water (extracted from wine), bulk leaf material (Cannabis sativa; marijuana), and seed oil (Ricinus communis; castor bean). Our approaches to modeling the isotope ratios of these components varied from highly sophisticated process models to simple one-step fractionation models to regression approaches. The leaf water isoscapes were produced using steady-state models of enrichment and continuous grids of annual average precipitation isotope ratios and climate. These were compared to other modeling efforts, as well as a relatively sparse, but geographically distributed dataset from the literature. The latitudinal distributions and global averages compared favorably to other modeling efforts and the observational data compared well to model predictions. These results yield confidence in the precipitation isoscapes used to represent plant source water, the modified climate grids used to represent leaf climate, and the efficacy of this approach to modeling. Further work confirmed these observations. The seed oil isoscape was produced using a simple model of lipid fractionation driven with the precipitation grid, and compared well to widely distributed observations of castor bean oil, again suggesting that the precipitation grids were reasonable proxies for plant source water. The marijuana leaf δ2H observations distributed across the continental United States were regressed against the precipitation δ2H grids and yielded a strong relationship between them, again suggesting that plant source water was reasonably well represented by the precipitation grid. Finally, the wine water δ18O isoscape was developed from regressions that related precipitation isotope ratios and climate to observations from a single vintage. Favorable comparisons between year-specific wine water isoscapes and inter-annual variations in previous vintages yielded confidence in the climate grids. Clearly, significant residual variability remains to be explained in all of these cases and uncertainties vary depending on the component modeled, but we conclude from this synthesis that isoscapes are capable of representing real spatial and temporal variability in plant stable isotope ratios.
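
    The simplest of the modeling approaches mentioned, the regression calibration used for the bulk-leaf example, fits in a few lines (all numbers below are synthetic stand-ins):

        import numpy as np

        rng = np.random.default_rng(4)

        # Precipitation-grid values sampled at collection sites, and measured
        # leaf delta-2H (per mil); values are illustrative only.
        d2h_precip = rng.uniform(-120, -20, 40)
        d2h_leaf = -25.0 + 0.7 * d2h_precip + rng.normal(0, 8, 40)

        # Regression-style isoscape calibration: leaf value as a linear
        # function of the precipitation isotope grid.
        slope, intercept = np.polyfit(d2h_precip, d2h_leaf, 1)
        pred = intercept + slope * d2h_precip
        ss_res = ((d2h_leaf - pred)**2).sum()
        ss_tot = ((d2h_leaf - d2h_leaf.mean())**2).sum()
        print(round(slope, 2), round(intercept, 1), round(1 - ss_res / ss_tot, 2))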

  5. Rise of Buoyant Emissions from Low-Level Sources in the Presence of Upstream and Downstream Obstacles

    NASA Astrophysics Data System (ADS)

    Pournazeri, Sam; Princevac, Marko; Venkatram, Akula

    2012-08-01

    Field and laboratory studies have been conducted to investigate the effect of surrounding buildings on the plume rise from low-level buoyant sources, such as distributed power generators. The field experiments were conducted in Palm Springs, California, USA in November 2010 and plume rise from a 9.3 m stack was measured. In addition to the field study, a laboratory study was conducted in a water channel to investigate the effects of surrounding buildings on plume rise under relatively high wind-speed conditions. Different building geometries and source conditions were tested. The experiments revealed that plume rise from low-level buoyant sources is highly affected by the complex flows induced by buildings stationed upstream and downstream of the source. The laboratory results were compared with predictions from a newly developed numerical plume-rise model. Using the flow measurements associated with each building configuration, the numerical model accurately predicted plume rise from low-level buoyant sources that are influenced by buildings. This numerical plume rise model can be used as a part of a computational fluid dynamics model.

  6. Landscape and flow metrics affecting the distribution of a federally-threatened fish: Improving management, model fit, and model transferability

    USGS Publications Warehouse

    Brewer, Shannon K.; Worthington, Thomas A.; Zhang, Tianjioa; Logue, Daniel R.; Mittelstet, Aaron R.

    2016-01-01

    Truncated distributions of pelagophilic fishes have been observed across the Great Plains of North America, with water use and landscape fragmentation implicated as contributing factors. Developing conservation strategies for these species is hindered by the existence of multiple competing flow regime hypotheses related to species persistence. Our primary study objective was to compare the predicted distributions of one pelagophil, the Arkansas River Shiner Notropis girardi, constructed using different flow regime metrics. Further, we investigated different approaches for improving temporal transferability of the species distribution model (SDM). We compared four hypotheses: mean annual flow (a baseline), the 75th percentile of daily flow, the number of zero-flow days, and the number of days above 55th percentile flows, to examine the relative importance of flows during the spawning period. Building on an earlier SDM, we added covariates that quantified wells in each catchment, point source discharges, and non-native species presence to a structured variable framework. We assessed the effects on model transferability and fit by reducing multicollinearity using Spearman’s rank correlations, variance inflation factors, and principal component analysis, as well as altering the regularization coefficient (β) within MaxEnt. The 75th percentile of daily flow was the most important flow metric related to structuring the species distribution. The number of wells and point source discharges were also highly ranked. At the default level of β, model transferability was improved using all methods to reduce collinearity; however, at higher levels of β, the correlation method performed best. Using β = 5 provided the best model transferability, while retaining the majority of variables that contributed 95% to the model. This study provides a workflow for improving model transferability and also presents water-management options that may be considered to improve the conservation status of pelagophils.
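
    Of the collinearity screens compared, the variance inflation factor is easy to reproduce (a minimal sketch; the covariates below are synthetic stand-ins for the flow and landscape metrics):

        import numpy as np

        def vif(X):
            # VIF for each column of X: regress column j on the remaining
            # columns and report 1 / (1 - R^2_j).
            X = (X - X.mean(0)) / X.std(0)
            out = []
            for j in range(X.shape[1]):
                others = np.delete(X, j, axis=1)
                beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
                resid = X[:, j] - others @ beta
                r2 = 1.0 - resid.var() / X[:, j].var()
                out.append(1.0 / (1.0 - r2))
            return np.array(out)

        rng = np.random.default_rng(5)
        flow75 = rng.normal(size=200)                       # 75th-percentile flow
        wells = 0.9 * flow75 + 0.1 * rng.normal(size=200)   # collinear covariate
        discharges = rng.normal(size=200)
        X = np.column_stack([flow75, wells, discharges])
        print(vif(X))   # first two columns flag multicollinearity (VIF >> 5)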

  7. MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields

    NASA Astrophysics Data System (ADS)

    Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria

    2015-08-01

    We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling-law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale and that a multihomogeneous model is needed to explain their complex scaling behaviour. In order to perform this task, we first introduce fractional-degree homogeneous fields, to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models, such as the infinite line mass; and (iii) differently from the integer-degree case, the fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales, by a simple search, in any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimations that we call a multihomogeneous model. This defines a new source-parameter estimation technique (Multi-HOmogeneity Depth Estimation, MHODE) that permits retrieval of the parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness also in a real-case example. These applications show the usefulness of the new concepts, multihomogeneity and fractional homogeneity degree, for obtaining valid estimates of the source parameters in a consistent theoretical framework, thus overcoming the limitations that global homogeneity imposes on widespread methods, such as Euler deconvolution.
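
    For reference, the integer-degree homogeneity law being generalized, and the Euler equation it implies, can be written (in standard notation) as

        f(tx, ty, tz) = t^{n} f(x, y, z),

        (x - x_0)\,\frac{\partial f}{\partial x} + (y - y_0)\,\frac{\partial f}{\partial y} + (z - z_0)\,\frac{\partial f}{\partial z} = n\, f(x, y, z),

    with (x_0, y_0, z_0) the source position and n the homogeneity degree. Euler deconvolution solves this relation for the source coordinates at a fixed integer degree, whereas the multihomogeneous model lets the estimated degree vary with the local window W and allows fractional values.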

  8. Urban dust in the Guanzhong Basin of China, part I: A regional distribution of dust sources retrieved using satellite data.

    PubMed

    Long, Xin; Li, Nan; Tie, Xuexi; Cao, Junji; Zhao, Shuyu; Huang, Rujin; Zhao, Mudan; Li, Guohui; Feng, Tian

    2016-01-15

    Urban dust pollution has become an outstanding environmental problem with the rapid urbanization of China. However, it is very difficult to construct an urban dust inventory, owing to its small horizontal scale and strong temporal/spatial variability. Using visual interpretation, maximum likelihood classification, extrapolation and spatial overlaying, we quantified the dust source distributions of urban constructions, barrens and croplands in the Guanzhong Basin from various satellite data, including VHR (0.5 m), Landsat-8 OLI (30 m) and MCD12Q1 (500 m). The croplands were the dominant dust sources, accounting for 40% (17,913 km²) of the study area in summer and 36% (17,913 km²) in winter, followed by barrens, accounting for 5% in summer and 10% in winter. Moreover, the total constructions covered 126 km², comprising 84% active and 16% inactive sites. In addition, 59% of the constructions aggregated in Xi'an, the only megacity of the study area. With a high accuracy exceeding 88%, the proposed satellite-data-based method is feasible and valuable for quantifying distributions of dust sources. This study provides a new perspective for evaluating regional urban dust, which is seldom quantified and reported. In a companion paper (Part 2 of the study), the detailed distribution of the urban dust sources is applied in a dynamical/aerosol model (WRF-Dust) to assess the effect of dust sources on aerosol pollution.

  9. Harvesting implementation for the GI-cat distributed catalog

    NASA Astrophysics Data System (ADS)

    Boldrini, Enrico; Papeschi, Fabrizio; Bigagli, Lorenzo; Mazzetti, Paolo

    2010-05-01

    The GI-cat framework implements a distributed catalog service supporting different international standards and interoperability arrangements in use by the geoscientific community. The distribution functionality, in conjunction with the mediation functionality, allows seamless querying of remote heterogeneous data sources, including OGC Web Services (e.g., OGC CSW, WCS, WFS and WMS), community standards such as UNIDATA THREDDS/OPeNDAP, SeaDataNet CDI (Common Data Index), GBIF (Global Biodiversity Information Facility) services, and OpenSearch engines. In the GI-cat modular architecture a distributor component carries out the distribution functionality by delegating queries to the mediator components (one for each data source). Each of these mediator components is able to query a specific data source and convert the results back by mapping the foreign data model to the GI-cat internal one, based on ISO 19139. In order to cope with deployment scenarios in which local data is expected, a harvesting approach has been experimented with. The new strategy comes in addition to the consolidated distributed approach, allowing the user to switch between a remote and a local search at will for each federated resource; this extends GI-cat configuration possibilities. The harvesting strategy is designed in GI-cat around a local cache component, implemented as a native XML database based on eXist. The different heterogeneous sources are queried for the bulk of available data; this data is then injected into the cache component after being converted to the GI-cat data model. The query and conversion steps are performed by the mediator components that are part of the GI-cat framework. Afterward, each new query can be exercised against the local data stored in the cache component. Considering the advantages and shortcomings that affect the harvesting and query distribution approaches, user-driven tuning is required to take the best of them. This is often related to the specific user scenarios to be implemented. GI-cat proved to be a flexible framework to address user needs. The GI-cat configurator tool was updated to make such tuning possible: each data source can be configured to enable either the harvesting or the query distribution approach; in the former case an appropriate harvesting interval can be set.

  10. SU-E-T-102: Determination of Dose Distributions and Water-Equivalence of MAGIC-F Polymer Gel for 60Co and 192Ir Brachytherapy Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quevedo, A; Nicolucci, P

    2014-06-01

    Purpose: To analyse the water-equivalence of the MAGIC-f polymer gel for ⁶⁰Co and ¹⁹²Ir clinical brachytherapy sources, through dose distributions simulated with the PENELOPE Monte Carlo code. Methods: The real geometries of ⁶⁰Co (BEBIG, model Co0.A86) and ¹⁹²Ir (Varian, model GammaMed Plus) clinical brachytherapy sources were modelled in the PENELOPE Monte Carlo simulation code. The most probable photon emission lines were used for both sources: 17 emission lines for ¹⁹²Ir and 12 lines for ⁶⁰Co. The dose distributions were obtained in a cubic homogeneous water or gel phantom (30 × 30 × 30 cm³), with the source positioned in the middle of the phantom. In all cases the number of simulation showers remained constant at 10⁹ particles. A specific material for the gel was constructed in PENELOPE using the weight fractions of the MAGIC-f components: wH = 0.1062, wC = 0.0751, wN = 0.0139, wO = 0.8021, wS = 2.58×10⁻⁶ and wCu = 5.08×10⁻⁶. The voxel size in the dose distributions was 0.6 mm. Dose distribution maps in the longitudinal and radial directions through the centre of the source were used to analyse the water-equivalence of MAGIC-f. Results: For the ⁶⁰Co source, the maximum differences in relative doses obtained in the gel and in water were 0.65% and 1.90% in the radial and longitudinal directions, respectively. For ¹⁹²Ir, the maximum differences in relative doses were 0.30% and 1.05% in the radial and longitudinal directions, respectively. The equivalence of the materials can also be verified through the effective atomic number and density of each material: Zeff,MAGIC-f = 7.07 and ρMAGIC-f = 1.060 g/cm³, versus Zeff,water = 7.22. Conclusion: The results showed that MAGIC-f is water-equivalent, and consequently suitable for simulating soft tissue at cobalt and iridium energies. Hence, the gel can be used as a dosimeter in clinical applications. Further investigation of its use in a clinical protocol is needed.

  11. Development of Accommodation Models for Soldiers in Vehicles: Squad

    DTIC Science & Technology

    2014-09-01

    ABSTRACT: Data from a previous study ... body armor and body borne gear. SUBJECT TERMS: Anthropometry, Posture, Vehicle Occupants, Accommodation. DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited.

  12. New Measurements of Aerosol Vertical Structure from Space Using the NASA Geoscience Laser Altimeter System (GLAS): Applications for Aerosol Transport Models

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; Ginoux, Paul; Colarco, Peter; Chin, Mian; Spinhirne, James D.; Palm, Steven P.; Hlavka, Dennis; Hart, William

    2003-01-01

    In the past, satellite measurements of aerosols have only been possible using passive sensors. Analysis of passive satellite data has led to an improved understanding of aerosol properties, spatial distribution, and their effect on the Earth's climate. However, direct measurement of aerosol vertical distribution has not been possible using only the passive data. Knowledge of aerosol vertical distribution is important to correctly assess the impact of aerosol absorption, for certain atmospheric correction procedures, and to help constrain height profiles in aerosol transport models. On January 12, 2003, NASA launched the first satellite-based lidar, the Geoscience Laser Altimeter System (GLAS), onboard the ICESat spacecraft. GLAS is both an altimeter and an atmospheric lidar, and obtains direct measurements of aerosol and cloud heights. Here we show an overview of GLAS, provide an update of its current status, and discuss how GLAS data will be useful for modeling efforts. In particular, a strategy of using GLAS to characterize the height profile of dust plumes over source regions will be presented, along with initial results. Such information can be used to validate and improve output from aerosol transport models. Aerosol height profile comparisons between GLAS and transport models will be shown for regions downwind of aerosol sources. We will also discuss the feasibility of assimilating GLAS profiles into the models in order to improve their output.

  13. New Measurements of Aerosol Vertical Structure from Space using the NASA Geoscience Laser Altimeter System (GLAS): Applications for Aerosol Transport Models

    NASA Technical Reports Server (NTRS)

    Welton, E. J.; Spinhirne, J.; Palm, S.; Hlavka, D.; Hart, W.; Ginoux, P.; Chin, M.; Colarco, P.

    2004-01-01

    In the past, satellite measurements of aerosols have only been possible using passive sensors. Analysis of passive satellite data has led to an improved understanding of aerosol properties, spatial distribution, and their effect on the Earth's climate. However, direct measurement of aerosol vertical distribution has not been possible using only the passive data. Knowledge of aerosol vertical distribution is important to correctly assess the impact of aerosol absorption, for certain atmospheric correction procedures, and to help constrain height profiles in aerosol transport models. On January 12, 2003, NASA launched the first satellite-based lidar, the Geoscience Laser Altimeter System (GLAS), onboard the ICESat spacecraft. GLAS is both an altimeter and an atmospheric lidar, and obtains direct measurements of aerosol and cloud heights. Here we show an overview of GLAS, provide an update of its current status, and discuss how GLAS data will be useful for modeling efforts. In particular, a strategy of using GLAS to characterize the height profile of dust plumes over source regions will be presented, along with initial results. Such information can be used to validate and improve output from aerosol transport models. Aerosol height profile comparisons between GLAS and transport models will be shown for regions downwind of aerosol sources. We will also discuss the feasibility of assimilating GLAS profiles into the models in order to improve their output.

  14. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  15. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.
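
    The first step of such a conversion, scraping element records out of the OpenDSS model so they can be re-emitted in the target tool's input format, can be sketched as follows (a minimal illustration; the sample lines mimic OpenDSS syntax, and a real converter must handle many more element types and per-phase impedance data):

        import re

        DSS_SAMPLE = """
        New Line.L1 Bus1=632 Bus2=645 Length=0.5
        New Line.L2 Bus1=632 Bus2=671 Length=0.6
        New Load.S645 Bus1=645 kW=170 kvar=125
        """

        pattern = re.compile(r"New\s+(\w+)\.(\S+)\s+(.*)", re.IGNORECASE)
        elements = []
        for line in DSS_SAMPLE.strip().splitlines():
            m = pattern.match(line.strip())
            if not m:
                continue
            etype, name, rest = m.groups()
            # Property assignments of the form key=value.
            props = dict(p.split("=") for p in rest.split())
            elements.append({"type": etype, "name": name, **props})

        print(elements[0])   # {'type': 'Line', 'name': 'L1', 'Bus1': '632', ...}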

  16. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory

    PubMed Central

    Pratte, Michael S.; Park, Young Eun; Rademaker, Rosanne L.; Tong, Frank

    2016-01-01

    If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced “oblique effect”, with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. PMID:28004957

  17. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory.

    PubMed

    Pratte, Michael S; Park, Young Eun; Rademaker, Rosanne L; Tong, Frank

    2017-01-01

    If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced "oblique effect," with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit.
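
    The standard analysis behind this debate fits a guessing-mixture model to the recall errors. A minimal sketch, without the stimulus-specific precision term the study adds, and with synthetic errors:

        import numpy as np
        from scipy.stats import vonmises
        from scipy.optimize import minimize

        rng = np.random.default_rng(6)

        # Synthetic recall errors (radians): some items are in memory
        # (concentrated errors), the rest are pure guesses.
        in_mem = vonmises(kappa=8.0).rvs(300, random_state=rng)
        guesses = rng.uniform(-np.pi, np.pi, 100)
        errors = np.concatenate([in_mem, guesses])

        def negloglik(params):
            g, kappa = params   # guess rate, memory precision
            like = (1 - g) * vonmises(kappa).pdf(errors) + g / (2 * np.pi)
            return -np.log(like).sum()

        fit = minimize(negloglik, x0=[0.3, 4.0],
                       bounds=[(1e-3, 0.999), (0.1, 100.0)])
        print("guess rate:", round(fit.x[0], 2), "kappa:", round(fit.x[1], 1))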

  18. Seasonal Variability of Middle Latitude Ozone in the Lowermost Stratosphere Derived from Probability Distribution Functions

    NASA Technical Reports Server (NTRS)

    Cerniglia, M. C.; Douglass, A. R.; Rood, R. B.; Sparling, L. C..; Nielsen, J. E.

    1999-01-01

    We present a study of the distribution of ozone in the lowermost stratosphere with the goal of understanding the relative contribution to the observations of air of either distinctly tropospheric or stratospheric origin. The air in the lowermost stratosphere is divided into two population groups based on Ertel's potential vorticity at 300 hPa. High [low] potential vorticity at 300 hPa suggests that the tropopause is low [high], and the identification of the two groups helps to account for dynamic variability. Conditional probability distribution functions are used to define the statistics of the mix from both observations and model simulations. Two data sources are chosen. First, several years of ozonesonde observations are used to exploit the high vertical resolution. Second, observations made by the Halogen Occultation Experiment [HALOE] on the Upper Atmosphere Research Satellite [UARS] are used to understand the impact on the results of the spatial limitations of the ozonesonde network. The conditional probability distribution functions are calculated at a series of potential temperature surfaces spanning the domain from the midlatitude tropopause to surfaces higher than the mean tropical tropopause [about 380K]. Despite the differences in spatial and temporal sampling, the probability distribution functions are similar for the two data sources. Comparisons with the model demonstrate that the model maintains a mix of air in the lowermost stratosphere similar to the observations. The model also simulates a realistic annual cycle. By using the model, possible mechanisms for the maintenance of mix of air in the lowermost stratosphere are revealed. The relevance of the results to the assessment of the environmental impact of aircraft effluence is discussed.

  19. Seasonal Variability of Middle Latitude Ozone in the Lowermost Stratosphere Derived from Probability Distribution Functions

    NASA Technical Reports Server (NTRS)

    Cerniglia, M. C.; Douglass, A. R.; Rood, R. B.; Sparling, L. C.; Nielsen, J. E.

    1999-01-01

    We present a study of the distribution of ozone in the lowermost stratosphere with the goal of understanding the relative contribution to the observations of air of either distinctly tropospheric or stratospheric origin. The air in the lowermost stratosphere is divided into two population groups based on Ertel's potential vorticity at 300 hPa. High [low] potential vorticity at 300 hPa suggests that the tropopause is low [high], and the identification of the two groups helps to account for dynamic variability. Conditional probability distribution functions are used to define the statistics of the mix from both observations and model simulations. Two data sources are chosen. First, several years of ozonesonde observations are used to exploit the high vertical resolution. Second, observations made by the Halogen Occultation Experiment [HALOE] on the Upper Atmosphere Research Satellite [UARS] are used to understand the impact on the results of the spatial limitations of the ozonesonde network. The conditional probability distribution functions are calculated at a series of potential temperature surfaces spanning the domain from the midlatitude tropopause to surfaces higher than the mean tropical tropopause [approximately 380K]. Despite the differences in spatial and temporal sampling, the probability distribution functions are similar for the two data sources. Comparisons with the model demonstrate that the model maintains a mix of air in the lowermost stratosphere similar to the observations. The model also simulates a realistic annual cycle. By using the model, possible mechanisms for the maintenance of mix of air in the lowermost stratosphere are revealed. The relevance of the results to the assessment of the environmental impact of aircraft effluence is discussed.
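
    The conditioning step can be sketched in a few lines (synthetic soundings; the PV threshold and the linear ozone-PV link are illustrative):

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic soundings: ozone (ppbv) on one theta surface plus PV at 300 hPa.
        pv300 = rng.gamma(2.0, 1.5, 2000)                # PVU, illustrative
        ozone = 60 + 40 * pv300 + rng.normal(0, 30, 2000)

        # Split into the two population groups and form conditional PDFs.
        low_tp = pv300 > np.percentile(pv300, 70)        # high PV -> low tropopause
        bins = np.linspace(0, 400, 41)
        pdf_low, _ = np.histogram(ozone[low_tp], bins=bins, density=True)
        pdf_high, _ = np.histogram(ozone[~low_tp], bins=bins, density=True)
        print(pdf_low.argmax(), pdf_high.argmax())       # modes of the two PDFs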

  20. Performance evaluation of parallel electric field tunnel field-effect transistor by a distributed-element circuit model

    NASA Astrophysics Data System (ADS)

    Morita, Yukinori; Mori, Takahiro; Migita, Shinji; Mizubayashi, Wataru; Tanabe, Akihito; Fukuda, Koichi; Matsukawa, Takashi; Endo, Kazuhiko; O'uchi, Shin-ichi; Liu, Yongxun; Masahara, Meishoku; Ota, Hiroyuki

    2014-12-01

    The performance of parallel electric field tunnel field-effect transistors (TFETs), in which band-to-band tunneling (BTBT) was initiated in line with the gate electric field, was evaluated. The TFET was fabricated by inserting an epitaxially grown parallel-plate tunnel capacitor between heavily doped source wells and gate insulators. Analysis using a distributed-element circuit model indicated that there should be a limit on the drain current caused by the self-voltage-drop effect in the ultrathin channel layer.

  1. Niches, models, and climate change: Assessing the assumptions and uncertainties

    PubMed Central

    Wiens, John A.; Stralberg, Diana; Jongsomjit, Dennis; Howell, Christine A.; Snyder, Mark A.

    2009-01-01

    As the rate and magnitude of climate change accelerate, understanding the consequences becomes increasingly important. Species distribution models (SDMs) based on current ecological niche constraints are used to project future species distributions. These models contain assumptions that add to the uncertainty in model projections stemming from the structure of the models, the algorithms used to translate niche associations into distributional probabilities, the quality and quantity of data, and mismatches between the scales of modeling and data. We illustrate the application of SDMs using two climate models and two distributional algorithms, together with information on distributional shifts in vegetation types, to project fine-scale future distributions of 60 California landbird species. Most species are projected to decrease in distribution by 2070. Changes in total species richness vary over the state, with large losses of species in some “hotspots” of vulnerability. Differences in distributional shifts among species will change species co-occurrences, creating spatial variation in similarities between current and future assemblages. We use these analyses to consider how assumptions can be addressed and uncertainties reduced. SDMs can provide a useful way to incorporate future conditions into conservation and management practices and decisions, but the uncertainties of model projections must be balanced with the risks of taking the wrong actions or the costs of inaction. Doing this will require that the sources and magnitudes of uncertainty are documented, and that conservationists and resource managers be willing to act despite the uncertainties. The alternative, of ignoring the future, is not an option. PMID:19822750

  2. Effect of polarization on the evolution of electromagnetic hollow Gaussian Schell-model beam

    NASA Astrophysics Data System (ADS)

    Long, Xuewen; Lu, Keqing; Zhang, Yuhong; Guo, Jianbang; Li, Kehao

    2011-02-01

    Based on the theory of coherence, an analytical propagation formula for partially polarized and partially coherent hollow Gaussian Schell-model beams (HGSMBs) passing through a paraxial optical system is derived. Furthermore, we show that the degree of polarization of the source may affect the evolution of HGSMBs and that a tunable dark region may exist. For the two special cases of fully coherent beams and partially coherent beams with δxx = δyy, the normalized intensity distributions are independent of the polarization of the source.

  3. Design and Implementation of a Distributed Version of the NASA Engine Performance Program

    NASA Technical Reports Server (NTRS)

    Cours, Jeffrey T.

    1994-01-01

    Distributed NEPP is a new version of the NASA Engine Performance Program that runs in parallel on a collection of Unix workstations connected through a network. The program is fault-tolerant, efficient, and shows significant speed-up in a multi-user, heterogeneous environment. This report describes the issues involved in designing distributed NEPP, the algorithms the program uses, and the performance distributed NEPP achieves. It develops an analytical model to predict and measure the performance of the simple distribution, multiple distribution, and fault-tolerant distribution algorithms that distributed NEPP incorporates. Finally, the appendices explain how to use distributed NEPP and document the organization of the program's source code.
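
    As a purely hypothetical illustration of an analytical performance model of this kind (the report's actual model terms are not reproduced here), one can predict wall time from a per-case compute cost plus a per-case communication overhead under round-robin case distribution:

```python
# Hypothetical speed-up model: n independent engine-cycle cases on p workers.
from math import ceil

def predicted_time(n_cases, p_workers, t_case, t_comm):
    """Round-robin distribution: the slowest worker runs ceil(n/p) cases."""
    return ceil(n_cases / p_workers) * (t_case + t_comm)

speedup = predicted_time(120, 1, 2.0, 0.1) / predicted_time(120, 8, 2.0, 0.1)
print(f"predicted speed-up on 8 workers: {speedup:.1f}x")  # 8.0x for these inputs
```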

  4. Modifications to the NASA SP-8072 Distributed Source Method II for Ares I Lift-off Environment Predictions

    NASA Technical Reports Server (NTRS)

    Haynes, Jared; Kenny, Jeremy

    2009-01-01

    Lift-off acoustic environments for NASA's Ares I Crew Launch Vehicle are predicted using the second source distribution methodology (Distributed Source Method II) described in NASA SP-8072. Three modifications made to the model include a shorter core length approximation, a core termination procedure upon plume deflection, and a new set of directivity indices measured from static test firings of the Reusable Solid Rocket Motor (RSRM). The modified sound pressure level predictions increased by more than 5 dB overall, and the peak levels shifted two one-third-octave bands higher in frequency.
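
    The bookkeeping underlying any such distributed-source method is the incoherent combination of per-source band levels at the receiver. The sketch below shows that combination with placeholder source levels, distances, and directivity indices; the SP-8072 allocation of acoustic power along the plume and the RSRM-derived directivity data are not reproduced here.

```python
# Incoherent combination of band levels from discrete plume sources.
import numpy as np

def combine_spl(levels_db):
    """Incoherent sum of sound pressure levels in one band."""
    return 10.0 * np.log10(np.sum(10.0 ** (np.asarray(levels_db) / 10.0)))

Lw = np.array([165.0, 168.0, 166.0, 160.0])  # per-source band power levels (dB), assumed
r = np.array([80.0, 95.0, 110.0, 130.0])     # source-receiver distances (m), assumed
DI = np.array([2.0, 3.0, 1.5, 0.5])          # directivity indices (dB), assumed

# Free-field level from each source, then incoherent combination;
# the 11 dB term is 10*log10(4*pi) for spherical spreading.
Lp = Lw - 20.0 * np.log10(r) - 11.0 + DI
print(combine_spl(Lp))
```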

  5. Modeling Insights into Deuterium Excess as an Indicator of Water Vapor Source Conditions

    NASA Technical Reports Server (NTRS)

    Lewis, Sophie C.; Legrande, Allegra Nicole; Kelley, Maxwell; Schmidt, Gavin A.

    2013-01-01

    Deuterium excess (d) is interpreted in conventional paleoclimate reconstructions as a tracer of oceanic source region conditions, such as temperature, where precipitation originates. Previous studies have adopted co-isotopic approaches to estimate past changes in both site and oceanic source temperatures for ice core sites using empirical relationships derived from conceptual distillation models, particularly Mixed Cloud Isotopic Models (MCIMs). However, the relationship between d and oceanic surface conditions remains unclear in past contexts. We investigate this climate-isotope relationship for sites in Greenland and Antarctica using multiple simulations of the water isotope-enabled Goddard Institute for Space Studies (GISS) ModelE-R general circulation model and apply a novel suite of model vapor source distribution (VSD) tracers to assess d as a proxy for source temperature variability under a range of climatic conditions. Simulated average source temperatures determined by the VSDs are compared to synthetic source temperature estimates calculated using MCIM equations linking d to source region conditions. We show that although deuterium excess is generally a faithful tracer of source temperatures as estimated by the MCIM approach, large discrepancies in the isotope-climate relationship occur around Greenland during the Last Glacial Maximum simulation, when precipitation seasonality and moisture source regions were notably different from present. This identified sensitivity in d as a source temperature proxy suggests that quantitative climate reconstructions from deuterium excess should be treated with caution for some sites when boundary conditions are significantly different from the present day. Also, the exclusion of the influence of humidity and other evaporative source changes in MCIM regressions may limit the quantification of source temperature fluctuations from deuterium excess in some instances.
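
    For reference, deuterium excess itself is the standard linear combination d = δD − 8·δ18O (Dansgaard, 1964); a minimal helper, with an illustrative input only:

```python
# Deuterium excess from the two water isotope ratios.
def deuterium_excess(delta_D, delta_18O):
    """Both inputs in per mil; returns d in per mil."""
    return delta_D - 8.0 * delta_18O

print(deuterium_excess(-250.0, -33.0))  # 14.0 per mil, a plausible polar value
```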

  6. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    NASA Astrophysics Data System (ADS)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid medium exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as the established discrete source based modeling, we have reported on an improved explicit model, referred to as "Virtual Source" (VS) diffusion approximation (DA), to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in a low-albedo medium. In this model, the collimated light in the standard DA is approximated as multiple isotropic point sources (VSs) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and further introduced into the image reconstruction of a Laminar Optical Tomography system.
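
    The virtual-source idea can be sketched as the superposition of infinite-medium diffusion Green's functions for isotropic point sources placed along the incident axis. Depths and weights below are placeholders (the paper fits them to reflectance data), and a faithful half-space model would add boundary (image-source) corrections not shown here.

```python
# Two-virtual-source superposition in the diffusion approximation.
import numpy as np

mu_a, mu_s_prime = 0.1, 10.0             # optical properties (1/cm), assumed
D = 1.0 / (3.0 * (mu_a + mu_s_prime))    # diffusion coefficient (cm)
mu_eff = np.sqrt(mu_a / D)               # effective attenuation (1/cm)

def point_source_fluence(r, power=1.0):
    """Infinite-medium diffusion Green's function for an isotropic source."""
    return power * np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

z = np.array([0.05, 0.3])                # virtual-source depths (cm), placeholders
w = np.array([0.7, 0.3])                 # virtual-source weights, placeholders
rho = np.linspace(0.01, 1.0, 50)         # lateral distance on the surface (cm)
r = np.sqrt(rho[:, None] ** 2 + z[None, :] ** 2)
fluence = (w[None, :] * point_source_fluence(r)).sum(axis=1)
```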

  7. Simultaneous reconstruction of emission activity and attenuation coefficient distribution from TOF data, acquired with external transmission source

    NASA Astrophysics Data System (ADS)

    Panin, V. Y.; Aykac, M.; Casey, M. E.

    2013-06-01

    The simultaneous PET data reconstruction of emission activity and attenuation coefficient distribution is presented, where the attenuation image is constrained by exploiting an external transmission source. Data are acquired in time-of-flight (TOF) mode, allowing in principle for separation of emission and transmission data. Nevertheless, here all data are reconstructed at once, eliminating the need to trace the position of the transmission source in sinogram space. Contamination of emission data by the transmission source and vice versa is naturally modeled. Attenuated emission activity data also provide additional information about object attenuation coefficient values. The algorithm alternates between attenuation and emission activity image updates. We also propose a method for estimating the spatial scatter distribution from the transmission source by incorporating knowledge about the expected range of attenuation map values. The reconstruction of experimental data from the Siemens mCT scanner suggests that simultaneous reconstruction improves attenuation map image quality, as compared to when data are separated. In the presented example, the attenuation map image noise was reduced and non-uniformity artifacts that occurred due to scatter estimation were suppressed. On the other hand, the use of transmission data stabilizes the attenuation coefficient distribution reconstruction compared with using TOF emission data alone. The example of improving emission images by refining a CT-based patient attenuation map is presented, revealing potential benefits of simultaneous CT and PET data reconstruction.

  8. Global two dimensional chemistry model and simulation of atmospheric chemical composition

    NASA Astrophysics Data System (ADS)

    Zhang, Renjian; Wang, Mingxing; Zeng, Qingcun

    2000-03-01

    A global two-dimensional zonally averaged chemistry model is developed to study the chemical composition of the atmosphere. The region of the model is from 90°S to 90°N and from the ground to an altitude of 20 km, with a resolution of 5° × 1 km. The wind field is the residual circulation calculated from the diabatic rate. 34 species and 104 chemical and photochemical reactions are considered in the model. The sources of CH4, CO and NOx, which are divided into seasonal and non-seasonal sources, are parameterized as functions of latitude and time. The chemical composition of the atmosphere was simulated with the emission levels of CH4, CO and NOx in 1990. The results are compared with observations and other model results, showing that the model successfully simulates the atmospheric chemical composition and the distribution of CH4.

  9. Visualization of Green's Function Anomalies for Megathrust Source in Nankai Trough by Reciprocity Method

    NASA Astrophysics Data System (ADS)

    Petukhin, A.; Miyakoshi, K.; Tsurugi, M.; Kawase, H.; Kamae, K.

    2014-12-01

    The effect of various areas (asperities or SMGAs) in the source of a megathrust subduction-zone earthquake on simulated long-period ground motions is studied. For this case study we employed the source fault model proposed by HERP (2012) for a future M9-class event in the Nankai trough. The velocity structure is the 3-D JIVSM model developed for long-period ground motion simulations. The target site OSKH02 "Konohana" is located in the center of the Osaka basin. Green's functions for a large number of sub-sources (>1000) were calculated by FDM using the reciprocity approach. Depths, strike and dip angles of sub-sources are adjusted to the shape of the upper boundary of the Philippine Sea plate. The target period range is 4-20 s. A strongly nonuniform distribution of peak amplitudes of the Green's functions is observed, and two areas have anomalously large amplitudes: (1) a large along-strike elongated area just south of the Kii peninsula and (2) a similar area south of the Kii peninsula but shifted toward the Nankai trough. The elongation of the first anomaly fits well the 10-15 km isolines of the depth distribution of the Philippine Sea plate, while the target site is located in the direction perpendicular to these isolines. For this reason, we preliminarily suppose that the plate shape may have a critical effect on the simulated ground motions, via a cumulative effect of sub-source radiation patterns and the specific strike and dip angle distributions. Analysis of the time delay of the peak arrivals at OSKH02 demonstrates that Green's functions from the second anomaly, located in the shallow part of the plate boundary, are mostly composed of surface waves.

  10. Sound field reproduction as an equivalent acoustical scattering problem.

    PubMed

    Fazi, Filippo Maria; Nelson, Philip A

    2013-11-01

    Given a continuous distribution of acoustic sources, the determination of the source strength that ensures the synthesis of a desired sound field is shown to be identical to the solution of an equivalent acoustic scattering problem. The paper begins with the presentation of the general theory that underpins sound field reproduction with secondary sources continuously arranged on the boundary of the reproduction region. The process of reproduction by a continuous source distribution is modeled by means of an integral operator (the single layer potential). It is then shown how the solution of the sound reproduction problem corresponds to that of an equivalent scattering problem. Analytical solutions are computed for two specific instances of this problem, involving, respectively, the use of a secondary source distribution in spherical and planar geometries. The results are shown to be the same as those obtained with analyses based on High Order Ambisonics and Wave Field Synthesis, respectively, thus bringing to light a fundamental analogy between these two methods of sound reproduction. Finally, it is shown how the physical optics (Kirchhoff) approximation enables the derivation of a high-frequency simplification for the problem under consideration, this in turn being related to the secondary source selection criterion reported in the literature on Wave Field Synthesis.
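
    The single layer potential itself is straightforward to evaluate numerically: integrate the free-space Green's function against a source strength density over the secondary-source boundary. The sketch below uses a Fibonacci-sphere quadrature and a placeholder density; it is an evaluation of the operator, not a solved reproduction problem.

```python
# Numerical evaluation of the single layer potential on a spherical boundary.
import numpy as np

k = 2 * np.pi * 500 / 343.0               # wavenumber at 500 Hz in air (1/m)
R = 1.0                                   # radius of the source sphere (m), assumed
N = 2000                                  # quadrature points (Fibonacci sphere)

i = np.arange(N)
phi = np.pi * (3.0 - np.sqrt(5.0)) * i    # golden-angle longitudes
cz = 1 - 2 * (i + 0.5) / N                # uniform in cos(theta)
sz = np.sqrt(1 - cz**2)
y = R * np.stack([sz * np.cos(phi), sz * np.sin(phi), cz], axis=1)
dS = 4 * np.pi * R**2 / N                 # equal-area weight per point

sigma = np.cos(cz * np.pi)                # placeholder source strength density
x = np.array([0.2, 0.1, 0.0])             # field point inside the sphere

r = np.linalg.norm(y - x, axis=1)
G = np.exp(1j * k * r) / (4 * np.pi * r)  # free-space Green's function
p = np.sum(G * sigma) * dS                # single layer potential at x
```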

  11. An open-source model and solution method to predict co-contraction in the finger.

    PubMed

    MacIntosh, Alexander R; Keir, Peter J

    2017-10-01

    A novel open-source biomechanical model of the index finger with an electromyography (EMG)-constrained static optimization solution method is developed, with the goals of improving co-contraction estimates and providing a means to assess tendon tension distribution through the finger. The Intrinsic model has four degrees of freedom and seven muscles (with a 14-component extensor mechanism). A novel plugin developed for the OpenSim modelling software applied the EMG-constrained static optimization solution method. Ten participants performed static pressing in three finger postures and five dynamic free-motion tasks. Index finger 3D kinematics, force (5, 15, 30 N), and EMG (4 extrinsic muscles and the first dorsal interosseous) were used in the analysis. The Intrinsic model predicted 29% more co-contraction during static pressing than the existing model. Further, tendon tension distribution patterns and forces, known to be essential to produce finger action, were determined by the model across all postures. The Intrinsic model and custom solution method improved co-contraction estimates to facilitate force propagation through the finger. These tools improve our interpretation of loads in the finger, helping to develop better rehabilitation and workplace injury risk reduction strategies.
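
    A minimal sketch of EMG-constrained static optimization in the spirit described, with hypothetical moment arms, maximal forces, and EMG bounds: the EMG lower bounds force antagonist activity that an unconstrained optimizer would set to zero, which is exactly the co-contraction effect.

```python
# EMG-constrained static optimization for a single joint moment.
import numpy as np
from scipy.optimize import minimize

r = np.array([0.01, 0.008, -0.006, 0.005])   # moment arms (m), assumed
f_max = np.array([60.0, 45.0, 80.0, 30.0])   # max isometric forces (N), assumed
tau_target = 0.35                            # required joint moment (N*m), assumed
a_emg = np.array([0.2, 0.0, 0.1, 0.0])       # EMG-derived activation lower bounds

res = minimize(
    lambda a: np.sum(a**2),                  # minimize summed squared activation
    x0=np.full(4, 0.3),
    bounds=[(lo, 1.0) for lo in a_emg],      # EMG constrains activation from below
    constraints={"type": "eq",
                 "fun": lambda a: r @ (a * f_max) - tau_target},
    method="SLSQP",
)
print(res.x)  # activations, including antagonist co-contraction forced by EMG
```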

  12. Assimilation of ground and satellite snow observations in a distributed hydrologic model to improve water supply forecasts in the Upper Colorado River Basin

    NASA Astrophysics Data System (ADS)

    Micheletty, P. D.; Day, G. N.; Quebbeman, J.; Carney, S.; Park, G. H.

    2016-12-01

    The Upper Colorado River Basin above Lake Powell is a major source of water supply for 25 million people and provides irrigation water for 3.5 million acres. Approximately 85% of the annual runoff is produced from snowmelt. Water supply forecasts of the April-July runoff produced by the National Weather Service (NWS) Colorado Basin River Forecast Center (CBRFC), are critical to basin water management. This project leverages advanced distributed models, datasets, and snow data assimilation techniques to improve operational water supply forecasts made by CBRFC in the Upper Colorado River Basin. The current work will specifically focus on improving water supply forecasts through the implementation of a snow data assimilation process coupled with the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM). Three types of observations will be used in the snow data assimilation system: satellite Snow Covered Area (MODSCAG), satellite Dust Radiative Forcing in Snow (MODDRFS), and SNOTEL Snow Water Equivalent (SWE). SNOTEL SWE provides the main source of high elevation snowpack information during the snow season, however, these point measurement sites are carefully selected to provide consistent indices of snowpack, and may not be representative of the surrounding watershed. We address this problem by transforming the SWE observations to standardized deviates and interpolating the standardized deviates using a spatial regression model. The interpolation process will also take advantage of the MODIS Snow Covered Area and Grainsize (MODSCAG) product to inform the model on the spatial distribution of snow. The interpolated standardized deviates are back-transformed and used in an Ensemble Kalman Filter (EnKF) to update the model simulated SWE. The MODIS Dust Radiative Forcing in Snow (MODDRFS) product will be used more directly through temporary adjustments to model snowmelt parameters, which should improve melt estimates in areas affected by dust on snow. In order to assess the value of different data sources, reforecasts will be produced for a historical period and performance measures will be computed to assess forecast skill. The existing CBRFC Ensemble Streamflow Prediction (ESP) reforecasts will provide a baseline for comparison to determine the added-value of the data assimilation process.
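
    The update step of such a system is the standard (perturbed-observation) Ensemble Kalman Filter; a toy one-state sketch with synthetic SWE values, standing in for the back-transformed interpolated observations described above:

```python
# Perturbed-observation EnKF update of simulated SWE toward an observation.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(300.0, 40.0, size=(50, 1))    # ensemble of simulated SWE (mm), synthetic
y, r_obs = 340.0, 15.0**2                    # observation and its error variance, assumed

H = np.array([[1.0]])                        # direct observation of the SWE state
P = np.cov(X.T).reshape(1, 1)                # ensemble forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r_obs)  # Kalman gain

# Each member assimilates a noisy copy of y (perturbed observations).
y_pert = y + rng.normal(0.0, np.sqrt(r_obs), size=(50, 1))
X_a = X + (y_pert - X @ H.T) @ K.T           # updated (analysis) ensemble
```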

  13. Intercomparison of Meteorological Forcing Data from Empirical and Mesoscale Model Sources in the N.F. American River Basin in northern California

    NASA Astrophysics Data System (ADS)

    Wayand, N. E.; Hamlet, A. F.; Hughes, M. R.; Feld, S.; Lundquist, J. D.

    2012-12-01

    The data required to drive distributed hydrological models are significantly limited within mountainous terrain due to a scarcity of observations. This study evaluated three common configurations of forcing data: (a) one low-elevation station, combined with empirical techniques; (b) gridded output from the Weather Research and Forecasting (WRF) model; and (c) a combination of the two. Each configuration was evaluated within the heavily instrumented North Fork American River Basin in northern California, during October-June 2000-2010. Simulations of streamflow and snowpack using the Distributed Hydrology Soil and Vegetation Model (DHSVM) highlighted precipitation and radiation as the variables whose sources resulted in significant differences. The best source of precipitation data varied between years. On average, the performance of WRF and of the single station distributed using the Parameter Regression on Independent Slopes Model (PRISM) were not significantly different. The average percent biases in simulated streamflow were 3.4% and 0.9% for configurations (a) and (b) respectively, even though precipitation compared directly with gauge measurements was biased high by 6% and 17%, suggesting that gauge undercatch may explain part of the bias. Simulations of snowpack using empirically estimated long-wave irradiance resulted in melt rates lower than those observed at high-elevation sites, while at lower elevations the same forcing caused significant mid-winter melt that was not observed (Figure 1). These results highlight the complexity of how forcing data sources impact hydrology over different areas (high- vs. low-elevation snow) and different time periods. Overall, results support the use of output from the WRF model over empirical techniques in regions with limited station data. FIG. 1. (a,b) Simulated SWE from DHSVM compared to observations at the Sierra Snow Lab (2100 m) and Blue Canyon (1609 m) during 2008-2009. Modeled (c,d) internal pack temperature, (e,f) downward short-wave irradiance, (g,h) downward long-wave irradiance, and (i,j) net irradiance. Note that plots e,g,i focus on the melt season (March-May), and plots f,h,j focus on the erroneous mid-winter melt event during January; time periods are marked with vertical dashed lines in (a) and (b).

  14. The Fukushima releases: an inverse modelling approach to assess the source term by using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, Olivier; Mathieu, Anne; Didier, Damien; Tombette, Marilyne; Quélo, Denis; Winiarek, Victor; Bocquet, Marc

    2013-04-01

    The Chernobyl nuclear accident and more recently the Fukushima accident highlighted that the largest source of error in consequence assessment is the source term estimation, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modelling methods have proved to be efficient in assessing the source term in accidental situations (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2011; Winiarek et al., 2012). These methods combine environmental measurements and atmospheric dispersion models. They have recently been applied to the Fukushima accident. Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012; Winiarek et al., 2013). During the Fukushima accident, such measurements were far less numerous and not as well distributed within Japan as the dose rate measurements. To efficiently document the evolution of the contamination, gamma dose rate measurements were numerous, well distributed within Japan, and offered a high temporal frequency. However, dose rate data are not as easy to use as air sampling measurements, and until now they were not used in inverse modelling approaches. Indeed, dose rate data result from all the gamma emitters present in the ground and in the atmosphere in the vicinity of the receptor. They do not allow one to determine the isotopic composition or to distinguish the plume contribution from wet deposition. The presented approach proposes a way to use dose rate measurements in an inverse modelling approach without the need for a priori information on emissions. The method proved to be efficient and reliable when applied to the Fukushima accident. The emissions for the 8 main isotopes Xe-133, Cs-134, Cs-136, Cs-137, Ba-137m, I-131, I-132 and Te-132 have been assessed. The Daiichi power plant events (such as ventings, explosions…) known to have caused atmospheric releases are well identified in the retrieved source term, except for the unit 3 explosion, where no measurement was available. Comparisons between simulations of atmospheric dispersion and deposition using the retrieved source term show a good agreement with environmental observations. Moreover, an important outcome of this study is that the method proved to be perfectly suited to crisis management and should contribute to improving our response in case of a nuclear accident.
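
    Schematically, once a dispersion model provides a source-receptor sensitivity matrix, retrieving non-negative release rates is a constrained linear inverse problem. The sketch below uses random placeholders for the matrix and observations; the paper's actual treatment of dose rate physics (ground shine plus plume contributions per isotope) is not reproduced.

```python
# Toy source-term retrieval by non-negative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
H = rng.exponential(1.0, size=(200, 24))   # dispersion-model sensitivities, synthetic
q_true = np.zeros(24); q_true[6:10] = 5.0  # a 4-period release, for testing only
y = H @ q_true + rng.normal(0, 0.5, 200)   # synthetic dose rate observations

q_est, residual = nnls(H, y)               # retrieved release rates (>= 0)
```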

  15. Modeling runoff and erosion risk in a small steep cultivated watershed using different data sources: from on-site measurements to farmers' perceptions

    NASA Astrophysics Data System (ADS)

    Auvet, B.; Lidon, B.; Kartiwa, B.; Le Bissonnais, Y.; Poussin, J.-C.

    2015-09-01

    This paper presents an approach to model runoff and erosion risk in a context of data scarcity, whereas the majority of available models require large quantities of physical data that are frequently not accessible. To overcome this problem, our approach uses different sources of data, particularly on agricultural practices (tillage and land cover) and farmers' perceptions of runoff and erosion. The model was developed on a small (5 ha) cultivated watershed characterized by extreme conditions (slopes of up to 55 %, extreme rainfall events) on the Merapi volcano in Indonesia. Runoff was modelled using two versions of STREAM. First, a lumped version was used to determine the global parameters of the watershed. Second, a distributed version used three parameters for the production of runoff (slope, land cover and roughness), a precise DEM, and the position of waterways for runoff distribution. This information was derived from field observations and interviews with farmers. Both surface runoff models accurately reproduced runoff at the outlet. However, the distributed model (Nash-Sutcliffe = 0.94) was more accurate than the adjusted lumped model (N-S = 0.85), especially for the smallest and biggest runoff events, and produced accurate spatial distribution of runoff production and concentration. Different types of erosion processes (landslides, linear inter-ridge erosion, linear erosion in main waterways) were modelled as a combination of a hazard map (the spatial distribution of runoff/infiltration volume provided by the distributed model), and a susceptibility map combining slope, land cover and tillage, derived from in situ observations and interviews with farmers. Each erosion risk map gives a spatial representation of the different erosion processes including risk intensities and frequencies that were validated by the farmers and by in situ observations. Maps of erosion risk confirmed the impact of the concentration of runoff, the high susceptibility of long steep slopes, and revealed the critical role of tillage direction. Calibrating and validating models using in situ measurements, observations and farmers' perceptions made it possible to represent runoff and erosion risk despite the initial scarcity of hydrological data. Even if the models mainly provided orders of magnitude and qualitative information, they significantly improved our understanding of the watershed dynamics. In addition, the information produced by such models is easy for farmers to use to manage runoff and erosion by using appropriate agricultural practices.
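
    The Nash-Sutcliffe efficiencies quoted above (0.94 distributed vs. 0.85 lumped) follow the standard definition: one minus the ratio of the model error sum of squares to the variance of the observations about their mean.

```python
# Nash-Sutcliffe model efficiency for paired observed/simulated series.
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```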

  16. Tests and consequences of disk plus halo models of gamma-ray burst sources

    NASA Technical Reports Server (NTRS)

    Smith, I. A.

    1995-01-01

    The gamma-ray burst observations made by the Burst and Transient Source Experiment (BATSE) and by previous experiments are still consistent with a combined Galactic disk (or Galactic spiral arm) plus extended Galactic halo model. Testable predictions and consequences of the disk plus halo model are discussed here; tests performed on the expanded BATSE database in the future will constrain the allowed model parameters and may eventually rule out the disk plus halo model. Using examples, it is shown that if the halo has an appropriate edge, BATSE will never detect an anisotropic signal from the halo of the Andromeda galaxy. A prediction of the disk plus halo model is that the fraction of the bursts observed to be in the 'disk' population rises as the detector sensitivity improves. A careful reexamination of the numbers of bursts in the two populations for the pre-BATSE databases could rule out this class of models. Similarly, it is predicted that different satellites will observe different relative numbers of bursts in the two classes for any model in which there are two different spatial distributions of the sources, or for models in which there is one spatial distribution of the sources that is sampled to different depths for the two classes. An important consequence of the disk plus halo model is that for the birthrate of the halo sources to be small compared to the birthrate of the disk sources, it is necessary for the halo sources to release many orders of magnitude more energy over their bursting lifetime than the disk sources. The halo bursts must also be much more luminous than the disk bursts; if this disk-halo model is correct, it is necessary to explain why the disk sources do not produce halo-type bursts.

  17. Tracing Large-Scale Structure with Radio Sources

    NASA Astrophysics Data System (ADS)

    Lindsay, S. N.

    2015-02-01

    In this thesis, I investigate the spatial distribution of radio sources, and quantify their clustering strength over a range of redshifts, up to z ≈ 2.2, using various forms of the correlation function measured with data from several multi-wavelength surveys. I present the optical spectra of 30 radio AGN (S1.4 > 100 mJy) in the GAMA/H-ATLAS fields, for which emission line redshifts could be deduced, from observations of 79 target sources with the EFOSC2 spectrograph on the NTT. The mean redshift of these sources is z = 1.2; 12 were identified as quasars (40 per cent), and 6 redshifts (out of 24 targets) were found for AGN hosts of multiple radio components. While obtaining spectra for hosts of these multi-component sources is possible, their lower success rate highlights the difficulty in achieving a redshift-complete radio sample. Taking an existing spectroscopic redshift survey (GAMA) and radio sources from the FIRST survey (S1.4 > 1 mJy), I then present a cross-matched radio sample with 1,635 spectroscopic redshifts with a median value of z = 0.34. The spatial correlation function of this sample is used to find the redshift-space (s0) and real-space correlation lengths (r0 ≈ 8.2 h^-1 Mpc), and a mass bias of ≈1.9. Insight into the redshift dependence of these quantities is gained by using the angular correlation function and Limber inversion to measure the same spatial clustering parameters. Photometric redshifts from SDSS/UKIDSS are incorporated to produce a larger matched radio sample at z ≈ 0.48 (and low- and high-redshift subsamples at z ≈ 0.30 and z ≈ 0.65), while their redshift distribution is subtracted from that taken from the SKADS radio simulations to estimate the redshift distribution of the remaining unmatched sources (z ≈ 1.55). The observed bias evolution over this redshift range is compared with model predictions based on the SKADS simulations, with good agreement at low redshift. The bias found at high redshift significantly exceeds these predictions, however, suggesting a more massive population of galaxies than expected, either due to the relative proportions of different radio sources, or a greater typical halo mass for the high-redshift sources. Finally, the reliance on a model redshift distribution to reach higher redshifts is removed, as the angular cross-correlation function is used with deep VLA data (S1.4 > 90 μJy) and optical/IR data from VIDEO/CFHTLS (Ks < 23.5) over 1 square degree. With high-quality photometric redshifts up to z ≈ 4, and a high signal-to-noise clustering measurement (due to the ≈100,000 Ks-selected galaxies), I am able to find the bias of a matched sample of only 766 radio sources (as well as of the VIDEO sources), divided into 4 redshift bins, reaching a median bias at z ≈ 2.15. Again, at high redshift, the measured bias appears to exceed the prediction made from the SKADS simulations. Applying luminosity cuts to the radio sample at L > 10^23 W Hz^-1 and higher (removing any non-AGN sources), I find a bias of 8-10 at z ≈ 1.5, considerably higher than for the full sample, and consistent with the more numerous FRI AGN having similar mass to the FRIIs (M ≈ 10^14 M⊙), contrary to the assumptions made in the SKADS simulations. Applying this adjustment to the model bias produces a better fit to the observations for the FIRST radio sources cross-matched with GAMA/SDSS/UKIDSS, as well as for the high-redshift radio sources in VIDEO.
Therefore, I have shown that we require a more robust model of the evolution of AGN, and their relation to the underlying dark matter distribution. In particular, understanding these quantities for the abundant FRI population is crucial if we are to use such sources to probe the cosmological model, as has been suggested by a number of authors (e.g. Raccanelli et al., 2012; Camera et al., 2012; Ferramacho et al., 2014).
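
    The angular correlation functions used throughout rest on pair counts; below is a minimal flat-sky sketch of the standard Landy-Szalay estimator with synthetic data and random catalogues (the thesis's survey masks and curved-sky geometry are not reproduced).

```python
# Landy-Szalay estimator w(theta) = (DD - 2DR + RR) / RR from pair counts.
import numpy as np
from scipy.spatial.distance import pdist, cdist

rng = np.random.default_rng(4)
data = rng.uniform(0, 1, (500, 2))      # source positions (deg), synthetic
rand = rng.uniform(0, 1, (2000, 2))     # random catalogue over the same area
bins = np.linspace(0.01, 0.2, 11)       # angular separation bins (deg)

dd, _ = np.histogram(pdist(data), bins=bins)
rr, _ = np.histogram(pdist(rand), bins=bins)
dr, _ = np.histogram(cdist(data, rand).ravel(), bins=bins)

nd, nr = len(data), len(rand)
DD = dd / (nd * (nd - 1) / 2)           # normalized pair counts
RR = rr / (nr * (nr - 1) / 2)
DR = dr / (nd * nr)
w_theta = (DD - 2 * DR + RR) / RR       # ~0 for the unclustered toy data here
```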

  18. Slip distribution of the 1952 Tokachi-Oki earthquake (M 8.1) along the Kuril Trench deduced from tsunami waveform inversion

    USGS Publications Warehouse

    Hirata, K.; Geist, E.; Satake, K.; Tanioka, Y.; Yamaki, S.

    2003-01-01

    We inverted 13 tsunami waveforms recorded in Japan to estimate the slip distribution of the 1952 Tokachi-Oki earthquake (M 8.1), which occurred southeast off Hokkaido along the southern Kuril subduction zone. The previously estimated source area determined from tsunami travel times [Hatori, 1973] did not coincide with the observed aftershock distribution. Our results show that a large amount of slip occurred in the aftershock area east of Hatori's tsunami source area, suggesting that a portion of the interplate thrust near the trench was ruptured by the main shock. We also found more than 5 m of slip along the deeper part of the seismogenic interface, just below the central part of Hatori's tsunami source area. This region, which also has the largest stress drop during the main shock, had few aftershocks. Large tsunami heights on the eastern Hokkaido coast are better explained by the heterogeneous slip model than by previous uniform-slip fault models. The total seismic moment is estimated to be 1.87 × 10^21 N m, giving a moment magnitude of Mw = 8.1. The revised tsunami source area is estimated to be 25.2 × 10^3 km², about 3 times larger than the previous tsunami source area. Of the four large earthquakes with M ≥ 7 that subsequently occurred in and around the rupture area of the 1952 event, three were at the edges of regions with a relatively small amount of slip. We also found that a subducted seamount near the edge of the rupture area possibly impeded slip along the plate interface.
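
    Schematically, tsunami waveform inversion of this kind is linear: stacked observed waveforms are fit by a non-negative combination of precomputed unit-slip Green's function waveforms, usually with damping for stability. The sketch below uses random placeholders, not the 1952 records or subfault geometry.

```python
# Damped non-negative least-squares slip inversion, d = G m.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_time, n_subfaults, lam = 600, 20, 0.1
G = rng.normal(0, 1, (n_time, n_subfaults))     # unit-slip synthetic waveforms
d = G @ np.abs(rng.normal(2, 1, n_subfaults))   # "observed" stacked waveforms

# Append lam * I rows so the solution is biased toward small, stable slip.
G_damped = np.vstack([G, lam * np.eye(n_subfaults)])
d_damped = np.concatenate([d, np.zeros(n_subfaults)])
slip, _ = nnls(G_damped, d_damped)              # non-negative slip per subfault
```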

  19. Long-term particulate matter modeling for health effect studies in California - Part 2: Concentrations and sources of ultrafine organic aerosols

    NASA Astrophysics Data System (ADS)

    Hu, Jianlin; Jathar, Shantanu; Zhang, Hongliang; Ying, Qi; Chen, Shu-Hua; Cappa, Christopher D.; Kleeman, Michael J.

    2017-04-01

    Organic aerosol (OA) is a major constituent of ultrafine particulate matter (PM0.1). Recent epidemiological studies have identified associations between PM0.1 OA and premature mortality and low birth weight. In this study, the source-oriented UCD/CIT model was used to simulate the concentrations and sources of primary organic aerosols (POA) and secondary organic aerosols (SOA) in PM0.1 in California for a 9-year (2000-2008) modeling period with 4 km horizontal resolution to provide more insight into PM0.1 OA for health effect studies. As a related quality control, predicted monthly average concentrations of fine particulate matter (PM2.5) total organic carbon at six major urban sites had mean fractional biases of -0.31 to 0.19 and mean fractional errors of 0.4 to 0.59. The predicted ratio of PM2.5 SOA / OA was lower than estimates derived from chemical mass balance (CMB) calculations by a factor of 2-3, which suggests the potential effects of processes such as POA volatility, additional SOA formation mechanisms, and missing sources. OA in PM0.1, the focus size fraction of this study, is dominated by POA. Wood smoke is found to be the single biggest source of PM0.1 OA in winter in California, while meat cooking, mobile emissions (gasoline and diesel engines), and other anthropogenic sources (mainly solvent usage and waste disposal) are the most important sources in summer. Biogenic emissions are predicted to be the largest PM0.1 SOA source, followed by mobile sources and other anthropogenic sources, but these rankings are sensitive to the SOA model used in the calculation. Air pollution control programs aiming to reduce PM0.1 OA concentrations should consider controlling solvent usage, waste disposal, and mobile emissions in California, but these findings should be revisited after the latest science is incorporated into the SOA exposure calculations. The spatial distributions of SOA associated with different sources are not sensitive to the choice of SOA model, although the absolute amount of SOA can change significantly. Therefore, the spatial distributions of PM0.1 POA and SOA over the 9-year study period provide useful information for epidemiological studies to further investigate the associations with health outcomes.
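
    The two skill scores quoted (mean fractional bias and mean fractional error) have standard bounded, dimensionless definitions over paired predicted and observed values:

```python
# Mean fractional bias (MFB) and mean fractional error (MFE).
import numpy as np

def mean_fractional_bias(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.mean(2.0 * (pred - obs) / (pred + obs))

def mean_fractional_error(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.mean(2.0 * np.abs(pred - obs) / (pred + obs))
```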

  20. Debiased orbit and absolute-magnitude distributions for near-Earth objects

    NASA Astrophysics Data System (ADS)

    Granvik, Mikael; Morbidelli, Alessandro; Jedicke, Robert; Bolin, Bryce; Bottke, William F.; Beshore, Edward; Vokrouhlický, David; Nesvorný, David; Michel, Patrick

    2018-09-01

    The debiased absolute-magnitude and orbit distributions as well as source regions for near-Earth objects (NEOs) provide a fundamental frame of reference for studies of individual NEOs and more complex population-level questions. We present a new four-dimensional model of the NEO population that describes debiased steady-state distributions of semimajor axis, eccentricity, inclination, and absolute magnitude H in the range 17 < H < 25. The modeling approach improves upon the methodology originally developed by Bottke et al. (2000, Science 288, 2190-2194) in that it is, for example, based on more realistic orbit distributions and uses source-specific absolute-magnitude distributions that allow for a power-law slope that varies with H. We divide the main asteroid belt into six different entrance routes or regions (ERs) to the NEO region: the ν6, 3:1J, 5:2J and 2:1J resonance complexes as well as the Hungarias and Phocaeas. In addition we include the Jupiter-family comets as the primary cometary source of NEOs. We calibrate the model against NEO detections by Catalina Sky Survey stations 703 and G96 during 2005-2012, and utilize the complementary nature of these two systems to quantify the systematic uncertainties associated with the resulting model. We find that the (fitted) H distributions have significant differences, although most of them show a minimum power-law slope at H ∼ 20. As a consequence of the differences between the ER-specific H distributions we find significant variations in, for example, the NEO orbit distribution, average lifetime, and the relative contribution of different ERs as a function of H. The most important ERs are the ν6 and 3:1J resonance complexes, with JFCs contributing a few percent of NEOs on average. A significant contribution from the Hungaria group leads to notable changes compared to the predictions by Bottke et al. in, for example, the orbit distribution and average lifetime of NEOs. We predict that there are 962 (+52/-56) NEOs with H < 17.75 and 802 (+48/-42) × 10^3 NEOs with H < 25; these numbers are in agreement with the most recent estimates found in the literature (the uncertainty estimates only account for the random component). Based on our model we find that the relative shares between different NEO groups (Amor, Apollo, Aten, Atira, Vatira) are (39.4, 54.4, 3.5, 1.2, 0.3)%, respectively, for the considered H range, and that these ratios have a negligible dependence on H. Finally, we find agreement between our estimate for the rate of Earth impacts by NEOs and recent estimates in the literature, but there remains a potentially significant discrepancy in the frequency of Tunguska-sized and Chelyabinsk-sized impacts.
