Sample records for source model results

  1. Comparison of hybrid receptor models to locate PCB sources in Chicago

    NASA Astrophysics Data System (ADS)

    Hsu, Ying-Kuang; Holsen, Thomas M.; Hopke, Philip K.

    Results of three hybrid receptor models, potential source contribution function (PSCF), concentration weighted trajectory (CWT), and residence time weighted concentration (RTWC), were compared for locating polychlorinated biphenyl (PCB) sources contributing to the atmospheric concentrations in Chicago. Variations of these models, including PSCF using mean and 75% criterion concentrations, joint probability PSCF (JP-PSCF), changes of point filters and grid cell sizes for RTWC, and PSCF using wind trajectories started at different altitudes, are also discussed. Modeling results were relatively consistent between models. However, no single model provided as complete information as was obtained by using all of them. CWT and 75% PSCF appear to be able to distinguish between larger sources and moderate ones. RTWC resolved high potential source areas. RTWC and JP-PSCF pooling data from all sampling sites removed the trailing effect often seen in PSCF modeling. PSCF results using average concentration criteria appear to identify both moderate and major sources. Each model has advantages and disadvantages. However, used in combination, they provide information that is not available if only one of them is used. For short-range atmospheric transport, PSCF results were consistent when using wind trajectories starting at different heights. Based on the archived PCB data, the modeling results indicate there is a large potential source area between Joliet and Kankakee, IL, and two moderate sources to the northwest and south of Chicago. On the south side of Chicago, in the neighborhood of Lake Calumet, several PCB sources were identified. Other unidentified potential source location(s) will require additional upwind/downwind field sampling to verify modeling results.
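
    For readers unfamiliar with the PSCF statistic referenced above, a minimal sketch follows: the PSCF value of a grid cell is the count of trajectory endpoints associated with receptor concentrations above a criterion, divided by the count of all endpoints in that cell. The grid bounds, cell size, and the 75th-percentile criterion below are illustrative assumptions, not parameters from the paper.

    ```python
    import numpy as np

    def pscf(lats, lons, conc_per_endpoint, criterion,
             grid=(30, 40), bounds=(35.0, 50.0, -95.0, -80.0)):
        """Potential source contribution function on a lat/lon grid.

        lats, lons        : trajectory-endpoint coordinates
        conc_per_endpoint : receptor concentration tied to each endpoint's trajectory
        criterion         : concentration threshold (e.g. mean or 75th percentile)
        """
        lat_min, lat_max, lon_min, lon_max = bounds
        lat_edges = np.linspace(lat_min, lat_max, grid[0] + 1)
        lon_edges = np.linspace(lon_min, lon_max, grid[1] + 1)
        # n_ij: all endpoints per cell; m_ij: endpoints from "high" trajectories
        n_ij, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
        high = conc_per_endpoint > criterion
        m_ij, _, _ = np.histogram2d(lats[high], lons[high], bins=[lat_edges, lon_edges])
        return np.where(n_ij > 0, m_ij / np.maximum(n_ij, 1), 0.0)
        # cells with few endpoints are typically down-weighted before interpretation

    # Example with synthetic endpoints and a 75th-percentile criterion:
    rng = np.random.default_rng(0)
    lats = rng.uniform(35, 50, 10_000)
    lons = rng.uniform(-95, -80, 10_000)
    conc = rng.lognormal(0.0, 1.0, 10_000)
    field = pscf(lats, lons, conc, criterion=np.percentile(conc, 75))
    ```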

  2. Preliminary Results of the first European Source Apportionment intercomparison for Receptor and Chemical Transport Models

    NASA Astrophysics Data System (ADS)

    Belis, Claudio A.; Pernigotti, Denise; Pirovano, Guido

    2017-04-01

    Source Apportionment (SA) is the identification of ambient air pollution sources and the quantification of their contribution to pollution levels. This task can be accomplished using different approaches: chemical transport models and receptor models. Receptor models are derived from measurements and are therefore considered a reference for primary-source contributions at urban background levels. Chemical transport models estimate secondary (inorganic) pollutants better and can provide gridded results with high time resolution. Assessing the performance of SA model results is essential to guarantee reliable information on source contributions to be used for reporting to the Commission and in the development of pollution abatement strategies. This is the first intercomparison ever designed to test both receptor-oriented models (or receptor models) and chemical transport models (or source-oriented models) using a comprehensive method based on model quality indicators and pre-established criteria. The target pollutant of this exercise, organised in the frame of FAIRMODE WG 3, is PM10. Both receptor models and chemical transport models perform well when evaluated against their respective references. Both types of models demonstrate quite satisfactory capabilities to estimate yearly source contributions, while the estimation of source contributions at the daily level (time series) is more critical. Chemical transport models showed a tendency to underestimate the contribution of some single sources when compared to receptor models. For receptor models, the most critical source category is industry, probably because of the variety of single sources with different characteristics that belong to this category. Dust is the most problematic source for chemical transport models, likely due to the poor information about this kind of source in the emission inventories, particularly concerning road dust re-suspension, and consequently the little detail about the chemical components of this source used in the models. The sensitivity tests show that chemical transport models perform better when resolving a detailed set of sources (14) than when using a simplified one (only 8). It was also observed that enhanced vertical profiling can improve the estimation of specific sources, such as industry, under complex meteorological conditions, and that insufficient spatial resolution in urban areas can impair the ability of models to estimate the contribution of diffuse primary sources (e.g. traffic). Both families of models identify traffic and biomass burning as the first and second most contributing categories, respectively, to elemental carbon. The results of this study demonstrate that the source apportionment assessment methodology developed by the JRC is applicable to any kind of SA model. The same methodology is implemented in the on-line DeltaSA tool to support source apportionment model evaluation (http://source-apportionment.jrc.ec.europa.eu/).

  3. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Shou-ping; Xin, Xiao-kang

    2017-07-01

    Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency response, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts and positions for both single and multiple pollution sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors of no more than 5%. For cases with multiple point sources and multiple unknowns, the computed results contain some error because many possible combinations of pollution sources exist. However, by using prior experience to narrow the search scope, the relative errors of the identification results are kept below 5%, which shows that the established source identification model can be used to direct emergency responses.
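
    As a rough illustration of this approach (not the authors' code), the sketch below fits the mass and position of a single instantaneous source by minimizing the misfit between observed concentrations and the standard analytic solution of the 1-D advection-dispersion equation. The channel geometry, dispersion coefficient, and GA settings are assumed for the example.

    ```python
    import numpy as np

    def analytic_c(x, t, mass, x0, u=0.5, D=5.0, A=20.0):
        """Instantaneous point source in a uniform 1-D channel:
        C(x,t) = M / (A*sqrt(4*pi*D*t)) * exp(-(x - x0 - u*t)^2 / (4*D*t))."""
        return mass / (A * np.sqrt(4 * np.pi * D * t)) * \
               np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t))

    def fitness(pop, x_obs, t_obs, c_obs):
        """Negative sum of squared errors for each (mass, x0) candidate."""
        err = [np.sum((analytic_c(x_obs, t_obs, m, x0) - c_obs) ** 2) for m, x0 in pop]
        return -np.asarray(err)

    rng = np.random.default_rng(1)
    x_obs = np.array([800.0, 1000.0, 1200.0]); t_obs = 1800.0
    c_obs = analytic_c(x_obs, t_obs, mass=5000.0, x0=100.0)  # synthetic "observations"

    # Basic GA: rank selection, blend crossover, Gaussian mutation.
    pop = np.column_stack([rng.uniform(1e3, 1e4, 10), rng.uniform(0.0, 500.0, 10)])
    for _ in range(200):
        f = fitness(pop, x_obs, t_obs, c_obs)
        parents = pop[np.argsort(f)][-5:]            # keep the best half
        kids = []
        for _ in range(len(pop) - len(parents)):
            a, b = parents[rng.integers(5)], parents[rng.integers(5)]
            w = rng.uniform(size=2)
            kids.append(w * a + (1 - w) * b + rng.normal(0.0, [50.0, 5.0]))
        pop = np.vstack([parents, kids])
    best = pop[np.argmax(fitness(pop, x_obs, t_obs, c_obs))]
    print("estimated mass, position:", best)
    ```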

  4. Using Model Comparisons to Understand Sources of Nitrogen Delivered to US Coastal Areas

    NASA Astrophysics Data System (ADS)

    McCrackin, M. L.; Harrison, J.; Compton, J. E.

    2011-12-01

    Nitrogen loading to water bodies can result in eutrophication-related hypoxia and degraded water quality. The relative contributions of different anthropogenic and natural sources of in-stream N cannot be directly measured at whole-watershed scales; hence, N source attribution estimates at scales beyond a small catchment must rely on models. Although such estimates have been accomplished using individual N loading models, there has not yet been a comparison of source attribution by multiple regional- and continental-scale models. We compared results from two models applied at large spatial scales: Nutrient Export from WatershedS (NEWS) and SPAtially Referenced Regressions On Watersheds (SPARROW). Despite widely divergent approaches to source attribution, NEWS and SPARROW identified the same dominant sources of N for 65% of the modeled drainage area of the continental US. Human activities accounted for over two-thirds of N delivered to the coastal zone. Regionally, the single largest sources of N predicted by both models reflect land-use patterns across the country. Sewage was an important source in densely populated regions along the east and west coasts of the US. Fertilizer and livestock manure were dominant in the Mississippi River Basin, where the bulk of agricultural areas are located. Run-off from undeveloped areas was the largest source of N delivered to coastal areas in the northwestern US. Our analysis shows that comparisons of source apportionment between models can increase confidence in modeled output by revealing areas of agreement and disagreement. We found predictions for agriculture and atmospheric deposition to be comparable between models; however, attribution to sewage was greater by SPARROW than by NEWS, while the reverse was true for natural N sources. Such differences in predictions resulted from differences in model structure and sources of input data. Nonetheless, model comparisons provide strong evidence that anthropogenic activities have a profound effect on N delivered to coastal areas of the US, especially along the Atlantic coast and Gulf of Mexico.

  5. Ion-source modeling and improved performance of the CAMS high-intensity Cs-sputter ion source

    NASA Astrophysics Data System (ADS)

    Brown, T. A.; Roberts, M. L.; Southon, J. R.

    2000-10-01

    The interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS) has been computer modeled using the program NEDLab, with the aim of improving negative ion output. Space charge effects on ion trajectories within the source were modeled through a successive iteration process involving the calculation of ion trajectories through Poisson-equation-determined electric fields, followed by calculation of modified electric fields incorporating the charge distribution from the previously calculated ion trajectories. The program has several additional features that are useful in ion source modeling: (1) averaging of space charge distributions over successive iterations to suppress instabilities, (2) Child's Law modeling of space charge limited ion emission from surfaces, and (3) emission of particular ion groups with a thermal energy distribution and at randomized angles. The results of the modeling effort indicated that significant modification of the interior geometry of the source would double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source. The results of the implementation of the new geometry were found to be consistent with the model results.
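
    The successive-iteration scheme described above can be summarized schematically. The sketch below shows only the loop structure, with assumed stand-in routines for the field solve and trajectory tracking (NEDLab's internals are not public in this record), including the charge-density averaging used to suppress instabilities.

    ```python
    import numpy as np

    def iterate_space_charge(solve_poisson, trace_ions, grid_shape, n_iter=20, alpha=0.5):
        """Schematic successive-iteration loop for space-charge-consistent ion optics.

        solve_poisson(rho) -> E   : electric field for a given charge density (assumed)
        trace_ions(E)      -> rho : trajectories through E, deposited as charge (assumed)
        alpha                     : averaging factor between iterations
        """
        rho = np.zeros(grid_shape)              # first pass: no space charge
        for _ in range(n_iter):
            E = solve_poisson(rho)              # fields from the Poisson equation
            rho_new = trace_ions(E)             # new charge distribution from trajectories
            rho = alpha * rho_new + (1.0 - alpha) * rho  # average to damp oscillations
        return rho
    ```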

  6. SU-D-19A-05: The Dosimetric Impact of Using Xoft Axxent® Electronic Brachytherapy Source TG-43 Dosimetry Parameters for Treatment with the Xoft 30 mm Diameter Vaginal Applicator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simiele, S; Micka, J; Culberson, W

    2014-06-01

    Purpose: A full TG-43 dosimetric characterization has not been performed for the Xoft Axxent® electronic brachytherapy source (Xoft, a subsidiary of iCAD, San Jose, CA) within the Xoft 30 mm diameter vaginal applicator. Currently, dose calculations are performed using the bare-source TG-43 parameters and do not account for the presence of the applicator. This work focuses on determining the difference between the bare-source and source-in-applicator TG-43 parameters. Both the radial dose function (RDF) and polar anisotropy function (PAF) were computationally determined for the source-in-applicator and bare-source models to determine the impact of using the bare-source dosimetry data. Methods: MCNP5 was used to model the source and the Xoft 30 mm diameter vaginal applicator. All simulations were performed using 0.84p and 0.03e cross section libraries. All models were developed based on specifications provided by Xoft. The applicator is made of a proprietary polymer material and simulations were performed using the most conservative chemical composition. An F6 collision-kerma tally was used to determine the RDF and PAF values in water at various dwell positions. The RDF values were normalized to 2.0 cm from the source to accommodate the applicator radius. Source-in-applicator results were compared with bare-source results from this work as well as published bare-source results. Results: For a 0 mm source pullback distance, the updated bare-source model and source-in-applicator RDF values differ by 2% at 3 cm and 4% at 5 cm. The largest PAF disagreements were observed at the distal end of the source and applicator, with up to 17% disagreement at 2 cm and 8% at 8 cm. The bare-source model had RDF values within 2.6% of the published TG-43 data and PAF results within 7.2% at 2 cm. Conclusion: Results indicate that notable differences exist between the bare-source and source-in-applicator TG-43 simulated parameters. Xoft Inc. provided partial funding for this work.
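
    For context (standard background, not specific to this abstract), the AAPM TG-43 formalism referenced here and in record 9 below computes the dose rate around a brachytherapy source as

    ```latex
    \dot{D}(r,\theta) = S_K \,\Lambda\,
    \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\; g_L(r)\; F(r,\theta),
    \qquad r_0 = 1~\mathrm{cm},\quad \theta_0 = 90^\circ ,
    ```

    where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, g_L(r) the radial dose function (RDF), and F(r,θ) the 2D anisotropy function. The work above asks how g_L and F change when the applicator is included in the simulated geometry.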

  7. Source apportionment for fine particulate matter in a Chinese city using an improved gas-constrained method and comparison with multiple receptor models.

    PubMed

    Shi, Guoliang; Liu, Jiayuan; Wang, Haiting; Tian, Yingze; Wen, Jie; Shi, Xurong; Feng, Yinchang; Ivey, Cesunica E; Russell, Armistead G

    2018-02-01

    PM2.5 is one of the most studied atmospheric pollutants due to its adverse impacts on human health, welfare, and the environment. An improved model (the chemical mass balance gas constraint-Iteration: CMBGC-Iteration) is proposed and applied to identify source categories and estimate source contributions of PM2.5. The CMBGC-Iteration model uses the ratios of gases to PM as constraints and considers the uncertainties of source profiles and receptor datasets, which is crucial information for source apportionment. To apply this model, samples of PM2.5 were collected in Tianjin, a megacity in northern China. The ambient PM2.5 dataset, source information, and gas-to-particle ratios (such as SO2/PM2.5, CO/PM2.5, and NOx/PM2.5) were introduced into the CMBGC-Iteration model to identify the potential sources and their contributions. Six source categories were identified by this model, and the order based on their contributions to PM2.5 was as follows: secondary sources (30%), crustal dust (25%), vehicle exhaust (16%), coal combustion (13%), SOC (7.6%), and cement dust (0.40%). In addition, the same dataset was also processed by other receptor models (CMB, CMB-Iteration, CMB-GC, PMF, WALSPMF, and NCAPCA), and the results obtained were compared. Ensemble-average source impacts were calculated based on the seven source apportionment results: secondary sources (28%), crustal dust (20%), coal combustion (18%), vehicle exhaust (17%), SOC (11%), and cement dust (1.3%). The similarity between the CMBGC-Iteration and ensemble results indicates that CMBGC-Iteration can produce relatively appropriate results. Copyright © 2017 Elsevier Ltd. All rights reserved.
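
    The core of any CMB-type receptor model, including the gas-constrained variant above, is a weighted least-squares fit of receptor concentrations to a linear mixture of source profiles. A minimal sketch (with invented profile and receptor numbers, and without the iteration or gas constraints of CMBGC-Iteration) is:

    ```python
    import numpy as np

    # Columns: source profiles (mass fraction of each species per unit PM mass);
    # the species, sources, and numbers here are illustrative only.
    F = np.array([
        [0.30, 0.02, 0.05],   # e.g. sulfate fraction for secondary, dust, vehicle
        [0.05, 0.25, 0.04],   # e.g. Si
        [0.02, 0.03, 0.20],   # e.g. EC
    ])
    c = np.array([0.18, 0.10, 0.08])      # measured receptor concentrations
    sigma = np.array([0.02, 0.02, 0.01])  # measurement uncertainties

    # Effective-variance-style weighting: divide each equation by its uncertainty,
    # then solve c = F s for source contributions s (nonnegativity not enforced here).
    W = np.diag(1.0 / sigma)
    s, *_ = np.linalg.lstsq(W @ F, W @ c, rcond=None)
    print("source contributions:", s)
    ```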

  8. Modeling of neutrals in the Linac4 H- ion source plasma: Hydrogen atom production density profile and Hα intensity by collisional radiative model

    NASA Astrophysics Data System (ADS)

    Yamamoto, T.; Shibata, T.; Ohta, M.; Yasumoto, M.; Nishida, K.; Hatayama, A.; Mattei, S.; Lettry, J.; Sawada, K.; Fantz, U.

    2014-02-01

    Controlling the H0 atom production profile in H- ion sources is one of the important issues for efficient and uniform surface H- production. The purpose of this study is to construct a collisional radiative (CR) model to calculate the effective production rate of H0 atoms from H2 molecules in the model geometry of the radio-frequency (RF) H- ion source for the Linac4 accelerator. In order to validate the CR model by comparison with experimental results from optical emission spectroscopy, it is also necessary for the model to calculate the Balmer photon emission rate in the source. As a basic test of the model, the time evolutions of H0 production and the Balmer Hα photon emission rate are calculated for given electron energy distribution functions in the Linac4 RF H- ion source. Reasonable test results are obtained, and a basis for detailed comparisons with experimental results has been established.

  9. Studying the highly bent spectra of FR II-type radio galaxies with the KDA EXT model

    NASA Astrophysics Data System (ADS)

    Kuligowska, Elżbieta

    2018-04-01

    Context: The Kaiser, Dennett-Thorpe & Alexander (KDA, 1997, MNRAS, 292, 723) EXT model, that is, the extension of the KDA model of Fanaroff & Riley (FR) II-type source evolution, is applied and confronted with the observational data for selected FR II-type radio sources with significantly aged radio spectra. Aims: A sample of FR II-type radio galaxies with radio spectra strongly bent at their highest frequencies is used to test the usefulness of the KDA EXT model. Methods: The dynamical evolution of FR II-type sources predicted by the KDA EXT model is briefly presented and discussed. The results are then compared to those obtained with the classical KDA approach, which assumes continuous injection and self-similar source expansion. Results: The results and corresponding diagrams obtained for the eight sample sources indicate that the KDA EXT model predicts the observed radio spectra significantly better than the best spectral fit provided by the original KDA model.

  10. [Source apportionment of soil heavy metals in Jiapigou goldmine based on the UNMIX model].

    PubMed

    Ai, Jian-chao; Wang, Ning; Yang, Jing

    2014-09-01

    This paper determines the concentrations of 16 metal elements in soil samples collected in the Jiapigou goldmine upstream of the Songhua River. The UNMIX model, recommended by the US EPA, was applied to obtain the source apportionment results, and Cd, Hg, Pb and Ag concentration contour maps were generated using the Kriging interpolation method to verify the results. The main conclusions of this study are: (1) the concentrations of Cd, Hg, Pb and Ag exceeded the Jilin Province soil background values and were obviously enriched in the soil samples; (2) the UNMIX model resolved four pollution sources: source 1 represents human activities of transportation, ore mining and garbage, contributing 39.1%; source 2 represents the contribution of rock weathering and biological effects, contributing 13.87%; source 3 is a composite source of soil parent material and chemical fertilizer, contributing 23.93%; source 4 represents iron ore mining and transportation sources, contributing 22.89%; (3) the UNMIX model results are in accordance with the survey of local land-use types, human activities, and the Cd, Hg and Pb content distributions.

  11. Estimation of source locations of total gaseous mercury measured in New York State using trajectory-based models

    NASA Astrophysics Data System (ADS)

    Han, Young-Ji; Holsen, Thomas M.; Hopke, Philip K.

    Ambient total gaseous mercury (TGM) concentrations were measured at three locations in New York State (Potsdam, Stockton, and Sterling) from May 2000 to March 2005. Using these data, three hybrid receptor models incorporating backward trajectories were used to identify source areas for TGM. The models used were the potential source contribution function (PSCF), residence time weighted concentration (RTWC), and simplified quantitative transport bias analysis (SQTBA). Each model was applied using multi-site measurements to resolve the locations of important mercury sources for New York State. PSCF results showed that southeastern New York, Ohio, Indiana, Tennessee, Louisiana, and Virginia were important TGM source areas for these sites. RTWC identified Canadian sources, including the metal production facilities in Ontario and Quebec, but US regional sources including the Ohio River Valley were also resolved. Sources in southeastern New York, Massachusetts, western Pennsylvania, Indiana, and northern Illinois were identified as significant by SQTBA. The three modeling results were combined to locate the most important probable source locations: Ohio, Indiana, Illinois, and Wisconsin. The Atlantic Ocean was suggested to be a possible source as well.

  12. Development of an on-line source-tagged model for sulfate, nitrate and ammonium: A modeling study for highly polluted periods in Shanghai, China.

    PubMed

    Wu, Jian-Bin; Wang, Zifa; Wang, Qian; Li, Jie; Xu, Jianming; Chen, HuanSheng; Ge, Baozhu; Zhou, Guangqiang; Chang, Luyu

    2017-02-01

    An on-line source-tagged model coupled with an air quality model (Nested Air Quality Prediction Model System, NAQPMS) was applied to estimate source contributions of primary and secondary sulfate, nitrate and ammonium (SNA) during a representative winter period in Shanghai. This source-tagged model system can simultaneously track the spatial and temporal sources of SNA, which are apportioned to their respective primary precursors in a single simulation run. The results indicate that in the study period, local emissions in Shanghai accounted for over 20% of SNA contributions and that Jiangsu and Shandong were the two major non-local sources. In particular, non-local emissions had higher contributions during recorded pollution periods, suggesting that the transport of pollutants plays a key role in air pollution in Shanghai. The temporal contributions show that emissions from the "current day" (the emission contribution from the day the model was simulating) contributed 60%-70% of the sulfate and ammonium concentrations but only 10%-20% of the nitrate concentration, while the contributions from previous days increased during the recorded pollution periods. Emissions released within the previous three days contributed over 85% of SNA on average in January 2013. To evaluate the source-tagged model system, the results were compared with a sensitivity analysis (emission perturbation of -30%) and a backward trajectory analysis. The consistency of the comparison results indicated that the source-tagged model system can track sources of SNA with reasonable accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Update on BioVapor Analysis (Draft Deliberative Document)

    EPA Science Inventory

    An update is given on EPA ORD's evaluation of the BioVapor model for petroleum vapor intrusion assessment. Results from two scenarios are presented: a strong petroleum source and a weaker source. Model results for the strong source are shown to depend on biodegradation rate, oxygen...

  14. Assessing the differences in public health impact of Salmonella subtypes using a Bayesian microbial subtyping approach for source attribution.

    PubMed

    Pires, Sara M; Hald, Tine

    2010-02-01

    Salmonella is a major cause of human gastroenteritis worldwide. To prioritize interventions and assess the effectiveness of efforts to reduce illness, it is important to attribute salmonellosis to the responsible sources. Studies have suggested that some Salmonella subtypes have a higher health impact than others. Likewise, some food sources appear to have a higher impact than others. Knowledge of variability in the impact of subtypes and sources may provide valuable added information for research, risk management, and public health strategies. We developed a Bayesian model that attributes illness to specific sources and allows for a better estimation of the differences in the ability of Salmonella subtypes and food types to result in reported salmonellosis. The model accommodates data for multiple years and is based on Danish Salmonella surveillance data. The number of sporadic cases caused by different Salmonella subtypes is estimated as a function of the prevalence of these subtypes in the animal-food sources, the amount of food consumed, subtype-related factors, and source-related factors. Our results showed relative differences between Salmonella subtypes in their ability to cause disease. These differences presumably represent multiple factors, such as differences in survivability through the food chain and/or pathogenicity. The relative importance of the source-dependent factors varied considerably over the years, reflecting, among other things, variability in the surveillance programs for the different animal sources. The presented model requires estimation of fewer parameters than a previously developed model and thus allows for a better estimation of the ability of these factors to result in reported human disease. In addition, a comparison of the results of the same model using different sets of typing data revealed that the model can be applied to data with less discriminatory power, which is the only data available in many countries. In conclusion, the model allows for the estimation of relative differences between Salmonella subtypes and sources, providing results that will benefit future risk assessment and risk ranking purposes.
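
    The structure of such microbial subtyping models (a Hald-type formulation, which this work builds on; the dimensions and numbers below are invented for illustration) is an expected case count per subtype i and source j, lambda_ij = M_j * p_ij * q_i * a_j, with Poisson-distributed observed cases:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_subtypes, n_sources = 4, 3

    M = np.array([2.0e5, 1.5e5, 0.8e5])    # food consumed per source (kg, assumed)
    p = rng.dirichlet(np.ones(n_subtypes), n_sources).T  # subtype prevalence per source
    q = np.array([1.5, 1.0, 0.7, 0.4])     # subtype-related factors (e.g. pathogenicity)
    a = np.array([2e-6, 1e-6, 3e-6])       # source-related factors (food-chain effects)

    # Expected sporadic cases attributed to subtype i from source j:
    lam = q[:, None] * a[None, :] * p * M[None, :]
    cases = rng.poisson(lam)               # observed counts ~ Poisson(lambda_ij)

    # Source attribution: share of expected cases by source.
    print("expected cases per source:", lam.sum(axis=0))
    print("attribution fractions:", lam.sum(axis=0) / lam.sum())
    # In the full Bayesian model, q and a are unknown and sampled by MCMC
    # given the observed case counts; here they are fixed for illustration.
    ```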

  15. The Earthquake‐Source Inversion Validation (SIV) Project

    USGS Publications Warehouse

    Mai, P. Martin; Schorlemmer, Danijel; Page, Morgan T.; Ampuero, Jean-Paul; Asano, Kimiyuki; Causse, Mathieu; Custodio, Susana; Fan, Wenyuan; Festa, Gaetano; Galis, Martin; Gallovic, Frantisek; Imperatori, Walter; Käser, Martin; Malytskyy, Dmytro; Okuwaki, Ryo; Pollitz, Fred; Passone, Luca; Razafindrakoto, Hoby N. T.; Sekiguchi, Haruko; Song, Seok Goo; Somala, Surendra N.; Thingbaijam, Kiran K. S.; Twardzik, Cedric; van Driel, Martin; Vyas, Jagdish C.; Wang, Rongjiang; Yagi, Yuji; Zielke, Olaf

    2016-01-01

    Finite‐fault earthquake source inversions infer the (time‐dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake‐source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward‐modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source‐model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake‐source imaging problem.

  16. Impact of head models in N170 component source imaging: results in control subjects and ADHD patients

    NASA Astrophysics Data System (ADS)

    Beltrachini, L.; Blenkmann, A.; von Ellenrieder, N.; Petroni, A.; Urquina, H.; Manes, F.; Ibáñez, A.; Muravchik, C. H.

    2011-12-01

    A major goal of event-related potential studies is source localization: identifying the loci of neural activity that give rise to a particular voltage distribution measured on the surface of the scalp. In this paper we evaluate the effect of the adopted head model on estimation of the N170 component source in attention deficit hyperactivity disorder (ADHD) patients and control subjects, considering face and word stimuli. The standardized low resolution brain electromagnetic tomography algorithm (sLORETA) is used to compare the three-shell spherical head model with a fully realistic model based on the ICBM-152 atlas. We compare their variance in source estimation and analyze the impact on N170 source localization. Results show that the often-used three-shell spherical model may lead to erroneous solutions, especially in ADHD patients, so its use is not recommended. Our results also suggest that N170 sources are mainly located in the right occipital fusiform gyrus for face stimuli and in the left occipital fusiform gyrus for word stimuli, for both control subjects and ADHD patients. We also found a notable decrease in the estimated N170 source amplitude in ADHD patients, a plausible marker of the disease.

  17. Inferring source attribution from a multiyear multisource data set of Salmonella in Minnesota.

    PubMed

    Ahlstrom, C; Muellner, P; Spencer, S E F; Hong, S; Saupe, A; Rovira, A; Hedberg, C; Perez, A; Muellner, U; Alvarez, J

    2017-12-01

    Salmonella enterica is a global health concern because of its widespread association with foodborne illness. Bayesian models have been developed to attribute the burden of human salmonellosis to specific sources with the ultimate objective of prioritizing intervention strategies. Important considerations of source attribution models include the evaluation of the quality of input data, assessment of whether attribution results logically reflect the data trends and identification of patterns within the data that might explain the detailed contribution of different sources to the disease burden. Here, more than 12,000 non-typhoidal Salmonella isolates from human, bovine, porcine, chicken and turkey sources that originated in Minnesota were analysed. A modified Bayesian source attribution model (available in a dedicated R package), accounting for non-sampled sources of infection, attributed 4,672 human cases to sources assessed here. Most (60%) cases were attributed to chicken, although there was a spike in cases attributed to a non-sampled source in the second half of the study period. Molecular epidemiological analysis methods were used to supplement risk modelling, and a visual attribution application was developed to facilitate data exploration and comprehension of the large multiyear data set assessed here. A large amount of within-source diversity and low similarity between sources was observed, and visual exploration of data provided clues into variations driving the attribution modelling results. Results from this pillared approach provided first attribution estimates for Salmonella in Minnesota and offer an understanding of current data gaps as well as key pathogen population features, such as serotype frequency, similarity and diversity across the sources. Results here will be used to inform policy and management strategies ultimately intended to prevent and control Salmonella infection in the state. © 2017 Blackwell Verlag GmbH.

  18. Alternative modeling methods for plasma-based Rf ion sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veitzer, Seth A., E-mail: veitzer@txcorp.com; Kundrapu, Madhusudhan, E-mail: madhusnk@txcorp.com; Stoltz, Peter H., E-mail: phstoltz@txcorp.com

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H− source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H− ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.

  19. Alternative modeling methods for plasma-based Rf ion sources.

    PubMed

    Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.

  20. Quasi-homogeneous partial coherent source modeling of multimode optical fiber output using the elementary source method

    NASA Astrophysics Data System (ADS)

    Fathy, Alaa; Sabry, Yasser M.; Khalil, Diaa A.

    2017-10-01

    Multimode fibers (MMF) have many applications in illumination, spectroscopy, sensing and even in optical communication systems. In this work, we present a model for the MMF output field assuming the fiber end is a quasi-homogeneous source. The fiber end is modeled by a group of partially coherent elementary sources, spatially shifted and uncorrelated with each other. The elementary source distribution is derived from the far-field intensity measurement, while the weighting function of the sources is derived from the fiber-end intensity measurement. The model is compared with practical measurements for fibers with different core/cladding diameters at different propagation distances and for different input excitations: laser, white light and LED. The obtained results show a normalized root mean square error of less than 8% in the intensity profile in most cases, even when the fiber end surface is not perfectly cleaved. The comparison with the Gaussian-Schell model results also shows better agreement with the measurements. In addition, the complex degree of coherence derived from the model results is compared with the theoretical predictions of the modified van Cittert-Zernike equation, showing very good agreement, which strongly supports the assumption that the large-core MMF can be considered a quasi-homogeneous source.
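
    The elementary-source idea can be sketched numerically: for a quasi-homogeneous source, the output intensity is the incoherent (weighted) sum of identical, mutually uncorrelated elementary beams shifted across the fiber end. The Gaussian elementary beam and the flat-top weights below are assumptions for illustration, not the paper's measured distributions.

    ```python
    import numpy as np

    x = np.linspace(-100e-6, 100e-6, 2001)   # observation coordinate (m)
    core_radius = 52.5e-6                    # assumed 105-um-core fiber

    # Elementary beam intensity (assumed Gaussian, width set by the far field)
    w_e = 8e-6
    def elementary(x0):
        return np.exp(-2 * (x - x0) ** 2 / w_e ** 2)

    # Weighting function across the fiber end (assumed flat-top near field)
    shifts = np.linspace(-core_radius, core_radius, 101)
    weights = np.ones_like(shifts)
    weights /= weights.sum()

    # Quasi-homogeneous model: intensities add, since the elementary
    # sources are mutually uncorrelated (no cross terms).
    intensity = sum(w * elementary(x0) for w, x0 in zip(weights, shifts))
    ```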

  1. A 1D ion species model for an RF driven negative ion source

    NASA Astrophysics Data System (ADS)

    Turner, I.; Holmes, A. J. T.

    2017-08-01

    A one-dimensional model for an RF driven negative ion source has been developed based on an inductive discharge. The RF source differs from traditional filament-and-arc ion sources because there are no primary electrons present, and it is simply composed of an antenna region (driver) and a main plasma discharge region. However, the model still makes use of the classical plasma transport equations for particle energy and flow, which have previously worked well for modelling DC driven sources. The model has been developed primarily to model the Small Negative Ion Facility (SNIF) ion source at CCFE, but may be easily adapted to model other RF sources. Currently the model considers the hydrogen ion species and provides a detailed description of the plasma parameters along the source axis, i.e. plasma temperature, density and potential, as well as current densities and species fluxes. The inputs to the model are currently the RF power, the magnetic filter field and the source gas pressure. Results from the model are presented and, where possible, compared to existing experimental data from SNIF at varying RF power and source pressure.

  2. Impact of earthquake source complexity and land elevation data resolution on tsunami hazard assessment and fatality estimation

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro

    2018-03-01

    This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.

  3. Calculation and analysis of the non-point source pollution in the upstream watershed of the Panjiakou Reservoir, People's Republic of China

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Tang, L.

    2007-05-01

    Panjiakou Reservoir is an important drinking water source in the Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. Recently, water pollution in the reservoir has become more serious owing to non-point source pollution as well as point source pollution in the upstream watershed. To effectively manage the reservoir and watershed and develop a plan to reduce pollutant loads, the loading of non-point and point source pollution and its distribution across the upstream watershed must be fully understood. The SWAT model is used to simulate the production and transport of non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years, and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are derived from the DEM with ArcGIS software. The soil and land use data are reclassified, and the soil physical properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area. The calibration results show that the model performs fairly well. The calibrated model was then used to calculate the loadings of non-point source pollutants for a wet year, a normal year and a dry year. The temporal and spatial distributions of flow, sediment and non-point source pollution were analyzed based on the simulated results. The differences between hydrologic years are dramatic: the loading of non-point source pollution is relatively large in the wet year but small in the dry year, since non-point source pollutants are mainly transported by runoff. Within a year, the pollution loading is mainly produced in the flood season. Because SWAT is a distributed model, model output can be viewed as it varies across the basin, so critical areas and reaches can be identified in the study area. According to the simulation results, different land uses yield different results, and fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution in the Panjiakou Reservoir watershed are presented based on the analysis of the model results.

  4. Hydrodynamic modelling of the microbial water quality in a drinking water source as input for risk reduction management

    NASA Astrophysics Data System (ADS)

    Sokolova, Ekaterina; Pettersson, Thomas J. R.; Bergstedt, Olof; Hermansson, Malte

    2013-08-01

    To mitigate the faecal contamination of drinking water sources and, consequently, to prevent waterborne disease outbreaks, an estimation of the contribution from different sources to the total faecal contamination at the raw water intake of a drinking water treatment plant is needed. The aim of this article was to estimate how much different sources contributed to the faecal contamination at the water intake in a drinking water source, Lake Rådasjön in Sweden. For this purpose, the fate and transport of the faecal indicator Escherichia coli within Lake Rådasjön were simulated by a three-dimensional hydrodynamic model. The calibrated hydrodynamic model reproduced the measured vertical temperature distribution in the lake well (Pearson correlation coefficient 0.99). Data on the E. coli load from the identified contamination sources were gathered, and the fate and transport of E. coli released from these sources within the lake were simulated using the developed hydrodynamic model, taking the decay of E. coli into account. The modelling results were compared to the observed E. coli concentrations at the water intake. The results illustrated that the sources that contributed the most to the faecal contamination at the water intake in Lake Rådasjön were the discharges from the on-site sewers and the main inflow to the lake, the river Mölndalsån. Based on the modelling results, recommendations for water producers were formulated. The study demonstrated that this modelling approach is a useful tool for estimating the contribution from different sources to the faecal contamination at the water intake of a drinking water treatment plant and provided decision-support information for the reduction of risks posed to the drinking water source.
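
    A sketch of the decay term referenced above (first-order Chick-type kinetics is a common choice for E. coli die-off; the rate constant and concentrations below are assumed values, not those of the study):

    ```python
    import numpy as np

    def coli_decay(c0, hours, k_per_day=0.5):
        """First-order decay C(t) = C0 * exp(-k t); k depends on
        temperature, sunlight and other conditions."""
        return c0 * np.exp(-k_per_day * hours / 24.0)

    # Contribution of a source released 48 h upstream of the intake:
    print(coli_decay(c0=1.0e4, hours=48))   # CFU/100 mL remaining after transport
    ```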

  5. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C

    2007-09-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.

  6. Do forests represent a long-term source of contaminated particulate matter in the Fukushima Prefecture?

    PubMed

    Laceby, J Patrick; Huon, Sylvain; Onda, Yuichi; Vaury, Veronique; Evrard, Olivier

    2016-12-01

    The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident resulted in radiocesium fallout contaminating coastal catchments of the Fukushima Prefecture. As the decontamination effort progresses, the potential downstream migration of radiocesium contaminated particulate matter from forests, which cover over 65% of the most contaminated region, requires investigation. Carbon and nitrogen elemental concentrations and stable isotope ratios are thus used to model the relative contributions of forest, cultivated and subsoil sources to deposited particulate matter in three contaminated coastal catchments. Samples were taken from the main identified sources: cultivated (n = 28), forest (n = 46), and subsoils (n = 25). Deposited particulate matter (n = 82) was sampled during four fieldwork campaigns from November 2012 to November 2014. A distribution modelling approach quantified relative source contributions with multiple combinations of element parameters (carbon only, nitrogen only, and four parameters) for two particle size fractions (<63 μm and <2 mm). Although there was significant particle size enrichment for the particulate matter parameters, these differences only resulted in a 6% (SD 3%) mean difference in relative source contributions. Further, the three different modelling approaches only resulted in a 4% (SD 3%) difference between relative source contributions. For each particulate matter sample, six models (i.e. <63 μm and <2 mm from the three modelling approaches) were used to incorporate a broader definition of potential uncertainty into model results. Forest sources were modelled to contribute 17% (SD 10%) of particulate matter indicating they present a long term potential source of radiocesium contaminated material in fallout impacted catchments. Subsoils contributed 45% (SD 26%) of particulate matter and cultivated sources contributed 38% (SD 19%). The reservoir of radiocesium in forested landscapes in the Fukushima region represents a potential long-term source of particulate contaminated matter that will require diligent management for the foreseeable future. Copyright © 2016 Elsevier Ltd. All rights reserved.
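
    The distribution-modelling step described here is, at its core, an un-mixing problem: find nonnegative source proportions summing to one that reproduce the tracer signature of the deposited sediment. A minimal deterministic sketch (three sources, two tracers; all numbers invented) is:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Mean tracer signatures (e.g. delta13C, delta15N) per source -- illustrative values
    sources = np.array([
        [-28.0, 2.0],   # forest
        [-26.0, 5.0],   # cultivated
        [-24.0, 1.0],   # subsoil
    ])
    mixture = np.array([-25.4, 2.9])   # tracer values of a deposited-sediment sample

    def misfit(p):
        return np.sum((p @ sources - mixture) ** 2)

    # Proportions constrained to the simplex: p_i >= 0, sum(p) = 1
    res = minimize(misfit, x0=np.full(3, 1 / 3),
                   bounds=[(0, 1)] * 3,
                   constraints={"type": "eq", "fun": lambda p: p.sum() - 1})
    print("source proportions (forest, cultivated, subsoil):", res.x)
    ```

    The paper's distribution-modelling approach additionally propagates the within-source spread (hence the SD values quoted above) rather than fitting point means.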

  7. PM2.5 pollution from household solid fuel burning practices in Central India: 2. Application of receptor models for source apportionment.

    PubMed

    Matawle, Jeevan Lal; Pervez, Shamsh; Deb, Manas Kanti; Shrivastava, Anjali; Tiwari, Suresh

    2018-02-01

    USEPA's UNMIX, positive matrix factorization (PMF) and effective variance-chemical mass balance (EV-CMB) receptor models were applied to chemically speciated profiles of 125 indoor PM2.5 measurements, sampled longitudinally during 2012-2013 in low-income-group households of Central India that use solid fuels for cooking. A three-step source apportionment study was carried out to generate more confident source characterization. First, UNMIX6.0 extracted an initial number of source factors, which were used to execute PMF5.0 to extract source-factor profiles in the second step. Finally, locally derived source profiles analogous to the factors were supplied to EV-CMB8.2 together with the indoor receptor PM2.5 chemical profile to evaluate source contribution estimates (SCEs). The results of the combined use of the three receptor models clearly show that UNMIX and PMF are useful tools to extract the types of source categories within a small receptor dataset and that EV-CMB can pick those locally derived source profiles for source apportionment which are analogous to the PMF-extracted source categories. The source apportionment results also show a threefold higher relative contribution of solid fuel burning emissions to indoor PM2.5 compared with measurements reported for normal households with LPG stoves. The previously reported influential source marker species were found to be comparatively similar to those extracted from PMF fingerprint plots. The PMF and CMB SCE results were also found to be qualitatively similar. The performance fit measures of all three receptor models were cross-verified and validated and support each other, increasing confidence in the source apportionment results.

  8. Source apportionments of PM2.5 organic carbon using molecular marker Positive Matrix Factorization and comparison of results from different receptor models

    NASA Astrophysics Data System (ADS)

    Heo, Jongbae; Dulger, Muaz; Olson, Michael R.; McGinnis, Jerome E.; Shelton, Brandon R.; Matsunaga, Aiko; Sioutas, Constantinos; Schauer, James J.

    2013-07-01

    Four hundred fine particulate matter (PM2.5) samples collected over a 1-year period at two sites in the Los Angeles Basin were analyzed for organic carbon (OC), elemental carbon (EC), water soluble organic carbon (WSOC) and organic molecular markers. The results were used in a Positive Matrix Factorization (PMF) receptor model to obtain daily, monthly and annual average source contributions to PM2.5 OC. Results of the PMF model showed similar source categories with comparable year-long contributions to PM2.5 OC across the sites. Five source categories providing reasonably stable profiles were identified: mobile, wood smoke, primary biogenic, and two types of secondary organic carbon (SOC) (i.e., from anthropogenic and biogenic emissions). Total primary emission factors and total SOC factors contributed approximately 60% and 40%, respectively, to the annual-average OC concentrations. Primary sources showed strong seasonal patterns with high winter peaks and low summer peaks, while SOC showed the reverse pattern with highs in the spring and summer in the region. Interestingly, smoke from forest fires, which occurred episodically in California during the summer and fall of 2009, was identified and combined with the primary biogenic source as one distinct factor in the OC budget. The PMF-resolved factors were further investigated and compared to a chemical mass balance (CMB) model and a second multivariate receptor model (UNMIX) using the molecular markers considered in the PMF. Good agreement among the three models was obtained for the source contributions from mobile sources and biomass burning, providing additional weight of evidence that these source apportionment techniques are sufficiently accurate for policy development. However, the CMB model did not quantify primary biogenic emissions, which were included in other sources with the SOC. Both multivariate receptor models, PMF and UNMIX, were unable to separate source contributions from diesel and gasoline engines.
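
    PMF, referenced throughout these records, factorizes the samples-by-species matrix X into nonnegative factor contributions G and profiles F by minimizing an uncertainty-weighted objective Q = sum((X - GF)^2 / sigma^2). A bare-bones sketch (not EPA PMF; simple projected alternating least squares on synthetic data, without the per-element uncertainty weighting of true PMF) is:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_samples, n_species, k = 400, 12, 5

    # Synthetic data: nonnegative contributions x profiles plus noise
    G_true = rng.gamma(2.0, 1.0, (n_samples, k))
    F_true = rng.dirichlet(np.ones(n_species), k)
    X = G_true @ F_true + rng.normal(0, 0.01, (n_samples, n_species))
    sigma = np.full_like(X, 0.01)

    # Projected alternating least squares with nonnegativity clipping
    G = np.abs(rng.normal(size=(n_samples, k)))
    F = np.abs(rng.normal(size=(k, n_species)))
    for _ in range(200):
        G = np.clip(np.linalg.lstsq(F.T, X.T, rcond=None)[0].T, 0, None)
        F = np.clip(np.linalg.lstsq(G, X, rcond=None)[0], 0, None)

    Q = np.sum(((X - G @ F) / sigma) ** 2)   # goodness-of-fit used to choose k
    print("Q =", Q)
    ```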

  9. A revised dosimetric characterization of the model S700 electronic brachytherapy source containing an anode-centering plastic insert and other components not included in the 2006 model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiatt, Jessica R.; Davis, Stephen D.; Rivard, Mark J., E-mail: mark.j.rivard@gmail.com

    2015-06-15

    Purpose: The model S700 Axxent electronic brachytherapy source by Xoft, Inc., was characterized by Rivard et al. in 2006. Since then, the source design was modified to include a new insert at the source tip. Current study objectives were to establish an accurate source model for simulation purposes, dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and determine dose differences between the original simulation model and the current model S700 source design. Methods: Design information from measurements of dissected model S700 sources and from vendor-supplied CAD drawings was used to aid establishment of an updated Monte Carlo source model, which included the complex-shaped plastic source-centering insert intended to promote water flow for cooling the source anode. These data were used to create a model for subsequent radiation transport simulations in a water phantom. Compared to the 2006 simulation geometry, the influence of volume averaging close to the source was substantially reduced. A track-length estimator was used to evaluate collision kerma as a function of radial distance and polar angle for determination of TG-43 dosimetry parameters. Results for the 50 kV source were determined every 0.1 cm from 0.3 to 15 cm and every 1° from 0° to 180°. Photon spectra in water with 0.1 keV resolution were also obtained from 0.5 to 15 cm and polar angles from 0° to 165°. Simulations were run for 10¹⁰ histories, resulting in statistical uncertainties on the transverse plane of 0.04% at r = 1 cm and 0.06% at r = 5 cm. Results: The dose-rate distribution ratio for the model S700 source as compared to the 2006 model exceeded unity by more than 5% for roughly one quarter of the solid angle surrounding the source, i.e., θ ≥ 120°. The radial dose function diminished in a similar manner as for an ¹²⁵I seed, with values of 1.434, 0.636, 0.283, and 0.0975 at 0.5, 2, 5, and 10 cm, respectively. The radial dose function ratio between the current and the 2006 model had a minimum of 0.980 at 0.4 cm, close to the source sheath, and for large distances approached 1.014. 2D anisotropy function ratios were close to unity for 50° ≤ θ ≤ 110°, but exceeded 5% for θ < 40° at close distances to the sheath and exceeded 15% for θ > 140°, even at large distances. Photon energy fluence of the updated model as compared to the 2006 model showed a decrease in output with increasing distance; this effect was pronounced at the lowest energies. A decrease in photon fluence with increase in polar angle was also observed and was attributed to the silver epoxy component. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% in several regions. This discrepancy is greater than the dose calculation acceptance criteria as recommended in the AAPM TG-56 report. The effect of the design change on the TG-43 parameters would likely not result in dose differences outside of patient applicators. Adoption of this new dataset is suggested for accurate depiction of model S700 source dose distributions.

  10. Source apportion of atmospheric particulate matter: a joint Eulerian/Lagrangian approach.

    PubMed

    Riccio, A; Chianese, E; Agrillo, G; Esposito, C; Ferrara, L; Tirimberio, G

    2014-12-01

    PM2.5 samples were collected during an annual monitoring campaign (January 2012-January 2013) in the urban area of Naples, one of the major cities in Southern Italy. Samples were collected with a standard gravimetric sampler (Tecora Echo model) and chemically characterized by ion chromatography. In total, 143 samples, together with their ionic composition, were obtained. We extend traditional source apportionment techniques, usually based on multivariate factor analysis, by interpreting the chemical analysis results within a Lagrangian framework. The Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model was used, providing linkages to the source regions in the upwind areas. Results were analyzed in order to quantify the relative weight of different source types/areas. Model results suggested that PM concentrations are strongly affected not only by local emissions but also by transboundary emissions, especially from Eastern and Northern European countries and African Saharan dust episodes.
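
    A minimal sketch of the trajectory-to-concentration linkage described above, in the style of a concentration-weighted-trajectory analysis: each back-trajectory endpoint (e.g., parsed from HYSPLIT output) inherits the PM2.5 concentration of its parent sample, and grid-cell averages highlight likely upwind source regions. All coordinates and values below are invented.

      import numpy as np

      # endpoints: (lat, lon) per trajectory point; conc: PM2.5 of the parent sample
      endpoints = np.array([[41.0, 14.2], [44.5, 18.0], [36.0, 10.5], [44.7, 18.1]])
      conc = np.array([35.0, 35.0, 12.0, 50.0])   # ug/m3, repeated per endpoint

      lat_edges = np.arange(30, 55, 1.0)
      lon_edges = np.arange(0, 30, 1.0)

      num, _, _ = np.histogram2d(endpoints[:, 0], endpoints[:, 1],
                                 bins=[lat_edges, lon_edges], weights=conc)
      cnt, _, _ = np.histogram2d(endpoints[:, 0], endpoints[:, 1],
                                 bins=[lat_edges, lon_edges])
      cwt = np.divide(num, cnt, out=np.zeros_like(num), where=cnt > 0)
      print("Highest weighted cell:", np.unravel_index(cwt.argmax(), cwt.shape))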

  11. Earthquake Source Inversion Blindtest: Initial Results and Further Developments

    NASA Astrophysics Data System (ADS)

    Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.

    2007-12-01

    Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well resolved, robust, and hence reliable source-rupture models are an integral part of better understanding earthquake source physics and improving seismic hazard assessment. Therefore it is timely to conduct a large-scale validation exercise comparing the methods, parameterization and data-handling in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally, we present new blind-test models, with increasing source complexity and ambient noise on the synthetics. The goal is to attract a large group of source modelers to join this source-inversion blind test in order to conduct a large-scale validation exercise to rigorously assess the performance and reliability of current inversion methods and to discuss future developments.

  12. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    NASA Astrophysics Data System (ADS)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow for uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed the expected seasonal melt evolution trends, and the statistical relevance of the resulting fraction estimates was rigorously assessed. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, which was not captured by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
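
    A minimal sketch of the BMC mixing idea for a two-end-member, two-tracer (δ18O, δD) system; the end-member means, uncertainties, and acceptance tolerances are invented, and the published model handles more sources and tracers:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # End-member isotopic compositions (mean, sd), e.g., snowmelt vs. ice melt
      d18O_A, d18O_B = rng.normal(-22.0, 0.5, n), rng.normal(-18.0, 0.5, n)
      dD_A,   dD_B   = rng.normal(-165.0, 3.0, n), rng.normal(-135.0, 3.0, n)
      f = rng.uniform(0.0, 1.0, n)                   # fraction of source A

      d18O_mix = f * d18O_A + (1 - f) * d18O_B       # forward mixing model
      dD_mix   = f * dD_A   + (1 - f) * dD_B

      # Accept draws consistent with the measured sample (within analytical error)
      obs_d18O, obs_dD, tol18, tolD = -20.0, -150.0, 0.2, 1.5
      keep = (np.abs(d18O_mix - obs_d18O) < tol18) & (np.abs(dD_mix - obs_dD) < tolD)
      print(f"f_A = {f[keep].mean():.2f} +/- {f[keep].std():.2f} (n={keep.sum()})")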

  13. Analysis of classical Fourier, SPL and DPL heat transfer model in biological tissues in presence of metabolic and external heat source

    NASA Astrophysics Data System (ADS)

    Kumar, Dinesh; Singh, Surjan; Rai, K. N.

    2016-06-01

    In this paper, the temperature distribution in a finite biological tissue in the presence of metabolic and external heat sources is studied when the surface is subjected to different types of boundary conditions. Classical Fourier, single-phase-lag (SPL) and dual-phase-lag (DPL) models were developed for bio-heat transfer in biological tissues. Analytical solutions were obtained for all three models using the Laplace transform technique, and the results were compared. The effects of the variability of different parameters, such as relaxation time, metabolic heat source, spatial heat source and the type of boundary condition, on the temperature distribution in different tissue types (muscle, tumour, fat, dermis and subcutaneous tissue) are analyzed and discussed in detail for the three models. The results obtained with the three models are compared with the experimental observations of Stolwijk and Hardy (Pflug Arch 291:129-162, 1966). It was observed that the DPL bio-heat transfer model provides better results than the other two models. The values of the metabolic and spatial heat sources under boundary conditions of the first, second and third kind are evaluated for different types of thermal therapy.
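
    For reference, the three constitutive laws being compared take the following standard forms (q is the heat flux, k the conductivity, T the temperature, and τ_q, τ_T the phase lags of the heat flux and the temperature gradient; the paper's exact boundary conditions are not reproduced here):

      \begin{align}
        \text{Fourier:} \quad & \mathbf{q}(\mathbf{r},t) = -k\,\nabla T(\mathbf{r},t) \\
        \text{SPL:}     \quad & \mathbf{q}(\mathbf{r},t+\tau_q) = -k\,\nabla T(\mathbf{r},t) \\
        \text{DPL:}     \quad & \mathbf{q}(\mathbf{r},t+\tau_q) = -k\,\nabla T(\mathbf{r},t+\tau_T)
      \end{align}
      % Each law closes the energy balance
      %   \rho c\,\partial T/\partial t = -\nabla\cdot\mathbf{q} + Q_{met} + Q_{ext},
      % where Q_{met} is the metabolic and Q_{ext} the external (spatial) heat source.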

  14. Effects of topography and crustal heterogeneities on the source estimation of LP event at Kilauea volcano

    USGS Publications Warehouse

    Cesca, S.; Battaglia, J.; Dahm, T.; Tessmer, E.; Heimann, S.; Okubo, P.

    2008-01-01

    The main goal of this study is to improve the modelling of the source mechanism associated with the generation of long period (LP) signals in volcanic areas. Our intent is to evaluate the effects that detailed structural features of volcanic models have on the generation of LP signals and the consequent retrieval of LP source characteristics. In particular, effects associated with the presence of topography and crustal heterogeneities are studied here in detail. We focus our study on an LP event observed at Kilauea volcano, Hawaii, in May 2001. A detailed analysis of this event and its source modelling is accompanied by a set of synthetic tests, which aim to evaluate the effects of topography and the presence of low velocity shallow layers in the source region. The forward problem of Green's function generation is solved numerically following a pseudo-spectral approach, assuming different 3-D models. The inversion is done in the frequency domain, and the resulting source mechanism is represented by the sum of two time-dependent terms: a full moment tensor and a single force. Synthetic tests show how characteristic velocity structures, associated with shallow sources, may be partially responsible for the generation of the observed long-lasting ringing waveforms. When applying the inversion technique to the Kilauea LP data set, inversions carried out for different crustal models led to very similar source geometries, indicating a subhorizontal crack. On the other hand, the source time function and its duration are significantly different for different models. These results support the indication of a strong influence of crustal layering on the generation of the LP signal, while the assumption of a homogeneous velocity model may lead to misleading results. © 2008 The Authors, Journal compilation © 2008 RAS.
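
    A minimal sketch of the frequency-domain inversion step described above: at each frequency, station spectra are inverted for six moment-tensor components plus three single-force components by least squares. The Green's functions here are random placeholders; in practice they come from the 3-D numerical (pseudo-spectral) simulation.

      import numpy as np

      rng = np.random.default_rng(1)
      n_sta, n_freq, n_src = 20, 128, 9           # 6 MT + 3 force components

      G = (rng.normal(size=(n_freq, n_sta, n_src))
           + 1j * rng.normal(size=(n_freq, n_sta, n_src)))   # Green's functions
      d = (rng.normal(size=(n_freq, n_sta))
           + 1j * rng.normal(size=(n_freq, n_sta)))          # data spectra

      m = np.empty((n_freq, n_src), dtype=complex)
      for k in range(n_freq):                     # independent solve per frequency
          m[k], *_ = np.linalg.lstsq(G[k], d[k], rcond=None)
      # An inverse FFT of m would give the moment-tensor and force time functions.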

  15. DeltaSA tool for source apportionment benchmarking, description and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Pernigotti, D.; Belis, C. A.

    2018-05-01

    DeltaSA is an R-package and a Java on-line tool developed at the EC-Joint Research Centre to assist and benchmark source apportionment applications. Its key functionalities support two critical tasks in such studies: the assignment of a factor to a source in factor analytical models (source identification) and the evaluation of model performance. The source identification is based on the similarity between a given factor and source chemical profiles from public databases. The model performance evaluation is based on statistical indicators used to compare model output with reference values generated in intercomparison exercises. The reference values are calculated as the ensemble average of the results reported by participants that have passed a set of testing criteria based on chemical profile and time series similarity. In this study, a sensitivity analysis of the model performance criteria is carried out using the results of a synthetic dataset for which "a priori" references are available. The consensus-modulated standard deviation, punc, proves to be the best choice for the model performance evaluation when a conservative approach is adopted.
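
    The benchmarking logic can be sketched as below: a candidate source contribution estimate passes if it lies within twice the ensemble uncertainty of the reference value. The numbers are invented, and punc is approximated here by a plain standard deviation; the exact indicator definitions are those of the DeltaSA documentation.

      import numpy as np

      participant_sce = np.array([2.1, 2.4, 1.9, 2.6, 2.2])  # ug/m3, one source
      reference = participant_sce.mean()                     # ensemble average
      punc = participant_sce.std(ddof=1)                     # uncertainty proxy

      candidate = 2.8
      passes = abs(candidate - reference) <= 2 * punc
      print(f"ref={reference:.2f}, punc={punc:.2f}, pass={passes}")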

  16. Discussion of Source Reconstruction Models Using 3D MCG Data

    NASA Astrophysics Data System (ADS)

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed source reconstruction of magnetocardiographic (MCG) signals generated by human heart activity in order to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector component data of the MCG. The results show that a distributed source model gives the best accuracy in performing the source reconstructions, and that 3D MCG data make it possible to resolve smaller differences between the different source models.

  17. Quantifying uncertainty in stable isotope mixing models

    DOE PAGES

    Davis, Paul; Syme, James; Heikoop, Jeffrey; ...

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened such that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.

  18. A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.

    2017-12-01

    Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters, and often consider geodetic and seismic data jointly. Bayesian inference is increasingly being used to estimate posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high, and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework, as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results has become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimation, we undertook the effort of developing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org) and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. Here, we present our strategy for developing BEAT and show application examples, especially the effect of including the model prediction uncertainty of the velocity model in subsequent source optimizations: full moment tensor, Mogi source, and a moderate strike-slip earthquake.
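
    As a flavor of the pymc3-based fitting the package builds on, here is a minimal sketch of a Bayesian fit of a Mogi point source to vertical surface displacements. The station geometry, priors, noise level, and forward formula (the standard Mogi approximation) are illustrative assumptions; this is not BEAT's actual interface.

      import numpy as np
      import pymc3 as pm

      nu = 0.25                                   # Poisson's ratio
      r = np.array([1e3, 2e3, 4e3, 8e3])          # station radial distances [m]
      uz_obs = np.array([21.0, 14.0, 5.0, 1.0]) * 1e-3   # observed uplift [m]

      with pm.Model():
          depth = pm.Uniform("depth", 500.0, 10e3)        # source depth [m]
          dV = pm.Uniform("dV", 1e4, 1e7)                 # volume change [m^3]
          # Mogi vertical displacement: uz = (1-nu)/pi * dV * d / (r^2 + d^2)^(3/2)
          uz = (1 - nu) / np.pi * dV * depth / (r**2 + depth**2) ** 1.5
          pm.Normal("obs", mu=uz, sigma=2e-3, observed=uz_obs)
          trace = pm.sample(2000, tune=1000, chains=2)    # posterior samples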

  19. A two dimensional analytical modeling of surface potential in triple metal gate (TMG) fully-depleted Recessed-Source/Drain (Re-S/D) SOI MOSFET

    NASA Astrophysics Data System (ADS)

    Priya, Anjali; Mishra, Ram Awadh

    2016-04-01

    In this paper, analytical modeling of the surface potential is proposed for a new Triple Metal Gate (TMG) fully depleted Recessed-Source/Drain (Re-S/D) Silicon On Insulator (SOI) Metal Oxide Semiconductor Field Effect Transistor (MOSFET). The metal with the highest work function is placed near the source region and the one with the lowest work function near the drain. The Recessed-Source/Drain SOI MOSFET has higher drain current than the conventional SOI MOSFET owing to its larger source and drain regions. The surface potential model, developed from the 2D Poisson's equation, is verified against simulation results from the two-dimensional ATLAS device simulator. The model is compared with DMG and SMG devices and analysed for different device parameters. The ratio of the metal gate lengths is varied to optimize the result.
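
    The standard modelling step being described can be summarized as follows; the symbols are the customary ones, and the paper's exact boundary conditions and region matching are not reproduced:

      \begin{equation}
        \frac{\partial^2 \phi(x,y)}{\partial x^2}
          + \frac{\partial^2 \phi(x,y)}{\partial y^2}
          = \frac{q\,N_A}{\varepsilon_{si}}, \qquad 0 \le y \le t_{si},
      \end{equation}
      \begin{equation}
        \phi(x,y) \approx \phi_s(x) + c_1(x)\,y + c_2(x)\,y^2 .
      \end{equation}
      % One such system is written per metal-gate region (three for the TMG
      % device, each with its own work function); enforcing continuity of the
      % potential and the electric field at the inter-gate boundaries yields
      % the surface potential \phi_s(x) in each region.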

  20. Fine particle receptor modeling in the atmosphere of Mexico City.

    PubMed

    Vega, Elizabeth; Lowenthal, Douglas; Ruiz, Hugo; Reyes, Elizabeth; Watson, John G; Chow, Judith C; Viana, Mar; Querol, Xavier; Alastuey, Andrés

    2009-12-01

    Source apportionment analyses were carried out by means of receptor modeling techniques to determine the contributions of the major fine particulate matter (PM2.5) sources found at six sites in Mexico City. Thirty-six source profiles were determined within Mexico City to establish the fingerprints of particulate matter sources; profiles within the same source category were additionally averaged using cluster analysis, and the fingerprints of 10 sources were included. Before application of the chemical mass balance (CMB) model, several tests were carried out to determine the best combination of source profiles and species used for the fitting. CMB results showed significant spatial variations in source contributions among the six sites, which are influenced by local soil types and land use. On average, 24-hr PM2.5 concentrations were dominated by mobile source emissions (45%), followed by geological material (17%) and secondary inorganic aerosols (16%). Industrial emissions representing oil combustion and incineration contributed less than 5%, and their contribution was higher in the industrial areas of Tlalnepantla (11%) and Xalostoc (8%). Other sources such as cooking, biomass burning, and oil fuel combustion were identified at lower levels. A second receptor model (principal component analysis, PCA) was subsequently applied to three of the monitoring sites for comparison purposes. Although differences were obtained between source contributions, the results demonstrate the advantages of the combined use of different receptor modeling techniques for source apportionment, given the complementary nature of their results. Further research is needed in this direction to reach better agreement between the estimated source contributions to the particulate matter mass.
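
    A minimal chemical-mass-balance sketch: measured receptor species concentrations c are modelled as a non-negative combination of source profiles F, solved here by non-negative least squares. The profiles and values below are illustrative, not the Mexico City dataset:

      import numpy as np
      from scipy.optimize import nnls

      # Rows: species OC, EC, SO4, NO3, Al; columns: mobile, secondary, geological
      F = np.array([[0.30, 0.05, 0.10],
                    [0.15, 0.00, 0.02],
                    [0.05, 0.50, 0.03],
                    [0.03, 0.30, 0.02],
                    [0.01, 0.00, 0.20]])
      c = np.array([4.0, 1.5, 3.2, 1.9, 0.9])   # measured ambient PM2.5 species, ug/m3

      s, resid = nnls(F, c)                     # source contributions, ug/m3
      for name, val in zip(["mobile", "secondary", "geological"], s):
          print(f"{name}: {val:.1f} ug/m3")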

  1. Assessing Model Characterization of Single Source ...

    EPA Pesticide Factsheets

    Aircraft measurements made downwind from specific coal-fired power plants during the 2013 Southeast Nexus field campaign provide a unique opportunity to evaluate single source photochemical model predictions of both O3 and secondary PM2.5 species. The model did well at predicting downwind plume placement. The model shows similar patterns of an increasing fraction of PM2.5 sulfate ion to the sum of SO2 and PM2.5 sulfate ion with distance from the source compared with ambient-based estimates. The model was less consistent in capturing downwind ambient-based trends in the conversion of NOX to NOY from these sources. Source sensitivity approaches capture near-source O3 titration by fresh NO emissions, in particular with subgrid plume treatment. However, capturing this near-source chemical feature did not translate into better downwind peak estimates of single source O3 impacts. The model estimated O3 production from these sources but was often lower than ambient-based source production. The downwind transect ambient measurements, in particular secondary PM2.5 and O3, have some level of contribution from other sources, which makes direct comparison with model source contribution challenging. Model source attribution results suggest contributions to secondary pollutants from multiple sources even where primary pollutants indicate the presence of a single source. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, deci

  2. Spacecraft Internal Acoustic Environment Modeling

    NASA Technical Reports Server (NTRS)

    Chu, Shao-Sheng R.; Allen, Christopher S.

    2010-01-01

    Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment was developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons with the model showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand; this is in contrast to earlier studies, where Reference Sound Sources (RSS) with known sound power levels were used. Comparisons of the modeling results with the measurements in the mockup again showed excellent agreement. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between the ECLSS wall and the mockup wall. The effects of sealing the gap and adding sound-absorptive treatment to the ECLSS wall were also modeled and validated.

  3. Targeted versus statistical approaches to selecting parameters for modelling sediment provenance

    NASA Astrophysics Data System (ADS)

    Laceby, J. Patrick

    2017-04-01

    One effective field-based approach to modelling sediment provenance is the source fingerprinting technique. Arguably, one of the most important steps in this approach is selecting the appropriate suite of parameters, or fingerprints, used to model source contributions. Accordingly, approaches to selecting parameters for sediment source fingerprinting will be reviewed. Thereafter, opportunities and limitations of these approaches and some future research directions will be presented. For properties to be effective tracers of sediment, they must discriminate between sources whilst behaving conservatively. Conservative behaviour is characterized by constancy in sediment properties: the properties of sediment sources remain constant, or at the very least, any variation in these properties occurs in a predictable and measurable way. Therefore, properties selected for sediment source fingerprinting should remain constant through sediment detachment, transportation and deposition processes, or vary in a predictable and measurable way. One approach to selecting conservative properties for sediment source fingerprinting is to identify targeted tracers, such as caesium-137, that provide specific source information (e.g. surface versus subsurface origins). A second approach is to use statistical tests to select an optimal suite of conservative properties capable of modelling sediment provenance. In general, statistical approaches use a combination of discrimination statistics (e.g. Kruskal-Wallis H-test, Mann-Whitney U-test) and parameter selection statistics (e.g. Discriminant Function Analysis or Principal Component Analysis). The challenge is that modelling sediment provenance is often not straightforward, and there is increasing debate in the literature surrounding the most appropriate approach to selecting elements for modelling. Moving forward, it would be beneficial if researchers tested their results with multiple modelling approaches, artificial mixtures, and multiple lines of evidence to provide secondary support to their initial modelling results. Indeed, element selection can greatly impact modelling results, and having multiple lines of evidence will help provide confidence when modelling sediment provenance.

  4. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighting strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies, with comparison to FOCUSS and LORETA for various source configurations, were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
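
    A sketch of the FOCUSS-style iteratively re-weighted minimum-norm scheme that CMOSS builds on; the CMOSS modification would take each point's weight from the maximum of the previous solution over the point and its neighbors, whereas the plain |x| weight is used below. The lead field and data are random stand-ins.

      import numpy as np

      rng = np.random.default_rng(3)
      n_sensors, n_sources = 32, 500
      A = rng.normal(size=(n_sensors, n_sources))     # lead-field matrix
      x_true = np.zeros(n_sources)
      x_true[[50, 300]] = [1.0, -0.8]                 # two sparse sources
      b = A @ x_true                                  # noiseless measurements

      x = np.ones(n_sources)                          # initial (min-norm) estimate
      for _ in range(20):
          w = np.abs(x)                # CMOSS would use max of |x| over neighbors
          AW = A * w                   # equivalent to A @ diag(w)
          q = np.linalg.pinv(AW) @ b   # minimum-norm solution in weighted space
          x = w * q                    # map back to source space
      print(np.nonzero(np.abs(x) > 1e-3 * np.abs(x).max())[0])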

  5. DEVELOPMENT AND EVALUATION OF PM 2.5 SOURCE APPORTIONMENT METHODOLOGIES

    EPA Science Inventory

    The receptor model called Positive Matrix Factorization (PMF) has been extensively used to apportion sources of ambient fine particulate matter (PM2.5), but the accuracy of source apportionment results currently remains unknown. In addition, air quality forecast model...

  6. A New Simplified Source Model to Explain Strong Ground Motions from a Mega-Thrust Earthquake - Application to the 2011 Tohoku Earthquake (Mw9.0) -

    NASA Astrophysics Data System (ADS)

    Nozu, A.

    2013-12-01

    A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which is itself a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and assumed to follow the omega-square model. Multiplying the source spectrum by the path effect and the site amplification factor yields the Fourier amplitude at a target site. Combining it with the Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing the contributions from the subevents, strong ground motions from the entire rupture are obtained. The source model consists of six parameters for each subevent: longitude, latitude, depth, rupture time, seismic moment and corner frequency. The finite size of the subevent can be taken into account because the corner frequency, which is inversely proportional to the length of the subevent, is included in the model. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. The results were then compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered an example of a characterized source model. Although the pseudo point-source model involves far fewer model parameters than the super-asperity model, the errors associated with the former were comparable to those of the latter for velocity waveforms and envelopes, and much smaller for Fourier spectra. This evidence indicates the usefulness of the pseudo point-source model. [Figure: comparison of the observed (black) and synthetic (red) Fourier spectra; the spectra are the composition of the two horizontal components, smoothed with a 0.05 Hz-bandwidth Parzen window.]
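
    The synthesis step described above can be sketched as follows: each subevent's far-field Fourier amplitude is an omega-square source spectrum multiplied by path and site terms (the phase is taken from a smaller event's record in the actual method). All parameter values below are illustrative:

      import numpy as np

      def source_spectrum(f, M0, fc):
          """Omega-square model: flat moment spectrum below fc, f^-2 fall-off above."""
          return M0 / (1.0 + (f / fc) ** 2)

      f = np.linspace(0.2, 10.0, 500)        # frequency band [Hz]
      M0, fc = 1e20, 0.08                    # subevent moment [N m], corner freq [Hz]
      Q, beta, R = 300.0, 3500.0, 150e3      # attenuation, S speed [m/s], distance [m]

      path = np.exp(-np.pi * f * R / (Q * beta)) / R   # anelastic + geometric spreading
      site = np.ones_like(f)                           # placeholder amplification
      amplitude = source_spectrum(f, M0, fc) * path * site
      # Summing such contributions over all subevents gives the total spectrum.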

  7. Future-year ozone prediction for the United States using updated models and inputs.

    PubMed

    Collet, Susan; Kidokoro, Toru; Karamchandani, Prakash; Shah, Tejas; Jung, Jaegun

    2017-08-01

    The relationship between emission reductions and changes in ozone can be studied using photochemical grid models. These models are updated with new information as it becomes available. The primary objective of this study was to update the previous Collet et al. studies by using the most up-to-date (at the time the study was done) emission modeling tools, inventories, and meteorology to conduct ozone source attribution and sensitivity studies. Results show that future-year (2030) design values for 8-hr ozone concentrations were lower than base-year (2011) values. The ozone source attribution results for selected cities showed that boundary conditions were the dominant contributors to ozone concentrations at the western U.S. locations and were important for many of the eastern U.S. locations. Point sources were generally more important in the eastern United States than in the western United States. The contributions of on-road mobile emissions were less than 5 ppb at a majority of the cities selected for analysis. The higher-order decoupled direct method (HDDM) results showed that in most of the locations selected for analysis, NOx emission reductions were more effective than VOC emission reductions in reducing ozone levels. The source attribution results from this study provide useful information on the important source categories and some initial guidance on future emission reduction strategies.

  8. Identifying equivalent sound sources from aeroacoustic simulations using a numerical phased array

    NASA Astrophysics Data System (ADS)

    Pignier, Nicolas J.; O'Reilly, Ciarán J.; Boij, Susann

    2017-04-01

    An application of phased array methods to numerical data is presented, aimed at identifying equivalent flow sound sources from aeroacoustic simulations. Based on phased array data extracted from compressible flow simulations, sound source strengths are computed on a set of points in the source region using phased array techniques assuming monopole propagation. Two phased array techniques are used to compute the source strengths: an approach using a Moore-Penrose pseudo-inverse and a beamforming approach using dual linear programming (dual-LP) deconvolution. The first approach gives a model of correlated sources for the acoustic field generated from the flow expressed in a matrix of cross- and auto-power spectral values, whereas the second approach results in a model of uncorrelated sources expressed in a vector of auto-power spectral values. The accuracy of the equivalent source model is estimated by computing the acoustic spectrum at a far-field observer. The approach is tested first on an analytical case with known point sources. It is then applied to the example of the flow around a submerged air inlet. The far-field spectra obtained from the source models for two different flow conditions are in good agreement with the spectra obtained with a Ffowcs Williams-Hawkings integral, showing the accuracy of the source model from the observer's standpoint. Various configurations for the phased array and for the sources are used. The dual-LP beamforming approach shows better robustness to changes in the number of probes and sources than the pseudo-inverse approach. The good results obtained with this simulation case demonstrate the potential of the phased array approach as a modelling tool for aeroacoustic simulations.
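
    A minimal sketch of the pseudo-inverse variant: free-field monopole Green's functions from candidate source points to the array probes form a propagation matrix G, and equivalent source strengths follow from its Moore-Penrose pseudo-inverse. The geometry, frequency, and probe pressures below are invented:

      import numpy as np

      rng = np.random.default_rng(7)
      k = 2 * np.pi * 2000.0 / 340.0                  # wavenumber at 2 kHz
      probes = rng.uniform(0.5, 1.0, size=(24, 3))    # phased-array probe positions
      cands = rng.uniform(-0.1, 0.1, size=(10, 3))    # candidate source points

      r = np.linalg.norm(probes[:, None, :] - cands[None, :, :], axis=2)
      G = np.exp(-1j * k * r) / (4 * np.pi * r)       # monopole propagation matrix

      p = rng.normal(size=24) + 1j * rng.normal(size=24)   # stand-in probe pressures
      q = np.linalg.pinv(G) @ p                            # equivalent source strengths
      Q = np.outer(q, q.conj())   # cross-/auto-power matrix of the correlated sources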

  9. Three dimensional global modeling of atmospheric CO2

    NASA Technical Reports Server (NTRS)

    Fung, I.; Hansen, J.; Rind, D.

    1983-01-01

    A model was developed to study the prospects of extracting information on carbon dioxide sources and sinks from observed CO2 variations. The approach uses a three dimensional global transport model, based on winds from a 3-D general circulation model (GCM), to advect CO2 noninteractively, i.e., as a tracer, with specified sources and sinks of CO2 at the surface. The 3-D model employed is identified and biosphere, ocean and fossil fuel sources and sinks are discussed. Some preliminary model results are presented.

  10. Start-up Characteristics of Swallow-tailed Axial-grooved Heat Pipe under the conditions of Multiple Heat Sources

    NASA Astrophysics Data System (ADS)

    Zhang, Renping

    2017-12-01

    A mathematical model was developed for predicting the start-up characteristics of a swallow-tailed axial-grooved heat pipe under conditions of multiple heat sources. The effects of the heat capacitance of the heat source, liquid-vapour interfacial evaporation-condensation heat transfer, and shear stress at the interface were considered in the current model. The interfacial evaporating mass flow rate is based on kinetic analysis. Time variations of the evaporating mass rate, wall temperature and liquid velocity are studied from start-up to steady state. The calculated results show that the wall temperature exhibits a step transition at the junctions between evaporator regions with and without a heat source. The liquid velocity changes drastically along the heated parts of the evaporator section but varies only slightly along the parts without a heat source. When the heat capacitance of the heat source is ignored, the numerical temperature shows a quicker response; when it is taken into account, the data obtained from the proposed model agree well with the experimental results.

  11. SeaQuaKE: Sea-optimized Quantum Key Exchange

    DTIC Science & Technology

    2014-11-01

    In this technical report, prepared under Distribution Special Notice 13-SN-0004 (ONRBAA13-001), we describe modeling results for an entangled photon-pair source based on spontaneous four-wave mixing. Progress areas over the last quarter include (i) development of a wavelength-dependent, entangled photon-pair source model and (ii) end-to-end system modeling.

  12. Comparing geological and statistical approaches for element selection in sediment tracing research

    NASA Astrophysics Data System (ADS)

    Laceby, J. Patrick; McMahon, Joe; Evrard, Olivier; Olley, Jon

    2015-04-01

    Elevated suspended sediment loads reduce reservoir capacity and significantly increase the cost of operating water treatment infrastructure, making the management of sediment supply to reservoirs of increasing importance. Sediment fingerprinting techniques can be used to determine the relative contributions of different sources of sediment accumulating in reservoirs. The objective of this research is to compare geological and statistical approaches to element selection for sediment fingerprinting modelling. Time-integrated samplers (n=45) were used to obtain source samples from four major subcatchments flowing into the Baroon Pocket Dam in South East Queensland, Australia. The geochemistry of potential sources was compared to the geochemistry of sediment cores (n=12) sampled in the reservoir. The geological approach selected elements for modelling that provided expected, observed and statistical discrimination between sediment sources. Two statistical approaches selected elements for modelling with the Kruskal-Wallis H-test and Discriminant Function Analysis (DFA); in particular, two different significance levels (0.05 and 0.35) for the DFA were included to investigate the importance of element selection on modelling results. A distribution model determined the relative contributions of different sources to sediment sampled in the Baroon Pocket Dam. Elemental discrimination was expected between one subcatchment (Obi Obi Creek) and the remaining subcatchments (Lexys, Falls and Bridge Creek). Six major elements were expected to provide discrimination; of these six, only Fe2O3 and SiO2 provided expected, observed and statistical discrimination. Modelling results with the geological approach indicated that 36% (+/- 9%) of sediment sampled in the reservoir cores was from mafic-derived sources and 64% (+/- 9%) from felsic-derived sources. The geological and the first statistical approach (DFA0.05) differed by only 1% (σ = 5%) for 5 of the 6 model groupings, with only the Lexys Creek modelling results differing significantly (35%). The statistical model with expanded elemental selection (DFA0.35) differed from the geological model by an average of 30% across all 6 models. Elemental selection for sediment fingerprinting therefore has the potential to impact modelling results. Accordingly, it is important to incorporate both robust geological and statistical approaches when selecting elements for sediment fingerprinting. For the Baroon Pocket Dam, management should focus on reducing the supply of sediment derived from felsic sources in each of the subcatchments.
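
    The first statistical screening step named above can be sketched as follows: keep elements whose concentrations differ significantly across source groups under a Kruskal-Wallis H-test, then pass the survivors to a DFA-style selection. The data, groups, and 0.05 threshold below mirror the study's setup only loosely:

      import numpy as np
      from scipy.stats import kruskal

      rng = np.random.default_rng(11)
      elements = ["Fe2O3", "SiO2", "Al2O3", "CaO"]
      # Synthetic concentrations for 4 source groups (subcatchments), 12 samples each
      groups = {e: [rng.normal(loc, 1.0, 12) for loc in rng.uniform(5, 15, 4)]
                for e in elements}

      selected = [e for e in elements if kruskal(*groups[e]).pvalue < 0.05]
      print("Elements passing the H-test:", selected)
      # A DFA (or similar) step would then pick the optimal subset among these.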

  13. A revised dosimetric characterization of the model S700 electronic brachytherapy source containing an anode-centering plastic insert and other components not included in the 2006 model.

    PubMed

    Hiatt, Jessica R; Davis, Stephen D; Rivard, Mark J

    2015-06-01

    The model S700 Axxent electronic brachytherapy source by Xoft, Inc., was characterized by Rivard et al. in 2006. Since then, the source design was modified to include a new insert at the source tip. Current study objectives were to establish an accurate source model for simulation purposes, dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and determine dose differences between the original simulation model and the current model S700 source design. Design information from measurements of dissected model S700 sources and from vendor-supplied CAD drawings was used to aid establishment of an updated Monte Carlo source model, which included the complex-shaped plastic source-centering insert intended to promote water flow for cooling the source anode. These data were used to create a model for subsequent radiation transport simulations in a water phantom. Compared to the 2006 simulation geometry, the influence of volume averaging close to the source was substantially reduced. A track-length estimator was used to evaluate collision kerma as a function of radial distance and polar angle for determination of TG-43 dosimetry parameters. Results for the 50 kV source were determined every 0.1 cm from 0.3 to 15 cm and every 1° from 0° to 180°. Photon spectra in water with 0.1 keV resolution were also obtained from 0.5 to 15 cm and polar angles from 0° to 165°. Simulations were run for 10(10) histories, resulting in statistical uncertainties on the transverse plane of 0.04% at r = 1 cm and 0.06% at r = 5 cm. The dose-rate distribution ratio for the model S700 source as compared to the 2006 model exceeded unity by more than 5% for roughly one quarter of the solid angle surrounding the source, i.e., θ ≥ 120°. The radial dose function diminished in a similar manner as for an (125)I seed, with values of 1.434, 0.636, 0.283, and 0.0975 at 0.5, 2, 5, and 10 cm, respectively. The radial dose function ratio between the current and the 2006 model had a minimum of 0.980 at 0.4 cm, close to the source sheath and for large distances approached 1.014. 2D anisotropy function ratios were close to unity for 50° ≤ θ ≤ 110°, but exceeded 5% for θ < 40° at close distances to the sheath and exceeded 15% for θ > 140°, even at large distances. Photon energy fluence of the updated model as compared to the 2006 model showed a decrease in output with increasing distance; this effect was pronounced at the lowest energies. A decrease in photon fluence with increase in polar angle was also observed and was attributed to the silver epoxy component. Changes in source design influenced the overall dose rate and distribution by more than 2% in several regions. This discrepancy is greater than the dose calculation acceptance criteria as recommended in the AAPM TG-56 report. The effect of the design change on the TG-43 parameters would likely not result in dose differences outside of patient applicators. Adoption of this new dataset is suggested for accurate depiction of model S700 source dose distributions.

  14. SU-E-T-284: Revisiting Reference Dosimetry for the Model S700 Axxent 50 kVp Electronic Brachytherapy Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiatt, JR; Rivard, MJ

    2014-06-01

    Purpose: The model S700 Axxent electronic brachytherapy source by Xoft was characterized in 2006 by Rivard et al. The source design was modified in 2006 to include a plastic centering insert at the source tip to more accurately position the anode. The objectives of the current study were to establish an accurate Monte Carlo source model for simulation purposes, to dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and to determine dose differences between the source with and without the centering insert. Methods: Design information from dissected sources and vendor-supplied CAD drawings was used to devise the source model for radiation transport simulations of dose distributions in a water phantom. Collision kerma was estimated as a function of radial distance, r, and polar angle, θ, for determination of reference TG-43 dosimetry parameters. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.03% at r=1 cm and 0.08% at r=10 cm. Results: The dose-rate distribution in the transverse plane did not change by more than 2% between the 2006 model and the current study. While differences exceeding 15% were observed near the source distal tip, these diminished to within 2% for r>1.5 cm. Differences exceeding a factor of two were observed near θ=150° and in contact with the source, but diminished to within 20% at r=10 cm. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% over a third of the available solid angle external to the source. For clinical applications using balloons or applicators with tissue located within 5 cm of the source, dose differences exceeding 2% were observed only for θ>110°. This study carefully examined the current source geometry and presents a modern reference TG-43 dosimetry dataset for the model S700 source.

  15. Using ensemble models to identify and apportion heavy metal pollution sources in agricultural soils on a local scale.

    PubMed

    Wang, Qi; Xie, Zhiyi; Li, Fangbai

    2015-11-01

    This study aims to identify and apportion multi-source and multi-phase heavy metal pollution from natural and anthropogenic inputs in agricultural soils on the local scale, using ensemble models that include stochastic gradient boosting (SGB) and random forest (RF). The heavy metal pollution sources were quantitatively assessed, and the results illustrated the suitability of the ensemble models for the assessment of multi-source and multi-phase heavy metal pollution in agricultural soils at this scale. The results of SGB and RF consistently demonstrated that anthropogenic sources contributed the most to the concentrations of Pb and Cd in agricultural soils in the study region, and that SGB performed better than RF.
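
    A minimal sketch of the ensemble-model idea: fit stochastic gradient boosting (SGB) and random forest (RF) regressors to predict a soil metal concentration from candidate source covariates, then read relative source influence from feature importances. The covariates and data below are invented:

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

      rng = np.random.default_rng(5)
      # Columns: e.g., parent material, irrigation, traffic, industry proxies
      X = rng.normal(size=(300, 4))
      y = 0.2 * X[:, 0] + 0.7 * X[:, 3] + rng.normal(0, 0.1, 300)   # soil Cd proxy

      sgb = GradientBoostingRegressor(subsample=0.5).fit(X, y)  # subsample<1 => stochastic GB
      rf = RandomForestRegressor(n_estimators=300).fit(X, y)

      for name, m in [("SGB", sgb), ("RF", rf)]:
          print(name, np.round(m.feature_importances_, 2))   # relative source influence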

  16. Influence of Elevation Data Source on 2D Hydraulic Modelling

    NASA Astrophysics Data System (ADS)

    Bakuła, Krzysztof; Stępnik, Mateusz; Kurczyński, Zdzisław

    2016-08-01

    The aim of this paper is to analyse the influence of various elevation data sources on hydraulic modelling in open channels. In the research, digital terrain models from different datasets were evaluated and used in two-dimensional hydraulic models. The following aerial and satellite elevation data were used to create the digital terrain models: airborne laser scanning, image matching, elevation data collected in the LPIS, EuroDEM, and ASTER GDEM. From the results of five 2D hydrodynamic models with different input elevation data, the maximum depth and flow velocity of water were derived and compared with the results obtained with the most accurate ALS data. For this analysis, a statistical evaluation of the differences between the hydraulic modelling results was prepared. The presented research proves the importance of elevation data quality in hydraulic modelling and shows that only ALS and photogrammetric data can be considered sufficiently reliable elevation data sources for accurate 2D hydraulic modelling.

  17. A stable isotope model for combined source apportionment and degradation quantification of environmental pollutants

    NASA Astrophysics Data System (ADS)

    Lutz, Stefanie; Van Breukelen, Boris

    2014-05-01

    Natural attenuation can represent a complementary or alternative approach to engineered remediation of polluted sites. In this context, compound specific stable isotope analysis (CSIA) has proven a useful tool, as it can provide evidence of natural attenuation and assess the extent of in-situ degradation based on changes in isotope ratios of pollutants. Moreover, CSIA can allow for source identification and apportionment, which might help to identify major emission sources in complex contamination scenarios. However, degradation and mixing processes in aquifers can lead to changes in isotopic compositions, such that their simultaneous occurrence might complicate combined source apportionment (SA) and assessment of the extent of degradation (ED). We developed a mathematical model (stable isotope sources and sinks model; SISS model) based on the linear stable isotope mixing model and the Rayleigh equation that allows for simultaneous SA and quantification of the ED in a scenario of two emission sources and degradation via one reaction pathway. It was shown that the SISS model with CSIA of at least two elements contained in the pollutant (e.g., C and H in benzene) allows for unequivocal SA even in the presence of degradation-induced isotope fractionation. In addition, the model enables precise quantification of the ED provided degradation follows instantaneous mixing of two sources. If mixing occurs after two sources have degraded separately, the model can still yield a conservative estimate of the overall extent of degradation. The SISS model was validated against virtual data from a two-dimensional reactive transport model. The model results for SA and ED were in good agreement with the simulation results. The application of the SISS model to field data of benzene contamination was, however, challenged by large uncertainties in measured isotope data. Nonetheless, the use of the SISS model provided a better insight into the interplay of mixing and degradation processes at the field site, as it revealed the prevailing contribution of one emission source and a low overall ED. The model can be extended to a larger number of sources and sinks. It may aid in forensics and natural attenuation assessment of soil, groundwater, surface water, or atmospheric pollution.
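
    For one element and two sources A and B, the two building blocks the SISS model combines can be written as follows (f is the fraction of source A, F the fraction of pollutant remaining after degradation, and ε the isotopic enrichment factor of the degradation pathway):

      \begin{align}
        \delta_{mix} &= f\,\delta_A + (1-f)\,\delta_B
            && \text{(linear mixing)} \\
        \delta &= \delta_{mix} + \epsilon \ln F
            && \text{(Rayleigh fractionation)}
      \end{align}
      % With CSIA of two elements (e.g., C and H in benzene), the two resulting
      % equations can be solved simultaneously for f and the extent of
      % degradation ED = 1 - F.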

  18. Effect of high energy electrons on H⁻ production and destruction in a high current DC negative ion source for cyclotron.

    PubMed

    Onai, M; Etoh, H; Aoki, Y; Shibata, T; Mattei, S; Fujita, S; Hatayama, A; Lettry, J

    2016-02-01

    Recently, a filament-driven multi-cusp negative ion source has been developed for proton cyclotrons in medical applications. In this study, numerical modeling of the filament arc-discharge source plasma has been performed, combining kinetic modeling of electrons in the ion source plasma by a multi-cusp arc-discharge code with zero-dimensional rate equations for hydrogen molecules and negative ions. In this paper, the main focus is placed on the effects of the arc-discharge power on the electron energy distribution function and the resultant H(-) production. The modelling results reasonably explain the dependence of the H(-) extraction current on the arc-discharge power observed in the experiments.

  19. New (125)I brachytherapy source IsoSeed I25.S17plus: Monte Carlo dosimetry simulation and comparison to sources of similar design.

    PubMed

    Pantelis, Evaggelos; Papagiannis, Panagiotis; Anagnostopoulos, Giorgos; Baltas, Dimos

    2013-12-01

    To determine the relative dose rate distribution around the new (125)I brachytherapy source IsoSeed I25.S17plus and report results in a form suitable for clinical use. Results for the new source are also compared to corresponding results for other commercially available (125)I sources of similar design. Monte Carlo simulations were performed using the MCNP5 v.1.6 general purpose code. The model of the new source was prepared from information provided by the manufacturer and verified by imaging a sample of ten non-radioactive sources. Corresponding simulations were also performed for the 6711 (125)I brachytherapy source, using updated geometric information presented recently in the literature. The uncertainty of the dose distribution around the new source, as well as of the dosimetric quantities derived from it according to the Task Group 43 formalism, was determined from the standard error of the mean of simulations for a sample of fifty source models. These source models were prepared by randomly selecting values of geometric parameters from uniform distributions defined by manufacturer-stated tolerances. Results are presented in the form of the quantities defined in the update of the Task Group 43 report, as well as a relative dose rate table in Cartesian coordinates. The dose rate distribution of the new source is comparable to that of sources of similar design (IsoSeed I25.S17, Oncoseed 6711, SelectSeed 130.002, Advantage IAI-125A, I-Seed AgX100, Thinseed 9011). Noticeable differences were observed only for the IsoSeed I25.S06 and Best 2301 sources.

  20. Comparison of actual and seismologically inferred stress drops in dynamic models of microseismicity

    NASA Astrophysics Data System (ADS)

    Lin, Y. Y.; Lapusta, N.

    2017-12-01

    Estimating source parameters for small earthquakes is commonly based on either Brune or Madariaga source models. These models assume circular rupture that starts from the center of a fault and spreads axisymmetrically with a constant rupture speed. The resulting stress drops are moment-independent, with large scatter. However, more complex source behaviors are commonly discovered by finite-fault inversions for both large and small earthquakes, including directivity, heterogeneous slip, and non-circular shapes. Recent studies (Noda, Lapusta, and Kanamori, GJI, 2013; Kaneko and Shearer, GJI, 2014; JGR, 2015) have shown that slip heterogeneity and directivity can result in large discrepancies between the actual and estimated stress drops. We explore the relation between the actual and seismologically estimated stress drops for several types of numerically produced microearthquakes. For example, an asperity-type circular fault patch with increasing normal stress towards the middle of the patch, surrounded by a creeping region, is a potentially common microseismicity source. In such models, a number of events rupture the portion of the patch near its circumference, producing ring-like ruptures, before a patch-spanning event occurs. We calculate the far-field synthetic waveforms for our simulated sources and estimate their spectral properties. The distribution of corner frequencies over the focal sphere is markedly different for the ring-like sources compared to the Madariaga model. Furthermore, most waveforms for the ring-like sources are better fitted by a high-frequency fall-off rate different from the commonly assumed value of 2 (from the so-called omega-squared model), with the average value over the focal sphere being 1.5. The application of Brune- or Madariaga-type analysis to these sources results in the stress drops estimates different from the actual stress drops by a factor of up to 125 in the models we considered. We will report on our current studies of other types of seismic sources, such as repeating earthquakes and foreshock-like events, and whether the potentially realistic and common sources different from the standard Brune and Madariaga models can be identified from their focal spectral signatures and studied using a more tailored seismological analysis.
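
    The spectral stress-drop estimate the abstract refers to can be sketched as follows: a corner frequency fc and seismic moment M0 give a source radius through the Brune (k ≈ 0.37) or Madariaga (k ≈ 0.21, S waves) constant, and then a circular-crack stress drop. Input values are illustrative:

      import numpy as np

      def stress_drop(M0, fc, beta, k=0.37):
          """Circular-crack stress drop: r = k*beta/fc, dsigma = 7*M0/(16*r^3)."""
          r = k * beta / fc                 # source radius [m]
          return 7.0 * M0 / (16.0 * r ** 3) # stress drop [Pa]

      M0, fc, beta = 1e13, 8.0, 3500.0      # N m, Hz, m/s (roughly a M2.6 event)
      print(f"Brune:     {stress_drop(M0, fc, beta, 0.37)/1e6:.2f} MPa")
      print(f"Madariaga: {stress_drop(M0, fc, beta, 0.21)/1e6:.2f} MPa")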

  1. The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook

    NASA Astrophysics Data System (ADS)

    Mai, P. M.

    2017-12-01

    Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen, for instance, in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake source inversions and to understand the strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, and also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises, and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis-)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.

  2. Source partitioning of anthropogenic groundwater nitrogen in a mixed-use landscape, Tutuila, American Samoa

    NASA Astrophysics Data System (ADS)

    Shuler, Christopher K.; El-Kadi, Aly I.; Dulai, Henrietta; Glenn, Craig R.; Fackrell, Joseph

    2017-12-01

    This study presents a modeling framework for quantifying human impacts and for partitioning the sources of contamination related to water quality in the mixed-use landscape of a small tropical volcanic island. On Tutuila, the main island of American Samoa, production wells in the most populated region (the Tafuna-Leone Plain) produce most of the island's drinking water. However, much of this water has been deemed unsafe to drink since 2009. Tutuila has three predominant anthropogenic non-point groundwater pollution sources of concern: on-site disposal systems (OSDS), agricultural chemicals, and pig manure. These sources are broadly distributed throughout the landscape and are located near many drinking-water wells. Water quality analyses show a link between elevated levels of total dissolved groundwater nitrogen (TN) and areas with high non-point-source pollution density, suggesting that TN can be used as a tracer of groundwater contamination from these sources. The modeling framework used in this study integrates land-use information, hydrological data, and water quality analyses with nitrogen loading and transport models. The approach utilizes a numerical groundwater flow model, a nitrogen-loading model, and a multi-species contaminant transport model. Nitrogen from each source is modeled as an independent component in order to trace the impact from individual land-use activities. Model results are calibrated and validated with dissolved groundwater TN concentrations and inorganic δ15N values, respectively. Results indicate that OSDS contribute significantly more TN to Tutuila's aquifers than other sources, and thus should be prioritized in future water-quality management efforts.
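
    Because the advection-dispersion transport is linear in concentration, modeling each source as an independent component amounts to a superposition; schematically (generic notation, not the authors'):

    \[
    C_{\mathrm{TN}}(\mathbf{x}) = \sum_{s \in \{\mathrm{OSDS,\ fertilizer,\ manure,\ natural}\}} C_s(\mathbf{x}),
    \]

    where each component C_s solves the same flow and transport problem but is forced only by the nitrogen loading of source s, so the ratio C_s / C_TN at a given well gives that source's share of the contamination there.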

  3. Measurement and modeling of the acoustic field near an underwater vehicle and implications for acoustic source localization.

    PubMed

    Lepper, Paul A; D'Spain, Gerald L

    2007-08-01

    The performance of traditional techniques of passive localization in ocean acoustics such as time-of-arrival (phase differences) and amplitude ratios measured by multiple receivers may be degraded when the receivers are placed on an underwater vehicle due to effects of scattering. However, knowledge of the interference pattern caused by scattering provides a potential enhancement to traditional source localization techniques. Results based on a study using data from a multi-element receiving array mounted on the inner shroud of an autonomous underwater vehicle show that scattering causes the localization ambiguities (side lobes) to decrease in overall level and to move closer to the true source location, thereby improving localization performance, for signals in the frequency band 2-8 kHz. These measurements are compared with numerical modeling results from a two-dimensional time domain finite difference scheme for scattering from two fluid-loaded cylindrical shells. Measured and numerically modeled results are presented for multiple source aspect angles and frequencies. Matched field processing techniques quantify the source localization capabilities for both measurements and numerical modeling output.
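
    Matched field processing of the kind used here ranks candidate source locations by correlating the measured array data with modeled replica fields. A minimal sketch of the conventional (Bartlett) processor follows; the function and variable names are hypothetical, and the replicas would come from a propagation model such as the finite-difference scattering computation described above:

    ```python
    import numpy as np

    def bartlett_ambiguity(data_vec, replica_vecs):
        """Normalized Bartlett matched-field power for each candidate location.

        data_vec     : complex array, shape (n_elements,)  -- measured field
        replica_vecs : complex array, shape (n_locations, n_elements)
                       -- modeled field for each candidate source location
        """
        d = data_vec / np.linalg.norm(data_vec)
        power = np.empty(replica_vecs.shape[0])
        for i, w in enumerate(replica_vecs):
            w = w / np.linalg.norm(w)
            power[i] = np.abs(np.vdot(w, d)) ** 2   # |w^H d|^2, in [0, 1]
        return power

    # The location maximizing the ambiguity surface is the matched-field
    # estimate; including scattering in the replicas is what sharpens the
    # peak and suppresses the side lobes discussed above.
    ```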

  4. SU-E-T-254: Development of a HDR-BT QA Tool for Verification of Source Position with Oncentra Applicator Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumazaki, Y; Miyaura, K; Hirai, R

    2015-06-15

    Purpose: To develop a High Dose Rate Brachytherapy (HDR-BT) quality assurance (QA) tool for verification of source position with Oncentra applicator modeling, and to report the results of radiation source positions with this tool. Methods: We developed a HDR-BT QA phantom and automated analysis software for verification of source position with Oncentra applicator modeling for the Fletcher applicator used in the MicroSelectron HDR system. This tool is intended for end-to-end tests that mimic the clinical 3D image-guided brachytherapy (3D-IGBT) workflow. The phantom is a 30x30x3 cm cuboid phantom with radiopaque markers, which are inserted into the phantom to evaluate applicator tips and reference source positions; positions are laterally shifted 10 mm from the applicator axis. The markers are lead-based and scatter radiation to expose the films. Gafchromic RTQA2 films are placed on the applicators. The phantom includes spaces to embed the applicators. The source position is determined as the distance between the exposed source position and center position of two pairs of the first radiopaque markers. We generated a 3D-IGBT plan with applicator modeling. The first source position was 6 mm from the applicator tips, and the second source position was 10 mm from the first source position. Results: All source positions were consistent with the exposed positions within 1 mm for all Fletcher applicators using in-house software. Moreover, the distance between source positions was in good agreement with the reference distance. Applicator offset, determined as the distance from the applicator tips at the first source position in the treatment planning system, was accurate. Conclusion: Source position accuracy of applicator modeling used in 3D-IGBT was acceptable. This phantom and software will be useful as a HDR-BT QA tool for verification of source position with Oncentra applicator modeling.

  5. Inverse modelling of fluvial sediment connectivity identifies characteristics and spatial distribution of sediment sources in a large river network.

    NASA Astrophysics Data System (ADS)

    Schmitt, R. J. P.; Bizzi, S.; Kondolf, G. M.; Rubin, Z.; Castelletti, A.

    2016-12-01

    Field and laboratory evidence indicates that the spatial distribution of transport in both alluvial and bedrock rivers is an adaptation to sediment supply. Sediment supply, in turn, depends on the spatial distribution and properties (e.g., grain sizes and supply rates) of individual sediment sources. Analyzing the distribution of transport capacity in a river network could hence clarify the spatial distribution and properties of sediment sources. Yet, challenges include a) identifying the magnitude and spatial distribution of transport capacity for each of multiple grain sizes being simultaneously transported, and b) estimating source grain sizes and supply rates, both at network scales. Herein, we approach the problem of identifying the spatial distribution of sediment sources and the resulting network sediment fluxes in a major, poorly monitored tributary (80,000 km²) of the Mekong. To this end, we apply the CASCADE modeling framework (Schmitt et al., 2016). CASCADE calculates transport capacities and sediment fluxes for multiple grain sizes on the network scale based on remotely-sensed morphology and modelled hydrology. CASCADE is run in an inverse Monte Carlo approach for 7500 random initializations of source grain sizes. In all runs, the supply of each source is inferred from the minimum downstream transport capacity for the source grain size. Results for each realization are compared to sparse available sedimentary records. Only 1% of the initializations reproduced the sedimentary record. Results for these realizations revealed a spatial pattern in source supply rates, grain sizes, and network sediment fluxes that correlated well with map-derived patterns in lithology and river morphology. Hence, we propose that observable river hydro-morphology contains information on upstream source properties that can be back-calculated using an inverse modeling approach. Such an approach could be coupled to more detailed models of hillslope processes in the future to derive integrated models of hillslope production and fluvial transport processes, which would be particularly useful for identifying sediment provenance in poorly monitored river basins.
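
    The inverse Monte Carlo logic is simple to sketch. The toy below is self-contained and illustrative only: the network geometry, the capacity-grain-size relation, and all numbers are invented stand-ins for CASCADE's remote-sensing-based transport capacities, but the acceptance loop mirrors the described procedure (random source grain sizes; supply set by the minimum downstream transport capacity; realizations kept only if they reproduce the sedimentary record):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy network: 20 sources, each with 15 downstream reaches whose transport
    # capacity (kg/s) decreases with the source grain size. All values hypothetical.
    n_sources, n_reaches = 20, 15
    reach_factor = rng.lognormal(2.0, 0.5, size=(n_sources, n_reaches))  # fixed morphology

    def outlet_flux(d50):
        capacity = reach_factor / d50[:, None] ** 1.5
        supply = capacity.min(axis=1)   # supply = min downstream transport capacity
        return supply.sum()             # total flux reaching the basin outlet

    true_d50 = rng.uniform(1e-4, 0.05, size=n_sources)   # "true" grain sizes (m)
    observed = outlet_flux(true_d50)    # stands in for the sparse sedimentary record

    accepted = [d50 for d50 in rng.uniform(1e-4, 0.05, size=(7500, n_sources))
                if abs(outlet_flux(d50) - observed) / observed < 0.05]
    print(f"{len(accepted)} of 7500 initializations reproduce the record")
    ```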

  6. Integrating multiple remote sensing and surface measurements with models, to quantify and constrain the past decade's total 4D aerosol source profile and impacts

    NASA Astrophysics Data System (ADS)

    Cohen, J. B.; Lan, R.; Lin, C.; Ng, D. H. L.; Lim, A.

    2017-12-01

    A multi-instrument inverse modeling approach is employed to identify and quantify large-scale global biomass-burning and urban aerosol emission profiles. The approach uses MISR, MODIS, OMI and MOPITT, with data from 2006 to 2016, to generate spatial and temporal loads, as well as some information about composition. The method is able to identify regions impacted by stable urban sources, changing urban sources, intense fires, and linear combinations thereof. Subsequent quantification yields a unified field, leading to a less biased profile whose result does not require arbitrary scaling to match long-term means. Additionally, the result reasonably reproduces inter- and intra-annual variation. Both meso-scale (WRF-CHEM) and global (MIT-AERO, a multi-mode, multi-mixing-state aerosol model) models of aerosol transport, chemistry, and physics are used to generate the resulting 4D aerosol fields. Comparisons with CALIOP, AERONET, and surface chemical and aerosol networks provide unbiased confirmation, while column and vertical loadings provide additional feedback. There are three significant results. First, there is a reduction in sources over existing urban areas in East Asia. Second, there is an increase in sources over new urban areas in South, South East, and East Asia. Third, there is an increase in fire sources in South and South East Asia. There are other initial findings relevant to the global tropics, which have not been as deeply investigated. The results improve the model match with both the mean and the variation, which is essential if we hope to understand seasonal extremes. The results also quantify the impacts of both local and long-range sources. This is of extreme urgency, in particular in developing nations, where there are considerable contributions from long-range or otherwise unknown sources that impact hundreds of millions of people throughout Asia. It is hoped that the approach provided here can help us make critical decisions about total sources, as well as point out the many outstanding scientific and analytical issues that remain to be addressed.

  7. Visible and near-infrared laser radiation in a biological tissue. A forward model for medical imaging by optical tomography.

    PubMed

    Trabelsi, H; Gantri, M; Sediki, E

    2010-01-01

    We present a numerical model for the study of a general, two-dimensional, time-dependent, laser radiation transfer problem in a biological tissue. The model is suitable for many situations, especially when the external laser source is pulsed or continuous. We used a control volume discrete-ordinate method associated with an implicit, three-level, second-order, time-differencing scheme. In medical imaging by laser techniques, this could be an optical tomography forward model. We considered a very thin rectangular biological tissue-like medium subjected to a visible or a near-infrared laser source. Different cases were treated numerically. The source was assumed to be monochromatic and collimated. We used either a continuous source or a short-pulsed source. The transmitted radiance was computed at detector points on the boundaries. Also, the distribution of the internal radiation intensity at different instants is presented. According to the source type, we examined either the steady-state response or the transient response of the medium. First, our model was validated by experimental results from the literature for a homogeneous biological tissue. The space and angular grid independence of our results is shown. Next, the proposed model was used to study changes in transmitted radiation for a homogeneous background medium in which two heterogeneous objects were embedded. As a last investigation, we studied a multilayered biological tissue. We simulated near-infrared radiation in human skin, fat and muscle. Some results concerning the effects of fat thickness and of the detector and source positions on the reflected radiation are presented.
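
    The governing equation behind such a forward model is the time-dependent radiative transfer equation (RTE), which the control-volume discrete-ordinate method discretizes in space and angle. In a standard form (generic notation; the paper's exact formulation is not reproduced here):

    \[
    \frac{1}{c}\frac{\partial I}{\partial t} + \hat{\mathbf{s}}\cdot\nabla I + (\mu_a + \mu_s)\,I
    = \frac{\mu_s}{4\pi}\int_{4\pi} I(\mathbf{r},\hat{\mathbf{s}}',t)\,\Phi(\hat{\mathbf{s}}'\cdot\hat{\mathbf{s}})\,d\Omega' + S,
    \]

    where I(r, ŝ, t) is the radiance, μ_a and μ_s the absorption and scattering coefficients, Φ the scattering phase function (commonly Henyey-Greenstein in tissue), and S the collimated laser source term; the discrete-ordinate method replaces the angular integral with a quadrature sum over a fixed set of directions.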

  8. Source term model evaluations for the low-level waste facility performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  9. Simulation of the Tsunami Resulting from the M 9.2 2004 Sumatra-Andaman Earthquake - Dynamic Rupture vs. Seismic Inversion Source Model

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Behrens, Jörn

    2017-04-01

    Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Other data, e.g. from ocean buoys, are sometimes also included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time it allows for high local resolution and geometric accuracy. The results are compared to measured data and results using earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.
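
    The abstract does not spell out the tsunami model's equations, but RKDG tsunami codes with wetting and drying of this type typically solve the nonlinear shallow water equations, with the initial sea-surface displacement taken from the (dynamic-rupture or inversion-based) seafloor deformation:

    \[
    \partial_t h + \nabla\cdot(h\mathbf{u}) = 0, \qquad
    \partial_t(h\mathbf{u}) + \nabla\cdot\!\left(h\mathbf{u}\otimes\mathbf{u} + \tfrac{1}{2}gh^2\mathbf{I}\right) = -g\,h\,\nabla b,
    \]

    where h is the water depth, u the depth-averaged velocity, g gravity, and b the (co-seismically displaced) bathymetry.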

  10. Development and validation of a combined phased acoustical radiosity and image source model for predicting sound fields in rooms.

    PubMed

    Marbjerg, Gerd; Brunskog, Jonas; Jeong, Cheol-Ho; Nilsson, Erling

    2015-09-01

    A model, combining acoustical radiosity and the image source method, including phase shifts on reflection, has been developed. The model is denoted Phased Acoustical Radiosity and Image Source Method (PARISM), and it was developed to model both specular and diffuse reflections with complex-valued and angle-dependent boundary conditions. This paper mainly describes the combination of the two models and the implementation of the angle-dependent boundary conditions. It furthermore describes how a pressure impulse response is obtained from the energy-based acoustical radiosity by regarding the model as being stochastic. Three methods of implementation are proposed and investigated, and finally, recommendations are made for their use. Validation of the image source method is done by comparison with finite element simulations of a rectangular room with a porous absorber ceiling. Results from the full model are compared with results from other simulation tools and with measurements. The comparisons of the full model are done for real-valued and angle-independent surface properties. The proposed model agrees well with both the measured results and the alternative theories, and furthermore shows a more realistic spatial variation than energy-based methods because interference is considered.
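
    The image source half of such a model is compact enough to sketch. The toy below assumes a rectangular room and a single, uniform, complex wall reflection coefficient, so it captures only the phased specular part; PARISM itself uses complex, angle-dependent coefficients and adds acoustical radiosity for the diffuse part:

    ```python
    import numpy as np
    from itertools import product

    def image_source_response(src, rec, room, R, freqs, order=2):
        """Pressure frequency response via the image source method.

        src, rec : (x, y, z) source / receiver positions in m (must differ)
        room     : (Lx, Ly, Lz) room dimensions in m
        R        : complex wall reflection coefficient (uniform here for brevity)
        """
        c = 343.0
        k = 2j * np.pi * np.asarray(freqs, dtype=float) / c
        p = np.zeros(len(freqs), dtype=complex)
        for m in product(range(-order, order + 1), repeat=3):
            for q in product((0, 1), repeat=3):
                # mirrored source coordinates (Allen-Berkley indexing)
                img = [(1 - 2 * qi) * s + 2 * mi * L
                       for qi, mi, s, L in zip(q, m, src, room)]
                d = float(np.linalg.norm(np.subtract(rec, img)))
                # number of wall reflections for this image
                n_refl = sum(abs(mi - qi) + abs(mi) for mi, qi in zip(m, q))
                p += (R ** n_refl) * np.exp(-k * d) / (4 * np.pi * d)
        return p
    ```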

  11. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.
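
    The inverse localization in such studies typically scans trial dipole locations and keeps, at each one, the best linear fit of the dipole moment; the residual of that fit is the quantity whose hypersurface exhibits the local minima mentioned above. A minimal sketch (names hypothetical; the lead-field matrix would come from the four-sphere or realistic head model):

    ```python
    import numpy as np

    def relative_residual(V, L):
        """Residual of the best-fitting dipole moment at one trial location.

        V : (n_electrodes,) measured scalp potentials
        L : (n_electrodes, 3) lead-field matrix for the trial dipole location
        """
        m, *_ = np.linalg.lstsq(L, V, rcond=None)   # moment enters linearly
        return np.linalg.norm(V - L @ m) / np.linalg.norm(V)

    # Because the residual hypersurface has local minima, the trial locations
    # should be scanned globally (e.g., a grid over the head volume) rather
    # than descended from a single starting guess.
    ```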

  12. Petroleum generation and migration in the Mesopotamian Basin and Zagros fold belt of Iraq: Results from a basin-modeling study

    USGS Publications Warehouse

    Pitman, Janet K.; Steinshouer, D.; Lewan, M.D.

    2004-01-01

    A regional 3-D total petroleum-system model was developed to evaluate petroleum generation and migration histories in the Mesopotamian Basin and Zagros fold belt in Iraq. The modeling was undertaken in conjunction with Middle East petroleum assessment studies conducted by the USGS. Regional structure maps, isopach and facies maps, and thermal maturity data were used as input to the model. The oil-generation potential of Jurassic source rocks, the principal known source of the petroleum in Jurassic, Cretaceous, and Tertiary reservoirs in these regions, was modeled using hydrous pyrolysis (Type II-S) kerogen kinetics. Results showed that oil generation in source rocks commenced in the Late Cretaceous in intrashelf basins, peak expulsion took place in the late Miocene and Pliocene when these depocenters had expanded along the Zagros foredeep trend, and generation ended in the Holocene when deposition in the foredeep ceased. The model indicates that, at present, the majority of Jurassic source rocks in Iraq have reached or exceeded peak oil generation and most rocks have completed oil generation and expulsion. Flow-path simulations demonstrate that virtually all oil and gas fields in the Mesopotamian Basin and Zagros fold belt overlie mature Jurassic source rocks (vertical migration dominated) and are situated on, or close to, modeled migration pathways. Fields closest to modeled pathways associated with source rocks in local intrashelf basins were charged earliest from Late Cretaceous through the middle Miocene, and other fields filled later when compression-related traps were being formed. Model results confirm petroleum migration along major, northwest-trending folds and faults, and oil migration loss at the surface.

  13. [Nitrogen non-point source pollution identification based on ArcSWAT in Changle River].

    PubMed

    Deng, Ou-Ping; Sun, Si-Yang; Lü, Jun

    2013-04-01

    The ArcSWAT (Soil and Water Assessment Tool) model was adopted for non-point source (NPS) nitrogen pollution modeling and nitrogen source apportionment for the Changle River watershed, a typical agricultural watershed in Southeast China. Water quality and hydrological parameters were monitored, and the watershed's natural conditions (including soil, climate, land use, etc.) and pollution source information were also investigated and collected for the SWAT database. The ArcSWAT model was established for the Changle River watershed after calibration and validation of the model parameters. Based on the validated SWAT model, the contributions of different nitrogen sources to river TN loading were quantified, and the spatio-temporal distributions of NPS nitrogen export to rivers were addressed. The results showed that in the Changle River watershed, nitrogen fertilizer, atmospheric nitrogen deposition and the soil nitrogen pool were the predominant pollution sources, contributing 35%, 32% and 25% of the river TN loading, respectively. There were spatio-temporal variations in the critical sources of NPS TN export to the river. Natural sources, such as the soil nitrogen pool and atmospheric nitrogen deposition, should be targeted as the critical sources of river TN pollution during the rainy seasons. Chemical nitrogen fertilizer application should be targeted as the critical source of river TN pollution during the crop growing season. Chemical nitrogen fertilizer application, the soil nitrogen pool and atmospheric nitrogen deposition were the main sources of TN exported from garden plots, forest and residential land, respectively, and together they were the main sources of TN exported from both upland and paddy fields. These results revealed that NPS pollution control measures should focus on the spatio-temporal distribution of NPS pollution sources.

  14. Strong ground motion simulation of the 2016 Kumamoto earthquake of April 16 using multiple point sources

    NASA Astrophysics Data System (ADS)

    Nagasaka, Yosuke; Nozu, Atsushi

    2017-02-01

    The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with M_JMA 7.3. Many strong motions were recorded at stations around the source region. Some records were considered to be affected by the rupture directivity effect. This earthquake was suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. It was found that the synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to the following two reasons. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second reason is that the current version of the pseudo point-source model does not consider the rupture directivity effect. Consequently, strong pulses were not sufficiently reproduced at stations northeast of Subevent 3, such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most of the other stations. This result indicates the necessity of improving the pseudo point-source model, for example by introducing an azimuth-dependent corner frequency, so that it can incorporate the effect of rupture directivity.
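
    The spectral combination described above can be written schematically as a product of source, path, and site factors; in generic point-source notation (not necessarily the authors' exact parameterization), the Fourier acceleration amplitude from one subevent is:

    \[
    |A(f)| \;=\; \underbrace{\frac{R_{\theta\varphi}\,M_0}{4\pi\rho\beta^3}\,
    \frac{(2\pi f)^2}{1+(f/f_c)^2}}_{\text{omega-square source}}
    \;\times\; \underbrace{\frac{e^{-\pi f r/(Q\beta)}}{r}}_{\text{path}}
    \;\times\; \underbrace{G(f)}_{\text{empirical site amplification}},
    \]

    with ρ the density, β the shear-wave speed, R_θφ the radiation pattern, r the hypocentral distance, and Q the attenuation factor; the phase is supplied by an empirical phase model, and one such term is summed per subevent. An azimuth-dependent corner frequency f_c(φ) is the proposed route to directivity.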

  15. The spectra of ten galactic X-ray sources in the southern sky

    NASA Technical Reports Server (NTRS)

    Cruddace, R.; Bowyer, S.; Lampton, M.; Mack, J. E., Jr.; Margon, B.

    1971-01-01

    Data on ten galactic X-ray sources were obtained during a rocket flight from Brazil in June 1969. Detailed spectra of these sources have been compared with bremsstrahlung, black body, and power law models, each including interstellar absorption. Six of the sources were fitted well by one or more of these models. In only one case were the data sufficient to distinguish the best model. Three of the sources were not fitted by any of the models, which suggests that more complex emission mechanisms are applicable. A comparison of our results with those of previous investigations provides evidence that five of the sources vary in intensity by a factor of 2 or more, and that three have variable spectra. New or substantially improved positions have been derived for four of the sources observed.

  16. Assessment of source-specific health effects associated with an unknown number of major sources of multiple air pollutants: a unified Bayesian approach.

    PubMed

    Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H

    2014-07-01

    There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty with achieving this goal is that the pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they need to be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions) as well as uncertainty in the number of major pollution sources and identifiability conditions have been largely ignored in previous studies. A multipollutant approach that can deal with model uncertainty in multivariate receptor models while simultaneously accounting for parameter uncertainty in estimated source-specific exposures in assessment of source-specific health effects is presented in this paper. The methods are applied to daily ambient air measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions along with their uncertainties and associated health effects estimates but also estimates of model uncertainty (posterior model probabilities) that have been ignored in previous studies. The results from our methods agreed in general with those from the previously conducted workshop/studies on the source apportionment of PM health effects in terms of the number of major contributing sources, estimated source profiles, and contributions. However, some of the adverse source-specific health effects identified in the previous studies were not statistically significant in our analysis, probably because we incorporated into the estimation of the health-effects parameters the uncertainty in estimated source contributions that previous studies had ignored.
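
    The multivariate receptor model at the core of this approach is a bilinear decomposition of the ambient data; in generic notation:

    \[
    x_{ij} = \sum_{k=1}^{q} g_{ik}\, f_{kj} + \varepsilon_{ij},
    \qquad g_{ik} \ge 0,\; f_{kj} \ge 0,
    \]

    where x_ij is the concentration of species j in sample i, g_ik the contribution of source k, and f_kj its profile. The Bayesian treatment places priors over the number of sources q and the identifiability conditions, so posterior model probabilities and the posterior spread of the g_ik can be propagated into the health-effects regression.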

  17. Gauging Through the Crowd: A Crowd-Sourcing Approach to Urban Rainfall Measurement and Storm Water Modeling Implications

    NASA Astrophysics Data System (ADS)

    Yang, Pan; Ng, Tze Ling

    2017-11-01

    Accurate rainfall measurement at high spatial and temporal resolutions is critical for the modeling and management of urban storm water. In this study, we conduct computer simulation experiments to test the potential of a crowd-sourcing approach, where smartphones, surveillance cameras, and other devices act as precipitation sensors, as an alternative to the traditional approach of using rain gauges to monitor urban rainfall. The crowd-sourcing approach is promising as it has the potential to provide high-density measurements, albeit with relatively large individual errors. We explore the potential of this approach for urban rainfall monitoring and the subsequent implications for storm water modeling through a series of simulation experiments involving synthetically generated crowd-sourced rainfall data and a storm water model. The results show that even under conservative assumptions, crowd-sourced rainfall data lead to more accurate modeling of storm water flows as compared to rain gauge data. We observe the relative superiority of the crowd-sourcing approach to vary depending on crowd participation rate, measurement accuracy, drainage area, choice of performance statistic, and crowd-sourced observation type. A possible reason for our findings is the differences between the error structures of crowd-sourced and rain gauge rainfall fields resulting from the differences between the errors and densities of the raw measurement data underlying the two field types.

  18. Prediction of the Acoustic Field Associated with Instability Wave Source Model for a Compressible Jet

    NASA Technical Reports Server (NTRS)

    Golubev, Vladimir; Mankbadi, Reda R.; Dahl, Milo D.; Kiraly, L. James (Technical Monitor)

    2002-01-01

    This paper provides preliminary results of the study of the acoustic radiation from the source model representing spatially-growing instability waves in a round jet at high speeds. The source model is briefly discussed first followed by the analysis of the produced acoustic directivity pattern. Two integral surface techniques are discussed and compared for prediction of the jet acoustic radiation field.

  19. Modeling of single event transients with dual double-exponential current sources: Implications for logic cell characterization

    DOE PAGES

    Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...

    2015-08-07

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Thus, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
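
    The waveform model itself is compact enough to state directly. The sketch below uses illustrative parameter values only; in the paper the per-cell parameters are extracted from circuit simulations as functions of node capacitance and drive strength:

    ```python
    import numpy as np

    def double_exp(t, i_peak, tau_rise, tau_fall):
        """One double-exponential current pulse (A); zero before t = 0."""
        t = np.asarray(t)
        return np.where(t >= 0.0,
                        i_peak * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise)),
                        0.0)

    t = np.linspace(0.0, 2e-9, 2001)                      # 2 ns window
    # Prompt (fast) component plus a delayed (slower) component in parallel;
    # peak currents and time constants are illustrative, not from the paper.
    i_set = (double_exp(t, 1.2e-3, 5e-12, 50e-12)
             + double_exp(t, 0.3e-3, 50e-12, 400e-12))

    dq = 0.5 * (i_set[1:] + i_set[:-1]) * np.diff(t)      # trapezoidal integral
    print(f"collected charge = {dq.sum() * 1e15:.1f} fC")
    ```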

  20. Role of the source to building lateral separation distance in petroleum vapor intrusion.

    PubMed

    Verginelli, Iason; Capobianco, Oriana; Baciocchi, Renato

    2016-06-01

    The adoption of source to building separation distances to screen sites that need further field investigation is becoming a common practice for the evaluation of the vapor intrusion pathway at sites contaminated by petroleum hydrocarbons. Namely, for the source to building vertical distance, the screening criteria for petroleum vapor intrusion have been deeply investigated in the recent literature and fully addressed in the recent guidelines issued by ITRC and U.S.EPA. Conversely, due to the lack of field and modeling studies, the source to building lateral distance has received relatively little attention. To address this issue, in this work we present a steady-state vapor intrusion analytical model incorporating a piecewise first-order aerobic biodegradation limited by oxygen availability that accounts for lateral source to building separation. The developed model can be used to evaluate the role and relevance of lateral vapor attenuation as well as to provide a site-specific assessment of the lateral screening distances needed to attenuate vapor concentrations to risk-based values. The simulation outcomes were consistent with field data and 3-D numerical modeling results reported in previous studies and, for shallow sources, with the screening criteria recommended by U.S.EPA for the vertical separation distance. Indeed, although petroleum vapors can cover maximum lateral distances of up to 25-30 m, as highlighted by the comparison of model outputs with field evidence of vapor migration in the subsurface, simulation results from this new model indicated that, regardless of the source concentration and depth, 6 m and 7 m lateral distances are sufficient to attenuate petroleum vapors below risk-based values for groundwater and soil sources, respectively. However, for deep sources (>5 m) and for low to moderate source concentrations (benzene concentrations lower than 5 mg/L in groundwater and 0.5 mg/kg in soil) the above criteria were found to be extremely conservative, as the model results indicated that for such scenarios the lateral screening distance may be set equal to zero.
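
    A schematic one-dimensional diffusion-reaction balance shows why a finite lateral screening distance emerges at all; this is a simplification with uniform first-order decay, whereas the paper's model is piecewise and oxygen-limited:

    \[
    D_{\mathrm{eff}}\frac{d^2C}{dx^2} - \lambda C = 0
    \quad\Rightarrow\quad
    C(L) = C_{\mathrm{source}}\,e^{-L\sqrt{\lambda/D_{\mathrm{eff}}}},
    \]

    so attenuation grows exponentially with the lateral separation L once aerobic biodegradation (rate λ, effective vapor diffusivity D_eff) acts along the path, which is consistent with modest 6-7 m distances sufficing for most sources.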

  1. An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.

    PubMed

    Singh, Parth Raj; Wang, Yide; Chargé, Pascal

    2017-03-30

    In this paper, we propose an exact model-based method for near-field sources localization with a bistatic multiple input, multiple output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of an approximated model in most existing near-field sources localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the approximated model-based method. The simulation results demonstrate the performance of the proposed method.

  2. Meteorological and air pollution modeling for an urban airport

    NASA Technical Reports Server (NTRS)

    Swan, P. R.; Lee, I. Y.

    1980-01-01

    Results are presented of numerical experiments modeling meteorology, multiple pollutant sources, and nonlinear photochemical reactions for the case of an airport in a large urban area with complex terrain. A planetary boundary-layer model which predicts the mixing depth and generates wind, moisture, and temperature fields was used; it utilizes only surface and synoptic boundary conditions as input data. A version of the Hecht-Seinfeld-Dodge chemical kinetics model is integrated with a new, rapid numerical technique; both the San Francisco Bay Area Air Quality Management District source inventory and the San Jose Airport aircraft inventory are utilized. The air quality model results are presented in contour plots; the combined results illustrate that the highly nonlinear interactions which are present require that the chemistry and meteorology be considered simultaneously to make a valid assessment of the effects of individual sources on regional air quality.

  3. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    NASA Astrophysics Data System (ADS)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and goodness-of-fit (GOF) values above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain-size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
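
    At its core, the mixing model used in such fingerprinting studies is a constrained least-squares unmixing. A minimal sketch under that assumption (array shapes and the relative-error objective are illustrative; the study's own model may weight tracers differently):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def unmix(sources, mixture):
        """Estimate source proportions p (p >= 0, sum(p) = 1) from tracer data.

        sources : (n_sources, n_tracers) mean tracer concentrations per source
        mixture : (n_tracers,) tracer concentrations of the sediment mixture
        """
        n = sources.shape[0]
        # minimize the sum of squared relative errors of the modeled mixture
        obj = lambda p: np.sum(((mixture - p @ sources) / mixture) ** 2)
        res = minimize(obj, np.full(n, 1.0 / n),
                       bounds=[(0.0, 1.0)] * n,
                       constraints=[{"type": "eq",
                                     "fun": lambda p: p.sum() - 1.0}])
        return res.x

    # A goodness-of-fit statistic of the kind reported above can then be
    # computed as GOF = 1 - mean(|modeled - measured| / measured).
    ```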

  4. Modeling Nutrient Loading to Watersheds in the Great Lakes Basin: A Detailed Source Model at the Regional Scale

    NASA Astrophysics Data System (ADS)

    Luscz, E.; Kendall, A. D.; Martin, S. L.; Hyndman, D. W.

    2011-12-01

    Watershed nutrient loading models are important tools used to address issues including eutrophication, harmful algal blooms, and decreases in aquatic species diversity. Such approaches have been developed to assess the level and source of nutrient loading across a wide range of scales, yet there is typically a tradeoff between the scale of the model and the level of detail regarding the individual sources of nutrients. To avoid this tradeoff, we developed a detailed source nutrient loading model for every watershed in Michigan's lower peninsula. Sources considered include atmospheric deposition, septic tanks, waste water treatment plants, combined sewer overflows, animal waste from confined animal feeding operations and pastured animals, as well as fertilizer from agricultural, residential, and commercial sources, and industrial effluents. Each source is related to readily-available GIS inputs that may vary through time. This loading model was used to assess the importance of sources and landscape factors in nutrient loading rates to watersheds, and how these have changed in recent decades. The results showed the value of detailed source inputs, revealing regional trends while still providing insight into the variability present at smaller scales.

  5. Self-consistent multidimensional electron kinetic model for inductively coupled plasma sources

    NASA Astrophysics Data System (ADS)

    Dai, Fa Foster

    Inductively coupled plasma (ICP) sources have received increasing interest in microelectronics fabrication and the lighting industry. In 2-D configuration space (r, z) and the 2-D velocity domain (ν_θ, ν_z), a self-consistent electron kinetic analytic model is developed for various ICP sources. The electromagnetic (EM) model is established based on modal analysis, while the kinetic analysis gives the perturbed Maxwellian distribution of electrons by solving the Boltzmann-Vlasov equation. The self-consistent algorithm combines the EM model and the kinetic analysis by updating their results consistently until the solution converges. The closed-form solutions in the analytical model provide rigorous and fast computation of the EM fields and the electron kinetic behavior. The kinetic analysis shows that the RF energy in an ICP source is extracted by a collisionless dissipation mechanism if the electron thermal velocity is close to the RF phase velocities. A criterion for collisionless damping is thus given based on the analytic solutions. To achieve uniformly distributed plasma for plasma processing, we propose a novel discharge structure with both planar and vertical coil excitations. The theoretical results demonstrate improved uniformity for the excited azimuthal E-field in the chamber. Non-monotonic spatial decay in electric field and space current distributions was recently observed in weakly-collisional plasmas. The anomalous skin effect is found to be responsible for this phenomenon. The proposed model successfully reproduces the non-monotonic spatial decay effect and achieves good agreement with the measurements for different applied RF powers. The proposed analytical model is compared with other theoretical models and different experimental measurements. The developed model is also applied to two kinds of ICP discharges used for electrodeless light sources. One structure uses a vertical internal coil antenna to excite plasmas and another has a metal shield to prevent electromagnetic radiation. The theoretical results delivered by the proposed model agree quite well with the experimental measurements in many aspects. Therefore, the proposed self-consistent model provides an efficient and reliable means for designing ICP sources in various applications such as VLSI fabrication and electrodeless light sources.

  6. Explanation of temporal clustering of tsunami sources using the epidemic-type aftershock sequence model

    USGS Publications Warehouse

    Geist, Eric L.

    2014-01-01

    Temporal clustering of tsunami sources is examined in terms of a branching process model. It was previously observed that there are more short interevent times between consecutive tsunami sources than expected from a stationary Poisson process. The epidemic-type aftershock sequence (ETAS) branching process model is fitted to tsunami catalog events, using the earthquake magnitude of the causative event from the Centennial and Global Centroid Moment Tensor (CMT) catalogs and tsunami sizes above a completeness level as a mark to indicate that a tsunami was generated. The ETAS parameters are estimated using the maximum-likelihood method. The interevent distribution associated with the ETAS model provides a better fit to the data than the Poisson model or other temporal clustering models. When tsunamigenic conditions (magnitude threshold, submarine location, dip-slip mechanism) are applied to the Global CMT catalog, ETAS parameters are obtained that are consistent with those estimated from the tsunami catalog. In particular, the dip-slip condition appears to result in a near zero magnitude effect for triggered tsunami sources. The overall consistency between results from the tsunami catalog and those from the earthquake catalog under tsunamigenic conditions indicates that ETAS models based on seismicity can provide the structure for understanding patterns of tsunami source occurrence. The fractional rate of triggered tsunami sources on a global basis is approximately 14%.
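
    For reference, the ETAS conditional intensity fitted here has the standard form:

    \[
    \lambda(t) = \mu + \sum_{i:\,t_i<t} \frac{K\,e^{\alpha(m_i-m_0)}}{(t-t_i+c)^p},
    \]

    where μ is the background (Poisson) rate, m_i the magnitude of the event at time t_i, m_0 the threshold magnitude, and K, α, c, p the triggering parameters estimated by maximum likelihood. In this notation, the near zero magnitude effect found for the dip-slip condition corresponds to α ≈ 0, i.e., the rate of triggered tsunami sources depends little on the magnitude of the triggering event.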

  7. Combined analysis of modeled and monitored SO2 concentrations at a complex smelting facility.

    PubMed

    Rehbein, Peter J G; Kennedy, Michael G; Cotsman, David J; Campeau, Madonna A; Greenfield, Monika M; Annett, Melissa A; Lepage, Mike F

    2014-03-01

    Vale Canada Limited owns and operates a large nickel smelting facility located in Sudbury, Ontario. This is a complex facility with many sources of SO2 emissions, including a mix of source types ranging from passive building roof vents to North America's tallest stack. In addition, as this facility performs batch operations, there is significant variability in the emission rates depending on the operations that are occurring. Although SO2 emission rates for many of the sources have been measured by source testing, the reliability of these emission rates has not been tested from a dispersion modeling perspective. This facility is a significant source of SO2 in the local region, making it critical that, when modeling the emissions from this facility for regulatory or other purposes, the resulting concentrations are representative of what would actually be measured or otherwise observed. To assess the accuracy of the modeling, a detailed analysis of modeled and monitored data for SO2 at the facility was performed. A mobile SO2 monitor sampled at five locations downwind of different source groups for different wind directions, resulting in a total of 168 hr of valid data that could be used for the modeled-to-monitored results comparison. The facility was modeled in AERMOD (American Meteorological Society/U.S. Environmental Protection Agency Regulatory Model) using site-specific meteorological data such that the modeled periods coincided with the same times as the monitored events. In addition, great effort was invested into estimating the actual SO2 emission rates that would likely be occurring during each of the monitoring events. SO2 concentrations were modeled for receptors around each monitoring location so that the modeled data could be directly compared with the monitored data. The modeled and monitored concentrations were compared and showed that there were no systematic biases in the modeled concentrations. This paper is a case study of a Combined Analysis of Modelled and Monitored Data (CAMM), which is an approach promulgated within air quality regulations in the Province of Ontario, Canada. Although combining dispersion models and monitoring data to estimate or refine estimates of source emission rates is not a new technique, this study shows how, with a high degree of rigor in the design of the monitoring and filtering of the data, it can be applied to a large industrial facility with a variety of emission sources. The comparison of modeled and monitored SO2 concentrations in this case study also provides an illustration of the AERMOD model performance for a large industrial complex with many sources, at short time scales in comparison with monitored data. Overall, this analysis demonstrated that the AERMOD model performed well.

  8. Source apportionment of indoor air pollution

    NASA Astrophysics Data System (ADS)

    Sexton, Ken; Hayward, Steven B.

    An understanding of the relative contributions from important pollutant sources to human exposures is necessary for the design and implementation of effective control strategies. In the past, societal efforts to control air pollution have focused almost exclusively on the outdoor (ambient) environment. As a result, substantial amounts of time and money have been spent to limit airborne discharges from mobile and stationary sources. Yet it is now recognized that exposures to elevated pollutant concentrations often occur as a result of indoor, rather than outdoor, emissions. While the major indoor sources have been identified, their relative impacts on indoor air quality have not been well defined. Application of existing source apportionment models to nonindustrial indoor environments is only just beginning. It is possible that these models might be used to distinguish between indoor and outdoor emissions, as well as to distinguish among indoor sources themselves. However, before the feasibility and suitability of source-apportionment methods for indoor applications can be assessed adequately, it is necessary to take account of model assumptions and associated data requirements. This paper examines the issue of indoor source apportionment and reviews the need for emission characterization studies to support such source-apportionment efforts.

  9. Geospatial Analysis of Atmospheric Haze Effect by Source and Sink Landscape

    NASA Astrophysics Data System (ADS)

    Yu, T.; Xu, K.; Yuan, Z.

    2017-09-01

    Based on a geospatial analysis model, this paper analyzes the relationship between source and sink landscape patterns in urban areas and atmospheric haze pollution. First, the land-cover classification result and aerosol optical thickness (AOD) of Wuhan are divided into square grid cells with a side length of 6 km, and the category-level landscape indices (PLAND, PD, COHESION, LPI, FRAC_MN) and AOD of each cell are calculated. The source and sink landscapes of atmospheric haze pollution are then selected based on the correlation between the landscape indices and AOD. Next, to make the subsequent analysis more efficient, the selected indices are screened using the correlation coefficients between them. Finally, because of the spatial dependency and spatial heterogeneity of the data used in this paper, spatial autoregressive models and a geographically weighted regression (GWR) model are used to analyze the haze effect of the source and sink landscapes at the global and local levels. The results show that the source landscape of atmospheric haze pollution is built-up land, and the sink landscapes are shrub and woodland. PLAND, PD and COHESION are suitable for describing the haze effect of the source and sink landscapes. Comparing the models, the fits of the SLM, SEM and GWR are significantly better than that of the OLS model, and the SLM is superior to the SEM in this study. Although the GWR model fits less well than the SLM, it expresses more clearly how strongly the influencing factors affect atmospheric haze in different locations. From the results of these models, the following conclusions can be drawn: reducing the proportion of source landscape area and increasing its degree of fragmentation could reduce aerosol optical thickness; distributing the source and sink landscapes evenly and interspersedly could effectively reduce aerosol optical thickness, which represents atmospheric haze pollution; and for Wuhan, slightly adjusting the built-up area and planning the non-built-up areas reasonably can reduce atmospheric haze pollution.
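
    The regression models compared here have the standard forms (W is the spatial weights matrix; notation generic):

    \[
    \text{SLM:}\;\; \mathbf{y} = \rho W\mathbf{y} + X\boldsymbol\beta + \boldsymbol\varepsilon,
    \qquad
    \text{SEM:}\;\; \mathbf{y} = X\boldsymbol\beta + \mathbf{u},\;\; \mathbf{u} = \lambda W\mathbf{u} + \boldsymbol\varepsilon,
    \]
    \[
    \text{GWR:}\;\; y_i = \beta_0(u_i,v_i) + \sum_k \beta_k(u_i,v_i)\,x_{ik} + \varepsilon_i,
    \]

    where the SLM places the spatial dependence in the response itself, the SEM places it in the error term, and in the GWR the coefficients vary with location (u_i, v_i) and are estimated by kernel-weighted least squares, which is what lets it map how factor influence varies across the city.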

  10. Factors affecting stream nutrient loads: A synthesis of regional SPARROW model results for the continental United States

    USGS Publications Warehouse

    Preston, Stephen D.; Alexander, Richard B.; Schwarz, Gregory E.; Crawford, Charles G.

    2011-01-01

    We compared the results of 12 recently calibrated regional SPARROW (SPAtially Referenced Regressions On Watershed attributes) models covering most of the continental United States to evaluate the consistency and regional differences in factors affecting stream nutrient loads. The models - 6 for total nitrogen and 6 for total phosphorus - all provide similar levels of prediction accuracy, but those for major river basins in the eastern half of the country were somewhat more accurate. The models simulate long-term mean annual stream nutrient loads as a function of a wide range of known sources and climatic (precipitation, temperature), landscape (e.g., soils, geology), and aquatic factors affecting nutrient fate and transport. The results confirm the dominant effects of urban and agricultural sources on stream nutrient loads nationally and regionally, but reveal considerable spatial variability in the specific types of sources that control water quality. These include regional differences in the relative importance of different types of urban (municipal and industrial point vs. diffuse urban runoff) and agriculture (crop cultivation vs. animal waste) sources, as well as the effects of atmospheric deposition, mining, and background (e.g., soil phosphorus) sources on stream nutrients. Overall, we found that the SPARROW model results provide a consistent set of information for identifying the major sources and environmental factors affecting nutrient fate and transport in United States watersheds at regional and subregional scales.

  11. Simulations of negative hydrogen ion sources

    NASA Astrophysics Data System (ADS)

    Demerdjiev, A.; Goutev, N.; Tonev, D.

    2018-05-01

    The development and optimisation of negative hydrogen/deuterium ion sources goes hand in hand with modelling. This paper gives a brief introduction to the physics and types of the different sources and to the kinetic and fluid theories of plasma description. Examples of some recent models are considered, while the main emphasis is on the model behind the concept and design of a matrix source of negative hydrogen ions. At the Institute for Nuclear Research and Nuclear Energy of the Bulgarian Academy of Sciences a new cyclotron center is under construction, which opens new opportunities for research. One of them is the development of plasma sources for additional proton beam acceleration. We have applied the modelling technique implemented in the aforementioned model of the matrix source to a microwave plasma source, exemplified by a plasma-filled array of cavities made of a dielectric material with high permittivity. Preliminary results for the distribution of the plasma parameters and the φ component of the electric field in the plasma are obtained.

  12. The timing and sources of information for the adoption and implementation of production innovations

    NASA Technical Reports Server (NTRS)

    Ettlie, J. E.

    1976-01-01

    Two dimensions (personal-impersonal and internal-external) are used to characterize information sources as they become important during the interorganizational transfer of production innovations. The results of three studies are reviewed for the purpose of deriving a model of the timing and importance of different information sources and the utilization of new technology. Based on the findings of two retrospective studies, it was concluded that the pattern of information seeking behavior in user organizations during the awareness stage of adoption is not a reliable predictor of the eventual utilization rate. Using the additional findings of a real-time study, an empirical model of the relative importance of information sources for successful user organizations is presented. These results are extended and integrated into a theoretical model consisting of a time-profile of successful implementations and the relative importance of four types of information sources during seven stages of the adoption-implementation process.

  13. Investigation of the potential for long-range transport of mercury to the Everglades using the organic chemistry integrated dispersion (ORCHID) model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, D.S.; Kienzle, M.A.; Ferris, D.C.

    1996-12-31

    The objective of this study is to identify potential long-range sources of mercury within the southeastern United States. Preliminary results of a climatological study using the Short-range Layered Atmospheric Model (SLAM) for transport from a selected source in the southeast U.S. are presented. The potential for long-range transport from Oak Ridge, Tennessee to Florida is discussed. The transport and transformation of mercury during periods of favorable transport to south Florida is modeled using the Organic Chemistry Integrated Dispersion (ORCHID) model, which contains the transport model used in the climatology study. SLAM/ORCHID results indicate the potential for mercury reaching southeast Florida from the source and the atmospheric oxidation of mercury during transport.

  14. An innovative expression model of human health risk based on the quantitative analysis of soil metals sources contribution in different spatial scales.

    PubMed

    Zhang, Yimei; Li, Shuai; Wang, Fei; Chen, Zhuang; Chen, Jie; Wang, Liqun

    2018-09-01

    The toxicity of heavy metals released by industrialization is of critical concern, and analysis of the sources associated with potential human health risks is of unique significance. Assessing the human health risk of pollution sources (factored health risk) concurrently for the whole region and its sub-regions can provide more instructive information for protecting specific potential victims. In this research, we establish a new expression model of human health risk based on quantitative analysis of source contributions at different spatial scales. The larger-scale grids and their spatial codes are used to initially identify the level of pollution risk, the type of pollution source, and the sensitive population at high risk. The smaller-scale grids and their spatial codes are used to identify the contribution of various pollution sources to each sub-region (larger grid) and to assess the health risks posed by each source in each sub-region. The results of the case study show that, for children (a sensitive population whose major areas of activity are schools and residential areas), the major pollution sources are the abandoned lead-acid battery plant (ALP), traffic emissions, and agricultural activity. The new models and results of this research provide effective spatial information and a useful model for quantifying the hazards that source categories pose to human health at complex industrial sites in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Stacked Denoising Autoencoders Applied to Star/Galaxy Classification

    NASA Astrophysics Data System (ADS)

    Qin, Hao-ran; Lin, Ji-ming; Wang, Jun-yi

    2017-04-01

    In recent years, the deep learning algorithm, with its strong adaptability, high accuracy, and structural complexity, has become more and more popular, but it has not yet been used in astronomy. To address the problem that star/galaxy classification accuracy is high for the bright source set but low for the faint source set of the Sloan Digital Sky Survey (SDSS) data, we introduced a deep learning algorithm, the SDA (stacked denoising autoencoder) neural network, together with the dropout fine-tuning technique, which can greatly improve robustness and noise resistance. We randomly selected bright and faint source sets from the SDSS DR12 and DR7 data with spectroscopic measurements and preprocessed them. Then we randomly selected training and testing sets, without replacement, from the bright and faint source sets. Finally, using these training sets, we trained SDA models of the bright and faint sources in SDSS DR7 and DR12, respectively. We compared the test result of the SDA model on the DR12 testing set with the results of the Library for Support Vector Machines (LibSVM), J48 decision tree, Logistic Model Tree (LMT), Support Vector Machine (SVM), Logistic Regression, and Decision Stump algorithms, and compared the test result of the SDA model on the DR7 testing set with the results of six kinds of decision trees. The experiments show that the SDA achieves better classification accuracy than the other machine learning algorithms for the faint source sets of DR7 and DR12. In particular, when completeness is used as the evaluation index, the correctness rate of the SDA improves by about 15% over the decision tree algorithms for the faint source set of SDSS-DR7.
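
    For illustration, a minimal PyTorch sketch of the recipe described (greedy layer-wise pretraining of denoising autoencoders, then supervised fine-tuning of the stacked encoder with dropout) follows. Layer sizes, noise level, and the synthetic stand-in features are assumptions, not the authors' configuration.

        # Sketch: greedy denoising-autoencoder pretraining + dropout fine-tuning.
        import torch
        import torch.nn as nn

        def pretrain_dae(x, in_dim, hid_dim, epochs=50, noise=0.2):
            enc, dec = nn.Linear(in_dim, hid_dim), nn.Linear(hid_dim, in_dim)
            opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
            for _ in range(epochs):
                x_noisy = x + noise * torch.randn_like(x)   # corrupt the input
                loss = nn.functional.mse_loss(dec(torch.sigmoid(enc(x_noisy))), x)
                opt.zero_grad(); loss.backward(); opt.step()
            return enc

        # Toy data standing in for SDSS photometric features (e.g., magnitudes, colors)
        x = torch.randn(1024, 10)
        y = torch.randint(0, 2, (1024,))                    # 0 = star, 1 = galaxy

        enc1 = pretrain_dae(x, 10, 32)
        enc2 = pretrain_dae(torch.sigmoid(enc1(x)).detach(), 32, 16)

        model = nn.Sequential(enc1, nn.Sigmoid(), enc2, nn.Sigmoid(),
                              nn.Dropout(0.5), nn.Linear(16, 2))  # dropout fine-tuning
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(100):
            loss = nn.functional.cross_entropy(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()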

  16. Atmospheric Aerosol Source-Receptor Relationships: The Role of Coal-Fired Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen L. Robinson; Spyros N. Pandis; Cliff I. Davidson

    2005-12-01

    This report describes the technical progress made on the Pittsburgh Air Quality Study (PAQS) during the period March 2005 through August 2005. Significant progress was made this project period on the source characterization, source apportionment, and deterministic modeling activities. This report highlights new data on road dust, vegetative detritus, and motor vehicle emissions. For example, the results show significant differences in the composition of urban and rural road dust. A comparison of the organic composition of the fine particulate matter in the tunnel with ambient samples provides clear evidence of the significant contribution of vehicle emissions to ambient PM. The source profiles developed from this work are being used by the source-receptor modeling activities. The report presents results on the spatial distribution of PMF factors. The results can be grouped into three categories: regional sources, local sources, or potentially both regional and local sources. Examples of the regional sources are the sulfate and selenium PMF factors, which most likely represent coal-fired power plants. Examples of local sources are the specialty steel and lead factors. There is reasonable correspondence between these apportionments and data from the EPA TRI and AIRS emission inventories. Detailed comparisons between PMCAMx predictions and the STN and IMPROVE measurements in the Eastern US are presented. Comparisons were made for the major aerosol components and PM2.5 mass in July 2001, October 2001, January 2002, and April 2002. The results are encouraging, with average fractional biases for most species less than 0.25. The improvement of the model performance during the last two years was mainly due to the comparison of the model predictions with the continuous measurements at the Pittsburgh Supersite. Major improvements have included the descriptions of ammonia emissions (CMU inventory), night-time nitrate chemistry, EC emissions and their diurnal variation, and nitric acid dry removal.
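
    As a rough illustration of the factor-analytic step behind such PMF factors: PMF decomposes a sample-by-species concentration matrix X into non-negative source contributions G and profiles F with uncertainty-weighted residuals. The unweighted NMF below is only a simplified stand-in for that decomposition, run on synthetic data.

        # X ~ G @ F with non-negativity; true PMF also weights residuals by
        # measurement uncertainty, which this stand-in omits.
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        true_G = rng.exponential(1.0, (200, 3))    # 3 sources, 200 samples
        true_F = rng.exponential(1.0, (3, 15))     # 15 measured species
        X = true_G @ true_F + 0.05 * rng.random((200, 15))

        model = NMF(n_components=3, init="nndsvda", max_iter=500)
        G = model.fit_transform(X)                 # estimated source contributions
        F = model.components_                      # estimated source profiles
        print(F.round(2))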

  17. A simple-source model of military jet aircraft noise

    NASA Astrophysics Data System (ADS)

    Morgan, Jessica; Gee, Kent L.; Neilsen, Tracianne; Wall, Alan T.

    2010-10-01

    The jet plumes produced by military jet aircraft radiate significant amounts of noise. A need to better understand the characteristics of the turbulence-induced aeroacoustic sources has motivated the present study. The purpose of the study is to develop a simple-source model of jet noise that can be compared to measured data. The study is based on acoustic data collected near a tied-down F-22 Raptor. The simplest model consisted of adjusting the origin of a monopole above a rigid planar reflector until the locations of the predicted and measured interference nulls matched. The model has since developed into an extended Rayleigh distribution of partially correlated monopoles, which fits the measured F-22 data significantly better. The results and basis for the model match the current prevailing theory that jet noise consists of both correlated and uncorrelated sources. In addition, this simple-source model conforms to the theory that the peak source location moves upstream with increasing frequency and lower engine conditions.
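
    The simplest model described can be reproduced in a few lines: a monopole and its image in a rigid plane interfere, and nulls occur where the direct and reflected path lengths differ by an odd half-wavelength. The geometry and frequency below are illustrative, not the F-22 measurement values.

        # Monopole above a rigid plane: direct ray plus image-source ray.
        import numpy as np

        c = 343.0                      # speed of sound, m/s
        f = 500.0                      # frequency, Hz
        k = 2 * np.pi * f / c
        h, z_mic = 2.0, 1.5            # source and microphone heights, m

        x = np.linspace(1.0, 30.0, 2000)              # downstream mic positions, m
        r1 = np.sqrt(x**2 + (z_mic - h)**2)           # direct path
        r2 = np.sqrt(x**2 + (z_mic + h)**2)           # reflected (image) path
        p = np.exp(1j * k * r1) / r1 + np.exp(1j * k * r2) / r2  # rigid plane: +image

        level = 20 * np.log10(np.abs(p))
        nulls = x[1:-1][(level[1:-1] < level[:-2]) & (level[1:-1] < level[2:])]
        print("interference nulls near x =", nulls.round(2))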

  18. Part 2. Development of Enhanced Statistical Methods for Assessing Health Effects Associated with an Unknown Number of Major Sources of Multiple Air Pollutants.

    PubMed

    Park, Eun Sug; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford

    2015-06-01

    A major difficulty with assessing source-specific health effects is that source-specific exposures cannot be measured directly; rather, they need to be estimated by a source-apportionment method such as multivariate receptor modeling. The uncertainty in source apportionment (uncertainty in source-specific exposure estimates and model uncertainty due to the unknown number of sources and identifiability conditions) has been largely ignored in previous studies. Also, spatial dependence of multipollutant data collected from multiple monitoring sites has not yet been incorporated into multivariate receptor modeling. The objectives of this project are (1) to develop a multipollutant approach that incorporates both sources of uncertainty in source apportionment into the assessment of source-specific health effects and (2) to develop enhanced multivariate receptor models that can account for spatial correlations in the multipollutant data collected from multiple sites. We employed a Bayesian hierarchical modeling framework consisting of multivariate receptor models, health-effects models, and a hierarchical model on latent source contributions. For the health model, we focused on the time-series design in this project. Each combination of number of sources and identifiability conditions (additional constraints on model parameters) defines a different model. We built a set of plausible models with extensive exploratory data analyses and with information from previous studies, and then computed posterior model probability to estimate model uncertainty. Parameter estimation and model uncertainty estimation were implemented simultaneously by Markov chain Monte Carlo (MCMC) methods. We validated the methods using simulated data. We illustrated the methods using PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter) speciation data and mortality data from Phoenix, Arizona, and Houston, Texas. The Phoenix data included counts of cardiovascular deaths and daily PM2.5 speciation data from 1995-1997. The Houston data included respiratory mortality data and 24-hour PM2.5 speciation data sampled every six days from a region near the Houston Ship Channel during 2002-2005. We also developed a Bayesian spatial multivariate receptor modeling approach that, while simultaneously dealing with the unknown number of sources and identifiability conditions, incorporated spatial correlations in the multipollutant data collected from multiple sites into the estimation of source profiles and contributions, based on the discrete process convolution model for multivariate spatial processes. This new modeling approach was applied to 24-hour ambient air concentrations of 17 volatile organic compounds (VOCs) measured at nine monitoring sites in Harris County, Texas, during years 2000 to 2005. Simulation results indicated that our methods were accurate in identifying the true model, and estimated parameters were close to the true values. The results from our methods agreed in general with previous studies on the source apportionment of the Phoenix data in terms of estimated source profiles and contributions. However, we had a greater number of statistically insignificant findings, which was likely a natural consequence of incorporating uncertainty in the estimated source contributions into the health-effects parameter estimation.
For the Houston data, a model with five sources (that seemed to be Sulfate-Rich Secondary Aerosol, Motor Vehicles, Industrial Combustion, Soil/Crustal Matter, and Sea Salt) showed the highest posterior model probability among the candidate models considered when fitted simultaneously to the PM2.5 and mortality data. There was a statistically significant positive association between respiratory mortality and same-day PM2.5 concentrations attributed to one of the sources (probably industrial combustion). The Bayesian spatial multivariate receptor modeling approach applied to the VOC data led to the highest posterior model probability for a model with five sources (that seemed to be refinery, petrochemical production, gasoline evaporation, natural gas, and vehicular exhaust) among several candidate models, with the number of sources varying between three and seven and with different identifiability conditions. Our multipollutant approach to assessing source-specific health effects is more advantageous than a single-pollutant approach in that it can estimate total health effects from multiple pollutants and can also identify emission sources that are responsible for adverse health effects. Our Bayesian approach can incorporate not only uncertainty in the estimated source contributions, but also model uncertainty, which has not been addressed in previous studies on assessing source-specific health effects. The new Bayesian spatial multivariate receptor modeling approach enables predictions of source contributions at unmonitored sites, minimizing exposure misclassification and providing improved exposure estimates along with their uncertainty estimates, as well as accounting for uncertainty in the number of sources and identifiability conditions.
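
    A heavily simplified sketch of the Bayesian ingredient follows: given fixed source profiles F, it samples the posterior of one day's source contributions g from y ~ Normal(Fᵀg, σ²) with a positivity prior, via random-walk Metropolis. The paper's model is hierarchical and jointly samples profiles, health effects, and model uncertainty; none of that is reproduced here, and all data are synthetic.

        # Random-walk Metropolis for non-negative source contributions.
        import numpy as np

        rng = np.random.default_rng(1)
        F = rng.exponential(1.0, (3, 12))       # 3 sources x 12 species (assumed known)
        g_true = np.array([2.0, 0.5, 1.0])
        y = g_true @ F + rng.normal(0, 0.1, 12) # one day's measured concentrations

        def log_post(g, sigma=0.1):
            if np.any(g < 0):                   # positivity prior
                return -np.inf
            return -0.5 * np.sum((y - g @ F) ** 2) / sigma**2

        g, samples = np.ones(3), []
        for it in range(20000):
            prop = g + rng.normal(0, 0.05, 3)   # random-walk proposal
            if np.log(rng.random()) < log_post(prop) - log_post(g):
                g = prop
            if it > 5000:                       # discard burn-in
                samples.append(g)
        print("posterior mean contributions:", np.mean(samples, axis=0).round(2))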

  19. Overview of the Mathematical and Empirical Receptor Models Workshop (Quail Roost II)

    NASA Astrophysics Data System (ADS)

    Stevens, Robert K.; Pace, Thompson G.

    On 14-17 March 1982, the U.S. Environmental Protection Agency sponsored the Mathematical and Empirical Receptor Models Workshop (Quail Roost II) at the Quail Roost Conference Center, Rougemont, NC. Thirty-five scientists were invited to participate. The objective of the workshop was to document and compare results of source apportionment analyses of simulated and real aerosol data sets. The simulated data set was developed by scientists from the National Bureau of Standards. It consisted of elemental mass data generated using a dispersion model that simulated transport of aerosols from a variety of sources to a receptor site. The real data set contained the mass, elemental, and ionic species concentrations of samples obtained in 18 consecutive 12-h sampling periods in Houston, TX. Some participants performed additional analyses of the Houston filters by X-ray powder diffraction, scanning electron microscopy, or light microscopy. Ten groups analyzed these data sets using a variety of modeling procedures. The results of the modeling exercises were evaluated and structured in a manner that permitted model intercomparisons. The major conclusions and recommendations derived from the intercomparisons were: (1) using aerosol elemental composition data, receptor models can resolve major emission sources, but additional analyses (including light microscopy and X-ray diffraction) significantly increase the number of sources that can be resolved; (2) simulated data sets that contain up to 6 dissimilar emission sources need to be generated so that different receptor models can be adequately compared; (3) source apportionment methods need to be modified to incorporate a means of apportioning such aerosol species as sulfate and nitrate formed from SO2 and NO, respectively, because current models tend to resolve particles into chemical species rather than to deduce their sources; and (4) a source signature library may need to be compiled for each airshed in order to improve the resolving capabilities of receptor models.

  20. Effect of high energy electrons on H{sup −} production and destruction in a high current DC negative ion source for cyclotron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onai, M., E-mail: onai@ppl.appi.keio.ac.jp; Fujita, S.; Hatayama, A.

    2016-02-15

    Recently, a filament-driven multi-cusp negative ion source has been developed for proton cyclotrons in medical applications. In this study, numerical modeling of the filament arc-discharge source plasma has been performed, combining kinetic modeling of the electrons in the ion source plasma by the multi-cusp arc-discharge code with zero-dimensional rate equations for hydrogen molecules and negative ions. The main focus of this paper is on the effects of the arc-discharge power on the electron energy distribution function and the resultant H{sup −} production. The modelling results reasonably explain the dependence of the H{sup −} extraction current on the arc-discharge power observed in the experiments.

  1. Timing and petroleum sources for the Lower Cretaceous Mannville Group oil sands of northern Alberta based on 4-D modeling

    USGS Publications Warehouse

    Higley, D.K.; Lewan, M.D.; Roberts, L.N.R.; Henry, M.

    2009-01-01

    The Lower Cretaceous Mannville Group oil sands of northern Alberta have an estimated 270.3 billion m3 (BCM) (1700 billion bbl) of in-place heavy oil and tar. Our study area includes oil sand accumulations and downdip areas that partially extend into the deformation zone in western Alberta. The oil sands are composed of highly biodegraded oil and tar, collectively referred to as bitumen, whose source remains controversial. This is addressed in our study with a four-dimensional (4-D) petroleum system model. The modeled primary trap for generated and migrated oil consists of subtle structures. A probable seal for the oil sands was the gradual updip removal of the lighter hydrocarbon fractions as migrated oil was progressively biodegraded. This is hypothetical because the modeling software did not include seals resulting from the biodegradation of oil. Although the 4-D model shows that source rocks ranging from the Devonian-Mississippian Exshaw Formation to the Lower Cretaceous Mannville Group coals and Ostracode zone contributed oil to Mannville Group reservoirs, source rocks in the Jurassic Fernie Group (Gordondale Member and Poker Chip A shale) were the initial and major contributors. Kinetics associated with the type IIS kerogen in Fernie Group source rocks resulted in the early generation and expulsion of oil, as early as 85 Ma and prior to the generation from the type II kerogen of deeper and older source rocks. The modeled 50% peak transformation to oil was reached about 75 Ma for the Gordondale Member and Poker Chip A shale near the west margin of the study area, and prior to its onset (about 65 Ma) in other source rocks. This early petroleum generation from the Fernie Group source rocks produced large volumes of oil prior to the Laramide uplift and onset of erosion (~58 Ma), which curtailed oil generation from all source rocks. Oil generation from all source rocks ended by 40 Ma. Although the modeled study area did not include possible western contributions of generated oil to the oil sands, the amount generated by the Jurassic source rocks within the study area was 475 BCM (2990 billion bbl). Copyright © 2009. The American Association of Petroleum Geologists. All rights reserved.

  2. Validation of a novel air toxic risk model with air monitoring.

    PubMed

    Pratt, Gregory C; Dymond, Mary; Ellickson, Kristie; Thé, Jesse

    2012-01-01

    Three modeling systems were used to estimate human health risks from air pollution: two versions of MNRiskS (for Minnesota Risk Screening), and the USEPA National Air Toxics Assessment (NATA). MNRiskS is a unique cumulative risk modeling system used to assess risks from multiple air toxics, sources, and pathways on a local to a state-wide scale. In addition, ambient outdoor air monitoring data were available for estimation of risks and comparison with the modeled estimates of air concentrations. Highest air concentrations and estimated risks were generally found in the Minneapolis-St. Paul metropolitan area and lowest risks in undeveloped rural areas. Emissions from mobile and area (nonpoint) sources created greater estimated risks than emissions from point sources. Highest cancer risks were via ingestion pathway exposures to dioxins and related compounds. Diesel particles, acrolein, and formaldehyde created the highest estimated inhalation health impacts. Model-estimated air concentrations were generally highest for NATA and lowest for the AERMOD version of MNRiskS. This validation study showed reasonable agreement between available measurements and model predictions, although results varied among pollutants, and predictions were often lower than measurements. The results increased confidence in identifying pollutants, pathways, geographic areas, sources, and receptors of potential concern, and thus provide a basis for informing pollution reduction strategies and focusing efforts on specific pollutants (diesel particles, acrolein, and formaldehyde), geographic areas (urban centers), and source categories (nonpoint sources). The results heighten concerns about risks from food chain exposures to dioxins and PAHs. Risk estimates were sensitive to variations in methodologies for treating emissions, dispersion, deposition, exposure, and toxicity. © 2011 Society for Risk Analysis.

  3. Low resolution brain electromagnetic tomography in a realistic geometry head model: a simulation study

    NASA Astrophysics Data System (ADS)

    Ding, Lei; Lai, Yuan; He, Bin

    2005-01-01

    It is important to localize neural sources from scalp-recorded EEG. Low resolution brain electromagnetic tomography (LORETA) has received considerable attention for localizing brain electrical sources. However, most such efforts have used spherical head models to represent the head volume conductor. Investigating the performance of LORETA in a realistic geometry head model, as compared with the spherical model, provides useful information for interpreting data obtained with the spherical head model. The performance of LORETA was evaluated by means of computer simulations. The boundary element method was used to solve the forward problem. A three-shell realistic geometry (RG) head model was constructed from MRI scans of a human subject. Single-dipole source configurations located in different regions of the brain and at varying depths were used to assess the performance of LORETA across brain regions. A three-sphere head model was also used to approximate the RG head model, similar simulations were performed, and the results were compared with those of the RG-LORETA with reference to the locations of the simulated sources. Multi-source localizations are discussed, with examples given in the RG head model. Localization errors employing the spherical LORETA, with reference to the source locations within the realistic geometry head, were about 20-30 mm for the four brain regions evaluated: frontal, parietal, temporal, and occipital. Localization errors employing the RG head model were about 10 mm over the same four regions. The present simulation results suggest that the use of the RG head model reduces the localization error of LORETA, and that the RG head model based LORETA is desirable if high localization accuracy is needed.
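
    For readers unfamiliar with the linear inverse step behind LORETA-type imaging, the toy sketch below computes a regularized minimum-norm estimate j = Lᵀ(LLᵀ + λI)⁻¹v. LORETA proper adds a spatial Laplacian weighting, and the lead field here is random rather than computed from a BEM head model; both simplifications are assumptions of the sketch.

        # Regularized minimum-norm source estimate from scalp data v.
        import numpy as np

        rng = np.random.default_rng(2)
        n_elec, n_src = 32, 500
        L = rng.normal(size=(n_elec, n_src))   # stand-in lead-field matrix

        j_true = np.zeros(n_src)
        j_true[123] = 1.0                      # single active dipole component
        v = L @ j_true + 0.01 * rng.normal(size=n_elec)

        lam = 1e-2
        j_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_elec), v)
        print("true index 123, estimated peak at", np.argmax(np.abs(j_hat)))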

  4. Contamination characteristics and source apportionment of trace metals in soils around Miyun Reservoir.

    PubMed

    Chen, Haiyang; Teng, Yanguo; Chen, Ruihui; Li, Jiao; Wang, Jinsheng

    2016-08-01

    Due to their toxicity and bioaccumulation, trace metals in soils can produce a wide range of toxic effects on animals, plants, microbes, and even humans. Recognizing the contamination characteristics of soil metals, and especially apportioning their potential sources, are necessary preconditions for pollution prevention and control. Over the past decades, several receptor models have been developed for source apportionment. Among them, positive matrix factorization (PMF) has gained popularity and was recommended by the US Environmental Protection Agency as a general modeling tool. In this study, an extended chemometrics model, multivariate curve resolution-alternating least squares based on maximum likelihood principal component analysis (MCR-ALS/MLPCA), was proposed for source apportionment of soil metals and applied to identify the potential sources of trace metals in soils around Miyun Reservoir. Like PMF, the MCR-ALS/MLPCA model can incorporate measurement error information and non-negativity constraints in its calculation procedures. Model validation with a synthetic dataset suggested that MCR-ALS/MLPCA could recover acceptable source profiles even at relatively large error levels. When applied to identify the sources of trace metals in soils around Miyun Reservoir, the MCR-ALS/MLPCA model obtained profiles highly similar to those of PMF. The assessment of contamination status showed that the soils around the reservoir were polluted by trace metals to a slight-to-moderate degree but posed acceptable risks to the public. Mining activities, fertilizers and agrochemicals, and atmospheric deposition were identified as the potential anthropogenic sources, with contributions of 24.8%, 14.6%, and 13.3%, respectively. In order to protect the drinking water source of Beijing, special attention should be paid to metal inputs to soils from mining and agricultural activities.
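
    A minimal sketch of the MCR-ALS core follows: alternating least-squares updates of contributions C and profiles S under non-negativity, so that X ≈ C·S. The MLPCA step (weighting by the measurement-error covariance) is omitted, and the data are synthetic.

        # Alternating least squares with non-negativity clipping.
        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.exponential(1.0, (100, 4)) @ rng.exponential(1.0, (4, 10))

        k = 4
        C = rng.random((100, k))
        for _ in range(200):
            S = np.clip(np.linalg.lstsq(C, X, rcond=None)[0], 0, None)        # profiles
            C = np.clip(np.linalg.lstsq(S.T, X.T, rcond=None)[0].T, 0, None)  # contributions
        print("relative residual:", np.linalg.norm(X - C @ S) / np.linalg.norm(X))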

  5. An improved model to predict bandwidth enhancement in an inductively tuned common source amplifier.

    PubMed

    Reza, Ashif; Misra, Anuraag; Das, Parnika

    2016-05-01

    This paper presents an improved model for predicting the bandwidth enhancement factor (BWEF) of an inductively tuned common source amplifier. In this model, we include the effect of the drain-source channel resistance of the field-effect transistor, along with the load inductance and output capacitance, on the BWEF of the amplifier. A frequency-domain analysis of the model is performed and a closed-form expression is derived for the BWEF. A prototype common source amplifier was designed and tested, and its BWEF was obtained from the measured frequency response as a function of drain current and load inductance. In the present work, we clearly demonstrate that including the drain-source channel resistance in the proposed model allows the BWEF to be estimated to within 5% of the measured results.
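
    The effect described can be explored numerically: in a shunt-peaked (inductively tuned) common-source stage, the drain sees R + jωL in parallel with the output capacitance C and the drain-source channel resistance r_ds that the paper adds to the model. The sketch below computes the -3 dB bandwidth with and without the inductor and takes their ratio as the BWEF; all component values are illustrative, not the paper's.

        # Numerical BWEF of a shunt-peaked stage including r_ds.
        import numpy as np

        def bandwidth(R, L, C, r_ds):
            w = np.logspace(6, 12, 200000)             # rad/s sweep
            z_rl = R + 1j * w * L
            y = 1 / z_rl + 1j * w * C + 1 / r_ds       # parallel admittances at drain
            gain = np.abs(1 / y)                       # load impedance magnitude
            g0 = 1 / (1 / R + 1 / r_ds)                # DC load resistance
            return w[np.argmax(gain < g0 / np.sqrt(2))]  # first -3 dB crossing

        R, C, r_ds = 1e3, 1e-12, 10e3
        bw0 = bandwidth(R, 0.0, C, r_ds)               # untuned stage
        bw1 = bandwidth(R, 0.35e-6, C, r_ds)           # with peaking inductor
        print("BWEF =", bw1 / bw0)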

  6. Imaging irregular magma reservoirs with InSAR and GPS observations: Application to Kilauea and Copahue volcanoes

    NASA Astrophysics Data System (ADS)

    Lundgren, P.; Camacho, A.; Poland, M. P.; Miklius, A.; Samsonov, S. V.; Milillo, P.

    2013-12-01

    The availability of synthetic aperture radar (SAR) interferometry (InSAR) data has increased our awareness of the complexity of volcano deformation sources. InSAR's spatial completeness helps identify or clarify source process mechanisms at volcanoes (e.g., Mt. Etna east flank motion; the Lazufre crustal magma body; Kilauea dike complexity) and also improves potential model realism. In recent years, Bayesian inference methods have gained widespread use because of their ability to constrain not only source model parameters but also their uncertainties. They are computationally intensive, however, which tends to limit them to a few geometrically rather simple source representations (for example, spheres). An alternative approach involves solving for irregular pressure and/or density sources from a three-dimensional (3-D) grid of source/density cells. This method has the ability to solve for arbitrarily shaped bodies of constant absolute pressure/density difference. We compare results for both Bayesian (a Markov chain Monte Carlo algorithm) and the irregular source methods for two volcanoes: Kilauea, Hawaii, and Copahue, on the Argentina-Chile border. Kilauea has extensive InSAR and GPS databases from which to explore the results of the irregular method with respect to the Bayesian approach, prior models, and an extensive set of ancillary data. One caveat, however, is the current restriction of the irregular model inversion to volume-pressure sources (at a single excess pressure change), which limits its application in cases where sources such as faults or dikes are present. Preliminary results for Kilauea summit deflation during the March 2011 Kamoamoa eruption suggest a northeast-elongated magma body lying roughly 1-1.5 km below the surface. Copahue is a southern Andes volcano that has been inflating since early 2012, with intermittent summit eruptive activity since late 2012. We have an extensive InSAR time series from RADARSAT-2 and COSMO-SkyMed data, although both are from descending tracks. Preliminary modeling suggests a very irregular magma body that extends from the volcanic edifice to less than 5 km depth and is located slightly north of the summit at shallow depths but to the ENE at greater depths. In our preliminary analysis, we find that there are potential limitations and trade-offs in the Bayesian results, suggesting that the simplicity of the assumed analytic source may generate systematic biases in source parameters. The irregular 3-D solution appears to provide greater realism, but is limited in the number and type of sources that can be modeled.

  7. An Applied Framework for Incorporating Multiple Sources of Uncertainty in Fisheries Stock Assessments.

    PubMed

    Scott, Finlay; Jardim, Ernesto; Millar, Colin P; Cerviño, Santiago

    2016-01-01

    Estimating fish stock status is very challenging given the many sources and high levels of uncertainty surrounding the biological processes (e.g., natural variability in the demographic rates), model selection (e.g., choosing growth or stock assessment models), and parameter estimation. Incorporating multiple sources of uncertainty in a stock assessment allows advice to better account for the risks associated with proposed management options, promoting decisions that are more robust to such uncertainty. However, a typical assessment only reports the model fit and variance of estimated parameters, thereby underreporting the overall uncertainty. Additionally, although multiple candidate models may be considered, only one is selected as the 'best' result, effectively rejecting the plausible assumptions behind the other models. We present an applied framework to integrate multiple sources of uncertainty in the stock assessment process. The first step is the generation and conditioning of a suite of stock assessment models that contain different assumptions about the stock and the fishery. The second step is the estimation of parameters, including fitting of the stock assessment models. The final step integrates across all of the results to reconcile the multi-model outcome. The framework is flexible enough to be tailored to particular stocks and fisheries and can draw on information from multiple sources to implement a broad variety of assumptions, making it applicable to stocks with varying levels of data availability. The Iberian hake stock in International Council for the Exploration of the Sea (ICES) Divisions VIIIc and IXa is used to demonstrate the framework, starting from length-based stock and indices data. Process and model uncertainty are considered through growth, natural mortality, fishing mortality, survey catchability, and the stock-recruitment relationship. Estimation uncertainty is included as part of the fitting process. Simple model averaging is used to integrate across the results and produce a single assessment that considers the multiple sources of uncertainty.

  8. Toward disentangling the effect of hydrologic and nitrogen source changes from 1992 to 2001 on incremental nitrogen yield in the contiguous United States

    NASA Astrophysics Data System (ADS)

    Alam, Md Jahangir; Goodall, Jonathan L.

    2012-04-01

    The goal of this research was to quantify the relative impact of hydrologic and nitrogen source changes on incremental nitrogen yield in the contiguous United States. Using nitrogen source estimates from various federal databases, remotely sensed land use data from the National Land Cover Data program, and observed instream loadings from the United States Geological Survey National Stream Quality Accounting Network program, we calibrated and applied the spatially referenced regression model SPARROW to estimate incremental nitrogen yield for the contiguous United States. We ran different model scenarios to separate the effects of changes in source contributions from hydrologic changes for the years 1992 and 2001, assuming that only state conditions changed and that model coefficients describing the stream water-quality response to changes in state conditions remained constant between 1992 and 2001. Model results show a decrease of 8.2% in the median incremental nitrogen yield over the period of analysis, with the vast majority of this decrease due to changes in hydrologic conditions rather than decreases in nitrogen sources. For example, when we changed the 1992 version of the model to have nitrogen source data from 2001, the model results showed only a small increase in median incremental nitrogen yield (0.12%). However, when we changed the 1992 version of the model to have hydrologic conditions from 2001, model results showed a decrease of approximately 8.7% in median incremental nitrogen yield. We did, however, find notable differences in incremental yield estimates for different sources of nitrogen after controlling for hydrologic changes, particularly for population-related sources. For example, the median incremental yield for population-related sources increased by 8.4% after controlling for hydrologic changes. This is in contrast to a 2.8% decrease in population-related sources when hydrologic changes are included in the analysis. Likewise, we found that the median incremental yield from urban watersheds increased by 6.8% after controlling for hydrologic changes; in contrast, the median incremental nitrogen yield from cropland watersheds decreased by 2.1% over the same time period. These results suggest that, after accounting for hydrologic changes, population-related sources became a more significant contributor of nitrogen yield to streams in the contiguous United States over the period of analysis. However, this study was not able to account for the influence of human management practices, such as improvements in wastewater treatment plants or Best Management Practices, that likely improved water quality, due to a lack of data for quantifying the impact of these practices in the study area.

  9. Effects of source shape on the numerical aperture factor with a geometrical-optics model.

    PubMed

    Wan, Der-Shen; Schmit, Joanna; Novak, Erik

    2004-04-01

    We study the effects of an extended light source on the calibration of an interference microscope, also referred to as an optical profiler. Theoretical and experimental numerical aperture (NA) factors for circular and linear light sources along with collimated laser illumination demonstrate that the shape of the light source or effective aperture cone is critical for a correct NA factor calculation. In practice, more-accurate results for the NA factor are obtained when a linear approximation to the filament light source shape is used in a geometric model. We show that previously measured and derived NA factors show some discrepancies because a circular rather than linear approximation to the filament source was used in the modeling.

  10. Relation between the neutrino flux from Centaurus A and the associated diffuse neutrino flux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koers, Hylke B. J.; Tinyakov, Peter; Institute for Nuclear Research, 60th October Anniversary Prospect 7a, 117312, Moscow

    2008-10-15

    Based on recent results obtained by the Pierre Auger Observatory (PAO), it has been hypothesized that Centaurus A (Cen A) is a source of ultrahigh-energy cosmic rays (UHECRs) and associated neutrinos. We point out that the diffuse neutrino flux may be used to constrain the source model if one assumes that the ratio between the UHECR and neutrino fluxes emitted by Cen A is representative of other sources. Under this assumption we investigate the relation between the neutrino flux from Cen A and the diffuse neutrino flux. Assuming furthermore that Cen A is the source of two UHECR events observed by PAO, we estimate the all-sky diffuse neutrino flux to be ~200-5000 times larger than the neutrino flux from Cen A. As a result, the diffuse neutrino fluxes associated with some of the recently proposed models of UHECR-related neutrino production in Cen A are above existing limits. Regardless of the underlying source model, our results indicate that the detection of neutrinos from Cen A without the accompanying diffuse flux would mean that Cen A is an exceptionally efficient neutrino source.

  11. CSAMT Data Processing with Source Effect and Static Corrections, Application of Occam's Inversion, and Its Application in Geothermal System

    NASA Astrophysics Data System (ADS)

    Hamdi, H.; Qausar, A. M.; Srigutomo, W.

    2016-08-01

    Controlled source audio-frequency magnetotellurics (CSAMT) is a frequency-domain electromagnetic sounding technique which uses a fixed grounded dipole as an artificial signal source. Because CSAMT measurements are made at a finite distance between transmitter and receiver, the recorded wave is complex rather than planar. In addition, static shifts of the electric field displace the apparent resistivity curve up or down and affect the measurement results. The objective of this study was to obtain data corrected for source and static effects so as to have the same characteristics as MT data, which are assumed to exhibit plane-wave properties. The corrected CSAMT data were then inverted to reveal the subsurface resistivity model. A source effect correction was applied to eliminate the influence of the signal source, and the static effect was corrected using a spatial filtering technique. The inversion method used in this study is Occam's 2D inversion. The inversion produces smooth models with small misfit values, meaning the models describe subsurface conditions well. Based on the inversion results, the survey area is interpreted as rock with high permeability that is rich in hot fluid.

  12. Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; Huang, Lianjie

    2015-01-28

    Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results for subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, as in 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversion method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.
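
    To illustrate the compressive-sensing ingredient, the sketch below recovers a sparse model update from underdetermined data with ISTA (iterative soft thresholding) applied to min ||Am - d||² + α||m||₁. The matrix A is a random stand-in for a linearized wave operator; the actual method embeds such a step inside alternating minimization with elastic-waveform modeling.

        # ISTA for sparse recovery from fewer data than unknowns.
        import numpy as np

        rng = np.random.default_rng(7)
        n_data, n_model = 60, 200              # sparse survey: fewer data than unknowns
        A = rng.normal(size=(n_data, n_model)) / np.sqrt(n_data)
        m_true = np.zeros(n_model)
        m_true[rng.choice(n_model, 8, replace=False)] = rng.normal(size=8)
        d = A @ m_true

        alpha, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
        m = np.zeros(n_model)
        for _ in range(500):
            g = m - step * A.T @ (A @ m - d)   # gradient step on the data misfit
            m = np.sign(g) * np.maximum(np.abs(g) - step * alpha, 0.0)  # soft threshold
        print("recovery error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))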

  13. Wet deposition of mercury at a New York state rural site: Concentrations, fluxes, and source areas

    NASA Astrophysics Data System (ADS)

    Lai, Soon-onn; Holsen, Thomas M.; Hopke, Philip K.; Liu, Peng

    Event-based mercury (Hg) precipitation samples were collected with a modified MIC-B sampler between September 2003 and April 2005 at Potsdam, NY to investigate Hg in wet deposition and identify potential source areas using the potential source contribution function (PSCF) and residence time weighted concentration (RTWC) models. The volume-weighted mean (VWM) concentration and wet deposition flux were 5.5 ng L-1 and 7.6 μg m-2 during the study period, and 5.5 ng L-1 and 5.9 μg m-2 in 2004, respectively, and showed seasonal trends with larger values in the spring and summer. The PSCF model results matched known source areas based on an emission inventory better than did the RTWC results based on the spatial correlation index. Both modeling results identified large Hg source areas that contain a number of coal-fired power plants located in the Upper Ohio River Valley and in southeastern Michigan, as well as in Quebec and Ontario where there are metal production facilities, waste incinerators and paper mills. Emissions from the Atlantic Ocean were also determined to be a potential source.
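
    The PSCF computation itself is compact: for each grid cell, PSCF = m_ij / n_ij, where n_ij counts all back-trajectory endpoints falling in the cell and m_ij counts endpoints belonging to samples whose concentration exceeds a criterion (e.g., the 75th percentile). The trajectory data below are synthetic placeholders.

        # PSCF on a 1-degree grid from synthetic trajectories.
        import numpy as np

        rng = np.random.default_rng(4)
        n_events = 300
        conc = rng.lognormal(1.5, 0.6, n_events)          # Hg in precip, ng/L
        lon = rng.uniform(-90, -70, (n_events, 48))       # 48 endpoints per event
        lat = rng.uniform(38, 50, (n_events, 48))

        crit = np.percentile(conc, 75)
        lon_edges, lat_edges = np.arange(-90, -69, 1.0), np.arange(38, 51, 1.0)

        n_ij, _, _ = np.histogram2d(lon.ravel(), lat.ravel(), [lon_edges, lat_edges])
        high = conc > crit
        m_ij, _, _ = np.histogram2d(lon[high].ravel(), lat[high].ravel(),
                                    [lon_edges, lat_edges])
        pscf = np.where(n_ij > 0, m_ij / np.maximum(n_ij, 1), 0.0)
        print("max PSCF:", pscf.max().round(2))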

  14. Discrepancy between earthquake rates implied by historic earthquakes and a consensus geologic source model for California

    USGS Publications Warehouse

    Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.

    2000-01-01

    We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and the geologic data that were used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG) and U.S. Geological Survey (USGS); Petersen et al., 1996; Frankel et al., 1996]. On average, the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted to accommodate high strain rates from geologic and geodetic data but have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data. The 475-year return period hazard for peak ground acceleration and 1-s spectral acceleration resulting from this alternative source model differs from the hazard resulting from the standard CDMG-USGS model by less than 10% across most of California but is higher (generally about 10% to 30%) within 20 km of some faults.

  15. Human genome and open source: balancing ethics and business.

    PubMed

    Marturano, Antonio

    2011-01-01

    The Human Genome Project was completed thanks to massive use of computing techniques, as well as the adoption of the open-source business and research model by the scientists involved. This model won out over the proprietary model and allowed quick propagation of, and feedback on, research results among peers. In this paper, the author analyses some ethical and legal issues raised by the use of this computing model with respect to Human Genome property rights. The author argues that Open Source is the best business model, as it is able to balance business and human rights perspectives.

  16. Application of Molecular Typing Results in Source Attribution Models: The Case of Multiple Locus Variable Number Tandem Repeat Analysis (MLVA) of Salmonella Isolates Obtained from Integrated Surveillance in Denmark.

    PubMed

    de Knegt, Leonardo V; Pires, Sara M; Löfström, Charlotta; Sørensen, Gitte; Pedersen, Karl; Torpdahl, Mia; Nielsen, Eva M; Hald, Tine

    2016-03-01

    Salmonella is an important cause of bacterial foodborne infections in Denmark. To identify the main animal-food sources of human salmonellosis, risk managers have relied on routine application of a microbial subtyping-based source attribution model since 1995. In 2013, multiple locus variable number tandem repeat analysis (MLVA) replaced phage typing as the subtyping method for surveillance of S. Enteritidis and S. Typhimurium isolated from animals, food, and humans in Denmark. The purpose of this study was to develop a modeling approach applying a combination of serovars, MLVA types, and antibiotic resistance profiles for Salmonella source attribution, and to assess the utility of the results for food safety decision-makers. Full and simplified MLVA schemes from surveillance data were tested, and model fit and consistency of results were assessed using statistical measures. We conclude that the loci schemes STTR5/STTR10/STTR3 for S. Typhimurium and SE9/SE5/SE2/SE1/SE3 for S. Enteritidis can be used in microbial subtyping-based source attribution models. Based on the results, we discuss that the discriminatory level of the subtyping method applied will often need to be adjusted to fit the purpose of the study and the available data. The issues discussed are also considered highly relevant when applying, e.g., extended multi-locus sequence typing or next-generation sequencing techniques. © 2015 Society for Risk Analysis.

  17. Overall uncertainty study of the hydrological impacts of climate change for a Canadian watershed

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Brissette, FrançOis P.; Poulin, Annie; Leconte, Robert

    2011-12-01

    General circulation models (GCMs) and greenhouse gas emissions scenarios (GGES) are generally considered to be the two major sources of uncertainty in quantifying climate change impacts on hydrology. Other sources of uncertainty have received less attention. This study considers overall uncertainty by combining results from an ensemble of two GGES, six GCMs, five GCM initial conditions, four downscaling techniques, three hydrological model structures, and 10 sets of hydrological model parameters. Each climate projection is equally weighted to predict the hydrology of a Canadian watershed for the 2081-2100 horizon. The results show that the choice of GCM is consistently a major contributor to uncertainty. However, other sources, such as the choice of downscaling method and the GCM initial conditions, contribute comparable or even larger uncertainty for some hydrological variables. Uncertainties linked to GGES and the hydrological model structure are somewhat smaller than those related to GCMs and downscaling techniques. Uncertainty due to hydrological model parameter selection makes the least important contribution among all the variables considered. Overall, this research underlines the importance of adequately covering all sources of uncertainty; a failure to do so may result in moderately to severely biased climate change impact studies. Results further indicate that the major contributors to uncertainty vary depending on the hydrological variables selected, and that the methodology presented in this paper is successful at identifying the key sources of uncertainty to consider in a climate change impact study.
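
    The ensemble bookkeeping behind such a study can be sketched as follows: enumerate all combinations of the uncertainty sources and attribute variance to each factor by averaging over the others (a simple one-way, ANOVA-style decomposition). The response values below are random placeholders, not hydrological simulations, and the decomposition shown is only one of several possible conventions.

        # Enumerate the 2 x 6 x 5 x 4 x 3 x 10 ensemble and split the variance.
        import itertools
        import numpy as np

        rng = np.random.default_rng(6)
        levels = {"GGES": 2, "GCM": 6, "init": 5, "downscale": 4,
                  "hydro_model": 3, "params": 10}
        combos = list(itertools.product(*(range(n) for n in levels.values())))
        response = rng.normal(size=len(combos))   # placeholder: e.g., mean annual flow

        resp = np.array(response).reshape(*levels.values())
        total_var = resp.var()
        for axis, name in enumerate(levels):
            other = tuple(i for i in range(len(levels)) if i != axis)
            share = resp.mean(axis=other).var() / total_var  # main-effect variance
            print(f"{name}: {share:.1%} of variance")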

  18. The Relationship Between Partial Contaminant Source Zone Remediation and Groundwater Plume Attenuation

    NASA Astrophysics Data System (ADS)

    Falta, R. W.

    2004-05-01

    Analytical solutions are developed that relate changes in the contaminant mass in a source area to the behavior of biologically reactive dissolved contaminant groundwater plumes. Based on data from field experiments, laboratory experiments, numerical streamtube models, and numerical multiphase flow models, the chemical discharge from a source region is assumed to be a nonlinear power function of the fraction of contaminant mass removed from the source zone. This function can approximately represent source zone mass discharge behavior over a wide range of site conditions, from simple homogeneous systems to complex heterogeneous systems. A mass balance on the source zone with advective transport and first-order decay leads to a nonlinear differential equation that is solved analytically to provide a prediction of the time-dependent contaminant mass discharge leaving the source zone. The solution for source zone mass discharge is coupled semi-analytically with a modified version of the Domenico (1987) analytical solution for three-dimensional reactive advective and dispersive transport in groundwater. The semi-analytical model then employs the BIOCHLOR (Aziz et al., 2000; Sun et al., 1999) transformations to model sequential first-order parent-daughter biological decay reactions of chlorinated ethenes and ethanes in the groundwater plume. The resulting semi-analytic model thus allows for transient simulation of complex source zone behavior that is fully coupled to a dissolved contaminant plume undergoing sequential biological reactions. Analyses of several realistic scenarios show that substantial changes in the groundwater plume can result from the partial removal of contaminant mass from the source zone. These results, however, are sensitive to the nature of the source mass reduction-source discharge reduction curve, and to the rates of degradation of the primary contaminant and its daughter products in the groundwater plume.
    References: Aziz, C.E., C.J. Newell, J.R. Gonzales, P. Haas, T.P. Clement, and Y. Sun, 2000, BIOCHLOR Natural Attenuation Decision Support System User's Manual Version 1.0, US EPA Report EPA/600/R-00/008.
    Domenico, P.A., 1987, An analytical model for multidimensional transport of a decaying contaminant species, J. Hydrol., 91: 49-58.
    Sun, Y., J.N. Petersen, T.P. Clement, and R.S. Skeen, 1999, A new analytical solution for multi-species transport equations with serial and parallel reactions, Water Resour. Res., 35(1): 185-190.
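
    The source-zone mass balance described above can be written as C(t)/C0 = (M/M0)^Γ with dM/dt = -Q·C0·(M/M0)^Γ - λ·M (advective flushing plus first-order source decay). The sketch below integrates this numerically; the abstract's solutions are analytical, and all parameter values here are illustrative only.

        # Power-law source depletion: time to remove 99% of the source mass.
        M0, C0 = 1000.0, 0.1         # initial mass (kg), source concentration (kg/m^3)
        Q, gamma, lam = 50.0, 1.5, 0.01   # flow (m^3/yr), exponent, decay (1/yr)

        dt, M, t = 0.01, M0, 0.0
        while M > 0.01 * M0:         # integrate until 99% depletion
            C = C0 * (M / M0) ** gamma
            M += dt * (-Q * C - lam * M)
            t += dt
        print(f"99% of source mass removed after {t:.1f} years")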

  19. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

    Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general, we expect to obtain a higher quality source image by improving the observational input data (e.g., using more and higher-quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters: slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Monelli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) is used for a single-window slip velocity function. We search for local rupture velocity directly in the inversion and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one searched directly. By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.
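
    The prior described can be sketched directly: draw slip, rupture velocity, and peak slip velocity on a fault grid with prescribed auto- and cross-correlation by Cholesky-factoring a joint covariance. The correlation lengths and cross-correlation coefficients below are illustrative assumptions, not the calibrated values of Song et al.

        # Correlated kinematic fields on a 1-D toy fault via Cholesky sampling.
        import numpy as np

        rng = np.random.default_rng(8)
        n = 50                                   # subfaults along strike
        x = np.arange(n)
        C_auto = np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0)  # spatial correlation

        rho = np.array([[1.0, 0.6, 0.5],         # slip / Vr / Vpeak cross-correlation
                        [0.6, 1.0, 0.4],
                        [0.5, 0.4, 1.0]])
        C = np.kron(rho, C_auto)                 # joint covariance of the 3 fields
        fields = np.linalg.cholesky(C + 1e-9 * np.eye(3 * n)) @ rng.normal(size=3 * n)
        slip, v_r, v_peak = fields[:n], fields[n:2*n], fields[2*n:]
        print("corr(slip, Vr) =", np.corrcoef(slip, v_r)[0, 1].round(2))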

  20. Improving bioaerosol exposure assessments of composting facilities — Comparative modelling of emissions from different compost ages and processing activities

    NASA Astrophysics Data System (ADS)

    Taha, M. P. M.; Drew, G. H.; Tamer, A.; Hewings, G.; Jordinson, G. M.; Longhurst, P. J.; Pollard, S. J. T.

    We present bioaerosol source term concentrations from passive and active composting sources and compare emissions from green waste compost aged 1, 2, 4, 6, 8, 12 and 16 weeks. Results reveal that the age of compost has little effect on the bioaerosol concentrations emitted from passive windrow sources. However, emissions from turning compost during the early stages may be higher than during the later stages of the composting process. The bioaerosol emissions from passive sources were in the range of 10³-10⁴ cfu m⁻³, with releases from active sources typically 1-log higher. We propose improvements to current risk assessment methodologies by examining emission rates and the differences between two air dispersion models for the prediction of downwind bioaerosol concentrations at off-site points of exposure. The SCREEN3 model provides a more precautionary estimate of the source depletion curves of bioaerosol emissions in comparison to ADMS 3.3. The results from both models predict that bioaerosol concentrations decrease to below typical background concentrations before 250 m, the distance at which the regulator in England and Wales may require a risk assessment to be completed.

  1. GIS-MODFLOW: A small open-source tool for linking GIS data to MODFLOW

    NASA Astrophysics Data System (ADS)

    Gossel, Wolfgang

    2013-06-01

    The numerical model MODFLOW (Harbaugh 2005) is an efficient and up-to-date tool for groundwater flow modelling. Geo-Information Systems (GIS), on the other hand, provide useful tools for data preparation and visualization that can also be incorporated in numerical groundwater modelling. An interface between the two would therefore be useful for many hydrogeological investigations. To date, several integrated stand-alone tools have been developed that rely on MODFLOW, MODPATH and transport modelling tools. Simultaneously, several open-source GIS codes have been developed with improved functionality and ease of use. These GIS tools can be used as pre- and post-processors of the numerical model MODFLOW via a suitable interface. Here we present GIS-MODFLOW, an open-source tool that provides a new universal interface based on the ESRI ASCII GRID data format, whose contents can be converted into MODFLOW input data. The tool can also process MODFLOW results. Such a combination of MODFLOW and open-source GIS opens new possibilities for making groundwater flow modelling and simulation results available to a wider circle of hydrogeologists.
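
    The interface idea can be illustrated with a minimal parser for the ESRI ASCII GRID format; the parsed array could then be handed to a MODFLOW pre-processor. The file name and the downstream use in the comments are hypothetical, and real GIS-MODFLOW handles the full header/NODATA conventions and the reverse (results) direction as well.

        # Minimal ESRI ASCII GRID reader (header keywords per the published format).
        import numpy as np

        def read_esri_ascii(path):
            header, rows = {}, []
            with open(path) as f:
                for line in f:
                    parts = line.split()
                    if len(parts) == 2 and parts[0].lower() in (
                            "ncols", "nrows", "xllcorner", "yllcorner",
                            "cellsize", "nodata_value"):
                        header[parts[0].lower()] = float(parts[1])
                    else:
                        rows.append([float(v) for v in parts])
            grid = np.array(rows)
            assert grid.shape == (int(header["nrows"]), int(header["ncols"]))
            return header, grid

        # Hypothetical usage:
        # header, top_elevation = read_esri_ascii("top_layer1.asc")
        # ... write top_elevation into the MODFLOW discretization (DIS) input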

  2. Constrained positive matrix factorization: Elemental ratios, spatial distinction, and chemical transport model source contributions

    NASA Astrophysics Data System (ADS)

    Sturtz, Timothy M.

    Source apportionment models attempt to untangle the relationship between pollution sources and the impacts at downwind receptors. Two frameworks of source apportionment models exist: source-oriented and receptor-oriented. Source-based apportionment models use presumed emissions and atmospheric processes to estimate downwind source contributions. Conversely, receptor-based models leverage speciated concentration data from downwind receptors and apply statistical methods to estimate source contributions. Integration of source-oriented and receptor-oriented models could lead to a better understanding of the implications sources have for the environment and society. The research presented here investigated three different types of constraints applied to the Positive Matrix Factorization (PMF) receptor model within the framework of the Multilinear Engine (ME-2): element ratio constraints, spatial separation constraints, and chemical transport model (CTM) source attribution constraints. PM10-2.5 mass and trace element concentrations were measured in Winston-Salem, Chicago, and St. Paul at up to 60 sites per city during two different seasons in 2010. PMF was used to explore the underlying sources of variability. Information on previously reported PM10-2.5 tire and brake wear profiles was used to constrain these features in PMF by prior specification of selected species ratios. We also modified PMF to allow the measurements from all three cities to be combined into a single model while preserving city-specific soil features. Relatively minor differences were observed between model predictions with and without the prior ratio constraints, increasing confidence in our ability to identify separate brake wear and tire wear features. Using separate data, source contributions to total fine particle carbon predicted by a CTM were incorporated into the PMF receptor model to form a receptor-oriented hybrid model. The level of influence of the CTM versus traditional PMF was varied using a weighting parameter applied to an objective function as implemented in ME-2. The resulting hybrid model was used to quantify the contributions of total carbon from both wildfires and biogenic sources at two Interagency Monitoring of Protected Visual Environments monitoring sites, Monture and Sula Peak, Montana, from 2006 through 2008.

  3. A GIS-based multi-source and multi-box modeling approach (GMSMB) for air pollution assessment--a North American case study.

    PubMed

    Wang, Bao-Zhen; Chen, Zhi

    2013-01-01

    This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants at local and regional scales. In this method, an extended multi-box model is combined with a multi-source, multi-grid Gaussian model within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data, including emission sources, air quality monitoring, meteorological data, and spatial location information required for air quality modeling, is brought into an integrated modeling environment, allowing more details of the spatial variation in source distribution and meteorological conditions to be analyzed quantitatively. The developed modeling approach was applied to predict the spatial concentration distributions of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results were compared with monitoring data, and good agreement was obtained, demonstrating that the developed approach can deliver an effective air pollution assessment at both regional and local scales to support air pollution control and management planning.
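
    The point-source part of such a framework is classically a Gaussian plume. The sketch below implements the standard ground-reflected Gaussian plume equation as a minimal illustration; the linear dispersion coefficients ay and az are hypothetical placeholders for the stability-class-dependent sigma curves a real model would use.

      import numpy as np

      def gaussian_plume(q, u, x, y, z, h, ay=0.08, az=0.06):
          """Steady-state plume concentration with ground reflection.

          q: emission rate (g/s); u: wind speed (m/s); h: stack height (m);
          x, y, z: downwind, crosswind, vertical coordinates (m);
          sigma_y and sigma_z grow linearly with x (a simplification)."""
          sy, sz = ay * x, az * x
          return (q / (2 * np.pi * u * sy * sz)
                  * np.exp(-y**2 / (2 * sy**2))
                  * (np.exp(-(z - h)**2 / (2 * sz**2))
                     + np.exp(-(z + h)**2 / (2 * sz**2))))

      # Concentration 500 m downwind of a 20 g/s stack, at breathing height.
      print(gaussian_plume(q=20.0, u=4.0, x=500.0, y=0.0, z=1.5, h=30.0))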

  4. A linked simulation-optimization model for solving the unknown groundwater pollution source identification problems.

    PubMed

    Ayvaz, M Tamer

    2010-09-20

    This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These simulators are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. In addition, an implicit solution procedure is proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples covering simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The results indicate that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems. Copyright (c) 2010 Elsevier B.V. All rights reserved.
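
    For readers unfamiliar with harmony search, the sketch below is a minimal HS minimizer of a generic objective; it is not Ayvaz's linked model, in which the objective would wrap a MODFLOW/MT3DMS run and the decision variables would be source locations and release histories. All parameter values are hypothetical.

      import numpy as np

      def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                         iters=2000, seed=0):
          """Minimal harmony search: memory consideration, pitch adjustment,
          random selection; replace the worst harmony when improved."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          hm = rng.uniform(lo, hi, size=(hms, lo.size))    # harmony memory
          cost = np.apply_along_axis(f, 1, hm)
          for _ in range(iters):
              pick = hm[rng.integers(hms, size=lo.size), np.arange(lo.size)]
              new = np.where(rng.random(lo.size) < hmcr, pick,
                             rng.uniform(lo, hi))          # memory vs random
              adjust = rng.random(lo.size) < par           # pitch adjustment
              new = np.clip(new + adjust * bw * (hi - lo)
                            * rng.uniform(-1, 1, lo.size), lo, hi)
              c, worst = f(new), np.argmax(cost)
              if c < cost[worst]:
                  hm[worst], cost[worst] = new, c
          return hm[np.argmin(cost)], cost.min()

      # Toy use: recover two "source" parameters by matching observations.
      best, err = harmony_search(lambda v: (v[0] - 3)**2 + (v[1] + 1)**2,
                                 bounds=[(-10, 10), (-10, 10)])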

  5. Identifying PM2.5 and PM0.1 sources for epidemiological studies in California.

    PubMed

    Hu, Jianlin; Zhang, Hongliang; Chen, Shuhua; Ying, Qi; Wiedinmyer, Christine; Vandenberghe, Francois; Kleeman, Michael J

    2014-05-06

    The University of California-Davis_Primary (UCD_P) model was applied to simultaneously track ~900 source contributions to primary particulate matter (PM) in California for seven continuous years (January 1, 2000 to December 31, 2006). Predicted source contributions to primary PM2.5 mass, PM1.8 elemental carbon (EC), PM1.8 organic carbon (OC), PM0.1 EC, and PM0.1 OC were in general agreement with the results from previous source apportionment studies using receptor-based techniques. All sources were further subjected to a constraint check based on model performance for PM trace elemental composition. A total of 151 PM2.5 sources and 71 PM0.1 sources contained PM elements that were predicted at concentrations in general agreement with measured values at nearby monitoring sites. Significant spatial heterogeneity was predicted among the 151 PM2.5 and 71 PM0.1 source concentrations, and significantly different seasonal profiles were predicted for PM2.5 and PM0.1 in central California vs southern California. Population-weighted concentrations of PM emitted from various sources calculated using the UCD_P model spatial information differed from the central monitor estimates by up to 77% for primary PM2.5 mass and 148% for PM2.5 EC, because the central monitor concentration is not representative of exposure for the nearby population. The results from the UCD_P model provide enhanced source apportionment information for epidemiological studies to examine the relationship between health effects and concentrations of primary PM from individual sources.
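
    The population weighting contrasted with central-monitor estimates above amounts to a simple weighted average over grid cells. A minimal sketch with entirely hypothetical numbers:

      import numpy as np

      # Hypothetical gridded fields: source-specific primary PM2.5 (ug/m3)
      # and population per grid cell.
      conc = np.array([[2.0, 5.0], [1.0, 8.0]])
      pop = np.array([[1e4, 5e5], [2e3, 9e5]])

      pop_weighted = (conc * pop).sum() / pop.sum()  # exposure-relevant mean
      central_monitor = conc[0, 1]                   # single-site surrogate
      print(pop_weighted, central_monitor)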

  6. A two-dimensional analytical model of vapor intrusion involving vertical heterogeneity.

    PubMed

    Yao, Yijun; Verginelli, Iason; Suuberg, Eric M

    2017-05-01

    In this work, we present an analytical chlorinated vapor intrusion (CVI) model that can estimate source-to-indoor air concentration attenuation by simulating two-dimensional (2-D) vapor concentration profiles in vertically heterogeneous soils overlying a homogenous vapor source. The analytical solution describing the 2-D soil gas transport was obtained by applying a modified Schwarz-Christoffel mapping method. A partial field validation showed that the developed model provides results (especially in terms of indoor emission rates) in line with the measured data from a case involving a building overlying a layered soil. In further testing, it was found that the new analytical model can very closely replicate the results of three-dimensional (3-D) numerical models at steady state in scenarios involving layered soils overlying homogenous groundwater sources. By contrast, with the two-layer approach (capillary fringe and vadose zone) employed in the EPA implementation of the Johnson and Ettinger model, the spatially and temporally averaged indoor concentrations in the case of groundwater sources can be up to two orders of magnitude higher than those estimated by the numerical model. In short, the model proposed in this work is an easy-to-use tool that can simulate the subsurface soil gas concentration in layered soils overlying a homogenous vapor source while keeping the simplicity of an analytical approach that requires far less computational effort.

  7. SOURCE APPORTIONMENT RESULTS, UNCERTAINTIES, AND MODELING TOOLS

    EPA Science Inventory

    Advanced multivariate receptor modeling tools are available from the U.S. Environmental Protection Agency (EPA) that use only speciated sample data to identify and quantify sources of air pollution. EPA has developed both EPA Unmix and EPA Positive Matrix Factorization (PMF) and ...

  8. Global two dimensional chemistry model and simulation of atmospheric chemical composition

    NASA Astrophysics Data System (ADS)

    Zhang, Renjian; Wang, Mingxing; Zeng, Qingcun

    2000-03-01

    A global two-dimensional zonally averaged chemistry model is developed to study the chemical composition of the atmosphere. The model domain extends from 90°S to 90°N and from the ground to an altitude of 20 km, with a resolution of 5° × 1 km. The wind field is the residual circulation calculated from the diabatic heating rate. 34 species and 104 chemical and photochemical reactions are considered in the model. The sources of CH4, CO and NOx, which are divided into seasonal and non-seasonal sources, are parameterized as functions of latitude and time. The chemical composition of the atmosphere was simulated with the 1990 emission levels of CH4, CO and NOx. The results are compared with observations and other model results, showing that the model successfully simulates the atmospheric chemical composition and the distribution of CH4.

  9. Comparison between PVI2D and Abreu–Johnson’s Model for Petroleum Vapor Intrusion Assessment

    PubMed Central

    Yao, Yijun; Wang, Yue; Verginelli, Iason; Suuberg, Eric M.; Ye, Jianfeng

    2018-01-01

    Recently, we have developed a two-dimensional analytical petroleum vapor intrusion model, PVI2D (petroleum vapor intrusion, two-dimensional), which can help users to easily visualize soil gas concentration profiles and indoor concentrations as a function of site-specific conditions such as source strength and depth, reaction rate constant, soil characteristics, and building features. In this study, we made a full comparison of the results returned by PVI2D and those obtained using Abreu and Johnson's three-dimensional numerical model (AJM). These comparisons, examined as a function of source strength, source depth, and reaction rate constant, show that PVI2D provides soil gas concentration profiles and source-to-indoor air attenuation factors similar (within one order of magnitude) to those given by the AJM. The differences between the two models can be ascribed to some simplifying assumptions used in PVI2D and to some numerical limitations of the AJM in simulating strictly piecewise aerobic biodegradation and no-flux boundary conditions. Overall, the obtained results show that for cases involving a homogenous source and soil, PVI2D can represent a valid alternative to more rigorous three-dimensional numerical models. PMID:29398981

  10. Characterization of the ITER model negative ion source during long pulse operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemsworth, R.S.; Boilson, D.; Crowley, B.

    2006-03-15

    It is foreseen to operate the neutral beam system of the International Thermonuclear Experimental Reactor (ITER) for pulse lengths extending up to 1 h. The performance of the KAMABOKO III negative ion source, which is a model of the source designed for ITER, is being studied on the MANTIS test bed at Cadarache. This article reports the latest results from the characterization of the ion source, in particular electron energy distribution measurements and the comparison between positive ion and negative ion extraction from the source.

  11. Minimum-norm cortical source estimation in layered head models is robust against skull conductivity error

    PubMed Central

    Stenroos, Matti; Hauk, Olaf

    2013-01-01

    The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity of the most important compartment, the skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically shaped three-layer anatomical head models were constructed, and forward models were built with the Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259
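
    At its core, the minimum-norm estimator applies a linear spatial filter W = L^T (L L^T + lambda^2 C)^-1 to the sensor data, with L the lead-field matrix and C the noise covariance. A minimal sketch follows, with a random lead field and identity noise covariance standing in for the realistic BEM forward models used in the study:

      import numpy as np

      rng = np.random.default_rng(1)
      n_sensors, n_sources = 64, 500
      L = rng.standard_normal((n_sensors, n_sources))  # lead-field (stand-in)
      s_true = np.zeros(n_sources)
      s_true[42] = 1.0                                 # one active source
      y = L @ s_true + 0.05 * rng.standard_normal(n_sensors)

      lam2 = 0.1                                       # regularization strength
      W = L.T @ np.linalg.inv(L @ L.T + lam2 * np.eye(n_sensors))
      s_hat = W @ y                                    # minimum-norm estimate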

  12. Different approaches to modeling the LANSCE H{sup −} ion source filament performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draganic, I. N., E-mail: draganic@lanl.gov; O’Hara, J. F.; Rybarcyk, L. J.

    2016-02-15

    An overview of different approaches to modeling of hot tungsten filament performance in the Los Alamos Neutron Science Center (LANSCE) H{sup −} surface converter ion source is presented. The most critical components in this negative ion source are two specially shaped wire filaments heated up to the working temperature range of 2600 K–2700 K during normal beam production. In order to prevent catastrophic filament failures (creation of hot spots, wire breaking, excessive filament deflection towards the source body, etc.) and to improve understanding of the material erosion processes, we have simulated the filament performance using three different models: a semi-empirical model, a thermal finite-element analysis model, and an analytical model. Results of all three models were compared with data taken during LANSCE beam production. The models were used to support the recent successful transition of the beam pulse repetition rate from 60 Hz to 120 Hz.

  13. Different approaches to modeling the LANSCE H- ion source filament performance

    NASA Astrophysics Data System (ADS)

    Draganic, I. N.; O'Hara, J. F.; Rybarcyk, L. J.

    2016-02-01

    An overview of different approaches to modeling of hot tungsten filament performance in the Los Alamos Neutron Science Center (LANSCE) H- surface converter ion source is presented. The most critical components in this negative ion source are two specially shaped wire filaments heated up to the working temperature range of 2600 K-2700 K during normal beam production. In order to prevent catastrophic filament failures (creation of hot spots, wire breaking, excessive filament deflection towards the source body, etc.) and to improve understanding of the material erosion processes, we have simulated the filament performance using three different models: a semi-empirical model, a thermal finite-element analysis model, and an analytical model. Results of all three models were compared with data taken during LANSCE beam production. The models were used to support the recent successful transition of the beam pulse repetition rate from 60 Hz to 120 Hz.

  14. Fine Particle Sources and Cardiorespiratory Morbidity: An Application of Chemical Mass Balance and Factor Analytical Source-Apportionment Methods

    PubMed Central

    Sarnat, Jeremy A.; Marmur, Amit; Klein, Mitchel; Kim, Eugene; Russell, Armistead G.; Sarnat, Stefanie E.; Mulholland, James A.; Hopke, Philip K.; Tolbert, Paige E.

    2008-01-01

    Background Interest in the health effects of particulate matter (PM) has focused on identifying sources of PM, including biomass burning, power plants, and gasoline and diesel emissions, that may be associated with adverse health risks. Few epidemiologic studies, however, have included source-apportionment estimates in their examinations of PM health effects. We analyzed a time series of chemically speciated PM measurements in Atlanta, Georgia, and conducted an epidemiologic analysis using data from three distinct source-apportionment methods. Objective The key objective of this analysis was to compare epidemiologic findings generated using both factor analysis and mass balance source-apportionment methods. Methods We analyzed data collected between November 1998 and December 2002 using positive matrix factorization (PMF), modified chemical mass balance (CMB-LGO), and a tracer approach. Emergency department (ED) visits for a combined cardiovascular (CVD) and respiratory disease (RD) group were assessed as end points. We estimated the risk ratio (RR) associated with same-day PM concentrations using Poisson generalized linear models. Results There were significant, positive associations between same-day PM2.5 (PM with aerodynamic diameter ≤ 2.5 μm) concentrations attributed to the mobile source (RR range, 1.018–1.025) and biomass combustion (RR range, 1.024–1.033; primarily prescribed forest burning and residential wood combustion) source categories and CVD-related ED visits. Associations between the source categories and RD visits were not significant in any model except for sulfate-rich secondary PM2.5 (RR range, 1.012–1.020). Generally, the epidemiologic results were robust to the selection of source-apportionment method, with strong agreement between the RR estimates from the PMF and CMB-LGO models, as well as with results from models using single-species tracers as surrogates of the source-apportioned PM2.5 values. Conclusions Despite differences among the source-apportionment methods, these findings suggest that modeled source-apportioned data can produce robust estimates of acute health risk. In Atlanta, there were consistent associations across methods between PM2.5 from mobile sources and biomass burning with both cardiovascular and respiratory ED visits, and between sulfate-rich secondary PM2.5 and respiratory visits. PMID:18414627

  15. A Bayesian Multivariate Receptor Model for Estimating Source Contributions to Particulate Matter Pollution using National Databases.

    PubMed

    Hackstadt, Amber J; Peng, Roger D

    2014-11-01

    Time series studies have suggested that air pollution can negatively impact health. These studies have typically focused on the total mass of fine particulate matter air pollution or the individual chemical constituents that contribute to it, and not source-specific contributions to air pollution. Source-specific contribution estimates are useful from a regulatory standpoint by allowing regulators to focus limited resources on reducing emissions from sources that are major contributors to air pollution and are also desired when estimating source-specific health effects. However, researchers often lack direct observations of the emissions at the source level. We propose a Bayesian multivariate receptor model to infer information about source contributions from ambient air pollution measurements. The proposed model incorporates information from national databases containing data on both the composition of source emissions and the amount of emissions from known sources of air pollution. The proposed model is used to perform source apportionment analyses for two distinct locations in the United States (Boston, Massachusetts and Phoenix, Arizona). Our results mirror previous source apportionment analyses that did not utilize the information from national databases and provide additional information about uncertainty that is relevant to the estimation of health effects.

  16. Variations in AmLi source spectra and their estimation utilizing the 5 Ring Multiplicity Counter

    NASA Astrophysics Data System (ADS)

    Weinmann-Smith, R.; Beddingfield, D. H.; Enqvist, A.; Swinhoe, M. T.

    2017-06-01

    Active-mode assay systems are widely used in safeguards of uranium items to verify compliance with the Non-Proliferation Treaty. Systems such as the Active-Well Coincidence Counter (AWCC) and the Uranium Neutron Coincidence Collar (UNCL) use americium-lithium (AmLi) neutron sources to induce fissions, which are measured to determine the sample mass. These systems have historically relied on calibrations derived from well-defined standards. Recently, restricted access to standards or more difficult measurements has resulted in a reliance on modeling and simulation for the calibration of systems, which introduces potential simulation biases. The AmLi source energy spectra commonly used in the safeguards community do not accurately represent measurement results, and the spectrum uncertainty can represent a large contribution to the total modeling uncertainty in active-mode systems. The 5-Ring Multiplicity Counter (5RMC) has been used to measure 17 AmLi sources. The measurements showed a significant spectral variation between different sources. Utilizing a spectrum that is specific to an individual source or a series of sources will give improved results over historical general spectra when modeling AmLi sources. Candidate AmLi neutron spectra were calculated in MCNP and SOURCES4C for a range of physical AmLi characteristics. The measurement and simulation data were used to fit reliable and accurate AmLi spectra for use in the simulation of active-mode systems. Spectra were created for average Gammatron C, Gammatron N, and MRC series sources, and for individual sources. The systematic uncertainty introduced by physical aspects of the AmLi source was characterized through simulations. The accuracy of spectra from the literature was also evaluated.

  17. Surface-water nutrient conditions and sources in the United States Pacific Northwest

    USGS Publications Warehouse

    Wise, D.R.; Johnson, H.M.

    2011-01-01

    The SPAtially Referenced Regressions On Watershed attributes (SPARROW) model was used to perform an assessment of surface-water nutrient conditions and to identify important nutrient sources in watersheds of the Pacific Northwest region of the United States (U.S.) for the year 2002. Our models included variables representing nutrient sources as well as landscape characteristics that affect nutrient delivery to streams. Annual nutrient yields were higher in watersheds on the wetter, west side of the Cascade Range compared to watersheds on the drier, east side. High nutrient enrichment (relative to the U.S. Environmental Protection Agency's recommended nutrient criteria) was estimated in watersheds throughout the region. Forest land was generally the largest source of total nitrogen stream load and geologic material was generally the largest source of total phosphorus stream load generated within the 12,039 modeled watersheds. These results reflected the prevalence of these two natural sources and the low input from other nutrient sources across the region. However, the combined input from agriculture, point sources, and developed land, rather than natural nutrient sources, was responsible for most of the nutrient load discharged from many of the largest watersheds. Our results provided an understanding of the regional patterns in surface-water nutrient conditions and should be useful to environmental managers in future water-quality planning efforts.

  18. Potential sources of nitrous acid (HONO) and their impacts on ozone: A WRF-Chem study in a polluted subtropical region

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Wang, Tao; Zhang, Qiang; Zheng, Junyu; Xu, Zheng; Lv, Mengyao

    2016-04-01

    Current chemical transport models commonly underpredict the atmospheric concentration of nitrous acid (HONO), which plays an important role in atmospheric chemistry, due to missing or inappropriate representations of some sources in the models. In the present study, we parameterized up-to-date HONO sources into a state-of-the-art three-dimensional chemical transport model (Weather Research and Forecasting model coupled with Chemistry: WRF-Chem). These sources included (1) heterogeneous reactions on ground surfaces with the photoenhanced effect on HONO production, (2) photoenhanced reactions on aerosol surfaces, (3) direct vehicle and vessel emissions, (4) potential conversion of NO2 at the ocean surface, and (5) emissions from soil bacteria. The revised WRF-Chem was applied to explore the sources of the high HONO concentrations (0.45-2.71 ppb) observed at a suburban site located within complex land types (with artificial land covers, ocean, and forests) in Hong Kong. With the addition of these sources, the revised model substantially reproduced the observed HONO levels. The heterogeneous conversion of NO2 on ground surfaces dominated the HONO sources, contributing about 42% of the observed HONO mixing ratios, with emissions from soil bacteria contributing around 29%, followed by the oceanic source (~9%), photochemical formation via NO and OH (~6%), conversion on aerosol surfaces (~3%), and traffic emissions (~2%). The results suggest that HONO sources in suburban areas could be more complex and diverse than those in urban or rural areas and that bacterial and/or ocean processes need to be considered in HONO production in forested and/or coastal areas. Sensitivity tests showed that the simulated HONO was sensitive to the uptake coefficient of NO2 on the surfaces. Incorporation of the aforementioned HONO sources significantly improved the simulations of ozone, resulting in increases of ground-level ozone concentrations by 6-12% over urban areas in Hong Kong and the Pearl River Delta region. This result highlights the importance of accurately representing HONO sources in simulations of secondary pollutants over polluted regions.

  19. The Source Inversion Validation (SIV) Initiative: A Collaborative Study on Uncertainty Quantification in Earthquake Source Inversions

    NASA Astrophysics Data System (ADS)

    Mai, P. M.; Schorlemmer, D.; Page, M.

    2012-04-01

    Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; and (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained from solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state of the art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and how they assess posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.

  20. An alternative approach to probabilistic seismic hazard analysis in the Aegean region using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Weatherill, Graeme; Burton, Paul W.

    2010-09-01

    The Aegean is the most seismically active and tectonically complex region in Europe. Damaging earthquakes have occurred here throughout recorded history, often resulting in considerable loss of life. The Monte Carlo method of probabilistic seismic hazard analysis (PSHA) is used to determine the level of ground motion likely to be exceeded in a given time period. Multiple random simulations of seismicity are generated to calculate the ground motion for a given site directly. Within the seismic hazard analysis we explore the impact of different seismic source models, incorporating both uniform zones and distributed seismicity. A new, simplified seismic source model, derived from seismotectonic interpretation, is presented for the Aegean region. This is combined into the epistemic uncertainty analysis alongside existing source models for the region, and models derived by a K-means cluster analysis approach. Seismic source models derived using the K-means approach offer a degree of objectivity and reproducibility to the otherwise subjective approach of delineating seismic sources using expert judgment. Similar review and analysis is undertaken for the selection of peak ground acceleration (PGA) attenuation models, incorporating into the epistemic analysis Greek-specific models, European models and a Next Generation Attenuation model. Hazard maps for PGA on a "rock" site with a 10% probability of being exceeded in 50 years are produced and different source and attenuation models are compared. These indicate that Greek-specific attenuation models, with their smaller aleatory variability terms, produce lower PGA hazard, whilst recent European models and the Next Generation Attenuation (NGA) model produce similar results. The Monte Carlo method is extended further to assimilate epistemic uncertainty into the hazard calculation, thus integrating across several appropriate source and PGA attenuation models. Site condition and fault type are also integrated into the hazard mapping calculations. These hazard maps are in general agreement with previous maps for the Aegean, recognising the highest hazard in the Ionian Islands, Gulf of Corinth and Hellenic Arc. Peak ground accelerations for some sites in these regions reach as high as 500-600 cm s^-2 using European/NGA attenuation models, and 400-500 cm s^-2 using Greek attenuation models.
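
    The Monte Carlo PSHA recipe can be sketched in a few lines: generate a long synthetic catalog from a truncated Gutenberg-Richter law, attach ground motions through an attenuation relation with lognormal scatter, and count exceedances. In the sketch below every source-zone and attenuation coefficient is hypothetical, purely to show the mechanics.

      import numpy as np

      rng = np.random.default_rng(2)
      years, rate = 50_000, 0.2        # catalog length (yr); events/yr >= mmin
      mmin, mmax, b = 4.5, 7.5, 1.0    # truncated Gutenberg-Richter parameters

      n = rng.poisson(rate * years)
      u = rng.random(n)                # invert the truncated G-R CDF
      m = mmin - np.log10(1 - u * (1 - 10**(-b * (mmax - mmin)))) / b
      r = 5 + 100 * np.sqrt(rng.random(n))   # crude source-to-site distances (km)

      # Toy attenuation model: ln PGA (cm/s^2) with 0.6 aleatory sigma.
      pga = np.exp(-1.0 + 1.2 * m - 1.1 * np.log(r)
                   + 0.6 * rng.standard_normal(n))

      lam = (pga > 100).sum() / years      # annual rate of exceeding 100 cm/s^2
      print(lam, 1 - np.exp(-lam * 50))    # probability of exceedance in 50 yr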

  1. Fermi Large Area Telescope First Source Catalog

    DOE PAGES

    Abdo, A. A.; Ackermann, M.; Ajello, M.; ...

    2010-05-25

    Here, we present a catalog of high-energy gamma-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), during the first 11 months of the science phase of the mission, which began on 2008 August 4. The First Fermi-LAT catalog (1FGL) contains 1451 sources detected and characterized in the 100 MeV to 100 GeV range. Source detection was based on the average flux over the 11 month period, and the threshold likelihood Test Statistic is 25, corresponding to a significance of just over 4σ. The 1FGL catalog includes source location regions, defined in terms of elliptical fits to the 95% confidence regions, and power-law spectral fits as well as flux measurements in five energy bands for each source. In addition, monthly light curves are provided. Using a protocol defined before launch we have tested for several populations of gamma-ray sources among the sources in the catalog. For individual LAT-detected sources we provide firm identifications or plausible associations with sources in other astronomical catalogs. Identifications are based on correlated variability with counterparts at other wavelengths, or on spin or orbital periodicity. For the catalogs and association criteria that we have selected, 630 of the sources are unassociated. Finally, care was taken to characterize the sensitivity of the results to the model of interstellar diffuse gamma-ray emission used to model the bright foreground, with the result that 161 sources at low Galactic latitudes and toward bright local interstellar clouds are flagged as having properties that are strongly dependent on the model or as potentially being due to incorrectly modeled structure in the Galactic diffuse emission.

  2. A Novel Approach for Determining Source-Receptor Relationships of Aerosols in Model Simulations

    NASA Astrophysics Data System (ADS)

    Ma, P.; Gattiker, J.; Liu, X.; Rasch, P. J.

    2013-12-01

    The climate modeling community usually performs sensitivity studies in a 'one-factor-at-a-time' fashion. However, owing to the a priori unknown complexity and nonlinearity of the climate system and simulation response, it is computationally expensive to systematically identify the cause and effect of multiple factors in climate models. In this study, we use a Gaussian Process emulator, based on a small number of Community Atmosphere Model Version 5.1 (CAM5) simulations (constrained by meteorological reanalyses) using a Latin hypercube experimental design, to demonstrate that it is possible to characterize model behavior accurately and very efficiently without any modifications to the model itself. We use the emulator to characterize the source-receptor relationships of black carbon (BC), focusing specifically on describing the constituent burden and surface deposition rates from emissions in various regions. Our results show that the emulator is capable of quantifying the contribution of aerosol burden and surface deposition from different source regions, finding that most of the current Arctic BC comes from remote sources. We also demonstrate that the sensitivity of the BC burdens to emission perturbations differs for various source regions. For example, emission growth in Africa, where dry convection is strong, results in a moderate increase of the BC burden over the globe, while the same emission growth in the Arctic leads to a significant increase of local BC burdens and surface deposition rates. These results provide insights into the dynamical, physical, and chemical processes of the climate model, and the conclusions may have policy implications for making cost-effective global and regional pollution management strategies.
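
    A minimal version of this emulation strategy takes a Latin hypercube design over the input factors, a handful of "model runs", and a Gaussian Process fit that predicts the response with uncertainty. In the sketch below the two emission scaling factors and the smooth stand-in for CAM5 output are hypothetical.

      import numpy as np
      from scipy.stats import qmc
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      # Latin hypercube design over two hypothetical emission scaling factors.
      sampler = qmc.LatinHypercube(d=2, seed=3)
      X = qmc.scale(sampler.random(30), [0.5, 0.5], [2.0, 2.0])

      # Stand-in for expensive CAM5 runs: a smooth BC-burden response.
      rng = np.random.default_rng(3)
      y = 1.3 * X[:, 0] + 0.4 * X[:, 1]**2 + 0.01 * rng.standard_normal(30)

      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                    normalize_y=True)
      gp.fit(X, y)
      mean, std = gp.predict([[1.5, 1.0]], return_std=True)  # prediction + 1-sigma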

  3. 4D volcano gravimetry

    USGS Publications Warehouse

    Battaglia, Maurizio; Gottsmann, J.; Carbone, D.; Fernandez, J.

    2008-01-01

    Time-dependent gravimetric measurements can detect subsurface processes long before magma flow leads to earthquakes or other eruption precursors. The ability of gravity measurements to detect subsurface mass flow is greatly enhanced if gravity measurements are analyzed and modeled together with ground-deformation data. Obtaining the maximum information from microgravity studies requires careful evaluation of the layout of network benchmarks, the gravity environmental signal, and the coupling between gravity changes and crustal deformation. When changes in the system under study are fast (hours to weeks), as in hydrothermal systems and restless volcanoes, continuous gravity observations at selected sites can help to capture many details of the dynamics of the intrusive sources. Despite the instrumental effects, mainly caused by atmospheric temperature, results from monitoring at Mt. Etna volcano show that continuous measurements are a powerful tool for monitoring and studying volcanoes. Several analytical and numerical mathematical models can be used to fit gravity and deformation data. Analytical models offer a closed-form description of the volcanic source. In principle, this allows one to readily infer the relative importance of the source parameters. In active volcanic sites such as Long Valley caldera (California, U.S.A.) and Campi Flegrei (Italy), careful use of analytical models and high-quality data sets has produced good results. However, the simplifications that make analytical models tractable might result in misleading volcanological interpretations, particularly when the real crust surrounding the source is far from the homogeneous/isotropic assumption. Using numerical models allows consideration of more realistic descriptions of the sources and of the crust where they are located (e.g., vertical and lateral mechanical discontinuities, complex source geometries, and topography). Applications at Teide volcano (Tenerife) and Campi Flegrei demonstrate the importance of this more realistic description in gravity calculations. © 2008 Society of Exploration Geophysicists. All rights reserved.

  4. Modeling the influence of coupled mass transfer processes on mass flux downgradient of heterogeneous DNAPL source zones

    NASA Astrophysics Data System (ADS)

    Yang, Lurong; Wang, Xinyu; Mendoza-Sanchez, Itza; Abriola, Linda M.

    2018-04-01

    Sequestered mass in low permeability zones has been increasingly recognized as an important source of organic chemical contamination that acts to sustain downgradient plume concentrations above regulated levels. However, few modeling studies have investigated the influence of this sequestered mass and associated (coupled) mass transfer processes on plume persistence in complex dense nonaqueous phase liquid (DNAPL) source zones. This paper employs a multiphase flow and transport simulator (a modified version of the modular transport simulator MT3DMS) to explore the two- and three-dimensional evolution of source zone mass distribution and near-source plume persistence for two ensembles of highly heterogeneous DNAPL source zone realizations. Simulations reveal the strong influence of subsurface heterogeneity on the complexity of DNAPL and sequestered (immobile/sorbed) mass distribution. Small zones of entrapped DNAPL are shown to serve as a persistent source of low concentration plumes, difficult to distinguish from other (sorbed and immobile dissolved) sequestered mass sources. Results suggest that the presence of DNAPL tends to control plume longevity in the near-source area; for the examined scenarios, a substantial fraction (43.3-99.2%) of plume life was sustained by DNAPL dissolution processes. The presence of sorptive media and the extent of sorption non-ideality are shown to greatly affect predictions of near-source plume persistence following DNAPL depletion, with plume persistence varying one to two orders of magnitude with the selected sorption model. Results demonstrate the importance of sorption-controlled back diffusion from low permeability zones and reveal the importance of selecting the appropriate sorption model for accurate prediction of plume longevity. Large discrepancies for both DNAPL depletion time and plume longevity were observed between 2-D and 3-D model simulations. Differences between 2- and 3-D predictions increased in the presence of sorption, especially for the case of non-ideal sorption, demonstrating the limitations of employing 2-D predictions for field-scale modeling.

  5. Modeling of ESD events from polymeric surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pfeifer, Kent Bryant

    2014-03-01

    Transient electrostatic discharge (ESD) events are studied to assemble a predictive model of discharge from polymer surfaces. An analog circuit simulation is produced and its response is compared to various literature sources to explore its capabilities and limitations. Results suggest that polymer ESD events can be predicted to within an order of magnitude. These results compare well to empirical findings from other sources having similar reproducibility.

  6. A systematic evaluation of the dose-rate constant determined by photon spectrometry for 21 different models of low-energy photon-emitting brachytherapy sources.

    PubMed

    Chen, Zhe Jay; Nath, Ravinder

    2010-10-21

    The aim of this study was to perform a systematic comparison of the dose-rate constant (Λ) determined by the photon spectrometry technique (PST) with the consensus value ((CON)Λ) recommended by the American Association of Physicists in Medicine (AAPM) for 21 low-energy photon-emitting interstitial brachytherapy sources. A total of 63 interstitial brachytherapy sources (21 different models with 3 sources per model) containing either (125)I (14 models), (103)Pd (6 models) or (131)Cs (1 model) were included in this study. The PST described by Chen and Nath (2007 Med. Phys. 34 1412-30) was used to determine the dose-rate constant ((PST)Λ) for each source model. Source-dependent variations in (PST)Λ were analyzed systematically against the spectral characteristics of the emitted photons and the consensus values recommended by the AAPM brachytherapy subcommittee. The values of (PST)Λ for the encapsulated sources of (103)Pd, (125)I and (131)Cs varied from 0.661 to 0.678 cGy h(-1) U(-1), 0.959 to 1.024 cGy h(-1) U(-1) and 1.066 to 1.073 cGy h(-1) U(-1), respectively. The relative variation in (PST)Λ among the six (103)Pd source models, caused by variations in photon attenuation and in spatial distributions of radioactivity among the source models, was less than 3%. Greater variations in (PST)Λ were observed among the 14 (125)I source models; the maximum relative difference was over 6%. These variations were caused primarily by the presence of silver in some (125)I source models and, to a lesser degree, by the variations in photon attenuation and in spatial distribution of radioactivity among the source models. The presence of silver generates additional fluorescent x-rays with lower photon energies, which caused the (PST)Λ value to vary from 0.959 to 1.019 cGy h(-1) U(-1) depending on the amount of silver used by a given source model. For those (125)I sources that contain no silver, (PST)Λ was less variable and had values within 1% of 1.024 cGy h(-1) U(-1). For the 16 source models that currently have an AAPM recommended (CON)Λ value, the difference between (PST)Λ and (CON)Λ was less than 2% for 15 models and 2.6% for one (103)Pd source model. Excellent agreement between (PST)Λ and (CON)Λ was thus observed for all source models that currently have an AAPM recommended consensus dose-rate constant value. These results demonstrate that the PST is an accurate and robust technique for the determination of the dose-rate constant for low-energy brachytherapy sources.

  7. Evaluating agricultural nonpoint-source pollution using integrated geographic information systems and hydrologic/water quality model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tim, U.S.; Jolly, R.

    1994-01-01

    Considerable progress has been made in developing physically based, distributed-parameter, hydrologic/water quality (H/WQ) models for planning and control of nonpoint-source pollution. The widespread use of these models is often constrained by excessive and time-consuming input data demands and the lack of computing efficiencies necessary for iterative simulation of alternative management strategies. Recent developments in geographic information systems (GIS) provide techniques for handling large amounts of spatial data for modeling nonpoint-source pollution problems. Because a GIS can be used to combine information from several sources to form an array of model input data and to examine any combination of spatial input/output data, it represents a highly effective tool for H/WQ modeling. This paper describes the integration of a distributed-parameter model (AGNPS) with a GIS (ARC/INFO) to examine nonpoint sources of pollution in an agricultural watershed. The ARC/INFO GIS provided the tools to generate and spatially organize the disparate data to support modeling, while the AGNPS model was used to predict several water quality variables including soil erosion and sedimentation within a watershed. The integrated system was used to evaluate the effectiveness of several alternative management strategies in reducing sediment pollution in a 417-ha watershed located in southern Iowa. The implementation of vegetative filter strips and contour buffer (grass) strips resulted in a 41 and 47% reduction in sediment yield at the watershed outlet, respectively, and the combination of the two management strategies resulted in a 71% reduction. In general, the study demonstrated the utility of integrating a simulation model with GIS for nonpoint-source pollution control and planning. Such techniques can help characterize the diffuse sources of pollution at the landscape level. 52 refs., 6 figs., 1 tab.

  8. Paying attention to attention in recognition memory: insights from models and electrophysiology.

    PubMed

    Dubé, Chad; Payne, Lisa; Sekuler, Robert; Rotello, Caren M

    2013-12-01

    Reliance on remembered facts or events requires memory for their sources, that is, the contexts in which those facts or events were embedded. Understanding of source retrieval has been stymied by the fact that uncontrolled fluctuations of attention during encoding can cloud results of key importance to theoretical development. To address this issue, we combined electrophysiology (high-density electroencephalogram [EEG] recordings) with computational modeling of behavioral results. We manipulated subjects' attention to an auditory attribute: whether the source of individual study words was a male or female speaker. Posterior alpha-band (8-14 Hz) power in subjects' EEG increased after a cue to ignore the voice of the person who was about to speak. Receiver-operating-characteristic (ROC) analysis validated our interpretation of oscillatory dynamics as a marker of attention to source information. With attention under experimental control, computational modeling showed unequivocally that memory for source (male or female speaker) reflected a continuous signal detection process rather than a threshold recollection process.
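
    The ROC logic behind that conclusion can be shown in a few lines: a continuous (unequal-variance) signal detection process predicts a linear z-transformed ROC, typically with slope below 1, whereas a threshold process predicts a linear ROC in probability coordinates. The hit and false-alarm rates below are hypothetical.

      import numpy as np
      from scipy.stats import norm

      # Cumulative hit/false-alarm rates across confidence criteria (made up).
      hits = np.array([0.95, 0.85, 0.70, 0.50, 0.30])
      fas = np.array([0.80, 0.55, 0.35, 0.20, 0.10])

      z_h, z_f = norm.ppf(hits), norm.ppf(fas)
      slope, intercept = np.polyfit(z_f, z_h, 1)  # linear zROC fit
      print(slope, intercept)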

  9. Aerosol Source Attributions and Source-Receptor Relationships Across the Northern Hemisphere

    NASA Technical Reports Server (NTRS)

    Bian, Huisheng; Chin, Mian; Kucsera, Tom; Pan, Xiaohua; Darmenov, Anton; Colarco, Peter; Torres, Omar; Shults, Michael

    2014-01-01

    Emissions and long-range transport of air pollution pose major concerns for air quality and climate change. To better assess the impact of intercontinental transport of air pollution on regional and global air quality, ecosystems, and near-term climate change, the UN Task Force on Hemispheric Transport of Air Pollution (HTAP) is organizing a phase II activity (HTAP2) that includes global and regional model experiments and data analysis, focusing on ozone and aerosols. This study presents the initial results of the HTAP2 global aerosol modeling experiments. We will (a) evaluate the model results with surface and aircraft measurements, (b) examine the relative contributions of regional emissions and extra-regional sources to surface PM concentrations and column aerosol optical depth (AOD) over several Northern Hemisphere pollution and dust source regions and the Arctic, and (c) quantify the source-receptor relationships in the pollution regions, which reflect the sensitivity of regional aerosol amounts to regional and extra-regional emission reductions.

  10. Differentiability of simulated MEG hippocampal, medial temporal and neocortical temporal epileptic spike activity.

    PubMed

    Stephen, Julia M; Ranken, Doug M; Aine, Cheryl J; Weisend, Michael P; Shih, Jerry J

    2005-12-01

    Previous studies have shown that magnetoencephalography (MEG) can measure hippocampal activity, despite the hippocampus's cylindrical shape and deep location in the brain. The current study extended this work by examining the ability to differentiate the hippocampal subfields, parahippocampal cortex, and neocortical temporal sources using simulated interictal epileptic activity. A model of the hippocampus was generated on the MRIs of five subjects. CA1, CA3, and the dentate gyrus of the hippocampus were activated, as well as entorhinal cortex, presubiculum, and neocortical temporal cortex. In addition, pairs of sources were activated sequentially to emulate various hypotheses of mesial temporal lobe seizure generation. The simulated MEG activity was added to real background brain activity from the five subjects and modeled using a multidipole spatiotemporal modeling technique. The waveforms and source locations/orientations for hippocampal and parahippocampal sources were differentiable from neocortical temporal sources. In addition, hippocampal and parahippocampal sources were differentiated from each other to varying degrees depending on the source. The sequential activation of hippocampal and parahippocampal sources was adequately modeled by a single source; however, these sources were not resolvable when they overlapped in time. These results suggest that MEG has the sensitivity to distinguish parahippocampal and hippocampal spike generators in mesial temporal lobe epilepsy.

  11. SU-F-T-54: Determination of the AAPM TG-43 Brachytherapy Dosimetry Parameters for A New Titanium-Encapsulated Yb-169 Source by Monte Carlo Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynoso, F; Washington University School of Medicine, St. Louis, MO; Munro, J

    2016-06-15

    Purpose: To determine the AAPM TG-43 brachytherapy dosimetry parameters of a new titanium-encapsulated Yb-169 source designed to maximize the dose enhancement during gold nanoparticle-aided radiation therapy (GNRT). Methods: An existing Monte Carlo (MC) model of the titanium-encapsulated Yb-169 source, which was described in the current investigators' published MC optimization study, was modified based on the source manufacturer's detailed specifications, resulting in an accurate model of the titanium-encapsulated Yb-169 source that was actually manufactured. MC calculations were then performed using the MCNP5 code system and the modified source model, in order to obtain a complete set of the AAPM TG-43 parameters for the new Yb-169 source. Results: The MC-calculated dose rate constant for the new titanium-encapsulated Yb-169 source was 1.05 ± 0.03 cGy h(-1) U(-1), indicating about a 10% decrease from the values reported for conventional stainless steel-encapsulated Yb-169 sources. The source anisotropy and radial dose function for the new source were found to be similar to those reported for conventional Yb-169 sources. Conclusion: In this study, the AAPM TG-43 brachytherapy dosimetry parameters of a new titanium-encapsulated Yb-169 source were determined by MC calculations. The current results suggest that the use of titanium, instead of stainless steel, to encapsulate the Yb-169 core would not lead to any major change in the dosimetric characteristics of the source, while allowing more low-energy photons to be transmitted through the source filter, thereby leading to an increased dose enhancement during GNRT. This investigation was supported by DOD/PCRP grant W81XWH-12-1-0198.

  12. A comparative study of spherical and flat-Earth geopotential modeling at satellite elevations

    NASA Technical Reports Server (NTRS)

    Parrott, M. H.; Hinze, W. J.; Braile, L. W.

    1985-01-01

    Flat-Earth and spherical-Earth geopotential modeling of crustal anomaly sources at satellite elevations are compared by computing gravity and scalar magnetic anomalies perpendicular to the strike of variably dimensioned rectangular prisms at altitudes of 150, 300, and 450 km. Results indicate that the error caused by the flat-Earth approximation is less than 10% under most geometric conditions. Generally, errors increase with larger and wider anomaly sources and at higher altitudes. For most crustal source modeling applications at conventional satellite altitudes, flat-Earth modeling can be justified and is numerically efficient.

  13. Optimization of radioactive sources to achieve the highest precision in three-phase flow meters using Jaya algorithm.

    PubMed

    Roshani, G H; Karami, A; Khazaei, A; Olfateh, A; Nazemi, E; Omidi, M

    2018-05-17

    The gamma-ray source plays a very important role in the precision of multiphase flow metering. In this study, different combinations of gamma-ray sources ((133Ba-137Cs), (133Ba-60Co), (241Am-137Cs), (241Am-60Co), (133Ba-241Am) and (60Co-137Cs)) were investigated in order to optimize the three-phase flow meter. The three phases were water, oil and gas, and the flow regime was considered annular. The required data were generated numerically using the Monte Carlo code MCNP-X. The present study forecasts the volume fractions in the annular three-phase flow based on a multi-energy metering system, comprising one of the radiation source combinations and one NaI detector, using a hybrid model of an artificial neural network and the Jaya optimization algorithm. Since the summation of the volume fractions is constant, a constrained modeling problem exists, meaning that the hybrid model must forecast only two volume fractions. Six hybrid models, one for each radiation source combination, were designed, trained on the numerically obtained data, and employed to forecast the gas and water volume fractions. The results show that the best forecasts of the gas and water volume fractions are obtained with the system that includes (241Am-137Cs) as the radiation source. Copyright © 2018 Elsevier Ltd. All rights reserved.
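
    Jaya itself is a parameter-light population optimizer: each candidate moves toward the current best solution and away from the worst. The sketch below is a generic Jaya minimizer with a toy objective standing in for the network training error of the paper; every constant is hypothetical.

      import numpy as np

      def jaya(f, bounds, pop=30, iters=500, seed=4):
          """Minimal Jaya: drift toward the best, away from the worst."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          x = rng.uniform(lo, hi, size=(pop, lo.size))
          cost = np.apply_along_axis(f, 1, x)
          for _ in range(iters):
              best, worst = x[np.argmin(cost)], x[np.argmax(cost)]
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              cand = np.clip(x + r1 * (best - np.abs(x))
                             - r2 * (worst - np.abs(x)), lo, hi)
              c = np.apply_along_axis(f, 1, cand)
              win = c < cost                      # greedy acceptance
              x[win], cost[win] = cand[win], c[win]
          return x[np.argmin(cost)], cost.min()

      # Toy stand-in: fit two volume fractions in [0, 1].
      best, err = jaya(lambda v: (v[0] - 0.2)**2 + (v[1] - 0.5)**2,
                       bounds=[(0, 1), (0, 1)])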

  14. Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework

    PubMed Central

    Talluto, Matthew V.; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C. Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A.; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique

    2016-01-01

    Aim Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location Eastern North America (as an example). Methods Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. Results For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. Main conclusions We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making. PMID:27499698

  15. Non-cavitating propeller noise modeling and inversion

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Lee, Keunhwa; Seong, Woojae

    2014-12-01

    The marine propeller is the dominant exciter of the hull surface above it, causing high levels of noise and vibration in the ship structure. Recent successful developments have led to non-cavitating propeller designs, and thus the present focus is on the non-cavitating characteristics of the propeller, such as hydrodynamic noise and its induced hull excitation. In this paper, an analytic source model of propeller non-cavitating noise, described by longitudinal quadrupoles and dipoles, is proposed based on the propeller hydrodynamics. To find the unknown source parameters, a multi-parameter inversion technique is adopted using pressure data obtained from a model-scale experiment and pressure field replicas calculated by the boundary element method. The inversion results show that the proposed source model is appropriate for modeling non-cavitating propeller noise. The results of this study can be utilized in the prediction of propeller non-cavitating noise and hull excitation at various stages of design and analysis.

  16. Present status of numerical modeling of hydrogen negative ion source plasmas and its comparison with experiments: Japanese activities and their collaboration with experimental groups

    NASA Astrophysics Data System (ADS)

    Hatayama, A.; Nishioka, S.; Nishida, K.; Mattei, S.; Lettry, J.; Miyamoto, K.; Shibata, T.; Onai, M.; Abe, S.; Fujita, S.; Yamada, S.; Fukano, A.

    2018-06-01

    The present status of kinetic modeling of particle dynamics in hydrogen negative ion (H‑) source plasmas and their comparisons with experiments are reviewed and discussed with some new results. The main focus is placed on the following topics, which are important for the research and development of H‑ sources for intense and high-quality H‑ ion beams: (i) effects of non-equilibrium features of electron energy distribution function on volume and surface H‑ production, (ii) the origin of the spatial non-uniformity in giant multi-cusp arc-discharge H‑ sources, (iii) capacitive to inductive (E to H) mode transition in radio frequency-inductively coupled plasma H‑ sources and (iv) extraction physics of H‑ ions and beam optics, especially the present understanding of the meniscus formation in strongly electronegative plasmas (so-called ion–ion plasmas) and its effect on beam optics. For these topics, mainly Japanese modeling activities, and their domestic and international collaborations with experimental studies, are introduced with some examples showing how models have been improved and to what extent the modeling studies can presently contribute to improving the source performance. Close collaboration between experimental and modeling activities is indispensable for the validation/improvement of the modeling and its contribution to the source design/development.

  17. Diagnostic Air Quality Model Evaluation of Source-Specific ...

    EPA Pesticide Factsheets

    Ambient measurements of 78 source-specific tracers of primary and secondary carbonaceous fine particulate matter collected at four midwestern United States locations over a full year (March 2004–February 2005) provided an unprecedented opportunity to diagnostically evaluate the results of a numerical air quality model. Previous analyses of these measurements demonstrated excellent mass closure for the variety of contributing sources. In this study, a carbon-apportionment version of the Community Multiscale Air Quality (CMAQ) model was used to track primary organic and elemental carbon emissions from 15 independent sources such as mobile sources and biomass burning in addition to four precursor-specific classes of secondary organic aerosol (SOA) originating from isoprene, terpenes, aromatics, and sesquiterpenes. Conversion of the source-resolved model output into organic tracer concentrations yielded a total of 2416 data pairs for comparison with observations. While emission source contributions to the total model bias varied by season and measurement location, the largest absolute bias of −0.55 μgC/m3 was attributed to insufficient isoprene SOA in the summertime CMAQ simulation. Biomass combustion was responsible for the second largest summertime model bias (−0.46 μgC/m3 on average). Several instances of compensating errors were also evident; model underpredictions in some sectors were masked by overpredictions in others.

  18. Diagnostic air quality model evaluation of source-specific primary and secondary fine particulate carbon.

    PubMed

    Napelenok, Sergey L; Simon, Heather; Bhave, Prakash V; Pye, Havala O T; Pouliot, George A; Sheesley, Rebecca J; Schauer, James J

    2014-01-01

    Ambient measurements of 78 source-specific tracers of primary and secondary carbonaceous fine particulate matter collected at four midwestern United States locations over a full year (March 2004-February 2005) provided an unprecedented opportunity to diagnostically evaluate the results of a numerical air quality model. Previous analyses of these measurements demonstrated excellent mass closure for the variety of contributing sources. In this study, a carbon-apportionment version of the Community Multiscale Air Quality (CMAQ) model was used to track primary organic and elemental carbon emissions from 15 independent sources such as mobile sources and biomass burning in addition to four precursor-specific classes of secondary organic aerosol (SOA) originating from isoprene, terpenes, aromatics, and sesquiterpenes. Conversion of the source-resolved model output into organic tracer concentrations yielded a total of 2416 data pairs for comparison with observations. While emission source contributions to the total model bias varied by season and measurement location, the largest absolute bias of -0.55 μgC/m3 was attributed to insufficient isoprene SOA in the summertime CMAQ simulation. Biomass combustion was responsible for the second largest summertime model bias (-0.46 μgC/m3 on average). Several instances of compensating errors were also evident; model underpredictions in some sectors were masked by overpredictions in others.
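
    The headline numbers are seasonal mean biases aggregated by emission source. A minimal sketch of that bookkeeping, with hypothetical column names and made-up values:

        # Hedged sketch: the data frame contents are illustrative only.
        import pandas as pd

        pairs = pd.DataFrame({
            "source":   ["isoprene SOA", "isoprene SOA", "biomass", "biomass"],
            "season":   ["summer"] * 4,
            "model":    [0.35, 0.40, 0.50, 0.62],    # ugC/m3
            "observed": [0.95, 0.90, 1.00, 1.04],
        })
        pairs["bias"] = pairs["model"] - pairs["observed"]
        print(pairs.groupby(["season", "source"])["bias"].mean())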

  19. THE ENVIRONMENT AND DISTRIBUTION OF EMITTING ELECTRONS AS A FUNCTION OF SOURCE ACTIVITY IN MARKARIAN 421

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mankuzhiyil, Nijil; Ansoldi, Stefano; Persic, Massimo

    2011-05-20

    For the high-frequency-peaked BL Lac object Mrk 421, we study the variation of the spectral energy distribution (SED) as a function of source activity, from quiescent to active. We use a fully automatized χ²-minimization procedure, instead of the 'eyeball' procedure more commonly used in the literature, to model nine SED data sets with a one-zone synchrotron self-Compton (SSC) model and examine how the model parameters vary with source activity. The latter issue can finally be addressed now, because simultaneous broadband SEDs (spanning from the optical to very high energy photons) have finally become available. Our results suggest that in Mrk 421 the magnetic field (B) decreases with source activity, whereas the electron spectrum's break energy (γ_br) and the Doppler factor (δ) increase; the other SSC parameters turn out to be uncorrelated with source activity. In the SSC framework, these results are interpreted in a picture where the synchrotron power and peak frequency remain constant with varying source activity, through a combination of decreasing magnetic field and increasing number density of γ ≤ γ_br electrons: since this leads to an increased electron-photon scattering efficiency, the resulting Compton power increases, and so does the total (= synchrotron plus Compton) emission.
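
    Replacing eyeball tuning with automated fitting amounts to minimizing χ² over the model parameters. A minimal sketch with a log-parabolic toy SED standing in for the full SSC model:

        # Hedged sketch: the model form, data and uncertainties are synthetic.
        import numpy as np
        from scipy.optimize import minimize

        log_nu = np.linspace(10, 27, 40)              # log10 frequency (Hz)

        def model(log_nu, a, b, c):                   # log-parabola stand-in
            return a - b * (log_nu - c) ** 2

        rng = np.random.default_rng(1)
        data = model(log_nu, -10.0, 0.05, 17.0) + 0.05 * rng.standard_normal(40)
        sigma = 0.05                                  # assumed uncertainty

        chi2 = lambda p: np.sum(((data - model(log_nu, *p)) / sigma) ** 2)
        fit = minimize(chi2, x0=(-9.0, 0.1, 16.0), method="Nelder-Mead")
        print(fit.x, chi2(fit.x) / (log_nu.size - 3))  # params, reduced chi^2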

  20. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    NASA Astrophysics Data System (ADS)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of a groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of a pollution source, in terms of its source characteristics, is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength, and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-optimization model are the source location and the release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time: different combinations of source locations and release periods are used as inputs, and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one, and hence the complexity of the optimization model is reduced. The results show that the proposed linked ANN-optimization model is able to predict the source parameters accurately for error-free data. The proposed model was run several times to obtain the mean, standard deviation, and interval estimate of the predicted parameters for observations with random measurement errors. The mean values predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
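
    The linked structure can be caricatured as an outer optimizer wrapped around a lag-time predictor. A minimal sketch, in which a closed-form function stands in for the trained ANN and a toy Gaussian response stands in for the groundwater forward model:

        # Hedged sketch: forward model, "ANN" and parameters are stand-ins,
        # not the authors' simulator or trained network.
        import numpy as np
        from scipy.optimize import differential_evolution

        def simulate_conc(loc, release, lag, t):
            # hypothetical forward model: smooth breakthrough-like response
            return np.exp(-((t - lag) - loc) ** 2 / (2.0 * release ** 2))

        def ann_lag(loc, release):
            # stand-in for the trained ANN mapping (location, release) -> lag
            return 0.5 * loc + 0.1 * release

        t = np.linspace(0, 20, 50)
        obs = simulate_conc(4.0, 2.0, ann_lag(4.0, 2.0), t)   # synthetic "data"

        def objective(x):
            loc, release = x
            sim = simulate_conc(loc, release, ann_lag(loc, release), t)
            return np.sum((obs - sim) ** 2)

        res = differential_evolution(objective, bounds=[(0, 10), (0.5, 5)], seed=0)
        print(res.x)   # recovered source location and release period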

  1. Water quality assessment and apportionment of pollution sources using APCS-MLR and PMF receptor modeling techniques in three major rivers of South Florida.

    PubMed

    Haji Gholizadeh, Mohammad; Melesse, Assefa M; Reddi, Lakshmi

    2016-10-01

    In this study, principal component analysis (PCA), factor analysis (FA), and the absolute principal component score-multiple linear regression (APCS-MLR) receptor modeling technique were used to assess the water quality and to identify and quantify the potential pollution sources affecting the water quality of three major rivers of South Florida. For this purpose, a 15-year (2000-2014) dataset of 12 water quality variables, covering 16 monitoring stations and approximately 35,000 observations, was used. The PCA/FA method identified five and four potential pollution sources in the wet and dry seasons, respectively, and the effective mechanisms, rules, and causes were explained. The APCS-MLR technique apportioned their contributions to each water quality variable. Results showed that point source discharges from anthropogenic activities, due to agricultural waste and domestic and industrial wastewater, were the major sources of river water contamination. The studied variables were categorized into three groups: nutrients (total Kjeldahl nitrogen, total phosphorus, total phosphate, and ammonia-N), parameters conducive to water murkiness (total suspended solids, turbidity, and chlorophyll-a), and salt ions (magnesium, chloride, and sodium), and the average contributions of the different potential pollution sources to these categories were considered separately. The data matrix was also subjected to the PMF receptor model using the EPA PMF-5.0 program, and the two-way model was applied for the PMF analyses. Comparison of the results of the PMF and APCS-MLR models showed significant differences in the estimated contribution of each potential pollution source, especially in the wet season. It was concluded that the APCS-MLR receptor modeling approach appears to be more physically plausible for the current study. The results of the apportionment could be very useful to local authorities for the control and management of pollution and better protection of riverine water quality. Copyright © 2016 Elsevier B.V. All rights reserved.
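
    APCS-MLR has a simple mechanical core: derive absolute principal component scores by subtracting the score of an artificial zero sample, then regress each water quality variable on those scores. A minimal sketch on synthetic data:

        # Hedged sketch: synthetic data; illustrates the APCS-MLR mechanics,
        # not the study's dataset or factor choices.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        X = rng.lognormal(size=(200, 6))                 # 200 samples, 6 variables
        mu, sd = X.mean(0), X.std(0)
        Z = (X - mu) / sd                                # standardize

        pca = PCA(n_components=3).fit(Z)
        scores = pca.transform(Z)                        # PC scores of samples
        z0 = (np.zeros(6) - mu) / sd                     # artificial zero sample
        apcs = scores - pca.transform(z0[None, :])       # absolute PC scores

        y = X[:, 0]                                      # apportion one variable
        reg = LinearRegression().fit(apcs, y)
        contrib = reg.coef_ * apcs.mean(0)               # mean source contributions
        print(reg.intercept_, contrib)                   # intercept = unexplained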

  2. New paradigms for Salmonella source attribution based on microbial subtyping.

    PubMed

    Mughini-Gras, Lapo; Franz, Eelco; van Pelt, Wilfrid

    2018-05-01

    Microbial subtyping is the most common approach for Salmonella source attribution. Typically, attributions are computed using frequency-matching models like the Dutch and Danish models based on phenotyping data (serotyping, phage-typing, and antimicrobial resistance profiling). Herewith, we critically review three major paradigms facing Salmonella source attribution today: (i) the use of genotyping data, particularly Multi-Locus Variable Number of Tandem Repeats Analysis (MLVA), which is replacing traditional Salmonella phenotyping beyond serotyping; (ii) the integration of case-control data into source attribution to improve risk factor identification/characterization; (iii) the investigation of non-food sources, as attributions tend to focus on foods of animal origin only. Population genetics models or simplified MLVA schemes may provide feasible options for source attribution, although there is a strong need to explore novel modelling options as we move towards whole-genome sequencing as the standard. Classical case-control studies are enhanced by incorporating source attribution results, as individuals acquiring salmonellosis from different sources have different associated risk factors. Thus, the more such analyses are performed the better Salmonella epidemiology will be understood. Reparametrizing current models allows for inclusion of sources like reptiles, the study of which improves our understanding of Salmonella epidemiology beyond food to tackle the pathogen in a more holistic way. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Receptor model comparisons and wind direction analyses of volatile organic compounds and submicrometer particles in an arid, binational, urban air shed.

    PubMed

    Mukerjee, Shaibal; Norris, Gary A; Smith, Luther A; Noble, Christopher A; Neas, Lucas M; Ozkaynak, A Halûk; Gonzales, Melissa

    2004-04-15

    The relationship between continuous measurements of volatile organic compound (VOC) sources and particle number was evaluated at a Photochemical Assessment Monitoring Stations (PAMS) network site located near the U.S.-Mexico border in central El Paso, TX. VOC sources were investigated using the multivariate receptor model UNMIX and the effective variance least squares receptor model known as Chemical Mass Balance (CMB, Version 8.0). As expected from PAMS measurements, overall findings from data screening as well as from both receptor models confirmed that mobile sources were the major source of VOCs. Comparison of hourly source contribution estimates (SCEs) from the two receptor models revealed significant differences in motor vehicle exhaust and evaporative gasoline contributions. However, the motor vehicle exhaust contributions were highly correlated with each other. Motor vehicle exhaust was also correlated with the ultrafine and accumulation mode particle counts, which suggests that motor vehicle exhaust is a source of these particles at the measurement site. Wind sector analyses were performed using the SCE and pollutant data to assess the source locations of VOCs, particle count, and criteria pollutants. Results from this study have application to source apportionment studies and mobile source emission control strategies that are ongoing in this air shed.

  4. Finding Resolution for the Responsible Transparency of Economic Models in Health and Medicine.

    PubMed

    Padula, William V; McQueen, Robert Brett; Pronovost, Peter J

    2017-11-01

    The Second Panel on Cost-Effectiveness in Health and Medicine recommendations for the conduct, methodological practices, and reporting of cost-effectiveness analyses leave a number of questions unanswered with respect to the implementation of a transparent, open source code interface for economic models. The possibility of making economic model source code openly available could be positive and progressive for the field; however, several unintended consequences of such a system should first be considered before its complete implementation. First, there is the concern regarding the intellectual property rights that modelers have to their analyses. Second, open source code could make analyses more accessible to inexperienced modelers, leading to inaccurate or misinterpreted results. We propose several resolutions to these concerns. The field should establish a licensing system for open source code such that the model originators maintain control of the code's use and grant permissions to other investigators who wish to use it. The field should also be more forthcoming in the teaching of cost-effectiveness analysis in medical and health services education, so that providers and other professionals are familiar with economic modeling and able to conduct analyses with open source code. These types of unintended consequences need to be fully considered before the field is prepared to move forward into an era of model transparency with open source code.

  5. Modeling population exposures to outdoor sources of hazardous air pollutants.

    PubMed

    Ozkaynak, Halûk; Palma, Ted; Touma, Jawad S; Thurman, James

    2008-01-01

    Accurate assessment of human exposures is an important part of environmental health effects research. However, most air pollution epidemiology studies rely upon imperfect surrogates of personal exposures, such as information based on available central-site outdoor concentration monitoring or modeling data. In this paper, we examine the limitations of using outdoor concentration predictions instead of modeled personal exposures for over 30 gaseous and particulate hazardous air pollutants (HAPs) in the US. The analysis uses the results from an air quality dispersion model (ASPEN, the Assessment System for Population Exposure Nationwide) and an inhalation exposure model (HAPEM, the Hazardous Air Pollutant Exposure Model, Version 5), applied by the U.S. Environmental Protection Agency during the 1999 National Air Toxics Assessment (NATA). Our results show that the total predicted chronic exposure concentrations of outdoor HAPs from all sources are lower than the modeled ambient concentrations, by about 20% on average for most gaseous HAPs and by about 60% on average for most particulate HAPs (mainly due to the exclusion of indoor sources from our modeling analysis and the lower infiltration of particles indoors). On the other hand, the HAPEM/ASPEN concentration ratios for onroad mobile source exposures were found to average greater than 1 (around 1.20) for most mobile-source-related HAPs (e.g., 1,3-butadiene, acetaldehyde, benzene, formaldehyde), reflecting the importance of near-roadway and commuting environments for personal exposures to HAPs. The distribution of the ratios of personal to ambient concentrations was found to be skewed for a number of the VOCs and reactive HAPs associated with major source emissions, indicating the importance of personal mobility factors. We conclude that the increase in personal exposures over the corresponding predicted ambient levels tends to occur near locations where there are major emission sources of HAPs, or when individuals are exposed to either on- or nonroad sources of HAPs during their daily activities. These findings underscore the importance of applying exposure-modeling methods, which incorporate information on time-activity, commuting, and exposure factors data, for the purpose of assigning exposures in air pollution health studies.

  6. Evaluation of stormwater micropollutant source control and end-of-pipe control strategies using an uncertainty-calibrated integrated dynamic simulation model.

    PubMed

    Vezzaro, L; Sharma, A K; Ledin, A; Mikkelsen, P S

    2015-03-15

    The estimation of micropollutant (MP) fluxes in stormwater systems is a fundamental prerequisite when preparing strategies to reduce stormwater MP discharges to natural waters. Dynamic integrated models can be important tools in this step, as they can be used to integrate the limited data provided by monitoring campaigns and to evaluate the performance of different strategies based on model simulation results. This study presents an example in which six different control strategies, including both source control and end-of-pipe treatment, were compared. The comparison focused on fluxes of heavy metals (copper, zinc) and organic compounds (fluoranthene). MP fluxes were estimated by using an integrated dynamic model in combination with stormwater quality measurements. MP sources were identified by using GIS land usage data, runoff quality was simulated by using a conceptual accumulation/washoff model, and a stormwater retention pond was simulated by using a dynamic treatment model based on MP inherent properties. Uncertainty in the results was estimated with a pseudo-Bayesian method. Despite the great uncertainty in the MP fluxes estimated by the runoff quality model, it was possible to compare the six scenarios in terms of discharged MP fluxes, compliance with water quality criteria, and sediment accumulation. Source-control strategies obtained better results in terms of reduction of MP emissions, but all the simulated strategies failed to fulfil the criteria based on emission limit values. The results presented in this study show how the efficiency of MP pollution control strategies can be quantified by combining advanced modeling tools (an integrated stormwater quality model with uncertainty calibration). Copyright © 2014 Elsevier Ltd. All rights reserved.
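
    A conceptual accumulation/washoff scheme of the kind mentioned here typically lets pollutant mass build up on the surface in dry weather and washes it off in proportion to rainfall intensity. A minimal sketch (generic formulation, not the study's calibrated model):

        # Hedged sketch: rates, storm timing and initial mass are illustrative.
        import numpy as np

        dt, n = 1.0, 240                           # hourly steps, 10 days
        k_acc, k_wash = 0.5, 0.2                   # buildup (g/h), washoff (1/mm)
        rain = np.zeros(n); rain[100:110] = 3.0    # one storm, mm/h

        mass, washed = 1.0, np.zeros(n)
        for i in range(n):
            washed[i] = mass * (1 - np.exp(-k_wash * rain[i] * dt))
            mass = mass - washed[i] + k_acc * dt   # export, then dry buildup
        print(washed.sum())                        # total mass exported (g)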

  7. Evaluating environmental modeling and sampling data with biomarker data to identify sources and routes of exposure

    NASA Astrophysics Data System (ADS)

    Shin, Hyeong-Moo; McKone, Thomas E.; Bennett, Deborah H.

    2013-04-01

    Exposure to environmental chemicals results from multiple sources, environmental media, and exposure routes. Ideally, modeled exposures should be compared to biomonitoring data. This study compares the magnitude and variation of modeled polycyclic aromatic hydrocarbon (PAH) exposures resulting from emissions to outdoor and indoor air with estimated exposures inferred from biomarker levels. Outdoor emissions result in both inhalation and food-based exposures. We modeled PAH intake doses using the U.S. EPA's 2002 National Air Toxics Assessment (NATA) county-level emissions data for outdoor inhalation, the CalTOX model for food ingestion (based on NATA emissions), and indoor air concentrations from field studies for indoor inhalation. We then compared the modeled intake with measured urine levels of hydroxy-PAH metabolites from the 2001-2002 National Health and Nutrition Examination Survey (NHANES) as quantifiable human intake of the PAH parent compounds. Lognormal probability plots of modeled intakes and of estimated intakes inferred from biomarkers suggest that a primary route of exposure to naphthalene, fluorene, and phenanthrene for the U.S. population is likely inhalation from indoor sources. For benzo(a)pyrene, the predominant exposure route is likely food ingestion, resulting from multi-pathway transport and bioaccumulation due to outdoor emissions. Multiple routes of exposure are important for pyrene. We also considered the sensitivity of the predicted exposure to the proportion of the total naphthalene production volume emitted to the indoor environment. The comparison of PAH biomarkers with exposure variability estimated from models and sample data for various exposure pathways supports the conclusion that both indoor and outdoor models are needed to capture the sources and routes of exposure to environmental contaminants.

  8. A critical assessment of flux and source term closures in shallow water models with porosity for urban flood simulations

    NASA Astrophysics Data System (ADS)

    Guinot, Vincent

    2017-11-01

    The validity of flux and source term formulae used in shallow water models with porosity for urban flood simulations is assessed by solving the two-dimensional shallow water equations over computational domains representing periodic building layouts. The models under assessment are the Single Porosity (SP), Integral Porosity (IP) and Dual Integral Porosity (DIP) models. Nine different geometries are considered, and 18 two-dimensional initial value problems and 6 two-dimensional boundary value problems are defined, resulting in a set of 96 fine grid simulations. Analysing the simulation results leads to the following conclusions: (i) the DIP flux and source term models outperform those of the SP and IP models when the Riemann problem is aligned with the main street directions, (ii) all models give erroneous flux closures when the Riemann problem is not aligned with one of the main street directions or when the main street directions are not orthogonal, (iii) the solution of the Riemann problem is self-similar in space-time when the street directions are orthogonal and the Riemann problem is aligned with one of them, (iv) a momentum balance confirms the existence of the transient momentum dissipation mechanism presented in the DIP model, (v) none of the source term models presented so far in the literature allows all flow configurations to be accounted for, and (vi) future laboratory experiments aiming at the validation of flux and source term closures should focus on high-resolution, two-dimensional monitoring of both water depth and flow velocity fields.

  9. Oxygen, Neon, and Iron X-Ray Absorption in the Local Interstellar Medium

    NASA Technical Reports Server (NTRS)

    Gatuzz, Efrain; Garcia, Javier; Kallman, Timothy R.; Mendoza, Claudio

    2016-01-01

    We present a detailed study of X-ray absorption in the local interstellar medium by analyzing the X-ray spectra of 24 galactic sources obtained with the Chandra High Energy Transmission Grating Spectrometer and the XMM-Newton Reflection Grating Spectrometer. Methods. By modeling the continuum with a simple broken power-law and by implementing the new ISMabs X-ray absorption model, we have estimated the total H, O, Ne, and Fe column densities towards the observed sources. Results. We have determined the absorbing material distribution as a function of source distance, galactic latitude, and galactic longitude. Conclusions. Direct estimates of the fractions of neutral, singly, and doubly ionized species of O, Ne, and Fe reveal the dominance of the cold component, indicating an overall low degree of ionization. Our results are expected to be sensitive to the model used to describe the continuum in all sources.
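
    The continuum treatment is a standard broken power-law; a minimal sketch of that model form (toy parameters, before any absorption model such as ISMabs is applied):

        # Hedged sketch: parameter values are illustrative, not fitted.
        import numpy as np

        def broken_powerlaw(E, norm, g1, g2, E_break):
            """Photon flux at energy E (keV); slope g1 below and g2 above the break."""
            lo = norm * (E / E_break) ** (-g1)
            hi = norm * (E / E_break) ** (-g2)
            return np.where(E < E_break, lo, hi)

        E = np.geomspace(0.3, 10.0, 100)
        flux = broken_powerlaw(E, norm=1e-2, g1=1.8, g2=2.4, E_break=2.0)
        print(flux[:3])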

  10. Modeling Physarum space exploration using memristors

    NASA Astrophysics Data System (ADS)

    Ntinas, V.; Vourkas, I.; Sirakoulis, G. Ch; Adamatzky, A. I.

    2017-05-01

    Slime mold Physarum polycephalum optimizes its foraging behaviour by minimizing the distances between the sources of nutrients it spans. When two sources of nutrients are present, the slime mold connects the sources, with its protoplasmic tubes, along the shortest path. We present a two-dimensional mesh-grid memristor-based model as an approach to emulating Physarum's foraging strategy, which includes space exploration and reinforcement of the optimally formed interconnection network in the presence of multiple aliment sources. The proposed algorithmic approach utilizes memristors and LC contours and is tested on two of the most popular computational challenges for Physarum, namely mazes and transportation networks. Furthermore, the presented model is enriched with the notion of noise, which contributes positively to collective behavior and enables us to move from deterministic to more robust results. Consequently, the corresponding simulation results reproduce the expected transportation networks in a qualitatively much better way.

  11. Assessment of impact of unaccounted emission on ambient concentration using DEHM and AERMOD in combination with WRF

    NASA Astrophysics Data System (ADS)

    Kumar, Awkash; Patil, Rashmi S.; Dikshit, Anil Kumar; Kumar, Rakesh; Brandt, Jørgen; Hertel, Ole

    2016-10-01

    The accuracy of the results from an air quality model is governed in most cases by the quality of the emission and meteorological data inputs. In the present study, two air quality models were applied in an inverse modelling exercise to determine the particulate matter emission strengths of urban and regional sources in and around Mumbai, India. The study takes as its starting point an existing emission inventory for Total Suspended Particulate Matter (TSPM). Since the available TSPM inventory is known to be uncertain and incomplete, this study aims to qualify the inventory through inverse modelling. For use as input to the air quality models, onsite meteorological data were generated using the Weather Research and Forecasting (WRF) model. The regional background concentration is transported in the atmosphere from sources outside the study domain; regional background concentrations of particulate matter were obtained from model calculations with the Danish Eulerian Hemispheric Model (DEHM). These background concentrations were then used as boundary concentrations in AERMOD calculations of the contribution from local urban sources. The results of the AERMOD calculations were subsequently compared with observed concentrations, and emission correction factors were obtained by best fit of the model results to the observed concentrations. The study showed that emissions had to be up-scaled by between 14 and 55% in order to fit the observed concentrations; this of course assumes that the DEHM model describes a background concentration level of the right magnitude.
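
    The best-fit correction factor has a closed form when a single scalar multiplies the local contribution. A minimal sketch with made-up concentrations:

        # Hedged sketch: the concentration values are illustrative only.
        import numpy as np

        obs = np.array([92.0, 110.0, 85.0, 130.0])        # observed TSPM (ug/m3)
        background = np.array([40.0, 45.0, 38.0, 50.0])   # regional background
        local = np.array([35.0, 45.0, 30.0, 55.0])        # modelled local part

        resid = obs - background
        alpha = (local @ resid) / (local @ local)          # 1-D least squares
        print(f"emission correction factor: {alpha:.2f}")  # >1 => up-scale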

  12. Improving the seismic small-scale modelling by comparison with numerical methods

    NASA Astrophysics Data System (ADS)

    Pageot, Damien; Leparoux, Donatienne; Le Feuvre, Mathieu; Durand, Olivier; Côte, Philippe; Capdeville, Yann

    2017-10-01

    The potential of experimental seismic modelling at reduced scale provides an intermediate step between numerical tests and geophysical campaigns on field sites. Recent technologies such as laser interferometers offer the opportunity to acquire data without any coupling effects. This kind of device is used in the Mesures Ultrasonores Sans Contact (MUSC) measurement bench, whose automated support system makes it possible to generate multisource, multireceiver seismic data at laboratory scale. Experimental seismic modelling would become a great tool, providing a value-added stage in the imaging process validation, if (1) the experimental measurement chain is perfectly mastered, such that the experimental data are perfectly reproducible with a numerical tool, and (2) the effective source is reproducible along the measurement setup. These aspects of quantitative validation, for devices with piezoelectric sources and a laser interferometer, have not yet been studied quantitatively in the published literature. As a new stage for the experimental modelling approach, these two key issues are tackled in this paper in order to precisely define the quality of the experimental small-scale data provided by the MUSC bench, which are available to the scientific community. These two validation steps are treated independently of any imaging technique, so that geophysicists who want to use such data (delivered as free data) can know their quality precisely before testing any imaging method. First, to overcome the 2-D-3-D correction usually applied in seismic processing when comparing 2-D numerical data with 3-D experimental measurements, we quantitatively refined the comparison between numerical and experimental data by generating accurate experimental line sources, avoiding the need for geometrical spreading corrections of 3-D point-source data. The comparison with 2-D and 3-D numerical modelling is based on the spectral element method. The approach shows the relevance of building a line source by sampling several source points, except for boundary effects at later arrival times. Indeed, the experimental results exhibit the amplitude behaviour and the π/4 phase delay of a line source, in the same manner as the numerical data. In contrast, the 2-D corrections applied to 3-D data showed discrepancies, larger for experimental data than for numerical data, due to the source wavelet shape and interference between different arrivals. The experimental results from the approach proposed here show that these discrepancies are avoided, especially for the reflected echoes. Concerning the second point, which aims to assess the experimental reproducibility of the source, correlation coefficients of recordings from a repeated source impact on a homogeneous model are calculated. The quality of the results (correlations higher than 0.98) allows a mean source wavelet to be calculated by inversion of a mean data set. Results obtained on a more realistic model, simulating clays on limestones, confirm the reproducibility of the source impact.
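
    Building a line source from point sources can be illustrated directly: stacking delayed, 1/R-spread point-source wavelets along a line approximates the 2-D line-source response, including its characteristic phase behaviour. A minimal sketch with an assumed Ricker wavelet and illustrative geometry:

        # Hedged sketch: geometry, wavelet and wave speed are illustrative,
        # not the MUSC bench configuration.
        import numpy as np

        c, r = 1500.0, 0.3                  # wave speed (m/s), receiver offset (m)
        t = np.linspace(0.0, 2e-3, 4000)    # time axis (s)

        def ricker(tau, f0=20e3):
            a = (np.pi * f0 * (tau - 1.0 / f0)) ** 2
            return (1.0 - 2.0 * a) * np.exp(-a)

        ys = np.linspace(-1.0, 1.0, 201)    # point sources sampling the line
        trace = np.zeros_like(t)
        for y in ys:
            R = np.hypot(r, y)              # 3-D distance to each point source
            trace += ricker(t - R / c) / (4.0 * np.pi * R)
        print(trace.max())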

  13. Aquitard contaminant storage and flux resulting from dense nonaqueous phase liquid source zone dissolution and remediation

    EPA Science Inventory

    A one-dimensional diffusion model was used to investigate the effects of dense non-aqueous phase liquid (DNAPL) source zone dissolution and remediation on the storage and release of contaminants from aquitards. Source zone dissolution was represented by a power-law source depletion ...
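
    For context, one-dimensional diffusion into an aquitard from a constant-concentration source at the interface has the classical erfc profile. A minimal sketch (generic solution, not the study's power-law depletion model):

        # Hedged sketch: parameter values are illustrative.
        import numpy as np
        from scipy.special import erfc

        D = 1e-10 * 3.156e7            # effective diffusivity, m2/s -> m2/yr
        c0 = 1.0                       # interface concentration (normalized)
        z = np.linspace(0.0, 1.0, 6)   # depth into the aquitard (m)
        for t in (5.0, 25.0, 100.0):   # elapsed time (yr)
            c = c0 * erfc(z / (2.0 * np.sqrt(D * t)))
            print(t, np.round(c, 3))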

  14. An inter-comparison of PM10 source apportionment using PCA and PMF receptor models in three European sites.

    PubMed

    Cesari, Daniela; Amato, F; Pandolfi, M; Alastuey, A; Querol, X; Contini, D

    2016-08-01

    Source apportionment of aerosol is an important approach to investigating aerosol formation and transformation processes, as well as to assessing appropriate mitigation strategies and investigating causes of non-compliance with air quality standards (Directive 2008/50/EC). Receptor models (RMs) based on the chemical composition of aerosol measured at specific sites are a useful, and widely used, tool for performing source apportionment. However, an analysis of the available studies in the scientific literature reveals heterogeneities in the approaches used, in terms of "working variables" such as the number of samples in the dataset and the number of chemical species used, as well as in the modeling tools applied. In this work, an inter-comparison of PM10 source apportionment results obtained at three European measurement sites is presented, using two receptor models: principal component analysis coupled with multi-linear regression analysis (PCA-MLRA) and positive matrix factorization (PMF). The inter-comparison focuses on source identification, quantification of source contributions to PM10, robustness of the results, and how these are influenced by the number of chemical species available in the datasets. Results show very similar component/factor profiles identified by PCA and PMF, with some discrepancies in the number of factors. The PMF model appears to be more suitable for separating secondary sulfate and secondary nitrate than PCA, at least in the datasets analyzed. Further, some difficulties were observed with PCA in separating industrial and heavy oil combustion contributions. Common to all sites, the crustal contributions found with PCA were larger than those found with PMF, and the secondary inorganic aerosol contributions found by PCA were lower than those found by PMF. Site-dependent differences were also observed for traffic and marine contributions. The inter-comparison of source apportionment performed on complete datasets (using the full range of available chemical species) and incomplete datasets (with a reduced number of chemical species) allowed the sensitivity of source apportionment (SA) results to the working variables used in the RMs to be investigated. Results show that the profiles and the contributions of the different sources calculated with PMF are comparable within the estimated uncertainties, indicating good stability and robustness of the PMF results. In contrast, PCA outputs are more sensitive to the chemical species present in the datasets: in PCA, the crustal contributions are higher in the incomplete datasets, and the traffic contributions are significantly lower for incomplete datasets.
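
    At the core of PMF is a non-negative factorization of the species-by-sample matrix; EPA PMF additionally weights each entry by its measurement uncertainty. A rough unweighted stand-in using scikit-learn's NMF on synthetic data:

        # Hedged sketch: not EPA PMF (no uncertainty weighting); synthetic data.
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(3)
        F_true = rng.uniform(size=(4, 10))      # 4 source profiles, 10 species
        G_true = rng.uniform(size=(300, 4))     # daily source contributions
        X = G_true @ F_true + 0.01 * rng.uniform(size=(300, 10))

        model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
        G = model.fit_transform(X)              # estimated contributions
        F = model.components_                   # estimated profiles
        print(G.shape, F.shape)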

  15. Sources and Sinks: A Stochastic Model of Evolution in Heterogeneous Environments

    NASA Astrophysics Data System (ADS)

    Hermsen, Rutger; Hwa, Terence

    2010-12-01

    We study evolution driven by spatial heterogeneity in a stochastic model of source-sink ecologies. A sink is a habitat where mortality exceeds reproduction so that a local population persists only due to immigration from a source. Immigrants can, however, adapt to conditions in the sink by mutation. To characterize the adaptation rate, we derive expressions for the first arrival time of adapted mutants. The joint effects of migration, mutation, birth, and death result in two distinct parameter regimes. These results may pertain to the rapid evolution of drug-resistant pathogens and insects.
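
    The first-arrival logic can be simulated directly: immigrants arrive as a Poisson process, and each sink lineage either dies out or produces an adapted mutant first. A deliberately minimal toy sketch (ignoring reproduction within the sink, which the full model includes):

        # Hedged sketch: a caricature of the model, for intuition only.
        import numpy as np

        rng = np.random.default_rng(4)

        def first_arrival(m=1.0, d=0.5, u=1e-3, horizon=1e5):
            t = 0.0
            while t < horizon:
                t += rng.exponential(1.0 / m)        # next immigrant arrives
                lifetime = rng.exponential(1.0 / d)  # lineage lifetime in sink
                t_mut = rng.exponential(1.0 / u)     # waiting time to mutation
                if t_mut < lifetime:                 # adapted before dying out
                    return t + t_mut
            return np.inf

        samples = [first_arrival() for _ in range(200)]
        print(np.mean(samples))                      # mean first arrival time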

  16. 78 FR 2871 - Approval and Promulgation of Implementation Plans; Georgia: New Source Review-Prevention of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-14

    ... quality modeling) to result in an ambient pollutant increase of at least 1 microgram per meter cubed ([mu... 40 CFR 51.166(m) and 40 CFR 52.21(m). In accordance with EPA's Guideline for Air Quality Modeling (40... background concentrations in modeling conducted to demonstrate that the proposed source or modification will...

  17. An open-terrain line source model coupled with street-canyon effects to forecast carbon monoxide at traffic roundabout.

    PubMed

    Pandian, Suresh; Gokhale, Sharad; Ghoshal, Aloke Kumar

    2011-02-15

    A double-lane, four-arm roundabout, where traffic movement is continuous in opposite directions and at different speeds, produces a zone of recirculating emissions within a road section, creating a canyon-type effect. In this zone, thermally induced turbulence together with vehicle wakes dominates over wind-driven turbulence, keeping pollutant emissions within the zone and resulting in roughly equal amounts of pollutants upwind and downwind, particularly during low winds. Beyond this region, however, the effect of the wind becomes stronger, causing downwind movement of pollutants. Pollutant dispersion caused by such a phenomenon cannot be described accurately by an open-terrain line source model alone. This is demonstrated by estimating one-minute average carbon monoxide concentrations by coupling an open-terrain line source model with a street canyon model, which captures the combined effect, to describe the dispersion at a non-signalized roundabout. The results of the coupled model matched the measurements well compared with the line source model alone, and the prediction error was reduced by about 50%. The study further demonstrated this with traffic emissions calculated by field and semi-empirical methods. Copyright © 2010 Elsevier B.V. All rights reserved.
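
    The coupling can be caricatured as adding a street-canyon recirculation term to an open-terrain line-source estimate. A minimal sketch using generic textbook forms (not the paper's parameterization):

        # Hedged sketch: generic formulas and illustrative parameter values.
        import numpy as np

        def c_line(q, u, sigma_z, z=0.0):
            """Ground-level infinite line source, wind perpendicular to road."""
            return (2.0 * q) / (np.sqrt(2.0 * np.pi) * sigma_z * u) \
                   * np.exp(-z ** 2 / (2.0 * sigma_z ** 2))

        def c_canyon(q, u_roof, width, alpha=0.1):
            """Box model: line emission diluted by roof-level ventilation."""
            return q / (width * alpha * u_roof)

        q, u = 0.005, 2.0    # CO emission (g/m/s), wind speed (m/s)
        total = c_line(q, u, sigma_z=2.0) + c_canyon(q, u_roof=u, width=10.0)
        print(total)         # combined near-road CO estimate (g/m3)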

  18. Experimental validation study of an analytical model of discrete frequency sound propagation in closed-test-section wind tunnels

    NASA Technical Reports Server (NTRS)

    Mosher, Marianne

    1990-01-01

    The principal objective is to assess the adequacy of linear acoustic theory with an impedance-wall boundary condition to model the detailed sound field of an acoustic source in a duct. Measurements and calculations of a simple acoustic source in a rectangular concrete duct, lined with foam on the walls and with anechoic end terminations, are compared. Measurement of acoustic pressure at twelve wave numbers provides variation in frequency and in the absorption characteristics of the duct walls. Close to the source, where the interference of wall reflections is minimal, correlation is very good. Away from the source, correlation degrades, especially at the lower frequencies. Sensitivity studies show little effect on the predicted results for changes in impedance boundary condition values, source location, measurement location, temperature, and source model, for variations spanning the expected measurement error.

  19. Sensitivity tests to define the source apportionment performance criteria in the DeltaSA tool

    NASA Astrophysics Data System (ADS)

    Pernigotti, Denise; Belis, Claudio A.

    2017-04-01

    Identification and quantification of the contribution of emission sources to a given area is a key task for the design of abatement strategies. Moreover, European member states are obliged to report this kind of information for zones where pollution levels exceed the limit values. At present, little is known about the performance and uncertainty of the variety of methodologies used for source apportionment, or about the comparability between the results of studies using different approaches. DeltaSA is a tool developed by the EC-JRC to support particulate matter source apportionment modellers in the identification of sources (for factor analysis studies) and/or in the measurement of their performance. Source identification is performed by the tool by measuring the proximity of any user chemical profile to preloaded repository data (SPECIATE and SPECIEUROPE). The model performance criteria are based on standard statistical indexes calculated by comparing participants' source contribution estimates, and their time series, with preloaded reference data. The preloaded data come from previous European SA intercomparison exercises: the first with real-world data (22 participants), the second with synthetic data (25 participants), and the last with real-world data, also extended to chemical transport models (38 receptor models and 4 CTMs). The references used for the model performance assessment are 'true' (predefined by the JRC) for the synthetic exercise, while they are calculated as ensemble averages of the participants' results for the real-world intercomparisons. The candidates used for each source ensemble reference calculation were selected from participants' results on the basis of a number of consistency checks plus the similarity of their chemical profiles to the measured repository data. The estimation of the ensemble reference uncertainty is crucial in order to evaluate users' performances against it. For this reason, a sensitivity analysis of different methods to estimate the ensemble references' uncertainties was performed by re-analyzing the synthetic intercomparison dataset, the only one for which 'true' reference and ensemble reference contributions were both present. DeltaSA is now available on-line and will be presented together with a critical discussion of the sensitivity analysis on the ensemble reference uncertainty. In particular, the degree of mutual agreement among participants on the presence of a given source should be taken into account. The importance of synthetic intercomparisons for catching biases common to receptor models will also be stressed.
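
    Performance indicators of this kind typically reduce to comparing a participant's estimate with the ensemble reference, scaled by the reference uncertainty. A minimal sketch of one such indicator (the z-score form and the acceptance threshold are assumptions, not the tool's exact criteria):

        # Hedged sketch: values and the |z| <= 2 criterion are illustrative.
        ref, u_ref = 4.2, 0.6      # ensemble reference SCE and uncertainty (ug/m3)
        candidate = 5.1            # participant's source contribution estimate
        z = (candidate - ref) / u_ref
        print(z, abs(z) <= 2)      # pass/fail against the assumed criterion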

  20. Comparative evaluation of statistical and mechanistic models of Escherichia coli at beaches in southern Lake Michigan

    USGS Publications Warehouse

    Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.

    2016-01-01

    Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models which were further improved using probability distributions to generate high-resolution time series data at the source, long-term “tracer” transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured-grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions resulting in more accurate estimates of beach closures.
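
    The parsimonious statistical model described here is essentially a single-predictor regression. A minimal sketch (made-up data) of a turbidity-based nowcast of the kind the study ends with:

        # Hedged sketch: the observations are fabricated for illustration.
        import numpy as np

        turbidity = np.array([5.0, 12.0, 30.0, 55.0, 80.0])   # NTU
        log_ecoli = np.array([1.2, 1.6, 2.1, 2.6, 2.9])       # log10 CFU/100 mL

        slope, intercept = np.polyfit(turbidity, log_ecoli, 1)
        predict = lambda ntu: 10 ** (intercept + slope * ntu)
        print(round(predict(40.0)))                           # nowcast at 40 NTU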

  1. AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source

    NASA Astrophysics Data System (ADS)

    Nightingale, J. W.; Dye, S.; Massey, Richard J.

    2018-05-01

    This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models are automatically chosen via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user-intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies and lensing geometries. The method's performance is excellent, with accurate light, mass and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.
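
    The Sersic function used for the light model has a standard form; a minimal sketch with the common approximation b_n ≈ 2n - 1/3 (the parameter values are illustrative):

        # Hedged sketch: standard Sersic profile, not AutoLens's implementation.
        import numpy as np

        def sersic(r, I_e, r_e, n):
            b = 2.0 * n - 1.0 / 3.0      # common approximation for b_n
            return I_e * np.exp(-b * ((r / r_e) ** (1.0 / n) - 1.0))

        r = np.linspace(0.1, 5.0, 5)
        print(sersic(r, I_e=1.0, r_e=1.0, n=4.0))   # de Vaucouleurs-like profile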

  2. An incentive-based source separation model for sustainable municipal solid waste management in China.

    PubMed

    Xu, Wanying; Zhou, Chuanbin; Lan, Yajun; Jin, Jiasheng; Cao, Aixin

    2015-05-01

    Municipal solid waste (MSW) management (MSWM) is most important and challenging in large urban communities. Sound community-based waste management systems normally include waste reduction and material recycling elements, often entailing the separation of recyclable materials by the residents. To increase the efficiency of source separation and recycling, an incentive-based source separation model was designed and tested in 76 households in Guiyang, a city of almost three million people in southwest China. This model embraced the concepts of rewarding households for sorting organic waste, government funds for waste reduction, and the introduction of small recycling enterprises to promote source separation. Results show that after one year of operation, the waste reduction rate was 87.3%, and the comprehensive net benefit under the incentive-based source separation model increased by 18.3 CNY per tonne (2.4 euros per tonne) compared to that under the normal model. The stakeholder analysis (SA) shows that the centralized MSW disposal enterprises had minimal interest and may oppose the start-up of a new recycling system, while small recycling enterprises had a primary interest in promoting the incentive-based source separation model but the least ability to change the current recycling system. Strategies for promoting this incentive-based source separation model are also discussed in this study. © The Author(s) 2015.

  3. Surface-Water Nutrient Conditions and Sources in the United States Pacific Northwest

    PubMed Central

    Wise, Daniel R; Johnson, Henry M

    2011-01-01

    The SPAtially Referenced Regressions On Watershed attributes (SPARROW) model was used to perform an assessment of surface-water nutrient conditions and to identify important nutrient sources in watersheds of the Pacific Northwest region of the United States (U.S.) for the year 2002. Our models included variables representing nutrient sources as well as landscape characteristics that affect nutrient delivery to streams. Annual nutrient yields were higher in watersheds on the wetter, west side of the Cascade Range compared to watersheds on the drier, east side. High nutrient enrichment (relative to the U.S. Environmental Protection Agency's recommended nutrient criteria) was estimated in watersheds throughout the region. Forest land was generally the largest source of total nitrogen stream load and geologic material was generally the largest source of total phosphorus stream load generated within the 12,039 modeled watersheds. These results reflected the prevalence of these two natural sources and the low input from other nutrient sources across the region. However, the combined input from agriculture, point sources, and developed land, rather than natural nutrient sources, was responsible for most of the nutrient load discharged from many of the largest watersheds. Our results provided an understanding of the regional patterns in surface-water nutrient conditions and should be useful to environmental managers in future water-quality planning efforts. PMID:22457584

  4. Theoretical and Numerical Modeling of Transport of Land Use-Specific Fecal Source Identifiers

    NASA Astrophysics Data System (ADS)

    Bombardelli, F. A.; Sirikanchana, K. J.; Bae, S.; Wuertz, S.

    2008-12-01

    Microbial contamination in coastal and estuarine waters is of particular concern to public health officials. In this work, we advocate that well-formulated and well-developed mathematical and numerical transport models can be combined with modern molecular techniques to predict continuous concentrations of microbial indicators under diverse scenarios of interest, and that they can help in source identification of fecal pollution. As a proof of concept, we first present the theory, numerical implementation, and validation of one- and two-dimensional numerical models aimed at computing the distribution of fecal source identifiers in water bodies (based on Bacteroidales marker DNA sequences) coming from different land uses such as wildlife, livestock, humans, dogs, or cats. These models have been developed to allow source identification of fecal contamination in large bodies of water. We test the model predictions using diverse velocity fields and boundary conditions. We then present preliminary results of an application of a three-dimensional water quality model to address the source of fecal contamination in San Pablo Bay (SPB), United States, an important sub-embayment of San Francisco Bay. The transport equations for Bacteroidales include the processes of advection, diffusion, and decay of Bacteroidales. We discuss the validation of the developed models through comparisons of numerical results with field campaigns carried out in the SPB. We determine the extent and importance of the contamination in the bay for two decay rates obtained from field observations, corresponding to total host-specific Bacteroidales DNA and host-specific viable Bacteroidales cells, respectively. Finally, we infer transport conditions in the SPB based on the numerical results, characterizing the fate of outflows coming from the Napa, Petaluma, and Sonoma rivers.
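
    The governing balance, dc/dt + u dc/dx = D d2c/dx2 - k c, can be sketched with an explicit upwind finite-difference scheme. A minimal one-dimensional example (generic scheme and parameters, not the authors' model):

        # Hedged sketch: grid, coefficients and boundaries are illustrative.
        import numpy as np

        nx, dx, dt = 200, 50.0, 10.0          # cells, m, s
        u, D, k = 0.2, 5.0, 1e-5              # m/s, m2/s, 1/s decay
        c = np.zeros(nx); c[0] = 1.0          # constant-concentration inflow

        for _ in range(5000):
            adv = -u * (c[1:-1] - c[:-2]) / dx                 # upwind (u > 0)
            dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
            c[1:-1] += dt * (adv + dif - k * c[1:-1])
            c[0], c[-1] = 1.0, c[-2]                           # boundaries
        print(c[::40].round(3))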

  5. On precisely modelling surface deformation due to interacting magma chambers and dykes

    NASA Astrophysics Data System (ADS)

    Pascal, Karen; Neuberg, Jurgen; Rivalta, Eleonora

    2014-01-01

    Combined data sets of InSAR and GPS allow us to observe surface deformation in volcanic settings. However, at the vast majority of volcanoes, a detailed 3-D structure that could guide the modelling of deformation sources is not available, due, for example, to the lack of tomography studies. Therefore, volcano ground deformation due to magma movement in the subsurface is commonly modelled using simple point (Mogi) or dislocation (Okada) sources embedded in a homogeneous, isotropic, and elastic half-space. When data sets are too complex to be explained by a single deformation source, the magmatic system is often represented by a combination of these sources and their displacement fields are simply summed. By doing so, the assumption of homogeneity in the half-space is violated and the resulting interaction between sources is neglected. We have quantified the errors of such a simplification and investigated the limits within which the combination of analytical sources is justified. We have calculated the vertical and horizontal displacements for analytical models with adjacent deformation sources and tested them against the solutions of corresponding 3-D finite element models, which account for the interaction between sources. We have tested various double-source configurations with either two spherical sources representing magma chambers, or a magma chamber and an adjacent dyke, modelled by a rectangular tensile dislocation or a pressurized crack. For a tensile Okada source (representing an opening dyke) aligned with or superposed on a Mogi source (magma chamber), we find the discrepancies with the numerical models to be insignificant (<5 per cent), independently of the source separation. However, if a Mogi source is placed side by side with an Okada source (in the strike-perpendicular direction), we find the discrepancies become significant for source separations of less than four times the radius of the magma chamber. For horizontally or vertically aligned pressurized sources, the discrepancies are up to 20 per cent, which translates into surprisingly large errors when inverting deformation data for source parameters such as depth and volume change. Beyond eight radii, however, we demonstrate that the summation of analytical sources represents adjacent magma chambers correctly.
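
    The Mogi point source referred to here has simple closed-form surface displacements. A minimal sketch of the standard formulas for a volume change dV at depth d in an elastic half-space (illustrative parameter values):

        # Hedged sketch: textbook Mogi formulas, not the paper's FEM setup.
        import numpy as np

        def mogi(r, d, dV, nu=0.25):
            R3 = (r ** 2 + d ** 2) ** 1.5
            uz = (1 - nu) * dV / np.pi * d / R3      # vertical displacement
            ur = (1 - nu) * dV / np.pi * r / R3      # radial displacement
            return uz, ur

        r = np.linspace(0, 10e3, 5)                  # radial distance (m)
        uz, ur = mogi(r, d=4e3, dV=1e6)              # 10^6 m3 inflation at 4 km
        print(np.round(uz * 1e3, 2), np.round(ur * 1e3, 2))   # mm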

  6. Revisiting the radionuclide atmospheric dispersion event of the Chernobyl disaster - modelling sensitivity and data assimilation

    NASA Astrophysics Data System (ADS)

    Roustan, Yelva; Duhanyan, Nora; Bocquet, Marc; Winiarek, Victor

    2013-04-01

    A sensitivity study of the numerical model, as well as an inverse modelling approach applied to the atmospheric dispersion issues after the Chernobyl disaster, are both presented in this paper. On the one hand, the robustness of the source term reconstruction through advanced data assimilation techniques was tested. On the other hand, the classical approaches to sensitivity analysis were enhanced by the use of an optimised forcing field, which is otherwise known to be strongly uncertain. The POLYPHEMUS air quality system was used to perform the simulations of radionuclide dispersion. Activity concentrations in air and deposited to the ground of iodine-131, caesium-137, and caesium-134 were considered. The impacts of the implemented parameterizations of the physical processes (dry and wet deposition, vertical turbulent diffusion), of the forcing fields (meteorology and source terms), and of the numerical configuration (horizontal resolution) were investigated for the sensitivity study of the model. A four-dimensional variational scheme (4D-Var), based on the approximate adjoint of the chemistry transport model, was used to invert the source term. The data assimilation was performed with measurements of activity concentrations in air extracted from the Radioactivity Environmental Monitoring (REM) database. For most of the investigated configurations of the sensitivity study, the statistics comparing model results to field measurements of concentrations in air are clearly improved when using a reconstructed source term. For ground-deposited concentrations, an improvement is only seen in the case of a satisfactorily modelled episode. Through these studies, the source term and the meteorological fields are shown to have a major impact on the activity concentrations in air. These studies also reinforce the use of a reconstructed source term instead of the usual estimated one. A more detailed parameterization of the deposition process also seems able to improve the simulation results. For deposited activities, the results are more complex, probably due to a strong sensitivity to some of the meteorological fields, which remain quite uncertain.
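
    When the source-receptor relationship is linearized, variational source-term inversion reduces to a regularized least-squares problem. A minimal sketch (generic Tikhonov form, not the POLYPHEMUS 4D-Var implementation):

        # Hedged sketch: H, s_true and alpha are synthetic stand-ins.
        import numpy as np

        rng = np.random.default_rng(5)
        H = np.abs(rng.standard_normal((60, 12)))    # source-receptor matrix
        s_true = np.maximum(0, rng.standard_normal(12))  # emission time series
        y = H @ s_true + 0.05 * rng.standard_normal(60)  # "measurements"

        alpha = 0.1                                   # regularization weight
        A = H.T @ H + alpha * np.eye(12)              # minimize ||Hs-y||^2+a||s||^2
        s_hat = np.linalg.solve(A, H.T @ y)           # normal equations solution
        print(np.round(s_hat, 2))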

  7. PHARAO laser source flight model: Design and performances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  8. Source characterization and exposure modeling of gas-phase polycyclic aromatic hydrocarbon (PAH) concentrations in Southern California

    NASA Astrophysics Data System (ADS)

    Masri, Shahir; Li, Lianfa; Dang, Andy; Chung, Judith H.; Chen, Jiu-Chiuan; Fan, Zhi-Hua (Tina); Wu, Jun

    2018-03-01

    Airborne exposures to polycyclic aromatic hydrocarbons (PAHs) are associated with adverse health outcomes. Because personal air measurements of PAHs are labor intensive and costly, spatial PAH exposure models are useful for epidemiological studies. However, few studies provide adequate spatial coverage to reflect intra-urban variability of ambient PAHs. In this study, we collected 39-40 weekly gas-phase PAH samples in southern California twice in summer and twice in winter, 2009, in order to characterize PAH source contributions and develop spatial models that can estimate gas-phase PAH concentrations at a high resolution. A spatial mixed regression model was constructed, including such variables as roadway, traffic, land use, vegetation index, commercial cooking facilities, meteorology, and population density. Cross-validation of the model resulted in an R2 of 0.66 for summer and 0.77 for winter. Results showed higher total PAH concentrations in winter. Pyrogenic sources, such as fossil fuels and diesel exhaust, were the most dominant contributors to total PAHs. PAH sources varied by season, with a higher fossil fuel and wood burning contribution in winter. Spatial autocorrelation accounted for a substantial amount of the variance in total PAH concentrations for both winter (56%) and summer (19%). In summer, other key variables explaining the variance included meteorological factors (9%), population density (15%), and roadway length (21%). In winter, the variance was also explained by traffic density (16%). In this study, source characterization confirmed the dominance of traffic and other fossil fuel sources to total measured gas-phase PAH concentrations, while a spatial exposure model identified key predictors of PAH concentrations. Gas-phase PAH source characterization and exposure estimation is of high utility to epidemiologists and policy makers interested in understanding the health impacts of gas-phase PAHs and strategies to reduce emissions.

  9. Source Characterization and Exposure Modeling of Gas-Phase Polycyclic Aromatic Hydrocarbon (PAH) Concentrations in Southern California.

    PubMed

    Masri, Shahir; Li, Lianfa; Dang, Andy; Chung, Judith H; Chen, Jiu-Chiuan; Fan, Zhi-Hua Tina; Wu, Jun

    2018-03-01

    Airborne exposures to polycyclic aromatic hydrocarbons (PAHs) are associated with adverse health outcomes. Because personal air measurements of PAHs are labor intensive and costly, spatial PAH exposure models are useful for epidemiological studies. However, few studies provide adequate spatial coverage to reflect intra-urban variability of ambient PAHs. In this study, we collected 39-40 weekly gas-phase PAH samples in southern California twice in summer and twice in winter, 2009, in order to characterize PAH source contributions and develop spatial models that can estimate gas-phase PAH concentrations at a high resolution. A spatial mixed regression model was constructed, including such variables as roadway, traffic, land use, vegetation index, commercial cooking facilities, meteorology, and population density. Cross-validation of the model resulted in an R2 of 0.66 for summer and 0.77 for winter. Results showed higher total PAH concentrations in winter. Pyrogenic sources, such as fossil fuels and diesel exhaust, were the most dominant contributors to total PAHs. PAH sources varied by season, with a higher fossil fuel and wood burning contribution in winter. Spatial autocorrelation accounted for a substantial amount of the variance in total PAH concentrations for both winter (56%) and summer (19%). In summer, other key variables explaining the variance included meteorological factors (9%), population density (15%), and roadway length (21%). In winter, the variance was also explained by traffic density (16%). In this study, source characterization confirmed the dominance of traffic and other fossil fuel sources to total measured gas-phase PAH concentrations, while a spatial exposure model identified key predictors of PAH concentrations. Gas-phase PAH source characterization and exposure estimation is of high utility to epidemiologists and policy makers interested in understanding the health impacts of gas-phase PAHs and strategies to reduce emissions.

  10. Modeling and analysis of CSAMT field source effect and its characteristics

    NASA Astrophysics Data System (ADS)

    Da, Lei; Xiaoping, Wu; Qingyun, Di; Gang, Wang; Xiangrong, Lv; Ruo, Wang; Jun, Yang; Mingxin, Yue

    2016-02-01

    Controlled-source audio-frequency magnetotellurics (CSAMT) has been a highly successful geophysical tool used in a variety of geological exploration studies for many years. However, because of the artificial source used in the CSAMT technique, two important factors must be considered during interpretation: non-plane-wave (geometric) effects and source overprint effects. Hence, in this paper we simulate the source overprint effect and analyze the pattern and characteristics of its influence on CSAMT applications. Two-dimensional modeling was carried out using an adaptive unstructured finite element method to simulate several typical models. We also summarize the characteristics and behavior of the source overprint effect and analyze its influence on data taken over several mining areas. The results of the study show that the occurrence and strength of the source overprint effect depend on the location of the source dipole in relation to the receiver and the subsurface geology. To avoid source overprint effects, three principles are suggested for determining the best location for the grounded dipole source in the field.

  11. Modeling Vortex Generators in the Wind-US Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2010-01-01

    A source term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied, and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counter-rotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.
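
    A schematic of the idea, though not the Wind-US implementation: compute a vane lift force from the user-specified planform area and angle of incidence (here with an assumed thin-aerofoil lift slope) and spread it over the marked grid cells.

    ```python
    # Illustrative vane-type vortex-generator source term (assumptions: thin-
    # aerofoil lift slope of 2*pi, force spread uniformly over the marked
    # cells; a sketch only, not the Wind-US source-term formulation).
    import numpy as np

    def vg_cell_force(rho, u_local, planform_area, alpha_deg, n_cells):
        """Lift force (N) applied at each marked grid cell for one vane."""
        cl = 2.0 * np.pi * np.radians(alpha_deg)    # thin-aerofoil lift coefficient
        lift = 0.5 * rho * u_local**2 * planform_area * cl
        return lift / n_cells                       # distribute over the cells

    print(vg_cell_force(rho=1.2, u_local=60.0, planform_area=4e-4,
                        alpha_deg=16.0, n_cells=8))
    ```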

  12. Zoonotic Transmission of Waterborne Disease: A Mathematical Model.

    PubMed

    Waters, Edward K; Hamilton, Andrew J; Sidhu, Harvinder S; Sidhu, Leesa A; Dunbar, Michelle

    2016-01-01

    Waterborne parasites that infect both humans and animals are common causes of diarrhoeal illness, but the relative importance of transmission between humans and animals and vice versa remains poorly understood. Transmission of infection from animals to humans via environmental reservoirs, such as water sources, has attracted attention as a potential source of endemic and epidemic infections, but existing mathematical models of waterborne disease transmission have limitations for studying this phenomenon, as they only consider contamination of environmental reservoirs by humans. This paper develops a mathematical model that represents the transmission of waterborne parasites within and between both animal and human populations. It also improves upon existing models by including animal contamination of water sources explicitly. Linear stability analysis and simulation results, using realistic parameter values to describe Giardia transmission in rural Australia, show that endemic infection of an animal host with zoonotic protozoa can result in endemic infection in human hosts, even in the absence of person-to-person transmission. These results imply that zoonotic transmission via environmental reservoirs is important.
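
    To make the structure of such a model concrete, here is a minimal sketch (not the authors' parameterisation) of a two-host waterborne model in which only animals shed pathogen into a shared water compartment, so any human infection arises via the environmental reservoir; all rates are illustrative.

    ```python
    # Two-host waterborne-transmission sketch: susceptible/infectious humans
    # (sh, ih) and animals (sa, ia) plus a water compartment w. Only animals
    # shed here, so human infection arises purely via the water reservoir.
    # Rates are illustrative; demography is omitted, so infection eventually
    # fades rather than settling at an endemic equilibrium as in the paper.
    from scipy.integrate import solve_ivp

    beta_h, beta_a = 0.4, 0.6        # water-to-host transmission rates
    gamma_h, gamma_a = 0.1, 0.2      # recovery rates
    shed_a, xi = 1.0, 0.5            # animal shedding rate, pathogen decay in water

    def rhs(t, y):
        sh, ih, sa, ia, w = y
        return [-beta_h * sh * w, beta_h * sh * w - gamma_h * ih,
                -beta_a * sa * w, beta_a * sa * w - gamma_a * ia,
                shed_a * ia - xi * w]

    sol = solve_ivp(rhs, (0.0, 200.0), [0.99, 0.0, 0.95, 0.05, 0.0])
    print("peak human infectious fraction:", sol.y[1].max())
    ```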

  13. Spatiotemporal Modelling of Dust Storm Sources Emission in West Asia

    NASA Astrophysics Data System (ADS)

    Khodabandehloo, E.; Alimohamdadi, A.; Sadeghi-Niaraki, A.; Darvishi Boloorani, A.; Alesheikh, A. A.

    2013-09-01

    Dust aerosol is the largest contributor to aerosol mass concentrations in the troposphere and has considerable effects on air quality across spatial and temporal scales. The arid and semi-arid areas of West Asia are among the most important regional dust sources in the world. These phenomena directly or indirectly affect almost all aspects of life in almost 15 countries in the region, so an accurate estimate of dust emissions is crucial for building a common understanding and knowledge of the problem. Because of the spatial and temporal limits of ground-based observations, remote sensing methods have been found to be more efficient and useful for studying the West Asia dust sources. Vegetation cover limits dust emission by decelerating surface wind velocities and therefore reducing momentum transport. While all models explicitly take into account the change of wind speed and soil moisture in calculating dust emissions, they commonly employ "climatological" land cover data for identifying dust source locations and neglect the time variation of surface bareness. To compile the aforementioned model, land surface features such as soil moisture, texture, and type, vegetation, and wind speed as an atmospheric parameter are used. Using NDVI data shows a significant change in dust emission: the modeled dust emission with the dynamic source function in June 2008 is 17.02% higher than with the static source function, while a similar comparison for March 2007 shows the static source function 8.91% higher than the dynamic one. We observe a significant improvement in the accuracy of dust forecasts during the months with the greatest soil-vegetation changes (spring and winter) compared with outputs from the static model, in which NDVI data are neglected.

  14. A mesostate-space model for EEG and MEG.

    PubMed

    Daunizeau, Jean; Friston, Karl J

    2007-10-15

    We present a multi-scale generative model for EEG that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources, (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates, (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other), and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and its evidence. Since the number of meso-sources specifies the model, the model evidence can be used to compare models and find the optimum number of meso-sources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of meso-sources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of meso-sources, while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamical causal model (e.g. a neural mass model) as a Markov process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques, using synthetic and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.

  15. Numerical modeling of heat transfer in the fuel oil storage tank at thermal power plant

    NASA Astrophysics Data System (ADS)

    Kuznetsova, Svetlana A.

    2015-01-01

    This paper presents results of mathematical modeling of convection of a viscous incompressible fluid in a rectangular cavity with conducting walls of finite thickness, with a local heat source at the bottom of the cavity, under conditions of convective heat exchange with the environment. A mathematical model is formulated in terms of the dimensionless variables "stream function - vorticity - velocity - temperature" in the Cartesian coordinate system. The results show the distributions of hydrodynamic parameters and temperature obtained using different boundary conditions at the local heat source.

  16. FEMFLOW3D; a finite-element program for the simulation of three-dimensional aquifers; version 1.0

    USGS Publications Warehouse

    Durbin, Timothy J.; Bond, Linda D.

    1998-01-01

    This document also includes model validation, source code, and example input and output files. Model validation was performed using four test problems. For each test problem, the results of a model simulation with FEMFLOW3D were compared with either an analytic solution or the results of an independent numerical approach. The source code, written in the ANSI X3.9-1978 FORTRAN standard, and the complete input and output of an example problem are listed in the appendixes.

  17. A modification of the Regional Nutrient Management model (ReNuMa) to identify long-term changes in riverine nitrogen sources

    NASA Astrophysics Data System (ADS)

    Hu, Minpeng; Liu, Yanmei; Wang, Jiahui; Dahlgren, Randy A.; Chen, Dingjiang

    2018-06-01

    Source apportionment is critical for guiding development of efficient watershed nitrogen (N) pollution control measures. The ReNuMa (Regional Nutrient Management) model, a semi-empirical, semi-process-oriented model with modest data requirements, has been widely used for riverine N source apportionment. However, the ReNuMa model contains limitations for addressing long-term N dynamics by ignoring temporal changes in atmospheric N deposition rates and N-leaching lag effects. This work modified the ReNuMa model by revising the source code to allow yearly changes in atmospheric N deposition and incorporation of N-leaching lag effects into N transport processes. The appropriate N-leaching lag time was determined from cross-correlation analysis between annual watershed individual N source inputs and riverine N export. Accuracy of the modified ReNuMa model was demonstrated through analysis of a 31-year water quality record (1980-2010) from the Yongan watershed in eastern China. The revisions considerably improved the accuracy (Nash-Sutcliffe coefficient increased by ∼0.2) of the modified ReNuMa model for predicting riverine N loads. The modified model explicitly identified annual and seasonal changes in contributions of various N sources (i.e., point vs. nonpoint source, surface runoff vs. groundwater) to riverine N loads as well as the fate of watershed anthropogenic N inputs. Model results were consistent with previously modeled or observed lag time length as well as changes in riverine chloride and nitrate concentrations during the low-flow regime and available N levels in agricultural soils of this watershed. The modified ReNuMa model is applicable for addressing long-term changes in riverine N sources, providing decision-makers with critical information for guiding watershed N pollution control strategies.
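
    The lag-selection step can be illustrated with a short sketch: pick the lag that maximises the cross-correlation between annual N inputs and riverine N export. The series below are synthetic stand-ins, and best_lag is a hypothetical helper, not code from the modified ReNuMa model.

    ```python
    # Sketch of the lag-time diagnostic: choose the N-leaching lag (in years)
    # that maximises the cross-correlation between annual watershed N input
    # and riverine N export. Both series are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    n_input = rng.random(31)                                  # annual inputs, 1980-2010
    n_export = np.roll(n_input, 4) + 0.05 * rng.random(31)    # 4-year lag + noise

    def best_lag(x, y, max_lag=10):
        corr = [np.corrcoef(x[:len(x) - k], y[k:])[0, 1]
                for k in range(max_lag + 1)]
        return int(np.argmax(corr))

    print("estimated lag (years):", best_lag(n_input, n_export))
    ```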

  18. Switching performance of OBS network model under prefetched real traffic

    NASA Astrophysics Data System (ADS)

    Huang, Zhenhua; Xu, Du; Lei, Wen

    2005-11-01

    Optical Burst Switching (OBS) [1] is now widely considered an efficient switching technique for building the next-generation optical Internet, so it is very important to evaluate the performance of the OBS network model precisely. The performance of the OBS network model varies under different conditions, but the most important question is how it works under real traffic load. In traditional simulation models, uniform traffic is usually generated by simulation software to imitate the data source of the edge node in the OBS network model, and the performance of the OBS network is evaluated through it. Unfortunately, without being driven by real traffic, the traditional simulation models have several problems and their results are questionable. To deal with this problem, we present a new simulation model for analysis and performance evaluation of the OBS network, which uses prefetched IP traffic as the data source of the OBS network model. The prefetched IP traffic can be considered a real IP source for the OBS edge node, and the OBS network model has the same clock rate as a real OBS system. So it is easy to conclude that this model is closer to the real OBS system than the traditional ones. The simulation results also indicate that this model is more accurate for evaluating the performance of the OBS network system, and its results are closer to the actual situation.

  19. Computer Modeling of High-Intensity Cs-Sputter Ion Sources

    NASA Astrophysics Data System (ADS)

    Brown, T. A.; Roberts, M. L.; Southon, J. R.

    The grid-point mesh program NEDLab has been used to computer model the interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS), with the goal of improving negative ion output. NEDLab has several features that are important to realistic modeling of such sources. First, space-charge effects are incorporated in the calculations through an automated ion-trajectories/Poisson-electric-fields successive-iteration process. Second, space charge distributions can be averaged over successive iterations to suppress model instabilities. Third, space charge constraints on ion emission from surfaces can be incorporated through Child's-law-based algorithms. Fourth, the energy of ions emitted from a surface can be randomly chosen from within a thermal energy distribution. And finally, ions can be emitted from a surface at randomized angles. The results of our modeling effort indicate that significant modification of the interior geometry of the source will double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source.

  20. Numerical modeling of the SNS H− ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veitzer, Seth A.; Beckwith, Kristian R. C.; Kundrapu, Madhusudhan

    Ion source rf antennas that produce H− ions can fail when plasma heating causes ablation of the insulating coating due to small structural defects such as cracks. Reducing antenna failures that limit the operating capabilities of the Spallation Neutron Source (SNS) accelerator is one of the top priorities of the SNS H− Source Program at ORNL. Numerical modeling of ion sources can provide techniques for optimizing design in order to reduce antenna failures. There are a number of difficulties in developing accurate models of rf inductive plasmas. First, a large range of spatial and temporal scales must be resolved in order to accurately capture the physics of plasma motion, including the Debye length, rf frequencies on the order of tens of MHz, simulation time scales of many hundreds of rf periods, large device sizes of tens of cm, and ion motions that are thousands of times slower than electrons. This results in large simulation domains with many computational cells for solving plasma and electromagnetic equations, short time steps, and long-duration simulations. In order to reduce the computational requirements, one can develop implicit models for both fields and particle motions (e.g. divergence-preserving ADI methods), various electrostatic models, or magnetohydrodynamic models. We have performed simulations using all three of these methods and have found that fluid models have the greatest potential for giving accurate solutions while still being fast enough to perform long timescale simulations in a reasonable amount of time. We have implemented a number of fluid models with electromagnetics using the simulation tool USim and applied them to modeling the SNS H− ion source. We found that a reduced, single-fluid MHD model with an imposed magnetic field due to the rf antenna current and the confining multi-cusp field generated increased bulk plasma velocities of > 200 m/s in the region of the antenna where ablation is often observed in the SNS source. We report here on comparisons of simulated plasma parameters and code performance using more accurate physical models, such as two-temperature extended MHD models, for both a related benchmark system describing an inductively coupled plasma reactor, and for the SNS ion source. We also present results from scaling studies for mesh generation and solvers in the USim simulation code.

  1. The Characteristics of Electromagnetic Fields Induced by Different Type Sources

    NASA Astrophysics Data System (ADS)

    Di, Q.; Fu, C.; Wang, R.; Xu, C.; An, Z.

    2011-12-01

    The controlled-source audio-frequency magnetotelluric (CSAMT) method has played an important role in shallow exploration (less than 1.5 km) in the fields of resources, environment and engineering geology. To prospect deeper targets, one has to increase the strength of the source and the offset. However, exploration is nearly impossible with the heavy, large-power transmitting sources required for deep prospecting in mountainous areas. So an EM method using a fixed large-power source, such as a long bipole current source, two perpendicular "L"-shape long bipole current sources, or a large-radius circular current source, is beginning to take shape. To increase the strength of the source, the length of the transmitting bipole in one direction or in perpendicular directions has to be much larger, such as L = 100 km, or the radius of the circular current source must be much larger. The electric field strengths scale as IL2 and IL2/4π, respectively, for the long bipole source and the circular current source with the same wire length. Considering only the effectiveness of the source, the strength of the circular current source is larger than that of the long bipole source if the radius is large enough. However, the strength of the electromagnetic signal does not depend solely on the transmitting source: the effect of the ionosphere on the electromagnetic (EM) field should be considered when observations are made very far (several thousand kilometers) from the source for the long bipole source or the large-radius circular current source. We first calculate the electromagnetic fields for the traditional controlled source (CSEM) configuration using the integral equation (IE) code developed by our research group for a three-layer earth-ionosphere model consisting of ionosphere, atmosphere and earth media. The modeling results agree well with the half-space analytical results because the effect of the ionosphere for this small-scale source is negligible, which shows that the integral equation method is reliable and effective for models including ionosphere, atmosphere and earth media. To discuss the characteristics of EM fields in complicated earth-ionosphere media excited by long bipole, "L"-shape bipole and circular current sources in the far-field and wave-guide zones, we modeled the frequency responses and decay characteristics of the EM fields for the three-layer earth-ionosphere model. Because of the effect of the ionosphere, the decay curves of the earth-ionosphere EM fields at a given frequency show that the Ex and Hy fields excited by a long bipole or an "L"-shape bipole develop an additional wave-guide field with slower attenuation and stronger amplitude than in a half space; the EM fields of the circular current source do not show the same behavior, as the ionosphere weakens the amplitude of its EM field. For this reason, it is better to use a long bipole source when working in the wave-guide field with a fixed large-power source.

  2. The discounting model selector: Statistical software for delay discounting applications.

    PubMed

    Gilroy, Shawn P; Franck, Christopher T; Hantula, Donald A

    2017-05-01

    Original, open-source computer software was developed and validated against established delay discounting methods in the literature. The software executed approximate Bayesian model selection methods on user-supplied temporal discounting data and computed the effective delay 50 (ED50) from the best performing model. The software was custom-designed to enable behavior analysts to conveniently apply recent statistical methods to temporal discounting data with the aid of a graphical user interface (GUI). The results of independent validation of the approximate Bayesian model selection methods indicated that the program provided results identical to those of the original source paper and its methods. Monte Carlo simulation (n = 50,000) confirmed that the true model was selected most often in each setting. Simulation code and data for this study were posted to an online repository for use by other researchers. The model selection approach was applied to three existing delay discounting data sets from the literature in addition to the data from the source paper. Comparisons of model-selected ED50 were consistent with traditional indices of discounting. Conceptual issues related to the development and use of computer software by behavior analysts and the opportunities afforded by free and open-source software are discussed, and a review of possible expansions of this software is provided. © 2017 Society for the Experimental Analysis of Behavior.
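
    As a toy illustration of the quantity being computed (not the published software, which first performs approximate Bayesian model selection across several candidate models), one can fit Mazur's hyperbolic discounting model and read off ED50 = 1/k:

    ```python
    # Toy ED50 computation: fit Mazur's hyperbolic model V = 1/(1 + k*D) to
    # indifference points and report ED50 = 1/k, the delay at which subjective
    # value halves. (The model-selection step of the published tool is omitted.)
    import numpy as np
    from scipy.optimize import curve_fit

    delays = np.array([1.0, 7.0, 30.0, 90.0, 180.0, 365.0])   # days
    values = np.array([0.95, 0.80, 0.50, 0.30, 0.20, 0.10])   # fraction of reward

    hyperbolic = lambda d, k: 1.0 / (1.0 + k * d)
    (k_hat,), _ = curve_fit(hyperbolic, delays, values, p0=[0.01])
    print(f"k = {k_hat:.4f}, ED50 = {1.0 / k_hat:.1f} days")
    ```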

  3. Comparison of source apportionment of PM2.5 using receptor models in the main hub port city of East Asia: Busan

    NASA Astrophysics Data System (ADS)

    Jeong, Ju-Hee; Shon, Zang-Ho; Kang, Minsung; Song, Sang-Keun; Kim, Yoo-Keun; Park, Jinsoo; Kim, Hyunjae

    2017-01-01

    The contributions of various PM2.5 emission sources to ambient PM2.5 levels during 2013 in the main hub port city (Busan, South Korea) of East Asia were quantified using several receptor modeling techniques. Three receptor models, namely principal component analysis/absolute principal component score (PCA/APCS), positive matrix factorization (PMF), and chemical mass balance (CMB), were used to apportion the sources of PM2.5 in the target city. The results of the receptor models indicated that secondary formation of PM2.5 was the dominant (45-60%) contributor to PM2.5 levels in the port city of Busan. The PMF and PCA/APCS results suggested that ship emissions were a non-negligible contributor to PM2.5 (up to about 10%) in the study area, whereas they were a negligible contributor based on CMB. The magnitude of source contribution estimates to PM2.5 levels differed significantly among these three models due to their limitations (e.g., PM2.5 emission source profiles and restrictions of the models). Potential source contribution function and concentration-weighted trajectory analyses indicated that long-range transport from sources in eastern China and the Yellow Sea contributed significantly to the level of PM2.5 in Busan.
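
    Of the three techniques, CMB is the most direct to sketch: given assumed source profiles, it solves a non-negative least-squares problem for the source contributions to one ambient sample. The profiles and measurements below are invented for illustration.

    ```python
    # Chemical-mass-balance sketch: with assumed source profiles (species
    # fraction per unit source mass), solve non-negative least squares for
    # the contributions reproducing one ambient sample. Numbers are invented.
    import numpy as np
    from scipy.optimize import nnls

    profiles = np.array([[0.30, 0.05, 0.10],    # rows: species, cols: sources
                         [0.10, 0.40, 0.05],
                         [0.05, 0.10, 0.35],
                         [0.20, 0.15, 0.10]])
    ambient = np.array([4.1, 3.2, 2.7, 3.0])    # measured species, ug/m3

    contributions, residual = nnls(profiles, ambient)
    print("source contributions (ug/m3):", np.round(contributions, 2))
    ```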

  4. A Comparison between Predicted and Observed Atmospheric States and their Effects on Infrasonic Source Time Function Inversion at Source Physics Experiment 6

    NASA Astrophysics Data System (ADS)

    Aur, K. A.; Poppeliers, C.; Preston, L. A.

    2017-12-01

    The Source Physics Experiment (SPE) consists of a series of underground chemical explosions at the Nevada National Security Site (NNSS) designed to gain an improved understanding of the generation and propagation of physical signals in the near and far field. Characterizing the acoustic and infrasound source mechanism from underground explosions is of great importance to underground explosion monitoring. To this end we perform full waveform source inversion of infrasound data collected from the SPE-6 experiment at distances from 300 m to 6 km and frequencies up to 20 Hz. Our method requires estimating the state of the atmosphere at the time of each experiment, computing Green's functions through these atmospheric models, and subsequently inverting the observed data in the frequency domain to obtain a source time function. To estimate the state of the atmosphere at the time of the experiment, we utilize the Weather Research and Forecasting - Data Assimilation (WRF-DA) modeling system to derive a unified atmospheric state model by combining Global Energy and Water Cycle Experiment (GEWEX) Continental-scale International Project (GCIP) data and locally obtained sonde and surface weather observations collected at the time of the experiment. We synthesize Green's functions through these atmospheric models using Sandia's moving media acoustic propagation simulation suite (TDAAPS). These models include 3-D variations in topography, temperature, pressure, and wind. We compare inversion results using the atmospheric models derived from the unified weather models versus previous modeling results and discuss how these differences affect computed source waveforms with respect to observed waveforms at various distances. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
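
    A minimal sketch of the frequency-domain inversion step, under strong simplifications not taken from the abstract (a single receiver, a known Green's function, and water-level regularised spectral division):

    ```python
    # Frequency-domain source-time-function inversion in miniature: given a
    # Green's function g and an observed trace d, estimate the source by
    # water-level regularised spectral division. Signals are synthetic.
    import numpy as np

    fs, n = 200.0, 4096
    t = np.arange(n) / fs
    g = np.exp(-200.0 * (t - 0.5) ** 2)          # toy Green's function
    s_true = np.exp(-500.0 * (t - 0.2) ** 2)     # toy source time function
    d = np.fft.irfft(np.fft.rfft(g) * np.fft.rfft(s_true), n)

    G, D = np.fft.rfft(g), np.fft.rfft(d)
    eps = 1e-3 * (np.abs(G) ** 2).max()          # water-level regularisation
    s_hat = np.fft.irfft(np.conj(G) * D / (np.abs(G) ** 2 + eps), n)
    print(f"source peak recovered at t = {t[np.argmax(s_hat)]:.3f} s")
    ```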

  5. Modeling Vortex Generators in a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.

  6. Numerical Models for Sound Propagation in Long Spaces

    NASA Astrophysics Data System (ADS)

    Lai, Chenly Yuen Cheung

    Both reverberation time and steady-state sound field are key elements for assessing the acoustic condition of an enclosed space. They affect noise propagation, speech intelligibility, clarity index, and definition. Since the sound field in a long space is non-diffuse, classical room acoustics theory does not apply in this situation. The ray tracing technique and the image source method are two common models used today to determine both reverberation time and steady-state sound field in long enclosures. Although both models can estimate reverberation times and steady-state sound fields accurately, directly or indirectly, they often involve time-consuming calculations. To simplify the acoustic analysis, a theoretical formulation has been developed for predicting both steady-state sound fields and reverberation times in street canyons. The prediction model is further developed to predict the steady-state sound field in a long enclosure. Apart from the straight long enclosure, there are other variations such as a cross junction, a long enclosure with a T-intersection, and a U-turn long enclosure. In the present study, theoretical and experimental investigations were conducted to develop formulae for predicting reverberation times and steady-state sound fields in a junction of a street canyon and in a long enclosure with a T-intersection. The theoretical models are validated by comparing the numerical predictions with published experimental results. The theoretical results are also compared with precise indoor measurements and large-scale outdoor experimental results. Most previous acoustical studies of long enclosures have focused on monopole sound sources. Besides non-directional noise sources, however, many noise sources in long enclosures, such as train noise and fan noise, are dipole-like. To study the characteristics of directional noise sources, a review of available dipole sources was conducted, and a dipole was constructed and subsequently used for experimental studies. In addition, a theoretical model was developed for predicting dipole sound fields; it can be used to study the effect of a dipole source on speech intelligibility in long enclosures.

  7. Odor-conditioned rheotaxis of the sea lamprey: modeling, analysis and validation

    USGS Publications Warehouse

    Choi, Jongeun; Jean, Soo; Johnson, Nicholas S.; Brant, Cory O.; Li, Weiming

    2013-01-01

    Mechanisms for orienting toward and locating an odor source are sought in both biology and engineering. Chemical ecology studies have demonstrated that adult female sea lamprey show rheotaxis in response to a male pheromone, with dichotomous outcomes: sexually mature females locate the source of the pheromone, whereas immature females swim by the source and continue moving upstream. Here we introduce a simple switching mechanism modeled after odor-conditioned rheotaxis for the sea lamprey as they search for the source of a pheromone in a one-dimensional riverine environment. In this strategy, the females move upstream only if they detect that the pheromone concentration is higher than a threshold value, and drift down (turning off control action to save energy) otherwise. In addition, we propose various uncertainty models such as measurement noise, actuator disturbance, and a probabilistic model of a concentration field in turbulent flow. Based on the proposed model with uncertainties, a convergence analysis showed that with this simplistic switching mechanism the lamprey converges to the source location on average in spite of all such uncertainties. Furthermore, a slightly modified model and its extensive simulation results explain the behaviors of immature female lamprey near the source location.
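
    The switching rule itself is simple enough to sketch in a few lines. The plume shape, speeds, and noise level below are illustrative assumptions, with the pheromone plume existing only downstream of the source:

    ```python
    # One-dimensional sketch of odor-conditioned rheotaxis: swim upstream when
    # the sensed pheromone exceeds a threshold, otherwise drift downstream.
    # Plume shape, speeds, and noise level are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(3)
    x, dt = 50.0, 0.1                        # start 50 m downstream of the source

    def concentration(x):
        # Pheromone is carried downstream, so the plume exists only for x >= 0.
        return np.exp(-0.02 * x) if x >= 0.0 else 0.0

    for _ in range(5000):
        sensed = concentration(x) + 0.05 * rng.standard_normal()
        v = -1.0 if sensed > 0.2 else 0.3    # upstream swim vs. passive drift
        x += v * dt
    print(f"final position relative to source: {x:.1f} m")
    ```

    Starting inside the plume, the simulated female swims upstream until she overshoots the source, loses the signal, drifts back, and thereafter hovers about the source location, mirroring the convergence result described above.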

  8. Analysis of seismic sources for different mechanisms of fracture growth for microseismic monitoring applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchkov, A. A., E-mail: DuchkovAA@ipgg.sbras.ru; Novosibirsk State University, Novosibirsk, 630090; Stefanov, Yu. P., E-mail: stefanov@ispms.tsc.ru

    2015-10-27

    We have developed and illustrated an approach for geomechanical modeling of elastic wave generation (microseismic event occurrence) during incremental fracture growth. We then derived the properties of effective point seismic sources (radiation patterns) approximating the obtained wavefields. These results establish a connection between geomechanical models of hydraulic fracturing and microseismic monitoring. Thus, the results of moment tensor inversion of microseismic data can be related to different geomechanical scenarios of hydraulic fracture growth. In the future, the results can be used for calibrating hydrofrac models. We carried out a series of numerical simulations and made some observations about wave generation during fracture growth. In particular, when the growing fracture hits a pre-existing crack, it generates a much stronger microseismic event than fracture growth in a homogeneous medium (the radiation pattern is very close to the theoretical dipole-type source mechanism).

  9. [Effects of attitude formation, persuasive message, and source expertise on attitude change: an examination based on the Elaboration Likelihood Model and the Attitude Formation Theory].

    PubMed

    Nakamura, M; Saito, K; Wakabayashi, M

    1990-04-01

    The purpose of this study was to investigate how attitude change is generated by the recipient's degree of attitude formation, evaluative-emotional elements contained in the persuasive messages, and source expertise as a peripheral cue in the persuasion context. Hypotheses based on the Attitude Formation Theory of Mizuhara (1982) and the Elaboration Likelihood Model of Petty and Cacioppo (1981, 1986) were examined. Eighty undergraduate students served as subjects in the experiment, the first stage of which involved manipulating the degree of attitude formation with respect to nuclear power development. Then, the experimenter presented persuasive messages with varying combinations of evaluative-emotional elements from a source with either high or low expertise on the subject. Results revealed a significant interaction effect on attitude change among attitude formation, persuasive message, and the expertise of the message source. That is, high attitude formation subjects resisted evaluative-emotional persuasion from the high expertise source, while low attitude formation subjects changed their attitude when exposed to the same persuasive message from a low expertise source. Results exceeded initial predictions based on the Attitude Formation Theory and the Elaboration Likelihood Model.

  10. ON THE LAMPPOST MODEL OF ACCRETING BLACK HOLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niedźwiecki, Andrzej; Szanecki, Michał; Zdziarski, Andrzej A.

    2016-04-10

    We study the lamppost model, in which the X-ray source in accreting black hole (BH) systems is located on the rotation axis close to the horizon. We point out a number of inconsistencies in the widely used lamppost model relxilllp, e.g., neglecting the redshift of the photons emitted by the lamppost that are directly observed. They appear to invalidate those model fitting results for which the source distances from the horizon are within several gravitational radii. Furthermore, if those results were correct, most of the photons produced in the lamppost would be trapped by the BH, and the luminosity generated in the source as measured at infinity would be much larger than that observed. This appears to be in conflict with the observed smooth state transitions between the hard and soft states of X-ray binaries. The required increase of the accretion rate and the associated efficiency reduction also present a problem for active galactic nuclei. Those models also imply that the luminosity measured in the local frame is much higher than that produced in the source and measured at infinity, owing to the additional effects of time dilation and redshift, and that the electron temperature is significantly higher than that observed. We show that these conditions imply that the fitted sources would be out of e± pair equilibrium. On the other hand, the above issues pose relatively minor problems for sources at large distances from the BH, where relxilllp can still be used.

  11. On the Vertical Distribution of Local and Remote Sources of Water for Precipitation

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.

    2001-01-01

    The vertical distribution of local and remote sources of water for precipitation and total column water over the United States are evaluated in a general circulation model simulation. The Goddard Earth Observing System (GEOS) general circulation model (GCM) includes passive constituent tracers to determine the geographical sources of the water in the column. Results show that the local percentage of precipitable water and local percentage of precipitation can be very different. The transport of water vapor from remote oceanic sources at mid and upper levels is important to the total water in the column over the central United States, while the access of locally evaporated water in convective precipitation processes is important to the local precipitation ratio. This result resembles the conceptual formulation of the convective parameterization. However, the formulations of simple models of precipitation recycling include the assumption that the ratio of the local water in the column is equal to the ratio of the local precipitation. The present results demonstrate the uncertainty in that assumption, as locally evaporated water is more concentrated near the surface.

  12. A Comprehensive Model of the Near-Earth Magnetic Field. Phase 3

    NASA Technical Reports Server (NTRS)

    Sabaka, Terence J.; Olsen, Nils; Langel, Robert A.

    2000-01-01

    The near-Earth magnetic field is due to sources in Earth's core, ionosphere, magnetosphere, lithosphere, and from coupling currents between ionosphere and magnetosphere and between hemispheres. Traditionally, the main field (low degree internal field) and magnetospheric field have been modeled simultaneously, and fields from other sources modeled separately. Such a scheme, however, can introduce spurious features. A new model, designated CMP3 (Comprehensive Model: Phase 3), has been derived from quiet-time Magsat and POGO satellite measurements and observatory hourly and annual mean measurements as part of an effort to coestimate fields from all of these sources. This model represents a significant advancement in the treatment of the aforementioned field sources over previous attempts, and includes an accounting for main field influences on the magnetosphere, main field and solar activity influences on the ionosphere, seasonal influences on the coupling currents, a priori characterization of ionospheric and magnetospheric influence on Earth-induced fields, and an explicit parameterization and estimation of the lithospheric field. The result of this effort is a model whose fits to the data are generally superior to previous models and whose parameter states for the various constituent sources are very reasonable.

  13. Bayesian source term estimation of atmospheric releases in urban areas using LES approach.

    PubMed

    Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo

    2018-05-05

    The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained by the Reynolds-averaged Navier-Stokes (RANS) equations, which yields building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the predictions of both airflow and dispersion. Therefore, it is important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of the existing method using a RANS model. The results show that the proposed method reduces the errors of source location and releasing strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
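
    A minimal sketch of the Bayesian estimation step, assuming the adjoint-derived source-receptor relationship has already been reduced to a matrix over candidate source locations and that measurement errors are Gaussian (all values synthetic):

    ```python
    # Bayesian source-location sketch: each column of SR is the sensor response
    # to a unit release at one candidate location (in the paper this comes from
    # adjoint equations on the LES mean flow). Errors are Gaussian; values are
    # synthetic.
    import numpy as np

    rng = np.random.default_rng(4)
    n_sensors, n_candidates, sigma = 12, 100, 0.02
    SR = rng.random((n_sensors, n_candidates))
    true_loc, q_true = 37, 2.0
    obs = q_true * SR[:, true_loc] + sigma * rng.standard_normal(n_sensors)

    q_hat = (SR * obs[:, None]).sum(0) / (SR**2).sum(0)   # per-candidate ML rate
    log_like = -((obs[:, None] - SR * q_hat) ** 2).sum(0) / (2.0 * sigma**2)
    post = np.exp(log_like - log_like.max())
    post /= post.sum()                                    # flat prior over candidates
    best = int(post.argmax())
    print("MAP location:", best, "estimated release rate:", round(q_hat[best], 2))
    ```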

  14. Source apportionment of formaldehyde during TexAQS 2006 using a source-oriented chemical transport model

    NASA Astrophysics Data System (ADS)

    Zhang, Hongliang; Li, Jingyi; Ying, Qi; Guven, Birnur Buzcu; Olaguer, Eduardo P.

    2013-02-01

    In this study, a source-oriented version of the Community Multiscale Air Quality (CMAQ) model was developed and used to quantify the contributions of five major local emission source types in Southeast Texas (vehicles, industry, natural gas combustion, wildfires, biogenic sources), as well as upwind sources, to regional primary and secondary formaldehyde (HCHO) concentrations. Predicted HCHO concentrations agree well with observations at two urban sites (the Moody Tower [MT] site at the University of Houston and the Haden Road #3 [HRM-3] site operated by Texas Commission on Environmental Quality). However, the model underestimates concentrations at an industrial site (Lynchburg Ferry). Throughout most of Southeast Texas, primary HCHO accounts for approximately 20-30% of total HCHO, while the remaining portion is due to secondary HCHO (30-50%) and upwind sources (20-50%). Biogenic sources, natural gas combustion, and vehicles are important sources of primary HCHO in the urban Houston area, respectively, accounting for 10-20%, 10-30%, and 20-60% of total primary HCHO. Biogenic sources, industry, and vehicles are the top three sources of secondary HCHO, respectively, accounting for 30-50%, 10-30%, and 5-15% of overall secondary HCHO. It was also found that over 70% of PAN in the Houston area is due to upwind sources, and only 30% is formed locally. The model-predicted source contributions to HCHO at the MT generally agree with source apportionment results obtained from the Positive Matrix Factorization (PMF) technique.

  15. Nitrate variability in groundwater of North Carolina using monitoring and private well data models.

    PubMed

    Messier, Kyle P; Kane, Evan; Bolich, Rick; Serre, Marc L

    2014-09-16

    Nitrate (NO3-) is a widespread contaminant of groundwater and surface water across the United States that has deleterious effects on human and ecological health. This study develops a model for predicting point-level groundwater NO3- at a state scale for monitoring wells and private wells of North Carolina. A land use regression (LUR) model selection procedure is developed for determining nonlinear model explanatory variables when they are known to be correlated. Bayesian Maximum Entropy (BME) is used to integrate the LUR model into a LUR-BME model of spatially/temporally varying groundwater NO3- concentrations. LUR-BME results in a leave-one-out cross-validation r2 of 0.74 and 0.33 for monitoring and private wells, respectively, effectively predicting within spatial covariance ranges. Results show significant differences in the spatial distribution of groundwater NO3- contamination in monitoring versus private wells; high NO3- concentrations in the southeastern plains of North Carolina; and wastewater treatment residuals and swine confined animal feeding operations as local sources of NO3- in monitoring wells. Results are of interest to agencies that regulate drinking water sources or monitor health outcomes from ingestion of drinking water. Lastly, LUR-BME model estimates can be integrated into surface water models for more accurate management of nonpoint sources of nitrogen.
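
    The LUR step can be sketched as an ordinary regression with leave-one-out cross-validation; the predictors and nitrate values below are synthetic placeholders for covariates such as cropland fraction or swine CAFO density, and the BME spatial/temporal integration is not shown.

    ```python
    # Land-use-regression skeleton with leave-one-out cross-validation (the
    # BME integration step is not shown). Data are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(5)
    X = rng.random((60, 3))                            # 60 wells, 3 covariates
    y = 2.0 + 3.0 * X[:, 0] + 1.5 * X[:, 2] + 0.3 * rng.standard_normal(60)

    pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    r2 = 1.0 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    print(f"leave-one-out cross-validation r2: {r2:.2f}")
    ```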

  16. Determining Mass and Persistence of a Reactive Brominated-Solvent DNAPL Source Using Mass Depletion-Mass Flux Reduction Relationships During Pumping

    NASA Astrophysics Data System (ADS)

    Johnston, C. D.; Davis, G. B.; Bastow, T.; Annable, M. D.; Trefry, M. G.; Furness, A.; Geste, Y.; Woodbury, R.; Rhodes, S.

    2011-12-01

    Measures of the source mass and depletion characteristics of recalcitrant dense non-aqueous phase liquid (DNAPL) contaminants are critical elements for assessing the performance of remediation efforts, in addition to understanding the relationships between source mass depletion and changes to dissolved contaminant concentration and mass flux in groundwater. Here we present results of applying analytical source-depletion concepts to pumping from within the DNAPL source zone of a 10-m thick heterogeneous layered aquifer to estimate the original source mass and characterise the time trajectory of source depletion and mass flux in groundwater. The multi-component, reactive DNAPL source consisted of the brominated solvent tetrabromoethane (TBA) and its transformation products (mostly tribromoethene - TriBE). Coring and multi-level groundwater sampling indicated the DNAPL to be mainly in lower-permeability layers, suggesting the source had already undergone appreciable depletion. Four simplified source dissolution models (exponential, power function, error function and rational mass) were able to describe the concentration history of the total molar concentration of brominated organics in extracted groundwater during 285 days of pumping. Approximately 152 kg of brominated compounds were extracted. The lack of significant kinetic mass transfer limitations in pumped concentrations was notable, despite the heterogeneous layering in the aquifer and the distribution of DNAPL. There was little to choose between the model fits to the pumped concentration time series. The variance of groundwater velocities in the aquifer determined during a partitioning inter-well tracer test (PITT) was used to parameterise the models; however, the models were found to be relatively insensitive to this parameter. All models indicated an initial source mass of around 250 kg, which compared favourably with an estimate of 220 kg derived from the PITT. The extrapolated concentrations from the dissolution models diverged, showing disparate approaches to possible remediation objectives. They also showed, however, that an appreciable proportion of the source would need to be removed to discriminate between the models, which may limit the utility of such modelling early in the history of a DNAPL source. A further limitation is the simplified approach of analysing the combined parent/daughter compounds, which have different solubilities, as a total molar concentration. Although the fitted results gave confidence to this approach, there were appreciable changes in relative abundance. The dissolution and partitioning processes are discussed in relation to the lower-solubility TBA becoming dominant in pumped groundwater over time, despite its known rapid transformation to TriBE. These processes are also related to the architecture of the depleting source as revealed by multi-level groundwater sampling under reversed pumping/injection conditions.
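
    As an illustration of how such dissolution models yield a source-mass estimate, the sketch below fits an exponential depletion model C(t) = C0 exp(-q C0 t / M0), chosen so that the mass extracted as t goes to infinity equals M0; the pumping rate, concentrations, and noise are assumed values, not the site data.

    ```python
    # Fit of an exponential source-depletion model to pumped concentrations:
    # C(t) = C0 * exp(-q * C0 * t / M0), for which the integrated extracted
    # mass equals M0. All values are assumed for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    q = 50.0                                     # pumping rate, m3/day (assumed)
    t = np.linspace(0.0, 285.0, 30)              # days of pumping

    def conc(t, c0, m0):
        return c0 * np.exp(-q * c0 * t / m0)

    rng = np.random.default_rng(6)
    c_obs = conc(t, 0.02, 250.0) * (1.0 + 0.05 * rng.standard_normal(30))
    (c0_hat, m0_hat), _ = curve_fit(conc, t, c_obs, p0=[0.01, 100.0])
    print(f"initial source mass estimate: {m0_hat:.0f} kg")
    ```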

  17. Modeling for the SAFRR Tsunami Scenario-generation, propagation, inundation, and currents in ports and harbors: Chapter D in The SAFRR (Science Application for Risk Reduction) Tsunami Scenario

    USGS Publications Warehouse

    ,

    2013-01-01

    This U.S. Geological Survey (USGS) Open-File report presents a compilation of tsunami modeling studies for the Science Application for Risk Reduction (SAFRR) tsunami scenario. These modeling studies are based on an earthquake source specified by the SAFRR tsunami source working group (Kirby and others, 2013). The modeling studies in this report are organized into three groups. The first group relates to tsunami generation. The effects that source discretization and horizontal displacement have on tsunami initial conditions are examined in section 1 (Whitmore and others). In section 2 (Ryan and others), dynamic earthquake rupture models are explored in modeling tsunami generation. These models calculate slip distribution and vertical displacement of the seafloor as a result of realistic fault friction, physical properties of rocks surrounding the fault, and dynamic stresses resolved on the fault. The second group of papers relates to tsunami propagation and inundation modeling. Section 3 (Thio) presents a modeling study for the entire California coast that includes runup and inundation modeling where there is significant exposure and estimates of maximum velocity and momentum flux at the shoreline. In section 4 (Borrero and others), modeling of tsunami propagation and high-resolution inundation of critical locations in southern California is performed using the National Oceanic and Atmospheric Administration’s (NOAA) Method of Splitting Tsunami (MOST) model and NOAA’s Community Model Interface for Tsunamis (ComMIT) modeling tool. Adjustments to the inundation line owing to fine-scale structures such as levees are described in section 5 (Wilson). The third group of papers relates to modeling of hydrodynamics in ports and harbors. Section 6 (Nicolsky and Suleimani) presents results of the model used at the Alaska Earthquake Information Center for the Ports of Los Angeles and Long Beach, as well as synthetic time series of the modeled tsunami for other selected locales in southern California. Importantly, section 6 provides a comparison of the effect of including horizontal displacements at the source described in section 1 and differences in bottom friction on wave heights and inundation in the Ports of Los Angeles and Long Beach. Modeling described in section 7 (Lynett and Son) uses a higher order physical model to determine variations of currents during the tsunami and complex flow structures such as jets and eddies. Section 7 also uses sediment transport models to estimate scour and deposition of sediment in ports and harbors—a significant effect that was observed in southern California following the 2011 Tohoku tsunami. Together, all of the sections in this report form the basis for damage, impact, and emergency preparedness aspects of the SAFRR tsunami scenario. Three sections of this report independently calculate wave height and inundation results using the source specified by Kirby and others (2013). Refer to figure 29 in section 3, figure 52 in section 4, and figure 62 in section 6. All of these results are relative to a mean high water (MHW) vertical datum. Slight differences in the results are observed in East Basin of the Port of Los Angeles, Alamitos Bay, and the Seal Beach National Wildlife Refuge. However, given that these three modeling efforts involved different implementations of the source, different numerical wave propagation and runup models, and slight differences in the digital elevation models (DEMs), the similarity among the results is remarkable.

  18. A new DG nanoscale TFET based on MOSFETs by using source gate electrode: 2D simulation and an analytical potential model

    NASA Astrophysics Data System (ADS)

    Ramezani, Zeinab; Orouji, Ali A.

    2017-08-01

    This paper proposes and investigates a double-gate (DG) MOSFET that emulates a tunnel field effect transistor (M-TFET). We combine this novel concept into a double-gate MOSFET that behaves as a tunneling field effect transistor through work function engineering. In the proposed structure, in addition to the main gate, we place another gate over the source region with zero applied voltage and a proper work function to convert the source region from N+ to P+. We examine the impact of varying the source gate work function and source doping on the device parameters. The simulation results for the M-TFET indicate that it is well suited to switching applications. We also present a two-dimensional analytical potential model of the proposed structure, obtained by solving Poisson's equation in the x and y directions; the electric field is then obtained by differentiating the potential profile. To validate the model, the analytical results are compared with simulations from the SILVACO ATLAS device simulator.

  19. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed, within an integrated GIS modeling environment, a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas, and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated into an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provides a physically based method that gives realistic results for watersheds with VSA hydrology.
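
    For reference, the standard SCS-CN runoff equation that the distributed CN-VSA method applies can be written in a few lines; the rainfall depth and curve number in the usage line are arbitrary example values.

    ```python
    # The standard SCS-CN runoff equation (metric form):
    # S = 25400/CN - 254 (mm), Ia = 0.2*S, Q = (P - Ia)^2 / (P - Ia + S).
    def scs_cn_runoff(p_mm: float, cn: float) -> float:
        s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
        ia = 0.2 * s                      # initial abstraction (mm)
        if p_mm <= ia:
            return 0.0                    # all rainfall abstracted, no runoff
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    print(scs_cn_runoff(p_mm=50.0, cn=75.0))  # event runoff depth (mm)
    ```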

  20. High-order scheme for the source-sink term in a one-dimensional water temperature model

    PubMed Central

    Jing, Zheng; Kang, Ling

    2017-01-01

    The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation that can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data. PMID:28264005
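
    As a schematic of the two-step operator splitting described above (not the authors' high-order scheme), the sketch below advances a 1-D heat equation with a source term by first integrating the source explicitly and then applying a Crank-Nicolson diffusion step; the grid, diffusivity, and radiation-like source profile are placeholder assumptions.

    ```python
    import numpy as np

    # u_t = alpha * u_zz + q(z), solved by operator splitting:
    #   step 1: source-sink term (explicit Euler here; the paper's scheme is higher order)
    #   step 2: diffusion via Crank-Nicolson
    nz, dz, dt, alpha = 50, 0.2, 60.0, 1.5e-4          # assumed values
    z = np.arange(nz) * dz
    q = 1e-4 * np.exp(-z / 2.0)                        # decaying radiation-like source (K/s)
    u = np.full(nz, 15.0)                              # initial temperature (deg C)

    r = alpha * dt / dz**2
    A = np.eye(nz) * (1 + r) + np.eye(nz, k=1) * (-r / 2) + np.eye(nz, k=-1) * (-r / 2)
    B = np.eye(nz) * (1 - r) + np.eye(nz, k=1) * (r / 2) + np.eye(nz, k=-1) * (r / 2)
    A[0, :], A[-1, :] = 0, 0; A[0, 0] = A[-1, -1] = 1  # boundaries held fixed
    B[0, :], B[-1, :] = 0, 0; B[0, 0] = B[-1, -1] = 1  # during the diffusion step

    for _ in range(1440):                              # one day of 60 s steps
        u = u + dt * q                                 # step 1: source-sink term
        u = np.linalg.solve(A, B @ u)                  # step 2: Crank-Nicolson diffusion

    print(f"surface temperature after 1 day: {u[0]:.2f} C")
    ```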

  2. Fusion neutron source blanket: requirements for calculation accuracy and benchmark experiment precision

    NASA Astrophysics Data System (ADS)

    Zhirkin, A. V.; Alekseev, P. N.; Batyaev, V. F.; Gurevich, M. I.; Dudnikov, A. A.; Kuteev, B. V.; Pavlov, K. V.; Titarenko, Yu. E.; Titarenko, A. Yu.

    2017-06-01

    This report formulates calculation accuracy requirements for the main parameters of a fusion neutron source and of thermonuclear blankets with a DT fusion power of more than 10 MW. To support benchmark experiments, technical documentation and calculation models were developed for two blanket micro-models: a molten-salt blanket and a heavy-water solid-state blanket. For each blanket micro-model, the neutron spectra and 37 dosimetric reaction rates widely used for registering thermal, resonance, and threshold (0.25-13.45 MeV) neutrons were calculated. The MCNP code and the ENDF/B-VII neutron data library were used for the calculations. All calculations were performed for two kinds of neutron source: source I is a fusion source; source II is a source of neutrons generated by a 7Li target irradiated by 24.6 MeV protons. Spectral index ratios were calculated to describe the spectrum variations between the two neutron sources. The obtained results demonstrate the advantage of using the fusion neutron source in future experiments.

  3. Human health risk assessment: models for predicting the effective exposure duration of on-site receptors exposed to contaminated groundwater.

    PubMed

    Baciocchi, Renato; Berardi, Simona; Verginelli, Iason

    2010-09-15

    Clean-up of contaminated sites is usually based on a risk-based approach for defining remediation goals, relying on the well-known ASTM-RBCA standard procedure. In this procedure, migration of contaminants is described through simple analytical models, and the source contaminant concentration is assumed constant throughout the entire exposure period, i.e. 25-30 years. The latter assumption may often be over-protective of human health, leading to unrealistically low remediation goals. The aim of this work is to propose an alternative model that takes source depletion into account while keeping the original simplicity and analytical form of the ASTM-RBCA approach. The results obtained with this model are compared with those provided by the traditional ASTM-RBCA approach, by a model based on the source depletion algorithm of the RBCA ToolKit software, and by a numerical model, allowing its suitability for inclusion in risk analysis procedures to be assessed. The results discussed in this work are limited to on-site exposure to contaminated water by ingestion, but the proposed approach can be extended to other exposure pathways. Copyright 2010 Elsevier B.V. All rights reserved.
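
    To illustrate why source depletion matters for the exposure estimate, here is a minimal sketch contrasting the constant-concentration assumption with a first-order depleting source averaged over a 25-year exposure duration. The decay constant and concentration are invented for illustration and are not from the paper's algorithm.

    ```python
    import numpy as np

    C0 = 0.5          # initial groundwater concentration (mg/L), assumed
    ED = 25.0         # exposure duration (years)
    lam = 0.15        # first-order source depletion rate (1/yr), assumed

    # Average concentration seen by an on-site receptor over the exposure period
    C_constant = C0                                           # constant-source assumption
    C_depleting = C0 * (1 - np.exp(-lam * ED)) / (lam * ED)   # time-averaged decaying source

    print(f"constant-source average : {C_constant:.3f} mg/L")
    print(f"depleting-source average: {C_depleting:.3f} mg/L")
    # A lower average concentration translates into proportionally less
    # conservative (higher) risk-based remediation goals.
    ```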

  4. Discrete time modeling and stability analysis of TCP Vegas

    NASA Astrophysics Data System (ADS)

    You, Byungyong; Koo, Kyungmo; Lee, Jin S.

    2007-12-01

    This paper presents an analysis method for a TCP Vegas network model with a single link and a single source. Previous work has shown global stability for several network models, but those models are not dual problems in which dynamics exist in both sources and links, as in TCP Vegas. Other papers have studied TCP Vegas as a dual problem but did not fully derive an asymptotic stability region. We therefore analyze TCP Vegas with Jury's criterion, which provides a necessary and sufficient condition. Using a discrete-time state-space model and Jury's criterion, we obtain an asymptotic stability region for the TCP Vegas network model. This result is verified by ns-2 simulation, and comparison with other results shows that the method performs well.
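
    Jury's criterion gives algebraic conditions for all roots of a discrete-time characteristic polynomial to lie inside the unit circle. A quick numeric stand-in for that check, using an invented second-order polynomial rather than the paper's TCP Vegas model, is sketched below.

    ```python
    import numpy as np

    def schur_stable(coeffs):
        """True if all roots of the polynomial (highest degree first) lie strictly
        inside the unit circle, the condition that Jury's criterion certifies."""
        return bool(np.all(np.abs(np.roots(coeffs)) < 1.0))

    # Hypothetical characteristic polynomial z^2 - 0.5*z + a0 of a linearized
    # discrete-time model; sweep a gain-like parameter to trace a stability region.
    for a0 in (0.3, 0.8, 1.1):
        print(f"a0={a0}: stable={schur_stable([1.0, -0.5, a0])}")
    ```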

  5. Distributed source model for the full-wave electromagnetic simulation of nonlinear terahertz generation.

    PubMed

    Fumeaux, Christophe; Lin, Hungyen; Serita, Kazunori; Withayachumnankul, Withawat; Kaufmann, Thomas; Tonouchi, Masayoshi; Abbott, Derek

    2012-07-30

    The process of terahertz generation through optical rectification in a nonlinear crystal is modeled using discretized equivalent current sources. The equivalent terahertz sources are distributed in the active volume and computed based on a separately modeled near-infrared pump beam. This approach can be used to define an appropriate excitation for full-wave electromagnetic numerical simulations of the generated terahertz radiation. This enables predictive modeling of the near-field interactions of the terahertz beam with micro-structured samples, e.g. in a near-field time-resolved microscopy system. The distributed source model is described in detail, and an implementation in a particular full-wave simulation tool is presented. The numerical results are then validated through a series of measurements on square apertures. The general principle can be applied to other nonlinear processes with possible implementation in any full-wave numerical electromagnetic solver.

  6. Seismic hazard in the eastern United States

    USGS Publications Warehouse

    Mueller, Charles; Boyd, Oliver; Petersen, Mark D.; Moschetti, Morgan P.; Rezaeian, Sanaz; Shumway, Allison

    2015-01-01

    The U.S. Geological Survey seismic hazard maps for the central and eastern United States were updated in 2014. We analyze results and changes for the eastern part of the region. Ratio maps are presented, along with tables of ground motions and deaggregations for selected cities. The Charleston fault model was revised, and a new fault source for Charlevoix was added. Background seismicity sources utilized an updated catalog, revised completeness and recurrence models, and a new adaptive smoothing procedure. Maximum-magnitude models and ground motion models were also updated. Broad, regional hazard reductions of 5%–20% are mostly attributed to new ground motion models with stronger near-source attenuation. The revised Charleston fault geometry redistributes local hazard, and the new Charlevoix source increases hazard in northern New England. Strong increases in mid- to high-frequency hazard at some locations—for example, southern New Hampshire, central Virginia, and eastern Tennessee—are attributed to updated catalogs and/or smoothing.

  7. Modelling infrasound signal generation from two underground explosions at the Source Physics Experiment using the Rayleigh integral

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Kyle R.; Whitaker, Rodney W.; Arrowsmith, Stephen J.

    2014-12-11

    For this study, we use the Rayleigh integral (RI) as an approximation to the Helmholtz-Kirchhoff integral to model infrasound generation and propagation from underground chemical explosions at distances of 250 m out to 5 km as part of the Source Physics Experiment (SPE). Using a sparse network of surface accelerometers installed above ground zero, we are able to accurately create synthetic acoustic waveforms and compare them to the observed data. Although the underground explosive sources were designed to be symmetric, the resulting seismic wave at the surface shows an asymmetric propagation pattern that is stronger to the northeast of the borehole. This asymmetric bias may be attributed to the subsurface geology and faulting of the area and is observed in the acoustic waveforms. We compare observed and modelled results from two of the underground SPE tests with a sensitivity study to evaluate the asymmetry observed in the data. This work shows that it is possible to model infrasound signals from underground explosive sources using the RI and that asymmetries observed in the data can be modelled with this technique.
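
    A minimal frequency-domain discretization of the Rayleigh integral over a grid of surface accelerometers might look like the sketch below; the sensor geometry, frequency, and acceleration amplitudes are placeholders, not SPE data.

    ```python
    import numpy as np

    rho0, c = 1.2, 340.0                 # air density (kg/m^3), sound speed (m/s)
    f = 2.0                              # infrasound frequency (Hz), assumed
    k = 2 * np.pi * f / c

    # Hypothetical 11 x 11 grid of surface accelerometers above ground zero
    xs, ys = np.meshgrid(np.linspace(-200, 200, 11), np.linspace(-200, 200, 11))
    dS = 40.0 * 40.0                     # surface patch area per sensor (m^2)
    a_n = 0.5 * np.exp(-(xs**2 + ys**2) / 150.0**2)   # vertical acceleration amplitude (m/s^2)

    # Rayleigh integral: p(r) = rho0/(2*pi) * sum_i a_n,i * exp(-j*k*R_i)/R_i * dS
    def pressure(rx, ry, rz):
        R = np.sqrt((rx - xs)**2 + (ry - ys)**2 + rz**2)
        return rho0 / (2 * np.pi) * np.sum(a_n * np.exp(-1j * k * R) / R * dS)

    for d in (250.0, 1000.0, 5000.0):    # receiver distances used in the study
        print(f"{d:6.0f} m: |p| = {abs(pressure(d, 0.0, 1.0)):.3e} Pa")
    ```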

  8. ATMOSPHERIC AEROSOL SOURCE-RECEPTOR RELATIONSHIPS: THE ROLE OF COAL-FIRED POWER PLANTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen L. Robinson; Spyros N. Pandis; Cliff I. Davidson

    2004-12-01

    This report describes the technical progress made on the Pittsburgh Air Quality Study (PAQS) during the period of March 2004 through August 2004. Significant progress was made this project period on the analysis of ambient data, source apportionment, and deterministic modeling activities. Results highlighted in this report include evaluation of the performance of PMCAMx+ for an air pollution episode in the eastern US, an emission profile for a coke production facility, ultrafine particle composition during a nucleation event, and a new hybrid approach for source apportionment. An agreement was reached with a utility to characterize fine particle and mercury emissions from a commercial coal-fired power plant. Research in the next project period will include source testing of a coal-fired power plant, source apportionment analysis, emission scenario modeling with PMCAMx+, and writing up results for submission as journal articles.

  9. The effect of Earth's oblateness on the seismic moment estimation from satellite gravimetry

    NASA Astrophysics Data System (ADS)

    Dai, Chunli; Guo, Junyi; Shang, Kun; Shum, C. K.; Wang, Rongjiang

    2018-05-01

    Over the last decade, satellite gravimetry, as a new class of geodetic sensors, has been increasingly studied for its use in improving source model inversion for large undersea earthquakes. When these satellite-observed gravity change data are used to estimate source parameters such as seismic moment, the forward modelling of earthquake seismic deformation is crucial because imperfect modelling could lead to errors in the resolved source parameters. Here, we discuss several modelling issues and focus on one modelling deficiency resulting from the upward continuation of gravity change considering the Earth's oblateness, which is ignored in contemporary studies. For the low degree (degree 60) time-variable gravity solutions from Gravity Recovery and Climate Experiment mission data, the model-predicted gravity change would be overestimated by 9 per cent for the 2011 Tohoku earthquake, and about 6 per cent for the 2010 Maule earthquake. For high degree gravity solutions, the model-predicted gravity change at degree 240 would be overestimated by 30 per cent for the 2011 Tohoku earthquake, resulting in the seismic moment to be systematically underestimated by 30 per cent.

  10. NAPL source zone depletion model and its application to railroad-tank-car spills.

    PubMed

    Marruffo, Amanda; Yoon, Hongkyu; Schaeffer, David J; Barkan, Christopher P L; Saat, Mohd Rapik; Werth, Charles J

    2012-01-01

    We developed a new semi-analytical source zone depletion model (SZDM) for multicomponent light nonaqueous phase liquids (LNAPLs) and incorporated this into an existing screening model for estimating cleanup times for chemical spills from railroad tank cars that previously considered only single-component LNAPLs. Results from the SZDM compare favorably to those from a three-dimensional numerical model, and from another semi-analytical model that does not consider source zone depletion. The model was used to evaluate groundwater contamination and cleanup times for four complex mixtures of concern in the railroad industry. Among the petroleum hydrocarbon mixtures considered, the cleanup time of diesel fuel was much longer than E95, gasoline, and crude oil. This is mainly due to the high fraction of low solubility components in diesel fuel. The results demonstrate that the updated screening model with the newly developed SZDM is computationally efficient, and provides valuable comparisons of cleanup times that can be used in assessing the health and financial risk associated with chemical mixture spills from railroad-tank-car accidents. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
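
    The core of a multicomponent LNAPL depletion calculation is Raoult's-law partitioning: each component dissolves at an effective solubility equal to its mole fraction times its pure-phase solubility, so low-solubility components dominate late-time behavior. The sketch below is a generic illustration of that mechanism, not the SZDM itself; the composition and flow parameters are invented.

    ```python
    import numpy as np

    # Hypothetical 3-component LNAPL: moles, pure-phase solubilities (mg/L),
    # molar masses (g/mol). All values are illustrative only.
    moles = np.array([50.0, 30.0, 20.0])
    S_pure = np.array([1800.0, 150.0, 5.0])     # e.g. aromatic -> heavy aliphatic
    M = np.array([92.0, 128.0, 226.0])
    Q = 100.0                                   # water flux through source zone (L/day)

    dt, t = 1.0, 0.0
    while moles.sum() > 0.5 and t < 36500:      # run until ~depleted or 100 yr
        x = moles / moles.sum()                 # mole fractions
        C_eff = x * S_pure                      # Raoult's law effective solubility
        dissolved_g = C_eff * Q * dt / 1000.0   # mass removed this step (g)
        moles = np.maximum(moles - dissolved_g / M, 0.0)
        t += dt

    print(f"time to near-complete depletion: {t / 365.0:.1f} years")
    # The low-solubility component controls the tail, which is why diesel
    # (rich in such components) shows much longer cleanup times.
    ```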

  11. Quantifying sources of elemental carbon over the Guanzhong Basin of China: A consistent network of measurements and WRF-Chem modeling.

    PubMed

    Li, Nan; He, Qingyang; Tie, Xuexi; Cao, Junji; Liu, Suixin; Wang, Qiyuan; Li, Guohui; Huang, Rujin; Zhang, Qiang

    2016-07-01

    We conducted a year-long WRF-Chem (Weather Research and Forecasting Chemical) model simulation of elemental carbon (EC) aerosol and compared the modeling results to surface EC measurements in the Guanzhong (GZ) Basin of China. The main goals of this study were to quantify the individual contributions of different EC sources to EC pollution and to find the major cause of EC pollution in this region. The EC measurements were conducted simultaneously at 10 urban, rural, and background sites over the GZ Basin from May 2013 to April 2014 and provided a good base against which to evaluate the model simulation. The model evaluation showed that the calculated annual mean EC concentration was 5.1 μgC m(-3), consistent with the observed value of 5.3 μgC m(-3). Moreover, the model reproduced the magnitude of measured EC in all seasons (regression slope = 0.98-1.03), as well as the spatial and temporal variations (r = 0.55-0.78). We conducted several sensitivity studies to quantify the individual contributions of EC sources to EC pollution. The sensitivity simulations showed that local and outside sources contributed about 60% and 40% of the annual mean EC concentration, respectively, implying that local sources were the major contributors to EC pollution in the GZ Basin. Among the local sources, residential sources contributed the most, followed by industry and transportation. A further analysis suggested that a 50% reduction of industry or transportation emissions only caused a 6% decrease in the annual mean EC concentration, while a 50% reduction of residential emissions reduced the winter surface EC concentration by up to 25%. With respect to the serious air pollution problems (including EC pollution) in the GZ Basin, our findings provide insight for local air pollution control strategies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. The Exponent of High-frequency Source Spectral Falloff and Contribution to Source Parameter Estimates

    NASA Astrophysics Data System (ADS)

    Kiuchi, R.; Mori, J. J.

    2015-12-01

    As a way to understand the characteristics of the earthquake source, studies of source parameters (such as radiated energy and stress drop) and their scaling are important. To estimate source parameters reliably, we must often use appropriate source spectrum models, and the omega-square model is the most frequently used. In this model, the spectrum is flat at lower frequencies and the falloff is proportional to the angular frequency squared. However, some studies (e.g. Allmann and Shearer, 2009; Yagi et al., 2012) have reported high-frequency falloff exponents other than -2. Therefore, in this study we estimate source parameters using a spectral model in which the falloff exponent is not fixed. We analyze the mainshock and larger aftershocks of the 2008 Iwate-Miyagi Nairiku earthquake. First, we calculate the P-wave and SH-wave spectra using empirical Green's functions (EGF) to remove path effects (such as attenuation) and site effects. For the EGF event, we select a smaller earthquake that is highly correlated with the target event. To obtain stable results, we calculate the spectral ratios using a multitaper spectrum analysis (Prieto et al., 2009) and then take a geometric mean over multiple stations. Finally, using the obtained spectral ratios, we perform a grid search to determine the high-frequency falloff exponents as well as the corner frequencies of both events. Our results indicate that the high-frequency falloff exponent is often less than 2.0. We do not observe any regional, focal-mechanism, or depth dependence in the falloff exponent. In addition, our estimated corner frequencies and falloff exponents are consistent between the P-wave and SH-wave analyses. In our presentation, we show differences in estimated source parameters between a fixed omega-square model and a model allowing variable high-frequency falloff.
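
    The spectral-ratio approach described here can be sketched compactly: for a generalized omega-n source model, the EGF ratio of a large to a small event has a closed form, and the corner frequencies plus the falloff exponent follow from a grid search. The synthetic data and parameter ranges below are placeholders, not values from the study.

    ```python
    import numpy as np

    def ratio_model(f, moment_ratio, fc1, fc2, n):
        """Spectral ratio of target to EGF event for an omega-n source model."""
        return moment_ratio * (1 + (f / fc2)**n) / (1 + (f / fc1)**n)

    f = np.logspace(-1, 1.3, 200)                       # 0.1-20 Hz
    obs = ratio_model(f, 300.0, 0.4, 4.0, 1.8)          # synthetic "observed" ratio
    obs *= np.exp(0.05 * np.random.default_rng(0).standard_normal(f.size))

    # Grid search over corner frequencies and a falloff exponent not fixed at 2
    best = (np.inf, None)
    for n in np.arange(1.5, 2.6, 0.1):
        for fc1 in np.arange(0.2, 1.0, 0.05):
            for fc2 in np.arange(2.0, 8.0, 0.25):
                pred = ratio_model(f, 300.0, fc1, fc2, n)
                misfit = np.sum((np.log(obs) - np.log(pred))**2)
                if misfit < best[0]:
                    best = (misfit, (fc1, fc2, n))

    print("recovered (fc1, fc2, n):", best[1])
    ```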

  13. Numerical simulation and experimental verification of extended source interferometer

    NASA Astrophysics Data System (ADS)

    Hou, Yinlong; Li, Lin; Wang, Shanshan; Wang, Xiao; Zang, Haijun; Zhu, Qiudong

    2013-12-01

    An extended source interferometer, compared with the classical point source interferometer, can suppress coherent noise from the environment and the system, decrease dust-scattering effects, and reduce high-frequency error of the reference surface. Numerical simulation and experimental verification of an extended source interferometer are discussed in this paper. To guide the experiment, the extended source interferometer was modeled in the optical design software Zemax. Matlab code was written to adjust the field parameters of the optical system automatically and to collect series of interferometric data conveniently, with Dynamic Data Exchange (DDE) used to connect Zemax and Matlab. The visibility of the interference fringes can then be calculated from the summed interferometric data. Alongside the simulation, an experimental platform for the extended source interferometer was established, consisting of an extended source, an interference cavity, and an image collection system. The decrease of high-frequency reference-surface error and environmental coherent noise is verified. The relation between spatial coherence and the size, shape, and intensity distribution of the extended source is also verified through analysis of the fringe visibility. The simulation results agree with those from the real extended source interferometer, showing that the model simulates the actual optical interference quite well. The simulation platform can therefore be used to guide experiments with interferometers based on various extended sources.
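
    Fringe visibility, the quantity used throughout this analysis, is straightforward to compute from collected interferometric data. A minimal sketch, with a synthetic fringe pattern standing in for the Zemax/Matlab output, is below.

    ```python
    import numpy as np

    # Synthetic 1-D fringe pattern standing in for collected interferometric data;
    # gamma plays the role of the spatial-coherence factor set by the extended source.
    x = np.linspace(0, 1, 1000)
    gamma = 0.6                                   # assumed coherence magnitude
    I = 1.0 + gamma * np.cos(2 * np.pi * 20 * x)  # intensity across the detector

    # Fringe visibility V = (Imax - Imin) / (Imax + Imin); for this model V = gamma.
    V = (I.max() - I.min()) / (I.max() + I.min())
    print(f"visibility = {V:.3f}")
    ```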

  14. Into the deep: Evaluation of SourceTracker for assessment of faecal contamination of coastal waters.

    PubMed

    Henry, Rebekah; Schang, Christelle; Coutts, Scott; Kolotelo, Peter; Prosser, Toby; Crosbie, Nick; Grant, Trish; Cottam, Darren; O'Brien, Peter; Deletic, Ana; McCarthy, David

    2016-04-15

    Faecal contamination of recreational waters is an increasing global health concern. Tracing the source of the contaminant is a vital step towards mitigation and disease prevention. Total 16S rRNA amplicon data for a specific environment (faeces, water, soil) and computational tools such as the Markov chain Monte Carlo based SourceTracker can be applied to microbial source tracking (MST) and attribution studies. The current study applied artificial and laboratory-derived bacterial communities to define the potential and limitations associated with the use of SourceTracker, prior to its application for faecal source tracking at three recreational beaches near Port Phillip Bay (Victoria, Australia). The results demonstrated that, at a minimum, multiple model runs of the SourceTracker modelling tool (i.e. technical replicates) were required to identify potential false-positive predictions. The calculation of relative standard deviations (RSDs) for each attributed source improved overall predictive confidence in the results. In general, default parameter settings provided high sensitivity, specificity, accuracy and precision. Application of SourceTracker to recreational beach samples identified treated effluent as the major source of human-derived faecal contamination, present in 69% of samples. Site-specific sources, such as raw sewage, stormwater and bacterial populations associated with the Yarra River estuary, were also identified. Rainfall and associated sand resuspension at each location correlated with observed human faecal indicators. The results of the optimised SourceTracker analysis suggest that local sources of contamination have the greatest effect on recreational coastal water quality. Copyright © 2016 Elsevier Ltd. All rights reserved.
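
    The replicate-plus-RSD screening described above is simple to reproduce: run SourceTracker several times on the same sink sample, compute the relative standard deviation of each source's attributed proportion, and flag high-RSD sources as unstable (potential false positives). The sketch below uses made-up proportions in place of real SourceTracker output.

    ```python
    import numpy as np

    # Made-up source proportions from five technical replicates (rows) for one
    # water sample; columns: treated effluent, raw sewage, stormwater, unknown.
    runs = np.array([
        [0.68, 0.05, 0.10, 0.17],
        [0.71, 0.01, 0.09, 0.19],
        [0.66, 0.09, 0.11, 0.14],
        [0.70, 0.02, 0.10, 0.18],
        [0.69, 0.07, 0.08, 0.16],
    ])
    sources = ["treated effluent", "raw sewage", "stormwater", "unknown"]

    mean = runs.mean(axis=0)
    rsd = 100 * runs.std(axis=0, ddof=1) / mean     # relative standard deviation (%)
    for s, m, r in zip(sources, mean, rsd):
        flag = "  <- unstable, possible false positive" if r > 50 else ""
        print(f"{s:16s} mean={m:.2f} RSD={r:5.1f}%{flag}")
    ```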

  15. An evaluation of differences due to changing source directivity in room acoustic computer modeling

    NASA Astrophysics Data System (ADS)

    Vigeant, Michelle C.; Wang, Lily M.

    2004-05-01

    This project examines the effects of changing source directivity in room acoustic computer models on objective parameters and subjective perception. Acoustic parameters and auralizations calculated from omnidirectional versus directional sources were compared. Three realistic directional sources were used, measured in a limited number of octave bands from a piano, a singing voice, and a violin. A highly directional source that beams only within one-sixteenth of a sphere was also tested. Objectively, there were differences of 5% or more in reverberation time (RT) between the realistic directional and omnidirectional sources. Between the beaming directional and omnidirectional sources, differences in clarity were close to the just-noticeable-difference (jnd) criterion of 1 dB. Subjectively, participants had great difficulty distinguishing between the realistic and omnidirectional sources; very few could discern the differences in RTs. However, a larger percentage (32% vs 20%) could differentiate between the beaming and omnidirectional sources, as well as the respective differences in clarity. Further studies of the objective results from different beaming sources have been pursued: the direction of the beaming source in the room is changed, as well as the beamwidth, and the objective results are analyzed to determine whether differences fall within the jnd of sound-pressure level, RT, and clarity.

  16. Exploring the Differences Between the European (SHARE) and the Reference Italian Seismic Hazard Models

    NASA Astrophysics Data System (ADS)

    Visini, F.; Meletti, C.; D'Amico, V.; Rovida, A.; Stucchi, M.

    2014-12-01

    The recent release of the probabilistic seismic hazard assessment (PSHA) model for Europe by the SHARE project (Giardini et al., 2013, www.share-eu.org) raises questions about the comparison between its results for Italy and the official Italian seismic hazard model (MPS04; Stucchi et al., 2011) adopted by the building code. The goal of such a comparison is to identify the main input elements that produce the differences between the two models. It is worth remarking that each PSHA is produced with the data and knowledge available at the time of its release. Therefore, even if a new model provides estimates significantly different from previous ones, that does not mean the old models are wrong, but rather that current knowledge has changed and improved substantially. Looking at the hazard maps with 10% probability of exceedance in 50 years (adopted as the standard input in the Italian building code), the SHARE model shows increased expected values with respect to the MPS04 model, up to 70% for PGA. However, looking in detail at all output parameters of both models, we observe different behaviour for other spectral accelerations. In fact, for spectral periods greater than 0.3 s, the current reference PSHA for Italy gives higher values than the SHARE model over many large areas. This observation suggests that the behaviour may not be due to different definitions of seismic sources and the relevant seismicity rates; it appears mainly to result from the adoption of recent ground-motion prediction equations (GMPEs), which estimate higher values for PGA and for accelerations with periods below 0.3 s, and lower values for longer periods, compared with older GMPEs. Another important set of tests consisted of analysing separately the PSHA results obtained from the three source models adopted in SHARE (i.e., area sources, fault sources with background, and a refined smoothed-seismicity model), whereas MPS04 uses only area sources. Results seem to confirm the strong impact of the new-generation GMPEs on the seismic hazard estimates. Giardini D. et al., 2013. Seismic Hazard Harmonization in Europe (SHARE): Online Data Resource, doi:10.12686/SED-00000001-SHARE. Stucchi M. et al., 2011. Seismic Hazard Assessment (2003-2009) for the Italian Building Code. Bull. Seismol. Soc. Am. 101, 1885-1911.

  17. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    NASA Astrophysics Data System (ADS)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model for full parameter optimization in biased decoy-state QKD with phase-randomized sources. We then apply this model in simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when source errors and statistical fluctuations are taken into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.

  18. Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework.

    PubMed

    Talluto, Matthew V; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique

    2016-02-01

    Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods, with eastern North America as an example. Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making.

  19. Water quality and hydrology of Silver Lake, Oceana County, Michigan, with emphasis on lake response to nutrient loading

    USGS Publications Warehouse

    Brennan, Angela K.; Hoard, Christopher J.; Duris, Joseph W.; Ogdahl, Mary E.; Steinman, Alan D.

    2016-01-29

    Simulations also were run using the BATHTUB model to evaluate the number of days Silver Lake could experience algal blooms (defined as modeled chlorophyll a in excess of 10 micrograms per liter [µg/L]) as a result of an increase or decrease in phosphorus and nitrogen loading from groundwater, Hunter Creek, and (or) a combination of sources. If the phosphorus and nitrogen loading from Hunter Creek is decreased (and all other sources are unaltered), Silver Lake will continue to experience algal blooms, but less frequently than at present. The same holds true if the nutrient loading from groundwater is decreased. Another scenario combined increases and decreases in phosphorus and nitrogen loading from the sources most likely to be managed: groundwater (through conversion of household septic systems to sewers), Hunter Creek (likewise through septic-to-sewer conversion), and lawn runoff. Results of the BATHTUB model indicated that a 50-percent reduction of phosphorus and nitrogen from these sources would considerably decrease algal bloom frequency (from 231 to 132 days) and severity, and a 75-percent reduction would greatly reduce algal bloom occurrence on Silver Lake (from 231 to 57 days). A further scenario, based on the septic load model, simulated the conversion of septic systems to sewers under low, high, and medium (likely) estimates of nutrient loading to Silver Lake. These simulations indicated that, under the likely scenario, converting all onsite septic treatment to sewers would change the overall lake trophic status from eutrophic to mesotrophic, thereby reducing the frequency and intensity of algal blooms on Silver Lake (chlorophyll a >10 µg/L, from 231 to 184 days per year; chlorophyll a >20 µg/L, from 80 to 49 days per year).

  20. The Errors Sources Affect to the Results of One-Way Nested Ocean Regional Circulation Model

    NASA Astrophysics Data System (ADS)

    Pham, S. V.

    2016-02-01

    The Oceanic Regional Circulation Model (ORCM) is an essential tool for resolving regional scales by dynamically downscaling results from a coarsely resolved global model. However, when downscaling from the coarse resolution of a global model or observations to a small scale, errors are generated by the differences in resolution and by the lateral-boundary updating frequency. This research evaluated the effects of four main error sources on the results of ocean regional circulation models (ORCMs) when downscaling and nesting output from ocean global circulation models (OGCMs): the formulation of the lateral boundary conditions (LBCs), the difference in spatial resolution between the driving data and the driven model, the updating frequency of the LBCs, and the domain size. The errors contributed by each source to the ORCM results were investigated separately by applying the Big-Brother Experiment (BBE). With a 3 km grid for the ORCM in the BBE framework, the results clearly show that ORCM simulations depend significantly on the domain size and especially on the spatial and temporal resolution of the LBCs. The spatial-resolution ratio between the driving data and the driven model can be up to 3, and the LBCs can be updated as infrequently as every 6 hours. The optimal domain size of the ORCM can be around 2 to 10 times smaller than the OGCM domain.

  1. Open source data assimilation framework for hydrological modeling

    NASA Astrophysics Data System (ADS)

    Ridler, Marc; Hummel, Stef; van Velzen, Nils; Katrine Falk, Anne; Madsen, Henrik

    2013-04-01

    An open-source data assimilation framework is proposed for hydrological modeling. Data assimilation (DA) in hydrodynamic and hydrological forecasting systems has great potential to improve predictions. The basic principle is to incorporate measurement information into a model with the aim of improving model results through error minimization. Great strides have been made in assimilating traditional in-situ measurements such as discharge, soil moisture, hydraulic head, and snowpack into hydrologic models. More recently, remotely sensed retrievals of soil moisture, snow water equivalent or snow cover area, surface water elevation, terrestrial water storage, and land surface temperature have been successfully assimilated into hydrological models. Assimilation algorithms have become increasingly sophisticated in order to manage measurement and model bias, non-linear systems, data sparsity (in time and space), and undetermined system uncertainty. It is therefore useful to use a pre-existing DA toolbox such as OpenDA. OpenDA is an open interface standard for (and free implementation of) a set of tools to quickly implement DA and calibration for arbitrary numerical models. The basic design philosophy of OpenDA is to break DA down into a set of building blocks programmed in object-oriented languages. To implement DA, a model must interact with OpenDA to create model instances, propagate the model, get and set variables (or parameters), and free the model once DA is completed. An open-source interface for hydrological models that is capable of all these tasks already exists: OpenMI. OpenMI is an open-source standard interface already adopted by key hydrological model providers. It defines a universal approach for interacting with hydrological models during simulation to exchange data at runtime, thus facilitating interactions between models and data sources. The interface is flexible enough that models can interact even if they are coded in different languages, represent processes from different domains, or have different spatial and temporal resolutions. An open-source framework that bridges OpenMI and OpenDA is presented. The framework provides a generic and easy means for any OpenMI-compliant model to assimilate observation measurements. An example test case is presented using MIKE SHE, an OpenMI-compliant, fully coupled, integrated hydrological model that can accurately simulate the feedback dynamics of overland flow, the unsaturated zone, and the saturated zone.
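
    The error-minimization step at the heart of such a DA framework can be illustrated with a minimal ensemble Kalman filter update in a few lines. This is a generic textbook sketch, not OpenDA's implementation, and the state and observation setup is invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical setup: 10-member ensemble of a 3-variable hydrological state,
    # with one observation (e.g. discharge) observing the first state variable.
    ens = rng.normal(5.0, 1.0, size=(3, 10))        # state ensemble (n_state x n_ens)
    H = np.array([[1.0, 0.0, 0.0]])                 # observation operator
    R = np.array([[0.09]])                          # observation error variance
    y = 6.2                                         # measured value

    # Ensemble covariance and Kalman gain: K = P H^T (H P H^T + R)^-1
    A = ens - ens.mean(axis=1, keepdims=True)
    P = A @ A.T / (ens.shape[1] - 1)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

    # Update each member against a perturbed observation (stochastic EnKF)
    y_pert = y + rng.normal(0.0, np.sqrt(R[0, 0]), size=ens.shape[1])
    ens += K @ (y_pert[None, :] - H @ ens)

    print("analysis mean:", ens.mean(axis=1).round(3))
    ```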

  2. Inter-comparison of receptor models for PM source apportionment: Case study in an industrial area

    NASA Astrophysics Data System (ADS)

    Viana, M.; Pandolfi, M.; Minguillón, M. C.; Querol, X.; Alastuey, A.; Monfort, E.; Celades, I.

    2008-05-01

    Receptor modelling techniques are used to identify emission sources and quantify their contributions to the levels and the major and trace components of ambient particulate matter (PM). A wide variety of receptor models are currently available, and consequently the comparability between models should be evaluated if source apportionment data are to be used as input for health effects studies or mitigation plans. Three of the most widespread receptor models (principal component analysis, PCA; positive matrix factorization, PMF; chemical mass balance, CMB) were applied to a single PM10 data set (n=328 samples, 2002-2005) obtained from an industrial area in NE Spain dedicated to ceramic production. Sensitivity and temporal trend analyses (using the Mann-Kendall test) were applied. Results evidenced the good overall performance of the three models (r2>0.83 and regression slope >0.91 between modelled and measured PM10 mass), with good agreement regarding source identification and high correlations between input (CMB) and output (PCA, PMF) source profiles. Larger differences were obtained regarding the quantification of source contributions (up to a factor of 4 in some cases). The combined application of different types of receptor models would overcome the limitations of each individual model by constructing a more robust solution based on their strengths. The authors suggest the combined use of factor analysis techniques (PCA, PMF) to identify and interpret emission sources and obtain a first quantification of their contributions to the PM mass, followed by the application of CMB. Further research is needed to ensure that source apportionment methods are robust enough for application to PM health effects assessments.
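
    The CMB step can be sketched in a few lines: given measured species concentrations and per-source chemical profiles, source contributions follow from a non-negative least-squares solve of C = F s. The profiles and concentrations below are invented, not the study's ceramic-area data.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Rows: chemical species; columns: sources (e.g. ceramic industry, traffic,
    # crustal). Profiles are mass fractions per unit source contribution; all
    # values here are invented for illustration.
    F = np.array([
        [0.20, 0.02, 0.01],   # e.g. Zn-rich marker of ceramic emissions
        [0.01, 0.15, 0.02],   # e.g. EC, a traffic marker
        [0.02, 0.01, 0.25],   # e.g. Al/Si, a crustal marker
        [0.10, 0.08, 0.05],   # a shared, less specific species
    ])
    C = np.array([2.1, 1.6, 3.0, 1.5])       # measured concentrations (ug/m^3)

    s, resid = nnls(F, C)                    # non-negative source contributions
    print("source contributions (ug/m^3):", s.round(2))
    print("fraction of measured species mass reconstructed:", (F @ s).sum() / C.sum())
    ```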

  3. A Geometric Model for Specularity Prediction on Planar Surfaces with Multiple Light Sources.

    PubMed

    Morgand, Alexandre; Tamaazousti, Mohamed; Bartoli, Adrien

    2018-05-01

    Specularities are often problematic in computer vision since they impact the dynamic range of the image intensity. A natural approach would be to predict and discard them using computer graphics models. However, these models depend on parameters which are difficult to estimate (light sources, objects' material properties and camera). We present a geometric model called JOLIMAS: JOint LIght-MAterial Specularity, which predicts the shape of specularities. JOLIMAS is reconstructed from images of specularities observed on a planar surface. It implicitly includes light and material properties, which are intrinsic to specularities. This model was motivated by the observation that specularities have a conic shape on planar surfaces. The conic shape is obtained by projecting a fixed quadric on the planar surface. JOLIMAS thus predicts the specularity using a simple geometric approach with static parameters (object material and light source shape). It is adapted to indoor light sources such as light bulbs and fluorescent lamps. The prediction has been tested on synthetic and real sequences. It works in a multi-light context by reconstructing a quadric for each light source with special cases such as lights being switched on or off. We also used specularity prediction for dynamic retexturing and obtained convincing rendering results. Further results are presented as supplementary video material, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TVCG.2017.2677445.
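
    The quadric-to-conic projection underlying JOLIMAS follows the standard multiple-view-geometry result that a camera P maps a quadric with dual Q* to an image conic with dual C* = P Q* P^T. A minimal numeric sketch, using an invented sphere and camera rather than a reconstructed JOLIMAS quadric, is below.

    ```python
    import numpy as np

    # Sphere quadric (center c, radius r) in homogeneous form Q, and its dual Q*.
    c, r = np.array([0.0, 0.0, 5.0]), 0.5
    Q = np.block([[np.eye(3), -c[:, None]],
                  [-c[None, :], np.array([[c @ c - r**2]])]])
    Q_dual = np.linalg.inv(Q)                 # dual quadric (defined up to scale)

    # Simple pinhole camera at the origin looking down +z (invented intrinsics).
    K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])
    P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

    # Image of the quadric: dual conic C* = P Q* P^T, point conic C = (C*)^-1.
    C_dual = P @ Q_dual @ P.T
    C = np.linalg.inv(C_dual)
    C /= C[2, 2]                              # normalize for readability

    # The outline satisfies x^T C x = 0. Check the expected silhouette point
    # directly right of the image center, at radius f*r/sqrt(d^2 - r^2).
    x_edge = np.array([320 + 800 * r / np.sqrt(c[2]**2 - r**2), 240.0, 1.0])
    print("conic residual at predicted silhouette point:", x_edge @ C @ x_edge)
    ```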

  4. Apparent Explosion Moments from Rg Waves Recorded on SPE: Implications for the Late-Time Damage Source Model

    NASA Astrophysics Data System (ADS)

    Patton, H. J.; Larmat, C. S.; Rougier, E.

    2016-12-01

    Seismic moments for chemical shots making up Phase I of the Source Physics Experiments (SPE) are estimated from 6 Hz Rg waves under the assumption that the shots are pure explosions. These apparent explosion moments are compared to moments determined using the Reduced Displacement Potential (RDP) method applied to free field data. LIDAR/photogrammetry observations, strong ground motions on the free surface near ground zero, and moment tensor inversion results are evidence in support of the fourth shot SPE-4P being essentially a pure explosion. The apparent moment for SPE-4P is 9 × 1010 Nm in good agreement with the RDP moment 8 × 1010 Nm. In stark contrast, apparent moments for the first three shots are three to four times smaller than RDP moments. Data show that spallation occurred on these shots, as well as permanent deformations detected with ground-based LIDAR. As such, the source medium suffered late-time damage. The late-time damage source model predicts destructive interference between Rg waves radiated by explosion and damage sources, which reduces amplitudes and explains why apparent moments are smaller than RDP moments based on compressional energy emitted directly from the source. SPE-5 was conducted at roughly the same yield-scaled burial depth as SPE-2 and -3, but with five times the yield. As such, the damage source model predicts less reduction of apparent moment. At this writing, preliminary results from Rg interferometry and RDP moments confirm this prediction. SPE-6 is scheduled for the fall of 2016, and it should have the strongest damage source of all SPE shots. The damage model predicts that the polarity of Rg waves could be reversed. Realization of this prediction will be strong confirmation of the late-time damage source model. This abstract has a Los Alamos National Laboratory Unlimited Release Number LA-UR-16-25709.

  5. Joint Application of Concentrations and Isotopic Signatures to Investigate the Global Atmospheric Carbon Monoxide Budget: Inverse Modeling Approach

    NASA Astrophysics Data System (ADS)

    Park, K.; Emmons, L. K.; Mak, J. E.

    2007-12-01

    Carbon monoxide is not only an important component in determining the atmospheric oxidizing capacity but also a key trace gas in the atmospheric chemistry of the Earth's background environment. The global CO cycle and its change are closely related to both the change of the CO mixing ratio and the change of source strength. Previously, most top-down estimates of the global CO budget have applied CO concentrations alone. Since CO from certain sources carries a unique isotopic signature, its isotopes provide additional information to constrain those sources. Coupling concentration and isotope-ratio information therefore tightly constrains CO fluxes by source and allows better estimates of the global CO budget. MOZART4 (Model for Ozone And Related chemical Tracers), a 3-D global chemical transport model developed at NCAR, MPI for Meteorology, and NOAA/GFDL, is used to simulate the global CO concentration and its isotopic signature. A tagged-tracer version of MOZART4, tracking C16O and C18O from each region and each source, was also developed to assess their contributions to the atmosphere efficiently. Based on nine-year simulation results, we analyze the influence of each CO source on the isotopic signature and the concentration. The evaluations focus especially on the oxygen isotope of CO (δ18O), which has not yet been extensively studied. To validate model performance, CO concentrations and isotopic signatures measured at MPI, NIWA, and our laboratory are compared to the modeled results. MOZART4 reproduced the observational data fairly well, especially in the mid- to high-latitude northern hemisphere. Bayesian inversion techniques have been used to estimate the global CO budget by combining observed and modeled CO concentrations; however, previous studies show significant differences in their estimates of CO source strengths. Because isotopic signatures are independent tracers that contain source information in addition to the CO mixing ratio, jointly applying isotope and concentration information is expected to provide more precise optimization of the CO budget, and our accumulated long-term CO isotope measurements lend further confidence to the inversions. Each isotope of CO also offers a distinct advantage in top-down estimation: combustion sources such as fossil fuel use are clearly separated from natural sources in δ18O, and the methane-derived source can easily be separated using δ13C. Inversions of the two major CO sources therefore respond with different sensitivities to the different isotopes. To maximize the strengths of isotope data in inverse modeling, various coupling schemes combining [CO], δ18O, and δ13C have been investigated to enhance the credibility of the CO budget optimization.

  7. Hybrid BEM/empirical approach for scattering of correlated sources in rocket noise prediction

    NASA Astrophysics Data System (ADS)

    Barbarino, Mattia; Adamo, Francesco P.; Bianco, Davide; Bartoccini, Daniele

    2017-09-01

    Empirical models such as the Eldred standard model are commonly used for rocket noise prediction. Such models provide the Sound Pressure Level directly through the quadratic pressure term, treating the sources as uncorrelated. In this paper, an improvement of the Eldred standard model is formulated. The new formulation contains an explicit expression for the acoustic pressure of each noise source, in terms of amplitude and phase, in order to investigate source correlation effects and to propagate them through a wave equation. In particular, the correlation effects between adjacent and non-adjacent sources have been modeled and analyzed. The noise prediction obtained with the revised Eldred-based model has then been used to formulate a hybrid empirical/BEM (Boundary Element Method) approach that allows evaluation of scattering effects. In the framework of the European Space Agency funded programme VECEP (VEga Consolidation and Evolution Programme), these models have been applied to predict the aeroacoustic loads on the VEGA (Vettore Europeo di Generazione Avanzata - Advanced Generation European Carrier Rocket) launch vehicle at lift-off, and the results have been compared with experimental data.
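
    The difference between the standard uncorrelated-source assumption and a correlated amplitude-and-phase formulation can be illustrated directly: with complex source pressures, the mean-square pressure is |sum of A_i exp(j*phi_i)|^2 rather than the sum of |A_i|^2. The amplitudes and the phase model below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.uniform(0.5, 1.0, 8)                 # assumed source pressure amplitudes (Pa)

    def spl(p2, pref=20e-6):
        """Sound Pressure Level (dB) from a mean-square pressure."""
        return 10 * np.log10(p2 / pref**2)

    # Uncorrelated (Eldred-style): mean-square pressures simply add.
    p2_uncorr = np.sum(A**2) / 2

    # Correlated: complex pressures add; here adjacent sources share similar phase.
    phi = np.cumsum(rng.normal(0.0, 0.4, 8))     # slowly drifting phase, assumed
    p2_corr = np.abs(np.sum(A * np.exp(1j * phi)))**2 / 2

    print(f"SPL, uncorrelated sum: {spl(p2_uncorr):.1f} dB")
    print(f"SPL, correlated sum  : {spl(p2_corr):.1f} dB")
    ```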

  8. Monitoring alert and drowsy states by modeling EEG source nonstationarity

    NASA Astrophysics Data System (ADS)

    Hsu, Sheng-Hsiou; Jung, Tzyy-Ping

    2017-10-01

    Objective. As a human brain performs various cognitive functions within ever-changing environments, states of the brain characterized by recorded brain activities such as electroencephalogram (EEG) are inevitably nonstationary. The challenges of analyzing the nonstationary EEG signals include finding neurocognitive sources that underlie different brain states and using EEG data to quantitatively assess the state changes. Approach. This study hypothesizes that brain activities under different states, e.g. levels of alertness, can be modeled as distinct compositions of statistically independent sources using independent component analysis (ICA). This study presents a framework to quantitatively assess the EEG source nonstationarity and estimate levels of alertness. The framework was tested against EEG data collected from 10 subjects performing a sustained-attention task in a driving simulator. Main results. Empirical results illustrate that EEG signals under alert versus drowsy states, indexed by reaction speeds to driving challenges, can be characterized by distinct ICA models. By quantifying the goodness-of-fit of each ICA model to the EEG data using the model deviation index (MDI), we found that MDIs were significantly correlated with the reaction speeds (r  =  -0.390 with alertness models and r  =  0.449 with drowsiness models) and the opposite correlations indicated that the two models accounted for sources in the alert and drowsy states, respectively. Based on the observed source nonstationarity, this study also proposes an online framework using a subject-specific ICA model trained with an initial (alert) state to track the level of alertness. For classification of alert against drowsy states, the proposed online framework achieved an averaged area-under-curve of 0.745 and compared favorably with a classic power-based approach. Significance. This ICA-based framework provides a new way to study changes of brain states and can be applied to monitoring cognitive or mental states of human operators in attention-critical settings or in passive brain-computer interfaces.

  9. Modeling Degradation Product Partitioning in Chlorinated-DNAPL Source Zones

    NASA Astrophysics Data System (ADS)

    Boroumand, A.; Ramsburg, A.; Christ, J.; Abriola, L.

    2009-12-01

    Metabolic reductive dechlorination degrades aqueous-phase contaminant concentrations, increasing the driving force for DNAPL dissolution. Results from laboratory and field investigations suggest that cis-dichloroethene (cis-DCE) and vinyl chloride (VC) may accumulate within DNAPL source zones. The lack of (or slow) degradation of cis-DCE and VC within bioactive DNAPL source zones may result in these dechlorination products becoming distributed among the solid, aqueous, and organic phases. Partitioning of cis-DCE and VC into the organic phase may reduce aqueous-phase concentrations of these contaminants and result in their enrichment within the non-aqueous phase. Enrichment of degradation products within DNAPL may reduce some of the advantages associated with the application of bioremediation in DNAPL source zones. Thus, it is important to quantify how partitioning between the aqueous and organic phases influences the transport of cis-DCE and VC within bioactive DNAPL source zones. In this work, abiotic two-phase (PCE-water) one-dimensional column experiments are modeled using analytical and numerical methods to examine the rate of partitioning and the capacity of PCE-DNAPL to reversibly sequester cis-DCE. These models consider aqueous-phase, nonaqueous-phase, and combined aqueous-plus-nonaqueous-phase mass transfer resistance using linear driving force and spherical diffusion expressions. Model parameters are examined and compared for different experimental conditions to evaluate the mechanisms controlling partitioning. The Biot number, a dimensionless index of the ratio of the aqueous-phase mass transfer rate in the boundary layer to the mass transfer rate within the NAPL, is used to characterize conditions in which either or both processes are controlling. Results show that a single aqueous resistance is able to capture breakthrough curves when DNAPL is distributed in porous media as low-saturation ganglia, while diffusion within the DNAPL should be considered for larger NAPL pools. These results offer important insights for the monitoring and interpretation of bioremediation strategies employed within DNAPL source zones.

  10. Jet Noise Source Localization Using Linear Phased Array

    NASA Technical Reports Server (NTRS)

    Agboola, Ferni A.; Bridges, James

    2004-01-01

    A study was conducted to further clarify the interpretation and application of linear phased array microphone results for localizing aeroacoustic sources in an aircraft exhaust jet. Two model engine nozzles were tested at varying power cycles with the array set up parallel to the jet axis. The array position was also varied to determine the best location for the array. The results showed that it is possible to resolve jet noise sources, separating the bypass and other components. The results also showed that a focused near-field image provides more realistic noise source localization at low to mid frequencies.
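
    For readers unfamiliar with how a linear phased array localizes a source, the following delay-and-sum sketch scans candidate positions along a line parallel to the jet axis; the geometry, sampling rate and signals are synthetic stand-ins, not the NASA test configuration.

```python
# Delay-and-sum beamforming with a 16-microphone linear array (synthetic data).
import numpy as np

c = 343.0                                   # speed of sound (m/s)
mics = np.linspace(-0.5, 0.5, 16)           # mic positions along the array (m)
fs, n = 50_000, 2048
t = np.arange(n) / fs
src_x, standoff = 0.2, 1.0                  # true source position / standoff (m)

sig = np.random.default_rng(1).standard_normal(n)      # broadband source
d_true = np.hypot(mics - src_x, standoff)              # mic-to-source distances
recs = [np.interp(t - d / c, t, sig) for d in d_true]  # delayed recordings

scan = np.linspace(-0.5, 0.5, 101)
power = []
for x in scan:                              # steer the array to each candidate x
    d = np.hypot(mics - x, standoff)
    aligned = [np.interp(t + di / c, t, r) for r, di in zip(recs, d)]
    power.append(np.mean(np.sum(aligned, axis=0) ** 2))
print("peak response near x =", scan[int(np.argmax(power))], "m")
```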

  11. Accounting for multiple sources of uncertainty in impact assessments: The example of the BRACE study

    NASA Astrophysics Data System (ADS)

    O'Neill, B. C.

    2015-12-01

    Assessing climate change impacts often requires the use of multiple scenarios, types of models, and data sources, leading to a large number of potential sources of uncertainty. For example, a single study might require a choice of a forcing scenario, climate model, bias correction and/or downscaling method, societal development scenario, model (typically several) for quantifying elements of societal development such as economic and population growth, biophysical model (such as for crop yields or hydrology), and societal impact model (e.g. economic or health model). Some sources of uncertainty are reduced or eliminated by the framing of the question. For example, it may be useful to ask what an impact outcome would be conditional on a given societal development pathway, forcing scenario, or policy. However, many sources of uncertainty remain, and it is rare for all or even most of these sources to be accounted for. I use the example of a recent integrated project on the Benefits of Reduced Anthropogenic Climate changE (BRACE) to explore useful approaches to uncertainty across multiple components of an impact assessment. BRACE comprises 23 papers that assess the differences in impacts between two alternative climate futures: those associated with Representative Concentration Pathways (RCPs) 4.5 and 8.5. It quantifies differences in impacts in terms of extreme events, health, agriculture, tropical cyclones, and sea level rise. Methodologically, it includes climate modeling, statistical analysis, integrated assessment modeling, and sector-specific impact modeling. It employs alternative scenarios of both radiative forcing and societal development, but generally uses a single climate model (CESM), partially accounting for climate uncertainty by drawing heavily on large initial-condition ensembles. Strengths and weaknesses of the approach to uncertainty in BRACE are assessed. Options under consideration for improving the approach include the use of perturbed-physics ensembles of CESM, employing results from multiple climate models, and combining the results from single impact models with statistical representations of uncertainty across multiple models. A key consideration is the relationship between the question being addressed and the uncertainty approach.

  12. Source emission and model evaluation of formaldehyde from composite and solid wood furniture in a full-scale chamber

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyu; Mason, Mark A.; Guo, Zhishi; Krebs, Kenneth A.; Roache, Nancy F.

    2015-12-01

    This paper describes the measurement and model evaluation of formaldehyde source emissions from composite and solid wood furniture in a full-scale chamber at different ventilation rates for up to 4000 h, using ASTM D 6670-01 (2007). Tests were performed on four types of furniture constructed of different materials and from different manufacturers. The data were used to evaluate two empirical emission models, i.e., a first-order and a power-law decay model. The experimental results showed that some furniture tested in this study, made only of solid wood and having a smaller surface area, had low formaldehyde source emissions. The effect of ventilation rate on formaldehyde emissions was also examined. Model simulation results indicated that the power-law decay model showed better agreement with the data collected from the tests than the first-order decay model, especially for long-term emissions. This research was limited to a laboratory study with only four types of furniture products tested. It was not intended to comprehensively test or compare the large number of furniture products available in the marketplace. Therefore, care should be taken when applying the test results to real-world scenarios. Also, it was beyond the scope of this study to link the emissions to human exposure and potential health risks.
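
    A sketch of fitting the two empirical models named above to chamber data; the time/emission pairs below are synthetic placeholders, not measurements from the study.

```python
# Fit first-order decay E(t) = E0*exp(-k*t) and power-law decay E(t) = a*t**-b
# to (synthetic) long-term chamber emission data and compare the fits.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([24., 100., 500., 1000., 2000., 4000.])  # elapsed time (h)
e = np.array([60., 35., 14., 9., 6., 4.])             # emission rate (ug/m2/h)

def first_order(t, e0, k):
    return e0 * np.exp(-k * t)

def power_law(t, a, b):
    return a * t ** (-b)

for name, f, p0 in [("first-order", first_order, (60., 1e-3)),
                    ("power-law", power_law, (200., 0.5))]:
    p, _ = curve_fit(f, t, e, p0=p0)
    rmse = np.sqrt(np.mean((f(t, *p) - e) ** 2))
    print(f"{name}: params = {np.round(p, 4)}, RMSE = {rmse:.2f}")
```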

  13. Partial and Total Annoyance Due to Road Traffic Noise Combined with Aircraft or Railway Noise: Structural Equation Analysis.

    PubMed

    Gille, Laure-Anne; Marquis-Favre, Catherine; Lam, Kin-Che

    2017-11-30

    Structural equation modeling was used to analyze partial and total in situ annoyance in combined transportation noise situations. A psychophysical total annoyance model and a perceptual total annoyance model were proposed. Results show a high contribution of Noise exposure and Noise sensitivity to Noise annoyance, as well as a causal relationship between noise annoyance and lower Dwelling satisfaction. Moreover, the Visibility of noise source may increase noise annoyance, even when the visible noise source is different from the annoying source under study. With regards to total annoyance due to road traffic noise combined with railway or aircraft noise, even though in both situations road traffic noise may be considered background noise and the other noise source event noise, the contribution of road traffic noise to the models is greater than railway noise and smaller than aircraft noise. This finding may be explained by the difference in sound pressure levels between these two types of combined exposures or by the aircraft noise level, which may also indicate the city in which the respondents live. Finally, the results highlight the importance of sample size and variable distribution in the database, as different results can be observed depending on the sample or variables considered.

  14. Partial and Total Annoyance Due to Road Traffic Noise Combined with Aircraft or Railway Noise: Structural Equation Analysis

    PubMed Central

    Gille, Laure-Anne; Marquis-Favre, Catherine; Lam, Kin-Che

    2017-01-01

    Structural equation modeling was used to analyze partial and total in situ annoyance in combined transportation noise situations. A psychophysical total annoyance model and a perceptual total annoyance model were proposed. Results show a high contribution of Noise exposure and Noise sensitivity to Noise annoyance, as well as a causal relationship between noise annoyance and lower Dwelling satisfaction. Moreover, the Visibility of noise source may increase noise annoyance, even when the visible noise source is different from the annoying source under study. With regards to total annoyance due to road traffic noise combined with railway or aircraft noise, even though in both situations road traffic noise may be considered background noise and the other noise source event noise, the contribution of road traffic noise to the models is greater than railway noise and smaller than aircraft noise. This finding may be explained by the difference in sound pressure levels between these two types of combined exposures or by the aircraft noise level, which may also indicate the city in which the respondents live. Finally, the results highlight the importance of sample size and variable distribution in the database, as different results can be observed depending on the sample or variables considered. PMID:29189751

  15. 1-D/3-D geologic model of the Western Canada Sedimentary Basin

    USGS Publications Warehouse

    Higley, D.K.; Henry, M.; Roberts, L.N.R.; Steinshouer, D.W.

    2005-01-01

    The 3-D geologic model of the Western Canada Sedimentary Basin comprises 18 stacked intervals from the base of the Devonian Woodbend Group and age equivalent formations to ground surface; it includes an estimated thickness of eroded sediments based on 1-D burial history reconstructions for 33 wells across the study area. Each interval for the construction of the 3-D model was chosen on the basis of whether it is primarily composed of petroleum system elements of reservoir, hydrocarbon source, seal, overburden, or underburden strata, as well as the quality and areal distribution of well and other data. Preliminary results of the modeling support the following interpretations. Long-distance migration of hydrocarbons east of the Rocky Mountains is indicated by oil and gas accumulations in areas within which source rocks are thermally immature for oil and (or) gas. Petroleum systems in the basin are segmented by the northeast-trending Sweetgrass Arch; hydrocarbons west of the arch were from source rocks lying near or beneath the Rocky Mountains, whereas oil and gas east of the arch were sourced from the Williston Basin. Hydrocarbon generation and migration are primarily due to increased burial associated with the Laramide Orogeny. Hydrocarbon sources and migration were also influenced by the Lower Cretaceous sub-Mannville unconformity. In the Peace River Arch area of northern Alberta, Jurassic and older formations exhibit high-angle truncations against the unconformity. Potential Paleozoic through Mesozoic hydrocarbon source rocks are in contact with overlying Mannville Group reservoir facies. In contrast, in Saskatchewan and southern Alberta the contacts are parallel to sub-parallel, with the result that hydrocarbon source rocks are separated from the Mannville Group by seal-forming strata within the Jurassic. Vertical and lateral movement of hydrocarbons along the faults in the Rocky Mountains deformed belt probably also resulted in mixing of oil and gas from numerous source rocks in Alberta.

  16. Analysis of Seismic Moment Tensor and Finite-Source Scaling During EGS Resource Development at The Geysers, CA

    NASA Astrophysics Data System (ADS)

    Boyd, O. S.; Dreger, D. S.; Gritto, R.

    2015-12-01

    Enhanced Geothermal Systems (EGS) resource development requires knowledge of subsurface physical parameters to quantify the evolution of fracture networks. We investigate seismicity in the vicinity of the EGS development at The Geysers Prati-32 injection well to determine moment magnitude, focal mechanism, and kinematic finite-source models, with the goal of developing a rupture area scaling relationship for The Geysers and specifically for the Prati-32 EGS injection experiment. Thus far we have analyzed moment tensors of M ≥ 2 events, and are developing the capability to analyze the large numbers of events occurring as a result of the fluid injection and to push the analysis to smaller magnitude earthquakes. We have also determined finite-source models for five events ranging in magnitude from M 3.7 to 4.5. The scaling relationship between rupture area and moment magnitude of these events resembles that of a published empirical relationship derived for events from M 4.5 to 8.3. We plan to develop a scaling relationship in which moment magnitude and corner frequency are predictor variables for source rupture area constrained by the finite-source modeling. Inclusion of corner frequency in the empirical scaling relationship is proposed to account for possible variations in stress drop. If successful, we will use this relationship to extrapolate to the large numbers of events in the EGS seismicity cloud to estimate the coseismic fracture density. We will present the moment tensor and corner frequency results for the microearthquakes, and for select events, finite-source models. Stress drop inferred from corner frequencies and from finite-source modeling will be compared.
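
    The scaling step amounts to a regression of log rupture area on magnitude. The sketch below shows the form of such a fit with made-up (M, area) pairs; adding corner frequency as a second predictor, as the authors propose, would simply add a column to the design matrix.

```python
# Fit log10(A) = c0 + c1*Mw from finite-source rupture areas (values made up).
import numpy as np

mw = np.array([3.7, 3.9, 4.1, 4.3, 4.5])      # moment magnitudes
area = np.array([0.6, 0.9, 1.6, 2.4, 4.0])    # rupture areas (km^2), hypothetical

c1, c0 = np.polyfit(mw, np.log10(area), 1)
print(f"log10(A) = {c0:.2f} + {c1:.2f} * Mw")

# With corner frequency fc as a second predictor (to absorb stress-drop
# variation), the model becomes log10(A) ~ c0 + c1*Mw + c2*log10(fc).
```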

  17. Electrical source imaging of interictal spikes using multiple sparse volumetric priors for presurgical epileptogenic focus localization

    PubMed Central

    Strobbe, Gregor; Carrette, Evelien; López, José David; Montes Restrepo, Victoria; Van Roost, Dirk; Meurs, Alfred; Vonck, Kristl; Boon, Paul; Vandenberghe, Stefaan; van Mierlo, Pieter

    2016-01-01

    Electrical source imaging of interictal spikes observed in EEG recordings of patients with refractory epilepsy provides useful information to localize the epileptogenic focus during the presurgical evaluation. However, the selection of the time points or time epochs of the spikes in order to estimate the origin of the activity remains a challenge. In this study, we consider a Bayesian EEG source imaging technique for distributed sources, i.e. the multiple volumetric sparse priors (MSVP) approach. The approach allows estimation of the time courses of the intensity of the sources corresponding with a specific time epoch of the spike. Based on presurgical averaged interictal spikes in six patients who were successfully treated with surgery, we estimated the time courses of the source intensities for three different time epochs: (i) an epoch starting 50 ms before the spike peak and ending at 50% of the spike peak during the rising phase of the spike, (ii) an epoch starting 50 ms before the spike peak and ending at the spike peak and (iii) an epoch containing the full spike time period starting 50 ms before the spike peak and ending 230 ms after the spike peak. To identify the primary source of the spike activity, the source with the maximum energy from 50 ms before the spike peak until 50% of the spike peak was subsequently selected for each of the time windows. For comparison, the activity at the spike peaks and at 50% of the peaks was localized using the LORETA inversion technique and an ECD approach. Both patient-specific spherical forward models and patient-specific 5-layered finite difference models were considered to evaluate the influence of the forward model. Based on the resected zones in each of the patients, extracted from post-operative MR images, we compared the distances between the estimated activity and the resection border. Using the spherical models, the distances to the resection border for the MSVP approach and each of the different time epochs were in the same range as those for the LORETA and ECD techniques. We found distances smaller than 23 mm, with robust results for all the patients. For the finite difference models, we found that the distances to the resection border for the MSVP inversions of the full spike time epochs were generally smaller than for the MSVP inversions of the time epochs before the spike peak. The results also suggest that the inversions using the finite difference models resulted in slightly smaller distances to the resection border than the spherical models. These results are promising because the MSVP approach allows study of the network of the estimated source intensities and characterization of the spatial extent of the underlying sources. PMID:26958464

  18. Impacts of Oil and Gas Production on Winter Ozone Pollution in the Uintah Basin Using Model Source Apportionment

    NASA Astrophysics Data System (ADS)

    Tran, H. N. Q.; Tran, T. T.; Mansfield, M. L.; Lyman, S. N.

    2014-12-01

    Contributions of emissions from oil and gas activities to elevated ozone concentrations in the Uintah Basin, Utah, were evaluated using the CMAQ Integrated Source Apportionment Method (CMAQ-ISAM) and compared with the results of traditional budgeting methods. Unlike the traditional budgeting method, which compares simulations with and without emissions of the source(s) in question to quantify their impacts, the CMAQ-ISAM technique assigns tags to the emissions of each source and tracks their evolution through physical and chemical processes to quantify the final ozone product yield from the source. Model simulations were performed for two episodes of low and high ozone in winter 2013 to provide a better understanding of source contributions under different weather conditions. Due to the highly nonlinear ozone chemistry, results obtained from the two methods differed significantly. The growing oil and gas industry in the Uintah Basin is the largest contributor to the elevated ozone (>75 ppb) observed in the Basin. This study therefore provides insight into the impact of the oil and gas industry on the ozone problem and helps in determining effective control strategies.
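
    The difference between the two apportionment methods can be seen with a toy nonlinear chemistry; the square-root response and the emission numbers below are invented purely to show why tagging and zero-out budgeting disagree when ozone responds nonlinearly to precursors.

```python
# Zero-out budgeting vs ISAM-style tagging under a toy nonlinear response.
import numpy as np

emissions = {"oil_and_gas": 60.0, "traffic": 25.0, "other": 15.0}
ozone = lambda total: 10.0 * np.sqrt(total)   # toy nonlinear chemistry

total = sum(emissions.values())
base = ozone(total)

# Traditional budgeting: rerun the model with each source removed.
budget = {s: round(base - ozone(total - e), 2) for s, e in emissions.items()}
# Tagging: one run, ozone attributed via each source's tracked share.
tagged = {s: round(base * e / total, 2) for s, e in emissions.items()}

print("zero-out:", budget)   # the two attributions differ, and the zero-out
print("tagging: ", tagged)   # contributions do not even sum to the base value
```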

  19. Multiple sparse volumetric priors for distributed EEG source reconstruction.

    PubMed

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-10-15

    We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region-growing approach. This extension makes it possible to reconstruct brain structures beyond the cortical surface and facilitates the use of more realistic volumetric head models including more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrated the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter for each of the subjects, cortical regions were created and introduced as source priors for MSP inversion assuming two types of head models: the standard 3-layered scalp-skull-brain head models and extended 4-layered head models including CSF. We compared these models with the current implementation by assessing the free energy corresponding with each of the reconstructions, using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation, as it allows more complex head models and volumetric source priors to be introduced in future studies. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. The importance of source configuration in quantifying footprints of regional atmospheric sulphur deposition.

    PubMed

    Vieno, M; Dore, A J; Bealey, W J; Stevenson, D S; Sutton, M A

    2010-01-15

    An atmospheric transport-chemistry model is applied to investigate the effects of source configuration in simulating regional sulphur deposition footprints from elevated point sources. Dry and wet deposition of sulphur is calculated for each of the 69 largest point sources in the UK. Deposition contributions for each point source are calculated for 2003, as well as for a 2010 emissions scenario. The 2010 emissions scenario was chosen to simulate the Gothenburg protocol emission scenario. Point source location is found to be a major driver of the dry/wet deposition ratio for each deposition footprint, with increased precipitation scavenging of SOx in hill areas resulting in a larger fraction of the emitted sulphur being deposited within the UK for sources located near these areas. This reduces exported transboundary pollution but, owing to the occurrence of sensitive soils in hill areas, increases the domestic threat of soil acidification. The simulation of plume rise using individual stack parameters for each point source demonstrates a high sensitivity of SO2 surface concentration to effective source height. This emphasises the importance of using site-specific information for each major stack, which is rarely included in regional atmospheric pollution models due to the difficulty in obtaining the required input data. The simulations quantify how the fraction of emitted SOx exported from the UK increases with source magnitude, effective source height and easterly location. The modelled reduction in SOx emissions between 2003 and 2010 resulted in a smaller fraction being exported, with the result that the reductions in SOx deposition to the UK are less than proportionate to the emission reduction. This non-linearity is associated with a relatively larger fraction of the SO2 being converted to sulphate aerosol for the 2010 scenario, in the presence of ammonia. The effect results in less-than-proportional UK benefits of reducing SO2 emissions, together with greater-than-proportional benefits in the reduced export of UK SO2 emissions. Copyright 2009 Elsevier B.V. All rights reserved.

  1. Estimating the seasonal carbon source-sink geography of a natural, steady-state terrestrial biosphere

    NASA Technical Reports Server (NTRS)

    Box, Elgene O.

    1988-01-01

    The estimation of the seasonal dynamics of biospheric carbon sources and sinks to be used as an input to global atmospheric CO2 studies and models is discussed. An ecological biosphere model is given and the advantages of the model are examined. Monthly maps of estimated biospheric carbon source and sink regions and estimates of total carbon fluxes are presented for an equilibrium terrestrial biosphere. The results are compared with those from other models. It is suggested that, despite maximum variations of atmospheric CO2 in boreal latitudes, the enormous contributions of tropical wet-dry regions to global atmospheric CO2 seasonality cannot be ignored.

  2. Boundary control of bidomain equations with state-dependent switching source functions in the ionic model

    NASA Astrophysics Data System (ADS)

    Chamakuri, Nagaiah; Engwer, Christian; Kunisch, Karl

    2014-09-01

    Optimal control for cardiac electrophysiology based on the bidomain equations in conjunction with the Fenton-Karma ionic model is considered. This generic ventricular model approximates well the restitution properties and spiral wave behavior of more complex ionic models of cardiac action potentials. However, it is challenging due to the appearance of state-dependent discontinuities in the source terms. A computational framework for the numerical realization of optimal control problems is presented. Essential ingredients are a shape-calculus-based treatment of the sensitivities of the discontinuous source terms and a marching cubes algorithm to track iso-surfaces of excitation wavefronts. Numerical results exhibit successful defibrillation by applying an optimally controlled extracellular stimulus.

  3. Case study of dust event sources from the Gobi and Taklamakan deserts: An investigation of the horizontal evolution and topographical effect using numerical modeling and remote sensing.

    PubMed

    Fan, Jin; Yue, Xiaoying; Sun, Qinghua; Wang, Shigong

    2017-06-01

    A severe dust event occurred from April 23 to April 27, 2014, in East Asia. A state-of-the-art online atmospheric chemistry model, WRF/Chem, was combined with a dust model, GOCART, to better understand the entire process of this event. Natural color images and aerosol optical depth (AOD) over the dust source region, derived from Moderate Resolution Imaging Spectroradiometer (MODIS) datasets from NASA's Aqua satellite, were used to trace the dust variation and to verify the model results. Several meteorological fields, such as pressure, temperature, wind vectors and relative humidity, are used to analyze the meteorological dynamics. The results suggest that dust emission occurred only on April 23 and 24, although the event lasted for 5 days. The Gobi Desert was the main source for this event, and the Taklamakan Desert played no important role. This study also suggests that the landform of the source region can markedly influence a dust event. The Tarim Basin has a topographical effect as a "dust reservoir" and can store unsettled dust, which can be released again as a second source, making a dust event longer and heavier. Copyright © 2016. Published by Elsevier B.V.

  4. Using Model Comparisons to Understand Sources of Nitrogen Delivered to US Coastal Areas

    EPA Science Inventory

    Nitrogen loading to water bodies can result in eutrophication-related hypoxia and degraded water quality. The relative contributions of different anthropogenic and natural sources of in-stream N cannot be directly measured at whole-watershed scales; hence, N source attribution e...

  5. A new Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin

    2017-04-01

    Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimation, we undertook the effort of producing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.
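
    As a flavor of the Bayesian machinery BEAT builds on, here is a minimal pymc3 model fit: a toy one-parameter "source" model, not an actual kinematic rupture inversion (for real usage see the BEAT repository linked above).

```python
# Toy Bayesian parameter estimation with pymc3 (the sampler BEAT builds on).
import numpy as np
import pymc3 as pm

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)                  # e.g. station sensitivities
obs = 2.0 * x + 0.1 * rng.standard_normal(50)  # synthetic observations

with pm.Model():
    slip = pm.Uniform("slip", lower=0.0, upper=5.0)  # stand-in source parameter
    sigma = pm.HalfNormal("sigma", sigma=1.0)        # observation noise scale
    pm.Normal("d", mu=slip * x, sigma=sigma, observed=obs)
    trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(pm.summary(trace, var_names=["slip"]))  # posterior mean should be near 2
```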

  6. Urban nonpoint source pollution buildup and washoff models for simulating storm runoff quality in the Los Angeles County.

    PubMed

    Wang, Long; Wei, Jiahua; Huang, Yuefei; Wang, Guangqian; Maqsood, Imran

    2011-07-01

    Many urban nonpoint source pollution models utilize pollutant buildup and washoff functions to simulate storm runoff quality of urban catchments. In this paper, two urban pollutant washoff load models are derived using pollutant buildup and washoff functions. The first model assumes that there is no residual pollutant after a storm event while the second one assumes that there is always residual pollutant after each storm event. The developed models are calibrated and verified with observed data from an urban catchment in the Los Angeles County. The application results show that the developed model with consideration of residual pollutant is more capable of simulating nonpoint source pollution from urban storm runoff than that without consideration of residual pollutant. For the study area, residual pollutant should be considered in pollutant buildup and washoff functions for simulating urban nonpoint source pollution when the total runoff volume is less than 30 mm. Copyright © 2011 Elsevier Ltd. All rights reserved.
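
    A sketch of the exponential buildup and washoff functions that models of this kind are typically built from; the coefficients are illustrative, not the calibrated Los Angeles County values, and the paper's two variants differ in whether the residual mass is carried into the next event.

```python
# Exponential pollutant buildup over dry days, then washoff during a storm.
import numpy as np

def buildup(b_max, k_b, dry_days, residual=0.0):
    """Surface pollutant mass after a dry period, starting from any residual."""
    return residual + (b_max - residual) * (1.0 - np.exp(-k_b * dry_days))

def washoff(mass, k_w, runoff_mm):
    """Split surface mass into washed-off load and residual after a storm."""
    washed = mass * (1.0 - np.exp(-k_w * runoff_mm))
    return washed, mass - washed

m = buildup(b_max=100.0, k_b=0.4, dry_days=7)          # kg on the catchment
washed, residual = washoff(m, k_w=0.05, runoff_mm=25.0)
print(f"washoff load = {washed:.1f} kg, residual = {residual:.1f} kg")
# The paper's first model assumes residual = 0 after each storm; the second
# feeds `residual` back into the next call to buildup().
```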

  7. Theory for Deducing Volcanic Activity From Size Distributions in Plinian Pyroclastic Fall Deposits

    NASA Astrophysics Data System (ADS)

    Iriyama, Yu; Toramaru, Atsushi; Yamamoto, Tetsuo

    2018-03-01

    Stratigraphic variation in the grain size distribution (GSD) of plinian pyroclastic fall deposits reflects volcanic activity. To extract information on volcanic activity from the analyses of deposits, we propose a one-dimensional theory that provides a formula connecting the sediment GSD to the source GSD. As the simplest case, we develop a constant-source model (CS model), in which the source GSD and the source height are constant during the release of particles. We assume power laws of particle radius for the terminal fall velocity and the source GSD. The CS model can describe the overall (i.e. entire vertically variable) feature of the GSD structure of the sediment. It is shown that the GSD structure is characterized by three parameters: the duration of supply of particles to the source scaled by the fall time of the largest particle, ts/tM, and the power indices of the terminal fall velocity p and of the source GSD q. We apply the CS model to samples of the Worzel D ash layer and compare the sediment GSD structure calculated using the CS model to the observed structure. The results show that the CS model reproduces the overall structure of the observed GSD. We estimate the duration of the eruption and the q value of the source GSD. Furthermore, a careful comparison of the observed and calculated GSDs reveals a new interpretation of the original sediment GSD structure of the Worzel D ash layer.
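
    To make the three controlling parameters concrete, the toy calculation below assumes power laws v(r) ~ r**p for fall velocity and n(r) ~ r**-q for the source GSD, and shows how fall times scale against the largest particle's fall time tM; all prefactors and exponents are invented.

```python
# Toy CS-model ingredients: power-law fall velocity and source GSD.
import numpy as np

p, q = 0.8, 3.2                      # power indices (assumed values)
H = 20e3                             # constant source height (m), hypothetical
r = np.logspace(-4, -2, 5)           # particle radii (m)

v = 400.0 * r ** p                   # terminal fall velocity (m/s), toy prefactor
t_fall = H / v                       # settling time of each size class
t_M = t_fall.min()                   # fall time of the largest particle

n = r ** (-q)
n /= n.sum()                         # relative abundance in the source GSD
for ri, ti, ni in zip(r, t_fall, n):
    print(f"r = {ri:.0e} m: t/tM = {ti / t_M:6.1f}, source fraction = {ni:.3f}")
# The third parameter, ts/tM, scales the supply duration ts by tM and sets
# how strongly the deposit's GSD varies with stratigraphic height.
```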

  8. Transportation Sector Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model.

  9. Public engagement in 3D flood modelling through integrating crowd sourced imagery with UAV photogrammetry to create a 3D flood hydrograph.

    NASA Astrophysics Data System (ADS)

    Bond, C. E.; Howell, J.; Butler, R.

    2016-12-01

    With an increase in flood and storm events affecting infrastructure, the role of weather systems in a changing climate, and their impact, is of increasing interest. Here we present a new workflow integrating crowd-sourced imagery from the public with UAV photogrammetry to create the first 3D hydrograph of a major flooding event. On December 30th 2015, Storm Frank brought high-magnitude rainfall to the Dee catchment in Aberdeenshire, resulting in the highest river level ever recorded for the Dee, with significant impact on infrastructure and river morphology. The worst of the flooding occurred during daylight hours and was digitally captured by the public on smartphones and cameras. After the flood event, a UAV was used to acquire photogrammetric imagery and create a textured elevation model of the area around Aboyne Bridge on the River Dee. A media campaign solicited crowd-sourced digital imagery, resulting in over 1,000 images submitted by the public. EXIF time and date data captured with the imagery were used to sort the images into a time series. Markers such as signs, walls, fences and roads within the images were used to determine river level height through the flood, and were matched onto the elevation model to contour the change in river level. The resulting 3D hydrograph shows the build-up of water on the upstream side of the bridge that resulted in significant scouring and undermining during the flood. We have created the first known data-based 3D hydrograph for a river section, from a UAV photogrammetric model and crowd-sourced imagery. For future flood warning and infrastructure management, a solution that allows a real-time hydrograph to be created, using augmented reality to integrate the river level information in crowd-sourced imagery directly onto a 3D model, would significantly improve management planning and infrastructure resilience assessment.
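
    The time-series step (ordering submitted photos by capture time) can be done from EXIF metadata alone; the sketch below uses Pillow and a hypothetical submissions/ directory. Tag 306 is the EXIF DateTime field, stored as "YYYY:MM:DD HH:MM:SS", which sorts correctly as a string.

```python
# Sort crowd-sourced photos into a time series using their EXIF timestamps.
from pathlib import Path
from PIL import Image  # Pillow

def exif_time(path):
    # Tag 306 = DateTime, formatted "YYYY:MM:DD HH:MM:SS" (lexically sortable).
    return Image.open(path).getexif().get(306, "")

photos = sorted(Path("submissions").glob("*.jpg"), key=exif_time)
for p in photos:
    print(exif_time(p), p.name)
```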

  10. Ionospheric current source modeling and global geomagnetic induction using ground geomagnetic observatory data

    USGS Publications Warehouse

    Sun, Jin; Kelbert, Anna; Egbert, G.D.

    2015-01-01

    Long-period global-scale electromagnetic induction studies of deep Earth conductivity are based almost exclusively on magnetovariational methods and require accurate models of external source spatial structure. We describe approaches to inverting for both the external sources and three-dimensional (3-D) conductivity variations and apply these methods to long-period (T≥1.2 days) geomagnetic observatory data. Our scheme involves three steps: (1) Observatory data from 60 years (only partly overlapping and with many large gaps) are reduced and merged into dominant spatial modes using a scheme based on frequency domain principal components. (2) Resulting modes are inverted for corresponding external source spatial structure, using a simplified conductivity model with radial variations overlain by a two-dimensional thin sheet. The source inversion is regularized using a physically based source covariance, generated through superposition of correlated tilted zonal (quasi-dipole) current loops, representing ionospheric source complexity smoothed by Earth rotation. Free parameters in the source covariance model are tuned by a leave-one-out cross-validation scheme. (3) The estimated data modes are inverted for 3-D Earth conductivity, assuming the source excitation estimated in step 2. Together, these developments constitute key components in a practical scheme for simultaneous inversion of the catalogue of historical and modern observatory data for external source spatial structure and 3-D Earth conductivity.

  11. A Self-Adaptive Dynamic Recognition Model for Fatigue Driving Based on Multi-Source Information and Two Levels of Fusion

    PubMed Central

    Sun, Wei; Zhang, Xiaorui; Peeta, Srinivas; He, Xiaozheng; Li, Yongfu; Zhu, Senlai

    2015-01-01

    To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) to the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model can combine the fatigue state at the previous time step in the decision-level fusion to improve the robustness of the fatigue driving recognition. An improved correction strategy of the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model. PMID:26393615
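
    The decision-level fusion rests on Dempster's rule of combination over basic probability assignments. A minimal version is sketched below with a two-state frame and invented masses; the paper's dynamic weighting and conflict-correction strategy are not reproduced.

```python
# Dempster's rule of combination for two BPA sources (masses are invented).
def combine(m1, m2):
    fused, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb        # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

F, A = frozenset({"fatigued"}), frozenset({"awake"})
theta = F | A                              # the full frame (ignorance)
eye_feature = {F: 0.6, A: 0.2, theta: 0.2}    # e.g. eyelid-closure evidence
mouth_feature = {F: 0.5, A: 0.3, theta: 0.2}  # e.g. yawning evidence
print(combine(eye_feature, mouth_feature))    # fused belief favors "fatigued"
```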

  12. Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.

    PubMed

    Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy

    2018-01-23

    Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and complex. This increasing complexity induces a significant increase in computation time, which prevents the use of such models in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This is done through a new motor unit (MU)-specific electrical source model based on the fibers composing the MU. In order to assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between simulations of single generated MU action potentials (MUAPs) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was computed for five different simulation sets wherein hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results show less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced by about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MUs) in less than an hour on a standard workstation. Graphical abstract: Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale; upscaling the electrical source to the MU scale reduces the computation time by 90% while inducing only small deviations in the simulated HD-sEMG signals.
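
    For clarity, the validation metric quoted above is the normalized root-mean-square error between fiber-scale and MU-scale simulations. A minimal version is below, with placeholder arrays standing in for the simulated signals; normalization by signal range is one common convention, and the paper may use another.

```python
# NRMSE between a reference (fiber-scale) and an approximate (MU-scale) signal.
import numpy as np

def nrmse(reference, approx):
    rmse = np.sqrt(np.mean((reference - approx) ** 2))
    return rmse / (reference.max() - reference.min())  # range-normalized

rng = np.random.default_rng(3)
fiber_emg = rng.standard_normal(10_000)                   # stand-in simulation
mu_emg = fiber_emg + 0.02 * rng.standard_normal(10_000)   # stand-in approximation
print(f"NRMSE = {100 * nrmse(fiber_emg, mu_emg):.2f} %")
```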

  13. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on the mean square error (MSE) between estimated and target visual parameters. This function is minimized to estimate the de-mixing vector/filters that separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We also propose a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to an existing GMM-based model, and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.

  14. Rise of Buoyant Emissions from Low-Level Sources in the Presence of Upstream and Downstream Obstacles

    NASA Astrophysics Data System (ADS)

    Pournazeri, Sam; Princevac, Marko; Venkatram, Akula

    2012-08-01

    Field and laboratory studies have been conducted to investigate the effect of surrounding buildings on the plume rise from low-level buoyant sources, such as distributed power generators. The field experiments were conducted in Palm Springs, California, USA in November 2010 and plume rise from a 9.3 m stack was measured. In addition to the field study, a laboratory study was conducted in a water channel to investigate the effects of surrounding buildings on plume rise under relatively high wind-speed conditions. Different building geometries and source conditions were tested. The experiments revealed that plume rise from low-level buoyant sources is highly affected by the complex flows induced by buildings stationed upstream and downstream of the source. The laboratory results were compared with predictions from a newly developed numerical plume-rise model. Using the flow measurements associated with each building configuration, the numerical model accurately predicted plume rise from low-level buoyant sources that are influenced by buildings. This numerical plume rise model can be used as a part of a computational fluid dynamics model.

  15. Decision analysis of emergency ventilation and evacuation strategies against suddenly released contaminant indoors by considering the uncertainty of source locations.

    PubMed

    Cai, Hao; Long, Weiding; Li, Xianting; Kong, Lingjuan; Xiong, Shuang

    2010-06-15

    When hazardous contaminants are suddenly released indoors, prompt and proper emergency responses are critical to protecting occupants. This paper aims to provide a framework for determining the optimal combination of ventilation and evacuation strategies by considering the uncertainty of source locations. The certainty of source locations is classified as complete certainty, incomplete certainty, and complete uncertainty to cover all possible situations. According to this classification, three types of decision analysis models are presented. A new concept, the efficiency factor of contaminant source (EFCS), is incorporated in these models to evaluate the payoffs of the ventilation and evacuation strategies. A procedure of decision-making based on these models is proposed and demonstrated by numerical studies of one hundred scenarios with ten ventilation modes, two evacuation modes, and five source locations. The results show that the models can usefully direct the decision analysis of both ventilation and evacuation strategies. In addition, the certainty of the source locations has an important effect on the outcomes of the decision-making. Copyright 2010 Elsevier B.V. All rights reserved.
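
    Under incomplete certainty of the source location, the decision reduces to weighting each strategy's payoff by the location probabilities. The sketch below uses invented EFCS-like payoffs and probabilities to show the selection step.

```python
# Choose the ventilation/evacuation combination with the best expected payoff.
strategies = {                      # payoff per candidate source location
    "vent_A + route_1": [0.9, 0.4, 0.6],
    "vent_B + route_1": [0.5, 0.8, 0.7],
    "vent_A + route_2": [0.7, 0.6, 0.5],
}
p_source = [0.5, 0.3, 0.2]          # probability of each source location

expected = {s: sum(p * v for p, v in zip(p_source, pay))
            for s, pay in strategies.items()}
best = max(expected, key=expected.get)
print(best, round(expected[best], 3))
# Complete certainty reduces p_source to a single 1.0; complete uncertainty
# calls for a criterion such as maximin over the payoff rows instead.
```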

  16. Kinetic modeling of particle dynamics in H- negative ion sources (invited)

    NASA Astrophysics Data System (ADS)

    Hatayama, A.; Shibata, T.; Nishioka, S.; Ohta, M.; Yasumoto, M.; Nishida, K.; Yamamoto, T.; Miyamoto, K.; Fukano, A.; Mizuno, T.

    2014-02-01

    Progress in the kinetic modeling of particle dynamics in H- negative ion source plasmas, and comparisons with experiments, is reviewed and discussed with some new results. The main focus is placed on the following two topics, which are important for the research and development of large negative ion sources and high-power H- ion beams: (i) effects of non-equilibrium features of the EEDF (electron energy distribution function) on H- production, and (ii) the extraction physics of H- ions and beam optics.

  17. Inverse modeling of the Chernobyl source term using atmospheric concentration and deposition measurements

    NASA Astrophysics Data System (ADS)

    Evangeliou, Nikolaos; Hamburger, Thomas; Cozic, Anne; Balkanski, Yves; Stohl, Andreas

    2017-07-01

    This paper describes the results of an inverse modeling study for the determination of the source term of the radionuclides 134Cs, 137Cs and 131I released after the Chernobyl accident. The accident occurred on 26 April 1986 in the former Soviet Union and released about 10^19 Bq of radioactive materials that were transported as far away as the USA and Japan. Thereafter, several attempts to assess the magnitude of the emissions were made that were based on the knowledge of the core inventory and the levels of the spent fuel. More recently, when modeling tools were further developed, inverse modeling techniques were applied to the Chernobyl case for source term quantification. However, because radioactivity is a sensitive topic for the public and attracts a lot of attention, high-quality measurements, which are essential for inverse modeling, were not made available except for a few sparse activity concentration measurements far from the source and far from the main direction of the radioactive fallout. For the first time, we apply Bayesian inversion of the Chernobyl source term using not only activity concentrations but also deposition measurements from the most recent public data set. These observations come from a data rescue effort that started more than 10 years ago, with the final goal of providing the available measurements to anyone interested. With regard to our inverse modeling results, emissions of 134Cs were estimated to be 80 PBq, or 30-50 % higher than previously published values. Of the released amount of 134Cs, about 70 PBq were deposited over Europe. Similar to 134Cs, emissions of 137Cs were estimated as 86 PBq, on the same order as previously reported results. Finally, 131I emissions of 1365 PBq were found, which are about 10 % less than the prior total releases. The inversion pushes the injection heights of the three radionuclides to higher altitudes (up to about 3 km) than previously assumed (≈ 2.2 km) in order to better match both concentration and deposition observations over Europe. The results of the present inversion were confirmed using an independent Eulerian model, for which deposition patterns were also improved when using the estimated posterior releases. Although the independent model tends to underestimate deposition in countries that are not in the main direction of the plume, it reproduces country-level deposition very efficiently. The results were also tested for robustness against different setups of the inversion through sensitivity runs. The source term data from this study are publicly available.

  18. The distribution of Enceladus water-group neutrals in Saturn’s Magnetosphere

    NASA Astrophysics Data System (ADS)

    Smith, Howard T.; Richardson, John D.

    2017-10-01

    Saturn’s magnetosphere is unique in that the plumes from the small icy moon Enceladus serve as the primary source of heavy particles in Saturn’s magnetosphere. The resulting co-orbiting neutral particles interact with ions, electrons, photons and other neutral particles to generate separate H2O, OH and O tori. Characterization of these toroidal distributions is essential for understanding Saturn magnetospheric sources, composition and dynamics. Unfortunately, limited direct observations of these features are available, so modeling is required. A significant modeling challenge involves ensuring that the plasma and neutral particle populations are not simply input conditions but can provide feedback to each other (i.e. are self-consistent). Jurac and Richardson (2005) executed such a self-consistent model; however, this research was performed prior to the return of Cassini data. In a similar fashion, we have coupled a 3-D neutral particle model (Smith et al. 2004, 2005, 2006, 2007, 2009, 2010) with a plasma transport model (Richardson 1998; Richardson & Jurac 2004) to develop a self-consistent model that is constrained by all available Cassini observations and current findings on Saturn’s magnetosphere and the Enceladus plume source, resulting in much more accurate neutral particle distributions. Here we present a new self-consistent model of the distribution of the Enceladus-generated neutral tori, validated against all available observations. We also discuss the implications for source rate and variability.

  19. Rethinking moment tensor inversion methods to retrieve the source mechanisms of low-frequency seismic events

    NASA Astrophysics Data System (ADS)

    Karl, S.; Neuberg, J.

    2011-12-01

    Volcanoes exhibit a variety of seismic signals. One specific type, the so-called long-period (LP) or low-frequency event, has proven to be crucial for understanding the internal dynamics of the volcanic system. These LP events have been observed at many volcanoes around the world, and are thought to be associated with resonating fluid-filled conduits or fluid movements (Chouet, 1996; Neuberg et al., 2006). While the seismic wavefield is well established, the actual trigger mechanism of these events is still poorly understood. Neuberg et al. (2006) proposed a conceptual model for the trigger of LP events at Montserrat involving the brittle failure of magma in the glass transition in response to the upward movement of magma. In an attempt to gain a better quantitative understanding of the driving forces of LPs, inversions for the physical source mechanisms have become increasingly common. Previous studies have assumed a point source for waveform inversion. Applying a point-source model to synthetic seismograms representing an extended source process does not yield the real source mechanism; it can, however, still lead to apparent moment tensor elements which can then be compared to previous results in the literature. This study therefore follows the concepts proposed by Neuberg et al. (2006), modelling the extended LP source as an octagonal arrangement of double couples approximating a circular ringfault bounding the circumference of the volcanic conduit. Synthetic seismograms were inverted for the physical source mechanisms of LPs using the moment tensor inversion code TDMTISO_INVC by Dreger (2003). Here, we present the effects of changing the source parameters on the apparent moment tensor elements. First results show that, due to negative interference, the amplitude of the seismic signals of a ringfault structure is greatly reduced compared to a single double-couple source. Furthermore, the best inversion results yield a solution composed of positive isotropic and compensated linear vector dipole components. Thus, the physical source mechanisms of volcano seismic signals may be misinterpreted as opening shear or tensile cracks when a point source is wrongly assumed. In order to approach the real physical sources with our models, inversions based on higher-order tensors might have to be considered in the future. An inversion technique in which the point source is replaced by a so-called moment tensor density would allow inversions of volcano seismic signals for sources that are temporally and spatially extended.

  20. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    NASA Astrophysics Data System (ADS)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for the assessment of a site. While this may be acceptable for the assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently, with a loss of accuracy that is small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow for additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
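
    The costly baseline being replaced can be sketched directly: an area source decomposed into crosswind line sources, each evaluated with the classical ground-level Gaussian line-source result C(x) = 2*q_l / (sqrt(2*pi) * u * sigma_z(x)). The wind speed, flux and vertical-spread curve below are generic placeholders.

```python
# Numerically integrate an area source as a stack of crosswind line sources.
import numpy as np

u = 4.0                               # wind speed (m/s)
q_area = 1e-4                         # area emission flux (g m^-2 s^-1)
sigma_z = lambda x: 0.2 * x ** 0.8    # toy vertical dispersion curve (m)

def area_source_conc(x_receptor, x0, x1, n=200):
    """Ground-level concentration downwind of the strip [x0, x1] (m)."""
    xs = np.linspace(x0, x1, n)       # decompose into n crosswind lines
    q_line = q_area * (x1 - x0) / n   # strength of each line (g m^-1 s^-1)
    d = x_receptor - xs
    d = d[d > 0]                      # only upwind lines contribute
    return np.sum(2.0 * q_line / (np.sqrt(2.0 * np.pi) * u * sigma_z(d)))

print(f"{area_source_conc(500.0, 0.0, 200.0):.2e} g/m^3")
```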

  1. Propeller sheet cavitation noise source modeling and inversion

    NASA Astrophysics Data System (ADS)

    Lee, Keunhwa; Lee, Jaehyuk; Kim, Dongho; Kim, Kyungseop; Seong, Woojae

    2014-02-01

    Propeller sheet cavitation is the main contributor to high levels of noise and vibration in the afterbody of a ship. Full measurement of the cavitation-induced hull pressure over the entire surface of the affected area is desirable but not practical. Therefore, using a few measurements on the outer hull above the propeller in a cavitation tunnel, empirical or semi-empirical techniques based on physical models have been used to predict the hull pressure (or hull force). In this paper, with an analytic source model for sheet cavitation, a multi-parameter inversion scheme to find the positions of noise sources and their strengths is suggested. The inversion is posed as a nonlinear optimization problem, which is solved with an adaptive simplex simulated annealing algorithm. The resulting hull pressure can then be modeled with the boundary element method from the inverted cavitation noise sources. The suggested approach is applied to hull pressure data measured in a cavitation tunnel of Samsung Heavy Industries. Two monopole sources are adequate to model the propeller sheet cavitation noise. The inverted source information is consistent with the cavitation dynamics of the propeller, and the modeled hull pressure shows good agreement with the cavitation tunnel experimental data.

  2. Bayesian inverse modeling and source location of an unintended 131I release in Europe in the fall of 2011

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, Miroslav; Stohl, Andreas

    2017-10-01

    In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I was released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength eventually became known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and HYSPLIT, driven with meteorological analysis data from the Global Forecast System (GFS) and from European Centre for Medium-Range Weather Forecasts (ECMWF) models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction. Third, we select the most probable location of the release with its associated source term and perform a forward model simulation to study the consequences of the iodine release. Results of these procedures are compared with the known release location and reported information about its time variation. We find that our algorithm could successfully locate the actual release site. The estimated release period is also in agreement with the values reported by the IAEA, and the reported total released activity of 342 GBq is within the 99 % confidence interval of the posterior distribution of our most likely model.
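
    Stripped to its core, the reconstruction solves y ≈ M x for a non-negative emission time series x, given the SRS matrix M from the backward runs. The sketch below uses plain Tikhonov-regularized NNLS rather than the LS-APC algorithm itself, with synthetic data.

```python
# Recover a non-negative release time series from concentrations y = M @ x.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
M = rng.random((40, 10))                  # SRS matrix: 40 obs x 10 periods
x_true = np.zeros(10)
x_true[3:7] = [2.0, 5.0, 4.0, 1.0]        # true emissions (arbitrary units)
y = M @ x_true + 0.05 * rng.standard_normal(40)

lam = 0.1                                 # Tikhonov weight (crude prior)
A = np.vstack([M, lam * np.eye(10)])
b = np.concatenate([y, np.zeros(10)])
x_est, _ = nnls(A, b)
print(np.round(x_est, 2))                 # peaks in periods 4-6, like x_true
```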

  3. Noise source and reactor stability estimation in a boiling water reactor using a multivariate autoregressive model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanemoto, S.; Andoh, Y.; Sandoz, S.A.

    1984-10-01

    A method for evaluating reactor stability in boiling water reactors has been developed. The method is based on multivariate autoregressive (M-AR) modeling of steady-state neutron and process noise signals. In this method, two kinds of power spectral densities (PSDs) for the measured neutron signal and the corresponding noise source signal are separately identified by the M-AR modeling. The closed- and open-loop stability parameters are evaluated from these PSDs. The method is applied to actual plant noise data that were measured together with artificial perturbation test data. Stability parameters identified from noise data are compared to those from perturbation test data, and it is shown that both results are in good agreement. In addition to these stability estimations, driving noise sources for the neutron signal are evaluated by the M-AR modeling. Contributions from void, core flow, and pressure noise sources are quantitatively evaluated, and the void noise source is shown to be the most dominant.
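
    The core machinery can be sketched as follows: fit a vector autoregressive model to the multichannel noise record, then evaluate the power spectral density from the fitted AR coefficients and the innovation covariance, S(f) = H(f) Sigma H(f)*, with H(f) = [I - sum_k A_k e^(-i2пfk)]^-1. This is an illustration of the M-AR/PSD step on synthetic two-channel data using statsmodels, not the paper's stability analysis.

        # Sketch: fit a multivariate AR (VAR) model to noise signals and compute
        # the PSD from the AR coefficients and the innovation covariance.
        import numpy as np
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(0)
        t, n = 2000, 2
        data = np.zeros((t, n))
        for k in range(2, t):                # synthetic coupled AR(2) channels
            data[k] = 0.5 * data[k-1] - 0.3 * data[k-2] + rng.standard_normal(n)
            data[k, 1] += 0.2 * data[k-1, 0]

        res = VAR(data).fit(maxlags=8, ic='aic')
        A, sigma = res.coefs, res.sigma_u    # AR matrices and noise covariance

        def psd(f):                          # f in cycles per sample
            Af = np.eye(n, dtype=complex) - sum(
                A[k] * np.exp(-2j * np.pi * f * (k + 1)) for k in range(len(A)))
            H = np.linalg.inv(Af)
            return H @ sigma @ H.conj().T

        freqs = np.linspace(0.01, 0.5, 50)
        neutron_psd = [psd(f)[0, 0].real for f in freqs]  # auto-PSD of channel 0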

  4. Source complexity of the 1987 Whittier Narrows, California, earthquake from the inversion of strong motion records

    USGS Publications Warehouse

    Hartzell, S.; Iida, M.

    1990-01-01

    Strong motion records for the Whittier Narrows earthquake are inverted to obtain the history of slip. Both constant rupture velocity models and variable rupture velocity models are considered. The results show a complex rupture process within a relatively small source volume, with at least four separate concentrations of slip. Two sources are associated with the hypocenter, the larger having a slip of 55-90 cm, depending on the rupture model. These sources have a radius of approximately 2-3 km and are ringed by a region of reduced slip. The aftershocks fall within this low-slip annulus. Other sources with slips from 40 to 70 cm each ring the central source region and the aftershock pattern. All the sources are predominantly thrust, although some minor right-lateral strike-slip motion is seen. The overall dimensions of the Whittier earthquake from the strong motion inversions are 10 km long (along strike) and 6 km wide (down dip). The preferred dip is 30° and the preferred average rupture velocity is 2.5 km/s. Moment estimates range from 7.4 to 10.0 × 10^24 dyn cm, depending on the rupture model. -Authors

  5. Fingerprinting Sources of Suspended Sediment in a Canadian Agricultural Watershed Using the MixSIAR Bayesian Unmixing Model

    NASA Astrophysics Data System (ADS)

    Smith, J. P.; Owens, P. N.; Gaspar, L.; Lobb, D. A.; Petticrew, E. L.

    2015-12-01

    An understanding of sediment redistribution processes and the main sediment sources within a watershed is needed to support watershed management strategies. The fingerprinting technique is increasingly being recognized as a method for establishing the source of the sediment transported within watersheds. However, the different behaviour of the various fingerprinting properties has been recognized as a major limitation of the technique, and the uncertainty associated with tracer selection needs to be addressed. There are also questions associated with which modelling approach (frequentist or Bayesian) is best for unmixing complex environmental mixtures, such as river sediment. This study aims to compare and evaluate the differences between fingerprinting predictions provided by a Bayesian unmixing model (MixSIAR) using different groups of tracer properties for sediment source identification. We used fallout radionuclides (e.g. 137Cs) and geochemical elements (e.g. As) as conventional fingerprinting properties, and colour parameters as emerging properties, both alone and in combination. These fingerprinting properties have been used previously (e.g. Koiter et al., 2013; Barthod et al., 2015) to determine the proportional contributions of fine sediment in the South Tobacco Creek Watershed, an agricultural watershed located in Manitoba, Canada. We show that the unmixing model using a combination of fallout radionuclides and geochemical tracers gave similar results to the model based on colour parameters. Furthermore, we show that a model that combines all tracers (i.e. radionuclide/geochemical and colour) gave similar results, showing that sediment sources change from predominantly topsoil in the upper reaches of the watershed to channel bank and bedrock outcrop material in the lower reaches. Barthod LRM et al. (2015). Selecting color-based tracers and classifying sediment sources in the assessment of sediment dynamics using sediment source fingerprinting. J Environ Qual. doi:10.2134/jeq2015.01.0043. Koiter AJ et al. (2013). Investigating the role of connectivity and scale in assessing the sources of sediment in an agricultural watershed in the Canadian prairies using sediment source fingerprinting. J Soils Sediments, 13, 1676-1691.

  6. Source contributions of fine particulate matter during a winter haze episode in Xi'an, China

    NASA Astrophysics Data System (ADS)

    Yang, X.; Wu, Q.

    2017-12-01

    Long-term exposure to high levels of fine particulate matter (PM2.5) is associated with adverse effects on human health, the ecological environment, and climate. Identification of the major source regions of fine particulate matter is essential for proposing proper joint prevention and control strategies for heavy haze mitigation. In this work, the Comprehensive Air Quality Model with extensions (CAMx), together with the Particulate Source Apportionment Technology (PSAT) and the Weather Research and Forecasting (WRF) model, was applied to analyze the major source regions of PM2.5 in Xi'an during the heavy winter haze episode (29 December 2016 - 5 January 2017); the framework of the model system is shown in Fig. 1. First, evaluation of the modeled daily PM2.5 concentrations over the two simulated months shows that the model performs well: the fraction of predictions within a factor of 2 of the observations (FAC2) is 84%, and the correlation coefficient (R) is 0.80 in Xi'an. Using PSAT within CAMx, a detailed source-region contribution matrix was derived for all points within the Xi'an region, its six surrounding areas, and long-range regional transport. The results show that local emissions in Xi'an are the main source in the downtown area, contributing 72.9% as shown in Fig. 2, and that the contribution of transport between adjacent areas depends on wind direction. In addition, three different suburban areas were selected for detailed analysis of fine-particle sources. Compared to the downtown area, the sources in the suburban areas are more diverse, with transport contributing 40%-82%. In the suburban areas, regional inflows play an important role in fine-particle concentrations, indicating a strong need for regional joint emission control efforts. The results enhance the quantitative understanding of the PM2.5 source regions and provide a basis for policymaking to advance pollution control in Xi'an, China.
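
    The two evaluation scores quoted above are straightforward to compute; a minimal sketch with hypothetical daily PM2.5 values:

        # Sketch: fraction of predictions within a factor of 2 of observations
        # (FAC2) and the Pearson correlation coefficient (R).
        import numpy as np

        def fac2(pred, obs):
            ratio = np.asarray(pred) / np.asarray(obs)
            return np.mean((ratio >= 0.5) & (ratio <= 2.0))

        def pearson_r(pred, obs):
            return np.corrcoef(pred, obs)[0, 1]

        obs = np.array([80.0, 120.0, 150.0, 200.0, 95.0])   # hypothetical ug/m3
        pred = np.array([70.0, 140.0, 130.0, 260.0, 90.0])
        print(fac2(pred, obs), pearson_r(pred, obs))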

  7. Significant impacts of irrigation water sources and methods on modeling irrigation effects in the ACME Land Model

    DOE PAGES

    Leng, Guoyong; Leung, L. Ruby; Huang, Maoyi

    2017-06-20

    An irrigation module that considers both irrigation water sources and irrigation methods has been incorporated into the ACME Land Model (ALM). Global numerical experiments were conducted to evaluate the impacts of irrigation water sources and irrigation methods on the simulated irrigation effects. All simulations shared the same irrigation soil moisture target constrained by a global census dataset of irrigation amounts. Irrigation has large impacts on terrestrial water balances especially in regions with extensive irrigation. Such effects depend on the irrigation water sources: surface-water-fed irrigation leads to decreases in runoff and water table depth, while groundwater-fed irrigation increases water table depth, with positive or negative effects on runoff depending on the pumping intensity. Irrigation effects also depend significantly on the irrigation methods. Flood irrigation applies water in large volumes within short durations, resulting in much larger impacts on runoff and water table depth than drip and sprinkler irrigations. Differentiating the irrigation water sources and methods is important not only for representing the distinct pathways of how irrigation influences the terrestrial water balances, but also for estimating irrigation water use efficiency. Specifically, groundwater pumping has lower irrigation water use efficiency due to enhanced recharge rates. Different irrigation methods also affect water use efficiency, with drip irrigation the most efficient followed by sprinkler and flood irrigation. Furthermore, our results highlight the importance of explicitly accounting for irrigation sources and irrigation methods, which are the least understood and constrained aspects in modeling irrigation water demand, water scarcity and irrigation effects in Earth System Models.

  8. Use of sediment source fingerprinting to assess the role of subsurface erosion in the supply of fine sediment in a degraded catchment in the Eastern Cape, South Africa.

    PubMed

    Manjoro, Munyaradzi; Rowntree, Kate; Kakembo, Vincent; Foster, Ian; Collins, Adrian L

    2017-06-01

    Sediment source fingerprinting has been successfully deployed to provide information on the surface and subsurface sources of sediment in many catchments around the world. However, there is still scope to re-examine some of the major assumptions of the technique with reference to the number of fingerprint properties used in the model, the number of model iterations and the potential uncertainties of using more than one sediment core collected from the same floodplain sink. We investigated the role of subsurface erosion in the supply of fine sediment to two sediment cores collected from a floodplain in a small degraded catchment in the Eastern Cape, South Africa. The results showed that increasing the number of individual fingerprint properties in the composite signature did not improve the model goodness-of-fit. This is still a much debated issue in sediment source fingerprinting. To test the goodness-of-fit further, the number of model repeat iterations was increased from 5000 to 30,000. However, this did not reduce uncertainty ranges in modelled source proportions nor improve the model goodness-of-fit. The estimated sediment source contributions were not consistent with the available published data on erosion processes in the study catchment. The temporal pattern of sediment source contributions predicted for the two sediment cores was very different despite the cores being collected in close proximity from the same floodplain. This highlights some of the potential limitations associated with using floodplain cores to reconstruct catchment erosion processes and associated sediment source contributions. For the source tracing approach in general, the findings here suggest the need for further investigations into uncertainties related to the number of fingerprint properties included in un-mixing models. The findings support the current widespread use of ≤5000 model repeat iterations for estimating the key sources of sediment samples.

  9. Numerical Analysis of Plasma Transport in Tandem Volume Magnetic Multicusp Ion Sources

    DTIC Science & Technology

    1992-03-01

    ... the results of the model are qualitatively correct. Keywords: Boltzmann Equation, Ion Sources, Plasma Simulation, Electron Temperature, Plasma Density, Ion Temperature, Hydrogen Ions, Magnetic Filters, Hydrogen Plasma Chemistry.

  10. Evaluation of nitrous acid sources and sinks in urban outflow

    NASA Astrophysics Data System (ADS)

    Gall, Elliott T.; Griffin, Robert J.; Steiner, Allison L.; Dibb, Jack; Scheuer, Eric; Gong, Longwen; Rutter, Andrew P.; Cevik, Basak K.; Kim, Saewung; Lefer, Barry; Flynn, James

    2016-02-01

    Intensive air quality measurements made from June 22-25, 2011 in the outflow of the Dallas-Fort Worth (DFW) metropolitan area are used to evaluate nitrous acid (HONO) sources and sinks. A two-layer box model was developed to assess the ability of established and recently identified HONO sources and sinks to reproduce observations of HONO mixing ratios. A baseline model scenario includes sources and sinks established in the literature and is compared to scenarios including three recently identified sources: volatile organic compound-mediated conversion of nitric acid to HONO (S1), biotic emission from the ground (S2), and re-emission from a surface nitrite reservoir (S3). For all mechanisms, ranges of parametric values span lower- and upper-limit values. Model outcomes for 'likely' estimates of sources and sinks generally show under-prediction of HONO observations, implying the need to evaluate additional sources and the variability in estimates of parameterizations, particularly during daylight hours. Monte Carlo simulation is applied to model scenarios constructed with sources S1-S3 added independently and in combination, generally showing improved model outcomes. Adding sources S2 and S3 (scenario S2/S3) appears to best replicate observed HONO, as determined by the model coefficient of determination and residual sum of squared errors (r^2 = 0.55 ± 0.03, SSE = 4.6 × 10^6 ± 7.6 × 10^5 ppt^2). In scenario S2/S3, source S2 is shown to account for 25% and 6.7% of the nighttime and daytime budget, respectively, while source S3 accounts for 19% and 11% of the nighttime and daytime budget, respectively. However, despite the improved model fit, there remains significant underestimation of daytime HONO; on average, a 0.15 ppt/s unknown daytime HONO source, or 67% of the total daytime source, is needed to bring scenario S2/S3 into agreement with observation. Estimates of 'best fit' parameterizations across lower- to upper-limit values result in a moderate reduction of the unknown daytime source, from 0.15 to 0.10 ppt/s.
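
    The budget logic of such a box model can be pictured with a toy single-box version: HONO is produced by a set of source terms and removed by first-order photolysis and deposition. All rate values below are hypothetical placeholders, not the paper's parameterizations.

        # Sketch: toy HONO budget, dHONO/dt = P_known + P_extra - (j + k_dep)*HONO.
        # Placeholder magnitudes only; the paper uses a two-layer model with many
        # more source and sink terms.
        import numpy as np
        from scipy.integrate import solve_ivp

        p_known, p_extra = 0.08, 0.15      # source terms, ppt/s (hypothetical)
        j_hono, k_dep = 1.5e-3, 1e-4       # photolysis and deposition, 1/s

        def budget(t, y):
            return [p_known + p_extra - (j_hono + k_dep) * y[0]]

        sol = solve_ivp(budget, (0.0, 6 * 3600.0), [50.0], max_step=60.0)
        print(sol.y[0, -1])  # quasi-steady daytime HONO (ppt) under these rates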

  11. Long-period noise source inversion in a 3-D heterogeneous Earth

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Afanasiev, M.; Boehm, C.; Fichtner, A.

    2017-12-01

    We have implemented a new method for ambient noise source inversion that fully honors finite-frequency wave propagation and 3-D heterogeneous Earth structure. Here, we present results of its first application to the Earth's long-period background signal, the hum, in a period band of around 120-300 s. In addition to being a computationally convenient test case, the hum is also the topic of ongoing research in its own right, because different physical mechanisms have been proposed for its excitation. The broad patterns of this model for Southern and Northern Hemisphere winter are qualitatively consistent with previous long-term studies of the hum sources; however, thanks to methodological improvements, the iterative refinement, and the use of a comparatively extensive dataset, we retrieve a more detailed model in certain locations. In particular, our results support findings that the dominant hum sources are focused along coasts and shelf areas, particularly in the Northern Hemisphere winter, with a possible though not well-constrained contribution of pelagic sources. Additionally, our findings indicate that hum source locations in the ocean, tentatively linked to locally high bathymetry, are important contributors particularly during Southern Hemisphere winter. These results, in conjunction with synthetic recovery tests and observed cross-correlation waveforms, suggest that hum sources are rather narrowly concentrated in space, with length scales on the order of a few hundred kilometers. Future work includes the extension of the model to the spring and fall seasons and to shorter periods, as well as its use in full-waveform ambient noise inversion for 3-D Earth structure.

  12. Characteristic Variability Timescales in the Gamma-ray Power Spectra of Blazars

    NASA Astrophysics Data System (ADS)

    Ryan, James Lee; Siemiginowska, Aneta; Sobolewska, Malgorzata; Grindlay, Jonathan E.

    2018-01-01

    We study the gamma-ray variability of 13 bright blazars observed with the Fermi Large Area Telescope in the 0.2-300 GeV band over 7.8 years. We find that continuous-time autoregressive moving average (CARMA) models provide adequate fits to the blazar light curves, and using the models we constrain the power spectral density (PSD) of each source. We also perform simulations to test the ability of CARMA modeling to recover the PSDs of artificial light curves with our data quality. Seven sources show evidence for a low-frequency break at an average timescale of ~1 year, with five of these sources showing evidence for an additional high-frequency break at an average timescale of ~7 days. We compare our results to previous studies and discuss the possible physical interpretations of our results.
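
    The simplest CARMA model, CARMA(1,0), is an Ornstein-Uhlenbeck process whose PSD bends from a 1/f^2 fall-off to a flat plateau below the break frequency 1/(2*pi*tau). The sketch below simulates one on a regular grid and forms its periodogram; the data are synthetic, not the Fermi light curves.

        # Sketch: a CARMA(1,0) (Ornstein-Uhlenbeck) light curve and its
        # periodogram, which flattens below f_break = 1/(2*pi*tau).
        import numpy as np

        rng = np.random.default_rng(0)
        tau, sigma, dt, n = 50.0, 1.0, 1.0, 4096   # days, arbitrary flux units
        phi = np.exp(-dt / tau)
        x = np.zeros(n)
        for i in range(1, n):                      # exact OU update on a regular grid
            x[i] = phi * x[i-1] + sigma * np.sqrt(1 - phi**2) * rng.standard_normal()

        freqs = np.fft.rfftfreq(n, d=dt)[1:]
        pgram = np.abs(np.fft.rfft(x - x.mean())[1:]) ** 2 * dt / n
        print(1 / (2 * np.pi * tau))   # expected break; compare to where pgram flattens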

  13. Estimating virus occurrence using Bayesian modeling in multiple drinking water systems of the United States

    USGS Publications Warehouse

    Varughese, Eunice A.; Brinkman, Nichole E; Anneken, Emily M; Cashdollar, Jennifer S; Fout, G. Shay; Furlong, Edward T.; Kolpin, Dana W.; Glassmeyer, Susan T.; Keely, Scott P

    2017-01-01

    ... incorporated into a Bayesian model to more accurately determine viral load in both source and treated water. Results of the Bayesian model indicated that viruses are present in source water and treated water. By using a Bayesian framework that incorporates inhibition, as well as many other parameters that affect viral detection, this study offers an approach for more accurately estimating the occurrence of viral pathogens in environmental waters.

  14. Analysis of structural dynamic data from Skylab. Volume 1: Technical discussion

    NASA Technical Reports Server (NTRS)

    Demchak, L.; Harcrow, H.

    1976-01-01

    The results of a study to analyze data and document dynamic program highlights of the Skylab Program are presented. Included are structural model sources, illustrations of the analytical models, utilization of the models and the resultant derived data, data supplied to organizations and their subsequent utilization, and specifications of model cycles.

  15. Additive Partial Least Squares for efficient modelling of independent variance sources demonstrated on practical case studies.

    PubMed

    Luoma, Pekka; Natschläger, Thomas; Malli, Birgit; Pawliczek, Marcin; Brandstetter, Markus

    2018-05-12

    A model recalibration method based on additive Partial Least Squares (PLS) regression is generalized for multi-adjustment scenarios of independent variance sources (referred to as additive PLS - aPLS). aPLS allows for effortless model readjustment under changing measurement conditions and the combination of independent variance sources with the initial model by means of additive modelling. We demonstrate these distinguishing features on two NIR spectroscopic case studies. In case study 1, aPLS was used as a readjustment method for an emerging offset. The achieved RMS error of prediction (1.91 a.u.) was of a similar level as before the offset occurred (2.11 a.u.). In case study 2, a calibration combining different variance sources was conducted. The achieved performance was of sufficient level, with an absolute error better than 0.8% of the mean concentration, and was therefore able to compensate for the negative effects of two independent variance sources. The presented results show the applicability of the aPLS approach. The main advantages of the method are that the original model stays unadjusted and that the modelling is conducted on concrete changes in the spectra, thus supporting efficient (in most cases straightforward) modelling. Additionally, the method is put into the context of existing machine learning algorithms.
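
    The additive idea can be pictured as fitting a base PLS model once, then, when a new variance source appears, fitting a second PLS model on the residuals and summing the two predictions so the original model stays unadjusted. A minimal sketch on synthetic spectra, with sklearn's PLSRegression standing in for the authors' implementation and the variance source reduced to a synthetic baseline offset:

        # Sketch of the additive structure of aPLS: the base model is kept fixed
        # and a second PLS models the new, independent variance source.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.random((60, 200))                     # synthetic spectra
        y = X[:, 30] + 0.5 * X[:, 120]                # synthetic analyte values
        base = PLSRegression(n_components=4).fit(X, y)

        X_new = X + 0.3 * np.linspace(0, 1, 200)      # emerging baseline offset
        resid = y - base.predict(X_new).ravel()       # what the base model misses
        add_on = PLSRegression(n_components=2).fit(X_new, resid)

        y_hat = base.predict(X_new).ravel() + add_on.predict(X_new).ravel()
        print(np.sqrt(np.mean((y_hat - y) ** 2)))     # recalibrated RMSEP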

  16. Microcomputer pollution model for civilian airports and Air Force bases. Model description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segal, H.M.; Hamilton, P.L.

    1988-08-01

    This is one of three reports describing the Emissions and Dispersion Modeling System (EDMS). EDMS is a complex source emissions/dispersion model for use at civilian airports and Air Force bases. It operates in both a refined and a screening mode and is programmed for an IBM-XT (or compatible) computer. This report--MODEL DESCRIPTION--provides the technical description of the model. It first identifies the key design features of both the emissions (EMISSMOD) and dispersion (GIMM) portions of EDMS. It then describes the type of meteorological information the dispersion model can accept and identifies the manner in which it preprocesses National Climatic Center (NCC) data prior to a refined-model run. The report presents the results of running EDMS on a number of different microcomputers and compares EDMS results with those of comparable models. The appendices elaborate on the information noted above and list the source code.

  17. Statistical methods and neural network approaches for classification of data from multiple sources

    NASA Technical Reports Server (NTRS)

    Benediktsson, Jon Atli; Swain, Philip H.

    1990-01-01

    Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A general problem with using conventional multivariate statistical approaches for classifying multisource data is that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability, but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Second, this research focuses on neural network models. The neural networks are distribution-free, since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of the problem of how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.
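
    The reliability weighting discussed here can be illustrated with the simplest consensus rule, a linear opinion pool: each source's class posteriors are combined with weights reflecting that source's reliability. All numbers below are synthetic.

        # Sketch: consensus-theoretic combination of class probabilities from
        # multiple data sources via a reliability-weighted linear opinion pool.
        import numpy as np

        # Per-source class posteriors for one pixel (3 sources x 3 classes).
        p = np.array([[0.6, 0.3, 0.1],    # e.g. multispectral source
                      [0.4, 0.4, 0.2],    # e.g. radar source
                      [0.3, 0.2, 0.5]])   # e.g. topographic source

        w = np.array([0.5, 0.3, 0.2])     # reliability weights, summing to 1

        consensus = w @ p                 # linear opinion pool
        print(consensus, consensus.argmax())  # combined posterior, chosen class

    A logarithmic opinion pool (a weighted product of the posteriors, renormalized) is the other classical choice; it penalizes sources that assign a class near-zero probability.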

  18. Application of the SPARROW model to assess surface-water nutrient conditions and sources in the United States Pacific Northwest

    USGS Publications Warehouse

    Wise, Daniel R.; Johnson, Henry M.

    2013-01-01

    The watershed model SPARROW (Spatially Referenced Regressions on Watershed attributes) was used to estimate mean annual surface-water nutrient conditions (total nitrogen and total phosphorus) and to identify important nutrient sources in catchments of the Pacific Northwest region of the United States for 2002. Model-estimated nutrient yields were generally higher in catchments on the wetter, western side of the Cascade Range than in catchments on the drier, eastern side. The largest source of locally generated total nitrogen stream load in most catchments was runoff from forestland, whereas the largest source of locally generated total phosphorus stream load in most catchments was either geologic material or livestock manure (primarily from grazing livestock). However, the highest total nitrogen and total phosphorus yields were predicted in the relatively small number of catchments where urban sources were the largest contributor to local stream load. Two examples are presented that show how SPARROW results can be applied to large rivers—the relative contribution of different nutrient sources to the total nitrogen load in the Willamette River and the total phosphorus load in the Snake River. The results from this study provided an understanding of the regional patterns in surface-water nutrient conditions and should be useful to researchers and water-quality managers performing local nutrient assessments.

  19. Study on induced radioactivity of China Spallation Neutron Source

    NASA Astrophysics Data System (ADS)

    Wu, Qing-Biao; Wang, Qing-Bin; Wu, Jing-Min; Ma, Zhong-Jian

    2011-06-01

    China Spallation Neutron Source (CSNS) is the first high-energy intense proton accelerator planned to be constructed in China during the State Eleventh Five-Year Plan period, and its induced radioactivity is very important for occupational disease hazard assessment and environmental impact assessment. Adopting the FLUKA code, the authors have constructed a cylinder-tunnel geometric model and a line-source sampling physical model, deduced proper formulas to calculate air activation, and analyzed various issues with regard to the activation of different tunnel parts. The results show that the environmental impact resulting from induced activation is negligible, whereas the residual radiation in the tunnels has a great influence on maintenance personnel, so strict measures should be adopted.

  1. Identification of biased sectors in emission data using a combination of chemical transport model and receptor model

    NASA Astrophysics Data System (ADS)

    Uranishi, Katsushige; Ikemori, Fumikazu; Nakatsubo, Ryohei; Shimadera, Hikari; Kondo, Akira; Kikutani, Yuki; Asano, Katsuyoshi; Sugata, Seiji

    2017-10-01

    This study presents a comparison approach with multiple source apportionment methods to identify which sectors of emission data have large biases. The source apportionment methods used for the comparison included both receptor and chemical transport models, which are widely used to quantify the impacts of emission sources on fine particulate matter of less than 2.5 μm in diameter (PM2.5). We used daily chemical component concentration data for the year 2013, including data for water-soluble ions, elements, and carbonaceous species of PM2.5 at 11 sites in the Kinki-Tokai district in Japan, to apply the Positive Matrix Factorization (PMF) model for the source apportionment. Seven PMF factors of PM2.5 were identified from the temporal and spatial variation patterns and site-specific features. These factors comprised two types of secondary sulfate, road transportation, heavy oil combustion by ships, biomass burning, secondary nitrate, and soil and industrial dust, accounting for 46%, 17%, 7%, 14%, 13%, and 3% of the PM2.5, respectively. The multiple-site data enabled a comprehensive identification of the PM2.5 sources. For the same period, source contributions were estimated by air quality simulations using the Community Multiscale Air Quality model (CMAQ) with the brute-force method (BFM) for four source categories. Both models provided consistent results for three of the four source categories: secondary sulfates, road transportation, and heavy oil combustion sources. For these three target categories, the models' agreement was supported by the small differences and high correlations between the CMAQ/BFM- and PMF-estimated source contributions to the concentrations of PM2.5, SO42-, and EC. In contrast, contributions of the biomass burning sources apportioned by CMAQ/BFM were much lower than, and poorly correlated with, those captured by the PMF model, indicating large uncertainties in the biomass burning emissions used in the CMAQ simulations. Thus, this comparison approach using the two antithetical models enables us to identify which sectors of emission data have large biases and thereby improve future air quality simulations.
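
    Structurally, PMF factors a samples-by-species concentration matrix into non-negative factor contributions and factor profiles. The sketch below uses sklearn's NMF as a stand-in for that structure only; real PMF implementations additionally weight each value by its measurement uncertainty, which plain NMF does not. All data are synthetic.

        # Sketch: X (samples x species) ~ G (contributions) @ F (profiles),
        # with non-negativity. NMF is a structural stand-in for PMF only.
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        G_true = rng.random((100, 7))        # daily contributions of 7 factors
        F_true = rng.random((7, 25))         # factor chemical profiles, 25 species
        X = G_true @ F_true + 0.01 * rng.random((100, 25))

        model = NMF(n_components=7, init='nndsvda', max_iter=500, random_state=0)
        G = model.fit_transform(X)           # estimated factor contributions
        F = model.components_                # estimated factor profiles
        print(G.shape, F.shape)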

  2. Azimuthal Dependence of the Ground Motion Variability from Scenario Modeling of the 2014 Mw6.0 South Napa, California, Earthquake Using an Advanced Kinematic Source Model

    NASA Astrophysics Data System (ADS)

    Gallovič, F.

    2017-09-01

    Strong ground motion simulations require a physically plausible earthquake source model. Here, I present the application of such a kinematic model, introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay over the full frequency range. The source is composed of randomly distributed overlapping subsources with a fractal number-size distribution. The position of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions) or can be completely random. From an earthquake physics point of view, the model includes a positive correlation between slip and rise time, as found in dynamic source simulations. Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip rate functions, not requiring any stochastic Green's functions. The source model has been previously validated against the data observed for the very shallow unilateral 2014 Mw6 South Napa, California, earthquake; the model reproduces the observed data well, including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is shown here in scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability. I propose a simple model reproducing the azimuthal variations of the between-event ground motion variability, providing insight into possible refinements of GMPEs' functional forms.
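
    The fractal number-size distribution of subsources can be sampled by inverse-transforming a truncated power law, N(>r) proportional to r^-D. A minimal sketch with hypothetical fault dimensions, exponent, and subsource count:

        # Sketch: scatter overlapping circular subsources on a fault plane with
        # a power-law (fractal) size distribution. All values are placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        L, W = 15.0, 8.0                  # fault length and width, km
        D, r_min, r_max = 2.0, 0.3, 4.0   # fractal exponent and radius range, km
        n_sub = 60

        # Inverse-transform sampling of the truncated power law for radii.
        u = rng.random(n_sub)
        radii = (r_min**-D + u * (r_max**-D - r_min**-D)) ** (-1.0 / D)

        # Uniform random positions; subsources overlap by construction.
        centers = rng.random((n_sub, 2)) * np.array([L, W])
        print(radii.min(), radii.max(), centers.shape)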

  3. Use of MODIS Satellite Data to Evaluate Juniperus spp. Pollen Phenology to Support a Pollen Dispersal Model, PREAM, to Support Public Health Allergy Alerts

    NASA Technical Reports Server (NTRS)

    Luvall, J. C.; Sprigg, W.; Levetin, E.; Huete, A.; Nickovic, S.; Pejanovic, G. A.; Vukovic, A.; VandeWater, P.; Budge, A.; Hudspeth, W.

    2012-01-01

    Juniperus spp. pollen is a significant aeroallergen that can be transported 200-600 km from the source. Local observations of Juniperus spp. phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. Methods: The Dust REgional Atmospheric Model (DREAM) is a verified model for atmospheric dust transport that uses MODIS data products to identify source regions and quantities of dust. We successfully modified the DREAM model to incorporate pollen transport (PREAM) and used MODIS satellite images to develop Juniperus ashei pollen input source masks. The Pollen Release Potential Source Map, also referred to as a source mask in model applications, may use different satellite platforms and sensors and a variety of data sets other than the USGS GAP data we used to map the J. ashei cover type. MODIS-derived percent tree cover is obtained from the MODIS Vegetation Continuous Fields (VCF) product (collections 3 and 4, MOD44B, 500 and 250 m grid resolution). We use updated 2010 values to calculate pollen concentration at the source (J. ashei). The original MODIS-derived values are converted from the native approx. 250 m to 990 m (approx. 1 km) resolution for the calculation of a mask that fits the model (PREAM) grid. Results: The simulation period was chosen to cover the last 2 weeks of December 2010. The PREAM-modeled near-surface concentrations (N m^-3) show the transport patterns of J. ashei pollen over a 5-day period (Fig. 2). Typical scales of the simulated transport process are regional.

  4. Using plant growth modeling to analyze C source–sink relations under drought: inter- and intraspecific comparison

    PubMed Central

    Pallas, Benoît; Clément-Vidal, Anne; Rebolledo, Maria-Camila; Soulié, Jean-Christophe; Luquet, Delphine

    2013-01-01

    The ability to assimilate C and allocate non-structural carbohydrates (NSCs) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink than source limited, as organ expansion or appearance rate is affected earlier and more strongly than C assimilation. This favors plant survival and recovery, but not always agronomic performance, as NSCs are stored rather than used for growth, owing to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex, and plant modeling can help analyze their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasting monocotyledonous species (rice, oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as were the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. Modeling results highlighted that optimal drought sensitivity depends both on drought type and species, and that modeling is a great opportunity to analyze such complex processes. Further modeling needs, and more generally the challenge of using models to support complex trait breeding, are discussed. PMID:24204372

  5. What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.

    2012-12-01

    A common feature of model inter-comparison efforts is that the base year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.

  6. Source-sector contributions to European ozone and fine PM in 2010 using AQMEII modeling data

    NASA Astrophysics Data System (ADS)

    Karamchandani, Prakash; Long, Yoann; Pirovano, Guido; Balzarini, Alessandra; Yarwood, Greg

    2017-05-01

    Source apportionment modeling provides valuable information on the contributions of different source sectors and/or source regions to ozone (O3) or fine particulate matter (PM2.5) concentrations. This information can be useful in designing air quality management strategies and in understanding the potential benefits of reducing emissions from a particular source category. The Comprehensive Air quality Model with Extensions (CAMx) offers unique source attribution tools, called the Ozone and Particulate Source Apportionment Technology (OSAT/PSAT), which track source contributions. We present results from a CAMx source attribution modeling study for a summer month and a winter month using a recently evaluated European CAMx modeling database developed for Phase 3 of the Air Quality Model Evaluation International Initiative (AQMEII). The contributions of several source sectors (including model boundary conditions of chemical species representing transport of emissions from outside the modeling domain as well as initial conditions of these species) to O3 or PM2.5 concentrations in Europe were calculated using OSAT and PSAT, respectively. A 1-week spin-up period was used to reduce the influence of initial conditions. Evaluation focused on 16 major cities and on identifying source sectors that contributed above 5 %. Boundary conditions have a large impact on summer and winter ozone in Europe and on summer PM2.5, but they are only a minor contributor to winter PM2.5. Biogenic emissions are important for summer ozone and PM2.5. The important anthropogenic sectors for summer ozone are transportation (both on-road and non-road), energy production and conversion, and industry. In two of the 16 cities, solvent and product use also contributed above 5 % to summertime ozone. For summertime PM2.5, the important anthropogenic source sectors are energy, transportation, industry, and agriculture. Residential wood combustion is an important anthropogenic sector in winter for PM2.5 over most of Europe, with larger contributions in central and eastern Europe and the Nordic cities. Other anthropogenic sectors with large contributions to wintertime PM2.5 include energy, transportation, and agriculture.

  7. Analytical method for optimal source reduction with monitored natural attenuation in contaminated aquifers

    USGS Publications Warehouse

    Widdowson, M.A.; Chapelle, F.H.; Brauner, J.S.; ,

    2003-01-01

    A method is developed for optimizing monitored natural attenuation (MNA) and the reduction in the aqueous source zone concentration (ΔC) required to meet a site-specific regulatory target concentration. The mathematical model consists of two one-dimensional equations of mass balance for the aqueous phase contaminant, to coincide with up to two distinct zones of transformation, and appropriate boundary and intermediate conditions. The solution is written in terms of zone-dependent Peclet and Damköhler numbers. The model is illustrated at a chlorinated solvent site where MNA was implemented following source treatment using in-situ chemical oxidation. The results demonstrate that by not taking into account a variable natural attenuation capacity (NAC), a lower target ΔC is predicted, resulting in unnecessary source concentration reduction and cost with little benefit to achieving site-specific remediation goals.
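
    For orientation, the dispersion-free backbone of such a model is the one-dimensional steady advection-decay balance in each zone; this is a simplified sketch of the structure, not the paper's full solution with Peclet-number (dispersion) terms:

        v \frac{dC}{dx} = -\lambda_i C
        \;\Longrightarrow\;
        C(L_i) = C(0)\, e^{-\mathrm{Da}_i},
        \qquad
        \mathrm{Da}_i = \frac{\lambda_i L_i}{v}

    so that, with two transformation zones in series, meeting a target concentration C_t at the receptor requires a source reduction ΔC ≥ C_0 - C_t e^{Da_1 + Da_2}.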

  8. Source effects on the simulation of the strong ground motion of the 2011 Lorca earthquake

    NASA Astrophysics Data System (ADS)

    Saraò, Angela; Moratto, Luca; Vuan, Alessandro; Mucciarelli, Marco; Jimenez, Maria Jose; Garcia Fernandez, Mariano

    2016-04-01

    On May 11, 2011, a moderate seismic event (Mw = 5.2) struck the city of Lorca (southeast Spain), causing nine casualties, a large number of injuries, and damage to civil buildings. The largest PGA value (360 cm/s^2) ever recorded in Spain was observed at the accelerometric station located in Lorca (LOR) and was explained as being due to source directivity rather than local site effects. In recent years, different source models, retrieved from inversions of geodetic or seismological data, or a combination of the two, have been published. To investigate the variability that equivalent source models of an average earthquake can introduce in the computation of strong motion, we calculated seismograms (up to 1 Hz) using an approach based on wavenumber integration and, as input, four different source models taken from the literature. The source models differ mainly in the slip distribution on the fault. Our results show that, as an effect of the different sources, the ground motion variability, in terms of pseudo-spectral velocity (1 s), can reach one order of magnitude for near-source receivers or for sites influenced by the forward-directivity effect. Finally, we computed the strong motion at frequencies higher than 1 Hz using empirical Green's functions and the source model parameters that best reproduce the recorded shaking up to 1 Hz: the computed seismograms fit satisfactorily the signals recorded at the LOR station as well as at the other stations close to the source.

  9. Influences of system uncertainties on the numerical transfer path analysis of engine systems

    NASA Astrophysics Data System (ADS)

    Acri, A.; Nijman, E.; Acri, A.; Offner, G.

    2017-10-01

    Practical mechanical systems operate with some degree of uncertainty. In numerical models, uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs, or from rapidly changing forcing that can be best described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In particular, in this paper, Wishart random matrix theory is applied to a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool largely implemented during the design of new engines. In this paper, the influence of model parameter variability on the results obtained from the multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and assessment of the different engine vibration sources. The effects of different levels of uncertainty are illustrated using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and the statistical distribution of results. The derived statistical information can be used to advance the knowledge of multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.
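
    The Wishart step can be sketched directly: draw random stiffness matrices whose mean is the nominal stiffness, solve the generalized eigenproblem against the mass matrix for each draw, and collect natural-frequency statistics. Matrix sizes, degrees of freedom, and dispersion below are hypothetical placeholders, not the powertrain model.

        # Sketch: Wishart-perturbed stiffness matrices propagated to
        # natural-frequency statistics. The mean of Wishart(df, K/df) is K.
        import numpy as np
        from scipy.stats import wishart
        from scipy.linalg import eigh

        n, df = 4, 50                            # dofs; larger df -> less spread
        M = np.eye(n)                            # nominal mass matrix
        K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # nominal stiffness

        samples = wishart.rvs(df=df, scale=K/df, size=500, random_state=0)
        freqs = np.array([np.sqrt(eigh(Ks, M, eigvals_only=True)) / (2*np.pi)
                          for Ks in samples])    # Hz, per draw and per mode

        print(freqs.mean(axis=0), freqs.std(axis=0))   # per-mode statistics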

  10. Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.
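
    One of the fixers mentioned here, for clipping negative water concentrations, can be sketched in a few lines: zero the negatives, then rescale the remaining layers so the mass-weighted column total is unchanged. This is a simplified illustration, not the EAM implementation.

        # Sketch: mass-conserving fixer for negative mixing ratios in a column.
        import numpy as np

        def clip_and_fix(q, dp):
            """q: layer mixing ratios; dp: layer pressure thicknesses (weights)."""
            total = np.sum(q * dp)                 # column total to preserve
            q = np.maximum(q, 0.0)                 # clipping adds spurious mass
            excess = np.sum(q * dp) - total
            q *= 1.0 - excess / np.sum(q * dp)     # remove it proportionally
            return q

        q = np.array([3e-3, 1e-3, -2e-4, 5e-4])    # hypothetical humidity, kg/kg
        dp = np.array([20.0, 30.0, 25.0, 25.0])    # hPa
        print(np.sum(q * dp), np.sum(clip_and_fix(q, dp) * dp))  # totals match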

  11. The Importance of Electron Source Population to the Remarkable Enhancement of Radiation belt Electrons during the October 2012 Storm

    NASA Astrophysics Data System (ADS)

    Tu, W.; Cunningham, G.; Reeves, G. D.; Chen, Y.; Henderson, M. G.; Blake, J. B.; Baker, D. N.; Spence, H.

    2013-12-01

    During the October 8-9, 2012 storm, the MeV electron fluxes in the heart of the outer radiation belt were first wiped out and then exhibited a three-orders-of-magnitude increase on a timescale of hours, as observed by the MagEIS and REPT instruments aboard the Van Allen Probes. There is strong observational evidence that the remarkable enhancement was due to local acceleration by chorus waves, as shown in the recent Science paper by Reeves et al. [1]. However, the importance of the dynamic electron source population, transported in from the plasma sheet, to the observed remarkable enhancement has not been studied. We illustrate the importance of the source population with our simulation of the event using the DREAM 3D diffusion model. Three new modifications have been implemented in the model: 1) incorporating a realistic and time-dependent low-energy boundary condition at 100 keV obtained from the MagEIS data; 2) utilizing event-specific chorus wave distributions derived from the low-energy electron precipitation observed by POES and validated against the in situ wave data from EMFISIS; 3) using an 'open' boundary condition at L* = 11 and implementing electron lifetimes on the order of the drift period outside the solar-wind-driven last closed drift shell. The model quantitatively reproduces the MeV electron dynamics during this event, including the fast dropout at the start of Oct. 8, the low electron flux during the first Dst dip, and the remarkable enhancement peaked at L* = 4.2 during the second Dst dip. By comparing the model results with a realistic source population against those with a constant low-energy boundary (see the figure caption below), we find that the realistic electron source population is critical to reproducing the observed fast and significant increase of MeV electrons. [1] Reeves, G. D., et al. (2013), Electron Acceleration in the Heart of the Van Allen Radiation Belts, Science, doi:10.1126/science.1237743. [Figure caption: Comparison between data and model results during the October 2012 storm for electrons at μ = 3168 MeV/G and K = 0.1 G^1/2 Re. Top: electron phase space density measured by the two Van Allen Probes; middle: results from the DREAM 3D diffusion model with a realistic electron source population derived from MagEIS data; bottom: model results with a constant source population.]

  12. Acoustic Source Analysis of Magnetoacoustic Tomography With Magnetic Induction for Conductivity Gradual-Varying Tissues.

    PubMed

    Wang, Jiawei; Zhou, Yuqi; Sun, Xiaodong; Ma, Qingyu; Zhang, Dong

    2016-04-01

    As a multiphysics imaging approach, magnetoacoustic tomography with magnetic induction (MAT-MI) works on the physical mechanism of magnetic excitation, acoustic vibration, and transmission. Based on a theoretical analysis of the source vibration, numerical studies are conducted to simulate the pathological changes of tissues for a single-layer cylindrical model with gradually varying conductivity and to estimate the strengths of the sources inside the model. The results suggest that the inner source is generated by the product of the conductivity and the curl of the induced electric intensity inside conductivity-homogeneous medium, while the boundary source is produced by the cross product of the gradient of conductivity and the induced electric intensity at the conductivity boundary. For a biological tissue with low conductivity, the strength of the boundary source is much higher than that of the inner source only when the size of the conductivity transition zone is small. In this case, the tissue can be treated as a model with abruptly varying conductivity, ignoring the influence of the inner source. Otherwise, the contributions of the inner and boundary sources should be evaluated together quantitatively. This study provides a basis for further study of precise image reconstruction of MAT-MI for pathological tissues.
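
    The two source terms described above are just the product-rule expansion of the curl of the induced current density σE:

        \nabla \times (\sigma \mathbf{E})
        \;=\;
        \underbrace{\sigma\,(\nabla \times \mathbf{E})}_{\text{inner source}}
        \;+\;
        \underbrace{\nabla\sigma \times \mathbf{E}}_{\text{boundary source}}

    The first term dominates inside conductivity-homogeneous regions; the second is confined to, and grows with the sharpness of, the conductivity transition zone.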

  13. Evidence for Legacy Contamination of Nitrate in Groundwater of North Carolina Using Monitoring and Private Well Data Models

    NASA Astrophysics Data System (ADS)

    Messier, K. P.; Kane, E.; Bolich, R.; Serre, M. L.

    2014-12-01

    Nitrate (NO3-) is a widespread contaminant of groundwater and surface water across the United States that has deleterious effects on human and ecological health. Legacy contamination, or past releases of NO3-, is thought to be impacting the current groundwater and surface water of North Carolina. This study develops a model for predicting point-level groundwater NO3- at a state scale for monitoring wells and private wells of North Carolina. A land use regression (LUR) model selection procedure known as constrained forward nonlinear regression and hyperparameter optimization (CFN-RHO) is developed for determining nonlinear model explanatory variables when they are known to be correlated. Bayesian Maximum Entropy (BME) is then used to integrate the LUR model to create a LUR-BME model of spatially/temporally varying groundwater NO3- concentrations. LUR-BME results in a leave-one-out cross-validation r^2 of 0.74 and 0.33 for monitoring and private wells, effectively predicting within spatial covariance ranges. The major finding regarding legacy sources of NO3- in this study is that the LUR-BME models show that the geographical extent of low-level contamination of the deeper drinking-water aquifers is beyond that of the shallower monitoring wells. Groundwater NO3- in monitoring wells is highly variable, with many areas predicted above the current Environmental Protection Agency standard of 10 mg/L. In contrast, the private well results depict widespread, low-level NO3- concentrations. This evidence supports that, in addition to downward transport, there is also significant outward transport of groundwater NO3- in the drinking-water aquifer to areas outside the range of sources. Results indicate that the deeper aquifers are potentially acting as a reservoir that is not only deeper, but also covers a larger geographical area, than the reservoir formed by the shallow aquifers. Results are of interest to agencies that regulate surface water and drinking water sources impacted by the effects of legacy NO3- sources. Additionally, the results can provide guidance on factors affecting the point-level variability of groundwater NO3- and areas where monitoring is needed to reduce uncertainty. Lastly, LUR-BME predictions can be integrated into surface water models for more accurate management of non-point sources of nitrogen.

  14. Fecal indicator organism modeling and microbial source tracking in environmental waters: Chapter 3.4.6

    USGS Publications Warehouse

    Nevers, Meredith; Byappanahalli, Muruleedhara; Phanikumar, Mantha S.; Whitman, Richard L.

    2016-01-01

    Mathematical models have been widely applied to surface waters to estimate rates of settling, resuspension, flow, dispersion, and advection in order to calculate movement of particles that influence water quality. Of particular interest are the movement, survival, and persistence of microbial pathogens or their surrogates, which may contaminate recreational water, drinking water, or shellfish. Most models devoted to microbial water quality have been focused on fecal indicator organisms (FIO), which act as a surrogate for pathogens and viruses. Process-based modeling and statistical modeling have been used to track contamination events to source and to predict future events. The use of these two types of models require different levels of expertise and input; process-based models rely on theoretical physical constructs to explain present conditions and biological distribution while data-based, statistical models use extant paired data to do the same. The selection of the appropriate model and interpretation of results is critical to proper use of these tools in microbial source tracking. Integration of the modeling approaches could provide insight for tracking and predicting contamination events in real time. A review of modeling efforts reveals that process-based modeling has great promise for microbial source tracking efforts; further, combining the understanding of physical processes influencing FIO contamination developed with process-based models and molecular characterization of the population by gene-based (i.e., biological) or chemical markers may be an effective approach for locating sources and remediating contamination in order to protect human health better.

  15. Global distribution and sources of dissolved inorganic nitrogen export to the coastal zone: Results from a spatially explicit, global model

    NASA Astrophysics Data System (ADS)

    Dumont, E.; Harrison, J. A.; Kroeze, C.; Bakker, E. J.; Seitzinger, S. P.

    2005-12-01

    Here we describe, test, and apply a spatially explicit, global model for predicting dissolved inorganic nitrogen (DIN) export by rivers to coastal waters (NEWS-DIN). NEWS-DIN was developed as part of an internally consistent suite of global nutrient export models. Modeled and measured DIN export values agree well (calibration R2 = 0.79), and NEWS-DIN is relatively free of bias. NEWS-DIN predicts DIN yields ranging from 0.0004 to 5217 kg N km-2 yr-1, with the highest yields occurring in Europe and South East Asia; global DIN export to coastal waters of 25 Tg N yr-1, of which 16 Tg N yr-1 is from anthropogenic sources; that biological N2 fixation is the dominant source of exported DIN; and that globally, and on every continent except Africa, N fertilizer is the largest anthropogenic source of DIN export to coastal waters.

  16. Tracing catchment fine sediment sources using the new SIFT (SedIment Fingerprinting Tool) open source software.

    PubMed

    Pulley, S; Collins, A L

    2018-09-01

    The mitigation of diffuse sediment pollution requires reliable provenance information so that measures can be targeted. Sediment source fingerprinting represents one approach for supporting these needs, but recent methodological developments have resulted in an increasing complexity of data processing methods, rendering the approach less accessible to non-specialists. A comprehensive new software programme (SIFT; SedIment Fingerprinting Tool) has therefore been developed which guides the user through critical data analysis decisions and automates all calculations. Multiple source group configurations and composite fingerprints are identified and tested using multiple methods of uncertainty analysis. This explores the sediment provenance information provided by the tracers more comprehensively than a single model, and allows model configurations with high uncertainties to be rejected. This paper provides an overview of its application to an agricultural catchment in the UK to determine whether the approach can reduce uncertainty and increase precision. Five source group classifications were used: three formed using k-means cluster analysis with 2, 3 and 4 clusters, and two a priori groupings based upon catchment geology. Three different composite fingerprints were used for each classification, and bi-plots, range tests, tracer variability ratios and virtual mixtures tested the reliability of each model configuration. Some model configurations performed poorly when apportioning the composition of virtual mixtures, and different model configurations could produce different sediment provenance results despite using composite fingerprints able to discriminate robustly between the source groups. Despite this uncertainty, dominant sediment sources were identified, and those in close proximity to each sediment sampling location were found to be of greatest importance. This new software, by integrating recent methodological developments in tracer data processing, guides users through key steps. Critically, by applying multiple model configurations and uncertainty assessment, it delivers more robust solutions for informing catchment management of the sediment problem than many previously used approaches.
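
    The core of composite-fingerprint unmixing is a constrained mass balance: find non-negative source proportions, summing to one, whose mixed tracer signature best matches the sediment sample. The sketch below is a generic illustration with hypothetical tracer values, not SIFT's actual algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(source_means, mixture):
    """Estimate source proportions p >= 0, sum(p) = 1, such that
    source_means.T @ p approximates the mixture tracer signature.
    The sum-to-one constraint is imposed via a heavily weighted row."""
    A = np.vstack([source_means.T, 1e3 * np.ones(source_means.shape[0])])
    b = np.append(mixture, 1e3)
    p, _ = nnls(A, b)
    return p / p.sum()

# Three sources x four tracers (hypothetical values) and a virtual mixture.
sources = np.array([[10.0, 2.0, 5.0, 1.0],
                    [ 4.0, 8.0, 1.0, 3.0],
                    [ 7.0, 1.0, 9.0, 6.0]])
true_p = np.array([0.5, 0.3, 0.2])
mixture = sources.T @ true_p
print(unmix(sources, mixture))   # should recover ~[0.5, 0.3, 0.2]
```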

  17. A Hierarchical Multi-Model Approach for Uncertainty Segregation, Prioritization and Comparative Evaluation of Competing Modeling Propositions

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Elshall, A. S.; Hanor, J. S.

    2012-12-01

    Subsurface modeling is challenging because of the many possible competing propositions for each uncertain model component. How can we judge that we are selecting the correct proposition for an uncertain model component out of numerous competing propositions? How can we bridge the gap between abstract constructs such as mathematical expressions on one hand and empirical observation data on the other when uncertainty exists on both sides? In this study, we introduce hierarchical Bayesian model averaging (HBMA) as a multi-model (multi-proposition) framework to represent our current state of knowledge and decision for hydrogeological structure modeling. The HBMA framework allows for segregating and prioritizing different sources of uncertainty, and for comparative evaluation of competing propositions for each source of uncertainty. We applied HBMA to a study of hydrostratigraphy and uncertainty propagation of the Southern Hills aquifer system in the Baton Rouge area, Louisiana. We used geophysical data for hydrogeological structure construction through the indicator hydrostratigraphy method, and used lithologic data from drillers' logs for model structure calibration. However, due to uncertainty in model data, structure and parameters, multiple possible hydrostratigraphic models were produced and calibrated. The study considered four sources of uncertainty. To evaluate mathematical structure uncertainty, the study considered three different variogram models and two geological stationarity assumptions. With respect to geological structure uncertainty, the study considered two alternative structures for the Denham Springs-Scotlandville fault. With respect to data uncertainty, the study considered two calibration data sets. These four sources of uncertainty with their corresponding competing modeling propositions resulted in 24 calibrated models. The results showed that by segregating different sources of uncertainty, HBMA analysis provided insights into uncertainty priorities and propagation. In addition, it assisted in evaluating the relative importance of competing modeling propositions for each uncertain model component. By being able to dissect the uncertain model components and provide a weighted representation of the competing propositions for each uncertain model component based on background knowledge, HBMA functions as an epistemic framework for advancing knowledge about the system under study.
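
    A common way to obtain the model weights that BMA-style frameworks propagate is the BIC approximation to posterior model probabilities. The sketch below, with hypothetical BIC values, illustrates the weighting and the between-model variance term that a hierarchy like HBMA aggregates level by level; it is not the authors' exact formulation.

```python
import numpy as np

def bma_weights(bic):
    """Approximate posterior model probabilities from BIC values
    (equal prior model probabilities assumed)."""
    delta = bic - np.min(bic)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical BIC values for competing calibrated models.
bic = np.array([102.3, 104.1, 110.7, 103.0])
w = bma_weights(bic)
print(w)

# BMA prediction: weighted mean of each model's output, plus the
# between-model variance that a hierarchical averaging scheme propagates.
preds = np.array([3.1, 2.8, 3.6, 3.0])       # hypothetical model outputs
mean = np.sum(w * preds)
between_var = np.sum(w * (preds - mean) ** 2)
print(mean, between_var)
```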

  18. The Solid Rocket Motor Slag Population: Results of a Radar-based Regressive Statistical Evaluation

    NASA Technical Reports Server (NTRS)

    Horstman, Matthew F.; Xu, Yu-Lin

    2008-01-01

    Solid rocket motor (SRM) slag has been identified as a significant source of man-made orbital debris. The propensity of SRMs to generate particles of 100 μm and larger has caused concern regarding their contribution to the debris environment. Radar observation, rather than in-situ evidence, is currently the only measurable source for the NASA/ODPO model of the on-orbit slag population. The simulated model includes the time evolution of the resultant orbital populations using a historical database of SRM launches, propellant masses, and estimated locations and times of tail-off. However, because of the small amount of observational evidence, there can be no direct comparison to check the validity of this model. Rather than using the assumed population developed from purely historical and physical assumptions, a regression approach was used that utilized the populations observed by the Haystack radar from 1996 to the present. The estimated trajectories from the historical model of slag sources, and the corresponding plausible detections by the Haystack radar, were identified. Comparisons with observational data from the ensuing years were made, and the SRM model was altered with respect to the size and mass production of slag particles to reflect the historical data obtained. The result is a model SRM population that fits within the bounds of the observed environment.

  19. Study on gas diffusion emitted from different height of point source.

    PubMed

    Yassin, Mohamed F

    2009-01-01

    The flow and dispersion of stack gas emitted from elevated point sources of different heights around flow obstacles in an urban environment have been investigated using computational fluid dynamics (CFD) models. The results were compared with experimental results obtained from a diffusion wind tunnel under different conditions of thermal stability (stable, neutral or unstable). The flow and dispersion fields in the boundary layer in an urban environment were examined with different flow obstacles. Gaseous pollutant was discharged in the simulated boundary layer over a flat area. The CFD models used for the simulation were based on the steady-state Reynolds-averaged Navier-Stokes (RANS) equations with k-ε turbulence closures: the standard k-ε and RNG k-ε models. The flow and dispersion data measured in the wind tunnel experiments were compared with the results of the CFD models in order to evaluate the prediction accuracy for pollutant dispersion. The results of the CFD models showed good agreement with the results of the wind tunnel experiments. The results indicate that the turbulent velocity is reduced by the obstacle models, and the maximum dispersion appears around the wake region of the obstacles.

  20. Next generation of Z* modelling tool for high intensity EUV and soft x-ray plasma sources simulations

    NASA Astrophysics Data System (ADS)

    Zakharov, S. V.; Zakharov, V. S.; Choi, P.; Krukovskiy, A. Y.; Novikov, V. G.; Solomyannaya, A. D.; Berezin, A. V.; Vorontsov, A. S.; Markov, M. B.; Parot'kin, S. V.

    2011-04-01

    Specifications for EUV sources require high EUV power at the intermediate focus (IF) for high-volume manufacturing (HVM) lithography, and very high brightness for actinic mask and in-situ inspection. In practice, non-equilibrium plasma dynamics and self-absorption of radiation limit the in-band radiance of the plasma and the usable radiation power of a conventional single-unit EUV source. A new generation of the computational code Z* is currently being developed through an international collaboration within the FP7 IAPP project FIRE for modelling multi-physics phenomena in radiation plasma sources, particularly for EUVL. The radiation plasma dynamics, the spectral effects of self-absorption in LPP and DPP, and the resulting conversion efficiencies are considered. The generation of fast electrons, ions and neutrals is discussed. Conditions for the enhanced radiance of highly ionized plasma in the presence of fast electrons are evaluated. The modelling results are guiding a new generation of EUV sources being developed at Nano-UV, based on spatial/temporal multiplexing of individual high-brightness units, to deliver the requisite brightness and power for both HVM lithography and actinic metrology applications.

  1. Transport and solubility of Hetero-disperse dry deposition particulate matter subject to urban source area rainfall-runoff processes

    NASA Astrophysics Data System (ADS)

    Ying, G.; Sansalone, J.

    2010-03-01

    With respect to hydrologic processes, the impervious pavement interface significantly alters relationships between rainfall and runoff. Commensurate with this alteration of hydrologic processes, the pavement also facilitates the transport and solubility of dry deposition particulate matter (PM) in runoff. This study examines dry depositional flux rates, granulometric modification by runoff transport, and the generation of total dissolved solids (TDS), alkalinity and conductivity in source area runoff resulting from PM solubility. PM was collected from a paved source area transportation corridor (I-10) in Baton Rouge, Louisiana, encompassing 17 dry deposition and 8 runoff events. The mass-based granulometric particle size distribution (PSD) is measured and modeled through a cumulative gamma function, while PM surface area distributions across the PSD follow a log-normal distribution. Dry deposition flux rates are modeled as separate first-order exponential functions of previous dry hours (PDH) for PM and its suspended, settleable and sediment fractions. When translocated from dry deposition into runoff, PSDs are modified, with the d50m decreasing from 331 to 14 μm after transport and 60 min of settling. Solubility experiments as a function of pH, contact time and particle size using source area rainfall generate constitutive models to reproduce pH, alkalinity, TDS and conductivity for historical events. Equilibrium pH, alkalinity and TDS are strongly influenced by particle size and contact time. The constitutive leaching models are combined with measured PSDs from a series of rainfall-runoff events to demonstrate that the model results replicate alkalinity and TDS in runoff from the subject watershed. Results illustrate the granulometry of dry deposition PM, the modification of PSDs along the drainage pathway, and the role of PM solubility in the generation of TDS, alkalinity and conductivity in urban source area rainfall-runoff.
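
    The first-order exponential buildup with previous dry hours can be fitted directly with nonlinear least squares; the sketch below uses hypothetical flux values and parameters, not the I-10 data.

```python
import numpy as np
from scipy.optimize import curve_fit

def pm_buildup(pdh, m_max, k):
    """First-order exponential accumulation of dry-deposited PM:
    mass approaches m_max with rate constant k as dry hours accrue."""
    return m_max * (1.0 - np.exp(-k * pdh))

# Hypothetical deposition measurements (previous dry hours, accumulated mass).
pdh = np.array([12.0, 24.0, 48.0, 96.0, 168.0, 240.0])
mass = np.array([8.0, 15.0, 24.0, 33.0, 38.0, 40.0])

(m_max, k), _ = curve_fit(pm_buildup, pdh, mass, p0=(40.0, 0.01))
print(f"m_max = {m_max:.1f}, k = {k:.4f} per hour")
```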

  2. Simulations of ultra-high energy cosmic rays in the local Universe and the origin of cosmic magnetic fields

    NASA Astrophysics Data System (ADS)

    Hackstein, S.; Vazza, F.; Brüggen, M.; Sorce, J. G.; Gottlöber, S.

    2018-04-01

    We simulate the propagation of cosmic rays at ultra-high energies, ≳10^18 eV, in models of extragalactic magnetic fields in constrained simulations of the local Universe. We use constrained initial conditions with the cosmological magnetohydrodynamics code ENZO. The resulting models of the distribution of magnetic fields in the local Universe are used in the CRPROPA code to simulate the propagation of ultra-high energy cosmic rays. We investigate the impact of six different magneto-genesis scenarios, both primordial and astrophysical, on the propagation of cosmic rays over cosmological distances. Moreover, we study the influence of different source distributions around the Milky Way. Our study shows that different scenarios of magneto-genesis do not have a large impact on the anisotropy measurements of ultra-high energy cosmic rays. However, at high energies above the Greisen-Zatsepin-Kuzmin (GZK) limit, there is anisotropy caused by the distribution of nearby sources, independent of the magnetic field model. This provides a chance to identify cosmic ray sources with future full-sky measurements and high number statistics at the highest energies. Finally, we compare our results to the dipole signal measured by the Pierre Auger Observatory. All our source models and magnetic field models could reproduce the observed dipole amplitude with a pure iron injection composition. Our results indicate that the dipole is observed due to clustering of secondary nuclei in the direction of nearby sources of heavy nuclei. A light injection composition is disfavoured, since the increase in dipole angular power from 4 to 8 EeV is too slow compared to observations by the Pierre Auger Observatory.

  3. Development and validation of a new population-based simulation model of osteoarthritis in New Zealand.

    PubMed

    Wilson, R; Abbott, J H

    2018-04-01

    To describe the construction and preliminary validation of a new population-based microsimulation model developed to analyse the health and economic burden and cost-effectiveness of treatments for knee osteoarthritis (OA) in New Zealand (NZ). We developed the New Zealand Management of Osteoarthritis (NZ-MOA) model, a discrete-time state-transition microsimulation model of the natural history of radiographic knee OA. In this article, we report on the model structure, derivation of input data, validation of baseline model parameters against external data sources, and validation of model outputs by comparison of the predicted population health loss with previous estimates. The NZ-MOA model simulates both the structural progression of radiographic knee OA and the stochastic development of multiple disease symptoms. Input parameters were sourced from NZ population-based data where possible, and from international sources where NZ-specific data were not available. The predicted distributions of structural OA severity and health utility detriments associated with OA were externally validated against other sources of evidence, and uncertainty resulting from key input parameters was quantified. The resulting lifetime and current population health-loss burden was consistent with estimates of previous studies. The new NZ-MOA model provides reliable estimates of the health loss associated with knee OA in the NZ population. The model structure is suitable for analysis of the effects of a range of potential treatments, and will be used in future work to evaluate the cost-effectiveness of recommended interventions within the NZ healthcare system.
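
    A discrete-time state-transition microsimulation of this kind advances each simulated individual through annual transition probabilities; the sketch below uses illustrative severity states and a hypothetical transition matrix, not NZ-MOA parameters.

```python
import numpy as np

# Illustrative annual transition matrix over radiographic severity states
# (rows: current state; columns: next state). Values are hypothetical.
states = ["none", "mild", "moderate", "severe"]
P = np.array([[0.96, 0.04, 0.00, 0.00],
              [0.00, 0.93, 0.07, 0.00],
              [0.00, 0.00, 0.90, 0.10],
              [0.00, 0.00, 0.00, 1.00]])

def simulate_cohort(n, years, seed=0):
    """Run each simulated individual through annual state transitions."""
    rng = np.random.default_rng(seed)
    state = np.zeros(n, dtype=int)          # everyone starts disease-free
    for _ in range(years):
        u = rng.random(n)
        cum = P[state].cumsum(axis=1)       # per-person cumulative rows
        state = (u[:, None] > cum).sum(axis=1)
    return state

final = simulate_cohort(10_000, 40)
for i, s in enumerate(states):
    print(s, (final == i).mean())
```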

  4. Rapid Inflation Caused by Shallow Magmatic Activities at Okmok Volcano, Alaska, Detected by GPS Campaigns 2000-2003

    NASA Astrophysics Data System (ADS)

    Miyagi, Y.; Freymueller, J.; Kimata, F.; Sato, T.; Mann, D.

    2006-12-01

    Okmok volcano is located on Umnak Island in the Aleutian Arc, Alaska. The volcano consists of a large caldera with several post-caldera cones within it. It has erupted more than 10 times during the last century, with the latest eruption occurring in February 1997. Annual GPS campaigns during 2000-2003 have revealed rapid inflation at Okmok volcano. Surface deformation indicates that Okmok has been inflating during 2000-2003 at a variable rate. Total displacements over three years are as large as 15 cm of maximum radial displacement and more than 35 cm of maximum uplift. The simple inflation pattern after 2001, showing radially outward displacements from the caldera center and significant uplift, is modeled by a Mogi inflation source located at a depth of about 3.1 km beneath the geometric center of the caldera, which we interpret as a shallow magma chamber. The results from our GPS measurements correspond approximately to the results from InSAR measurements for almost the same periods, except for an underestimate of the volume change rate of the source deduced from InSAR data for the period 2002-2003. Taking the InSAR results into consideration, the volume increase in the source is estimated to be about 0.028 km3 during 1997-2003. This means that 20-54 percent of the volume erupted in the 1997 eruption has already been replenished in the shallow magma chamber. An eruption recurrence time estimated from the volume change rate of the source is about 15-30 years for 1997-sized eruptions, consistent with the roughly 25-year average interval between major eruptions at Okmok volcano. Additional modeling using a rectangular tensile source combined with the main spherical source suggests the possibility of further magma storage located between the main source and the active vent, associated with lateral magma transport between them. The combined model improved residuals compared to those from the single-source model, and provided a significantly better fit to the deformation data inside the caldera.
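
    The Mogi point-source formulas behind such inflation modeling give simple closed-form surface displacements; the sketch below uses the depth and cumulative volume change quoted above purely as illustrative inputs, assuming the standard elastic half-space form with Poisson ratio 0.25.

```python
import numpy as np

def mogi(r, depth, dV, nu=0.25):
    """Surface displacements for a Mogi point pressure source of volume
    change dV at the given depth, at radial distance r from the source
    axis (elastic half-space, Poisson ratio nu)."""
    R3 = (r**2 + depth**2) ** 1.5
    ur = (1.0 - nu) * dV / np.pi * r / R3      # radial (outward)
    uz = (1.0 - nu) * dV / np.pi * depth / R3  # vertical (uplift)
    return ur, uz

# Depth and cumulative volume change quoted in the abstract, as inputs only.
depth = 3.1e3            # m
dV = 0.028e9             # m^3 (0.028 km^3)
for r in [0.0, 2e3, 5e3, 10e3]:
    ur, uz = mogi(r, depth, dV)
    print(f"r = {r/1e3:4.1f} km: ur = {ur*100:5.1f} cm, uz = {uz*100:5.1f} cm")
```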

  5. Analytical Round Robin for Elastic-Plastic Analysis of Surface Cracked Plates: Phase I Results

    NASA Technical Reports Server (NTRS)

    Wells, D. N.; Allen, P. A.

    2012-01-01

    An analytical round robin for the elastic-plastic analysis of surface cracks in flat plates was conducted with 15 participants. Experimental results from a surface crack tension test in 2219-T8 aluminum plate provided the basis for the inter-laboratory study (ILS). The study proceeded in a blind fashion given that the analysis methodology was not specified to the participants, and key experimental results were withheld. This approach allowed the ILS to serve as a current measure of the state of the art for elastic-plastic fracture mechanics analysis. The analytical results and the associated methodologies were collected for comparison, and sources of variability were studied and isolated. The results of the study revealed that the J-integral analysis methodology using the domain integral method is robust, providing reliable J-integral values without being overly sensitive to modeling details. General modeling choices such as analysis code, model size (mesh density), crack tip meshing, or boundary conditions were not found to be sources of significant variability. For analyses controlled only by far-field boundary conditions, the greatest source of variability in the J-integral assessment is introduced through the constitutive model. This variability can be substantially reduced by using crack mouth opening displacements to anchor the assessment. Conclusions provide recommendations for analysis standardization.

  6. Development of surrogate models for the prediction of the flow around an aircraft propeller

    NASA Astrophysics Data System (ADS)

    Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros

    2018-05-01

    In the present work, the derivation of two surrogate models (SMs) for the flow around a propeller for small aircraft is presented. Both methodologies use functions derived from computations with the detailed propeller geometry. The computations were performed using the k-ω shear stress transport model for turbulence. In the SMs, the propeller was represented by a disk-like computational domain in which source terms were introduced into the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence than the detailed model, while also providing results closer to the available operational data. The regression-based model was the most accurate and required less computational time for convergence.

  7. Application of the Quadrupole Method for Simulation of Passive Thermography

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Zalameda, Joseph N.; Gregory, Elizabeth D.

    2017-01-01

    Passive thermography has been shown to be an effective method for in-situ, real-time nondestructive evaluation (NDE) of damage growth in a composite structure during cyclic loading. The heat generated by a subsurface flaw produces a measurable thermal profile at the surface. This paper models the heat generation as a planar subsurface source and calculates the resultant surface temperature profile using a three-dimensional quadrupole method. The results of the model are compared to finite element simulations of the same planar sources and to experimental data acquired during cyclic loading of composite specimens.

  8. Enhancements to the MCNP6 background source

    DOE PAGES

    McMath, Garrett E.; McKinney, Gregg W.

    2015-10-19

    The particle transport code MCNP has been used to produce a background radiation data file on a worldwide grid that can easily be sampled as a source in the code. Location-dependent cosmic showers were modeled by Monte Carlo methods to produce the resulting neutron and photon background flux at 2054 locations around Earth. An improved galactic-cosmic-ray feature was used to model the source term, as well as data from multiple sources to model the transport environment through atmosphere, soil, and seawater. A new elevation scaling feature was also added to the code to increase the accuracy of the cosmic neutron background for user locations with off-grid elevations. Furthermore, benchmarking has shown the neutron integral flux values to be within experimental error.

  9. Multiple Component Event-Related Potential (mcERP) Estimation

    NASA Technical Reports Server (NTRS)

    Knuth, K. H.; Clanton, S. T.; Shah, A. S.; Truccolo, W. A.; Ding, M.; Bressler, S. L.; Trejo, L. J.; Schroeder, C. E.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We show how model-based estimation of the neural sources responsible for transient neuroelectric signals can be improved by the analysis of single-trial data. Previously, we showed that a multiple component event-related potential (mcERP) algorithm can extract the responses of individual sources from recordings of a mixture of multiple, possibly interacting, neural ensembles. The mcERP algorithm also estimated single-trial amplitudes and onset latencies, thus allowing more accurate estimation of ongoing neural activity during an experimental trial. The mcERP algorithm is related to infomax independent component analysis (ICA); however, the underlying signal model is more physiologically realistic in that a component is modeled as a stereotypic waveshape varying both in amplitude and onset latency from trial to trial. The result is a model that reflects quantities of interest to the neuroscientist. Here we demonstrate that the mcERP algorithm provides more accurate results than more traditional methods such as factor analysis and the more recent ICA. Whereas factor analysis assumes the sources are orthogonal and ICA assumes the sources are statistically independent, the mcERP algorithm makes no such assumptions, thus allowing investigators to examine interactions among components by estimating the properties of single-trial responses.

  10. Effects of sound source directivity on auralizations

    NASA Astrophysics Data System (ADS)

    Sheets, Nathan W.; Wang, Lily M.

    2002-05-01

    Auralization, the process of rendering audible the sound field in a simulated space, is a useful tool in the design of acoustically sensitive spaces. An auralization depends on the calculation of an impulse response between a source and a receiver, each of which has certain directional behavior. Many auralizations created to date have used omnidirectional sources; the effect of source directivity on auralizations is a relatively unexplored area. To examine if and how the directivity of a sound source affects the acoustical results obtained in a room, we used directivity data for three sources in the room acoustic modeling program Odeon. The three sources are violin, piano, and human voice. The results from using directional data are compared to those obtained using omnidirectional source behavior, through both objective measure calculations and subjective listening tests.

  11. Integrating multiple data sources in species distribution modeling: A framework for data fusion

    USGS Publications Warehouse

    Pacifici, Krishna; Reich, Brian J.; Miller, David A.W.; Gardner, Beth; Stauffer, Glenn E.; Singh, Susheela; McKerrow, Alexa; Collazo, Jaime A.

    2017-01-01

    The last decade has seen a dramatic increase in the use of species distribution models (SDMs) to characterize patterns of species' occurrence and abundance. Efforts to parameterize SDMs often create a tension between the quality and quantity of data available to fit models. Estimation methods that integrate both standardized and non-standardized data types offer a potential solution to this tradeoff. Recently, several authors have developed approaches for jointly modeling two sources of data (one of high quality and one of lesser quality). We extend their work by allowing for explicit spatial autocorrelation in occurrence and detection error using a Multivariate Conditional Autoregressive (MVCAR) model, and we develop three models that share information in a less direct manner, resulting in more robust performance when the auxiliary data are of lesser quality. We describe these three new approaches ("Shared," "Correlation," "Covariates") for combining data sources and show their use in a case study of the Brown-headed Nuthatch in the Southeastern U.S. and through simulations. All three of the approaches that used the second data source improved out-of-sample predictions relative to a single data source ("Single"). When information in the second data source is of high quality, the Shared model performs best, but the Correlation and Covariates models also perform well. When the information in the second data source is of lesser quality, the Correlation and Covariates models performed better, suggesting they are robust alternatives when little is known about auxiliary data collected opportunistically or through citizen scientists. Methods that allow both data types to be used will maximize the useful information available for estimating species distributions.

  12. Forecasting daily source air quality using multivariate statistical analysis and radial basis function networks.

    PubMed

    Sun, Gang; Hoff, Steven J; Zelle, Brian C; Nelson, Minda A

    2008-12-01

    It is vital to forecast gas and particulate matter concentrations and emission rates (GPCER) from livestock production facilities to assess the impact of airborne pollutants on human health, the ecological environment, and global warming. Modeling source air quality is a complex process because of abundant nonlinear interactions between GPCER and other factors. The objective of this study was to introduce statistical methods and a radial basis function (RBF) neural network to predict daily source air quality in Iowa swine deep-pit finishing buildings. The results show that four variables (outdoor and indoor temperature, animal units, and ventilation rate) were identified as relatively important model inputs using statistical methods. It was further demonstrated that only two factors, an environment factor and an animal factor, were capable of explaining more than 94% of the total variability after principal component analysis. Introducing fewer, uncorrelated variables to the neural network reduces model structure complexity, minimizes computation cost, and mitigates overfitting. The RBF network predictions were in good agreement with the actual measurements, with correlation coefficients between 0.741 and 0.995 and very low values of the systemic performance indexes for all the models. These results indicate that the RBF network can be trained to model these highly nonlinear relationships. Thus, RBF neural network technology combined with multivariate statistical methods is a promising tool for modeling air pollutant emissions.
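
    An RBF network of the kind described is commonly trained in two stages: clustering to place the basis centers, then linear least squares for the output weights. The sketch below follows that generic recipe with synthetic stand-ins for the four inputs named above, not the Iowa data or the authors' exact architecture.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def rbf_features(X, centers, sigma):
    """Gaussian RBF activations for each input row against each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(1)
# Synthetic stand-ins for outdoor/indoor temperature, animal units, ventilation.
X = rng.normal(size=(300, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.2 * X[:, 3]

centers, _ = kmeans2(X, 20, seed=1, minit="++")   # stage 1: place centers
Phi = rbf_features(X, centers, sigma=1.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # stage 2: output weights

pred = rbf_features(X, centers, 1.0) @ w
print("training correlation:", np.corrcoef(pred, y)[0, 1])
```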

  13. Gis-Based Route Finding Using ANT Colony Optimization and Urban Traffic Data from Different Sources

    NASA Astrophysics Data System (ADS)

    Davoodi, M.; Mesgari, M. S.

    2015-12-01

    Nowadays, traffic data are obtained from multiple sources, including GPS, Video Vehicle Detectors (VVD), Automatic Number Plate Recognition (ANPR), Floating Car Data (FCD), VANETs, etc. All such data can be used for route finding. This paper proposes a model for finding the optimum route based on the integration of traffic data from different sources. Ant Colony Optimization is applied because the concept of the method, the movement of ants in a network, is analogous to the movement of cars in an urban road network. The results indicate that this model is capable of incorporating data from different sources, which may even be inconsistent.
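
    A minimal ant colony optimization loop for a weighted road graph looks like the following; the graph, edge costs and parameters are hypothetical stand-ins for fused multi-source traffic data, not the paper's implementation.

```python
import random

# Hypothetical road graph: node -> {neighbor: travel cost from fused traffic data}
graph = {0: {1: 2.0, 2: 4.0}, 1: {2: 1.0, 3: 7.0},
         2: {3: 3.0}, 3: {}}

def ant_colony(graph, src, dst, n_ants=50, n_iter=100,
               alpha=1.0, beta=2.0, rho=0.5, Q=1.0):
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}  # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            node, path, cost = src, [src], 0.0
            while node != dst:
                choices = [(v, c) for v, c in graph[node].items() if v not in path]
                if not choices:
                    cost = float("inf"); break       # dead end: abandon ant
                weights = [tau[(node, v)] ** alpha * (1.0 / c) ** beta
                           for v, c in choices]
                v, c = random.choices(choices, weights=weights)[0]
                path.append(v); cost += c; node = v
            if cost < best_cost:
                best, best_cost = path, cost
            if cost < float("inf"):                  # deposit pheromone
                for u, v in zip(path, path[1:]):
                    tau[(u, v)] += Q / cost
        tau = {e: (1.0 - rho) * t for e, t in tau.items()}  # evaporation
    return best, best_cost

print(ant_colony(graph, 0, 3))   # expect path [0, 1, 2, 3] with cost 6.0
```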

  14. An almost-parameter-free harmony search algorithm for groundwater pollution source identification.

    PubMed

    Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui

    2013-01-01

    The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the results indicate that the proposed optimization model based on the almost-parameter-free harmony search algorithm can give satisfactory estimates, even when irregular geometry, erroneous monitoring data, and a shortage of prior information on potential source locations are considered.
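
    The basic harmony search loop is easy to state; the "almost-parameter-free" variant adapts the parameters that are fixed here for brevity, and the toy misfit below merely stands in for the transport-simulation residual the paper minimizes.

```python
import random

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, n_iter=5000, seed=0):
    rng = random.Random(seed)
    new = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    memory = sorted((new() for _ in range(hms)), key=objective)
    for _ in range(n_iter):
        x = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # pick from harmony memory
                v = rng.choice(memory)[j]
                if rng.random() < par:              # pitch adjustment
                    v += bw * (hi - lo) * rng.uniform(-1, 1)
                x.append(min(max(v, lo), hi))
            else:                                   # random consideration
                x.append(rng.uniform(lo, hi))
        if objective(x) < objective(memory[-1]):    # replace worst harmony
            memory[-1] = x
            memory.sort(key=objective)
    return memory[0]

# Toy stand-in for the simulation-optimization misfit: recover a source
# at (x, y) = (3, 7) with release strength 5 (all values hypothetical).
target = (3.0, 7.0, 5.0)
misfit = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
print(harmony_search(misfit, [(0, 10), (0, 10), (0, 10)]))
```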

  15. Thermal imager sources of non-uniformities: modeling of static and dynamic contributions during operations

    NASA Astrophysics Data System (ADS)

    Sozzi, B.; Olivieri, M.; Mariani, P.; Giunti, C.; Zatti, S.; Porta, A.

    2014-05-01

    Due to the rapid growth of cooled-detector sensitivity in recent years, temperature differences of 10-20 mK between adjacent objects can theoretically be discerned in the image if the calibration algorithm (NUC) is capable of taking into account and compensating every spatial noise source. To predict how robust the NUC algorithm is under all working conditions, modeling the flux impinging on the detector becomes the key to controlling and improving the quality of a properly calibrated image in all scene/ambient conditions, including every source of spurious signal. The available literature deals only with non-uniformity caused by pixel-to-pixel differences in detector parameters and by the difference between the reflection of the detector cold parts and the housing at the operating temperature. These models do not explain the effects on NUC results due to vignetting, dynamic sources outside and inside the FOV, and reflected contributions from hot spots inside the housing (for example, a thermal reference far from the optical path). We propose a mathematical model in which: 1) the detector and the system (opto-mechanical configuration and scene) are considered separately and represented by two independent transfer functions; and 2) on every pixel of the array, the amount of photonic signal coming from the different spurious sources is considered in order to evaluate the effect on residual spatial noise under dynamic operating conditions. The article also contains simulation results showing how this model can be used to predict the amount of spatial noise.
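
    For context, the baseline NUC stressed here is typically a per-pixel two-point (gain/offset) calibration against uniform references; the spurious contributions modeled in the paper appear as departures from this linear per-pixel assumption. A minimal sketch with a synthetic fixed-pattern response:

```python
import numpy as np

def two_point_nuc(raw_cold, raw_hot, t_cold, t_hot):
    """Per-pixel gain and offset from two uniform blackbody references,
    assuming a linear pixel response between the two temperatures."""
    gain = (t_hot - t_cold) / (raw_hot - raw_cold)
    offset = t_cold - gain * raw_cold
    return gain, offset

rng = np.random.default_rng(2)
shape = (4, 4)
pix_gain = 1.0 + 0.05 * rng.normal(size=shape)     # fixed-pattern gain
pix_off = 2.0 * rng.normal(size=shape)             # fixed-pattern offset
measure = lambda t: pix_gain * t + pix_off         # idealized pixel response

gain, offset = two_point_nuc(measure(20.0), measure(40.0), 20.0, 40.0)
scene = measure(30.0)                              # uncorrected frame
corrected = gain * scene + offset
print(np.round(corrected, 3))                      # ~30.0 everywhere
```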

  16. Identifying Greater Sage-Grouse source and sink habitats for conservation planning in an energy development landscape.

    PubMed

    Kirol, Christopher P; Beck, Jeffrey L; Huzurbazar, Snehalata V; Holloran, Matthew J; Miller, Scott N

    2015-06-01

    Conserving a declining species that is facing many threats, including overlap of its habitats with energy extraction activities, depends upon identifying and prioritizing the value of the habitats that remain. In addition, habitat quality is often compromised when source habitats are lost or fragmented due to anthropogenic development. Our objective was to build an ecological model to classify and map habitat quality in terms of source or sink dynamics for Greater Sage-Grouse (Centrocercus urophasianus) in the Atlantic Rim Project Area (ARPA), a developing coalbed natural gas field in south-central Wyoming, USA. We used occurrence and survival modeling to evaluate relationships between environmental and anthropogenic variables at multiple spatial scales and for all female summer life stages, including nesting, brood-rearing, and non-brooding females. For each life stage, we created resource selection functions (RSFs). We weighted the RSFs and combined them to form a female summer occurrence map. We also modeled survival as a function of spatial variables for nest, brood, and adult female summer survival. The survival models were mapped individually as survival probability functions and then combined with fixed vital rates in a fitness metric model that, when mapped, predicted habitat productivity (productivity map). Our results demonstrate a suite of environmental and anthropogenic variables at multiple scales that were predictive of occurrence and survival. We created a source-sink map by overlaying our female summer occurrence map and productivity map to predict habitats contributing to population surpluses (source habitats) or deficits (sink habitats) and low-occurrence habitats on the landscape. The source-sink map predicted that of the Sage-Grouse habitat within the ARPA, 30% was primary source, 29% was secondary source, 4% was primary sink, 6% was secondary sink, and 31% was low occurrence. Our results provide evidence that energy development and avoidance of energy infrastructure were probably reducing the amount of source habitat within the ARPA landscape. Our source-sink map provides managers with a means of prioritizing habitats for conservation planning based on source and sink dynamics. The spatial identification of high-value (i.e., primary source) as well as suboptimal (i.e., primary sink) habitats allows for informed energy development that minimizes effects on local wildlife populations.

  17. Automation for System Safety Analysis

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  18. Code TESLA for Modeling and Design of High-Power High-Efficiency Klystrons

    DTIC Science & Technology

    2011-03-01

    The code TESLA is used for the modeling and design of high-power, high-efficiency single-beam and multiple-beam klystrons as high-power RF sources. These sources are widely used, or proposed for use, in present and future accelerators. Comparisons of TESLA modelling results with experimental data for several multiple-beam klystrons are shown.

  19. Role of positive ions on the surface production of negative ions in a fusion plasma reactor type negative ion source--Insights from a three dimensional particle-in-cell Monte Carlo collisions model

    NASA Astrophysics Data System (ADS)

    Fubiani, G.; Boeuf, J. P.

    2013-11-01

    Results from a 3D self-consistent Particle-In-Cell Monte Carlo Collisions (PIC MCC) model of a high power fusion-type negative ion source are presented for the first time. The model is used to calculate the plasma characteristics of the ITER prototype BATMAN ion source developed in Garching. Special emphasis is put on the production of negative ions on the plasma grid surface. The question of the relative roles of the impact of neutral hydrogen atoms and positive ions on the cesiated grid surface has attracted much attention recently and the 3D PIC MCC model is used to address this question. The results show that the production of negative ions by positive ion impact on the plasma grid is small with respect to the production by atomic hydrogen or deuterium bombardment (less than 10%).

  20. A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector.

    PubMed

    Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A

    2018-05-18

    This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method.
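
    The attenuation relation underlying such depth estimation is Beer-Lambert decay of the count rate with overburden thickness, so depth follows from the logarithm of a count-rate ratio. The sketch below is a simplified illustration; the attenuation coefficient and calibration values are assumed, not the paper's fitted model, which also accounts for geometry.

```python
import math

def depth_from_counts(c0, c, mu):
    """Invert Beer-Lambert attenuation C = C0 * exp(-mu * d) for depth d.
    c0: count rate with no overburden, c: measured rate, mu: linear
    attenuation coefficient of the medium (per cm)."""
    return math.log(c0 / c) / mu

MU_SAND = 0.12   # illustrative value for 662 keV gamma rays in sand, per cm
c0 = 100.0       # cps at zero depth (hypothetical calibration)
for c in [50.0, 25.0, 14.0]:
    print(f"{c:5.1f} cps -> depth ~ {depth_from_counts(c0, c, MU_SAND):.1f} cm")
```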

  1. Model Calculations of the Impact of NO(x) from Air Traffic, Lightning and Surface Emissions, Compared with Measurements

    NASA Technical Reports Server (NTRS)

    Meijer, E. W.; vanVelthoven, P. F. J.; Thompson, A. M.; Pfister, L.; Schlager, H.; Schulte, P.; Kelder, H.

    1999-01-01

    The impact of NO(x) from aircraft emissions, lightning and surface contributions on atmospheric nitrogen oxides and ozone has been investigated with the three-dimensional global chemistry transport model TM3 by partitioning the nitrogen oxides and ozone according to source category. The results have been compared with POLINAT II and SONEX airborne measurements in the North Atlantic flight corridor in 1997. Various cases have been investigated: measurements during a stagnant anticyclone and an almost cut-off low, both with expected high aircraft contributions; a southbound flight with an expected strong flight-corridor gradient and lightning contributions in the south; and a transatlantic flight with expected boundary-layer pollution near the U.S. coast. The agreement between modeled results and measurements is reasonably good for NO and ozone. Also, the calculated impacts of the three defined sources were consistent with the estimated exposure of the sampled air to these sources, obtained from specialized back-trajectory model products.

  2. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only the Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is subsequently written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as with computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  3. Development and application of a reactive plume-in-grid model: evaluation over Greater Paris

    NASA Astrophysics Data System (ADS)

    Korsakissok, I.; Mallet, V.

    2010-09-01

    Emissions from major point sources are poorly represented by classical Eulerian models. An overestimation of the horizontal plume dilution, a poor representation of the vertical diffusion, and an incorrect estimate of the chemical reaction rates are the main limitations of such models in the vicinity of major point sources. The plume-in-grid method is a multiscale modeling technique that couples a local-scale Gaussian puff model with an Eulerian model in order to better represent these emissions. We present the plume-in-grid model developed in the air quality modeling system Polyphemus, with full gaseous chemistry. The model is evaluated over the metropolitan Île-de-France region during six months (summer 2001). The subgrid-scale treatment is used for 89 major point sources, a selection based on the emission rates of NOx and SO2. Results with and without the subgrid treatment of point emissions are compared, and their performance is assessed by comparison with observations at measurement stations. A sensitivity study is also carried out on several local-scale parameters as well as on the vertical diffusion within the urban area. Primary pollutants are shown to be the most impacted by the plume-in-grid treatment. SO2 is the most impacted pollutant, since point sources account for an important part of total SO2 emissions, whereas NOx emissions are mostly due to traffic. The spatial impact of the subgrid treatment is localized in the vicinity of the sources, especially for reactive species (NOx and O3). Ozone is mostly sensitive to the time step between two puff emissions, which influences the in-plume chemical reactions, whereas the almost-passive species SO2 is more sensitive to the injection time, which determines the duration of the subgrid-scale treatment. Future developments include an extension to handle aerosol chemistry, and an application to the modeling of line sources in order to use the subgrid treatment with road emissions. The latter is expected to lead to more striking results, given the importance of traffic emissions for the pollutants of interest.
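
    The local-scale kernel of a plume-in-grid scheme is the Gaussian puff; a minimal sketch with fixed, illustrative spread parameters follows (a real model computes them from stability and travel time, and this is not Polyphemus code).

```python
import numpy as np

def puff_concentration(x, y, z, xc, yc, zc, q, sy, sz):
    """Concentration from a single Gaussian puff of mass q centred at
    (xc, yc, zc), with horizontal/vertical spreads sy, sz and a ground
    image term to reflect the plume at z = 0."""
    norm = q / ((2.0 * np.pi) ** 1.5 * sy * sy * sz)
    horiz = np.exp(-((x - xc) ** 2 + (y - yc) ** 2) / (2.0 * sy ** 2))
    vert = (np.exp(-((z - zc) ** 2) / (2.0 * sz ** 2))
            + np.exp(-((z + zc) ** 2) / (2.0 * sz ** 2)))
    return norm * horiz * vert

# One puff, 100 m downwind of a 50 m stack, sampled at ground level.
print(puff_concentration(100.0, 0.0, 0.0, 100.0, 0.0, 50.0,
                         q=1.0, sy=20.0, sz=10.0))
```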

  4. A method for establishing constraints on galactic magnetic field models using ultra high energy cosmic rays and results from the data of the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Sutherland, Michael Stephen

    2010-12-01

    The Galactic magnetic field is poorly understood. Essentially the only reliable measurements of its properties are the local orientation and field strength; its behavior at galactic scales is unknown. Historically, magnetic field measurements have been performed using radio astronomy techniques that are sensitive to certain regions of the Galaxy and rely upon models of the distribution of gas and dust within the disk. However, the deflection of the trajectories of ultra-high energy cosmic rays arriving from extragalactic sources depends only on the properties of the magnetic field. In this work, a method is developed for determining acceptable global models of the Galactic magnetic field by backtracking cosmic rays through the field model. This method constrains the parameter space of magnetic field models by comparing a test statistic between backtracked cosmic rays and isotropic expectations for assumed cosmic ray source and composition hypotheses. Constraints on Galactic magnetic field models are established using data from the southern site of the Pierre Auger Observatory under various source distribution and cosmic ray composition hypotheses. Field models possessing structure similar to the stellar spiral arms are found to be inconsistent with hypotheses of an iron cosmic ray composition and sources selected from catalogs tracing the local matter distribution in the universe. These field models are consistent with hypothesis combinations of proton composition and sources tracing the local matter distribution. In particular, strong constraints are found on the parameter space of bisymmetric magnetic field models scanned under hypotheses of proton composition and sources selected from the 2MRS-VS, Swift 39-month, and VCV catalogs. Assuming that the Galactic magnetic field is well described by a bisymmetric model under these hypotheses, the magnetic field strength near the Sun is less than 3-4 μG and the magnetic pitch angle is less than -8°. These results comprise the first measurements of the Galactic magnetic field using ultra-high energy cosmic rays and supplement existing radio astronomical measurements.

  5. Nitrate source apportionment using a combined dual isotope, chemical and bacterial property, and Bayesian model approach in river systems

    NASA Astrophysics Data System (ADS)

    Xia, Yongqiu; Li, Yuefei; Zhang, Xinyu; Yan, Xiaoyuan

    2017-01-01

    Nitrate (NO3-) pollution is a serious problem worldwide, particularly in countries with intensive agricultural and population activities. Previous studies have used δ15N-NO3- and δ18O-NO3- to determine NO3- sources in rivers. However, this approach is subject to substantial uncertainties and limitations because of the numerous NO3- sources, the wide isotopic ranges, and the existing isotopic fractionations. In this study, we outline a combined procedure for improving the determination of NO3- sources in a paddy agriculture-urban gradient watershed in eastern China. First, the main sources of NO3- in the Qinhuai River were examined with the dual-isotope biplot approach, in which we narrowed the isotope ranges using site-specific isotopic results. Next, the bacterial groups and chemical properties of the river water were analyzed to verify these sources. Finally, we introduced a Bayesian model to apportion the spatiotemporal variations of the NO3- sources. Denitrification was incorporated into the Bayesian model for the first time because it plays an important role in the nitrogen pathway. The results showed that fertilizer contributed large amounts of NO3- to the surface water in traditional agricultural regions, whereas manure effluents were the dominant NO3- source in intensified agricultural regions, especially during the wet seasons. Sewage effluents were important in all three land uses and exhibited great differences between the dry season and the wet season. This combined analysis quantitatively delineates the proportions of NO3- sources from paddy agriculture to urban river water for both dry and wet seasons and incorporates isotopic fractionation and uncertainties in the source compositions.
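
    The heart of such a Bayesian apportionment is an isotope mass balance over unknown source proportions. The sketch below estimates them by simple importance sampling from a Dirichlet prior, with illustrative source signatures; it omits the fractionation and denitrification terms the full model includes.

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative source signatures (d15N, d18O of NO3-) for three sources.
sources = np.array([[ 4.0, -2.0],    # fertilizer
                    [12.0,  5.0],    # manure/sewage
                    [ 0.5, 50.0]])   # atmospheric deposition
obs = np.array([7.0, 2.5])           # measured river signature
sigma = 1.5                          # assumed measurement/process sd

# Sample candidate proportions from a flat Dirichlet prior and weight
# them by the Gaussian likelihood of the observed mixture.
p = rng.dirichlet(np.ones(3), size=200_000)
pred = p @ sources
logw = -0.5 * np.sum((pred - obs) ** 2, axis=1) / sigma ** 2
w = np.exp(logw - logw.max())
posterior_mean = (w[:, None] * p).sum(axis=0) / w.sum()
print(posterior_mean)                # estimated source proportions
```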

  6. Spectral studies of cosmic X-ray sources

    NASA Astrophysics Data System (ADS)

    Blissett, R. J.

    1980-01-01

    The conventional "indirect" method of reduction and data analysis of spectral data from non-dispersive X-ray detectors, by the fitting of assumed spectral models, is examined. The limitations of this procedure are presented, and alternative schemes are considered in which the derived spectra are not biased to an astrophysical source model. A new method is developed in detail to directly restore incident photon spectra from the detected count histograms. This Spectral Restoration Technique allows an increase in resolution, to a degree dependent on the statistical precision of the data. This is illustrated by numerical simulations. Proportional counter data from Ariel 5 are analysed using this technique. The results obtained for the sources Cas A and the Crab Nebula are consistent with previous analyses and show that increases in resolution of up to a factor three are possible in practice. The source Cyg X-3 is closely examined. Complex spectral variability is found, with the continuum and iron-line emission modulated with the 4.8 hour period of the source. The data suggest multi-component emission in the source. Comparing separate Ariel 5 observations and published data from other experiments, a correlation between the spectral shape and source intensity is evident. The source behaviour is discussed with reference to proposed source models. Data acquired by the low-energy detectors on-board HEAO-1 are analysed using the Spectral Restoration Technique. This treatment explicitly demonstrates the existence of oxygen K-absorption edges in the soft X-ray spectra of the Crab Nebula and Sco X-1. These results are considered with reference to current theories of the interstellar medium. The thesis commences with a review of cosmic X-ray sources and the mechanisms responsible for their spectral signatures, and continues with a discussion of the instruments appropriate for spectral studies in X-ray astronomy.

  7. Observational aspects of outbursting black hole sources: Evolution of spectro-temporal features and X-ray variability

    NASA Astrophysics Data System (ADS)

    Sreehari, H.; Nandi, Anuj; Radhika, D.; Iyer, Nirmal; Mandal, Samir

    2018-02-01

    We report on our attempt to understand the outbursting profile of Galactic Black Hole sources, keeping in mind the evolution of temporal and spectral features during the outburst. We present results of evolution of quasi-periodic oscillations, spectral states and possible connection with jet ejections during the outburst phase. Further, we attempt to connect the observed X-ray variabilities (i.e., `class'/`structured' variabilities, similar to GRS 1915+105) with spectral states of black hole sources. Towards these studies, we consider three black hole sources that have undergone single (XTE J1859+226), a few (IGR J17091-3624) and many (GX 339-4) outbursts since the start of RXTE era. Finally, we model the broadband energy spectra (3-150 keV) of different spectral states using RXTE and NuSTAR observations. Results are discussed in the context of two-component advective flow model, while constraining the mass of the three black hole sources.

  8. How Sources of Sexual Information Relate to Adolescents' Beliefs about Sex

    ERIC Educational Resources Information Center

    Bleakley, Amy; Hennessy, Michael; Fishbein, Martin; Jordan, Amy

    2009-01-01

    Objectives: To examine how sources of sexual information are associated with adolescents' behavioral, normative, and control beliefs about having sexual intercourse using the integrative model of behavior change. Methods: Survey data from a quota sample of 459 youth. Results: The most frequently reported sources were friends, teachers, mothers,…

  9. Open-Source Learning Management Systems: A Predictive Model for Higher Education

    ERIC Educational Resources Information Center

    van Rooij, S. Williams

    2012-01-01

    The present study investigated the role of pedagogical, technical, and institutional profile factors in an institution of higher education's decision to select an open-source learning management system (LMS). Drawing on the results of previous research that measured patterns of deployment of open-source software (OSS) in US higher education and…

  10. Particle swarm optimization and its application in MEG source localization using single time sliced data

    NASA Astrophysics Data System (ADS)

    Lin, Juan; Liu, Chenglian; Guo, Yongning

    2014-10-01

    The estimation of neurally active sources from magnetoencephalography (MEG) data is a critical issue for both clinical neurology and brain function research. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Source depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness in finding the global optimum at different depths in the brain when using the single equivalent current dipole (sECD) model and single time-sliced data. The results show that PSO is an effective global optimization approach for MEG source localization when one dipole is given at different depths.
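
    A generic PSO loop applied to a source-localization misfit looks like the following; the forward model here is a placeholder distance function with a hypothetical dipole position, not a MEG lead field or the authors' objective.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(4)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()        # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Placeholder misfit: distance to a "true" dipole position deep in the head.
true_pos = np.array([0.01, -0.02, 0.05])   # metres, hypothetical
misfit = lambda pos: np.sum((pos - true_pos) ** 2)
print(pso(misfit, [(-0.1, 0.1)] * 3))
```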

  11. The impact of biogenic, anthropogenic, and biomass burning volatile organic compound emissions on regional and seasonal variations in secondary organic aerosol

    NASA Astrophysics Data System (ADS)

    Kelly, Jamie M.; Doherty, Ruth M.; O'Connor, Fiona M.; Mann, Graham W.

    2018-05-01

    The global secondary organic aerosol (SOA) budget is highly uncertain: global annual SOA production rates estimated from global models range over an order of magnitude, and simulated SOA concentrations are underestimated compared to observations. In this study, we use a global composition-climate model (UKCA) with interactive chemistry and aerosol microphysics to provide an in-depth analysis of the impact of each VOC source on the global SOA budget and its seasonality. We further quantify the role of each source on SOA spatial distributions, and evaluate simulated seasonal SOA concentrations against a comprehensive set of observations. The annual global SOA production rates from monoterpene, isoprene, biomass burning, and anthropogenic precursor sources are 19.9, 19.6, 9.5, and 24.6 Tg (SOA) a-1, respectively. When all sources are included, the total SOA production rate is 73.6 Tg (SOA) a-1, which lies within the range of estimates from previous modelling studies. SOA production rates and SOA burdens from biogenic and biomass burning sources peak during Northern Hemisphere (NH) summer. In contrast, the anthropogenic SOA production rate is fairly constant year round. However, the global anthropogenic SOA burden does have a seasonal cycle, which is lowest during NH summer, probably owing to enhanced wet removal. Inclusion of the new SOA sources also accelerates the ageing by condensation of primary organic aerosol (POA), making it more hydrophilic and reducing the POA lifetime. With monoterpene as the only source of SOA, simulated SOA and total organic aerosol (OA) concentrations are underestimated by the model when compared to surface and aircraft measurements. Model agreement with observations improves with all new sources added, primarily due to the inclusion of the anthropogenic source of SOA, although a negative bias remains. A further sensitivity simulation was performed with an increased anthropogenic SOA reaction yield, corresponding to an annual global SOA production rate of 70.0 Tg (SOA) a-1. Whilst simulated SOA concentrations improved relative to observations, they were still underestimated in urban environments and overestimated further downwind and in remote environments. In contrast, the inclusion of SOA from isoprene and biomass burning did not improve model-observation biases substantially, except at one of two tropical locations. However, these findings may reflect the very limited availability of observations to evaluate the model, which are primarily located in the NH mid-latitudes where anthropogenic emissions are high. Our results highlight that, within the current uncertainty limits in SOA sources and reaction yields, a large anthropogenic SOA source results in good agreement with observations over the NH mid-latitudes. However, more observations are needed to establish the importance of biomass burning and biogenic sources of SOA in model agreement with observations.

  12. A stochastic-advective transport model for NAPL dissolution and degradation in non-uniform flows in porous media

    NASA Astrophysics Data System (ADS)

    Chan, T. P.; Govindaraju, Rao S.

    2006-10-01

    Remediation schemes for contaminated sites are often evaluated to assess their potential for source zone mass reduction, or for treatment of the contaminant between the source and a control plane (CP) to achieve regulatory limits. In this study, we utilize a stochastic stream tube model to explain the behavior of breakthrough curves (BTCs) across a CP. At the local scale, mass dissolution at the source is combined with an advection model with first-order decay for the dissolved plume. Field-scale averaging is then employed to account for spatial variation in mass within the source zone and variation in the velocity field. Under the assumption of instantaneous mass transfer from the source to the moving liquid, semi-analytical expressions for the BTC and temporal moments are developed, followed by derivation of expressions for effective velocity, dispersion, and degradation coefficients using the method of moments. It is found that degradation strongly influences the behavior of the moments and the effective parameters. While increased heterogeneity in the velocity field results in increased dispersion, degradation causes the center of mass of the plume to shift to earlier times and reduces the dispersion of the BTC by lowering the concentrations in the tail. Modified definitions of effective parameters are presented for degrading solutes to account for the normalization constant (zeroth moment), which keeps changing with time and distance to the CP. It is shown that anomalous dispersion can result when high degradation rates are combined with wide variation in velocity fluctuations. Implications of the model results for estimating cleanup times and meeting regulatory limits are discussed. Relating mass removal at the source to flux reductions past a control plane is confounded by many factors: increased heterogeneity in velocity fields causes mass fluxes past a control plane to persist; however, aggressive remediation between the source and the CP can reduce these fluxes.
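
    The method-of-moments step can be illustrated directly. Under the usual 1-D advective-dispersive interpretation of a BTC measured at a control plane at distance x_cp, the zeroth moment tracks the (degrading) mass, the first moment gives an effective velocity, and the second central moment gives an effective dispersion. The sketch below is a generic numerical version of those definitions, not the paper's semi-analytical expressions.

```python
import numpy as np

def effective_parameters(t, c, x_cp):
    """Method-of-moments estimates from a BTC c(t) observed at a control
    plane at distance x_cp (uniform time sampling assumed). The zeroth
    moment is the normalization constant that decays for a degrading
    solute."""
    dt = t[1] - t[0]
    m0 = np.sum(c) * dt                            # zeroth moment (mass)
    mu1 = np.sum(t * c) * dt / m0                  # mean arrival time
    mu2c = np.sum((t - mu1) ** 2 * c) * dt / m0    # central second moment
    v_eff = x_cp / mu1                             # effective velocity
    d_eff = 0.5 * v_eff ** 3 * mu2c / x_cp         # effective dispersion
    return m0, v_eff, d_eff
```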

  13. Source parameters of the 2013 Lushan, Sichuan, Ms7.0 earthquake and estimation of the near-fault strong ground motion

    NASA Astrophysics Data System (ADS)

    Meng, L.; Zhou, L.; Liu, J.

    2013-12-01

    The April 20, 2013 Ms 7.0 earthquake in Lushan city, Sichuan province of China occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The Lushan earthquake caused great loss of property and 196 deaths. The maximum intensity reached VIII to IX at Boxing and Lushan city, which are located in the meizoseismal area. In this study, we analyzed the dynamic source process, calculated the source spectral parameters, and estimated the near-fault strong ground motion based on Brune's circle model. A dynamical composite source model (DCSM) was then developed to simulate the near-fault strong ground motion, with associated fault rupture properties, at Boxing and Lushan city. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, in contrast to the overshoot behavior of the Wenchuan earthquake. Based on the simulated near-fault strong ground motion, we describe the intensity distribution of the Lushan earthquake field. The simulated maximum intensity is IX, and the region of intensity VII and above covers almost 16,000 km2, consistent with the observed intensity published online by the China Earthquake Administration (CEA) on April 25. The empirical relationships and numerical modeling developed in this study have useful applications in strong ground motion prediction and intensity estimation for earthquake rescue purposes. Keywords: Lushan Ms 7.0 earthquake; near-fault strong ground motion; DCSM; simulated intensity
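
    For reference, Brune's circle model reduces to the familiar omega-squared source spectrum; a minimal sketch in common cgs conventions follows. The assumed shear-wave speed and the example parameter values are illustrative, not taken from the paper, and the DCSM simulation itself is far more elaborate.

```python
import numpy as np

def brune_displacement_spectrum(f, m0, stress_drop, beta=3.5):
    """Omega-squared (Brune) displacement source spectrum.
    m0 in dyne-cm, stress_drop in bars, beta in km/s, f in Hz."""
    fc = 4.9e6 * beta * (stress_drop / m0) ** (1.0 / 3.0)  # corner frequency
    return m0 / (1.0 + (f / fc) ** 2), fc

# e.g. a moment of ~10^26 dyne-cm (roughly Mw 6.6) with a 50-bar stress drop
spec, fc = brune_displacement_spectrum(np.logspace(-2, 1, 50), 1.1e26, 50.0)
print(fc)   # corner frequency of order 0.1 Hz for these invented values
```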

  14. A refined 2010-based VOC emission inventory and its improvement on modeling regional ozone in the Pearl River Delta Region, China.

    PubMed

    Yin, Shasha; Zheng, Junyu; Lu, Qing; Yuan, Zibing; Huang, Zhijiong; Zhong, Liuju; Lin, Hui

    2015-05-01

    Accurate and gridded VOC emission inventories are important for improving regional air quality model performance. In this study, a four-level VOC emission source categorization system was proposed. A 2010-based gridded Pearl River Delta (PRD) regional VOC emission inventory was developed with more comprehensive source coverage, the latest emission factors, and updated activity data. The total anthropogenic VOC emission was estimated to be about 117.4 × 10⁴ t, of which on-road mobile sources made the largest contribution, followed by industrial solvent use and industrial process sources. Within the industrial solvent use source, furniture manufacturing and shoemaking were the major VOC emission contributors. The spatial surrogates of VOC emissions were updated for major VOC sources such as industrial sectors and gas stations. Subsector-based temporal characteristics were investigated and their temporal variations were characterized. The impacts of the updated VOC emission estimates and spatial surrogates were evaluated by modeling O₃ concentrations in the PRD region in July and October of 2010. The results indicated that both the updated emission estimates and the updated spatial allocations can effectively reduce model bias in O₃ simulation. Further efforts should be made on the refinement of source classification, comprehensive collection of activity data, and spatial-temporal surrogates in order to reduce uncertainty in the emission inventory and improve model performance.

  15. Systematic Uncertainties in High-Energy Hadronic Interaction Models

    NASA Astrophysics Data System (ADS)

    Zha, M.; Knapp, J.; Ostapchenko, S.

    2003-07-01

    Hadronic interaction models for cosmic ray energies are uncertain since our knowledge of hadronic interactions is extrapolated from accelerator experiments at much lower energies. At present most high-energy models are based on the Gribov-Regge theory of multi-Pomeron exchange, which provides a theoretical framework to evaluate cross-sections and particle production. While experimental data constrain some of the model parameters, others are not well determined and are therefore a source of systematic uncertainties. In this paper we evaluate the variation of results obtained with the QGSJET model when modifying parameters relating to three major sources of uncertainty: the form of the parton structure function, the role of diffractive interactions, and the string hadronisation. Results on inelastic cross-sections, on secondary particle production and on the air shower development are discussed.

  16. Quantitative estimation of source complexity in tsunami-source inversion

    NASA Astrophysics Data System (ADS)

    Dettmer, Jan; Cummins, Phil R.; Hawkins, Rhys; Jakir Hossen, M.

    2016-04-01

    This work analyses tsunami waveforms to infer the spatiotemporal evolution of sea-surface displacement (the tsunami source) caused by earthquakes or other sources. Since the method considers sea-surface displacement directly, no assumptions about the fault or seafloor deformation are required. While this approach has no ability to study seismic aspects of rupture, it greatly simplifies the tsunami source estimation, making it much less dependent on subjective fault and deformation assumptions. This results in a more accurate sea-surface displacement evolution in the source region. The spatial discretization is by wavelet decomposition represented by a trans-dimensional (trans-D) Bayesian tree structure. Wavelet coefficients are sampled by a reversible-jump algorithm, and additional coefficients are included only when required by the data. Therefore, source complexity is consistent with data information (parsimonious) and the method can adapt locally in both time and space. Since the source complexity is unknown and adapts locally, no regularization is required, resulting in more meaningful displacement magnitudes. By estimating displacement uncertainties in a Bayesian framework we can study the effect of parametrization choice on the source estimate. Uncertainty arises from observation errors and limitations in the parametrization to fully explain the observations. As a result, parametrization choice is closely related to uncertainty estimation and profoundly affects inversion results; parametrization selection should therefore be included in the inference process. Our inversion method is based on Bayesian model selection, which includes the choice of parametrization in the inference and makes it data driven. A trans-D model for the spatio-temporal discretization is applied here to include model selection naturally and efficiently in the inference by sampling probabilistically over parameterizations. The trans-D process results in better uncertainty estimates since the parametrization adapts parsimoniously (in both time and space) according to the local data resolving power, and the uncertainty about the parametrization choice is included in the uncertainty estimates. We apply the method to the tsunami waveforms recorded for the great 2011 Japan tsunami. All data were recorded on high-quality sensors (ocean-bottom pressure sensors, GPS gauges, and DART buoys). The sea-surface Green's functions are computed by JAGURS and include linear dispersion effects. By treating the noise level at each gauge as unknown, individual gauge contributions to the source estimate are appropriately and objectively weighted. The results show previously unreported detail of the source, quantify uncertainty spatially, and produce excellent data fits. The source estimate shows an elongated peak trench-ward of the hypocentre that closely follows the trench, indicating significant sea-floor deformation near the trench. Also notable is a bi-modal (negative to positive) displacement feature in the northern part of the source near the trench. The feature has ~2 m amplitude and is clearly resolved by the data with low uncertainties.

  17. The Linearized Bregman Method for Frugal Full-waveform Inversion with Compressive Sensing and Sparsity-promoting

    NASA Astrophysics Data System (ADS)

    Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong

    2018-03-01

    Full-waveform inversion (FWI) reconstructs subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) mitigates the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that, as long as the difference between the initial and true models permits a sparse representation, ℓ1-norm-constrained model updates suppress subsampling-related artifacts. However, the spectral-projected-gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt the linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI), because of the efficiency and simplicity of LB in the framework of ℓ1-norm-constrained optimization and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
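
    The LB iteration that replaces SPGℓ1 here is only a few lines: accumulate a gradient step on the data residual, then soft-threshold. The sketch below applies it to a toy compressive-sensing problem (all sizes and parameters invented); in SPFWI the operator A would stand for the Jacobian of the wave-equation modeling and x for the model update in a sparsifying basis.

```python
import numpy as np

def linearized_bregman(A, b, mu, delta, n_iters=3000):
    """Linearized Bregman iteration (Yin et al., 2008) for
    min mu*||x||_1 + (1/(2*delta))*||x||^2  subject to  A x = b."""
    shrink = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    x = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(n_iters):
        v += A.T @ (b - A @ x)      # gradient step on the data residual
        x = delta * shrink(v, mu)   # soft threshold promotes sparsity
    return x

# toy compressive-sensing check: a 5-sparse vector from 40 of 100 coordinates
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 3.0 * rng.normal(size=5)
b = A @ x_true
x_rec = linearized_bregman(A, b, mu=5.0,
                           delta=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```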

  18. Tools for Virtual Collaboration Designed for High Resolution Hydrologic Research with Continental-Scale Data Support

    NASA Astrophysics Data System (ADS)

    Duffy, Christopher; Leonard, Lorne; Shi, Yuning; Bhatt, Gopal; Hanson, Paul; Gil, Yolanda; Yu, Xuan

    2015-04-01

    Using a series of recent examples and papers we explore progress and potential in virtual (cyber-) collaboration inspired by access to high-resolution, harmonized public-sector data at continental scales [1]. The first example describes 7 meso-scale catchments in Pennsylvania, USA, where the watershed is forced by climate reanalysis and IPCC (Intergovernmental Panel on Climate Change) future climate scenarios. We show how existing public-sector data and community models are currently able to resolve fine-scale eco-hydrologic processes regarding wetland response to climate change [2]. The results reveal that regional climate change is only part of the story, with large variations in flood and drought response associated with differences in terrain, physiography, landuse and/or hydrogeology. The importance of community-driven virtual testbeds is demonstrated in the context of Critical Zone Observatories, where earth scientists from around the world are organizing hydro-geophysical data and model results to explore new processes that couple hydrologic models with land-atmosphere interaction, biogeochemical weathering, the carbon-nitrogen cycle, landscape evolution and ecosystem services [3][4]. Critical Zone cyber-research demonstrates how data-driven model development requires a flexible computational structure where process modules are relatively easy to incorporate and where new data structures can be implemented [5]. From the perspective of "Big Data", the paper points out that extrapolating results from virtual observatories to catchments at continental scales will require centralized or cloud-based cyberinfrastructure as a necessary condition for effectively sharing petabytes of data and model results [6]. Finally we outline how innovative cyber-science is supporting earth-science learning, sharing and exploration through the use of on-line tools where hydrologists and limnologists share data and models for simulating the coupled impacts of catchment hydrology on lake eco-hydrology (NSF-INSPIRE, IIS1344272). The research attempts to use a virtual environment (www.organicdatascience.org) to break down disciplinary barriers and support emergent communities of science. [1] Leonard and Duffy, 2013, Environmental Modelling & Software; [2] Yu et al, 2014, Computers in Geoscience; [3] Duffy et al, 2014, Procedia Earth and Planetary Science; [4] Shi et al, 2014, Journal of Hydrometeorology; [5] Bhatt et al, 2014, Environmental Modelling & Software; [6] Leonard and Duffy, 2014, Environmental Modelling & Software.

  19. Benefit-cost estimation for alternative drinking water maximum contaminant levels

    NASA Astrophysics Data System (ADS)

    Gurian, Patrick L.; Small, Mitchell J.; Lockwood, John R.; Schervish, Mark J.

    2001-08-01

    A simulation model for estimating compliance behavior and resulting costs at U.S. Community Water Suppliers is developed and applied to the evaluation of a more stringent maximum contaminant level (MCL) for arsenic. Probability distributions of source water arsenic concentrations are simulated using a statistical model conditioned on system location (state) and source water type (surface water or groundwater). This model is fit to two recent national surveys of source waters, then applied with the model explanatory variables for the population of U.S. Community Water Suppliers. Existing treatment types and arsenic removal efficiencies are also simulated. Utilities with finished water arsenic concentrations above the proposed MCL are assumed to select the least cost option compatible with their existing treatment from among 21 available compliance strategies and processes for meeting the standard. Estimated costs and arsenic exposure reductions at individual suppliers are aggregated to estimate the national compliance cost, arsenic exposure reduction, and resulting bladder cancer risk reduction. Uncertainties in the estimates are characterized based on uncertainties in the occurrence model parameters, existing treatment types, treatment removal efficiencies, costs, and the bladder cancer dose-response function for arsenic.

  20. Time-Dependent Moment Tensors of the First Four Source Physics Experiments (SPE) Explosions

    NASA Astrophysics Data System (ADS)

    Yang, X.

    2015-12-01

    We use mainly vertical-component geophone data within 2 km of the epicenter to invert for time-dependent moment tensors of the first four SPE explosions: SPE-1, SPE-2, SPE-3 and SPE-4Prime. We employ a one-dimensional (1D) velocity model developed from P- and Rg-wave travel times for Green's function calculations. The attenuation structure of the model is developed from P- and Rg-wave amplitudes. We select data for the inversion on the criterion that they show travel times and amplitude behavior consistent with those predicted by the 1D model. Due to the limited azimuthal coverage of the sources and the mostly vertical-component-only nature of the dataset, only the long-period, diagonal components of the moment tensors are well constrained. Nevertheless, the moment tensors, particularly their isotropic components, provide reasonable estimates of the long-period source amplitudes as well as estimates of corner frequencies, albeit with larger uncertainties. The estimated corner frequencies are nonetheless consistent with estimates from ratios of seismogram spectra from different explosions. These long-period source amplitudes and corner frequencies cannot be fit by classical P-wave explosion source models. The results motivate the development of new P-wave source models suitable for these chemical explosions. To that end, we fit the inverted moment-tensor spectra by modifying the classical explosion model using regressions of estimated source parameters. Although the number of data points used in the regression is small, the approach suggests a path for new-model development as more data are collected.
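
    In its linear, time-domain form, the moment-tensor inversion is a least-squares fit of six independent tensor components; a damped sketch follows. G and d are assumed inputs (Green's-function derivatives and stacked data), and the singular values of G make explicit which components a vertical-only, azimuthally limited network can actually resolve.

```python
import numpy as np

def invert_moment_tensor(G, d, damp=1e-3):
    """Damped least-squares fit of d ~ G @ m, where the six columns of G
    hold Green's-function derivatives for (Mxx, Myy, Mzz, Mxy, Mxz, Myz)
    and d stacks the observed samples. Illustrative only: building the
    velocity model and selecting data dominate the real problem."""
    gtg = G.T @ G
    m = np.linalg.solve(gtg + damp * (np.trace(gtg) / 6.0) * np.eye(6),
                        G.T @ d)
    # Small singular values flag components the geometry cannot resolve
    # (here: the off-diagonal terms, given vertical-only, limited-azimuth
    # data, as the abstract notes).
    singular_values = np.linalg.svd(G, compute_uv=False)
    return m, singular_values
```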

  1. A comparison of conscious and automatic memory processes for picture and word stimuli: a process dissociation analysis.

    PubMed

    McBride, Dawn M; Anne Dosher, Barbara

    2002-09-01

    Four experiments were conducted to evaluate explanations of picture superiority effects previously found for several tasks. In a process dissociation procedure (Jacoby, 1991) with word stem completion, picture fragment completion, and category production tasks, conscious and automatic memory processes were compared for studied pictures and words with an independent retrieval model and a generate-source model. The predictions of a transfer appropriate processing account of picture superiority were tested and validated in "process pure" latent measures of conscious and unconscious, or automatic and source, memory processes. Results from both model fits verified that pictures had a conceptual (conscious/source) processing advantage over words for all tasks. The effects of perceptual (automatic/word generation) compatibility depended on task type, with pictorial tasks favoring pictures and linguistic tasks favoring words. Results show support for an explanation of the picture superiority effect that involves an interaction of encoding and retrieval processes.
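
    The process dissociation logic itself is compact: with inclusion performance I = C + (1 - C)A and exclusion performance E = (1 - C)A, the controlled (C) and automatic (A) estimates follow algebraically. A minimal sketch; the paper's model fits (independent retrieval and generate-source formulations) are more elaborate.

```python
def process_dissociation(p_inclusion, p_exclusion):
    """Jacoby (1991): inclusion = C + (1 - C) * A, exclusion = (1 - C) * A,
    so C and A follow algebraically from the two observed rates."""
    c = p_inclusion - p_exclusion                    # controlled (conscious)
    a = p_exclusion / (1.0 - c) if c < 1.0 else float("nan")  # automatic
    return c, a

# e.g. inclusion 0.70 and exclusion 0.30 give C = 0.40, A = 0.50
print(process_dissociation(0.70, 0.30))
```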

  2. Voice tracking and spoken word recognition in the presence of other voices

    NASA Astrophysics Data System (ADS)

    Litong-Palima, Marisciel; Violanda, Renante; Saloma, Caesar

    2004-12-01

    We study the human hearing process by modeling the hair cell as a thresholded Hopf bifurcator and compare our calculations with experimental results involving human subjects in two different multi-source listening tasks: voice tracking and spoken-word recognition. In the model, we observed noise suppression by destructive interference between noise sources, which weakens the effective noise strength acting on the hair cell. Different success-rate characteristics were observed for the two tasks. Hair cell performance at low threshold levels agrees well with results from voice-tracking experiments, while those of word-recognition experiments are consistent with a linear model of the hearing process. The ability of humans to track a target voice is robust against cross-talk interference, unlike word-recognition performance, which deteriorates quickly as the number of uncorrelated noise sources in the environment grows, a response behavior associated with linear systems.

  3. Kinematic and Dynamic Source Rupture Scenario for Potential Megathrust Event along the Southernmost Ryukyu Trench

    NASA Astrophysics Data System (ADS)

    Lin, T. C.; Hu, F.; Chen, X.; Lee, S. J.; Hung, S. H.

    2017-12-01

    Kinematic source models are widely used for earthquake simulation because of their simplicity and ease of application. Dynamic source models, on the other hand, are more complex but important tools that can help us understand the physics of earthquake initiation, propagation, and healing. In this study, we focus on the southernmost Ryukyu Trench, which is extremely close to northern Taiwan. Interseismic GPS data in northeast Taiwan show a pattern of strain accumulation, which suggests that the maximum magnitude of a potential future earthquake in this area is probably about 8.7. We develop dynamic rupture models for hazard estimation of the potential megathrust event based on kinematic rupture scenarios inverted from the interseismic GPS data. In addition, several kinematic source rupture scenarios with different characterized slip patterns are considered to better constrain the dynamic rupture process. The initial stresses and friction properties are tested using a trial-and-error method, together with the plate coupling and tectonic features. An analysis of the dynamic stress field associated with the slip prescribed in the kinematic models can indicate possible inconsistencies with the physics of faulting. Furthermore, the dynamic and kinematic rupture models are used to simulate ground shaking with a 3-D spectral-element method. We analyze ShakeMap and ShakeMovie outputs from the simulations to evaluate the influence of the different source models across the island. A dispersive tsunami-propagation simulation is also carried out to evaluate the maximum tsunami wave height along the coastal areas of Taiwan due to the coseismic seafloor deformation of the different source models. The results of this numerical simulation study can provide physically based megathrust earthquake scenarios for emergency response agencies to act on before the really big one happens.

  4. Cosmic ray injection spectrum at the galactic sources

    NASA Astrophysics Data System (ADS)

    Lagutin, Anatoly; Tyumentsev, Alexander; Volkov, Nikolay

    The spectra of cosmic rays measured at Earth are different from their source spectra. A key to understanding this difference, which is crucial for solving the problem of cosmic-ray origin, is determining how cosmic-ray (CR) particles propagate through the turbulent interstellar medium (ISM). If the medium is quasi-homogeneous, the propagation process can be described by a normal diffusion model. However, during the last few decades much evidence, both from theory and observations, of the existence of multiscale structures in the Galaxy has been found. Filaments, shells, and clouds are entities widely spread in the ISM. In such a highly non-homogeneous (fractal-like) ISM the normal diffusion model is no longer valid. Generalization of this model leads to what is known as "anomalous diffusion". The main goal of the report is to retrieve the cosmic ray injection spectrum at the galactic sources in the framework of the anomalous diffusion (AD) model. The anomaly in this model results from large free paths ("Levy flights") of particles between galactic inhomogeneities. In order to evaluate the CR spectrum at the sources, we carried out a new calculation of the CR spectra at Earth. An AD equation in terms of fractional derivatives has been used to describe CR propagation from nearby (r ≤ 1 kpc) young (t ≤ 1 Myr) sources and multiple old distant (r > 1 kpc) sources. The assessment of the key model parameters has been based on the results of particle diffusion in cosmic and laboratory plasmas. We show that in the framework of the anomalous diffusion model the locally observed basic features of the cosmic rays (the difference between the spectral exponents of protons, He and other nuclei, the "knee" problem, the positron to electron ratio) can be explained if the injection spectrum at the main galactic sources of cosmic rays has spectral exponent p ≈ 2.85. The authors acknowledge support from the Russian Foundation for Basic Research, grant No. 14-02-31524.

  5. The Human Exposure Model (HEM): A Tool to Support Rapid ...

    EPA Pesticide Factsheets

    The US EPA is developing an open and publicly available software program called the Human Exposure Model (HEM) to provide near-field exposure information for Life Cycle Impact Assessments (LCIAs). Historically, LCIAs have often omitted impacts from near-field sources of exposure. The use of consumer products often results in near-field exposures (exposures that occur directly from the use of a product) that are larger than environmentally mediated exposures (i.e., far-field sources)1,2. Failure to consider near-field exposures could result in biases in LCIA-based determinations of the relative sustainability of consumer products. HEM is designed to provide this information. Characterizing near-field sources of chemical exposures presents a challenge to LCIA practitioners. Unlike far-field sources, where multimedia mass balance models have been used to determine human exposure, near-field sources require product-specific models of human exposure and considerable information on product use and product composition. Such information is difficult and time-consuming to gather and curate. The HEM software will characterize the distribution of doses and product intake fractions2 across populations of product users and bystanders, allowing for differentiation by various demographic characteristics. The tool incorporates a newly developed database of the composition of more than 17,000 products, data on physical and chemical properties for more than 2,000 chemicals, and mo

  6. Identification of Geologic and Anthropogenic Sources of Phosphorus to Streams in California and Portions of Adjacent States, U.S.A., Using SPARROW Modeling

    NASA Astrophysics Data System (ADS)

    Domagalski, J. L.

    2013-12-01

    The SPARROW (Spatially Referenced Regressions On Watershed Attributes) model allows for the simulation of nutrient transport at un-gauged catchments on a regional scale. The model was used to understand natural and anthropogenic factors affecting phosphorus transport in developed, undeveloped, and mixed watersheds. The SPARROW model is a statistical tool that allows for mass balance calculation of constituent sources, transport, and aquatic decay based upon a calibration of a subset of stream networks, where concentrations and discharge have been measured. Calibration is accomplished using potential sources for a given year and may include fertilizer, geological background (based on bed-sediment samples and aggregated with geochemical map units), point source discharge, and land use categories. NHD Plus version 2 was used to model the hydrologic system. Land to water transport variables tested were precipitation, permeability, soil type, tile drains, and irrigation. For this study area, point sources, cultivated land, and geological background are significant phosphorus sources to streams. Precipitation and clay content of soil are significant land to water transport variables and various stream sizes show significance with respect to aquatic decay. Specific rock types result in different levels of phosphorus loading and watershed yield. Some important geological sources are volcanic rocks (andesite and basalt), granodiorite, glacial deposits, and Mesozoic to Cenozoic marine deposits. Marine sediments vary in their phosphorus content, but are responsible for some of the highest natural phosphorus yields, especially along the Central and Southern California coast. The Miocene Monterey Formation was found to be an especially important local source in southern California. In contrast, mixed metamorphic and igneous assemblages such as argillites, peridotite, and shales of the Trinity Mountains of northern California result in some of the lowest phosphorus yields. The agriculturally productive Central Valley of California has a low amount of background phosphorus in spite of inputs from streams draining upland areas. Many years of intensive agriculture may be responsible for the decrease of soil phosphorus in that area. Watersheds with significant background sources of phosphorus and large amounts of cultivated land had some of the highest per hectare yields. Seven different stream systems important for water management, or to describe transport processes, were investigated in detail for downstream changes in sources and loads. For example, the Klamath River (Oregon and California) has intensive agriculture and andesite-derived phosphorus in the upper reach. The proportion of agricultural-derived phosphorus decreases as the river flows into California before discharge to the ocean. The river flows through at least three different types of geological background sources from high to intermediate to very low. Knowledge of the role of natural sources in developed watersheds is critical for developing nutrient management strategies and these model results will have applicability for the establishment of realistic nutrient criteria.

  7. [Groundwater organic pollution source identification technology system research and application].

    PubMed

    Wang, Xiao-Hong; Wei, Jia-Hua; Cheng, Zhi-Neng; Liu, Pei-Bin; Ji, Yi-Qun; Zhang, Gan

    2013-02-01

    Groundwater organic pollution has been found at a large number of locations, and such pollution spreads widely once it sets in, making it hard to identify and control. The key to controlling and managing groundwater pollution is controlling the sources of pollution and reducing the danger to groundwater. This paper takes typical contaminated sites as an example, describes source identification studies, establishes a groundwater organic pollution source identification technology system, and applies the system to the identification of typical contaminated sites. First, the geological and hydrogeological conditions of the contaminated sites were established; the characteristic pollutant of the sites, carbon tetrachloride, was determined from a large body of groundwater analysis and test data; then a solute transport model of the contaminated sites was built and compound-specific isotope techniques were applied. Finally, using the groundwater solute transport model and compound-specific isotope technology, the distribution of organic pollution sources and the pollution status of the typical site were determined; the identified potential sources of pollution were investigated and soil samples were taken for analysis. The results for the two identified historical pollution sources and the pollutant concentration distribution proved reliable, providing a basis for the treatment of groundwater pollution.

  8. Optimizing laser produced plasmas for efficient extreme ultraviolet and soft X-ray light sources

    NASA Astrophysics Data System (ADS)

    Sizyuk, Tatyana; Hassanein, Ahmed

    2014-08-01

    Photon sources produced by laser beams with moderate laser intensities, up to 10¹⁴ W/cm², are being developed for many industrial applications. The performance requirements of high-volume manufacturing devices necessitate extensive experimental research supported by theoretical plasma analysis and modeling predictions. We simulated laser produced plasma sources currently being developed for several applications: extreme ultraviolet lithography using a 13.5 nm ± 1% bandwidth, possible beyond extreme ultraviolet lithography using 6.x nm wavelengths, and water-window microscopy utilizing 2.48 nm (Ly-α) and 2.88 nm (He-α) emission. We comprehensively modeled plasma evolution from solid/liquid tin, gadolinium, and nitrogen targets as three promising materials for the above sources, respectively. Analysis of the plasma characteristics during the entire course of plasma evolution showed the dependence of the source conversion efficiency (CE), i.e., laser energy converted to photons at the desired wavelength, on the plasma electron density gradient. Our results showed that laser intensities which produce hotter plasma than the optimum emission temperatures can increase CE for all considered sources; this gain, however, is limited by reabsorption processes around the main emission region, a restriction that is especially relevant for the 6.x nm sources.

  9. Inhalation exposure to cleaning products: application of a two-zone model.

    PubMed

    Earnest, C Matt; Corsi, Richard L

    2013-01-01

    In this study, modifications were made to previously applied two-zone models to address important factors that can affect exposures during cleaning tasks. Specifically, we expand on previous applications of the two-zone model by (1) introducing the source in discrete elements (source-cells) as opposed to a complete instantaneous release, (2) placing source cells in both the inner (near person) and outer zones concurrently, (3) treating each source cell as an independent mixture of multiple constituents, and (4) tracking the time-varying liquid concentration and emission rate of each constituent in each source cell. Three experiments were performed in an environmentally controlled chamber with a thermal mannequin and a simplified pure chemical source to simulate emissions from a cleaning product. Gas phase concentration measurements were taken in the bulk air and in the breathing zone of the mannequin to evaluate the model. The mean ratio of the integrated concentration in the mannequin's breathing zone to the concentration in the outer zone was 4.3 (standard deviation, σ = 1.6). The mean ratio of measured concentration in the breathing zone to predicted concentrations in the inner zone was 0.81 (σ = 0.16). Intake fractions ranged from 1.9 × 10(-3) to 2.7 × 10(-3). Model results reasonably predict those of previous exposure monitoring studies and indicate the inadequacy of well-mixed single-zone model applications for some but not all cleaning events.
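
    The underlying two-zone mass balance (before the paper's source-cell extensions) is a pair of coupled ODEs: the near-field zone receives the emission G(t) and exchanges air with the far field at rate β, and the far field is ventilated at rate Q. A sketch with invented parameter values; the discrete source cells and per-cell liquid tracking described above are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

V_N, V_F = 1.0, 30.0          # near- and far-zone volumes, m^3 (invented)
beta, Q = 5.0, 15.0           # inter-zone airflow and ventilation, m^3/min
G = lambda t: 100.0 if t < 10.0 else 0.0   # near-field emission, mg/min

def rhs(t, C):
    C_N, C_F = C
    dC_N = (G(t) + beta * (C_F - C_N)) / V_N
    dC_F = (beta * (C_N - C_F) - Q * C_F) / V_F
    return [dC_N, dC_F]

sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0],
                t_eval=np.linspace(0.0, 60.0, 241), max_step=0.5)
# the near/far peak ratio is the inner/outer contrast this model captures
print(sol.y[0].max() / sol.y[1].max())
```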

  10. Estimation of the Characterized Tsunami Source Model considering the Complicated Shape of Tsunami Source by Using the observed waveforms of GPS Buoys in the Nankai Trough

    NASA Astrophysics Data System (ADS)

    Seto, S.; Takahashi, T.

    2017-12-01

    In the 2011 Tohoku earthquake tsunami disaster, delay in understanding the damage situation increased the human toll. To solve this problem, it is important to identify the severely damaged areas quickly. Tsunami numerical modeling is useful for estimating damage, and the accuracy of the simulation depends on the tsunami source. Seto and Takahashi (2017) proposed a method to estimate a characterized tsunami source model using the limited observed data of GPS buoys. The model consists of a large slip zone (LSZ), a super-large slip zone (SLSZ) and a background rupture zone (BZ), as the Cabinet Office, Government of Japan (hereafter COGJ) reported after the Tohoku tsunami. At the beginning of this method, a rectangular fault model is assumed based on the seismic magnitude and hypocenter reported right after an earthquake. Using the fault model, tsunami propagation is simulated numerically, and the fault model is improved by repeatedly comparing the computed data with the observed data. In the comparison, the correlation coefficient and regression coefficient between the observed and computed tsunami wave profiles are used as indexes. The repetition is conducted to bring the two coefficients close to 1.0, which improves the precision of the fault model. A noted limitation, however, was that the model could not represent a complicated tsunami source shape. In this study, we propose an improved model that addresses such complicated shapes. COGJ (2012) divided the possible tsunami source region in the Nankai Trough into several thousand small faults, and we use these small faults to estimate the targeted tsunami source, which allows complicated source geometries to be represented. The BZ is estimated first, and the LSZ and SLSZ are estimated next, as in the previous model. The proposed model using GPS buoy data was applied to a tsunami scenario in the Nankai Trough. As a result, the locations of the LSZ and SLSZ within the BZ were estimated well.

  11. Magnetar Bursts

    NASA Technical Reports Server (NTRS)

    Kouveliotou, Chryssa

    2014-01-01

    The Fermi/Gamma-ray Burst Monitor (GBM) was launched in June 2008. During the last five years the instrument has observed several hundred bursts from 8 confirmed magnetars and 19 events from unconfirmed sources. I will discuss the results of the GBM magnetar burst catalog, expand on the different properties of this diverse source population, and compare these results with the bursting activity of past sources. I will then conclude with thoughts on how these properties fit magnetar theoretical models.

  12. Validation and calibration of structural models that combine information from multiple sources.

    PubMed

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  13. Pulling it all together: the self-consistent distribution of neutral tori in Saturn's Magnetosphere based on all Cassini observations

    NASA Astrophysics Data System (ADS)

    Smith, H. T.; Richardson, J. D.

    2017-12-01

    Saturn's magnetosphere is unique in that the plumes of the small icy moon Enceladus serve as the primary source of heavy particles in Saturn's magnetosphere. The resulting co-orbiting neutral particles interact with ions, electrons, photons and other neutral particles to generate separate H2O, OH and O tori. Characterization of these toroidal distributions is essential for understanding Saturn's magnetospheric sources, composition and dynamics. Unfortunately, limited direct observations of these features are available, so modeling is required. A significant modeling challenge is ensuring that the plasma and neutral particle populations are not simply input conditions but provide feedback to each other (i.e., are self-consistent). Jurac and Richardson (2005) developed such a self-consistent model; however, that research was performed prior to the return of Cassini data. In a similar fashion, we have coupled a 3-D neutral particle model (Smith et al. 2004, 2005, 2006, 2007, 2009, 2010) with a plasma transport model (Richardson 1998; Richardson & Jurac 2004) to develop a self-consistent model constrained by all available Cassini observations and current findings on Saturn's magnetosphere and the Enceladus plume source, resulting in much more accurate neutral particle distributions. We present a new self-consistent model of the distribution of the Enceladus-generated neutral tori that is validated against all available observations. We also discuss the implications for the source rate and its variability.

  14. Effluent trading in river systems through stochastic decision-making process: a case study.

    PubMed

    Zolfagharipoor, Mohammad Amin; Ahmadi, Azadeh

    2017-09-01

    The objective of this paper is to provide an efficient framework for effluent trading in river systems. The proposed framework consists of two decision-making models, one pessimistic and one optimistic, to increase the executability of river water quality trading programs. The models used for this purpose are (1) stochastic fallback bargaining (SFB) to reach an agreement among wastewater dischargers and (2) stochastic multi-criteria decision-making (SMCDM) to determine the optimal treatment strategy. The Monte-Carlo simulation method is used to incorporate uncertainty into the analysis. This uncertainty arises from the stochastic nature of, and the errors in, the calculation of wastewater treatment costs. The results of a river water quality simulation model are used as inputs to the models. The proposed models are applied in a case study on the Zarjoub River in northern Iran to determine the best solution for pollution load allocation. The best treatment alternatives selected by each model are imported, as the initial pollution discharge permits, into an optimization model developed for trading of pollution discharge permits among pollutant sources. The results show that the SFB-based water pollution trading approach reduces costs by US$ 14,834 while providing a relative consensus among pollutant sources. Meanwhile, the SMCDM-based approach reduces costs by US$ 218,852, but it is less acceptable to pollutant sources. Therefore, giving due attention to stability, or in other words the acceptability of pollution trading programs to all pollutant sources, is an essential element of their success.

  15. Associations of Mortality with Long-Term Exposures to Fine and Ultrafine Particles, Species and Sources: Results from the California Teachers Study Cohort

    PubMed Central

    Hu, Jianlin; Goldberg, Debbie; Reynolds, Peggy; Hertz, Andrew; Bernstein, Leslie; Kleeman, Michael J.

    2015-01-01

    Background Although several cohort studies report associations between chronic exposure to fine particles (PM2.5) and mortality, few have studied the effects of chronic exposure to ultrafine (UF) particles. In addition, few studies have estimated the effects of the constituents of either PM2.5 or UF particles. Methods We used a statewide cohort of > 100,000 women from the California Teachers Study who were followed from 2001 through 2007. Exposure data at the residential level were provided by a chemical transport model that computed pollutant concentrations from > 900 sources in California. Besides particle mass, monthly concentrations of 11 species and 8 sources or primary particles were generated at 4-km grids. We used a Cox proportional hazards model to estimate the association between the pollutants and all-cause, cardiovascular, ischemic heart disease (IHD), and respiratory mortality. Results We observed statistically significant (p < 0.05) associations of IHD with PM2.5 mass, nitrate, elemental carbon (EC), copper (Cu), and secondary organics and the sources gas- and diesel-fueled vehicles, meat cooking, and high-sulfur fuel combustion. The hazard ratio estimate of 1.19 (95% CI: 1.08, 1.31) for IHD in association with a 10-μg/m3 increase in PM2.5 is consistent with findings from the American Cancer Society cohort. We also observed significant positive associations between IHD and several UF components including EC, Cu, metals, and mobile sources. Conclusions Using an emissions-based model with a 4-km spatial scale, we observed significant positive associations between IHD mortality and both fine and ultrafine particle species and sources. Our results suggest that the exposure model effectively measured local exposures and facilitated the examination of the relative toxicity of particle species. Citation Ostro B, Hu J, Goldberg D, Reynolds P, Hertz A, Bernstein L, Kleeman MJ. 2015. Associations of mortality with long-term exposures to fine and ultrafine particles, species and sources: results from the California Teachers Study cohort. Environ Health Perspect 123:549–556; http://dx.doi.org/10.1289/ehp.1408565 PMID:25633926
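
    The survival-analysis step can be sketched with the lifelines implementation of the Cox proportional hazards model. The bundled Rossi dataset stands in for the cohort data, which are not public; in the study, the duration, event, and covariate columns would be follow-up time, IHD death, and residential exposure estimates.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi  # stand-in for the cohort data

df = load_rossi()   # in the study: one row per subject, exposure columns
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
print(cph.hazard_ratios_)  # e.g. the paper reports HR 1.19 per 10 ug/m3 PM2.5
```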

  16. Solute source depletion control of forward and back diffusion through low-permeability zones

    NASA Astrophysics Data System (ADS)

    Yang, Minjune; Annable, Michael D.; Jawitz, James W.

    2016-10-01

    Solute diffusive exchange between low-permeability aquitards and high-permeability aquifers acts as a significant mediator of long-term contaminant fate. Aquifer contaminants diffuse into aquitards, but as contaminant sources are depleted, aquifer concentrations decline, triggering back diffusion from aquitards. The dynamics of the contaminant source depletion, or the source strength function, controls the timing of the transition of aquitards from sinks to sources. Here, we experimentally evaluate three archetypical transient source depletion models (step-change, linear, and exponential), and we use novel analytical solutions to accurately account for dynamic aquitard-aquifer diffusive transfer. Laboratory diffusion experiments were conducted using a well-controlled flow chamber to assess solute exchange between sand aquifer and kaolinite aquitard layers. Solute concentration profiles in the aquitard were measured in situ using electrical conductivity. Back diffusion was shown to begin earlier and produce larger mass flux for rapidly depleting sources. The analytical models showed very good correspondence with measured aquifer breakthrough curves and aquitard concentration profiles. The modeling approach links source dissolution and back diffusion, enabling assessment of human exposure risk and calculation of the back diffusion initiation time, as well as the resulting plume persistence.
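
    For the step-change source archetype, the aquitard concentration profile has a closed form by superposition of complementary error functions: loading at concentration c0 until the source is depleted at t_off, then an opposing term that drives back diffusion. A sketch follows; the parameter values are invented, and the paper's linear and exponential source solutions are not reproduced.

```python
import numpy as np
from scipy.special import erfc

def aquitard_profile(z, t, t_off, c0=1.0, d_e=0.0315):
    """Concentration at depth z (m) in a semi-infinite aquitard at time t
    (years) for a step-change source: the aquifer boundary is held at c0
    until t_off and at zero afterwards (superposition of erfc terms).
    d_e is an effective diffusion coefficient in m^2/yr (invented)."""
    c = c0 * erfc(z / (2.0 * np.sqrt(d_e * t)))
    if t > t_off:
        c -= c0 * erfc(z / (2.0 * np.sqrt(d_e * (t - t_off))))
    return c

# before source depletion the aquitard loads; afterwards it back-diffuses
for t in (5.0, 20.0):
    print(t, [round(aquitard_profile(z, t, t_off=10.0), 3)
              for z in (0.0, 0.05, 0.10)])
```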

  17. Solute source depletion control of forward and back diffusion through low-permeability zones.

    PubMed

    Yang, Minjune; Annable, Michael D; Jawitz, James W

    2016-10-01

    Solute diffusive exchange between low-permeability aquitards and high-permeability aquifers acts as a significant mediator of long-term contaminant fate. Aquifer contaminants diffuse into aquitards, but as contaminant sources are depleted, aquifer concentrations decline, triggering back diffusion from aquitards. The dynamics of the contaminant source depletion, or the source strength function, controls the timing of the transition of aquitards from sinks to sources. Here, we experimentally evaluate three archetypical transient source depletion models (step-change, linear, and exponential), and we use novel analytical solutions to accurately account for dynamic aquitard-aquifer diffusive transfer. Laboratory diffusion experiments were conducted using a well-controlled flow chamber to assess solute exchange between sand aquifer and kaolinite aquitard layers. Solute concentration profiles in the aquitard were measured in situ using electrical conductivity. Back diffusion was shown to begin earlier and produce larger mass flux for rapidly depleting sources. The analytical models showed very good correspondence with measured aquifer breakthrough curves and aquitard concentration profiles. The modeling approach links source dissolution and back diffusion, enabling assessment of human exposure risk and calculation of the back diffusion initiation time, as well as the resulting plume persistence.

  18. Estimation of contribution ratios of pollutant sources to a specific section based on an enhanced water quality model.

    PubMed

    Cao, Bibo; Li, Chuan; Liu, Yan; Zhao, Yue; Sha, Jian; Wang, Yuqiu

    2015-05-01

    Because water quality monitoring sections or sites reflect the water quality status of rivers, surface water quality management based on such sections or sites can be effective. To improve river water quality, it is necessary to quantify the contribution ratios of pollutant sources to a specific section. Because the physical and chemical processes governing nutrient pollutants in water bodies are complex, these contribution ratios are difficult to compute quantitatively. However, water quality models have proved to be effective tools for estimating surface water quality. In this project, an enhanced QUAL2Kw model with an added module was applied to the Xin'anjiang Watershed to obtain water quality information along the river and to assess the contribution ratio of each pollutant source to a certain section (the Jiekou state-controlled section). Model validation indicated that the results were reliable. Contribution ratios were then analyzed through the added module. The results show that, among the pollutant sources, the Lianjiang tributary contributes the largest share of total nitrogen (50.43%), total phosphorus (45.60%), ammonia nitrogen (32.90%), nitrate (nitrite + nitrate) nitrogen (47.73%), and organic nitrogen (37.87%). Furthermore, contribution ratios in different reaches varied along the river. Compared with the pollutant load ratios of different sources in the watershed, an analysis of the contribution ratios of pollutant sources for each specific section, which takes localized chemical and physical processes into consideration, is more suitable for local-regional water quality management. In summary, this method of analyzing the contribution ratios of pollutant sources to a specific section based on the QUAL2Kw model was found to support the improvement of the local environment.
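
    A generic way to obtain such section-specific contribution ratios, in the spirit of the added module (whose internals are not described in the abstract), is a zero-out scheme: re-run the calibrated model with one source removed at a time and attribute the concentration drop at the target section to that source. The model_run wrapper below is hypothetical.

```python
def contribution_ratios(model_run, sources):
    """Zero-out attribution with a hypothetical model_run(sources) wrapper
    returning the simulated concentration at the target section for a
    {source_name: load} dict. Ratios are renormalized because nonlinear
    kinetics keep the raw drops from summing exactly to the base value."""
    base = model_run(sources)
    drops = {name: base - model_run({**sources, name: 0.0})
             for name in sources}
    total = sum(drops.values())
    return {name: drop / total for name, drop in drops.items()}
```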

  19. Models, Measurements, and Local Decisions: Assessing and ...

    EPA Pesticide Factsheets

    This presentation includes a combination of modeling and measurement results to characterize near-source air quality in Newark, New Jersey with consideration of how this information could be used to inform decision making to reduce risk of health impacts. Decisions could include either exposure or emissions reduction, and a host of stakeholders, including residents, academics, NGOs, local and federal agencies. This presentation includes results from the C-PORT modeling system, and from a citizen science project from the local area. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.

  20. Detection of Warming Effects Due to Industrialization: An Accumulated Intervention Model with an Application in Pohang, Korea.

    NASA Astrophysics Data System (ADS)

    Ryoo, S. B.; Moon, S. E.

    1995-06-01

    Modifications of surface air temperature caused by anthropogenic impacts have received much attention recently because of the heightened interest in climatic change. When an industrial area is constructed, resulting in a large-scale anthropogenic heat source, is it possible to detect the warming effect of the heat source? In this paper, the intensity of warming is estimated in the area of the source. A statistical model is suggested to estimate the warming caused by that anthropogenic heat source. The model used in this study is an accumulated intervention (AI) model that is applied to industrial heat perturbations that occurred in the area. To evaluate the AI model performance, the forecast experiment was carried out with an independent dataset. The data used in this study are the monthly mean temperatures at Pohang, Korea. The AI model was developed based on the data for the 38-year period from 1953 to 1990, and the forecast experiment was carried out with an independent dataset for the 2-year period from 1991 to 1992.
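
    An intervention analysis of this kind can be sketched as a time-series regression with an ARMA error term and an accumulated (ramp-then-saturate) intervention regressor. The SARIMAX sketch below uses synthetic monthly temperatures with invented onset and saturation values; it illustrates the idea and is not the paper's exact AI specification.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
n, onset, ramp_len = 456, 240, 60        # 38 years monthly; invented onset
months = np.arange(n)
# accumulated intervention: grows after onset, then saturates
ai = np.minimum(np.maximum(months - onset, 0), ramp_len) / ramp_len
season = np.column_stack([np.sin(2 * np.pi * months / 12),
                          np.cos(2 * np.pi * months / 12)])
y = 12.0 + 10.0 * season[:, 0] + 0.8 * ai + rng.normal(0.0, 1.0, n)

exog = np.column_stack([ai, season])
res = SARIMAX(y, exog=exog, order=(1, 0, 0)).fit(disp=False)
print(res.params[:3])   # first exog coefficient ~0.8, the warming amplitude
```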

  1. UNMIX Methods Applied to Characterize Sources of Volatile Organic Compounds in Toronto, Ontario

    PubMed Central

    Porada, Eugeniusz; Szyszkowicz, Mieczysław

    2016-01-01

    UNMIX, a receptor modeling routine from the U.S. Environmental Protection Agency (EPA), was used to model volatile organic compound (VOC) receptors at four urban sites in Toronto, Ontario. VOC ambient concentration data acquired in 2000-2009 for 175 VOC species at four air quality monitoring stations were analyzed. UNMIX, by performing multiple modeling attempts upon varying VOC menus, while rejecting results that were not reliable, allowed sources to be discriminated by their most consistent chemical characteristics. The method assessed occurrences of VOCs in sources typical of the urban environment (traffic, evaporative emissions of fuels, banks of fugitive inert gases), in industrial point sources (plastic-, polymer-, and metalworking manufactures), and in secondary sources (releases from water, sediments, and contaminated urban soil). The receptor-based and robust modeling used here produces chemical profiles of putative VOC sources that, if combined with known environmental fates of VOCs, can be used to assign physical sources' shares of VOC emissions into the atmosphere. This in turn provides a means of assessing the impact of environmental policies on one hand, and industrial activities on the other, on VOC air pollution. PMID:29051416
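
    UNMIX itself is proprietary EPA software and its algorithm is not reproduced here; the sketch below (Python, synthetic data) only illustrates the generic receptor-modeling step of factoring a samples-by-species concentration matrix into non-negative source contributions and source profiles, using scikit-learn's NMF as a stand-in.

        # Generic receptor-modeling sketch (NOT the UNMIX algorithm): factor an
        # (samples x species) matrix X into non-negative contributions W and
        # profiles H so that X ~ W @ H. Data are synthetic.
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        X = rng.gamma(shape=2.0, scale=1.0, size=(200, 20))   # fake VOC data

        model = NMF(n_components=4, init="nndsvd", max_iter=500)
        W = model.fit_transform(X)   # source contributions per sample
        H = model.components_        # chemical profile of each putative source
        print("reconstruction error:", model.reconstruction_err_)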

  2. Dense surface seismic data confirm non-double-couple source mechanisms induced by hydraulic fracturing

    USGS Publications Warehouse

    Pesicek, Jeremy; Cieślik, Konrad; Lambert, Marc-André; Carrillo, Pedro; Birkelo, Brad

    2016-01-01

    We have determined source mechanisms for nine high-quality microseismic events induced during hydraulic fracturing of the Montney Shale in Canada. Seismic data were recorded using a dense regularly spaced grid of sensors at the surface. The design and geometry of the survey are such that the recorded P-wave amplitudes essentially map the upper focal hemisphere, allowing the source mechanism to be interpreted directly from the data. Given the inherent difficulties of computing reliable moment tensors (MTs) from high-frequency microseismic data, the surface amplitude and polarity maps provide important additional confirmation of the source mechanisms. This is especially critical when interpreting non-shear source processes, which are notoriously susceptible to artifacts due to incomplete or inaccurate source modeling. We have found that most of the nine events contain significant non-double-couple (DC) components, as evident in the surface amplitude data and the resulting MT models. Furthermore, we found that source models that are constrained to be purely shear do not explain the data for most events. Thus, even though non-DC components of MTs can often be attributed to modeling artifacts, we argue that they are required by the data in some cases, and can be reliably computed and confidently interpreted under favorable conditions.
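
    For readers unfamiliar with how DC and non-DC percentages are quantified, a minimal sketch of one standard decomposition follows (after Jost and Herrmann, 1989; the tensor below is arbitrary, and this is not necessarily the authors' exact workflow).

        # Decompose a symmetric moment tensor into isotropic (ISO), double-couple
        # (DC), and CLVD percentages (Jost & Herrmann, 1989 convention).
        import numpy as np

        M = np.array([[1.0, 0.2, 0.0],
                      [0.2, 0.8, 0.1],
                      [0.0, 0.1, -0.5]])     # arbitrary example tensor

        m_iso = np.trace(M) / 3.0
        dev = np.linalg.eigvalsh(M - m_iso * np.eye(3))
        dev = dev[np.argsort(np.abs(dev))]   # |m1*| <= |m2*| <= |m3*|
        eps = -dev[0] / abs(dev[2])          # 0 for pure DC, +/-0.5 for pure CLVD

        p_iso = 100.0 * abs(m_iso) / (abs(m_iso) + abs(dev[2]))
        p_clvd = 2.0 * abs(eps) * (100.0 - p_iso)
        p_dc = 100.0 - p_iso - p_clvd
        print(f"ISO {p_iso:.1f}%  DC {p_dc:.1f}%  CLVD {p_clvd:.1f}%")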

  3. A Parametric Study of Fine-scale Turbulence Mixing Noise

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James; Freund, Jonathan B.

    2002-01-01

    The present paper is a study of aerodynamic noise spectra from model functions that describe the source. The study is motivated by the need to improve the spectral shape of the MGBK jet noise prediction methodology at high frequency. The predicted spectral shape usually appears less broadband than measurements and faster decaying at high frequency. Theoretical representation of the source is based on Lilley's equation. Numerical simulations of high-speed subsonic jets as well as some recent turbulence measurements reveal a number of interesting statistical properties of turbulence correlation functions that may have a bearing on radiated noise. These studies indicate that an exponential spatial function may be a more appropriate representation of a two-point correlation compared to its Gaussian counterpart. The effect of source non-compactness on spectral shape is discussed. It is shown that source non-compactness could well be the differentiating factor between the Gaussian and exponential model functions. In particular, the fall-off of the noise spectra at high frequency is studied and it is shown that a non-compact source with an exponential model function results in a broader spectrum and better agreement with data. An alternate source model that represents the source as a covariance of the convective derivative of fine-scale turbulence kinetic energy is also examined.
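
    The contrast between the two model functions can be seen in the standard one-dimensional transform pairs (the length scale \ell and wavenumber k are notation introduced here, not the paper's):

        R_G(\xi) = e^{-\xi^2/\ell^2} \;\longleftrightarrow\; \hat{R}_G(k) = \ell\sqrt{\pi}\, e^{-k^2\ell^2/4}
        R_E(\xi) = e^{-|\xi|/\ell}  \;\longleftrightarrow\; \hat{R}_E(k) = \frac{2\ell}{1 + k^2\ell^2}

    The Gaussian transform decays super-exponentially in k, whereas the Lorentzian \hat{R}_E falls off only algebraically, consistent with the broader high-frequency spectrum and better agreement with data reported above.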

  4. Effective pollutant emission heights for atmospheric transport modelling based on real-world information.

    PubMed

    Pregger, Thomas; Friedrich, Rainer

    2009-02-01

    Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height, which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available, and simple assumptions are often used in atmospheric models. As a contribution to improving knowledge on emission heights, this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of probably the most comprehensive database of real-world stack information existing in Europe, based on German industrial data. A bottom-up calculation of effective emission heights applying equations used for Gaussian dispersion models shows significant differences depending on source and air pollutant, and compared to approaches currently used for atmospheric transport modelling.
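
    As an illustration of the kind of bottom-up calculation described, the sketch below applies the Briggs final plume rise formulas for the unstable/neutral branch as used in Gaussian models such as ISC3; the stack parameters are invented, and real applications distinguish stability classes and momentum-dominated rise.

        # Effective emission height via buoyancy flux plus Briggs final plume
        # rise (unstable/neutral branch as used in ISC3-type Gaussian models).
        # Stack parameters below are illustrative only.
        import math

        g = 9.81
        h_stack = 100.0    # stack height, m
        d_s = 3.0          # stack exit diameter, m
        v_s = 15.0         # flue gas exit velocity, m/s
        T_s = 420.0        # flue gas temperature, K
        T_a = 288.0        # ambient temperature, K
        u = 5.0            # wind speed at stack height, m/s

        F_b = g * v_s * d_s**2 * (T_s - T_a) / (4.0 * T_s)   # buoyancy flux, m^4/s^3
        dh = 21.425 * F_b**0.75 / u if F_b < 55.0 else 38.71 * F_b**0.6 / u
        print(f"effective emission height ~ {h_stack + dh:.0f} m")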

  5. Seismic Source Scaling and Characteristics of Six North Korean Underground Nuclear Explosions

    NASA Astrophysics Data System (ADS)

    Park, J.; Stump, B. W.; Che, I. Y.; Hayward, C.

    2017-12-01

    We estimate the range of yields and source depths for the six North Korean underground nuclear explosions in 2006, 2009, 2013, 2016 (January and September), and 2017, based on regional seismic observations in South Korea and China. Seismic data used in this study are from three seismo-acoustic stations, BRDAR, CHNAR, and KSGAR, cooperatively operated by SMU and KIGAM, the KSRS seismic array operated by the Comprehensive Nuclear-Test-Ban Treaty Organization, and MDJ, a station in the Global Seismographic Network. We calculate spectral ratios for event pairs using seismograms from the six explosions observed along the same paths and at the same receivers. These relative seismic source scaling spectra for Pn, Pg, Sn, and surface wave windows provide a basis for a grid search source solution that estimates source yield and depth for each event based on both the modified Mueller and Murphy (1971; MM71) and Denny and Johnson (1991; DJ91) source models. The grid search identifies the best-fit empirical spectral ratios subject to the source models by minimizing a goodness-of-fit (GOF) measure in the frequency range of 0.5-15 Hz. For all cases, the DJ91 model produces higher ratios of depth and yield than MM71. These initial results include significant trade-offs between depth and yield in all cases. In order to better take the effect of source depth into account, a modified grid search was implemented that includes the propagation effects for different source depths by incorporating reflectivity Green's functions in the grid search procedure. This revision reduces the trade-offs between depth and yield, results in better model fits at frequencies as high as 15 Hz, and yields GOF values smaller than those obtained when the depth effects on the Green's functions were ignored. The depth and yield estimates for all six explosions using this new procedure will be presented.
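
    The MM71 and DJ91 spectral models are not reproduced here; the skeleton below (Python, synthetic data) only illustrates the grid-search logic with a placeholder omega-square spectrum whose corner frequency carries an assumed yield and depth scaling.

        # Skeleton of a spectral-ratio grid search (sketch only: source_spectrum
        # is a placeholder; a real study would substitute the MM71/DJ91 formulas).
        import numpy as np

        def source_spectrum(f, yield_kt, depth_m):
            # Placeholder: long-period level scales with yield; corner frequency
            # scales as yield**(-1/3) with a weak assumed depth dependence.
            fc = 10.0 * yield_kt ** (-1.0 / 3.0) * (depth_m / 500.0) ** 0.1
            return yield_kt / (1.0 + (f / fc) ** 2)

        f = np.linspace(0.5, 15.0, 100)
        obs_ratio = source_spectrum(f, 10.0, 600.0) / source_spectrum(f, 1.0, 300.0)

        def gof(p):
            w1, h1, w2, h2 = p
            model = source_spectrum(f, w1, h1) / source_spectrum(f, w2, h2)
            return np.sum((np.log(model) - np.log(obs_ratio)) ** 2)

        grid = [(w1, h1, w2, h2)
                for w1 in np.linspace(1, 50, 25) for h1 in np.linspace(100, 1000, 10)
                for w2 in np.linspace(1, 50, 25) for h2 in np.linspace(100, 1000, 10)]
        best = min(grid, key=gof)
        print("best-fit (yield1, depth1, yield2, depth2):", best)
        # The weak depth dependence leaves depth poorly resolved, illustrating
        # the depth-yield trade-off noted in the abstract.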

  6. New insight on petroleum system modeling of Ghadames basin, Libya

    NASA Astrophysics Data System (ADS)

    Bora, Deepender; Dubey, Siddharth

    2015-12-01

    Underdown and Redfern (2008) performed detailed petroleum system modeling of the Ghadames basin along an E-W section. However, hydrocarbon generation, migration and accumulation change significantly across the basin due to its complex geological history. Therefore, a single section can't be considered representative of the whole basin. This study aims at bridging this gap by performing petroleum system modeling along a N-S section and provides new insights on source rock maturation and on the generation and migration of hydrocarbons using 2D basin modeling. This study, in conjunction with the earlier work, provides a 3D context for petroleum system modeling in the Ghadames basin. Hydrocarbon generation from the lower Silurian Tanezzuft formation and the Upper Devonian Aouinet Ouenine started during the late Carboniferous. However, the high subsidence rate during the middle to late Cretaceous and elevated heat flow in the Cenozoic had the greatest impact on source rock transformation and hydrocarbon generation, whereas large-scale uplift and erosion during the Alpine orogeny had a significant impact on migration and accumulation. Visible migration is observed along faults that were reactivated during the Austrian unconformity. Peak hydrocarbon expulsion was reached during the Oligocene for both the Tanezzuft and the Aouinet Ouenine source rocks. Based on the modeling results, capillary-entry-pressure-driven downward expulsion of hydrocarbons from the lower Silurian Tanezzuft formation into the underlying Bir Tlacsin formation is observed during the middle Cretaceous. Kinetic modeling has helped to model the composition and distribution of the hydrocarbons generated from both source rocks. Application of source-to-reservoir tracking suggests that some accumulations at shallow stratigraphic levels have received hydrocarbons from both the Tanezzuft and Aouinet Ouenine source rocks, implying charge mixing. Five petroleum systems were identified based on source-to-reservoir correlation in PetroMod*. This study builds upon the original work of Underdown and Redfern (2008) and offers new insights and interpretation of the data.

  7. An improved source model for aircraft interior noise studies

    NASA Technical Reports Server (NTRS)

    Mahan, J. R.; Fuller, C. R.

    1985-01-01

    There is concern that advanced turboprop engines currently being developed may produce excessive aircraft cabin noise levels. This concern has stimulated renewed interest in developing aircraft interior noise reduction methods that do not significantly increase takeoff weight. An existing analytical model for noise transmission into aircraft cabins was utilized to investigate the behavior of an improved propeller source model for use in aircraft interior noise studies. The new source model, a virtually rotating dipole, is shown to adequately match measured fuselage sound pressure distributions, including the correct phase relationships, for published data. The virtually rotating dipole is used to study the sensitivity of synchrophasing effectiveness to the fuselage sound pressure trace velocity distribution. Results of calculations are presented which reveal the importance of correctly modeling the surface pressure phase relations in synchrophasing and other aircraft interior noise studies.

  8. An improved source model for aircraft interior noise studies

    NASA Technical Reports Server (NTRS)

    Mahan, J. R.; Fuller, C. R.

    1985-01-01

    There is concern that advanced turboprop engines currently being developed may produce excessive aircraft cabin noise levels. This concern has stimulated renewed interest in developing aircraft interior noise reduction methods that do not significantly increase takeoff weight. An existing analytical model for noise transmission into aircraft cabins was utilized to investigate the behavior of an improved propeller source model for use in aircraft interior noise studies. The new source model, a virtually rotating dipole, is shown to adequately match measured fuselage sound pressure distributions, including the correct phase relationships, for published data. The virtually rotating dipole is used to study the sensitivity of synchrophasing effectiveness to the fuselage sound pressure trace velocity distribution. Results of calculations are presented which reveal the importance of correctly modeling the surface pressure phase relations in synchrophasing and other aircraft interior noise studies.

  9. Discontinuous model with semi analytical sheath interface for radio frequency plasma

    NASA Astrophysics Data System (ADS)

    Miyashita, Masaru

    2016-09-01

    Sumitomo Heavy Industries, Ltd. provides many products utilizing plasma. In this study, we focus on a radio frequency (RF) plasma source driven by an interior antenna. The plasma source is expected to provide high density and low metal contamination. However, sputtering of the antenna cover by high-energy ions accelerated through the sheath voltage remains problematic. We have developed a new model that can calculate the sheath voltage waveform in the RF plasma source within a realistic calculation time. The model is discontinuous in that the electron fluid equations in the plasma are connected to the usual Poisson equation in the antenna cover and chamber through a semi-analytical sheath interface. We estimate the sputtering distribution based on the sheath voltage waveform calculated by this model, together with a sputtering yield model and an ion energy distribution function (IEDF) model. The estimated sputtering distribution reproduces the tendency of the experimental results.

  10. Why Do I Feel More Confident? Bandura's Sources Predict Preservice Teachers' Latent Changes in Teacher Self-Efficacy

    PubMed Central

    Pfitzner-Eden, Franziska

    2016-01-01

    Teacher self-efficacy (TSE) is associated with a multitude of positive outcomes for teachers and students. However, the development of TSE is an under-researched area. Bandura (1997) proposed four sources of self-efficacy: mastery experiences, vicarious experiences, verbal persuasion, and physiological and affective states. This study introduces a first instrument to assess the four sources for TSE in line with Bandura's conception. Gathering evidence of convergent validity, the contribution that each source made to the development of TSE during a practicum at a school was explored for two samples of German preservice teachers. The first sample (N = 359) were beginning preservice teachers who completed an observation practicum. The second sample (N = 395) were advanced preservice teachers who completed a teaching practicum. The source measure showed good reliability, construct validity, and convergent validity. Latent true change modeling was applied to explore how the sources predicted changes in TSE. Three different models were compared. As expected, results showed that TSE changes in both groups were significantly predicted by mastery experiences, with a stronger relationship in the advanced group. Further, the results indicated that mastery experiences were largely informed by the other three sources to varying degrees depending on the type of practicum. Implications for the practice of teacher education are discussed in light of the results. PMID:27807422

  11. Why Do I Feel More Confident? Bandura's Sources Predict Preservice Teachers' Latent Changes in Teacher Self-Efficacy.

    PubMed

    Pfitzner-Eden, Franziska

    2016-01-01

    Teacher self-efficacy (TSE) is associated with a multitude of positive outcomes for teachers and students. However, the development of TSE is an under-researched area. Bandura (1997) proposed four sources of self-efficacy: mastery experiences, vicarious experiences, verbal persuasion, and physiological and affective states. This study introduces a first instrument to assess the four sources for TSE in line with Bandura's conception. Gathering evidence of convergent validity, the contribution that each source made to the development of TSE during a practicum at a school was explored for two samples of German preservice teachers. The first sample (N = 359) were beginning preservice teachers who completed an observation practicum. The second sample (N = 395) were advanced preservice teachers who completed a teaching practicum. The source measure showed good reliability, construct validity, and convergent validity. Latent true change modeling was applied to explore how the sources predicted changes in TSE. Three different models were compared. As expected, results showed that TSE changes in both groups were significantly predicted by mastery experiences, with a stronger relationship in the advanced group. Further, the results indicated that mastery experiences were largely informed by the other three sources to varying degrees depending on the type of practicum. Implications for the practice of teacher education are discussed in light of the results.

  12. Variation of atmospheric CO, δ13C, and δ18O at high northern latitude during 2004-2009: Observations and model simulations

    NASA Astrophysics Data System (ADS)

    Park, Keyhong; Wang, Zhihui; Emmons, Louisa K.; Mak, John E.

    2015-10-01

    Atmospheric CO mixing ratios and stable isotope ratios (δ13C and δ18O) were measured at a high northern latitude site (Westman Islands, Iceland) from January 2004 to March 2010 in order to investigate recent multiyear trends in the sources of atmospheric carbon monoxide in the extratropical Northern Hemisphere. During this period, we observed a decrease of about 2% per year in CO mixing ratios with little significant interannual variability. The seasonal cycles of δ13C and δ18O in CO are similar to that of the CO mixing ratio, and there is a pronounced interannual variation in their seasonal extremes occurring in summer and fall, which is driven by changes in the relative contribution of different sources. Some of the sources of CO are anthropogenic in character (e.g., fossil fuel and biofuel combustion and agricultural waste burning), and some are primarily natural (e.g., oxidation of atmospheric methane and other hydrocarbons, and wildfires); the various major sources can, more or less, be distinguished by the stable isotopic composition of CO. We compare our observations with simulations from a 3-D global chemical transport model (MOZART-4, Model for Ozone and Related Chemical Tracers, version 4). Our results indicate that the observed trend in anthropogenic CO emissions is mostly responsible for the observed variation in δ13C and δ18O of CO during 2004-2009. In particular, the δ18O-enriched sources, such as fossil fuel and biofuel combustion, control the variation. The modeling results indicate decreasing trends in the fossil fuel and biofuel source contributions at Iceland of -0.61 ± 0.26 ppbv/yr and -0.38 ± 0.10 ppbv/yr, respectively, during the observation period.
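
    The attribution rests on a linear isotopic mixing model; in the notation introduced here (f_i is the fractional contribution of source i to the CO burden):

        \mathrm{CO}_{\mathrm{obs}} = \sum_i f_i\,\mathrm{CO}_i, \qquad
        \delta^{13}\mathrm{C}_{\mathrm{obs}} \approx \sum_i f_i\,\delta^{13}\mathrm{C}_i, \qquad
        \delta^{18}\mathrm{O}_{\mathrm{obs}} \approx \sum_i f_i\,\delta^{18}\mathrm{O}_i, \qquad
        \sum_i f_i = 1,

    so sources with distinct δ13C and δ18O signatures, such as combustion versus methane oxidation, leave separable imprints on the observed isotope ratios.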

  13. Developing the RAL front end test stand source to deliver a 60 mA, 50 Hz, 2 ms H- beam

    NASA Astrophysics Data System (ADS)

    Faircloth, Dan; Lawrie, Scott; Letchford, Alan; Gabor, Christoph; Perkins, Mike; Whitehead, Mark; Wood, Trevor; Tarvainen, Olli; Komppula, Jani; Kalvas, Taneli; Dudnikov, Vadim; Pereira, Hugo; Izaola, Zunbeltz; Simkin, John

    2013-02-01

    All the Front End Test Stand (FETS) beam requirements have been achieved, but not simultaneously [1]. At 50 Hz repetition rates, beam current droop becomes unacceptable for pulse lengths longer than 1 ms. This is a fundamental limitation of the present source design. Previous researchers [2] have demonstrated that using a physically larger Penning surface plasma source should overcome these limitations. The scaled source development strategy is outlined in this paper. A study of time-varying plasma behavior has been performed using a V-UV spectrometer. Initial experiments to test scaled plasma volumes are outlined. A dedicated plasma and extraction test stand (VESPA: Vessel for Extraction and Source Plasma Analysis) is being developed to allow new source and extraction designs to be appraised. The experimental work is backed up by modeling and simulations. A detailed ANSYS thermal model has been developed. IBSimu is being used to design extraction and beam transport. A novel 3D plasma modeling code using beamlets is being developed by Cobham Vector Fields using SCALA OPERA; early source modeling results are very promising. Hardware on FETS is also being developed in preparation to run the scaled source. A new 2 ms, 50 Hz, 25 kV pulsed extraction voltage power supply has been constructed and a new discharge power supply is being designed. The design of the post acceleration electrode assembly has been improved.

  14. Source apportionment of ambient PM10 and PM2.5 in Haikou, China

    NASA Astrophysics Data System (ADS)

    Fang, Xiaozhen; Bi, Xiaohui; Xu, Hong; Wu, Jianhui; Zhang, Yufen; Feng, Yinchang

    2017-07-01

    In order to identify the sources of PM10 and PM2.5 in Haikou, 60 ambient air samples were collected in winter and in spring. Fifteen elements (Na, Mg, Al, Si, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn and Pb), water-soluble ions (SO4(2-) and NO3(-)), and organic carbon (OC) and elemental carbon (EC) were analyzed. The concentration of particulate matter was clearly higher in winter than in spring, and the PM2.5/PM10 ratio was > 0.6. Moreover, the proportions of TC, ions, Na, Al, Si and Ca were relatively high in both PM10 and PM2.5. The SOC concentration was estimated by the minimum OC/EC ratio method and deducted from the particulate matter composition when running the CMB model. According to the results of the CMB model, resuspended dust (17.5-35.0%), vehicle exhaust (14.9-23.6%) and secondary particulates (20.4-28.8%) were the major source categories of ambient particulate matter. Additionally, sea salt also made a partial contribution (3-8%). Back-trajectory analysis showed that particulate matter was strongly affected by regional sources in winter and less so in spring. Thus, particulate matter in coastal cities is affected not only by local sources but also by sea salt and regional sources. Further research could focus on establishing actual secondary particle profiles and on identifying the local and regional sources of PM simultaneously with a single model or analysis method.
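
    The operational EPA CMB model solves the receptor equations with effective-variance weighting; the unweighted sketch below (Python, synthetic profiles and concentrations) only illustrates the core step of solving C ≈ F·s for non-negative source contributions s.

        # Minimal chemical mass balance sketch: receptor concentrations C,
        # source-profile matrix F (mass fraction of each species per source),
        # non-negative contributions s. Numbers are synthetic.
        import numpy as np
        from scipy.optimize import nnls

        F = np.array([   # columns: resuspended dust, vehicle exhaust, secondary
            [0.05, 0.30, 0.02],   # OC
            [0.01, 0.15, 0.00],   # EC
            [0.20, 0.01, 0.00],   # Si
            [0.10, 0.02, 0.00],   # Fe
            [0.02, 0.01, 0.45],   # SO4
        ])
        C = np.array([0.9, 0.35, 0.45, 0.25, 1.0])   # ambient, ug/m3

        s, resid = nnls(F, C)
        for name, val in zip(["dust", "vehicle", "secondary"], s):
            print(f"{name}: {val:.2f} ug/m3 ({100 * val / s.sum():.0f}%)")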

  15. Modeling the source contribution of heavy metals in surficial sediment and analysis of their historical changes in the vertical sediments of a drinking water reservoir

    NASA Astrophysics Data System (ADS)

    Wang, Guoqiang; A, Yinglan; Jiang, Hong; Fu, Qing; Zheng, Binghui

    2015-01-01

    Increasing water pollution in developing countries poses a significant threat to environmental health and human welfare. Understanding the spatial distribution and apportioning the sources of pollution are important for the efficient management of water resources. In this study, ten heavy metals were detected during 2010-2013 in all ambient samples and point source samples. A pollution assessment of the surficial sediment dataset using the Enrichment Factor (EF) showed the surficial sediment to be moderately contaminated. A comparison of the multivariate approach (principal component analysis/absolute principal component scores, PCA/APCS) and the chemical mass balance (CMB) model shows that the identification of sources and the calculation of source contributions based on the CMB were more objective and acceptable when source profiles were known and the source composition was complex. The results of source apportionment for surficial heavy metals, from both the PCA/APCS and the CMB model, showed that the natural background (30%) was the most dominant contributor to the surficial heavy metals, followed by mining activities (29%). The contribution percentage of the natural background was negatively related to the degree of contamination. The peak concentrations of many heavy metals (Cu, Ba, Fe, As and Hg) were found in the middle layer of sediment, most likely a result of the development of industry beginning in the 1970s. However, the highest concentration of Pb appeared in the surficial sediment layer, most likely due to the sharp increase in traffic volume. The historical analysis of the sources based on the CMB showed that mining and the chemical industry are stable sources for all of the sections. A comparison of the change rates of source contributions over the years indicated that the composition of the materials at the estuary site (HF1) is sensitive to input from the land, whereas the center site (HF4) has a buffering effect on materials from the land through a series of complex movements. These results provide information for the development of improved pollution control strategies for lakes and reservoirs.

  16. An open source web interface for linking models to infrastructure system databases

    NASA Astrophysics Data System (ADS)

    Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.

    2016-12-01

    Models of networked engineered resource systems such as water or energy systems are often built collaboratively, with developers from different domains working at different locations. These models can be linked to large-scale real-world databases, and they are constantly being improved and extended. As the development and application of these models become more sophisticated, and the computing power required for simulations and/or optimisations increases, so has the need for online services and tools which enable the efficient development and deployment of these models. Hydra Platform is an open source, web-based data management system which allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform exposes a JSON web API that allows external programs (referred to as 'Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present ongoing development in Hydra Platform, the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit and share their networks, run apps and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs, and analyse results.
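
    As a sketch of how an external 'App' might talk to such a JSON web API, the fragment below uses the requests library; the endpoint paths, payload fields, and credentials are invented for illustration and are not Hydra Platform's actual API.

        # Hypothetical JSON API client (endpoints and fields are invented; this
        # is NOT Hydra Platform's documented interface). Assumes a server is
        # listening at BASE.
        import json
        import requests

        BASE = "http://localhost:8080"
        session = requests.Session()
        session.post(f"{BASE}/login", json={"username": "demo", "password": "demo"})

        network = {
            "name": "example water network",
            "nodes": [{"name": "reservoir"}, {"name": "city"}],
            "links": [{"name": "aqueduct", "node_1": "reservoir", "node_2": "city"}],
        }
        resp = session.post(f"{BASE}/networks", json=network)
        print(json.dumps(resp.json(), indent=2))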

  17. A novel multi-segment path analysis based on a heterogeneous velocity model for the localization of acoustic emission sources in complex propagation media.

    PubMed

    Gollob, Stephan; Kocur, Georg Karl; Schumacher, Thomas; Mhamdi, Lassaad; Vogel, Thomas

    2017-02-01

    In acoustic emission analysis, common source location algorithms assume, independently of the nature of the propagation medium, a straight (shortest) wave path between the source and the sensors. For heterogeneous media such as concrete, the wave travels along complex paths due to interactions with the dissimilar material constituents and with the geometrical and material irregularities present in these media. For instance, cracks and large air voids present in concrete significantly influence the way the wave travels by causing wave path deviations. Neglecting these deviations by assuming straight paths can introduce significant errors into the source location results. In this paper, a novel source localization method called FastWay is proposed. Contrary to most available shortest-path-based methods, it accounts for the effects of material discontinuities (cracks and voids). FastWay, based on a heterogeneous velocity model, uses the fastest rather than the shortest travel paths between the source and each sensor. The method was evaluated both numerically and experimentally, and the results from both evaluation tests show that, in general, FastWay was able to locate the sources of acoustic emissions more accurately and reliably than traditional source localization methods. Copyright © 2016 Elsevier B.V. All rights reserved.
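
    FastWay's implementation is not reproduced here; the sketch below illustrates the generic fastest-path idea with Dijkstra's algorithm on a heterogeneous velocity grid, so that computed first arrivals route around a slow zone (standing in for a crack or void) rather than crossing it on a straight ray.

        # Generic fastest-path travel times on a 2-D velocity grid (illustration
        # of the concept, not the authors' FastWay algorithm).
        import heapq
        import math
        import numpy as np

        def travel_times(vel, src, h):
            """First-arrival times from grid index src; h is grid spacing in m."""
            ny, nx = vel.shape
            t = np.full(vel.shape, np.inf)
            t[src] = 0.0
            pq = [(0.0, src)]
            while pq:
                tt, (i, j) = heapq.heappop(pq)
                if tt > t[i, j]:
                    continue
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        ni, nj = i + di, j + dj
                        if (di or dj) and 0 <= ni < ny and 0 <= nj < nx:
                            step = h * math.hypot(di, dj)
                            slow = 0.5 * (1.0 / vel[i, j] + 1.0 / vel[ni, nj])
                            cand = tt + step * slow
                            if cand < t[ni, nj]:
                                t[ni, nj] = cand
                                heapq.heappush(pq, (cand, (ni, nj)))
            return t

        vel = np.full((50, 50), 4000.0)   # P-wave speed in concrete, m/s
        vel[20:30, 10:40] = 500.0         # slow zone standing in for a crack
        t = travel_times(vel, (40, 25), h=0.01)
        print("first arrival at sensor (5, 25):", t[5, 25], "s")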

  18. Computational Fluid Dynamics Simulation of Flows in an Oxidation Ditch Driven by a New Surface Aerator.

    PubMed

    Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe

    2013-11-01

    In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it performs better in driving the oxidation ditch than the original design, with higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. An improved momentum source term approach to simulating the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four turbulence models were investigated with this approach, including the standard k-ε, RNG k-ε, realizable k-ε, and Reynolds stress models, and the predicted data were compared with those calculated with the multiple rotating reference frame (MRF) and sliding mesh (SM) approaches. Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF and close to SM. The momentum source term approach also has lower computational expense, is simpler to preprocess, and is easier to use.

  19. A Seismic Source Model for Central Europe and Italy

    NASA Astrophysics Data System (ADS)

    Nyst, M.; Williams, C.; Onur, T.

    2006-12-01

    We present a seismic source model for Central Europe (Belgium, Germany, Switzerland, and Austria) and Italy, as part of an overall seismic risk and loss modeling project for this region. A separate presentation at this conference discusses the probabilistic seismic hazard and risk assessment (Williams et al., 2006). Where available we adopt regional consensus models and adjust these to fit our format; otherwise we develop our own model. Our seismic source model covers the whole region under consideration and consists of the following components: 1. A subduction zone environment in Calabria, SE Italy, with interface events between the Eurasian and African plates and intraslab events within the subducting slab. The subduction zone interface is parameterized as a set of dipping area sources that follow the geometry of the surface of the subducting plate, whereas intraslab events are modeled as plane sources at depth; 2. The main normal faults in the upper crust along the Apennines mountain range, in Calabria and Central Italy. Dipping faults and (sub-)vertical faults are parameterized as dipping plane and line sources, respectively; 3. The Upper and Lower Rhine Graben regime that runs from northern Italy into eastern Belgium, parameterized as a combination of dipping plane and line sources; and finally 4. Background seismicity, parameterized as area sources. The fault model is based on slip rates using characteristic recurrence. The modeling of background and subduction zone seismicity is based on a compilation of several national and regional historic seismic catalogs using a Gutenberg-Richter recurrence model. Merging the catalogs encompasses the deletion of duplicate, spurious and very old events and the application of a declustering algorithm (Reasenberg, 2000). The resulting catalog contains a little over 6000 events, has an average b-value of -0.9, is complete for moment magnitudes 4.5 and larger, and is used to compute a gridded a-value model (smoothed historical seismicity) for the region. The logic tree weights various completeness intervals and minimum magnitudes. Using a weighted scheme of European and global ground motion models together with a detailed site classification map for Europe based on Eurocode 8, we generate hazard maps for recurrence periods of 200, 475, 1000 and 2500 yrs.
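
    For background sources, the Gutenberg-Richter recurrence relation gives the annual rate of events at or above magnitude m as log10 N(≥m) = a - b·m (the abstract quotes a slope of -0.9, i.e. b ≈ 0.9 in this convention); the a-value below is illustrative only.

        # Annual event rates from Gutenberg-Richter recurrence (a-value invented).
        a, b = 4.0, 0.9
        for m in (4.5, 5.5, 6.5):
            rate = 10 ** (a - b * m)          # events per year with M >= m
            print(f"M>={m}: {rate:.4f}/yr, return period {1 / rate:.0f} yr")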

  20. Research on precise modeling of buildings based on multi-source data fusion of air to ground

    NASA Astrophysics Data System (ADS)

    Li, Yongqiang; Niu, Lubiao; Yang, Shasha; Li, Lixue; Zhang, Xitong

    2016-03-01

    Aiming at the accuracy problem of precise building modeling, a test study was conducted based on multi-source data for buildings in the same test area, including rooftop data from airborne LiDAR, aerial orthophotos, and façade data from vehicle-borne LiDAR. After the top and bottom outlines of building clusters were accurately extracted, a series of qualitative and quantitative analyses of the 2D interval between the outlines was carried out. The results provide reliable accuracy support for precise building modeling from air-ground multi-source data fusion; solutions for some key technical problems are also discussed.

  1. NO(x) Concentrations in the Upper Troposphere as a Result of Lightning

    NASA Technical Reports Server (NTRS)

    Penner, Joyce E.

    1998-01-01

    Upper tropospheric NO(x) controls, in part, the distribution of ozone in this greenhouse-sensitive region of the atmosphere. Many factors control NO(x) in this region. As a result, it is difficult to assess uncertainties in anthropogenic perturbations to NO(x) from aircraft, for example, without understanding the role of the other major NO(x) sources in the upper troposphere. These include in situ sources (lightning, aircraft), convection from the surface (biomass burning, fossil fuels, soils), stratospheric intrusions, and photochemical recycling from HNO3. This work examines the separate contribution to upper tropospheric "primary" NO(x) from each source category and uses two different chemical transport models (CTMs) to represent a range of possible atmospheric transport. Because aircraft emissions are tied to particular pressure altitudes, it is important to understand whether those emissions are placed in the model stratosphere or troposphere and to assess whether the models can adequately differentiate stratospheric air from tropospheric air. We examine these issues by defining a point-by-point "tracer tropopause" in order to differentiate stratosphere from troposphere in terms of NO(x) perturbations. Both models predict similar zonal average peak enhancements of primary NO(x) due to aircraft (approximately 10-20 parts per trillion by volume (pptv) in both January and July); however, the peak lies primarily in a region of large stratospheric influence in one model and is centered near the level evaluated as the tracer tropopause in the second. Below the tracer tropopause, both models show negligible NO(x) derived directly from the stratospheric source. Also, they predict a typically low background of 1-20 pptv NO(x) when tropospheric HNO3 is constrained to 100 pptv. The two models calculate large differences in the total background NO(x) (defined as the NO(x) from lightning + stratosphere + surface + HNO3) when using identical loss frequencies for NO(x). This difference is primarily due to differing treatments of vertical transport. An improved diagnosis of this transport that is relevant to NO(x) requires either measurements of a surface-based tracer with a substantially shorter lifetime than Rn-222 or diagnosis and mapping of tracer correlations with different source signatures. Because of differences in transport between the two models, we cannot constrain the source of NO(x) from lightning through comparison of average model concentrations with observations of NO(x).

  2. Fast in-memory elastic full-waveform inversion using consumer-grade GPUs

    NASA Astrophysics Data System (ADS)

    Sivertsen Bergslid, Tore; Birger Raknes, Espen; Arntsen, Børge

    2017-04-01

    Full-waveform inversion (FWI) is a technique to estimate subsurface properties by using the recorded waveform produced by a seismic source and applying inverse theory. This is done through an iterative optimization procedure, where each iteration requires solving the wave equation many times and then trying to minimize the difference between the modeled and the measured seismic data. Having to model many seismic sources per iteration makes this a highly computationally demanding procedure, which usually involves writing a lot of data to disk. We have written code that does forward modeling and inversion entirely in memory. A typical HPC cluster has many more CPUs than GPUs. Since FWI involves modeling many seismic sources per iteration, the obvious approach is to parallelize the code on a source-by-source basis, where each core of the CPU performs one modeling and all modelings run simultaneously. With this approach, the GPU is already at a major disadvantage in pure numbers. Fortunately, GPUs can more than make up for this hardware disadvantage by performing each modeling much faster than a CPU. Another benefit of parallelizing each individual modeling is that it lets each modeling use much more RAM. If one node has 128 GB of RAM and 20 CPU cores, each modeling can use only 6.4 GB of RAM when the node runs at full capacity with source-by-source parallelization on the CPU. A per-source parallelized code using GPUs can use 64 GB of RAM per modeling. Whenever a modeling uses more RAM than is available and has to start using regular disk space, the runtime increases dramatically due to slow file I/O. The extremely high computational speed of the GPUs combined with the large amount of RAM available for each modeling lets us do high-frequency FWI for fairly large models very quickly. For a single modeling, our GPU code outperforms the single-threaded CPU code by a factor of about 75. Successful inversions have been run on data with frequencies up to 40 Hz for a model of 2001 by 600 grid points with 5 m grid spacing and 5000 time steps, in less than 2.5 minutes per source. In practice, using 15 nodes (30 GPUs) to model 101 sources, each iteration took approximately 9 minutes. For reference, the same inversion run with our CPU code uses two hours per iteration. This was done using only a very simple wavefield interpolation technique, saving every second timestep. Using a more sophisticated checkpointing or wavefield reconstruction method would allow us to increase this model size significantly. Our results show that ordinary gaming GPUs are a viable alternative to the expensive professional GPUs often used today when performing large-scale modeling and inversion in geophysics.

  3. Research on the forward modeling of controlled-source audio-frequency magnetotellurics in three-dimensional axial anisotropic media

    NASA Astrophysics Data System (ADS)

    Wang, Kunpeng; Tan, Handong

    2017-11-01

    Controlled-source audio-frequency magnetotellurics (CSAMT) has developed rapidly in recent years and is widely used in mineral and oil resource exploration as well as other fields. Current theory, numerical simulation, and inversion research are based on the assumption that the underground media have isotropic resistivity. However, a large number of rock and mineral physical-property tests show that the resistivity of underground media is generally anisotropic. With the increasing application of CSAMT, the accuracy demanded of practical exploration of complex targets continues to increase, so the question of how to evaluate the influence of anisotropic resistivity on the CSAMT response is becoming important. To meet the demand for CSAMT response research in resistivity-anisotropic media, this paper examines the CSAMT electric field equations and derives and implements a three-dimensional (3D) staggered-grid finite-difference numerical simulation method for CSAMT with axially anisotropic resistivity. By building a two-dimensional (2D) resistivity-anisotropic geoelectric model, we validate the 3D computation result by comparing it with the result of a 2D finite-element controlled-source electromagnetic method (CSEM) resistivity-anisotropy program. By simulating a 3D axially anisotropic geoelectric model, we compare and analyze the responses of the equatorial configuration, the axial configuration, two oblique sources, and a tensor source. The research shows that the tensor source is suitable for CSAMT to recognize the anisotropic effect of underground structures.

  4. Modeling and analysis of secondary sources coupling for active sound field reduction in confined spaces

    NASA Astrophysics Data System (ADS)

    Montazeri, Allahyar; Taylor, C. James

    2017-10-01

    This article addresses the coupling of acoustic secondary sources in a confined space in a sound field reduction framework. By considering the coupling of sources in a rectangular enclosure, the set of coupled equations governing its acoustical behavior is solved. The model obtained in this way is used to analyze the behavior of multi-input multi-output (MIMO) active sound field control (ASC) systems, where the coupling of sources cannot be neglected. In particular, the article develops analytical results to analyze the effect of coupling of an array of secondary sources on the sound pressure levels inside an enclosure, when an array of microphones is used to capture the acoustic characteristics of the enclosure. The results are supported by extensive numerical simulations showing how coupling of loudspeakers through acoustic modes of the enclosure changes the strength of, and hence the driving voltage signal applied to, the secondary loudspeakers. The practical significance of this model is to provide better insight into the performance of sound reproduction/reduction systems in confined spaces when an array of loudspeakers and microphones is placed within a fraction of a wavelength of the excitation signal to reduce/reproduce the sound field. This is of particular importance because the interaction of different sources affects their radiation impedance, depending on the electromechanical properties of the loudspeakers.

  5. Deformation of Copahue volcano: Inversion of InSAR data using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Velez, Maria Laura; Euillades, Pablo; Caselli, Alberto; Blanco, Mauro; Díaz, Jose Martínez

    2011-04-01

    The Copahue volcano is one of the most active volcanoes in Argentina, with eruptions reported as recently as 1992, 1995 and 2000. A deformation analysis using the differential synthetic aperture radar technique (DInSAR) was performed on the Copahue-Caviahue Volcanic Complex (CCVC) from Envisat radar images between 2002 and 2007. A deformation rate of approximately 2 cm/yr was calculated, located mostly on the north-eastern flank of Copahue volcano, and assumed to be constant during the period of the interferograms. The geometry of the source responsible for the deformation was evaluated from an inversion of the mean velocity deformation measurements using two different models based on pressure sources embedded in an elastic homogeneous half-space. A genetic algorithm was applied as an optimization tool to find the best-fit source. Results from inverse modelling indicate that a source located beneath the volcanic edifice at a mean depth of 4 km is producing a volume change of approximately 0.0015 km3/yr. This source was analysed considering the available studies of the area, and a conceptual model of the volcanic-hydrothermal system was designed. The source of deformation is related to a depressurisation of the system that results from the release of magmatic fluids across the boundary between the brittle and plastic domains. These leakages are considered to be responsible for the weak phreatic eruptions recently registered at the Copahue volcano.
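
    A sketch of the inversion concept: the vertical surface displacement of a Mogi point source of volume change ΔV at depth d in an elastic half-space is u_z(r) = (1 - ν) ΔV d / (π (r² + d²)^{3/2}); below it is fitted with SciPy's differential evolution, an evolutionary optimizer standing in for the authors' genetic algorithm, against synthetic data.

        # Mogi point-source forward model fitted with an evolutionary optimizer
        # (synthetic data; differential evolution stands in for the authors' GA).
        import numpy as np
        from scipy.optimize import differential_evolution

        def mogi_uz(r, depth, dV, nu=0.25):
            return (1.0 - nu) / np.pi * dV * depth / (r**2 + depth**2) ** 1.5

        r = np.linspace(0.0, 15e3, 60)                  # radial distance, m
        uz_obs = (mogi_uz(r, 4000.0, 1.5e6)             # 0.0015 km3 at 4 km depth
                  + np.random.default_rng(1).normal(0.0, 1e-3, r.size))

        def misfit(p):
            depth, dV = p
            return np.sum((mogi_uz(r, depth, dV) - uz_obs) ** 2)

        res = differential_evolution(misfit, bounds=[(1e3, 10e3), (1e5, 1e7)], seed=1)
        print("depth ~ %.0f m, dV ~ %.2e m^3" % tuple(res.x))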

  6. A comparison of data-driven groundwater vulnerability assessment methods

    USGS Publications Warehouse

    Sorichetta, Alessandro; Ballabio, Cristiano; Masetti, Marco; Robinson, Gilpin R.; Sterlacchini, Simone

    2013-01-01

    Increasing availability of geo-environmental data has promoted the use of statistical methods to assess groundwater vulnerability. Nitrate is a widespread anthropogenic contaminant in groundwater and its occurrence can be used to identify aquifer settings vulnerable to contamination. In this study, multivariate Weights of Evidence (WofE) and Logistic Regression (LR) methods, where the response variable is binary, were used to evaluate the role and importance of a number of explanatory variables associated with nitrate sources and occurrence in groundwater in the Milan District (central part of the Po Plain, Italy). The results of these models have been used to map the spatial variation of groundwater vulnerability to nitrate in the region, and we compare the similarities and differences of their spatial patterns and associated explanatory variables. We modify the standard WofE method used in previous groundwater vulnerability studies to a form analogous to that used in LR; this provides a framework to compare the results of both models and reduces the effect of sampling bias on the results of the standard WofE model. In addition, a nonlinear Generalized Additive Model has been used to extend the LR analysis. Both approaches improved discrimination of the standard WofE and LR models, as measured by the c-statistic. Groundwater vulnerability probability outputs, based on rank-order classification of the respective model results, were similar in spatial patterns and identified similar strong explanatory variables associated with nitrate source (population density as a proxy for sewage systems and septic sources) and nitrate occurrence (groundwater depth).
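
    A minimal sketch of the LR side of such a comparison (synthetic data; the variables stand in for the kinds of explanatory variables named above), with the c-statistic computed as the area under the ROC curve:

        # Logistic regression for a binary contamination response plus the
        # c-statistic (ROC AUC). Data are synthetic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 500
        X = np.column_stack([
            rng.normal(size=n),   # e.g. population density (sewage/septic proxy)
            rng.normal(size=n),   # e.g. depth to groundwater
        ])
        logit = 1.2 * X[:, 0] - 0.8 * X[:, 1]
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # nitrate exceedance

        model = LogisticRegression().fit(X, y)
        auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
        print("coefficients:", model.coef_, " c-statistic:", round(auc, 3))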

  7. A comprehensive evaluation of input data-induced uncertainty in nonpoint source pollution modeling

    NASA Astrophysics Data System (ADS)

    Chen, L.; Gong, Y.; Shen, Z.

    2015-11-01

    Watershed models have been used extensively for quantifying nonpoint source (NPS) pollution, but few studies have examined how errors propagate from different input data sets into NPS modeling. In this paper, the effects of four input data sets, including rainfall, digital elevation models (DEMs), land use maps, and the amount of fertilizer, on NPS simulation were quantified and compared. Systematic input-induced uncertainty was investigated using a watershed model for phosphorus load prediction. Based on the results, rain gauge density resulted in the largest model uncertainty, followed by DEMs, whereas land use and fertilizer amount had limited impacts. The mean coefficients of variation for errors induced by single rain gauge, multiple rain gauge, ASTER GDEM, NFGIS DEM, land use, and fertilizer amount information were 0.390, 0.274, 0.186, 0.073, 0.033 and 0.005, respectively. The use of specific input information, such as key gauges, is also highlighted as a way to achieve the required model accuracy. In this sense, these results provide valuable information to other model-based studies for the control of prediction uncertainty.

  8. Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng

    2018-06-01

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.
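
    The fixers themselves operate on EAMv1's internal data structures; the fragment below is only a generic illustration of a mass-conserving fixer for clipped negative concentrations: zero out the negatives, then rescale the column so the pressure-weighted total is unchanged.

        # Generic column mass fixer for clipped negatives (illustration only).
        import numpy as np

        q = np.array([3.0e-3, 1.0e-3, -2.0e-4, 5.0e-4])   # specific humidity, kg/kg
        dp = np.array([200.0, 300.0, 250.0, 250.0])       # layer thickness, Pa

        mass_before = np.sum(q * dp)
        q_clipped = np.clip(q, 0.0, None)
        q_fixed = q_clipped * mass_before / np.sum(q_clipped * dp)
        print(q_fixed, "column mass error:", np.sum(q_fixed * dp) - mass_before)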

  9. Estimation of Biogenic VOC Emissions From Ecosystems in the Czech Republic

    NASA Astrophysics Data System (ADS)

    Zemankova, K.; Brechler, J.

    2008-12-01

    Volatile organic compounds (VOC) are one of the crucial elements in the photochemical reactions in the atmosphere that lead to tropospheric ozone formation. When modelling concentrations of low-level ozone, proper information about VOC sources and sinks is necessary. VOC are emitted into the atmosphere from both anthropogenic and natural sources. It has been shown in previous studies (e.g. Simpson et al., 1995) that the contribution of volatile organic compounds emitted from biogenic sources to the total amount of VOC in the atmosphere can be significant. Our work focuses on the estimation of VOC emissions from natural ecosystems, most importantly from forests, and their application in photochemical modelling. Preliminary results have shown that inclusion of biogenic emissions in model input data improves the resulting ozone concentrations, which encouraged us to work on a detailed biogenic VOC emission estimation. Using a 1x1 km CORINE Land Cover grid over the area of the Czech Republic, emissions from deciduous, coniferous and mixed forests were estimated by applying the algorithm of Guenther et al. (1995). According to data from the Forest Management Institute, each cell of the model grid was assigned a proportional composition of each of thirteen tree species that are the main forest constituents in the Czech Republic. By aggregating the tree species composition data with the land cover categories, emission factors for the particular chemical compounds (isoprene, monoterpenes) were obtained for each cell. Annual emissions of VOC on an hourly basis were calculated for the domain of the Czech Republic. Biogenic emissions of isoprene and monoterpenes were compared with the emission inventory of anthropogenic sources. The inventory is provided by the Czech Hydrometeorological Institute and covers emissions from major stationary sources, area sources (including domestic heating) and mobile sources. Our results show that natural emissions are approximately half the amount of organic compounds emitted from anthropogenic sources. References: Simpson D., Guenther A., Hewitt C.N. and Steinbrecher R., 1995. Biogenic emissions in Europe. 1. Estimates and uncertainties. J. Geophys. Res. 100(D11), 22875-22890. Guenther A., Hewitt N., Erickson D., Fall R., Geron Ch., Graedel T., Harley P., Klinger L., Lerdau M., McKay W. A., Pierce T., Scholes B., Steinbrecher R., Tallamraju R., Taylor J., Zimmerman P., 1995. Global model of natural organic compound emissions. J. Geophys. Res. 100, 8873-8892.
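
    The Guenther et al. (1995) algorithm referenced above computes the emission flux as an emission factor times foliar density times dimensionless light and temperature activity factors; the sketch below uses the standard G95 coefficients, with an illustrative emission factor and foliar density.

        # Guenther et al. (1995) isoprene light/temperature activity factor.
        import math

        ALPHA, CL1 = 0.0027, 1.066       # light response coefficients
        CT1, CT2 = 95000.0, 230000.0     # J/mol
        TS, TM, R = 303.0, 314.0, 8.314  # K, K, J/mol/K

        def gamma_isoprene(Q, T):
            """Q: PAR in umol/m2/s; T: leaf temperature in K."""
            c_l = ALPHA * CL1 * Q / math.sqrt(1.0 + ALPHA**2 * Q**2)
            c_t = (math.exp(CT1 * (T - TS) / (R * TS * T))
                   / (1.0 + math.exp(CT2 * (T - TM) / (R * TS * T))))
            return c_l * c_t

        eps = 24.0   # emission factor, ug/g/h (illustrative, species dependent)
        D = 400.0    # foliar density, g/m2 (illustrative)
        print("isoprene flux ~", eps * D * gamma_isoprene(1000.0, 298.15), "ug/m2/h")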

  10. Source localization of rhythmic ictal EEG activity: a study of diagnostic accuracy following STARD criteria.

    PubMed

    Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders

    2013-10-01

    Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model-local autoregressive average (LAURA)-was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard-the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. Reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results, as compared with the discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA) for the ictal EEG signals selected with a standardized method is feasible in clinical practice and has a good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
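
    The arithmetic behind these accuracy figures follows directly from the 2x2 table of results against the reference standard; the counts below are illustrative, chosen to reproduce the reported sensitivity and specificity, not the study's data.

        # Diagnostic accuracy metrics from a 2x2 table (illustrative counts).
        tp, fp, fn, tn = 14, 4, 6, 13

        sens = tp / (tp + fn)                       # 0.70
        spec = tn / (tn + fp)                       # ~0.76
        lr_pos = sens / (1.0 - spec)                # positive likelihood ratio
        n = tp + fp + fn + tn
        po = (tp + tn) / n                          # observed agreement
        pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
        kappa = (po - pe) / (1.0 - pe)              # Cohen's kappa
        print(f"sens {sens:.2f}, spec {spec:.2f}, LR+ {lr_pos:.1f}, kappa {kappa:.2f}")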

  11. Copper content in lake sediments as a tracer of urban emissions: evaluation through a source-transport-storage model.

    PubMed

    Cui, Qing; Brandt, Nils; Sinha, Rajib; Malmström, Maria E

    2010-06-01

    A coupled source-transport-storage model was developed to determine the origin and path of copper from materials and goods in use in the urban drainage area and the fate of copper in local recipient lakes. The model was applied and tested using five small lakes in Stockholm, Sweden. In the case of the polluted lakes Råcksta Träsk, Trekanten and Långsjön, the source strengths of copper identified by the model were found to be well linked with independently observed copper contents in the lake sediments. The model results also showed that traffic emissions, especially from brake linings, dominated the total load in all five cases. Sequential sedimentation and burial proved to be the most important fate processes for copper in all lakes except Råcksta Träsk, where outflow dominated. The model indicated that the sediment copper content can be used as a tracer of the urban diffuse copper source strength, but that the response to changes in source strength is fairly slow (decades). Major uncertainties in the source model were related to the management of stormwater in the urban area, the rate of wear of brake linings, and the weathering of copper roofs. The uncertainty of the coupled model is additionally affected mainly by parameters quantifying the sedimentation and burial processes, such as the particulate fraction, the settling velocity of particles, and the sedimentation rate. As a demonstration example, we used the model to predict the response of the sediment copper level to a decrease in the copper load from the urban catchment in one of the case study lakes. Copyright (c) 2010 Elsevier B.V. All rights reserved.
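
    The storage dynamics can be caricatured with a one-box model (parameter values invented): the water-column copper mass responds to the urban load with first-order losses to sediment and outflow, which is why the sediment record lags source changes by years to decades.

        # One-box source-transport-storage caricature (invented parameters).
        from scipy.integrate import solve_ivp

        k_sed, k_out = 0.8, 0.2                    # loss rates, 1/yr

        def load(t):                               # urban Cu load, kg/yr
            return 50.0 if t < 10.0 else 25.0      # load halved at t = 10 yr

        sol = solve_ivp(lambda t, M: load(t) - (k_sed + k_out) * M,
                        t_span=(0.0, 40.0), y0=[50.0], max_step=0.1)
        print("water-column Cu mass 30 yr after the cut: %.1f kg" % sol.y[0, -1])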

  12. Enhanced and updated spatially referenced statistical assessment of dissolved-solids load sources and transport in streams of the Upper Colorado River Basin

    USGS Publications Warehouse

    Miller, Matthew P.; Buto, Susan G.; Lambert, Patrick M.; Rumsey, Christine A.

    2017-03-07

    Approximately 6.4 million tons of dissolved solids are discharged from the Upper Colorado River Basin (UCRB) to the Lower Colorado River Basin each year. This results in substantial economic damages, and tens of millions of dollars are spent annually on salinity control projects designed to reduce salinity loads in surface waters of the UCRB. Dissolved solids in surface water and groundwater have been studied extensively over the past century, and these studies have contributed to a conceptual understanding of sources and transport of dissolved solids. This conceptual understanding was incorporated into a Spatially Referenced Regressions on Watershed Attributes (SPARROW) model to examine sources and transport of dissolved solids in the UCRB. The results of this model were published in 2009. The present report documents the methods and data used to develop an updated dissolved-solids SPARROW model for the UCRB, and incorporates data defining current basin attributes not available in the previous model, including delineation of irrigated lands by irrigation type (sprinkler or flood irrigation), and calibration data from additional monitoring sites. Dissolved-solids loads estimated for 312 monitoring sites were used to calibrate the SPARROW model, which predicted loads for each of 10,789 stream reaches in the UCRB. The calibrated model provided a good fit to the calibration data as evidenced by R2 and yield R2 values of 0.96 and 0.73, respectively, and a root-mean-square error of 0.47. The model included seven geologic sources that have estimated dissolved-solids yields ranging from approximately 1 to 45 tons per square mile (tons/mi2). Yields generated from irrigated agricultural lands are substantially greater than those from geologic sources, with sprinkler irrigated lands generating an average of approximately 150 tons/mi2 and flood irrigated lands generating between 770 and 2,300 tons/mi2 depending on underlying lithology. The coefficients estimated for six landscape transport characteristics that influence the delivery of dissolved solids from sources to streams are consistent with the process understanding of dissolved-solids loading to streams in the UCRB. Dissolved-solids loads and the proportion of those loads among sources in the entire UCRB as well as in major tributaries in the basin are reported, as are loads generated from irrigated lands, rangelands, Bureau of Land Management (BLM) lands, and grazing allotments on BLM lands. Model-predicted loads also are compared with load estimates from 1957 and 1991 at selected locations in three divisions of the UCRB. At the basin scale, the model estimates that 32 percent of the dissolved-solids loads are from irrigated agricultural land sources that compose less than 2 percent of the land area in the UCRB. This estimate is less than previously reported estimates of 40 to 45 percent of basin-scale dissolved-solids loads from irrigated agricultural land sources. This discrepancy could be a result of the implementation of salinity control projects in the basin. Notably, results indicate that the conversion of flood irrigated agricultural lands to sprinkler irrigated agricultural lands is a likely process contributing to the temporal decrease in dissolved-solids loads from irrigated lands.

  13. Use of a dynamic simulation model to understand nitrogen cycling in the middle Rio Grande, NM.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meixner, Tom; Tidwell, Vincent Carroll; Oelsner, Gretchen

    2008-08-01

    Water quality often limits the potential uses of scarce water resources in semiarid and arid regions. To best manage water quality one must understand the sources and sinks of both solutes and water to the river system. Nutrient concentration patterns can identify source and sink locations, but cannot always determine biotic processes that affect nutrient concentrations. Modeling tools can provide insight into these large-scale processes. To address questions about large-scale nitrogen removal in the Middle Rio Grande, NM, we created a system dynamics nitrate model using an existing integrated surface water-groundwater model of the region to evaluate our conceptual models of uptake and denitrification as potential nitrate removal mechanisms. We modeled denitrification in groundwater as a first-order process dependent only on concentration and used a 5% denitrification rate. Uptake was assumed to be proportional to transpiration and was modeled as a percentage of the evapotranspiration calculated within the model multiplied by the nitrate concentration in the water being transpired. We modeled riparian uptake as 90% and agricultural uptake as 50% of the respective evapotranspiration rates. Using these removal rates, our model results suggest that riparian uptake, agricultural uptake and denitrification in groundwater are all needed to produce the observed nitrate concentrations in the groundwater, conveyance channels, and river as well as the seasonal concentration patterns. The model results indicate that a total of 497 metric tons of nitrate-N are removed from the Middle Rio Grande annually. Where river nitrate concentrations are low and there are no large nitrate sources, nitrate behaves nearly conservatively and riparian and agricultural uptake are the most important removal mechanisms. Downstream of a large wastewater nitrate source, denitrification and agricultural uptake were responsible for approximately 90% of the nitrogen removal.
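
    A minimal sketch of the removal terms as described in the abstract (first-order groundwater denitrification plus uptake proportional to evapotranspiration); the function signature and units are our own illustration, not the published system dynamics model:

        # Illustrative sketch (not the cited system-dynamics model): nitrate removal
        # as first-order groundwater denitrification plus ET-proportional uptake.
        def nitrate_removal(c_no3, gw_no3_mass, et_riparian, et_ag,
                            f_denit=0.05, f_rip=0.90, f_ag=0.50):
            """ET volumes times concentration give the nitrate mass carried by
            transpired water, as in the abstract; all inputs are illustrative."""
            denit = f_denit * gw_no3_mass              # first-order groundwater loss
            uptake_rip = f_rip * et_riparian * c_no3   # riparian transpiration share
            uptake_ag = f_ag * et_ag * c_no3           # agricultural share
            return denit + uptake_rip + uptake_ag

        # Made-up numbers, purely to show the call signature:
        print(nitrate_removal(c_no3=0.01, gw_no3_mass=3000.0,
                              et_riparian=1.5e4, et_ag=2.0e4))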

  14. Application of genetic algorithm for the simultaneous identification of atmospheric pollution sources

    NASA Astrophysics Data System (ADS)

    Cantelli, A.; D'Orta, F.; Cattini, A.; Sebastianelli, F.; Cedola, L.

    2015-08-01

    A computational model is developed for retrieving the positions and emission rates of unknown pollution sources, under steady-state conditions, starting from measurements of pollutant concentrations. The approach is based on the minimization of a fitness function using a genetic algorithm paradigm. The model is tested against both pollutant concentrations generated with a Gaussian model at 25 points in a 3-D test-case domain (1000 m × 1000 m × 50 m) and experimental data, namely the Prairie Grass field experiment, in which about 600 receptors were located along five concentric semicircular arcs, and the Fusion Field Trials 2007. The results show that the computational model can efficiently retrieve up to three different unknown sources.
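
    A hedged sketch of the general approach, not the authors' implementation: a simple genetic algorithm minimizes the misfit between observed concentrations and a toy steady-state Gaussian-plume forward model to recover one source's position and emission rate. The plume parameterization and GA settings below are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(0)

        def plume(params, rec):
            """Toy ground-level Gaussian plume, wind along +x; rec is (n, 2)."""
            x0, y0, q = params
            dx = rec[:, 0] - x0
            dy = rec[:, 1] - y0
            dx = np.where(dx > 1.0, dx, np.nan)       # receptors upwind see nothing
            sy = 0.08 * dx                            # crude dispersion growth
            c = q / (2.0 * np.pi * sy**2) * np.exp(-dy**2 / (2 * sy**2))
            return np.nan_to_num(c)

        def fitness(params, rec, obs):
            return np.sum((plume(params, rec) - obs) ** 2)

        def ga(rec, obs, pop=60, gens=200):
            lo, hi = np.array([0, 0, 0.1]), np.array([1000, 1000, 10])
            p = rng.uniform(lo, hi, size=(pop, 3))
            for _ in range(gens):
                f = np.array([fitness(ind, rec, obs) for ind in p])
                elite = p[np.argsort(f)[: pop // 2]]
                # uniform crossover among elites plus Gaussian mutation
                parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
                mask = rng.random((pop, 3)) < 0.5
                children = np.where(mask, parents[:, 0], parents[:, 1])
                children += rng.normal(0, 0.02, children.shape) * (hi - lo)
                p = np.clip(children, lo, hi)
                p[0] = elite[0]                       # elitism
            return p[0]

        rec = rng.uniform(0, 1000, size=(25, 2))      # 25 receptors, as in the test case
        truth = np.array([200.0, 500.0, 2.0])
        print(ga(rec, plume(truth, rec)))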

  15. Ignition probability of polymer-bonded explosives accounting for multiple sources of material stochasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu

    2014-05-07

    Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in material microstructural morphology; the second source involves random fluctuations in grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and bulk and interfacial dissipation to quantify the time to criticality (t_c) of individual samples, allowing the probability distribution of the time to criticality that results from each source of stochastic variation for a material to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity. The first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing the stochasticity in material behavior that arises out of multiple types of uncertainty associated with the structure, design, synthesis and processing of materials.
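
    For illustration only (the distribution parameters are invented, not the paper's), the series-type superposition of two stochastic ignition mechanisms can be written as:

        import numpy as np

        def weibull_cdf(t, eta, beta):
            return 1.0 - np.exp(-(np.asarray(t) / eta) ** beta)

        t = np.linspace(1, 50, 200)                   # illustrative time axis
        p_micro = weibull_cdf(t, eta=20.0, beta=3.0)  # microstructure variation
        p_bond = weibull_cdf(t, eta=25.0, beta=2.0)   # interfacial-strength variation

        # "Series" superposition: ignition occurs if either mechanism
        # reaches criticality by time t.
        p_series = 1.0 - (1.0 - p_micro) * (1.0 - p_bond)
        # A nested description would replace the pair by a single effective
        # Weibull whose (eta, beta) are fit to p_series.
        print(p_series[:3])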

  16. 76 FR 35340 - Airworthiness Directives; Fokker Services B.V. Model F.28 Mark 0070 and 0100 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-17

    .... Under certain conditions, this may result in an ignition source in the wing tank vapour space. This..., 1200 New Jersey Avenue, SE., Washington, DC. FOR FURTHER INFORMATION CONTACT: Tom Rodriguez, Aerospace...-probe.

  17. Studies of acoustic emission from point and extended sources

    NASA Technical Reports Server (NTRS)

    Sachse, W.; Kim, K. Y.; Chen, C. P.

    1986-01-01

    The use of simulated and controlled acoustic emission signals forms the basis of a powerful tool for the detailed study of various deformation and wave interaction processes in materials. The results of experiments and signal analyses of acoustic emission resulting from point sources such as various types of indentation-produced cracks in brittle materials and the growth of fatigue cracks in 7075-T6 aluminum panels are discussed. Recent work dealing with the modeling and subsequent signal processing of an extended source of emission in a material is reviewed. Results of the forward problem and the inverse problem are presented with the example of a source distributed through the interior of a specimen.

  18. SU-C-BRC-04: Efficient Dose Calculation Algorithm for FFF IMRT with a Simplified Bivariate Gaussian Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, F; Park, J; Barraclough, B

    2016-06-15

    Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF) IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter, and the collimator exchange effect. The in-air fluence was first calculated by back-projecting the edges of the beam-defining devices onto the source plane and integrating the visible source distribution. The effects of the rounded MLC leaf end, tongue-and-groove, and interleaf transmission were taken into account in the back-projection. The in-air fluence was then modified with a fourth-degree polynomial modeling the cone-shaped dose distribution of FFF beams. The planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross-beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6 MV and 10 MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2 mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
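
    The convolution step can be sketched as follows; the kernel widths, weights, field size, and the stand-in fourth-degree profile correction are invented, not the commissioned Elekta values:

        import numpy as np
        from scipy.signal import fftconvolve

        # Minimal sketch of the fluence-times-kernel step: planar dose equals the
        # in-air fluence convolved with a DDK built from three 2D Gaussians.
        def gaussian2d(x, y, sigma, weight):
            return weight * np.exp(-(x**2 + y**2) / (2 * sigma**2))

        ax = np.arange(-50, 51) * 1.0                 # 1 mm grid, mm units
        x, y = np.meshgrid(ax, ax)
        ddk = sum(gaussian2d(x, y, s, w)
                  for s, w in [(1.0, 0.7), (4.0, 0.25), (15.0, 0.05)])
        ddk /= ddk.sum()

        fluence = np.zeros_like(x)
        fluence[30:70, 30:70] = 1.0                   # open ~4x4 cm field
        # cone-shaped FFF profile as a stand-in fourth-degree radial polynomial
        r = np.hypot(x, y)
        fluence *= 1.0 - 1e-5 * r**2 - 1e-9 * r**4
        dose = fftconvolve(fluence, ddk, mode="same")
        print(dose.max())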

  19. Source insights into the 11-h daytime and nighttime fine ambient particulate matter in China as well as the synthetic studies using the new Multilinear Engine 2-species ratios (ME2-SR) method.

    PubMed

    Shi, Guoliang; Chen, Gang; Liu, Guirong; Wang, Haiting; Tian, Yingze; Feng, Yinchang

    2016-10-01

    Modeled results are very important for environmental management; unreasonable modeled results can lead to wrong strategies for air pollution management. In this work, an improved physically constrained source apportionment (PCSA) technique known as Multilinear Engine 2-species ratios (ME2-SR) was developed and applied to 11-h daytime and nighttime fine ambient particulate matter in an urban area. First, synthetic studies were carried out to explore the effectiveness of ME2-SR, and the estimated source contributions were compared with the true values. The results suggest that, compared with the positive matrix factorization (PMF) model, the ME2-SR method can obtain more physically reliable outcomes, indicating that ME2-SR is effective, especially when apportioning datasets with no unknown source. Additionally, 11-h daytime and nighttime PM2.5 samples were collected in Tianjin, China, and their sources were identified using the new method and the PMF model. The source contributions calculated by ME2-SR for the daytime PM2.5 samples are resuspended dust (38.91 μg m⁻³, 26.60%), sulfate and nitrate (38.60 μg m⁻³, 26.39%), vehicle exhaust and road dust (38.26 μg m⁻³, 26.16%), and coal combustion (20.14 μg m⁻³, 13.77%); those for the nighttime PM2.5 samples are resuspended dust (18.78 μg m⁻³, 12.91%), sulfate and nitrate (41.57 μg m⁻³, 28.58%), vehicle exhaust and road dust (38.39 μg m⁻³, 26.39%), and coal combustion (36.76 μg m⁻³, 25.27%). Comparisons of the constrained versus unconstrained outcomes clearly suggest that the physical meaning of the ME2-SR results is interpretable and reliable, not only for the specified species values but also for the source contributions. The findings indicate that the ME2-SR method can be a useful tool in source apportionment studies for air pollution management. Copyright © 2016 Elsevier Ltd. All rights reserved.
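
    As a loose analogy (not the ME2-SR algorithm itself, which imposes species-ratio constraints inside the Multilinear Engine), the unconstrained PMF-like baseline it is compared against can be sketched with non-negative matrix factorization on synthetic data:

        import numpy as np
        from sklearn.decomposition import NMF

        # Synthetic data: 60 samples x 12 species generated from 4 sources.
        rng = np.random.default_rng(1)
        true_profiles = rng.random((4, 12))           # source-by-species
        true_contrib = rng.random((60, 4)) * 50       # sample-by-source (ug/m3 scale)
        X = true_contrib @ true_profiles + rng.random((60, 12))

        model = NMF(n_components=4, init="nndsvda", max_iter=1000)
        contributions = model.fit_transform(X)        # estimated G matrix
        profiles = model.components_                  # estimated F matrix
        print(contributions.shape, profiles.shape)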

  20. INEEL Source Water Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sehlke, Gerald

    2003-03-01

    The Idaho National Engineering and Environmental Laboratory (INEEL) covers approximately 890 mi² and includes 12 public water systems that must be evaluated for source water protection purposes under the Safe Drinking Water Act. Because of its size and location, six watersheds and five aquifers could potentially affect the INEEL’s drinking water sources. Based on a preliminary evaluation of the available information, it was determined that the Big Lost River, Birch Creek, and Little Lost River Watersheds and the eastern Snake River Plain Aquifer needed to be assessed. These watersheds were delineated using the United States Geological Survey’s Hydrologic Unit scheme. Well capture zones were originally estimated using the RESSQC module of the Environmental Protection Agency’s Well Head Protection Area model, and the initial modeling assumptions and results were checked by running several scenarios using MODFLOW modeling. After a technical review, the resulting capture zones were expanded to account for the uncertainties associated with changing groundwater flow directions, a thick vadose zone, and other data uncertainties. Finally, all well capture zones at a given facility were merged into a single wellhead protection area at each facility. A contaminant source inventory was conducted, and the results were integrated with the well capture zones, watershed and aquifer information, and facility information using geographic information system technology to complete the INEEL’s Source Water Assessment. Of the INEEL’s 12 public water systems, three rated as low susceptibility (EBR-I, Main Gate, and Gun Range), and the remainder rated as moderate susceptibility. No INEEL public water system rated as high susceptibility. We are using this information to develop a source water management plan from which we will subsequently implement an INEEL-wide source water management program. The results are a very robust set of wellhead protection areas that will protect the INEEL’s public water systems yet are not so conservative as to inhibit the INEEL from carrying out its missions.

  1. The Idaho National Engineering and Environmental Laboratory Source Water Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sehlke, G.

    2003-03-17

    The Idaho National Engineering and Environmental Laboratory (INEEL) covers approximately 890 square miles and includes 12 public water systems that must be evaluated for source water protection purposes under the Safe Drinking Water Act. Because of its size and location, six watersheds and five aquifers could potentially affect the INEEL's drinking water sources. Based on a preliminary evaluation of the available information, it was determined that the Big Lost River, Birch Creek, and Little Lost River Watersheds and the eastern Snake River Plain Aquifer needed to be assessed. These watersheds were delineated using the United States Geological Survey's Hydrologic Unit scheme. Well capture zones were originally estimated using the RESSQC module of the Environmental Protection Agency's Well Head Protection Area model, and the initial modeling assumptions and results were checked by running several scenarios using MODFLOW modeling. After a technical review, the resulting capture zones were expanded to account for the uncertainties associated with changing groundwater flow directions, a thick vadose zone, and other data uncertainties. Finally, all well capture zones at a given facility were merged into a single wellhead protection area at each facility. A contaminant source inventory was conducted, and the results were integrated with the well capture zones, watershed and aquifer information, and facility information using geographic information system technology to complete the INEEL's Source Water Assessment. Of the INEEL's 12 public water systems, three rated as low susceptibility (EBR-I, Main Gate, and Gun Range), and the remainder rated as moderate susceptibility. No INEEL public water system rated as high susceptibility. We are using this information to develop a source water management plan from which we will subsequently implement an INEEL-wide source water management program. The results are a very robust set of wellhead protection areas that will protect the INEEL's public water systems yet are not so conservative as to inhibit the INEEL from carrying out its missions.

  2. Assessing sources of airborne mineral dust and other aerosols, in Iraq

    NASA Astrophysics Data System (ADS)

    Engelbrecht, Johann P.; Jayanty, R. K. M.

    2013-06-01

    Most airborne particulate matter in Iraq comes from mineral dust sources. This paper describes the statistics and modeling of chemical results, specifically those from Teflon® filter samples collected at Tikrit, Balad, Taji, Baghdad, Tallil and Al Asad, in Iraq, in 2006/2007. Methodologies applied to the analytical results include the calculation of correlation coefficients, Principal Components Analysis (PCA), and Positive Matrix Factorization (PMF) modeling. PCA provided a measure of the covariance within the data set, thereby identifying likely point sources and events. These include airborne mineral dusts of silicate and carbonate minerals, gypsum and salts, as well as anthropogenic sources of metallic fumes, possibly from battery smelting operations, and emissions from leaded gasoline vehicles. Five individual PMF factors (source categories) were modeled, four of which were assigned to components of geological dust and the fifth to gasoline vehicle emissions together with battery smelting operations. The four modeled geological components (dust-siliceous, dust-calcic, dust-gypsum, and evaporite) occur in variable ratios at each site and size fraction (TSP, PM10, and PM2.5), and also vary by season. In general, Tikrit and Taji have the largest and Al Asad the smallest percentages of siliceous dust. In contrast, Al Asad has the largest proportion of gypsum, in part representing the gypsiferous soils of that region. Baghdad has the highest proportions of evaporite in both size fractions, ascribed to the highly salinized agricultural soils following millennia of irrigation along the Tigris River valley. Although dust storms along the Tigris and Euphrates River valleys originate from distal sources, the mineralogy bears signatures of local soils and air pollutants.

  3. Merging Models and Biomonitoring Data to Characterize Sources and Pathways of Human Exposure to Organophosphorous Pesticides in the Salinas Valley of California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKone, Thomas E.; Castorina, Rosemary; Kuwabara, Yu

    2006-06-01

    By drawing on human biomonitoring data and limited environmental samples together with outputs from the CalTOX multimedia, multipathway source-to-dose model, we characterize cumulative intake of organophosphorous (OP) pesticides in an agricultural region of California. We assemble regional OP pesticide use, environmental sampling, and biological tissue monitoring data for a large and geographically dispersed population cohort of 592 pregnant Latina women in California (the CHAMACOS cohort). We then use CalTOX with regional pesticide usage data to estimate the magnitude and uncertainty of exposure and intake from local sources. We combine model estimates of intake from local sources with food intake based on national residue data to estimate the cumulative median OP intake for the CHAMACOS cohort, which corresponds to expected levels of urinary dialkylphosphate (DAP) metabolite excretion for this cohort. From these results we develop premises about the relative contributions from different sources and pathways of exposure. We evaluate these premises by comparing the magnitude and variation of DAPs in the CHAMACOS cohort with those of the whole U.S. population using data from the National Health and Nutrition Examination Survey (NHANES). This comparison supports the premise that in both populations diet is the common and dominant exposure pathway. Both the model results and the biomarker comparison support the observation that the CHAMACOS population has a statistically significantly higher intake of OP pesticides, which appears as an almost constant additional dose among all participants. We attribute the magnitude and small variance of this intake to non-dietary exposure in residences from local sources.

  4. Modeling diffuse phosphorus emissions to assist in best management practice designing

    NASA Astrophysics Data System (ADS)

    Kovacs, Adam; Zessner, Matthias; Honti, Mark; Clement, Adrienne

    2010-05-01

    A diffuse emission modeling tool has been developed that is appropriate for supporting decision-making in watershed management. The PhosFate (Phosphorus Fate) tool allows planning of best management practices (BMPs) in catchments and simulation of their possible impacts on phosphorus (P) loads. PhosFate is a simple fate model that calculates diffuse P emissions and their transport within a catchment. The model is a semi-empirical, catchment-scale, distributed-parameter and long-term (annual) average model. It has two main parts: (a) the emission model and (b) the transport model. The main input data are digital maps (elevation, soil types and landuse categories), statistical data (crop yields, animal numbers, fertilizer amounts and precipitation distribution) and point information (precipitation, meteorology, soil humus content, point source emissions and reservoir data). The emission model calculates the diffuse P emissions at their source. It computes the basic elements of the hydrology as well as the soil loss. The model determines the accumulated P surplus of the topsoil and distinguishes the dissolved and particulate P forms. Emissions are calculated according to the different pathways (surface runoff, erosion and leaching). The main outputs are the spatial distributions (cell values) of the runoff components, the soil loss and the P emissions within the catchment. The transport model joins the independent cells based on the flow tree and follows the further fate of emitted P from each cell to the catchment outlets. Surface runoff and P fluxes are accumulated along the tree, and the field and in-stream retention of the particulate forms are computed. For base flow and subsurface P loads, only the channel transport is taken into account because the hydrogeological conditions are less well known. During the channel transport, point sources and reservoirs are also considered. The main results of the transport algorithm are the discharge and the dissolved and sediment-bound P load values at any arbitrary point within the catchment. Finally, a simple design procedure has been built to plan BMPs in catchments, simulate their possible impacts on diffuse P fluxes, and calculate their approximate costs. Both source- and transport-controlling measures have been incorporated into the planning procedure. The model also allows examination of the impacts of changes in fertilizer application, point source emissions, and climate on the river loads. Besides this, a simple optimization algorithm has been developed to select the most effective source areas (real hot spots), which should be targeted by the interventions. The fate model performed well in Hungarian pilot catchments. Using the calibrated and validated model, different management scenarios were worked out and their effects and costs evaluated and compared. The results show that the approach is suitable for effectively designing BMPs at the local scale. Combined application of source- and transport-controlling BMPs can result in high P-reduction efficiency. Optimization of the interventions can markedly reduce the area demand of the necessary BMPs, and consequently the establishment costs can be decreased. The model can be coupled with a larger-scale catchment model to form a "screening and planning" modeling system.
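
    A simplified sketch of the transport step (our own illustration of routing along a flow tree with per-cell retention, not the PhosFate code):

        # Emitted P is routed cell-to-cell along a flow tree, with a retention
        # factor applied to the particulate fraction at each step.
        def route_load(downstream, emission, retention=0.2):
            """downstream: dict cell -> next cell (None at the outlet);
            emission: dict cell -> locally emitted P; returns outlet load."""
            outlet_load = 0.0
            for cell, load in emission.items():
                node = cell
                while node is not None:
                    load *= (1.0 - retention)         # field/in-stream retention
                    node = downstream[node]
                outlet_load += load
            return outlet_load

        tree = {"A": "C", "B": "C", "C": "D", "D": None}
        print(route_load(tree, {"A": 10.0, "B": 5.0, "C": 2.0}))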

  5. Air Pollution Source/receptor Relationships in South Coast Air Basin, CA

    NASA Astrophysics Data System (ADS)

    Gao, Ning

    This research project includes the application of some existing receptor models to study the air pollution source/receptor relationships in the South Coast Air Basin (SoCAB) of southern California, the development of a new receptor model, and the testing and modification of some existing models. The existing receptor models used include principal component factor analysis (PCA), potential source contribution function (PSCF) analysis, Kohonen's neural network combined with Prim's minimal spanning tree (TREE-MAP), and direct trilinear decomposition followed by matrix reconstruction. The ambient concentration measurements used in this study are a subset of the data collected during the 1987 field exercise of the Southern California Air Quality Study (SCAQS). They consist of a number of gaseous and particulate pollutants analyzed from samples collected by SCAQS samplers at eight sampling sites: Anaheim, Azusa, Burbank, Claremont, Downtown Los Angeles, Hawthorne, Long Beach, and Rubidoux. Based on information from emission inventories, meteorology, and ambient concentrations, this receptor modeling study has revealed mechanisms that influence the air quality in SoCAB, including the following. The SO2 collected at the sampling sites is contributed mainly by refineries in the coastal area and by ships equipped with oil-fired boilers offshore. Combustion of fossil fuel by automobiles dominates the emission of NOx, which is subsequently transformed and collected at the sampling sites. Electric power plants also contribute HNO3 to the sampling sites. A large feedlot in the eastern region of SoCAB has been identified as the major source of NH3. Possible contributions from other industrial sources such as smelters and incinerators were also revealed. The results of this study also suggest the possibility of DMS (dimethylsulfide) and NH3 emissions from offshore sediments that have been contaminated by waste sludge disposal. The study also found that non-anthropogenic sources account for many of the chemical components observed at the sampling sites, such as sea-salt particles, soil particles, and Cl emissions from the Mojave Desert. The potential and limitations of the receptor models have been evaluated, and some modifications have been made to improve the value of the models. A source apportionment method has been developed based on the application results of the potential source contribution function (PSCF) analysis.

  6. Modeling of surface-dominated plasmas: from electric thruster to negative ion source.

    PubMed

    Taccogna, F; Schneider, R; Longo, S; Capitelli, M

    2008-02-01

    This contribution shows two important applications of the particle-in-cell/Monte Carlo technique to ion sources: modeling of the SPT-100 Hall thruster for space propulsion and of the rf negative ion source for ITER neutral beam injection. In the first case translational degrees of freedom are involved, while in the second case inner degrees of freedom (vibrational levels) are excited. Computational results show that in both cases plasma-wall and gas-wall interactions play a dominant role: secondary electron emission from the lateral ceramic wall of the SPT-100, and electron capture from caesiated surfaces by positive ions and atoms in the rf negative ion source.

  7. Development of a microbial contamination susceptibility model for private domestic groundwater sources

    NASA Astrophysics Data System (ADS)

    Hynds, Paul D.; Misstear, Bruce D.; Gill, Laurence W.

    2012-12-01

    Groundwater quality analyses were carried out on samples from 262 private sources in the Republic of Ireland during the period from April 2008 to November 2010, with microbial quality assessed by thermotolerant coliform (TTC) presence. Assessment of potential microbial contamination risk factors was undertaken at all sources, and local meteorological data were also acquired. Overall, 28.9% of wells tested positive for TTC, with risk analysis indicating that source type (i.e., borehole or hand-dug well), local bedrock type, local subsoil type, groundwater vulnerability, septic tank setback distance, and 48 h antecedent precipitation were all significantly associated with TTC presence (p < 0.05). A number of source-specific design parameters were also significantly associated with bacterial presence. Hierarchical logistic regression with stepwise parameter entry was used to develop a private well susceptibility model, with the final model exhibiting a mean predictive accuracy of >80% (TTC present or absent) when compared to an independent validation data set. Model hierarchies of primary significance are source design (20%), septic tank location (11%), hydrogeological setting (10%), and antecedent 120 h precipitation (2%). Sensitivity analysis shows that the probability of contamination is highly sensitive to septic tank setback distance, with probability increasing linearly with decreases in setback distance. Likewise, contamination probability was shown to increase with increasing antecedent precipitation. Results show that while groundwater vulnerability category is a useful indicator of aquifer susceptibility to contamination, its suitability with regard to source contamination is less clear. The final model illustrates that both localized (well-specific) and generalized (aquifer-specific) contamination mechanisms are involved in contamination events, with localized bypass mechanisms dominant. The susceptibility model developed here could be employed in the appropriate location, design, construction, and operation of private groundwater wells, thereby decreasing the contamination risk, and hence health risk, associated with these sources.
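
    A hedged sketch of the kind of regression used (an ordinary logistic fit rather than the study's hierarchical stepwise procedure, trained on synthetic data rather than the Irish survey) that echoes the reported directions of effect:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Simulated wells: contamination risk rises as septic-tank setback
        # shrinks and as antecedent precipitation increases.
        rng = np.random.default_rng(2)
        setback = rng.uniform(5, 100, 262)            # m to septic tank (invented)
        rain48 = rng.gamma(2.0, 5.0, 262)             # mm in prior 48 h (invented)
        logit = 1.5 - 0.05 * setback + 0.08 * rain48
        y = rng.random(262) < 1 / (1 + np.exp(-logit))  # simulated TTC presence

        X = np.column_stack([setback, rain48])
        clf = LogisticRegression().fit(X, y)
        print(clf.coef_, clf.score(X, y))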

  8. Surfzone alongshore advective accelerations: observations and modeling

    NASA Astrophysics Data System (ADS)

    Hansen, J.; Raubenheimer, B.; Elgar, S.

    2014-12-01

    The sources, magnitudes, and impacts of nonlinear advective accelerations on alongshore surfzone currents are investigated with observations and a numerical model. Previous numerical modeling results have indicated that advective accelerations are an important contribution to the alongshore force balance and are required to understand spatial variations in alongshore currents (which may result in spatially variable morphological change). However, most prior observational studies have neglected advective accelerations in the alongshore force balance. Using a numerical model (Delft3D) to predict optimal sensor locations, a dense array of 26 colocated current meters and pressure sensors was deployed between the shoreline and 3-m water depth over a 200 by 115 m region near Duck, NC, in fall 2013. The array included 7 cross-shore and 3 alongshore transects. Here, observational and numerical estimates of the dominant forcing terms in the alongshore balance (pressure and radiation-stress gradients) and the advective acceleration terms will be compared with each other. In addition, the numerical model will be used to examine the force balance, including sources of velocity gradients, at a higher spatial resolution than is possible with the instrument array. Preliminary numerical results indicate that at O(10-100 m) alongshore scales, bathymetric variations and the ensuing alongshore variations in the wave field and subsequent forcing are the dominant sources of the modeled velocity gradients and advective accelerations. Additional simulations and analysis of the observations will be presented. Funded by NSF and ASDR&E.

  9. Optimizing dynamic downscaling in one-way nesting using a regional ocean model

    NASA Astrophysics Data System (ADS)

    Pham, Van Sy; Hwang, Jin Hwan; Ku, Hyeyun

    2016-10-01

    Dynamical downscaling with nested regional oceanographic models has been demonstrated to be an effective approach both for operationally forecasting sea weather on regional scales and for projecting future climate change and its impact on the ocean. However, when nesting procedures are carried out in dynamic downscaling from a larger-scale model or set of observations to a smaller scale, errors are unavoidable due to differences in grid sizes and updating intervals. The present work assesses the impact of errors produced by nesting procedures on the downscaled results from Ocean Regional Circulation Models (ORCMs). Errors are identified and evaluated based on their sources and characteristics by employing the Big-Brother Experiment (BBE). The BBE uses the same model to produce both the nesting and nested simulations, so it addresses those error sources separately (i.e., without combining the contributions of errors from different sources). Here, we focus on errors resulting from differences in spatial grids, updating intervals, and domain sizes. After the BBE was run separately for diverse cases, a Taylor diagram was used to analyze the results and recommend an optimal combination of grid size, updating period, and domain size. Finally, the suggested setups for the downscaling were evaluated by examining the spatial correlations of variables and the relative magnitudes of variances between the nested model and the original data.
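
    The Taylor-diagram comparison reduces to three statistics; the sketch below computes them for a synthetic nested field against a Big-Brother reference:

        import numpy as np

        def taylor_stats(test, ref):
            """Correlation, standard-deviation ratio, and centered RMS error."""
            t, r = test.ravel() - test.mean(), ref.ravel() - ref.mean()
            corr = (t @ r) / (np.linalg.norm(t) * np.linalg.norm(r))
            sigma_ratio = t.std() / r.std()
            crmse = np.sqrt(np.mean((t - r) ** 2))
            return corr, sigma_ratio, crmse

        rng = np.random.default_rng(4)
        ref = rng.normal(size=(50, 50))               # synthetic Big-Brother field
        test = ref + 0.3 * rng.normal(size=(50, 50))  # synthetic nested field
        print(taylor_stats(test, ref))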

  10. A guide to differences between stochastic point-source and stochastic finite-fault simulations

    USGS Publications Warehouse

    Atkinson, G.M.; Assatourians, K.; Boore, D.M.; Campbell, K.; Motazedian, D.

    2009-01-01

    Why do stochastic point-source and finite-fault simulation models not agree on the predicted ground motions for moderate earthquakes at large distances? This question was posed by Ken Campbell, who attempted to reproduce the Atkinson and Boore (2006) ground-motion prediction equations for eastern North America using the stochastic point-source program SMSIM (Boore, 2005) in place of the finite-source stochastic program EXSIM (Motazedian and Atkinson, 2005) that was used by Atkinson and Boore (2006) in their model. His comparisons suggested that a higher stress drop is needed in the context of SMSIM to produce an average match, at larger distances, with the model predictions of Atkinson and Boore (2006) based on EXSIM; this is so even for moderate magnitudes, which should be well-represented by a point-source model. Why? The answer to this question is rooted in significant differences between point-source and finite-source stochastic simulation methodologies, specifically as implemented in SMSIM (Boore, 2005) and EXSIM (Motazedian and Atkinson, 2005) to date. Point-source and finite-fault methodologies differ in general in several important ways: (1) the geometry of the source; (2) the definition and application of duration; and (3) the normalization of finite-source subsource summations. Furthermore, the specific implementation of the methods may differ in their details. The purpose of this article is to provide a brief overview of these differences, their origins, and implications. This sets the stage for a more detailed companion article, "Comparing Stochastic Point-Source and Finite-Source Ground-Motion Simulations: SMSIM and EXSIM," in which Boore (2009) provides modifications and improvements in the implementations of both programs that narrow the gap and result in closer agreement. These issues are important because both SMSIM and EXSIM have been widely used in the development of ground-motion prediction equations and in modeling the parameters that control observed ground motions.

  11. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    Land surface evapotranspiration plays an important role in the surface energy balance and the water cycle, and there have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted widespread attention from researchers and managers. However, remote sensing technology still carries many uncertainties, arising from model formulation, model inputs, parameterization schemes, and scaling issues in regional estimation. Estimating remotely sensed evapotranspiration (RS_ET) with quantified certainty is necessary but difficult. As a result, it is indispensable to develop validation methods that quantitatively assess the accuracy and error sources of regional RS_ET estimates. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach for evaluating the accuracy and analyzing the spatio-temporal properties of RS_ET at both basin and local scales, and it is appropriate for validating RS_ET at diverse resolutions and time scales. An independent RS_ET validation using this method was performed over the Hai River Basin, China, for 2002-2009 as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and validation data such as water-balance evapotranspiration, MODIS evapotranspiration products, precipitation, and landuse types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared with multi-scale evapotranspiration measurements from the EC and LAS, respectively, using a footprint model over three typical landscapes. Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years; thus, we also address the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC and 4 LAS were acquired to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including the empirical statistical model, the one-source and two-source models, the Penman-Monteith equation based model, the Priestley-Taylor equation based model, and the complementary relationship based model, were used to perform an intercomparison. All the results from the two cases of RS_ET validation showed that the proposed validation methods are reasonable and feasible.

  12. Three Dimensional Thermal Model of Newberry Volcano, Oregon

    DOE Data Explorer

    Trenton Cladouhos

    2015-01-30

    Final results of a 3D finite difference thermal model of Newberry Volcano, Oregon. Model data are formatted as a text file with four data columns (X, Y, Z, T). X and Y coordinates are in UTM (NAD83 Zone 10N), Z is elevation from mean sea level (meters), and T is temperature in °C. The model domain is 40 km × 40 km × 12.5 km, with a grid node spacing of 100 m in the X, Y, and Z directions. A symmetric cylinder-shaped magmatic heat source centered on the present-day caldera is the modeled heat source. The center of the modeled body is at -1700 m (elevation), and the body is 600 m thick with a radius of 8700 m. This is the best-fit result from 2D modeling of the west flank of the volcano. The model accounts for temperature-dependent thermal properties and the latent heat of crystallization. For additional details, assumptions made, data used, and a discussion of the validity of the model see Frone, 2015 (http://search.proquest.com/docview/1717633771).
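
    A loading sketch for the stated four-column layout; the filename is a placeholder and the reshape assumes x-major row ordering, which should be checked against the actual file:

        import numpy as np

        # With 100 m node spacing the columns can be reshaped onto a regular
        # grid for slicing or interpolation.
        data = np.loadtxt("newberry_thermal_model.txt")   # placeholder path
        x, y, z, temp = data.T
        xs, ys, zs = (np.unique(v) for v in (x, y, z))
        grid = temp.reshape(len(xs), len(ys), len(zs))    # assumes x-major ordering
        print(grid.shape, temp.max(), "degC max")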

  13. Applying spatial regression to evaluate risk factors for microbiological contamination of urban groundwater sources in Juba, South Sudan

    NASA Astrophysics Data System (ADS)

    Engström, Emma; Mörtberg, Ulla; Karlström, Anders; Mangold, Mikael

    2017-06-01

    This study developed a methodology for statistically assessing groundwater contamination mechanisms, focusing on microbial water pollution in low-income regions. Risk factors for faecal contamination of groundwater-fed drinking-water sources were evaluated in a case study in Juba, South Sudan. The study was based on counts of thermotolerant coliforms in water samples from 129 sources, collected by the humanitarian aid organisation Médecins Sans Frontières in 2010. The factors included hydrogeological settings, land use, and socio-economic characteristics. The results showed that the residuals of a conventional probit regression model had a significant positive spatial autocorrelation (Moran's I = 3.05, I-stat = 9.28); therefore, a spatial model was developed that had better goodness-of-fit to the observations. The most significant factor in this model (p-value 0.005) was the distance from a water source to the nearest Tukul area, an area of informal settlements that lack sanitation services. It is thus recommended that future remediation and monitoring efforts in the city be concentrated in such low-income regions. The spatial model differed from the conventional approach: in contrast to the latter, lowland topography was not significant at the 5% level, as the p-value was 0.074 in the spatial model and 0.040 in the traditional model. This study showed that statistical risk-factor assessments of groundwater contamination need to consider spatial interactions when the water sources are located close to each other. Future studies might further investigate the cut-off distance that reflects spatial autocorrelation. In particular, these results motivate further research on urban groundwater quality.
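
    The spatial-autocorrelation check can be sketched as a direct Moran's I computation on model residuals; the weight-matrix construction and all inputs below are synthetic illustrations, not the study's data:

        import numpy as np

        # Moran's I of residuals under a row-standardized distance-band
        # weight matrix.
        def morans_i(res, w):
            res = res - res.mean()
            num = res @ w @ res
            return (len(res) / w.sum()) * num / (res @ res)

        rng = np.random.default_rng(3)
        xy = rng.uniform(0, 10, (129, 2))             # 129 sources, as sampled
        d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
        w = ((d > 0) & (d < 2.0)).astype(float)       # neighbors within 2 km, say
        w /= np.maximum(w.sum(axis=1, keepdims=True), 1)  # row-standardize
        residuals = rng.normal(size=129)
        print(morans_i(residuals, w))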

  14. SWAT Model Configuration, Calibration and Validation for Lake Champlain Basin

    EPA Pesticide Factsheets

    The Soil and Water Assessment Tool (SWAT) model was used to develop phosphorus loading estimates for sources in the Lake Champlain Basin. This document describes the model setup and parameterization, and presents calibration results.

  15. Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames

    NASA Astrophysics Data System (ADS)

    Schlup, Jason; Blanquart, Guillaume

    2018-03-01

    The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.

  16. Advance and application of the stratigraphic simulation model 2D- SedFlux: From tank experiment to geological scale simulation

    NASA Astrophysics Data System (ADS)

    Kubo, Yu'suke; Syvitski, James P. M.; Hutton, Eric W. H.; Paola, Chris

    2005-07-01

    The stratigraphic simulation model 2D-SedFlux is further developed and applied to a turbidite experiment in a subsiding minibasin. The new module dynamically simulates evolving hyperpycnal flows and their interaction with the basin bed. Comparison between the numerical results and the experimental results verifies the ability of 2D-SedFlux to predict the distribution of the sediments and the possible feedback from subsidence. The model was subsequently applied to geological-scale minibasins such as those located in the Gulf of Mexico. Distance from the sediment source is determined to be more influential than sediment entrapment in the upstream minibasin. The results suggest that the efficiency of sediment entrapment by a basin was not influenced by the distance from the sediment source.

  17. A global time-dependent model of thunderstorm electricity. I - Mathematical properties of the physical and numerical models

    NASA Technical Reports Server (NTRS)

    Browning, G. L.; Tzur, I.; Roble, R. G.

    1987-01-01

    A time-dependent model is introduced that can be used to simulate the interaction of a thunderstorm with its global electrical environment. The model solves the continuity equation of the Maxwell current, which is assumed to be composed of the conduction, displacement, and source currents. Boundary conditions which can be used in conjunction with the continuity equation to form a well-posed initial-boundary value problem are determined. Properties of various components of solutions of the initial-boundary value problem are analytically determined. The results indicate that the problem has two time scales, one determined by the background electrical conductivity and the other by the time variation of the source function. A numerical method for obtaining quantitative results is introduced, and its properties are studied. Some simulation results on the evolution of the displacement and conduction currents during the electrification of a storm are presented.
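
    In the notation suggested by the abstract (our rendering, not the paper's equations), the continuity equation of the Maxwell current reads

        \nabla \cdot \mathbf{J}_{\mathrm{tot}}
          = \nabla \cdot \left( \sigma \mathbf{E}
            + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
            + \mathbf{J}_{\mathrm{source}} \right) = 0,

    where the conduction term \sigma \mathbf{E} introduces the fast relaxation time scale \varepsilon_0 / \sigma set by the background conductivity, and \mathbf{J}_{\mathrm{source}} introduces the slower time scale of the storm's evolution, matching the two time scales identified in the analysis.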

  18. Sources of fine particles in the South Coast area, California

    NASA Astrophysics Data System (ADS)

    Kim, Eugene; Turkiewicz, Katarzyna; Zulawnick, Sylvia A.; Magliano, Karen L.

    2010-08-01

    PM2.5 (particulate matter less than 2.5 μm in aerodynamic diameter) speciation data collected between 2003 and 2005 at two United States Environmental Protection Agency (US EPA) Speciation Trends Network monitoring sites in the South Coast area, California, were analyzed to identify major PM2.5 sources as part of the State Implementation Plan development. Eight and nine major PM2.5 sources were identified in LA and Rubidoux, respectively, through PMF2 analyses. Similar to a previous study analyzing earlier data (Kim and Hopke, 2007a), secondary particles contributed the most to the PM2.5 concentrations: 53% in LA and 59% in Rubidoux. The next highest contributors were diesel emissions (11%) in LA and gasoline vehicle emissions (10%) in Rubidoux. Most of the source contributions were lower than those from the earlier study. However, the average source contributions from airborne soil, sea salt, and aged sea salt in LA and biomass smoke in Rubidoux increased. To validate the apportioned sources in this study, PMF2 results were compared with those obtained from EPA PMF (US EPA, 2005). Both models identified the same number of major sources, and the resolved source profiles and contributions were similar at the two monitoring sites. The minor differences in the results, caused by differences in the least squares algorithm and non-negativity constraints between the two models, did not affect the source identifications.

  19. Does External Knowledge Sourcing Enhance Market Performance? Evidence from the Korean Manufacturing Industry

    PubMed Central

    Lee, Kibaek; Yoo, Jaeheung; Choi, Munkee; Zo, Hangjung; Ciganek, Andrew P.

    2016-01-01

    Firms continuously search for external knowledge that can contribute to product innovation, which may ultimately increase market performance. The relationship between external knowledge sourcing and market performance is not well documented. The extant literature primarily examines the causal relationship between external knowledge sources and product innovation performance, or identifies factors that moderate the relationship between external knowledge sourcing and product innovation. Non-technological innovations, such as organizational and marketing innovations, intervene in the process from external knowledge sourcing to product innovation to market performance, but have not been extensively examined. This study addresses two research questions: does external knowledge sourcing lead to market performance, and how does external knowledge sourcing interact with a firm's different innovation activities to enhance market performance? This study proposes a comprehensive model to capture the causal mechanism from external knowledge sourcing to market performance. The research model was tested using survey data from manufacturing firms in South Korea, and the results demonstrate a strong statistical relationship along the path from external knowledge sourcing (EKS) to product innovation performance (PIP) to market performance (MP). Organizational innovation is an antecedent of EKS, while marketing innovation is a consequence of EKS that significantly influences PIP and MP. The results imply that any potential EKS effort should also consider organizational innovations, which may ultimately enhance market performance. Theoretical and practical implications are discussed, as well as concluding remarks. PMID:28006022

  20. Does External Knowledge Sourcing Enhance Market Performance? Evidence from the Korean Manufacturing Industry.

    PubMed

    Lee, Kibaek; Yoo, Jaeheung; Choi, Munkee; Zo, Hangjung; Ciganek, Andrew P

    2016-01-01

    Firms continuously search for external knowledge that can contribute to product innovation, which may ultimately increase market performance. The relationship between external knowledge sourcing and market performance is not well documented. The extant literature primarily examines the causal relationship between external knowledge sources and product innovation performance, or identifies factors that moderate the relationship between external knowledge sourcing and product innovation. Non-technological innovations, such as organizational and marketing innovations, intervene in the process from external knowledge sourcing to product innovation to market performance, but have not been extensively examined. This study addresses two research questions: does external knowledge sourcing lead to market performance, and how does external knowledge sourcing interact with a firm's different innovation activities to enhance market performance? This study proposes a comprehensive model to capture the causal mechanism from external knowledge sourcing to market performance. The research model was tested using survey data from manufacturing firms in South Korea, and the results demonstrate a strong statistical relationship along the path from external knowledge sourcing (EKS) to product innovation performance (PIP) to market performance (MP). Organizational innovation is an antecedent of EKS, while marketing innovation is a consequence of EKS that significantly influences PIP and MP. The results imply that any potential EKS effort should also consider organizational innovations, which may ultimately enhance market performance. Theoretical and practical implications are discussed, as well as concluding remarks.

  1. Fermi Large Area Telescope third source catalog

    DOE PAGES

    Acero, F.; Ackermann, M.; Ajello, M.; ...

    2015-06-12

    Here, we present the third Fermi Large Area Telescope (LAT) source catalog (3FGL) of sources in the 100 MeV–300 GeV range. Based on the first 4 yr of science data from the Fermi Gamma-ray Space Telescope mission, it is the deepest yet in this energy range. Relative to the Second Fermi LAT catalog, the 3FGL catalog incorporates twice as much data, as well as a number of analysis improvements, including improved calibrations at the event reconstruction level, an updated model for Galactic diffuse γ-ray emission, a refined procedure for source detection, and improved methods for associating LAT sources with potential counterparts at other wavelengths. The 3FGL catalog includes 3033 sources above 4σ significance, with source location regions, spectral properties, and monthly light curves for each. Of these, 78 are flagged as potentially being due to imperfections in the model for Galactic diffuse emission. Twenty-five sources are modeled explicitly as spatially extended, and overall 238 sources are considered identified based on angular extent or correlated variability (periodic or otherwise) observed at other wavelengths. For 1010 sources we have not found plausible counterparts at other wavelengths. More than 1100 of the identified or associated sources are active galaxies of the blazar class; several other classes of non-blazar active galaxies are also represented in the 3FGL. Pulsars represent the largest Galactic source class. From source counts of Galactic sources we estimate that the contribution of unresolved sources to the Galactic diffuse emission is ~3% at 1 GeV.

  2. Modelling remediation scenarios in historical mining catchments.

    PubMed

    Gamarra, Javier G P; Brewer, Paul A; Macklin, Mark G; Martin, Katherine

    2014-01-01

    Local remediation measures, particularly those undertaken in historical mining areas, can often be ineffective or even deleterious because erosion and sedimentation processes operate at spatial scales beyond those typically used in point-source remediation. Based on realistic simulations of a hybrid landscape evolution model combined with stochastic rainfall generation, we demonstrate that similar remediation strategies may result in differing effects across three contrasting European catchments depending on their topographic and hydrologic regimes. Based on these results, we propose a conceptual model of catchment-scale remediation effectiveness based on three basic catchment characteristics: the degree of contaminant source coupling, the ratio of contaminated to non-contaminated sediment delivery, and the frequency of sediment transport events.

  3. pyLIMA : The first open source microlensing modeling software

    NASA Astrophysics Data System (ADS)

    Bachelet, Etienne; Street, Rachel; Bozza, Valerio

    2018-01-01

    Microlensing is highly sensitive to planets beyond the snowline that are distributed along the line of sight towards the Galactic Bulge. The WFIRST-AFTA mission should detect about 3000 of these planets and significantly improve our knowledge of planet formation and statistics, complementing results found by transit and radial velocity methods. However, the modeling of microlensing events is challenging in several respects, leading to a highly time-consuming analysis. After a brief summary of these challenges, I will present pyLIMA, the first open source microlensing modeling software. The goals of this software are to be flexible, powerful and user friendly. This presentation will focus on various cases and early results.
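
    As a minimal sketch of what a microlensing modeling code must fit, the snippet below evaluates the textbook point-source point-lens (Paczynski) magnification curve. It illustrates the forward model only and is not pyLIMA's API; the event parameters are assumed values.

    ```python
    # Point-source point-lens magnification A(u) = (u^2+2)/(u*sqrt(u^2+4)),
    # with u(t) = sqrt(u0^2 + ((t-t0)/tE)^2). Parameters below are assumptions.
    import numpy as np

    def paczynski_magnification(t, t0, u0, tE):
        """Point-lens magnification for impact parameter u0 and Einstein
        crossing time tE (days)."""
        u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)   # lens-source separation
        return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

    t = np.linspace(-40, 40, 801)                   # days relative to peak
    A = paczynski_magnification(t, t0=0.0, u0=0.1, tE=20.0)
    print(f"peak magnification: {A.max():.1f}")     # ~10 for u0 = 0.1
    ```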

  4. Astrometric light-travel time signature of sources in nonlinear motion. I. Derivation of the effect and radial motion

    NASA Astrophysics Data System (ADS)

    Anglada-Escudé, G.; Torra, J.

    2006-04-01

    Context: Very precise planned space astrometric missions and recent improvements in imaging capabilities require a detailed review of the assumptions of classical astrometric modeling. Aims: We show that Light-Travel Time must be taken into account in modeling the kinematics of astronomical objects in nonlinear motion, even at stellar distances. Methods: A closed expression to include Light-Travel Time in the current astrometric models with nonlinear motion is provided. Using a perturbative approach, the expression of the Light-Travel Time signature is derived. We propose a practical form of the astrometric modelling to be applied in astrometric data reduction of sources at stellar distances (d > 1 pc). Results: We show that the Light-Travel Time signature is relevant at μas accuracy (or even at mas) depending on the time span of the astrometric measurements. We explain how information on the radial motion of a source can be obtained. Some estimates are provided for known nearby binary systems. Conclusions: Given the obtained results, it is clear that this effect must be taken into account in interpreting precise astrometric measurements. The effect is particularly relevant in measurements performed by the planned astrometric space missions (GAIA, SIM, JASMINE, TPF/DARWIN). An objective criterion is provided to quickly evaluate whether Light-Travel Time modeling is required for a given source or system.
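
    A hedged numerical illustration of the effect: the observed direction corresponds to the source position at the retarded (emission) time, t_e = t_obs - d(t_e)/c, which can be solved by fixed-point iteration. The linearly moving source and its parameters below are assumptions chosen for illustration.

    ```python
    # Solve the retarded-time equation t_e = t_obs - |r(t_e)|/c for a source
    # moving linearly, r(t) = pos0 + vel*t (SI units). Illustrative values only.
    import numpy as np

    C = 299_792_458.0          # speed of light, m/s
    PC = 3.0857e16             # metres per parsec

    def retarded_time(t_obs, pos0, vel, tol=1e-6, max_iter=50):
        t_e = t_obs
        for _ in range(max_iter):
            d = np.linalg.norm(pos0 + vel * t_e)   # distance at emission
            t_new = t_obs - d / C
            if abs(t_new - t_e) < tol:
                break
            t_e = t_new
        return t_e

    # Assumed source at 10 pc receding at 100 km/s: emission preceded the
    # observation by roughly 32.6 yr.
    t_e = retarded_time(0.0, np.array([10 * PC, 0.0, 0.0]),
                        np.array([100e3, 0.0, 0.0]))
    print(f"light-travel time: {-t_e / 3.156e7:.1f} yr")
    ```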

  5. Estimating spatially distributed turbulent heat fluxes from high-resolution thermal imagery acquired with a UAV system.

    PubMed

    Brenner, Claire; Thiem, Christina Elisabeth; Wizemann, Hans-Dieter; Bernhardt, Matthias; Schulz, Karsten

    2017-05-19

    In this study, high-resolution thermal imagery acquired with a small unmanned aerial vehicle (UAV) is used to map evapotranspiration (ET) at a grassland site in Luxembourg. The land surface temperature (LST) information from the thermal imagery is the key input to a one-source and two-source energy balance model. While the one-source model treats the surface as a single uniform layer, the two-source model partitions the surface temperature and fluxes into soil and vegetation components. It thus explicitly accounts for the different contributions of both components to surface temperature as well as turbulent flux exchange with the atmosphere. Contrary to the two-source model, the one-source model requires an empirical adjustment parameter in order to account for the effect of the two components. Turbulent heat flux estimates of both modelling approaches are compared to eddy covariance (EC) measurements using the high-resolution input imagery UAVs provide. In this comparison, the effect of different methods for energy balance closure of the EC data on the agreement between modelled and measured fluxes is also analysed. Additionally, the sensitivity of the one-source model to the derivation of the empirical adjustment parameter is tested. Due to the very dry and hot conditions during the experiment, pronounced thermal patterns developed over the grassland site. These patterns result in spatially variable turbulent heat fluxes. The model comparison indicates that both models are able to derive ET estimates that compare well with EC measurements under these conditions. However, the two-source model, with a more complex treatment of the energy and surface temperature partitioning between the soil and vegetation, outperformed the simpler one-source model in estimating sensible and latent heat fluxes. This is consistent with findings from prior studies. For the one-source model, a time-variant expression of the adjustment parameter (to account for the difference between aerodynamic and radiometric temperature) that depends on the surface-to-air temperature gradient yielded the best agreement with EC measurements. This study showed that the applied UAV system equipped with a dual-camera set-up allows for the acquisition of thermal imagery with high spatial and temporal resolution that illustrates the small-scale heterogeneity of thermal surface properties. The UAV-based thermal imagery therefore provides the means for analysing patterns of LST and other surface properties with a high level of detail that cannot be obtained by traditional remote sensing methods.
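
    For a flavor of the one-source approach described above, the sketch below evaluates the bulk-transfer sensible heat flux from radiometric surface temperature, with an assumed excess resistance standing in for the empirical adjustment parameter; latent heat then follows as the energy balance residual. All values are illustrative assumptions, not the study's parameterization.

    ```python
    # One-source bulk-transfer sensible heat flux from land surface temperature.
    # r_excess stands in for the empirical adjustment (aerodynamic vs
    # radiometric temperature); all numbers are assumptions for illustration.
    import numpy as np

    RHO_AIR = 1.15      # kg m-3, warm near-surface air (assumed)
    CP_AIR = 1005.0     # J kg-1 K-1

    def sensible_heat_flux(t_surface_k, t_air_k, r_ah, r_excess):
        """H = rho * cp * (Ts - Ta) / (r_ah + r_excess)  [W m-2]."""
        return RHO_AIR * CP_AIR * (t_surface_k - t_air_k) / (r_ah + r_excess)

    # Dry grassland pixel: LST 318 K, air 303 K; resistances in s m-1 (assumed).
    h = sensible_heat_flux(318.0, 303.0, r_ah=40.0, r_excess=15.0)
    le = 450.0 - 80.0 - h     # residual LE with assumed Rn = 450, G = 80 W m-2
    print(f"H = {h:.0f} W m-2, LE = {le:.0f} W m-2")
    ```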

  6. 3-D Modeling of Irregular Volcanic Sources Using Sparsity-Promoting Inversions of Geodetic Data and Boundary Element Method

    NASA Astrophysics Data System (ADS)

    Zhai, Guang; Shirzaei, Manoochehr

    2017-12-01

    Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, of which the results show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
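
    A minimal sketch of the hybrid L1/L2 (sparsity-promoting) inversion idea, using proximal-gradient (ISTA) iterations; the matrix G is a random stand-in for a deformation Green's function matrix, and the regularization weights and problem sizes are assumptions.

    ```python
    # Solve min_m 0.5||Gm - d||^2 + l1*||m||_1 + 0.5*l2*||m||^2 by ISTA.
    # G, d and the sparse "volume changes" are synthetic stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)

    def soft_threshold(x, tau):
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def sparse_inversion(G, d, l1=0.1, l2=0.01, n_iter=500):
        step = 1.0 / (np.linalg.norm(G, 2) ** 2 + l2)   # 1/Lipschitz constant
        m = np.zeros(G.shape[1])
        for _ in range(n_iter):
            grad = G.T @ (G @ m - d) + l2 * m           # smooth-part gradient
            m = soft_threshold(m - step * grad, step * l1)
        return m

    G = rng.standard_normal((80, 200))        # 80 observations, 200 source cells
    m_true = np.zeros(200)
    m_true[[20, 75, 140]] = [1.0, -0.5, 0.8]  # well-localized volume changes
    d = G @ m_true + 0.01 * rng.standard_normal(80)
    m_hat = sparse_inversion(G, d)
    print("recovered nonzeros:", np.flatnonzero(np.abs(m_hat) > 0.1))
    ```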

  7. Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.

    PubMed

    Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle

    2011-05-01

    We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem when spatially-extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) the parameterization of the spatial source distribution, which leads to an appropriate metric in the context of distributed brain sources, and ii) the introduction of an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher order statistics (q ≥ 2) offers better robustness with respect to Gaussian noise of unknown spatial coherence and to modeling errors. As a result, we reduce the penalizing effects of both the background cerebral activity, which can be seen as Gaussian and spatially correlated noise, and the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically-relevant models of both the sources and the volume conductor show a markedly improved performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms.
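
    To make the subspace principle behind the MUSIC family concrete, the sketch below computes a classical second-order MUSIC pseudospectrum on synthetic data. The lead fields are random stand-ins, and none of the ExSo-specific extensions (higher-order statistics, extended-source parameterization) are included.

    ```python
    # Classical 2nd-order MUSIC: project each candidate lead field onto the
    # noise subspace of the data covariance; small projection = likely source.
    import numpy as np

    rng = np.random.default_rng(1)

    n_sensors, n_grid, n_samples, n_sources = 32, 500, 2000, 2
    L = rng.standard_normal((n_sensors, n_grid))        # stand-in lead fields
    active = [100, 350]                                 # true source locations
    S = rng.standard_normal((n_sources, n_samples))     # source time courses
    X = L[:, active] @ S + 0.5 * rng.standard_normal((n_sensors, n_samples))

    C = X @ X.T / n_samples                             # data covariance
    eigvals, eigvecs = np.linalg.eigh(C)                # ascending eigenvalues
    En = eigvecs[:, : n_sensors - n_sources]            # noise subspace

    proj = np.linalg.norm(En.T @ L, axis=0) / np.linalg.norm(L, axis=0)
    pseudospectrum = 1.0 / proj**2
    print("top grid points:", np.argsort(pseudospectrum)[-2:])
    ```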

  8. X-Ray Dust Scattering At Small Angles: The Complete Halo Around GX13+1

    NASA Technical Reports Server (NTRS)

    Smith, Randall K.

    2007-01-01

    The exquisite angular resolution available with Chandra should allow precision measurements of faint diffuse emission surrounding bright sources, such as the X-ray scattering halos created by interstellar dust. However, the ACIS CCDs suffer from pileup when observing bright sources, and this creates difficulties when trying to extract the scattered halo near the source. The initial study of the X-ray halo around GX13+1 by Smith, Edgar & Shafer (2002), using only the ACIS-I detector, suffered from a lack of sensitivity within 50" of the source, limiting what conclusions could be drawn. To address this problem, observations of GX13+1 were obtained with the Chandra HRC-I and simultaneously with the RXTE PCA. Combined with the existing ACIS-I data, this allowed measurements of the X-ray halo between 2" and 1000". After considering a range of dust models, each assumed to be smoothly distributed with or without a dense cloud along the line of sight, the results show that there is no evidence in these data for a dense cloud near the source, as suggested by Xiang et al. (2005). In addition, although no model leads to formally acceptable results, the Weingartner & Draine (2001) model and all but one of the composite grain models from Zubko, Dwek & Arendt (2004) give particularly poor fits.

  9. Modeling study of natural emissions, source apportionment, and emission control of atmospheric mercury

    NASA Astrophysics Data System (ADS)

    Shetty, Suraj K.

    Mercury (Hg) is a toxic pollutant, and it is important to understand its cycling in the environment. In this dissertation, a number of modeling investigations were conducted to better understand the emission of atmospheric mercury from natural surfaces, the source-receptor relationship of the emissions, and emission reduction. The first part of this work estimates mercury emissions from vegetation, soil and water surfaces using a number of natural emission processors and detailed Leaf Area Index (LAI) data from GIS (Geographic Information System) satellite products. The East Asian domain was chosen, as it contributes nearly 50% of the global anthropogenic mercury emissions into the atmosphere. The estimated annual natural mercury emissions (gaseous elemental mercury) in the domain are 834 Mg yr-1, with 462 Mg yr-1 contributed by China. Compared to anthropogenic sources, natural sources show greater seasonal variability (highest in summer). The emissions are significant, sometimes dominant, contributors to total mercury emission in the regions. The estimates provide a possible explanation for the gaps between the anthropogenic emission estimates based on activity data and the emission inferred from field observations in the regions. To understand the contribution of domestic emissions to mercury deposition in the United States, the second part of the work applies the mercury model of the Community Multi-scale Air Quality Modeling system (CMAQ-Hg v4.6) to apportion the emission sources contributing to mercury wet and dry deposition in 6 United States receptor regions. Contributions to mercury deposition from electric generating units (EGU), the iron and steel industry (IRST), industrial point sources excluding EGU and IRST (OIPM), the remaining anthropogenic sources (RA), natural processes (NAT), and out-of-boundary transport (BC) in the domain were estimated. The model results for 2005 compared reasonably well to field observations made by MDN (Mercury Deposition Network) and CAMNet (Canadian Atmospheric Mercury Measurement Network). The model estimated a total deposition of 474 Mg yr-1 to the CONUS (Contiguous United States) domain, with two-thirds being dry deposited. Reactive gaseous mercury contributed the most, accounting for 60% of deposition. Emission speciation distribution is a key factor for local deposition, as the contribution from large point sources can be as high as 75% near (< 100 km) the emission sources, indicating that emission reduction may result in a direct deposition decrease near the source locations. Among the sources, BC contributes about 68% to 91% of total deposition. Excluding BC's contribution, EGU contributes nearly 50% of the deposition caused by CONUS emissions in the Northeast, Southeast and East Central regions, while emissions from natural processes are more important in the Pacific and West Central regions (contributing up to 40% of deposition). The modeling results imply that implementation of the new emission standards proposed by the USEPA (United States Environmental Protection Agency) would significantly benefit regions that have larger contributions from EGU sources. Control of mercury emissions from coal combustion processes has attracted great attention due to mercury's toxicity and the emission-control regulations, and has led to advancements in state-of-the-art control technologies that alleviate the impact of mercury on ecosystems and human health.
This part of the work applies a sorption model to simulate adsorption of mercury in flue gases onto a confined bed of activated carbon. The model's performance was studied at various flue gas flow rates, inlet mercury concentrations and adsorption bed temperatures. The process simulated a flue gas, with an inlet mercury concentration of 300 ppb, entering at a velocity of 0.3 m s-1 from the bottom into a fixed bed (inside bed diameter of 1 m and 3 m bed height; bed temperature of 25 °C) of activated carbon (particle size of 0.004 m with density of 0.5 g cm-3 and surface area of 90.25 cm2 g-1). The model results demonstrated that a batch of the activated carbon bed was capable of controlling mercury emission for approximately 275 days, after which further mercury uptake starts to decrease until about 500 days, when additional control ceases. An increase in bed temperature significantly reduces the mercury sorption capacity of the activated carbon. An increase in flue gas flow rate results in faster consumption of sorption capacity initially, but at a later stage the sorption rate decreases due to the reduced remaining capacity, so the overall sorption rate remains unaffected. The activated carbon's effective life (time to reach saturation) is not affected by the inlet mercury concentration, implying that the design and operation of a mercury sorption process can be carried out independently of it. The results provide quantitative guidance for designing efficient confined-bed processes to remove mercury from flue gases.
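
    The dissertation's sorption model itself is not reproduced here; as a hedged stand-in, the classic Thomas fixed-bed breakthrough model below illustrates the style of calculation. The bed geometry and inlet concentration follow the text, while the capacity q0 and rate constant k_th are assumed values tuned only for illustration.

    ```python
    # Thomas model: C/C0 = 1 / (1 + exp(k_th*(q0*m - c0*Q*t)/Q)).
    # q0 and k_th are assumptions; geometry and inlet follow the abstract.
    import numpy as np

    def thomas_breakthrough(t_s, k_th, q0, m_bed, c0, q_flow):
        """Outlet-to-inlet ratio C/C0 of a fixed bed at time t_s (s).
        k_th (m3 ug-1 s-1), q0 (ug g-1), m_bed (g), c0 (ug m-3), q_flow (m3 s-1)."""
        expo = k_th * (q0 * m_bed - c0 * q_flow * t_s) / q_flow
        return 1.0 / (1.0 + np.exp(expo))

    area = np.pi * 0.5**2                  # 1 m diameter bed
    q_flow = 0.3 * area                    # 0.3 m/s superficial velocity
    m_bed = area * 3.0 * 500e3             # 3 m tall bed of 0.5 g/cm3 carbon (g)
    c0 = 2.46e3                            # ~300 ppb Hg expressed as ug/m3
    t = np.arange(0, 600) * 86400.0        # 600 days in seconds
    ratio = thomas_breakthrough(t, k_th=1e-9, q0=1.2e4, m_bed=m_bed,
                                c0=c0, q_flow=q_flow)
    print("5% breakthrough on day", int(t[ratio > 0.05][0] / 86400))
    ```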

  10. Implementation of warm-cloud processes in a source-oriented WRF/Chem model to study the effect of aerosol mixing state on fog formation in the Central Valley of California

    NASA Astrophysics Data System (ADS)

    Lee, H.-H.; Chen, S.-H.; Kleeman, M. J.; Zhang, H.; DeNero, S. P.; Joe, D. K.

    2015-11-01

    The source-oriented Weather Research and Forecasting chemistry model (SOWC) was modified to include warm cloud processes and applied to investigate how aerosol mixing states influence fog formation and optical properties in the atmosphere. SOWC tracks a 6-dimensional chemical variable (X, Z, Y, Size Bins, Source Types, Species) through an explicit simulation of atmospheric chemistry and physics. A source-oriented cloud condensation nuclei module was implemented into the SOWC model to simulate warm clouds using the modified two-moment Purdue Lin microphysics scheme. The Goddard shortwave and longwave radiation schemes were modified to interact with source-oriented aerosols and cloud droplets so that aerosol direct and indirect effects could be studied. The enhanced SOWC model was applied to study a fog event that occurred on 17 January 2011 in the Central Valley of California. Tule fog occurred because an atmospheric river effectively advected high moisture into the Central Valley and nighttime drainage flow brought cold air from the mountains into the valley. The SOWC model produced reasonable liquid water paths and a reasonable spatial distribution and duration of fog events. The inclusion of aerosol-radiation interaction only slightly modified simulation results, since cloud optical thickness dominated the radiation budget in fog events. The source-oriented mixture representation of particles reduced cloud droplet number relative to the internal mixture approach that artificially coats hydrophobic particles with hygroscopic components. The fraction of aerosols activating into CCN at a supersaturation of 0.5% in the Central Valley decreased from 94% in the internal mixture model to 80% in the source-oriented model. This increased surface energy flux by 3-5 W m-2 and surface temperature by as much as 0.25 K in the daytime.
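
    A hedged aside on why mixing state matters for activation: under kappa-Kohler theory (used here as a generic stand-in, not as the SOWC model's own CCN module), the critical supersaturation of a particle depends strongly on its hygroscopicity kappa, so artificially coating hydrophobic particles with hygroscopic material lowers their activation barrier.

    ```python
    # Critical supersaturation from kappa-Kohler theory
    # (Petters & Kreidenweis 2007): S_c = exp(sqrt(4 A^3 / (27 kappa D^3))).
    # The approximation degrades at very low kappa; values are illustrative.
    import numpy as np

    A = 4 * 0.072 * 0.018 / (8.314 * 298.0 * 1000.0)   # Kelvin coefficient (m)

    def critical_supersaturation(d_dry_m, kappa):
        """Percent critical supersaturation of a dry particle of diameter d."""
        return (np.exp(np.sqrt(4 * A**3 / (27 * kappa * d_dry_m**3))) - 1) * 100

    for kappa in (0.6, 0.1, 0.01):    # hygroscopic salt .. nearly hydrophobic
        sc = critical_supersaturation(100e-9, kappa)
        fate = "activates" if sc < 0.5 else "stays interstitial"
        print(f"kappa={kappa:<4}: s_c = {sc:.2f}%  -> {fate} at s = 0.5%")
    ```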

  11. Partitioning uncertainty in streamflow projections under nonstationary model conditions

    NASA Astrophysics Data System (ADS)

    Chawla, Ila; Mujumdar, P. P.

    2018-02-01

    Assessing the impacts of Land Use (LU) and climate change on future streamflow projections is necessary for efficient management of water resources. However, model projections are burdened with significant uncertainty arising from various sources. Most of the previous studies have considered climate models and scenarios as major sources of uncertainty, but uncertainties introduced by land use change and hydrologic model assumptions are rarely investigated. In this paper an attempt is made to segregate the contributions from (i) general circulation models (GCMs), (ii) emission scenarios, (iii) land use scenarios, (iv) the stationarity assumption of the hydrologic model, and (v) internal variability of the processes, to overall uncertainty in streamflow projections using an analysis of variance (ANOVA) approach. Generally, most impact assessment studies are carried out with hydrologic model parameters that remain unchanged in the future. It is, however, necessary to address the nonstationarity in model parameters with changing land use and climate. In this paper, a regression-based methodology is presented to obtain hydrologic model parameters under changing land use and climate scenarios in the future. The Upper Ganga Basin (UGB) in India is used as a case study to demonstrate the methodology. The semi-distributed Variable Infiltration Capacity (VIC) model is set up over the basin under nonstationary conditions. Results indicate that model parameters vary with time, thereby invalidating the often-used assumption of model stationarity. The streamflow in the UGB under the nonstationary model condition is found to reduce in the future. The flows are also found to be sensitive to changes in land use. Segregation results suggest that the model stationarity assumption and GCMs, along with their interactions with emission scenarios, act as dominant sources of uncertainty. This paper provides a generalized framework for hydrologists to examine the stationarity assumption of models before considering them for future streamflow projections and to segregate the contributions of various sources to the uncertainty.
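
    As a minimal illustration of the segregation idea, the sketch below performs a two-factor ANOVA split of projection spread over GCMs and emission scenarios only; the streamflow-change numbers are synthetic stand-ins, and the full study also partitions land use, the stationarity assumption, and internal variability.

    ```python
    # Two-factor ANOVA decomposition of projection spread (synthetic numbers).
    import numpy as np

    # rows: 3 GCMs, cols: 2 emission scenarios; projected mean flow change (%)
    y = np.array([[-12.0, -18.0],
                  [ -4.0,  -9.0],
                  [ -8.0, -20.0]])
    grand = y.mean()
    ss_gcm = y.shape[1] * ((y.mean(axis=1) - grand) ** 2).sum()
    ss_scen = y.shape[0] * ((y.mean(axis=0) - grand) ** 2).sum()
    ss_tot = ((y - grand) ** 2).sum()
    ss_int = ss_tot - ss_gcm - ss_scen      # interaction/residual term
    for name, ss in [("GCM", ss_gcm), ("scenario", ss_scen),
                     ("interaction", ss_int)]:
        print(f"{name:<12}: {100 * ss / ss_tot:.0f}% of variance")
    ```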

  12. A brief compendium of correlations and analytical formulae for the thermal field generated by a heat source embedded in porous and purely-conductive media

    NASA Astrophysics Data System (ADS)

    Conti, P.; Testi, D.; Grassi, W.

    2017-11-01

    This work reviews and compares suitable models for the thermal analysis of forced convection over a heat source in a porous medium. The set of available models refers to an infinite medium in which a fluid moves past three different heat source geometries, i.e. the moving infinite line source, the moving finite line source, and the moving infinite cylindrical source. In this perspective, the present work offers a plain and handy compendium of the above-mentioned models for forced external convection in porous media; besides, we propose a dimensionless analysis to figure out the reciprocal deviation among the available models, helping to select the most suitable one in the specific case of interest. Under specific conditions, the advection term becomes ineffective in terms of heat transfer performance, allowing the use of purely-conductive models. For that reason, available analytical and numerical solutions for purely-conductive media are also reviewed and compared, again, by dimensionless criteria. Therefore, one can choose the simplest solution, with significant benefits in terms of computational effort and interpretation of the results. The main outcomes presented in the paper are: the conditions under which the system can be considered subject to a Darcy flow, the minimal distance beyond which the finite dimension of the heat source does not affect the thermal field, and the critical fluid velocity needed to have a significant contribution of the advection term in the overall heat transfer process.
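
    As one concrete example from this family, the sketch below evaluates the steady moving-infinite-line-source temperature field in the form common in the porous-media literature; the thermal properties, source strength and heat-transport velocity are assumed values.

    ```python
    # Steady moving infinite line source: dT = q/(2*pi*lam) * exp(u*x/(2*a))
    # * K0(u*r/(2*a)), x measured downstream. All parameter values assumed.
    import numpy as np
    from scipy.special import k0

    LAM = 2.5        # W m-1 K-1, effective thermal conductivity (assumed)
    ALPHA = 1.0e-6   # m2 s-1, effective thermal diffusivity (assumed)

    def mils_delta_t(x, y, q_line=50.0, u_t=1.0e-7):
        """Steady temperature rise (K) at (x, y) around a line source of
        strength q_line (W/m) in a flow with heat-transport velocity u_t."""
        r = np.hypot(x, y)
        return q_line / (2 * np.pi * LAM) * np.exp(u_t * x / (2 * ALPHA)) \
            * k0(u_t * r / (2 * ALPHA))

    # Downstream/upstream asymmetry at 5 m from the source:
    print(f"downstream: {mils_delta_t(5.0, 0.0):.2f} K, "
          f"upstream: {mils_delta_t(-5.0, 0.0):.2f} K")
    ```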

  13. Contributions of wood smoke and vehicle emissions to ambient concentrations of volatile organic compounds and particulate matter during the Yakima wintertime nitrate study

    NASA Astrophysics Data System (ADS)

    VanderSchelden, Graham; de Foy, Benjamin; Herring, Courtney; Kaspari, Susan; VanReken, Tim; Jobson, Bertram

    2017-02-01

    A multiple linear regression (MLR) chemical mass balance model was applied to data collected during an air quality field experiment in Yakima, WA, during January 2013 to determine the relative contribution of residential wood combustion (RWC) and vehicle emissions to ambient pollutant levels. Acetonitrile was used as a chemical tracer for wood burning and nitrogen oxides (NOx) as a chemical tracer for mobile sources. RWC was found to be a substantial source of gas phase air toxics in wintertime. The MLR model found RWC primarily responsible for emissions of formaldehyde (73%), acetaldehyde (69%), and black carbon (55%) and mobile sources primarily responsible for emissions of carbon monoxide (CO; 83%), toluene (81%), C2-alkylbenzenes (81%), and benzene (64%). When compared with the Environmental Protection Agency's 2011 winter emission inventory, the MLR results suggest that the contribution of RWC to CO emissions was underestimated in the inventory by a factor of 2. Emission ratios to NOx from the MLR model agreed to within 25% with wintertime emission ratios predicted from the Motor Vehicle Emissions Simulator (MOVES) 2010b emission model for Yakima County for all pollutants modeled except for CO, C2-alkylbenzenes, and black carbon. The MLR model results suggest that MOVES was overpredicting mobile source emissions of CO relative to NOx by a factor of 1.33 and black carbon relative to NOx by about a factor of 3.
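
    A toy version of the tracer regression, with synthetic stand-ins for the Yakima time series: regress an ambient pollutant on the two tracers plus a background term, then convert the fitted coefficients into mean source shares. The tracer levels and true coefficients below are assumptions.

    ```python
    # Two-tracer multiple linear regression chemical mass balance (synthetic).
    import numpy as np

    rng = np.random.default_rng(7)
    n = 500
    acetonitrile = rng.gamma(2.0, 0.15, n)   # ppb, wood-smoke tracer
    nox = rng.gamma(2.0, 15.0, n)            # ppb, mobile-source tracer
    co = 40.0 * acetonitrile + 8.0 * nox + 120.0 + rng.normal(0, 20, n)

    X = np.column_stack([acetonitrile, nox, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, co, rcond=None)
    contrib = coef[:2] * X[:, :2].mean(axis=0)        # mean source contributions
    share = contrib / (contrib.sum() + coef[2])       # include background term
    print(f"RWC share of CO: {share[0]:.0%}, mobile share: {share[1]:.0%}")
    ```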

  14. The energy trilogy: An integrated sustainability model to bridge wastewater treatment plant energy and emissions gaps

    NASA Astrophysics Data System (ADS)

    Al-Talibi, A. Adhim

    An estimated 4% of national energy consumption is used for drinking water and wastewater services. Despite awareness and optimization initiatives for energy conservation, energy consumption is on the rise owing to population and urbanization expansion and to commercial and industrial business advancement. The principal concern is that as energy consumption grows, energy production demand grows with it, leading to an increase in CO2 footprints and the contribution to global warming potential. This research is in the area of the energy-water nexus, focusing on the wastewater treatment plant (WWTP) energy trilogy -- the group of three related entities: processes that (1) consume energy and (2) produce energy, and (3) the resulting CO2 equivalents. Detailed and measurable energy information is not readily obtained for wastewater facilities, specifically during facility preliminary design phases. These limitations call for a data-intensive research approach to GHG emissions quantification, plant efficiencies and source reduction techniques. To achieve these goals, this research introduced a model integrating all plant processes and their pertinent energy sources. Following a comprehensive "Energy Source-to-Effluent Discharge" pattern, this model is capable of bridging the gaps in WWTP energy accounting, facilitating plant designers' decision-making for energy assessment, sustainability and environmental regulatory compliance. Protocols for estimating common emissions sources, such as fuels, are available, whereas site-specific emissions for other sources have to be developed and are captured in this research. The dissertation objectives were met through an extensive study of the relevant literature, models and tools; originating comprehensive lists of processes and energy sources for WWTPs; locating estimation formulas for each source; identifying site-specific emissions factors; and linking the sources in a mathematical model for site-specific CO2e determination. The model was verified and showed good agreement with billed and measured data from a base case study. In a next phase, a supplemental computational tool can be created for conducting plant energy design comparisons and assessments of plant energy and emissions parameters. The main conclusions drawn from this research are that current approaches are severely limited, not covering the plant's design phase and not fully considering the integrated balance of energy consumed (EC), energy produced (EP) and the resulting CO2e emissions, and that, finally, their results are not representative. This makes reported governmental and institutional national energy consumption figures incomplete and/or misleading, since they mainly consider energy consumption from electricity and some fuels or certain processes only. The distinction of the energy trilogy model over existing approaches is based on the following: (1) the ET energy model is unprecedented, prepared to fit WWTP energy assessment during the design and rehabilitation phases; (2) it links the energy trilogy, eliminating the need for using several models or tools; (3) it removes the need for expensive on-site energy measurements or audits; (4) it offers alternatives for energy optimization during the plant's life-cycle; and (5) it ensures reliable GHG emissions inventory reporting for permitting and regulatory compliance.

  15. Comparison of TG-43 and TG-186 in breast irradiation using a low energy electronic brachytherapy source.

    PubMed

    White, Shane A; Landry, Guillaume; Fonseca, Gabriel Paiva; Holt, Randy; Rusch, Thomas; Beaulieu, Luc; Verhaegen, Frank; Reniers, Brigitte

    2014-06-01

    The recently updated guidelines for dosimetry in brachytherapy in TG-186 have recommended the use of model-based dosimetry calculations as a replacement for TG-43. TG-186 highlights shortcomings in the water-based approach in TG-43, particularly for low energy brachytherapy sources. The Xoft Axxent is a low energy (<50 kV) brachytherapy system used in accelerated partial breast irradiation (APBI). Breast tissue is a heterogeneous tissue in terms of density and composition. Dosimetric calculations of seven APBI patients treated with Axxent were made using a model-based Monte Carlo platform for a number of tissue models and dose reporting methods and compared to TG-43 based plans. A model of the Axxent source, the S700, was created and validated against experimental data. CT scans of the patients were used to create realistic multi-tissue/heterogeneous models with breast tissue segmented using a published technique. Alternative water models were used to isolate the influence of tissue heterogeneity and backscatter on the dose distribution. Dose calculations were performed using Geant4 according to the original treatment parameters. The effect of the Axxent balloon applicator used in APBI which could not be modeled in the CT-based model, was modeled using a novel technique that utilizes CAD-based geometries. These techniques were validated experimentally. Results were calculated using two dose reporting methods, dose to water (Dw,m) and dose to medium (Dm,m), for the heterogeneous simulations. All results were compared against TG-43-based dose distributions and evaluated using dose ratio maps and DVH metrics. Changes in skin and PTV dose were highlighted. All simulated heterogeneous models showed a reduced dose to the DVH metrics that is dependent on the method of dose reporting and patient geometry. Based on a prescription dose of 34 Gy, the average D90 to PTV was reduced by between ~4% and ~40%, depending on the scoring method, compared to the TG-43 result. Peak skin dose is also reduced by 10%-15% due to the absence of backscatter not accounted for in TG-43. The balloon applicator also contributed to the reduced dose. Other ROIs showed a difference depending on the method of dose reporting. TG-186-based calculations produce results that are different from TG-43 for the Axxent source. The differences depend strongly on the method of dose reporting. This study highlights the importance of backscatter to peak skin dose. Tissue heterogeneities, applicator, and patient geometries demonstrate the need for a more robust dose calculation method for low energy brachytherapy sources.
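
    For contrast with the model-based TG-186 calculations, the sketch below implements the TG-43 1D point-source dose-rate formalism. The air-kerma strength, dose-rate constant, and the radial dose and anisotropy tables are illustrative stand-ins, not Axxent consensus data.

    ```python
    # TG-43 1D point-source formalism:
    # D(r) = Sk * Lambda * (r0/r)^2 * g(r) * phi_an(r), with r0 = 1 cm.
    # All tabulated values below are assumed, for illustration only.
    import numpy as np

    SK = 1000.0      # air-kerma strength, U (assumed)
    LAMBDA = 1.1     # dose-rate constant, cGy h-1 U-1 (assumed)
    R_TAB = np.array([0.5, 1.0, 2.0, 3.0, 5.0])        # cm
    G_TAB = np.array([1.30, 1.00, 0.60, 0.35, 0.12])   # radial dose function
    PHI_TAB = np.array([0.97, 0.96, 0.95, 0.94, 0.93]) # 1D anisotropy function

    def dose_rate_tg43(r_cm):
        g = np.interp(r_cm, R_TAB, G_TAB)
        phi = np.interp(r_cm, R_TAB, PHI_TAB)
        return SK * LAMBDA * (1.0 / r_cm) ** 2 * g * phi

    for r in (1.0, 2.0, 3.0):
        print(f"r = {r} cm: {dose_rate_tg43(r):.0f} cGy/h")
    ```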

  16. An image-based skeletal dosimetry model for the ICRP reference adult female—internal electron sources

    NASA Astrophysics Data System (ADS)

    O'Reilly, Shannon E.; DeWeese, Lindsay S.; Maynard, Matthew R.; Rajon, Didier A.; Wayson, Michael B.; Marshall, Emily L.; Bolch, Wesley E.

    2016-12-01

    An image-based skeletal dosimetry model for internal electron sources was created for the ICRP-defined reference adult female. Many previous skeletal dosimetry models, which are still employed in commonly used internal dosimetry software, do not properly account for electron escape from trabecular spongiosa, electron cross-fire from cortical bone, and the impact of marrow cellularity on active marrow self-irradiation. Furthermore, these existing models do not employ the current ICRP definition of a 50 µm bone endosteum (or shallow marrow). Each of these limitations was addressed in the present study. Electron transport was completed to determine specific absorbed fractions to both active and shallow marrow of the skeletal regions of the University of Florida reference adult female. The skeletal macrostructure and microstructure were modeled separately. The bone macrostructure was based on the whole-body hybrid computational phantom of the UF series of reference models, while the bone microstructure was derived from microCT images of skeletal region samples taken from a 45-year-old female cadaver. The active and shallow marrow are typically adopted as surrogate tissue regions for the hematopoietic stem cells and osteoprogenitor cells, respectively. Source tissues included active marrow, inactive marrow, trabecular bone volume, trabecular bone surfaces, cortical bone volume, and cortical bone surfaces. Marrow cellularity was varied from 10 to 100 percent for active marrow self-irradiation. All other sources were run at the defined ICRP Publication 70 cellularity for each bone site. A total of 33 discrete electron energies, ranging from 1 keV to 10 MeV, were either simulated or analytically modeled. The method of combining skeletal macrostructure and microstructure absorbed fractions assessed using MCNPX electron transport was found to yield results similar to those determined with the PIRT model applied to the UF adult male skeletal dosimetry model. Calculated skeletal-averaged absorbed fractions for each source-target combination were found to follow trends similar to those of more recent (image-based) dosimetry models but did not follow results from skeletal models based upon assumptions of an infinite expanse of trabecular spongiosa.

  17. Variational Iterative Refinement Source Term Estimation Algorithm Assessment for Rural and Urban Environments

    NASA Astrophysics Data System (ADS)

    Delle Monache, L.; Rodriguez, L. M.; Meech, S.; Hahn, D.; Betancourt, T.; Steinhoff, D.

    2016-12-01

    It is necessary to accurately estimate the initial source characteristics in the event of an accidental or intentional release of a Chemical, Biological, Radiological, or Nuclear (CBRN) agent into the atmosphere. Accurate estimation of the source characteristics is important because these characteristics are often unknown and Atmospheric Transport and Dispersion (AT&D) models rely heavily on the estimates to create hazard assessments. To correctly assess the source characteristics in an operational environment where time is critical, the National Center for Atmospheric Research (NCAR) has developed a Source Term Estimation (STE) method known as the Variational Iterative Refinement STE Algorithm (VIRSA). VIRSA consists of a combination of modeling systems: an AT&D model, its corresponding STE model, a Hybrid Lagrangian-Eulerian Plume Model (H-LEPM), and its mathematical adjoint model. In an operational scenario where we have information regarding the infrastructure of a city, the AT&D model used is the Urban Dispersion Model (UDM), and when using this model in VIRSA we refer to the system as uVIRSA. In all other scenarios, where the city infrastructure information is not readily available, the AT&D model used is the Second-order Closure Integrated PUFF model (SCIPUFF) and the system is referred to as sVIRSA. VIRSA was originally developed using SCIPUFF 2.4 for the Defense Threat Reduction Agency and integrated into the Hazard Prediction and Assessment Capability and the Joint Program for Information Systems Joint Effects Model. The results discussed here are the verification and validation of the upgraded system with SCIPUFF 3.0 and the newly implemented UDM capability. To verify uVIRSA and sVIRSA, synthetic concentration observation scenarios were created in urban and rural environments, and the results of this verification are shown. Finally, we validate the STE performance of uVIRSA using scenarios from the Joint Urban 2003 (JU03) experiment held in Oklahoma City, and validate the performance of sVIRSA using scenarios from the FUsing Sensor Integrated Observing Network (FUSION) Field Trial 2007 (FFT07), held at Dugway Proving Ground in rural Utah.
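
    A heavily simplified stand-in for STE, to show the inverse-problem structure only: recover a release rate and crosswind location by least squares against a Gaussian plume forward model. VIRSA itself iterates a full dispersion model and its adjoint; the wind speed, dispersion coefficients and sensor layout below are assumptions.

    ```python
    # Toy source term estimation: fit (release rate q, crosswind offset y0)
    # of a ground-level release to noisy ground-level sensor readings.
    import numpy as np
    from scipy.optimize import least_squares

    U = 3.0   # wind speed m/s, +x downwind (assumed)

    def plume(params, xy):
        """Ground-level concentration of a ground release at (0, y0), rate q."""
        q, y0 = params
        x, y = xy
        sig_y = 0.08 * x          # crude neutral-stability sigmas (assumed)
        sig_z = 0.06 * x
        return q / (np.pi * U * sig_y * sig_z) \
            * np.exp(-((y - y0) ** 2) / (2 * sig_y**2))

    rng = np.random.default_rng(3)
    xy = (rng.uniform(200, 800, 30), rng.uniform(-100, 100, 30))
    obs = plume((5.0, 12.0), xy) * rng.lognormal(0.0, 0.2, 30)  # true q=5, y0=12

    fit = least_squares(lambda p: plume(p, xy) - obs, x0=(1.0, 0.0))
    print(f"estimated q = {fit.x[0]:.2f} g/s, y0 = {fit.x[1]:.1f} m")
    ```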

  18. Toward better public health reporting using existing off the shelf approaches: The value of medical dictionaries in automated cancer detection using plaintext medical data.

    PubMed

    Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J

    2017-05-01

    Existing approaches to derive decision models from plaintext clinical data frequently depend on medical dictionaries as the sources of potential features. Prior research suggests that decision models developed using non-dictionary based feature sourcing approaches and "off the shelf" tools could predict cancer with performance metrics between 80% and 90%. We sought to compare non-dictionary based models to models built using features derived from medical dictionaries. We evaluated the detection of cancer cases from free text pathology reports using decision models built with combinations of dictionary or non-dictionary based feature sourcing approaches, 4 feature subset sizes, and 5 classification algorithms. Each decision model was evaluated using the following performance metrics: sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristics (ROC) curve. Decision models parameterized using dictionary and non-dictionary feature sourcing approaches produced performance metrics between 70 and 90%. The source of features and feature subset size had no impact on the performance of a decision model. Our study suggests there is little value in leveraging medical dictionaries for extracting features for decision model building. Decision models built using features extracted from the plaintext reports themselves achieve comparable results to those built using medical dictionaries. Overall, this suggests that existing "off the shelf" approaches can be leveraged to perform accurate cancer detection using less complex Named Entity Recognition (NER) based feature extraction, automated feature selection and modeling approaches.
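
    A minimal example of the kind of "off the shelf", non-dictionary pipeline evaluated above: n-gram features taken directly from report text feeding a standard classifier. The two toy reports and labels are invented placeholders.

    ```python
    # Non-dictionary feature sourcing: n-grams straight from the report text.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reports = [
        "invasive ductal carcinoma identified in left breast specimen",
        "benign fibroadenoma, no evidence of malignancy",
    ]
    labels = [1, 0]   # 1 = cancer case (placeholder annotations)

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(reports, labels)
    print(model.predict(["specimen shows carcinoma with clear margins"]))
    ```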

  20. An analytic model of axisymmetric mantle plume due to thermal and chemical diffusion

    NASA Technical Reports Server (NTRS)

    Liu, Mian; Chase, Clement G.

    1990-01-01

    An analytic model of axisymmetric mantle plumes driven by either thermal diffusion or combined diffusion of both heat and chemical species from a point source is presented. The governing equations are solved numerically in cylindrical coordinates for a Newtonian fluid with constant viscosity. Instead of starting from an assumed plume source, constraints on the source parameters, such as the depth of the source regions and the total heat input from the plume sources, are deduced using the geophysical characteristics of mantle plumes inferred from modelling of hotspot swells. The Hawaiian hotspot and the Bermuda hotspot are used as examples. Narrow mantle plumes are expected for likely mantle viscosities. The temperature anomaly and the size of thermal plumes underneath the lithosphere can be sensitive indicators of plume depth. The Hawaiian plume is likely to originate at a much greater depth than the Bermuda plume. One suggestive result puts the Hawaiian plume source at a depth near the core-mantle boundary and the source of the Bermuda plume in the upper mantle, close to the 700 km discontinuity. The total thermal energy input by the source region to the Hawaiian plume is about 5 x 10(10) watts. The corresponding diameter of the source region is about 100 to 150 km. Chemical diffusion from the same source does not affect the thermal structure of the plume.

  1. Source Mechanism and Near-field Characteristics of the 2011 Tohoku-oki Tsunami

    NASA Astrophysics Data System (ADS)

    Yamazaki, Y.; Cheung, K.; Lay, T.

    2011-12-01

    The Tohoku-oki great earthquake ruptured the megathrust fault offshore of Miyagi and Fukushima in Northeast Honshu with moment magnitude Mw 9.0 on March 11, 2011, and generated strong shaking across the region. The resulting tsunami devastated the northeastern Japan coasts and damaged coastal infrastructure across the Pacific. The extensive global seismic networks, dense geodetic instruments, well-positioned buoys and wave gauges, and comprehensive runup records along the northeast Japan coasts provide datasets of unprecedented quality and coverage for investigation of the tsunami source mechanism and near-field wave characteristics. Our finite-source model reconstructs detailed source rupture processes by inversion of teleseismic P waves recorded around the globe. The finite-source solution is validated through comparison with the static displacements recorded at the ARIA (JPL-GSI) GPS stations and models obtained by inversion of high-rate GPS observations. The rupture model has two primary slip regions, near the hypocenter and along the trench; the maximum slip is about 60 m near the trench. Together with the low rupture velocity, the Tohoku-oki event has characteristics in common with tsunami earthquakes, although it ruptured across the entire megathrust. Superposition of the deformation of the subfaults from the planar fault model according to their rupture initiation and rise times specifies the seafloor vertical displacement and velocity for tsunami modeling. We reconstruct the 2011 Tohoku-oki tsunami from the time histories of the seafloor deformation using the dispersive long-wave model NEOWAVE (Non-hydrostatic Evolution of Ocean WAVEs). The computed results are compared with data from six GPS gauges and three wave gauges near the source at 120-200 m and 50 m water depth, as well as DART buoys positioned across the Pacific. The shock-capturing model reproduces near-shore tsunami bores and the runup data gathered by the 2011 Tohoku Earthquake Tsunami Joint Survey Group. Spectral analysis of the computed surface elevation reveals a series of resonance modes and areas prone to tsunami hazards. This case study improves our understanding of near-field tsunami waves and validates the modeling capability to predict their impacts for hazard mitigation and emergency management.

  2. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    NASA Astrophysics Data System (ADS)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  3. The Competition Between a Localised and Distributed Source of Buoyancy

    NASA Astrophysics Data System (ADS)

    Partridge, Jamie; Linden, Paul

    2012-11-01

    We propose a new mathematical model to study the competition between localised and distributed sources of buoyancy within a naturally ventilated filling box. The main controlling parameters in this configuration are the buoyancy fluxes of the distributed and local source, specifically their ratio Ψ. The steady state dynamics of the flow are heavily dependent on this parameter. For large Ψ, where the distributed source dominates, we find the space becomes well mixed, as expected if driven by a distributed source alone. Conversely, for small Ψ we find the space reaches a stable two-layer stratification. This is analogous to the classical case of a purely local source, but here the lower layer is buoyant compared to the ambient, due to the constant flux of buoyancy emanating from the distributed source. The ventilation flow rate, the buoyancy of the layers and the location of the interface height, which separates the two-layer stratification, are obtainable from the model. To validate the theoretical model, small-scale laboratory experiments were carried out. Water was used as the working medium, with buoyancy being driven directly by temperature differences. Theoretical results were compared with experimental data and overall good agreement was found. A CASE award project with Arup.

  4. Influence of Mean-Density Gradient on Small-Scale Turbulence Noise

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas

    2000-01-01

    A physics-based methodology is described to predict jet-mixing noise due to small-scale turbulence. Both self- and shear-noise source terms of Lilley's equation are modeled, and the far-field aerodynamic noise is expressed as an integral over the jet volume of the source multiplied by an appropriate Green's function which accounts for source convection and mean-flow refraction. Our primary interest here is to include transverse gradients of the mean density in the source modeling. It is shown that, in addition to the usual quadrupole-type sources which scale with the fourth power of the acoustic wave number, additional dipole and monopole sources are present that scale with lower powers of the wave number. Various two-point correlations are modeled and an approximate solution for the noise spectra due to multipole sources of various orders is developed. Mean flow and turbulence information is provided through a RANS k-epsilon solution. Numerical results are presented for a subsonic jet at a range of temperatures and Mach numbers. Predictions indicated a decrease in high frequency noise with added heat, while changes in the low frequency noise depend on jet velocity and observer angle.

  5. Comparison of two trajectory based models for locating particle sources for two rural New York sites

    NASA Astrophysics Data System (ADS)

    Zhou, Liming; Hopke, Philip K.; Liu, Wei

    Two back trajectory-based statistical models, simplified quantitative transport bias analysis (QTBA) and residence-time weighted concentrations (RTWC), have been compared for their capabilities of identifying likely locations of source emissions contributing to observed particle concentrations at Potsdam and Stockton, New York. Quantitative transport bias analysis attempts to take into account the distribution of concentrations around the directions of the back trajectories. In the full QTBA approach, deposition processes (wet and dry) are also considered; simplified QTBA omits the consideration of deposition. It is best used with multiple-site data. Similarly, the RTWC approach uses concentrations measured at different sites along with the back trajectories to distribute the concentration contributions across the spatial domain of the trajectories. In this study, these models are used in combination with the source contribution values obtained by the previous positive matrix factorization analysis of particle composition data from Potsdam and Stockton. The six common sources for the two sites, sulfate, soil, zinc smelter, nitrate, wood smoke and copper smelter, were analyzed. The results of the two methods are consistent and locate large, clearly defined sources well. The RTWC approach can find more minor sources but may also give unrealistic estimates of the source locations.
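
    A schematic of the residence-time weighting idea common to these methods: distribute each receptor concentration over the grid cells its back trajectory visits, then normalize by residence time. The random-walk trajectories below are stand-ins for real HYSPLIT-style trajectory endpoints.

    ```python
    # Schematic residence-time weighted concentration field (synthetic data).
    import numpy as np

    rng = np.random.default_rng(5)
    nx = ny = 40
    weighted = np.zeros((nx, ny))
    residence = np.zeros((nx, ny))

    for _ in range(200):                       # one back trajectory per sample
        conc = rng.lognormal(1.0, 0.5)         # concentration at the receptor
        ij = np.array([20.0, 20.0])            # receptor grid cell
        for _ in range(72):                    # 72 hourly endpoint steps
            ij += rng.normal(0, 1, 2)
            i, j = np.clip(ij.astype(int), 0, nx - 1)
            weighted[i, j] += conc             # concentration-weighted residence
            residence[i, j] += 1.0

    cwt = np.divide(weighted, residence, out=np.zeros_like(weighted),
                    where=residence > 0)       # mean concentration per cell
    print("highest-potential cell:", np.unravel_index(cwt.argmax(), cwt.shape))
    ```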

  6. Identification of immiscible NAPL contaminant sources in aquifers by a modified two-level saturation based imperialist competitive algorithm

    NASA Astrophysics Data System (ADS)

    Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.

    2017-07-01

    A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Similar to countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach named the knock-the-base method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as the grid size, rock heterogeneity and designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations when compared with a model employing the classical one-level ICA.

  7. Propagation of Exploration Seismic Sources in Shallow Water

    NASA Astrophysics Data System (ADS)

    Diebold, J. B.; Tolstoy, M.; Barton, P. J.; Gulick, S. P.

    2006-05-01

    The choice of safety radii to mitigate the impact of exploration seismic sources upon marine mammals is typically based on measurement or modeling in deep water. In shallow water environments, rule-of-thumb spreading laws are often used to predict the falloff of amplitude with offset from the source, but actual measurements (or ideally, near-perfect modeling) are still needed to account for the effects of bathymetric changes and subseafloor characteristics. In addition, the question "how shallow is 'shallow'?" needs an answer. In a cooperative effort by NSF, MMS, NRL, IAGC and L-DEO, a series of seismic source calibration studies was carried out in the Northern Gulf of Mexico during 2003. The sources used were the two-, six-, ten-, twelve-, and twenty-airgun arrays of R/V Ewing, and a 31-element, 3-string "G" gun array deployed by M/V Kondor, an exploration industry source ship. The results of the Ewing calibrations have been published, documenting results in deep (3200 m) and shallow (60 m) water. Lengthy analysis of the Kondor results, presented here, suggests an approach to answering the "how shallow is shallow" question. After initially falling off steadily with source-receiver offset, the Kondor levels suddenly increased at a 4 km offset. Ray-based modeling with a complex, realistic source, but with a simple homogeneous water-column-over-elastic-halfspace ocean, shows that the observed pattern is chiefly due to geophysical effects, and not focusing within the water column. The same kind of modeling can be used to predict how the amplitudes will change with decreasing water depth, and when deep-water safety radii may need to be increased. Another set of data (see Barton, et al., this session) recorded in 20 meters of water during early 2005, however, shows that simple modeling may be insufficient when the geophysics becomes more complex. In this particular case, the fact that the seafloor was within the near field of the R/V Ewing source array seems to have given rise to seismic phases not normally seen in marine survey data acquired in deeper water. The associated partitioning of energy is likely to have caused the observed uncharacteristically rapid loss of energy with distance. It appears that in this case, the shallow-water marine mammal safety mitigation measures prescribed and followed were far more stringent than they needed to be. A new approach, wherein received levels detected by the towed 6-km multichannel hydrophone array may be used to modify safety radii, has recently been proposed based on these observations.

  8. The reflection spectrum of the low-mass X-ray binary 4U 1636-53

    NASA Astrophysics Data System (ADS)

    Wang, Yanan; Méndez, Mariano; Sanna, Andrea; Altamirano, Diego; Belloni, T. M.

    2017-06-01

    We present 3-79 keV NuSTAR observations of the neutron star low-mass X-ray binary 4U 1636-53 in the soft, transitional and hard state. The spectra display a broad emission line at 5-10 keV. We applied several models to fit this line: A Gaussian line, a relativistically broadened emission line model, kyrline, and two models including relativistically smeared and ionized reflection off the accretion disc with different coronal heights, relxill and relxilllp. All models fit the spectra well; however, the kyrline and relxill models yield an inclination of the accretion disc of ˜88° with respect to the line of sight, which is at odds with the fact that this source shows no dips or eclipses. The relxilllp model, on the other hand, gives a reasonable inclination of ˜56°. We discuss our results for these models in this source and the possible primary source of the hard X-rays.

  9. The risk assessment of sudden water pollution for river network system under multi-source random emission

    NASA Astrophysics Data System (ADS)

    Li, D.

    2016-12-01

    Sudden water pollution accidents are unavoidable risk events that we must learn to co-exist with. In China's Taihu River Basin, the river flow conditions are complicated, with frequent artificial interference. Sudden water pollution accidents occur mainly in the form of large abnormal discharges of wastewater, and are characterized by sudden occurrence, uncontrollable scope, uncertain affected objects and a concentrated distribution of many risk sources. Effective prevention of pollution accidents that may occur is of great significance for water quality safety management. Bayesian networks can be applied to represent the relationship between pollution sources and river water quality intuitively. Using a time-sequential Monte Carlo algorithm, a pollution-source state-switching model, a water quality model for the river network and Bayesian reasoning are integrated, and a sudden water pollution risk assessment model for the river network is developed to quantify the water quality risk under the collective influence of multiple pollution sources. Based on the isotope water transport mechanism, a dynamic tracing model of multiple pollution sources is established, which can describe the relationship between the excessive risk of the system and the multiple risk sources. Finally, the diagnostic reasoning algorithm based on the Bayesian network is coupled with the multi-source tracing model, which can identify the contribution of each risk source to the system risk under complex flow conditions. Taking the Taihu Lake water system as the research object, the model is applied and obtains reasonable results for three typical years. Studies have shown that the water quality risk at critical sections is influenced by the pollution risk sources, the boundary water quality, the hydrological conditions and the self-purification capacity, and that multiple pollution sources have an obvious effect on the water quality risk of the receiving water body. The water quality risk assessment approach developed in this study offers an effective tool for systematically quantifying the random uncertainty in a plain river network system, and it also provides technical support for decision-making on controlling sudden water pollution through identification of critical pollution sources.

  10. QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.

    PubMed

    Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V

    2015-07-27

    Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we are addressing the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases for QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two databases, one commercial and one freely available: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of them is the general lack of complete and semantic/computer-parsable descriptions of assay methodology carried by these databases that would allow one to determine mix-and-matchability of result sets at the assay level.

  11. Testing earthquake source inversion methodologies

    USGS Publications Warehouse

    Page, M.; Mai, P.M.; Schorlemmer, D.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  12. Final safety analysis report for the Galileo Mission: Volume 2: Book 1, Accident model document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Accident Model Document (AMD) is the second volume of the three volume Final Safety Analysis Report (FSAR) for the Galileo outer planetary space science mission. This mission employs Radioisotope Thermoelectric Generators (RTGs) as the prime electrical power sources for the spacecraft. Galileo will be launched into Earth orbit using the Space Shuttle and will use the Inertial Upper Stage (IUS) booster to place the spacecraft into an Earth escape trajectory. The RTG's employ silicon-germanium thermoelectric couples to produce electricity from the heat energy that results from the decay of the radioisotope fuel, Plutonium-238, used in the RTG heat source. The heat source configuration used in the RTG's is termed General Purpose Heat Source (GPHS), and the RTG's are designated GPHS-RTGs. The use of radioactive material in these missions necessitates evaluations of the radiological risks that may be encountered by launch complex personnel as well as by the Earth's general population resulting from postulated malfunctions or failures occurring in the mission operations. The FSAR presents the results of a rigorous safety assessment, including substantial analyses and testing, of the launch and deployment of the RTGs for the Galileo mission. This AMD is a summary of the potential accident and failure sequences which might result in fuel release, the analysis and testing methods employed, and the predicted source terms. Each source term consists of a quantity of fuel released, the location of release and the physical characteristics of the fuel released. Each source term has an associated probability of occurrence. 27 figs., 11 tabs.

  13. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study

    PubMed Central

    Hosseinyalamdary, Siavash

    2018-01-01

    Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as the Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed that our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy.
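
    The core idea, adding a learning ("modelling") step to the Kalman predict/update cycle so that a sensor error model is estimated during integration, can be sketched in one dimension. The bias-adaptation rule, noise levels, and update rates below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 5000
true_bias = 0.3                                   # constant IMU accelerometer bias
acc_true = np.sin(np.arange(n) * dt)              # true acceleration
acc_meas = acc_true + true_bias + 0.05 * rng.standard_normal(n)
vel_true = np.cumsum(acc_true) * dt
pos_true = np.cumsum(vel_true) * dt

F = np.array([[1.0, dt], [0.0, 1.0]])             # state: [position, velocity]
B = np.array([0.5 * dt**2, dt])                   # control input: acceleration
H = np.array([[1.0, 0.0]])                        # GNSS measures position
Q = np.diag([1e-8, 1e-6])
R = np.array([[0.25]])

x, P = np.zeros(2), np.eye(2)
bias_hat, lr = 0.0, 0.05      # learned error model (here just a scalar bias)

for t in range(n):
    # prediction: the learned error model corrects the raw IMU reading
    x = F @ x + B * (acc_meas[t] - bias_hat)
    P = F @ P @ F.T + Q
    if t % 100 == 0:                              # GNSS position fix at 1 Hz
        z = pos_true[t] + 0.5 * rng.standard_normal()
        y = z - (H @ x).item()                    # innovation
        S = (H @ P @ H.T + R).item()
        K = (P @ H.T / S).ravel()
        x = x + K * y
        P = (np.eye(2) - np.outer(K, H)) @ P
        # "modelling" step: a negative innovation means the prediction ran
        # ahead, i.e. the bias is underestimated, so raise the estimate
        bias_hat -= lr * y

print(f"estimated bias = {bias_hat:.3f} (true value {true_bias})")
```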

  14. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions.

  15. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study.

    PubMed

    Hosseinyalamdary, Siavash

    2018-04-24

    Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as the Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed that our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy.

  16. Water quality modeling using geographic information system (GIS) data

    NASA Technical Reports Server (NTRS)

    Engel, Bernard A

    1992-01-01

    Protection of the environment and natural resources at the Kennedy Space Center (KSC) is of great concern. The potential for surface and ground water quality problems resulting from non-point sources of pollution was examined using models. Since spatial variation of parameters required was important, geographic information systems (GIS) and their data were used. The potential for groundwater contamination was examined using the SEEPAGE (System for Early Evaluation of the Pollution Potential of Agricultural Groundwater Environments) model. A watershed near the VAB was selected to examine potential for surface water pollution and erosion using the AGNPS (Agricultural Non-Point Source Pollution) model.

  17. Information Extraction for System-Software Safety Analysis: Calendar Year 2008 Year-End Report

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2009-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  18. Connecting Numerical Relativity and Data Analysis of Gravitational Wave Detectors

    NASA Astrophysics Data System (ADS)

    Shoemaker, Deirdre; Jani, Karan; London, Lionel; Pekowsky, Larne

    Gravitational waves deliver information in exquisite detail about astrophysical phenomena, among them the collision of two black holes, a system completely invisible to the eyes of electromagnetic telescopes. Models that predict gravitational wave signals from likely sources are crucial for the success of this endeavor. Modeling binary black hole sources of gravitational radiation requires solving the Einstein equations of General Relativity using powerful computer hardware and sophisticated numerical algorithms. This proceeding presents where we are in understanding ground-based gravitational waves resulting from the merger of black holes and the implications of these sources for the advent of gravitational-wave astronomy.

  19. Chemical characteristics and source apportionment of indoor and outdoor fine particles observed in an urban environment in Korea

    NASA Astrophysics Data System (ADS)

    Heo, J.; Yi, S. M.

    2016-12-01

    Paired indoor-outdoor fine particulate matter (PM2.5) samples were collected at subway stations, underground shopping centers, and schools in the Seoul metropolitan area over a 4-year period between 2004 and 2007. Relationships between indoor and outdoor PM2.5 chemical species were determined and source contributions to indoor and outdoor PM2.5 mass were estimated using a positive matrix factorization (PMF) model. The PM2.5 samples were analyzed for major chemical components including organic carbon and elemental carbon, ions, and metals, and the results were used in the PMF model. The levels of PM2.5 mass and its chemical components observed at the indoor sites were higher than those at the outdoor sites. Indoor levels of ions (i.e. sulfate, nitrate, ammonium), elemental carbon, and several metals (i.e. Fe, Zn, and Cu) were found to be significantly affected by outdoor sources. In particular, very high indoor-to-outdoor mass ratios of these chemical components were observed, reflecting the significant impact of outdoor sources on their indoor levels. Seven sources (secondary sulfate, secondary nitrate, mobile, biomass burning, roadway emissions, dust, and sea salt) were resolved by the PMF model at both the indoor and outdoor sites. The secondary inorganic aerosol (i.e. secondary sulfate and nitrate) and the mobile sources were the major contributors to indoor and outdoor PM2.5, accounting for 47% and 27% of the outdoor PM2.5 and 40% and 25% of the indoor PM2.5, respectively. Furthermore, the contributions of the secondary inorganic aerosol and the mobile sources to the indoor PM2.5 were very comparable to the corresponding contributions to the outdoor PM2.5 levels. The spatial and temporal characteristics of each of the sources resolved by the PMF model across the sites were examined using summary statistics, correlation analysis, and coefficient of variation and divergence analysis; the detailed results will be discussed in the presentation.
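
    PMF is closely related to non-negative matrix factorization of the data matrix X into source contributions G and source profiles F; PMF additionally weights each entry by its measurement uncertainty. As a rough, unweighted stand-in, a minimal sketch with synthetic data using scikit-learn looks like this:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)

# Synthetic data: 200 samples x 10 species, mixed from 3 hypothetical
# source profiles (rows of F_true), mimicking the X = G @ F receptor form
F_true = rng.random((3, 10))
G_true = rng.gamma(2.0, 1.0, size=(200, 3))
X = G_true @ F_true + 0.01 * rng.random((200, 10))

model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
G = model.fit_transform(X)        # source contributions per sample
F = model.components_             # source (factor) chemical profiles

# average relative contribution of each resolved factor to total mass
mass = (G * F.sum(axis=1)).sum(axis=0)
print(mass / mass.sum())
```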

  20. Use of the Hydrological Simulation Program-FORTRAN and bacterial source tracking for development of the fecal coliform total maximum daily load (TMDL) for Blacks Run, Rockingham County, Virginia

    USGS Publications Warehouse

    Moyer, Douglas; Hyer, Kenneth

    2003-01-01

    Impairment of surface waters by fecal coliform bacteria is a water-quality issue of national scope and importance. Section 303(d) of the Clean Water Act requires that each State identify surface waters that do not meet applicable water-quality standards. In Virginia, more than 175 stream segments are on the 1998 Section 303(d) list of impaired waters because of violations of the water-quality standard for fecal coliform bacteria. A total maximum daily load (TMDL) will need to be developed by 2006 for each of these impaired streams and rivers by the Virginia Departments of Environmental Quality and Conservation and Recreation. A TMDL is a quantitative representation of the maximum load of a given water-quality constituent, from all point and nonpoint sources, that a stream can assimilate without violating the designated water-quality standard. Blacks Run, in Rockingham County, Virginia, is one of the stream segments listed by the State of Virginia as impaired by fecal coliform bacteria. Watershed modeling and bacterial source tracking were used to develop the technical components of the fecal coliform bacteria TMDL for Blacks Run. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate streamflow, fecal coliform concentrations, and source-specific fecal coliform loading in Blacks Run. Ribotyping, a bacterial source tracking technique, was used to identify the dominant sources of fecal coliform bacteria in the Blacks Run watershed. Ribotyping also was used to determine the relative contributions of specific sources to the observed fecal coliform load in Blacks Run. Data from the ribotyping analysis were incorporated into the calibration of the fecal coliform model. Study results provide information regarding the calibration of the streamflow and fecal coliform bacteria models and also identify the reductions in fecal coliform loads required to meet the TMDL for Blacks Run. The calibrated streamflow model simulated observed streamflow characteristics with respect to total annual runoff, seasonal runoff, average daily streamflow, and hourly stormflow. The calibrated fecal coliform model simulated the patterns and range of observed fecal coliform bacteria concentrations. Observed fecal coliform bacteria concentrations during low-flow periods ranged from 40 to 7,000 colonies per 100 milliliters, and peak concentrations during storm-flow periods ranged from 33,000 to 260,000 colonies per 100 milliliters. Simulated source-specific contributions of fecal coliform bacteria to instream load were matched to the observed contributions from the dominant sources, which were cats, cattle, deer, dogs, ducks, geese, horses, humans, muskrats, poultry, raccoons, and sheep. According to model results, a 95-percent reduction in the current fecal coliform load delivered from the watershed to Blacks Run would result in compliance with the designated water-quality goals and associated TMDL.

  1. Use of the Hydrological Simulation Program-FORTRAN and Bacterial Source Tracking for Development of the fecal coliform Total Maximum Daily Load (TMDL) for Accotink Creek, Fairfax County, Virginia

    USGS Publications Warehouse

    Moyer, Douglas; Hyer, Kenneth

    2003-01-01

    Impairment of surface waters by fecal coliform bacteria is a water-quality issue of national scope and importance. Section 303(d) of the Clean Water Act requires that each State identify surface waters that do not meet applicable water-quality standards. In Virginia, more than 175 stream segments are on the 1998 Section 303(d) list of impaired waters because of violations of the water-quality standard for fecal coliform bacteria. A total maximum daily load (TMDL) will need to be developed by 2006 for each of these impaired streams and rivers by the Virginia Departments of Environmental Quality and Conservation and Recreation. A TMDL is a quantitative representation of the maximum load of a given water-quality constituent, from all point and nonpoint sources, that a stream can assimilate without violating the designated water-quality standard. Accotink Creek, in Fairfax County, Virginia, is one of the stream segments listed by the State of Virginia as impaired by fecal coliform bacteria. Watershed modeling and bacterial source tracking were used to develop the technical components of the fecal coliform bacteria TMDL for Accotink Creek. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate streamflow, fecal coliform concentrations, and source-specific fecal coliform loading in Accotink Creek. Ribotyping, a bacterial source tracking technique, was used to identify the dominant sources of fecal coliform bacteria in the Accotink Creek watershed. Ribotyping also was used to determine the relative contributions of specific sources to the observed fecal coliform load in Accotink Creek. Data from the ribotyping analysis were incorporated into the calibration of the fecal coliform model. Study results provide information regarding the calibration of the streamflow and fecal coliform bacteria models and also identify the reductions in fecal coliform loads required to meet the TMDL for Accotink Creek. The calibrated streamflow model simulated observed streamflow characteristics with respect to total annual runoff, seasonal runoff, average daily streamflow, and hourly stormflow. The calibrated fecal coliform model simulated the patterns and range of observed fecal coliform bacteria concentrations. Observed fecal coliform bacteria concentrations during low-flow periods ranged from 25 to 800 colonies per 100 milliliters, and peak concentrations during storm-flow periods ranged from 19,000 to 340,000 colonies per 100 milliliters. Simulated source-specific contributions of fecal coliform bacteria to instream load were matched to the observed contributions from the dominant sources, which were cats, deer, dogs, ducks, geese, humans, muskrats, and raccoons. According to model results, an 89-percent reduction in the current fecal coliform load delivered from the watershed to Accotink Creek would result in compliance with the designated water-quality goals and associated TMDL.
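
    The reported percentage reductions in these two TMDL studies follow from simple load arithmetic: the required reduction is one minus the ratio of the allowable (TMDL-compliant) load to the current load. A sketch with hypothetical loads:

```python
# Required load reduction to meet a TMDL: 1 - allowable/current.
# Both load values below are hypothetical, not the study's numbers.
current_load   = 3.2e12   # fecal coliform colonies/yr delivered to the stream
allowable_load = 3.5e11   # load that still meets the water-quality standard
reduction = 1.0 - allowable_load / current_load
print(f"required reduction = {reduction:.0%}")
```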

  2. Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aab, A.; Abreu, P.; Andringa, S.

    2017-04-01

    We present a combined fit of a simple astrophysical model of UHECR sources to both the energy spectrum and mass composition data measured by the Pierre Auger Observatory. The fit has been performed for energies above 5 · 10^18 eV, i.e. the region of the all-particle spectrum above the so-called 'ankle' feature. The astrophysical model we adopted consists of identical sources uniformly distributed in a comoving volume, where nuclei are accelerated through a rigidity-dependent mechanism. The fit results suggest sources characterized by relatively low maximum injection energies, hard spectra and heavy chemical composition. We also show that uncertainties about physical quantities relevant to UHECR propagation and shower development have a non-negligible impact on the fit results.

  3. Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Aab, A.; Abreu, P.; Aglietta, M.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Anastasi, G. A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arqueros, F.; Arsene, N.; Asorey, H.; Assis, P.; Aublin, J.; Avila, G.; Badescu, A. M.; Balaceanu, A.; Barreira Luz, R. J.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Biteau, J.; Blaess, S. G.; Blanco, A.; Blazek, J.; Bleve, C.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Borodai, N.; Botti, A. M.; Brack, J.; Brancus, I.; Bretz, T.; Bridgeman, A.; Briechle, F. L.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, L.; Cancio, A.; Canfora, F.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Chavez, A. G.; Chinellato, J. A.; Chudoba, J.; Clay, R. W.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Cronin, J.; D'Amico, S.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Jong, S. J.; De Mauro, G.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; Debatin, J.; Deligny, O.; Di Giulio, C.; di Matteo, A.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; D'Olivo, J. C.; Dorosti, Q.; dos Anjos, R. C.; Dova, M. T.; Dundovic, A.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Fick, B.; Figueira, J. M.; Filipčič, A.; Fratu, O.; Freire, M. M.; Fujii, T.; Fuster, A.; Gaior, R.; García, B.; Garcia-Pinto, D.; Gaté, F.; Gemmeke, H.; Gherghel-Lascu, A.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Głas, D.; Glaser, C.; Golup, G.; Gómez Berisso, M.; Gómez Vitale, P. F.; González, N.; Gorgi, A.; Gorham, P.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huege, T.; Hulsman, J.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Johnsen, J. A.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Katkov, I.; Keilhauer, B.; Kemp, E.; Kemp, J.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kuempel, D.; Kukec Mezek, G.; Kunka, N.; Kuotb Awad, A.; LaHurd, D.; Lauscher, M.; Legumina, R.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopes, L.; López, R.; López Casado, A.; Luce, Q.; Lucero, A.; Malacari, M.; Mallamaci, M.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Mariş, I. C.; Marsella, G.; Martello, D.; Martinez, H.; Martínez Bravo, O.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melo, D.; Menshikov, A.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Mockler, D.; Mollerach, S.; Montanet, F.; Morello, C.; Mostafá, M.; Müller, A. L.; Müller, G.; Muller, M. A.; Müller, S.; Mussa, R.; Naranjo, I.; Nellen, L.; Nguyen, P. H.; Niculescu-Oglinzanu, M.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, H.; Núñez, L. A.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Palatka, M.; Pallotta, J.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pedreira, F.; Pȩkala, J.; Pelayo, R.; Peña-Rodriguez, J.; Pereira, L. A. S.; Perlín, M.; Perrone, L.; Peters, C.; Petrera, S.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Ramos-Pollan, R.; Rautenberg, J.; Ravignani, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rogozin, D.; Roncoroni, M. J.; Roth, M.; Roulet, E.; Rovero, A. C.; Ruehl, P.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santos, E. M.; Santos, E.; Sarazin, F.; Sarmento, R.; Sarmiento, C. A.; Sato, R.; Schauer, M.; Scherini, V.; Schieler, H.; Schimp, M.; Schmidt, D.; Scholten, O.; Schovánek, P.; Schröder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sigl, G.; Silli, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sonntag, S.; Sorokin, J.; Squartini, R.; Stanca, D.; Stanič, S.; Stasielak, J.; Stassi, P.; Strafella, F.; Suarez, F.; Suarez Durán, M.; Sudholz, T.; Suomijärvi, T.; Supanitsky, A. D.; Swain, J.; Szadkowski, Z.; Taboada, A.; Taborda, O. A.; Tapia, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. J.; Tomankova, L.; Tomé, B.; Torralba Elipe, G.; Travnicek, P.; Trini, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Vergara Quispe, I. D.; Verzi, V.; Vicha, J.; Villaseñor, L.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weindl, A.; Wiencke, L.; Wilczyński, H.; Winchen, T.; Wirtz, M.; Wittkowski, D.; Wundheiler, B.; Yang, L.; Yelos, D.; Yushkov, A.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zimmermann, B.; Ziolkowski, M.; Zong, Z.

    2017-04-01

    We present a combined fit of a simple astrophysical model of UHECR sources to both the energy spectrum and mass composition data measured by the Pierre Auger Observatory. The fit has been performed for energies above 5 · 10^18 eV, i.e. the region of the all-particle spectrum above the so-called "ankle" feature. The astrophysical model we adopted consists of identical sources uniformly distributed in a comoving volume, where nuclei are accelerated through a rigidity-dependent mechanism. The fit results suggest sources characterized by relatively low maximum injection energies, hard spectra and heavy chemical composition. We also show that uncertainties about physical quantities relevant to UHECR propagation and shower development have a non-negligible impact on the fit results.
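
    Combined fits of this kind typically parameterize each injected nuclear species as a power law with a rigidity-dependent cut-off; the broken-exponential cut-off shape written below is one standard choice, assumed here rather than confirmed by the abstract:

```latex
% Injection spectrum for a nucleus of charge Z: a power law with a
% rigidity-dependent cut-off at energy E = Z * R_cut (form assumed)
J_A(E) \propto E^{-\gamma} \times
\begin{cases}
  1, & E < Z R_{\mathrm{cut}}, \\
  \exp\!\left(1 - \dfrac{E}{Z R_{\mathrm{cut}}}\right), & E \ge Z R_{\mathrm{cut}}.
\end{cases}
```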

  4. A model-based analysis of extinction ratio effects on phase-OTDR distributed acoustic sensing system performance

    NASA Astrophysics Data System (ADS)

    Aktas, Metin; Maral, Hakan; Akgun, Toygar

    2018-02-01

    Extinction ratio is an inherent limiting factor that has a direct effect on the detection performance of phase-OTDR based distributed acoustic sensing systems. In this work we present a model-based analysis of Rayleigh scattering to simulate the effects of extinction ratio on the received signal under varying signal acquisition scenarios and system parameters. These signal acquisition scenarios are constructed to represent typically observed cases such as multiple vibration sources cluttered around the target vibration source to be detected, continuous wave light sources with center frequency drift, varying fiber optic cable lengths and varying ADC bit resolutions. Results show that an insufficient ER can result in a high optical noise floor and effectively hide the effects of elaborate system improvement efforts.

  5. Ultraluminous X-ray sources: new distance indicators?

    NASA Astrophysics Data System (ADS)

    Różańska, A.; Bresler, K.; Bełdycki, B.; Madej, J.; Adhikari, T. P.

    2018-05-01

    Aims: In this paper we fit the NuSTAR and XMM-Newton data of three sources: NGC 7793 P13, NGC 5907 ULX1, and Circinus ULX5. Methods: Our single model contains emission from a non-spherical system: a neutron star plus an accretion disk directed towards the observer. Results: We obtained a very good fit with the reduced χ² per degree of freedom equal to 1.08 for P13, 1.01 for ULX1, and 1.14 for ULX5. The normalization of our model constrains the distance to the source. The resulting distances are D = 3.41 (+0.11/-0.10), 6.55 (+0.69/-0.81), and 2.60 (+0.05/-0.03) Mpc for P13, ULX1, and ULX5 respectively. The distances to P13 and ULX5 are in perfect agreement with previous distance measurements to their host galaxies. Conclusions: Our results confirm that P13, ULX1, and ULX5 may contain central hot neutron stars. When the outgoing emission is computed by integration over the emitting surface and successfully fitted to the data, the resulting model normalization is a direct distance indicator.
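
    The distance estimate works because a surface-emission model's normalization scales as (R/D)² for an emitting region of radius R. A sketch of the inversion, with an assumed radius and an illustrative normalization rather than the paper's fitted values:

```python
import numpy as np

# If the fitted normalization K equals (R / D)^2 for an emitting region of
# radius R, the distance follows directly. Both values below are illustrative.
R_km = 10.0          # assumed neutron-star emitting radius
K = 9.1e-39          # illustrative fitted normalization, dimensionless (R/D)^2

R_cm = R_km * 1e5
D_cm = R_cm / np.sqrt(K)
D_Mpc = D_cm / 3.086e24      # cm per Mpc
print(f"D = {D_Mpc:.2f} Mpc")
```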

  6. Electronic neutron sources for compensated porosity well logging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, A. X.; Antolak, A. J.; Leung, K. -N.

    2012-08-01

    The viability of replacing Americium–Beryllium (Am–Be) radiological neutron sources in compensated porosity nuclear well logging tools with D–T or D–D accelerator-driven neutron sources is explored. The analysis consisted of developing a model for a typical well-logging borehole configuration and computing the helium-3 detector response to varying formation porosities using three different neutron sources (Am–Be, D–D, and D–T). The results indicate that, when normalized to the same source intensity, the use of a D–D neutron source has greater sensitivity for measuring the formation porosity than either an Am–Be or D–T source. The results of the study provide operational requirements that enable compensated porosity well logging with a compact, low power D–D neutron generator, which the current state-of-the-art indicates is technically achievable.

  7. Relative Contributions of the Saharan and Sahelian Sources to the Atmospheric Dust Load Over the North Atlantic

    NASA Technical Reports Server (NTRS)

    Ginoux, Paul; Chin, M.; Torres, O.; Prospero, J.; Dubovik, O.; Holben, B.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    It has long been recognized that the Saharan desert is the major source for long range transport of mineral dust over the Atlantic. The contribution from other natural sources to the dust load over the Atlantic has generally been ignored in previous model studies or been replaced by anthropogenically disturbed soil emissions. Recently, Prospero et al. identified the major dust sources over the Earth using the TOMS aerosol index. They showed that these sources correspond to dry lakes with layers of sediment deposited in the late Holocene or Pleistocene. One of the most active of these sources seems to be the Bodele depression. Chiapello et al. analyzed the mineralogical composition of dust on the West coast of Africa. They found that Sahelian dust events are the most intense but are less frequent than Saharan plumes. This suggests that the Bodele depression could contribute significantly to the dust load over the Atlantic. The relative contribution of the Sahel and Sahara dust sources is of importance for marine biogeochemistry and atmospheric radiation, because each source has a distinct mineralogical composition. We present here a model study of the relative contributions of the Sahara and Sahel sources to the atmospheric dust aerosols over the North Atlantic. The Georgia Tech/Goddard Global Ozone Chemistry Aerosol Radiation and Transport (GOCART) model is used to simulate the dust distribution in 1996-1997. Dust particles are labeled depending on their sources. In this presentation, we will compare the model results with observations from ground-based measurements (dust concentration, optical thickness and size distribution) and satellite data (TOMS aerosol index). The relative contribution of each source will then be analyzed spatially and temporally.

  8. Rg-Lg coupling as a Lg-wave excitation mechanism

    NASA Astrophysics Data System (ADS)

    Ge, Z.; Xie, X.

    2003-12-01

    Regional phase Lg is predominantly comprised of shear wave energy trapped in the crust. Explosion sources are expected to be less efficient at exciting Lg phases than earthquakes, to the extent that the source can be approximated as isotropic. Shallow explosions generate relatively large surface wave Rg compared to deeper earthquakes, and Rg is readily disrupted by crustal heterogeneity. Rg energy may thus scatter into trapped crustal S-waves near the source region and contribute to low-frequency Lg waves. In this study, finite-difference modeling combined with slowness analysis is used to investigate the above-mentioned Lg-wave excitation mechanism. The method allows us to investigate near-source energy partitioning in multiple domains including frequency, slowness and time. The main advantage of this method is that it can be applied at close range, before Lg is actually formed, which allows us to use a very fine near-source velocity model to simulate the energy partitioning process. We use a layered velocity structure as the background model and add small near-source random velocity patches to the model to generate the Rg to Lg coupling. Two types of simulations are conducted: (1) a fixed shallow explosion source vs. randomness at different depths and (2) a fixed shallow randomness vs. explosion sources at different depths. The results show apparent coupling between the Rg and Lg waves at lower frequencies (0.3-1.5 Hz). A shallow source combined with shallow randomness generates the maximum Lg-wave, which is consistent with the Rg energy distribution of a shallow explosion source. The Rg energy and excited Lg energy show a near-linear relationship. The numerical simulation and slowness analysis suggest that Rg to Lg coupling is an effective excitation mechanism for low-frequency Lg-waves from a shallow explosion source.

  9. Modelling absorbing aerosol with ECHAM-HAM: Insights from regional studies

    NASA Astrophysics Data System (ADS)

    Tegen, Ina; Heinold, Bernd; Schepanski, Kerstin; Banks, Jamie; Kubin, Anne; Schacht, Jacob

    2017-04-01

    Quantifying the distributions and properties of absorbing aerosol is a basis for investigations of the interactions of aerosol particles with radiation and climate. While evaluations of aerosol models by field measurements can be particularly successful at the regional scale, such results need to be put into a global context for climate studies. We present an overview of studies performed at the Leibniz Institute for Tropospheric Research aiming at constraining the properties of mineral dust and soot aerosol in the global aerosol model ECHAM6-HAM2 based on different regional studies. An example is the impact of different sources on dust transported to central Asia, which is influenced by long-range transport of dust from Arabia and the Sahara together with dust from local sources. Dust types from these different source regions were investigated in the context of the CADEX project and are expected to have different optical properties. For Saharan dust, satellite retrievals from MSG SEVIRI are used to constrain Saharan dust sources and optical properties. In the Arctic region, dust aerosol is simulated in the framework of the PalMod project; in addition, aerosol measurements taken during the DFG-funded (AC)3 field campaigns will be used to evaluate the simulated transport pathways of soot aerosol from European, North American and Asian sources, as well as the parameterization of soot ageing processes in ECHAM6-HAM2. Ultimately, results from these studies will improve the representation of aerosol absorption in the global model.

  10. Spatial and temporal changes of water quality, and SWAT modeling of Vosvozis river basin, North Greece.

    PubMed

    Boskidis, Ioannis; Gikas, Georgios D; Pisinaras, Vassilios; Tsihrintzis, Vassilios A

    2010-09-01

    The results of an investigation of the quantitative and qualitative characteristics of Vosvozis river in Northern Greece are presented. For the purposes of this study, three gaging stations were installed along Vosvozis river, where water quantity and quality measurements were conducted for the period August 2005 to November 2006. Water discharge, temperature, pH, dissolved oxygen (DO) and electrical conductivity (EC) were measured in situ using appropriate equipment. The collected water samples were analyzed in the laboratory for the determination of nitrate, nitrite and ammonium nitrogen, total Kjeldahl nitrogen (TKN), orthophosphate (OP), total phosphorus (TP), COD, and BOD. Agricultural diffuse sources provided the major source of nitrate nitrogen loads during the wet period. During the dry period (from June to October), the major nutrient (N, P), COD and BOD sources were point sources. The trophic status of Vosvozis river during the monitoring period was determined as eutrophic, based on the Dodds classification scheme. Moreover, the SWAT model was used to simulate hydrographs and nutrient loads. SWAT was validated with the measured data. Predicted hydrographs and pollutographs were plotted against observed values and showed good agreement. The validated model was used to test eight alternative scenarios concerning different cropping management approaches. The results of these scenarios indicate that nonpoint source pollution is the prevailing type of pollution in the study area. The SWAT model was found to satisfactorily simulate processes in ephemeral river basins and is an effective tool for water resources management.
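
    Agreement between predicted and observed hydrographs is usually quantified with a goodness-of-fit statistic; the Nash-Sutcliffe efficiency (NSE) is the customary choice in SWAT studies, though the abstract does not state which metric was used. A minimal sketch with illustrative discharges:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, <0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# illustrative daily discharges (m^3/s), not the Vosvozis data
obs = [1.2, 0.9, 3.4, 5.1, 2.2, 1.4, 1.0]
sim = [1.0, 1.1, 2.9, 4.6, 2.5, 1.6, 0.9]
print(f"NSE = {nse(obs, sim):.2f}")
```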

  11. Effect of conductor geometry on source localization: Implications for epilepsy studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlitt, H.; Heller, L.; Best, E.

    1994-07-01

    We shall discuss the effects of conductor geometry on source localization for applications in epilepsy studies. The most popular conductor model for clinical MEG studies is a homogeneous sphere. However, several studies have indicated that a sphere is a poor model for the head when the sources are deep, as is the case for epileptic foci in the mesial temporal lobe. We believe that replacing the spherical model with a more realistic one in the inverse fitting procedure will improve the accuracy of localizing epileptic sources. In order to include a realistic head model in the inverse problem, we must first solve the forward problem for the realistic conductor geometry. We create a conductor geometry model from MR images, and then solve the forward problem via a boundary integral equation for the electric potential due to a specified primary source. Once the electric potential is known, the magnetic field can be calculated directly. The most time-intensive part of the problem is generating the conductor model; fortunately, this needs to be done only once for each patient. It takes little time to change the primary current and calculate a new magnetic field for use in the inverse fitting procedure. We present the results of a series of computer simulations in which we investigate the localization accuracy due to replacing the spherical model with the realistic head model in the inverse fitting procedure. The data to be fit consist of a computer-generated magnetic field due to a known current dipole in a realistic head model, with added noise. We compare the localization errors when this field is fit using a spherical model to the fit using a realistic head model. Using a spherical model is comparable to what is usually done when localizing epileptic sources in humans, where the conductor model used in the inverse fitting procedure does not correspond to the actual head.

  12. Tsunami Inundation Mapping for the Upper East Coast of the United States

    NASA Astrophysics Data System (ADS)

    Tehranirad, B.; Kirby, J. T., Jr.; Callahan, J. A.; Shi, F.; Banihashemi, S.; Grilli, S. T.; Grilli, A. R.; Tajalli Bakhsh, T. S.; O'Reilly, C.

    2014-12-01

    We describe the modeling of tsunami inundation for the Upper US East Coast (USEC) from Ocean City, MD up to Nantucket, MA, and the development of inundation maps for use in emergency management and hazard analysis. Seven tsunami sources were used as initial conditions in order to develop inundation maps based on a Probable Maximum Tsunami approach. Of the seven, two were coseismic sources: the first a large earthquake in the Puerto Rico Trench, in the well-known Caribbean Subduction Zone, and the second an earthquake close to the Azores-Gibraltar plate boundary, known as the source of the largest tsunami recorded in the North Atlantic Basin. In addition, four Submarine Mass Failure (SMF) sources located at different locations on the edge of the shelf break were simulated. Finally, the Cumbre Vieja Volcano (CVV) collapse, located in the Canary Islands, was studied. For this presentation, we discuss modeling results for nearshore tsunami propagation and onshore inundation. A fully nonlinear Boussinesq model (FUNWAVE-TVD) is used to capture the characteristics of tsunami propagation, both nearshore and inland. In addition to the inundation line as the main result of this work, other tsunami quantities such as inundation depth and maximum velocities will be discussed for the whole USEC area. Moreover, a discussion of the areas most vulnerable to a possible tsunami on the USEC will be provided. For example, during the inundation simulation process, it was observed that coastal environments with barrier islands are among the hot spots likely to be significantly impacted by a tsunami. As a result, areas like western Long Island, NY and Atlantic City, NJ are among the locations that would be severely affected by a tsunami in the Atlantic Ocean. Finally, the differences between the various tsunami sources modeled here will be presented. Although inundation lines for different sources usually follow a similar pattern, there are clear distinctions between the inundation depth and other tsunami features in different areas. The figure below shows the inundation depth for the area surrounding Ocean City, MD: panels (a) and (b) show the envelope inundation depths for the SMF and coseismic sources, and panel (c) shows the inundation depth for the CVV source, which clearly has the largest magnitude among the sources studied in this work.

  13. Attribution of the French human Salmonellosis cases to the main food-sources according to the type of surveillance data.

    PubMed

    David, J M; Sanders, P; Bemrah, N; Granier, S A; Denis, M; Weill, F-X; Guillemot, D; Watier, L

    2013-05-15

    Salmonella are the most common bacterial cause of foodborne infections in France and ubiquitous pathogens present in many animal productions. Assessing the relative contribution of the different food-animal sources to the burden of human cases is a key step towards the conception, prioritization and assessment of efficient control policy measures. For this purpose, we considered a Bayesian microbial subtyping attribution approach based on a previously published model (Hald et al., 2004). It requires quality integrated data on human cases and on the contamination of their food sources, per serotype and microbial subtype, which were retrieved from the French integrated surveillance system for Salmonella. The quality of the data available for such an approach is an issue for many countries in which the surveillance system has not been designed for this purpose. In France, the sources are monitored simultaneously by an active, regulation-based surveillance system that produces representative prevalence data (as ideally required for the approach) and a passive system relying on voluntary laboratories that produces data not meeting the standards set by Hald et al. (2004) but covering a broader range of sources. These data allowed us to study the impact of data quality on the attribution results, globally and focusing on specific features of the data (number of sources and contamination indicator). The microbial subtyping attribution model was run using an adapted parameterization previously proposed (David et al., 2012). A total of 9076 domestic sporadic cases were included in the analyses, as well as 9 sources, among which 5 were common to the active and the passive datasets. The greatest impact on the attribution results was observed for the number of sources. Thus, especially in the absence of data on imported products, the attribution estimates presented here should be considered with caution. The results were comparable for both types of surveillance, leading to the conclusion that passive data constitute a potentially cost-effective complement to active data collection, especially interesting because the former encompass a greater number of sources. The model appeared robust to the type of surveillance, and provided that some methodological aspects of the model can be enhanced, it could also serve as a risk-based guidance tool for active surveillance systems.
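
    The Hald-type model referred to here attributes the expected number of human cases of a given subtype from a given source as a product of source consumption, subtype prevalence, and subtype- and source-dependent factors; this is the published form of the Hald et al. (2004) model, written here in its standard notation:

```latex
% Expected human cases of Salmonella subtype i attributed to food source j:
%   M_j    amount of food source j consumed
%   p_{ij} prevalence of subtype i in source j
%   q_i    subtype-dependent factor (e.g. pathogenicity/ascertainment)
%   a_j    source-dependent factor (e.g. processing and preparation)
\lambda_{ij} = M_j \, p_{ij} \, q_i \, a_j
```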

  14. Stepwise multiple regression method of greenhouse gas emission modeling in the energy sector in Poland.

    PubMed

    Kolasa-Wiecek, Alicja

    2015-04-01

    The energy sector in Poland is the source of 81% of greenhouse gas (GHG) emissions. Poland, among other European Union countries, occupies a leading position with regard to coal consumption. The Polish energy sector actively participates in efforts to reduce GHG emissions to the atmosphere, through a gradual decrease of the share of coal in the fuel mix and the development of renewable energy sources. All evidence which completes the knowledge about issues related to GHG emissions is a valuable source of information. The article presents the results of modeling of GHG emissions generated by the energy sector in Poland. For a better understanding of the quantitative relationship between total consumption of primary energy and greenhouse gas emission, a multiple stepwise regression model was applied. The modeling results for CO2 emissions demonstrate a strong relationship (0.97) with the hard coal consumption variable. The model's coefficient of determination against actual data is high, at 95%. The backward stepwise regression model for CH4 emission indicated the presence of hard coal (0.66), peat and fuel wood (0.34), and solid waste fuels as well as other sources (-0.64) as the most important variables. The adjusted coefficient is suitable and equals R² = 0.90. For N2O emission modeling, the obtained coefficient of determination is low, at 43%. A significant variable influencing the amount of N2O emission is peat and fuel wood consumption.
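
    Backward stepwise regression of the kind described can be sketched with statsmodels: fit the full model, drop the least significant predictor, and repeat until all remaining p-values clear a threshold. The predictors and data below are illustrative, not the Polish energy dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Illustrative energy-mix predictors (hypothetical data)
X = pd.DataFrame({
    "hard_coal": rng.random(40),
    "peat_wood": rng.random(40),
    "solid_waste": rng.random(40),
    "nat_gas": rng.random(40),
})
y = 3.0 * X["hard_coal"] + 0.5 * X["peat_wood"] + 0.1 * rng.standard_normal(40)

def backward_stepwise(X, y, alpha=0.05):
    """Drop the least significant predictor until all p-values < alpha."""
    cols = list(X.columns)
    while cols:
        fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return fit
        cols.remove(worst)
    return None

fit = backward_stepwise(X, y)
print(fit.params)        # the two informative predictors should survive
```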

  15. Sources, Properties, Aging, and Anthropogenic Influences on OA and SOA over the Southeast US and the Amazon duing SOAS, DC3, SEAC4RS, and GoAmazon

    EPA Science Inventory

    The SE US and the Amazon have large sources of biogenic VOCs, varying anthropogenic pollution impacts, and often poor organic aerosol (OA) model performance. Recent results on the sources, properties, aging, and impact of anthropogenic pollution on OA and secondary OA (SOA) over ...

  16. Inverse modeling methods for indoor airborne pollutant tracking: literature review and fundamentals.

    PubMed

    Liu, X; Zhai, Z

    2007-12-01

    Reduction in indoor environment quality calls for effective control and improvement measures. Accurate and prompt identification of contaminant sources ensures that they can be quickly removed and contaminated spaces isolated and cleaned. This paper discusses the use of inverse modeling to identify potential indoor pollutant sources with limited pollutant sensor data. The study reviews various inverse modeling methods for advection-dispersion problems and summarizes the methods into three major categories: forward, backward, and probability inverse modeling methods. The adjoint probability inverse modeling method is indicated as an appropriate model for indoor air pollutant tracking because it can quickly find source location, strength and release time without prior information. The paper introduces the principles of the adjoint probability method and establishes the corresponding adjoint equations for both multi-zone airflow models and computational fluid dynamics (CFD) models. The study proposes a two-stage inverse modeling approach integrating both multi-zone and CFD models, which can provide a rapid estimate of indoor pollution status and history for a whole building. Preliminary case study results indicate that the adjoint probability method is feasible for indoor pollutant inverse modeling. The proposed method can help identify contaminant source characteristics (location and release time) with limited sensor outputs. This will ensure an effective and prompt execution of building management strategies and thus achieve a healthy and safe indoor environment. The method can also help design optimal sensor networks.
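
    The adjoint probability method obtains source-location and release-time probabilities by solving transport equations backward from the sensors. As a much simpler stand-in that yields the same kind of posterior, the sketch below brute-forces a Bayesian grid search for a 1-D advection-dispersion plume; the plume parameters, sensor layout, and noise level are all assumptions.

```python
import numpy as np

def plume(x, t, x0, t0, M=1.0, u=0.3, D=0.05):
    """1-D advection-dispersion Green's function (well-mixed cross-section)."""
    dt = t - t0
    if dt <= 0:
        return 0.0
    return M / np.sqrt(4 * np.pi * D * dt) * np.exp(
        -(x - x0 - u * dt) ** 2 / (4 * D * dt))

# noisy readings (position, time, concentration) from a hidden source
truth = dict(x0=2.0, t0=1.0)
sensors = [(5.0, 12.0), (8.0, 20.0), (8.0, 25.0)]
rng = np.random.default_rng(4)
readings = [(x, t, plume(x, t, **truth) + 0.002 * rng.standard_normal())
            for x, t in sensors]
obs = np.array([c for _, _, c in readings])

# posterior over a source-location/release-time grid (flat prior, Gaussian noise)
x0s, t0s = np.linspace(0, 10, 101), np.linspace(0, 10, 101)
logp = np.zeros((x0s.size, t0s.size))
for i, x0 in enumerate(x0s):
    for j, t0 in enumerate(t0s):
        pred = np.array([plume(x, t, x0, t0) for x, t, _ in readings])
        logp[i, j] = -np.sum((obs - pred) ** 2) / (2 * 0.002 ** 2)

i, j = np.unravel_index(np.argmax(logp), logp.shape)
print(f"MAP source location = {x0s[i]:.1f}, release time = {t0s[j]:.1f}")
```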

  17. Multivariate matrix model for source identification of inrush water: A case study from Renlou and Tongting coal mine in northern Anhui province, China

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Yao, Duoxi; Su, Yue

    2018-02-01

    Under current energy demand, coal will remain one of the major energy sources in China for some time, so the task of coal mine safety production remains arduous. In order to identify mine water inrush sources accurately, this article takes Renlou and Tongting coal mines in the northern Anhui mining area as examples. A total of 7 conventional hydrochemical indexes were selected, namely Ca²⁺, Mg²⁺, Na⁺+K⁺, Cl⁻, SO₄²⁻, HCO₃⁻ and TDS, to establish a multivariate matrix model for identifying the source of inrush water. The results show that the model is simple, is rarely limited by the quantity of water samples, and has good recognition performance, so it can be applied to the control and treatment of water inrush.
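
    In its simplest form, a matrix model of this kind compares a sample's ion-concentration vector against a signature matrix of candidate source aquifers. The aquifer signatures and the distance-based membership score below are illustrative assumptions, not the paper's data or its exact model.

```python
import numpy as np

# column order: Ca2+, Mg2+, Na++K+, Cl-, SO4^2-, HCO3-, TDS (mg/L)
aquifers = {
    "Cenozoic":     np.array([ 80, 30, 120, 150, 200, 350, 1000]),
    "Coal-bearing": np.array([ 20, 10, 400, 300, 100, 600, 1500]),
    "Limestone":    np.array([150, 60,  60, 100, 450, 250, 1200]),
}
sample = np.array([30, 12, 380, 280, 120, 580, 1450])   # inrush water sample

# normalize each index by its spread across the candidate aquifers, then use
# Euclidean distance in the normalized space as the membership score
M = np.array(list(aquifers.values()))
scale = M.std(axis=0)
for name, sig in aquifers.items():
    d = np.linalg.norm((sample - sig) / scale)
    print(f"{name}: distance = {d:.2f}")   # smallest distance = likely source
```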

  18. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.
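
    The particle-tracking-plus-convolution step can be sketched directly: the travel-time distribution obtained from particle tracking acts as a unit response, and the in situ concentration is the convolution of the source release history with that response. All distributions and rates below are illustrative, not the UGTA values.

```python
import numpy as np

rng = np.random.default_rng(5)

# travel-time distribution h(t) from particle tracking: histogram of arrival
# times of particles released at the source (illustrative lognormal times)
arrivals = rng.lognormal(mean=3.0, sigma=0.4, size=20_000)   # years
t = np.arange(0, 100.0, 1.0)
h, _ = np.histogram(arrivals, bins=np.append(t, 100.0), density=True)

# source release rate q(t) (unit mass/yr): exponential decay after t = 0
lam = 0.05
q = np.exp(-lam * t)

# convolution integral C(t) = ∫ q(τ) h(t − τ) dτ, evaluated discretely (Δt = 1 yr)
C = np.convolve(q, h)[: t.size] * 1.0
print(f"peak concentration surrogate at t = {t[np.argmax(C)]:.0f} yr")
```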

  19. Inter-comparison of source apportionment of PM10 using PMF and CMB in three sites nearby an industrial area in central Italy

    NASA Astrophysics Data System (ADS)

    Cesari, Daniela; Donateo, Antonio; Conte, Marianna; Contini, Daniele

    2016-12-01

    Receptor models (RMs), based on the chemical composition of particulate matter (PM), such as Chemical Mass Balance (CMB) and Positive Matrix Factorization (PMF), represent useful tools for determining the impact of PM sources on air quality. This information is useful, especially in areas influenced by anthropogenic activities, to plan mitigation strategies for environmental management. A recent inter-comparison of source apportionment (SA) results showed that one of the difficulties in the comparison of estimated source contributions is the compatibility of the sources, i.e. the chemical profiles of factors/sources used in receptor models. This suggests that SA based on the integration of several RMs could give more stable and reliable solutions with respect to a single model. The aim of this work was to perform an inter-comparison of PMF (using the PMF3.0 and PMF5.0 codes) and CMB outputs, focusing on both source chemical profiles and estimates of source contributions. The dataset included 347 daily PM10 samples collected at three sites in central Italy located near industrial emissions. Samples were chemically analysed for the concentrations of 21 chemical species (NH4+, Ca2+, Mg2+, Na+, K+, SO42-, NO3-, Cl-, Si, Al, Ti, V, Mn, Fe, Ni, Cu, Zn, Br, EC, and OC) used as input to the RMs. The approach identified 9 factors/sources: marine, traffic, resuspended dust, biomass burning, secondary sulphate, secondary nitrate, crustal, coal combustion power plant and harbour-industrial. Results showed that the application of constraints in PMF5.0 improved the interpretability of profiles and the comparability of estimated source contributions with stoichiometric calculations. The inter-comparison of PMF and CMB gave significant differences for the secondary nitrate, biomass burning, and harbour-industrial sources, due to the incompatibility of these source profiles, which have local specificities. When these site-dependent specificities were taken into account, optimising the input source profiles of CMB, a significant improvement in the agreement of the estimated source contributions with PMF was obtained.
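
    CMB works in the opposite direction from PMF: the source profiles F are fixed inputs, and each sample's source contributions g are obtained by solving x ≈ F g under non-negativity. A minimal sketch with hypothetical profiles and concentrations:

```python
import numpy as np
from scipy.optimize import nnls

# rows: species (OC, EC, SO4, NO3, Fe, Zn); columns: hypothetical source
# profiles (mass fraction of each species per unit source mass)
F = np.array([
    [0.45, 0.10, 0.30],   # OC
    [0.20, 0.02, 0.05],   # EC
    [0.02, 0.50, 0.10],   # SO4
    [0.01, 0.25, 0.05],   # NO3
    [0.03, 0.01, 0.20],   # Fe
    [0.02, 0.01, 0.08],   # Zn
])
sources = ["traffic", "secondary", "industrial"]

x = np.array([6.0, 1.8, 5.5, 2.4, 1.1, 0.5])   # ambient concentrations (ug/m3)

g, resid = nnls(F, x)       # CMB: solve x ≈ F g with g >= 0
for s, c in zip(sources, g):
    print(f"{s}: {c:.1f} ug/m3")
```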

  20. Viscous remanent magnetization model for the Broken Ridge satellite magnetic anomaly

    NASA Technical Reports Server (NTRS)

    Johnson, B. D.

    1985-01-01

    An equivalent source model solution of the satellite magnetic field over Australia obtained by Mayhew et al. (1980) showed that the satellite anomalies could be related to geological features in Australia. When the processing and selection of the Magsat data over the Australian region had progressed to the point where interpretation procedures could be initiated, it was decided to start by attempting to model the Broken Ridge satellite anomaly, which represents one of the very few relatively isolated anomalies in the Magsat maps, with an unambiguous source region. Attention is given to details concerning the Broken Ridge satellite magnetic anomaly, the modeling method used, the Broken Ridge models, modeling results, and characteristics of magnetization.
