Sample records for source model based

  1. Assessing Model Characterization of Single Source ...

    EPA Pesticide Factsheets

    Aircraft measurements made downwind of specific coal-fired power plants during the 2013 Southeast Nexus field campaign provide a unique opportunity to evaluate single-source photochemical model predictions of both O3 and secondary PM2.5 species. The model did well at predicting downwind plume placement. It shows patterns of an increasing fraction of PM2.5 sulfate ion to the sum of SO2 and PM2.5 sulfate ion with distance from the source that are similar to ambient-based estimates. The model was less consistent in capturing downwind ambient-based trends in the conversion of NOX to NOY from these sources. Source sensitivity approaches, in particular subgrid plume treatment, capture near-source O3 titration by fresh NO emissions. However, capturing this near-source chemical feature did not translate into better downwind peak estimates of single-source O3 impacts. The model estimated O3 production from these sources, but these estimates were often lower than ambient-based estimates of source production. The downwind transect ambient measurements, in particular secondary PM2.5 and O3, include some contribution from other sources, which makes direct comparison with modeled source contributions challenging. Model source attribution results suggest contributions to secondary pollutants from multiple sources even where primary pollutants indicate the presence of a single source.

  2. An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.

    PubMed

    Singh, Parth Raj; Wang, Yide; Chargé, Pascal

    2017-03-30

    In this paper, we propose an exact model-based method for near-field source localization with a bistatic multiple-input, multiple-output (MIMO) radar system and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the approximated models used in most existing near-field source localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the approximated model-based method. Simulation results show the performance of the proposed method.

  3. An incentive-based source separation model for sustainable municipal solid waste management in China.

    PubMed

    Xu, Wanying; Zhou, Chuanbin; Lan, Yajun; Jin, Jiasheng; Cao, Aixin

    2015-05-01

    Municipal solid waste (MSW) management (MSWM) is among the most important and challenging tasks in large urban communities. Sound community-based waste management systems normally include waste reduction and material recycling elements, often entailing the separation of recyclable materials by the residents. To increase the efficiency of source separation and recycling, an incentive-based source separation model was designed and tested in 76 households in Guiyang, a city of almost three million people in southwest China. This model embraced the concepts of rewarding households for sorting organic waste, government funds for waste reduction, and introducing small recycling enterprises to promote source separation. Results show that after one year of operation, the waste reduction rate was 87.3%, and the comprehensive net benefit under the incentive-based source separation model increased by 18.3 CNY tonne(-1) (2.4 euros tonne(-1)) compared to that under the normal model. The stakeholder analysis (SA) shows that the centralized MSW disposal enterprises had minimal interest and may oppose the start-up of a new recycling system, while small recycling enterprises had a primary interest in promoting the incentive-based source separation model but the least ability to change the current recycling system. Strategies for promoting this incentive-based source separation model are also discussed in this study. © The Author(s) 2015.

  4. Modeling a Common-Source Amplifier Using a Ferroelectric Transistor

    NASA Technical Reports Server (NTRS)

    Sayyah, Rana; Hunt, Mitchell; MacLeod, Todd C.; Ho, Fat D.

    2010-01-01

    This paper presents a mathematical model characterizing the behavior of a common-source amplifier using a ferroelectric field-effect transistor (FeFET). The model is based on empirical data and incorporates several variables that affect the output, including frequency, load resistance, and gate-to-source voltage. Since the common-source amplifier is the most widely used amplifier in MOS technology, understanding and modeling the behavior of the FeFET-based common-source amplifier will help in the integration of FeFETs into many circuits.

  5. What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.

    2012-12-01

    A common feature of model inter-comparison efforts is that the base-year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.

  6. Fecal indicator organism modeling and microbial source tracking in environmental waters: Chapter 3.4.6

    USGS Publications Warehouse

    Nevers, Meredith; Byappanahalli, Muruleedhara; Phanikumar, Mantha S.; Whitman, Richard L.

    2016-01-01

    Mathematical models have been widely applied to surface waters to estimate rates of settling, resuspension, flow, dispersion, and advection in order to calculate the movement of particles that influence water quality. Of particular interest are the movement, survival, and persistence of microbial pathogens or their surrogates, which may contaminate recreational water, drinking water, or shellfish. Most models devoted to microbial water quality have focused on fecal indicator organisms (FIO), which act as surrogates for pathogens and viruses. Process-based modeling and statistical modeling have been used to track contamination events to their sources and to predict future events. The two types of models require different levels of expertise and input: process-based models rely on theoretical physical constructs to explain present conditions and biological distribution, while data-based, statistical models use extant paired data to do the same. The selection of the appropriate model and the interpretation of results are critical to proper use of these tools in microbial source tracking. Integration of the modeling approaches could provide insight for tracking and predicting contamination events in real time. A review of modeling efforts reveals that process-based modeling has great promise for microbial source tracking; further, combining the understanding of physical processes influencing FIO contamination developed with process-based models with molecular characterization of the population by gene-based (i.e., biological) or chemical markers may be an effective approach for locating sources and remediating contamination in order to better protect human health.

  7. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently in each iteration, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
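    The core update is easy to see in code. Below is a minimal, hypothetical sketch of FOCUSS-style iterative re-weighting with a maximum-neighbor weight in the spirit of the method described above; the toy lead-field matrix, neighbor structure, and dimensions are invented for illustration and this is not the authors' implementation.

    ```python
    import numpy as np

    def neighbor_weight_focuss(A, b, neighbors, n_iter=20, eps=1e-8):
        """FOCUSS-style sparse source imaging with neighbor-based weights.

        A         : (m, n) lead-field matrix, m sensors, n candidate source points
        b         : (m,) measured scalp potentials
        neighbors : neighbors[i] = array of spatial neighbors of point i
        """
        m, n = A.shape
        x = np.linalg.pinv(A) @ b              # minimum-norm starting solution
        for _ in range(n_iter):
            # weight each point by the largest |x| among itself and its neighbors,
            # mimicking the "maximum neighbor weight" idea described above
            w = np.array([np.max(np.abs(x[np.r_[i, neighbors[i]]])) for i in range(n)])
            W = np.diag(w + eps)
            x = W @ np.linalg.pinv(A @ W) @ b  # re-weighted minimum-norm solution
        return x

    # toy example: 8 sensors, 30 candidate sources on a line, 2 truly active
    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 30))
    x_true = np.zeros(30); x_true[[7, 21]] = [1.0, -0.8]
    b = A @ x_true
    nbrs = [np.array([j for j in (i - 1, i + 1) if 0 <= j < 30]) for i in range(30)]
    x_hat = neighbor_weight_focuss(A, b, nbrs)
    print(np.argsort(np.abs(x_hat))[-2:])      # indices of strongest recovered sources
    ```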

  8. DeltaSA tool for source apportionment benchmarking, description and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Pernigotti, D.; Belis, C. A.

    2018-05-01

    DeltaSA is an R package and a Java on-line tool developed at the EC Joint Research Centre to assist and benchmark source apportionment applications. Its key functionalities support two critical tasks in this kind of study: the assignment of a factor to a source in factor-analytical models (source identification) and the evaluation of model performance. The source identification is based on the similarity between a given factor and source chemical profiles from public databases. The model performance evaluation is based on statistical indicators used to compare model output with reference values generated in intercomparison exercises. The reference values are calculated as the ensemble average of the results reported by participants that have passed a set of testing criteria based on chemical profile and time series similarity. In this study, a sensitivity analysis of the model performance criteria is carried out using the results of a synthetic dataset for which "a priori" references are available. The consensus-modulated standard deviation punc is the best choice for model performance evaluation when a conservative approach is adopted.
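    As an illustration of the source-identification step, the sketch below matches a factor profile to candidate source profiles by Pearson distance, one common similarity measure for this task; the profiles and species list are invented placeholders, and this is not the DeltaSA code itself.

    ```python
    import numpy as np

    def pearson_distance(factor, profile):
        """1 - Pearson correlation between a factor and a candidate source profile."""
        r = np.corrcoef(factor, profile)[0, 1]
        return 1.0 - r

    # hypothetical chemical profiles (mass fractions over the same species list)
    factor = np.array([0.40, 0.25, 0.15, 0.10, 0.10])
    candidates = {
        "traffic":      np.array([0.38, 0.27, 0.14, 0.11, 0.10]),
        "biomass burn": np.array([0.10, 0.15, 0.45, 0.20, 0.10]),
    }
    best = min(candidates, key=lambda s: pearson_distance(factor, candidates[s]))
    print(best)  # -> "traffic"
    ```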

  9. Colors of attraction: Modeling insect flight to light behavior.

    PubMed

    Donners, Maurice; van Grunsven, Roy H A; Groenendijk, Dick; van Langevelde, Frank; Bikker, Jan Willem; Longcore, Travis; Veenendaal, Elmar

    2018-06-26

    Light sources attract nocturnal flying insects, but some lamps attract more insects than others. The relation between the properties of a light source and the number of attracted insects is, however, poorly understood. We developed a model to quantify the attractiveness of light sources based on the spectral output. This model is fitted using data from field experiments that compare a large number of different light sources. We validated this model using two additional datasets, one for all insects and one excluding the numerous Diptera. Our model facilitates the development and application of light sources that attract fewer insects without the need for extensive field tests and it can be used to correct for spectral composition when formulating hypotheses on the ecological impact of artificial light. In addition, we present a tool allowing the conversion of the spectral output of light sources to their relative insect attraction based on this model. © 2018 Wiley Periodicals, Inc.
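    A minimal sketch of a spectrum-weighted attraction index is shown below: the lamp spectrum is weighted by an insect spectral-response curve and normalized by total output. The Gaussian response curve and lamp spectra here are invented placeholders; in the paper the response is fitted to field-count data, not assumed.

    ```python
    import numpy as np

    # common wavelength grid (nm)
    wl = np.arange(300, 701, 10)

    # hypothetical insect spectral response, high in the UV/blue -- a stand-in
    # for the curve the authors fit to field data
    response = np.exp(-0.5 * ((wl - 380.0) / 60.0) ** 2)

    def relative_attraction(spectrum):
        """Response-weighted output divided by total output (uniform grid)."""
        return (response * spectrum).sum() / spectrum.sum()

    uv_rich = np.exp(-0.5 * ((wl - 400.0) / 40.0) ** 2)  # toy blue-peaked lamp
    amber   = np.exp(-0.5 * ((wl - 590.0) / 30.0) ** 2)  # toy low-UV amber lamp
    print(relative_attraction(uv_rich) > relative_attraction(amber))  # True
    ```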

  10. Multiple sparse volumetric priors for distributed EEG source reconstruction.

    PubMed

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-10-15

    We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region-growing approach. This extension makes it possible to reconstruct brain structures beyond the cortical surface and facilitates the use of more realistic volumetric head models including more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrate the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter for each of the subjects, cortical regions were created and introduced as source priors for MSP inversion assuming two types of head models: the standard 3-layered scalp-skull-brain head model and an extended 4-layered head model including CSF. We compared these models with the current implementation by assessing the free energy corresponding to each of the reconstructions, using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation, as it allows more complex head models and volumetric source priors to be introduced in future studies. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Variable cycle control model for intersection based on multi-source information

    NASA Astrophysics Data System (ADS)

    Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan

    2018-05-01

    In order to improve the efficiency of traffic control systems in the era of big data, a new variable cycle control model based on multi-source information is presented for intersections in this paper. Firstly, with consideration of multi-source information, a unified framework based on a cyber-physical system is proposed. Secondly, taking into account the variable length of cells, the hysteresis phenomenon of traffic flow, and the characteristics of lane groups, a Lane group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model. The upper-level model is put forward for cycle length optimization considering traffic capacity and delay. The lower-level model is a dynamic signal control decision model based on fairness analysis. Then, a Hybrid Intelligent Optimization Algorithm is proposed to solve the model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.
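    For readers unfamiliar with cell transmission models, the sketch below implements one update step of the standard (Daganzo-style) CTM that the lane-group variant above builds on; all parameters are toy values, and the lane-group, hysteresis, and bi-level extensions are not shown.

    ```python
    import numpy as np

    def ctm_step(n, v, w, Q, N, inflow):
        """One update of a basic cell transmission model (vehicles per cell).

        n : occupancy of each cell          Q : max flow per step per cell
        v : free-flow speed (cells/step)    N : jam capacity per cell
        w : backward wave speed (cells/step), inflow : demand entering cell 0
        """
        k = len(n)
        y = np.zeros(k + 1)                          # y[i] = flow into cell i
        y[0] = min(inflow, Q[0], (w / v) * (N[0] - n[0]))
        for i in range(1, k):
            # sending flow from upstream vs. receiving room downstream
            y[i] = min(v * n[i - 1], Q[i], (w / v) * (N[i] - n[i]))
        y[k] = min(v * n[k - 1], Q[k - 1])           # free outflow at the boundary
        return n + y[:-1] - y[1:]

    # toy 5-cell approach lane under constant demand
    n = np.zeros(5)
    v, w = 1.0, 0.5
    Q, N = np.full(5, 4.0), np.full(5, 10.0)
    for _ in range(30):
        n = ctm_step(n, v, w, Q, N, inflow=3.0)
    print(n.round(2))   # steady-state occupancies
    ```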

  12. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.

  13. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contributions of individual model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model, and parametric. Furthermore, each layer of uncertainty sources can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in grouping uncertainty components. The variance-based sensitivity analysis is thus extended to investigate the importance of a wider range of uncertainty sources: scenario, model, and other combinations of uncertainty components that can represent key model system processes (e.g., the groundwater recharge process or the flow and reactive transport process). For test and demonstration purposes, the developed methodology was applied to a real-world groundwater reactive transport modeling case with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for uncertainty sources formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and decision-makers to formulate policies and strategies.
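    As background, the variance-based measure being generalized here is the Sobol first-order index. The sketch below estimates it with the standard pick-freeze (Saltelli-type) Monte Carlo estimator on a toy function; it illustrates the conventional single-parameter analysis, not the Bayesian-network grouping developed in this work.

    ```python
    import numpy as np

    def sobol_first_order(model, n_inputs, n_samples=100_000, seed=0):
        """First-order Sobol indices via a pick-freeze Monte Carlo estimator."""
        rng = np.random.default_rng(seed)
        A = rng.random((n_samples, n_inputs))
        B = rng.random((n_samples, n_inputs))
        yA, yB = model(A), model(B)
        var = yA.var()
        S = np.empty(n_inputs)
        for i in range(n_inputs):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                  # ABi agrees with B only in input i
            S[i] = np.mean(yB * (model(ABi) - yA)) / var
        return S

    # toy model dominated by its first input
    f = lambda X: 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2]
    print(sobol_first_order(f, 3).round(3))      # approx. [0.94, 0.06, 0.00]
    ```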

  14. Inverse Modeling of Tropospheric Methane Constrained by 13C Isotope in Methane

    NASA Astrophysics Data System (ADS)

    Mikaloff Fletcher, S. E.; Tans, P. P.; Bruhwiler, L. M.

    2001-12-01

    Understanding the budget of methane is crucial to predicting climate change and managing Earth's carbon reservoirs. Methane is responsible for approximately 15% of the anthropogenic greenhouse forcing and has a large impact on the oxidative capacity of Earth's atmosphere due to its reaction with the hydroxyl radical. At present, many of the sources and sinks of methane are poorly understood, due in part to the large spatial and temporal variability of the methane flux. Model calculations of methane mixing ratios using most process-based source estimates typically over-predict the inter-hemispheric gradient of atmospheric methane. Inverse models, which estimate trace gas budgets by using observations of atmospheric mixing ratios and transport models to estimate sources and sinks, have been used to incorporate features of the atmospheric observations into methane budgets. While inverse models of methane generally tend to find a decrease in northern hemisphere sources and an increase in southern hemisphere sources relative to process-based estimates, no inverse study has definitively associated the inter-hemispheric gradient difference with a specific source process or group of processes. In this presentation, observations of isotopic ratios of 13C in methane and isotopic signatures of methane source processes are used in conjunction with an inverse model of methane to further constrain the source estimates of methane. In order to investigate the advantages of incorporating 13C, the TM3 three-dimensional transport model was used. The methane and carbon dioxide measurements used are from a cooperative international effort, the Cooperative Air Sampling Network, led by the Climate Monitoring and Diagnostics Laboratory (CMDL) at the National Oceanic and Atmospheric Administration (NOAA). Experiments using model calculations based on process-based source estimates show that the inter-hemispheric gradient of δ13CH4 is not reproduced by these source estimates, showing that the addition of observations of δ13CH4 should provide unique insight into the methane problem.
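    To see why an isotopic ratio adds a constraint, consider a zero-dimensional sketch: each source category carries a distinct δ13C signature, so total emissions and the flux-weighted mean signature give two equations for the source magnitudes. The numbers below are illustrative placeholders, and the sketch ignores sink fractionation and the transport modeling that the actual study performs with TM3.

    ```python
    import numpy as np

    # illustrative source signatures (per mil, VPDB) -- placeholder values
    delta = np.array([-60.0, -40.0, -22.0])
    names = ["microbial", "fossil", "pyrogenic"]

    # two constraints: total emissions (Tg CH4/yr) and flux-weighted mean signature
    Q_total, delta_total = 550.0, -53.0          # hypothetical atmospheric constraints

    # solve  [1 1 1; d1 d2 d3] q = [Q, Q*delta_total]  in a least-squares sense
    Amat = np.vstack([np.ones(3), delta])
    bvec = np.array([Q_total, Q_total * delta_total])
    q, *_ = np.linalg.lstsq(Amat, bvec, rcond=None)  # minimum-norm solution
    for name, flux in zip(names, q):
        print(f"{name:10s} {flux:7.1f} Tg/yr")
    ```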

  15. Source characterization of underground explosions from hydrodynamic-to-elastic coupling simulations

    NASA Astrophysics Data System (ADS)

    Chiang, A.; Pitarka, A.; Ford, S. R.; Ezzedine, S. M.; Vorobiev, O.

    2017-12-01

    A major improvement in ground motion simulation capabilities for underground explosion monitoring during the first phase of the Source Physics Experiment (SPE) is the development of a wave propagation solver that can propagate explosion-generated non-linear near-field ground motions to the far field. The calculation is done using a hybrid modeling approach with one-way hydrodynamic-to-elastic coupling in three dimensions, where near-field motions are computed using GEODYN-L, a Lagrangian hydrodynamics code, and then passed to WPP, an elastic finite-difference code for seismic waveform modeling. This advancement in ground motion simulation capabilities gives us the opportunity to assess moment tensor inversion of a realistic volumetric source with near-field effects in a controlled setting, where we can evaluate the recovered source properties as a function of modeling parameters (i.e., the velocity model), and can provide insights into previous source studies on SPE Phase I chemical shots and other historical nuclear explosions. For example, moment tensor inversion of far-field SPE seismic data demonstrated that, while vertical motions are well modeled using existing velocity models, large misfits still persist in predicting tangential shear-wave motions from explosions. One possible explanation we can explore is errors and uncertainties in the underlying Earth model. Here we investigate the recovered moment tensor solution, particularly the non-volumetric component, by inverting far-field ground motions simulated from physics-based explosion source models in fractured material, where the physics-based source models are based on the modeling of SPE-4P, SPE-5, and SPE-6 near-field data. The hybrid modeling approach provides new prospects for modeling explosion sources and understanding the uncertainties associated with them.

  16. A Nonlinear Regression Model Estimating Single Source Concentrations of Primary and Secondarily Formed PM2.5

    EPA Science Inventory

    Various approaches and tools exist to estimate local and regional PM2.5 impacts from a single emissions source, ranging from simple screening techniques to Gaussian-based dispersion models and complex grid-based Eulerian photochemical transport models. These approaches...

  17. Finite-fault source inversion using adjoint methods in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia

    2018-04-01

    Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1D velocity models, which are routinely assumed to derive finite-fault slip models. The conventional approach to including known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively via a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.
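    The computational advantage comes from never forming the Green's function matrix: each iteration needs only one forward and one adjoint simulation. The toy sketch below mimics this matrix-free structure with a random linear operator standing in for wave propagation; dimensions, operators, and the step-size rule are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    G = rng.standard_normal((200, 50))           # stand-in for the wave-equation operator
    m_true = np.zeros(50); m_true[10:15] = 1.0   # toy "slip-rate" model
    d = G @ m_true                               # synthetic data

    # the inversion only ever calls these two routines; in the real method they are
    # one forward and one adjoint wavefield simulation per iteration, so the cost is
    # independent of the number of stations or fault cells
    forward = lambda m: G @ m
    adjoint = lambda r: G.T @ r

    m = np.zeros(50)
    for _ in range(150):
        r = forward(m) - d                             # data residual
        g = adjoint(r)                                 # gradient of 0.5*||Gm - d||^2
        alpha = (g @ g) / (forward(g) @ forward(g))    # exact line search (quadratic)
        m -= alpha * g
    print(np.abs(m - m_true).max() < 1e-6)             # True: model recovered
    ```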

  19. Developing a Conceptual Architecture for a Generalized Agent-based Modeling Environment (GAME)

    DTIC Science & Technology

    2008-03-01

    Excerpt (from the table of contents and text snippets): the report surveys agent-based modeling toolkits, including REPAST (Java, Python, C#, open source) and MASON: Multi-Agent Modeling Language (Swarm extension). Repast (Recursive Porous Agent Simulation Toolkit) was designed for building agent-based models and simulations; it makes it easy for inexperienced users to build models by including a built-in simple model and by providing interfaces through menus and Python.

  1. Analysis of Ground Motion from An Underground Chemical Explosion

    DOE PAGES

    Pitarka, Arben; Mellors, Robert J.; Walter, William R.; ...

    2015-09-08

    In this paper we investigate the excitation and propagation of far-field seismic waves from the 905 kg trinitrotoluene-equivalent underground chemical explosion SPE-3, recorded during the Source Physics Experiment (SPE) at the Nevada National Security Site. The recorded far-field ground motion at short and long distances is characterized by substantial shear-wave energy and large azimuthal variations in P- and S-wave amplitudes. The shear waves observed on the transverse component of sensors at epicentral distances <50 m suggest they were generated at or very near the source. The relative amplitude of the shear waves grows as the waves propagate away from the source. We analyze and model the shear-wave excitation during the explosion in the 0.01–10 Hz frequency range, at epicentral distances of up to 1 km. We used two simulation techniques. One is based on the empirical isotropic Mueller–Murphy (MM) (Mueller and Murphy, 1971) nuclear explosion source model and 3D anelastic wave propagation modeling. The second uses a physics-based approach that couples hydrodynamic modeling of the chemical explosion source with anelastic wave propagation modeling. Comparisons with recorded data show the MM source model overestimates the SPE-3 far-field ground motion by an average factor of 4. The observations show that shear waves with substantial high-frequency energy were generated at the source. However, matching the observations requires additional shear waves from scattering, including surface topography and heterogeneous shallow structure, which contributed to the amplification of far-field shear motion. Comparisons between empirically based isotropic and physics-based anisotropic source models suggest that both wave-scattering effects and near-field nonlinear effects are needed to explain the amplitude and irregular radiation pattern of shear motion observed during the SPE-3 explosion.

  2. MOSES: A Matlab-based open-source stochastic epidemic simulator.

    PubMed

    Varol, Huseyin Atakan

    2016-08-01

    This paper presents an open-source stochastic epidemic simulator. The simulator, based on a Discrete Time Markov Chain, is implemented in Matlab. It is capable of simulating the SEQIJR (susceptible, exposed, quarantined, infected, isolated, and recovered) model, which can be reduced to simpler models by setting some of the parameters (transition probabilities) to zero, or extended to more complicated models by editing the source code. It is designed for testing different control algorithms to contain epidemics. The simulator is also designed to be compatible with a network-based epidemic simulator and can be used in the network-based scheme for the simulation of a node. Simulations show that it reproduces the behaviors of different epidemic models successfully in a computationally efficient manner.
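    A minimal discrete-time Markov chain epidemic step looks like the Python sketch below, here for a plain SEIR reduction rather than the full SEQIJR model (MOSES itself is Matlab); compartment sizes and transition probabilities are toy values.

    ```python
    import numpy as np

    def dtmc_seir(S, E, I, R, beta, p_inc, p_rec, steps, seed=0):
        """Discrete-time stochastic SEIR: one binomial draw per transition per step."""
        rng = np.random.default_rng(seed)
        N = S + E + I + R
        history = [(S, E, I, R)]
        for _ in range(steps):
            p_inf = 1.0 - np.exp(-beta * I / N)      # per-susceptible infection prob.
            new_E = rng.binomial(S, p_inf)           # S -> E
            new_I = rng.binomial(E, p_inc)           # E -> I
            new_R = rng.binomial(I, p_rec)           # I -> R
            S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
            history.append((S, E, I, R))
        return np.array(history)

    traj = dtmc_seir(S=990, E=0, I=10, R=0, beta=0.3, p_inc=0.2, p_rec=0.1, steps=200)
    print(traj[-1])   # final compartment counts for one realization
    ```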

  3. A GIS-based time-dependent seismic source modeling of Northern Iran

    NASA Astrophysics Data System (ADS)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear (fault) sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 kilometers around Tehran. Previous research and reports were studied to compile an earthquake/fault catalog that is as complete as possible. All events were transformed to a uniform magnitude scale; duplicate events and dependent shocks were removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
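    For the time-independent area sources, the Poisson recurrence machinery reduces to a few lines; the sketch below combines a Gutenberg-Richter magnitude-frequency law with the Poisson occurrence probability, using invented a and b values rather than anything fitted to the Iranian catalog.

    ```python
    import math

    # hypothetical Gutenberg-Richter parameters for one area source:
    # log10 N(>=M) = a - b*M, with N in events per year
    a, b = 4.2, 1.0

    def annual_rate(m):
        """Mean annual rate of events with magnitude >= m."""
        return 10.0 ** (a - b * m)

    def poisson_prob(m, t_years):
        """Probability of at least one M >= m event in t_years (Poisson model)."""
        return 1.0 - math.exp(-annual_rate(m) * t_years)

    print(f"lambda(M>=6) = {annual_rate(6.0):.4f} per year")
    print(f"P(M>=6 in 50 yr) = {poisson_prob(6.0, 50.0):.2f}")
    ```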

  4. Martian methane plume models for defining Mars rover methane source search strategies

    NASA Astrophysics Data System (ADS)

    Nicol, Christopher; Ellery, Alex; Lynch, Brian; Cloutis, Ed

    2018-07-01

    The detection of atmospheric methane on Mars implies an active methane source. This introduces the possibility of a biotic source with the implied need to determine whether the methane is indeed biotic in nature or geologically generated. There is a clear need for robotic algorithms which are capable of manoeuvring a rover through a methane plume on Mars to locate its source. We explore aspects of Mars methane plume modelling to reveal complex dynamics characterized by advection and diffusion. A statistical analysis of the plume model has been performed and compared to analyses of terrestrial plume models. Finally, we consider a robotic search strategy to find a methane plume source. We find that gradient-based techniques are ineffective, but that more sophisticated model-based search strategies are unlikely to be available in near-term rover missions.

  6. Alternative modeling methods for plasma-based Rf ion sources.

    PubMed

    Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which make it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.

  7. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    PubMed

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, which has the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.

  8. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.

  9. Photochemical grid model implementation and application of ...

    EPA Pesticide Factsheets

    For the purposes of developing optimal emissions control strategies, efficient approaches are needed to identify the major sources or groups of sources that contribute to elevated ozone (O3) concentrations. Source-based apportionment techniques implemented in photochemical grid models track sources through the physical and chemical processes important to the formation and transport of air pollutants. Photochemical model source apportionment has been used to track source impacts of specific sources, groups of sources (sectors), sources in specific geographic areas, and stratospheric and lateral boundary inflow on O3. The implementation and application of a source apportionment technique for O3 and its precursors, nitrogen oxides (NOx) and volatile organic compounds (VOCs), for the Community Multiscale Air Quality (CMAQ) model are described here. The Integrated Source Apportionment Method (ISAM) O3 approach is a hybrid of source apportionment and source sensitivity in that O3 production is attributed to precursor sources based on O3 formation regime (e.g., for a NOx-sensitive regime, O3 is apportioned to participating NOx emissions). This implementation is illustrated by tracking multiple emissions source sectors and lateral boundary inflow. NOx, VOC, and O3 attribution to tracked sectors in the application are consistent with spatial and temporal patterns of precursor emissions. The O3 ISAM implementation is further evaluated through comparisons of apportioned am

  10. A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.

    PubMed

    Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco

    2018-01-01

    Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA was coded in the Python language and is largely based on a simplified formulation of the very popular and well-recognized AERMOD model. The model allows users to define, in a GIS environment, thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near-ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be completely managed in a GIS environment (e.g., inside a GIS project). The paper presents the CAREA formulation and its application to very complex test cases. The tests show that the processing times are satisfactory and that the definition of sources and receptors and the output retrieval are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD. Copyright © 2017 Elsevier B.V. All rights reserved.
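    The building block behind such models is the point-source Gaussian plume equation. The sketch below evaluates the classic ground-reflected form; the Briggs-style dispersion coefficients are illustrative for a single stability class and are not CAREA's or AERMOD's formulation.

    ```python
    import numpy as np

    def gaussian_plume(Q, u, x, y, z, H):
        """Ground-reflected Gaussian plume concentration (g/m^3).

        Q : emission rate (g/s)   u : wind speed (m/s)
        x : downwind, y : crosswind, z : receptor height, H : stack height (m)
        The sigma-y/sigma-z power laws below are illustrative, not AERMOD's.
        """
        sy = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
        sz = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
        lateral = np.exp(-y**2 / (2 * sy**2))
        vertical = np.exp(-(z - H)**2 / (2 * sz**2)) + np.exp(-(z + H)**2 / (2 * sz**2))
        return Q / (2 * np.pi * u * sy * sz) * lateral * vertical

    # concentration 1 km downwind, on the plume centerline, at the ground
    print(gaussian_plume(Q=100.0, u=4.0, x=1000.0, y=0.0, z=0.0, H=50.0))
    ```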

  11. A GIS-based multi-source and multi-box modeling approach (GMSMB) for air pollution assessment--a North American case study.

    PubMed

    Wang, Bao-Zhen; Chen, Zhi

    2013-01-01

    This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data, including emission sources, air quality monitoring, meteorological data, and spatial location information required for air quality modeling, is brought into an integrated modeling environment. This allows more details of the spatial variation in source distribution and meteorological conditions to be quantitatively analyzed. The developed modeling approach has been used to predict the spatial concentration distributions of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with the monitoring data. Good agreement is obtained, which demonstrates that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.

  12. Ionospheric current source modeling and global geomagnetic induction using ground geomagnetic observatory data

    USGS Publications Warehouse

    Sun, Jin; Kelbert, Anna; Egbert, G.D.

    2015-01-01

    Long-period global-scale electromagnetic induction studies of deep Earth conductivity are based almost exclusively on magnetovariational methods and require accurate models of external source spatial structure. We describe approaches to inverting for both the external sources and three-dimensional (3-D) conductivity variations and apply these methods to long-period (T≥1.2 days) geomagnetic observatory data. Our scheme involves three steps: (1) Observatory data from 60 years (only partly overlapping and with many large gaps) are reduced and merged into dominant spatial modes using a scheme based on frequency domain principal components. (2) Resulting modes are inverted for corresponding external source spatial structure, using a simplified conductivity model with radial variations overlain by a two-dimensional thin sheet. The source inversion is regularized using a physically based source covariance, generated through superposition of correlated tilted zonal (quasi-dipole) current loops, representing ionospheric source complexity smoothed by Earth rotation. Free parameters in the source covariance model are tuned by a leave-one-out cross-validation scheme. (3) The estimated data modes are inverted for 3-D Earth conductivity, assuming the source excitation estimated in step 2. Together, these developments constitute key components in a practical scheme for simultaneous inversion of the catalogue of historical and modern observatory data for external source spatial structure and 3-D Earth conductivity.

  13. Equivalent circuit of radio frequency-plasma with the transformer model

    NASA Astrophysics Data System (ADS)

    Nishida, K.; Mochizuki, S.; Ohta, M.; Yasumoto, M.; Lettry, J.; Mattei, S.; Hatayama, A.

    2014-02-01

    The LINAC4 H- source is a radio frequency (RF) driven source. In the RF system, the load impedance, which includes the H- source, must be matched to that of the final amplifier. We model the RF plasma inside the H- source as circuit elements using the transformer model so that the characteristics of the load impedance become calculable. It has been shown that modeling based on the transformer model works well for predicting the resistance and inductance of the plasma.
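    In the transformer model, the plasma acts as a lossy single-turn secondary coupled to the antenna, and the load impedance follows by reflecting the secondary into the primary. The sketch below evaluates that expression with invented component values; it illustrates the generic transformer-model calculation, not the LINAC4 parameters.

    ```python
    import numpy as np

    # hypothetical circuit values for an RF-driven source at 2 MHz
    f = 2.0e6                      # drive frequency (Hz)
    w = 2 * np.pi * f
    R_ant, L_ant = 0.5, 8.0e-6     # antenna resistance (ohm) and inductance (H)
    R_p, L_p = 2.0, 1.0e-6         # plasma resistance and inductance (one-turn secondary)
    M = 1.5e-6                     # mutual inductance, antenna <-> plasma (H)

    # secondary reflected into the primary:
    # Z_in = R_ant + j*w*L_ant + (w*M)^2 / (R_p + j*w*L_p)
    Z_in = R_ant + 1j * w * L_ant + (w * M) ** 2 / (R_p + 1j * w * L_p)
    print(f"load resistance {Z_in.real:.2f} ohm, reactance {Z_in.imag:.2f} ohm")
    # a matching network then transforms Z_in to the amplifier's nominal impedance
    ```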

  14. Toward better public health reporting using existing off the shelf approaches: The value of medical dictionaries in automated cancer detection using plaintext medical data.

    PubMed

    Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J

    2017-05-01

    Existing approaches to derive decision models from plaintext clinical data frequently depend on medical dictionaries as the sources of potential features. Prior research suggests that decision models developed using non-dictionary based feature sourcing approaches and "off the shelf" tools could predict cancer with performance metrics between 80% and 90%. We sought to compare non-dictionary based models to models built using features derived from medical dictionaries. We evaluated the detection of cancer cases from free text pathology reports using decision models built with combinations of dictionary or non-dictionary based feature sourcing approaches, 4 feature subset sizes, and 5 classification algorithms. Each decision model was evaluated using the following performance metrics: sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristics (ROC) curve. Decision models parameterized using dictionary and non-dictionary feature sourcing approaches produced performance metrics between 70 and 90%. The source of features and feature subset size had no impact on the performance of a decision model. Our study suggests there is little value in leveraging medical dictionaries for extracting features for decision model building. Decision models built using features extracted from the plaintext reports themselves achieve comparable results to those built using medical dictionaries. Overall, this suggests that existing "off the shelf" approaches can be leveraged to perform accurate cancer detection using less complex Named Entity Recognition (NER) based feature extraction, automated feature selection and modeling approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
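    The "off the shelf", non-dictionary route described above can be illustrated with a standard text-classification pipeline: n-gram features extracted from the report text itself, automated feature selection, and a stock classifier. The sketch below uses scikit-learn with invented toy reports; it is not the study's code or dataset.

    ```python
    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.linear_model import LogisticRegression

    # toy stand-ins for free-text pathology reports (label 1 = cancer present)
    reports = [
        "infiltrating ductal carcinoma identified in left breast specimen",
        "benign fibroadenoma, no evidence of malignancy",
        "adenocarcinoma of the colon, margins involved",
        "chronic inflammation, negative for tumor cells",
    ]
    labels = [1, 0, 1, 0]

    pipeline = Pipeline([
        ("features", TfidfVectorizer(ngram_range=(1, 2))),  # non-dictionary features
        ("select", SelectKBest(chi2, k=20)),                # automated feature selection
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    pipeline.fit(reports, labels)
    print(pipeline.predict(["biopsy shows squamous cell carcinoma"]))
    ```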

  15. Rethinking historical and cultural source of spontaneous mental models of water cycle: in the perspective of South Korea

    NASA Astrophysics Data System (ADS)

    Nam, Younkyeong

    2012-06-01

    This review explores Ben-Zvi Assaraf, Eshach, Orion, and Alamour's paper, "Cultural Differences and Students' Spontaneous Models of the Water Cycle: A Case Study of Jewish and Bedouin Children in Israel," by examining how the authors use the concept of spontaneous mental models to explain the cultural knowledge sources of Bedouin children's mental model of water compared to Jewish children's. My response to Ben-Zvi Assaraf et al.'s work expands upon their explanation of the Bedouin children's cultural knowledge sources: their mental model is based on their culture, religion, place of living, and everyday life practices related to water. I suggest a different knowledge source for spontaneous mental models of water in nature, based on the unique history and traditions of South Korea, where people think of water in nature in different ways. This forum also addresses how western science dominates the South Korean science curriculum and the ways students' conceptual understanding of scientific concepts is assessed. Additionally, I argue that western science curriculum models could diminish Korean students' understanding of the natural world, which is grounded in Korean cultural ways of thinking about it. Finally, I suggest two different ways of considering this unique knowledge source for more culturally relevant teaching of Earth system education.

  16. Co-Attention Based Neural Network for Source-Dependent Essay Scoring

    ERIC Educational Resources Information Center

    Zhang, Haoran; Litman, Diane

    2018-01-01

    This paper presents an investigation of using a co-attention based neural network for source-dependent essay scoring. We use a co-attention mechanism to help the model learn the importance of each part of the essay more accurately. Also, this paper shows that the co-attention based neural network model provides reliable score prediction of…
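
    As a rough illustration of the mechanism named in the title, the following is a minimal numpy sketch of one co-attention step between essay and source-article embeddings; the dimensions, pooling choices, and scoring head are assumptions, not the authors' architecture:

```python
# Minimal numpy sketch of a co-attention layer between an essay and its
# source article, illustrating the general mechanism only. E and S would
# come from an embedding/RNN layer in a real model.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
E = rng.normal(size=(12, 8))   # essay: 12 positions, 8-dim embeddings
S = rng.normal(size=(20, 8))   # source article: 20 positions

A = E @ S.T                            # affinity between every essay/source pair
essay_to_source = softmax(A, axis=1)   # how much each essay part attends to the source
context = essay_to_source @ S          # source-aware representation of the essay

importance = softmax(A.max(axis=1))    # scalar importance of each essay part
essay_repr = (importance[:, None] * np.concatenate([E, context], axis=1)).sum(axis=0)
# essay_repr would feed a dense layer that regresses the essay score
print(essay_repr.shape)   # (16,)
```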

  17. Exploring the Characteristics and Diverse Sources of Students' Mental Models of Acids and Bases

    ERIC Educational Resources Information Center

    Lin, Jing-Wen; Chiu, Mei-Hung

    2007-01-01

    This study was part of a 6-year integrated project designed to build a databank of students' science conceptions in Taiwan. The main purpose of this study was to identify the characteristics of students' mental models regarding acids/bases, understand their changes in mental models, and explore sources that might influence students in constructing…

  18. A 1D ion species model for an RF driven negative ion source

    NASA Astrophysics Data System (ADS)

    Turner, I.; Holmes, A. J. T.

    2017-08-01

    A one-dimensional model for an RF-driven negative ion source has been developed based on an inductive discharge. The RF source differs from traditional filament and arc ion sources because there are no primary electrons present, and it is simply composed of an antenna region (driver) and a main plasma discharge region. However, the model still makes use of the classical plasma transport equations for particle energy and flow, which have previously worked well for modelling DC-driven sources. The model has been developed primarily to model the Small Negative Ion Facility (SNIF) ion source at CCFE, but may be easily adapted to model other RF sources. Currently the model considers the hydrogen ion species, and provides a detailed description of the plasma parameters along the source axis, i.e. plasma temperature, density and potential, as well as current densities and species fluxes. The inputs to the model are currently the RF power, the magnetic filter field and the source gas pressure. Results from the model are presented and, where possible, compared to existing experimental data from SNIF with varying RF power and source pressure.

  19. Receptor modeling for source apportionment of polycyclic aromatic hydrocarbons in urban atmosphere.

    PubMed

    Singh, Kunwar P; Malik, Amrita; Kumar, Ranjan; Saxena, Puneet; Sinha, Sarita

    2008-01-01

    This study reports source apportionment of polycyclic aromatic hydrocarbons (PAHs) in particulate depositions on vegetation foliage near a highway in the urban environment of Lucknow city (India) using the principal components analysis/absolute principal components scores (PCA/APCS) receptor modeling approach. The multivariate method enables identification of major PAH sources along with their quantitative contributions to individual PAHs. The PCA identified three major sources of PAHs, viz. combustion, vehicular emissions, and diesel-based activities. The PCA/APCS receptor modeling approach revealed that combustion sources (natural gas, wood, coal/coke, biomass) contributed 19-97% of various PAHs, vehicular emissions 0-70%, diesel-based sources 0-81%, and other miscellaneous sources 0-20%. The contributions of major pyrolytic and petrogenic sources to the total PAHs were 56% and 42%, respectively. Further, the combustion-related sources contributed the major fraction of the carcinogenic PAHs in the study area. High correlation coefficients (R2 > 0.75 for most PAHs) between the measured and predicted PAH concentrations suggest the applicability of the PCA/APCS receptor modeling approach for estimating source contributions to PAHs in particulates.
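
    The PCA/APCS steps the abstract relies on can be sketched compactly. A minimal sketch in the Thurston-and-Spengler style, with synthetic concentrations standing in for the foliage data:

```python
# Hedged sketch of the PCA/APCS receptor-modeling steps on a concentration
# matrix X (samples x species); synthetic data replaces the measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = np.abs(rng.normal(5, 2, size=(40, 10)))      # concentrations of 10 PAHs

mu, sd = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sd                                # standardize species

k = 3                                            # number of retained sources
pca = PCA(n_components=k)
scores = pca.fit_transform(Z)                    # principal component scores

z0 = (np.zeros_like(mu) - mu) / sd               # artificial "true zero" sample
apcs = scores - pca.transform(z0[None, :])       # absolute PC scores

# Regress total PAH (or each species) on the APCS: slopes give source
# contributions per unit score, the intercept is unexplained mass.
total = X.sum(axis=1)
reg = LinearRegression().fit(apcs, total)
contrib = apcs * reg.coef_                       # per-sample source contributions
print(reg.intercept_, contrib.mean(axis=0))
```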

  20. Constrained positive matrix factorization: Elemental ratios, spatial distinction, and chemical transport model source contributions

    NASA Astrophysics Data System (ADS)

    Sturtz, Timothy M.

    Source apportionment models attempt to untangle the relationship between pollution sources and the impacts at downwind receptors. Two frameworks of source apportionment models exist: source-oriented and receptor-oriented. Source-based apportionment models use presumed emissions and atmospheric processes to estimate the downwind source contributions. Conversely, receptor-based models leverage speciated concentration data from downwind receptors and apply statistical methods to predict source contributions. Integration of both source-oriented and receptor-oriented models could lead to a better understanding of the implications sources have for the environment and society. The research presented here investigated three different types of constraints applied to the Positive Matrix Factorization (PMF) receptor model within the framework of the Multilinear Engine (ME-2): element ratio constraints, spatial separation constraints, and chemical transport model (CTM) source attribution constraints. PM10-2.5 mass and trace element concentrations were measured in Winston-Salem, Chicago, and St. Paul at up to 60 sites per city during two different seasons in 2010. PMF was used to explore the underlying sources of variability. Information on previously reported PM10-2.5 tire and brake wear profiles was used to constrain these features in PMF by prior specification of selected species ratios. We also modified PMF to allow for combining the measurements from all three cities into a single model while preserving city-specific soil features. Relatively minor differences were observed between model predictions with and without the prior ratio constraints, increasing confidence in our ability to identify separate brake wear and tire wear features. Using separate data, source contributions to total fine particle carbon predicted by a CTM were incorporated into the PMF receptor model to form a receptor-oriented hybrid model. The level of influence of the CTM versus traditional PMF was varied using a weighting parameter applied to an objective function as implemented in ME-2. The resulting hybrid model was used to quantify the contributions of total carbon from both wildfires and biogenic sources at two Interagency Monitoring of Protected Visual Environments (IMPROVE) monitoring sites, Monture and Sula Peak, Montana, from 2006 through 2008.
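
    For readers unfamiliar with the factorization underlying PMF, a minimal unconstrained sketch follows. It uses scikit-learn's NMF on synthetic data, whereas PMF/ME-2 additionally weight residuals by measurement uncertainty and support the ratio and CTM constraints described above:

```python
# Illustrative skeleton of the receptor-model factorization behind PMF:
# X (samples x species) ~ G (samples x factors) @ F (factors x species),
# all nonnegative. sklearn's NMF minimizes an unweighted Frobenius norm,
# so this is only the unconstrained core of the method.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
G_true = rng.uniform(0, 1, size=(60, 3))        # source contributions
F_true = rng.uniform(0, 1, size=(3, 12))        # source profiles (12 elements)
X = G_true @ F_true + rng.uniform(0, 0.01, size=(60, 12))   # species data

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)                      # estimated contributions
F = model.components_                           # estimated profiles

# A ratio constraint (e.g., fixing an element ratio in a brake-wear factor)
# enters ME-2 as an extra penalty term; here we can only inspect the ratio.
print(F[:, 0] / F[:, 1])                        # element ratio in each factor
```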

  1. The Mismatch between Students' Mental Models of Acids/Bases and Their Sources and Their Teacher's Anticipations Thereof

    ERIC Educational Resources Information Center

    Lin, Jing-Wen; Chiu, Mei-Hung

    2010-01-01

    The aim of this study is to compare the characteristics and sources of students' mental models of acids and bases with a teacher's anticipations and, based on this comparison, to explore some possible explanations why motivated students might fail to learn from a subject-knowledgeable chemistry teacher. The study involves a chemistry teacher and…

  2. A process-based emission model for volatile organic compounds from silage sources on farms

    USDA-ARS?s Scientific Manuscript database

    Silage on dairy farms can emit large amounts of volatile organic compounds (VOCs), a precursor in the formation of tropospheric ozone. Because of the challenges associated with direct measurements, process-based modeling is another approach for estimating emissions of air pollutants from sources suc...

  3. A refined 2010-based VOC emission inventory and its improvement on modeling regional ozone in the Pearl River Delta Region, China.

    PubMed

    Yin, Shasha; Zheng, Junyu; Lu, Qing; Yuan, Zibing; Huang, Zhijiong; Zhong, Liuju; Lin, Hui

    2015-05-01

    Accurate and gridded VOC emission inventories are important for improving regional air quality model performance. In this study, a four-level VOC emission source categorization system was proposed. A 2010-based gridded Pearl River Delta (PRD) regional VOC emission inventory was developed with more comprehensive source coverage, the latest emission factors, and updated activity data. The total anthropogenic VOC emission was estimated to be about 117.4 × 10⁴ t, of which on-road mobile sources contributed the largest share, followed by industrial solvent use and industrial process sources. Among industrial solvent use sources, furniture manufacturing and shoemaking were major VOC emission contributors. The spatial surrogates of VOC emissions were updated for major VOC sources such as industrial sectors and gas stations. Subsector-based temporal characteristics were investigated and their temporal variations were characterized. The impacts of the updated VOC emission estimates and spatial surrogates were evaluated by modeling O₃ concentrations in the PRD region in July and October 2010. The results indicated that both updated emission estimates and spatial allocations can effectively reduce model bias in O₃ simulation. Further efforts should be made on the refinement of source classification, comprehensive collection of activity data, and spatial-temporal surrogates in order to reduce uncertainty in the emission inventory and improve model performance. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. New paradigms for Salmonella source attribution based on microbial subtyping.

    PubMed

    Mughini-Gras, Lapo; Franz, Eelco; van Pelt, Wilfrid

    2018-05-01

    Microbial subtyping is the most common approach for Salmonella source attribution. Typically, attributions are computed using frequency-matching models like the Dutch and Danish models based on phenotyping data (serotyping, phage-typing, and antimicrobial resistance profiling). Herewith, we critically review three major paradigms facing Salmonella source attribution today: (i) the use of genotyping data, particularly Multi-Locus Variable Number of Tandem Repeats Analysis (MLVA), which is replacing traditional Salmonella phenotyping beyond serotyping; (ii) the integration of case-control data into source attribution to improve risk factor identification/characterization; (iii) the investigation of non-food sources, as attributions tend to focus on foods of animal origin only. Population genetics models or simplified MLVA schemes may provide feasible options for source attribution, although there is a strong need to explore novel modelling options as we move towards whole-genome sequencing as the standard. Classical case-control studies are enhanced by incorporating source attribution results, as individuals acquiring salmonellosis from different sources have different associated risk factors. Thus, the more such analyses are performed the better Salmonella epidemiology will be understood. Reparametrizing current models allows for inclusion of sources like reptiles, the study of which improves our understanding of Salmonella epidemiology beyond food to tackle the pathogen in a more holistic way. Copyright © 2017 Elsevier Ltd. All rights reserved.
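
    The frequency-matching idea behind models like the Dutch model can be illustrated in a few lines; the subtype frequencies and case counts below are invented for the sketch, and real implementations add prevalence weighting and uncertainty handling:

```python
# Minimal sketch of frequency-matching source attribution: human cases of
# each Salmonella subtype are split across sources in proportion to the
# subtype's relative occurrence in source surveillance. Numbers invented.
import numpy as np

sources = ["layers", "broilers", "pigs", "cattle"]
# rows: subtypes (e.g., MLVA types); columns: relative frequency per source
freq = np.array([
    [0.60, 0.20, 0.10, 0.10],   # subtype A
    [0.05, 0.70, 0.20, 0.05],   # subtype B
    [0.10, 0.10, 0.40, 0.40],   # subtype C
])
human_cases = np.array([120, 45, 30])   # observed cases per subtype

share = freq / freq.sum(axis=1, keepdims=True)        # attribution probabilities
attributed = (human_cases[:, None] * share).sum(axis=0)
for s, n in zip(sources, attributed):
    print(f"{s}: {n:.1f} cases")
```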

  5. Integrated modeling approach using SELECT and SWAT models to simulate source loading and in-stream conditions of fecal indicator bacteria.

    NASA Astrophysics Data System (ADS)

    Ranatunga, T.

    2016-12-01

    Modeling of fate and transport of fecal bacteria in a watershed is generally a process-based approach that considers releases from manure, point sources, and septic systems. Overland transport with water and sediments, infiltration into soils, transport in the vadose zone and groundwater, die-off and growth processes, and in-stream transport are considered the other major processes in bacteria simulation. This presentation will discuss a simulation of fecal indicator bacteria (E. coli) source loading and in-stream conditions in a non-tidal watershed (Cedar Bayou Watershed) in South Central Texas using two models: the Spatially Explicit Load Enrichment Calculation Tool (SELECT) and the Soil and Water Assessment Tool (SWAT). Furthermore, it will discuss a probable approach to bacteria source load reduction in order to meet the water quality standards in the streams. The selected watershed is listed by the Texas Commission on Environmental Quality (TCEQ) as having levels of fecal indicator bacteria that pose a risk for contact recreation and wading. The SELECT modeling approach was used to estimate the bacteria source loading from land categories. Major bacteria sources considered were failing septic systems, discharges from wastewater treatment facilities, excreta from livestock (cattle, horses, sheep, and goats), excreta from wildlife (feral hogs and deer), pet waste (mainly from dogs), and runoff from urban surfaces. The estimated source loads were input to the SWAT model in order to simulate transport through the land and in-stream conditions. The calibrated SWAT model was then used to estimate indicator bacteria in-stream concentrations for future years based on H-GAC's regional land use, population, and household projections (up to 2040). Based on the in-stream reductions required to meet the water quality standards, the corresponding required source load reductions were estimated.
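
    The SELECT step is essentially bookkeeping: multiply each source's inventory by a per-unit loading rate. A minimal sketch with placeholder (not Cedar Bayou) numbers:

```python
# Hedged sketch of a SELECT-style potential daily E. coli load for one
# subwatershed. Counts and per-unit loading rates are illustrative
# placeholders, not the values used in the study.
EC_PER_DAY = {                  # cfu/day per animal or per failing system (assumed)
    "cattle": 1.0e10,
    "feral_hog": 4.5e9,
    "dog": 5.0e9,
    "failing_septic": 1.0e9,
}
inventory = {"cattle": 350, "feral_hog": 40, "dog": 600, "failing_septic": 25}

total = sum(EC_PER_DAY[src] * n for src, n in inventory.items())
by_source = {src: EC_PER_DAY[src] * n / total for src, n in inventory.items()}
print(f"potential load: {total:.2e} cfu/day")
for src, frac in sorted(by_source.items(), key=lambda kv: -kv[1]):
    print(f"  {src}: {frac:.1%}")
```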

  6. Application of SELECT and SWAT models to simulate source load, fate, and transport of fecal bacteria in watersheds.

    NASA Astrophysics Data System (ADS)

    Ranatunga, T.

    2017-12-01

    Modeling of fate and transport of fecal bacteria in a watershed is a process-based approach that considers releases from manure, point sources, and septic systems. Overland transport with water and sediments, infiltration into soils, transport in the vadose zone and groundwater, die-off and growth processes, and in-stream transport are considered the other major processes in bacteria simulation. This presentation will discuss a simulation of fecal indicator bacteria source loading and in-stream conditions in a non-tidal watershed (Cedar Bayou Watershed) in South Central Texas using two models: the Spatially Explicit Load Enrichment Calculation Tool (SELECT) and the Soil and Water Assessment Tool (SWAT). Furthermore, it will discuss a probable approach to bacteria source load reduction in order to meet the water quality standards in the streams. The selected watershed is listed by the Texas Commission on Environmental Quality (TCEQ) as having levels of fecal indicator bacteria that pose a risk for contact recreation and wading. The SELECT modeling approach was used to estimate the bacteria source loading from land categories. Major bacteria sources considered were failing septic systems, discharges from wastewater treatment facilities, excreta from livestock (cattle, horses, sheep, and goats), excreta from wildlife (feral hogs and deer), pet waste (mainly from dogs), and runoff from urban surfaces. The estimated source loads from the SELECT model were input to the SWAT model to simulate bacteria transport through the land and in-stream. The calibrated SWAT model was then used to estimate indicator bacteria in-stream concentrations for future years based on regional land use, population, and household forecasts (up to 2040). Based on the in-stream reductions required to meet the water quality standards, the corresponding required source load reductions were estimated.

  7. Evaluation for relationship among source parameters of underground nuclear tests in Northern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Kim, G.; Che, I. Y.

    2017-12-01

    We evaluated relationships among source parameters of underground nuclear tests in the northern Korean Peninsula using regional seismic data. Dense global and regional seismic networks were incorporated to measure locations and origin times precisely. Location analyses show that the distances among the locations are tiny on a regional scale. These tiny location differences validate a linear model assumption. We estimated source spectral ratios by excluding path effects, based on spectral ratios of the observed seismograms. We then estimated empirical relationships among depths of burial and yields based on theoretical source models.

  8. Modeling spatiotemporal covariance for magnetoencephalography or electroencephalography source analysis.

    PubMed

    Plis, Sergey M; George, J S; Jun, S C; Paré-Blagoev, J; Ranken, D M; Wood, C C; Schmidt, D M

    2007-01-01

    We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, which better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances. Variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated based on the assumption that spatial components of background noise have uncorrelated time courses. Another version, which gives a closer approximation, is based on the assumption that the time courses are statistically independent. The accuracy of the structural approximation is compared to that of an existing model based on a single Kronecker product, using both the Frobenius norm of the difference between the spatiotemporal sample covariance and a model, and scatter plots. The performance of our model and of previous models is compared in source analysis of a large number of single-dipole problems with simulated time courses and with background from authentic magnetoencephalography data.
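
    The computational appeal of such a structure can be demonstrated numerically. Assuming orthonormal spatial components that span the space, a sum-of-Kronecker covariance inverts component-wise; a toy-sized check of that structural idea (not the paper's estimation procedure):

```python
# If the spatial components u_k form an orthonormal basis, then
# C = sum_k (u_k u_k^T) kron T_k has the closed-form inverse
# sum_k (u_k u_k^T) kron inv(T_k), so inversion stays cheap.
import numpy as np

rng = np.random.default_rng(3)
ns, nt = 4, 5                                   # spatial, temporal dimensions
U, _ = np.linalg.qr(rng.normal(size=(ns, ns)))  # orthonormal spatial components

def spd(n):                                     # random SPD temporal covariance
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

T = [spd(nt) for _ in range(ns)]
C = sum(np.kron(np.outer(U[:, k], U[:, k]), T[k]) for k in range(ns))
C_inv = sum(np.kron(np.outer(U[:, k], U[:, k]), np.linalg.inv(T[k]))
            for k in range(ns))

print(np.allclose(C @ C_inv, np.eye(ns * nt)))  # True: structured inverse works
```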

  9. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    PubMed Central

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2018-01-01

    The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology. PMID:28079526
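
    The spectrum-derivation step can be sketched as a linear unfolding problem. The sketch below substitutes a crude exponential depth-dose basis and nonnegative least squares for the paper's full model and Levenberg-Marquardt fit, so it shows only the shape of the calculation:

```python
# Hedged sketch: model the measured central-axis PDD as a nonnegative mix
# of monoenergetic depth-dose curves and solve for the spectrum weights.
# The exponential basis and energy bins are toy assumptions.
import numpy as np
from scipy.optimize import nnls

depth = np.linspace(0.5, 20, 40)                      # cm in water
energies = np.array([40, 60, 80, 100, 120])           # keV bins (assumed)
mu = 0.25 * (energies / 60.0) ** -0.5                 # toy attenuation coefficients

basis = np.exp(-np.outer(depth, mu))                  # columns: mono-E depth doses
w_true = np.array([0.1, 0.3, 0.35, 0.2, 0.05])
measured = basis @ w_true + 0.001 * np.random.default_rng(4).normal(size=depth.size)

w_fit, resid = nnls(basis, measured)                  # nonnegative spectrum weights
print(np.round(w_fit / w_fit.sum(), 3))               # estimated relative spectrum
```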

  10. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    NASA Astrophysics Data System (ADS)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.

  11. Sources and Trends of Nitrogen Loading to New England Estuaries

    EPA Science Inventory

    A database of nitrogen (N) loading components to estuaries of the conterminous United States has been developed through application of regional SPARROW models. The original SPARROW models predict average detrended loads by source based on average flow conditions and 2002 source t...

  12. The Emergence of Open-Source Software in North America

    ERIC Educational Resources Information Center

    Pan, Guohua; Bonk, Curtis J.

    2007-01-01

    Unlike conventional models of software development, the open source model is based on the collaborative efforts of users who are also co-developers of the software. Interest in open source software has grown exponentially in recent years. A "Google" search for the phrase open source in early 2005 returned 28.8 million webpage hits, while…

  13. Comparison of the landslide susceptibility models in Taipei Water Source Domain, Taiwan

    NASA Astrophysics Data System (ADS)

    WU, C. Y.; Yeh, Y. C.; Chou, T. H.

    2017-12-01

    Taipei Water Source Domain, located southeast of the Taipei metropolis, is the main source of water in this region. Recently, downstream turbidity has often soared significantly during typhoon periods because of upstream landslides. Landslide susceptibility should be analysed to assess the zones influenced by different rainfall events and to ensure the ability of this domain to supply sufficient, high-quality water. Generally, landslide susceptibility models can be established based on either a long-term landslide inventory or a specified landslide event. Sometimes there is no long-term landslide inventory for an area, so event-based landslide susceptibility models are widely established. However, inventory-based and event-based landslide susceptibility models may produce dissimilar susceptibility maps for the same area. The purposes of this study were therefore to compare the landslide susceptibility maps derived from inventory-based and event-based models, and to determine how to select a representative event for inclusion in the susceptibility model. The landslide inventory from Typhoon Tim in July 1994 and Typhoon Soudelor in August 2015 was collected and used to establish the inventory-based landslide susceptibility model. The landslides caused by Typhoon Nari, together with rainfall data, were used to establish the event-based model. The results indicated that the high-susceptibility slope units were located in the middle and upper Nan-Shih Stream basin.

  14. Seasonally-Dynamic SPARROW Modeling of Nitrogen Flux Using Earth Observation Data

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Schwarz, G. E.; Brakebill, J. W.; Hoos, A. B.; Moore, R. B.; Shih, J.; Nolin, A. W.; Macauley, M.; Alexander, R. B.

    2013-12-01

    SPARROW models are widely used to identify and quantify the sources of contaminants in watersheds and to predict their flux and concentration at specified locations downstream. Conventional SPARROW models describe the average relationship between sources and stream conditions based on long-term water quality monitoring data and spatially referenced explanatory information. But many watershed management issues stem from intra- and inter-annual changes in contaminant sources, hydrologic forcing, or other environmental conditions, which cause a temporary imbalance between inputs and stream water quality. Dynamic behavior of the system relating to changes in watershed storage and processing then becomes important. In this study, we describe dynamically calibrated SPARROW models of total nitrogen flux in three sub-regional watersheds: the Potomac River Basin, Long Island Sound drainage, and coastal South Carolina drainage. The models are based on seasonal water quality and watershed input data for a total of 170 monitoring stations for the period 2001 to 2008. Frequently reported, spatially detailed input data on the phenology of agricultural production, terrestrial vegetation growth, and snow melt are often challenging requirements of seasonal modeling of reactive nitrogen. In this NASA-funded research, we use Enhanced Vegetation Index (EVI), gross primary production, and snow/ice cover data from MODIS to parameterize seasonal uptake and release of nitrogen from vegetation and snowpack. The spatial reference frames of the models are 1:100,000-scale stream networks, and the computational time steps are 0.25-year seasons. Precipitation and temperature data are from PRISM. The model formulation accounts for storage of nitrogen from nonpoint sources including fertilized cropland, pasture, urban land, and atmospheric deposition. Model calibration is by non-linear regression. Once calibrated, model source terms based on previous-season export allow for recursive dynamic simulation of stream flux: gradual increases or decreases in export occur as source supply rates and hydrologic forcing change. Based on the assumption that removal of nitrogen from watershed storage to stream channels and to 'permanent' sinks (e.g. the atmosphere and deep groundwater) occurs as parallel first-order processes, the models can be used to estimate the approximate residence times of nonpoint-source nitrogen in the watersheds.
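
    The recursive dynamic described in the last sentences reduces to a store with two parallel first-order losses. A minimal sketch with assumed rate constants:

```python
# Sketch of the recursive idea: nonpoint-source nitrogen enters a watershed
# store and leaves by two parallel first-order pathways, to the stream
# (k_stream) and to permanent sinks (k_sink); mean residence time is
# 1/(k_stream + k_sink). All rates and inputs are illustrative.
k_stream, k_sink = 0.30, 0.10        # per season (assumed)
storage = 0.0                        # kg N in watershed store
inputs = [120, 80, 60, 140] * 2      # seasonal N inputs, kg (8 seasons)

for season, n_in in enumerate(inputs):
    storage += n_in
    flux = k_stream * storage                 # seasonal stream flux
    storage -= (k_stream + k_sink) * storage  # removal to stream and sinks
    print(f"season {season}: stream flux = {flux:.1f} kg")

print(f"residence time ~ {1.0 / (k_stream + k_sink):.1f} seasons")
```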

  15. A hybrid phase-space and histogram source model for GPU-based Monte Carlo radiotherapy dose calculation

    NASA Astrophysics Data System (ADS)

    Townson, Reid W.; Zavgorodni, Sergei

    2014-12-01

    In GPU-based Monte Carlo simulations for radiotherapy dose calculation, source modelling from a phase-space source can be an efficiency bottleneck. Previously, this has been addressed using phase-space-let (PSL) sources, which provided significant efficiency enhancement. We propose that additional speed-up can be achieved through the use of a hybrid primary photon point source model combined with a secondary PSL source. A novel phase-space derived and histogram-based implementation of this model has been integrated into gDPM v3.0. Additionally, a simple method for approximately deriving target photon source characteristics from a phase-space that does not contain inheritable particle history variables (LATCH) has been demonstrated to succeed in selecting over 99% of the true target photons with only ~0.3% contamination (for a Varian 21EX 18 MV machine). The hybrid source model was tested using an array of open fields for various Varian 21EX and TrueBeam energies, and all cases achieved greater than 97% chi-test agreement (the mean was 99%) above the 2% isodose with 1% / 1 mm criteria. The root mean square deviations (RMSDs) were less than 1%, with a mean of 0.5%, and the source generation time was 4-5 times faster. A seven-field intensity modulated radiation therapy patient treatment achieved 95% chi-test agreement above the 10% isodose with 1% / 1 mm criteria, 99.8% for 2% / 2 mm, a RMSD of 0.8%, and source generation speed-up factor of 2.5. Presented as part of the International Workshop on Monte Carlo Techniques in Medical Physics

  16. Three dimensional global modeling of atmospheric CO2

    NASA Technical Reports Server (NTRS)

    Fung, I.; Hansen, J.; Rind, D.

    1983-01-01

    A model was developed to study the prospects of extracting information on carbon dioxide sources and sinks from observed CO2 variations. The approach uses a three dimensional global transport model, based on winds from a 3-D general circulation model (GCM), to advect CO2 noninteractively, i.e., as a tracer, with specified sources and sinks of CO2 at the surface. The 3-D model employed is identified and biosphere, ocean and fossil fuel sources and sinks are discussed. Some preliminary model results are presented.

  17. Virtual-source diffusion approximation for enhanced near-field modeling of photon-migration in low-albedo medium.

    PubMed

    Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng

    2015-01-26

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we herein report on an improved explicit model for a semi-infinite geometry, referred to as "Virtual Source" (VS) diffuse approximation (DA), suited to low-albedo media and short source-detector separations. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. This parameterized scheme is shown to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. The superiority of the proposed VS-DA method to the established ones is demonstrated in comparison with Monte Carlo simulations over wide ranges of the source-detector separation and the medium optical properties.
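
    To make the VS construction concrete, the sketch below sums standard semi-infinite DA Green's functions (with an extrapolated-boundary image source) over two buried point sources; the depths and intensities are placeholders, whereas the paper derives them by fitting near-field reflectance:

```python
# Hedged numerical sketch of the VS idea: the collimated beam is replaced
# by isotropic point sources along the incidence axis, each contributing a
# semi-infinite diffusion-approximation Green's function.
import numpy as np

mua, musp, A = 0.5, 10.0, 1.0            # low-albedo medium (1/cm), boundary factor
D = 1.0 / (3.0 * (mua + musp))           # diffusion coefficient
mu_eff = np.sqrt(mua / D)
zb = 2.0 * A * D                         # extrapolated boundary distance

def fluence(rho, z_src, weight):
    """DA fluence at surface radius rho from a buried VS plus its image."""
    r1 = np.hypot(rho, z_src)
    r2 = np.hypot(rho, z_src + 2.0 * zb)
    return weight / (4 * np.pi * D) * (np.exp(-mu_eff * r1) / r1
                                       - np.exp(-mu_eff * r2) / r2)

rho = np.linspace(0.05, 1.0, 20)         # short source-detector separations (cm)
vs = [(0.5 / musp, 0.7), (2.0 / musp, 0.3)]   # (depth, intensity): assumed 2VS values
phi = sum(fluence(rho, z, w) for z, w in vs)
print(phi[:3])
```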

  18. Source apportionment of airborne particulate matter using organic compounds as tracers

    NASA Astrophysics Data System (ADS)

    Schauer, James J.; Rogge, Wolfgang F.; Hildemann, Lynn M.; Mazurek, Monica A.; Cass, Glen R.; Simoneit, Bernd R. T.

    A chemical mass balance receptor model based on organic compounds has been developed that relates source contributions to airborne fine particle mass concentrations. Source contributions to the concentrations of specific organic compounds are revealed as well. The model is applied to four air quality monitoring sites in southern California using atmospheric organic compound concentration data and source test data collected specifically for the purpose of testing this model. The contributions of up to nine primary particle source types can be separately identified in ambient samples based on this method, and approximately 85% of the organic fine aerosol is assigned to primary sources on an annual average basis. The model provides information on source contributions to fine mass concentrations, fine organic aerosol concentrations and individual organic compound concentrations. The largest primary source contributors to fine particle mass concentrations in Los Angeles are found to include diesel engine exhaust, paved road dust, gasoline-powered vehicle exhaust, plus emissions from food cooking and wood smoke, with smaller contribution from tire dust, plant fragments, natural gas combustion aerosol, and cigarette smoke. Once these primary aerosol source contributions are added to the secondary sulfates, nitrates and organics present, virtually all of the annual average fine particle mass at Los Angeles area monitoring sites can be assigned to its source.
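
    The chemical mass balance calculation itself is a small constrained least-squares problem. A minimal sketch with invented tracer profiles (real CMB uses effective-variance weighting rather than plain NNLS):

```python
# Minimal sketch of the chemical mass balance idea: ambient tracer
# concentrations c are modeled as F @ s, where columns of F are source
# profiles (tracer mass per unit source PM) and s >= 0 are the source
# contributions to estimate. Profiles and concentrations are invented.
import numpy as np
from scipy.optimize import nnls

tracers = ["hopanes", "levoglucosan", "cholesterol", "EC"]
sources = ["vehicle exhaust", "wood smoke", "meat cooking"]
F = np.array([
    [0.020, 0.000, 0.000],
    [0.000, 0.140, 0.000],
    [0.000, 0.000, 0.060],
    [0.300, 0.050, 0.010],
])
c = np.array([0.010, 0.056, 0.012, 0.185])   # ambient tracer concentrations

s, resid = nnls(F, c)
for name, contrib in zip(sources, s):
    print(f"{name}: {contrib:.2f} ug/m3")
```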

  19. An experimental comparison of various methods of nearfield acoustic holography

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    2017-05-19

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed-parameter regularization was comparable to that of the L-curve method.
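
    The best-performing variant, equivalent sources with Tikhonov regularization, can be sketched in a few lines: fit monopoles behind the source plane to the hologram pressure, then re-radiate them onto the reconstruction plane. Geometry, frequency, and the regularization parameter below are toy values (the paper selects the parameter via the L-curve):

```python
# Hedged sketch of equivalent-sources NAH with Tikhonov regularization.
import numpy as np

k = 2 * np.pi * 1000 / 343.0            # wavenumber at 1 kHz

def green(r_field, r_src):
    """Free-space Green's functions between two point sets (rows are points)."""
    d = np.linalg.norm(r_field[:, None, :] - r_src[None, :, :], axis=-1)
    return np.exp(-1j * k * d) / (4 * np.pi * d)

# planar grids: equivalent sources at z=-0.02 m, hologram at z=0.05 m,
# reconstruction (source) plane at z=0.0 m
xy = np.array([(x, y) for x in np.linspace(-.2, .2, 8) for y in np.linspace(-.2, .2, 8)])
r_eq   = np.c_[xy, np.full(len(xy), -0.02)]
r_holo = np.c_[xy, np.full(len(xy),  0.05)]
r_rec  = np.c_[xy, np.full(len(xy),  0.00)]

q_true = np.zeros(len(xy)); q_true[27] = 1.0          # one active monopole
p_holo = green(r_holo, r_eq) @ q_true                 # "measured" hologram

G = green(r_holo, r_eq)
lam = 1e-4 * np.linalg.norm(G, 2) ** 2                # Tikhonov parameter (toy choice)
q = np.linalg.solve(G.conj().T @ G + lam * np.eye(G.shape[1]), G.conj().T @ p_holo)
p_rec = green(r_rec, r_eq) @ q                        # reconstructed source-plane pressure
print(np.abs(p_rec).argmax())                         # should peak at or near index 27
```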

  20. An experimental comparison of various methods of nearfield acoustic holography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed-parameter regularization was comparable to that of the L-curve method.

  1. A Seismic Source Model for Central Europe and Italy

    NASA Astrophysics Data System (ADS)

    Nyst, M.; Williams, C.; Onur, T.

    2006-12-01

    We present a seismic source model for Central Europe (Belgium, Germany, Switzerland, and Austria) and Italy, as part of an overall seismic risk and loss modeling project for this region. A separate presentation at this conference discusses the probabilistic seismic hazard and risk assessment (Williams et al., 2006). Where available we adopt regional consensus models and adjust these to fit our format; otherwise we develop our own model. Our seismic source model covers the whole region under consideration and consists of the following components: 1. A subduction zone environment in Calabria, SE Italy, with interface events between the Eurasian and African plates and intraslab events within the subducting slab. The subduction zone interface is parameterized as a set of dipping area sources that follow the geometry of the surface of the subducting plate, whereas intraslab events are modeled as plane sources at depth; 2. The main normal faults in the upper crust along the Apennines mountain range, in Calabria and Central Italy. Dipping faults and (sub-)vertical faults are parameterized as dipping plane and line sources, respectively; 3. The Upper and Lower Rhine Graben regime that runs from northern Italy into eastern Belgium, parameterized as a combination of dipping plane and line sources; and finally 4. Background seismicity, parameterized as area sources. The fault model is based on slip rates using characteristic recurrence. The modeling of background and subduction zone seismicity is based on a compilation of several national and regional historic seismic catalogs using a Gutenberg-Richter recurrence model. Merging the catalogs entails the deletion of duplicate, spurious, and very old events and the application of a declustering algorithm (Reasenberg, 2000). The resulting catalog contains a little over 6000 events, has an average b-value of -0.9, is complete for moment magnitudes 4.5 and larger, and is used to compute a gridded a-value model (smoothed historical seismicity) for the region. The logic tree weights various completeness intervals and minimum magnitudes. Using a weighted scheme of European and global ground motion models together with a detailed site classification map for Europe based on Eurocode 8, we generate hazard maps for return periods of 200, 475, 1000, and 2500 years.

  2. Identification of immiscible NAPL contaminant sources in aquifers by a modified two-level saturation based imperialist competitive algorithm

    NASA Astrophysics Data System (ADS)

    Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.

    2017-07-01

    A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Similar to countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach named the knock-the-base method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as grid size, rock heterogeneity, and designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than a model employing the classical one-level ICA.

  3. Application of a water quality model in the White Cart water catchment, Glasgow, UK.

    PubMed

    Liu, S; Tucker, P; Mansell, M; Hursthouse, A

    2003-03-01

    Water quality models of urban systems have previously focused on point source (sewerage system) inputs. Little attention has been given to diffuse inputs, and research into diffuse pollution has been largely confined to agricultural sources. This paper reports on new research aimed at integrating diffuse inputs into an urban water quality model. An integrated model is introduced that is made up of four modules: hydrology, contaminant point sources, nutrient cycling, and leaching. The hydrology module, T&T, consists of a TOPMODEL (a TOPography-based hydrological MODEL), which simulates runoff from pervious areas, and a two-tank model, which simulates runoff from impervious urban areas. Linked into the two-tank model, the contaminant point source module simulates overflow from the sewerage system in heavy rain. The widely known SOILN (SOIL Nitrate model) is the basis of the nitrogen cycle module. Finally, the leaching module consists of two functions: the production function and the transfer function. The production function is based on SLIM (Solute Leaching Intermediate Model), while the transfer function is based on the 'flushing hypothesis', which postulates a relationship between contaminant concentrations in the receiving water course and the extent to which the catchment is saturated. This paper outlines the modelling methodology and the model structures that have been developed. An application of this model in the White Cart catchment (Glasgow) is also included.

  4. Body and Surface Wave Modeling of Observed Seismic Events. Part 2.

    DTIC Science & Technology

    1987-05-12

    A method is described for generating synthetic point-source seismograms for shear dislocation sources using line source (2-D) theory. It is based on expanding the complete three-dimensional solution of the wave equation, expressed in cylindrical coordinates, in an asymptotic form.

  5. A prototype of the procedure of strong ground motion prediction for intraslab earthquake based on characterized source model

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.; Sekiguchi, H.

    2011-12-01

    We propose a prototype of a procedure to construct source models for strong motion prediction during intraslab earthquakes based on the characterized source model (Irikura and Miyake, 2011). The key is the characterized source model, which is based on empirical scaling relationships for intraslab earthquakes and involves the correspondence between the SMGA (strong motion generation area, Miyake et al., 2003) and the asperity (large slip area). Iwata and Asano (2011) obtained empirical relationships of the rupture area (S) and the total asperity area (Sa) to the seismic moment (Mo), assuming a 2/3-power dependency of S and Sa on Mo:

    S (km²) = 6.57 × 10⁻¹¹ × Mo^(2/3)   (Mo in N m)   (1)
    Sa (km²) = 1.04 × 10⁻¹¹ × Mo^(2/3)   (Mo in N m)   (2)

    Iwata and Asano (2011) also pointed out that the position and size of the SMGA approximately correspond to the asperity area for several intraslab events. Based on these empirical relationships, we give a procedure for constructing source models of intraslab earthquakes for strong motion prediction. [1] Give the seismic moment, Mo. [2] Obtain the total rupture area and the total asperity area according to the empirical scaling relationships between S, Sa, and Mo given by Iwata and Asano (2011). [3] A square rupture area and square asperities are assumed. [4] The source mechanism is assumed to be the same as that of small events in the source region. [5] Multiple scenarios covering a variety of asperity numbers and rupture starting points are prepared. We apply this procedure by simulating strong ground motions for several observed events to confirm the methodology.
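
    A worked application of steps [1]-[3] using Eqs. (1) and (2), for an assumed Mw 7.0 event:

```python
# Worked check of the empirical scaling relations (1) and (2) above for an
# assumed Mw 7.0 intraslab event, following steps [1]-[3] of the recipe;
# the square rupture/asperity patches come from step [3].
Mo = 3.98e19                              # seismic moment, N m (Mw ~ 7.0)

S  = 6.57e-11 * Mo ** (2.0 / 3.0)         # rupture area, km^2, Eq. (1)
Sa = 1.04e-11 * Mo ** (2.0 / 3.0)         # total asperity area, km^2, Eq. (2)

print(f"S  = {S:7.1f} km^2 -> square side {S ** 0.5:5.1f} km")
print(f"Sa = {Sa:7.1f} km^2 ({Sa / S:.0%} of rupture area)")
```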

  6. An acoustic glottal source for vocal tract physical models

    NASA Astrophysics Data System (ADS)

    Hannukainen, Antti; Kuortti, Juha; Malinen, Jarmo; Ojalammi, Antti

    2017-11-01

    A sound source is proposed for the acoustic measurement of physical models of the human vocal tract. The physical models are produced by fast prototyping, based on magnetic resonance imaging during prolonged vowel production. The sound source, accompanied by custom signal processing algorithms, is used for two kinds of measurements from physical models of the vocal tract: (i) amplitude frequency response and resonant frequency measurements, and (ii) signal reconstructions at the source output according to a target pressure waveform with measurements at the mouth position. The proposed source and the software are validated by computational acoustics experiments and measurements on a physical model of the vocal tract corresponding to the vowels [] of a male speaker.

  7. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule-based modeling systems, and a method for converting a procedural model to a rule-based model are described. Simulation models are used to represent real-time engineering systems. A real-time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language; therefore, they must be enhanced with a reaction capability. Rule-based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule-based system can be generated by a knowledge acquisition tool or a source-level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule-based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.

  8. Boundary control of bidomain equations with state-dependent switching source functions in the ionic model

    NASA Astrophysics Data System (ADS)

    Chamakuri, Nagaiah; Engwer, Christian; Kunisch, Karl

    2014-09-01

    Optimal control for cardiac electrophysiology based on the bidomain equations in conjunction with the Fenton-Karma ionic model is considered. This generic ventricular model approximates well the restitution properties and spiral wave behavior of more complex ionic models of cardiac action potentials. However, it is challenging due to the appearance of state-dependent discontinuities in the source terms. A computational framework for the numerical realization of optimal control problems is presented. Essential ingredients are a shape calculus based treatment of the sensitivities of the discontinuous source terms and a marching cubes algorithm to track iso-surface of excitation wavefronts. Numerical results exhibit successful defibrillation by applying an optimally controlled extracellular stimulus.

  9. Opportunities and Challenges in Supply-Side Simulation: Physician-Based Models

    PubMed Central

    Gresenz, Carole Roan; Auerbach, David I; Duarte, Fabian

    2013-01-01

    Objective To provide a conceptual framework and to assess the availability of empirical data for supply-side microsimulation modeling in the context of health care. Data Sources Multiple secondary data sources, including the American Community Survey, Health Tracking Physician Survey, and SK&A physician database. Study Design We apply our conceptual framework to one entity in the health care market—physicians—and identify, assess, and compare data available for physician-based simulation models. Principal Findings Our conceptual framework describes three broad types of data required for supply-side microsimulation modeling. Our assessment of available data for modeling physician behavior suggests broad comparability across various sources on several dimensions and highlights the need for significant integration of data across multiple sources to provide a platform adequate for modeling. A growing literature provides potential estimates for use as behavioral parameters that could serve as the models' engines. Sources of data for simulation modeling that account for the complex organizational and financial relationships among physicians and other supply-side entities are limited. Conclusions A key challenge for supply-side microsimulation modeling is optimally combining available data to harness their collective power. Several possibilities also exist for novel data collection. These have the potential to serve as catalysts for the next generation of supply-side-focused simulation models to inform health policy. PMID:23347041

  10. Application of remote sensing-based two-source energy balance model for mapping field surface fluxes with composite and component surface temperatures

    USDA-ARS?s Scientific Manuscript database

    Operational application of a remote sensing-based two source energy balance model (TSEB) to estimate evaportranspiration (ET) and the components evaporation (E), transpiration (T) at a range of space and time scales is very useful for managing water resources in arid and semiarid watersheds. The TSE...

  11. Source apportionment for fine particulate matter in a Chinese city using an improved gas-constrained method and comparison with multiple receptor models.

    PubMed

    Shi, Guoliang; Liu, Jiayuan; Wang, Haiting; Tian, Yingze; Wen, Jie; Shi, Xurong; Feng, Yinchang; Ivey, Cesunica E; Russell, Armistead G

    2018-02-01

    PM2.5 is one of the most studied atmospheric pollutants due to its adverse impacts on human health and welfare and the environment. An improved model (the chemical mass balance gas constraint-Iteration: CMBGC-Iteration) is proposed and applied to identify source categories and estimate source contributions of PM2.5. The CMBGC-Iteration model uses the ratios of gases to PM as constraints and considers the uncertainties of source profiles and receptor datasets, which is crucial information for source apportionment. To apply this model, samples of PM2.5 were collected at Tianjin, a megacity in northern China. The ambient PM2.5 dataset, source information, and gas-to-particle ratios (such as SO2/PM2.5, CO/PM2.5, and NOx/PM2.5 ratios) were introduced into the CMBGC-Iteration to identify the potential sources and their contributions. Six source categories were identified by this model, and their contributions to PM2.5 were ordered as follows: secondary sources (30%), crustal dust (25%), vehicle exhaust (16%), coal combustion (13%), SOC (7.6%), and cement dust (0.40%). In addition, the same dataset was also processed by other receptor models (CMB, CMB-Iteration, CMB-GC, PMF, WALSPMF, and NCAPCA), and the results obtained were compared. Ensemble-average source impacts were calculated based on the seven source apportionment results: secondary sources (28%), crustal dust (20%), coal combustion (18%), vehicle exhaust (17%), SOC (11%), and cement dust (1.3%). The similar results of the CMBGC-Iteration and the ensemble method indicate that CMBGC-Iteration can produce relatively appropriate results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Source apportionment of airborne particulate matter using organic compounds as tracers

    NASA Astrophysics Data System (ADS)

    Schauer, James J.; Rogge, Wolfgang F.; Hildemann, Lynn M.; Mazurek, Monica A.; Cass, Glen R.; Simoneit, Bernd R. T.

    A chemical mass balance receptor model based on organic compounds has been developed that relates source contributions to airborne fine particle mass concentrations. Source contributions to the concentrations of specific organic compounds are revealed as well. The model is applied to four air quality monitoring sites in southern California using atmospheric organic compound concentration data and source test data collected specifically for the purpose of testing this model. The contributions of up to nine primary particle source types can be separately identified in ambient samples based on this method, and approximately 85% of the organic fine aerosol is assigned to primary sources on an annual average basis. The model provides information on source contributions to fine mass concentrations, fine organic aerosol concentrations and individual organic compound concentrations. The largest primary source contributors to fine particle mass concentrations in Los Angeles are found to include diesel engine exhaust, paved road dust, gasoline-powered vehicle exhaust, plus emissions from food cooking and wood smoke, with smaller contributions from tire dust, plant fragments, natural gas combustion aerosol, and cigarette smoke. Once these primary aerosol source contributions are added to the secondary sulfates, nitrates and organics present, virtually all of the annual average fine particle mass at Los Angeles area monitoring sites can be assigned to its source.

  13. Advanced RF Sources Based on Novel Nonlinear Transmission Lines

    DTIC Science & Technology

    2015-01-26

    The work concerns advanced RF sources based on novel nonlinear transmission lines and high power microwave (HPM) sources. Electrical contact modeling is also critical to thin film devices and integrated circuits, carbon nanotube based cathodes and interconnects, and field emitters. A contact model including a specific interfacial resistivity ρc is compared with the transmission line model (TLM); when the interface resistance rc is small, the TLM becomes inaccurate due to current crowding.

  14. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    NASA Astrophysics Data System (ADS)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported on an improved explicit model, referred to as "Virtual Source" (VS) diffuse approximation (DA), that inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and further applied in the image reconstruction of the Laminar Optical Tomography system.

  15. A Self-Adaptive Dynamic Recognition Model for Fatigue Driving Based on Multi-Source Information and Two Levels of Fusion

    PubMed Central

    Sun, Wei; Zhang, Xiaorui; Peeta, Srinivas; He, Xiaozheng; Li, Yongfu; Zhu, Senlai

    2015-01-01

    To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) into the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model incorporates the fatigue state at the previous time step into the decision-level fusion to improve the robustness of fatigue driving recognition. An improved correction strategy for the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model. PMID:26393615
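
    Decision-level fusion of BPAs typically uses Dempster's rule of combination, sketched below for a two-hypothesis fatigue frame. The feature names and mass values are invented for illustration; the paper's dynamic weighting and improved conflict-correction strategy are not reproduced here.

        from itertools import product

        def combine(m1, m2):
            """Dempster's rule for two BPAs given as {frozenset hypothesis: mass}."""
            combined, conflict = {}, 0.0
            for (a, ma), (b, mb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb              # mass assigned to empty set
            k = 1.0 - conflict                       # normalization constant
            return {h: m / k for h, m in combined.items()}

        F, A = frozenset({"fatigued"}), frozenset({"alert"})
        theta = F | A                                # total ignorance
        m_eye   = {F: 0.6, A: 0.2, theta: 0.2}       # e.g., eyelid-closure feature
        m_steer = {F: 0.5, A: 0.3, theta: 0.2}       # e.g., steering-behavior feature
        print(combine(m_eye, m_steer))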

  16. A stable isotope model for combined source apportionment and degradation quantification of environmental pollutants

    NASA Astrophysics Data System (ADS)

    Lutz, Stefanie; Van Breukelen, Boris

    2014-05-01

    Natural attenuation can represent a complementary or alternative approach to engineered remediation of polluted sites. In this context, compound specific stable isotope analysis (CSIA) has proven a useful tool, as it can provide evidence of natural attenuation and assess the extent of in-situ degradation based on changes in isotope ratios of pollutants. Moreover, CSIA can allow for source identification and apportionment, which might help to identify major emission sources in complex contamination scenarios. However, degradation and mixing processes in aquifers can lead to changes in isotopic compositions, such that their simultaneous occurrence might complicate combined source apportionment (SA) and assessment of the extent of degradation (ED). We developed a mathematical model (stable isotope sources and sinks model; SISS model) based on the linear stable isotope mixing model and the Rayleigh equation that allows for simultaneous SA and quantification of the ED in a scenario of two emission sources and degradation via one reaction pathway. It was shown that the SISS model with CSIA of at least two elements contained in the pollutant (e.g., C and H in benzene) allows for unequivocal SA even in the presence of degradation-induced isotope fractionation. In addition, the model enables precise quantification of the ED provided degradation follows instantaneous mixing of two sources. If mixing occurs after two sources have degraded separately, the model can still yield a conservative estimate of the overall extent of degradation. The SISS model was validated against virtual data from a two-dimensional reactive transport model. The model results for SA and ED were in good agreement with the simulation results. The application of the SISS model to field data of benzene contamination was, however, challenged by large uncertainties in measured isotope data. Nonetheless, the use of the SISS model provided a better insight into the interplay of mixing and degradation processes at the field site, as it revealed the prevailing contribution of one emission source and a low overall ED. The model can be extended to a larger number of sources and sinks. It may aid in forensics and natural attenuation assessment of soil, groundwater, surface water, or atmospheric pollution.
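
    For the "instantaneous mixing followed by degradation" case, the SISS idea reduces to a small nonlinear system: one linear mixing term plus a Rayleigh fractionation term per element, with the source fraction and remaining fraction as the two unknowns. The end-member signatures, enrichment factors, and observation below are invented for illustration, not field data.

        import numpy as np
        from scipy.optimize import fsolve

        d1  = {"C": -28.0, "H": -80.0}    # source 1 end-member signatures (per mil)
        d2  = {"C": -24.0, "H": -120.0}   # source 2 end-member signatures
        eps = {"C": -1.5,  "H": -30.0}    # enrichment factors of the degradation pathway
        obs = {"C": -25.0, "H": -95.0}    # measured isotope ratios at the well

        def residuals(p):
            x, f = p    # x: fraction of source 1; f: fraction of pollutant remaining
            # linear mixing plus Rayleigh term (delta ~ delta0 + eps * ln f)
            return [x * d1[e] + (1 - x) * d2[e] + eps[e] * np.log(f) - obs[e]
                    for e in ("C", "H")]

        x, f = fsolve(residuals, x0=[0.5, 0.5])
        print(f"source-1 fraction: {x:.2f}, extent of degradation: {1 - f:.2f}")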

  17. Source term model evaluations for the low-level waste facility performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  18. Evaluation of the source area of rooftop scalar measurements in London, UK using wind tunnel and modelling approaches.

    NASA Astrophysics Data System (ADS)

    Brocklehurst, Aidan; Boon, Alex; Barlow, Janet; Hayden, Paul; Robins, Alan

    2014-05-01

    The source area of an instrument is an estimate of the area of ground over which the measurement is generated. Quantification of the source area of a measurement site provides crucial context for analysis and interpretation of the data. A range of computational models exists to calculate the source area of an instrument, but these are usually based on assumptions which do not hold for instruments positioned very close to the surface, particularly those surrounded by heterogeneous terrain, i.e., urban areas. Although positioning instrumentation at higher elevation (i.e., on masts) is ideal in urban areas, masts are costly to install and maintain, and it is often logistically difficult to place them in the ideal geographical location. Therefore, in many studies, experimentalists turn to rooftops to position instrumentation. Experimental validations of source area models for these situations are very limited. In this study, a controlled tracer gas experiment was conducted in a wind tunnel based on a 1:200 scale model of a measurement site used in previous experimental work in central London. The detector was set at the location of the rooftop site as the tracer was released at a range of locations within the surrounding streets and rooftops. Concentration measurements are presented for a range of wind angles, with the spread of concentration measurements indicative of the source area distribution. Clear evidence of wind channelling by streets is seen, with the shape of the source area strongly influenced by buildings upwind of the measurement point. The results of the wind tunnel study are compared to scalar concentration source areas generated by modelling approaches based on meteorological data from the central London experimental site and used in the interpretation of continuous carbon dioxide (CO2) concentration data. Initial conclusions are drawn on how to apply scalar concentration source area models to rooftop measurement sites, with suggestions for their improvement to incorporate effects such as channelling.

  19. Propeller sheet cavitation noise source modeling and inversion

    NASA Astrophysics Data System (ADS)

    Lee, Keunhwa; Lee, Jaehyuk; Kim, Dongho; Kim, Kyungseop; Seong, Woojae

    2014-02-01

    Propeller sheet cavitation is the main contributor to high levels of noise and vibration in the after body of a ship. Full measurement of the cavitation-induced hull pressure over the entire surface of the affected area is desirable but not practical. Therefore, using a few measurements on the outer hull above the propeller in a cavitation tunnel, empirical or semi-empirical techniques based on physical models have been used to predict the hull-induced pressure (or hull-induced force). In this paper, with an analytic source model for sheet cavitation, a multi-parameter inversion scheme is suggested to find the positions of the noise sources and their strengths. The inversion is posed as a nonlinear optimization problem, which is solved with an adaptive simplex simulated annealing algorithm. The resulting hull pressure can then be modeled with the boundary element method from the inverted cavitation noise sources. The suggested approach is applied to hull pressure data measured in a cavitation tunnel of Samsung Heavy Industries. Two monopole sources are adequate to model the propeller sheet cavitation noise. The inverted source information is consistent with the cavitation dynamics of the propeller, and the modeled hull pressure shows good agreement with the cavitation tunnel experimental data.
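
    The inversion step can be sketched as a global search over monopole positions and strengths that minimizes the misfit to measured hull pressures. Here scipy's dual_annealing stands in for the paper's adaptive simplex simulated annealing, and the sensor layout and "measurements" are synthetic.

        import numpy as np
        from scipy.optimize import dual_annealing

        # Hull sensor positions (m); synthetic geometry for illustration.
        sensors = np.array([[0.0, 0.0, 1.0], [0.2, 0.0, 1.0], [0.4, 0.0, 1.0],
                            [0.6, 0.1, 1.0], [0.8, 0.1, 1.0]])

        def hull_pressure(params, pts):
            """Amplitude from two free-field monopoles on the propeller axis."""
            x1, q1, x2, q2 = params
            p = np.zeros(len(pts))
            for x, q in ((x1, q1), (x2, q2)):
                r = np.linalg.norm(pts - np.array([x, 0.0, 0.0]), axis=1)
                p += q / (4.0 * np.pi * r)
            return p

        true_params = [0.1, 2.0, 0.4, 1.0]               # synthetic "truth"
        measured = hull_pressure(true_params, sensors)

        bounds = [(-0.5, 1.0), (0.0, 5.0), (-0.5, 1.0), (0.0, 5.0)]
        result = dual_annealing(
            lambda p: np.sum((hull_pressure(p, sensors) - measured) ** 2),
            bounds, seed=1)
        print("inverted (x1, q1, x2, q2):", result.x)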

  20. An image-based skeletal dosimetry model for the ICRP reference adult female—internal electron sources

    NASA Astrophysics Data System (ADS)

    O'Reilly, Shannon E.; DeWeese, Lindsay S.; Maynard, Matthew R.; Rajon, Didier A.; Wayson, Michael B.; Marshall, Emily L.; Bolch, Wesley E.

    2016-12-01

    An image-based skeletal dosimetry model for internal electron sources was created for the ICRP-defined reference adult female. Many previous skeletal dosimetry models, which are still employed in commonly used internal dosimetry software, do not properly account for electron escape from trabecular spongiosa, electron cross-fire from cortical bone, and the impact of marrow cellularity on active marrow self-irradiation. Furthermore, these existing models do not employ the current ICRP definition of a 50 µm bone endosteum (or shallow marrow). Each of these limitations was addressed in the present study. Electron transport was performed to determine specific absorbed fractions to both the active and shallow marrow of the skeletal regions of the University of Florida reference adult female. The skeletal macrostructure and microstructure were modeled separately. The bone macrostructure was based on the whole-body hybrid computational phantom of the UF series of reference models, while the bone microstructure was derived from microCT images of skeletal region samples taken from a 45-year-old female cadaver. The active and shallow marrow are typically adopted as surrogate tissue regions for the hematopoietic stem cells and osteoprogenitor cells, respectively. Source tissues included active marrow, inactive marrow, trabecular bone volume, trabecular bone surfaces, cortical bone volume, and cortical bone surfaces. Marrow cellularity was varied from 10 to 100 percent for active marrow self-irradiation. All other sources were run at the ICRP Publication 70 cellularity defined for each bone site. A total of 33 discrete electron energies, ranging from 1 keV to 10 MeV, were either simulated or analytically modeled. The method of combining skeletal macrostructure and microstructure absorbed fractions assessed using MCNPX electron transport was found to yield results similar to those determined with the PIRT model applied to the UF adult male skeletal dosimetry model. Calculated skeletal-averaged absorbed fractions for each source-target combination were found to follow trends similar to those of more recent image-based dosimetry models, but did not follow results from skeletal models based upon assumptions of an infinite expanse of trabecular spongiosa.

  1. Near-Field Tsunami Models with Rapid Earthquake Source Inversions from Land and Ocean-Based Observations: The Potential for Forecast and Warning

    NASA Astrophysics Data System (ADS)

    Melgar, D.; Bock, Y.; Crowell, B. W.; Haase, J. S.

    2013-12-01

    Computation of predicted tsunami wave heights and runup in the regions adjacent to large earthquakes immediately after rupture initiation remains a challenging problem. Limitations of traditional seismological instrumentation in the near field, which cannot be objectively employed for real-time inversions, and the non-uniqueness of source inversion results are major concerns for tsunami modelers. Employing near-field seismic, GPS and wave gauge data from the Mw 9.0 2011 Tohoku-oki earthquake, we test the capacity of static finite fault slip models obtained from newly developed algorithms to produce reliable tsunami forecasts. First we demonstrate the ability of seismogeodetic source models determined from combined land-based GPS and strong motion seismometers to forecast near-source tsunamis within ~3 minutes of earthquake origin time (OT). We show that these models, based on land-based sensors only, tend to underestimate the tsunami but are good enough to provide a realistic first warning. We then demonstrate that rapid ingestion of offshore shallow water (100 - 1000 m) wave gauge data significantly improves the model forecasts and possible warnings. We ingest data from 2 near-source ocean-bottom pressure sensors and 6 GPS buoys into the earthquake source inversion process. Tsunami Green functions (tGFs) are generated using the GeoClaw package, a benchmarked finite volume code with adaptive mesh refinement. These tGFs are used for a joint inversion with the land-based data and substantially improve the earthquake source and tsunami forecast. Model skill is assessed by detailed comparisons of the simulation output to 2000+ tsunami runup survey measurements collected after the event. We update the source model and tsunami forecast and warning at 10 min intervals. We show that by 20 min after OT the tsunami is well predicted, with a high variance reduction against the survey data, and that by ~30 minutes a model that can be considered final is achieved, since little change is observed afterwards. This is an indirect approach to tsunami warning: it relies on automatic determination of the earthquake source prior to tsunami simulation. It is more robust than ad hoc approaches because it relies on computation of a finite-extent centroid moment tensor to objectively determine the style of faulting and the fault plane geometry on which to launch the heterogeneous static slip inversion. Operator interaction and physical assumptions are minimal. Thus, the approach can provide the initial conditions for tsunami simulation (seafloor motion) irrespective of the type of earthquake source, and it relies heavily on oceanic wave gauge measurements for source determination. It reliably distinguishes among strike-slip, normal and thrust faulting events, all of which have been observed recently to occur in subduction zones and pose distinct tsunami hazards.

  2. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the estimated source location by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it is able to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.
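
    The refinement step can be sketched, in highly simplified form, with a steady Gaussian plume as the surrogate forward model and a first-guess source location and release rate iteratively adjusted to fit sensor concentrations. The dispersion-coefficient growth rates, wind speed, and sensor data are assumed values; the operational VIRSA uses puff models and a formal adjoint rather than the derivative-free search shown here.

        import numpy as np
        from scipy.optimize import minimize

        sensors = np.array([[200.0, 10.0], [400.0, -20.0],
                            [600.0, 30.0], [800.0, 0.0]])   # x, y positions (m)

        def plume(params, pts, u=3.0):
            """Simplified ground-level Gaussian plume concentration at pts."""
            x0, y0, q = params
            dx = np.maximum(pts[:, 0] - x0, 1.0)   # downwind distance (m)
            sy = 0.08 * dx                          # assumed sigma_y growth
            sz = 0.06 * dx                          # assumed sigma_z growth
            dy = pts[:, 1] - y0
            return q / (2 * np.pi * u * sy * sz) * np.exp(-0.5 * (dy / sy) ** 2)

        obs = plume([0.0, 0.0, 5.0], sensors)       # synthetic "measurements"
        first_guess = [50.0, 15.0, 2.0]             # e.g., from back-trajectories

        res = minimize(lambda p: np.sum((plume(p, sensors) - obs) ** 2),
                       first_guess, method="Nelder-Mead")
        print("refined source estimate (x0, y0, Q):", res.x)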

  3. [Nitrogen non-point source pollution identification based on ArcSWAT in Changle River].

    PubMed

    Deng, Ou-Ping; Sun, Si-Yang; Lü, Jun

    2013-04-01

    The ArcSWAT (Soil and Water Assessment Tool) model was adopted for non-point source (NPS) nitrogen pollution modeling and nitrogen source apportionment in the Changle River watershed, a typical agricultural watershed in southeast China. Water quality and hydrological parameters were monitored, and the watershed's natural conditions (including soil, climate, land use, etc.) and pollution source information were investigated and compiled into the SWAT database. The ArcSWAT model was established for the Changle River after calibration and validation of the model parameters. Based on the validated SWAT model, the contributions of different nitrogen sources to river TN loading were quantified, and the spatial-temporal distributions of NPS nitrogen export to rivers were addressed. The results showed that in the Changle River watershed, nitrogen fertilizer, atmospheric nitrogen deposition and the soil nitrogen pool were the prominent pollution sources, contributing 35%, 32% and 25% of the river TN loading, respectively. There were spatial-temporal variations in the critical sources of NPS TN export to the river. Natural sources, such as the soil nitrogen pool and atmospheric nitrogen deposition, should be targeted as the critical sources of river TN pollution during the rainy seasons, while chemical nitrogen fertilizer application should be targeted during the crop growing season. Chemical nitrogen fertilizer application, the soil nitrogen pool and atmospheric nitrogen deposition were the main sources of TN exported from garden plots, forest and residential land, respectively, and all three were main sources of TN exported from upland and paddy fields. These results reveal that NPS pollution control should focus on the spatio-temporal distribution of NPS pollution sources.

  4. Gis-Based Route Finding Using ANT Colony Optimization and Urban Traffic Data from Different Sources

    NASA Astrophysics Data System (ADS)

    Davoodi, M.; Mesgari, M. S.

    2015-12-01

    Nowadays, traffic data are obtained from multiple sources including GPS, Video Vehicle Detectors (VVD), Automatic Number Plate Recognition (ANPR), Floating Car Data (FCD), VANETs, etc. All such data can be used for route finding. This paper proposes a model for finding the optimum route based on the integration of traffic data from different sources. Ant Colony Optimization is applied because the concept behind the method (the movement of ants in a network) is analogous to the movement of cars in an urban road network. The results indicate that this model is capable of incorporating data from different sources, which may even be inconsistent.
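
    A toy ant-colony route finder on a small weighted road graph illustrates the pheromone/heuristic mechanics such a model builds on. The edge travel times and ACO constants below are arbitrary illustrative choices, not the paper's calibrated values.

        import random

        graph = {                        # node -> {neighbor: travel time}
            "A": {"B": 2.0, "C": 5.0},
            "B": {"C": 2.0, "D": 6.0},
            "C": {"D": 2.0},
            "D": {},
        }
        tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # pheromone levels
        alpha, beta, rho, Q = 1.0, 2.0, 0.5, 1.0               # ACO constants

        def build_path(src, dst):
            """One ant walks probabilistically from src toward dst."""
            path, node = [src], src
            while node != dst:
                nbrs = [v for v in graph[node] if v not in path]
                if not nbrs:
                    return None, float("inf")       # dead end
                w = [tau[(node, v)] ** alpha * (1.0 / graph[node][v]) ** beta
                     for v in nbrs]
                node = random.choices(nbrs, weights=w)[0]
                path.append(node)
            return path, sum(graph[u][v] for u, v in zip(path, path[1:]))

        best, best_cost = None, float("inf")
        for _ in range(100):                                   # iterations
            ants = [build_path("A", "D") for _ in range(10)]   # 10 ants
            for e in tau:
                tau[e] *= (1.0 - rho)                          # evaporation
            for path, cost in ants:
                if path is None:
                    continue
                for u, v in zip(path, path[1:]):
                    tau[(u, v)] += Q / cost                    # deposit by quality
                if cost < best_cost:
                    best, best_cost = path, cost
        print("best route:", best, "travel time:", best_cost)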

  5. [Effects of attitude formation, persuasive message, and source expertise on attitude change: an examination based on the Elaboration Likelihood Model and the Attitude Formation Theory].

    PubMed

    Nakamura, M; Saito, K; Wakabayashi, M

    1990-04-01

    The purpose of this study was to investigate how attitude change is generated by the recipient's degree of attitude formation, the evaluative-emotional elements contained in the persuasive messages, and source expertise as a peripheral cue in the persuasion context. Hypotheses based on the Attitude Formation Theory of Mizuhara (1982) and the Elaboration Likelihood Model of Petty and Cacioppo (1981, 1986) were examined. Eighty undergraduate students served as subjects in the experiment, the first stage of which involved manipulating the degree of attitude formation with respect to nuclear power development. The experimenter then presented persuasive messages with varying combinations of evaluative-emotional elements from a source with either high or low expertise on the subject. Results revealed a significant interaction effect on attitude change among attitude formation, persuasive message and the expertise of the message source. That is, high attitude formation subjects resisted evaluative-emotional persuasion from the high expertise source, while low attitude formation subjects changed their attitude when exposed to the same persuasive message from a low expertise source. Results exceeded initial predictions based on the Attitude Formation Theory and the Elaboration Likelihood Model.

  6. Wet deposition of mercury at a New York state rural site: Concentrations, fluxes, and source areas

    NASA Astrophysics Data System (ADS)

    Lai, Soon-onn; Holsen, Thomas M.; Hopke, Philip K.; Liu, Peng

    Event-based mercury (Hg) precipitation samples were collected with a modified MIC-B sampler between September 2003 and April 2005 at Potsdam, NY to investigate Hg in wet deposition and to identify potential source areas using the potential source contribution function (PSCF) and residence time weighted concentration (RTWC) models. The volume-weighted mean (VWM) concentration and wet deposition flux were 5.5 ng L⁻¹ and 7.6 µg m⁻² over the study period, and 5.5 ng L⁻¹ and 5.9 µg m⁻² in 2004, respectively, and showed seasonal trends with larger values in the spring and summer. The PSCF model results matched known source areas based on an emission inventory better than did the RTWC results based on the spatial correlation index. Both modeling results identified large Hg source areas that contain a number of coal-fired power plants located in the Upper Ohio River Valley and in southeastern Michigan, as well as in Quebec and Ontario where there are metal production facilities, waste incinerators and paper mills. Emissions from the Atlantic Ocean were also determined to be a potential source.
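
    The PSCF computation itself is simple: grid the back-trajectory endpoints, and in each cell divide the count of endpoints belonging to high-concentration samples by the count of all endpoints. The sketch below uses random placeholder trajectories rather than real HYSPLIT output.

        import numpy as np

        rng = np.random.default_rng(0)
        n_traj, n_pts = 200, 50
        lats = rng.uniform(40.0, 50.0, (n_traj, n_pts))        # endpoint latitudes
        lons = rng.uniform(-85.0, -70.0, (n_traj, n_pts))      # endpoint longitudes
        conc = rng.lognormal(1.5, 0.5, n_traj)                 # Hg conc. per trajectory

        threshold = np.percentile(conc, 75)                    # "high" concentration
        lat_edges = np.arange(40.0, 50.5, 1.0)                 # 1-degree grid
        lon_edges = np.arange(-85.0, -69.5, 1.0)

        n_ij, _, _ = np.histogram2d(lats.ravel(), lons.ravel(),
                                    [lat_edges, lon_edges])    # all endpoints
        high = conc > threshold
        m_ij, _, _ = np.histogram2d(lats[high].ravel(), lons[high].ravel(),
                                    [lat_edges, lon_edges])    # high-conc endpoints
        pscf = np.divide(m_ij, n_ij, out=np.zeros_like(m_ij), where=n_ij > 0)
        print("max PSCF cell value:", pscf.max())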

  7. Development of surrogate models for the prediction of the flow around an aircraft propeller

    NASA Astrophysics Data System (ADS)

    Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros

    2018-05-01

    In the present work, the derivation of two surrogate models (SMs) for the flow around a propeller for small aircraft is presented. Both methodologies use functions derived from computations with the detailed propeller geometry. The computations were performed using the k-ω shear stress transport model for turbulence. In the SMs, the propeller was modeled in a computational domain of disk-like geometry, where source terms were introduced in the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence than the detailed model while also providing results closer to the available operational data. The regression-based model was the most accurate and required less computational time for convergence.

  8. Partitioning the Uncertainty in Estimates of Mean Basal Area Obtained from 10-year Diameter Growth Model Predictions

    Treesearch

    Ronald E. McRoberts

    2005-01-01

    Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
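
    A minimal sketch of the propagation idea, assuming a toy linear growth model and invented variances: each Monte Carlo draw perturbs the predictor (measurement error), the coefficients (parameter uncertainty), and the prediction (residual variability), and the spread of the resulting stand means reflects the total uncertainty.

        import numpy as np

        rng = np.random.default_rng(42)
        n_sims, n_trees = 5000, 100
        d_obs = rng.uniform(10.0, 40.0, n_trees)       # observed diameters (cm)

        b_mean = np.array([1.0, 0.05])                 # assumed coefficient estimates
        b_cov = np.array([[0.01, 0.0], [0.0, 1e-5]])   # assumed parameter covariance

        growth = np.empty((n_sims, n_trees))
        for k in range(n_sims):
            d = d_obs + rng.normal(0.0, 0.5, n_trees)         # measurement error
            b0, b1 = rng.multivariate_normal(b_mean, b_cov)   # parameter uncertainty
            eps = rng.normal(0.0, 0.8, n_trees)               # residual variability
            growth[k] = b0 + b1 * d + eps                     # toy growth model

        stand_mean = growth.mean(axis=1)               # stand-level mean per draw
        print("mean growth:", stand_mean.mean(),
              "std (total uncertainty):", stand_mean.std())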

  9. A simulation-based analytic model of radio galaxies

    NASA Astrophysics Data System (ADS)

    Hardcastle, M. J.

    2018-04-01

    I derive and discuss a simple semi-analytical model of the evolution of powerful radio galaxies which is not based on assumptions of self-similar growth, but rather implements some insights about the dynamics and energetics of these systems derived from numerical simulations, and can be applied to arbitrary pressure/density profiles of the host environment. The model can qualitatively and quantitatively reproduce the source dynamics and synchrotron light curves derived from numerical modelling. Approximate corrections for radiative and adiabatic losses allow it to predict the evolution of radio spectral index and of inverse-Compton emission both for active and `remnant' sources after the jet has turned off. Code to implement the model is publicly available. Using a standard model with a light relativistic (electron-positron) jet, subequipartition magnetic fields, and a range of realistic group/cluster environments, I simulate populations of sources and show that the model can reproduce the range of properties of powerful radio sources as well as observed trends in the relationship between jet power and radio luminosity, and predicts their dependence on redshift and environment. I show that the distribution of source lifetimes has a significant effect on both the source length distribution and the fraction of remnant sources expected in observations, and so can in principle be constrained by observations. The remnant fraction is expected to be low even at low redshift and low observing frequency due to the rapid luminosity evolution of remnants, and to tend rapidly to zero at high redshift due to inverse-Compton losses.

  10. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    PubMed

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design facilitates these sources to share the free parameters of the filter shape and be related to each other through the photon interaction in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4 × 4, 10 × 10, and 20 × 20 cm² fields at multiple depths. For the 2D dose distributions calculated in the heterogeneous lung phantom, the 2D gamma pass rate was 100% for 6 and 15 MV beams. The model optimization time was less than 4 min. The proposed source model optimization process accurately produces photon fluence spectra from a linac using valid physical properties, without detailed knowledge of the geometry of the linac head, and with minimal optimization time. © 2018 American Association of Physicists in Medicine.

  11. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. Within an integrated GIS modeling environment, we developed a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas, and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated into an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provides a physically based method that gives realistic results for watersheds with VSA hydrology.
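
    For reference, the traditional SCS-CN equation that the distributed CN-VSA method builds on is straightforward to code (inch units, with the conventional initial-abstraction ratio of 0.2); the curve number below is an arbitrary example.

        def scs_runoff(p_inch, cn, ia_ratio=0.2):
            """Direct runoff depth (in) for storm depth p_inch and curve number cn."""
            s = 1000.0 / cn - 10.0        # potential maximum retention S
            ia = ia_ratio * s             # initial abstraction Ia = 0.2 S
            if p_inch <= ia:
                return 0.0                # no runoff until Ia is satisfied
            return (p_inch - ia) ** 2 / (p_inch - ia + s)

        print(scs_runoff(3.0, cn=75))     # e.g., 3 in of rain on CN 75 land cover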

  12. Bayesian source term estimation of atmospheric releases in urban areas using LES approach.

    PubMed

    Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo

    2018-05-05

    The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process between the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained from the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the prediction of both airflow and dispersion. It is therefore important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on the LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach under the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of the existing method using a RANS model. The results show that the proposed method reduces the errors in source location and release strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
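
    The Bayesian step can be sketched on a toy 1-D problem: a source-receptor kernel (here an invented exponential-decay map standing in for the LES-adjoint result) links candidate source locations to sensor readings, and a Gaussian likelihood yields a posterior over locations, with the release strength profiled at its best-fit value for each candidate.

        import numpy as np

        rng = np.random.default_rng(3)
        candidates = np.linspace(0.0, 100.0, 101)   # candidate source x-positions (m)
        sensors = np.array([20.0, 45.0, 80.0])      # sensor x-positions (m)

        def srm(x_src, x_sens):
            """Toy source-receptor coefficient: decays with distance."""
            return np.exp(-np.abs(x_sens - x_src) / 15.0)

        q_true, x_true, sigma = 2.0, 37.0, 0.02     # synthetic release and noise
        obs = q_true * srm(x_true, sensors) + rng.normal(0.0, sigma, sensors.size)

        log_post = np.empty(candidates.size)
        for i, x in enumerate(candidates):
            a = srm(x, sensors)
            q_hat = a @ obs / (a @ a)               # best-fit strength at location x
            log_post[i] = -np.sum((obs - q_hat * a) ** 2) / (2 * sigma ** 2)

        post = np.exp(log_post - log_post.max())
        post /= post.sum()                          # normalized posterior over locations
        print("MAP source location:", candidates[np.argmax(post)])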

  13. Spectral matching research for light-emitting diode-based neonatal jaundice therapeutic device light source

    NASA Astrophysics Data System (ADS)

    Gan, Ruting; Guo, Zhenning; Lin, Jieben

    2015-09-01

    To decrease the risk of bilirubin encephalopathy and minimize the need for exchange transfusions, we report a novel design for the light source of a light-emitting diode (LED)-based neonatal jaundice therapeutic device (NJTD). The in vivo bilirubin absorption spectrum was taken as the target. Based on spectral constructing theory, commercially available LEDs with different peak wavelengths and full widths at half maximum were used as the matching light sources, and a simple genetic algorithm was adopted as the spectral matching method. The required number of LEDs at each peak wavelength was calculated, and a sample model of the device's light source was then fabricated from commercial LEDs to verify the spectral matching technology. The corresponding spectrum was measured and the matching performance analyzed. The results showed that the fitted spectrum was very similar to the target spectrum, with a 98.86% matching degree, and the actual device model produced a spectrum close to the target, with a 96.02% matching degree. With its high fitting degree and efficiency, this matching algorithm is well suited to spectral matching of LED-based light sources, and the in vivo bilirubin absorption spectrum is a promising candidate for the target spectrum of new LED-based NJTD light sources.

  14. Atmospheric concentrations, sources and gas-particle partitioning of PAHs in Beijing after the 29th Olympic Games.

    PubMed

    Ma, Wan-Li; Sun, De-Zhi; Shen, Wei-Guo; Yang, Meng; Qi, Hong; Liu, Li-Yan; Shen, Ji-Min; Li, Yi-Fan

    2011-07-01

    A comprehensive sampling campaign was carried out to study the atmospheric concentration of polycyclic aromatic hydrocarbons (PAHs) in Beijing and to evaluate the effectiveness of source control strategies in reducing PAH pollution after the 29th Olympic Games. A sub-cooled liquid vapor pressure (log PL°)-based model and an octanol-air partition coefficient (Koa)-based model were applied to each seasonal dataset. Regression analysis among log KP, log PL° and log Koa exhibited highly significant correlations for all four seasons. Source factors were identified by principal component analysis, and their contributions were further estimated by multiple linear regression. Pyrogenic sources and coke oven emissions were identified as major sources for both the non-heating and heating seasons. Compared with the literature, the mean PAH concentrations before and after the 29th Olympic Games were reduced by more than 60%, indicating that the source control measures were effective in reducing PAH pollution in Beijing. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Source-term development for a contaminant plume for use by multimedia risk assessment models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.

    1999-12-01

    Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool.

  16. An innovative expression model of human health risk based on the quantitative analysis of soil metals sources contribution in different spatial scales.

    PubMed

    Zhang, Yimei; Li, Shuai; Wang, Fei; Chen, Zhuang; Chen, Jie; Wang, Liqun

    2018-09-01

    Toxicity of heavy metals from industrialization poses critical concern, and analysis of sources associated with potential human health risks is of unique significance. Assessing the human health risk of pollution sources (factored health risk) concurrently for the whole region and its sub-regions can provide more instructive information for protecting specific potential victims. In this research, we establish a new expression model of human health risk based on quantitative analysis of source contributions at different spatial scales. The larger-scale grids and their spatial codes are used to initially identify the level of pollution risk, the type of pollution source, and the sensitive population at high risk. The smaller-scale grids and their spatial codes are used to identify the contribution of each pollution source to each sub-region (larger grid) and to assess the health risks posed by each source within it. The results of the case study show that, for children (a sensitive population whose activity is centered on schools and residential areas), the major pollution sources are the abandoned lead-acid battery plant (ALP), traffic emissions and agricultural activity. The models and results of this research provide effective spatial information and a useful framework for quantifying source-category hazards and human health risks at complex industrial sites. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Physics-Based Hazard Assessment for Critical Structures Near Large Earthquake Sources

    NASA Astrophysics Data System (ADS)

    Hutchings, L.; Mert, A.; Fahjan, Y.; Novikova, T.; Golara, A.; Miah, M.; Fergany, E.; Foxall, W.

    2017-09-01

    We argue that for critical structures near large earthquake sources: (1) the ergodic assumption, recent history, and simplified descriptions of the hazard are not appropriate to rely on for earthquake ground motion prediction and can lead to mis-estimation of the hazard and risk to structures; (2) a physics-based approach can address these issues; (3) a physics-based source model must be provided to generate realistic phasing effects from finite rupture and to model near-source ground motion correctly; (4) wave propagation and site response should be site specific; (5) a much wider search of possible sources of ground motion can be achieved computationally with a physics-based approach; (6) unless one utilizes a physics-based approach, the hazard and risk to structures carry unknown uncertainties; (7) uncertainties can be reduced with a physics-based approach, but not with an ergodic approach; (8) computational power and computer codes have advanced to the point that risk to structures can be calculated directly from source- and site-specific ground motions. Spanning the variability of potential ground motion in a predictive situation is especially difficult for near-source areas, yet that is the distance at which the hazard is greatest. The basis of a physics-based approach is ground-motion synthesis derived from physics and an understanding of the earthquake process. This is an overview paper, and results from previous studies are used to make the case for these conclusions. Our premise is that 50 years of strong motion records is insufficient to capture all possible ranges of site and propagation path conditions, rupture processes, and spatial geometric relationships between source and site. Predicting future earthquake scenarios is necessary; models that have little or no physical basis but have been tested and adjusted to fit available observations can only "predict" what happened in the past, which should be considered description rather than prediction. We have developed a methodology for synthesizing physics-based broadband ground motion that incorporates the effects of realistic earthquake rupture along specific faults and the actual geology between the source and site.

  18. Comparison of TG-43 and TG-186 in breast irradiation using a low energy electronic brachytherapy source.

    PubMed

    White, Shane A; Landry, Guillaume; Fonseca, Gabriel Paiva; Holt, Randy; Rusch, Thomas; Beaulieu, Luc; Verhaegen, Frank; Reniers, Brigitte

    2014-06-01

    The recently updated guidelines for dosimetry in brachytherapy in TG-186 recommend the use of model-based dose calculations as a replacement for TG-43. TG-186 highlights shortcomings of the water-based approach in TG-43, particularly for low energy brachytherapy sources. The Xoft Axxent is a low energy (<50 kV) brachytherapy system used in accelerated partial breast irradiation (APBI). Breast tissue is heterogeneous in terms of density and composition. Dosimetric calculations for seven APBI patients treated with the Axxent were made using a model-based Monte Carlo platform for a number of tissue models and dose reporting methods, and compared to TG-43 based plans. A model of the Axxent source, the S700, was created and validated against experimental data. CT scans of the patients were used to create realistic multi-tissue/heterogeneous models, with breast tissue segmented using a published technique. Alternative water models were used to isolate the influence of tissue heterogeneity and backscatter on the dose distribution. Dose calculations were performed using Geant4 according to the original treatment parameters. The effect of the Axxent balloon applicator used in APBI, which could not be represented in the CT-based model, was modeled using a novel technique that utilizes CAD-based geometries. These techniques were validated experimentally. Results were calculated using two dose reporting methods, dose to water (Dw,m) and dose to medium (Dm,m), for the heterogeneous simulations. All results were compared against TG-43 based dose distributions and evaluated using dose ratio maps and DVH metrics, with changes in skin and PTV dose highlighted. All simulated heterogeneous models showed reduced DVH metrics, with the reduction dependent on the method of dose reporting and the patient geometry. Based on a prescription dose of 34 Gy, the average D90 to the PTV was reduced by between ~4% and ~40%, depending on the scoring method, compared to the TG-43 result. Peak skin dose was also reduced by 10%-15% due to the lack of backscatter, which is not accounted for in TG-43; the balloon applicator also contributed to the reduced dose. Other ROIs showed differences depending on the method of dose reporting. TG-186 based calculations produce results that differ from TG-43 for the Axxent source, with the differences depending strongly on the method of dose reporting. This study highlights the importance of backscatter to peak skin dose. Tissue heterogeneities, the applicator, and patient geometries demonstrate the need for a more robust dose calculation method for low energy brachytherapy sources.

  19. Estimation of source locations of total gaseous mercury measured in New York State using trajectory-based models

    NASA Astrophysics Data System (ADS)

    Han, Young-Ji; Holsen, Thomas M.; Hopke, Philip K.

    Ambient total gaseous mercury (TGM) concentrations were measured at three locations in New York State (Potsdam, Stockton, and Sterling) from May 2000 to March 2005. Using these data, three hybrid receptor models incorporating backward trajectories were used to identify source areas for TGM: potential source contribution function (PSCF), residence time weighted concentration (RTWC), and simplified quantitative transport bias analysis (SQTBA). Each model was applied using multi-site measurements to resolve the locations of important mercury sources for New York State. PSCF results showed that southeastern New York, Ohio, Indiana, Tennessee, Louisiana, and Virginia were important TGM source areas for these sites. RTWC identified Canadian sources including the metal production facilities in Ontario and Quebec, but US regional sources including the Ohio River Valley were also resolved. Sources in southeastern New York, Massachusetts, western Pennsylvania, Indiana, and northern Illinois were identified as significant by SQTBA. The three modeling results were combined to locate the most probable important source areas: Ohio, Indiana, Illinois, and Wisconsin. The Atlantic Ocean was suggested as a possible source as well.

  20. Sources of motivation, interpersonal conflict management styles, and leadership effectiveness: a structural model.

    PubMed

    Barbuto, John E; Xu, Ye

    2006-02-01

    126 leaders and 624 employees were sampled to test the relationship between leaders' sources of motivation and conflict management styles, and how these variables influence leadership effectiveness. Five sources of motivation measured by the Motivation Sources Inventory were tested: intrinsic process, instrumental, self-concept external, self-concept internal, and goal internalization. These sources of work motivation were related to Rahim's modes of interpersonal conflict management (dominating, avoiding, obliging, compromising, and integrating) and to perceived leadership effectiveness. A structural equation model tested leaders' conflict management styles and leadership effectiveness based upon different sources of work motivation. The model explained substantial variance for obliging (65%), dominating (79%), avoiding (76%), and compromising (68%), but little variance for integrating (7%). The model explained only 28% of the variance in leader effectiveness.

  1. Exploring the effects of transducer models when training convolutional neural networks to eliminate reflection artifacts in experimental photoacoustic images

    NASA Astrophysics Data System (ADS)

    Allman, Derek; Reiter, Austin; Bell, Muyinatu

    2018-02-01

    We previously proposed a method of removing reflection artifacts in photoacoustic images that uses deep learning. Our approach generally relies on using simulated photoacoustic channel data to train a convolutional neural network (CNN) that is capable of distinguishing sources from artifacts based on unique differences in their spatial impulse responses (manifested as depth-based differences in wavefront shapes). In this paper, we directly compare a CNN trained with our previous continuous transducer model to a CNN trained with an updated discrete acoustic receiver model that more closely matches an experimental ultrasound transducer. These two CNNs were trained with simulated data and tested on experimental data. The CNN trained using the continuous receiver model correctly classified 100% of sources and 70.3% of artifacts in the experimental data. In contrast, the CNN trained using the discrete receiver model correctly classified 100% of sources and 89.7% of artifacts in the experimental images. The 19.4% increase in artifact classification accuracy indicates that an acoustic receiver model that closely mimics the experimental transducer plays an important role in improving the classification of artifacts in experimental photoacoustic data. Results are promising for developing a method to display CNN-based images that remove artifacts in addition to only displaying network-identified sources as previously proposed.

  2. Effective pollutant emission heights for atmospheric transport modelling based on real-world information.

    PubMed

    Pregger, Thomas; Friedrich, Rainer

    2009-02-01

    Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height, which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available, and simple assumptions are often used in atmospheric models. As a contribution to improving knowledge of emission heights, this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of probably the most comprehensive database of real-world stack information existing in Europe, based on German industrial data. A bottom-up calculation of effective emission heights applying equations used in Gaussian dispersion models shows significant differences depending on the source and air pollutant, and compared to approaches currently used for atmospheric transport modelling.
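
    A bottom-up calculation of this kind combines stack height with a plume-rise estimate. The sketch below uses the standard Briggs final-rise formulas for buoyancy-dominated plumes in neutral/unstable conditions; the stack parameters are illustrative defaults of the kind the paper tabulates, not its published values.

        import math

        def effective_height(h_stack, d_stack, v_exit, t_stack,
                             t_air=288.0, u_wind=5.0):
            """Stack height plus Briggs final plume rise (neutral/unstable)."""
            g = 9.81
            # buoyancy flux F (m^4/s^3)
            f = g * v_exit * (d_stack / 2.0) ** 2 * (t_stack - t_air) / t_stack
            if f < 55.0:
                dh = 21.425 * f ** 0.75 / u_wind   # Briggs final rise, small F
            else:
                dh = 38.71 * f ** 0.6 / u_wind     # Briggs final rise, large F
            return h_stack + dh

        # e.g., a 100 m stack, 3 m diameter, 10 m/s exit velocity, 400 K flue gas
        print(f"effective height: {effective_height(100.0, 3.0, 10.0, 400.0):.1f} m")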

  3. Source preference and ambiguity aversion: models and evidence from behavioral and neuroimaging experiments.

    PubMed

    Chew, Soo Hong; Li, King King; Chark, Robin; Zhong, Songfa

    2008-01-01

    This experimental economics study using brain imaging techniques investigates the risk-ambiguity distinction in relation to the source preference hypothesis (Fox & Tversky, 1995) in which identically distributed risks arising from different sources of uncertainty may engender distinct preferences for the same decision maker, contrary to classical economic thinking. The use of brain imaging enables sharper testing of the implications of different models of decision-making including Chew and Sagi's (2008) axiomatization of source preference. Using fMRI, brain activations were observed when subjects make 48 sequential binary choices among even-chance lotteries based on whether the trailing digits of a number of stock prices at market closing would be odd or even. Subsequently, subjects rate familiarity of the stock symbols. When contrasting brain activation from more familiar sources with those from less familiar ones, regions appearing to be more active include the putamen, medial frontal cortex, and superior temporal gyrus. ROI analysis showed that the activation patterns in the familiar-unfamiliar and unfamiliar-familiar contrasts are similar to those in the risk-ambiguity and ambiguity-risk contrasts reported by Hsu et al. (2005). This supports the conjecture that the risk-ambiguity distinction can be subsumed by the source preference hypothesis. Our odd-even design has the advantage of inducing the same "unambiguous" probability of half for each subject in each binary comparison. Our finding supports the implications of the Chew-Sagi model and rejects models based on global probabilistic sophistication, including rank-dependent models derived from non-additive probabilities, e.g., Choquet expected utility and cumulative prospect theory, as well as those based on multiple priors, e.g., alpha-maxmin. The finding in Hsu et al. (2005) that orbitofrontal cortex lesion patients display neither ambiguity aversion nor risk aversion offers further support to the Chew-Sagi model. Our finding also supports the Levy et al. (2007) contention of a single valuation system encompassing risk and ambiguity aversion. This is the first neuroimaging study of the source preference hypothesis using a design which can discriminate among decision models ranging from risk-based ones to those relying on multiple priors.

  4. Microbial risk assessment of drinking water based on hydrodynamic modelling of pathogen concentrations in source water.

    PubMed

    Sokolova, Ekaterina; Petterson, Susan R; Dienus, Olaf; Nyström, Fredrik; Lindgren, Per-Eric; Pettersson, Thomas J R

    2015-09-01

    Norovirus contamination of drinking water sources is an important cause of waterborne disease outbreaks. Knowledge of pathogen concentrations in source water is needed to assess the ability of a drinking water treatment plant (DWTP) to provide safe drinking water. However, pathogen enumeration in source water samples is often not sufficient to describe the source water quality. In this study, the norovirus concentrations were characterised at the contamination source, i.e. in sewage discharges. Then, the transport of norovirus within the water source (the river Göta älv in Sweden) under different loading conditions was simulated using a hydrodynamic model. Based on the estimated concentrations in source water, the required reduction of norovirus at the DWTP was calculated using quantitative microbial risk assessment (QMRA). The required reduction was compared with the estimated treatment performance at the DWTP. The average estimated concentration in source water varied between 4.8×10² and 7.5×10³ genome equivalents L⁻¹, and the average required reduction by treatment was between 7.6 and 8.8 log₁₀. The treatment performance at the DWTP was estimated to be adequate to deal with all tested loading conditions, but was heavily dependent on chlorine disinfection, with the risk of poor reduction by conventional treatment and slow sand filtration. To our knowledge, this is the first article to employ discharge-based QMRA, combined with hydrodynamic modelling, in the context of drinking water. Copyright © 2015 Elsevier B.V. All rights reserved.
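
    The required-reduction calculation can be sketched with a single-barrier, single-exposure simplification: invert a dose-response model at a tolerable daily infection risk and compare the tolerable finished-water concentration with the source-water estimate. The exponential dose-response parameter and risk target below are illustrative assumptions, not the study's full QMRA.

        import math

        def required_log_reduction(c_source, volume_l=1.0, r=0.1, p_target=1e-4):
            """Log10 treatment reduction so that P(infection per day) <= p_target."""
            dose_max = -math.log(1.0 - p_target) / r   # invert P = 1 - exp(-r * dose)
            c_max = dose_max / volume_l                # tolerable finished-water conc.
            return math.log10(c_source / c_max)

        # e.g., 7.5e3 genome equivalents per litre in source water (upper estimate)
        print(f"required reduction: {required_log_reduction(7.5e3):.1f} log10")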

  5. Competition-Based Learning: A Model for the Integration of Competitions with Project-Based Learning Using Open Source LMS

    ERIC Educational Resources Information Center

    Issa, Ghassan; Hussain, Shakir M.; Al-Bahadili, Hussein

    2014-01-01

    In an effort to enhance the learning process in higher education, a new model for Competition-Based Learning (CBL) is presented. The new model utilizes two well-known learning models, namely, the Project-Based Learning (PBL) and competitions. The new model is also applied in a networked environment with emphasis on collective learning as well as…

  6. Research on precise modeling of buildings based on multi-source data fusion of air to ground

    NASA Astrophysics Data System (ADS)

    Li, Yongqiang; Niu, Lubiao; Yang, Shasha; Li, Lixue; Zhang, Xitong

    2016-03-01

    To address the accuracy problem in precise building modeling, a test study was conducted using multi-source data for buildings in the same test area, including rooftop data from airborne LiDAR, aerial orthophotos, and façade data from vehicle-borne LiDAR. After the top and bottom outlines of the building clusters were accurately extracted, a series of qualitative and quantitative analyses of the 2D interval between the outlines was carried out. The results provide reliable accuracy support for precise building modeling based on air-ground multi-source data fusion; solutions to key technical problems are also discussed.

  7. Procedure for Separating Noise Sources in Measurements of Turbofan Engine Core Noise

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2006-01-01

    The study of core noise from turbofan engines has become more important as noise from other sources like the fan and jet has been reduced. A multiple-microphone and acoustic source modeling method to separate correlated and uncorrelated sources has been developed. The auto- and cross-spectra in the frequency range below 1000 Hz are fitted with a noise propagation model based on either a source couplet, consisting of a single incoherent source with a single coherent source, or a source triplet, consisting of a single incoherent source with two coherent point sources. Examples are presented using data from a Pratt & Whitney PW4098 turbofan engine. The method works well.
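
    The correlated/uncorrelated split can be demonstrated with two synthetic microphone signals sharing one coherent source: the magnitude-squared coherence times the auto-spectrum estimates the correlated source's contribution (the coherent output power). This is a generic illustration, not the paper's couplet/triplet fitting procedure.

        import numpy as np
        from scipy.signal import coherence, welch

        fs, n = 4096, 1 << 16
        rng = np.random.default_rng(7)
        common = rng.normal(size=n)                   # shared "core noise" source
        x = common + 0.8 * rng.normal(size=n)         # mic 1: common + local noise
        y = 0.9 * common + 0.8 * rng.normal(size=n)   # mic 2: common + local noise

        f, gamma2 = coherence(x, y, fs=fs, nperseg=2048)   # magnitude-squared coherence
        _, pxx = welch(x, fs=fs, nperseg=2048)             # auto-spectrum at mic 1
        coherent_power = gamma2 * pxx                      # correlated-source part of Pxx
        print("mean coherence below 1 kHz:", gamma2[f < 1000].mean())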

  8. W17_geonuc “Application of the Spectral Element Method to improvement of Ground-based Nuclear Explosion Monitoring”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larmat, Carene; Rougier, Esteban; Lei, Zhou

    This project supports the Source Physics Experiment (SPE; Snelson et al., 2013), which aims to develop new seismic source models of explosions. One priority of this program is first-principles numerical modeling to validate and extend current empirical models.

  9. Identification and elucidation of anthropogenic source contribution in PM10 pollutant: Insight gain from dispersion and receptor models.

    PubMed

    Roy, Debananda; Singh, Gurdeep; Yadav, Pankaj

    2016-10-01

    A source apportionment study of PM10 (particulate matter) in a critically polluted area of the Jharia coalfield, India, has been carried out using a dispersion model, Principal Component Analysis (PCA), and Chemical Mass Balance (CMB) techniques. The AERMOD atmospheric dispersion model was used to untangle the complexity of sources in the Jharia coalfield. The PCA and CMB analyses indicate that monitoring stations near the mining area were mainly affected by emissions from open coal mining and its associated activities, such as coal transportation and the loading and unloading of coal. Mine fire emissions also contributed a considerable amount of particulate matter at the monitoring stations. Locations in the city area were mostly affected by vehicular, Liquefied Petroleum Gas (LPG) and Diesel Generator (DG) set emissions, and residential and commercial activities. The experimental sampling and analysis illustrate how a dispersion-model technique, together with receptor-model concepts, can be strategically used for quantitative analysis of natural and anthropogenic sources of PM10. Copyright © 2016. Published by Elsevier B.V.
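
    A minimal sketch of the chemical mass balance step: measured ambient species concentrations are expressed as a nonnegative mixture of source profiles, solved here by nonnegative least squares. The species, profiles, and concentrations are invented placeholders.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: hypothetical source profiles (mass fraction of each species per
# unit PM10 mass from that source); rows: chemical species.
species = ["Al", "Si", "SO4", "EC", "OC"]
profiles = np.array([
    # coal mining  vehicles   mine fire
    [0.08,         0.01,      0.02],   # Al
    [0.20,         0.02,      0.05],   # Si
    [0.05,         0.10,      0.15],   # SO4
    [0.02,         0.25,      0.10],   # EC
    [0.10,         0.30,      0.35],   # OC
])
ambient = np.array([6.0, 16.0, 9.5, 9.0, 21.0])  # measured ug/m3 (placeholder)

# Nonnegative least squares gives source contributions in ug/m3.
contrib, residual_norm = nnls(profiles, ambient)
for name, c in zip(["coal mining", "vehicles", "mine fire"], contrib):
    print(f"{name:12s}: {c:6.1f} ug/m3")
print(f"residual norm: {residual_norm:.2f}")
```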

  10. Pairing top-down and bottom-up approaches to analyze catchment scale management of water quality and quantity

    NASA Astrophysics Data System (ADS)

    Lovette, J. P.; Duncan, J. M.; Band, L. E.

    2016-12-01

    Watershed management requires information on the hydrologic impacts of local to regional land use, land cover, and infrastructure conditions. Management of runoff volumes, storm flows, and water quality can benefit from large-scale, "top-down" screening tools using readily available information, as well as from more detailed, "bottom-up" process-based models that explicitly track local runoff production and routing from sources to receiving water bodies. Regional-scale data, available nationwide through the NHD+, and top-down models based on aggregated catchment information provide useful tools for estimating regional patterns of peak flows, volumes, and nutrient loads at the catchment level. Management impacts can be estimated with these models, but the models have limited ability to resolve impacts beyond simple changes to land cover proportions. Alternatively, distributed process-based models provide more flexibility in modeling management impacts by resolving spatial patterns of nutrient sources, runoff generation, and uptake. This bottom-up approach can incorporate explicit patterns of land cover, drainage connectivity, and vegetation extent, but is typically applied over smaller areas. Here, we first model peak flood flows and nitrogen loads across North Carolina's 70,000 NHD+ catchments using USGS regional streamflow regression equations and the SPARROW model. We also estimate management impact by altering aggregated sources in each of these models. To address the missing spatial implications of the top-down approach, we further explore the demand for riparian buffers as a management strategy, simulating the accumulation of nutrient sources along flow paths and the potential mitigation of these sources through forested buffers. We use the Regional Hydro-Ecological Simulation System (RHESSys) to model changes across several basins in North Carolina's Piedmont and Blue Ridge regions, ranging in size from 15 to 1,130 km². The two approaches provide a complementary set of tools for large-area screening, followed by smaller-scale, more process-based assessment and design tools.

  11. Estimating Biases for Regional Methane Fluxes using Co-emitted Tracers

    NASA Astrophysics Data System (ADS)

    Bambha, R.; Safta, C.; Michelsen, H. A.; Cui, X.; Jeong, S.; Fischer, M. L.

    2017-12-01

    Methane is a powerful greenhouse gas, and the development and improvement of emissions models rely on understanding the flux of methane released from anthropogenic sources relative to releases from other sources. Increasing production of shale oil and gas in the mid-latitudes and the associated fugitive emissions are suspected to be a dominant contributor to the global methane increase. Landfills, sewage treatment, and other sources may be dominant sources in some parts of the U.S. Large discrepancies between emissions models present a great challenge to reconciling atmospheric measurements with inventory-based estimates for various emissions sectors. Current approaches for measuring regional emissions yield highly uncertain estimates because of the sparsity of measurement sites and the presence of multiple simultaneous sources. Satellites can provide wide spatial coverage at the expense of much lower measurement precision compared to ground-based instruments. Methods for effective assimilation of data from a variety of sources are critically needed to perform regional GHG attribution with existing measurements and to determine how to structure future measurement systems, including satellites. We present a hierarchical Bayesian framework to estimate surface methane fluxes based on atmospheric concentration measurements and a Lagrangian transport model (Weather Research and Forecasting and Stochastic Time-Inverted Lagrangian Transport). Structural errors in the transport model are estimated with the help of co-emitted tracer species with well-defined decay rates. We conduct the analyses at regional scales that are based on similar geographical and meteorological conditions. For regions where the data are informative, we further refine flux estimates by emissions sector and infer spatially and temporally varying biases parameterized as spectral random field representations.
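
    A minimal numpy sketch of the linear-Gaussian core of such an inversion: concentrations y relate to regional fluxes x through a transport (footprint) matrix H, and the posterior blends a prior flux estimate with the measurements. The hierarchical layers, tracer-based transport-error terms, and random-field biases of the abstract are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_flux = 50, 8

H = rng.uniform(0, 1, (n_obs, n_flux))       # footprint/transport matrix (placeholder)
x_true = rng.uniform(0.5, 2.0, n_flux)       # "true" regional fluxes
y = H @ x_true + rng.normal(0, 0.1, n_obs)   # synthetic concentration measurements

x_prior = np.ones(n_flux)                    # prior flux estimate (e.g., inventory)
S_prior = 0.5**2 * np.eye(n_flux)            # prior covariance
S_obs = 0.1**2 * np.eye(n_obs)               # measurement + model error covariance

# Standard Gaussian posterior (Kalman/Bayesian update):
K = S_prior @ H.T @ np.linalg.inv(H @ S_prior @ H.T + S_obs)
x_post = x_prior + K @ (y - H @ x_prior)
S_post = (np.eye(n_flux) - K @ H) @ S_prior

print("posterior fluxes:", np.round(x_post, 2))
print("posterior sd:    ", np.round(np.sqrt(np.diag(S_post)), 2))
```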

  12. Microseismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-07-01

    At the heart of microseismic event measurement is the task of estimating the location of microseismic sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional microseismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction, but is also prone to errors. Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, FWI of microseismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent FWI of microseismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modelled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
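
    A minimal sketch of the convolution trick named in the abstract: convolving observed data with a modelled reference trace, and modelled data with the observed reference trace, cancels the unknown source wavelet from the comparison. Single-trace synthetic version; all signals are random placeholders.

```python
import numpy as np

def convolved_misfit(d_obs, d_syn, ref_obs, ref_syn):
    """Source-independent misfit: || d_obs * ref_syn - d_syn * ref_obs ||^2.

    If d = w * g (unknown wavelet w, Green's function g), then both
    d_obs * ref_syn and d_syn * ref_obs contain the factor w_obs * w_syn,
    so the unknown wavelet/ignition time cancels when the model is right.
    """
    a = np.convolve(d_obs, ref_syn)
    b = np.convolve(d_syn, ref_obs)
    return np.sum((a - b) ** 2)

# Synthetic check: different wavelets, identical Green's functions.
rng = np.random.default_rng(1)
g_main, g_ref = rng.normal(size=200), rng.normal(size=200)   # Earth responses
w_obs, w_syn = rng.normal(size=30), rng.normal(size=30)      # true vs assumed source

d_obs, ref_obs = np.convolve(w_obs, g_main), np.convolve(w_obs, g_ref)
d_syn, ref_syn = np.convolve(w_syn, g_main), np.convolve(w_syn, g_ref)
print(convolved_misfit(d_obs, d_syn, ref_obs, ref_syn))  # ~0 up to rounding
```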

  13. Rapid modeling of complex multi-fault ruptures with simplistic models from real-time GPS: Perspectives from the 2016 Mw 7.8 Kaikoura earthquake

    NASA Astrophysics Data System (ADS)

    Crowell, B.; Melgar, D.

    2017-12-01

    The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles and exhibiting intricate surface deformation patterns. The complexity of this event has motivated multidisciplinary geophysical studies to probe the underlying source physics and better inform future earthquake hazard models. However, events like Kaikoura raise the question of how well (or how poorly) such earthquakes can be modeled automatically in real time while still satisfying the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first compute simple point source models of the earthquake using peak ground displacement scaling and a coseismic-offset-based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as on simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Second, we perform a slip inversion based upon the CMT fault orientations and forward-model near-field maximum expected tsunami wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of the disagreement in ground motions attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match the tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT-driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
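
    A minimal sketch of the peak-ground-displacement scaling step used in G-FAST-style point-source models. The coefficients follow the published functional form log10(PGD) = A + B·Mw + C·Mw·log10(R) but are illustrative placeholders, as are the station values.

```python
import numpy as np

# Illustrative coefficients for log10(PGD[cm]) = A + B*Mw + C*Mw*log10(R[km]).
A, B, C = -4.434, 1.047, -0.138   # placeholder values of the published form

def magnitude_from_pgd(pgd_cm, r_km):
    """Invert the PGD scaling law for moment magnitude at each station."""
    return (np.log10(pgd_cm) - A) / (B + C * np.log10(r_km))

# Hypothetical peak displacements at three GPS stations:
pgd = np.array([35.0, 12.0, 4.0])      # cm
dist = np.array([60.0, 120.0, 250.0])  # hypocentral distance, km

mw = magnitude_from_pgd(pgd, dist)
print("per-station Mw:", np.round(mw, 2))
print("event Mw estimate:", round(float(mw.mean()), 2))
```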

  14. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 μm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of its modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
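
    A minimal sketch of model-based least squares reconstruction for a coded source, with the forward model reduced to a 1D convolution of the object with the mask pattern; the mask, noise level, and solver damping are toy choices, not the paper's system model.

```python
import numpy as np
from scipy.signal import convolve
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(2)
n = 128
mask = rng.integers(0, 2, 15).astype(float)   # toy coded aperture (open/closed)

def forward(x):                  # detector image = object convolved with mask
    return convolve(x, mask, mode="same")

def adjoint(y):                  # adjoint = correlation with the mask
    return convolve(y, mask[::-1], mode="same")

A = LinearOperator((n, n), matvec=forward, rmatvec=adjoint, dtype=float)

x_true = np.zeros(n)
x_true[40:45] = 1.0              # a small feature
x_true[80] = 2.0                 # a point feature
y = forward(x_true) + rng.normal(0, 0.05, n)  # noisy coded radiograph

x_hat = lsqr(A, y, damp=0.1)[0]               # damped least squares estimate
print("peak recovered near sample 80:", int(np.argmax(x_hat)))
```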

  15. Monitoring alert and drowsy states by modeling EEG source nonstationarity

    NASA Astrophysics Data System (ADS)

    Hsu, Sheng-Hsiou; Jung, Tzyy-Ping

    2017-10-01

    Objective. As a human brain performs various cognitive functions within ever-changing environments, states of the brain characterized by recorded brain activities such as the electroencephalogram (EEG) are inevitably nonstationary. The challenges of analyzing nonstationary EEG signals include finding the neurocognitive sources that underlie different brain states and using EEG data to quantitatively assess the state changes. Approach. This study hypothesizes that brain activities under different states, e.g. levels of alertness, can be modeled as distinct compositions of statistically independent sources using independent component analysis (ICA). This study presents a framework to quantitatively assess EEG source nonstationarity and estimate levels of alertness. The framework was tested against EEG data collected from 10 subjects performing a sustained-attention task in a driving simulator. Main results. Empirical results illustrate that EEG signals under alert versus drowsy states, indexed by reaction speeds to driving challenges, can be characterized by distinct ICA models. By quantifying the goodness-of-fit of each ICA model to the EEG data using the model deviation index (MDI), we found that MDIs were significantly correlated with the reaction speeds (r = -0.390 with alertness models and r = 0.449 with drowsiness models); the opposite signs of the correlations indicated that the two models accounted for sources in the alert and drowsy states, respectively. Based on the observed source nonstationarity, this study also proposes an online framework using a subject-specific ICA model trained on an initial (alert) state to track the level of alertness. For classification of alert against drowsy states, the proposed online framework achieved an average area under the curve of 0.745 and compared favorably with a classic power-based approach. Significance. This ICA-based framework provides a new way to study changes of brain states and can be applied to monitoring cognitive or mental states of human operators in attention-critical settings or in passive brain-computer interfaces.
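
    The paper's model deviation index is specific to its ICA framework, so the sketch below shows only the classic power-based baseline it is compared against: a theta/alpha band-power ratio from Welch spectra as a crude drowsiness index, demonstrated on synthetic epochs.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Average power of signal x in the [lo, hi] Hz band (Welch estimate)."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    sel = (f >= lo) & (f <= hi)
    return np.trapz(pxx[sel], f[sel])

def drowsiness_index(eeg, fs=250):
    """Theta/alpha power ratio; rising theta relative to alpha is a
    commonly used marker of decreasing alertness."""
    return band_power(eeg, fs, 4.0, 7.0) / band_power(eeg, fs, 8.0, 12.0)

# Synthetic demo: an alpha-dominant "alert" epoch vs a theta-dominant "drowsy" one.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
alert = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 5 * t) \
        + 0.2 * rng.normal(size=t.size)
drowsy = 0.3 * np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 5 * t) \
         + 0.2 * rng.normal(size=t.size)
print("alert :", round(drowsiness_index(alert, fs), 2))
print("drowsy:", round(drowsiness_index(drowsy, fs), 2))
```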

  16. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only the Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is subsequently written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of the thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  17. Monte Carlo modelling of large scale NORM sources using MCNP.

    PubMed

    Wallace, J D

    2013-12-01

    Representative Monte Carlo modelling of large-scale planar sources (for comparison to external environmental radiation fields) is undertaken using planar cylindrical sources of substantial diameter and thin profile. The relative impacts of source extent, soil thickness, and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source-to-detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP-based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. An initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source-to-detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  18. Development of FWIGPR, an open-source package for full-waveform inversion of common-offset GPR data

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.

    2017-12-01

    We introduce a package for full-waveform inversion (FWI) of Ground Penetrating Radar (GPR) data based on a combination of open-source programs. The FWI requires a good starting model, based on direct knowledge of field conditions or on traditional ray-based inversion methods. With a good starting model, the FWI can improve the resolution of selected subsurface features. The package will be made available for general use in educational and research activities. The FWIGPR package consists of four main components: 3D-to-2D data conversion, source wavelet estimation, forward modeling, and inversion. (These four components additionally require the development, by the user, of a good starting model.) A major challenge with GPR data is the unknown form of the waveform emitted by the transmitter held close to the ground surface. We apply a blind deconvolution method to estimate the source wavelet, based on a sparsity assumption about the reflectivity series of the subsurface model (Gholami and Sacchi 2012). The estimated wavelet is deconvolved from the data to yield the sparsest reflectivity series with the fewest reflectors. The gprMax code (www.gprmax.com) is used as the forward modeling tool and the PEST parameter estimation package (www.pesthomepage.com) for the inversion. To reduce computation time, the field data are converted to an effective 2D equivalent, and the gprMax code can be run in 2D mode. In the first step, the user must create a good starting model of the data, presumably using ray-based methods. This estimated model is introduced to the FWI process as an initial model. Next, the 3D data are converted to 2D, and the user estimates the source wavelet that best fits the observed data via the sparsity assumption on the earth's response. Last, PEST runs gprMax with the initial model, calculates the misfit between the synthetic and observed data, and, using an iterative algorithm that calls gprMax several times in each iteration, finds successive models that better fit the data. To gauge whether the iterative process has arrived at a local or global minimum, the process can be repeated with a range of starting models. Tests have shown that this package can successfully improve estimates of selected subsurface model parameters for simple synthetic and real data. Ongoing research will focus on FWI of more complex scenarios.
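
    A minimal sketch of the package's outer loop, with a toy analytic forward model standing in for gprMax and scipy's least-squares driver standing in for PEST; layer depth and relative permittivity are the inverted parameters. The starting model is deliberately chosen close to the truth, echoing the need for a good ray-based initial model.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, t):
    """Toy stand-in for a gprMax run: one reflection whose arrival time and
    amplitude depend on layer depth (m) and relative permittivity."""
    depth, eps_r = params
    c = 0.3                                       # m/ns, speed of light
    t0 = 2.0 * depth * np.sqrt(eps_r) / c         # two-way travel time (ns)
    amp = 1.0 / (1.0 + eps_r)                     # crude reflectivity proxy
    return amp * np.exp(-((t - t0) ** 2) / 4.0)   # Gaussian "wavelet"

t = np.linspace(0, 30, 600)                       # time axis in ns
observed = forward_model([0.9, 6.0], t)           # synthetic "field" trace

def residuals(params):
    return forward_model(params, t) - observed

start = [0.8, 5.0]                                # ray-based starting model
fit = least_squares(residuals, start, bounds=([0.1, 1.0], [3.0, 12.0]))
print("recovered depth, eps_r:", np.round(fit.x, 2))
```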

  19. Photochemical grid model implementation and application of VOC, NOx, and O3 source apportionment

    EPA Science Inventory

    For the purposes of developing optimal emissions control strategies, efficient approaches are needed to identify the major sources or groups of sources that contribute to elevated ozone (O3) concentrations. Source-based apportionment techniques implemented in photochemical grid m...

  20. Including non-dietary sources into an exposure assessment of the European Food Safety Authority: The challenge of multi-sector chemicals such as Bisphenol A.

    PubMed

    von Goetz, N; Pirow, R; Hart, A; Bradley, E; Poças, F; Arcella, D; Lillegard, I T L; Simoneau, C; van Engelen, J; Husoy, T; Theobald, A; Leclercq, C

    2017-04-01

    In the most recent risk assessment of Bisphenol A (BPA), the European Food Safety Authority conducted, for the first time, a multi-route aggregate exposure assessment. This assessment includes exposure via dietary sources as well as the contributions of the most important non-dietary sources. Both average and high aggregate exposures were calculated by source-to-dose modeling (forward calculation) for different age groups and compared with estimates based on urinary biomonitoring data (backward calculation). The aggregate exposure estimates obtained by forward and backward modeling are of the same order of magnitude, with forward modeling yielding higher estimates associated with larger uncertainty. Yet, only forward modeling can indicate the relative contributions of different sources. Dietary exposure, especially via canned food, appears to be the most important exposure source and, based on the central aggregate exposure estimates, contributes around 90% of internal exposure to total (conjugated plus unconjugated) BPA. Dermal exposure via thermal paper and, to a lesser extent, via cosmetic products may contribute around 10% for some age groups. The uncertainty around these estimates is considerable, but since dermal absorption bypasses the first-pass metabolism of BPA by conjugation, dermal sources may be of equal or even higher toxicological relevance than dietary sources. Copyright © 2017 Elsevier Inc. All rights reserved.
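
    A minimal sketch of the source-to-dose forward calculation: external intakes per source are scaled by route-specific absorbed fractions, normalized by body weight, and summed. All numbers are invented placeholders, not EFSA values.

```python
# Source-to-dose forward calculation (all values are invented placeholders).
body_weight = 60.0  # kg

# Per source: (external intake in ug/day, fraction reaching systemic circulation).
sources = {
    "canned food":   (20.0, 1.00),   # oral route
    "other diet":    (5.0,  1.00),
    "thermal paper": (1.5,  0.50),   # dermal route: no first-pass conjugation
    "cosmetics":     (0.4,  0.50),
}

internal = {name: intake * absorbed / body_weight
            for name, (intake, absorbed) in sources.items()}
total = sum(internal.values())

for name, dose in sorted(internal.items(), key=lambda kv: -kv[1]):
    print(f"{name:13s}: {dose:6.3f} ug/kg bw/day ({100 * dose / total:4.1f}%)")
print(f"{'aggregate':13s}: {total:6.3f} ug/kg bw/day")
```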

  1. Optimum load distribution between heat sources based on the Cournot model

    NASA Astrophysics Data System (ADS)

    Penkovskii, A. V.; Stennikov, V. A.; Khamisov, O. V.

    2015-08-01

    One of the widespread models of heat supply to consumers, represented in the "single buyer" format, is considered. The proposed methodological basis for its description and investigation draws on principles of game theory, basic propositions of microeconomics, and the models and methods of the theory of hydraulic circuits. The original mathematical model of a heat supply system operating under the "single buyer" organizational structure yields a solution satisfying the market Nash equilibrium. The distinctive feature of the developed mathematical model is that, along with the problems traditionally solved within bilateral relations between heat energy sources and heat consumers, it considers a network component with the inherent physicotechnical properties of the heat network and the business factors connected with the costs of producing and transporting heat energy. This approach makes it possible to determine the optimum loading of the heat energy sources: loads that meet the given heat demand of consumers while maximizing the profit of the heat energy sources and minimizing the heat network costs over a specified time. The practical search for the market equilibrium is illustrated by the example of a heat supply system with two heat energy sources operating on an integrated heat network. The solution procedure is presented graphically and illustrates computations based on a stepwise iterative optimization of the loading of the heat energy sources (Cournot best-response adjustment), with the corresponding computation of the heat energy price for consumers.
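
    A minimal sketch of the Cournot adjustment between two heat sources facing a linear inverse demand curve, with network hydraulics and transport costs collapsed into per-unit costs; this is a drastic simplification of the paper's model, for intuition only.

```python
# Cournot adjustment between two heat sources (toy linear model).
a, b = 100.0, 0.5          # inverse demand for heat: price = a - b*(q1 + q2)
c1, c2 = 20.0, 30.0        # per-unit production + transport cost of each source

def best_response(c_own, q_other):
    """Profit-maximizing output given the rival's output (first-order condition)."""
    return max(0.0, (a - c_own - b * q_other) / (2.0 * b))

q1 = q2 = 0.0
for _ in range(50):        # stepwise iteration toward the Nash equilibrium
    q1, q2 = best_response(c1, q2), best_response(c2, q1)

price = a - b * (q1 + q2)
print(f"equilibrium loads: q1 = {q1:.1f}, q2 = {q2:.1f}, heat price = {price:.1f}")
# Analytic Nash equilibrium for comparison: q_i = (a - 2*c_i + c_j) / (3*b)
print("analytic:", (a - 2 * c1 + c2) / (3 * b), (a - 2 * c2 + c1) / (3 * b))
```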

  2. Predicting dense nonaqueous phase liquid dissolution using a simplified source depletion model parameterized with partitioning tracers

    NASA Astrophysics Data System (ADS)

    Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.

    2008-07-01

    Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.

  3. A Large Underestimate of Formic Acid from Tropical Fires: Constraints from Space-Borne Measurements.

    PubMed

    Chaliyakunnel, S; Millet, D B; Wells, K C; Cady-Pereira, K E; Shephard, M W

    2016-06-07

    Formic acid (HCOOH) is one of the most abundant carboxylic acids and a dominant source of atmospheric acidity. Recent work indicates a major gap in the HCOOH budget, with atmospheric concentrations much larger than expected from known sources. Here, we employ recent space-based observations from the Tropospheric Emission Spectrometer with the GEOS-Chem atmospheric model to better quantify the HCOOH source from biomass burning, and assess whether fire emissions can help close the large budget gap for this species. The space-based data reveal a severe model HCOOH underestimate most prominent over tropical burning regions, suggesting a major missing source of organic acids from fires. We develop an approach for inferring the fractional fire contribution to ambient HCOOH and find, based on measurements over Africa, that pyrogenic HCOOH:CO enhancement ratios are much higher than expected from direct emissions alone, revealing substantial secondary organic acid production in fire plumes. Current models strongly underestimate (by 10 ± 5 times) the total primary and secondary HCOOH source from African fires. If a 10-fold bias were to extend to fires in other regions, biomass burning could produce 14 Tg/a of HCOOH in the tropics or 16 Tg/a worldwide. However, even such an increase would only represent 15-20% of the total required HCOOH source, implying the existence of other larger missing sources.
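
    A minimal sketch of the enhancement-ratio diagnostic: regressing HCOOH enhancements over background against CO enhancements in fire-influenced scenes gives the pyrogenic HCOOH:CO ratio, from which a fractional fire contribution can be formed. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
co_bg, hcooh_bg = 90.0, 0.4             # background mixing ratios (ppb), placeholders
d_co = rng.uniform(0, 400, n)           # CO enhancement in fire plumes (ppb)
true_er = 0.006                         # "true" HCOOH:CO enhancement ratio
d_hcooh = true_er * d_co + rng.normal(0, 0.15, n)

co = co_bg + d_co                       # what would actually be observed
hcooh = hcooh_bg + d_hcooh

# Enhancement ratio = slope of (HCOOH - background) vs (CO - background):
er = np.polyfit(co - co_bg, hcooh - hcooh_bg, 1)[0]
print(f"fitted HCOOH:CO enhancement ratio: {er:.4f}")

# Fractional fire contribution to ambient HCOOH in each scene:
frac_fire = er * (co - co_bg) / hcooh
print(f"mean fire fraction: {frac_fire.mean():.2f}")
```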

  4. PROGRESS TOWARDS NEXT GENERATION, WAVEFORM BASED THREE-DIMENSIONAL MODELS AND METRICS TO IMPROVE NUCLEAR EXPLOSION MONITORING IN THE MIDDLE EAST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savage, B; Peter, D; Covellone, B

    2009-07-02

    Efforts to update current wave speed models of the Middle East require a thoroughly tested database of sources and recordings. Recordings of seismic waves traversing the region from Tibet to the Red Sea will be the principal metric in guiding improvements to the current wave speed model. Precise characterizations of the earthquakes, specifically depths and faulting mechanisms, are essential to avoid mapping source errors into the refined wave speed model. Errors associated with the source are manifested in amplitude and phase changes. Source depths and paths near nodal planes are particularly error-prone, as small changes may severely affect the resulting wavefield. Once sources are quantified, regions requiring refinement will be highlighted using adjoint tomography methods based on spectral element simulations [Komatitsch and Tromp (1999)]. An initial database of 250 regional Middle Eastern events from 1990-2007 was inverted for depth and focal mechanism using teleseismic arrivals [Kikuchi and Kanamori (1982)] and regional surface and body waves [Zhao and Helmberger (1994)]. From this initial database, we reinterpreted a large, well-recorded subset of 201 events through a direct comparison between data and synthetics based upon a centroid moment tensor inversion [Liu et al. (2004)]. Evaluation was done using both a 1D reference model [Dziewonski and Anderson (1981)] at periods greater than 80 seconds and a 3D model [Kustowski et al. (2008)] at periods of 25 seconds and longer. The final source reinterpretations will be within the 3D model, as this is the starting point for the adjoint tomography. Transitioning from a 1D to a 3D wave speed model shows dramatic improvements when comparisons are done at shorter periods (25 s). Synthetics from the 1D model were created through mode summations, while those from the 3D simulations were created using the spectral element method. To further assess errors in source depth and focal mechanism, comparisons between the three methods were made. These comparisons help to identify problematic stations and sources which may bias the final solution. Estimates of standard errors were generated for each event's source depth and focal mechanism to identify poorly constrained events. A final, well-characterized set of sources and stations will then be used to iteratively improve the wave speed model of the Middle East. After a few iterations during the adjoint inversion process, the sources will be reexamined and relocated to further reduce the mapping of source errors into structural features. Finally, efforts continue in developing the infrastructure required to quickly generate event kernels at the n-th iteration and invert for a new, (n+1)-th, wave speed model of the Middle East. While development of the infrastructure proceeds, initial tests using a limited number of events show that the 3D model, while vastly improved compared to the 1D model, still requires substantial modifications. Employing our new, full source set and iterating the adjoint inversions at successively shorter periods will lead to significant changes and refined wave speed structures of the Middle East.

  5. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
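
    A minimal numpy sketch of the ACLS idea as described in the abstract: a CLS calibration is augmented with a spectral shape derived from the calibration residuals, so unmodeled variation (here, an interferent with unknown concentrations) is fitted explicitly instead of biasing the analyte estimates. This illustrates the published concept, not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
n_samp, n_wave = 30, 200
wl = np.linspace(0, 1, n_wave)

# Two known pure-component spectra plus one unmodeled interferent:
pure = np.vstack([np.exp(-((wl - 0.3) / 0.05) ** 2),
                  np.exp(-((wl - 0.6) / 0.08) ** 2)])
interferent = np.exp(-((wl - 0.8) / 0.04) ** 2)

C = rng.uniform(0, 1, (n_samp, 2))                 # known concentrations
z = rng.uniform(0, 1, (n_samp, 1))                 # unknown interferent levels
A = C @ pure + z @ interferent[None, :] + rng.normal(0, 0.005, (n_samp, n_wave))

# Classical least squares calibration, then residual analysis:
K = np.linalg.lstsq(C, A, rcond=None)[0]           # estimated pure spectra
E = A - C @ K                                      # calibration residuals
_, _, vt = np.linalg.svd(E, full_matrices=False)
K_aug = np.vstack([K, vt[:1]])                     # augment with top residual shape

# Prediction on a new spectrum: fit the augmented model, keep analyte rows only.
a_new = np.array([0.7, 0.2]) @ pure + 0.5 * interferent
c_cls = np.linalg.lstsq(K.T, a_new, rcond=None)[0]
c_acls = np.linalg.lstsq(K_aug.T, a_new, rcond=None)[0]
print("CLS estimate :", np.round(c_cls, 3))        # biased by the interferent
print("ACLS estimate:", np.round(c_acls[:2], 3))   # close to (0.7, 0.2)
```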

  6. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  7. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  8. Identifying PM2.5 and PM0.1 sources for epidemiological studies in California.

    PubMed

    Hu, Jianlin; Zhang, Hongliang; Chen, Shuhua; Ying, Qi; Wiedinmyer, Christine; Vandenberghe, Francois; Kleeman, Michael J

    2014-05-06

    The University of California-Davis_Primary (UCD_P) model was applied to simultaneously track ∼900 source contributions to primary particulate matter (PM) in California for seven continuous years (January 1st, 2000 to December 31st, 2006). Predicted source contributions to primary PM2.5 mass, PM1.8 elemental carbon (EC), PM1.8 organic carbon (OC), PM0.1 EC, and PM0.1 OC were in general agreement with the results from previous source apportionment studies using receptor-based techniques. All sources were further subjected to a constraint check based on model performance for PM trace elemental composition. A total of 151 PM2.5 sources and 71 PM0.1 sources contained PM elements that were predicted at concentrations in general agreement with measured values at nearby monitoring sites. Significant spatial heterogeneity was predicted among the 151 PM2.5 and 71 PM0.1 source concentrations, and significantly different seasonal profiles were predicted for PM2.5 and PM0.1 in central California versus southern California. Population-weighted concentrations of PM emitted from various sources, calculated using the UCD_P model spatial information, differed from the central monitor estimates by up to 77% for primary PM2.5 mass and 148% for PM2.5 EC, because the central monitor concentration is not representative of exposure for the nearby population. The results from the UCD_P model provide enhanced source apportionment information for epidemiological studies to examine the relationship between health effects and concentrations of primary PM from individual sources.
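
    A minimal sketch of the population-weighted concentration versus a central-monitor estimate, on a synthetic grid; the gamma-distributed fields are placeholders for the UCD_P model output and census data.

```python
import numpy as np

rng = np.random.default_rng(6)
ny, nx = 40, 40
conc = rng.gamma(2.0, 1.5, (ny, nx))     # modeled source concentration (ug/m3)
pop = rng.gamma(1.0, 500.0, (ny, nx))    # population per grid cell

# Population-weighted concentration uses the full spatial field:
pwc = np.sum(conc * pop) / np.sum(pop)

# A central-monitor estimate assigns everyone the value at one cell:
central = conc[ny // 2, nx // 2]

print(f"population-weighted: {pwc:.2f} ug/m3")
print(f"central monitor:     {central:.2f} ug/m3")
print(f"difference:          {100 * (central - pwc) / pwc:+.0f}%")
```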

  9. New insight on petroleum system modeling of Ghadames basin, Libya

    NASA Astrophysics Data System (ADS)

    Bora, Deepender; Dubey, Siddharth

    2015-12-01

    Underdown and Redfern (2008) performed detailed petroleum system modeling of the Ghadames basin along an E-W section. However, hydrocarbon generation, migration, and accumulation change significantly across the basin due to its complex geological history, so a single section cannot be considered representative of the whole basin. This study aims to bridge this gap by performing petroleum system modeling along a N-S section, providing new insights into source rock maturation and the generation and migration of hydrocarbons using 2D basin modeling. This study, in conjunction with the earlier work, provides a 3D context for petroleum system modeling in the Ghadames basin. Hydrocarbon generation from the lower Silurian Tanezzuft formation and the Upper Devonian Aouinet Ouenine started during the late Carboniferous. However, the high subsidence rate during the middle to late Cretaceous and elevated heat flow in the Cenozoic had the greatest impact on source rock transformation and hydrocarbon generation, whereas large-scale uplift and erosion during the Alpine orogeny had a significant impact on migration and accumulation. Migration is visible along faults that were reactivated during the Austrian unconformity. Peak hydrocarbon expulsion was reached during the Oligocene for both the Tanezzuft and the Aouinet Ouenine source rocks. Based on the modeling results, capillary-entry-pressure-driven downward expulsion of hydrocarbons from the lower Silurian Tanezzuft formation into the underlying Bir Tlacsin formation is observed during the middle Cretaceous. Kinetic modeling has helped to model the composition and distribution of the hydrocarbons generated from both source rocks. Application of source-to-reservoir tracking technology suggests that some accumulations at shallow stratigraphic levels received hydrocarbons from both the Tanezzuft and Aouinet Ouenine source rocks, implying charge mixing. Five petroleum systems were identified based on source-to-reservoir correlation technology in Petromod*. This study builds upon the original work of Underdown and Redfern (2008) and offers new insights and interpretation of the data.

  10. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band, near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements, and it can be used with arbitrary planar array geometries. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., the gain and phase perturbations and the positions of the elements, with high accuracy, and its performance improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. These two algorithms together constitute the robust sound source localization approach. More accurate steering vectors can then be provided for further processing, such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
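
    A minimal far-field, narrowband MUSIC sketch for an ideal uniform linear array. The paper's W2D-MUSIC operates on a near-field, broadband model with estimated gain, phase, and position perturbations, all of which are omitted here.

```python
import numpy as np
from scipy.signal import find_peaks

# Narrowband MUSIC on a uniform linear array (far-field toy version).
m = 8                                   # sensors
d = 0.5                                 # element spacing in wavelengths
true_doas = [-20.0, 35.0]               # source directions (degrees)
n_snap = 500

def steering(theta_deg):
    phase = 2 * np.pi * d * np.arange(m) * np.sin(np.radians(theta_deg))
    return np.exp(1j * phase)

rng = np.random.default_rng(7)
A = np.column_stack([steering(t) for t in true_doas])
S = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
N = 0.1 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))
X = A @ S + N                           # simulated snapshots

R = X @ X.conj().T / n_snap             # sample covariance matrix
_, vecs = np.linalg.eigh(R)             # eigenvalues in ascending order
En = vecs[:, : m - 2]                   # noise subspace (m minus #sources columns)

grid = np.linspace(-90, 90, 721)
p = np.array([1 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])
peaks, _ = find_peaks(p)
best = peaks[np.argsort(p[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(grid[best]))
```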

  11. A simple-source model of military jet aircraft noise

    NASA Astrophysics Data System (ADS)

    Morgan, Jessica; Gee, Kent L.; Neilsen, Tracianne; Wall, Alan T.

    2010-10-01

    The jet plumes produced by military jet aircraft radiate significant amounts of noise. A need to better understand the characteristics of the turbulence-induced aeroacoustic sources has motivated the present study. The purpose of the study is to develop a simple-source model of jet noise that can be compared to the measured data. The study is based on acoustic data collected near a tied-down F-22 Raptor. The simplest model consisted of adjusting the origin of a monopole above a rigid planar reflector until the locations of the predicted and measured interference nulls matched. The model has developed into an extended Rayleigh distribution of partially correlated monopoles, which fits the measured data from the F-22 significantly better. The results and basis of the model match the current prevailing theory that jet noise consists of both correlated and uncorrelated sources. In addition, this simple-source model conforms to the theory that the peak source location moves upstream with increasing frequency and lower engine conditions.
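
    A minimal sketch of the simplest model described: a monopole above a rigid plane, whose ground-reflected image interferes with the direct path. The null frequencies shift with the assumed source height, which is what the origin-adjustment step exploits; the geometry values are arbitrary.

```python
import numpy as np

def two_ray_spectrum(freqs, src_h, mic_h, dist, c=343.0):
    """|p|^2 at a microphone for a monopole above a rigid plane (image source)."""
    r_direct = np.hypot(dist, src_h - mic_h)
    r_image = np.hypot(dist, src_h + mic_h)      # reflection off the rigid ground
    k = 2 * np.pi * freqs / c
    p = np.exp(1j * k * r_direct) / r_direct + np.exp(1j * k * r_image) / r_image
    return np.abs(p) ** 2

freqs = np.linspace(50, 2000, 2000)
level = 10 * np.log10(two_ray_spectrum(freqs, src_h=2.0, mic_h=1.5, dist=20.0))

# Interference nulls: local minima of the spectrum. Their frequencies constrain
# the effective source height, which is how the model origin was adjusted.
is_null = (level[1:-1] < level[:-2]) & (level[1:-1] < level[2:])
print("null frequencies (Hz):", np.round(freqs[1:-1][is_null], 0))
```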

  12. DEVELOPMENT OF A MODEL FOR REAL TIME CO CONCENTRATIONS NEAR ROADWAYS

    EPA Science Inventory

    Although emission standards for mobile sources continue to be tightened, tailpipe emissions in urban areas continue to be a major source of human exposure to air toxics. Current human exposure models using simplified assumptions based on fixed air monitoring stations and region...

  13. Applications of a thermal-based two-source energy balance model using Priestley-Taylor approach for surface temperature partitioning (TSEB_PTT) under advective conditions

    USDA-ARS?s Scientific Manuscript database

    Operational application of the two source energy balance model (TSEB) which can estimate evaportranspiration (ET) and the components evaporation (E), transpiration (T) of the land surface in different climates is very useful for many applications in hydrology and agriculture. The TSEB model uses an ...

  14. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    PubMed

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model should be preferred over a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we present an analytical, field-independent source model specifically developed for GPU-based MC dose calculations, together with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) was proposed. Each PSR contains a group of particles that are of the same type, close in energy, and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterizes the probability densities of particle location, direction, and energy for each primary photon PSR, scattered photon PSR, and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on the GPU, we proposed a GPU-friendly sampling strategy, which ensures that the particles sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high-dose-gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low-dose-gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum dose difference within 1.7%. The maximum relative difference in output factors was within 0.5%. A passing rate of over 98.5% was achieved in 3D gamma-index tests with 2%/2 mm criteria in both an IMRT prostate patient case and a head-and-neck case. These results demonstrate the efficacy of our model in accurately representing a reference phase-space file. We have also tested the efficiency gain of our source model over our previously developed phase-space-let file source model. The overall efficiency of dose calculation was found to be improved by ~1.3-2.2 times in water and patient cases using our analytical model.

  15. Source apportionment of soil heavy metals using robust absolute principal component scores-robust geographically weighted regression (RAPCS-RGWR) receptor model.

    PubMed

    Qu, Mingkai; Wang, Yan; Huang, Biao; Zhao, Yongcun

    2018-06-01

    Traditional source apportionment models, such as absolute principal component scores-multiple linear regression (APCS-MLR), are susceptible to outliers, which may be widely present in regional geochemical datasets. Furthermore, such models are built on variable space rather than geographical space and thus cannot effectively capture the local spatial characteristics of each source's contributions. To overcome these limitations, a new receptor model, robust absolute principal component scores-robust geographically weighted regression (RAPCS-RGWR), was proposed based on the traditional APCS-MLR model. The new method was then applied to the source apportionment of soil metal elements in a region of Wuhan City, China, as a case study. Evaluations revealed that: (i) the RAPCS-RGWR model performed better than the APCS-MLR model in identifying the major sources of soil metal elements, and (ii) the source contributions estimated by the RAPCS-RGWR model were closer to the true soil metal concentrations than those estimated by the APCS-MLR model. The proposed RAPCS-RGWR model is shown to be a more effective source apportionment method than the non-robust, global APCS-MLR model in dealing with regional geochemical datasets. Copyright © 2018 Elsevier B.V. All rights reserved.
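
    A minimal sketch of the geographically weighted regression step: each location receives its own least-squares fit of metal concentrations on factor scores, with observations down-weighted by a Gaussian kernel of distance. The robust weighting and RAPCS stages of the paper are omitted, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 300
xy = rng.uniform(0, 10, (n, 2))                   # sample locations
apcs = rng.uniform(0, 1, (n, 2))                  # two source factor scores
# Spatially varying "true" source contributions:
beta1 = 1.0 + 0.3 * xy[:, 0]
beta2 = 2.0 - 0.1 * xy[:, 1]
metal = beta1 * apcs[:, 0] + beta2 * apcs[:, 1] + rng.normal(0, 0.05, n)

def gwr_coeffs(target_xy, bandwidth=2.0):
    """Locally weighted least squares at one location (Gaussian kernel)."""
    dist = np.linalg.norm(xy - target_xy, axis=1)
    w = np.exp(-0.5 * (dist / bandwidth) ** 2)
    X = apcs * np.sqrt(w)[:, None]                # weighted design matrix
    y = metal * np.sqrt(w)
    return np.linalg.lstsq(X, y, rcond=None)[0]

print("local coefficients at (2, 2):", np.round(gwr_coeffs(np.array([2.0, 2.0])), 2))
print("local coefficients at (8, 8):", np.round(gwr_coeffs(np.array([8.0, 8.0])), 2))
```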

  16. An infrared sky model based on the IRAS point source data

    NASA Technical Reports Server (NTRS)

    Cohen, Martin; Walker, Russell; Wainscoat, Richard; Volk, Kevin; Walker, Helen; Schwartz, Deborah

    1990-01-01

    A detailed model for the infrared point source sky is presented that comprises geometrically and physically realistic representations of the galactic disk, bulge, spheroid, spiral arms, molecular ring, and absolute magnitudes. The model was guided by a parallel Monte Carlo simulation of the Galaxy. The content of the galactic source table constitutes an excellent match to the 12 micrometer luminosity function in the simulation, as well as the luminosity functions at V and K. Models are given for predicting the density of asteroids to be observed, and the diffuse background radiance of the Zodiacal cloud. The model can be used to predict the character of the point source sky expected for observations from future infrared space experiments.

  17. Proceedings of the Numerical Modeling for Underground Nuclear Test Monitoring Symposium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, S.R.; Kamm, J.R.

    1993-11-01

    The purpose of the meeting was to discuss the state of the art in numerical simulations of nuclear explosion phenomenology with applications to test ban monitoring. We focused on the uniqueness of model fits to data, the measurement and characterization of material response models, advanced modeling techniques, and applications of modeling to monitoring problems. The second goal of the symposium was to establish a dialogue between seismologists and explosion-source code calculators. The meeting was divided into five main sessions: explosion source phenomenology, material response modeling, numerical simulations, the seismic source, and phenomenology from near source to far field. We feel the symposium reached many of its goals. Individual papers submitted at the conference are indexed separately in the database.

  18. Coupling Hydrodynamic and Wave Propagation Codes for Modeling of Seismic Waves recorded at the SPE Test.

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Rougier, E.; Delorey, A.; Steedman, D. W.; Bradley, C. R.

    2016-12-01

    The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. To this end, the SPE program includes a strong modeling effort based on first-principles calculations, with the challenge of capturing both the source and near-source processes and those taking place later in time as seismic waves propagate within complex 3D geologic environments. In this paper, we report on results of modeling that uses hydrodynamic simulation codes (Abaqus and CASH) coupled with a 3D full waveform propagation code, SPECFEM3D. For modeling the near-source region, we employ a fully coupled Euler-Lagrange (CEL) modeling capability with a new continuum-based visco-plastic fracture model for simulation of damage processes, called AZ_Frac. These capabilities produce high-fidelity models of various factors believed to be key in the generation of seismic waves: the explosion dynamics, a weak grout-filled borehole, the surrounding jointed rock, and damage creation and deformations happening around the source and the free surface. SPECFEM3D, based on the Spectral Element Method (SEM), is a direct numerical method for full wave modeling with mathematical accuracy. The coupling interface consists of a series of grid points of the SEM mesh situated inside the hydrodynamic code's domain. Displacement time series at these points are computed using output data from CASH or Abaqus (by interpolation if needed) and fed into the time-marching scheme of SPECFEM3D. We will present validation tests with Sharpe's model and comparisons of modeled waveforms with Rg waves (2-8 Hz) recorded up to 2 km from the SPE shots. We especially show the effects of local topography, velocity structure, and spallation. Our models predict smaller amplitudes of Rg waves for the first five SPE shots compared to purely elastic models such as Denny & Johnson (1991).

  19. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    NASA Astrophysics Data System (ADS)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

    Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by fusing valued information selectively from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct mechanical vibration and acoustic frequency spectra of a data-driven industrial process parameter model based on selective fusion multi-condition samples and multi-source features. Multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. Genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing the selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.

  20. Micro-seismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-03-01

    At the heart of micro-seismic event measurement is the task of estimating the location of micro-seismic sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional micro-seismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction, but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.

  1. Detecting black bear source-sink dynamics using individual-based genetic graphs.

    PubMed

    Draheim, Hope M; Moore, Jennifer A; Etter, Dwayne; Winterstein, Scott R; Scribner, Kim T

    2016-07-27

    Source-sink dynamics affect population connectivity, spatial genetic structure, and population viability for many species. We introduce a novel approach that uses individual-based genetic graphs to identify source-sink areas within a continuously distributed population of black bears (Ursus americanus) in the northern lower peninsula (NLP) of Michigan, USA. Black bear harvest samples (n = 569, from 2002, 2006 and 2010) were genotyped at 12 microsatellite loci, and locations were compared across years to identify areas of consistent occupancy over time. We compared graph metrics estimated for a genetic model with metrics from 10 ecological models to identify ecological factors that were associated with sources and sinks. We identified 62 source nodes, 16 of which represent important source areas (net flux > 0.7), and 79 sink nodes. Source strength was significantly correlated with local bear harvest density (a proxy for bear density) and habitat suitability. Additionally, resampling simulations showed our approach is robust to potential sampling bias from uneven sample dispersion. The findings demonstrate that black bears in the NLP exhibit asymmetric gene flow and that individual-based genetic graphs can characterize source-sink dynamics in continuously distributed species in the absence of discrete habitat patches. Our findings warrant consideration of undetected source-sink dynamics and their implications for harvest management of game species. © 2016 The Author(s).
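
    A minimal sketch of the net-flux computation on a directed genetic graph, assuming edge weights that stand in for estimated gene flow (the genetic estimation itself is outside this sketch); the 0.7 threshold mirrors the abstract.

```python
import networkx as nx

# Directed gene-flow graph: edge weights are hypothetical migration rates.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("A", "B", 0.8), ("A", "C", 0.6), ("B", "C", 0.2),
    ("C", "B", 0.1), ("D", "A", 0.1), ("B", "D", 0.5),
])

def net_flux(g, node):
    """Outflow minus inflow; strongly positive suggests a source, negative a sink."""
    out_w = sum(d["weight"] for _, _, d in g.out_edges(node, data=True))
    in_w = sum(d["weight"] for _, _, d in g.in_edges(node, data=True))
    return out_w - in_w

for node in sorted(G.nodes):
    f = net_flux(G, node)
    label = "source" if f > 0.7 else ("sink" if f < 0 else "neutral")
    print(f"{node}: net flux {f:+.2f} ({label})")
```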

  2. Development and validation of a combined phased acoustical radiosity and image source model for predicting sound fields in rooms.

    PubMed

    Marbjerg, Gerd; Brunskog, Jonas; Jeong, Cheol-Ho; Nilsson, Erling

    2015-09-01

    A model combining acoustical radiosity and the image source method, including phase shifts on reflection, has been developed. The model is denoted Phased Acoustical Radiosity and Image Source Method (PARISM), and it was developed to model both specular and diffuse reflections with complex-valued and angle-dependent boundary conditions. This paper mainly describes the combination of the two models and the implementation of the angle-dependent boundary conditions. It furthermore describes how a pressure impulse response is obtained from the energy-based acoustical radiosity by regarding the model as stochastic. Three methods of implementation are proposed and investigated, and recommendations are made for their use. Validation of the image source method is done by comparison with finite element simulations of a rectangular room with a porous absorber ceiling. Results from the full model are compared with results from other simulation tools and with measurements. The comparisons of the full model are done for real-valued and angle-independent surface properties. The proposed model agrees well with both the measured results and the alternative theories, and it furthermore shows a more realistic spatial variation than energy-based methods because interference is taken into account.
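
    For readers unfamiliar with the image source half of such a model, the sketch below computes first-order image sources and their arrival times in a rectangular room. It is a bare-bones illustration (rigid walls, first reflection order only), not the PARISM implementation with phase shifts and angle-dependent boundary conditions:

    ```python
    import numpy as np

    def first_order_images(src, room):
        """First-order image sources of a point source in a rectangular room.

        src: (x, y, z) source position; room: (Lx, Ly, Lz) dimensions.
        A wall at coordinate 0 mirrors x -> -x; a wall at L mirrors
        x -> 2L - x (and likewise for y and z).
        """
        images = []
        for axis in range(3):
            for wall in (0.0, room[axis]):
                img = np.array(src, dtype=float)
                img[axis] = 2.0 * wall - img[axis]
                images.append(img)
        return images

    # Arrival times for a 5 x 4 x 3 m room at c = 343 m/s
    src, rcv = (1.0, 2.0, 1.5), (4.0, 1.0, 1.2)
    for img in first_order_images(src, (5.0, 4.0, 3.0)):
        t = np.linalg.norm(img - np.array(rcv)) / 343.0
        print(f"image at {img} arrives after {1e3 * t:.2f} ms")
    ```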

  3. Modeling of single event transients with dual double-exponential current sources: Implications for logic cell characterization

    DOE PAGES

    Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...

    2015-08-07

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Accurate prediction of soft error susceptibility from SETs therefore requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
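
    A minimal sketch of the waveform idea follows: two classic double-exponential pulses in parallel, one fast and one slow. All amplitudes and time constants below are illustrative placeholders, not parameters extracted in the paper:

    ```python
    import numpy as np

    def double_exp(t, i_peak, tau_rise, tau_fall):
        """Classic double-exponential current pulse, zero for t < 0."""
        t = np.asarray(t, dtype=float)
        pulse = i_peak * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))
        return np.where(t >= 0.0, pulse, 0.0)

    def dual_double_exp(t, fast, slow):
        """Two double-exponential sources in parallel, as in the dual-source
        SET model: their sum shapes the injected current waveform."""
        return double_exp(t, *fast) + double_exp(t, *slow)

    t = np.linspace(0.0, 1e-9, 1000)                    # 1 ns window
    i_set = dual_double_exp(t,
                            (2.0e-3, 5e-12, 50e-12),    # fast: A, tau_r, tau_f
                            (0.5e-3, 20e-12, 300e-12))  # slow component
    print(f"peak SET current: {1e3 * i_set.max():.2f} mA")
    ```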

  4. Comparison of two trajectory based models for locating particle sources for two rural New York sites

    NASA Astrophysics Data System (ADS)

    Zhou, Liming; Hopke, Philip K.; Liu, Wei

    Two back-trajectory-based statistical models, simplified quantitative transport bias analysis (QTBA) and residence-time weighted concentrations (RTWC), have been compared for their capabilities to identify likely locations of source emissions contributing to observed particle concentrations at Potsdam and Stockton, New York. QTBA attempts to take into account the distribution of concentrations around the directions of the back trajectories. The full QTBA approach also considers deposition processes (wet and dry); simplified QTBA omits deposition. It is best used with multiple-site data. Similarly, the RTWC approach uses concentrations measured at different sites along with the back trajectories to distribute the concentration contributions across the spatial domain of the trajectories. In this study, these models are used in combination with the source contribution values obtained by a previous positive matrix factorization analysis of particle composition data from Potsdam and Stockton. The six sources common to the two sites (sulfate, soil, zinc smelter, nitrate, wood smoke and copper smelter) were analyzed. The results of the two methods are consistent and locate large, clearly defined sources well. The RTWC approach can find more minor sources but may also give unrealistic estimates of the source locations.

  5. Development and validation of a new population-based simulation model of osteoarthritis in New Zealand.

    PubMed

    Wilson, R; Abbott, J H

    2018-04-01

    To describe the construction and preliminary validation of a new population-based microsimulation model developed to analyse the health and economic burden and cost-effectiveness of treatments for knee osteoarthritis (OA) in New Zealand (NZ). We developed the New Zealand Management of Osteoarthritis (NZ-MOA) model, a discrete-time state-transition microsimulation model of the natural history of radiographic knee OA. In this article, we report on the model structure, derivation of input data, validation of baseline model parameters against external data sources, and validation of model outputs by comparison of the predicted population health loss with previous estimates. The NZ-MOA model simulates both the structural progression of radiographic knee OA and the stochastic development of multiple disease symptoms. Input parameters were sourced from NZ population-based data where possible, and from international sources where NZ-specific data were not available. The predicted distributions of structural OA severity and health utility detriments associated with OA were externally validated against other sources of evidence, and uncertainty resulting from key input parameters was quantified. The resulting lifetime and current population health-loss burden was consistent with estimates of previous studies. The new NZ-MOA model provides reliable estimates of the health loss associated with knee OA in the NZ population. The model structure is suitable for analysis of the effects of a range of potential treatments, and will be used in future work to evaluate the cost-effectiveness of recommended interventions within the NZ healthcare system. Copyright © 2018 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
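
    The mechanics of a discrete-time state-transition microsimulation can be sketched in a few lines. The states and annual transition probabilities below are invented for illustration and are not the NZ-MOA parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical annual transition matrix over radiographic OA states:
    # none -> mild -> moderate -> severe (absorbing); rows sum to 1.
    P = np.array([[0.96, 0.04, 0.00, 0.00],
                  [0.00, 0.93, 0.07, 0.00],
                  [0.00, 0.00, 0.90, 0.10],
                  [0.00, 0.00, 0.00, 1.00]])

    def simulate_cohort(n_people=10_000, years=40):
        """Step every simulated person through the transition matrix."""
        states = np.zeros(n_people, dtype=int)       # all start disease-free
        for _ in range(years):
            u = rng.random(n_people)
            cum = np.cumsum(P[states], axis=1)       # per-person transition CDF
            states = (u[:, None] > cum).sum(axis=1)  # inverse-CDF sampling
        return np.bincount(states, minlength=P.shape[0]) / n_people

    shares = simulate_cohort()
    print(dict(zip(["none", "mild", "moderate", "severe"], shares.round(3))))
    ```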

  6. The efficient model to define a single light source position by use of high dynamic range image of 3D scene

    NASA Astrophysics Data System (ADS)

    Wang, Xu-yang; Zhdanov, Dmitry D.; Potemin, Igor S.; Wang, Ying; Cheng, Han

    2016-10-01

    One of the challenges of augmented reality is the seamless combination of objects of the real and virtual worlds, including light sources. We suggest measurement and computation models for reconstructing the position of a light source. The model is based on the dependence of the luminance of a small diffuse surface directly illuminated by a point-like source placed at a short distance from the observer or camera. The advantage of the computational model is its ability to eliminate the effects of indirect illumination. The paper presents a number of examples to illustrate the efficiency and accuracy of the proposed method.

  7. Real-time identification of indoor pollutant source positions based on neural network locator of contaminant sources and optimized sensor networks.

    PubMed

    Vukovic, Vladimir; Tabares-Velasco, Paulo Cesar; Srebric, Jelena

    2010-09-01

    A growing interest in security and occupant exposure to contaminants has revealed a need for fast and reliable identification of contaminant sources during incidental situations. To determine potential contaminant source positions in outdoor environments, current state-of-the-art modeling methods use computational fluid dynamics simulations on parallel processors. In indoor environments, current tools match accidental contaminant distributions with cases from precomputed databases of possible concentration distributions. These methods require intensive computations in pre- and postprocessing. On the other hand, neural networks have emerged as a tool for rapid concentration forecasting of outdoor environmental contaminants such as nitrogen oxides or sulfur dioxide. All of these modeling methods depend on the type of sensors used for real-time measurements of contaminant concentrations. A review of existing sensor technologies revealed that no perfect sensor exists, but the intensity of work in this area promises better sensors in the near future. The main goal of the presented research study was to extend neural network modeling from outdoor to indoor identification of source positions, making this technology applicable to building indoor environments. The developed neural network Locator of Contaminant Sources was also used to optimize the number and allocation of contaminant concentration sensors for real-time prediction of indoor contaminant source positions. Such prediction should take place within seconds of receiving real-time contaminant concentration sensor data. For the purpose of neural network training, a multizone program provided distributions of contaminant concentrations for known source positions throughout a test building. Trained networks had an output indicating contaminant source positions based on measured concentrations in different building zones. A validation case based on a real building layout and experimental data demonstrated the ability of this method to identify contaminant source positions. Future research intentions are focused on integration with real sensor networks and model improvements for more complicated contamination scenarios.
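
    A toy version of the core idea (train a small feed-forward network to map sensor concentration vectors to the emitting zone) can be sketched with scikit-learn. The random zone-to-sensor response matrix below merely stands in for the multizone model outputs used for training in the study:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    n_zones, n_sensors, n_runs = 6, 4, 600

    # Fixed zone-to-sensor response matrix: a stand-in for the multizone
    # model mapping a release in zone z to steady-state sensor readings.
    A = rng.random((n_sensors, n_zones)) + 0.1

    X = np.empty((n_runs, n_sensors))
    y = rng.integers(n_zones, size=n_runs)
    for k, zone in enumerate(y):
        strength = 1.0 + rng.random()                # unknown release rate
        X[k] = A[:, zone] * strength + 0.02 * rng.standard_normal(n_sensors)

    # Small network: sensor concentrations in, source zone out.
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X[:500], y[:500])
    print("held-out accuracy:", clf.score(X[500:], y[500:]))
    ```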

  8. Application of Molecular Typing Results in Source Attribution Models: The Case of Multiple Locus Variable Number Tandem Repeat Analysis (MLVA) of Salmonella Isolates Obtained from Integrated Surveillance in Denmark.

    PubMed

    de Knegt, Leonardo V; Pires, Sara M; Löfström, Charlotta; Sørensen, Gitte; Pedersen, Karl; Torpdahl, Mia; Nielsen, Eva M; Hald, Tine

    2016-03-01

    Salmonella is an important cause of bacterial foodborne infections in Denmark. To identify the main animal-food sources of human salmonellosis, risk managers have relied on a routine application of a microbial subtyping-based source attribution model since 1995. In 2013, multiple locus variable number tandem repeat analysis (MLVA) replaced phage typing as the subtyping method for surveillance of S. Enteritidis and S. Typhimurium isolated from animals, food, and humans in Denmark. The purpose of this study was to develop a modeling approach applying a combination of serovars, MLVA types, and antibiotic resistance profiles for Salmonella source attribution, and to assess the utility of the results for food safety decision-makers. Full and simplified MLVA schemes from surveillance data were tested, and model fit and consistency of results were assessed using statistical measures. We conclude that the loci schemes STTR5/STTR10/STTR3 for S. Typhimurium and SE9/SE5/SE2/SE1/SE3 for S. Enteritidis can be used in microbial subtyping-based source attribution models. Based on the results, we discuss why the discriminatory level of the subtyping method applied will often need to be adjusted to fit the purpose of the study and the available data. The issues discussed are also considered highly relevant when applying, e.g., extended multi-locus sequence typing or next-generation sequencing techniques. © 2015 Society for Risk Analysis.

  9. Dual-Source Linear Energy Prediction (LINE-P) Model in the Context of WSNs.

    PubMed

    Ahmed, Faisal; Tamberg, Gert; Le Moullec, Yannick; Annus, Paul

    2017-07-20

    Energy harvesting technologies such as miniature power solar panels and micro wind turbines are increasingly used to help power wireless sensor network nodes. However, a major drawback of energy harvesting is its varying and intermittent characteristic, which can negatively affect the quality of service. This calls for careful design and operation of the nodes, possibly by means of, e.g., dynamic duty cycling and/or dynamic frequency and voltage scaling. In this context, various energy prediction models have been proposed in the literature; however, they are typically compute-intensive or only suitable for a single type of energy source. In this paper, we propose Linear Energy Prediction "LINE-P", a lightweight, yet relatively accurate model based on approximation and sampling theory; LINE-P is suitable for dual-source energy harvesting. Simulations and comparisons against existing similar models have been conducted with low and medium resolutions (i.e., 60 and 22 min intervals/24 h) for the solar energy source (low variations) and with high resolutions (15 min intervals/24 h) for the wind energy source. The results show that the accuracy of the solar-based and wind-based predictions is up to approximately 98% and 96%, respectively, while requiring a lower complexity and memory than the other models. For the cases where LINE-P's accuracy is lower than that of other approaches, it still has the advantage of lower computing requirements, making it more suitable for embedded implementation, e.g., in wireless sensor network coordinator nodes or gateways.

  10. Integrating watershed– and farm–scale models to target critical source areas while maintaining farm economic viability

    USDA-ARS?s Scientific Manuscript database

    Nonpoint source pollution from agriculture and the impacts of mitigating best management practices are commonly evaluated based on hydrologic boundaries using watershed models. However, management practice effectiveness is impacted by which of the feasible practices are actually selected, implemente...

  11. Fermi Large Area Telescope First Source Catalog

    DOE PAGES

    Abdo, A. A.; Ackermann, M.; Ajello, M.; ...

    2010-05-25

    Here, we present a catalog of high-energy gamma-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), during the first 11 months of the science phase of the mission, which began on 2008 August 4. The First Fermi-LAT catalog (1FGL) contains 1451 sources detected and characterized in the 100 MeV to 100 GeV range. Source detection was based on the average flux over the 11 month period, and the threshold likelihood Test Statistic is 25, corresponding to a significance of just over 4σ. The 1FGL catalog includes source location regions, defined in terms of elliptical fits to the 95% confidence regions, and power-law spectral fits as well as flux measurements in five energy bands for each source. In addition, monthly light curves are provided. Using a protocol defined before launch we have tested for several populations of gamma-ray sources among the sources in the catalog. For individual LAT-detected sources we provide firm identifications or plausible associations with sources in other astronomical catalogs. Identifications are based on correlated variability with counterparts at other wavelengths, or on spin or orbital periodicity. For the catalogs and association criteria that we have selected, 630 of the sources are unassociated. Care was taken to characterize the sensitivity of the results to the model of interstellar diffuse gamma-ray emission used to model the bright foreground, with the result that 161 sources at low Galactic latitudes and toward bright local interstellar clouds are flagged as having properties that are strongly dependent on the model or as potentially being due to incorrectly modeled structure in the Galactic diffuse emission.

  12. Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations

    NASA Astrophysics Data System (ADS)

    Dalguer, Luis A.; Fukushima, Yoshimitsu; Irikura, Kojiro; Wu, Changjiang

    2017-09-01

    Inspired by the first workshop on Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations (BestPSHANI), conducted by the International Atomic Energy Agency (IAEA) on 18-20 November, 2015 in Vienna (http://www-pub.iaea.org/iaeameetings/50896/BestPSHANI), this PAGEOPH topical volume collects several extended articles from this workshop as well as several new contributions. A total of 17 papers have been selected on topics ranging from the seismological aspects of earthquake cycle simulations for source-scaling evaluation, seismic source characterization, source inversion and ground motion modeling (based on finite fault rupture using dynamic, kinematic, stochastic and empirical Green's function approaches) to the engineering application of simulated ground motion for the analysis of the seismic response of structures. These contributions include applications to real earthquakes and descriptions of current practice for assessing seismic hazard in terms of nuclear safety in low seismicity areas, as well as proposals for physics-based hazard assessment for critical structures near large earthquakes. Collectively, the papers of this volume highlight the usefulness of physics-based models for evaluating and understanding the physical causes of observed and empirical data, as well as for predicting ground motion beyond the range of recorded data. Particular importance is given to the validation and verification of the models by comparing synthetic results with observed data and empirical models.

  13. Predicting the digestible energy of corn determined with growing swine from nutrient composition and cross-species measurements.

    PubMed

    Smith, B; Hassen, A; Hinds, M; Rice, D; Jones, D; Sauber, T; Iiams, C; Sevenich, D; Allen, R; Owens, F; McNaughton, J; Parsons, C

    2015-03-01

    The DE values of corn grain for pigs differ among corn sources. More accurate prediction of DE may improve diet formulation and reduce diet cost. Corn grain sources (n = 83) were assayed with growing swine (20 kg) in DE experiments with total collection of feces, with 3-wk-old broiler chicks in nitrogen-corrected apparent ME (AME) trials, and with cecectomized adult roosters in nitrogen-corrected true ME (TME) studies. Additional AME data for the corn grain source set were generated based on an existing near-infrared transmittance prediction model (near-infrared transmittance-predicted AME [NIT-AME]). Corn source nutrient composition was determined by wet chemistry methods. These data were then used to 1) test the accuracy of predicting swine DE of individual corn sources based on available literature equations and nutrient composition and 2) develop models for predicting DE of sources from nutrient composition and the cross-species information gathered above (AME, NIT-AME, and TME). The overall measured DE, AME, NIT-AME, and TME values were 4,105 ± 11, 4,006 ± 10, 4,004 ± 10, and 4,086 ± 12 kcal/kg DM, respectively. Prediction models were developed using 80% of the corn grain sources; the remaining 20% were reserved for validation of the developed prediction equations. Literature equations based on nutrient composition proved imprecise for predicting corn DE; the root mean square error of prediction ranged from 105 to 331 kcal/kg, equivalent to 2.6 to 8.8% error. Yet among the corn composition traits, 4-variable models developed in the current study provided adequate prediction of DE (model R² ranging from 0.76 to 0.79 and root mean square error [RMSE] of 50 kcal/kg). When prediction equations were tested using the validation set, these models had a 1 to 1.2% error of prediction. Simple linear equations from AME, NIT-AME, or TME provided an accurate prediction of DE for individual sources (R² ranged from 0.65 to 0.73 and RMSE ranged from 50 to 61 kcal/kg). The percentage error of prediction based on the validation data set was greater (1.4%) for the TME model than for the NIT-AME or AME models (1 and 1.2%, respectively), indicating that swine DE values can be accurately predicted using AME or NIT-AME. In conclusion, regression equations developed from broiler measurements or from analyzed nutrient composition proved adequate to reliably predict the DE of commercially available corn hybrids for growing pigs.

  14. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    PubMed

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
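
    The "model-free" branch rests on numerical deconvolution of the tissue curve by the local arterial input function (AIF). A common implementation is truncated-SVD deconvolution, sketched below on synthetic curves; the gamma-variate AIF, exponential residue function and truncation threshold are illustrative assumptions, not the QUASAR processing chain:

    ```python
    import numpy as np

    def deconvolve_tsvd(aif, tissue, dt, rel_thresh=0.05):
        """Model-free perfusion estimate via truncated-SVD deconvolution.

        Solves tissue = dt * conv(aif, k) for k(t) = CBF * R(t); CBF is then
        the peak of k. Singular values below rel_thresh * s_max are dropped
        to stabilise the inversion against noise.
        """
        n = len(aif)
        A = dt * np.array([[aif[i - j] if i >= j else 0.0
                            for j in range(n)] for i in range(n)])
        U, s, Vt = np.linalg.svd(A)
        s_inv = np.where(s > rel_thresh * s[0], 1.0 / s, 0.0)
        k = Vt.T @ (s_inv * (U.T @ tissue))
        return k.max()                       # CBF estimate (peak of CBF*R)

    # Synthetic check: gamma-variate AIF, exponential residue function
    dt, t = 1.0, np.arange(60.0)
    aif = t ** 3 * np.exp(-t / 1.5)
    cbf_true, r = 0.012, np.exp(-t / 10.0)
    tissue = cbf_true * dt * np.convolve(aif, r)[:60]
    print(f"true CBF: {cbf_true}, recovered ~ {deconvolve_tsvd(aif, tissue, dt):.4f}")
    ```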

  15. Fermi Large Area Telescope third source catalog

    DOE PAGES

    Acero, F.; Ackermann, M.; Ajello, M.; ...

    2015-06-12

    Here, we present the third Fermi Large Area Telescope (LAT) source catalog (3FGL) of sources in the 100 MeV–300 GeV range. Based on the first 4 yr of science data from the Fermi Gamma-ray Space Telescope mission, it is the deepest yet in this energy range. Relative to the Second Fermi LAT catalog, the 3FGL catalog incorporates twice as much data, as well as a number of analysis improvements, including improved calibrations at the event reconstruction level, an updated model for Galactic diffuse γ-ray emission, a refined procedure for source detection, and improved methods for associating LAT sources with potential counterparts at other wavelengths. The 3FGL catalog includes 3033 sources above 4σ significance, with source location regions, spectral properties, and monthly light curves for each. Of these, 78 are flagged as potentially being due to imperfections in the model for Galactic diffuse emission. Twenty-five sources are modeled explicitly as spatially extended, and overall 238 sources are considered as identified based on angular extent or correlated variability (periodic or otherwise) observed at other wavelengths. For 1010 sources we have not found plausible counterparts at other wavelengths. More than 1100 of the identified or associated sources are active galaxies of the blazar class; several other classes of non-blazar active galaxies are also represented in the 3FGL. Pulsars represent the largest Galactic source class. From source counts of Galactic sources we estimate that the contribution of unresolved sources to the Galactic diffuse emission is ~3% at 1 GeV.

  16. DEVELOPMENT OF A WATERSHED-BASED MERCURY POLLUTION CHARACTERIZATION SYSTEM

    EPA Science Inventory

    To investigate total mercury loadings to streams in a watershed, we have developed a watershed-based source quantification model, the Watershed Mercury Characterization System. The system uses grid-based GIS modeling technology to calculate total soil mercury concentrations and ...

  17. Clawpack: Building an open source ecosystem for solving hyperbolic PDEs

    USGS Publications Warehouse

    Iverson, Richard M.; Mandli, K.T.; Ahmadia, Aron J.; Berger, M.J.; Calhoun, Donna; George, David L.; Hadjimichael, Y.; Ketcheson, David I.; Lemoine, Grady L.; LeVeque, Randall J.

    2016-01-01

    Clawpack is a software package designed to solve nonlinear hyperbolic partial differential equations using high-resolution finite volume methods based on Riemann solvers and limiters. The package includes a number of variants aimed at different applications and user communities. Clawpack has been actively developed as an open source project for over 20 years. The latest major release, Clawpack 5, introduces a number of new features and changes to the code base and a new development model based on GitHub and Git submodules. This article provides a summary of the most significant changes, the rationale behind some of these changes, and a description of our current development model.

  18. Current Source Based on H-Bridge Inverter with Output LCL Filter

    NASA Astrophysics Data System (ADS)

    Blahnik, Vojtech; Talla, Jakub; Peroutka, Zdenek

    2015-09-01

    The paper deals with the control of a current source with an LCL output filter. The controlled current source is realized as a single-phase inverter whose output LCL filter provides low ripple of the output current. However, systems incorporating LCL filters require more complex control strategies, and there are several interesting approaches to the control of this type of converter. This paper presents an inverter control algorithm which combines model-based control with direct current control based on resonant controllers and single-phase vector control. The primary goal is to reduce the current ripple and distortion below the required limits and to provide fast and precise control of the output current. The proposed control technique is verified by measurements on a laboratory model.

  19. Added-value joint source modelling of seismic and geodetic data

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank

    2013-04-01

    In tectonically active regions, earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only at a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These required basis model parameters have to be given, assumed or inferred in a previous, separate and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The model implementation allows for fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g. the high definition of the source location from geodetic data and the sensitivity of the seismic data to moment release at larger depths. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with a very limited number of a priori assumptions on the source. A vital part of our approach is rigorous data weighting based on empirically estimated data errors. We construct full data error variance-covariance matrices for geodetic data to account for correlated data noise, and we also weight the seismic data based on their signal-to-noise ratio. The estimation of the data errors and the fast forward modelling open the door for Bayesian inference of the source model parameters. The source model product then features parameter uncertainty estimates and reveals parameter trade-offs that arise from imperfect data coverage and data errors. We applied our new source modelling approach to the 2010 Haiti earthquake, for which a number of apparently different seismic, geodetic and joint source models have already been reported, mostly without model parameter uncertainty estimates. We show here that the variability of all these source models seems to arise from inherent model parameter trade-offs and mostly has little statistical significance; e.g., even using a large dataset comprising seismic and geodetic data, the confidence interval of the fault dip remains as wide as about 20 degrees.

  20. Integrating model behavior, optimization, and sensitivity/uncertainty analysis: overview and application of the MOUSE software toolbox

    USDA-ARS?s Scientific Manuscript database

    This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...

  1. Analyzing Variability in Landscape Nutrient Loading Using Spatially-Explicit Maps in the Great Lakes Basin

    NASA Astrophysics Data System (ADS)

    Hamlin, Q. F.; Kendall, A. D.; Martin, S. L.; Whitenack, H. D.; Roush, J. A.; Hannah, B. A.; Hyndman, D. W.

    2017-12-01

    Excessive loading of nitrogen and phosphorus to the landscape has caused biologically and economically damaging eutrophication and harmful algal blooms in the Great Lakes Basin (GLB) and across the world. We mapped source-specific loads of nitrogen and phosphorus to the landscape using broadly available data across the GLB. SENSMap (Spatially Explicit Nutrient Source Map) is a 30 m resolution snapshot of nutrient loads ca. 2010. We use these maps to study variable nutrient loading and provide this information to watershed managers through NOAA's GLB Tipping Points Planner. SENSMap individually maps nutrient point sources and six non-point sources: 1) atmospheric deposition, 2) septic tanks, 3) non-agricultural chemical fertilizer, 4) agricultural chemical fertilizer, 5) manure, and 6) nitrogen fixation from legumes. To model source-specific loads at high resolution, SENSMap synthesizes a wide range of remotely sensed, surveyed, and tabular data. Using these spatially explicit nutrient loading maps, we can better calibrate local land use-based water quality models and provide insight to watershed managers on how to focus nutrient reduction strategies. Here we examine differences in dominant nutrient sources across the GLB, and how those sources vary by land use. SENSMap's high resolution, source-specific approach offers a different lens to understand nutrient loading than traditional semi-distributed or land use based models.

  2. Numerical simulations of Asian dust storms using a coupled climate-aerosol microphysical model

    NASA Astrophysics Data System (ADS)

    Su, Lin; Toon, Owen B.

    2009-07-01

    We have developed a three-dimensional coupled microphysical/climate model based on the National Center for Atmospheric Research Community Atmospheres Model and the University of Colorado/NASA Community Aerosol and Radiation Model for Atmospheres. We have used the model to investigate the sources, removal processes, transport, and optical properties of Asian dust aerosol and its impact on downwind regions. The model simulations are conducted primarily during the time frame of the Aerosol Characterization Experiment-Asia field experiment (March-May 2001), since considerable in situ data were available for that period. Our dust source function follows Ginoux et al. (2001). We modified the dust source function by using the friction velocity instead of the 10-m wind based on wind erosion theory, by adding a size-dependent threshold friction velocity following Marticorena and Bergametti (1995), and by adding a soil moisture correction. A Weibull distribution is implemented to estimate the subgrid-scale wind speed variability. We use eight size bins for mineral dust ranging from 0.1 to 10 μm radius. Generally, the model reproduced the aerosol optical depth retrieved by the ground-based Aerosol Robotic Network (AERONET) Sun photometers at six study sites ranging in location from near the Asian dust sources to the Eastern Pacific region. By constraining the dust complex refractive index with AERONET retrievals near the dust source, we also find the single-scattering albedo to be consistent with AERONET retrievals. However, large regional variations are observed due to local pollution. The timing of dust events is consistent with the National Institute for Environmental Studies (NIES) lidar data in Beijing and Nagasaki. However, the simulated dust aerosols are at higher altitudes than those observed by the NIES lidar.
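
    The modified source function combines a size- and soil-moisture-dependent threshold friction velocity with Weibull-distributed sub-grid winds. A schematic NumPy version is sketched below; the flux form and all constants are illustrative, not the exact parameterization of the paper:

    ```python
    import math

    import numpy as np

    rng = np.random.default_rng(2)

    def dust_flux(u_star_mean, u_star_t, c=1.0e-9, k_weibull=2.0, n=10_000):
        """Grid-cell dust emission with sub-grid wind variability.

        Friction velocity is sampled from a Weibull distribution whose mean
        equals the resolved value; emission occurs only where it exceeds the
        threshold u_star_t, with a u*^3 (1 - u*_t/u*) flux shape.
        """
        scale = u_star_mean / math.gamma(1.0 + 1.0 / k_weibull)
        u = scale * rng.weibull(k_weibull, size=n)
        active = u > u_star_t
        flux = c * u[active] ** 3 * (1.0 - u_star_t / u[active])
        return flux.sum() / n                 # cell-mean emission flux

    print(f"mean flux: {dust_flux(0.35, 0.30):.3e} (arbitrary units)")
    ```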

  3. Optimization of radioactive sources to achieve the highest precision in three-phase flow meters using Jaya algorithm.

    PubMed

    Roshani, G H; Karami, A; Khazaei, A; Olfateh, A; Nazemi, E; Omidi, M

    2018-05-17

    The gamma-ray source plays a very important role in the precision of multi-phase flow metering. In this study, different combinations of gamma-ray sources ((133Ba-137Cs), (133Ba-60Co), (241Am-137Cs), (241Am-60Co), (133Ba-241Am) and (60Co-137Cs)) were investigated in order to optimize the three-phase flow meter. The three phases were water, oil and gas, and the regime was considered annular. The required data were numerically generated using the MCNP-X Monte Carlo code. Indeed, the present study is devoted to forecasting the volume fractions in the annular three-phase flow, based on a multi-energy metering system including various radiation sources and one NaI detector, using a hybrid model of an artificial neural network and the Jaya optimization algorithm. Since the summation of volume fractions is constant, a constrained modeling problem exists, meaning that the hybrid model must forecast only two volume fractions. Six hybrid models, associated with the number of radiation sources used, were designed. The models were employed to forecast the gas and water volume fractions and were trained on the numerically obtained data. The results show that the best forecasts of the gas and water volume fractions are obtained for the system including (241Am-137Cs) as the radiation source. Copyright © 2018 Elsevier Ltd. All rights reserved.
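
    Jaya itself is appealingly simple: every candidate moves toward the current best solution and away from the current worst, with no algorithm-specific tuning parameters. A minimal sketch follows, with a toy quadratic standing in for the detector-response misfit that the paper couples to a neural network:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def jaya(f, bounds, pop=20, iters=200):
        """Minimal Jaya optimizer (minimization)."""
        lo, hi = np.array(bounds, dtype=float).T
        X = lo + rng.random((pop, len(lo))) * (hi - lo)
        F = np.apply_along_axis(f, 1, X)
        for _ in range(iters):
            best, worst = X[F.argmin()], X[F.argmax()]
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            # Move toward the best and away from the worst candidate.
            Xn = np.clip(X + r1 * (best - np.abs(X))
                           - r2 * (worst - np.abs(X)), lo, hi)
            Fn = np.apply_along_axis(f, 1, Xn)
            improved = Fn < F                 # greedy acceptance
            X[improved], F[improved] = Xn[improved], Fn[improved]
        return X[F.argmin()], F.min()

    # Toy stand-in objective, e.g. squared error of predicted vs. measured
    # detector counts as a function of two volume fractions.
    misfit = lambda x: float(np.sum((x - 0.3) ** 2))
    print(jaya(misfit, [(0.0, 1.0), (0.0, 1.0)]))
    ```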

  4. SeaQuaKE: Sea-optimized Quantum Key Exchange

    DTIC Science & Technology

    2014-11-01

    In this technical report, we describe modeling results for an entangled photon-pair source based on spontaneous four-wave mixing (Distribution Special Notice 13-SN-0004 under ONRBAA13-001). Progress areas over the last quarter include (i) development of a wavelength-dependent, entangled photon-pair source model and (ii) end-to-end system modeling.

  5. A linked simulation-optimization model for solving the unknown groundwater pollution source identification problems.

    PubMed

    Ayvaz, M Tamer

    2010-09-20

    This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. An implicit solution procedure is also proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples for simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The identification results indicate that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems. Copyright (c) 2010 Elsevier B.V. All rights reserved.
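
    The optimization side can be illustrated with a bare-bones harmony search. The memory size, pitch-adjustment settings and the quadratic stand-in objective below are illustrative assumptions; in the paper, the objective wraps MODFLOW/MT3DMS simulation runs:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                       iters=2000):
        """Minimal harmony search: compose each new solution variable-by-
        variable from memory (prob. hmcr), pitch-adjust it (prob. par,
        step bw), or redraw it at random; replace the worst memory entry
        whenever the new harmony improves on it."""
        lo, hi = np.array(bounds, dtype=float).T
        HM = lo + rng.random((hms, len(lo))) * (hi - lo)
        cost = np.apply_along_axis(f, 1, HM)
        for _ in range(iters):
            new = np.empty(len(lo))
            for j in range(len(lo)):
                if rng.random() < hmcr:
                    new[j] = HM[rng.integers(hms), j]
                    if rng.random() < par:
                        new[j] += bw * (hi[j] - lo[j]) * (2 * rng.random() - 1)
                else:
                    new[j] = lo[j] + rng.random() * (hi[j] - lo[j])
            new = np.clip(new, lo, hi)
            c, worst = f(new), cost.argmax()
            if c < cost[worst]:
                HM[worst], cost[worst] = new, c
        return HM[cost.argmin()], cost.min()

    # Quadratic stand-in for the simulated-vs-observed concentration misfit.
    misfit = lambda x: float(np.sum((x - np.array([2.0, 5.0])) ** 2))
    print(harmony_search(misfit, [(0.0, 10.0), (0.0, 10.0)]))
    ```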

  6. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  7. A study of the sources and sinks of methane and methyl chloroform using a global three-dimensional Lagrangian tropospheric tracer transport model

    NASA Technical Reports Server (NTRS)

    Taylor, John A.; Brasseur, G. P.; Zimmerman, P. R.; Cicerone, R. J.

    1991-01-01

    Sources and sinks of methane and methyl chloroform are investigated using a global three-dimensional Lagrangian tropospheric tracer transport model with parameterized hydroxyl and temperature fields. Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source function models of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). The second model identified source regions for methane from rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land use data. The most significant difference between the two models was in the predicted methane fluxes over China and South East Asia, the location of most of the world's rice paddies, indicating that either the assumption that a uniform fraction of NPP is converted to methane is not valid for rice paddies, or that NPP is underestimated for rice paddies, or that present methane emission estimates from rice paddies are too high.

  8. Process Relationships for Evaluating the Role of Light-induced Inactivation of Enterococci at Selected Beaches and Nearby Tributaries of the Great Lakes

    EPA Science Inventory

    One approach to predictive modeling of biological contamination of recreational waters and drinking water sources involves applying process-based models that consider microbial sources, hydrodynamic transport, and microbial fate. Fecal indicator bacteria such as enterococci have ...

  9. Community LINE Source Model (C-LINE)

    EPA Science Inventory

    This presentation provides an introduction for the live demo and explains the purpose of C-LINE and its key features. C-LINE is a web-based model designed to inform the community user of local air quality impacts due to mobile-sources in their region of interest using a simplifie...

  10. Comparative evaluation of statistical and mechanistic models of Escherichia coli at beaches in southern Lake Michigan

    USGS Publications Warehouse

    Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.

    2016-01-01

    Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models which were further improved using probability distributions to generate high-resolution time series data at the source, long-term “tracer” transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured-grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions resulting in more accurate estimates of beach closures.

  11. Comparison of hybrid receptor models to locate PCB sources in Chicago

    NASA Astrophysics Data System (ADS)

    Hsu, Ying-Kuang; Holsen, Thomas M.; Hopke, Philip K.

    Results of three hybrid receptor models, potential source contribution function (PSCF), concentration weighted trajectory (CWT), and residence time weighted concentration (RTWC), were compared for locating polychlorinated biphenyl (PCB) sources contributing to the atmospheric concentrations in Chicago. Variations of these models, including PSCF using mean and 75% criterion concentrations, joint probability PSCF (JP-PSCF), changes of point filters and grid cell sizes for RTWC, and PSCF using wind trajectories started at different altitudes, are also discussed. Modeling results were relatively consistent between models. However, no single model provided information as complete as that obtained by using all of them. CWT and 75% PSCF appear able to distinguish between larger sources and moderate ones. RTWC resolved high-potential source areas. RTWC and JP-PSCF pooling data from all sampling sites removed the trailing effect often seen in PSCF modeling. PSCF results using average concentration criteria appear to identify both moderate and major sources. Each model has advantages and disadvantages; however, used in combination, they provide information that is not available if only one of them is used. For short-range atmospheric transport, PSCF results were consistent when using wind trajectories starting at different heights. Based on the archived PCB data, the modeling results indicate there is a large potential source area between Joliet and Kankakee, IL, and two moderate sources to the northwest and south of Chicago. On the south side of Chicago, in the neighborhood of Lake Calumet, several PCB sources were identified. Other unidentified potential source locations will require additional upwind/downwind field sampling to verify the modeling results.
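
    Of the three models, PSCF is the simplest to state: for each grid cell, divide the number of trajectory endpoints belonging to high-concentration samples by the total number of endpooints falling in that cell. The sketch below is a generic implementation; the grid, extent and 75th-percentile criterion are arbitrary illustrative choices:

    ```python
    import numpy as np

    def pscf(endpoints, conc, threshold, grid=(36, 72),
             extent=(-90.0, 90.0, -180.0, 180.0)):
        """Potential source contribution function on a lat/lon grid.

        endpoints: one (n_i, 2) array of back-trajectory endpoints
        (lat, lon) per sample; conc: concentration per sample.
        PSCF(cell) = m / n, with n all endpoints in the cell and m the
        endpoints from samples above the criterion concentration.
        """
        lat0, lat1, lon0, lon1 = extent
        n_cnt, m_cnt = np.zeros(grid), np.zeros(grid)
        for pts, c in zip(endpoints, conc):
            i = ((pts[:, 0] - lat0) / (lat1 - lat0) * grid[0]).astype(int)
            j = ((pts[:, 1] - lon0) / (lon1 - lon0) * grid[1]).astype(int)
            ok = (i >= 0) & (i < grid[0]) & (j >= 0) & (j < grid[1])
            np.add.at(n_cnt, (i[ok], j[ok]), 1)
            if c > threshold:                # e.g. mean or 75th percentile
                np.add.at(m_cnt, (i[ok], j[ok]), 1)
        out = np.zeros(grid)
        hit = n_cnt > 0
        out[hit] = m_cnt[hit] / n_cnt[hit]
        return out

    rng = np.random.default_rng(5)
    endpoints = [rng.uniform([-90, -180], [90, 180], (40, 2)) for _ in range(100)]
    conc = rng.lognormal(0.0, 0.5, size=100)
    field = pscf(endpoints, conc, threshold=np.percentile(conc, 75))
    print("max PSCF value:", field.max())
    ```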

  12. Kinematic and Dynamic Source Rupture Scenario for Potential Megathrust Event along the Southernmost Ryukyu Trench

    NASA Astrophysics Data System (ADS)

    Lin, T. C.; Hu, F.; Chen, X.; Lee, S. J.; Hung, S. H.

    2017-12-01

    Kinematic source models are widely used for earthquake simulation because of their simplicity and ease of application. Dynamic source models, on the other hand, are more complex but important tools that can help us understand the physics of earthquake initiation, propagation, and healing. In this study, we focus on the southernmost Ryukyu Trench, which is extremely close to northern Taiwan. Interseismic GPS data in northeast Taiwan show a pattern of strain accumulation, which suggests that the maximum magnitude of a potential future earthquake in this area is probably about 8.7. We develop dynamic rupture models for the hazard estimation of the potential megathrust event based on kinematic rupture scenarios inverted from the interseismic GPS data. In addition, several kinematic source rupture scenarios with different characterized slip patterns are considered to better constrain the dynamic rupture process. The initial stresses and friction properties are tested using a trial-and-error method, together with the plate coupling and tectonic features. An analysis of the dynamic stress field associated with the slip prescribed in the kinematic models can indicate possible inconsistencies with the physics of faulting. Furthermore, the dynamic and kinematic rupture models are used to simulate ground shaking based on a 3-D spectral-element method. We analyze ShakeMap and ShakeMovie products from the simulation results to evaluate the differences in shaking over the island between the source models. A dispersive tsunami-propagation simulation is also carried out to evaluate the maximum tsunami wave height along the coastal areas of Taiwan due to the coseismic seafloor deformation of the different source models. The results of this numerical simulation study can provide physically based megathrust earthquake scenarios that allow emergency response agencies to take appropriate action before a really big one happens.

  13. Human health risk assessment: models for predicting the effective exposure duration of on-site receptors exposed to contaminated groundwater.

    PubMed

    Baciocchi, Renato; Berardi, Simona; Verginelli, Iason

    2010-09-15

    Clean-up of contaminated sites is usually based on a risk-based approach for the definition of remediation goals, which relies on the well-known ASTM-RBCA standard procedure. In this procedure, migration of contaminants is described through simple analytical models, and the source contaminant concentration is assumed to be constant throughout the entire exposure period, i.e. 25-30 years. The latter assumption may often be over-protective of human health, leading to unrealistically low remediation goals. The aim of this work is to propose an alternative model taking into account source depletion, while keeping the original simplicity and analytical form of the ASTM-RBCA approach. The results obtained by the application of this model are compared with those provided by the traditional ASTM-RBCA approach, by a model based on the source depletion algorithm of the RBCA ToolKit software, and by a numerical model, allowing its feasibility for inclusion in risk analysis procedures to be assessed. The results discussed in this work are limited to on-site exposure to contaminated water by ingestion, but the proposed approach can be extended to other exposure pathways. Copyright 2010 Elsevier B.V. All rights reserved.
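
    The practical effect of relaxing the constant-source assumption is easy to see with a first-order depletion law. The sketch below compares the 25-year mean exposure concentration under C(t) = C0 exp(-kt) with the constant-source case; the depletion rates are illustrative, and the paper's analytical model is richer than this:

    ```python
    import numpy as np

    def mean_exposure_concentration(c0, k, years=25.0):
        """Average concentration over the exposure duration for a source
        depleting as C(t) = c0 * exp(-k t); k = 0 recovers the constant-
        source ASTM-RBCA assumption."""
        if k == 0.0:
            return c0
        return c0 * (1.0 - np.exp(-k * years)) / (k * years)

    c0 = 1.0                         # initial source concentration (mg/L)
    for k in (0.0, 0.05, 0.2):       # illustrative depletion rates (1/yr)
        c_mean = mean_exposure_concentration(c0, k)
        print(f"k = {k:4.2f} 1/yr -> mean C over 25 yr: {c_mean:.3f} mg/L")
    ```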

  14. Estimation of contribution ratios of pollutant sources to a specific section based on an enhanced water quality model.

    PubMed

    Cao, Bibo; Li, Chuan; Liu, Yan; Zhao, Yue; Sha, Jian; Wang, Yuqiu

    2015-05-01

    Because water quality monitoring sections or sites could reflect the water quality status of rivers, surface water quality management based on water quality monitoring sections or sites would be effective. For the purpose of improving the water quality of rivers, quantifying the contribution ratios of pollutant sources to a specific section is necessary. Because the physical and chemical processes of nutrient pollutants are complex in water bodies, it is difficult to quantitatively compute the contribution ratios. However, water quality models have proved to be effective tools to estimate surface water quality. In this project, an enhanced QUAL2Kw model with an added module was applied to the Xin'anjiang Watershed, to obtain water quality information along the river and to assess the contribution ratios of each pollutant source to a certain section (the Jiekou state-controlled section). Model validation indicated that the results were reliable. Then, contribution ratios were analyzed through the added module. Results show that among the pollutant sources, the Lianjiang tributary contributes the largest part of total nitrogen (50.43%), total phosphorus (45.60%), ammonia nitrogen (32.90%), nitrate (nitrite + nitrate) nitrogen (47.73%), and organic nitrogen (37.87%). Furthermore, contribution ratios in different reaches varied along the river. Compared with pollutant load ratios of different sources in the watershed, an analysis of contribution ratios of pollutant sources for each specific section, which takes the localized chemical and physical processes into consideration, was more suitable for local-regional water quality management. In summary, this method of analyzing the contribution ratios of pollutant sources to a specific section based on the QUAL2Kw model was found to support the improvement of the local environment.

  15. Simulation of the Tsunami Resulting from the M 9.2 2004 Sumatra-Andaman Earthquake - Dynamic Rupture vs. Seismic Inversion Source Model

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Behrens, Jörn

    2017-04-01

    Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Other data from ocean buoys, etc., are also sometimes included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization that solves the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time allowing for high local resolution and geometric accuracy. The results are compared to measured data and to results obtained with earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.

  16. Preliminary Results of the first European Source Apportionment intercomparison for Receptor and Chemical Transport Models

    NASA Astrophysics Data System (ADS)

    Belis, Claudio A.; Pernigotti, Denise; Pirovano, Guido

    2017-04-01

    Source apportionment (SA) is the identification of ambient air pollution sources and the quantification of their contribution to pollution levels. This task can be accomplished using different approaches: chemical transport models and receptor models. Receptor models are derived from measurements and are therefore considered a reference for the urban background levels of primary sources. Chemical transport models provide better estimates of secondary (inorganic) pollutants and can deliver gridded results with high time resolution. Assessing the performance of SA model results is essential to guarantee reliable information on source contributions to be used for reporting to the Commission and in the development of pollution abatement strategies. This is the first intercomparison ever designed to test both receptor-oriented models (receptor models) and chemical transport models (source-oriented models) using a comprehensive method based on model quality indicators and pre-established criteria. The target pollutant of this exercise, organised in the frame of FAIRMODE WG 3, is PM10. Both receptor models and chemical transport models perform well when evaluated against their respective references. Both types of models demonstrate quite satisfactory capabilities in estimating yearly source contributions, while the estimation of source contributions at the daily level (time series) is more critical. Chemical transport models showed a tendency to underestimate the contributions of some individual sources when compared to receptor models. For receptor models, the most critical source category is industry, probably because of the variety of single sources with different characteristics that belong to this category. Dust is the most problematic source for chemical transport models, likely due to the poor information about this kind of source in the emission inventories, particularly concerning road dust re-suspension, and consequently the little detail about the chemical components of this source used in the models. Sensitivity tests show that chemical transport models perform better when using a detailed set of sources (14) than when using a simplified one (only 8). It was also observed that enhanced vertical profiling can improve the estimation of specific sources, such as industry, under complex meteorological conditions, and that insufficient spatial resolution in urban areas can limit the capability of models to estimate the contribution of diffuse primary sources (e.g. traffic). Both families of models identify traffic and biomass burning as the first and second most contributing categories, respectively, to elemental carbon. The results of this study demonstrate that the source apportionment assessment methodology developed by the JRC is applicable to any kind of SA model. The same methodology is implemented in the on-line DeltaSA tool to support source apportionment model evaluation (http://source-apportionment.jrc.ec.europa.eu/).

  17. Monitoring and modeling as a continuing learning process: the use of hydrological models in a general probabilistic framework.

    NASA Astrophysics Data System (ADS)

    Baroni, G.; Gräff, T.; Reinstorf, F.; Oswald, S. E.

    2012-04-01

    Nowadays, uncertainty and sensitivity analysis are considered basic tools for the assessment of hydrological models and the evaluation of the most important sources of uncertainty. In this context, several methods have been developed and applied under different hydrological conditions in recent decades. In most cases, however, studies have mainly investigated the influence of parameter uncertainty on the simulated outputs, and few approaches have also considered other sources of uncertainty, i.e. input and model structure. Moreover, several constraints arise when spatially distributed parameters are involved. To overcome these limitations, a general probabilistic framework based on Monte Carlo simulations and the Sobol method has been proposed. In this study, the general probabilistic framework was applied at the field scale using a 1D physically based hydrological model (SWAP). Furthermore, the framework was extended to the catchment scale in combination with a spatially distributed hydrological model (SHETRAN). The models are applied at two experimental sites in Germany: a relatively flat cropped field close to Potsdam (Brandenburg) and a small mountainous catchment with agricultural land use (Schaefertal, Harz Mountains). For both cases, input and parameters are considered the major sources of uncertainty. Evaluation of the models was based on soil moisture measured at the plot scale at different depths and, for the catchment site, also on daily discharge values. The study shows how the framework can take into account all the various sources of uncertainty, i.e. input data, parameters (either in scalar or spatially distributed form) and model structures. The framework can be used in a loop in order to optimize further monitoring activities aimed at improving the performance of the model. In these particular applications, the results show that the sources of uncertainty are specific to each process considered. The influence of the input data, as well as the presence of compensating errors, also becomes clear from the different processes simulated.
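
    The Sobol part of such a framework reduces to a sampling recipe. Below is a self-contained Monte Carlo estimator of first-order indices in the style of Saltelli et al. (2010), with a toy linear function standing in for the hydrological model:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def sobol_first_order(model, n_params, n=100_000):
        """First-order Sobol indices from two independent U(0,1) matrices.

        For each parameter i, AB_i equals A with column i taken from B;
        the estimator is S_i ~ mean(f(B) * (f(AB_i) - f(A))) / Var(f).
        """
        A, B = rng.random((n, n_params)), rng.random((n, n_params))
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]))
        S = np.empty(n_params)
        for i in range(n_params):
            ABi = A.copy()
            ABi[:, i] = B[:, i]
            S[i] = np.mean(fB * (model(ABi) - fA)) / var
        return S

    # Toy stand-in model: output depends strongly on x0, weakly on x1,
    # and not at all on x2, so S should be roughly (0.98, 0.02, 0.00).
    f = lambda X: 4.0 * X[:, 0] + 0.5 * X[:, 1] + 0.0 * X[:, 2]
    print(np.round(sobol_first_order(f, 3), 3))
    ```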

  18. Location identification for indoor instantaneous point contaminant source by probability-based inverse Computational Fluid Dynamics modeling.

    PubMed

    Liu, X; Zhai, Z

    2008-02-01

    Indoor pollution jeopardizes human health and welfare and may even cause serious morbidity and mortality under extreme conditions. Effectively controlling and improving indoor environment quality requires immediate interpretation of pollutant sensor readings and accurate identification of indoor pollution history and source characteristics (e.g. source location and release time). This procedure is complicated by non-uniform and dynamic indoor contaminant dispersion behaviors as well as diverse sensor network distributions. This paper introduces a probability-based inverse modeling method that is able to identify the source location for an instantaneous point source placed in an enclosed environment with known source release time. The study presents the mathematical models that address three different sensing scenarios: sensors without concentration readings, sensors with spatial concentration readings, and sensors with temporal concentration readings. The paper demonstrates the inverse modeling method and algorithm with two case studies: air pollution in an office space and in an aircraft cabin. The predictions were successfully verified against the forward simulation settings, indicating good capability of the method in finding indoor pollutant sources. The research lays a solid foundation for further study of the method for more complicated indoor contamination problems. The method developed can help track indoor contaminant source locations with limited sensor outputs. This will ensure effective and prompt execution of building control strategies and thus achieve a healthy and safe indoor environment. The method can also assist the design of optimal sensor networks.
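
    The probability-based idea can be illustrated with a small sketch: score every candidate source location by how well a forward dispersion model reproduces the sensor readings. The 2-D Gaussian-puff forward model below is a deliberately crude stand-in for the CFD model of the paper, and all numbers (diffusivity, sensor layout, noise level) are assumptions.

    ```python
    # Grid-based probability sketch for locating an instantaneous point source
    # with a known release time. Toy 2-D Gaussian-puff forward model, not CFD.
    import numpy as np

    D = 0.05          # effective diffusivity (m^2/s), assumed
    t = 120.0         # time since release (s), known per the paper
    sensors = np.array([[1.0, 1.0], [4.0, 1.5], [2.5, 4.0]])  # sensor x,y (m)
    readings = np.array([0.8, 0.3, 0.5])                      # concentrations (a.u.)
    sigma = 0.05                                              # sensor noise std, assumed

    def forward(src, sensor):
        """Concentration at a sensor from a unit instantaneous release at src."""
        r2 = np.sum((sensor - src) ** 2)
        return np.exp(-r2 / (4 * D * t)) / (4 * np.pi * D * t)

    xs = np.linspace(0, 5, 101)
    ys = np.linspace(0, 5, 101)
    post = np.zeros((ys.size, xs.size))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            pred = np.array([forward(np.array([x, y]), s) for s in sensors])
            # scale release mass by least squares so only location is inferred
            m = max(pred @ readings / (pred @ pred + 1e-12), 0.0)
            post[i, j] = np.exp(-np.sum((readings - m * pred) ** 2) / (2 * sigma**2))

    post /= post.sum()                      # normalize to a probability map
    iy, ix = np.unravel_index(post.argmax(), post.shape)
    print(f"most probable source location: ({xs[ix]:.2f}, {ys[iy]:.2f}) m")
    ```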

  19. A novel multi-segment path analysis based on a heterogeneous velocity model for the localization of acoustic emission sources in complex propagation media.

    PubMed

    Gollob, Stephan; Kocur, Georg Karl; Schumacher, Thomas; Mhamdi, Lassaad; Vogel, Thomas

    2017-02-01

    In acoustic emission analysis, common source location algorithms assume, independently of the nature of the propagation medium, a straight (shortest) wave path between the source and the sensors. For heterogeneous media such as concrete, the wave travels along complex paths owing to interactions with the dissimilar material constituents and with any geometrical and material irregularities present in the medium. For instance, cracks and large air voids present in concrete significantly influence the way the wave travels by causing wave path deviations. Neglecting these deviations by assuming straight paths can introduce significant errors into the source location results. In this paper, a novel source localization method called FastWay is proposed. Contrary to most available shortest-path-based methods, it accounts for the different effects of material discontinuities (cracks and voids). FastWay, based on a heterogeneous velocity model, uses the fastest rather than the shortest travel paths between the source and each sensor. The method was evaluated both numerically and experimentally, and the results from both evaluation tests show that, in general, FastWay was able to locate the sources of acoustic emissions more accurately and reliably than traditional source localization methods. Copyright © 2016 Elsevier B.V. All rights reserved.
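
    The fastest-path idea can be sketched with Dijkstra's algorithm on a gridded velocity model: cells holding low wave speeds (cracks, voids) are automatically detoured around, because the algorithm minimizes travel time rather than path length. This is a generic illustration of the concept, not the FastWay implementation; the grid, cell size, and wave speeds below are assumed.

    ```python
    # Fastest-travel-time sketch on a heterogeneous velocity grid (8-neighbour
    # Dijkstra). Slow cells are bypassed because travel time, not length, is
    # minimized. All grid values are assumed for illustration.
    import heapq
    import numpy as np

    v = np.full((50, 50), 4000.0)      # P-wave speed in concrete (m/s), assumed
    v[20:30, 10:40] = 300.0            # an air-filled crack band (m/s), assumed
    h = 0.01                           # cell size (m)

    def travel_times(speed, src):
        """Dijkstra travel times (s) from src to every cell."""
        ny, nx = speed.shape
        t = np.full((ny, nx), np.inf)
        t[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            tt, (i, j) = heapq.heappop(pq)
            if tt > t[i, j]:
                continue
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == dj == 0:
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < ny and 0 <= nj < nx:
                        dist = h * (di * di + dj * dj) ** 0.5
                        # local slowness averaged across the two cells
                        step = dist * 0.5 * (1 / speed[i, j] + 1 / speed[ni, nj])
                        if tt + step < t[ni, nj]:
                            t[ni, nj] = tt + step
                            heapq.heappush(pq, (tt + step, (ni, nj)))
        return t

    t_map = travel_times(v, (5, 5))
    print(f"fastest travel time to opposite corner: {t_map[45, 45]*1e6:.1f} us")
    ```

    A source location scheme would precompute such time maps per sensor and pick the origin whose predicted arrival-time differences best match the observed ones.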

  20. Physics-electrical hybrid model for real time impedance matching and remote plasma characterization in RF plasma sources.

    PubMed

    Sudhir, Dass; Bandyopadhyay, M; Chakraborty, A

    2016-02-01

    Plasma characterization and impedance matching are an integral part of any radio frequency (RF) based plasma source. In long pulse operation, particularly high power operation where the plasma load may vary for different reasons (e.g. pressure and power), online tuning of the impedance matching circuit and remote plasma density estimation are very useful. In some cases, due to remote interfaces, radio-activation and maintenance issues, power probes are not allowed to be incorporated in the ion source design for plasma characterization. Therefore, for characterization and impedance matching, more remote schemes are envisaged. Two such schemes, based on an air-core transformer model of inductively coupled plasma (ICP), have been suggested by the same authors [M. Bandyopadhyay et al., Nucl. Fusion 55, 033017 (2015); D. Sudhir et al., Rev. Sci. Instrum. 85, 013510 (2014)]. To account for the influence of the interaction of the RF field with the plasma in determining its impedance, a physics code, HELIC [D. Arnush, Phys. Plasmas 7, 3042 (2000)], is coupled with the transformer model. This model can be useful for both types of RF sources, i.e., ICP and helicon sources.

  1. Physics-electrical hybrid model for real time impedance matching and remote plasma characterization in RF plasma sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudhir, Dass, E-mail: dass.sudhir@iter-india.org; Bandyopadhyay, M.; Chakraborty, A.

    2016-02-15

    Plasma characterization and impedance matching are an integral part of any radio frequency (RF) based plasma source. In long pulse operation, particularly high power operation where the plasma load may vary for different reasons (e.g. pressure and power), online tuning of the impedance matching circuit and remote plasma density estimation are very useful. In some cases, due to remote interfaces, radio-activation and maintenance issues, power probes are not allowed to be incorporated in the ion source design for plasma characterization. Therefore, for characterization and impedance matching, more remote schemes are envisaged. Two such schemes, based on an air-core transformer model of inductively coupled plasma (ICP), have been suggested by the same authors [M. Bandyopadhyay et al., Nucl. Fusion 55, 033017 (2015); D. Sudhir et al., Rev. Sci. Instrum. 85, 013510 (2014)]. To account for the influence of the interaction of the RF field with the plasma in determining its impedance, a physics code, HELIC [D. Arnush, Phys. Plasmas 7, 3042 (2000)], is coupled with the transformer model. This model can be useful for both types of RF sources, i.e., ICP and helicon sources.
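
    The air-core transformer picture referenced in both records can be sketched in a few lines: the plasma acts as a lossy one-turn secondary, and its resistance appears at the antenna terminals as a reflected impedance. The circuit values below are assumed round numbers, not parameters from the cited papers.

    ```python
    # Air-core transformer sketch of an ICP antenna-plasma system. The plasma
    # is a lossy one-turn secondary; its impedance is reflected into the
    # primary through the mutual inductance. All numbers are assumed.
    import numpy as np

    f = 1.0e6                 # driving frequency (Hz)
    w = 2 * np.pi * f
    L1, L2 = 10e-6, 0.5e-6    # antenna and plasma inductances (H)
    M = 1.2e-6                # mutual inductance (H)
    R1, Rp = 0.3, 2.0         # antenna and plasma resistances (ohm)

    Zs = Rp + 1j * w * L2                        # secondary (plasma) impedance
    Zin = R1 + 1j * w * L1 + (w * M) ** 2 / Zs   # impedance seen by the RF source
    print(f"Zin = {Zin.real:.2f} + j{Zin.imag:.2f} ohm")

    # A remote matching scheme can track Zin as Rp drifts with pressure/power
    # and retune the network, e.g. the series capacitor cancelling the
    # reactive part:
    C_series = 1 / (w * Zin.imag)
    print(f"series matching capacitance ~ {C_series*1e12:.0f} pF")
    ```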

  2. Deuteron nuclear data for the design of accelerator-based neutron sources: Measurement, model analysis, evaluation, and application

    NASA Astrophysics Data System (ADS)

    Watanabe, Yukinobu; Kin, Tadahiro; Araki, Shouhei; Nakayama, Shinsuke; Iwamoto, Osamu

    2017-09-01

    A comprehensive research program on deuteron nuclear data, motivated by the development of accelerator-based neutron sources, is being executed. It is composed of measurements of neutron and gamma-ray yields and production cross sections, modelling of deuteron-induced reactions and code development, nuclear data evaluation and benchmark testing, and application to medical radioisotope production. The goal of this program is to develop a state-of-the-art deuteron nuclear data library up to 200 MeV that will be useful for the design of future (d,xn) neutron sources. The current status and future plans are reviewed.

  3. Modelling remediation scenarios in historical mining catchments.

    PubMed

    Gamarra, Javier G P; Brewer, Paul A; Macklin, Mark G; Martin, Katherine

    2014-01-01

    Local remediation measures, particularly those undertaken in historical mining areas, can often be ineffective or even deleterious because erosion and sedimentation processes operate at spatial scales beyond those typically used in point-source remediation. Based on realistic simulations of a hybrid landscape evolution model combined with stochastic rainfall generation, we demonstrate that similar remediation strategies may result in differing effects across three contrasting European catchments depending on their topographic and hydrologic regimes. Based on these results, we propose a conceptual model of catchment-scale remediation effectiveness based on three basic catchment characteristics: the degree of contaminant source coupling, the ratio of contaminated to non-contaminated sediment delivery, and the frequency of sediment transport events.

  4. Source and listener directivity for interactive wave-based sound propagation.

    PubMed

    Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh

    2014-04-01

    We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
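
    The runtime step described above can be illustrated numerically: project the current source directivity onto spherical harmonics, then sum the precomputed per-SH listener pressures with those coefficients. This rough sketch assumes SciPy; the per-SH pressures are random placeholders standing in for the fields an offline wave solver would provide.

    ```python
    # Sketch of the runtime SH step: decompose a sampled directivity into SH
    # coefficients, then form the listener pressure as a weighted sum of
    # precomputed per-SH fields (placeholders here).
    import numpy as np
    from scipy.special import sph_harm

    order = 2
    # Quadrature grid over the sphere (theta: azimuth, phi: polar, SciPy order).
    theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
    phi = np.linspace(0, np.pi, 20)
    TH, PH = np.meshgrid(theta, phi)
    dA = (TH[0, 1] - TH[0, 0]) * (PH[1, 0] - PH[0, 0]) * np.sin(PH)

    # Example time-varying directivity: a forward-biased pattern.
    directivity = 1.0 + 0.8 * np.sin(PH) * np.cos(TH)

    # Placeholder precomputed fields at one listener position: one complex
    # pressure per SH source (would come from the offline wave simulation).
    rng = np.random.default_rng(0)
    field = {(l, m): rng.normal() + 1j * rng.normal()
             for l in range(order + 1) for m in range(-l, l + 1)}

    total = 0j
    for l in range(order + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, TH, PH)
            c_lm = np.sum(directivity * np.conj(Y) * dA)  # SH projection
            total += c_lm * field[(l, m)]
    print(f"total pressure at listener: {total:.3f}")
    ```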

  5. A modified receptor model for source apportionment of heavy metal pollution in soil.

    PubMed

    Huang, Ying; Deng, Meihua; Wu, Shaofu; Japenga, Jan; Li, Tingqiang; Yang, Xiaoe; He, Zhenli

    2018-07-15

    Source apportionment is a crucial step toward the reduction of heavy metal pollution in soil. Existing methods are generally based on receptor models. However, overestimation or underestimation occurs when they are applied to heavy metal source apportionment in soil. Therefore, a modified model (PCA-MLRD) was developed, based on principal component analysis (PCA) and multiple linear regression with distance (MLRD). The model was applied to a case study conducted in a peri-urban area in southeast China where soils were contaminated by arsenic (As), cadmium (Cd), mercury (Hg) and lead (Pb). Compared with existing models, PCA-MLRD is able to identify specific sources and quantify the extent of influence of each emission. The zinc (Zn)-Pb mine was identified as the most important anthropogenic emission, affecting approximately half of the area with respect to Pb and As accumulation and approximately one third with respect to Cd. Overall, the extent of influence of the anthropogenic emissions decreased in the order mine (3 km) > dyeing mill (2 km) ≈ industrial hub (2 km) > fluorescent factory (1.5 km) > road (0.5 km). Although the algorithm still needs to be improved, the PCA-MLRD model has the potential to become a useful tool for heavy metal source apportionment in soil. Copyright © 2018 Elsevier B.V. All rights reserved.
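
    The PCA-plus-regression backbone of such models can be sketched in APCS-MLR style: compute absolute principal component scores, then regress a target metal on them so each component's mean contribution can be read off. This sketch omits the distance (D) component that makes the paper's PCA-MLRD spatially explicit, and the data below are synthetic.

    ```python
    # Simplified APCS-MLR sketch for heavy-metal source apportionment.
    # Synthetic data; omits the distance term of the paper's PCA-MLRD model.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 200
    mine = rng.lognormal(0.0, 0.5, n)      # latent source strengths (synthetic)
    traffic = rng.lognormal(0.0, 0.5, n)
    X = np.column_stack([
        2.0 * mine + 0.2 * traffic,        # Pb
        1.5 * mine + 0.1 * traffic,        # As
        0.5 * mine + 1.0 * traffic,        # Cd
        0.1 * mine + 2.0 * traffic,        # Zn
    ]) + rng.normal(0, 0.1, (n, 4))

    scaler = StandardScaler().fit(X)
    pca = PCA(n_components=2).fit(scaler.transform(X))
    scores = pca.transform(scaler.transform(X))
    # Absolute principal component scores: shift by the score of a
    # hypothetical zero-concentration sample.
    s0 = pca.transform(scaler.transform(np.zeros((1, 4))))
    apcs = scores - s0

    pb = X[:, 0]                           # apportion Pb as an example
    reg = LinearRegression().fit(apcs, pb)
    contrib = reg.coef_ * apcs.mean(axis=0)
    print("mean source contributions to Pb:", np.round(contrib, 3))
    print("unexplained/background (intercept):", round(reg.intercept_, 3))
    ```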

  6. 77 FR 11974 - Approval and Promulgation of Implementation Plans; State of Iowa Regional Haze State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-28

    ... and beyond. The modeling was based on PM Source Apportionment Technology (PSAT) for the Comprehensive... sources and the State adequately determined the apportionment of those pollutants from sources located... Class I areas caused by emissions of air pollutants from numerous sources located over a wide geographic...

  7. Technical Note: FreeCT_ICD: An Open Source Implementation of a Model-Based Iterative Reconstruction Method using Coordinate Descent Optimization for CT Imaging Investigations.

    PubMed

    Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael

    2018-06-01

    To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric thoracic scan. For the ACR phantom, image quality was comparable to clinical reconstructions as well as reconstructions using open-source FreeCT_wFBP software. The pediatric thoracic scan also yielded acceptable results. In addition, we did not observe any deleterious impact in image quality associated with the utilization of rotating slices. These evaluations also demonstrated reasonable tradeoffs in storage requirements and computational demands. FreeCT_ICD is an open-source implementation of a model-based iterative reconstruction method that extends the capabilities of previously released open source reconstruction software and provides the ability to perform vendor-independent reconstructions of clinically acquired raw projection data. This implementation represents a reasonable tradeoff between storage and computational requirements and has demonstrated acceptable image quality in both simulated and clinical image datasets. This article is protected by copyright. All rights reserved.
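
    The ICD optimization at the core of the method updates one voxel at a time against the current residual. The toy sketch below illustrates that update order on a dense least-squares problem; FreeCT_ICD itself works on a stored sparse system matrix with the column-wise layout described above, so this is only a conceptual illustration.

    ```python
    # Toy iterative coordinate descent (ICD) for min_x ||Ax - b||^2 with a
    # nonnegativity clamp. Dense and tiny; illustrates the update order only.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.random((60, 20))          # toy system matrix (rays x voxels)
    x_true = rng.random(20)
    b = A @ x_true                    # noiseless projections

    x = np.zeros(20)
    r = b - A @ x                     # residual kept up to date
    for sweep in range(50):
        for j in range(x.size):       # visit one voxel (column) at a time
            aj = A[:, j]
            delta = aj @ r / (aj @ aj)
            new_xj = max(x[j] + delta, 0.0)   # clamp to nonnegative attenuation
            r -= (new_xj - x[j]) * aj         # cheap residual update
            x[j] = new_xj

    print("reconstruction error:", np.linalg.norm(x - x_true))
    ```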

  8. The risk assessment of sudden water pollution for river network system under multi-source random emission

    NASA Astrophysics Data System (ADS)

    Li, D.

    2016-12-01

    Sudden water pollution accidents are unavoidable risk events that we must learn to co-exist with. In China's Taihu River Basin, river flow conditions are complicated and subject to frequent artificial interference. Sudden water pollution accidents occur mainly as large abnormal discharges of wastewater and are characterized by sudden occurrence, uncontrollable scope, uncertain affected objects and the concentrated distribution of many risk sources. Effective prevention of pollution accidents that may occur is of great significance for water quality safety management. Bayesian networks can be applied to represent the relationship between pollution sources and river water quality intuitively. Using a time-sequential Monte Carlo algorithm, the pollution source state-switching model, a water quality model for the river network and Bayesian reasoning are integrated, and a sudden water pollution risk assessment model for the river network is developed to quantify the water quality risk under the collective influence of multiple pollution sources. Based on the isotope water transport mechanism, a dynamic tracing model of multiple pollution sources is established, which can describe the relationship between the excessive risk of the system and the multiple risk sources. Finally, a diagnostic reasoning algorithm based on the Bayesian network is coupled with the multi-source tracing model, which can identify the contribution of each risk source to the system risk under complex flow conditions. Taking the Taihu Lake water system as the research object, the model is applied and obtains reasonable results for three typical years. The studies show that the water quality risk at critical sections is influenced by the pollution risk sources, the boundary water quality, the hydrological conditions and the self-purification capacity, and that multiple pollution sources have an obvious effect on the water quality risk of the receiving water body. The water quality risk assessment approach developed in this study offers an effective tool for systematically quantifying the random uncertainty in a plain river network system, and it also provides technical support for decision-making on controlling sudden water pollution through the identification of critical pollution sources.
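
    The forward-risk and diagnostic-reasoning steps can be illustrated with a tiny Monte Carlo sketch: sample random source states, propagate them through a crude dilution/decay proxy, then read each source's posterior probability of being active given an exceedance straight off the samples. All rates and coefficients below are assumed; the paper's model couples a full water quality model and Bayesian network instead.

    ```python
    # Monte Carlo sketch of multi-source risk with Bayesian diagnosis.
    # Discharge probabilities, loads, and attenuation factors are assumed.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    p_discharge = np.array([0.05, 0.10, 0.02])   # P(abnormal discharge) per source
    load = np.array([8.0, 3.0, 15.0])            # discharged load (kg) if active
    atten = np.array([0.5, 0.7, 0.3])            # decay/dilution to the section
    threshold = 4.0                              # exceedance threshold (kg-equiv.)

    active = rng.random((n, 3)) < p_discharge    # sampled source states
    conc = (active * load * atten).sum(axis=1)   # concentration proxy at section
    exceed = conc > threshold

    print(f"P(exceedance) = {exceed.mean():.4f}")
    # Diagnostic reasoning: P(source i active | exceedance), from the samples.
    for i in range(3):
        p_post = active[exceed, i].mean()
        print(f"P(source {i} active | exceedance) = {p_post:.2f} "
              f"(prior {p_discharge[i]:.2f})")
    ```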

  9. A novel solution for LED wall lamp design and simulation

    NASA Astrophysics Data System (ADS)

    Ge, Rui; Hong, Weibin; Li, Kuangqi; Liang, Pengxiang; Zhao, Fuli

    2014-11-01

    A model of a wall washer lamp and a practical illumination application have been established with a new lens design that meets the uniform-illumination demand of wall washer lamps based on Lambertian light sources. Our secondary optical design, a freeform-surface lens for the LED wall washer lamp based on the law of energy conservation and Snell's law, can improve the lighting effect toward uniform illumination. Using the relationship between the surface of the lens and the surface of the target, a great number of discrete points on the freeform profile curve were obtained through an iterative method. After importing the data into our modeling program, the optical entity was obtained. Finally, to verify the feasibility of the algorithm, the model was simulated with specialized software, using both an LED Lambertian point-source model and an LED panel-source model.
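
    The energy-conservation step of such designs has a compact closed form for a Lambertian source: the cumulative flux through half-angle theta scales as sin^2(theta), while uniform illuminance on a disk needs cumulative flux proportional to r^2, so equal-flux source angles pair with equal-area target radii. The sketch below computes that ray mapping only; the Snell's-law step that turns the mapping into surface normals is not shown, and the target size and ray count are assumed.

    ```python
    # Equal-flux ray mapping for a Lambertian source onto a uniformly lit disk.
    # Illustrates the energy-conservation step only; normals via Snell's law
    # would follow. Target radius and ray count are assumed.
    import numpy as np

    theta_max = np.radians(80.0)   # collected half-angle
    R_target = 1.0                 # radius of the uniformly lit disk (m)
    n_rays = 10

    # Lambertian cumulative flux ~ sin^2(theta); uniform illuminance ~ r^2.
    F = np.linspace(0, 1, n_rays + 1)                  # equal-flux fractions
    theta = np.arcsin(np.sqrt(F) * np.sin(theta_max))  # source ray angles
    r = R_target * np.sqrt(F)                          # matched target radii

    for th, rr in zip(theta, r):
        print(f"ray at {np.degrees(th):5.1f} deg -> target radius {rr:.3f} m")
    ```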

  10. Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions

    NASA Technical Reports Server (NTRS)

    Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.

    2011-01-01

    A surrogate model methodology is described for predicting in real time the residual strength of flight structures with discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. A residual strength test of a metallic, integrally-stiffened panel is simulated to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data would, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.

  11. Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions

    NASA Technical Reports Server (NTRS)

    Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.

    2011-01-01

    A surrogate model methodology is described for predicting, during flight, the residual strength of aircraft structures that sustain discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. Two ductile fracture simulations are presented to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data does, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high fidelity fracture simulation framework provide useful tools for adaptive flight technology.
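
    The surrogate idea in both records, an artificial neural network mapping damage parameters to residual strength, can be sketched with a small regressor. The training targets below are a synthetic closed-form stand-in for the 3D elastic-plastic fracture simulations, and the parameter ranges are assumed.

    ```python
    # Surrogate-model sketch: a small ANN maps discrete-source damage
    # parameters to residual strength. Synthetic targets stand in for the
    # high-fidelity fracture simulations.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    n = 500
    crack_len = rng.uniform(5, 100, n)     # damage size (mm), assumed range
    crack_angle = rng.uniform(0, 90, n)    # damage orientation (deg)
    X = np.column_stack([crack_len, crack_angle])
    # synthetic residual strength (kN): falls with crack length, mild angle effect
    y = 300.0 - 1.8 * crack_len + 0.3 * crack_angle + rng.normal(0, 5, n)

    scaler = StandardScaler().fit(X)
    ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    ann.fit(scaler.transform(X), y)

    # Real-time query: one forward pass replaces a full fracture simulation.
    query = scaler.transform([[40.0, 30.0]])
    print(f"predicted residual strength: {ann.predict(query)[0]:.1f} kN")
    ```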

  12. A Flexible Hierarchical Bayesian Modeling Technique for Risk Analysis of Major Accidents.

    PubMed

    Yu, Hongyang; Khan, Faisal; Veitch, Brian

    2017-09-01

    Safety analysis of rare events with potentially catastrophic consequences is challenged by data scarcity and uncertainty. Traditional causation-based approaches, such as fault trees and event trees (used to model rare events), suffer from a number of weaknesses. These include the static structure of the event causation, the lack of event occurrence data, and the need for reliable prior information. In this study, a new hierarchical Bayesian modeling based technique is proposed to overcome these drawbacks. The proposed technique can be used as a flexible technique for risk analysis of major accidents. It enables both forward and backward analysis in quantitative reasoning and the treatment of interdependence among the model parameters. Source-to-source variability in data sources is also taken into account through a robust probabilistic safety analysis. The applicability of the proposed technique has been demonstrated through a case study in the marine and offshore industry. © 2017 Society for Risk Analysis.
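
    One standard way to encode source-to-source variability is a hierarchical Gamma-Poisson model: each data source has its own accident rate, and the rates share Gamma hyperparameters that are themselves inferred. The sketch below uses conjugate Gibbs updates for the rates and Metropolis steps for the hyperparameters; the counts, exposures, and hyperpriors are illustrative, not from the paper.

    ```python
    # Hierarchical Bayesian sketch: k_i ~ Poisson(lam_i * T_i),
    # lam_i ~ Gamma(alpha, rate=beta), Exp(1) hyperpriors on alpha, beta.
    import numpy as np
    from math import lgamma

    rng = np.random.default_rng(3)
    k = np.array([0, 1, 3, 0, 2])               # accident counts per data source
    T = np.array([10.0, 8.0, 25.0, 5.0, 12.0])  # exposure (years) per source

    def log_hyper_post(a, b, lam):
        # Gamma(a, rate=b) likelihood of the lam_i plus Exp(1) hyperpriors
        n = lam.size
        return (n * (a * np.log(b) - lgamma(a))
                + (a - 1) * np.log(lam).sum() - b * lam.sum() - a - b)

    alpha, beta = 1.0, 1.0
    lam = np.full(k.size, 0.1)
    keep = []
    for it in range(20000):
        # Gibbs: lam_i | k_i, alpha, beta ~ Gamma(alpha + k_i, rate = beta + T_i)
        lam = rng.gamma(alpha + k, 1.0 / (beta + T))
        # Metropolis on the hyperparameters (random walk on the log scale)
        a_new = alpha * np.exp(0.1 * rng.normal())
        b_new = beta * np.exp(0.1 * rng.normal())
        log_acc = (log_hyper_post(a_new, b_new, lam)
                   - log_hyper_post(alpha, beta, lam)
                   + np.log(a_new / alpha) + np.log(b_new / beta))  # Jacobian
        if np.log(rng.random()) < log_acc:
            alpha, beta = a_new, b_new
        if it > 5000:
            keep.append(lam.copy())

    post = np.array(keep)
    print("posterior mean rate per source (1/yr):", post.mean(axis=0).round(3))
    ```

    The hierarchy pools information: sparse sources borrow strength from data-rich ones, which is exactly the robustness argument behind treating source-to-source variability explicitly.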

  13. A large and ubiquitous source of atmospheric formic acid

    NASA Astrophysics Data System (ADS)

    Millet, D. B.; Baasandorj, M.; Farmer, D. K.; Thornton, J. A.; Baumann, K.; Brophy, P.; Chaliyakunnel, S.; de Gouw, J. A.; Graus, M.; Hu, L.; Koss, A.; Lee, B. H.; Lopez-Hilfiker, F. D.; Neuman, J. A.; Paulot, F.; Peischl, J.; Pollack, I. B.; Ryerson, T. B.; Warneke, C.; Williams, B. J.; Xu, J.

    2015-06-01

    Formic acid (HCOOH) is one of the most abundant acids in the atmosphere, with an important influence on precipitation chemistry and acidity. Here we employ a chemical transport model (GEOS-Chem CTM) to interpret recent airborne and ground-based measurements over the US Southeast in terms of the constraints they provide on HCOOH sources and sinks. Summertime boundary layer concentrations average several parts-per-billion, 2-3× larger than can be explained based on known production and loss pathways. This indicates one or more large missing HCOOH sources, and suggests either a key gap in current understanding of hydrocarbon oxidation or a large, unidentified, direct flux of HCOOH. Model-measurement comparisons implicate biogenic sources (e.g., isoprene oxidation) as the predominant HCOOH source. Resolving the unexplained boundary layer concentrations based (i) solely on isoprene oxidation would require a 3× increase in the model HCOOH yield, or (ii) solely on direct HCOOH emissions would require approximately a 25× increase in its biogenic flux. However, neither of these can explain the high HCOOH amounts seen in anthropogenic air masses and in the free troposphere. The overall indication is of a large biogenic source combined with ubiquitous chemical production of HCOOH across a range of precursors. Laboratory work is needed to better quantify the rates and mechanisms of carboxylic acid production from isoprene and other prevalent organics. Stabilized Criegee intermediates (SCIs) provide a large model source of HCOOH, while acetaldehyde tautomerization accounts for ~ 15% of the simulated global burden. Because carboxylic acids also react with SCIs and catalyze the reverse tautomerization reaction, HCOOH buffers against its own production by both of these pathways. Based on recent laboratory results, reaction between CH3O2 and OH could provide a major source of atmospheric HCOOH; however, including this chemistry degrades the model simulation of CH3OOH and NOx : CH3OOH. Developing better constraints on SCI and RO2 + OH chemistry is a high priority for future work. The model neither captures the large diurnal amplitude in HCOOH seen in surface air, nor its inverted vertical gradient at night. This implies a substantial bias in our current representation of deposition as modulated by boundary layer dynamics, and may indicate an HCOOH sink underestimate and thus an even larger missing source. A more robust treatment of surface deposition is a key need for improving simulations of HCOOH and related trace gases, and our understanding of their budgets.

  14. Towards a street-level pollen concentration and exposure forecast

    NASA Astrophysics Data System (ADS)

    van der Molen, Michiel; Krol, Maarten; van Vliet, Arnold; Heuvelink, Gerard

    2015-04-01

    Atmospheric pollen is an increasing source of nuisance for people in industrialised countries and is associated with significant costs for medication and sick leave. Citizen pollen warnings are often based on emission mapping using local temperature-sum approaches or on long-range atmospheric model approaches. In practice, locally observed pollen may originate both from local sources (plants in streets and gardens) and from long-range transport. We argue that making this distinction is relevant because the diurnal and spatial variation in pollen concentrations is much larger for pollen from local sources than for pollen from long-range transport, due to boundary layer processes. This may have an important impact on the exposure of citizens to pollen and on mitigation strategies. However, little is known about the partitioning of pollen into local and long-range origin categories. Our objective is to study how the concentrations of pollen from different sources vary temporally and spatially, and how the source region influences exposure and mitigation strategies. We built a Hay Fever Forecast system (HFF) based on WRF-chem, Allergieradar.nl, and geo-statistical downscaling techniques. HFF distinguishes between local sources (individual trees) and regional sources (based on tree distribution maps). We show first results on how the diurnal variation of pollen concentrations depends on source proximity. Ultimately, we will compare the model with local pollen counts, patient nuisance scores and medicine use.

  15. Future of family support: Projected living arrangements and income sources of older people in Hong Kong up to 2030.

    PubMed

    Ng, Kok-Hoe

    2016-06-01

    The study aims to project future trends in living arrangements and access to children's cash contributions and market income sources among older people in Hong Kong. A cell-based model was constructed by combining available population projections, labour force projections, an extrapolation of the historical trend in living arrangements based on national survey datasets and a regression model on income sources. Under certain assumptions, the proportion of older people living with their children may decline from 59 to 48% during 2006-2030. Although access to market income sources may improve slightly, up to 20% of older people may have no access to either children's financial support or market income sources, and will not live with their children by 2030. Family support is expected to contract in the next two decades. Public pensions should be expanded to protect financially vulnerable older people. © 2015 AJA Inc.

  16. Improved Bayesian Infrasonic Source Localization for regional infrasound

    DOE PAGES

    Blom, Philip S.; Marcillo, Omar; Arrowsmith, Stephen J.

    2015-10-20

    The Bayesian Infrasonic Source Localization (BISL) methodology and the mathematical framework used therein are examined and simplified, providing a generalized method of estimating the source location and time for an infrasonic event. The likelihood function describing an infrasonic detection used in BISL has been redefined to include the von Mises distribution developed in directional statistics and propagation-based, physically derived celerity-range and azimuth deviation models. Frameworks for constructing propagation-based celerity-range and azimuth deviation statistics are presented to demonstrate how stochastic propagation modelling methods can be used to improve the precision and accuracy of the posterior probability density function describing the source localization. Infrasonic signals recorded at a number of arrays in the western United States, produced by rocket motor detonations at the Utah Test and Training Range, are used to demonstrate the application of the new mathematical framework and to quantify the improvement obtained by using the stochastic propagation modelling methods. Using propagation-based priors, the spatial and temporal confidence bounds of the source decreased by more than 40 per cent in all cases and by as much as 80 per cent in one case. Furthermore, the accuracy of the estimates remained high, keeping the ground truth within the 99 per cent confidence bounds in all cases.
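
    The likelihood structure described here, a von Mises term on back-azimuth combined with a celerity-range term, can be sketched on a grid of candidate origins. The station geometry, detections, and distribution parameters below are assumed round numbers, not the BISL statistics of the paper.

    ```python
    # BISL-flavoured sketch: von Mises back-azimuth likelihood plus a Gaussian
    # celerity likelihood, evaluated over a grid of candidate source origins.
    import numpy as np
    from scipy.stats import norm, vonmises

    stations = np.array([[0.0, 0.0], [300.0, 50.0], [150.0, 250.0]])  # km
    obs_az = np.radians([45.0, 135.0, 190.0])  # detected back-azimuths (rad)
    obs_t = np.array([600.0, 450.0, 500.0])    # arrival times (s after origin)
    kappa = 30.0                               # von Mises concentration, assumed
    cel_mu, cel_sd = 0.30, 0.04                # celerity model (km/s), assumed

    xs = np.linspace(-100, 400, 201)
    ys = np.linspace(-100, 400, 201)
    X, Y = np.meshgrid(xs, ys)
    logp = np.zeros_like(X)
    for (sx, sy), az, t in zip(stations, obs_az, obs_t):
        dx, dy = X - sx, Y - sy
        range_km = np.hypot(dx, dy)
        back_az = np.arctan2(dx, dy)          # azimuth from station to source
        logp += vonmises(kappa, loc=az).logpdf(back_az)
        logp += norm(cel_mu, cel_sd).logpdf(range_km / t)  # celerity term

    post = np.exp(logp - logp.max())
    post /= post.sum()
    iy, ix = np.unravel_index(post.argmax(), post.shape)
    print(f"MAP source location: ({xs[ix]:.0f}, {ys[iy]:.0f}) km")
    ```

    Replacing the fixed celerity mean and spread with propagation-modelled, range-dependent statistics is what tightens the posterior in the study.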

  17. [Source apportionment of soil heavy metals in Jiapigou goldmine based on the UNMIX model].

    PubMed

    Ai, Jian-chao; Wang, Ning; Yang, Jing

    2014-09-01

    The concentrations of 16 metal elements were determined in soil samples collected at the Jiapigou goldmine in the upper Songhua River basin. The UNMIX model, recommended by the US EPA, was applied to obtain the source apportionment results, and Cd, Hg, Pb and Ag concentration contour maps were generated using the Kriging interpolation method to verify the results. The main conclusions of this study are: (1) the concentrations of Cd, Hg, Pb and Ag exceeded the Jilin Province soil background values and were obviously enriched in the soil samples; (2) the UNMIX model resolved four pollution sources: source 1 represents human activities of transportation, ore mining and garbage, with a contribution of 39.1%; source 2 represents the contribution of rock weathering and biological effects, with a contribution of 13.87%; source 3 is a combined source of soil parent material and chemical fertilizer, with a contribution of 23.93%; source 4 represents iron ore mining and transportation, with a contribution of 22.89%; (3) the UNMIX model results are in accordance with the survey of local land-use types, human activities and the Cd, Hg and Pb content distributions.
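
    Receptor models of this family factor a nonnegative sample-by-element matrix into source profiles and per-sample contributions. The sketch below uses NMF as a generic stand-in for EPA UNMIX (which imposes additional self-modeling constraints); the data are synthetic.

    ```python
    # NMF stand-in for a receptor model: factor concentrations into source
    # profiles (H) and contributions (W). Synthetic data, not UNMIX itself.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(5)
    n_samples, n_elements, n_sources = 120, 16, 4
    profiles = rng.random((n_sources, n_elements))       # latent source profiles
    strengths = rng.lognormal(0.0, 0.6, (n_samples, n_sources))
    X = strengths @ profiles + rng.normal(0, 0.01, (n_samples, n_elements))
    X = np.clip(X, 0, None)                              # concentrations >= 0

    model = NMF(n_components=n_sources, init="nndsvda",
                max_iter=2000, random_state=0)
    W = model.fit_transform(X)       # per-sample source contributions
    H = model.components_            # per-source element profiles

    share = W.sum(axis=0) / W.sum()  # average contribution share per source
    print("source contribution shares:", np.round(100 * share, 1), "%")
    ```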

  18. Green Power Grids: How Energy from Renewable Sources Affects Networks and Markets

    PubMed Central

    Mureddu, Mario; Caldarelli, Guido; Chessa, Alessandro; Scala, Antonio; Damiano, Alfonso

    2015-01-01

    The increasing attention to environmental issues is forcing the implementation of novel energy models based on renewable sources. This is fundamentally changing the configuration of energy management and is introducing new problems that are only partly understood. In particular, renewable energies introduce fluctuations which cause an increased request for conventional energy sources to balance energy requests at short notice. In order to develop an effective usage of low-carbon sources, such fluctuations must be understood and tamed. In this paper we present a microscopic model for the description and for the forecast of short time fluctuations related to renewable sources in order to estimate their effects on the electricity market. To account for the inter-dependencies in the energy market and the physical power dispatch network, we use a statistical mechanics approach to sample stochastic perturbations in the power system and an agent based approach for the prediction of the market players’ behavior. Our model is data-driven; it builds on one-day-ahead real market transactions in order to train agents’ behaviour and allows us to deduce the market share of different energy sources. We benchmarked our approach on the Italian market, finding a good accordance with real data. PMID:26335705

  19. Green Power Grids: How Energy from Renewable Sources Affects Networks and Markets.

    PubMed

    Mureddu, Mario; Caldarelli, Guido; Chessa, Alessandro; Scala, Antonio; Damiano, Alfonso

    2015-01-01

    The increasing attention to environmental issues is forcing the implementation of novel energy models based on renewable sources. This is fundamentally changing the configuration of energy management and is introducing new problems that are only partly understood. In particular, renewable energies introduce fluctuations which cause an increased request for conventional energy sources to balance energy requests at short notice. In order to develop an effective usage of low-carbon sources, such fluctuations must be understood and tamed. In this paper we present a microscopic model for the description and for the forecast of short time fluctuations related to renewable sources in order to estimate their effects on the electricity market. To account for the inter-dependencies in the energy market and the physical power dispatch network, we use a statistical mechanics approach to sample stochastic perturbations in the power system and an agent based approach for the prediction of the market players' behavior. Our model is data-driven; it builds on one-day-ahead real market transactions in order to train agents' behaviour and allows us to deduce the market share of different energy sources. We benchmarked our approach on the Italian market, finding a good accordance with real data.

  20. The usability of the optical parametric amplification of light for high-angular-resolution imaging and fast astrometry

    NASA Astrophysics Data System (ADS)

    Kurek, A. R.; Stachowski, A.; Banaszek, K.; Pollo, A.

    2018-05-01

    High-angular-resolution imaging is crucial for many applications in modern astronomy and astrophysics. The fundamental diffraction limit constrains the resolving power of both ground-based and spaceborne telescopes. The recent idea of a quantum telescope based on the optical parametric amplification (OPA) of light aims to bypass this limit for the imaging of extended sources by an order of magnitude or more. We present an updated scheme of an OPA-based device and a more accurate model of the signal amplification by such a device. The semiclassical model that we present predicts that the noise in such a system will form so-called light speckles as a result of light interference in the optical path. Based on this model, we analysed the efficiency of OPA in increasing the angular resolution of the imaging of extended targets and in the precise localization of a distant point source. According to our new model, OPA offers a gain in resolved imaging compared to classical optics. For a given time-span, we found that OPA can be more efficient in localizing a single distant point source than classical telescopes.

  1. A model-based analysis of extinction ratio effects on phase-OTDR distributed acoustic sensing system performance

    NASA Astrophysics Data System (ADS)

    Aktas, Metin; Maral, Hakan; Akgun, Toygar

    2018-02-01

    Extinction ratio (ER) is an inherent limiting factor that has a direct effect on the detection performance of phase-OTDR based distributed acoustic sensing systems. In this work we present a model-based analysis of Rayleigh scattering to simulate the effects of the extinction ratio on the received signal under varying signal acquisition scenarios and system parameters. These signal acquisition scenarios are constructed to represent typically observed cases such as multiple vibration sources cluttered around the target vibration source to be detected, continuous-wave light sources with center frequency drift, varying fiber optic cable lengths and varying ADC bit resolutions. Results show that an insufficient ER can result in a high optical noise floor and effectively hide the effects of elaborate system improvement efforts.

  2. Dissociation of item and source memory in rhesus monkeys.

    PubMed

    Basile, Benjamin M; Hampton, Robert R

    2017-09-01

    Source memory, or memory for the context in which a memory was formed, is a defining characteristic of human episodic memory and source memory errors are a debilitating symptom of memory dysfunction. Evidence for source memory in nonhuman primates is sparse despite considerable evidence for other types of sophisticated memory and the practical need for good models of episodic memory in nonhuman primates. A previous study showed that rhesus monkeys confused the identity of a monkey they saw with a monkey they heard, but only after an extended memory delay. This suggests that they initially remembered the source - visual or auditory - of the information but forgot the source as time passed. Here, we present a monkey model of source memory that is based on this previous study. In each trial, monkeys studied two images, one that they simply viewed and touched and the other that they classified as a bird, fish, flower, or person. In a subsequent memory test, they were required to select the image from one source but avoid the other. With training, monkeys learned to suppress responding to images from the to-be-avoided source. After longer memory intervals, monkeys continued to show reliable item memory, discriminating studied images from distractors, but made many source memory errors. Monkeys discriminated source based on study method, not study order, providing preliminary evidence that our manipulation of retention interval caused errors due to source forgetting instead of source confusion. Finally, some monkeys learned to select remembered images from either source on cue, showing that they did indeed remember both items and both sources. This paradigm potentially provides a new model to study a critical aspect of episodic memory in nonhuman primates. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate to how the data is used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
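
    The metadata-key linkage works the same way in any tabular tool. The sketch below shows the idea with pandas; all event names, keys, and rates are hypothetical illustrations, not values from the paper.

    ```python
    # Spreadsheet-database sketch: a unique metadata key links every Basic
    # Event to its data source and manipulation. All names are illustrative.
    import pandas as pd

    events = pd.DataFrame({
        "basic_event": ["VALVE_FTO", "PUMP_FTS", "SENSOR_DRIFT"],
        "data_key":    ["DS-001",    "DS-002",   "DS-001"],
        "duty_factor": [1.0,         0.5,        1.0],   # use/maintenance stress
    })
    sources = pd.DataFrame({
        "data_key":  ["DS-001", "DS-002"],
        "source":    ["handbook estimate (assumed)", "flight experience (assumed)"],
        "base_rate": [1.2e-6, 4.0e-7],                  # failures per hour
    })

    # One join recovers full traceability from any Basic Event to its source,
    # and the stressing manipulation lives beside the data it modifies.
    linked = events.merge(sources, on="data_key", how="left")
    linked["event_rate"] = linked["base_rate"] * linked["duty_factor"]
    print(linked[["basic_event", "source", "event_rate"]])
    ```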

  4. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    PubMed

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

    To control disinfection byproduct (DBP) formation in drinking water, an understanding of the variability of the source water total organic carbon (TOC) concentration can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate (e.g., temperature) and land surface (e.g., soil moisture) variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data at three case study locations with surface source waters, including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skill at locations with the most anthropogenic influences on their streams. Source water TOC predictive models can provide water treatment utilities with important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
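
    The local polynomial idea is that each prediction is a small weighted fit in the neighbourhood of the query point, so no global parametric form is imposed. Here is a minimal local linear regression with tricube weights on synthetic data; the variables and bandwidth are illustrative, not the paper's configuration.

    ```python
    # Local linear regression sketch with tricube weights: predict TOC from a
    # climate variable without a global parametric form. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(2)
    temp = np.sort(rng.uniform(0, 30, 150))   # predictor, e.g. temperature (C)
    toc = 3 + 2 * np.exp(-((temp - 18) / 6) ** 2) + rng.normal(0, 0.2, 150)

    def local_linear(x0, x, y, frac=0.3):
        """Fit a weighted line around x0 using the nearest frac of points."""
        k = max(int(frac * x.size), 3)
        d = np.abs(x - x0)
        idx = np.argsort(d)[:k]
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube kernel
        sw = np.sqrt(w)                               # weighted least squares
        Xd = np.column_stack([np.ones(k), x[idx] - x0])
        beta = np.linalg.lstsq(Xd * sw[:, None], y[idx] * sw, rcond=None)[0]
        return beta[0]                                # fitted value at x0

    grid = np.linspace(1, 29, 8)
    fit = [local_linear(g, temp, toc) for g in grid]
    print(np.round(fit, 2))
    ```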

  5. Role of the source to building lateral separation distance in petroleum vapor intrusion.

    PubMed

    Verginelli, Iason; Capobianco, Oriana; Baciocchi, Renato

    2016-06-01

    The adoption of source to building separation distances to screen sites that need further field investigation is becoming common practice for the evaluation of the vapor intrusion pathway at sites contaminated by petroleum hydrocarbons. For the source to building vertical distance, the screening criteria for petroleum vapor intrusion have been deeply investigated in the recent literature and fully addressed in the recent guidelines issued by ITRC and U.S.EPA. Conversely, due to the lack of field and modeling studies, the source to building lateral distance has received relatively little attention. To address this issue, in this work we present a steady-state vapor intrusion analytical model incorporating piecewise first-order aerobic biodegradation limited by oxygen availability that accounts for lateral source to building separation. The developed model can be used to evaluate the role and relevance of lateral vapor attenuation as well as to provide a site-specific assessment of the lateral screening distances needed to attenuate vapor concentrations to risk-based values. The simulation outcomes proved consistent with field data and 3-D numerical modeling results reported in previous studies and, for shallow sources, with the screening criteria recommended by U.S.EPA for the vertical separation distance. Indeed, although petroleum vapors can cover maximum lateral distances of up to 25-30 m, as highlighted by the comparison of model outputs with field evidence of vapor migration in the subsurface, the simulation results of this new model indicated that, regardless of the source concentration and depth, lateral distances of 6 m and 7 m are sufficient to attenuate petroleum vapors below risk-based values for groundwater and soil sources, respectively. However, for deep sources (>5 m) and for low to moderate source concentrations (benzene concentrations lower than 5 mg/L in groundwater and 0.5 mg/kg in soil) the above criteria were found to be extremely conservative, as the model results indicated that for such scenarios the lateral screening distance may be set equal to zero. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical bayesian approaches.

    PubMed

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm(2) to 30 cm(2), whatever were the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.

  7. MEG Source Localization of Spatially Extended Generators of Epileptic Activity: Comparing Entropic and Hierarchical Bayesian Approaches

    PubMed Central

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm2 to 30 cm2, whatever were the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered. PMID:23418485

  8. A unified structural/terminological interoperability framework based on LexEVS: application to TRANSFoRm.

    PubMed

    Ethier, Jean-François; Dameron, Olivier; Curcin, Vasa; McGilchrist, Mark M; Verheij, Robert A; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C; Burgun, Anita

    2013-01-01

    Biomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Despite the fact that structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate if using the LexEVS terminology server is a viable implementation method. We developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with relevant mapping sets. This allowed a uniform access to these resources within LexEVS to facilitate interoperability by various components and data sources from implementing architectures. Our unified framework has been tested in the context of the EU Framework Program 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and necessary mappings were created to support effective information retrieval for software tools in the project. We present a novel, unifying approach to address interoperability challenges in heterogeneous data sources, by representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM that abstracts over both the structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data sources personnel for implementing and managing the integration.

  9. Validating Pseudo-dynamic Source Models against Observed Ground Motion Data at the SCEC Broadband Platform, Ver 16.5

    NASA Astrophysics Data System (ADS)

    Song, S. G.

    2016-12-01

    Simulation-based ground motion prediction approaches have several benefits over empirical ground motion prediction equations (GMPEs). For instance, full 3-component waveforms can be produced, and site-specific hazard analysis is also possible. However, it is important to validate them against observed ground motion data to confirm their efficiency and validity before practical use. There have been community efforts for these purposes, supported by the Broadband Platform (BBP) project at the Southern California Earthquake Center (SCEC). In simulation-based ground motion prediction approaches, preparing a plausible range of scenario rupture models is a critical element. I developed a pseudo-dynamic source model for Mw 6.5-7.0 by analyzing a number of dynamic rupture models, based on 1-point and 2-point statistics of earthquake source parameters (Song et al. 2014; Song 2016). In this study, the developed pseudo-dynamic source models were tested against observed ground motion data at the SCEC BBP, Ver 16.5. The validation was performed in two stages. In the first stage, simulated ground motions were validated against observed ground motion data for past events such as the 1992 Landers and 1994 Northridge, California, earthquakes. In the second stage, they were validated against the latest generation of empirical GMPEs, i.e., NGA-West2. The validation results show that the simulations produce ground motion intensities compatible with observed ground motion data at both stages. The compatibility of the pseudo-dynamic source models with the omega-square spectral decay and the standard deviation of the simulated ground motion intensities are also discussed in the study.

  10. A unified structural/terminological interoperability framework based on LexEVS: application to TRANSFoRm

    PubMed Central

    Ethier, Jean-François; Dameron, Olivier; Curcin, Vasa; McGilchrist, Mark M; Verheij, Robert A; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C; Burgun, Anita

    2013-01-01

    Objective Biomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Despite the fact that structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate if using the LexEVS terminology server is a viable implementation method. Materials and methods We developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with relevant mapping sets. This allowed a uniform access to these resources within LexEVS to facilitate interoperability by various components and data sources from implementing architectures. Results Our unified framework has been tested in the context of the EU Framework Program 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and necessary mappings were created to support effective information retrieval for software tools in the project. Conclusions We present a novel, unifying approach to address interoperability challenges in heterogeneous data sources, by representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM that abstracts over both the structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data sources personnel for implementing and managing the integration. PMID:23571850

  11. Dual-Source Linear Energy Prediction (LINE-P) Model in the Context of WSNs

    PubMed Central

    Ahmed, Faisal

    2017-01-01

    Energy harvesting technologies such as miniature power solar panels and micro wind turbines are increasingly used to help power wireless sensor network nodes. However, a major drawback of energy harvesting is its varying and intermittent characteristic, which can negatively affect the quality of service. This calls for careful design and operation of the nodes, possibly by means of, e.g., dynamic duty cycling and/or dynamic frequency and voltage scaling. In this context, various energy prediction models have been proposed in the literature; however, they are typically compute-intensive or only suitable for a single type of energy source. In this paper, we propose Linear Energy Prediction “LINE-P”, a lightweight, yet relatively accurate model based on approximation and sampling theory; LINE-P is suitable for dual-source energy harvesting. Simulations and comparisons against existing similar models have been conducted with low and medium resolutions (i.e., 60 and 22 min intervals/24 h) for the solar energy source (low variations) and with high resolutions (15 min intervals/24 h) for the wind energy source. The results show that the accuracy of the solar-based and wind-based predictions is up to approximately 98% and 96%, respectively, while requiring a lower complexity and memory than the other models. For the cases where LINE-P’s accuracy is lower than that of other approaches, it still has the advantage of lower computing requirements, making it more suitable for embedded implementation, e.g., in wireless sensor network coordinator nodes or gateways. PMID:28726745
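
    A lightweight prediction in the spirit of LINE-P can be illustrated by reconstructing an expected harvest profile from a handful of stored samples via linear interpolation and blending it with the latest observation. The profile values and blending weight below are assumed; this is a conceptual sketch, not the LINE-P algorithm itself.

    ```python
    # Lightweight linear energy prediction sketch: interpolate a stored
    # harvest profile, then partially rescale toward the latest observation.
    import numpy as np

    hours = np.array([0, 6, 9, 12, 15, 18, 24])    # stored sample times (h)
    profile = np.array([0, 5, 40, 90, 60, 8, 0])   # stored solar profile (mW)

    def predict(t_hours, last_obs=None, t_obs=None, w=0.5):
        """Linearly interpolated profile, scaled toward the latest observation."""
        est = np.interp(t_hours % 24, hours, profile)
        if last_obs is not None:
            expected_now = np.interp(t_obs % 24, hours, profile)
            scale = 1.0 if expected_now == 0 else last_obs / expected_now
            est *= (1 - w) + w * scale             # partial correction
        return est

    # At 10:00 we measured 30 mW (cloudy); predict the 13:00 harvest.
    print(f"predicted power at 13:00: "
          f"{predict(13, last_obs=30.0, t_obs=10.0):.1f} mW")
    ```

    Storing only a few profile samples and doing one interpolation per query is what keeps the complexity and memory footprint low enough for coordinator nodes or gateways.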

  12. Advances in the Two Source Energy Balance (TSEB) model using very high resolution remote sensing data in vineyards

    USDA-ARS?s Scientific Manuscript database

    The thermal-based Two Source Energy Balance (TSEB) model partitions the water and energy fluxes from vegetation and soil components providing thus the ability for estimating soil evaporation (E) and canopy transpiration (T) separately. However, it is crucial for ET partitioning to retrieve reliable ...

  13. Monte Carlo Determination of Dosimetric Parameters of a New (125)I Brachytherapy Source According to AAPM TG-43 (U1) Protocol.

    PubMed

    Baghani, Hamid Reza; Lohrabian, Vahid; Aghamiri, Mahmoud Reza; Robatjazi, Mostafa

    2016-03-01

    (125)I is one of the important sources frequently used in brachytherapy. Up to now, several different commercial models of this source type have been introduced into clinical radiation oncology applications. Recently, a new source model, IrSeed-125, has been added to this list. The aim of the present study is to determine the dosimetric parameters of this new source model based on the recommendations of the TG-43 (U1) protocol using Monte Carlo simulation. The dosimetric characteristics of the IrSeed-125 source, including the dose rate constant, radial dose function, 2D anisotropy function and 1D anisotropy function, were determined inside liquid water using the MCNPX code and compared to those of other commercially available iodine sources. The dose rate constant of this new source was found to be 0.983 ± 0.015 cGy h^-1 U^-1, in good agreement with the TLD-measured value (0.965 cGy h^-1 U^-1). The 1D anisotropy function at 3, 5, and 7 cm radial distance was obtained as 0.954, 0.953 and 0.959, respectively. The results of this study showed that the dosimetric characteristics of this new brachytherapy source are comparable with those of other commercially available sources. Furthermore, the simulated parameters were in accordance with previously measured ones. Therefore, the Monte Carlo calculated dosimetric parameters could be employed to obtain the dose distribution around this new brachytherapy source based on the TG-43 (U1) protocol.
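
    For readers unfamiliar with the formalism, the 1-D (point-source) TG-43 dose-rate equation that such studies parametrise can be sketched as follows. The dose rate constant and the anisotropy values at 3, 5 and 7 cm are taken from the abstract; the radial dose function values and the remaining anisotropy samples are hypothetical placeholders, not the published IrSeed-125 data.

        # Hedged sketch of the TG-43 (U1) 1-D point-source formalism:
        #   Ddot(r) = Sk * Lambda * (r0/r)**2 * g(r) * phi_an(r)
        import numpy as np

        Sk = 1.0            # air-kerma strength, U
        LAMBDA = 0.983      # dose rate constant, cGy h^-1 U^-1 (from the abstract)
        r0 = 1.0            # reference distance, cm

        r_tab   = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 7.0])
        g_tab   = np.array([1.05, 1.00, 0.81, 0.64, 0.39, 0.23])      # hypothetical g(r)
        phi_tab = np.array([0.97, 0.96, 0.955, 0.954, 0.953, 0.959])  # 1-D anisotropy

        def dose_rate(r):
            g = np.interp(r, r_tab, g_tab)
            phi = np.interp(r, r_tab, phi_tab)
            return Sk * LAMBDA * (r0 / r) ** 2 * g * phi  # cGy/h

        print(dose_rate(3.0))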

  14. Modeling Ozone in the Eastern U.S. using a Fuel-Based Mobile Source Emissions Inventory.

    PubMed

    McDonald, Brian C; McKeen, Stuart A; Cui, Yu Yan; Ahmadov, Ravan; Kim, Si-Wan; Frost, Gregory J; Pollack, Ilana B; Peischl, Jeff; Ryerson, Thomas B; Holloway, John S; Graus, Martin; Warneke, Carsten; Gilman, Jessica B; de Gouw, Joost A; Kaiser, Jennifer; Keutsch, Frank N; Hanisco, Thomas F; Wolfe, Glenn M; Trainer, Michael

    2018-06-22

    Recent studies suggest overestimates in current U.S. emission inventories of nitrogen oxides (NOx = NO + NO2). Here, we expand a previously developed fuel-based inventory of motor-vehicle emissions (FIVE) to the continental U.S. for the year 2013, and evaluate our estimates of mobile source emissions with the U.S. Environmental Protection Agency's National Emissions Inventory (NEI) interpolated to 2013. We find that mobile source emissions of NOx and carbon monoxide (CO) in the NEI are higher than FIVE by 28% and 90%, respectively. Using a chemical transport model, we model mobile source emissions from FIVE, and find consistent levels of urban NOx and CO as measured during the Southeast Nexus (SENEX) Study in 2013. Lastly, we assess the sensitivity of ozone (O3) over the Eastern U.S. to uncertainties in mobile source NOx emissions and biogenic volatile organic compound (VOC) emissions. Ground-level O3 is sensitive to reductions in mobile source NOx emissions, most notably in the Southeastern U.S. and during O3 exceedance events, under the revised standard proposed in 2015 (>70 ppb, 8-h maximum). This suggests that decreasing mobile source NOx emissions could help in meeting more stringent O3 standards in the future.

  15. A logical model of cooperating rule-based systems

    NASA Technical Reports Server (NTRS)

    Bailin, Sidney C.; Moore, John M.; Hilberg, Robert H.; Murphy, Elizabeth D.; Bahder, Shari A.

    1989-01-01

    A model is developed to assist in the planning, specification, development, and verification of space information systems involving distributed rule-based systems. The model is based on an analysis of possible uses of rule-based systems in control centers. This analysis is summarized as a data-flow model for a hypothetical intelligent control center. From this data-flow model, the logical model of cooperating rule-based systems is extracted. This model consists of four layers of increasing capability: (1) communicating agents, (2) belief-sharing knowledge sources, (3) goal-sharing interest areas, and (4) task-sharing job roles.

  16. Measurement and Modeling of Near Road & Near-Port Air Quality

    EPA Science Inventory

    Air pollution from mobile sources has been identified by numerous organizations as a potential public health concern. Based upon multiple near-road and near-source monitoring studies, both busy roadways and large emission sources at ports can significantly impact local air qualit...

  17. The Impact and Promise of Open-Source Computational Material for Physics Teaching

    NASA Astrophysics Data System (ADS)

    Christian, Wolfgang

    2017-01-01

    A computer-based modeling approach to teaching must be flexible because students and teachers have different skills and varying levels of preparation. Learning how to run the "software du jour" is not the objective for integrating computational physics material into the curriculum. Learning computational thinking, how to use computation and computer-based visualization to communicate ideas, how to design and build models, and how to use ready-to-run models to foster critical thinking is the objective. Our computational modeling approach to teaching is a research-proven pedagogy that predates computers. It attempts to enhance student achievement through the Modeling Cycle. This approach was pioneered by Robert Karplus and the SCIS Project in the 1960s and 70s and later extended by the Modeling Instruction Program led by Jane Jackson and David Hestenes at Arizona State University. This talk describes a no-cost open-source computational approach aligned with a Modeling Cycle pedagogy. Our tools, curricular material, and ready-to-run examples are freely available from the Open Source Physics Collection hosted on the AAPT-ComPADRE digital library. Examples will be presented.

  18. A Semantic Transformation Methodology for the Secondary Use of Observational Healthcare Data in Postmarketing Safety Studies.

    PubMed

    Pacaci, Anil; Gonul, Suat; Sinaci, A Anil; Yuksel, Mustafa; Laleci Erturkmen, Gokce B

    2018-01-01

    Background: Utilization of the available observational healthcare datasets is key to complementing and strengthening postmarketing safety studies. Use of common data models (CDM) is the predominant approach for enabling large-scale systematic analyses on disparate data models and vocabularies. Current CDM transformation practices depend on proprietarily developed Extract-Transform-Load (ETL) procedures, which require knowledge of both the semantic and technical characteristics of the source datasets and target CDM. Purpose: In this study, our aim is to develop a modular but coordinated transformation approach in order to separate the semantic and technical steps of transformation processes, which have no strict separation in traditional ETL approaches. Such an approach would separate the operations that extract data from source electronic health record systems, the alignment of source and target models on the semantic level, and the operations that populate target common data repositories. Approach: In order to separate the activities required to transform heterogeneous data sources to a target CDM, we introduce a semantic transformation approach composed of three steps: (1) transformation of source datasets to Resource Description Framework (RDF) format, (2) application of semantic conversion rules to obtain the data as instances of an ontological model of the target CDM, and (3) population of repositories that comply with the specifications of the CDM, by processing the RDF instances from step 2. The proposed approach has been implemented in real healthcare settings, where the Observational Medical Outcomes Partnership (OMOP) CDM was chosen as the common data model, and a comprehensive comparative analysis between the native and transformed data has been conducted. Results: Health records of ~1 million patients have been successfully transformed from the source database to an OMOP CDM-based database. Descriptive statistics obtained from the source and target databases present analogous and consistent results. Discussion and Conclusion: Our method goes beyond traditional ETL approaches by being more declarative and rigorous. Declarative, because the use of RDF-based mapping rules makes each mapping more transparent and understandable to humans while retaining logic-based computability. Rigorous, because the mappings are based on computer-readable semantics that are amenable to validation through logic-based inference methods.
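
    The flavour of steps (1) and (2) can be shown with rdflib. The namespaces, record, and conversion rule below are hypothetical stand-ins, not the OMOP CDM ontology or the authors' mapping rules; the point is only that the source-to-RDF lifting and the semantic conversion are separate, declarative operations.

        # Sketch: lift a source EHR record to RDF, then apply a conversion rule.
        from rdflib import Graph, Namespace, Literal, RDF

        SRC = Namespace("http://example.org/source-ehr/")   # hypothetical
        CDM = Namespace("http://example.org/target-cdm/")   # hypothetical

        g = Graph()
        # Step 1: source row -> RDF instance data
        g.add((SRC["rec42"], RDF.type, SRC.Diagnosis))
        g.add((SRC["rec42"], SRC.icd9, Literal("250.00")))

        # Step 2: rule "every source Diagnosis is a CDM ConditionOccurrence"
        for rec in g.subjects(RDF.type, SRC.Diagnosis):
            g.add((rec, RDF.type, CDM.ConditionOccurrence))

        print(g.serialize(format="turtle"))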

  19. Near-Field Source Localization by Using Focusing Technique

    NASA Astrophysics Data System (ADS)

    He, Hongyang; Wang, Yide; Saillard, Joseph

    2008-12-01

    We discuss two fast algorithms to localize multiple sources in the near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure to reduce computation cost. We then present a focusing-based method that does not require a symmetric array configuration. By using the focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows bearing estimation with well-studied far-field methods. With the estimated bearing, the range estimate of each source is then obtained by the 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations and against the Cramér-Rao bound. Unlike other near-field algorithms, these two approaches require neither high computational cost nor high-order statistics.
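
    The far-field bearing estimation step that the focusing transformation enables can be sketched with a standard 1D MUSIC implementation. The array geometry, noise level and single-source assumption below are illustrative, not the paper's simulation setup.

        # Minimal 1-D MUSIC sketch for a uniform linear array, one source.
        import numpy as np

        M, d = 8, 0.5                        # sensors, spacing in wavelengths
        k = 2 * np.pi                        # wavenumber for unit wavelength
        theta_true = np.deg2rad(20.0)
        n_snap = 200

        a = np.exp(1j * k * d * np.arange(M) * np.sin(theta_true))
        s = np.random.randn(n_snap) + 1j * np.random.randn(n_snap)
        noise = 0.1 * (np.random.randn(M, n_snap) + 1j * np.random.randn(M, n_snap))
        X = np.outer(a, s) + noise

        R = X @ X.conj().T / n_snap
        w, V = np.linalg.eigh(R)             # eigenvalues in ascending order
        En = V[:, :-1]                       # noise subspace (one source assumed)

        thetas = np.deg2rad(np.linspace(-90, 90, 721))
        A = np.exp(1j * k * d * np.outer(np.arange(M), np.sin(thetas)))
        P = 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2   # pseudo-spectrum
        print(np.rad2deg(thetas[np.argmax(P)]))  # peak should sit near 20 degrees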

  20. Reply by the Authors to C. K. W. Tam

    NASA Technical Reports Server (NTRS)

    Morris, Philip J.; Farassat, F.

    2002-01-01

    The prediction of noise generation and radiation by turbulence has been the subject of continuous research for over fifty years. The essential problem is how to model the noise sources when one's knowledge of the detailed space-time properties of the turbulence is limited. We attempted to provide a comparison of models based on acoustic analogies and recent alternative models. Our goal was to demonstrate that the predictive capabilities of any model are based on the choice of the turbulence property that is modeled as a source of noise. Our general definition of an acoustic analogy is a rearrangement of the equations of motion into the form L(u) = Q, where L is a linear operator that reduces to an acoustic propagation operator outside a region υ; u is a variable that reduces to acoustic pressure (or a related linear acoustic variable) outside υ; and Q is a source term that can be meaningfully estimated without knowing u and tends to zero outside υ.

  1. Velocity Model Using the Large-N Seismic Array from the Source Physics Experiment (SPE)

    NASA Astrophysics Data System (ADS)

    Chen, T.; Snelson, C. M.

    2016-12-01

    The Source Physics Experiment (SPE) is a multi-institutional, multi-disciplinary project that consists of a series of chemical explosions conducted at the Nevada National Security Site (NNSS). The goal of SPE is to understand the complicated effect of geological structures on seismic wave propagation and source energy partitioning, to develop and validate physics-based modeling, and ultimately to better monitor low-yield nuclear explosions. A Large-N seismic array was deployed at the SPE site to image the full 3D wavefield from the most recent SPE-5 explosion on April 26, 2016. The Large-N seismic array consists of 996 geophones (half three-component and half vertical-component sensors) and operated for one month, recording the SPE-5 shot, ambient noise, and additional controlled sources (a large hammer). This study uses the Large-N array recordings of the SPE-5 chemical explosion to develop high-resolution images of local geologic structures. We analyze different phases of the recorded seismic data and construct a velocity model based on arrival times. The results of this study will be incorporated into the larger modeling and simulation efforts as ground truth, further validating the models.

  2. A study on locating the sonic source of sinusoidal magneto-acoustic signals using a vector method.

    PubMed

    Zhang, Shunqi; Zhou, Xiaoqing; Ma, Ren; Yin, Tao; Liu, Zhipeng

    2015-01-01

    Methods based on the magneto-acoustic effect are of great significance in studying the electrical imaging properties of biological tissues and currents. The commonly used continuous wave method can only detect the current amplitude, not the sound source position. Although the pulse mode adopted in magneto-acoustic imaging can locate the sonic source, its low measuring accuracy and low SNR have limited its application. In this study, a vector method was used to solve and analyze the magneto-acoustic signal based on the continuous sine wave mode. The study includes theoretical modeling of the vector method, simulations of the line model, and experiments with wire samples to analyze magneto-acoustic (MA) signal characteristics. The results showed that the amplitude and phase of the MA signal contain the location information of the sonic source, and that they obey the vector theory in the complex plane. This study lays a foundation for a new technique to locate sonic sources for biomedical imaging of tissue conductivity. It also aids the study of biological current detection and reconstruction based on the magneto-acoustic effect.

  3. Self-consistent multidimensional electron kinetic model for inductively coupled plasma sources

    NASA Astrophysics Data System (ADS)

    Dai, Fa Foster

    Inductively coupled plasma (ICP) sources have received increasing interest in microelectronics fabrication and the lighting industry. In 2-D configuration space (r, z) and a 2-D velocity domain (νθ, νz), a self-consistent electron kinetic analytic model is developed for various ICP sources. The electromagnetic (EM) model is established based on modal analysis, while the kinetic analysis gives the perturbed Maxwellian distribution of electrons by solving the Boltzmann-Vlasov equation. The self-consistent algorithm combines the EM model and the kinetic analysis by updating their results consistently until the solution converges. The closed-form solutions in the analytical model provide rigorous and fast computation of the EM fields and the electron kinetic behavior. The kinetic analysis shows that the RF energy in an ICP source is extracted by a collisionless dissipation mechanism if the electron thermal velocity is close to the RF phase velocity. A criterion for collisionless damping is thus given based on the analytic solutions. To achieve uniformly distributed plasma for plasma processing, we propose a novel discharge structure with both planar and vertical coil excitations. The theoretical results demonstrate improved uniformity of the excited azimuthal E-field in the chamber. Non-monotonic spatial decay in electric field and space current distributions was recently observed in weakly collisional plasmas. The anomalous skin effect is found to be responsible for this phenomenon. The proposed model successfully reproduces this non-monotonic spatial decay and achieves good agreement with measurements for different applied RF powers. The proposed analytical model is compared with other theoretical models and different experimental measurements. The developed model is also applied to two kinds of ICP discharges used for electrodeless light sources: one structure uses a vertical internal coil antenna to excite the plasma, and another has a metal shield to prevent electromagnetic radiation. The theoretical results delivered by the proposed model agree well with the experimental measurements in many respects. Therefore, the proposed self-consistent model provides an efficient and reliable means of designing ICP sources for applications such as VLSI fabrication and electrodeless light sources.

  4. High-resolution observations of low-luminosity gigahertz-peaked spectrum and compact steep-spectrum sources

    NASA Astrophysics Data System (ADS)

    Collier, J. D.; Tingay, S. J.; Callingham, J. R.; Norris, R. P.; Filipović, M. D.; Galvin, T. J.; Huynh, M. T.; Intema, H. T.; Marvil, J.; O'Brien, A. N.; Roper, Q.; Sirothia, S.; Tothill, N. F. H.; Bell, M. E.; For, B.-Q.; Gaensler, B. M.; Hancock, P. J.; Hindson, L.; Hurley-Walker, N.; Johnston-Hollitt, M.; Kapińska, A. D.; Lenc, E.; Morgan, J.; Procopio, P.; Staveley-Smith, L.; Wayth, R. B.; Wu, C.; Zheng, Q.; Heywood, I.; Popping, A.

    2018-06-01

    We present very long baseline interferometry observations of a faint and low-luminosity (L_{1.4 GHz} < 10^{27} W Hz^{-1}) gigahertz-peaked spectrum (GPS) and compact steep-spectrum (CSS) sample. We select eight sources from deep radio observations that have radio spectra characteristic of a GPS or CSS source and an angular size of θ ≲ 2 arcsec, and detect six of them with the Australian Long Baseline Array. We determine their linear sizes, and model their radio spectra using synchrotron self-absorption (SSA) and free-free absorption (FFA) models. We derive statistical model ages, based on a fitted scaling relation, and spectral ages, based on the radio spectrum, which are generally consistent with the hypothesis that GPS and CSS sources are young and evolving. We resolve the morphology of one CSS source with a radio luminosity of 10^{25} W Hz^{-1}, and find what appear to be two hotspots spanning 1.7 kpc. We find that our sources follow the turnover-linear size relation, and that both homogeneous SSA and an inhomogeneous FFA model can account for the spectra with observable turnovers. All but one of the FFA models do not require a spectral break to account for the radio spectrum, while all but one of the alternative SSA and power-law models do require a spectral break to account for the radio spectrum. We conclude that our low-luminosity sample is similar to brighter samples in terms of their spectral shape, turnover frequencies, linear sizes, and ages, but cannot test for a difference in morphology.
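
    A common way to fit such spectra is nonlinear least squares on a homogeneous SSA parametrisation. The sketch below uses one standard functional form with synthetic flux densities; it is not the authors' fitting code, which also considers FFA models and spectral breaks.

        # Fit a homogeneous SSA spectrum S(nu) to toy flux measurements.
        import numpy as np
        from scipy.optimize import curve_fit

        def ssa(nu, S0, nu_p, alpha):
            x = nu / nu_p
            return S0 * x**2.5 * (1 - np.exp(-x**(alpha - 2.5)))

        nu = np.array([0.15, 0.4, 0.8, 1.4, 3.0, 5.5, 9.0])          # GHz
        S = ssa(nu, 0.12, 1.1, -0.7) * (1 + 0.05 * np.random.randn(nu.size))

        popt, _ = curve_fit(ssa, nu, S, p0=[0.1, 1.0, -0.8])
        print("turnover frequency ~ %.2f GHz" % popt[1])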

  5. Optimizing Tsunami Forecast Model Accuracy

    NASA Astrophysics Data System (ADS)

    Whitmore, P.; Nyland, D. L.; Huang, P. Y.

    2015-12-01

    Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models is compared for seven events since 2006, based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after the fact provide improved methods for real-time forecasting of future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that assimilating sea level data into the models increases accuracy by approximately 15% for the events examined.

  6. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George

    2017-09-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e., the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation took about 173 s for the prostate plan and 73 s for the breast plan at 1% statistical error.

  7. Evaluation of stormwater micropollutant source control and end-of-pipe control strategies using an uncertainty-calibrated integrated dynamic simulation model.

    PubMed

    Vezzaro, L; Sharma, A K; Ledin, A; Mikkelsen, P S

    2015-03-15

    The estimation of micropollutant (MP) fluxes in stormwater systems is a fundamental prerequisite when preparing strategies to reduce stormwater MP discharges to natural waters. Dynamic integrated models can be important tools in this step, as they can be used to integrate the limited data provided by monitoring campaigns and to evaluate the performance of different strategies based on model simulation results. This study presents an example in which six different control strategies, including both source-control and end-of-pipe treatment, were compared. The comparison focused on fluxes of heavy metals (copper, zinc) and organic compounds (fluoranthene). MP fluxes were estimated by using an integrated dynamic model in combination with stormwater quality measurements. MP sources were identified by using GIS land usage data, runoff quality was simulated by using a conceptual accumulation/washoff model, and a stormwater retention pond was simulated by using a dynamic treatment model based on MP inherent properties. Uncertainty in the results was estimated with a pseudo-Bayesian method. Despite the great uncertainty in the MP fluxes estimated by the runoff quality model, it was possible to compare the six scenarios in terms of discharged MP fluxes, compliance with water quality criteria, and sediment accumulation. Source-control strategies obtained better results in terms of reduction of MP emissions, but all the simulated strategies failed to fulfil the criteria based on emission limit values. The results presented in this study show how the efficiency of MP pollution control strategies can be quantified by combining advanced modeling tools (an integrated stormwater quality model with uncertainty calibration). Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. CHARACTERIZING SPATIAL AND TEMPORAL DYNAMICS: DEVELOPMENT OF A GRID-BASED WATERSHED MERCURY LOADING MODEL

    EPA Science Inventory

    A distributed grid-based watershed mercury loading model has been developed to characterize spatial and temporal dynamics of mercury from both point and non-point sources. The model simulates flow, sediment transport, and mercury dynamics on a daily time step across a diverse lan...

  9. FloorspaceJS - A New, Open Source, Web-Based Geometry Editor for Building Energy Modeling (BEM): Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macumber, Daniel L; Horowitz, Scott G; Schott, Marjorie

    Across most industries, desktop applications are being rapidly migrated to web applications for a variety of reasons. Web applications are inherently cross platform, mobile, and easier to distribute than desktop applications. Fueling this trend are a wide range of free, open source libraries and frameworks that make it incredibly easy to develop powerful web applications. The building energy modeling community is just beginning to pick up on these larger trends, with a small but growing number of building energy modeling applications starting on or moving to the web. This paper presents a new, open source, web based geometry editor for Building Energy Modeling (BEM). The editor is written completely in JavaScript and runs in a modern web browser. The editor works on a custom JSON file format and is designed to be integrated into a variety of web and desktop applications. The web based editor is available to use as a standalone web application at: https://nrel.github.io/openstudio-geometry-editor/. An example integration is demonstrated with the OpenStudio desktop application. Finally, the editor can be easily integrated with a wide range of possible building energy modeling web applications.

  10. A new method to quantify the health risks from sources of perfluoroalkyl substances, combined with positive matrix factorization and risk assessment models.

    PubMed

    Xu, Jiao; Shi, Guo-Liang; Guo, Chang-Sheng; Wang, Hai-Ting; Tian, Ying-Ze; Huangfu, Yan-Qi; Zhang, Yuan; Feng, Yin-Chang; Xu, Jian

    2018-01-01

    A hybrid model based on the positive matrix factorization (PMF) model and a health risk assessment model for assessing risks associated with sources of perfluoroalkyl substances (PFASs) in water was established and applied at Dianchi Lake to test its applicability. The new method contains two stages: 1) the sources of PFASs are apportioned by the PMF model, and 2) the contribution to health risk from each source is calculated by the new hybrid model. Two factors were extracted by PMF, with factor 1 identified as an aqueous fire-fighting foam source and factor 2 as a fluoropolymer manufacturing/processing and perfluorooctanoic acid production source. The health risk of PFASs in the water assessed by the health risk assessment model was 9.54 × 10^-7 a^-1 on average, indicating no obvious adverse effects on human health. The two sources' risks estimated by the new hybrid model ranged from 2.95 × 10^-10 to 6.60 × 10^-6 a^-1 and from 1.64 × 10^-7 to 1.62 × 10^-6 a^-1, respectively. The new hybrid model can provide useful information on the health risks of PFAS sources, which is helpful for pollution control and environmental management. Environ Toxicol Chem 2018;37:107-115. © 2017 SETAC.
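
    The two-stage logic can be illustrated with scikit-learn's NMF standing in for PMF (true PMF additionally weights each data point by its measurement uncertainty). The source profiles, sample matrix and unit-risk values below are synthetic, chosen only to show how factor contributions feed the risk attribution step.

        # Sketch: source apportionment (stage 1) feeding risk attribution (stage 2).
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        profiles = np.array([[0.8, 0.1, 0.1],    # toy "fire-fighting foam" PFAS mix
                             [0.1, 0.5, 0.4]])   # toy "fluoropolymer production" mix
        X = rng.random((40, 2)) @ profiles + 0.01 * rng.random((40, 3))

        model = NMF(n_components=2, init="nndsvda", max_iter=2000, random_state=0)
        W = model.fit_transform(X)               # per-sample source contributions
        H = model.components_                    # factor profiles

        unit_risk = np.array([2e-7, 5e-8, 1e-7]) # hypothetical risk per unit conc.
        per_source_risk = W * (H @ unit_risk)    # stage 2: risk attributed by source
        print(per_source_risk.mean(axis=0))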

  11. Geospatial Analysis of Atmospheric Haze Effect by Source and Sink Landscape

    NASA Astrophysics Data System (ADS)

    Yu, T.; Xu, K.; Yuan, Z.

    2017-09-01

    Based on a geospatial analysis model, this paper analyzes the relationship between the landscape patterns of sources and sinks in urban areas and atmospheric haze pollution. Firstly, the classification result and aerosol optical depth (AOD) of Wuhan are divided into square grids with a side length of 6 km, and the category-level landscape indices (PLAND, PD, COHESION, LPI, FRAC_MN) and AOD of each grid are calculated. Then the source and sink landscapes of atmospheric haze pollution are selected based on an analysis of the correlation between the landscape indices and AOD. Next, to make the subsequent analysis more efficient, the selected indices are screened using the correlation coefficients between them. Finally, given the spatial dependency and spatial heterogeneity of the data used in this paper, a spatial autoregressive model and a geographically weighted regression model are used to analyze the atmospheric haze effect of source and sink landscapes at the global and local levels. The results show that the source landscape of atmospheric haze pollution is buildings, and the sink landscapes are shrub and woodland. PLAND, PD and COHESION are suitable for describing the atmospheric haze effect of source and sink landscapes. Comparing the models, the fits of the SLM, SEM and GWR models are significantly better than that of the OLS model, and the SLM model is superior to the SEM model in this application. Although the GWR model fits less well than the SLM, it expresses more clearly how the influence of each factor on atmospheric haze varies geographically. From the analysis results of these models, the following conclusions can be drawn: reducing the proportion of source-landscape area and increasing its degree of fragmentation could lower aerosol optical thickness; distributing the source and sink landscapes evenly and interspersedly could effectively reduce aerosol optical thickness, which represents atmospheric haze pollution; and for Wuhan, slightly adjusting the built-up area and planning the non-built-up areas reasonably can reduce atmospheric haze pollution.

  12. The impacts of data constraints on the predictive performance of a general process-based crop model (PeakN-crop v1.0)

    NASA Astrophysics Data System (ADS)

    Caldararu, Silvia; Purves, Drew W.; Smith, Matthew J.

    2017-04-01

    Improving international food security under a changing climate and increasing human population will be greatly aided by improving our ability to modify, understand and predict crop growth. What we predominantly have at our disposal are either process-based models of crop physiology or statistical analyses of yield datasets, both of which suffer from various sources of error. In this paper, we present a generic process-based crop model (PeakN-crop v1.0) which we parametrise using a Bayesian model-fitting algorithm against three different data sources: space-based vegetation indices, eddy covariance productivity measurements and regional crop yields. We show that the model parametrised without data, based on prior knowledge of the parameters, can largely capture the observed behaviour, but the data-constrained model greatly improves the model fit and reduces prediction uncertainty. We investigate the extent to which each dataset contributes to the model performance and show that while all data improve on the prior model fit, the satellite-based data and crop yield estimates are particularly important for reducing model error and uncertainty. Despite these improvements, we conclude that there are still significant knowledge gaps in terms of available data for model parametrisation, but our study can help indicate the data collection necessary to improve our predictions of crop yields and crop responses to environmental changes.
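
    The kind of Bayesian constraint described here can be miniaturised to a single parameter. The sketch below fits the growth rate of a toy logistic "crop" curve with a random-walk Metropolis sampler; the model, prior and noise level are invented, and the PeakN-crop model itself is far richer.

        # Toy random-walk Metropolis fit of one growth-rate parameter.
        import numpy as np

        rng = np.random.default_rng(1)
        t = np.linspace(0, 100, 25)
        model = lambda r: 1.0 / (1.0 + np.exp(-r * (t - 50)))  # logistic biomass
        obs = model(0.12) + 0.03 * rng.standard_normal(t.size)

        def log_post(r):
            if not 0.0 < r < 1.0:                 # flat prior on (0, 1)
                return -np.inf
            return -0.5 * np.sum((obs - model(r))**2) / 0.03**2

        r, lp, chain = 0.5, log_post(0.5), []
        for _ in range(5000):
            r_new = r + 0.02 * rng.standard_normal()
            lp_new = log_post(r_new)
            if np.log(rng.random()) < lp_new - lp:   # Metropolis accept/reject
                r, lp = r_new, lp_new
            chain.append(r)

        print(np.mean(chain[1000:]), np.std(chain[1000:]))  # mean and uncertainty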

  13. Unique effects and moderators of effects of sources on self-efficacy: A model-based meta-analysis.

    PubMed

    Byars-Winston, Angela; Diestelmann, Jacob; Savoy, Julia N; Hoyt, William T

    2017-11-01

    Self-efficacy beliefs are strong predictors of academic pursuits, performance, and persistence, and in theory are developed and maintained by 4 classes of experiences that Bandura (1986) referred to as sources: performance accomplishments (PA), vicarious learning (VL), social persuasion (SP), and affective arousal (AA). The effects of sources on self-efficacy vary by performance domain and individual difference factors. In this meta-analysis (k = 61 studies of academic self-efficacy; N = 8,965), we employed B. J. Becker's (2009) model-based approach to examine cumulative effects of the sources as a set and unique effects of each source, controlling for the others. Following Becker's recommendations, we used available data to create a correlation matrix for the 4 sources and self-efficacy, then used these meta-analytically derived correlations to test our path model. We further examined moderation of these associations by subject area (STEM vs. non-STEM), grade, sex, and ethnicity. PA showed by far the strongest unique association with self-efficacy beliefs. Subject area was a significant moderator, with sources collectively predicting self-efficacy more strongly in non-STEM (k = 14) than in STEM (k = 47) subjects (R^2 = .37 and .22, respectively). Within studies of STEM subjects, grade level was a significant moderator of the coefficients in our path model, as were 2 continuous study characteristics (percent non-White and percent female). Practical implications of the findings and future research directions are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Assessment of State-of-the-Art Dust Emission Scheme in GEOS

    NASA Technical Reports Server (NTRS)

    Darmenov, Anton; Liu, Xiaohong; Prigent, Catherine

    2017-01-01

    The GEOS modeling system has been extended with a state-of-the-art parameterization of dust emissions based on the vertical flux formulation described in Kok et al. (2014). The new dust scheme was coupled with the GOCART and MAM aerosol models. In the present study we compare dust emissions, aerosol optical depth (AOD) and radiative fluxes from GEOS experiments with the standard and new dust emissions. AOD from the model experiments is also compared with AERONET and satellite-based data. Based on this comparative analysis we concluded that the new parameterization improves the GEOS capability to model dust aerosols originating from African sources; however, it led to overestimation of dust emissions from Asian and Arabian sources. Further regional tuning of key parameters controlling the threshold friction velocity may be required in order to achieve a more definitive and uniform improvement in dust modeling skill.

  15. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Shou-ping; Xin, Xiao-kang

    2017-07-01

    Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency rescue, and intelligent optimization methods can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying the analytic solution of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts and positions, whether for a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree closely with the analytic results for single-source amount and position identification, with relative errors no greater than 5%. For cases with multi-point sources and multiple variables, the computed results contain some error because many possible combinations of pollution sources exist. However, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5%, which shows that the established source identification model can be used to direct emergency responses.
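
    The structure of such an identification model is easy to sketch: a genetic algorithm searches for the source mass and position whose analytic concentrations best match the observations. The river parameters, observation layout and GA settings below are illustrative, not the paper's configuration.

        # GA sketch: recover source mass M and position x0 from observed
        # concentrations under the 1-D instantaneous-release solution
        #   C(x,t) = M/(A*sqrt(4*pi*D*t)) * exp(-(x - x0 - u*t)**2 / (4*D*t))
        import numpy as np

        rng = np.random.default_rng(2)
        A, D, u, t = 100.0, 5.0, 0.3, 1800.0    # area, dispersion, velocity, time
        x_obs = np.array([200.0, 400.0, 600.0])

        def conc(M, x0):
            return (M / (A * np.sqrt(4 * np.pi * D * t))
                    * np.exp(-(x_obs - x0 - u * t)**2 / (4 * D * t)))

        observed = conc(5000.0, 150.0)          # synthetic "incident" data

        def fitness(pop):                       # pop rows are (M, x0)
            return -np.array([np.sum((conc(M, x0) - observed)**2) for M, x0 in pop])

        pop = np.column_stack([rng.uniform(1e3, 1e4, 50), rng.uniform(0, 500, 50)])
        for _ in range(200):
            parents = pop[np.argsort(fitness(pop))][-25:]        # selection
            i, j = rng.integers(0, 25, (2, 25))
            children = (parents[i] + parents[j]) / 2             # crossover
            children += rng.normal(0, [50.0, 5.0], (25, 2))      # mutation
            pop = np.vstack([parents, children])

        print("M ~ %.0f, x0 ~ %.0f" % tuple(pop[np.argmax(fitness(pop))]))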

  16. Dual metal gate tunneling field effect transistors based on MOSFETs: A 2-D analytical approach

    NASA Astrophysics Data System (ADS)

    Ramezani, Zeinab; Orouji, Ali A.

    2018-01-01

    A 2-D analytical drain current model of novel Dual Metal Gate Tunnel Field Effect Transistors Based on MOSFETs (DMG-TFETs) is presented in this paper. The proposed Tunneling FET is derived from a MOSFET structure by employing an additional electrode in the source region with an appropriate work function to induce holes in the N+ source region and hence convert it into a P+ source region. An expression for the electric field is derived and is used to obtain the drain current by analytically integrating the band-to-band tunneling generation rate in the tunneling region, based on the potential profile obtained by solving Poisson's equation. Through this model, the effects of the thin-film thickness and gate voltage on the potential and the electric field, and the effect of the thin-film thickness on the tunneling current, can be studied. To validate the model, the analytical results have been compared with the SILVACO ATLAS device simulator, and good agreement was found.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, J.A.; Brasseur, G.P.; Zimmerman, P.R.

    Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source function models of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). With the average hydroxyl radical concentration fixed, the methane source term was computed as ~623 Tg CH4, giving an atmospheric lifetime for methane of ~8.3 years. The second model identified source regions for methane from rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land use data. This methane source distribution resulted in an estimate of the global total methane source of ~611 Tg CH4, giving an atmospheric lifetime for methane of ~8.5 years. The most significant difference between the two models was in the predicted methane fluxes over China and South East Asia, the location of most of the world's rice paddies. Using a recent measurement of the reaction rate of hydroxyl radical and methane leads to estimates of the global total methane source of ~524 Tg CH4 for SF1, giving an atmospheric lifetime of ~10.0 years, and of ~514 Tg CH4 for SF2, yielding a lifetime of ~10.2 years.

  18. RF model of the distribution system as a communication channel, phase 2. Volume 4: Software source programs and illustrative ASCII database listings

    NASA Technical Reports Server (NTRS)

    Rustay, R. C.; Gajjar, J. T.; Rankin, R. W.; Wentz, R. C.; Wooding, R.

    1982-01-01

    Listings of source programs and some illustrative examples of various ASCII data base files are presented. The listings are grouped into the following categories: main programs, subroutine programs, illustrative ASCII data base files. Within each category files are listed alphabetically.

  19. An intercomparison of three remote sensing-based energy balance models using large aperture scintillometer measurements over a wheat-corn production region

    USDA-ARS?s Scientific Manuscript database

    This paper compares three remote sensing-based models for estimating evapotranspiration (ET), namely the Surface Energy Balance System (SEBS), the Two-Source Energy Balance (TSEB) model, and the surface Temperature-Vegetation index Triangle (TVT). The models used as input MODIS/TERRA products and gr...

  20. Statistical Techniques to Explore the Quality of Constraints in Constraint-Based Modeling Environments

    ERIC Educational Resources Information Center

    Gálvez, Jaime; Conejo, Ricardo; Guzmán, Eduardo

    2013-01-01

    One of the most popular student modeling approaches is Constraint-Based Modeling (CBM). It is an efficient approach that can be easily applied inside an Intelligent Tutoring System (ITS). Even with these characteristics, building new ITSs requires carefully designing the domain model to be taught because different sources of errors could affect…

  1. Determination of X-ray flux using silicon pin diodes

    PubMed Central

    Owen, Robin L.; Holton, James M.; Schulze-Briese, Clemens; Garman, Elspeth F.

    2009-01-01

    Accurate measurement of photon flux from an X-ray source, a parameter required to calculate the dose absorbed by the sample, is not yet routinely available at macromolecular crystallography beamlines. The development of a model for determining the photon flux incident on pin diodes is described here, and has been tested on the macromolecular crystallography beamlines at both the Swiss Light Source, Villigen, Switzerland, and the Advanced Light Source, Berkeley, USA, at energies between 4 and 18 keV. These experiments have shown that a simple model based on energy deposition in silicon is sufficient for determining the flux incident on high-quality silicon pin diodes. The derivation and validation of this model is presented, and a web-based tool for the use of the macromolecular crystallography and wider synchrotron community is introduced. PMID:19240326
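
    The underlying energy-deposition model reduces to a short calculation: each photon absorbed in silicon produces roughly one electron-hole pair per 3.66 eV deposited, so the photocurrent maps directly to flux. The absorbed fraction below is a made-up placeholder; a real calculation takes it from tabulated attenuation coefficients and the diode thickness.

        # Sketch: photon flux from pin-diode photocurrent via energy deposition.
        w_si = 3.66        # eV per electron-hole pair in silicon
        e = 1.602e-19      # elementary charge, C

        def photon_flux(current_A, energy_eV, absorbed_fraction):
            pairs_per_photon = energy_eV * absorbed_fraction / w_si
            return current_A / (e * pairs_per_photon)   # photons per second

        # e.g. 10 uA from a diode absorbing ~30% of 12.4 keV photons
        print("%.3e ph/s" % photon_flux(10e-6, 12.4e3, 0.30))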

  2. Start-up Characteristics of Swallow-tailed Axial-grooved Heat Pipe under the conditions of Multiple Heat Sources

    NASA Astrophysics Data System (ADS)

    Zhang, Renping

    2017-12-01

    A mathematical model was developed for predicting the start-up characteristics of a swallow-tailed axial-grooved heat pipe under multiple heat sources. The effects of the heat capacitance of the heat source, liquid-vapour interfacial evaporation-condensation heat transfer, and shear stress at the interface were considered in the current model. The interfacial evaporating mass flow rate is based on kinetic analysis. Time variations of the evaporating mass rate, wall temperature and liquid velocity are studied from start-up to steady state. The calculated results show that the wall temperature exhibits a step transition at the junction between heated and unheated sections of the evaporator. The liquid velocity changes drastically in the heated evaporator sections but varies only slightly in the evaporator sections without a heat source. When the heat capacitance of the heat source is ignored, the simulated temperature shows a quicker response; when it is taken into account, the data obtained from the proposed model agree well with the experimental results.

  3. Model-free data analysis for source separation based on Non-Negative Matrix Factorization and k-means clustering (NMFk)

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Alexandrov, B.

    2014-12-01

    The identification of the physical sources causing spatial and temporal fluctuations of state variables such as river stage levels and aquifer hydraulic heads is challenging. The fluctuations can be caused by variations in natural and anthropogenic sources such as precipitation events, infiltration, groundwater pumping, barometric pressures, etc. Source identification and separation can be crucial for conceptualization of the hydrological conditions and characterization of system properties. If the original signals that cause the observed state-variable transients can be successfully "unmixed", decoupled physics models may then be applied to analyze the propagation of each signal independently. We propose a new model-free inverse analysis of transient data based on the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with the k-means clustering algorithm, which we call NMFk. NMFk is capable of identifying a set of unique sources from a set of experimentally measured mixed signals, without any information about the sources, their transients, or the physical mechanisms and properties controlling the signal propagation through the system. A classical BSS conundrum is the so-called "cocktail-party" problem, where several microphones record the sounds in a ballroom (music, conversations, noise, etc.). Each microphone records a mixture of the sounds. The goal of BSS is to "unmix" and reconstruct the original sounds from the microphone records. Similarly to the "cocktail-party" problem, our model-free analysis only requires information about the state-variable transients at a number of observation points, m, where m > r, and r is the number of unknown unique sources causing the observed fluctuations. We apply the analysis to a dataset from the Los Alamos National Laboratory (LANL) site, identify the sources as barometric-pressure variations and water-supply pumping, and estimate their impacts. We also estimate the location of the water-supply pumping wells based on the available data. The possible applications of the NMFk algorithm are not limited to hydrology problems; NMFk can be applied to any problem where temporal system behavior is observed at multiple locations and an unknown number of physical sources are causing these fluctuations.
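
    A stripped-down version of the NMFk idea fits in a short script: factorise the mixed records for several candidate numbers of sources, pool the factors from random restarts, and keep the number whose k-means clusters are most reproducible. The data and the silhouette-based selection below are simplified stand-ins for the authors' algorithm.

        # Conceptual NMFk sketch: NMF restarts + k-means reproducibility check.
        import numpy as np
        from sklearn.decomposition import NMF
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        rng = np.random.default_rng(3)
        S = np.vstack([np.sin(np.linspace(0, 6, 300)) + 1,   # two hidden sources
                       rng.random(300)])
        X = rng.random((12, 2)) @ S                          # 12 observation points

        for k in (2, 3, 4):
            H = np.vstack([NMF(k, init="random", random_state=i, max_iter=1000)
                           .fit(X).components_ for i in range(10)])
            Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Hn)
            print(k, round(silhouette_score(Hn, labels), 3))  # best k scores highest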

  4. Checking the validity of superimposing analytical deformation models and implications for numerical modelling of dikes and magma chambers

    NASA Astrophysics Data System (ADS)

    Pascal, K.; Neuberg, J. W.; Rivalta, E.

    2011-12-01

    The displacement field due to magma movements in the subsurface is commonly modelled using the solutions for a point source (Mogi, 1958), a finite spherical source (McTigue, 1987), or a dislocation source (Okada, 1992) embedded in a homogeneous elastic half-space. When the magmatic system is represented by several sources, their respective deformation fields are summed, and the assumption of homogeneity in the half-space is violated. We have investigated the effects of neglecting the interaction between sources on the surface deformation field. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and tested them against the solutions of corresponding numerical 3D finite element models. We implemented several models combining spherical pressure sources and dislocation sources, varying the pressure or opening of the sources and their relative position. We also investigated various numerical methods to model a dike as a dislocation tensile source or as a pressurized tabular crack. In the former case, the dike opening was either defined as two boundaries displaced from a central location, or as one boundary displaced relative to the other. We finally considered two case studies based on the Soufrière Hills Volcano (Montserrat, West Indies) and the Dabbahu rift segment (Afar, Ethiopia) magmatic systems. We found that the discrepancies between a simple superposition of the displacement fields and a fully interacting numerical solution depend mostly on the source types and on their spacing. Their magnitude may be comparable with the errors due to neglecting the topography, inhomogeneities in crustal properties, or more realistic rheologies. In the models considered, the errors induced by neglecting the source interaction are negligible (<5%) when the sources are separated by at least 4 radii for two combined Mogi sources, and by at least 3 radii for juxtaposed Mogi and Okada sources. Furthermore, this study underlines fundamental issues related to the numerical method chosen to model a dike or a magma chamber. It clearly demonstrates that, while magma compressibility can be neglected when modelling the deformation due to one source or distant sources, it must be taken into account in models combining close sources.
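
    The superposition being tested is literally a sum of analytic displacement fields. The sketch below adds the vertical surface displacements of two Mogi (1958) point pressure sources; the elastic constants and source parameters are illustrative, and a full test against a finite-element solution is of course outside its scope.

        # Vertical surface displacement of two superposed Mogi point sources.
        import numpy as np

        nu, mu = 0.25, 3e10                    # Poisson ratio, shear modulus (Pa)

        def mogi_uz(x, y, xs, ys, depth, dP, a):
            """u_z = (1 - nu) * dP * a**3 / mu * depth / R**3 (point source)."""
            C = (1 - nu) * dP * a**3 / mu
            R2 = (x - xs)**2 + (y - ys)**2 + depth**2
            return C * depth / R2**1.5

        x = np.linspace(-10e3, 10e3, 201)      # surface profile, m
        uz = (mogi_uz(x, 0.0, -2e3, 0.0, 3e3, 1e7, 500.0)      # inflating source
              + mogi_uz(x, 0.0, 2e3, 0.0, 4e3, -5e6, 500.0))   # deflating neighbour
        print(uz.max())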

  5. Microcomputer pollution model for civilian airports and Air Force bases. Model description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segal, H.M.; Hamilton, P.L.

    1988-08-01

    This is one of three reports describing the Emissions and Dispersion Modeling System (EDMS). EDMS is a complex source emissions/dispersion model for use at civilian airports and Air Force bases. It operates in both a refined and a screening mode and is programmed for an IBM-XT (or compatible) computer. This report--MODEL DESCRIPTION--provides the technical description of the model. It first identifies the key design features of both the emissions (EMISSMOD) and dispersion (GIMM) portions of EDMS. It then describes the type of meteorological information the dispersion model can accept and identifies the manner in which it preprocesses National Climatic Centermore » (NCC) data prior to a refined-model run. The report presents the results of running EDMS on a number of different microcomputers and compares EDMS results with those of comparable models. The appendices elaborate on the information noted above and list the source code.« less

  6. A virtual photon energy fluence model for Monte Carlo dose calculation.

    PubMed

    Fippel, Matthias; Haryanto, Freddy; Dohm, Oliver; Nüsslin, Fridtjof; Kriesen, Stephan

    2003-03-01

    The presented virtual energy fluence (VEF) model of the patient-independent part of the medical linear accelerator head consists of two Gaussian-shaped photon sources and one uniform electron source. The planar photon sources are located close to the bremsstrahlung target (primary source) and to the flattening filter (secondary source), respectively. The electron contamination source is located in the plane defining the lower end of the filter. The standard deviations or widths and the relative weights of each source are free parameters. Five other parameters correct for fluence variations, i.e., the horn or central depression effect. If these parameters and the field widths in the X and Y directions are given, the corresponding energy fluence distribution can be calculated analytically and compared to measured dose distributions in air. This provides a method of fitting the free parameters using measurements for various square and rectangular fields and a fixed number of monitor units. The next step in generating the whole set of base data is to calculate monoenergetic central axis depth dose distributions in water, which are used to derive the energy spectrum by deconvolving the measured depth dose curves. This spectrum is also corrected to take off-axis softening into account. The VEF model is implemented, together with geometry modules for the patient-specific part of the treatment head (jaws, multileaf collimator), into the XVMC dose calculation engine. Implementation into other Monte Carlo codes is possible based on the information in this paper. Experiments were performed to verify the model by comparing measured and calculated dose distributions and output factors in water. It is demonstrated that open photon beams of linear accelerators from two different vendors are accurately simulated using the VEF model. The commissioning procedure of the VEF model is clinically feasible because it is based on standard measurements in air and water. It is also useful for IMRT applications because a full Monte Carlo simulation of the treatment head would be too time-consuming for many small fields.
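
    The commissioning step, fitting free source parameters to in-air measurements, can be caricatured in one dimension: model a lateral profile as a weighted sum of a narrow primary and a broad secondary Gaussian and fit the weight and widths. The profile and parameters below are invented and ignore the geometric projection through the jaws that the full VEF model includes.

        # Toy 1-D two-Gaussian fit standing in for the VEF commissioning step.
        import numpy as np
        from scipy.optimize import curve_fit

        def fluence(x, w1, s1, s2):            # secondary weight is 1 - w1
            return (w1 * np.exp(-0.5 * (x / s1)**2)
                    + (1 - w1) * np.exp(-0.5 * (x / s2)**2))

        x = np.linspace(-15, 15, 61)           # off-axis distance, cm
        measured = fluence(x, 0.9, 4.0, 9.0) + 0.005 * np.random.randn(x.size)

        popt, _ = curve_fit(fluence, x, measured, p0=[0.8, 3.0, 8.0])
        print("primary weight %.2f, widths %.1f / %.1f cm" % tuple(popt))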

  7. Computer Modeling of High-Intensity Cs-Sputter Ion Sources

    NASA Astrophysics Data System (ADS)

    Brown, T. A.; Roberts, M. L.; Southon, J. R.

    The grid-point mesh program NEDLab has been used to model the interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS), with the goal of improving negative ion output. NEDLab has several features that are important for realistic modeling of such sources. First, space-charge effects are incorporated in the calculations through an automated successive-iteration process between ion trajectories and Poisson electric fields. Second, space charge distributions can be averaged over successive iterations to suppress model instabilities. Third, space charge constraints on ion emission from surfaces can be incorporated via Child's-law-based algorithms. Fourth, the energy of ions emitted from a surface can be randomly chosen from within a thermal energy distribution. Finally, ions can be emitted from a surface at randomized angles. The results of our modeling effort indicate that significant modification of the interior geometry of the source will double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source.
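
    The Child's-law constraint mentioned in the third feature caps the current density that space charge allows across an extraction gap. A direct evaluation, with an illustrative 5 kV, 1 cm gap for Cs+ ions, looks like this:

        # Child-Langmuir space-charge-limited current density for Cs+ ions.
        import numpy as np

        eps0, q = 8.854e-12, 1.602e-19
        m_cs = 132.9 * 1.661e-27               # Cs ion mass, kg

        def child_langmuir_J(V, d):
            """J = (4*eps0/9) * sqrt(2*q/m) * V**1.5 / d**2, in A/m^2."""
            return (4 * eps0 / 9) * np.sqrt(2 * q / m_cs) * V**1.5 / d**2

        print(child_langmuir_J(5e3, 0.01))     # roughly 17 A/m^2 for this toy gap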

  8. KaBOB: ontology-based semantic integration of biomedical databases.

    PubMed

    Livingston, Kevin M; Bada, Michael; Baumgartner, William A; Hunter, Lawrence E

    2015-04-23

    The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work moving toward shared data formats and linked identifiers, significant problems persist in semantic data integration in order to establish shared identity and shared meaning across heterogeneous biomedical data sources. We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers denoting the same biomedical concepts across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrating it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license. KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats. KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources and provides a platform for ongoing data integration and development and for formal reasoning over a wealth of integrated biomedical data.
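
    The record-versus-concept distinction at the heart of KaBOB can be hinted at with a toy graph: multiple source-database records are linked to one biomedical concept, and queries are posed against the concept. The IRIs and the use of owl:sameAs below are illustrative simplifications of KaBOB's actual representation.

        # Toy aggregation of source records around one biomedical concept.
        from rdflib import Graph, Namespace
        from rdflib.namespace import OWL

        EX = Namespace("http://example.org/kb/")   # hypothetical namespace
        g = Graph()
        g.add((EX.uniprot_P04637, OWL.sameAs, EX.concept_TP53))
        g.add((EX.ncbigene_7157, OWL.sameAs, EX.concept_TP53))

        # Query in terms of the concept, not source-specific schemas
        print(list(g.subjects(OWL.sameAs, EX.concept_TP53)))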

  9. Cortical Transformation of Spatial Processing for Solving the Cocktail Party Problem: A Computational Model

    PubMed Central

    Dong, Junzi; Colburn, H. Steven

    2016-01-01

    In multisource, “cocktail party” sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem. PMID:26866056
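
    The model's central mechanism, lateral inhibition across location-tuned channels, can be conveyed with a toy network: each channel's output is its input minus a fraction of the activity in the other channels, rectified. The weights and firing rates are invented; the published model is a spiking network with far richer dynamics.

        # Toy lateral-inhibition readout across spatial channels.
        import numpy as np

        def cortical_response(rates, inhibition=0.6):
            """rates: inputs of location-tuned channels (arbitrary units)."""
            n = rates.size
            W = np.eye(n) - inhibition * (np.ones((n, n)) - np.eye(n)) / (n - 1)
            return np.maximum(W @ rates, 0.0)       # rectified outputs

        print(cortical_response(np.array([0.2, 0.2, 0.2])))  # quiet: broad monitoring
        print(cortical_response(np.array([1.0, 0.1, 0.7])))  # target suppresses rivals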

  10. Cortical Transformation of Spatial Processing for Solving the Cocktail Party Problem: A Computational Model.

    PubMed

    Dong, Junzi; Colburn, H Steven; Sen, Kamal

    2016-01-01

    In multisource, "cocktail party" sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem.

  11. Data Sources for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Cancer.gov

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  12. Infrared and visible image fusion with the target marked based on multi-resolution visual attention mechanisms

    NASA Astrophysics Data System (ADS)

    Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue

    2016-03-01

    In traditional multi-resolution infrared and visible image fusion, a low-contrast target may be weakened and become inconspicuous because of opposing DN values in the source images. Thus, a novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform is proposed. Target regions of interest are extracted from the source images using motion features obtained from the modified attention model, and gray-level fusion of the source images is performed in the curvelet domain via rules based on the physical characteristics of the sensors. The final fused image is obtained by mapping the extracted targets into the gray fusion result with an appropriate pseudo-color. Experiments show that the algorithm effectively highlights dim targets and improves the SNR of the fused image.
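
    The final step of the algorithm, re-inserting the detected target region into the gray fusion result in pseudo-color, can be illustrated with a simplified NumPy sketch. No attention model or curvelet transform is used here; a weighted average stands in for the curvelet-domain fusion, and the target mask and overlay color are hypothetical.

        import numpy as np

        # Simplified illustration of the final step (no curvelets here): fuse IR
        # and visible images to gray, then re-insert the detected target region
        # in pseudo-color so a low-contrast target stays conspicuous.
        def fuse_with_target_overlay(ir, vis, target_mask, w_ir=0.6):
            gray = w_ir * ir + (1.0 - w_ir) * vis      # toy gray-level fusion
            rgb = np.stack([gray, gray, gray], axis=-1)
            rgb[target_mask] = [1.0, 0.0, 0.0]         # hypothetical pseudo-color: red
            return rgb

        ir = np.random.rand(64, 64)
        vis = np.random.rand(64, 64)
        mask = np.zeros((64, 64), dtype=bool)
        mask[28:36, 28:36] = True                      # stand-in for attention output
        fused = fuse_with_target_overlay(ir, vis, mask)
        print(fused.shape)                             # (64, 64, 3)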

  13. Teacher Views on School Administrators' Organizational Power Sources and Their Change Management Behaviours

    ERIC Educational Resources Information Center

    Argon, Türkan; Dilekçi, Ümit

    2016-01-01

    This study aimed to determine school administrators' organizational power sources and change management behaviours based on the views of teachers in Bolu central district primary and secondary schools. The study, conducted with a relational screening model, reached 286 teachers. School Administrators' Organizational Power Sources Scale and Change Management…

  14. Appropriate evidence sources for populating decision analytic models within health technology assessment (HTA): a systematic review of HTA manuals and health economic guidelines.

    PubMed

    Zechmeister-Koss, Ingrid; Schnell-Inderst, Petra; Zauner, Günther

    2014-04-01

    An increasing number of evidence sources are relevant for populating decision analytic models. What is needed is detailed methodological advice on which type of data should be used for which type of model parameter. We aim to identify standards in health technology assessment manuals and economic (modeling) guidelines on appropriate evidence sources and on the roles different types of data play within a model. Documents were identified via a call among members of the International Network of Agencies for Health Technology Assessment and by hand search. We included documents from Europe, the United States, Canada, Australia, and New Zealand, as well as transnational guidelines written in English or German. We systematically summarized, in narrative form, information on appropriate evidence sources for model parameters, their advantages and limitations, data identification methods, and data quality issues. A large variety of evidence sources for populating models is mentioned in the 28 included documents, comprising research- and non-research-based sources. Valid and less appropriate sources are identified for informing different types of model parameters, such as clinical effect size, natural history of disease, resource use, unit costs, and health state utility values. Guidelines do not provide structured and detailed advice on this issue. The article does not include information from guidelines in languages other than English or German, and the information is not tailored to specific modeling techniques. The usability of guidelines and manuals for modeling could be improved by addressing the issue of evidence sources in a more structured and comprehensive format.

  15. Energy-based dosimetry of low-energy, photon-emitting brachytherapy sources

    NASA Astrophysics Data System (ADS)

    Malin, Martha J.

    Model-based dose calculation algorithms (MBDCAs) for low-energy, photon-emitting brachytherapy sources have advanced to the point where the algorithms may be used in clinical practice. Before these algorithms can be used, a methodology must be established to verify the accuracy of the source models used by the algorithms. Additionally, the source strength metric for these algorithms must be established. This work explored the feasibility of verifying the source models used by MBDCAs by measuring the differential photon fluence emitted from the encapsulation of the source. The measured fluence could be compared to that modeled by the algorithm to validate the source model. This work examined how the differential photon fluence varied with position and angle of emission from the source, and the resolution that these measurements would require for dose computations to be accurate to within 1.5%. Both the spatial and angular resolution requirements were determined. The techniques used to determine the resolution required for measurements of the differential photon fluence were applied to determine why dose-rate constants determined using a spectroscopic technique disagreed with those computed using Monte Carlo techniques. The discrepancy between the two techniques had been previously published, but the cause of the discrepancy was not known. This work determined the impact that some of the assumptions used by the spectroscopic technique had on the accuracy of the calculation. The assumption of isotropic emission was found to cause the largest discrepancy in the spectroscopic dose-rate constant. Finally, this work improved the instrumentation used to measure the rate at which energy leaves the encapsulation of a brachytherapy source. This quantity is called emitted power (EP), and is presented as a possible source strength metric for MBDCAs. A calorimeter that measured EP was designed and built. The theoretical framework that the calorimeter relied upon to measure EP was established. Four clinically relevant 125I brachytherapy sources were measured with the instrument. The accuracy of the measured EP was compared to an air-kerma strength-derived EP to test the accuracy of the instrument. The instrument was accurate to within 10%, with three out of the four source measurements accurate to within 4%.

  16. Development of efficient and cost-effective distributed hydrological modeling tool MWEasyDHM based on open-source MapWindow GIS

    NASA Astrophysics Data System (ADS)

    Lei, Xiaohui; Wang, Yuhui; Liao, Weihong; Jiang, Yunzhong; Tian, Yu; Wang, Hao

    2011-09-01

    Many regions of China are still threatened by frequent floods and water resource shortages. Consequently, reproducing and predicting the hydrological processes in watersheds is a hard but unavoidable task for reducing the risk of damage and loss, and it is necessary to develop an efficient and cost-effective hydrological tool for China, where many areas must be modeled. Existing hydrological tools such as Mike SHE and ArcSWAT (the soil and water assessment tool based on ArcGIS) show significant power in improving the precision of hydrological modeling in China by considering spatial variability in both land cover and soil type. However, adopting such commercial tools in a large developing country comes at a high cost. Commercial modeling tools usually contain large numbers of formulas, complicated data formats, and many preprocessing or postprocessing steps that can make simulation difficult for the user, lowering the efficiency of the modeling process. Moreover, commercial hydrological models usually cannot be modified or improved to suit some of the special hydrological conditions in China. Some other hydrological models are open source but are integrated into commercial GIS systems. Therefore, by integrating the hydrological simulation code EasyDHM, a hydrological simulation tool named MWEasyDHM was developed on the basis of the open-source MapWindow GIS, the purpose being to establish the first open-source, GIS-based distributed hydrological model tool in China by integrating modules for preprocessing, model computation, parameter estimation, result display, and analysis. MWEasyDHM provides users with a user-friendly MapWindow GIS interface, selectable multifunctional hydrological processing modules, and, more importantly, an efficient and cost-effective hydrological simulation tool. The general construction of MWEasyDHM consists of four major parts: (1) a general GIS module for hydrological analysis, (2) a preprocessing module for modeling inputs, (3) a model calibration module, and (4) a postprocessing module. The general GIS module for hydrological analysis is developed on the basis of the fully open-source GIS software MapWindow, which contains basic GIS functions. The preprocessing module comprises three submodules: a DEM-based submodule for hydrological analysis, a submodule for default parameter calculation, and a submodule for the spatial interpolation of meteorological data. The calibration module supports parallel computation, real-time computation, and visualization. The postprocessing module includes model calibration and spatial visualization of model results in tabular form and on spatial grids. MWEasyDHM makes efficient modeling and calibration of EasyDHM possible and promises further development of cost-effective applications in various watersheds.

  17. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    Land surface evapotranspiration plays an important role in the surface energy balance and the water cycle, and there have been significant technical and theoretical advances in our knowledge of it over the past two decades. Acquiring temporally and spatially continuous distributions of evapotranspiration using remote sensing technology has attracted widespread attention from researchers and managers. However, remote sensing technology still carries many uncertainties arising from model mechanisms, model inputs, parameterization schemes, and scaling issues in regional estimation. Achieving remotely sensed evapotranspiration (RS_ET) with confident certainty is required but difficult. It is therefore indispensable to develop validation methods to quantitatively assess the accuracy and error sources of regional RS_ET estimates. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach for evaluating the accuracy and analyzing the spatio-temporal properties of RS_ET at both basin and local scales, and it is appropriate for validating RS_ET at diverse resolutions and time-scales. An independent RS_ET validation using this method over the Hai River Basin, China, in 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and validation data such as water-balance evapotranspiration, MODIS evapotranspiration products, precipitation, and land-use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared with multi-scale evapotranspiration measurements from the EC and LAS, respectively, using the footprint model over three typical landscapes. Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years, so we also pay attention to the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC and 4 LAS instruments were acquired to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including an empirical statistical model, one-source and two-source models, a Penman-Monteith-based model, a Priestley-Taylor-based model, and a complementary-relationship-based model, were used to perform an intercomparison. The results from both RS_ET validation cases showed that the proposed validation methods are reasonable and feasible.

  18. Development of atmospheric N2O isotopomers model based on a chemistry-coupled atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Ishijima, K.; Toyoda, S.; Sudo, K.; Yoshikawa, C.; Nanbu, S.; Aoki, S.; Nakazawa, T.; Yoshida, N.

    2009-12-01

    It is well known that isotopic information is useful for qualitatively understanding the cycles of some atmospheric species and constraining their sources, but so far no study has modeled N2O isotopomers throughout the atmosphere from the troposphere to the stratosphere with realistic surface N2O isotopomer emissions. We have started to develop a model to simulate spatiotemporal variations of the atmospheric N2O isotopomers in both the troposphere and the stratosphere, based on a chemistry-coupled atmospheric general circulation model, in order to obtain a more accurate quantitative understanding of the global N2O cycle. For surface emissions of the isotopomers, a combination of EDGAR-based anthropogenic and soil fluxes and monthly varying GEIA oceanic fluxes is used, with isotopic values of the global total sources estimated from the long-term trend of atmospheric N2O isotopomers derived from firn-air analyses. Isotopic fractionation in chemical reactions is considered for photolysis and photo-oxidation of N2O in the stratosphere. The isotopic fractionation coefficients have been taken from laboratory studies, but we will also test coefficients determined by theoretical calculations. In terms of the global N2O isotopomer budgets, precise quantification of the sources is quite challenging, because even the spatiotemporal variability of N2O sources has never been adequately estimated. Therefore, we first began validating the simulated isotopomer results in the stratosphere against isotopomer profiles obtained from balloon observations. N2O concentration profiles are mostly well reproduced, partly because dynamical processes are realistically reproduced by nudging with reanalysis meteorological data. However, the concentration in the polar vortex tends to be overestimated, probably due to the relatively coarse wavelength resolution of the photolysis calculation. These model features also appear in the isotopomer results, which are generally underestimated relative to the balloon observations even though the concentrations are well simulated. This tendency has been somewhat improved by incorporating another photolysis scheme with slightly higher wavelength resolution into the model. From another point of view, these facts indicate that N2O isotopomers can be used to validate stratospheric photochemical calculations in models, because the isotopomer ratios are highly sensitive to settings such as the wavelength resolution of the photochemical scheme. N2O isotopomer modeling therefore seems not only useful for validating fractionation coefficients and the isotopic characterization of sources, but may also serve as an index of the precision of stratospheric photolysis in models.

  19. Open Source Cloud-Based Technologies for BIM

    NASA Astrophysics Data System (ADS)

    Logothetis, S.; Karachaliou, E.; Valari, E.; Stylianidis, E.

    2018-05-01

    This paper presents a Cloud-based open source system for storing and processing data from a 3D survey approach. More specifically, we provide an online service for viewing, storing and analysing BIM. Cloud technologies were used to develop a web interface as a BIM data centre, which can handle large BIM data using a server. The server can be accessed by many users through various electronic devices anytime and anywhere, so they can view online 3D models using browsers. Nowadays, Cloud computing is progressively engaged in facilitating BIM-based collaboration between the multiple stakeholders and disciplinary groups of complicated Architectural, Engineering and Construction (AEC) projects. Meanwhile, the development of Open Source Software (OSS) has been growing rapidly and its use is becoming increasingly unified. Although BIM and Cloud technologies are widely known and used, there is a lack of integrated open source Cloud-based platforms able to support all stages of BIM processes. The present research aims to create an open source Cloud-based BIM system that is able to handle geospatial data. In this effort, only open source tools are used, from the starting point of creating the 3D model with FreeCAD to its online presentation through BIMserver. Python plug-ins will be developed to link the two software packages and will be distributed freely to a large community of professionals. The research work will be completed by benchmarking four Cloud-based BIM systems: Autodesk BIM 360, BIMserver, Graphisoft BIMcloud and Onuma System, all of which present remarkable results.

  20. A Bayesian approach to earthquake source studies

    NASA Astrophysics Data System (ADS)

    Minson, Sarah

    Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models, sampled proportionally to how consistent each model is with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems; until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm called CATMIP (Cascading Adaptive Tempered Metropolis In Parallel). This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved, and the rupture almost certainly propagated at sub-shear velocities. The full posterior distribution can be used not only to calculate source parameters but also to determine their uncertainties. So while kinematic source modeling and the estimation of source parameters are not new, with CATMIP I am able to use Bayesian sampling to determine which parts of the source process are well constrained and which are not.
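
    For readers unfamiliar with the underlying idea, the sketch below shows a bare-bones random-walk Metropolis sampler for a one-dimensional source parameter; it illustrates sampling in proportion to prior times likelihood, but none of CATMIP's cascading, tempering, or parallelism. The forward model, prior bounds, and noise level are invented.

        import numpy as np

        # Bare-bones Metropolis sampler (a far simpler relative of CATMIP):
        # samples land in proportion to prior * likelihood, imaging the whole
        # solution space rather than one best-fit value.
        rng = np.random.default_rng(0)

        def log_posterior(m, data, sigma=0.1):
            if not (0.0 < m < 10.0):                 # uniform prior bounds (assumed)
                return -np.inf
            pred = m * np.ones_like(data)            # trivial forward model
            return -0.5 * np.sum((data - pred) ** 2) / sigma ** 2

        data = 3.0 + 0.1 * rng.standard_normal(50)   # synthetic observations
        m, logp = 5.0, -np.inf
        samples = []
        for _ in range(20000):
            m_new = m + 0.2 * rng.standard_normal()  # random-walk proposal
            logp_new = log_posterior(m_new, data)
            if np.log(rng.random()) < logp_new - logp:
                m, logp = m_new, logp_new
            samples.append(m)

        print("posterior mean ~", np.mean(samples[5000:]))  # near 3.0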

  1. Odor-conditioned rheotaxis of the sea lamprey: modeling, analysis and validation

    USGS Publications Warehouse

    Choi, Jongeun; Jean, Soo; Johnson, Nicholas S.; Brant, Cory O.; Li, Weiming

    2013-01-01

    Mechanisms for orienting toward and locating an odor source are sought in both biology and engineering. Chemical ecology studies have demonstrated that adult female sea lamprey show rheotaxis in response to a male pheromone, with dichotomous outcomes: sexually mature females locate the source of the pheromone, whereas immature females swim by the source and continue moving upstream. Here we introduce a simple switching mechanism modeled after odor-conditioned rheotaxis for sea lamprey searching for the source of a pheromone in a one-dimensional riverine environment. In this strategy, the females move upstream only if they detect that the pheromone concentration is higher than a threshold value, and drift downstream (turning off control action to save energy) otherwise. In addition, we propose various uncertainty models, such as measurement noise, actuator disturbance, and a probabilistic model of a concentration field in turbulent flow. Based on the proposed model with uncertainties, a convergence analysis showed that with this simplistic switching mechanism the lamprey converges to the source location on average in spite of all such uncertainties. Furthermore, a slightly modified model and its extensive simulation results explain the behaviors of immature female lamprey near the source location.
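
    A toy one-dimensional simulation of the switching rule is easy to write down: swim upstream while the noisy concentration measurement exceeds the threshold, otherwise drift downstream. The plume shape, speeds, threshold, and noise level below are invented for illustration and are far simpler than the paper's uncertainty models.

        import numpy as np

        # Toy 1-D odor-conditioned rheotaxis: the pheromone plume exists only
        # downstream of the source, so a female moving upstream overshoots the
        # source, loses the plume, drifts back down, and oscillates around it.
        # All parameters are invented for illustration.
        rng = np.random.default_rng(1)

        def concentration(x):
            return np.exp(-x / 40.0) if x > 0 else 0.0  # plume only downstream (x > 0)

        x = 50.0                                         # start downstream of source at 0
        v_up, v_drift, threshold = -1.0, 0.5, 0.2
        for _ in range(500):
            measured = concentration(x) + 0.05 * rng.standard_normal()  # sensor noise
            x += v_up if measured > threshold else v_drift

        print("final distance from source:", round(abs(x), 2))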

  2. Distributed source model for the full-wave electromagnetic simulation of nonlinear terahertz generation.

    PubMed

    Fumeaux, Christophe; Lin, Hungyen; Serita, Kazunori; Withayachumnankul, Withawat; Kaufmann, Thomas; Tonouchi, Masayoshi; Abbott, Derek

    2012-07-30

    The process of terahertz generation through optical rectification in a nonlinear crystal is modeled using discretized equivalent current sources. The equivalent terahertz sources are distributed in the active volume and computed based on a separately modeled near-infrared pump beam. This approach can be used to define an appropriate excitation for full-wave electromagnetic numerical simulations of the generated terahertz radiation. This enables predictive modeling of the near-field interactions of the terahertz beam with micro-structured samples, e.g. in a near-field time-resolved microscopy system. The distributed source model is described in detail, and an implementation in a particular full-wave simulation tool is presented. The numerical results are then validated through a series of measurements on square apertures. The general principle can be applied to other nonlinear processes with possible implementation in any full-wave numerical electromagnetic solver.

  3. On the derivation of approximations to cellular automata models and the assumption of independence.

    PubMed

    Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V

    2014-07-01

    Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes; however, these models hold only under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model: the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model, and the assumption of independence between the states of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the discrepancy between our approximation and the cellular automata arises entirely from the assumption of independence. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Comparison of source apportionment of PM2.5 using receptor models in the main hub port city of East Asia: Busan

    NASA Astrophysics Data System (ADS)

    Jeong, Ju-Hee; Shon, Zang-Ho; Kang, Minsung; Song, Sang-Keun; Kim, Yoo-Keun; Park, Jinsoo; Kim, Hyunjae

    2017-01-01

    The contributions of various PM2.5 emission sources to ambient PM2.5 levels during 2013 in the main hub port city of East Asia (Busan, South Korea) were quantified using several receptor modeling techniques. Three receptor models, principal component analysis/absolute principal component scores (PCA/APCS), positive matrix factorization (PMF), and chemical mass balance (CMB), were used to apportion the sources of PM2.5 in the target city. The results of the receptor models indicated that secondary formation was the dominant contributor (45-60%) to PM2.5 levels in the port city of Busan. The PMF and PCA/APCS results suggested that ship emissions were a non-negligible contributor to PM2.5 (up to about 10%) in the study area, whereas they were a negligible contributor based on CMB. The magnitudes of the source contribution estimates differed significantly among the three models due to their limitations (e.g., PM2.5 emission source profiles and restrictions of the models). Potential source contribution function and concentration-weighted trajectory analyses indicated that long-range transport from sources in eastern China and the Yellow Sea contributed significantly to PM2.5 levels in Busan.
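
    PMF is closely related to non-negative matrix factorization, so its essence can be sketched with scikit-learn's NMF on a synthetic samples-by-species concentration matrix. The three source profiles below are invented stand-ins, not measured signatures, and the factorization lacks the uncertainty weighting of real PMF.

        import numpy as np
        from sklearn.decomposition import NMF

        # PMF-style apportionment sketch: factor a (samples x species) matrix X
        # into non-negative contributions G and source profiles F, X ~ G @ F.
        rng = np.random.default_rng(2)
        profiles = np.array([[0.8, 0.1, 0.1, 0.0],   # "secondary" (sulfate-rich)
                             [0.1, 0.6, 0.1, 0.2],   # "traffic"
                             [0.0, 0.1, 0.2, 0.7]])  # "ship emissions"
        contrib = rng.gamma(2.0, 1.0, size=(100, 3))  # daily source strengths
        X = contrib @ profiles + 0.01 * rng.random((100, 4))

        model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
        G = model.fit_transform(X)                    # estimated contributions
        F = model.components_                         # estimated profiles

        # Mass attributed to each factor: contributions times total profile mass.
        share = (G * F.sum(axis=1)).sum(axis=0)
        print("estimated source shares:", np.round(share / share.sum(), 2))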

  5. A VGI data integration framework based on linked data model

    NASA Astrophysics Data System (ADS)

    Wan, Lin; Ren, Rongrong

    2015-12-01

    This paper addresses geographic data integration and sharing for multiple online VGI data sets. We propose a semantic-enabled framework for an online VGI-source cooperative application environment to solve a target class of geospatial problems. Based on linked data technologies, one of the core components of the semantic web, we construct relationship links among geographic features distributed across diverse VGI platforms using linked data modeling methods, deploy these semantic-enabled entities on the web, and eventually form an interconnected geographic data network that supports cooperative geospatial applications across multiple VGI data sources. The mapping and transformation from VGI sources to the RDF linked data model are presented to guarantee a single data representation model across different online social geographic data sources. We propose a mixed strategy that combines spatial distance similarity and feature-name attribute similarity as the measure for comparing and matching geographic features in various VGI data sets. Our work focuses on applying Markov logic networks to interlink the same entities across different VGI-based linked data sets; in particular, the automatic generation of a co-reference object identification model from geographic linked data is discussed in detail. The result is a large geographic linked data network spanning loosely coupled VGI web sites. The experimental results obtained with our framework and the evaluation of our method show that the framework is reasonable and practicable.

  6. Apparent Explosion Moments from Rg Waves Recorded on SPE: Implications for the Late-Time Damage Source Model

    NASA Astrophysics Data System (ADS)

    Patton, H. J.; Larmat, C. S.; Rougier, E.

    2016-12-01

    Seismic moments for chemical shots making up Phase I of the Source Physics Experiments (SPE) are estimated from 6 Hz Rg waves under the assumption that the shots are pure explosions. These apparent explosion moments are compared to moments determined using the Reduced Displacement Potential (RDP) method applied to free-field data. LIDAR/photogrammetry observations, strong ground motions on the free surface near ground zero, and moment tensor inversion results are evidence in support of the fourth shot, SPE-4P, being essentially a pure explosion. The apparent moment for SPE-4P is 9 × 10^10 N m, in good agreement with the RDP moment of 8 × 10^10 N m. In stark contrast, apparent moments for the first three shots are three to four times smaller than RDP moments. Data show that spallation occurred on these shots, as well as permanent deformations detected with ground-based LIDAR. As such, the source medium suffered late-time damage. The late-time damage source model predicts destructive interference between Rg waves radiated by the explosion and damage sources, which reduces amplitudes and explains why apparent moments are smaller than RDP moments based on compressional energy emitted directly from the source. SPE-5 was conducted at roughly the same yield-scaled burial depth as SPE-2 and -3, but with five times the yield; as such, the damage source model predicts less reduction of apparent moment. At this writing, preliminary results from Rg interferometry and RDP moments confirm this prediction. SPE-6 is scheduled for the fall of 2016 and should have the strongest damage source of all SPE shots. The damage model predicts that the polarity of Rg waves could be reversed; realization of this prediction will be strong confirmation of the late-time damage source model. This abstract has a Los Alamos National Laboratory Unlimited Release Number LA-UR-16-25709.

  7. [Comparison of precision in retrieving soybean leaf area index based on multi-source remote sensing data].

    PubMed

    Gao, Lin; Li, Chang-chun; Wang, Bao-shan; Yang, Gui-jun; Wang, Lei; Fu, Kui

    2016-01-01

    With the innovation of remote sensing technology, remote sensing data sources are increasingly abundant. The main aim of this study was to analyze the retrieval accuracy of soybean leaf area index (LAI) based on multi-source remote sensing data, including ground hyperspectral, unmanned aerial vehicle (UAV) multispectral, and Gaofen-1 (GF-1) WFV data. The ratio vegetation index (RVI), normalized difference vegetation index (NDVI), soil-adjusted vegetation index (SAVI), difference vegetation index (DVI), and triangle vegetation index (TVI) were used to establish LAI retrieval models, and the models with the highest calibration accuracy were used in validation. The capability of the three kinds of remote sensing data for LAI retrieval was assessed according to the estimation accuracy of the models. The experimental results showed that the models based on ground hyperspectral and UAV multispectral data achieved better estimation accuracy (R² greater than 0.69 and RMSE less than 0.4 at the 0.01 significance level) than the model based on WFV data. The RVI logarithmic model based on ground hyperspectral data was slightly superior to the NDVI linear model based on UAV multispectral data (the differences in E(A), R², and RMSE were 0.3%, 0.04, and 0.006, respectively). The models based on WFV data had the lowest estimation accuracy, with R² less than 0.30 and RMSE greater than 0.70. The effects of sensor spectral response characteristics, sensor geometric location, and spatial resolution on soybean LAI retrieval are discussed. The results demonstrate that ground hyperspectral data are advantageous but not markedly superior to traditional multispectral data for soybean LAI retrieval, and that WFV imagery with 16 m spatial resolution cannot meet the requirements of crop growth monitoring at the field scale. Given its high precision in retrieving soybean LAI and its operational efficiency, UAV remote sensing can be regarded as an optimal approach for acquiring agricultural information. Therefore, with more and more remote sensing information sources available, agricultural UAV remote sensing could become an important information resource for guiding field-scale crop management and provide more scientific and accurate information for precision agriculture research.
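
    The index-based retrieval workflow is straightforward to illustrate: compute RVI and NDVI from red and near-infrared reflectance, then fit an empirical model such as the RVI logarithmic form mentioned above. The reflectance-LAI relationships below are synthetic and purely illustrative, not the study's data.

        import numpy as np

        # Index-based LAI retrieval sketch: compute RVI and NDVI from red/NIR
        # reflectance and fit the RVI logarithmic form, LAI = a*ln(RVI) + b.
        rng = np.random.default_rng(3)
        lai = rng.uniform(0.5, 6.0, 80)
        nir = 0.10 + 0.08 * lai + 0.01 * rng.standard_normal(80)
        red = np.clip(0.25 * np.exp(-0.5 * lai)
                      + 0.01 * rng.standard_normal(80), 0.02, None)

        rvi = nir / red
        ndvi = (nir - red) / (nir + red)          # alternative index, for comparison

        a, b = np.polyfit(np.log(rvi), lai, 1)    # fit the logarithmic model
        pred = a * np.log(rvi) + b
        rmse = float(np.sqrt(np.mean((pred - lai) ** 2)))
        print(f"LAI = {a:.2f} ln(RVI) + {b:.2f}, RMSE = {rmse:.2f}")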

  8. The Robustness of Acoustic Analogies

    NASA Technical Reports Server (NTRS)

    Freund, J. B.; Lele, S. K.; Wei, M.

    2004-01-01

    Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q) = 0 into a nominal sound source S(q) and a sound propagation operator L such that L(q) = S(q). In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean-flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.

  9. Advancing Physically-Based Flow Simulations of Alluvial Systems Through Atmospheric Noble Gases and the Novel 37Ar Tracer Method

    NASA Astrophysics Data System (ADS)

    Schilling, Oliver S.; Gerber, Christoph; Partington, Daniel J.; Purtschert, Roland; Brennwald, Matthias S.; Kipfer, Rolf; Hunkeler, Daniel; Brunner, Philip

    2017-12-01

    To provide a sound understanding of the sources, pathways, and residence times of groundwater in alluvial river-aquifer systems, a combined multitracer and modeling experiment was carried out in an important alluvial drinking water wellfield in Switzerland. 222Rn, 3H/3He, atmospheric noble gases, and the novel 37Ar method were used to quantify residence times and mixing ratios of water from different sources. With a half-life of 35.1 days, 37Ar made it possible to close a critical observational time gap between 222Rn and 3H/3He for residence times of weeks to months. Covering the entire range of groundwater residence times in alluvial systems revealed that atmospheric noble gases and helium isotopes are tracers well suited for end-member mixing analysis of the fractions of water from different sources in such systems. A comparison between the tracer-based mixing ratios and mixing ratios simulated with a fully integrated, physically based flow model showed that models calibrated only against hydraulic heads cannot reliably reproduce mixing ratios or residence times of alluvial river-aquifer systems. However, the tracer-based mixing ratios allowed the identification of an appropriate flow model parametrization. Consequently, for alluvial systems, we recommend combining multitracer studies that cover all relevant residence times with fully coupled, physically based flow modeling to better characterize the complex interactions of river-aquifer systems.
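
    The end-member mixing analysis that such tracers enable can be sketched as a small least-squares problem: each column of the matrix holds one source's tracer signature, a final row of ones enforces mass balance, and the solve returns the mixing fractions. All tracer values here are invented for illustration.

        import numpy as np

        # End-member mixing sketch: given tracer signatures of three water
        # sources and one mixed groundwater sample, solve A f = sample for the
        # mixing fractions f, with a mass-balance row enforcing sum(f) = 1.
        A = np.array([[1.2, 0.4, 0.9],    # tracer 1 (e.g., a noble-gas ratio)
                      [0.3, 1.1, 0.6],    # tracer 2
                      [1.0, 1.0, 1.0]])   # mass balance: fractions sum to 1
        sample = np.array([0.85, 0.62, 1.0])

        f, *_ = np.linalg.lstsq(A, sample, rcond=None)
        f = np.clip(f, 0, None); f /= f.sum()   # crude non-negativity fix
        print("mixing fractions:", np.round(f, 2))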

  10. In vivo quantitative bioluminescence tomography using heterogeneous and homogeneous mouse models.

    PubMed

    Liu, Junting; Wang, Yabin; Qu, Xiaochao; Li, Xiangsi; Ma, Xiaopeng; Han, Runqiang; Hu, Zhenhua; Chen, Xueli; Sun, Dongdong; Zhang, Rongqing; Chen, Duofang; Chen, Dan; Chen, Xiaoyuan; Liang, Jimin; Cao, Feng; Tian, Jie

    2010-06-07

    Bioluminescence tomography (BLT) is a new optical molecular imaging modality that can monitor both physiological and pathological processes using bioluminescent light-emitting probes in small living animals. In particular, this technology possesses great potential for drug development, early detection, and therapy monitoring in preclinical settings. In the present study, we developed a dual-modality BLT prototype system with a micro-computed tomography (MicroCT) registration approach and improved the quantitative reconstruction algorithm based on an adaptive hp finite element method (hp-FEM). Detailed comparisons of source reconstruction between heterogeneous and homogeneous mouse models were performed. The models include mice with an implanted luminescence source and tumor-bearing mice with a firefly luciferase reporter gene. Our data suggest that reconstruction based on the heterogeneous mouse model with appropriate optical parameters is more accurate in localization and quantification than the homogeneous mouse model, and that BLT allows super-early tumor detection in vivo based on tomographic reconstruction of the heterogeneous mouse model signal.

  11. The Relationship between Agriculture Knowledge Bases for Teaching and Sources of Knowledge

    ERIC Educational Resources Information Center

    Rice, Amber H.; Kitchel, Tracy

    2015-01-01

    The purpose of this study was to describe the agriculture knowledge bases for teaching of agriculture teachers and to see if a relationship existed between years of teaching experience, sources of knowledge, and development of pedagogical content knowledge (PCK), using quantitative methods. A model of PCK from mathematics was utilized as a…

  12. Diagnosing soil moisture anomalies and neglected soil moisture source/sink processes via a thermal infrared-based two-source energy balance model

    USDA-ARS?s Scientific Manuscript database

    Atmospheric processes, especially those that occur in the surface and boundary layer, are significantly impacted by soil moisture (SM). Due to the observational gaps in the ground-based monitoring of SM, methodologies have been developed to monitor SM from satellite platforms. While many have focuse...

  13. Identifying equivalent sound sources from aeroacoustic simulations using a numerical phased array

    NASA Astrophysics Data System (ADS)

    Pignier, Nicolas J.; O'Reilly, Ciarán J.; Boij, Susann

    2017-04-01

    An application of phased array methods to numerical data is presented, aimed at identifying equivalent flow sound sources from aeroacoustic simulations. Based on phased array data extracted from compressible flow simulations, sound source strengths are computed on a set of points in the source region using phased array techniques assuming monopole propagation. Two phased array techniques are used to compute the source strengths: an approach using a Moore-Penrose pseudo-inverse and a beamforming approach using dual linear programming (dual-LP) deconvolution. The first approach gives a model of correlated sources for the acoustic field generated from the flow expressed in a matrix of cross- and auto-power spectral values, whereas the second approach results in a model of uncorrelated sources expressed in a vector of auto-power spectral values. The accuracy of the equivalent source model is estimated by computing the acoustic spectrum at a far-field observer. The approach is tested first on an analytical case with known point sources. It is then applied to the example of the flow around a submerged air inlet. The far-field spectra obtained from the source models for two different flow conditions are in good agreement with the spectra obtained with a Ffowcs Williams-Hawkings integral, showing the accuracy of the source model from the observer's standpoint. Various configurations for the phased array and for the sources are used. The dual-LP beamforming approach shows better robustness to changes in the number of probes and sources than the pseudo-inverse approach. The good results obtained with this simulation case demonstrate the potential of the phased array approach as a modelling tool for aeroacoustic simulations.
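
    The first of the two approaches, recovering source strengths with a Moore-Penrose pseudo-inverse under a monopole propagation assumption, can be sketched as follows; the array geometry, frequency, and noise level are invented, and a single-frequency case stands in for the full spectral processing.

        import numpy as np

        # Pseudo-inverse equivalent-source sketch: a monopole propagation
        # matrix G maps a grid of candidate source points to array probes;
        # complex strengths are recovered via q = pinv(G) @ p.
        rng = np.random.default_rng(4)
        k = 2 * np.pi * 5000.0 / 340.0                 # wavenumber at 5 kHz in air

        probes = np.column_stack([np.linspace(-0.5, 0.5, 24), np.full(24, 1.0)])
        grid = np.column_stack([np.linspace(-0.4, 0.4, 9), np.zeros(9)])

        r = np.linalg.norm(probes[:, None, :] - grid[None, :, :], axis=2)
        G = np.exp(-1j * k * r) / (4 * np.pi * r)      # free-field monopole kernel

        q_true = np.zeros(9, dtype=complex)
        q_true[2] = 1.0                                # one active source point
        p = G @ q_true + 1e-5 * rng.standard_normal(24)

        q_est = np.linalg.pinv(G) @ p
        print("strongest grid point:", int(np.argmax(np.abs(q_est))))  # expect 2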

  14. In-time source tracking of watershed loads of Taihu Lake Basin, China based on spatial relationship modeling.

    PubMed

    Wang, Ce; Bi, Jun; Zhang, Xu-Xiang; Fang, Qiang; Qi, Yi

    2018-05-25

    Influent rivers carrying cumulative watershed loads play a significant role in promoting nuisance algal blooms in river-fed lakes. It is therefore highly relevant to detect in-stream water quality exceedances and evaluate the spatial relationships between risk locations and potential pollution sources. However, no comprehensive studies of grid-based source tracking in watersheds have been conducted for refined water quality management, particularly for plain terrain with complex river networks. In this study, field investigations were implemented during 2014 in the Taige Canal watershed of Taihu Lake Basin. A Geographical Information System (GIS)-based spatial relationship model was established to characterize the spatial relationships of "point (point-source location and monitoring site)-line (river segment)-plane (catchment)." As a practical exemplification, in-time source tracking was triggered on April 15, 2015 at Huangnianqiao station, where TN and TP concentrations violated the water quality standard (TN 4.0 mg/L, TP 0.15 mg/L). Of the target grid cells, 53 and 46 were identified as crucial areas with high pollution intensity for TN and TP, respectively. The estimated non-point source load in each grid cell could be apportioned into different source types based on spatially pollution-related entity objects. We found that the non-point source load derived from rural sewage and from livestock and poultry breeding accounted for more than 80% of the total TN or TP load, far exceeding the contribution of crop farming. The approach in this study would be of great benefit to local authorities in identifying seriously polluted regions and efficiently making environmental policies to reduce watershed loads.

  15. Probability model for atmospheric sulfur dioxide concentrations in the area of Venice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buttazzoni, C.; Lavagnini, I.; Marani, A.

    1986-09-01

    This paper deals with a comparative screening of existing air quality models based on their ability to simulate the distribution of sulfur dioxide data in the Venetian area. Investigations have been carried out on sulfur dioxide dispersion in the atmosphere of the Venetian area, focused mainly on transport models (Gaussian plume and K-models) aiming at meaningful correlations of sources and receptors. Among the results, a noteworthy disagreement between simulated and experimental data has been shown, due to the lack of thorough knowledge of source field conditions and of the local meteorology of the sea-land transition area. Investigations with receptor-oriented models (based, e.g., on time series analysis, Fourier analysis, or statistical distributions) have also been performed.

  16. 40 CFR 93.159 - Procedures for conformity determinations of general Federal actions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... based on the applicable air quality models, data bases, and other requirements specified in the most... applicable air quality models, data bases, and other requirements specified in the most recent version of the... data are available, such as actual stack test data from stationary sources which are part of the...

  17. A Model to Assess the Behavioral Impacts of Consultative Knowledge Based Systems.

    ERIC Educational Resources Information Center

    Mak, Brenda; Lyytinen, Kalle

    1997-01-01

    This research model studies the behavioral impacts of consultative knowledge based systems (KBS). A study of graduate students explored to what extent their decisions were affected by user participation in updating the knowledge base; ambiguity of decision setting; routinization of usage; and source credibility of the expertise embedded in the…

  18. Systematically biological prioritizing remediation sites based on datasets of biological investigations and heavy metals in soil

    NASA Astrophysics Data System (ADS)

    Lin, Wei-Chih; Lin, Yu-Pin; Anthony, Johnathen

    2015-04-01

    Heavy metal pollution has adverse effects not only on the focal invertebrate species of this study, such as reductions in pupa weight and increased larval mortality, but also on the higher trophic level organisms which feed on them, either directly or indirectly, through the process of biomagnification. Despite this, few studies regarding remediation prioritization take species distributions or biological conservation priorities into consideration. This study develops a novel approach for delineating sites which are both contaminated by any of five readily bioaccumulated heavy metal soil contaminants and of high ecological importance for the highly mobile, low trophic level focal species. The conservation priority of each site was based on the projected distributions of six moth species, simulated via the presence-only maximum entropy species distribution model followed by the application of a systematic conservation tool. In order to increase the number of available samples, we also integrated crowd-sourced data with professionally collected data via a novel optimization procedure based on a simulated annealing algorithm. This integration is important because, while crowd-sourced data can drastically increase the number of data samples available to ecologists, the quality or reliability of crowd-sourced data can be called into question, adding yet another source of uncertainty to projected species distributions. The optimization method screens crowd-sourced data in terms of the environmental variables which correspond to professionally collected data. The sample distribution data were derived from two sources: the EnjoyMoths project in Taiwan (crowd-sourced data) and Global Biodiversity Information Facility (GBIF) field data (professional data). The distributions of heavy metal concentrations were generated via 1000 iterations of a geostatistical co-simulation approach, and the uncertainties in the distributions of the heavy metals were quantified based on the overall consistency between realizations. Finally, Information-Gap Decision Theory (IGDT) was applied to rank the remediation priorities of contaminated sites in terms of both the spatial consensus of multiple heavy metal realizations and the priority of specific conservation areas. Our results show that the crowd-sourced data optimization algorithm developed in this study is effective at selecting suitable records from crowd-sourced data. Using this technique, the available sample data increased to totals of 96, 162, 72, 62, 69, and 62 records for the six species, that is, 2.6, 1.6, 2.5, 1.6, 1.2, and 1.8 times the numbers originally available through the professionally assembled GBIF database. Additionally, for all species considered, models based on the combination of both data sources outperformed, in terms of test-AUC values, models based on a single data source, and the additional optimization-selected data lowered the overall variability, and therefore uncertainty, of the model outputs. Based on the projected species distributions, around 30% of high species hotspot areas were also identified as contaminated. The decision-making tool, IGDT, successfully yielded remediation plans in terms of specific ecological value requirements, false-positive tolerance rates for contaminated areas, and expected decision robustness. The proposed approach can be applied both to identify high conservation priority sites contaminated by heavy metals, based on the combination of screened crowd-sourced and professionally collected data, and to make robust remediation decisions.

  19. Detector location selection based on VIP analysis in near-infrared detection of dural hematoma.

    PubMed

    Sun, Qiuming; Zhang, Yanjun; Ma, Jun; Tian, Feng; Wang, Huiquan; Liu, Dongyuan

    2018-03-01

    Detection of dural hematoma based on multi-channel near-infrared differential absorbance has the advantages of rapid and non-invasive detection. The locations and number of detectors around the light source are critical to the performance of the model predicting the degree of dural hematoma, so rational selection of the number of detectors and their distances from the light source is very important. In this paper, a detector position screening method based on Variable Importance in the Projection (VIP) analysis is proposed. A preliminary model based on the partial least squares (PLS) method for predicting the dural-position μa was established using light absorbance information from 30 detectors located 2.0-5.0 cm from the light source at 0.1 cm intervals. The mean relative error (MRE) of the dural-position μa prediction model was 4.08%. After VIP analysis, the number of detectors was reduced from 30 to 4, and the MRE of the dural-position μa prediction fell from 4.08% to 2.06%. The prediction model after VIP detector screening thus still predicted the epidural-position μa well. This study provides a new approach and an important reference for the selection of detector locations in near-infrared dural hematoma detection.
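
    The VIP screening step can be reproduced with scikit-learn: fit a PLS model, then score each predictor by how much of the y-variance its weights help explain across components. The data below are synthetic stand-ins for the 30-detector absorbance matrix, and the VIP formula used is the standard one rather than anything specific to this paper.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Standard VIP scores from a fitted PLS model, as used to screen
        # detector positions; two informative "detectors" are planted.
        rng = np.random.default_rng(5)
        X = rng.standard_normal((60, 30))             # 60 samples x 30 detectors
        y = 2.0 * X[:, 4] - 1.5 * X[:, 12] + 0.1 * rng.standard_normal(60)

        pls = PLSRegression(n_components=3).fit(X, y)
        T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_

        p, a = W.shape
        ssy = np.sum(T ** 2, axis=0) * Q.ravel() ** 2  # y-variance per component
        wnorm2 = (W / np.linalg.norm(W, axis=0)) ** 2
        vip = np.sqrt(p * (wnorm2 @ ssy) / ssy.sum())

        print("top detectors:", np.argsort(vip)[::-1][:4])  # 4 and 12 should lead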

  20. Modeling and performance analysis of QoS data

    NASA Astrophysics Data System (ADS)

    Strzeciwilk, Dariusz; Zuberek, Włodzimierz M.

    2016-09-01

    The article presents the results of modeling and analysis of data transmission performance in systems that support quality of service. Models are designed and tested for a multiservice network architecture, i.e., one supporting the transmission of data belonging to different traffic classes. The mechanisms of traffic-shaping systems based on Priority Queuing are studied, with an integrated data source as well as various generated data sources. The basic problems of QoS-supporting architectures and of queuing systems are discussed. Models based on Petri nets, supported by temporal logics, were designed and built, and simulation tools were used to verify the traffic-shaping mechanisms under the applied queuing algorithms. It is shown that temporal Petri net models can be used effectively in modeling and analyzing the performance of computer networks.
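
    As a concrete reference point for the Priority Queuing mechanism discussed above, the sketch below runs a minimal discrete-event simulation with two traffic classes and strict priority service. All arrival and service parameters are invented, and this is not the paper's Petri net model.

        import heapq, random

        # Minimal discrete-event sketch of strict Priority Queuing with two
        # traffic classes; class 0 (high priority) should see shorter waits.
        random.seed(7)
        arrivals = sorted((random.uniform(0, 100), random.choice([0, 1]))
                          for _ in range(200))       # (time, class) pairs
        queue, clock, i = [], 0.0, 0
        waits = {0: [], 1: []}
        while i < len(arrivals) or queue:
            if not queue:
                clock = max(clock, arrivals[i][0])   # server idle: jump ahead
            while i < len(arrivals) and arrivals[i][0] <= clock:
                heapq.heappush(queue, arrivals[i][::-1])  # key = (class, time)
                i += 1
            cls, t = heapq.heappop(queue)            # lowest class number first
            waits[cls].append(clock - t)             # queueing delay
            clock += random.expovariate(4.0)         # exponential service

        for cls in (0, 1):
            print(f"class {cls}: mean wait = {sum(waits[cls])/len(waits[cls]):.3f}")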

  1. SU-E-T-366: Clinical Implementation of MR-Guided Vaginal Cylinder Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owrangi, A; Jolly, S; Balter, J

    2014-06-01

    Purpose: To evaluate the accuracy of MR-based vaginal brachytherapy source localization using an in-house MR-visible marker versus the alignment of an applicator model to MR images. Methods: Three consecutive patients undergoing vaginal HDR brachytherapy with a plastic cylinder were scanned with both CT and MRI (including T1- and T2-weighted images). An MR-visible source localization marker, consisting of a sealed thin catheter filled with either water (for T2 contrast) or Gd-doped water (for T1 contrast), was assembled shortly before scanning. Clinically, the applicator channel was digitized on CT with an x-ray marker. To evaluate the efficacy of MR-based applicator reconstruction, each MR image volume was aligned locally to the CT images based on the region containing the cylinder. Applicator digitization was performed on the MR images using (1) the MR-visible marker and (2) alignment of an applicator surface model from Varian's Brachytherapy Planning software to the MRI images. Resulting source positions were compared with the original CT digitization. Results: Although the source path was visualized by the MR marker, the applicator tip proved difficult to identify due to challenges in achieving a watertight seal. This resulted in observed displacements of the catheter tip, at times >1 cm. Deviations between the central source positions identified via aligning the applicator surface model to MR and using the x-ray marker on CT ranged from 0.07-0.19 cm and 0.07-0.20 cm on T1-weighted and T2-weighted images, respectively. Conclusion: Based on the current study, aligning the applicator model to MRI provides a practical, current approach to MR-based brachytherapy planning. Further study is needed to produce catheters with reliably and reproducibly identifiable tips. Attempts are being made to improve catheter seals, as well as to increase the viscosity of the contrast material to decrease fluid mobility inside the catheter.

  2. Comparison of receptor models for source apportionment of the PM10 in Zaragoza (Spain).

    PubMed

    Callén, M S; de la Cruz, M T; López, J M; Navarro, M V; Mastral, A M

    2009-08-01

    Receptor models are useful for understanding the chemical and physical characteristics of air pollutants by identifying their sources and estimating the contribution of each source to receptor concentrations. In this work, three receptor models, based on principal component analysis with absolute principal component scores (PCA-APCS), Unmix, and positive matrix factorization (PMF), were applied to study for the first time the apportionment of airborne particulate matter with diameter less than or equal to 10 μm (PM10) in Zaragoza, Spain, during a 1-year sampling campaign (2003-2004). The PM10 samples were characterized in terms of their concentrations of inorganic components (trace elements and ions) and organic components (polycyclic aromatic hydrocarbons, PAH), in both the solid and gas phases. A comparison of the three receptor models was carried out to obtain a more robust characterization of the PM10. The three models predicted that the major sources of PM10 in Zaragoza were natural (60%, 75%, and 47%, respectively, for PCA-APCS, Unmix, and PMF), although anthropogenic sources also contributed (28%, 25%, and 39%). With regard to the anthropogenic sources, PCA and PMF discriminated well among sources associated with different combustion processes, such as traffic and industry, fossil fuel, biomass and fuel-oil combustion, heavy traffic, and evaporative emissions, whereas the Unmix model only identified industry and traffic emissions, evaporative emissions, and heavy-duty vehicles. The three models provided good correlations between the experimental and modelled PM10 concentrations, with the closest agreement between the PMF and PCA models.

  3. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane; it is obtained by introducing into the momentum and energy equations a side force that adjusts its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low-profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data for an S-duct with 22 co-rotating, low-profile vortex generators. The source term model allowed a grid reduction of about seventy percent compared with numerical simulations performed on a fully gridded vortex generator on a flat plate, without adversely affecting the development and capture of the vortex created. The model predicted the shape and size of the stream-wise vorticity and velocity contours very well, as well as the peak vorticity and its location, when compared with both numerical simulations and experimental data, and the circulation it predicted matches that of the numerical simulation. The source term model also predicted the engine fan-face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The model allows a researcher to quickly investigate different locations of individual vortex generators or rows of them, enabling a preliminary investigation with minimal grid generation and computational time.

  4. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Modrak et al. (2010) and upgraded by Marcillo et al. (2014), is intended for accurate estimation of the origin of atmospheric events at local, regional and global scales using seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge of the celerity distribution. At each hypothetical source location it requires integration of the product of the corresponding source-station likelihood functions, multiplied by a prior probability density function of celerity, over the multivariate parameter space. The present BISL realization is a time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals can be evaluated exactly and expressed via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, implemented as PYTHON-FORTRAN code, demonstrates high performance on both synthetic and real data.
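
    A minimal sketch of the grid-search Bayesian step (assumed network geometry, arrival times and celerity prior; not the authors' code): station likelihoods are multiplied at each candidate location and the celerity is marginalized by the numerical integration that the proposed scheme replaces with closed-form expressions.

        import numpy as np

        # Hypothetical network: station positions (km) and observed arrivals (s).
        stations = np.array([[0.0, 0.0], [120.0, 10.0], [60.0, 90.0]])
        t_obs = np.array([360.0, 410.0, 150.0])
        sigma_t = 5.0     # picking-time standard deviation (s), assumed
        t0 = 0.0          # origin time, assumed known here

        # Prior over celerity (km/s): discretized Gaussian, assumed values.
        cel = np.linspace(0.28, 0.36, 81)
        p_cel = np.exp(-0.5 * ((cel - 0.32) / 0.02) ** 2)
        p_cel /= np.trapz(p_cel, cel)

        # Candidate source grid (km).
        xs = np.linspace(-50, 200, 126)
        ys = np.linspace(-50, 150, 101)
        post = np.zeros((ys.size, xs.size))
        for iy, y in enumerate(ys):
            for ix, x in enumerate(xs):
                d = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
                # Arrival-time likelihood per station, marginalized over celerity
                # by numerical integration (the step BISL accelerates analytically).
                tt = d[:, None] / cel[None, :]
                like = np.exp(-0.5 * ((t_obs[:, None] - t0 - tt) / sigma_t) ** 2)
                post[iy, ix] = np.prod(np.trapz(like * p_cel, cel, axis=1))

        iy, ix = np.unravel_index(np.argmax(post), post.shape)
        print("MAP source location: x=%.1f km, y=%.1f km" % (xs[ix], ys[iy]))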

  5. Impedance cardiography: What is the source of the signal?

    NASA Astrophysics Data System (ADS)

    Patterson, R. P.

    2010-04-01

    Impedance cardiography continues to be investigated for various applications, and instruments for its use are available commercially. Almost all recent presentations and articles, along with commercial advertisements, have assumed that aortic volume pulsation is the source of the signal. A review of the literature reveals that there is no clear evidence for this assumption. Starting with the first paper on impedance cardiography in 1964, which assumed the lung was the source of the signal, the presentation will review many studies from the 1960s, 1970s and 1980s that suggest the aorta and other vessels, the atria, and again the lung as possible sources. Current studies based on high-resolution thoracic models will be presented that show the aorta contributing only approximately 1% of the total impedance measurement, making it an unlikely candidate for the major contributor to the signal. Combining the results of past studies with recent model-based work suggests other vessels and regions as possible sources.

  6. Quantitative assessment of the flow pattern in the southern Arava Valley (Israel) by environmental tracers and a mixing cell model

    NASA Astrophysics Data System (ADS)

    Adar, E. M.; Rosenthal, E.; Issar, A. S.; Batelaan, O.

    1992-08-01

    This paper demonstrates the implementation of a novel mathematical model to quantify subsurface inflows from various sources into the arid alluvial basin of the southern Arava Valley, divided between Israel and Jordan. The model is based on the spatial distribution of environmental tracers and is intended for basins with a complex hydrogeological structure and/or scarce physical hydrologic information. However, a sufficient number of wells and springs is required to allow water sampling for chemical and isotopic analyses. Environmental tracers are used in a multivariable cluster analysis to define potential sources of recharge and to delimit homogeneous mixing compartments within the modeled aquifer. Six mixing cells were identified based on 13 constituents, and a quantitative assessment of 11 significant subsurface inflows was obtained. Results revealed that the total recharge into the southern Arava basin is around 12.52 × 10^6 m^3 year^-1. The major source of inflow into the alluvial aquifer is the Nubian sandstone aquifer, which contributes 65-75% of the total recharge. Only 19-24% of the recharge originates over the eastern Jordanian mountains and alluvial fans, but this is the most important source of fresh water.
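
    For each mixing cell, a tracer-based balance of this kind reduces to a non-negative least-squares problem in which candidate source compositions are mixed to reproduce the cell composition. A schematic example with invented tracer values:

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical tracer concentrations (columns: candidate inflow sources,
        # rows: tracers such as Cl, SO4, d18O). All values are illustrative only.
        A = np.array([
            [120.0,  45.0, 300.0],   # Cl  (mg/L)
            [ 80.0,  20.0, 150.0],   # SO4 (mg/L)
            [ -5.5,  -8.0,  -4.0],   # d18O (permil)
        ])
        # Observed composition of one mixing cell.
        b = np.array([150.0, 75.0, -6.0])

        # Append a mass-balance row so the mixing fractions sum to one; the
        # weight (100) enforces the constraint strongly relative to the data.
        w = 100.0
        A_aug = np.vstack([A, w * np.ones(3)])
        b_aug = np.append(b, w)

        frac, resid = nnls(A_aug, b_aug)
        print("mixing fractions:", np.round(frac, 3), " residual:", round(resid, 3))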

  7. Space based inverse modeling of seasonal variations of anthropogenic and natural emissions of nitrogen oxides over China and effects of uncertainties in model meteorology and chemistry

    NASA Astrophysics Data System (ADS)

    Lin, J.

    2011-12-01

    Nitrogen oxides (NOx ≡ NO + NO2) are important atmospheric constituents affecting tropospheric chemistry, surface air quality and climate forcing. They are emitted from both anthropogenic and natural (soil, lightning, biomass burning, etc.) sources, whose strengths can be estimated inversely from satellite retrievals of the tropospheric vertical column density (VCD) of nitrogen dioxide (NO2). Based on VCDs of NO2 retrieved from OMI, a novel approach is developed in this study to separate anthropogenic emissions of NOx from natural sources over East China for 2006. It exploits the fact that anthropogenic and natural emissions vary with season in distinctive patterns. The global chemical transport model (CTM) GEOS-Chem is used to establish the relationship between VCDs of NO2 and emissions of NOx for individual sources. Derived soil emissions are compared to results from a newly developed bottom-up approach. Effects of uncertainties in model meteorology and chemistry over China, an important source of error in the emission inversion, are evaluated systematically for the first time. Meteorological measurements from space and the ground are used to analyze errors in the meteorological parameters driving the CTM.
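
    Schematically, the separation idea can be illustrated by regressing monthly VCDs onto prescribed seasonal basis patterns for each source type; the shapes and numbers below are assumptions for illustration, not the paper's algorithm:

        import numpy as np

        months = np.arange(12)
        # Assumed seasonal shapes: anthropogenic roughly flat with a winter
        # heating bump; soil/lightning sources peak in summer.
        basis_anthro = 1.0 + 0.3 * np.cos(2 * np.pi * months / 12)
        basis_natural = np.maximum(np.sin(np.pi * (months - 3) / 6), 0.0)

        # Synthetic "observed" monthly mean VCD (10^15 molec/cm^2), invented.
        rng = np.random.default_rng(1)
        vcd = 4.0 * basis_anthro + 1.5 * basis_natural + rng.normal(0, 0.2, 12)

        # Least-squares separation of the two contributions.
        B = np.column_stack([basis_anthro, basis_natural])
        coef, *_ = np.linalg.lstsq(B, vcd, rcond=None)
        print("anthropogenic weight %.2f, natural weight %.2f" % tuple(coef))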

  8. POI Summarization by Aesthetics Evaluation From Crowd Source Social Media.

    PubMed

    Qian, Xueming; Li, Cheng; Lan, Ke; Hou, Xingsong; Li, Zhetao; Han, Junwei

    2018-03-01

    Place-of-Interest (POI) summarization by aesthetics evaluation can recommend a set of POI images to the user and is significant for image retrieval. In this paper, we propose a system that summarizes a collection of POI images with regard to both aesthetics and the diversity of the camera distribution. First, we generate visual albums by a coarse-to-fine POI clustering approach and then generate 3D models for each album from the images collected from social media. Second, based on the 3D-to-2D projection relationship, we select candidate photos using the proposed crowd-source saliency model. Third, to improve the performance of the aesthetic measurement model, we propose a crowd-sourced saliency detection approach that explores the distribution of salient regions in the 3D model. We then measure the composition aesthetics of each image, exploit the crowd-source salient features to yield a saliency map and, based on it, propose an adaptive image adoption approach. Finally, we combine diversity and aesthetics to recommend aesthetic pictures. Experimental results show that the proposed POI summarization approach returns images with diverse camera distributions and high aesthetics.
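
    The final recommendation step trades off aesthetics against camera diversity; a toy sketch of greedy selection with invented scores and camera positions (not the paper's model):

        import numpy as np

        rng = np.random.default_rng(2)
        n = 50
        aesthetics = rng.uniform(0, 1, n)        # assumed per-image aesthetic scores
        cam_pos = rng.uniform(0, 100, (n, 2))    # assumed camera positions (m)

        lam, k, chosen = 0.6, 5, []
        for _ in range(k):
            best, best_score = None, -np.inf
            for i in range(n):
                if i in chosen:
                    continue
                # Diversity: distance to the nearest already-selected camera.
                div = min((np.linalg.norm(cam_pos[i] - cam_pos[j]) for j in chosen),
                          default=100.0) / 100.0
                score = lam * aesthetics[i] + (1 - lam) * div
                if score > best_score:
                    best, best_score = i, score
            chosen.append(best)
        print("selected images:", chosen)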

  9. Source Term Estimation of Radioxenon Released from the Fukushima Dai-ichi Nuclear Reactors Using Measured Air Concentrations and Atmospheric Transport Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Biegalski, S.; Bowyer, Ted W.

    2014-01-01

    Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout from the Fukushima Dai-ichi nuclear accident in March 2011. Atmospheric transport modeling (ATM) of plumes of noble gases and particulates was performed soon after the accident to determine plausible detection locations of any radioactive releases to the atmosphere. We combine sampling data from multiple International Monitoring System (IMS) locations in a new way to estimate the magnitude and time sequence of the releases. Dilution factors from the modeled plume at five different detection locations were combined with 57 atmospheric concentration measurements of 133-Xe taken from March 18 to March 23 to estimate the source term. This approach estimates that 59% of the 1.24×10^19 Bq of 133-Xe present in the reactors at the time of the earthquake was released to the atmosphere over a three-day period. Source term estimates from combinations of detection sites have lower spread than estimates based on measurements at single detection sites. Sensitivity cases based on data from four or more detection locations bound the source term between 35% and 255% of the available xenon inventory.
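
    At its core this is a linear inversion: measured concentrations are modeled as dilution factors times time-binned releases. A schematic with invented numbers, using non-negative least squares:

        import numpy as np
        from scipy.optimize import nnls

        # Rows would be the 57 measurements; we fake 6. Columns: 3 release time
        # bins. M[i, j] = ATM dilution factor (m^-3) linking release bin j to
        # measurement i. All values are hypothetical.
        M = np.array([
            [2.0e-9, 0.0,    0.0],
            [1.5e-9, 3.0e-9, 0.0],
            [0.5e-9, 2.5e-9, 1.0e-9],
            [0.0,    1.0e-9, 2.0e-9],
            [1.0e-9, 0.5e-9, 0.5e-9],
            [0.0,    0.0,    1.5e-9],
        ])
        c_obs = np.array([3.0, 9.0, 8.5, 6.0, 3.5, 2.5])  # Bq/m^3, invented

        q, _ = nnls(M, c_obs)   # release (Bq) per time bin, constrained >= 0
        print("estimated release per bin (Bq):", q)
        print("total release (Bq): %.2e" % q.sum())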

  10. Assessment of rockfall susceptibility by integrating statistical and physically-based approaches

    NASA Astrophysics Data System (ADS)

    Frattini, Paolo; Crosta, Giovanni; Carrara, Alberto; Agliardi, Federico

    In Val di Fassa (Dolomites, Eastern Italian Alps) rockfalls constitute the most significant gravity-induced natural hazard, threatening both the few inhabitants of the valley and the thousands of tourists who populate the area in summer and winter. To assess rockfall susceptibility, we developed an integrated statistical and physically-based approach that aims to predict both the susceptibility to onset and the probability that rockfalls will attain specific reaches. Through field checks and multi-temporal aerial photo-interpretation, we prepared a detailed inventory of both rockfall source areas and associated scree-slope deposits. Using an innovative technique based on GIS tools and a 3D rockfall simulation code, grid cells pertaining to the rockfall source-area polygons were classified as active or inactive, based on the state of activity of the associated scree-slope deposits. The simulation code links each source grid cell with scree deposit polygons by calculating the trajectory of each simulated block launch. By means of discriminant analysis, we then identified the mix of environmental variables that best distinguishes grid cells with low or high susceptibility to rockfalls. Among these variables, structural setting, land use, and morphology were the most important factors leading to the initiation of rockfalls. We developed 3D simulation models of the runout distance, intensity and frequency of rockfalls, whose source grid cells corresponded either to the geomorphologically-defined source polygons (geomorphological scenario) or to study-area grid cells with slope angle greater than an empirically-defined value of 37° (empirical scenario). For each scenario, we assigned to the source grid cells either a fixed or a variable onset susceptibility; the latter was derived from the discriminant-model group (active/inactive) membership probabilities. Comparison of these four models indicates that the geomorphological scenario with variable onset susceptibility is the most realistic model. Nevertheless, political and legal issues seem to guide local administrators, who tend to select the more conservative empirically-based scenario as a land-planning tool.

  11. Comparison of analytical and numerical approaches for CT-based aberration correction in transcranial passive acoustic imaging

    NASA Astrophysics Data System (ADS)

    Jones, Ryan M.; Hynynen, Kullervo

    2016-01-01

    Computed tomography (CT)-based aberration corrections are employed in transcranial ultrasound for both therapy and imaging. In this study, analytical and numerical approaches for calculating aberration corrections based on CT data were compared, with a particular focus on their application to transcranial passive imaging. Two models were investigated: a three-dimensional full-wave numerical model (Connor and Hynynen 2004 IEEE Trans. Biomed. Eng. 51 1693-706) based on the Westervelt equation, and an analytical method (Clement and Hynynen 2002 Ultrasound Med. Biol. 28 617-24) similar to that currently employed by commercial brain therapy systems. Trans-skull time delay corrections calculated from each model were applied to data acquired by a sparse hemispherical (30 cm diameter) receiver array (128 piezoceramic discs: 2.5 mm diameter, 612 kHz center frequency) passively listening through ex vivo human skullcaps (n = 4) to emissions from a narrow-band, fixed source emitter (1 mm diameter, 516 kHz center frequency). Measurements were taken at various locations within the cranial cavity by moving the source around the field using a three-axis positioning system. Images generated through passive beamforming using CT-based skull corrections were compared with those obtained through an invasive source-based approach, as well as images formed without skull corrections, using the main lobe volume, positional shift, peak sidelobe ratio, and image signal-to-noise ratio as metrics for image quality. For each CT-based model, corrections achieved by allowing for heterogeneous skull acoustical parameters in simulation outperformed the corresponding case where homogeneous parameters were assumed. Of the CT-based methods investigated, the full-wave model provided the best imaging results, at the cost of computational complexity. These results highlight the importance of accurately modeling trans-skull propagation when calculating CT-based aberration corrections. Although presented in an imaging context, our results may also be applicable to the problem of transmit focusing through the skull.
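
    Passive beamforming with per-element skull corrections amounts to delay-and-sum with an added trans-skull delay term. A simplified narrow-band sketch with assumed geometry and delays (not the authors' processing chain):

        import numpy as np

        c_water = 1500.0   # m/s, background sound speed
        f0 = 516e3         # Hz, narrow-band source frequency
        rng = np.random.default_rng(3)

        # Hypothetical hemispherical array: 128 receivers on a 15 cm radius.
        n_rx = 128
        phi = rng.uniform(0, 2 * np.pi, n_rx)
        theta = rng.uniform(0, np.pi / 2, n_rx)
        rx = 0.15 * np.column_stack([np.sin(theta) * np.cos(phi),
                                     np.sin(theta) * np.sin(phi),
                                     -np.cos(theta)])

        src = np.array([0.01, 0.0, -0.02])          # true source location (m)
        skull_delay = rng.normal(0, 0.3e-6, n_rx)   # per-element skull delay (s)

        # Simulated narrow-band signals: phase from travel time plus skull delay.
        tof = np.linalg.norm(rx - src, axis=1) / c_water + skull_delay
        sig = np.exp(-2j * np.pi * f0 * tof)

        def beam_power(point, corrections):
            # Delay-and-sum: back-propagate each element with (corrected) delays.
            t = np.linalg.norm(rx - point, axis=1) / c_water + corrections
            return np.abs(np.sum(sig * np.exp(2j * np.pi * f0 * t))) ** 2

        # With model-based corrections (here: the true delays) vs without.
        print("corrected power  :", beam_power(src, skull_delay))
        print("uncorrected power:", beam_power(src, np.zeros(n_rx)))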

  12. Assessment of well vulnerability for groundwater source protection based on a solute transport model: a case study from Jilin City, northeast China

    NASA Astrophysics Data System (ADS)

    Huan, Huan; Wang, Jinsheng; Lai, Desheng; Teng, Yanguo; Zhai, Yuanzheng

    2015-05-01

    Well vulnerability assessment is essential for groundwater source protection. A quantitative approach to assess well vulnerability in a well capture zone is presented, based on forward solute transport modeling. This method was applied to three groundwater source areas (Jiuzhan, Hadawan and Songyuanhada) in Jilin City, northeast China. The ratio of the maximum contaminant concentration at the well to the released concentration at the contamination source (c_max/c_0) was chosen as the well vulnerability indicator. The results indicated that well vulnerability was higher close to the pumping well, and the well vulnerability in each groundwater source area was low. Compared with the other two source areas, the cone of depression at Jiuzhan resulted in higher spatial variability of c_max/c_0 and a minimum c_max/c_0 lower by three orders of magnitude. Furthermore, a sensitivity analysis indicated that the denitrification rate in the aquifer was the most sensitive parameter with respect to well vulnerability. A process to derive the NO3-N concentration at the pumping well is presented, based on determining the maximum nitrate loading limit that satisfies China's drinking-water quality standards. Finally, the advantages, disadvantages and prospects for improving the precision of this well vulnerability assessment approach are discussed.
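
    The indicator can be illustrated with a one-dimensional advection-dispersion solution with first-order decay (here standing in for denitrification); parameters below are invented:

        import numpy as np
        from scipy.special import erfc

        v, D, k = 0.5, 2.0, 0.01   # velocity (m/d), dispersion (m^2/d), decay (1/d)
        x = 200.0                  # source-to-well distance (m), assumed
        t = np.linspace(1, 5000, 5000)

        # 1-D continuous-source solution with first-order decay
        # (van Genuchten-type approximation, leading term only).
        u = np.sqrt(v**2 + 4 * k * D)
        c_ratio = 0.5 * np.exp((v - u) * x / (2 * D)) * \
                  erfc((x - u * t) / (2 * np.sqrt(D * t)))

        print("c_max/c_0 at the well: %.3f" % c_ratio.max())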

  13. Performance and Architecture Lab Modeling Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult, and models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules: given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis; a model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks: for instance, it incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The input is source code annotated with Palm modeling annotations, the most important of which models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator then synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
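
    Palm's actual annotation syntax is not reproduced in this record; purely as a hypothetical illustration of the idea, the sketch below keeps a model expression next to the code block it models and validates it against a measurement (the @model tags and the 5 GFLOP/s figure are invented):

        # Hypothetical, Palm-inspired annotations (NOT Palm's real syntax): the
        # model expression lives next to the code block it models, so a tool
        # could compose block models hierarchically and validate them.
        import time

        def stencil_sweep(grid, iters):
            # @model flops = 4 * (len(grid) - 2) * iters
            # @model time  = flops / machine_flops
            t0 = time.perf_counter()
            for _ in range(iters):
                for i in range(1, len(grid) - 1):
                    grid[i] = 0.25 * (grid[i - 1] + 2.0 * grid[i] + grid[i + 1])
            return time.perf_counter() - t0

        # Pure-Python interpreter overhead will dominate the measurement; the
        # point is the annotate-measure-validate workflow, not the numbers.
        measured = stencil_sweep([0.0] * 1024, 100)
        predicted = 4 * (1024 - 2) * 100 / 5e9   # assumed 5 GFLOP/s machine
        print("measured %.4f s, annotated model %.6f s" % (measured, predicted))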

  14. Analyzing the contribution of climate change to long-term variations in sediment nitrogen sources for reservoirs/lakes.

    PubMed

    Xia, Xinghui; Wu, Qiong; Zhu, Baotong; Zhao, Pujun; Zhang, Shangwei; Yang, Lingyan

    2015-08-01

    We applied a mixing model based on stable isotope ratios (δ13C, δ15N) and C:N ratios to estimate the contributions of multiple sources to sediment nitrogen, and developed a conceptual model describing and analyzing the impacts of climate change on nitrogen enrichment. The two models were applied to Miyun Reservoir to analyze the contribution of climate change to the variations in sediment nitrogen sources, based on two 210Pb- and 137Cs-dated sediment cores. The results showed that, over the past 50 years, the average contributions of soil and fertilizer, submerged macrophytes, N2-fixing phytoplankton, and non-N2-fixing phytoplankton were 40.7%, 40.3%, 11.8%, and 7.2%, respectively. In addition, total nitrogen (TN) contents in sediment increased significantly from 1960 to 2010, and sediment nitrogen from both submerged macrophyte and phytoplankton sources exhibited significant increasing trends over the past 50 years. In contrast, soil and fertilizer sources showed a significant decreasing trend from 1990 to 2010. According to the changing trend of N2-fixing phytoplankton, changes in temperature and sunshine duration accounted for at least 43% of the trend in sediment nitrogen enrichment over the past 50 years. Regression analysis of the climatic factors on nitrogen sources showed that the contributions of precipitation, temperature, and sunshine duration to the variations in sediment nitrogen sources ranged from 18.5% to 60.3%. The study demonstrates that the mixing model provides a robust method for calculating the contribution of multiple nitrogen sources in sediment, and suggests that N2-fixing phytoplankton can be regarded as an important response factor for assessing the impacts of climate change on nitrogen enrichment. Copyright © 2015 Elsevier B.V. All rights reserved.
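
    With three tracers plus the mass-balance constraint, four source fractions are exactly determined for each dated layer. A sketch with invented end-member values, constructed so the recovered fractions are 0.4/0.4/0.1/0.1:

        import numpy as np

        # Columns: soil/fertilizer, macrophytes, N2-fixing and non-N2-fixing
        # phytoplankton; rows: d13C, d15N, C:N. End-member values invented.
        A = np.array([
            [-24.0, -20.0, -22.5, -26.5],   # d13C (permil)
            [  4.0,   8.0,   0.5,   7.0],   # d15N (permil)
            [ 11.0,  15.0,   6.5,   7.5],   # C:N
        ])
        b = np.array([-22.5, 5.55, 11.8])   # measured sediment layer (invented)

        # Add the constraint sum(f) = 1 to close the system (4 eq, 4 unknowns).
        A_full = np.vstack([A, np.ones(4)])
        b_full = np.append(b, 1.0)
        f = np.linalg.solve(A_full, b_full)
        print("source fractions:", np.round(f, 3))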

  15. Children's Ability to Distinguish between Memories from Multiple Sources: Implications for the Quality and Accuracy of Eyewitness Statements.

    ERIC Educational Resources Information Center

    Roberts, Kim P.

    2002-01-01

    Outlines five perspectives addressing alternate aspects of the development of children's source monitoring: source-monitoring theory, fuzzy-trace theory, schema theory, person-based perspective, and mental-state reasoning model. Discusses research areas with relation to forensic developmental psychology: agent identity, prospective processing,…

  16. Examining Sources of Gender DIF in Mathematics Assessments Using a Confirmatory Multidimensional Model Approach

    ERIC Educational Resources Information Center

    Mendes-Barnett, Sharon; Ercikan, Kadriye

    2006-01-01

    This study contributes to understanding sources of gender differential item functioning (DIF) on mathematics tests. This study focused on identifying sources of DIF and differential bundle functioning for boys and girls on the British Columbia Principles of Mathematics Exam (Grade 12) using a confirmatory SIBTEST approach based on a…

  17. Evaluation of Long-term Performance of Enhanced Anaerobic Source Zone Bioremediation using mass flux

    NASA Astrophysics Data System (ADS)

    Haluska, A.; Cho, J.; Hatzinger, P.; Annable, M. D.

    2017-12-01

    Chlorinated ethene DNAPL source zones in groundwater act as potential long-term sources of contamination: as they dissolve, they yield concentrations well above maximum contaminant levels (MCLs), posing an ongoing public health risk. Enhanced bioremediation has been applied to treat many source zones with significant promise, but the long-term sustainability of this technology has not been thoroughly assessed. This study evaluated the long-term effectiveness of enhanced anaerobic source zone bioremediation at chloroethene-contaminated sites to determine whether the treatment prevented contaminant rebound and removed NAPL from the source zone. Long-term performance was evaluated based on whether MCL-based mass fluxes of the parent compounds were achieved during different monitoring periods. Groundwater concentration versus time data were compiled for six sites, and post-remedial contaminant mass fluxes were measured using passive flux meters at wells both within and down-gradient of the source zone. The post-remedial mass flux data were then combined with pre-remedial water quality data to estimate pre-remedial mass flux. This information was used to characterize a DNAPL dissolution source strength function, such as the power law model or the equilibrium streamtube model. The six sites characterized in this study were (1) the former Charleston Air Force Base, Charleston, SC; (2) Dover Air Force Base, Dover, DE; (3) Treasure Island Naval Station, San Francisco, CA; (4) the former Raritan Arsenal, Edison, NJ; (5) Naval Air Station Jacksonville, FL; and (6) the former Naval Air Station Alameda, CA. Contaminant mass fluxes decreased at all sites by the end of the post-treatment monitoring period, and rebound was limited within the source zone. Post-remedial source strength function estimates suggest that decreases in contaminant mass flux will continue at these sites, but MCL-based mass flux levels may never be achieved; site clean-up goals should therefore be evaluated as order-of-magnitude reductions. Additionally, sites may require monitoring for a minimum of 5 years in order to sufficiently evaluate remedial performance. The study shows that enhanced anaerobic source zone bioremediation contributed to a modest reduction of source zone contaminant mass discharge and appears to have mitigated rebound of chlorinated ethenes.
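
    The power law source strength function ties flux-averaged concentration to the remaining source mass; a minimal forward sketch with assumed parameters (not fitted to any of the six sites):

        import numpy as np

        # Power law model: C(t)/C0 = (M(t)/M0)**gamma, with mass depleted by
        # the dissolved flux itself: dM/dt = -Q * C(t). Parameters invented.
        C0, M0 = 10.0, 2000.0   # initial concentration (mg/L), source mass (kg)
        Q = 20.0                # groundwater flow through the source zone (m^3/d)
        gamma = 1.2             # depletion exponent (site-specific)
        dt, years = 1.0, 30

        M, hist = M0, []
        for day in range(int(365 * years)):
            C = C0 * (M / M0) ** gamma           # mg/L
            M = max(M - Q * C * 1e-3 * dt, 0.0)  # mg/L * m^3 = g, so 1e-3 kg
            hist.append(C)

        print("concentration after 10 y: %.2f mg/L" % hist[3650])
        print("concentration after 30 y: %.2f mg/L" % hist[-1])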

  18. Modeling study of a proposed field calibration source using K-40 and high-Z targets for sodium iodide detectors

    DOE PAGES

    Rogers, Jeremy; Marianno, Craig; Kallenbach, Gene; ...

    2016-06-01

    Calibration sources based on the primordial isotope potassium-40 (40K) have reduced controls on the source's activity due to its terrestrial ubiquity and very low specific activity. Potassium-40's beta emissions and 1,460.8 keV gamma ray can be used to induce K-shell fluorescence x rays in high-Z metals between 60 and 80 keV. A gamma-ray calibration source that uses potassium chloride salt and a high-Z metal to create a two-point calibration for a sodium iodide field gamma spectroscopy instrument is thus proposed. The calibration source was designed in collaboration with the Sandia National Laboratory using the Monte Carlo N-Particle eXtended (MCNPX) transport code. Two methods of x-ray production were explored. First, a thin high-Z layer (HZL) was interposed between the detector and the potassium chloride-urethane source matrix. Second, bismuth metal powder was homogeneously mixed with a urethane binding agent to form a potassium chloride-bismuth matrix (KBM). The bismuth-based source was selected as the development model because it is inexpensive, nontoxic, and outperforms the high-Z layer method in simulation. Based on the MCNPX studies, sealing a mixture of bismuth powder and potassium chloride into a thin plastic case could provide a light, inexpensive field calibration source.
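
    The source supports a two-point energy calibration: the induced bismuth K x-ray region (Kα1 near 77 keV) plus the 1,460.8 keV line. A trivial sketch of the resulting linear channel-to-energy map, with invented peak channels:

        import numpy as np

        # Assumed peak centroids from a NaI spectrum (channel numbers invented).
        channels = np.array([52.0, 980.0])
        energies = np.array([77.0, 1460.8])   # keV: Bi K x-ray region, 40K line

        gain, offset = np.polyfit(channels, energies, 1)
        print("E(keV) = %.3f * ch + %.2f" % (gain, offset))
        print("channel 500 -> %.1f keV" % (gain * 500 + offset))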

  19. Source parameters of the 2013 Lushan, Sichuan, Ms7.0 earthquake and estimation of the near-fault strong ground motion

    NASA Astrophysics Data System (ADS)

    Meng, L.; Zhou, L.; Liu, J.

    2013-12-01

    The April 20, 2013 Ms 7.0 earthquake in Lushan city, Sichuan province of China occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The Lushan earthquake caused a great loss of property and 196 deaths. The maximum intensity reached VIII to IX at Boxing and Lushan city, which are located in the meizoseismal area. In this study, we first analyzed the dynamic source process, calculated the source spectral parameters, and estimated the near-fault strong ground motion based on Brune's circular source model. A dynamical composite source model (DCSM) was then developed to simulate the near-fault strong ground motion, with the associated fault rupture properties, at Boxing and Lushan city, respectively. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, in contrast to the overshoot behavior of the Wenchuan earthquake. Based on the simulated near-fault strong ground motion, we described the intensity distribution of the Lushan earthquake field. The simulated intensity indicates a maximum value of IX, with the region of intensity VII and above covering almost 16,000 km2, consistent with the observed intensity published online by the China Earthquake Administration (CEA) on April 25. The estimation methods based on empirical relationships and numerical modeling developed in this study have useful applications in strong ground motion prediction and intensity estimation for earthquake rescue purposes. Keywords: Lushan Ms7.0 earthquake; near-fault strong ground motion; DCSM; simulated intensity
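
    Brune's circular source model links the corner frequency to stress drop and seismic moment and gives an omega-squared source spectrum; a minimal sketch with assumed parameter values:

        import numpy as np

        M0 = 1.4e19           # seismic moment (N*m), roughly Mw 6.7, assumed
        stress_drop = 3e6     # stress drop (Pa), assumed
        beta = 3500.0         # shear-wave speed (m/s)

        # Brune (1970): corner frequency from stress drop and moment.
        fc = 0.4906 * beta * (stress_drop / M0) ** (1.0 / 3.0)

        # Omega-squared displacement source spectrum (source term only).
        f = np.logspace(-2, 1.5, 200)
        omega = M0 / (1.0 + (f / fc) ** 2)

        print("corner frequency: %.3f Hz" % fc)
        i10 = np.argmin(np.abs(f - 10.0 * fc))
        print("spectral level at 10*fc, relative to M0: %.4f" % (omega[i10] / M0))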

  20. Fault- and Area-Based PSHA in Nepal using OpenQuake: New Insights from the 2015 M7.8 Gorkha-Nepal Earthquake

    NASA Astrophysics Data System (ADS)

    Stevens, Victoria

    2017-04-01

    The 2015 Gorkha-Nepal M7.8 earthquake (hereafter the Gorkha earthquake) highlights the seismic risk in Nepal, allows better characterization of the geometry of the Main Himalayan Thrust (MHT), and enables comparison of recorded ground motions with predicted ground motions. These new data, together with recent paleoseismic studies and geodetic-based coupling models, allow good parameterization of the fault characteristics. Other faults in Nepal remain less well studied. Unlike previous PSHA studies in Nepal, which are exclusively area-based, we use a mix of faults and areas to describe six seismic sources in Nepal. For each source, the Gutenberg-Richter a and b values are found and the maximum magnitude earthquake estimated, using a combination of earthquake catalogs, moment conservation principles and similarities to other tectonic regions. The MHT and Karakoram fault are described as fault sources, whereas four other sources - normal faulting in N-S trending grabens of northern Nepal, strike-slip faulting in both eastern and western Nepal, and background seismicity - are described as area sources. We use OpenQuake (http://openquake.org/) to carry out the analysis, and peak ground acceleration (PGA) at 2% and 10% probability of exceedance in 50 years is found for Nepal, along with hazard curves at various locations. We compare this PSHA model with previous area-based models of Nepal. The Main Himalayan Thrust is the principal seismic hazard in Nepal, so we study the effects of changing several parameters associated with this fault. We compare ground shaking predicted from the various fault geometries suggested by the Gorkha earthquake with each other, and with a simple model of a flat fault. We also show results from incorporating a coupling model based on geodetic data and microseismicity, which limits the down-dip extent of rupture. No ground-motion prediction equations (GMPEs) have been developed specifically for Nepal, so we compare the results of standard GMPEs, applied to an earthquake scenario representing the Gorkha earthquake, with actual data from the Gorkha earthquake itself. The Gorkha earthquake also highlighted the importance of basin, topographic and directivity effects, and of the location of high-frequency sources, in influencing ground motion. Future study aims at incorporating the above, together with consideration of the fault-rupture history and its influence on the location and timing of future earthquakes.
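
    The Gutenberg-Richter bookkeeping behind each source can be sketched as follows (invented a, b and magnitude bounds); PSHA then convolves such rates with a GMPE:

        import numpy as np

        a, b, m_max = 4.2, 1.0, 8.5   # invented Gutenberg-Richter parameters

        def annual_rate(m):
            """Annual rate of events with magnitude >= m, truncated at m_max."""
            if m >= m_max:
                return 0.0
            # Truncated Gutenberg-Richter: N(>=m) - N(>=m_max).
            return 10 ** (a - b * m) - 10 ** (a - b * m_max)

        for m in (5.0, 6.0, 7.0, 7.8):
            rate = annual_rate(m)
            print("M>=%.1f: %.4f per year (return period %.0f y)"
                  % (m, rate, 1.0 / rate))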

  1. PHYSIOLOGICALLY BASED PHARMACOKINETIC MODEL FOR HUMAN EXPOSURES TO METHYL TERTIARY-BUTYL ETHER

    EPA Science Inventory

    Humans can be exposed by inhalation, ingestion, or dermal absorption to methyl tertiary-butyl ether (MTBE), an oxygenated fuel additive, from contaminated water sources. The purpose of this research was to develop a physiologically based pharmacokinetic model describing in human...

  2. Data Sources for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Cancer.gov

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  3. Model of a thin film optical fiber fluorosensor

    NASA Technical Reports Server (NTRS)

    Egalon, Claudio O.; Rogowski, Robert S.

    1991-01-01

    The efficiency of core-light injection from sources in the cladding of an optical fiber is modeled analytically by means of the exact field solution of a step-profile fiber. The analysis is based on the techniques of Marcuse (1988), in which the sources are treated as infinitesimal electric currents with random phase and orientation that excite radiation fields and bound modes. Expressions based on an infinite-cladding approximation are developed which yield the power efficiency for a fiber coated with fluorescent sources at the core/cladding interface. Marcuse's results are confirmed for the case of a weakly guiding cylindrical fiber with fluorescent sources uniformly distributed in the cladding, and the power efficiency is shown to be practically constant over variable wavelengths and core radii. The most efficient fibers have the thin film located at the core/cladding boundary, and fibers with larger differences in the indices of refraction are shown to be the most efficient.

  4. Latent Heating Retrieval from TRMM Observations Using a Simplified Thermodynamic Model

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Olson, William S.

    2003-01-01

    A procedure for the retrieval of hydrometeor latent heating from TRMM active and passive observations is presented. The procedure is based on current methods for estimating multiple-species hydrometeor profiles from TRMM observations; the species include cloud water, cloud ice, rain, and graupel (or snow). A three-dimensional wind field is prescribed based on the retrieved hydrometeor profiles and, assuming a steady state, the sources and sinks in the hydrometeor conservation equations are determined. Then the momentum and thermodynamic equations, in which the heating and cooling are derived from the hydrometeor sources and sinks, are integrated one step forward in time. The hydrometeor sources and sinks are reevaluated based on the new wind field, and the momentum and thermodynamic equations are integrated one more step. This reevaluation-integration process is repeated until a steady state is reached. The procedure is tested using cloud model simulations: cloud-model-derived fields are used to synthesize TRMM observations, from which hydrometeor profiles are derived; the procedure is applied to the retrieved hydrometeor profiles, and the latent heating estimates are compared to the actual latent heating produced by the cloud model. Examples of the procedure's application to real TRMM data are also provided.

  5. Quantitative identification of riverine nitrogen from point, direct runoff and base flow sources.

    PubMed

    Huang, Hong; Zhang, Baifa; Lu, Jun

    2014-01-01

    We present a methodological example for quantifying the contributions of riverine total nitrogen (TN) from point, direct runoff and base flow sources by combining a recursive digital filter technique and statistical methods. First, we separated daily riverine flow into direct runoff and base flow using a recursive digital filter technique; then, a statistical model was established using daily simultaneous data for TN load, direct runoff rate, base flow rate, and temperature; and finally, the TN loading from direct runoff and base flow sources could be inversely estimated. As a case study, this approach was adopted to identify the TN source contributions in Changle River, eastern China. Results showed that, during 2005-2009, the total annual TN input to the river was 1,700.4±250.2 ton, and the contributions of point, direct runoff and base flow sources were 17.8±2.8%, 45.0±3.6%, and 37.2±3.9%, respectively. The innovation of the approach is that the nitrogen from direct runoff and base flow sources could be separately quantified. The approach is simple but detailed enough to take the major factors into account, providing an effective and reliable method for riverine nitrogen loading estimation and source apportionment.
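
    The record does not specify which recursive digital filter was used; a common representative is the one-parameter Lyne-Hollick filter, sketched here on a synthetic daily flow series:

        import numpy as np

        def lyne_hollick(q, alpha=0.925):
            """Single-pass Lyne-Hollick baseflow separation.
            q: daily streamflow series; returns (baseflow, quickflow)."""
            qf = np.zeros_like(q)             # filtered quickflow
            for i in range(1, len(q)):
                qf[i] = alpha * qf[i - 1] + 0.5 * (1 + alpha) * (q[i] - q[i - 1])
                qf[i] = max(qf[i], 0.0)       # quickflow cannot be negative
            return q - qf, qf                 # baseflow is the remainder

        # Synthetic daily flow: slow recession plus two storm pulses (invented).
        t = np.arange(120.0)
        q = 5.0 * np.exp(-t / 60) + 8.0 * np.exp(-0.5 * ((t - 30) / 2) ** 2) \
            + 6.0 * np.exp(-0.5 * ((t - 80) / 3) ** 2) + 1.0
        bf, qf = lyne_hollick(q)
        print("baseflow index: %.2f" % (bf.sum() / q.sum()))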

  6. Fingerprinting Sources of Suspended Sediment in a Canadian Agricultural Watershed Using the MixSIAR Bayesian Unmixing Model

    NASA Astrophysics Data System (ADS)

    Smith, J. P.; Owens, P. N.; Gaspar, L.; Lobb, D. A.; Petticrew, E. L.

    2015-12-01

    An understanding of sediment redistribution processes and of the main sediment sources within a watershed is needed to support watershed management strategies. The fingerprinting technique is increasingly recognized as a method for establishing the source of the sediment transported within watersheds. However, the differing behaviour of the various fingerprinting properties has been recognized as a major limitation of the technique, and the uncertainty associated with tracer selection needs to be addressed. There are also questions about which modelling approach (frequentist or Bayesian) is best suited to unmixing complex environmental mixtures such as river sediment. This study compares and evaluates the differences between fingerprinting predictions provided by a Bayesian unmixing model (MixSIAR) using different groups of tracer properties for sediment source identification. We used fallout radionuclides (e.g. 137Cs) and geochemical elements (e.g. As) as conventional fingerprinting properties, and colour parameters as emerging properties, both alone and in combination. These fingerprinting properties are being used (e.g. Koiter et al., 2013; Barthod et al., 2015) to determine the proportional contributions of fine sediment in the South Tobacco Creek Watershed, an agricultural watershed located in Manitoba, Canada. We show that the unmixing model using a combination of fallout radionuclide and geochemical tracers gave similar results to the model based on colour parameters. Furthermore, a model that combines all tracers (i.e. radionuclide/geochemical and colour) gave similar results, showing that sediment sources change from predominantly topsoil in the upper reaches of the watershed to channel bank and bedrock outcrop material in the lower reaches. Barthod LRM et al. (2015). Selecting color-based tracers and classifying sediment sources in the assessment of sediment dynamics using sediment source fingerprinting. J Environ Qual. doi:10.2134/jeq2015.01.0043. Koiter AJ et al. (2013). Investigating the role of connectivity and scale in assessing the sources of sediment in an agricultural watershed in the Canadian prairies using sediment source fingerprinting. J Soils Sediments 13, 1676-1691.

  7. Knowledge Acquisition of Generic Queries for Information Retrieval

    PubMed Central

    Seol, Yoon-Ho; Johnson, Stephen B.; Cimino, James J.

    2002-01-01

    Several studies have identified clinical questions posed by health care professionals to understand the nature of information needs during clinical practice. To support access to digital information sources, it is necessary to integrate the information needs with a computer system. We have developed a conceptual guidance approach in information retrieval, based on a knowledge base that contains the patterns of information needs. The knowledge base uses a formal representation of clinical questions based on the UMLS knowledge sources, called the Generic Query model. To improve the coverage of the knowledge base, we investigated a method for extracting plausible clinical questions from the medical literature. This poster presents the Generic Query model, shows how it is used to represent the patterns of clinical questions, and describes the framework used to extract knowledge from the medical literature.

  8. Impact of earthquake source complexity and land elevation data resolution on tsunami hazard assessment and fatality estimation

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro

    2018-03-01

    This study investigates the impact of model complexity in source characterization and of digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation, through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted, considering three DEM resolutions: 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst the 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution, and hence at least 10-m grid resolution should be used when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality numbers are more sensitive to the complexity of the earthquake source characterization than to the grid resolution; thus uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties in tsunami hazard level and fatality, in terms of depth, velocity and momentum flux, can be captured and visualized through the complex source modeling approach. From a tsunami risk management perspective, this approach generates large data sets that are useful for making effective and robust decisions.

  9. Sensitivity of WRF-chem predictions to dust source function specification in West Asia

    NASA Astrophysics Data System (ADS)

    Nabavi, Seyed Omid; Haimberger, Leopold; Samimi, Cyrus

    2017-02-01

    Dust storms tend to form in sparsely populated areas covered by only a few observations. Dust source maps, known as source functions, are used in dust models to assign a dust-release potential to each location. Recent research showed that the well-known Ginoux source function (GSF), currently used in the Weather Research and Forecasting model coupled with Chemistry (WRF-chem), exhibits large errors over some regions in West Asia, particularly near the Iraqi/Syrian border. This study aims to improve the specification of this critical part of dust forecasts. A new source function based on a multi-year analysis of satellite observations, called the West Asia source function (WASF), is therefore proposed to raise the quality of WRF-chem predictions in the region. WASF has been implemented in three dust schemes of WRF-chem. Remotely sensed and ground-based observations have been used to verify the horizontal and vertical extent and location of the simulated dust clouds. Results indicate that WRF-chem performance is significantly improved in many areas after the implementation of WASF. The modified runs (long-term simulations over the summers 2008-2012, using nudging) yielded an average increase in the Spearman correlation between observed and forecast aerosol optical thickness of 12-16 percentage points compared to control runs with standard source functions. They even outperform MACC and DREAM dust simulations over many dust source regions. However, the quality of the forecasts decreased with distance from the sources, probably due to deficiencies in the transport and deposition characteristics of the forecast model in these areas.

  10. Simulation of RF power and multi-cusp magnetic field requirement for H- ion sources

    NASA Astrophysics Data System (ADS)

    Pathak, Manish; Senecha, V. K.; Kumar, Rajnish; Ghodke, Dharmraj. V.

    2016-12-01

    A computer simulation study of a multi-cusp RF-based H- ion source has been carried out using energy and particle balance equations for an inductively coupled, uniformly dense plasma, considering sheath formation near the boundary wall of the plasma chamber, for an RF ion source used as a high-current injector for the 1 GeV H- linac project for SNS applications. The average reaction rates of the different reactions responsible for H- ion production and destruction have been included in the simulation model. The RF power required by the caesium-free H- ion source for the maximum possible H- ion beam current has been derived by evaluating the current and RF voltage fed to the coil antenna using a transformer model of the inductively coupled plasma (ICP). Different parameters of the RF-based H- ion source, such as excited hydrogen molecular density, H- ion density, and the RF voltage and current of the RF antenna, have been calculated through simulations in the presence and absence of the multi-cusp magnetic field to isolate the effect of the multi-cusp field. The RF power evaluated for different H- ion current values has been compared with experimentally reported results, showing reasonably good agreement given that some RF power is reflected from the plasma medium. The results have helped in understanding the optimum field strength and the field-free regions suitable for volume-emission-based H- ion sources. The compact RF ion source exhibits nearly six times better efficiency compared to the large-diameter ion source.

  11. Analysis of classical Fourier, SPL and DPL heat transfer model in biological tissues in presence of metabolic and external heat source

    NASA Astrophysics Data System (ADS)

    Kumar, Dinesh; Singh, Surjan; Rai, K. N.

    2016-06-01

    In this paper, the temperature distribution in a finite biological tissue in the presence of metabolic and external heat sources is studied for surfaces subjected to different types of boundary conditions. Classical Fourier, single-phase-lag (SPL) and dual-phase-lag (DPL) models were developed for bio-heat transfer in biological tissues. Analytical solutions were obtained for all three models using the Laplace transform technique, and the results are compared. The effects of varying different parameters, such as the relaxation time, the metabolic heat source, the spatial heat source and the type of boundary condition, on the temperature distribution in different types of tissue (muscle, tumor, fat, dermis and subcutaneous tissue) are analyzed and discussed in detail for the three models. The results obtained with the three models are compared with the experimental observations of Stolwijk and Hardy (Pflug Arch 291:129-162, 1966). It is observed that the DPL bio-heat transfer model provides better results than the other two models. The values of the metabolic and spatial heat sources under boundary conditions of the first, second and third kind are evaluated for different types of thermal therapy.
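
    For the classical Fourier case the governing equation is the Pennes bio-heat equation; a compact explicit finite-difference sketch with assumed muscle-like properties and a first-kind (fixed-temperature) surface boundary condition:

        import numpy as np

        # Pennes bioheat: rho*c*dT/dt = k*d2T/dx2 + wb*cb*(Ta - T) + Qm + Qext
        L, nx = 0.03, 61                  # tissue slab 3 cm, grid points
        dx = L / (nx - 1)
        k, rho, c = 0.5, 1050.0, 3600.0   # W/m/K, kg/m^3, J/kg/K (muscle-like)
        wb, cb, Ta = 0.5, 3770.0, 37.0    # perfusion kg/m^3/s, J/kg/K, deg C
        Qm, Qext = 420.0, 5000.0          # metabolic / external sources, W/m^3

        dt = 0.2 * dx**2 * rho * c / k    # stable explicit time step
        T = np.full(nx, 37.0)
        for _ in range(int(600 / dt)):    # 10 minutes of heating
            lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
            T += dt / (rho * c) * (k * lap + wb * cb * (Ta - T) + Qm + Qext)
            T[0] = 45.0                   # heated surface (first-kind BC), assumed
            T[-1] = 37.0                  # deep boundary held at body temperature

        print("peak tissue temperature: %.2f C" % T.max())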

  12. Hybrid BEM/empirical approach for scattering of correlated sources in rocket noise prediction

    NASA Astrophysics Data System (ADS)

    Barbarino, Mattia; Adamo, Francesco P.; Bianco, Davide; Bartoccini, Daniele

    2017-09-01

    Empirical models such as the Eldred standard model are commonly used for rocket noise prediction. Such models directly provide a definition of the sound pressure level through the quadratic pressure term, assuming uncorrelated sources. In this paper, an improvement of the Eldred standard model is formulated. The new formulation contains an explicit expression for the acoustic pressure of each noise source, in terms of amplitude and phase, in order to investigate source correlation effects and to propagate them through a wave equation. In particular, the correlation effects between adjacent and non-adjacent sources have been modeled and analyzed. The noise prediction obtained with the revised Eldred-based model has then been used to formulate an empirical/BEM (boundary element method) hybrid approach that allows an evaluation of scattering effects. In the framework of the European Space Agency funded programme VECEP (VEga Consolidation and Evolution Programme), these models have been applied to predict the aeroacoustic loads of the VEGA (Vettore Europeo di Generazione Avanzata - Advanced Generation European Carrier Rocket) launch vehicle at lift-off, and the results have been compared with experimental data.
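
    The difference between the two formulations shows up in how source pressures combine: uncorrelated mean-square addition versus a phase-aware coherent sum. A toy sketch with invented amplitudes and phases:

        import numpy as np

        p_ref = 2e-5   # Pa, reference pressure
        rng = np.random.default_rng(4)

        # Hypothetical pressure amplitudes (Pa) and phases of 8 plume sources
        # seen at one receiver and one frequency.
        amp = rng.uniform(0.5, 2.0, 8)
        phase = rng.uniform(0, 2 * np.pi, 8)

        # Eldred-style uncorrelated sum: mean-square pressures add.
        p2_uncorr = 0.5 * np.sum(amp ** 2)

        # Correlated (coherent) sum: complex amplitudes add, then square.
        p_complex = np.sum(amp * np.exp(1j * phase))
        p2_corr = 0.5 * np.abs(p_complex) ** 2

        for name, p2 in (("uncorrelated", p2_uncorr), ("correlated", p2_corr)):
            print("%s SPL: %.1f dB" % (name, 10 * np.log10(p2 / p_ref ** 2)))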

  13. Pseudo-dynamic source characterization accounting for rough-fault effects

    NASA Astrophysics Data System (ADS)

    Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin

    2016-04-01

    Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering, for example in seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wave field in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications; therefore, simplified synthetic kinematic models are often used. These are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to the limited resolution of the inversions, such models are valid only in the low-frequency range. In addition to the slip, parameters such as the rupture-onset time, rise time and source time functions are needed for a complete spatiotemporal characterization of the earthquake rupture, but these parameters are poorly resolved in source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures, capturing the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model that is physically consistent with dynamic ruptures on rough faults.
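
    Rough-fault studies typically prescribe self-affine roughness; a brief sketch generating a 1-D self-affine fault profile by spectral filtering, with an assumed Hurst exponent and roughness-to-length ratio:

        import numpy as np

        n, dx = 4096, 10.0   # samples along strike, spacing (m)
        hurst = 0.8          # assumed Hurst exponent
        alpha = 2e-3         # assumed rms-roughness-to-length ratio

        rng = np.random.default_rng(5)
        k = np.fft.rfftfreq(n, d=dx)   # spatial frequencies (1/m)
        # For a 1-D self-affine profile, amplitude spectrum ~ k^-(hurst + 0.5).
        amp = np.zeros_like(k)
        amp[1:] = k[1:] ** -(hurst + 0.5)
        phase = rng.uniform(0.0, 2.0 * np.pi, k.size)
        h = np.fft.irfft(amp * np.exp(1j * phase), n)
        h *= alpha * (n * dx) / h.std()   # scale to the target rms roughness

        print("profile length %.1f km, rms roughness %.1f m"
              % (n * dx / 1e3, h.std()))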

  14. Light transport in turbid media with non-scattering, low-scattering and high absorption heterogeneities based on hybrid simplified spherical harmonics with radiosity model

    PubMed Central

    Yang, Defu; Chen, Xueli; Peng, Zhen; Wang, Xiaorui; Ripoll, Jorge; Wang, Jing; Liang, Jimin

    2013-01-01

    Modeling light propagation in the whole body is essential and necessary for optical imaging. However, non-scattering, low-scattering and high-absorption regions commonly exist in biological tissues, which leads to inaccuracy in existing light transport models. In this paper, a novel hybrid light transport model that couples the simplified spherical harmonics approximation (SPN) with the radiosity theory (HSRM) is presented to accurately describe light transport in turbid media with non-scattering, low-scattering and high-absorption heterogeneities. In the model, the radiosity theory is used to characterize light transport in non-scattering regions, and the SPN handles the scattering problems, including the low-scattering and high-absorption subsets. A Neumann source, constructed from the light transport in the non-scattering region and formed at the interface between the non-scattering and scattering regions, is superposed onto the original light source to couple the SPN with the radiosity theory. The accuracy and effectiveness of the HSRM were first verified with simulations based on both regular geometries and a digital mouse model, and with a physical phantom experiment. The feasibility and applicability of the HSRM were then investigated over a broad range of optical properties. Lastly, the influence of the depth of the light source on the model is also discussed. Primary results showed that the proposed model provides high performance for light transport in turbid media with non-scattering, low-scattering and high-absorption heterogeneities. PMID:24156077

  16. Modeling natural emissions in the Community Multiscale Air Quality (CMAQ) Model-I: building an emissions data base

    NASA Astrophysics Data System (ADS)

    Smith, S. N.; Mueller, S. F.

    2010-05-01

    A natural emissions inventory for the continental United States and surrounding territories is needed in order to use the US Environmental Protection Agency Community Multiscale Air Quality (CMAQ) Model for simulating natural air quality. The CMAQ air modeling system (including the Sparse Matrix Operator Kernel Emissions (SMOKE) emissions processing system) currently estimates non-methane volatile organic compound (NMVOC) emissions from biogenic sources, nitrogen oxide (NOx) emissions from soils, ammonia from animals, several types of particulate and reactive gas emissions from fires, as well as sea salt emissions. However, there are several emission categories that are not commonly treated by the standard CMAQ Model system. Most notable among these are nitrogen oxide emissions from lightning, reduced sulfur emissions from oceans, geothermal features and other continental sources, windblown dust particulate, and reactive chlorine gas emissions linked with sea salt chloride. A review of past emissions modeling work and existing global emissions data bases provides information and data necessary for preparing a more complete natural emissions data base for CMAQ applications. A model-ready natural emissions data base is developed to complement the anthropogenic emissions inventory used by the VISTAS Regional Planning Organization in its work analyzing regional haze based on the year 2002. This new data base covers a modeling domain that includes the continental United States plus large portions of Canada, Mexico and surrounding oceans. Comparing July 2002 source data reveals that natural emissions account for 16% of total gaseous sulfur (sulfur dioxide, dimethylsulfide and hydrogen sulfide), 44% of total NOx, 80% of reactive carbonaceous gases (NMVOCs and carbon monoxide), 28% of ammonia, 96% of total chlorine (hydrochloric acid, nitryl chloride and sea salt chloride), and 84% of fine particles (i.e., those smaller than 2.5 μm in size) released into the atmosphere. The seasonality and relative importance of the various natural emissions categories are described.

  17. Modeling natural emissions in the Community Multiscale Air Quality (CMAQ) model - Part 1: Building an emissions data base

    NASA Astrophysics Data System (ADS)

    Smith, S. N.; Mueller, S. F.

    2010-01-01

    A natural emissions inventory for the continental United States and surrounding territories is needed in order to use the US Environmental Protection Agency Community Multiscale Air Quality (CMAQ) Model for simulating natural air quality. The CMAQ air modeling system (including the Sparse Matrix Operator Kernel Emissions (SMOKE) emissions processing system) currently estimates volatile organic compound (VOC) emissions from biogenic sources, nitrogen oxide (NOx) emissions from soils, ammonia from animals, several types of particulate and reactive gas emissions from fires, as well as windblown dust and sea salt emissions. However, there are several emission categories that are not commonly treated by the standard CMAQ Model system. Most notable among these are nitrogen oxide emissions from lightning, reduced sulfur emissions from oceans, geothermal features and other continental sources, and reactive chlorine gas emissions linked with sea salt chloride. A review of past emissions modeling work and existing global emissions data bases provides information and data necessary for preparing a more complete natural emissions data base for CMAQ applications. A model-ready natural emissions data base is developed to complement the anthropogenic emissions inventory used by the VISTAS Regional Planning Organization in its work analyzing regional haze based on the year 2002. This new data base covers a modeling domain that includes the continental United States plus large portions of Canada, Mexico and surrounding oceans. Comparing July 2002 source data reveals that natural emissions account for 16% of total gaseous sulfur (sulfur dioxide, dimethylsulfide and hydrogen sulfide), 44% of total NOx, 80% of reactive carbonaceous gases (VOCs and carbon monoxide), 28% of ammonia, 96% of total chlorine (hydrochloric acid, nitryl chloride and sea salt chloride), and 84% of fine particles (i.e., those smaller than 2.5 μm in size) released into the atmosphere. The seasonality and relative importance of the various natural emissions categories are described.

  18. Collimating lens for light-emitting-diode light source based on non-imaging optics.

    PubMed

    Wang, Guangzhen; Wang, Lili; Li, Fuli; Zhang, Gongjian

    2012-04-10

    A collimating lens for a light-emitting-diode (LED) light source is an essential device widely used in lighting engineering. The lens surfaces are calculated from geometrical optics and nonimaging optics. The design process does not rely on any software optimization or complex iterative procedure, and the method can be used for any type of light source, not only Lambertian ones. The theoretical model is based on a point source, whereas a practical LED source has a finite size; therefore, an LED chip of 1 mm × 1 mm is used in the simulation to verify the feasibility of the model. The main results show that the lenses have a very compact structure and good collimating performance. Efficiency is defined as the ratio of the flux in the illuminated plane to the flux from the LED source, without considering transmission losses in the lens material. Considering only the losses at the designed lens surfaces, the two types of lenses have high efficiencies of more than 90% and 99%, respectively. The radii of the main lighting area (containing 80% of the flux) are no more than 5 m when the illuminated plane is 200 m away from the light source.

  19. Assessing the Financial Benefits of Faster Development Times: The Case of Single-source Versus Multi-vendor Outsourced Biopharmaceutical Manufacturing.

    PubMed

    DiMasi, Joseph A; Smith, Zachary; Getz, Kenneth A

    2018-05-10

    The extent to which new drug developers can benefit financially from shorter development times has implications for development efficiency and innovation incentives. We provide a real-world example of such gains using recent estimates of drug development costs and returns. Time and fee data were obtained for 5 single-source manufacturing projects. Time and fees were modeled for these projects as if the drug substance and drug product processes had been contracted separately to 2 vendors. The multi-vendor model was taken as the base case, and the financial impacts of single-source contracting were determined relative to the base case. The mean and median after-tax financial benefits of the shorter development times from single-source contracting were $44.7 million and $34.9 million, respectively (2016 dollars). The after-tax increases in sponsor fees from single-source contracting were small in comparison (mean and median of $0.65 million and $0.25 million). For the data we examined, single-source contracting yielded substantial financial benefits over multi-vendor contracting, even after accounting for somewhat higher sponsor fees. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.

  20. Hydrodynamic modelling of the microbial water quality in a drinking water source as input for risk reduction management

    NASA Astrophysics Data System (ADS)

    Sokolova, Ekaterina; Pettersson, Thomas J. R.; Bergstedt, Olof; Hermansson, Malte

    2013-08-01

    To mitigate the faecal contamination of drinking water sources and, consequently, to prevent waterborne disease outbreaks, an estimation of the contribution from different sources to the total faecal contamination at the raw water intake of a drinking water treatment plant is needed. The aim of this article was to estimate how much different sources contributed to the faecal contamination at the water intake in a drinking water source, Lake Rådasjön in Sweden. For this purpose, the fate and transport of the faecal indicator Escherichia coli within Lake Rådasjön were simulated with a three-dimensional hydrodynamic model. The calibrated hydrodynamic model reproduced the measured vertical temperature distribution in the lake well (Pearson correlation coefficient of 0.99). Data on the E. coli load from the identified contamination sources were gathered, and the fate and transport of E. coli released from these sources within the lake were simulated using the developed hydrodynamic model, taking E. coli decay into account. The modelling results were compared to the observed E. coli concentrations at the water intake. The results illustrated that the sources that contributed the most to the faecal contamination at the water intake in Lake Rådasjön were the discharges from on-site sewers and the main inflow to the lake, the river Mölndalsån. Based on the modelling results, recommendations for water producers were formulated. The study demonstrated that this modelling approach is a useful tool for estimating the contribution from different sources to the faecal contamination at the water intake of a drinking water treatment plant, and that it provides decision-support information for reducing the risks posed to the drinking water source.
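
    A minimal sketch of the decay bookkeeping underlying such contribution estimates is given below; it applies first-order E. coli decay, C(t) = C0·exp(-kt), to hypothetical source loads and travel times (all names and numbers are invented, not values from the Lake Rådasjön study).

```python
import numpy as np

# Hypothetical source loads (organisms/day) and travel times to the intake
# (days); the real study derives these from measurements and hydrodynamics.
sources = {
    "on-site sewers": {"load": 5e9,  "travel_days": 1.5},
    "river inflow":   {"load": 2e10, "travel_days": 3.0},
    "stormwater":     {"load": 1e9,  "travel_days": 0.5},
}
k = 0.7  # assumed first-order decay rate (1/day)

surviving = {name: s["load"] * np.exp(-k * s["travel_days"])
             for name, s in sources.items()}
total = sum(surviving.values())
for name, c in sorted(surviving.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} {100 * c / total:5.1f}% of the load reaching the intake")
```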

  1. Velocity analysis of simultaneous-source data using high-resolution semblance—coping with the strong noise

    NASA Astrophysics Data System (ADS)

    Gan, Shuwei; Wang, Shoudong; Chen, Yangkang; Qu, Shan; Zu, Shaohuan

    2016-02-01

    Direct imaging of simultaneous-source (or blended) data, without the need for deblending, requires a precise subsurface velocity model. In this paper, we focus on the velocity analysis of simultaneous-source data using the normal-moveout-based velocity picking approach. We demonstrate that it is possible to obtain a precise velocity model directly from the blended data in the common-midpoint domain. The similarity-weighted semblance can help us obtain a much better velocity spectrum, with higher resolution and higher reliability, compared with the traditional semblance. The similarity-weighted semblance enforces an inherent noise attenuation solely in the semblance calculation stage, and is therefore not sensitive to intense interference. We use both simulated synthetic and field data examples to demonstrate the performance of the similarity-weighted semblance in obtaining a reliable subsurface velocity model for the direct migration of simultaneous-source data. The migrated image of blended field data using the prestack Kirchhoff time migration approach, based on the velocity picked from the similarity-weighted semblance, is very close to the migrated image of the unblended data.
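
    The core semblance computation behind such velocity spectra is compact; the sketch below shows conventional semblance on an NMO-corrected gather, with an optional per-sample weight standing in for the similarity weights (the authors' exact local-similarity construction is not reproduced here).

```python
import numpy as np

def semblance(gather, weights=None):
    """Semblance of an NMO-corrected CMP gather (time samples x traces).
    weights=None gives conventional semblance; similarity-weighted semblance
    scales samples by local-similarity weights before stacking."""
    a = gather if weights is None else gather * weights
    num = np.sum(np.sum(a, axis=1) ** 2)          # energy of the stack
    den = a.shape[1] * np.sum(a ** 2)             # total trace energy
    return num / (den + 1e-12)

# A coherent (flattened) event scores near 1; pure noise scores near 1/N.
rng = np.random.default_rng(0)
event = np.outer(np.sin(np.linspace(0, 10, 200)), np.ones(24))
print(semblance(event + 0.1 * rng.standard_normal((200, 24))))
print(semblance(rng.standard_normal((200, 24))))
```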

  2. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C

    2007-09-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
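
    The reciprocal-dose weighting studied in the final paragraph can be illustrated with a toy calculation; the mesh regions and dose values below are invented, and this shows only the weighting idea, not the actual CADIS/ADVANTG implementation.

```python
import numpy as np

# Forward-calculated dose response per region (arbitrary units): deep-shielded
# regions receive little dose, so 1/dose gives them more adjoint importance.
forward_dose = np.array([1e2, 1e0, 1e-2, 1e-4])
adjoint_source = 1.0 / np.maximum(forward_dose, 1e-30)
adjoint_source /= adjoint_source.sum()   # normalized adjoint source strengths
print(adjoint_source)                    # dominated by the hard-to-reach regions
```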

  3. A Regional Assessment of Marine Vessel PM2.5 Impacts in the U.S. Pacific Northwest Using a Receptor Based Source Apportionment Method

    EPA Science Inventory

    This work reports the results of a regional receptor-based source apportionment analysis using the Positive Matrix Factorization (PMF) model on chemically speciated PM2.5 data from 36 urban and rural monitoring sites within the U.S. Pacific Northwest. The approach taken is to mo...

  4. Noise source and reactor stability estimation in a boiling water reactor using a multivariate autoregressive model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanemoto, S.; Andoh, Y.; Sandoz, S.A.

    1984-10-01

    A method for evaluating reactor stability in boiling water reactors has been developed. The method is based on multivariate autoregressive (M-AR) modeling of steady-state neutron and process noise signals. In this method, two kinds of power spectral densities (PSDs) for the measured neutron signal and the corresponding noise source signal are separately identified by the M-AR modeling. The closed- and open-loop stability parameters are evaluated from these PSDs. The method is applied to actual plant noise data that were measured together with artificial perturbation test data. Stability parameters identified from noise data are compared to those from perturbation test data, and it is shown that both results are in good agreement. In addition to these stability estimations, driving noise sources for the neutron signal are evaluated by the M-AR modeling. Contributions from void, core flow, and pressure noise sources are quantitatively evaluated, and the void noise source is shown to be the most dominant.
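
    A sketch of the M-AR machinery described above, assuming a plain least-squares fit (the signal selection and model order are arbitrary here, and the paper's separation into closed- and open-loop stability parameters is not reproduced).

```python
import numpy as np

def fit_mar(x, p):
    """Least-squares fit of a multivariate AR (M-AR) model
       x[t] = A[0] @ x[t-1] + ... + A[p-1] @ x[t-p] + e[t]
    for x of shape (T, m) holding zero-mean noise signals."""
    T, m = x.shape
    Y = x[p:]
    X = np.hstack([x[p - k: T - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    A = [coef[k * m:(k + 1) * m].T for k in range(p)]
    return A, Y - X @ coef          # AR matrices and driving-noise residuals

def mar_psd(A, sigma, f, dt=1.0):
    """Parametric PSD matrix S(f) = H(f) Sigma H(f)^H of the fitted model;
    sigma is the residual (driving noise) covariance, e.g. np.cov(resid.T)."""
    m = A[0].shape[0]
    H = np.linalg.inv(np.eye(m) - sum(Ak * np.exp(-2j * np.pi * f * dt * (k + 1))
                                      for k, Ak in enumerate(A)))
    return H @ sigma @ H.conj().T
```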

  5. Localization of incipient tip vortex cavitation using ray based matched field inversion method

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon

    2015-10-01

    Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for localizing the source of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements on the outer hull above the propeller, and that it is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
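
    The broadband (incoherently averaged) Bartlett-type matched-field processor implied above reduces to a few lines; the receiver geometry, frequency set, and replica-field generator are deliberately left abstract in this sketch.

```python
import numpy as np

def bartlett_broadband(measured, replicas):
    """measured: dict freq -> complex pressures at the receiver array.
    replicas: dict candidate_position -> dict freq -> replica field.
    Returns the candidate maximizing the frequency-averaged normalized
    correlation between measured and replicated fields."""
    def corr(d, w):
        d = d / np.linalg.norm(d)
        w = w / np.linalg.norm(w)
        return abs(np.vdot(w, d)) ** 2
    scores = {pos: np.mean([corr(measured[f], rep[f]) for f in measured])
              for pos, rep in replicas.items()}
    return max(scores, key=scores.get), scores
```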

  6. The Source of Adult Age Differences in Event-Based Prospective Memory: A Multinomial Modeling Approach

    ERIC Educational Resources Information Center

    Smith, Rebekah E.; Bayen, Ute J.

    2006-01-01

    Event-based prospective memory involves remembering to perform an action in response to a particular future event. Normal younger and older adults performed event-based prospective memory tasks in 2 experiments. The authors applied a formal multinomial processing tree model of prospective memory (Smith & Bayen, 2004) to disentangle age differences…

  7. Decision analysis of emergency ventilation and evacuation strategies against suddenly released contaminant indoors by considering the uncertainty of source locations.

    PubMed

    Cai, Hao; Long, Weiding; Li, Xianting; Kong, Lingjuan; Xiong, Shuang

    2010-06-15

    When hazardous contaminants are suddenly released indoors, prompt and proper emergency responses are critical for protecting occupants. This paper aims to provide a framework for determining the optimal combination of ventilation and evacuation strategies while considering the uncertainty of source locations. The certainty of source locations is classified as complete certainty, incomplete certainty, and complete uncertainty to cover all possible situations. According to this classification, three types of decision analysis models are presented. A new concept, the efficiency factor of contaminant source (EFCS), is incorporated in these models to evaluate the payoffs of the ventilation and evacuation strategies. A decision-making procedure based on these models is proposed and demonstrated through numerical studies of one hundred scenarios with ten ventilation modes, two evacuation modes, and five source locations. The results show that the models can be useful in directing the decision analysis of both ventilation and evacuation strategies. In addition, the certainty of the source locations has an important effect on the outcome of the decision-making. Copyright 2010 Elsevier B.V. All rights reserved.
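
    The three certainty classes map naturally onto textbook decision rules; the sketch below uses an invented payoff table (the paper evaluates payoffs with EFCS, whereas the numbers here are arbitrary).

```python
import numpy as np

# Rows: candidate ventilation/evacuation strategy combinations.
# Columns: possible source locations. Entries: payoffs (higher is better).
payoff = np.array([[0.9, 0.2, 0.4],
                   [0.5, 0.8, 0.6],
                   [0.6, 0.6, 0.7]])

best_if_known = payoff.argmax(axis=0)           # complete certainty, per location
p_loc = np.array([0.5, 0.3, 0.2])               # incomplete certainty: probabilities
best_expected = int((payoff @ p_loc).argmax())  # maximize expected payoff
best_maximin = int(payoff.min(axis=1).argmax()) # complete uncertainty: maximin
print(best_if_known, best_expected, best_maximin)
```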

  8. Identification of immiscible NAPL contaminant sources in aquifers by a modified two-level saturation based imperialist competitive algorithm.

    PubMed

    Ghafouri, H R; Mosharaf-Dehkordi, M; Afzalan, B

    2017-07-01

    A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. The model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of the contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Like countries in the classical ICA, these provinces are optimized through the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach named the knock-the-base method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems that consider the effects of parameters such as grid size, rock heterogeneity and the designated monitoring networks. The numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than the model employing the classical one-level ICA. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. SHEDS-HT: An Integrated Probabilistic Exposure Model for Prioritizing Exposures to Chemicals with Near-Field and Dietary Sources

    EPA Science Inventory

    United States Environmental Protection Agency (USEPA) researchers are developing a strategy for highthroughput (HT) exposure-based prioritization of chemicals under the ExpoCast program. These novel modeling approaches for evaluating chemicals based on their potential for biologi...

  10. Source apportionment of heavy metals in agricultural soil based on PMF: A case study in Hexi Corridor, northwest China.

    PubMed

    Guan, Qingyu; Wang, Feifei; Xu, Chuanqi; Pan, Ninghui; Lin, Jinkuo; Zhao, Rui; Yang, Yanyan; Luo, Haiping

    2018-02-01

    The Hexi Corridor is a major base for commodity grain and a producing area for cash crops in northwest China. However, the rapid development of agriculture and industry has inevitably led to heavy metal contamination of its soils. Multivariate statistical analysis, GIS-based geostatistical methods and Positive Matrix Factorization (PMF) receptor modeling techniques were used to understand the levels of heavy metals and their source apportionment in the agricultural soil of the Hexi Corridor. The results showed that the average concentrations of Cr, Cu, Ni, Pb and Zn were lower than the secondary standard of soil environmental quality; however, the concentrations of eight metals (Cr, Cu, Mn, Ni, Pb, Ti, V and Zn) were higher than the background values, and their corresponding enrichment factor values were significantly greater than 1. Different degrees of heavy metal pollution occurred in the agricultural soils; in particular, Ni had the greatest potential for impacting human health. The results from the multivariate statistical analysis and GIS-based geostatistical methods indicated both natural sources (Co and W) and anthropogenic sources (Cr, Cu, Mn, Ni, Pb, Ti, V and Zn). To better identify the pollution sources of heavy metals in the agricultural soils, the PMF model was applied. Further source apportionment revealed that the enrichments of Pb and Zn were attributed to traffic sources; Cr and Ni were closely related to industrial activities, including mining, smelting, coal combustion, iron and steel production and metal processing; Zn and Cu originated from agricultural activities; and V, Ti and Mn were derived from oil- and coal-related activities. Copyright © 2017 Elsevier Ltd. All rights reserved.
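
    For reference, the enrichment factor mentioned above is a double ratio of the element of interest to a conservative reference element; the sketch assumes a lithogenic reference element such as Ti, and all concentrations are invented.

```python
def enrichment_factor(c_sample, c_ref_sample, c_background, c_ref_background):
    """EF = (Cx/Cref)_sample / (Cx/Cref)_background; EF values well above 1
    suggest an anthropogenic contribution on top of the natural background."""
    return (c_sample / c_ref_sample) / (c_background / c_ref_background)

# Hypothetical mg/kg values, not data from the Hexi Corridor survey:
print(enrichment_factor(c_sample=45.0, c_ref_sample=3800.0,
                        c_background=22.0, c_ref_background=4100.0))  # ~2.2
```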

  11. Theoretical and measured electric field distributions within an annular phased array: consideration of source antennas.

    PubMed

    Zhang, Y; Joines, W T; Jirtle, R L; Samulski, T V

    1993-08-01

    The magnitude of E-field patterns generated by an annular array prototype device has been calculated and measured. Two models were used to describe the radiating sources: a simple linear dipole and a stripline antenna model. The stripline model includes the detailed geometry of the actual antennas used in the prototype and an estimate of the antenna current based on microstrip transmission line theory. This more detailed model yields better agreement with the measured field patterns, reducing the rms discrepancy by a factor of about 6 (from approximately 23% to 4%) in the central region of interest where the SEM is within 25% of the maximum. We conclude that accurate modeling of source current distributions is important for determining SEM distributions associated with such heating devices.

  12. Open Source GIS based integrated watershed management

    NASA Astrophysics Data System (ADS)

    Byrne, J. M.; Lindsay, J.; Berg, A. A.

    2013-12-01

    Optimal land and water management to address current and future resource stresses and allocation challenges requires the development of state-of-the-art geomatics and hydrological modelling tools. Future hydrological modelling tools should be high-resolution and process-based, with real-time capability to assess changing resource issues critical to short-, medium- and long-term environmental management. The objective here is to merge two renowned, well-published resource modeling programs to create an open-source toolbox for integrated land and water management applications. This work will greatly increase efficiency in land and water resource security, management and planning. Following an 'open-source' philosophy, the tools will be computer-platform independent with source code freely available, maximizing knowledge transfer and the global value of the proposed research. The envisioned set of water resource management tools will be housed within 'Whitebox Geospatial Analysis Tools'. Whitebox is an open-source geographical information system (GIS) developed by Dr. John Lindsay at the University of Guelph. The emphasis of the Whitebox project has been to develop a user-friendly interface for advanced spatial analysis in environmental applications. The plugin architecture of the software is ideal for the tight integration of spatially distributed models and spatial analysis algorithms such as those contained within the GENESYS suite. Open-source development extends knowledge and technology transfer to a broad range of end-users and builds Canadian capability to address complex resource management problems with better tools and expertise for managers in Canada and around the world. GENESYS (Generate Earth Systems Science input) is an innovative, efficient, high-resolution hydro- and agro-meteorological model for complex-terrain watersheds developed under the direction of Dr. James Byrne. GENESYS is an outstanding research and applications tool for addressing challenging resource management issues in industry, government and nongovernmental agencies. Current research and analysis tools were developed to manage meteorological, climatological, and land and water resource data efficiently at high resolution in space and time. The deliverable for this work is a Whitebox-GENESYS open-source resource management capacity with routines for GIS-based watershed management, including water in agriculture and food production. Urban water management routines are being added through GENESYS in 2013-15 with an engineering PhD candidate. Both Whitebox-GAT and GENESYS are already well-established tools. The proposed research will combine these products to create an open-source, geomatics-based water resource management tool that is revolutionary in both capacity and availability to a wide array of Canadian and global users.

  13. Direct and Indirect Measurements and Modeling of Methane Emissions in Indianapolis, Indiana.

    PubMed

    Lamb, Brian K; Cambaliza, Maria O L; Davis, Kenneth J; Edburg, Steven L; Ferrara, Thomas W; Floerchinger, Cody; Heimburger, Alexie M F; Herndon, Scott; Lauvaux, Thomas; Lavoie, Tegan; Lyon, David R; Miles, Natasha; Prasad, Kuldeep R; Richardson, Scott; Roscioli, Joseph Robert; Salmon, Olivia E; Shepson, Paul B; Stirm, Brian H; Whetstone, James

    2016-08-16

    This paper describes process-based estimation of CH4 emissions from sources in Indianapolis, IN and compares these with atmospheric inferences of whole city emissions. Emissions from the natural gas distribution system were estimated from measurements at metering and regulating stations and from pipeline leaks. Tracer methods and inverse plume modeling were used to estimate emissions from the major landfill and wastewater treatment plant. These direct source measurements informed the compilation of a methane emission inventory for the city equal to 29 Gg/yr (5% to 95% confidence limits, 15 to 54 Gg/yr). Emission estimates for the whole city based on an aircraft mass balance method and from inverse modeling of CH4 tower observations were 41 ± 12 Gg/yr and 81 ± 11 Gg/yr, respectively. Footprint modeling using 11 days of ethane/methane tower data indicated that landfills, wastewater treatment, wetlands, and other biological sources contribute 48% while natural gas usage and other fossil fuel sources contribute 52% of the city total. With the biogenic CH4 emissions omitted, the top-down estimates are 3.5-6.9 times the nonbiogenic city inventory. Mobile mapping of CH4 concentrations showed low level enhancement of CH4 throughout the city reflecting diffuse natural gas leakage and downstream usage as possible sources for the missing residual in the inventory.

  14. The Commercial Open Source Business Model

    NASA Astrophysics Data System (ADS)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  15. Newly graduated nurses' use of knowledge sources in clinical decision-making: an ethnographic study.

    PubMed

    Voldbjerg, Siri Lygum; Grønkjaer, Mette; Wiechula, Rick; Sørensen, Erik Elgaard

    2017-05-01

    To explore which knowledge sources newly graduated nurses use in clinical decision-making, and why and how they are used. In spite of an increased educational focus on skills and competencies within evidence-based practice, newly graduated nurses' ability to use components of evidence-based practice with a conscious and reflective use of research evidence has been described as poor. To understand why, it is relevant to explore which other knowledge sources are used. This may shed light on why research evidence is sparsely used and ultimately inform approaches to strengthening the knowledge base used in clinical decision-making. Ethnographic study using participant observation and individual semistructured interviews of nine Danish newly graduated nurses in medical and surgical hospital settings. The new graduates' use of knowledge sources was described within three main structures: 'other', 'oneself' and 'gut feeling'. Educational preparation, the transition into clinical practice and the culture of the setting influenced the knowledge sources used. The sources ranged from overt, easily articulated knowledge sources to covert sources that were difficult to articulate. The limited articulation of certain sources inhibited critical reflection on the reasoning behind decisions. Reflection is a prerequisite for an evidence-based practice in which decisions should be transparent, so that it can be considered whether other evidentiary sources could be used. Although there is complexity and variety in the knowledge sources used, there is an imbalance, with the experienced nurse playing a key role, functioning both as the predominant source and as a role model for which sources are valued and used in clinical decision-making. If newly graduated nurses are to be supported in an articulate and reflective use of a variety of sources, they have to be allocated to experienced nurses who model a reflective, articulate and balanced use of knowledge sources. © 2016 John Wiley & Sons Ltd.

  16. An almost-parameter-free harmony search algorithm for groundwater pollution source identification.

    PubMed

    Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui

    2013-01-01

    The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the results indicate that the proposed almost-parameter-free harmony search-based optimization model can give satisfactory estimates, even when irregular geometry, erroneous monitoring data, and a shortage of prior information on potential source locations are considered.
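
    A basic harmony search in the simulation-optimization pattern described above is sketched below; here hmcr and par are fixed constants, whereas the paper's 'almost-parameter-free' variant adapts such parameters, and a quadratic misfit stands in for the contaminant transport simulation.

```python
import numpy as np

def harmony_search(f, bounds, hms=20, iters=2000, hmcr=0.9, par=0.3, seed=0):
    """Minimize f over box bounds with a basic harmony search."""
    rng = np.random.default_rng(seed)
    lo, hi = map(np.asarray, zip(*bounds))
    d = len(lo)
    hm = rng.uniform(lo, hi, size=(hms, d))             # harmony memory
    cost = np.array([f(x) for x in hm])
    for _ in range(iters):
        from_mem = rng.random(d) < hmcr                 # memory consideration
        new = np.where(from_mem, hm[rng.integers(hms, size=d), np.arange(d)],
                       rng.uniform(lo, hi))
        adjust = from_mem & (rng.random(d) < par)       # pitch adjustment
        new = np.clip(new + adjust * rng.normal(0, 0.05 * (hi - lo), d), lo, hi)
        worst = cost.argmax()
        if (c := f(new)) < cost[worst]:
            hm[worst], cost[worst] = new, c
    return hm[cost.argmin()], cost.min()

# Toy inverse problem: recover two source fluxes from 'observed' responses,
# with the linear map G standing in for the transport simulation model.
G = np.array([[0.8, 0.3], [0.2, 0.9], [0.5, 0.5]])
obs = G @ np.array([2.0, 5.0])
x_best, misfit = harmony_search(lambda q: float(np.sum((G @ q - obs) ** 2)),
                                [(0.0, 10.0), (0.0, 10.0)])
print(x_best, misfit)
```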

  17. Impact of head models in N170 component source imaging: results in control subjects and ADHD patients

    NASA Astrophysics Data System (ADS)

    Beltrachini, L.; Blenkmann, A.; von Ellenrieder, N.; Petroni, A.; Urquina, H.; Manes, F.; Ibáñez, A.; Muravchik, C. H.

    2011-12-01

    A major goal of event-related potential studies is source localization: identifying the loci of neural activity that give rise to a particular voltage distribution measured on the surface of the scalp. In this paper we evaluate the effect of the head model adopted when estimating the N170 component source in attention deficit hyperactivity disorder (ADHD) patients and control subjects, for face and word stimuli. The standardized low resolution brain electromagnetic tomography algorithm (sLORETA) is used to compare the three-shell spherical head model with a fully realistic model based on the ICBM-152 atlas. We compare their variance in source estimation and analyze the impact on N170 source localization. Results show that the often-used three-shell spherical model may lead to erroneous solutions, especially in ADHD patients, so its use is not recommended. Our results also suggest that N170 sources are mainly located in the right occipital fusiform gyrus for face stimuli and in the left occipital fusiform gyrus for word stimuli, for both control subjects and ADHD patients. We also found a notable decrease in the estimated N170 source amplitude in ADHD patients, a plausible marker of the disorder.
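
    The inverse step can be illustrated with a regularized minimum-norm estimate; sLORETA additionally standardizes each source estimate by its resolution-matrix variance, which is omitted in this sketch. The head model enters only through the lead-field matrix L, which is (semi-)analytic for spherical-shell models but must be computed numerically for realistic atlas-based models.

```python
import numpy as np

def minimum_norm(L, v, lam=0.05):
    """Regularized minimum-norm source estimate
       s = L^T (L L^T + lam * mean(diag(L L^T)) * I)^(-1) v
    for a lead field L (n_sensors x n_sources) and scalp potentials v."""
    LLt = L @ L.T
    K = LLt + lam * np.trace(LLt) / LLt.shape[0] * np.eye(LLt.shape[0])
    return L.T @ np.linalg.solve(K, v)
```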

  18. A Method for Harmonic Sources Detection based on Harmonic Distortion Power Rate

    NASA Astrophysics Data System (ADS)

    Lin, Ruixing; Xu, Lin; Zheng, Xian

    2018-03-01

    Harmonic source detection at the point of common coupling is an essential step for harmonic contribution determination and harmonic mitigation. In this paper, a harmonic distortion power rate index based on IEEE Std 1459-2010 is proposed for harmonic source location. A method based only on harmonic distortion power is not suitable when the background harmonic level is large. To solve this problem, a threshold is determined from prior information: when the harmonic distortion power is larger than the threshold, the customer side is considered the main harmonic source; otherwise, the utility side is. A simple model of a public power system was built in MATLAB/Simulink, and field test results for typical harmonic loads verified the effectiveness of the proposed method.
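
    The decision rule itself reduces to a comparison against the prior-information threshold; a literal sketch with illustrative numbers follows.

```python
def dominant_harmonic_source(distortion_power, threshold):
    """If the harmonic distortion power at the point of common coupling
    exceeds the threshold set from prior information, the customer side is
    taken as the main harmonic source; otherwise the utility side is.
    Quantities follow the IEEE Std 1459-2010 definitions used in the paper."""
    return "customer side" if distortion_power > threshold else "utility side"

print(dominant_harmonic_source(distortion_power=42.0, threshold=30.0))
```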

  19. Stochastic point-source modeling of ground motions in the Cascadia region

    USGS Publications Warehouse

    Atkinson, G.M.; Boore, D.M.

    1997-01-01

    A stochastic model is used to develop preliminary ground-motion relations for rock sites in the Cascadia region. The model parameters are derived from empirical analyses of seismographic data from the region. The model is based on a Brune point source characterized by a stress parameter of 50 bars. The model predictions are compared to ground-motion data from the Cascadia region and to data from large earthquakes in other subduction zones. The point-source simulations match the observations from moderate events but overpredict the motions from larger events at close distances (less than 100 km). The discrepancy at large magnitudes suggests that further work on modeling finite-fault effects and regional attenuation is warranted. In the meantime, the preliminary equations are satisfactory for predicting motions from events of M < 7 and provide conservative estimates of motions from larger events at distances less than 100 km.
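
    The spectral core of such a stochastic point-source model is the Brune omega-squared source spectrum; the sketch below gives its shape only, with path, site, and radiation-pattern constants omitted and the corner-frequency relation in its conventional units.

```python
import numpy as np

def brune_accel_spectrum(f, M0, stress_bars, beta_kms=3.5):
    """Brune omega-squared acceleration source spectrum (shape only).
    M0 in dyne-cm, stress parameter in bars, shear velocity in km/s;
    corner frequency fc = 4.9e6 * beta * (stress / M0)**(1/3) in Hz."""
    fc = 4.9e6 * beta_kms * (stress_bars / M0) ** (1.0 / 3.0)
    return (2 * np.pi * f) ** 2 * M0 / (1.0 + (f / fc) ** 2)

# Shape for a moderate event with the 50-bar stress parameter cited above
# (M0 ~ 3.5e23 dyne-cm corresponds roughly to moment magnitude 5):
f = np.logspace(-1, 1, 5)
print(brune_accel_spectrum(f, M0=3.5e23, stress_bars=50.0))
```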

  20. SU-E-T-254: Development of a HDR-BT QA Tool for Verification of Source Position with Oncentra Applicator Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumazaki, Y; Miyaura, K; Hirai, R

    2015-06-15

    Purpose: To develop a High Dose Rate Brachytherapy (HDR-BT) quality assurance (QA) tool for verification of source position with Oncentra applicator modeling, and to report the radiation source positions obtained with this tool. Methods: We developed a HDR-BT QA phantom and automated analysis software for verification of source position with Oncentra applicator modeling for the Fletcher applicator used in the MicroSelectron HDR system. This tool is intended for end-to-end tests that mimic the clinical 3D image-guided brachytherapy (3D-IGBT) workflow. The phantom is a 30 × 30 × 3 cm cuboid with radiopaque markers, which are inserted into the phantom to evaluate applicator tips and reference source positions; the positions are laterally shifted 10 mm from the applicator axis. The markers are lead-based and scatter radiation to expose the films. Gafchromic RTQA2 films are placed on the applicators. The phantom includes spaces to embed the applicators. The source position is determined as the distance between the exposed source position and the center position of two pairs of the first radiopaque markers. We generated a 3D-IGBT plan with applicator modeling. The first source position was 6 mm from the applicator tips, and the second source position was 10 mm from the first source position. Results: All source positions were consistent with the exposed positions within 1 mm for all Fletcher applicators using the in-house software. Moreover, the distance between source positions was in good agreement with the reference distance. The applicator offset, determined as the distance from the applicator tips to the first source position in the treatment planning system, was accurate. Conclusion: The source position accuracy of applicator modeling used in 3D-IGBT was acceptable. This phantom and software will be useful as a HDR-BT QA tool for verification of source position with Oncentra applicator modeling.

  1. Source apportionment of population representative samples of PM(2.5) in three European cities using structural equation modelling.

    PubMed

    Ilacqua, Vito; Hänninen, Otto; Saarela, Kristina; Katsouyanni, Klea; Künzli, Nino; Jantunen, Matti

    2007-10-01

    Apportionment of urban particulate matter (PM) to its sources is central for air quality management and efficient reduction of the substantial public health risks associated with fine particles (PM(2.5)). Traffic is an important source of combustion particles, but also a significant source of resuspended particles that chemically resemble the Earth's crust and are not affected by the development of cleaner motor technologies. A substantial fraction of urban ambient PM originates from long-range transport outside the immediate urban environment, including secondary particles formed from gaseous emissions of mainly sulphur, nitrogen oxides and ammonia. Most source apportionment studies are based on a small number of fixed monitoring sites and capture population exposures to regional and long-range transported particles well. However, concentrations from local sources are very unevenly distributed, and the results of such studies are therefore poorly representative of actual exposures. The current study uses PM(2.5) data observed at population-based, randomly sampled residential locations in Athens, Basle and Helsinki with 17 elemental constituents, selected VOCs (xylenes, trimethylbenzenes, nonane and benzene) and light absorbance (black smoke). The major sources identified across the three cities included crustal, salt, long-range transported inorganic and traffic sources. Traffic was associated separately with source categories of crustal (especially Athens and Helsinki) and long-range transported chemical composition (all cities). Remarkably high fractions of the variability of elemental (R(2)>0.6, except for Ca in Basle, 0.38) and chemical concentrations (R(2)>0.5, except benzene in Basle, 0.22, and nonane in Athens, 0.39) are explained by the source factors of the SEM model. The RAINS model, currently used as the main tool in developing European air quality management policies, seems to capture the local urban fraction (the city delta term) quite well but underestimates crustal particle levels in the three cities of the current study. Utilizing structural equation modelling in parallel with traditional principal component analysis (PCA) provides an objective method to determine the number of factors to be retained in a model and allows formal hypothesis testing.

  2. The timing and sources of information for the adoption and implementation of production innovations

    NASA Technical Reports Server (NTRS)

    Ettlie, J. E.

    1976-01-01

    Two dimensions (personal-impersonal and internal-external) are used to characterize information sources as they become important during the interorganizational transfer of production innovations. The results of three studies are reviewed for the purpose of deriving a model of the timing and importance of different information sources and the utilization of new technology. Based on the findings of two retrospective studies, it was concluded that the pattern of information seeking behavior in user organizations during the awareness stage of adoption is not a reliable predictor of the eventual utilization rate. Using the additional findings of a real-time study, an empirical model of the relative importance of information sources for successful user organizations is presented. These results are extended and integrated into a theoretical model consisting of a time-profile of successful implementations and the relative importance of four types of information sources during seven stages of the adoption-implementation process.

  3. The chlorine budget of the present-day atmosphere - A modeling study

    NASA Technical Reports Server (NTRS)

    Weisenstein, Debra K.; Ko, Malcolm K. W.; Sze, Nien-Dak

    1992-01-01

    The contribution of source gases to the total amount of inorganic chlorine (ClY) is examined analytically with a time-dependent model employing 11 source gases. The source-gas emission data are described, and the modeling methodology is set forth with attention given to the data interpretation. The abundances and distributions are obtained for all 11 source gases with corresponding ClY production rates and mixing ratios. It is shown that the ClY production rate and the ClY mixing ratio for each source gas are spatially dependent, and the change in the relative contributions from 1950 to 1990 is given. Ozone changes in the past decade are characterized by losses in the polar and midlatitude lower stratosphere. The values for CFC-11, CCl4, and CH3CCl3 suggest that they are more evident in the lower stratosphere than is suggested by steady-state estimates based on surface concentrations.

  4. High frequency sound propagation in a network of interconnecting streets

    NASA Astrophysics Data System (ADS)

    Hewett, D. P.

    2012-12-01

    We propose a new model for the propagation of acoustic energy from a time-harmonic point source through a network of interconnecting streets in the high frequency regime, in which the wavelength is small compared to typical macro-lengthscales such as street widths/lengths and building heights. Our model, which is based on geometrical acoustics (ray theory), represents the acoustic power flow from the source along any pathway through the network as the integral of a power density over the launch angle of a ray emanating from the source, and takes into account the key phenomena involved in the propagation, namely energy loss by wall absorption, energy redistribution at junctions, and, in 3D, energy loss to the atmosphere. The model predicts strongly anisotropic decay away from the source, with the power flow decaying exponentially in the number of junctions from the source, except along the axial directions of the network, where the decay is algebraic.

  5. Implementation of remote sensing data for flood forecasting

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Li, Y.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2016-12-01

    Flooding is one of the most frequent and destructive natural disasters. A timely, accurate and reliable flood forecast can provide vital information for flood preparedness, warning delivery, and emergency response. An operational flood forecasting system typically consists of a hydrologic model, which simulates runoff generation and concentration, and a hydraulic model, which models riverine flood wave routing and floodplain inundation. However, these two types of models suffer from various sources of uncertainty, e.g., forcing data, initial conditions, model structure and parameters. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed Remote Sensing (RS) data offers new opportunities for flood event investigation and forecasting. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture data to constrain a hydrologic model, and 2) RS-derived flood extent and level to constrain a hydraulic model. The hydrological model is based on a semi-distributed system coupling the two-soil-layer rainfall-runoff model GRKAL with a linear Muskingum routing model. Model calibration was performed using either 1) streamflow data only or 2) both streamflow and RS soil moisture data. The model was then further constrained through the integration of real-time soil moisture data. The hydraulic model is based on LISFLOOD-FP, which solves the 2D inertial approximation of the shallow water equations. Streamflow data and RS-derived flood extent and levels were used to apply a multi-objective calibration protocol. The effectiveness with which each data source, or combination of data sources, constrained the parameter space was quantified and discussed.
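
    Of the model components named above, the linear Muskingum routing step is compact enough to sketch; the coefficients follow the standard scheme, while the hydrograph and parameter values below are invented.

```python
def muskingum_route(inflow, K, X, dt):
    """Linear Muskingum channel routing: O[i] = c0*I[i] + c1*I[i-1] + c2*O[i-1].
    K: storage constant, X: weighting factor (0-0.5), dt: time step
    (same units as K); c0 + c1 + c2 = 1 by construction."""
    d = K * (1 - X) + dt / 2
    c0 = (dt / 2 - K * X) / d
    c1 = (dt / 2 + K * X) / d
    c2 = (K * (1 - X) - dt / 2) / d
    out = [inflow[0]]
    for i in range(1, len(inflow)):
        out.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[-1])
    return out

# Hypothetical storm hydrograph (m3/s), K = 12 h, X = 0.2, dt = 6 h:
print(muskingum_route([10, 40, 80, 60, 30, 15, 10], K=12.0, X=0.2, dt=6.0))
```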

  6. Characterizing CO and NOy Sources and Relative Ambient Ratios in the Baltimore Area Using Ambient Measurements and Source Attribution Modeling

    NASA Astrophysics Data System (ADS)

    Simon, Heather; Valin, Luke C.; Baker, Kirk R.; Henderson, Barron H.; Crawford, James H.; Pusede, Sally E.; Kelly, James T.; Foley, Kristen M.; Chris Owen, R.; Cohen, Ronald C.; Timin, Brian; Weinheimer, Andrew J.; Possiel, Norm; Misenis, Chris; Diskin, Glenn S.; Fried, Alan

    2018-03-01

    Modeled source attribution information from the Community Multiscale Air Quality model was coupled with ambient data from the 2011 Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality Baltimore field study. We assess source contributions and evaluate the utility of using aircraft-measured CO and NOy relationships to constrain emission inventories. We derive ambient and modeled ΔCO:ΔNOy ratios, which have previously been interpreted as representing the CO:NOy ratios in emissions from local sources. Modeled and measured ΔCO:ΔNOy are similar; however, the measured ΔCO:ΔNOy has much more daily variability than the modeled values. Sector-based tagging shows that regional transport, on-road gasoline vehicles, and nonroad equipment are the major contributors to modeled CO mixing ratios in the Baltimore area. In addition to those sources, on-road diesel vehicles, soil emissions, and power plants also contribute substantially to modeled NOy in the area. The sector mix is important because emitted CO:NOx ratios vary by several orders of magnitude among the emission sources. The model-predicted gasoline/diesel split remains constant across all measurement locations in this study. Comparison of ΔCO:ΔNOy with emitted CO:NOy is complicated by ambient and modeled evidence that free-tropospheric entrainment and atmospheric processing elevate ambient ΔCO:ΔNOy above emitted ratios. Specifically, modeled ΔCO:ΔNOy from tagged mobile-source emissions is enhanced 5-50% above the emitted ratios at the times and locations of the aircraft measurements. We also find a correlation between ambient formaldehyde concentrations and measured ΔCO:ΔNOy, suggesting that secondary CO formation plays a role in these elevated ratios. This analysis suggests that ambient urban daytime ΔCO:ΔNOy values are not reflective of emitted ratios from individual sources.
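
    As an illustration of how a ΔCO:ΔNOy value is commonly derived from enhancement data, the sketch below fits a background-subtracted regression slope to synthetic numbers; the study's exact estimator may differ.

```python
import numpy as np

# Synthetic transect: enhancements above background with a 'true' ratio of 8.
rng = np.random.default_rng(1)
noy = rng.uniform(0, 20, 200)                      # NOy enhancements (ppb)
co = 8.0 * noy + rng.normal(0, 5, 200)             # CO with measurement noise

co_bg, noy_bg = np.percentile(co, 5), np.percentile(noy, 5)  # crude backgrounds
slope = np.polyfit(noy - noy_bg, co - co_bg, 1)[0]
print(f"dCO:dNOy ~ {slope:.1f} ppb/ppb")
```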

  7. Theoretical and Numerical Modeling of Transport of Land Use-Specific Fecal Source Identifiers

    NASA Astrophysics Data System (ADS)

    Bombardelli, F. A.; Sirikanchana, K. J.; Bae, S.; Wuertz, S.

    2008-12-01

    Microbial contamination in coastal and estuarine waters is of particular concern to public health officials. In this work, we advocate that well-formulated mathematical and numerical transport models can be combined with modern molecular techniques to predict continuous concentrations of microbial indicators under diverse scenarios of interest, and that they can help in the source identification of fecal pollution. As a proof of concept, we first present the theory, numerical implementation and validation of one- and two-dimensional numerical models aimed at computing the distribution of fecal source identifiers in water bodies (based on Bacteroidales marker DNA sequences) originating from different land uses such as wildlife, livestock, humans, dogs or cats. These models have been developed to allow for the source identification of fecal contamination in large bodies of water. We test the model predictions using diverse velocity fields and boundary conditions. We then present preliminary results from an application of a three-dimensional water quality model to address the source of fecal contamination in San Pablo Bay (SPB), United States, an important sub-embayment of San Francisco Bay. The transport equations for Bacteroidales include the processes of advection, diffusion, and decay. We discuss the validation of the developed models through comparisons of numerical results with field campaigns carried out in the SPB. We determine the extent and importance of the contamination in the bay for two decay rates obtained from field observations, corresponding to total host-specific Bacteroidales DNA and host-specific viable Bacteroidales cells, respectively. Finally, we infer transport conditions in the SPB from the numerical results, characterizing the fate of outflows from the Napa, Petaluma and Sonoma rivers.
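
    A minimal one-dimensional version of the advection-diffusion-decay transport described above can be written with explicit upwind finite differences; the grid, hydraulics, boundary treatment, and decay rates below are illustrative, not the study's configuration.

```python
import numpy as np

def transport_1d(n, u, D, k, dx, dt, steps, inflow=1.0):
    """Explicit upwind solution of dc/dt + u dc/dx = D d2c/dx2 - k c
    for a Bacteroidales marker concentration c(x); assumes u > 0 and the
    usual stability limits u*dt/dx <= 1 and 2*D*dt/dx**2 <= 1."""
    c = np.zeros(n)
    for _ in range(steps):
        adv = -u * (c - np.roll(c, 1)) / dx                         # upwind
        dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        c = c + dt * (adv + dif - k * c)
        c[0] = inflow            # constant upstream source
        c[-1] = c[-2]            # open downstream boundary
    return c

# Two decay rates (e.g. total marker DNA vs viable cells) yield different
# contamination extents; rates and hydraulics here are invented.
for k_per_day in (0.1, 1.0):
    c = transport_1d(n=200, u=0.2, D=5.0, k=k_per_day / 86400.0,
                     dx=50.0, dt=60.0, steps=5000)
    print(k_per_day, round(float(c.sum() * 50.0), 1))  # integrated plume mass
```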

  8. Statistical signatures of a targeted search by bacteria

    NASA Astrophysics Data System (ADS)

    Jashnsaz, Hossein; Anderson, Gregory G.; Pressé, Steve

    2017-12-01

    Chemoattractant gradients are rarely well controlled in nature, and recent attention has turned to bacterial chemotaxis toward typical bacterial food sources such as food patches or even bacterial prey. In environments with localized food sources reminiscent of a bacterium's natural habitat, striking phenomena, such as the volcano effect or banding, have been predicted or expected to emerge from chemotactic models. In practice, however, it is difficult to distinguish a targeted search from an untargeted search strategy on the basis of limited bacterial trajectory data. Here we use a theoretical model to identify statistical signatures of a targeted search toward point food sources, such as prey. Our model is constructed on the basis that bacteria use temporal comparisons to bias their random walk, exhibit finite memory, and are subject to random (Brownian) motion as well as signaling noise. The advantage of a stochastic model-based approach is that the model may be parametrized from individual stochastic bacterial trajectories and then used to generate a very large number of simulated trajectories to explore the average behaviors of stochastic search strategies. For example, our model predicts that a bacterium's diffusion coefficient increases as it approaches the point source and that, in the presence of multiple sources, bacteria may take substantially longer to locate their first source, giving the impression of an untargeted search strategy.
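
    A stochastic model of this kind can be prototyped in a few lines; the run-and-tumble sketch below biases tumbling with a temporal comparison over a finite memory and includes signaling noise and Brownian jitter, with all parameter values invented.

```python
import numpy as np

def run_and_tumble(steps=5000, dt=0.1, v=20.0, memory=4, seed=0):
    """2D run-and-tumble walker: the tumble probability drops while the
    sensed signal (1/r to a point source at the origin, plus signaling
    noise) has improved over a finite memory window. Returns the final
    distance to the source."""
    rng = np.random.default_rng(seed)
    pos = np.array([200.0, 0.0])                  # start 200 units away
    theta = rng.uniform(0, 2 * np.pi)
    signal = []
    for _ in range(steps):
        s = 1.0 / (np.linalg.norm(pos) + 1.0) + rng.normal(0, 1e-4)
        signal.append(s)
        improving = len(signal) > memory and signal[-1] > signal[-1 - memory]
        p_tumble = 0.02 if improving else 0.2     # bias: longer runs uphill
        if rng.random() < p_tumble:
            theta = rng.uniform(0, 2 * np.pi)
        pos += v * dt * np.array([np.cos(theta), np.sin(theta)])
        pos += rng.normal(0, 0.5, 2)              # Brownian jitter
    return np.linalg.norm(pos)

print(run_and_tumble())   # typically well below the starting distance of 200
```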

  9. Source spectra of the first four Source Physics Experiments (SPE) explosions from the frequency-domain moment-tensor inversion

    DOE PAGES

    Yang, Xiaoning

    2016-08-01

    In this study, I used seismic waveforms recorded within 2 km of the epicenters of the first four Source Physics Experiments (SPE) explosions to invert for the moment-tensor spectra of these explosions. I employed a one-dimensional (1D) Earth model for Green's function calculations. The model was developed from P- and Rg-wave travel times and amplitudes. I selected data for the inversion based on the criterion that their travel times and amplitude behavior were consistent with those predicted by the 1D model. Due to the limited azimuthal coverage of the sources and the mostly vertical-component-only nature of the dataset, only the long-period, volumetric components of the moment-tensor spectra were well constrained.

  10. Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric

    2011-01-01

    A new methodology is developed for constructing helicopter source noise models, for use in mission planning tools, from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows individual rotor harmonic noise sources to be identified and characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions.

  11. Source apportion of atmospheric particulate matter: a joint Eulerian/Lagrangian approach.

    PubMed

    Riccio, A; Chianese, E; Agrillo, G; Esposito, C; Ferrara, L; Tirimberio, G

    2014-12-01

    PM2.5 samples were collected during an annual monitoring campaign (January 2012-January 2013) in the urban area of Naples, one of the major cities in Southern Italy. Samples were collected by means of a standard gravimetric sampler (Tecora Echo model) and characterized chemically by ion chromatography. In total, 143 samples and their ionic compositions were obtained. We extend traditional source apportionment techniques, usually based on multivariate factor analysis, by interpreting the chemical analysis results within a Lagrangian framework. The Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model was used, providing linkages to source regions in the upwind areas. Results were analyzed to quantify the relative weights of different source types/areas. The model results suggest that PM concentrations are strongly affected not only by local emissions but also by transboundary emissions, especially from Eastern and Northern European countries and African Saharan dust episodes.

  12. Source-term development for a contaminant plume for use by multimedia risk assessment models

    NASA Astrophysics Data System (ADS)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.

    2000-02-01

    Multimedia modelers from the US Environmental Protection Agency (EPA) and US Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: MEPAS, MMSOILS, PRESTO, and RESRAD. These models represent typical analytically based tools that are used in human-risk and endangerment assessments at installations containing radioactive and hazardous contaminants. The objective is to demonstrate an approach for developing an adequate source term by simplifying an existing, real-world, 90Sr plume at DOE's Hanford installation in Richland, WA, for use in a multimedia benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. Source characteristics and a release mechanism are developed and described; also described is a typical process and procedure that an analyst would follow in developing a source term for using this class of analytical tool in a preliminary assessment.

  13. Aerosol Sources, Absorption, and Intercontinental Transport: Synergies among Models, Remote Sensing, and Atmospheric Measurements

    NASA Technical Reports Server (NTRS)

    Chin, Mian; Ginoux, Paul; Dubovik, Oleg; Holben, Brent; Kaufman, Yoram; chu, Allen; Anderson, Tad; Quinn, Patricia

    2003-01-01

    Aerosol climate forcing is one of the largest uncertainties in assessing the anthropogenic impact on the global climate system. This uncertainty arises from the poorly quantified aerosol sources, especially black carbon emissions, our limited knowledge of aerosol mixing state and optical properties, and the consequences of intercontinental transport of aerosols and their precursors. Here we use a global model GOCART to simulate atmospheric aerosols, including sulfate, black carbon, organic carbon, dust, and sea salt, from anthropogenic, biomass burning, and natural sources. We compare the model calculated aerosol extinction and absorption with those quantities from the ground-based sun photometer measurements from AERONET at several different wavelengths and the field observations from ACE-Asia, and model calculated total aerosol optical depth and fine mode fractions with the MODIS satellite retrieval. We will also estimate the intercontinental transport of pollution and dust aerosols from their source regions to other areas in different seasons.

  14. Aerosol Sources, Absorption, and Intercontinental Transport: Synergies Among Models, Remote Sensing, and Atmospheric Measurements

    NASA Technical Reports Server (NTRS)

    Chin, Mian; Chu, Allen; Levy, Robert; Remer, Lorraine; Kaufman, Yoram; Dubovik, Oleg; Holben, Brent; Eck, Tom; Anderson, Tad; Quinn, Patricia

    2004-01-01

    Aerosol climate forcing is one of the largest uncertainties in assessing the anthropogenic impact on the global climate system. This uncertainty arises from the poorly quantified aerosol sources, especially black carbon emissions, our limited knowledge of aerosol mixing state and optical properties, and the consequences of intercontinental transport of aerosols and their precursors. Here we use a global model GOCART to simulate atmospheric aerosols, including sulfate, black carbon, organic carbon, dust, and sea salt, from anthropogenic, biomass burning, and natural sources. We compare the model calculated aerosol extinction and absorption with those quantities from the ground-based sun photometer measurements from AERONET at several different wavelengths and the field observations from ACE-Asia, and model calculated total aerosol optical depth and fine mode fractions with the MODIS satellite retrieval. We will also estimate the intercontinental transport of pollution and dust aerosols from their source regions to other areas in different seasons.

  15. Effluent trading in river systems through stochastic decision-making process: a case study.

    PubMed

    Zolfagharipoor, Mohammad Amin; Ahmadi, Azadeh

    2017-09-01

    The objective of this paper is to provide an efficient framework for effluent trading in river systems. The proposed framework consists of two decision-making models, one pessimistic and one optimistic, to increase the executability of river water quality trading programs. The models used for this purpose are (1) stochastic fallback bargaining (SFB), to reach an agreement among wastewater dischargers, and (2) stochastic multi-criteria decision-making (SMCDM), to determine the optimal treatment strategy. The Monte Carlo simulation method is used to incorporate uncertainty into the analysis. This uncertainty arises from the stochastic nature of wastewater treatment costs and from errors in their calculation. The results of a river water quality simulation model are used as the inputs to the models. The proposed models are applied in a case study on the Zarjoub River in northern Iran to determine the best solution for pollution load allocation. The best treatment alternatives selected by each model are imported, as the initial pollution discharge permits, into an optimization model developed for the trading of pollution discharge permits among pollutant sources. The results show that the SFB-based water pollution trading approach reduces costs by US$ 14,834 while providing a relative consensus among pollutant sources. Meanwhile, the SMCDM-based water pollution trading approach reduces costs by US$ 218,852, but it is less acceptable to pollutant sources. It therefore appears that giving due attention to stability, in other words the acceptability of pollution trading programs to all pollutant sources, is an essential element of their success.

  16. Inter-comparison of receptor models for PM source apportionment: Case study in an industrial area

    NASA Astrophysics Data System (ADS)

    Viana, M.; Pandolfi, M.; Minguillón, M. C.; Querol, X.; Alastuey, A.; Monfort, E.; Celades, I.

    2008-05-01

    Receptor modelling techniques are used to identify and quantify the contributions from emission sources to the levels and major and trace components of ambient particulate matter (PM). A wide variety of receptor models are currently available, and consequently the comparability between models should be evaluated if source apportionment data are to be used as input in health effects studies or mitigation plans. Three of the most widespread receptor models (principal component analysis, PCA; positive matrix factorization, PMF; chemical mass balance, CMB) were applied to a single PM10 data set (n=328 samples, 2002-2005) obtained from an industrial area in NE Spain, dedicated to ceramic production. Sensitivity and temporal trend analyses (using the Mann-Kendall test) were applied. Results evidenced the good overall performance of the three models (r2 > 0.83 and α > 0.91 between modelled and measured PM10 mass), with a good agreement regarding source identification and high correlations between input (CMB) and output (PCA, PMF) source profiles. Larger differences were obtained regarding the quantification of source contributions (up to a factor of 4 in some cases). The combined application of different types of receptor models would solve the limitations of each of the models, by constructing a more robust solution based on their strengths. The authors suggest the combined use of factor analysis techniques (PCA, PMF) to identify and interpret emission sources, and to obtain a first quantification of their contributions to the PM mass, and the subsequent application of CMB. Further research is needed to ensure that source apportionment methods are robust enough for application to PM health effects assessments.
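
    The CMB step in such an inter-comparison amounts to a constrained linear fit: measured species concentrations are expressed as known source profiles times non-negative source contributions. A minimal sketch follows, with entirely hypothetical profiles and ambient data standing in for the real chemical speciation.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: hypothetical source profiles (mass fraction of each species per
# unit PM10 mass from that source); rows: chemical species. Numbers invented.
profiles = np.array([
    [0.30, 0.05, 0.02],   # OC
    [0.10, 0.01, 0.01],   # EC
    [0.05, 0.02, 0.40],   # SO4
    [0.02, 0.35, 0.01],   # Si  (crustal tracer)
    [0.04, 0.01, 0.00],   # Zn  (industrial tracer)
])  # sources: [ceramic industry, crustal, secondary sulphate]

ambient = np.array([12.1, 3.6, 9.8, 7.5, 1.4])  # measured ug/m3 per species

# CMB reduces to a non-negative least-squares fit of source contributions.
contrib, residual = nnls(profiles, ambient)
print("source contributions (ug/m3):", contrib.round(2))
print("residual norm:", round(residual, 3))
```

    PCA/PMF work in the opposite direction: they infer both the profiles and the contributions from the ambient data matrix, which is why the paper suggests using them first and then applying CMB with the identified profiles.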

  17. Analysis of radiative and phase-change phenomena with application to space-based thermal energy storage

    NASA Technical Reports Server (NTRS)

    Lund, Kurt O.

    1991-01-01

    The simplified geometry for the analysis is an infinite, axisymmetric annulus with a specified solar flux at the outer radius. The inner radius is either adiabatic (modeling Flight Experiment conditions) or convective (modeling Solar Dynamic conditions). Liquid LiF either contacts the outer wall (modeling ground-based testing) or faces a void gap at the outer wall (modeling possible space-based conditions). The analysis is presented in three parts: Part 1 considers an adiabatic inner wall and linearized radiation equations; Part 2 adds the effects of convection at the inner wall; and Part 3 includes the effect of the void gap, as well as the previous effects, and develops the radiation model further. The main results are the differences in melting behavior which can occur between ground-based 1-g experiments and the microgravity flight experiments. Under 1 g, melted PCM will always contact the outer wall containing the heat-flux source, thus providing conductance from this source to the phase-change front. In space-based tests, where a void gap may likely form during solidification, the situation is reversed; radiation is now the only mode of heat transfer and the majority of melting takes place from the inner wall.

  18. Modeling the source contribution of heavy metals in surficial sediment and analysis of their historical changes in the vertical sediments of a drinking water reservoir

    NASA Astrophysics Data System (ADS)

    Wang, Guoqiang; A, Yinglan; Jiang, Hong; Fu, Qing; Zheng, Binghui

    2015-01-01

    Increasing water pollution in developing countries poses a significant threat to environmental health and human welfare. Understanding the spatial distribution and apportioning the sources of pollution are important for the efficient management of water resources. In this study, ten types of heavy metals were detected during 2010-2013 in all ambient samples and point-source samples. A pollution assessment of the surficial sediment dataset based on the Enrichment Factor (EF) showed the surficial sediment was moderately contaminated. A comparison of the multivariate approach (principal component analysis/absolute principal component score, PCA/APCS) and the chemical mass balance (CMB) model shows that the identification of sources and calculation of source contributions based on the CMB were more objective and acceptable when source profiles were known and the source composition was complex. The results of source apportionment for surficial heavy metals, from both the PCA/APCS and CMB models, showed that the natural background (30%) was the most dominant contributor to the surficial heavy metals, followed by mining activities (29%). The contribution percentage of the natural background was negatively related to the degree of contamination. The peak concentrations of many heavy metals (Cu, Ba, Fe, As and Hg) were found in the middle layer of sediment, most likely as a result of the development of industry beginning in the 1970s. However, the highest concentration of Pb appeared in the surficial sediment layer, most likely due to the sharp increase in traffic volume. The historical analysis of the sources based on the CMB showed that mining and the chemical industry are stable sources for all of the sections. A comparison of the change rates of source contributions over the years indicated that the composition of the materials at the estuary site (HF1) is sensitive to the input from the land, whereas the center site (HF4) has a buffering effect on the materials from the land through a series of complex movements. These results provide information for the development of improved pollution control strategies for lakes and reservoirs.
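
    The Enrichment Factor used in such assessments is conventionally the metal-to-reference-element ratio in the sample divided by the same ratio in background material. A minimal sketch with made-up concentrations, assuming Fe as the conservative reference element (the abstract does not state which reference was used):

```python
# Hypothetical sediment concentrations (mg/kg); all values illustrative.
sample     = {"Cu": 85.0, "Pb": 60.0, "Fe": 41000.0}
background = {"Cu": 30.0, "Pb": 25.0, "Fe": 47000.0}

def enrichment_factor(metal: str) -> float:
    """EF = (M/Fe)_sample / (M/Fe)_background; EF well above 1-2 is
    commonly read as a non-crustal (anthropogenic) contribution."""
    return ((sample[metal] / sample["Fe"])
            / (background[metal] / background["Fe"]))

for m in ("Cu", "Pb"):
    print(m, round(enrichment_factor(m), 2))
```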

  19. Improving Agent Based Models and Validation through Data Fusion

    PubMed Central

    Laskowski, Marek; Demianyk, Bryan C.P.; Friesen, Marcia R.; McLeod, Robert D.; Mukhi, Shamir N.

    2011-01-01

    This work is contextualized in research in modeling and simulation of infection spread within a community or population, with the objective to provide a public health and policy tool in assessing the dynamics of infection spread and the qualitative impacts of public health interventions. This work uses the integration of real data sources into an Agent Based Model (ABM) to simulate respiratory infection spread within a small municipality. Novelty is derived in that the data sources are not necessarily obvious within ABM infection spread models. The ABM is a spatial-temporal model inclusive of behavioral and interaction patterns between individual agents on a real topography. The agent behaviours (movements and interactions) are fed by census / demographic data, integrated with real data from a telecommunication service provider (cellular records) and person-person contact data obtained via a custom 3G Smartphone application that logs Bluetooth connectivity between devices. Each source provides data of varying type and granularity, thereby enhancing the robustness of the model. The work demonstrates opportunities in data mining and fusion that can be used by policy and decision makers. The data become real-world inputs into individual SIR disease spread models and variants, thereby building credible and non-intrusive models to qualitatively simulate and assess public health interventions at the population level. PMID:23569606

  20. Improving Agent Based Models and Validation through Data Fusion.

    PubMed

    Laskowski, Marek; Demianyk, Bryan C P; Friesen, Marcia R; McLeod, Robert D; Mukhi, Shamir N

    2011-01-01

    This work is contextualized in research in modeling and simulation of infection spread within a community or population, with the objective to provide a public health and policy tool in assessing the dynamics of infection spread and the qualitative impacts of public health interventions. This work uses the integration of real data sources into an Agent Based Model (ABM) to simulate respiratory infection spread within a small municipality. Novelty is derived in that the data sources are not necessarily obvious within ABM infection spread models. The ABM is a spatial-temporal model inclusive of behavioral and interaction patterns between individual agents on a real topography. The agent behaviours (movements and interactions) are fed by census / demographic data, integrated with real data from a telecommunication service provider (cellular records) and person-person contact data obtained via a custom 3G Smartphone application that logs Bluetooth connectivity between devices. Each source provides data of varying type and granularity, thereby enhancing the robustness of the model. The work demonstrates opportunities in data mining and fusion that can be used by policy and decision makers. The data become real-world inputs into individual SIR disease spread models and variants, thereby building credible and non-intrusive models to qualitatively simulate and assess public health interventions at the population level.
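
    To make the modelling idea concrete, the sketch below runs a minimal agent-based SIR simulation with random daily contacts. It is a toy stand-in, not the authors' model: the population size, contact rate, and probabilities are invented, whereas the paper drives agent contacts with census, cellular, and Bluetooth data.

```python
import numpy as np

rng = np.random.default_rng(7)

N, I0 = 1000, 5            # population size, initial infectious agents
beta, gamma = 0.3, 0.1     # per-contact transmission prob., daily recovery prob.
contacts_per_day = 8       # illustrative; data-fused models draw these from
                           # real contact records instead

state = np.zeros(N, dtype=int)          # 0=S, 1=I, 2=R
state[rng.choice(N, I0, replace=False)] = 1

for day in range(120):
    infectious = np.flatnonzero(state == 1)
    for i in infectious:
        # each infectious agent meets a random subset of the population
        met = rng.choice(N, contacts_per_day, replace=False)
        sus = met[state[met] == 0]
        state[sus[rng.random(sus.size) < beta]] = 1
    # daily recoveries
    state[(state == 1) & (rng.random(N) < gamma)] = 2
    if day % 20 == 0:
        print(day, np.bincount(state, minlength=3))   # S, I, R counts
```

    Replacing the uniform random contacts with empirically observed contact sets is exactly the data-fusion step the paper argues for.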

  1. Coherent vertical structures in numerical simulations of buoyant plumes from wildland fires

    Treesearch

    Philip Cunningham; Scott L. Goodrick; M. Yousuff Hussaini; Rodman R. Linn

    2005-01-01

    The structure and dynamics of buoyant plumes arising from surface-based heat sources in a vertically sheared ambient atmospheric flow are examined via simulations of a three-dimensional, compressible numerical model. Simple circular heat sources and asymmetric elliptical ring heat sources that are representative of wildland fires of moderate intensity are considered....

  2. Period-dependent source rupture behavior of the 2011 Tohoku earthquake estimated by multi period-band Bayesian waveform inversion

    NASA Astrophysics Data System (ADS)

    Kubo, H.; Asano, K.; Iwata, T.; Aoi, S.

    2014-12-01

    Previous studies of the period-dependent source characteristics of the 2011 Tohoku earthquake (e.g., Koper et al., 2011; Lay et al., 2012) were based on short- and long-period source models obtained with different methods. Kubo et al. (2013) obtained source models of the 2011 Tohoku earthquake from waveform data in multiple period bands using a common inversion method and discussed the period-dependent source characteristics. In this study, to resolve the spatiotemporal rupture behavior of this event in more detail, we introduce a new fault surface model with finer subfault size and estimate the source models in multiple period bands using a Bayesian inversion method combined with a multi-time-window method. Three components of velocity waveforms at 25 stations of K-NET, KiK-net, and F-net of NIED are used in this analysis. The target period band is 10-100 s. We divide this band into three period bands (10-25 s, 25-50 s, and 50-100 s) and estimate a kinematic source model in each period band using a Bayesian inversion method with MCMC sampling (e.g., Fukuda & Johnson, 2008; Minson et al., 2013, 2014). The parameterization of the spatiotemporal slip distribution follows the multi-time-window method (Hartzell & Heaton, 1983). The Green's functions are calculated by the 3D FDM (GMS; Aoi & Fujiwara, 1999) using a 3D velocity structure model (JIVSM; Koketsu et al., 2012). The assumed fault surface model is based on the Pacific plate boundary of JIVSM and is divided into 384 subfaults of about 16 km x 16 km. The estimated source models in the multiple period bands show the following source image: (1) a first deep rupture off Miyagi at 0-60 s toward down-dip, mostly radiating relatively short-period (10-25 s) seismic waves; (2) a shallow rupture off Miyagi at 45-90 s toward up-dip, with long duration, radiating long-period (50-100 s) seismic waves; (3) a second deep rupture off Miyagi at 60-105 s toward down-dip, radiating longer-period seismic waves than the first deep rupture; (4) a deep rupture off Fukushima at 90-135 s. The dominant-period difference in seismic-wave radiation between the two deep ruptures off Miyagi may result from small-scale heterogeneities on the fault being removed by the first rupture. This difference can also be interpreted through the concept of multi-scale dynamic rupture (Ide & Aochi, 2005).
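
    The core of such a Bayesian inversion is sampling a posterior over slip parameters given a forward model and a data-misfit likelihood. The sketch below does this for a toy linear problem with a random-walk Metropolis sampler; the Green's-function matrix, noise level, and positivity prior are illustrative assumptions, far simpler than the 3D FDM Green's functions and multi-time-window parameterization used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward problem standing in for waveform prediction:
# d = G @ slip, with G a (n_data x n_subfaults) Green's-function matrix.
n_data, n_sub = 200, 12
G = rng.normal(size=(n_data, n_sub))
true_slip = np.abs(rng.normal(2.0, 1.0, n_sub))
d_obs = G @ true_slip + rng.normal(0, 0.5, n_data)
sigma = 0.5

def log_post(m):
    if np.any(m < 0):                       # positivity prior on slip
        return -np.inf
    r = d_obs - G @ m
    return -0.5 * np.sum(r**2) / sigma**2

m = np.ones(n_sub)
lp = log_post(m)
samples = []
for it in range(20_000):
    prop = m + rng.normal(0, 0.05, n_sub)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp: # Metropolis accept/reject
        m, lp = prop, lp_prop
    if it > 5_000 and it % 10 == 0:         # burn-in, thinning
        samples.append(m.copy())

post = np.array(samples)
print("posterior mean slip:", post.mean(axis=0).round(2))
print("true slip          :", true_slip.round(2))
```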

  3. Linear dependence between the wavefront gradient and the masked intensity for the point source with a CCD sensor

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Ma, Liang; Wang, Bin

    2018-01-01

    In contrast to the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need a WFS to measure the wavefront aberrations. It is simpler than conventional AO in system architecture and can be applied under complex conditions. The model-based WFSless system has great potential for real-time correction applications because of its fast convergence. The control algorithm of the model-based WFSless system rests on an important theoretical result: the linear relation between the Mean-Square Gradient (MSG) magnitude of the wavefront aberration and the second moment of the masked intensity distribution in the focal plane (also called the Masked Detector Signal, MDS). The linear dependence between MSG and MDS for point-source imaging with a CCD sensor is discussed theoretically and through simulation in this paper. The theoretical relationship between MSG and MDS is given based on our previous work. To verify the linear relation for the point source, we set up an imaging model under atmospheric turbulence. Additionally, the value of MDS deviates from the theoretical one because of detector noise, and this deviation in turn affects the correction performance. The theoretical results under noise are obtained through derivation, and the linear relation between MSG and MDS under noise is then examined with the imaging model. Results show that the linear relation between MSG and MDS under noise is also maintained well, which provides theoretical support for applications of the model-based WFSless system.
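
    The claimed linearity can be probed numerically: generate random phase screens of varying strength, compute the focal-plane intensity, and compare the mean-square gradient of the phase with the masked second moment of the intensity. The sketch below does this under simplifying assumptions (a toy red-spectrum phase screen, an arbitrary mask radius), so it illustrates the relation rather than reproducing the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
pupil = (x**2 + y**2) <= 1.0

def smooth_phase(scale):
    """Random low-order phase screen (a crude turbulence stand-in)."""
    white = rng.normal(size=(n, n))
    fy = np.fft.fftfreq(n)[:, None]; fx = np.fft.fftfreq(n)[None, :]
    filt = 1.0 / (1e-3 + fx**2 + fy**2)       # red spectrum -> smooth screen
    phi = np.real(np.fft.ifft2(np.fft.fft2(white) * filt))
    phi -= phi[pupil].mean()
    return scale * phi / phi[pupil].std()

msg_list, mds_list = [], []
for scale in np.linspace(0.1, 1.5, 20):
    phi = smooth_phase(scale)
    gy, gx = np.gradient(phi)
    msg = np.mean(gy[pupil]**2 + gx[pupil]**2)          # mean-square gradient
    field = pupil * np.exp(1j * phi)
    I = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2  # focal-plane intensity
    I /= I.sum()
    u = np.fft.fftshift(np.fft.fftfreq(n))
    UX, UY = np.meshgrid(u, u)
    r2 = UX**2 + UY**2
    mds = np.sum(I * r2 * (r2 < 0.05))                  # masked second moment
    msg_list.append(msg); mds_list.append(mds)

print("corr(MSG, MDS) =", round(np.corrcoef(msg_list, mds_list)[0, 1], 3))
```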

  4. Interpreting intraplate tectonics for seismic hazard: a UK historical perspective

    NASA Astrophysics Data System (ADS)

    Musson, R. M. W.

    2012-04-01

    It is notoriously difficult to construct seismic source models for probabilistic seismic hazard assessment in intraplate areas on the basis of geological information, and many practitioners have given up the task in favour of purely seismicity-based models. This risks losing potentially valuable information in regions where the earthquake catalogue is short compared to the seismic cycle. It is interesting to survey how attitudes to this issue have evolved over the past 30 years. This paper takes the UK as an example, and traces the evolution of seismic source models through generations of hazard studies. It is found that in the UK, while the earliest studies did not consider regional tectonics in any way, there has been a gradual evolution towards more tectonically based models. Experience in other countries, of course, may differ.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Zhiming; Abdelaziz, Omar; Qu, Ming

    This paper introduces a first-order physics-based model that accounts for the fundamental heat and mass transfer between a humid-air vapor stream on the feed side and another flow stream on the permeate side. The model comprises a few optional submodels for membrane mass transport, and it adopts a segment-by-segment method for discretizing the heat and mass transfer governing equations for the flow streams on the feed and permeate sides. The model is able to simulate both dehumidifiers and energy recovery ventilators in parallel-flow, cross-flow, and counter-flow configurations. The predicted results compare reasonably well with the measurements. The open-source codes are written in C++. The model and open-source codes are expected to become a fundamental tool for the analysis of membrane-based dehumidification in the future.
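
    The segment-by-segment idea can be illustrated with a short march along a parallel-flow exchanger, updating temperature and humidity ratio one segment at a time. The conductances and inlet states below are invented placeholders, and the linearized moisture transfer stands in for the paper's membrane submodels; a counter-flow configuration would additionally require iterating on an assumed outlet state.

```python
# Segment-by-segment march for a parallel-flow membrane exchanger.
# All coefficient values are illustrative assumptions, not from the paper.
n_seg = 50
m_dot_f = m_dot_p = 0.05   # kg/s dry air, feed and permeate streams
cp = 1006.0                # J/(kg K), moist-air specific heat (approx.)
UA_heat = 120.0            # W/K, total heat-transfer conductance
UA_mass = 0.004            # kg/s, total (linearized) moisture conductance

T_f, w_f = 35.0, 0.020     # feed inlet: temperature (C), humidity ratio (kg/kg)
T_p, w_p = 24.0, 0.009     # permeate inlet

for _ in range(n_seg):
    q  = (UA_heat / n_seg) * (T_f - T_p)     # segment heat transfer, W
    mw = (UA_mass / n_seg) * (w_f - w_p)     # segment moisture transfer, kg/s
    T_f -= q / (m_dot_f * cp);  T_p += q / (m_dot_p * cp)
    w_f -= mw / m_dot_f;        w_p += mw / m_dot_p

print(f"feed outlet:     {T_f:.2f} C, {w_f*1000:.2f} g/kg")
print(f"permeate outlet: {T_p:.2f} C, {w_p*1000:.2f} g/kg")
```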

  6. Description of a New Predictive Modeling Approach That Correlates the Risk and Associated Cost of Well-Defined Diabetes-Related Complications With Changes in Glycated Hemoglobin (HbA1c)

    PubMed Central

    Fortwaengler, Kurt; Parkin, Christopher G.; Neeser, Kurt; Neumann, Monika; Mast, Oliver

    2017-01-01

    The modeling approach described here is designed to support the development of spreadsheet-based simple predictive models. It is based on 3 pillars: the association of complications with HbA1c changes, the incidence of the complications, and the average cost per event of the complication. For each pillar, the goal of the analysis was (1) to find results for a large diversity of populations, with a focus on countries/regions, diabetes type, age, diabetes duration, baseline HbA1c value, and gender; and (2) to assess the range of incidences and associations previously reported. Unlike simple predictive models, which are mostly based on only one source of information for each of the pillars, we conducted a comprehensive, systematic literature review. Each source found was thoroughly reviewed, and only sources meeting quality expectations were considered. The approach avoids the unintended use of extreme data. The user can utilize (1) one of the found sources, (2) the found range as validation for the found figures, or (3) the average of all found publications for an expedited estimate. The modeling approach is intended for use in average insulin-treated diabetes populations in which the baseline HbA1c values are within an average range (6.5% to 11.5%); it is not intended for use in individuals or unique diabetes populations (eg, gestational diabetes). Because the modeling approach only considers diabetes-related complications that are positively associated with HbA1c decreases, the costs of negatively associated complications (eg, severe hypoglycemic events) must be calculated separately. PMID:27510441
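
    The spreadsheet arithmetic the three pillars feed into can be shown in a few lines: an HbA1c reduction times a per-percentage-point relative risk reduction (pillar 1), times an incidence (pillar 2), times a cost per event (pillar 3). Every number in the sketch is a placeholder, not a value from the underlying literature review.

```python
# Placeholder inputs; a real model would take each from the reviewed sources.
population   = 10_000     # insulin-treated patients
hba1c_drop   = 0.5        # percentage-point HbA1c reduction
rel_risk_red = 0.12       # relative risk reduction per 1%-pt drop (pillar 1)
incidence    = 0.020      # annual events per patient (pillar 2)
cost_per_evt = 8_500.0    # average cost per event, EUR (pillar 3)

# Assumes a linear association between HbA1c change and risk reduction.
events_avoided = population * incidence * rel_risk_red * hba1c_drop
savings = events_avoided * cost_per_evt
print(f"events avoided/yr: {events_avoided:.1f}")
print(f"estimated savings: {savings:,.0f} EUR/yr")
```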

  7. Hierarchical Bayesian Model Averaging for Non-Uniqueness and Uncertainty Analysis of Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    Artificial Neural Networks (ANNs) have been widely used to estimate concentrations of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are estimated by maximum likelihood. ANN structure is also uncertain because there is no unique ANN model for a given case. Therefore, a study generally yields multiple plausible ANN models. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate into the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages the outputs of all plausible ANN models. The model weights are based on the evidence of data. Therefore, HBMA avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through the aggregation of ANN models in a hierarchical framework. This method is applied to the estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran. Unusually high fluoride concentrations in the Poldasht and Bazargan plains have had negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that HBMA provides a knowledge-based decision framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in a hierarchical framework. Fluoride concentration estimates from the HBMA method show better agreement with the observation data in the test step because they do not rely on a single model when no model weight is dominant.
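
    The averaging step itself is compact: given per-model estimates, per-model variances, and data-evidence weights, the BMA mean is the weighted mean, and the total variance splits into within-model and between-model parts by the law of total variance. A minimal sketch with invented numbers:

```python
import numpy as np

# Illustrative stand-ins for three plausible ANN models at one location.
w   = np.array([0.5, 0.3, 0.2])      # model weights from the evidence of data
mu  = np.array([1.10, 1.25, 0.95])   # each model's fluoride estimate (mg/L)
var = np.array([0.02, 0.03, 0.05])   # each model's estimation variance

bma_mean = np.sum(w * mu)
# Law of total variance: within-model plus between-model contributions.
within  = np.sum(w * var)
between = np.sum(w * (mu - bma_mean)**2)
print(f"BMA mean: {bma_mean:.3f} mg/L")
print(f"variance: {within + between:.4f} "
      f"(within {within:.4f}, between {between:.4f})")
```

    In the hierarchical version, this decomposition is applied level by level (inputs, parameters, structures), which is what lets the framework attribute variance to each uncertainty source.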

  8. Fully probabilistic earthquake source inversion on teleseismic scales

    NASA Astrophysics Data System (ADS)

    Stähler, Simon; Sigloch, Karin

    2017-04-01

    Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.
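
    The misfit and likelihood ingredients are easy to state in code: the decorrelation D = 1 - CC of an observed/modelled waveform pair, scored under a log-normal noise model whose parameters would, as in the paper, be calibrated on a reference catalogue of quality-controlled solutions. The sketch below uses synthetic waveforms and made-up log-normal parameters.

```python
import numpy as np

def decorrelation(obs, syn):
    """D = 1 - CC between an observed and a modelled waveform."""
    cc = np.corrcoef(obs, syn)[0, 1]
    return 1.0 - cc

def log_likelihood(D, mu_lnD, sigma_lnD):
    """Log-normal noise model on the decorrelation D; mu_lnD and sigma_lnD
    would be calibrated from reference solutions (here: made up)."""
    lnD = np.log(D)
    return (-np.log(D * sigma_lnD * np.sqrt(2 * np.pi))
            - 0.5 * ((lnD - mu_lnD) / sigma_lnD) ** 2)

t = np.linspace(0, 60, 600)
obs = np.sin(0.4 * t) * np.exp(-0.03 * t)
syn = np.sin(0.4 * t + 0.05) * np.exp(-0.03 * t)   # slightly mis-modelled
D = decorrelation(obs, syn)
print(f"D = {D:.4f}, logL = {log_likelihood(D, np.log(0.05), 0.8):.2f}")
```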

  9. Time Resolved Phonon Spectroscopy, Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goett, Johnny; Zhu, Brian

    TRPS code was developed for the project "Time Resolved Phonon Spectroscopy". Routines contained in this piece of software were specially created to model phonon generation and tracking within materials that interact with ionizing radiation, particularly applicable to the modeling of cryogenic radiation detectors for dark matter and neutrino research. These routines were created to link seamlessly with the open source Geant4 framework for the modeling of radiation transport in matter, with the explicit intent of open sourcing them for eventual integration into that code base.

  10. Source Model of the MJMA 6.5 Plate-Boundary Earthquake at the Nankai Trough, Southwest Japan, on April 1, 2016, Based on Strong Motion Waveform Modeling

    NASA Astrophysics Data System (ADS)

    Asano, K.

    2017-12-01

    An MJMA 6.5 earthquake occurred offshore the Kii peninsula, southwest Japan, on April 1, 2016. This event was interpreted as a thrust event on the plate boundary along the Nankai trough (Wallace et al., 2016). It is the largest plate-boundary earthquake in the source region of the 1944 Tonankai earthquake (MW 8.0) since that event. A significant point of this event for seismic observation is that it occurred beneath an ocean-bottom seismic network called DONET1, which is jointly operated by NIED and JAMSTEC. Since moderate-to-large earthquakes of this focal type have been very rare in this region in the last half century, it is a good opportunity to investigate the source characteristics relating to strong motion generation of subduction-zone plate-boundary earthquakes along the Nankai trough. Knowledge obtained from the study of this earthquake would contribute to ground motion prediction and seismic hazard assessment for future megathrust earthquakes expected in the Nankai trough. In this study, the source model of the 2016 offshore Kii peninsula earthquake was estimated by broadband strong motion waveform modeling using the empirical Green's function method (Irikura, 1986). The source model is characterized by a strong motion generation area (SMGA) (Miyake et al., 2003), which is defined as a rectangular area with high stress drop or high slip velocity. An SMGA source model based on the empirical Green's function method has great potential to reproduce ground motion time histories over a broadband frequency range. We used strong motion data from offshore stations (DONET1 and LTBMS) and onshore stations (NIED F-net and DPRI). The records of an MJMA 3.2 aftershock at 13:04 on April 1, 2016 were selected as the empirical Green's functions. The source parameters of the SMGA are optimized by waveform modeling in the frequency range 0.4-10 Hz. The best estimate of the SMGA size is 19.4 km^2, and the SMGA of this event does not follow the source scaling relationship for past plate-boundary earthquakes along the Japan trench, northeast Japan. This finding implies that the source characteristics of plate-boundary events in the Nankai trough differ from those in the Japan trench, which could be important information for considering regional variation in ground motion prediction.

  11. Additive Partial Least Squares for efficient modelling of independent variance sources demonstrated on practical case studies.

    PubMed

    Luoma, Pekka; Natschläger, Thomas; Malli, Birgit; Pawliczek, Marcin; Brandstetter, Markus

    2018-05-12

    A model recalibration method based on additive Partial Least Squares (PLS) regression is generalized for multi-adjustment scenarios of independent variance sources (referred to as additive PLS, aPLS). aPLS allows for effortless model readjustment under changing measurement conditions and the combination of independent variance sources with the initial model by means of additive modelling. We demonstrate these distinguishing features on two NIR spectroscopic case studies. In case study 1, aPLS was used as a readjustment method for an emerging offset. The achieved RMS error of prediction (1.91 a.u.) was on a similar level as before the offset occurred (2.11 a.u.). In case study 2, a calibration combining different variance sources was conducted. The achieved performance was sufficient, with an absolute error better than 0.8% of the mean concentration, thereby compensating for the negative effects of two independent variance sources. The presented results show the applicability of the aPLS approach. The main advantages of the method are that the original model stays unadjusted and that the modelling is conducted on concrete changes in the spectra, thus supporting efficient (in most cases straightforward) modelling. Additionally, the method is put into the context of existing machine learning algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.
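
    One plausible reading of the additive construction is sketched below with scikit-learn's PLSRegression on synthetic spectra: keep the original model untouched and fit a small second PLS model on the residuals observed under the changed measurement condition, so predictions are the sum of both. The data generator, offset, and component counts are assumptions for illustration only, not the authors' exact formulation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)

def make_spectra(n, offset=0.0):
    """Synthetic 'spectra': 200 wavelengths, one analyte band plus noise."""
    conc = rng.uniform(0, 1, n)
    wl = np.linspace(0, 1, 200)
    X = conc[:, None] * np.exp(-((wl - 0.5) / 0.1) ** 2)[None, :]
    X += offset * wl[None, :] + rng.normal(0, 0.01, (n, 200))
    return X, conc

X0, y0 = make_spectra(100)                 # original measurement conditions
base = PLSRegression(n_components=3).fit(X0, y0)

# New condition introduces a baseline offset; recalibrate additively on a
# handful of new samples instead of retraining the (unchanged) base model.
Xn, yn = make_spectra(20, offset=0.3)
resid = yn - base.predict(Xn).ravel()
corr = PLSRegression(n_components=1).fit(Xn, resid)

Xt, yt = make_spectra(50, offset=0.3)
pred = base.predict(Xt).ravel() + corr.predict(Xt).ravel()
rmsep = np.sqrt(np.mean((pred - yt) ** 2))
print("RMSEP with additive correction:", round(rmsep, 4))
```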

  12. Fundamental mass transfer modeling of emission of volatile organic compounds from building materials

    NASA Astrophysics Data System (ADS)

    Bodalal, Awad Saad

    In this study, a model based on mass transfer theory is presented for characterizing VOC emissions from building materials. A 3-D diffusion model is developed to describe the emissions of volatile organic compounds (VOCs) from individual sources. The formulation is then extended to include emissions from composite sources (systems comprising an assemblage of individual sources). The key parameters of the model (the diffusion coefficient of the VOC in the source material, D, and the equilibrium partition coefficient, ke) were determined independently, i.e., without the use of chamber emission data. This procedure eliminated to a large extent the need for emission testing using environmental chambers, which is costly, time consuming, and may be subject to confounding sink effects. An experimental method is developed and implemented to measure directly the internal diffusion coefficient (D) and partition coefficient (ke). The use of the method is illustrated for three types of VOCs: (i) aliphatic hydrocarbons, (ii) aromatic hydrocarbons, and (iii) aldehydes, through typical dry building materials (carpet, plywood, particleboard, vinyl floor tile, gypsum board, sub-floor tile and OSB). Correlations for predicting D and ke based solely on commonly available properties such as molecular weight and vapour pressure were then proposed for each product and type of VOC. These correlations can be used to estimate D and ke when direct measurement data are not available, and thus facilitate the prediction of VOC emissions from building materials using mass transfer theory. The VOC emissions from a sub-floor material (made of recycled automobile tires) and a particleboard were measured and predicted. Finally, a mathematical model was developed to predict the diffusion coefficient through complex sources (floor adhesive) as a function of time. This model was then used to predict the emission rate from a material system (namely, substrate/glue/vinyl tile).
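
    The forward problem underlying such models reduces, in one dimension, to Fickian diffusion in the slab with a partition-controlled convective boundary at the exposed surface. The explicit finite-difference sketch below illustrates this with invented values of D, ke, and the film coefficient; it is a toy stand-in for the paper's 3-D model.

```python
import numpy as np

L  = 0.015        # slab thickness, m
D  = 1.0e-11      # VOC diffusion coefficient in the material, m2/s (assumed)
ke = 5000.0       # material/air equilibrium partition coefficient (assumed)
h  = 1.0e-4       # convective mass-transfer coefficient at surface, m/s
C0 = 1.0e5        # initial VOC concentration in the material, ug/m3

nx = 60
dx = L / nx
dt = 0.4 * dx**2 / D              # explicit-scheme stability limit
C = np.full(nx, C0)

for step in range(5000):
    flux = h * C[-1] / ke         # emission into clean room air, ug/(m2 s)
    Cn = C.copy()
    Cn[1:-1] += D * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2])
    Cn[0]  += D * dt / dx**2 * (C[1] - C[0])               # sealed back face
    Cn[-1] += dt / dx * (D * (C[-2] - C[-1]) / dx - flux)  # exposed face
    C = Cn
    if step % 1000 == 0:
        print(f"t = {step * dt / 3600:7.0f} h, "
              f"emission = {flux:.3e} ug/(m2 s)")
```

    The slow decay of the printed emission rate mirrors the long-tail behaviour typical of dry-material VOC sources.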

  13. Developing a Near Real-time System for Earthquake Slip Distribution Inversion

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen

    2016-04-01

    Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits a strong finite-source directivity effect, which is critically important for accurate ground motion estimation and earthquake damage assessment. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for the purpose of solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculation of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fitting with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determination of finite-source solutions for seismic hazard mitigation purposes.

  14. Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.

    PubMed

    Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy

    2018-01-23

    Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and more complex. This increasing complexity induces a significant increase in computation time, which prevents the use of such models in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This is done through a new model of the motor unit (MU)-specific electrical source, based on the fibers composing the MU. In order to assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between several simulations of single generated MU action potentials (MUAPs) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was computed for five different simulation sets wherein hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results display less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced by about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MUs) in less than an hour on a classical workstation. Graphical abstract: Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90% while inducing only small deviations in the simulated HD-sEMG signals.
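
    The comparison metric is straightforward to compute; the sketch below uses one common NRMSE convention (RMS error normalized by the RMS of the reference signal; the paper's exact normalization is an assumption here), applied to synthetic stand-in signals.

```python
import numpy as np

def nrmse(reference: np.ndarray, test: np.ndarray) -> float:
    """RMS error between two signals, normalized by the RMS of the
    reference (conventions vary; this is one common choice)."""
    return (np.sqrt(np.mean((reference - test) ** 2))
            / np.sqrt(np.mean(reference ** 2)))

t = np.linspace(0, 1, 2048)
fibre_sum = np.sin(2 * np.pi * 60 * t) * np.exp(-3 * t)      # stand-in MUAP
mu_source = fibre_sum + 0.015 * np.sin(2 * np.pi * 300 * t)  # small deviation
print(f"NRMSE = {100 * nrmse(fibre_sum, mu_source):.2f} %")
```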

  15. Buying and Selling Prices of Investments: Configural Weight Model of Interactions Predicts Violations of Joint Independence.

    PubMed

    Birnbaum; Zimmermann

    1998-05-01

    Judges evaluated buying and selling prices of hypothetical investments, based on the previous price of each investment and estimates of the investment's future value given by advisors of varied expertise. The effect of a source's estimate varied in proportion to the source's expertise, and it varied inversely with the number and expertise of the other sources. There was also a configural effect in which the influence of a source's estimate depended on the rank order of that source's estimate relative to the other estimates of the same investment. These interactions were fit with a configural weight averaging model in which buyers and sellers place different weights on estimates of different ranks. This model implies that one can design a new experiment in which there will be different violations of joint independence in different viewpoints. Experiment 2 confirmed the patterns of violations of joint independence predicted from the model fit in Experiment 1. Experiment 2 also showed that preference reversals between viewpoints can be predicted by the model of Experiment 1. Configural weighting provides a better account of buying and selling prices than either of two models of loss aversion or the theory of anchoring and insufficient adjustment. Copyright 1998 Academic Press.
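
    A configural-weight average can be sketched in a few lines: each estimate's weight depends on its rank among the estimates of the same investment, with buyers and sellers using different rank-weight vectors. The weights and estimates below are invented for illustration, not fitted values from the experiments.

```python
import numpy as np

def configural_average(estimates, rank_weights):
    """Weighted average where each estimate's weight depends on its rank
    (lowest to highest); the weight vectors here are illustrative only."""
    order = np.argsort(estimates)
    w = np.empty_like(rank_weights, dtype=float)
    w[order] = rank_weights          # assign weight by rank of each estimate
    return np.sum(w * estimates) / np.sum(w)

estimates = np.array([420.0, 500.0, 650.0])  # advisors' future-value estimates
buyer_w  = np.array([0.55, 0.30, 0.15])      # buyers emphasize low estimates
seller_w = np.array([0.15, 0.30, 0.55])      # sellers emphasize high estimates
print("buying price :", round(configural_average(estimates, buyer_w), 1))
print("selling price:", round(configural_average(estimates, seller_w), 1))
```

    Because the weights are rank-dependent rather than fixed per source, swapping which advisor gave which estimate changes the price, which is the mechanism behind the predicted violations of joint independence.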

  16. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in the formats required by various model-based reasoning tools.

  17. Monitoring of "all-weather" evapotranspiration using optical and passive microwave remote sensing imagery over the River Source Region in Southwest China

    NASA Astrophysics Data System (ADS)

    Ma, Y.; Liu, S.

    2017-12-01

    Accurate estimation of surface evapotranspiration (ET) with high quality is one of the biggest obstacles to routine applications of remote sensing in eco-hydrological studies and water resource management at the basin scale. However, many aspects still urgently need deeper research, such as the applicability of ET models, the optimization of parameterization schemes at the regional scale, temporal upscaling, the selection and development of spatiotemporal data fusion methods, and ground-based validation over heterogeneous land surfaces. This project is based on the theoretically robust surface energy balance system (SEBS) model, whose mechanism needs further investigation, including its applicability and influencing factors such as the local environment and the heterogeneity of the landscape, in order to improve estimation accuracy. Due to technical and budget limitations, optical remote sensing data are so far often missing because of frequent cloud contamination and other poor atmospheric conditions in Southwest China. Here, a multi-source remote sensing data fusion method (ESTARFM: Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model) is proposed, blending multi-source remote sensing data acquired by optical and passive microwave remote sensors on board polar satellite platforms. Accurate "all-weather" daily ET estimation will be carried out for the River Source Region in Southwest China, and the remotely sensed ET results will then be overlaid with the footprint-weighted images of EC (eddy correlation) measurements for ground-based validation.

  18. Acoustic Source Analysis of Magnetoacoustic Tomography With Magnetic Induction for Conductivity Gradual-Varying Tissues.

    PubMed

    Wang, Jiawei; Zhou, Yuqi; Sun, Xiaodong; Ma, Qingyu; Zhang, Dong

    2016-04-01

    As a multiphysics imaging approach, magnetoacoustic tomography with magnetic induction (MAT-MI) works on the physical mechanism of magnetic excitation, acoustic vibration, and transmission. Based on a theoretical analysis of the source vibration, numerical studies are conducted to simulate the pathological changes of tissues for a single-layer cylindrical conductivity gradual-varying model and to estimate the strengths of the sources inside the model. The results suggest that the inner source is generated by the product of the conductivity and the curl of the induced electric intensity inside a conductivity-homogeneous medium, while the boundary source is produced by the cross product of the gradient of conductivity and the induced electric intensity at a conductivity boundary. For a biological tissue with low conductivity, the strength of the boundary source is much higher than that of the inner source only when the size of the conductivity transition zone is small. In this case, the tissue can be treated as a conductivity abrupt-varying model, ignoring the influence of the inner source. Otherwise, the contributions of inner and boundary sources should be evaluated together quantitatively. This study provides a basis for further study of precise image reconstruction in MAT-MI for pathological tissues.
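
    The two source terms described above follow the pattern of a standard vector-calculus product rule; written out (with sigma the conductivity and E the induced electric intensity), the decomposition reads:

```latex
% Standard product-rule identity matching the abstract's description:
% an "inner" term (conductivity times the curl of E) plus a "boundary"
% term (gradient of conductivity crossed with E).
\[
  \nabla \times \bigl(\sigma \mathbf{E}\bigr)
  = \underbrace{\sigma \,(\nabla \times \mathbf{E})}_{\text{inner source}}
  \;+\;
  \underbrace{\nabla \sigma \times \mathbf{E}}_{\text{boundary source}}
\]
```

    In a homogeneous region the gradient term vanishes, leaving only the inner source; at an abrupt interface the gradient term dominates, which is the limit in which the abrupt-varying model applies.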

  19. Modeling of surface dust concentration in snow cover at industrial area using neural networks and kriging

    NASA Astrophysics Data System (ADS)

    Sergeev, A. P.; Tarasov, D. A.; Buevich, A. G.; Shichkin, A. V.; Tyagunov, A. G.; Medvedev, A. N.

    2017-06-01

    Modeling of the spatial distribution of pollutants in urbanized territories is difficult, especially if there are multiple emission sources. When monitoring such territories, it is often impossible to arrange the necessary detailed sampling. Because of this, the usual methods of analysis and forecasting based on geostatistics are often less effective. Approaches based on artificial neural networks (ANNs) demonstrate the best results under these circumstances. This study compares two models based on ANNs, a multilayer perceptron (MLP) and generalized regression neural networks (GRNNs), with the base geostatistical method, kriging. Models of the spatial dust distribution in the snow cover around an existing copper quarry and in the area of emissions of a nickel factory were created. To assess the effectiveness of the models, three indices were used: the mean absolute error (MAE), the root-mean-square error (RMSE), and the relative root-mean-square error (RRMSE). Taking all indices into account, the GRNN model, which included the coordinates of the sampling points and the distance to the likely emission source as input parameters, proved to be the most accurate. Maps of spatial dust distribution in the snow cover were created for the study area. It has been shown that the models based on ANNs were more accurate than kriging, particularly in the context of a limited data set.
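
    The three comparison indices are simple to compute; a short sketch with invented observation/prediction pairs (one common RRMSE convention, normalizing by the mean observed value, is assumed here):

```python
import numpy as np

def error_indices(obs: np.ndarray, pred: np.ndarray):
    """MAE, RMSE and relative RMSE; RRMSE is expressed relative to the
    mean observed value (conventions vary)."""
    err = pred - obs
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    rrmse = rmse / np.mean(obs)
    return mae, rmse, rrmse

obs  = np.array([120., 95., 210., 60., 180.])  # dust in snow, made-up units
pred = np.array([110., 99., 190., 75., 170.])  # model output at same points
mae, rmse, rrmse = error_indices(obs, pred)
print(f"MAE={mae:.1f}  RMSE={rmse:.1f}  RRMSE={100 * rrmse:.1f}%")
```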

  20. The Earth's magnetosphere modeling and ISO standard

    NASA Astrophysics Data System (ADS)

    Alexeev, I.

    The empirical model T96 developed by Tsyganenko is constructed by minimizing the rms deviation from the large magnetospheric data base (Fairfield et al., 1994), which contains Earth's magnetospheric magnetic field measurements accumulated over many years. The applicability of the T96 model is limited mainly to quiet conditions in the solar wind along the Earth's orbit. But contrary to the internal planetary field, the external magnetospheric magnetic field sources are much more time-dependent. A reliable representation of the magnetic field is crucial in the framework of radiation belt modelling, especially for disturbed conditions. The latest version of the Tsyganenko model has been constructed for a geomagnetic storm time interval. This version is based on a more accurate and physically consistent approach in which each source of the magnetic field has its own relaxation timescale and a driving function based on an individual best-fit combination of solar wind and IMF parameters. The same method has been used previously for the paraboloid model construction. This method is based on a priori information about the structure of the global magnetospheric current systems. Each current system is included as a separate block (module) in the magnetospheric model. As shown by spacecraft magnetometer data, there are three current systems which are the main contributors to the external magnetospheric magnetic field: the magnetopause currents, the ring current, and the tail current sheet. The paraboloid model is based on an analytical solution of the Laplace equation.

  1. Shear Wave Generation and Modeling Ground Motion From a Source Physics Experiment (SPE) Underground Explosion

    NASA Astrophysics Data System (ADS)

    Pitarka, Arben; Mellors, Robert; Rodgers, Arthur; Vorobiev, Oleg; Ezzedine, Souheil; Matzel, Eric; Ford, Sean; Walter, Bill; Antoun, Tarabay; Wagoner, Jeffery; Pasyanos, Mike; Petersson, Anders; Sjogreen, Bjorn

    2014-05-01

    We investigate the excitation and propagation of far-field (epicentral distance larger than 20 m) seismic waves by analyzing and modeling ground motion from an underground chemical explosion recorded during the Source Physics Experiment (SPE), Nevada. The far-field recorded ground motion is characterized by complex features, such as large azimuthal variations in P- and S-wave amplitudes, as well as substantial energy on the tangential component of motion. Shear wave energy is also observed on the tangential component of the near-field motion (epicentral distance smaller than 20 m), suggesting that shear waves were generated at or very near the source. These features become more pronounced as the waves propagate away from the source. We address the shear wave generation during the explosion by modeling ground motion waveforms recorded in the frequency range 0.01-20 Hz, at distances of up to 1 km. We used a physics-based approach that combines hydrodynamic modeling of the source with anelastic modeling of wave propagation in order to separate the contributions of the source and of near-source wave scattering to shear motion generation. We found that wave propagation scattering caused by the near-source geological environment, including surface topography, contributes to enhancement of shear waves generated from the explosion source. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-06NA25946/ NST11-NCNS-TM-EXP-PD15.

  2. Sensitivity tests to define the source apportionment performance criteria in the DeltaSA tool

    NASA Astrophysics Data System (ADS)

    Pernigotti, Denise; Belis, Claudio A.

    2017-04-01

    Identification and quantification of the contribution of emission sources to a given area is a key task for the design of abatement strategies. Moreover, European member states are obliged to report this kind of information for zones where the pollution levels exceed the limit values. At present, little is known about the performance and uncertainty of the variety of methodologies used for source apportionment and the comparability between the results of studies using different approaches. The source apportionment Delta tool (DeltaSA) was developed by the EC-JRC to support particulate matter source apportionment modellers in the identification of sources (for factor analysis studies) and/or in the measurement of their performance. Source identification is performed by the tool by measuring the proximity of a user's chemical profile to preloaded repository data (SPECIATE and SPECIEUROPE). The model performance criteria are based on standard statistical indices calculated by comparing participants' source contribution estimates and their time series with preloaded reference data. Those preloaded data refer to previous European SA intercomparison exercises: the first with real-world data (22 participants), the second with synthetic data (25 participants), and the last with real-world data, which was also extended to Chemical Transport Models (38 receptor models and 4 CTMs). The references used for the model performance evaluation are 'true' (predefined by JRC) for the synthetic exercise, while they are calculated as the ensemble average of the participants' results in the real-world intercomparisons. The candidates used for each source ensemble reference calculation were selected among participants' results based on a number of consistency checks plus the similarity of their chemical profiles to the repository's measured data. The estimation of the ensemble reference uncertainty is crucial in order to evaluate the users' performances against it. For this reason, a sensitivity analysis of different methods to estimate the ensemble references' uncertainties was performed by re-analyzing the synthetic intercomparison dataset, the only one where 'true' reference and ensemble reference contributions were both present. DeltaSA is now available on-line and will be presented, with a critical discussion of the sensitivity analysis on the ensemble reference uncertainty. In particular, the degree of mutual agreement among participants on the presence of a given source should be taken into account. Moreover, the importance of synthetic intercomparisons for catching receptor models' common biases will be stressed.

  3. Robust radio interferometric calibration using the t-distribution

    NASA Astrophysics Data System (ADS)

    Kazemi, S.; Yatawatta, S.

    2013-10-01

    A major stage of radio interferometric data processing is calibration, or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration, such as the loss in flux or coherence and the appearance of spurious sources, could be attributed to deviations from the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model, and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
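
    The robustness mechanism can be sketched compactly: under a Student's t noise model, EM-style estimation reduces to iteratively reweighted least squares in which each residual gets a weight (nu + p)/(nu + r^2/sigma^2), so outliers are down-weighted. The toy below estimates a single complex gain from data with injected outliers; the signal model and all parameters are illustrative assumptions, far simpler than a full interferometric calibration.

```python
import numpy as np

rng = np.random.default_rng(5)

n, nu = 200, 3.0                   # samples, t degrees of freedom (assumed)
true_gain = 1.4 - 0.3j
model = np.exp(1j * rng.uniform(0, 2 * np.pi, n))  # unit-modulus model signal
data = true_gain * model + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
idx = rng.choice(n, 12, replace=False)             # inject interference-like
data[idx] += 3.0 * (rng.normal(size=12) + 1j * rng.normal(size=12))  # outliers

g = 1.0 + 0j
for _ in range(30):
    r2 = np.abs(data - g * model) ** 2
    sigma2 = np.mean(r2)                       # plug-in scale estimate
    w = (nu + 2) / (nu + r2 / sigma2)          # t-model weights (p = 2 for
                                               # complex-valued residuals)
    g = np.sum(w * np.conj(model) * data) / np.sum(w * np.abs(model) ** 2)

print("estimated gain:", np.round(g, 3), " true gain:", true_gain)
```

    With plain least squares (all weights equal), the 12 outliers would bias the gain noticeably; the t-weights suppress them, which is the effect the paper reports as preserved flux for weak unmodelled sources.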

  4. A Discrete-Time Average Model Based Predictive Control for Quasi-Z-Source Inverter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yushan; Abu-Rub, Haitham; Xue, Yaosuo

    A discrete-time average model-based predictive control (DTA-MPC) is proposed for a quasi-Z-source inverter (qZSI). As a single-stage inverter topology, the qZSI regulates the dc-link voltage and the ac output voltage through the shoot-through (ST) duty cycle and the modulation index. Several feedback strategies have been dedicated to producing these two control variables, among which the most popular are the proportional-integral (PI)-based control and the conventional model-predictive control (MPC). However, in the former, there are tradeoffs between fast response and stability; the latter is robust, but at the cost of a high calculation burden and variable switching frequency. Moreover, they require an elaborated design or fine tuning of controller parameters. The proposed DTA-MPC predicts future behaviors of the ST duty cycle and modulation signals, based on the established discrete-time average model of the quasi-Z-source (qZS) inductor current, the qZS capacitor voltage, and load currents. The prediction actions are applied to the qZSI modulator in the next sampling instant, without the need to design other controller parameters. A constant switching frequency and significantly reduced computations are achieved with high performance. Transient responses and steady-state accuracy of the qZSI system under the proposed DTA-MPC are investigated and compared with the PI-based control and the conventional MPC. Simulation and experimental results verify the effectiveness of the proposed approach for the qZSI.

  5. A Discrete-Time Average Model Based Predictive Control for Quasi-Z-Source Inverter

    DOE PAGES

    Liu, Yushan; Abu-Rub, Haitham; Xue, Yaosuo; ...

    2017-12-25

    A discrete-time average model-based predictive control (DTA-MPC) is proposed for a quasi-Z-source inverter (qZSI). As a single-stage inverter topology, the qZSI regulates the dc-link voltage and the ac output voltage through the shoot-through (ST) duty cycle and the modulation index. Several feedback strategies have been dedicated to producing these two control variables, among which the most popular are the proportional-integral (PI)-based control and the conventional model-predictive control (MPC). However, in the former, there are tradeoffs between fast response and stability; the latter is robust, but at the cost of a high calculation burden and variable switching frequency. Moreover, they require an elaborated design or fine tuning of controller parameters. The proposed DTA-MPC predicts future behaviors of the ST duty cycle and modulation signals, based on the established discrete-time average model of the quasi-Z-source (qZS) inductor current, the qZS capacitor voltage, and load currents. The prediction actions are applied to the qZSI modulator in the next sampling instant, without the need to design other controller parameters. A constant switching frequency and significantly reduced computations are achieved with high performance. Transient responses and steady-state accuracy of the qZSI system under the proposed DTA-MPC are investigated and compared with the PI-based control and the conventional MPC. Simulation and experimental results verify the effectiveness of the proposed approach for the qZSI.

  6. A spatial individual-based model predicting a great impact of copious sugar sources and resting sites on survival of Anopheles gambiae and malaria parasite transmission

    USGS Publications Warehouse

    Zhu, Lin; Qualls, Whitney A.; Marshall, John M; Arheart, Kris L.; DeAngelis, Donald L.; McManus, John W.; Traore, Sekou F.; Doumbia, Seydou; Schlein, Yosef; Muller, Gunter C.; Beier, John C.

    2015-01-01

    Background: Agent-based modelling (ABM) has been used to simulate mosquito life cycles and to evaluate vector control applications. However, most models lack sugar-feeding and resting behaviours or are based on mathematical equations lacking individual level randomness and spatial components of mosquito life. Here, a spatial individual-based model (IBM) incorporating sugar-feeding and resting behaviours of the malaria vector Anopheles gambiae was developed to estimate the impact of environmental sugar sources and resting sites on survival and biting behaviour. Methods: A spatial IBM containing An. gambiae mosquitoes and humans, as well as the village environment of houses, sugar sources, resting sites and larval habitat sites was developed. Anopheles gambiae behaviour rules were attributed at each step of the IBM: resting, host seeking, sugar feeding and breeding. Each step represented one second of time, and each simulation was set to run for 60 days and repeated 50 times. Scenarios of different densities and spatial distributions of sugar sources and outdoor resting sites were simulated and compared. Results: When the number of natural sugar sources was increased from 0 to 100 while the number of resting sites was held constant, mean daily survival rate increased from 2.5% to 85.1% for males and from 2.5% to 94.5% for females, mean human biting rate increased from 0 to 0.94 bites per human per day, and mean daily abundance increased from 1 to 477 for males and from 1 to 1,428 for females. When the number of outdoor resting sites was increased from 0 to 50 while the number of sugar sources was held constant, mean daily survival rate increased from 77.3% to 84.3% for males and from 86.7% to 93.9% for females, mean human biting rate increased from 0 to 0.52 bites per human per day, and mean daily abundance increased from 62 to 349 for males and from 257 to 1120 for females. All increases were significant (P < 0.01). Survival was greater when sugar sources were randomly distributed in the whole village compared to clustering around outdoor resting sites or houses. Conclusions: Increases in densities of sugar sources or outdoor resting sites significantly increase the survival and human biting rates of An. gambiae mosquitoes. Survival of An. gambiae is more supported by random distribution of sugar sources than clustering of sugar sources around resting sites or houses. Density and spatial distribution of natural sugar sources and outdoor resting sites modulate vector populations and human biting rates, and thus malaria parasite transmission.

  7. A spatial individual-based model predicting a great impact of copious sugar sources and resting sites on survival of Anopheles gambiae and malaria parasite transmission.

    PubMed

    Zhu, Lin; Qualls, Whitney A; Marshall, John M; Arheart, Kris L; DeAngelis, Donald L; McManus, John W; Traore, Sekou F; Doumbia, Seydou; Schlein, Yosef; Müller, Günter C; Beier, John C

    2015-02-05

    Agent-based modelling (ABM) has been used to simulate mosquito life cycles and to evaluate vector control applications. However, most models lack sugar-feeding and resting behaviours or are based on mathematical equations lacking individual level randomness and spatial components of mosquito life. Here, a spatial individual-based model (IBM) incorporating sugar-feeding and resting behaviours of the malaria vector Anopheles gambiae was developed to estimate the impact of environmental sugar sources and resting sites on survival and biting behaviour. A spatial IBM containing An. gambiae mosquitoes and humans, as well as the village environment of houses, sugar sources, resting sites and larval habitat sites was developed. Anopheles gambiae behaviour rules were attributed at each step of the IBM: resting, host seeking, sugar feeding and breeding. Each step represented one second of time, and each simulation was set to run for 60 days and repeated 50 times. Scenarios of different densities and spatial distributions of sugar sources and outdoor resting sites were simulated and compared. When the number of natural sugar sources was increased from 0 to 100 while the number of resting sites was held constant, mean daily survival rate increased from 2.5% to 85.1% for males and from 2.5% to 94.5% for females, mean human biting rate increased from 0 to 0.94 bites per human per day, and mean daily abundance increased from 1 to 477 for males and from 1 to 1,428 for females. When the number of outdoor resting sites was increased from 0 to 50 while the number of sugar sources was held constant, mean daily survival rate increased from 77.3% to 84.3% for males and from 86.7% to 93.9% for females, mean human biting rate increased from 0 to 0.52 bites per human per day, and mean daily abundance increased from 62 to 349 for males and from 257 to 1120 for females. All increases were significant (P < 0.01). Survival was greater when sugar sources were randomly distributed in the whole village compared to clustering around outdoor resting sites or houses. Increases in densities of sugar sources or outdoor resting sites significantly increase the survival and human biting rates of An. gambiae mosquitoes. Survival of An. gambiae is more supported by random distribution of sugar sources than clustering of sugar sources around resting sites or houses. Density and spatial distribution of natural sugar sources and outdoor resting sites modulate vector populations and human biting rates, and thus malaria parasite transmission.
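
    A hedged sketch of what such a per-second behavioural update might look like in code is given below; the states, movement rules and probabilities are invented placeholders rather than the published parameterization:

      import random

      SUGAR_SITES = [(12.0, 40.0), (55.0, 18.0)]   # assumed village coordinates

      class Mosquito:
          def __init__(self, x, y):
              self.x, self.y = x, y
              self.energy = random.uniform(0.2, 1.0)
              self.state = "resting"

          def step(self):
              """One one-second time step: pick a behaviour, move, spend energy."""
              if self.energy < 0.3:
                  self.state = "sugar_feeding"
                  tx, ty = min(SUGAR_SITES, key=lambda s: abs(s[0] - self.x) + abs(s[1] - self.y))
                  self.x += 0.1 * (tx - self.x)
                  self.y += 0.1 * (ty - self.y)
                  if abs(self.x - tx) + abs(self.y - ty) < 1.0:
                      self.energy = 1.0            # refuel at the sugar source
              elif self.state == "resting" and random.random() < 0.001:
                  self.state = "host_seeking"      # occasionally leave the resting site
              else:
                  self.x += random.uniform(-0.5, 0.5)
                  self.y += random.uniform(-0.5, 0.5)
              self.energy -= 0.0001                # metabolic cost; death at zero energy
              return self.energy > 0.0

      population = [Mosquito(random.uniform(0, 60), random.uniform(0, 60)) for _ in range(100)]
      for _ in range(3600):                        # one simulated hour at 1-s steps
          population = [m for m in population if m.step()]
      print("survivors after one hour:", len(population))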

  8. A Shock-Refracted Acoustic Wave Model for Screech Amplitude in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Kandula, Max

    2007-01-01

    A physical model is proposed for the estimation of the screech amplitude in underexpanded supersonic jets. The model is based on the hypothesis that the interaction of a plane acoustic wave with stationary shock waves provides amplification of the transmitted acoustic wave upon traversing the shock. Powell's discrete source model for screech incorporating a stationary array of acoustic monopoles is extended to accommodate variable source strength. The proposed model reveals that the acoustic sources are of increasing strength with downstream distance. It is shown that the screech amplitude increases with the fully expanded jet Mach number. Comparisons of predicted screech amplitude with available test data show satisfactory agreement. The effect of variable source strength on the directivity of the fundamental (first harmonic, lowest frequency mode) and the second harmonic (overtone) is found to be unimportant with regard to the principal lobe (main or major lobe) of considerable relative strength, and is appreciable only in the secondary or minor lobes (of relatively weaker strength).

  9. A Shock-Refracted Acoustic Wave Model for the Prediction of Screech Amplitude in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Kandula, Max

    2007-01-01

    A physical model is proposed for the estimation of the screech amplitude in underexpanded supersonic jets. The model is based on the hypothesis that the interaction of a plane acoustic wave with stationary shock waves provides amplification of the transmitted acoustic wave upon traversing the shock. Powell's discrete source model for screech incorporating a stationary array of acoustic monopoles is extended to accommodate variable source strength. The proposed model reveals that the acoustic sources are of increasing strength with downstream distance. It is shown that the screech amplitude increases with the fully expanded jet Mach number. Comparisons of predicted screech amplitude with available test data show satisfactory agreement. The effect of variable source strength on the directivity of the fundamental (first harmonic, lowest frequency mode) and the second harmonic (overtone) is found to be unimportant with regard to the principal lobe (main or major lobe) of considerable relative strength, and is appreciable only in the secondary or minor lobes (of relatively weaker strength).

  10. Acoustic emission based damage localization in composites structures using Bayesian identification

    NASA Astrophysics Data System (ADS)

    Kundu, A.; Eaton, M. J.; Al-Jumali, S.; Sikdar, S.; Pullin, R.

    2017-05-01

    Acoustic emission based damage detection in composite structures relies on detecting ultra-high-frequency packets of acoustic waves emitted from damage sources (such as fibre breakage and fatigue fracture, amongst others) with a network of distributed sensors. This non-destructive monitoring scheme requires solving an inverse problem in which the measured signals are linked back to the location of the source, which in turn enables rapid deployment of mitigative measures. The significant amount of uncertainty associated with the operating conditions and measurements makes the problem of damage identification quite challenging. The uncertainties stem from the fact that the measured signals are affected by irregular geometries, manufacturing imprecision, imperfect boundary conditions and existing damage or structural degradation, amongst others. This work aims to tackle these uncertainties within a framework of automated probabilistic damage detection. The method trains a probabilistic model of the parametrized input and output of the acoustic emission system with experimental data to give probabilistic descriptors of damage locations. A response surface modelling the acoustic emission as a function of parametrized damage signals collected from sensors is calibrated with a training dataset using Bayesian inference, and is then used to deduce damage locations in the online monitoring phase. During online monitoring, the spatially correlated time data are utilized in conjunction with the calibrated acoustic emission model to infer the probabilistic description of the acoustic emission source within a hierarchical Bayesian inference framework. The methodology is tested on a composite structure consisting of a carbon fibre panel with stiffeners, with damage source behaviour experimentally simulated using standard H-N sources. The methodology presented in this study is applicable in its current form to structural damage detection under varying operational loads, which will be investigated in future studies.

  11. Automation for System Safety Analysis

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  12. Explanation of temporal clustering of tsunami sources using the epidemic-type aftershock sequence model

    USGS Publications Warehouse

    Geist, Eric L.

    2014-01-01

    Temporal clustering of tsunami sources is examined in terms of a branching process model. It previously was observed that there are more short interevent times between consecutive tsunami sources than expected from a stationary Poisson process. The epidemic‐type aftershock sequence (ETAS) branching process model is fitted to tsunami catalog events, using the earthquake magnitude of the causative event from the Centennial and Global Centroid Moment Tensor (CMT) catalogs and tsunami sizes above a completeness level as a mark to indicate that a tsunami was generated. The ETAS parameters are estimated using the maximum‐likelihood method. The interevent distribution associated with the ETAS model provides a better fit to the data than the Poisson model or other temporal clustering models. When tsunamigenic conditions (magnitude threshold, submarine location, dip‐slip mechanism) are applied to the Global CMT catalog, ETAS parameters are obtained that are consistent with those estimated from the tsunami catalog. In particular, the dip‐slip condition appears to result in a near zero magnitude effect for triggered tsunami sources. The overall consistency between results from the tsunami catalog and that from the earthquake catalog under tsunamigenic conditions indicates that ETAS models based on seismicity can provide the structure for understanding patterns of tsunami source occurrence. The fractional rate of triggered tsunami sources on a global basis is approximately 14%.
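
    For readers unfamiliar with ETAS, its conditional intensity is a background rate plus a sum of Omori-law terms scaled by the magnitudes of past events. A minimal sketch follows; the parameter values are illustrative and are not the fitted values from this study:

      import numpy as np

      def etas_intensity(t, event_times, event_mags, mu=0.05, K=0.02,
                         alpha=0.8, c=0.01, p=1.1, m0=7.0):
          """lambda(t) = mu + sum over past events of
          K * exp(alpha * (m_i - m0)) / (t - t_i + c)**p."""
          past = event_times < t
          dt = t - event_times[past]
          return mu + np.sum(K * np.exp(alpha * (event_mags[past] - m0)) / (dt + c) ** p)

      times = np.array([0.0, 0.8, 5.0])          # days; toy catalog
      mags = np.array([7.5, 7.1, 8.0])
      print(etas_intensity(6.0, times, mags))    # intensity one day after the M 8.0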

  13. Development of a plume-in-grid model for industrial point and volume sources: application to power plant and refinery sources in the Paris region

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Seigneur, C.; Duclaux, O.

    2014-04-01

    Plume-in-grid (PinG) models incorporating a host Eulerian model and a subgrid-scale model (usually a Gaussian plume or puff model) have been used for the simulations of stack emissions (e.g., fossil fuel-fired power plants and cement plants) for gaseous and particulate species such as nitrogen oxides (NOx), sulfur dioxide (SO2), particulate matter (PM) and mercury (Hg). Here, we describe the extension of a PinG model to study the impact of an oil refinery where volatile organic compound (VOC) emissions can be important. The model is based on a reactive PinG model for ozone (O3), which incorporates a three-dimensional (3-D) Eulerian model and a Gaussian puff model. The model is extended to treat PM, with treatments of aerosol chemistry, particle size distribution, and the formation of secondary aerosols, which are consistent in both the 3-D Eulerian host model and the Gaussian puff model. Furthermore, the PinG model is extended to include the treatment of volume sources to simulate fugitive VOC emissions. The new PinG model is evaluated over Greater Paris during July 2009. Model performance is satisfactory for O3, PM2.5 and most PM2.5 components. Two industrial sources, a coal-fired power plant and an oil refinery, are simulated with the PinG model. The characteristics of the sources (stack height and diameter, exhaust temperature and velocity) govern the surface concentrations of primary pollutants (NOx, SO2 and VOC). O3 concentrations are impacted differently near the power plant than near the refinery, because of the presence of VOC emissions at the latter. The formation of sulfate is influenced by both the dispersion of SO2 and the oxidant concentration; however, the former tends to dominate in the simulations presented here. The impact of PinG modeling on the formation of secondary organic aerosol (SOA) is small and results mostly from the effect of different oxidant concentrations on biogenic SOA formation. The investigation of the criteria for injecting plumes into the host model (fixed travel time and/or puff size) shows that a size-based criterion is recommended to treat the formation of secondary aerosols (sulfate, nitrate, and ammonium), in particular, farther downwind of the sources (beyond about 15 km). The impacts of PinG modeling are less significant in a simulation with a coarse grid size (10 km) than with a fine grid size (2 km), because the concentrations of the species emitted from the PinG sources are relatively less important compared to background concentrations when injected into the host model with a coarser grid size.

  14. Development of a plume-in-grid model for industrial point and volume sources: application to power plant and refinery sources in the Paris region

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Seigneur, C.; Duclaux, O.

    2013-11-01

    Plume-in-grid (PinG) models incorporating a host Eulerian model and a subgrid-scale model (usually a Gaussian plume or puff model) have been used for the simulations of stack emissions (e.g., fossil fuel-fired power plants and cement plants) for gaseous and particulate species such as nitrogen oxides (NOx), sulfur dioxide (SO2), particulate matter (PM) and mercury (Hg). Here, we describe the extension of a PinG model to study the impact of an oil refinery where volatile organic compound (VOC) emissions can be important. The model is based on a reactive PinG model for ozone (O3), which incorporates a three-dimensional (3-D) Eulerian model and a Gaussian puff model. The model is extended to treat PM, with treatments of aerosol chemistry, particle size distribution, and the formation of secondary aerosols, which are consistent in both the 3-D Eulerian host model and the Gaussian puff model. Furthermore, the PinG model is extended to include the treatment of volume sources to simulate fugitive VOC emissions. The new PinG model is evaluated over Greater Paris during July 2009. Model performance is satisfactory for O3, PM2.5 and most PM2.5 components. Two industrial sources, a coal-fired power plant and an oil refinery, are simulated with the PinG model. The characteristics of the sources (stack height and diameter, exhaust temperature and velocity) govern the surface concentrations of primary pollutants (NOx, SO2 and VOC). O3 concentrations are impacted differently near the power plant than near the refinery, because of the presence of VOC emissions at the latter. The formation of sulfate is influenced by both the dispersion of SO2 and the oxidant concentration; however, the former tends to dominate in the simulations presented here. The impact of PinG modeling on the formation of secondary organic aerosols (SOA) is small and results mostly from the effect of different oxidant concentrations on biogenic SOA formation. The investigation of the criteria for injecting plumes into the host model (fixed travel time and/or puff size) shows that a size-based criterion is recommended to treat the formation of secondary aerosols (sulfate, nitrate, and ammonium), in particular, farther downwind of the sources (from about 15 km). The impacts of the PinG modeling are less significant in a simulation with a coarse grid size (10 km) than with a fine grid size (2 km), because the concentrations of the species emitted from the PinG sources are relatively less important compared to background concentrations when injected into the host model.

  15. Quantifying sources of black carbon in western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    DOE PAGES

    Zhang, Rudong; Wang, Hailong; Hegg, D. A.; ...

    2015-11-18

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source–receptor relationships for atmospheric BC and its deposition to snow over western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over northwestern USA and western Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based positive matrix factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.

  16. Seismic Source Scaling and Characteristics of Six North Korean Underground Nuclear Explosions

    NASA Astrophysics Data System (ADS)

    Park, J.; Stump, B. W.; Che, I. Y.; Hayward, C.

    2017-12-01

    We estimate the range of yields and source depths for the six North Korean underground nuclear explosions in 2006, 2009, 2013, 2016 (January and September), and 2017, based on regional seismic observations in South Korea and China. Seismic data used in this study are from three seismo-acoustic stations, BRDAR, CHNAR, and KSGAR, cooperatively operated by SMU and KIGAM, the KSRS seismic array operated by the Comprehensive Nuclear-Test-Ban Treaty Organization, and MDJ, a station in the Global Seismographic Network. We calculate spectral ratios for event pairs using seismograms from the six explosions observed along the same paths and at the same receivers. These relative seismic source scaling spectra for Pn, Pg, Sn, and surface wave windows provide the basis for a grid search that estimates source yield and depth for each event based on both the modified Mueller and Murphy (1971; MM71) and Denny and Johnson (1991; DJ91) source models. The grid search identifies the best-fit empirical spectral ratios subject to the source models by minimizing the goodness-of-fit (GOF) measure in the frequency range of 0.5-15 Hz. For all cases, the DJ91 model produces higher ratios of depth and yield than MM71. These initial results include significant trade-offs between depth and yield in all cases. In order to better take the effect of source depth into account, a modified grid search was implemented that includes the propagation effects for different source depths by incorporating reflectivity Green's functions in the grid search procedure. This revision reduces the trade-offs between depth and yield, results in better model fits at frequencies as high as 15 Hz, and yields GOF values smaller than those obtained when the depth effects on the Green's functions were ignored. The depth and yield estimates for all six explosions using this new procedure will be presented.
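
    The grid-search logic can be sketched schematically: scan candidate (yield, depth) pairs, form a model spectral ratio against a fixed reference event, and keep the pair that minimizes the misfit to the observed ratio. The source-spectrum function below is a generic placeholder, not the MM71 or DJ91 model:

      import numpy as np

      freqs = np.logspace(np.log10(0.5), np.log10(15.0), 60)   # 0.5-15 Hz band

      def model_spectrum(f, yield_kt, depth_m):
          """Placeholder explosion source spectrum with a yield/depth-scaled corner."""
          fc = 10.0 * (depth_m ** 0.25) / (yield_kt ** (1.0 / 3.0))
          return yield_kt / (1.0 + (f / fc) ** 2)

      def grid_search(obs_ratio, yields, depths):
          """Return the (yield, depth) pair minimizing the log spectral-ratio misfit."""
          best, best_gof = None, np.inf
          for w in yields:
              for h in depths:
                  ratio = model_spectrum(freqs, w, h) / model_spectrum(freqs, 1.0, 500.0)
                  gof = np.mean((np.log(ratio) - np.log(obs_ratio)) ** 2)
                  if gof < best_gof:
                      best, best_gof = (w, h), gof
          return best, best_gof

      obs = model_spectrum(freqs, 8.0, 700.0) / model_spectrum(freqs, 1.0, 500.0)
      print(grid_search(obs, yields=np.linspace(1, 20, 39), depths=np.linspace(100, 1000, 19)))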

  17. SPRAYTRAN USER'S GUIDE: A GIS-BASED ATMOSPHERIC SPRAY DROPLET DISPERSION MODELING SYSTEM

    EPA Science Inventory

    The offsite drift of pesticide from spray operations is an ongoing source of concern. The SPRAY TRANsport (SPRAYTRAN) system, documented in this report, incorporates the near-field spray application model, AGDISP, into a meso-scale atmospheric transport model. The AGDISP model ...

  18. Emissions and dispersion modeling system (EDMS). Its development and application at airports and airbases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moss, M.T.; Segal, H.M.

    1994-06-01

    A new complex source microcomputer model has been developed for use at civil airports and Air Force bases. This paper describes both the key features of this model and its application in evaluating the air quality impact of new construction projects at three airports: one in the United States and two in Canada. The single EDMS model replaces the numerous models previously required to assess the air quality impact of pollution sources at airports. EDMS also employs a commercial data base to reduce the time and manpower required to accurately assess and document the air quality impact of airfield operations. On July 20, 1993, the U.S. Environmental Protection Agency (EPA) issued the final rule (Federal Register, 7/20/93, page 38816) to add new models to the Guideline on Air Quality Models. At that time EDMS was incorporated into the Guideline as an Appendix A model. 12 refs., 4 figs., 1 tab.

  19. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

    A unified framework is proposed for analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer localizes the sources and a Tikhonov regularization algorithm extracts the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using a pressure matching technique. To establish the room response model required in the pressure matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
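
    A hedged sketch of the Tikhonov-regularized pressure-matching solve is given below: loudspeaker weights q minimize ||Gq - p||^2 + lam ||q||^2, where G holds the room transfer functions from the loudspeakers to control points. The transfer functions and target pressures here are random placeholders, not measured room responses:

      import numpy as np

      rng = np.random.default_rng(5)
      n_mics, n_srcs = 24, 32
      # Placeholder complex transfer matrix and target pressure vector.
      G = rng.normal(size=(n_mics, n_srcs)) + 1j * rng.normal(size=(n_mics, n_srcs))
      p = rng.normal(size=n_mics) + 1j * rng.normal(size=n_mics)

      lam = 1e-2                                   # regularization parameter (assumed)
      q = np.linalg.solve(G.conj().T @ G + lam * np.eye(n_srcs), G.conj().T @ p)
      print("relative reproduction error:", np.linalg.norm(G @ q - p) / np.linalg.norm(p))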

  20. A Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    NASA Astrophysics Data System (ADS)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite imagery such as Landsat 8 OLI. However, as more and more image data become available from various sensors, the accurate sensor calibration parameters and atmospheric conditions needed by a physics-based topographic correction model are sometimes unavailable. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not require accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be derived directly from DN data. We tested and verified the model with image data from the Chinese satellites HJ and GF. The results show that, after correction, the correlation factor was reduced by almost 85% for the near-infrared bands and the overall classification accuracy increased by 14% for HJ. The reflectance difference between slopes facing toward the sun and slopes facing away from the sun was also reduced after correction.
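
    Because the abstract does not spell out the model's equations, the sketch below instead shows a widely used semi-empirical correction of the same family (the C-correction), in which the terrain dependence of reflectance is estimated by a scene-wide linear fit and then removed; all data are synthetic:

      import numpy as np

      def c_correction(refl, cos_i, cos_sza):
          """refl: band reflectance; cos_i: cosine of the local solar incidence
          angle on the slope; cos_sza: cosine of the solar zenith angle (flat)."""
          # Fit refl = a + b * cos_i across the scene, then c = a / b.
          b, a = np.polyfit(cos_i, refl, 1)
          c = a / b
          return refl * (cos_sza + c) / (cos_i + c)

      rng = np.random.default_rng(0)
      cos_i = rng.uniform(0.2, 1.0, 1000)
      refl = 0.05 + 0.3 * cos_i + rng.normal(0, 0.01, 1000)   # synthetic NIR band
      corrected = c_correction(refl, cos_i, cos_sza=0.9)
      print(np.corrcoef(cos_i, corrected)[0, 1])   # terrain correlation drops toward zero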

  1. Reciprocity in computer-human interaction: source-based, norm-based, and affect-based explanations.

    PubMed

    Lee, Seungcheol Austin; Liang, Yuhua Jake

    2015-04-01

    Individuals often apply social rules when they interact with computers, and this is known as the Computers Are Social Actors (CASA) effect. Following previous work, one approach to understanding the mechanism responsible for CASA is to utilize computer agents and have the agents attempt to gain human compliance (e.g., completing a pattern recognition task). The current study focuses on three key factors frequently cited to influence traditional notions of compliance: evaluations toward the source (competence and warmth), normative influence (reciprocity), and affective influence (mood). Structural equation modeling assessed the effects of these factors on human compliance with the computer agent's request. The final model shows that norm-based influence (reciprocity) increased the likelihood of compliance, while evaluations toward the computer agent did not significantly influence compliance.

  2. Seasonal Phosphorus Sources and Loads to Upper Klamath Lake, Oregon, as Determined by a Dynamic SPARROW Model

    NASA Astrophysics Data System (ADS)

    Saleh, D.; Domagalski, J. L.; Smith, R. A.

    2016-12-01

    The SPARROW (SPAtially-Referenced Regression On Watershed Attributes) model, developed by the U.S. Geological Survey, has been used to identify and quantify the sources of nitrogen and phosphorus in watersheds and to predict their fluxes and concentrations at specified locations downstream. Existing SPARROW models use a hybrid statistical approach to describe an annual average ("steady-state") relationship between sources and stream conditions based on long-term water quality monitoring data and spatially-referenced explanatory information. Although these annual models are useful for some management purposes, many water quality issues stem from intra- and inter-annual changes in constituent sources, hydrologic forcing, or other environmental conditions, which cause a lag between watershed inputs and stream water quality. We are developing a seasonal dynamic SPARROW model of sources, fluxes, and yields of phosphorus for the watershed (approximately 9,700 square kilometers) draining to Upper Klamath Lake, Oregon. The lake is hyper-eutrophic and various options are being considered for water quality improvement. The model was calibrated with 11 years of water quality data (2000 to 2010) and simulates seasonal loads and yields for a total of 44 seasons. Phosphorus sources to the watershed include animal manure, farm fertilizer, discharges of treated wastewater, and natural sources (soil and streambed sediment). The model predicts that phosphorus delivery to the lake is strongly affected by intra- and inter-annual changes in precipitation and by temporary seasonal storage of phosphorus in the watershed. The model can be used to predict how different management actions for mitigating phosphorus sources might affect phosphorus loading to the lake, as well as the time required for any changes in loading to occur following implementation of the action.

  3. Genome-Scale, Constraint-Based Modeling of Nitrogen Oxide Fluxes during Coculture of Nitrosomonas europaea and Nitrobacter winogradskyi

    PubMed Central

    Giguere, Andrew T.; Murthy, Ganti S.; Bottomley, Peter J.; Sayavedra-Soto, Luis A.

    2018-01-01

    Nitrification, the aerobic oxidation of ammonia to nitrate via nitrite, emits nitrogen (N) oxide gases (NO, NO2, and N2O), which are potentially hazardous compounds that contribute to global warming. To better understand the dynamics of nitrification-derived N oxide production, we conducted culturing experiments and used an integrative genome-scale, constraint-based approach to model N oxide gas sources and sinks during complete nitrification in an aerobic coculture of two model nitrifying bacteria, the ammonia-oxidizing bacterium Nitrosomonas europaea and the nitrite-oxidizing bacterium Nitrobacter winogradskyi. The model includes biotic genome-scale metabolic models (iFC578 and iFC579) for each nitrifier and abiotic N oxide reactions. Modeling suggested both biotic and abiotic reactions are important sources and sinks of N oxides, particularly under microaerobic conditions predicted to occur in coculture. In particular, integrative modeling suggested that previous models might have underestimated gross NO production during nitrification due to not taking into account its rapid oxidation in both aqueous and gas phases. The integrative model may be found at https://github.com/chaplenf/microBiome-v2.1. IMPORTANCE: Modern agriculture is sustained by application of inorganic nitrogen (N) fertilizer in the form of ammonium (NH4+). Up to 60% of NH4+-based fertilizer can be lost through leaching of nitrifier-derived nitrate (NO3−), and through the emission of N oxide gases (i.e., nitric oxide [NO], N dioxide [NO2], and nitrous oxide [N2O] gases), the latter being a potent greenhouse gas. Our approach to modeling of nitrification suggests that both biotic and abiotic mechanisms function as important sources and sinks of N oxides during microaerobic conditions and that previous models might have underestimated gross NO production during nitrification. PMID:29577088

  4. Genome-Scale, Constraint-Based Modeling of Nitrogen Oxide Fluxes during Coculture of Nitrosomonas europaea and Nitrobacter winogradskyi.

    PubMed

    Mellbye, Brett L; Giguere, Andrew T; Murthy, Ganti S; Bottomley, Peter J; Sayavedra-Soto, Luis A; Chaplen, Frank W R

    2018-01-01

    Nitrification, the aerobic oxidation of ammonia to nitrate via nitrite, emits nitrogen (N) oxide gases (NO, NO2, and N2O), which are potentially hazardous compounds that contribute to global warming. To better understand the dynamics of nitrification-derived N oxide production, we conducted culturing experiments and used an integrative genome-scale, constraint-based approach to model N oxide gas sources and sinks during complete nitrification in an aerobic coculture of two model nitrifying bacteria, the ammonia-oxidizing bacterium Nitrosomonas europaea and the nitrite-oxidizing bacterium Nitrobacter winogradskyi. The model includes biotic genome-scale metabolic models (iFC578 and iFC579) for each nitrifier and abiotic N oxide reactions. Modeling suggested both biotic and abiotic reactions are important sources and sinks of N oxides, particularly under microaerobic conditions predicted to occur in coculture. In particular, integrative modeling suggested that previous models might have underestimated gross NO production during nitrification due to not taking into account its rapid oxidation in both aqueous and gas phases. The integrative model may be found at https://github.com/chaplenf/microBiome-v2.1. IMPORTANCE: Modern agriculture is sustained by application of inorganic nitrogen (N) fertilizer in the form of ammonium (NH4+). Up to 60% of NH4+-based fertilizer can be lost through leaching of nitrifier-derived nitrate (NO3-), and through the emission of N oxide gases (i.e., nitric oxide [NO], N dioxide [NO2], and nitrous oxide [N2O] gases), the latter being a potent greenhouse gas. Our approach to modeling of nitrification suggests that both biotic and abiotic mechanisms function as important sources and sinks of N oxides during microaerobic conditions and that previous models might have underestimated gross NO production during nitrification.
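
    The constraint-based core of such genome-scale models is a flux balance analysis (FBA) linear program: maximize a target flux subject to steady-state mass balance S·v = 0 and flux bounds. The toy three-reaction network below is invented for illustration and is not taken from iFC578 or iFC579:

      import numpy as np
      from scipy.optimize import linprog

      # Reactions: R1 uptake (-> A), R2 conversion (A -> B), R3 biomass (B ->)
      S = np.array([[ 1, -1,  0],     # metabolite A mass balance
                    [ 0,  1, -1]])    # metabolite B mass balance
      bounds = [(0, 10), (0, 8), (0, None)]   # uptake limit 10, enzyme cap 8 (assumed)

      # linprog minimizes, so negate the biomass flux to maximize it.
      res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
      print("optimal fluxes v =", res.x)       # expect [8, 8, 8]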

  5. Delineating sources of groundwater recharge in an arsenic-affected Holocene aquifer in Cambodia using stable isotope-based mixing models

    NASA Astrophysics Data System (ADS)

    Richards, Laura A.; Magnone, Daniel; Boyce, Adrian J.; Casanueva-Marenco, Maria J.; van Dongen, Bart E.; Ballentine, Christopher J.; Polya, David A.

    2018-02-01

    Chronic exposure to arsenic (As) through the consumption of contaminated groundwaters is a major threat to public health in South and Southeast Asia. The source of As-affected groundwaters is important to the fundamental understanding of the controls on As mobilization and subsequent transport throughout shallow aquifers. Using the stable isotopes of hydrogen and oxygen, the source of groundwater and the interactions between various water bodies were investigated in Cambodia's Kandal Province, an area which is heavily affected by As and typical of many circum-Himalayan shallow aquifers. Two-point mixing models based on δD and δ18O allowed the relative extent of evaporation of groundwater sources to be estimated and allowed various water bodies to be broadly distinguished within the aquifer system. Model limitations are discussed, including the spatial and temporal variation in end member compositions. The conservative tracer Cl/Br is used to further discriminate between groundwater bodies. The stable isotopic signatures of groundwaters containing high As and/or high dissolved organic carbon plot both near the local meteoric water line and near more evaporative lines. The varying degrees of evaporation of high As groundwater sources are indicative of differing recharge contributions (and thus indirectly inferred associated organic matter contributions). The presence of high As groundwaters with recharge derived from both local precipitation and relatively evaporated surface water sources, such as ponds or flooded wetlands, are consistent with (but do not provide direct evidence for) models of a potential dual role of surface-derived and sedimentary organic matter in As mobilization.
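
    The two-point mixing calculation itself is simple; a minimal sketch follows, in which both end-member isotope values are invented placeholders rather than the study's measured end members:

      # Two-end-member mixing: the fraction of end member A in a sample follows
      # from a linear mixing model on a stable isotope tracer,
      # f_A = (d_sample - d_B) / (d_A - d_B). Values below are assumptions.
      d18O_A = -8.5      # local precipitation end member (assumed, per mil)
      d18O_B = -4.0      # evaporated surface water end member (assumed, per mil)

      def fraction_A(d18O_sample):
          return (d18O_sample - d18O_B) / (d18O_A - d18O_B)

      for s in (-7.6, -5.2):
          print(f"sample {s} per mil -> {100 * fraction_A(s):.0f}% precipitation-derived")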

  6. Assessment of spatial distribution of soil heavy metals using ANN-GA, MSLR and satellite imagery.

    PubMed

    Naderi, Arman; Delavar, Mohammad Amir; Kaboudin, Babak; Askari, Mohammad Sadegh

    2017-05-01

    This study aims to assess and compare heavy metal distribution models developed using stepwise multiple linear regression (MSLR) and a neural network-genetic algorithm model (ANN-GA) based on satellite imagery. The sources of the heavy metals were also explored using the local Moran index. Soil samples (n = 300) were collected on a grid, and pH, organic matter, clay and iron oxide contents, and cadmium (Cd), lead (Pb) and zinc (Zn) concentrations were determined for each sample. Visible/near-infrared reflectance (VNIR) within the electromagnetic ranges of satellite imagery was applied to estimate heavy metal concentrations in the soil using the MSLR and ANN-GA models. The models were evaluated; the ANN-GA model demonstrated higher accuracy, and the autocorrelation results showed more significant clusters of heavy metals around the industrial zone. Higher concentrations of Cd, Pb and Zn were noted under industrial lands and irrigation farming in comparison to barren land and dryland farming. Accumulation of industrial wastes in roads and streams was identified as the main source of pollution, and the concentration of soil heavy metals decreased with increasing distance from these sources. In comparison to MSLR, ANN-GA provided a more accurate indirect assessment of heavy metal concentrations in highly polluted soils. The clustering analysis provided reliable information about the spatial distribution of soil heavy metals and their sources.
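
    The source-identification step can be illustrated as follows: the local Moran index of sample i is I_i = z_i * sum_j w_ij z_j on standardized concentrations, with positive values flagging clusters of similar values. The coordinates, weights and concentrations below are made up for illustration:

      import numpy as np

      def local_moran(values, coords):
          """Local Moran index with row-standardized inverse-distance weights."""
          z = (values - values.mean()) / values.std()
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
          w = np.where(d > 0, 1.0 / d, 0.0)
          w /= w.sum(axis=1, keepdims=True)
          return z * (w @ z)            # positive -> local cluster of like values

      rng = np.random.default_rng(1)
      coords = rng.uniform(0, 100, size=(50, 2))
      pb = rng.lognormal(3.0, 0.3, 50)  # synthetic Pb concentrations
      hot = np.linalg.norm(coords - np.array([20.0, 20.0]), axis=1) < 15
      pb[hot] *= 3                      # elevated Pb near an assumed industrial source
      print(np.round(local_moran(pb, coords)[hot], 2))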

  7. Kinematic source inversions of teleseismic data based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.

    2014-12-01

    One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, those inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO that was developed at ICES (UT Austin). The approach has advantages with respect to deterministic inversion approaches, as it provides not only a single (non-unique) solution but also the corresponding uncertainty bounds. Those uncertainty bounds help to qualitatively and quantitatively judge how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only teleseismically recorded body waves, but future developments may lead us towards joint inversion schemes. After giving an insight into the inversion scheme itself (based on delayed rejection adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate teleseismic data, add, for example, different levels of noise and/or change the fault plane parameterization, and then apply our inversion scheme in an attempt to extract the (known) kinematic rupture model. We conclude with an example in which we invert real teleseismic data of a recent large earthquake and compare the results with deterministically derived kinematic source models provided by other research groups.
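
    To make the Bayesian machinery concrete, the sketch below runs a bare-bones random-walk Metropolis sampler on a toy linear slip-inversion problem; DRAM adds delayed rejection and proposal adaptation on top of this basic loop, and the forward operator and data are synthetic stand-ins:

      import numpy as np

      rng = np.random.default_rng(2)
      true_slip = np.array([1.5, 0.7])                 # "true" slip on two patches
      G = rng.normal(size=(20, 2))                     # toy linear forward operator
      data = G @ true_slip + rng.normal(0, 0.1, 20)    # noisy synthetic seismograms

      def log_post(m, sigma=0.1):
          """Gaussian likelihood with a uniform prior box on the slip values."""
          if np.any(m < 0) or np.any(m > 5):
              return -np.inf
          r = data - G @ m
          return -0.5 * np.sum(r ** 2) / sigma ** 2

      m = np.array([1.0, 1.0])
      lp = log_post(m)
      chain = []
      for _ in range(20000):
          proposal = m + rng.normal(0, 0.05, 2)        # random-walk step
          lp_prop = log_post(proposal)
          if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
              m, lp = proposal, lp_prop
          chain.append(m)
      chain = np.array(chain[5000:])                   # discard burn-in
      print("posterior mean:", chain.mean(axis=0), "std:", chain.std(axis=0))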

  8. Analysis and Simulation of Far-Field Seismic Data from the Source Physics Experiment

    DTIC Science & Technology

    2012-09-01

    Pitarka, Arben; Mellors, Robert J.; Rodgers, Arthur J.; Sean ...

    The Source Physics Experiment at the Nevada National Security Site (NNSS) provides new data for investigating the excitation and propagation of seismic waves generated by buried explosions. A particular ... seismic model. The 3D seismic model includes surface topography. It is based on regional geological data, with material properties constrained by shallow ...

  9. Broadband Fan Noise Generated by Small Scale Turbulence

    NASA Technical Reports Server (NTRS)

    Glegg, Stewart A. L.

    1998-01-01

    This report describes the development of prediction methods for broadband fan noise from aircraft engines. First, experimental evidence of the most important source mechanisms is reviewed. It is found that there are a number of competing source mechanisms involved and that there is no single dominant source to which noise control procedures can be applied. Theoretical models are then developed for: (1) ducted rotors and stator vanes interacting with duct wall boundary layers, (2) ducted rotor self noise, and (3) stator vanes operating in the wakes of rotors. All the turbulence parameters required for these models are based on measured quantities. Finally the theoretical models are used to predict measured fan noise levels with some success.

  10. Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry

    NASA Astrophysics Data System (ADS)

    Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki

    2015-08-01

    In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with their distribution similar to those currently observed with VERA and the VLBA (Very Long Baseline Array). We apply Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameters of the models. We show that one can successfully recover the initial model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available for 500 sources, the expected accuracy of R0 and Θ0 is ~1% or better, and parameters related to the spiral structure can be constrained to within 10% or better. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and/or the inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that BIC can be used to discriminate between different dynamical models of the Galaxy.
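
    As a reminder of how BIC-based model selection works: BIC = k ln(n) - 2 ln(L_max), with the lower value preferred. The sketch below compares two Gaussian-residual placeholder "models"; nothing in it represents an actual dynamical model of the Galaxy:

      import numpy as np

      def bic(residuals, k):
          """BIC for a Gaussian residual model with MLE noise variance."""
          n = residuals.size
          sigma2 = np.mean(residuals ** 2)
          loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
          return k * np.log(n) - 2 * loglik

      rng = np.random.default_rng(3)
      res_spiral = rng.normal(0, 1.0, 500)   # better fit, more parameters (assumed)
      res_axisym = rng.normal(0, 1.3, 500)   # worse fit, fewer parameters (assumed)
      print("spiral model BIC:", round(bic(res_spiral, k=8), 1))
      print("axisymmetric BIC:", round(bic(res_axisym, k=3), 1))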

  11. FRED (a Framework for Reconstructing Epidemic Dynamics): an open-source software system for modeling infectious diseases and control strategies using census-based populations.

    PubMed

    Grefenstette, John J; Brown, Shawn T; Rosenfeld, Roni; DePasse, Jay; Stone, Nathan T B; Cooley, Phillip C; Wheaton, William D; Fyshe, Alona; Galloway, David D; Sriram, Anuroop; Guclu, Hasan; Abraham, Thomas; Burke, Donald S

    2013-10-08

    Mathematical and computational models provide valuable tools that help public health planners to evaluate competing health interventions, especially for novel circumstances that cannot be examined through observational or controlled studies, such as pandemic influenza. The spread of diseases like influenza depends on the mixing patterns within the population, and these mixing patterns depend in part on local factors including the spatial distribution and age structure of the population, the distribution of size and composition of households, employment status and commuting patterns of adults, and the size and age structure of schools. Finally, public health planners must take into account the health behavior patterns of the population, patterns that often vary according to socioeconomic factors such as race, household income, and education levels. FRED (a Framework for Reconstructing Epidemic Dynamics) is a freely available open-source agent-based modeling system based closely on models used in previously published studies of pandemic influenza. This version of FRED uses open-access census-based synthetic populations that capture the demographic and geographic heterogeneities of the population, including realistic household, school, and workplace social networks. FRED epidemic models are currently available for every state and county in the United States, and for selected international locations. State and county public health planners can use FRED to explore the effects of possible influenza epidemics in specific geographic regions of interest and to help evaluate the effect of interventions such as vaccination programs and school closure policies. FRED is available under a free open source license in order to contribute to the development of better modeling tools and to encourage open discussion of modeling tools being used to evaluate public health policies. We also welcome participation by other researchers in the further development of FRED.

  12. Reflection processing of the large-N seismic data from the Source Physics Experiment (SPE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paschall, Olivia C.

    2016-07-18

    The purpose of the SPE is to develop a more physics-based model for nuclear explosion identification and to understand the development of S-waves from explosion sources, in order to enhance nuclear test ban treaty monitoring.

  13. Source effects on the simulation of the strong ground motion of the 2011 Lorca earthquake

    NASA Astrophysics Data System (ADS)

    Saraò, Angela; Moratto, Luca; Vuan, Alessandro; Mucciarelli, Marco; Jimenez, Maria Jose; Garcia Fernandez, Mariano

    2016-04-01

    On May 11, 2011 a moderate seismic event (Mw = 5.2) struck the city of Lorca (southeast Spain), causing nine casualties, a large number of injuries and damage to civil buildings. The largest PGA value (360 cm/s²) ever recorded in Spain was observed at the accelerometric station located in Lorca (LOR), and it was explained as due to source directivity rather than local site effects. In recent years, different source models, retrieved from inversions of geodetic or seismological data, or a combination of the two, have been published. To investigate the variability that equivalent source models of an average earthquake can introduce in the computation of strong motion, we calculated seismograms (up to 1 Hz) using an approach based on wavenumber integration and, as input, four different source models taken from the literature. The source models differ mainly in the slip distribution on the fault. Our results show that, as an effect of the different sources, the ground motion variability in terms of pseudo-spectral velocity (1 s) can reach one order of magnitude for near-source receivers or for sites influenced by the forward-directivity effect. Finally, we computed the strong motion at frequencies higher than 1 Hz using empirical Green's functions and the source model parameters that best reproduce the recorded shaking up to 1 Hz: the computed seismograms fit the signals recorded at the LOR station, as well as at the other stations close to the source, satisfactorily.

  14. Discontinuous model with semi analytical sheath interface for radio frequency plasma

    NASA Astrophysics Data System (ADS)

    Miyashita, Masaru

    2016-09-01

    Sumitomo Heavy Industries, Ltd. provides many products utilizing plasma. In this study, we focus on a radio frequency (RF) plasma source driven by an interior antenna. This plasma source is expected to provide high density and low metal contamination. However, sputtering of the antenna cover by high-energy ions accelerated by the sheath voltage remains problematic. We have developed a new model that can calculate the sheath voltage waveform in the RF plasma source within a realistic calculation time. The model is discontinuous: the electron fluid equations in the plasma are connected to the usual Poisson equation in the antenna cover and chamber through a semi-analytical sheath interface. We estimate the sputtering distribution based on the sheath voltage waveform calculated by this model, a sputtering yield model, and an ion energy distribution function (IEDF) model. The estimated sputtering distribution reproduces the trends of the experimental results.

  15. Coupling of an aeroacoustic model and a parabolic equation code for long range wind turbine noise propagation

    NASA Astrophysics Data System (ADS)

    Cotté, B.

    2018-05-01

    This study proposes to couple a source model based on Amiet's theory and a parabolic equation code in order to model wind turbine noise emission and propagation in an inhomogeneous atmosphere. Two broadband noise generation mechanisms are considered, namely trailing edge noise and turbulent inflow noise. The effects of wind shear and atmospheric turbulence are taken into account using the Monin-Obukhov similarity theory. The coupling approach, based on the backpropagation method to preserve the directivity of the aeroacoustic sources, is validated by comparison with an analytical solution for the propagation over a finite impedance ground in a homogeneous atmosphere. The influence of refraction effects is then analyzed for different directions of propagation. The spectrum modification related to the ground effect and the presence of a shadow zone for upwind receivers are emphasized. The validity of the point source approximation that is often used in wind turbine noise propagation models is finally assessed. This approximation exaggerates the interference dips in the spectra, and is not able to correctly predict the amplitude modulation.

  16. Analytical volcano deformation modelling: A new and fast generalized point-source approach with application to the 2015 Calbuco eruption

    NASA Astrophysics Data System (ADS)

    Nikkhoo, M.; Walter, T. R.; Lundgren, P.; Prats-Iraola, P.

    2015-12-01

    Ground deformation at active volcanoes is one of the key precursors of volcanic unrest, monitored by InSAR and GPS techniques at high spatial and temporal resolution, respectively. Modelling of the observed displacements establishes the link between them and the underlying subsurface processes and volume change. The so-called Mogi model and the rectangular dislocation are two commonly applied analytical solutions that allow for quick interpretations based on the location, depth and volume change of pressurized spherical cavities and planar intrusions, respectively. Geological observations worldwide, however, suggest elongated, tabular or other non-equidimensional geometries for the magma chambers. How can these be modelled? Generalized models, such as Davis's point ellipsoidal cavity or the rectangular dislocation solutions, are geometrically limited and could barely improve the interpretation of data. We develop a new analytical artefact-free solution for a rectangular dislocation, which also possesses full rotational degrees of freedom. We construct a kinematic model in terms of three pairwise-perpendicular rectangular dislocations with a prescribed opening only. This model represents a generalized point source in the far field, and also performs as a finite dislocation model for planar intrusions in the near field. We show that through calculating the Eshelby shape tensor the far-field displacements and stresses of any arbitrary triaxial ellipsoidal cavity can be reproduced by using this model. Regardless of its aspect ratios, the volume change of this model is simply the sum of the volume changes of the individual dislocations. Our model can be integrated in any inversion scheme as simply as the Mogi model, profiting at the same time from the advantages of a generalized point source. After evaluating our model by using a boundary element method code, we apply it to ground displacements of the 2015 Calbuco eruption, Chile, observed by the Sentinel-1 satellite. We infer the parameters of a deflating elongated source located beneath Calbuco, and find significant differences to Mogi-type solutions. The results imply that interpretations based on our model may help us better understand source characteristics, and in the case of Calbuco volcano infer a volcano-tectonic coupling mechanism.

  17. Monitor-based evaluation of pollutant load from urban stormwater runoff in Beijing.

    PubMed

    Liu, Y; Che, W; Li, J

    2005-01-01

    As a major pollutant source for urban receiving waters, non-point source pollution from urban runoff needs to be well studied and effectively controlled. Based on monitoring data from urban runoff pollutant sources, this article describes a systematic estimation of total pollutant loads from the urban areas of Beijing. A numerical model was developed to quantify the main pollutant loads of urban runoff in Beijing. The method involves a sub-procedure in which the flush process influences both the quantity and quality of stormwater runoff. A statistics-based method was applied to compute the annual pollutant load as an output of the runoff. The proportions of pollutants from point and non-point sources were compared. This provides a scientific basis for assessing the environmental inputs of urban stormwater pollution to receiving waters, improving infrastructure performance, implementing urban stormwater management, and utilizing stormwater.
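
    A minimal sketch of a statistics-based load estimate of this kind, summing event mean concentration times runoff volume over a year of storm events, is shown below; every number in it is an invented placeholder, not a Beijing value:

      # Annual load = sum over events of EMC x runoff volume; runoff volume from
      # rainfall depth x drainage area x runoff coefficient. All values assumed.
      rain_events_mm = [8.0, 22.0, 5.5, 40.0, 13.0]   # annual event depths (assumed)
      area_m2 = 2.5e6                                 # drainage area (assumed)
      runoff_coeff = 0.6                              # imperviousness-driven (assumed)
      emc_cod_mg_l = 180.0                            # event mean COD concentration (assumed)

      annual_load_kg = sum(
          (depth / 1000.0) * area_m2 * runoff_coeff   # event runoff volume, m^3
          * emc_cod_mg_l / 1000.0                     # m^3 x mg/L gives grams; /1000 -> kg
          for depth in rain_events_mm
      )
      print(f"annual COD load: {annual_load_kg:.0f} kg")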

  18. Evaluation of the inverse dispersion modelling method for estimating ammonia multi-source emissions using low-cost long time averaging sensor

    NASA Astrophysics Data System (ADS)

    Loubet, Benjamin; Carozzi, Marco

    2015-04-01

    Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat to the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emissions from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, the emission should be treated as a multi-source problem. This work aims at assessing whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares with 25 m sides) located close to each other, using low-cost NH3 measurements (diffusion samplers). To do so, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2 each) and a set of sensors placed at the centre of each field at several heights, as well as 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic model (WindTrax) and a Gaussian-like model (FIDES). The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling, using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of NH3 emissions to surface temperature. A combination of emission patterns (constant, linearly decreasing, exponentially decreasing and Gaussian type) and strengths was used to evaluate the uncertainty of the inversion method. Each numerical experiment covered a period of 28 days. The meteorological dataset of the FLUXNET FR-Gri site (Grignon, FR) in 2008 was employed. Several sensor heights were tested, from 0.25 m to 2 m. The multi-source inverse problem was solved for several sampling and field trial strategies: considering 1 or 2 heights over each field, considering the background concentration as known or unknown, and considering block repetitions in the field set-up (3 repetitions). The inverse modelling approach proved to be well suited for discriminating large differences in NH3 emissions from small agronomic plots using integrating sensors. The method is sensitive to sensor height. The uncertainties and systematic biases are evaluated and discussed.
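
    The multi-source inversion step can be posed as non-negative least squares: with a dispersion matrix D whose entry D[i, j] is the modelled concentration at sensor i per unit emission from field j (obtained from a forward run of a model such as FIDES or WindTrax), the source strengths follow from fitting the averaged concentrations. D and the data below are synthetic placeholders:

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(4)
      n_sensors, n_fields = 12, 9                  # 3 x 3 plots, sensors at centres + far masts
      D = rng.uniform(0.01, 1.0, size=(n_sensors, n_fields))   # placeholder dispersion matrix
      true_S = np.array([5, 0, 2, 0, 8, 1, 0, 3, 0], dtype=float)   # assumed emissions
      conc = D @ true_S + rng.normal(0, 0.05, n_sensors)            # time-averaged NH3

      S_hat, residual = nnls(D, conc)              # non-negative least-squares inversion
      print("inferred emissions:", np.round(S_hat, 2))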

  19. Source localization of rhythmic ictal EEG activity: a study of diagnostic accuracy following STARD criteria.

    PubMed

    Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders

    2013-10-01

    Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model-local autoregressive average (LAURA)-was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard-the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. Reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results, as compared with the discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA) for the ictal EEG signals selected with a standardized method is feasible in clinical practice and has a good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium.
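
    For reference, the reported accuracy measures follow directly from a 2x2 confusion matrix against the reference standard; the counts in the sketch below are arbitrary and serve only to demonstrate the formulas:

      # Sensitivity, specificity and Cohen's kappa from invented counts.
      tp, fn, fp, tn = 14, 6, 4, 13                 # vs. the reference standard (assumed)

      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)

      n = tp + fn + fp + tn
      po = (tp + tn) / n                            # observed agreement
      pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
      kappa = (po - pe) / (1 - pe)
      print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} kappa={kappa:.2f}")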

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apte, A; Veeraraghavan, H; Oh, J

    Purpose: To present an open source and free platform to facilitate radiomics research: the "Radiomics toolbox" in CERR. Method: There is a scarcity of open source tools that support end-to-end modeling of image features to predict patient outcomes. The "Radiomics toolbox" strives to fill the need for such a software platform. The platform supports (1) import of various kinds of image modalities like CT, PET, MR, SPECT, US; (2) contouring tools to delineate structures of interest; (3) extraction and storage of image-based features like first-order statistics, gray-level co-occurrence and zone-size matrix based texture features, and shape features; and (4) statistical analysis. Statistical analysis of the extracted features is supported with basic functionality that includes univariate correlations and Kaplan-Meier curves, and advanced functionality that includes feature reduction and multivariate modeling. The graphical user interface and the data management are performed with Matlab for ease of development and readability of code for a wide audience. Open-source software developed with other programming languages is integrated to enhance various components of this toolbox, for example, Java-based DCM4CHE for import of DICOM and R for statistical analysis. Results: The Radiomics toolbox will be distributed as open source, GNU-copyrighted software. The toolbox was prototyped for modeling an oropharyngeal PET dataset at MSKCC. The analysis will be presented in a separate paper. Conclusion: The Radiomics Toolbox provides an extensible platform for extracting and modeling image features. To emphasize new uses of CERR for radiomics and image-based research, we have changed the name from the "Computational Environment for Radiotherapy Research" to the "Computational Environment for Radiological Research".

  1. Waveform-based Bayesian full moment tensor inversion and uncertainty determination for the induced seismicity in an oil/gas field

    NASA Astrophysics Data System (ADS)

    Gu, Chen; Marzouk, Youssef M.; Toksöz, M. Nafi

    2018-03-01

    Small earthquakes occur due to natural tectonic motions and can also be induced by oil and gas production processes. In many oil/gas fields and hydrofracking processes, induced earthquakes result from fluid extraction or injection. The locations and source mechanisms of these earthquakes provide valuable information about the reservoirs. Analysis of induced seismic events has mostly assumed a double-couple source mechanism. However, recent studies have shown a non-negligible percentage of non-double-couple components of source moment tensors in hydraulic fracturing events, assuming a full moment tensor source mechanism. Without uncertainty quantification of the moment tensor solution, it is difficult to determine the reliability of these source models. This study develops a Bayesian method to perform waveform-based full moment tensor inversion and uncertainty quantification for induced seismic events, accounting for both location and velocity model uncertainties. We conduct tests with synthetic events to validate the method, and then apply our newly developed Bayesian inversion approach to real induced seismicity in an oil/gas field in the Sultanate of Oman, determining the uncertainties in the source mechanism and in the location of that event.
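
    The core of such a Bayesian waveform inversion is sampling a posterior over source parameters under a waveform-misfit likelihood. A toy random-walk Metropolis sketch follows; a hypothetical two-parameter "source" stands in for the moment tensor and Green's functions, so this illustrates the sampling idea only, not the authors' implementation:

```python
# Toy sketch of waveform-based Bayesian inversion: random-walk Metropolis
# over source parameters m, with a Gaussian waveform-misfit likelihood.
# forward() is a stand-in for synthetic-seismogram computation.
import numpy as np

rng = np.random.default_rng(1)

def forward(m, t):
    # hypothetical 2-parameter "source": amplitude and frequency
    return m[0] * np.sin(2 * np.pi * m[1] * t)

t = np.linspace(0, 1, 200)
m_true = np.array([1.0, 3.0])
d_obs = forward(m_true, t) + rng.normal(0, 0.1, t.size)
sigma = 0.1

def log_like(m):
    r = d_obs - forward(m, t)
    return -0.5 * np.sum((r / sigma) ** 2)

m = np.array([0.5, 2.5])            # starting model
ll = log_like(m)
samples = []
for _ in range(5000):
    m_prop = m + rng.normal(0, 0.05, size=2)
    ll_prop = log_like(m_prop)
    if np.log(rng.uniform()) < ll_prop - ll:   # Metropolis acceptance rule
        m, ll = m_prop, ll_prop
    samples.append(m.copy())

post = np.array(samples[1000:])     # discard burn-in
print(post.mean(axis=0), post.std(axis=0))  # posterior mean and spread
```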

  2. Simulation of load-sharing in standalone distributed generation system

    NASA Astrophysics Data System (ADS)

    Ajewole, Titus O.; Craven, Robert P. M.; Kayode, Olakunle; Babalola, Olufisayo S.

    2018-05-01

    This paper presents a study on load-sharing among the component generating units of a multi-source electric microgrid that is operated as an autonomous AC supply-mode system. Emerging trends in power system development permit deployment of microgrids for standalone or stand-by applications, thereby requiring active- and reactive-power sharing among the discrete generating units contained in hybrid-source microgrids. In this study, therefore, a laboratory-scale model of a microgrid energized by three renewable energy-based sources is employed as a simulation platform to investigate power sharing among the power-generating units. Each source is represented by a source emulator that captures the real operational characteristics of the mimicked generating unit and, with real-life weather data and load profiles implemented on the model, the sharing of the load among the generating units is investigated. There is a proportionate generation of power by the three source emulators, with their frequencies perfectly synchronized at the point of common coupling as a result of balanced flow of power among them. This hybrid topology of renewable energy-based microgrid could therefore be seamlessly adapted into the national energy mix by the indigenous electric utility providers in Nigeria.
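
    Load sharing among parallel units at a point of common coupling is commonly analyzed with frequency-droop relations, and the observation of a common synchronized frequency with proportionate outputs follows directly from them. A generic droop sketch with illustrative parameters (not the paper's emulator settings):

```python
# Generic frequency-droop sketch of active-power sharing among three
# sources at a common coupling point (illustrative, not the paper's code).
import numpy as np

f0 = 50.0                           # no-load frequency (Hz)
k = np.array([0.02, 0.04, 0.04])    # droop slopes (Hz per kW)
p_load = 30.0                       # total load (kW)

# All units settle at one frequency f; P_i = (f0 - f) / k_i, sum(P_i) = load
f = f0 - p_load / np.sum(1.0 / k)
p = (f0 - f) / k
print(f"common frequency: {f:.3f} Hz")
print("unit outputs (kW):", np.round(p, 2))   # inversely proportional to droop
```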

  3. OpenSWPC: an open-source integrated parallel simulation code for modeling seismic wave propagation in 3D heterogeneous viscoelastic media

    NASA Astrophysics Data System (ADS)

    Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi

    2017-07-01

    We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local to regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer for the absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance on platforms ranging from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documents in a public repository.
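
    To make the finite difference approach concrete, a toy 1D acoustic second-order scheme is sketched below; OpenSWPC's actual 3D/2D viscoelastic solver with Zener attenuation and PML boundaries is far more elaborate:

```python
# Toy 1D acoustic finite-difference scheme (2nd order in space and time),
# illustrating the kind of time-domain solver the package implements.
# No attenuation model or PML here; the edges simply reflect.
import numpy as np

nx, nt = 400, 600
dx, dt = 10.0, 1e-3                  # grid spacing (m), time step (s)
c = np.full(nx, 3000.0)              # velocity model (m/s), heterogeneity allowed
c[200:] = 4000.0

src = nx // 4
u_prev = np.zeros(nx)
u = np.zeros(nx)
for it in range(nt):                 # CFL: c*dt/dx = 0.4 < 1, stable
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u_next = 2 * u - u_prev + (c * dt) ** 2 * lap
    u_next[src] += np.exp(-(((it * dt) - 0.06) / 0.015) ** 2)  # Gaussian pulse
    u_prev, u = u, u_next

print("max |u| at final step:", np.abs(u).max())
```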

  4. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now, data error propagation is widely implemented and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, InSAR-derived surface displacements and seismological waveforms are combined more regularly, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences in geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join in the community efforts with the particular goal to improve crustal earthquake source inferences in generally not well instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimation as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are replaced by simple planar finite rupture models. 1D-layered medium models are implemented for both near- and far-field data predictions. A highlight of our approach is a weak dependence on earthquake bulletin information: hypocenter locations and source origin times are relatively free source model parameters. We present this harmonized source modelling environment based on example earthquake studies, e.g. the 2010 Haiti earthquake, the 2009 L'Aquila earthquake and others. We discuss the benefit of combined-data non-linear modelling for the resolution of first-order rupture parameters, e.g. location, size, orientation, mechanism, moment/slip and rupture propagation. The presented studies apply our newly developed software tools, which build on the open-source seismological software toolbox pyrocko (www.pyrocko.org) in the form of modules. We aim to facilitate a better exploitation of open global data sets for a wide community studying tectonics, but the tools are applicable also to a large range of regional to local earthquake studies. Our developments therefore ensure large flexibility in the parametrization of medium models (e.g. 1D to 3D medium models), source models (e.g. explosion sources, full moment tensor sources, heterogeneous slip models, etc.) and of the predicted data (e.g. (high-rate) GPS, strong motion, tilt). This work is conducted within the project "Bridging Geodesy and Seismology" (www.bridges.uni-kiel.de), funded by the German Research Foundation DFG through an Emmy Noether grant.

  5. Use of MODIS Satellite Images and an Atmospheric Dust Transport Model to Evaluate Juniperus spp. Pollen Phenology and Dispersal

    NASA Technical Reports Server (NTRS)

    Luvall, J. C.; Sprigg, W. A.; Levetin, E.; Huete, A.; Nickovic, S.; Pejanovic, G. A.; Vukovic, A.; VandeWater, P. K.; Myers, O. B.; Budge, A. M.

    2011-01-01

    Pollen can be transported great distances. Van de Water et al. reported Juniperus spp. pollen was transported 200-600 km. Hence, local observations of plant phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. DREAM (the Dust REgional Atmospheric Model) is a verified model for atmospheric dust transport that uses MODIS data products to identify source regions and quantities of dust. We are modifying the DREAM model to incorporate pollen transport. Pollen release will be estimated based on MODIS-derived phenology of Juniperus spp. communities. Ground-based observational records of pollen release timing and quantities will be used as verification. This information will be used to support the Centers for Disease Control and Prevention's National Environmental Public Health Tracking Program and the State of New Mexico's environmental public health decision support for asthma and allergy alerts.

  6. RINGMesh: A programming library for developing mesh-based geomodeling applications

    NASA Astrophysics Data System (ADS)

    Pellerin, Jeanne; Botella, Arnaud; Bonneau, François; Mazuyer, Antoine; Chauvin, Benjamin; Lévy, Bruno; Caumon, Guillaume

    2017-07-01

    RINGMesh is a C++ open-source programming library for manipulating discretized geological models. It is designed to ease the development of applications and workflows that use discretized 3D models. It is neither a geomodeler nor a meshing software. RINGMesh implements functionalities to read discretized surface-based or volumetric structural models and to check their validity. The models can then be exported in various file formats. RINGMesh provides data structures to represent geological structural models, defined by their discretized boundary surfaces and/or by discretized volumes. A programming interface allows the development of new geomodeling methods and the plugging in of external software. The goal of RINGMesh is to help researchers focus on the implementation of their specific method rather than on tedious tasks common to many applications. The documented code is open-source and distributed under the modified BSD license. It is available at https://www.ring-team.org/index.php/software/ringmesh.

  7. A European model and case studies for aggregate exposure assessment of pesticides.

    PubMed

    Kennedy, Marc C; Glass, C Richard; Bokkers, Bas; Hart, Andy D M; Hamey, Paul Y; Kruisselbrink, Johannes W; de Boer, Waldo J; van der Voet, Hilko; Garthwaite, David G; van Klaveren, Jacob D

    2015-05-01

    Exposures to plant protection products (PPPs) are assessed using risk analysis methods to protect public health. Traditionally, single sources, such as food or individual occupational sources, have been addressed. In reality, individuals can be exposed simultaneously to multiple sources. Improved regulation therefore requires the development of new tools for estimating the population distribution of exposures aggregated within an individual. A new aggregate model is described, which allows individual users to include as much, or as little, information as is available or relevant for their particular scenario. Depending on the inputs provided by the user, the outputs can range from simple deterministic values through to probabilistic analyses including characterisations of variability and uncertainty. Exposures can be calculated for multiple compounds, routes and sources of exposure. The aggregate model links to the cumulative dietary exposure model developed in parallel and is implemented in the web-based software tool MCRA. Case studies are presented to illustrate the potential of this model, with inputs drawn from existing European data sources and models. These cover exposures to UK arable spray operators, Italian vineyard spray operators, Netherlands users of a consumer spray and UK bystanders/residents. The model could also be adapted to handle non-PPP compounds. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
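
    The heart of such an aggregate assessment is summing exposures across sources within each simulated individual before taking population percentiles. A hedged Monte Carlo sketch with entirely hypothetical lognormal route distributions (not the MCRA model's actual inputs):

```python
# Hedged sketch of probabilistic aggregate exposure across sources;
# the distributions below are hypothetical, not MCRA's parameterization.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000                                              # simulated individuals

dietary = rng.lognormal(mean=-3.0, sigma=1.0, size=n)    # mg/kg bw/day
operator = rng.lognormal(mean=-4.0, sigma=1.2, size=n)   # dermal + inhalation
resident = rng.lognormal(mean=-6.0, sigma=0.8, size=n)

aggregate = dietary + operator + resident                # within-individual sum
for q in (0.5, 0.95, 0.999):
    print(f"P{q:.1%} aggregate exposure: {np.quantile(aggregate, q):.4f} mg/kg bw/day")
```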

  8. Application of an integrated Weather Research and Forecasting (WRF)/CALPUFF modeling tool for source apportionment of atmospheric pollutants for air quality management: A case study in the urban area of Benxi, China.

    PubMed

    Wu, Hao; Zhang, Yan; Yu, Qi; Ma, Weichun

    2018-04-01

    In this study, the authors endeavored to develop an effective framework for improving local urban air quality on meso-micro scales in cities in China that are experiencing rapid urbanization. Within this framework, the integrated Weather Research and Forecasting (WRF)/CALPUFF modeling system was applied to simulate the concentration distributions of typical pollutants (particulate matter with an aerodynamic diameter <10 μm [PM10], sulfur dioxide [SO2], and nitrogen oxides [NOx]) in the urban area of Benxi. Statistical analyses were performed to verify the credibility of this simulation, including the meteorological fields and concentration fields. The sources were then categorized using two different classification methods (the district-based and type-based methods), and the contributions to the pollutant concentrations from each source category were computed to provide a basis for appropriate control measures. The statistical indexes showed that CALMET had sufficient ability to predict the meteorological conditions, such as the wind fields and temperatures, which provided meteorological data for the subsequent CALPUFF run. The simulated concentrations from CALPUFF showed considerable agreement with the observed values but were generally underestimated. The spatial-temporal concentration pattern revealed that the maximum concentrations tended to appear in the urban centers and during the winter. In terms of their contributions to pollutant concentrations, the districts of Xihu, Pingshan, and Mingshan all affected the urban air quality to different degrees. According to the type-based classification, which categorized the pollution sources as belonging to the Bengang Group, large point sources, small point sources, and area sources, the source apportionment showed that the Bengang Group, the large point sources, and the area sources had considerable impacts on urban air quality. Finally, combined with the industrial characteristics, detailed control measures were proposed with which local policy makers could improve the urban air quality in Benxi. In summary, the results of this study showed that this framework can credibly support effective improvement of urban air quality, based on the source apportionment of atmospheric pollutants. The authors endeavored to build an effective framework based on the integrated WRF/CALPUFF system to improve air quality on meso-micro scales in Chinese cities. Via this framework, the integrated modeling tool is used to study the characteristics of meteorological fields, concentration fields, and source apportionments of pollutants in the target area. The impacts of classified sources on air quality, together with the industrial characteristics, can provide more effective control measures for improving air quality. Through the case study, the technical framework developed in this study, particularly the source apportionment, could provide important data and technical support for policy makers assessing air pollution at the scale of a city in China or elsewhere.

  9. Source-to-Outcome Microbial Exposure and Risk Modeling Framework

    EPA Science Inventory

    A Quantitative Microbial Risk Assessment (QMRA) is a computer-based data-delivery and modeling approach that integrates interdisciplinary fate/transport, exposure, and impact models and databases to characterize potential health impacts/risks due to pathogens. As such, a QMRA ex...

  10. Improved estimation of sediment source contributions by concentration-dependent Bayesian isotopic mixing model

    NASA Astrophysics Data System (ADS)

    Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal

    2017-04-01

    The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant for understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signature of a biotracer as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on linear mixing assumptions of the CSSI signatures of sources to the sediment without accounting for potential effects of source biotracer concentration. Here we evaluated the effect of FA concentration in sources on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land use sources. Soil samples from the land use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated with and without concentration dependency using MixSIAR, a Bayesian isotopic mixing model. The concentration-dependent MixSIAR provided the closest estimates to the known artificial mixture source contributions (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions based on the aggregated FA concentration of the sources biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the potential effect of source FA concentration on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of source contributions to the mixture. The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable even after equilibrium. Therefore, inclusion of the FA concentrations of the sources in the IMM formulation should be standard procedure for accurate estimation of source contributions. The post-model correction approach that dominates CSSI fingerprinting causes bias, especially if the FA concentrations of the sources differ substantially.
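
    The difference between the two MixSIAR variants comes down to the forward mixing equation. A brief sketch of concentration-dependent versus plain linear mixing for one tracer, with illustrative values:

```python
# Sketch of the concentration-dependent mixing equation underlying the
# comparison above (illustrative values, not the study's data).
import numpy as np

f = np.array([0.5, 0.3, 0.2])              # source proportions (sum to 1)
delta = np.array([-30.0, -27.0, -22.0])    # d13C of one FA in each source
conc = np.array([5.0, 1.0, 10.0])          # FA concentration per source (mg/g)

# Concentration-dependent: each source contributes in proportion to the
# amount of tracer it actually carries into the mixture.
mix_dep = np.sum(f * conc * delta) / np.sum(f * conc)
# Concentration-independent (plain linear mixing of signatures):
mix_indep = np.sum(f * delta)

print(f"concentration-dependent d13C:   {mix_dep:.2f} per mil")
print(f"concentration-independent d13C: {mix_indep:.2f} per mil")
```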

  11. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    NASA Astrophysics Data System (ADS)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point-source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently, with a loss of accuracy small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow for additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
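
    The decomposition mentioned above (an area source treated as a sum of crosswind line sources, each with an analytical Gaussian solution) can be sketched as follows; the sigma_z power law is a stand-in, not a calibrated AERMOD/ADMS scheme:

```python
# Sketch of the numerical scheme the abstract describes: an area source
# split into crosswind line sources, each using the analytical ground-level
# solution C = sqrt(2/pi) * qL / (u * sigma_z(x)). Illustrative values only.
import numpy as np

u = 4.0                        # wind speed (m/s)
Q = 1.0                        # total area-source emission (g/s)
length, width = 100.0, 50.0    # along-wind and crosswind extent (m)

def sigma_z(x, a=0.08, b=0.9):               # hypothetical stability curve
    return a * x**b

def conc_area(x_receptor, n_lines=200):
    dx = length / n_lines
    xs = (np.arange(n_lines) + 0.5) * dx     # strip (line source) centres
    d = x_receptor - xs                      # downwind distance to each strip
    d = d[d > 0]                             # only upwind strips contribute
    qL = Q * dx / (length * width)           # strip strength (g/s per metre)
    return np.sum(np.sqrt(2.0 / np.pi) * qL / (u * sigma_z(d)))

for xr in (150.0, 300.0, 1000.0):
    print(f"x = {xr:6.0f} m : C = {conc_area(xr):.3e} g/m^3")
```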

  12. Application of the ReNuMa model in the Sha He river watershed: tools for watershed environmental management.

    PubMed

    Sha, Jian; Liu, Min; Wang, Dong; Swaney, Dennis P; Wang, Yuqiu

    2013-07-30

    Models and related analytical methods are critical tools for use in modern watershed management. A modeling approach for quantifying the source apportionment of dissolved nitrogen (DN) and associated tools for examining the sensitivity and uncertainty of the model estimates were assessed for the Sha He River (SHR) watershed in China. The Regional Nutrient Management model (ReNuMa) was used to infer the primary sources of DN in the SHR watershed. This model is based on the Generalized Watershed Loading Functions (GWLF) and the Net Anthropogenic Nutrient Input (NANI) framework, modified to improve the characterization of subsurface hydrology and septic system loads. Hydrochemical processes of the SHR watershed, including streamflow, DN load fluxes, and corresponding DN concentration responses, were simulated following calibrations against observations of streamflow and DN fluxes. Uncertainty analyses were conducted with a Monte Carlo analysis to vary model parameters for assessing the associated variations in model outputs. The model performed accurately at the watershed scale and provided estimates of monthly streamflows and nutrient loads as well as DN source apportionments. The simulations identified the dominant contribution of agricultural land use and significant monthly variations. These results provide valuable support for science-based watershed management decisions and indicate the utility of ReNuMa for such applications. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Standardizing Acute Toxicity Data for use in Ecotoxicology Models: Influence of Test Type, Life Stage, and Concentration Reporting

    EPA Science Inventory

    Ecotoxicological models generally have large data requirements and are frequently based on existing information from diverse sources. Standardizing data for toxicological models may be necessary to reduce extraneous variation and to ensure models reflect intrinsic relationships. ...

  14. Propagation of Exploration Seismic Sources in Shallow Water

    NASA Astrophysics Data System (ADS)

    Diebold, J. B.; Tolstoy, M.; Barton, P. J.; Gulick, S. P.

    2006-05-01

    The choice of safety radii to mitigate the impact of exploration seismic sources upon marine mammals is typically based on measurement or modeling in deep water. In shallow water environments, rule-of-thumb spreading laws are often used to predict the falloff of amplitude with offset from the source, but actual measurements (or, ideally, near-perfect modeling) are still needed to account for the effects of bathymetric changes and subseafloor characteristics. In addition, the question "how shallow is 'shallow'?" needs an answer. In a cooperative effort by NSF, MMS, NRL, IAGC and L-DEO, a series of seismic source calibration studies was carried out in the Northern Gulf of Mexico during 2003. The sources used were the two-, six-, ten-, twelve-, and twenty-airgun arrays of R/V Ewing, and a 31-element, 3-string "G" gun array deployed by M/V Kondor, an exploration industry source ship. The results of the Ewing calibrations have been published, documenting results in deep (3200 m) and shallow (60 m) water. Lengthy analysis of the Kondor results, presented here, suggests an approach to answering the "how shallow is shallow" question. After initially falling off steadily with source-receiver offset, the Kondor levels suddenly increased at a 4 km offset. Ray-based modeling with a complex, realistic source, but with a simple homogeneous water column-over-elastic halfspace ocean, shows that the observed pattern is chiefly due to geophysical effects, and not focusing within the water column. The same kind of modeling can be used to predict how the amplitudes will change with decreasing water depth, and when deep-water safety radii may need to be increased. Another set of data (see Barton, et al., this session), recorded in 20 meters of water during early 2005, however, shows that simple modeling may be insufficient when the geophysics becomes more complex. In this particular case, the fact that the seafloor was within the near field of the R/V Ewing source array seems to have given rise to seismic phases not normally seen in marine survey data acquired in deeper water. The associated partitioning of energy is likely to have caused the observed uncharacteristically rapid loss of energy with distance. It appears that in this case, the shallow-water marine mammal safety mitigation measures prescribed and followed were far more stringent than they needed to be. A new approach, wherein received levels detected by the towed 6-km multichannel hydrophone array may be used to modify safety radii, has recently been proposed based on these observations.
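
    For context, the rule-of-thumb spreading laws mentioned above bracket the problem: received level = source level - N log10(r), with N = 20 for spherical and N = 10 for cylindrical (shallow-water guided) spreading. A quick sketch with illustrative levels shows how strongly the assumed law drives the safety radius:

```python
# Rule-of-thumb spreading laws for a first-cut safety radius; both the
# source level and the threshold below are illustrative examples only.
source_level = 230.0            # dB re 1 uPa at 1 m (illustrative array level)
threshold = 180.0               # dB re 1 uPa mitigation criterion (example)

for n, label in ((20.0, "spherical"), (10.0, "cylindrical")):
    r = 10 ** ((source_level - threshold) / n)    # invert TL = n*log10(r)
    print(f"{label:11s} spreading: safety radius ~ {r:,.0f} m")
```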

  15. Comprehensive model-based prediction of micropollutants from diffuse sources in the Swiss river network

    NASA Astrophysics Data System (ADS)

    Strahm, Ivo; Munz, Nicole; Braun, Christian; Gälli, René; Leu, Christian; Stamm, Christian

    2014-05-01

    Water quality in the Swiss river network is affected by many micropollutants from a variety of diffuse sources. This study compares, for the first time and in a comprehensive manner, the diffuse sources and the substance groups that contribute the most to water contamination in Swiss streams, and highlights the major regions of water pollution. For this, a simple but comprehensive model was developed to estimate emissions from diffuse sources for the entire Swiss river network of 65 000 km. Based on emission factors, the model calculates catchment-specific losses to streams for more than 15 diffuse sources (such as crop land, grassland, vineyards, fruit orchards, roads, railways, facades, roofs, green space in urban areas, landfills, etc.) and more than 130 different substances from 5 different substance groups (pesticides, biocides, heavy metals, human drugs, animal drugs). For more than 180 000 stream sections, estimates of mean annual pollutant loads and mean annual concentration levels were modeled. These data were validated with a set of monitoring data and evaluated against annual average environmental quality standards (AA-EQS). Model validation showed that the estimated mean annual concentration levels are within the range of measured data. The simulations were therefore considered sufficiently robust for identifying the major sources of diffuse pollution. The analysis showed that widespread pollution of streams can be expected in Switzerland. Along more than 18 000 km of the river network, one or more simulated substances have a concentration exceeding the AA-EQS; in single stream sections, this can be more than 50 different substances. Moreover, the simulations showed that in two-thirds of small streams (Strahler order 1 and 2) at least one AA-EQS is always exceeded. The highest numbers of substances exceeding the AA-EQS occur in areas with large fractions of arable cropping, vineyards and fruit orchards. Urban areas are also of concern, even without considering wastewater treatment plants. Only a small number of problematic substances are expected from grassland. Landfills and roadways are insignificant within the entire Swiss river network, but may locally lead to considerable water pollution. Considering all substance groups, pesticides and some heavy metals are the main contributors. Many pesticides are expected to exceed the AA-EQS, and in a substantial percentage of the river network. Modeling a large number of substances from many sources and a huge number of stream sections is only possible with a simple model. Nevertheless, the conclusions are robust and may indicate where, and for which substance groups, additional efforts for water quality improvement should be undertaken.

  16. Application of genetic algorithm for the simultaneous identification of atmospheric pollution sources

    NASA Astrophysics Data System (ADS)

    Cantelli, A.; D'Orta, F.; Cattini, A.; Sebastianelli, F.; Cedola, L.

    2015-08-01

    A computational model is developed for retrieving the positions and the emission rates of unknown pollution sources, under steady-state conditions, starting from measurements of the concentration of the pollutants. The approach is based on the minimization of a fitness function employing a genetic algorithm paradigm. The model is tested considering both pollutant concentrations generated through a Gaussian model at 25 points in a 3-D test case domain (1000 m × 1000 m × 50 m) and experimental data, such as data from the Prairie Grass field experiments, in which about 600 receptors were located along five concentric semicircle arcs, and from the Fusion Field Trials 2007. The results show that the computational model is capable of efficiently retrieving up to three different unknown sources.
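
    A compact GA-style sketch of the idea (selection, mutation, elitism over candidate source position and strength, minimizing the concentration misfit); the forward model here is a toy kernel, not the paper's Gaussian dispersion model:

```python
# Minimal evolutionary search in the spirit of the approach above:
# candidate (x, y, q) source parameters evolve to minimize the misfit
# between predicted and "measured" receptor concentrations.
import numpy as np

rng = np.random.default_rng(3)
receptors = rng.uniform(0, 1000, size=(25, 2))          # 25 receptor positions

def forward(src, pts):
    x, y, q = src
    r2 = (pts[:, 0] - x) ** 2 + (pts[:, 1] - y) ** 2
    return q / (r2 + 100.0)                             # toy decay kernel

true_src = np.array([400.0, 650.0, 50.0])
c_obs = forward(true_src, receptors)

def fitness(pop):                                       # lower is better
    return np.array([np.sum((forward(p, receptors) - c_obs) ** 2) for p in pop])

pop = rng.uniform([0, 0, 1], [1000, 1000, 100], size=(60, 3))
for gen in range(200):
    f = fitness(pop)
    parents = pop[np.argsort(f)[:20]]                   # truncation selection
    pop = parents[rng.integers(0, 20, 60)] + rng.normal(0, 10, (60, 3))
    pop[:, 2] = np.clip(pop[:, 2], 1, 100)              # keep strength valid
    pop[0] = parents[0]                                 # elitism

print("best candidate:", np.round(pop[0], 1), " true:", true_src)
```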

  17. Waveform inversion of acoustic waves for explosion yield estimation

    DOE PAGES

    Kim, K.; Rodgers, A. J.

    2016-07-08

    We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.
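
    The central inverse step can be sketched as follows: the recorded waveform is the Green's function convolved with the source time function (STF), so the STF follows from damped least squares on the convolution matrix. All signals below are synthetic stand-ins, not the authors' data or code:

```python
# Sketch of the core inverse step: observed waveform = Green's function
# convolved with the source time function (STF), so build the Toeplitz
# convolution matrix G and solve for the STF by damped least squares.
import numpy as np

rng = np.random.default_rng(4)
nt = 200
t = np.arange(nt) * 0.01

g = np.exp(-60 * (t - 0.3) ** 2) * np.sin(40 * t)     # toy Green's function
stf_true = np.exp(-200 * (t - 0.2) ** 2)              # toy source time function

G = np.zeros((nt, nt))                                # d = G @ stf
for i in range(nt):
    G[i, : i + 1] = g[i::-1]
d_obs = G @ stf_true + rng.normal(0, 0.01, nt)

lam = 0.1                                             # zeroth-order Tikhonov
stf_hat = np.linalg.solve(G.T @ G + lam * np.eye(nt), G.T @ d_obs)
print(f"peak STF: true {stf_true.max():.3f}, recovered {stf_hat.max():.3f}")
```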

  18. A data-driven modeling approach to stochastic computation for low-energy biomedical devices.

    PubMed

    Lee, Kyong Ho; Jang, Kuk Jin; Shoeb, Ali; Verma, Naveen

    2011-01-01

    Low-power devices that can detect clinically relevant correlations in physiologically-complex patient signals can enable systems capable of closed-loop response (e.g., controlled actuation of therapeutic stimulators, continuous recording of disease states, etc.). In ultra-low-power platforms, however, hardware error sources are becoming increasingly limiting. In this paper, we present how data-driven methods, which allow us to accurately model physiological signals, also allow us to effectively model and overcome prominent hardware error sources with nearly no additional overhead. Two applications, EEG-based seizure detection and ECG-based arrhythmia-beat classification, are synthesized to a logic-gate implementation, and two prominent error sources are introduced: (1) SRAM bit-cell errors and (2) logic-gate switching errors ('stuck-at' faults). Using patient data from the CHB-MIT and MIT-BIH databases, performance similar to error-free hardware is achieved even for very high fault rates (up to 0.5 for SRAMs and 7 × 10^-2 for logic) that cause computational bit error rates as high as 50%.

  19. Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application

    NASA Astrophysics Data System (ADS)

    Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni

    2018-06-01

    Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique to deal with this problem consists in reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms, and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to, and sometimes better than, the established propagation-based technique.

  1. Shallow seismicity in volcanic system: what role does the edifice play?

    NASA Astrophysics Data System (ADS)

    Bean, Chris; Lokmer, Ivan

    2017-04-01

    Seismicity in the upper two kilometres of volcanic systems is complex and very diverse in nature. Its origins lie in the multi-physics nature of the source processes and in the often extreme heterogeneity of the near-surface structure, which introduces strong seismic wave propagation path effects that often 'hide' the source itself. Another complicating factor is that we are often in the seismic near-field, so waveforms can be intrinsically more complex than in far-field earthquake seismology. The traditional explanation for the diverse nature of shallow seismic signals calls on the direct action of fluids in the system. Fits of models to data are then used to elucidate properties of the plumbing system. Here we show that solutions based on these conceptual models are not unique, and that models based on a diverse range of quasi-brittle failure of low-stiffness near-surface structures are equally valid from a data-fit perspective. These earthquake-like sources also explain aspects of edifice deformation that are as yet poorly quantified.

  2. A stochastically forced time delay solar dynamo model: Self-consistent recovery from a Maunder-like grand minimum necessitates a mean-field alpha effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazra, Soumitra; Nandy, Dibyendu; Passos, Dário, E-mail: s.hazra@iiserkol.ac.in, E-mail: dariopassos@ist.utl.pt, E-mail: dnandi@iiserkol.ac.in

    Fluctuations in the Sun's magnetic activity, including episodes of grand minima such as the Maunder minimum, have important consequences for space and planetary environments. However, the underlying dynamics of such extreme fluctuations remain ill-understood. Here, we use a novel mathematical model based on stochastically forced, non-linear delay differential equations to study solar cycle fluctuations, in which time delays capture the physics of magnetic flux transport between spatially segregated dynamo source regions in the solar interior. Using this model, we explicitly demonstrate that the Babcock-Leighton poloidal field source based on dispersal of tilted bipolar sunspot flux, alone, cannot recover the sunspot cycle from a grand minimum. We find that an additional poloidal field source effective on weak fields—e.g., the mean-field α effect driven by helical turbulence—is necessary for self-consistent recovery of the sunspot cycle from grand minima episodes.

  3. Spacecraft Internal Acoustic Environment Modeling

    NASA Technical Reports Server (NTRS)

    Chu, Shao-Sheng R.; Allen, Christopher S.

    2010-01-01

    Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and to predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment were developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons were made with the model and showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand. This is opposed to earlier studies where Reference Sound Sources (RSS) with known sound power level were used. Comparisons of the modeling result with the measurements in the mockup showed excellent results. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between ECLSS wall and mockup wall. The effect of sealing the gap and adding sound absorptive treatment to ECLSS wall were also modeled and validated.

  4. Attribution of the French human Salmonellosis cases to the main food-sources according to the type of surveillance data.

    PubMed

    David, J M; Sanders, P; Bemrah, N; Granier, S A; Denis, M; Weill, F-X; Guillemot, D; Watier, L

    2013-05-15

    Salmonella are the most common bacterial cause of foodborne infections in France and ubiquitous pathogens present in many animal productions. Assessing the relative contribution of the different food-animal sources to the burden of human cases is a key step towards the conception, prioritization and assessment of efficient control policy measures. For this purpose, we considered a Bayesian microbial subtyping attribution approach based on a previously published model (Hald et al., 2004). It requires quality integrated data on human cases and on the contamination of their food sources, per serotype and microbial subtype, which were retrieved from the French integrated surveillance system for Salmonella. The quality of the data available for such an approach is an issue for many countries in which the surveillance system has not been designed for this purpose. In France, the sources are monitored simultaneously by an active, regulation-based surveillance system that produces representative prevalence data (as ideally required for the approach) and a passive system relying on voluntary laboratories that produces data not meeting the standards set by Hald et al. (2004) but covering a broader range of sources. These data allowed us to study the impact of data quality on the attribution results, globally and focusing on specific features of the data (number of sources and contamination indicator). The microbial subtyping attribution model was run using an adapted parameterization previously proposed (David et al., 2012). A total of 9076 domestic sporadic cases were included in the analyses, as well as 9 sources, among which 5 were common to the active and the passive datasets. The greatest impact on the attribution results was observed for the number of sources. Thus, especially in the absence of data on imported products, the attribution estimates presented here should be considered with caution. The results were comparable for both types of surveillance, leading to the conclusion that passive data constitute a potential cost-effective complement to active data collection, especially interesting because the former encompass a greater number of sources. The model appeared robust to the type of surveillance and, provided that some methodological aspects of the model can be enhanced, it could also serve as a risk-based guidance tool for active surveillance systems. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Detection of Warming Effects Due to Industrialization: An Accumulated Intervention Model with an Application in Pohang, Korea.

    NASA Astrophysics Data System (ADS)

    Ryoo, S. B.; Moon, S. E.

    1995-06-01

    Modifications of surface air temperature caused by anthropogenic impacts have received much attention recently because of the heightened interest in climatic change. When an industrial area is constructed, resulting in a large-scale anthropogenic heat source, is it possible to detect the warming effect of the heat source? In this paper, the intensity of warming is estimated in the area of the source. A statistical model is suggested to estimate the warming caused by that anthropogenic heat source. The model used in this study is an accumulated intervention (AI) model that is applied to industrial heat perturbations that occurred in the area. To evaluate the AI model performance, the forecast experiment was carried out with an independent dataset. The data used in this study are the monthly mean temperatures at Pohang, Korea. The AI model was developed based on the data for the 38-year period from 1953 to 1990, and the forecast experiment was carried out with an independent dataset for the 2-year period from 1991 to 1992.

  6. Combining observations in the reflective solar and thermal domains for improved carbon and energy flux estimation

    USDA-ARS?s Scientific Manuscript database

    This study investigates the utility of integrating remotely sensed estimates of leaf chlorophyll (Cab) into a therma-based Two-Source Energy Balance (TSEB) model that estimates land-surface CO2 and energy fluxes using an analytical, light-use-efficiency (LUE) based model of canopy resistance. The LU...

  7. Global estimation of evapotranspiration using a leaf area index-based surface energy and water balance model

    USDA-ARS?s Scientific Manuscript database

    Studies of global hydrologic cycles, carbon cycles and climate change are greatly facilitated when global estimates of evapotranspiration (E) are available. We have developed an air-relative-humidity-based two-source (ARTS) E model that simulates the surface energy balance, soil water balance, and e...

  8. Not Funding the Evidence-Based Model in Ohio

    ERIC Educational Resources Information Center

    Edlefson, Carla

    2010-01-01

    The purpose of this descriptive case study was to describe the implementation of Ohio's version of the Evidence-Based Model (OEBM) state school finance system in 2009. Data sources included state budget documents and analyses as well as interviews with local school officials. The new system was responsive to three policy objectives ordered by the…

  9. Movement behavior explains genetic differentiation in American black bears

    Treesearch

    Samuel A Cushman; Jesse S. Lewis

    2010-01-01

    Individual-based landscape genetic analyses provide empirically based models of gene flow. It would be valuable to verify the predictions of these models using independent data of a different type. Analyses using different data sources that produce consistent results provide strong support for the generality of the findings. Mating and dispersal movements are the...

  10. A thermal-based remote sensing modelling system for estimating crop water use and stress from field to regional scales

    USDA-ARS?s Scientific Manuscript database

    Thermal-infrared remote sensing of land surface temperature provides valuable information for quantifying root-zone water availability, evapotranspiration (ET) and crop condition. A thermal-based scheme, called the Two-Source Energy Balance (TSEB) model, solves for the soil/substrate and canopy temp...

  11. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

    It is very important to capture distant-talking speech for a hands-free speech interface with high quality. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance of the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be identified accurately as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
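
    The CSP coefficients referred to above are the phase-normalized cross-spectrum of two microphone signals (also known as GCC-PHAT); the lag of the resulting correlation peak gives the time difference of arrival. A minimal two-microphone sketch with synthetic signals:

```python
# Minimal sketch of the CSP (cross-power spectrum phase, a.k.a. GCC-PHAT)
# step used in DOA estimation: whiten the cross-spectrum of two microphone
# signals and pick the lag of the correlation peak.
import numpy as np

rng = np.random.default_rng(5)
fs = 16000
n = 4096
true_delay = 7                                   # samples between the two mics

s = rng.normal(size=n)                           # stand-in for a speech frame
x1 = s + 0.05 * rng.normal(size=n)
x2 = np.roll(s, true_delay) + 0.05 * rng.normal(size=n)

X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
cross = X2 * np.conj(X1)
csp = np.fft.irfft(cross / (np.abs(cross) + 1e-12))  # phase-only correlation

lag = int(np.argmax(csp))
lag = lag - n if lag > n // 2 else lag           # wrap to a signed lag
print(f"estimated TDOA: {lag} samples ({lag / fs * 1e3:.2f} ms)")
```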

  12. Creating system engineering products with executable models in a model-based engineering environment

    NASA Astrophysics Data System (ADS)

    Karban, Robert; Dekens, Frank G.; Herzig, Sebastian; Elaasar, Maged; Jankevičius, Nerijus

    2016-08-01

    Applying systems engineering across the life-cycle results in a number of products built from interdependent sources of information using different kinds of system level analysis. This paper focuses on leveraging the Executable System Engineering Method (ESEM) [1] [2], which automates requirements verification (e.g. power and mass budget margins and duration analysis of operational modes) using executable SysML [3] models. The particular value proposition is to integrate requirements, and executable behavior and performance models for certain types of system level analysis. The models are created with modeling patterns that involve structural, behavioral and parametric diagrams, and are managed by an open source Model Based Engineering Environment (named OpenMBEE [4]). This paper demonstrates how the ESEM is applied in conjunction with OpenMBEE to create key engineering products (e.g. operational concept document) for the Alignment and Phasing System (APS) within the Thirty Meter Telescope (TMT) project [5], which is under development by the TMT International Observatory (TIO) [5].

  13. Matrix factorization-based data fusion for the prediction of lncRNA-disease associations.

    PubMed

    Fu, Guangyuan; Wang, Jun; Domeniconi, Carlotta; Yu, Guoxian

    2018-05-01

    Long non-coding RNAs (lncRNAs) play crucial roles in complex disease diagnosis, prognosis, prevention and treatment, but only a small portion of lncRNA-disease associations have been experimentally verified. Various computational models have been proposed to identify lncRNA-disease associations by integrating heterogeneous data sources. However, existing models generally ignore the intrinsic structure of data sources or treat them as equally relevant, while they may not be. To accurately identify lncRNA-disease associations, we propose a Matrix Factorization based LncRNA-Disease Association prediction model (MFLDA in short). MFLDA decomposes data matrices of heterogeneous data sources into low-rank matrices via matrix tri-factorization to explore and exploit their intrinsic and shared structure. MFLDA can select and integrate the data sources by assigning different weights to them. An iterative solution is further introduced to simultaneously optimize the weights and low-rank matrices. Next, MFLDA uses the optimized low-rank matrices to reconstruct the lncRNA-disease association matrix and thus to identify potential associations. In 5-fold cross-validation experiments to identify verified lncRNA-disease associations, MFLDA achieves an area under the receiver operating characteristic curve (AUC) of 0.7408, at least 3% higher than those given by state-of-the-art data fusion based computational models. An empirical study on identifying masked lncRNA-disease associations again shows that MFLDA can identify potential associations more accurately than competing models. A case study on identifying lncRNAs associated with breast, lung and stomach cancers shows that 38 out of 45 (84%) associations predicted by MFLDA are supported by recent biomedical literature, further proving the capability of MFLDA in identifying novel lncRNA-disease associations. MFLDA is a general data fusion framework, and as such it can be adopted to predict associations between other biological entities. The source code for MFLDA is available at: http://mlda.swu.edu.cn/codes.php?name=MFLDA. Contact: gxyu@swu.edu.cn. Supplementary data are available at Bioinformatics online.
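
    The underlying idea, reconstructing a sparse association matrix from low-rank factors and ranking the unobserved entries, can be sketched with plain multiplicative-update NMF; MFLDA's tri-factorization and weighting of multiple data sources are more involved than this toy:

```python
# Illustrative low-rank completion of a toy lncRNA-disease association
# matrix with multiplicative NMF updates, then scoring of unseen pairs.
# This shows the core idea only, not MFLDA's actual tri-factorization.
import numpy as np

rng = np.random.default_rng(6)
R = (rng.uniform(size=(40, 30)) < 0.1).astype(float)   # sparse 0/1 associations

k = 5
W = rng.uniform(0.1, 1.0, (40, k))
H = rng.uniform(0.1, 1.0, (k, 30))

for _ in range(300):                                   # Lee-Seung updates
    H *= (W.T @ R) / (W.T @ W @ H + 1e-9)
    W *= (R @ H.T) / (W @ H @ H.T + 1e-9)

scores = W @ H                                         # reconstructed matrix
cand = np.where(R == 0, scores, -np.inf)               # rank unseen pairs only
i, j = np.unravel_index(np.argmax(cand), cand.shape)
print(f"top novel candidate: lncRNA {i}, disease {j}, score {scores[i, j]:.3f}")
```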

  14. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720, AoI 13, as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.

  15. Paraboloid magnetospheric magnetic field model and the status of the model as an ISO standard

    NASA Astrophysics Data System (ADS)

    Alexeev, I.

    A reliable representation of the magnetic field is crucial in the framework of radiation belt modelling, especially for disturbed conditions. The empirical model developed by Tsyganenko (T96) is constructed by minimizing the rms deviation from a large magnetospheric database, and its applicability is limited mainly to quiet conditions in the solar wind along the Earth's orbit. But contrary to the internal planetary field, the external magnetospheric magnetic field sources are much more time-dependent. This is why the paraboloid magnetospheric model is constructed on a more accurate and physically consistent approach, in which each source of the magnetic field has its own relaxation timescale and a driving function based on an individual best-fit combination of the solar wind and IMF parameters. Such an approach is based on a priori information about the structure of the global magnetospheric current systems. Each current system is included as a separate block module in the magnetospheric model. As shown by spacecraft magnetometer data, there are three current systems which are the main contributors to the external magnetospheric magnetic field: the magnetopause currents, the ring current, and the tail current sheet. The paraboloid model is based on an analytical solution of the Laplace equation for each of these large-scale current systems in the magnetosphere.
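
    The modular construction described above can be caricatured in a few lines: each current system contributes its own field amplitude, relaxed toward its own solar-wind driving function with its own timescale. The driving functions, timescales, and constants below are placeholders chosen for illustration only, not the paraboloid model's actual analytical solutions.

    ```python
    import numpy as np

    # Schematic modular external-field model: each current system (magnetopause,
    # ring current, tail sheet) is a separate block whose amplitude B_i relaxes
    # toward its own driving function F_i with its own timescale tau_i.
    def f_magnetopause(p_dyn):        # placeholder: responds to dynamic pressure
        return 25.0 * np.sqrt(p_dyn)

    def f_ring_current(v_sw, bz):     # placeholder: responds to v * southward Bz
        return 0.1 * v_sw * max(0.0, -bz)

    def f_tail(v_sw, bz):             # placeholder driving function
        return 0.05 * v_sw * max(0.0, -bz)

    tau = {"mp": 0.5, "rc": 8.0, "tail": 2.0}    # hours (placeholders)
    B = {"mp": 20.0, "rc": 10.0, "tail": 15.0}   # nT (placeholders)

    dt = 0.1                                     # hours
    for _ in range(100):
        p_dyn, v_sw, bz = 2.0, 450.0, -5.0       # toy solar-wind/IMF input
        B["mp"] += dt * (f_magnetopause(p_dyn) - B["mp"]) / tau["mp"]
        B["rc"] += dt * (f_ring_current(v_sw, bz) - B["rc"]) / tau["rc"]
        B["tail"] += dt * (f_tail(v_sw, bz) - B["tail"]) / tau["tail"]

    # The total external field is the sum of the modular contributions.
    print("module amplitudes (nT):", {k: round(v, 1) for k, v in B.items()})
    ```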

  16. Cancer Related-Knowledge - Small Area Estimates

    Cancer.gov

    These model-based estimates are produced using statistical models that combine data from the Health Information National Trends Survey with auxiliary variables obtained from relevant sources, borrowing strength from other areas with similar characteristics.

  17. Density-driven transport of gas phase chemicals in unsaturated soils

    NASA Astrophysics Data System (ADS)

    Fen, Chiu-Shia; Sun, Yong-tai; Cheng, Yuen; Chen, Yuanchin; Yang, Whaiwan; Pan, Changtai

    2018-01-01

    Variations of gas phase density are responsible for advective and diffusive transport of organic vapors in unsaturated soils. Laboratory experiments were conducted to explore dense gas transport (sulfur hexafluoride, SF6) from sources of different densities through a nitrogen-filled dry soil column. Gas pressures and SF6 densities at transient state were measured along the soil column for three transport configurations (horizontal, vertically upward and vertically downward transport). These measurements and others reported in the literature were compared with simulation results obtained from two models based on different diffusion approaches: the dusty gas model (DGM) equations and a Fickian-type molar fraction-based diffusion expression. The results show that the DGM- and Fickian-based models predicted similar dense gas density profiles which matched the measured data well for horizontal transport of dense gas at low to high source densities, although the pressure variations predicted in the soil column were opposite to the measurements. The pressure evolutions predicted by both models were similar in trend to the measured ones for vertical transport of dense gas. However, differences between the dense gas densities predicted by the DGM- and Fickian-based models were discernible for vertically upward transport of dense gas even at low source densities, and the DGM-based predictions matched the measured data better than the Fickian results did. For vertically downward transport, the dense gas densities predicted by both models were not greatly different from our experimental measurements, but were substantially greater than the observations obtained from the literature, especially at high source densities. Further research will be necessary to explore the factors affecting downward transport of dense gas in soil columns. Use of the measured data to compute flux components of SF6 showed that the magnitudes of the diffusive flux component based on the Fickian-type diffusion expressions in terms of molar concentration, molar fraction and mass density fraction gradients were almost the same. However, they exceeded the result computed with the mass fraction gradient by more than 24% and the DGM-based result by more than a factor of two. As a consequence, the DGM-based total flux of SF6 was much smaller in magnitude than the Fickian result, not only for horizontal transport (diffusion-dominated) but also for vertical transport (advection and diffusion) of dense gas. In particular, the Fickian-based total flux was more than twice the magnitude of the DGM result for vertically upward transport of dense gas.
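
    To illustrate why the choice of gradient variable matters, the sketch below evaluates a Fickian diffusive flux from both a molar-fraction and a mass-fraction gradient for a toy SF6/N2 profile; the diffusion coefficient and the concentration profile are illustrative assumptions, not the experimental parameters.

    ```python
    import numpy as np

    # Compare Fickian diffusive flux formulations for a binary SF6/N2 mixture
    # along a 1-D column (all values are placeholders).
    D = 9.0e-6                       # m^2/s, approximate SF6-N2 diffusivity
    M_sf6, M_n2 = 0.146, 0.028       # kg/mol
    R_gas, T, P = 8.314, 298.0, 101325.0
    c_tot = P / (R_gas * T)          # total molar concentration (ideal gas)

    z = np.linspace(0.0, 0.5, 51)                 # column coordinate (m)
    x_sf6 = np.clip(1.0 - 2.0 * z, 0.0, 1.0)      # toy SF6 molar-fraction profile

    M_mix = x_sf6 * M_sf6 + (1 - x_sf6) * M_n2    # mixture molar mass
    w_sf6 = x_sf6 * M_sf6 / M_mix                 # SF6 mass fraction
    rho = c_tot * M_mix                           # mixture mass density (kg/m^3)

    # Molar-fraction-based flux, converted to mass units for comparison
    J_molar = -c_tot * D * np.gradient(x_sf6, z) * M_sf6   # kg/(m^2 s)
    # Mass-fraction-based flux
    J_mass = -rho * D * np.gradient(w_sf6, z)              # kg/(m^2 s)

    print("max |molar-fraction-based flux|:", abs(J_molar).max())
    print("max |mass-fraction-based flux| :", abs(J_mass).max())
    ```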

  18. The Role of Skull Modeling in EEG Source Imaging for Patients with Refractory Temporal Lobe Epilepsy.

    PubMed

    Montes-Restrepo, Victoria; Carrette, Evelien; Strobbe, Gregor; Gadeyne, Stefanie; Vandenberghe, Stefaan; Boon, Paul; Vonck, Kristl; Mierlo, Pieter van

    2016-07-01

    We investigated the influence of different skull modeling approaches on EEG source imaging (ESI), using data of six patients with refractory temporal lobe epilepsy who later underwent successful epilepsy surgery. Four realistic head models with different skull compartments, based on finite difference methods, were constructed for each patient: (i) Three models had skulls with compact and spongy bone compartments as well as air-filled cavities, segmented from either computed tomography (CT), magnetic resonance imaging (MRI) or a CT-template and (ii) one model included a MRI-based skull with a single compact bone compartment. In all patients we performed ESI of single and averaged spikes marked in the clinical 27-channel EEG by the epileptologist. To analyze at which time point the dipole estimations were closer to the resected zone, ESI was performed at two time instants: the half-rising phase and peak of the spike. The estimated sources for each model were validated against the resected area, as indicated by the postoperative MRI. Our results showed that single spike analysis was highly influenced by the signal-to-noise ratio (SNR), yielding estimations with smaller distances to the resected volume at the peak of the spike. Although averaging reduced the SNR effects, it did not always result in dipole estimations lying closer to the resection. The proposed skull modeling approaches did not lead to significant differences in the localization of the irritative zone from clinical EEG data with low spatial sampling density. Furthermore, we showed that a simple skull model (MRI-based) resulted in similar accuracy in dipole estimation compared to more complex head models (based on CT- or CT-template). Therefore, all the considered head models can be used in the presurgical evaluation of patients with temporal lobe epilepsy to localize the irritative zone from low-density clinical EEG recordings.

  19. Steady-State Ion Beam Modeling with MICHELLE

    NASA Astrophysics Data System (ADS)

    Petillo, John

    2003-10-01

    There is a need to efficiently model ion beam physics for ion implantation, chemical vapor deposition, and ion thrusters. Common to all is the need for three-dimensional (3D) simulation of volumetric ion sources, ion acceleration, and optics, with the ability to model charge exchange of the ion beam with a background neutral gas. Two pieces of physics stand out as significant: the modeling of the volumetric source and charge exchange. In the MICHELLE code, the method for modeling the plasma sheath in ion sources assumes that the electron distribution function is a Maxwellian function of electrostatic potential over electron temperature. Charge exchange is the process by which a neutral background gas exchanges an electron with a 'fast' charged particle streaming through it. An efficient method for capturing this is essential, and the model presented is based on semi-empirical collision cross section functions. This appears to be the first steady-state 3D algorithm of its type to contain multiple generations of charge exchange, work with multiple species and multiple charge state beam/source particles simultaneously, take into account the self-consistent space charge effects, and track the subsequent fast neutral particles. The solution used by MICHELLE is to combine finite element analysis with particle-in-cell (PIC) methods. The basic physics model is based on the equilibrium steady-state application of the electrostatic PIC approximation employing a conformal computational mesh. The foundation stems from the same basic model introduced in codes such as EGUN. Here, Poisson's equation is used to self-consistently include the effects of space charge on the fields, and the relativistic Lorentz equation is used to integrate the particle trajectories through those fields. The presentation will consider the complexity of modeling ion thrusters.
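
    Two of the ingredients named above can be sketched compactly: Boltzmann (Maxwellian) electrons that respond to the electrostatic potential, and a charge-exchange probability built from a semi-empirical cross section. The constants, the cross-section fit, and the function names are placeholder assumptions, not MICHELLE's implementation.

    ```python
    import numpy as np

    def electron_density(phi, n0=1e16, Te_eV=3.0):
        """Boltzmann electron density for potential phi (volts): the Maxwellian
        sheath closure, n_e = n0 * exp(e*phi / (k*Te))."""
        return n0 * np.exp(phi / Te_eV)       # phi in volts, Te in eV

    def sigma_cx(v):
        """Semi-empirical charge-exchange cross section using the common
        (a - b*ln v)^2 form; constants are placeholders, not fitted values."""
        return (6.0e-8 - 3.0e-9 * np.log(v)) ** 2 * 1e-4    # m^2

    def p_charge_exchange(v, n_gas, ds):
        """Probability that a fast ion charge-exchanges over path length ds."""
        return 1.0 - np.exp(-n_gas * sigma_cx(v) * ds)

    v_ion = 3.0e4                             # m/s, fast ion speed
    print("n_e at phi = -2 V:", electron_density(-2.0))
    print("P_cx over 1 cm:", p_charge_exchange(v_ion, n_gas=1e19, ds=0.01))
    ```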

  20. Development of an atmospheric N2O isotopocule model and optimization procedure, and application to source estimation

    NASA Astrophysics Data System (ADS)

    Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.

    2015-07-01

    This paper presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.
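
    For context, the one-box estimates referred to above amount to a small budget calculation: a mass balance for the atmospheric burden plus an isotope balance, solved for the source signature. The sketch below uses invented illustrative numbers throughout, not the study's values.

    ```python
    # One-box N2O isotope budget (illustrative numbers only):
    #   dN/dt      = F - N/tau
    #   d(N*da)/dt = F*ds - (N/tau) * (da + eps)
    tau = 120.0       # yr, N2O lifetime (approximate)
    N = 1500.0        # Tg N, atmospheric burden (illustrative)
    dN_dt = 4.0       # Tg N/yr, observed burden growth (illustrative)
    da = 6.0          # permil, atmospheric delta (illustrative)
    dda_dt = -0.03    # permil/yr, observed atmospheric trend (illustrative)
    eps = -15.0       # permil, effective sink fractionation (illustrative)

    F = dN_dt + N / tau          # total source flux from the mass budget
    # Expand d(N*da)/dt = N*dda/dt + da*dN/dt and solve for ds:
    ds = (N * dda_dt + da * dN_dt + (N / tau) * (da + eps)) / F
    print(f"inferred source flux: {F:.1f} Tg N/yr, source delta: {ds:.2f} permil")
    ```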

  2. Evaluating environmental modeling and sampling data with biomarker data to identify sources and routes of exposure

    NASA Astrophysics Data System (ADS)

    Shin, Hyeong-Moo; McKone, Thomas E.; Bennett, Deborah H.

    2013-04-01

    Exposure to environmental chemicals results from multiple sources, environmental media, and exposure routes. Ideally, modeled exposures should be compared to biomonitoring data. This study compares the magnitude and variation of modeled polycyclic aromatic hydrocarbon (PAH) exposures resulting from emissions to outdoor and indoor air with estimated exposures inferred from biomarker levels. Outdoor emissions result in both inhalation and food-based exposures. We modeled PAH intake doses using U.S. EPA's 2002 National Air Toxics Assessment (NATA) county-level emissions data for outdoor inhalation, the CalTOX model for food ingestion (based on NATA emissions), and indoor air concentrations from field studies for indoor inhalation. We then compared the modeled intake with the measured urine levels of hydroxy-PAH metabolites from the 2001-2002 National Health and Nutrition Examination Survey (NHANES) as quantifiable human intake of PAH parent compounds. Lognormal probability plots of modeled intakes and estimated intakes inferred from biomarkers suggest that a primary route of exposure to naphthalene, fluorene, and phenanthrene for the U.S. population is likely inhalation from indoor sources. For benzo(a)pyrene, the predominant exposure route is likely food ingestion resulting from multi-pathway transport and bioaccumulation due to outdoor emissions. Multiple routes of exposure are important for pyrene. We also considered the sensitivity of the predicted exposure to the proportion of the total naphthalene production volume emitted to the indoor environment. The comparison of PAH biomarkers with exposure variability estimated from models and sample data for various exposure pathways supports the conclusion that both indoor and outdoor models are needed to capture the sources and routes of exposure to environmental contaminants.

  3. Deformation data modeling through numerical models: an efficient method for tracking magma transport

    NASA Astrophysics Data System (ADS)

    Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.

    2017-12-01

    Nowadays, multivariate data collection and robust physical models at volcano observatories are becoming crucial for providing effective volcano monitoring. Nevertheless, the forecast of volcanic eruptions is notoriously difficult. Within this frame, one of the most promising methods to evaluate volcano hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic media features such as topography and crustal heterogeneities to be included, although it is still very time-consuming to solve the inverse problem for near-real-time interpretations. Here, we present a method that can be efficiently used to estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose a FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually will lead to a better description of the status of the volcanic area. The number of Green functions is reduced here to the number of observational points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a Genetic Algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool for magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up source parameter estimation for a given volcano showing signs of unrest.
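
    A minimal sketch of the precomputed-response search is given below: each candidate source location has a stored unit-strength surface response, the strength is solved by linear least squares, and the location is found by scanning the candidate grid. The displacement kernel is an invented stand-in for FE-based Green functions, and the simple scan stands in for the Genetic Algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    stations = rng.uniform(-5e3, 5e3, size=(8, 2))       # station x, y (m)
    grid = [(x, y, d) for x in np.linspace(-2e3, 2e3, 9)
                      for y in np.linspace(-2e3, 2e3, 9)
                      for d in (1e3, 2e3, 3e3)]          # candidate sources

    def greens_uz(src, sta):
        """Unit-strength vertical response (placeholder kernel; a real run
        would load precomputed FE Green functions instead)."""
        dx, dy = sta[:, 0] - src[0], sta[:, 1] - src[1]
        R2 = dx**2 + dy**2 + src[2]**2
        return src[2] / R2**1.5

    true_src, true_strength = (500.0, -500.0, 2e3), 3e9
    d_obs = true_strength * greens_uz(true_src, stations)
    d_obs += rng.normal(0, 0.05 * abs(d_obs).max(), d_obs.shape)   # noise

    best = None
    for src in grid:
        g = greens_uz(src, stations)
        s = g @ d_obs / (g @ g)             # least-squares source strength
        misfit = np.sum((d_obs - s * g) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, src, s)
    print("best-fit location:", best[1], "strength:", f"{best[2]:.2e}")
    ```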

  4. VLA OH Zeeman Observations of the NGC 6334 Complex Source A

    NASA Astrophysics Data System (ADS)

    Mayo, E. A.; Sarma, A. P.; Troland, T. H.; Abel, N. P.

    2004-12-01

    We present a detailed analysis of the NGC 6334 complex source A, a compact continuum source in the SW region of the complex. Our intent is to determine the significance of the magnetic field in the support of the surrounding molecular cloud against gravitational collapse. We have performed OH 1665 and 1667 MHz observations taken with the Very Large Array in the BnA configuration and combined these data with the lower resolution CnB data of Sarma et al. (2000). These observations reveal magnetic fields with values of the order of 350 μG toward source A, with maximum fields reaching 500 μG. We have also theoretically modeled the molecular cloud surrounding source A using Cloudy, with model constraints based on observations. This model provides significant information on the density of H2 through the cloud and also the relative density of H2 to OH, which is important to our analysis of the region. We will combine the knowledge gained through the Cloudy modeling with virial estimates to determine the significance of the magnetic field to the dynamics and evolution of source A.

  5. A boundary condition to the Khokhlov-Zabolotskaya equation for modeling strongly focused nonlinear ultrasound fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosnitskiy, P., E-mail: pavrosni@yandex.ru; Yuldashev, P., E-mail: petr@acs366.phys.msu.ru; Khokhlova, V., E-mail: vera@acs366.phys.msu.ru

    2015-10-28

    An equivalent source model was proposed as a boundary condition to the nonlinear parabolic Khokhlov-Zabolotskaya (KZ) equation to simulate high intensity focused ultrasound (HIFU) fields generated by medical ultrasound transducers with the shape of a spherical shell. The boundary condition was set in the initial plane; the aperture, the focal distance, and the initial pressure of the source were chosen based on the best match of the axial pressure amplitude and phase distributions in the Rayleigh integral analytic solution for a spherical transducer and the linear parabolic approximation solution for the equivalent source. Analytic expressions for the equivalent source parameters were derived. It was shown that the proposed approach allowed us to transfer the boundary condition from the spherical surface to the plane and to achieve a very good match between the linear field solutions of the parabolic and full diffraction models even for highly focused sources with F-number less than unity. The proposed method can be further used to expand the capabilities of the KZ nonlinear parabolic equation for efficient modeling of HIFU fields generated by strongly focused sources.

  6. Better Assessment Science Integrating Point and Nonpoint Sources

    EPA Science Inventory

    Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) is not a model per se, but is a multipurpose environmental decision support system for use by regional, state, and local agencies in performing watershed- and water-quality-based studies. BASI...

  7. Graph-based real-time fault diagnostics

    NASA Technical Reports Server (NTRS)

    Padalkar, S.; Karsai, G.; Sztipanovits, J.

    1988-01-01

    A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques like expert systems and qualitative modelling are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate, and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.
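
    A toy version of such a fault propagation digraph is sketched below: edges carry [t_min, t_max] propagation intervals, and a hypothesized failure source is accepted only if every observed alarm time falls inside the accumulated interval. The graph, component names, and alarm times are invented for illustration.

    ```python
    # Fault propagation digraph: node -> [(successor, t_min, t_max)]
    edges = {
        "pump": [("pressure_low", 1, 3)],
        "pressure_low": [("flow_low", 2, 5)],
        "valve": [("flow_low", 1, 2)],
        "flow_low": [("temp_high", 3, 8)],
    }

    def interval_reach(source):
        """Earliest/latest fault arrival intervals from source, by relaxation."""
        best = {source: (0, 0)}
        stack = [source]
        while stack:
            node = stack.pop()
            lo, hi = best[node]
            for nxt, tmin, tmax in edges.get(node, []):
                cand = (lo + tmin, hi + tmax)
                old = best.get(nxt, cand)
                merged = (min(cand[0], old[0]), max(cand[1], old[1]))
                if nxt not in best or merged != best[nxt]:
                    best[nxt] = merged
                    stack.append(nxt)
        return best

    alarms = {"pressure_low": 2, "flow_low": 6}    # observed alarm times (s)

    for src in edges:    # hypothesize each component as the failure source
        reach = interval_reach(src)
        if all(a in reach and reach[a][0] <= t <= reach[a][1]
               for a, t in alarms.items()):
            print("consistent failure source:", src)
    ```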

  8. Fast computation of quadrupole and hexadecapole approximations in microlensing with a single point-source evaluation

    NASA Astrophysics Data System (ADS)

    Cassan, Arnaud

    2017-07-01

    The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the satellites Spitzer or Kepler/K2. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that allows one to compute the quadrupole and hexadecapole approximations of the finite-source magnification with more efficiency than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source python codes.
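
    For orientation, the sketch below pairs the standard point-source point-lens magnification with a crude finite-source estimate obtained from a handful of point-source evaluations. It mimics the spirit of the quadrupole/hexadecapole idea (a few point-source calls per finite-source point) but is not the paper's algorithm or its error-controlled expansion.

    ```python
    import numpy as np

    def A_point(u):
        """Point-source point-lens magnification at lens-source separation u."""
        return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

    def A_finite(u, rho, n_ring=8):
        """Crude finite-source estimate: average A_point over the source centre
        plus n_ring points on the source limb (radius rho)."""
        phi = 2 * np.pi * np.arange(n_ring) / n_ring
        u_ring = np.sqrt((u + rho * np.cos(phi))**2 + (rho * np.sin(phi))**2)
        return (A_point(u) + A_point(u_ring).sum()) / (n_ring + 1)

    u, rho = 0.1, 0.05    # separation and source radius in Einstein radii
    print("point-source magnification:", A_point(u))
    print("finite-source estimate    :", A_finite(u, rho))
    ```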

  9. An Open Source Model for Open Access Journal Publication

    PubMed Central

    Blesius, Carl R.; Williams, Michael A.; Holzbach, Ana; Huntley, Arthur C.; Chueh, Henry

    2005-01-01

    We describe an electronic journal publication infrastructure that allows a flexible publication workflow, academic exchange around different forms of user submissions, and the exchange of articles between publishers and archives using a common XML-based standard. This web-based application is implemented on a freely available open source software stack. This publication demonstrates the Dermatology Online Journal's use of the platform for non-biased, independent open access publication. PMID:16779183

  10. Rg excitation by underground explosions: insights from source modelling the 1997 Kazakhstan depth-of-burial experiment

    NASA Astrophysics Data System (ADS)

    Patton, Howard J.; Bonner, Jessie L.; Gupta, Indra N.

    2005-12-01

    Near-field seismograms of chemical explosions detonated as part of the 1997 depth-of-burial (DOB) experiment at the former Semipalatinsk nuclear test site provide an excellent opportunity to study the excitation of Rg waves for source effects. Rg waves were identified with particle-motion analysis and isolated from other arrivals using group velocity filtering. Amplitude and phase spectra of Rg waves were corrected for path effects based on observed attenuation in the near-field and path-specific phase velocity models. The path-corrected spectra were inputs to a grid-search method for finding source parameters of an axisymmetric source consisting of a monopole plus a compensated linear vector dipole (CLVD) or a horizontal tensile crack. The suite of observations, including ground-zero accelerograms and geophysical data from borehole logs, are best satisfied by models involving a CLVD with static (zero-frequency) seismic moment Mo. The CLVD source is related to tensile failure occurring at depths above the shotpoint. A static Mo distinguishes this source from classical models of spall, which are usually characterized by horizontal cracks that dynamically open and close with no permanent displacement (i.e. no static Mo). The CLVD source in this study appears to be more closely related to a driven block motion model envisaged by Masse. Rg source amplitudes are consistent with mb(Lg) measurements at station MAK, as would be expected if near-field Rg-to-S scattering plays a role in generating S waves observed at regional distances.
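
    The group-velocity filtering step mentioned above can be illustrated with a simple time-domain window: for a known epicentral distance, energy arriving with group velocities inside an assumed Rg window is kept and tapered. The distance, velocity window, and taper are placeholder choices, not the study's processing parameters.

    ```python
    import numpy as np

    fs = 100.0                        # samples/s
    t = np.arange(0, 60, 1 / fs)      # 60 s record
    rng = np.random.default_rng(3)
    trace = rng.normal(0, 0.1, t.size)          # toy seismogram (noise only)

    dist = 12e3                       # epicentral distance (m), placeholder
    v_min, v_max = 2500.0, 3500.0     # assumed Rg group-velocity window (m/s)
    t1, t2 = dist / v_max, dist / v_min         # corresponding time window (s)

    # Hann-shaped taper confined to the group-velocity window
    window = np.zeros_like(t)
    inside = (t >= t1) & (t <= t2)
    window[inside] = 0.5 * (1 - np.cos(2 * np.pi * (t[inside] - t1) / (t2 - t1)))

    rg_isolated = trace * window
    print(f"Rg window: {t1:.2f}-{t2:.2f} s, "
          f"energy kept: {np.sum(rg_isolated**2):.3f}")
    ```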

  11. Connecting Numerical Relativity and Data Analysis of Gravitational Wave Detectors

    NASA Astrophysics Data System (ADS)

    Shoemaker, Deirdre; Jani, Karan; London, Lionel; Pekowsky, Larne

    Gravitational waves deliver information in exquisite detail about astrophysical phenomena, among them the collision of two black holes, a system completely invisible to the eyes of electromagnetic telescopes. Models that predict gravitational wave signals from likely sources are crucial for the success of this endeavor. Modeling binary black hole sources of gravitational radiation requires solving the Einstein equations of General Relativity using powerful computer hardware and sophisticated numerical algorithms. This proceeding presents where we are in understanding gravitational waves from the merger of black holes as observed by ground-based detectors, and the implications of these sources for the advent of gravitational-wave astronomy.

  12. Articulatory speech synthesis and speech production modelling

    NASA Astrophysics Data System (ADS)

    Huang, Jun

    This dissertation addresses the problem of speech synthesis and speech production modelling based on the fundamental principles of human speech production. Unlike the conventional source-filter model, which assumes the independence of the excitation and the acoustic filter, we treat the entire vocal apparatus as one system consisting of a fluid dynamic aspect and a mechanical part. We model the vocal tract by a three-dimensional moving geometry. We also model the sound propagation inside the vocal apparatus as a three-dimensional nonplane-wave propagation inside a viscous fluid described by Navier-Stokes equations. In our work, we first propose a combined minimum energy and minimum jerk criterion to estimate the dynamic vocal tract movements during speech production. Both theoretical error bound analysis and experimental results show that this method can achieve very close match at the target points and avoid the abrupt change in articulatory trajectory at the same time. Second, a mechanical vocal fold model is used to compute the excitation signal of the vocal tract. The advantage of this model is that it is closely coupled with the vocal tract system based on fundamental aerodynamics. As a result, we can obtain an excitation signal with much more detail than the conventional parametric vocal fold excitation model. Furthermore, strong evidence of source-tract interaction is observed. Finally, we propose a computational model of the fricative and stop types of sounds based on the physical principles of speech production. The advantage of this model is that it uses an exogenous process to model the additional nonsteady and nonlinear effects due to the flow mode, which are ignored by the conventional source-filter speech production model. A recursive algorithm is used to estimate the model parameters. Experimental results show that this model is able to synthesize good quality fricative and stop types of sounds. Based on our dissertation work, we carefully argue that the articulatory speech production model has the potential to flexibly synthesize natural-quality speech sounds and to provide a compact computational model for speech production that can be beneficial to a wide range of areas in speech signal processing.
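
    The minimum-jerk half of the combined criterion can be illustrated with the classic quintic trajectory, which minimizes integrated squared jerk between two targets with zero boundary velocity and acceleration; the minimum-energy term and the articulatory details of the dissertation are not reproduced here.

    ```python
    import numpy as np

    def min_jerk(x0, xf, T, n=101):
        """Minimum-jerk trajectory from x0 to xf over duration T (quintic
        polynomial with zero start/end velocity and acceleration)."""
        t = np.linspace(0.0, T, n)
        tau = t / T
        x = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
        return t, x

    # Hypothetical articulator displacement: 1.2 cm in 0.3 s
    t, traj = min_jerk(x0=0.0, xf=1.2, T=0.3)
    vel = np.gradient(traj, t)
    print("peak velocity (cm/s):", vel.max())   # smooth bell-shaped profile
    ```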

  13. Measurement and modeling of the acoustic field near an underwater vehicle and implications for acoustic source localization.

    PubMed

    Lepper, Paul A; D'Spain, Gerald L

    2007-08-01

    The performance of traditional techniques of passive localization in ocean acoustics such as time-of-arrival (phase differences) and amplitude ratios measured by multiple receivers may be degraded when the receivers are placed on an underwater vehicle due to effects of scattering. However, knowledge of the interference pattern caused by scattering provides a potential enhancement to traditional source localization techniques. Results based on a study using data from a multi-element receiving array mounted on the inner shroud of an autonomous underwater vehicle show that scattering causes the localization ambiguities (side lobes) to decrease in overall level and to move closer to the true source location, thereby improving localization performance, for signals in the frequency band 2-8 kHz. These measurements are compared with numerical modeling results from a two-dimensional time domain finite difference scheme for scattering from two fluid-loaded cylindrical shells. Measured and numerically modeled results are presented for multiple source aspect angles and frequencies. Matched field processing techniques quantify the source localization capabilities for both measurements and numerical modeling output.
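
    A minimal matched-field processing example is sketched below using a Bartlett processor with free-space replicas; in the study, the replicas would come from the scattering-aware numerical model rather than the free-space Green's function assumed here, and the geometry and frequency are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    f, c = 4000.0, 1500.0                 # Hz, m/s (within the 2-8 kHz band)
    k = 2 * np.pi * f / c

    sensors = np.column_stack([np.zeros(8), np.linspace(0, 0.7, 8)])  # array (m)
    cands = [(x, z) for x in np.linspace(1, 10, 50)
                    for z in np.linspace(-2, 2, 21)]   # candidate positions

    def replica(src):
        """Normalized free-space replica vector for a candidate source."""
        r = np.hypot(sensors[:, 0] - src[0], sensors[:, 1] - src[1])
        v = np.exp(1j * k * r) / r
        return v / np.linalg.norm(v)

    true_src = (6.0, 0.5)
    data = replica(true_src) + 0.05 * (rng.normal(size=8) + 1j * rng.normal(size=8))

    # Bartlett power: correlation of the measured snapshot with each replica
    powers = [abs(np.vdot(replica(s), data)) ** 2 for s in cands]
    print("estimated source position:", cands[int(np.argmax(powers))])
    ```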

  14. A clustering algorithm for sample data based on environmental pollution characteristics

    NASA Astrophysics Data System (ADS)

    Chen, Mei; Wang, Pengfei; Chen, Qiang; Wu, Jiadong; Chen, Xiaoyun

    2015-04-01

    Environmental pollution has become an issue of serious international concern in recent years. Among the receptor-oriented pollution models, CMB, PMF, UNMIX, and PCA are widely used as source apportionment models. To improve the accuracy of source apportionment and classify the sample data for these models, this study proposes an easy-to-use, high-dimensional environmental pollution characteristics (EPC) clustering algorithm that not only organizes all of the sample data into different groups according to the similarities in pollution characteristics such as pollution sources and concentrations but also simultaneously detects outliers. The main clustering process consists of selecting the first unlabelled point as the cluster centre, then assigning each data point in the sample dataset to its most similar cluster centre according to both the user-defined threshold and the value of the similarity function in each iteration, and finally modifying the clusters using a method similar to k-means. The validity and accuracy of the algorithm are tested using both real and synthetic datasets, which makes the EPC algorithm practical and effective for appropriately classifying sample data for source apportionment models and helpful for better understanding and interpreting the sources of pollution.
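
    The clustering flow described above can be sketched as follows; the similarity function, threshold, and refinement loop are illustrative assumptions rather than the published EPC specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(0, 0.3, (20, 2)),     # toy "pollution profiles"
                   rng.normal(3, 0.3, (20, 2)),
                   [[10.0, 10.0]]])                 # one obvious outlier

    def similarity(a, b):
        return np.exp(-np.linalg.norm(a - b))       # assumed similarity function

    threshold = 0.2
    labels = -np.ones(len(X), dtype=int)
    centres = []
    for i, x in enumerate(X):               # first unlabelled point -> new centre
        if labels[i] != -1:
            continue
        centres.append(x.copy())
        cid = len(centres) - 1
        for j, y in enumerate(X):           # assign sufficiently similar points
            if labels[j] == -1 and similarity(y, centres[cid]) >= threshold:
                labels[j] = cid

    for _ in range(5):                      # k-means-like refinement
        for cid in range(len(centres)):
            members = X[labels == cid]
            if len(members):
                centres[cid] = members.mean(axis=0)
        d = np.array([[np.linalg.norm(x - c) for c in centres] for x in X])
        labels = d.argmin(axis=1)

    sizes = np.bincount(labels, minlength=len(centres))
    outliers = [i for i, l in enumerate(labels) if sizes[l] == 1]
    print("clusters found:", len(centres), "outlier indices:", outliers)
    ```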

  15. Electroencephalography (EEG) forward modeling via H(div) finite element sources with focal interpolation.

    PubMed

    Pursiainen, S; Vorwerk, J; Wolters, C H

    2016-12-21

    The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. While conducting an EEG evaluation, the placement of source currents in the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence conforming H(div) basis functions. Both linear and quadratic functions are used, while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position-based optimization (PBO) and the mean position/orientation (MPO) method. The results demonstrate that the present dipolar approach can reach or even surpass, at least in some respects, the accuracy of two classical reference methods, the partial integration (PI) and St. Venant (SV) approaches, which utilize monopolar loads instead of dipolar currents.

  16. The Top 10 List of Gravitational Lens Candidates from the HUBBLE SPACE TELESCOPE Medium Deep Survey

    NASA Astrophysics Data System (ADS)

    Ratnatunga, Kavan U.; Griffiths, Richard E.; Ostrander, Eric J.

    1999-05-01

    A total of 10 good candidates for gravitational lensing have been discovered in the WFPC2 images from the Hubble Space Telescope (HST) Medium Deep Survey (MDS) and archival primary observations. These candidate lenses are unique HST discoveries, i.e., they are faint systems with subarcsecond separations between the lensing objects and the lensed source images. Most of them are difficult objects for ground-based spectroscopic confirmation or for measurement of the lens and source redshifts. Seven are "strong lens" candidates that appear to have multiple images of the source. Three are cases in which the single image of the source galaxy has been significantly distorted into an arc. The first two quadruply lensed candidates were reported by Ratnatunga et al. We report on the subsequent eight candidates and describe them with simple models based on the assumption of singular isothermal potentials. Residuals from the simple models for some of the candidates indicate that a more complex model for the potential will probably be required to explain the full structural detail of the observations once they are confirmed to be lenses. We also discuss the effective survey area that was searched for these candidate lens objects.

  17. Integrating HCI Specialists into Open Source Software Development Projects

    NASA Astrophysics Data System (ADS)

    Hedberg, Henrik; Iivari, Netta

    Typical open source software (OSS) development projects are organized around technically talented developers, whose communication is based on technical aspects and source code. Decision-making power is gained through proven competence and activity in the project, and non-technical end-user opinions are too often neglected. In addition, human-computer interaction (HCI) specialists have encountered difficulties in trying to participate in OSS projects, because there seems to be no clear authority and responsibility for them. In this paper, based on HCI and OSS literature, we introduce an extended OSS development project organization model that adds a new level of communication and roles for attending to the human aspects of software. The proposed model makes the existence of HCI specialists visible in the projects, and promotes interaction between developers and the HCI specialists in the course of a project.

  18. Groundwater vulnerability and risk mapping using GIS, modeling and a fuzzy logic tool.

    PubMed

    Nobre, R C M; Rotunno Filho, O C; Mansur, W J; Nobre, M M M; Cosenza, C A N

    2007-12-07

    A groundwater vulnerability and risk mapping assessment, based on a source-pathway-receptor approach, is presented for an urban coastal aquifer in northeastern Brazil. A modified version of the DRASTIC methodology was used to map the intrinsic and specific groundwater vulnerability of a 292 km² study area. A fuzzy hierarchy methodology was adopted to evaluate the potential contaminant source index, including diffuse and point sources. Numerical modeling was performed for delineation of well capture zones, using MODFLOW and MODPATH. The integration of these elements provided the mechanism to assess groundwater pollution risks and identify areas that must be prioritized in terms of groundwater monitoring and restriction on use. A groundwater quality index based on nitrate and chloride concentrations was calculated, which had a positive correlation with the specific vulnerability index.
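
    For readers unfamiliar with DRASTIC, the intrinsic index is a weighted sum of seven rated hydrogeologic factors, as in the sketch below. The weights are the standard DRASTIC ones; the ratings are invented for a single hypothetical grid cell, and the paper's modifications and fuzzy source index are not reproduced.

    ```python
    # Standard DRASTIC weights (the capitalized letters form the acronym)
    weights = {
        "Depth_to_water": 5, "net_Recharge": 4, "Aquifer_media": 3,
        "Soil_media": 2, "Topography": 1, "Impact_of_vadose_zone": 5,
        "hydraulic_Conductivity": 3,
    }
    ratings = {   # hypothetical 1-10 ratings for one grid cell
        "Depth_to_water": 9, "net_Recharge": 6, "Aquifer_media": 8,
        "Soil_media": 5, "Topography": 10, "Impact_of_vadose_zone": 8,
        "hydraulic_Conductivity": 6,
    }
    index = sum(weights[f] * ratings[f] for f in weights)
    print("DRASTIC index:", index)   # range 23-230; higher = more vulnerable
    ```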

  19. Air Pollution Source/receptor Relationships in South Coast Air Basin, CA

    NASA Astrophysics Data System (ADS)

    Gao, Ning

    This research project includes the application of some existing receptor models to study the air pollution source/receptor relationships in the South Coast Air Basin (SoCAB) of southern California, the development of a new receptor model, and the testing and modification of some existing models. The existing receptor models used include principal component factor analysis (PCA), potential source contribution function (PSCF) analysis, Kohonen's neural network combined with Prim's minimal spanning tree (TREE-MAP), and direct trilinear decomposition followed by a matrix reconstruction. The ambient concentration measurements used in this study are a subset of the data collected during the 1987 field exercise of the Southern California Air Quality Study (SCAQS). It consists of a number of gaseous and particulate pollutants analyzed from samples collected by SCAQS samplers at eight sampling sites: Anaheim, Azusa, Burbank, Claremont, Downtown Los Angeles, Hawthorne, Long Beach, and Rubidoux. Based on the information of emission inventories, meteorology and ambient concentrations, this receptor modeling study has revealed mechanisms that influence the air quality in SoCAB. Some of the mechanisms affecting the air quality in SoCAB that were revealed during this study include the following aspects. The SO2 collected at sampling sites is mainly contributed by refineries in the coastal area and the ships equipped with oil-fired boilers offshore. Combustion of fossil fuel by automobiles dominates the emission of NOx that is subsequently transformed and collected at sampling sites. Electric power plants also contribute HNO3 to the sampling sites. A large feedlot in the eastern region of SoCAB has been identified as the major source of NH3. Possible contributions from other industrial sources such as smelters and incinerators were also revealed. The results of this study also suggest the possibility of DMS (dimethylsulfide) and NH3 emissions from offshore sediments that have been contaminated by waste sludge disposal. The study also discovered that non-anthropogenic sources account for the observation of many chemical components being brought to the sampling sites, such as sea-salt particles, soil particles, and Cl emission from the Mojave Desert. The potential and limitations of the receptor models have been evaluated and some modifications have been made to improve the value of the models. A source apportionment method has been developed based on the application results of the potential source contribution function (PSCF) analysis.

  20. [Estimation of desert vegetation coverage based on multi-source remote sensing data].

    PubMed

    Wan, Hong-Mei; Li, Xia; Dong, Dao-Rui

    2012-12-01

    Taking the lower reaches of the Tarim River in Xinjiang, Northwest China as the study area, and based on ground investigation and multi-source remote sensing data of different resolutions, estimation models for desert vegetation coverage were built, and the precisions of different estimation methods and models were compared. The results showed that with increasing spatial resolution of the remote sensing data, the precision of the estimation models increased. The estimation precision of the models based on the high, middle-high, and middle-low resolution remote sensing data was 89.5%, 87.0%, and 84.56%, respectively, and the precisions of the remote sensing models were higher than that of the vegetation index method. This study revealed the change patterns of the estimation precision of desert vegetation coverage based on different spatial resolution remote sensing data, and realized the quantitative conversion of the parameters and scales among the high, middle, and low spatial resolution remote sensing data of desert vegetation coverage, which would provide direct evidence for establishing and implementing a comprehensive remote sensing monitoring scheme for the ecological restoration in the study area.

  1. A Parametric Study of Fine-scale Turbulence Mixing Noise

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James; Freund, Jonathan B.

    2002-01-01

    The present paper is a study of aerodynamic noise spectra from model functions that describe the source. The study is motivated by the need to improve the spectral shape of the MGBK jet noise prediction methodology at high frequency. The predicted spectral shape usually appears less broadband than measurements and decays faster at high frequency. Theoretical representation of the source is based on Lilley's equation. Numerical simulations of high-speed subsonic jets as well as some recent turbulence measurements reveal a number of interesting statistical properties of turbulence correlation functions that may have a bearing on radiated noise. These studies indicate that an exponential spatial function may be a more appropriate representation of a two-point correlation compared to its Gaussian counterpart. The effect of source non-compactness on spectral shape is discussed. It is shown that source non-compactness could well be the differentiating factor between the Gaussian and exponential model functions. In particular, the fall-off of the noise spectra at high frequency is studied and it is shown that a non-compact source with an exponential model function results in a broader spectrum and better agreement with data. An alternate source model that represents the source as a covariance of the convective derivative of fine-scale turbulence kinetic energy is also examined.
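
    The Gaussian-versus-exponential point can be checked numerically: the spectrum (Fourier transform of the two-point correlation) of a Gaussian correlation decays like a Gaussian, while an exponential correlation yields an algebraically decaying, and hence broader, spectrum. The sketch below compares the two spectral tails; units and the correlation length are arbitrary.

    ```python
    import numpy as np

    L = 1.0                               # correlation length (arbitrary units)
    r = np.linspace(-50, 50, 4001)        # separation axis
    dr = r[1] - r[0]

    R_gauss = np.exp(-(r / L) ** 2)       # Gaussian two-point correlation
    R_exp = np.exp(-np.abs(r) / L)        # exponential two-point correlation

    k = 2 * np.pi * np.fft.rfftfreq(r.size, dr)       # angular wavenumber
    S_gauss = np.abs(np.fft.rfft(R_gauss)) * dr       # spectra (magnitudes)
    S_exp = np.abs(np.fft.rfft(R_exp)) * dr

    i = np.searchsorted(k, 10.0 / L)      # compare well into the spectral tail
    print(f"at k*L = 10: Gaussian spectrum {S_gauss[i]:.2e}, "
          f"exponential spectrum {S_exp[i]:.2e}")
    ```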

  2. An Overview of Virtual Acoustic Simulation of Aircraft Flyover Noise

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.

    2013-01-01

    Methods for testing human subject response to aircraft flyover noise have greatly advanced in recent years as a result of advances in simulation technology. Capabilities have been developed which now allow subjects to be immersed both visually and aurally in a three-dimensional, virtual environment. While the system is suitable for displaying recorded aircraft noise, its true potential lies in synthesizing aircraft flyover noise, because synthesis allows the flexibility and freedom to study sounds from aircraft not yet flown. A virtual acoustic simulation method is described which is built upon prediction-based source noise synthesis, engineering-based propagation modeling, and empirically-based receiver modeling. This source-path-receiver paradigm allows complete control over all aspects of flyover auralization. With this capability, it is now possible to assess human response to flyover noise by systematically evaluating source noise reductions within the context of a system level simulation. Examples of auralized flyover noise and movie clips representative of an immersive aircraft flyover environment are included in the presentation.

  3. Low resolution brain electromagnetic tomography in a realistic geometry head model: a simulation study

    NASA Astrophysics Data System (ADS)

    Ding, Lei; Lai, Yuan; He, Bin

    2005-01-01

    It is of importance to localize neural sources from scalp recorded EEG. Low resolution brain electromagnetic tomography (LORETA) has received considerable attention for localizing brain electrical sources. However, most such efforts have used spherical head models in representing the head volume conductor. Investigation of the performance of LORETA in a realistic geometry head model, as compared with the spherical model, will provide useful information guiding interpretation of data obtained by using the spherical head model. The performance of LORETA was evaluated by means of computer simulations. The boundary element method was used to solve the forward problem. A three-shell realistic geometry (RG) head model was constructed from MRI scans of a human subject. Dipole source configurations of a single dipole located at different regions of the brain with varying depth were used to assess the performance of LORETA in different regions of the brain. A three-sphere head model was also used to approximate the RG head model, similar simulations were performed, and the results were compared with those of the RG-LORETA with reference to the locations of the simulated sources. Multi-source localizations were discussed and examples given in the RG head model. Localization errors employing the spherical LORETA, with reference to the source locations within the realistic geometry head, were about 20-30 mm for the four brain regions evaluated: frontal, parietal, temporal and occipital. Localization errors employing the RG head model were about 10 mm over the same four brain regions. The present simulation results suggest that the use of the RG head model reduces the localization error of LORETA, and that the RG head model based LORETA is desirable if high localization accuracy is needed.

  4. Contamination characteristics and source apportionment of trace metals in soils around Miyun Reservoir.

    PubMed

    Chen, Haiyang; Teng, Yanguo; Chen, Ruihui; Li, Jiao; Wang, Jinsheng

    2016-08-01

    Due to their toxicity and bioaccumulation, trace metals in soils can result in a wide range of toxic effects on animals, plants, microbes, and even humans. Recognizing the contamination characteristics of soil metals and especially apportioning their potential sources are the necessary preconditions for pollution prevention and control. Over the past decades, several receptor models have been developed for source apportionment. Among them, positive matrix factorization (PMF) has gained popularity and was recommended by the US Environmental Protection Agency as a general modeling tool. In this study, an extended chemometrics model, multivariate curve resolution-alternating least squares based on maximum likelihood principal component analysis (MCR-ALS/MLPCA), was proposed for source apportionment of soil metals and applied to identify the potential sources of trace metals in soils around Miyun Reservoir. Similar to PMF, the MCR-ALS/MLPCA model can incorporate measurement error information and non-negativity constraints in its calculation procedures. Model validation with a synthetic dataset suggested that the MCR-ALS/MLPCA could extract acceptable recovered source profiles even considering relatively large error levels. When applied to identify the sources of trace metals in soils around Miyun Reservoir, the MCR-ALS/MLPCA model obtained profiles highly similar to those of PMF. On the other hand, the assessment of contamination status showed that the soils around the reservoir were polluted by trace metals to a slight-to-moderate degree but posed acceptable risks to the public. Mining activities, fertilizers and agrochemicals, and atmospheric deposition were identified as the potential anthropogenic sources, with contributions of 24.8%, 14.6%, and 13.3%, respectively. In order to protect the drinking water source of Beijing, special attention should be paid to the metal inputs to soils from mining and agricultural activities.

  5. Reconstructing Exposures from Biomarkers using Exposure-Pharmacokinetic Modeling - A Case Study with Carbaryl

    EPA Science Inventory

    Sources of uncertainty involved in exposure reconstruction for a short half-life chemical, carbaryl, were characterized using the Cumulative and Aggregate Risk Evaluation System (CARES), an exposure model, and a human physiologically based pharmacokinetic (PBPK) model. CARES was...

  6. A Microphysics-Based Black Carbon Aging Scheme in a Global Chemical Transport Model: Constraints from HIPPO Observations

    NASA Astrophysics Data System (ADS)

    He, C.; Li, Q.; Liou, K. N.; Qi, L.; Tao, S.; Schwarz, J. P.

    2015-12-01

    Black carbon (BC) aging significantly affects its distributions and radiative properties, which is an important uncertainty source in estimating BC climatic effects. Global models often use a fixed aging timescale for the hydrophobic-to-hydrophilic BC conversion or a simple parameterization. We have developed and implemented a microphysics-based BC aging scheme that accounts for condensation and coagulation processes into a global 3-D chemical transport model (GEOS-Chem). Model results are systematically evaluated by comparing with the HIPPO observations across the Pacific (67°S-85°N) during 2009-2011. We find that the microphysics-based scheme substantially increases the BC aging rate over source regions as compared with the fixed aging timescale (1.2 days), due to the condensation of sulfate and secondary organic aerosols (SOA) and coagulation with pre-existing hydrophilic aerosols. However, the microphysics-based scheme slows down BC aging over Polar regions where condensation and coagulation are rather weak. We find that BC aging is primarily dominated by condensation process that accounts for ~75% of global BC aging, while the coagulation process is important over source regions where a large amount of pre-existing aerosols are available. Model results show that the fixed aging scheme tends to overestimate BC concentrations over the Pacific throughout the troposphere by a factor of 2-5 at different latitudes, while the microphysics-based scheme reduces the discrepancies by up to a factor of 2, particularly in the middle troposphere. The microphysics-based scheme developed in this work decreases BC column total concentrations at all latitudes and seasons, especially over tropical regions, leading to large improvement in model simulations. We are presently analyzing the impact of this scheme on global BC budget and lifetime, quantifying its uncertainty associated with key parameters, and investigating the effects of heterogeneous chemical oxidation on BC aging.

  7. A virtual source model for Monte Carlo simulation of helical tomotherapy.

    PubMed

    Yuan, Jiankui; Rong, Yi; Chen, Quan

    2015-01-08

    The purpose of this study was to present a Monte Carlo (MC) simulation method based on a virtual source, jaw, and MLC model to calculate dose in the patient for helical tomotherapy without the need to calculate phase-space files (PSFs). Current studies on tomotherapy MC simulation adopt a full MC model, which includes extensive modeling of the radiation source, primary and secondary jaws, and multileaf collimator (MLC). In the full MC model, PSFs need to be created at different scoring planes to facilitate the patient dose calculations. In the present work, the virtual source model (VSM) we established was based on the gold standard beam data of a tomotherapy unit, which can be exported from the treatment planning station (TPS). The TPS-generated sinograms were extracted from the archived patient XML (eXtensible Markup Language) files. The fluence map for the MC sampling was created by incorporating the percentage leaf open time (LOT) with the leaf filter, jaw penumbra, and leaf latency obtained from the sinogram files. The VSM was validated for various geometry setups and clinical situations involving heterogeneous media and delivery quality assurance (DQA) cases. An agreement of < 1% was obtained between the measured and simulated results for percent depth doses (PDDs) and open beam profiles for all three jaw settings in the VSM commissioning. The accuracy of the VSM leaf filter model was verified by comparing the measured and simulated results for a Picket Fence pattern. An agreement of < 2% was achieved between the presented VSM and a published full MC model for heterogeneous phantoms. For complex clinical head and neck (HN) cases, the VSM-based MC simulation of DQA plans agreed with the film measurement with 98% of planar dose pixels passing the 2%/2 mm gamma criterion. For patient treatment plans, results showed comparable dose-volume histograms (DVHs) for planning target volumes (PTVs) and organs at risk (OARs). Deviations observed in this study were consistent with the literature. The VSM-based MC simulation approach can be feasibly built from the gold standard beam model of a tomotherapy unit. The accuracy of the VSM was validated against measurements in homogeneous media, as well as against a published full MC model in heterogeneous media.

  9. Targeted versus statistical approaches to selecting parameters for modelling sediment provenance

    NASA Astrophysics Data System (ADS)

    Laceby, J. Patrick

    2017-04-01

    One effective field-based approach to modelling sediment provenance is the source fingerprinting technique. Arguably, one of the most important steps in this approach is selecting the appropriate suite of parameters or fingerprints used to model source contributions. Accordingly, approaches to selecting parameters for sediment source fingerprinting will be reviewed. Thereafter, opportunities and limitations of these approaches and some future research directions will be presented. For properties to be effective tracers of sediment, they must discriminate between sources whilst behaving conservatively. Conservative behavior is characterized by constancy in sediment properties: the properties of sediment sources remain constant through sediment detachment, transportation and deposition processes, or at the very least vary in a predictable and measurable way. One approach to selecting conservative properties for sediment source fingerprinting is to identify targeted tracers, such as caesium-137, that provide specific source information (e.g. surface versus subsurface origins). A second approach is to use statistical tests to select an optimal suite of conservative properties capable of modelling sediment provenance. In general, statistical approaches use a combination of discrimination statistics (e.g. Kruskal-Wallis H-test, Mann-Whitney U-test) and parameter selection statistics (e.g. Discriminant Function Analysis or Principal Component Analysis). The challenge is that modelling sediment provenance is often not straightforward, and there is increasing debate in the literature surrounding the most appropriate approach to selecting elements for modelling. Moving forward, it would be beneficial if researchers tested their results with multiple modelling approaches, artificial mixtures, and multiple lines of evidence to provide secondary support to their initial modelling results. Indeed, element selection can greatly impact modelling results, and having multiple lines of evidence will help provide confidence when modelling sediment provenance.
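
    A minimal version of the statistical route is sketched below: a Kruskal-Wallis H-test screens each candidate tracer for discrimination between sources before any subsequent parameter-selection step. The source groups and tracer names are synthetic placeholders.

    ```python
    import numpy as np
    from scipy.stats import kruskal

    rng = np.random.default_rng(6)
    sources = {   # synthetic tracer measurements, 15 samples x 3 tracers each
        "surface": rng.normal([10, 5, 1], 1.0, (15, 3)),
        "subsurface": rng.normal([10, 9, 1], 1.0, (15, 3)),
        "channel_bank": rng.normal([10, 7, 4], 1.0, (15, 3)),
    }
    tracer_names = ["tracer_A", "tracer_B", "tracer_C"]   # hypothetical

    selected = []
    for t, name in enumerate(tracer_names):
        groups = [s[:, t] for s in sources.values()]
        H, p = kruskal(*groups)               # does the tracer discriminate?
        print(f"{name}: H = {H:.1f}, p = {p:.3g}")
        if p < 0.05:
            selected.append(name)
    # Survivors would then go to DFA/PCA and the mixing model
    print("tracers retained:", selected)
    ```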

  10. Time-Dependent Moment Tensors of the First Four Source Physics Experiments (SPE) Explosions

    NASA Astrophysics Data System (ADS)

    Yang, X.

    2015-12-01

    We use mainly vertical-component geophone data within 2 km of the epicenter to invert for time-dependent moment tensors of the first four SPE explosions: SPE-1, SPE-2, SPE-3 and SPE-4Prime. We employ a one-dimensional (1D) velocity model developed from P- and Rg-wave travel times for Green's function calculations. The attenuation structure of the model is developed from P- and Rg-wave amplitudes. We select data for the inversion based on the criterion that they show travel times and amplitude behavior consistent with those predicted by the 1D model. Due to the limited azimuthal coverage of the sources and the mostly vertical-component-only nature of the dataset, only the long-period, diagonal components of the moment tensors are well constrained. Nevertheless, the moment tensors, particularly their isotropic components, provide reasonable estimates of the long-period source amplitudes as well as estimates of corner frequencies, albeit with larger uncertainties. The estimated corner frequencies are nevertheless consistent with estimates from ratios of seismogram spectra from different explosions. These long-period source amplitudes and corner frequencies cannot be fit by classical P-wave explosion source models. The results motivate the development of new P-wave source models suitable for these chemical explosions. To that end, we fit the inverted moment-tensor spectra by modifying the classical explosion model using regressions of estimated source parameters. Although the number of data points used in the regression is small, the approach suggests a way forward for new-model development as more data are collected.
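
    To illustrate how long-period amplitudes and corner frequencies are extracted from inverted moment-tensor spectra, the sketch below fits a generic flat-level/roll-off source spectrum with scipy. This omega-squared-style form stands in for, and is not, the modified explosion model developed in the study; the data here are synthetic.

        import numpy as np
        from scipy.optimize import curve_fit

        def source_spectrum(f, m0, fc):
            """Generic source spectrum: flat long-period level m0 rolling
            off as f^-2 above the corner frequency fc."""
            return m0 / (1.0 + (f / fc) ** 2)

        # Synthetic stand-in for an inverted moment-rate spectrum
        rng = np.random.default_rng(2)
        f = np.logspace(-1, 1, 40)
        obs = source_spectrum(f, 5e12, 4.0) * np.exp(0.1 * rng.normal(size=f.size))

        (m0_est, fc_est), _ = curve_fit(source_spectrum, f, obs, p0=(obs[0], 1.0))
        print(f"M0 ~ {m0_est:.2e}, fc ~ {fc_est:.2f} Hz")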

  11. A loosely coupled framework for terminology controlled distributed EHR search for patient cohort identification in clinical research.

    PubMed

    Zhao, Lei; Lim Choi Keung, Sarah N; Taweel, Adel; Tyler, Edward; Ogunsina, Ire; Rossiter, James; Delaney, Brendan C; Peterson, Kevin A; Hobbs, F D Richard; Arvanitis, Theodoros N

    2012-01-01

    Heterogeneous data models and coding schemes for electronic health records present challenges for automated search across distributed data sources. This paper describes a loosely coupled software framework, based on the terminology-controlled approach, that enables interoperation between the search interface and heterogeneous data sources. Software components interoperate via a common terminology service and an abstract criteria model, so as to promote component reuse and incremental system evolution.
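
    A minimal sketch of the abstract-criteria/terminology-service idea follows. All class names are hypothetical stand-ins, not the framework's actual API, and the code mapping is illustrative: a source-agnostic criterion is translated into each data source's local coding scheme before dispatch.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Criterion:
            """A source-agnostic eligibility criterion expressed against a
            reference terminology (e.g. a SNOMED CT concept)."""
            concept_code: str
            scheme: str
            operator: str = "exists"
            value: Optional[float] = None

        class TerminologyService:
            """Maps reference codes to each source's local coding scheme.
            The in-memory table stands in for a real terminology server."""
            def __init__(self, mappings):
                self.mappings = mappings  # (code, scheme, target) -> local code

            def translate(self, criterion, target_scheme):
                key = (criterion.concept_code, criterion.scheme, target_scheme)
                return Criterion(self.mappings[key], target_scheme,
                                 criterion.operator, criterion.value)

        # A search broker would translate each criterion per source, then dispatch
        ts = TerminologyService({("44054006", "SNOMED-CT", "READ"): "C10F."})
        print(ts.translate(Criterion("44054006", "SNOMED-CT"), "READ"))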

  12. Effects of Source RDP Models and Near-source Propagation: Implication for Seismic Yield Estimation

    NASA Astrophysics Data System (ADS)

    Saikia, C. K.; Helmberger, D. V.; Stead, R. J.; Woods, B. B.

    It has proven difficult to uniquely untangle the source and propagation effects on the seismic data observed from underground nuclear explosions, even when large quantities of near-source, broadband data are available for analysis. This leads to uncertainties in our ability to quantify the nuclear seismic source function and, consequently, in the accuracy of seismic yield estimates for underground explosions. Extensive deterministic modeling analyses of the seismic data recorded from underground explosions at a variety of test sites have been conducted over the years, and the results of these studies suggest that variations in seismic source characteristics between test sites may be contributing to the observed differences in the magnitude/yield relations applicable at those sites. This adds to the uncertainty in seismic yield estimates for explosions at previously uncalibrated test sites. In this paper we review issues involving the relationship of Nevada Test Site (NTS) source scaling laws to those at other sites. The Joint Verification Experiment (JVE) indicates that a magnitude (mb) bias (δmb) exists between the Semipalatinsk test site (STS) in the former Soviet Union (FSU) and the Nevada Test Site (NTS) in the United States. Generally this δmb is attributed to differential attenuation in the upper mantle beneath the two test sites. This assumption results in rather large estimates of yield for large-mb tunnel shots at Novaya Zemlya. A re-examination of the US testing experiments suggests that this δmb bias can partly be explained by anomalous NTS (Pahute) source characteristics. This interpretation is based on the modeling of US events at a number of test sites. Using a modified Haskell source description, we investigated the influence of the source Reduced Displacement Potential (RDP) parameters ψ∞, K and B by fitting short- and long-period data simultaneously, including the near-field body and surface waves. In general, estimates of B and K are based on the initial P-wave pulse, which various numerical analyses show to be least affected by variations in near-source path effects. The corner-frequency parameter K is 20% lower at NTS (Pahute) than at other sites, implying larger effective source radii. The overshoot parameter B appears to be low at NTS (although variable) relative to other sites, probably due to variations in source conditions. For a low B, the near-field data require a higher value of ψ∞ to match the long-period MS and short-period mb observations. This flexibility in modeling proves useful in comparing released FSU yields against predictions based on mb and MS.
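
    For context, one widely used modified Haskell parameterization of the RDP makes the roles of ψ∞, K and B concrete: ψ∞ sets the long-period level, K scales with the corner frequency, and B controls the overshoot. The snippet below evaluates one such form (as we recall it from the Helmberger-Hadley line of models; conventions vary between papers), with illustrative, not site-calibrated, parameter values.

        import numpy as np

        def rdp_haskell(t, psi_inf, K, B):
            """Modified Haskell reduced displacement potential (one common
            form): psi(t) -> psi_inf at long times; K scales with corner
            frequency (inverse source size); B controls overshoot."""
            kt = K * t
            return psi_inf * (1.0 - np.exp(-kt) * (1.0 + kt + 0.5 * kt**2 - B * kt**3))

        t = np.linspace(0.0, 2.0, 500)
        psi = rdp_haskell(t, psi_inf=1.0, K=8.0, B=2.0)
        # With B > 0 the potential overshoots psi_inf before settling; a low B
        # with a high psi_inf trades off to fit both mb and MS observations,
        # as discussed in the abstract above.
        print(psi.max())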

  13. Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.

    PubMed

    Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens

    2005-05-01

    Functional brain imaging and source localization based on the scalp potential field require the solution of an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases the source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full-resolution lead field, with a localization accuracy that was not significantly different from an exhaustive search through a fully sampled source space. The technique is therefore applicable to anatomically realistic, subject-specific forward models for applications with spatially concentrated source activity.
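
    The core computation inside such inverse solvers is a regularized minimum-norm estimate over the sampled candidate space; the adaptive element restricts where that space is sampled. The sketch below, with hypothetical dimensions, shows one coarse solve and the selection of an active subset for refinement; it is a simplification, not the paper's standardized LORETA/FOCUSS iteration.

        import numpy as np

        def min_norm_inverse(L, y, lam=1e-2):
            """Regularized minimum-norm estimate x = L^T (L L^T + lam*c I)^-1 y."""
            G = L @ L.T
            c = np.trace(G) / G.shape[0]           # scale regularizer to the data
            return L.T @ np.linalg.solve(G + lam * c * np.eye(G.shape[0]), y)

        rng = np.random.default_rng(3)
        L = rng.normal(size=(32, 500))             # 32 sensors, 500 coarse sources
        x_true = np.zeros(500); x_true[123] = 1.0
        y = L @ x_true

        x = min_norm_inverse(L, y)
        active = np.argsort(np.abs(x))[-25:]       # keep ~5% of candidates
        # Next iteration: re-sample the source space at higher resolution only
        # around the 'active' locations, and solve again on the refined grid.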

  14. Software Toolbox Development for Rapid Earthquake Source Optimisation Combining InSAR Data and Seismic Waveforms

    NASA Astrophysics Data System (ADS)

    Isken, Marius P.; Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Bathke, Hannes M.

    2017-04-01

    We present a modular open-source software framework (pyrocko, kite, grond; http://pyrocko.org) for rapid InSAR data post-processing and for modelling tectonic and volcanic displacement fields derived from satellite data. Our aim is to ease and streamline the joint optimisation of earthquake observations from InSAR and GPS data together with seismological waveforms, for an improved estimation of rupture parameters. Through this approach we can provide finite models of earthquake ruptures and therefore contribute to a timely and better understanding of earthquake kinematics. The new kite module enables fast processing of unwrapped InSAR scenes for source modelling: spatial sub-sampling and data error/noise estimation for the interferogram are evaluated automatically and interactively. The rupture's near-field surface displacement data are then combined with seismic far-field waveforms and jointly modelled using the pyrocko.gf framework, which allows fast forward modelling based on pre-calculated elastodynamic and elastostatic Green's functions. Lastly, the grond module supplies a bootstrap-based probabilistic (Monte Carlo) joint optimisation to estimate the parameters and uncertainties of a finite-source earthquake rupture model. We describe the developed and applied methods as an effort to establish a semi-automatic processing and modelling chain. The framework is applied to Sentinel-1 data from the 2016 Central Italy earthquake sequence, for which we present the earthquake mechanism and rupture model, from which we derive regions of increased Coulomb stress. The open-source software framework is developed at GFZ Potsdam and at the University of Kiel, Germany; it is written in the Python and C programming languages. The toolbox architecture is modular and independent and can be utilized flexibly for a variety of geophysical problems. This work is conducted within the BridGeS project (http://www.bridges.uni-kiel.de), funded by the German Research Foundation DFG through an Emmy Noether grant.
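
    To convey the bootstrap-based probabilistic optimisation idea in isolation, the following self-contained numpy sketch re-weights the observations per bootstrap chain and collects each chain's best model; the spread of the resulting ensemble maps model uncertainty. This is a conceptual stand-in under simplified assumptions, not the grond API.

        import numpy as np

        rng = np.random.default_rng(4)

        def bootstrap_optimise(d_obs, forward, sample_prior, n_boot=20, n_draw=2000):
            """Each chain draws Dirichlet observation weights, scores random
            prior draws, and keeps the best model under its weighting."""
            weights = rng.dirichlet(np.ones(d_obs.size), size=n_boot)
            best = []
            for w in weights:
                models = [sample_prior() for _ in range(n_draw)]
                scores = [w @ np.abs(forward(m) - d_obs) for m in models]
                best.append(models[int(np.argmin(scores))])
            return np.array(best)

        # Toy linear 'source' problem with two parameters
        true = np.array([1.5, -0.7])
        X = rng.normal(size=(50, 2))
        d = X @ true + 0.1 * rng.normal(size=50)
        ens = bootstrap_optimise(d, lambda m: X @ m, lambda: rng.uniform(-2, 2, 2))
        print(ens.mean(axis=0), ens.std(axis=0))   # ensemble mean and spread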

  15. Physical models and primary design of reactor based slow positron source at CMRR

    NASA Astrophysics Data System (ADS)

    Wang, Guanbo; Li, Rundong; Qian, Dazhi; Yang, Xin

    2018-07-01

    Slow positron facilities are widely used in materials science. A high-intensity slow positron source is now at the design stage, based on the China Mianyang Research Reactor (CMRR). This paper describes the physical models and our preliminary design. We use different computer programs or mathematical formulas to simulate the different physical processes, and validate them with appropriate experiments. Considering feasibility, we propose a preliminary design containing a cadmium shield, a honeycomb-arranged assembly of tungsten (W) tubes, electrical lenses, and a solenoid. It is planned to be inserted vertically into the Si-doping channel, and the beam intensity is expected to be 5 × 10⁹

  16. Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; Sigloch, Karin

    2016-11-01

    Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, the problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random ambient background noise does. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, which measure misfit sample by sample from distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on the signal decorrelation D = 1 - CC between observed and modelled broadband seismograms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D in our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of the seismic stations. By identifying and quantifying this likelihood function, we make D, and thus waveform cross-correlation measurements, usable for fully probabilistic sampling strategies in source inversion and related applications such as seismic tomography.
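
    The decorrelation misfit and its log-normal likelihood are straightforward to compute; the sketch below shows both for a single trace pair. The moments mu and sigma of ln D are placeholders for values that would be estimated from a quality-controlled reference set such as the one described above.

        import numpy as np

        def decorrelation(obs, syn):
            """D = 1 - CC for two equal-length, demeaned waveforms (zero lag)."""
            o = obs - obs.mean()
            s = syn - syn.mean()
            cc = (o @ s) / np.sqrt((o @ o) * (s @ s))
            return 1.0 - cc

        def log_normal_loglike(D, mu, sigma):
            """Log-likelihood of D under a log-normal noise model on D."""
            return (-np.log(D * sigma * np.sqrt(2.0 * np.pi))
                    - (np.log(D) - mu) ** 2 / (2.0 * sigma ** 2))

        rng = np.random.default_rng(5)
        syn = np.sin(np.linspace(0.0, 10.0, 400))
        obs = syn + 0.2 * rng.normal(size=400)
        D = decorrelation(obs, syn)
        print(D, log_normal_loglike(D, mu=-3.0, sigma=0.8))   # mu, sigma assumed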

  17. 1-D/3-D geologic model of the Western Canada Sedimentary Basin

    USGS Publications Warehouse

    Higley, D.K.; Henry, M.; Roberts, L.N.R.; Steinshouer, D.W.

    2005-01-01

    The 3-D geologic model of the Western Canada Sedimentary Basin comprises 18 stacked intervals from the base of the Devonian Woodbend Group and age-equivalent formations to the ground surface; it includes an estimated thickness of eroded sediments based on 1-D burial history reconstructions for 33 wells across the study area. Each interval for the construction of the 3-D model was chosen on the basis of whether it is primarily composed of the petroleum system elements of reservoir, hydrocarbon source, seal, overburden, or underburden strata, as well as the quality and areal distribution of well and other data. Preliminary results of the modeling support the following interpretations. Long-distance migration of hydrocarbons east of the Rocky Mountains is indicated by oil and gas accumulations in areas where source rocks are thermally immature for oil and (or) gas. Petroleum systems in the basin are segmented by the northeast-trending Sweetgrass Arch; hydrocarbons west of the arch came from source rocks lying near or beneath the Rocky Mountains, whereas oil and gas east of the arch were sourced from the Williston Basin. Hydrocarbon generation and migration are primarily due to increased burial associated with the Laramide Orogeny. Hydrocarbon sources and migration were also influenced by the Lower Cretaceous sub-Mannville unconformity. In the Peace River Arch area of northern Alberta, Jurassic and older formations exhibit high-angle truncations against the unconformity, and potential Paleozoic through Mesozoic hydrocarbon source rocks are in contact with the overlying Mannville Group reservoir facies. In contrast, in Saskatchewan and southern Alberta the contacts are parallel to sub-parallel, with the result that hydrocarbon source rocks are separated from the Mannville Group by seal-forming strata within the Jurassic. Vertical and lateral movement of hydrocarbons along faults in the Rocky Mountain deformed belt probably also resulted in mixing of oil and gas from numerous source rocks in Alberta.

  18. Python-Based Applications for Hydrogeological Modeling

    NASA Astrophysics Data System (ADS)

    Khambhammettu, P.

    2013-12-01

    Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Add-on packages supporting fast array computation (numpy), plotting (matplotlib), and scientific/mathematical functions (scipy) have resulted in a powerful ecosystem for scientists interested in exploratory data analysis, high-performance computing, and data visualization. Three examples are provided to demonstrate the applicability of the Python environment in hydrogeological applications. Python programs were used to model an aquifer test and estimate aquifer parameters at a Superfund site. The aquifer test conducted at a groundwater circulation well was modeled with the Python/FORTRAN-based TTIM Analytic Element Code. The aquifer parameters were estimated with PEST such that a good match was produced between the simulated and observed drawdowns; Python scripts were written to interface with PEST and visualize the results. A convolution-based approach was used to estimate source concentration histories from observed concentrations at receptor locations. Unit Response Functions (URFs) that relate the receptor concentrations to a unit release at the source were derived with the ATRANS code. The impact of any release at the source could then be estimated by convolving the source release history with the URFs. Python scripts were written to compute and visualize receptor concentrations for user-specified source histories. The framework provided a simple and elegant way to test various hypotheses about the site. A Python/FORTRAN-based program, TYPECURVEGRID-Py, was developed to compute and visualize groundwater elevations and drawdown through time in response to a regional uniform hydraulic gradient and the influence of pumping wells, using either the Theis solution for a fully confined aquifer or the Hantush-Jacob solution for a leaky confined aquifer. The program supports an arbitrary number of wells that can operate according to arbitrary schedules. The Python wrapper invokes the underlying FORTRAN layer to compute transient groundwater elevations and processes this information to create time-series and 2D plots.
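
    The convolution step at the heart of the source-history screening is a one-liner in numpy. The sketch below uses an entirely hypothetical unit response function and release history; in a real application the URF would come from a transport code such as ATRANS.

        import numpy as np

        # Hypothetical URF: receptor concentration per unit monthly release
        urf = 0.01 * np.exp(-0.1 * np.arange(120))   # 10-year URF, arbitrary shape

        # Candidate source release history (mass per month) to test
        release = np.zeros(240)
        release[12:36] = 5.0                         # a two-year release episode

        # Superposition: receptor concentration is the convolution of the
        # release history with the URF, truncated to the simulation window
        conc = np.convolve(release, urf)[: release.size]
        print(conc.max())

    Different hypothesized release histories can then be compared against the observed receptor record without re-running the transport model.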

  19. Localization of diffusion sources in complex networks with sparse observations

    NASA Astrophysics Data System (ADS)

    Hu, Zhao-Long; Shen, Zhesi; Tang, Chang-Bing; Xie, Bin-Bin; Lu, Jian-Feng

    2018-04-01

    Locating sources in a large network is of paramount importance for reducing the spread of disruptive behavior. Based on the backward diffusion-based method and integer programming, we propose an efficient approach to locating sources in complex networks with a limited number of observers. The results on model networks and empirical networks demonstrate that, for a given fraction of observers, the accuracy of our method for source localization improves as the network size increases. Moreover, compared with a previous method (the maximum-minimum method), our method performs much better with a small fraction of observers, especially in heterogeneous networks. Furthermore, our method is more robust to observation noise and to the strategy used for choosing observers.
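
    A simplified variant of observer-based source localization can be written in a few lines with networkx: under unit propagation speed, arrival times at observers should differ from the true source's shortest-path distances by a common offset. The least-squares scoring below is a stand-in for the integer-programming formulation in the paper; the graph and observers are synthetic.

        import networkx as nx
        import numpy as np

        def locate_source(G, arrivals):
            """Return the node whose shortest-path distances to the observers
            best explain the arrival times up to a common start-time offset."""
            best, best_err = None, np.inf
            times = np.array(list(arrivals.values()), dtype=float)
            for cand in G.nodes:
                dists = np.array([nx.shortest_path_length(G, cand, o)
                                  for o in arrivals])
                err = np.var(times - dists)    # spread around the common offset
                if err < best_err:
                    best, best_err = cand, err
            return best

        G = nx.connected_watts_strogatz_graph(200, 6, 0.3, seed=6)
        true_src = 0
        obs = {n: nx.shortest_path_length(G, true_src, n) for n in (10, 50, 90, 150)}
        print(locate_source(G, obs))   # recovers node 0 for noise-free arrivals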

  20. Source Attribution of Near-surface Ozone in the Western US: Improved Estimates by TF HTAP2 Multi-model Experiment and Multi-scale Chemical Data Assimilation

    NASA Astrophysics Data System (ADS)

    Huang, M.; Bowman, K. W.; Carmichael, G. R.; Lee, M.; Park, R.; Henze, D. K.; Chai, T.; Flemming, J.; Lin, M.; Weinheimer, A. J.; Wisthaler, A.; Jaffe, D. A.

    2014-12-01

    Near-surface ozone in the western US can be sensitive to background pollutants transported from the free troposphere over the eastern Pacific, as well as to various local emission sources. Accurately estimating ozone source contributions in this region is of strong policy relevance as air quality standards become more stringent. Here we improve modeled contributions from local and non-local sources to western US ozone based on the HTAP2 (Task Force on Hemispheric Transport of Air Pollution) multi-model experiment, along with multi-scale chemical data assimilation. We simulate western US air quality using the STEM regional model on a 12 km horizontal resolution grid, during the NASA ARCTAS field campaign period in June 2008. STEM simulations use time-varying boundary conditions downscaled from global GEOS-Chem model simulations. The standard GEOS-Chem simulation overall underpredicted ozone at 1-5 km over the eastern Pacific, resulting in underestimated contributions from transported background pollutants to surface ozone inland. These negative biases can be reduced by using the output from several global models that support the HTAP2 experiment, all of which ran with the HTAP2 harmonized emission inventory and also calculated the contributions from East Asian anthropogenic emissions. We demonstrate that the biases in the GEOS-Chem boundary conditions can be reduced more efficiently by assimilating satellite ozone profiles from the Tropospheric Emission Spectrometer (TES) instrument using the three-dimensional variational (3D-Var) approach. Based upon these TES-constrained GEOS-Chem boundary conditions, we then update regional nitrogen dioxide and isoprene emissions in STEM through four-dimensional variational (4D-Var) assimilation of Ozone Monitoring Instrument (OMI) nitrogen dioxide columns and NASA DC-8 aircraft isoprene measurements. The 4D-Var assimilation spatially redistributed the emissions of nitrogen oxides and isoprene from various US sources and at the same time updated the modeled ozone and its US source contributions. Compared with the available independent measurements during this period (e.g., ozone observed on the DC-8 aircraft and at EPA and Mt. Bachelor monitoring stations), the modeled ozone fields after the multi-scale assimilation show overall improvement.
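
    For readers unfamiliar with the assimilation machinery, a single 3D-Var analysis step reduces to the textbook update x_a = x_b + K(y - Hx_b) with gain K = BH^T(HBH^T + R)^-1. The toy numpy sketch below illustrates this building block only; all matrices and values are invented and bear no relation to the operational GEOS-Chem/STEM systems.

        import numpy as np

        def var3d_update(xb, y, H, B, R):
            """One 3D-Var analysis step: x_a = x_b + K (y - H x_b),
            with Kalman gain K = B H^T (H B H^T + R)^-1."""
            K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
            return xb + K @ (y - H @ xb)

        # Toy 5-level ozone profile and two satellite-like observations
        xb = np.array([30.0, 40.0, 55.0, 70.0, 90.0])          # background (ppb)
        H = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
                      [0.0, 0.0, 0.5, 0.5, 0.0]])              # observation operator
        B = 25.0 * np.eye(5)                                   # background errors
        R = 4.0 * np.eye(2)                                    # observation errors
        y = H @ xb + np.array([6.0, 8.0])                      # obs above background
        print(var3d_update(xb, y, H, B, R))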
