Mathematical Modeling Of A Nuclear/Thermionic Power Source
NASA Technical Reports Server (NTRS)
Vandersande, Jan W.; Ewell, Richard C.
1992-01-01
Report discusses mathematical modeling to predict the performance and lifetime of a spacecraft power source that is an integrated combination of a nuclear-fission reactor and thermionic converters. Details of the nuclear reaction, thermal conditions in the core, and thermionic performance are combined with a model of fuel swelling.
Evaluation of the Combined AERCOARE/AERMOD Modeling Approach for Offshore Sources
ENVIRON conducted an evaluation of the combined AERCOARE/AERMOD (AERCOARE-MOD) modeling approach for offshore sources using tracer data from four field studies. AERCOARE processes overwater meteorological data for use by the AERMOD air quality dispersion model (EPA, 2004a). AERC...
NASA Technical Reports Server (NTRS)
Snyder, Gregory A.; Taylor, Lawrence A.; Neal, Clive R.
1992-01-01
A chemical model for simulating the sources of the lunar mare basalts was developed by considering a modified mafic cumulate source formed during the combined equilibrium and fractional crystallization of a lunar magma ocean (LMO). The parameters which influence the initial LMO and its subsequent crystallization are examined, and both trace and major elements are modeled. It is shown that major elements tightly constrain the composition of mare basalt sources and the pathways to their creation. The ability of this LMO model to generate viable mare basalt source regions was tested through a case study involving the high-Ti basalts.
Comparing models of the combined-stimulation advantage for speech recognition.
Micheyl, Christophe; Oxenham, Andrew J
2012-05-01
The "combined-stimulation advantage" refers to an improvement in speech recognition when cochlear-implant or vocoded stimulation is supplemented by low-frequency acoustic information. Previous studies have been interpreted as evidence for "super-additive" or "synergistic" effects in the combination of low-frequency and electric or vocoded speech information by human listeners. However, this conclusion was based on predictions of performance obtained using a suboptimal high-threshold model of information combination. The present study shows that a different model, based on Gaussian signal detection theory, can predict surprisingly large combined-stimulation advantages, even when performance with either information source alone is close to chance, without involving any synergistic interaction. A reanalysis of published data using this model reveals that previous results, which have been interpreted as evidence for super-additive effects in perception of combined speech stimuli, are actually consistent with a more parsimonious explanation, according to which the combined-stimulation advantage reflects an optimal combination of two independent sources of information. The present results do not rule out the possible existence of synergistic effects in combined stimulation; however, they emphasize the possibility that the combined-stimulation advantages observed in some studies can be explained simply by non-interactive combination of two information sources.
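The optimal-combination prediction described above is easy to reproduce numerically. Below is a minimal sketch, assuming an equal-variance Gaussian signal detection theory model of a two-alternative forced-choice task in which independent cues combine as d'_comb = sqrt(d'_1^2 + d'_2^2); the d' values are illustrative and not taken from the study.

```python
import numpy as np
from scipy.stats import norm

def pc_2afc(d_prime):
    """Proportion correct in a 2AFC task under equal-variance Gaussian SDT."""
    return norm.cdf(d_prime / np.sqrt(2.0))

# Sensitivity of each information source alone (illustrative values:
# each source alone is barely above chance).
d_acoustic, d_electric = 0.5, 0.5

# Optimal, non-synergistic combination of two independent Gaussian cues.
d_combined = np.hypot(d_acoustic, d_electric)

for label, d in [("acoustic", d_acoustic), ("electric", d_electric),
                 ("combined", d_combined)]:
    print(f"{label:9s} d' = {d:.2f}   P(correct) = {pc_2afc(d):.3f}")
```

Even with both single-source scores near chance, the combined score rises appreciably, which is the point of the reanalysis: a large combined-stimulation advantage does not by itself imply synergy.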
Finite Element modelling of deformation induced by interacting volcanic sources
NASA Astrophysics Data System (ADS)
Pascal, Karen; Neuberg, Jürgen; Rivalta, Eleonora
2010-05-01
The displacement field due to magma movements in the subsurface is commonly modelled using the solutions for a point source (Mogi, 1958), a finite spherical source (McTigue, 1987), or a dislocation source (Okada, 1992) embedded in a homogeneous elastic half-space. When the magmatic system comprises more than one source, several such sources are combined and their respective deformation fields summed, which violates the assumption of homogeneity in the half-space. We have investigated the effects of neglecting the interaction between sources on the surface deformation field. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and tested them against the solutions of corresponding numerical 3D finite element models. We implemented several models combining spherical pressure sources and dislocation sources, varying their relative position. Furthermore we considered the impact of topography, loading, and magma compressibility. To quantify the discrepancies and compare the various models, we calculated the difference between analytical and numerical maximum horizontal or vertical surface displacements. We will demonstrate that for certain conditions combining analytical sources can cause an error of up to 20%. References: McTigue, D. F. (1987), Elastic Stress and Deformation Near a Finite Spherical Magma Body: Resolution of the Point Source Paradox, J. Geophys. Res. 92, 12931-12940. Mogi, K. (1958), Relations between the eruptions of various volcanoes and the deformations of the ground surfaces around them, Bull Earthquake Res Inst, Univ Tokyo 36, 99-134. Okada, Y. (1992), Internal Deformation Due to Shear and Tensile Faults in a Half-Space, Bulletin of the Seismological Society of America 82(2), 1018-1040.
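A minimal sketch of the superposition being tested here, assuming the standard Mogi point-source surface displacements for a volume change dV at depth d (u_z proportional to d/R^3, u_r to r/R^3): two adjacent sources are evaluated independently and their fields simply summed, which is exactly the approximation whose error the study quantifies. All parameter values are illustrative.

```python
import numpy as np

def mogi_uz_ur(x, x0, depth, dV, nu=0.25):
    """Vertical and radial surface displacement of a Mogi point source.

    x     : surface coordinates along a profile (m)
    x0    : horizontal source position (m)
    depth : source depth (m)
    dV    : source volume change (m^3)
    """
    r = x - x0
    R3 = (r**2 + depth**2) ** 1.5
    c = (1.0 - nu) * dV / np.pi
    return c * depth / R3, c * r / R3   # u_z, u_r

x = np.linspace(-10e3, 10e3, 401)

# Two adjacent sources; simple superposition ignores their interaction.
uz1, ur1 = mogi_uz_ur(x, x0=-1500.0, depth=3000.0, dV=1e6)
uz2, ur2 = mogi_uz_ur(x, x0=+1500.0, depth=3000.0, dV=1e6)
uz, ur = uz1 + uz2, ur1 + ur2

print(f"max combined uplift: {uz.max() * 100:.2f} cm")
```

The finite element comparison in the study measures how far this summed field departs from a fully interacting solution as the source spacing shrinks.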
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gastelum, Zoe N.; Whitney, Paul D.; White, Amanda M.
2013-07-15
Pacific Northwest National Laboratory has spent several years researching, developing, and validating large Bayesian network models to support integration of open source data sets for nuclear proliferation research. Our current work focuses on generating a set of interrelated models for multi-source assessment of nuclear programs, as opposed to a single comprehensive model. By using this approach, we can break down the models to cover logical sub-problems that can utilize different expertise and data sources. This approach allows researchers to utilize the models individually or in combination to detect and characterize a nuclear program and identify data gaps. The models operate at various levels of granularity, covering a combination of state-level assessments with more detailed models of site or facility characteristics. This paper will describe the current open source-driven, nuclear nonproliferation models under development, the pros and cons of the analytical approach, and areas for additional research.
COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS
Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
NASA Astrophysics Data System (ADS)
Pascal, K.; Neuberg, J. W.; Rivalta, E.
2011-12-01
The displacement field due to magma movements in the subsurface is commonly modelled using the solutions for a point source (Mogi, 1958), a finite spherical source (McTigue, 1987), or a dislocation source (Okada, 1992) embedded in a homogeneous elastic half-space. When the magmatic system is represented by several sources, their respective deformation fields are summed, and the assumption of homogeneity in the half-space is violated. We have investigated the effects of neglecting the interaction between sources on the surface deformation field. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and we tested them against the solutions of corresponding numerical 3D finite element models. We implemented several models combining spherical pressure sources and dislocation sources, varying the pressure or opening of the sources and their relative position. We also investigated various numerical methods to model a dike as a dislocation tensile source or as a pressurized tabular crack. In the former case, the dike opening was either defined as two boundaries displaced from a central location, or as one boundary displaced relative to the other. We finally considered two case studies based on the Soufrière Hills Volcano (Montserrat, West Indies) and the Dabbahu rift segment (Afar, Ethiopia) magmatic systems. We found that the discrepancies between a simple superposition of the displacement fields and a fully interacting numerical solution depend mostly on the source types and on their spacing. Their magnitude may be comparable with the errors due to neglecting the topography, inhomogeneities in crustal properties or more realistic rheologies. In the models considered, the errors induced by neglecting the source interaction are negligible (<5%) when the sources are separated by at least 4 radii for two combined Mogi sources and by at least 3 radii for juxtaposed Mogi and Okada sources. Furthermore, this study underlines fundamental issues related to the numerical method chosen to model a dike or a magma chamber. It clearly demonstrates that, while magma compressibility can be neglected to model the deformation due to one source or distant sources, it is necessary to take it into account in models combining close sources.
On precisely modelling surface deformation due to interacting magma chambers and dykes
NASA Astrophysics Data System (ADS)
Pascal, Karen; Neuberg, Jurgen; Rivalta, Eleonora
2014-01-01
Combined data sets of InSAR and GPS allow us to observe surface deformation in volcanic settings. However, at the vast majority of volcanoes, a detailed 3-D structure that could guide the modelling of deformation sources is not available, due to the lack of tomography studies, for example. Therefore, volcano ground deformation due to magma movement in the subsurface is commonly modelled using simple point (Mogi) or dislocation (Okada) sources, embedded in a homogeneous, isotropic and elastic half-space. When data sets are too complex to be explained by a single deformation source, the magmatic system is often represented by a combination of these sources and their displacement fields are simply summed. By doing so, the assumption of homogeneity in the half-space is violated and the resulting interaction between sources is neglected. We have quantified the errors of such a simplification and investigated the limits within which the combination of analytical sources is justified. We have calculated the vertical and horizontal displacements for analytical models with adjacent deformation sources and have tested them against the solutions of corresponding 3-D finite element models, which account for the interaction between sources. We have tested various double-source configurations with either two spherical sources representing magma chambers, or a magma chamber and an adjacent dyke, modelled by a rectangular tensile dislocation or a pressurized crack. For a tensile Okada source (representing an opening dyke) aligned with or superposed on a Mogi source (magma chamber), we find the discrepancies with the numerical models to be insignificant (<5 per cent) independently of the source separation. However, if a Mogi source is placed side by side with an Okada source (in the strike-perpendicular direction), we find the discrepancies to become significant for a source separation of less than four times the radius of the magma chamber. For horizontally or vertically aligned pressurized sources, the discrepancies are up to 20 per cent, which translates into surprisingly large errors when inverting deformation data for source parameters such as depth and volume change. Beyond 8 radii, however, we demonstrate that the summation of analytical sources represents adjacent magma chambers correctly.
NASA Astrophysics Data System (ADS)
Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes
2017-04-01
In the last few years, impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors aided these developments. The base of openly available earthquake observations has expanded vastly since the two powerful Sentinel-1 SAR sensors went into orbit. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now, data error propagation is widely implemented and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. InSAR-derived surface displacements and seismological waveforms are also combined more regularly, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences between geodetic and seismological earthquake source modelling are shrinking towards common source-medium descriptions and a near-field/far-field view of the data. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join in the community efforts with the particular goal to improve crustal earthquake source inferences in generally poorly instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimation as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are replaced by simple planar finite rupture models. 1-D layered medium models are implemented for both near- and far-field data predictions. A highlight of our approach is a weak dependence on earthquake bulletin information: hypocenter locations and source origin times are relatively free source model parameters. We present this harmonized source modelling environment based on example earthquake studies, e.g. the 2010 Haiti earthquake, the 2009 L'Aquila earthquake and others. We discuss the benefit of combined-data non-linear modelling for the resolution of first-order rupture parameters, e.g. location, size, orientation, mechanism, moment/slip and rupture propagation. The presented studies apply our newly developed software tools, which build on the open-source seismological software toolbox pyrocko (www.pyrocko.org) in the form of modules. We aim to facilitate a better exploitation of open global data sets for a wide community studying tectonics, but the tools are applicable also to a large range of regional to local earthquake studies. Our developments therefore ensure great flexibility in the parametrization of medium models (e.g. 1-D to 3-D medium models), source models (e.g. explosion sources, full moment tensor sources, heterogeneous slip models, etc.) and of the predicted data (e.g. (high-rate) GPS, strong motion, tilt). This work is conducted within the project "Bridging Geodesy and Seismology" (www.bridges.uni-kiel.de) funded by the German Research Foundation DFG through an Emmy Noether grant.
Identifying the sources of dissolved inorganic nitrogen (DIN) in estuaries is complicated by the multiple sources, temporal variability in inputs, and variations in transport. We used a hydrodynamic model to simulate the transport and uptake of three sources of DIN (oceanic, riv...
Validation and calibration of structural models that combine information from multiple sources.
Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A
2017-02-01
Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
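Viewed as estimation, calibration reduces to choosing parameter values that minimize a discrepancy between model outputs and calibration targets. A minimal sketch with SciPy follows; the two-parameter "structural model" and its targets are hypothetical stand-ins, not from the review.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative structural model: outputs depend on two unknown parameters.
def model_outputs(theta):
    incidence, progression = theta
    # Hypothetical outputs: 5-year cumulative incidence, mean disease duration.
    return np.array([1.0 - np.exp(-5.0 * incidence),
                     1.0 / progression])

# Calibration targets from external data, with standard errors.
targets = np.array([0.12, 8.0])
target_se = np.array([0.02, 1.5])

def residuals(theta):
    # Standardized residuals: calibration treated as (weighted) estimation.
    return (model_outputs(theta) - targets) / target_se

fit = least_squares(residuals, x0=[0.05, 0.2],
                    bounds=([1e-6, 1e-6], [1.0, 5.0]))
print("calibrated parameters:", fit.x)
```

Treating the standardized residuals as an estimation objective is what makes identifiability checks and consistency comparisons across data sources possible, the point the commentary stresses.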
A quantitative approach to combine sources in stable isotope mixing models
Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...
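For context, the linear mass-balance equations underlying such mixing models can be solved directly when sources are few. The sketch below, assuming illustrative δ13C/δ15N signatures for three sources, enforces the closure constraint (fractions sum to one) and nonnegativity via scipy.optimize.nnls.

```python
import numpy as np
from scipy.optimize import nnls

# Source isotope signatures (rows: d13C, d15N; columns: 3 sources).
sources = np.array([[-24.0, -18.0, -12.0],    # d13C (per mil)
                    [  4.0,   9.0,   6.0]])   # d15N (per mil)
mixture = np.array([-17.0, 6.8])

# Mass balance: sum_i f_i * delta_ij = delta_mix_j, with sum_i f_i = 1.
# The closure constraint is appended as an extra, heavily weighted equation.
w = 100.0
A = np.vstack([sources, w * np.ones(3)])
b = np.concatenate([mixture, [w]])

fractions, _ = nnls(A, b)
print("source fractions:", np.round(fractions, 3))
```

Combining (aggregating) similar sources before the solve amounts to replacing two columns by their weighted average, which is one of the aggregation choices these papers evaluate.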
Quantifying the errors due to the superposition of analytical deformation sources
NASA Astrophysics Data System (ADS)
Neuberg, J. W.; Pascal, K.
2012-04-01
The displacement field due to magma movement in the subsurface is often modelled using a Mogi point source or an Okada dislocation source embedded in a homogeneous elastic half-space. When the magmatic system cannot be modelled by a single source, it is often represented by several sources whose respective deformation fields are superimposed. However, in such a case the assumption of homogeneity in the half-space is violated and the interaction between sources in an elastic medium is neglected. In this investigation we have quantified the effects of neglecting the interaction between sources on the surface deformation field. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and we tested them against the solutions of corresponding numerical 3D finite element models. We implemented several models combining spherical pressure sources and dislocation sources, varying the pressure or dislocation of the sources and their relative position. We also investigated three numerical methods to model a dike as a dislocation tensile source or as a pressurized tabular crack. We found that the discrepancies between a simple superposition of the displacement fields and a fully interacting numerical solution depend mostly on the source types and on their spacing. The errors induced when neglecting the source interaction are expected to vary greatly with the physical and geometrical parameters of the model. We demonstrated that for certain scenarios these discrepancies can be neglected (<5%) when the sources are separated by at least 4 radii for two combined Mogi sources and by at least 3 radii for juxtaposed Mogi and Okada sources.
Marbjerg, Gerd; Brunskog, Jonas; Jeong, Cheol-Ho; Nilsson, Erling
2015-09-01
A model combining acoustical radiosity and the image source method, including phase shifts on reflection, has been developed. The model is denoted Phased Acoustical Radiosity and Image Source Method (PARISM), and it has been developed in order to be able to model both specular and diffuse reflections with complex-valued and angle-dependent boundary conditions. This paper mainly describes the combination of the two models and the implementation of the angle-dependent boundary conditions. It furthermore describes how a pressure impulse response is obtained from the energy-based acoustical radiosity by regarding the model as stochastic. Three methods of implementation are proposed and investigated, and finally, recommendations are made for their use. Validation of the image source method is done by comparison with finite element simulations of a rectangular room with a porous absorber ceiling. Results from the full model are compared with results from other simulation tools and with measurements. The comparisons of the full model are done for real-valued and angle-independent surface properties. The proposed model agrees well with both the measured results and the alternative theories, and furthermore shows a more realistic spatial variation than energy-based methods because interference is considered.
SOURCE AGGREGATION IN STABLE ISOTOPE MIXING MODELS: LUMP IT OR LEAVE IT?
A common situation when stable isotope mixing models are used to estimate source contributions to a mixture is that there are too many sources to allow a unique solution. To resolve this problem one option is to combine sources with similar signatures such that the number of sou...
A Simulated Environment Experiment on Annoyance Due to Combined Road Traffic and Industrial Noises.
Marquis-Favre, Catherine; Morel, Julien
2015-07-21
Total annoyance due to combined noises is still difficult to predict adequately. This scientific gap is an obstacle for noise action planning, especially in urban areas where inhabitants are usually exposed to high noise levels from multiple sources. In this context, this work aims to highlight the potential to enhance the prediction of total annoyance. The work is based on a simulated environment experiment where participants performed activities in a living room while exposed to combined road traffic and industrial noises. The first objective of the experiment presented in this paper was to gain further understanding of the effects on annoyance of some acoustical factors, non-acoustical factors and potential interactions between the combined noise sources. The second was to assess total annoyance models constructed from the data collected during the experiment and tested using data gathered in situ. The results obtained in this work highlighted the superiority of perceptual models. In particular, perceptual models with an interaction term seemed to be the best predictors for the two combined noise sources under study, even with high differences in sound pressure level. Thus, these results reinforced the need to focus on perceptual models and to improve the prediction of partial annoyances.
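A perceptual total-annoyance model with an interaction term, of the kind the experiment favours, is structurally just a regression on the partial annoyances and their product. A minimal sketch on synthetic ratings (coefficients and scales are invented, not the study's):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200

# Partial annoyances with road traffic and industrial noise (0-10 scale).
a_road = rng.uniform(0, 10, n)
a_ind = rng.uniform(0, 10, n)

# Synthetic "total annoyance" ratings with a small interaction effect.
total = np.clip(0.55 * a_road + 0.40 * a_ind + 0.02 * a_road * a_ind
                + rng.normal(0, 0.8, n), 0, 10)

# Perceptual model with an interaction term, fitted by least squares.
X = np.column_stack([a_road, a_ind, a_road * a_ind, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, total, rcond=None)
print("road, industrial, interaction, intercept:", np.round(coef, 3))
```

Psychophysical models differ only in using sound pressure levels rather than partial annoyances as predictors, which is the comparison the study makes.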
Aydin, Ümit; Vorwerk, Johannes; Küpper, Philipp; Heers, Marcel; Kugel, Harald; Galka, Andreas; Hamid, Laith; Wellmer, Jörg; Kellinghaus, Christoph; Rampp, Stefan; Wolters, Carsten Hermann
2014-01-01
To increase the reliability of the non-invasive determination of the irritative zone in presurgical epilepsy diagnosis, we introduce here a new experimental and methodological source analysis pipeline that combines the complementary information in EEG and MEG, and apply it to data from a patient suffering from refractory focal epilepsy. Skull conductivity parameters in a six-compartment finite element head model with brain anisotropy, constructed from individual MRI data, are estimated in a calibration procedure using somatosensory evoked potential (SEP) and field (SEF) data. These data are measured in a single run before acquisition of further runs of spontaneous epileptic activity. Our results show that even for single interictal spikes, volume conduction effects dominate over noise and need to be taken into account for accurate source analysis. While cerebrospinal fluid and brain anisotropy influence both modalities, only EEG is sensitive to skull conductivity, and conductivity calibration significantly reduces the difference between the two modalities, especially in depth localization, emphasizing its importance for combining EEG and MEG source analysis. On the other hand, localization differences which are due to the distinct sensitivity profiles of EEG and MEG persist. In case of a moderate error in skull conductivity, combined source analysis results can still profit from the different sensitivity profiles of EEG and MEG to accurately determine location, orientation and strength of the underlying sources. Conversely, significant errors in skull modeling are reflected in EEG reconstruction errors and could reduce the goodness of fit to combined datasets. For combined EEG and MEG source analysis, we therefore recommend calibrating skull conductivity using additionally acquired SEP/SEF data. PMID:24671208
NASA Astrophysics Data System (ADS)
Pongmala, Khemngeun; Autixier, Laurène; Madoux-Humery, Anne-Sophie; Fuamba, Musandji; Galarneau, Martine; Sauvé, Sébastien; Prévost, Michèle; Dorner, Sarah
2015-12-01
Urban source water protection requires knowledge of sources of fecal contamination upstream of drinking water intakes. Combined and sanitary sewer overflows (CSOs and SSOs) are primary sources of microbiological contamination and wastewater micropollutants (WWMPs) in urban water supplies. To quantify the impact of sewer overflows, predictive simulation models are required and have not been widely applied for microbial contaminants such as fecal indicator bacteria and pathogens in urban drainage networks. The objective of this study was to apply a simulation model to estimate the dynamics of three contaminants in sewer overflows - total suspended solids, Escherichia coli (E. coli) and carbamazepine, a WWMP. A mixed combined and pseudo-sanitary drainage network in Québec, Canada was studied and modelled for a total of 7 events for which water quality data were available. Model results were significantly correlated with field water quality data. The model confirmed that the contributions of E. coli from runoff and sewer deposits were minor and their dominant source was from sewage. In contrast, the main sources of total suspended solids were stormwater runoff and sewer resuspension. Given that it is not present in stormwater, carbamazepine was found to be a useful stable tracer of sewage contributions to total contaminant loads and also provided an indication of the fraction of total suspended solids originating from sewer deposits because of its similar response to increasing flowrates.
Gille, Laure-Anne; Marquis-Favre, Catherine; Lam, Kin-Che
2017-11-30
Structural equation modeling was used to analyze partial and total in situ annoyance in combined transportation noise situations. A psychophysical total annoyance model and a perceptual total annoyance model were proposed. Results show a high contribution of Noise exposure and Noise sensitivity to Noise annoyance, as well as a causal relationship between noise annoyance and lower Dwelling satisfaction. Moreover, the Visibility of noise source may increase noise annoyance, even when the visible noise source is different from the annoying source under study. With regard to total annoyance due to road traffic noise combined with railway or aircraft noise, even though in both situations road traffic noise may be considered background noise and the other noise source event noise, the contribution of road traffic noise to the models is greater than that of railway noise and smaller than that of aircraft noise. This finding may be explained by the difference in sound pressure levels between these two types of combined exposures or by the aircraft noise level, which may also indicate the city in which the respondents live. Finally, the results highlight the importance of sample size and variable distribution in the database, as different results can be observed depending on the sample or variables considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu
2014-05-07
Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in material microstructural morphology; the second source involves random fluctuations in grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and bulk and interfacial dissipation to quantify the time to criticality (t_c) of individual samples, allowing the probability distribution of the time to criticality that results from each source of stochastic variation for a material to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity. The first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing the stochasticity in the material behavior that arises out of multiple types of uncertainty associated with the structure, design, synthesis and processing of materials.
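The two superposition ideas can be sketched with illustrative distributions: a series/parallel-style combination of per-source ignition probabilities, and a nested draw in which interface strength varies within each sampled microstructure. The Weibull/lognormal forms and all parameters below are assumptions for illustration only, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

def weibull_cdf(t, scale, shape):
    """P(time to criticality <= t) for a Weibull distribution."""
    return 1.0 - np.exp(-(t / scale) ** shape)

# Two stochastic sources, each described here by an (assumed) Weibull.
p_micro = weibull_cdf(2.0, scale=3.0, shape=2.5)   # microstructure variation
p_bond  = weibull_cdf(2.0, scale=2.5, shape=1.8)   # interfacial-strength variation

# Model 1: series/parallel-style combinations of the two probabilities.
p_series   = 1.0 - (1.0 - p_micro) * (1.0 - p_bond)  # either source suffices
p_parallel = p_micro * p_bond                        # both required

# Model 2: nested sampling -- interface strength drawn within each microstructure.
micro_scales = rng.lognormal(np.log(3.0), 0.3, size=100_000)
t_c = micro_scales * rng.weibull(1.8, size=100_000)
p_nested = (t_c <= 2.0).mean()

print(f"series {p_series:.3f}  parallel {p_parallel:.3f}  nested {p_nested:.3f}")
```

The nested construction is what the paper's preferred nested Weibull description fits: the outer distribution modulates the scale of the inner one rather than combining independent failure probabilities.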
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tim, U.S.; Jolly, R.
1994-01-01
Considerable progress has been made in developing physically based, distributed parameter, hydrologic/water quality (H/WQ) models for planning and control of nonpoint-source pollution. The widespread use of these models is often constrained by the excessive and time-consuming input data demands and the lack of computing efficiencies necessary for iterative simulation of alternative management strategies. Recent developments in geographic information systems (GIS) provide techniques for handling large amounts of spatial data for modeling nonpoint-source pollution problems. Because a GIS can be used to combine information from several sources to form an array of model input data and to examine any combination of spatial input/output data, it represents a highly effective tool for H/WQ modeling. This paper describes the integration of a distributed-parameter model (AGNPS) with a GIS (ARC/INFO) to examine nonpoint sources of pollution in an agricultural watershed. The ARC/INFO GIS provided the tools to generate and spatially organize the disparate data to support modeling, while the AGNPS model was used to predict several water quality variables including soil erosion and sedimentation within a watershed. The integrated system was used to evaluate the effectiveness of several alternative management strategies in reducing sediment pollution in a 417-ha watershed located in southern Iowa. The implementation of vegetative filter strips and contour buffer (grass) strips resulted in a 41 and 47% reduction in sediment yield at the watershed outlet, respectively. In addition, when the integrated system was used, the combination of the above management strategies resulted in a 71% reduction in sediment yield. In general, the study demonstrated the utility of integrating a simulation model with GIS for nonpoint-source pollution control and planning. Such techniques can help characterize the diffuse sources of pollution at the landscape level. 52 refs., 6 figs., 1 tab.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contributions of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility, using different grouping strategies for uncertainty components. Variance-based sensitivity analysis is thus extended to investigate the importance of a wider range of uncertainty sources: scenario, model, and other combinations of uncertainty components that can represent key model system processes (e.g., the groundwater recharge process, the flow reactive transport process). For test and demonstration purposes, the developed methodology was applied to a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty source formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and for decision-makers formulating policies and strategies.
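The variance-based measure at the core of such a framework is the first-order Sobol' index, estimable with a pick-freeze Monte Carlo scheme; grouping uncertainty components simply means resampling a whole block of inputs as one factor. A minimal sketch on a toy function standing in for the groundwater model (the function and sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """Toy stand-in for the groundwater model; columns are uncertain inputs."""
    return x[:, 0] + 2.0 * x[:, 1] + x[:, 0] * x[:, 2]

n, d = 200_000, 3
A = rng.uniform(0.0, 1.0, (n, d))
B = rng.uniform(0.0, 1.0, (n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

# First-order Sobol index per factor (Saltelli-style pick-freeze estimator).
# Grouping components = replacing a whole block of columns at once instead of one.
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # resample only factor i
    S_i = np.mean(yB * (model(ABi) - yA)) / var_y
    print(f"S_{i + 1} = {S_i:.3f}")
```

Swapping a set of columns jointly yields the index of that group, which is how scenario-level or process-level importance can be scored in one framework.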
Added-value joint source modelling of seismic and geodetic data
NASA Astrophysics Data System (ADS)
Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank
2013-04-01
In tectonically active regions, earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only at a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These prerequisite model parameters have to be given, assumed or inferred in a previous, separate and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined data integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The implementation allows for fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g. the high definition of the source location from geodetic data and the sensitivity of seismic data to moment release at greater depth. These increased constraints from the combined data set make optimizations efficient, even for larger model parameter spaces and with very limited a priori assumptions on the source. A vital part of our approach is rigorous data weighting based on empirically estimated data errors. We construct full data error variance-covariance matrices for geodetic data to account for correlated data noise, and we also weight the seismic data based on their signal-to-noise ratio. The estimation of the data errors and the fast forward modelling open the door for Bayesian inference of the source model parameters. The source model product then features parameter uncertainty estimates and reveals parameter trade-offs that arise from imperfect data coverage and data errors. We applied our new source modelling approach to the 2010 Haiti earthquake, for which a number of apparently different seismic, geodetic and joint source models have already been reported - mostly without any model parameter uncertainty estimation. We show that the variability of all these source models seems to arise from inherent model parameter trade-offs and mostly has little statistical significance: even using a large data set comprising seismic and geodetic data, the confidence interval of the fault dip remains as wide as about 20 degrees.
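The "rigorous data weighting" step amounts to evaluating misfit as r^T C^{-1} r with a full data error variance-covariance matrix. A minimal sketch, assuming an exponential covariance model for correlated InSAR noise (all parameter values illustrative):

```python
import numpy as np

def whitened_residual(residual, cov):
    """Return C^{-1/2} r via Cholesky, so that its squared norm is r^T C^-1 r."""
    L = np.linalg.cholesky(cov)
    return np.linalg.solve(L, residual)

# Illustrative correlated InSAR noise: exponential covariance over distance.
x = np.linspace(0.0, 50.0, 60)                    # km along a profile
dist = np.abs(x[:, None] - x[None, :])
cov = (5e-3) ** 2 * np.exp(-dist / 10.0)          # 5 mm noise, 10 km corr. length

rng = np.random.default_rng(2)
residual = rng.multivariate_normal(np.zeros(x.size), cov)

chi2 = np.sum(whitened_residual(residual, cov) ** 2)
print(f"chi^2 = {chi2:.1f} for {x.size} data")    # ~ n if errors are consistent
```

Weighting through C^{-1} rather than per-point variances is what keeps correlated noise from masquerading as signal in the optimization.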
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Biegalski, S.; Bowyer, Ted W.
2014-01-01
Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout from the Fukushima Daiichi nuclear accident in March 2011. Atmospheric transport modeling (ATM) of plumes of noble gases and particulates was performed soon after the accident to determine plausible detection locations of any radioactive releases to the atmosphere. We combine sampling data from multiple International Monitoring System (IMS) locations in a new way to estimate the magnitude and time sequence of the releases. Dilution factors from the modeled plume at five different detection locations were combined with 57 atmospheric concentration measurements of 133-Xe taken from March 18 to March 23 to estimate the source term. This approach estimates that 59% of the 1.24×10¹⁹ Bq of 133-Xe present in the reactors at the time of the earthquake was released to the atmosphere over a three-day period. Source term estimates from combinations of detection sites have lower spread than estimates based on measurements at single detection sites. Sensitivity cases based on data from four or more detection locations bound the source term between 35% and 255% of the available xenon inventory.
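The inversion described, linking modeled dilution factors to measured concentrations, can be posed as nonnegative least squares. The sketch below is a toy version: the dilution-factor matrix is random rather than produced by an ATM run, and only the measurement count (57) and the xenon inventory (1.24×10¹⁹ Bq) echo the abstract.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# M[i, j]: modeled dilution factor from release interval j to measurement i
# (concentration per unit activity released); an ATM run would supply this.
n_meas, n_release = 57, 3
M = rng.lognormal(mean=-30.0, sigma=1.0, size=(n_meas, n_release))

true_release = np.array([4e18, 2e18, 1e18])   # Bq per interval (illustrative)
measured = M @ true_release * rng.lognormal(0.0, 0.2, n_meas)  # noisy 133-Xe conc.

release_hat, _ = nnls(M, measured)
print("estimated release per interval [Bq]:", release_hat)
print("fraction of inventory released:", release_hat.sum() / 1.24e19)
```

Pooling rows from several detection sites into one system is the "combination" step that tightens the estimate relative to single-site inversions.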
Self organising hypothesis networks: a new approach for representing and structuring SAR knowledge
2014-01-01
Background: Combining different sources of knowledge to build improved structure-activity relationship models is not easy owing to the variety of knowledge formats and the absence of a common framework for interoperating between learning techniques. Most current approaches address this problem by using consensus models that operate at the prediction level. We explore the possibility of directly combining these sources at the knowledge level, with the aim of harvesting potentially increased synergy at an earlier stage. Our goal is to design a general methodology to facilitate knowledge discovery and produce accurate and interpretable models. Results: To combine models at the knowledge level, we propose to decouple the learning phase from the knowledge application phase using a pivot representation (lingua franca) based on the concept of hypothesis. A hypothesis is a simple and interpretable knowledge unit. Regardless of its origin, knowledge is broken down into a collection of hypotheses. These hypotheses are subsequently organised into a hierarchical network. This unification makes it possible to combine different sources of knowledge within a common formalised framework. The approach allows us to create a synergistic system between different forms of knowledge, and new algorithms can be applied to leverage this unified model. This first article focuses on the general principle of the Self Organising Hypothesis Network (SOHN) approach in the context of binary classification problems, along with an illustrative application to the prediction of mutagenicity. Conclusion: It is possible to represent knowledge in the unified form of a hypothesis network, allowing interpretable predictions with performances comparable to mainstream machine learning techniques. This new approach offers the potential to combine knowledge from different sources into a common framework in which high-level reasoning and meta-learning can be applied; these perspectives will be explored in future work. PMID:24959206
Luoma, Pekka; Natschläger, Thomas; Malli, Birgit; Pawliczek, Marcin; Brandstetter, Markus
2018-05-12
A model recalibration method based on additive Partial Least Squares (PLS) regression is generalized for multi-adjustment scenarios of independent variance sources (referred to as additive PLS - aPLS). aPLS allows for effortless model readjustment under changing measurement conditions and the combination of independent variance sources with the initial model by means of additive modelling. We demonstrate these distinguishing features on two NIR spectroscopic case studies. In case study 1, aPLS was used as a readjustment method for an emerging offset. The achieved RMS error of prediction (1.91 a.u.) was of a similar level as before the offset occurred (2.11 a.u.). In case study 2, a calibration combining different variance sources was conducted. The achieved performance was sufficient, with an absolute error better than 0.8% of the mean concentration, thereby compensating for the negative effects of two independent variance sources. The presented results show the applicability of the aPLS approach. The main advantages of the method are that the original model stays unadjusted and that the modelling is conducted on concrete changes in the spectra, thus supporting efficient (in most cases straightforward) modelling. Additionally, the method is put into the context of existing machine learning algorithms.
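The aPLS algorithm itself is not reproduced here, but its core idea, leaving the original calibration model untouched while modelling a new variance source additively, can be illustrated with a plain PLS model plus an additive correction estimated on a few transfer samples. Everything below (synthetic spectra, offset, sample counts) is invented for illustration and is not the authors' method.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)

# Synthetic "spectra": 300 wavelengths, concentration encoded in two bands.
def spectra(conc, offset=0.0):
    wl = np.linspace(0, 1, 300)
    base = (np.exp(-((wl - 0.3) / 0.05) ** 2)
            + 0.5 * np.exp(-((wl - 0.7) / 0.08) ** 2))
    X = conc[:, None] * base[None, :] + offset
    return X + 0.01 * rng.standard_normal(X.shape)

c_train = rng.uniform(1, 10, 80)
pls = PLSRegression(n_components=3).fit(spectra(c_train), c_train)

# A new measurement condition introduces a baseline offset the model never saw.
c_new = rng.uniform(1, 10, 20)
X_new = spectra(c_new, offset=0.2)

# Additive readjustment: estimate the new variance source on a few transfer
# samples and correct predictions, leaving the original model unadjusted.
bias = (pls.predict(X_new[:5]).ravel() - c_new[:5]).mean()
pred = pls.predict(X_new).ravel() - bias

print("RMSEP after additive correction:", np.sqrt(np.mean((pred - c_new) ** 2)))
```

The design choice shared with aPLS is that the correction is modelled on the concrete change in the data rather than by refitting the original calibration.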
Combination of acoustical radiosity and the image source method.
Koutsouris, Georgios I; Brunskog, Jonas; Jeong, Cheol-Ho; Jacobsen, Finn
2013-06-01
A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part. The model is based on conservation of acoustical energy. Losses are taken into account by the energy absorption coefficient, and the diffuse reflections are controlled via the scattering coefficient, which defines the portion of energy that has been diffusely reflected. The way the model is formulated allows for a dynamic control of the image source production, so that no fixed maximum reflection order is required. The model is optimized for energy impulse response predictions in arbitrary polyhedral rooms. The predictions are validated by comparison with published measured data for a real music studio hall. The proposed model turns out to be promising for acoustic predictions providing a high level of detail and accuracy.
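A minimal sketch of the image-source half of such a combined model: first-order image sources of a rectangular room yield an energy impulse response, with each reflection attenuated by (1 - α) and spherical spreading. This toy version ignores higher orders, scattering, and phase, all of which the paper's model handles; room and source/receiver positions are illustrative.

```python
import numpy as np

c = 343.0                      # speed of sound, m/s
L = np.array([8.0, 6.0, 3.0])  # room dimensions, m
alpha = 0.3                    # energy absorption coefficient (all walls)
src = np.array([2.0, 3.0, 1.5])
rec = np.array([6.0, 2.0, 1.2])

# First-order image sources: mirror the source across each of the 6 walls.
images = []
for axis in range(3):
    for wall in (0.0, L[axis]):
        img = src.copy()
        img[axis] = 2.0 * wall - src[axis]
        images.append(img)

# Energy impulse response: direct path plus one reflection per image source.
d0 = np.linalg.norm(rec - src)
events = [(d0 / c, 1.0 / d0**2)]
for img in images:
    d = np.linalg.norm(rec - img)
    events.append((d / c, (1.0 - alpha) / d**2))   # one wall hit

for t, e in sorted(events):
    print(f"t = {t * 1000:6.2f} ms   relative energy = {e:.4f}")
```

In the combined model, the energy routed away from the specular paths by the scattering coefficient is what feeds the radiosity part instead.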
Luyckx, Kim; Luyten, Léon; Daelemans, Walter; Van den Bulcke, Tim
2016-01-01
Objective: Enormous amounts of healthcare data are becoming increasingly accessible through the large-scale adoption of electronic health records. In this work, structured and unstructured (textual) data are combined to assign clinical diagnostic and procedural codes (specifically ICD-9-CM) to patient stays. We investigate whether integrating these heterogeneous data types improves prediction strength compared to using the data types in isolation. Methods: Two separate data integration approaches were evaluated. Early data integration combines features of several sources within a single model, and late data integration learns a separate model per data source and combines these predictions with a meta-learner. This is evaluated on data sources and clinical codes from a broad set of medical specialties. Results: When compared with the best individual prediction source, late data integration leads to improvements in predictive power (e.g., overall F-measure increased from 30.6% to 38.3% for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes), while early data integration is less consistent. The predictive strength strongly differs between medical specialties, both for ICD-9-CM diagnostic and procedural codes. Discussion: Structured data provide complementary information to unstructured data (and vice versa) for predicting ICD-9-CM codes. This can be captured most effectively by the proposed late data integration approach. Conclusions: We demonstrated that models using multiple electronic health record data sources systematically outperform models using data sources in isolation in the task of predicting ICD-9-CM codes over a broad range of medical specialties. PMID:26316458
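Late data integration as described, one model per source plus a meta-learner over their predictions, corresponds to stacking. A minimal scikit-learn sketch on synthetic data; the split of columns into "structured" and "text" features is invented for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

# Synthetic stand-in: columns 0-9 "structured" data, columns 10-29 "text" features.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=10,
                           random_state=0)

take = lambda sl: FunctionTransformer(lambda Z, sl=sl: Z[:, sl])
structured = make_pipeline(take(slice(0, 10)), LogisticRegression(max_iter=1000))
text = make_pipeline(take(slice(10, 30)), LogisticRegression(max_iter=1000))

# Late integration: a meta-learner combines per-source predicted probabilities.
late = StackingClassifier(
    estimators=[("structured", structured), ("text", text)],
    final_estimator=LogisticRegression())

# Early integration baseline: one model over all features combined.
early = LogisticRegression(max_iter=1000)

for name, clf in [("early", early), ("late", late)]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name} integration accuracy: {score:.3f}")
```

The meta-learner sees only out-of-fold predictions from the base models, which is what lets it weight sources by their actual reliability per task.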
NASA Astrophysics Data System (ADS)
Qin, Y.; Oduyemi, K.
Anthropogenic aerosol (PM10) emission sources sampled at an air quality monitoring station in Dundee have been analysed; however, information on local natural aerosol emission sources was unavailable. A method that combines a receptor model and an atmospheric dispersion model was used to identify aerosol sources and estimate source contributions to air pollution. The receptor model identified five sources: an aged marine aerosol source with some chlorine replaced by sulphate, a secondary aerosol source of ammonium sulphate, a secondary aerosol source of ammonium nitrate, a soil and construction dust source, and an incinerator and fuel-oil burning emission source. For the vehicle emission source, which has been comprehensively described in the atmospheric emission inventory but cannot be identified by the receptor model, an atmospheric dispersion model was used to estimate its contribution. In Dundee, a significant percentage, 67.5%, of the aerosol mass sampled at the study station could be attributed to the six sources named above.
Stenroos, Matti; Hauk, Olaf
2013-01-01
The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity of the most important compartment, the skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically shaped three-layer anatomical head models were constructed, and forward models were built with the Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on the amplitudes of lead fields and spatial filter vectors, but the effect on the corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, uncertainty with respect to skull conductivity should not prevent researchers from applying minimum-norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259
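For reference, the L2 minimum-norm estimator studied here has the closed form s_hat = L^T (L L^T + lambda^2 I)^{-1} y. A minimal sketch with a random matrix standing in for the BEM lead field; dimensions and regularization are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

n_sensors, n_sources = 64, 500
L = rng.standard_normal((n_sensors, n_sources))   # stand-in lead-field matrix

# Simulate one active source plus sensor noise.
s_true = np.zeros(n_sources)
s_true[123] = 1.0
y = L @ s_true + 0.05 * rng.standard_normal(n_sensors)

# Minimum-norm estimate: s_hat = L^T (L L^T + lam^2 I)^(-1) y.
lam = 0.5
G = L @ L.T + lam**2 * np.eye(n_sensors)
s_hat = L.T @ np.linalg.solve(G, y)

print("true source index:", 123,
      "  peak |estimate| index:", int(np.argmax(np.abs(s_hat))))
```

Because conductivity errors perturb L mainly in amplitude rather than morphology, the spatial filter rows change little, which is the mechanism behind the robustness reported above.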
The Modeling Environment for Total Risks studies (MENTOR) system, combined with an extension of the SHEDS (Stochastic Human Exposure and Dose Simulation) methodology, provide a mechanistically consistent framework for conducting source-to-dose exposure assessments of multiple pol...
VLA OH Zeeman Observations of the NGC 6334 Complex Source A
NASA Astrophysics Data System (ADS)
Mayo, E. A.; Sarma, A. P.; Troland, T. H.; Abel, N. P.
2004-12-01
We present a detailed analysis of the NGC 6334 complex source A, a compact continuum source in the SW region of the complex. Our intent is to determine the significance of the magnetic field in the support of the surrounding molecular cloud against gravitational collapse. We have performed OH 1665 and 1667 MHz observations taken with the Very Large Array in the BnA configuration and combined these data with the lower-resolution CnB data of Sarma et al. (2000). These observations reveal magnetic fields with values of the order of 350 μG toward source A, with maximum fields reaching 500 μG. We have also theoretically modeled the molecular cloud surrounding source A using Cloudy, with model constraints based on observations. This model provides significant information on the density of H2 through the cloud and also the density of H2 relative to OH, which is important to our analysis of the region. We will combine the knowledge gained through the Cloudy modeling with virial estimates to determine the significance of the magnetic field to the dynamics and evolution of source A.
NASA Astrophysics Data System (ADS)
Costa, A.; Stutenbecker, L.; Anghileri, D.; Bakker, M.; Lane, S. N.; Molnar, P.; Schlunegger, F.
2017-12-01
In Alpine basins, sediment production and transfer are increasingly affected by climate change and human activities, specifically hydropower exploitation. Changes in sediment sources and pathways significantly influence basin management, biodiversity and landscape evolution. We explore the dynamics of sediment sources in a partially glaciated and highly regulated Alpine basin, the Borgne basin, by combining geochemical fingerprinting with the modelling of erosion and sediment transfer. The Borgne basin in southwest Switzerland is composed of three main litho-tectonic units, which we characterised following a tributary-sampling approach from lithologically characteristic sub-basins. We analysed bulk geochemistry using lithium borate fusion coupled with ICP-ES, and used it to discriminate the three lithologic sources with statistical methods. Finally, we applied a mixing model to estimate the relative contributions of the three sources to the sediment sampled at the outlet. We combine the results of the sediment fingerprinting with simulations of a spatially distributed conceptual model for the erosion and transport of fine sediment. The model expresses sediment erosion by differentiating the contributions of erosional processes driven by erosive rainfall, snowmelt, and icemelt. Soil erodibility is accounted for as a function of land use, and sediment fluxes are linearly convoluted to the outlet by sediment transfer rates for hillslope and river cells, which are a function of sediment connectivity. Sediment connectivity is estimated on the basis of topographic-hydraulic connectivity, flow duration associated with hydropower flow abstraction, and permanent storage in hydropower reservoirs. Sediment fingerprinting at the outlet of the Borgne shows a consistent dominance (68-89%) of material derived from the uppermost, highly glaciated reaches, while contributions of the lower part (10-25%) and middle part (1-16%), where rainfall erosion is predominant, are minor. This result is confirmed by the model simulation, which shows that, despite the large flow abstraction (about 90%), the upstream reaches contribute most of the sediment. This study shows how combining geochemical techniques and sediment erosion models provides insight into the dynamics of sediment sources.
Inter-comparison of receptor models for PM source apportionment: Case study in an industrial area
NASA Astrophysics Data System (ADS)
Viana, M.; Pandolfi, M.; Minguillón, M. C.; Querol, X.; Alastuey, A.; Monfort, E.; Celades, I.
2008-05-01
Receptor modelling techniques are used to identify and quantify the contributions from emission sources to the levels and major and trace components of ambient particulate matter (PM). A wide variety of receptor models are currently available, and consequently the comparability between models should be evaluated if source apportionment data are to be used as input in health effects studies or mitigation plans. Three of the most widespread receptor models (principal component analysis, PCA; positive matrix factorization, PMF; chemical mass balance, CMB) were applied to a single PM10 data set (n=328 samples, 2002-2005) obtained from an industrial area in NE Spain dedicated to ceramic production. Sensitivity and temporal trend analyses (using the Mann-Kendall test) were applied. Results evidenced the good overall performance of the three models (r² > 0.83 and slope α > 0.91 between modelled and measured PM10 mass), with good agreement regarding source identification and high correlations between input (CMB) and output (PCA, PMF) source profiles. Larger differences were obtained regarding the quantification of source contributions (up to a factor of 4 in some cases). The combined application of different types of receptor models would overcome the limitations of each individual model by constructing a more robust solution based on their strengths. The authors suggest the combined use of factor analysis techniques (PCA, PMF) to identify and interpret emission sources and to obtain a first quantification of their contributions to the PM mass, followed by the application of CMB. Further research is needed to ensure that source apportionment methods are robust enough for application to PM health effects assessments.
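The CMB step is, at its core, an uncertainty-weighted least-squares solve of c ≈ F s for nonnegative source contributions s. A minimal sketch with invented profiles and concentrations; the effective-variance iteration of full CMB is reduced here to fixed measurement weights.

```python
import numpy as np
from scipy.optimize import nnls

# Source profiles: mass fraction of each species in each source's PM10.
# Rows: species (OC, EC, SO4, Si); columns: sources (traffic, ceramic, secondary).
F = np.array([[0.30, 0.05, 0.10],
              [0.15, 0.01, 0.00],
              [0.05, 0.02, 0.60],
              [0.02, 0.25, 0.00]])
c = np.array([6.5, 2.4, 4.1, 1.9])      # ambient concentrations, ug/m3
sigma = np.array([0.5, 0.3, 0.4, 0.3])  # measurement uncertainties, ug/m3

# Weight each species equation by its uncertainty, then solve with s >= 0.
s_hat, _ = nnls(F / sigma[:, None], c / sigma)
print("source contributions [ug/m3]:", np.round(s_hat, 2))
```

Factor-analysis methods (PCA, PMF) instead estimate both F and s from the data, which is why the paper suggests using them first to identify sources and CMB afterwards to quantify them.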
NASA Astrophysics Data System (ADS)
Xia, Yongqiu; Li, Yuefei; Zhang, Xinyu; Yan, Xiaoyuan
2017-01-01
Nitrate (NO3-) pollution is a serious problem worldwide, particularly in countries with intensive agricultural and population activities. Previous studies have used δ15N-NO3- and δ18O-NO3- to determine the NO3- sources in rivers. However, this approach is subject to substantial uncertainties and limitations because of the numerous NO3- sources, the wide isotopic ranges, and existing isotopic fractionation. In this study, we outline a combined procedure for improving the determination of NO3- sources in a paddy agriculture-urban gradient watershed in eastern China. First, the main sources of NO3- in the Qinhuai River were examined with the dual-isotope biplot approach, in which we narrowed the isotope ranges using site-specific isotopic results. Next, the bacterial groups and chemical properties of the river water were analyzed to verify these sources. Finally, we introduced a Bayesian model to apportion the spatiotemporal variations of the NO3- sources. Denitrification was incorporated into the Bayesian model for the first time, because denitrification plays an important role in the nitrogen pathway. The results showed that fertilizer contributed large amounts of NO3- to the surface water in traditional agricultural regions, whereas manure effluents were the dominant NO3- source in intensified agricultural regions, especially during the wet seasons. Sewage effluents were important across all three land uses and exhibited great differences between the dry season and the wet season. This combined analysis quantitatively delineates the proportion of NO3- sources from paddy agriculture to urban river water for both dry and wet seasons and incorporates isotopic fractionation and uncertainties in the source compositions.
Fine particle receptor modeling in the atmosphere of Mexico City.
Vega, Elizabeth; Lowenthal, Douglas; Ruiz, Hugo; Reyes, Elizabeth; Watson, John G; Chow, Judith C; Viana, Mar; Querol, Xavier; Alastuey, Andrés
2009-12-01
Source apportionment analyses were carried out by means of receptor modeling techniques to determine the contribution of major fine particulate matter (PM2.5) sources found at six sites in Mexico City. Thirty-six source profiles were determined within Mexico City to establish the fingerprints of particulate matter sources. Additionally, the profiles under the same source category were averaged using cluster analysis and the fingerprints of 10 sources were included. Before application of the chemical mass balance (CMB), several tests were carried out to determine the best combination of source profiles and species used for the fitting. CMB results showed significant spatial variations in source contributions among the six sites that are influenced by local soil types and land use. On average, 24-hr PM2.5 concentrations were dominated by mobile source emissions (45%), followed by geological material (17%) and secondary inorganic aerosols (16%). Industrial emissions representing oil combustion and incineration contributed less than 5%, and their contribution was higher at the industrial areas of Tlalnepantla (11%) and Xalostoc (8%). Other sources such as cooking, biomass burning, and oil fuel combustion were identified at lower levels. A second receptor model (principal component analysis, [PCA]) was subsequently applied to three of the monitoring sites for comparison purposes. Although differences were obtained between source contributions, the results evidence the advantages of the combined use of different receptor modeling techniques for source apportionment, given the complementary nature of their results. Further research is needed in this direction to reach a better agreement between the estimated source contributions to the particulate matter mass.
Peng, Xing; Shi, GuoLiang; Liu, GuiRong; Xu, Jiao; Tian, YingZe; Zhang, YuFen; Feng, YinChang; Russell, Armistead G
2017-02-01
Heavy metals (Cr, Co, Ni, As, Cd, and Pb) can be bound to PM, adversely affecting human health. Quantifying source impacts on heavy metals can provide source-specific estimates of the heavy metal health risk (HMHR) to guide effective development of strategies to reduce such risks from exposure to heavy metals in PM2.5 (particulate matter with aerodynamic diameter less than or equal to 2.5 μm). In this study, a method combining Multilinear Engine 2 (ME2) and a risk assessment model is developed to more effectively quantify source contributions to HMHR, including heavy metal non-cancer risk (non-HMCR) and cancer risk (HMCR). The combined model (called ME2-HMHR) has two steps: in step 1, source contributions to heavy metals are estimated with the ME2 model; in step 2, the source contributions from step 1 are introduced into the risk assessment model to calculate the source contributions to HMHR. The approach was applied to Huzhou, China, and five significant sources were identified. Soil dust is the largest source of non-HMCR. For HMCR, the contributions of soil dust, coal combustion, cement dust, vehicles, and secondary sources are 1.0 × 10⁻⁴, 3.7 × 10⁻⁵, 2.7 × 10⁻⁶, 1.6 × 10⁻⁶ and 1.9 × 10⁻⁹, respectively. Soil dust is the largest contributor to HMCR, driven by the high impact of soil dust on PM2.5 and the abundance of heavy metals in soil dust.
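A toy version of step 2 may clarify the arithmetic: modelled source contributions to a carcinogenic metal are multiplied by an inhalation unit risk to yield source-specific cancer risks. The concentrations below are invented, and the unit risk shown is only the order of magnitude commonly quoted for arsenic, not a value from this study.

```python
# Toy version of step 2 of ME2-HMHR: source contributions to a
# carcinogenic metal (here As) times an inhalation unit risk give
# source-specific cancer risks. Concentrations are invented; the unit
# risk is an assumed, arsenic-like order of magnitude.
IUR_AS = 4.3e-3                  # assumed cancer risk per ug/m3 of As

source_as_ug_m3 = {              # modelled As contribution per source
    "soil dust": 2.0e-2,
    "coal combustion": 8.0e-3,
    "vehicle": 5.0e-4,
}

risks = {s: c * IUR_AS for s, c in source_as_ug_m3.items()}
total = sum(risks.values())
for s, r in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{s}: {r:.1e} ({100 * r / total:.0f}% of total HMCR)")
```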
NASA Astrophysics Data System (ADS)
Smith, J. P.; Owens, P. N.; Gaspar, L.; Lobb, D. A.; Petticrew, E. L.
2015-12-01
An understanding of sediment redistribution processes and the main sediment sources within a watershed is needed to support watershed management strategies. The fingerprinting technique is increasingly being recognized as a method for establishing the source of the sediment transported within watersheds. However, the different behaviour of the various fingerprinting properties has been recognized as a major limitation of the technique, and the uncertainty associated with tracer selection needs to be addressed. There are also questions associated with which modelling approach (frequentist or Bayesian) is the best to unmix complex environmental mixtures, such as river sediment. This study aims to compare and evaluate the differences between fingerprinting predictions provided by a Bayesian unmixing model (MixSIAR) using different groups of tracer properties for use in sediment source identification. We used fallout radionuclides (e.g. 137Cs) and geochemical elements (e.g. As) as conventional fingerprinting properties, and colour parameters as emerging properties, both alone and in combination. These fingerprinting properties are being used (e.g. Koiter et al., 2013; Barthod et al., 2015) to determine the proportional contributions of fine sediment in the South Tobacco Creek Watershed, an agricultural watershed located in Manitoba, Canada. We show that the unmixing model using a combination of fallout radionuclides and geochemical tracers gave similar results to the model based on colour parameters. Furthermore, we show that a model combining all tracers (i.e. radionuclide/geochemical and colour) also gave similar results, showing that sediment sources change from predominantly topsoil in the upper reaches of the watershed to channel bank and bedrock outcrop material in the lower reaches. Barthod, L.R.M., et al. (2015). Selecting color-based tracers and classifying sediment sources in the assessment of sediment dynamics using sediment source fingerprinting. J. Environ. Qual. doi:10.2134/jeq2015.01.0043. Koiter, A.J., et al. (2013). Investigating the role of connectivity and scale in assessing the sources of sediment in an agricultural watershed in the Canadian prairies using sediment source fingerprinting. J. Soils Sediments, 13, 1676-1691.
Combined analysis of modeled and monitored SO2 concentrations at a complex smelting facility.
Rehbein, Peter J G; Kennedy, Michael G; Cotsman, David J; Campeau, Madonna A; Greenfield, Monika M; Annett, Melissa A; Lepage, Mike F
2014-03-01
Vale Canada Limited owns and operates a large nickel smelting facility located in Sudbury, Ontario. This is a complex facility with many sources of SO2 emissions, including a mix of source types ranging from passive building roof vents to North America's tallest stack. In addition, as this facility performs batch operations, there is significant variability in the emission rates depending on the operations that are occurring. Although SO2 emission rates for many of the sources have been measured by source testing, the reliability of these emission rates has not been tested from a dispersion modeling perspective. This facility is a significant source of SO2 in the local region, making it critical that, when modeling the emissions from this facility for regulatory or other purposes, the resulting concentrations are representative of what would actually be measured or otherwise observed. To assess the accuracy of the modeling, a detailed analysis of modeled and monitored data for SO2 at the facility was performed. A mobile SO2 monitor sampled at five locations downwind of different source groups for different wind directions, resulting in a total of 168 hr of valid data that could be used for the modeled-to-monitored comparison. The facility was modeled in AERMOD (American Meteorological Society/U.S. Environmental Protection Agency Regulatory Model) using site-specific meteorological data such that the modeled periods coincided with the monitored events. In addition, great effort was invested in estimating the actual SO2 emission rates that would likely be occurring during each of the monitoring events. SO2 concentrations were modeled for receptors around each monitoring location so that the modeled data could be directly compared with the monitored data. The modeled and monitored concentrations were compared and showed no systematic biases in the modeled concentrations. This paper is a case study of a Combined Analysis of Modelled and Monitored Data (CAMM), an approach promulgated within air quality regulations in the Province of Ontario, Canada. Although combining dispersion models and monitoring data to estimate or refine estimates of source emission rates is not a new technique, this study shows how, with a high degree of rigor in the design of the monitoring and the filtering of the data, it can be applied to a large industrial facility with a variety of emission sources. The comparison of modeled and monitored SO2 concentrations in this case study also illustrates AERMOD model performance for a large industrial complex with many sources, at short time scales, in comparison with monitored data. Overall, this analysis demonstrated that the AERMOD model performed well.
NASA Astrophysics Data System (ADS)
Turner, Alexander J.; Shusterman, Alexis A.; McDonald, Brian C.; Teige, Virginia; Harley, Robert A.; Cohen, Ronald C.
2016-11-01
The majority of anthropogenic CO2 emissions are attributable to urban areas. While the emissions from urban electricity generation often occur in locations remote from consumption, many of the other emissions occur within the city limits. Evaluating the effectiveness of strategies for controlling these emissions depends on our ability to observe urban CO2 emissions and attribute them to specific activities. Cost-effective strategies for doing so have yet to be described. Here we characterize the ability of a prototype measurement network, modeled after the Berkeley Atmospheric CO2 Observation Network (BEACO2N) in California's Bay Area, in combination with an inverse model based on the coupled Weather Research and Forecasting/Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) model, to improve our understanding of urban emissions. The pseudo-measurement network includes 34 sites at roughly 2 km spacing covering an area of roughly 400 km2. The model uses an hourly 1 × 1 km2 emission inventory and 1 × 1 km2 meteorological calculations. We perform an ensemble of Bayesian atmospheric inversions to sample the combined effects of uncertainties of the pseudo-measurements and the model. We vary the estimates of the combined uncertainty of the pseudo-observations and model over a range of 20 to 0.005 ppm and vary the number of sites from 1 to 34. We use these inversions to develop statistical models that estimate the efficacy of the combined model-observing system in reducing uncertainty in CO2 emissions. We examine uncertainty in estimated CO2 fluxes on the urban scale, as well as for sources embedded within the city such as a line source (e.g., a highway) or a point source (e.g., emissions from the stacks of small industrial facilities). Using our inversion framework, we find that a dense network with moderate precision is the preferred setup for estimating area, line, and point sources from a combined uncertainty and cost perspective. The dense network considered here (modeled after the BEACO2N network with an assumed mismatch error of 1 ppm at an hourly temporal resolution) could estimate weekly CO2 emissions from an urban region with less than 5 % error, given our characterization of the combined observation and model uncertainty.
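The network-design logic can be sketched with a linear Gaussian inversion: for a forward model y = Hx, the posterior flux covariance shrinks as observations are added. Everything below (footprints, error levels, dimensions) is a stand-in for the WRF-STILT setup, not the study's actual configuration.

```python
# Toy network-design calculation: for a linear forward model y = H x,
# Gaussian (Bayesian) inversion gives the posterior flux covariance in
# closed form. H, B and R are stand-ins for WRF-STILT footprints and
# the error budgets discussed above.
import numpy as np

n_obs, n_flux = 34, 10
rng = np.random.default_rng(1)
H = rng.random((n_obs, n_flux))      # footprint (Jacobian) matrix
B = np.eye(n_flux)                   # prior flux error covariance
R = np.eye(n_obs) * 1.0**2           # mismatch error, (1 ppm)^2 per obs

A = np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B))
reduction = 1.0 - np.sqrt(np.diag(A) / np.diag(B))
print(reduction)                     # fractional uncertainty reduction
```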
Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua
2015-01-01
We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
NASA Astrophysics Data System (ADS)
Lutz, Stefanie; Van Breukelen, Boris
2014-05-01
Natural attenuation can represent a complementary or alternative approach to engineered remediation of polluted sites. In this context, compound specific stable isotope analysis (CSIA) has proven a useful tool, as it can provide evidence of natural attenuation and assess the extent of in-situ degradation based on changes in isotope ratios of pollutants. Moreover, CSIA can allow for source identification and apportionment, which might help to identify major emission sources in complex contamination scenarios. However, degradation and mixing processes in aquifers can lead to changes in isotopic compositions, such that their simultaneous occurrence might complicate combined source apportionment (SA) and assessment of the extent of degradation (ED). We developed a mathematical model (stable isotope sources and sinks model; SISS model) based on the linear stable isotope mixing model and the Rayleigh equation that allows for simultaneous SA and quantification of the ED in a scenario of two emission sources and degradation via one reaction pathway. It was shown that the SISS model with CSIA of at least two elements contained in the pollutant (e.g., C and H in benzene) allows for unequivocal SA even in the presence of degradation-induced isotope fractionation. In addition, the model enables precise quantification of the ED provided degradation follows instantaneous mixing of two sources. If mixing occurs after two sources have degraded separately, the model can still yield a conservative estimate of the overall extent of degradation. The SISS model was validated against virtual data from a two-dimensional reactive transport model. The model results for SA and ED were in good agreement with the simulation results. The application of the SISS model to field data of benzene contamination was, however, challenged by large uncertainties in measured isotope data. Nonetheless, the use of the SISS model provided a better insight into the interplay of mixing and degradation processes at the field site, as it revealed the prevailing contribution of one emission source and a low overall ED. The model can be extended to a larger number of sources and sinks. It may aid in forensics and natural attenuation assessment of soil, groundwater, surface water, or atmospheric pollution.
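A minimal numerical version of the SISS idea, assuming two sources, one degradation pathway, and degradation after instantaneous mixing: for each element E, the observed signature satisfies δ_mix = x·δ1 + (1 − x)·δ2 + ε·ln f, with x the source-1 fraction and f the fraction remaining. All isotope values below are invented, and mixing delta values by fraction alone assumes comparable concentrations.

```python
# Numerical toy of the SISS model: two sources, one degradation
# pathway, degradation after instantaneous mixing. For each element E,
#   d_mix(E) = x*d1(E) + (1 - x)*d2(E) + eps(E)*ln(f).
# All numbers are invented for illustration.
import numpy as np
from scipy.optimize import fsolve

d1 = {"C": -28.0, "H": -80.0}     # source 1 signatures (permil)
d2 = {"C": -24.0, "H": -120.0}    # source 2 signatures (permil)
eps = {"C": -2.0, "H": -30.0}     # enrichment factors of the pathway
d_obs = {"C": -25.0, "H": -95.0}  # measured downstream signatures

def residuals(v):
    x, f = v                      # source-1 fraction, fraction remaining
    return [x * d1[e] + (1 - x) * d2[e] + eps[e] * np.log(f) - d_obs[e]
            for e in ("C", "H")]

x, f = fsolve(residuals, [0.5, 0.5])
print(f"source 1 fraction = {x:.2f}, extent of degradation = {1 - f:.2f}")
```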
Collaborative Research: Atmospheric Pressure Microplasma Chemistry-Photon Synergies Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graves, David
Combining the effects of low-temperature, atmospheric-pressure microplasmas and microplasma photon sources greatly expands the range of applications of each. The plasma sources create active chemical species, and these can be activated further by the addition of photons and the associated photochemistry. There are many ways to combine the effects of plasma chemistry and photochemistry, especially if multiple phases are present. The project combines the construction of appropriate experimental test systems, various spectroscopic diagnostics, and mathematical modeling.
AGGREGATING FOOD SOURCES IN STABLE ISOTOPE DIETARY STUDIES: LUMP IT OR LEAVE IT?
A common situation when stable isotope mixing models are used to estimate food source dietary contributions is that there are too many sources to allow a unique solution. To resolve this problem one option is to combine sources with similar signatures such that the number of sou...
Density estimation in tiger populations: combining information for strong inference
Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.
2012-01-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
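As a back-of-envelope check (not the authors' joint spatial capture-recapture model), an inverse-variance weighted combination of the two single-source estimates lands close to the reported joint estimate:

```python
# Back-of-envelope check, not the authors' joint spatial
# capture-recapture model: an inverse-variance weighted combination of
# the two single-source estimates lands near the reported 8.5 +/- 1.95.
estimates = [(12.02, 3.02),   # photographic (mean, SD)
             (6.65, 2.37)]    # fecal DNA (mean, SD)

weights = [1 / sd**2 for _, sd in estimates]
mean = sum(w * m for (m, _), w in zip(estimates, weights)) / sum(weights)
sd = (1 / sum(weights)) ** 0.5
print(f"combined: {mean:.2f} +/- {sd:.2f} tigers/100 km2")
```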
NASA Astrophysics Data System (ADS)
Deng, Junjun; Zhang, Yanru; Qiu, Yuqing; Zhang, Hongliang; Du, Wenjiao; Xu, Lingling; Hong, Youwei; Chen, Yanting; Chen, Jinsheng
2018-04-01
Source apportionment of fine particulate matter (PM2.5) was conducted at the Lin'an Regional Atmospheric Background Station (LA) in the Yangtze River Delta (YRD) region of China from July 2014 to April 2015 with three receptor models: principal component analysis combined with multiple linear regression (PCA-MLR), UNMIX, and Positive Matrix Factorization (PMF). The model performance, source identification and source contributions of the three models were analyzed and inter-compared. Good correlations between the reconstructed and measured concentrations of PM2.5 and its major chemical species were obtained for all models. PMF resolved almost all of the PM2.5 mass, while PCA-MLR and UNMIX explained about 80%. Five, four and seven sources were identified by PCA-MLR, UNMIX and PMF, respectively. Combustion, a secondary source, a marine source, dust and industrial activities were identified by all three receptor models. The combustion and secondary sources were the major sources, together contributing over 60% of PM2.5. The PMF model performed better at separating the different combustion sources. These findings improve the understanding of PM2.5 sources in this background region.
NASA Astrophysics Data System (ADS)
Wang, Xu-yang; Zhdanov, Dmitry D.; Potemin, Igor S.; Wang, Ying; Cheng, Han
2016-10-01
One of the challenges of augmented reality is the seamless combination of objects of the real and virtual worlds, for example light sources. We suggest measurement and computation models for reconstructing the position of a light source. The model is based on the dependence of the luminance of a small diffuse surface on direct illumination by a point-like source placed a short distance from the observer or camera. The advantage of the computational model is its ability to eliminate the effects of indirect illumination. The paper presents a number of examples to illustrate the efficiency and accuracy of the proposed method.
EEG and MEG data analysis in SPM8.
Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl
2011-01-01
SPM is free and open source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp-maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis for which there are several variants dealing with evoked responses, steady state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and build custom analysis tools using powerful graphical user interface (GUI) and batching tools.
NASA Technical Reports Server (NTRS)
McKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.
2005-01-01
Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward-averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield the best performance and avoid model discontinuity over day/night data boundaries.
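A hedged sketch of the stated model form follows: an intercept plus one fixed-power term per regressor, with a log-transformed response for the night model. The powers, variables, and data below are placeholders, not the fitted DFW values.

```python
# Sketch of the stated model form: an intercept plus one term of fixed
# power per regressor, with log(EDR) as the night response. Powers,
# variables and data are placeholders, not the fitted values.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
wind_mean = rng.uniform(1.0, 12.0, 500)   # 30-min mean wind speed
wind_var = rng.uniform(0.1, 4.0, 500)     # 30-min wind speed variance
cloud = rng.integers(0, 4, 500)           # discrete cloud cover metric
edr = 0.02 * wind_mean**1.5 + 0.01 * wind_var + rng.normal(0, 0.01, 500)

# Each regressor enters as a single term raised to a fixed power.
X = np.column_stack([wind_mean**1.5, wind_var, cloud])
day = LinearRegression().fit(X, edr)                     # day: raw EDR
night = LinearRegression().fit(X, np.log(np.clip(edr, 1e-4, None)))
print(day.score(X, edr))
```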
NASA Astrophysics Data System (ADS)
Han, Young-Ji; Holsen, Thomas M.; Hopke, Philip K.
Ambient total gaseous mercury (TGM) concentrations were measured at three locations in New York State (Potsdam, Stockton, and Sterling) from May 2000 to March 2005. Using these data, three hybrid receptor models incorporating backward trajectories were used to identify source areas for TGM. The models used were potential source contribution function (PSCF), residence time weighted concentration (RTWC), and simplified quantitative transport bias analysis (SQTBA). Each model was applied using multi-site measurements to resolve the locations of important mercury sources for New York State. PSCF results showed that southeastern New York, Ohio, Indiana, Tennessee, Louisiana, and Virginia were important TGM source areas for these sites. RTWC identified Canadian sources, including the metal production facilities in Ontario and Quebec, but US regional sources including the Ohio River Valley were also resolved. Sources in southeastern New York, Massachusetts, western Pennsylvania, Indiana, and northern Illinois were identified as significant by SQTBA. The three modeling results were combined to locate the most important probable source locations: Ohio, Indiana, Illinois, and Wisconsin. The Atlantic Ocean was suggested as a possible source as well.
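The PSCF ingredient can be written in a few lines: grid the back-trajectory endpoints and take, in each cell, the fraction of endpoints whose parent trajectory arrived with TGM above a criterion value. The endpoints, concentrations, and 75th-percentile criterion below are assumptions for illustration.

```python
# Bare-bones PSCF: grid back-trajectory endpoints and take, per cell,
# the fraction of endpoints whose parent trajectory arrived with TGM
# above a criterion. Data are synthetic; in practice each endpoint
# inherits the receptor concentration of its trajectory.
import numpy as np

rng = np.random.default_rng(2)
lon = rng.uniform(-95.0, -70.0, 5000)     # endpoint longitudes
lat = rng.uniform(35.0, 50.0, 5000)       # endpoint latitudes
conc = rng.lognormal(0.4, 0.3, 5000)      # TGM tied to each endpoint

high = conc > np.percentile(conc, 75)     # assumed criterion value
bins = [np.arange(-95.0, -69.0, 1.0), np.arange(35.0, 51.0, 1.0)]
n, _, _ = np.histogram2d(lon, lat, bins=bins)
m, _, _ = np.histogram2d(lon[high], lat[high], bins=bins)
pscf = np.divide(m, n, out=np.zeros_like(m), where=n > 0)
```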
Validation of a Sensor-Driven Modeling Paradigm for Multiple Source Reconstruction with FFT-07 Data
2009-05-01
operational warning and reporting (information) systems that combine automated data acquisition, analysis, source reconstruction, display and distribution of... report and to incorporate this operational capability into the integrative multiscale urban modeling system implemented in the computational... Journal of Fluid Mechanics, 180, 529-556. [27] Flesch, T., Wilson, J. D., and Yee, E. (1995), Backward-time Lagrangian stochastic dispersion models
Huang, Yingxiang; Lee, Junghye; Wang, Shuang; Sun, Jimeng; Liu, Hongfang; Jiang, Xiaoqian
2018-05-16
Data sharing has been a big challenge in biomedical informatics because of privacy concerns. Contextual embedding models have demonstrated a very strong representative capability to describe medical concepts (and their context), and they have shown promise as an alternative way to support deep-learning applications without the need to disclose original data. However, contextual embedding models acquired from individual hospitals cannot be directly combined because their embedding spaces are different, and naive pooling renders combined embeddings useless. The aim of this study was to present a novel approach to address these issues and to promote sharing representations without sharing data. Without sacrificing privacy, we also aimed to build a global model from representations learned from local private data and synchronize information from multiple sources. We propose a methodology that harmonizes different local contextual embeddings into a global model. We used Word2Vec to generate contextual embeddings from each source and Procrustes to fuse different vector models into one common space by using a list of corresponding pairs as anchor points. We performed prediction analysis with harmonized embeddings. We used sequential medical events extracted from the Medical Information Mart for Intensive Care III database to evaluate the proposed methodology in predicting the next likely diagnosis of a new patient using either structured data or unstructured data. Under different experimental scenarios, we confirmed that the global model built from harmonized local models achieves a more accurate prediction than local models and global models built from naive pooling. Such aggregation of local models using our unique harmonization can serve as the proxy for a global model, combining information from a wide range of institutions and information sources. It allows information unique to a certain hospital to become available to other sites, increasing the fluidity of information flow in health care.
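The harmonization step described above can be sketched with SciPy's orthogonal Procrustes solver: a rotation computed from shared anchor concepts maps one site's embedding space onto the reference space. Random vectors stand in for the Word2Vec embeddings of medical concepts.

```python
# Sketch of the harmonization step: an orthogonal Procrustes rotation,
# computed from shared anchor concepts, maps a local embedding space
# onto the reference space. Random vectors stand in for Word2Vec
# embeddings.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(3)
dim, n_anchors = 100, 200
reference = rng.normal(size=(n_anchors, dim))        # global space
rot = np.linalg.qr(rng.normal(size=(dim, dim)))[0]   # unknown rotation
local = reference @ rot + 0.01 * rng.normal(size=(n_anchors, dim))

R, _ = orthogonal_procrustes(local, reference)       # estimate inverse
aligned = local @ R                                  # now comparable
print(np.linalg.norm(aligned - reference) / np.linalg.norm(reference))
```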
Monte Carlo modelling of large scale NORM sources using MCNP.
Wallace, J D
2013-12-01
Representative Monte Carlo modelling of large-scale planar sources (for comparison to external environmental radiation fields) is undertaken using planar cylindrical sources of substantial diameter and thin profile. The relative impacts of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source-to-detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP-based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source-to-detector distances.
Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aab, A.; Abreu, P.; Andringa, S.
2017-04-01
We present a combined fit of a simple astrophysical model of UHECR sources to both the energy spectrum and mass composition data measured by the Pierre Auger Observatory. The fit has been performed for energies above 5 ⋅ 10¹⁸ eV, i.e. the region of the all-particle spectrum above the so-called 'ankle' feature. The astrophysical model we adopted consists of identical sources uniformly distributed in a comoving volume, where nuclei are accelerated through a rigidity-dependent mechanism. The fit results suggest sources characterized by relatively low maximum injection energies, hard spectra and heavy chemical composition. We also show that uncertainties about physical quantities relevant to UHECR propagation and shower development have a non-negligible impact on the fit results.
Methane source identification in Boston, Massachusetts using isotopic and ethane measurements
NASA Astrophysics Data System (ADS)
Down, A.; Jackson, R. B.; Plata, D.; McKain, K.; Wofsy, S. C.; Rella, C.; Crosson, E.; Phillips, N. G.
2012-12-01
Methane has substantial greenhouse warming potential and is the principal component of natural gas. Fugitive natural gas emissions could be a significant source of methane to the atmosphere. However, the cumulative magnitude of natural gas leaks is not yet well constrained. We used a combination of point source measurements and ambient monitoring to characterize the methane sources in the Boston urban area. We developed distinct fingerprints for natural gas and multiple biogenic methane sources based on hydrocarbon concentration and isotopic composition. We combine these data with periodic measurements of atmospheric methane and ethane concentration to estimate the fractional contribution of natural gas and biogenic methane sources to the cumulative urban methane flux in Boston. These results are used to inform an inverse model of urban methane concentration and emissions.
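In its simplest form, the source-attribution arithmetic reduces to two-end-member mixing of carbon isotope signatures. The end-member values below are typical literature ranges used as assumptions, not measurements from the Boston study.

```python
# Simplest form of the attribution arithmetic: two-end-member mixing
# of d13C signatures. End-member values are assumed, typical ranges,
# not measurements from this study.
d13c_gas = -42.0    # thermogenic (natural gas) end member, permil
d13c_bio = -60.0    # biogenic end member, permil
d13c_mix = -51.0    # observed urban excess methane, permil

f_gas = (d13c_mix - d13c_bio) / (d13c_gas - d13c_bio)
print(f"natural-gas fraction of excess methane ~ {f_gas:.2f}")
```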
patches to cycle from sink to source status and back. Objective: Through a combination of field studies and state-of-the-art quantitative models, we... landscapes with dynamic changes in habitat quality due to management. We also validated our general approach by comparing patterns in our focal species to general, cross-taxa patterns.
NASA Astrophysics Data System (ADS)
Luscz, E.; Kendall, A. D.; Martin, S. L.; Hyndman, D. W.
2011-12-01
Watershed nutrient loading models are important tools used to address issues including eutrophication, harmful algal blooms, and decreases in aquatic species diversity. Such approaches have been developed to assess the level and source of nutrient loading across a wide range of scales, yet there is typically a tradeoff between the scale of the model and the level of detail regarding the individual sources of nutrients. To avoid this tradeoff, we developed a detailed source nutrient loading model for every watershed in Michigan's lower peninsula. Sources considered include atmospheric deposition, septic tanks, waste water treatment plants, combined sewer overflows, animal waste from confined animal feeding operations and pastured animals, fertilizer from agricultural, residential, and commercial sources, and industrial effluents. Each source is related to readily available GIS inputs that may vary through time. This loading model was used to assess the importance of sources and landscape factors in nutrient loading rates to watersheds, and how these have changed in recent decades. The results showed the value of detailed source inputs, revealing regional trends while still providing insight into variability at smaller scales.
Exploring Evidence Aggregation Methods and External Expansion Sources for Medical Record Search
2012-11-01
Equation 3 using Indri in the same way as our previous work [12]. We denoted this model as MRM. A Combined Model: we linearly combine MRF and MRM to get... [Figure 1: Merging results from two different retrieval models] ...retrieval model MRM with one expansion collection at a time to explore the expansion effectiveness of each collection, as shown in Table 5. As we can
Cancer Related-Knowledge - Small Area Estimates
These model-based estimates are produced using statistical models that combine data from the Health Information National Trends Survey with auxiliary variables obtained from relevant sources, borrowing strength from other areas with similar characteristics.
Seismic hazard assessment over time: Modelling earthquakes in Taiwan
NASA Astrophysics Data System (ADS)
Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting
2017-04-01
To assess temporal changes in seismic hazard in Taiwan, we develop a new approach that combines the Brownian Passage Time (BPT) model with Coulomb stress change, using the seismogenic source parameters of the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault rupture, to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering these time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed times since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased the rupture probabilities of several neighbouring seismogenic sources in southwestern Taiwan and raised the hazard level in the near future. Our approach draws on the advantages of incorporating both long- and short-term models to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than previously published models, and thus offers decision-makers and public officials an adequate basis for rapid evaluation of and response to future emergency scenarios such as victim relocation and sheltering.
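The time-dependent ingredient can be illustrated with the BPT conditional rupture probability: given a mean recurrence interval mu, an aperiodicity alpha, and the elapsed time since the last rupture, the probability of rupture within the next dT years follows from the BPT density. The parameter values below are illustrative, not TEM values.

```python
# Conditional rupture probability under the Brownian Passage Time
# model, given mean recurrence mu, aperiodicity alpha and elapsed
# time since the last rupture. Parameter values are illustrative.
import numpy as np
from scipy.integrate import quad

def bpt_pdf(t, mu, alpha):
    return (np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3))
            * np.exp(-(t - mu)**2 / (2.0 * mu * alpha**2 * t)))

def conditional_probability(mu, alpha, elapsed, dT):
    num, _ = quad(bpt_pdf, elapsed, elapsed + dT, args=(mu, alpha))
    den, _ = quad(bpt_pdf, elapsed, np.inf, args=(mu, alpha))
    return num / den

print(conditional_probability(mu=250.0, alpha=0.5, elapsed=180.0, dT=50.0))
```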
NASA Astrophysics Data System (ADS)
Kim, M. G.; Lin, J. C.; Huang, L.; Edwards, T. W.; Jones, J. P.; Polavarapu, S.; Nassar, R.
2012-12-01
Reducing uncertainties in the projections of atmospheric CO2 concentration levels relies on increasing our scientific understanding of the exchange processes between the atmosphere and land at regional scales, which depend strongly on climate, ecosystem processes, and anthropogenic disturbances. To reduce these uncertainties, a combined framework that jointly addresses these variables and accounts for each process is invaluable. In this research, an example of a top-down inversion modeling approach combined with stable isotope measurement data is presented. The potential of the proposed analysis framework is demonstrated using Stochastic Time-Inverted Lagrangian Transport (STILT) model runs combined with high-precision CO2 concentration data measured at a Canadian greenhouse gas monitoring site, as well as multiple tracers: stable isotopes and combustion-related species. This framework yields a unique regional-scale constraint that can be used to relate the measured changes of tracer concentrations to processes in their upwind source regions. The inversion approach both reproduces source areas in a spatially explicit way through sophisticated Lagrangian transport modeling and infers emission processes that leave imprints on atmospheric tracers. The understanding gained through the combined approach can also be used to verify reported emissions as part of regulatory regimes. The results indicate that changes in CO2 concentration are strongly influenced by regional sources, including significant fossil fuel emissions, and that the combined approach can be used to test reported emissions of the greenhouse gas from oil sands developments. Methods to further reduce uncertainties in the retrieved emissions by incorporating additional constraints, including tracer-to-tracer correlations and satellite measurements, are also discussed briefly.
“RLINE: A Line Source Dispersion Model for Near-Surface Releases”
Growing concern about human exposure and related adverse health effects near roadways initiated an effort by the U. S. Environmental Protection Agency to reexamine the dispersion of mobile source related pollutants. These adverse effects, in combination with the fact that a signi...
Gateuille, David; Evrard, Olivier; Lefevre, Irène; Moreau-Guigon, Elodie; Alliot, Fabrice; Chevreuil, Marc; Mouchel, Jean-Marie
2014-06-01
Various sources supply PAHs that accumulate in soils. The methodology we developed provided an evaluation of the contribution of local sources (road traffic, local industries) versus remote sources (long-range atmospheric transport, fallout and gaseous exchanges) to PAH stocks in two contrasting subcatchments (46-614 km²) of the Seine River basin (France). Soil samples (n = 336) were analysed to investigate the spatial pattern of soil contamination across the catchments, and an original combination with radionuclide measurements provided new insights into the evolution of the contamination with depth. Relationships between PAH concentrations and the distance to the potential sources were modelled. Although both subcatchments are mainly rural, roadside areas appeared to concentrate 20% of the contamination inside the catchment, while a local industry was found to be responsible for up to 30% of the stocks. These results have important implications for understanding and controlling PAH contamination in rural areas of early-industrialized regions.
Wang, Bao-Zhen; Chen, Zhi
2013-01-01
This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data required for air quality modeling, including emission sources, air quality monitoring, meteorological data, and spatial location information, is brought into an integrated modeling environment. This allows more details of the spatial variation in source distributions and meteorological conditions to be analyzed quantitatively. The developed modeling approach has been used to predict the spatial concentration distributions of four air pollutants (CO, NO2, SO2 and PM2.5) for the State of California. The modeling results are compared with the monitoring data, and the good agreement obtained demonstrates that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
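A toy multi-source calculation in the spirit of the Gaussian component of GMSMB: the ground-level concentration at a receptor is the sum of ground-reflected Gaussian plume contributions from several point sources. The fixed dispersion sigmas below are an assumption; operational models vary them with stability class and downwind distance.

```python
# Toy multi-source Gaussian calculation: ground-level concentration at
# a receptor is the sum of ground-reflected Gaussian plume terms from
# point sources. Fixed sigmas are an assumption.
import numpy as np

def plume(q, u, x, y, z, h, sig_y, sig_z):
    """Ground-reflected Gaussian plume; q in g/s, distances in m."""
    if x <= 0.0:
        return 0.0                       # receptor is not downwind
    lateral = np.exp(-y**2 / (2.0 * sig_y**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sig_z**2))
                + np.exp(-(z + h)**2 / (2.0 * sig_z**2)))
    return q / (2.0 * np.pi * u * sig_y * sig_z) * lateral * vertical

sources = [            # (emission g/s, downwind x, crosswind y, stack h)
    (50.0, 1000.0, 0.0, 30.0),
    (20.0, 2500.0, 300.0, 10.0),
]
c = sum(plume(q, u=4.0, x=x, y=y, z=1.5, h=h, sig_y=80.0, sig_z=40.0)
        for q, x, y, h in sources)
print(f"combined concentration: {c:.2e} g/m3")
```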
Gupta, Rishi R; Gifford, Eric M; Liston, Ted; Waller, Chris L; Hohman, Moses; Bunin, Barry A; Ekins, Sean
2010-11-01
Ligand-based computational models could be more readily shared between researchers and organizations if they were generated with open source molecular descriptors [e.g., chemistry development kit (CDK)] and modeling algorithms, because this would negate the requirement for proprietary commercial software. We initially evaluated open source descriptors and model building algorithms using a training set of approximately 50,000 molecules and a test set of approximately 25,000 molecules with human liver microsomal metabolic stability data. A C5.0 decision tree model demonstrated that CDK descriptors together with a set of SMILES Arbitrary Target Specification (SMARTS) keys had good statistics [κ = 0.43, sensitivity = 0.57, specificity = 0.91, and positive predictive value (PPV) = 0.64], equivalent to those of models built with commercial Molecular Operating Environment 2D (MOE2D) and the same set of SMARTS keys (κ = 0.43, sensitivity = 0.58, specificity = 0.91, and PPV = 0.63). Extending the dataset to ∼193,000 molecules and generating a continuous model using Cubist with a combination of CDK and SMARTS keys or MOE2D and SMARTS keys confirmed this observation. When the continuous predictions and actual values were binned to get a categorical score we observed a similar κ statistic (0.42). The same combination of descriptor set and modeling method was applied to passive permeability and P-glycoprotein efflux data with similar model testing statistics. In summary, open source tools demonstrated predictive results comparable to those of commercial software with attendant cost savings. We discuss the advantages and disadvantages of open source descriptors and the opportunity for their use as a tool for organizations to share data precompetitively, avoiding repetition and assisting drug discovery.
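The categorical statistics quoted above can all be derived from a 2x2 confusion matrix; the sketch below shows the computation, with the counts as illustrative inputs rather than the study's data.

def binary_stats(tp, fp, fn, tn):
    # Cohen's kappa plus sensitivity, specificity and PPV.
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                   # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (po - pe) / (1 - pe)
    return kappa, tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)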
Lenstronomy: Multi-purpose gravitational lens modeling software package
NASA Astrophysics Data System (ADS)
Birrer, Simon; Amara, Adam
2018-04-01
Lenstronomy is a multi-purpose open-source gravitational lens modeling python package. Lenstronomy reconstructs the lens mass and surface brightness distributions of strong lensing systems using forward modelling and supports a wide range of analytic lens and light models in arbitrary combination. The software is also able to reconstruct complex extended sources as well as point sources. Lenstronomy is flexible and numerically accurate, with a clear user interface, and can be deployed across different platforms. Lenstronomy has been used to derive constraints on dark matter properties in strong lenses, measure the expansion history of the universe with time-delay cosmography, measure cosmic shear with Einstein rings, and decompose quasar and host galaxy light.
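The forward-modelling idea can be pictured with the lens equation beta = theta - alpha(theta); the sketch below ray-shoots through a singular isothermal sphere (SIS) and is a generic illustration, not lenstronomy's actual interface.

import numpy as np

# Map image-plane angles theta to source-plane angles beta for an SIS
# lens with Einstein radius theta_E (theta must be nonzero).
def ray_shoot_sis(theta_x, theta_y, theta_E=1.0):
    r = np.hypot(theta_x, theta_y)
    return theta_x - theta_E * theta_x / r, theta_y - theta_E * theta_y / r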
Remanent magnetization and 3-dimensional density model of the Kentucky anomaly region
NASA Technical Reports Server (NTRS)
Mayhew, M. A.; Estes, R. H.; Myers, D. M.
1984-01-01
A three-dimensional model of the Kentucky body was developed to fit surface gravity and long wavelength aeromagnetic data. Magnetization and density parameters for the model are much like those of Mayhew et al. (1982). The magnetic anomaly due to the model at satellite altitude is shown to be much too small by itself to account for the anomaly measured by Magsat. It is demonstrated that the source region for the satellite anomaly is considerably more extensive than the Kentucky body sensu stricto. The extended source region is modeled first using prismatic model sources and then using dipole array sources. Magnetization directions for the source region found by inversion of various combinations of scalar and vector data are found to be close to the main field direction, implying the lack of a strong remanent component. It is shown by simulation that in a case (such as this) where the geometry of the source is known, if a strong remanent component is present, its direction is readily detectable, by scalar data as readily as by vector data.
Dissecting the Gamma-Ray Background in Search of Dark Matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cholis, Ilias; Hooper, Dan; McDermott, Samuel D.
2014-02-01
Several classes of astrophysical sources contribute to the approximately isotropic gamma-ray background measured by the Fermi Gamma-Ray Space Telescope. In this paper, we use Fermi's catalog of gamma-ray sources (along with corresponding source catalogs at infrared and radio wavelengths) to build and constrain a model for the contributions to the extragalactic gamma-ray background from astrophysical sources, including radio galaxies, star-forming galaxies, and blazars. We then combine our model with Fermi's measurement of the gamma-ray background to derive constraints on the dark matter annihilation cross section, including contributions from both extragalactic and galactic halos and subhalos. The resulting constraints are competitive with the strongest current constraints from the Galactic Center and dwarf spheroidal galaxies. As Fermi continues to measure the gamma-ray emission from a greater number of astrophysical sources, it will become possible to more tightly constrain the astrophysical contributions to the extragalactic gamma-ray background. We project that with 10 years of data, Fermi's measurement of this background combined with the improved constraints on the astrophysical source contributions will yield a sensitivity to dark matter annihilations that exceeds the strongest current constraints by a factor of ~5-10.
Lidar method to estimate emission rates from extended sources
USDA-ARS?s Scientific Manuscript database
Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...
NASA Astrophysics Data System (ADS)
Pan, X. G.; Wang, J. Q.; Zhou, H. Y.
2013-05-01
A variance component estimation (VCE) method based on a semi-parametric estimator with a data-depth weighted matrix is proposed, because coupled system model errors and gross errors exist in the multi-source heterogeneous measurement data of combined space- and ground-based TT&C (Telemetry, Tracking and Command) systems. The uncertain model error is estimated with the semi-parametric estimator model, and outliers are restrained with the data-depth weighted matrix. With the model error and outliers thus constrained, the improved VCE can be used to estimate the weight matrix for observation data affected by uncertain model errors or outliers. A simulation experiment was carried out under combined space- and ground-based TT&C conditions. The results show that the new VCE based on model error compensation can determine rational weights for the multi-source heterogeneous data and restrain outlier data.
Duct Liner Optimization for Turbomachinery Noise Sources
1975-11-01
NASA Technical Memorandum TM X-72789, Duct Liner Optimization for Turbomachinery Noise Sources, by Harold C. ..., November 1975. ... profiles is combined with a numerical minimization algorithm to predict optimal liner configurations having one, two, and three sections. Source models
Combining Multiple Knowledge Sources for Speech Recognition
1988-09-15
Thus, the first ... to clarify the pronunciation (TASSEAJ for the acronym TASA), the best adaptation sentence, the second sentence, when added ... 10 rapid adaptation sentences, and 15 spelled phrases. ... resource management ... speaker-dependent database sentences were randomly ... combining the smoothed phoneme models with the detailed context models. The system was tested on a standard database ... BYBLOS makes maximal use
An almost-parameter-free harmony search algorithm for groundwater pollution source identification.
Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui
2013-01-01
The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the results indicate that the proposed almost-parameter-free harmony search algorithm-based optimization model can give satisfactory estimates, even when irregular geometry, erroneous monitoring data, and limited prior information on potential source locations are considered.
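For readers unfamiliar with the heuristic, the sketch below is a basic harmony search, not the paper's almost-parameter-free variant; the memory size, HMCR, PAR and bandwidth are assumed values.

import numpy as np

def harmony_search(f, lo, hi, hms=10, hmcr=0.9, par=0.3, iters=2000):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = len(lo)
    hm = np.random.uniform(lo, hi, (hms, dim))      # harmony memory
    cost = np.array([f(x) for x in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if np.random.rand() < hmcr:             # memory consideration
                new[j] = hm[np.random.randint(hms), j]
                if np.random.rand() < par:          # pitch adjustment
                    new[j] += 0.01 * (hi[j] - lo[j]) * np.random.uniform(-1, 1)
            else:                                   # random re-initialisation
                new[j] = np.random.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        fnew, worst = f(new), int(np.argmax(cost))
        if fnew < cost[worst]:                      # replace worst harmony
            hm[worst], cost[worst] = new, fnew
    return hm[np.argmin(cost)]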
Iterative combination of national phenotype, genotype, pedigree, and foreign information
USDA-ARS?s Scientific Manuscript database
Single step methods can combine all sources of information into accurate rankings for animals with and without genotypes. Equations that require inverting the genomic relationship matrix G work well with limited numbers of animals, but equivalent models without inversion are needed as numbers increa...
Göthe, Katrin; Oberauer, Klaus
2008-05-01
Dual process models postulate familiarity and recollection as the basis of the recognition process. We investigated the time course of integration of the two information sources into one recognition judgment in a working memory task. We tested 24 subjects with a response-signal variant of the modified Sternberg recognition task (Oberauer, 2001) to isolate the time course of three different probe types indicating different combinations of familiarity and source information. We compared two mathematical models implementing different ways of integrating familiarity and recollection. Within each model, we tested three assumptions about the nature of the familiarity signal, with familiarity having (a) only positive values, indicating similarity of the probe with the memory list, (b) only negative values, indicating novelty, or (c) both positive and negative values. Both models provided good fits to the data. A model combining the outputs of both processes additively (Integration Model) gave an overall better fit to the data than a model based on a continuous familiarity signal and a probabilistic all-or-none recollection process (Dominance Model).
Contaminant dispersal in bounded turbulent shear flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, J.M.; Bernard, P.S.; Chiang, K.F.
The dispersion of smoke downstream of a line source at the wall and at y+ = 30 in a turbulent boundary layer has been predicted with a non-local model of the scalar fluxes ūc and v̄c. The predicted plume from the wall source has been compared to high Schmidt number experimental measurements using a combination of hot-wire anemometry to obtain velocity component data synchronously with concentration data obtained optically. The predicted plumes from the source at y+ = 30 and at the wall also have been compared to a low Schmidt number direct numerical simulation. Near the source, the non-local flux models give considerably better predictions than models which account solely for mean gradient transport. At a sufficient distance downstream, the gradient models give reasonably good predictions.
The Sources of the Relationship between Sustained Attention and Reasoning
ERIC Educational Resources Information Center
Ren, Xuezhu; Schweizer, Karl; Xu, Fen
2013-01-01
Although a substantial relationship of sustained attention and reasoning was consistently found, little is known about what drives this relationship. The present study aims at revealing the underlying sources that are responsible for the relationship by means of an integrative approach combining experimental manipulation and psychometric modeling.…
A modeling framework for characterizing near-road air pollutant concentration at community scales
In this study, we combine information from transportation network, traffic emissions, and dispersion model to develop a framework to inform exposure estimates for traffic-related air pollutants (TRAPs) with a high spatial resolution. A Research LINE source dispersion model (R-LIN...
ERIC Educational Resources Information Center
SILVERN, LEONARD C.
The major objectives of this feasibility study were (1) to identify information sources which furnish occupational and economic data to secondary schools, (2) to select those sources which are believed to have a measurable influence on the vocational curriculum, and (3) to categorize, relate, and combine or restructure those sources into a…
Comparison of hybrid receptor models to locate PCB sources in Chicago
NASA Astrophysics Data System (ADS)
Hsu, Ying-Kuang; Holsen, Thomas M.; Hopke, Philip K.
Results of three hybrid receptor models, potential source contribution function (PSCF), concentration weighted trajectory (CWT), and residence time weighted concentration (RTWC), were compared for locating polychlorinated biphenyl (PCB) sources contributing to the atmospheric concentrations in Chicago. Variations of these models, including PSCF using mean and 75% criterion concentrations, joint probability PSCF (JP-PSCF), changes of point filters and grid cell sizes for RTWC, and PSCF using wind trajectories started at different altitudes, are also discussed. Modeling results were relatively consistent between models. However, no single model provided as complete information as was obtained by using all of them. CWT and 75% PSCF appear to be able to distinguish between larger sources and moderate ones. RTWC resolved high potential source areas. RTWC and JP-PSCF pooling data from all sampling sites removed the trailing effect often seen in PSCF modeling. PSCF results using average concentration criteria appear to identify both moderate and major sources. Each model has advantages and disadvantages. However, used in combination, they provide information that is not available if only one of them is used. For short-range atmospheric transport, PSCF results were consistent when using wind trajectories starting at different heights. Based on the archived PCB data, the modeling results indicate there is a large potential source area between Joliet and Kankakee, IL, and two moderate sources to the northwest and south of Chicago. On the south side of Chicago in the neighborhood of Lake Calumet, several PCB sources were identified. Other unidentified potential source location(s) will require additional upwind/downwind field sampling to verify modeling results.
Flood extent and water level estimation from SAR using data-model integration
NASA Astrophysics Data System (ADS)
Ajadi, O. A.; Meyer, F. J.
2017-12-01
Synthetic Aperture Radar (SAR) images have long been recognized as a valuable data source for flood mapping. Compared to other sources, SAR's weather and illumination independence and large-area coverage at high spatial resolution support reliable, frequent, and detailed observations of developing flood events. Accordingly, SAR has the potential to greatly aid in the near real-time monitoring of natural hazards, such as flood detection, if combined with automated image processing. This research works towards increasing the reliability and temporal sampling of SAR-derived flood hazard information by integrating information from multiple SAR sensors and SAR modalities (images and Interferometric SAR (InSAR) coherence) and by combining SAR-derived change detection information with hydrologic and hydraulic flood forecast models. First, the combination of multi-temporal SAR intensity images and coherence information for generating flood extent maps is introduced. The application of least-squares estimation integrates flood information from multiple SAR sensors, thus increasing the temporal sampling. SAR-based flood extent information will be combined with a Digital Elevation Model (DEM) to reduce false alarms and to estimate water depth and flood volume. The SAR-based flood extent map is assimilated into the Hydrologic Engineering Center River Analysis System (HEC-RAS) model to aid in hydraulic model calibration. The developed technology improves the accuracy of flood information by exploiting information from both data and models. It also provides enhanced flood information to decision-makers, supporting flood response and improving emergency relief efforts.
NASA Astrophysics Data System (ADS)
Korte, M. C.; Senftleben, R.; Brown, M. C.; Finlay, C. C.; Feinberg, J. M.; Biggin, A. J.
2016-12-01
Geomagnetic field evolution of the recent past can be studied using different data sources: Jackson et al. (2000) combined historical observations with modern field measurements to derive a global geomagnetic field model (gufm1) spanning 1590 to 1990. Several published young archeo- and volcanic paleomagnetic data fall into this time interval. Here, we directly combine data from these different sources to derive a global field model covering the past 1000 years. We particularly focus on reliably recovering dipole moment evolution prior to the times of the first direct absolute intensity observations at around 1840. We first compared the different data types and their agreement with the gufm1 model to assess their compatibility and reliability. We used these results, in combination with statistical modelling tests, to obtain suitable uncertainty estimates as weighting factors for the data in the final model. In addition, we studied samples from seven lava flows from the island of Fogo, Cape Verde, erupted between 1664 and 1857. Oriented samples were available for two of them, providing declination and inclination results. Due to the complicated mineralogy of three of the flows, microwave paleointensity experiments using a modified version of the IZZI protocol were carried out on flows erupted in 1664, 1769, 1816 and 1847. The new directional results are compared with nearby historical data and the influence on, and agreement with, the new model are discussed.
NASA Astrophysics Data System (ADS)
Zhang, Shou-ping; Xin, Xiao-kang
2017-07-01
Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts or positions, whether there is a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with analytic results for single-source amount and position identification, with relative errors of no more than 5%. For cases with multiple point sources and multiple variables, there are some errors in the computed results because many possible combinations of the pollution sources exist. However, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5%, which proves that the established source identification model can be used to direct emergency responses.
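A minimal sketch of such an objective function is given below, using the standard analytic solution for an instantaneous point release in a uniform 1-D channel (advection u, dispersion D, decay k, cross-section A); all parameter values are illustrative, not those of the study.

import numpy as np

def conc(M, x0, t0, x, t, u=0.5, D=10.0, k=0.0, A=1.0):
    # Concentration at (x, t) from mass M released at (x0, t0).
    dt = t - t0
    if dt <= 0:
        return 0.0
    return (M / (A * np.sqrt(4 * np.pi * D * dt))
            * np.exp(-(x - x0 - u * dt)**2 / (4 * D * dt))
            * np.exp(-k * dt))

def objective(params, obs):
    # params = (M, x0, t0); obs = [(x, t, measured concentration), ...].
    # A genetic algorithm would minimise this sum of squared residuals.
    return sum((conc(*params, x, t) - c)**2 for x, t, c in obs)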
NASA Technical Reports Server (NTRS)
Matthews, Elaine; Walter, B.; Bogner, J.; Sarma, D.; Portney, B.; Hansen, James (Technical Monitor)
2000-01-01
In situ measurements of atmospheric methane concentrations begun in the early 1980s show decadal trends, as well as large interannual variations, in growth rate. Recent research indicates that while wetlands can explain several of the large growth anomalies for individual years, the decadal trend may be the combined effect of increasing sinks, due to increases in tropospheric OH, and stabilizing sources. We discuss new 20-year histories of annual, global source strengths for all major methane sources, i.e., natural wetlands, rice cultivation, ruminant animals, landfills, fossil fuels, and biomass burning, and present estimates of the temporal pattern of the sink required to reconcile these sources and atmospheric concentrations over the time period. Analysis of the individual emission sources, together with model-derived estimates of the OH sink strength, indicates that the growth rate of atmospheric methane observed over the last 20 years can only be explained by a combination of changes in source emissions and an increasing tropospheric sink.
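The reconciliation argument can be pictured with a one-box budget, sketched below; the Tg-to-ppb conversion factor and the yearly Euler step are simplifications, and the source and lifetime histories are placeholders for the paper's reconstructions.

def box_model(C0, sources, lifetimes, F=2.78):
    # C in ppb; sources in Tg CH4/yr; lifetimes (OH sink) in yr;
    # F ~ Tg of CH4 per ppb (approximate). One Euler step per year.
    C, out = C0, []
    for S, tau in zip(sources, lifetimes):
        C += S / F - C / tau
        out.append(C)
    return out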
A Targeted Search for Point Sources of EeV Neutrons
NASA Astrophysics Data System (ADS)
Aab, A.; Abreu, P.; Aglietta, M.; Ahlers, M.; Ahn, E. J.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Alves Batista, R.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Aramo, C.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Badescu, A. M.; Barber, K. B.; Bäuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Brogueira, P.; Brown, W. C.; Buchholz, P.; Bueno, A.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, B.; Caccianiga, L.; Candusso, M.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Cester, R.; Chavez, A. G.; Cheng, S. H.; Chiavassa, A.; Chinellato, J. A.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Colalillo, R.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Criss, A.; Cronin, J.; Curutiu, A.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; de Jong, S. J.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; del Peral, L.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Di Matteo, A.; Diaz, J. C.; Díaz Castro, M. L.; Diep, P. N.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; Dova, M. T.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fernandes, M.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipčič, A.; Fox, B. D.; Fratu, O.; Fröhlich, U.; Fuchs, B.; Fuji, T.; Gaior, R.; García, B.; Garcia Roca, S. T.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gate, F.; Gemmeke, H.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Glaser, C.; Glass, H.; Gomez Albarracin, F.; Gómez Berisso, M.; Gómez Vitale, P. F.; Gonçalves, P.; Gonzalez, J. G.; Gookin, B.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grebe, S.; Griffith, N.; Grillo, A. F.; Grubb, T. D.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Hasankiadeh, Q. D.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huber, D.; Huege, T.; Insolia, A.; Isar, P. G.; Islo, K.; Jandt, I.; Jansen, S.; Jarne, C.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Kasper, P.; Katkov, I.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kunka, N.; La Rosa, G.; LaHurd, D.; Latronico, L.; Lauer, R.; Lauscher, M.; Lautridou, P.; Le Coz, S.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Lyberis, H.; Maccarone, M. C.; Malacari, M.; Maldera, S.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, V.; Mariş, I. C.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Martraire, D.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, A. 
J.; Matthews, J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Messina, S.; Meyhandan, R.; Mićanović, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morello, C.; Moreno, J. C.; Mostafá, M.; Moura, C. A.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, L.; Ochilo, L.; Olinto, A.; Oliveira, M.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Papenbreer, P.; Parente, G.; Parra, A.; Pastor, S.; Paul, T.; Pech, M.; Peķala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Peters, C.; Petrera, S.; Petrolini, A.; Petrov, Y.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porcelli, A.; Porowski, C.; Privitera, P.; Prouza, M.; Purrello, V.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rizi, V.; Roberts, J.; Rodrigues de Carvalho, W.; Rodriguez Cabo, I.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Roulet, E.; Rovero, A. C.; Rühle, C.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarmento, R.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Scholten, O.; Schoorlemmer, H.; Schovánek, P.; Schulz, A.; Schulz, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Squartini, R.; Srivastava, Y. N.; Stanič, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Taborda, O. A.; Tapia, A.; Tartare, M.; Thao, N. T.; Theodoro, V. M.; Tiffenberg, J.; Timmermans, C.; Todero Peixoto, C. J.; Toma, G.; Tomankova, L.; Tomé, B.; Tonachini, A.; Torralba Elipe, G.; Torres Machado, D.; Travnicek, P.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van den Berg, A. M.; van Velzen, S.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Vlcek, B.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Whelan, B. J.; Widom, A.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Wykes, S.; Yamamoto, T.; Yapici, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.; Auger Collaboration101, The Pierre
2014-07-01
A flux of neutrons from an astrophysical source in the Galaxy can be detected in the Pierre Auger Observatory as an excess of cosmic-ray air showers arriving from the direction of the source. To avoid the statistical penalty for making many trials, classes of objects are tested in combinations as nine "target sets," in addition to the search for a neutron flux from the Galactic center or from the Galactic plane. Within a target set, each candidate source is weighted in proportion to its electromagnetic flux, its exposure to the Auger Observatory, and its flux attenuation factor due to neutron decay. These searches do not find evidence for a neutron flux from any class of candidate sources. Tabulated results give the combined p-value for each class, with and without the weights, and also the flux upper limit for the most significant candidate source within each class. These limits on fluxes of neutrons significantly constrain models of EeV proton emission from non-transient discrete sources in the Galaxy.
Computational toxicology using the OpenTox application programming interface and Bioclipse
2011-01-01
Background Toxicity is a complex phenomenon involving the potential adverse effect on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. The required integration of biological and chemical information sources requires, however, a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications. Findings This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules. Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open and simplifying communication standard. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources. Conclusions A novel computational toxicity assessment platform was generated from integration of two open science platforms related to toxicology: Bioclipse, that combines a rich scriptable and graphical workbench environment for integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets by the use of the Open Standards from the OpenTox Application Programming Interface. This enables simultaneous access to a variety of distributed predictive toxicology databases, and algorithm and model resources, taking advantage of the Bioclipse workbench handling the technical layers. PMID:22075173
The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).
NASA Astrophysics Data System (ADS)
Pereda-Loth, V.; Franceries, X.; Afonso, A. S.; Ayala, A.; Eche, B.; Ginibrière, D.; Gauquelin-Koch, G.; Bardiès, M.; Lacoste-Collin, L.; Courtade-Saïdi, M.
2018-02-01
Astronauts are exposed to microgravity and chronic irradiation, but experimental conditions combining these two factors are difficult to reproduce on Earth. We have created an experimental device able to combine chronic irradiation and altered gravity that may be used for cell cultures or plant models in a ground-based facility. Irradiation was provided by thorium nitrate powder, conditioned so as to constitute a sealed source that could be placed in an incubator. Cell plates or plant seedlings could be placed in direct contact with the source or at various distances above it. Moreover, a random positioning machine (RPM) could be positioned on the source to simulate microgravity. The activity of the source was established using the Bateman formula. The spectrum of the source, calculated from the natural decay of radioactivity and measured by gamma spectrometry, showed very good agreement. The experimental fluence was close to the theoretical fluence evaluation, attesting to its uniform distribution. A Monte Carlo model of the irradiation device was implemented with the GATE code. Dosimetry was performed with radiophotoluminescent dosimeters exposed for one month at different locations (x and y axes) in various cell culture conditions. Using the RPM placed on the source, we reached a mean absorbed dose of gamma rays of (0.33 ± 0.17) mSv per day. In conclusion, we have developed an innovative device allowing chronic radiation exposure to be combined with altered gravity. Given the limited access to the International Space Station, this device could be useful to researchers interested in the field of space biology.
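For reference, the Bateman solution mentioned above gives the population of the n-th chain member for distinct decay constants; the sketch below is generic, with illustrative inputs rather than the thorium-series values.

import numpy as np

def bateman_Nn(lams, N1_0, t):
    # Atoms of chain member n, given decay constants lams[0..n-1] and an
    # initial population N1_0 of member 1; the activity is lams[-1] * Nn.
    n = len(lams)
    coeff = N1_0 * np.prod(lams[:-1])          # lambda_1 ... lambda_{n-1}
    s = 0.0
    for i in range(n):
        denom = np.prod([lams[j] - lams[i] for j in range(n) if j != i])
        s += np.exp(-lams[i] * t) / denom
    return coeff * s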
Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R
2012-09-10
A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
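A compact sketch of the two ingredients, Latin hypercube sampling and likelihood-style reweighting, is given below; the Gaussian tolerance form is an assumption standing in for the paper's prevalence-based weighting scheme.

import numpy as np

def lhs(n, lo, hi, rng=np.random.default_rng(0)):
    # One stratified sample per interval in each dimension.
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = len(lo)
    strata = np.tile(np.arange(n), (d, 1))
    u = (rng.permuted(strata, axis=1).T + rng.random((n, d))) / n
    return lo + u * (hi - lo)

def reweight(samples, run_model, obs_prev, sigma):
    # Weight each parameter set by agreement with observed prevalence.
    pred = np.array([run_model(p) for p in samples])
    w = np.exp(-0.5 * ((pred - obs_prev) / sigma) ** 2)
    return w / w.sum()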
A combined approach for the evaluation of a volatile organic compound emissions inventory.
Choi, Yu-Jin; Calabrese, Richard V; Ehrman, Sheryl H; Dickerson, Russell R; Stehr, Jeffrey W
2006-02-01
Emissions inventories significantly affect photochemical air quality model performance and the development of effective control strategies. However, there have been very few studies to evaluate their accuracy. Here, to evaluate a volatile organic compound (VOC) emissions inventory, we implemented a combined approach: comparing the ratios of carbon bond (CB)-IV VOC groups to nitrogen oxides (NOx) or carbon monoxide (CO) using an emission preprocessing model, comparing the ratios of VOC source contributions from a source apportionment technique to NOx or CO, and comparing ratios of CB-IV VOC groups to NOx or CO and the absolute concentrations of CB-IV VOC groups using an air quality model, with the corresponding ratios and concentrations observed at three sites (Maryland, Washington, DC, and New Jersey). The comparisons of the ethene/NOx ratio, the xylene group (XYL)/NOx ratio, and ethene and XYL concentrations between estimates and measurements showed some differences, depending on the comparison approach, at the Maryland and Washington, DC sites. On the other hand, consistent results at the New Jersey site were observed, implying a possible overestimation of vehicle exhaust. However, in the case of the toluene group (TOL), which is emitted mainly from surface coating and printing sources in the solvent utilization category, the ratios of TOL/NOx or CO, as well as the absolute concentrations, revealed an overestimate of these solvent sources by a factor of 1.5 to 3 at all three sites. In addition, the overestimate of these solvent sources agreed with the comparisons of surface coating and printing source contributions relative to NOx from a source apportionment technique to the corresponding value of estimates at the Maryland site. Other studies have also suggested an overestimate of solvent sources, implying a possibility of inaccurate emission factors in estimating VOC emissions from surface coating and printing sources. We tested the impact of these overestimates with a chemical transport model and found little change in ozone but substantial changes in calculated secondary organic aerosol concentrations.
NASA Astrophysics Data System (ADS)
Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina
2015-11-01
A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used for investigating the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting and integration to absorbed dose and BED. By introducing variabilities in these steps, the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when removing a particular source. The obtained absorbed dose and BED standard deviations are approximately 6%, and slightly higher if the root-mean-square error is considered. The most important sources of variability are the compensation for partial volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.
NASA Astrophysics Data System (ADS)
Pedone, Maria; Granieri, Domenico; Moretti, Roberto; Fedele, Alessandro; Troise, Claudia; Somma, Renato; De Natale, Giuseppe
2017-12-01
This study investigates fumarolic CO2 emissions at Campi Flegrei (Southern Italy) and their dispersion in the lowest atmospheric boundary layer. We innovatively utilize a Lagrangian Stochastic dispersion model (WindTrax) combined with an Eulerian model (DISGAS) to diagnose the dispersion of diluted gas plumes over large and complex topographic domains. New measurements of CO2 concentrations acquired in February and October 2014 in the area of Pisciarelli and Solfatara, the two major fumarolic fields of Campi Flegrei caldera, and simultaneous measurements of meteorological parameters are used to: 1) test the ability of WindTrax to calculate the fumarolic CO2 flux from the investigated sources, and 2) perform predictive numerical simulations to resolve the mutual interference between the CO2 emissions of the two adjacent areas. This novel approach allows us to a) better quantify the CO2 emission of the fumarolic source, b) discriminate "true" CO2 contributions for each source, and c) understand the potential impact of the composite CO2 plume (Pisciarelli "plus" Solfatara) on the highly populated areas inside the Campi Flegrei caldera.
A Speculative Approach to Design A Hybrid System for Green Energy
NASA Astrophysics Data System (ADS)
Sharma, Dinesh; Sharma, Purnima K.; Naidu, Praveen V.
2017-08-01
Nowadays the demand for energy is increasing all over the world. Because of this demand, fossil fuels are being depleted day by day to meet the energy requirements of daily human life. It is necessary to balance the increasing energy demand by taking advantage of natural renewable energy sources such as sun, wind, and hydro. These energy sources alone can offset the imbalance between fossil fuels and increasing energy demand. Renewable energy systems are suitable for off-grid power generation, providing services to remote areas without building complex grid infrastructures. India has abundant sources of solar and wind energy. Individually, these energy sources have their own advantages and disadvantages; to overcome the disadvantages of the individual sources, they can be combined into an efficient hybrid renewable energy source. In this paper we propose a hybrid model combining four renewable energy sources, solar, wind, RF signals, and living plants, to increase energy efficiency.
Vezzaro, L; Sharma, A K; Ledin, A; Mikkelsen, P S
2015-03-15
The estimation of micropollutant (MP) fluxes in stormwater systems is a fundamental prerequisite when preparing strategies to reduce stormwater MP discharges to natural waters. Dynamic integrated models can be important tools in this step, as they can be used to integrate the limited data provided by monitoring campaigns and to evaluate the performance of different strategies based on model simulation results. This study presents an example where six different control strategies, including both source-control and end-of-pipe treatment, were compared. The comparison focused on fluxes of heavy metals (copper, zinc) and organic compounds (fluoranthene). MP fluxes were estimated by using an integrated dynamic model, in combination with stormwater quality measurements. MP sources were identified by using GIS land usage data, runoff quality was simulated by using a conceptual accumulation/washoff model, and a stormwater retention pond was simulated by using a dynamic treatment model based on MP inherent properties. Uncertainty in the results was estimated with a pseudo-Bayesian method. Despite the great uncertainty in the MP fluxes estimated by the runoff quality model, it was possible to compare the six scenarios in terms of discharged MP fluxes, compliance with water quality criteria, and sediment accumulation. Source-control strategies obtained better results in terms of reduction of MP emissions, but all the simulated strategies failed in fulfilling the criteria based on emission limit values. The results presented in this study show how the efficiency of MP pollution control strategies can be quantified by combining advanced modeling tools (integrated stormwater quality model, uncertainty calibration).
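The accumulation/washoff idea can be sketched as below, with asymptotic dry-weather build-up and first-order washoff; the coefficients are assumed values, not those calibrated in the study.

import numpy as np

def buildup_washoff(rain, M0=0.0, k_acc=0.1, M_max=10.0, k_w=0.2, dt=1.0):
    # rain: rainfall intensity per time step; returns washoff per step.
    M, washed = M0, []
    for r in rain:
        M += k_acc * (M_max - M) * dt            # build-up towards M_max
        W = M * (1 - np.exp(-k_w * r * dt))      # exponential washoff
        M -= W
        washed.append(W)
    return washed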
Xu, Jiao; Shi, Guo-Liang; Guo, Chang-Sheng; Wang, Hai-Ting; Tian, Ying-Ze; Huangfu, Yan-Qi; Zhang, Yuan; Feng, Yin-Chang; Xu, Jian
2018-01-01
A hybrid model based on the positive matrix factorization (PMF) model and the health risk assessment model for assessing risks associated with sources of perfluoroalkyl substances (PFASs) in water was established and applied at Dianchi Lake to test its applicability. The new method contains 2 stages: 1) the sources of PFASs were apportioned by the PMF model and 2) the contribution of health risks from each source was calculated by the new hybrid model. Two factors were extracted by PMF, with factor 1 identified as an aqueous fire-fighting foam source and factor 2 as a fluoropolymer manufacturing and processing and perfluorooctanoic acid production source. The health risk of PFASs in the water assessed by the health risk assessment model was 9.54 × 10^-7 a^-1 on average, showing no obvious adverse effects on human health. The 2 sources' risks estimated by the new hybrid model ranged from 2.95 × 10^-10 to 6.60 × 10^-6 a^-1 and from 1.64 × 10^-7 to 1.62 × 10^-6 a^-1, respectively. The new hybrid model can provide useful information on the health risks of PFAS sources, which is helpful for pollution control and environmental management. Environ Toxicol Chem 2018;37:107-115. © 2017 SETAC.
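The factorization stage can be pictured with plain non-negative matrix factorization, sketched below with multiplicative updates; real PMF additionally weights residuals by measurement uncertainties, which is omitted here.

import numpy as np

def nmf(X, k, iters=500, eps=1e-9, rng=np.random.default_rng(1)):
    # X (samples x species) ~ G (source contributions) @ F (profiles).
    n, m = X.shape
    G, F = rng.random((n, k)), rng.random((k, m))
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F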
Estimation of gross land-use change and its uncertainty using a Bayesian data assimilation approach
NASA Astrophysics Data System (ADS)
Levy, Peter; van Oijen, Marcel; Buys, Gwen; Tomlinson, Sam
2018-03-01
We present a method for estimating land-use change using a Bayesian data assimilation approach. The approach provides a general framework for combining multiple disparate data sources with a simple model. This allows us to constrain estimates of gross land-use change with reliable national-scale census data, whilst retaining the detailed information available from several other sources. Eight different data sources, with three different data structures, were combined in our posterior estimate of land use and land-use change, and other data sources could easily be added in future. The tendency for observations to underestimate gross land-use change is accounted for by allowing for a skewed distribution in the likelihood function. The data structure produced has high temporal and spatial resolution, and is appropriate for dynamic process-based modelling. Uncertainty is propagated appropriately into the output, so we have a full posterior distribution of output and parameters. The data are available in the widely used netCDF file format from http://eidc.ceh.ac.uk/.
NASA Astrophysics Data System (ADS)
Heo, Seung; Cheong, Cheolung; Kim, Taehoon
2015-09-01
In this study, an efficient numerical method is proposed for predicting the tonal and broadband noise of a centrifugal fan unit. The proposed method is based on Hybrid Computational Aero-Acoustic (H-CAA) techniques combined with the Unsteady Fast Random Particle Mesh (U-FRPM) method. The U-FRPM method is developed by extending the FRPM method proposed by Ewert et al. and is utilized to synthesize the turbulent flow field from unsteady RANS solutions. The H-CAA technique combined with the U-FRPM method is applied to predict the broadband as well as tonal noise of a centrifugal fan unit in a household refrigerator. First, the unsteady flow field driven by a rotating fan is computed by solving the RANS equations with Computational Fluid Dynamics (CFD) techniques. Main source regions around the rotating fan are identified by examining the computed flow fields. Then, turbulent flow fields in the main source regions are synthesized by applying the U-FRPM method. The acoustic analogy is applied to model acoustic sources in the main source regions. Finally, the centrifugal fan noise is predicted by feeding the modeled acoustic sources into an acoustic solver based on the Boundary Element Method (BEM). The sound spectral levels predicted using the current numerical method show good agreement with the measured spectra at the Blade Pass Frequencies (BPFs) as well as in the high-frequency range. Moreover, the present method enables quantitative assessment of the relative contributions of identified source regions to the sound field by comparing the predicted sound pressure spectra due to the modeled sources.
Combining multiple earthquake models in real time for earthquake early warning
Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.
2017-01-01
The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
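One simple way to combine independent Gaussian shaking forecasts at a site is precision weighting, sketched below; this illustrates the fusion idea only and is not the article's full Bayesian algorithm.

import numpy as np

def fuse(mus, sigmas):
    # mus, sigmas: per-algorithm shaking forecasts and uncertainties.
    mus = np.asarray(mus, float)
    w = 1.0 / np.asarray(sigmas, float) ** 2     # precisions
    return np.sum(w * mus) / np.sum(w), np.sqrt(1.0 / np.sum(w))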
NASA Astrophysics Data System (ADS)
Lechner, H. N.; Waite, G. P.; Wauthier, D. C.; Escobar-Wolf, R. P.; Lopez-Hetland, B.
2017-12-01
Geodetic data from an eight-station GPS network at Pacaya volcano, Guatemala, allow us to produce a simple analytical model of deformation sources associated with the 2010 eruption and the eruptive period in 2013-2014. Deformation signals for both eruptive time periods indicate downward vertical and outward horizontal motion at several stations surrounding the volcano. The objective of this research was to better understand the magmatic plumbing system and the sources of this deformation. Because this down-and-out displacement is difficult to explain with a single source, we chose a model that includes a combination of a dike and a spherical source. Our modelling suggests that deformation is dominated by the inflation of a shallow dike seated high within the volcanic edifice and deflation of a deeper, spherical source below the SW flank of the volcano. The source parameters for the dike feature are in good agreement with the observed orientation of recent vent emplacements on the edifice as well as the horizontal displacements, while the parameters for a deeper spherical source accommodate the downward vertical motion. This study presents GPS observations at Pacaya dating back to 2009 and provides a glimpse of simple models of possible deformation sources.
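The spherical-source component is commonly approximated by the Mogi point source; the sketch below gives its surface displacements in an elastic half-space, with Poisson's ratio and the volume change as assumed inputs.

import numpy as np

def mogi(x, y, depth, dV, nu=0.25):
    # Surface displacements from a volume change dV at the given depth.
    r2 = x**2 + y**2 + depth**2
    c = (1 - nu) * dV / np.pi
    uz = c * depth / r2**1.5                     # vertical (up positive)
    ur = c * np.hypot(x, y) / r2**1.5            # radial horizontal
    return ur, uz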
Using truck fleet data in combination with other data sources for freight modeling and planning.
DOT National Transportation Integrated Search
2014-07-01
This project investigated the use of large streams of truck GPS data available from the American Transportation : Research Institute (ATRI) for the following statewide freight modeling and planning applications in Florida: : (1) Average truck speed d...
ERIC Educational Resources Information Center
Amershi, Saleema; Conati, Cristina
2009-01-01
In this paper, we present a data-based user modeling framework that uses both unsupervised and supervised classification to build student models for exploratory learning environments. We apply the framework to build student models for two different learning environments and using two different data sources (logged interface and eye-tracking data).…
NASA Astrophysics Data System (ADS)
Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei
2016-04-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models tend to contain a large number of poorly defined and spatially varying model parameters which are therefore computationally expensive to calibrate. Insufficient data can result in model parameter and structural equifinality, particularly when calibration is reliant on catchment outlet discharge behaviour alone. Evaluating spatial patterns of internal hydrological behaviour has the potential to reveal simulations that, whilst consistent with measured outlet discharge, are qualitatively dissimilar to our perceptual understanding of how the system should behave. We argue that such understanding, which may be derived from stakeholder knowledge across different catchments for certain process dynamics, is a valuable source of information to help reject non-behavioural models, and therefore identify feasible model structures and parameters. The challenge, however, is to convert different sources of often qualitative and/or semi-quantitative information into robust quantitative constraints of model states and fluxes, and combine these sources of information together to reject models within an efficient calibration framework. Here we present the development of a framework to incorporate different sources of data to efficiently calibrate distributed catchment models. For each source of information, an interval or inequality is used to define the behaviour of the catchment system. These intervals are then combined to produce a hyper-volume in state space, which is used to identify behavioural models. We apply the methodology to calibrate the Penn State Integrated Hydrological Model (PIHM) at the Wye catchment, Plynlimon, UK. Outlet discharge behaviour is successfully simulated when perceptual understanding of relative groundwater levels between lowland peat, upland peat and valley slopes within the catchment is used to identify behavioural models. The process of converting qualitative information into quantitative constraints forces us to evaluate the assumptions behind our perceptual understanding in order to derive robust constraints, and therefore fairly reject models and avoid type II errors. Likewise, consideration needs to be given to the commensurability problem when mapping perceptual understanding to constrain model states.
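Converted to code, the hyper-volume idea reduces to keeping only parameter sets whose simulated states satisfy every interval; the sketch below is schematic, with simulate() and the constraint table as placeholders.

def behavioural_models(param_sets, simulate, constraints):
    # constraints: {state name: (lower bound, upper bound)}.
    kept = []
    for p in param_sets:
        states = simulate(p)               # state name -> simulated value
        if all(lo <= states[k] <= hi for k, (lo, hi) in constraints.items()):
            kept.append(p)
    return kept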
NASA Astrophysics Data System (ADS)
Wellen, Christopher; Arhonditsis, George B.; Long, Tanya; Boyd, Duncan
2014-11-01
Spatially distributed nonpoint source watershed models are essential tools to estimate the magnitude and sources of diffuse pollution. However, little work has been undertaken to understand the sources and ramifications of the uncertainty involved in their use. In this study we conduct the first Bayesian uncertainty analysis of the water quality components of the SWAT model, one of the most commonly used distributed nonpoint source models. Working in Southern Ontario, we apply three Bayesian configurations for calibrating SWAT to Redhill Creek, an urban catchment, and Grindstone Creek, an agricultural one. We answer four interrelated questions: can SWAT determine suspended sediment sources with confidence when end of basin data is used for calibration? How does uncertainty propagate from the discharge submodel to the suspended sediment submodels? Do the estimated sediment sources vary when different calibration approaches are used? Can we combine the knowledge gained from different calibration approaches? We show that: (i) despite reasonable fit at the basin outlet, the simulated sediment sources are subject to uncertainty sufficient to undermine the typical approach of reliance on a single, best fit simulation; (ii) more than a third of the uncertainty of sediment load predictions may stem from the discharge submodel; (iii) estimated sediment sources do vary significantly across the three statistical configurations of model calibration despite end-of-basin predictions being virtually identical; and (iv) Bayesian model averaging is an approach that can synthesize predictions when a number of adequate distributed models make divergent source apportionments. We conclude with recommendations for future research to reduce the uncertainty encountered when using distributed nonpoint source models for source apportionment.
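A minimal form of the averaging step is sketched below, with weights derived from an information-criterion approximation to each calibration's evidence; the exp(-BIC/2) weighting is an assumption, not the paper's exact scheme.

import numpy as np

def bma_combine(apportionments, bics):
    # apportionments: one source-apportionment vector per calibration.
    w = np.exp(-0.5 * (np.asarray(bics, float) - np.min(bics)))
    w /= w.sum()
    return np.average(np.asarray(apportionments, float), axis=0, weights=w)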
NASA Astrophysics Data System (ADS)
Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe
2017-12-01
This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
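For comparison, the classical tracer-ratio estimate that this concept generalises can be written in a few lines; the transect concentrations are assumed to be background-subtracted and all inputs are illustrative.

import numpy as np

def tracer_ratio(q_tracer, c_target, c_tracer, dx):
    # Target emission rate from a known tracer release rate and the
    # cross-plume integrated concentrations along one transect.
    return q_tracer * np.trapz(c_target, dx=dx) / np.trapz(c_tracer, dx=dx)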
Kirol, Christopher P; Beck, Jeffrey L; Huzurbazar, Snehalata V; Holloran, Matthew J; Miller, Scott N
2015-06-01
Conserving a declining species that is facing many threats, including overlap of its habitats with energy extraction activities, depends upon identifying and prioritizing the value of the habitats that remain. In addition, habitat quality is often compromised when source habitats are lost or fragmented due to anthropogenic development. Our objective was to build an ecological model to classify and map habitat quality in terms of source or sink dynamics for Greater Sage-Grouse (Centrocercus urophasianus) in the Atlantic Rim Project Area (ARPA), a developing coalbed natural gas field in south-central Wyoming, USA. We used occurrence and survival modeling to evaluate relationships between environmental and anthropogenic variables at multiple spatial scales and for all female summer life stages, including nesting, brood-rearing, and non-brooding females. For each life stage, we created resource selection functions (RSFs). We weighted the RSFs and combined them to form a female summer occurrence map. We also modeled survival as a function of spatial variables for nest, brood, and adult female summer survival. Our survival models were mapped as survival probability functions individually and then combined with fixed vital rates in a fitness metric model that, when mapped, predicted habitat productivity (productivity map). Our results demonstrate a suite of environmental and anthropogenic variables at multiple scales that were predictive of occurrence and survival. We created a source-sink map by overlaying our female summer occurrence map and productivity map to predict habitats contributing to population surpluses (source habitats) or deficits (sink habitats) and low-occurrence habitats on the landscape. The source-sink map predicted that of the Sage-Grouse habitat within the ARPA, 30% was primary source, 29% was secondary source, 4% was primary sink, 6% was secondary sink, and 31% was low occurrence. Our results provide evidence that energy development and avoidance of energy infrastructure were probably reducing the amount of source habitat within the ARPA landscape. Our source-sink map provides managers with a means of prioritizing habitats for conservation planning based on source and sink dynamics. The spatial identification of high-value (i.e., primary source) as well as suboptimal (i.e., primary sink) habitats allows for informed energy development to minimize effects on local wildlife populations.
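A toy sketch of the weighted RSF combination and source-sink overlay steps, under a simple linear-weighting assumption; the rasters, weights, and thresholds below are random stand-ins, not the study's fitted surfaces.

```python
# Hypothetical sketch: combine life-stage RSF rasters into an
# occurrence surface, then overlay with a productivity surface.
import numpy as np

rng = np.random.default_rng(1)
nest_rsf, brood_rsf, nonbrood_rsf = (rng.random((50, 50)) for _ in range(3))

# Life-stage weights (e.g., reflecting relative importance); sum to one.
w = np.array([0.4, 0.35, 0.25])
occurrence = w[0]*nest_rsf + w[1]*brood_rsf + w[2]*nonbrood_rsf

# Source-sink classification overlays occurrence with productivity,
# e.g. high occurrence + high productivity -> primary source habitat.
productivity = rng.random((50, 50))
primary_source = (occurrence > 0.6) & (productivity > 0.6)
primary_sink = (occurrence > 0.6) & (productivity < 0.4)
print(primary_source.mean(), primary_sink.mean())
```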
Research on evacuation in the subway station in China based on the Combined Social Force Model
NASA Astrophysics Data System (ADS)
Wan, Jiahui; Sui, Jie; Yu, Hua
2014-01-01
With the increasing number of subway stations, more and more attention has been paid to their emergency evacuation, as it plays an important part in urban emergency management. The present paper puts forward a method of crowd evacuation simulation for bioterrorism in a subway station environment using the basic theory of the Social Force Model combined with the Gaussian Puff Model. A Combined Social Force Model is developed which is suitable for a real situation in which there is a sudden toxic gas event. The model can also be used to demonstrate individual behaviors in evacuation, such as competitive, grouping, and herding behaviors. Finally, a series of experiments is conducted, with the following results. (1) When there is a toxic gas terrorist attack in a subway station, the influence on passengers varies with the position of the gas source and the number of gas sources. (2) More casualties will occur if managers do not detect the toxic gas danger and inform passengers about it. (3) The larger the wind speed is, the smaller the number of injured passengers will be. With the experiments, the number of people affected and other parameters like gas concentration can be estimated, which could support rapid and efficient emergency decisions.
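A minimal sketch of the Gaussian puff concentration field that such a combined model could use to drive toxic-gas exposure; the release mass, wind speed, and dispersion sigmas are illustrative assumptions, not the paper's parameters.

```python
# Sketch of an instantaneous Gaussian puff advected downwind at speed
# u, with a ground-reflection term; evaluated at a pedestrian position.
import numpy as np

def puff_concentration(x, y, z, t, Q, u, sx, sy, sz, H=1.0):
    """Concentration of a release of mass Q at height H; (sx, sy, sz)
    are the dispersion sigmas at travel time t."""
    norm = Q / ((2 * np.pi) ** 1.5 * sx * sy * sz)
    return (norm
            * np.exp(-(x - u * t) ** 2 / (2 * sx ** 2))
            * np.exp(-y ** 2 / (2 * sy ** 2))
            * (np.exp(-(z - H) ** 2 / (2 * sz ** 2))
               + np.exp(-(z + H) ** 2 / (2 * sz ** 2))))

# Concentration 30 s after release, at head height, 20 m downwind:
print(puff_concentration(x=20.0, y=2.0, z=1.6, t=30.0,
                         Q=1.0, u=0.5, sx=4.0, sy=4.0, sz=2.0))
```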
Sun, Wei; Zhang, Xiaorui; Peeta, Srinivas; He, Xiaozheng; Li, Yongfu; Zhu, Senlai
2015-01-01
To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) to the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model can combine the fatigue state at the previous time step in the decision-level fusion to improve the robustness of the fatigue driving recognition. An improved correction strategy of the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model. PMID:26393615
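Since the decision-level fusion builds on basic probability assignments, a minimal sketch of Dempster's rule of combination may help; the BPA values below are invented stand-ins for two fatigue-feature sources, and the paper's dynamic BPA and conflict-correction strategy are not reproduced.

```python
# Minimal Dempster's rule over the frame {fatigued, alert}.
F, A = frozenset({"fatigued"}), frozenset({"alert"})
FA = F | A                                  # total uncertainty

def dempster_combine(m1, m2):
    """Intersect focal elements, accumulate conflict K on the empty
    set, and renormalise the remaining mass by 1 - K."""
    fused = {F: 0.0, A: 0.0, FA: 0.0}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                fused[inter] += pa * pb
            else:
                conflict += pa * pb
    return {h: v / (1.0 - conflict) for h, v in fused.items()}

m_eyelid = {F: 0.6, A: 0.2, FA: 0.2}        # e.g. eyelid-closure feature
m_lane   = {F: 0.5, A: 0.3, FA: 0.2}        # e.g. lane-deviation feature
for h, v in dempster_combine(m_eyelid, m_lane).items():
    print(sorted(h), round(v, 3))
```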
Cai, Hao; Long, Weiding; Li, Xianting; Kong, Lingjuan; Xiong, Shuang
2010-06-15
When hazardous contaminants are suddenly released indoors, prompt and proper emergency responses are critical to protecting occupants. This paper provides a framework for determining the optimal combination of ventilation and evacuation strategies when the source locations are uncertain. The certainty of source locations is classified as complete certainty, incomplete certainty, or complete uncertainty to cover all possible situations. According to this classification, three types of decision analysis models are presented. A new concept, the efficiency factor of contaminant source (EFCS), is incorporated in these models to evaluate the payoffs of the ventilation and evacuation strategies. A decision-making procedure based on these models is proposed and demonstrated by numerical studies of one hundred scenarios with ten ventilation modes, two evacuation modes, and five source locations. The results show that the models can be useful for directing the decision analysis of both the ventilation and evacuation strategies. In addition, the certainty of the source locations has an important effect on the outcomes of the decision-making. Copyright 2010 Elsevier B.V. All rights reserved.
Meisner, Allison; Kerr, Kathleen F; Thiessen-Philbrook, Heather; Coca, Steven G; Parikh, Chirag R
2016-02-01
Individual biomarkers of renal injury are only modestly predictive of acute kidney injury (AKI). Using multiple biomarkers has the potential to improve predictive capacity. In this systematic review, statistical methods of articles developing biomarker combinations to predict AKI were assessed. We identified and described three potential sources of bias (resubstitution bias, model selection bias, and bias due to center differences) that may compromise the development of biomarker combinations. Fifteen studies reported developing kidney injury biomarker combinations for the prediction of AKI after cardiac surgery (8 articles), in the intensive care unit (4 articles), or other settings (3 articles). All studies were susceptible to at least one source of bias and did not account for or acknowledge the bias. Inadequate reporting often hindered our assessment of the articles. We then evaluated, when possible (7 articles), the performance of published biomarker combinations in the TRIBE-AKI cardiac surgery cohort. Predictive performance was markedly attenuated in six out of seven cases. Thus, deficiencies in analysis and reporting are avoidable, and care should be taken to provide accurate estimates of risk prediction model performance. Hence, rigorous design, analysis, and reporting of biomarker combination studies are essential to realizing the promise of biomarkers in clinical practice.
Directional Hearing and Sound Source Localization in Fishes.
Sisneros, Joseph A; Rogers, Peter H
2016-01-01
Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization and has wide applicability with regard to source type, acoustic environment, and time waveform.
NASA Astrophysics Data System (ADS)
Calchi Novati, S.; Skowron, J.; Jung, Y. K.; Beichman, C.; Bryden, G.; Carey, S.; Gaudi, B. S.; Henderson, C. B.; Shvartzvald, Y.; Yee, J. C.; Zhu, W.; Spitzer Team; Udalski, A.; Szymański, M. K.; Mróz, P.; Poleski, R.; Soszyński, I.; Kozłowski, S.; Pietrukowicz, P.; Ulaczyk, K.; Pawlak, M.; Rybicki, K.; Iwanek, P.; OGLE Collaboration; Albrow, M. D.; Chung, S.-J.; Gould, A.; Han, C.; Hwang, K.-H.; Ryu, Y.-H.; Shin, I.-G.; Zang, W.; Cha, S.-M.; Kim, D.-J.; Kim, H.-W.; Kim, S.-L.; Lee, C.-U.; Lee, D.-J.; Lee, Y.; Park, B.-G.; Pogge, R. W.; KMTNet Collaboration
2018-06-01
We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p ≃ 1.6 M_Jup) planet orbiting a mid-late M dwarf (M ≃ 0.2 M_⊙) that lies D_LS ≃ 1.0 kpc in the foreground of the microlensed Galactic-bar source star. The planet–host projected separation is a_⊥ ≃ 1.0 au, i.e., well beyond the snow line. By measuring the source proper motion μ_s from ongoing long-term OGLE imaging and combining this with the lens-source relative proper motion μ_rel derived from the microlensing solution, we show that the lens proper motion μ_l = μ_rel + μ_s is consistent with the lens lying in the Galactic disk, although a bulge lens is not ruled out. We show that while the Spitzer and ground-based data are comparably well fitted by planetary (i.e., binary-lens (2L1S)) and binary-source (1L2S) models, the combination of Spitzer and ground-based data decisively favors the planetary model. This is a new channel to resolve the 2L1S/1L2S degeneracy, which can be difficult to break in some cases.
NASA Astrophysics Data System (ADS)
Isken, Marius P.; Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Bathke, Hannes M.
2017-04-01
We present a modular open-source software framework (pyrocko, kite, grond; http://pyrocko.org) for rapid InSAR data post-processing and modelling of tectonic and volcanic displacement fields derived from satellite data. Our aim is to ease and streamline the joint optimisation of earthquake observations from InSAR and GPS data together with seismological waveforms for an improved estimation of the ruptures' parameters. Through this approach we can provide finite models of earthquake ruptures and therefore contribute to a timely and better understanding of earthquake kinematics. The new kite module enables fast processing of unwrapped InSAR scenes for source modelling: the spatial sub-sampling and data error/noise estimation for the interferogram are evaluated automatically and interactively. The rupture's near-field surface displacement data are then combined with seismic far-field waveforms and jointly modelled using the pyrocko.gf framework, which allows for fast forward modelling based on pre-calculated elastodynamic and elastostatic Green's functions. Lastly, the grond module supplies a bootstrap-based probabilistic (Monte Carlo) joint optimisation to estimate the parameters and uncertainties of a finite-source earthquake rupture model. We describe the developed and applied methods as an effort to establish a semi-automatic processing and modelling chain. The framework is applied to Sentinel-1 data from the 2016 Central Italy earthquake sequence, where we present the earthquake mechanism and rupture model from which we derive regions of increased Coulomb stress. The open-source software framework is developed at GFZ Potsdam and at the University of Kiel, Germany; it is written in the Python and C programming languages. The toolbox architecture is modular and independent, and can be utilized flexibly for a variety of geophysical problems. This work is conducted within the BridGeS project (http://www.bridges.uni-kiel.de) funded by the German Research Foundation DFG through an Emmy-Noether grant.
Tests and consequences of disk plus halo models of gamma-ray burst sources
NASA Technical Reports Server (NTRS)
Smith, I. A.
1995-01-01
The gamma-ray burst observations made by the Burst and Transient Source Experiment (BATSE) and by previous experiments are still consistent with a combined Galactic disk (or Galactic spiral arm) plus extended Galactic halo model. Testable predictions and consequences of the disk plus halo model are discussed here; tests performed on the expanded BATSE database in the future will constrain the allowed model parameters and may eventually rule out the disk plus halo model. Using examples, it is shown that if the halo has an appropriate edge, BATSE will never detect an anisotropic signal from the halo of the Andromeda galaxy. A prediction of the disk plus halo model is that the fraction of the bursts observed to be in the 'disk' population rises as the detector sensitivity improves. A careful reexamination of the numbers of bursts in the two populations for the pre-BATSE databases could rule out this class of models. Similarly, it is predicted that different satellites will observe different relative numbers of bursts in the two classes for any model in which there are two different spatial distributions of the sources, or for models in which there is one spatial distribution of the sources that is sampled to different depths for the two classes. An important consequence of the disk plus halo model is that for the birthrate of the halo sources to be small compared to the birthrate of the disk sources, it is necessary for the halo sources to release many orders of magnitude more energy over their bursting lifetime than the disk sources. The halo bursts must also be much more luminous than the disk bursts; if this disk-halo model is correct, it is necessary to explain why the disk sources do not produce halo-type bursts.
Opportunities and Challenges in Supply-Side Simulation: Physician-Based Models
Gresenz, Carole Roan; Auerbach, David I; Duarte, Fabian
2013-01-01
Objective: To provide a conceptual framework and to assess the availability of empirical data for supply-side microsimulation modeling in the context of health care. Data Sources: Multiple secondary data sources, including the American Community Survey, Health Tracking Physician Survey, and SK&A physician database. Study Design: We apply our conceptual framework to one entity in the health care market—physicians—and identify, assess, and compare data available for physician-based simulation models. Principal Findings: Our conceptual framework describes three broad types of data required for supply-side microsimulation modeling. Our assessment of available data for modeling physician behavior suggests broad comparability across various sources on several dimensions and highlights the need for significant integration of data across multiple sources to provide a platform adequate for modeling. A growing literature provides potential estimates for use as behavioral parameters that could serve as the models' engines. Sources of data for simulation modeling that account for the complex organizational and financial relationships among physicians and other supply-side entities are limited. Conclusions: A key challenge for supply-side microsimulation modeling is optimally combining available data to harness their collective power. Several possibilities also exist for novel data collection. These have the potential to serve as catalysts for the next generation of supply-side-focused simulation models to inform health policy. PMID:23347041
The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).
Zhao, Liang; Wei, Jianwei; Lu, Junhua; He, Cheng; Duan, Chunying
2017-07-17
The use of small molecules with defined pockets to catalyze chemical transformations has resulted in attractive catalytic syntheses that echo the remarkable properties of enzymes. By modulating the active site of a nicotinamide adenine dinucleotide (NADH) model in a redox-active molecular flask, we combined biomimetic hydrogenation with in situ regeneration of the active site in a one-pot transformation using light as a clean energy source. This molecular flask facilitates the encapsulation of benzoxazinones for biomimetic hydrogenation of the substrates within the inner space of the flask using the active sites of the NADH models. The redox-active metal centers provide an active hydrogen source by light-driven proton reduction outside the pocket, allowing the in situ regeneration of the NADH models under irradiation. This new synthetic platform, which offers control over the location of the redox events, provides a regenerating system that exhibits high selectivity and efficiency and is extendable to benzoxazinone and quinoxalinone systems. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Wziontek, Hartmut; Wilmes, Herbert; Güntner, Andreas; Creutzfeldt, Benjamin
2010-05-01
Water mass changes are a major source of variations in residual gravimetric time series obtained from the combination of observations with superconducting and absolute gravimeters. Changes in local water storage are the main influence, but global variations contribute significantly to the signal. For three European gravity stations, Bad Homburg, Wettzell and Medicina, different global hydrology models are compared. The influence of topographic effects is discussed and, owing to the long-term stability of the combined gravity time series, inter-annual signals in model data and gravimetric observations are compared. Two sources of influence are discriminated, i.e., the effect of a local zone with an extent of a few kilometers around the gravimetric station and the global contribution beyond 50 km. Considering their coarse resolution and uncertainties, local effects calculated from global hydrological models are compared with the in-situ gravity observations and, for the station Wettzell, with local hydrological monitoring data.
Eslinger, P W; Biegalski, S R; Bowyer, T W; Cooper, M W; Haas, D A; Hayes, J C; Hoffman, I; Korpach, E; Yi, J; Miley, H S; Rishel, J P; Ungar, K; White, B; Woods, V T
2014-01-01
Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout across the northern hemisphere resulting from the Fukushima Dai-ichi Nuclear Power Plant accident in March 2011. Sampling data from multiple International Monitoring System locations are combined with atmospheric transport modeling to estimate the magnitude and time sequence of releases of ¹³³Xe. Modeled dilution factors at five different detection locations were combined with 57 atmospheric concentration measurements of ¹³³Xe taken from March 18 to March 23 to estimate the source term. This analysis suggests that 92% of the 1.24 × 10¹⁹ Bq of ¹³³Xe present in the three operating reactors at the time of the earthquake was released to the atmosphere over a 3 d period. An uncertainty analysis bounds the release estimates to 54-129% of available ¹³³Xe inventory. Copyright © 2013 Elsevier Ltd. All rights reserved.
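A schematic of the inversion step under a linearity assumption: measured concentrations are model-derived dilution factors times time-segmented release rates, so the release history can be estimated by non-negative least squares. The matrix, noise model, and release values below are synthetic, not the study's.

```python
# Sketch: recover a time-segmented release history from concentration
# measurements c = D @ q, where D holds modeled dilution factors.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_obs, n_segments = 57, 9             # 57 measurements, 9 release windows
D = rng.lognormal(mean=-18, sigma=1, size=(n_obs, n_segments))  # s/m^3

true_release = np.zeros(n_segments)
true_release[:3] = [4e17, 3e17, 1e17]  # Bq per segment (hypothetical)

c_meas = D @ true_release * rng.lognormal(0, 0.2, n_obs)  # noisy obs
release_est, _ = nnls(D, c_meas)       # non-negative release estimates
print(release_est)
```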
Feasibility of approaches combining sensor and source features in brain-computer interface.
Ahn, Minkyu; Hong, Jun Hee; Jun, Sung Chan
2012-02-15
Brain-computer interface (BCI) provides a new channel for communication between the brain and computers through brain signals. Cost-effective EEG provides good temporal resolution, but its spatial resolution is poor and sensor information is blurred by inherent noise. To overcome these issues, spatial filtering and feature extraction techniques have been developed. Source imaging, the transformation of sensor signals into the source space through a source localizer, has gained attention as a new approach for BCI. It has been reported that source imaging yields some improvement in BCI performance. However, there has been no thorough investigation of how source imaging information overlaps with, and is complementary to, sensor information. We hypothesize that information from the source space may overlap with, as well as be exclusive to, information from the sensor space. If this hypothesis is true, we can extract more information from the sensor and source spaces together, thereby contributing to more accurate BCI systems. In this work, features from each space (sensor or source), and two strategies combining sensor and source features, are assessed. The information distribution among the sensor, source, and combined spaces is discussed through a Venn diagram for 18 motor imagery datasets. An additional 5 motor imagery datasets from the BCI Competition III site were examined. The results showed that the addition of source information yielded about 3.8% classification improvement for the 18 motor imagery datasets and an average accuracy of 75.56% for the BCI Competition data. Our proposed approach is promising, and improved performance may be possible with a better head model. Copyright © 2011 Elsevier B.V. All rights reserved.
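The simplest form of such a combination strategy can be sketched as follows: train a classifier on sensor features, on source features, and on their concatenation, then compare cross-validated accuracy. The features here are random stand-ins for EEG band powers, and the paper's actual combination strategies are more elaborate.

```python
# Toy comparison of sensor-only, source-only, and combined feature
# spaces for a two-class motor imagery problem.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 100
y = rng.integers(0, 2, n_trials)                 # left vs right imagery
sensor = rng.normal(size=(n_trials, 20)) + 0.4 * y[:, None]
source = rng.normal(size=(n_trials, 15)) + 0.4 * y[:, None]

for name, X in [("sensor", sensor), ("source", source),
                ("combined", np.hstack([sensor, source]))]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
    print(f"{name:8s} accuracy: {acc:.2f}")
```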
NASA Astrophysics Data System (ADS)
Vetter, L.; LeGrande, A. N.; Ullman, D. J.; Carlson, A. E.
2017-12-01
Sediment cores from the Gulf of Mexico show evidence of meltwater derived from the Laurentide Ice Sheet during the last deglaciation. Recent studies using geochemical measurements of individual foraminifera suggest changes in the oxygen isotopic composition of the meltwater as deglaciation proceeded. Here we use the water isotope enabled climate model simulations (NASA GISS ModelE-R) to investigate potential sources of meltwater within the ice sheet. We find that initial melting of the ice sheet from the southern margin contributed an oxygen isotope value reflecting a low-elevation, local precipitation source. As deglacial melting proceeded, meltwater delivered to the Gulf of Mexico had a more negative oxygen isotopic value, which the climate model simulates as being sourced from the high-elevation, high-latitude interior of the ice sheet. This study demonstrates the utility of combining stable isotope analyses with climate model simulations to investigate past changes in the hydrologic cycle.
Automation for System Safety Analysis
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul
2009-01-01
This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
Optimizing dynamic downscaling in one-way nesting using a regional ocean model
NASA Astrophysics Data System (ADS)
Pham, Van Sy; Hwang, Jin Hwan; Ku, Hyeyun
2016-10-01
Dynamical downscaling with nested regional oceanographic models has been demonstrated to be an effective approach both for operational forecasting of sea conditions on regional scales and for projections of future climate change and its impact on the ocean. However, when nesting procedures are carried out in dynamic downscaling from a larger-scale model or set of observations to a smaller scale, errors are unavoidable due to the differences in grid sizes and updating intervals. The present work assesses the impact of errors produced by nesting procedures on the downscaled results from Ocean Regional Circulation Models (ORCMs). Errors are identified and evaluated based on their sources and characteristics by employing the Big-Brother Experiment (BBE). The BBE uses the same model to produce both the nesting and nested simulations, so it addresses those error sources separately (i.e., without combining the contributions of errors from different sources). Here, we focus on errors resulting from differences in spatial grids, updating times and domain sizes. After the BBE was run separately for diverse cases, a Taylor diagram was used to analyze the results and recommend an optimal combination of grid size, updating period and domain size. Finally, the suggested setups for the downscaling were evaluated by examining the spatial correlations of variables and the relative magnitudes of variances between the nested model and the original data.
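For reference, the statistics summarized on a Taylor diagram reduce to three quantities, sketched below on synthetic fields: pattern correlation, normalised standard deviation, and centered RMS difference. The fields are random stand-ins for the Big-Brother reference and the nested run.

```python
# Taylor-diagram statistics for comparing a nested field against a
# reference ("Big Brother") field.
import numpy as np

rng = np.random.default_rng(0)
big = rng.normal(size=(64, 64))                       # reference field
little = big + rng.normal(scale=0.4, size=big.shape)  # degraded copy

b, l = big.ravel(), little.ravel()
corr = np.corrcoef(b, l)[0, 1]                 # pattern correlation
sigma_ratio = l.std() / b.std()                # normalised std. dev.
crmsd = np.sqrt(np.mean(((l - l.mean()) - (b - b.mean())) ** 2))
print(corr, sigma_ratio, crmsd)
```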
Finite-Length Line Source Superposition Model (FLLSSM)
NASA Astrophysics Data System (ADS)
1980-03-01
A linearized thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high level waste or spent fuel assemblies were represented as finite length line sources in a continuous media. The combined effects of multiple canisters in a representative storage pattern were established at selected points of interest by superposition of the temperature rises calculated for each canister. The methodology is outlined and the computer code FLLSSM which performs required numerical integrations and superposition operations is described.
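A hedged sketch of the superposition principle the code implements, simplified to the steady-state finite-line-source solution in an infinite medium (FLLSSM itself performs numerical integrations for the transient problem); the conductivity, heat rate, and storage grid below are illustrative assumptions.

```python
# Superposition of finite-length line sources: the temperature rise at
# a point of interest is the sum of the rises from each canister.
import numpy as np

def line_source_dT(r, z, q_per_m, L, k):
    """Steady temperature rise at radial distance r and axial position
    z from a line source spanning z in [0, L], with strength q_per_m
    (W/m) in a medium of conductivity k (W/m/K)."""
    return q_per_m / (4 * np.pi * k) * (
        np.arcsinh(z / r) - np.arcsinh((z - L) / r))

k, L, q = 2.5, 3.0, 200.0           # W/m/K, canister length (m), W/m
# Canisters on a 10 m square storage grid; evaluate mid-pattern.
xs, ys = np.meshgrid(np.arange(0, 50, 10.0), np.arange(0, 50, 10.0))
px, py, pz = 25.0, 25.0, 1.5        # evaluation point
r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2)
r = np.maximum(r, 0.5)              # avoid the r = 0 singularity
dT = line_source_dT(r, pz, q, L, k).sum()
print(dT)
```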
Sources of carbonaceous PM2.5 were quantified in downtown Cleveland, OH and Chippewa Lake, OH located ~40 miles southwest of Cleveland during the Cleveland Multiple Air Pollutant Study (CMAPS). PM2.5 filter samples were collected daily during July-August 200...
NASA Astrophysics Data System (ADS)
Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen
2018-01-01
Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valued information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we construct a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on the selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. A genetic algorithm and kernel partial least squares are used to construct an inside-layer SEN sub-model for each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing a selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
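A toy sketch of the adaptive weighted fusion idea: each sub-model's output is weighted inversely to its recent error, so weights shift as operating conditions change. The sub-model outputs are synthetic, and in practice the errors would be computed against lagged measurements of the quality parameter rather than against the truth.

```python
# Adaptive inverse-error weighting of three sub-model prediction
# streams over a sliding window.
import numpy as np

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 6, 200))
# Three sub-models with different noise levels (e.g. trained on
# different frequency-spectrum feature subsets).
preds = truth + rng.normal(0, 1, (3, 200)) * np.array([[0.1], [0.3], [0.6]])

window = 20
fused = np.empty_like(truth)
for t in range(truth.size):
    lo = max(0, t - window)
    err = np.mean((preds[:, lo:t + 1] - truth[lo:t + 1]) ** 2, axis=1) + 1e-9
    w = (1 / err) / np.sum(1 / err)          # inverse-error weights
    fused[t] = w @ preds[:, t]

print(np.mean((fused - truth) ** 2), np.mean((preds - truth) ** 2, axis=1))
```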
NASA Astrophysics Data System (ADS)
Sturtz, Timothy M.
Source apportionment models attempt to untangle the relationship between pollution sources and the impacts at downwind receptors. Two frameworks of source apportionment models exist: source-oriented and receptor-oriented. Source-based apportionment models use presumed emissions and atmospheric processes to estimate the downwind source contributions. Conversely, receptor-based models leverage speciated concentration data from downwind receptors and apply statistical methods to predict source contributions. Integration of both source-oriented and receptor-oriented models could lead to a better understanding of the implications sources have for the environment and society. The research presented here investigated three different types of constraints applied to the Positive Matrix Factorization (PMF) receptor model within the framework of the Multilinear Engine (ME-2): element ratio constraints, spatial separation constraints, and chemical transport model (CTM) source attribution constraints. PM10-2.5 mass and trace element concentrations were measured in Winston-Salem, Chicago, and St. Paul at up to 60 sites per city during two different seasons in 2010. PMF was used to explore the underlying sources of variability. Information on previously reported PM10-2.5 tire and brake wear profiles was used to constrain these features in PMF by prior specification of selected species ratios. We also modified PMF to allow for combining the measurements from all three cities into a single model while preserving city-specific soil features. Relatively minor differences were observed between model predictions with and without the prior ratio constraints, increasing confidence in our ability to identify separate brake wear and tire wear features. Using separate data, source contributions to total fine particle carbon predicted by a CTM were incorporated into the PMF receptor model to form a receptor-oriented hybrid model. The level of influence of the CTM versus traditional PMF was varied using a weighting parameter applied to an objective function as implemented in ME-2. The resulting hybrid model was used to quantify the contributions of total carbon from both wildfires and biogenic sources at two Interagency Monitoring of Protected Visual Environments (IMPROVE) monitoring sites, Monture and Sula Peak, Montana, from 2006 through 2008.
NASA Astrophysics Data System (ADS)
O'Reilly, Shannon E.; DeWeese, Lindsay S.; Maynard, Matthew R.; Rajon, Didier A.; Wayson, Michael B.; Marshall, Emily L.; Bolch, Wesley E.
2016-12-01
An image-based skeletal dosimetry model for internal electron sources was created for the ICRP-defined reference adult female. Many previous skeletal dosimetry models, which are still employed in commonly used internal dosimetry software, do not properly account for electron escape from trabecular spongiosa, electron cross-fire from cortical bone, and the impact of marrow cellularity on active marrow self-irradiation. Furthermore, these existing models do not employ the current ICRP definition of a 50 µm bone endosteum (or shallow marrow). Each of these limitations was addressed in the present study. Electron transport was completed to determine specific absorbed fractions to both active and shallow marrow of the skeletal regions of the University of Florida reference adult female. The skeletal macrostructure and microstructure were modeled separately. The bone macrostructure was based on the whole-body hybrid computational phantom of the UF series of reference models, while the bone microstructure was derived from microCT images of skeletal region samples taken from a 45 years-old female cadaver. The active and shallow marrow are typically adopted as surrogate tissue regions for the hematopoietic stem cells and osteoprogenitor cells, respectively. Source tissues included active marrow, inactive marrow, trabecular bone volume, trabecular bone surfaces, cortical bone volume, and cortical bone surfaces. Marrow cellularity was varied from 10 to 100 percent for active marrow self-irradiation. All other sources were run at the defined ICRP Publication 70 cellularity for each bone site. A total of 33 discrete electron energies, ranging from 1 keV to 10 MeV, were either simulated or analytically modeled. The method of combining skeletal macrostructure and microstructure absorbed fractions assessed using MCNPX electron transport was found to yield results similar to those determined with the PIRT model applied to the UF adult male skeletal dosimetry model. Calculated skeletal averaged absorbed fractions for each source-target combination were found to follow similar trends of more recent dosimetry models (image-based models) but did not follow results from skeletal models based upon assumptions of an infinite expanse of trabecular spongiosa.
A physics based method for combining multiple anatomy models with application to medical simulation.
Zhu, Yanong; Magee, Derek; Ratnalingam, Rishya; Kessel, David
2009-01-01
We present a physics based approach to the construction of anatomy models by combining components from different sources; different image modalities, protocols, and patients. Given an initial anatomy, a mass-spring model is generated which mimics the physical properties of the solid anatomy components. This helps maintain valid spatial relationships between the components, as well as the validity of their shapes. Combination can be either replacing/modifying an existing component, or inserting a new component. The external forces that deform the model components to fit the new shape are estimated from Gradient Vector Flow and Distance Transform maps. We demonstrate the applicability and validity of the described approach in the area of medical simulation, by showing the processes of non-rigid surface alignment, component replacement, and component insertion.
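A minimal mass-spring sketch of the physics the model relies on: Hookean springs between nodes with damped explicit Euler integration, so a perturbed component relaxes back toward a valid shape. The two-node geometry and constants are toy assumptions, not an anatomical mesh.

```python
# Two nodes connected by one spring; the system relaxes toward the
# spring's rest length under Hooke's law with velocity damping.
import numpy as np

nodes = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])  # node positions
vel = np.zeros_like(nodes)
springs = [(0, 1, 1.0)]             # (node i, node j, rest length)
k_s, damping, dt, mass = 50.0, 0.9, 0.01, 1.0

for _ in range(200):
    forces = np.zeros_like(nodes)
    for i, j, rest in springs:
        d = nodes[j] - nodes[i]
        length = np.linalg.norm(d)
        f = k_s * (length - rest) * d / length   # Hooke's law
        forces[i] += f
        forces[j] -= f
    vel = damping * (vel + dt * forces / mass)   # damped Euler step
    nodes += dt * vel

print(np.linalg.norm(nodes[1] - nodes[0]))  # approaches rest length 1.0
```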
NASA Astrophysics Data System (ADS)
Yasuhara, Scott; Forgeron, Jeff; Rella, Chris; Franz, Patrick; Jacobson, Gloria; Chiao, Sen; Saad, Nabil
2013-04-01
The ability to quantify sources and sinks of carbon dioxide and methane on the urban scale is essential for understanding the atmospheric drivers of global climate change. In the 'top-down' approach, overall carbon fluxes are determined by combining remote measurements of carbon dioxide concentrations with complex atmospheric transport models, and these emissions measurements are compared to 'bottom-up' predictions based on detailed inventories of the sources and sinks of carbon, both anthropogenic and biogenic in nature. This approach, which has proven to be effective at continental scales, becomes challenging to implement at urban scales, due to poorly understood atmospheric transport models and high variability of the emissions sources in space (e.g., factories, highways, green spaces) and time (rush hours, factory shifts and shutdowns, and diurnal and seasonal variation in residential energy use). New measurement and analysis techniques are required to make sense of the carbon dioxide signal in cities. Here we present detailed, high spatial- and temporal-resolution greenhouse gas measurements made by multiple Picarro-CRDS analyzers in Silicon Valley in California. Real-time carbon dioxide data from a 20-month period are combined with real-time carbon monoxide, methane, and acetylene to partition the observed carbon dioxide concentrations between different anthropogenic sectors (e.g., transport, residential) and biogenic sources. Real-time wind rose data are also combined with real-time methane data to help identify the direction of local emissions of methane. High-resolution WRF models are also included to better understand the dynamics of the boundary layer. The ratio between carbon dioxide and carbon monoxide is shown to vary over more than a factor of two from season to season or even from day to night, indicating rapid but frequent shifts in the balance between different carbon dioxide sources. Additional information is given by acetylene, a fossil fuel combustion tracer that provides complementary information to carbon monoxide. In spring and summer, the combined signal of the urban center and the surrounding biosphere and urban green space is explored. These methods show great promise for identifying, quantifying, and partitioning urban-ecological (carbon) emissions.
NASA Astrophysics Data System (ADS)
Young, M. B.; Kendall, C.; Guerin, M.; Stringfellow, W. T.; Silva, S. R.; Harter, T.; Parker, A.
2013-12-01
The Sacramento and San Joaquin Rivers provide the majority of freshwater for the San Francisco Bay Delta. Both rivers are important sources of drinking and irrigation water for California, and play critical roles in the health of California fisheries. Understanding the factors controlling water quality and primary productivity in these rivers and the Delta is essential for making sound economic and environmental water management decisions. However, these highly altered surface water systems present many challenges for water quality monitoring studies due to factors such as multiple potential nutrient and contaminant inputs, dynamic source water inputs, and changing flow regimes controlled by both natural and engineered conditions. The watersheds for both rivers contain areas of intensive agriculture along with many other land uses, and the Sacramento River receives significant amounts of treated wastewater from the large population around the City of Sacramento. We have used a multi-isotope approach combined with mass balance and hydrodynamic modeling in order to better understand the dominant nutrient sources for each of these rivers, and to track nutrient sources and cycling within the complex Delta region around the confluence of the rivers. High nitrate concentrations within the San Joaquin River fuel summer algal blooms, contributing to low dissolved oxygen conditions. High δ15N-NO3 values combined with the high nitrate concentrations suggest that animal manure is a significant source of nitrate to the San Joaquin River. In contrast, the Sacramento River has lower nitrate concentrations but elevated ammonium concentrations from wastewater discharge. Downstream nitrification of the ammonium can be clearly traced using δ15N-NH4. Flow conditions for these rivers and the Delta have strong seasonal and inter-annual variations, resulting in significant changes in nutrient delivery and cycling. Isotopic measurements and estimates of source water contributions derived from the DSM2-HYDRO hydrologic model demonstrate that mixing between San Joaquin and Sacramento River water can occur as far as 30 miles upstream of the confluence within the San Joaquin channel, and that San Joaquin-derived nitrate only reaches the western Delta during periods of high flow.
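A back-of-envelope sketch of the concentration-weighted two-endmember mixing calculation that underlies this kind of isotopic source attribution; the endmember concentrations and δ15N values below are illustrative, not the study's measurements.

```python
# Concentration-weighted two-endmember isotope mixing: the delta value
# of a mixture depends on both the mixing fraction and the nitrate
# concentration of each endmember.
def delta_mix(f_a, c_a, d_a, c_b, d_b):
    """delta15N of a mixture with water fraction f_a from source A;
    c = nitrate concentration, d = delta15N of each endmember."""
    f_b = 1.0 - f_a
    return (f_a * c_a * d_a + f_b * c_b * d_b) / (f_a * c_a + f_b * c_b)

# e.g. manure-impacted river water (high delta15N, high nitrate)
# mixing with wastewater-influenced water (lower delta15N):
print(delta_mix(f_a=0.3, c_a=2.5, d_a=15.0, c_b=0.8, d_b=5.0))
```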
Combined two-photon microscopy and optical coherence tomography using individually optimized sources
NASA Astrophysics Data System (ADS)
Jeong, Bosu; Lee, Byunghak; Jang, Min Seong; Nam, Hyoseok; Kim, Hae Koo; Yoon, Sang June; Doh, Junsang; Lee, Sang-Joon; Yang, Bo-Gie; Jang, Myoung Ho; Kim, Ki Hean
2011-03-01
Two-photon microscopy (TPM) and optical coherence tomography (OCT) are 3D tissue imaging techniques based on different contrast mechanisms. We developed a combined TPM-OCT system to provide the information of both imaging modalities for in vivo tissue studies. TPM and OCT were implemented with separate light sources, a Ti:Sapphire laser and a wavelength-swept source centered at 1300 nm, respectively, and separate scanners. Light from the two sources was combined for simultaneous imaging of tissue samples. TPM provided molecular, cellular information of tissues in a region a few hundred microns on a side at sub-cellular resolution, and ran at approximately 40 frames per second. OCT provided structural information over a tissue region larger than the TPM images at sub-ten-micron resolution by using a 0.1 numerical aperture. OCT had a field of view of 800 µm × 800 µm based on a 20× objective, a sensitivity of 97 dB, and an imaging speed of 0.8 volumes per second. The combined system was tested with simple microsphere specimens and then applied to image the explanted intestine of a mouse model and plant leaves. Morphology and microstructures of the intestinal villi and immune cells within the villi were shown in the intestine image, and chloroplasts and various microstructures of maize leaves were visualized in 3D by the combined system.
Metildi, Cristina A; Kaushal, Sharmeela; Lee, Claudia; Hardamon, Chanae R; Snyder, Cynthia S; Luiken, George A; Talamini, Mark A; Hoffman, Robert M; Bouvet, Michael
2012-06-01
The aim of this study was to improve fluorescence laparoscopy of pancreatic cancer in an orthotopic mouse model with the use of a light-emitting diode (LED) light source and optimal fluorophore combinations. Human pancreatic cancer models were established with fluorescent FG-RFP, MiaPaca2-GFP, BxPC-3-RFP, and BxPC-3 cancer cells implanted in 6-week-old female athymic mice. Two weeks postimplantation, diagnostic laparoscopy was performed with a Stryker L9000 LED light source or a Stryker X8000 xenon light source 24 hours after tail-vein injection of CEA antibodies conjugated with Alexa 488 or Alexa 555. Cancer lesions were detected and localized under each light mode. Intravital images were also obtained with the OV-100 Olympus and Maestro CRI Small Animal Imaging Systems, serving as a positive control. Tumors were collected for histologic analysis. Fluorescence laparoscopy with a 495-nm emission filter and an LED light source enabled real-time visualization of the fluorescence-labeled tumor deposits in the peritoneal cavity. The simultaneous use of different fluorophores (Alexa 488 and Alexa 555), conjugated to antibodies, brightened the fluorescence signal, enhancing detection of submillimeter lesions without compromising background illumination. Adjustments to the LED light source permitted simultaneous detection of tumor lesions of different fluorescent colors and surrounding structures with minimal autofluorescence. Using an LED light source with adjustments to the red, blue, and green wavelengths, it is possible to simultaneously identify tumor metastases expressing fluorescent proteins of different wavelengths, which greatly enhanced the signal without compromising background illumination. Development of this fluorescence laparoscopy technology for clinical use can improve staging and resection of pancreatic cancer. Copyright © 2012 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Pedrini, Paolo; Bragalanti, Natalia; Groff, Claudio
2017-01-01
Recently developed methods that integrate multiple data sources arising from the same ecological processes have typically utilized structured data from well-defined sampling protocols (e.g., capture-recapture and telemetry). Despite this new methodological focus, the value of opportunistic data for improving inference about spatial ecological processes is unclear and, perhaps more importantly, no procedures are available to formally test whether parameter estimates are consistent across data sources and whether they are suitable for integration. Using data collected on the reintroduced brown bear population in the Italian Alps, a population of conservation importance, we combined data from three sources: traditional spatial capture-recapture data, telemetry data, and opportunistic data. We developed a fully integrated spatial capture-recapture (SCR) model that included a model-based test for data consistency to first compare model estimates using different combinations of data, and then, by acknowledging data-type differences, evaluate parameter consistency. We demonstrate that opportunistic data lend themselves naturally to integration within the SCR framework and highlight the value of opportunistic data for improving inference about space use and population size. This is particularly relevant in studies of rare or elusive species, where the number of spatial encounters is usually small and where additional observations are of high value. In addition, our results highlight the importance of testing and accounting for inconsistencies in spatial information from structured and unstructured data so as to avoid the risk of spurious or averaged estimates of space use and, consequently, of population size. Our work supports the use of a single modeling framework to combine spatially referenced data while also accounting for parameter consistency. PMID:28973034
MR-MOOSE: an advanced SED-fitting tool for heterogeneous multi-wavelength data sets
NASA Astrophysics Data System (ADS)
Drouart, G.; Falkendal, T.
2018-07-01
We present the public release of MR-MOOSE, a fitting procedure that is able to perform multi-wavelength and multi-object spectral energy distribution (SED) fitting in a Bayesian framework. This procedure is able to handle a large variety of cases, from an isolated source to blended multi-component sources from a heterogeneous data set (i.e. a range of observation sensitivities and spectral/spatial resolutions). Furthermore, MR-MOOSE handles upper limits during the fitting process in a continuous way allowing models to be gradually less probable as upper limits are approached. The aim is to propose a simple-to-use, yet highly versatile fitting tool for handling increasing source complexity when combining multi-wavelength data sets with fully customisable filter/model data bases. The complete control of the user is one advantage, which avoids the traditional problems related to the `black box' effect, where parameter or model tunings are impossible and can lead to overfitting and/or over-interpretation of the results. Also, while a basic knowledge of PYTHON and statistics is required, the code aims to be sufficiently user-friendly for non-experts. We demonstrate the procedure on three cases: two artificially generated data sets and a previous result from the literature. In particular, the most complex case (inspired by a real source, combining Herschel, ALMA, and VLA data) in the context of extragalactic SED fitting makes MR-MOOSE a particularly attractive SED fitting tool when dealing with partially blended sources, without the need for data deconvolution.
Peed, Lindsay A; Nietch, Christopher T; Kelty, Catherine A; Meckes, Mark; Mooney, Thomas; Sivaganesan, Mano; Shanks, Orin C
2011-07-01
Diffuse sources of human fecal pollution allow for the direct discharge of waste into receiving waters with minimal or no treatment. Traditional culture-based methods are commonly used to characterize fecal pollution in ambient waters; however, these methods do not discern between human and other animal sources of fecal pollution, making it difficult to identify diffuse pollution sources. Human-associated quantitative real-time PCR (qPCR) methods, in combination with low-order headwatershed sampling, precipitation information, and high-resolution geographic information system land use data, can be useful for identifying diffuse sources of human fecal pollution in receiving waters. To test this assertion, this study monitored, over a two-year period, nine headwatersheds potentially impacted by faulty septic systems and leaky sanitary sewer lines. Human fecal pollution was measured using three different human-associated qPCR methods, and a significant positive correlation was seen between the abundance of human-associated genetic markers and septic systems following wet weather events. In contrast, a negative correlation was observed with sanitary sewer line densities, suggesting septic systems are the predominant diffuse source of human fecal pollution in the study area. These results demonstrate the advantages of combining water sampling, climate information, land-use computer-based modeling, and molecular biology disciplines to better characterize diffuse sources of human fecal pollution in environmental waters.
Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romañach, Stephanie; Speroterra, Carolina
2015-01-01
Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.
NASA Astrophysics Data System (ADS)
Loubet, Benjamin; Carozzi, Marco
2015-04-01
Tropospheric ammonia (NH3) is a key player in atmospheric chemistry, and its deposition is a threat to the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emission rates from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, the emission should be treated as a multi-source problem. This work aims to assess whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments applied to small fields (typically squares of 25 m side) located close to each other, using low-cost NH3 measurements (diffusion samplers). To do so, a numerical experiment was designed with a combination of 3 × 3 square field sources (625 m²) and a set of sensors placed at the centre of each field at several heights, as well as 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic (WindTrax) and a Gaussian-like (FIDES) dispersion model. The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of NH3 emissions to surface temperature. A combination of emission patterns (constant, linearly decreasing, exponentially decreasing and Gaussian type) and strengths was used to evaluate the uncertainty of the inversion method. Each numerical experiment covered a period of 28 days. The 2008 meteorological dataset of the FLUXNET FR-Gri site (Grignon, France) was employed. Several sensor heights were tested, from 0.25 m to 2 m. The multi-source inverse problem was solved under several sampling and field trial strategies: considering 1 or 2 heights over each field, considering the background concentration as known or unknown, and considering block repetitions in the field set-up (3 repetitions). The inverse modelling approach proved suitable for discriminating large differences in NH3 emissions from small agronomic plots using integrating sensors. The method is sensitive to sensor height. The uncertainties and systematic biases are evaluated and discussed.
Meteorological and air pollution modeling for an urban airport
NASA Technical Reports Server (NTRS)
Swan, P. R.; Lee, I. Y.
1980-01-01
Results are presented of numerical experiments modeling meteorology, multiple pollutant sources, and nonlinear photochemical reactions for the case of an airport in a large urban area with complex terrain. A planetary boundary-layer model which predicts the mixing depth and generates wind, moisture, and temperature fields was used; it utilizes only surface and synoptic boundary conditions as input data. A version of the Hecht-Seinfeld-Dodge chemical kinetics model is integrated with a new, rapid numerical technique; both the San Francisco Bay Area Air Quality Management District source inventory and the San Jose Airport aircraft inventory are utilized. The air quality model results are presented in contour plots; the combined results illustrate that the highly nonlinear interactions which are present require that the chemistry and meteorology be considered simultaneously to make a valid assessment of the effects of individual sources on regional air quality.
Silva, Rogers F.; Plis, Sergey M.; Sui, Jing; Pattichis, Marios S.; Adalı, Tülay; Calhoun, Vince D.
2016-01-01
In the past decade, numerous advances in the study of the human brain were fostered by successful applications of blind source separation (BSS) methods to a wide range of imaging modalities. The main focus has been on extracting “networks” represented as the underlying latent sources. While the broad success in learning latent representations from multiple datasets has promoted the wide presence of BSS in modern neuroscience, it also introduced a wide variety of objective functions, underlying graphical structures, and parameter constraints for each method. Such diversity, combined with a host of datatype-specific know-how, can cause a sense of disorder and confusion, hampering a practitioner’s judgment and impeding further development. We organize the diverse landscape of BSS models by exposing its key features and combining them to establish a novel unifying view of the area. In the process, we unveil important connections among models according to their properties and subspace structures. Consequently, a high-level descriptive structure is exposed, ultimately helping practitioners select the right model for their applications. Equipped with that knowledge, we review the current state of BSS applications to neuroimaging. The gained insight into model connections elicits a broader sense of generalization, highlighting several directions for model development. In light of that, we discuss emerging multi-dataset multidimensional (MDM) models and summarize their benefits for the study of the healthy brain and disease-related changes. PMID:28461840
NASA Astrophysics Data System (ADS)
Marzeion, B.; Maussion, F.
2017-12-01
Mountain glaciers are one of the few remaining sub-systems of the global climate system for which no globally applicable, open source, community-driven model exists. Notable examples from the ice sheet community include the Parallel Ice Sheet Model or Elmer/Ice. While the atmospheric modeling community has a long tradition of sharing models (e.g. the Weather Research and Forecasting model) or comparing them (e.g. the Coupled Model Intercomparison Project or CMIP), recent initiatives originating from the glaciological community show a new willingness to better coordinate global research efforts following the CMIP example (e.g. the Glacier Model Intercomparison Project or the Glacier Ice Thickness Estimation Working Group). In the recent past, great advances have been made in the global availability of data and methods relevant for glacier modeling, spanning glacier outlines, automatized glacier centerline identification, bed rock inversion methods, and global topographic data sets. Taken together, these advances now allow the ice dynamics of glaciers to be modeled on a global scale, provided that adequate modeling platforms are available. Here, we present the Open Global Glacier Model (OGGM), developed to provide a global scale, modular, and open source numerical model framework for consistently simulating past and future global scale glacier change. Global not only in the sense of leading to meaningful results for all glaciers combined, but also for any small ensemble of glaciers, e.g. at the headwater catchment scale. Modular to allow combinations of different approaches to the representation of ice flow and surface mass balance, enabling a new kind of model intercomparison. Open source so that the code can be read and used by anyone and so that new modules can be added and discussed by the community, following the principles of open governance. Consistent in order to provide uncertainty measures at all realizable scales.
GIS-MODFLOW: A small open-source tool for linking GIS data to MODFLOW
NASA Astrophysics Data System (ADS)
Gossel, Wolfgang
2013-06-01
The numerical model MODFLOW (Harbaugh 2005) is an efficient and up-to-date tool for groundwater flow modelling. Geographic information systems (GIS), on the other hand, provide useful tools for data preparation and visualization that can also be incorporated in numerical groundwater modelling. An interface between the two would therefore be useful for many hydrogeological investigations. To date, several integrated stand-alone tools have been developed that rely on MODFLOW, MODPATH and transport modelling tools. Simultaneously, several open-source GIS codes were developed to improve functionality and ease of use. These GIS tools can be used as pre- and post-processors of the numerical model MODFLOW via a suitable interface. Here we present GIS-MODFLOW, an open-source tool that provides a new universal interface by using the ESRI ASCII GRID data format, which can be converted into MODFLOW input data. The tool can also treat MODFLOW results. Such a combination of MODFLOW and open-source GIS opens new possibilities to make groundwater flow modelling and simulation results available to a wider circle of hydrogeologists.
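The ESRI ASCII GRID exchange format at the heart of this interface has a published six-line header followed by row-major cell values. A minimal reader is sketched below (the file name is hypothetical; the real tool additionally writes MODFLOW input files):

```python
import numpy as np

def read_ascii_grid(path):
    """Read an ESRI ASCII GRID file into a header dict and a 2D array."""
    header = {}
    with open(path) as fh:
        for _ in range(6):                     # ncols, nrows, xllcorner,
            key, value = fh.readline().split() # yllcorner, cellsize, NODATA_value
            header[key.lower()] = float(value)
        data = np.loadtxt(fh)                  # remaining lines: the grid values
    data[data == header["nodata_value"]] = np.nan
    assert data.shape == (int(header["nrows"]), int(header["ncols"]))
    return header, data

# header, heads = read_ascii_grid("starting_heads.asc")  # hypothetical file
```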
NASA Astrophysics Data System (ADS)
Zhu, H.; Bozdag, E.; Peter, D. B.; Tromp, J.
2010-12-01
We use spectral-element and adjoint methods to image crustal and upper mantle heterogeneity in Europe. The study area involves the convergent boundaries of the Eurasian, African and Arabian plates and the divergent boundary between the Eurasian and North American plates, making the tectonic structure of this region complex. Our goal is to iteratively fit observed seismograms and improve crustal and upper mantle images by taking advantage of 3D forward and inverse modeling techniques. We use data from 200 earthquakes with magnitudes between 5 and 6 recorded by 262 stations provided by ORFEUS. Crustal model Crust2.0 combined with mantle model S362ANI comprises the initial 3D model. Before the iterative adjoint inversion, we determine earthquake source parameters in the initial 3D model by using 3D Green's functions and their Fréchet derivatives with respect to the source parameters (i.e., centroid moment tensor and location). The updated catalog is used in the subsequent structural inversion. Since we concentrate on upper mantle structures which involve anisotropy, transversely isotropic (frequency-dependent) traveltime sensitivity kernels are used in the iterative inversion. Taking advantage of the adjoint method, we use as many measurements as we can obtain based on comparisons between observed and synthetic seismograms. FLEXWIN (Maggi et al., 2009) is used to automatically select measurement windows, which are analyzed based on a multitaper technique. The bandpass ranges from 15 to 150 seconds. Long-period surface waves and short-period body waves are combined in source relocations and structural inversions. A statistical assessment of traveltime anomalies and logarithmic waveform differences is used to characterize the inverted sources and structure.
Lombardi, Federica; Golla, Kalyan; Fitzpatrick, Darren J.; Casey, Fergal P.; Moran, Niamh; Shields, Denis C.
2015-01-01
Identifying effective therapeutic drug combinations that modulate complex signaling pathways in platelets is central to the advancement of effective anti-thrombotic therapies. However, there is no systems model of the platelet that predicts responses to different inhibitor combinations. We developed an approach which goes beyond current inhibitor-inhibitor combination screening to efficiently consider other signaling aspects that may give insights into the behaviour of the platelet as a system. We investigated combinations of platelet inhibitors and activators. We evaluated three distinct strands of information, namely: activator-inhibitor combination screens (testing a panel of inhibitors against a panel of activators); inhibitor-inhibitor synergy screens; and activator-activator synergy screens. We demonstrated how these analyses may be efficiently performed, both experimentally and computationally, to identify particular combinations of most interest. Robust tests of activator-activator synergy and of inhibitor-inhibitor synergy required combinations to show significant excesses over the double doses of each component. Modeling identified multiple effects of an inhibitor of the P2Y12 ADP receptor, and complementarity between inhibitor-inhibitor synergy effects and activator-inhibitor combination effects. This approach accelerates the mapping of combination effects of compounds to develop combinations that may be therapeutically beneficial. We integrated the three information sources into a unified model that predicted the benefits of a triple drug combination targeting ADP, thromboxane and thrombin signaling. PMID:25875950
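The synergy criterion described above, that a combination must show a significant excess over the double dose of each component, can be sketched as follows. The replicate response values and the significance test are illustrative stand-ins, not the paper's pipeline.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate responses: a combination A(a)+B(b) versus the
# double doses A(2a) and B(2b) given alone.
rng = np.random.default_rng(1)
resp_combo   = rng.normal(0.80, 0.05, 8)   # A(a) + B(b)
resp_doubleA = rng.normal(0.55, 0.05, 8)   # A(2a) alone
resp_doubleB = rng.normal(0.60, 0.05, 8)   # B(2b) alone

def exceeds(x, y, alpha=0.05):
    """One-sided Welch t-test: is mean(x) significantly greater than mean(y)?"""
    t, p = stats.ttest_ind(x, y, equal_var=False, alternative="greater")
    return p < alpha

# Call the pair synergistic only if the combination beats BOTH double doses.
synergy = exceeds(resp_combo, resp_doubleA) and exceeds(resp_combo, resp_doubleB)
print("synergistic:", synergy)
```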
This presentation includes a combination of modeling and measurement results to characterize near-source air quality in Newark, New Jersey with consideration of how this information could be used to inform decision making to reduce risk of health impacts. Decisions could include ...
Meijer, Erik; Rohwedder, Susann; Wansbeek, Tom
2012-01-01
Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but they are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the methods to a Swedish data set. Our results show that register earnings data perform poorly if there is a (small) probability of a mismatch. Survey earnings data are more reliable, despite their measurement error. Predictors that combine both and take conditional class probabilities into account outperform all other predictors.
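A stylized sketch of the idea, under assumed Gaussian structure: the predictor weights the register value by the conditional probability that the record is correctly matched, and otherwise falls back on a shrunken survey value. All parameters are hypothetical, not the paper's estimates.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical parameters for true log earnings y and the two measurements.
mu, sig_y  = 10.0, 0.6     # prior: y ~ N(mu, sig_y^2)
sig_s      = 0.3           # survey: s = y + noise
sig_0      = 1.5           # a mismatched register value is unrelated noise
p_mismatch = 0.05          # prior probability of a register mismatch

def predict(s, r):
    # Likelihood of (s, r) if the register record matches: r = y exactly.
    like_match    = norm.pdf(r, mu, sig_y) * norm.pdf(s, r, sig_s)
    # If mismatched, r carries no information about y.
    like_mismatch = norm.pdf(r, mu, sig_0) * norm.pdf(s, mu, np.hypot(sig_y, sig_s))
    post_match = (1 - p_mismatch) * like_match / (
        (1 - p_mismatch) * like_match + p_mismatch * like_mismatch)
    w = sig_y**2 / (sig_y**2 + sig_s**2)          # shrinkage of the survey value
    return post_match * r + (1 - post_match) * (mu + w * (s - mu))

print(predict(s=10.4, r=10.5))   # plausible match: prediction near r
print(predict(s=10.4, r=13.0))   # likely mismatch: prediction leans on s
```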
The dynamics of multimodal integration: The averaging diffusion model.
Turner, Brandon M; Gao, Juan; Koenig, Scott; Palfy, Dylan; McClelland, James L
2017-12-01
We combine extant theories of evidence accumulation and multi-modal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process where noisy samples of evidence are accumulated until a decision is made. However, these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment. These studies are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of evidence samples and use it as a base for comparing three alternative models of multimodal integration, allowing us to assess the optimality of this integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.
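A toy simulation of the core idea, under assumed parameters: the decision variable is the running mean of evidence samples rather than their sum. Drift, noise, threshold, and the minimum sample count below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def adm_trial(drift=0.08, noise=1.0, threshold=0.2, min_steps=20, max_steps=2000):
    """One trial: accumulate noisy samples, decide when the MEAN crosses a bound."""
    total = 0.0
    for n in range(1, max_steps + 1):
        total += drift + noise * rng.normal()
        if n >= min_steps and abs(total / n) >= threshold:  # mean, not sum
            return np.sign(total), n          # choice and response time (steps)
    return np.sign(total), max_steps          # forced response at the deadline

choices, rts = zip(*(adm_trial() for _ in range(500)))
print("prop. correct:", np.mean(np.array(choices) == 1.0),
      "mean RT (steps):", np.mean(rts))
```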
NASA Astrophysics Data System (ADS)
Chan, T. P.; Govindaraju, Rao S.
2006-10-01
Remediation schemes for contaminated sites are often evaluated to assess their potential for source zone reduction of mass, or treatment of the contaminant between the source and a control plane (CP), to achieve regulatory limits. In this study, we utilize a stochastic stream tube model to explain the behavior of breakthrough curves (BTCs) across a CP. At the local scale, mass dissolution at the source is combined with an advection model with first-order decay for the dissolved plume. Field-scale averaging is then employed to account for spatial variation in mass within the source zone, and for variation in the velocity field. Under the assumption of instantaneous mass transfer from the source to the moving liquid, semi-analytical expressions for the BTC and temporal moments are developed, followed by derivation of expressions for effective velocity, dispersion, and degradation coefficients using the method of moments. It is found that degradation strongly influences the behavior of moments and the effective parameters. While increased heterogeneity in the velocity field results in increased dispersion, degradation causes the center of mass of the plume to shift to earlier times, and reduces the dispersion of the BTC by lowering the concentrations in the tail. Modified definitions of effective parameters are presented for degrading solutes to account for the normalization constant (zeroth moment), which keeps changing with time or distance to the CP. It is shown that anomalous dispersion can result from high degradation rates combined with wide variation in velocity fluctuations. Implications of the model results for estimating cleanup times and fulfilling regulatory limits are discussed. Relating mass removal at the source to flux reductions past a control plane is confounded by many factors. Increased heterogeneity in velocity fields causes mass fluxes past a control plane to persist; however, aggressive remediation between the source and the CP can reduce these fluxes.
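The method-of-moments step can be sketched numerically: temporal moments of a BTC at a control plane yield effective transport parameters, using the standard advection-dispersion relation sigma_t^2 = 2*D*L/v^3. The BTC below is synthetic stand-in data.

```python
import numpy as np
from scipy.integrate import trapezoid

L = 100.0                                  # distance to the control plane (m)
t = np.linspace(0.1, 400.0, 4000)          # time (d)
c = np.exp(-(t - 120.0) ** 2 / (2 * 25.0 ** 2))   # synthetic BTC

m0 = trapezoid(c, t)                           # zeroth moment (normalization)
m1 = trapezoid(t * c, t) / m0                  # normalized mean arrival time
m2 = trapezoid((t - m1) ** 2 * c, t) / m0      # central second moment

v_eff = L / m1                                 # effective velocity
D_eff = v_eff ** 3 * m2 / (2 * L)              # effective dispersion coefficient
print(f"v_eff = {v_eff:.3f} m/d, D_eff = {D_eff:.3f} m^2/d")
```

For a degrading solute, as the abstract notes, the zeroth moment itself decays with distance, so the normalization must be carried along when defining the effective parameters.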
Modification of the TASMIP x-ray spectral model for the simulation of microfocus x-ray sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisniega, A.; Vaquero, J. J., E-mail: juanjose.vaquero@uc3m.es; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid ES28007
2014-01-15
Purpose: The availability of accurate and simple models for the estimation of x-ray spectra is of great importance for system simulation, optimization, or inclusion of photon energy information into data processing. There is a variety of publicly available tools for estimation of x-ray spectra in radiology and mammography. However, most of these models cannot be used directly for modeling microfocus x-ray sources due to differences in inherent filtration, energy range and/or anode material. For this reason the authors propose in this work a new model for the simulation of microfocus spectra based on existing models for mammography and radiology, modified to compensate for the effects of inherent filtration and energy range. Methods: The authors used the radiology and mammography versions of an existing empirical model [tungsten anode spectral model interpolating polynomials (TASMIP)] as the basis of the microfocus model. First, the authors estimated the inherent filtration included in the radiology model by comparing the shape of the spectra with spectra from the mammography model. Afterwards, the authors built a unified spectra dataset by combining both models and, finally, they estimated the parameters of the new version of TASMIP for microfocus sources by calibrating against experimental exposure data from a microfocus x-ray source. The model was validated by comparing estimated and experimental exposure and attenuation data for different attenuating materials and x-ray beam peak energy values, using two different x-ray tubes. Results: Inherent filtration for the radiology spectra from TASMIP was found to be equivalent to 1.68 mm Al, as compared to spectra obtained from the mammography model. To match the experimentally measured exposure data, the combined dataset required applying a negative filtration of about 0.21 mm Al and an anode roughness of 0.003 mm W. The validation of the model against real acquired data showed errors in exposure and attenuation in line with those reported for other models for radiology or mammography. Conclusions: A new version of the TASMIP model for the estimation of x-ray spectra in microfocus x-ray sources has been developed and validated experimentally. Similarly to other versions of TASMIP, the estimation of spectra is very simple, involving only the evaluation of polynomial expressions.
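To illustrate the TASMIP-style construction, in which the photon fluence in each energy bin is a low-order polynomial in tube voltage, here is a sketch with a tiny, made-up coefficient table; the real model publishes fitted coefficients per 1 keV bin.

```python
import numpy as np

energies = np.array([10.0, 20.0, 30.0, 40.0])   # keV bin centers (illustrative)
coeffs = np.array([                              # a_i0, a_i1, a_i2 per bin (made up)
    [0.0, 2.0e2, -1.0],
    [0.0, 8.0e2, -3.0],
    [0.0, 6.0e2, -2.5],
    [0.0, 1.5e2, -0.5],
])

def tasmip_like_spectrum(kvp):
    """Evaluate phi(E_i) = sum_k a_ik * kVp^k for each energy bin."""
    powers = kvp ** np.arange(coeffs.shape[1])   # [1, kVp, kVp^2]
    phi = coeffs @ powers
    phi[energies > kvp] = 0.0                    # no photons above the peak energy
    return np.clip(phi, 0.0, None)               # fluence cannot be negative

print(tasmip_like_spectrum(35.0))
```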
Paying attention to attention in recognition memory: insights from models and electrophysiology.
Dubé, Chad; Payne, Lisa; Sekuler, Robert; Rotello, Caren M
2013-12-01
Reliance on remembered facts or events requires memory for their sources, that is, the contexts in which those facts or events were embedded. Understanding of source retrieval has been stymied by the fact that uncontrolled fluctuations of attention during encoding can cloud results of key importance to theoretical development. To address this issue, we combined electrophysiology (high-density electroencephalogram, EEG, recordings) with computational modeling of behavioral results. We manipulated subjects' attention to an auditory attribute, whether the source of individual study words was a male or female speaker. Posterior alpha-band (8-14 Hz) power in subjects' EEG increased after a cue to ignore the voice of the person who was about to speak. Receiver-operating-characteristic analysis validated our interpretation of oscillatory dynamics as a marker of attention to source information. With attention under experimental control, computational modeling showed unequivocally that memory for source (male or female speaker) reflected a continuous signal detection process rather than a threshold recollection process.
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
NASA Astrophysics Data System (ADS)
Ogulei, David; Hopke, Philip K.; Zhou, Liming; Patrick Pancras, J.; Nair, Narayanan; Ondov, John M.
Several multivariate data analysis methods have been applied to a combination of particle size and composition measurements made at the Baltimore Supersite. Partial least squares (PLS) was used to investigate the relationship (linearity) between number concentrations and the measured PM2.5 mass concentrations of chemical species. The data were obtained at the Ponca Street site and consisted of six days' measurements: 6, 7, 8, 18, 19 July, and 21 August 2002. The PLS analysis showed that the covariance between the data could be explained by 10 latent variables (LVs), but only the first four were needed to establish the linear relationship between the two data sets; additional LVs did not improve the model. The four LVs were found to better explain the covariance between the large-sized particles and the chemical species. A bilinear receptor model, PMF2, was then used to simultaneously analyze the size distribution and chemical composition data sets. The resolved sources were identified using information from number and mass contributions from each source (source profiles) as well as meteorological data. Twelve sources were identified: oil-fired power plant emissions, secondary nitrate I, local gasoline traffic, coal-fired power plant, secondary nitrate II, secondary sulfate, diesel emissions/bus maintenance, Quebec wildfire episode, nucleation, incinerator, airborne soil/road-way dust, and steel plant emissions. Local sources were mostly characterized by bi-modal number distributions. Regional sources were characterized by transport-mode particles (0.2-0.5 μm).
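A minimal sketch of the PLS step, relating size-distribution bins (X) to species mass concentrations (Y) through four latent variables, using scikit-learn on random stand-in data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Stand-in data with a shared 4-dimensional latent structure, mimicking the
# finding that four LVs suffice to link the two data sets.
rng = np.random.default_rng(3)
n_samples, n_bins, n_species = 120, 30, 15
latent = rng.normal(size=(n_samples, 4))
X = latent @ rng.normal(size=(4, n_bins)) + 0.3 * rng.normal(size=(n_samples, n_bins))
Y = latent @ rng.normal(size=(4, n_species)) + 0.3 * rng.normal(size=(n_samples, n_species))

pls = PLSRegression(n_components=4)
pls.fit(X, Y)
print("R^2 of Y given X with 4 LVs:", round(pls.score(X, Y), 3))
```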
Measurements of PANs during the New England Air Quality Study 2002
NASA Astrophysics Data System (ADS)
Roberts, J. M.; Marchewka, M.; Bertman, S. B.; Sommariva, R.; Warneke, C.; de Gouw, J.; Kuster, W.; Goldan, P.; Williams, E.; Lerner, B. M.; Murphy, P.; Fehsenfeld, F. C.
2007-10-01
Measurements of peroxycarboxylic nitric anhydrides (PANs) were made during the New England Air Quality Study 2002 cruise of the NOAA RV Ronald H Brown. The four compounds observed, PAN, peroxypropionic nitric anhydride (PPN), peroxymethacrylic nitric anhydride (MPAN), and peroxyisobutyric nitric anhydride (PiBN) were compared with results from other continental and Gulf of Maine sites. Systematic changes in PPN/PAN ratio, due to differential thermal decomposition rates, were related quantitatively to air mass aging. At least one early morning period was observed when O3 seemed to have been lost probably due to NO3 and N2O5 chemistry. The highest O3 episode was observed in the combined plume of isoprene sources and anthropogenic volatile organic compounds (VOCs) and NOx sources from the greater Boston area. A simple linear combination model showed that the organic precursors leading to elevated O3 were roughly half from the biogenic and half from anthropogenic VOC regimes. An explicit chemical box model confirmed that the chemistry in the Boston plume is well represented by the simple linear combination model. This degree of biogenic hydrocarbon involvement in the production of photochemical ozone has significant implications for air quality control strategies in this region.
Ascribing soil erosion of hillslope components to river sediment yield.
Nosrati, Kazem
2017-06-01
In recent decades, soil erosion has increased in the catchments of Iran. It is therefore necessary to understand soil erosion processes and sources in order to mitigate this problem. Geomorphic landforms play an important role in influencing water erosion. Ascribing the soil erosion of hillslope components to river sediment yield could therefore be useful for soil and sediment management, in order to decrease the off-site effects related to downstream sedimentation areas. The main objectives of this study were to apply radionuclide tracers and soil organic carbon to determine the relative contributions of hillslope component sediment sources in two land-use types (forest and crop field) using a Bayesian mixing model, and to estimate the uncertainty in sediment fingerprinting in a mountainous catchment of western Iran. In this analysis, 137Cs, 40K, 238U, 226Ra, 232Th and soil organic carbon tracers were measured at 32 sampling sites covering four hillslope component sediment sources (summit, shoulder, backslope, and toeslope) in forest and crop fields, along with six bed sediment samples at the downstream reach of the catchment. To quantify the sediment source proportions, the Bayesian mixing model was based on (1) primary sediment sources and (2) combined primary and secondary sediment sources. The results of both approaches indicated that erosion from the crop-field shoulder dominated the sources of river sediments. The estimated contribution of the crop-field shoulder for all river samples was 63.7% (32.4-79.8%) for the primary-sources approach and 67% (15.3-81.7%) for the combined primary and secondary sources approach. The Bayesian mixing model, based on an optimum set of tracers, indicated that crop-field land use and shoulder-component landforms contributed the most soil erosion. This technique could therefore be a useful tool for soil and sediment control management strategies.
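As a simplified, frequentist stand-in for the Bayesian mixing step, the sketch below finds non-negative source proportions (summing to one) whose mixed tracer signature best matches the river sediment. Tracer values are hypothetical; the full study additionally propagates uncertainty through a Bayesian model.

```python
import numpy as np
from scipy.optimize import minimize

sources = np.array([            # rows: summit, shoulder, backslope, toeslope
    [12.0, 410.0, 1.8],         # columns: e.g. 137Cs, 40K, organic C (made up)
    [ 4.0, 380.0, 0.9],
    [ 7.0, 395.0, 1.3],
    [ 9.0, 370.0, 1.5],
])
river = np.array([6.1, 386.0, 1.1])   # tracer signature of the bed sediment
scale = sources.std(axis=0)           # put tracers on comparable scales

def misfit(p):
    return np.sum(((p @ sources - river) / scale) ** 2)

res = minimize(misfit, x0=np.full(4, 0.25),
               bounds=[(0, 1)] * 4,
               constraints={"type": "eq", "fun": lambda p: p.sum() - 1})
print("estimated proportions:", np.round(res.x, 2))
```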
NASA Astrophysics Data System (ADS)
Miyagi, Y.; Freymueller, J.; Kimata, F.; Sato, T.; Mann, D.
2006-12-01
Okmok volcano is located on Umnak Island in the Aleutian Arc, Alaska. The volcano consists of a large caldera with several post-caldera cones within it. It has erupted more than 10 times during the last century, with the latest eruption occurring in February 1997. Annual GPS campaigns during 2000-2003 have revealed rapid inflation at Okmok volcano. Surface deformation indicates that Okmok volcano has been inflating during 2000-2003 at a variable inflation rate. Total displacements over three years are as large as 15 cm of maximum radial displacement and more than 35 cm of maximum uplift. The simple inflation pattern after 2001, showing radially outward displacements from the caldera center and significant uplift, is modeled by a Mogi inflation source located at a depth of about 3.1 km beneath the geometric center of the caldera, which we interpret as a shallow magma chamber. The results from our GPS measurements correspond approximately to the results from InSAR measurements for almost the same periods, except for an underestimate of the volume change rate of the source deduced from InSAR data for the period 2002-2003. Taking into consideration the results from InSAR measurements, the volume increase in the source is estimated to be about 0.028 km3 during 1997-2003. This means that 20-54 percent of the volume erupted in the 1997 eruption has already been replenished in the shallow magma chamber. An eruption recurrence time estimated from the volume change rate of the source is about 15-30 years for 1997-sized eruptions, consistent with the roughly 25-year average interval between major eruptions at Okmok volcano. Additional modeling using a rectangular tensile source combined with the main spherical source suggests the possibility of additional magma storage between the main source and the active vent, associated with lateral magma transport between them. The combined model improved residuals compared to the single-source model and provided a significantly better fit to the deformation data inside the caldera.
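The Mogi point-source model used to fit the GPS data has a closed form for surface displacements in an elastic half-space. The sketch below evaluates it at the abstract's 3.1 km source depth with an illustrative volume change (not the study's estimate).

```python
import numpy as np

# Mogi point source: for volume change dV at depth d, Poisson's ratio nu,
# at radial distance r from the source axis,
#   u_z = (1 - nu) * dV / pi * d / (d^2 + r^2)^(3/2)   (uplift)
#   u_r = (1 - nu) * dV / pi * r / (d^2 + r^2)^(3/2)   (radial displacement)
nu = 0.25
d  = 3.1e3                        # source depth (m), from the abstract
dV = 0.01e9                       # volume change (m^3), illustrative only

r = np.array([0.0, 1e3, 3e3, 10e3])
R3 = (d**2 + r**2) ** 1.5
uz = (1 - nu) * dV / np.pi * d / R3
ur = (1 - nu) * dV / np.pi * r / R3
for ri, uzi, uri in zip(r, uz, ur):
    print(f"r = {ri/1e3:4.1f} km  uplift = {uzi*100:5.2f} cm  radial = {uri*100:5.2f} cm")
```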
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into sub-problems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model, an executable program, is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
NASA Astrophysics Data System (ADS)
Medellin-Azuara, J.; Fraga, C. C. S.; Marques, G.; Mendes, C. A.
2015-12-01
The expansion and operation of urban water supply systems under rapidly growing demands, hydrologic uncertainty, and scarce water supplies requires a strategic combination of various supply sources for added reliability, reduced costs, and improved operational flexibility. The design and operation of such a portfolio of water supply sources merits decisions of what and when to expand, and how much of each available source to use, accounting for interest rates, economies of scale, and hydrologic variability. The present research provides a framework and an integrated methodology that optimizes the expansion of various water supply alternatives using dynamic programming, combining both short-term and long-term optimization of water use with simulation of water allocation. A case study in the Bacia do Rio dos Sinos in southern Brazil is presented. The framework couples a quadratic programming optimization model in GAMS with WEAP, a rainfall-runoff simulation model that hosts the water supply infrastructure features and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions and (b) evaluation of potential gains from reducing water system losses as a portfolio component. The latter is critical in several developing countries, where water supply system losses are high and often neglected in favor of more system expansion. Results also highlight the potential of various water supply alternatives, including conservation, groundwater, and infrastructural enhancements over time. The framework proves its usefulness for planning and its transferability to similarly urbanized systems.
UNMIX Methods Applied to Characterize Sources of Volatile Organic Compounds in Toronto, Ontario
Porada, Eugeniusz; Szyszkowicz, Mieczysław
2016-01-01
UNMIX, a receptor modeling routine from the U.S. Environmental Protection Agency (EPA), was used to model volatile organic compound (VOC) receptors at four urban sites in Toronto, Ontario. VOC ambient concentration data acquired in 2000–2009 for 175 VOC species at four air quality monitoring stations were analyzed. By performing multiple modeling attempts on varying VOC menus and rejecting results that were not reliable, UNMIX allowed sources to be discriminated by their most consistent chemical characteristics. The method assessed occurrences of VOCs in sources typical of the urban environment (traffic, evaporative emissions of fuels, banks of fugitive inert gases), industrial point sources (plastic-, polymer-, and metalworking manufactures), and in secondary sources (releases from water, sediments, and contaminated urban soil). The remote sensing and robust modeling used here produce chemical profiles of putative VOC sources that, if combined with the known environmental fates of VOCs, can be used to assign physical sources' shares of VOC emissions into the atmosphere. This in turn provides a means of assessing the impact of environmental policies on the one hand, and industrial activities on the other, on VOC air pollution. PMID:29051416
NASA Astrophysics Data System (ADS)
Galvao, Diogo
2013-04-01
As a result of various economic, social and environmental factors, the importance of water resources is increasing at a global scale. As a consequence, there is a growing need for methods and systems capable of efficiently managing and combining the rich and heterogeneous data available that concern, directly or indirectly, these water resources, such as in-situ monitoring station data, Earth Observation images and measurements, meteorological modeling forecasts, and hydrological modeling. Under the scope of the MyWater project, we developed a water management system capable of satisfying just such needs, built on a flexible platform capable of accommodating future challenges, not only in terms of sources of data but also of the models applicable to extract information from them. From a methodological point of view, the MyWater platform obtains data from distinct sources and in distinct formats, be they satellite images or meteorological model forecasts, and transforms and combines them in ways that allow them to be fed to a variety of hydrological models (such as MOHID Land, SIMGRO, etc.), which themselves can also be combined, using approaches such as those advocated by the OpenMI standard, to extract information in an automated and time-efficient manner. Such an approach brings its own share of challenges, and further research was conducted under this project on the best ways to combine such data and on novel approaches to hydrological modeling (like the PriceXD model). From a technical point of view, the MyWater platform is structured according to a classical SOA architecture, with a flexible, object-oriented, modular backend service responsible for all model process management and data treatment, while the extracted information can be accessed through a variety of frontends, from a web portal and a desktop client down to mobile phone and tablet applications. From an operational point of view, a user can not only view model results in graphically rich user interfaces, but also interact with them in ways that allow users to extract their own information. The platform was applied to a variety of case studies in the Netherlands, Greece, Portugal, Brazil and Africa, to verify the practicality, accuracy and value that it brings to end users and stakeholders.
Development of an RF-EMF Exposure Surrogate for Epidemiologic Research.
Roser, Katharina; Schoeni, Anna; Bürgi, Alfred; Röösli, Martin
2015-05-22
Exposure assessment is a crucial part in studying potential effects of RF-EMF. Using data from the HERMES study on adolescents, we developed an integrative exposure surrogate combining near-field and far-field RF-EMF exposure in a single brain and whole-body exposure measure. Contributions from far-field sources were modelled by propagation modelling and multivariable regression modelling using personal measurements. Contributions from near-field sources were assessed from both, questionnaires and mobile phone operator records. Mean cumulative brain and whole-body doses were 1559.7 mJ/kg and 339.9 mJ/kg per day, respectively. 98.4% of the brain dose originated from near-field sources, mainly from GSM mobile phone calls (93.1%) and from DECT phone calls (4.8%). Main contributors to the whole-body dose were GSM mobile phone calls (69.0%), use of computer, laptop and tablet connected to WLAN (12.2%) and data traffic on the mobile phone via WLAN (6.5%). The exposure from mobile phone base stations contributed 1.8% to the whole-body dose, while uplink exposure from other people's mobile phones contributed 3.6%. In conclusion, the proposed approach is considered useful to combine near-field and far-field exposure to an integrative exposure surrogate for exposure assessment in epidemiologic studies. However, substantial uncertainties remain about exposure contributions from various near-field and far-field sources.
NASA Astrophysics Data System (ADS)
Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.
2015-12-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore to simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. To help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state space, and to efficiently search a large, complex parameter space for behavioural parameter sets whose predictions fall within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The search for this hyper-volume is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
NASA Astrophysics Data System (ADS)
Pitarka, Arben; Mellors, Robert; Rodgers, Arthur; Vorobiev, Oleg; Ezzedine, Souheil; Matzel, Eric; Ford, Sean; Walter, Bill; Antoun, Tarabay; Wagoner, Jeffery; Pasyanos, Mike; Petersson, Anders; Sjogreen, Bjorn
2014-05-01
We investigate the excitation and propagation of far-field (epicentral distance larger than 20 m) seismic waves by analyzing and modeling ground motion from an underground chemical explosion recorded during the Source Physics Experiment (SPE), Nevada. The far-field recorded ground motion is characterized by complex features, such as large azimuthal variations in P- and S-wave amplitudes, as well as substantial energy on the tangential component of motion. Shear wave energy is also observed on the tangential component of the near-field motion (epicentral distance smaller than 20 m) suggesting that shear waves were generated at or very near the source. These features become more pronounced as the waves propagate away from the source. We address the shear wave generation during the explosion by modeling ground motion waveforms recorded in the frequency range 0.01-20 Hz, at distances of up to 1 km. We used a physics based approach that combines hydrodynamic modeling of the source with anelastic modeling of wave propagation in order to separate the contributions from the source and near-source wave scattering on shear motion generation. We found that wave propagation scattering caused by the near-source geological environment, including surface topography, contributes to enhancement of shear waves generated from the explosion source. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-06NA25946/ NST11-NCNS-TM-EXP-PD15.
Valid Statistical Analysis for Logistic Regression with Multiple Sources
NASA Astrophysics Data System (ADS)
Fienberg, Stephen E.; Nardi, Yuval; Slavković, Aleksandra B.
Considerable effort has gone into understanding issues of privacy protection of individual information in single databases, and various solutions have been proposed depending on the nature of the data, the ways in which the database will be used and the precise nature of the privacy protection being offered. Once data are merged across sources, however, the nature of the problem becomes far more complex and a number of privacy issues arise for the linked individual files that go well beyond those that are considered with regard to the data within individual sources. In the paper, we propose an approach that gives full statistical analysis on the combined database without actually combining it. We focus mainly on logistic regression, but the method and tools described may be applied essentially to other statistical models as well.
NASA Astrophysics Data System (ADS)
Zotter, Peter; Herich, Hanna; Gysel, Martin; El-Haddad, Imad; Zhang, Yanlin; Močnik, Griša; Hüglin, Christoph; Baltensperger, Urs; Szidat, Sönke; Prévôt, André S. H.
2017-03-01
Equivalent black carbon (EBC) measured by a multi-wavelength Aethalometer can be apportioned to traffic and wood burning. The method is based on the differences in the dependence of aerosol absorption on the wavelength of light used to investigate the sample, parameterized by the source-specific absorption Ångström exponent (α). While the spectral dependence (defined as α values) of the traffic-related EBC light absorption is low, wood smoke particles feature enhanced light absorption in the blue and near ultraviolet. Source apportionment results using this methodology are hence strongly dependent on the α values assumed for both types of emissions: traffic αTR, and wood burning αWB. Most studies use a single αTR and αWB pair in the Aethalometer model, derived from previous work. However, an accurate determination of the source-specific α values is currently lacking, and some recent publications have questioned the applicability of the Aethalometer model. Here we present an indirect methodology for the determination of αWB and αTR by comparing the source apportionment of EBC from the Aethalometer model with 14C measurements of the EC fraction on 16 to 40 h filter samples from several locations and campaigns across Switzerland during 2005-2012, mainly in winter. The data obtained at eight stations with different source characteristics also enabled evaluation of the performance and uncertainties of the Aethalometer model in different environments. The best combination of αTR and αWB (0.9 and 1.68, respectively) was obtained by fitting the Aethalometer model outputs (calculated with the absorption coefficients at 470 and 950 nm) against the fossil fraction of EC (ECF/EC) derived from 14C measurements. Aethalometer and 14C source apportionment results are well correlated (r = 0.81), and the fitting residuals exhibit only a minor positive bias of 1.6% and an average precision of 9.3%. This indicates that the Aethalometer model reproduces the 14C results reasonably well for all stations investigated in this study using our best estimate of a single αWB and αTR pair. Combining the EC, 14C, and Aethalometer measurements further allowed us to assess the dependence of the mass absorption cross section (MAC) of EBC on its source. The results indicate no significant difference in MAC at 880 nm between EBC originating from traffic and from wood-burning emissions. Using ECF/EC as a reference and constant, a priori selected αTR values, αWB was also calculated for each individual data point. No clear station-to-station or season-to-season differences in αWB were observed, but the αTR and αWB values are interdependent; for example, an increase in αTR by 0.1 results in a decrease in αWB by 0.1. The fitting residuals of different αTR and αWB combinations depend on ECF/EC, such that a good agreement cannot be obtained over the entire ECF/EC range with other α pairs. Additional combinations of αTR = 0.8 and 1.0 with αWB = 1.8 and 1.6, respectively, are possible, but only for ECF/EC between ~40 and 85%. Applying α values previously used in the literature, such as αWB of ~2, or any αWB in combination with αTR = 1.1, results in large residuals for our data set. We therefore recommend using the best α combination obtained here (αTR = 0.9 and αWB = 1.68) in future studies when no or only limited additional information, such as 14C measurements, is available.
However, these results were obtained for locations impacted by black carbon (BC) mainly from traffic consisting of a modern car fleet and residential wood combustion with well-constrained combustion efficiencies. For regions of the world with different combustion conditions, additional BC sources, or fuels used, further investigations are needed.
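With the recommended pair (αTR = 0.9, αWB = 1.68), the Aethalometer model reduces to solving a 2x2 linear system for the traffic and wood-burning absorption components at a reference wavelength. The measured absorption values below are hypothetical.

```python
import numpy as np

# Two-component Aethalometer model:
#   b_abs(lambda) = b_tr(l0)*(lambda/l0)^-a_tr + b_wb(l0)*(lambda/l0)^-a_wb
a_tr, a_wb = 0.9, 1.68
l1, l2, l0 = 470.0, 950.0, 950.0          # measurement and reference wavelengths (nm)

A = np.array([
    [(l1 / l0) ** -a_tr, (l1 / l0) ** -a_wb],
    [(l2 / l0) ** -a_tr, (l2 / l0) ** -a_wb],
])
b_meas = np.array([38.0, 15.0])           # b_abs at 470 and 950 nm (Mm^-1), stand-ins

b_tr, b_wb = np.linalg.solve(A, b_meas)   # traffic and wood-burning components at l0
print(f"traffic share at {l0:.0f} nm: {b_tr / (b_tr + b_wb):.1%}")
```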
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Cooper, Robert; Pawson, Steven; Sun, Zhibin
2009-01-01
We present a source inversion technique for chemical constituents that uses assimilated constituent observations rather than the observations themselves. The method is tested with a simple model problem: a two-dimensional Fourier-Galerkin transport model combined with a Kalman filter for data assimilation. Inversion is carried out using a Green's function method, and observations are simulated from a true state with added Gaussian noise. The forecast state uses the same spectral model but differs by an unbiased Gaussian model error and emissions models with constant errors. The numerical experiments employ both simulated in situ and satellite observation networks. Source inversion was carried out either by direct use of the synthetically generated observations with added noise, or by first assimilating the observations and using the analyses in the inversion. We conducted 20 identical-twin experiments for each set of source and observation configurations, and find that in the limiting cases of very few localized observations, or of an extremely large observation network, there is little advantage to carrying out assimilation first. At intermediate observation densities, however, the source inversion error standard deviation decreases by 50% to 95% when the Kalman filter algorithm is followed by Green's function inversion.
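A toy, low-dimensional version of the two-step procedure, under assumed dimensions and error levels: one Kalman filter analysis cycle, followed by a Green's-function (least-squares) source inversion that uses the analysis as data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20                                   # state size (e.g. spectral coefficients)
G = rng.normal(size=(n, 3))              # stand-in Green's functions for 3 sources
m_true = np.array([1.0, 0.4, -0.7])
x_true = G @ m_true

# --- Kalman filter analysis step (single cycle, full-state observations) ---
x_f = x_true + rng.normal(0, 0.3, n)     # forecast with unbiased Gaussian model error
P_f = 0.3**2 * np.eye(n)                 # forecast error covariance
H = np.eye(n)                            # observation operator
R = 0.1**2 * np.eye(n)                   # observation error covariance
y = x_true + rng.normal(0, 0.1, n)       # noisy observations
K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)   # Kalman gain
x_a = x_f + K @ (y - H @ x_f)            # analysis

# --- Green's function inversion using the analysis as data ---
m_hat, *_ = np.linalg.lstsq(G, x_a, rcond=None)
print("true:", m_true, "estimated:", np.round(m_hat, 2))
```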
Two-dimensional seismic velocity models of southern Taiwan from TAIGER transects
NASA Astrophysics Data System (ADS)
McIntosh, K. D.; Kuochen, H.; Van Avendonk, H. J.; Lavier, L. L.; Wu, F. T.; Okaya, D. A.
2013-12-01
We use a broad combination of wide-angle seismic data sets to develop high-resolution, crustal-scale, two-dimensional velocity models across southern Taiwan and the adjacent Huatung Basin. The data were recorded primarily during the TAIGER project and include records of thousands of marine airgun shots, several land explosive sources, and ~90 earthquakes. Both airgun sources and earthquake data were recorded by dense land arrays, and ocean bottom seismographs (OBS) recorded airgun sources east of Taiwan. This combination of data sets enables us to develop a high-resolution upper- to mid-crustal model defined by the marine and explosive sources, while also constraining the full crustal structure (with depths approaching 50 km) by using the earthquake and explosive sources. These data and the resulting models are particularly important for understanding the development of arc-continent collision in Taiwan. McIntosh et al. (2013) have shown that highly extended continental crust of the northeastern South China Sea rifted margin is underthrust at the Manila trench southwest of Taiwan but is then structurally underplated to the accretionary prism. This process of basement accretion is confirmed in the southern Central Range of Taiwan, where basement outcrops can be directly linked to high seismic velocities measured in the accretionary prism well south of the continental shelf, even south of Taiwan. These observations indicate that the southern Central Range begins to grow well before there is any direct interaction between the North Luzon arc and the Eurasian continent. Our transects provide information on how the accreted mass behaves as it approaches the continental shelf and on deformation of the arc and forearc as this occurs. We suggest that arc-continent collision in Taiwan actually develops as arc-prism-continent collision.
Numerical Simulation of Dispersion from Urban Greenhouse Gas Sources
NASA Astrophysics Data System (ADS)
Nottrott, Anders; Tan, Sze; He, Yonggang; Winkler, Renato
2017-04-01
Cities are characterized by complex topography, inhomogeneous turbulence, and variable pollutant source distributions. These features create a scale separation between local sources and urban-scale emissions estimates known as the Grey Zone. Modern computational fluid dynamics (CFD) techniques provide a quasi-deterministic, physically based toolset to bridge the scale-separation gap between source-level dynamics, local measurements, and urban-scale emissions inventories. CFD can represent complex building topography and capture detailed 3D turbulence fields in the urban boundary layer. This presentation discusses the application of OpenFOAM to urban CFD simulations of natural gas leaks in cities. OpenFOAM is open-source software for advanced numerical simulation of engineering and environmental fluid flows. When combined with free or low-cost computer-aided design and GIS tools, OpenFOAM generates a detailed 3D representation of urban wind fields. OpenFOAM was applied to model scalar emissions from various components of the natural gas distribution system, to study the impact of urban meteorology on mobile greenhouse gas measurements. The numerical experiments demonstrate that CH4 concentration profiles are highly sensitive to the relative location of emission sources and buildings. Sources separated by distances of 5-10 meters showed significant differences in the vertical dispersion of plumes, due to building wake effects. The OpenFOAM flow fields were combined with an inverse, stochastic dispersion model to quantify and visualize the sensitivity of point sensors to upwind sources in various built environments. The Boussinesq approximation was applied to investigate the effects of canopy-layer temperature gradients and convection on sensor footprints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dodds, W. K.; Collins, S. M.; Hamilton, S. K.
Analyses of 21 15N stable isotope tracer experiments, designed to examine food web dynamics in streams around the world, indicated that the isotopic composition of food resources assimilated by primary consumers (mostly invertebrates) poorly reflected the presumed food sources. Modeling indicated that consumers assimilated only 33–50% of the N available in sampled food sources such as decomposing leaves, epilithon, and fine particulate detritus over feeding periods of weeks or more. Thus, common methods of sampling food sources consumed by animals in streams do not sufficiently reflect the pool of N they assimilate. Lastly, isotope tracer studies, combined with modeling and food separation techniques, can improve estimation of the N pools in food sources that are assimilated by consumers.
NASA Astrophysics Data System (ADS)
Küchler, N.; Kneifel, S.; Kollias, P.; Loehnert, U.
2017-12-01
Cumulus and stratocumulus clouds strongly affect the Earth's radiation budget and are a major source of uncertainty in weather and climate prediction models. To improve and evaluate models, a comprehensive understanding of cloud processes is necessary, and references are needed. Active and passive microwave remote sensing of clouds can therefore be used to derive cloud properties such as liquid water path and liquid water content (LWC), which can serve as references for model evaluation. However, both the measurements and the assumptions made when retrieving physical quantities from the measurements involve sources of uncertainty. Frisch et al. (1998) combined radar and radiometer observations to derive LWC profiles. Even when their retrieval assumptions hold, uncertainties remain that depend on the measurement setup. We investigate how varying beam width, temporal and vertical resolution, frequency combinations, and beam overlap between the two instruments influence the retrieval of LWC profiles. In particular, we discuss the benefit of combining vertically highly resolved radar and radiometer measurements using the same antenna, i.e. having ideal beam overlap. Frisch, A. S., G. Feingold, C. W. Fairall, T. Uttal, and J. B. Snider, 1998: On cloud radar and microwave radiometer measurements of stratus cloud liquid water profiles. J. Geophys. Res. Atmos., 103, 23195-23197, doi:10.1029/98JD01827.
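The Frisch et al. (1998) scaling referenced above distributes the radiometer's liquid water path over radar range gates in proportion to the square root of reflectivity: LWC_i = LWP * sqrt(Z_i) / (dz * sum_j sqrt(Z_j)). The profile below is hypothetical stand-in data.

```python
import numpy as np

lwp = 120.0                                   # liquid water path (g m^-2), radiometer
dz = 30.0                                     # radar gate spacing (m)
Z_dbz = np.array([-35.0, -30.0, -26.0, -24.0, -27.0])   # in-cloud reflectivity (dBZ)
Z = 10.0 ** (Z_dbz / 10.0)                    # to linear units (mm^6 m^-3)

# Distribute LWP across gates in proportion to sqrt(Z); integrates back to LWP.
lwc = lwp * np.sqrt(Z) / (dz * np.sqrt(Z).sum())   # g m^-3 per gate
print(np.round(lwc, 3), "sums to LWP:", np.isclose((lwc * dz).sum(), lwp))
```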
Crowcroft, Natasha S.; Johnson, Caitlin; Chen, Cynthia; Li, Ye; Marchand-Austin, Alex; Bolotin, Shelly; Schwartz, Kevin; Deeks, Shelley L.; Jamieson, Frances; Drews, Steven; Russell, Margaret L.; Svenson, Lawrence W.; Simmonds, Kimberley; Mahmud, Salaheddin M.; Kwong, Jeffrey C.
2018-01-01
Introduction: Under-reporting of pertussis cases is a longstanding challenge. We estimated the true number of pertussis cases in Ontario using multiple data sources, and evaluated the completeness of each source. Methods: We linked data from multiple sources for the period 2009 to 2015: public health reportable disease surveillance data, public health laboratory data, and health administrative data (hospitalizations, emergency department visits, and physician office visits). To estimate the total number of pertussis cases in Ontario, we used a three-source capture-recapture analysis stratified by age (infants, or aged one year and older) and adjusted for dependency between sources. We used the Bayesian Information Criterion to compare models. Results: Using probable and confirmed reported cases, laboratory data, and combined hospitalizations/emergency department visits, the estimated total number of cases during the six-year period among infants was 924, compared with 545 unique observed cases from all sources. Using the same sources, the estimated total for those aged 1 year and older was 12,883, compared with 3,304 observed cases from all sources. Only 37% of infants and 11% of those aged 1 year and over admitted to hospital or seen in an emergency department for pertussis were reported to public health. The sensitivity of public health reporting varied from 2% to 68% depending on age group and the combination of data sources included. The sensitivity of combined hospitalizations and emergency department visits varied from 37% to 49%, and of laboratory data from 1% to 50%. Conclusions: All data sources contribute cases and are complementary, suggesting that the incidence of pertussis is substantially higher than routine reports suggest. The sensitivity of different data sources varies. Better case identification is required to improve pertussis control in Ontario. PMID:29718945
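The idea behind capture-recapture can be sketched with the simple two-source Chapman estimator; the study itself fit three-source log-linear models with dependency terms, but the principle is the same: overlap between lists reveals the cases no list observed. Counts below are hypothetical.

```python
# Two-source capture-recapture (Chapman estimator), with made-up counts.
n1 = 545      # cases on list 1 (e.g. surveillance reports)
n2 = 430      # cases on list 2 (e.g. hospital/ED records)
m  = 260      # cases appearing on both lists

N_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1          # estimated true total
sensitivity_1 = n1 / N_hat                         # completeness of list 1
print(f"estimated total: {N_hat:.0f}; list-1 sensitivity: {sensitivity_1:.1%}")
```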
NASA Astrophysics Data System (ADS)
Ren, Y.
2017-12-01
Context The spatio-temporal distribution patterns of land surface temperature (LST) in urban forests are influenced by many ecological factors; identifying the interactions between these factors can improve simulations and predictions of the spatial patterns of urban cold islands. This quantitative research requires an integrated method that combines multi-source data with spatial statistical analysis. Objectives The purpose of this study was to clarify how interactions between anthropogenic activity and multiple ecological factors influence urban forest LST, using cluster analysis of hot and cold spots and the Geodetector model. We introduced the hypothesis that anthropogenic activity interacts with certain ecological factors, and that their combination influences urban forest LST. We also assumed that the spatio-temporal distribution of urban forest LST should be similar to that of the ecological factors and can be represented quantitatively. Methods We used Jinjiang, a representative city in China, as a case study. Population density was employed to represent anthropogenic activity. We assembled a multi-source dataset (forest inventory, digital elevation models (DEM), population, and remote sensing imagery) at a unified urban scale to support the analysis. Through a combination of spatial statistical analysis results, multi-source spatial data, and the Geodetector model, the interaction mechanisms shaping urban forest LST were revealed. Results Although different ecological factors influence forest LST differently, in two periods with different hot and cold spots, patch area and dominant tree species were the main factors contributing to LST clustering in urban forests. The interaction between anthropogenic activity and multiple ecological factors increased LST in urban forest stands, both linearly and nonlinearly. Strong interactions between elevation and dominant species were generally observed and were prevalent in both hot- and cold-spot areas in different years. Conclusions A combination of spatial statistics and the Geodetector model should be effective for quantitatively evaluating the interactive relationships among ecological factors, anthropogenic activity and LST.
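The Geodetector model quantifies a factor's influence through the q-statistic, the share of LST variance explained by stratifying the area by that factor; interaction is assessed by stratifying on the overlay of two factors and comparing q values. A small illustration with synthetic data (class labels and values are invented):

```python
import numpy as np

def q_statistic(lst, strata):
    """Geodetector q-statistic: the share of LST variance explained by
    stratifying the study area by one factor.
    q = 1 - sum_h(N_h * var_h) / (N * var), with q in [0, 1]."""
    lst, strata = np.asarray(lst, float), np.asarray(strata)
    sst = len(lst) * lst.var()
    ssw = sum((strata == h).sum() * lst[strata == h].var()
              for h in np.unique(strata))
    return 1.0 - ssw / sst

# Interaction detector: stratify by the overlay of two factors and compare
# with the single-factor q values.
rng = np.random.default_rng(1)
lst = rng.normal(30, 2, 500)               # synthetic LST values (deg C)
species = rng.integers(0, 4, 500)          # dominant species class
elev = rng.integers(0, 3, 500)             # elevation class
q_a, q_b = q_statistic(lst, species), q_statistic(lst, elev)
q_ab = q_statistic(lst, species * 10 + elev)   # overlay strata
print(q_a, q_b, q_ab)  # q_ab > max(q_a, q_b) indicates enhancing interaction
```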
Nevers, Meredith; Byappanahalli, Muruleedhara; Phanikumar, Mantha S.; Whitman, Richard L.
2016-01-01
Mathematical models have been widely applied to surface waters to estimate rates of settling, resuspension, flow, dispersion, and advection in order to calculate movement of particles that influence water quality. Of particular interest are the movement, survival, and persistence of microbial pathogens or their surrogates, which may contaminate recreational water, drinking water, or shellfish. Most models devoted to microbial water quality have been focused on fecal indicator organisms (FIO), which act as a surrogate for pathogens and viruses. Process-based modeling and statistical modeling have been used to track contamination events to source and to predict future events. The use of these two types of models requires different levels of expertise and input; process-based models rely on theoretical physical constructs to explain present conditions and biological distribution, while data-based, statistical models use extant paired data to do the same. Selection of the appropriate model and interpretation of results are critical to proper use of these tools in microbial source tracking. Integration of the modeling approaches could provide insight for tracking and predicting contamination events in real time. A review of modeling efforts reveals that process-based modeling has great promise for microbial source tracking efforts; further, combining the understanding of physical processes influencing FIO contamination developed with process-based models and molecular characterization of the population by gene-based (i.e., biological) or chemical markers may be an effective approach for locating sources and remediating contamination to better protect human health.
Gass, Katherine; Balachandran, Sivaraman; Chang, Howard H.; Russell, Armistead G.; Strickland, Matthew J.
2015-01-01
Epidemiologic studies utilizing source apportionment (SA) of fine particulate matter have shown that particles from certain sources might be more detrimental to health than others; however, it is difficult to quantify the uncertainty associated with a given SA approach. In the present study, we examined associations between source contributions of fine particulate matter and emergency department visits for pediatric asthma in Atlanta, Georgia (2002–2010) using a novel ensemble-based SA technique. Six daily source contributions from 4 SA approaches were combined into an ensemble source contribution. To better account for exposure uncertainty, 10 source profiles were sampled from their posterior distributions, resulting in 10 time series with daily SA concentrations. For each of these time series, Poisson generalized linear models with varying lag structures were used to estimate the health associations for the 6 sources. The rate ratios for the source-specific health associations from the 10 imputed source contribution time series were combined, resulting in health associations with inflated confidence intervals to better account for exposure uncertainty. Adverse associations with pediatric asthma were observed for 8-day exposure to particles generated from diesel-fueled vehicles (rate ratio = 1.06, 95% confidence interval: 1.01, 1.10) and gasoline-fueled vehicles (rate ratio = 1.10, 95% confidence interval: 1.04, 1.17). PMID:25776011
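Combining the rate ratios from the 10 imputed source-contribution time series follows the logic of multiple imputation: pool on the log scale and inflate the variance by the between-imputation spread. A sketch, assuming Rubin's rules are an adequate stand-in for the paper's combination step (inputs are invented):

```python
import numpy as np

def pool_rate_ratios(log_rr, se):
    """Pool source-specific health associations across M imputed source-
    contribution time series (Rubin's rules on the log rate-ratio scale).
    Returns the pooled rate ratio and a 95% CI widened by the
    between-imputation variance."""
    log_rr, se = np.asarray(log_rr), np.asarray(se)
    m = len(log_rr)
    qbar = log_rr.mean()                       # pooled log rate ratio
    within = (se ** 2).mean()                  # mean within-imputation variance
    between = log_rr.var(ddof=1)               # between-imputation variance
    total = within + (1 + 1 / m) * between     # total variance (inflated CI)
    half = 1.96 * np.sqrt(total)
    return np.exp(qbar), (np.exp(qbar - half), np.exp(qbar + half))

# Hypothetical log rate ratios and SEs from 10 Poisson GLM fits
rng = np.random.default_rng(0)
rr, ci = pool_rate_ratios(rng.normal(0.058, 0.01, 10), np.full(10, 0.02))
print(rr, ci)
```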
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisniega, A.; Vaquero, J. J., E-mail: juanjose.vaquero@uc3m.es; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid ES28007
Purpose: The availability of accurate and simple models for the estimation of x-ray spectra is of great importance for system simulation, optimization, or the inclusion of photon energy information into data processing. A variety of publicly available tools exist for estimating x-ray spectra in radiology and mammography. However, most of these models cannot be used directly for modeling microfocus x-ray sources due to differences in inherent filtration, energy range, and/or anode material. For this reason, the authors propose a new model for the simulation of microfocus spectra based on existing models for mammography and radiology, modified to compensate for the effects of inherent filtration and energy range. Methods: The authors used the radiology and mammography versions of an existing empirical model [tungsten anode spectral model interpolating polynomials (TASMIP)] as the basis of the microfocus model. First, the authors estimated the inherent filtration included in the radiology model by comparing the shape of the spectra with spectra from the mammography model. Afterwards, the authors built a unified spectra dataset by combining both models and, finally, estimated the parameters of the new version of TASMIP for microfocus sources by calibrating against experimental exposure data from a microfocus x-ray source. The model was validated by comparing estimated and experimental exposure and attenuation data for different attenuating materials and x-ray beam peak energy values, using two different x-ray tubes. Results: Inherent filtration for the radiology spectra from TASMIP was found to be equivalent to 1.68 mm Al, as compared to spectra obtained from the mammography model. Matching the experimentally measured exposure data required applying a negative filtration of about 0.21 mm Al and an anode roughness of 0.003 mm W to the combined dataset. Validation of the model against measured data showed errors in exposure and attenuation in line with those reported for other models for radiology or mammography. Conclusions: A new version of the TASMIP model for the estimation of x-ray spectra of microfocus x-ray sources has been developed and validated experimentally. As with other versions of TASMIP, the estimation of spectra is very simple, involving only the evaluation of polynomial expressions.
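TASMIP-style models are deliberately simple to evaluate: the photon fluence in each 1 keV energy bin is a polynomial in the tube potential. A schematic evaluation routine (the coefficients below are placeholders, not the published or newly calibrated microfocus tables):

```python
import numpy as np

def tasmip_like_spectrum(kvp, coeffs, energies_kev):
    """Evaluate a TASMIP-style empirical spectrum: photon fluence in each
    energy bin is a polynomial in the tube potential (kVp).
    coeffs[i] holds the polynomial coefficients of bin i, constant term
    first; bins above the tube potential emit no photons."""
    fluence = np.array([sum(a * kvp ** n for n, a in enumerate(c))
                        for c in coeffs])
    fluence[np.asarray(energies_kev) > kvp] = 0.0   # no photons above kVp
    return np.clip(fluence, 0.0, None)              # clamp negative fits

# Placeholder coefficients for three bins centered at 20, 30 and 40 keV
coeffs = [(-1.0e4, 8.0e2, 1.2), (-2.0e4, 6.0e2, 2.0), (-3.0e4, 4.0e2, 2.5)]
print(tasmip_like_spectrum(35.0, coeffs, [20, 30, 40]))  # last bin is zero
```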
Analysis on a diffusive SIS epidemic model with logistic source
NASA Astrophysics Data System (ADS)
Li, Bo; Li, Huicong; Tong, Yachun
2017-08-01
In this paper, we are concerned with an SIS epidemic reaction-diffusion model with a logistic source in a spatially heterogeneous environment. We first discuss some basic properties of the parabolic system, including the uniform upper bound of solutions and the global stability of the endemic equilibrium when the spatial environment is homogeneous. Our primary focus is to determine the asymptotic profile of endemic equilibria (when they exist) as the diffusion (migration) rate of the susceptible or infected population becomes small or large. Combined with the results of Li et al. (J. Differ. Equ. 262:885-913, 2017), where the case of a linear source is studied, our analysis suggests that a varying total population enhances the persistence of infectious disease.
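For readers unfamiliar with the setting, a representative form of such a system is sketched below; the logistic source term a(x)S - b(x)S^2 replaces the linear source of the earlier work. This is an assumed illustration, and the paper's exact system may differ in detail:

```latex
\begin{aligned}
\partial_t S - d_S \Delta S &= a(x)\,S - b(x)\,S^2
  - \frac{\beta(x)\,S I}{S+I} + \gamma(x)\,I, && x\in\Omega,\ t>0,\\
\partial_t I - d_I \Delta I &= \frac{\beta(x)\,S I}{S+I} - \gamma(x)\,I,
  && x\in\Omega,\ t>0,\\
\partial_\nu S = \partial_\nu I &= 0, && x\in\partial\Omega,\ t>0,
\end{aligned}
```

with no-flux (Neumann) boundary conditions, so that the total population can vary through the logistic source while no individuals cross the habitat boundary.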
Information Extraction for System-Software Safety Analysis: Calendar Year 2008 Year-End Report
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2009-01-01
This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
Language Model Combination and Adaptation Using Weighted Finite State Transducers
NASA Technical Reports Server (NTRS)
Liu, X.; Gales, M. J. F.; Hieronymus, J. L.; Woodland, P. C.
2010-01-01
In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. They can be used either to represent different genres or tasks found in diverse text sources, or to capture the stochastic properties of different linguistic symbol sequences, for example syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal change to decoding tools is needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences.
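At its simplest, LM combination is a weighted mixture of component model probabilities; the paper's contribution is to realize such combinations (and more general configurations) purely with WFST operations so that decoders need minimal changes. A plain-Python sketch of the underlying mixture, with invented bigram tables, ignoring WFST composition and on-the-fly expansion:

```python
class InterpolatedLM:
    """Linear interpolation of component n-gram LMs, the kind of combination
    the paper realizes with WFST operations. This sketch simply mixes
    probabilities; a floor value stands in for proper back-off smoothing."""
    def __init__(self, lms, weights, floor=1e-9):
        assert abs(sum(weights) - 1.0) < 1e-9
        self.lms, self.weights, self.floor = lms, weights, floor

    def prob(self, word, history):
        return sum(w * lm.get((history, word), self.floor)
                   for lm, w in zip(self.lms, self.weights))

# Two hypothetical bigram tables (e.g., two genres) and their 0.7/0.3 mixture
lm_a = {(("the",), "cat"): 0.2,  (("the",), "dog"): 0.1}
lm_b = {(("the",), "cat"): 0.05, (("the",), "dog"): 0.3}
mix = InterpolatedLM([lm_a, lm_b], [0.7, 0.3])
print(mix.prob("dog", ("the",)))   # 0.7*0.1 + 0.3*0.3 = 0.16
```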
Analysis of an entrainment model of the jet in a crossflow
NASA Technical Reports Server (NTRS)
Chang, H. S.; Werner, J. E.
1972-01-01
A theoretical model has been proposed for the problem of a round jet in an incompressible cross-flow. The method of matched asymptotic expansions has been applied to this problem. For the solution to the flow problem in the inner region, the re-entrant wake flow model was used, with the re-entrant flow representing the fluid entrained by the jet. Higher order corrections are obtained in terms of this basic solution. The perturbation terms in the outer region were found to be a line distribution of doublets and sources. The line distribution of sources represents the combined effect of the entrainment and the displacement.
USDA-ARS?s Scientific Manuscript database
This study investigates the utility of integrating remotely sensed estimates of leaf chlorophyll (Cab) into a thermal-based Two-Source Energy Balance (TSEB) model that estimates land-surface CO2 and energy fluxes using an analytical, light-use-efficiency (LUE) based model of canopy resistance. The LU...
Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions
NASA Technical Reports Server (NTRS)
Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.
2011-01-01
A surrogate model methodology is described for predicting in real time the residual strength of flight structures with discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. A residual strength test of a metallic, integrally-stiffened panel is simulated to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data would, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.
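The workflow reduces to a regression problem: damage parameters in, FEM-derived residual strength out, with a neural network as the surrogate. A hedged sketch using scikit-learn and synthetic training data (the feature set and target function are invented for illustration, not taken from the paper's design of experiment):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical design-of-experiment table: each row is a discrete-source
# damage case (crack length [mm], crack angle [deg], damage location index);
# the target is residual strength [kN] as a 3D elastic-plastic fracture
# simulation would provide it (synthetic surrogate here).
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(10, 200, 200),   # crack length
                     rng.uniform(0, 90, 200),     # crack angle
                     rng.integers(0, 5, 200)])    # location index
y = 900 - 2.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 10, 200)

surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 16),
                                       max_iter=5000, random_state=0))
surrogate.fit(X, y)

# In-flight use: a new damage case is evaluated in milliseconds instead of
# re-running the high-fidelity fracture simulation.
print(surrogate.predict([[120.0, 30.0, 2]]))
```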
Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions
NASA Technical Reports Server (NTRS)
Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.
2011-01-01
A surrogate model methodology is described for predicting, during flight, the residual strength of aircraft structures that sustain discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. Two ductile fracture simulations are presented to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data does, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high fidelity fracture simulation framework provide useful tools for adaptive flight technology.
Field-scale forward and back diffusion through low-permeability zones
NASA Astrophysics Data System (ADS)
Yang, Minjune; Annable, Michael D.; Jawitz, James W.
2017-07-01
Understanding the effects of back diffusion of groundwater contaminants from low-permeability zones to aquifers is critical to making site management decisions related to remedial actions. Here, we combine aquifer and aquitard data to develop recommended site characterization strategies using a three-stage classification of plume life cycle based on the solute origins: aquifer source zone dissolution, source zone dissolution combined with back diffusion from an aquitard, and only back diffusion. We use measured aquitard concentration profile data from three field sites to identify signature shapes that are characteristic of these three stages. We find good fits to the measured data with analytical solutions that include the effects of advection and forward and back diffusion through low-permeability zones, and linearly and exponentially decreasing flux resulting from source dissolution in the aquifer. Aquifer contaminant time series data at monitoring wells from a mature site were well described using analytical solutions representing the combined case of source zone and back diffusion, while data from a site where the source had been isolated were well described solely by back diffusion. The modeling approach presented in this study is designed to enable site managers to implement appropriate remediation technologies at the proper time for high- and low-permeability zones, given the estimated stage of the plume life cycle.
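The signature shapes of the three stages can be reproduced with textbook diffusion solutions: a constant-concentration boundary switched on at the source onset and, by superposition, switched off when the source is isolated. A simplified 1D sketch (parameter values are illustrative; the paper's solutions also include advection and transient source strength):

```python
import numpy as np
from scipy.special import erfc

def aquitard_profile(z, t, t_off, c0=1.0, D=1e-10):
    """1D aquitard concentration profile for a constant-concentration aquifer
    boundary switched on at t=0 and off at t=t_off (superposition of two
    analytical step responses). z in m, times in s, D in m^2/s."""
    load = c0 * erfc(z / (2.0 * np.sqrt(D * t)))
    if t <= t_off:
        return load                      # forward-diffusion stages
    unload = c0 * erfc(z / (2.0 * np.sqrt(D * (t - t_off))))
    return load - unload                 # back-diffusion signature shape

z = np.linspace(0.0, 1.0, 6)             # depth into the aquitard (m)
yr = 365.25 * 86400.0
# Five years after source isolation the peak sits below the interface,
# the characteristic shape of the back-diffusion-only stage.
print(aquitard_profile(z, 20 * yr, 15 * yr))
```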
Expectancy effects in source memory: how moving to a bad neighborhood can change your memory.
Kroneisen, Meike; Woehe, Larissa; Rausch, Leonie Sophie
2015-02-01
Enhanced memory for cheaters could be suited to avoiding social exchange situations in which we run the risk of being exploited by others. Several experiments have demonstrated better source memory for faces combined with negative rather than positive behavior (Bell & Buchner, Memory & Cognition, 38, 29-41, 2010) or for cheaters and cooperators showing unexpected behavior (Bell, Buchner, Kroneisen, & Giang, Journal of Experimental Psychology: Learning, Memory, and Cognition, 38, 1512-1529, 2012). In the present study, we compared two groups: group 1 saw faces combined with aggressive, prosocial or neutral behavior descriptions but received no further information, whereas group 2 was explicitly told that they would now see the behavior descriptions of very aggressive and unsocial persons. To measure old-new discrimination, source memory, and guessing biases separately, we used a multinomial model. When participants had no expectancies about the behavior of the presented people, enhanced source memory for aggressive persons was found. In comparison, source memory for faces combined with prosocial behavior descriptions was significantly higher in the group expecting only aggressive persons. These findings can be attributed to a mechanism that focuses on expectancy-incongruent information, representing a more flexible and therefore more efficient memory strategy for remembering exchange-relevant information.
Pallas, Benoît; Clément-Vidal, Anne; Rebolledo, Maria-Camila; Soulié, Jean-Christophe; Luquet, Delphine
2013-01-01
The ability to assimilate C and allocate non-structural carbohydrates (NSCs) to the most appropriate organs is crucial to maximizing plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink than source limited, as organ expansion or appearance rate is affected earlier and more strongly than C assimilation. This favors plant survival and recovery but not always agronomic performance, as NSCs are stored rather than used for growth owing to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex, and plant modeling can help in analyzing their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasting monocotyledonous species (rice, oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as were the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. Modeling results highlighted that optimal drought sensitivity depends both on drought type and species and that modeling is a great opportunity to analyze such complex processes. Further modeling needs, and more generally the challenge of using models to support complex trait breeding, are discussed. PMID:24204372
NASA Astrophysics Data System (ADS)
Hopcroft, Peter O.; Valdes, Paul J.; Kaplan, Jed O.
2018-04-01
The observed rise in atmospheric methane (CH4) from 375 ppbv during the Last Glacial Maximum (LGM: 21,000 years ago) to 680 ppbv during the late preindustrial era is not well understood. Atmospheric chemistry considerations implicate an increase in CH4 sources, but process-based estimates fail to reproduce the required amplitude. CH4 stable isotopes provide complementary information that can help constrain the underlying causes of the increase. We combine Earth System model simulations of the late preindustrial and LGM CH4 cycles, including process-based estimates of the isotopic discrimination of vegetation, in a box model of atmospheric CH4 and its isotopes. Using a Bayesian approach, we show how model-based constraints and ice core observations may be combined in a consistent probabilistic framework. The resultant posterior distributions point to a strong reduction in wetland and other biogenic CH4 emissions during the LGM, with a modest increase in the geological source, or potentially natural or anthropogenic fires, accounting for the observed enrichment of δ13CH4.
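The box-model logic couples a mass balance for the CH4 burden with a flux-weighted isotope balance, which is what lets the ice-core δ13CH4 record discriminate between, say, wetland and geological sources. A minimal steady-state sketch with illustrative numbers (not the paper's posterior estimates):

```python
# Minimal one-box CH4 budget with a source-weighted isotope balance.
# All fluxes and signatures below are illustrative round numbers.
sources = {               # flux (Tg CH4/yr), d13C signature (permil)
    "wetlands":   (150, -60.0),
    "fires":      ( 20, -25.0),
    "geological": ( 40, -40.0),
}
lifetime_yr = 9.0         # atmospheric lifetime
ppb_per_tg = 0.36         # rough burden-to-mixing-ratio conversion
eps_sink = 6.0            # permil enrichment left by sink fractionation

total = sum(f for f, _ in sources.values())
burden_ppb = total * lifetime_yr * ppb_per_tg        # steady-state burden
d13c_src = sum(f * d for f, d in sources.values()) / total
d13c_atm = d13c_src + eps_sink   # atmosphere enriched relative to sources
print(f"CH4 ~ {burden_ppb:.0f} ppb, flux-weighted source d13C "
      f"{d13c_src:.1f} permil, atmospheric d13C ~ {d13c_atm:.1f} permil")
```

Inverting this relationship for the source fluxes, given the observed burden and δ13CH4 at the LGM and preindustrial, is the essence of the Bayesian step: the two observables constrain combinations of fluxes and signatures rather than any single source.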
THREAT ANTICIPATION AND DECEPTIVE REASONING USING BAYESIAN BELIEF NETWORKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgood, Glenn O; Olama, Mohammed M; Lake, Joe E
Recent events highlight the need for tools to anticipate threats posed by terrorists. Assessing these threats requires combining information from disparate data sources such as analytic models, simulations, historical data, sensor networks, and user judgments. These disparate data can be combined in a coherent, analytically defensible, and understandable manner using a Bayesian belief network (BBN). In this paper, we develop a BBN threat anticipatory model based on a deceptive reasoning algorithm using a network engineering process that treats the probability distributions of the BBN nodes within the broader context of the system development process.
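The combination step is exactly what a BBN formalizes: priors from analyst judgment or historical data, conditional probability tables from models or simulations, and inference that updates threat belief as evidence arrives. A self-contained toy network, solved by direct enumeration (all structure and numbers are invented, not the paper's model):

```python
import itertools

# Toy BBN: Intent and Capability are parents of observed Activity.
p_intent = {True: 0.1, False: 0.9}     # prior from analyst judgment
p_capab  = {True: 0.3, False: 0.7}     # prior from historical data
p_activity = {                          # CPT, e.g., simulation-derived
    (True,  True):  0.8, (True,  False): 0.4,
    (False, True):  0.2, (False, False): 0.05,
}

def posterior_intent(activity_observed=True):
    """P(Intent | Activity) by enumeration over the joint distribution."""
    joint = {}
    for i, c in itertools.product([True, False], repeat=2):
        pa = p_activity[(i, c)]
        pa = pa if activity_observed else 1.0 - pa
        joint[(i, c)] = p_intent[i] * p_capab[c] * pa
    z = sum(joint.values())
    return sum(v for (i, _), v in joint.items() if i) / z

print(posterior_intent(True))   # threat belief rises once activity is seen
```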
Source effects on the simulation of the strong ground motion of the 2011 Lorca earthquake
NASA Astrophysics Data System (ADS)
Saraò, Angela; Moratto, Luca; Vuan, Alessandro; Mucciarelli, Marco; Jimenez, Maria Jose; Garcia Fernandez, Mariano
2016-04-01
On May 11, 2011, a moderate seismic event (Mw = 5.2) struck the city of Lorca (south-east Spain), causing nine deaths, a large number of injuries and damage to buildings. The largest PGA value (360 cm/s²) ever recorded in Spain was observed at the accelerometric station located in Lorca (LOR), and was explained by source directivity rather than local site effects. In recent years, different source models, retrieved from inversions of geodetic or seismological data or a combination of the two, have been published. To investigate the variability that alternative source models of the same earthquake can introduce in the computation of strong motion, we calculated seismograms (up to 1 Hz) using an approach based on wavenumber integration, with four different source models taken from the literature as input. The source models differ mainly in the slip distribution on the fault. Our results show that, as an effect of the different source models, the ground-motion variability in terms of pseudo-spectral velocity (1 s) can reach one order of magnitude for near-source receivers or for sites influenced by the forward-directivity effect. Finally, we computed the strong motion at frequencies above 1 Hz using empirical Green's functions and the source-model parameters that best reproduce the recorded shaking up to 1 Hz: the computed seismograms satisfactorily fit the signals recorded at the LOR station as well as at the other stations close to the source.
NASA Astrophysics Data System (ADS)
Colin, Angel
2014-03-01
This paper describes an experimental setup for the spectral calibration of bolometric detectors used in radioastronomy. The system is composed of a Martin-Puplett interferometer with two identical artificial blackbody sources operating in vacuum at 77 K and 300 K simultaneously. One source is integrated into a liquid nitrogen cryostat, and the other into a vacuum chamber at room temperature. The sources were designed with a combination of conical and cylindrical geometries, forming an orthogonal configuration that matches the internal optics of the interferometer. With a simple mathematical model we estimated emissivities of ε ≈ 0.995 for each source.
NASA Astrophysics Data System (ADS)
Heo, Jongbae; Dulger, Muaz; Olson, Michael R.; McGinnis, Jerome E.; Shelton, Brandon R.; Matsunaga, Aiko; Sioutas, Constantinos; Schauer, James J.
2013-07-01
Four hundred fine particulate matter (PM2.5) samples collected over a 1-year period at two sites in the Los Angeles Basin were analyzed for organic carbon (OC), elemental carbon (EC), water soluble organic carbon (WSOC) and organic molecular markers. The results were used in a Positive Matrix Factorization (PMF) receptor model to obtain daily, monthly and annual average source contributions to PM2.5 OC. Results of the PMF model showed similar source categories with comparable year-long contributions to PM2.5 OC across the sites. Five source categories providing reasonably stable profiles were identified: mobile, wood smoke, primary biogenic, and two types of secondary organic carbon (SOC) (i.e., from anthropogenic and biogenic emissions). Total primary emission factors and total SOC factors contributed approximately 60% and 40%, respectively, to the annual-average OC concentrations. Primary sources showed strong seasonal patterns with high winter peaks and low summer troughs, while SOC showed the reverse pattern, with highs in the spring and summer in the region. Interestingly, smoke from forest fires which occurred episodically in California during the summer and fall of 2009 was identified and combined with the primary biogenic source as one distinct factor in the OC budget. The PMF-resolved factors were further investigated and compared to a chemical mass balance (CMB) model and a second multivariate receptor model (UNMIX) using the molecular markers considered in the PMF. Good agreement between the source contributions from mobile sources and biomass burning for the three models was obtained, providing additional weight of evidence that these source apportionment techniques are sufficiently accurate for policy development. However, the CMB model did not quantify primary biogenic emissions, which were included in other sources with the SOC. Both multivariate receptor models, PMF and UNMIX, were unable to separate source contributions from diesel and gasoline engines.
Hybrid Air Quality Modeling Approach For Use in the Near ...
The Near-road EXposures to Urban air pollutant Study (NEXUS) investigated whether children with asthma living in close proximity to major roadways in Detroit, MI (particularly near roadways with high diesel traffic) have greater health impacts associated with exposure to air pollutants than those living farther away. A major challenge in such health and exposure studies is the lack of information regarding pollutant exposure characterization. Air quality modeling can provide spatially and temporally varying exposure estimates for examining relationships between traffic-related air pollutants and adverse health outcomes. This paper presents a hybrid air quality modeling approach and its application in NEXUS to provide spatially and temporally varying exposure estimates and to identify the mobile-source contribution to total pollutant exposure. Model-based exposure metrics, associated with local variations of emissions and meteorology, were estimated using a combination of the AERMOD and R-LINE dispersion models, local emission source information from the National Emissions Inventory, detailed road network locations and traffic activity, and meteorological data from the Detroit City Airport. The regional background contribution was estimated using a combination of the Community Multiscale Air Quality (CMAQ) model and the Space/Time Ordinary Kriging (STOK) model. To capture the near-road pollutant gradients, refined "mini-grids" of model receptors…
Ng, Kok-Hoe
2016-06-01
The study aims to project future trends in living arrangements and access to children's cash contributions and market income sources among older people in Hong Kong. A cell-based model was constructed by combining available population projections, labour force projections, an extrapolation of the historical trend in living arrangements based on national survey datasets and a regression model on income sources. Under certain assumptions, the proportion of older people living with their children may decline from 59 to 48% during 2006-2030. Although access to market income sources may improve slightly, up to 20% of older people may have no access to either children's financial support or market income sources, and will not live with their children by 2030. Family support is expected to contract in the next two decades. Public pensions should be expanded to protect financially vulnerable older people. © 2015 AJA Inc.
Surface-water nutrient conditions and sources in the United States Pacific Northwest
Wise, D.R.; Johnson, H.M.
2011-01-01
The SPAtially Referenced Regressions On Watershed attributes (SPARROW) model was used to perform an assessment of surface-water nutrient conditions and to identify important nutrient sources in watersheds of the Pacific Northwest region of the United States (U.S.) for the year 2002. Our models included variables representing nutrient sources as well as landscape characteristics that affect nutrient delivery to streams. Annual nutrient yields were higher in watersheds on the wetter, west side of the Cascade Range compared to watersheds on the drier, east side. High nutrient enrichment (relative to the U.S. Environmental Protection Agency's recommended nutrient criteria) was estimated in watersheds throughout the region. Forest land was generally the largest source of total nitrogen stream load and geologic material was generally the largest source of total phosphorus stream load generated within the 12,039 modeled watersheds. These results reflected the prevalence of these two natural sources and the low input from other nutrient sources across the region. However, the combined input from agriculture, point sources, and developed land, rather than natural nutrient sources, was responsible for most of the nutrient load discharged from many of the largest watersheds. Our results provided an understanding of the regional patterns in surface-water nutrient conditions and should be useful to environmental managers in future water-quality planning efforts.
Comparison of two trajectory based models for locating particle sources for two rural New York sites
NASA Astrophysics Data System (ADS)
Zhou, Liming; Hopke, Philip K.; Liu, Wei
Two back-trajectory-based statistical models, simplified quantitative transport bias analysis (QTBA) and residence-time weighted concentrations (RTWC), have been compared for their capabilities in identifying likely locations of source emissions contributing to observed particle concentrations at Potsdam and Stockton, New York. QTBA attempts to take into account the distribution of concentrations around the directions of the back trajectories. In the full QTBA approach, deposition processes (wet and dry) are also considered; simplified QTBA omits deposition and is best used with multiple-site data. Similarly, the RTWC approach uses concentrations measured at different sites along with the back trajectories to distribute the concentration contributions across the spatial domain of the trajectories. In this study, these models are used in combination with the source contribution values obtained from a previous positive matrix factorization analysis of particle composition data from Potsdam and Stockton. The six sources common to the two sites (sulfate, soil, zinc smelter, nitrate, wood smoke, and copper smelter) were analyzed. The results of the two methods are consistent and locate large, clearly defined sources well. The RTWC approach can find more minor sources but may also give unrealistic estimates of source locations.
Updating the USGS seismic hazard maps for Alaska
Mueller, Charles; Briggs, Richard; Wesson, Robert L.; Petersen, Mark D.
2015-01-01
The U.S. Geological Survey makes probabilistic seismic hazard maps and engineering design maps for building codes, emergency planning, risk management, and many other applications. The methodology considers all known earthquake sources with their associated magnitude and rate distributions. Specific faults can be modeled if slip-rate or recurrence information is available. Otherwise, areal sources are developed from earthquake catalogs or GPS data. Sources are combined with ground-motion estimates to compute the hazard. The current maps for Alaska were developed in 2007, and included modeled sources for the Alaska-Aleutian megathrust, a few crustal faults, and areal seismicity sources. The megathrust was modeled as a segmented dipping plane with segmentation largely derived from the slip patches of past earthquakes. Some megathrust deformation is aseismic, so recurrence was estimated from seismic history rather than plate rates. Crustal faults included the Fairweather-Queen Charlotte system, the Denali–Totschunda system, the Castle Mountain fault, two faults on Kodiak Island, and the Transition fault, with recurrence estimated from geologic data. Areal seismicity sources were developed for Benioff-zone earthquakes and for crustal earthquakes not associated with modeled faults. We review the current state of knowledge in Alaska from a seismic-hazard perspective, in anticipation of future updates of the maps. Updated source models will consider revised seismicity catalogs, new information on crustal faults, new GPS data, and new thinking on megathrust recurrence, segmentation, and geometry. Revised ground-motion models will provide up-to-date shaking estimates for crustal earthquakes and subduction earthquakes in Alaska.
Stochastic Industrial Source Detection Using Lower Cost Methods
NASA Astrophysics Data System (ADS)
Thoma, E.; George, I. J.; Brantley, H.; Deshmukh, P.; Cansler, J.; Tang, W.
2017-12-01
Hazardous air pollutants (HAPs) can be emitted from a variety of sources in industrial facilities, energy production, and commercial operations. Stochastic industrial sources (SISs) represent a subcategory of emissions from fugitive leaks, variable area sources, malfunctioning processes, and improperly controlled operations. From the shared perspective of industries and communities, cost-effective detection of mitigable SIS emissions can yield benefits such as safer working environments, cost savings through reduced product loss, lower air shed pollutant impacts, and improved transparency and community relations. Methods for SIS detection can be categorized by their spatial regime of operation, ranging from component-level inspection to high-sensitivity kilometer-scale surveys. Methods can be temporally intensive (providing snapshot measures) or sustained, in both time-integrated and continuous forms. Each method category has demonstrated utility; however, broad adoption (or routine use) has thus far been limited by cost and implementation viability. Described here is a subset of SIS methods explored by the U.S. EPA's next generation emission measurement (NGEM) program that focuses on lower cost methods and models. An emerging systems approach that combines multiple forms to help compensate for the reduced performance of lower cost systems is discussed. A case study of a multi-day HAP emission event observed by a combination of low cost sensors, open-path spectroscopy, and passive samplers is detailed. Early field results of a novel field gas chromatograph coupled with a fast HAP concentration sensor are described. Progress toward near real-time inverse source triangulation assisted by pre-modeled facility profiles using the Los Alamos Quick Urban & Industrial Complex (QUIC) model is discussed.
A compact model for electroosmotic flows in microfluidic devices
NASA Astrophysics Data System (ADS)
Qiao, R.; Aluru, N. R.
2002-09-01
A compact model to compute flow rate and pressure in microfluidic devices is presented. The microfluidic flow can be driven by either an applied electric field or a combined electric field and pressure gradient. A step change in the ζ-potential on a channel wall is treated by a pressure source in the compact model. The pressure source is obtained from the pressure Poisson equation and conservation of mass principle. In the proposed compact model, the complex fluidic network is simplified by an electrical circuit. The compact model can predict the flow rate, pressure distribution and other basic characteristics in microfluidic channels quickly with good accuracy when compared to detailed numerical simulation. Using the compact model, fluidic mixing and dispersion control are studied in a complex microfluidic network.
NASA Astrophysics Data System (ADS)
Mende, Denis; Böttger, Diana; Löwer, Lothar; Becker, Holger; Akbulut, Alev; Stock, Sebastian
2018-02-01
The European power grid infrastructure faces various challenges due to the expansion of renewable energy sources (RES). To conduct investigations on interactions between power generation and the power grid, models for the power market as well as for the power grid are necessary. This paper describes the basic functionalities and working principles of both types of models, as well as the steps needed to couple power market results with the power grid model. The combination of these models is beneficial in terms of gaining realistic power-flow scenarios in the grid model and of being able to pass results of the power flow and its restrictions back to the market model. Focus is placed on the power grid model and on possible applications such as grid analysis algorithms, operation, and dynamic equipment modelling.
Piastra, Maria Carla; Nüßing, Andreas; Vorwerk, Johannes; Bornfleth, Harald; Oostenveld, Robert; Engwer, Christian; Wolters, Carsten H.
2018-01-01
In Electro- (EEG) and Magnetoencephalography (MEG), one important requirement of source reconstruction is the forward model. The continuous Galerkin finite element method (CG-FEM) has become one of the dominant approaches for solving the forward problem over the last decades. Recently, a discontinuous Galerkin FEM (DG-FEM) EEG forward approach has been proposed as an alternative to CG-FEM (Engwer et al., 2017). It was shown that DG-FEM preserves the property of conservation of charge and that it can, in certain situations such as the so-called skull leakages, be superior to the standard CG-FEM approach. In this paper, we developed, implemented, and evaluated two DG-FEM approaches for the MEG forward problem, namely a conservative and a non-conservative one. The subtraction approach was used as source model. The validation and evaluation work was done in statistical investigations in multi-layer homogeneous sphere models, where an analytic solution exists, and in a six-compartment realistically shaped head volume conductor model. In agreement with the theory, the conservative DG-FEM approach was found to be superior to the non-conservative DG-FEM implementation. This approach also showed convergence with increasing resolution of the hexahedral meshes. While in the EEG case, in presence of skull leakages, DG-FEM outperformed CG-FEM, in MEG, DG-FEM achieved similar numerical errors as the CG-FEM approach, i.e., skull leakages do not play a role for the MEG modality. In particular, for the finest mesh resolution of 1 mm sources with a distance of 1.59 mm from the brain-CSF surface, DG-FEM yielded mean topographical errors (relative difference measure, RDM%) of 1.5% and mean magnitude errors (MAG%) of 0.1% for the magnetic field. However, if the goal is a combined source analysis of EEG and MEG data, then it is highly desirable to employ the same forward model for both EEG and MEG data. Based on these results, we conclude that the newly presented conservative DG-FEM can at least complement and in some scenarios even outperform the established CG-FEM approaches in EEG or combined MEG/EEG source analysis scenarios, which motivates a further evaluation of DG-FEM for applications in bioelectromagnetism. PMID:29456487
Model Predictive Control of the Current Profile and the Internal Energy of DIII-D Plasmas
NASA Astrophysics Data System (ADS)
Lauret, M.; Wehner, W.; Schuster, E.
2015-11-01
For efficient and stable operation of tokamak plasmas it is important that the current density profile and the internal energy are jointly controlled by using the available heating and current-drive (H&CD) sources. The proposed approach is a version of nonlinear model predictive control in which the input set is restricted in size by the possible combinations of the H&CD on/off states. The controller uses real-time predictions over a receding-time horizon of both the current density profile (nonlinear partial differential equation) and the internal energy (nonlinear ordinary differential equation) evolutions. At every time instant the effect of every possible combination of H&CD sources on the current profile and internal energy is evaluated over the chosen time horizon. The combination that leads to the best result, as assessed by a user-defined cost function, is then applied up until the next time instant. Simulation results based on a control-oriented transport code illustrate the effectiveness of the proposed control method. Supported by the US DOE under DE-FC02-04ER54698 & DE-SC0010661.
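Restricting the input set to the on/off combinations of the H&CD sources makes the optimization combinatorial and therefore enumerable: each combination is rolled forward over the horizon and scored by the cost function. A toy sketch with invented lumped dynamics (the real controller propagates a PDE current profile plus an ODE energy balance):

```python
import itertools
import numpy as np

def mpc_step(x, source_effects, target, horizon=5, dt=0.1):
    """One receding-horizon step: enumerate every on/off combination of the
    H&CD sources, roll a toy lumped model of the state forward over the
    horizon, and return the combination with the lowest tracking cost.
    A sketch of the combinatorial input-set idea, not the DIII-D model."""
    best, best_cost = None, np.inf
    for combo in itertools.product([0, 1], repeat=len(source_effects)):
        xs, cost = np.array(x, float), 0.0
        drive = sum(u * e for u, e in zip(combo, source_effects))
        for _ in range(horizon):
            xs = xs + dt * (-0.5 * xs + drive)   # toy relaxation dynamics
            cost += np.sum((xs - target) ** 2)
        if cost < best_cost:
            best, best_cost = combo, cost
    return best

# Three hypothetical sources, each pushing the 2D state differently
effects = [np.array([1.0, 0.2]), np.array([0.1, 0.8]), np.array([0.5, 0.5])]
print(mpc_step([0.0, 0.0], effects, target=np.array([1.0, 0.6])))
```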
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics (location, strength and release period) are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model contain the source location and the release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is therefore trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs, and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one, and hence the complexity of the optimization model is reduced. The results show that our proposed linked ANN-Optimization model is able to predict the source parameters accurately for error-free data. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. The mean values predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
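Stripped to its essentials, the optimization half of the model searches source location and release time to minimize the misfit between observed and simulated concentrations; in the full model an ANN supplies the unknown lag time. A hedged sketch using a 1D analytical transport solution in place of the study's groundwater model (all parameter values invented):

```python
import numpy as np
from scipy.optimize import differential_evolution

def simulated_btc(x_obs, t, x_src, t_release, mass=1.0, v=1.0, D=0.5):
    """Analytical 1D advection-dispersion response at an observation well
    for an instantaneous point release (a stand-in for the study's 2D/3D
    groundwater model with an ANN-predicted lag time)."""
    tau = np.maximum(t - t_release, 1e-9)     # time since release
    return (mass / np.sqrt(4 * np.pi * D * tau)
            * np.exp(-((x_obs - x_src) - v * tau) ** 2 / (4 * D * tau)))

# Synthetic "observed" breakthrough curve: true source at x=2, released at t=1
t = np.linspace(0.5, 15, 60)
obs = simulated_btc(10.0, t, 2.0, 1.0)

# Optimization model: minimize the misfit over (location, release time)
res = differential_evolution(
    lambda p: np.sum((obs - simulated_btc(10.0, t, p[0], p[1])) ** 2),
    bounds=[(0.0, 8.0), (0.0, 5.0)], seed=0)
print(res.x)   # recovers approximately (2.0, 1.0)
```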
NASA Astrophysics Data System (ADS)
El Naqa, I.; Suneja, G.; Lindsay, P. E.; Hope, A. J.; Alaly, J. R.; Vicic, M.; Bradley, J. D.; Apte, A.; Deasy, J. O.
2006-11-01
Radiotherapy treatment outcome models are a complicated function of treatment, clinical and biological factors. Our objective is to provide clinicians and scientists with an accurate, flexible and user-friendly software tool to explore radiotherapy outcomes data and build statistical tumour control or normal tissue complications models. The software tool, called the dose response explorer system (DREES), is based on Matlab, and uses a named-field structure array data type. DREES/Matlab in combination with another open-source tool (CERR) provides an environment for analysing treatment outcomes. DREES provides many radiotherapy outcome modelling features, including (1) fitting of analytical normal tissue complication probability (NTCP) and tumour control probability (TCP) models, (2) combined modelling of multiple dose-volume variables (e.g., mean dose, max dose, etc) and clinical factors (age, gender, stage, etc) using multi-term regression modelling, (3) manual or automated selection of logistic or actuarial model variables using bootstrap statistical resampling, (4) estimation of uncertainty in model parameters, (5) performance assessment of univariate and multivariate analyses using Spearman's rank correlation and chi-square statistics, boxplots, nomograms, Kaplan-Meier survival plots, and receiver operating characteristics curves, and (6) graphical capabilities to visualize NTCP or TCP prediction versus selected variable models using various plots. DREES provides clinical researchers with a tool customized for radiotherapy outcome modelling. DREES is freely distributed. We expect to continue developing DREES based on user feedback.
Kooistra, Lammert; Bergsma, Aldo; Chuma, Beatus; de Bruin, Sytze
2009-01-01
This paper describes the development of a sensor web based approach which combines earth observation and in situ sensor data to derive typical information offered by a dynamic web mapping service (WMS). A prototype has been developed which provides daily maps of vegetation productivity for the Netherlands with a spatial resolution of 250 m. Daily available MODIS surface reflectance products and meteorological parameters obtained through a Sensor Observation Service (SOS) were used as input for a vegetation productivity model. This paper presents the vegetation productivity model, the sensor data sources and the implementation of the automated processing facility. Finally, an evaluation is made of the opportunities and limitations of sensor web based approaches for the development of web services which combine both satellite and in situ sensor sources. PMID:22574019
Investigation of a long time series of CO2 from a tall tower using WRF-SPA
NASA Astrophysics Data System (ADS)
Smallman, Luke; Williams, Mathew; Moncrieff, John B.
2013-04-01
Atmospheric observations from tall towers are an important source of information about CO2 exchange at the regional scale. Here, we have used a forward running model, WRF-SPA, to generate a time series of CO2 at a tall tower for comparison with observations from Scotland over multiple years (2006-2008). We use this comparison to infer the strength and distribution of carbon sources and sinks and ecosystem process information at the seasonal scale. The specific aim of this research is to combine a high-resolution (6 km) forward running meteorological model (WRF) with a modified version of a mechanistic ecosystem model (SPA). SPA provides surface fluxes calculated from coupled energy, hydrological and carbon cycles. This closely coupled representation of the biosphere provides realistic surface exchanges to drive mixing within the planetary boundary layer. The combined model is used to investigate the sources and sinks of CO2 and to explore which land surfaces contribute to a time series of hourly observations of atmospheric CO2 at a tall tower, Angus, Scotland. In addition to comparing the modelled CO2 time series to observations, modelled ecosystem-specific (i.e., forest, cropland, grassland) CO2 tracers (e.g., assimilation and respiration) have been compared to the modelled land-surface assimilation to investigate how representative tall-tower observations are of land-surface processes. The WRF-SPA modelled CO2 time series compares well with the observations (R² = 0.67, RMSE = 3.4 ppm, bias = 0.58 ppm). Through comparison of model-observation residuals, we have found evidence that non-cropped components of agricultural land (e.g., hedgerows and forest patches) likely have a significant and observable impact on the regional carbon balance.
NASA Astrophysics Data System (ADS)
Nozu, A.
2013-12-01
A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which is itself a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and assumed to follow the omega-square model. By multiplying the source spectrum by the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with the Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing the contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely longitude, latitude, depth, rupture time, seismic moment and corner frequency. The finite size of the subevent can be taken into account because its corner frequency, which is inversely proportional to the length of the subevent, is included in the model. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. The results were then compared with those of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered an example of a characterized source model. Although the pseudo point-source model involves far fewer model parameters than the super-asperity model, the errors associated with the former were comparable to those of the latter for velocity waveforms and envelopes, and much smaller for Fourier spectra. This evidence indicates the usefulness of the pseudo point-source model. Figure: comparison of the observed (black) and synthetic (red) Fourier spectra; the spectra combine the two horizontal components and are smoothed with a Parzen window of 0.05 Hz band width.
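The key simplification is that each subevent contributes an omega-square source spectrum controlled by a single corner frequency, which stands in for its finite size. A schematic sketch with hypothetical parameter values; note that the actual model sums subevent time histories with borrowed phases, so summing amplitude spectra, as done here, is only illustrative:

```python
import numpy as np

def subevent_source_spectrum(f, m0, fc):
    """Omega-square source displacement spectrum of one subevent:
    S(f) = M0 / (1 + (f/fc)^2). In the pseudo point-source model the
    Fourier amplitude at a site is this spectrum multiplied by path and
    site terms; phases are borrowed from a smaller event's record."""
    return m0 / (1.0 + (f / fc) ** 2)

f = np.logspace(-2, 1, 200)                    # 0.01-10 Hz
subevents = [(1.0e20, 0.08), (5.0e19, 0.12)]   # hypothetical (M0 [N m], fc [Hz])
total = sum(subevent_source_spectrum(f, m0, fc) for m0, fc in subevents)
# fc scales inversely with subevent length, so the finite extent of each
# subevent enters through this single parameter.
print(total[:3])
```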
Interactive graphic editing tools in bioluminescent imaging simulation
NASA Astrophysics Data System (ADS)
Li, Hui; Tian, Jie; Luo, Jie; Wang, Ge; Cong, Wenxiang
2005-04-01
It is a challenging task to accurately describe complicated biological tissues and bioluminescent sources in bioluminescent imaging simulation. Several graphic editing tools have been developed to efficiently model each part of the bioluminescent simulation environment and to interactively correct or improve the initial models of anatomical structures or bioluminescent sources. There are two major types of graphic editing tools: non-interactive and interactive. Geometric building blocks (i.e., regular geometric graphics and superquadrics) are applied as non-interactive tools. To a certain extent, complicated anatomical structures and bioluminescent sources can be approximately modeled by combining a sufficiently large number of geometric building blocks with Boolean operators. However, such models are too simple to describe local features and fine changes in 2D/3D irregular contours. Therefore, interactive graphic editing tools have been developed to facilitate local modifications of any initial surface model. Starting from initial models composed of geometric building blocks, an interactive spline mode is applied to conveniently perform dragging and compressing operations on the 2D/3D local surface of biological tissues and bioluminescent sources inside the region/volume of interest. Several applications of the interactive graphic editing tools are presented in this article.
An analysis of lamp irradiation in ellipsoidal mirror furnaces
NASA Astrophysics Data System (ADS)
Rivas, Damián; Vázquez-Espí, Carlos
2001-03-01
The irradiation generated by halogen lamps in ellipsoidal mirror furnaces is analyzed, in configurations suited to the study of the floating-zone technique for crystal growth in microgravity conditions. A line-source model for the lamp (instead of a point source) is developed, so that the longitudinal extent of the filament is taken into account. With this model the case of defocussed lamps can be handled analytically. In the model the lamp is formed by an aggregate of point-source elements placed along the axis of the ellipsoid. For these point sources (which, in general, are defocussed) an irradiation model is formulated within the approximation of geometrical optics. The irradiation profiles obtained (both on the lateral surface and on the inner base of the cylindrical sample) are analyzed. They present singularities related to the caustics formed by the family of reflected rays; these caustics are also analyzed. The lamp model is combined with a conduction-radiation model to study the temperature field in the sample. The effects of defocussing the lamp (common practice in crystal growth) are studied; advantages and also some drawbacks are pointed out. Comparison with experimental results is made.
Pandian, Suresh; Gokhale, Sharad; Ghoshal, Aloke Kumar
2011-02-15
A double-lane four-arm roundabout, where traffic movement is continuous in opposite directions and at different speeds, produces a zone responsible for recirculation of emissions within a road section, creating a canyon-type effect. In this zone, thermally induced turbulence together with vehicle wakes dominates over wind-driven turbulence, causing pollutant emissions to recirculate and resulting in roughly equal amounts of pollutants upwind and downwind, particularly during low winds. Beyond this region, however, the effect of wind becomes stronger, causing downwind movement of pollutants. Dispersion governed by such phenomena cannot be described accurately by an open-terrain line source model alone. This is demonstrated by estimating one-minute average carbon monoxide concentrations by coupling an open-terrain line source model with a street canyon model, which captures the combined effect and describes the dispersion at a non-signalized roundabout. The modeled results matched the measurements better than those of the line source model alone, and the prediction error was reduced by about 50%. The study further demonstrated this with traffic emissions calculated by field and semi-empirical methods. Copyright © 2010 Elsevier B.V. All rights reserved.
Integrating multiple data sources in species distribution modeling: A framework for data fusion
Pacifici, Krishna; Reich, Brian J.; Miller, David A.W.; Gardner, Beth; Stauffer, Glenn E.; Singh, Susheela; McKerrow, Alexa; Collazo, Jaime A.
2017-01-01
The last decade has seen a dramatic increase in the use of species distribution models (SDMs) to characterize patterns of species’ occurrence and abundance. Efforts to parameterize SDMs often create a tension between the quality and quantity of data available to fit models. Estimation methods that integrate both standardized and non-standardized data types offer a potential solution to the tradeoff between data quality and quantity. Recently several authors have developed approaches for jointly modeling two sources of data (one of high quality and one of lesser quality). We extend their work by allowing for explicit spatial autocorrelation in occurrence and detection error using a Multivariate Conditional Autoregressive (MVCAR) model, and we develop three models that share information in a less direct manner, resulting in more robust performance when the auxiliary data are of lesser quality. We describe these three new approaches (“Shared,” “Correlation,” “Covariates”) for combining data sources and show their use in a case study of the Brown-headed Nuthatch in the Southeastern U.S. and through simulations. All three approaches that used the second data source improved out-of-sample predictions relative to a single data source (“Single”). When information in the second data source is of high quality, the Shared model performs best, but the Correlation and Covariates models also perform well. When the information in the second data source is of lesser quality, the Correlation and Covariates models performed better, suggesting they are robust alternatives when little is known about auxiliary data collected opportunistically or through citizen scientists. Methods that allow both data types to be used will maximize the useful information available for estimating species distributions.
NASA Astrophysics Data System (ADS)
Park, K.; Emmons, L. K.; Mak, J. E.
2007-12-01
Carbon monoxide is not only an important component for determining the atmospheric oxidizing capacity but also a key trace gas in the atmospheric chemistry of the Earth's background environment. The global CO cycle and its changes are closely related to both changes in CO mixing ratio and changes in source strength. Previously, most top-down estimates of the global CO budget have used CO concentrations alone. Since CO from certain sources has a unique isotopic signature, its isotopes provide additional information to constrain those sources. Coupling concentration and isotope-ratio information therefore constrains CO fluxes by source more tightly and allows better estimates of the global CO budget. MOZART4 (Model for Ozone And Related chemical Tracers), a 3-D global chemical transport model developed at NCAR, MPI for Meteorology, and NOAA/GFDL, is used to simulate the global CO concentration and its isotopic signature. A tracer version of MOZART4, with C16O and C18O tagged by region and source, was also developed to assess their contributions to the atmosphere efficiently. Based on nine-year simulation results, we analyze the influence of each CO source on the isotopic signature and the concentration. The evaluations focus especially on the oxygen isotope of CO (δ18O), which has not yet been studied extensively. To validate model performance, CO concentrations and isotopic signatures measured by MPI, NIWA, and our laboratory are compared to the modeled results. MOZART4 reproduced the observational data fairly well, especially in the mid- to high-latitude northern hemisphere. Bayesian inversion techniques have been used to estimate the global CO budget by combining observed and modeled CO concentrations. However, previous studies show significant differences in their estimates of CO source strengths. Because isotopic signatures are independent tracers that contain source information in addition to the CO mixing ratio, jointly applying isotope and concentration information is expected to provide more precise optimization of the CO budget. Our accumulated long-term CO isotope measurements also lend more confidence to the inversions. Beyond the benefit of adding isotope data to the inverse modeling, each CO isotope (oxygen and carbon) offers a further advantage in top-down estimation of the CO budget: δ18O and δ13C each carry distinctive source signatures. Combustion sources such as fossil fuel use show clearly different δ18O values from other, natural sources, and the methane source can be separated easily using δ13C information. Therefore, inversions of the two major CO sources respond with different sensitivity to the different isotopes. To maximize the strengths of isotope data in the inverse modeling analysis, various coupling schemes combining [CO], δ18O and δ13C have been investigated to enhance the credibility of the CO budget optimization.
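To make the joint-constraint idea concrete, a toy inversion can be sketched: a forward model maps two source strengths to a station's CO mixing ratio and δ18O via isotope mass balance, and a crude Bayesian search finds the source strengths that best fit both observations. All numbers (signatures, transport coefficients, errors) are invented for illustration.

```python
import numpy as np

# Hypothetical two-source example: fossil-fuel and biogenic CO at one station.
d18O_sig = np.array([23.5, 0.0])   # source d18O signatures (per mil, illustrative)
K = np.array([0.8, 0.5])           # made-up transport operator: ppb per (Tg/yr)

def forward(s):
    """Forward model: source strengths s -> (CO mixing ratio, d18O of CO)."""
    co = K @ s                            # total CO at the station
    d18o = (K * d18O_sig) @ s / co        # isotope mass balance (delta-weighted mean)
    return np.array([co, d18o])

# Synthetic observations and a brute-force Bayesian (prior-penalised) search:
obs = np.array([120.0, 8.0])              # [ppb, per mil], hypothetical
s_prior = np.array([100.0, 80.0])
best, cost = s_prior, np.inf
for s0 in np.random.default_rng(0).uniform(1.0, 300.0, size=(20000, 2)):
    r = (forward(s0) - obs) / np.array([5.0, 0.5])   # observation errors
    b = (s0 - s_prior) / 50.0                        # prior uncertainty
    c = r @ r + b @ b
    if c < cost:
        best, cost = s0, c
print("posterior source strengths (Tg/yr):", best)
```

The isotope observation breaks the degeneracy between the two sources that a concentration-only inversion would leave unresolved.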
NASA Astrophysics Data System (ADS)
Schilling, Oliver S.; Gerber, Christoph; Partington, Daniel J.; Purtschert, Roland; Brennwald, Matthias S.; Kipfer, Rolf; Hunkeler, Daniel; Brunner, Philip
2017-12-01
To provide a sound understanding of the sources, pathways, and residence times of groundwater in alluvial river-aquifer systems, a combined multitracer and modeling experiment was carried out in an important alluvial drinking-water wellfield in Switzerland. 222Rn, 3H/3He, atmospheric noble gases, and the novel 37Ar method were used to quantify residence times and mixing ratios of water from different sources. With a half-life of 35.1 days, 37Ar successfully closed a critical observational time gap between 222Rn and 3H/3He for residence times of weeks to months. Covering the entire range of groundwater residence times in alluvial systems revealed that atmospheric noble gases and helium isotopes are tracers well suited to end-member mixing analysis for quantifying the fractions of water from different sources in such systems. A comparison between the tracer-based mixing ratios and mixing ratios simulated with a fully integrated, physically based flow model showed that models calibrated only against hydraulic heads cannot reliably reproduce mixing ratios or residence times of alluvial river-aquifer systems. However, the tracer-based mixing ratios allowed the identification of an appropriate flow model parametrization. Consequently, for alluvial systems, we recommend combining multitracer studies that cover all relevant residence times with fully coupled, physically based flow modeling to better characterize the complex interactions of river-aquifer systems.
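End-member mixing analysis of the kind used here reduces to a small linear system: each conservative tracer contributes one mass-balance equation, and the fractions must sum to one. The sketch below uses invented end-member and sample concentrations.

```python
import numpy as np

# Columns = end-members (e.g. river infiltrate, regional groundwater, hillslope);
# rows = conservative tracers (e.g. noble-gas derived quantities). Values invented.
A = np.array([[1.00, 0.20, 0.55],    # tracer 1 end-member concentrations
              [0.05, 0.90, 0.40]])   # tracer 2 end-member concentrations
c_sample = np.array([0.72, 0.33])    # measured mixed-sample concentrations

# Append the mass-balance constraint sum(f_i) = 1 and solve by least squares.
A_aug = np.vstack([A, np.ones(3)])
b_aug = np.append(c_sample, 1.0)
f, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
print("mixing fractions:", f)  # enforce non-negativity in practice, e.g. scipy.optimize.nnls
```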
Large Dataset of Acute Oral Toxicity Data Created for Testing ...
Acute toxicity data is a common requirement for substance registration in the US. Currently only data derived from animal tests are accepted by regulatory agencies, and the standard in vivo tests use lethality as the endpoint. Non-animal alternatives such as in silico models are being developed due to animal welfare and resource considerations. We compiled a large dataset of oral rat LD50 values to assess the predictive performance of currently available in silico models. Our dataset combines LD50 values from five different sources: literature data provided by The Dow Chemical Company, REACH data from eChemportal, HSDB (Hazardous Substances Data Bank), RTECS data from Leadscope, and the training set underpinning TEST (Toxicity Estimation Software Tool). Combined, these data sources yield 33848 chemical-LD50 pairs (data points), with 23475 unique data points covering 16439 compounds. The entire dataset was loaded into a chemical properties database. All of the compounds were registered in DSSTox and 59.5% have publicly available structures. Compounds without a structure in DSSTox are currently having their structures registered. The structural data will be used to evaluate the predictive performance and applicable chemical domains of three QSAR models (TIMES, PROTOX, and TEST). Future work will combine the dataset with information from ToxCast assays and, using random forest modeling, assess whether ToxCast assays are useful in predicting acute oral toxicity.
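As an illustration of the data-combination step, a short pandas sketch: concatenate per-source chemical-LD50 pairs, drop exact duplicates to obtain unique data points, and derive a per-compound consensus value. The frames and column names are hypothetical, not the study's actual schema.

```python
import pandas as pd

# Tiny illustrative frames; the real sources are Dow, REACH, HSDB, RTECS, TEST.
df_a = pd.DataFrame({"CASRN": ["50-00-0", "64-17-5"], "LD50_mg_kg": [500.0, 7060.0]})
df_b = pd.DataFrame({"CASRN": ["50-00-0", "71-43-2"], "LD50_mg_kg": [500.0, 930.0]})

combined = pd.concat([df_a.assign(source="A"), df_b.assign(source="B")],
                     ignore_index=True)                           # chemical-LD50 pairs
unique_pairs = combined.drop_duplicates(["CASRN", "LD50_mg_kg"])  # unique data points
consensus = unique_pairs.groupby("CASRN")["LD50_mg_kg"].median()  # per-compound value
print(len(combined), len(unique_pairs), len(consensus))
```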
Ling, Z H; Guo, H; Cheng, H R; Yu, Y F
2011-10-01
The Positive Matrix Factorization (PMF) receptor model and the Observation Based Model (OBM) were combined to analyze volatile organic compound (VOC) data collected at a suburban site (WQS) in the PRD region. The purposes were to estimate the VOC source apportionment and to investigate the contributions of these sources, and of the species within them, to O3 formation in the PRD. Ten VOC sources were identified. We further applied the PMF-extracted concentrations of these 10 sources in the OBM and found that "solvent usage 1", "diesel vehicular emissions" and "biomass/biofuel burning" contributed most to O3 formation at WQS. Among these three sources, the higher Relative Incremental Reactivity (RIR)-weighted values of ethene, toluene and m/p-xylene indicated that these species were mainly responsible for local O3 formation in the region. Sensitivity analysis revealed that the sources "diesel vehicular emissions", "biomass/biofuel burning" and "solvent usage 1" had low uncertainties, whereas "gasoline evaporation" showed the highest uncertainty. Copyright © 2011 Elsevier Ltd. All rights reserved.
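PMF proper minimizes an uncertainty-weighted objective with dedicated software; as a rough structural analogue only, plain non-negative matrix factorization shows the same decomposition of a samples-by-species matrix into source contributions and source profiles. The data matrix and factor count below are invented.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
X = rng.gamma(2.0, 1.0, size=(500, 24))   # hypothetical samples x VOC species matrix

# X ~ G @ F with all entries non-negative; PMF additionally weights residuals
# by measurement uncertainties, which plain NMF does not.
model = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # source contributions per sample
F = model.components_        # source profiles (species fingerprints)

# G could then be fed to an observation-based model to rank sources by their
# ozone formation potential, as the study does with the PMF output.
print(G.shape, F.shape)
```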
NASA Astrophysics Data System (ADS)
Nagasaka, Yosuke; Nozu, Atsushi
2017-02-01
The pseudo point-source model approximates the rupture process on a fault with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with MJMA 7.3. Many strong motions were recorded at stations around the source region, and some records were considered to be affected by the rupture directivity effect. This earthquake was therefore suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. The synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to two causes. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second is that the current version of the model does not consider the rupture directivity effect. Consequently, strong pulses were not sufficiently reproduced at stations northeast of Subevent 3, such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most other stations. This result indicates the need to improve the pseudo point-source model, for example by introducing an azimuth-dependent corner frequency, so that it can incorporate the effect of rupture directivity.
Matawle, Jeevan Lal; Pervez, Shamsh; Deb, Manas Kanti; Shrivastava, Anjali; Tiwari, Suresh
2018-02-01
USEPA's UNMIX, positive matrix factorization (PMF) and effective variance-chemical mass balance (EV-CMB) receptor models were applied to chemically speciated profiles of 125 indoor PM2.5 measurements, sampled longitudinally during 2012-2013 in low-income-group households of Central India that use solid fuels for cooking. A three-step source apportionment study was carried out to generate more confident source characterization. First, UNMIX6.0 extracted an initial number of source factors, which were used to execute PMF5.0 to extract source-factor profiles in the second step. Finally, locally derived source profiles analogous to the factors were supplied to EV-CMB8.2, together with the indoor receptor PM2.5 chemical profile, to evaluate source contribution estimates (SCEs). The results of the combined use of the three receptor models clearly show that UNMIX and PMF are useful tools for identifying source categories within a small receptor dataset, and that EV-CMB can pick those locally derived source profiles for source apportionment that are analogous to the PMF-extracted source categories. The source apportionment results also showed a threefold higher relative contribution of solid-fuel burning emissions to indoor PM2.5 compared with measurements reported for normal households with LPG stoves. The previously reported influential source marker species were found to be comparable to those extracted from PMF fingerprint plots. The PMF and CMB SCE results were also qualitatively similar. The performance fit measures of all three receptor models were cross-verified and validated, and support each other, lending confidence to the source apportionment results.
Combined rule extraction and feature elimination in supervised classification.
Liu, Sheng; Patel, Ronak Y; Daga, Pankaj R; Liu, Haining; Fu, Gang; Doerksen, Robert J; Chen, Yixin; Wilkins, Dawn E
2012-09-01
There are a vast number of biology-related research problems involving the combination of multiple sources of data to achieve a better understanding of the underlying problems. It is important to select and interpret the most important information from these sources. Thus it is beneficial to have a good algorithm that simultaneously extracts rules and selects features for better interpretation of the predictive model. We propose an efficient algorithm, Combined Rule Extraction and Feature Elimination (CRF), based on 1-norm regularized random forests. CRF simultaneously extracts a small number of the rules generated by random forests and selects important features. We applied CRF to several drug activity prediction and microarray data sets. CRF is capable of producing performance comparable with state-of-the-art prediction algorithms using a small number of decision rules. Some of the decision rules are biologically significant.
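CRF itself builds on 1-norm regularized random forests; the sketch below is only a rough RuleFit-style analogue of that idea, not the authors' algorithm. Leaves of a small forest are treated as candidate conjunctive rules, and an L1-penalized logistic regression keeps a sparse subset. The dataset and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Step 1: grow a small random forest; each leaf defines a conjunctive rule.
forest = RandomForestClassifier(n_estimators=20, max_depth=3, random_state=0).fit(X, y)

# Step 2: encode each sample by the leaves it falls into (one binary indicator
# per leaf = one rule), then select a few rules with an L1 penalty.
leaves = forest.apply(X)                      # (n_samples, n_trees) leaf ids
cols = []
for t in range(leaves.shape[1]):
    for leaf_id in np.unique(leaves[:, t]):
        cols.append(leaves[:, t] == leaf_id)  # rule indicator feature
R = np.column_stack(cols).astype(float)

selector = LogisticRegression(penalty="l1", C=0.05, solver="liblinear").fit(R, y)
n_kept = np.count_nonzero(selector.coef_)
print(f"{R.shape[1]} candidate rules -> {n_kept} retained")
```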
Enhanced light absorption by mixed source black and brown carbon particles in UK winter
Liu, Shang; Aiken, Allison C.; Gorkowski, Kyle; Dubey, Manvendra K.; Cappa, Christopher D.; Williams, Leah R.; Herndon, Scott C.; Massoli, Paola; Fortner, Edward C.; Chhabra, Puneet S.; Brooks, William A.; Onasch, Timothy B.; Jayne, John T.; Worsnop, Douglas R.; China, Swarup; Sharma, Noopur; Mazzoleni, Claudio; Xu, Lu; Ng, Nga L.; Liu, Dantong; Allan, James D.; Lee, James D.; Fleming, Zoë L.; Mohr, Claudia; Zotter, Peter; Szidat, Sönke; Prévôt, André S. H.
2015-01-01
Black carbon (BC) and light-absorbing organic carbon (brown carbon, BrC) play key roles in warming the atmosphere, but the magnitude of their effects remains highly uncertain. Theoretical modelling and laboratory experiments demonstrate that coatings on BC can enhance BC's light absorption; therefore, many climate models simply assume enhanced BC absorption by a factor of ∼1.5. However, recent field observations show negligible absorption enhancement, implying models may overestimate BC's warming. Here we report direct evidence of substantial field-measured BC absorption enhancement, with the magnitude strongly depending on the amount of BC coating. Increases in BC coating result from a combination of changing sources and photochemical aging processes. When the influence of BrC is accounted for, observationally constrained model calculations of the BC absorption enhancement can be reconciled with the observations. We conclude that the influence of coatings on BC absorption should be treated as a source- and region-specific parameter in climate models. PMID:26419204
NASA Astrophysics Data System (ADS)
Wittmer, I. K.; Bader, H.-P.; Scheidegger, R.; Stamm, C.
2016-02-01
During rain events, biocides and plant protection products are transported from agricultural fields, but also from urban sources, to surface waters. Originally designed to be biologically active, these compounds may harm organisms in aquatic ecosystems. Although several models allow either urban or agricultural storm events to be predicted, only a few combine these two sources, and none of them include biocide losses from building envelopes. This study therefore aims to develop a model designed to predict water and substance flows from urban and agricultural sources to surface waters. We developed a model based on physical principles for water percolation and substance flow, including micro- (also called matrix-) and macropore flows, for the agricultural areas, together with a model representing sources, sewer systems and a wastewater treatment plant for the urban areas. In a second step, the combined model was applied to a catchment where an extensive field study had been conducted. The modelled and measured discharge and compound results corresponded reasonably well in terms of quantity and dynamics. The total cumulative discharge was only slightly lower than the total measured discharge (factor 0.94). The total modelled losses of the agriculturally used herbicide atrazine were slightly lower (∼25%) than the measured losses when the soil pore-water distribution coefficient Kd (describing the partitioning between soil particles and pore water) was kept constant, and slightly higher if it increased with time. The modelled urban losses of diuron from facades were within a factor of three of the measured values. The results highlighted the change in importance of the flow components during a rain event, from urban sources during the most intensive rain period towards agricultural ones over a prolonged time period. Applications to two other catchments, one neighbouring and one on another continent, showed that the model can be applied using site-specific data for land use, pesticide application and weather, and literature data for soil-related parameters such as saturated water content, hydraulic conductivity or lateral distances of the drainage pipes, without any further calibration of parameters. This is a promising basis for using the model in a wide range of catchments.
Combining harmonic generation and laser chirping to achieve high spectral density in Compton sources
Terzić, Balša; Reeves, Cody; Krafft, Geoffrey A.
2016-04-25
Recently various laser-chirping schemes have been investigated with the goal of reducing or eliminating ponderomotive line broadening in Compton or Thomson scattering occurring at high laser intensities. Moreover, as a next level of detail in the spectrum calculations, we have calculated the line smoothing and broadening expected due to incident beam energy spread within a one-dimensional plane wave model for the incident laser pulse, both for compensated (chirped) and unchirped cases. The scattered compensated distributions are treatable analytically within three models for the envelope of the incident laser pulses: Gaussian, Lorentzian, or hyperbolic secant. We use the new results to demonstrate that laser chirping in Compton sources at high laser intensities: (i) enables the use of higher order harmonics, thereby reducing the required electron beam energies; and (ii) increases the photon yield in a small frequency band beyond that possible with the fundamental without chirping. We found that this combination of chirping and higher harmonics can lead to substantial savings in the design, construction and operational costs of the new Compton sources. This is of particular importance to the widely popular laser-plasma accelerator based Compton sources, as the improvement in their beam quality enters the regime where chirping is most effective.
Minnehaha Creek Watershed SWMM5 Model Data Analysis and Future Recommendations
2013-07-01
comprehensive inventory of data inconsistencies without a source data inventory. To solve this problem, MCWD needs to develop a detailed, georeferenced, GIS...LMCW models, USACE recommends that MCWD keep the SWMM5 models separated instead of combining them into one comprehensive SWMM5 model for the entire...SWMM5 geometry. SWMM5 offers three routing methods: steady flow, kinematic wave, and dynamic wave. Each method offers advantages and disadvantages and
An analytic model of axisymmetric mantle plume due to thermal and chemical diffusion
NASA Technical Reports Server (NTRS)
Liu, Mian; Chase, Clement G.
1990-01-01
An analytic model of axisymmetric mantle plumes driven by either thermal diffusion or combined diffusion of both heat and chemical species from a point source is presented. The governing equations are solved numerically in cylindrical coordinates for a Newtonian fluid with constant viscosity. Instead of starting from an assumed plume source, constraints on the source parameters, such as the depth of the source regions and the total heat input from the plume sources, are deduced using the geophysical characteristics of mantle plumes inferred from modelling of hotspot swells. The Hawaiian hotspot and the Bermuda hotspot are used as examples. Narrow mantle plumes are expected for likely mantle viscosities. The temperature anomaly and the size of thermal plumes underneath the lithosphere can be sensitive indicators of plume depth. The Hawaiian plume is likely to originate at a much greater depth than the Bermuda plume. One suggestive result puts the Hawaiian plume source at a depth near the core-mantle boundary and the source of the Bermuda plume in the upper mantle, close to the 700 km discontinuity. The total thermal energy input by the source region to the Hawaiian plume is about 5 × 10^10 W. The corresponding diameter of the source region is about 100 to 150 km. Chemical diffusion from the same source does not affect the thermal structure of the plume.
Huang, Kuixian; Luo, Xingzhang
2018-01-01
The purpose of this study is to characterize the contamination by trace metals in soils and apportion their potential sources in Northern China, to provide a scientific basis for soil environment management and pollution control. A data set of 12 metal elements in surface soil samples was collected. The enrichment factor and geoaccumulation index were used to identify the general geochemical characteristics of trace metals in the soils. The UNMIX and positive matrix factorization (PMF) models were comparatively applied to apportion their potential sources. Furthermore, geostatistical tools were used to study the spatial distribution of pollution characteristics and to identify the regions affected by the sources derived from the apportionment models. The soils were contaminated by Cd, Hg, Pb and Zn to varying degrees. Industrial activities, agricultural activities and natural sources were identified as the potential sources determining the contents of trace metals in soils, with contributions of 24.8%–24.9%, 33.3%–37.2% and 38.0%–41.8%, respectively. The slightly different results obtained from UNMIX and PMF might be caused by the estimations of uncertainty and the different algorithms within the models. PMID:29474412
NASA Astrophysics Data System (ADS)
Clark, David A.
2012-09-01
Acquisition of magnetic gradient tensor data is likely to become routine in the near future. New methods for inverting gradient tensor surveys to obtain source parameters have been developed for several elementary, but useful, models. These include point dipole (sphere), vertical line of dipoles (narrow vertical pipe), line of dipoles (horizontal cylinder), thin dipping sheet, and contact models. A key simplification is the use of eigenvalues and associated eigenvectors of the tensor. The normalised source strength (NSS), calculated from the eigenvalues, is a particularly useful rotational invariant that peaks directly over 3D compact sources, 2D compact sources, thin sheets and contacts, and is independent of magnetisation direction. In combination the NSS and its vector gradient determine source locations uniquely. NSS analysis can be extended to other useful models, such as vertical pipes, by calculating eigenvalues of the vertical derivative of the gradient tensor. Inversion based on the vector gradient of the NSS over the Tallawang magnetite deposit obtained good agreement between the inferred geometry of the tabular magnetite skarn body and drill hole intersections. Besides the geological applications, the algorithms for the dipole model are readily applicable to the detection, location and characterisation (DLC) of magnetic objects, such as naval mines, unexploded ordnance, shipwrecks, archaeological artefacts, and buried drums.
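For readers who want to experiment, the NSS can be computed directly from the tensor eigenvalues. The sketch below uses the definition common in the gradient-tensor literature, mu = sqrt(-lam2^2 - lam1*lam3) for eigenvalues lam1 >= lam2 >= lam3; the example tensor values are invented.

```python
import numpy as np

def normalised_source_strength(G):
    """NSS of a magnetic gradient tensor G (3x3, symmetric, traceless).

    Common definition: mu = sqrt(-lam2**2 - lam1*lam3), eigenvalues ordered
    lam1 >= lam2 >= lam3. Peaks directly over compact sources and is
    independent of magnetisation direction."""
    lam = np.linalg.eigvalsh(0.5 * (G + G.T))   # ascending order
    lam3, lam2, lam1 = lam
    return np.sqrt(max(-lam2**2 - lam1 * lam3, 0.0))

# Dipole-like example tensor (nT/m, illustrative values only; trace is zero):
G = np.array([[ 2.0,  0.3, -0.1],
              [ 0.3, -1.2,  0.4],
              [-0.1,  0.4, -0.8]])
print(normalised_source_strength(G))
```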
Microbial Source Module (MSM): Documenting the Science ...
The Microbial Source Module (MSM) estimates microbial loading rates to land surfaces from non-point sources, and to streams from point sources, for each subwatershed within a watershed. A subwatershed, the smallest modeling unit, represents the common basis for information consumed and produced by the MSM, which is based on the HSPF (Bicknell et al., 1997) Bacterial Indicator Tool (EPA, 2013b, 2013c). Non-point sources include numbers, locations, and shedding rates of domestic agricultural animals (dairy and beef cows, swine, poultry, etc.) and wildlife (deer, duck, raccoon, etc.). Monthly maximum microbial storage and accumulation rates on the land surface, adjusted for die-off, are computed over an entire season for four land-use types (cropland, pasture, forest, and urbanized/mixed-use) for each subwatershed. Monthly point-source microbial loadings to instream locations (i.e., stream segments that drain individual subwatersheds) are combined and determined for septic systems, direct instream shedding by cattle, and POTWs/WWTPs (Publicly Owned Treatment Works/Wastewater Treatment Plants). The MSM functions within a larger modeling system that characterizes human-health risk resulting from ingestion of water contaminated with pathogens. The loading estimates produced by the MSM are input to the HSPF model, which simulates flow and microbial fate/transport within a watershed. Microbial counts within recreational waters are then input to the MRA-IT model (Soller et
DOE Office of Scientific and Technical Information (OSTI.GOV)
Middleton, Richard Stephen
2017-05-22
This presentation is part of the US-China Clean Coal project and describes the impact of power plant cycling, techno-economic modeling of combined IGCC and CCS, integrated capacity generation decision making for power utilities, and a new decision support tool for integrated assessment of CCUS.
NASA Astrophysics Data System (ADS)
Guttikunda, S. K.; Johnson, T. M.; Procee, P.
2004-12-01
Fossil fuel combustion for domestic cooking and heating, power generation, industrial processes, and motor vehicles is the primary source of air pollution in developing-country cities. Over the past twenty years, major advances have been made in understanding the social and economic consequences of air pollution. In both industrialized and developing countries, it has been shown that air pollution from energy combustion has detrimental impacts on human health and the environment. Lack of information on the sectoral contributions to air pollution, especially fine particulates, is one of the typical constraints on an effective integrated urban air quality management program. Without such information, it is difficult, if not impossible, for decision makers to provide policy advice and make informed investment decisions related to air quality improvements in developing countries. This also raises the need for low-cost ways of determining the principal sources of fine PM for proper planning and decision making. The project objective is to develop and verify a methodology to assess and monitor the sources of PM, using a combination of ground-based monitoring and source apportionment techniques. This presentation focuses on four general tasks: (1) review of the science and current activities in the combined use of monitoring data and modeling for a better understanding of PM pollution; (2) review of recent advances in atmospheric source apportionment techniques (e.g., principal component analysis, organic markers, source-receptor modeling techniques); (3) development of a general methodology to use integrated top-down and bottom-up datasets; and (4) review of a series of current case studies from Africa, Asia and Latin America and the methodologies applied to assess air pollution and its sources.
NASA Technical Reports Server (NTRS)
Creamean, J. M.; Ault, A. P.; White, A. B.; Neiman, P. J.; Ralph, F. M.; Minnis, Patrick; Prather, K. A.
2014-01-01
Aerosols that serve as cloud condensation nuclei (CCN) and ice nuclei (IN) have the potential to profoundly influence precipitation processes. Furthermore, changes in orographic precipitation have broad implications for reservoir storage and flood risks. As part of the CalWater I field campaign (2009-2011), the impacts of aerosol sources on precipitation were investigated in the California Sierra Nevada. In 2009, the precipitation collected on the ground was influenced by both local biomass burning (up to 79% of the insoluble residues found in precipitation) and long-range transported dust and biological particles (up to 80% combined); in 2010, by mostly local sources of biomass burning and pollution (30-79% combined); and in 2011, by mostly long-range transport from distant sources (up to 100% dust and biological). Although vast differences in the sources of residues were observed from year to year, dust and biological residues were omnipresent (on average, 55% of the total residues combined) and were associated with storms consisting of deep convective cloud systems and larger quantities of precipitation initiated in the ice phase. Further, biological residues were dominant during storms with relatively warm cloud temperatures (up to -15 °C), suggesting these particles were more efficient IN compared with mineral dust. On the other hand, lower percentages of residues from local biomass burning and pollution were observed (on average 31% and 9%, respectively), yet these residues potentially served as CCN at the base of shallow cloud systems when precipitation quantities were low. The direct connection between the source of aerosols within clouds and precipitation type and quantity can be used in models to better assess how local emissions versus long-range transported dust and biological aerosols impact regional weather and climate, ultimately with the goal of more accurate predictive weather forecast models and water resource management.
Brennan, Angela K.; Hoard, Christopher J.; Duris, Joseph W.; Ogdahl, Mary E.; Steinman, Alan D.
2016-01-29
Simulations also were run using the BATHTUB model to evaluate the number of days Silver Lake could experience algal blooms (defined as modeled chlorophyll a in excess of 10 micrograms per liter [µg/L]) as a result of an increase or decrease in phosphorus and nitrogen loading from groundwater, Hunter Creek, or a combination of sources. If the phosphorus and nitrogen loading from Hunter Creek is decreased (and all other sources are not altered), Silver Lake will continue to experience algal blooms, but less frequently than at present. The same holds true if the nutrient loading from groundwater is decreased. Another scenario combined increases and decreases in phosphorus and nitrogen loading from the sources most likely to be managed: groundwater (as a result of conversion of household septic systems to sewers), Hunter Creek (conversion of household septic systems to sewers), and lawn runoff. Results of the BATHTUB model indicated that a 50-percent reduction of phosphorus and nitrogen from these sources would considerably decrease algal bloom frequency (from 231 to 132 days) and severity, and a 75-percent reduction would greatly reduce algal bloom occurrence on Silver Lake (from 231 to 57 days). BATHTUB model scenarios based on the septic load model: a scenario also was conducted using the BATHTUB model to simulate the conversion of septic systems to sewers and included low, high, and medium (likely) scenarios of nutrient loading to Silver Lake. Simulations indicated that, under the likely scenario, the conversion of all onsite septic treatment to sewers would change the overall lake trophic status from eutrophic to mesotrophic, thereby reducing the frequency and intensity of algal blooms on Silver Lake (chlorophyll a >10 µg/L, from 231 to 184 days per year, or chlorophyll a >20 µg/L, from 80 to 49 days per year).
NASA Astrophysics Data System (ADS)
Arcavi, Iair
2018-03-01
The kilonova associated with GW170817 displayed early blue emission, which has been interpreted as a signature of either radioactive decay in low-opacity ejecta, relativistic boosting of radioactive decay in high-velocity ejecta, the cooling of material heated by a wind or by a “cocoon” surrounding a jet, or a combination thereof. Distinguishing between these mechanisms is important for constraining the ejecta components and their parameters, which tie directly into the physics we can learn from these events. I compile published ultraviolet, optical, and infrared light curves of the GW170817 kilonova and examine whether the combined data set can be used to distinguish between early-emission models. The combined optical data show an early rise consistent with radioactive decay of low-opacity ejecta as the main emission source, but the subsequent decline is fit well by all models. A lack of constraints on the ultraviolet flux during the first few hours after discovery allows for both radioactive decay and other cooling mechanisms to explain the early bolometric light curve. This analysis demonstrates that early (few hours after merger) high-cadence optical and ultraviolet observations will be critical for determining the source of blue emission in future kilonovae.
Long-Term Temporal Trends of Polychlorinated Biphenyls and Their Controlling Sources in China.
Zhao, Shizhen; Breivik, Knut; Liu, Guorui; Zheng, Minghui; Jones, Kevin C; Sweetman, Andrew J
2017-03-07
Polychlorinated biphenyls (PCBs) are industrial organic contaminants identified as persistent, bioaccumulative, toxic (PBT), and subject to long-range transport (LRT) with global-scale significance. This study focuses on a reconstruction and prediction for China of long-term emission trends of intentionally and unintentionally produced (UP) Σ7 PCBs (UP-PCBs arising from the manufacture of steel, cement and sinter iron) and of their re-emissions from secondary sources (e.g., soils and vegetation), using a dynamic fate model (BETR-Global). Contemporary emission estimates combined with predictions from the multimedia fate model suggest that primary sources still dominate, although unintentional sources are predicted to become a main contributor for PCB-28 from 2035. Imported e-waste is predicted to play an increasing role until 2020-2030 on a national scale due to the decline of intentionally produced (IP) emissions. Hypothetical emission scenarios suggest that China could become a potential source to neighboring regions, with a net output of ∼0.4 t year⁻¹ by around 2050. However, future emission scenarios, and hence model results, will be dictated by the efficiency of control measures.
NASA Astrophysics Data System (ADS)
Ramezani, Zeinab; Orouji, Ali A.
2017-08-01
This paper proposes and investigates a double-gate (DG) MOSFET that emulates a tunnel field-effect transistor (M-TFET). We combine this novel concept into a double-gate MOSFET that behaves as a tunneling field-effect transistor through work-function engineering. In the proposed structure, in addition to the main gate, another gate is placed over the source region with zero applied voltage and a proper work function to convert the source region from N+ to P+. We examine the impact of varying the source-gate work function and source doping on the device parameters. The simulation results indicate that the M-TFET is well suited to switching applications. We also present a two-dimensional analytical potential model of the proposed structure, obtained by solving Poisson's equation in the x and y directions; the electric field is then obtained by differentiating the potential profile. To validate the model, we use the SILVACO ATLAS device simulator and compare the analytical results against it.
Current Source Based on H-Bridge Inverter with Output LCL Filter
NASA Astrophysics Data System (ADS)
Blahnik, Vojtech; Talla, Jakub; Peroutka, Zdenek
2015-09-01
The paper deals with the control of a current source with an LCL output filter. The controlled current source is realized as a single-phase inverter whose output LCL filter provides low ripple of the output current. However, systems incorporating LCL filters require more complex control strategies, and there are several interesting approaches to the control of this type of converter. This paper presents an inverter control algorithm that combines model-based control with direct current control based on resonant controllers and single-phase vector control. The primary goal is to reduce the current ripple and distortion below the required limits and to provide fast and precise control of the output current. The proposed control technique is verified by measurements on a laboratory model.
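As a concrete illustration of the resonant-controller ingredient, a discrete proportional-resonant (PR) controller can be sketched in a few lines. This is not the paper's full scheme (the model-based and single-phase vector-control parts are omitted), and the gains, resonance frequency, and sample rate below are invented.

```python
import numpy as np
from scipy.signal import cont2discrete, lfilter

Kp, Kr = 2.0, 500.0          # illustrative proportional and resonant gains
w0 = 2 * np.pi * 50.0        # resonance at the 50 Hz fundamental
Ts = 1.0 / 20000.0           # 20 kHz control period (assumed)

# Ideal resonant term Kr*s/(s^2 + w0^2), discretised with Tustin's method;
# a PR controller achieves zero steady-state error at w0 for sinusoids.
bz, az, *_ = cont2discrete(([Kr, 0.0], [1.0, 0.0, w0**2]), Ts, method="bilinear")
bz = bz.flatten()

def pr_controller(error):
    """Proportional-resonant control action for a stream of error samples."""
    resonant = lfilter(bz, az, error)
    return Kp * error + resonant

# Example: response to a 50 Hz sinusoidal error signal.
t = np.arange(0.0, 0.1, Ts)
u = pr_controller(np.sin(w0 * t))
```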
A Simple Model of Pulsed Ejector Thrust Augmentation
NASA Technical Reports Server (NTRS)
Wilson, Jack; Deloof, Richard L. (Technical Monitor)
2003-01-01
A simple model of thrust augmentation from a pulsed source is described. In the model it is assumed that the flow into the ejector is quasi-steady and can be calculated using potential flow techniques. The velocity of the flow is related to the speed of the starting vortex ring formed by the jet. The vortex ring properties are obtained from the slug model, knowing the jet diameter, speed, and slug length. The model, when combined with experimental results, predicts an optimum ejector radius for thrust augmentation. Data on pulsed ejector performance for comparison with the model were obtained using a shrouded Hartmann-Sprenger tube as the pulsed jet source. A statistical experiment, in which ejector length, diameter, and nose radius were independent parameters, was performed at four different frequencies. These frequencies corresponded to four different slug length-to-diameter ratios, two below cut-off and two above. Comparison of the model with the experimental data showed reasonable agreement. Maximum pulsed thrust augmentation is shown to occur for a pulsed source with a slug length-to-diameter ratio equal to the cut-off value.
Koshkina, Vira; Wang, Yang; Gordon, Ascelin; Dorazio, Robert; White, Matthew; Stone, Lewi
2017-01-01
Two main sources of data for species distribution models (SDMs) are site-occupancy (SO) data from planned surveys and presence-background (PB) data from opportunistic surveys and other sources. SO surveys give high-quality data about presences and absences of the species in a particular area. However, due to their high cost, they often cover a smaller area relative to PB data and are usually not representative of the geographic range of a species. In contrast, PB data are plentiful and cover a larger area, but are less reliable due to the lack of information on species absences, and are usually characterised by biased sampling. Here we present a new approach for species distribution modelling that integrates these two data types. We have used an inhomogeneous Poisson point process as the basis for constructing an integrated SDM that fits both PB and SO data simultaneously. It is the first implementation of an integrated SO-PB model that uses repeated-survey occupancy data and also incorporates detection probability. The Integrated Model's performance was evaluated using simulated data and compared to approaches using PB or SO data alone. It was found to be superior, improving the predictions of species spatial distributions even when SO data are sparse and collected in a limited area. The Integrated Model was also effective when environmental covariates were significantly correlated. Our method was demonstrated with real SO and PB data for the Yellow-bellied glider (Petaurus australis) in south-eastern Australia, with the predictive performance of the Integrated Model again found to be superior. PB models are known to produce biased estimates of species occupancy or abundance, and the small sample size of SO datasets often results in poor out-of-sample predictions. Integrated models combine data from these two sources, providing superior predictions of species abundance compared to using either data source alone. Unlike conventional SDMs, which have restrictive scale-dependence in their predictions, our Integrated Model is based on a point process model and has no such scale-dependency. It may be used for predictions of abundance at any spatial scale while still maintaining the underlying relationship between abundance and area.
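A schematic of the joint likelihood such an integrated model maximizes, assuming a log-linear intensity, a log-linear PB sampling bias, and (for brevity) a known detection probability; the design matrices, quadrature grid, and detection value are all placeholders, not the authors' implementation.

```python
import numpy as np

def neg_log_lik(params, X_grid, area, X_pb, Z_pb, Z_grid, y_so, J_so, X_so, A_so):
    """Joint PB + SO negative log-likelihood for an inhomogeneous Poisson
    point process (schematic)."""
    k = X_grid.shape[1]
    beta, delta = params[:k], params[k:]
    lam_grid = np.exp(X_grid @ beta)     # intensity on a quadrature grid
    bias_grid = np.exp(Z_grid @ delta)   # PB sampling-bias thinning

    # PB part: log-likelihood of a thinned Poisson point process.
    ll_pb = np.sum(X_pb @ beta + Z_pb @ delta) - np.sum(lam_grid * bias_grid * area)

    # SO part: occupancy-detection terms; P(site occupied) = 1 - exp(-lam * A).
    p = 0.5                                            # detection prob, assumed known
    psi = 1.0 - np.exp(-np.exp(X_so @ beta) * A_so)
    det = psi * p**y_so * (1 - p)**(J_so - y_so)       # detected y of J visits
    ll_so = np.sum(np.log(np.where(y_so > 0, det, det + (1 - psi))))
    return -(ll_pb + ll_so)
```

In practice the parameter vector would be fit by passing this function to a numerical optimizer such as scipy.optimize.minimize.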
A Survey of Insider Attack Detection Research
2008-08-25
modeling of statistical features, such as the frequency of events, the duration of events, the co-occurrence of multiple events combined through...forms of attack that have been reported. For example: • Unauthorized extraction, duplication, or exfiltration...network level. Schultz pointed out that not one approach will work but solutions need to be based on multiple sensors to be able to find any combination
NASA Astrophysics Data System (ADS)
Townson, Reid W.; Zavgorodni, Sergei
2014-12-01
In GPU-based Monte Carlo simulations for radiotherapy dose calculation, source modelling from a phase-space source can be an efficiency bottleneck. Previously, this has been addressed using phase-space-let (PSL) sources, which provided significant efficiency enhancement. We propose that additional speed-up can be achieved through the use of a hybrid primary photon point source model combined with a secondary PSL source. A novel phase-space derived and histogram-based implementation of this model has been integrated into gDPM v3.0. Additionally, a simple method for approximately deriving target photon source characteristics from a phase-space that does not contain inheritable particle history variables (LATCH) has been demonstrated to succeed in selecting over 99% of the true target photons with only ~0.3% contamination (for a Varian 21EX 18 MV machine). The hybrid source model was tested using an array of open fields for various Varian 21EX and TrueBeam energies, and all cases achieved greater than 97% chi-test agreement (the mean was 99%) above the 2% isodose with 1% / 1 mm criteria. The root mean square deviations (RMSDs) were less than 1%, with a mean of 0.5%, and the source generation time was 4-5 times faster. A seven-field intensity modulated radiation therapy patient treatment achieved 95% chi-test agreement above the 10% isodose with 1% / 1 mm criteria, 99.8% for 2% / 2 mm, a RMSD of 0.8%, and source generation speed-up factor of 2.5. Presented as part of the International Workshop on Monte Carlo Techniques in Medical Physics
NASA Astrophysics Data System (ADS)
Wang, Hong; Li, Xiufeng; Ge, Peng
2017-02-01
We propose a design method for an optical lens combined with a total internal reflection (TIR) freeform surface for an LED front fog lamp. The TIR freeform surface controls the edge rays of the LED source: it totally reflects them so that they emerge from the top surface of the lens, while the central rays of the LED source pass through the refractive surface and reach the measurement plane. We simulate the model using the Monte Carlo method. Simulation results show that the front fog lamp system satisfies the requirements of ECE R19 Rev7, with a light control efficiency of up to 76%.
Evaluation of nitrous acid sources and sinks in urban outflow
NASA Astrophysics Data System (ADS)
Gall, Elliott T.; Griffin, Robert J.; Steiner, Allison L.; Dibb, Jack; Scheuer, Eric; Gong, Longwen; Rutter, Andrew P.; Cevik, Basak K.; Kim, Saewung; Lefer, Barry; Flynn, James
2016-02-01
Intensive air quality measurements made from June 22-25, 2011 in the outflow of the Dallas-Fort Worth (DFW) metropolitan area are used to evaluate nitrous acid (HONO) sources and sinks. A two-layer box model was developed to assess the ability of established and recently identified HONO sources and sinks to reproduce observations of HONO mixing ratios. A baseline model scenario includes sources and sinks established in the literature and is compared to scenarios including three recently identified sources: volatile organic compound-mediated conversion of nitric acid to HONO (S1), biotic emission from the ground (S2), and re-emission from a surface nitrite reservoir (S3). For all mechanisms, ranges of parametric values span lower- and upper-limit estimates. Model outcomes for 'likely' estimates of sources and sinks generally under-predict the HONO observations, implying the need to evaluate additional sources and the variability of the parameterizations, particularly during daylight hours. Monte Carlo simulation is applied to model scenarios with sources S1-S3 added independently and in combination, generally showing improved model outcomes. Adding sources S2 and S3 (scenario S2/S3) best replicates observed HONO, as determined by the model coefficient of determination and residual sum of squared errors (r² = 0.55 ± 0.03, SSE = 4.6 × 10^6 ± 7.6 × 10^5 ppt²). In scenario S2/S3, source S2 accounts for 25% and 6.7% of the nighttime and daytime budgets, respectively, while source S3 accounts for 19% and 11%. However, despite the improved model fit, daytime HONO remains significantly underestimated; on average, an unknown daytime HONO source of 0.15 ppt/s, or 67% of the total daytime source, is needed to bring scenario S2/S3 into agreement with observation. Estimates of 'best fit' parameterizations across lower- to upper-limit values result in a moderate reduction of the unknown daytime source, from 0.15 to 0.10 ppt/s.
Probing the X-Ray Binary Populations of the Ring Galaxy NGC 1291
NASA Technical Reports Server (NTRS)
Luo, B.; Fabbiano, G.; Fragos, T.; Kim, D. W.; Belczynski, K.; Brassington, N. J.; Pellegrini, S.; Tzanavaris, P.; Wang, J.; Zezas, A.
2012-01-01
We present Chandra studies of the X-ray binary (XRB) populations in the bulge and ring regions of the ring galaxy NGC 1291. We detect 169 X-ray point sources in the galaxy, 75 in the bulge and 71 in the ring, utilizing the four available Chandra observations totaling an effective exposure of 179 ks. We report photometric properties of these sources in a point-source catalog. About 40% of the bulge sources and about 25% of the ring sources show >3σ long-term variability in their X-ray count rates. The X-ray colors suggest that a significant fraction of the bulge (approx. 75%) and ring (approx. 65%) sources are likely low-mass X-ray binaries (LMXBs). The spectra of the nuclear source indicate that it is a low-luminosity AGN with moderate obscuration; spectral variability is observed between individual observations. We construct 0.3-8.0 keV X-ray luminosity functions (XLFs) for the bulge and ring XRB populations, taking into account detection incompleteness and background AGN contamination. We reach 90% completeness limits of approx. 1.5 × 10^37 and approx. 2.2 × 10^37 erg/s for the bulge and ring populations, respectively. Both XLFs can be fit with a broken power-law model, and the shapes are consistent with those expected for populations dominated by LMXBs. We perform detailed population synthesis modeling of the XRB populations in NGC 1291, which suggests that the observed combined XLF is dominated by an old LMXB population. We compare the bulge and ring XRB populations, and argue that the ring XRBs are associated with a younger stellar population than the bulge sources, based on the relative over-density of X-ray sources in the ring, the generally harder X-ray colors of the ring sources, the overabundance of luminous sources in the combined XLF, and the flatter shape of the ring XLF.
Fine-grained suspended sediment source identification for the Kharaa River basin, northern Mongolia
NASA Astrophysics Data System (ADS)
Rode, Michael; Theuring, Philipp; Collins, Adrian L.
2015-04-01
Fine sediment inputs into river systems can be a major source of nutrients and heavy metals and have a strong impact on the water quality and ecosystem functions of rivers and lakes, including those in semiarid regions. However, little is known to date about the spatial distribution of sediment sources in most large-scale river basins in Central Asia. Accordingly, a sediment source fingerprinting technique was used to assess the spatial sources of fine-grained (<10 microns) sediment in the 15,000 km² Kharaa River basin in northern Mongolia. Five field sampling campaigns, in late summer 2009 and in spring and late summer of both 2010 and 2011, were conducted directly after high water flows, collecting a total of 900 sediment samples. The work used a statistical approach for sediment source discrimination with geochemical composite fingerprints, based on a new Genetic Algorithm (GA)-driven Discriminant Function Analysis, the Kruskal-Wallis H-test, and Principal Component Analysis. The composite fingerprints were subsequently used for numerical mass balance modelling with uncertainty analysis. The contributions of the individual sub-catchment spatial sediment sources varied from 6.4% (the headwater sub-catchment of Sugnugur Gol) to 36.2% (the Kharaa II sub-catchment in the middle reaches of the study basin), with generally higher contributions from the sub-catchments in the middle, rather than the upstream, portions of the study area. The importance of riverbank erosion was shown to increase from upstream to midstream tributaries. The source tracing procedure provides results in reasonable accordance with previous findings in the study region and demonstrates the general applicability, and the associated uncertainties, of an approach for fine-grained sediment source investigation in large-scale semi-arid catchments. The combined application of source fingerprinting and catchment modelling approaches can be used to assess whether tracing estimates are credible, and in combination such approaches provide a basis for making sediment source apportionment more compelling to catchment stakeholders and managers.
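A minimal sketch of the mass-balance unmixing with Monte Carlo uncertainty propagation that such fingerprinting studies perform; the tracer values, source variability, and the simple constrained least-squares solver are illustrative stand-ins for the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical fingerprint means: rows = tracers, columns = sub-catchment sources.
S = np.array([[12.0, 18.0, 25.0],
              [ 3.1,  4.4,  2.2],
              [ 0.8,  1.5,  1.1]])
S_sd = 0.1 * S                       # within-source variability (assumed 10%)
mix = np.array([19.0, 3.4, 1.2])     # downstream sediment sample

def unmix(S_draw, mix):
    """Least-squares mass balance with fractions constrained to sum to one."""
    A = np.vstack([S_draw, np.ones(S_draw.shape[1])])
    b = np.append(mix, 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(f, 0, 1)          # crude bound; proper solvers constrain directly

# Propagate source variability by repeatedly perturbing the fingerprints.
draws = np.array([unmix(rng.normal(S, S_sd), mix) for _ in range(2000)])
print("mean source contributions:", draws.mean(axis=0))
print("5-95% ranges:", np.percentile(draws, [5, 95], axis=0).T)
```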
Targeted versus statistical approaches to selecting parameters for modelling sediment provenance
NASA Astrophysics Data System (ADS)
Laceby, J. Patrick
2017-04-01
One effective field-based approach to modelling sediment provenance is the source fingerprinting technique. Arguably, one of the most important steps in this approach is selecting the appropriate suite of parameters, or fingerprints, used to model source contributions. Accordingly, approaches to selecting parameters for sediment source fingerprinting are reviewed here, followed by the opportunities and limitations of these approaches and some future research directions. For properties to be effective tracers of sediment, they must discriminate between sources whilst behaving conservatively. Conservative behaviour is characterized by constancy in sediment properties: the properties of sediment sources should remain constant through sediment detachment, transport and deposition processes, or at the very least vary in a predictable and measurable way. One approach to selecting conservative properties for sediment source fingerprinting is to identify targeted tracers, such as caesium-137, that provide specific source information (e.g. surface versus subsurface origins). A second approach is to use statistical tests to select an optimal suite of conservative properties capable of modelling sediment provenance. In general, statistical approaches use a combination of discrimination statistics (e.g. Kruskal-Wallis H-test, Mann-Whitney U-test) and parameter selection statistics (e.g. Discriminant Function Analysis or Principal Component Analysis). The challenge is that modelling sediment provenance is often not straightforward, and there is increasing debate in the literature surrounding the most appropriate approach to selecting elements for modelling. Moving forward, it would be beneficial if researchers tested their results with multiple modelling approaches, artificial mixtures, and multiple lines of evidence to provide secondary support to their initial modelling results. Indeed, element selection can greatly impact modelling results, and having multiple lines of evidence will help provide confidence when modelling sediment provenance.
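A minimal sketch of the first (discrimination) stage of the two-stage statistical selection described above, using the Kruskal-Wallis H-test; the tracer data are simulated, and the second stage (stepwise DFA or PCA) is only indicated in a comment.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
# Hypothetical tracer measurements for three source groups.
sources = {"surface":  rng.normal(10.0, 2.0, 25),
           "banks":    rng.normal(14.0, 2.0, 25),
           "gullies":  rng.normal(10.5, 2.0, 25)}

# Stage 1: retain tracers whose concentrations differ significantly among
# sources (Kruskal-Wallis H-test on the per-source samples).
H, p = kruskal(*sources.values())
keep = p < 0.05
print(f"H={H:.2f}, p={p:.4f}, retain tracer: {keep}")
# Stage 2 (not shown) would feed the retained tracers to a stepwise
# Discriminant Function Analysis or Principal Component Analysis.
```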
Modelling remediation scenarios in historical mining catchments.
Gamarra, Javier G P; Brewer, Paul A; Macklin, Mark G; Martin, Katherine
2014-01-01
Local remediation measures, particularly those undertaken in historical mining areas, can often be ineffective or even deleterious because erosion and sedimentation processes operate at spatial scales beyond those typically used in point-source remediation. Using realistic simulations of a hybrid landscape evolution model combined with stochastic rainfall generation, we demonstrate that similar remediation strategies may produce differing effects across three contrasting European catchments, depending on their topographic and hydrologic regimes. From these results, we propose a conceptual model of catchment-scale remediation effectiveness based on three basic catchment characteristics: the degree of contaminant source coupling, the ratio of contaminated to non-contaminated sediment delivery, and the frequency of sediment transport events.
NASA Technical Reports Server (NTRS)
Hayden, R. E.; Kadman, Y.; Chanaud, R. C.
1972-01-01
The feasibility of quieting the externally-blown-flap (EBF) noise sources, which are due to the interaction of jet exhaust flow with deployed flaps, was demonstrated on a 1/15-scale 3-flap EBF model. Sound field characteristics were measured, and noise reduction fundamentals were reviewed in terms of source models. Tests of the 1/15-scale model showed broadband noise reductions of up to 20 dB resulting from a combination of variable-impedance flap treatment and mesh grids placed in the jet flow upstream of the flaps. Steady-state lift, drag, and pitching moment were measured with and without the noise reduction treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, D.J.; Warner, J.A.; LeBarron, N.
Processes that use energetic ions for large substrates require that the time-averaged erosion effects from the ion flux be uniform across the surface. A numerical model has been developed to determine this flux and its effects on surface etching of a silica/photoresist combination. The geometry of the source and substrate is very similar to a typical deposition geometry with single or planetary substrate rotation. The model was used to tune an inert ion-etching process that used single or multiple Kaufman sources to less than 3% uniformity over a 30-cm aperture after etching 8 μm of material. The same model can be used to predict uniformity for ion-assisted deposition (IAD).
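The essence of such a uniformity model is a time average of the beam flux over substrate rotation. A minimal sketch, assuming single rotation and a Gaussian beam footprint; the geometry and all numbers are illustrative, not those of the paper's Kaufman-source setup:

```python
import numpy as np

# Assumed geometry: Gaussian beam footprint centred a distance d from the
# rotation axis of a spinning substrate; all numbers are illustrative.
d, sigma = 10.0, 6.0   # cm

def mean_etch_rate(r, n_steps=720):
    """Time-averaged relative etch rate at substrate radius r: average
    the beam flux seen by the point over one full revolution."""
    theta = np.linspace(0.0, 2 * np.pi, n_steps, endpoint=False)
    # distance from beam centre to the rotating point (law of cosines)
    rho = np.sqrt(r**2 + d**2 - 2 * r * d * np.cos(theta))
    return np.exp(-rho**2 / (2 * sigma**2)).mean()

radii = np.linspace(0.0, 15.0, 16)
rates = np.array([mean_etch_rate(r) for r in radii])
nonuniformity = (rates.max() - rates.min()) / rates.mean()
print(f"peak-to-valley nonuniformity: {100 * nonuniformity:.1f}%")
```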
Nakamura, M; Saito, K; Wakabayashi, M
1990-04-01
The purpose of this study was to investigate how attitude change is generated by the recipient's degree of attitude formation, the evaluative-emotional elements contained in the persuasive messages, and source expertise as a peripheral cue in the persuasion context. Hypotheses based on the Attitude Formation Theory of Mizuhara (1982) and the Elaboration Likelihood Model of Petty and Cacioppo (1981, 1986) were examined. Eighty undergraduate students served as subjects in the experiment, the first stage of which involved manipulating the degree of attitude formation with respect to nuclear power development. Then, the experimenter presented persuasive messages with varying combinations of evaluative-emotional elements from a source with either high or low expertise on the subject. Results revealed a significant interaction effect on attitude change among attitude formation, persuasive message and the expertise of the message source. That is, high attitude formation subjects resisted evaluative-emotional persuasion from the high expertise source, while low attitude formation subjects changed their attitude when exposed to the same persuasive message from a low expertise source. Results exceeded initial predictions based on the Attitude Formation Theory and the Elaboration Likelihood Model.
NASA Astrophysics Data System (ADS)
Jolliff, Jason Keith; Smith, Travis A.; Ladner, Sherwin; Arnone, Robert A.
2014-03-01
The U.S. Naval Research Laboratory (NRL) is developing nowcast/forecast software systems designed to combine satellite ocean color data streams with physical circulation models in order to produce prognostic fields of ocean surface materials. The Deepwater Horizon oil spill in the Gulf of Mexico provided a test case for the Bio-Optical Forecasting (BioCast) system to rapidly combine the latest satellite imagery of the oil slick distribution with surface circulation fields in order to produce oil slick transport scenarios and forecasts. In one such sequence of experiments, MODIS satellite true color images were combined with high-resolution ocean circulation forecasts from the Coupled Ocean-Atmosphere Mesoscale Prediction System (COAMPS®) to produce 96-h oil transport simulations. These oil forecasts predicted a major oil slick landfall at Grand Isle, Louisiana, USA that was subsequently observed. A key driver of the landfall scenario was the development of a coastal buoyancy current associated with Mississippi River Delta freshwater outflow. In another series of experiments, longer-term regional circulation model results were combined with oil slick source/sink scenarios to simulate the observed containment of surface oil within the Gulf of Mexico. Both sets of experiments underscore the importance of identifying and simulating potential hydrodynamic conduits of surface oil transport. The addition of explicit sources and sinks of surface oil concentrations provides a framework for increasingly complex oil spill modeling efforts that extend beyond horizontal trajectory analysis.
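The core of such a transport simulation is advecting a surface concentration field with the modelled currents while injecting and removing oil through explicit source and sink terms. The sketch below is a toy first-order upwind scheme with constant currents and periodic boundaries; the grid, current speeds, and rates are illustrative and are not BioCast or COAMPS values.

```python
import numpy as np

# Toy advection of a surface oil field with explicit source/sink terms;
# first-order upwind, constant currents, periodic boundaries for brevity.
nx, ny, dx, dt = 100, 100, 1000.0, 200.0   # grid cells, spacing (m), step (s)
u, v = 0.4, 0.1                            # surface current, m/s (u, v > 0)
leak_rate, decay = 1.0e-3, 1.0e-6          # source strength, sink rate (1/s)
c = np.zeros((ny, nx))

for step in range(2000):
    dcdx = (c - np.roll(c, 1, axis=1)) / dx    # upwind difference in x
    dcdy = (c - np.roll(c, 1, axis=0)) / dx    # upwind difference in y
    c -= dt * (u * dcdx + v * dcdy)
    c[50, 20] += dt * leak_rate                # continuous point source
    c *= 1.0 - dt * decay                      # weathering/removal sink

print("total surface oil (arbitrary units):", c.sum())
```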
NASA Astrophysics Data System (ADS)
Uranishi, Katsushige; Ikemori, Fumikazu; Nakatsubo, Ryohei; Shimadera, Hikari; Kondo, Akira; Kikutani, Yuki; Asano, Katsuyoshi; Sugata, Seiji
2017-10-01
This study presented a comparison approach using multiple source apportionment methods to identify which sectors of the emission data have large biases. The source apportionment methods in the comparison included both receptor and chemical transport models, which are widely used to quantify the impacts of emission sources on fine particulate matter of less than 2.5 μm in diameter (PM2.5). We used daily chemical component concentration data for the year 2013, including water-soluble ions, elements, and carbonaceous species of PM2.5 at 11 sites in the Kinki-Tokai district in Japan, to apply the Positive Matrix Factorization (PMF) model for source apportionment. Seven PMF factors of PM2.5 were identified from the temporal and spatial variation patterns and site-specific features. These factors comprised two types of secondary sulfate, road transportation, heavy oil combustion by ships, biomass burning, secondary nitrate, and soil and industrial dust, accounting for 46%, 17%, 7%, 14%, 13%, and 3% of the PM2.5, respectively. The multi-site data enabled a comprehensive identification of the PM2.5 sources. For the same period, source contributions were estimated by air quality simulations using the Community Multiscale Air Quality model (CMAQ) with the brute-force method (BFM) for four source categories. Both models provided consistent results for three of the four source categories: secondary sulfates, road transportation, and heavy oil combustion. For these three target categories, the models' agreement was supported by the small differences and high correlations between the CMAQ/BFM- and PMF-estimated source contributions to the concentrations of PM2.5, SO4^2-, and EC. In contrast, the contributions of biomass burning sources apportioned by CMAQ/BFM were much lower than, and poorly correlated with, those captured by the PMF model, indicating large uncertainties in the biomass burning emissions used in the CMAQ simulations. Thus, this comparison approach using two contrasting models enables us to identify which sectors of the emission data have large biases, for the improvement of future air quality simulations.
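The receptor-model half of the comparison factorizes a samples-by-species concentration matrix into source profiles and contribution time series. PMF additionally weights each observation by its measurement uncertainty; as a simplified stand-in, the sketch below uses plain non-negative matrix factorization from scikit-learn on synthetic speciation data.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Synthetic speciation data: 365 daily samples x 10 species mixed from
# 3 latent source factors (all values invented for the example).
true_profiles = rng.random((3, 10))
true_contrib = rng.gamma(2.0, 1.0, size=(365, 3))
X = np.clip(true_contrib @ true_profiles
            + rng.normal(0.0, 0.01, (365, 10)), 0.0, None)

model = NMF(n_components=3, init='nndsvda', max_iter=1000, random_state=0)
G = model.fit_transform(X)   # factor contribution time series
F = model.components_        # factor chemical profiles

mass = G.sum(axis=0) * F.sum(axis=1)   # total mass attributed per factor
print("factor shares of total mass:", np.round(mass / mass.sum(), 3))
```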
Laceby, J Patrick; Huon, Sylvain; Onda, Yuichi; Vaury, Veronique; Evrard, Olivier
2016-12-01
The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident resulted in radiocesium fallout contaminating coastal catchments of the Fukushima Prefecture. As the decontamination effort progresses, the potential downstream migration of radiocesium-contaminated particulate matter from forests, which cover over 65% of the most contaminated region, requires investigation. Carbon and nitrogen elemental concentrations and stable isotope ratios are thus used to model the relative contributions of forest, cultivated and subsoil sources to deposited particulate matter in three contaminated coastal catchments. Samples were taken from the main identified sources: cultivated (n = 28), forest (n = 46), and subsoils (n = 25). Deposited particulate matter (n = 82) was sampled during four fieldwork campaigns from November 2012 to November 2014. A distribution modelling approach quantified relative source contributions with multiple combinations of element parameters (carbon only, nitrogen only, and four parameters) for two particle size fractions (<63 μm and <2 mm). Although there was significant particle size enrichment for the particulate matter parameters, these differences only resulted in a 6% (SD 3%) mean difference in relative source contributions. Further, the three different modelling approaches only resulted in a 4% (SD 3%) difference between relative source contributions. For each particulate matter sample, six models (i.e. <63 μm and <2 mm from the three modelling approaches) were used to incorporate a broader definition of potential uncertainty into the model results. Forest sources were modelled to contribute 17% (SD 10%) of particulate matter, indicating that they represent a long-term potential source of radiocesium-contaminated material in fallout-impacted catchments. Subsoils contributed 45% (SD 26%) of particulate matter and cultivated sources contributed 38% (SD 19%). The reservoir of radiocesium in forested landscapes in the Fukushima region represents a potential long-term source of contaminated particulate matter that will require diligent management for the foreseeable future. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Murine Model to Study Epilepsy and SUDEP Induced by Malaria Infection
Ssentongo, Paddy; Robuccio, Anna E.; Thuku, Godfrey; Sim, Derek G.; Nabi, Ali; Bahari, Fatemeh; Shanmugasundaram, Balaji; Billard, Myles W.; Geronimo, Andrew; Short, Kurt W.; Drew, Patrick J.; Baccon, Jennifer; Weinstein, Steven L.; Gilliam, Frank G.; Stoute, José A.; Chinchilli, Vernon M.; Read, Andrew F.; Gluckman, Bruce J.; Schiff, Steven J.
2017-01-01
One of the largest single sources of epilepsy in the world is produced as a neurological sequela in survivors of cerebral malaria. Nevertheless, the pathophysiological mechanisms of such epileptogenesis remain unknown and no adjunctive therapy during cerebral malaria has been shown to reduce the rate of subsequent epilepsy. There is no existing animal model of postmalarial epilepsy. In this technical report we demonstrate the first such animal models. These models were created from multiple mouse and parasite strain combinations, so that the epilepsy observed retained universality with respect to genetic background. We also discovered spontaneous sudden unexpected death in epilepsy (SUDEP) in two of our strain combinations. These models offer a platform to enable new preclinical research into mechanisms and prevention of epilepsy and SUDEP. PMID:28272506
Forecasting database for the tsunami warning regional center for the western Mediterranean Sea
NASA Astrophysics Data System (ADS)
Gailler, A.; Hebert, H.; Loevenbruck, A.; Hernandez, B.
2010-12-01
Improvements in the availability of sea-level observations and advances in numerical modeling techniques are increasing the potential for tsunami warnings to be based on numerical model forecasts. Numerical tsunami propagation and inundation models are well developed, but they present a challenge to run in real time, partly due to computational limitations and partly due to a lack of detailed knowledge of the earthquake rupture parameters. Through the establishment of the tsunami warning regional center for the NE Atlantic and western Mediterranean Sea, the CEA is in charge of rapidly providing a map with uncertainties showing zones in the main axis of energy at the Mediterranean scale. The strategy is initially based on a database of pre-computed tsunami scenarios, because the source parameters available shortly after an earthquake occurs are preliminary and may be somewhat inaccurate. Existing numerical models are good enough to provide useful guidance for warning structures to be quickly disseminated. When an event occurs, an appropriate variety of offshore tsunami propagation scenarios may be recalled through an automatic interface by combining pre-computed propagation solutions (single or multiple sources). This approach provides quick estimates of offshore tsunami propagation and aids hazard assessment and evacuation decision-making. As numerical model accuracy is inherently limited by errors in bathymetry and topography, and as inundation map calculation is more complex and expensive in terms of computational time, only offshore tsunami propagation modeling is included in the forecasting database, using a single sparse bathymetric computation grid for the numerical modeling. Because of the large variability in the mechanisms of tsunamigenic earthquakes, not all possible magnitudes can be represented in the scenario database. In principle, however, an infinite number of tsunami propagation scenarios can be constructed by linear combinations of a finite number of pre-computed unit scenarios. The whole notion of a pre-computed forecasting database also requires a historical earthquake and tsunami database, as well as an up-to-date seismotectonic database including fault geometry and a zonation based on seismotectonic synthesis of source zones and tsunamigenic faults. Our forecast strategy is thus based on a unit source function methodology, whereby the model runs are combined and scaled linearly to produce any composite tsunami propagation solution. Each unit source function is equivalent to a tsunami generated by a Mo = 1.75E+19 N.m earthquake (Mw ~6.8) with a rectangular fault 25 km by 20 km in size and 1 m in slip. The faults of the unit functions are placed adjacent to each other, following the discretization of the main seismogenic faults bounding the western Mediterranean basin. The number of unit functions involved varies with the magnitude of the desired composite solution, and the combined wave heights are multiplied by a given scaling factor to produce the new arbitrary scenario. Some test-case examples are presented (e.g., Boumerdès 2003 [Algeria, Mw 6.8], Djijel 1856 [Algeria, Mw 7.2], Ligure 1887 [Italy, Mw 6.5-6.7]).
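The unit source function methodology reduces, at forecast time, to a scaled linear sum over the pre-computed database. A minimal sketch follows; the stored fields here are random stand-ins, whereas in practice each row would be a modelled offshore wave-height field for one 25 km x 20 km, 1 m slip unit rupture.

```python
import numpy as np

# Stand-in database: each row would be the stored offshore wave-height
# field of one unit rupture (25 km x 20 km, 1 m slip, Mw ~6.8).
n_units, n_grid = 40, 5000
database = np.random.default_rng(3).random((n_units, n_grid))

def composite(unit_ids, slips_m):
    """Scaled linear sum of unit solutions; slip acts as the scaling
    factor because offshore propagation is treated as linear."""
    return sum(s * database[i] for i, s in zip(unit_ids, slips_m))

# A larger event represented by three adjacent segments with 2.5 m of
# slip each (numbers purely illustrative):
field = composite(unit_ids=[12, 13, 14], slips_m=[2.5, 2.5, 2.5])
print("max offshore wave height (arbitrary units):", field.max())
```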
Roshani, G H; Karami, A; Khazaei, A; Olfateh, A; Nazemi, E; Omidi, M
2018-05-17
The gamma ray source plays a very important role in the precision of multiphase flow metering. In this study, different combinations of gamma ray sources ((133Ba-137Cs), (133Ba-60Co), (241Am-137Cs), (241Am-60Co), (133Ba-241Am) and (60Co-137Cs)) were investigated in order to optimize a three-phase flow meter. The three phases were water, oil and gas, and the flow regime was considered annular. The required data were numerically generated using the MCNP-X Monte Carlo code. The present study is devoted to forecasting the volume fractions in annular three-phase flow, based on a multi-energy metering system including various radiation sources and one NaI detector, using a hybrid model of an artificial neural network and the Jaya optimization algorithm. Since the summation of the volume fractions is constant, a constrained modeling problem exists, meaning that the hybrid model must forecast only two volume fractions. Six hybrid models, one for each radiation source combination, were designed. The models were employed to forecast the gas and water volume fractions and were trained on the numerically obtained data. The results show that the best forecasts of the gas and water volume fractions are obtained for the system including (241Am-137Cs) as the radiation source. Copyright © 2018 Elsevier Ltd. All rights reserved.
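The closure constraint described here (fractions summing to one, so only two need be predicted) is easy to illustrate. The sketch below uses a small scikit-learn neural network as a stand-in for the paper's ANN/Jaya hybrid, with synthetic count-rate features replacing the MCNP-X data; all values are invented for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Synthetic stand-in for the MCNP-generated training data: two detector
# count-rate features mapped from the gas and water fractions.
gas = rng.uniform(0.05, 0.90, 2000)
water = rng.uniform(0.05, 0.90, 2000) * (1.0 - gas)
X = np.column_stack([1.2 * gas + 0.3 * water, 0.4 * gas + 1.5 * water])
X += rng.normal(0.0, 0.01, X.shape)        # counting noise
Y = np.column_stack([gas, water])

# Predict only gas and water; oil follows from the closure constraint.
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000,
                   random_state=0).fit(X[:1500], Y[:1500])
pred = net.predict(X[1500:])
oil = 1.0 - pred.sum(axis=1)               # fractions must sum to one
print("mean abs. error (gas, water):",
      np.abs(pred - Y[1500:]).mean(axis=0).round(4))
print("example oil fraction:", round(float(oil[0]), 3))
```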
Acoustic Parametric Array for Identifying Standoff Targets
NASA Astrophysics Data System (ADS)
Hinders, M. K.; Rudd, K. E.
2010-02-01
An integrated simulation method for investigating nonlinear sound beams and 3D acoustic scattering from any combination of complicated objects is presented. A standard finite-difference simulation method is used to model pulsed nonlinear sound propagation from a source to a scattering target via the KZK equation. Then, a parallel 3D acoustic simulation method based on the finite integration technique is used to model the acoustic wave interaction with the target. Any combination of objects and material layers can be placed into the 3D simulation space to study the resulting interaction. Several example simulations are presented to demonstrate the simulation method and 3D visualization techniques. The combined simulation method is validated by comparing experimental and simulation data, and a demonstration of how it assisted in the development of a nonlinear acoustic concealed weapons detector is also presented.
Stabilized determination of geopotential coefficients by the mixed hom-BLUP approach
NASA Technical Reports Server (NTRS)
Middel, B.; Schaffrin, B.
1989-01-01
For the determination of geopotential coefficients, data can be used from rather different sources, e.g., satellite tracking, gravimetry, or altimetry. As each data type is particularly sensitive to certain wavelengths of the spherical harmonic coefficients, it is of essential importance how they are treated in a combination solution. For example, the longer wavelengths are well described by the coefficients of a model derived from satellite tracking, while other observation types, such as gravity anomalies, Δg, and geoid heights, N, from altimetry, contain only poor information for these long wavelengths. Therefore, the lower-degree coefficients of the satellite model should be treated as superior in the combination. A new combination method is presented which turns out to be highly suitable for this purpose due to its great flexibility combined with robustness.
NASA Astrophysics Data System (ADS)
Yetman, G.; Downs, R. R.
2011-12-01
Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other projects and can require effort to maintain the software over time. Adopting and reusing software and system modules previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and address internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages; by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. Vendor-supported software may also have the additional benefits of guaranteed up-time, bug fixes, and vendor-added enhancements. Deploying commercial software can be advantageous for obtaining system or software components offered by a vendor that meet in-house requirements, with the vendor contracted to provide installation, support and maintenance services as needed. Combining these options offers a menu of choices, enabling selection of system components or software modules that meet the evolving requirements encountered throughout the scientific data lifecycle.
NASA Astrophysics Data System (ADS)
Zosseder, K.; Post, J.; Steinmetz, T.; Wegscheider, S.; Strunz, G.
2009-04-01
Indonesia is located at one of the most active geological subduction zones in the world. Following the most recent seaquakes and their subsequent tsunamis in December 2004 and July 2006, it is expected that tsunamis are also likely to occur in the near future, due to increased tectonic tension leading to abrupt vertical seafloor alterations after a century of relative tectonic silence. To face this devastating threat, tsunami hazard maps are very important as a basis for evacuation planning and mitigation strategies. For a tsunami impact, the hazard assessment is mostly covered by numerical modelling, because the model results normally offer the most precise database for a hazard analysis: they include spatially distributed data and their influence on the hydraulic dynamics. Generally, a model result gives a probability for the intensity distribution of a tsunami at the coast (or run-up) and the spatial distribution of the maximum inundation area, depending on the location and magnitude of the tsunami source used. The boundary condition of the source used for the model is mostly chosen by a worst-case approach: the location and magnitude that are likely to occur and are assumed to generate the worst impact are used to predict the impact in a specific area. But for a tsunami hazard assessment covering a large coastal area, as demanded in the GITEWS (German Indonesian Tsunami Early Warning System) project in which the present work is embedded, this approach is not practicable, because many tsunami sources can cause an impact at the coast and must be considered. Thus a multi-scenario tsunami modelling approach is developed to provide a reliable hazard assessment covering large areas. For the Indonesian early warning system, many tsunami scenarios were modelled by the Alfred Wegener Institute (AWI) for different probable tsunami sources and different magnitudes along the Sunda Trench. Every modelled scenario delivers the spatial distribution of the inundation for a specific area, the wave height at the coast in this area, and the estimated times of arrival (ETAs) of the waves, caused by one tsunamigenic source with a specific magnitude. These parameters from the several scenarios can overlap each other along the coast and must be combined to obtain one comprehensive hazard assessment for all possible future tsunamis in the region under observation. The simplest way to derive the inundation probability along the coast with the multi-scenario approach is to overlay all scenario inundation results and determine how often a point on land is significantly inundated across the various scenarios. But this does not take into account that the tsunamigenic sources used for the modelled scenarios have different likelihoods of causing a tsunami. Hence a statistical analysis of historical data and of geophysical investigation results based on numerical modelling is added to the hazard assessment, which clearly improves its significance. For this purpose the present method was developed; it contains a logical combination of the diverse probabilities assessed, such as the probability of occurrence of different earthquake magnitudes at different localities, the probability of occurrence of a specific wave height at the coast, and the probability for every point on land of being hit by a tsunami. The values are combined by a logical tree technique and quantified by statistical analysis of the historical data and of the tsunami modelling results, as mentioned before.
This results in a tsunami inundation probability map covering the southwest coast of Indonesia which nevertheless shows significant spatial diversity, offering a good basis for evacuation planning and mitigation strategies. Keywords: tsunami hazard assessment, tsunami modelling, probabilistic analysis, early warning
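Under an independence assumption between sources, the core of such a logical-tree combination for a single point on land can be written in a few lines; the occurrence and conditional inundation probabilities below are purely illustrative.

```python
import numpy as np

# One land point, several tsunamigenic sources assumed independent.
# p_occurrence: probability that each source scenario occurs (from the
# statistical analysis); p_hit: probability that the scenario
# significantly inundates the point (from the modelled inundation
# results). All values below are illustrative.
p_occurrence = np.array([0.020, 0.050, 0.010, 0.003])
p_hit = np.array([0.9, 0.3, 1.0, 0.6])

# Probability that the point is inundated by at least one source:
p_point = 1.0 - np.prod(1.0 - p_occurrence * p_hit)
print(f"combined inundation probability: {p_point:.4f}")
```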
NASA Astrophysics Data System (ADS)
Barrett, Steven R. H.; Britter, Rex E.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for the assessment of a site. While this may be acceptable for the assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point-source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently, with a loss of accuracy that is small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
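The decomposition mentioned above (an area source as a sum of crosswind line sources, each with an analytical Gaussian solution) is straightforward to sketch for a ground-level source and receptor. The power-law sigma_z and all coefficients below are assumed values for illustration, not the paper's parameterization.

```python
import numpy as np

# Ground-level area source split into crosswind-infinite line sources,
# each with the analytical Gaussian solution, summed at the receptor.
U = 5.0        # mean wind speed, m/s
q = 1.0e-6     # area emission rate, kg m^-2 s^-1

def sigma_z(x):
    # assumed neutral-stability vertical spread, m (illustrative law)
    return 0.08 * x**0.9

def ground_conc(x_receptor, x0, x1, n=500):
    """Concentration at a ground receptor x_receptor downwind of an
    area source spanning [x0, x1] along the wind direction."""
    xs = np.linspace(x0, x1, n)        # line-source positions
    dq = q * (x1 - x0) / n             # line strength per strip, kg/m/s
    d = x_receptor - xs                # travel distance per strip
    d = d[d > 0.0]                     # only upwind strips contribute
    # ground-level line source with full reflection:
    # C = sqrt(2/pi) * Q_L / (sigma_z(d) * U)
    return np.sum(np.sqrt(2.0 / np.pi) * dq / (sigma_z(d) * U))

print("C at 500 m downwind:", ground_conc(500.0, 0.0, 200.0), "kg/m^3")
```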
Surface-Water Nutrient Conditions and Sources in the United States Pacific Northwest
Wise, Daniel R; Johnson, Henry M
2011-01-01
The SPAtially Referenced Regressions On Watershed attributes (SPARROW) model was used to perform an assessment of surface-water nutrient conditions and to identify important nutrient sources in watersheds of the Pacific Northwest region of the United States (U.S.) for the year 2002. Our models included variables representing nutrient sources as well as landscape characteristics that affect nutrient delivery to streams. Annual nutrient yields were higher in watersheds on the wetter, west side of the Cascade Range compared to watersheds on the drier, east side. High nutrient enrichment (relative to the U.S. Environmental Protection Agency's recommended nutrient criteria) was estimated in watersheds throughout the region. Forest land was generally the largest source of total nitrogen stream load and geologic material was generally the largest source of total phosphorus stream load generated within the 12,039 modeled watersheds. These results reflected the prevalence of these two natural sources and the low input from other nutrient sources across the region. However, the combined input from agriculture, point sources, and developed land, rather than natural nutrient sources, was responsible for most of the nutrient load discharged from many of the largest watersheds. Our results provided an understanding of the regional patterns in surface-water nutrient conditions and should be useful to environmental managers in future water-quality planning efforts. PMID:22457584
A hadoop-based method to predict potential effective drug combination.
Sun, Yifan; Xiong, Yi; Xu, Qian; Wei, Dongqing
2014-01-01
Combination drugs that impact multiple targets simultaneously are promising candidates for combating complex diseases due to their improved efficacy and reduced side effects. However, exhaustive screening of all possible drug combinations is extremely time-consuming and impractical. Here, we present a novel Hadoop-based approach to predicting drug combinations that takes advantage of the MapReduce programming model, which improves the scalability of the prediction algorithm. By integrating the gene expression data of multiple drugs, we constructed data preprocessing and support vector machine and naïve Bayesian classifiers on Hadoop for the prediction of drug combinations. The experimental results suggest that our Hadoop-based model achieves much higher efficiency in the big data processing steps with satisfactory performance. We believe that our proposed approach can help accelerate the prediction of potentially effective drug combinations as the number of possible combinations grows exponentially in the future. The source code and datasets are available upon request.
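The map/reduce split at the heart of the approach can be sketched without a Hadoop cluster: the map phase emits candidate drug pairs with combined feature vectors, and the reduce phase scores each pair. Plain Python stands in for Hadoop below, and the expression signatures and L1-norm "classifier" are toy stand-ins for the paper's SVM and naïve Bayesian models.

```python
from itertools import combinations

# Toy gene-expression signatures per drug (invented values).
expression = {
    'drugA': [1.0, -0.5, 0.2],
    'drugB': [0.8, -0.4, 0.1],
    'drugC': [-1.0, 0.9, 0.3],
}

def map_phase(drugs):
    # emit (drug_pair, combined_feature_vector) records
    for a, b in combinations(sorted(drugs), 2):
        yield (a, b), [x + y for x, y in zip(drugs[a], drugs[b])]

def reduce_phase(records):
    # score each pair; the L1 norm stands in for a trained classifier
    return {pair: sum(abs(v) for v in feats) for pair, feats in records}

scores = reduce_phase(map_phase(expression))
for pair, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(pair, round(s, 3))
```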
Race of source effects in the elaboration likelihood model.
White, P H; Harkins, S G
1994-11-01
In a series of experiments, we investigated the effect of race of source on persuasive communications in the Elaboration Likelihood Model (R.E. Petty & J.T. Cacioppo, 1981, 1986). In Experiment 1, we found no evidence that White participants responded to a Black source as a simple negative cue. Experiment 2 suggested the possibility that exposure to a Black source led to low-involvement message processing. In Experiments 3 and 4, a distraction paradigm was used to test this possibility, and it was found that participants under low involvement were highly motivated to process a message presented by a Black source. In Experiment 5, we found that attitudes toward the source's ethnic group, rather than violations of expectancies, accounted for this processing effect. Taken together, the results of these experiments are consistent with S.L. Gaertner and J.F. Dovidio's (1986) theory of aversive racism, which suggests that Whites, because of a combination of egalitarian values and underlying negative racial attitudes, are very concerned about not appearing unfavorable toward Blacks, leading them to be highly motivated to process messages presented by a source from this group.
Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J
2017-05-01
Existing approaches to deriving decision models from plaintext clinical data frequently depend on medical dictionaries as the sources of potential features. Prior research suggests that decision models developed using non-dictionary-based feature sourcing approaches and "off the shelf" tools could predict cancer with performance metrics between 80% and 90%. We sought to compare non-dictionary-based models to models built using features derived from medical dictionaries. We evaluated the detection of cancer cases from free-text pathology reports using decision models built with combinations of dictionary or non-dictionary feature sourcing approaches, 4 feature subset sizes, and 5 classification algorithms. Each decision model was evaluated using the following performance metrics: sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristic (ROC) curve. Decision models parameterized using dictionary and non-dictionary feature sourcing approaches produced performance metrics between 70% and 90%. The source of features and the feature subset size had no impact on the performance of a decision model. Our study suggests there is little value in leveraging medical dictionaries for extracting features for decision model building. Decision models built using features extracted from the plaintext reports themselves achieve results comparable to those built using medical dictionaries. Overall, this suggests that existing "off the shelf" approaches can be leveraged to perform accurate cancer detection using less complex Named Entity Recognition (NER)-based feature extraction, automated feature selection and modeling approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
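The non-dictionary pipeline the study describes (features taken straight from the report text, filtered to a subset size, then fed to off-the-shelf classifiers) can be sketched with scikit-learn. The corpus, labels, and the k = 20 subset size below are fabricated for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Fabricated pathology snippets and labels (1 = cancer case).
reports = ["invasive ductal carcinoma identified in specimen",
           "benign fibrous tissue no malignancy seen",
           "adenocarcinoma present margins involved",
           "no evidence of tumor benign cyst",
           "malignant cells consistent with carcinoma",
           "normal tissue architecture benign findings"] * 10
labels = [1, 0, 1, 0, 1, 0] * 10

for clf in (LogisticRegression(max_iter=1000), MultinomialNB()):
    pipe = make_pipeline(TfidfVectorizer(),        # features from the text
                         SelectKBest(chi2, k=20),  # feature subset size
                         clf)
    auc = cross_val_score(pipe, reports, labels, cv=5, scoring='roc_auc')
    print(type(clf).__name__, "AUC:", round(auc.mean(), 3))
```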
Emission Mechanisms in X-Ray Faint Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Brown, B. A.; Bregman, J. N.
1999-12-01
To understand the X-ray emission in normal elliptical galaxies, it is important to determine the relative contributions of hot interstellar gas and discrete sources to the observed emission. In X-ray luminous ellipticals, a hot gaseous component dominates the emission from X-ray binaries and other discrete sources. It is expected that, as one looks toward less X-ray luminous galaxies, the hot gas will contribute less to the overall X-ray emission and discrete sources will supply most, if not all, of the observed X-ray emission. Here we examine ROSAT HRI and PSPC data for seventeen optically bright (B_T < 11.15) elliptical galaxies with log(L_X/L_B) < 29.7 erg s^-1/L⊙. Radial surface brightness profiles are modeled with a modified King beta model and a de Vaucouleurs r^{1/4} law (similar to a beta = 0.5 beta model). For galaxy profiles where the two models are easily distinguishable, the models are combined and fit to the data to determine, or set upper limits on, the discrete source contribution. The modeled data suggest that X-ray faint elliptical galaxies may still retain a sizable fraction of hot gas, but that emission from discrete sources is a significant component of the total observed X-ray emission. Support for this project has been provided by NASA and the National Academy of Sciences.
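The combined-model fit can be illustrated directly: a beta model for the gas plus a de Vaucouleurs r^{1/4} component for the discrete sources, fit jointly to a radial profile. The sketch below uses synthetic data and illustrative parameters, not the ROSAT profiles.

```python
import numpy as np
from scipy.optimize import curve_fit

def beta_model(r, s0, rc, beta):
    # hot-gas surface brightness
    return s0 * (1.0 + (r / rc) ** 2) ** (0.5 - 3.0 * beta)

def devauc(r, s_e, r_e):
    # de Vaucouleurs r^{1/4} profile tracing the discrete sources
    return s_e * np.exp(-7.669 * ((r / r_e) ** 0.25 - 1.0))

def combined(r, s0, rc, beta, s_e, r_e):
    return beta_model(r, s0, rc, beta) + devauc(r, s_e, r_e)

rng = np.random.default_rng(5)
r = np.linspace(1.0, 60.0, 40)                    # radius, arcsec
data = combined(r, 5.0, 8.0, 0.6, 2.0, 15.0) * rng.normal(1.0, 0.05, r.size)

popt, _ = curve_fit(combined, r, data, p0=[3.0, 5.0, 0.5, 1.0, 10.0],
                    bounds=(0.0, np.inf))
gas, discrete = beta_model(r, *popt[:3]), devauc(r, *popt[3:])
print("discrete-source fraction:", discrete.sum() / (gas + discrete).sum())
```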
SPATIAL PREDICTION USING COMBINED SOURCES OF DATA
For improved environmental decision-making, it is important to develop new models for spatial prediction that accurately characterize important spatial and temporal patterns of air pollution. As the U .S. Environmental Protection Agency begins to use spatial prediction in the reg...
Expert Panels, Consumers, and Chemistry.
ERIC Educational Resources Information Center
Rehfeldt, Thomas K.
2000-01-01
Studied the attributes, properties, and consumer acceptance of antiperspirant products through responses of 400 consumers (consumer data), expert panel data, and analytical data about the products. Results show how the Rasch model can provide the tool necessary to combine data from several sources. (SLD)
Elastic parabolic equation solutions for underwater acoustic problems using seismic sources.
Frank, Scott D; Odom, Robert I; Collis, Jon M
2013-03-01
Several problems of current interest involve elastic bottom range-dependent ocean environments with buried or earthquake-type sources, specifically oceanic T-wave propagation studies and interface wave related analyses. Additionally, observed deep shadow-zone arrivals are not predicted by ray theoretic methods, and attempts to model them with fluid-bottom parabolic equation solutions suggest that it may be necessary to account for elastic bottom interactions. In order to study energy conversion between elastic and acoustic waves, current elastic parabolic equation solutions must be modified to allow for seismic starting fields for underwater acoustic propagation environments. Two types of elastic self-starter are presented. An explosive-type source is implemented using a compressional self-starter and the resulting acoustic field is consistent with benchmark solutions. A shear wave self-starter is implemented and shown to generate transmission loss levels consistent with the explosive source. Source fields can be combined to generate starting fields for source types such as explosions, earthquakes, or pile driving. Examples demonstrate the use of source fields for shallow sources or deep ocean-bottom earthquake sources, where down slope conversion, a known T-wave generation mechanism, is modeled. Self-starters are interpreted in the context of the seismic moment tensor.
Controlled-source seismic interferometry with one way wave fields
NASA Astrophysics Data System (ADS)
van der Neut, J.; Wapenaar, K.; Thorbecke, J. W.
2008-12-01
In Seismic Interferometry we generally cross-correlate registrations at two receiver locations and sum over an array of sources to retrieve a Green's function, as if one of the receiver locations hosts a (virtual) source and the other hosts an actual receiver. One application of this concept is to redatum an array of surface sources to a downhole receiver location, without requiring information about the medium between the sources and receivers, thus providing an effective tool for imaging below complex overburden; this is also known as the Virtual Source method. We demonstrate how elastic wavefield decomposition can be effectively combined with controlled-source Seismic Interferometry to generate virtual sources in a downhole receiver array that radiate only down- or upgoing P- or S-waves, with receivers sensing only down- or upgoing P- or S-waves. For this purpose we derive exact Green's matrix representations from a reciprocity theorem for decomposed wavefields. Required is the deployment of multi-component sources at the surface and multi-component receivers in a horizontal borehole. The theory is supported with a synthetic elastic model, where redatumed traces are compared with those of a directly modeled reflection response, generated by placing active sources at the virtual source locations and applying elastic wavefield decomposition on both the source and receiver sides.
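Stripped of the wavefield decomposition, the redatuming step is a cross-correlation of the two downhole recordings summed over the surface shots. A bare-bones numpy sketch follows, with random traces standing in for real shot gathers.

```python
import numpy as np

n_src, n_t, dt = 50, 512, 0.004
rng = np.random.default_rng(11)
rec_a = rng.normal(0.0, 1.0, (n_src, n_t))   # receiver A, all surface shots
rec_b = rng.normal(0.0, 1.0, (n_src, n_t))   # receiver B, all surface shots

# Correlate in the frequency domain and stack over the source array;
# the result approximates the response at B to a virtual source at A.
A = np.fft.rfft(rec_a, axis=1)
B = np.fft.rfft(rec_b, axis=1)
virtual = np.fft.irfft((np.conj(A) * B).sum(axis=0), n=n_t)
print("virtual-source trace, peak at lag:", virtual.argmax() * dt, "s")
```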
NASA Astrophysics Data System (ADS)
Hornung, Thomas; Simon, Kai; Lausen, Georg
Combining information from different Web sources often results in a tedious and repetitive process; e.g., even a simple information request might require iterating over the result list of one Web query and using each single result as input for a subsequent query. One approach to such chained queries is data-centric mashups, which allow the data flow to be modelled visually as a graph, where the nodes represent the data sources and the edges the data flow.
Leak detection utilizing analog binaural (VLSI) techniques
NASA Technical Reports Server (NTRS)
Hartley, Frank T. (Inventor)
1995-01-01
A detection method and system utilizing silicon models of the traveling wave structure of the human cochlea to spatially and temporally locate a specific sound source in the presence of high noise pandemonium. The detection system combines two-dimensional stereausis representations, which are output by at least three VLSI binaural hearing chips, to generate a three-dimensional stereausis representation including both binaural and spectral information which is then used to locate the sound source.
Schiestl-Aalto, Pauliina; Kulmala, Liisa; Mäkinen, Harri; Nikinmaa, Eero; Mäkelä, Annikki
2015-04-01
Whether the environmental control of tree growth operates through carbon sources or sinks remains unresolved, although it is widely studied. This study investigates the growth of tree components and carbon sink-source dynamics at different temporal scales. We constructed a dynamic growth model, 'carbon allocation sink source interaction' (CASSIA), that calculates the tree-level carbon balance from photosynthesis, respiration, phenology and temperature-driven potential structural growth of tree organs, together with the dynamics of stored nonstructural carbon (NSC) and their modifying influence on growth. With the model, we tested the hypotheses that sink demand explains the intra-annual growth dynamics of the meristems, and that source supply is further needed to explain year-to-year growth variation. The predicted intra-annual dimensional growth of shoots and needles and the number of cells in the xylogenesis phases corresponded with measurements, whereas NSC hardly limited growth, supporting the first hypothesis. A delayed GPP influence on potential growth was necessary for simulating the yearly growth variation, also indicating at least an indirect source limitation. CASSIA combines seasonal growth and carbon balance dynamics with long-term source dynamics affecting growth and thus provides a first step towards understanding the complex processes regulating intra- and interannual growth and sink-source dynamics. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
NASA Astrophysics Data System (ADS)
Chen, Qiang; Xu, Qian; Zhang, Yijun; Yang, Yinghui; Yong, Qi; Liu, Guoxiang; Liu, Xianwen
2018-03-01
A single satellite geodetic technique has weaknesses in mapping sequences of ground deformation associated with serial seismic events; for example, InSAR with a long revisit period readily yields mixed, complex deformation signals from multiple events. This challenges the capability of any single satellite geodetic technique for accurate recognition of individual surface deformation fields and earthquake models. The rapidly increasing availability of various satellite observations provides a good solution for overcoming this issue. In this study, we explore a sequential combination of multiple overlapping datasets from ALOS/PALSAR, ENVISAT/ASAR and GPS observations to separate the surface deformation associated with the 2011 Mw 9.0 Tohoku-Oki main shock and two strong aftershocks, the Mw 6.6 Iwaki and Mw 5.8 Ibaraki events. We first estimate the fault slip model of the main shock with ASAR interferometry and GPS displacements as constraints. Because the PALSAR interferogram used spans the period of all the events, we then remove the surface deformation of the main shock through a forward-calculated prediction, thus obtaining the PALSAR InSAR deformation associated with the two strong aftershocks. The inversion for the source parameters of the Iwaki aftershock is conducted using the refined PALSAR deformation, considering that the higher-magnitude Iwaki quake makes a more dominant deformation contribution than the Ibaraki event. After removal of the deformation component of the Iwaki event, we determine the fault slip distribution of the Ibaraki shock using the remaining PALSAR InSAR deformation. Finally, the complete source models for the serial seismic events are clearly identified from the sequential combination of multi-source satellite observations, which suggest that the main quake was a predominantly mega-thrust rupture, whereas the two aftershocks were normal faulting events. The estimated moment magnitudes for the Tohoku-Oki, Iwaki and Ibaraki events are Mw 9.0, Mw 6.85 and Mw 6.11, respectively.
Fitting and Modeling in the ASC Data Analysis Environment
NASA Astrophysics Data System (ADS)
Doe, S.; Siemiginowska, A.; Joye, W.; McDowell, J.
As part of the AXAF Science Center (ASC) Data Analysis Environment, we will provide to the astronomical community a Fitting Application. We present a design of the application in this paper. Our design goal is to give the user the flexibility to use a variety of optimization techniques (Levenberg-Marquardt, maximum entropy, Monte Carlo, Powell, downhill simplex, CERN-Minuit, and simulated annealing) and fit statistics (chi-squared, Cash, variance, and maximum likelihood); our modular design allows the user to easily add their own optimization techniques and/or fit statistics. We also present a comparison of the optimization techniques to be provided by the Application. The high spatial and spectral resolutions that will be obtained with AXAF instruments require a sophisticated data modeling capability. We will provide not only a suite of astronomical spatial and spectral source models, but also the capability of combining these models into source models of up to four data dimensions (i.e., into source functions f(E,x,y,t)). We will also provide tools to create instrument response models appropriate for each observation.
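Swapping fit statistics over a common model, as this design allows, is simple to illustrate. The sketch below fits a toy power-law spectrum to Poisson counts with both the Cash statistic, C = 2 Σ (m - d ln m), and chi-squared; the model and data are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
e = np.linspace(1.0, 10.0, 50)                   # energy grid, keV
model = lambda p, e: p[0] * e ** (-p[1])         # toy power-law spectrum
counts = rng.poisson(model([200.0, 1.7], e))     # simulated data

def cash(p):
    m = np.maximum(model(p, e), 1e-9)            # keep log() defined
    return 2.0 * np.sum(m - counts * np.log(m))

def chi2(p):
    m = model(p, e)
    return np.sum((counts - m) ** 2 / np.maximum(m, 1.0))

for name, stat in (("Cash", cash), ("chi-squared", chi2)):
    fit = minimize(stat, x0=[100.0, 1.0], method='Nelder-Mead')
    print(name, "best fit (norm, index):", np.round(fit.x, 3))
```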
NASA Astrophysics Data System (ADS)
Kim, Seongryong; Tkalčić, Hrvoje; Mustać, Marija; Rhie, Junkee; Ford, Sean
2016-04-01
A framework is presented within which we provide rigorous estimates of seismic sources and structures in Northeast Asia. We use Bayesian inversion methods, which enable statistical estimation of models and their uncertainties based on the data information. Ambiguities in error statistics and model parameterizations are addressed by hierarchical and trans-dimensional (trans-D) techniques, which can be inherently implemented in Bayesian inversions. Hence, reliable estimation of model parameters and their uncertainties is possible, avoiding arbitrary regularizations and parameterizations. Hierarchical and trans-D inversions are performed to develop a three-dimensional velocity model using ambient noise data. To further improve the model, we perform joint inversions with receiver function data using a newly developed Bayesian method. For the source estimation, a novel moment tensor inversion method is presented and applied to regional waveform data from the North Korean nuclear explosion tests. Through the combination of new Bayesian techniques and the structural model, coupled with meaningful uncertainties at each stage of the process, more quantitative monitoring and discrimination of seismic events is possible.
NASA Astrophysics Data System (ADS)
Asano, K.; Iwata, T.
2014-12-01
After the 2011 Tohoku earthquake in Japan (Mw 9.0), many papers on the source model of this mega subduction earthquake were published. From our study on the modeling of strong motion waveforms in the period range 0.1-10 s, four isolated strong motion generation areas (SMGAs) were identified in the area deeper than 25 km (Asano and Iwata, 2012). The locations of these SMGAs were found to correspond to the asperities of M7-class events in the 1930s. However, many studies on kinematic rupture modeling using seismic, geodetic and tsunami data revealed the existence of a large slip area extending from the trench to the hypocenter (e.g., Fujii et al., 2011; Koketsu et al., 2011; Shao et al., 2011; Suzuki et al., 2011). That is, the excitation of seismic waves is spatially different in the long- and short-period ranges, as already discussed by Lay et al. (2012) and related studies. The Tohoku earthquake thus raised a new issue concerning the relationship between strong motion generation and the fault rupture process, and resolving it is important for advancing source modeling for future strong motion prediction. Our previous source model consists of four SMGAs, and observed ground motions in the period range 0.1-10 s are explained well by this model. We tried to extend our source model to explain the observed ground motions in a wider period range with a simple assumption, referring to our previous study and the concept of the characterized source model (Irikura and Miyake, 2001, 2011). We obtained a characterized source model which has four SMGAs in the deep part, one large slip area in the shallow part, and a background area with low slip. The seismic moment of this source model is equivalent to Mw 9.0. The strong ground motions are simulated by the empirical Green's function method (Irikura, 1986). Although the longest period limit is restricted by the signal-to-noise ratio of the records of the EGF event (Mw ~6.0), this new source model succeeded in reproducing the observed waveforms and Fourier amplitude spectra in the period range 0.1-50 s. The location of the large slip area seems to overlap the source regions of the historical events of 1793 and 1897 off the Sanriku area. We think the source model for strong motion prediction of an Mw 9 event could be constructed by a combination of hierarchical multiple asperities or source patches related to historical events in this region.
Models, Measurements, and Local Decisions: Assessing and ...
This presentation includes a combination of modeling and measurement results to characterize near-source air quality in Newark, New Jersey, with consideration of how this information could be used to inform decision making to reduce the risk of health impacts. Decisions could include either exposure or emissions reductions and could involve a host of stakeholders, including residents, academics, NGOs, and local and federal agencies. This presentation includes results from the C-PORT modeling system and from a citizen science project in the local area. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
Ravel, André; Hurst, Matt; Petrica, Nicoleta; David, Julie; Mutschall, Steven K; Pintar, Katarina; Taboada, Eduardo N; Pollari, Frank
2017-01-01
Human campylobacteriosis is a common zoonosis with a significant burden in many countries. Its prevention is difficult because humans can be exposed to Campylobacter through various exposures: foodborne, waterborne or by contact with animals. This study aimed at attributing campylobacteriosis to sources at the point of exposure. It combined comparative exposure assessment and microbial subtype comparison with subtypes defined by comparative genomic fingerprinting (CGF). It used isolates from clinical cases and from eight potential exposure sources (chicken, cattle and pig manure, retail chicken, beef, pork and turkey meat, and surface water) collected within a single sentinel site of an integrated surveillance system for enteric pathogens in Canada. Overall, 1518 non-human isolates and 250 isolates from domestically-acquired human cases were subtyped and their subtype profiles analyzed for source attribution using two attribution models modified to include exposure. Exposure values were obtained from a concurrent comparative exposure assessment study undertaken in the same area. Based on CGF profiles, attribution was possible for 198 (79%) human cases. Both models provide comparable figures: chicken meat was the most important source (65-69% of attributable cases) whereas exposure to cattle (manure) ranked second (14-19% of attributable cases), the other sources being minor (including beef meat). In comparison with other attributions conducted at the point of production, the study highlights the fact that Campylobacter transmission from cattle to humans is rarely meat borne, calling for a closer look at local transmission from cattle to prevent campylobacteriosis, in addition to increasing safety along the chicken supply chain.
NASA Astrophysics Data System (ADS)
Guzmán, Gema; Laguna, Ana; Cañasveras, Juan Carlos; Boulal, Hakim; Barrón, Vidal; Gómez-Macpherson, Helena; Giráldez, Juan Vicente; Gómez, José Alfonso
2015-05-01
Although soil erosion is one of the main threats to agricultural sustainability in many areas of the world, its processes are difficult to measure and still need better characterization. The use of iron oxides as sediment tracers, combined with erosion and mixing models, opens up a pathway for improving knowledge of soil erosion and redistribution by determining sediment sources and sinks. In this study, magnetite and a multivariate mixing model were used in rainfall simulations at the micro-plot scale to determine the source of the sediment at different stages of a furrow-ridge system, both with (+T) and without (-T) wheel tracks. At the plot scale, magnetite, hematite and goethite, combined with two soil erosion models based on the kinematic wave approach, were used in a sprinkler irrigation test to study trends in sediment transport and tracer dynamics along furrow lengths under a wide range of scenarios. In the absence of any stubble cover, the sediment contribution from the ridges was larger than that from the furrow bed, almost 90%, while the opposite trend was observed with stubble, with a smaller contribution from the ridge (32%) than from the bed, in the micro-plot trials. Furthermore, at the plot scale, the tracer concentration analysis showed an exponentially decreasing trend with downstream distance, both for sediment detachment along furrows and for soil source contributions from tagged segments. The parameters of the distributed model KINEROS2 were estimated using PEST to obtain a more accurate evaluation. Afterwards, this model was used to simulate a broad range of common topography and rainfall scenarios from commercial farms in southern Spain. Steeper slopes had a significant influence on sediment yields, while long furrow distances allowed more efficient water use. For the control of runoff, and therefore soil loss, an equilibrium between irrigation design (intensity, duration, water pattern) and the water requirements of the crops should be defined in order to establish a sustainable management strategy.
Combination of surface and borehole seismic data for robust target-oriented imaging
NASA Astrophysics Data System (ADS)
Liu, Yi; van der Neut, Joost; Arntsen, Børge; Wapenaar, Kees
2016-05-01
A novel application of seismic interferometry (SI) and Marchenko imaging using both surface and borehole data is presented. A series of redatuming schemes is proposed to combine both data sets for robust deep local imaging in the presence of velocity uncertainties. The redatuming schemes create a virtual acquisition geometry in which both sources and receivers lie at the horizontal borehole level, so that only a local velocity model near the borehole is needed for imaging, and erroneous velocities in the shallow area have no effect on imaging around the borehole level. By combining the advantages of SI and Marchenko imaging, a macrovelocity model is no longer required, and the proposed schemes use only single-component data. Furthermore, the schemes yield virtual data with fewer spurious events and internal multiples than previous virtual source redatuming methods. Two numerical examples are shown to illustrate the workflow and demonstrate the benefits of the method: one a synthetic model and the other a realistic model of a field in the North Sea. In both tests, improved local images near the boreholes are obtained from the redatumed data without accurate velocities, because the redatumed data are close to the target.
Ginter, S
2000-07-01
Ultrasound (US) thermotherapy is used to treat tumours located deep in human tissue by heat. It is characterized by the application of high-intensity focused ultrasound (HIFU), high local temperatures of about 90 °C, and short treatment times of a few seconds. Dosage control remains a problem: one has to know the heat source, i.e. the amount of absorbed US power, which is subject to nonlinear influences. Therefore, accurate simulations are essential. In this paper, an improved simulation model is introduced which enables accurate investigations of US thermotherapy. It combines nonlinear US propagation effects, which lead to the generation of higher harmonics, with the broadband frequency power-law absorption typical of soft tissue. Only the combination of both provides a reliable calculation of the generated heat. Simulations show the influence of nonlinearities and broadband damping, for different source signals, on the absorbed US power density distribution.
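The heating term referred to above can be sketched for a plane wave decomposed into harmonics: the absorbed power density is approximately Q = sum_n 2*alpha(f_n)*I_n with a power-law absorption alpha(f). All tissue and source values below are assumptions for illustration.

```python
# Hedged sketch: heat deposition from a harmonic series under a
# frequency power-law absorption alpha(f) = a0 * (f / 1 MHz)**eta.
import numpy as np

rho, c = 1050.0, 1540.0   # soft-tissue density (kg/m^3) and sound speed (m/s)
a0, eta = 5.0, 1.1        # absorption at 1 MHz (Np/m), power-law exponent (assumed)
f0 = 1.0e6                # fundamental frequency (Hz)

# Hypothetical harmonic pressure amplitudes (Pa) from nonlinear steepening.
p = np.array([1.0e6, 4.0e5, 2.0e5, 1.0e5])

n = np.arange(1, p.size + 1)
alpha = a0 * (n * f0 / 1.0e6) ** eta    # absorption coefficient per harmonic
intensity = p**2 / (2.0 * rho * c)      # plane-wave intensity per harmonic
q = 2.0 * np.sum(alpha * intensity)     # absorbed power density (W/m^3)
print(f"absorbed power density: {q:.3e} W/m^3")
```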
Theory for source-responsive and free-surface film modeling of unsaturated flow
Nimmo, J.R.
2010-01-01
A new model explicitly incorporates the possibility of rapid response, across significant distance, to substantial water input. It is useful for unsaturated flow processes that are not inherently diffusive, or that do not progress through a series of equilibrium states. The term source-responsive is used to mean that flow responds sensitively to changing conditions at the source of water input (e.g., rainfall, irrigation, or ponded infiltration). The domain of preferential flow can be conceptualized as laminar flow in free-surface films along the walls of pores. These films may be considered to have uniform thickness, as suggested by field evidence that preferential flow moves at an approximately uniform rate when generated by a continuous and ample water supply. An effective facial area per unit volume quantitatively characterizes the medium with respect to source-responsive flow. A flow-intensity factor dependent on conditions within the medium represents the amount of source-responsive flow at a given time and position. Laminar flow theory provides relations for the velocity and thickness of flowing source-responsive films. Combination with the Darcy-Buckingham law and the continuity equation leads to expressions for both fluxes and dynamic water contents. Where preferential flow is sometimes or always significant, the interactive combination of source-responsive and diffuse flow has the potential to improve prediction of unsaturated-zone fluxes in response to hydraulic inputs and the evolving distribution of soil moisture. Examples for which this approach is efficient and physically plausible include (i) rainstorm-generated rapid fluctuations of a deep water table and (ii) space- and time-dependent soil water content response to infiltration in a macroporous soil. © Soil Science Society of America.
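A minimal sketch of the kind of film-flow relation the theory builds on, using the textbook laminar free-surface film result (mean velocity rho*g*b^2/(3*mu)). The film thickness and facial area density below are assumed values, and this is not necessarily the paper's exact parameterization.

```python
# Textbook laminar free-surface film relations (gravity-driven film on a
# pore wall); a sketch of the velocity/flux law underlying the model, with
# all parameter values assumed for illustration.
rho = 1000.0    # water density (kg/m^3)
g = 9.81        # gravitational acceleration (m/s^2)
mu = 1.0e-3     # dynamic viscosity (Pa s)
b = 5.0e-6      # film thickness (m), assumed

u_mean = rho * g * b**2 / (3.0 * mu)   # mean film velocity (m/s)
q = u_mean * b                         # volumetric flux per unit wetted width (m^2/s)

M = 100.0                              # facial area per unit volume (1/m), assumed
flux_density = q * M                   # Darcy-type flux density from film flow (m/s)
print(u_mean, q, flux_density)
```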
Rg-Lg coupling as a Lg-wave excitation mechanism
NASA Astrophysics Data System (ADS)
Ge, Z.; Xie, X.
2003-12-01
Regional phase Lg is predominantly composed of shear-wave energy trapped in the crust. Explosion sources are expected to be less efficient than earthquakes at exciting Lg phases, to the extent that the source can be approximated as isotropic. Shallow explosions generate relatively large surface-wave Rg compared with deeper earthquakes, and Rg is readily disrupted by crustal heterogeneity. Rg energy may thus scatter into trapped crustal S-waves near the source region and contribute to low-frequency Lg waves. In this study, finite-difference modeling combined with slowness analysis is used to investigate this Lg-wave excitation mechanism. The method allows us to investigate near-source energy partitioning in multiple domains including frequency, slowness and time. Its main advantage is that it can be applied at close range, before Lg is actually formed, which allows a very fine near-source velocity model to be used to simulate the energy partitioning process. We use a layered velocity structure as the background model and add small near-source random velocity patches to generate the Rg to Lg coupling. Two types of simulations are conducted: (1) a fixed shallow explosion source with randomness at different depths, and (2) fixed shallow randomness with explosion sources at different depths. The results show apparent coupling between the Rg and Lg waves at lower frequencies (0.3-1.5 Hz). A shallow source combined with shallow randomness generates the maximum Lg wave, consistent with the Rg energy distribution of a shallow explosion source. The Rg energy and excited Lg energy show a near-linear relationship. The numerical simulation and slowness analysis suggest that Rg to Lg coupling is an effective excitation mechanism for low-frequency Lg waves from a shallow explosion source.
Impedance cardiography: What is the source of the signal?
NASA Astrophysics Data System (ADS)
Patterson, R. P.
2010-04-01
Impedance cardiography continues to be investigated for various applications, and instruments for its use are available commercially. Almost all recent presentations and articles, along with commercial advertisements, have assumed that aortic volume pulsation is the source of the signal. A review of the literature reveals no clear evidence for this assumption. Starting with the first paper on impedance cardiography in 1964, which assumed the lung was the source of the signal, the presentation will review many studies in the 60's, 70's and 80's that suggest the aorta and other vessels, as well as the atria and again the lung, as possible sources. Current studies based on high-resolution thoracic models will be presented showing that the aorta contributes only approximately 1% of the total impedance measurement, making it an unlikely candidate for the major contributor to the signal. Combining the results of past studies with recent model-based work suggests other vessels and regions as possible sources.
Kim, Min Kyung; Lane, Anatoliy; Kelley, James J; Lun, Desmond S
2016-01-01
Several methods have been developed to predict system-wide and condition-specific intracellular metabolic fluxes by integrating transcriptomic data with genome-scale metabolic models. While powerful in many settings, existing methods have several shortcomings, and it is unclear which method has the best accuracy in general because of limited validation against experimentally measured intracellular fluxes. We present a general optimization strategy for inferring intracellular metabolic flux distributions from transcriptomic data coupled with genome-scale metabolic reconstructions. It consists of two different template models called DC (determined carbon source model) and AC (all possible carbon sources model) and two new methods called E-Flux2 (E-Flux method combined with minimization of the l2 norm) and SPOT (Simplified Pearson cOrrelation with Transcriptomic data), which can be chosen and combined depending on the availability of knowledge about the carbon source or objective function. This enables us to simulate a broad range of experimental conditions. We examined E. coli and S. cerevisiae as representative prokaryotic and eukaryotic microorganisms, respectively. The predictive accuracy of our algorithm was validated by calculating the uncentered Pearson correlation between predicted and measured fluxes. To this end, we compiled 20 experimental conditions (11 in E. coli and 9 in S. cerevisiae) of transcriptome measurements coupled with corresponding central carbon metabolism intracellular flux measurements determined by 13C metabolic flux analysis (13C-MFA), the largest dataset assembled to date for the purpose of validating inference methods for predicting intracellular fluxes. In both organisms, our method achieves an average correlation coefficient ranging from 0.59 to 0.87, outperforming a representative sample of competing methods. Easy-to-use implementations of E-Flux2 and SPOT are available as part of the open-source package MOST (http://most.ccib.rutgers.edu/). Our method represents a significant advance over existing methods for inferring intracellular metabolic flux from transcriptomic data. It not only achieves higher accuracy, but also combines into a single method a number of other desirable characteristics, including applicability to a wide range of experimental conditions, production of a unique solution, fast running time, and the availability of a user-friendly implementation.
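A minimal sketch of the validation metric named above, the uncentered Pearson correlation between predicted and measured flux vectors (the toy flux values are hypothetical):

```python
# Uncentered Pearson correlation: cosine similarity of the raw vectors,
# without subtracting the means (the scoring metric named in the abstract).
import numpy as np

def uncentered_pearson(predicted, measured):
    predicted, measured = np.asarray(predicted), np.asarray(measured)
    return predicted @ measured / (
        np.linalg.norm(predicted) * np.linalg.norm(measured))

print(uncentered_pearson([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # close to 1
```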
NASA Astrophysics Data System (ADS)
Jánský, Jaroslav; Lucas, Greg M.; Kalb, Christina; Bayona, Victor; Peterson, Michael J.; Deierling, Wiebke; Flyer, Natasha; Pasko, Victor P.
2017-12-01
This work analyzes different current source and conductivity parameterizations and their influence on the diurnal variation of the global electric circuit (GEC). Current source parameterizations obtained from electric field and conductivity measurements during aircraft overflights, combined with global Tropical Rainfall Measuring Mission satellite data, give generally good agreement with the measured diurnal variation of the electric field at Vostok, Antarctica, where reference experimental measurements are performed. An approach employing 85 GHz passive microwave observations to infer currents within the GEC is compared and shows the best agreement in amplitude and phase with the experimental measurements. To study the influence of conductivity, GEC models solving the continuity equation in 3-D are used to calculate atmospheric resistance from yearly averaged conductivity obtained from the global circulation model Community Earth System Model (CESM). Then, using a current source parameterization combining mean currents and global counts of electrified clouds, substituting the exponential conductivity profile with the conductivity from CESM decreases the peak-to-peak diurnal variation of the ionospheric potential of the GEC from 24% to 20%. The main reason for the change is the presence of clouds, while the effects of 222Rn ionization, aerosols, and topography are less pronounced. The simulated peak-to-peak diurnal variation of the electric field at Vostok, driven by the diurnal variation of the global current in the GEC, increases from 15% to 18% if the conductivity from CESM is used.
Ransom, Katherine M; Grote, Mark N.; Deinhart, Amanda; Eppich, Gary; Kendall, Carol; Sanborn, Matthew E.; Sounders, A. Kate; Wimpenny, Joshua; Yin, Qing-zhu; Young, Megan B.; Harter, Thomas
2016-01-01
Groundwater quality is a concern in alluvial aquifers that underlie agricultural areas, such as in the San Joaquin Valley of California. Shallow domestic wells (less than 150 m deep) in agricultural areas are often contaminated by nitrate. Agricultural and rural nitrate sources include dairy manure, synthetic fertilizers, and septic waste. Knowledge of the relative proportion that each of these sources contributes to nitrate concentration in individual wells can aid future regulatory and land management decisions. We show that nitrogen and oxygen isotopes of nitrate, boron isotopes, and iodine concentrations are a useful, novel combination of groundwater tracers to differentiate between manure, fertilizers, septic waste, and natural sources of nitrate. Furthermore, in this work, we develop a new Bayesian mixing model in which these isotopic and elemental tracers are used to estimate the probability distribution of the fractional contributions of manure, fertilizers, septic waste, and natural sources to the nitrate concentration found in an individual well. The approach was applied to 56 nitrate-impacted private domestic wells located in the San Joaquin Valley. Model analysis found that some domestic wells were clearly dominated by the manure source and suggests evidence for majority contributions from either the septic or fertilizer source for other wells. However, predictions of fractional contributions for septic and fertilizer sources were often of similar magnitude, perhaps because the modeled uncertainty about the fraction of each was large. For validation of the Bayesian model, fractional estimates were compared to surrounding land use, and estimated source contributions were broadly consistent with nearby land use types.
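To illustrate the flavor of Bayesian mixing model described above, the sketch below places a Dirichlet prior over four source fractions and scores prior samples with a Gaussian likelihood for two tracers via importance sampling. The source signatures, uncertainties, and well values are hypothetical, and the study's actual model is more elaborate.

```python
# Minimal Bayesian isotope mixing sketch: Dirichlet prior over source
# fractions, Gaussian likelihood for observed tracers, importance sampling.
# All numeric values are hypothetical, not the study's data.
import numpy as np

rng = np.random.default_rng(0)

# Tracer signatures (rows: two tracers) for manure, fertilizer, septic, natural.
means = np.array([[15.0,  5.0,  8.0,  3.0],
                  [25.0,  2.0, 30.0, 10.0]])
sigma = np.array([2.0, 3.0])        # combined measurement/source spread (assumed)
observed = np.array([10.0, 18.0])   # tracer values in one well (hypothetical)

f = rng.dirichlet(np.ones(4), size=200_000)   # prior samples of source fractions
pred = f @ means.T                            # predicted tracer mixture per sample
loglik = -0.5 * np.sum(((pred - observed) / sigma) ** 2, axis=1)
w = np.exp(loglik - loglik.max())
w /= w.sum()                                  # normalized importance weights

posterior_mean = w @ f
print("posterior mean fractions:", np.round(posterior_mean, 2))
```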
THE EFFECT OF CHLORINE EMISSIONS ON TROPOSPHERIC OZONE IN THE UNITED STATES
The effect of chlorine emissions on atmospheric ozone in the continental United States was evaluated. Atmospheric chlorine chemistry was combined with the carbon bond mechanism and incorporated into the Community Multiscale Air Quality model. Sources of chlorine included anthrop...
NASA Astrophysics Data System (ADS)
Wilson, R. I.; Barberopoulou, A.; Miller, K. M.; Goltz, J. D.; Synolakis, C. E.
2008-12-01
A consortium of tsunami hydrodynamic modelers, geologic hazard mapping specialists, and emergency planning managers is producing maximum tsunami inundation maps for California, covering most residential and transient populated areas along the state's coastline. The new tsunami inundation maps will be an upgrade from the existing maps for the state, improving on the resolution, accuracy, and coverage of the maximum anticipated tsunami inundation line. Thirty-five separate map areas covering nearly one-half of California's coastline were selected for tsunami modeling using the MOST (Method of Splitting Tsunami) model. From preliminary evaluations of nearly fifty local and distant tsunami source scenarios, those with the maximum expected hazard for a particular area were input to MOST. The MOST model was run with a near-shore bathymetric grid resolution varying from three arc-seconds (90m) to one arc-second (30m), depending on availability. Maximum tsunami "flow depth" and inundation layers were created by combining all modeled scenarios for each area. A method was developed to better define the location of the maximum inland penetration line using higher resolution digital onshore topographic data from interferometric radar sources. The final inundation line for each map area was validated using a combination of digital stereo photography and fieldwork. Further verification of the final inundation line will include ongoing evaluation of tsunami sources (seismic and submarine landslide) as well as comparison to the location of recorded paleotsunami deposits. Local governmental agencies can use these new maximum tsunami inundation lines to assist in the development of their evacuation routes and emergency response plans.
River Export of Plastic from Land to Sea: A Global Modeling Approach
NASA Astrophysics Data System (ADS)
Siegfried, Max; Gabbert, Silke; Koelmans, Albert A.; Kroeze, Carolien; Löhr, Ansje; Verburg, Charlotte
2016-04-01
Plastic is increasingly considered a serious cause of water pollution. It is a threat to aquatic ecosystems, including rivers, coastal waters and oceans. Rivers transport considerable amounts of plastic from land to sea. The quantity and its main sources, however, are not well known. Assessing the amount of macro- and microplastic transport from river to sea is, therefore, important for understanding the dimension and the patterns of plastic pollution of aquatic ecosystems. In addition, it is crucial for assessing short- and long-term impacts caused by plastic pollution. Here we present a global modelling approach to quantify river export of plastic from land to sea. Our approach accounts for different types of plastic, including both macro- and micro-plastics. Moreover, we distinguish point sources and diffuse sources of plastic in rivers. Our modelling approach is inspired by global nutrient models, which include more than 6000 river basins. In this paper, we will present our modelling approach, as well as first model results for micro-plastic pollution in European rivers. Important sources of micro-plastics include personal care products, laundry, household dust and car tyre wear. We combine information on these sources with information on sewage management and plastic retention during river transport for the largest European rivers. Our modelling approach may help to better understand and prevent water pollution by plastic, and at the same time serves as a 'proof of concept' for future application on a global scale.
Forward model with space-variant of source size for reconstruction on X-ray radiographic image
NASA Astrophysics Data System (ADS)
Liu, Jin; Liu, Jun; Jing, Yue-feng; Xiao, Bo; Wei, Cai-hua; Guan, Yong-hong; Zhang, Xuan
2018-03-01
The Forward Imaging Technique is a method to solve the inverse problem of density reconstruction in radiographic imaging. In this paper, we introduce the forward projection equation (IFP model) for a radiographic system with areal source blur and detector blur. Our forward projection equation, based on X-ray tracing, is combined with the Constrained Conjugate Gradient method to form a new method for density reconstruction. We demonstrate the effectiveness of the new technique by reconstructing density distributions from simulated and experimental images. We show that for radiographic systems with source sizes larger than the pixel size, the effect of blur on the density reconstruction is reduced through our method and can be controlled within one or two pixels. The method is also suitable for reconstruction of non-homogeneous objects.
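A minimal sketch of a blurred forward model of this kind: a simple Beer-Lambert transmission image convolved with source and detector point-spread functions. For brevity the sketch uses space-invariant Gaussian blurs, whereas the paper's source blur is space-variant; the function name and all values are illustrative.

```python
# Toy forward model: attenuation image blurred by areal-source and detector
# PSFs (space-invariant Gaussians here, for simplicity).
import numpy as np
from scipy.ndimage import gaussian_filter

def forward_project(areal_density, source_sigma_px, detector_sigma_px):
    """Ideal transmission image blurred by source and detector PSFs."""
    transmission = np.exp(-areal_density)  # Beer-Lambert attenuation
    blurred = gaussian_filter(transmission, source_sigma_px)  # source blur
    return gaussian_filter(blurred, detector_sigma_px)        # detector blur

obj = np.zeros((128, 128))
obj[40:90, 40:90] = 1.5   # toy areal density (dimensionless optical depth)
image = forward_project(obj, source_sigma_px=2.0, detector_sigma_px=1.0)
print(image.min(), image.max())
```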
Loch, R A; Sobierajski, R; Louis, E; Bosgra, J; Bijkerk, F
2012-12-17
The single-shot damage thresholds of multilayer optics for high-intensity short-wavelength radiation sources are theoretically investigated, using a model developed on the basis of experimental data obtained at the FLASH and LCLS free electron lasers. We compare the radiation hardness of commonly used multilayer optics and propose new material combinations selected for a high damage threshold. Our study demonstrates that the damage thresholds of multilayer optics can vary over a large range of incidence fluences and can be as high as several hundreds of mJ/cm². This strongly suggests that multilayer mirrors are serious candidates for damage-resistant optics. In particular, multilayer optics based on Li₂O spacers are very promising for use in current and future short-wavelength radiation sources.
NASA Technical Reports Server (NTRS)
Kimball, John; Kang, Sinkyu
2003-01-01
The original objectives of this proposed 3-year project were to: 1) quantify the respective contributions of land cover and disturbance (i.e., wildfire) to uncertainty associated with regional carbon source/sink estimates produced by a variety of boreal ecosystem models; 2) identify the model processes responsible for differences in simulated carbon source/sink patterns for the boreal forest; 3) validate model outputs using tower- and field-based estimates of NEP and NPP; and 4) recommend/prioritize improvements to boreal ecosystem carbon models, which will better constrain regional source/sink estimates for atmospheric CO2. These original objectives were subsequently distilled to fit within the constraints of a 1-year study. This revised study involved a regional model intercomparison over the BOREAS study region involving the Biome-BGC and TEM (A.D. McGuire, UAF) ecosystem models. The major focus of these revised activities was quantifying the sensitivity of regional model predictions to land cover classification uncertainties. We also evaluated the individual and combined effects of historical fire activity, historical atmospheric CO2 concentrations, and climate change on carbon and water flux simulations within the BOREAS study region.
Neset, Tina-Simone Schmid; Singer, Heinz; Longrée, Philipp; Bader, Hans-Peter; Scheidegger, Ruth; Wittmer, Anita; Andersson, Jafet Clas Martin
2010-07-15
This paper explores the potential of combining substance-flow modelling with water and wastewater sampling to trace consumption-related substances emitted through urban wastewater. The method is illustrated using sucralose, a chemical sweetener that is 600 times sweeter than sucrose and has been on the European market since 2004. As a food additive, sucralose has recently increased in usage in a number of foods, such as soft drinks, dairy products, candy and several dietary products. In a field campaign, sucralose concentrations were measured in the inflow and outflow of the local wastewater treatment plant in Linköping, Sweden, as well as upstream and downstream in the receiving stream and in Lake Roxen. This allows the loads emitted from the city to be estimated. A method consisting of solid-phase extraction followed by liquid chromatography and high-resolution mass spectrometry was used to quantify sucralose in the collected surface and wastewater samples. To identify and quantify the sucralose sources, a consumption analysis of households including small business enterprises was conducted, as well as an estimation of the emissions from the local food industry. The application of a simple model including uncertainty and sensitivity analysis indicates that at present not one large source but rather several small sources contribute to the load coming from households, small business enterprises and industry. This contrasts with the consumption pattern seen two years earlier, which was dominated by one product. The load entering the wastewater treatment plant had decreased significantly relative to measurements made two years earlier. The study shows that combining substance-flow modelling with analysis of the loads to the receiving waters helps us to understand consumption-related emissions.
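A minimal mass-balance step of the kind used to close such a substance-flow model, with hypothetical concentrations and plant flow rather than the Linköping measurements:

```python
# Toy load estimate from measured concentrations and plant flow.
# Values are hypothetical, for illustration only.
c_in, c_out = 5.0, 4.6   # sucralose at WWTP inflow/outflow (ug/L), assumed
q = 40_000.0             # plant flow (m^3/day), assumed

# (ug/L) * (m^3/day) = mg/day numerically (1 m^3 = 1000 L); /1000 -> g/day
load_in = c_in * q / 1000.0
load_out = c_out * q / 1000.0
removal = 1.0 - load_out / load_in
print(f"inflow load: {load_in:.0f} g/day, removal: {100 * removal:.0f}%")
```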
NASA Astrophysics Data System (ADS)
Wu, Leyuan
2018-01-01
We present a brief review of gravity forward algorithms in the Cartesian coordinate system, including both space-domain and Fourier-domain approaches, after which we introduce a truly general and efficient algorithm, namely the convolution-type Gauss fast Fourier transform (Conv-Gauss-FFT) algorithm, for 2D and 3D modeling of the gravity potential and its derivatives due to sources with arbitrary geometry and arbitrary density distribution, defined either by discrete or by continuous functions. The Conv-Gauss-FFT algorithm is based on the combined use of a hybrid rectangle-Gaussian grid and the fast Fourier transform (FFT) algorithm. Since the gravity forward problem in the Cartesian coordinate system can be expressed as continuous convolution-type integrals, we first approximate the continuous convolution by a weighted sum of a series of shifted discrete convolutions, and then each shifted discrete convolution, which is essentially a Toeplitz system, is calculated efficiently and accurately by combining circulant embedding with the FFT algorithm. Synthetic and real model tests show that the Conv-Gauss-FFT algorithm can obtain high-precision forward results very efficiently for almost any practical model, and it works especially well for complex 3D models when gravity fields on large 3D regular grids are needed.
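The core trick, linear (non-circular) convolution computed through zero padding ("circulant embedding") and the FFT, can be sketched on a toy 2-D grid as follows; the kernel here is a stand-in, not the paper's actual gravity kernel or its Gaussian-grid weighting.

```python
# Linear 2-D convolution via zero padding + FFT, the Toeplitz/circulant
# embedding step behind Conv-Gauss-FFT, shown on a toy density grid.
import numpy as np

def fft_linear_convolve2d(density, kernel):
    ny = density.shape[0] + kernel.shape[0] - 1
    nx = density.shape[1] + kernel.shape[1] - 1
    F = np.fft.rfft2(density, s=(ny, nx)) * np.fft.rfft2(kernel, s=(ny, nx))
    return np.fft.irfft2(F, s=(ny, nx))

density = np.random.rand(64, 64)      # toy density grid
x = np.linspace(-1, 1, 33)
X, Z = np.meshgrid(x, x)
kernel = 1.0 / (X**2 + Z**2 + 0.05)   # stand-in 1/r-type kernel (softened)

field = fft_linear_convolve2d(density, kernel)
print(field.shape)                    # (96, 96): full linear convolution
```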
Bayesian analysis for erosion modelling of sediments in combined sewer systems.
Kanso, A; Chebbo, G; Tassin, B
2005-01-01
Previous research has confirmed that the sediments at the bed of combined sewer systems are the main source of particulate and organic pollution during rain events contributing to combined sewer overflows. However, existing urban stormwater models utilize inappropriate sediment transport formulas initially developed for alluvial hydrodynamics. Recently, a model has been formulated and thoroughly assessed, based on laboratory experiments, to simulate the erosion of sediments in sewer pipes, taking into account the increase in strength with depth in the weak layer of deposits. In order to objectively evaluate this model, this paper presents a Bayesian analysis of the model using field data collected in sewer pipes in Paris under known hydraulic conditions. The test was performed using an MCMC sampling method for calibration and uncertainty assessment. Results demonstrate the capacity of the model to reproduce erosion as a direct response to the increase in bed shear stress. This is due to the model's description of the erosional strength in the deposits and to the shape of the measured bed shear stress. However, large uncertainties in some of the model parameters suggest that the model could be over-parameterized and requires a large amount of informative data for its calibration.
Prediction of Down-Gradient Impacts of DNAPL Source Depletion Using Tracer Techniques
NASA Astrophysics Data System (ADS)
Basu, N. B.; Fure, A. D.; Jawitz, J. W.
2006-12-01
Four simplified DNAPL source depletion models that have been discussed in the literature recently are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. One of the source depletion models, the equilibrium streamtube model, is shown to be relatively easily parameterized using non-reactive and reactive tracers. Non-reactive tracers are used to characterize the aquifer heterogeneity while reactive tracers are used to describe the mean DNAPL mass and its distribution. This information is then used in a Lagrangian framework to predict source remediation performance. In a Lagrangian approach the source zone is conceptualized as a collection of non-interacting streamtubes with hydrodynamic and DNAPL heterogeneity represented by the variation of the travel time and DNAPL saturation among the streamtubes. The travel time statistics are estimated from the non-reactive tracer data while the DNAPL distribution statistics are estimated from the reactive tracer data. The combined statistics are used to define an analytical solution for contaminant dissolution under natural gradient flow. The tracer prediction technique compared favorably with results from a multiphase flow and transport simulator UTCHEM in domains with different hydrodynamic heterogeneity (variance of the log conductivity field = 0.2, 1 and 3).
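A minimal sketch of one such simple analog, a power-function source depletion model of the form C/C0 = (M/M0)^Γ coupled to the source mass balance dM/dt = -Q·C (a formulation of the kind discussed in this literature); all parameter values below are hypothetical.

```python
# Power-function source depletion sketch: C/C0 = (M/M0)**gamma with
# dM/dt = -Q*C, integrated by simple forward Euler. Parameters are
# hypothetical, for illustration only.
C0, M0 = 50.0, 1000.0   # initial concentration (g/m^3) and source mass (g)
Q = 1.0                 # water flux through the source zone (m^3/day)
gamma = 1.2             # depletion exponent (heterogeneity-dependent)

dt, t_end = 1.0, 2000.0
M, t, series = M0, 0.0, []
while t < t_end and M > 0.0:
    C = C0 * (M / M0) ** gamma   # flux-averaged concentration
    M = max(M - Q * C * dt, 0.0) # deplete source mass
    t += dt
    series.append((t, C))

print(f"concentration after {series[-1][0]:.0f} days: {series[-1][1]:.2f} g/m^3")
```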
A Seismic Source Model for Central Europe and Italy
NASA Astrophysics Data System (ADS)
Nyst, M.; Williams, C.; Onur, T.
2006-12-01
We present a seismic source model for Central Europe (Belgium, Germany, Switzerland, and Austria) and Italy, as part of an overall seismic risk and loss modeling project for this region. A separate presentation at this conference discusses the probabilistic seismic hazard and risk assessment (Williams et al., 2006). Where available, we adopt regional consensus models and adjust them to fit our format; otherwise we develop our own model. Our seismic source model covers the whole region under consideration and consists of the following components: 1. A subduction zone environment in Calabria, SE Italy, with interface events between the Eurasian and African plates and intraslab events within the subducting slab. The subduction zone interface is parameterized as a set of dipping area sources that follow the geometry of the surface of the subducting plate, whereas intraslab events are modeled as plane sources at depth; 2. The main normal faults in the upper crust along the Apennines mountain range, in Calabria and Central Italy. Dipping faults and (sub-)vertical faults are parameterized as dipping plane and line sources, respectively; 3. The Upper and Lower Rhine Graben regime that runs from northern Italy into eastern Belgium, parameterized as a combination of dipping plane and line sources; and finally 4. Background seismicity, parameterized as area sources. The fault model is based on slip rates using characteristic recurrence. The modeling of background and subduction zone seismicity is based on a compilation of several national and regional historic seismic catalogs using a Gutenberg-Richter recurrence model. Merging the catalogs involves deleting duplicate, spurious and very old events and applying a declustering algorithm (Reasenberg, 2000). The resulting catalog contains a little over 6000 events, has an average b-value of -0.9, is complete for moment magnitudes 4.5 and larger, and is used to compute a gridded a-value model (smoothed historical seismicity) for the region. The logic tree weights various completeness intervals and minimum magnitudes. Using a weighted scheme of European and global ground motion models together with a detailed site classification map for Europe based on Eurocode 8, we generate hazard maps for recurrence periods of 200, 475, 1000 and 2500 years.
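A minimal sketch of the Gutenberg-Richter recurrence used for the background seismicity, log10 N(m) = a - b·m; the a-value below is illustrative rather than one of the gridded values computed in the study.

```python
# Gutenberg-Richter recurrence sketch: cumulative annual rate of events
# with magnitude >= m. The a-value is illustrative; b matches the order
# of the catalog value quoted in the abstract.
import numpy as np

a, b = 4.0, 0.9
m = np.arange(4.5, 7.6, 0.5)
rate = 10.0 ** (a - b * m)   # events per year with magnitude >= m

for mi, ri in zip(m, rate):
    print(f"M>={mi:.1f}: {ri:.3f} events/yr (return period {1.0 / ri:.0f} yr)")
```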
NASA Astrophysics Data System (ADS)
Cain, Michelle; France, James; Pyle, John; Warwick, Nicola; Fisher, Rebecca; Lowry, Dave; Allen, Grant; O'Shea, Sebastian; Illingworth, Samuel; Jones, Ben; Gallagher, Martin; Welpott, Axel; Muller, Jennifer; Bauguitte, Stephane; George, Charles; Hayman, Garry; Manning, Alistair; Myhre, Catherine Lund; Lanoisellé, Mathias; Nisbet, Euan
2016-04-01
An airmass of enhanced methane was sampled during a research flight at ~600 m to ~2000 m altitude between the north coast of Norway and Svalbard on 21 July 2012. The largest source of methane in the summertime Arctic is wetland emissions. Did this enhancement in methane come from wetland emissions? The airmass was identified through continuous methane measurements using a Los Gatos fast greenhouse gas analyser on board the UK's BAe-146 Atmospheric Research Aircraft (ARA) as part of the MAMM (Methane in the Arctic: Measurements and Modelling) campaign. A Lagrangian particle dispersion model (the UK Met Office's NAME model) was run backwards to identify potential methane source regions. This was combined with a methane emission inventory to create "pseudo observations" for comparison with the aircraft observations. This modelling was used to constrain the δ13C CH4 wetland source signature (where δ13C CH4 is the ratio of 13C to 12C in methane), resulting in a most likely signature of -73‰ (+4‰/-7‰). The NAME back trajectories suggest a methane source region in north-western Russian wetlands, and -73‰ is consistent with in situ measurements of wetland methane at similar latitudes in Scandinavia. This analysis has allowed us to study emissions from remote regions for which we have no in situ observations, giving us an extra tool in the determination of the isotopic source variation of global methane emissions.
Fan, Jin; Yue, Xiaoying; Sun, Qinghua; Wang, Shigong
2017-06-01
A severe dust event occurred from April 23 to April 27, 2014, in East Asia. A state-of-the-art online atmospheric chemistry model, WRF/Chem, was combined with a dust model, GOCART, to better understand the entire process of this event. Natural-color images and aerosol optical depth (AOD) over the dust source region, derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Aqua satellite, were used to trace the dust variation and to verify the model results. Several meteorological fields, such as pressure, temperature, wind vectors and relative humidity, were used to analyze the meteorological dynamics. The results suggest that dust emission occurred only on April 23 and 24, although the event lasted for 5 days. The Gobi Desert was the main source for this event, and the Taklamakan Desert played no important role. This study also suggests that the landform of the source region can strongly influence a dust event. The Tarim Basin has a topographic effect as a "dust reservoir": it can store unsettled dust, which can be released again as a secondary source, making a dust event longer and heavier.
Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.
2015-12-01
The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. Thus the ShakeAlert system requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order to both assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
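As a toy illustration of combining independent algorithm reports, the sketch below fuses Gaussian magnitude estimates by inverse-variance weighting. This is a generic Bayesian fusion rule under assumed Gaussian errors, not the actual CDM implementation; all numbers are hypothetical.

```python
# Generic Bayesian fusion of independent Gaussian estimates: the posterior
# mean is the inverse-variance-weighted mean of the reports.
import numpy as np

estimates = np.array([6.1, 5.8, 6.4])   # magnitude from three algorithms (toy)
sigmas = np.array([0.3, 0.2, 0.5])      # their reported 1-sigma uncertainties

w = 1.0 / sigmas**2
m_post = np.sum(w * estimates) / np.sum(w)
sigma_post = np.sqrt(1.0 / np.sum(w))
print(f"combined magnitude: {m_post:.2f} +/- {sigma_post:.2f}")
```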
SU-E-T-278: Realization of Dose Verification Tool for IMRT Plan Based On DPM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Jinfeng; Cao, Ruifen; Dai, Yumei
Purpose: To build a Monte Carlo dose verification tool for IMRT plans by implementing an irradiation source model in the DPM code, extending DPM's ability to handle arbitrary incidence angles and irregular, inhomogeneous fields. Methods: The virtual source and the energy spectrum unfolded from accelerator measurement data were combined with optimized intensity maps to calculate the dose distribution of irregular, inhomogeneous irradiation fields. The accelerator's irradiation source model was replaced by a grid-based surface source. The contour and intensity distribution of the surface source were optimized by the ARTS (Accurate/Advanced Radiotherapy System) optimization module based on the tumor configuration. The weight of each emitter was determined by the grid intensity, and its direction by the combination of the virtual source position and the emitter's position. The photon energy spectrum unfolded from the accelerator measurement data was adjusted by compensating for the contaminant electron source. For verification, measured data and a realistic clinical IMRT plan were compared with DPM dose calculations. Results: The regular field was verified against measured data, and the differences were acceptable (<2% inside the field, 2-3 mm in the penumbra). The dose calculation of an irregular field by DPM simulation was also compared with that of FSPB (Finite Size Pencil Beam), with a gamma-analysis passing rate of 95.1% for a peripheral lung cancer case. Both the regular field and the irregular rotational field were within the permitted error range. The computing time for regular fields was less than 2 h, and the peripheral lung cancer test took 160 min. Through parallel processing, the adapted DPM could complete the calculation of an IMRT plan within half an hour. Conclusion: The adapted, parallelized DPM code with the irradiation source model is faster than classic Monte Carlo codes. Its computational accuracy and speed satisfy clinical requirements, and it is expected to serve as a Monte Carlo dose verification tool for IMRT plans. Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000); National Natural Science Foundation of China (81101132)
Samuel, Jonathan C; Sankhulani, Edward; Qureshi, Javeria S; Baloyi, Paul; Thupi, Charles; Lee, Clara N; Miller, William C; Cairns, Bruce A; Charles, Anthony G
2012-01-01
Road traffic injuries are a major cause of preventable death in sub-Saharan Africa. Accurate epidemiologic data are scarce and under-reporting from primary data sources is common. Our objectives were to estimate the incidence of road traffic deaths in Malawi using capture-recapture statistical analysis and to determine what future efforts will best improve upon this estimate. Our capture-recapture model combined primary data from both police and hospital-based registries over a one-year period (July 2008 to June 2009). The mortality incidences from the primary data sources were 0.075 and 0.051 deaths/1000 person-years, respectively. Using capture-recapture analysis, the combined incidence of road traffic deaths ranged from 0.192 to 0.209 deaths/1000 person-years. Additionally, police data were more likely to include victims who were male, drivers or pedestrians, and victims from incidents with more than one vehicle involved. We concluded that capture-recapture analysis is a good tool to estimate the incidence of road traffic deaths and that it overcomes the limitations of incomplete data sources. The World Health Organization estimated the incidence of road traffic deaths for Malawi utilizing a binomial regression model and survey data and found a similar estimate despite strikingly different methods, suggesting both approaches are valid. Further research should seek to improve capture-recapture data through the use of more than two data sources and by improving the accuracy of matches by minimizing missing data, applying geographic information systems, and using names and civil registration numbers if available.
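A minimal sketch of the underlying two-source capture-recapture estimate, here using Chapman's variant of the Lincoln-Petersen estimator; the counts and population below are hypothetical, not the study's data.

```python
# Two-source capture-recapture sketch (Chapman's variant of the
# Lincoln-Petersen estimator). All counts are hypothetical.
n_police, n_hospital, n_both = 120, 80, 30   # deaths per registry, matched in both

N_hat = (n_police + 1) * (n_hospital + 1) / (n_both + 1) - 1
print(f"estimated total road traffic deaths: {N_hat:.0f}")

# Incidence per 1000 person-years for an assumed population size:
population = 1.3e7
print(f"incidence: {1000 * N_hat / population:.3f} deaths/1000 person-years")
```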
NASA Astrophysics Data System (ADS)
Restrepo-Estrada, Camilo; de Andrade, Sidgley Camargo; Abe, Narumi; Fava, Maria Clara; Mendiondo, Eduardo Mario; de Albuquerque, João Porto
2018-02-01
Floods are one of the most devastating types of worldwide disasters in terms of human, economic, and social losses. If authoritative data is scarce, or unavailable for some periods, other sources of information are required to improve streamflow estimation and early flood warnings. Georeferenced social media messages are increasingly being regarded as an alternative source of information for coping with flood risks. However, existing studies have mostly concentrated on the links between geo-social media activity and flooded areas. Thus, there is still a gap in research with regard to the use of social media as a proxy for rainfall-runoff estimations and flood forecasting. To address this, we propose using a transformation function that creates a proxy variable for rainfall by analysing geo-social media messages and rainfall measurements from authoritative sources, which are later incorporated within a hydrological model for streamflow estimation. We found that the combined use of official rainfall values with the social media proxy variable as input for the Probability Distributed Model (PDM), improved streamflow simulations for flood monitoring. The combination of authoritative sources and transformed geo-social media data during flood events achieved a 71% degree of accuracy and a 29% underestimation rate in a comparison made with real streamflow measurements. This is a significant improvement on the respective values of 39% and 58%, achieved when only authoritative data were used for the modelling. This result is clear evidence of the potential use of derived geo-social media data as a proxy for environmental variables for improving flood early-warning systems.
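A minimal sketch of the idea of a transformation function blending a geo-social-media-derived proxy with authoritative rainfall before it enters a rainfall-runoff model; the saturating functional form, the weights, and the function name are assumptions for illustration, not the paper's calibrated function.

```python
# Toy transformation function: blend a message-count-derived rainfall proxy
# with gauge rainfall. Functional form and coefficients are assumed.
import numpy as np

def rainfall_proxy(msg_count, gauge_mm, k=0.8, alpha=0.6):
    """Blended rainfall input (mm): alpha weights the authoritative gauge."""
    proxy = k * np.log1p(msg_count)   # saturating response to message activity
    return alpha * gauge_mm + (1.0 - alpha) * proxy

print(rainfall_proxy(np.array([0, 5, 50]), np.array([0.0, 2.0, 10.0])))
```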
Adding source positions to the IVS Combination
NASA Astrophysics Data System (ADS)
Bachmann, S.; Thaller, D.
2016-12-01
Simultaneous estimation of source positions, Earth orientation parameters (EOPs) and station positions in one common adjustment is crucial for a consistent generation of celestial and terrestrial reference frame (CRF and TRF, respectively). VLBI is the only technique to guarantee this consistency. Previous publications showed that the VLBI intra-technique combination could improve the quality of the EOPs and station coordinates compared to the individual contributions. By now, the combination of EOP and station coordinates is well established within the IVS and in combination with other space geodetic techniques (e.g. inter-technique combined TRF like the ITRF). Most of the contributing IVS Analysis Centers (AC) now provide source positions as a third parameter type (besides EOP and station coordinates), which have not been used for an operational combined solution yet. A strategy for the combination of source positions has been developed and integrated into the routine IVS combination. Investigations are carried out to compare the source positions derived from different IVS ACs with the combined estimates to verify whether the source positions are improved by the combination, as it has been proven for EOP and station coordinates. Furthermore, global solutions of source positions, i.e., so-called catalogues describing a CRF, are generated consistently with the TRF similar to the IVS operational combined quarterly solution. The combined solutions of the source positions time series and the consistently generated TRF and CRF are compared internally to the individual solutions of the ACs as well as to external CRF catalogues and TRFs. Additionally, comparisons of EOPs based on different CRF solutions are presented as an outlook for consistent EOP, CRF and TRF realizations.
NASA Astrophysics Data System (ADS)
Weatherill, Graeme; Burton, Paul W.
2010-09-01
The Aegean is the most seismically active and tectonically complex region in Europe. Damaging earthquakes have occurred here throughout recorded history, often resulting in considerable loss of life. The Monte Carlo method of probabilistic seismic hazard analysis (PSHA) is used to determine the level of ground motion likely to be exceeded in a given time period. Multiple random simulations of seismicity are generated to calculate, directly, the ground motion for a given site. Within the seismic hazard analysis we explore the impact of different seismic source models, incorporating both uniform zones and distributed seismicity. A new, simplified seismic source model, derived from seismotectonic interpretation, is presented for the Aegean region. This is combined in the epistemic uncertainty analysis with existing source models for the region and with models derived by a K-means cluster analysis approach. Seismic source models derived using the K-means approach offer a degree of objectivity and reproducibility to the otherwise subjective approach of delineating seismic sources using expert judgment. Similar review and analysis is undertaken for the selection of peak ground acceleration (PGA) attenuation models, incorporating into the epistemic analysis Greek-specific models, European models and a Next Generation Attenuation model. Hazard maps for PGA on a "rock" site with a 10% probability of being exceeded in 50 years are produced and different source and attenuation models are compared. These indicate that Greek-specific attenuation models, with their smaller aleatory variability terms, produce lower PGA hazard, whilst recent European models and the Next Generation Attenuation (NGA) model produce similar results. The Monte Carlo method is extended further to assimilate epistemic uncertainty into the hazard calculation, thus integrating across several appropriate source and PGA attenuation models. Site condition and fault type are also integrated into the hazard mapping calculations. These hazard maps are in general agreement with previous maps for the Aegean, recognising the highest hazard in the Ionian Islands, Gulf of Corinth and Hellenic Arc. Peak ground accelerations for some sites in these regions reach as high as 500-600 cm s⁻² using European/NGA attenuation models, and 400-500 cm s⁻² using Greek attenuation models.
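A toy version of the Monte Carlo hazard calculation described above: simulate synthetic catalogs from a truncated Gutenberg-Richter source, attenuate each event to a site with an assumed ground-motion relation, and read off the PGA with a 10% exceedance probability in 50 years. All coefficients below are illustrative, not any of the study's attenuation models.

```python
# Toy Monte Carlo PSHA: synthetic catalogs -> site PGA -> exceedance quantile.
import numpy as np

rng = np.random.default_rng(1)
years, rate, b, m_min, m_max = 50, 2.0, 0.9, 4.5, 7.5
n_sims = 2000

beta = b * np.log(10.0)
trunc = 1.0 - np.exp(-beta * (m_max - m_min))
max_pga = np.empty(n_sims)
for i in range(n_sims):
    n = rng.poisson(rate * years)                  # events in one 50-yr catalog
    u = rng.random(n)
    m = m_min - np.log(1.0 - u * trunc) / beta     # truncated G-R magnitudes
    r = rng.uniform(10.0, 100.0, n)                # source-site distance (km)
    ln_pga = -4.0 + 1.0 * m - 1.1 * np.log(r) + rng.normal(0.0, 0.6, n)
    max_pga[i] = np.exp(ln_pga).max() if n else 0.0

pga_475 = np.quantile(max_pga, 0.9)   # 10% in 50 yr ~ 475-yr return period
print(f"PGA with 10% exceedance in 50 yr: {pga_475:.3f} g (toy units)")
```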
Analysis of Asian Outflow over the Western Pacific using Observations from Trace-P
NASA Technical Reports Server (NTRS)
Jacob, Daniel J.
2004-01-01
Our analysis of the TRACE-P data focused on answering the following questions: 1) How do anthropogenic sources in Asia contribute to chemical outflow over the western Pacific in spring? 2) How does biomass burning in southeast Asia contribute to this outflow? 3) How can the TRACE-P observations be used to better quantify the sources of environmentally important gases in eastern Asia? Our strategy drew on a combination of data analysis and global 3-D modeling, as described below. We also contributed to the planning and execution of TRACE-P through service as mission scientist and by providing chemical model forecasts in the field.
Accelerating Exploitation of Low-grade Intelligence through Semantic Text Processing of Social Media
2013-06-01
importance as an information source. The brevity of social media content (e.g., 140 characters per tweet) combined with the increasing usage of mobile... platform imports unstructured text from a variety of sources and then maps the text to an existing ontology of frames (FrameNet, https...framenet.icsi.berkeley.edu/fndrupal/) during a process of Semantic Role Labeling (SRL). FrameNet is a structured language model grounded in the theory of Frame Semantics.
ISO deep far-infrared survey in the Lockman Hole
NASA Astrophysics Data System (ADS)
Kawara, K.; Sato, Y.; Matsuhara, H.; Taniguchi, Y.; Okuda, H.; Sofue, Y.; Matsumoto, T.; Wakamatsu, K.; Cowie, L. L.; Joseph, R. D.; Sanders, D. B.
1999-03-01
Two 44 arcmin × 44 arcmin fields in the Lockman Hole were mapped at 95 and 175 μm using ISOPHOT. A simple program combined with PIA works well to correct for drift in the detector responsivity. The number density of 175 μm sources is 3-10 times higher than expected from the no-evolution model. The source counts at 95 and 175 μm are consistent with the cosmic infrared background.
Mapping the spatio-temporal risk of lead exposure in apex species for more effective mitigation
Mateo-Tomás, Patricia; Olea, Pedro P.; Jiménez-Moreno, María; Camarero, Pablo R.; Sánchez-Barbudo, Inés S.; Rodríguez Martín-Doimeadios, Rosa C.; Mateo, Rafael
2016-01-01
Effective mitigation of the risks posed by environmental contaminants to ecosystem integrity and human health requires knowing their sources and spatio-temporal distribution. We analysed exposure to lead (Pb) in the griffon vulture (Gyps fulvus), an apex species valuable as a biomonitoring sentinel. We determined the vultures' lead exposure and its main sources by combining isotope signatures and modelling analyses of 691 bird blood samples collected over 5 years. We made yearlong, spatially explicit predictions of the species' risk of lead exposure. Our results highlight elevated lead exposure of griffon vultures (44.9% of the studied population, approximately 15% of the European population, showed blood lead levels above 200 ng ml⁻¹), partly owing to environmental lead (e.g. geological sources). Exposure to environmental lead from geological sources increased in vultures additionally exposed to point sources (e.g. lead-based ammunition). These spatial models and pollutant risk maps are powerful tools that identify areas of wildlife exposure to potentially harmful sources of lead that could affect ecosystem and human health.
Ultrasound acoustic wave energy transfer and harvesting
NASA Astrophysics Data System (ADS)
Shahab, Shima; Leadenham, Stephen; Guillot, François; Sabra, Karim; Erturk, Alper
2014-04-01
This paper investigates low-power electricity generation from ultrasound acoustic wave energy transfer combined with piezoelectric energy harvesting for wireless applications ranging from medical implants to naval sensor systems. The focus is placed on an underwater system that consists of a pulsating source for spherical wave generation and a harvester connected to an external resistive load for quantifying the electrical power output. An analytical electro-acoustic model is developed to relate the source strength to the electrical power output of the harvester located at a specific distance from the source. The model couples the energy harvester dynamics (piezoelectric device and electrical load) with the source strength through the acoustic-structure interaction at the harvester-fluid interface. Case studies are given for a detailed understanding of the coupled system dynamics under various conditions. Specifically the relationship between the electrical power output and system parameters, such as the distance of the harvester from the source, dimensions of the harvester, level of source strength, and electrical load resistance are explored. Sensitivity of the electrical power output to the excitation frequency in the neighborhood of the harvester's underwater resonance frequency is also reported.
Integrating Stomach Content and Stable Isotope Analyses to Quantify the Diets of Pygoscelid Penguins
Polito, Michael J.; Trivelpiece, Wayne Z.; Karnovsky, Nina J.; Ng, Elizabeth; Patterson, William P.; Emslie, Steven D.
2011-01-01
Stomach content analysis (SCA) and, more recently, stable isotope analysis (SIA) integrated with isotopic mixing models have become common methods for dietary studies and provide insight into the foraging ecology of seabirds. However, both methods have drawbacks and biases that may make it difficult to quantify inter-annual and species-specific differences in diets. We used these two methods to simultaneously quantify the chick-rearing diet of Chinstrap (Pygoscelis antarctica) and Gentoo (P. papua) penguins and highlight methods of integrating SCA data to increase the accuracy of diet composition estimates using SIA. SCA biomass estimates were highly variable and underestimated the importance of soft-bodied prey such as fish. Two-source isotopic mixing model predictions were less variable and identified inter-annual and species-specific differences in the relative amounts of fish and krill in penguin diets not readily apparent using SCA. In contrast, multi-source isotopic mixing models had difficulty estimating the dietary contribution of fish species occupying similar trophic levels without refinement using SCA-derived otolith data. Overall, our ability to track inter-annual and species-specific differences in penguin diets using SIA was enhanced by integrating SCA data into isotopic mixing models in three ways: 1) selecting appropriate prey sources, 2) weighting combinations of isotopically similar prey in two-source mixing models, and 3) refining predicted contributions of isotopically similar prey in multi-source models.
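A minimal sketch of the standard two-source, one-isotope mixing model mentioned above: the fraction of prey 1 in the diet follows from a linear mass balance. The delta values and the trophic discrimination factor below are hypothetical.

```python
# Two-source, one-isotope mixing model: linear mass balance on delta values.
# All numeric values are hypothetical, for illustration only.
def two_source_fraction(d_consumer, d_prey1, d_prey2, discrimination=0.0):
    """Fraction of prey 1 in the diet; d_* are delta values (per mil)."""
    d_adj = d_consumer - discrimination   # correct for trophic discrimination
    return (d_adj - d_prey2) / (d_prey1 - d_prey2)

f_fish = two_source_fraction(d_consumer=9.5, d_prey1=11.0, d_prey2=7.0,
                             discrimination=1.0)
print(f"fish fraction: {f_fish:.2f}, krill fraction: {1.0 - f_fish:.2f}")
```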
Toni, Tina; Tidor, Bruce
2013-01-01
Biological systems are inherently variable, with their dynamics influenced by intrinsic and extrinsic sources. These systems are often only partially characterized, with large uncertainties about specific sources of extrinsic variability and biochemical properties. Moreover, it is not yet well understood how different sources of variability combine and affect biological systems in concert. To successfully design biomedical therapies or synthetic circuits with robust performance, it is crucial to account for uncertainty and the effects of variability. Here we introduce an efficient modeling and simulation framework to study systems that are simultaneously subject to multiple sources of variability, and apply it to make design decisions on small genetic networks that serve as basic design elements of synthetic circuits. Specifically, the framework was used to explore the effect of transcriptional and post-transcriptional autoregulation on fluctuations in protein expression in simple genetic networks. We found that autoregulation could either suppress or increase the output variability, depending on the specific noise sources and network parameters. We showed that transcriptional autoregulation was more successful than post-transcriptional in suppressing variability across a wide range of intrinsic and extrinsic magnitudes and sources. We derived the following design principles to guide the design of circuits that best suppress variability: (i) high protein cooperativity and low miRNA cooperativity, (ii) imperfect complementarity between miRNA and mRNA was preferred to perfect complementarity, and (iii) correlated expression of mRNA and miRNA (for example, on the same transcript) was best for suppression of protein variability. Results further showed that correlations in kinetic parameters between cells affected the ability to suppress variability, and that variability in transient states did not necessarily follow the same principles as variability in the steady state. Our model and findings provide a general framework to guide design principles in synthetic biology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, F; Park, J; Barraclough, B
2016-06-15
Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF)-IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and collimator exchange effect. The in-air fluence was first calculated by back-projecting the edges of beam defining devices onto the source plane and integrating the visible source distribution. The effects of the rounded MLC leaf end, tongue-and-groove design and interleaf transmission were taken into account in the back-projection. The in-air fluence was then modified with a fourth degree polynomial modeling the cone-shaped dose distribution of FFF beams. The planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6 MV and 10 MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2 mm, local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
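The central step, convolving the in-air fluence with a triple-Gaussian dose deposition kernel, can be sketched in a few lines; the kernel weights and widths and the toy FFF cone falloff below are placeholders, not the commissioned values from the study:

    import numpy as np
    from scipy.signal import fftconvolve

    def triple_gaussian_kernel(x, y, params):
        """DDK as a sum of three normalized 2D Gaussians: params = [(weight, sigma), ...]."""
        r2 = x**2 + y**2
        return sum(w * np.exp(-r2 / (2 * s**2)) / (2 * np.pi * s**2) for w, s in params)

    grid = np.arange(-50, 50.25, 0.25)                 # 0.25 mm grid, +/- 50 mm
    X, Y = np.meshgrid(grid, grid)
    ddk = triple_gaussian_kernel(X, Y, [(0.8, 0.4), (0.15, 2.0), (0.05, 10.0)])

    fluence = np.zeros_like(X)
    fluence[(np.abs(X) < 30) & (np.abs(Y) < 30)] = 1.0 # open 6 x 6 cm field
    fluence *= 1.0 - 1e-4 * (X**2 + Y**2)              # toy cone-shaped FFF falloff

    dose = fftconvolve(fluence, ddk, mode="same") * 0.25**2  # scale by pixel area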
NASA Technical Reports Server (NTRS)
Matthews, Elaine; Walter, B.; Bogner, J.; Sarma, D.; Portmey, G.; Travis, Larry (Technical Monitor)
2001-01-01
In situ measurements of atmospheric methane concentrations begun in the early 1980s show decadal trends, as well as large interannual variations, in growth rate. Recent research indicates that while wetlands can explain several of the large growth anomalies for individual years, the decadal trend may be the combined effect of increasing sinks, due to increases in tropospheric OH, and stabilizing sources. We discuss new 20-year histories of annual, global source strengths for all major methane sources, i.e., natural wetlands, rice cultivation, ruminant animals, landfills, fossil fuels, and biomass burning. We also present estimates of the temporal pattern of the sink required to reconcile these sources and atmospheric concentrations over this time period. Analysis of the individual emission sources, together with model-derived estimates of the OH sink strength, indicates that the growth rate of atmospheric methane observed over the last 20 years can only be explained by a combination of changes in source emissions and an increasing tropospheric sink. Direct validation of the global sources and the terrestrial sink is not straightforward, in part because some sources/sinks are relatively small and diffuse (e.g., landfills and soil consumption), as well as because the atmospheric record integrates multiple and substantial sources and tropospheric sinks in regions such as the tropics. We discuss ways to develop and test criteria for rejecting and/or accepting a suite of scenarios for the methane budget.
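A one-box budget caricature (my simplification, not the authors' analysis; the burden, source, and lifetime figures are round-number assumptions) shows how stabilized sources combined with a slowly strengthening OH sink yield a declining growth rate:

    # dC/dt = S - C/tau: constant sources, lifetime shrinking ~0.5%/yr as OH rises
    burden = 4800.0      # Tg CH4, assumed global burden circa 1980
    source = 540.0       # Tg/yr, held constant ("stabilizing sources")
    lifetime = 9.0       # yr, assumed initial tropospheric lifetime (mostly OH)
    for year in range(1980, 2001):
        growth = source - burden / lifetime   # Tg/yr
        burden += growth
        lifetime *= 1.0 - 0.005               # increasing tropospheric OH sink
        if year % 5 == 0:
            print(year, f"growth {growth:6.1f} Tg/yr")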
Investigating source processes of isotropic events
NASA Astrophysics Data System (ADS)
Chiang, Andrea
This dissertation demonstrates the utility of the complete waveform regional moment tensor inversion for nuclear event discrimination. I explore the source processes and associated uncertainties for explosions and earthquakes under the effects of limited station coverage, compound seismic sources, assumptions in velocity models and the corresponding Green's functions, and the effects of shallow source depth and free-surface conditions. The motivation to develop better techniques to obtain reliable source mechanisms and assess uncertainties is not limited to nuclear monitoring; such techniques also provide quantitative information about the characteristics of seismic hazards, local and regional tectonics, and in situ stress fields of the region. This dissertation begins with the analysis of three sparsely recorded events: the 14 September 1988 US-Soviet Joint Verification Experiment (JVE) nuclear test at the Semipalatinsk test site in Eastern Kazakhstan, and two nuclear explosions at the Chinese Lop Nor test site. We utilize a regional-distance seismic waveform method, fitting long-period, complete, three-component waveforms jointly with first-motion observations from regional stations and teleseismic arrays. The combination of long-period waveforms and first-motion observations provides unique discrimination of these sparsely recorded events in the context of the Hudson et al. (1989) source-type diagram. We examine the effects of the free surface on the moment tensor via synthetic testing, and apply the moment tensor based discrimination method to well-recorded chemical explosions. These shallow chemical explosions represent a rather severe source-station geometry in terms of vanishing-traction issues. We show that the combined waveform and first-motion method enables the unique discrimination of these events, even though the data include unmodeled single-force components resulting from the collapse and blowout of the quarry face immediately following the initial explosion. In contrast, recovering the announced explosive yield using seismic moment estimates from moment tensor inversion remains challenging, but we can begin to put error bounds on our moment estimates using the NSS technique. The estimation of seismic source parameters is dependent upon having a well-calibrated velocity model to compute the Green's functions for the inverse problem. Ideally, seismic velocity models are calibrated through broadband waveform modeling; however, in regions of low seismicity, velocity models derived from body- or surface-wave tomography may be employed. Whether a velocity model is 1D or 3D, or based on broadband seismic waveform modeling or the various tomographic techniques, the uncertainty in the velocity model can be the greatest source of error in moment tensor inversion. These errors have not been fully investigated for the nuclear discrimination problem. To study the effects of unmodeled structures on the moment tensor inversion, we set up a synthetic experiment where we produce synthetic seismograms for a 3D model (Moschetti et al., 2010) and invert these data using Green's functions computed with a 1D velocity model (Song et al., 1996) to evaluate the recoverability of input solutions, paying particular attention to biases in the isotropic component. The synthetic experiment results indicate that the 1D model assumption is valid for moment tensor inversions at periods as short as 10 seconds for the 1D western U.S. model (Song et al., 1996).
The correct earthquake mechanisms and source depth are recovered with statistically insignificant isotropic components as determined by the F-test. Shallow explosions are biased by the theoretical ISO-CLVD tradeoff but the tectonic release component remains low, and the tradeoff can be eliminated with constraints from P wave first motion. Path-calibration to the 1D model can reduce non-double-couple components in earthquakes, non-isotropic components in explosions and composite sources and improve the fit to the data. When we apply the 3D model to real data, at long periods (20-50 seconds), we see good agreement in the solutions between the 1D and 3D models and slight improvement in waveform fits when using the 3D velocity model Green's functions. (Abstract shortened by ProQuest.).
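The algebra at the heart of such an inversion is compact. The sketch below uses a random stand-in for the Green's-function matrix, not one computed from a velocity model, to show the d = Gm least-squares step and the isotropic-component estimate:

    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_mt = 600, 6     # waveform samples; 6 independent moment-tensor elements
    G = rng.standard_normal((n_samples, n_mt))         # stand-in Green's-function matrix
    m_true = np.array([1.0, 1.0, 1.0, 0.1, -0.05, 0.02])  # mostly isotropic source
    d = G @ m_true + 0.05 * rng.standard_normal(n_samples)

    m_est, *_ = np.linalg.lstsq(G, d, rcond=None)      # least-squares inversion
    iso = m_est[:3].mean()                             # isotropic part: mean of Mxx, Myy, Mzz
    print(f"estimated isotropic moment: {iso:.3f}")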
Export of microplastics from land to sea. A modelling approach.
Siegfried, Max; Koelmans, Albert A; Besseling, Ellen; Kroeze, Carolien
2017-12-15
Quantifying the transport of plastic debris from river to sea is crucial for assessing the risks of plastic debris to human health and the environment. We present a global modelling approach to analyse the composition and quantity of point-source microplastic fluxes from European rivers to the sea. The model accounts for different types and sources of microplastics entering river systems via point sources. We combine information on these sources with information on sewage management and plastic retention during river transport for the largest European rivers. Sources of microplastics include personal care products, laundry, household dust and tyre and road wear particles (TRWP). Most of the modelled microplastics exported by rivers to seas are synthetic polymers from TRWP (42%) and plastic-based textiles abraded during laundry (29%). Smaller sources are synthetic polymers and plastic fibres in household dust (19%) and microbeads in personal care products (10%). Microplastic export differs largely among European rivers, as a result of differences in socio-economic development and technological status of sewage treatment facilities. About two-thirds of the microplastics modelled in this study flow into the Mediterranean and Black Sea. This can be explained by the relatively low microplastic removal efficiency of sewage treatment plants in the river basins draining into these two seas. Sewage treatment is generally more efficient in river basins draining into the North Sea, the Baltic Sea and the Atlantic Ocean. We use our model to explore future trends up to the year 2050. Our scenarios indicate that in the future river export of microplastics may increase in some river basins, but decrease in others. Remarkably, for many basins we calculate a reduction in river export of microplastics from point-sources, mainly due to an anticipated improvement in sewage treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Reilly, S; Maynard, M; Marshall, E
Purpose: Limitations seen in previous skeletal dosimetry models, which are still employed in commonly used software today, include the lack of consideration of electron escape and cross-fire from cortical bone, the modeling of infinite spongiosa, the disregard of the effect of varying cellularity on active marrow self-irradiation, and the lack of use of the more recent ICRP definition of a 50 micron surrogate tissue region for the osteoprogenitor cells - shallow marrow. These limitations were addressed in the present dosimetry model. Methods: Electron transport was completed to determine specific absorbed fractions to active marrow and shallow marrow of the skeletal regions of the adult female. The bone macrostructure was obtained from the whole-body hybrid computational phantom of the UF series of reference phantoms, while the bone microstructure was derived from microCT images of skeletal region samples taken from a 45-year-old female cadaver. The target tissue regions were active marrow and shallow marrow. The source tissues were active marrow, inactive marrow, trabecular bone volume, trabecular bone surfaces, cortical bone volume and cortical bone surfaces. The marrow cellularity was varied from 10 to 100 percent for active marrow self-irradiation. A total of 33 discrete electron energies, ranging from 1 keV to 10 MeV, were either simulated or modeled analytically. Results: The method of combining macro- and microstructure absorbed fractions calculated using MCNPX electron transport was found to yield results similar to those determined with the PIRT model for the UF adult male in the Hough et al. study. Conclusion: The calculated skeletal averaged absorbed fractions for each source-target combination were found to follow trends similar to those of more recent (image-based) dosimetry models, and did not follow current models used in nuclear medicine dosimetry at high energies (due to those models' use of an infinite expanse of trabecular spongiosa).
DOT National Transportation Integrated Search
2014-07-01
Freight-carrying truck traffic is common on Florida highways. This traffic has been increasing steadily for many years with ever more impacts on highway efficiency and safety and damage to highways. Reducing the efficient movement of traff...
An Approximation for Computing Reduction in Bandwidth Requirements Using Intelligent Multiplexers
1993-03-01
Naval Postgraduate School, Monterey, CA 93943. IV. MODEL DEVELOPMENT: A. THE ERLANG MODEL. ...channel. Also, the silent periods of a voice or data transmission go unused. For combined voice and data traffic, TDM averages only 10-25% efficiency.
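For reference, the Erlang-B blocking probability named in the recovered table of contents can be computed with the standard numerically stable recursion (a textbook formula, not code from the thesis):

    def erlang_b(offered_load_erlangs: float, channels: int) -> float:
        """Blocking probability B(n, A) via the recursion B(n) = A*B(n-1) / (n + A*B(n-1))."""
        b = 1.0
        for n in range(1, channels + 1):
            b = (offered_load_erlangs * b) / (n + offered_load_erlangs * b)
        return b

    print(erlang_b(10.0, 15))  # 10 erlangs offered to 15 channels -> ~0.036 blocking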
Students' Use of the Energy Model to Account for Changes in Physical Systems
ERIC Educational Resources Information Center
Papadouris, Nico; Constantinou, Constantinos P.; Kyratsi, Theodora
2008-01-01
The aim of this study is to explore the ways in which students, aged 11-14 years, account for certain changes in physical systems and the extent to which they draw on an energy model as a common framework for explaining changes observed in diverse systems. Data were combined from two sources: interviews with 20 individuals and an open-ended…
POI Summarization by Aesthetics Evaluation From Crowd Source Social Media.
Qian, Xueming; Li, Cheng; Lan, Ke; Hou, Xingsong; Li, Zhetao; Han, Junwei
2018-03-01
Place-of-Interest (POI) summarization by aesthetics evaluation can recommend a set of POI images to the user and is significant in image retrieval. In this paper, we propose a system that summarizes a collection of POI images regarding both aesthetics and the diversity of the distribution of cameras. First, we generate visual albums by a coarse-to-fine POI clustering approach and then generate 3D models for each album from the images collected from social media. Second, based on the 3D-to-2D projection relationship, we select candidate photos in terms of the proposed crowd-sourced saliency model. Third, in order to improve the performance of the aesthetic measurement model, we propose a crowd-sourced saliency detection approach that explores the distribution of salient regions in the 3D model. Then, we measure the compositional aesthetics of each image and explore crowd-sourced salient features to yield a saliency map, based on which we propose an adaptive image adoption approach. Finally, we combine diversity and aesthetics to recommend aesthetic pictures. Experimental results show that the proposed POI summarization approach returns images with diverse camera distributions and high aesthetics.
Visual and Vestibular Determinants of Perceived Eye-Level
NASA Technical Reports Server (NTRS)
Cohen, Malcolm Martin
2003-01-01
Both gravitational and optical sources of stimulation combine to determine the perceived elevations of visual targets. The ways in which these sources of stimulation combine with one another in operational aeronautical environments are critical for pilots to make accurate judgments of the relative altitudes of other aircraft and of their own altitude relative to the terrain. In a recent study, my colleagues and I required eighteen observers to set visual targets at their apparent horizon while they experienced various levels of G(sub z) in the human centrifuge at NASA-Ames Research Center. The targets were viewed in darkness and also against specific background optical arrays that were oriented at various angles with respect to the vertical; target settings were lowered as G(sub z) was increased; this effect was reduced when the background optical array was visible. Also, target settings were displaced in the direction that the background optical array was pitched. Our results were attributed to the combined influences of otolith-oculomotor mechanisms that underlie the elevator illusion and visual-oculomotor mechanisms (optostatic responses) that underlie the perceptual effects of viewing pitched optical arrays that comprise the background. In this paper, I present a mathematical model that describes the independent and combined effects of G(sub z) intensity and the orientation and structure of background optical arrays; the model predicts quantitative deviations from normal accurate perceptions of target localization under a variety of conditions. Our earlier experimental results and the mathematical model are described in some detail, and the effects of viewing specific optical arrays under various gravitational-inertial conditions encountered in aeronautical environments are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mankuzhiyil, Nijil; Ansoldi, Stefano; Persic, Massimo
2011-05-20
For the high-frequency-peaked BL Lac object Mrk 421, we study the variation of the spectral energy distribution (SED) as a function of source activity, from quiescent to active. We use a fully automatized χ²-minimization procedure, instead of the 'eyeball' procedure more commonly used in the literature, to model nine SED data sets with a one-zone synchrotron self-Compton (SSC) model and examine how the model parameters vary with source activity. The latter issue can finally be addressed now, because simultaneous broadband SEDs (spanning from optical to very-high-energy photons) have finally become available. Our results suggest that in Mrk 421 the magnetic field (B) decreases with source activity, whereas the electron spectrum's break energy (γ_br) and the Doppler factor (δ) increase; the other SSC parameters turn out to be uncorrelated with source activity. In the SSC framework, these results are interpreted in a picture where the synchrotron power and peak frequency remain constant with varying source activity, through a combination of decreasing magnetic field and increasing number density of γ ≤ γ_br electrons: since this leads to an increased electron-photon scattering efficiency, the resulting Compton power increases, and so does the total (= synchrotron plus Compton) emission.
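A generic χ²-minimization skeleton of the kind described, with a toy log-parabola standing in for the SSC spectrum (the real model and data are not reproduced here):

    import numpy as np
    from scipy.optimize import minimize

    def toy_spectrum(log_nu, params):
        a, b, c = params                  # log-parabola stand-in for the SSC model
        return a + b * log_nu + c * log_nu**2

    def chi2(params, log_nu, log_flux, sigma):
        return np.sum(((log_flux - toy_spectrum(log_nu, params)) / sigma) ** 2)

    log_nu = np.linspace(14, 27, 30)      # optical to VHE, log10(Hz)
    truth = (-10.0, 0.8, -0.05)
    rng = np.random.default_rng(2)
    log_flux = toy_spectrum(log_nu, truth) + 0.1 * rng.standard_normal(30)

    fit = minimize(chi2, x0=(-9.0, 0.5, -0.01), args=(log_nu, log_flux, 0.1))
    print(fit.x)                          # recovered parameters, close to `truth`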
Architecture for spacecraft operations planning
NASA Technical Reports Server (NTRS)
Davis, William S.
1991-01-01
A system which generates plans for the dynamic environment of space operations is discussed. This system synthesizes plans by combining known operations under a set of physical, functional, and temporal constraints from various plan entities, which are modeled independently but combine in a flexible manner to suit dynamic planning needs. This independence allows the generation of a single plan source which can be compiled and applied to a variety of agents. The architecture blends elements of temporal logic, nonlinear planning, and object-oriented constraint modeling to achieve its flexibility. This system was applied to the domain of Intravehicular Activity (IVA) maintenance and repair aboard the Space Station Freedom testbed.
Introductory comments on the USGS geographic applications program
NASA Technical Reports Server (NTRS)
Gerlach, A. C.
1970-01-01
The third phase of remote sensing technologies and potentials applied to the operations of the U.S. Geological Survey is introduced. Remote sensing data are combined with multidisciplinary spatial data from traditional sources, together with geographic theory and techniques of environmental modeling. These combined inputs are subject to four sequential activities that involve: (1) thematic mapping of land use and environmental factors; (2) the dynamics of change detection; (3) environmental surveillance to identify sudden changes and general trends; and (4) preparation of statistical models and analytical reports. Geography program functions, products, clients, and goals are presented in graphical form, along with aircraft photo missions, geography test sites, and FY-70.
Sokolova, Ekaterina; Petterson, Susan R; Dienus, Olaf; Nyström, Fredrik; Lindgren, Per-Eric; Pettersson, Thomas J R
2015-09-01
Norovirus contamination of drinking water sources is an important cause of waterborne disease outbreaks. Knowledge on pathogen concentrations in source water is needed to assess the ability of a drinking water treatment plant (DWTP) to provide safe drinking water. However, pathogen enumeration in source water samples is often not sufficient to describe the source water quality. In this study, the norovirus concentrations were characterised at the contamination source, i.e. in sewage discharges. Then, the transport of norovirus within the water source (the river Göta älv in Sweden) under different loading conditions was simulated using a hydrodynamic model. Based on the estimated concentrations in source water, the required reduction of norovirus at the DWTP was calculated using quantitative microbial risk assessment (QMRA). The required reduction was compared with the estimated treatment performance at the DWTP. The average estimated concentration in source water varied between 4.8×10² and 7.5×10³ genome equivalents L⁻¹; and the average required reduction by treatment was between 7.6 and 8.8 log10. The treatment performance at the DWTP was estimated to be adequate to deal with all tested loading conditions, but was heavily dependent on chlorine disinfection, with the risk of poor reduction by conventional treatment and slow sand filtration. To our knowledge, this is the first article to employ discharge-based QMRA, combined with hydrodynamic modelling, in the context of drinking water. Copyright © 2015 Elsevier B.V. All rights reserved.
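The required-reduction arithmetic can be illustrated directly; the target concentration below is a made-up placeholder rather than the study's dose-response-derived value, though with it the stated source concentrations roughly reproduce the reported 7.6-8.8 log10 range:

    import math

    def required_log_reduction(c_source_per_L: float, c_target_per_L: float) -> float:
        """Log10 reduction needed to bring source water down to a target concentration."""
        return math.log10(c_source_per_L / c_target_per_L)

    for c in (4.8e2, 7.5e3):                        # genome equivalents per litre
        print(f"{c:.1e} GE/L -> {required_log_reduction(c, 1e-5):.1f} log10")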
NASA Astrophysics Data System (ADS)
Zahmatkesh, Zahra; Karamouz, Mohammad; Nazif, Sara
2015-09-01
Simulation of rainfall-runoff process in urban areas is of great importance considering the consequences and damages of extreme runoff events and floods. The first issue in flood hazard analysis is rainfall simulation. Large scale climate signals have been proved to be effective in rainfall simulation and prediction. In this study, an integrated scheme is developed for rainfall-runoff modeling considering different sources of uncertainty. This scheme includes three main steps of rainfall forecasting, rainfall-runoff simulation and future runoff prediction. In the first step, data driven models are developed and used to forecast rainfall using large scale climate signals as rainfall predictors. Due to high effect of different sources of uncertainty on the output of hydrologic models, in the second step uncertainty associated with input data, model parameters and model structure is incorporated in rainfall-runoff modeling and simulation. Three rainfall-runoff simulation models are developed for consideration of model conceptual (structural) uncertainty in real time runoff forecasting. To analyze the uncertainty of the model structure, streamflows generated by alternative rainfall-runoff models are combined, through developing a weighting method based on K-means clustering. Model parameters and input uncertainty are investigated using an adaptive Markov Chain Monte Carlo method. Finally, calibrated rainfall-runoff models are driven using the forecasted rainfall to predict future runoff for the watershed. The proposed scheme is employed in the case study of the Bronx River watershed, New York City. Results of uncertainty analysis of rainfall-runoff modeling reveal that simultaneous estimation of model parameters and input uncertainty significantly changes the probability distribution of the model parameters. It is also observed that by combining the outputs of the hydrological models using the proposed clustering scheme, the accuracy of runoff simulation in the watershed is remarkably improved up to 50% in comparison to the simulations by the individual models. Results indicate that the developed methodology not only provides reliable tools for rainfall and runoff modeling, but also adequate time for incorporating required mitigation measures in dealing with potentially extreme runoff events and flood hazard. Results of this study can be used in identification of the main factors affecting flood hazard analysis.
NASA Astrophysics Data System (ADS)
Sonderfeld, Hannah; Boesch, Hartmut; Jeanjean, Antoine P. R.; Riddick, Stuart N.; Allen, Grant; Ars, Sebastien; Davies, Stewart; Harris, Neil; Humpage, Neil; Leigh, Roland; Pitt, Joseph
2017-04-01
Globally, the waste sector contributes nearly a fifth of the anthropogenic methane (CH4) emitted to the atmosphere and is the second largest source of methane in the UK. In recent years great improvements to reduce those emissions have been achieved by installation of methane recovery systems at landfill sites, and subsequently the methane emissions reported in national emission inventories have been reduced. Nevertheless, methane emissions of landfills remain uncertain, and quantification of emission fluxes is essential to verify reported emission inventories and to monitor changes in emissions. We present data from the deployment of an in situ FTIR (Fourier Transform Infrared Spectrometer, Ecotech) for continuous and simultaneous sampling of CO2, CH4, N2O and CO with a high time resolution of the order of minutes. During a two-week field campaign at an operational landfill site in Eastern England in August 2014, measurements were taken within a radius of 320 m of the uncovered and active area of the landfill, which was still being filled with new incoming waste. We applied a computational fluid dynamics (CFD) model, constrained with local wind measurements and a detailed topographic map of the landfill site, to the in situ concentration data to calculate CH4 fluxes of the active site. A mean daytime flux of 0.83 mg m⁻² s⁻¹ (53.26 kg h⁻¹) was calculated for the area of the active site. An additional source area was identified and incorporated into the CFD model, which resulted in higher total methane emissions of 75.97 kg h⁻¹ for the combined emission areas. Our method of combining a CFD model with in situ data, in medium proximity to the source area, allows us to distinguish between different emission areas and thereby provides more detailed information compared to bulk emission approaches.
Optimal observables for multiparameter seismic tomography
NASA Astrophysics Data System (ADS)
Bernauer, Moritz; Fichtner, Andreas; Igel, Heiner
2014-08-01
We propose a method for the design of seismic observables with maximum sensitivity to a target model parameter class, and minimum sensitivity to all remaining parameter classes. The resulting optimal observables thereby minimize interparameter trade-offs in multiparameter inverse problems. Our method is based on the linear combination of fundamental observables that can be any scalar measurement extracted from seismic waveforms. Optimal weights of the fundamental observables are determined with an efficient global search algorithm. While most optimal design methods assume variable source and/or receiver positions, our method has the flexibility to operate with a fixed source-receiver geometry, making it particularly attractive in studies where the mobility of sources and receivers is limited. In a series of examples we illustrate the construction of optimal observables, and assess the potentials and limitations of the method. The combination of Rayleigh-wave traveltimes in four frequency bands yields an observable with strongly enhanced sensitivity to 3-D density structure. Simultaneously, sensitivity to S velocity is reduced, and sensitivity to P velocity is eliminated. The original three-parameter problem thereby collapses into a simpler two-parameter problem with one dominant parameter. By defining parameter classes to equal earth model properties within specific regions, our approach mimics the Backus-Gilbert method where data are combined to focus sensitivity in a target region. This concept is illustrated using rotational ground motion measurements as fundamental observables. Forcing dominant sensitivity in the near-receiver region produces an observable that is insensitive to the Earth structure at more than a few wavelengths' distance from the receiver. This observable may be used for local tomography with teleseismic data. While our test examples use a small number of well-understood fundamental observables, few parameter classes and a radially symmetric earth model, the method itself does not impose such restrictions. It can easily be applied to large numbers of fundamental observables and parameter classes, as well as to 3-D heterogeneous earth models.
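A toy version of the weight search, with synthetic sensitivity vectors standing in for the frequency-band traveltime kernels (a crude random search, not the paper's global algorithm):

    import numpy as np

    rng = np.random.default_rng(3)
    # Sensitivities of 4 band-limited traveltimes to (density, Vs, Vp) -- synthetic.
    S = rng.standard_normal((4, 3))

    best_w, best_score = None, -np.inf
    for _ in range(200_000):                 # crude global random search over weights
        w = rng.standard_normal(4)
        w /= np.linalg.norm(w)               # fix the scale of the combination
        s_rho, s_vs, s_vp = w @ S            # combined sensitivity per parameter class
        if abs(s_vp) > 1e-2:                 # enforce (near-)zero P-velocity sensitivity
            continue
        score = abs(s_rho) - abs(s_vs)       # reward density, penalize S velocity
        if score > best_score:
            best_w, best_score = w, score

    print("optimal weights:", best_w)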
Sources, sinks, and spatial ecology of cotton mice in longleaf pine stands undergoing restoration
Sharp, N.W.; Mitchell, M.S.; Grand, J.B.
2009-01-01
The Fire and Fire Surrogate study, a replicated, manipulative experiment, sought the most economically and ecologically efficient way to restore the nation's fire-maintained ecosystems. As part of this study, we conducted a 3-year mark-recapture study, comprising 105,000 trap-nights, to assess demographic responses of cotton mice (Peromyscus gossypinus) to Fire and Fire Surrogate treatments at the Gulf Coastal Plain site, where longleaf pine was the ecosystem to be restored. We compared competing models to evaluate restoration effects on variation in apparent survival and recruitment over time, space, and treatment, and incorporated measures of available source habitat for cotton mice with reverse-time modeling to infer immigration from outside the study area. The top-ranked survival model contained only variation over time, but the closely ranked 2nd and 3rd models included variation over space and treatment, respectively. The top 4 recruitment models all included effects for availability of source habitat and treatments. Burning appeared to degrade habitat quality for cotton mice, showing demographic characteristics of a sink, but treatments combining fire with thinning of trees or application of herbicide to the understory appeared to improve habitat quality, possibly creating sources. Bottomland hardwoods outside the study also acted as sources by providing immigrants to experimental units. Models suggested that population dynamics operated over multiple spatial scales. Treatments applied to 15-ha stands probably only caused local variation in vital rates within the larger population. © 2009 American Society of Mammalogists.
NASA Astrophysics Data System (ADS)
Alden, Caroline B.; Ghosh, Subhomoy; Coburn, Sean; Sweeney, Colm; Karion, Anna; Wright, Robert; Coddington, Ian; Rieker, Gregory B.; Prasad, Kuldeep
2018-03-01
Advances in natural gas extraction technology have led to increased activity in the production and transport sectors in the United States and, as a consequence, an increased need for reliable monitoring of methane leaks to the atmosphere. We present a statistical methodology in combination with an observing system for the detection and attribution of fugitive emissions of methane from distributed potential source location landscapes such as natural gas production sites. We measure long (> 500 m), integrated open-path concentrations of atmospheric methane using a dual frequency comb spectrometer and combine measurements with an atmospheric transport model to infer leak locations and strengths using a novel statistical method, the non-zero minimum bootstrap (NZMB). The new statistical method allows us to determine whether the empirical distribution of possible source strengths for a given location excludes zero. Using this information, we identify leaking source locations (i.e., natural gas wells) through rejection of the null hypothesis that the source is not leaking. The method is tested with a series of synthetic data inversions with varying measurement density and varying levels of model-data mismatch. It is also tested with field observations of (1) a non-leaking source location and (2) a source location where a controlled emission of 3.1 × 10⁻⁵ kg s⁻¹ of methane gas is released over a period of several hours. This series of synthetic data tests and outdoor field observations using a controlled methane release demonstrates the viability of the approach for the detection and sizing of very small leaks of methane across large distances (4+ km² in synthetic tests). The field tests demonstrate the ability to attribute small atmospheric enhancements of 17 ppb to the emitting source location against a background of combined atmospheric (e.g., background methane variability) and measurement uncertainty of 5 ppb (1σ), when measurements are averaged over 2 min. The results of the synthetic and field data testing show that the new observing system and statistical approach greatly decreases the incidence of false alarms (that is, wrongly identifying a well site to be leaking) compared with the same tests that do not use the NZMB approach and therefore offers increased leak detection and sizing capabilities.
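The core of the non-zero-minimum idea can be sketched with a generic percentile bootstrap (a simplification of the paper's NZMB procedure; the data below are synthetic):

    import numpy as np

    rng = np.random.default_rng(4)

    def leak_detected(strength_estimates, alpha=0.05, n_boot=5000):
        """Flag a leak only if the bootstrap distribution of the mean excludes zero."""
        x = np.asarray(strength_estimates)
        boot_means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                               for _ in range(n_boot)])
        return np.percentile(boot_means, 100 * alpha) > 0.0

    print(leak_detected(rng.normal(3e-5, 1e-5, size=40)))  # small but real leak -> True
    print(leak_detected(rng.normal(0.0, 1e-5, size=40)))   # non-leaking site -> False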
Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.
2017-01-01
The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, leading usually to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but uses the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical; ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and the iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines are freely available in a public website. PMID:29200994
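The paper's inverse-problem machinery is not reproduced here, but the elastic-net flavor of the penalized-regression approach can be illustrated on a toy lead field (the dimensions and regularization values are arbitrary assumptions):

    import numpy as np
    from sklearn.linear_model import ElasticNet

    rng = np.random.default_rng(5)
    L = rng.standard_normal((64, 500))      # toy lead field: 64 sensors, 500 sources
    j = np.zeros(500)
    j[100:110] = 1.0                        # one smooth nonzero patch of activity
    v = L @ j + 0.01 * rng.standard_normal(64)   # noisy sensor data

    # Combined L1/L2 penalty: l1_ratio=0.5 mixes sparsity with smoothing.
    enet = ElasticNet(alpha=0.05, l1_ratio=0.5, max_iter=20000).fit(L, v)
    print(np.flatnonzero(enet.coef_)[:20])  # recovered support, approximately the patch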
Modeling Passive Propagation of Malwares on the WWW
NASA Astrophysics Data System (ADS)
Chunbo, Liu; Chunfu, Jia
Web-based malware is hosted on fixed websites and downloads onto users' computers automatically while they browse. This passive propagation pattern differs from that of traditional viruses and worms. A propagation model based on the reverse web graph is proposed. In this model, the propagation of malware is analyzed by means of a random jump matrix which combines the orderliness and randomness of user browsing behavior. Exploratory experiments, with single and multiple propagation sources respectively, demonstrate the validity of the model. Using this model, people can evaluate the hazard posed by specific websites and take corresponding countermeasures.
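A random jump matrix of this kind is the same construction used in PageRank; a minimal power-iteration sketch on a toy four-page reverse graph (the graph and damping value are illustrative, not from the paper):

    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)   # toy reverse web graph adjacency
    P = A / A.sum(axis=0, keepdims=True)        # column-stochastic transition matrix
    n, d = 4, 0.85                              # d: probability of following a link
    G = d * P + (1 - d) / n                     # random jump matrix (orderness + randomness)

    r = np.full(n, 1 / n)
    for _ in range(100):                        # power iteration to the stationary vector
        r = G @ r
    print(r)                                    # long-run exposure probability per site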
Probabilistic forecasts of debris-flow hazard at the regional scale with a combination of models.
NASA Astrophysics Data System (ADS)
Malet, Jean-Philippe; Remaître, Alexandre
2015-04-01
Debris flows are one of the many active slope-forming processes in the French Alps, where rugged and steep slopes mantled by various slope deposits offer a great potential for triggering hazardous events. A quantitative assessment of debris-flow hazard requires the estimation, in a probabilistic framework, of the spatial probability of occurrence of source areas, the spatial probability of runout areas, the temporal frequency of events, and their intensity. The main objective of this research is to propose a pipeline for the estimation of these quantities at the regional scale using a chain of debris-flow models. The methodology is developed and validated at the experimental site of the Barcelonnette Basin (South French Alps), where 26 active torrents have produced more than 150 debris-flow events since 1850. First, a susceptibility assessment is performed to identify the debris-flow prone source areas. The most frequently used approach is the combination of environmental factors with GIS procedures and statistical techniques, with or without detailed event inventories. Based on a 5 m DEM and derivatives, and information on slope lithology, engineering soils and landcover, the possible source areas are identified with a statistical logistic regression model. The performance of the statistical model is evaluated with the observed distribution of debris-flow events recorded after 1850 in the study area. The source areas in the three most active torrents (Riou-Bourdoux, Faucon, Sanières) are well identified by the model. Results are less convincing for three other active torrents (Bourget, La Valette and Riou-Chanal); this could be related to the type of debris-flow triggering mechanism, as the model seems to better spot the open-slope debris-flow source areas (e.g. scree slopes) but appears to be less efficient for the identification of landslide-induced debris flows. Second, a susceptibility assessment is performed to estimate the possible runout distance with a process-based model. The MassMov-2D code is a two-dimensional model of mud and debris flow dynamics over complex topography, based on a numerical integration of the depth-averaged motion equations using the shallow-water approximation. The run-out simulations are performed for the most active torrents. The performance of the model has been evaluated by comparing modelling results with the observed spreading areas of several recent debris flows. Existing data on the debris flow volume, input discharge and deposits were used to back-analyze those events and estimate the values of the model parameters. Third, hazard is estimated on the basis of scenarios computed in a probabilistic way, for volumes in the range 20,000 to 350,000 m³, and for several combinations of rheological parameters. In most cases, the simulations indicate that the debris flows cause significant overflowing on the alluvial fans for volumes exceeding 100,000 m³ (height of deposits > 2 m, velocities > 5 m s⁻¹). Probabilities of debris flow runout and debris flow intensities are then computed for each terrain unit.
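The statistical susceptibility step can be sketched with a logistic regression on synthetic terrain factors (the features and coefficients below are stand-ins, not the study's covariates):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(9)
    n = 5000
    X = np.column_stack([
        rng.uniform(0, 45, n),      # slope angle (deg) from DEM derivatives
        rng.uniform(0, 1, n),       # scree-cover fraction (landcover proxy)
        rng.integers(0, 4, n),      # lithology class (coded)
    ])
    # Synthetic ground truth: steep, scree-mantled cells are more debris-flow prone.
    logit = -6.0 + 0.12 * X[:, 0] + 2.0 * X[:, 1]
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    model = LogisticRegression(max_iter=1000).fit(X, y)
    susceptibility = model.predict_proba(X)[:, 1]   # per-cell source-area probability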
Thermal Image Sensing Model for Robotic Planning and Search.
Castro Jiménez, Lídice E; Martínez-García, Edgar A
2016-08-08
This work presents a search planning system for a rolling robot to find a source of infra-red (IR) radiation at an unknown location. Heat emissions are observed by a low-cost home-made IR passive visual sensor. The sensor's capability for detection of radiation spectra was experimentally characterized. The sensor data were fitted with an exponential model to estimate distance as a function of the IR image's intensity, and a polynomial model to estimate temperature as a function of IR intensity. Both models are combined to deduce an exact nonlinear distance-temperature solution. A planning system obtains feedback from the IR camera (position, intensity, and temperature) to lead the robot to the heat source. The planner is a system of nonlinear equations recursively solved by a Newton-based approach to estimate the IR source in global coordinates. The planning system assists an autonomous navigation control in order to reach the goal and avoid collisions. Trigonometric partial differential equations were established to control the robot's course towards the heat emission. A sine function produces attractive accelerations toward the IR source; a cosine function produces repulsive accelerations away from the obstacles observed by an RGB-D sensor. Simulations and real experiments in complex indoor environments are presented to illustrate the convenience and efficacy of the proposed approach.
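The two sensor calibrations described above reduce to standard curve fits; the sketch below uses synthetic data and assumed coefficients, not the sensor's measured characterization:

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(6)
    intensity = np.linspace(20, 250, 30)                 # IR image intensity (a.u.)
    # Synthetic "calibration" data with assumed coefficients plus noise.
    dist = 5.0 * np.exp(-0.015 * intensity) + 0.02 * rng.standard_normal(30)
    temp = 20 + 0.9 * intensity - 1.2e-3 * intensity**2 + rng.standard_normal(30)

    dist_model = lambda I, a, b: a * np.exp(-b * I)      # exponential distance model
    (a, b), _ = curve_fit(dist_model, intensity, dist, p0=(5.0, 0.01))
    poly = np.polynomial.Polynomial.fit(intensity, temp, deg=2)  # polynomial temperature model

    I_obs = 120.0
    print(f"distance ~ {dist_model(I_obs, a, b):.2f} m, temperature ~ {poly(I_obs):.1f} C")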
Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai
2005-10-01
This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B
2015-01-01
The Modular Program Constructor (MPC) is an open-source, Java-based modeling utility, built upon JSim's Mathematical Modeling Language (MML) (http://www.physiome.org/jsim/), that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development, where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.
NASA Astrophysics Data System (ADS)
Wong, Man Sing; Nichol, Janet Elizabeth; Lee, Kwon Ho
2010-10-01
Hong Kong, a commercial and financial city located in south-east China, has suffered serious air pollution for the last decade, due largely to the rapid urban and industrial expansion of the cities of mainland China. However, the potential sources and pathways of aerosols transported to Hong Kong have not been well researched due to the lack of air quality monitoring stations in southern China. Here, an integrated method combining AErosol RObotic NETwork (AERONET) data, trajectory and Potential Source Contribution Function (PSCF) modeling is used to identify the potential transport pathways and contribution of sources from four characteristic aerosol types. Four characteristic aerosol types were defined using a total of 730 AERONET data measurements between 2005 and 2008: coastal urban, polluted urban, dust (likely to be long-distance desert dust), and heavy pollution. Results show that the sources of polluted urban and heavy pollution aerosols are associated with industrial emissions in southern China, whereas coastal urban aerosols have been affected by both natural marine aerosols and emissions. The PSCF map of dust shows a wide range of pathways followed by east- and south-eastward trajectories from northwest China to Hong Kong. Although the contribution from dust sources is small compared to the anthropogenic aerosols, a serious recent dust outbreak has been observed in Hong Kong with an elevation of the Air Pollution Index to 500, compared with 50-100 on normal days. Therefore, the combined use of clustered AERONET data, trajectory and PSCF models can help to resolve the longstanding issue about source regions and characteristics of pollutants carried to Hong Kong.
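The PSCF statistic itself is simple: for each grid cell, the number of trajectory endpoints associated with polluted arrivals divided by all endpoints in that cell. A minimal sketch with synthetic trajectory endpoints (the grid, coordinates, and pollution flags are placeholders):

    import numpy as np

    rng = np.random.default_rng(7)
    lon = rng.uniform(100, 125, 5000)            # back-trajectory endpoint longitudes
    lat = rng.uniform(15, 45, 5000)              # back-trajectory endpoint latitudes
    polluted = rng.random(5000) < 0.3            # flag from an arrival-day AOD threshold

    bins = [np.arange(100, 126), np.arange(15, 46)]      # 1-degree grid
    m, _, _ = np.histogram2d(lon[polluted], lat[polluted], bins=bins)  # polluted endpoints
    n, _, _ = np.histogram2d(lon, lat, bins=bins)                      # all endpoints
    pscf = np.divide(m, n, out=np.zeros_like(m), where=n > 0)          # PSCF = m_ij / n_ij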
Identifying sediment sources in the sediment TMDL process
Gellis, Allen C.; Fitzpatrick, Faith A.; Schubauer-Berigan, Joseph P.; Landy, R.B.; Gorman Sanisaca, Lillian E.
2015-01-01
Sediment is an important pollutant contributing to aquatic-habitat degradation in many waterways of the United States. This paper discusses the application of sediment budgets in conjunction with sediment fingerprinting as tools to determine the sources of sediment in impaired waterways. These approaches complement monitoring, assessment, and modeling of sediment erosion, transport, and storage in watersheds. Combining the sediment fingerprinting and sediment budget approaches can help determine specific adaptive management plans and techniques applied to targeting hot spots or areas of high erosion.
Bernknopf, Richard L.; Dinitz, Laura B.; Loague, Keith
2001-01-01
An integrated earth science-economics model, developed within a geographic information system (GIS), combines a regional-scale nonpoint source vulnerability assessment with a specific remediation measure to avoid unnecessary agricultural production costs associated with the use of agrochemicals in the Pearl Harbor basin on the island of Oahu, Hawaii. This approach forms the core of a risk-based regulation for the application of agrochemicals and estimates the benefits of an information-based approach to decisionmaking.
NASA Astrophysics Data System (ADS)
Kawamura, Taichi; Lognonné, Philippe; Nishikawa, Yasuhiro; Tanaka, Satoshi
2017-07-01
While deep moonquakes are seismic events commonly observed on the Moon, their source mechanism is still unexplained. The two main issues are poorly constrained source parameters and incompatibilities between the thermal profiles suggested by many studies and the apparent need for brittle properties at these depths. In this study, we reinvestigated the deep moonquake data to reestimate source parameters and uncover the characteristics of deep moonquake faults that differ from those on Earth. We first improve the estimation of source parameters through spectral analysis using "new" broadband seismic records made by combining those of the Apollo long- and short-period seismometers. We use the broader frequency band of the combined spectra to estimate corner frequencies and DC spectral levels, which are key observables for constraining the source parameters. We further use these spectral features to estimate seismic moments and stress drops for more than 100 deep moonquake events from three different source regions. This study revealed that deep moonquake faults are extremely smooth compared to terrestrial faults. Second, we reevaluate the brittle-ductile transition temperature that is consistent with the obtained source parameters. We show that the source parameters imply that tidal stress is the main source of the stress glut causing deep moonquakes, and that the large tidal strain rate makes the brittle-ductile transition temperature higher. Higher transition temperatures open a new possibility to construct a thermal model that is consistent with deep moonquake occurrence and pressure conditions, and thereby improve our understanding of the deep moonquake source mechanism.
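The standard Brune-model arithmetic links the spectral observables mentioned above (DC level, hence seismic moment, and corner frequency) to stress drop; the shear-wave speed and example values below are illustrative assumptions, not the paper's estimates:

    def brune_stress_drop(m0_nm: float, fc_hz: float, beta_m_s: float = 4500.0) -> float:
        """Stress drop from seismic moment and corner frequency (Brune, 1970)."""
        r = 0.37 * beta_m_s / fc_hz          # source radius in metres
        return 7.0 / 16.0 * m0_nm / r**3     # stress drop in Pa

    # Illustrative values: M0 ~ 1e13 N m, fc ~ 2 Hz -> ~7.6e3 Pa, a very low stress drop,
    # qualitatively consistent with smooth faults.
    print(f"{brune_stress_drop(1e13, 2.0):.2e} Pa")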
NASA Astrophysics Data System (ADS)
Aur, K. A.; Poppeliers, C.; Preston, L. A.
2017-12-01
The Source Physics Experiment (SPE) consists of a series of underground chemical explosions at the Nevada National Security Site (NNSS) designed to gain an improved understanding of the generation and propagation of physical signals in the near and far field. Characterizing the acoustic and infrasound source mechanism from underground explosions is of great importance to underground explosion monitoring. To this end we perform full waveform source inversion of infrasound data collected from the SPE-6 experiment at distances from 300 m to 6 km and frequencies up to 20 Hz. Our method requires estimating the state of the atmosphere at the time of each experiment, computing Green's functions through these atmospheric models, and subsequently inverting the observed data in the frequency domain to obtain a source time function. To estimate the state of the atmosphere at the time of the experiment, we utilize the Weather Research and Forecasting - Data Assimilation (WRF-DA) modeling system to derive a unified atmospheric state model by combining Global Energy and Water Cycle Experiment (GEWEX) Continental-scale International Project (GCIP) data and locally obtained sonde and surface weather observations collected at the time of the experiment. We synthesize Green's functions through these atmospheric models using Sandia's moving media acoustic propagation simulation suite (TDAAPS). These models include 3-D variations in topography, temperature, pressure, and wind. We compare inversion results using the atmospheric models derived from the unified weather models versus previous modeling results and discuss how these differences affect computed source waveforms with respect to observed waveforms at various distances. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
Bayesian estimation of a source term of radiation release with approximately known nuclide ratios
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek
2016-04-01
We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from the known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release where 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with the method assuming unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach. This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
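The linear model y = Mx with a positivity constraint can be sketched with non-negative least squares plus Tikhonov-style shrinkage (a crude stand-in for the paper's Variational Bayes inference; all matrices and values are toy):

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(8)
    M = np.abs(rng.standard_normal((120, 30)))      # toy SRS matrix: 120 obs, 30 time bins
    x_true = np.zeros(30)
    x_true[5:8] = (1.0, 0.5, 0.25)                  # short release pulse
    y = M @ x_true + 0.05 * rng.standard_normal(120)

    lam = 0.1                                       # regularization strength (assumed)
    A = np.vstack([M, lam * np.eye(30)])            # augmented system: data fit + shrinkage
    b = np.concatenate([y, np.zeros(30)])
    x_est, _ = nnls(A, b)                           # non-negative source term estimate
    print(x_est[:10].round(2))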
Filling Terrorism Gaps: VEOs, Evaluating Databases, and Applying Risk Terrain Modeling to Terrorism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagan, Ross F.
2016-08-29
This paper aims to address three issues: the lack of literature differentiating terrorism and violent extremist organizations (VEOs), terrorism incident databases, and the applicability of Risk Terrain Modeling (RTM) to terrorism. Current open source literature and publicly available government sources do not differentiate between terrorism and VEOs; furthermore, they fail to define them. Addressing the lack of a comprehensive comparison of existing terrorism data sources, a matrix comparing a dozen terrorism databases is constructed, providing insight toward the array of data available. RTM, a method for spatial risk analysis at a micro level, has some applicability to terrorism research, particularly for studies looking at risk indicators of terrorism. Leveraging attack data from multiple databases, combined with RTM, offers one avenue for closing existing research gaps in terrorism literature.
NASA Astrophysics Data System (ADS)
Masciopinto, Costantino; Volpe, Angela; Palmiotta, Domenico; Cherubini, Claudia
2010-09-01
A combination of a parallel fracture model with the PHREEQC-2 geochemical model was developed to simulate sequential flow and chemical transport with reactions in fractured media where both laminar and turbulent flows occur. The integration of non-laminar flow resistances in one model produced relevant effects on water flow velocities, thus improving model prediction capabilities for contaminant transport. The proposed conceptual model consists of 3D rock blocks, separated by horizontal bedding-plane fractures with variable apertures. Particle tracking solved the transport equations for conservative compounds and provided input for PHREEQC-2. For each cluster of contaminant pathways, PHREEQC-2 determined the concentration for mass transfer, sorption/desorption, ion exchange, mineral dissolution/precipitation and biodegradation, under kinetically controlled reactive processes of equilibrated chemical species. Field tests were performed for code verification. As an example, the combined model was applied to a contaminated fractured aquifer of southern Italy in order to simulate phenol transport. The code correctly fitted the available field data and also predicted a possible rapid depletion of phenols as a result of an increased biodegradation rate induced by a simulated artificial injection of nitrates, upgradient of the sources.
NASA Astrophysics Data System (ADS)
Wayand, N. E.; Hamlet, A. F.; Hughes, M. R.; Feld, S.; Lundquist, J. D.
2012-12-01
The data required to drive distributed hydrological models are significantly limited within mountainous terrain due to a scarcity of observations. This study evaluated three common configurations of forcing data: a) one low-elevation station, combined with empirical techniques, b) gridded output from the Weather Research and Forecasting (WRF) model, and c) a combination of the two. Each configuration was evaluated within the heavily-instrumented North Fork American River Basin in northern California, during October-June 2000-2010. Simulations of streamflow and snowpack using the Distributed Hydrology Soil and Vegetation Model (DHSVM) highlighted precipitation and radiation as variables whose sources resulted in significant differences. The best source of precipitation data varied between years. On average, the performance of WRF and the single station distributed using the Parameter-elevation Regressions on Independent Slopes Model (PRISM) were not significantly different. The average percent biases in simulated streamflow were 3.4% and 0.9% for configurations a) and b), respectively, even though precipitation compared directly with gauge measurements was biased high by 6% and 17%, suggesting that gauge undercatch may explain part of the bias. Simulations of snowpack using empirically-estimated long-wave irradiance resulted in melt rates lower than those observed at high-elevation sites, while at lower elevations the same forcing caused significant mid-winter melt that was not observed (Figure 1). These results highlight the complexity of how forcing data sources impact hydrology over different areas (high vs. low elevation snow) and different time periods. Overall, results support the use of output from the WRF model over empirical techniques in regions with limited station data. FIG. 1. (a,b) Simulated SWE from DHSVM compared to observations at the Sierra Snow Lab (2100 m) and Blue Canyon (1609 m) during 2008-2009. Modeled (c,d) internal pack temperature, (e,f) downward short-wave irradiance, (g,h) downward long-wave irradiance, and (i,j) net irradiance. Note that plots e, g, i focus on the melt season (March-May), and plots f, h, j focus on the erroneous mid-winter melt event during January - time periods marked with vertical dashed lines in (a) and (b).
Using Seismic and Infrasonic Data to Identify Persistent Sources
NASA Astrophysics Data System (ADS)
Nava, S.; Brogan, R.
2014-12-01
Data from seismic and infrasound sensors were combined to aid in the identification of persistent sources such as mining-related explosions. It is of interest to operators of seismic networks to identify these signals in their event catalogs. Acoustic signals below the threshold of human hearing, in the frequency range of ~0.01 to 20 Hz, are classified as infrasound. Persistent signal sources are useful as ground truth data for the study of atmospheric infrasound signal propagation, the identification of manmade versus naturally occurring seismic sources, and other studies. By using signals emanating from the same location, propagation studies, for example, can be conducted under a variety of atmospheric conditions, leading to improvements in the modeling process for eventual use where the source is not known. We present results from several studies to identify ground truth sources using both seismic and infrasound data.
Factors affecting nutrient trends in major rivers of the Chesapeake Bay Watershed
Sprague, Lori A.; Langland, M.J.; Yochum, S.E.; Edwards, R.E.; Blomquist, J.D.; Phillips, S.W.; Shenk, G.W.; Preston, S.D.
2000-01-01
Trends in nutrient loads and flow-adjusted concentrations in the major rivers entering Chesapeake Bay were computed on the basis of water-quality data collected between 1985 and 1998 at 29 monitoring stations in the Susquehanna, Potomac, James, Rappahannock, York, Patuxent, and Choptank River Basins. Two computer models, the Chesapeake Bay Watershed Model (WSM) and the U.S. Geological Survey's 'Spatially Referenced Regressions on Watershed attributes' (SPARROW) model, were used to help explain the major factors affecting the trends. Results from WSM simulations provided information on temporal changes in contributions from major nutrient sources, and results from SPARROW model simulations provided spatial detail on the distribution of nutrient yields in these basins. Additional data on nutrient sources, basin characteristics, implementation of management practices, and ground-water inputs to surface water were analyzed to help explain the trends. The major factors affecting the trends were changes in nutrient sources and natural variations in streamflow. The dominant source of nitrogen and phosphorus from 1985 to 1998 in six of the seven tributary basins to Chesapeake Bay was determined to be agriculture. Because of the predominance of agricultural inputs, changes in agricultural nutrient sources such as manure and fertilizer, combined with decreases in agricultural acreage and implementation of best management practices (BMPs), had the greatest impact on the trends in flow-adjusted nutrient concentrations. Urban acreage and population, however, were noted to be increasing throughout the Chesapeake Bay Watershed, and as a result, delivered loads of nutrients from urban areas increased during the study period. Overall, agricultural nutrient management, in combination with load decreases from point sources due to facility upgrades and the phosphate detergent ban, led to downward trends in flow-adjusted nutrient concentrations at many of the monitoring stations in the watershed. The loads of nutrients, however, were not reduced significantly at most of the monitoring stations. This is due primarily to higher streamflow in the latter years of the monitoring period, which led to higher loading in those years. Results of this study indicate a need for more detailed information on BMP effectiveness under a full range of hydrologic conditions and in different areas of the watershed; an internally consistent fertilizer data set; greater consideration of the effects of watershed processes on nutrient transport; a refinement of current modeling efforts; and an expansion of the non-tidal monitoring network in the Chesapeake Bay Watershed.
NASA Astrophysics Data System (ADS)
Karki, Rajesh
Renewable energy application in electric power systems is growing rapidly worldwide due to enhanced public concerns for adverse environmental impacts and escalation in energy costs associated with the use of conventional energy sources. Photovoltaics and wind energy sources are being increasingly recognized as cost-effective generation sources. A comprehensive evaluation of reliability and cost is required to analyze the actual benefits of utilizing these energy sources. The reliability aspects of utilizing renewable energy sources have largely been ignored in the past due to the relatively insignificant contribution of these sources in major power systems, and consequently due to the lack of appropriate techniques. Renewable energy sources have the potential to play a significant role in meeting the electrical energy requirements of small isolated power systems, which are primarily supplied by costly diesel fuel. A relatively high renewable energy penetration can significantly reduce system fuel costs but can also have considerable impact on system reliability. Small isolated systems routinely plan their generating facilities using deterministic adequacy methods that cannot incorporate the highly erratic behavior of renewable energy sources. The utilization of a single probabilistic risk index has not been generally accepted in small isolated system evaluation despite its utilization in most large power utilities. Deterministic and probabilistic techniques are combined in this thesis using a system well-being approach to provide useful adequacy indices for small isolated systems that include renewable energy. This thesis presents an evaluation model for small isolated systems containing renewable energy sources by integrating simulation models that generate appropriate atmospheric data, evaluate chronological renewable power outputs, and combine total available energy and load to provide useful system indices. A software tool, SIPSREL+, has been developed which generates risk, well-being and energy based indices to provide realistic cost/reliability measures of utilizing renewable energy. The concepts presented and the examples illustrated in this thesis will help system planners to decide on appropriate installation sites, the types and mix of different energy generating sources, the optimum operating policies, and the optimum generation expansion plans required to meet increasing load demands in small isolated power systems containing photovoltaic and wind energy sources.
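A minimal Monte Carlo sketch of the kind of probabilistic adequacy evaluation the thesis advocates, for a toy isolated system of diesel units (with forced outages) plus an erratic wind source; unit sizes, outage rates, and load statistics are all hypothetical, and the thesis's actual tool (SIPSREL+) also produces well-being and energy indices beyond the single risk index shown here:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy isolated system: three diesel units with forced outages plus a
# fluctuating wind source; all numbers hypothetical.
n_hours = 8760 * 50                      # 50 simulated years, hourly states
diesel_cap = np.array([2.0, 2.0, 1.5])   # unit ratings (MW)
diesel_for = 0.05                        # forced outage rate per unit

diesel_up = rng.random((n_hours, diesel_cap.size)) > diesel_for
diesel_avail = diesel_up.astype(float) @ diesel_cap

wind = 3.0 * np.clip(rng.weibull(2.0, n_hours) / 2.0, 0.0, 1.0)  # MW, erratic
load = 3.5 + 0.8 * rng.standard_normal(n_hours)                  # MW

margin = diesel_avail + wind - load
lole = np.mean(margin < 0.0) * 8760      # loss-of-load expectation (h/yr)
print(f"LOLE = {lole:.1f} h/yr")
```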
NASA Astrophysics Data System (ADS)
Waite, Gregory P.; Lanza, Federica
2016-10-01
Magmatic processes produce a rich variety of volcano seismic signals, ranging over several orders of magnitude in frequency and over a wide range of mechanism types. We examined signals with periods from 400 to 10 s associated with explosive eruptions at Fuego volcano, Guatemala, that were recorded over 19 days in 2009 on broadband stations with 30 s and 60 s corner periods. The raw data from the closest stations include tilt effects on the horizontal components but also have significant signal at periods below the instrument corners on the vertical components, where tilt effects should be negligible. We address the problem of tilt-affected horizontal waveforms through a joint waveform inversion of translation and rotation, which allows for an investigation of the varying influence of tilt with period. Using a phase-weighted stack of six similar events, we invert for the source moment tensor in multiple bands. We use a grid search over source types and constrained inversions, which provides a quantitative measure of source mechanism reliability. The 30-10 s band-pass results are consistent with previous work that modeled data with a combined two-crack or crack-and-pipe model. At the longest-period band examined, 400-60 s, the source mechanism is like a pipe that could represent the shallowest portion of the conduit. On the other hand, source mechanisms in some bands are unconstrained, presumably due to the combined tilt-dominated and translation-dominated signals, which are not coincident in space and have different time spans.
Siniatchkin, Michael; Moeller, Friederike; Jacobs, Julia; Stephani, Ulrich; Boor, Rainer; Wolff, Stephan; Jansen, Olav; Siebner, Hartwig; Scherg, Michael
2007-09-01
The ballistocardiogram (BCG) represents one of the most prominent sources of artifacts that contaminate the electroencephalogram (EEG) during functional MRI. The BCG artifacts may affect the detection of interictal epileptiform discharges (IED) in patients with epilepsy, reducing the sensitivity of the combined EEG-fMRI method. In this study we improved the BCG artifact correction using a multiple source correction (MSC) approach. On the one hand, a source analysis of the IEDs was applied to the EEG data obtained outside the MRI scanner to prevent the distortion of EEG signals of interest during the correction of BCG artifacts. On the other hand, the topographies of the BCG artifacts were defined based on the EEG recorded inside the scanner. The topographies of the BCG artifacts were then added to the surrogate model of IED sources and a combined source model was applied to the data obtained inside the scanner. The artifact signal was then subtracted without considerable distortion of the IED topography. The MSC approach was compared with the traditional averaged artifact subtraction (AAS) method. Both methods reduced the spectral power of BCG-related harmonics and enabled better detection of IEDs. Compared with the conventional AAS method, the MSC approach increased the sensitivity of IED detection because the IED signal was less attenuated when subtracting the BCG artifacts. The proposed MSC method is particularly useful in situations in which the BCG artifact is spatially correlated and time-locked with the EEG signal produced by the focal brain activity of interest.
Sheesley, Rebecca J; Schauer, James J; Orf, Marya L
2010-02-01
Industrial sources can have a significant but poorly defined impact on ambient particulate matter concentrations in select areas. Detailed emission profiles are often not available and are hard to develop because of the diversity of emissions across time and space at large industrial complexes. A yearlong study was conducted in an industrial area in Detroit, MI, which combined real-time particle mass (tapered element oscillating microbalance) and black carbon (aethalometer) measurements with molecular marker measurements of monthly average concentrations as well as daily concentrations on select high pollution days. The goal of the study was to use the real-time data to identify days on which the particulate matter concentration in the atmosphere was largely impacted by local source emissions, and to use the daily speciation data to derive emission profiles for the industrial source. When combined with motor vehicle exhaust, wood smoke and road dust profiles, the industrial source profile was used to determine the contribution of the local industrial source to total organic carbon (OC) concentrations using molecular marker-chemical mass balance modeling (MM-CMB). The MM-CMB analysis revealed that the industrial source had minimal impact on the monthly average carbonaceous aerosol concentration, but contributed approximately 2 μg m⁻³, or a little over one-third of the total OC, on select high-impact days.
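A minimal sketch of the chemical mass balance step in MM-CMB, which expresses ambient marker concentrations as a nonnegative combination of source profiles; the species, profiles, and concentrations below are hypothetical placeholders, not the study's measured values:

```python
import numpy as np
from scipy.optimize import nnls

# Chemical mass balance: ambient = profiles @ contributions, contributions >= 0.
# Rows are marker species, columns are sources; all values hypothetical.
profiles = np.array([
    #  vehicle  wood    dust    industry
    [0.020,  0.001,  0.000,  0.004],   # hopanes
    [0.001,  0.140,  0.000,  0.000],   # levoglucosan
    [0.350,  0.050,  0.010,  0.100],   # elemental carbon
    [0.002,  0.001,  0.080,  0.010],   # Al (crustal)
    [0.000,  0.000,  0.000,  0.060],   # marker enriched in the industrial source
])
ambient = np.array([0.011, 0.042, 0.170, 0.018, 0.024])  # ug/m3, hypothetical

contrib, resid = nnls(profiles, ambient)
for name, c in zip(["vehicle", "wood smoke", "road dust", "industry"], contrib):
    print(f"{name:>10s}: {c:.2f} ug OC/m3")
```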
Dynamical model for the toroidal sporadic meteors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokorný, Petr; Vokrouhlický, David; Nesvorný, David
More than a decade of radar operations by the Canadian Meteor Orbit Radar has allowed both young and moderately old streams to be distinguished from the dispersed sporadic background component. The latter has been categorized according to broad radiant regions visible to Earth-based observers into three broad classes: the helion and anti-helion source, the north and south apex sources, and the north and south toroidal sources (and a related arc structure). The first two are populated mainly by dust released from Jupiter-family comets and new comets. Proper modeling of the toroidal sources has not to date been accomplished. Here, we develop a steady-state model for the toroidal source of the sporadic meteoroid complex, compare our model with the available radar measurements, and investigate the contribution of dust particles from our model to the whole population of sporadic meteoroids. We find that the long-term stable part of the toroidal particles is mainly fed by dust released by Halley-type (long-period) comets (HTCs). Our synthetic model reproduces most of the observed features of the toroidal particles, including the most troublesome low-eccentricity component, which is due to a combination of two effects: particles' ability to decouple from Jupiter and circularize by the Poynting-Robertson effect, and large collision probability for orbits similar to that of the Earth. Our calibrated model also allows us to estimate the total mass of the HTC-released dust in space and check the flux necessary to maintain the cloud in a steady state.
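A small sketch of the Poynting-Robertson circularization effect invoked above, integrating the standard orbit-averaged drag equations (in the form given by Burns, Lamy & Soter 1979) for an HTC-like particle; the drag strength eta is an arbitrary placeholder, not a value from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Orbit-averaged Poynting-Robertson decay of semimajor axis a and
# eccentricity e; eta lumps particle size/density and is set arbitrarily.
eta = 1e-4  # AU^2 / yr, hypothetical drag strength

def pr_drag(t, y):
    a, e = y
    f = 1.0 - e * e
    dadt = -(eta / a) * (2.0 + 3.0 * e * e) / f**1.5
    dedt = -2.5 * eta * e / (a * a * np.sqrt(f))
    return [dadt, dedt]

# HTC-like initial orbit: a = 15 AU, e = 0.9
sol = solve_ivp(pr_drag, (0.0, 2e5), [15.0, 0.9], rtol=1e-8)
a_end, e_end = sol.y[0, -1], sol.y[1, -1]
print(f"after {sol.t[-1]:.0f} yr: a = {a_end:.2f} AU, e = {e_end:.3f}")
```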
Tritium power source for long-lived sensors
NASA Astrophysics Data System (ADS)
Litz, M. S.; Katsis, D. C.; Russo, J. A.; Carroll, J. J.
2014-06-01
A tritium-based indirect-conversion photovoltaic (PV) power source has been designed and prototyped as a long-lived (~15 years) power source for sensor networks. Tritium is a biologically benign beta emitter and a low-cost isotope acquired from commercial vendors for this purpose. The power source combines tritium encapsulated with a radioluminescent phosphor coupled to a commercial PV cell. The tritium, phosphor, and PV components are packaged inside a BA5590-style military-model enclosure. The package has been approved by the Nuclear Regulatory Commission (NRC) for use by DOD. The power source is designed to produce 100 μW of electrical power for an unattended radiation sensor (scintillator and avalanche photodiode) that can detect a 20 μCi source of 137Cs at three meters. This beta-emitting indirect photon conversion design is presented as a step towards the development of practical, logistically acceptable, low-cost, long-lived, compact power sources for unattended sensor applications in battlefield awareness and environmental detection.
Collison nebulizer as a new soft ionization source for mass spectrometry
NASA Astrophysics Data System (ADS)
Pervukhin, V. V.; Sheven', D. G.; Kolomiets, Yu. N.
2016-08-01
We have proposed that a Collison-type nebulizer be used as an ionization source for mass spectrometry with ionization under atmospheric pressure. This source does not require the use of electric voltage, radioactive sources, heaters, or liquid pumps. It has been shown that the number of ions produced by the 63Ni radioactive source is three to four times larger than the number of ions produced by acoustic ionization sources. We have considered the possibility of using a Collison-type nebulizer in combination with a vortex focusing system as an ion source for extractive ionization of compounds under atmospheric pressure. The ionization of volatile substances in crossflows of a charged aerosol and an analyte (for model compounds of the amine class, viz., diethylaniline, triamylamine, and cocaine) has been investigated. It has been shown that the limit of detecting cocaine vapor by this method is on the level of 4.6 × 10⁻¹⁴ g/cm³.
Sound source localization method in an environment with flow based on Amiet-IMACS
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin
2017-05-01
A sound source localization method is proposed to localize and analyze sound sources in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources in airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds was conducted. The experiment demonstrates the advantage of Amiet-IMACS in localizing sound source positions more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.
Self-consistent multidimensional electron kinetic model for inductively coupled plasma sources
NASA Astrophysics Data System (ADS)
Dai, Fa Foster
Inductively coupled plasma (ICP) sources have received increasing interest in microelectronics fabrication and the lighting industry. In 2-D configuration space (r, z) and a 2-D velocity domain (ν_θ, ν_z), a self-consistent electron kinetic analytic model is developed for various ICP sources. The electromagnetic (EM) model is established based on modal analysis, while the kinetic analysis gives the perturbed Maxwellian distribution of electrons by solving the Boltzmann-Vlasov equation. The self-consistent algorithm combines the EM model and the kinetic analysis by updating their results consistently until the solution converges. The closed-form solutions in the analytical model provide rigorous and fast computation of the EM fields and the electron kinetic behavior. The kinetic analysis shows that the RF energy in an ICP source is extracted by a collisionless dissipation mechanism if the electron thermal velocity is close to the RF phase velocities. A criterion for collisionless damping is thus given based on the analytic solutions. To achieve uniformly distributed plasma for plasma processing, we propose a novel discharge structure with both planar and vertical coil excitations. The theoretical results demonstrate improved uniformity of the excited azimuthal E-field in the chamber. Non-monotonic spatial decay in electric field and space current distributions was recently observed in weakly collisional plasmas. The anomalous skin effect is found to be responsible for this phenomenon. The proposed model successfully reproduces the non-monotonic spatial decay effect and achieves good agreement with measurements for different applied RF powers. The proposed analytical model is compared with other theoretical models and different experimental measurements. The developed model is also applied to two kinds of ICP discharges used for electrodeless light sources. One structure uses a vertical internal coil antenna to excite plasmas, and another has a metal shield to prevent electromagnetic radiation. The theoretical results delivered by the proposed model agree quite well with the experimental measurements in many aspects. Therefore, the proposed self-consistent model provides an efficient and reliable means for designing ICP sources in various applications such as VLSI fabrication and electrodeless light sources.
Synchrotron radiation and diffusive shock acceleration - A short review and GRB perspective
NASA Astrophysics Data System (ADS)
Karlica, Mile
2015-12-01
In this talk we present the "sponge" model and its possible implications for GRB afterglow light curves. The "sponge" model describes the source of GRB afterglow radiation as fragmented GRB ejecta in which bubbles move through a rarefied medium. The first part of the talk gives a short introduction to synchrotron radiation and Fermi acceleration. Under the assumption that the X-ray luminosity of the GRB afterglow phase comes from the kinetic energy losses of clouds in the ejecta medium, radiated as synchrotron radiation, we solved a deliberately simple equation of motion to find which combination of cloud and medium regimes best describes the afterglow light curve. As a first step, we considered simple combinations of expansion regimes for both the bubbles and the surrounding medium. The case closest to the numerical fit of GRB 150403A, with time power-law index k = 1.38, is the combination of constant bubbles and a Sedov-like expanding medium, with time power-law index k = 1.25. The question of possible mixtures of various regime combinations remains open within this model.
Lin, Kai; Zhang, Lanwei; Han, Xue; Meng, Zhaoxu; Zhang, Jianming; Wu, Yifan; Cheng, Dayou
2018-03-28
In this study, Qula casein derived from yak milk casein was hydrolyzed using a two-enzyme combination approach, and peptides with high angiotensin I-converting enzyme (ACE) inhibitory activity were screened by quantitative structure-activity relationship (QSAR) modeling integrated with molecular docking analysis. Hydrolysates (<3 kDa) derived from combinations of thermolysin + alcalase and thermolysin + proteinase K demonstrated high ACE inhibitory activities. Peptide sequences in hydrolysates derived from these two combinations were identified by liquid chromatography-tandem mass spectrometry (LC-MS/MS). On the basis of the QSAR modeling prediction, a total of 16 peptides were selected for molecular docking analysis. The docking study revealed that four of the peptides (KFPQY, MPFPKYP, MFPPQ, and QWQVL) bound the active site of ACE. These four novel peptides were chemically synthesized, and their IC50 values were determined. Among these peptides, KFPQY showed the highest ACE inhibitory activity (IC50 = 12.37 ± 0.43 μM). Our study indicated that Qula casein presents an excellent source for producing ACE inhibitory peptides.
NASA Astrophysics Data System (ADS)
Sousa, Vagner Candido de; Silva, Tarcísio Marinelli Pereira; De Marqui Junior, Carlos
2017-10-01
In this paper, the combined effects of semi-passive control using shunted piezoelectric material and the passive pseudoelastic hysteresis of shape memory springs on the aeroelastic behavior of a typical section are investigated. An aeroelastic model that accounts for the presence of both smart materials, employed as mechanical energy dissipation devices, is presented. The Brinson model is used to simulate the shape memory material. New expressions for modeling the synchronized switch damping on inductor technique (developed for enhanced piezoelectric damping) are presented, resulting in better agreement with experimental data. The individual effects of each nonlinear mechanism on the aeroelastic behavior of the typical section are first verified. Later, the combined effects of semi-passive piezoelectric control and passive shape memory alloy springs on the post-critical behavior of the system are discussed in detail. The range of post-flutter airflow speeds with stable limit cycle oscillations is significantly increased due to the combined effects of both sources of energy dissipation, providing an effective and autonomous way to modify the behavior of aeroelastic systems using smart materials.
A Targeted Search for Point Sources of EeV Photons with the Pierre Auger Observatory
NASA Astrophysics Data System (ADS)
Aab, A.; Abreu, P.; Aglietta, M.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Anastasi, G. A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arqueros, F.; Arsene, N.; Asorey, H.; Assis, P.; Aublin, J.; Avila, G.; Badescu, A. M.; Balaceanu, A.; Barreira Luz, R. J.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Biteau, J.; Blaess, S. G.; Blanco, A.; Blazek, J.; Bleve, C.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Borodai, N.; Botti, A. M.; Brack, J.; Brancus, I.; Bretz, T.; Bridgeman, A.; Briechle, F. L.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, L.; Cancio, A.; Canfora, F.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Chavez, A. G.; Chinellato, J. A.; Chudoba, J.; Clay, R. W.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Cronin, J.; D'Amico, S.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Jong, S. J.; De Mauro, G.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; Debatin, J.; Deligny, O.; Di Giulio, C.; Di Matteo, A.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; D'Olivo, J. C.; Dorosti, Q.; dos Anjos, R. C.; Dova, M. T.; Dundovic, A.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Fick, B.; Figueira, J. M.; Filipčič, A.; Fratu, O.; Freire, M. M.; Fujii, T.; Fuster, A.; Gaior, R.; García, B.; Garcia-Pinto, D.; Gaté, F.; Gemmeke, H.; Gherghel-Lascu, A.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Głas, D.; Glaser, C.; Golup, G.; Gómez Berisso, M.; Gómez Vitale, P. F.; González, N.; Gorgi, A.; Gorham, P.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huege, T.; Hulsman, J.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Johnsen, J. A.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Katkov, I.; Keilhauer, B.; Kemp, E.; Kemp, J.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kuempel, D.; Kukec Mezek, G.; Kunka, N.; Kuotb Awad, A.; LaHurd, D.; Lauscher, M.; Legumina, R.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopes, L.; López, R.; López Casado, A.; Luce, Q.; Lucero, A.; Malacari, M.; Mallamaci, M.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Mariş, I. C.; Marsella, G.; Martello, D.; Martinez, H.; Martínez Bravo, O.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melo, D.; Menshikov, A.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Mockler, D.; Mollerach, S.; Montanet, F.; Morello, C.; Mostafá, M.; Müller, A. L.; Müller, G.; Muller, M. A.; Müller, S.; Mussa, R.; Naranjo, I.; Nellen, L.; Nguyen, P. H.; Niculescu-Oglinzanu, M.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, H.; Núñez, L. 
A.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Palatka, M.; Pallotta, J.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pedreira, F.; Pȩkala, J.; Pelayo, R.; Peña-Rodriguez, J.; Pereira, L. A. S.; Perlín, M.; Perrone, L.; Peters, C.; Petrera, S.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Ramos-Pollan, R.; Rautenberg, J.; Ravignani, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rogozin, D.; Roncoroni, M. J.; Roth, M.; Roulet, E.; Rovero, A. C.; Ruehl, P.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santos, E. M.; Santos, E.; Sarazin, F.; Sarmento, R.; Sarmiento, C. A.; Sato, R.; Schauer, M.; Scherini, V.; Schieler, H.; Schimp, M.; Schmidt, D.; Scholten, O.; Schovánek, P.; Schröder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sigl, G.; Silli, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sonntag, S.; Sorokin, J.; Squartini, R.; Stanca, D.; Stanič, S.; Stasielak, J.; Stassi, P.; Strafella, F.; Suarez, F.; Suarez Durán, M.; Sudholz, T.; Suomijärvi, T.; Supanitsky, A. D.; Swain, J.; Szadkowski, Z.; Taboada, A.; Taborda, O. A.; Tapia, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. J.; Tomankova, L.; Tomé, B.; Torralba Elipe, G.; Travnicek, P.; Trini, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Vergara Quispe, I. D.; Verzi, V.; Vicha, J.; Villaseñor, L.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weindl, A.; Wiencke, L.; Wilczyński, H.; Winchen, T.; Wirtz, M.; Wittkowski, D.; Wundheiler, B.; Yang, L.; Yelos, D.; Yushkov, A.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zimmermann, B.; Ziolkowski, M.; Zong, Z.; Zuccarello, F.
2017-03-01
Simultaneous measurements of air showers with the fluorescence and surface detectors of the Pierre Auger Observatory allow a sensitive search for EeV photon point sources. Several Galactic and extragalactic candidate objects are grouped in classes to reduce the statistical penalty of many trials from that of a blind search and are analyzed for a significant excess above the background expectation. The presented search does not find any evidence for photon emission at candidate sources, and combined p-values for every class are reported. Particle and energy flux upper limits are given for selected candidate sources. These limits significantly constrain predictions of EeV proton emission models from non-transient Galactic and nearby extragalactic sources, as illustrated for the particular case of the Galactic center region.
Linear Power-Flow Models in Multiphase Distribution Networks: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, Andrey; Dall'Anese, Emiliano
This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus voltages, line currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
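A single-phase toy of the fixed-point interpretation underlying the linearization (the paper's contribution is the multiphase wye/delta treatment, which this sketch omits); the feeder topology, admittances, and loads are hypothetical:

```python
import numpy as np

# Fixed-point view of power flow on a 3-bus chain (slack bus 0, load buses 1-2):
# V = w + Y_LL^{-1} * conj(S / V), iterated from the zero-load voltage w.
y12, y23 = 10 - 30j, 8 - 24j          # line admittances (p.u.), hypothetical
Y_LL = np.array([[y12 + y23, -y23],
                 [-y23,       y23]])  # load-bus block of the bus admittance matrix
Y_L0 = np.array([-y12, 0.0])          # coupling to the slack bus
V0 = 1.0 + 0.0j                       # slack voltage (p.u.)
S = np.array([-0.10 - 0.05j, -0.08 - 0.03j])  # net injections (loads negative)

Z = np.linalg.inv(Y_LL)
w = -Z @ (Y_L0 * V0)                  # zero-load voltage profile (here flat)
V = w.copy()
for _ in range(20):
    V = w + Z @ np.conj(S / V)        # fixed-point update

print("|V| =", np.abs(V))
```

Linear models of the paper's kind are obtained by truncating this map around w; the sketch above just iterates the exact map to show the fixed point it is anchored to.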
NASA Astrophysics Data System (ADS)
Vaysman, Ya I.; Surkov, AA; Surkova, Yu I.; Kychkin, AV
2017-06-01
The article is devoted to the use of renewable energy sources (RES) and the assessment of the feasibility of their use in the climatic conditions of the Western Urals. A simulation model that calculates the efficiency of a combined power installation (CPI) based on RES was developed. The CPI consists of a geothermal heat pump (GHP) and a vacuum solar collector (VCS) and is based on the research model. This model allows solving a wide range of problems in the field of energy and resource efficiency and can be applied to other objects using RES. Based on the research, recommendations for optimizing the management and application of the CPI were given. The optimization system will give a positive effect on the energy and resource consumption of low-rise residential building projects.
On buoyancy-driven natural ventilation of a room with a heated floor
NASA Astrophysics Data System (ADS)
Gladstone, Charlotte; Woods, Andrew W.
2001-08-01
The natural ventilation of a room, both with a heated floor and connected to a cold exterior through two openings, is investigated by combining quantitative models with analogue laboratory experiments. The heated floor generates an areal source of buoyancy while the openings allow displacement ventilation to operate. When combined, these produce a steady state in which the air in the room is well-mixed, and the heat provided by the floor equals the heat lost by displacement. We develop a quantitative model describing this process, in which the advective heat transfer through the openings is balanced with the heat flux supplied at the floor. This model is successfully tested with observations from small-scale analogue laboratory experiments. We compare our results with the steady-state flow associated with a point source of buoyancy: for a given applied heat flux, an areal source produces heated air of lower temperature but a greater volume flux of air circulates through the room. We generalize the model to account for the effects of (i) a cooled roof as well as a heated floor, and (ii) an external wind or temperature gradient. In the former case, the direction of the flow through the openings depends on the temperature of the exterior air relative to an averaged roof and floor temperature. In the latter case, the flow is either buoyancy dominated or wind dominated depending on the strength of the pressure associated with the wind. Furthermore, there is an intermediate multiple-solution regime in which either flow regime may develop.
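A closed-form evaluation sketch of the steady-state balance described above: the buoyancy flux supplied by the heated floor equals the flux removed by displacement ventilation, with volume flux Q = A_eff (g'H)^{1/2}; all numbers are illustrative, and A_eff is a lumped effective opening area (the paper's model resolves the two openings separately):

```python
import numpy as np

# Steady "emptying filling box" balance for an areal heat source:
# buoyancy input B = g*beta*F/(rho*cp) equals ventilation output Q*g',
# with displacement flow Q = A_eff * sqrt(g' * H).
g, beta = 9.81, 1.0 / 293.0      # gravity, thermal expansion of air (1/K)
rho, cp = 1.2, 1005.0            # air density (kg/m3), heat capacity (J/kg/K)
F = 2000.0                       # heat flux from the floor (W), illustrative
A_eff = 0.1                      # effective opening area (m2), illustrative
H = 3.0                          # height between openings (m)

B = g * beta * F / (rho * cp)                    # buoyancy flux (m4/s3)
g_prime = (B / A_eff) ** (2.0 / 3.0) / H ** (1.0 / 3.0)
dT = g_prime / (g * beta)                        # interior-exterior temp. difference
Q = A_eff * np.sqrt(g_prime * H)                 # ventilation volume flux (m3/s)

print(f"dT = {dT:.1f} K, Q = {Q:.3f} m3/s")
```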
NASA Astrophysics Data System (ADS)
Kutty, Govindan; Muraleedharan, Rohit; Kesarkar, Amit P.
2018-03-01
Uncertainties in numerical weather prediction models are generally not well represented in ensemble-based data assimilation (DA) systems. The performance of an ensemble-based DA system becomes suboptimal if the sources of error are undersampled in the forecast system. The present study examines the effect of accounting for model error treatments in the hybrid ensemble transform Kalman filter—three-dimensional variational (3DVAR) DA system (hybrid) on the track forecast of two tropical cyclones, viz. Hudhud and Thane, formed over the Bay of Bengal, using the Advanced Research Weather Research and Forecasting (ARW-WRF) model. We investigated the effect of two types of model error treatment schemes and their combination on the hybrid DA system: (i) a multiphysics approach, which uses different combinations of cumulus, microphysics and planetary boundary layer schemes; (ii) the stochastic kinetic energy backscatter (SKEB) scheme, which perturbs the horizontal wind and potential temperature tendencies; and (iii) a combination of both the multiphysics and SKEB schemes. Substantial improvements are noticed in the track positions of both cyclones when flow-dependent ensemble covariance is used in the 3DVAR framework. Explicit model error representation is found to be beneficial in treating underdispersive ensembles. Among the model error schemes used in this study, the combination of multiphysics and SKEB schemes outperformed the other two schemes, with improved track forecasts for both tropical cyclones.
Cortical Hierarchies Perform Bayesian Causal Inference in Multisensory Perception
Rohe, Tim; Noppeney, Uta
2015-01-01
To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the “causal inference problem.” Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world. PMID:25710328
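A numerical sketch of the Bayesian Causal Inference observer that the study builds on (in the style of Körding et al.'s formulation), using grid integration over candidate source locations; all noise and prior parameters are hypothetical:

```python
import numpy as np

# One audiovisual trial: compare the evidence that the auditory and visual
# cues share a common source (C=1) vs. two independent sources (C=2), then
# model-average the location estimate.
s = np.linspace(-30, 30, 1201)                 # candidate source locations (deg)
ds = s[1] - s[0]
sig_a, sig_v, sig_p = 6.0, 2.0, 15.0           # cue and prior SDs (hypothetical)
p_common = 0.5                                 # prior P(C=1)
x_a, x_v = 8.0, 2.0                            # noisy cues on this trial

def norm(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

prior = norm(s, 0.0, sig_p)
# C=1: one source generates both cues.
ev1 = np.sum(norm(x_a, s, sig_a) * norm(x_v, s, sig_v) * prior) * ds
# C=2: independent sources generate each cue.
ev2 = (np.sum(norm(x_a, s, sig_a) * prior) * ds) * \
      (np.sum(norm(x_v, s, sig_v) * prior) * ds)

post_c1 = ev1 * p_common / (ev1 * p_common + ev2 * (1 - p_common))

# Location estimates under each causal structure, then model averaging.
post_fused = norm(x_a, s, sig_a) * norm(x_v, s, sig_v) * prior   # forced fusion
s_fused = np.sum(s * post_fused) / np.sum(post_fused)
post_seg = norm(x_v, s, sig_v) * prior                           # segregation
s_seg = np.sum(s * post_seg) / np.sum(post_seg)
s_hat = post_c1 * s_fused + (1 - post_c1) * s_seg

print(f"P(common) = {post_c1:.2f}, visual location estimate = {s_hat:.1f} deg")
```

The study's fMRI analysis maps exactly these intermediate quantities (segregation, forced fusion, and the final model-averaged estimate) onto successive cortical stages.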
Chinique de Armas, Yadira; Roksandic, Mirjana; Nikitović, Dejana; Rodríguez Suárez, Roberto; Smith, David; Kanik, Nadine; García Jordá, Dailys; Buhay, William M
2017-01-01
The general lack of well-preserved juvenile skeletal remains from Caribbean archaeological sites has, in the past, prevented evaluations of juvenile dietary changes. Canímar Abajo (Cuba), with a large number of well-preserved juvenile and adult skeletal remains, provided a unique opportunity to fully assess juvenile paleodiets from an ancient Caribbean population. Ages for the start and the end of weaning and possible food sources used for weaning were inferred by combining the results of two Bayesian probability models that help to reduce some of the uncertainties inherent to bone collagen isotope based paleodiet reconstructions. Bone collagen (31 juveniles, 18 adult females) was used for carbon and nitrogen isotope analyses. The isotope results were assessed using two Bayesian probability models: Weaning Ages Reconstruction with Nitrogen isotopes and Stable Isotope Analyses in R. Breast milk seems to have been the most important protein source until two years of age with some supplementary food such as tropical fruits and root cultigens likely introduced earlier. After two, juvenile diets were likely continuously supplemented by starch rich foods such as root cultigens and legumes. By the age of three, the model results suggest that the weaning process was completed. Additional indications suggest that animal marine/riverine protein and maize, while part of the Canímar Abajo female diets, were likely not used to supplement juvenile diets. The combined use of both models here provided a more complete assessment of the weaning process for an ancient Caribbean population, indicating not only the start and end ages of weaning but also the relative importance of different food sources for different age juveniles.
Adaptive data-driven models for estimating carbon fluxes in the Northern Great Plains
Wylie, B.K.; Fosnight, E.A.; Gilmanov, T.G.; Frank, A.B.; Morgan, J.A.; Haferkamp, Marshall R.; Meyers, T.P.
2007-01-01
Rangeland carbon fluxes are highly variable in both space and time. Given the expansive areas of rangelands, how rangelands respond to climatic variation, management, and soil potential is important to understanding carbon dynamics. Rangeland carbon fluxes associated with Net Ecosystem Exchange (NEE) were measured from multiple-year data sets at five flux tower locations in the Northern Great Plains. These flux tower measurements were combined with 1-km² spatial data sets of Photosynthetically Active Radiation (PAR), Normalized Difference Vegetation Index (NDVI), temperature, precipitation, seasonal NDVI metrics, and soil characteristics. Flux tower measurements were used to train and select variables for a rule-based piecewise regression model. The accuracy and stability of the model were assessed through random cross-validation and cross-validation by site and year. Estimates of NEE were produced for each 10-day period during each growing season from 1998 to 2001. Growing season carbon flux estimates were combined with winter flux estimates to derive and map annual estimates of NEE. The rule-based piecewise regression model is a dynamic, adaptive model that captures the relationships of the spatial data to NEE as conditions evolve throughout the growing season. The carbon dynamics in the Northern Great Plains proved to be in near equilibrium, with the region serving as a small carbon sink in 1999 and as a small carbon source in 1998, 2000, and 2001. Patterns of carbon sinks and sources are very complex, with the carbon dynamics tilting toward sources in the drier west and toward sinks in the east and near the mountains in the extreme west. Significant local variability exists, which initial investigations suggest is likely related to local climate variability, soil properties, and management.
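A minimal stand-in for the rule-based piecewise regression idea: a shallow decision tree defines the rules and a linear model of the remote-sensing predictors is fit within each rule. The data are synthetic, and the study's own rule learner may differ in how rules are induced and combined:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

# Rule-based piecewise regression sketch: tree leaves act as rules, each with
# its own linear model of NDVI, PAR, and temperature. All data synthetic.
rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 3))              # columns: NDVI, PAR, temperature
y = np.where(X[:, 0] > 0.5,                  # two flux regimes by NDVI
             2.0 * X[:, 1] - 1.0 * X[:, 2],
             0.3 * X[:, 1]) + 0.05 * rng.standard_normal(2000)

tree = DecisionTreeRegressor(max_leaf_nodes=4).fit(X, y)
leaves = tree.apply(X)
models = {leaf: LinearRegression().fit(X[leaves == leaf], y[leaves == leaf])
          for leaf in np.unique(leaves)}

x_new = np.array([[0.7, 0.5, 0.4]])          # a new 1-km cell's predictors
leaf = tree.apply(x_new)[0]
print("predicted NEE (arbitrary units):", models[leaf].predict(x_new)[0])
```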
Force on Force Modeling with Formal Task Structures and Dynamic Geometry
2017-03-24
task framework, derived using the MMF methodology to structure a complex mission. It further demonstrated the integration of effects from a range of...application methodology was intended to support a combined developmental testing (DT) and operational testing (OT) strategy for selected systems under test... methodology to develop new or modify existing Models and Simulations (M&S) to: • Apply data from multiple, distributed sources (including test
A generic open-source software framework supporting scenario simulations in bioterrorist crises.
Falenski, Alexander; Filter, Matthias; Thöns, Christian; Weiser, Armin A; Wigger, Jan-Frederik; Davis, Matthew; Douglas, Judith V; Edlund, Stefan; Hu, Kun; Kaufman, James H; Appel, Bernd; Käsbohrer, Annemarie
2013-09-01
Since the 2001 anthrax attack in the United States, awareness of threats originating from bioterrorism has grown. This led internationally to increased research efforts to improve knowledge of, and approaches to, protecting human and animal populations against the threat from such attacks. A collaborative effort in this context is the extension of the open-source Spatiotemporal Epidemiological Modeler (STEM) simulation and modeling software for agro- or bioterrorist crisis scenarios. STEM, originally designed to enable community-driven public health disease models and simulations, was extended with new features that enable integration of proprietary data as well as visualization of agent spread along supply and production chains. STEM now provides a fully developed open-source software infrastructure supporting critical modeling tasks such as ad hoc model generation, parameter estimation, simulation of scenario evolution, estimation of effects of mitigation or management measures, and documentation. This open-source software resource can be used free of charge. Additionally, STEM provides critical features like built-in worldwide data on administrative boundaries, transportation networks, or environmental conditions (e.g., rainfall, temperature, elevation, vegetation). Users can easily combine their own confidential data with built-in public data to create customized models of the desired resolution. STEM also supports collaborative and joint efforts in crisis situations through extended import and export functionalities. In this article we demonstrate specifically those new software features implemented to support STEM's application in agro- or bioterrorist crisis scenarios.
Fast in-memory elastic full-waveform inversion using consumer-grade GPUs
NASA Astrophysics Data System (ADS)
Sivertsen Bergslid, Tore; Birger Raknes, Espen; Arntsen, Børge
2017-04-01
Full-waveform inversion (FWI) is a technique to estimate subsurface properties by using the recorded waveform produced by a seismic source and applying inverse theory. This is done through an iterative optimization procedure, where each iteration requires solving the wave equation many times, then trying to minimize the difference between the modeled and the measured seismic data. Having to model many of these seismic sources per iteration means that this is a highly computationally demanding procedure, which usually involves writing a lot of data to disk. We have written code that does forward modeling and inversion entirely in memory. A typical HPC cluster has many more CPUs than GPUs. Since FWI involves modeling many seismic sources per iteration, the obvious approach is to parallelize the code on a source-by-source basis, where each core of the CPU performs one modeling and all modelings run simultaneously. With this approach, the GPU is already at a major disadvantage in pure numbers. Fortunately, GPUs can more than make up for this hardware disadvantage by performing each modeling much faster than a CPU. Another benefit of parallelizing each individual modeling is that it lets each modeling use a lot more RAM. If one node has 128 GB of RAM and 20 CPU cores, each modeling can use only 6.4 GB of RAM if one is running the node at full capacity with source-by-source parallelization on the CPU. A code parallelized per source using GPUs can use 64 GB of RAM per modeling. Whenever a modeling uses more RAM than is available and has to start using regular disk space, the runtime increases dramatically due to slow file I/O. The extremely high computational speed of the GPUs, combined with the large amount of RAM available for each modeling, lets us do high-frequency FWI for fairly large models very quickly. For a single modeling, our GPU code outperforms the single-threaded CPU code by a factor of about 75. Successful inversions have been run on data with frequencies up to 40 Hz for a model of 2001 by 600 grid points with 5 m grid spacing and 5000 time steps, in less than 2.5 minutes per source. In practice, using 15 nodes (30 GPUs) to model 101 sources, each iteration took approximately 9 minutes. For reference, the same inversion run with our CPU code uses two hours per iteration. This was done using only a very simple wavefield interpolation technique, saving every second timestep. Using a more sophisticated checkpointing or wavefield reconstruction method would allow us to increase this model size significantly. Our results show that ordinary gaming GPUs are a viable alternative to the expensive professional GPUs often used today when performing large-scale modeling and inversion in geophysics.
On numerical model of time-dependent processes in three-dimensional porous heat-releasing objects
NASA Astrophysics Data System (ADS)
Lutsenko, Nickolay A.
2016-10-01
Gas flows in a gravity field through porous objects containing heat-releasing sources are investigated in the regime where the flow rate of the gas passing through the porous object is self-regulating. Such objects can appear after various natural or man-made disasters (like the exploded unit of the Chernobyl NPP). A mathematical model and an original numerical method, based on a combination of explicit and implicit finite difference schemes, are developed for investigating time-dependent processes in 3D porous energy-releasing objects. The advantage of the numerical model is its ability to describe unsteady processes under both natural convection and forced filtration. The gas cooling of 3D porous objects with different distributions of heat sources is studied using computational experiments.
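A one-dimensional sketch of the explicit/implicit mix mentioned above: the stiff diffusion operator is stepped implicitly (backward Euler, one tridiagonal solve per step) while the heat-release source is treated explicitly; geometry, diffusivity, and source values are hypothetical and the actual model is 3D with convection:

```python
import numpy as np
from scipy.linalg import solve_banded

# Implicit diffusion + explicit heat source for a 1-D porous column.
nx, L = 100, 1.0
dx = L / nx
alpha, dt = 1e-4, 1.0               # effective diffusivity (m2/s), step (s)
q = np.zeros(nx); q[40:60] = 0.5    # localized heat release (K/s), hypothetical
T = np.zeros(nx)                    # temperature above ambient, T = 0 at walls

r = alpha * dt / dx**2
# Banded (tridiagonal) matrix for (I - r*Laplacian), Dirichlet boundaries.
ab = np.zeros((3, nx))
ab[0, 1:] = -r                      # superdiagonal
ab[1, :] = 1.0 + 2.0 * r            # diagonal
ab[2, :-1] = -r                     # subdiagonal

for step in range(5000):
    rhs = T + dt * q                # explicit source, implicit diffusion
    T = solve_banded((1, 1), ab, rhs)

print("peak temperature rise (K):", T.max())
```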
Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture
NASA Astrophysics Data System (ADS)
Meng, Chunfang
2017-03-01
We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for the (quasi-)static problem and an explicit solver for the dynamic problem. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static state and the dynamic state. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.
Evaluation of substitution monopole models for tire noise sound synthesis
NASA Astrophysics Data System (ADS)
Berckmans, D.; Kindt, P.; Sas, P.; Desmet, W.
2010-01-01
Due to the considerable efforts in engine noise reduction, tire noise has become one of the major sources of passenger car noise, and the demand for accurate prediction models is high. A rolling tire is therefore experimentally characterized by means of the substitution monopole technique, suiting a general sound synthesis approach with a focus on perceived sound quality. The running tire is substituted by a monopole distribution covering the static tire. All monopoles have mutual phase relationships and a well-defined volume velocity distribution, which is derived by means of the airborne source quantification technique, i.e. by combining static transfer function measurements with operating indicator pressure measurements close to the rolling tire. Models with varying numbers/locations of monopoles are discussed, and the application of different regularization techniques is evaluated.
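A sketch of the airborne source quantification step: with transfer functions H from the substitution monopoles to indicator microphones and operating pressures p, the monopole volume velocities q solve p = Hq in a regularized least-squares sense. The arrays below are synthetic placeholders, and the regularization weight is an ad hoc choice (the paper evaluates several regularization techniques):

```python
import numpy as np

# Recover monopole volume velocities q from p = H q (one frequency line).
rng = np.random.default_rng(3)
n_mics, n_mono = 12, 6          # indicator microphones, substitution monopoles

H = rng.standard_normal((n_mics, n_mono)) + 1j * rng.standard_normal((n_mics, n_mono))
q_true = rng.standard_normal(n_mono) + 1j * rng.standard_normal(n_mono)
p = H @ q_true + 0.05 * (rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics))

# Tikhonov-regularized inversion: (H^H H + lam^2 I) q = H^H p.
lam = 0.1 * np.linalg.norm(H, 2)
q_hat = np.linalg.solve(H.conj().T @ H + lam**2 * np.eye(n_mono), H.conj().T @ p)

print("relative error:", np.linalg.norm(q_hat - q_true) / np.linalg.norm(q_true))
```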
Word Processing Curriculum Guide.
ERIC Educational Resources Information Center
Anderson, Marcia A.; Kusek, Robert W.
A combination of facts, examples, models, tools, and sources useful in developing and teaching word processing (WP) programs is provided in this guide. Eight sections are included. Sections 1 and 2 present introductory information on WP (e.g., history, five phases of WP, problems occurring in WP offices, factors of people, procedures, and…
Environmental science and management are fed by individual studies of pollution effects, often focused on single locations. Data are encountered data, typically from multiple sources and on different time and spatial scales. Statistical issues including publication bias and m...
Modeling measured glottal volume velocity waveforms.
Verneuil, Andrew; Berry, David A; Kreiman, Jody; Gerratt, Bruce R; Ye, Ming; Berke, Gerald S
2003-02-01
The source-filter theory of speech production describes a glottal energy source (volume velocity waveform) that is filtered by the vocal tract and radiates from the mouth as phonation. The characteristics of the volume velocity waveform, the source that drives phonation, have been estimated, but never directly measured at the glottis. To accomplish this measurement, constant temperature anemometer probes were used in an in vivo canine constant pressure model of phonation. A 3-probe array was positioned supraglottically, and an endoscopic camera was positioned subglottically. Simultaneous recordings of airflow velocity (using anemometry) and glottal area (using stroboscopy) were made in 3 animals. Glottal airflow velocities and areas were combined to produce direct measurements of glottal volume velocity waveforms. The anterior and middle parts of the glottis contributed significantly to the volume velocity waveform, with less contribution from the posterior part of the glottis. The measured volume velocity waveforms were successfully fitted to a well-known laryngeal airflow model. A noninvasive measured volume velocity waveform holds promise for future clinical use.
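A toy of the final combination step: the glottal volume velocity waveform is the sum over segments of measured airflow velocity times imaged area, U(t) = Σ v_i(t) a_i(t). The segment areas and velocities below are hypothetical round numbers, with the anterior and middle segments dominating as reported above:

```python
import numpy as np

# Combine per-segment flow velocities with per-segment glottal areas.
t = np.linspace(0.0, 0.02, 400)                        # two 100-Hz cycles (s)
open_phase = np.clip(np.sin(2 * np.pi * 100.0 * t), 0.0, None)

a = {"anterior": 0.10, "middle": 0.08, "posterior": 0.02}        # areas, cm^2
v = {"anterior": 4000.0, "middle": 3500.0, "posterior": 1500.0}  # velocities, cm/s

U = sum(a[s] * open_phase * v[s] for s in a)           # volume velocity, cm^3/s
print(f"peak volume velocity: {U.max():.0f} cm^3/s")
```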
NASA Astrophysics Data System (ADS)
Courageot, Estelle; Sayah, Rima; Huet, Christelle
2010-05-01
Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. When the dose distribution is evaluated with a numerical anthropomorphic model, the posture and morphology of the victim have to be reproduced as realistically as possible. Several years ago, IRSN developed a specific software application, called the simulation of external source accident with medical images (SESAME), for the dosimetric reconstruction of radiological accidents by numerical simulation. This tool combines voxel geometry and the MCNP(X) Monte Carlo computer code for radiation-material interaction. This note presents a new functionality in this software that enables the modelling of a victim's posture and morphology based on non-uniform rational B-spline (NURBS) surfaces. The procedure for constructing the modified voxel phantoms is described, along with a numerical validation of this new functionality using a voxel phantom of the RANDO tissue-equivalent physical model.
Systems biology derived source-sink mechanism of BMP gradient formation.
Zinski, Joseph; Bu, Ye; Wang, Xu; Dou, Wei; Umulis, David; Mullins, Mary C
2017-08-09
A morphogen gradient of Bone Morphogenetic Protein (BMP) signaling patterns the dorsoventral embryonic axis of vertebrates and invertebrates. The prevailing view in vertebrates for BMP gradient formation is through a counter-gradient of BMP antagonists, often along with ligand shuttling to generate peak signaling levels. To delineate the mechanism in zebrafish, we precisely quantified the BMP activity gradient in wild-type and mutant embryos and combined these data with a mathematical model-based computational screen to test hypotheses for gradient formation. Our analysis ruled out a BMP shuttling mechanism and a bmp transcriptionally-informed gradient mechanism. Surprisingly, rather than supporting a counter-gradient mechanism, our analyses support a fourth model, a source-sink mechanism, which relies on a restricted BMP antagonist distribution acting as a sink that drives BMP flux dorsally and gradient formation. We measured Bmp2 diffusion and found that it supports the source-sink model, suggesting a new mechanism to shape BMP gradients during development.
Turner, Alison; Mulla, Abeda; Booth, Andrew; Aldridge, Shiona; Stevens, Sharon; Battye, Fraser; Spilsbury, Peter
2016-10-01
NHS England's Five Year Forward View (NHS England, Five Year Forward View, 2014) formally introduced a strategy for new models of care driven by simultaneous pressures to contain costs, improve care and deliver services closer to home through integrated models. This synthesis focuses on a multispecialty community provider (MCP) model. This new model of care seeks to overcome the limitations in current models of care, often based around single condition-focused pathways, in contrast to patient-focused delivery (Royal College of General Practitioners, The 2022 GP: compendium of evidence, 2012), which offers greater continuity of care in recognition of complex needs and multimorbidity. The synthesis, an innovative combination of best fit framework synthesis and realist synthesis, will develop a "blueprint" which articulates how and why MCP models work, to inform design of future iterations of the MCP model. A systematic search will be conducted to identify research and practice-derived evidence to achieve a balance that captures the historical legacy of MCP models but focuses on contemporary evidence. Sources will include bibliographic databases including MEDLINE, PreMEDLINE, CINAHL, Embase, HMIC and the Cochrane Library, as well as grey literature sources. The best fit framework synthesis methodology will be combined with a synthesis following realist principles, which are particularly suited to exploring what works, when, for whom and in what circumstances. The aim of this synthesis is to provide decision makers in health and social care with a practical evidence base relating to the multispecialty community provider (MCP) model of care. PROSPERO CRD42016039552.
A multi-model approach to monitor emissions of CO2 and CO from an urban-industrial complex
NASA Astrophysics Data System (ADS)
Super, Ingrid; Denier van der Gon, Hugo A. C.; van der Molen, Michiel K.; Sterk, Hendrika A. M.; Hensen, Arjan; Peters, Wouter
2017-11-01
Monitoring urban-industrial emissions is often challenging because observations are scarce and regional atmospheric transport models are too coarse to represent the high spatiotemporal variability in the resulting concentrations. In this paper we apply a new combination of an Eulerian model (Weather Research and Forecast, WRF, with chemistry) and a Gaussian plume model (Operational Priority Substances - OPS). The modelled mixing ratios are compared to observed CO2 and CO mole fractions at four sites along a transect from an urban-industrial complex (Rotterdam, the Netherlands) towards rural conditions for October-December 2014. Urban plumes are well-mixed at our semi-urban location, making this location suited for an integrated emission estimate over the whole study area. The signals at our urban measurement site (with average enhancements of 11 ppm CO2 and 40 ppb CO over the baseline) are highly variable due to the presence of distinct source areas dominated by road traffic/residential heating emissions or industrial activities. This causes different emission signatures that are translated into a large variability in observed ΔCO:ΔCO2 ratios, which can be used to identify dominant source types. We find that WRF-Chem is able to represent synoptic variability in CO2 and CO (e.g. the median CO2 mixing ratio is 9.7 ppm, observed, against 8.8 ppm, modelled), but it fails to reproduce the hourly variability of daytime urban plumes at the urban site (R2 up to 0.05). For the urban site, adding a plume model to the model framework is beneficial to adequately represent plume transport, especially from stack emissions. The explained variance in hourly, daytime CO2 enhancements from point source emissions increases from 30% with WRF-Chem to 52% with WRF-Chem in combination with the most detailed OPS simulation. The simulated variability in ΔCO:ΔCO2 ratios decreases drastically from 1.5 to 0.6 ppb ppm-1, which agrees better with the observed standard deviation of 0.4 ppb ppm-1. This is partly due to improved wind fields (increase in R2 of 0.10) but also due to improved point source representation (increase in R2 of 0.05) and dilution (increase in R2 of 0.07). Based on our analysis we conclude that a plume model with detailed and accurate dispersion parameters adds substantially to top-down monitoring of greenhouse gas emissions in urban environments with large point source contributions within a ~10 km radius of the observation sites.
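The ΔCO:ΔCO2 diagnostic used above is simple to reproduce from baseline-subtracted time series. A sketch of how such hourly ratios might be computed; the enhancement threshold and the example numbers are illustrative, not values from the paper:

```python
import numpy as np

def enhancement_ratios(co_ppb, co2_ppm, co_bg, co2_bg, min_dco2=2.0):
    """Hourly dCO:dCO2 ratios (ppb per ppm) above an assumed baseline;
    hours whose CO2 enhancement is below min_dco2 ppm are discarded to
    avoid dividing by near-zero values."""
    dco = np.asarray(co_ppb) - co_bg
    dco2 = np.asarray(co2_ppm) - co2_bg
    keep = dco2 > min_dco2
    return dco[keep] / dco2[keep]

# A traffic-dominated hour: (85 - 45) ppb / (415 - 404) ppm ~ 3.6 ppb/ppm
print(enhancement_ratios([85.0], [415.0], co_bg=45.0, co2_bg=404.0))
```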
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poppeliers, Christian; Aur, Katherine Anderson; Preston, Leiph
This report shows the results of constructing predictive atmospheric models for the Source Physics Experiments 1-6. Historic atmospheric data are combined with topography to construct an atmospheric model that corresponds to the predicted (or actual) time of a given SPE event. The models are ultimately used to construct atmospheric Green's functions to be used for subsequent analysis. We present three atmospheric models for each SPE event: an average model based on ten one-hour snapshots of the atmosphere, and two extrema models corresponding to the warmest, coolest, windiest, etc. atmospheric snapshots. The atmospheric snapshots consist of wind, temperature, and pressure profiles of the atmosphere for a one-hour time window centered at the time of the predicted SPE event, as well as nine additional snapshots for each of the nine preceding years, centered at the time and day of the SPE event.
NASA Astrophysics Data System (ADS)
Sun, Liqun; Chen, Yudao; Jiang, Lingzhi; Cheng, Yaping
2018-01-01
Groundwater level fluctuations affect BTEX dissolution in a fuel-leakage source zone. To study this effect, a gasoline leakage test was performed in a laboratory sand-tank model, and the concentrations of BTEX and the water level were monitored over a long period. The RT3D module of the VISUAL MODFLOW software was used to simulate the BTEX concentrations, and the mass flux method was used to evaluate the effects of water level fluctuation on BTEX dissolution. The results indicate that water level fluctuation can significantly increase the concentration of BTEX dissolved in the leakage source zone: the dissolved amount of BTEX can reach up to 2.4 times that observed under a stable water level. Numerical simulation combined with mass flux calculation proved an effective way to evaluate the effect of water level fluctuation on BTEX dissolution.
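The mass flux method referred to above integrates concentration times specific discharge over a control plane downgradient of the source. A generic sketch under assumed units (concentrations in mg/L, Darcy flux in m/d, cell areas in m²); none of the numbers come from the study:

```python
import numpy as np

def control_plane_mass_flux(conc, darcy_flux, cell_area):
    """Dissolved-mass flux through a control plane discretized into cells:
    sum over cells of concentration (mg/L) x Darcy flux (m/d) x area (m^2),
    converted to mg/d via 1 m^3 = 1000 L."""
    return float(np.sum(np.asarray(conc) * darcy_flux * cell_area) * 1000.0)

# Three cells across the plume (mg/L), uniform flux 0.05 m/d, 0.25 m^2 cells
flux_mg_per_day = control_plane_mass_flux([1.2, 4.5, 0.8], 0.05, 0.25)
```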
Sheldon, Kennon M; Sommet, Nicolas; Corcoran, Mike; Elliot, Andrew J
2018-04-01
We created a life-goal assessment drawing from self-determination theory and the achievement goal literature, examining its predictive power regarding immoral behavior and subjective well-being. Our source items assessed direction and energization of motivation, via the distinction between intrinsic and extrinsic aims and between intrinsic and extrinsic reasons for acting, respectively. Fused source items assessed four goal complexes representing a combination of direction and energization. Across three studies (Ns = 109, 121, and 398), the extrinsic aim/extrinsic reason complex was consistently associated with immoral and/or unethical behavior beyond four source and three other goal complex variables. This was consistent with the triangle model of responsibility's claim that immoral behaviors may result when individuals disengage the self from moral prescriptions. The extrinsic/extrinsic complex also predicted lower subjective well-being, albeit less consistently. Our goal complex approach sheds light on how self-determination theory's goal contents and organismic integration mini-theories interact, particularly with respect to unethical behavior.
NASA Astrophysics Data System (ADS)
Denolle, M.; Dunham, E. M.; Prieto, G.; Beroza, G. C.
2013-05-01
There is no clearer example of the increase in hazard due to prolonged and amplified shaking in sedimentary basins than the case of Mexico City in the 1985 Michoacan earthquake. It is critically important to identify what other cities might be susceptible to similar basin amplification effects. Physics-based simulations in 3D crustal structure can be used to model and anticipate those effects, but they rely on our knowledge of the complexity of the medium. We propose a parallel approach to validate ground motion simulations using the ambient seismic field. We compute the Earth's impulse response by combining the ambient seismic field and coda waves, enforcing causality and symmetry constraints. We correct the surface impulse responses to account for the source depth, mechanism and duration using a 1D approximation of the local surface-wave excitation. We call the new responses virtual earthquakes. We validate the ground motion predicted from the virtual earthquakes against moderate earthquakes in southern California. We then combine temporary seismic stations on the southern San Andreas Fault and extend the point source approximation of the Virtual Earthquake Approach to model finite kinematic ruptures. We confirm the coupling between source directivity and amplification in downtown Los Angeles seen in simulations.
Holtschlag, David J.; Koschik, John A.
2004-01-01
Source areas to public water intakes on the St. Clair-Detroit River Waterway were identified by use of hydrodynamic simulation and particle-tracking analyses to help protect public supplies from contaminant spills and discharges. This report describes techniques used to identify these areas and illustrates typical results using selected points on St. Clair River and Lake St. Clair. Parameterization of an existing two-dimensional hydrodynamic model (RMA2) of the St. Clair-Detroit River Waterway was enhanced to improve estimation of local flow velocities. Improvements in simulation accuracy were achieved by computing channel roughness coefficients as a function of flow depth, and determining eddy viscosity coefficients on the basis of velocity data. The enhanced parameterization was combined with refinements in the model mesh near 13 public water intakes on the St. Clair-Detroit River Waterway to improve the resolution of flow velocities while maintaining consistency with flow and water-level data. Scenarios representing a range of likely flow and wind conditions were developed for hydrodynamic simulation. Particle-tracking analyses combined advective movements described by hydrodynamic scenarios with random components associated with sub-grid-scale movement and turbulent mixing to identify source areas to public water intakes.
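Particle tracking of the kind described combines an advective step from the hydrodynamic solution with a random-walk step for sub-grid-scale turbulent mixing; tracing source areas back from an intake uses the same scheme with the velocity field reversed. A two-dimensional sketch with an assumed uniform velocity field and dispersion coefficient, purely for illustration:

```python
import numpy as np

def track_particles(x, y, u, v, D, dt, n_steps, rng):
    """Advect a cloud of particles with velocity callables u(x, y) and
    v(x, y), plus a random-walk increment sqrt(2 D dt) * N(0, 1) for
    sub-grid-scale mixing. Reversing the sign of (u, v) traces source
    areas backward from a receptor such as a water intake."""
    for _ in range(n_steps):
        dx = u(x, y) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)
        dy = v(x, y) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(y.size)
        x, y = x + dx, y + dy
    return x, y

rng = np.random.default_rng(1)
x0, y0 = np.zeros(1000), np.zeros(1000)
u = lambda x, y: np.full_like(x, 0.3)   # uniform 0.3 m/s flow, illustration only
v = lambda x, y: np.zeros_like(x)
xf, yf = track_particles(x0, y0, u, v, D=0.5, dt=60.0, n_steps=1440, rng=rng)
```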
Zheng, Jianqiu; Thornton, Peter; Painter, Scott; Gu, Baohua; Wullschleger, Stan; Graham, David
2018-06-13
This anaerobic carbon decomposition model explicitly represents fermentation, methanogenesis and iron reduction by combining three well-known modeling approaches developed in different disciplines: a pool-based model representing upstream carbon transformations and replenishment of the DOC pool, a thermodynamically based model calculating rate kinetics and biomass growth for methanogenesis and Fe(III) reduction, and a humic ion-binding model for aqueous-phase speciation and pH calculation. All three are implemented in the open-source geochemical model PHREEQC (V3.0). Installation of PHREEQC is required to run this model.
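Thermodynamically based rate laws of the sort combined here typically multiply a Monod substrate-limitation term by a thermodynamic potential factor that drives the rate to zero as the reaction's energy yield approaches the minimum needed for ATP synthesis. A generic sketch of one common functional form; the parameter names and values are illustrative assumptions, not the model's actual rate law:

```python
import numpy as np

R = 8.314e-3  # gas constant, kJ mol^-1 K^-1

def thermodynamic_rate(k_max, biomass, conc, K_s, dG_rxn, dG_min, chi, T):
    """Monod kinetics scaled by a thermodynamic potential factor F_T that
    vanishes as the catabolic energy yield (dG_rxn < 0) approaches the
    minimum dG_min needed for ATP synthesis; chi is the average
    stoichiometric number of the reaction."""
    F_K = conc / (K_s + conc)
    F_T = max(0.0, 1.0 - np.exp((dG_rxn + dG_min) / (chi * R * T)))
    return k_max * biomass * F_K * F_T

# e.g. acetoclastic methanogenesis at 4 C with illustrative parameters
rate = thermodynamic_rate(k_max=1e-5, biomass=0.1, conc=0.5, K_s=0.1,
                          dG_rxn=-30.0, dG_min=15.0, chi=2.0, T=277.15)
```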
Phase equilibria constraints on models of subduction zone magmatism
NASA Astrophysics Data System (ADS)
Myers, James D.; Johnston, Dana A.
Petrologic models of subduction zone magmatism can be grouped into three broad classes: (1) predominantly slab-derived, (2) mainly mantle-derived, and (3) multi-source. Slab-derived models assume high-alumina basalt (HAB) approximates primary magma and is derived by partial fusion of the subducting slab. Such melts must, therefore, be saturated with some combination of eclogite phases, e.g. cpx, garnet, qtz, at the pressures, temperatures and water contents of magma generation. In contrast, mantle-dominated models suggest partial melting of the mantle wedge produces primary high-magnesia basalts (HMB) which fractionate to yield derivative HAB magmas. In this context, HMB melts should be saturated with a combination of peridotite phases, i.e. ol, cpx and opx, and have liquid-lines-of-descent that produce high-alumina basalts. HAB generated in this manner must be saturated with a mafic phase assemblage at the intensive conditions of fractionation. Multi-source models combine slab and mantle components in varying proportions to generate the four main lava types (HMB, HAB, high-magnesia andesites (HMA) and evolved lavas) characteristic of subduction zones. The mechanism of mass transfer from slab to wedge as well as the nature and fate of primary magmas vary considerably among these models. Because of their complexity, these models imply a wide range of phase equilibria. Although the experiments conducted on calc-alkaline lavas are limited, they place the following limitations on arc petrologic models: (1) HAB cannot be derived from HMB by crystal fractionation at the intensive conditions thus far investigated, (2) HAB could be produced by anhydrous partial fusion of eclogite at high pressure, (3) HMB liquids can be produced by peridotite partial fusion 50-60 km above the slab-mantle interface, (4) HMA cannot be primary magmas derived by partial melting of the subducted slab, but could have formed by slab melt-peridotite interaction, and (5) many evolved calc-alkaline lavas could have been formed by crystal fractionation at a range of crustal pressures.
Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilches-Freixas, Gloria; Létang, Jean Michel; Rit,
2016-09-15
Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: the source model then the detector model. The source is described by the direction dependent photon energy spectrum at each voltage while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been exclusively used to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver combined with a dosimeter which is sensitive to the range of voltages of interest were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation suitable in most clinical environments.
Near-field hazard assessment of March 11, 2011 Japan Tsunami sources inferred from different methods
Wei, Y.; Titov, V.V.; Newman, A.; Hayes, G.; Tang, L.; Chamberlin, C.
2011-01-01
The tsunami source is the origin of the subsequent transoceanic water waves, and thus the most critical component in modern tsunami forecast methodology. Although impractical to quantify directly, a tsunami source can be estimated by different methods based on a variety of measurements provided by deep-ocean tsunameters, seismometers, GPS, and other advanced instruments, some in real time, some in post-real-time. Here we assess these different sources of the devastating March 11, 2011 Japan tsunami by model-data comparison for generation, propagation and inundation in the near field of Japan. This study provides a comparative analysis to further understand the advantages and shortcomings of different methods that may potentially be used in real-time warning and forecast of tsunami hazards, especially in the near field. The model study also highlights the critical role of deep-ocean tsunami measurements for high-quality tsunami forecasts, and their combination with land GPS measurements may lead to better understanding of both the earthquake mechanisms and the tsunami generation process. © 2011 MTS.
Volatility-resolved source apportionment of primary and secondary organic aerosol over Europe
NASA Astrophysics Data System (ADS)
Skyllakou, Ksakousti; Fountoukis, Christos; Charalampidis, Panagiotis; Pandis, Spyros N.
2017-10-01
A three-dimensional regional chemical transport model (Particulate Matter Comprehensive Air Quality Model with Extensions, PMCAMx) was applied over Europe combined with a source apportionment algorithm, the Particulate Source Apportionment Technology (PSAT), in order to quantify the sources that contribute to primary and secondary organic aerosol (OA) during different seasons. The PSAT algorithm was first extended to allow the quantification of the sources of OA as a function of volatility. The most significant OA sources were biogenic emissions during May, residential wood combustion during February, and wildfires during September. The contributions of the various sources have strong spatial dependence. Wildfires were significant OA sources (38% of the OA) for Russia during September, but had a much lower impact (5%) in Scandinavia. These results are in general consistent with the findings of the CARBOSOL project for selected sites in Europe. For remote sites such as Finokalia in Crete, more than 90% of the OA has undergone two or more generations of oxidation in all seasons. This highly processed, oxidized OA is predicted to also dominate over much of Europe during the summer and fall. First-generation SOA is predicted to represent 20-30% of the OA in central and northern Europe during these photochemically active periods.
Combining multiple sources of data to inform conservation of Lesser Prairie-Chicken populations
Ross, Beth; Haukos, David A.; Hagen, Christian A.; Pitman, James
2018-01-01
Conservation of small populations is often based on limited data from spatially and temporally restricted studies, resulting in management actions based on an incomplete assessment of the population drivers. If fluctuations in abundance are related to changes in weather, proper management is especially important, because extreme weather events could disproportionately affect population abundance. Conservation assessments, especially for vulnerable populations, are aided by a knowledge of how extreme events influence population status and trends. Although important for conservation efforts, data may be limited for small or vulnerable populations. Integrated population models maximize information from various sources of data to yield population estimates that fully incorporate uncertainty from multiple data sources while allowing for the explicit incorporation of environmental covariates of interest. Our goal was to assess the relative influence of population drivers for the Lesser Prairie-Chicken (Tympanuchus pallidicinctus) in the core of its range, western and southern Kansas, USA. We used data from roadside lek count surveys, nest monitoring surveys, and survival data from telemetry monitoring combined with climate (Palmer drought severity index) data in an integrated population model. Our results indicate that variability in population growth rate was most influenced by variability in juvenile survival. The Palmer drought severity index had no measurable direct effects on adult survival or mean number of offspring per female; however, there were declines in population growth rate following severe drought. Because declines in population growth rate occurred at a broad spatial scale, declines in response to drought were likely due to decreases in chick and juvenile survival rather than emigration outside of the study area. Overall, our model highlights the importance of accounting for environmental and demographic sources of variability, and provides a thorough method for simultaneously evaluating population demography in response to long-term climate effects.
Papadelis, Christos; Eickhoff, Simon B; Zilles, Karl; Ioannides, Andreas A
2011-01-01
This study combines source analysis of imaging data for early somatosensory processing with probabilistic cytoarchitectonic maps (PCMs). Human somatosensory evoked fields (SEFs) were recorded by stimulating the left and right median nerves. Filtering the recorded responses in different frequency ranges identified the most responsive frequency band. The short-latency averaged SEFs were analyzed using a single equivalent current dipole (ECD) model and magnetic field tomography (MFT). The identified foci of activity were superimposed with PCMs. Two major components of opposite polarity were prominent around 21 and 31 ms. A weak component around 25 ms was also identified. For the most responsive frequency band (50-150 Hz), ECD and MFT revealed one focal source at the contralateral Brodmann area 3b (BA3b) at the peak of N20. The component at ~25 ms was localised in Brodmann area 1 (BA1) in the 50-150 Hz band. Using ECD, focal generators around 28-30 ms were located initially in BA3b and, 2 ms later, in BA1. MFT also revealed two focal sources for these latencies - one in BA3b and one in BA1. Our results provide direct evidence that the earliest cortical response after median nerve stimulation is generated within the contralateral BA3b. BA1 activation a few milliseconds later indicates a serial mode of somatosensory processing within cytoarchitectonic SI subdivisions. Analysis of non-invasive magnetoencephalography (MEG) data and the use of PCMs allow unambiguous and quantitative (probabilistic) interpretation of the cytoarchitectonic identity of activated areas following median nerve stimulation, even with the simple ECD model, but only when the model fits the data extremely well. Copyright © 2010 Elsevier Inc. All rights reserved.
Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode
NASA Astrophysics Data System (ADS)
Seibert, P.; Frank, A.
2004-01-01
The possibility to calculate linear source-receptor relationships for the transport of atmospheric trace substances with a Lagrangian particle dispersion model (LPDM) running in backward mode is shown and illustrated with many tests and examples. This mode requires only minor modifications of the forward LPDM. The derivation includes the action of sources and of any first-order processes (transformation with prescribed rates, dry and wet deposition, radioactive decay, etc.). The backward mode is computationally advantageous if the number of receptors is less than the number of sources considered. The combination of an LPDM with the backward (adjoint) methodology is especially attractive for the application to point measurements, which can be handled without artificial numerical diffusion. Practical hints are provided for source-receptor calculations with different settings, both in forward and backward mode. The equivalence of forward and backward calculations is shown in simple tests for release and sampling of particles, pure wet deposition, pure convective redistribution and realistic transport over a short distance. Furthermore, an application example explaining measurements of Cs-137 in Stockholm as transport from areas contaminated heavily in the Chernobyl disaster is included.
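In backward mode the source-receptor matrix is assembled from particle residence times: the sensitivity of a receptor to a source grid cell is proportional to the time the backward particles spend in that cell. A schematic sketch of the bookkeeping; the data layout is an assumption for illustration, not the implementation used in the paper:

```python
import numpy as np

def build_srm(visits, n_cells, n_particles, dt):
    """visits[r]: 1-D array of grid-cell indices occupied by the backward
    particles released from receptor r, one entry per particle per time
    step. Each visit adds a residence time dt/n_particles to the
    receptor's sensitivity to that cell."""
    S = np.zeros((len(visits), n_cells))
    for r, cells in enumerate(visits):
        np.add.at(S[r], cells, dt / n_particles)
    return S

# Receptor concentrations then follow from any gridded source field q as
# the linear map c = S @ q, without rerunning the dispersion model.
visits = [np.array([0, 0, 1, 2, 1]), np.array([2, 2, 3, 3, 3])]
S = build_srm(visits, n_cells=4, n_particles=1, dt=600.0)
```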
The trigger mechanism of low-frequency earthquakes on Montserrat
NASA Astrophysics Data System (ADS)
Neuberg, J. W.; Tuffen, H.; Collier, L.; Green, D.; Powell, T.; Dingwell, D.
2006-05-01
A careful analysis of low-frequency seismic events on Soufrière Hills volcano, Montserrat, points to a source mechanism that is non-destructive, repetitive, and has a stationary source location. By combining these seismological clues with new field evidence and numerical magma flow modelling, we propose a seismic trigger model which is based on brittle failure of magma in the glass transition. Loss of heat and gas from the magma results in a strong viscosity gradient across a dyke or conduit. This leads to a build-up of shear stress near the conduit wall where magma can rupture in a brittle manner, as field evidence from a rhyolitic dyke demonstrates. This brittle failure provides seismic energy, the majority of which is trapped in the conduit or dyke forming the low-frequency coda of the observed seismic signal. The trigger source location marks the transition from ductile conduit flow to friction-controlled magma ascent. As the trigger mechanism is governed by the depth-dependent magma parameters, the source location remains fixed at a depth where the conditions allow brittle failure. This is reflected in the fixed seismic source locations.
de Knegt, Leonardo V; Pires, Sara M; Löfström, Charlotta; Sørensen, Gitte; Pedersen, Karl; Torpdahl, Mia; Nielsen, Eva M; Hald, Tine
2016-03-01
Salmonella is an important cause of bacterial foodborne infections in Denmark. To identify the main animal-food sources of human salmonellosis, risk managers have relied on a routine application of a microbial subtyping-based source attribution model since 1995. In 2013, multiple locus variable number tandem repeat analysis (MLVA) substituted phage typing as the subtyping method for surveillance of S. Enteritidis and S. Typhimurium isolated from animals, food, and humans in Denmark. The purpose of this study was to develop a modeling approach applying a combination of serovars, MLVA types, and antibiotic resistance profiles for the Salmonella source attribution, and assess the utility of the results for the food safety decisionmakers. Full and simplified MLVA schemes from surveillance data were tested, and model fit and consistency of results were assessed using statistical measures. We conclude that loci schemes STTR5/STTR10/STTR3 for S. Typhimurium and SE9/SE5/SE2/SE1/SE3 for S. Enteritidis can be used in microbial subtyping-based source attribution models. Based on the results, we discuss that an adjustment of the discriminatory level of the subtyping method applied often will be required to fit the purpose of the study and the available data. The issues discussed are also considered highly relevant when applying, e.g., extended multi-locus sequence typing or next-generation sequencing techniques. © 2015 Society for Risk Analysis.
Overall uncertainty study of the hydrological impacts of climate change for a Canadian watershed
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, FrançOis P.; Poulin, Annie; Leconte, Robert
2011-12-01
General circulation models (GCMs) and greenhouse gas emissions scenarios (GGES) are generally considered to be the two major sources of uncertainty in quantifying the climate change impacts on hydrology. Other sources of uncertainty have been given less attention. This study considers overall uncertainty by combining results from an ensemble of two GGES, six GCMs, five GCM initial conditions, four downscaling techniques, three hydrological model structures, and 10 sets of hydrological model parameters. Each climate projection is equally weighted to predict the hydrology on a Canadian watershed for the 2081-2100 horizon. The results show that the choice of GCM is consistently a major contributor to uncertainty. However, other sources of uncertainty, such as the choice of a downscaling method and the GCM initial conditions, also have a comparable or even larger uncertainty for some hydrological variables. Uncertainties linked to GGES and the hydrological model structure are somewhat less than those related to GCMs and downscaling techniques. Uncertainty due to the hydrological model parameter selection has the least important contribution among all the variables considered. Overall, this research underlines the importance of adequately covering all sources of uncertainty. A failure to do so may result in moderately to severely biased climate change impact studies. Results further indicate that the major contributors to uncertainty vary depending on the hydrological variables selected, and that the methodology presented in this paper is successful at identifying the key sources of uncertainty to consider for a climate change impact study.
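One way to attribute ensemble spread to the individual sources of uncertainty, in the spirit of this study, is a main-effect variance decomposition over a full factorial ensemble. A sketch; the factor sizes and the synthetic indicator are purely illustrative:

```python
import numpy as np

def main_effect_shares(ensemble):
    """ensemble: ndarray with one axis per uncertainty source, e.g.
    (GGES, GCM, downscaling, model structure, parameter set). The
    main-effect share of axis k is the variance of the ensemble mean
    taken over all other axes, normalized by the total variance."""
    total = ensemble.var()
    shares = {}
    for k in range(ensemble.ndim):
        other = tuple(i for i in range(ensemble.ndim) if i != k)
        shares[k] = float(ensemble.mean(axis=other).var() / total)
    return shares

# Toy factorial ensemble: 2 GGES x 6 GCMs x 4 downscaling methods
rng = np.random.default_rng(2)
print(main_effect_shares(rng.standard_normal((2, 6, 4))))
```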
Brooke, Russell J; Kretzschmar, Mirjam E E; Hackert, Volker; Hoebe, Christian J P A; Teunis, Peter F M; Waller, Lance A
2017-01-01
We develop a novel approach to study an outbreak of Q fever in 2009 in the Netherlands by combining a human dose-response model with geostatistical prediction to relate the probability of infection and the associated probability of illness to an effective dose of Coxiella burnetii. The spatial distribution of the 220 notified cases in the at-risk population is translated into a smooth spatial field of dose. Based on these symptomatic cases, the dose-response model predicts a median of 611 asymptomatic infections (95% range: 410, 1,084) for the 220 reported symptomatic cases in the at-risk population; 2.78 (95% range: 1.86, 4.93) asymptomatic infections for each reported case. The attack rates observed during the outbreak were low (the numerical values appear only as expressions in the full-text article). The estimated peak levels of exposure extend to the north-east from the point source, with an increasing proportion of asymptomatic infections further from the source. Our work combines established methodology from model-based geostatistics and dose-response modeling, allowing for a novel approach to study outbreaks. Unobserved infections and the spatially varying effective dose can be predicted using the flexible framework without assuming any underlying spatial structure of the outbreak process. Such predictions are important for targeting interventions during an outbreak, estimating future disease burden, and determining acceptable risk levels.
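A common single-hit form for the dose-response component is the exponential model, in which infection requires at least one organism from a Poisson-distributed dose to take hold. A sketch of that form plus the back-calculation of unobserved infections; the per-organism infectivity r and the probability of illness given infection are assumed for illustration (a value near 0.26 roughly reproduces the paper's ratio of about 2.78 asymptomatic infections per reported case):

```python
import numpy as np

def p_infection(dose, r):
    """Exponential (single-hit) dose-response: probability that at least
    one of a Poisson-distributed number of organisms with mean `dose`
    initiates infection, each independently with probability r."""
    return 1.0 - np.exp(-r * np.asarray(dose))

def expected_asymptomatic(n_cases, p_ill_given_inf):
    """Unobserved infections implied by reported cases, given the
    conditional probability of illness after infection."""
    return n_cases * (1.0 - p_ill_given_inf) / p_ill_given_inf

# 220 reported cases with an assumed P(ill | infected) = 0.26 imply
# roughly 220 * 0.74 / 0.26 ~ 626 asymptomatic infections.
print(expected_asymptomatic(220, 0.26))
```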
Cinfony – combining Open Source cheminformatics toolkits behind a common interface
O'Boyle, Noel M; Hutchison, Geoffrey R
2008-01-01
Background: Open Source cheminformatics toolkits such as OpenBabel, the CDK and the RDKit share the same core functionality but support different sets of file formats and forcefields, and calculate different fingerprints and descriptors. Despite their complementary features, using these toolkits in the same program is difficult as they are implemented in different languages (C++ versus Java), have different underlying chemical models and have different application programming interfaces (APIs). Results: We describe Cinfony, a Python module that presents a common interface to all three of these toolkits, allowing the user to easily combine methods and results from any of the toolkits. In general, the run time of the Cinfony modules is almost as fast as accessing the underlying toolkits directly from C++ or Java, but Cinfony makes it much easier to carry out common tasks in cheminformatics such as reading file formats and calculating descriptors. Conclusion: By providing a simplified interface and improving interoperability, Cinfony makes it easy to combine complementary features of OpenBabel, the CDK and the RDKit. PMID:19055766
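A usage sketch of the common-interface idea; the module and method names follow the Cinfony paper's description (pybel wrapping OpenBabel, rdk wrapping the RDKit), and running it requires Cinfony plus the underlying toolkits, so treat the exact calls as indicative rather than guaranteed:

```python
from cinfony import pybel, rdk

# Read a SMILES string with OpenBabel ...
mol = pybel.readstring("smi", "CC(=O)Oc1ccccc1C(=O)O")  # aspirin
print(mol.molwt)              # descriptor computed by OpenBabel

# ... then hand the same molecule to the RDKit through the common API
rmol = rdk.Molecule(mol)
print(rmol.calcfp().bits)     # fingerprint computed by the RDKit
```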
Joint optimization of source, mask, and pupil in optical lithography
NASA Astrophysics Data System (ADS)
Li, Jia; Lam, Edmund Y.
2014-03-01
Mask topography effects need to be taken into consideration for more advanced resolution enhancement techniques in optical lithography. However, a rigorous 3D mask model achieves high accuracy only at a large computational cost. This work develops a combined source, mask and pupil optimization (SMPO) approach by taking advantage of the fact that pupil phase manipulation is capable of partially compensating for mask topography effects. We first design the pupil wavefront function by incorporating primary and secondary spherical aberration through the coefficients of the Zernike polynomials, and achieve an optimal source-mask pair under the condition of an aberrated pupil. Evaluations against conventional source-mask optimization (SMO) without incorporating pupil aberrations show that SMPO provides improved performance in terms of pattern fidelity and process window size.
Wang, Le; Wang, Chong; Mei, Huan; Shen, Yongnian; Lv, Guixia; Zeng, Rong; Zhan, Ping; Li, Dongmei; Liu, Weida
2016-02-01
A mouse model is an appropriate tool for determining pathogenicity and studying host defenses during fungal infection. Here, we established a mouse model of candidiasis with concurrent oral and vaginal mucosal infection. Two C. albicans strains, sourced from clinical candidemia (SC5314) and mucosal infection (ATCC62342), were tested in ICR mice. Different combinational panels covering estrogen and the immunosuppressive agents cortisone, prednisolone and cyclophosphamide were used to establish concurrent oral and vaginal candidiasis. Prednisolone in combination with estrogen proved to be the optimal regimen for establishing concurrent mucosal infection. The model was maintained for 1 week, with the fungal burden reaching at least 10^5 cfu/g of tissue. The mouse model was evaluated by in vivo pharmacodynamics of fluconazole and host mucosal immunity of IL-17 and IL-23. Mice infected by SC5314 were cured by fluconazole. An increase in IL-23 in both oral and vaginal homogenates was observed after infection, while IL-17 showed a prominent elevation only in oral tissue. This model properly mimics complicated clinical conditions, provides a valuable means for antifungal assays in vivo, and may also provide a useful method for the evaluation of host-fungal interactions.
Magnitudes and Sources of Catchment Sediment: When A + B Doesn't Equal C
NASA Astrophysics Data System (ADS)
Simon, A.
2015-12-01
The export of land-based sediments to receiving waters can cause degradation of water quality and habitat, loss of reservoir capacity and damage to reef ecosystems. Predictions of sources and magnitudes generally come from simulations using catchment models that focus on overland flow processes at the expense of gully and channel processes. This is not appropriate for many catchments where recent research has shown that the dominant erosion sources have shifted from the uplands and fields following European settlement to the alluvial valleys today. Nevertheless, catchment models which fail to adequately address channel and bank processes are still the overwhelming choice by resource agencies to help manage sediment export. These models often utilize measured values of sediment load at the river mouth to "calibrate" the magnitude of loads emanating from uplands and fields. The difference between the sediment load at the mouth and the simulated upland loading is then apportioned to channel sources. Bank erosion from the Burnett River (a "Reef Catchment" in eastern Queensland) was quantified by comparisons of bank-top locations and by numerical modeling using BSTEM. Results show that bank-derived sediment contributes between 44 and 73% of the sediment load being exported to the Coral Sea. In comparison, reported results from a catchment model showed bank contributions of 8%. In absolute terms, this is an increase in the reported average annual rate of bank erosion from 0.175 Mt/y to 2.0 Mt/y. In the Hoteo River, New Zealand, a rural North Island catchment characterized by resistant cohesive sediments, bank erosion was found to contribute at least 48% of the total specific sediment yield. Combining the bank-derived, fine-grained loads from some of the major tributaries gives a total average annual loading rate for fine material of about 10,900 t/y for the studied reaches in the Hoteo River system. If the study were extended to include the lower reaches of the main stem channel and other tributary reaches, this percentage would be higher. Similar studies in the United States using combinations of empirical and numerical modeling techniques have also shown that bank-derived sediment can be the major source of sediment in many catchments. An approach to improve the accuracy of predictions is proposed.
Flooding dynamics on the lower Amazon floodplain
NASA Astrophysics Data System (ADS)
Rudorff, C.; Melack, J. M.; Bates, P. D.
2013-05-01
We analyzed flooding dynamics of a large floodplain lake in the lower reach of the Amazon River for the period 1995 through 2010. Floodplain inundation was simulated using the LISFLOOD-FP model, which combines one-dimensional river routing with two-dimensional overland flow, and a local hydrological model. Accurate representation of floodplain flows and inundation extent depends on the quality of the digital elevation model (DEM). We combined digital topography (derived from the Shuttle Radar Topography Mission) with extensive floodplain echo-sounding data to generate a hydraulically sound DEM. Analysis of daily water balances revealed that the dominant source of inflow alternated seasonally among direct rain and local runoff (October through January), the Amazon River (March through August), and seepage (September). As inflows from the Amazon River increase during the rising limb of the hydrograph, regional floodwaters encounter the floodplain partially inundated from local hydrological inputs. At peak flow the floodplain routes, on average, 2.5% of the total discharge for this reach. The falling limb of the hydrograph coincides with the locally dry period, allowing seepage of water stored in sediments to become a dominant source. The average annual inflow from the Amazon River was 58.8 km3 (SD = 33.5), representing more than three quarters (80%) of inputs from all sources, with substantial inter-annual variability. The average annual net export of water from the floodplain to the Amazon River was 7.9 km3 (SD = 2.7).
A cautionary tale: A study of a methane enhancement over the North Sea
NASA Astrophysics Data System (ADS)
Cain, M.; Warwick, N. J.; Fisher, R. E.; Lowry, D.; Lanoisellé, M.; Nisbet, E. G.; France, J.; Pitt, J.; O'Shea, S.; Bower, K. N.; Allen, G.; Illingworth, S.; Manning, A. J.; Bauguitte, S.; Pisso, I.; Pyle, J. A.
2017-07-01
Airborne measurements of a methane (CH4) plume over the North Sea from August 2013 are analyzed. The plume was only observed downwind of circumnavigated gas fields, and three methods are used to determine its source. First, a mass balance calculation assuming a gas field source gives a CH4 emission rate between 2.5 ± 0.8 × 10^4 and 4.6 ± 1.5 × 10^4 kg h-1. This would be greater than the industry's reported 0.5% leak rate if it were emitting for more than half the time. Second, annual average UK CH4 emissions are combined with an atmospheric dispersion model to create pseudo-observations. Clean air from the North Atlantic passed over mainland UK, picking up anthropogenic emissions. To best explain the observed plume using pseudo-observations, an additional North Sea source from the gas rigs area is added. Third, the δ13C-CH4 from the plume is shown to be -53‰, which is lighter than fossil gas but heavier than the UK average emission. We conclude that either an additional small-area mainland source is needed, combined with temporal variability in emission or transport in small-scale meteorological features. Alternatively, a combination of additional sources that are at least 75% from the mainland (-58‰) and up to 25% from the North Sea gas rigs area (-32‰) would explain the measurements. Had the isotopic analysis not been performed, the likely conclusion would have been of a gas field source of CH4. This demonstrates the limitation of analyzing mole fractions alone, as the simplest explanation is rejected based on analysis of isotopic data.
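The mass balance method in the first step integrates the mole-fraction enhancement times the plume-perpendicular wind across a downwind flight plane. A sketch of the flux arithmetic under stated assumptions (near-surface air molar density, equal-area plane cells); none of the inputs are the flight data:

```python
import numpy as np

def plume_emission_rate(dX_ppb, u_perp, cell_area, n_air=42.3, M=16.04):
    """Mass-balance estimate: integrate the CH4 mole-fraction enhancement
    (ppb) times the plume-perpendicular wind (m/s) over the crosswind
    flight plane. n_air: molar density of air near the surface (mol/m^3);
    M: molar mass of CH4 (g/mol). Returns kg per hour."""
    grams_per_s = np.sum(np.asarray(dX_ppb) * 1e-9 * n_air * M
                         * u_perp * cell_area)
    return float(grams_per_s * 3600.0 / 1000.0)

# Toy plane: 100 ppb enhancement over 200 cells of 100 m x 50 m, 8 m/s wind
rate = plume_emission_rate(np.full(200, 100.0), 8.0, 100.0 * 50.0)
```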
NASA Astrophysics Data System (ADS)
Kumar, Awkash; Patil, Rashmi S.; Dikshit, Anil Kumar; Kumar, Rakesh; Brandt, Jørgen; Hertel, Ole
2016-10-01
The accuracy of results from an air quality model is in most cases governed by the quality of the emission and meteorological input data. In the present study, two air quality models were applied in an inverse-modelling framework to determine the particulate matter emission strengths of urban and regional sources in and around Mumbai, India. The study builds on an existing emission inventory for Total Suspended Particulate Matter (TSPM). Since the available TSPM inventory is known to be uncertain and incomplete, the study aims to qualify this inventory through an inverse modelling exercise. On-site meteorological data for use as input to the air quality models were generated using the Weather Research and Forecasting (WRF) model. The regional background concentration consists of contributions from regional sources transported in the atmosphere from outside the study domain; these background concentrations of particulate matter were obtained from calculations with the Danish Eulerian Hemispheric Model (DEHM) and used as boundary concentrations in AERMOD calculations of the contribution from local urban sources. The results from the AERMOD calculations were subsequently compared with observed concentrations, and emission correction factors were obtained by best fit of the model results to the observed concentrations. The study showed that emissions had to be up-scaled by between 14 and 55% in order to fit the observed concentrations, assuming that the DEHM model describes a background concentration level of the right magnitude.
Quantum key distribution with entangled photon sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma Xiongfeng; Fung, Chi-Hang Fred; Lo, H.-K.
2007-07-15
A parametric down-conversion (PDC) source can be used as either a triggered single-photon source or an entangled-photon source in quantum key distribution (QKD). The triggering PDC QKD has already been studied in the literature. On the other hand, a model and a post-processing protocol for the entanglement PDC QKD are still missing. We fill in this important gap by proposing such a model and a post-processing protocol for the entanglement PDC QKD. Although the PDC model is proposed to study the entanglement-based QKD, we emphasize that our generic model may also be useful for other non-QKD experiments involving a PDC source. Since an entangled PDC source is a basis-independent source, we apply Koashi and Preskill's security analysis to the entanglement PDC QKD. We also investigate the entanglement PDC QKD with two-way classical communications. We find that the recurrence scheme increases the key rate and the Gottesman-Lo protocol helps tolerate higher channel losses. By simulating a recent 144-km open-air PDC experiment, we compare three implementations: entanglement PDC QKD, triggering PDC QKD, and coherent-state QKD. The simulation result suggests that the entanglement PDC QKD can tolerate higher channel losses than the coherent-state QKD. The coherent-state QKD with decoy states is able to achieve the highest key rate in the low- and medium-loss regions. By applying the Gottesman-Lo two-way post-processing protocol, the entanglement PDC QKD can tolerate up to 70 dB combined channel losses (35 dB for each channel) provided that the PDC source is placed in between Alice and Bob. After considering statistical fluctuations, the PDC setup can tolerate up to 53 dB channel losses.
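For orientation, the secret-key fraction in security analyses of this kind is typically a sifted gain minus error-correction and privacy-amplification terms built from the binary entropy of the quantum bit error rate. An illustrative, simplified one-way bound (no decoy-state or two-way refinements, so it is not the paper's exact formula):

```python
import numpy as np

def h2(x):
    """Binary entropy in bits."""
    x = np.clip(x, 1e-12, 1.0 - 1e-12)
    return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

def key_rate_lower_bound(Q, E, f_ec=1.2):
    """Simplified one-way bound: sifted gain Q, QBER E, error-correction
    inefficiency f_ec; leak f_ec*h2(E) to reconciliation and h2(E) to
    privacy amplification."""
    return max(0.0, Q * (1.0 - f_ec * h2(E) - h2(E)))

print(key_rate_lower_bound(Q=0.01, E=0.03))  # ~5.7e-3 secret bits per pulse
```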
Lisbon 1755, a multiple-rupture earthquake
NASA Astrophysics Data System (ADS)
Fonseca, J. F. B. D.
2017-12-01
The Lisbon earthquake of 1755 poses a challenge to seismic hazard assessment. Reports pointing to MMI 8 or above at distances of the order of 500 km led to magnitude estimates near M9 in classic studies. A refined analysis of the coeval sources lowered the estimates to 8.7 (Johnston, 1998) and 8.5 (Martinez-Solares, 2004). I posit that even these lower magnitude values reflect the combined effect of multiple ruptures. Attempts to identify a single source capable of explaining the damage reports with published ground motion models did not gather consensus and, compounding the challenge, the analysis of tsunami travel times has led to disparate source models, sometimes separated by a few hundred kilometers. From this viewpoint, the most credible source would combine a sub-set of the multiple active structures identifiable in SW Iberia. No individual moment magnitude needs to be above M8.1, thus rendering the search for candidate structures less challenging. The possible combinations of active structures should be ranked as a function of their explaining power, for macroseismic intensities and tsunami travel times taken together. I argue that the Lisbon 1755 earthquake is an example of a distinct class of intraplate earthquake previously unrecognized, of which the Indian Ocean earthquake of 2012 is the first instrumentally recorded example, showing space and time correlation over scales of the order of a few hundred km and a few minutes. Other examples may exist in the historical record, such as the M8 1556 Shaanxi earthquake, with an unusually large damage footprint (MMI equal to or above 6 in 10 provinces; 830,000 fatalities). The ability to trigger seismicity globally, observed after the 2012 Indian Ocean earthquake, may be a characteristic of this type of event: occurrences in Massachusetts (M5.9 Cape Ann earthquake on 18/11/1755), Morocco (M6.5 Fez earthquake on 27/11/1755) and Germany (M6.1 Düren earthquake on 18/02/1756) in all likelihood had a causal link to the Lisbon earthquake. This may reflect the very long period of surface waves generated by the combined sources as a result of the delays between ruptures. Recognition of this new class of large intraplate earthquakes may pave the way to a better understanding of the mechanisms driving intraplate deformation.
Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.
2000-01-01
We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and the geologic data that were used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG), and U.S. Geological Survey (USGS); Petersen et al., 1996; Frankel et al., 1996]. On average the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the overall rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted to accommodate high strain rates from geologic and geodetic data but have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data. The 475-year return period hazard for peak ground and 1-sec spectral acceleration resulting from this alternative source model differs from the hazard resulting from the standard CDMG-USGS model by less than 10% across most of California but is higher (generally about 10% to 30%) within 20 km of some faults.
Multi-Wavelength Study of W40 HII Region
NASA Astrophysics Data System (ADS)
Shenoy, Sachindev S.; Shuping, R.; Vacca, W. D.
2013-01-01
W40 is an HII region (Sh2-64) within the Serpens molecular cloud in the Aquila rift region. Recent near-infrared spectroscopic observations of the brightest members of the central cluster of W40 reveal that the region is powered by at least three early B-type stars and one late O-type star. Near- and mid-infrared spectroscopy and photometry, combined with SED modeling of these sources, suggest that the distance to the cluster is between 455 and 535 pc, with about 10 mag of visual extinction. Velocity and extinction measurements of all the nearby regions, i.e. Serpens main, Aquila rift, and MWC297, suggest that the entire system (including the W40 extended emission) is associated with the extinction wall at 260 pc. Here we present some preliminary results of a multi-wavelength study of the central cluster and the extended emission of W40. We used Spitzer IRAC data to measure accurate photometry of all the point sources within 4.32 pc of W40 via PRF fitting. This will provide us with a complete census of YSOs in the W40 region. The Spitzer data are combined with publicly available data in the 2MASS, WISE and Herschel archives and used to model YSOs in the region. The SEDs and near-IR colors of all the point sources should allow us to determine the age of the central cluster of W40. The results from this work will put W40 in a proper stellar evolutionary context. After subtracting the point sources from the IRAC images, we are able to study the extended emission free from point-source contamination. We choose a few morphologically interesting regions in W40 and use the data to model the dust emission. The results from this effort will allow us to study the correlation between dust properties and the large-scale physical properties of W40.
THE HIGH-ENERGY, ARCMINUTE-SCALE GALACTIC CENTER GAMMA-RAY SOURCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chernyakova, M.; Malyshev, D.; Aharonian, F. A.
2011-01-10
Employing data collected during the first 25 months of observations by the Fermi-LAT, we describe and subsequently seek to model the very high energy (>300 MeV) emission from the central few parsecs of our Galaxy. We analyze the morphological, spectral, and temporal characteristics of the central source, 1FGL J1745.6-2900. The data show a clear, statistically significant signal at energies above 10 GeV, where the Fermi-LAT has angular resolution comparable to that of HESS at TeV energies. This makes a meaningful joint analysis of the data possible. Our analysis of the Fermi data (alone) does not uncover any statistically significant variability of 1FGL J1745.6-2900 at GeV energies on the month timescale. Using the combination of Fermi data on 1FGL J1745.6-2900 and HESS data on the coincident TeV source HESS J1745-290, we show that the spectrum of the central gamma-ray source is inflected, with a relatively steep spectral region matching between the flatter spectrum found at both low and high energies. We model the gamma-ray production in the inner 10 pc of the Galaxy and examine cosmic ray (CR) proton propagation scenarios that reproduce the observed spectrum of the central source. We show that a model that instantiates a transition from diffusive propagation of the CR protons at low energy to almost rectilinear propagation at high energies can explain well the spectral phenomenology. We find considerable degeneracy between different parameter choices which will only be broken with the addition of morphological information that gamma-ray telescopes cannot deliver given current angular resolution limits. We argue that a future analysis performed in combination with higher-resolution radio continuum data holds out the promise of breaking this degeneracy.
X-Ray Dust Scattering At Small Angles: The Complete Halo Around GX13+1
NASA Technical Reports Server (NTRS)
Smith, Randall K.
2007-01-01
The exquisite angular resolution available with Chandra should allow precision measurements of faint diffuse emission surrounding bright sources, such as the X-ray scattering halos created by interstellar dust. However, the ACIS CCDs suffer from pileup when observing bright sources, and this creates difficulties when trying to extract the scattered halo near the source. The initial study of the X-ray halo around GX13+1 using only the ACIS-I detector by Smith, Edgar & Shafer (2002) suffered from a lack of sensitivity within 50" of the source, limiting what conclusions could be drawn. To address this problem, observations of GX13+1 were obtained with the Chandra HRC-I and simultaneously with the RXTE PCA. Combined with the existing ACIS-I data, this allowed measurements of the X-ray halo over the range 2-1000". After considering a range of dust models, each assumed to be smoothly distributed with or without a dense cloud along the line of sight, the results show that there is no evidence in these data for a dense cloud near the source, as suggested by Xiang et al. (2005). In addition, although no model leads to formally acceptable results, the Weingartner & Draine (2001) model and all but one of the composite grain models from Zubko, Dwek & Arendt (2004) give particularly poor fits.
Application of fuzzy set and Dempster-Shafer theory to organic geochemistry interpretation
NASA Technical Reports Server (NTRS)
Kim, C. S.; Isaksen, G. H.
1993-01-01
An application of fuzzy sets and Dempster-Shafer theory (DST) in modeling the interpretational process of organic geochemistry data for predicting the maturity levels of oil and source rock samples is presented. This was accomplished by (1) representing linguistic imprecision and imprecision associated with experience by fuzzy set theory, (2) capturing the probabilistic nature of imperfect evidence by DST, and (3) combining multiple pieces of evidence by utilizing John Yen's generalized Dempster-Shafer theory (GDST), which allows DST to deal with fuzzy information. The current prototype provides collective beliefs on the predicted levels of maturity by combining multiple pieces of evidence through GDST's rule of combination.
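As a concrete illustration of the combination step, here is a minimal sketch of Dempster's rule for two crisp basic probability assignments; the maturity labels and masses are hypothetical, and the paper's fuzzy GDST extension is not reproduced.

```python
# Minimal sketch of Dempster's rule of combination over a discrete frame of
# discernment; mass functions are dicts mapping frozensets to masses.

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dict: frozenset -> mass)."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    k = 1.0 - conflict  # normalization constant
    return {s: v / k for s, v in combined.items()}

# Two hypothetical pieces of evidence about sample maturity
frame = frozenset({"immature", "mature", "overmature"})
m_biomarker = {frozenset({"mature"}): 0.6, frame: 0.4}            # 0.4 = ignorance
m_vitrinite = {frozenset({"mature", "overmature"}): 0.7, frame: 0.3}
print(dempster_combine(m_biomarker, m_vitrinite))
```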
Automatic protein structure solution from weak X-ray data
NASA Astrophysics Data System (ADS)
Skubák, Pavol; Pannu, Navraj S.
2013-11-01
Determining new protein structures from X-ray diffraction data at low resolution or with a weak anomalous signal is a difficult and often impossible task. Here we propose a multivariate algorithm that simultaneously combines the structure determination steps. In tests on over 140 real data sets from the Protein Data Bank, we show that this combined approach can automatically build models where current algorithms fail, including an anisotropically diffracting 3.88 Å RNA polymerase II data set. The method seamlessly automates the process, is ideal for non-specialists and provides a mathematical framework for successfully combining various sources of information in image processing.
Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo
NASA Astrophysics Data System (ADS)
Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.; Kouzes, Richard T.; Kulisek, Jonathan A.; Robinson, Sean M.; Wittman, Richard A.
2015-10-01
Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.
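To illustrate the flavor of such radiograph-informed inversion, the sketch below performs maximum-likelihood localization of a source along one dimension, using a radiograph-derived attenuation profile as the transport environment. Geometry, the Poisson noise model, and all numbers are illustrative assumptions, not the authors' implementation.

```python
# 1D sketch: invert passive counts for source position and strength, with the
# radiograph supplying the attenuation map along the container.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 12.0, 121)                 # positions along container (m)
dx = x[1] - x[0]
mu = np.where((x > 4) & (x < 6), 1.2, 0.1)      # radiograph-derived attenuation (1/m)
det = np.array([-1.0, 13.0])                    # detector positions (m)

def predicted_counts(src_x, strength):
    """Expected counts: 1/r^2 geometry attenuated along the radiograph map."""
    out = []
    for d in det:
        lo, hi = sorted((src_x, d))
        tau = mu[(x >= lo) & (x <= hi)].sum() * dx   # optical depth on the path
        r = max(abs(d - src_x), 0.5)
        out.append(strength * np.exp(-tau) / r**2)
    return np.array(out)

obs = rng.poisson(predicted_counts(5.0, 5e4))        # synthetic "measured" counts

def loglike(p):
    lam = predicted_counts(*p) + 1e-12
    return float(np.sum(obs * np.log(lam) - lam))    # Poisson log-likelihood

best = max(((sx, s) for sx in x for s in np.logspace(3.0, 6.0, 40)), key=loglike)
print("ML position %.1f m, strength %.0f" % best)
```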
NASA Astrophysics Data System (ADS)
Latcharote, Panon; Suppasri, Anawat; Imamura, Fumihiko; Aytore, Betul; Yalciner, Ahmet Cevdet
2016-12-01
This study evaluates tsunami hazards in the Marmara Sea from possible worst-case tsunami scenarios involving submarine earthquakes and landslides. In terms of fault-generated tsunamis, seismic ruptures can propagate along the North Anatolian Fault (NAF), which has produced historical tsunamis in the Marmara Sea. Past studies have considered fault-generated and landslide-generated tsunamis individually; future scenarios are expected to generate tsunamis, and submarine landslides could be triggered by seismic motion. Extending these past studies, numerical modeling is applied here to tsunami generation and propagation from combined earthquake and landslide sources. Tsunami hazards are evaluated for both individual and combined cases of submarine earthquakes and landslides through numerical tsunami simulations with a 90 m grid for bathymetry and topography covering the entire Marmara Sea region, validated against historical observations from the 1509 and 1894 earthquakes. This study implements the TUNAMI model with a two-layer scheme, and the numerical results show that the maximum tsunami height could reach 4.0 m along Istanbul shores for a full submarine rupture of the NAF with a fault slip of 5.0 m in the eastern and western basins of the Marmara Sea. The maximum tsunami height for landslide-generated tsunamis from small, medium, and large initial landslide volumes (0.15, 0.6, and 1.5 km3, respectively) could reach 3.5, 6.0, and 8.0 m, respectively, along Istanbul shores. Tsunamis from submarine landslides could thus be significantly higher than those from earthquakes, depending strongly on the landslide volume. Combining earthquake and landslide sources raises tsunami amplitudes appreciably only for small landslide volumes, for which both contributions fall within the same amplitude scale (3.0-4.0 m). Waveforms from all the coasts around the Marmara Sea indicate that other residential areas might also face a high risk of tsunami hazards from submarine landslides, which can generate higher tsunami amplitudes and shorter arrival times than earthquake sources.
Humpback whale-generated ambient noise levels provide insight into singers' spatial densities.
Seger, Kerri D; Thode, Aaron M; Urbán-R, Jorge; Martínez-Loustalot, Pamela; Jiménez-López, M Esther; López-Arzate, Diana
2016-09-01
Baleen whale vocal activity can be the dominant underwater ambient noise source for certain locations and seasons. Previous wind-driven ambient-noise formulations have been adjusted to model ambient noise levels generated by random distributions of singing humpback whales in ocean waveguides and combined into a single model. This theoretical model predicts that changes in ambient noise levels with respect to fractional changes in singer population (defined as the noise "sensitivity") are relatively unaffected by the source level distributions and song spectra of individual humpback whales (Megaptera novaeangliae). However, the noise "sensitivity" does depend on frequency and on how the singers' spatial density changes with population size. The theoretical model was tested by comparing visual line transect surveys with bottom-mounted passive acoustic data collected during the 2013 and 2014 humpback whale breeding seasons off Los Cabos, Mexico. A generalized linear model (GLM) estimated the noise "sensitivity" across multiple frequency bands. Comparing the GLM estimates with the theoretical predictions suggests that humpback whales tend to maintain relatively constant spacing between one another while singing, but that individual singers either slightly increase their source levels or song duration, or cluster more tightly, as the singing population increases.
Near Real-Time Imaging of the Galactic Plane with BATSE
NASA Technical Reports Server (NTRS)
Harmon, B. A.; Zhang, S. N.; Robinson, C. R.; Paciesas, W. S.; Barret, D.; Grindlay, J.; Bloser, P.; Monnelly, C.
1997-01-01
The discovery of new transient or persistent sources in the hard X-ray regime with the BATSE Earth Occultation Technique has previously been limited to bright sources of about 200 mCrab or more. While monitoring known source locations is not a problem down to a daily limiting sensitivity of about 75 mCrab, the lack of a reliable background model forces us to use more computationally intensive techniques to find weak, previously unknown emission from hard X-ray/gamma-ray sources. The combination of Radon-transform imaging of the galactic plane in 10 by 10 degree fields and the Harvard/CfA-developed image search (CBIS) allows us to straightforwardly search the sky for candidate sources in a +/- 20 degree latitude band along the plane. This procedure has been operating routinely on a weekly basis since spring 1997. We briefly describe the procedure, then concentrate on the performance aspects of the technique and candidate source results from the search.
Implications from XMM and Chandra Source Catalogs for Future Studies with Lynx
NASA Astrophysics Data System (ADS)
Ptak, Andrew
2018-01-01
Lynx will perform extremely sensitive X-ray surveys by combining very high-resolution imaging over a large field of view with a high effective area. These will include planned deep surveys and serendipitous source surveys. Here we discuss implications that can be gleaned from current Chandra and XMM-Newton serendipitous source surveys. These current surveys have discovered novel sources such as tidal disruption events, binary AGN, and ULX pulsars. In addition, these surveys have detected large samples of normal galaxies, low-luminosity AGN, and quasars thanks to the wide-area coverage of the Chandra and XMM-Newton source catalogs, allowing the evolution of these phenomena to be explored. The wide-area Lynx surveys will probe further down in flux and will be coupled with very sensitive wide-area surveys such as LSST and SKA, allowing for detailed modeling of source SEDs and the discovery of rare, exotic sources and transient events.
Testing of a spacecraft model in a combined environment simulator
NASA Technical Reports Server (NTRS)
Staskus, J. V.; Roche, J. C.
1981-01-01
A scale model of a satellite was tested in a large vacuum facility under electron bombardment and vacuum ultraviolet radiation to investigate the charging of dielectric materials on curved surfaces. The model was tested both stationary and rotating relative to the electron sources as well as grounded through one megohm and floating relative to the chamber. Surface potential measurements are presented and compared with the predictions of computer modelling of the stationary tests. Discharge activity observed during the stationary tests is discussed and signals from sensing devices located inside and outside of the model are presented.
NASA Astrophysics Data System (ADS)
Smart, Philip D.; Quinn, Jonathan A.; Jones, Christopher B.
The combination of mobile communication technology with location and orientation aware digital cameras has introduced increasing interest in the exploitation of 3D city models for applications such as augmented reality and automated image captioning. The effectiveness of such applications is, at present, severely limited by the often poor quality of semantic annotation of the 3D models. In this paper, we show how freely available sources of georeferenced Web 2.0 information can be used for automated enrichment of 3D city models. Point-referenced names of prominent buildings and landmarks mined from Wikipedia articles, the OpenStreetMap digital map, and the GeoNames gazetteer have been matched to the 2D ground plan geometry of a 3D city model. In order to address the ambiguities that arise in the associations between these sources and the city model, we present procedures to merge potentially related buildings and implement fuzzy matching between reference points and building polygons. An experimental evaluation demonstrates the effectiveness of the presented methods.
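A hedged sketch of the fuzzy point-to-footprint matching idea follows: a mined placename point is scored against candidate building polygons with a distance kernel, so that small georeferencing offsets still yield a match. The shapely-based scoring function, kernel width, and coordinates are all illustrative assumptions, not the paper's procedure.

```python
# Score a point-referenced placename against building footprint polygons.
import math
from shapely.geometry import Point, Polygon

def match_score(point, footprint, sigma=25.0):
    """1.0 inside the footprint, decaying with distance (metres) outside."""
    d = footprint.distance(point)          # 0 when the point lies inside
    return math.exp(-(d / sigma) ** 2)

buildings = {
    "City Hall": Polygon([(0, 0), (40, 0), (40, 30), (0, 30)]),
    "Museum": Polygon([(120, 0), (160, 0), (160, 30), (120, 30)]),
}
wiki_point = Point(55, 15)   # hypothetical point mined from a Wikipedia article

scores = {name: match_score(wiki_point, poly) for name, poly in buildings.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))   # City Hall wins despite the offset
```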
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, Yaosuo
The battery energy stored quasi-Z-source (BES-qZS) based photovoltaic (PV) power generation system combines advantages of the qZS inverter and the battery energy storage system. However, the second-harmonic (2ω) power ripple will degrade the system's performance and affect the system's design, so an accurate model of the 2ω ripple is very important. Existing models did not consider the battery and assumed L1 = L2 and C1 = C2, which leads to a non-optimized design of the qZS network impedance parameters. This paper proposes a comprehensive model for the single-phase BES-qZS-PV inverter system, in which the battery is considered and no restriction is placed on L1, L2, C1, and C2. A BES-qZS impedance design method based on the built model is proposed to mitigate the 2ω ripple. Simulation and experimental results verify the proposed 2ω ripple model and design method.
The displacement of the Sun from the Galactic plane using IRAS and FAUST source counts
NASA Technical Reports Server (NTRS)
Cohen, Martin
1995-01-01
I determine the displacement of the Sun from the Galactic plane by interpreting IRAS point-source counts at 12 and 25 microns in the Galactic polar caps using the latest version of the SKY model for the point-source sky (Cohen 1994). A solar offset of 15.5 +/- 0.7 pc north of the plane provides the best match to the ensemble of useful IRAS data. Shallow K counts in the north Galactic pole are also best fitted by this offset, while limited FAUST far-ultraviolet counts at 1660 A near the same pole favor a value near 14 pc. Combining the many IRAS determinations with the few FAUST values suggests that an offset of 15.0 +/- 0.5 pc (internal error only) would satisfy these high-latitude sets of data in both wavelength regimes, within the context of the SKY model.
Power combination of a self-coherent high power microwave source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Xiaolu, E-mail: yanxl-dut@163.com; Zhang, Xiaoping; Li, Yangmei
2015-09-15
In our previous work, the generation of two phase-locked high power microwaves (HPMs) in a single self-coherent HPM device was demonstrated. In this paper, after optimizing the structure of the previous self-coherent source, we design a power combiner with a folded phase-adjustment waveguide to realize power combination between its two sub-sources. Further particle-in-cell simulation of the combined source shows that when the diode voltage is 687 kV and the axial magnetic field is 0.8 T, a combined output microwave of 3.59 GW at 9.72 GHz is generated. The impedance of the combined device is 36 Ω and the total power conversion efficiency is 28%.
Park, Eun Sug; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford
2015-06-01
A major difficulty with assessing source-specific health effects is that source-specific exposures cannot be measured directly; rather, they need to be estimated by a source-apportionment method such as multivariate receptor modeling. The uncertainty in source apportionment (uncertainty in source-specific exposure estimates and model uncertainty due to the unknown number of sources and identifiability conditions) has been largely ignored in previous studies. Also, spatial dependence of multipollutant data collected from multiple monitoring sites has not yet been incorporated into multivariate receptor modeling. The objectives of this project are (1) to develop a multipollutant approach that incorporates both sources of uncertainty in source apportionment into the assessment of source-specific health effects and (2) to develop enhanced multivariate receptor models that can account for spatial correlations in the multipollutant data collected from multiple sites. We employed a Bayesian hierarchical modeling framework consisting of multivariate receptor models, health-effects models, and a hierarchical model on latent source contributions. For the health model, we focused on the time-series design in this project. Each combination of number of sources and identifiability conditions (additional constraints on model parameters) defines a different model. We built a set of plausible models with extensive exploratory data analyses and with information from previous studies, and then computed posterior model probabilities to estimate model uncertainty. Parameter estimation and model uncertainty estimation were implemented simultaneously by Markov chain Monte Carlo (MCMC) methods. We validated the methods using simulated data. We illustrated the methods using PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter) speciation data and mortality data from Phoenix, Arizona, and Houston, Texas. The Phoenix data included counts of cardiovascular deaths and daily PM2.5 speciation data from 1995-1997. The Houston data included respiratory mortality data and 24-hour PM2.5 speciation data sampled every six days from a region near the Houston Ship Channel in years 2002-2005. We also developed a Bayesian spatial multivariate receptor modeling approach that, while simultaneously dealing with the unknown number of sources and identifiability conditions, incorporated spatial correlations in the multipollutant data collected from multiple sites into the estimation of source profiles and contributions, based on the discrete process convolution model for multivariate spatial processes. This new modeling approach was applied to 24-hour ambient air concentrations of 17 volatile organic compounds (VOCs) measured at nine monitoring sites in Harris County, Texas, during years 2000 to 2005. Simulation results indicated that our methods were accurate in identifying the true model, and estimated parameters were close to the true values. The results from our methods agreed in general with previous studies on the source apportionment of the Phoenix data in terms of estimated source profiles and contributions. However, we had a greater number of statistically insignificant findings, which was likely a natural consequence of incorporating uncertainty in the estimated source contributions into the health-effects parameter estimation.
For the Houston data, a model with five sources (that appeared to be Sulfate-Rich Secondary Aerosol, Motor Vehicles, Industrial Combustion, Soil/Crustal Matter, and Sea Salt) showed the highest posterior model probability among the candidate models considered when fitted simultaneously to the PM2.5 and mortality data. There was a statistically significant positive association between respiratory mortality and same-day PM2.5 concentrations attributed to one of the sources (probably industrial combustion). The Bayesian spatial multivariate receptor modeling approach applied to the VOC data gave the highest posterior model probability to a model with five sources (that appeared to be refinery, petrochemical production, gasoline evaporation, natural gas, and vehicular exhaust) among several candidate models, with the number of sources varying between three and seven and with different identifiability conditions. Our multipollutant approach to assessing source-specific health effects is more advantageous than a single-pollutant approach in that it can estimate total health effects from multiple pollutants and can also identify emission sources that are responsible for adverse health effects. Our Bayesian approach can incorporate not only uncertainty in the estimated source contributions, but also model uncertainty that has not been addressed in previous studies on assessing source-specific health effects. The new Bayesian spatial multivariate receptor modeling approach enables predictions of source contributions at unmonitored sites, minimizing exposure misclassification and providing improved exposure estimates along with their uncertainty estimates, as well as accounting for uncertainty in the number of sources and identifiability conditions.
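To make the model-uncertainty step concrete, the sketch below shows a standard BIC-weight approximation to posterior model probabilities over candidate numbers of sources, assuming equal prior model probabilities. The study itself computes these probabilities within its MCMC framework; the BIC values here are hypothetical placeholders.

```python
# Approximate posterior model probabilities from BIC differences,
# p_k proportional to exp(-0.5 * (BIC_k - BIC_min)), equal priors assumed.
import numpy as np

bic = {3: 4210.5, 4: 4188.2, 5: 4179.8, 6: 4184.1, 7: 4192.7}  # hypothetical
delta = {k: v - min(bic.values()) for k, v in bic.items()}
weights = {k: np.exp(-0.5 * d) for k, d in delta.items()}
total = sum(weights.values())
for k in sorted(weights):
    print(f"{k} sources: posterior probability {weights[k] / total:.3f}")
```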
CImbinator: a web-based tool for drug synergy analysis in small- and large-scale datasets.
Flobak, Åsmund; Vazquez, Miguel; Lægreid, Astrid; Valencia, Alfonso
2017-08-01
Drug synergies are sought to identify drug combinations that are particularly beneficial. User-friendly software solutions that can assist analysis of large-scale datasets are required. CImbinator is a web service that can aid in batch-wise and in-depth analyses of data from small-scale and large-scale drug combination screens. CImbinator can quantify drug combination effects using both the commonly employed median-effect equation and advanced experimental mathematical models describing dose-response relationships. CImbinator is written in Ruby and R. It uses the R package drc for advanced drug response modeling. CImbinator is available at http://cimbinator.bioinfo.cnio.es , the source code is open and available at https://github.com/Rbbt-Workflows/combination_index . A Docker image is also available at https://hub.docker.com/r/mikisvaz/rbbt-ci_mbinator/ . Contact: asmund.flobak@ntnu.no or miguel.vazquez@cnio.es. Supplementary data are available at Bioinformatics online.
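For readers unfamiliar with the median-effect machinery such tools implement, the sketch below computes a Chou-Talalay-style combination index from hypothetical single-agent dose-response fits. It illustrates the equation only and is not CImbinator's code.

```python
# Median-effect equation: fa/fu = (D/Dm)^m, so the dose giving effect fa is
# Dx = Dm * (fa / (1 - fa))**(1/m).

def dose_for_effect(fa, dm, m):
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(d1, d2, fa, dm1, m1, dm2, m2):
    """CI < 1 synergy, CI = 1 additivity, CI > 1 antagonism."""
    return d1 / dose_for_effect(fa, dm1, m1) + d2 / dose_for_effect(fa, dm2, m2)

# Hypothetical fits: Dm = median-effect dose, m = slope of the dose response
ci = combination_index(d1=2.0, d2=5.0, fa=0.5, dm1=4.0, m1=1.2, dm2=12.0, m2=0.9)
print(f"CI at 50% effect: {ci:.2f}")   # 2/4 + 5/12 = 0.92 -> mild synergy
```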
From open source communications to knowledge
NASA Astrophysics Data System (ADS)
Preece, Alun; Roberts, Colin; Rogers, David; Webberley, Will; Innes, Martin; Braines, Dave
2016-05-01
Rapid processing and exploitation of open source information, including social media sources, in order to shorten decision-making cycles, has emerged as an important issue in intelligence analysis in recent years. Through a series of case studies and natural experiments, focussed primarily upon policing and counter-terrorism scenarios, we have developed an approach to information foraging and framing to inform decision making, drawing upon open source intelligence, in particular Twitter, due to its real-time focus and frequent use as a carrier for links to other media. Our work uses a combination of natural language (NL) and controlled natural language (CNL) processing to support information collection from human sensors, linking and schematising of collected information, and the framing of situational pictures. We illustrate the approach through a series of vignettes, highlighting (1) how relatively lightweight and reusable knowledge models (schemas) can rapidly be developed to add context to collected social media data; (2) how information from open sources can be combined with reports from trusted observers, for corroboration or to identify conflicting information; and (3) how the approach supports users operating at or near the tactical edge, to rapidly task information collection and inform decision-making. The approach is supported by bespoke software tools for social media analytics and knowledge management.
On the usage of divergence nudging in the DMI nowcasting system
NASA Astrophysics Data System (ADS)
Korsholm, Ulrik; Petersen, Claus; Hansen Sass, Bent; Woetmann Nielsen, Niels; Getreuer Jensen, David; Olsen, Bjarke Tobias; Vedel, Henrik
2014-05-01
DMI has recently proposed a new method for nudging radar reflectivity CAPPI products into its operational nowcasting system. The system is based on rapid update cycles (with hourly frequency) with the High Resolution Limited Area Model, combined with surface and upper-air analysis at each initial time. During the first 1.5 hours of a simulation the model dynamical state is nudged in accordance with the CAPPI product, after which a free forecast is produced with a forecast length of 12 hours. The nudging method is based on the assumption that precipitation is forced by low-level moisture convergence and that an enhanced moisture source will lead to convective triggering of the model cloud scheme. If the model under-predicts precipitation before cut-off, horizontal low-level divergence is nudged towards an estimated value. These pseudo-observations are calculated from the CAPPI product by assuming a specific vertical profile of the change in the divergence field. The strength of the nudging is proportional to the difference between observed and modelled precipitation. When over-predicting, the low-level moisture source is reduced, and in-cloud moisture is nudged towards environmental values. Results have been analysed in terms of the fractions skill score, and the ability of the nudging method to position the precipitation cells correctly is discussed. The ability of the model to retain memory of the precipitation systems in the free forecast has also been investigated, and examples of combining the nudging method with extrapolated reflectivity fields are also shown.
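A schematic sketch of such a nudging rule is given below: the low-level divergence column is relaxed toward a pseudo-observation whose amplitude scales with the precipitation mismatch. The vertical profile, coefficients, and time constants are illustrative placeholders, not DMI's operational values.

```python
# Newtonian relaxation of low-level divergence toward a pseudo-observation
# derived from the observed-minus-modelled precipitation difference.
import numpy as np

def nudge_divergence(div, p_obs, p_mod, z, dt, tau=3600.0, k=1e-6):
    """One nudging step on the divergence column div(z) [1/s]."""
    mismatch = p_obs - p_mod                      # mm/h; > 0 means under-prediction
    profile = np.exp(-z / 1500.0)                 # confine the change to low levels
    div_target = div - k * mismatch * profile     # pseudo-observed divergence
    return div + (dt / tau) * (div_target - div)  # Newtonian relaxation

z = np.linspace(0.0, 5000.0, 26)                  # height levels (m)
div = np.zeros_like(z)                            # start from non-divergent flow
for _ in range(90):                               # 1.5 h of 60 s steps
    div = nudge_divergence(div, p_obs=4.0, p_mod=1.0, z=z, dt=60.0)
print(div[:3])  # low-level convergence has been spun up
```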
Accounting for multiple sources of uncertainty in impact assessments: The example of the BRACE study
NASA Astrophysics Data System (ADS)
O'Neill, B. C.
2015-12-01
Assessing climate change impacts often requires the use of multiple scenarios, types of models, and data sources, leading to a large number of potential sources of uncertainty. For example, a single study might require a choice of a forcing scenario, climate model, bias correction and/or downscaling method, societal development scenario, model (typically several) for quantifying elements of societal development such as economic and population growth, biophysical model (such as for crop yields or hydrology), and societal impact model (e.g. economic or health model). Some sources of uncertainty are reduced or eliminated by the framing of the question. For example, it may be useful to ask what an impact outcome would be conditional on a given societal development pathway, forcing scenario, or policy. However many sources of uncertainty remain, and it is rare for all or even most of these sources to be accounted for. I use the example of a recent integrated project on the Benefits of Reduced Anthropogenic Climate changE (BRACE) to explore useful approaches to uncertainty across multiple components of an impact assessment. BRACE comprises 23 papers that assess the differences in impacts between two alternative climate futures: those associated with Representative Concentration Pathways (RCPs) 4.5 and 8.5. It quantifies difference in impacts in terms of extreme events, health, agriculture, tropical cyclones, and sea level rise. Methodologically, it includes climate modeling, statistical analysis, integrated assessment modeling, and sector-specific impact modeling. It employs alternative scenarios of both radiative forcing and societal development, but generally uses a single climate model (CESM), partially accounting for climate uncertainty by drawing heavily on large initial condition ensembles. Strengths and weaknesses of the approach to uncertainty in BRACE are assessed. Options under consideration for improving the approach include the use of perturbed physics ensembles of CESM, employing results from multiple climate models, and combining the results from single impact models with statistical representations of uncertainty across multiple models. A key consideration is the relationship between the question being addressed and the uncertainty approach.
Oil source bed distribution in upper Tertiary of Gulf Coast
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dow, W.G.
1985-02-01
Effective oil source beds have not been reported in Miocene and younger Gulf Coast sediments and the organic matter present is invariably immature and oxidized. Crude oil composition, however, indicates origin from mature source beds containing reduced kerogen. Oil distribution suggests extensive vertical migration through fracture systems from localized sources in deeply buried, geopressured shales. A model is proposed in which oil source beds were deposited in intraslope basins that formed behind salt ridges. The combination of silled basin topography, rapid sedimentation, and enhanced oxygen-minimum zones during global warmups resulted in periodic anoxic environments and preservation of oil-generating organic matter. Anoxia was most widespread during the middle Miocene and Pliocene transgressions and rare during regressive cycles, when anoxia occurred primarily in hypersaline conditions such as exist today in the Orca basin.
A Targeted Search for Point Sources of EeV Photons with the Pierre Auger Observatory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aab, A.; Abreu, P.; Aglietta, M.; ...
2017-03-09
Simultaneous measurements of air showers with the fluorescence and surface detectors of the Pierre Auger Observatory allow a sensitive search for EeV photon point sources. Several Galactic and extragalactic candidate objects are grouped in classes to reduce the statistical penalty of many trials from that of a blind search and are analyzed for a significant excess above the background expectation. The presented search does not find any evidence for photon emission at candidate sources, and combined p-values for every class are reported. Particle and energy flux upper limits are given for selected candidate sources. Lastly, these limits significantly constrain predictions of EeV proton emission models from non-transient Galactic and nearby extragalactic sources, as illustrated for the particular case of the Galactic center region.
ISS Ambient Air Quality: Updated Inventory of Known Aerosol Sources
NASA Technical Reports Server (NTRS)
Meyer, Marit
2014-01-01
Spacecraft cabin air quality is of fundamental importance to crew health, with concerns encompassing both gaseous contaminants and particulate matter. Little opportunity exists for direct measurement of aerosol concentrations on the International Space Station (ISS); however, an aerosol source model was developed for the purpose of filtration and ventilation systems design. This model has been applied successfully; however, since the initial effort, an increase in the number of crewmembers from 3 to 6 and new processes on board the ISS necessitate an updated aerosol inventory to accurately reflect the current ambient aerosol conditions. Results from recent analyses of dust samples from the ISS, combined with a literature review, provide new predicted aerosol emission rates in terms of size-segregated mass and number concentration. Some new aerosol sources have been considered and added to the existing array of materials. The goal of this work is to provide updated filtration model inputs that can verify that the current ISS filtration system is adequate and that filter lifetime targets are met. This inventory of aerosol sources is applicable to other spacecraft and becomes more important as NASA considers future long-term exploration missions, which will preclude the opportunity for resupply of filtration products.
Integrating advice and experience: learning and decision making with social and nonsocial cues.
Collins, Elizabeth C; Percy, Elise J; Smith, Eliot R; Kruschke, John K
2011-06-01
When making decisions, people typically gather information from both social and nonsocial sources, such as advice from others and direct experience. This research adapted a cognitive learning paradigm to examine the process by which people learn which sources of information are credible. When participants relied on advice alone to make decisions, their learning of source reliability proceeded in a manner analogous to traditional cue learning processes and replicated the established learning phenomena. However, when advice and nonsocial cues were encountered together, the established phenomenon of blocking (ignoring redundant information) did not occur. Our results suggest that extant cognitive learning models can accommodate either advice or nonsocial cues in isolation. However, the combination of advice and nonsocial cues (a context more typically encountered in daily life) leads to different patterns of learning, in which mutually supportive information from different types of sources is not regarded as redundant and may be particularly compelling. For these situations, cognitive learning models still constitute a promising explanatory tool, but one that must be expanded. As such, these findings have important implications for social psychological theory and for cognitive models of learning.
Modeling the Influence of Hemispheric Transport on Trends in ...
We describe the development and application of the hemispheric version of CMAQ to examine the influence of long-range pollutant transport on trends in surface-level O3 distributions. The WRF-CMAQ model is expanded to hemispheric scales, and multi-decadal model simulations were recently performed for the period spanning 1990-2010 to examine changes in hemispheric air pollution resulting from changes in emissions over this period. Simulated trends in ozone and precursor species concentrations across the U.S. and the northern hemisphere over the past two decades are compared with those inferred from available measurements during this period. Additionally, the decoupled direct method (DDM) in CMAQ is used to estimate the sensitivity of O3 to emissions from different source regions across the northern hemisphere. The seasonal variations in source-region contributions to background O3 are then estimated from these sensitivity calculations and will be discussed. A reduced-form model combining these source-region sensitivities estimated from DDM with the multi-decadal simulations of O3 distributions and emissions trends is then developed to characterize the changing contributions of different source regions to background O3 levels across North America.
Thermal Image Sensing Model for Robotic Planning and Search
Castro Jiménez, Lídice E.; Martínez-García, Edgar A.
2016-01-01
This work presents a search planning system for a rolling robot to find a source of infra-red (IR) radiation at an unknown location. Heat emissions are observed by a low-cost home-made IR passive visual sensor. The sensor's capability for detection of radiation spectra was experimentally characterized. The sensor data were modeled by an exponential model to estimate distance as a function of the IR image's intensity, and by a polynomial model to estimate temperature as a function of IR intensities. Both theoretical models are combined to deduce a subtle nonlinear exact distance-temperature solution. A planning system obtains feedback from the IR camera (position, intensity, and temperature) to lead the robot to the heat source. The planner is a system of nonlinear equations recursively solved by a Newton-based approach to estimate the IR source in global coordinates. The planning system assists an autonomous navigation control in order to reach the goal and avoid collisions. Trigonometric partial differential equations were established to control the robot's course towards the heat emission. A sine function produces attractive accelerations toward the IR source. A cosine function produces repulsive accelerations against the obstacles observed by an RGB-D sensor. Simulations and real experiments in complex indoor environments are presented to illustrate the convenience and efficacy of the proposed approach.
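The combination of the two sensor models can be illustrated as follows: invert the exponential range model for intensity and substitute into the temperature polynomial, yielding a direct distance-temperature relation. The coefficients below are hypothetical placeholders, not the paper's calibration.

```python
# Eliminate intensity I between d = a * exp(b * I) and T = poly(I).
import numpy as np

a, b = 5.0, -0.012                       # hypothetical range-model fit
poly = np.poly1d([1.5e-3, 0.9, 20.0])    # hypothetical temperature-model fit

def temperature_at_distance(d):
    """Invert the exponential range model, then evaluate the temperature poly."""
    intensity = np.log(d / a) / b        # I = ln(d/a) / b
    return poly(intensity)

for d in (1.0, 2.0, 4.0):
    print(f"d = {d:.1f} m -> T = {temperature_at_distance(d):.1f} (model units)")
```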
A Flexible Method for Producing F.E.M. Analysis of Bone Using Open-Source Software
NASA Technical Reports Server (NTRS)
Boppana, Abhishektha; Sefcik, Ryan; Meyers, Jerry G.; Lewandowski, Beth E.
2016-01-01
This project, performed in support of the NASA GRC Space Academy summer program, sought to develop an open-source workflow methodology that segments medical image data, creates a 3D model from the segmented data, and prepares the model for finite-element analysis. In an initial step, a technological survey evaluated the performance of various existing open-source software packages that claim to perform these tasks. The survey concluded, however, that no single package exhibited the wide array of functionality required for the potential NASA application in the area of bone, muscle and biofluidic studies. As a result, a series of Python scripts was developed to bridge the shortcomings of the available open-source tools. The VTK library provided the quickest and most effective means of segmenting regions of interest from the medical images; it allowed for the export of a 3D model by using the marching cubes algorithm to build a surface mesh. Developing the model domain from this extracted information required the surface mesh to be processed in the open-source software packages Blender and Gmsh. The Preview program of the FEBio suite proved sufficient for volume-filling the model with an unstructured mesh and preparing boundary specifications for finite element analysis. To fully enable FEM modeling, an in-house Python script assigned material properties on an element-by-element basis by performing a weighted interpolation of the voxel intensity of the parent medical image, correlated to published relations between image intensity and material properties such as ash density. A graphical user interface combined the Python scripts and other software into a user-friendly interface. The work using Python scripts provides a potential alternative to expensive commercial software and limited open-source freeware for the creation of 3D computational models. More work will be needed to validate this approach for creating finite-element models.
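As a sketch of the segmentation-to-surface step, the snippet below uses the VTK Python bindings to read a DICOM series, extract an iso-surface with marching cubes, and write an STL for downstream processing in Blender and Gmsh. The directory path and iso-value are placeholders, and this is an assumed reconstruction of the workflow, not the project's scripts.

```python
# DICOM series -> marching-cubes surface -> STL, using standard VTK classes.
import vtk

reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("ct_series/")      # placeholder path to the image stack
reader.Update()

mc = vtk.vtkMarchingCubes()
mc.SetInputConnection(reader.GetOutputPort())
mc.SetValue(0, 300)                        # iso-value in Hounsfield units (bone)
mc.Update()

writer = vtk.vtkSTLWriter()
writer.SetInputConnection(mc.GetOutputPort())
writer.SetFileName("bone_surface.stl")     # surface mesh for Blender/Gmsh
writer.Write()
```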
Browsing Space Weather Data and Models with the Integrated Space Weather Analysis (iSWA) System
NASA Technical Reports Server (NTRS)
Maddox, Marlo M.; Mullinix, Richard E.; Berrios, David H.; Hesse, Michael; Rastaetter, Lutz; Pulkkinen, Antti; Hourcle, Joseph A.; Thompson, Barbara J.
2011-01-01
The Integrated Space Weather Analysis (iSWA) System is a comprehensive web-based platform for space weather information that combines data from solar, heliospheric and geospace observatories with forecasts based on the most advanced space weather models. The iSWA system collects, generates, and presents a wide array of space weather resources in an intuitive, user-configurable, and adaptable format - thus enabling users to respond to current and future space weather impacts as well as enabling post-impact analysis. iSWA currently provides over 200 data and modeling products, and features a variety of tools that allow the user to browse, combine, and examine data and models from various sources. This presentation will consist of a summary of the iSWA products and an overview of the customizable user interfaces, and will feature several tutorial demonstrations highlighting the interactive tools and advanced capabilities.
Climate Sensitivity Controls Uncertainty in Future Terrestrial Carbon Sink
NASA Astrophysics Data System (ADS)
Schurgers, Guy; Ahlström, Anders; Arneth, Almut; Pugh, Thomas A. M.; Smith, Benjamin
2018-05-01
For the 21st century, carbon cycle models typically project an increase of terrestrial carbon with increasing atmospheric CO2 and a decrease with the accompanying climate change. However, these estimates are poorly constrained, primarily because they typically rely on a limited number of emission and climate scenarios. Here we explore a wide range of combinations of CO2 rise and climate change and assess their likelihood with the climate change responses obtained from climate models. Our results demonstrate that the terrestrial carbon uptake depends critically on the climate sensitivity of individual climate models, representing a large uncertainty of model estimates. In our simulations, the terrestrial biosphere is unlikely to become a strong source of carbon with any likely combination of CO2 and climate change in the absence of land use change, but the fraction of the emissions taken up by the terrestrial biosphere will decrease drastically with higher emissions.
Pey, Jorge; Alastuey, Andrés; Querol, Xavier
2013-07-01
PM₁₀ and PM₂.₅ chemical composition has been determined at a suburban insular site in the Balearic Islands (Spain) for almost one and a half years. As a result, 200 samples with more than 50 chemical parameters analyzed have been obtained. The whole database has been analyzed by two receptor modelling techniques (Principal Component Analysis and Positive Matrix Factorisation) in order to identify the main PM sources. After that, regression analyses with respect to the PM mass concentrations were conducted to quantify the daily contributions of each source. Four common sources were identified by both receptor models: secondary nitrate coupled with vehicular emissions, secondary sulphate influenced by fuel-oil combustion, aged marine aerosols, and mineral dust. In addition, PCA isolated harbour emissions and a mixed anthropogenic factor containing industrial emissions, whereas PMF isolated an additional mineral factor interpreted as road dust plus harbour emissions, and a vehicular abrasion products factor. The use of both methodologies proved complementary; nevertheless, the PMF sources were by themselves better differentiated. Besides these receptor models, a specific methodology to quantify African dust was also applied. The combination of these three source apportionment tools allowed the identification of 8 sources, 4 of them mineral (African, regional, urban and harbour dusts). In summary, 29% of PM₁₀ was attributed to natural sources (African dust, regional dust and sea spray), whereas the proportion diminished to 11% in PM₂.₅. Furthermore, the secondary sulphate source, which accounted for about 22 and 32% of PM₁₀ and PM₂.₅ respectively, is strongly linked to the aged polluted air masses residing over the western Mediterranean in the warm period.
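The receptor-modeling core of such studies can be sketched with an off-the-shelf non-negative factorization standing in for dedicated PMF software: factor the samples-by-species concentration matrix into source contributions and profiles, then regress PM mass on the contributions to apportion daily mass. The data below are synthetic and the four-source choice is arbitrary.

```python
# NMF-based receptor-modeling sketch: X (samples x species) = G @ F.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.gamma(2.0, 1.0, size=(200, 50))     # stand-in chemical data matrix
pm_mass = X.sum(axis=1) + rng.normal(0, 1, 200)

model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)                  # source contributions (samples x 4)
F = model.components_                       # source profiles (4 x species)

reg = LinearRegression(positive=True).fit(G, pm_mass)
daily_source_mass = G * reg.coef_           # mass attributed to each source
print("mean apportioned mass per source:", daily_source_mass.mean(axis=0).round(2))
```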
Considerations for Creating Multi-Language Personality Norms: A Three-Component Model of Error
ERIC Educational Resources Information Center
Meyer, Kevin D.; Foster, Jeff L.
2008-01-01
With the increasing globalization of human resources practices, a commensurate increase in demand has occurred for multi-language ("global") personality norms for use in selection and development efforts. The combination of data from multiple translations of a personality assessment into a single norm engenders error from multiple sources. This…
The global reach of the 26 December 2004 Sumatra tsunami.
Titov, Vasily; Rabinovich, Alexander B; Mofjeld, Harold O; Thomson, Richard E; González, Frank I
2005-09-23
Numerical model simulations, combined with tide-gauge and satellite altimetry data, reveal that wave amplitudes, directionality, and global propagation patterns of the 26 December 2004 Sumatra tsunami were primarily determined by the orientation and intensity of the offshore seismic line source and subsequently by the trapping effect of mid-ocean ridge topographic waveguides.
76 FR 20493 - Airworthiness Directives; Fokker Services B.V. Model F.27 Mark 050 Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-13
... INFORMATION: Discussion The European Aviation Safety Agency (EASA), which is the Technical Agent for the... are exposed to flammable conditions is one of these criteria. The other three criteria address the... ignition sources inside fuel tanks, which, in combination with flammable fuel vapors, could result in fuel...
The 14,582 km2 Neuse River Basin in North Carolina was characterized based on a user defined land-cover (LC) classification system developed specifically to support spatially explicit, non-point source nitrogen allocation modeling studies. Data processing incorporated both spect...
Use of artificial landscapes to isolate controls on burn probability
Marc-Andre Parisien; Carol Miller; Alan A. Ager; Mark A. Finney
2010-01-01
Techniques for modeling burn probability (BP) combine the stochastic components of fire regimes (ignitions and weather) with sophisticated fire growth algorithms to produce high-resolution spatial estimates of the relative likelihood of burning. Despite the numerous investigations of fire patterns from either observed or simulated sources, the specific influence of...
Due to complex population dynamics and source-sink metapopulation processes, animal fitness sometimes varies across landscapes in ways that cannot be deduced from simple density patterns. In this study, we examine spatial patterns in fitness using a combination of intensive fiel...
Constructing Data Albums for Significant Severe Weather Events
NASA Technical Reports Server (NTRS)
Greene, Ethan; Zavodsky, Bradley; Ramachandran, Rahul; Kulkarni, Ajinkya; Li, Xiang; Bakare, Rohan; Basyal, Sabin; Conover, Helen
2014-01-01
Data Albums provide a one-stop shop combining datasets from NASA, the NWS, online news sources, and social media. Data Albums will help meteorologists better understand severe weather events to improve predictive models. We developed a new ontology for severe weather based on the current hurricane Data Album and selected relevant NASA datasets for inclusion.
SOURCE APPORTIONMENT OF SEATTLE PM 2.5: A COMPARISON OF IMPROVE AND ENHANCED STN DATA SETS
Seattle, WA, STN and IMPROVE data sets with STN temperature resolved carbon peaks were analyzed with both the PMF and Unmix receptor models. In addition, the IMPROVE trace element data was combined with the major STN species to examine the role of IMPROVE metals. To compare the ...
Theoretical and Numerical Modeling of Transport of Land Use-Specific Fecal Source Identifiers
NASA Astrophysics Data System (ADS)
Bombardelli, F. A.; Sirikanchana, K. J.; Bae, S.; Wuertz, S.
2008-12-01
Microbial contamination in coastal and estuarine waters is of particular concern to public health officials. In this work, we advocate that well-formulated and developed mathematical and numerical transport models can be combined with modern molecular techniques in order to predict continuous concentrations of microbial indicators under diverse scenarios of interest, and that they can help in source identification of fecal pollution. As a proof of concept, we present initially the theory, numerical implementation and validation of one- and two-dimensional numerical models aimed at computing the distribution of fecal source identifiers in water bodies (based on Bacteroidales marker DNA sequences) coming from different land uses such as wildlife, livestock, humans, dogs or cats. These models have been developed to allow for source identification of fecal contamination in large bodies of water. We test the model predictions using diverse velocity fields and boundary conditions. Then, we present some preliminary results of an application of a three-dimensional water quality model to address the source of fecal contamination in the San Pablo Bay (SPB), United States, which constitutes an important sub-embayment of the San Francisco Bay. The transport equations for Bacteroidales include the processes of advection, diffusion, and decay of Bacteroidales. We discuss the validation of the developed models through comparisons of numerical results with field campaigns developed in the SPB. We determine the extent and importance of the contamination in the bay for two decay rates obtained from field observations, corresponding to total host-specific Bacteroidales DNA and host-specific viable Bacteroidales cells, respectively. Finally, we infer transport conditions in the SPB based on the numerical results, characterizing the fate of outflows coming from the Napa, Petaluma and Sonoma rivers.
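A minimal sketch of the transport such models solve is shown below: one-dimensional advection, diffusion, and first-order decay of a marker concentration, upwind in space and explicit in time. All coefficients and the boundary treatment are illustrative, not San Pablo Bay values.

```python
# Explicit upwind finite-difference scheme for dC/dt = -u dC/dx + D d2C/dx2 - kC.
import numpy as np

nx, dx, dt = 200, 50.0, 10.0          # grid cells, cell size (m), time step (s)
u, D, k = 0.2, 1.0, 1e-5              # velocity (m/s), diffusivity (m2/s), decay (1/s)
C = np.zeros(nx)
C[0] = 100.0                          # river boundary: marker copies per volume

for _ in range(5000):
    adv = -u * (C[1:-1] - C[:-2]) / dx                # upwind advection (u > 0)
    dif = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2  # central diffusion
    C[1:-1] += dt * (adv + dif - k * C[1:-1])         # decay sink
    C[0], C[-1] = 100.0, C[-2]                        # inflow / open boundary

print(C[::40].round(2))   # concentration decays with distance from the source
```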
Dose rate calculations around 192Ir brachytherapy sources using a Sievert integration model
NASA Astrophysics Data System (ADS)
Karaiskos, P.; Angelopoulos, A.; Baras, P.; Rozaki-Mavrouli, H.; Sandilos, P.; Vlachos, L.; Sakelliou, L.
2000-02-01
The classical Sievert integral method is a valuable tool for dose rate calculations around brachytherapy sources, combining simplicity with reasonable computational times. However, its accuracy in predicting dose rate anisotropy around 192Ir brachytherapy sources has been repeatedly put into question. In this work, we used a primary and scatter separation technique to improve an existing modification of the Sievert integral (Williamson's isotropic scatter model) that determines dose rate anisotropy around commercially available 192Ir brachytherapy sources. The proposed Sievert formalism provides increased accuracy while maintaining the simplicity and computational time efficiency of the Sievert integral method. To describe transmission within the materials encountered, the formalism makes use of narrow-beam attenuation coefficients which can be directly and easily calculated from the initially emitted 192Ir spectrum. The other numerical parameters required for its implementation, once calculated with the aid of our home-made Monte Carlo simulation code, can be used for any 192Ir source design. Calculations of dose rate and anisotropy functions with the proposed Sievert expression, around commonly used 192Ir high dose rate sources and other 192Ir elongated source designs, are in good agreement with corresponding accurate Monte Carlo results which have been reported by our group and other authors.
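The flavor of the classical Sievert integral can be sketched numerically as follows: the active length is split into segments, each contributing an inverse-square term attenuated along its oblique path through the capsule wall. The filtration coefficient and geometry below are illustrative, not the paper's fitted parameters.

```python
# Discretized Sievert integral for a line source of active length L, evaluated
# at radial distance r and axial offset z (cm); relative units only.
import numpy as np

def sievert_dose_rate(r, z, L=0.35, t=0.065, mu_f=4.0, n=200):
    """Relative dose rate: inverse-square sum with oblique wall attenuation."""
    zs = np.linspace(-L / 2, L / 2, n)     # segment positions along the source
    R2 = r**2 + (z - zs) ** 2              # squared distance to each segment
    sin_a = r / np.sqrt(R2)                # obliquity of each ray
    path = t / np.maximum(sin_a, 1e-6)     # path length through the capsule wall
    return np.sum(np.exp(-mu_f * path) / R2) / n

for z in (0.0, 0.5, 1.0):
    print(f"z = {z} cm: {sievert_dose_rate(1.0, z):.4f}")  # anisotropy with angle
```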
MP-Pic simulation of CFB riser with EMMS-based drag model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, F.; Song, F.; Benyahia, S.
2012-01-01
The MP-PIC (multiphase particle-in-cell) method combined with the EMMS (energy minimization multi-scale) drag force model was implemented in the open-source program MFIX to simulate the gas–solid flows in CFB (circulating fluidized bed) risers. The solid flux calculated with the EMMS drag agrees well with the experimental value, while the traditional homogeneous drag over-predicts this value. The EMMS drag force model can also predict the macro- and meso-scale structures. Quantitative comparison of the results of the EMMS drag force model with the experimental measurements shows the high accuracy of the model. The effects of the number of particles per parcel and of wall conditions on the simulation results have also been investigated in the paper. This work proved that MP-PIC combined with the EMMS drag model can successfully simulate the fluidized flows in CFB risers, and it serves as a candidate to realize real-time simulation of industrial processes in the future.
NASA Astrophysics Data System (ADS)
Clark, D.
2012-12-01
In the future, acquisition of magnetic gradient tensor data is likely to become routine. New methods developed for analysis of magnetic gradient tensor data can also be applied to high quality conventional TMI surveys that have been processed, using Fourier filtering techniques or otherwise, to calculate magnetic vector and tensor components. This approach is, in fact, the only practical way at present to analyze vector component data, as measurements of vector components are seriously afflicted by motion noise, which is not as serious a problem for gradient components. In many circumstances, an optimal approach to extracting maximum information from magnetic surveys would be to combine analysis of measured gradient tensor data with vector components calculated from TMI measurements. New methods for inverting gradient tensor surveys to obtain source parameters have been developed for a number of elementary, but useful, models. These include point dipole (sphere), vertical line of dipoles (narrow vertical pipe), line of dipoles (horizontal cylinder), thin dipping sheet, horizontal line current, and contact models. A key simplification is the use of eigenvalues and associated eigenvectors of the tensor. The normalized source strength (NSS), calculated from the eigenvalues, is a particularly useful rotational invariant that peaks directly over 3D compact sources, 2D compact sources, thin sheets and contacts, and is independent of magnetization direction for these sources (and only weakly dependent on magnetization direction in general). In combination, the NSS and its vector gradient enable estimation of the Euler structural index, thereby constraining source geometry, and determine source locations uniquely. NSS analysis can be extended to other useful models, such as vertical pipes, by calculating eigenvalues of the vertical derivative of the gradient tensor. Once source locations are determined, information on source magnetizations can be obtained by simple linear inversion of measured or calculated vector and/or tensor data. Inversions based on the vector gradient of the NSS over the Tallawang magnetite deposit in central New South Wales yielded good agreement between the inferred geometry of the tabular magnetite skarn body and drill hole intersections. Inverted magnetizations are consistent with magnetic property measurements on drill core samples from this deposit. Similarly, inversions of calculated tensor data over the Mount Leyshon gold-mineralized porphyry system in Queensland yield good estimates of the centroid location, total magnetic moment and magnetization direction of the magnetite-bearing potassic alteration zone that are consistent with geological and petrophysical information.
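The eigenvalue step can be sketched as below, using the commonly quoted NSS definition mu = sqrt(-lambda2^2 - lambda1*lambda3) for ordered eigenvalues lambda1 <= lambda2 <= lambda3 of the symmetric, traceless gradient tensor. The example tensor is synthetic, and the formula should be checked against the paper's convention.

```python
# Normalized source strength from the eigenvalues of a 3x3 gradient tensor.
import numpy as np

def normalized_source_strength(G):
    """NSS (nT/m) from a symmetric, traceless magnetic gradient tensor."""
    lam = np.linalg.eigvalsh(G)            # eigenvalues in ascending order
    val = -lam[1] ** 2 - lam[0] * lam[2]
    return np.sqrt(max(val, 0.0))          # guard tiny negative round-off

G = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, -3.0]])           # synthetic traceless tensor
print(f"NSS = {normalized_source_strength(G):.2f} nT/m")
```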
Modeling post-eruptive deformation at Okmok volcano from GPS and InSAR using unscented Kalman filter
NASA Astrophysics Data System (ADS)
Xue, X.; Freymueller, J. T.
2017-12-01
Okmok volcano, which occupies most of northeastern Umnak Island in the Aleutian arc, started inflating soon after its 2008 eruption. Seven GPS sites have operated since the eruption: two are located within the caldera, three are around the rim of the caldera, and two are outside the caldera. InSAR time series have been generated using data from the C-band Envisat and X-band TerraSAR-X satellites (Qu et al., 2015). Both GPS and InSAR indicate more than 0.6 m of uplift within the caldera and subtle subsidence outside the caldera. Based on a single Mogi source, an unscented Kalman filter was successfully used to model the deformation at Okmok detected by GPS during 2000-2007. We have expanded it to model multiple Mogi sources at different depths and to integrate the InSAR observations. Before applying the Kalman filter, we remove a time-independent Mogi source and phase ramp from each InSAR image and obtain its variance-covariance information from the residual. We also determine the relative weight between GPS and InSAR data using variance component estimation. The GPS and InSAR time series can then be combined for the Kalman filter. Preliminary results show that two Mogi sources are more likely beneath Okmok volcano. The deep source is located at 8.5 km depth; it deflated by 0.016 km3 during the first 3 years after the eruption and then reached a stable state. The deflating source explains the subsidence outside the caldera, which cannot be modeled with only one inflating source. The shallow source, migrating 0.5 km from north to south, is located at 2 km depth within the caldera, close to the source position before the eruption (Freymueller et al., 2010). The magma volume accumulation of the shallow source in the 7 years following the 2008 eruption is 0.035 km3.
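The forward model underlying the filter state can be sketched with the standard Mogi point-source formulas for a Poisson ratio of 0.25. The depth and volume change below echo the abstract's shallow-source values, but the snippet is illustrative, not the authors' code.

```python
# Mogi point source: surface displacement from a volume change dV at depth d
# in an elastic half-space (nu = 0.25): u = (3 dV / 4 pi) * (rho, d) / R^3.
import numpy as np

def mogi_displacement(rho, d, dV):
    """Return (radial, vertical) surface displacement in metres."""
    R3 = (rho**2 + d**2) ** 1.5
    c = 3.0 * dV / (4.0 * np.pi)
    return c * rho / R3, c * d / R3

rho = np.array([0.0, 1000.0, 3000.0, 6000.0])    # m from the source axis
ur, uz = mogi_displacement(rho, d=2000.0, dV=0.035e9)
print("radial (m):", ur.round(3))
print("uplift (m):", uz.round(3))                # peaks directly over the source
```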
Numerical simulation and experimental verification of extended source interferometer
NASA Astrophysics Data System (ADS)
Hou, Yinlong; Li, Lin; Wang, Shanshan; Wang, Xiao; Zang, Haijun; Zhu, Qiudong
2013-12-01
An extended-source interferometer, compared with the classical point-source interferometer, can suppress coherent noise from the environment and the system, decrease dust-scattering effects, and reduce the high-frequency error of the reference surface. Numerical simulation and experimental verification of an extended-source interferometer are discussed in this paper. To provide guidance for the experiment, the extended-source interferometer is modeled in the optical design software Zemax. Matlab code automatically adjusts the field parameters of the optical system and conveniently collects a series of interferometric data; Dynamic Data Exchange (DDE) is used to connect Zemax and Matlab. The visibility of the interference fringes is then calculated by summing the collected interferometric data. Alongside the simulation, an experimental platform was established, consisting of an extended source, an interference cavity, and an image collection system. The reduction of the high-frequency error of the reference surface and of environmental coherent noise is verified. The relation between spatial coherence and the size, shape, and intensity distribution of the extended source is also verified through analysis of the fringe visibility. The simulation results agree with those from the real extended-source interferometer, showing that the model simulates the actual optical interference quite well. The simulation platform can therefore be used to guide experiments with interferometers based on various extended sources.
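A minimal sketch of the fringe-visibility evaluation mentioned above (not the authors' Zemax/Matlab pipeline): visibility V = (Imax − Imin)/(Imax + Imin) can be computed per pixel from a stack of simulated interferograms recorded while the phase is stepped.

```python
import numpy as np

def fringe_visibility(frames):
    """Per-pixel fringe visibility from a stack of interferograms.

    frames : (n_phase, ny, nx) intensities collected while stepping
    the phase; V = (Imax - Imin) / (Imax + Imin) per pixel.
    """
    i_max = frames.max(axis=0)
    i_min = frames.min(axis=0)
    return (i_max - i_min) / (i_max + i_min + 1e-12)

# Toy check: two-beam interference with partial coherence gamma = 0.6
phase = np.linspace(0, 2 * np.pi, 64)[:, None, None]
frames = 1.0 + 0.6 * np.cos(phase + np.zeros((1, 8, 8)))
print(fringe_visibility(frames).mean())   # ~0.6, as expected
```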
Highlights on gamma rays, neutrinos and antiprotons from TeV Dark Matter
NASA Astrophysics Data System (ADS)
Gammaldi, Viviana
2016-07-01
It has been shown that the gamma-ray flux observed by HESS from the J1745-290 Galactic Center source is well fitted as secondary gamma-ray photons generated by Dark Matter annihilating into Standard Model particles, in combination with a simple power-law background. The neutrino flux expected from such a Dark Matter source has also been analyzed. The main results of these analyses for 50 TeV Dark Matter annihilating into W+W- gauge bosons, together with preliminary results for antiprotons, are presented.
Mikkelson, Daniel; Chang, Chih-Wei; Cetiner, Sacit M.; ...
2015-10-01
Here, the U.S. Department of Energy (DOE) supports research and development (R&D) that could lead to more efficient utilization of clean energy generation sources, including renewable and nuclear options, to meet grid demand and industrial thermal energy needs [1]. One hybridization approach being investigated by the DOE Offices of Nuclear Energy (NE) and Energy Efficiency and Renewable Energy (EERE) is tighter coupling of nuclear and renewable energy sources to better manage overall energy use across the combined electricity, industrial manufacturing, and transportation sectors.
The ALMA-PILS survey: 3D modeling of the envelope, disks and dust filament of IRAS 16293-2422
NASA Astrophysics Data System (ADS)
Jacobsen, S. K.; Jørgensen, J. K.; van der Wiel, M. H. D.; Calcutt, H.; Bourke, T. L.; Brinch, C.; Coutens, A.; Drozdovskaya, M. N.; Kristensen, L. E.; Müller, H. S. P.; Wampfler, S. F.
2018-04-01
Context. The Class 0 protostellar binary IRAS 16293-2422 is an interesting target for (sub)millimeter observations due to both the rich chemistry toward the two main components of the binary and its complex morphology. Its proximity to Earth allows the study of its physical and chemical structure on solar system scales using high angular resolution observations. Such data reveal a complex morphology that cannot be accounted for in traditional, spherical 1D models of the envelope. Aims: The purpose of this paper is to study the environment of the two components of the binary through 3D radiative transfer modeling and to compare with data from the Atacama Large Millimeter/submillimeter Array. Such comparisons can be used to constrain the protoplanetary disk structures, the luminosities of the two components of the binary and the chemistry of simple species. Methods: We present 13CO, C17O and C18O J = 3-2 observations from the ALMA Protostellar Interferometric Line Survey (PILS), together with a qualitative study of the dust and gas density distribution of IRAS 16293-2422. A 3D dust and gas model including disks and a dust filament between the two protostars is constructed which qualitatively reproduces the dust continuum and gas line emission. Results: Radiative transfer modeling in our sampled parameter space suggests that, while the disk around source A could not be constrained, the disk around source B has to be vertically extended. This puffed-up structure can be obtained with both a protoplanetary disk model with an unexpectedly high scale-height and with the density solution from an infalling, rotating collapse. Combined constraints on our 3D model, from observed dust continuum and CO isotopologue emission between the sources, corroborate that source A should be at least six times more luminous than source B. We also demonstrate that the volume of high-temperature regions where complex organic molecules arise is sensitive to whether the total luminosity is in a single radiation source or distributed into two sources, affecting the interpretation of earlier chemical modeling efforts of the IRAS 16293-2422 hot corino which used a single-source approximation. Conclusions: Radiative transfer modeling of sources A and B, with the density solution of an infalling, rotating collapse or a protoplanetary disk model, can match the constraints for the disk-like emission around source A and B from the observed dust continuum and CO isotopologue gas emission. If a protoplanetary disk model is used around source B, it has to have an unusually high scale-height in order to reach the dust continuum peak emission value, while fulfilling the other observational constraints. Our 3D model requires source A to be much more luminous than source B; LA ∼ 18 L⊙ and LB ∼ 3 L⊙.
Combining Radiography and Passive Measurements for Radiological Threat Detection in Cargo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.
Radiography is widely understood to provide information complementary to passive detection: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions which may mask a passive radiological signal. We present a method for combining radiographic and passive data which uses the radiograph to provide an estimate of scatter and attenuation for possible sources. This approach allows quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present first results for this method for a simple modeled test case of a cargo container driving through a PVT portal. With this inversion approach, we address criteria for an integrated passive and radiographic screening system and how detection of SNM threats might be improved in such a system.
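A sketch of how a radiograph-derived attenuation map could enter a probabilistic source search: assuming Beer-Lambert attenuation factors precomputed from the radiograph and a Poisson counting model for the passive detectors, a likelihood can be scored for any candidate set of source strengths. The array names are hypothetical, and the paper's actual scatter treatment is more detailed than this.

```python
import numpy as np
from scipy.stats import poisson

def log_likelihood(counts, strengths, atten, geom, background):
    """Poisson log-likelihood of passive counts for candidate sources.

    counts     : observed counts per detector, shape (n_det,)
    strengths  : candidate source emission rates, shape (n_src,)
    atten      : exp(-integral of mu dl) along each detector-source path,
                 derived from the radiograph, shape (n_det, n_src)
    geom       : solid-angle / efficiency factors, shape (n_det, n_src)
    background : expected background counts per detector, shape (n_det,)
    """
    expected = background + (geom * atten) @ strengths
    return poisson.logpmf(counts, expected).sum()
```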
Sterba, Sonya K; Rights, Jason D
2016-01-01
Item parceling remains widely used under conditions that can lead to parcel-allocation variability in results. Hence, researchers may be interested in quantifying and accounting for parcel-allocation variability within sample. To do so in practice, three key issues need to be addressed. First, how can we combine sources of uncertainty arising from sampling variability and parcel-allocation variability when drawing inferences about parameters in structural equation models? Second, on what basis can we choose the number of repeated item-to-parcel allocations within sample? Third, how can we diagnose and report proportions of total variability per estimate arising due to parcel-allocation variability versus sampling variability? This article addresses these three methodological issues. Developments are illustrated using simulated and empirical examples, and software for implementing them is provided.
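One way to combine the two sources of uncertainty discussed above is a multiple-imputation-style pooling rule across M repeated allocations; the sketch below assumes that analogy (the article's exact combining rules may differ), and it also yields the proportion of total variability due to parcel allocation.

```python
import numpy as np

def pool_across_allocations(estimates, variances):
    """Combine one SEM parameter across M item-to-parcel allocations.

    estimates, variances : length-M arrays of the point estimate and its
    squared standard error from each allocation. Returns the pooled
    estimate, the total variance, and the proportion of total variability
    attributable to parcel-allocation (vs. sampling) variability.
    """
    M = len(estimates)
    qbar = estimates.mean()                  # pooled point estimate
    within = variances.mean()                # sampling variability
    between = estimates.var(ddof=1)          # allocation variability
    total = within + (1 + 1 / M) * between   # MI-style total variance
    prop_alloc = (1 + 1 / M) * between / total
    return qbar, total, prop_alloc
```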
Broadband radio spectro-polarimetric observations of high-Faraday-rotation-measure AGN
NASA Astrophysics Data System (ADS)
Pasetto, Alice; Carrasco-González, Carlos; O'Sullivan, Shane; Basu, Aritra; Bruni, Gabriele; Kraus, Alex; Curiel, Salvador; Mack, Karl-Heinz
2018-06-01
We present broadband polarimetric observations of a sample of high-Faraday-rotation-measure (high-RM) active galactic nuclei (AGN) using the Karl G. Jansky Very Large Array (JVLA) telescope from 1 to 2 GHz and from 4 to 12 GHz. The sample (14 sources) consists of very compact sources (linear resolution smaller than ≈5 kpc) that are unpolarized at 1.4 GHz in the NRAO VLA Sky Survey (NVSS). Total intensity data have been modeled using a combination of synchrotron components, revealing complex structure in their radio spectra. Depolarization modeling, through so-called qu-fitting (the modeling of the fractional Stokes Q and U quantities), has been performed on the polarized data using an equation that attempts to simplify the process of fitting many different depolarization models. These models can be divided into two major categories: external depolarization (ED) and internal depolarization (ID) models. Understanding which of the two mechanisms is the most representative would clarify whether the AGN jet is embedded in a dense external magneto-ionic medium or whether a jet wind causes the high RM and strong depolarization. This could help to probe the jet magnetic field geometry (e.g., helical or otherwise). These new high-sensitivity data show complicated behavior in the total intensity and polarization radio spectra of individual sources. We observed the presence of several synchrotron components and Faraday components in their total intensity and polarized spectra. For the majority of our targets (12 sources), the depolarization seems to be caused by a turbulent magnetic field. Thus, our main selection criterion (lack of polarization at 1.4 GHz in the NVSS) results in a sample of sources with very large RMs and depolarization due to turbulent magnetic fields local to the source. These broadband JVLA data reveal the complexity of the polarization properties of this class of radio sources. We show how the new qu-fitting technique can be used to probe the magnetized radio source environment and to spectrally resolve the polarized components of unresolved radio sources.
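A minimal sketch of qu-fitting with a single external-depolarization (Burn-type) component: the fractional q(λ²) and u(λ²) spectra are fitted jointly to a model with intrinsic polarization p0, intrinsic angle χ0, rotation measure RM, and external dispersion σ_RM. This parameterization is one common choice in the qu-fitting literature, not necessarily the equation used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def qu_external(lam2, p0, chi0, RM, sigma_RM):
    """Single external Faraday-dispersion (Burn-type) model.

    lam2 : wavelength squared [m^2]; returns [q, u] concatenated.
    """
    p = p0 * np.exp(-2.0 * sigma_RM**2 * lam2**2)   # lam2**2 = lambda^4
    ang = 2.0 * (chi0 + RM * lam2)
    return np.concatenate([p * np.cos(ang), p * np.sin(ang)])

def fit_qu(lam2, q, u):
    """Joint least-squares fit of fractional Stokes q and u spectra."""
    y = np.concatenate([q, u])
    popt, pcov = curve_fit(qu_external, lam2, y,
                           p0=[0.05, 0.0, 50.0, 10.0])  # rough starting guess
    return popt, pcov
```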
Elissen, Arianne M J; Struijs, Jeroen N; Baan, Caroline A; Ruwaard, Dirk
2015-05-01
To support providers and commissioners in accurately assessing their local populations' health needs, this study produces an overview of Dutch predictive risk models for health care, focusing specifically on the type, combination and relevance of included determinants for achieving the Triple Aim (improved health, better care experience, and lower costs). We conducted a mixed-methods study combining document analyses, interviews and a Delphi study. Predictive risk models were identified based on a web search and expert input. Participating in the study were Dutch experts in predictive risk modelling (interviews; n=11) and experts in healthcare delivery, insurance and/or funding methodology (Delphi panel; n=15). Ten predictive risk models were analysed, comprising 17 unique determinants. Twelve were considered relevant by experts for estimating community health needs. Although some compositional similarities were identified between models, the combination and operationalisation of determinants varied considerably. Existing predictive risk models provide a good starting point, but optimally balancing resources and targeting interventions on the community level will likely require a more holistic approach to health needs assessment. Development of additional determinants, such as measures of people's lifestyle and social network, may require policies pushing the integration of routine data from different (healthcare) sources.
MODEL-FREE MULTI-PROBE LENSING RECONSTRUCTION OF CLUSTER MASS PROFILES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umetsu, Keiichi
2013-05-20
Lens magnification by galaxy clusters induces characteristic spatial variations in the number counts of background sources, amplifying their observed fluxes and expanding the area of sky, the net effect of which, known as magnification bias, depends on the intrinsic faint-end slope of the source luminosity function. The bias is strongly negative for red galaxies, dominated by the geometric area distortion, whereas it is mildly positive for blue galaxies, enhancing the blue counts toward the cluster center. We generalize the Bayesian approach of Umetsu et al. for reconstructing projected cluster mass profiles, by incorporating multiple populations of background sources for magnification-bias measurements and combining them with complementary lens-distortion measurements, effectively breaking the mass-sheet degeneracy and improving the statistical precision of cluster mass measurements. The approach can be further extended to include strong-lensing projected mass estimates, thus allowing for non-parametric absolute mass determinations in both the weak and strong regimes. We apply this method to our recent CLASH lensing measurements of MACS J1206.2-0847, and demonstrate how combining multi-probe lensing constraints can improve the reconstruction of cluster mass profiles. This method will also be useful for a stacked lensing analysis, combining all lensing-related effects in the cluster regime, for a definitive determination of the averaged mass profile.
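The red/blue behavior described above follows from the standard magnification-bias count relation n(μ) = n0 μ^{2.5s−1}, where s is the logarithmic count slope; a small sketch illustrating the sign change (the numbers are illustrative only):

```python
import numpy as np

def magnified_counts(n0, mu, s):
    """Expected background source counts under lens magnification mu.

    n0 : unlensed surface density; s = d log10 N(<m) / dm, the intrinsic
    faint-end count slope. s < 0.4 gives net depletion (red galaxies);
    s > 0.4 gives net enhancement (blue galaxies).
    """
    return n0 * mu ** (2.5 * s - 1.0)

print(magnified_counts(10.0, 2.0, 0.1))   # depletion: fewer than 10
print(magnified_counts(10.0, 2.0, 0.7))   # enhancement: more than 10
```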
Sarnat, Jeremy A.; Marmur, Amit; Klein, Mitchel; Kim, Eugene; Russell, Armistead G.; Sarnat, Stefanie E.; Mulholland, James A.; Hopke, Philip K.; Tolbert, Paige E.
2008-01-01
Background Interest in the health effects of particulate matter (PM) has focused on identifying sources of PM, including biomass burning, power plants, and gasoline and diesel emissions that may be associated with adverse health risks. Few epidemiologic studies, however, have included source-apportionment estimates in their examinations of PM health effects. We analyzed a time-series of chemically speciated PM measurements in Atlanta, Georgia, and conducted an epidemiologic analysis using data from three distinct source-apportionment methods. Objective The key objective of this analysis was to compare epidemiologic findings generated using both factor analysis and mass balance source-apportionment methods. Methods We analyzed data collected between November 1998 and December 2002 using positive-matrix factorization (PMF), modified chemical mass balance (CMB-LGO), and a tracer approach. Emergency department (ED) visits for a combined cardiovascular (CVD) and respiratory disease (RD) group were assessed as end points. We estimated the risk ratio (RR) associated with same-day PM concentrations using Poisson generalized linear models. Results There were significant, positive associations between same-day PM2.5 (PM with aerodynamic diameter ≤ 2.5 μm) concentrations attributed to mobile sources (RR range, 1.018–1.025) and biomass combustion, primarily prescribed forest burning and residential wood combustion (RR range, 1.024–1.033), source categories and CVD-related ED visits. Associations between the source categories and RD visits were not significant for all models except sulfate-rich secondary PM2.5 (RR range, 1.012–1.020). Generally, the epidemiologic results were robust to the selection of source-apportionment method, with strong agreement between the RR estimates from the PMF and CMB-LGO models, as well as with results from models using single-species tracers as surrogates of the source-apportioned PM2.5 values. Conclusions Despite differences among the source-apportionment methods, these findings suggest that modeled source-apportioned data can produce robust estimates of acute health risk. In Atlanta, there were consistent associations across methods between PM2.5 from mobile sources and biomass burning with both cardiovascular and respiratory ED visits, and between sulfate-rich secondary PM2.5 with respiratory visits. PMID:18414627
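A stripped-down sketch of the risk-ratio estimation, assuming a Poisson GLM of daily ED counts on same-day source-apportioned PM2.5 (the study's models additionally include temporal trends, meteorology, and other controls; the function and variable names here are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

def rate_ratio(ed_visits, pm_source, confounders):
    """Same-day rate ratio per unit of source-apportioned PM2.5.

    ed_visits  : daily ED visit counts, shape (n_days,)
    pm_source  : same-day PM2.5 attributed to one source category
    confounders: design matrix of time trends, weather terms, etc.
    """
    X = sm.add_constant(np.column_stack([pm_source, confounders]))
    fit = sm.GLM(ed_visits, X, family=sm.families.Poisson()).fit()
    beta = fit.params[1]          # coefficient on the PM2.5 source term
    return np.exp(beta)           # RR per unit increase in PM2.5
```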
Calibration artefacts in radio interferometry - II. Ghost patterns for irregular arrays
NASA Astrophysics Data System (ADS)
Wijnholds, S. J.; Grobler, T. L.; Smirnov, O. M.
2016-04-01
Calibration artefacts, like the self-calibration bias, usually emerge when data are calibrated using an incomplete sky model. In the first paper of this series, in which we analysed calibration artefacts in data from the Westerbork Synthesis Radio Telescope, we showed that these artefacts take the form of spurious positive and negative sources, which we refer to as ghosts or ghost sources. We also developed a mathematical framework with which we could predict the ghost pattern of an east-west interferometer for a simple two-source test case. In this paper, we extend our analysis to more general array layouts. This provides us with a useful method for the analysis of ghosts that we refer to as extrapolation. Combining extrapolation with a perturbation analysis, we are able to (1) analyse the ghost pattern for a two-source test case with one modelled and one unmodelled source for an arbitrary array layout, (2) explain why some ghosts are brighter than others, (3) define a taxonomy allowing us to classify the different ghosts, (4) derive closed-form expressions for the fluxes and positions of the brightest ghosts, and (5) explain the strange two-peak structure with which some ghosts manifest during imaging. We illustrate our mathematical predictions using simulations of the KAT-7 (seven-dish Karoo Array Telescope) array. These results show the explanatory power of our mathematical model. The insights gained in this paper provide a solid foundation for studying calibration artefacts in arbitrary incomplete sky models (i.e. more complicated than the two-source example discussed here) or full synthesis observations including direction-dependent effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKone, Thomas E.; Castorina, Rosemary; Kuwabara, Yu
2006-06-01
By drawing on human biomonitoring data and limited environmental samples together with outputs from the CalTOX multimedia, multipathway source-to-dose model, we characterize cumulative intake of organophosphorous (OP) pesticides in an agricultural region of California. We assemble regional OP pesticide use, environmental sampling, and biological tissue monitoring data for a large and geographically dispersed population cohort of 592 pregnant Latina women in California (the CHAMACOS cohort). We then use CalTOX with regional pesticide usage data to estimate the magnitude and uncertainty of exposure and intake from local sources. We combine model estimates of intake from local sources with food intake based on national residue data to estimate for the CHAMACOS cohort the cumulative median OP intake, which corresponds to expected levels of urinary dialkylphosphate (DAP) metabolite excretion for this cohort. From these results we develop premises about relative contributions from different sources and pathways of exposure. We evaluate these premises by comparing the magnitude and variation of DAPs in the CHAMACOS cohort with the whole U.S. population using data from the National Health and Nutrition Evaluation Survey (NHANES). This comparison supports the premise that in both populations diet is the common and dominant exposure pathway. Both the model results and the biomarker comparison support the observation that the CHAMACOS population has a statistically significantly higher intake of OP pesticides, which appears as an almost constant additional dose among all participants. We attribute the magnitude and small variance of this intake to non-dietary exposure in residences from local sources.
Keegan, Ronan M; McNicholas, Stuart J; Thomas, Jens M H; Simpkin, Adam J; Simkovic, Felix; Uski, Ville; Ballard, Charles C; Winn, Martyn D; Wilson, Keith S; Rigden, Daniel J
2018-03-01
Increasing sophistication in molecular-replacement (MR) software and the rapid expansion of the PDB in recent years have allowed the technique to become the dominant method for determining the phases of a target structure in macromolecular X-ray crystallography. In addition, improvements in bioinformatic techniques for finding suitable homologous structures for use as MR search models, combined with developments in refinement and model-building techniques, have pushed the applicability of MR to lower sequence identities and made weak MR solutions more amenable to refinement and improvement. MrBUMP is a CCP4 pipeline which automates all stages of the MR procedure. Its scope covers everything from the sourcing and preparation of suitable search models right through to rebuilding of the positioned search model. Recent improvements to the pipeline include the adoption of more sensitive bioinformatic tools for sourcing search models, enhanced model-preparation techniques including better ensembling of homologues, and the use of phase improvement and model building on the resulting solution. The pipeline has also been deployed as an online service through CCP4 online, which allows its users to exploit large bioinformatic databases and coarse-grained parallelism to speed up the determination of a possible solution. Finally, the molecular-graphics application CCP4mg has been combined with MrBUMP to provide an interactive visual aid to the user during the process of selecting and manipulating search models for use in MR. Here, these developments in MrBUMP are described with a case study to explore how some of the enhancements to the pipeline and to CCP4mg can help to solve a difficult case.
Building an Open Source Framework for Integrated Catchment Modeling
NASA Astrophysics Data System (ADS)
Jagers, B.; Meijers, E.; Villars, M.
2015-12-01
In order to develop effective strategies and associated policies for environmental management, we need to understand the dynamics of the natural system as a whole and the human role therein. This understanding is gained by comparing our mental model of the world with observations from the field. However, to properly understand the system we should look at the dynamics of water, sediments, water quality, and ecology throughout the whole system, from catchment to coast, both at the surface and in the subsurface. Numerical models are indispensable in helping us understand the interactions of the overall system, but we need to be able to update and adjust them to improve our understanding and test our hypotheses. To support researchers around the world in this challenging task, we began several years ago to develop a new open-source modeling environment, DeltaShell, that integrates distributed hydrological models with 1D, 2D, and 3D hydraulic models, including generic components for tracking sediment, water quality, and ecological quantities throughout the hydrological cycle. The open-source approach, combined with a modular design based on open standards that allows for easy adjustment and expansion as demands and knowledge grow, provides an ideal starting point for addressing challenging integrated environmental questions.
SynergyFinder: a web application for analyzing drug combination dose-response matrix data.
Ianevski, Aleksandr; He, Liye; Aittokallio, Tero; Tang, Jing
2017-08-01
Rational design of drug combinations has become a promising strategy to tackle the drug sensitivity and resistance problem in cancer treatment. To systematically evaluate the pre-clinical significance of pairwise drug combinations, functional screening assays that probe combination effects in a dose-response matrix assay are commonly used. To facilitate the analysis of such drug combination experiments, we implemented a web application that uses key functions of the R-package SynergyFinder, and provides not only the flexibility of using multiple synergy scoring models, but also a user-friendly interface for visualizing the drug combination landscapes in an interactive manner. The SynergyFinder web application is freely accessible at https://synergyfinder.fimm.fi; the R-package and its source code are freely available at http://bioconductor.org/packages/release/bioc/html/synergyfinder.html.
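As an illustration of one of the synergy scoring models available in such tools, a Bliss-independence score over a dose-response matrix compares the measured combination response with the response expected if the two drugs acted independently. The toy numbers below are invented.

```python
import numpy as np

def bliss_synergy(response_a, response_b, response_ab):
    """Bliss-independence synergy score for a dose-response matrix.

    response_* : fractional inhibitions in [0, 1]; response_a and
    response_b are monotherapy responses broadcast over the matrix,
    response_ab the measured combination response. Positive scores
    indicate synergy, negative scores antagonism.
    """
    expected = response_a + response_b - response_a * response_b
    return response_ab - expected

# 2 x 2 toy matrix: rows = doses of drug A, cols = doses of drug B
a = np.array([[0.2], [0.5]])      # drug A alone at two doses
b = np.array([[0.1, 0.3]])        # drug B alone at two doses
ab = np.array([[0.35, 0.60],
               [0.60, 0.85]])     # measured combination responses
print(bliss_synergy(a, b, ab))
```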
NASA Astrophysics Data System (ADS)
Johnston, C. D.; Davis, G. B.; Bastow, T.; Annable, M. D.; Trefry, M. G.; Furness, A.; Geste, Y.; Woodbury, R.; Rhodes, S.
2011-12-01
Measures of the source mass and depletion characteristics of recalcitrant dense non-aqueous phase liquid (DNAPL) contaminants are critical elements for assessing performance of remediation efforts. This is in addition to understanding the relationships between source mass depletion and changes to dissolved contaminant concentration and mass flux in groundwater. Here we present results of applying analytical source-depletion concepts to pumping from within the DNAPL source zone of a 10-m thick heterogeneous layered aquifer to estimate the original source mass and characterise the time trajectory of source depletion and mass flux in groundwater. The multi-component, reactive DNAPL source consisted of the brominated solvent tetrabromoethane (TBA) and its transformation products (mostly tribromoethene, TriBE). Coring and multi-level groundwater sampling indicated the DNAPL to be mainly in lower-permeability layers, suggesting the source had already undergone appreciable depletion. Four simplified source dissolution models (exponential, power function, error function and rational mass) were able to describe the concentration history of the total molar concentration of brominated organics in extracted groundwater during 285 days of pumping. Approximately 152 kg of brominated compounds were extracted. The lack of significant kinetic mass transfer limitations in pumped concentrations was notable, despite the heterogeneous layering in the aquifer and the distribution of DNAPL. There was little to choose between the model fits to the pumped concentration time series. The variance of groundwater velocities in the aquifer determined during a partitioning inter-well tracer test (PITT) was used to parameterise the models; however, the models were found to be relatively insensitive to this parameter. All models indicated an initial source mass around 250 kg, which compared favourably to an estimate of 220 kg derived from the PITT. The extrapolated concentrations from the dissolution models diverged, showing disparate approaches to possible remediation objectives. However, it also showed that an appreciable proportion of the source would need to be removed to discriminate between the models. This may limit the utility of such modelling early in the history of a DNAPL source. A further limitation is the simplified approach of analysing the combined parent/daughter compounds with different solubilities as a total molar concentration. Although the fitted results gave confidence to this approach, there were appreciable changes in relative abundance. The dissolution and partitioning processes are discussed in relation to the lower-solubility TBA becoming dominant in pumped groundwater over time, despite its known rapid transformation to TriBE. These processes are also related to the architecture of the depleting source as revealed by multi-level groundwater sampling under reversed pumping/injection conditions.
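Two of the four dissolution models lend themselves to a compact fitting sketch. The parameterizations below (exponential decay and one power-function form) and the constant-pumping mass estimate M0 = Q·C0/k for the exponential case are illustrative assumptions, not the paper's exact formulations.

```python
import numpy as np
from scipy.optimize import curve_fit

def c_exponential(t, c0, k):
    """Exponential source-depletion model, C(t) = C0 * exp(-k t)."""
    return c0 * np.exp(-k * t)

def c_power(t, c0, t0, n):
    """Power-function depletion, C(t) = C0 * (1 + t/t0)**(-n)."""
    return c0 * (1.0 + t / t0) ** (-n)

def initial_mass_exponential(c0, k, Q, molar_mass):
    """Implied removable source mass for the exponential model under
    constant pumping at rate Q [m^3/day]: M0 = Q*C0/k, with C0 the
    total molar concentration [mol/m^3] and k in 1/day."""
    return Q * c0 / k * molar_mass

def fit_models(t, conc):
    """Fit both models to a pumped-concentration time series [t in days]."""
    p_exp, _ = curve_fit(c_exponential, t, conc, p0=[conc[0], 0.01])
    p_pow, _ = curve_fit(c_power, t, conc, p0=[conc[0], 50.0, 1.0])
    return p_exp, p_pow
```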
NASA Astrophysics Data System (ADS)
Santarius, John; Navarro, Marcos; Michalak, Matthew; Fancher, Aaron; Kulcinski, Gerald; Bonomo, Richard
2016-10-01
A newly initiated research project will be described that investigates methods for detecting shielded special nuclear materials by combining multi-dimensional neutron sources, forward/adjoint calculations modeling neutron and gamma transport, and sparse data analysis of detector signals. The key tasks for this project are: (1) developing a radiation transport capability for use in optimizing adaptive-geometry, inertial-electrostatic confinement (IEC) neutron source/detector configurations for neutron pulses distributed in space and/or phased in time; (2) creating distributed-geometry, gas-target, IEC fusion neutron sources; (3) applying sparse data and noise reduction algorithms, such as principal component analysis (PCA) and wavelet transform analysis, to enhance detection fidelity; and (4) educating graduate and undergraduate students. Funded by DHS DNDO Project 2015-DN-077-ARI095.
LISA Sources in Milky Way Globular Clusters
NASA Astrophysics Data System (ADS)
Kremer, Kyle; Chatterjee, Sourav; Breivik, Katelyn; Rodriguez, Carl L.; Larson, Shane L.; Rasio, Frederic A.
2018-05-01
We explore the formation of double-compact-object binaries in Milky Way (MW) globular clusters (GCs) that may be detectable by the Laser Interferometer Space Antenna (LISA). We use a set of 137 fully evolved GC models that, overall, effectively match the properties of the observed GCs in the MW. We estimate that, in total, the MW GCs contain ˜21 sources that will be detectable by LISA. These detectable sources contain all combinations of black hole (BH), neutron star, and white dwarf components. We predict ˜7 of these sources will be BH-BH binaries. Furthermore, we show that some of these BH-BH binaries can have signal-to-noise ratios large enough to be detectable at the distance of the Andromeda galaxy or even the Virgo cluster.
NASA Technical Reports Server (NTRS)
Eales, S.; Dunne, L.; Clements, D.; Cooray, A.; De Zotti, G.; Dye, S.; Ivison, R.; Jarvis, M.; Lagache, G.; Maddox, S.;
2010-01-01
The Herschel ATLAS is the largest open-time key project that will be carried out on the Herschel Space Observatory. It will survey 570 sq deg of the extragalactic sky, 4 times larger than all the other Herschel extragalactic surveys combined, in five far-infrared and submillimeter bands. We describe the survey, the complementary multiwavelength data sets that will be combined with the Herschel data, and the six major science programs we are undertaking. Using new models based on a previous submillimeter survey of galaxies, we present predictions of the properties of the ATLAS sources in other wave bands.
Graphical function mapping as a new way to explore cause-and-effect chains
Evans, Mary Anne
2016-01-01
Graphical function mapping provides a simple method for improving communication within interdisciplinary research teams and between scientists and nonscientists. This article introduces graphical function mapping using two examples and discusses its usefulness. Function mapping projects the outcome of one function into another to show the combined effect. Using this mathematical property in a simpler, even cartoon-like, graphical way allows the rapid combination of multiple information sources (models, empirical data, expert judgment, and guesses) in an intuitive visual to promote further discussion, scenario development, and clear communication.
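A toy sketch of the projection idea described above: the output curve of one function feeds the input of the next, and the composition shows the combined effect. Both curves here are invented placeholders for the kind of expert-drawn shapes the article has in mind.

```python
import numpy as np

def bloom(load):
    """Hypothetical response: nutrient load -> algal bloom intensity."""
    return 1.0 / (1.0 + np.exp(-(load - 5.0)))   # logistic response curve

def fish_kill_risk(bloom_intensity):
    """Hypothetical response: bloom intensity -> fish-kill risk."""
    return bloom_intensity ** 2                  # accelerating risk curve

load = np.linspace(0, 10, 101)
combined = fish_kill_risk(bloom(load))           # f(g(x)): the mapped chain
```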
Chiral primordial blue tensor spectra from the axion-gauge couplings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obata, Ippei, E-mail: obata@tap.scphys.kyoto-u.ac.jp
We suggest a new feature of primordial gravitational waves sourced by axion-gauge couplings, whose forms are motivated by the dimensional reduction of the form field in string theory. In our inflationary model we adopt two types of axion as the inflaton, dubbed the model-independent axion and the model-dependent axion, which couple to two gauge groups with opposite sign combinations. Due to these couplings, both polarization modes of the gauge fields are amplified and enhance both helicities of the tensor modes during inflation. We point out the possibility that primordial blue-tilted tensor power spectra with small chirality are provided by the combination of these axion-gauge couplings; intriguingly, both the amplitudes and the chirality are potentially testable by future space-based gravitational-wave interferometers such as the DECIGO and BBO projects.
Analytical Modeling of Triple-Metal Hetero-Dielectric DG SON TFET
NASA Astrophysics Data System (ADS)
Mahajan, Aman; Dash, Dinesh Kumar; Banerjee, Pritha; Sarkar, Subir Kumar
2018-02-01
In this paper, a 2-D analytical model of a triple-metal hetero-dielectric DG TFET is presented, combining the concepts of triple-material gate engineering and hetero-dielectric engineering. Three metals with different work functions are used as both the front- and back-gate electrodes to modulate the barrier at the source/channel and channel/drain interfaces. In addition, the front-gate dielectric consists of high-K HfO2 at the source end and low-K SiO2 at the drain side, whereas the back-gate dielectric is replaced by air to further improve the ON current of the device. The surface potential and electric field of the proposed device are formulated by solving the 2-D Poisson equation with Young's approximation. Based on this electric-field expression, the tunneling current is obtained using Kane's model. Several device parameters are varied to examine the behavior of the proposed device. The analytical model is validated against TCAD simulation results to establish its accuracy.
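For reference, Kane's band-to-band tunneling model referred to above is commonly written in the local form below; the Kane parameters A and B are material-dependent, and this generic form is an assumption standing in for the exact variant used in the paper.

```latex
% Kane band-to-band tunneling generation rate and the resulting current
G_{\mathrm{BTBT}} = A\,\frac{|E|^{2}}{\sqrt{E_{g}}}
  \exp\!\left(-\frac{B\,E_{g}^{3/2}}{|E|}\right),
\qquad
I_{\mathrm{tun}} = q \int_{\text{channel}} G_{\mathrm{BTBT}}\,\mathrm{d}V
```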
NASA Technical Reports Server (NTRS)
Kenyon, Scott J.; Calvet, Nuria; Hartmann, Lee
1993-01-01
We describe radiative transfer calculations of infalling, dusty envelopes surrounding pre-main-sequence stars and use these models to derive physical properties for a sample of 21 heavily reddened young stars in the Taurus-Auriga molecular cloud. The density distributions needed to match the FIR peaks in the spectral energy distributions of these embedded sources suggest mass infall rates similar to those predicted for simple thermally supported clouds with temperatures about 10 K. Unless the dust opacities are badly in error, our models require substantial departures from spherical symmetry in the envelopes of all sources. These flattened envelopes may be produced by a combination of rotation and cavities excavated by bipolar flows. The rotating infall models of Terebey et al. (1984) indicate a centrifugal radius of about 70 AU for many objects if rotation is the only important physical effect, and this radius is reasonably consistent with typical estimates for the sizes of circumstellar disks around T Tauri stars.
Prediction Model for Relativistic Electrons at Geostationary Orbit
NASA Technical Reports Server (NTRS)
Khazanov, George V.; Lyatsky, Wladislaw
2008-01-01
We developed a new prediction model for forecasting relativistic (greater than 2 MeV) electrons, which provides a very high correlation between predicted and actually measured electron fluxes at geostationary orbit. The model implies multi-step particle acceleration and is based on numerically integrating two linked continuity equations, for primarily accelerated particles and for relativistic electrons. The model includes a source and losses, and uses solar wind data as its only input parameters. As the source we used a coupling function that is a best-fit combination of the solar wind and interplanetary magnetic field parameters responsible for the generation of geomagnetic activity. The loss function was derived from experimental data. We tested the model for the four-year period 2004-2007. The correlation coefficient between predicted and actual electron fluxes, for the whole four-year period as well as for each individual year, is stable and remarkably high (about 0.9). This high and stable correlation shows that reliable forecasting of these electrons at geostationary orbit is possible.
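A schematic sketch of the two linked continuity equations: the structure and parameter names below are illustrative assumptions, since the paper's actual source and loss functions are empirical fits to solar wind data.

```python
def step(n1, n2, S, tau1, tau_acc, tau2, dt):
    """One explicit Euler step of two linked continuity equations.

    n1 : density of seed (primarily accelerated) electrons
    n2 : density of relativistic (>2 MeV) electrons
    S  : solar-wind coupling function driving the seed population

        dn1/dt = S - n1/tau1 - n1/tau_acc     (seed source and losses)
        dn2/dt = n1/tau_acc - n2/tau2         (acceleration and loss)
    """
    dn1 = S - n1 / tau1 - n1 / tau_acc
    dn2 = n1 / tau_acc - n2 / tau2
    return n1 + dt * dn1, n2 + dt * dn2
```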
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, S.; Gezari, S.; Heinis, S.
2015-03-20
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands gP1, rP1, iP1, and zP1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model; and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-out-one cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
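A sketch of the band-wise clustering step, assuming two fit statistics per source as clustering features. The feature layout is hypothetical; the paper clusters cross-validation likelihoods and corrected AIC values in an analogous way, and also uses the distance to the assigned cluster center as a classification-quality measure.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_band(cv_likelihood, aicc):
    """Cluster per-band fit statistics into two classes (BL vs SV).

    cv_likelihood, aicc : per-source arrays of leave-one-out
    cross-validation likelihoods and corrected-AIC statistics
    (hypothetical feature layout for illustration).
    """
    X = np.column_stack([cv_likelihood, aicc])
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    # squared distance to the assigned center as a quality score
    d2 = ((X - km.cluster_centers_[km.labels_]) ** 2).sum(axis=1)
    return km.labels_, d2
```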
Effects of background noise on total noise annoyance
NASA Technical Reports Server (NTRS)
Willshire, K. F.
1987-01-01
Two experiments were conducted to assess the effects of combined community noise sources on annoyance. The first experiment established baseline relationships between annoyance and noise level for three community noise sources (jet aircraft flyovers, traffic, and air conditioners) presented individually. Forty-eight subjects evaluated the annoyance of each noise source presented at four different noise levels. Results indicated that the slope of the linear relationship between annoyance and noise level for traffic noise was significantly different from that for aircraft and air conditioner noise, which had equal slopes. The second experiment investigated annoyance response to combined noise sources, with aircraft noise defined as the major noise source and traffic and air conditioner noise as background noise sources. Effects on annoyance of noise-level differences between aircraft and background noise were determined for three total noise levels and for both background noise sources. A total of 216 subjects were asked to make total annoyance judgements, source-specific annoyance judgements, or a combination of the two, for a wide range of combined noise conditions.
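Combined noise conditions like these are conventionally constructed by energy-summing the component levels; a small sketch of that standard acoustics identity (this is not the paper's annoyance model, just the level arithmetic):

```python
import numpy as np

def combined_level(levels_db):
    """Energy sum of simultaneous noise sources in decibels.

    E.g. a 70 dB aircraft flyover over a 60 dB traffic background:
    combined_level([70, 60]) -> ~70.4 dB.
    """
    return 10.0 * np.log10(np.sum(10.0 ** (np.asarray(levels_db) / 10.0)))

print(combined_level([70, 60]))   # the background adds only ~0.4 dB
```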
Spatial occupancy models for large data sets
Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.
2013-01-01
Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification with a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108,000 km²) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.
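A stripped-down, non-spatial version of the probit occupancy likelihood makes the mixture structure concrete; the spatial random effect and the reduced-dimensional spatial process that the paper adds are omitted here for brevity.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_lik(params, X, y, J):
    """Negative log-likelihood of a simple probit occupancy model.

    X : (n_sites, p) covariate matrix; y : detections per site out of
    J repeat visits. Occupancy psi_i = Phi(x_i' beta) (probit link);
    detection probability p is shared across sites (logit-transformed
    as the last parameter). The binomial coefficient is omitted since
    it is constant in the parameters.
    """
    beta, p_det = params[:-1], 1.0 / (1.0 + np.exp(-params[-1]))
    psi = norm.cdf(X @ beta)                      # probit occupancy
    lik_occ = psi * p_det**y * (1 - p_det)**(J - y)
    lik_unocc = (1 - psi) * (y == 0)              # only all-zero histories
    return -np.log(lik_occ + lik_unocc + 1e-300).sum()

# Usage sketch, given X (n_sites x p), y (n_sites,), and J visits:
# fit = minimize(neg_log_lik, x0=np.zeros(X.shape[1] + 1),
#                args=(X, y, J), method="BFGS")
```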