NASA Astrophysics Data System (ADS)
Muhammad, Ario; Goda, Katsuichiro
2018-03-01
This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.
A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.
Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco
2018-01-01
Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA was coded in the Python language, and is largely based on a simplified formulation of the very popular and recognized AERMOD model. The model allows users to define in a GIS environment thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near-ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be completely managed in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its applications to very complex test cases. The tests show that the processing times are satisfactory and that the definition of sources and receptors and the retrieval of outputs are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD. Copyright © 2017 Elsevier B.V. All rights reserved.
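As a hedged illustration of the kind of Gaussian plume kernel that such area-source models integrate over source polygons, the sketch below implements the generic textbook point-source formula with ground reflection in Python (the language CAREA itself is written in). The function name, parameter values and dispersion coefficients are placeholders, not CAREA's actual code or API.

```python
import numpy as np

def gaussian_plume_concentration(q, u, x, y, z, h, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (mass per volume).

    q: emission rate (g/s); u: wind speed (m/s); x, y, z: receptor coordinates
    (downwind, crosswind, vertical, in m); h: effective release height (m);
    sigma_y, sigma_z: dispersion coefficients evaluated at distance x (m).
    """
    if x <= 0:          # receptor upwind of the source: no impact in this kernel
        return 0.0
    lateral = np.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (np.exp(-0.5 * ((z - h) / sigma_z) ** 2)
                + np.exp(-0.5 * ((z + h) / sigma_z) ** 2))  # ground reflection
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: 1 g/s source, 3 m/s wind, ground-level receptor 500 m downwind,
# with arbitrary placeholder dispersion coefficients.
print(gaussian_plume_concentration(1.0, 3.0, 500.0, 0.0, 0.0, 10.0,
                                   sigma_y=36.0, sigma_z=18.0))
```

An area source would then be handled by summing or integrating such a kernel over a discretization of the source polygon, which is where most of the computational cost arises.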
Turbulence spectra in the noise source regions of the flow around complex surfaces
NASA Technical Reports Server (NTRS)
Olsen, W. A.; Boldman, D. R.
1983-01-01
The complex turbulent flow around three complex surfaces was measured in detail with a hot wire. The measured data include extensive spatial surveys of the mean velocity and turbulence intensity and measurements of the turbulence spectra and scale length at many locations. The publication of the turbulence data is completed by reporting a summary of the turbulence spectra that were measured within the noise source locations of the flow. The results suggest some useful simplifications in modeling the very complex turbulent flow around complex surfaces for aeroacoustic predictive models. The turbulence spectra also show that noise data from scale models of moderate size can be accurately scaled up to full size.
NASA Astrophysics Data System (ADS)
Crowell, B.; Melgar, D.
2017-12-01
The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles, and exhibiting intricate surface deformation patterns. The complexity of this event has motivated the need for multidisciplinary geophysical studies to get at the underlying source physics to better inform earthquake hazards models in the future. However, events like Kaikoura beg the question of how well (or how poorly) such earthquakes can be modeled automatically in real-time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first perform simple point source models of the earthquake using peak ground displacement scaling and a coseismic offset based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Secondly, we perform a slip inversion based upon the CMT fault orientations and forward model near-field tsunami maximum expected wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of disagreement in ground motions being attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
VLA OH Zeeman Observations of the NGC 6334 Complex Source A
NASA Astrophysics Data System (ADS)
Mayo, E. A.; Sarma, A. P.; Troland, T. H.; Abel, N. P.
2004-12-01
We present a detailed analysis of the NGC 6334 complex source A, a compact continuum source in the SW region of the complex. Our intent is to determine the significance of the magnetic field in the support of the surrounding molecular cloud against gravitational collapse. We have performed OH 1665 and 1667 MHz observations taken with the Very Large Array in the BnA configuration and combined these data with the lower resolution CnB data of Sarma et al. (2000). These observations reveal magnetic fields with values of the order of 350 μG toward source A, with maximum fields reaching 500 μG. We have also theoretically modeled the molecular cloud surrounding source A using Cloudy, with the constraints to the model based on observation. This model provides significant information on the density of H2 through the cloud and also the relative density of H2 to OH, which is important to our analysis of the region. We will combine the knowledge gained through the Cloudy modeling with virial estimates to determine the significance of the magnetic field to the dynamics and evolution of source A.
A Corticothalamic Circuit Model for Sound Identification in Complex Scenes
Otazu, Gonzalo H.; Leibold, Christian
2011-01-01
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
Pallas, Benoît; Clément-Vidal, Anne; Rebolledo, Maria-Camila; Soulié, Jean-Christophe; Luquet, Delphine
2013-01-01
The ability to assimilate C and allocate non-structural carbohydrates (NSCs) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink than source limited, as organ expansion or appearance rate is affected earlier and more strongly than C assimilation. This favors plant survival and recovery but not always agronomic performance, as NSC are stored rather than used for growth due to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex, and plant modeling can help analyze their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasting monocotyledonous species (rice, oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as were the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. Modeling results highlighted that optimal drought sensitivity depends both on drought type and species and that modeling is a great opportunity to analyze such complex processes. Further modeling needs, and more generally the challenge of using models to support complex trait breeding, are discussed. PMID:24204372
NASA Astrophysics Data System (ADS)
Baker, Kirk R.; Hawkins, Andy; Kelly, James T.
2014-12-01
Near-source modeling is needed to assess primary and secondary pollutant impacts from single sources and single source complexes. Source-receptor relationships need to be resolved from tens of meters to tens of kilometers. Dispersion models are typically applied for near-source primary pollutant impacts but lack complex photochemistry. Photochemical models provide a realistic chemical environment but are typically applied using grid cell sizes that may be larger than the distance between sources and receptors. It is important to understand the impacts of grid resolution and sub-grid plume treatments on photochemical modeling of near-source primary pollution gradients. Here, the CAMx photochemical grid model is applied using multiple grid resolutions and sub-grid plume treatment for SO2 and compared with a receptor mesonet largely impacted by nearby sources approximately 3-17 km away in a complex terrain environment. Measurements are compared with model estimates of SO2 at 4- and 1-km resolution, both with and without sub-grid plume treatment and inclusion of finer two-way grid nests. Annual average estimated SO2 mixing ratios are highest nearest the sources and decrease as distance from the sources increases. In general, CAMx estimates of SO2 do not compare well with the near-source observations when paired in space and time. Given the proximity of these sources and receptors, accuracy in wind vector estimation is critical for applications that pair pollutant predictions and observations in time and space. In typical permit applications, predictions and observations are not paired in time and space and the entire distributions of each are directly compared. Using this approach, model estimates using 1-km grid resolution best match the distribution of observations and are most comparable to similar studies that used dispersion and Lagrangian modeling systems. Model-estimated SO2 increases as grid cell size decreases from 4 km to 250 m. However, it is notable that the 1-km model estimates using 1-km meteorological model input are higher than the 1-km model simulation that used interpolated 4-km meteorology. The inclusion of sub-grid plume treatment did not improve model skill in predicting SO2 in time and space and generally acts to keep emitted mass aloft.
1983-09-01
GENERAL ELECTROMAGNETIC MODEL FOR THE ANALYSIS OF COMPLEX SYSTEMS (GEMACS) Computer Code Documentation (Version 3). The BDM Corporation. Final technical report, February 1981 - July 1983. The excerpt describes the method for computing the electric field at a segment observation point due to a source patch j, expressed along the t1 and t2 directions on the source patch.
Source apportionment is challenging in urban environments with clustered source emissions that have similar chemical signatures. A field and inverse modeling study was conducted in Elizabeth, New Jersey to observe gaseous and particulate pollution near the Port of New York and New J...
NASA Astrophysics Data System (ADS)
Monnier, Angélique; Loevenbruck, Anne; Gailler, Audrey; Hébert, Hélène
2016-04-01
The 11 March 2011 Tohoku-Oki event, whether considered as an earthquake or a tsunami, is exceptionally well documented. A wide range of onshore and offshore data has been recorded by seismic, geodetic, ocean-bottom pressure and sea level sensors. Along with these numerous observations, advances in inversion techniques and computing facilities have led to many source studies. Inversion of rupture parameters such as slip distribution and rupture history makes it possible to estimate the complex coseismic seafloor deformation. From the numerous published seismic source studies, the most relevant coseismic source models are tested. Comparing the signals predicted using both static and kinematic ruptures to the offshore and coastal measurements helps determine which source model should be used to obtain the most consistent coastal tsunami simulations. This work is funded by the TANDEM project, reference ANR-11-RSNR-0023-01 of the French Programme Investissements d'Avenir (PIA 2014-2018).
Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.
2014-01-01
Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
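As a rough illustration of how an analytical TTD is calibrated to tracer data, the sketch below convolves an assumed tracer input history with an exponential (well-mixed) TTD and fits its mean age to one observed concentration. The input history, observed value and the exponential form are invented stand-ins; the paper's scale-dependent dispersivity model and multi-tracer calibration are more involved.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical atmospheric input history of a tracer (yearly values, assumed units)
years = np.arange(1950, 2011)
c_input = np.interp(years, [1950, 1990, 2010], [0.0, 5.0, 8.0])

def exponential_ttd(tau, mean_age):
    """Exponential (well-mixed aquifer) travel time distribution."""
    return np.exp(-tau / mean_age) / mean_age

def simulated_concentration(mean_age, sample_year=2010):
    """Convolve the input history with the TTD to get the sampled concentration."""
    tau = sample_year - years            # travel time associated with each input year
    keep = tau >= 0
    weights = exponential_ttd(tau[keep], mean_age)
    weights /= weights.sum()             # discrete normalization of the TTD
    return np.sum(weights * c_input[keep])

observed = 4.2                            # hypothetical measured concentration
fit = minimize_scalar(lambda m: (simulated_concentration(m) - observed) ** 2,
                      bounds=(1.0, 100.0), method="bounded")
print("estimated mean travel time (years):", round(fit.x, 1))
```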
Sparsity-promoting inversion for modeling of irregular volcanic deformation source
NASA Astrophysics Data System (ADS)
Zhai, G.; Shirzaei, M.
2016-12-01
Kīlauea volcano, Hawai'i Island, has a complex magmatic system. Nonetheless, kinematic models of the summit reservoir have so far been limited to first-order analytical solutions with pre-determined geometry. To investigate the complex geometry and kinematics of the summit reservoir, we apply a multitrack multitemporal wavelet-based InSAR (Interferometric Synthetic Aperture Radar) algorithm and a geometry-free time-dependent modeling scheme considering a superposition of point centers of dilatation (PCDs). Applying Principal Component Analysis (PCA) to the time-dependent source model, six spatially independent deformation zones (i.e., reservoirs) are identified, whose locations are consistent with previous studies. The time-dependence of the model also allows identifying periods of correlated or anti-correlated behavior between reservoirs. Hence, we suggest that the reservoirs are likely connected and form a complex magmatic reservoir [Zhai and Shirzaei, 2016]. To obtain a physically meaningful representation of the complex reservoir, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations (i.e., outliers in the background crust). The major steps include inverting surface deformation data using a hybrid L-1 and L-2 norm regularization approach to solve for a sparse volume change distribution and then implementing a BEM-based method to solve for the opening distribution on a triangular mesh representing the complex reservoir. Using this approach, we are able to constrain the internal excess pressure of a magma body with irregular geometry, satisfying a uniformly pressurized boundary condition on the surface of the magma chamber. The inversion method with sparsity constraint is tested using five synthetic source geometries, including a torus, a prolate ellipsoid, and a sphere, as well as horizontal and vertical L-shaped bodies. The results show that source dimension, depth and shape are well recovered. Afterward, we apply this modeling scheme to deformation observed at the Kīlauea summit to constrain the magmatic source geometry, and revise the kinematics of Kīlauea's shallow plumbing system. Such a model is valuable for understanding the physical processes in a magmatic reservoir and the method can readily be applied to other volcanic settings.
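A toy sketch of the hybrid L1/L2 (elastic-net-style) regularized inversion idea, solving for a sparse volume-change distribution from surface displacements; the Green's function matrix, noise level and regularization weights below are invented for illustration and are not the authors' actual formulation.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)

# Invented forward problem d = G m with a sparse true volume-change model
n_data, n_cells = 60, 40
G = rng.normal(size=(n_data, n_cells))            # stand-in Green's functions
m_true = np.zeros(n_cells)
m_true[[5, 6, 22]] = [1.0, 0.8, -0.5]             # a few active point sources
d = G @ m_true + 0.05 * rng.normal(size=n_data)   # noisy surface displacements

# Hybrid L1/L2 (elastic-net) regularization promotes a sparse solution
inv = ElasticNet(alpha=0.05, l1_ratio=0.7, fit_intercept=False, max_iter=10000)
inv.fit(G, d)
m_est = inv.coef_

print("cells with largest recovered |volume change|:",
      np.argsort(np.abs(m_est))[-3:])
```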
Various approaches and tools exist to estimate local and regional PM2.5 impacts from a single emissions source, ranging from simple screening techniques to Gaussian based dispersion models and complex grid-based Eulerian photochemical transport models. These approache...
Hartzell, S.; Iida, M.
1990-01-01
Strong motion records for the Whittier Narrows earthquake are inverted to obtain the history of slip. Both constant rupture velocity models and variable rupture velocity models are considered. The results show a complex rupture process within a relatively small source volume, with at least four separate concentrations of slip. Two sources are associated with the hypocenter, the larger having a slip of 55-90 cm, depending on the rupture model. These sources have a radius of approximately 2-3 km and are ringed by a region of reduced slip. The aftershocks fall within this low slip annulus. Other sources with slips from 40 to 70 cm each ring the central source region and the aftershock pattern. All the sources are predominantly thrust, although some minor right-lateral strike-slip motion is seen. The overall dimensions of the Whittier earthquake from the strong motion inversions are 10 km long (along the strike) and 6 km wide (down the dip). The preferred dip is 30° and the preferred average rupture velocity is 2.5 km/s. Moment estimates range from 7.4 to 10.0 × 10^24 dyn cm, depending on the rupture model. -Authors
Earthquake Source Inversion Blindtest: Initial Results and Further Developments
NASA Astrophysics Data System (ADS)
Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.
2007-12-01
Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation, and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well resolved, robust, and hence reliable source-rupture models are an integral part of efforts to better understand earthquake source physics and to improve seismic hazard assessment. Therefore it is timely to conduct a large-scale validation exercise for comparing the methods, parameterization and data-handling in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally we present new blind-test models, with increasing source complexity and ambient noise on the synthetics. The goal is to attract a large group of source modelers to join this source-inversion blind test in order to conduct a large-scale validation exercise to rigorously assess the performance and reliability of current inversion methods and to discuss future developments.
Application of hierarchical Bayesian unmixing models in river sediment source apportionment
NASA Astrophysics Data System (ADS)
Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Kuzyk, Zou Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pascal; Semmens, Brice
2016-04-01
Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling process, (3) deriving and using informative priors in a sediment fingerprinting context and (4) transparency of the process and replication of model results by other users.
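For readers new to the unmixing idea that MixSIAR formalizes, the sketch below infers source proportions for a single sediment mixture from tracer means using a simple Metropolis sampler. The tracer values and error model are invented, and the flat prior on positive weights is a crude simplification; real applications use full source distributions, hierarchical fixed and random effects, and proper priors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented tracer signatures: rows = sources, columns = tracer means
source_means = np.array([[12.0, 0.8],    # e.g. cultivated topsoil
                         [20.0, 0.3],    # channel bank
                         [15.0, 1.5]])   # road-derived sediment
mixture_obs = np.array([15.5, 0.9])      # measured tracer values in the mixture
sigma = np.array([1.0, 0.1])             # assumed combined measurement/model error

def log_posterior(w):
    """Log posterior of positive source weights (normalized to proportions)."""
    if np.any(w <= 0):
        return -np.inf
    p = w / w.sum()
    pred = p @ source_means               # predicted mixture tracer values
    return -0.5 * np.sum(((mixture_obs - pred) / sigma) ** 2)  # flat prior on w

# Metropolis random walk over the unnormalized weights
w_cur, samples = np.ones(3), []
for _ in range(20000):
    w_prop = w_cur + rng.normal(scale=0.1, size=3)
    if np.log(rng.uniform()) < log_posterior(w_prop) - log_posterior(w_cur):
        w_cur = w_prop
    samples.append(w_cur / w_cur.sum())

samples = np.array(samples[5000:])        # discard burn-in
print("posterior mean source proportions:", samples.mean(axis=0).round(2))
```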
Chaotic Motions in the Real Fuzzy Electronic Circuits
2012-12-30
In the research field of secure communications, the original source should be blended with other complex signals; chaotic signals are one of the good sources to be used. This work considers Takagi-Sugeno (T-S) fuzzy chaotic systems realized on an electronic circuit. The overall fuzzy model of the system is achieved by fuzzy blending of the linear system models describing a continuous-time nonlinear dynamic system.
Chang, Pao-Erh Paul; Yang, Jen-Chih Rena; Den, Walter; Wu, Chang-Fu
2014-09-01
Emissions of volatile organic compounds (VOCs) are among the most frequent causes of environmental nuisance complaints in urban areas, especially where industrial districts are nearby. Unfortunately, identifying the responsible emission sources of VOCs is an inherently difficult task. In this study, we proposed a dynamic approach to gradually confine the location of potential VOC emission sources in an industrial complex, by combining multi-path open-path Fourier transform infrared spectrometry (OP-FTIR) measurement and the statistical method of principal component analysis (PCA). Closed-cell FTIR was further used to verify the VOC emission sources by measuring emitted VOCs from selected exhaust stacks at factories in the confined areas. Multiple open-path monitoring lines were deployed during a 3-month monitoring campaign in a complex industrial district. The emission patterns were identified and the locations of emissions were confined by the wind data collected simultaneously. N,N-Dimethyl formamide (DMF), 2-butanone, toluene, and ethyl acetate with mean concentrations of 80.0 ± 1.8, 34.5 ± 0.8, 103.7 ± 2.8, and 26.6 ± 0.7 ppbv, respectively, were identified as the major VOC mixture at all times of the day around the receptor site. As a toxic air pollutant, DMF was found in air samples at concentrations exceeding the ambient standard despite the path-averaging effect of OP-FTIR on concentration levels. The PCA identified three major emission sources, including the PU coating, chemical packaging, and lithographic printing industries. Applying instrumental measurement and statistical modeling, this study has established a systematic approach for locating emission sources. Statistical modeling (PCA) plays an important role in reducing the dimensionality of a large measured dataset and identifying underlying emission sources. Instrumental measurement, however, helps verify the outcomes of the statistical modeling. The field study has demonstrated the feasibility of using multi-path OP-FTIR measurement. Incorporating the wind data into the statistical modeling (PCA) may successfully identify the major emission sources in a complex industrial district.
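As an illustrative sketch of the PCA step described here, the code below applies principal component analysis to a synthetic multi-species concentration time series and prints the component loadings that would be examined (together with wind data) to group co-varying species into candidate sources. The species mix, numbers and source structure are invented.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Invented hourly path-averaged concentrations (ppbv) for four VOCs driven by
# two hypothetical source activities (a coating line and a printing line).
n_hours = 200
coating_src = rng.gamma(2.0, 1.0, n_hours)
printing_src = rng.gamma(2.0, 1.0, n_hours)
X = np.column_stack([
    80 * coating_src,                             # DMF
    35 * coating_src + 5 * printing_src,          # 2-butanone
    60 * printing_src + 10 * coating_src,         # toluene
    25 * printing_src,                            # ethyl acetate
]) + rng.normal(0, 2, (n_hours, 4))

pca = PCA(n_components=2)
pca.fit((X - X.mean(axis=0)) / X.std(axis=0))     # standardize, then fit
print("explained variance ratios:", pca.explained_variance_ratio_.round(2))
print("loadings (rows = components, columns = species):")
print(pca.components_.round(2))
```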
Amanzi: An Open-Source Multi-process Simulator for Environmental Applications
NASA Astrophysics Data System (ADS)
Moulton, J. D.; Molins, S.; Johnson, J. N.; Coon, E.; Lipnikov, K.; Day, M.; Barker, E.
2014-12-01
The Advanced Simulation Capability for Environmental Management (ASCEM) program is developing an approach and open-source tool suite for standardized risk and performance assessments at legacy nuclear waste sites. These assessments begin with simplified models, and add geometric and geologic complexity as understanding is gained. The platform toolset (Akuna) generates these conceptual models, and Amanzi provides the computational engine to perform the simulations, returning the results for analysis and visualization. In this presentation we highlight key elements of the design, algorithms and implementations used in Amanzi. In particular, the hierarchical and modular design is aligned with the coupled processes being simulated, and naturally supports a wide range of model complexity. This design leverages a dynamic data manager and the synergy of two graphs (one from the high-level perspective of the models, the other from the dependencies of the variables in the model) to enable this flexible model configuration at run time. Moreover, to model sites with complex hydrostratigraphy, as well as engineered systems, we are developing a dual unstructured/structured capability. Recently, these capabilities have been collected in a framework named Arcos, and efforts have begun to improve interoperability between the unstructured and structured AMR approaches in Amanzi. To leverage a range of biogeochemistry capabilities from the community (e.g., CrunchFlow, PFLOTRAN, etc.), a biogeochemistry interface library was developed called Alquimia. To ensure that Amanzi is truly an open-source community code, we require a completely open-source tool chain for our development. We will comment on elements of this tool chain, including testing and documentation development tools such as docutils and Sphinx. Finally, we will show simulation results from our phased demonstrations, including the geochemically complex Savannah River F-Area seepage basins.
Testing earthquake source inversion methodologies
Page, M.; Mai, P.M.; Schorlemmer, D.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
Wei Wu; James Clark; James Vose
2010-01-01
Hierarchical Bayesian (HB) modeling allows for multiple sources of uncertainty by factoring complex relationships into conditional distributions that can be used to draw inference and make predictions. We applied an HB model to estimate the parameters and state variables of a parsimonious hydrological model – GR4J – by coherently assimilating the uncertainties from the...
Acoustic signatures of sound source-tract coupling.
Arneodo, Ezequiel M; Perl, Yonatan Sanz; Mindlin, Gabriel B
2011-04-01
Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated to the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced "frequency jumps," enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. ©2011 American Physical Society
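To make the source-tract coupling idea concrete, the sketch below integrates a generic self-sustained oscillator whose restoring force includes a term fed back from its own past state after a tract round-trip delay, i.e. a delay differential equation. The oscillator form, parameter values and coupling strength are generic assumptions for illustration, not the authors' vocal-organ model.

```python
import numpy as np

# Generic self-sustained source oscillator with delayed feedback from a tract:
#   x'' = mu * (1 - x^2) * x' - k * x + alpha * k * x(t - tau)
dt, t_end = 1e-5, 0.05                    # time step and duration (s)
tau = 1.0e-3                              # assumed tract round-trip delay (s)
mu = 5.0e3                                # nonlinear gain (1/s)
k = (2.0 * np.pi * 500.0) ** 2            # stiffness for a ~500 Hz source (1/s^2)
alpha = 0.3                               # tract feedback strength (dimensionless)

n_steps, n_delay = int(t_end / dt), int(tau / dt)
x, v = np.zeros(n_steps), np.zeros(n_steps)
x[0] = 1e-3                               # small initial displacement

for i in range(1, n_steps):
    x_delayed = x[i - n_delay] if i >= n_delay else 0.0   # zero history before t=0
    a = mu * (1.0 - x[i-1] ** 2) * v[i-1] - k * x[i-1] + alpha * k * x_delayed
    v[i] = v[i-1] + dt * a                # semi-implicit Euler step
    x[i] = x[i-1] + dt * v[i]

print("late-time oscillation amplitude (arbitrary units):",
      round(float(np.abs(x[-2000:]).max()), 2))
```

Varying alpha and tau and watching how the limit-cycle amplitude and frequency respond is the kind of numerical exploration such source-tract models invite before a weakly nonlinear analysis.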
Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.
Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy
2018-01-23
Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and complex. This increasing complexity induces a significant increase in computation time, which prevents the possibility of using this model in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This will be done through a new model of motor unit (MU)-specific electrical source based on the fibers composing the MU. In order to assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between several simulations of a single generated MU action potential (MUAP) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was computed for five different simulation sets wherein hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results display less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced by about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MUs) in less than an hour on a classical workstation. Graphical Abstract: Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90%, inducing only small deviations in the simulated HD-sEMG signals.
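As a small sketch of the validation metric mentioned here, the code below computes an NRMSE between a fiber-scale reference signal and an MU-scale approximation; the signal shapes and the normalization convention (by the reference peak-to-peak range) are illustrative assumptions.

```python
import numpy as np

def nrmse(reference, test):
    """RMSE normalized by the peak-to-peak range of the reference signal."""
    rmse = np.sqrt(np.mean((reference - test) ** 2))
    return rmse / (reference.max() - reference.min())

# Hypothetical MUAP simulated at the fiber scale vs. the MU scale
t = np.linspace(0, 0.02, 2000)                          # 20 ms window
fiber_scale = np.exp(-((t - 0.01) / 0.002) ** 2) * np.sin(2 * np.pi * 300 * t)
mu_scale = fiber_scale + 0.01 * np.random.default_rng(3).normal(size=t.size)

print(f"NRMSE = {100 * nrmse(fiber_scale, mu_scale):.2f} %")  # small value, well under 2 %
```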
Validation and calibration of structural models that combine information from multiple sources.
Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A
2017-02-01
Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
Methodology of decreasing software complexity using ontology
NASA Astrophysics Data System (ADS)
Dąbrowska-Kubik, Katarzyna
2015-09-01
In this paper a model of a web application's source code, based on the OSD ontology (Ontology for Software Development), is proposed. This model is applied to the implementation and maintenance phases of the software development process through the DevOntoCreator tool [5]. The aim of this solution is to decrease the software complexity of that source code, using many different maintenance techniques, such as the creation of documentation and the elimination of dead code, cloned code or previously known bugs [1][2]. With this approach, savings in the software maintenance costs of web applications will be possible.
Fisher, Rohan; Lassa, Jonatan
2017-04-18
Modelling travel time to services has become a common public health tool for planning service provision but the usefulness of these analyses is constrained by the availability of accurate input data and limitations inherent in the assumptions and parameterisation. This is particularly an issue in the developing world where access to basic data is limited and travel is often complex and multi-modal. Improving the accuracy and relevance in this context requires greater accessibility to, and flexibility in, travel time modelling tools to facilitate the incorporation of local knowledge and the rapid exploration of multiple travel scenarios. The aim of this work was to develop simple open source, adaptable, interactive travel time modelling tools to allow greater access to and participation in service access analysis. Described are three interconnected applications designed to reduce some of the barriers to the more wide-spread use of GIS analysis of service access and allow for complex spatial and temporal variations in service availability. These applications are an open source GIS tool-kit and two geo-simulation models. The development of these tools was guided by health service issues from a developing world context but they present a general approach to enabling greater access to and flexibility in health access modelling. The tools demonstrate a method that substantially simplifies the process for conducting travel time assessments and demonstrate a dynamic, interactive approach in an open source GIS format. In addition this paper provides examples from empirical experience where these tools have informed better policy and planning. Travel and health service access is complex and cannot be reduced to a few static modeled outputs. The approaches described in this paper use a unique set of tools to explore this complexity, promote discussion and build understanding with the goal of producing better planning outcomes. The accessible, flexible, interactive and responsive nature of the applications described has the potential to allow complex environmental social and political considerations to be incorporated and visualised. Through supporting evidence-based planning the innovative modelling practices described have the potential to help local health and emergency response planning in the developing world.
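As a minimal illustration of the travel-time computations such tools perform, the sketch below runs a least-cost (Dijkstra) accumulation over a raster of travel speeds to estimate how much of a landscape lies within two hours of a facility. The grid, speeds and facility location are invented, and real applications add multi-modal travel, temporal service availability and local knowledge.

```python
import numpy as np
import networkx as nx

# Hypothetical 50 x 50 landscape of travel speeds (km/h): roads are fast strips,
# the rest is slow walking terrain.
cell_km = 0.5
speed = np.full((50, 50), 5.0)            # walking
speed[25, :] = 60.0                        # an east-west road
speed[:, 10] = 40.0                        # a north-south road

G = nx.grid_2d_graph(50, 50)
for (a, b) in G.edges():
    # travel time (hours) across adjacent cells: half a cell at each cell's speed
    G.edges[a, b]["hours"] = 0.5 * cell_km * (1 / speed[a] + 1 / speed[b])

clinic = (25, 10)                          # facility placed at the road junction
hours = nx.single_source_dijkstra_path_length(G, clinic, weight="hours")

within_2h = sum(1 for t in hours.values() if t <= 2.0) / len(hours)
print(f"share of cells within 2 hours of the clinic: {within_2h:.0%}")
```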
Combined analysis of modeled and monitored SO2 concentrations at a complex smelting facility.
Rehbein, Peter J G; Kennedy, Michael G; Cotsman, David J; Campeau, Madonna A; Greenfield, Monika M; Annett, Melissa A; Lepage, Mike F
2014-03-01
Vale Canada Limited owns and operates a large nickel smelting facility located in Sudbury, Ontario. This is a complex facility with many sources of SO2 emissions, including a mix of source types ranging from passive building roof vents to North America's tallest stack. In addition, as this facility performs batch operations, there is significant variability in the emission rates depending on the operations that are occurring. Although SO2 emission rates for many of the sources have been measured by source testing, the reliability of these emission rates has not been tested from a dispersion modeling perspective. This facility is a significant source of SO2 in the local region, making it critical that when modeling the emissions from this facility for regulatory or other purposes, that the resulting concentrations are representative of what would actually be measured or otherwise observed. To assess the accuracy of the modeling, a detailed analysis of modeled and monitored data for SO2 at the facility was performed. A mobile SO2 monitor sampled at five locations downwind of different source groups for different wind directions resulting in a total of 168 hr of valid data that could be used for the modeled to monitored results comparison. The facility was modeled in AERMOD (American Meteorological Society/U.S. Environmental Protection Agency Regulatory Model) using site-specific meteorological data such that the modeled periods coincided with the same times as the monitored events. In addition, great effort was invested into estimating the actual SO2 emission rates that would likely be occurring during each of the monitoring events. SO2 concentrations were modeled for receptors around each monitoring location so that the modeled data could be directly compared with the monitored data. The modeled and monitored concentrations were compared and showed that there were no systematic biases in the modeled concentrations. This paper is a case study of a Combined Analysis of Modelled and Monitored Data (CAMM), which is an approach promulgated within air quality regulations in the Province of Ontario, Canada. Although combining dispersion models and monitoring data to estimate or refine estimates of source emission rates is not a new technique, this study shows how, with a high degree of rigor in the design of the monitoring and filtering of the data, it can be applied to a large industrial facility, with a variety of emission sources. The comparison of modeled and monitored SO2 concentrations in this case study also provides an illustration of the AERMOD model performance for a large industrial complex with many sources, at short time scales in comparison with monitored data. Overall, this analysis demonstrated that the AERMOD model performed well.
A compact model for electroosmotic flows in microfluidic devices
NASA Astrophysics Data System (ADS)
Qiao, R.; Aluru, N. R.
2002-09-01
A compact model to compute flow rate and pressure in microfluidic devices is presented. The microfluidic flow can be driven by either an applied electric field or a combined electric field and pressure gradient. A step change in the ζ-potential on a channel wall is treated by a pressure source in the compact model. The pressure source is obtained from the pressure Poisson equation and conservation of mass principle. In the proposed compact model, the complex fluidic network is simplified by an electrical circuit. The compact model can predict the flow rate, pressure distribution and other basic characteristics in microfluidic channels quickly with good accuracy when compared to detailed numerical simulation. Using the compact model, fluidic mixing and dispersion control are studied in a complex microfluidic network.
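A minimal sketch of the electrical-circuit analogy used by such compact models: each channel is represented by a hydraulic resistance plus, where relevant, an equivalent electroosmotic flow source, and unknown node pressures follow from conservation of mass, i.e. a small linear system. The geometry, resistance values and the electroosmotic term below are invented.

```python
import numpy as np

# Three channels meeting at one interior node (node 0); the far ends are
# reservoirs at known pressure. Hydraulic resistance R relates Q = dP / R.
R = np.array([2.0e12, 1.0e12, 4.0e12])        # Pa s / m^3, invented values
p_reservoir = np.array([1000.0, 0.0, 0.0])    # Pa at the far end of each channel
q_eof = np.array([0.0, 0.0, 5.0e-10])         # m^3/s electroosmotic flow sources

# Mass conservation at node 0: sum over channels of (p_res - p0)/R + q_eof = 0.
# Solve for the single unknown node pressure p0.
p0 = (np.sum(p_reservoir / R) + np.sum(q_eof)) / np.sum(1.0 / R)
Q = (p_reservoir - p0) / R + q_eof            # flow from each channel into node 0

print("node pressure (Pa):", round(p0, 1))
print("channel flows into node (m^3/s):", Q, "sum ~", Q.sum())
```

Larger networks lead to the same structure with one conservation equation per interior node, which is exactly the sparse linear system a circuit solver handles.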
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Chen, Xingyuan; Ye, Ming
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional, spatially distributed parameters.
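As a compact illustration of the variance-based sensitivity indices that the hierarchical method is built on, the sketch below estimates first-order Sobol indices for a toy three-parameter model by conditioning on binned input values; the model and inputs are invented stand-ins for quantities such as boundary fluxes or permeability.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy model with three uncertain inputs; the output stands in for a simulated
# quantity of interest (e.g. head at one location).
def model(x1, x2, x3):
    return 2.0 * x1 + 0.5 * x2 ** 2 + 0.1 * x1 * x3

n = 100_000
X = rng.uniform(-1, 1, size=(n, 3))
Y = model(X[:, 0], X[:, 1], X[:, 2])

def first_order_index(xi, y, bins=50):
    """Estimate S_i = Var(E[Y | X_i]) / Var(Y) by binning X_i."""
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.digitize(xi, edges[1:-1])                  # bin index 0..bins-1
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    return np.average((cond_means - y.mean()) ** 2, weights=counts) / y.var()

for i in range(3):
    print(f"S_{i + 1} ~ {first_order_index(X[:, i], Y):.2f}")
```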
Vižintin, Goran; Ravbar, Nataša; Janež, Jože; Koren, Eva; Janež, Naško; Zini, Luca; Treu, Francesco; Petrič, Metka
2018-04-01
Due to the intrinsic characteristics of aquifers, groundwater frequently passes between various types of aquifers without hindrance. The complex connection of underground water paths enables flow regardless of administrative boundaries. This can cause problems in water resources management. Numerical modelling is an important tool for the understanding, interpretation and management of aquifers. Useful and reliable methods of numerical modelling differ with regard to the type of aquifer, but their connections in a single hydrodynamic model are rare. The purpose of this study was to connect different models into an integrated system that enables determination of water travel time from the point of contamination to water sources. The worst-case scenario is considered. The system was applied in the basin of the Soča/Isonzo, a transboundary river in Slovenia and Italy, where there is a complex contact of karst and intergranular aquifers and surface flows over bedrock with low permeability. Time cell models were first elaborated separately for individual hydrogeological units. These were the result of numerical hydrological modelling (intergranular aquifer and surface flow) or complex GIS analysis taking into account the vulnerability map and tracer test results (karst aquifer). The obtained cellular models present the basis of a contamination early-warning system, since they allow an estimation of when contaminants can be expected to appear, and in which water sources. The system shows that contaminants spread rapidly through karst aquifers and via surface flows, and more slowly through intergranular aquifers. For this reason, karst water sources are more at risk from one-off contamination incidents, while water sources in intergranular aquifers are more at risk in cases of long-term contamination. The system that has been developed is the basis for a single system of protection, action and quality monitoring in areas of complex aquifer systems within or on the borders of administrative units. Copyright © 2017 Elsevier B.V. All rights reserved.
The ALMA-PILS survey: 3D modeling of the envelope, disks and dust filament of IRAS 16293-2422
NASA Astrophysics Data System (ADS)
Jacobsen, S. K.; Jørgensen, J. K.; van der Wiel, M. H. D.; Calcutt, H.; Bourke, T. L.; Brinch, C.; Coutens, A.; Drozdovskaya, M. N.; Kristensen, L. E.; Müller, H. S. P.; Wampfler, S. F.
2018-04-01
Context. The Class 0 protostellar binary IRAS 16293-2422 is an interesting target for (sub)millimeter observations due to both the rich chemistry toward the two main components of the binary and its complex morphology. Its proximity to Earth allows the study of its physical and chemical structure on solar system scales using high angular resolution observations. Such data reveal a complex morphology that cannot be accounted for in traditional, spherical 1D models of the envelope. Aims: The purpose of this paper is to study the environment of the two components of the binary through 3D radiative transfer modeling and to compare with data from the Atacama Large Millimeter/submillimeter Array. Such comparisons can be used to constrain the protoplanetary disk structures, the luminosities of the two components of the binary and the chemistry of simple species. Methods: We present 13CO, C17O and C18O J = 3-2 observations from the ALMA Protostellar Interferometric Line Survey (PILS), together with a qualitative study of the dust and gas density distribution of IRAS 16293-2422. A 3D dust and gas model including disks and a dust filament between the two protostars is constructed which qualitatively reproduces the dust continuum and gas line emission. Results: Radiative transfer modeling in our sampled parameter space suggests that, while the disk around source A could not be constrained, the disk around source B has to be vertically extended. This puffed-up structure can be obtained with both a protoplanetary disk model with an unexpectedly high scale-height and with the density solution from an infalling, rotating collapse. Combined constraints on our 3D model, from observed dust continuum and CO isotopologue emission between the sources, corroborate that source A should be at least six times more luminous than source B. We also demonstrate that the volume of high-temperature regions where complex organic molecules arise is sensitive to whether or not the total luminosity is in a single radiation source or distributed into two sources, affecting the interpretation of earlier chemical modeling efforts of the IRAS 16293-2422 hot corino which used a single-source approximation. Conclusions: Radiative transfer modeling of source A and B, with the density solution of an infalling, rotating collapse or a protoplanetary disk model, can match the constraints for the disk-like emission around source A and B from the observed dust continuum and CO isotopologue gas emission. If a protoplanetary disk model is used around source B, it has to have an unusually high scale-height in order to reach the dust continuum peak emission value, while fulfilling the other observational constraints. Our 3D model requires source A to be much more luminous than source B; LA ~ 18 L⊙ and LB ~ 3 L⊙.
A novel simulation methodology merging source-sink dynamics and landscape connectivity
Source-sink dynamics are an emergent property of complex species-landscape interactions. This study explores the patterns of source and sink behavior that become established across a large landscape, using a simulation model for the northern spotted owl (Strix occidentalis cauri...
Harvest: a web-based biomedical data discovery and reporting application development platform.
Italia, Michael J; Pennington, Jeffrey W; Ruth, Byron; Wrazien, Stacey; Loutrel, Jennifer G; Crenshaw, E Bryan; Miller, Jeffrey; White, Peter S
2013-01-01
Biomedical researchers share a common challenge of making complex data understandable and accessible. This need is increasingly acute as investigators seek opportunities for discovery amidst an exponential growth in the volume and complexity of laboratory and clinical data. To address this need, we developed Harvest, an open source framework that provides a set of modular components to aid the rapid development and deployment of custom data discovery software applications. Harvest incorporates visual representations of multidimensional data types in an intuitive, web-based interface that promotes a real-time, iterative approach to exploring complex clinical and experimental data. The Harvest architecture capitalizes on standards-based, open source technologies to address multiple functional needs critical to a research and development environment, including domain-specific data modeling, abstraction of complex data models, and a customizable web client.
Multiscale Metabolic Modeling: Dynamic Flux Balance Analysis on a Whole-Plant Scale
Grafahrend-Belau, Eva; Junker, Astrid; Eschenröder, André; Müller, Johannes; Schreiber, Falk; Junker, Björn H.
2013-01-01
Plant metabolism is characterized by a unique complexity on the cellular, tissue, and organ levels. On a whole-plant scale, changing source and sink relations accompanying plant development add another level of complexity to metabolism. With the aim of achieving a spatiotemporal resolution of source-sink interactions in crop plant metabolism, a multiscale metabolic modeling (MMM) approach was applied that integrates static organ-specific models with a whole-plant dynamic model. Allowing for a dynamic flux balance analysis on a whole-plant scale, the MMM approach was used to decipher the metabolic behavior of source and sink organs during the generative phase of the barley (Hordeum vulgare) plant. It reveals a sink-to-source shift of the barley stem caused by the senescence-related decrease in leaf source capacity, which is not sufficient to meet the nutrient requirements of sink organs such as the growing seed. The MMM platform represents a novel approach for the in silico analysis of metabolism on a whole-plant level, allowing for a systemic, spatiotemporally resolved understanding of metabolic processes involved in carbon partitioning, thus providing a novel tool for studying yield stability and crop improvement. PMID:23926077
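A minimal flux balance analysis sketch of the kind each organ-specific model solves, posed as a linear program; the toy network, bounds and objective below are invented, and in dynamic FBA this LP would be re-solved as the whole-plant model updates exchange constraints over time.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric network (a stand-in for an organ-specific model):
#   v1: source uptake -> A        (e.g. sucrose import, capped at 10)
#   v2: A -> B                    (conversion)
#   v3: B -> biomass              (objective)
#   v4: A -> maintenance          (fixed minimum demand)
S = np.array([[1, -1,  0, -1],    # metabolite A balance
              [0,  1, -1,  0]])   # metabolite B balance
bounds = [(0, 10), (0, None), (0, None), (2, None)]
c = np.array([0, 0, -1, 0])       # linprog minimizes, so maximize v3 via -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal biomass flux v3 =", res.x[2])   # expect 8: 10 uptake - 2 maintenance
```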
2010-01-01
Background: The longitudinal epidemiology of major depressive episodes (MDE) is poorly characterized in most countries. Some potentially relevant data sources may be underutilized because they are not conducive to estimating the most salient epidemiologic parameters. An available data source in Canada provides estimates that are potentially valuable, but that are difficult to apply in clinical or public health practice. For example, weeks depressed in the past year is assessed in this data source whereas episode duration would be of more interest. The goal of this project was to derive, using simulation, more readily interpretable parameter values from the available data. Findings: The data source was a Canadian longitudinal study called the National Population Health Survey (NPHS). A simulation model representing the course of depressive episodes was used to reshape estimates deriving from binary and ordinal logistic models (fit to the NPHS data) into equations more capable of informing clinical and public health decisions. Discrete event simulation was used for this purpose. Whereas the intention was to clarify a complex epidemiology, the models themselves needed to become excessively complex in order to provide an accurate description of the data. Conclusions: Simulation methods are useful in circumstances where a representation of a real-world system has practical value. In this particular scenario, the usefulness of simulation was limited both by problems with the data source and by inherent complexity of the underlying epidemiology. PMID:20796271
Variations in recollection: the effects of complexity on source recognition.
Parks, Colleen M; Murray, Linda J; Elfman, Kane; Yonelinas, Andrew P
2011-07-01
Whether recollection is a threshold or signal detection process is highly controversial, and the controversy has centered in part on the shape of receiver operating characteristics (ROCs) and z-transformed ROCs (zROCs). U-shaped zROCs observed in tests thought to rely heavily on recollection, such as source memory tests, have provided evidence in favor of the threshold assumption, but zROCs are not always as U-shaped as threshold theory predicts. Source zROCs have been shown to become more linear when the contribution of familiarity to source discriminations is increased, and this may account for the existing results. However, another way in which source zROCs may become more linear is if the recollection threshold begins to break down and recollection becomes more graded and Gaussian. We tested the "graded recollection" account in the current study. We found that increasing stimulus complexity (i.e., changing from single words to sentences) or increasing source complexity (i.e., changing the sources from audio to videos of speakers) resulted in flatter source zROCs. In addition, conditions expected to reduce recollection (i.e., divided attention and amnesia) had comparable effects on source memory in simple and complex conditions, suggesting that differences between simple and complex conditions were due to differences in the nature of recollection, rather than differences in the utility of familiarity. The results suggest that under conditions of high complexity, recollection can appear more graded, and it can produce curved ROCs. The results have implications for measurement models and for current theories of recognition memory.
Near-optimal experimental design for model selection in systems biology.
Busetto, Alberto Giovanni; Hauser, Alain; Krummenacher, Gabriel; Sunnåker, Mikael; Dimopoulos, Sotiris; Ong, Cheng Soon; Stelling, Jörg; Buhmann, Joachim M
2013-10-15
Biological systems are understood through iterations of modeling and experimentation. Not all experiments, however, are equally valuable for predictive modeling. This study introduces an efficient method for experimental design aimed at selecting dynamical models from data. Motivated by biological applications, the method enables the design of crucial experiments: it determines a highly informative selection of measurement readouts and time points. We demonstrate formal guarantees of design efficiency on the basis of previous results. By reducing our task to the setting of graphical models, we prove that the method finds a near-optimal design selection with a polynomial number of evaluations. Moreover, the method exhibits the best polynomial-complexity constant approximation factor, unless P = NP. We measure the performance of the method in comparison with established alternatives, such as ensemble non-centrality, on example models of different complexity. Efficient design accelerates the loop between modeling and experimentation: it enables the inference of complex mechanisms, such as those controlling central metabolic operation. Toolbox 'NearOED' available with source code under GPL on the Machine Learning Open Source Software Web site (mloss.org).
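Near-optimality guarantees of this kind commonly come from greedy selection of measurements under a submodular information score. The sketch below is a generic D-optimality-style illustration with an invented sensitivity matrix; it is not the Bayesian model-selection criterion or the NearOED implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensitivity matrix: rows = candidate measurements (readout, time point),
# columns = model parameters; entries are local sensitivities dy/dtheta (invented).
candidates = rng.normal(size=(30, 5))

def information_score(selected, reg=1e-6):
    """D-optimality-style score: log det of the accumulated information matrix."""
    J = candidates[selected]
    M = J.T @ J + reg * np.eye(candidates.shape[1])
    return np.linalg.slogdet(M)[1]

def greedy_design(budget):
    """Greedily add the measurement with the largest marginal gain (near-optimal for submodular scores)."""
    selected = []
    for _ in range(budget):
        remaining = [i for i in range(len(candidates)) if i not in selected]
        best = max(remaining, key=lambda i: information_score(selected + [i]))
        selected.append(best)
    return selected

print("selected measurements:", greedy_design(budget=5))
```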
NASA Astrophysics Data System (ADS)
Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.
2015-12-01
We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version, which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.
The spectra of ten galactic X-ray sources in the southern sky
NASA Technical Reports Server (NTRS)
Cruddace, R.; Bowyer, S.; Lampton, M.; Mack, J. E., Jr.; Margon, B.
1971-01-01
Data on ten galactic X-ray sources were obtained during a rocket flight from Brazil in June 1969. Detailed spectra of these sources have been compared with bremsstrahlung, black body, and power law models, each including interstellar absorption. Six of the sources were fitted well by one or more of these models. In only one case were the data sufficient to distinguish the best model. Three of the sources were not fitted by any of the models, which suggests that more complex emission mechanisms are applicable. A comparison of our results with those of previous investigations provides evidence that five of the sources vary in intensity by a factor of 2 or more, and that three have variable spectra. New or substantially improved positions have been derived for four of the sources observed.
NASA Astrophysics Data System (ADS)
Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho
2015-01-01
Independent Component Analysis (ICA), one of the blind source separation methods, can be applied for extracting unknown source signals only from received signals. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of separated signals, particularly when using a conventional ICA technique in vibratory source signal identification of complex structures. In this study, a simple iterative algorithm of the conventional ICA has been proposed to mitigate these problems. The proposed method to extract more stable source signals having valid order includes an iterative and reordering process of extracted mixing matrix to reconstruct finally converged source signals, referring to the magnitudes of correlation coefficients between the intermediately separated signals and the signals measured on or nearby sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate applicability of the proposed method to real problem of complex structure, an experiment has been carried out for a scaled submarine mockup. The results show that the proposed method could resolve the inherent problems of a conventional ICA technique.
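A minimal illustration of the reordering idea (not the authors' exact iterative algorithm) is sketched below: FastICA is run repeatedly, and the run whose components correlate best, in a consistent one-to-one order, with reference signals measured on or near the sources is kept. All signals and the mixing matrix are synthetic.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)

# Two synthetic "vibratory" sources mixed randomly into three sensor channels.
sources = np.c_[np.sin(2 * np.pi * 3 * t), np.sign(np.sin(2 * np.pi * 7 * t))]
mixed = sources @ rng.normal(size=(2, 3))

# Reference signals measured on/near the sources (here: noisy copies of the true sources).
references = sources + 0.3 * rng.normal(size=sources.shape)

def separate_and_reorder(X, refs, n_runs=10):
    """Run FastICA several times; keep the run whose components best match the references one-to-one."""
    best, best_score = None, -np.inf
    for seed in range(n_runs):
        est = FastICA(n_components=refs.shape[1], random_state=seed).fit_transform(X)
        # Match each reference to the estimated component with the largest |correlation|.
        corr = np.abs(np.corrcoef(refs.T, est.T)[:refs.shape[1], refs.shape[1]:])
        order = corr.argmax(axis=1)
        score = corr.max(axis=1).sum()
        if len(set(order)) == refs.shape[1] and score > best_score:
            best, best_score = est[:, order], score
    return best

recovered = separate_and_reorder(mixed, references)
print(recovered.shape)  # (2000, 2): components ordered to match the reference signals
```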
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sig Drellack, Lance Prothro
2007-12-01
The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in building an accurate and reliable 3D geological model. Typical existing methods demand abundant fault data to construct complex fault models; however, the available fault data are generally sparse and undersampled. In this paper, we propose a fault-modeling workflow that can integrate multi-source data to construct fault models. For faults that cannot be modeled from these data, especially small-scale faults or faults approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault cutting algorithm supplements the available fault points at locations where faults cut each other. Adding fault points in poorly sampled areas not only makes fault model construction more efficient but also reduces manual intervention. By using fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that it can be applied to broad and complex geological areas.
Prewhitening of Colored Noise Fields for Detection of Threshold Sources
1993-11-07
Once the noise covariance matrix is determined, prewhitening techniques allow detection of threshold sources; the multiple signal classification (MUSIC) method is examined in a colored background noise. Subject terms: AR model, colored noise field, mixed spectra model, MUSIC, noise field, prewhitening, SNR, standardized test.
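Only fragments of this report are recoverable here, so the following is a generic sketch of the underlying idea rather than the report's method: the colored noise covariance is whitened with its Cholesky factor before a MUSIC pseudospectrum is formed, which lets a weak (threshold) source stand out. All array parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_snapshots = 8, 500

# Colored (spatially correlated) noise with a known/estimated covariance R_n.
A = rng.normal(size=(n_sensors, n_sensors))
R_n = A @ A.T / n_sensors + np.eye(n_sensors)

def steering(theta):
    """Uniform linear array steering vector (half-wavelength spacing)."""
    return np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(theta))

# One weak source at 20 degrees buried in colored noise.
theta_true = np.deg2rad(20.0)
s = rng.normal(size=n_snapshots) + 1j * rng.normal(size=n_snapshots)
L = np.linalg.cholesky(R_n)
noise = L @ (rng.normal(size=(n_sensors, n_snapshots)) + 1j * rng.normal(size=(n_sensors, n_snapshots)))
X = np.outer(steering(theta_true), s) + noise

# Prewhiten with the inverse Cholesky factor, then apply MUSIC in the whitened space.
W = np.linalg.inv(L)
Xw = W @ X
R = Xw @ Xw.conj().T / n_snapshots
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, :-1]                      # noise subspace (one source assumed)

angles = np.deg2rad(np.linspace(-90, 90, 361))
pseudo = [1.0 / np.linalg.norm(En.conj().T @ (W @ steering(th)))**2 for th in angles]
print("MUSIC peak at", np.rad2deg(angles[int(np.argmax(pseudo))]), "degrees")
```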
Chen, Sheng-Po; Wang, Chieh-Heng; Lin, Wen-Dian; Tong, Yu-Huei; Chen, Yu-Chun; Chiu, Ching-Jui; Chiang, Hung-Chi; Fan, Chen-Lun; Wang, Jia-Lin; Chang, Julius S
2018-05-01
The present study combines high-resolution measurements at various distances from a world-class gigantic petrochemical complex with model simulations to test a method to assess industrial emissions and their effect on local air quality. Due to the complexity of the highly seasonal wind conditions, the dominant wind flow patterns in the coastal region of interest were classified into three types, namely northeast monsoonal (NEM) flows, southwest monsoonal (SWM) flows and local circulation (LC), based on six years of monitoring data. Sulfur dioxide (SO2) was chosen as an indicative pollutant for prominent industrial emissions. A high-density monitoring network of 12 air-quality stations distributed within a 20-km radius surrounding the petrochemical complex provided hourly measurements of SO2 and wind parameters. The SO2 emissions from major industrial sources registered by the monitoring network were then used to validate model simulations and to illustrate the transport of the SO2 plumes under the three typical wind patterns. It was found that the coupling of observations and modeling was able to successfully explain the transport of the industrial plumes. Although the petrochemical complex was seemingly the only major source to affect local air quality, multiple prominent sources from afar also played a significant role in local air quality. As a result, we found that a more complete and balanced assessment of the local air quality can be achieved only after taking into account the wind characteristics and emission factors of a much larger spatial scale than the initial (20 km by 20 km) study domain. Copyright © 2018 Elsevier Ltd. All rights reserved.
Enabling complex queries to drug information sources through functional composition.
Peters, Lee; Mortensen, Jonathan; Nguyen, Thang; Bodenreider, Olivier
2013-01-01
Our objective was to enable an end-user to create complex queries to drug information sources through functional composition, by creating sequences of functions from application program interfaces (APIs) to drug terminologies. The development of a functional composition model seeks to link functions from two distinct APIs. An ontology was developed using Protégé to model the functions of the RxNorm and NDF-RT APIs by describing the semantics of their input and output. A set of rules was developed to define the interoperable conditions for functional composition. The operational definition of interoperability between function pairs is established by executing the rules on the ontology. We illustrate that the functional composition model supports common use cases, including checking interactions for RxNorm drugs and deploying allergy lists defined in reference to drug properties in NDF-RT. This model supports the RxMix application (http://mor.nlm.nih.gov/RxMix/), an application we developed for enabling complex queries to the RxNorm and NDF-RT APIs.
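A minimal sketch of the functional composition idea is given below. The semantic types, function names, and composition rule are hypothetical stand-ins and do not correspond to the actual RxNorm or NDF-RT API signatures.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ApiFunction:
    name: str
    input_type: str     # semantic type consumed (e.g., a drug name)
    output_type: str    # semantic type produced (e.g., a concept identifier)
    call: Callable

def composable(f: ApiFunction, g: ApiFunction) -> bool:
    """Interoperability rule: g can follow f when f's output semantic type matches g's input type."""
    return f.output_type == g.input_type

def compose(f: ApiFunction, g: ApiFunction) -> Callable:
    if not composable(f, g):
        raise TypeError(f"{g.name} cannot consume the output of {f.name}")
    return lambda x: g.call(f.call(x))

# Toy stand-ins for terminology lookups (names and behavior are invented).
find_concept = ApiFunction("findConcept", "DrugName", "ConceptId",
                           lambda name: f"id:{name.lower()}")
get_interactions = ApiFunction("getInteractions", "ConceptId", "InteractionList",
                               lambda cid: [f"{cid} interacts with id:warfarin"])

query = compose(find_concept, get_interactions)
print(query("Aspirin"))
```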
NASA Astrophysics Data System (ADS)
Chang, Ni-Bin; Weng, Yu-Chi
2013-03-01
Short-term predictions of potential impacts from accidental release of various radionuclides at nuclear power plants are acutely needed, especially after the Fukushima accident in Japan. An integrated modeling system that provides expert services to assess the consequences of accidental or intentional releases of radioactive materials to the atmosphere has received wide attention. These scenarios can be initiated either by accident due to human, software, or mechanical failures, or from intentional acts such as sabotage and radiological dispersal devices. Stringent action might be required just minutes after the occurrence of an accidental or intentional release. In fulfilling the basic functions of emergency preparedness and response systems, previous studies seldom consider the suitability of air pollutant dispersion models or the connectivity between source term, dispersion, and exposure assessment models in a holistic context for decision support. As a result, the Gaussian plume and puff models, which are only suitable for representing neutral air pollutants over flat terrain under limited meteorological conditions, are frequently used to predict the impact of accidental releases from industrial sources. In situations with complex terrain or special meteorological conditions, the proposed emergency response actions might be questionable and even intractable for decision makers responsible for maintaining public health and environmental quality. This study is a preliminary effort to integrate the source term, dispersion, and exposure assessment models into a Spatial Decision Support System (SDSS) to tackle the complex issues of short-term emergency response planning and risk assessment at nuclear power plants. Through a series of model screening procedures, we found that the diagnostic (objective) wind field model, with the aid of sufficient on-site meteorological monitoring data, was the most applicable model to promptly capture the trend of local wind field patterns. However, most of the hazardous materials released into the environment from nuclear power plants are not neutral pollutants, so the particle and multi-segment puff models can be regarded as the most suitable models to couple with the output of the diagnostic wind field model in a modern emergency preparedness and response system. The proposed SDSS illustrates the state-of-the-art system design based on the situation of complex terrain in South Taiwan. This system design of the SDSS, with 3-dimensional animation capability using a tailored source term model in connection with ArcView® Geographical Information System map layers and remote sensing images, is useful for meeting the design goal of nuclear power plants located in complex terrain.
Mazurek, Monica A
2002-12-01
This article describes a chemical characterization approach for complex organic compound mixtures associated with fine atmospheric particles of diameters less than 2.5 μm (PM2.5). It relates molecular- and bulk-level chemical characteristics of the complex mixture to atmospheric chemistry and to emission sources. Overall, the analytical approach describes the organic complex mixtures in terms of a chemical mass balance (CMB). Here, the complex mixture is related to a bulk elemental measurement (total carbon) and is broken down systematically into functional groups and molecular compositions. The CMB and molecular-level information can be used to understand the sources of the atmospheric fine particles through conversion of chromatographic data and by incorporation into receptor-based CMB models. Once described and quantified within a mass balance framework, the chemical profiles for aerosol organic matter can be applied to existing air quality issues. Examples include understanding health effects of PM2.5 and defining and controlling key sources of anthropogenic fine particles. Overall, the organic aerosol compositional data provide chemical information needed for effective PM2.5 management.
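The receptor-based CMB step can be illustrated with a small non-negative least-squares fit. The source profiles and ambient concentrations below are invented for illustration, and the sketch omits the effective-variance weighting used in operational CMB models.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical source profiles F (species x sources): mass fraction of each organic
# marker species in emissions from each source type (values invented for illustration).
F = np.array([
    [0.30, 0.02, 0.01],   # species 1 (e.g., a wood-smoke marker)
    [0.05, 0.25, 0.02],   # species 2 (e.g., a vehicle-exhaust marker)
    [0.02, 0.03, 0.20],   # species 3 (e.g., a cooking marker)
    [0.10, 0.10, 0.05],   # species 4
])

# Ambient concentrations of the same species measured at a receptor site (ug/m3, invented).
c = np.array([0.65, 0.60, 0.47, 0.43])

# Chemical mass balance: c ~= F @ s, solved for non-negative source contributions s.
s, residual = nnls(F, c)
print("estimated source contributions (ug/m3):", np.round(s, 2), "residual:", round(residual, 3))
```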
NASA Astrophysics Data System (ADS)
Greene, Casey S.; Hill, Douglas P.; Moore, Jason H.
The relationship between interindividual variation in our genomes and variation in our susceptibility to common diseases is expected to be complex with multiple interacting genetic factors. A central goal of human genetics is to identify which DNA sequence variations predict disease risk in human populations. Our success in this endeavour will depend critically on the development and implementation of computational intelligence methods that are able to embrace, rather than ignore, the complexity of the genotype to phenotype relationship. To this end, we have developed a computational evolution system (CES) to discover genetic models of disease susceptibility involving complex relationships between DNA sequence variations. The CES approach is hierarchically organized and is capable of evolving operators of any arbitrary complexity. The ability to evolve operators distinguishes this approach from artificial evolution approaches using fixed operators such as mutation and recombination. Our previous studies have shown that a CES that can utilize expert knowledge about the problem in evolved operators significantly outperforms a CES unable to use this knowledge. This environmental sensing of external sources of biological or statistical knowledge is important when the search space is both rugged and large as in the genetic analysis of complex diseases. We show here that the CES is also capable of evolving operators which exploit one of several sources of expert knowledge to solve the problem. This is important for both the discovery of highly fit genetic models and because the particular source of expert knowledge used by evolved operators may provide additional information about the problem itself. This study brings us a step closer to a CES that can solve complex problems in human genetics in addition to discovering genetic models of disease.
NASA Astrophysics Data System (ADS)
Jameel, M. Y.; Brewer, S.; Fiorella, R.; Tipple, B. J.; Bowen, G. J.; Terry, S.
2017-12-01
Public water supply systems (PWSS) are complex distribution systems and critical infrastructure, making them vulnerable to physical disruption and contamination. Exploring the susceptibility of PWSS to such perturbations requires detailed knowledge of the supply system structure and operation. Although the physical structure of supply systems (i.e., pipeline connection) is usually well documented for developed cities, the actual flow patterns of water in these systems are typically unknown or estimated based on hydrodynamic models with limited observational validation. Here, we present a novel method for mapping the flow structure of water in a large, complex PWSS, building upon recent work highlighting the potential of stable isotopes of water (SIW) to document water management practices within complex PWSS. We sampled a major water distribution system of the Salt Lake Valley, Utah, measuring SIW of water sources, treatment facilities, and numerous sites within the supply system. We then developed a hierarchical Bayesian (HB) isotope mixing model to quantify the proportion of water supplied by different sources at sites within the supply system. Known production volumes and spatial distance effects were used to define the prior probabilities for each source; however, we did not include other physical information about the supply system. Our results were in general agreement with those obtained by hydrodynamic models and provide quantitative estimates of contributions of different water sources to a given site along with robust estimates of uncertainty. Secondary properties of the supply system, such as regions of "static" and "dynamic" source (e.g., regions supplied dominantly by one source vs. those experiencing active mixing between multiple sources), can be inferred from the results. The isotope-based HB mixing model offers a new investigative technique for analyzing PWSS and documenting aspects of supply system structure and operation that are otherwise challenging to observe. The method could allow water managers to document spatiotemporal variation in PWSS flow patterns, critical for interrogating the distribution system to inform operation decision making or disaster response, optimize water supply, and monitor and enforce water rights.
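A heavily simplified, non-hierarchical stand-in for such an isotope mixing model is sketched below: mixing fractions get a Dirichlet prior and are importance-weighted by a Gaussian likelihood of the observed tap-water isotope values. The end-member values, noise level, and source labels are invented for illustration, not Salt Lake Valley data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical end-member isotope values (delta-18O, delta-2H) for three water sources.
sources = np.array([[-16.0, -122.0],   # mountain creek (invented)
                    [-13.5, -104.0],   # groundwater wells (invented)
                    [-11.0,  -95.0]])  # treated canal water (invented)
tap_sample = np.array([-14.2, -110.5])  # isotope values measured at one supply-system site
sigma = 0.8                              # assumed measurement/model noise (per mil)

# Importance sampling with a Dirichlet prior on mixing fractions
# (a simple stand-in for the hierarchical Bayesian model in the study).
f = rng.dirichlet(np.ones(len(sources)), size=200_000)   # prior draws of fractions
predicted = f @ sources                                    # mixture isotope values
log_w = -0.5 * np.sum(((tap_sample - predicted) / sigma) ** 2, axis=1)
w = np.exp(log_w - log_w.max())
w /= w.sum()

posterior_mean = w @ f
print("posterior mean source fractions:", np.round(posterior_mean, 2))
```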
Martian methane plume models for defining Mars rover methane source search strategies
NASA Astrophysics Data System (ADS)
Nicol, Christopher; Ellery, Alex; Lynch, Brian; Cloutis, Ed
2018-07-01
The detection of atmospheric methane on Mars implies an active methane source. This introduces the possibility of a biotic source with the implied need to determine whether the methane is indeed biotic in nature or geologically generated. There is a clear need for robotic algorithms which are capable of manoeuvring a rover through a methane plume on Mars to locate its source. We explore aspects of Mars methane plume modelling to reveal complex dynamics characterized by advection and diffusion. A statistical analysis of the plume model has been performed and compared to analyses of terrestrial plume models. Finally, we consider a robotic search strategy to find a methane plume source. We find that gradient-based techniques are ineffective, but that more sophisticated model-based search strategies are unlikely to be available in near-term rover missions.
Cypko, Mario A; Stoehr, Matthaeus; Kozniewski, Marcin; Druzdzel, Marek J; Dietz, Andreas; Berliner, Leonard; Lemke, Heinz U
2017-11-01
Oncological treatment is becoming increasingly complex, and therefore, decision making in multidisciplinary teams is becoming the key activity in the clinical pathways. The increased complexity is related to the number and variability of possible treatment decisions that may be relevant to a patient. In this paper, we describe validation of a multidisciplinary cancer treatment decision model in the clinical domain of head and neck oncology. Probabilistic graphical models and corresponding inference algorithms, in the form of Bayesian networks (BNs), can support complex decision-making processes by providing mathematically reproducible and transparent advice. The quality of BN-based advice depends on the quality of the model. Therefore, it is vital to validate the model before it is applied in practice. For an example BN subnetwork of laryngeal cancer with 303 variables, we evaluated 66 patient records. To validate the model on this dataset, a validation workflow was applied in combination with quantitative and qualitative analyses. In the subsequent analyses, we observed four sources of imprecise predictions: incorrect data, incomplete patient data, outvoting relevant observations, and incorrect model. Finally, the four problems were solved by modifying the data and the model. The presented validation effort is related to the model complexity. For simpler models, the validation workflow is the same, although it may require fewer validation methods. The validation success is related to the model's well-founded knowledge base. The remaining laryngeal cancer model may disclose additional sources of imprecise predictions.
NASA Astrophysics Data System (ADS)
Salha, A. A.; Stevens, D. K.
2013-12-01
This study presents numerical application and statistical development of Stream Water Quality Modeling (SWQM) as a tool to investigate, manage, and research the transport and fate of water pollutants in the Lower Bear River, Box Elder County, Utah. The segment under study is the Bear River from Cutler Dam to its confluence with the Malad River (Subbasin HUC 16010204). Water quality problems arise primarily from high phosphorus and total suspended sediment concentrations caused by five permitted point source discharges and a complex network of canals and ducts of varying sizes and carrying capacities that transport water (for farming and agricultural uses) from the Bear River and then back to it. The Utah Department of Environmental Quality (DEQ) has designated the entire reach of the Bear River between Cutler Reservoir and Great Salt Lake as impaired. Stream water quality modeling requires specification of an appropriate model structure and process formulation according to the nature of the study area and the purpose of the investigation. The current model is i) one dimensional (1D), ii) numerical, iii) unsteady, iv) mechanistic, v) dynamic, and vi) spatial (distributed). The basic principle of the study is the use of mass balance equations and numerical methods (Fickian advection-dispersion approach) for solving the related partial differential equations. Model error decreases and sensitivity increases as a model becomes more complex; as such, i) uncertainty (in parameters, data input, and model structure) and ii) model complexity will be under investigation. Watershed data (water quality parameters together with stream flow, seasonal variations, surrounding landscape, stream temperature, and point/nonpoint sources) were obtained mainly using HydroDesktop, a free and open source GIS-enabled desktop application to find, download, visualize, and analyze time series of water and climate data registered with the CUAHSI Hydrologic Information System. Processing, assessment of validity, and distribution of time-series data were explored using the GNU R language (statistical computing and graphics environment). Equations for physical, chemical, and biological processes were written in FORTRAN (High Performance Fortran) to compute and solve their hyperbolic and parabolic complexities. Post-analysis of results was conducted using the GNU R language. High performance computing (HPC) will be introduced to expedite solving complex computational processes using parallel programming. It is expected that the model will assess nonpoint source and specific point source data to understand pollutant causes, transfer, dispersion, and concentration at different locations of the Bear River. Investigation of the impact of reducing or removing non-point nutrient loading on Bear River water quality management could also be addressed. Keywords: computer modeling; numerical solutions; sensitivity analysis; uncertainty analysis; ecosystem processes; high performance computing; water quality.
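As a minimal illustration of the Fickian advection-dispersion approach mentioned above (in Python rather than Fortran, and with purely illustrative parameters not calibrated to the Bear River), an explicit finite-difference step for the 1D equation might look like this:

```python
import numpy as np

# Explicit finite-difference solution of the 1D advection-dispersion equation
#   dC/dt = -u dC/dx + D d2C/dx2
# Parameters are illustrative only, not calibrated to the Bear River reach.
L, nx = 10_000.0, 200          # reach length (m), grid cells
dx = L / nx
u, D = 0.3, 5.0                # velocity (m/s), dispersion coefficient (m2/s)
dt = 0.4 * min(dx / u, dx**2 / (2 * D))   # stable time step (advection and diffusion limits)

C = np.zeros(nx)
C[5] = 100.0                   # instantaneous point source near the upstream boundary

def step(C):
    Cn = C.copy()
    # Upwind advection plus central-difference dispersion on interior nodes.
    Cn[1:-1] = (C[1:-1]
                - u * dt / dx * (C[1:-1] - C[:-2])
                + D * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2]))
    return Cn

for _ in range(300):
    C = step(C)
print("peak concentration", round(C.max(), 3), "at x =", round(C.argmax() * dx, 1), "m")
```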
NASA Astrophysics Data System (ADS)
Yang, Lurong; Wang, Xinyu; Mendoza-Sanchez, Itza; Abriola, Linda M.
2018-04-01
Sequestered mass in low permeability zones has been increasingly recognized as an important source of organic chemical contamination that acts to sustain downgradient plume concentrations above regulated levels. However, few modeling studies have investigated the influence of this sequestered mass and associated (coupled) mass transfer processes on plume persistence in complex dense nonaqueous phase liquid (DNAPL) source zones. This paper employs a multiphase flow and transport simulator (a modified version of the modular transport simulator MT3DMS) to explore the two- and three-dimensional evolution of source zone mass distribution and near-source plume persistence for two ensembles of highly heterogeneous DNAPL source zone realizations. Simulations reveal the strong influence of subsurface heterogeneity on the complexity of DNAPL and sequestered (immobile/sorbed) mass distribution. Small zones of entrapped DNAPL are shown to serve as a persistent source of low concentration plumes, difficult to distinguish from other (sorbed and immobile dissolved) sequestered mass sources. Results suggest that the presence of DNAPL tends to control plume longevity in the near-source area; for the examined scenarios, a substantial fraction (43.3-99.2%) of plume life was sustained by DNAPL dissolution processes. The presence of sorptive media and the extent of sorption non-ideality are shown to greatly affect predictions of near-source plume persistence following DNAPL depletion, with plume persistence varying one to two orders of magnitude with the selected sorption model. Results demonstrate the importance of sorption-controlled back diffusion from low permeability zones and reveal the importance of selecting the appropriate sorption model for accurate prediction of plume longevity. Large discrepancies for both DNAPL depletion time and plume longevity were observed between 2-D and 3-D model simulations. Differences between 2- and 3-D predictions increased in the presence of sorption, especially for the case of non-ideal sorption, demonstrating the limitations of employing 2-D predictions for field-scale modeling.
Source-sink dynamics are an emergent property of complex species- landscape interactions. A better understanding of how human activities affect source-sink dynamics has the potential to inform and improve the management of species of conservation concern. Here we use a study of t...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinches, A.; Pallent, L.J.
1986-10-01
Rate and yield information relating to biomass and product formation and to nitrogen, glucose and oxygen consumption are described for xanthan gum batch fermentations in which both chemically defined (glutamate nitrogen) and complex (peptone nitrogen) media are employed. Simple growth and product models are used for data interpretation. For both nitrogen sources, rate and yield parameter estimates are shown to be independent of initial nitrogen concentrations. For stationary phases, specific rates of gum production are shown to be independent of nitrogen source but dependent on initial nitrogen concentration. The latter is modeled empirically and suggests caution in applying simple product models to xanthan gum fermentations. 13 references.
Source analysis of MEG activities during sleep (abstract)
NASA Astrophysics Data System (ADS)
Ueno, S.; Iramina, K.
1991-04-01
The present study focuses on magnetic fields of brain activity during sleep, in particular on K-complexes, vertex waves, and sleep spindles in human subjects. We analyzed these waveforms based on both topographic EEG (electroencephalographic) maps and magnetic field measurements, called MEGs (magnetoencephalograms). The components of magnetic fields perpendicular to the surface of the head were measured using a dc SQUID magnetometer with a second derivative gradiometer. In our computer simulation, the head is assumed to be a homogeneous spherical volume conductor, with electric sources of brain activity modeled as current dipoles. Comparison of computer simulations with the measured data, particularly the MEG, suggests that the source of K-complexes can be modeled by two current dipoles. A source for the vertex wave is modeled by a single current dipole oriented along the body axis, pointing out of the head. By again measuring the simultaneous MEG and EEG signals, it is possible to uniquely determine the orientation of this dipole, particularly when it is tilted slightly off-axis. In sleep stage 2, fast waves of magnetic fields consistently appeared, but EEG spindles appeared intermittently. The results suggest that there exist sources which are undetectable by electrical measurement but are detectable by magnetic-field measurement. Such a source can be described by a pair of dipoles whose directions are oppositely oriented.
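The simplest forward model behind such dipole fitting is the primary field of a current dipole in an unbounded homogeneous conductor; the study's spherical-head model adds volume-current corrections not shown here. The sketch below uses an invented dipole moment and geometry.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # vacuum permeability (T*m/A)

def dipole_field(r, r0, q):
    """Primary magnetic field of a current dipole q (A*m) at r0, observed at r (metres).
    Infinite homogeneous conductor approximation (no spherical volume-current correction)."""
    d = r - r0
    dist = np.linalg.norm(d)
    return MU0 / (4 * np.pi) * np.cross(q, d) / dist**3

# Hypothetical source: a 20 nA*m dipole a few centimetres below a sensor 10 cm from the head centre.
r_sensor = np.array([0.0, 0.0, 0.10])
r_source = np.array([0.0, 0.02, 0.07])
q = np.array([20e-9, 0.0, 0.0])          # dipole moment along x (invented magnitude)

B = dipole_field(r_sensor, r_source, q)
print("field at sensor (fT):", np.round(B * 1e15, 1))
```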
NASA Astrophysics Data System (ADS)
Bezruchko, Konstantin; Davidov, Albert
2009-01-01
This article describes the scientific and technical complex for modeling, researching, and testing rocket-space vehicles' power installations created in the Power Source Laboratory of the National Aerospace University "KhAI". The complex makes it possible to replace full-scale tests with model tests and to reduce the financial and time costs of modeling, researching, and testing rocket-space vehicles' power installations. Using the complex, design and research problems for rocket-space vehicles' power installations can be solved efficiently, and experimental studies of physical processes and tests of solar and chemical batteries of rocket-space complexes and space vehicles can be carried out. The complex also supports accelerated testing, diagnostics, lifetime monitoring, and restoration of chemical accumulators for rocket-space vehicles' power supply systems.
NASA Technical Reports Server (NTRS)
Leake, M. A.
1982-01-01
Recent and more complex thermal models of Mercury and the terrestrial planets are discussed or noted. These models isolate a particular aspect of the planet's thermal history in an attempt to understand that parameter. Among these topics are thermal conductivity, convection, radiogenic sources of heat, other heat sources, and the problem of the molten core and regenerative dynamo.
Zaia Alves, Gustavo H; Hoeinghaus, David J; Manetta, Gislaine I; Benedito, Evanilde
2017-01-01
Studies in freshwater ecosystems are seeking to improve understanding of carbon flow in food webs and stable isotopes have been influential in this work. However, variation in isotopic values of basal production sources could either be an asset or a hindrance depending on study objectives. We assessed the potential for basin geology and local limnological conditions to predict stable carbon and nitrogen isotope values of six carbon sources at multiple locations in four Neotropical floodplain ecosystems (Paraná, Pantanal, Araguaia, and Amazon). Limnological conditions exhibited greater variation within than among systems. δ15N differed among basins for most carbon sources, but δ13C did not (though high within-basin variability for periphyton, phytoplankton and particulate organic carbon was observed). Although δ13C and δ15N values exhibited significant correlations with some limnological factors within and among basins, those relationships differed among carbon sources. Regression trees for both carbon and nitrogen isotopes for all sources depicted complex and in some cases nested relationships, and only very limited similarity was observed among trees for different carbon sources. Although limnological conditions predicted variation in isotope values of carbon sources, we suggest the resulting models were too complex to enable mathematical corrections of source isotope values among sites based on these parameters. The importance of local conditions in determining variation in source isotope values suggest that isotopes may be useful for examining habitat use, dispersal and patch dynamics within heterogeneous floodplain ecosystems, but spatial variability in isotope values needs to be explicitly considered when testing ecosystem models of carbon flow in these systems.
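The regression-tree analysis can be illustrated with a short sketch; the limnological predictors, the synthetic relationship to δ13C, and all numbers below are invented for illustration and are not the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical limnological predictors for sampling sites (values invented).
n_sites = 120
X = np.c_[rng.uniform(20, 300, n_sites),    # conductivity (uS/cm)
          rng.uniform(5.5, 9.0, n_sites),   # pH
          rng.uniform(1, 80, n_sites),      # turbidity (NTU)
          rng.uniform(2, 12, n_sites)]      # dissolved oxygen (mg/L)

# Synthetic d13C of periphyton with a nested, non-linear dependence plus noise.
d13C = -30 + 4 * (X[:, 1] > 7.5) + 0.01 * X[:, 0] * (X[:, 3] > 6) + rng.normal(0, 0.8, n_sites)

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=10).fit(X, d13C)
print("R^2 on training data:", round(tree.score(X, d13C), 2))
print("feature importances:", np.round(tree.feature_importances_, 2))
```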
Localization of diffusion sources in complex networks with sparse observations
NASA Astrophysics Data System (ADS)
Hu, Zhao-Long; Shen, Zhesi; Tang, Chang-Bing; Xie, Bin-Bin; Lu, Jian-Feng
2018-04-01
Locating sources in a large network is of paramount importance for reducing the spread of disruptive behavior. Based on a backward diffusion-based method and integer programming, we propose an efficient approach to locate sources in complex networks with limited observers. The results on model networks and empirical networks demonstrate that, for a given fraction of observers, the accuracy of our method for source localization improves as the network size increases. Moreover, compared with the previous maximum-minimum method, our method performs much better with a small fraction of observers, especially in heterogeneous networks. Furthermore, our method is more robust against noisy environments and against different strategies for choosing observers.
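The backward diffusion-based method with integer programming is not reproduced here; the sketch below illustrates a simpler, commonly used heuristic for localization from sparse observers, in which the candidate node minimizing the variance of observed arrival times minus graph distances is selected. The network, delays, and observer count are synthetic.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)

# Toy diffusion on a model network: unit propagation delay per edge plus small noise.
G = nx.barabasi_albert_graph(200, 3, seed=1)
true_source = 17
delays = {n: d + rng.normal(0, 0.2) for n, d in
          nx.single_source_shortest_path_length(G, true_source).items()}

observers = rng.choice(list(G.nodes), size=20, replace=False)   # sparse observations
obs_times = np.array([delays[o] for o in observers])

def score(candidate):
    """A candidate source is good if observed times minus graph distances are nearly constant."""
    d = nx.single_source_shortest_path_length(G, candidate)
    residuals = obs_times - np.array([d[o] for o in observers])
    return residuals.var()

estimated = min(G.nodes, key=score)
print("true source:", true_source, "estimated source:", estimated)
```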
The Earthquake‐Source Inversion Validation (SIV) Project
Mai, P. Martin; Schorlemmer, Danijel; Page, Morgan T.; Ampuero, Jean-Paul; Asano, Kimiyuki; Causse, Mathieu; Custodio, Susana; Fan, Wenyuan; Festa, Gaetano; Galis, Martin; Gallovic, Frantisek; Imperatori, Walter; Käser, Martin; Malytskyy, Dmytro; Okuwaki, Ryo; Pollitz, Fred; Passone, Luca; Razafindrakoto, Hoby N. T.; Sekiguchi, Haruko; Song, Seok Goo; Somala, Surendra N.; Thingbaijam, Kiran K. S.; Twardzik, Cedric; van Driel, Martin; Vyas, Jagdish C.; Wang, Rongjiang; Yagi, Yuji; Zielke, Olaf
2016-01-01
Finite‐fault earthquake source inversions infer the (time‐dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake‐source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward‐modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source‐model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake‐source imaging problem.
Sheldon, Kennon M; Sommet, Nicolas; Corcoran, Mike; Elliot, Andrew J
2018-04-01
We created a life-goal assessment drawing from self-determination theory and achievement goal literature, examining its predictive power regarding immoral behavior and subjective well-being. Our source items assessed direction and energization of motivation, via the distinction between intrinsic and extrinsic aims and between intrinsic and extrinsic reasons for acting, respectively. Fused source items assessed four goal complexes representing a combination of direction and energization. Across three studies (Ns = 109, 121, and 398), the extrinsic aim/extrinsic reason complex was consistently associated with immoral and/or unethical behavior beyond four source and three other goal complex variables. This was consistent with the triangle model of responsibility's claim that immoral behaviors may result when individuals disengage the self from moral prescriptions. The extrinsic/extrinsic complex also predicted lower subjective well-being, albeit less consistently. Our goal complex approach sheds light on how self-determination theory's goal contents and organismic integration mini-theories interact, particularly with respect to unethical behavior.
Sound source localization and segregation with internally coupled ears: the treefrog model
Christensen-Dalsgaard, Jakob
2016-01-01
Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384
Computer Analysis of Air Pollution from Highways, Streets, and Complex Interchanges
DOT National Transportation Integrated Search
1974-03-01
A detailed computer analysis of air quality for a complex highway interchange was prepared, using an in-house version of the Environmental Protection Agency's Gaussian Highway Line Source Model. This analysis showed that the levels of air pollution n...
NASA Astrophysics Data System (ADS)
Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.
2008-12-01
A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer) and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil, complex groundwater chemistry, and exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. The modelling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems, particularly at the laboratory scale.
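As a toy illustration of one ingredient of such a model, the sketch below integrates sequential Monod kinetics for TCE → cis-DCE → VC → ethene. The rate constants are placeholders, not SABRE calibration values, and the full model's electron donor competition, pH inhibition, and geochemical feedbacks are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sequential reductive dechlorination TCE -> cDCE -> VC -> ethene with Monod kinetics.
VMAX = np.array([2.0, 1.0, 0.5])    # maximum rates (umol/L/d) for the three steps, invented
KS = np.array([5.0, 5.0, 5.0])      # half-saturation constants (umol/L), invented

def rates(t, y):
    tce, dce, vc, eth = y
    r = VMAX * np.array([tce, dce, vc]) / (KS + np.array([tce, dce, vc]))
    return [-r[0], r[0] - r[1], r[1] - r[2], r[2]]

sol = solve_ivp(rates, (0, 60), [50.0, 0.0, 0.0, 0.0], t_eval=np.linspace(0, 60, 7))
for t, row in zip(sol.t, sol.y.T):
    print(f"day {t:4.0f}  TCE {row[0]:5.1f}  cDCE {row[1]:5.1f}  VC {row[2]:5.1f}  ethene {row[3]:5.1f}")
```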
NASA Astrophysics Data System (ADS)
Li, Weiyao; Huang, Guanhua; Xiong, Yunwu
2016-04-01
The complexity of the spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and the physical and chemical interactions between groundwater and porous media make solute transport in the medium more complicated still. An appropriate method to describe this complexity is essential when studying solute transport and transformation in porous media. Because information entropy can measure uncertainty and disorder, we used information entropy theory to investigate the relationship between entropy and the complexity of solute transport in heterogeneous porous media. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated from transition probabilities. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source, and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that the entropy increased with the complexity of the solute transport process. For the point source, the one-dimensional entropy of solute concentration first increased and then decreased along the X and Y directions. As time increased, the entropy peak value remained essentially unchanged while the peak position migrated along the flow direction (X direction) and approximately coincided with the centroid position. With increasing time, the spatial variability and complexity of solute concentration increased, which resulted in increases of the second-order spatial moment and the two-dimensional entropy. The information entropy of the line source was higher than that of the point source, and the solute entropy obtained from continuous input was higher than that from instantaneous input. As the average lithofacies length increased, medium continuity increased, the complexity of flow and solute transport weakened, and the corresponding information entropy decreased. Longitudinal macrodispersivity declined slightly at early times and then rose. The spatial and temporal distribution of solute had a significant impact on the information entropy, and the entropy reflected changes in the solute distribution. Information entropy thus appears to be a tool for characterizing the spatial and temporal complexity of solute migration and provides a reference for future research.
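A minimal way to compute the information entropy of a simulated concentration field, assuming the normalized field is treated as a probability distribution over grid cells (the exact discretization used in the study is not specified here), is sketched below.

```python
import numpy as np

def concentration_entropy(C):
    """Shannon entropy of a non-negative concentration field, treating the
    normalized field as a probability distribution over grid cells."""
    p = np.asarray(C, dtype=float).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# A sharp plume (low entropy) versus the same mass spread by dispersion (higher entropy).
x = np.linspace(0, 100, 501)
sharp = np.exp(-((x - 30) ** 2) / (2 * 1.0 ** 2))
spread = np.exp(-((x - 60) ** 2) / (2 * 10.0 ** 2))
print("entropy (early, compact plume):", round(concentration_entropy(sharp), 2))
print("entropy (late, dispersed plume):", round(concentration_entropy(spread), 2))
```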
Free and Open Source GIS Tools: Role and Relevance in the Environmental Assessment Community
The presence of an explicit geographical context in most environmental decisions can complicate assessment and selection of management options. These decisions typically involve numerous data sources, complex environmental and ecological processes and their associated models, ris...
The formulations of the AMS/EPA Regulatory Model Improvement Committee's applied air dispersion model (AERMOD) are described. This is the second in a series of three articles. Part I describes the model's methods for characterizing the atmospheric boundary layer and complex ter...
Building Blocks for Reliable Complex Nonlinear Numerical Simulations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Mansour, Nagi N. (Technical Monitor)
2002-01-01
This talk describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations. Examples relevant to turbulent flow computations are included.
Building Blocks for Reliable Complex Nonlinear Numerical Simulations
NASA Technical Reports Server (NTRS)
Yee, H. C.
2005-01-01
This chapter describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations.
Building Blocks for Reliable Complex Nonlinear Numerical Simulations. Chapter 2
NASA Technical Reports Server (NTRS)
Yee, H. C.; Mansour, Nagi N. (Technical Monitor)
2001-01-01
This chapter describes some of the building blocks to ensure a higher level of confidence in the predictability and reliability (PAR) of numerical simulation of multiscale complex nonlinear problems. The focus is on relating PAR of numerical simulations with complex nonlinear phenomena of numerics. To isolate sources of numerical uncertainties, the possible discrepancy between the chosen partial differential equation (PDE) model and the real physics and/or experimental data is set aside. The discussion is restricted to how well numerical schemes can mimic the solution behavior of the underlying PDE model for finite time steps and grid spacings. The situation is complicated by the fact that the available theory for the understanding of nonlinear behavior of numerics is not at a stage to fully analyze the nonlinear Euler and Navier-Stokes equations. The discussion is based on the knowledge gained for nonlinear model problems with known analytical solutions to identify and explain the possible sources and remedies of numerical uncertainties in practical computations. Examples relevant to turbulent flow computations are included.
Bonomi, Massimiliano; Pellarin, Riccardo; Kim, Seung Joong; Russel, Daniel; Sundin, Bryan A.; Riffle, Michael; Jaschob, Daniel; Ramsden, Richard; Davis, Trisha N.; Muller, Eric G. D.; Sali, Andrej
2014-01-01
The use of in vivo Förster resonance energy transfer (FRET) data to determine the molecular architecture of a protein complex in living cells is challenging due to data sparseness, sample heterogeneity, signal contributions from multiple donors and acceptors, unequal fluorophore brightness, photobleaching, flexibility of the linker connecting the fluorophore to the tagged protein, and spectral cross-talk. We addressed these challenges by using a Bayesian approach that produces the posterior probability of a model, given the input data. The posterior probability is defined as a function of the dependence of our FRET metric FRETR on a structure (forward model), a model of noise in the data, as well as prior information about the structure, relative populations of distinct states in the sample, forward model parameters, and data noise. The forward model was validated against kinetic Monte Carlo simulations and in vivo experimental data collected on nine systems of known structure. In addition, our Bayesian approach was validated by a benchmark of 16 protein complexes of known structure. Given the structures of each subunit of the complexes, models were computed from synthetic FRETR data with a distance root-mean-squared deviation error of 14 to 17 Å. The approach is implemented in the open-source Integrative Modeling Platform, allowing us to determine macromolecular structures through a combination of in vivo FRETR data and data from other sources, such as electron microscopy and chemical cross-linking. PMID:25139910
Shallow seismicity in volcanic system: what role does the edifice play?
NASA Astrophysics Data System (ADS)
Bean, Chris; Lokmer, Ivan
2017-04-01
Seismicity in the upper two kilometres in volcanic systems is complex and very diverse in nature. The origins lie in the multi-physics nature of source processes and in the often extreme heterogeneity in near surface structure, which introduces strong seismic wave propagation path effects that often 'hide' the source itself. Other complicating factors are that we are often in the seismic near-field so waveforms can be intrinsically more complex than in far-field earthquake seismology. The traditional focus for an explanation of the diverse nature of shallow seismic signals is to call on the direct action of fluids in the system. Fits to model data are then used to elucidate properties of the plumbing system. Here we show that solutions based on these conceptual models are not unique and that models based on a diverse range of quasi-brittle failure of low stiffness near surface structures are equally valid from a data fit perspective. These earthquake-like sources also explain aspects of edifice deformation that are as yet poorly quantified.
NASA Astrophysics Data System (ADS)
Holden, C.; Kaneko, Y.; D'Anastasio, E.; Benites, R.; Fry, B.; Hamling, I. J.
2017-11-01
The 2016 Kaikōura (New Zealand) earthquake generated large ground motions and resulted in multiple onshore and offshore fault ruptures, a profusion of triggered landslides, and a regional tsunami. Here we examine the rupture evolution using two kinematic modeling techniques based on analysis of local strong-motion and high-rate GPS data. Our kinematic models capture a complex pattern of slowly (Vr < 2 km/s) propagating rupture from south to north, with over half of the moment release occurring in the northern source region, mostly on the Kekerengu fault, 60 s after the origin time. Both models indicate rupture reactivation on the Kekerengu fault with the time separation of 11 s between the start of the original failure and start of the subsequent one. We further conclude that most near-source waveforms can be explained by slip on the crustal faults, with little (<8%) or no contribution from the subduction interface.
Chatterji, Madhabi
2016-12-01
This paper explores avenues for navigating evaluation design challenges posed by complex social programs (CSPs) and their environments when conducting studies that call for generalizable, causal inferences on an intervention's effectiveness. A definition of a CSP is provided, drawing on examples from different fields, and an evaluation case is analyzed in depth to derive seven (7) major sources of complexity that typify CSPs and threaten the assumptions of textbook-recommended experimental designs for impact evaluations. Theoretically supported, alternative methodological strategies are discussed to navigate these assumptions and counter the design challenges posed by the complex configurations and ecology of CSPs. Specific recommendations include sequential refinement of the evaluation design through systems thinking and systems-informed logic modeling, and use of extended-term, mixed methods (ETMM) approaches with exploratory and confirmatory phases of the evaluation. In the proposed approach, logic models are refined through direct induction and interactions with stakeholders. To better guide assumption evaluation, question framing, and selection of appropriate methodological strategies, a multiphase evaluation design is recommended. Copyright © 2016 Elsevier Ltd. All rights reserved.
Gollob, Stephan; Kocur, Georg Karl; Schumacher, Thomas; Mhamdi, Lassaad; Vogel, Thomas
2017-02-01
In acoustic emission analysis, common source location algorithms assume, independently of the nature of the propagation medium, a straight (shortest) wave path between the source and the sensors. For heterogeneous media such as concrete, the wave travels in complex paths due to the interaction with the dissimilar material contents and with the possible geometrical and material irregularities present in these media. For instance, cracks and large air voids present in concrete influence significantly the way the wave travels, by causing wave path deviations. Neglecting these deviations by assuming straight paths can introduce significant errors to the source location results. In this paper, a novel source localization method called FastWay is proposed. It accounts, contrary to most available shortest path-based methods, for the different effects of material discontinuities (cracks and voids). FastWay, based on a heterogeneous velocity model, uses the fastest rather than the shortest travel paths between the source and each sensor. The method was evaluated both numerically and experimentally and the results from both evaluation tests show that, in general, FastWay was able to locate sources of acoustic emissions more accurately and reliably than the traditional source localization methods. Copyright © 2016 Elsevier B.V. All rights reserved.
Callewaert, Raf; De Vuyst, Luc
2000-01-01
Amylovorin L471 is a small, heat-stable, and hydrophobic bacteriocin produced by Lactobacillus amylovorus DCE 471. The nutritional requirements for amylovorin L471 production were studied with fed-batch fermentations. A twofold increase in bacteriocin titer was obtained when substrate addition was controlled by the acidification rate of the culture, compared with the titers reached with constant substrate addition or pH-controlled batch cultures carried out under the same conditions. An interesting feature of fed-batch cultures observed under certain culture conditions (constant feed rate) is the apparent stabilization of bacteriocin activity after obtaining maximum production. Finally, a mathematical model was set up to simulate cell growth, glucose and complex nitrogen source consumption, and lactic acid and bacteriocin production kinetics. The model showed that bacterial growth was dependent on both the energy and the complex nitrogen source. Bacteriocin production was growth associated, with a simultaneous bacteriocin adsorption on the producer cells dependent on the lactic acid accumulated and hence the viability of the cells. Both bacteriocin production and adsorption were inhibited by high concentrations of the complex nitrogen source. PMID:10653724
Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources
NASA Astrophysics Data System (ADS)
Jia, Z.; Zhan, Z.
2017-12-01
Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that our method can capture major features of the rupture process of large earthquakes and provide information for more detailed rupture history analysis.
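As an illustration of the parameterization described above (not the authors' code), the far-field moment-rate function of a single Haskell sub-event with uniform dislocation and constant unilateral rupture velocity can be sketched as the convolution of two boxcars, one of width L/Vr (the rupture duration) and one of width equal to the rise time; all numerical values below are hypothetical.

```python
import numpy as np

def boxcar(t, duration):
    """Unit-area boxcar of the given duration (zero outside [0, duration))."""
    if duration <= 0:
        raise ValueError("duration must be positive")
    return ((t >= 0) & (t < duration)).astype(float) / duration

def haskell_moment_rate(t, moment, length_km, vr_km_s, rise_time_s):
    """Moment-rate function of a unilateral Haskell sub-event: convolution of a
    boxcar of width L/Vr (rupture duration) with a boxcar of width equal to the
    rise time, scaled to the sub-event moment."""
    dt = t[1] - t[0]
    rupture_duration = length_km / vr_km_s
    stf = np.convolve(boxcar(t, rupture_duration), boxcar(t, rise_time_s)) * dt
    return moment * stf[: len(t)]

# Hypothetical sub-event: M0 ~ 4e19 N*m, 40-km-long, Vr = 1.8 km/s, 4-s rise time.
t = np.arange(0.0, 60.0, 0.05)
mdot = haskell_moment_rate(t, 4e19, 40.0, 1.8, 4.0)
print("released moment (N*m):", np.trapz(mdot, t))
```

In an MHS-style inversion, several such sub-events with different locations, onset times and directivities would be summed, with their parameters sampled, for example, by Markov chain Monte Carlo.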
2.5D complex resistivity modeling and inversion using unstructured grids
NASA Astrophysics Data System (ADS)
Xu, Kaijun; Sun, Jie
2016-04-01
The complex resistivity of rocks and ores has long been recognized. The Cole-Cole model (CCM) is generally used to describe complex resistivity, and it has been shown that the electrical anomaly of a geologic body can be quantitatively estimated from CCM parameters such as the direct-current resistivity (ρ0), chargeability (m), time constant (τ) and frequency dependence (c). It is therefore important to obtain the complex parameters of a geologic body. Complex structures and terrain are difficult to approximate with traditional rectangular grids. In order to enhance the numerical accuracy and rationality of modeling and inversion, we use an adaptive finite-element algorithm for forward modeling of frequency-domain 2.5D complex resistivity and implement the conjugate gradient algorithm for the inversion of 2.5D complex resistivity. An adaptive finite-element method is applied to solve the 2.5D complex resistivity forward problem for a horizontal electric dipole source. First, the CCM is introduced into Maxwell's equations to calculate the complex-resistivity electromagnetic fields. Next, a pseudo-delta function is used to distribute the electric dipole source. The electromagnetic fields are then expressed as the primary fields caused by the layered structure plus the secondary fields caused by the anomalous conductivity of inhomogeneities. Finally, we calculate the electromagnetic field responses of complex geoelectric structures such as an anticline, a syncline and a fault. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models using unstructured grids. The 2.5D complex resistivity inversion is implemented with the conjugate gradient algorithm, which does not require the sensitivity matrix to be formed explicitly; only products of the sensitivity matrix (or its transpose) with a vector are computed. In addition, the inversion target zones are discretized with fine grids and the background zones with coarse grids, which reduces the number of inversion cells and helps improve computational efficiency. The inversion results verify the validity and stability of the conjugate gradient inversion algorithm. The theoretical calculations indicate that 2.5D complex resistivity modeling and inversion using unstructured grids are feasible. Unstructured grids improve the accuracy of modeling, but inversion with a large number of grids is extremely time-consuming, so parallel computation of the inversion is necessary. Acknowledgments: We thank the National Natural Science Foundation of China (41304094) for support.
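For reference, the Cole-Cole (Pelton) parameterization named in the abstract can be written as ρ(ω) = ρ0[1 − m(1 − 1/(1 + (iωτ)^c))]; the short sketch below evaluates it for illustrative parameter values (the values themselves are not from the study).

```python
import numpy as np

def cole_cole_resistivity(freq_hz, rho0, m, tau, c):
    """Complex resistivity of the Cole-Cole (Pelton) model:
    rho(omega) = rho0 * [1 - m * (1 - 1 / (1 + (i*omega*tau)**c))]."""
    omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

# Illustrative parameters: rho0 = 100 ohm-m, m = 0.3, tau = 0.1 s, c = 0.5.
freqs = np.logspace(-2, 4, 7)          # 0.01 Hz to 10 kHz
rho = cole_cole_resistivity(freqs, 100.0, 0.3, 0.1, 0.5)
for f, r in zip(freqs, rho):
    print(f"{f:10.2f} Hz  |rho| = {abs(r):7.2f} ohm-m  phase = {np.degrees(np.angle(r)):6.2f} deg")
```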
Ayvaz, M Tamer
2010-09-20
This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. Also, an implicit solution procedure is proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples for simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The identification results indicate that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems. Copyright (c) 2010 Elsevier B.V. All rights reserved.
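A generic sketch of the harmony search loop that drives such a simulation-optimization model is given below; it is not the author's implementation, and the least-squares misfit and decision-variable bounds are placeholders standing in for the MODFLOW/MT3DMS misfit between simulated and observed concentrations.

```python
import numpy as np

def harmony_search(objective, bounds, hms=30, hmcr=0.9, par=0.3, bw=0.05, iters=2000, seed=0):
    """Minimal harmony search: bounds is a list of (low, high) pairs, one per
    decision variable (e.g. source coordinates and release rates per stress period)."""
    rng = np.random.default_rng(seed)
    low, high = np.array(bounds, dtype=float).T
    memory = rng.uniform(low, high, size=(hms, len(bounds)))      # harmony memory
    fitness = np.array([objective(x) for x in memory])
    for _ in range(iters):
        new = np.empty(len(bounds))
        for j in range(len(bounds)):
            if rng.random() < hmcr:                               # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                            # pitch adjustment
                    new[j] += bw * (high[j] - low[j]) * rng.uniform(-1, 1)
            else:                                                 # random selection
                new[j] = rng.uniform(low[j], high[j])
        new = np.clip(new, low, high)
        f = objective(new)
        worst = np.argmax(fitness)
        if f < fitness[worst]:                                    # replace worst harmony
            memory[worst], fitness[worst] = new, f
    best = np.argmin(fitness)
    return memory[best], fitness[best]

# Toy misfit: recover a hypothetical 2-parameter source (x-location, release rate).
truth = np.array([350.0, 42.0])
misfit = lambda x: float(np.sum((x - truth) ** 2))
best, best_f = harmony_search(misfit, bounds=[(0, 1000), (0, 100)])
print(best, best_f)
```

In the actual linked model, objective(x) would run the flow and transport simulation for a candidate set of source locations and release histories and return the misfit to the observed concentrations.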
Adaptive evolution of complex innovations through stepwise metabolic niche expansion.
Szappanos, Balázs; Fritzemeier, Jonathan; Csörgő, Bálint; Lázár, Viktória; Lu, Xiaowen; Fekete, Gergely; Bálint, Balázs; Herczeg, Róbert; Nagy, István; Notebaart, Richard A; Lercher, Martin J; Pál, Csaba; Papp, Balázs
2016-05-20
A central challenge in evolutionary biology concerns the mechanisms by which complex metabolic innovations requiring multiple mutations arise. Here, we propose that metabolic innovations accessible through the addition of a single reaction serve as stepping stones towards the later establishment of complex metabolic features in another environment. We demonstrate the feasibility of this hypothesis through three complementary analyses. First, using genome-scale metabolic modelling, we show that complex metabolic innovations in Escherichia coli can arise via changing nutrient conditions. Second, using phylogenetic approaches, we demonstrate that the acquisition patterns of complex metabolic pathways during the evolutionary history of bacterial genomes support the hypothesis. Third, we show how adaptation of laboratory populations of E. coli to one carbon source facilitates the later adaptation to another carbon source. Our work demonstrates how complex innovations can evolve through series of adaptive steps without the need to invoke non-adaptive processes.
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
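To make the variance-decomposition idea concrete, the sketch below estimates a first-order variance-based index for one group of inputs with a simple double-loop Monte Carlo estimator, Var(E[Y|group])/Var(Y); the toy model and input groups are illustrative stand-ins, not the Hanford flow and transport model.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(boundary, permeability, recharge):
    """Placeholder model output (e.g. simulated head at one location)."""
    return 2.0 * boundary + np.log(permeability) + 0.5 * recharge * boundary

def sample_inputs(n):
    return (rng.uniform(8, 12, n),            # "boundary condition" group
            rng.lognormal(0.0, 1.0, n),       # "permeability" group
            rng.normal(1.0, 0.2, n))          # "recharge" group

def first_order_index(group, n_outer=200, n_inner=200):
    """Double-loop estimate of S_group = Var(E[Y | x_group]) / Var(Y)."""
    cond_means, all_y = [], []
    for _ in range(n_outer):
        fixed = sample_inputs(1)              # one draw of the group of interest
        inputs = list(sample_inputs(n_inner)) # fresh draws of everything else
        inputs[group] = np.full(n_inner, fixed[group][0])
        y = model(*inputs)
        cond_means.append(y.mean())
        all_y.append(y)
    return np.var(cond_means) / np.var(np.concatenate(all_y))

for name, idx in [("boundary", 0), ("permeability", 1), ("recharge", 2)]:
    print(name, round(first_order_index(idx), 3))
```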
Lenstronomy: Multi-purpose gravitational lens modeling software package
NASA Astrophysics Data System (ADS)
Birrer, Simon; Amara, Adam
2018-04-01
Lenstronomy is a multi-purpose open-source gravitational lens modeling python package. Lenstronomy reconstructs the lens mass and surface brightness distributions of strong lensing systems using forward modelling and supports a wide range of analytic lens and light models in arbitrary combination. The software is also able to reconstruct complex extended sources as well as point sources. Lenstronomy is flexible and numerically accurate, with a clear user interface that could be deployed across different platforms. Lenstronomy has been used to derive constraints on dark matter properties in strong lenses, measure the expansion history of the universe with time-delay cosmography, measure cosmic shear with Einstein rings, and decompose quasar and host galaxy light.
NASA Astrophysics Data System (ADS)
Chamakuri, Nagaiah; Engwer, Christian; Kunisch, Karl
2014-09-01
Optimal control for cardiac electrophysiology based on the bidomain equations in conjunction with the Fenton-Karma ionic model is considered. This generic ventricular model approximates well the restitution properties and spiral wave behavior of more complex ionic models of cardiac action potentials. However, it is challenging due to the appearance of state-dependent discontinuities in the source terms. A computational framework for the numerical realization of optimal control problems is presented. Essential ingredients are a shape calculus based treatment of the sensitivities of the discontinuous source terms and a marching cubes algorithm to track iso-surface of excitation wavefronts. Numerical results exhibit successful defibrillation by applying an optimally controlled extracellular stimulus.
Complex earthquake rupture and local tsunamis
Geist, E.L.
2002-01-01
In contrast to far-field tsunami amplitudes that are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. From a global catalog of tsunami runup observations, this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes in the magnitude range of 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns are generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized and the vertical displacement fields from point source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1). Analysis of the results indicates that for earthquakes of a fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a factor of 3 or more. These results indicate that there is substantially more variation in the local tsunami wave field derived from the inherent complexity of subduction zone earthquakes than predicted by a simple elastic dislocation model. Probabilistic methods that take into account variability in earthquake rupture processes are likely to yield more accurate assessments of tsunami hazards.
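One standard way to generate self-affine slip distributions of the kind described above is spectral synthesis: impose a power-law wavenumber spectrum on random phases and transform back to the space domain. The sketch below does this with illustrative dimensions and decay exponent (not necessarily the values used in the study) and rescales to a target mean slip.

```python
import numpy as np

def stochastic_slip(nx=128, nz=64, dx_km=2.0, decay_exponent=2.0, mean_slip_m=3.0, seed=0):
    """Generate a random slip distribution whose amplitude spectrum falls off as
    k**(-decay_exponent) at high wavenumbers (self-affine), then shift and rescale
    so that slip is non-negative with the requested mean."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, d=dx_km)
    kz = np.fft.fftfreq(nz, d=dx_km)
    kzz, kxx = np.meshgrid(kz, kx, indexing="ij")
    k = np.sqrt(kxx**2 + kzz**2)
    k[0, 0] = k[k > 0].min()                       # avoid division by zero at k = 0
    amplitude = k ** (-decay_exponent)
    phase = np.exp(2j * np.pi * rng.random((nz, nx)))
    slip = np.real(np.fft.ifft2(amplitude * phase))
    slip -= slip.min()                             # shift to non-negative values
    slip *= mean_slip_m / slip.mean()              # rescale to the target mean slip
    return slip

slip = stochastic_slip()
print("mean slip (m):", slip.mean(), " max slip (m):", slip.max())
```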
NASA Technical Reports Server (NTRS)
Schlegel, E.; Swank, Jean (Technical Monitor)
2001-01-01
Analysis of 80 ks ASCA (Advanced Satellite for Cosmology and Astrophysics) and 60 ks ROSAT HRI (High Resolution Imager) observations of the face-on spiral galaxy NGC 6946 is presented. The ASCA image is the first observation of this galaxy above approximately 2 keV. Diffuse emission may be present in the inner approximately 4' extending to energies above approximately 2-3 keV. In the HRI data, 14 pointlike sources are detected, the brightest two being a source very close to the nucleus and a source to the northeast that corresponds to a luminous complex of interacting supernova remnants (SNRs). We detect a point source that lies approximately 30" west of the SNR complex but with a luminosity roughly 1/15 that of the SNR complex. None of the point sources show evidence of strong variability; weak variability would escape our detection. The ASCA spectrum of the SNR complex shows evidence for an emission line at approximately 0.9 keV that could be either Ne IX at approximately 0.915 keV or a blend of ion stages of Fe L-shell emission if the continuum is fitted with a power law. However, a two-component, Raymond-Smith thermal spectrum with no lines gives an equally valid continuum fit and may be more physically plausible given the observed spectrum below 3 keV. Adopting this latter model, we derive a density for the SNR complex of 10-35 cm^-3, consistent with estimates inferred from optical emission-line ratios. The complex's extraordinary X-ray luminosity may be related more to the high density of the surrounding medium than to a small but intense interaction region where two of the complex's SNRs are apparently colliding.
An open-source Java-based Toolbox for environmental model evaluation: The MOUSE Software Application
USDA-ARS?s Scientific Manuscript database
A consequence of environmental model complexity is that the task of understanding how environmental models work and identifying their sensitivities/uncertainties, etc. becomes progressively more difficult. Comprehensive numerical and visual evaluation tools have been developed such as the Monte Carl...
NASA Astrophysics Data System (ADS)
Strassmann, Kuno M.; Joos, Fortunat
2018-05-01
The Bern Simple Climate Model (BernSCM) is a free open-source re-implementation of a reduced-form carbon cycle-climate model which has been used widely in previous scientific work and IPCC assessments. BernSCM represents the carbon cycle and climate system with a small set of equations for the heat and carbon budget, the parametrization of major nonlinearities, and the substitution of complex component systems with impulse response functions (IRFs). The IRF approach allows cost-efficient yet accurate substitution of detailed parent models of climate system components with near-linear behavior. Illustrative simulations of scenarios from previous multimodel studies show that BernSCM is broadly representative of the range of the climate-carbon cycle response simulated by more complex and detailed models. Model code (in Fortran) was written from scratch with transparency and extensibility in mind, and is provided open source. BernSCM makes scientifically sound carbon cycle-climate modeling available for many applications. Supporting up to decadal time steps with high accuracy, it is suitable for studies with high computational load and for coupling with integrated assessment models (IAMs), for example. Further applications include climate risk assessment in a business, public, or educational context and the estimation of CO2 and climate benefits of emission mitigation options.
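The IRF substitution underlying this class of model reduces each near-linear component to a convolution of its forcing with a response function, typically a small sum of exponentials. The sketch below illustrates the mechanics with made-up amplitudes and time scales; they are not BernSCM's calibrated coefficients.

```python
import numpy as np

def impulse_response(t_years, amplitudes, timescales_years):
    """IRF as a sum of exponential modes; amplitudes and timescales here are
    placeholders, not calibrated BernSCM values."""
    t = np.asarray(t_years, dtype=float)
    return sum(a * np.exp(-t / tau) for a, tau in zip(amplitudes, timescales_years))

def convolve_forcing(forcing, irf, dt_years=1.0):
    """Discrete convolution of a yearly forcing series with the IRF."""
    full = np.convolve(forcing, irf) * dt_years
    return full[: len(forcing)]

years = np.arange(0, 200)
irf = impulse_response(years, amplitudes=[0.2, 0.3, 0.5], timescales_years=[400.0, 40.0, 4.0])
emissions = np.where(years < 100, 10.0, 0.0)       # e.g. 10 GtC/yr for a century, then zero
perturbation = convolve_forcing(emissions, irf)    # e.g. atmospheric carbon perturbation
print("remaining perturbation after 200 yr:", perturbation[-1])
```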
Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang
2017-01-01
The inverse method is inherently suitable for calculating the distribution of source current density associated with an irregularly structured electromagnetic target field. However, in its present form the inverse method cannot account for complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method is proposed that can calculate the complex field-tissue interactions for the inverse design of source current density associated with an irregularly structured electromagnetic target field. A Huygens' equivalent surface is established as a bridge between the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method, taking into account the complex field-tissue interactions within the human body model. The magnetic field obtained on the Huygens' equivalent surface is then treated as the new target. The current density on the designated source surface is derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.
NASA Technical Reports Server (NTRS)
Kim, Dongchul; Chin, Mian; Kemp, Eric M.; Tao, Zhining; Peters-Lidard, Christa D.; Ginoux, Paul
2017-01-01
A high-resolution dynamic dust source has been developed in the NASA Unified-Weather Research and Forecasting (NU-WRF) model to improve the existing coarse static dust source. In the new dust source map, topographic depression is in 1-km resolution and surface bareness is derived using the Normalized Difference Vegetation Index (NDVI) data from the Moderate Resolution Imaging Spectroradiometer (MODIS). The new dust source better resolves the complex topographic distribution over the Western United States, where its magnitude is higher than the existing, coarser resolution static source. A case study is conducted with an extreme dust storm that occurred in Phoenix, Arizona at 02-03 UTC on July 6, 2011. The NU-WRF model with the new high-resolution dynamic dust source is able to successfully capture the dust storm, which was not achieved with the old source identification. However, the case study also reveals several challenges in reproducing the time evolution of the short-lived, extreme dust storm events.
Characterization of an Explosion Source in a Complex Medium by Modeling and Wavelet Domain Inversion
2006-06-01
[Fragmentary report extract; only section headings and partial sentences are recoverable.] The recoverable portions address: mechanisms of scattering due to an explosive source; the S wave observed at the tunnel; the potential of TRA for determining seismic source properties; and the various mechanisms proposed to explain the generation of prominent SH and Love waves (followed by the heading "2.2 Objectives of This [study]").
Markov source model for printed music decoding
NASA Astrophysics Data System (ADS)
Kopec, Gary E.; Chou, Philip A.; Maltz, David A.
1995-03-01
This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.
NASA Astrophysics Data System (ADS)
Zhang, Li; Wang, Tao; Zhang, Qiang; Zheng, Junyu; Xu, Zheng; Lv, Mengyao
2016-04-01
Current chemical transport models commonly underestimate the atmospheric concentration of nitrous acid (HONO), which plays an important role in atmospheric chemistry, due to missing or inappropriate representations of some HONO sources in the models. In the present study, we parameterized up-to-date HONO sources into a state-of-the-art three-dimensional chemical transport model (the Weather Research and Forecasting model coupled with Chemistry: WRF-Chem). These sources included (1) heterogeneous reactions on ground surfaces with the photoenhanced effect on HONO production, (2) photoenhanced reactions on aerosol surfaces, (3) direct vehicle and vessel emissions, (4) potential conversion of NO2 at the ocean surface, and (5) emissions from soil bacteria. The revised WRF-Chem was applied to explore the sources of the high HONO concentrations (0.45-2.71 ppb) observed at a suburban site located within complex land types (with artificial land covers, ocean, and forests) in Hong Kong. With the addition of these sources, the revised model substantially reproduced the observed HONO levels. The heterogeneous conversion of NO2 on ground surfaces dominated the HONO sources, contributing about 42% of the observed HONO mixing ratios, with emissions from soil bacteria contributing around 29%, followed by the oceanic source (~9%), photochemical formation via NO and OH (~6%), conversion on aerosol surfaces (~3%), and traffic emissions (~2%). The results suggest that HONO sources in suburban areas could be more complex and diverse than those in urban or rural areas and that bacterial and/or ocean processes need to be considered in HONO production in forested and/or coastal areas. Sensitivity tests showed that the simulated HONO was sensitive to the uptake coefficient of NO2 on the surfaces. Incorporation of the aforementioned HONO sources significantly improved the simulations of ozone, resulting in increases of ground-level ozone concentrations by 6-12% over urban areas in Hong Kong and the Pearl River Delta region. This result highlights the importance of accurately representing HONO sources in simulations of secondary pollutants over polluted regions.
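The heterogeneous NO2-to-HONO conversion referred to above is commonly parameterized as a first-order loss of NO2 with rate constant k_het = (1/4) γ v̄ (S/V), where γ is the uptake coefficient, v̄ the mean molecular speed of NO2, and S/V the surface-to-volume ratio. The sketch below evaluates this standard formulation with illustrative values; it is not the parameterization coded in the revised WRF-Chem.

```python
import numpy as np

R = 8.314          # J mol^-1 K^-1
M_NO2 = 0.046      # kg mol^-1

def mean_molecular_speed(temp_k, molar_mass_kg):
    """Mean molecular speed v = sqrt(8RT / (pi*M)), in m/s."""
    return np.sqrt(8.0 * R * temp_k / (np.pi * molar_mass_kg))

def hono_production_rate(no2_ppb, gamma, surface_to_volume_m1, temp_k=298.0):
    """First-order heterogeneous HONO production rate (ppb/h):
    d[HONO]/dt = k_het * [NO2], with k_het = 0.25 * gamma * v * (S/V)."""
    v = mean_molecular_speed(temp_k, M_NO2)
    k_het = 0.25 * gamma * v * surface_to_volume_m1          # s^-1
    return k_het * no2_ppb * 3600.0                          # ppb per hour

# Illustrative values (assumed, not from the study): 20 ppb NO2, uptake coefficient
# 1e-5 on ground surfaces, effective surface-to-volume ratio 0.1 m^-1.
print(hono_production_rate(no2_ppb=20.0, gamma=1e-5, surface_to_volume_m1=0.1), "ppb/h")
```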
NASA Astrophysics Data System (ADS)
Izquierdo, Andrés F.; Galván-Madrid, Roberto; Maud, Luke T.; Hoare, Melvin G.; Johnston, Katharine G.; Keto, Eric R.; Zhang, Qizhou; de Wit, Willem-Jan
2018-05-01
We present a composite model and radiative transfer simulations of the massive star forming core W33A MM1. The model was tailored to reproduce the complex features observed with ALMA at ≈0.2 arcsec resolution in CH3CN and dust emission. The MM1 core is fragmented into six compact sources coexisting within ˜1000 au. In our models, three of these compact sources are better represented as disc-envelope systems around a central (proto)star, two as envelopes with a central object, and one as a pure envelope. The model of the most prominent object (Main) contains the most massive (proto)star (M⋆ ≈ 7 M⊙) and disc+envelope (Mgas ≈ 0.4 M⊙), and is the most luminous (LMain ˜ 104 L⊙). The model discs are small (a few hundred au) for all sources. The composite model shows that the elongated spiral-like feature converging to the MM1 core can be convincingly interpreted as a filamentary accretion flow that feeds the rising stellar system. The kinematics of this filament is reproduced by a parabolic trajectory with focus at the center of mass of the region. Radial collapse and fragmentation within this filament, as well as smaller filamentary flows between pairs of sources are proposed to exist. Our modelling supports an interpretation where what was once considered as a single massive star with a ˜103 au disc and envelope, is instead a forming stellar association which appears to be virialized and to form several low-mass stars per high-mass object.
Abby L. McQueen; Nicolas P. Zegre; Danny L. Welsch
2013-01-01
The integration of factors and processes responsible for streambank erosion is complex. To explore the influence of physical variables on streambank erosion, parameters for the bank assessment of nonpoint source consequences of sediment (BANCS) model were collected on a 1-km reach of Horseshoe Run in Tucker County, West Virginia. Cluster analysis was used to establish...
A Spatially Continuous Model of Carbohydrate Digestion and Transport Processes in the Colon
Moorthy, Arun S.; Brooks, Stephen P. J.; Kalmokoff, Martin; Eberl, Hermann J.
2015-01-01
A spatially continuous mathematical model of transport processes, anaerobic digestion and microbial complexity as would be expected in the human colon is presented. The model is a system of first-order partial differential equations with a context-determined number of dependent variables and stiff, non-linear source terms. Numerical simulation of the model is used to elucidate information about the colon-microbiota complex. It is found that the composition of materials on outflow of the model does not describe well the composition of material at other model locations, and inferences using outflow data vary according to the model reactor representation. Additionally, increased microbial complexity allows the total microbial community to withstand major system perturbations in diet and community structure. However, the distribution of strains and functional groups within the microbial community can be modified depending on perturbation length and microbial kinetic parameters. Preliminary model extensions and potential investigative opportunities using the computational model are discussed. PMID:26680208
A computational framework for modeling targets as complex adaptive systems
NASA Astrophysics Data System (ADS)
Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh
2017-05-01
Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.
Constructing Scientific Arguments Using Evidence from Dynamic Computational Climate Models
NASA Astrophysics Data System (ADS)
Pallant, Amy; Lee, Hee-Sun
2015-04-01
Modeling and argumentation are two important scientific practices students need to develop throughout their school years. In this paper, we investigated how middle and high school students (N = 512) construct a scientific argument based on evidence from computational models with which they simulated climate change. We designed scientific argumentation tasks with three increasingly complex dynamic climate models. Each scientific argumentation task consisted of four parts: a multiple-choice claim, an open-ended explanation, a five-point Likert scale uncertainty rating, and an open-ended uncertainty rationale. We coded 1,294 scientific arguments in terms of a claim's consistency with current scientific consensus and whether explanations were model-based or knowledge-based, and categorized the sources of uncertainty (personal vs. scientific). We used chi-square and ANOVA tests to identify significant patterns. Results indicate that (1) a majority of students incorporated models as evidence to support their claims, (2) most students used model output results shown on graphs to confirm their claim rather than to explain simulated molecular processes, (3) students' dependence on model results and their uncertainty rating diminished as the dynamic climate models became more and more complex, (4) some students' misconceptions interfered with observing and interpreting model results or simulated processes, and (5) students' uncertainty sources reflected more frequently their assessment of personal knowledge or abilities related to the tasks than their critical examination of scientific evidence resulting from models. These findings have implications for teaching and research related to the integration of scientific argumentation and modeling practices to address complex Earth systems.
1994-07-15
[Fragmentary report extract; only acknowledgments and agenda fragments are recoverable.] The I/DBTWG co-chairs thank Ms. Linda Quicker of RAND for coordinating the I/DBTWG meeting. Agenda fragments include reports from the M&S Complex Data Task Force subgroups on Authoritative Data Sources (Mr. Bill Dunn) and on Categorization (Mr. Len Seligman), and a note on the need to address maintenance of the Authoritative Data Source directory by DMSO/IAC.
An integrated modelling framework for neural circuits with multiple neuromodulators.
Joshi, Alok; Youssofzadeh, Vahab; Vemana, Vinith; McGinnity, T M; Prasad, Girijesh; Wong-Lin, KongFatt
2017-01-01
Neuromodulators are endogenous neurochemicals that regulate biophysical and biochemical processes, which control brain function and behaviour, and are often the targets of neuropharmacological drugs. Neuromodulator effects are generally complex, partly owing to the involvement of broad innervation, co-release of neuromodulators, complex intra- and extrasynaptic mechanisms, the existence of multiple receptor subtypes and high interconnectivity within the brain. In this work, we propose an efficient yet sufficiently realistic computational neural modelling framework to study some of these complex behaviours. Specifically, we propose a novel dynamical neural circuit model that integrates the effective neuromodulator-induced currents based on various experimental data (e.g. electrophysiology, neuropharmacology and voltammetry). The model can incorporate multiple interacting brain regions, including neuromodulator sources, simulates efficiently, and is easily extendable to large-scale brain models, e.g. for neuroimaging purposes. As an example, we model a network of mutually interacting neural populations in the lateral hypothalamus, dorsal raphe nucleus and locus coeruleus, which are major sources of the neuromodulators orexin/hypocretin, serotonin and norepinephrine/noradrenaline, respectively, and which play significant roles in regulating many physiological functions. We demonstrate that such a model can provide predictions of systemic drug effects of the popular antidepressants (e.g. reuptake inhibitors), neuromodulator antagonists or their combinations. Finally, we developed user-friendly graphical user interface software for model simulation and visualization for both fundamental sciences and pharmacological studies. © 2017 The Authors.
NASA Astrophysics Data System (ADS)
Falta, R. W.
2004-05-01
Analytical solutions are developed that relate changes in the contaminant mass in a source area to the behavior of biologically reactive dissolved contaminant groundwater plumes. Based on data from field experiments, laboratory experiments, numerical streamtube models, and numerical multiphase flow models, the chemical discharge from a source region is assumed to be a nonlinear power function of the fraction of contaminant mass removed from the source zone. This function can approximately represent source zone mass discharge behavior over a wide range of site conditions ranging from simple homogeneous systems, to complex heterogeneous systems. A mass balance on the source zone with advective transport and first order decay leads to a nonlinear differential equation that is solved analytically to provide a prediction of the time-dependent contaminant mass discharge leaving the source zone. The solution for source zone mass discharge is coupled semi-analytically with a modified version of the Domenico (1987) analytical solution for three-dimensional reactive advective and dispersive transport in groundwater. The semi-analytical model then employs the BIOCHLOR (Aziz et al., 2000; Sun et al., 1999) transformations to model sequential first order parent-daughter biological decay reactions of chlorinated ethenes and ethanes in the groundwater plume. The resulting semi-analytic model thus allows for transient simulation of complex source zone behavior that is fully coupled to a dissolved contaminant plume undergoing sequential biological reactions. Analyses of several realistic scenarios show that substantial changes in the ground water plume can result from the partial removal of contaminant mass from the source zone. These results, however, are sensitive to the nature of the source mass reduction-source discharge reduction curve, and to the rates of degradation of the primary contaminant and its daughter products in the ground water plume. Aziz, C.E., C.J. Newell, J.R. Gonzales, P. Haas, T.P. Clement, and Y. Sun, 2000, BIOCHLOR Natural Attenuation Decision Support System User's Manual Version 1.0, US EPA Report EPA/600/R-00/008 Domenico, P.A., 1987, An analytical model for multidimensional transport of a decaying contaminant species, J. Hydrol., 91: 49-58. Sun, Y., J.N. Petersen, T.P. Clement, and R.S. Skeen, 1999, A new analytical solution for multi-species transport equations with serial and parallel reactions, Water Resour. Res., 35(1): 185-190.
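The source-zone part of such models reduces to a compact ordinary differential equation: if the discharge concentration scales as a power Γ of the remaining mass fraction, C_s(t) = C_0 (M(t)/M_0)^Γ, then a groundwater flux Q through the source gives dM/dt = −Q C_s − λ_s M. The sketch below integrates this numerically with illustrative parameter values (not data from the cited studies).

```python
import numpy as np

def source_depletion(m0_kg, c0_kg_m3, q_m3_yr, gamma, lam_per_yr=0.0,
                     t_end_yr=50.0, dt_yr=0.01):
    """Integrate dM/dt = -Q * C0 * (M/M0)**gamma - lam * M with forward Euler,
    and return times, remaining mass, and source discharge concentration
    C_s(t) = C0 * (M/M0)**gamma."""
    n = int(t_end_yr / dt_yr) + 1
    t = np.linspace(0.0, t_end_yr, n)
    m = np.empty(n)
    m[0] = m0_kg
    for i in range(1, n):
        frac = max(m[i - 1], 0.0) / m0_kg
        dm = -q_m3_yr * c0_kg_m3 * frac**gamma - lam_per_yr * m[i - 1]
        m[i] = max(m[i - 1] + dm * dt_yr, 0.0)
    c = c0_kg_m3 * (m / m0_kg) ** gamma
    return t, m, c

# Illustrative values: 1620 kg source, 100 mg/L initial discharge, 600 m3/yr flux.
t, m, c = source_depletion(m0_kg=1620.0, c0_kg_m3=0.1, q_m3_yr=600.0, gamma=1.0)
print("mass remaining after 20 yr (kg):", np.interp(20.0, t, m))
```

Setting Γ = 1 recovers exponential source decay; Γ < 1 gives a source whose discharge concentration holds up until most of the mass is removed, whereas Γ > 1 gives a long, low-concentration tail.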
Broadband radio spectro-polarimetric observations of high-Faraday-rotation-measure AGN
NASA Astrophysics Data System (ADS)
Pasetto, Alice; Carrasco-González, Carlos; O'Sullivan, Shane; Basu, Aritra; Bruni, Gabriele; Kraus, Alex; Curiel, Salvador; Mack, Karl-Heinz
2018-06-01
We present broadband polarimetric observations of a sample of high-Faraday-rotation-measure (high-RM) active galactic nuclei (AGN) using the Karl G. Jansky Very Large Array (JVLA) telescope from 1 to 2 GHz and 4 to 12 GHz. The sample (14 sources) consists of very compact sources (linear resolution smaller than ≈5 kpc) that are unpolarized at 1.4 GHz in the NRAO VLA Sky Survey (NVSS). Total intensity data have been modeled using a combination of synchrotron components, revealing complex structure in their radio spectra. Depolarization modeling, through so-called qu-fitting (the modeling of the fractional Stokes Q and U parameters), has been performed on the polarized data using an equation that attempts to simplify the process of fitting many different depolarization models. These models can be divided into two major categories: external depolarization (ED) and internal depolarization (ID) models. Understanding which of the two mechanisms is the most representative would help the qualitative understanding of the AGN jet environment and whether it is embedded in a dense external magneto-ionic medium or whether it is the jet wind that causes the high RM and strong depolarization. This could help to probe the jet magnetic field geometry (e.g., helical or otherwise). These new high-sensitivity data show complicated behavior in the total intensity and polarization radio spectra of individual sources. We observed the presence of several synchrotron components and Faraday components in their total intensity and polarized spectra. For the majority of our targets (12 sources), the depolarization seems to be caused by a turbulent magnetic field. Thus, our main selection criterion (lack of polarization at 1.4 GHz in the NVSS) results in a sample of sources with very large RMs and depolarization due to turbulent magnetic fields local to the source. These broadband JVLA data reveal the complexity of the polarization properties of this class of radio sources. We show how the new qu-fitting technique can be used to probe the magnetized radio source environment and to spectrally resolve the polarized components of unresolved radio sources.
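As one concrete example of the external-depolarization family used in qu-fitting, the Burn-style external Faraday dispersion model gives a complex fractional polarization P(λ²) = p0 exp[2i(χ0 + RM λ²)] exp(−2 σ_RM² λ⁴). The sketch below evaluates q and u from this model for illustrative parameters; it is not the authors' fitting code or their composite depolarization equation.

```python
import numpy as np

def external_faraday_dispersion(freq_hz, p0, chi0_rad, rm_rad_m2, sigma_rm_rad_m2):
    """Complex fractional polarization P = q + i*u for a Burn-style external
    Faraday screen: P(lam2) = p0 * exp(2i*(chi0 + RM*lam2)) * exp(-2*sigma_RM**2*lam2**2)."""
    c = 299792458.0
    lam2 = (c / np.asarray(freq_hz, dtype=float)) ** 2
    return (p0 * np.exp(2j * (chi0_rad + rm_rad_m2 * lam2))
            * np.exp(-2.0 * sigma_rm_rad_m2**2 * lam2**2))

# Evaluate q and u across the 1-2 GHz and 4-12 GHz bands for illustrative parameters.
freqs = np.concatenate([np.linspace(1e9, 2e9, 5), np.linspace(4e9, 12e9, 5)])
p = external_faraday_dispersion(freqs, p0=0.05, chi0_rad=0.3, rm_rad_m2=500.0, sigma_rm_rad_m2=80.0)
for f, val in zip(freqs, p):
    print(f"{f/1e9:5.2f} GHz  q = {val.real:+.4f}  u = {val.imag:+.4f}")
```

With σ_RM of this order the model source is essentially fully depolarized near 1.4 GHz while retaining measurable polarization above a few GHz, which is qualitatively the behavior the sample selection targets.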
Testing the uniqueness of mass models using gravitational lensing
NASA Astrophysics Data System (ADS)
Walls, Levi; Williams, Liliya L. R.
2018-06-01
The positions of images produced by the gravitational lensing of background sources provide insight into lens-galaxy mass distributions. Simple elliptical mass density profiles do not agree well with observations of the population of known quads. It has been shown that the most promising way to reconcile this discrepancy is via perturbations away from purely elliptical mass profiles, by assuming two superimposed, somewhat misaligned mass distributions: one is dark matter (DM), the other is a stellar distribution. In this work, we investigate whether mass modelling of individual lenses can reveal if the lenses have this type of complex structure or a simpler elliptical structure. In other words, we test mass model uniqueness, or how well an extended source lensed by a non-trivial mass distribution can be modelled by a simple elliptical mass profile. We used the publicly available lensing software, Lensmodel, to generate and numerically model gravitational lenses and “observed” image positions. We then compared “observed” and modelled image positions via the root mean square (RMS) of their difference. We report that, in most cases, the RMS is ≤0.05 arcsec when averaged over an extended source. Thus, we show it is possible to fit a smooth mass model to a system that contains a stellar component with varying levels of misalignment with a DM component, and hence mass modelling cannot differentiate between simple elliptical and more complex lenses.
Parasuram, Harilal; Nair, Bipin; D'Angelo, Egidio; Hines, Michael; Naldi, Giovanni; Diwakar, Shyam
2016-01-01
Local Field Potentials (LFPs) are population signals generated by complex spatiotemporal interaction of current sources and dipoles. Mathematical computations of LFPs allow the study of circuit functions and dysfunctions via simulations. This paper introduces LFPsim, a NEURON-based tool for computing population LFP activity and single neuron extracellular potentials. LFPsim was developed to be used on existing cable compartmental neuron and network models. Point source, line source, and RC based filter approximations can be used to compute extracellular activity. As a demonstration of efficient implementation, we showcase LFPs from mathematical models of electrotonically compact cerebellum granule neurons and morphologically complex neurons of the neocortical column. LFPsim reproduced neocortical LFP at 8, 32, and 56 Hz via current injection, in vitro post-synaptic N2a, N2b waves and in vivo T-C waves in cerebellum granular layer. LFPsim also includes a simulation of multi-electrode array of LFPs in network populations to aid computational inference between biophysical activity in neural networks and corresponding multi-unit activity resulting in extracellular and evoked LFP signals.
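The point-source approximation mentioned above sums each compartment's transmembrane current weighted by the inverse distance to the electrode, φ = Σ_n I_n / (4π σ |r − r_n|). The sketch below is a minimal stand-alone illustration with hypothetical geometry and conductivity; it is not LFPsim's NEURON implementation.

```python
import numpy as np

def point_source_lfp(electrode_pos_um, compartment_pos_um, membrane_currents_nA, sigma_S_per_m=0.3):
    """Extracellular potential (in mV) at one electrode from the point-source
    approximation: phi = sum_n I_n / (4*pi*sigma*r_n)."""
    r_m = np.linalg.norm(np.asarray(compartment_pos_um, dtype=float)
                         - np.asarray(electrode_pos_um, dtype=float), axis=1) * 1e-6
    i_a = np.asarray(membrane_currents_nA, dtype=float) * 1e-9
    phi_v = np.sum(i_a / (4.0 * np.pi * sigma_S_per_m * r_m))
    return phi_v * 1e3                       # volts -> millivolts

# Two-compartment toy "dipole": equal and opposite currents 200 um apart.
compartments = [[0.0, 0.0, 0.0], [0.0, 0.0, 200.0]]
currents_nA = [+0.5, -0.5]
print(point_source_lfp([50.0, 0.0, 0.0], compartments, currents_nA), "mV")
```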
NASA Astrophysics Data System (ADS)
Kaneko, Y.; Francois-Holden, C.; Hamling, I. J.; D'Anastasio, E.; Fry, B.
2017-12-01
The 2016 M7.8 Kaikōura (New Zealand) earthquake generated ground motions of over 1 g across a 200-km-long region and resulted in multiple onshore and offshore fault ruptures, a profusion of triggered landslides, and a regional tsunami. Here we examine the rupture evolution during the Kaikōura earthquake using multiple kinematic modelling methods based on local strong-motion and high-rate GPS data. Our kinematic models constrained by near-source data capture, in detail, a complex pattern of slowly (Vr < 2 km/s) propagating rupture from the south to the north, with over half of the moment release occurring in the northern source region, mostly on the Kekerengu fault, 60 seconds after the origin time. Interestingly, both models indicate rupture re-activation on the Kekerengu fault with a time separation of 11 seconds. We further conclude that most near-source waveforms can be explained by slip on the crustal faults, with little (<8%) or no contribution from the subduction interface.
Fundamental mass transfer modeling of emission of volatile organic compounds from building materials
NASA Astrophysics Data System (ADS)
Bodalal, Awad Saad
In this study, a mass transfer theory based model is presented for characterizing VOC emissions from building materials. A 3-D diffusion model is developed to describe the emissions of volatile organic compounds (VOCs) from individual sources. The formulation is then extended to include the emissions from composite sources (systems comprising an assemblage of individual sources). The key parameters of the model (the diffusion coefficient of the VOC in the source material, D, and the equilibrium partition coefficient, ke) were determined independently, i.e. without the use of chamber emission data. This procedure eliminated to a large extent the need for emission testing using environmental chambers, which is costly, time consuming, and may be subject to confounding sink effects. An experimental method is developed and implemented to measure directly the internal diffusion coefficient (D) and partition coefficient (ke). The use of the method is illustrated for three types of VOCs: (i) aliphatic hydrocarbons, (ii) aromatic hydrocarbons and (iii) aldehydes, through typical dry building materials (carpet, plywood, particleboard, vinyl floor tile, gypsum board, sub-floor tile and OSB). Correlations for predicting D and ke based solely on commonly available properties, such as molecular weight and vapour pressure, are then proposed for each product and type of VOC. These correlations can be used to estimate D and ke when direct measurement data are not available, and thus facilitate the prediction of VOC emissions from building materials using mass transfer theory. The VOC emissions from a sub-floor material (made of recycled automobile tires) and a particleboard are measured and predicted. Finally, a mathematical model to predict the diffusion coefficient through complex sources (floor adhesive) as a function of time was developed. This model (for the diffusion coefficient in complex sources) was then used to predict the emission rate from a material system (namely, substrate//glue//vinyl tile).
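The single-source model described above couples Fickian diffusion inside the material with an equilibrium partition condition at the material-air interface (gas-phase concentration at the surface equal to the material-phase concentration divided by ke). The sketch below is a crude 1-D explicit finite-difference illustration of emission into a well-mixed chamber; the slab geometry, D, ke, mass-transfer coefficient and chamber parameters are all assumed for illustration and are not measured values from the study.

```python
import numpy as np

def voc_emission(D=1e-11, ke=5000.0, C0=1e7, L=0.01, A=0.1, V=0.05, Q=2.5e-5,
                 hm=1e-3, hours=72.0, nx=40):
    """Explicit finite-difference sketch of VOC emission from a dry material slab.
    D   : diffusion coefficient in the material (m^2/s)
    ke  : material/air partition coefficient (dimensionless)
    C0  : initial VOC concentration in the material (ug/m^3 of material)
    L,A : slab thickness (m) and exposed area (m^2); V, Q: chamber volume (m^3) and flow (m^3/s)
    hm  : gas-phase mass-transfer coefficient at the surface (m/s)."""
    dx = L / nx
    dt = min(0.4 * dx * dx / D, 60.0)            # resolve both slab and chamber time scales
    nsteps = int(hours * 3600.0 / dt)
    c = np.full(nx, C0, dtype=float)             # concentration profile in the material
    c_air = 0.0                                  # chamber air concentration (ug/m^3)
    for _ in range(nsteps):
        flux = hm * (c[-1] / ke - c_air)         # emission flux at the surface (ug/m^2/s)
        lap = np.zeros(nx)
        lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
        lap[0] = c[1] - c[0]                     # zero-flux (sealed) bottom boundary
        lap[-1] = c[-2] - c[-1] - flux * dx / D  # surface node loses the emitted flux
        c += D * dt / dx**2 * lap
        c_air += dt * (A * flux - Q * c_air) / V # well-mixed chamber mass balance
    return c_air

print("chamber concentration after 72 h (ug/m^3):", voc_emission())
```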
CarbonSAFE Illinois - Macon County
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whittaker, Steve
CarbonSAFE Illinois is a feasibility study to develop a geologic storage complex in Macon County, Illinois, for commercial-scale storage of industrially sourced CO2. Feasibility activities are focused on the Mt. Simon Storage Complex; a step-out well will be drilled near existing storage sites (i.e., the Midwest Geological Sequestration Consortium's Illinois Basin – Decatur Project and the Illinois Industrial Carbon Capture and Storage Project) to further establish commercial viability of this complex and to evaluate EOR potential in a co-located oil-field trend. The Archer Daniels Midland facility (ethanol plant), City Water, Light, and Power in Springfield, Illinois (coal-fired power station), and other regional industries are potential sources of anthropogenic CO2 for storage at this complex. Site feasibility will be evaluated through drilling results, static and dynamic modeling, and quantitative risk assessment. Both studies will entail stakeholder engagement, consideration of infrastructure requirements, existing policy, and business models. Project data will help calibrate the National Risk Assessment Partnership (NRAP) Toolkit to better understand the risks of commercial-scale carbon storage.
Ground-Based Aerosol Measurements | Science Inventory ...
Atmospheric particulate matter (PM) is a complex chemical mixture of liquid and solid particles suspended in air (Seinfeld and Pandis 2016). Measurements of this complex mixture form the basis of our knowledge regarding particle formation, source-receptor relationships, data to test and verify complex air quality models, and how PM impacts human health, visibility, global warming, and ecological systems (EPA 2009). Historically, PM samples have been collected on filters or other substrates with subsequent chemical analysis in the laboratory and this is still the major approach for routine networks (Chow 2005; Solomon et al. 2014) as well as in research studies. In this approach, air, at a specified flow rate and time period, is typically drawn through an inlet, usually a size selective inlet, and then drawn through filters ...
2.5D Modeling of TEM Data Applied to Hydrogeological Studies in the Paraná Basin, Brazil
NASA Astrophysics Data System (ADS)
Bortolozo, C. A.; Porsani, J. L.; Santos, F. M.
2013-12-01
The transient electromagnetic method (TEM) is used all over the world and has shown great potential in hydrological studies, hazardous waste site characterization, mineral exploration, general geological mapping, and geophysical reconnaissance. However, the behavior of TEM fields is very complex and not yet fully understood. Forward modeling is one of the most common and effective ways to understand the physical behavior and significance of the electromagnetic responses of a TEM sounding. To date, only a limited number of solutions exist for the 2D forward TEM problem. Rarer still are descriptions of the three-component response of a 3D source over a 2D earth, the so-called 2.5D problem. The 2.5D approach is more realistic than the conventional 2D sources used previously, since the source (normally a square loop) usually cannot be realistically represented by a 2D approximation. At present the 2.5D model represents the only practical way of interpreting TEM data in terms of a complex earth, owing to the prohibitive amount of computer time and storage required for a full 3D model. In this work we developed a TEM modeling program for understanding the different responses and how the magnetic and electric fields, produced by loop sources at the air-earth interface, behave over different geoelectrical distributions. The example models are designed with hydrogeological studies in mind, since the main objective of this work is to detect different kinds of aquifers in the Paraná sedimentary basin, São Paulo State, Brazil. The program was developed in MATLAB, a language widely used in the scientific community.
Rainfall runoff modelling of the Upper Ganga and Brahmaputra basins using PERSiST.
Futter, M N; Whitehead, P G; Sarkar, S; Rodda, H; Crossman, J
2015-06-01
There are ongoing discussions about the appropriate level of complexity and sources of uncertainty in rainfall runoff models. Simulations for operational hydrology, flood forecasting or nutrient transport all warrant different levels of complexity in the modelling approach. More complex model structures are appropriate for simulations of land-cover dependent nutrient transport, while more parsimonious model structures may be adequate for runoff simulation. The appropriate level of complexity is also dependent on data availability. Here, we use PERSiST, a simple, semi-distributed dynamic rainfall-runoff modelling toolkit, to simulate flows in the Upper Ganges and Brahmaputra rivers. We present two sets of simulations driven by single time series of daily precipitation and temperature using simple (A) and complex (B) model structures based on uniform and hydrochemically relevant land covers, respectively. Models were compared based on ensembles of Bayesian Information Criterion (BIC) statistics. Equifinality was observed for parameters but not for model structures. Model performance was better for the more complex (B) structural representations than for parsimonious model structures. The results show that structural uncertainty is more important than parameter uncertainty. The ensembles of BIC statistics suggested that neither structural representation was preferable in a statistical sense. Simulations presented here confirm that relatively simple models with limited data requirements can be used to credibly simulate flows and water balance components needed for nutrient flux modelling in large, data-poor basins.
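As a concrete illustration of the model-comparison step, the short sketch below computes the Bayesian Information Criterion for two competing structures from their residuals, assuming Gaussian errors; the flow series and parameter counts are invented for the example and are not PERSiST output.

```python
import numpy as np

def bic_gaussian(observed, simulated, n_params):
    """BIC for a model with Gaussian residuals: n*ln(RSS/n) + k*ln(n)."""
    resid = np.asarray(observed) - np.asarray(simulated)
    n = resid.size
    rss = float(np.sum(resid**2))
    return n * np.log(rss / n) + n_params * np.log(n)

# Hypothetical daily flows (m^3/s): observations and two competing structures,
# a parsimonious model A (5 parameters) and a land-cover based model B (12).
rng = np.random.default_rng(1)
obs   = 100 + 30 * np.sin(np.linspace(0, 6 * np.pi, 365)) + rng.normal(0, 5, 365)
sim_a = 100 + 28 * np.sin(np.linspace(0, 6 * np.pi, 365))
sim_b = 100 + 30 * np.sin(np.linspace(0, 6 * np.pi, 365)) + rng.normal(0, 2, 365)

print("BIC model A:", round(bic_gaussian(obs, sim_a, 5), 1))
print("BIC model B:", round(bic_gaussian(obs, sim_b, 12), 1))   # lower BIC is preferred
```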
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, J.; Lacava, W.; Austin, J.
2015-02-01
This work investigates the minimum level of fidelity required to accurately simulate wind turbine gearboxes using state-of-the-art design tools. Excessive model fidelity, including drivetrain complexity, gearbox complexity, excitation sources, and imperfections, significantly increases computational time but may not provide a commensurate increase in the value of the results. Essential design parameters are evaluated, including the planetary load-sharing factor, gear tooth load distribution, and sun orbit motion. Based on the sensitivity study results, recommendations for the minimum model fidelities are provided.
NASA Astrophysics Data System (ADS)
Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.
2015-12-01
Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g. winds near the surface) required to produce a refined downwind hazard estimate. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model, and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source location estimate by several hundred percent (normalized by the distance from the source to the closest sampler), and to improve mass estimates by several orders of magnitude. Furthermore, it also has the ability to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations and to adjust the wind to provide a better match between the hazard prediction and the observations.
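The forward dispersion component referred to here is a Gaussian puff model; the sketch below evaluates the concentration from a single instantaneous puff with ground reflection, using illustrative power-law spread coefficients rather than any operational turbulence closure, so it is a stand-in for the forward operator rather than VIRSA itself.

```python
import numpy as np

def puff_concentration(x, y, z, t, Q=1.0, u=3.0, H=2.0):
    """Concentration (kg/m^3) from one instantaneous puff of mass Q (kg)
    released at (0, 0, H), advected by a uniform wind u (m/s) along x,
    with ground reflection. Sigma growth laws are illustrative assumptions."""
    travel = max(u * t, 1.0)                 # m, avoid a zero-size puff
    sx = sy = 0.08 * travel**0.9             # assumed horizontal spread
    sz = 0.06 * travel**0.9                  # assumed vertical spread
    norm = Q / ((2 * np.pi)**1.5 * sx * sy * sz)
    ex = np.exp(-(x - u * t)**2 / (2 * sx**2))
    ey = np.exp(-y**2 / (2 * sy**2))
    ez = np.exp(-(z - H)**2 / (2 * sz**2)) + np.exp(-(z + H)**2 / (2 * sz**2))
    return norm * ex * ey * ez

# Concentration 10 minutes after release, 1.5 m above ground, on the plume axis
print(puff_concentration(x=1800.0, y=0.0, z=1.5, t=600.0))
```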
Multiphase Modelling of Bacteria Removal in a CSO Stream
Indicator bacteria are an important determinant of water quality in many water resources management situations. They are also one of the more complex phenomena to model and predict. Sources abound, the populations are dynamic and influenced by many factors, and mobility through...
3D Seismic Imaging using Marchenko Methods
NASA Astrophysics Data System (ADS)
Lomas, A.; Curtis, A.
2017-12-01
Marchenko methods are novel, data-driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface - including complex multiply-reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming points (image or virtual receiver points). Such images are then free of artefacts from multiply-scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain both the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with a variable density but a constant velocity (3000 m/s). Along the surface of this model (z=0), in both the x and y directions, are co-located sources and receivers at 20-meter intervals. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source (1200 m, 500 m and 400 m) to the surface. For comparison the true solution is given in figure (c), which shows a good match to figure (b). While these redatuming and imaging methods are still in their infancy, having first been developed in 2D in 2012, we have extended them to 3D media and wavefields. We show that while the wavefield effects may be more complex in 3D, Marchenko methods are still valid, and 3D images that are free of multiple-related artefacts are a realistic possibility.
OpenFOAM: Open source CFD in research and industry
NASA Astrophysics Data System (ADS)
Jasak, Hrvoje
2009-12-01
The current focus of development in industrial Computational Fluid Dynamics (CFD) is the integration of CFD into Computer-Aided product development, geometrical optimisation, robust design and similar. On the other hand, CFD research aims to extend the boundaries of practical engineering use in "non-traditional" areas. The requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, with object orientation, library components and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object-oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of partial differential equations in software, with code functionality provided in library form. The open source deployment and development model allows the user to achieve the desired versatility in physical modeling without sacrificing complex geometry support and execution efficiency.
EMITTING ELECTRONS AND SOURCE ACTIVITY IN MARKARIAN 501
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mankuzhiyil, Nijil; Ansoldi, Stefano; Persic, Massimo
2012-07-10
We study the variation of the broadband spectral energy distribution (SED) of the BL Lac object Mrk 501 as a function of source activity, from quiescent to flaring. Through χ²-minimization we model eight simultaneous SED data sets with a one-zone synchrotron self-Compton (SSC) model, and examine how model parameters vary with source activity. The emerging variability pattern of Mrk 501 is complex, with the Compton component arising from γ-e scatterings that sometimes are (mostly) Thomson and sometimes (mostly) extreme Klein-Nishina. This can be seen from the variation of the Compton to synchrotron peak distance according to source state. The underlying electron spectra are faint/soft in quiescent states and bright/hard in flaring states. A comparison with Mrk 421 suggests that the typical values of the SSC parameters are different in the two sources; however, in both jets the energy density is particle-dominated in all states.
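The χ²-minimization step can be illustrated without reproducing the full SSC code: the sketch below fits a toy log-parabola stand-in for one SED component with scipy.optimize.curve_fit and reports the reduced χ²; the data points and the functional form are assumptions made only for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy stand-in for one SED component (a log-parabola in log nu - log nuFnu space);
# the actual study fits a full one-zone SSC model, which is not reproduced here.
def log_parabola(log_nu, log_peak_flux, log_nu_peak, curvature):
    return log_peak_flux - curvature * (log_nu - log_nu_peak)**2

# Hypothetical simultaneous SED points (log10 Hz, log10 erg cm^-2 s^-1) with errors
log_nu   = np.array([22.0, 23.5, 25.0, 26.5, 28.0])
log_flux = np.array([-10.8, -10.3, -10.1, -10.4, -11.0])
sigma    = np.full_like(log_flux, 0.1)

popt, pcov = curve_fit(log_parabola, log_nu, log_flux, sigma=sigma,
                       p0=[-10.0, 25.0, 0.1], absolute_sigma=True)
resid = (log_flux - log_parabola(log_nu, *popt)) / sigma
chi2 = float(np.sum(resid**2))
print("best-fit parameters:", popt, " chi^2 / dof:", chi2 / (log_nu.size - 3))
```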
The central purpose of our study was to examine the performance of the United States Environmental Protection Agency's (EPA) nonreactive Gaussian air quality dispersion model, the Industrial Source Complex Short Term Model (ISCST3) Version 98226, in predicting polychlorinated dib...
USDA-ARS's Scientific Manuscript database
Water quality models address nonpoint source pollution from agricultural land at a range of scales and complexities and involve a variety of input parameters. It is often difficult for conservationists and stakeholders to understand and reconcile water quality results from different models. However,...
USDA-ARS's Scientific Manuscript database
Land surface temperature (LST) provides valuable information for quantifying root-zone water availability, evapotranspiration (ET) and crop condition as well as providing useful information for constraining prognostic land surface models. This presentation describes a robust but relatively simple LS...
NASA Astrophysics Data System (ADS)
Box, Paul W.
GIS and spatial analysis are suited mainly to static pictures of the landscape, but many of the processes that need exploring are dynamic in nature. Dynamic processes can be complex when put in a spatial context; our ability to study such processes will probably come with advances in understanding complex systems in general. Cellular automata and agent-based models are two prime candidates for exploring complex spatial systems, but are difficult to implement. Innovative tools that help build complex simulations will create larger user communities, who will probably find novel solutions for understanding complexity. A significant source of such innovations is likely to be the collective efforts of hobbyists and part-time programmers, who have been dubbed "garage-band scientists" in the popular press.
NASA Astrophysics Data System (ADS)
Belmont, P.; Viparelli, E.; Parker, G.; Lauer, W.; Jennings, C.; Gran, K.; Wilcock, P.; Melesse, A.
2008-12-01
Modeling sediment fluxes and pathways in complex landscapes is limited by our inability to accurately measure and integrate heterogeneous, spatially distributed sources into a single coherent, predictive geomorphic transport law. In this study, we partition the complex landscape of the Le Sueur River watershed into five distributed primary source types: bluffs (including strath terrace caps), ravines, streambanks, tributaries, and flat, agriculture-dominated uplands. The sediment contribution of each source is quantified independently and parameterized for use in a sand and mud routing model. Rigorous modeling of the evolution of this landscape and the sediment flux from each source type requires consideration of substrate characteristics, heterogeneity, and spatial connectivity. The subsurface architecture of the Le Sueur drainage basin is defined by a layer-cake sequence of fine-grained tills, interbedded with fluvioglacial sands. Nearly instantaneous baselevel fall of 65 m occurred at 11.5 ka, as a result of the catastrophic draining of glacial Lake Agassiz through the Minnesota River, to which the Le Sueur is a tributary. The major knickpoint that was generated by that event has propagated 40 km into the Le Sueur network, initiating an incised river valley with tall, retreating bluffs and actively incising ravines. Loading estimates constrained by river gaging records that bound the knick zone indicate that bluffs connected to the river are retreating at an average rate of less than 2 cm per year and ravines are incising at an average rate of less than 0.8 mm per year, consistent with the Holocene average incision rate on the main stem of the river of less than 0.6 mm per year. Ongoing work with cosmogenic nuclide sediment tracers, ground-based LiDAR, historic aerial photos, and field mapping will be combined to represent the diversity of erosional environments and processes in a single coherent routing model.
NASA Astrophysics Data System (ADS)
Fathy, Alaa; Sabry, Yasser M.; Khalil, Diaa A.
2017-10-01
Multimode fibers (MMF) have many applications in illumination, spectroscopy, sensing and even in optical communication systems. In this work, we present a model for the MMF output field that treats the fiber end as a quasi-homogeneous source. The fiber end is modeled by a group of partially coherent elementary sources, spatially shifted and uncorrelated with each other. The elementary source distribution is derived from the far-field intensity measurement, while the weighting function of the sources is derived from the fiber-end intensity measurement. The model is compared with practical measurements for fibers with different core/cladding diameters at different propagation distances and for different input excitations: laser, white light and LED. The obtained results show a normalized root mean square error of less than 8% in the intensity profile in most cases, even when the fiber end surface is not perfectly cleaved. Also, the comparison with the Gaussian-Schell model results shows a better agreement with the measurement. In addition, the complex degree of coherence derived from the model results is compared with the theoretical predictions of the modified van Cittert-Zernike theorem, showing very good agreement, which strongly supports the assumption that the large-core MMF can be considered a quasi-homogeneous source.
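A minimal numerical sketch of the quasi-homogeneous picture is given below: the output intensity at a propagation distance is modelled as the convolution of a weighting function (standing in for the measured near-field intensity) with the intensity profile of a single propagated elementary source. The Gaussian elementary source, the flat-top weighting, and all dimensions are assumptions for illustration, not the paper's measured distributions.

```python
import numpy as np

x = np.linspace(-300e-6, 300e-6, 2001)            # m, observation coordinate
dx = x[1] - x[0]

core_radius = 100e-6
p = np.where(np.abs(x) <= core_radius, 1.0, 0.0)  # weighting ~ flat-top near field (assumed)

w0, z, lam = 5e-6, 2e-3, 633e-9                   # elementary-source waist, distance, wavelength (assumed)
zr = np.pi * w0**2 / lam                          # Rayleigh range
wz = w0 * np.sqrt(1 + (z / zr)**2)                # Gaussian-beam width at z
elementary = np.exp(-2 * x**2 / wz**2)            # propagated elementary-source intensity

# Quasi-homogeneous superposition: shifted, uncorrelated elementary sources add in intensity
intensity = np.convolve(p, elementary, mode="same") * dx
intensity /= intensity.max()
print("1/e^2 half-width of modelled output: "
      f"{x[intensity >= np.exp(-2)].max() * 1e6:.1f} um")
```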
AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source
NASA Astrophysics Data System (ADS)
Nightingale, J. W.; Dye, S.; Massey, Richard J.
2018-05-01
This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single-component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is automatically chosen via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies and lensing geometries. The method's performance is excellent, with accurate light, mass and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.
MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields
NASA Astrophysics Data System (ADS)
Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria
2015-08-01
We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale, and that a multihomogeneous model is needed to explain their complex scaling behaviour. In order to perform this task, we first introduce fractional-degree homogeneous fields, to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models, such as the infinite line mass; and (iii) differently from the integer-degree case, the fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales, by a simple search, in any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimations which we call the multihomogeneous model. This defines a new technique of source-parameter estimation (Multi-HOmogeneity Depth Estimation, MHODE) that permits retrieval of the source parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness in a real-case example as well. These applications show the usefulness of the new concepts, multihomogeneity and fractional homogeneity degree, for obtaining valid estimates of the source parameters in a consistent theoretical framework, thus overcoming the limitations imposed by global homogeneity on widespread methods such as Euler deconvolution.
NASA Astrophysics Data System (ADS)
Pournazeri, Sam; Princevac, Marko; Venkatram, Akula
2012-08-01
Field and laboratory studies have been conducted to investigate the effect of surrounding buildings on the plume rise from low-level buoyant sources, such as distributed power generators. The field experiments were conducted in Palm Springs, California, USA in November 2010 and plume rise from a 9.3 m stack was measured. In addition to the field study, a laboratory study was conducted in a water channel to investigate the effects of surrounding buildings on plume rise under relatively high wind-speed conditions. Different building geometries and source conditions were tested. The experiments revealed that plume rise from low-level buoyant sources is highly affected by the complex flows induced by buildings stationed upstream and downstream of the source. The laboratory results were compared with predictions from a newly developed numerical plume-rise model. Using the flow measurements associated with each building configuration, the numerical model accurately predicted plume rise from low-level buoyant sources that are influenced by buildings. This numerical plume rise model can be used as a part of a computational fluid dynamics model.
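For reference, the undisturbed baseline that such building effects perturb is often taken as the classic Briggs buoyant-rise relation; the sketch below evaluates the transitional rise Δh = 1.6 F^(1/3) x^(2/3) / u for a hypothetical low-level stack, with source parameters assumed only for illustration and no building influence included.

```python
import numpy as np

def briggs_buoyant_rise(x, u, v_s, r_s, T_s, T_a=293.0, g=9.81):
    """Transitional Briggs rise dh(x) (m) for a buoyant release in neutral flow,
    without any building influence (the baseline the experiments perturb)."""
    F = g * v_s * r_s**2 * (T_s - T_a) / T_s          # buoyancy flux, m^4/s^3
    return 1.6 * F**(1.0 / 3.0) * x**(2.0 / 3.0) / u

# Hypothetical low-level source roughly like a distributed generator stack:
# 0.5 m exit radius, 5 m/s exit speed, 400 K exhaust, 3 m/s ambient wind.
x = np.array([25.0, 50.0, 100.0, 200.0])              # downwind distance, m
print(briggs_buoyant_rise(x, u=3.0, v_s=5.0, r_s=0.5, T_s=400.0))
```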
NASA Astrophysics Data System (ADS)
Mai, P. M.; Schorlemmer, D.; Page, M.
2012-04-01
Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; and (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained from solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state-of-the-art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and how they assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.
Inferring tidal wetland stability from channel sediment fluxes: observations and a conceptual model
Ganju, Neil K.; Nidzieko, Nicholas J.; Kirwan, Matthew L.
2013-01-01
Anthropogenic and climatic forces have modified the geomorphology of tidal wetlands over a range of timescales. Changes in land use, sediment supply, river flow, storminess, and sea level alter the layout of tidal channels, intertidal flats, and marsh plains; these elements define wetland complexes. Diagnostically, measurements of net sediment fluxes through tidal channels are high-temporal resolution, spatially integrated quantities that indicate (1) whether a complex is stable over seasonal timescales and (2) what mechanisms are leading to that state. We estimated sediment fluxes through tidal channels draining wetland complexes on the Blackwater and Transquaking Rivers, Maryland, USA. While the Blackwater complex has experienced decades of degradation and been largely converted to open water, the Transquaking complex has persisted as an expansive, vegetated marsh. The measured net export at the Blackwater complex (1.0 kg/s or 0.56 kg/m2/yr over the landward marsh area) was caused by northwesterly winds, which exported water and sediment on the subtidal timescale; tidally forced net fluxes were weak and precluded landward transport of suspended sediment from potential seaward sources. Though wind forcing also exported sediment at the Transquaking complex, strong tidal forcing and proximity to a turbidity maximum led to an import of sediment (0.031 kg/s or 0.70 kg/m2/yr). This resulted in a spatially averaged accretion of 3.9 mm/yr, equaling the regional relative sea level rise. Our results suggest that in areas where seaward sediment supply is dominant, seaward wetlands may be more capable of withstanding sea level rise over the short term than landward wetlands. We propose a conceptual model to determine a complex's tendency toward stability or instability based on sediment source, wetland channel location, and transport mechanisms. Wetlands with a reliable portfolio of sources and transport mechanisms appear better suited to offset natural and anthropogenic loss.
Nuclear Physics Meets the Sources of the Ultra-High Energy Cosmic Rays.
Boncioli, Denise; Fedynitch, Anatoli; Winter, Walter
2017-07-07
The determination of the injection composition of cosmic ray nuclei within astrophysical sources requires sufficiently accurate descriptions of the source physics and the propagation - apart from controlling astrophysical uncertainties. We therefore study the implications of nuclear data and models for cosmic ray astrophysics, which involves the photo-disintegration of nuclei up to iron in astrophysical environments. We demonstrate that the impact of nuclear model uncertainties is potentially larger in environments with non-thermal radiation fields than in the cosmic microwave background. We also study the impact of nuclear models on the nuclear cascade in a gamma-ray burst radiation field, simulated at a level of complexity comparable to the most precise cosmic ray propagation code. We conclude with an isotope chart describing which information is in principle necessary to describe nuclear interactions in cosmic ray sources and propagation.
Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith
2018-01-02
Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. It both provides insight into model behaviour and identifies the sets of rate parameters of interest.
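The pruning step in history matching is driven by an implausibility measure; the sketch below evaluates it for a single output over a 1-D rate-parameter range, using an invented fast surrogate in place of a fitted Gaussian-process emulator, with observation and model-discrepancy variances chosen purely for illustration.

```python
import numpy as np

def implausibility(emulator_mean, emulator_var, z, var_obs, var_disc):
    """Univariate implausibility I(x); inputs with I(x) > 3 are usually cut."""
    return np.abs(z - emulator_mean) / np.sqrt(emulator_var + var_obs + var_disc)

# Hypothetical wave-1 cut over a 1-D rate-parameter range. The "emulator" here is
# just an illustrative fast surrogate (a quadratic trend with inflated variance
# away from the design), not a fitted Gaussian-process emulator.
theta = np.linspace(0.0, 2.0, 401)              # candidate rate parameter values
em_mean = 3.0 * (theta - 0.7)**2 + 1.0          # surrogate expectation of model output
em_var = 0.05 + 0.2 * (theta - 1.0)**2          # surrogate (emulator) variance

z, var_obs, var_disc = 1.4, 0.04, 0.02          # observed trend value and uncertainties
I = implausibility(em_mean, em_var, z, var_obs, var_disc)
non_implausible = theta[I <= 3.0]
print(f"non-implausible range: {non_implausible.min():.2f} - {non_implausible.max():.2f}")
```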
Modeling the Complexities of Water and Hygiene in Limpopo Province South Africa
NASA Astrophysics Data System (ADS)
Mellor, J. E.; Smith, J. A.; Learmonth, G.; Netshandama, V.; Dillingham, R.
2012-12-01
Access to sustainable water and sanitation services is one of the biggest challenges the developing world faces as an increasing number of people inhabit those areas. Inadequate access to water and sanitation infrastructure often leads children to drink poor quality water which can result in early childhood diarrhea (ECD). Repeated episodes of ECD can cause serious problems such as growth stunting, cognitive impairment, and even death. Although researchers have long studied the connection between poor access to water and hygiene facilities and ECD, most studies have relied on intervention-control methods to study the effects of singular interventions. Such studies are time-consuming, costly, and fail to acknowledge that the causes and prevention strategies for ECD are numerous and complex. An alternate approach is to think of a community as a complex system in which the engineered, natural and social environments interact in ways that are not easily predicted. Such complex systems have no central or coordinating mechanism and may exhibit emergent behavior which can be counterintuitive and lead to valuable insights. The goal of this research is to develop a robust, quantitative understanding of the complex pathogen transmission chain that leads to ECD. To realize this goal, we have developed an Agent-Based Model (ABM) which simulates individual community member behavior. We have validated this transdisciplinary model with four years of field data from a community in Limpopo Province, South Africa. Our model incorporates data such as household water source preferences, collection habits, household- and source-water quality, water-source reliability and biological regrowth. Our outcome measures are household water quality, ECD incidences, and child growth stunting. This technique allows us to test hypotheses on the computer. Future researchers can implement promising interventions with our partner institution, the University of Venda, and the model can be refined as the results of those interventions become available. Our model accurately reproduces current pathogen transport through the communities and child growth stunting. An intensive sensitivity analysis found that biological regrowth, biofilm layers and collection habits are all factors in pathogen transmission. We also report on the effects of multiple interventions and our exploration of emergent behavior. Our results indicate that the dominant source of fecal-oral transmission is through the contamination of drinking water after collection, but before consumption. Furthermore sub-optimal interventions such as improved, but still inconsistent water treatment have little protective effect against ECD. Finally, interventions such as the introduction of point-of-use water treatment technologies or improved water-storage practices are the best ECD prevention strategies. The complexities of the causes and prevention strategies of pathogen loading and ECD in the developing world are poorly understood. This project goes beyond previous studies through its ability to model the complex engineered/natural/social pathogen transmission chain using an ABM informed by field data. We hope that this and similar tools may be used by scientists, policy-makers and humanitarian organizations when designing community-level interventions to prevent ECD in similar settings around the world.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the effects of emissions on air quality, for example, an assessment using EPA's community multi-scale... Source Complex Model or Emission and Dispersion Model System) to determine the effects of emissions on... that it is not a significant precursor, and (iii) Volatile organic compounds (VOC) and ammonia (NH3...
Code of Federal Regulations, 2012 CFR
2012-07-01
... the effects of emissions on air quality, for example, an assessment using EPA's community multi-scale... Source Complex Model or Emission and Dispersion Model System) to determine the effects of emissions on... that it is not a significant precursor, and (iii) Volatile organic compounds (VOC) and ammonia (NH3...
Code of Federal Regulations, 2013 CFR
2013-07-01
... the effects of emissions on air quality, for example, an assessment using EPA's community multi-scale... Source Complex Model or Emission and Dispersion Model System) to determine the effects of emissions on... that it is not a significant precursor, and (iii) Volatile organic compounds (VOC) and ammonia (NH3...
Single ICMEs and Complex Transient Structures in the Solar Wind in 2010 - 2011
NASA Astrophysics Data System (ADS)
Rodkin, D.; Slemzin, V.; Zhukov, A. N.; Goryaev, F.; Shugay, Y.; Veselovsky, I.
2018-05-01
We analyze the statistics, solar sources, and properties of interplanetary coronal mass ejections (ICMEs) in the solar wind. The total number of coronal mass ejections (CMEs) registered in the Coordinated Data Analysis Workshops (CDAW) catalog during the first eight years of Cycle 24 was 61% larger than in the same period of Cycle 23, but the number of X-ray flares registered by the Geostationary Operational Environmental Satellite (GOES) was 20% smaller because the solar activity was lower. The total number of ICMEs in the given period of Cycle 24 in the Richardson and Cane list was 29% smaller than in Cycle 23, which may be explained by a noticeable number of non-classified ICME-like events at the beginning of Cycle 24. For the period January 2010 - August 2011, we identify the solar sources of the ICMEs included in the Richardson and Cane list. The solar sources of the ICMEs were determined from coronagraph observations of the Earth-directed CMEs, supplemented by modeling of their propagation in the heliosphere using kinematic models (a ballistic and a drag-based model). A detailed analysis of the ICME solar sources in the period under study showed that in 11 cases out of 23 (48%), the observed ICME could be associated with two or more sources. For multiple-source events, the resulting solar wind disturbances can be described as complex (merged) structures that are caused by stream interactions, with properties depending on the type of the participating streams. As a reliable marker to identify interacting streams and their sources, we used the plasma ion composition because it freezes in the low corona and remains unchanged in the heliosphere. According to the ion composition signatures, we classify these cases into three types: complex ejecta originating from weak and strong CME-CME interactions, as well as merged interaction regions (MIRs) originating from CME-high-speed stream (HSS) interactions. We describe temporal profiles of the ion composition for the single-source and multi-source solar wind structures and compare them with the ICME signatures determined from the kinematic and magnetic field parameters of the solar wind. In single-source events, the ion charge state, as a rule, has a one-peak enhancement with an average duration of about one day, which is similar to the mean ICME duration of 1.12 days derived from the Richardson and Cane list. In the multi-source events, the total profile of the ion charge state consists of a sequence of enhancements associated with the interaction between the participating streams. On average, the total duration of the complex structures that appear as a result of the CME-CME and CME-HSS interactions, as determined from their ion composition, is 2.4 days, which is more than twice as long as that of the single-source events.
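One of the kinematic tools mentioned, the drag-based model, can be sketched in a few lines: the CME front relaxes toward the ambient wind speed according to dv/dt = -γ|v - w|(v - w). The drag parameter, wind speed, and launch conditions below are typical literature values used only as an illustration, not fits to any event in the study.

```python
def drag_based_model(r0, v0, w=400.0, gamma=0.2e-7, r_target=1.496e11, dt=600.0):
    """Integrate dv/dt = -gamma * |v - w| * (v - w) from heliocentric distance r0 (m)
    and speed v0 (km/s) until the front reaches r_target (1 AU by default).
    gamma is in km^-1 and w is the ambient solar wind speed in km/s.
    Returns the transit time (hours) and arrival speed (km/s)."""
    r, v, t = r0 / 1e3, v0, 0.0              # work in km, km/s, s
    target = r_target / 1e3
    while r < target:
        v += -gamma * abs(v - w) * (v - w) * dt
        r += v * dt
        t += dt
    return t / 3600.0, v

# Hypothetical fast CME launched at 20 solar radii with a 1200 km/s front speed
t_arr, v_arr = drag_based_model(r0=20 * 6.96e8, v0=1200.0)
print(f"transit time ~ {t_arr:.1f} h, arrival speed ~ {v_arr:.0f} km/s")
```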
Opportunities and Challenges in Supply-Side Simulation: Physician-Based Models
Gresenz, Carole Roan; Auerbach, David I; Duarte, Fabian
2013-01-01
Objective To provide a conceptual framework and to assess the availability of empirical data for supply-side microsimulation modeling in the context of health care. Data Sources Multiple secondary data sources, including the American Community Survey, Health Tracking Physician Survey, and SK&A physician database. Study Design We apply our conceptual framework to one entity in the health care market—physicians—and identify, assess, and compare data available for physician-based simulation models. Principal Findings Our conceptual framework describes three broad types of data required for supply-side microsimulation modeling. Our assessment of available data for modeling physician behavior suggests broad comparability across various sources on several dimensions and highlights the need for significant integration of data across multiple sources to provide a platform adequate for modeling. A growing literature provides potential estimates for use as behavioral parameters that could serve as the models' engines. Sources of data for simulation modeling that account for the complex organizational and financial relationships among physicians and other supply-side entities are limited. Conclusions A key challenge for supply-side microsimulation modeling is optimally combining available data to harness their collective power. Several possibilities also exist for novel data collection. These have the potential to serve as catalysts for the next generation of supply-side-focused simulation models to inform health policy. PMID:23347041
Matlab Geochemistry: An open source geochemistry solver based on MRST
NASA Astrophysics Data System (ADS)
McNeece, C. J.; Raynaud, X.; Nilsen, H.; Hesse, M. A.
2017-12-01
The study of geological systems often requires the solution of complex geochemical relations. To address this need we present an open source geochemical solver based on the Matlab Reservoir Simulation Toolbox (MRST) developed by SINTEF. The implementation supports non-isothermal multicomponent aqueous complexation, surface complexation, ion exchange, and dissolution/precipitation reactions. The suite of tools available in MRST allows for rapid model development, in particular the incorporation of geochemical calculations into transport simulations involving multiple phases, complex domain geometry and geomechanics. Different numerical schemes and additional physics can be easily incorporated into the existing tools through the object-oriented framework employed by MRST. The solver leverages the automatic differentiation tools available in MRST to solve arbitrarily complex geochemical systems with any choice of species or element concentration as input. Four mathematical approaches make the solver quite robust: 1) the choice of chemical elements as the basis components makes all entries in the composition matrix positive, thus preserving convexity; 2) a log variable transformation is used, which transfers the nonlinearity to the convex composition matrix; 3) a priori bounds on variables are calculated from the structure of the problem, constraining Newton's path; and 4) an initial guess is calculated implicitly by sequentially adding model complexity. As a benchmark we compare the model to experimental and semi-analytic solutions of the coupled salinity-acidity transport system. Together with the reservoir simulation capabilities of MRST, the solver offers a promising tool for geochemical simulations in reservoir domains for applications in a diversity of fields, from enhanced oil recovery to radionuclide storage.
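The log-variable idea in point 2 can be illustrated with a stand-alone toy speciation problem (acetic acid in water): solving for log10 concentrations keeps all concentrations positive and tames the nonlinearity. This is a Python illustration under assumed constants, not code from the MRST module (which is MATLAB).

```python
import numpy as np
from scipy.optimize import fsolve

log_Kw, log_Ka, C_T = -14.0, -4.76, 1.0e-3      # equilibrium constants, total acid (mol/L), assumed

def residuals(logc):
    lH, lOH, lHA, lA = logc
    H, OH, HA, A = 10.0**lH, 10.0**lOH, 10.0**lHA, 10.0**lA
    return [lH + lOH - log_Kw,                  # water self-ionisation (log form)
            lH + lA - lHA - log_Ka,             # acid dissociation (log form)
            (HA + A - C_T) / C_T,               # mass balance on the acid (scaled)
            (H - A - OH) / C_T]                 # charge balance (scaled)

guess = np.log10([1e-7, 1e-7, C_T / 2, C_T / 2])
logc = fsolve(residuals, guess)
print(f"pH = {-logc[0]:.2f}")                   # ~3.9 for 1 mM acetic acid
```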
Computationally efficient thermal-mechanical modelling of selective laser melting
NASA Astrophysics Data System (ADS)
Yang, Yabin; Ayas, Can
2017-10-01
Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method for producing high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which is coupled to a mechanical boundary value problem to calculate displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is obtained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution for a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. This analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometers. In turn, the semi-analytical thermal model allows a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced in the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.
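The superposition idea can be sketched compactly by discretising one scan vector into instantaneous point sources on the surface of a semi-infinite medium (the insulated-surface image term gives the factor of 2) and summing their analytical temperature fields. Laser and material parameters below are generic assumed values, and the sketch omits the complementary boundary-correction field described in the paper.

```python
import numpy as np

# Minimal sketch of the superposition of analytical heat-source solutions.
# Material data are generic stainless-steel-like values, chosen for illustration only.
k, rho, cp = 20.0, 7800.0, 500.0        # W/mK, kg/m^3, J/kgK (assumed)
alpha = k / (rho * cp)                  # thermal diffusivity, m^2/s
P, v = 200.0, 0.8                       # laser power (W) and scan speed (m/s), assumed
eta = 0.4                               # absorbed fraction (assumed)

dt_src = 2e-6                           # emission time step along the scan vector
t_obs = 1e-3                            # observation time, s
t_emit = np.arange(0.0, t_obs, dt_src)  # times at which energy packets were released
x_src = v * t_emit                      # source positions along the scan direction

def temperature_rise(x, y, z):
    """Temperature rise (K) at (x, y, z), with z >= 0 below the surface, at t_obs,
    from the superposed instantaneous point sources (factor 2 = surface image)."""
    tau = t_obs - t_emit                               # elapsed time of each packet
    r2 = (x - x_src)**2 + y**2 + z**2
    dT = 2 * eta * P * dt_src / (rho * cp * (4 * np.pi * alpha * tau)**1.5) \
         * np.exp(-r2 / (4 * alpha * tau))
    return float(np.sum(dT))

# Point 50 um behind the instantaneous laser position, 30 um below the surface
print(f"{temperature_rise(v * t_obs - 50e-6, 0.0, 30e-6):.0f} K above ambient")
```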
Spectral inversion of frequency-domain IP data obtained in Haenam, South Korea
NASA Astrophysics Data System (ADS)
Kim, B.; Nam, M. J.; Son, J. S.
2017-12-01
The spectral induced polarization (SIP) method, which uses a range of source frequencies, has been applied not only to mineral exploration but also to engineering and environmental problems. SIP interpretation first inverts the data at individual frequencies to obtain complex resistivity structures, which are then further analyzed with the Cole-Cole model to explain the frequency-dependent characteristics. However, because fitting the Cole-Cole model is difficult, there is a tendency to interpret a complex resistivity structure inverted from single-frequency data only: the so-called "complex resistivity survey". Further, simultaneous inversion of multi-frequency SIP data, rather than inversion of a single-frequency data set, has been studied to reduce the ambiguity and artefacts of independent single-frequency inversions in obtaining a complex resistivity structure, even though complex resistivity is dispersive with respect to source frequency. Employing the simultaneous inversion method, this study inverts field SIP data obtained over an epithermal mineralized area near Haenam, at the southernmost tip of South Korea. The area has a polarizable structure because of extensive hydrothermal alteration and gold-silver deposits. After the inversion, we compare the results obtained from multi-frequency data with those from a single-frequency data set to evaluate the performance of the simultaneous inversion of multi-frequency SIP data.
Polarization and long-term variability of Sgr A* X-ray echo
NASA Astrophysics Data System (ADS)
Churazov, E.; Khabibullin, I.; Ponti, G.; Sunyaev, R.
2017-06-01
We use a model of the molecular gas distribution within ~100 pc from the centre of the Milky Way (Kruijssen, Dale & Longmore) to simulate the time evolution and polarization properties of the reflected X-ray emission associated with past outbursts from Sgr A*. While this model is too simple to describe the complexity of the true gas distribution, it illustrates the importance and power of long-term observations of the reflected emission. We show that the variable part of the X-ray emission observed by Chandra and XMM-Newton from prominent molecular clouds is well described by a pure reflection model, providing strong support for the reflection scenario. While the identification of Sgr A* as the primary source of this reflected emission is already a very appealing hypothesis, a decisive test of this model can be provided by future X-ray polarimetric observations, which will allow placing constraints on the location of the primary source. In addition, X-ray polarimeters (e.g. XIPE) have sufficient sensitivity to constrain the line-of-sight positions of molecular complexes, removing a major uncertainty in the model.
NASA Astrophysics Data System (ADS)
Lundgren, P.; Camacho, A.; Poland, M. P.; Miklius, A.; Samsonov, S. V.; Milillo, P.
2013-12-01
The availability of synthetic aperture radar (SAR) interferometry (InSAR) data has increased our awareness of the complexity of volcano deformation sources. InSAR's spatial completeness helps identify or clarify source process mechanisms at volcanoes (e.g. Mt. Etna east flank motion; the Lazufre crustal magma body; Kilauea dike complexity) and also improves potential model realism. In recent years, Bayesian inference methods have gained widespread use because of their ability to constrain not only source model parameters, but also their uncertainties. They are computationally intensive, however, which tends to limit them to a few geometrically rather simple source representations (for example, spheres). An alternative approach involves solving for irregular pressure and/or density sources from a three-dimensional (3-D) grid of source/density cells. This method has the ability to solve for arbitrarily shaped bodies of constant absolute pressure/density difference. We compare results for both Bayesian (a Markov chain Monte Carlo algorithm) and irregular source methods for two volcanoes: Kilauea, Hawaii, and Copahue, on the Argentina-Chile border. Kilauea has extensive InSAR and GPS databases from which to explore the results for the irregular method with respect to the Bayesian approach, prior models, and an extensive set of ancillary data. One caveat, however, is the current restriction of the irregular model inversion to volume-pressure sources (and a single excess pressure change), which limits its application in cases where sources such as faults or dikes are present. Preliminary results for Kilauea summit deflation during the March 2011 Kamoamoa eruption suggest a northeast-elongated magma body lying roughly 1-1.5 km below the surface. Copahue is a southern Andes volcano that has been inflating since early 2012, with intermittent summit eruptive activity since late 2012. We have an extensive InSAR time series from RADARSAT-2 and COSMO-SkyMed data, although both are from descending tracks. Preliminary modeling suggests a very irregular magma body that extends from the volcanic edifice to less than 5 km depth and is located slightly north of the summit at shallow depths but to the ENE at greater depths. In our preliminary analysis, we find that there are potential limitations and trade-offs in the Bayesian results, suggesting that the simplicity of the assumed analytic source may generate systematic biases in source parameters. Instead, the irregular 3-D solution appears to provide greater realism, but is limited in the number and type of sources that can be modeled.
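For scale, the simplest analytic end-member used in such Bayesian inversions is a spherical (Mogi) point source; the sketch below evaluates its surface displacements for an assumed geometry and Poisson ratio of 0.25, so it is a generic benchmark rather than a model of either volcano.

```python
import numpy as np

def mogi_surface_displacement(x, y, xs=0.0, ys=0.0, d=1200.0,
                              a=500.0, dP=10e6, mu=3e10):
    """Vertical and radial surface displacement (m) from a Mogi point source at
    depth d (m), radius a (m), overpressure dP (Pa), shear modulus mu (Pa),
    assuming a Poisson ratio of 0.25. Parameter values are illustrative only."""
    r = np.hypot(x - xs, y - ys)
    scale = 3.0 * a**3 * dP / (4.0 * mu)
    R3 = (d**2 + r**2)**1.5
    return scale * d / R3, scale * r / R3    # (uplift, radial displacement)

# Uplift profile across a hypothetical source ~1.2 km deep
r = np.linspace(0.0, 5000.0, 6)
uz, ur = mogi_surface_displacement(r, 0.0)
print(np.round(uz * 100, 2))                 # cm of uplift vs distance
```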
Minimum-complexity helicopter simulation math model
NASA Technical Reports Server (NTRS)
Heffley, Robert K.; Mnich, Marc A.
1988-01-01
An example of a minimal-complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of the features which are to be modeled, followed by a build-up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.
Transient pressure analysis of fractured well in bi-zonal gas reservoirs
NASA Astrophysics Data System (ADS)
Zhao, Yu-Long; Zhang, Lie-Hui; Liu, Yong-hui; Hu, Shu-Yong; Liu, Qi-Guo
2015-05-01
For a hydraulically fractured well, evaluating the properties of the fracture and the formation is always a tough job, and the conventional methods for doing so are very complex, especially for a partially penetrating fractured well. Although the source function is a very powerful tool for analyzing the transient pressure of wells with complex structures, the corresponding reports on gas reservoirs are rare. In this paper, the continuous point source functions in anisotropic reservoirs are derived on the basis of source function theory, the Laplace transform method and the Duhamel principle. Applying a construction method, the continuous point source functions in a bi-zonal gas reservoir with closed upper and lower boundaries are obtained. Subsequently, the physical models and transient pressure solutions are developed for fully and partially penetrating fractured vertical wells in this reservoir. Type curves of dimensionless pseudo-pressure and its derivative as functions of dimensionless time are plotted using a numerical inversion algorithm, and the flow periods and sensitive factors are also analyzed. The source functions and fractured-well solutions have both theoretical and practical applications in well-test interpretation for such gas reservoirs, especially for wells with a stimulated reservoir volume created by massive hydraulic fracturing in unconventional gas reservoirs, which can usually be described with the composite model.
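The abstract does not name its numerical inversion algorithm; a common choice in well-test work is the Gaver-Stehfest method, sketched below and checked against a transform with a known inverse. Treat it as an assumed stand-in for whichever inversion the authors actually used.

```python
import numpy as np
from math import factorial, log

def stehfest_coefficients(N=12):
    """Gaver-Stehfest weights V_i for an even number of terms N."""
    V = np.zeros(N)
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k**(N // 2) * factorial(2 * k) /
                  (factorial(N // 2 - k) * factorial(k) * factorial(k - 1) *
                   factorial(i - k) * factorial(2 * k - i)))
        V[i - 1] = (-1)**(N // 2 + i) * s
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) via Gaver-Stehfest."""
    V = stehfest_coefficients(N)
    ln2_t = log(2.0) / t
    return ln2_t * sum(V[i] * F((i + 1) * ln2_t) for i in range(N))

# Check on a transform with a known inverse: F(s) = 1/(s*(s+1)) <-> 1 - exp(-t)
for t in (0.5, 1.0, 5.0):
    print(t, invert_laplace(lambda s: 1.0 / (s * (s + 1.0)), t), 1 - np.exp(-t))
```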
Simplified contaminant source depletion models as analogs of multiphase simulators
NASA Astrophysics Data System (ADS)
Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.
2008-04-01
Four simplified dense non-aqueous phase liquid (DNAPL) source depletion models recently introduced in the literature are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. The spill and subsequent dissolution of DNAPLs was simulated in domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1 and 3) using the multiphase flow and transport simulator UTCHEM. The dissolution profiles were fitted using four analytical models: the equilibrium streamtube model (ESM), the advection dispersion model (ADM), the power law model (PLM) and the Damkohler number model (DaM). All four models, though very different in their conceptualization, include two basic parameters that describe the mean DNAPL mass and the joint variability in the velocity and DNAPL distributions. The variability parameter was observed to be strongly correlated with the variance of the log conductivity field in the ESM and ADM but weakly correlated in the PLM and DaM. The DaM also includes a third parameter that describes the effect of rate-limited dissolution, but here this parameter was held constant as the numerical simulations were found to be insensitive to local-scale mass transfer. All four models were able to emulate the characteristics of the dissolution profiles generated from the complex numerical simulator, but the one-parameter PLM fits were the poorest, especially for the low heterogeneity case.
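The power law model referred to above couples a concentration-mass relation to a simple mass balance; the sketch below integrates that pair forward in time with invented source-zone numbers to show the characteristic tailing when the depletion exponent exceeds one.

```python
# Minimal sketch of the power law source depletion model (PLM): the flux-averaged
# source concentration scales with remaining DNAPL mass as C/C0 = (M/M0)**Gamma.
# All numbers are invented for illustration, not values from the simulations.
C0 = 50.0         # initial flux-averaged concentration, g/m^3
M0 = 2000.0       # initial DNAPL mass per unit cross-sectional flow area, g/m^2
Gamma = 1.5       # depletion exponent (>1 gives early concentration tailing)
q = 0.05          # Darcy flux through the source zone, m/day

dt, M, t = 1.0, M0, 0.0
while M > 0.01 * M0:
    C = C0 * (M / M0)**Gamma       # power-law concentration-mass relation
    M -= q * C * dt                # mass removed by the flushing water, g/m^2
    t += dt

print(f"99% of the source mass removed after ~{t / 365:.1f} years")
```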
Mr-Moose: An advanced SED-fitting tool for heterogeneous multi-wavelength datasets
NASA Astrophysics Data System (ADS)
Drouart, G.; Falkendal, T.
2018-04-01
We present the public release of Mr-Moose, a fitting procedure that is able to perform multi-wavelength and multi-object spectral energy distribution (SED) fitting in a Bayesian framework. This procedure is able to handle a large variety of cases, from an isolated source to blended multi-component sources from a heterogeneous dataset (i.e. a range of observation sensitivities and spectral/spatial resolutions). Furthermore, Mr-Moose handles upper limits during the fitting process in a continuous way, allowing models to be gradually less probable as upper limits are approached. The aim is to propose a simple-to-use, yet highly versatile fitting tool for handling increasing source complexity when combining multi-wavelength datasets with fully customisable filter/model databases. The complete control of the user is one advantage, which avoids the traditional problems related to the "black box" effect, where parameter or model tunings are impossible and can lead to overfitting and/or over-interpretation of the results. Also, while a basic knowledge of Python and statistics is required, the code aims to be sufficiently user-friendly for non-experts. We demonstrate the procedure on three cases: two artificially generated datasets and a previous result from the literature. In particular, the most complex case (inspired by a real source, combining Herschel, ALMA and VLA data) in the context of extragalactic SED fitting makes Mr-Moose a particularly attractive SED fitting tool when dealing with partially blended sources, without the need for data deconvolution.
MR-MOOSE: an advanced SED-fitting tool for heterogeneous multi-wavelength data sets
NASA Astrophysics Data System (ADS)
Drouart, G.; Falkendal, T.
2018-07-01
We present the public release of MR-MOOSE, a fitting procedure that is able to perform multi-wavelength and multi-object spectral energy distribution (SED) fitting in a Bayesian framework. This procedure is able to handle a large variety of cases, from an isolated source to blended multi-component sources from a heterogeneous data set (i.e. a range of observation sensitivities and spectral/spatial resolutions). Furthermore, MR-MOOSE handles upper limits during the fitting process in a continuous way allowing models to be gradually less probable as upper limits are approached. The aim is to propose a simple-to-use, yet highly versatile fitting tool for handling increasing source complexity when combining multi-wavelength data sets with fully customisable filter/model data bases. The complete control of the user is one advantage, which avoids the traditional problems related to the `black box' effect, where parameter or model tunings are impossible and can lead to overfitting and/or over-interpretation of the results. Also, while a basic knowledge of PYTHON and statistics is required, the code aims to be sufficiently user-friendly for non-experts. We demonstrate the procedure on three cases: two artificially generated data sets and a previous result from the literature. In particular, the most complex case (inspired by a real source, combining Herschel, ALMA, and VLA data) in the context of extragalactic SED fitting makes MR-MOOSE a particularly attractive SED fitting tool when dealing with partially blended sources, without the need for data deconvolution.
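The continuous treatment of upper limits can be illustrated with a small sketch. The error-function form below is one common way to let models become gradually less probable as an upper limit is approached; it is an assumption for illustration, not necessarily the exact likelihood implemented in MR-MOOSE.

```python
import numpy as np
from scipy.special import erf

def loglike(model_flux, obs_flux, obs_err, is_upper_limit):
    """Gaussian log-likelihood for detections plus a smooth
    one-sided (error-function) term for upper limits."""
    ll = 0.0
    for m, f, s, ul in zip(model_flux, obs_flux, obs_err, is_upper_limit):
        if ul:
            # Probability that the true flux lies below the limit f, given the
            # model value m and an assumed limit "softness" s.
            ll += np.log(0.5 * (1.0 + erf((f - m) / (np.sqrt(2.0) * s))) + 1e-300)
        else:
            ll += -0.5 * ((f - m) / s) ** 2 - np.log(s * np.sqrt(2.0 * np.pi))
    return ll

# Toy example: two detections and one upper limit (arbitrary flux units).
print(loglike([1.0, 2.0, 0.5], [1.1, 1.9, 0.8], [0.1, 0.2, 0.2],
              [False, False, True]))
```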
Hybrid Skyshine Calculations for Complex Neutron and Gamma-Ray Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J. Kenneth
2000-10-15
A two-step hybrid method is described for computationally efficient estimation of neutron and gamma-ray skyshine doses far from a shielded source. First, the energy and angular dependence of radiation escaping into the atmosphere from a source containment is determined by a detailed transport model such as MCNP. Then, an effective point source with this energy and angular dependence is used in the integral line-beam method to transport the radiation through the atmosphere up to 2500 m from the source. An example spent-fuel storage cask is analyzed with this hybrid method and compared to detailed MCNP skyshine calculations.
Sun, Gang; Hoff, Steven J; Zelle, Brian C; Nelson, Minda A
2008-12-01
It is vital to forecast gas and particulate matter concentrations and emission rates (GPCER) from livestock production facilities to assess the impact of airborne pollutants on human health, the ecological environment, and global warming. Modeling source air quality is a complex process because of abundant nonlinear interactions between GPCER and other factors. The objective of this study was to introduce statistical methods and a radial basis function (RBF) neural network to predict daily source air quality in Iowa swine deep-pit finishing buildings. The results show that four variables (outdoor and indoor temperature, animal units, and ventilation rates) were identified as relatively important model inputs using statistical methods. It can be further demonstrated that only two factors, the environment factor and the animal factor, were capable of explaining more than 94% of the total variability after performing principal component analysis. The introduction of fewer uncorrelated variables to the neural network would result in the reduction of the model structure complexity, minimize computation cost, and eliminate model overfitting problems. The obtained results of RBF network prediction were in good agreement with the actual measurements, with values of the correlation coefficient between 0.741 and 0.995 and very low values of systemic performance indexes for all the models. The good results indicated the RBF network could be trained to model these highly nonlinear relationships. Thus, the RBF neural network technology combined with multivariate statistical methods is a promising tool for air pollutant emissions modeling.
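A minimal sketch of the general workflow (dimension reduction followed by a radial-basis-function regressor) is shown below using scikit-learn and synthetic stand-in predictors; the kernel ridge regressor plays the role of the RBF network, and the variable names are illustrative only, not the study's measured inputs.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical daily records: outdoor T, indoor T, animal units, ventilation rate.
X = rng.normal(size=(365, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 365)

# Reduce correlated inputs to two components, then fit an RBF-kernel regressor.
model = make_pipeline(StandardScaler(), PCA(n_components=2),
                      KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5))
model.fit(X[:300], y[:300])
r = np.corrcoef(model.predict(X[300:]), y[300:])[0, 1]
print(f"hold-out correlation coefficient: {r:.3f}")
```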
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.; Cohen, M.O.
1975-02-01
The adjoint Monte Carlo method previously developed by MAGI has been applied to the calculation of initial radiation dose due to air secondary gamma rays and fission product gamma rays at detector points within buildings for a wide variety of problems. These provide an in-depth survey of structure shielding effects as well as many new benchmark problems for matching by simplified models. Specifically, elevated ring source results were obtained in the following areas: doses at on- and off-centerline detectors in four concrete blockhouse structures; doses at detector positions along the centerline of a high-rise structure without walls; dose mapping at basement detector positions in the high-rise structure; doses at detector points within a complex concrete structure containing exterior windows and walls and interior partitions; modeling of the complex structure by replacing interior partitions by additional material at exterior walls; effects of elevation angle changes; effects on the dose of changes in fission product ambient spectra; and modeling of mutual shielding due to external structures. In addition, point source results yielding dose extremes about the ring source average were obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bache, T.C.; Rodi, W.L.; Mason, B.F.
1978-06-01
By comparing observed and synthetic seismograms, source amplitudes of NTS explosions are inferred from Rayleigh wave recordings at the WWSSN stations at Albuquerque, New Mexico (ALQ) and Tucson, Arizona (TUC). The potential influence of source complexities, particularly surface spallation and related phenomena, is studied in detail. As described in earlier work by Bache, Rodi and Harkrider, the earth models used for the synthetics were derived from observations at ALQ and TUC. The agreement of observed and synthetic seismograms is quite good and is sensitive to important features of the source.
Legacy source of mercury in an urban stream-wetland ecosystem in central North Carolina, USA.
Deonarine, Amrika; Hsu-Kim, Heileen; Zhang, Tong; Cai, Yong; Richardson, Curtis J
2015-11-01
In the United States, aquatic mercury contamination originates from point and non-point sources to watersheds. Here, we studied the contribution of mercury in urban runoff derived from historically contaminated soils and the subsequent production of methylmercury in a stream-wetland complex (Durham, North Carolina), the receiving water of this runoff. Our results demonstrated that the mercury originated from the leachate of grass-covered athletic fields. A fraction of mercury in this soil existed as phenylmercury, suggesting that mercurial anti-fungal compounds were historically applied to this soil. Further downstream in the anaerobic sediments of the stream-wetland complex, a fraction (up to 9%) of mercury was converted to methylmercury, the bioaccumulative form of the metal. Importantly, the concentrations of total mercury and methylmercury were reduced to background levels within the stream-wetland complex. Overall, this work provides an example of a legacy source of mercury that should be considered in urban watershed models and watershed management. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Viner, Brian J.; Arritt, Raymond W.; Westgate, Mark E.
2017-08-01
Complex terrain creates small-scale circulations which affect pollen dispersion but may be missed by meteorological observing networks and coarse-grid meteorological models. On volcanic islands, these circulations result from differing rates of surface heating between land and sea as well as rugged terrain. We simulated the transport of bentgrass, ryegrass, and maize pollen from 30 sources within the agricultural regions of the Hawaiian island Kaua'i under climatological conditions spanning the seasons and the La Niña, El Niño, and neutral phases of the El Niño-Southern Oscillation. Both pollen size and source location had major effects on predicted dispersion over and near the island. Three patterns of pollen dispersion were identified in response to prevailing wind conditions: southwest winds transported pollen inland, funneling pollen grains through valleys; east winds transported pollen over the ocean, with dispersive tails for the smallest pollen grains following the mean wind and extending as far as the island of Ni'ihau 35 km away; and northeast winds moved pollen inland counter to the prevailing flow due to a sea breeze circulation that formed over the source region. These results are the first to predict the effect of interactions between complex island terrain and local climatology on grass pollen dispersion. They demonstrate how numerical modeling can provide guidance for field trials by illustrating the common flow regimes present in complex terrain, allowing field trials to focus on areas where successful sampling is more likely to occur.
Meteorological and air pollution modeling for an urban airport
NASA Technical Reports Server (NTRS)
Swan, P. R.; Lee, I. Y.
1980-01-01
Results are presented of numerical experiments modeling meteorology, multiple pollutant sources, and nonlinear photochemical reactions for the case of an airport in a large urban area with complex terrain. A planetary boundary-layer model which predicts the mixing depth and generates wind, moisture, and temperature fields was used; it utilizes only surface and synoptic boundary conditions as input data. A version of the Hecht-Seinfeld-Dodge chemical kinetics model is integrated with a new, rapid numerical technique; both the San Francisco Bay Area Air Quality Management District source inventory and the San Jose Airport aircraft inventory are utilized. The air quality model results are presented in contour plots; the combined results illustrate that the highly nonlinear interactions which are present require that the chemistry and meteorology be considered simultaneously to make a valid assessment of the effects of individual sources on regional air quality.
An Ontology of Power: Perception and Reality in Conflict
2016-12-01
synthetic model was developed as the constant comparative analysis was resumed through the application of selected theory toward the original source...The synthetic model represents a series of maxims for the analysis of a complex social system, developed through a study of contemporary national...and categories. A model of strategic agency is proposed as an alternative framework for developing security strategy. The strategic agency model draws
Hoek, Milan J A van; Merks, Roeland M H
2017-05-16
The human gut contains approximately 10^14 bacteria, belonging to hundreds of different species. Together, these microbial species form a complex food web that can break down nutrient sources that our own digestive enzymes cannot handle, including complex polysaccharides, producing short chain fatty acids and additional metabolites, e.g., vitamin K. Microbial diversity is important for colonic health: Changes in the composition of the microbiota have been associated with inflammatory bowel disease, diabetes, obesity and Crohn's disease, and make the microbiota more vulnerable to infestation by harmful species, e.g., Clostridium difficile. To get a grip on the controlling factors of microbial diversity in the gut, we here propose a multi-scale, spatiotemporal dynamic flux-balance analysis model to study the emergence of metabolic diversity in a spatial gut-like, tubular environment. The model features genome-scale metabolic models (GEM) of microbial populations, resource sharing via extracellular metabolites, and spatial population dynamics and evolution. In this model, cross-feeding interactions emerge readily, despite the species' ability to metabolize sugars autonomously. Interestingly, the community requires cross-feeding for producing a realistic set of short-chain fatty acids from an input of glucose. If we let the composition of the microbial subpopulations change during invasion of adjacent space, a complex and stratified microbiota evolves, with subspecies specializing on cross-feeding interactions via a mechanism of compensated trait loss. The microbial diversity and stratification collapse if the flux through the gut is enhanced to mimic diarrhea. In conclusion, this in silico model is a helpful tool in systems biology to predict and explain the controlling factors of microbial diversity in the gut. It can be extended to include, e.g., complex nutrient sources, and host-microbiota interactions via the intestinal wall.
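The cross-feeding idea can be illustrated with a far simpler model than the genome-scale, spatial one described above. The toy chemostat sketch below (hypothetical Monod parameters, no genome-scale metabolism or space) shows one species fermenting glucose to acetate and a second species living on that acetate, with a dilution rate standing in for flow through the gut.

```python
import numpy as np
from scipy.integrate import solve_ivp

def gut(t, y):
    """Toy cross-feeding chemostat: species A ferments glucose to acetate,
    species B consumes the acetate; D is a dilution (flow) rate."""
    glc, ace, A, B = y
    D = 0.1                                  # flow through the "gut"
    vA = 1.0 * glc / (0.5 + glc)             # Monod glucose uptake by A
    vB = 0.8 * ace / (0.5 + ace)             # Monod acetate uptake by B
    dglc = D * (10.0 - glc) - vA * A
    dace = 0.7 * vA * A - vB * B - D * ace
    dA = (0.4 * vA - D) * A
    dB = (0.3 * vB - D) * B
    return [dglc, dace, dA, dB]

sol = solve_ivp(gut, (0, 200), [10.0, 0.0, 0.01, 0.01])
glc, ace, A, B = sol.y[:, -1]
print(f"final state: glucose={glc:.2f}, acetate={ace:.2f}, A={A:.2f}, B={B:.2f}")
```

Raising the dilution rate D in this toy model washes out the acetate consumer first, loosely echoing the diarrhea-induced collapse of diversity described in the abstract.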
NASA Astrophysics Data System (ADS)
Frew, E.; Argrow, B. M.; Houston, A. L.; Weiss, C.
2014-12-01
The energy-aware airborne dynamic, data-driven application system (EA-DDDAS) performs persistent sampling in complex atmospheric conditions by exploiting wind energy using the dynamic data-driven application system paradigm. The main challenge for future airborne sampling missions is operation with tight integration of physical and computational resources over wireless communication networks, in complex atmospheric conditions. The physical resources considered here include sensor platforms, particularly mobile Doppler radar and unmanned aircraft, the complex conditions in which they operate, and the region of interest. Autonomous operation requires distributed computational effort connected by layered wireless communication. Onboard decision-making and coordination algorithms can be enhanced by atmospheric models that assimilate input from physics-based models and wind fields derived from multiple sources. These models are generally too complex to be run onboard the aircraft, so they need to be executed in ground vehicles in the field, and connected over broadband or other wireless links back to the field. Finally, the wind field environment drives strong interaction between the computational and physical systems, both as a challenge to autonomous path planning algorithms and as a novel energy source that can be exploited to improve system range and endurance. Implementation details of a complete EA-DDDAS will be provided, along with preliminary flight test results targeting coherent boundary-layer structures.
Roy, Debananda; Singh, Gurdeep; Yadav, Pankaj
2016-10-01
A source apportionment study of PM10 (particulate matter) in a critically polluted area of the Jharia coalfield, India has been carried out using dispersion modelling, Principal Component Analysis (PCA) and Chemical Mass Balance (CMB) techniques. The dispersion model AERMOD (Atmospheric Dispersion Model) was introduced to simplify the complexity of sources in the Jharia coalfield. PCA and CMB analysis indicates that monitoring stations near the mining area were mainly affected by emissions from open coal mining and its associated activities such as coal transportation, loading and unloading of coal. Mine fire emissions also contributed a considerable amount of particulate matter at the monitoring stations. Locations in the city area were mostly affected by vehicular, Liquefied Petroleum Gas (LPG) and Diesel Generator (DG) set emissions, and residential and commercial activities. The experimental data sampling and analysis illustrate how dispersion-model techniques, along with receptor-model concepts, can be strategically used for quantitative analysis of natural and anthropogenic sources of PM10. Copyright © 2016. Published by Elsevier B.V.
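The Chemical Mass Balance step can be sketched as a non-negative least-squares problem: receptor concentrations of marker species are expressed as a source-profile matrix times unknown source contributions. The profiles, species and source names below are invented for illustration, not taken from the Jharia study.

```python
import numpy as np
from scipy.optimize import nnls

# Rows = chemical species, columns = sources (hypothetical profiles,
# mass fraction of each species emitted by each source).
profiles = np.array([
    [0.30, 0.05, 0.02],   # e.g. coal/mining marker
    [0.02, 0.25, 0.05],   # e.g. vehicular marker
    [0.05, 0.05, 0.30],   # e.g. crustal/road-dust marker
    [0.10, 0.10, 0.10],
])
ambient = np.array([8.0, 6.5, 7.0, 6.0])   # measured species concentrations (ug/m3)

contrib, resid = nnls(profiles, ambient)   # non-negative source contributions
for name, c in zip(["mining/coal", "vehicular", "dust"], contrib):
    print(f"{name:12s} contribution: {c:6.1f} ug/m3")
print(f"residual norm: {resid:.2f}")
```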
NASA Astrophysics Data System (ADS)
Smith, R. A.; Moore, R. B.; Shanley, J. B.; Miller, E. K.; Kamman, N. C.; Nacci, D.
2009-12-01
Mercury (Hg) concentrations in fish and aquatic wildlife are complex functions of atmospheric Hg deposition rate, terrestrial and aquatic watershed characteristics that influence Hg methylation and export, and food chain characteristics determining Hg bioaccumulation. Because of the complexity and incomplete understanding of these processes, regional-scale models of fish tissue Hg concentration are necessarily empirical in nature, typically constructed through regression analysis of fish tissue Hg concentration data from many sampling locations on a set of potential explanatory variables. Unless the data sets are unusually long and show clear time trends, the empirical basis for model building must be based solely on spatial correlation. Predictive regional scale models are highly useful for improving understanding of the relevant biogeochemical processes, as well as for practical fish and wildlife management and human health protection. Mechanistically, the logical arrangement of explanatory variables is to multiply each of the individual Hg source terms (e.g. dry, wet, and gaseous deposition rates, and residual watershed Hg) for a given fish sampling location by source-specific terms pertaining to methylation, watershed transport, and biological uptake for that location (e.g. SO4 availability, hill slope, lake size). This mathematical form has the desirable property that predicted tissue concentration will approach zero as all individual source terms approach zero. One complication with this form, however, is that it is inconsistent with the standard linear multiple regression equation in which all terms (including those for sources and physical conditions) are additive. An important practical disadvantage of a model in which the Hg source terms are additive (rather than multiplicative) with their modifying factors is that predicted concentration is not zero when all sources are zero, making it unreliable for predicting the effects of large future reductions in Hg deposition. In this paper we compare the results of using several different linear and non-linear models in an analysis of watershed and fish Hg data for 450 New England lakes. The differences in model results pertain to both their utility in interpreting methylation and export processes as well as in fisheries management.
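The contrast between the multiplicative and additive model forms can be made concrete with a small fitting sketch. The predictors and coefficients below are invented, not taken from the New England lakes analysis; the point is only that the multiplicative fit predicts zero tissue Hg when all deposition terms are zero, while the additive fit generally does not.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
n = 450
wet_dep = rng.uniform(2, 12, n)      # wet Hg deposition (hypothetical units)
dry_dep = rng.uniform(1, 6, n)
sulfate = rng.uniform(0.5, 2.0, n)   # proxy for methylation potential

# Synthetic "observed" fish Hg generated from a multiplicative truth.
fish_hg = (0.04 * wet_dep + 0.07 * dry_dep) * sulfate ** 1.2 \
          * np.exp(rng.normal(0, 0.1, n))

def multiplicative(X, a, b, p):
    wet, dry, so4 = X
    return (a * wet + b * dry) * so4 ** p

popt, _ = curve_fit(multiplicative, (wet_dep, dry_dep, sulfate), fish_hg,
                    p0=[0.01, 0.01, 1.0])

# Additive alternative: ordinary linear regression on the same predictors.
A = np.column_stack([wet_dep, dry_dep, sulfate, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, fish_hg, rcond=None)

print("multiplicative fit, prediction at zero deposition:",
      multiplicative((0.0, 0.0, 1.0), *popt))      # zero by construction
print("additive fit, prediction at zero deposition:",
      coef[2] * 1.0 + coef[3])                      # generally nonzero
```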
NASA Astrophysics Data System (ADS)
Belis, Claudio A.; Pernigotti, Denise; Pirovano, Guido
2017-04-01
Source Apportionment (SA) is the identification of ambient air pollution sources and the quantification of their contribution to pollution levels. This task can be accomplished using different approaches: chemical transport models and receptor models. Receptor models are derived from measurements and are therefore considered a reference for primary sources at urban background levels. Chemical transport models provide better estimates of secondary (inorganic) pollutants and can deliver gridded results with high time resolution. Assessing the performance of SA model results is essential to guarantee reliable information on source contributions to be used for reporting to the Commission and in the development of pollution abatement strategies. This is the first intercomparison ever designed to test both receptor-oriented models (or receptor models) and chemical transport models (or source-oriented models) using a comprehensive method based on model quality indicators and pre-established criteria. The target pollutant of this exercise, organised in the frame of FAIRMODE WG 3, is PM10. Both receptor models and chemical transport models perform well when evaluated against their respective references. Both types of models demonstrate quite satisfactory capabilities to estimate the yearly source contributions, while the estimation of the source contributions at the daily level (time series) is more critical. Chemical transport models showed a tendency to underestimate the contribution of some single sources when compared to receptor models. For receptor models the most critical source category is industry, probably due to the variety of single sources with different characteristics that belong to this category. Dust is the most problematic source for chemical transport models, likely due to the poor information about this kind of source in the emission inventories, particularly concerning road dust re-suspension, and consequently the little detail about the chemical components of this source used in the models. The sensitivity tests show that chemical transport models perform better when resolving a detailed set of sources (14) than when using a simplified one (only 8). It was also observed that enhanced vertical profiling can improve the estimation of specific sources, such as industry, under complex meteorological conditions, and that insufficient spatial resolution in urban areas can limit the capability of models to estimate the contribution of diffuse primary sources (e.g. traffic). Both families of models identify traffic and biomass burning as the first and second most contributing categories, respectively, to elemental carbon. The results of this study demonstrate that the source apportionment assessment methodology developed by the JRC is applicable to any kind of SA model. The same methodology is implemented in the on-line DeltaSA tool to support source apportionment model evaluation (http://source-apportionment.jrc.ec.europa.eu/).
Explosion localization and characterization via infrasound using numerical modeling
NASA Astrophysics Data System (ADS)
Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.
2017-12-01
Numerous methods have been applied to locate, detect, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates TRM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu and Etna Volcano, Italy, which both provide complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g. semblance), and suggest our method is preferred as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic and anthropogenic sources at a wide variety of ranges and scenarios. Kim, K., Lees, J.M., 2015. Imaging volcanic infrasound sources using time reversal mirror algorithm. Geophysical Journal International 202, 1663-1676.
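The stack-and-search step of RTM-FDTD can be sketched in a few lines. The sketch below uses straight-ray travel times and unit impulses instead of FDTD-derived Green's functions and real envelopes, so it only illustrates the shift-and-stack logic, not the full method.

```python
import numpy as np

def back_project(envelopes, dt, travel_times):
    """Stack station envelopes at each grid node after shifting by the
    node-to-station travel times; return the stack maximum per node.

    envelopes    : (n_sta, n_samples) array of waveform envelopes
    dt           : sample interval in seconds
    travel_times : (n_nodes, n_sta) array of travel times in seconds
    """
    n_nodes, n_sta = travel_times.shape
    n_samp = envelopes.shape[1]
    stack_max = np.zeros(n_nodes)
    for k in range(n_nodes):
        shifts = np.round(travel_times[k] / dt).astype(int)
        stack = np.zeros(n_samp)
        for s in range(n_sta):
            stack[:n_samp - shifts[s]] += envelopes[s, shifts[s]:]
        stack_max[k] = stack.max() / n_sta
    return stack_max

# Toy example: 3 stations, a synthetic impulsive arrival, 2 candidate nodes.
dt = 0.01
env = np.zeros((3, 1000))
true_tt = np.array([1.2, 1.5, 1.9])              # true node-to-station times (s)
for s, t in enumerate(true_tt):
    env[s, int(t / dt)] = 1.0
tts = np.array([true_tt, [1.0, 1.0, 1.0]])       # node 0 is the true source
print(back_project(env, dt, tts))                # node 0 stacks coherently
```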
Sun, Jin; Kelbert, Anna; Egbert, G.D.
2015-01-01
Long-period global-scale electromagnetic induction studies of deep Earth conductivity are based almost exclusively on magnetovariational methods and require accurate models of external source spatial structure. We describe approaches to inverting for both the external sources and three-dimensional (3-D) conductivity variations and apply these methods to long-period (T≥1.2 days) geomagnetic observatory data. Our scheme involves three steps: (1) Observatory data from 60 years (only partly overlapping and with many large gaps) are reduced and merged into dominant spatial modes using a scheme based on frequency domain principal components. (2) Resulting modes are inverted for corresponding external source spatial structure, using a simplified conductivity model with radial variations overlain by a two-dimensional thin sheet. The source inversion is regularized using a physically based source covariance, generated through superposition of correlated tilted zonal (quasi-dipole) current loops, representing ionospheric source complexity smoothed by Earth rotation. Free parameters in the source covariance model are tuned by a leave-one-out cross-validation scheme. (3) The estimated data modes are inverted for 3-D Earth conductivity, assuming the source excitation estimated in step 2. Together, these developments constitute key components in a practical scheme for simultaneous inversion of the catalogue of historical and modern observatory data for external source spatial structure and 3-D Earth conductivity.
Dealing with Diversity in Computational Cancer Modeling
Johnson, David; McKeever, Steve; Stamatakos, Georgios; Dionysiou, Dimitra; Graf, Norbert; Sakkalis, Vangelis; Marias, Konstantinos; Wang, Zhihui; Deisboeck, Thomas S.
2013-01-01
This paper discusses the need for interconnecting computational cancer models from different sources and scales within clinically relevant scenarios to increase the accuracy of the models and speed up their clinical adaptation, validation, and eventual translation. We briefly review current interoperability efforts drawing upon our experiences with the development of in silico models for predictive oncology within a number of European Commission Virtual Physiological Human initiative projects on cancer. A clinically relevant scenario, addressing brain tumor modeling that illustrates the need for coupling models from different sources and levels of complexity, is described. General approaches to enabling interoperability using XML-based markup languages for biological modeling are reviewed, concluding with a discussion on efforts towards developing cancer-specific XML markup to couple multiple component models for predictive in silico oncology. PMID:23700360
NASA Astrophysics Data System (ADS)
Tonini, R.; Maesano, F. E.; Tiberti, M. M.; Romano, F.; Scala, A.; Lorito, S.; Volpe, M.; Basili, R.
2017-12-01
The geometry of seismogenic sources could be one of the most important factors controlling the generation and propagation of earthquake-generated tsunamis and their effects on the coasts. Since the majority of potentially tsunamigenic earthquakes occur offshore, the corresponding faults are generally poorly constrained and, consequently, their geometry is often oversimplified as a planar fault. The rupture area of mega-thrust earthquakes in subduction zones, where most of the greatest tsunamis have occurred, extends for tens to hundreds of kilometers both down dip and along strike, and generally deviates from a planar geometry. Therefore, the larger the earthquake size is, the weaker the planar fault assumption becomes. In this work, we present a sensitivity analysis aimed at exploring the effects on modeled tsunamis generated by seismic sources with different degrees of geometric complexity. We focused on the Calabrian subduction zone, located in the Mediterranean Sea, which is characterized by the convergence between the African and European plates, with rates of up to 5 mm/yr. This subduction zone has been considered to have generated some past large earthquakes and tsunamis, even though it shows significant in-slab seismic activity only below 40 km depth and no relevant seismicity in the shallower portion of the interface. Our analysis is performed by defining and modeling an exhaustive set of tsunami scenarios located in the Calabrian subduction zone and using different models of the subduction interface with increasing geometrical complexity, from a planar surface to a highly detailed 3D surface. The latter was obtained from the interpretation of a dense network of seismic reflection profiles coupled with the analysis of the seismicity distribution. The most relevant effects due to the inclusion of 3D complexities in the seismic source geometry are finally highlighted in terms of the resulting tsunami impact.
Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W
2016-01-01
A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data-large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources-all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson's disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson's disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. 
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer's, Huntington's, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications.
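A minimal sketch of the model-free classification workflow described above (rebalance the cohorts, train a boosted classifier, report n-fold cross-validated accuracy) is given below with synthetic stand-in features rather than PPMI data. Note that in practice resampling is best done inside each training fold to avoid leakage, which this compact sketch does not do.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.utils import resample

rng = np.random.default_rng(2)
# Imbalanced synthetic cohort: 300 cases vs. 60 controls, 10 features.
X_case = rng.normal(0.5, 1.0, size=(300, 10))
X_ctrl = rng.normal(0.0, 1.0, size=(60, 10))

# Rebalance by oversampling the minority cohort with replacement.
X_ctrl_up = resample(X_ctrl, replace=True, n_samples=300, random_state=0)
X = np.vstack([X_case, X_ctrl_up])
y = np.array([1] * 300 + [0] * 300)

clf = AdaBoostClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)        # 5-fold cross-validation
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```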
USDA-ARS?s Scientific Manuscript database
Modelling the water and energy balance at the land surface is a crucial task for many applications related to crop production, water resources management, climate change studies, weather forecasting, and natural hazards assessment. To improve the modelling of evapotranspiration (ET) over structurall...
The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook
NASA Astrophysics Data System (ADS)
Mai, P. M.
2017-12-01
Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.
Numerical Simulation of Dispersion from Urban Greenhouse Gas Sources
NASA Astrophysics Data System (ADS)
Nottrott, Anders; Tan, Sze; He, Yonggang; Winkler, Renato
2017-04-01
Cities are characterized by complex topography, inhomogeneous turbulence, and variable pollutant source distributions. These features create a scale separation between local sources and urban scale emissions estimates known as the Grey-Zone. Modern computational fluid dynamics (CFD) techniques provide a quasi-deterministic, physically based toolset to bridge the scale separation gap between source level dynamics, local measurements, and urban scale emissions inventories. CFD has the capability to represent complex building topography and capture detailed 3D turbulence fields in the urban boundary layer. This presentation discusses the application of OpenFOAM to urban CFD simulations of natural gas leaks in cities. OpenFOAM is an open source software for advanced numerical simulation of engineering and environmental fluid flows. When combined with free or low cost computer aided drawing and GIS, OpenFOAM generates a detailed, 3D representation of urban wind fields. OpenFOAM was applied to model scalar emissions from various components of the natural gas distribution system, to study the impact of urban meteorology on mobile greenhouse gas measurements. The numerical experiments demonstrate that CH4 concentration profiles are highly sensitive to the relative location of emission sources and buildings. Sources separated by distances of 5-10 meters showed significant differences in vertical dispersion of plumes, due to building wake effects. The OpenFOAM flow fields were combined with an inverse, stochastic dispersion model to quantify and visualize the sensitivity of point sensors to upwind sources in various built environments. The Boussinesq approximation was applied to investigate the effects of canopy layer temperature gradients and convection on sensor footprints.
Urbina, Angel; Mahadevan, Sankaran; Paez, Thomas L.
2012-03-01
Here, performance assessment of complex systems is ideally accomplished through system-level testing, but because such tests are expensive, they are seldom performed. On the other hand, for economic reasons, data from tests on individual components that are parts of complex systems are more readily available. The lack of system-level data leads to a need to build computational models of systems and use them for performance prediction in lieu of experiments. Because of their complexity, models are sometimes built in a hierarchical manner, starting with simple components, progressing to collections of components, and finally, to the full system. Quantification of uncertainty in the predicted response of a system model is required in order to establish confidence in the representation of actual system behavior. This paper proposes a framework for the complex, but very practical problem of quantification of uncertainty in system-level model predictions. It is based on Bayes networks and uses the available data at multiple levels of complexity (i.e., components, subsystem, etc.). Because epistemic sources of uncertainty were shown to be secondary in this application, only aleatoric uncertainty is included in the present uncertainty quantification. An example showing application of the techniques to uncertainty quantification of measures of response of a real, complex aerospace system is included.
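The paper's Bayes-network machinery is more than a few lines, but the core idea of pushing component-level (aleatoric) uncertainty up to a system-level prediction can be sketched with simple Monte Carlo sampling. The component model, parameter distributions and load case below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

def system_response(stiffness, damping, load):
    """Toy system model: peak response of a damped, oscillator-like metric."""
    return load / np.sqrt((stiffness - load) ** 2 + (damping * load) ** 2)

# Component-level test data constrain stiffness and damping (aleatoric spread).
stiffness_samples = rng.normal(12.0, 0.8, 10_000)
damping_samples = rng.lognormal(mean=np.log(0.3), sigma=0.2, size=10_000)

# Propagate to the system level for a fixed load case.
response = system_response(stiffness_samples, damping_samples, load=4.0)
print(f"system response: mean={response.mean():.3f}, "
      f"95% interval=({np.percentile(response, 2.5):.3f}, "
      f"{np.percentile(response, 97.5):.3f})")
```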
The big data-big model (BDBM) challenges in ecological research
NASA Astrophysics Data System (ADS)
Luo, Y.
2015-12-01
The field of ecology has become a big-data science in the past decades due to the development of new sensors used in numerous studies in the ecological community. Many sensor networks have been established to collect data. For example, satellites, such as Terra and OCO-2 among others, have collected data relevant to the global carbon cycle. Thousands of field manipulative experiments have been conducted to examine the feedback of the terrestrial carbon cycle to global changes. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations over the nation, will generate large volumes of ecological data every day. The raw data from sensors from those networks offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision-making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe major processes that underlie complex system dynamics. Ecological system models, despite great simplification of the real systems, are still complex in order to address real-world problems. For example, the Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integration of massive data from multiple big data sources with complex models has to tackle Big Data-Big Model (BDBM) challenges. Those challenges include interoperability of multiple, heterogeneous data sets; intractability of structural complexity of big models; equifinality of model structure selection and parameter estimation; and computational demand of global optimization with big models.
Quantitative estimation of source complexity in tsunami-source inversion
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Cummins, Phil R.; Hawkins, Rhys; Jakir Hossen, M.
2016-04-01
This work analyses tsunami waveforms to infer the spatiotemporal evolution of sea-surface displacement (the tsunami source) caused by earthquakes or other sources. Since the method considers sea-surface displacement directly, no assumptions about the fault or seafloor deformation are required. While this approach has no ability to study seismic aspects of rupture, it greatly simplifies the tsunami source estimation, making it much less dependent on subjective fault and deformation assumptions. This results in a more accurate sea-surface displacement evolution in the source region. The spatial discretization is by wavelet decomposition represented by a trans-D Bayesian tree structure. Wavelet coefficients are sampled by a reversible jump algorithm and additional coefficients are only included when required by the data. Therefore, source complexity is consistent with data information (parsimonious) and the method can adapt locally in both time and space. Since the source complexity is unknown and locally adapts, no regularization is required, resulting in more meaningful displacement magnitudes. By estimating displacement uncertainties in a Bayesian framework we can study the effect of parametrization choice on the source estimate. Uncertainty arises from observation errors and limitations in the parametrization to fully explain the observations. As a result, parametrization choice is closely related to uncertainty estimation and profoundly affects inversion results. Therefore, parametrization selection should be included in the inference process. Our inversion method is based on Bayesian model selection, a process which includes the choice of parametrization in the inference process and makes it data driven. A trans-dimensional (trans-D) model for the spatio-temporal discretization is applied here to include model selection naturally and efficiently in the inference by sampling probabilistically over parametrizations. The trans-D process results in better uncertainty estimates since the parametrization adapts parsimoniously (in both time and space) according to the local data resolving power and the uncertainty about the parametrization choice is included in the uncertainty estimates. We apply the method to the tsunami waveforms recorded for the great 2011 Japan tsunami. All data are recorded on high-quality sensors (ocean-bottom pressure sensors, GPS gauges, and DART buoys). The sea-surface Green's functions are computed by JAGURS and include linear dispersion effects. By treating the noise level at each gauge as unknown, individual gauge contributions to the source estimate are appropriately and objectively weighted. The results show previously unreported detail of the source, quantify uncertainty spatially, and produce excellent data fits. The source estimate shows an elongated peak trench-ward from the hypocentre that closely follows the trench, indicating significant sea-floor deformation near the trench. Also notable is a bi-modal (negative to positive) displacement feature in the northern part of the source near the trench. The feature has ~2 m amplitude and is clearly resolved by the data with low uncertainties.
Approximate Bayesian estimation of extinction rate in the Finnish Daphnia magna metapopulation.
Robinson, John D; Hall, David W; Wares, John P
2013-05-01
Approximate Bayesian computation (ABC) is useful for parameterizing complex models in population genetics. In this study, ABC was applied to simultaneously estimate parameter values for a model of metapopulation coalescence and test two alternatives to a strict metapopulation model in the well-studied network of Daphnia magna populations in Finland. The models shared four free parameters: the subpopulation genetic diversity (θS), the rate of gene flow among patches (4Nm), the founding population size (N0) and the metapopulation extinction rate (e) but differed in the distribution of extinction rates across habitat patches in the system. The three models had either a constant extinction rate in all populations (strict metapopulation), one population that was protected from local extinction (i.e. a persistent source), or habitat-specific extinction rates drawn from a distribution with specified mean and variance. Our model selection analysis favoured the model including a persistent source population over the two alternative models. Of the closest 750,000 data sets in Euclidean space, 78% were simulated under the persistent source model (estimated posterior probability = 0.769). This fraction increased to more than 85% when only the closest 150,000 data sets were considered (estimated posterior probability = 0.774). Approximate Bayesian computation was then used to estimate parameter values that might produce the observed set of summary statistics. Our analysis provided posterior distributions for e that included the point estimate obtained from previous data from the Finnish D. magna metapopulation. Our results support the use of ABC and population genetic data for testing the strict metapopulation model and parameterizing complex models of demography. © 2013 Blackwell Publishing Ltd.
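A minimal ABC rejection sketch in the same spirit: simulate summary statistics under competing models, keep the simulations closest to the observed statistics, and read posterior model probabilities off the retained fraction. The toy simulators, priors and statistics below are invented for illustration, not the Daphnia metapopulation models.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200_000

def simulate(model, n):
    """Toy simulators returning two summary statistics per draw."""
    e = rng.uniform(0.0, 0.5, n)                    # extinction-rate prior
    if model == "strict":
        s1 = rng.normal(1.0 - e, 0.05)
        s2 = rng.normal(0.5 * e, 0.05)
    else:  # "persistent source": one patch never goes extinct
        s1 = rng.normal(1.0 - 0.7 * e, 0.05)
        s2 = rng.normal(0.5 * e + 0.1, 0.05)
    return np.column_stack([s1, s2]), e

obs = np.array([0.88, 0.21])                        # "observed" statistics

dists, rates, labels = [], [], []
for m in ("strict", "persistent"):
    s, e = simulate(m, N)
    dists.append(np.linalg.norm(s - obs, axis=1))   # Euclidean distance to obs
    rates.append(e)
    labels += [m] * N
dists, rates, labels = np.concatenate(dists), np.concatenate(rates), np.array(labels)

keep = dists <= np.quantile(dists, 0.01)            # retain the closest 1%
for m in ("strict", "persistent"):
    print(m, "posterior probability ~", round(np.mean(labels[keep] == m), 3))
print("posterior mean extinction rate ~", round(rates[keep].mean(), 3))
```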
NASA Astrophysics Data System (ADS)
Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martì Molist, Joan
2017-06-01
We test an innovative inversion scheme using Green's functions from an array of pressure sources embedded in finite-element method (FEM) models to image, without assuming an a-priori geometry, the composite and complex shape of a volcano deformation source. We invert interferometric synthetic aperture radar (InSAR) data to estimate the pressurization and shape of the magma reservoir of Rabaul caldera, Papua New Guinea. The results image the extended shallow magmatic system responsible for a broad and long-term subsidence of the caldera between 2007 February and 2010 December. Elastic FEM solutions are integrated into the regularized linear inversion of InSAR data of volcano surface displacements in order to obtain a 3-D image of the source of deformation. The Green's function matrix is constructed from a library of forward line-of-sight displacement solutions for a grid of cubic elementary deformation sources. Each source is sequentially generated by removing the corresponding cubic elements from a common meshed domain and simulating the injection of a fluid mass flux into the cavity, which results in a pressurization and volumetric change of the fluid-filled cavity. The use of a single mesh for the generation of all FEM models avoids the computationally expensive process of non-linear inversion and remeshing a variable geometry domain. Without assuming an a-priori source geometry other than the configuration of the 3-D grid that generates the library of Green's functions, the geodetic data dictate the geometry of the magma reservoir as a 3-D distribution of pressure (or flux of magma) within the source array. The inversion of InSAR data of Rabaul caldera shows a distribution of interconnected sources forming an amorphous, shallow magmatic system elongated under two opposite sides of the caldera. The marginal areas at the sides of the imaged magmatic system are the possible feeding reservoirs of the ongoing Tavurvur volcano eruption of andesitic products on the east side and of the past Vulcan volcano eruptions of more evolved materials on the west side. The interconnection and spatial distributions of sources correspond to the petrography of the volcanic products described in the literature and to the dynamics of the single and twin eruptions that characterize the caldera. The ability to image the complex geometry of deformation sources in both space and time can improve our ability to monitor active volcanoes, widen our understanding of the dynamics of active volcanic systems and improve the predictions of eruptions.
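At its core the scheme is a regularized linear inversion of surface displacements against a library of unit-source Green's functions. The dense-matrix sketch below uses a random synthetic Green's matrix in place of the FEM-derived one, and zeroth-order Tikhonov damping as a generic stand-in for the paper's regularization.

```python
import numpy as np

rng = np.random.default_rng(5)
n_obs, n_src = 500, 120          # InSAR LOS samples, elementary cubic sources

# Green's function matrix: LOS displacement at each pixel per unit
# pressurization of each elementary source (synthetic stand-in for FEM output).
G = rng.normal(size=(n_obs, n_src))
m_true = np.zeros(n_src)
m_true[40:55] = 1.0               # a compact pressurized cluster
d = G @ m_true + rng.normal(0, 0.5, n_obs)

# Zeroth-order Tikhonov (damped least squares): m = (G^T G + a^2 I)^-1 G^T d
alpha = 5.0
m_est = np.linalg.solve(G.T @ G + alpha**2 * np.eye(n_src), G.T @ d)
print("indices of the five largest recovered sources:", np.argsort(m_est)[-5:])
```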
Nicole Lautze
2015-12-15
Gravity model for the state of Hawaii. Data is from the following source: Flinders, A.F., Ito, G., Garcia, M.O., Sinton, J.M., Kauahikaua, J.P., and Taylor, B., 2013, Intrusive dike complexes, cumulate cores, and the extrusive growth of Hawaiian volcanoes: Geophysical Research Letters, v. 40, p. 3367–3373, doi:10.1002/grl.50633.
The value of qualitative conclusions for the interpretation of Super Soft Source grating spectra
NASA Astrophysics Data System (ADS)
Ness, J.
2017-10-01
High-resolution (grating) X-ray spectra of Super Soft Sources (SSS) contain a large amount of information. Main-stream interpretation approaches apply radiation transport models that, if uniquely constrained by the data, would provide information about the temperature and mass of the underlying white dwarf and the chemical composition of the ejecta. The complexity of the grating spectra has so far prohibited unique conclusions because realistic effects such as inhomogeneous density distribution, asymmetric ejecta, expansion, etc. open up an almost infinite number of dimensions to the problem. Further development of models is no doubt needed, but unbiased inspection of the observed spectra is required to narrow down where new developments are needed. In this presentation I illustrate how much we can already conclude without any models and recall the value of qualitative conclusions. I show examples of past and recent observations and how comparisons with other observations help us to reveal common mechanisms. Despite the high degree of complexity, some astonishing similarities between very different systems are found, which can guide the development of new models.
Chouet, B.
1988-01-01
A dynamic source model is presented, in which a 3-D crack containing a viscous compressible fluid is excited into resonance by an impulsive pressure transient applied over a small area ΔS of the crack surface. The crack excitation depends critically on two dimensionless parameters called the crack stiffness and viscous damping loss. According to the model, the long-period event and harmonic tremor share the same source but differ in the boundary conditions for fluid flow and in the triggering mechanism setting up the resonance of the source, the former being viewed as the impulse response of the tremor generating system and the latter representing the excitation due to more complex forcing functions.
On the role of the radiation directivity in noise reduction for STOL aircraft.
NASA Technical Reports Server (NTRS)
Gruschka, H. D.
1972-01-01
The radiation characteristics of distributed randomly fluctuating acoustic sources when shielded by finite surfaces are discussed briefly. A number of model tests using loudspeakers as artificial noise sources with a given broadband power density spectrum are used to demonstrate the effectiveness of reducing the radiated noise intensity in certain directions due to shielding. In the lateral direction of the source array noise reductions of 12 dB are observed with relatively small shields. The same shields reduce the backward radiation by approximately 20 dB. With the results obtained in these acoustic model tests the potentials of jet noise reduction of jet flap propulsion systems applicable in future STOL aircraft are discussed. The jet flap configuration as a complex aerodynamic noise source is described briefly.
Geophysical study of the San Juan Mountains batholith complex, southwestern Colorado
Drenth, Benjamin J.; Keller, G. Randy; Thompson, Ren A.
2012-01-01
One of the largest and most pronounced gravity lows over North America is over the rugged San Juan Mountains of southwestern Colorado (USA). The mountain range is coincident with the San Juan volcanic field (SJVF), the largest erosional remnant of a widespread mid-Cenozoic volcanic field that spanned much of the southern Rocky Mountains. A buried, low-density silicic batholith complex related to the volcanic field has been the accepted interpretation of the source of the gravity low since the 1970s. However, this interpretation was based on gravity data processed with standard techniques that are problematic in the SJVF region. The combination of high-relief topography, topography with low densities, and the use of a common reduction density of 2670 kg/m3 produces spurious large-amplitude gravity lows that may distort the geophysical signature of deeper features such as a batholith complex. We applied an unconventional processing procedure that uses geologically appropriate densities for the uppermost crust and digital topography to mostly remove the effect of the low-density units that underlie the topography associated with the SJVF. This approach resulted in a gravity map that provides an improved representation of deeper sources, including reducing the amplitude of the anomaly attributed to a batholith complex. We also reinterpreted vintage seismic refraction data that indicate the presence of low-velocity zones under the SJVF. Assuming that the source of the gravity low on the improved gravity anomaly map is the same as the source of the low seismic velocities, integrated modeling corroborates the interpretation of a batholith complex and then defines the dimensions and overall density contrast of the complex. Models show that the thickness of the batholith complex varies laterally to a significant degree, with the greatest thickness (∼20 km) under the western SJVF, and lesser thicknesses (<10 km) under the eastern SJVF. The largest group of nested calderas on the surface of the SJVF, the central caldera cluster, is not correlated with the thickest part of the batholith complex. This result is consistent with petrologic interpretations from recent studies that the batholith complex continued to be modified after cessation of volcanism and therefore is not necessarily representative of synvolcanic magma chambers. The total volume of the batholith complex is estimated to be 82,000–130,000 km3. The formation of such a large felsic batholith complex would inevitably involve production of a considerably greater volume of residuum, which could be present in the lower crust or uppermost mantle. The interpreted vertically averaged density contrast (–60 to –110 kg/m3), density (2590–2640 kg/m3), and seismic expression of the batholith complex are consistent with results of geophysical studies of other large batholiths in the western United States.
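The sensitivity of the anomaly to the reduction density can be illustrated with the simple (infinite-slab) Bouguer correction, delta_g = 2*pi*G*rho*h. The sketch below compares the standard 2670 kg/m3 with a lower, geologically motivated density at a hypothetical 3000 m station, showing how the choice shifts the computed anomaly by tens of mGal at high elevations.

```python
# Simple Bouguer slab correction: delta_g = 2 * pi * G * rho * h
G_CONST = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MGAL = 1e5                   # 1 m/s^2 = 1e5 mGal
PI = 3.141592653589793

def bouguer_correction_mgal(density_kg_m3, elevation_m):
    return 2.0 * PI * G_CONST * density_kg_m3 * elevation_m * MGAL

elev = 3000.0                                  # hypothetical station elevation (m)
for rho in (2670.0, 2400.0):                   # standard vs. low-density volcanic fill
    print(f"rho={rho:.0f} kg/m3 -> slab correction = "
          f"{bouguer_correction_mgal(rho, elev):.1f} mGal")
```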
Policy Transfer via Markov Logic Networks
NASA Astrophysics Data System (ADS)
Torrey, Lisa; Shavlik, Jude
We propose using a statistical-relational model, the Markov Logic Network, for knowledge transfer in reinforcement learning. Our goal is to extract relational knowledge from a source task and use it to speed up learning in a related target task. We show that Markov Logic Networks are effective models for capturing both source-task Q-functions and source-task policies. We apply them via demonstration, which involves using them for decision making in an initial stage of the target task before continuing to learn. Through experiments in the RoboCup simulated-soccer domain, we show that transfer via Markov Logic Networks can significantly improve early performance in complex tasks, and that transferring policies is more effective than transferring Q-functions.
NASA Astrophysics Data System (ADS)
Härer, Stefan; Bernhardt, Matthias; Gutmann, Ethan; Bauer, Hans-Stefan; Schulz, Karsten
2017-04-01
Until recently, a large gap existed in atmospheric downscaling strategies. On the one hand, computationally efficient statistical approaches are widely used; on the other hand, dynamic but CPU-intensive numerical atmospheric models such as the Weather Research and Forecasting (WRF) model exist. The Intermediate Complexity Atmospheric Research (ICAR) model developed at NCAR (Boulder, Colorado, USA) addresses this gap by combining the strengths of both approaches: the process-based structure of a dynamic model and its applicability in a changing climate, as well as the speed of a parsimonious modelling approach, which facilitates the modelling of ensembles and provides a straightforward way to test new parametrization schemes and various input data sources. However, the ICAR model has not yet been tested in Europe or on gently undulating terrain. This study evaluates the ICAR model against WRF model runs in Central Europe for the first time, comparing a complete year of model results in the mesoscale Attert catchment (Luxembourg). In addition to these modelling results, we also describe the first implementation of ICAR on an Intel Phi architecture and subsequently perform speed tests between the Vienna cluster, a standard workstation and an Intel Phi coprocessor. Finally, the study gives an outlook on sensitivity studies using slightly different input data sources.
NASA Astrophysics Data System (ADS)
Pitarka, Arben; Mellors, Robert; Rodgers, Arthur; Vorobiev, Oleg; Ezzedine, Souheil; Matzel, Eric; Ford, Sean; Walter, Bill; Antoun, Tarabay; Wagoner, Jeffery; Pasyanos, Mike; Petersson, Anders; Sjogreen, Bjorn
2014-05-01
We investigate the excitation and propagation of far-field (epicentral distance larger than 20 m) seismic waves by analyzing and modeling ground motion from an underground chemical explosion recorded during the Source Physics Experiment (SPE), Nevada. The far-field recorded ground motion is characterized by complex features, such as large azimuthal variations in P- and S-wave amplitudes, as well as substantial energy on the tangential component of motion. Shear wave energy is also observed on the tangential component of the near-field motion (epicentral distance smaller than 20 m), suggesting that shear waves were generated at or very near the source. These features become more pronounced as the waves propagate away from the source. We address the shear wave generation during the explosion by modeling ground motion waveforms recorded in the frequency range 0.01-20 Hz, at distances of up to 1 km. We used a physics-based approach that combines hydrodynamic modeling of the source with anelastic modeling of wave propagation in order to separate the contributions of the source and of near-source wave scattering to shear motion generation. We found that wave propagation scattering caused by the near-source geological environment, including surface topography, contributes to enhancement of shear waves generated from the explosion source. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-06NA25946/NST11-NCNS-TM-EXP-PD15.
Zhang, Yimei; Li, Shuai; Wang, Fei; Chen, Zhuang; Chen, Jie; Wang, Liqun
2018-09-01
Toxicity of heavy metals from industrialization poses critical concern, and analysis of sources associated with potential human health risks is of unique significance. Assessing the human health risk of pollution sources (factored health risk) concurrently in the whole region and in sub-regions can provide more instructive information to protect specific potential victims. In this research, we establish a new expression model of human health risk based on quantitative analysis of source contributions at different spatial scales. The larger-scale grids and their spatial codes are used to initially identify the level of pollution risk, the type of pollution source and the sensitive population at high risk. The smaller-scale grids and their spatial codes are used to identify the contribution of various sources of pollution to each sub-region (larger grid) and to assess the health risks posed by each source for each sub-region. The results of the case study show that, for children (sensitive populations, taking school and residential areas as the major regions of activity), the major pollution sources are the abandoned lead-acid battery plant (ALP), traffic emissions and agricultural activity. The new models and results of this research present effective spatial information and a useful model for quantifying the hazards of source categories and human health risks at complex industrial sites in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.
3D printing the pterygopalatine fossa: a negative space model of a complex structure.
Bannon, Ross; Parihar, Shivani; Skarparis, Yiannis; Varsou, Ourania; Cezayirli, Enis
2018-02-01
The pterygopalatine fossa is one of the most complex anatomical regions to understand. It is poorly visualized in cadaveric dissection and most textbooks rely on schematic depictions. We describe our approach to creating a low-cost, 3D model of the pterygopalatine fossa, including its associated canals and foramina, using an affordable "desktop" 3D printer. We used open source software to create a volume render of the pterygopalatine fossa from axial slices of a head computerised tomography scan. These data were then exported to a 3D printer to produce an anatomically accurate model. The resulting 'negative space' model of the pterygopalatine fossa provides a useful and innovative aid for understanding the complex anatomical relationships of the pterygopalatine fossa. This model was designed primarily for medical students; however, it will also be of interest to postgraduates in ENT, ophthalmology, neurosurgery, and radiology. The technical process described may be replicated by other departments wishing to develop their own anatomical models whilst incurring minimal costs.
NASA Astrophysics Data System (ADS)
Nikkhoo, Mehdi; Walter, Thomas R.; Lundgren, Paul; Spica, Zack; Legrand, Denis
2016-04-01
The Azufre-Lastarria volcanic complex in the central Andes has been recognized as a major region of magma intrusion. Both deep and shallow inflating reservoirs, inferred through InSAR time series inversions, are the main sources of a multi-scale deformation accompanied by pronounced fumarolic activity. The possible interactions between these reservoirs, as well as the path of propagating fluids and the development of their pathways, however, have not been investigated. Results from recent seismic noise tomography in the area show localized zones of shear wave velocity anomalies, with a low shear wave velocity region at 1 km depth and another one at 4 km depth beneath Lastarria. Although the inferred shallow zone is in good agreement with the location of the shallow deformation source, the deep zone does not correspond to any deformation source in the area. Here, using the boundary element method (BEM), we have performed an in-depth continuum mechanical investigation of the available ascending and descending InSAR data. We modelled the deep source, taking into account the effect of topography and complex source geometry on the inversion. After calculating the stress field induced by this source, we apply Paul's criterion (a variation on Mohr-Coulomb failure) to identify locations that are prone to failure. We show that the locations of tensile and shear failure almost perfectly coincide with the shallow and deep anomalies identified by shear wave velocity, respectively. Based on the stress-change models we conjecture that the deep reservoir controls the development of shallower hydrothermal fluids; a hypothesis that can be tested and applied to other volcanoes.
NASA Astrophysics Data System (ADS)
Fischer, T.; Naumov, D.; Sattler, S.; Kolditz, O.; Walther, M.
2015-11-01
We offer a versatile workflow to convert geological models built with the Paradigm™ GOCAD© (Geological Object Computer Aided Design) software into the open-source VTU (Visualization Toolkit unstructured grid) format for usage in numerical simulation models. Tackling relevant scientific questions or engineering tasks often involves multidisciplinary approaches. Conversion workflows are needed as a way of communication between the diverse tools of the various disciplines. Our approach offers an open-source, platform-independent, robust, and comprehensible method that is potentially useful for a multitude of environmental studies. With two application examples in the Thuringian Syncline, we show how a heterogeneous geological GOCAD model including multiple layers and faults can be used for numerical groundwater flow modeling, in our case employing the OpenGeoSys open-source numerical toolbox for groundwater flow simulations. The presented workflow offers the chance to incorporate increasingly detailed data, utilizing the growing availability of computational power to simulate numerical models.
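Once the GOCAD geometry has been converted to nodes and cells, writing the VTU file itself is a short step; a minimal, hypothetical sketch using the open-source meshio Python library (not the authors' workflow, which additionally handles faults, multiple layers and property mapping; "MaterialIDs" is assumed here as the cell field consumed downstream):

```python
import numpy as np
import meshio

# Hypothetical tetrahedral mesh derived from a geological model:
# 'points' are node coordinates, 'tets' index into them, and a cell field
# stores the geological unit of each element.
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [1.0, 1.0, 1.0]])
tets = np.array([[0, 1, 2, 3],
                 [1, 2, 3, 4]])
unit_id = np.array([1, 2])            # e.g. two stratigraphic layers

mesh = meshio.Mesh(
    points,
    [("tetra", tets)],
    cell_data={"MaterialIDs": [unit_id]},   # one array per cell block
)
meshio.write("model.vtu", mesh)       # VTU file readable by ParaView and similar tools
```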
Model structures amplify uncertainty in predicted soil carbon responses to climate change.
Shi, Zheng; Crowell, Sean; Luo, Yiqi; Moore, Berrien
2018-06-04
Large model uncertainty in projected future soil carbon (C) dynamics has been well documented. However, our understanding of the sources of this uncertainty is limited. Here we quantify the uncertainties arising from model parameters, structures and their interactions, and how those uncertainties propagate through different models to projections of future soil carbon stocks. Both the vertically resolved model and the microbially explicit model project much greater uncertainty in response to climate change than the conventional soil C model, with both positive and negative C-climate feedbacks, whereas the conventional model consistently predicts a positive soil C-climate feedback. Our findings suggest that diverse model structures are necessary to increase confidence in soil C projections. However, the larger uncertainty in the complex models also suggests that we need to strike a balance between model complexity and the need to include diverse model structures in order to forecast soil C dynamics with high confidence and low uncertainty.
NASA Astrophysics Data System (ADS)
Wang, Guoqiang; A, Yinglan; Jiang, Hong; Fu, Qing; Zheng, Binghui
2015-01-01
Increasing water pollution in developing countries poses a significant threat to environmental health and human welfare. Understanding the spatial distribution and apportioning the sources of pollution are important for the efficient management of water resources. In this study, ten types of heavy metals were detected during 2010-2013 for all ambient samples and point source samples. A pollution assessment of the surficial sediment dataset based on the Enrichment Factor (EF) showed the surficial sediment was moderately contaminated. A comparison of the multivariate approach (principal component analysis/absolute principal component scores, PCA/APCS) and the chemical mass balance model (CMB) shows that the identification of sources and calculation of source contributions based on the CMB were more objective and acceptable when source profiles were known and source composition was complex. The results of source apportionment for surficial heavy metals, from both the PCA/APCS and CMB models, showed that the natural background (30%) was the most dominant contributor to the surficial heavy metals, followed by mining activities (29%). The contribution percentage of the natural background was negatively related to the degree of contamination. The peak concentrations of many heavy metals (Cu, Ba, Fe, As and Hg) were found in the middle layer of sediment, which is most likely a result of the development of industry beginning in the 1970s. However, the highest concentration of Pb appeared in the surficial sediment layer, which was most likely due to the sharp increase in traffic volume. The historical analysis of the sources based on the CMB showed that mining and the chemical industry are stable sources for all of the sections. A comparison of the change rates of source contributions over the years indicated that the composition of the materials at the estuary site (HF1) is sensitive to the input from the land, whereas the center site (HF4) has a buffering effect on the materials from the land through a series of complex movements. These results provide information for the development of improved pollution control strategies for lakes and reservoirs.
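In its simplest least-squares form, the chemical mass balance step amounts to finding non-negative source contributions whose weighted combination of known source profiles reproduces the measured receptor concentrations. A minimal sketch with invented profiles (not the study's dataset, which also weights by measurement uncertainty):

```python
import numpy as np
from scipy.optimize import nnls

metals = ["Cu", "Pb", "Zn", "As", "Hg"]
# Hypothetical source profiles: fraction of each metal per unit mass emitted
profiles = {
    "background": np.array([0.10, 0.05, 0.20, 0.02, 0.001]),
    "mining":     np.array([0.30, 0.15, 0.25, 0.10, 0.005]),
    "traffic":    np.array([0.05, 0.40, 0.30, 0.01, 0.001]),
}
A = np.column_stack(list(profiles.values()))         # receptor model: sample = A @ contributions
sample = np.array([0.18, 0.21, 0.27, 0.05, 0.003])   # hypothetical sediment concentrations

contrib, residual = nnls(A, sample)                  # non-negative least squares
shares = contrib / contrib.sum()
for name, s in zip(profiles, shares):
    print(f"{name}: {100 * s:.1f} % of apportioned mass")
```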
Exploring the effects of photon correlations from thermal sources on bacterial photosynthesis
NASA Astrophysics Data System (ADS)
Manrique, Pedro D.; Caycedo-Soler, Felipe; De Mendoza, Adriana; Rodríguez, Ferney; Quiroga, Luis; Johnson, Neil F.
Thermal light sources can produce photons with strong spatial correlations. We study the role that these correlations might potentially play in bacterial photosynthesis. Our findings show a relationship between the transversal distance between consecutive absorptions and the efficiency of the photosynthetic process. Furthermore, membranes where the clustering of core complexes (so-called RC-LH1) is high display a range where the organism profits maximally from the spatial correlation of the incoming light. By contrast, no maximum is found for membranes with low core-core clustering. We employ a detailed membrane model with state-of-the-art empirical inputs. Our results suggest that the organization of the membrane's antenna complexes may be well-suited to the spatial correlations present in a natural light source. Future experiments will be needed to test this prediction.
NASA Astrophysics Data System (ADS)
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value and science behind the models will be undermined. These two issues, i.e. the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping points) in the face of environmental and anthropogenic change (Perz, Muñoz-Carpena, Kiker and Holt, 2013), and, through Monte Carlo mapping of potential management activities onto the most important factors or processes, to steer the system towards behavioral (desirable) outcomes (Chu-Agor, Muñoz-Carpena et al., 2012).
On precisely modelling surface deformation due to interacting magma chambers and dykes
NASA Astrophysics Data System (ADS)
Pascal, Karen; Neuberg, Jurgen; Rivalta, Eleonora
2014-01-01
Combined data sets of InSAR and GPS allow us to observe surface deformation in volcanic settings. However, at the vast majority of volcanoes, a detailed 3-D structure that could guide the modelling of deformation sources is not available, due to the lack of tomography studies, for example. Therefore, volcano ground deformation due to magma movement in the subsurface is commonly modelled using simple point (Mogi) or dislocation (Okada) sources, embedded in a homogeneous, isotropic and elastic half-space. When data sets are too complex to be explained by a single deformation source, the magmatic system is often represented by a combination of these sources and their displacements fields are simply summed. By doing so, the assumption of homogeneity in the half-space is violated and the resulting interaction between sources is neglected. We have quantified the errors of such a simplification and investigated the limits in which the combination of analytical sources is justified. We have calculated the vertical and horizontal displacements for analytical models with adjacent deformation sources and have tested them against the solutions of corresponding 3-D finite element models, which account for the interaction between sources. We have tested various double-source configurations with either two spherical sources representing magma chambers, or a magma chamber and an adjacent dyke, modelled by a rectangular tensile dislocation or pressurized crack. For a tensile Okada source (representing an opening dyke) aligned or superposed to a Mogi source (magma chamber), we find the discrepancies with the numerical models to be insignificant (<5 per cent) independently of the source separation. However, if a Mogi source is placed side by side to an Okada source (in the strike-perpendicular direction), we find the discrepancies to become significant for a source separation less than four times the radius of the magma chamber. For horizontally or vertically aligned pressurized sources, the discrepancies are up to 20 per cent, which translates into surprisingly large errors when inverting deformation data for source parameters such as depth and volume change. Beyond 8 radii however, we demonstrate that the summation of analytical sources represents adjacent magma chambers correctly.
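For reference, the analytical building block combined in such models, the Mogi point source in a homogeneous elastic half-space, takes only a few lines; the simple summation of two sources below is exactly the approximation whose validity the paper quantifies (parameter values are invented):

```python
import numpy as np

def mogi_displacement(x, y, depth, dV, nu=0.25):
    """Surface displacement of a Mogi point source in an elastic half-space.

    x, y   : receiver coordinates relative to the source (m)
    depth  : source depth (m, positive down)
    dV     : volume change of the source (m^3)
    nu     : Poisson's ratio
    Returns (ux, uy, uz) in metres.
    """
    R3 = (x**2 + y**2 + depth**2) ** 1.5
    c = (1.0 - nu) / np.pi * dV
    return c * x / R3, c * y / R3, c * depth / R3

# Two adjacent chambers approximated by simple summation (the approach whose
# validity is quantified in the paper): displacement fields are just added,
# ignoring any mechanical interaction between the sources.
x = np.linspace(-10e3, 10e3, 201)
_, _, uz1 = mogi_displacement(x - 0.0, 0.0, 4e3, 1e6)
_, _, uz2 = mogi_displacement(x - 3e3, 0.0, 6e3, 2e6)
uz_total = uz1 + uz2
```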
NASA Astrophysics Data System (ADS)
Rendón, A.; Posada, J. A.; Salazar, J. F.; Mejia, J.; Villegas, J.
2016-12-01
Precipitation in the complex terrain of the tropical Andes of South America can be strongly reduced during El Niño events, with impacts on numerous societally relevant services, including hydropower generation, the main electricity source in Colombia. Simulating rainfall patterns and behavior in such areas of complex terrain has remained a challenge for regional climate models. Current data products such as ERA-Interim and other reanalysis and modelling products generally fail to correctly represent processes at the relevant scales. Here we assess the added value to ERA-Interim of dynamical downscaling using the WRF regional climate model, including a comparison of different cumulus parameterization schemes. We found that WRF improves the representation of precipitation during the dry season (DJF) of El Niño events over a 1996-2014 observation period. Further, we use this improved capability to simulate an extreme deforestation scenario under El Niño conditions for an area in the central Andes of Colombia, where a large proportion of the country's hydropower is generated. Our results suggest that forests dampen the effects of El Niño on precipitation. In synthesis, our results illustrate the utility of regional modelling to improve data sources, as well as its potential for predicting the local-to-regional effects of global-change-type processes in regions with limited data availability.
The Pathway for Oxygen: Tutorial Modelling on Oxygen Transport from Air to Mitochondrion
Bassingthwaighte, James B.; Raymond, Gary M.; Dash, Ranjan K.; Beard, Daniel A.; Nolan, Margaret
2016-01-01
The ‘Pathway for Oxygen’ is captured in a set of models describing quantitative relationships between fluxes and driving forces for the flux of oxygen from the external air source to the mitochondrial sink at cytochrome oxidase. The intervening processes involve convection, membrane permeation, diffusion of free and heme-bound O2 and enzymatic reactions. While this system’s basic elements are simple: ventilation, alveolar gas exchange with blood, circulation of the blood, perfusion of an organ, uptake by tissue, and consumption by chemical reaction, integration of these pieces quickly becomes complex. This complexity led us to construct a tutorial on the ideas and principles; these first PathwayO2 models are simple but quantitative and cover: 1) a ‘one-alveolus lung’ with airway resistance, lung volume compliance, 2) bidirectional transport of solute gasses like O2 and CO2, 3) gas exchange between alveolar air and lung capillary blood, 4) gas solubility in blood, and circulation of blood through the capillary syncytium and back to the lung, and 5) blood-tissue gas exchange in capillaries. These open-source models are at Physiome.org and provide background for the many respiratory models there. PMID:26782201
Systems biology by the rules: hybrid intelligent systems for pathway modeling and discovery.
Bosl, William J
2007-02-15
Expert knowledge in journal articles is an important source of data for reconstructing biological pathways and creating new hypotheses. An important need for medical research is to integrate this data with high-throughput sources to build useful models that span several scales. Researchers traditionally use mental models of pathways to integrate information and develop new hypotheses. Unfortunately, the amount of information is often overwhelming and such mental models are inadequate for predicting the dynamic response of complex pathways. Hierarchical computational models that allow exploration of semi-quantitative dynamics are useful systems biology tools for theoreticians, experimentalists and clinicians and may provide a means for cross-communication. A novel approach for biological pathway modeling based on hybrid intelligent systems or soft computing technologies is presented here. Hybrid intelligent systems, which refer to several related computing methods such as fuzzy logic, neural nets, genetic algorithms, and statistical analysis, have become ubiquitous in engineering applications for complex control system modeling and design. Biological pathways may be considered to be complex control systems, which medicine tries to manipulate to achieve desired results. Thus, hybrid intelligent systems may provide a useful tool for modeling biological system dynamics and computational exploration of new drug targets. A new modeling approach based on these methods is presented in the context of hedgehog regulation of the cell cycle in granule cells. Code and input files can be found at the Bionet website: www.chip.ord/~wbosl/Software/Bionet. This paper presents the algorithmic methods needed for modeling complicated biochemical dynamics using rule-based models to represent expert knowledge in the context of cell cycle regulation and tumor growth. A notable feature of this modeling approach is that it allows biologists to build complex models from their knowledge base without the need to translate that knowledge into mathematical form. Dynamics on several levels, from molecular pathways to tissue growth, are seamlessly integrated. A number of common network motifs are examined and used to build a model of hedgehog regulation of the cell cycle in cerebellar neurons, which is believed to play a key role in the etiology of medulloblastoma, a devastating childhood brain cancer.
Viner, Brian J.; Arritt, Raymond W.; Westgate, Mark E.
2017-03-29
Complex terrain creates small-scale circulations which affect pollen dispersion but may be missed by meteorological observing networks and coarse-grid meteorological models. On volcanic islands, these circulations result from differing rates of surface heating between land and sea as well as rugged terrain. We simulated the transport of bentgrass, ryegrass, and maize pollen from 30 sources within the agricultural regions of the Hawaiian island Kaua’i during climatological conditions spanning seasonal conditions and the La Niña, El Niño, and neutral phases of the El Niño-Southern Oscillation. Both pollen size and source location had major effects on predicted dispersion over and near the island. Three patterns of pollen dispersion were identified in response to prevailing wind conditions: southwest winds transported pollen inland, funneling pollen grains through valleys; east winds transported pollen over the ocean, with dispersive tails for the smallest pollen grains following the mean wind and extending as far as the island of Ni’ihau 35 km away; and northeast winds moved pollen inland counter to the prevailing flow due to a sea breeze circulation that formed over the source region. These results are the first to predict the interactions between complex island terrain and local climatology on grass pollen dispersion. As a result, they demonstrate how numerical modeling can provide guidance for field trials by illustrating the common flow regimes present in complex terrain, allowing field trials to focus on areas where successful sampling is more likely to occur.
NASA Astrophysics Data System (ADS)
Krechowicz, Maria
2017-10-01
Nowadays, one of the characteristic features of the construction industry is the increased complexity of a growing number of projects. Almost every construction project is unique, with its project-specific purpose, its own structural complexity, owner’s expectations, ground conditions unique to a certain location, and its own dynamics. Failure costs and costs resulting from unforeseen problems in complex construction projects are very high. Project complexity drivers pose many vulnerabilities to the successful completion of a number of projects. This paper discusses the process of effective risk management in complex construction projects in which renewable energy sources were used, using the example of the realization phase of the ENERGIS teaching-laboratory building, from the point of view of DORBUD S.A., its general contractor. This paper suggests a new approach to risk management for complex construction projects in which renewable energy sources are applied. The risk management process was divided into six stages: gathering information, identification of the top critical project risks resulting from the project complexity, construction of a fault tree for each top critical risk, logical analysis of the fault tree, quantitative risk assessment applying fuzzy logic, and development of a risk response strategy. A new methodology for the qualitative and quantitative assessment of top critical risks in complex construction projects was developed. Risk assessment was carried out applying fuzzy fault tree analysis to the example of one top critical risk. Application of fuzzy set theory to the proposed model made it possible to decrease uncertainty and to eliminate the problem of obtaining crisp values for basic event probabilities, which is common in expert risk assessment aimed at giving an exact risk score for each unwanted event.
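The fuzzy fault-tree step can be illustrated with triangular fuzzy probabilities and the usual AND/OR gate algebra applied component-wise to the (low, mode, high) values; this is a common approximation (the product of triangular numbers is not exactly triangular), and the events and numbers below are hypothetical, not those of the ENERGIS project:

```python
import numpy as np

# Triangular fuzzy probabilities (low, mode, high) for hypothetical basic events
p_design_error   = np.array([0.01, 0.03, 0.06])
p_supplier_delay = np.array([0.05, 0.10, 0.20])
p_install_fault  = np.array([0.02, 0.04, 0.08])

def gate_and(*events):
    """AND gate: all events must occur (independence assumed)."""
    out = np.ones(3)
    for p in events:
        out = out * p
    return out

def gate_or(*events):
    """OR gate: at least one event occurs (independence assumed)."""
    out = np.ones(3)
    for p in events:
        out = out * (1.0 - p)
    return 1.0 - out

# Hypothetical top event: installation fails AND (design error OR supplier delay)
low, mode, high = gate_and(p_install_fault, gate_or(p_design_error, p_supplier_delay))
print(f"top-event probability: low={low:.4f}, mode={mode:.4f}, high={high:.4f}")
```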
The Prodiguer Messaging Platform
NASA Astrophysics Data System (ADS)
Greenslade, Mark; Denvil, Sebastien; Raciazek, Jerome; Carenton, Nicolas; Levavasseur, Guillame
2014-05-01
CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output (data and meta-data) are just some of the complexities that CONVERGENCE aims to resolve. The Institut Pierre Simon Laplace (IPSL) is responsible for running climate simulations upon a set of heterogeneous HPC environments within France. With heterogeneity comes added complexity in terms of simulation instrumentation and control. Obtaining a global perspective upon the state of all simulations running upon all HPC environments has hitherto been problematic. In this presentation we detail how, within the context of CONVERGENCE, the implementation of the Prodiguer messaging platform resolves complexity and permits the development of real-time applications such as: 1. a simulation monitoring dashboard; 2. a simulation metrics visualizer; 3. an automated simulation runtime notifier; 4. an automated output data & meta-data publishing pipeline. The Prodiguer messaging platform leverages a widely used open source message broker software called RabbitMQ. RabbitMQ itself implements the Advanced Message Queuing Protocol (AMQP). Hence it will be demonstrated that the Prodiguer messaging platform is built upon both open source software and open standards.
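The kind of message publishing such a platform relies on can be sketched with the Python pika client for RabbitMQ; the queue name and payload below are invented for illustration and are not Prodiguer's actual message schema:

```python
import json
import pika

# Connect to a local RabbitMQ broker (host and credentials are illustrative)
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="simulation.status", durable=True)

message = {
    "simulation_id": "ipsl-cm-demo-001",   # hypothetical identifiers
    "hpc_site": "site-a",
    "state": "running",
    "timestep": 4821,
}
channel.basic_publish(
    exchange="",                            # default exchange routes by queue name
    routing_key="simulation.status",
    body=json.dumps(message),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```

A monitoring dashboard would simply consume from the same queue and update its view as messages arrive.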
Role of Microenvironment in Glioma Invasion: What We Learned from In Vitro Models
Manini, Ivana; Caponnetto, Federica; Bartolini, Anna; Ius, Tamara; Mariuzzi, Laura; Di Loreto, Carla; Cesselli, Daniela
2018-01-01
The invasion properties of glioblastoma hamper radical surgery and are responsible for its recurrence. Understanding the invasion mechanisms is thus critical to devise new therapeutic strategies. Therefore, the creation of in vitro models that enable these mechanisms to be studied represents a crucial step. Since in vitro models represent an over-simplification of the in vivo system, in recent years attempts have been made to increase the level of complexity of in vitro assays to create models that could better mimic the behaviour of the cells in vivo. These levels of complexity involved: 1. the dimension of the system, moving from two-dimensional to three-dimensional models; 2. the use of microfluidic systems; 3. the use of mixed cultures of tumour cells and cells of the tumour micro-environment in order to mimic the complex cross-talk between tumour cells and their micro-environment; 4. the source of cells used, in an attempt to move from commercial lines to patient-based models. In this review, we will summarize the evidence obtained exploring these different levels of complexity, highlighting the advantages and limitations of each system used. PMID:29300332
Phase-field simulations of GaN growth by selective area epitaxy on complex mask geometries
Aagesen, Larry K.; Coltrin, Michael Elliott; Han, Jung; ...
2015-05-15
Three-dimensional phase-field simulations of GaN growth by selective area epitaxy were performed. The model includes a crystallographic-orientation-dependent deposition rate and arbitrarily complex mask geometries. The orientation-dependent deposition rate can be determined from experimental measurements of the relative growth rates of low-index crystallographic facets. Growth on various complex mask geometries was simulated on both c-plane and a-plane template layers. Agreement was observed between simulations and experiment, including complex phenomena occurring at the intersections between facets. The sources of the discrepancies between simulated and experimental morphologies were also investigated. We found that the model provides a route to optimize masks and processing conditions during materials synthesis for solar cells, light-emitting diodes, and other electronic and opto-electronic applications.
USDA-ARS?s Scientific Manuscript database
Large uncertainties for landfill CH4 emissions due to spatial and temporal variabilities remain unresolved by short-term field campaigns and historic GHG inventory models. Using four field methods (aircraft-based mass balance, tracer correlation, vertical radial plume mapping, and static chambers) ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Middleton, Richard Stephen
2017-05-22
This presentation is part of the US-China Clean Coal project and describes the impact of power plant cycling, techno-economic modeling of combined IGCC and CCS, integrated capacity-generation decision making for power utilities, and a new decision support tool for integrated assessment of CCUS.
Stan: A Probabilistic Programming Language for Bayesian Inference and Optimization
ERIC Educational Resources Information Center
Gelman, Andrew; Lee, Daniel; Guo, Jiqiang
2015-01-01
Stan is a free and open-source C++ program that performs Bayesian inference or optimization for arbitrary user-specified models and can be called from the command line, R, Python, Matlab, or Julia and has great promise for fitting large and complex statistical models in many areas of application. We discuss Stan from users' and developers'…
A Counterexample Guided Abstraction Refinement Framework for Verifying Concurrent C Programs
2005-05-24
…source code are routinely executed. The source code is written in languages ranging from C/C++/Java to ML/OCaml. These languages differ not only in… …from the difficulty to model computer programs—due to the complexity of programming languages as compared to hardware description languages—to… …intermediate specification language lying between high-level Statechart-like formalisms and transition systems. Actions are encoded as changes in…
2010-04-30
…previous and current complex SW development efforts, the program offices will have a source of objective lessons learned and metrics that can be applied…
NASA Astrophysics Data System (ADS)
Lutz, Stefanie; Van Breukelen, Boris
2014-05-01
Natural attenuation can represent a complementary or alternative approach to engineered remediation of polluted sites. In this context, compound specific stable isotope analysis (CSIA) has proven a useful tool, as it can provide evidence of natural attenuation and assess the extent of in-situ degradation based on changes in isotope ratios of pollutants. Moreover, CSIA can allow for source identification and apportionment, which might help to identify major emission sources in complex contamination scenarios. However, degradation and mixing processes in aquifers can lead to changes in isotopic compositions, such that their simultaneous occurrence might complicate combined source apportionment (SA) and assessment of the extent of degradation (ED). We developed a mathematical model (stable isotope sources and sinks model; SISS model) based on the linear stable isotope mixing model and the Rayleigh equation that allows for simultaneous SA and quantification of the ED in a scenario of two emission sources and degradation via one reaction pathway. It was shown that the SISS model with CSIA of at least two elements contained in the pollutant (e.g., C and H in benzene) allows for unequivocal SA even in the presence of degradation-induced isotope fractionation. In addition, the model enables precise quantification of the ED provided degradation follows instantaneous mixing of two sources. If mixing occurs after two sources have degraded separately, the model can still yield a conservative estimate of the overall extent of degradation. The SISS model was validated against virtual data from a two-dimensional reactive transport model. The model results for SA and ED were in good agreement with the simulation results. The application of the SISS model to field data of benzene contamination was, however, challenged by large uncertainties in measured isotope data. Nonetheless, the use of the SISS model provided a better insight into the interplay of mixing and degradation processes at the field site, as it revealed the prevailing contribution of one emission source and a low overall ED. The model can be extended to a larger number of sources and sinks. It may aid in forensics and natural attenuation assessment of soil, groundwater, surface water, or atmospheric pollution.
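The core idea, combining a linear two-source mixing balance with the Rayleigh equation, becomes a small linear solve when two elements are measured and the sources are assumed to mix before degrading; the sketch below uses invented benzene isotope values and enrichment factors, not the published SISS model code:

```python
import numpy as np

# Hypothetical benzene sources and sample (values in permil), and enrichment factors
dC1, dH1 = -28.0, -70.0     # source 1 signature (delta 13C, delta 2H)
dC2, dH2 = -24.0, -110.0    # source 2 signature
eps_C, eps_H = -2.0, -30.0  # enrichment factors of the degradation pathway
dC_s, dH_s = -25.0, -80.0   # measured sample signature

# Scenario: the two sources mix instantaneously, then the mixture degrades.
# delta_sample = x*delta_1 + (1-x)*delta_2 + eps*ln(f) for each element,
# which is linear in the unknowns x (source-1 fraction) and ln(f).
A = np.array([[dC1 - dC2, eps_C],
              [dH1 - dH2, eps_H]])
b = np.array([dC_s - dC2, dH_s - dH2])
x, ln_f = np.linalg.solve(A, b)

print(f"fraction of source 1 : {x:.2f}")
print(f"extent of degradation: {1.0 - np.exp(ln_f):.2f}")
```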
Identifying the starting point of a spreading process in complex networks.
Comin, Cesar Henrique; Costa, Luciano da Fontoura
2011-11-01
When dealing with the dissemination of epidemics, one important question that can be asked is the location where the contamination began. In this paper, we analyze three spreading schemes and propose and validate an effective methodology for the identification of the source nodes. The method is based on the calculation of the centrality of the nodes on the sampled network, expressed here by degree, betweenness, closeness, and eigenvector centrality. We show that the source node tends to have the highest measurement values. The potential of the methodology is illustrated with respect to three theoretical complex network models as well as a real-world network, the email network of the University Rovira i Virgili.
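In practice, the proposed identification reduces to ranking the nodes of the sampled (infected) subnetwork by centrality and taking the most central node as the source estimate; a minimal sketch with networkx, using a BFS ball around a known node as a crude stand-in for a simulated spreading process:

```python
import networkx as nx

# Toy example: a contact network and the set of nodes observed as infected
G = nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=1)
infected = set(nx.single_source_shortest_path_length(G, source=0, cutoff=3))

# Work on the sampled network induced by the observed (infected) nodes
H = G.subgraph(infected)

measures = {
    "degree":      nx.degree_centrality(H),
    "betweenness": nx.betweenness_centrality(H),
    "closeness":   nx.closeness_centrality(H),
    "eigenvector": nx.eigenvector_centrality(H, max_iter=1000),
}
for name, cent in measures.items():
    candidate = max(cent, key=cent.get)   # most central node = source estimate
    print(f"{name:12s} -> candidate source: node {candidate}")
```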
Marbjerg, Gerd; Brunskog, Jonas; Jeong, Cheol-Ho; Nilsson, Erling
2015-09-01
A model, combining acoustical radiosity and the image source method, including phase shifts on reflection, has been developed. The model is denoted Phased Acoustical Radiosity and Image Source Method (PARISM), and it has been developed in order to be able to model both specular and diffuse reflections with complex-valued and angle-dependent boundary conditions. This paper mainly describes the combination of the two models and the implementation of the angle-dependent boundary conditions. It furthermore describes how a pressure impulse response is obtained from the energy-based acoustical radiosity by regarding the model as being stochastic. Three methods of implementation are proposed and investigated, and finally, recommendations are made for their use. Validation of the image source method is done by comparison with finite element simulations of a rectangular room with a porous absorber ceiling. Results from the full model are compared with results from other simulation tools and with measurements. The comparisons of the full model are done for real-valued and angle-independent surface properties. The proposed model agrees well with both the measured results and the alternative theories, and furthermore shows a more realistic spatial variation than energy-based methods due to the fact that interference is considered.
Implications of Biospheric Energization
NASA Astrophysics Data System (ADS)
Budding, Edd; Demircan, Osman; Gündüz, Güngör; Emin Özel, Mehmet
2016-07-01
Our physical model relating to the origin and development of lifelike processes from very simple beginnings is reviewed. This molecular ('ABC') process is compared with the chemoton model, noting the role of the autocatalytic tuning to the time-dependent source of energy. This substantiates the Darwinian character of the evolution. The system evolves from very simple beginnings to a progressively more highly tuned, energized and complex responding biosphere that grows exponentially, albeit with a very low net growth factor. Rates of growth and complexity in this evolution raise disturbing issues of inherent stability. Autocatalytic processes can include a fractal character in their development, allowing recapitulative effects to be observed. This property, in allowing similarities of pattern to be recognized, can be useful in interpreting complex (lifelike) systems.
Multiple sparse volumetric priors for distributed EEG source reconstruction.
Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan
2014-10-15
We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region growing approach. This extension allows brain structures beyond the cortical surface to be reconstructed and facilitates the use of more realistic volumetric head models that include more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrated the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter for each of the subjects, cortical regions were created and introduced as source priors for MSP inversion assuming two types of head models: the standard 3-layered scalp-skull-brain head models and extended 4-layered head models including CSF. We compared these models with the current implementation by assessing the free energy corresponding to each of the reconstructions using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation as it allows more complex head models and volumetric source priors to be introduced in future studies. Copyright © 2014 Elsevier Inc. All rights reserved.
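The region-growing step can be pictured as a plain breadth-first flood fill over the segmented gray-matter mask, starting from a seed voxel and stopping at a target region size; this is an illustrative stand-in, not the SPM/MSP implementation:

```python
import numpy as np
from collections import deque

def grow_region(gray_matter_mask, seed, max_voxels=500):
    """Grow a volumetric region inside a boolean gray-matter mask (6-connectivity)."""
    region = np.zeros_like(gray_matter_mask, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    count = 1
    while queue and count < max_voxels:
        x, y, z = queue.popleft()
        for dx, dy, dz in offsets:
            v = (x + dx, y + dy, z + dz)
            if all(0 <= v[i] < gray_matter_mask.shape[i] for i in range(3)) \
                    and gray_matter_mask[v] and not region[v]:
                region[v] = True
                count += 1
                queue.append(v)
    return region

# Hypothetical example: a random blob standing in for segmented gray matter
rng = np.random.default_rng(0)
mask = rng.random((64, 64, 64)) > 0.3
mask[32, 32, 32] = True                  # make sure the seed lies in "gray matter"
prior = grow_region(mask, seed=(32, 32, 32), max_voxels=800)
# Each such region would be introduced as one volumetric source prior for MSP.
```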
Simulation of tracer dispersion from elevated and surface releases in complex terrain
NASA Astrophysics Data System (ADS)
Hernández, J. F.; Cremades, L.; Baldasano, J. M.
A new version of an advanced mesoscale dispersion modeling system for simulating passive air pollutant dispersion in the real atmospheric planetary boundary layer (PBL) is presented. The system comprises a diagnostic mass-consistent meteorological model and a Lagrangian particle dispersion model (LADISMO). The former version of LADISMO, developed according to Zannetti (Air Pollution Modelling, 1990), was based on the Monte Carlo technique and included calculation of higher-order moments of vertical random forcing for convective conditions. Its ability to simulate complex flow dispersion was demonstrated in a previous paper (Hernández et al. 1995, Atmospheric Environment, 29A, 1331-1341). The new version follows Thomson's scheme (1984, Q. Jl Roy. Met. Soc. 110, 1107-1120). It is also based on the Langevin equation and follows the ideas given by Brusasca et al. (1992, Atmospheric Environment 26A, 707-723) and Anfossi et al. (1992, Nuovo Cimento 15c, 139-158). The model is used to simulate the dispersion and predict the ground level concentration (g.l.c.) of a tracer (SF6) released from both an elevated source (case a) and a ground level source (case b) in a highly complex mountainous terrain during neutral and synoptically dominated conditions (case a) and light and apparently stable conditions (case b). The last case is considered a specially difficult task to simulate. In fact, few works have reported situations with valley drainage flows in complex terrain and real stable atmospheric conditions with weak winds. The model assumes that nearly calm situations associated with strong stability and air stagnation make the lowest layers of the PBL poorly diffusive (Brusasca et al., 1992, Atmospheric Environment 26A, 707-723). Model results are verified against experimental data from the Guardo-90 tracer experiments, an intensive field campaign conducted in the Carrion river valley (Northern Spain) to study atmospheric diffusion within a steep-walled valley in mountainous terrain (Ibarra, 1992, Energia, No. 1, 74-85).
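The Lagrangian particle core of such a model reduces, for homogeneous turbulence, to a Langevin update of each particle's turbulent velocity plus a position update; the sketch below uses invented turbulence parameters and flat ground, nothing like the Guardo-90 terrain or LADISMO's inhomogeneous formulation:

```python
import numpy as np

rng = np.random.default_rng(42)

n_particles, n_steps, dt = 5000, 600, 1.0       # particles, steps, time step (s)
u_mean = 2.0                                    # mean along-valley wind (m/s), hypothetical
sigma_u, sigma_w, T_L = 0.6, 0.3, 100.0         # velocity std devs and Lagrangian time scale
source_height = 50.0                            # elevated release (m), "case a"-like

x = np.zeros(n_particles)
z = np.full(n_particles, source_height)
u_t = rng.normal(0.0, sigma_u, n_particles)     # turbulent velocity components
w = rng.normal(0.0, sigma_w, n_particles)

a = np.exp(-dt / T_L)                           # velocity autocorrelation over one step
bu = sigma_u * np.sqrt(1.0 - a**2)              # random forcing amplitudes
bw = sigma_w * np.sqrt(1.0 - a**2)

for _ in range(n_steps):
    u_t = a * u_t + bu * rng.normal(size=n_particles)   # Langevin updates
    w = a * w + bw * rng.normal(size=n_particles)
    x += (u_mean + u_t) * dt
    z += w * dt
    hit = z < 0.0                                        # perfect reflection at flat ground
    z[hit] = -z[hit]
    w[hit] = -w[hit]

# Crude ground-level concentration proxy: particles below 2 m, binned along the wind
glc_counts, edges = np.histogram(x[z < 2.0], bins=30)
```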
Energy Spectral Behaviors of Communication Networks of Open-Source Communities
Yang, Jianmei; Yang, Huijie; Liao, Hao; Wang, Jiangtao; Zeng, Jinqun
2015-01-01
Large-scale online collaborative production activities in open-source communities must be accompanied by large-scale communication activities. Nowadays, the production activities of open-source communities, and especially their communication activities, have attracted more and more attention. Taking the CodePlex C# community as an example, this paper constructs complex network models of 12 periods of the community's communication structure based on real data; then discusses the basic concepts of quantum mappings of complex networks, pointing out that the purpose of the mapping is to study the structures of complex networks following the ideas of quantum mechanics used in studying the structures of large molecules; and finally, following this idea, analyzes and compares the fractal features of the spectra of different quantum mappings of the networks, concluding that there are multiple self-similarity and criticality in the communication structures of the community. In addition, this paper discusses the insights offered by different quantum mappings, and the conditions under which they apply, in revealing the characteristics of the structures. The proposed quantum mapping method can also be applied to structural studies of other large-scale organizations. PMID:26047331
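The spectral side of such an analysis starts from the eigenvalue spectra of matrices built from the communication network; a toy sketch (invented reply edges, not the CodePlex data) computing adjacency and Laplacian spectra with networkx and numpy:

```python
import networkx as nx
import numpy as np

# Toy stand-in for one period of the community's communication structure:
# an edge (i, j) means developer i replied to developer j.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3), (1, 5)]
G = nx.Graph(edges)

A = nx.to_numpy_array(G)                 # adjacency matrix
L = nx.laplacian_matrix(G).toarray()     # graph Laplacian

adj_spectrum = np.sort(np.linalg.eigvalsh(A))
lap_spectrum = np.sort(np.linalg.eigvalsh(L))
print("adjacency spectrum:", np.round(adj_spectrum, 3))
print("Laplacian spectrum:", np.round(lap_spectrum, 3))
# Repeating this for each of the 12 periods and examining how the spectra and
# their level spacings scale is one way to probe self-similarity and criticality.
```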
Demodulation processes in auditory perception
NASA Astrophysics Data System (ADS)
Feth, Lawrence L.
1994-08-01
The long range goal of this project is the understanding of human auditory processing of information conveyed by complex, time-varying signals such as speech, music or important environmental sounds. Our work is guided by the assumption that human auditory communication is a 'modulation - demodulation' process. That is, we assume that sound sources produce a complex stream of sound pressure waves with information encoded as variations (modulations) of the signal amplitude and frequency. The listener's task then is one of demodulation. Much of past psychoacoustics work has been based on what we characterize as 'spectrum picture processing.' Complex sounds are Fourier analyzed to produce an amplitude-by-frequency 'picture' and the perception process is modeled as if the listener were analyzing the spectral picture. This approach leads to studies such as 'profile analysis' and the power-spectrum model of masking. Our approach leads us to investigate time-varying, complex sounds. We refer to them as dynamic signals, and we have developed auditory signal processing models to help guide our experimental work.
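The 'demodulation' view can be made concrete with the standard envelope and instantaneous-frequency extraction via the Hilbert transform; a minimal sketch on a synthetic AM tone (illustrative only, not a model of auditory processing):

```python
import numpy as np
from scipy.signal import hilbert

fs = 16000                                 # sample rate (Hz)
t = np.arange(0, 0.5, 1.0 / fs)
carrier_hz, mod_hz = 1000.0, 8.0
am_signal = (1.0 + 0.6 * np.sin(2 * np.pi * mod_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)

analytic = hilbert(am_signal)              # analytic signal
envelope = np.abs(analytic)                # amplitude modulation (AM) estimate
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) / (2 * np.pi) * fs   # frequency modulation (FM) estimate

print("recovered modulation depth ~", (envelope.max() - envelope.min()) / 2)
print("mean instantaneous frequency ~", inst_freq.mean(), "Hz")
```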
Studying light-harvesting models with superconducting circuits.
Potočnik, Anton; Bargerbos, Arno; Schröder, Florian A Y N; Khan, Saeed A; Collodo, Michele C; Gasparinetti, Simone; Salathé, Yves; Creatore, Celestino; Eichler, Christopher; Türeci, Hakan E; Chin, Alex W; Wallraff, Andreas
2018-03-02
The process of photosynthesis, the main source of energy in the living world, converts sunlight into chemical energy. The high efficiency of this process is believed to be enabled by an interplay between the quantum nature of molecular structures in photosynthetic complexes and their interaction with the environment. Investigating these effects in biological samples is challenging due to their complex and disordered structure. Here we experimentally demonstrate a technique for studying photosynthetic models based on superconducting quantum circuits, which complements existing experimental, theoretical, and computational approaches. We demonstrate a high degree of freedom in design and experimental control of our approach based on a simplified three-site model of a pigment protein complex with realistic parameters scaled down in energy by a factor of 10^5. We show that the excitation transport between quantum-coherent sites disordered in energy can be enabled through the interaction with environmental noise. We also show that the efficiency of the process is maximized for structured noise resembling intramolecular phononic environments found in photosynthetic complexes.
Ammonia formation by a thiolate-bridged diiron amide complex as a nitrogenase mimic
NASA Astrophysics Data System (ADS)
Li, Yang; Li, Ying; Wang, Baomin; Luo, Yi; Yang, Dawei; Tong, Peng; Zhao, Jinfeng; Luo, Lun; Zhou, Yuhan; Chen, Si; Cheng, Fang; Qu, Jingping
2013-04-01
Although nitrogenase enzymes routinely convert molecular nitrogen into ammonia under ambient temperature and pressure, this reaction is currently carried out industrially using the Haber-Bosch process, which requires extreme temperatures and pressures to activate dinitrogen. Biological fixation occurs through dinitrogen and reduced NxHy species at multi-iron centres of compounds bearing sulfur ligands, but it is difficult to elucidate the mechanistic details and to obtain stable model intermediate complexes for further investigation. Metal-based synthetic models have been applied to reveal partial details, although most models involve a mononuclear system. Here, we report a diiron complex bridged by a bidentate thiolate ligand that can accommodate HN=NH. Following reductions and protonations, HN=NH is converted to NH3 through pivotal intermediate complexes bridged by N2H3- and NH2- species. Notably, the final ammonia release was effected with water as the proton source. Density functional theory calculations were carried out, and a pathway of biological nitrogen fixation is proposed.
Alemi-Ardakani, M.; Milani, A. S.; Yannacopoulos, S.
2014-01-01
Impact modeling of fiber reinforced polymer composites is a complex and challenging task, in particular for practitioners with less experience in advanced coding and user-defined subroutines. Different numerical algorithms have been developed over the past decades for impact modeling of composites, yet a considerable gap often exists between predicted and experimental observations. In this paper, after a review of reported sources of complexities in impact modeling of fiber reinforced polymer composites, two simplified approaches are presented for fast simulation of out-of-plane impact response of these materials considering four main effects: (a) strain rate dependency of the mechanical properties, (b) difference between tensile and flexural bending responses, (c) delamination, and (d) the geometry of fixture (clamping conditions). In the first approach, it is shown that by applying correction factors to the quasistatic material properties, which are often readily available from material datasheets, the role of these four sources in modeling impact response of a given composite may be accounted for. As a result a rough estimation of the dynamic force response of the composite can be attained. To show the application of the approach, a twill woven polypropylene/glass reinforced thermoplastic composite laminate has been tested under 200 J impact energy and was modeled in Abaqus/Explicit via the built-in Hashin damage criteria. X-ray microtomography was used to investigate the presence of delamination inside the impacted sample. Finally, as a second and much simpler modeling approach it is shown that applying only a single correction factor over all material properties at once can still yield a reasonable prediction. Both advantages and limitations of the simplified modeling framework are addressed in the performed case study. PMID:25431787
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-Sheng R.; Allen, Christopher S.
2010-01-01
Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and to predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment were developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons were made with the model and showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand. This is opposed to earlier studies where Reference Sound Sources (RSS) with known sound power level were used. Comparisons of the modeling result with the measurements in the mockup showed excellent results. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between ECLSS wall and mockup wall. The effect of sealing the gap and adding sound absorptive treatment to ECLSS wall were also modeled and validated.
Nesting large-eddy simulations within mesoscale simulations for wind energy applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundquist, J K; Mirocha, J D; Chow, F K
2008-09-08
With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or higher are required. These time-dependent large-eddy simulations (LES), which resolve individual atmospheric eddies on length scales smaller than turbine blades and account for complex terrain, are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to 'local' sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect flow, suggesting that a mesoscale model provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved the WRF model's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosovic (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions, and to allow adequate spin-up of turbulence in the LES domain.
Pizzagalli, D; Lehmann, D; Gianotti, L; Koenig, T; Tanaka, H; Wackermann, J; Brugger, P
2000-12-22
The neurocognitive processes underlying the formation and maintenance of paranormal beliefs are important for understanding schizotypal ideation. Behavioral studies indicated that both schizotypal and paranormal ideation are based on an overreliance on the right hemisphere, whose coarse rather than focussed semantic processing may favor the emergence of 'loose' and 'uncommon' associations. To elucidate the electrophysiological basis of these behavioral observations, 35-channel resting EEG was recorded in pre-screened female strong believers and disbelievers during resting baseline. EEG data were subjected to FFT-Dipole-Approximation analysis, a reference-free frequency-domain dipole source modeling, and Regional (hemispheric) Omega Complexity analysis, a linear approach estimating the complexity of the trajectories of momentary EEG map series in state space. Compared to disbelievers, believers showed: more right-located sources of the beta2 band (18.5-21 Hz, excitatory activity); reduced interhemispheric differences in Omega complexity values; higher scores on the Magical Ideation scale; more general negative affect; and more hypnagogic-like reveries after a 4-min eyes-closed resting period. Thus, subjects differing in their declared paranormal belief displayed different active, cerebral neural populations during resting, task-free conditions. As hypothesized, believers showed relatively higher right hemispheric activation and reduced hemispheric asymmetry of functional complexity. These markers may constitute the neurophysiological basis for paranormal and schizotypal ideation.
Gravity profiles across the Uyaijah Ring structure, Kingdom of Saudi Arabia
Gettings, M.E.; Andreasen, G.E.
1987-01-01
The resulting structural model, based on profile fits to gravity responses of three-dimensional models and excess-mass calculations, gives a depth estimate to the base of the complex of 4.75 km. The contacts of the complex are inferred to be steeply dipping inward along the southwest margin of the structure. To the north and east, however, the basal contact of the complex dips more gently inward (about 30 degrees). The ring structure appears to be composed of three laccolith-shaped plutons; two are granitic in composition and make up about 85 percent of the volume of the complex, and one is granodioritic and comprises the remaining 15 percent. The source area for the plutons appears to be in the southwest quadrant of the Uyaijah ring structure. A northwest-trending shear zone cuts the northern half of the structure and contains mafic dikes that have a small but identifiable gravity-anomaly response. The structural model agrees with models derived from geological interpretation except that the estimated depth to which the structure extends is decreased considerably by the gravity results.
Ramachandran, Varun; Long, Suzanna K.; Shoberg, Thomas G.; Corns, Steven; Carlo, Hector J.
2016-01-01
The majority of restoration strategies in the wake of large-scale disasters have focused on short-term emergency response solutions. Few consider medium- to long-term restoration strategies to reconnect urban areas to national supply chain interdependent critical infrastructure systems (SCICI). These SCICI promote the effective flow of goods, services, and information vital to the economic vitality of an urban environment. To re-establish the connectivity that has been broken during a disaster between the different SCICI, relationships between these systems must be identified, formulated, and added to a common framework to form a system-level restoration plan. To accomplish this goal, a considerable collection of SCICI data is necessary. The aim of this paper is to review what data are required for model construction, the accessibility of these data, and their integration with each other. While a review of publicly available data reveals a dearth of real-time data to assist modeling long-term recovery following an extreme event, a significant amount of static data does exist and these data can be used to model the complex interdependencies needed. For the sake of illustration, a particular SCICI (transportation) is used to highlight the challenges of determining the interdependencies and creating models capable of describing the complexity of an urban environment with the data publicly available. Integration of such data derived from public-domain sources is readily achieved in a geospatial environment; after all, geospatial infrastructure data are the most abundant data source. While significant quantities of data can be acquired through public sources, a significant effort is still required to gather, develop, and integrate these data from multiple sources to build a complete model. Therefore, while continued availability of high-quality, public information is essential for modeling efforts in academic as well as government communities, a more streamlined approach to real-time acquisition and integration of these data is essential.
Debiased estimates for NEO orbits, absolute magnitudes, and source regions
NASA Astrophysics Data System (ADS)
Granvik, Mikael; Morbidelli, Alessandro; Jedicke, Robert; Bolin, Bryce T.; Bottke, William; Beshore, Edward C.; Vokrouhlicky, David; Nesvorny, David; Michel, Patrick
2017-10-01
The debiased absolute-magnitude and orbit distributions as well as source regions for near-Earth objects (NEOs) provide a fundamental frame of reference for studies on individual NEOs as well as on more complex population-level questions. We present a new four-dimensional model of the NEO population that describes debiased steady-state distributions of semimajor axis (a), eccentricity (e), inclination (i), and absolute magnitude (H). We calibrate the model using NEO detections by the 703 and G96 stations of the Catalina Sky Survey (CSS) during 2005-2012 corresponding to objects with 17
Current Source Based on H-Bridge Inverter with Output LCL Filter
NASA Astrophysics Data System (ADS)
Blahnik, Vojtech; Talla, Jakub; Peroutka, Zdenek
2015-09-01
The paper deals with the control of a current source with an LCL output filter. The controlled current source is realized as a single-phase inverter, and the output LCL filter provides low ripple of the output current. However, systems incorporating LCL filters require more complex control strategies and there are several interesting approaches to the control of this type of converter. This paper presents the inverter control algorithm, which combines model based control with a direct current control based on resonant controllers and single-phase vector control. The primary goal is to reduce the current ripple and distortion below required limits and to provide fast and precise control of the output current. The proposed control technique is verified by measurements on a laboratory model.
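To make the resonant-control idea above concrete, here is a minimal, hypothetical sketch of a non-ideal proportional-resonant (PR) current controller tuned to the fundamental frequency, discretized with the bilinear transform and applied to a synthetic current-error signal. The gains, frequencies, and signal below are illustrative assumptions, not the controller described in the paper.

import numpy as np
from scipy import signal

# Hypothetical non-ideal PR controller: Gc(s) = Kp + 2*Kr*wc*s / (s^2 + 2*wc*s + w0^2)
f0 = 50.0                                   # fundamental frequency [Hz] (assumed)
w0 = 2 * np.pi * f0
Kp, Kr, wc = 5.0, 200.0, 2 * np.pi * 1.0    # illustrative gains and resonance bandwidth

num = [Kp, 2 * wc * (Kp + Kr), Kp * w0**2]  # Kp folded into one rational transfer function
den = [1.0, 2 * wc, w0**2]

fs = 10e3                                   # controller sampling rate [Hz]
bz, az = signal.bilinear(num, den, fs)      # Tustin (bilinear) discretization

# Apply the controller to a synthetic current-error signal (reference minus measurement).
t = np.arange(0, 0.1, 1 / fs)
error = 0.5 * np.sin(w0 * t) + 0.05 * np.random.randn(t.size)
u = signal.lfilter(bz, az, error)           # controller output (inverter voltage reference)
print(u[:5])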
Microcomputer pollution model for civilian airports and Air Force bases. Model description
DOE Office of Scientific and Technical Information (OSTI.GOV)
Segal, H.M.; Hamilton, P.L.
1988-08-01
This is one of three reports describing the Emissions and Dispersion Modeling System (EDMS). EDMS is a complex source emissions/dispersion model for use at civilian airports and Air Force bases. It operates in both a refined and a screening mode and is programmed for an IBM-XT (or compatible) computer. This report--MODEL DESCRIPTION--provides the technical description of the model. It first identifies the key design features of both the emissions (EMISSMOD) and dispersion (GIMM) portions of EDMS. It then describes the type of meteorological information the dispersion model can accept and identifies the manner in which it preprocesses National Climatic Center (NCC) data prior to a refined-model run. The report presents the results of running EDMS on a number of different microcomputers and compares EDMS results with those of comparable models. The appendices elaborate on the information noted above and list the source code.
Lattice Boltzmann formulation for conjugate heat transfer in heterogeneous media.
Karani, Hamid; Huber, Christian
2015-02-01
In this paper, we propose an approach for studying conjugate heat transfer using the lattice Boltzmann method (LBM). The approach is based on reformulating the lattice Boltzmann equation for solving the conservative form of the energy equation. This leads to the appearance of a source term, which introduces the jump conditions at the interface between two phases or components with different thermal properties. The proposed source term formulation conserves conductive and advective heat flux simultaneously, which makes it suitable for modeling conjugate heat transfer in general multiphase or multicomponent systems. The simple implementation of the source term approach avoids any correction of distribution functions neighboring the interface and provides an algorithm that is independent from the topology of the interface. Moreover, our approach is independent of the choice of lattice discretization and can be easily applied to different advection-diffusion LBM solvers. The model is tested against several benchmark problems including steady-state convection-diffusion within two fluid layers with parallel and normal interfaces with respect to the flow direction, unsteady conduction in a three-layer stratified domain, and steady conduction in a two-layer annulus. The LBM results are in excellent agreement with analytical solution. Error analysis shows that our model is first-order accurate in space, but an extension to a second-order scheme is straightforward. We apply our LBM model to heat transfer in a two-component heterogeneous medium with a random microstructure. This example highlights that the method we propose is independent of the topology of interfaces between the different phases and, as such, is ideally suited for complex natural heterogeneous media. We further validate the present LBM formulation with a study of natural convection in a porous enclosure. The results confirm the reliability of the model in simulating complex coupled fluid and thermal dynamics in complex geometries.
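As a rough illustration of how a source term slots into the collide-and-stream loop of an advection-diffusion LBM, the following is a minimal D1Q3 diffusion sketch with a spatially varying relaxation time for a two-layer medium and a generic per-node source term. It is not the paper's conjugate-heat-transfer formulation (no interface jump correction is applied), and all parameters are illustrative.

import numpy as np

# D1Q3 diffusion LBM with two layers of different diffusivity and a generic source term S.
nx = 200
w = np.array([2/3, 1/6, 1/6])            # D1Q3 weights (rest, +1, -1)
e = np.array([0, 1, -1])                 # lattice velocities

alpha = np.where(np.arange(nx) < nx // 2, 0.05, 0.20)   # layer diffusivities (lattice units)
tau = 3 * alpha + 0.5                                    # alpha = (tau - 0.5)/3

f = np.zeros((3, nx))                    # start from T = 0 everywhere
S = np.zeros(nx)                         # source term (zero here; corrections would enter here)

for step in range(20000):
    T = f.sum(axis=0)                    # temperature = zeroth moment
    feq = w[:, None] * T[None, :]
    f += -(f - feq) / tau[None, :] + w[:, None] * S[None, :]   # collision + source
    for i, ei in enumerate(e):           # streaming
        f[i] = np.roll(f[i], ei)
    f[:, 0] = w * 1.0                    # Dirichlet boundaries: hot left, cold right
    f[:, -1] = w * 0.0

print(f.sum(axis=0)[::40])               # coarse view of the temperature profile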
Reduced mercury deposition in New Hampshire from 1996 to 2002 due to changes in local sources.
Han, Young-Ji; Holsen, Thomas M; Evers, David C; Driscoll, Charles T
2008-12-01
Changes in deposition of gaseous divalent mercury (Hg(II)) and particulate mercury (Hg(p)) in New Hampshire due to changes in local sources from 1996 to 2002 were assessed using the Industrial Source Complex Short Term (ISCST3) model (regional and global sources and Hg atmospheric reactions were not considered). Mercury (Hg) emissions in New Hampshire and adjacent areas decreased significantly (from 1540 to 880 kg yr(-1)) during this period, and the average annual modeled deposition of total Hg also declined from 17 to 7.0 microg m(-2) yr(-1) for the same period. In 2002, the maximum amount of Hg deposition was modeled to be in southern New Hampshire, while for 1996 the maximum deposition occurred farther north and east. The ISCST3 was also used to evaluate two future scenarios. The average percent difference in deposition across all cells was 5% for the 50% reduction scenario and 9% for the 90% reduction scenario.
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. Decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. In the formulation of the objective function, we require the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one and hence reduces the complexity of the optimization model. The results show that our proposed linked ANN-Optimization model is able to predict the source parameters for the error-free data accurately. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. It was observed that the mean values predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
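A minimal sketch of the linked ANN-optimization idea, under strong simplifying assumptions: a 1D analytical transport solution, a small MLP standing in for the trained ANN that supplies the unknown lag time, and a global optimizer for the source parameters. Function names, parameter ranges, and the synthetic data are all illustrative, not the authors' implementation.

import numpy as np
from scipy.optimize import differential_evolution
from sklearn.neural_network import MLPRegressor

v, D, x_obs = 1.0, 0.5, 50.0                    # velocity, dispersion, observation-well location

def conc(t, xs, t0, m):
    # Analytical 1D concentration at x_obs for an instantaneous release of mass m at (xs, t0).
    dt = np.maximum(t - t0, 1e-9)
    c = m / np.sqrt(4 * np.pi * D * dt) * np.exp(-((x_obs - xs) - v * dt) ** 2 / (4 * D * dt))
    return np.where(t > t0, c, 0.0)

def lag_time(xs, t0, thresh=1e-3):
    # Time at which the observation well first "sees" the plume.
    t = np.linspace(0.0, 200.0, 4001)
    return t[np.argmax(conc(t, xs, t0, 1.0) > thresh)]

# Train an ANN that maps (source location, release time) -> lag time.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0, 30, 500), rng.uniform(0, 50, 500)])
y = np.array([lag_time(xs, t0) for xs, t0 in X])
ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0).fit(X, y)

# Synthetic "observed" data: only the part of the breakthrough curve after first arrival.
true_xs, true_t0, true_m = 12.0, 20.0, 5.0
offsets = np.linspace(0.0, 40.0, 20)
c_obs = conc(lag_time(true_xs, true_t0) + offsets, true_xs, true_t0, true_m)

def objective(p):
    xs, t0, m = p
    t_hat = ann.predict([[xs, t0]])[0]          # ANN supplies the unknown lag time
    return np.sum((conc(t_hat + offsets, xs, t0, m) - c_obs) ** 2)

res = differential_evolution(objective, bounds=[(0, 30), (0, 50), (0.1, 20)],
                             seed=0, maxiter=200)
print(res.x)                                    # estimated (location, release time, strength)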
Topographic filtering simulation model for sediment source apportionment
NASA Astrophysics Data System (ADS)
Cho, Se Jong; Wilcock, Peter; Hobbs, Benjamin
2018-05-01
We propose a Topographic Filtering simulation model (Topofilter) that can be used to identify those locations that are likely to contribute most of the sediment load delivered from a watershed. The reduced complexity model links spatially distributed estimates of annual soil erosion, high-resolution topography, and observed sediment loading to determine the distribution of sediment delivery ratio across a watershed. The model uses two simple two-parameter topographic transfer functions based on the distance and change in elevation from upland sources to the nearest stream channel and then down the stream network. The approach does not attempt to find a single best-calibrated solution of sediment delivery, but uses a model conditioning approach to develop a large number of possible solutions. For each model run, locations that contribute to 90% of the sediment loading are identified and those locations that appear in this set in most of the 10,000 model runs are identified as the sources that are most likely to contribute to most of the sediment delivered to the watershed outlet. Because the underlying model is quite simple and strongly anchored by reliable information on soil erosion, topography, and sediment load, we believe that the ensemble of simulation outputs provides a useful basis for identifying the dominant sediment sources in the watershed.
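The conditioning logic can be sketched as follows, assuming simple exponential hillslope and channel transfer functions in place of the paper's two-parameter forms; the erosion field, distances, and observed load are synthetic, and only parameter sets that reproduce the observed load are retained before tallying which cells repeatedly fall in the top-90%-of-load set.

import numpy as np

rng = np.random.default_rng(1)
n_cells = 1000
erosion = rng.lognormal(mean=0.0, sigma=1.0, size=n_cells)   # annual soil erosion per cell
d_hill = rng.uniform(10, 500, n_cells)                       # distance to nearest channel [m]
d_chan = rng.uniform(100, 20000, n_cells)                    # distance down the network [m]
observed_load = 0.15 * erosion.sum()                          # synthetic watershed sediment load

hits = np.zeros(n_cells)
kept = 0
for _ in range(10000):
    a = rng.uniform(1e-4, 5e-2)        # hillslope decay parameter (assumed form)
    b = rng.uniform(1e-6, 5e-4)        # channel decay parameter (assumed form)
    sdr = np.exp(-a * d_hill) * np.exp(-b * d_chan)           # cell-level delivery ratio
    load = erosion * sdr
    if abs(load.sum() - observed_load) / observed_load > 0.10:
        continue                        # condition on the observed load (within 10%)
    kept += 1
    order = np.argsort(load)[::-1]      # cells contributing 90% of the delivered load
    top = order[np.cumsum(load[order]) <= 0.9 * load.sum()]
    hits[top] += 1

likely_sources = np.flatnonzero(hits > 0.9 * kept)            # cells flagged in >90% of kept runs
print(kept, likely_sources[:10])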
Gangadharan, R; Prasanna, G; Bhat, M R; Murthy, C R L; Gopalakrishnan, S
2009-11-01
Conventional analytical/numerical methods employing the triangulation technique are suitable for locating an acoustic emission (AE) source in a planar structure without structural discontinuities. But these methods cannot be extended to structures with complicated geometry, and the problem is further compounded if the material of the structure is anisotropic, warranting complex analytical velocity models. A geodesic approach using Voronoi construction is proposed in this work to locate the AE source in a composite structure. The approach is based on the fact that the wave takes the minimum-energy path to travel from the source to any other point in the connected domain. The geodesics are computed on the meshed surface of the structure using graph theory based on Dijkstra's algorithm. By virtually propagating the waves in reverse from the sensors along the geodesic paths and locating the first intersection point of these waves, one can obtain the AE source location. In this work, the geodesic approach is shown to be more suitable for a practicable source-location solution in a composite structure with an arbitrary surface containing finite discontinuities. Experiments have been conducted on composite plate specimens of simple and complex geometry to validate this method.
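A minimal sketch of the back-propagation step on a graph: geodesic distances from each sensor are computed with Dijkstra's algorithm (here on a simple grid graph standing in for a meshed surface), and the source is taken as the node where the origin times implied by all sensors agree best. The wave speed, mesh, and sensor layout are illustrative assumptions.

import numpy as np
import networkx as nx

m, n, v = 30, 30, 5.0                          # grid dimensions and assumed wave speed
G = nx.grid_2d_graph(m, n)
nx.set_edge_attributes(G, 1.0, "weight")       # unit edge lengths on the toy mesh

sensors = [(0, 0), (0, 29), (29, 0), (29, 29)]
true_source, t0 = (12, 18), 3.0

# Geodesic (graph) distances from each sensor to every node via Dijkstra.
dist = {s: nx.single_source_dijkstra_path_length(G, s, weight="weight") for s in sensors}

# Synthetic arrival times at the sensors for the true source.
arrivals = {s: t0 + dist[s][true_source] / v for s in sensors}

# Back-propagation: for a candidate node, each sensor implies an origin time t_i - d_i/v;
# the estimated source is the node where these implied origin times agree best.
best, best_spread = None, np.inf
for node in G.nodes():
    implied = [arrivals[s] - dist[s][node] / v for s in sensors]
    spread = np.std(implied)
    if spread < best_spread:
        best, best_spread = node, spread

print(best)                                    # recovers (12, 18) in this synthetic example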
A COMPUTATIONAL FRAMEWORK FOR EVALUATION OF NPS MANAGEMENT SCENARIOS: ROLE OF PARAMETER UNCERTAINTY
Utility of complex distributed-parameter watershed models for evaluation of the effectiveness of non-point source sediment and nutrient abatement scenarios such as Best Management Practices (BMPs) often follows the traditional {calibrate ---> validate ---> predict} procedure. Des...
NASA Astrophysics Data System (ADS)
Simutė, S.; Fichtner, A.
2015-12-01
We present a feasibility study for seismic source inversions using a 3-D velocity model for the Japanese Islands. The approach involves numerically calculating 3-D Green's tensors, which is made efficient by exploiting Green's reciprocity. The rationale for 3-D seismic source inversion has several aspects. For structurally complex regions, such as the Japan area, it is necessary to account for 3-D Earth heterogeneities to prevent unknown structure polluting source solutions. In addition, earthquake source characterisation can serve as a means to delineate existing faults. Source parameters obtained for more realistic Earth models can then facilitate improvements in seismic tomography and early warning systems, which are particularly important for seismically active areas, such as Japan. We have created a database of numerically computed 3-D Green's reciprocals for a 40°× 40°× 600 km size area around the Japanese Archipelago for >150 broadband stations. For this we used a regional 3-D velocity model, recently obtained from full waveform inversion. The model includes attenuation and radial anisotropy and explains seismic waveform data for periods between 10 - 80 s generally well. The aim is to perform source inversions using the database of 3-D Green's tensors. As preliminary steps, we present initial concepts to address issues that are at the basis of our approach. We first investigate to which extent Green's reciprocity works in a discrete domain. Considering substantial amounts of computed Green's tensors we address storage requirements and file formatting. We discuss the importance of the initial source model, as an intelligent choice can substantially reduce the search volume. Possibilities to perform a Bayesian inversion and ways to move to finite source inversion are also explored.
Strong Ground Motion Prediction By Composite Source Model
NASA Astrophysics Data System (ADS)
Burjanek, J.; Irikura, K.; Zahradnik, J.
2003-12-01
A composite source model, incorporating different-sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g. mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled either as a finite or a point source; differences between these choices are shown. The final slip and duration of each subevent are related to its characteristic dimension, using constant stress-drop scaling. The absolute value of the subevents' stress drop is a free parameter. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model. An estimation of the subevents' stress drop is based on fitting empirical attenuation relations for PGA and PGV, as they represent robust information on strong ground motion caused by earthquakes, including both path and source effects. We use the 2000 M6.6 Western Tottori, Japan, earthquake as a validation event, providing comparison between predicted and observed waveforms.
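A minimal sketch of generating such a composite source: subevent radii are drawn so that N(>R) scales as R^-2, subevents are placed randomly on the fault without overlap, and slip is set proportional to radius (constant stress-drop scaling). The fault dimensions, radius limits, slip constant, and the relaxed area-coverage target below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
L, W = 40.0, 20.0                    # fault length and width [km] (illustrative)
r_min, r_max = 0.5, 8.0              # smallest / largest subevent radius [km]
c_slip = 0.05                        # slip per unit radius (stands in for the stress-drop constant)

def draw_radius():
    # N(>R) ~ R^-2  =>  pdf ~ R^-3 on [r_min, r_max]; inverse-CDF sampling.
    u = rng.uniform()
    return (r_min**-2 - u * (r_min**-2 - r_max**-2)) ** -0.5

subevents, area = [], 0.0
target = 0.4 * L * W                 # stop at 40% coverage; matching the fault area exactly
                                     # would need a more careful packing scheme
for _ in range(20000):               # cap the number of draws so the loop always terminates
    if area >= target:
        break
    r = draw_radius()
    for _ in range(200):             # rejection sampling for a non-overlapping placement
        x, y = rng.uniform(r, L - r), rng.uniform(r, W - r)
        if all((x - xs)**2 + (y - ys)**2 >= (r + rs)**2 for xs, ys, rs, _ in subevents):
            subevents.append((x, y, r, c_slip * r))    # (x, y, radius, slip)
            area += np.pi * r**2
            break

print(len(subevents), round(area, 1))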
Symmetrical group theory for mathematical complexity reduction of digital holograms
NASA Astrophysics Data System (ADS)
Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.
2017-10-01
This work presents the use of mathematical group theory through an algorithm to reduce the multiplicative computational complexity in the process of creating digital holograms. An object is considered as a set of point sources using mathematical symmetry properties of both the core in the Fresnel integral and the image, where the image is modeled using group theory. This algorithm has a multiplicative complexity equal to zero and an additive complexity of (k - 1) × N for the case of sparse matrices and binary images, where k is the number of nonzero pixels and N is the total number of points in the image.
Seismic noise frequency dependent P and S wave sources
NASA Astrophysics Data System (ADS)
Stutzmann, E.; Schimmel, M.; Gualtieri, L.; Farra, V.; Ardhuin, F.
2013-12-01
Seismic noise in the period band 3-10 sec is generated in the oceans by the interaction of ocean waves. The noise signal is dominated by Rayleigh waves, but body waves can be extracted using a beamforming approach. We select the TAPAS array deployed in southern Spain between June 2008 and September 2009 and we use the vertical and horizontal components to extract noise P and S waves, respectively. Data are filtered in narrow frequency bands and we select beam azimuths and slownesses that correspond to the largest continuous sources per day. Our procedure automatically discards earthquakes, which are localized within short time windows. Using this approach, we detect many more noise P-waves than S-waves. Source locations are determined by back-projecting the detected slowness/azimuth. P and S waves are generated in nearby areas and both source locations are frequency dependent. Long-period sources are dominantly in the South Atlantic and Indian Ocean, whereas shorter-period sources are rather in the North Atlantic Ocean. We further show that the detected S-waves are dominantly SV-waves. We model the observed body waves using an ocean wave model that takes into account all possible wave interactions including coastal reflection. We use the wave model to separate direct and multiply reflected phases for P and S waves, respectively. We show that in the South Atlantic the complex source pattern can be explained by the existence of both coastal and pelagic sources whereas in the North Atlantic most body wave sources are pelagic. For each detected source, we determine the equivalent source magnitude which is compared to the model.
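A minimal sketch of the kind of narrow-band slowness/azimuth scan involved, using ordinary frequency-domain delay-and-sum beamforming on a synthetic plane wave. The array geometry, frequency, and noise level are illustrative, and this is not the study's exact processing chain.

import numpy as np

rng = np.random.default_rng(3)
n_sta, f = 12, 0.2                               # number of stations, analysis frequency [Hz]
xy = rng.uniform(-50e3, 50e3, size=(n_sta, 2))   # station coordinates [m]

s_true, az_true = 6e-5, 120.0                    # slowness [s/m] and propagation azimuth [deg]
k = s_true * np.array([np.sin(np.radians(az_true)), np.cos(np.radians(az_true))])

fs, nt = 1.0, 512                                # sampling rate [Hz] and trace length
t = np.arange(nt) / fs
data = np.array([np.cos(2 * np.pi * f * (t - xy[i] @ k)) for i in range(n_sta)])
data += 0.3 * rng.standard_normal(data.shape)    # incoherent noise

spec = np.fft.rfft(data, axis=1)                 # narrow-band spectra
fbin = np.argmin(np.abs(np.fft.rfftfreq(nt, 1 / fs) - f))
d = spec[:, fbin]

slows = np.linspace(1e-5, 1e-4, 60)
azs = np.arange(0.0, 360.0, 2.0)
power = np.zeros((slows.size, azs.size))
for i, s in enumerate(slows):
    for j, az in enumerate(azs):
        kk = s * np.array([np.sin(np.radians(az)), np.cos(np.radians(az))])
        power[i, j] = np.abs(np.sum(d * np.exp(2j * np.pi * f * (xy @ kk)))) ** 2

i_best, j_best = np.unravel_index(power.argmax(), power.shape)
print(slows[i_best], azs[j_best])                # should land close to s_true, az_true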
Interactive Visualization of Complex Seismic Data and Models Using Bokeh
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Chengping; Ammon, Charles J.; Maceira, Monica
Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using the open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach has a minimum requirement on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide an effective and efficient approach to explore large data sets and models.
NAPL source zone depletion model and its application to railroad-tank-car spills.
Marruffo, Amanda; Yoon, Hongkyu; Schaeffer, David J; Barkan, Christopher P L; Saat, Mohd Rapik; Werth, Charles J
2012-01-01
We developed a new semi-analytical source zone depletion model (SZDM) for multicomponent light nonaqueous phase liquids (LNAPLs) and incorporated this into an existing screening model for estimating cleanup times for chemical spills from railroad tank cars that previously considered only single-component LNAPLs. Results from the SZDM compare favorably to those from a three-dimensional numerical model, and from another semi-analytical model that does not consider source zone depletion. The model was used to evaluate groundwater contamination and cleanup times for four complex mixtures of concern in the railroad industry. Among the petroleum hydrocarbon mixtures considered, the cleanup time of diesel fuel was much longer than E95, gasoline, and crude oil. This is mainly due to the high fraction of low solubility components in diesel fuel. The results demonstrate that the updated screening model with the newly developed SZDM is computationally efficient, and provides valuable comparisons of cleanup times that can be used in assessing the health and financial risk associated with chemical mixture spills from railroad-tank-car accidents. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
NASA Astrophysics Data System (ADS)
Spiridonova, V. A.; Sizov, V. A.; Kuzmenko, E. O.; Melnichuk, A. V.; Oleinichenko, E. A.; Kudzhaev, A. M.; Rotanova, T. V.; Snigirev, O. V.
2017-07-01
The binding of Lon protease by biotinylated aptamers whose structures contain G-quadruplex fragments, attached to magnetic nanoparticles (MNPs) functionalized with streptavidin, was investigated. The conditions for binding the target aptamers to the MNPs were established. The resulting complexes are proposed for the detection of Lon protease in different biological sources and for constructing a novel biomagnetic nanosensor immunoassay system.
The AeroCom evaluation and intercomparison of organic aerosol in global models
Tsigaridis, K.; Daskalakis, N.; Kanakidou, M.; ...
2014-10-15
This paper evaluates the current status of global modeling of the organic aerosol (OA) in the troposphere and analyzes the differences between models as well as between models and observations. Thirty-one global chemistry transport models (CTMs) and general circulation models (GCMs) have participated in this intercomparison, in the framework of AeroCom phase II. The simulation of OA varies greatly between models in terms of the magnitude of primary emissions, secondary OA (SOA) formation, the number of OA species used (2 to 62), the complexity of OA parameterizations (gas-particle partitioning, chemical aging, multiphase chemistry, aerosol microphysics), and the OA physical, chemical, and optical properties. The diversity of the global OA simulation results has increased since earlier AeroCom experiments, mainly due to the increasing complexity of the SOA parameterization in models, and the implementation of new, highly uncertain, OA sources. Diversity of over one order of magnitude exists in the modeled vertical distribution of OA concentrations that deserves a dedicated future study. Furthermore, although the OA/OC ratio depends on OA sources and atmospheric processing, and is important for model evaluation against OA and OC observations, it is resolved only by a few global models. The median global primary OA (POA) source strength is 56 Tg a^-1 (range 34–144 Tg a^-1) and the median SOA source strength (natural and anthropogenic) is 19 Tg a^-1 (range 13–121 Tg a^-1). Among the models that take into account the semi-volatile SOA nature, the median source is calculated to be 51 Tg a^-1 (range 16–121 Tg a^-1), much larger than the median value of the models that calculate SOA in a more simplistic way (19 Tg a^-1; range 13–20 Tg a^-1, with one model at 37 Tg a^-1). The median atmospheric burden of OA is 1.4 Tg (24 models in the range of 0.6–2.0 Tg and 4 between 2.0 and 3.8 Tg), with a median OA lifetime of 5.4 days (range 3.8–9.6 days). In models that reported both OA and sulfate burdens, the median value of the OA/sulfate burden ratio is calculated to be 0.77; 13 models calculate a ratio lower than 1, and 9 models higher than 1. For 26 models that reported OA deposition fluxes, the median wet removal is 70 Tg a^-1 (range 28–209 Tg a^-1), which is on average 85% of the total OA deposition. Fine aerosol organic carbon (OC) and OA observations from continuous monitoring networks and individual field campaigns have been used for model evaluation. At urban locations, the model–observation comparison indicates missing knowledge on anthropogenic OA sources, both strength and seasonality. The combined model–measurements analysis suggests the existence of increased OA levels during summer due to biogenic SOA formation over large areas of the USA that can be of the same order of magnitude as the POA, even at urban locations, and contribute to the measured urban seasonal pattern. Global models are able to simulate the high secondary character of OA observed in the atmosphere as a result of SOA formation and POA aging, although the amount of OA present in the atmosphere remains largely underestimated, with a mean normalized bias (MNB) equal to –0.62 (–0.51) based on the comparison against OC (OA) urban data of all models at the surface, –0.15 (+0.51) when compared with remote measurements, and –0.30 for marine locations with OC data.
The mean temporal correlations across all stations are low when compared with OC (OA) measurements: 0.47 (0.52) for urban stations, 0.39 (0.37) for remote stations, and 0.25 for marine stations with OC data. The combination of high (negative) MNB and higher correlation at urban stations when compared with the low MNB and lower correlation at remote sites suggests that knowledge about the processes that govern aerosol processing, transport and removal, on top of their sources, is important at the remote stations. There is no clear change in model skill with increasing model complexity with regard to OC or OA mass concentration. However, the complexity is needed in models in order to distinguish between anthropogenic and natural OA as needed for climate mitigation, and to calculate the impact of OA on climate accurately.
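For reference, the mean normalized bias quoted above is commonly computed as the average relative deviation of the model from the observations, as in this small sketch (the exact averaging choices in the study may differ):

import numpy as np

def mean_normalized_bias(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.mean((model - obs) / obs)

print(mean_normalized_bias([0.8, 1.2, 0.5], [2.0, 2.0, 2.0]))   # negative => model underestimates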
Complex interactions between diapirs and 4-D subduction driven mantle wedge circulation.
NASA Astrophysics Data System (ADS)
Sylvia, R. T.; Kincaid, C. R.
2015-12-01
Analogue laboratory experiments generate 4-D flow of mantle wedge fluid and capture the evolution of buoyant mesoscale diapirs. The mantle is modeled with viscous glucose syrup with an Arrhenius type temperature dependent viscosity. To characterize diapir evolution we experiment with a variety of fluids injected from multiple point sources. Diapirs interact with kinematically induced flow fields forced by subducting plate motions replicating a range of styles observed in dynamic subduction models (e.g., rollback, steepening, gaps). Data is collected using high definition timelapse photography and quantified using image velocimetry techniques. While many studies assume direct vertical connections between the volcanic arc and the deeper mantle source region, our experiments demonstrate the difficulty of creating near vertical conduits. Results highlight extreme curvature of diapir rise paths. Trench-normal deflection occurs as diapirs are advected downward away from the trench before ascending into wedge apex directed return flow. Trench parallel deflections up to 75% of trench length are seen in all cases, exacerbated by complex geometry and rollback motion. Interdiapir interaction is also important; upwellings with similar trajectory coalesce and rapidly accelerate. Moreover, we observe a new mode of interaction whereby recycled diapir material is drawn down along the slab surface and then initiates rapid fluid migration updip along the slab-wedge interface. Variability in trajectory and residence time leads to complex petrologic inferences. Material from disparate source regions can surface at the same location, mix in the wedge, or become fully entrained in creeping flow adding heterogeneity to the mantle. Active diapirism or any other vertical fluid flux mechanism employing rheological weakening lowers viscosity in the recycling mantle wedge affecting both solid and fluid flow characteristics. Many interesting and insightful results have been presented based upon 2-D, steady-state thermal and flow regimes. We reiterate the importance of 4-D time evolution in subduction models. Analogue experiments allow added feedbacks and complexity improving intuition and providing insight for further investigation.
A computational microscopy study of nanostructural evolution in irradiated pressure vessel steels
NASA Astrophysics Data System (ADS)
Odette, G. R.; Wirth, B. D.
1997-11-01
Nanostructural features that form in reactor pressure vessel steels under neutron irradiation at around 300°C lead to significant hardening and embrittlement. Continuum thermodynamic-kinetic based rate theories have been very successful in modeling the general characteristics of the copper and manganese nickel rich precipitate evolution, often the dominant source of embrittlement. However, a more detailed atomic scale understanding of these features is needed to interpret experimental measurements and better underpin predictive embrittlement models. Further, other embrittling features, believed to be subnanometer defect (vacancy)-solute complexes and small regions of modest enrichment of solutes are not well understood. A general approach to modeling embrittlement nanostructures, based on the concept of a computational microscope, is described. The objective of the computational microscope is to self-consistently integrate atomic scale simulations with other sources of information, including a wide range of experiments. In this work, lattice Monte Carlo (LMC) simulations are used to resolve the chemically and structurally complex nature of CuMnNiSi precipitates. The LMC simulations unify various nanoscale analytical characterization methods and basic thermodynamics. The LMC simulations also reveal that significant coupled vacancy and solute clustering takes place during cascade aging. The cascade clustering produces the metastable vacancy-cluster solute complexes that mediate flux effects. Cascade solute clustering may also play a role in the formation of dilute atmospheres of solute enrichment and enhance the nucleation of manganese-nickel rich precipitates at low Cu levels. Further, the simulations suggest that complex, highly correlated processes (e.g. cluster diffusion, formation of favored vacancy diffusion paths and solute scavenging vacancy cluster complexes) may lead to anomalous fast thermal aging kinetics at temperatures below about 450°C. The potential technical significance of these phenomena is described.
de Lusignan, Simon; Cashman, Josephine; Poh, Norman; Michalakidis, Georgios; Mason, Aaron; Desombre, Terry; Krause, Paul
2012-01-01
Medical research increasingly requires the linkage of data from different sources. Conducting a requirements analysis for a new application is an established part of software engineering, but rarely reported in the biomedical literature; and no generic approaches have been published as to how to link heterogeneous health data. A literature review was conducted, followed by a consensus process to define how requirements for research using multiple data sources might be modeled. We have developed a requirements analysis, i-ScheDULEs. The first components of the modeling process are indexing and the creation of a rich picture of the research study. Secondly, we developed a series of reference models of progressive complexity: data flow diagrams (DFD) to define data requirements; unified modeling language (UML) use case diagrams to capture study-specific and governance requirements; and finally, business process models, using business process modeling notation (BPMN). These requirements and their associated models should become part of research study protocols.
Fault trees and sequence dependencies
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Boyd, Mark A.; Bavuso, Salvatore J.
1990-01-01
One of the frequently cited shortcomings of fault-tree models, their inability to model so-called sequence dependencies, is discussed. Several sources of such sequence dependencies are discussed, and new fault-tree gates to capture this behavior are defined. These complex behaviors can be included in present fault-tree models because they utilize a Markov solution. The utility of the new gates is demonstrated by presenting several models of the fault-tolerant parallel processor, which include both hot and cold spares.
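A minimal sketch of why such sequence-dependent behavior needs a Markov solution: a primary unit backed by a cold spare cannot be captured by a static AND/OR structure, because the spare only begins to fail after the primary has failed. It can be written as a small continuous-time Markov chain and solved with a matrix exponential; the states and rates below are illustrative, not taken from the fault-tolerant parallel processor models.

import numpy as np
from scipy.linalg import expm

lam_p, lam_s = 1e-3, 2e-3          # failure rates of primary and (active) spare [1/h] (assumed)

# States: 0 = primary up (spare cold), 1 = primary failed & spare running, 2 = system failed.
Q = np.array([
    [-lam_p, lam_p,  0.0],
    [0.0,   -lam_s,  lam_s],
    [0.0,    0.0,    0.0],
])

p0 = np.array([1.0, 0.0, 0.0])     # start with the primary up and the spare cold
t = 1000.0                          # mission time [h]
p_t = p0 @ expm(Q * t)              # transient state probabilities at time t
print("unreliability at t =", t, "h:", p_t[2])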
NASA Astrophysics Data System (ADS)
Hamdi, H.; Qausar, A. M.; Srigutomo, W.
2016-08-01
Controlled source audio-frequency magnetotellurics (CSAMT) is a frequency-domain electromagnetic sounding technique which uses a fixed grounded dipole as an artificial signal source. CSAMT measurements made at a finite distance between transmitter and receiver produce a complex, non-plane wavefield. The shift of the electric field due to the static effect moves the apparent resistivity curve up or down and affects the measurement results. The objective of this study was to obtain data corrected for source and static effects so that they have the same characteristics as MT data, which are assumed to exhibit plane-wave properties. The corrected CSAMT data were inverted to reveal a subsurface resistivity model. A source-effect correction method was applied to eliminate the effect of the signal source, and the static effect was corrected using a spatial filtering technique. The inversion method used in this study is Occam's 2D inversion. The inversion produces smooth models with small misfit values, meaning the models can describe subsurface conditions well. Based on the inversion results, the measurement area is predicted to consist of rock with high permeability values that is rich in hot fluid.
Physical Conditions of Eta Car Complex Environment Revealed From Photoionization Modeling
NASA Technical Reports Server (NTRS)
Verner, E. M.; Bruhweiler, F.; Nielsen, K. E.; Gull, T.; Kober, G. Vieira; Corcoran, M.
2006-01-01
The very massive star, Eta Carinae, is enshrouded in an unusual complex environment of nebulosities and structures. The circumstellar gas gives rise to distinct absorption and emission components at different velocities and distances from the central source(s). Through photoionization modeling, we find that the radiation field from the more massive B-star companion supports the low ionization structure throughout the 5.54 year period. The radiation field of an evolved O-star is required to produce the higher ionization emission seen across the broad maximum. Our studies utilize the HST/STIS data and model calculations of various regimes from doubly ionized species (T = 10,000 K) to the low temperature (T = 760 K) conditions conducive to molecule formation (CH and OH). Overall analysis suggests high depletion in C and O and enrichment in He and N. The sharp molecular and ionic absorptions in this extensively CNO-processed material offer a unique environment for studying the chemistry, dust formation processes, and nucleosynthesis in the ejected layers of a highly evolved massive star.
The Mock LISA Data Challenge Round 3: New and Improved Sources
NASA Technical Reports Server (NTRS)
Baker, John
2008-01-01
The Mock LISA Data Challenges are a program to demonstrate and encourage the development of data-analysis capabilities for LISA. Each round of challenges consists of several data sets containing simulated instrument noise and gravitational waves from sources of undisclosed parameters. Participants are asked to analyze the data sets and report the maximum information they can infer about the source parameters. The challenges are being released in rounds of increasing complexity and realism. Challenge 3, currently in progress, brings new source classes, now including cosmic-string cusps and primordial stochastic backgrounds, and more realistic signal models for supermassive black-hole inspirals and galactic double white dwarf binaries.
Meteoroid Environment Modeling: the Meteoroid Engineering Model and Shower Forecasting
NASA Technical Reports Server (NTRS)
Moorhead, Althea V.
2017-01-01
INTRODUCTION: The meteoroid environment is often divided conceptually into meteor showers and the sporadic meteor background. It is commonly but incorrectly assumed that meteoroid impacts primarily occur during meteor showers; instead, the vast majority of hazardous meteoroids belong to the sporadic complex. Unlike meteor showers, which persist for a few hours to a few weeks, sporadic meteoroids impact the Earth's atmosphere and spacecraft throughout the year. The Meteoroid Environment Office (MEO) has produced two environment models to handle these cases: the Meteoroid Engineering Model (MEM) and an annual meteor shower forecast. The sporadic complex, despite its year-round activity, is not isotropic in its directionality. Instead, their apparent points of origin, or radiants, are organized into groups called "sources". The speed, directionality, and size distribution of these sporadic sources are modeled by the Meteoroid Engineering Model (MEM), which is currently in its second major release version (MEMR2) [Moorhead et al., 2015]. MEM provides the meteoroid flux relative to a user-provided spacecraft trajectory; it provides the total flux as well as the flux per angular bin, speed interval, and on specific surfaces (ram, wake, etc.). Because the sporadic complex dominates the meteoroid flux, MEM is the most appropriate model to use in spacecraft design. Although showers make up a small fraction of the meteoroid environment, they can produce significant short-term enhancements of the meteoroid flux. Thus, it can be valuable to consider showers when assessing risks associated with vehicle operations that are brief in duration. To assist with such assessments, the MEO issues an annual forecast that reports meteor shower fluxes as a function of time and compares showers with the time-averaged total meteoroid flux. This permits missions to do quick assessments of the increase in risk posed by meteor showers.
NASA Astrophysics Data System (ADS)
Zhai, Guang; Shirzaei, Manoochehr
2016-07-01
Kīlauea volcano, Hawai`i Island, has a complex magmatic system including summit reservoirs and rift zones. Kinematic models of the summit reservoir have so far been limited to first-order analytical solutions with predetermined geometry. To explore the complex geometry and kinematics of the summit reservoir, we apply a multitrack wavelet-based InSAR (interferometric synthetic aperture radar) algorithm and a novel geometry-free time-dependent modeling scheme. To map spatiotemporally distributed surface deformation signals over Kīlauea's summit, we process synthetic aperture radar data sets from two overlapping tracks of the Envisat satellite, including 100 images during the period 2003-2010. Following validation against Global Positioning System data, we invert the surface deformation time series to constrain the spatiotemporal evolution of the magmatic system without any prior knowledge of the source geometry. The optimum model is characterized by a spheroidal and a tube-like zone of volume change beneath the summit and the southwest rift zone at 2-3 km depth, respectively. To reduce the model dimension, we apply a principal component analysis scheme, which allows for the identification of independent reservoirs. The first three PCs, explaining 99% (63.8%, 28.5%, and 6.6%, respectively) of the model, include six independent reservoirs with a complex interaction suggested by temporal analysis. The data and model presented here, in agreement with earlier studies, improve the understanding of Kīlauea's plumbing system through enhancing the knowledge of temporally variable magma supply, storage, and transport beneath the summit, and verify the link between summit magmatic activity, seismicity, and rift intrusions.
Three-dimensional wideband electromagnetic modeling on massively parallel computers
NASA Astrophysics Data System (ADS)
Alumbaugh, David L.; Newman, Gregory A.; Prevost, Lydie; Shadid, John N.
1996-01-01
A method is presented for modeling the wideband, frequency domain electromagnetic (EM) response of a three-dimensional (3-D) earth to dipole sources operating at frequencies where EM diffusion dominates the response (less than 100 kHz) up into the range where propagation dominates (greater than 10 MHz). The scheme employs the modified form of the vector Helmholtz equation for the scattered electric fields to model variations in electrical conductivity, dielectric permitivity and magnetic permeability. The use of the modified form of the Helmholtz equation allows for perfectly matched layer ( PML) absorbing boundary conditions to be employed through the use of complex grid stretching. Applying the finite difference operator to the modified Helmholtz equation produces a linear system of equations for which the matrix is sparse and complex symmetrical. The solution is obtained using either the biconjugate gradient (BICG) or quasi-minimum residual (QMR) methods with preconditioning; in general we employ the QMR method with Jacobi scaling preconditioning due to stability. In order to simulate larger, more realistic models than has been previously possible, the scheme has been modified to run on massively parallel (MP) computer architectures. Execution on the 1840-processor Intel Paragon has indicated a maximum model size of 280 × 260 × 200 cells with a maximum flop rate of 14.7 Gflops. Three different geologic models are simulated to demonstrate the use of the code for frequencies ranging from 100 Hz to 30 MHz and for different source types and polarizations. The simulations show that the scheme is correctly able to model the air-earth interface and the jump in the electric and magnetic fields normal to discontinuities. For frequencies greater than 10 MHz, complex grid stretching must be employed to incorporate absorbing boundaries while below this normal (real) grid stretching can be employed.
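A minimal sketch of the linear-algebra step described above, assuming a toy 1D complex-symmetric finite-difference operator in place of the full 3-D system, with Jacobi scaling applied explicitly before handing the system to SciPy's QMR solver. The grid size, conductivity, and frequency are illustrative.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import qmr

n = 400
h = 5.0                                          # grid spacing [m] (illustrative)
omega = 2 * np.pi * 1e5                          # 100 kHz angular frequency
mu0, sigma = 4e-7 * np.pi, 0.1                   # free-space permeability, conductivity [S/m]
k2 = 1j * omega * mu0 * sigma                    # i*omega*mu*sigma (diffusion term)

# Toy complex-symmetric operator: 1D second difference minus the i*omega*mu*sigma term.
main = (-2.0 / h**2 - k2) * np.ones(n, dtype=complex)
off = (1.0 / h**2) * np.ones(n - 1, dtype=complex)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")

b = np.zeros(n, dtype=complex)
b[n // 2] = 1.0                                  # point source term

# Jacobi scaling: divide each row by its diagonal entry before calling QMR.
d = A.diagonal()
A_scaled = sp.diags(1.0 / d).tocsr() @ A
b_scaled = b / d

x, info = qmr(A_scaled, b_scaled)
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # info == 0 indicates convergence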
New insight on petroleum system modeling of Ghadames basin, Libya
NASA Astrophysics Data System (ADS)
Bora, Deepender; Dubey, Siddharth
2015-12-01
Underdown and Redfern (2008) performed a detailed petroleum system modeling of the Ghadames basin along an E-W section. However, hydrocarbon generation, migration and accumulation change significantly across the basin due to its complex geological history. Therefore, a single section cannot be considered representative of the whole basin. This study aims at bridging this gap by performing petroleum system modeling along a N-S section and provides new insights on source rock maturation, generation and migration of the hydrocarbons using 2D basin modeling. This study in conjunction with earlier work provides a 3D context of petroleum system modeling in the Ghadames basin. Hydrocarbon generation from the lower Silurian Tanezzuft formation and the Upper Devonian Aouinet Ouenine started during the late Carboniferous. However, the high subsidence rate during the middle to late Cretaceous and elevated heat flow in the Cenozoic had the greatest impact on source rock transformation and hydrocarbon generation, whereas large-scale uplift and erosion during the Alpine orogeny had a significant impact on migration and accumulation. Visible migration is observed along faults, which were reactivated during the Austrian unconformity. Peak hydrocarbon expulsion was reached during the Oligocene for both the Tanezzuft and the Aouinet Ouenine source rocks. Based on modeling results, capillary entry pressure driven downward expulsion of hydrocarbons from the lower Silurian Tanezzuft formation to the underlying Bir Tlacsin formation is observed during the middle Cretaceous. Kinetic modeling has helped to model hydrocarbon composition and the distribution of generated hydrocarbons from both source rocks. Application of source-to-reservoir tracking technology suggests that some accumulations at shallow stratigraphic levels have received hydrocarbons from both the Tanezzuft and Aouinet Ouenine source rocks, implying charge mixing. Five petroleum systems are identified based on source-to-reservoir correlation technology in Petromod*. This study builds upon the original work of Underdown and Redfern (2008) and offers new insights and interpretation of the data.
Unmixing Magnetic Hysteresis Loops
NASA Astrophysics Data System (ADS)
Heslop, D.; Roberts, A. P.
2012-04-01
Magnetic hysteresis loops provide important information in rock and environmental magnetic studies. Natural samples often contain an assemblage of magnetic particles composed of components with different origins. Each component potentially carries important environmental information. Hysteresis loops, however, provide information concerning the bulk magnetic assemblage, which makes it difficult to isolate the specific contributions from different sources. For complex mineral assemblages an unmixing strategy with which to separate hysteresis loops into their component parts is therefore essential. Previous methods to unmix hysteresis data have aimed at separating individual loops into their constituent parts using libraries of type-curves thought to correspond to specific mineral types. We demonstrate an alternative approach, which rather than decomposing a single loop into monomineralic contributions, examines a collection of loops to determine their constituent source materials. These source materials may themselves be mineral mixtures, but they provide a genetically meaningful decomposition of a magnetic assemblage in terms of the processes that controlled its formation. We show how an empirically derived hysteresis mixing space can be created, without resorting to type-curves, based on the co-variation within a collection of measured loops. Physically realistic end-members, which respect the expected behaviour and symmetries of hysteresis loops, can then be extracted from the mixing space. These end-members allow the measured loops to be described as a combination of invariant parts that are assumed to represent the different sources in the mixing model. Particular attention is paid to model selection and estimating the complexity of the mixing model, specifically, how many end-members should be included. We demonstrate application of this approach using lake sediments from Butte Valley, northern California. Our method successfully separates the hysteresis loops into sources with a variety of terrigenous and authigenic origins.
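A toy sketch of the mixing-space idea: a collection of synthetic loops is built as noisy mixtures of two source loops, PCA reveals the dimensionality of the mixing space, and non-negative least squares recovers mixing proportions against the source loops. The paper's end-member extraction does not assume known sources; the tanh-shaped loops and all parameters here are illustrative assumptions.

import numpy as np
from sklearn.decomposition import PCA
from scipy.optimize import nnls

rng = np.random.default_rng(4)
B = np.linspace(-1, 1, 200)                    # applied field (arbitrary units)

def loop(Bc, Ms):
    # Simple tanh-shaped upper/lower branches stacked into one measurement vector.
    upper = Ms * np.tanh((B + Bc) / 0.1)
    lower = Ms * np.tanh((B - Bc) / 0.1)
    return np.concatenate([upper, lower])

em = np.vstack([loop(0.05, 1.0), loop(0.4, 0.6)])     # two "source" (end-member) loops

# 30 samples = random non-negative mixtures of the two end-members, plus noise.
props = rng.uniform(0, 1, size=(30, 2))
data = props @ em + 0.01 * rng.standard_normal((30, em.shape[1]))

pca = PCA().fit(data)
print(np.round(pca.explained_variance_ratio_[:4], 3))   # roughly two significant components

# Abundance estimation for one sample by non-negative least squares.
x, _ = nnls(em.T, data[7])
print(np.round(x, 2), np.round(props[7], 2))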
NASA Astrophysics Data System (ADS)
Naren Athreyas, Kashyapa; Gunawan, Erry; Tay, Bee Kiat
2018-07-01
In recent years, climate change and weather have become major concerns affecting daily life. Modelling and prediction of complex atmospheric processes require extensive theoretical studies and observational analyses to improve the accuracy of predictions. The RADAGAST campaign, conducted by the ARM climate research facility stationed at Niamey, Niger from January 2006 to January 2007 and aimed at improving West African climate studies, has provided valuable data for research. In this paper, the characteristics and sources of inertia-gravity waves observed over Niamey during the campaign are investigated. The investigation focuses on waves generated by thunderstorms, which dominate the tropical region. The stratospheric energy density spectrum is analysed to derive the wave properties. Waves with Eulerian periods from 20 to 50 h contained most of the spectral power. It was found that the waves observed over Niamey had a dominant eastward propagation, with horizontal wavelengths ranging from 350 to 1400 km and vertical wavelengths ranging from 0.9 to 3.6 km. The GROGRAT model with ERA-Interim data was used to establish the background atmosphere and identify the source locations of the waves. The waves generated by thunderstorms had propagation distances varying from 200 to 5000 km and propagation durations from 2 to 4 days. The horizontal phase speeds varied from 2 to 20 m/s with wavelengths varying from 100 to 1100 km, and the vertical phase speeds from 0.02 to 0.2 m/s with wavelengths from 2 to 15 km at the source point. The majority of sources were located in the South Atlantic Ocean, with waves propagating towards the northeast. This study demonstrates the complex large-scale coupling in the atmosphere.
2010-02-01
[Fragmentary record on lessons learned from MSG-048 and BML-enabled simulation. Recoverable points: simulation reporting rates need to be controlled (a data producer/consumer issue), and simulation model requirements vary depending on model domain, echelon, complexity, level of automation, level of detail, and nation-specific data.]
Characterization and Remediation of Contaminated Sites: Modeling, Measurement and Assessment
NASA Astrophysics Data System (ADS)
Basu, N. B.; Rao, P. C.; Poyer, I. C.; Christ, J. A.; Zhang, C. Y.; Jawitz, J. W.; Werth, C. J.; Annable, M. D.; Hatfield, K.
2008-05-01
The complexity of natural systems makes it impossible to estimate parameters at the required level of spatial and temporal detail. Thus, it becomes necessary to transition from spatially distributed parameters to spatially integrated parameters that are capable of adequately capturing the system dynamics, without always accounting for local process behavior. Contaminant flux across the source control plane is proposed as an integrated metric that captures source behavior and links it to plume dynamics. Contaminant fluxes were measured using an innovative technology, the passive flux meter, at field sites contaminated with dense non-aqueous phase liquids (DNAPLs) in the US and Australia. Flux distributions were observed to be positively or negatively correlated with the conductivity distribution, depending on the source characteristics of the site. The impact of partial source depletion on the mean contaminant flux and flux architecture was investigated in three-dimensional complex heterogeneous settings using the multiphase transport code UTCHEM and the reactive transport code ISCO3D. Source mass depletion reduced the mean contaminant flux approximately linearly, while the contaminant flux standard deviation decreased proportionally with the mean (i.e., the coefficient of variation of the flux distribution is constant with time). Similar analysis was performed using data from field sites, and the results confirmed the numerical simulations. The linearity of the mass depletion-flux reduction relationship indicates the ability to design remediation systems that deplete mass to achieve a target reduction in source strength. Stability of the flux distribution indicates the ability to characterize the distributions in time once the initial distribution is known. Lagrangian techniques were used to predict contaminant flux behavior during source depletion in terms of the statistics of the hydrodynamic and DNAPL distributions. The advantage of the Lagrangian techniques lies in their small computation time and their inclusion of spatially integrated parameters that can be measured in the field using tracer tests. Analytical models that couple source depletion to plume transport were used for optimization of source and plume treatment. These models are being used for the development of decision and management tools (for DNAPL sites) that consider uncertainty assessments as an integral part of the decision-making process for contaminated site remediation.
Numerical comparisons of ground motion predictions with kinematic rupture modeling
NASA Astrophysics Data System (ADS)
Yuan, Y. O.; Zurek, B.; Liu, F.; deMartin, B.; Lacasse, M. D.
2017-12-01
Recent advances in large-scale wave simulators allow for the computation of seismograms at unprecedented levels of detail and for areas sufficiently large to be relevant to small regional studies. In some instances, detailed information of the mechanical properties of the subsurface has been obtained from seismic exploration surveys, well data, and core analysis. Using kinematic rupture modeling, this information can be used with a wave propagation simulator to predict the ground motion that would result from an assumed fault rupture. The purpose of this work is to explore the limits of wave propagation simulators for modeling ground motion in different settings, and in particular, to explore the numerical accuracy of different methods in the presence of features that are challenging to simulate such as topography, low-velocity surface layers, and shallow sources. In the main part of this work, we use a variety of synthetic three-dimensional models and compare the relative costs and benefits of different numerical discretization methods in computing the seismograms of realistic-size models. The finite-difference method, the discontinuous-Galerkin method, and the spectral-element method are compared for a range of synthetic models having different levels of complexity such as topography, large subsurface features, low-velocity surface layers, and the location and characteristics of fault ruptures represented as an array of seismic sources. While some previous studies have already demonstrated that unstructured-mesh methods can sometimes tackle complex problems (Moczo et al.), we investigate the trade-off between unstructured-mesh methods and regular-grid methods for a broad range of models and source configurations. Finally, for comparison, our direct simulation results are briefly contrasted with those predicted by a few phenomenological ground-motion prediction equations, and a workflow for accurately predicting ground motion is proposed.
A Web Interface for Eco System Modeling
NASA Astrophysics Data System (ADS)
McHenry, K.; Kooper, R.; Serbin, S. P.; LeBauer, D. S.; Desai, A. R.; Dietze, M. C.
2012-12-01
We have developed the Predictive Ecosystem Analyzer (PEcAn) as an open-source scientific workflow system and ecoinformatics toolbox that manages the flow of information in and out of regional-scale terrestrial biosphere models, facilitates heterogeneous data assimilation, tracks data provenance, and enables more effective feedback between models and field research. The over-arching goal of PEcAn is to make otherwise complex analyses transparent, repeatable, and accessible to a diverse array of researchers, allowing both novice and expert users to focus on using the models to examine complex ecosystems rather than having to deal with complex computer system setup and configuration in order to run the models. Through the developed web interface we hide much of the data and model details and allow the user to simply select locations, ecosystem models, and desired data sources as inputs to the model. Novice users are guided by the web interface through setting up a model execution and plotting the results. At the same time, expert users are given enough freedom to modify specific parameters before the model is executed. This will become more important as more models are added to the PEcAn workflow and as more data become available when NEON comes online. On the backend we support the execution of potentially computationally expensive models on different High Performance Computing (HPC) systems and/or clusters. The system can be configured with a single XML file, which gives it the flexibility needed for configuring and running the different models on different systems using a combination of information stored in a database as well as pointers to files on disk. While the web interface usually creates this configuration file, expert users can still edit it directly to fine-tune the configuration. Once a workflow is finished, the web interface allows for the easy creation of plots over the result data, while also allowing the user to download the results for further processing. The current workflow in the web interface is a simple linear workflow, but it will be expanded to allow for more complex workflows. We are working with Kepler and Cyberintegrator to allow for these more complex workflows as well as to collect provenance of the workflow being executed. This provenance regarding model executions is stored in a database along with the derived results. All of this information is then accessible using the BETY database web frontend.
NASA Astrophysics Data System (ADS)
Pulinets, S. A.; Ouzounov, D. P.; Karelin, A. V.; Davidenko, D. V.
2015-07-01
This paper describes the current understanding of the interaction between geospheres from a complex set of physical and chemical processes under the influence of ionization. The sources of ionization involve the Earth's natural radioactivity and its intensification before earthquakes in seismically active regions, anthropogenic radioactivity caused by nuclear weapon testing and accidents in nuclear power plants and radioactive waste storage, the impact of galactic and solar cosmic rays, and active geophysical experiments using artificial ionization equipment. This approach treats the environment as an open complex system with dissipation, where inherent processes can be considered in the framework of the synergistic approach. We demonstrate the synergy between the evolution of thermal and electromagnetic anomalies in the Earth's atmosphere, ionosphere, and magnetosphere. This makes it possible to determine the direction of the interaction process, which is especially important in applications related to short-term earthquake prediction. That is why the emphasis in this study is on the processes preceding the final stage of earthquake preparation; the effects of other ionization sources are used to demonstrate that the model is versatile and broadly applicable in geophysics.
NASA Astrophysics Data System (ADS)
Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo
2018-06-01
The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, such as the tilted-wave interferometer (TWI). It is necessary to improve the measurement efficiency by obtaining the optimum point source array for different test pieces before TWI measurements. For the purpose of forming a point source array based on the gradients of different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point source array according to the gradient of the test surface, a novel interference setup using a fiber array is proposed, in which every point source can be independently switched on and off. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface, which proved the validity of the proposed interference system.
Biomass Scenario Model Documentation: Data and References
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Y.; Newes, E.; Bush, B.
2013-05-01
The Biomass Scenario Model (BSM) is a system dynamics model that represents the entire biomass-to-biofuels supply chain, from feedstock to fuel use. The BSM is a complex model that has been used for extensive analyses; the model and its results can be better understood if input data used for initialization and calibration are well-characterized. It has been carefully validated and calibrated against the available data, with data gaps filled in using expert opinion and internally consistent assumed values. Most of the main data sources that feed into the model are recognized as baseline values by the industry. This report documents data sources and references in Version 2 of the BSM (BSM2), which only contains the ethanol pathway, although subsequent versions of the BSM contain multiple conversion pathways. The BSM2 contains over 12,000 total input values, with 506 distinct variables. Many of the variables are opportunities for the user to define scenarios, while others are simply used to initialize a stock, such as the initial number of biorefineries. However, around 35% of the distinct variables are defined by external sources, such as models or reports. The focus of this report is to provide insight into which sources are most influential in each area of the supply chain.
Building an Open-source Simulation Platform of Acoustic Radiation Force-based Breast Elastography
Wang, Yu; Peng, Bo; Jiang, Jingfeng
2017-01-01
Ultrasound-based elastography techniques, including strain elastography (SE), acoustic radiation force impulse (ARFI) imaging, point shear wave elastography (pSWE) and supersonic shear imaging (SSI), have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. “ground truth”) in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity – one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data – were used to demonstrate capabilities of this extended virtual platform. Overall, simulation results were compared with published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% of the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported. The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. In summary, our initial results were consistent with our expectations and with what has been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments. PMID:28075330
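For context on the shear wave speed (SWS) comparison mentioned above, the sketch below shows a typical time-of-flight style estimate in which SWS is the inverse slope of arrival time versus lateral distance from the push location; it is an assumed, simplified workflow rather than the platform's actual processing chain.

```python
# Illustrative sketch (assumed workflow, not the platform's code): estimating shear
# wave speed (SWS) from time-to-peak displacement at a set of lateral positions,
# as in typical ARFI/SSI-style processing.
import numpy as np

lateral_mm = np.arange(2.0, 10.0, 0.5)    # lateral distance from the push (mm)
true_sws = 3.0                            # assumed shear wave speed (m/s == mm/ms)
time_to_peak_ms = (lateral_mm / true_sws
                   + 0.02 * np.random.default_rng(1).standard_normal(lateral_mm.size))

# SWS is the inverse slope of the arrival-time vs. distance line (mm/ms == m/s).
slope, intercept = np.polyfit(lateral_mm, time_to_peak_ms, 1)
print(f"estimated SWS: {1.0 / slope:.2f} m/s")
```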
DOE Office of Scientific and Technical Information (OSTI.GOV)
CorAL is a software library designed to aid in the analysis of femtoscopic data. Femtoscopic data are a class of measured quantities used in heavy-ion collisions to characterize particle-emitting source sizes. The most common type of such data is two-particle correlations induced by the Hanbury-Brown/Twiss (HBT) effect, but they can also include correlations induced by final-state interactions between pairs of emitted particles in a heavy-ion collision. Because heavy-ion collisions are complex many-particle systems, they are typically modeled with hydrodynamical models or hybrid techniques. Using the CRAB module, CorAL can turn the output from these models into something that can be directly compared to experimental data. CorAL can also take the raw experimentally measured correlation functions and image them by inverting the Koonin-Pratt equation to extract the space-time emission profile of the particle-emitting source. This source function can be further analyzed or directly compared to theoretical calculations.
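For reference, a commonly quoted angle-averaged form of the Koonin-Pratt relation that such imaging inverts is shown below; the notation follows the general femtoscopy literature rather than CorAL's internal conventions.

```latex
% One commonly quoted (angle-averaged) form of the Koonin-Pratt relation; notation
% is standard in the femtoscopy literature and is not claimed to be CorAL-specific.
\begin{equation}
  C(q) - 1 \;=\; 4\pi \int_0^{\infty} dr \, r^2 \, K_0(q, r)\, S(r)
\end{equation}
% Here C(q) is the measured two-particle correlation as a function of relative
% momentum q, S(r) is the probability density for emitting a pair with separation r
% (the source function), and K_0 is the angle-averaged kernel built from the pair's
% relative wave function, K(q, r) = |\Phi_q(r)|^2 - 1.
```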
Bagley, Alexander F; Hill, Samuel; Rogers, Gary S; Bhatia, Sangeeta N
2013-09-24
Plasmonic nanomaterials including gold nanorods are effective agents for inducing heating in tumors. Because near-infrared (NIR) light has traditionally been delivered using extracorporeal sources, most applications of plasmonic photothermal therapy have focused on isolated subcutaneous tumors. For more complex models of disease such as advanced ovarian cancer, one of the primary barriers to gold nanorod-based strategies is the adequate delivery of NIR light to tumors located at varying depths within the body. To address this limitation, a series of implanted NIR illumination sources are described for the specific heating of gold nanorod-containing tissues. Through computational modeling and ex vivo studies, a candidate device is identified and validated in a model of orthotopic ovarian cancer. As the therapeutic, imaging, and diagnostic applications of plasmonic nanomaterials progress, effective methods for NIR light delivery to challenging anatomical regions will complement ongoing efforts to advance plasmonic photothermal therapy toward clinical use.
R-LINE: A Line Source Dispersion Model for Near-Surface Releases
Based on Science Advisory Board and the National Research Council recommendations, EPA-ORD initiated research on near-road air quality and health effects. Field measurements indicated that exposures to traffic-emitted air pollutants near roads can be influenced by complexities of r...
Modeling Watershed Mercury Response to Atmospheric Loadings: Response Time and Challenges
The relationship between sources of mercury to watersheds and its fate in surface waters is invariably complex. Large scale monitoring studies, such as the METAALICUS project, have advanced current understanding of the links between atmospheric deposition of mercury and accumulat...
A decade of Rossi X-ray Timing Explorer Seyfert observations: An RXTE Seyfert spectral database
NASA Astrophysics Data System (ADS)
Mattson, Barbara Jo
2008-10-01
With over forty years of X-ray observations, we should have a grasp on the X-ray nature of active galactic nuclei (AGN). The unification model of Antonucci and Miller (1985) offered a context for understanding observations by defining a "typical" AGN geometry, with observed spectral differences explained by line-of-sight effects. However, the emerging picture is that the central AGN is more complex than unification alone can describe. We explore the unified model with a systematic X-ray spectral study of bright Seyfert galaxies observed by the Rossi X-Ray Timing Explorer (RXTE) over its first 10 years. We develop a spectral-fit database of 821 time-resolved spectra from 39 Seyfert galaxies fitted to a model describing the effects of an X-ray power-law spectrum reprocessed and absorbed by material in the central AGN region. We observe a relationship between radio and X-ray properties for Seyfert 1s, with the spectral parameters differing between radio-loud and radio-quiet Seyfert 1s. We also find a complex relationship between the Fe K equivalent width (EW) and the power-law photon index (Gamma) for the Seyfert 1s, with a correlation for the radio-loud sources and an anti-correlation for the radio-quiet sources. These results can be explained if X-rays from the relativistic jet in radio-loud sources contribute significantly to the observed spectrum. We observe scatter in the EW-Gamma relationship for the Seyfert 2s, suggesting complex environments that unification alone cannot explain. We see a strong correlation between Gamma and the reflection fraction (R) in the Seyfert 1 and 2 samples, but modeling degeneracies are present, so this relationship cannot be trusted as instructive of the AGN physics. For the Seyfert 1 sample, we find an anticorrelation between EW and the 2 to 10 keV luminosity (L_x), also known as the X-ray Baldwin effect. This may suggest that higher luminosity sources contain less material or may be due to a time-lag effect. We do not observe the previously reported relationship between Gamma and the ratio of L_x to the Eddington luminosity.
NASA Astrophysics Data System (ADS)
Elangasinghe, M. A.; Singhal, N.; Dirks, K. N.; Salmond, J. A.; Samarasinghe, S.
2014-09-01
This paper uses artificial neural networks (ANN), combined with k-means clustering, to understand the complex time series of PM10 and PM2.5 concentrations at a coastal location of New Zealand based on data from a single site. Out of available meteorological parameters from the network (wind speed, wind direction, solar radiation, temperature, relative humidity), key factors governing the pattern of the time series concentrations were identified through input sensitivity analysis performed on the trained neural network model. The transport pathways of particulate matter under these key meteorological parameters were further analysed through bivariate concentration polar plots and k-means clustering techniques. The analysis shows that the external sources such as marine aerosols and local sources such as traffic and biomass burning contribute equally to the particulate matter concentrations at the study site. These results are in agreement with the results of receptor modelling by the Auckland Council based on Positive Matrix Factorization (PMF). Our findings also show that contrasting concentration-wind speed relationships exist between marine aerosols and local traffic sources resulting in very noisy and seemingly large random PM10 concentrations. The inclusion of cluster rankings as an input parameter to the ANN model showed a statistically significant (p < 0.005) improvement in the performance of the ANN time series model and also showed better performance in picking up high concentrations. For the presented case study, the correlation coefficient between observed and predicted concentrations improved from 0.77 to 0.79 for PM2.5 and from 0.63 to 0.69 for PM10 and reduced the root mean squared error (RMSE) from 5.00 to 4.74 for PM2.5 and from 6.77 to 6.34 for PM10. The techniques presented here enable the user to obtain an understanding of potential sources and their transport characteristics prior to the implementation of costly chemical analysis techniques or advanced air dispersion models.
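The cluster-augmented network described above can be sketched generically as follows (toy data and assumed settings, not the authors' configuration): meteorological observations are grouped with k-means, and the resulting cluster label is appended as an additional input to the neural network regression.

```python
# Illustrative sketch (assumed variable names and settings, not the authors' code):
# adding a k-means cluster label of the meteorology as an extra input to a neural
# network time-series model of particulate concentrations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 2000
met = np.column_stack([
    rng.uniform(0, 15, n),      # wind speed
    rng.uniform(0, 360, n),     # wind direction
    rng.uniform(0, 900, n),     # solar radiation
    rng.uniform(5, 25, n),      # temperature
    rng.uniform(40, 100, n),    # relative humidity
])
pm10 = 10 + 2 * met[:, 0] + 0.01 * met[:, 2] + rng.normal(0, 3, n)   # toy target

# Group meteorological regimes with k-means and append the cluster label as a feature.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(met)
X = np.column_stack([met, clusters])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                   random_state=0))
model.fit(X[:1500], pm10[:1500])
print("R^2 on held-out data:", round(model.score(X[1500:], pm10[1500:]), 3))
```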
NASA Astrophysics Data System (ADS)
Nowak, W.; Koch, J.
2014-12-01
Predicting DNAPL fate and transport in heterogeneous aquifers is challenging and subject to an uncertainty that needs to be quantified. Models for this task need to be equipped with an accurate source zone description, i.e., the distribution of mass of all partitioning phases (DNAPL, water, and soil) in all possible states ((im)mobile, dissolved, and sorbed), mass-transfer algorithms, and the simulation of transport processes in the groundwater. Such detailed models tend to be computationally cumbersome when used for uncertainty quantification. Therefore, a selective choice of the relevant model states, processes, and scales is both sensitive and indispensable. We investigate the questions of what constitutes a meaningful level of model complexity and how to obtain an efficient model framework that is still physically and statistically consistent. In our proposed model, aquifer parameters and the contaminant source architecture are conceptualized jointly as random space functions. The governing processes are simulated in a three-dimensional, highly-resolved, stochastic, and coupled model that can predict probability density functions of mass discharge and source depletion times. We apply a stochastic percolation approach as an emulator to simulate the contaminant source formation, a random walk particle tracking method to simulate DNAPL dissolution and solute transport within the aqueous phase, and a quasi-steady-state approach to solve for DNAPL depletion times. Using this novel model framework, we test whether and to which degree the desired model predictions are sensitive to simplifications often found in the literature. With this we identify that aquifer heterogeneity, groundwater flow irregularity, uncertain and physically-based contaminant source zones, and their mutual interlinkages are indispensable components of a sound model framework.
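A minimal sketch of the random-walk particle-tracking component mentioned above (with purely illustrative velocities and dispersion coefficients) is given below: particles drift with the mean groundwater velocity and take random steps whose variance is controlled by the dispersion coefficients and time step.

```python
# Minimal sketch (illustrative parameters) of the random-walk particle-tracking idea:
# particles are advected by the local groundwater velocity and dispersed by a random
# step whose variance is set by the dispersion coefficients and the time step.
import numpy as np

rng = np.random.default_rng(42)
n_particles, n_steps, dt = 5000, 200, 1.0        # [-], [-], [days]
vx, vy = 0.5, 0.0                                # mean velocity components [m/day]
Dx, Dy = 0.1, 0.01                               # dispersion coefficients [m^2/day]

pos = np.zeros((n_particles, 2))                 # all particles start at the source
for _ in range(n_steps):
    drift = np.array([vx, vy]) * dt
    jump = rng.standard_normal((n_particles, 2)) * np.sqrt(2 * np.array([Dx, Dy]) * dt)
    pos += drift + jump

# Simple diagnostic: fraction of particles within 1 m of the mean plume front.
at_front = np.abs(pos[:, 0] - vx * n_steps * dt) < 1.0
print("fraction of particles within 1 m of the mean plume front:", at_front.mean())
```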
Propagation of Exploration Seismic Sources in Shallow Water
NASA Astrophysics Data System (ADS)
Diebold, J. B.; Tolstoy, M.; Barton, P. J.; Gulick, S. P.
2006-05-01
The choice of safety radii to mitigate the impact of exploration seismic sources upon marine mammals is typically based on measurement or modeling in deep water. In shallow water environments, rule-of-thumb spreading laws are often used to predict the falloff of amplitude with offset from the source, but actual measurements (or ideally, near-perfect modeling) are still needed to account for the effects of bathymetric changes and subseafloor characteristics. In addition, the question "how shallow is 'shallow'?" needs an answer. In a cooperative effort by NSF, MMS, NRL, IAGC and L-DEO, a series of seismic source calibration studies was carried out in the Northern Gulf of Mexico during 2003. The sources used were the two-, six-, ten-, twelve-, and twenty-airgun arrays of R/V Ewing, and a 31-element, 3-string "G" gun array, deployed by M/V Kondor, an exploration industry source ship. The results of the Ewing calibrations have been published, documenting results in deep (3200 m) and shallow (60 m) water. Lengthy analysis of the Kondor results, presented here, suggests an approach to answering the "how shallow is shallow" question. After initially falling off steadily with source-receiver offset, the Kondor levels suddenly increased at a 4 km offset. Ray-based modeling with a complex, realistic source, but with a simple homogeneous water column-over-elastic halfspace ocean, shows that the observed pattern is chiefly due to geophysical effects, and not focusing within the water column. The same kind of modeling can be used to predict how the amplitudes will change with decreasing water depth, and when deep-water safety radii may need to be increased. Another set of data (see Barton, et al., this session) recorded in 20 meters of water during early 2005, however, shows that simple modeling may be insufficient when the geophysics becomes more complex. In this particular case, the fact that the seafloor was within the near field of the R/V Ewing source array seems to have given rise to seismic phases not normally seen in marine survey data acquired in deeper water. The associated partitioning of energy is likely to have caused the observed uncharacteristically rapid loss of energy with distance. It appears that in this case, the shallow-water marine mammal safety mitigation measures prescribed and followed were far more stringent than they needed to be. A new approach, wherein received levels detected by the towed 6-km multichannel hydrophone array may be used to modify safety radii, has recently been proposed based on these observations.
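The rule-of-thumb spreading laws referred to above can be illustrated with a short sketch (source level, threshold and offsets are assumed values, not those of the Ewing or Kondor arrays): received level falls off as N log10(r), with N = 20 for spherical and N = 10 for cylindrical spreading, and the safety radius is the offset at which the level drops below a chosen threshold.

```python
# Rule-of-thumb spreading-law sketch (illustrative numbers only): received level versus
# offset for spherical (20 log10 r) and cylindrical (10 log10 r) spreading, and the
# offset at which the level falls below an assumed 180 dB re 1 uPa threshold.
import numpy as np

source_level = 230.0                     # assumed source level, dB re 1 uPa at 1 m
threshold = 180.0                        # assumed mitigation threshold, dB re 1 uPa
r = np.logspace(0, 6, 601)               # offsets from 1 m to 1000 km

for name, coeff in [("spherical", 20.0), ("cylindrical", 10.0)]:
    received = source_level - coeff * np.log10(r)
    radius = r[np.argmax(received < threshold)]
    print(f"{name:11s} spreading: level drops below {threshold} dB at ~{radius:,.0f} m")
```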
cellPACK: A Virtual Mesoscope to Model and Visualize Structural Systems Biology
Johnson, Graham T.; Autin, Ludovic; Al-Alusi, Mostafa; Goodsell, David S.; Sanner, Michel F.; Olson, Arthur J.
2014-01-01
cellPACK assembles computational models of the biological mesoscale, an intermediate scale (10⁻⁷–10⁻⁸ m) between molecular and cellular biology. cellPACK’s modular architecture unites existing and novel packing algorithms to generate, visualize and analyze comprehensive 3D models of complex biological environments that integrate data from multiple experimental systems biology and structural biology sources. cellPACK is currently available as open source code, with tools for validation of models and with recipes and models for five biological systems: blood plasma, cytoplasm, synaptic vesicles, HIV and a mycoplasma cell. We have applied cellPACK to model distributions of HIV envelope protein to test several hypotheses for consistency with experimental observations. Biologists, educators, and outreach specialists can interact with cellPACK models, develop new recipes and perform packing experiments through scripting and graphical user interfaces at http://cellPACK.org. PMID:25437435
Efficient Geometric Sound Propagation Using Visibility Culling
NASA Astrophysics Data System (ADS)
Chandak, Anish
2011-07-01
Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. We can generate smooth, artifact-free output audio signals by applying efficient audio-processing algorithms. We also present the first efficient audio-processing algorithm for scenarios with simultaneously moving source and moving receiver (MS-MR) which incurs less than 25% overhead compared to static source and moving receiver (SS-MR) or moving source and static receiver (MS-SR) scenario.
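As a minimal illustration of the image-source idea used for specular reflections (geometry and numbers are arbitrary, and this is not the FastV/AD-Frustum code), the source is mirrored across a reflecting plane, and the reflected path length, and hence its delay, is the straight-line distance from the image source to the receiver.

```python
# Minimal sketch of the image-source idea for specular reflection: mirror the source
# across a reflecting plane; the reflected path length (and hence delay) is the
# straight-line distance from the image source to the receiver. Geometry is illustrative.
import numpy as np

def mirror_across_plane(point, plane_point, plane_normal):
    """Reflect a point across the plane defined by a point and a normal vector."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    d = np.dot(np.asarray(point) - np.asarray(plane_point), n)
    return np.asarray(point) - 2.0 * d * n

c = 343.0                                     # speed of sound in air, m/s
source = np.array([1.0, 2.0, 1.5])
receiver = np.array([4.0, 3.0, 1.2])
floor_point, floor_normal = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])

image = mirror_across_plane(source, floor_point, floor_normal)
direct_delay = np.linalg.norm(receiver - source) / c
reflected_delay = np.linalg.norm(receiver - image) / c
print(f"direct path delay:      {direct_delay * 1e3:.2f} ms")
print(f"floor-reflection delay: {reflected_delay * 1e3:.2f} ms")
```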
NASA Astrophysics Data System (ADS)
Usman, M.; Furuya, M.
2014-12-01
The Quetta Syntaxis in western Baluchistan, Pakistan, serves as a junction for different thrust faults. As this area also lies close to the left-lateral strike-slip Chaman fault, which is thought to mark the boundary between the Indian and Eurasian plates, the resulting seismological behavior of this regime is much more complex. On 28 October 2008, an earthquake of magnitude 6.4 (Mw) struck in the region of the Quetta Syntaxis, below the fold and thrust belt of the Suleiman and Kirthar ranges, and was followed by a doublet on the very next day. In association with these major events, there have been four more shocks, one foreshock and three aftershocks, with moment magnitudes greater than 5. On the basis of seismological, GPS and ENVISAT/ASAR InSAR data, many researchers have tried to explain the source of this sequence. The latest source modeling results, based on ENVISAT/ASAR data, have provided insight into the complexity of tectonics in the study area. However, in comparison with ALOS/PALSAR InSAR data, ENVISAT/ASAR lacked signals near the epicentral area because of low coherence. This has probably led to different interpretations by different researchers, even on the basis of the same satellite data. Using ALOS/PALSAR data, we suggest a four-fault model, with two left-lateral and two right-lateral faults, which also retains the most desirable features of previous models.
NASA Astrophysics Data System (ADS)
Qian, Y.; Wei, S.; Wu, W.; Ni, S.
2017-12-01
Among the various types of 3D heterogeneity in the Earth, trenches might be the most complex systems, which include rapidly varying bathymetry and usually thick sediments below the water layer. These structural complexities can cause substantial waveform complexity on seismograms, but their impact on earthquake source studies is not yet well understood. Here we explore those effects via studies of two moderate aftershocks (one near the coast and the other close to the Peru-Chile trench axis) in the 2015 Illapel earthquake sequence. The horizontal locations and depths of these two events are poorly constrained and the results reported by various agencies display substantial variations. Thus, we first relocated the epicenters using the P-wave first arrivals and determined other parameters by waveform fitting. In a jackknifing way, we found that the trench event has large differences between regional and teleseismic solutions, in particular for depth, while the coastal event shows consistent results. The teleseismic P/Pdiff waves of these two events also display distinctly different features. More specifically, the trench event has more complex P/Pdiff waves and stronger coda waves, in terms of amplitude and duration (longer than 100 s). The coda waves are coherent across stations at different distances and azimuths, indicating a more likely origin of scattering waves due to 3D heterogeneity near the trench. To quantitatively model those 3D effects, we adopted a hybrid waveform simulation approach that computes the 3D wavefield in the source region with the Spectral Element Method (SEM) and then propagates the wavefield to teleseismic and shadow-zone distances through the Direct Solution Method (DSM). We incorporated the GEBCO bathymetry and the water layer into the SEM simulations and assumed the IASP91 1D model for the DSM computation. Compared with the poor fits of 1D synthetics to the data, we obtain dramatic improvements in 3D waveform fits across a series of frequency bands. With sensitivity tests of 3D waveform modeling, the centroid longitude and depth of the near-trench event are refined. Our study suggests that the complex trench structure must be taken into account for a reliable analysis of shallow earthquakes near trenches, in particular for the shallowest tsunamigenic earthquakes.
Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J
2017-05-01
Existing approaches to derive decision models from plaintext clinical data frequently depend on medical dictionaries as the sources of potential features. Prior research suggests that decision models developed using non-dictionary based feature sourcing approaches and "off the shelf" tools could predict cancer with performance metrics between 80% and 90%. We sought to compare non-dictionary based models to models built using features derived from medical dictionaries. We evaluated the detection of cancer cases from free text pathology reports using decision models built with combinations of dictionary or non-dictionary based feature sourcing approaches, 4 feature subset sizes, and 5 classification algorithms. Each decision model was evaluated using the following performance metrics: sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristics (ROC) curve. Decision models parameterized using dictionary and non-dictionary feature sourcing approaches produced performance metrics between 70 and 90%. The source of features and feature subset size had no impact on the performance of a decision model. Our study suggests there is little value in leveraging medical dictionaries for extracting features for decision model building. Decision models built using features extracted from the plaintext reports themselves achieve comparable results to those built using medical dictionaries. Overall, this suggests that existing "off the shelf" approaches can be leveraged to perform accurate cancer detection using less complex Named Entity Recognition (NER) based feature extraction, automated feature selection and modeling approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
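A generic "off the shelf" pipeline of the kind the study refers to might look like the sketch below (toy reports, assumed parameters, and not the authors' exact configuration): features are drawn directly from the report text via TF-IDF, a fixed-size subset is selected, and a standard classifier is scored with ROC AUC.

```python
# Illustrative "off the shelf" pipeline (not the authors' exact configuration):
# non-dictionary features are taken directly from the report text via TF-IDF,
# a fixed-size feature subset is selected, and a standard classifier is evaluated
# with ROC AUC. Reports and labels below are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

reports = [
    "invasive ductal carcinoma identified in the specimen",
    "benign fibroadenoma, no evidence of malignancy",
    "adenocarcinoma present at the resection margin",
    "normal tissue, no atypia or malignant cells seen",
] * 25                                    # toy corpus standing in for pathology reports
labels = [1, 0, 1, 0] * 25                # 1 = cancer case, 0 = non-case

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    SelectKBest(chi2, k=20),              # feature subset size is one of the knobs varied
    LogisticRegression(max_iter=1000),
)
auc = cross_val_score(pipeline, reports, labels, cv=5, scoring="roc_auc")
print("mean ROC AUC:", auc.mean().round(3))
```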
NASA Astrophysics Data System (ADS)
Torres-Martínez, J. A.; Seddaiu, M.; Rodríguez-Gonzálvez, P.; Hernández-López, D.; González-Aguilera, D.
2015-02-01
The complexity of archaeological sites hinders integral modelling using current geomatic techniques (i.e. aerial and close-range photogrammetry and terrestrial laser scanning) individually, so a multi-sensor approach is proposed as the best solution to provide 3D reconstruction and visualization of these complex sites. Sensor registration represents a critical milestone when automation is required and when aerial and terrestrial datasets must be integrated. To this end, several problems must be solved: coordinate system definition, geo-referencing, co-registration of point clouds, geometric and radiometric homogeneity, etc. Last but not least, safeguarding of tangible archaeological heritage and its associated intangible expressions entails a multi-source data approach in which heterogeneous material (historical documents, drawings, archaeological techniques, habits of living, etc.) should be collected and combined with the resulting hybrid 3D models. The proposed multi-data-source and multi-sensor approach is applied to the case study of the “Tolmo de Minateda” archaeological site. A total extension of 9 ha is reconstructed, with an adapted level of detail, by an ultralight aerial platform (paratrike), an unmanned aerial vehicle, a terrestrial laser scanner and terrestrial photogrammetry. In addition, the defensive nature of the site (i.e. the presence of three different defensive walls), together with the considerable stratification of the archaeological site (i.e. different archaeological surfaces and constructive typologies), requires that tangible and intangible archaeological heritage expressions be integrated with the hybrid 3D models obtained, so that the archaeological site can be analysed, understood and exploited by different experts and heritage stakeholders.
Controlled-source seismic interferometry with one way wave fields
NASA Astrophysics Data System (ADS)
van der Neut, J.; Wapenaar, K.; Thorbecke, J. W.
2008-12-01
In Seismic Interferometry we generally cross-correlate registrations at two receiver locations and sum over an array of sources to retrieve a Green's function as if one of the receiver locations hosts a (virtual) source and the other receiver location hosts an actual receiver. One application of this concept is to redatum an area of surface sources to a downhole receiver location, without requiring information about the medium between the sources and receivers, thus providing an effective tool for imaging below complex overburden, which is also known as the Virtual Source method. We demonstrate how elastic wavefield decomposition can be effectively combined with controlled-source Seismic Interferometry to generate virtual sources in a downhole receiver array that radiate only down- or upgoing P- or S-waves with receivers sensing only down- or upgoing P- or S-waves. For this purpose we derive exact Green's matrix representations from a reciprocity theorem for decomposed wavefields. Required is the deployment of multi-component sources at the surface and multi-component receivers in a horizontal borehole. The theory is supported with a synthetic elastic model, where redatumed traces are compared with those of a directly modeled reflection response, generated by placing active sources at the virtual source locations and applying elastic wavefield decomposition on both source and receiver side.
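The cross-correlate-and-stack step at the heart of this approach can be illustrated with a toy acoustic example (assumed travel times and a simplified source geometry, not the elastic decomposed-wavefield case treated in the paper): correlating the traces recorded at two receivers and summing over sources produces a virtual-source response whose peak sits at the inter-receiver travel time.

```python
# Conceptual sketch of the cross-correlation step in seismic interferometry (toy,
# acoustic, noise-free): traces recorded at two receivers from many surface sources
# are cross-correlated and summed, so that receiver A acts as a virtual source for B.
import numpy as np

fs, nt = 500.0, 2000                        # sample rate [Hz], samples per trace
t = np.arange(nt) / fs
wavelet = lambda t0: np.exp(-((t - t0) * 40.0) ** 2)   # simple pulse arriving at t0

rng = np.random.default_rng(3)
virtual_source_gather = np.zeros(2 * nt - 1)
for _ in range(200):                        # loop over surface source positions
    # In this simplified geometry the arrival at B always lags A by 0.2 s, so the
    # correlation stacks coherently at the inter-receiver travel time.
    t_a = rng.uniform(0.5, 1.5)
    t_b = t_a + 0.2
    trace_a, trace_b = wavelet(t_a), wavelet(t_b)
    virtual_source_gather += np.correlate(trace_b, trace_a, mode="full")

lags = (np.arange(2 * nt - 1) - (nt - 1)) / fs
print("retrieved travel time A->B: %.2f s" % lags[np.argmax(virtual_source_gather)])
```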
Schiestl-Aalto, Pauliina; Kulmala, Liisa; Mäkinen, Harri; Nikinmaa, Eero; Mäkelä, Annikki
2015-04-01
The control of tree growth vs environment by carbon sources or sinks remains unresolved although it is widely studied. This study investigates growth of tree components and carbon sink-source dynamics at different temporal scales. We constructed a dynamic growth model 'carbon allocation sink source interaction' (CASSIA) that calculates tree-level carbon balance from photosynthesis, respiration, phenology and temperature-driven potential structural growth of tree organs and dynamics of stored nonstructural carbon (NSC) and their modifying influence on growth. With the model, we tested hypotheses that sink demand explains the intra-annual growth dynamics of the meristems, and that the source supply is further needed to explain year-to-year growth variation. The predicted intra-annual dimensional growth of shoots and needles and the number of cells in xylogenesis phases corresponded with measurements, whereas NSC hardly limited the growth, supporting the first hypothesis. Delayed GPP influence on potential growth was necessary for simulating the yearly growth variation, indicating also at least an indirect source limitation. CASSIA combines seasonal growth and carbon balance dynamics with long-term source dynamics affecting growth and thus provides a first step to understanding the complex processes regulating intra- and interannual growth and sink-source dynamics. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
Quantitative Biofractal Feedback Part II ’Devices, Scalability & Robust Control’
2008-05-01
in the modelling of proton exchange membrane fuel cells (PEMFC) may work as a powerful tool in the development and widespread testing of alternative...energy sources in the next decade [9], where biofractal controllers will be used to control these complex systems. The dynamic model of PEMFC is...dynamic response of the PEMFC. In the Iftukhar model, the fuel cell is represented by an equivalent circuit, whose components are identified with
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellors, R J; Rodgers, A; Walter, W
2011-10-18
The Source Physics Experiment (SPE) is planning a 1000 kg (TNT equivalent) shot (SPE2) at the Nevada National Security Site (NNSS) in a granite borehole at a depth (canister centroid) of 45 meters. This shot follows an earlier shot of 100 kg in the same borehole at a depth of 60 m. Surrounding the shotpoint is an extensive array of seismic sensors arrayed in 5 radial lines extending out 2 km to the north and east and approximately 10-15 to the south and west. Prior to SPE1, simulations using a finite difference code and a 3D numerical model based on the geologic setting were conducted, which predicted higher amplitudes to the south and east in the alluvium of Yucca Flat along with significant energy on the transverse components caused by scattering within the 3D volume along with some contribution by topographic scattering. Observations from the SPE1 shot largely confirmed these predictions, although the ratio of transverse energy relative to the vertical and radial components was in general larger than predicted. A new set of simulations has been conducted for the upcoming SPE2 shot. These include improvements to the velocity model based on SPE1 observations as well as new capabilities added to the simulation code. The most significant is the addition of a new source model within the finite difference code, using the predicted ground velocities from a hydrodynamic code (GEODYN) as the driving condition on the boundaries of a cube embedded within WPP; this provides a more sophisticated source modeling capability linked directly to the source site materials (e.g. granite) and the type and size of the source. Two sets of SPE2 simulations were conducted, one with a GEODYN source and 3D complex media (no topography, node spacing of 5 m) and one with a standard isotropic pre-defined time function (3D complex media with topography, node spacing of 5 m). Results were provided as time series at specific points corresponding to sensor locations for both translational (x, y, z) and rotational components. Estimates of spectral scaling for SPE2 are provided using a modified version of the Mueller-Murphy model. An estimate of expected aftershock probabilities was also provided, based on the methodology of Ford and Walter (2010).
Visual Modelling of Data Warehousing Flows with UML Profiles
NASA Astrophysics Data System (ADS)
Pardillo, Jesús; Golfarelli, Matteo; Rizzi, Stefano; Trujillo, Juan
Data warehousing involves complex processes that transform source data through several stages to deliver suitable information ready to be analysed. Though many techniques for visual modelling of data warehouses from the static point of view have been devised, only few attempts have been made to model the data flows involved in a data warehousing process. Besides, each attempt was mainly aimed at a specific application, such as ETL, OLAP, what-if analysis, data mining. Data flows are typically very complex in this domain; for this reason, we argue, designers would greatly benefit from a technique for uniformly modelling data warehousing flows for all applications. In this paper, we propose an integrated visual modelling technique for data cubes and data flows. This technique is based on UML profiling; its feasibility is evaluated by means of a prototype implementation.
NASA Astrophysics Data System (ADS)
Barrett, Steven R. H.; Britter, Rex E.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point-source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion are shown to produce results several orders of magnitude more efficiently, with a loss of accuracy small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow for additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
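The kind of calculation such approximations are designed to accelerate can be sketched as follows (dispersion-parameter curves, emission rate and geometry are assumptions for illustration, not the paper's parameterization): a ground-level Gaussian point-source solution is evaluated, and a line source is handled by numerically summing point sources along its length.

```python
# Illustrative sketch of the kind of calculation being approximated: ground-level
# concentration from a ground-level point source using a Gaussian plume, with a line
# source handled by numerically summing point sources along its length. The sigma
# power laws and all numbers are assumptions for illustration only.
import numpy as np

def sigma_y(x):  return 0.08 * x / np.sqrt(1 + 0.0001 * x)   # assumed rural curves [m]
def sigma_z(x):  return 0.06 * x / np.sqrt(1 + 0.0015 * x)

def point_source_conc(Q, u, x, y):
    """Ground-level concentration [g/m^3] from a ground-level point source."""
    if x <= 0:
        return 0.0
    sy, sz = sigma_y(x), sigma_z(x)
    return Q / (np.pi * u * sy * sz) * np.exp(-0.5 * (y / sy) ** 2)

# Line source along y from -500 m to +500 m, decomposed into point sources.
Q_line = 1e-3            # emission rate per unit length [g/(s*m)]
u = 4.0                  # mean wind speed [m/s], wind blowing along +x
receptor = (200.0, 50.0) # receptor location [m]

y_segments = np.linspace(-500.0, 500.0, 1001)
dy = y_segments[1] - y_segments[0]
conc = sum(point_source_conc(Q_line * dy, u, receptor[0], receptor[1] - ys)
           for ys in y_segments)
print(f"concentration at receptor: {conc:.2e} g/m^3")
```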
NASA Astrophysics Data System (ADS)
Erickson, M.; Olaguer, J.; Wijesinghe, A.; Colvin, J.; Neish, B.; Williams, J.
2014-12-01
It is becoming increasingly important to understand the emissions and health effects of industrial facilities. Many areas have no or limited sustained monitoring capabilities, making it difficult to quantify the major pollution sources affecting human health, especially in fence line communities. Developments in real-time monitoring and micro-scale modeling offer unique ways to tackle these complex issues. This presentation will demonstrate the capability of coupling real-time observations with micro-scale modeling to provide real-time information and near real-time source attribution. The Houston Advanced Research Center constructed the Mobile Acquisition of Real-time Concentrations (MARC) laboratory. MARC consists of a Ford E-350 passenger van outfitted with a Proton Transfer Reaction Mass Spectrometer (PTR-MS) and meteorological equipment. This allows for the fast measurement of various VOCs important to air quality. The data recorded from the van is uploaded to an off-site database and the information is broadcast to a website in real-time. This provides for off-site monitoring of MARC's observations, which allows off-site personnel to provide immediate input to the MARC operators on how to best achieve project objectives. The information stored in the database can also be used to provide near real-time source attribution. An inverse model has been used to ascertain the amount, location, and timing of emissions based on MARC measurements in the vicinity of industrial sites. The inverse model is based on a 3D micro-scale Eulerian forward and adjoint air quality model known as the HARC model. The HARC model uses output from the Quick Urban and Industrial Complex (QUIC) wind model and requires a 3D digital model of the monitored facility based on lidar or industrial permit data. MARC is one of the instrument platforms deployed during the 2014 Benzene and other Toxics Exposure Study (BEE-TEX) in Houston, TX. The main goal of the study is to quantify and explain the origin of ambient exposure to hazardous air pollutants in an industrial fence line community near the Houston Ship Channel. Preliminary results derived from analysis of MARC observations during the BEE-TEX experiment will be presented.
Magma Dynamics at Axial Seamount, Juan de Fuca Ridge, from Seafloor Deformation Data
NASA Astrophysics Data System (ADS)
Baumgardt, E.; Nooner, S. L.; Chadwick, W.
2014-12-01
Axial Seamount is located about 480 km west of the Oregon coast at the intersection of the Cobb hotspot and the Juan de Fuca Ridge. Two eruptions have been observed since routine observations began in the 1990s, one in January 1998 and the other in April 2011. Precise bottom pressure measurements have documented an inflation/deflation cycle within Axial's summit caldera. The slow inflation observed at the center of the caldera was punctuated by sudden rapid deflation of 3.2 m during the 1998 eruption and 2.4 m during the 2011 eruption. Pressure data collected in September 2013 from continuously recording bottom pressure recorders and campaign-style measurements with an ROV indicate that Axial Seamount inflated 1.57 m from April 2011 to September 2013 at an average inflation rate of 61 cm/yr, meaning it had already recovered more than 65% of the deflation from the 2011 eruption within just 2.4 years. The geometry and location of the deformation source are not well constrained by the spatially sparse pressure data, particularly for the most recent co-eruption deflation and post-eruption inflation signals. Here, we use geodetic data collected in September 2013 to test the fit of multiple numerical models of increasing complexity. We show that for this time period (since April 2011) neither a simple point deformation source (Mogi model) nor an oblate spheroid (penny-shaped crack) provides a good fit to the data. We then use finite element models to build more complex inflation geometries, guided by recently seismically imaged magma reservoirs, in an attempt to understand the source(s) of the observed deformation pattern. The recent seismic data provide good constraints on magma reservoir geometry and show that the most robust melt occurs under the southeast part of the caldera at Axial. However, previous geodetic measurements at Axial have consistently shown a deformation source near the caldera center. We use numerical modeling to attempt to reconcile these differences.
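For readers unfamiliar with the point deformation source mentioned above, the sketch below shows one common form of the Mogi vertical-displacement expression (for a Poisson ratio of 0.25) fit to synthetic uplift data; the volume change, depth and noise level are illustrative values, not results for Axial Seamount.

```python
# Hedged sketch of the point (Mogi) deformation source: vertical surface displacement
# for a Poisson ratio of 0.25, fit to synthetic uplift data with scipy. All numbers
# are illustrative and are not the Axial Seamount result.
import numpy as np
from scipy.optimize import curve_fit

def mogi_uz(r, dV, d):
    """Vertical displacement [m] at radial distance r [m] from a point source of
    volume change dV [m^3] at depth d [m], assuming a Poisson ratio of 0.25."""
    return 3.0 * dV / (4.0 * np.pi) * d / (r**2 + d**2) ** 1.5

r_obs = np.array([0.0, 500.0, 1000.0, 2000.0, 4000.0])          # benchmark offsets [m]
uz_obs = (mogi_uz(r_obs, dV=2.0e7, d=3500.0)
          + np.random.default_rng(0).normal(0, 0.01, r_obs.size))

popt, _ = curve_fit(mogi_uz, r_obs, uz_obs, p0=[1.0e7, 3000.0])
print(f"estimated volume change: {popt[0]:.2e} m^3, source depth: {popt[1]:.0f} m")
```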
Nyhan, L; Begley, M; Mutel, A; Qu, Y; Johnson, N; Callanan, M
2018-09-01
The aim of this study was to develop a model to predict growth of Listeria in complex food matrices as a function of pH, water activity and undissociated acetic and propionic acid concentration i.e. common food hurdles. Experimental growth curves of Listeria in food products and broth media were collected from ComBase, the literature and industry sources from which a bespoke secondary gamma model was constructed. Model performance was evaluated by comparing predictions to measured growth rates in growth media (BHI broth) and two adjusted food matrices (zucchini purée and béarnaise sauce). In general, observed growth rates were higher in broth than in the food matrices which resulted in the model over-estimating growth in the adjusted food matrices. In addition, model outputs were more accurate for conditions without acids, indicating that the organic acid component of the model was a source of inaccuracy. In summary, a new predictive growth model for innovating or renovating food products that rely on multi-hurdle technology was created. This study is the first to report on modelling of propionic acid as an inhibitor of Listeria in combination with other hurdles. Our findings provide valuable insights into predictive model design and performance and highlight the importance of experimental validation of models in real food matrices rather than laboratory media alone. Copyright © 2018 Elsevier Ltd. All rights reserved.
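The gamma-concept structure underlying such a secondary model can be sketched as follows (cardinal values and MICs are illustrative assumptions, not the parameters fitted in this study): the optimal growth rate is multiplied by independent inhibition terms for pH, water activity and each undissociated organic acid.

```python
# Illustrative gamma-concept sketch (parameter values are assumptions, not the
# fitted values from this study): the growth rate of Listeria is the optimal rate
# multiplied by independent inhibition factors for pH, water activity and each
# undissociated organic acid.
def gamma_pH(pH, pH_min=4.3, pH_opt=7.0, pH_max=9.4):
    # Rosso-type cardinal pH term
    num = (pH - pH_min) * (pH - pH_max)
    den = (pH - pH_min) * (pH - pH_max) - (pH - pH_opt) ** 2
    return max(0.0, num / den) if den != 0 else 0.0

def gamma_aw(aw, aw_min=0.92):
    return max(0.0, (aw - aw_min) / (1.0 - aw_min))

def gamma_acid(undissociated_mM, mic_mM):
    # simple 1 - [HA]/MIC term for an undissociated acid
    return max(0.0, 1.0 - undissociated_mM / mic_mM)

def growth_rate(mu_opt, pH, aw, acetic_mM, propionic_mM,
                mic_acetic=20.0, mic_propionic=10.0):
    return (mu_opt * gamma_pH(pH) * gamma_aw(aw)
            * gamma_acid(acetic_mM, mic_acetic)
            * gamma_acid(propionic_mM, mic_propionic))

# Example: predicted rate (1/h) in a mildly acidified, reduced-aw sauce-like matrix.
print(round(growth_rate(mu_opt=0.5, pH=5.5, aw=0.97, acetic_mM=5.0, propionic_mM=2.0), 3))
```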
Advanced Computational Framework for Environmental Management ZEM, Version 1.x
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vesselinov, Velimir V.; O'Malley, Daniel; Pandey, Sachin
2016-11-04
Typically, environmental management problems require analysis of large and complex data sets originating from concurrent data streams with different data collection frequencies and pedigree. These big data sets require on-the-fly integration into a series of models with different complexity for various types of model analyses, where the data are applied as soft and hard model constraints. This is needed to provide fast iterative model analyses based on the latest available data to guide decision-making. Furthermore, the data and models are associated with uncertainties. The uncertainties are probabilistic (e.g. measurement errors) and non-probabilistic (unknowns, e.g. alternative conceptual models characterizing site conditions). To address all of these issues, we have developed an integrated framework for real-time data and model analyses for environmental decision-making called ZEM. The framework allows for seamless and on-the-fly integration of data and modeling results for robust and scientifically-defensible decision-making, applying advanced decision analysis tools such as Bayesian-Information-Gap Decision Theory (BIG-DT). The framework also includes advanced methods for optimization that are capable of dealing with a large number of unknown model parameters, and surrogate (reduced order) modeling capabilities based on support vector regression techniques. The framework is coded in Julia, a state-of-the-art high-performance programming language (http://julialang.org). The ZEM framework is open-source, released under the GPL V3 license, and can be applied to any environmental management site.
3-D decoupled inversion of complex conductivity data in the real number domain
NASA Astrophysics Data System (ADS)
Johnson, Timothy C.; Thomle, Jonathan
2018-01-01
Complex conductivity imaging (also called induced polarization imaging or spectral induced polarization imaging when conducted at multiple frequencies) involves estimating the frequency-dependent complex electrical conductivity distribution of the subsurface. The superior diagnostic capabilities provided by complex conductivity spectra have driven advancements in mechanistic understanding of complex conductivity as well as modelling and inversion approaches over the past several decades. In this work, we demonstrate the theory and application for an approach to 3-D modelling and inversion of complex conductivity data in the real number domain. Beginning from first principles, we demonstrate how the equations for the real and imaginary components of the complex potential may be decoupled. This leads to a description of the real and imaginary source current terms, and a corresponding assessment of error arising from an assumption necessary to complete the decoupled modelling. We show that for most earth materials, which exhibit relatively small phases (e.g. less than 0.2 radians) in complex conductivity, these errors become insignificant. For higher phase materials, the errors may be quantified and corrected through an iterative procedure. We demonstrate the accuracy of numerical forward solutions by direct comparison to corresponding analytic solutions. We demonstrate the inversion using both synthetic and field examples with data collected over a waste infiltration trench, at frequencies ranging from 0.5 to 7.5 Hz.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Donald D.; Gowardhan, Akshay; Cameron-Smith, Philip
2015-08-08
Here, a computational Bayesian inverse technique is used to quantify the effects of meteorological inflow uncertainty on tracer transport and source estimation in a complex urban environment. We estimate a probability distribution of meteorological inflow by comparing wind observations to Monte Carlo simulations from the Aeolus model. Aeolus is a computational fluid dynamics model that simulates atmospheric and tracer flow around buildings and structures at meter-scale resolution. Uncertainty in the inflow is propagated through forward and backward Lagrangian dispersion calculations to determine the impact on tracer transport and the ability to estimate the release location of an unknown source. Our uncertainty methods are compared against measurements from an intensive observation period during the Joint Urban 2003 tracer release experiment conducted in Oklahoma City.
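A minimal sketch of this kind of Bayesian weighting: Monte Carlo inflow members are scored against wind observations with a Gaussian likelihood, and the resulting weights define a posterior over inflow (and over any quantity propagated through the dispersion calculations). The ensemble values, observations and error level below are placeholders, not Aeolus output or Joint Urban 2003 data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder ensemble: each member gives simulated wind speeds (m/s) at the sensors
n_members, n_sensors = 200, 5
simulated = rng.normal(loc=4.0, scale=1.5, size=(n_members, n_sensors))
observed = np.array([3.2, 3.5, 4.1, 2.9, 3.8])      # placeholder observations
sigma_obs = 0.5                                      # assumed observation error (m/s)

# Gaussian likelihood of each member given the observations
misfit2 = ((simulated - observed) ** 2).sum(axis=1) / sigma_obs**2
log_like = -0.5 * misfit2
weights = np.exp(log_like - log_like.max())
weights /= weights.sum()                             # posterior weights (uniform prior)

# Posterior mean of any member-wise quantity, e.g. an inflow speed parameter
inflow_speed = simulated.mean(axis=1)                # stand-in for the inflow parameter
print(float(np.sum(weights * inflow_speed)))
```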
NASA Technical Reports Server (NTRS)
Rundel, R. D.; Butler, D. M.; Stolarski, R. S.
1977-01-01
A concise model has been developed that analyzes uncertainties in stratospheric perturbations while using a minimum of computer time, yet is complete enough to represent the results of more complex models. The steady-state model applies iteration to achieve coupling between interacting species. The species are determined from diffusion equations with appropriate sources and sinks. Diurnal effects due to chlorine nitrate formation are accounted for by analytic approximation. The model has been used to evaluate steady-state perturbations due to injections of chlorine and NO(x).
Creation of anatomical models from CT data
NASA Astrophysics Data System (ADS)
Alaytsev, Innokentiy K.; Danilova, Tatyana V.; Manturov, Alexey O.; Mareev, Gleb O.; Mareev, Oleg V.
2018-04-01
Computed tomography (CT) is a rich source of biomedical data because it allows detailed exploration of complex anatomical structures. Some structures are not visible on CT scans, and some are hard to distinguish due to the partial volume effect. CT datasets therefore require preprocessing before they can be used as anatomical models in a simulation system. This work describes segmentation and data transformation methods for creating anatomical models from CT data. The resulting models may be used for visual and haptic rendering and for drilling simulation in a virtual surgery system.
Shock Sensitivity of energetic materials
NASA Technical Reports Server (NTRS)
Kim, K.
1980-01-01
Viscoplastic deformation is examined as the principal source of hot-spot energy. Some shock sensitivity data are explained in terms of a proposed model. A hollow-sphere model is used to approximate the complex porous matrix of energetic materials. Two pieces of shock sensitivity data are qualitatively compared with results of the proposed model. The first is the p²τ law. The second is the desensitization of energetic materials by a ramp-wave applied stress. An approach to improve the model based on experimental observations is outlined.
Almendros, J.; Chouet, B.; Dawson, P.
2001-01-01
We present a probabilistic method to locate the source of seismic events using seismic antennas. The method is based on a comparison of the event azimuths and slownesses derived from frequency-slowness analyses of array data, with a slowness vector model. Several slowness vector models are considered, including both homogeneous and horizontally layered half-spaces and also a more complex medium representing the actual topography and three-dimensional velocity structure of the region under study. In this latter model the slowness vector is obtained from frequency-slowness analyses of synthetic signals. These signals are generated using the finite difference method and include the effects of topography and velocity structure to reproduce as closely as possible the behavior of the observed wave fields. A comparison of these results with those obtained with a homogeneous half-space demonstrates the importance of structural and topographic effects, which, if ignored, lead to a bias in the source location. We use synthetic seismograms to test the accuracy and stability of the method and to investigate the effect of our choice of probability distributions. We conclude that this location method can provide the source position of shallow events within a complex volcanic structure such as Kilauea Volcano with an error of ±200 m. Copyright 2001 by the American Geophysical Union.
USDA-ARS?s Scientific Manuscript database
Many landscapes are comprised of a variety of vegetation types with different canopy structure, rooting depth, physiological characteristics, including response to environmental stressors, etc. Even in agricultural regions, different management practices, including crop rotations, irrigation schedu...
A Bivariate Space-time Downscaler Under Space and Time Misalignment
Ozone and particulate matter (PM2.5) are co-pollutants that have long been associated with increased public health risks. Information on concentration levels for both pollutants comes from two sources: monitoring sites and output from complex numerical models that produce...
Trends and New Directions in Software Architecture
2014-10-10
[Slide outline fragments: frameworks, open source, cloud strategies, NoSQL, machine learning, MDD, incremental approaches, dashboards, distributed development; complexity grows; NoSQL models are not created equal; 2014 current research: Lightweight Evaluation and Architecture Prototyping for Big Data.]
Myerburg, Robert J; Ullmann, Steven G
2015-04-01
Although identification and management of cardiovascular risk markers have provided important population risk insights and public health benefits, individual risk prediction remains challenging. Using sudden cardiac death risk as a base case, the complex epidemiology of sudden cardiac death risk and the substantial new funding required to study individual risk are explored. Complex epidemiology derives from the multiple subgroups having different denominators and risk profiles, while funding limitations emerge from saturation of conventional sources of research funding without foreseeable opportunities for increases. A resolution to this problem would have to emerge from new sources of funding targeted to individual risk prediction. In this analysis, we explore the possibility of a research funding strategy that would offer business incentives to the insurance industries, while providing support for unresolved research goals. The model is developed for the case of sudden cardiac death risk, but the concept is applicable to other areas of the medical enterprise. © 2015 American Heart Association, Inc.
Compton Reflection in AGN with Simbol-X
NASA Astrophysics Data System (ADS)
Beckmann, V.; Courvoisier, T. J.-L.; Gehrels, N.; Lubiński, P.; Malzac, J.; Petrucci, P. O.; Shrader, C. R.; Soldi, S.
2009-05-01
AGN exhibit complex hard X-ray spectra. Our current understanding is that the emission is dominated by inverse Compton processes which take place in the corona above the accretion disk, and that absorption and reflection in a distant absorber play a major role. These processes can be directly observed through the shape of the continuum, the Compton reflection hump around 30 keV, and the iron fluorescence line at 6.4 keV. We demonstrate the capabilities of Simbol-X to constrain complex models for cases like MCG-05-23-016, NGC 4151, NGC 2110, and NGC 4051 in short (10 ksec) observations. We compare the simulations with recent observations of these sources by INTEGRAL, Swift and Suzaku. Constraining reflection models for AGN with Simbol-X will help us to get a clear view of the processes and geometry near the central engine in AGN, and will give insight into which sources are responsible for the Cosmic X-ray background at energies >20 keV.
An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.
Chen, Lei; Wei, Guoyuan; Shen, Zhenyao
2015-10-21
To address the computationally intensive and technically complex problem of controlling nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved due to the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs.
BROADBAND RADIO POLARIMETRY AND FARADAY ROTATION OF 563 EXTRAGALACTIC RADIO SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, C. S.; Gaensler, B. M.; Feain, I. J.
2015-12-10
We present a broadband spectropolarimetric survey of 563 discrete, mostly unresolved radio sources between 1.3 and 2.0 GHz using data taken with the Australia Telescope Compact Array. We have used rotation-measure synthesis to identify Faraday-complex polarized sources, those objects whose frequency-dependent polarization behavior indicates the presence of material possessing complicated magnetoionic structure along the line of sight (LOS). For sources classified as Faraday-complex, we have analyzed a number of their radio and multiwavelength properties to determine whether they differ from Faraday-simple polarized sources (sources for which LOS magnetoionic structures are comparatively simple) in these properties. We use this information to constrain the physical nature of the magnetoionic structures responsible for generating the observed complexity. We detect Faraday complexity in 12% of polarized sources at ∼1′ resolution, but we demonstrate that underlying signal-to-noise limitations mean the true percentage is likely to be significantly higher in the polarized radio source population. We find that the properties of Faraday-complex objects are diverse, but that complexity is most often associated with depolarization of extended radio sources possessing a relatively steep total intensity spectrum. We find an association between Faraday complexity and LOS structure in the Galactic interstellar medium (ISM) and claim that a significant proportion of the Faraday complexity we observe may be generated at interfaces of the ISM associated with ionization fronts near neutral hydrogen structures. Galaxy cluster environments and internally generated Faraday complexity provide possible alternative explanations in some cases.
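For readers unfamiliar with rotation-measure synthesis, the sketch below evaluates the standard discrete Faraday dispersion transform (a weighted sum of the complex polarization over λ² channels) on a synthetic two-component source; the channel setup and the injected Faraday depths are illustrations, not survey data or the survey's processing.

```python
import numpy as np

c = 299792458.0
freqs = np.linspace(1.3e9, 2.0e9, 256)     # synthetic channels across a 1.3-2.0 GHz band
lam2 = (c / freqs) ** 2
lam2_0 = lam2.mean()

# Synthetic "Faraday-complex" source: two components at different Faraday depths
P = 1.0 * np.exp(2j * 50.0 * lam2) + 0.5 * np.exp(2j * (-120.0) * lam2)

# Discrete RM synthesis: F(phi) = (1/N) * sum_i P_i * exp(-2i*phi*(lam2_i - lam2_0))
phi = np.linspace(-500.0, 500.0, 2001)     # trial Faraday depths (rad m^-2)
F = np.array([np.mean(P * np.exp(-2j * p * (lam2 - lam2_0))) for p in phi])

phi_peak = phi[np.argmax(np.abs(F))]
print(round(float(phi_peak), 1))           # strongest component, near +50 rad m^-2
```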
Localizing Submarine Earthquakes by Listening to the Water Reverberations
NASA Astrophysics Data System (ADS)
Castillo, J.; Zhan, Z.; Wu, W.
2017-12-01
Mid-Ocean Ridge (MOR) earthquakes generally occur far from any land-based station and are of moderate magnitude, making them difficult to detect and, in most cases, to locate accurately. This limits our understanding of how MOR normal and transform faults move and the manner in which they slip. Unlike continental events, seismic records from earthquakes occurring beneath the ocean floor show complex reverberations caused by P-wave energy trapped in the water column, which are highly dependent on the source location and on how efficiently energy propagates to the near-source surface. These later arrivals are commonly considered to be only a nuisance, as they can interfere with the primary arrivals. However, in this study, we take advantage of the wavefield's high sensitivity to small changes in the seafloor topography and the present-day availability of worldwide multi-beam bathymetry to relocate submarine earthquakes by modeling these water column reverberations in teleseismic signals. Using a three-dimensional hybrid method for modeling body wave arrivals, we demonstrate that an accurate hypocentral location of a submarine earthquake (<5 km) can be achieved if the structural complexities near the source region are appropriately accounted for. This presents a novel way of studying earthquake source properties and will serve as a means to explore the influence of physical fault structure on the seismic behavior of transform faults.
Complex Geometric Models of Diffusion and Relaxation in Healthy and Damaged White Matter
Farrell, Jonathan A.D.; Smith, Seth A.; Reich, Daniel S.; Calabresi, Peter A.; van Zijl, Peter C.M.
2010-01-01
Which aspects of tissue microstructure affect diffusion-weighted MRI signals? Prior models, many of which use Monte Carlo simulations, have focused on relatively simple models of the cellular microenvironment and have not considered important anatomic details. With the advent of higher-order analysis models for diffusion imaging, such as high-angular-resolution diffusion imaging (HARDI), more realistic models are necessary. This paper presents and evaluates the reproducibility of simulations of diffusion in complex geometries. Our framework is quantitative, does not require specialized hardware, is easily implemented with little programming experience, and is freely available as open-source software. Models may include compartments with different diffusivities, permeabilities, and T2 time constants using both parametric (e.g., spheres and cylinders) and arbitrary (e.g., mesh-based) geometries. Three-dimensional diffusion displacement-probability functions are mapped with high reproducibility, and thus can be readily used to assess reproducibility of diffusion-derived contrasts. PMID:19739233
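To give a concrete sense of what such a simulation does, here is a much cruder Monte Carlo sketch of restricted diffusion: random walkers take Gaussian steps inside an impermeable cylinder, and the displacement distribution is narrower across the cylinder than along its axis. The stepping scheme (step rejection rather than true reflection), radius and timing are illustrative choices, not the framework described above.

```python
import numpy as np

rng = np.random.default_rng(1)

D = 2.0e-9          # free diffusivity, m^2/s
dt = 1.0e-4         # time step, s
n_steps = 500
n_walkers = 5000
radius = 5.0e-6     # impermeable cylinder radius (m), axis along z

sigma = np.sqrt(2.0 * D * dt)             # per-axis step standard deviation
pos = np.zeros((n_walkers, 3))

for _ in range(n_steps):
    step = rng.normal(0.0, sigma, size=(n_walkers, 3))
    new = pos + step
    # Crude boundary handling: reject steps that leave the cylinder cross-section
    outside = np.hypot(new[:, 0], new[:, 1]) > radius
    new[outside] = pos[outside]
    pos = new

# Restricted (x) vs free (z) root-mean-square displacements
print(np.sqrt((pos[:, 0] ** 2).mean()), np.sqrt((pos[:, 2] ** 2).mean()))
```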
Evaluating the effectiveness of the MASW technique in a geologically complex terrain
NASA Astrophysics Data System (ADS)
Anukwu, G. C.; Khalil, A. E.; Abdullah, K. B.
2018-04-01
MASW surveys carried out at a number of sites in Pulau Pinang, Malaysia, showed complicated dispersion curves, which consequently made the inversion into a soil shear-velocity model ambiguous. This research work details efforts to identify the source of these complicated dispersion curves. As a starting point, the complexity of the phase velocity spectrum is assumed to be due either to the surveying parameters or to the elastic properties of the soil structures. For the former, the surveying was carried out using different parameters. The complexities persisted for the different surveying parameters, an indication that the elastic properties of the soil structure could be the reason. To test this assumption, a synthetic modelling approach was adopted using information from boreholes, the literature and geologically plausible models. Results suggest that the presence of irregular variation in the stiffness of the soil layers, high stiffness contrast and relatively shallow bedrock results in a quite complex f-v spectrum, especially at frequencies lower than 20 Hz, making it difficult to accurately extract the dispersion curve below this frequency. As such, for the MASW technique, especially in complex geological situations as demonstrated, great care should be taken during data processing and inversion to obtain a model that accurately depicts the subsurface.
1983-09-01
[Garbled extraction of a Fortran routine listing; recoverable content: COMMON blocks (e.g. /AMPZIJ/); CALLING ROUTINE: FLDDRV; NAME: PLAINT (GTD); PURPOSE: to determine whether a source ray reflection from plate MP occurs, i.e. whether a ray traveling from the source image location in the reflected ray direction passes through the plate.]
Business intelligence tools for radiology: creating a prototype model using open-source tools.
Prevedello, Luciano M; Andriole, Katherine P; Hanson, Richard; Kelly, Pauline; Khorasani, Ramin
2010-04-01
Digital radiology departments could benefit from the ability to integrate and visualize data (e.g. information reflecting complex workflow states) from all of their imaging and information management systems in one composite presentation view. Leveraging data warehousing tools developed in the business world may be one way to achieve this capability. Collectively, the concept of managing the information available in such a data repository is known as Business Intelligence (BI). This paper describes the concepts used in Business Intelligence, their importance to modern radiology, and the steps used in the creation of a prototype model of a data warehouse for BI using open-source tools.
Ueda, Masanori; Iwaki, Masafumi; Nishihara, Tokihiro; Satoh, Yoshio; Hashimoto, Ken-ya
2008-04-01
This paper describes a circuit model for the analysis of nonlinearity in filters based on radiofrequency (RF) bulk acoustic wave (BAW) resonators. The nonlinear output is expressed by a current source connected in parallel with the linear resonator. The amplitude of the nonlinear current source is set proportional to the product of the linear currents flowing in the resonator. Thus, the nonlinear analysis can be performed with common linear analysis, even for complex device structures. The analysis is applied to a ladder-type RF BAW filter, and the frequency dependence of the nonlinear output is discussed. Furthermore, the analysis is verified through comparison with experiments.
Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming
2013-01-01
In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for a generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method offers high stability and low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) an average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB for JM 16.0's method, with an average rate control error of 1.95%; and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
NASA Astrophysics Data System (ADS)
Gao, M.; Huang, S. T.; Wang, P.; Zhao, Y. A.; Wang, H. B.
2016-11-01
The geological disposal of high-level radioactive waste (hereinafter referred to as "geological disposal") is a long-term, complex, and systematic scientific project. The data and information resources produced in its research and development (hereinafter referred to as "R&D") process provide significant support for the R&D of the geological disposal system and lay a foundation for the long-term stability and safety assessment of the repository site. However, the data related to research and engineering in the siting of geological disposal repositories are complicated (multi-source, multi-dimensional and changeable), and the requirements for data accuracy and comprehensive application have become much higher than before, so the data model design of the geo-information database for the disposal repository faces serious challenges. In this paper, the data resources of the pre-selected areas of the repository have been comprehensively compiled and systematically analyzed. Based on a thorough understanding of the application requirements, this work provides a solution to the key technical problems, including a reasonable classification system for multi-source data entities, complex logical relations, and effective physical storage structures. The new solution goes beyond the data classification and conventional spatial data organization models applied in the traditional industry, and realizes data organization and integration around data entities and spatial relationships that are independent, complete, and significant for applications in HLW geological disposal. Reasonable, feasible and flexible conceptual, logical and physical data models have been established to ensure effective integration and to facilitate application development of multi-source data in the pre-selected areas for geological disposal.
Coalescent: an open-source and scalable framework for exact calculations in coalescent theory
2012-01-01
Background Currently, there is no open-source, cross-platform and scalable framework for coalescent analysis in population genetics. There is no scalable GUI-based user application either. Such a framework and application would not only drive the creation of more complex and realistic models but also make them truly accessible. Results As a first attempt, we built a framework and user application for the domain of exact calculations in coalescent analysis. The framework provides an API with the concepts of model, data, statistic, phylogeny, gene tree and recursion. Infinite-alleles and infinite-sites models are considered. It defines pluggable computations such as counting and listing all the ancestral configurations and genealogies and computing the exact probability of data. It can visualize a gene tree, trace and visualize the internals of the recursion algorithm for further improvement, and dynamically attach a number of output processors. The user application defines jobs in a plug-in-like manner so that they can be activated, deactivated, installed or uninstalled on demand. Multiple jobs can be run and their inputs edited. Job inputs are persisted across restarts, and running jobs can be cancelled where applicable. Conclusions Coalescent theory plays an increasingly important role in analysing molecular population genetic data. The models involved are mathematically difficult and computationally challenging. An open-source, scalable framework that lets users immediately take advantage of the progress made by others will enable exploration of yet more difficult and realistic models. As models become more complex and mathematically less tractable, the need for an integrated computational approach is obvious. Object-oriented designs, though they have upfront costs, are practical now and can provide such an integrated approach. PMID:23033878
Distances, Kinematics, And Structure Of The Orion Complex
NASA Astrophysics Data System (ADS)
Kounkel, Marina; Hartmann, Lee
2018-01-01
I present an analysis of the structure and kinematics of the Orion Molecular Cloud Complex in an effort to better characterize the dynamical state of the closest region of ongoing massive star formation. I measured stellar parallaxes and proper motions with <5% uncertainty using radio VLBI observations of non-thermally emitting sources located in various star-forming regions within the Orion Complex. This includes the first direct distance measurements for sources that are located outside of the Orion Nebula. I identified a number of binary systems in the VLBI dataset and fitted their orbital motion, which allows for the direct measurement of the masses of the individual components. Additionally, I have identified several stars that have been ejected from the Orion Nebula due to strong gravitational interactions with its most massive members. I complemented the parallax and proper motion measurements with observations of optical radial velocities of stars toward the Orion Complex, probing the histories of both dynamical evolution and star formation in the region and providing a 6-dimensional model of the Complex. These observations can serve as a baseline for comparison with the upcoming results from the Gaia space telescope.
pyNS: an open-source framework for 0D haemodynamic modelling.
Manini, Simone; Antiga, Luca; Botti, Lorenzo; Remuzzi, Andrea
2015-06-01
A number of computational approaches have been proposed for the simulation of haemodynamics and vascular wall dynamics in complex vascular networks. Among them, 0D pulse wave propagation methods allow flow and pressure distributions and wall displacements throughout vascular networks to be modelled efficiently at low computational cost. Although several techniques are documented in the literature, the availability of open-source computational tools is still limited. We here present python Network Solver, a modular solver framework for 0D problems released under a BSD license as part of the archToolkit (http://archtk.github.com). As an application, we describe patient-specific models of the systemic circulation and the detailed upper extremity for use in the prediction of maturation after surgical creation of vascular access for haemodialysis.
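As a flavour of what a 0D (lumped-parameter) haemodynamic element looks like, the sketch below integrates a generic three-element Windkessel driven by a prescribed inflow waveform; the parameter values and waveform are illustrative, and this is not pyNS's network formulation or API.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-element Windkessel: proximal resistance Rp, compliance C, distal resistance Rd
Rp, C, Rd = 0.05, 1.1, 1.0          # illustrative units: mmHg*s/mL, mL/mmHg, mmHg*s/mL

def inflow(t, period=0.8):
    """Prescribed inflow (mL/s): half-sine ejection during systole, zero in diastole."""
    tc = t % period
    return 300.0 * np.sin(np.pi * tc / 0.3) if tc < 0.3 else 0.0

def dpc_dt(t, pc):
    # Compliance (distal) pressure: C * dPc/dt = Q_in - Pc/Rd
    return [(inflow(t) - pc[0] / Rd) / C]

sol = solve_ivp(dpc_dt, (0.0, 8.0), [80.0], max_step=1e-3, dense_output=True)
t = np.linspace(7.2, 8.0, 400)                        # last cardiac cycle
pc = sol.sol(t)[0]
p_in = pc + Rp * np.array([inflow(ti) for ti in t])   # inlet pressure
print(round(p_in.max(), 1), round(p_in.min(), 1))     # systolic- and diastolic-like values
```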
Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames
NASA Astrophysics Data System (ADS)
Schlup, Jason; Blanquart, Guillaume
2018-03-01
The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.
An adaptable architecture for patient cohort identification from diverse data sources.
Bache, Richard; Miles, Simon; Taweel, Adel
2013-12-01
We define and validate an architecture for systems that identify patient cohorts for clinical trials from multiple heterogeneous data sources. This architecture has an explicit query model capable of supporting temporal reasoning and expressing eligibility criteria independently of the representation of the data used to evaluate them. A key feature of the architecture is that queries defined according to the query model are both pre- and post-processed, and this is used to address both structural and semantic heterogeneity. The process of extracting the relevant clinical facts is separated from the process of reasoning about them. A specific instance of the query model is then defined and implemented. We show that this specific instance of the query model has wide applicability. We then describe how it is used to access three diverse data warehouses to determine patient counts. Although the proposed architecture requires greater effort to implement the query model than would be the case when using just SQL and accessing a database management system directly, this effort is justified because it supports both temporal reasoning and heterogeneous data sources. The query model only needs to be implemented once, no matter how many data sources are accessed. Each additional source requires only the implementation of a lightweight adaptor. The architecture has been used to implement a specific query model that can express complex eligibility criteria and access three diverse data warehouses, thus demonstrating the feasibility of this approach in dealing with temporal reasoning and data heterogeneity.
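The sketch below illustrates the general pattern described above, one source-independent query model plus a lightweight adaptor per data source; all class names, criteria and the toy in-memory 'warehouse' are hypothetical, and the paper's actual query language, temporal operators and data sources are not reproduced here.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Protocol

@dataclass
class Criterion:
    """Source-independent eligibility criterion (hypothetical query model)."""
    code: str        # e.g. a diagnosis code
    after: date      # a qualifying event must occur on or after this date

class SourceAdaptor(Protocol):
    def count_matching(self, criteria: List[Criterion]) -> int: ...

class ToyWarehouseAdaptor:
    """Lightweight adaptor over one (toy, in-memory) data source."""
    def __init__(self, rows):
        self.rows = rows     # each row: (patient_id, code, event_date)

    def count_matching(self, criteria):
        patients = {pid for pid, _, _ in self.rows}
        def satisfies(pid, c):
            return any(p == pid and code == c.code and when >= c.after
                       for p, code, when in self.rows)
        return sum(1 for pid in patients
                   if all(satisfies(pid, c) for c in criteria))

criteria = [Criterion(code="E11", after=date(2012, 1, 1))]
source = ToyWarehouseAdaptor([(1, "E11", date(2013, 5, 2)),
                              (2, "I10", date(2013, 6, 1))])
print(source.count_matching(criteria))   # patient count from one adaptor
```

Adding another warehouse then means writing another small adaptor class, while the criteria objects stay unchanged.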
Analysis of the Herschel/HIFI 1.2 THz Wide Spectral Survey of the Orion Kleinmann-Low Nebula
NASA Astrophysics Data System (ADS)
Crockett, Nathan R.
This dissertation presents a comprehensive analysis of a broad-band spectral line survey of the Orion Kleinmann-Low nebula (Orion KL), one of the most chemically rich regions in the Galaxy, using the HIFI instrument on board the Herschel Space Observatory. This survey spans a frequency range from 480 to 1907 GHz at a resolution of 1.1 MHz. These observations thus encompass the largest spectral coverage ever obtained toward this massive star-forming region in the sub-mm with high spectral resolution, and include frequencies >1 THz where the Earth's atmosphere prevents observations from the ground. In all, we detect emission from 36 molecules (76 isotopologues). Combining this dataset with ground-based mm spectroscopy obtained with the IRAM 30 m telescope, we model the molecular emission assuming local thermodynamic equilibrium (LTE). Because of the wide frequency coverage, our models are constrained over an unprecedented range in excitation energy, including states at or close to ground up to energies where emission is no longer detected. A χ² analysis indicates that most of our models reproduce the observed emission well. In particular, complex organics, some with thousands of transitions, are well fit by LTE models, implying that gas densities are high (>10^6 cm^-3) and that excitation temperatures and column densities are well constrained. Molecular abundances are computed using H2 column densities also derived from the HIFI survey. The rotation temperature distribution of molecules detected toward the hot core is much wider than that of the compact ridge, plateau, and extended ridge. We find that complex N-bearing species, cyanides in particular, systematically probe hotter gas than complex O-bearing species. This indicates that complex N-bearing molecules may be more difficult to remove from grain surfaces or that hot gas-phase formation routes are important for these species. We also present a detailed non-LTE analysis of H2S emission toward the hot core, which suggests this light hydride may probe heavily embedded gas in close proximity to a hidden self-luminous source (or sources), conceivably responsible for Orion KL's high luminosity. The abundances derived here, along with the publicly available data and molecular fits, represent a legacy for comparison to other sources and chemical models.
Prediction of Down-Gradient Impacts of DNAPL Source Depletion Using Tracer Techniques
NASA Astrophysics Data System (ADS)
Basu, N. B.; Fure, A. D.; Jawitz, J. W.
2006-12-01
Four simplified DNAPL source depletion models that have been discussed in the literature recently are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. One of the source depletion models, the equilibrium streamtube model, is shown to be relatively easily parameterized using non-reactive and reactive tracers. Non-reactive tracers are used to characterize the aquifer heterogeneity while reactive tracers are used to describe the mean DNAPL mass and its distribution. This information is then used in a Lagrangian framework to predict source remediation performance. In a Lagrangian approach the source zone is conceptualized as a collection of non-interacting streamtubes with hydrodynamic and DNAPL heterogeneity represented by the variation of the travel time and DNAPL saturation among the streamtubes. The travel time statistics are estimated from the non-reactive tracer data while the DNAPL distribution statistics are estimated from the reactive tracer data. The combined statistics are used to define an analytical solution for contaminant dissolution under natural gradient flow. The tracer prediction technique compared favorably with results from a multiphase flow and transport simulator UTCHEM in domains with different hydrodynamic heterogeneity (variance of the log conductivity field = 0.2, 1 and 3).
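For concreteness, here is a minimal sketch of the power-function style of source depletion model mentioned above: the flux-averaged source concentration is tied to the remaining mass through an exponent Γ, and the mass balance is stepped forward under steady flow. The parameter values are illustrative placeholders, and the streamtube/tracer parameterization of the study is not reproduced here.

```python
import numpy as np

# Illustrative parameters (not field-calibrated values)
M0 = 100.0       # initial DNAPL mass in the source zone (kg)
C0 = 0.05        # initial flux-averaged concentration (kg/m^3)
Q = 10.0         # groundwater flow through the source zone (m^3/day)
gamma = 0.8      # depletion exponent: <1 gives long tailing, >1 rapid early decline

dt, t_end = 1.0, 5000.0
t = np.arange(0.0, t_end, dt)
M = np.empty_like(t)
C = np.empty_like(t)
M[0] = M0
for k in range(len(t)):
    C[k] = C0 * (max(M[k], 0.0) / M0) ** gamma      # power-function source model
    if k + 1 < len(t):
        M[k + 1] = max(M[k] - Q * C[k] * dt, 0.0)   # explicit mass-balance step

print(round(M[-1], 2), round(C[-1], 5))             # remaining mass and concentration
```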
Integrated spatiotemporal characterization of dust sources and outbreaks in Central and East Asia
NASA Astrophysics Data System (ADS)
Darmenova, Kremena T.
The potential of atmospheric dust aerosols to modify the Earth's environment and climate has been recognized for some time. However, predicting the diverse impacts of dust faces several significant challenges. One is to quantify the complex spatial and temporal variability of the dust burden in the atmosphere. Another is to quantify the fraction of dust originating from human-made sources. This thesis focuses on the spatiotemporal characterization of sources and dust outbreaks in Central and East Asia by integrating ground-based data, satellite multisensor observations, and modeling. A new regional dust modeling system capable of operating over a span of scales was developed. The modeling system consists of a dust module, DuMo, which incorporates several dust emission schemes of different complexity, and the PSU/NCAR mesoscale model MM5, which offers a variety of physical parameterizations and flexible nesting capability. The modeling system was used to perform for the first time a comprehensive study of the timing, duration, and intensity of individual dust events in Central and East Asia. Determining the uncertainties caused by the choice of model physics, especially the boundary layer parameterization, and the dust production scheme was the focus of our study. Implications for assessments of the anthropogenic dust fraction in these regions were also addressed. Focusing on spring 2001, an analysis of routine surface meteorological observations and satellite multi-sensor data was carried out in conjunction with modeling to determine the extent to which the integrated data set can be used to characterize the spatiotemporal distribution of dust plumes at a range of temporal scales, addressing active dust sources in China and Mongolia, mid-range transport, and trans-Pacific long-range transport of dust outbreaks on a case-by-case basis. This work demonstrates that adequate and consistent characterization of individual dust events is central to establishing a reliable climatology, ultimately leading to improved assessments of dust impacts on the environment and climate. This will also help to identify the appropriate temporal and spatial scales for adequate intercomparison between model results and observational data, as well as for developing an integrated analysis methodology for dust studies.
Strategy for Texture Management in Metals Additive Manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirka, Michael M.; Lee, Yousub; Greeley, Duncan A.
Additive manufacturing (AM) technologies have long been recognized for their ability to fabricate complex geometric components directly from models conceptualized through computers, allowing for complicated designs and assemblies to be fabricated at lower costs, with shorter time to market, and with improved function. Lagging behind the design complexity aspect is the ability to fully exploit AM processes for control over texture within AM components. Currently, standard heat-fill strategies utilized in AM processes result in largely columnar grain structures. Here, we propose a point heat source fill for the electron beam melting (EBM) process through which the texture in AM materials can be controlled. Using this point heat source strategy, the ability to form either columnar or equiaxed grain structures upon solidification through changes in the process parameters associated with the point heat source fill is demonstrated for the nickel-base superalloy Inconel 718. Mechanically, the material is demonstrated to exhibit either anisotropic properties for the columnar-grained material fabricated using the standard raster scan of the EBM process or isotropic properties for the equiaxed material fabricated using the point heat source fill.
Global high-frequency source imaging accounting for complexity in Green's functions
NASA Astrophysics Data System (ADS)
Lambert, V.; Zhan, Z.
2017-12-01
The general characterization of earthquake source processes at long periods has seen great success via seismic finite fault inversion/modeling. Complementary techniques, such as seismic back-projection, extend the capabilities of source imaging to higher frequencies and reveal finer details of the rupture process. However, such high frequency methods are limited by the implicit assumption of simple Green's functions, which restricts the use of global arrays and introduces artifacts (e.g., sweeping effects, depth/water phases) that require careful attention. This motivates the implementation of an imaging technique that considers the potential complexity of Green's functions at high frequencies. We propose an alternative inversion approach based on the modest assumption that the path effects contributing to signals within high-coherency subarrays share a similar form. Under this assumption, we develop a method that can combine multiple high-coherency subarrays to invert for a sparse set of subevents. By accounting for potential variability in the Green's functions among subarrays, our method allows for the utilization of heterogeneous global networks for robust high resolution imaging of the complex rupture process. The approach also provides a consistent framework for examining frequency-dependent radiation across a broad frequency spectrum.
Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS
Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise
2013-01-01
1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.
Local tsunamis and earthquake source parameters
Geist, Eric L.; Dmowska, Renata; Saltzman, Barry
1999-01-01
This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using the elastic dislocation theory for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes have indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address the realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.
Modeling Events in the Lower Imperial Valley Basin
NASA Astrophysics Data System (ADS)
Tian, X.; Wei, S.; Zhan, Z.; Fielding, E. J.; Helmberger, D. V.
2010-12-01
The Imperial Valley south of the US-Mexican border has few seismic stations but many significant earthquakes. Many of these events, such as the recent El Mayor-Cucapah event, have complex mechanisms involving a mixture of strike-slip and normal slip, with more than 30 aftershocks of magnitude above 4.5 to date. Unfortunately, many earthquake records from the southern Imperial Valley display a great deal of complexity, i.e., strong Rayleigh-wave multipathing and extended codas. In short, regional recordings in the US are too complex to easily separate source properties from complex propagation. Fortunately, the Dec 30 foreshock (Mw=5.9) has excellent recordings teleseismically and regionally, and moreover is observed with InSAR. We use this simple strike-slip event to calibrate paths. In particular, we are finding record segments involving Pnl (including depth phases) and some surface waves (mostly Love waves) that appear well behaved, i.e., can be approximated by synthetics from 1D local models and events modeled with the Cut-and-Paste (CAP) routine. Simple events can then be identified, along with path calibration. Modeling the more complicated paths can then be started with known mechanisms. We will report on both the aftershocks and historic events.
Topics in Complexity: Dynamical Patterns in the Cyberworld
NASA Astrophysics Data System (ADS)
Qi, Hong
Quantitative understanding of the mechanisms of complex systems is a common "difficult" problem across many fields such as the physical, biological, social and economic sciences. Investigation of the underlying dynamics of complex systems and the building of individual-based models have recently been fueled by the big data resulting from advances in information technology. This thesis investigates complex systems in social science, focusing on civil unrests on streets and relevant activities online. The investigation consists of collecting data on unrests from open digital sources, identifying the underlying dynamical patterns, making predictions and constructing models. A simple law governing the progress of two-sided confrontations is proposed using micro-level activity data. Unraveling the connections between online organizing activity and the outburst of unrests on streets gives rise to a further meso-level pattern of human behavior, through which adversarial groups evolve online and hyper-escalate ahead of real-world uprisings. Based on the patterns found, a noticeable improvement in the prediction of civil unrests is achieved. Meanwhile, a novel model created by combining mobility dynamics in the cyberworld with a traditional contagion model can better capture the characteristics of modern civil unrests and other contagion-like phenomena than the original one.
Yang, Guanxue; Wang, Lin; Wang, Xiaofan
2017-06-07
Reconstruction of the networks underlying complex systems is one of the most crucial problems in many areas of engineering and science. In this paper, rather than identifying parameters of complex systems governed by pre-defined models or taking some polynomial and rational functions as prior information for subsequent model selection, we put forward a general framework for nonlinear causal network reconstruction from time series with limited observations. Obtaining multi-source datasets through a data-fusion strategy, we propose a novel method to handle the nonlinearity and directionality of complex networked systems, namely group lasso nonlinear conditional Granger causality. Specifically, our method can exploit different sets of radial basis functions to approximate the nonlinear interactions between each pair of nodes and integrates sparsity into grouped variable selection. The performance of our approach is first assessed with two types of simulated datasets from nonlinear vector autoregressive models and nonlinear dynamic models, and then verified on the benchmark datasets from the DREAM3 Challenge 4. The effects of data size and noise intensity are also discussed. All of the results demonstrate that the proposed method performs better in terms of a higher area under the precision-recall curve.
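To make the grouped-variable-selection idea concrete, here is a small proximal-gradient (block soft-thresholding) sketch of the group lasso on synthetic data: the lagged copies of each candidate driver form one coefficient group, and a group surviving shrinkage is read as a (linear, illustrative) causal link. The radial-basis-function expansion, conditioning on the remaining nodes and the DREAM benchmarks of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic example: the target is driven by node 0 (via two lags), not by nodes 1-2
T, lags, n_nodes = 400, 3, 3
series = rng.normal(size=(T, n_nodes))
t_idx = np.arange(lags, T)
y = (0.6 * series[t_idx - 1, 0] - 0.3 * series[t_idx - 2, 0]
     + 0.1 * rng.normal(size=t_idx.size))

# Design matrix: one group of lagged columns per candidate driver node
X = np.column_stack([series[t_idx - l, j]
                     for j in range(n_nodes) for l in range(1, lags + 1)])
groups = [list(range(j * lags, (j + 1) * lags)) for j in range(n_nodes)]

lam, n = 0.05, len(y)
step = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()
beta = np.zeros(X.shape[1])
for _ in range(2000):                                  # proximal gradient iterations
    z = beta - step * (X.T @ (X @ beta - y)) / n       # gradient step on squared loss
    for g in groups:                                   # block soft-thresholding
        norm = np.linalg.norm(z[g])
        z[g] = 0.0 if norm == 0 else max(0.0, 1.0 - step * lam / norm) * z[g]
    beta = z

# Group norms: only the group belonging to node 0 should survive
print([round(float(np.linalg.norm(beta[g])), 3) for g in groups])
```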
NASA Astrophysics Data System (ADS)
Ayda Ustaömer, Petek; Ustaömer, Timur; Gerdes, Axel; Robertson, Alastair H. F.; Zulauf, Gernold
2014-05-01
The Permo-Triassic Karakaya Complex is well explained by northward subduction of Palaeotethys, but until now no corresponding magmatic arc has been identified in the region. With the aim of determining the compositions and ages of the source units, ten sandstone samples were collected from the mappably distinct Ortaoba, Hodul, Kendirli and Orhanlar Units. Zircon grains were extracted from these sandstones and >1300 were dated by the U-Pb method and subsequently analysed for their Lu-Hf isotopic compositions by LA-MC-ICPMS at Goethe University, Frankfurt. The U-Pb-Hf isotope systematics are indicative of two different sediment provenances. The first, represented by the Ortaoba, Hodul and Kendirli Units, is dominated by igneous rocks of Triassic (250-220 Ma), Early Carboniferous-Early Permian (290-340 Ma) and Early to Mid-Devonian (385-400 Ma) ages. The second provenance, represented by the Orhanlar Unit, is indicative of derivation from a peri-Gondwanan terrane. In the case of the first provenance, the Devonian and Carboniferous source rocks exhibit intermediate εHf(t) values (-11 to -3), consistent with formation at a continental margin where juvenile mantle-derived magmas mixed with (recycled) old crust having Palaeoproterozoic Hf model ages. In contrast, the Triassic arc magmas exhibit higher εHf(t) values (-6 to +6), consistent with the mixing of juvenile mantle-derived melts with (recycled) old crust, perhaps somewhat rejuvenated during the Cadomian period. We have therefore identified a Triassic magmatic arc, as predicted by the interpretation of the Karakaya Complex as an accretionary complex related to northward subduction (Carboniferous and Devonian granites are already well documented in NW Turkey). Possible explanations for the lack of any outcrop of the source magmatic arc are that it was later subducted or that the Karakaya Complex was displaced laterally from its source arc (both post-220 Ma). Strike-slip displacement (driven by oblique subduction?) can also explain the presence of two different sandstone source areas, as indicated by the combined U-Pb-Hf isotope and supporting petrographic data. This study was supported by TUBITAK, Project no: 111R015.
Downhole seismic monitoring with Virtual Sources
NASA Astrophysics Data System (ADS)
Bakulin, A.; Calvert, R.
2005-12-01
Huge quantities of remaining oil and gas reserves are located in very challenging geological environments covered by salt, basalt or other complex overburdens. Conventional surface seismology struggles to deliver the images necessary to explore them economically. Even if those reserves are found by drilling, successful production critically depends on our ability to "see" in real time where fluids are drawn from and how pressure changes throughout the reservoirs. For relatively simple overburdens, surface time-lapse (4D) seismic monitoring has become the industry choice for areal reservoir surveillance. For complex overburdens, 4D seismic does not have enough resolution and repeatability to answer the questions of reservoir engineers. For instance, reservoir changes are often too small to be detected from the surface, or these changes occur at such a pace that all wells will be placed before we can detect them, which greatly reduces the economic impact. Two additional challenges are present in real life that further complicate active monitoring: first, near-surface conditions change between surveys (water level movement, freezing/thawing, tide variations, etc.), and second, repeating exactly the same acquisition geometry at the surface is difficult in practice. Both of these effects may lead to a false 4D response unrelated to reservoir changes. The Virtual Source method (VSM) has recently been proposed as a way to eliminate overburden distortions for imaging and monitoring. VSM acknowledges upfront that our data inversion techniques are unable to unravel the details of complex overburdens to the extent necessary to remove the distortions they cause. Therefore, VSM advocates placing permanent downhole geophones below the most complex overburden while still exciting signals with surface sources. For instance, first applications include drilling instrumented wells below a complicated near-surface, basalt or salt layer. Of course, in an ideal world we would prefer to have both downhole sources and receivers (e.g. in-situ 4D seismic), but for now VSM may be the most economical alternative. By performing data-driven redatuming with measured Green's functions, these data can be recast into a complete downhole dataset with buried Virtual Sources located at each downhole geophone. This step can effectively be thought of as a time reversal, and its remarkable feature is that a velocity model between sources and receivers is not required to perform it. We will show various applications of the VSM method to several synthetic and real time-lapse datasets to illustrate the following advantages: 1) the ability of VSM to eliminate overburden distortions without knowing the velocity model between surface sources and downhole receivers; 2) the greater quality of Virtual Sources in strongly scattering environments; 3) the beneficial downward-only radiation pattern of the Virtual Sources; 4) the ability to correct non-repeatability caused by slight changes in acquisition geometry and temporal changes in the near surface; and 5) the ability to create P-wave Virtual Sources without shear radiation and S-wave sources without P-waves. The versatility of VSM in handling 1D, 2D and 3D situations and its ability to handle overburdens of any complexity make it an indispensable tool for active geophysical monitoring in challenging geological environments. Although the examples presented all come from an oilfield, it is straightforward to envision analogous applications in many other fields, ranging from global geophysics to the monitoring of man-made structures.
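A minimal numerical sketch of the redatuming step described above: for each surface shot, the trace recorded at the virtual-source geophone is cross-correlated with the trace at a second downhole geophone, and the results are stacked over shots, which (up to a scale factor, and without the time-gating and other refinements used in practice) retrieves the traveltime between the two buried geophones. The geometry, wavelet and constant velocity used to build the synthetic traces are placeholders.

```python
import numpy as np

v, dt, nt = 2000.0, 0.002, 1500          # velocity (m/s), sample interval (s), samples
t = np.arange(nt) * dt

def ricker(t, t0, f0=25.0):
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Geometry: surface shots along z = 0; two downhole geophones, A (virtual source) and B
shots = np.column_stack([np.linspace(-2000.0, 2000.0, 161), np.zeros(161)])
A, B = np.array([0.0, 800.0]), np.array([200.0, 1000.0])

def trace(src, rec):
    return ricker(t, np.linalg.norm(rec - src) / v)    # direct arrival only

# Virtual source: correlate the A-trace with the B-trace and stack over all shots
xcorr = np.zeros(2 * nt - 1)
for s in shots:
    xcorr += np.correlate(trace(s, B), trace(s, A), mode="full")

lag = (np.argmax(xcorr) - (nt - 1)) * dt
print(round(lag, 3), round(np.linalg.norm(B - A) / v, 3))  # retrieved vs true A-to-B time
```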
Visualization, documentation, analysis, and communication of large scale gene regulatory networks
Longabaugh, William J.R.; Davidson, Eric H.; Bolouri, Hamid
2009-01-01
Summary Genetic regulatory networks (GRNs) are complex, large-scale, and spatially and temporally distributed. These characteristics impose challenging demands on computational GRN modeling tools, and there is a need for custom modeling tools. In this paper, we report on our ongoing development of BioTapestry, an open source, freely available computational tool designed specifically for GRN modeling. We also outline our future development plans, and give some examples of current applications of BioTapestry. PMID:18757046
Gas Diffusion in Fluids Containing Bubbles
NASA Technical Reports Server (NTRS)
Zak, M.; Weinberg, M. C.
1982-01-01
A mathematical model describes the movement of gases in a fluid containing many bubbles. The model makes it possible to predict the growth and shrinkage of bubbles as a function of time. The new model overcomes the complexities involved in analyzing varying conditions by making two simplifying assumptions: it treats the bubbles as point sources, and it employs an approximate expression for the gas concentration gradient at the liquid/bubble interface. In particular, it is expected to help in developing processes for the production of high-quality optical glasses in space.
In-vehicle group activity modeling and simulation in sensor-based virtual environment
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Telagamsetti, Durga; Poshtyar, Azin; Chan, Alex; Hu, Shuowen
2016-05-01
Human group activity recognition is a very complex and challenging task, especially for Partially Observable Group Activities (POGA) that occur in confined spaces with limited visual observability and often under severe occlusion. In this paper, we present the IRIS Virtual Environment Simulation Model (VESM) for the modeling and simulation of dynamic POGA. More specifically, we address sensor-based modeling and simulation of a specific category of POGA, called In-Vehicle Group Activities (IVGA). In VESM, human-like animated characters, called humanoids, are employed to simulate complex in-vehicle group activities within the confined space of a modeled vehicle. Each articulated humanoid is kinematically modeled with comparable physical attributes and appearances that are linkable to its human counterpart. Each humanoid exhibits harmonious full-body motion, simulating human-like gestures and postures, facial expressions, and hand motions for coordinated dexterity. VESM facilitates the creation of interactive scenarios consisting of multiple humanoids with different personalities and intentions, which are capable of performing complicated human activities within the confined space inside a typical vehicle. In this paper, we demonstrate the efficiency and effectiveness of VESM in terms of its capability to seamlessly generate time-synchronized, multi-source, and correlated imagery datasets of IVGA, which are useful for the training and testing of multi-source full-motion video processing and annotation. Furthermore, we demonstrate full-motion video processing of such simulated scenarios under different operational contextual constraints.
Development and application of computational fluid dynamics (CFD) simulations are being advanced through case studies for simulating air pollutant concentrations from sources within open fields and within complex urban building environments. CFD applications have been under deve...
A dynamic nitrogen budget model of a Pacific Northwest salt marsh
The role of salt marshes as either nitrogen sinks or sources in relation to their adjacent estuaries has been a focus of ecosystem service research for many decades. The complex hydrology of these systems is driven by tides, upland surface runoff, precipitation, evapotranspirati...
THE TOP 10 SPITZER YOUNG STELLAR OBJECTS IN 30 DORADUS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walborn, Nolan R.; Barba, Rodolfo H.; Sewilo, Marta M., E-mail: walborn@stsci.edu, E-mail: rbarba@dfuls.cl, E-mail: mmsewilo@pha.jhu.edu
2013-04-15
The most luminous Spitzer point sources in the 30 Doradus triggered second generation are investigated coherently in the 3-8 μm region. Remarkable diversity and complexity in their natures are revealed. Some are also among the brightest JHK sources, while others are not. Several of them are multiple when examined at higher angular resolutions with Hubble Space Telescope NICMOS and WFPC2/WFC3 as available, or with VISTA/VMC otherwise. One is a dusty compact H II region near the far northwestern edge of the complex, containing a half-dozen bright I-band sources. Three others appear closely associated with luminous WN stars and causal connections are suggested. Some are in the heads of dust pillars oriented toward R136, as previously discussed from the NICMOS data. One resides in a compact cluster of much fainter sources, while another appears monolithic at the highest resolutions. Surprisingly, one is the brighter of the two extended "mystery spots" associated with Knot 2 of Walborn et al. Masses are derived from young stellar object models for unresolved sources and lie in the 10-30 M_Sun range. Further analysis of the IR sources in this unique region will advance understanding of triggered massive star formation, perhaps in some unexpected and unprecedented ways.
Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array.
Zhang, Yankui; Ba, Bin; Wang, Daming; Geng, Wei; Xu, Haiyun
2018-05-08
Direct position determination (DPD) is currently a hot topic in wireless localization research as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have insufficient degrees of freedom and limited estimation accuracy. To improve the DPD accuracy, this paper introduces a coprime array to the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and combine the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramér-Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but can also improve the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
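The single-versus-double source choice described above can be illustrated with a small AIC comparison, assuming Gaussian waveform misfits; the synthetic residual vectors and parameter counts below are placeholders for illustration, not values from the study.

```python
import numpy as np

def aic(residuals, n_params):
    """AIC for a least-squares fit, assuming Gaussian waveform misfits."""
    n = residuals.size
    rss = float(np.sum(residuals ** 2))
    return n * np.log(rss / n) + 2 * n_params

# Illustrative only: synthetic misfit vectors standing in for observed-minus-
# predicted W-phase waveforms from the two inversions.
rng = np.random.default_rng(0)
resid_single = rng.normal(scale=1.0, size=500)   # single point source
resid_double = rng.normal(scale=0.8, size=500)   # double point source fits better here

aic_single = aic(resid_single, n_params=6)       # e.g. one moment tensor + centroid time
aic_double = aic(resid_double, n_params=12)      # two point sources -> roughly twice the parameters
preferred = "double" if aic_double < aic_single else "single"
print(preferred, aic_single, aic_double)
```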
Open Source GIS based integrated watershed management
NASA Astrophysics Data System (ADS)
Byrne, J. M.; Lindsay, J.; Berg, A. A.
2013-12-01
Optimal land and water management to address future and current resource stresses and allocation challenges requires the development of state-of-the-art geomatics and hydrological modelling tools. Future hydrological modelling tools should be high-resolution, process-based, with real-time capability to assess changing resource issues critical to short-, medium- and long-term environmental management. The objective here is to merge two renowned, well published resource modeling programs to create an open-source toolbox for integrated land and water management applications. This work will facilitate a much increased efficiency in land and water resource security, management and planning. Following an 'open-source' philosophy, the tools will be computer platform independent with source code freely available, maximizing knowledge transfer and the global value of the proposed research. The envisioned set of water resource management tools will be housed within 'Whitebox Geospatial Analysis Tools'. Whitebox is an open-source geographical information system (GIS) developed by Dr. John Lindsay at the University of Guelph. The emphasis of the Whitebox project has been to develop a user-friendly interface for advanced spatial analysis in environmental applications. The plugin architecture of the software is ideal for the tight integration of spatially distributed models and spatial analysis algorithms such as those contained within the GENESYS suite. Open-source development extends knowledge and technology transfer to a broad range of end-users and builds Canadian capability to address complex resource management problems with better tools and expertise for managers in Canada and around the world. GENESYS (Generate Earth Systems Science input) is an innovative, efficient, high-resolution hydro- and agro-meteorological model for complex terrain watersheds developed under the direction of Dr. James Byrne. GENESYS is an outstanding research and applications tool to address challenging resource management issues in industry, government and nongovernmental agencies. Current research and analysis tools were developed to manage meteorological, climatological, and land and water resource data efficiently at high resolution in space and time. The deliverable for this work is a Whitebox-GENESYS open-source resource management capacity with routines for GIS-based watershed management, including water in agriculture and food production. We are adding urban water management routines through GENESYS in 2013-15 with an engineering PhD candidate. Both Whitebox-GAT and GENESYS are already well-established tools. The proposed research will combine these products to create an open-source, geomatics-based water resource management tool that is revolutionary in both capacity and availability to a wide array of Canadian and global users.
Italian Case Studies Modelling Complex Earthquake Sources In PSHA
NASA Astrophysics Data System (ADS)
Gee, Robin; Peruzza, Laura; Pagani, Marco
2017-04-01
This study presents two examples of modelling complex seismic sources in Italy, done in the framework of regional probabilistic seismic hazard assessment (PSHA). The first case study is for an area centred around Collalto Stoccaggio, a natural gas storage facility in Northern Italy, located within a system of potentially seismogenic thrust faults in the Venetian Plain. The storage exploits a depleted natural gas reservoir located within an actively growing anticline, which is likely driven by the Montello Fault, the underlying blind thrust. This fault has been well identified by microseismic activity (M<2) detected by a local seismometric network installed in 2012 (http://rete-collalto.crs.inogs.it/). At this time, no correlation can be identified between the gas storage activity and local seismicity, so we proceed with a PSHA that considers only natural seismicity, where the rates of earthquakes are assumed to be time-independent. The source model consists of faults and distributed seismicity to consider earthquakes that cannot be associated to specific structures. All potentially active faults within 50 km of the site are considered, and are modelled as 3D listric surfaces, consistent with the proposed geometry of the Montello Fault. Slip rates are constrained using available geological, geophysical and seismological information. We explore the sensitivity of the hazard results to various parameters affected by epistemic uncertainty, such as ground motion prediction equations with different rupture-to-site distance metrics, fault geometry, and maximum magnitude. The second case is an innovative study, where we perform aftershock probabilistic seismic hazard assessment (APSHA) in Central Italy, following the Amatrice M6.1 earthquake of August 24th, 2016 (298 casualties) and the subsequent earthquakes of Oct 26th and 30th (M6.1 and M6.6 respectively, no deaths). The aftershock hazard is modelled using a fault source with complex geometry, based on literature data and field evidence associated with the August mainshock. Earthquake activity rates during the very first weeks after the deadly earthquake were used to calibrate an Omori-Utsu decay curve, and the magnitude distribution of aftershocks is assumed to follow a Gutenberg-Richter distribution. We apply uniform and non-uniform spatial distributions of the seismicity across the fault source, by modulating the rates as a decreasing function of distance from the mainshock. The hazard results are computed for short exposure periods (1 month, before the occurrence of the October earthquakes) and compared to the background hazard given by law (MPS04), and to observations at some reference sites. We also show the results of disaggregation computed for the city of Amatrice. Finally, we attempt to update the results in light of the new "main" events that occurred afterwards in the region. All source modeling and hazard calculations are performed using the OpenQuake engine. We discuss the novelties of these works, and the benefits and limitations of both analyses, particularly in such different contexts of seismic hazard.
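As a rough illustration of the APSHA ingredients mentioned above, the sketch below combines an Omori-Utsu rate decay with a truncated Gutenberg-Richter magnitude distribution to obtain expected event counts over a one-month exposure window; all parameter values are hypothetical placeholders, not the study's calibration.

```python
import numpy as np

def omori_utsu_rate(t_days, K, c, p):
    """Aftershock rate (events/day) at time t after the mainshock (Omori-Utsu law)."""
    return K / (t_days + c) ** p

def gr_magnitude_pmf(mags, b, m_min, m_max):
    """Truncated Gutenberg-Richter probability mass over a magnitude grid."""
    weights = 10.0 ** (-b * (mags - m_min))
    weights[(mags < m_min) | (mags > m_max)] = 0.0
    return weights / weights.sum()

# Hypothetical parameter values, for illustration only (not the study's fit)
t = np.arange(1, 31)                       # 1-month exposure window, daily steps
daily_rate = omori_utsu_rate(t, K=120.0, c=0.05, p=1.1)
mags = np.arange(4.0, 6.6, 0.1)
pmf = gr_magnitude_pmf(mags, b=1.0, m_min=4.0, m_max=6.5)
expected_counts = daily_rate.sum() * pmf   # expected events per magnitude bin over the month
```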
DigitalHuman (DH): An Integrative Mathematical Model of Human Physiology
NASA Technical Reports Server (NTRS)
Hester, Robert L.; Summers, Richard L.; Iliescu, Radu; Esters, Joyee; Coleman, Thomas G.
2010-01-01
Mathematical models and simulation are important tools in discovering the key causal relationships governing physiological processes and improving medical intervention when physiological complexity is a central issue. We have developed a model of integrative human physiology called DigitalHuman (DH), consisting of ~5000 variables modeling human physiology, describing cardiovascular, renal, respiratory, endocrine, neural and metabolic physiology. Users can view time-dependent solutions and interactively introduce perturbations by altering numerical parameters to investigate new hypotheses. The variables, parameters and quantitative relationships as well as all other model details are described in XML text files. All aspects of the model, including the mathematical equations describing the physiological processes, are written in open-source, text-readable XML files. Model structure is based upon empirical data of physiological responses documented within the peer-reviewed literature. The model can be used to understand proposed physiological mechanisms and physiological interactions that may not be otherwise intuitively evident. Some of the current uses of this model include the analyses of renal control of blood pressure, the central role of the liver in creating and maintaining insulin resistance, and the mechanisms causing orthostatic hypotension in astronauts. Additionally, the open-source aspect of the modeling environment allows any investigator to add detailed descriptions of human physiology to test new concepts. The model accurately predicts both qualitative and, more importantly, quantitative changes in clinically and experimentally observed responses. DigitalHuman provides scientists a modeling environment to understand the complex interactions of integrative physiology. This research was supported by NIH HL 51971, NSF EPSCoR, and NASA.
Behrendt, John C.; Finn, C.; Morse, D.L.; Blankenship, D.D.
2006-01-01
Mt. Resnik is one of the previously reported 18 subaerially erupted volcanoes (in the West Antarctic rift system), which have high elevation and high bed relief beneath the WAIS in the Central West Antarctica (CWA) aerogeophysical survey. Mt. Resnik lies 300 m below the surface of the West Antarctic Ice Sheet (WAIS); it has 1.6 km topographic relief, and a conical form defined by radar ice-sounding of bed topography. It has an associated complex negative magnetic anomaly revealed by the CWA survey. We calculated and interpreted magnetic models fit to the Mt. Resnik anomaly as a volcanic source comprising both reversely and normally magnetized (in the present field direction) volcanic flows, 0.5-2.5-km thick, erupted subaerially during a time of magnetic field reversal. The Mt. Resnik 305-nT anomaly is part of an approximately 50- by 40-km positive anomaly complex extending about 30 km to the west of the Mt. Resnik peak, associated with an underlying source complex of about the same area, whose top is at the bed of the WAIS. The bed relief of this shallow source complex has a maximum of only about 400 m, whereas the modeled source is >3 km thick. From the spatial relationship we interpret that this source and Mt. Resnik are approximately contemporaneous. Any subglacially (older?) erupted edifices comprising hyaloclastite or other volcanic debris, which formerly overlaid the source to the west, were removed by the moving WAIS into which they were injected, as is the general case for the ~1000 volcanic centers at the base of the WAIS. The presence of the magnetic field reversal modeled for Mt. Resnik may represent the Brunhes-Matuyama reversal at 780 ka (or an earlier reversal). There are ~100 short-wavelength, steep-gradient, negative magnetic anomalies observed over the West Antarctic Ice Sheet (WAIS), or about 10% of the approximately 1000 short-wavelength, shallow-source, high-amplitude (50 to >1000 nT) "volcanic" magnetic anomalies in the CWA survey. These negative anomalies indicate volcanic activity during a period of magnetic reversal and therefore must also be at least 780 ka. The spatial extent and volume of volcanism can now be reassessed for the 1.2 × 10^6 km^2 region of the WAIS characterized by magnetic anomalies defining interpreted volcanic centers associated with the West Antarctic rift system. The CWA covers an area of 3.54 × 10^5 km^2; forty-four percent of that area exhibits short-wavelength, high-amplitude anomalies indicative of volcanic centers and subvolcanic intrusions. This equates to an area of 0.51 × 10^5 km^2 and a volume of 10^6 km^3 beneath the ice-covered West Antarctic rift system, of sufficient extent to be classified as a large igneous province interpreted to be of Oligocene to recent age.
Hartin, Corinne A.; Patel, Pralit L.; Schwarber, Adria; ...
2015-04-01
Simple climate models play an integral role in the policy and scientific communities. They are used for climate mitigation scenarios within integrated assessment models, complex climate model emulation, and uncertainty analyses. Here we describe Hector v1.0, an open source, object-oriented, simple global climate carbon-cycle model. This model runs essentially instantaneously while still representing the most critical global-scale earth system processes. Hector has a three-part main carbon cycle: a one-pool atmosphere, land, and ocean. The model's terrestrial carbon cycle includes primary production and respiration fluxes, accommodating arbitrary geographic divisions into, e.g., ecological biomes or political units. Hector actively solves the inorganic carbon system in the surface ocean, directly calculating air-sea fluxes of carbon and ocean pH. Hector reproduces the global historical trends of atmospheric [CO2], radiative forcing, and surface temperatures. The model simulates all four Representative Concentration Pathways (RCPs) with equivalent rates of change of key variables over time compared to current observations, MAGICC (a well-known simple climate model), and models from the 5th Coupled Model Intercomparison Project. Hector's flexibility, open-source nature, and modular design will facilitate a broad range of research in various areas.
NASA Astrophysics Data System (ADS)
Adar, E. M.; Rosenthal, E.; Issar, A. S.; Batelaan, O.
1992-08-01
This paper demonstrates the implementation of a novel mathematical model to quantify subsurface inflows from various sources into the arid alluvial basin of the southern Arava Valley divided between Israel and Jordan. The model is based on the spatial distribution of environmental tracers and is intended for use on basins with complex hydrogeological structure and/or with scarce physical hydrologic information. However, a sufficiently large number of wells and springs is required to allow water sampling for chemical and isotopic analyses. Environmental tracers are used in a multivariable cluster analysis to define potential sources of recharge, and also to delimit homogeneous mixing compartments within the modeled aquifer. Six mixing cells were identified based on 13 constituents. A quantitative assessment of 11 significant subsurface inflows was obtained. Results revealed that the total recharge into the southern Arava basin is around 12.52 × 10^6 m^3 year^-1. The major source of inflow into the alluvial aquifer is from the Nubian sandstone aquifer, which comprises 65-75% of the total recharge. Only 19-24% of the recharge, but the most important source of fresh water, originates over the eastern Jordanian mountains and alluvial fans.
NASA Astrophysics Data System (ADS)
Weatherill, Graeme; Burton, Paul W.
2010-09-01
The Aegean is the most seismically active and tectonically complex region in Europe. Damaging earthquakes have occurred here throughout recorded history, often resulting in considerable loss of life. The Monte Carlo method of probabilistic seismic hazard analysis (PSHA) is used to determine the level of ground motion likely to be exceeded in a given time period. Multiple random simulations of seismicity are generated to calculate, directly, the ground motion for a given site. Within the seismic hazard analysis we explore the impact of different seismic source models, incorporating both uniform zones and distributed seismicity. A new, simplified, seismic source model, derived from seismotectonic interpretation, is presented for the Aegean region. This is combined into the epistemic uncertainty analysis alongside existing source models for the region, and models derived by a K-means cluster analysis approach. Seismic source models derived using the K-means approach offer a degree of objectivity and reproducibility into the otherwise subjective approach of delineating seismic sources using expert judgment. Similar review and analysis is undertaken for the selection of peak ground acceleration (PGA) attenuation models, incorporating into the epistemic analysis Greek-specific models, European models and a Next Generation Attenuation model. Hazard maps for PGA on a "rock" site with a 10% probability of being exceeded in 50 years are produced and different source and attenuation models are compared. These indicate that Greek-specific attenuation models, with their smaller aleatory variability terms, produce lower PGA hazard, whilst recent European models and the Next Generation Attenuation (NGA) model produce similar results. The Monte Carlo method is extended further to assimilate epistemic uncertainty into the hazard calculation, thus integrating across several appropriate source and PGA attenuation models. Site condition and fault type are also integrated into the hazard mapping calculations. These hazard maps are in general agreement with previous maps for the Aegean, recognising the highest hazard in the Ionian Islands, Gulf of Corinth and Hellenic Arc. Peak Ground Accelerations for some sites in these regions reach as high as 500-600 cm s^-2 using European/NGA attenuation models, and 400-500 cm s^-2 using Greek attenuation models.
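The Monte Carlo hazard recipe described above can be sketched as follows: simulate annual seismicity, attenuate each event to the site with a ground-motion relation that includes aleatory scatter, and read the required exceedance level off the distribution of annual maxima. The GMPE coefficients, activity rate, and distance sampling below are placeholders, not the models used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_max_pga(n_years, rate_per_yr=5.0, b=1.0, m_min=4.5, m_max=7.5,
                            site_dist_km=(10.0, 150.0)):
    """Monte Carlo hazard sketch: Poisson event counts, truncated G-R magnitudes,
    random source-site distances, and a toy GMPE with lognormal scatter.
    The GMPE coefficients here are placeholders, not a published model."""
    annual_max = np.zeros(n_years)
    for y in range(n_years):
        n_ev = rng.poisson(rate_per_yr)
        if n_ev == 0:
            continue
        u = rng.random(n_ev)   # inverse-CDF sampling of truncated G-R magnitudes
        m = m_min - np.log10(1 - u * (1 - 10 ** (-b * (m_max - m_min)))) / b
        r = rng.uniform(*site_dist_km, n_ev)
        ln_pga = -1.0 + 1.2 * m - 1.3 * np.log(r + 10.0) + rng.normal(0.0, 0.6, n_ev)
        annual_max[y] = np.exp(ln_pga).max()      # PGA in arbitrary (toy) units
    return annual_max

pga = simulate_annual_max_pga(100_000)
pga_475yr = np.quantile(pga, 1 - 1 / 475)   # level with ~10% exceedance in 50 years
```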
Toxic metals in Venice lagoon sediments: Model, observation, and possible removal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basu, A.; Molinaroli, E.
1994-11-01
We have modeled the distribution of nine toxic metals in the surface sediments from 163 stations in the Venice lagoon using published data. Three entrances from the Adriatic Sea control the circulation in the lagoon and divide it into three basins. We assume, for purposes of modeling, that Porto Marghera at the head of the Industrial Zone area is the single source of toxic metals in the Venice lagoon. In a standing body of lagoon water, the concentration of pollutants at distance x from the source may be given by C = C_0 e^(-kx), where C_0 is the concentration at the source and k is the rate constant of dispersal. We calculated k empirically using concentrations at the source and those farthest from it, that is, at the end points of the lagoon. Average k values (ppm/km) in the lagoon are: Zn 0.165, Cd 0.116, Hg 0.110, Cu 0.105, Co 0.072, Pb 0.058, Ni 0.008, Cr (0.011) and Fe (0.018 percent/km), and they have complex distributions. Given the k values, the concentration at the source (C_0), and the distance x of any point in the lagoon from the source, we have calculated the model concentrations of the nine metals at each sampling station. Tides, currents, floor morphology, additional sources, and continued dumping perturb model distributions, causing anomalies (observed minus model concentrations). Positive anomalies are found near the source, where continued dumping perturbs initial boundary conditions, and in areas of sluggish circulation. Negative anomalies are found in areas with strong currents that may flush sediments out of the lagoon. We have thus identified areas in the lagoon where higher rates of sediment removal and exchange may lessen pollution. 41 refs., 4 figs., 3 tabs.
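A minimal sketch of applying the exponential dispersal model and computing anomalies at sampling stations is shown below; the source concentration, station distances, and observed values are invented for illustration, and the reported k value is treated here simply as a decay constant per kilometre.

```python
import numpy as np

def model_concentration(c0, k, x_km):
    """Exponential dispersal model: C(x) = C0 * exp(-k * x)."""
    return c0 * np.exp(-k * np.asarray(x_km))

# Hypothetical values for illustration (not the paper's measurements):
c0_zn = 400.0                     # assumed Zn concentration (ppm) at the Porto Marghera source
k_zn = 0.165                      # dispersal rate constant reported for Zn, used here as 1/km
stations_km = np.array([1.0, 5.0, 12.0, 20.0])
observed = np.array([390.0, 310.0, 160.0, 30.0])

modeled = model_concentration(c0_zn, k_zn, stations_km)
anomaly = observed - modeled      # positive near sources/sluggish areas, negative where flushed
```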
Rodriguez-Falces, Javier
2015-03-01
A concept of major importance in human electrophysiology studies is the process by which activation of an excitable cell results in a rapid rise and fall of the electrical membrane potential, the so-called action potential. Hodgkin and Huxley proposed a model to explain the ionic mechanisms underlying the formation of action potentials. However, this model is unsuitably complex for teaching purposes. In addition, the Hodgkin and Huxley approach describes the shape of the action potential only in terms of ionic currents, i.e., it is unable to explain the electrical significance of the action potential or describe the electrical field arising from this source using basic concepts of electromagnetic theory. The goal of the present report was to propose a new model to describe the electrical behaviour of the action potential in terms of elementary electrical sources (in particular, dipoles). The efficacy of this model was tested through a closed-book written exam. The proposed model increased the ability of students to appreciate the distributed character of the action potential and also to recognize that this source spreads out along the fiber as a function of space. In addition, the new approach allowed students to realize that the amplitude and sign of the extracellular electrical potential arising from the action potential are determined by the spatial derivative of this intracellular source. The proposed model, which incorporates intuitive graphical representations, has improved students' understanding of the electrical potentials generated by bioelectrical sources and has heightened their interest in bioelectricity. Copyright © 2015 The American Physiological Society.
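To make the dipole description concrete, the sketch below sums the potentials of axial dipoles whose strength follows the spatial derivative of an idealized intracellular action potential; the waveform, geometry, and lumped scale factor are illustrative assumptions rather than the article's teaching material.

```python
import numpy as np

z = np.linspace(-20.0, 20.0, 2000)                  # position along the fiber (mm)
dz = z[1] - z[0]
vm = -80 + 110 * np.exp(-((z + 5.0) ** 2) / 3.0)    # idealized intracellular AP profile (mV)
dipole_density = np.gradient(vm, z)                 # dipole moment per unit length ~ dVm/dz

def extracellular_potential(z_e, radial_mm=1.0):
    """Potential at an electrode (z_e, radial_mm) as the sum of axial dipoles;
    the conductivity/geometry scale factor is lumped into arbitrary units."""
    dx = z_e - z
    r = np.sqrt(dx ** 2 + radial_mm ** 2)
    return np.sum(dipole_density * dx / r ** 3) * dz

# Biphasic/triphasic extracellular waveform sampled along the fiber
phi = np.array([extracellular_potential(ze) for ze in np.linspace(-15, 15, 200)])
```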
Fabian, P; Adamkiewicz, G; Levy, J I
2012-02-01
Residents of low-income multifamily housing can have elevated exposures to multiple environmental pollutants known to influence asthma. Simulation models can characterize the health implications of changing indoor concentrations, but quantifying the influence of interventions on concentrations is challenging given complex airflow and source characteristics. In this study, we simulated concentrations in a prototype multifamily building using CONTAM, a multizone airflow and contaminant transport program. Contaminants modeled included PM2.5 and NO2, and parameters included stove use, presence and operability of exhaust fans, smoking, unit level, and building leakiness. We developed regression models to explain variability in CONTAM outputs for individual sources, in a manner that could be utilized in simulation modeling of health outcomes. To evaluate our models, we generated a database of 1000 simulated households with characteristics consistent with Boston public housing developments and residents and compared the predicted levels of NO2 and PM2.5 and their correlates with the literature. Our analyses demonstrated that CONTAM outputs could be readily explained by available parameters (R^2 between 0.89 and 0.98 across models), but that one-compartment box models would mischaracterize concentrations and source contributions. Our study quantifies the key drivers for indoor concentrations in multifamily housing and helps to identify opportunities for interventions. Many low-income urban asthmatics live in multifamily housing that may be amenable to ventilation-related interventions such as weatherization or air sealing, wall and ceiling hole repairs, and exhaust fan installation or repair, but such interventions must be designed carefully given their cost and their offsetting effects on energy savings as well as indoor and outdoor pollutants. We developed models to take into account the complex behavior of airflow patterns in multifamily buildings, which can be used to identify and evaluate environmental and non-environmental interventions targeting indoor air pollutants which can trigger asthma exacerbations. © 2011 John Wiley & Sons A/S.
Comparison of actual and seismologically inferred stress drops in dynamic models of microseismicity
NASA Astrophysics Data System (ADS)
Lin, Y. Y.; Lapusta, N.
2017-12-01
Estimating source parameters for small earthquakes is commonly based on either Brune or Madariaga source models. These models assume circular rupture that starts from the center of a fault and spreads axisymmetrically with a constant rupture speed. The resulting stress drops are moment-independent, with large scatter. However, more complex source behaviors are commonly discovered by finite-fault inversions for both large and small earthquakes, including directivity, heterogeneous slip, and non-circular shapes. Recent studies (Noda, Lapusta, and Kanamori, GJI, 2013; Kaneko and Shearer, GJI, 2014; JGR, 2015) have shown that slip heterogeneity and directivity can result in large discrepancies between the actual and estimated stress drops. We explore the relation between the actual and seismologically estimated stress drops for several types of numerically produced microearthquakes. For example, an asperity-type circular fault patch with increasing normal stress towards the middle of the patch, surrounded by a creeping region, is a potentially common microseismicity source. In such models, a number of events rupture the portion of the patch near its circumference, producing ring-like ruptures, before a patch-spanning event occurs. We calculate the far-field synthetic waveforms for our simulated sources and estimate their spectral properties. The distribution of corner frequencies over the focal sphere is markedly different for the ring-like sources compared to the Madariaga model. Furthermore, most waveforms for the ring-like sources are better fitted by a high-frequency fall-off rate different from the commonly assumed value of 2 (from the so-called omega-squared model), with the average value over the focal sphere being 1.5. The application of Brune- or Madariaga-type analysis to these sources results in stress drop estimates that differ from the actual stress drops by a factor of up to 125 in the models we considered. We will report on our current studies of other types of seismic sources, such as repeating earthquakes and foreshock-like events, and whether the potentially realistic and common sources different from the standard Brune and Madariaga models can be identified from their focal spectral signatures and studied using a more tailored seismological analysis.
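For reference, the Brune/Madariaga-style estimate that such analyses rely on can be written in a few lines; the moment, corner frequency, shear-wave speed, and k coefficient below are illustrative values, not properties of the simulated sources.

```python
import numpy as np

def source_radius(fc_hz, beta_m_s, k=0.37):
    """Source radius from corner frequency, a = k * beta / fc.
    k = 0.37 corresponds roughly to the Brune model; Madariaga-type
    analyses use different k values."""
    return k * beta_m_s / fc_hz

def stress_drop(m0_nm, radius_m):
    """Circular-crack stress drop: delta_sigma = (7/16) * M0 / a^3."""
    return 7.0 / 16.0 * m0_nm / radius_m ** 3

# Illustrative numbers only: a small microearthquake
m0 = 2.0e11                      # seismic moment (N*m)
fc = 20.0                        # corner frequency (Hz)
beta = 3500.0                    # shear-wave speed (m/s)
a = source_radius(fc, beta)
print(stress_drop(m0, a) / 1e6, "MPa")
```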
Respiratory health effects of air pollution: update on biomass smoke and traffic pollution.
Laumbach, Robert J; Kipen, Howard M
2012-01-01
Mounting evidence suggests that air pollution contributes to the large global burden of respiratory and allergic diseases, including asthma, chronic obstructive pulmonary disease, pneumonia, and possibly tuberculosis. Although associations between air pollution and respiratory disease are complex, recent epidemiologic studies have led to an increased recognition of the emerging importance of traffic-related air pollution in both developed and less-developed countries, as well as the continued importance of emissions from domestic fires burning biomass fuels, primarily in the less-developed world. Emissions from these sources lead to personal exposures to complex mixtures of air pollutants that change rapidly in space and time because of varying emission rates, distances from source, ventilation rates, and other factors. Although the high degree of variability in personal exposure to pollutants from these sources remains a challenge, newer methods for measuring and modeling these exposures are beginning to unravel complex associations with asthma and other respiratory tract diseases. These studies indicate that air pollution from these sources is a major preventable cause of increased incidence and exacerbation of respiratory disease. Physicians can help to reduce the risk of adverse respiratory effects of exposure to biomass and traffic air pollutants by promoting awareness and supporting individual and community-level interventions. Copyright © 2012 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.
Lorenz, D.L.; Stark, J.R.
1990-01-01
Model simulations also indicated that drawdown caused by pumping two wells, each pumping at 75 gallons per minute and located about 1 mile southeast of the source of contamination, would be effective in controlling movement and volume of contaminated ground water in the immediate area of the source of contamination. Some contamination may already have moved beyond the influence of these wells, however, because of a complex set of hydraulic conditions.
Empirical tests of Zipf's law mechanism in open source Linux distribution.
Maillart, T; Sornette, D; Spaeth, S; von Krogh, G
2008-11-21
Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
2012-09-01
State Award Nos. DE-AC52-07NA27344/24.2.3.2 and DOS_SIAA-11-AVC/NMA-1 ABSTRACT The Middle East is a tectonically complex and seismically...active region. The ability to accurately locate earthquakes and other seismic events in this region is complicated by tectonics, the uneven...and seismic source parameters show that this activity comes from tectonic events. This work is informed by continuous or event-based regional
Application of Δ- and Λ-Isomerism of Octahedral Metal Complexes for Inducing Chiral Nematic Phases
Sato, Hisako; Yamagishi, Akihiko
2009-01-01
The Δ- and Λ-isomerism of octahedral metal complexes is employed as a source of chirality for inducing chiral nematic phases. By applying a wide range of chiral metal complexes as a dopant, it has been found that tris(β-diketonato)metal(III) complexes exhibit an extremely high value of helical twisting power. The mechanism of induction of the chiral nematic phase is postulated on the basis of a surface chirality model. The strategy for designing an efficient dopant is described, together with the results using a number of examples of Co(III), Cr(III) and Ru(III) complexes with C2 symmetry. The development of photo-responsive dopants to achieve the photo-induced structural change of liquid crystal by use of photo-isomerization of chiral metal complexes is also described. PMID:20057959
NASA Astrophysics Data System (ADS)
Cécé, Raphaël; Bernard, Didier; Brioude, Jérome; Zahibo, Narcisse
2016-08-01
Tropical islands are characterized by thermal and orographic forcings which may generate microscale air mass circulations. The Lesser Antilles Arc includes small tropical islands (less than 50 km wide) where a total of one-and-a-half million people live. Air quality over this region is affected by anthropogenic and volcanic emissions and by Saharan dust. To reduce risks to population health, the atmospheric dispersion of emitted pollutants must be predicted. In this study, the dispersion of anthropogenic nitrogen oxides (NOx) is numerically modelled over the densely populated area of the Guadeloupe archipelago under weak trade winds, during a typical case of severe pollution. The main goal is to analyze how microscale resolutions affect air pollution over a small tropical island. Three domain grid resolutions are selected: 1 km, 333 m and 111 m. The Weather Research and Forecasting model (WRF) is used to produce real nested microscale meteorological fields. The weather outputs then initialize the Lagrangian Particle Dispersion Model (FLEXPART). The forward simulations of a power plant plume showed good ability to reproduce nocturnal peaks recorded by an urban air quality station. The increase in resolution resulted in an improvement of model sensitivity. The nesting to subkilometer grids helped to reduce an overestimation bias, mainly because the LES domains better simulate the turbulent motions governing nocturnal flows. For peaks observed at two air quality stations, the backward sensitivity outputs identified realistic sources of NOx in the area. The increase in resolution produced a sharper inverse plume with a more accurate source area. This study is the first application of the FLEXPART-WRF model at microscale resolutions. Overall, the coupled WRF-LES-FLEXPART model is useful for simulating pollutant dispersion during a real case of calm wind regime over a complex terrain area. The forward and backward simulation results showed clearly that the subkilometer resolution of 333 m is necessary to reproduce realistic air pollution patterns in this case of short-range transport over complex terrain. More broadly, this work contributes to enriching the sparsely documented domain of real nested microscale air pollution modelling. This study, dealing with the determination of the proper grid resolution and turbulence scheme, is of significant interest to the near-source and complex terrain air quality research community.
Wave field synthesis of moving virtual sound sources with complex radiation properties.
Ahrens, Jens; Spors, Sascha
2011-11-01
An approach to the synthesis of moving virtual sound sources with complex radiation properties in wave field synthesis is presented. The approach exploits the fact that any stationary sound source of finite spatial extent radiates spherical waves at sufficient distance. The angular dependency of the radiation properties of the source under consideration is reflected by the amplitude and phase distribution on the spherical wave fronts. The sound field emitted by a uniformly moving monopole source is derived and the far-field radiation properties of the complex virtual source under consideration are incorporated in order to derive a closed-form expression for the loudspeaker driving signal. The results are illustrated via numerical simulations of the synthesis of the sound field of a sample moving complex virtual source.
Multiple-predators-based capture process on complex networks
NASA Astrophysics Data System (ADS)
Ramiz Sharafat, Rajput; Pu, Cunlai; Li, Jie; Chen, Rongbin; Xu, Zhongqi
2017-03-01
The predator/prey (capture) problem is a prototype of many network-related applications. We study the capture process on complex networks by considering multiple predators from multiple sources. In our model, some lions start from multiple sources simultaneously to capture the lamb by biased random walks, which are controlled with a free parameter α. We derive the distribution of the lamb's lifetime and the expected lifetime ⟨T⟩. Through simulation, we find that the expected lifetime drops substantially with the increasing number of lions. We also study how the underlying topological structure affects the capture process, and find that the lamb survives longer when located on small-degree nodes than on large-degree nodes. Moreover, dense or homogeneous network structures work against the survival of the lamb.
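A minimal simulation of this capture process might look like the sketch below; the degree-biased step rule with exponent alpha is an assumed form of the biased walk, and the graph model, predator count, and parameter values are placeholders rather than the paper's setup.

```python
import random
import networkx as nx

def capture_time(G, lamb, lions, alpha, rng, max_steps=5000):
    """Simulate multiple lions doing degree-biased random walks until one
    reaches the (static) lamb node; returns the lamb's lifetime in steps.

    Assumed step rule: a lion at node u moves to neighbor v with
    probability proportional to deg(v)**alpha.
    """
    positions = list(lions)
    for t in range(1, max_steps + 1):
        for i, u in enumerate(positions):
            nbrs = list(G.neighbors(u))
            weights = [G.degree(v) ** alpha for v in nbrs]
            positions[i] = rng.choices(nbrs, weights=weights, k=1)[0]
            if positions[i] == lamb:
                return t
    return max_steps

rng = random.Random(1)
G = nx.barabasi_albert_graph(1000, 3)
lifetimes = [capture_time(G, lamb=0, lions=rng.sample(range(1, 1000), 5),
                          alpha=1.0, rng=rng) for _ in range(20)]
print(sum(lifetimes) / len(lifetimes))   # crude estimate of the expected lifetime <T>
```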
NASA Astrophysics Data System (ADS)
Gao, Jing; Burt, James E.
2017-12-01
This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
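One way to realize such a per-pixel decomposition is to bootstrap the training data, refit the model, and split the squared error into bias and variance components for each test pixel, as sketched below; the use of scikit-learn regression trees and the bootstrap ensemble size are illustrative choices, not the paper's exact protocol.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.utils import resample

def per_pixel_bias_variance(X_train, y_train, X_test, y_test, n_boot=50, seed=0):
    """Bias-variance decomposition of squared error per test pixel, using a
    bootstrap ensemble of regression trees (an illustrative stand-in for a
    regression-tree imperviousness model)."""
    rng = np.random.RandomState(seed)
    preds = np.empty((n_boot, len(y_test)))
    for b in range(n_boot):
        Xb, yb = resample(X_train, y_train, random_state=rng)       # bootstrap resample
        preds[b] = DecisionTreeRegressor(random_state=b).fit(Xb, yb).predict(X_test)
    mean_pred = preds.mean(axis=0)
    bias2 = (mean_pred - y_test) ** 2        # squared bias per pixel
    variance = preds.var(axis=0)             # variance per pixel
    # expected squared error ~ bias2 + variance (+ irreducible noise)
    return bias2, variance
```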
Recognition and source memory as multivariate decision processes.
Banks, W P
2000-07-01
Recognition memory, source memory, and exclusion performance are three important domains of study in memory, each with its own findings, its specific theoretical developments, and its separate research literature. It is proposed here that results from all three domains can be treated with a single analytic model. This article shows how to generate a comprehensive memory representation based on multidimensional signal detection theory and how to make predictions for each of these paradigms using decision axes drawn through the space. The detection model is simpler than the comparable multinomial model, it is more easily generalizable, and it does not make threshold assumptions. An experiment using the same memory set for all three tasks demonstrates the analysis and tests the model. The results show that some seemingly complex relations between the paradigms derive from an underlying simplicity of structure.
NASA Astrophysics Data System (ADS)
Tompkins, A. M.; Thomson, M. C.
2017-12-01
Simulations of the impact of climate variations on a vector-borne disease such as malaria are subject to a number of sources of uncertainty. These include the model structure and parameter settings in addition to errors in the climate data and the neglect of their spatial heterogeneity, especially over complex terrain. We use a constrained genetic algorithm to confront these two sources of uncertainty for malaria transmission in the highlands of Kenya. The technique calibrates the parameter settings of a process-based, mathematical model of malaria transmission to vary within their assessed level of uncertainty and also allows the calibration of the driving climate data. The simulations show that in highland settings close to the threshold for sustained transmission, the uncertainty in climate is more important to address than the malaria model uncertainty. Applications of the coupled climate-malaria modelling system are briefly presented.
Initialization of a mesoscale model for April 10, 1979, using alternative data sources
NASA Technical Reports Server (NTRS)
Kalb, M. W.
1984-01-01
A 35 km grid limited area mesoscale model was initialized with high density SESAME radiosonde data and high density TIROS-N satellite temperature profiles for April 10, 1979. These data sources were used individually and with low level wind fields constructed from surface wind observations. The primary objective was to examine the use of satellite temperature data for initializing a mesoscale model by comparing the forecast results with similar experiments employing radiosonde data. The impact of observed low level winds on the model forecasts was also investigated with experiments varying the method of insertion. All forecasts were compared with each other and with mesoscale observations for precipitation, mass and wind structure. Several forecasts produced convective precipitation systems with characteristics satisfying criteria for a mesoscale convective complex. High density satellite temperature data and balanced winds can be used in a mesoscale model to produce forecasts which verify favorably with observations.
Hydropower Modeling Challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoll, Brady; Andrade, Juan; Cohen, Stuart
Hydropower facilities are important assets for the electric power sector and represent a key source of flexibility for electric grids with large amounts of variable generation. As variable renewable generation sources expand, understanding the capabilities and limitations of the flexibility from hydropower resources is important for grid planning. Appropriately modeling these resources, however, is difficult because of the wide variety of constraints these plants face that other generators do not. These constraints can be broadly categorized as environmental, operational, and regulatory. This report highlights several key issues involving incorporating these constraints when modeling hydropower operations in terms of production cost and capacity expansion. Many of these challenges involve a lack of data to adequately represent the constraints or issues of model complexity and run time. We present several potential methods for improving the accuracy of hydropower representation in these models to allow for a better understanding of hydropower's capabilities.
NASA Astrophysics Data System (ADS)
Ito, A.; Feng, Y.
2009-12-01
An accurate prediction of the bioavailable iron fraction for ocean biota is hampered by uncertainties in modeling soluble iron fractions in atmospheric aerosols. It has been proposed that atmospheric processing of mineral aerosols by anthropogenic pollutants may be a key pathway to transform insoluble iron into soluble forms. The dissolution of dust minerals strongly depends on solution pH, which is sensitive to the heterogeneous uptake of soluble gases by the dust particle. Due to the complexity, previous model assessments generally use a common assumption of thermodynamic equilibrium between gas and aerosol phases. Here, we compiled an emission inventory of iron from combustion and dust sources, and incorporated a dust iron dissolution scheme in a global chemistry-aerosol transport model (IMPACT). We will examine and discuss the uncertainties in the estimation of dissolved iron as well as comparisons of the model results with available observations.
Cell sources for in vitro human liver cell culture models.
Zeilinger, Katrin; Freyer, Nora; Damm, Georg; Seehofer, Daniel; Knöspel, Fanny
2016-09-01
In vitro liver cell culture models are gaining increasing importance in pharmacological and toxicological research. The source of cells used is critical for the relevance and the predictive value of such models. Primary human hepatocytes (PHH) are currently considered to be the gold standard for hepatic in vitro culture models, since they directly reflect the specific metabolism and functionality of the human liver; however, the scarcity and difficult logistics of PHH have driven researchers to explore alternative cell sources, including liver cell lines and pluripotent stem cells. Liver cell lines generated from hepatomas or by genetic manipulation are widely used due to their good availability, but they are generally altered in certain metabolic functions. For the past few years, adult and pluripotent stem cells have been attracting increasing attention, due to their ability to proliferate and to differentiate into hepatocyte-like cells in vitro. However, controlling the differentiation of these cells is still a challenge. This review gives an overview of the major human cell sources under investigation for in vitro liver cell culture models, including primary human liver cells, liver cell lines, and stem cells. The promises and challenges of different cell types are discussed with a focus on the complex 2D and 3D culture approaches under investigation for improving liver cell functionality in vitro. Finally, the specific application options of individual cell sources in pharmacological research or disease modeling are described. © 2016 by the Society for Experimental Biology and Medicine.
Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.
2017-12-01
We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We applied this technique to Green's Function based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After implementing the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides a better result in terms of waveform fitting and a source model that shows both sub-events associated with normal and thrust faulting.
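Because the source representation is linear in the unit-source amplitudes, the inversion itself can be sketched as a damped least-squares solve; the damping scheme and matrix shapes below are assumptions for illustration, not the study's exact GFTRI implementation.

```python
import numpy as np

def invert_initial_displacement(G, d, damping=0.1):
    """Damped least-squares inversion for unit-source amplitudes.

    G: (n_waveform_samples, n_grid_points) Green's functions - stacked waveforms
       at all stations from a unit sea-surface displacement at each grid point.
    d: observed waveforms stacked into one vector of length n_waveform_samples.
    Returns the estimated initial sea-surface displacement per grid point.
    """
    n = G.shape[1]
    A = np.vstack([G, damping * np.eye(n)])     # Tikhonov-style regularization
    b = np.concatenate([d, np.zeros(n)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m
```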
Diary of an Educational Technologist.
ERIC Educational Resources Information Center
Wasser, Judith Davidson; McGillivray, Kevin; McNamara, Elizabeth T.
1998-01-01
Provides information on Hanau Model Schools Partnership whose goal is for technology to become a firmly accepted part of daily school life. Draws from research sources and excerpts directly from a teacher's electronic logs to present a view of the complex support the educational technologist provides to the full school community. (ASK)
On Quality and Measures in Software Engineering
ERIC Educational Resources Information Center
Bucur, Ion I.
2006-01-01
Complexity measures are mainly used to estimate vital information about reliability and maintainability of software systems from regular analysis of the source code. Such measures also provide constant feedback during a software project to assist the control of the development procedure. There exist several models to classify a software product's…
Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B
2015-01-01
The Modular Program Constructor (MPC) is an open-source Java based modeling utility, built upon JSim's Mathematical Modeling Language (MML) ( http://www.physiome.org/jsim/) that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.
Thermal Image Sensing Model for Robotic Planning and Search.
Castro Jiménez, Lídice E; Martínez-García, Edgar A
2016-08-08
This work presents a search planning system for a rolling robot to find a source of infra-red (IR) radiation at an unknown location. Heat emissions are observed by a low-cost home-made IR passive visual sensor. The sensor capability for detection of radiation spectra was experimentally characterized. The sensor data were modeled by an exponential model to estimate distance as a function of the IR image's intensity, and a polynomial model to estimate temperature as a function of IR intensity. Both models are combined to deduce an exact nonlinear distance-temperature relation. A planning system obtains feedback from the IR camera (position, intensity, and temperature) to lead the robot to the heat source. The planner is a system of nonlinear equations recursively solved by a Newton-based approach to estimate the IR source in global coordinates. The planning system assists an autonomous navigation controller in order to reach the goal and avoid collisions. Trigonometric partial differential equations were established to control the robot's course towards the heat emission. A sine function produces attractive accelerations toward the IR source. A cosine function produces repulsive accelerations away from the obstacles observed by an RGB-D sensor. Simulations and real experiments in complex indoor environments are presented to illustrate the convenience and efficacy of the proposed approach.
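The two calibration models and the attraction/repulsion terms can be sketched as below; the functional forms, coefficients, and sign conventions are assumptions for illustration, since the paper's exact calibration is not reproduced here.

```python
import numpy as np

def distance_from_intensity(I, a=3.0, b=0.02):
    """Assumed exponential calibration: distance d = a * exp(-b * I)."""
    return a * np.exp(-b * I)

def temperature_from_intensity(I, coeffs=(20.0, 0.3, 0.001)):
    """Assumed polynomial calibration: T = c0 + c1*I + c2*I**2 + ..."""
    return np.polyval(list(coeffs)[::-1], I)

def planar_accelerations(theta_goal, theta_obstacle, k_att=1.0, k_rep=1.5):
    """Attractive acceleration toward the IR source and repulsive acceleration
    away from the nearest observed obstacle, using sine/cosine terms as in the
    abstract (gains and signs are placeholders)."""
    a_att = k_att * np.sin(theta_goal)          # steers heading toward the heat source
    a_rep = -k_rep * np.cos(theta_obstacle)     # pushes away from the obstacle bearing
    return a_att, a_rep

# Example: intensity reading of 120 (arbitrary units)
print(distance_from_intensity(120.0), temperature_from_intensity(120.0))
```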
Force on Force Modeling with Formal Task Structures and Dynamic Geometry
2017-03-24
task framework, derived using the MMF methodology to structure a complex mission. It further demonstrated the integration of effects from a range of...application methodology was intended to support a combined developmental testing (DT) and operational testing (OT) strategy for selected systems under test... methodology to develop new or modify existing Models and Simulations (M&S) to: • Apply data from multiple, distributed sources (including test
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLoughlin, K.
2016-01-11
The overall aim of this project is to develop a software package, called MetaQuant, that can determine the constituents of a complex microbial sample and estimate their relative abundances by analysis of metagenomic sequencing data. The goal for Task 1 is to create a generative model describing the stochastic process underlying the creation of sequence read pairs in the data set. The stages in this generative process include the selection of a source genome sequence for each read pair, with probability dependent on its abundance in the sample. The other stages describe the evolution of the source genome from its nearest common ancestor with a reference genome, breakage of the source DNA into short fragments, and the errors in sequencing the ends of the fragments to produce read pairs.
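The generative process for Task 1 can be illustrated with a toy sampler: pick a source genome with probability proportional to its abundance, draw a fragment, and read both ends with substitution errors. The fragment-length model and error rate below are assumptions, and the evolution-from-ancestor stage is omitted for brevity.

```python
import random

def simulate_read_pairs(genomes, abundances, n_pairs, read_len=100,
                        frag_mean=300, err_rate=0.01, seed=0):
    """Toy version of the generative process described above: choose a source
    genome by abundance, draw a fragment, and sequence both ends with
    independent substitution errors."""
    rng = random.Random(seed)
    names = list(genomes)
    pairs = []
    for _ in range(n_pairs):
        g = rng.choices(names, weights=[abundances[n] for n in names], k=1)[0]
        seq = genomes[g]
        frag_len = max(2 * read_len, int(rng.gauss(frag_mean, 30)))
        start = rng.randrange(0, max(1, len(seq) - frag_len))
        frag = seq[start:start + frag_len]

        def mutate(read):
            # substitution errors at a fixed per-base rate
            return "".join(rng.choice("ACGT") if rng.random() < err_rate else b for b in read)

        pairs.append((g, mutate(frag[:read_len]), mutate(frag[-read_len:])))
    return pairs

# Example with two tiny hypothetical genomes
genomes = {"gA": "ACGT" * 500, "gB": "GGCA" * 500}
reads = simulate_read_pairs(genomes, {"gA": 0.7, "gB": 0.3}, n_pairs=5)
```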
Tracking the complex absorption in NGC 2110 with two Suzaku observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivers, Elizabeth; Markowitz, Alex; Rothschild, Richard
2014-05-10
We present spectral analysis of two Suzaku observations of the Seyfert 2 galaxy NGC 2110. This source has been known to show complex, variable absorption, which we study in depth by analyzing these two observations taken 7 yr apart and by comparing them to previously analyzed observations with the XMM-Newton and Chandra observatories. We find that there is a relatively stable, full-covering absorber with a column density of ∼3 × 10²² cm⁻², with an additional patchy absorber that is likely variable in both column density and covering fraction over timescales of years, consistent with clouds in a patchy torus or in the broad line region. We model a soft emission line complex, likely arising from ionized plasma and consistent with previous studies. We find no evidence for reflection from an accretion disk in this source, with no contribution from either relativistically broadened Fe Kα line emission or a Compton reflection hump.
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.
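The stochastic search that such frameworks automate can be illustrated with a generic evolutionary loop; the sketch below is not the BluePyOpt API, and the two-parameter toy model, target features, population size, and mutation scale are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective: squared distance of the model's "features" from target features.
target = np.array([3.0, -0.5])

def features(params):
    g, e = params                          # hypothetical conductance and offset
    return np.array([2.0 * g + e, e - 0.5 * g])

def fitness(params):
    return np.sum((features(params) - target) ** 2)

bounds = np.array([[0.0, 5.0], [-2.0, 2.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))

for generation in range(100):
    scores = np.apply_along_axis(fitness, 1, pop)
    parents = pop[np.argsort(scores)[:10]]             # truncation selection
    children = parents[rng.integers(0, 10, size=30)]   # clone selected parents
    children += rng.normal(0.0, 0.1, children.shape)   # Gaussian mutation
    children = np.clip(children, bounds[:, 0], bounds[:, 1])
    pop = np.vstack([parents, children])

best = pop[np.argmin(np.apply_along_axis(fitness, 1, pop))]
print(best, fitness(best))
```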
NASA Technical Reports Server (NTRS)
Winter, Lisa M.; Veilleux, Sylvain; McKernan, Barry; Kallman, T.
2012-01-01
We present results from an analysis of the broadband, 0.3-195 keV, X-ray spectra of 48 Seyfert 1-1.5 sources detected in the very hard X-rays with the Swift Burst Alert Telescope (BAT). This sample is selected in an all-sky survey conducted in the 14-195 keV band. Therefore, our sources are largely unbiased toward both obscuration and host galaxy properties. Our detailed and uniform model fits to Suzaku/BAT and XMM-Newton/BAT spectra include the neutral absorption, direct power-law, reflected emission, soft excess, warm absorption, and narrow Fe I Kα emission properties for the entire sample. We significantly detect O VII and O VIII edges in 52% of our sample. The strength of these detections is strongly correlated with the neutral column density measured in the spectrum. Among the strongest detections, X-ray grating and UV observations, where available, indicate outflowing material. The ionized column densities of sources with O VII and O VIII detections are clustered in a narrow range with N_warm ≈ 10²¹ cm⁻², while sources without strong detections have column densities of ionized gas an order of magnitude lower. Therefore, we note that sources without strong detections likely have warm ionized outflows present but at low column densities that are not easily probed with current X-ray observations. Sources with strong complex absorption have a strong soft excess, which may or may not be due to difficulties in modeling the complex spectra of these sources. Still, the detection of a flat Γ ≈ 1 and a strong soft excess may allow us to infer the presence of strong absorption in low signal-to-noise active galactic nucleus spectra. Additionally, we include a useful correction from the Swift BAT luminosity to bolometric luminosity, based on a comparison of our spectral fitting results with published spectral energy distribution fits from 33 of our sources.
Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai
2005-10-01
This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
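The FOCUSS-style re-weighted minimum-norm step at the heart of the recursion can be sketched as follows with a random toy lead field; the sLORETA initialization, standardization, source-space shrinking, and temporal processing described above are omitted, so this is only the core update, not SSLOFO itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_sources = 32, 200
L = rng.standard_normal((n_sensors, n_sources))    # toy lead-field matrix

# Two focal sources and the corresponding (slightly noisy) sensor data.
s_true = np.zeros(n_sources)
s_true[[40, 155]] = [1.0, -0.8]
b = L @ s_true + 0.01 * rng.standard_normal(n_sensors)

s = np.ones(n_sources)      # smooth (minimum-norm-like) initial estimate
lam = 1e-3                  # regularization parameter

for _ in range(20):
    w = np.abs(s)                                   # re-weighting from the previous step
    Lw = L * w                                      # equivalent to L @ diag(w)
    G = Lw @ Lw.T + lam * np.eye(n_sensors)
    s = (w ** 2) * (L.T @ np.linalg.solve(G, b))    # W W^T L^T (L W W^T L^T + lam I)^-1 b

print(np.argsort(np.abs(s))[-2:])   # indices of the two strongest reconstructed sources
```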
Collimating lens for light-emitting-diode light source based on non-imaging optics.
Wang, Guangzhen; Wang, Lili; Li, Fuli; Zhang, Gongjian
2012-04-10
A collimating lens for a light-emitting-diode (LED) light source is an essential device widely used in lighting engineering. The lens surfaces are calculated by geometrical optics and nonimaging optics. The design process does not rely on any software optimization or any complex iterative procedure. The method can be used for any type of light source, not only Lambertian ones. The theoretical model is based on a point source, whereas a practical LED source has a finite size, so in the simulation an LED chip of 1 mm × 1 mm is used to verify the feasibility of the model. The main results show that the lenses have a very compact structure and good collimating performance. Efficiency is defined as the ratio of the flux in the illuminated plane to the flux from the LED source, without considering transmission losses in the lens material. Considering only the loss at the designed lens surfaces, the two types of lenses have high efficiencies of more than 90% and 99%, respectively. The radius of the main lighting area (containing 80% of the flux) is no more than 5 m when the illuminated plane is 200 m away from the light source.
Winston, Richard B.; Konikow, Leonard F.; Hornberger, George Z.
2018-02-16
In the traditional method of characteristics for groundwater solute-transport models, advective transport is represented by moving particles that track concentration. This approach can lead to global mass-balance problems because in models of aquifers having complex boundary conditions and heterogeneous properties, particles can originate in cells having different pore volumes and (or) be introduced (or removed) at cells representing fluid sources (or sinks) of varying strengths. Use of volume-weighted particles means that each particle tracks solute mass. In source or sink cells, the changes in particle weights will match the volume of water added or removed through external fluxes. This enables the new method to conserve mass in source or sink cells as well as globally. This approach also leads to potential efficiencies by allowing the number of particles per cell to vary spatially—using more particles where concentration gradients are high and fewer where gradients are low. The approach also eliminates the need for the model user to have to distinguish between “weak” and “strong” fluid source (or sink) cells. The new model determines whether solute mass added by fluid sources in a cell should be represented by (1) new particles having weights representing appropriate fractions of the volume of water added by the source, or (2) distributing the solute mass added over all particles already in the source cell. The first option is more appropriate for the condition of a strong source; the latter option is more appropriate for a weak source. At sinks, decisions whether or not to remove a particle are replaced by a reduction in particle weight in proportion to the volume of water removed. A number of test cases demonstrate that the new method works well and conserves mass. The method is incorporated into a new version of the U.S. Geological Survey’s MODFLOW–GWT solute-transport model.
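The bookkeeping idea, in which particles carry water volumes so that weights (and hence solute mass) can be adjusted at fluid sources and sinks, can be sketched as below. The cell structure, the threshold used to classify a source as strong or weak, and the number of new particles are illustrative assumptions, not the MODFLOW-GWT implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Particle:
    volume: float   # volume of water the particle represents
    conc: float     # solute concentration it carries (mass = volume * conc)

@dataclass
class Cell:
    particles: list = field(default_factory=list)

def apply_source(cell, q_in, c_in, strong_fraction=0.1, n_new=4):
    """Add solute from a fluid source of volume q_in and concentration c_in."""
    cell_volume = sum(p.volume for p in cell.particles)
    if q_in > strong_fraction * cell_volume:
        # Strong source: represent the added water with new weighted particles.
        cell.particles.extend(Particle(q_in / n_new, c_in) for _ in range(n_new))
    else:
        # Weak source: spread the added solute mass over the existing particles.
        added_mass = q_in * c_in
        for p in cell.particles:
            p.conc += added_mass / cell_volume

def apply_sink(cell, q_out):
    """Remove water at a sink by reducing particle weights, never deleting particles."""
    cell_volume = sum(p.volume for p in cell.particles)
    for p in cell.particles:
        p.volume *= 1.0 - q_out / cell_volume   # mass leaves in proportion to volume
```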
A critical review of principal traffic noise models: Strategies and implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garg, Naveen, E-mail: ngarg@mail.nplindia.ernet.in; Department of Mechanical, Production and Industrial Engineering, Delhi Technological University, Delhi 110042; Maji, Sagar
2014-04-01
The paper presents an exhaustive comparison of principal traffic noise models adopted in recent years in developed nations. The comparison is drawn on the basis of technical attributes including source modelling and sound propagation algorithms. Although the characterization of the source in terms of rolling and propulsion noise, in conjunction with advanced numerical methods for sound propagation, has significantly reduced the uncertainty in traffic noise predictions, the approach followed is quite complex and requires specialized mathematical skills, which is sometimes cumbersome for town planners. Also, it is sometimes difficult to choose the best approach when a variety of solutions have been proposed. This paper critically reviews all these aspects pertaining to the recent models developed and adapted in some countries and also discusses the strategies followed and the implications of these models. Highlights: • Principal traffic noise models developed are reviewed. • Sound propagation algorithms used in traffic noise models are compared. • Implications of models are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moss, M.T.; Segal, H.M.
1994-06-01
A new complex source microcomputer model has been developed for use at civil airports and Air Force bases. This paper describes both the key features of this model and its application in evaluating the air quality impact of new construction projects at three airports: one in the United States and two in Canada. The single EDMS model replaces the numerous models previously required to assess the air quality impact of pollution sources at airports. EDMS also employs a commercial data base to reduce the time and manpower required to accurately assess and document the air quality impact of airfield operations. On July 20, 1993, the U.S. Environmental Protection Agency (EPA) issued the final rule (Federal Register, 7/20/93, page 38816) to add new models to the Guideline on Air Quality Models. At that time EDMS was incorporated into the Guideline as an Appendix A model. 12 refs., 4 figs., 1 tab.
Anthropogenic combustion iron as a complex climate forcer.
Matsui, Hitoshi; Mahowald, Natalie M; Moteki, Nobuhiro; Hamilton, Douglas S; Ohata, Sho; Yoshida, Atsushi; Koike, Makoto; Scanza, Rachel A; Flanner, Mark G
2018-04-23
Atmospheric iron affects the global carbon cycle by modulating ocean biogeochemistry through the deposition of soluble iron to the ocean. Iron emitted by anthropogenic (fossil fuel) combustion is a source of soluble iron that is currently considered less important than other soluble iron sources, such as mineral dust and biomass burning. Here we show that the atmospheric burden of anthropogenic combustion iron is 8 times greater than previous estimates by incorporating recent measurements of anthropogenic magnetite into a global aerosol model. This new estimation increases the total deposition flux of soluble iron to southern oceans (30-90 °S) by 52%, with a larger contribution of anthropogenic combustion iron than dust and biomass burning sources. The direct radiative forcing of anthropogenic magnetite is estimated to be 0.021 W m⁻² globally and 0.22 W m⁻² over East Asia. Our results demonstrate that anthropogenic combustion iron is a larger and more complex climate forcer than previously thought, and therefore plays a key role in the Earth system.
Coherent transport and energy flow patterns in photosynthesis under incoherent excitation.
Pelzer, Kenley M; Can, Tankut; Gray, Stephen K; Morr, Dirk K; Engel, Gregory S
2014-03-13
Long-lived coherences have been observed in photosynthetic complexes after laser excitation, inspiring new theories regarding the extreme quantum efficiency of photosynthetic energy transfer. Whether coherent (ballistic) transport occurs in nature and whether it improves photosynthetic efficiency remain topics of debate. Here, we use a nonequilibrium Green's function analysis to model exciton transport after excitation from an incoherent source (as opposed to coherent laser excitation). We find that even with an incoherent source, the rate of environmental dephasing strongly affects exciton transport efficiency, suggesting that the relationship between dephasing and efficiency is not an artifact of coherent excitation. The Green's function analysis provides a clear view of both the pattern of excitonic fluxes among chromophores and the multidirectionality of energy transfer that is a feature of coherent transport. We see that even in the presence of an incoherent source, transport occurs by qualitatively different mechanisms as dephasing increases. Our approach can be generalized to complex synthetic systems and may provide a new tool for optimizing synthetic light harvesting materials.
Journey into Bone Models: A Review.
Scheinpflug, Julia; Pfeiffenberger, Moritz; Damerau, Alexandra; Schwarz, Franziska; Textor, Martin; Lang, Annemarie; Schulze, Frank
2018-05-10
Bone is a complex tissue with a variety of functions, such as providing mechanical stability for locomotion, protection of the inner organs, mineral homeostasis and haematopoiesis. To fulfil these diverse roles in the human body, bone consists of a multitude of different cells and an extracellular matrix that is mechanically stable, yet flexible at the same time. Unlike most tissues, bone is under constant renewal facilitated by a coordinated interaction of bone-forming and bone-resorbing cells. It is thus challenging to recreate bone in its complexity in vitro and most current models rather focus on certain aspects of bone biology that are of relevance for the research question addressed. In addition, animal models are still regarded as the gold-standard in the context of bone biology and pathology, especially for the development of novel treatment strategies. However, species-specific differences impede the translation of findings from animal models to humans. The current review summarizes and discusses the latest developments in bone tissue engineering and organoid culture including suitable cell sources, extracellular matrices and microfluidic bioreactor systems. With available technology in mind, a best possible bone model will be hypothesized. Furthermore, the future need and application of such a complex model will be discussed.
NASA Astrophysics Data System (ADS)
Li, D.
2016-12-01
Sudden water pollution accidents are unavoidable risk events that we must learn to co-exist with. In China's Taihu River Basin, river flow conditions are complicated by frequent artificial interference. Sudden water pollution accidents occur mainly as abnormal discharges of large volumes of wastewater and are characterized by sudden onset, uncontrollable extent, uncertain affected objects, and a concentrated distribution of many risk sources. Effective prevention of pollution accidents that may occur is of great significance for water quality safety management. Bayesian networks can represent the relationship between pollution sources and river water quality intuitively. Using a sequential Monte Carlo algorithm, the pollution-source state-switching model, the water quality model for the river network, and Bayesian reasoning are integrated, and a sudden water pollution risk assessment model for the river network is developed to quantify the water quality risk under the collective influence of multiple pollution sources. Based on the isotope water transport mechanism, a dynamic tracing model of multiple pollution sources is established, which describes the relationship between the exceedance risk of the system and the multiple risk sources. Finally, a diagnostic reasoning algorithm based on the Bayesian network is coupled with the multi-source tracing model to identify the contribution of each risk source to the system risk under complex flow conditions. Taking the Taihu Lake water system as the research object, the model is applied and yields reasonable results for three typical years. The study shows that the water quality risk at critical sections is influenced by the pollution risk sources, the boundary water quality, the hydrological conditions and the self-purification capacity, and that multiple pollution sources have an obvious effect on the water quality risk of the receiving water body. The water quality risk assessment approach developed in this study offers an effective tool for systematically quantifying the random uncertainty in a plain river network system, and it also provides technical support for decision-making on controlling sudden water pollution through identification of critical pollution sources.
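The diagnostic reasoning step, attributing an observed exceedance to candidate risk sources, can be illustrated with a very small Bayesian update. For simplicity the candidate sources are treated here as mutually exclusive explanations, unlike the full network model, and the priors and likelihoods are invented for illustration only.

```python
# P(source is discharging) and P(exceedance observed | that source is discharging),
# both hypothetical.
priors = {"plant_A": 0.05, "plant_B": 0.02, "farm_runoff": 0.10}
likelihood = {"plant_A": 0.8, "plant_B": 0.6, "farm_runoff": 0.3}

# Posterior attribution of the observed exceedance among the candidate sources.
evidence = sum(priors[s] * likelihood[s] for s in priors)
posterior = {s: priors[s] * likelihood[s] / evidence for s in priors}

for source, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{source}: {p:.2f}")
```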
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bain, H. M.; Luhmann, J. G.; Li, Y.
During periods of increased solar activity, coronal mass ejections (CMEs) can occur in close succession and proximity to one another. This can lead to the interaction and merger of CME ejecta as they propagate in the heliosphere. The particles accelerated in these shocks can result in complex solar energetic particle (SEP) events, as observing spacecraft form both remote and local shock connections. It can be challenging to understand these complex SEP events from in situ profiles alone. Multipoint observations of CMEs in the near-Sun environment, from the Solar Terrestrial Relations Observatory–Sun Earth Connection Coronal and Heliospheric Investigation and the Solar and Heliospheric Observatory Large Angle and Spectrometric Coronagraph, greatly improve our chances of identifying the origin of these accelerated particles. However, contextual information on conditions in the heliosphere, including the background solar wind conditions and shock structures, is essential for understanding SEP properties well enough to forecast their characteristics. Wang–Sheeley–Arge (WSA)–ENLIL + Cone modeling provides a tool to interpret major SEP event periods in the context of a realistic heliospheric model and to determine how much of what is observed in large SEP events depends on nonlocal magnetic connections to shock sources. We discuss observations of the SEP-rich periods of 2010 August and 2012 July in conjunction with ENLIL modeling. We find that much SEP activity can only be understood in the light of such models, and in particular from knowing about both remote and local shock source connections. These results must be folded into the investigations of the physics underlying the longitudinal extent of SEP events, and the source connection versus diffusion pictures of interpretations of SEP events.
A source-controlled data center network model.
Yu, Yang; Liang, Mangui; Wang, Zhe
2017-01-01
The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices restrict scalability and lead to high cost and energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address, named the vector address (VA), as the packet-switching label. The VA completely defines the communication path, and data forwarding can be accomplished relying solely on the VA. The SCDCN architecture has four advantages. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the processing bottleneck of a single controller and reduces the computational complexity. 2) The vector switches (VS) deployed in the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches; meanwhile, the scalability problem is solved effectively. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows is significantly decreased. 4) We designed the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OpenFlow switch (OFS).
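The source-routing idea behind the vector address, in which the packet itself carries the full forwarding path so a vector switch needs no table lookup, can be sketched as follows; the encoding used here (one output-port index per hop) is a simplification for illustration, not the actual VA format.

```python
from collections import deque

class VectorSwitch:
    """Forwards purely on the next element of the packet's vector address."""
    def __init__(self, name, ports):
        self.name = name
        self.ports = ports      # output port index -> next switch or host name

    def forward(self, packet):
        hop = packet["va"].popleft()      # consume the next path element
        return self.ports[hop]

# A tiny three-switch path; a controller would compute the VA at flow setup.
s3 = VectorSwitch("s3", {0: "hostB"})
s2 = VectorSwitch("s2", {0: s3})
s1 = VectorSwitch("s1", {1: s2})

packet = {"payload": "hello", "va": deque([1, 0, 0])}   # path chosen by the controller
node = s1
while isinstance(node, VectorSwitch):
    node = node.forward(packet)
print("delivered to", node)    # -> hostB
```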
Understanding Prairie Fen Hydrology - a Hierarchical Multi-Scale Groundwater Modeling Approach
NASA Astrophysics Data System (ADS)
Sampath, P.; Liao, H.; Abbas, H.; Ma, L.; Li, S.
2012-12-01
Prairie fens provide critical habitat to more than 50 rare species and significantly contribute to the biodiversity of the upper Great Lakes region. The sustainability of these globally unique ecosystems, however, requires that they be fed by a steady supply of pristine, calcareous groundwater. Understanding the hydrology that supports the existence of such fens is essential in preserving these valuable habitats. This research uses process-based multi-scale groundwater modeling for this purpose. Two fen-sites, MacCready Fen and Ives Road Fen, in Southern Michigan were systematically studied. A hierarchy of nested steady-state models was built for each fen-site to capture the system's dynamics at spatial scales ranging from the regional groundwater-shed to the local fens. The models utilize high-resolution Digital Elevation Models (DEM), National Hydrologic Datasets (NHD), a recently-assembled water-well database, and results from a state-wide groundwater mapping project to represent the complex hydro-geological and stress framework. The modeling system simulates both shallow glacial and deep bedrock aquifers as well as the interaction between surface water and groundwater. Aquifer heterogeneities were explicitly simulated with multi-scale transition probability geo-statistics. A two-way hydraulic head feedback mechanism was set up between the nested models, such that the parent models provided boundary conditions to the child models, and in turn the child models provided local information to the parent models. A hierarchical mass budget analysis was performed to estimate the seepage fluxes at the surface water/groundwater interfaces and to assess the relative importance of the processes at multiple scales that contribute water to the fens. The models were calibrated using observed base-flows at stream gauging stations and/or static water levels at wells. Three-dimensional particle tracking was used to predict the sources of water to the fens. We observed from the multi-scale simulations that the water system that supports the fens is a much larger, more connected, and more complex one than expected. The water in the fen can be traced back to a network of sources, including lakes and wetlands at different elevations, which are connected to a regional mound through a "cascade delivery mechanism". This "master recharge area" is the ultimate source of water not only to the fens in its vicinity, but also for many major rivers and aquifers. The implication of this finding is that prairie fens must be managed as part of a much larger, multi-scale groundwater system, and we must consider protection of both short- and long-term water sources. This will require a fundamental reassessment of our current approach to fen conservation, which is primarily based on protection of individual fens and their immediate surroundings. Clearly, in the future we must plan for conservation of the broad recharge areas and the multiple fen complexes they support.
Theoretical study on onset of cubic distortion product otoacoustic emissions
NASA Astrophysics Data System (ADS)
Vencovský, Václav; Vetešník, Aleš
2018-05-01
The distortion product otoacoustic emissions (DPOAEs) are generated when the cochlea is stimulated by two pure tones with different frequencies f1 and f2. The onset of the DPOAE amplitude may have a nonmonotonic, complex shape when f2 is pulsed during a stationary f1 input. The observed complexities have been explained as due to (1) the secondary source of the DPOAE at the distortion product (DP) characteristic site, and (2) the spatial distribution of DP sources with different phases. There is also a third possibility: that the complexities are due to the suppression of the f1 basilar membrane (BM) response during the f2 onset. In this study, a hydrodynamic cochlea model is used to examine the influence of f1 suppression on the time course of the DPOAE onset. In particular, a set of simulations was performed for frequency ratio f2/f1 = 1.26 and various levels of the primary tones (L1 and L2 = 30-70 dB SPL) to determine the relationship between the time dependencies of the DPOAE onset and the suppression of the f1 BM response. The model predicts that suppression of the f1 BM response can cause suppression of the DPOAE amplitude during the onset period.
Takecian, Pedro L.; Oikawa, Marcio K.; Braghetto, Kelly R.; Rocha, Paulo; Lucena, Fred; Kavounis, Katherine; Schlumpf, Karen S.; Acker, Susan; Carneiro-Proietti, Anna B. F.; Sabino, Ester C.; Custer, Brian; Busch, Michael P.; Ferreira, João E.
2013-01-01
Over time, data warehouse (DW) systems have become more difficult to develop because of the growing heterogeneity of data sources. Despite advances in research and technology, DW projects are still too slow for pragmatic results to be generated. Here, we address the following question: how can the complexity of DW development for integration of heterogeneous transactional information systems be reduced? To answer this, we proposed methodological guidelines based on cycles of conceptual modeling and data analysis, to drive construction of a modular DW system. These guidelines were applied to the blood donation domain, successfully reducing the complexity of DW development. PMID:23729945
NASA Astrophysics Data System (ADS)
Mathur, R.; Xing, J.; Szykman, J.; Gan, C. M.; Hogrefe, C.; Pleim, J. E.
2015-12-01
Air pollution simulation models must address the increasing complexity arising from new model applications that treat multi-pollutant interactions across varying space and time scales. Setting and attaining lower ambient air quality standards requires an improved understanding and quantification of source attribution amongst the multiple anthropogenic and natural sources, on time scales ranging from episodic to annual and spatial scales ranging from urban to continental. Changing emission patterns over the developing regions of the world are likely to exacerbate the impacts of long-range pollutant transport on background pollutant levels, which may then impact the attainment of local air quality standards. Thus, strategies for reduction of pollution levels of surface air over a region are complicated not only by the interplay of local emission sources and several complex physical, chemical and dynamical processes in the atmosphere, but also by hemispheric background levels of pollutants. Additionally, as short-lived climate forcers, aerosols and ozone exert regionally heterogeneous radiative forcing and influence regional climate trends. EPA's coupled WRF-CMAQ modeling system is applied over a domain encompassing the northern hemisphere for the period spanning 1990-2010. This period has witnessed significant reductions in anthropogenic emissions in North America and Europe as a result of implementation of control measures, and dramatic increases across Asia associated with economic and population growth, resulting in contrasting trends in air pollutant distributions and transport patterns across the northern hemisphere. Model results (trends in pollutant concentrations, optical and radiative characteristics) across the northern hemisphere are analyzed in conjunction with surface, aloft and remote sensing measurements to contrast the differing trends in air pollution and aerosol-radiation interactions in these regions over the past two decades. Given the future LEO (TropOMI) and GEO (Sentinel-4, GEMS, and TEMPO) atmospheric chemistry satellite observing capabilities, the results from these model applications will be discussed in the context of how the new satellite observations could help constrain and reduce uncertainties in the models.
NASA Technical Reports Server (NTRS)
McNeill, Justin
1995-01-01
The Multimission Image Processing Subsystem (MIPS) at the Jet Propulsion Laboratory (JPL) has managed transitions of application software sets from one operating system and hardware platform to multiple operating systems and hardware platforms. As part of these transitions, cost estimates were generated from the personal experience of in-house developers and managers to calculate the total effort required for such projects. Productivity measures have been collected for two such transitions, one very large and the other relatively small in terms of source lines of code. These estimates used a cost estimation model similar to the Software Engineering Laboratory (SEL) Effort Estimation Model. Experience in transitioning software within JPL MIPS has uncovered a high incidence of interface complexity. Interfaces, both internal and external to individual software applications, have contributed to software transition project complexity, and thus to scheduling difficulties and larger than anticipated design work on software to be ported.
A Model for Climate Change Adaptation
NASA Astrophysics Data System (ADS)
Pasqualini, D.; Keating, G. N.
2009-12-01
Climate models predict serious impacts on the western U.S. in the next few decades, including increased temperatures and reduced precipitation. In combination, these changes are linked to profound impacts on fundamental systems, such as water and energy supplies, agriculture, population stability, and the economy. Global and national imperatives for climate change mitigation and adaptation are made actionable at the state level, for instance through greenhouse gas (GHG) emission regulations and incentives for renewable energy sources. However, adaptation occurs at the local level, where energy and water usage can be understood relative to local patterns of agriculture, industry, and culture. In response to the greenhouse gas emission reductions required by California’s Assembly Bill 32 (2006), Sonoma County has committed to sharp emissions reductions across several sectors, including water, energy, and transportation. To assist Sonoma County in developing a renewable energy (RE) portfolio to achieve this goal, we have developed an integrated assessment model, CLEAR (CLimate-Energy Assessment for Resiliency). Building on Sonoma County’s existing baseline studies of energy use, carbon emissions and potential RE sources, the CLEAR model simulates the complex interactions among technology deployment, economics and social behavior. The model enables assessment of these and other components, with specific analysis of their coupling and feedbacks, because the interrelated sectors cannot be studied independently given the complex nature of the problem. The goal is an approach to climate change mitigation and adaptation that is replicable for use by other interested communities. The model’s user interface helps stakeholders and policymakers understand options for technology implementation.
NASA Astrophysics Data System (ADS)
Koh, E. H.; Lee, E.; Kaown, D.; Lee, K. K.; Green, C. T.
2017-12-01
The timing and magnitude of nitrate contamination are determined by various factors such as contaminant loading, recharge characteristics and the geologic system. The elapsed time since recharge for water arriving at a given outlet location, defined as the groundwater age, provides an indirect interpretation of the hydrologic characteristics of the aquifer system. There are three major methods to estimate groundwater ages (apparent ages, lumped parameter models, and numerical models), which characterize differently the groundwater mixing that results from the various flow pathways in a heterogeneous aquifer system. In this study, we therefore compared the three age models in a complex aquifer system using observed age tracer data and a reconstructed history of nitrate contamination from long-term source loading. The 3H-3He and CFC-12 apparent ages, which do not account for groundwater mixing, gave the most delayed response times and implied that the peak of the nitrate loading had not yet arrived. The lumped parameter model, however, produced a more recent loading response than the apparent ages, with the peak loading period influencing the water quality. The numerical model could delineate the various groundwater mixing components and their different impacts on nitrate dynamics in the complex aquifer system. The different age estimation methods lead to variations in the estimated contaminant loading history, with the discrepancy between the age estimates most pronounced in the complex aquifer system.
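The lumped parameter approach contrasted above can be sketched as a convolution of the historical input concentration with an assumed transit-time distribution. The sketch below uses an exponential mixing model with a hypothetical mean transit time and a synthetic nitrate loading history; none of the numbers come from the study.

```python
import numpy as np

years = np.arange(1960, 2021)
c_in = np.clip((years - 1960) * 0.3, 0.0, 12.0)   # synthetic loading history (mg/L)

tau = 25.0                                        # hypothetical mean transit time (yr)
ages = np.arange(0, 200)
g = np.exp(-ages / tau) / tau                     # exponential transit-time distribution
g /= g.sum()                                      # normalize on the discrete grid

def outlet_concentration(i):
    """Convolve the input history with the age distribution up to year index i."""
    return sum(w * c_in[i - a] for a, w in zip(ages, g) if i - a >= 0)

c_out = np.array([outlet_concentration(i) for i in range(len(years))])
print(years[-1], round(float(c_out[-1]), 2))      # delayed, smoothed outlet response
```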
Robust numerical electromagnetic eigenfunction expansion algorithms
NASA Astrophysics Data System (ADS)
Sainath, Kamalesh
This thesis summarizes developments in rigorous, full-wave, numerical spectral-domain (integral plane wave eigenfunction expansion [PWE]) evaluation algorithms concerning time-harmonic electromagnetic (EM) fields radiated by generally-oriented and positioned sources within planar and tilted-planar layered media exhibiting general anisotropy, thickness, layer number, and loss characteristics. The work is motivated by the need to accurately and rapidly model EM fields radiated by subsurface geophysical exploration sensors probing layered, conductive media, where complex geophysical and man-made processes can lead to micro-laminate and micro-fractured geophysical formations exhibiting, at the lower (sub-2MHz) frequencies typically employed for deep EM wave penetration through conductive geophysical media, bulk-scale anisotropic (i.e., directional) electrical conductivity characteristics. When the planar-layered approximation (layers of piecewise-constant material variation and transversely-infinite spatial extent) is locally, near the sensor region, considered valid, numerical spectral-domain algorithms are suitable due to their strong low-frequency stability characteristic, and ability to numerically predict time-harmonic EM field propagation in media with response characterized by arbitrarily lossy and (diagonalizable) dense, anisotropic tensors. If certain practical limitations are addressed, PWE can robustly model sensors with general position and orientation that probe generally numerous, anisotropic, lossy, and thick layers. The main thesis contributions, leading to a sensor and geophysical environment-robust numerical modeling algorithm, are as follows: (1) Simple, rapid estimator of the region (within the complex plane) containing poles, branch points, and branch cuts (critical points) (Chapter 2), (2) Sensor and material-adaptive azimuthal coordinate rotation, integration contour deformation, integration domain sub-region partition and sub-region-dependent integration order (Chapter 3), (3) Integration partition-extrapolation-based (Chapter 3) and Gauss-Laguerre Quadrature (GLQ)-based (Chapter 4) evaluations of the deformed, semi-infinite-length integration contour tails, (4) Robust in-situ-based (i.e., at the spectral-domain integrand level) direct/homogeneous-medium field contribution subtraction and analytical curbing of the source current spatial spectrum function's ill behavior (Chapter 5), and (5) Analytical re-casting of the direct-field expressions when the source is embedded within a NBAM, short for non-birefringent anisotropic medium (Chapter 6). The benefits of these contributions are, respectively, (1) Avoiding computationally intensive critical-point location and tracking (computation time savings), (2) Sensor and material-robust curbing of the integrand's oscillatory and slow decay behavior, as well as preventing undesirable critical-point migration within the complex plane (computation speed, precision, and instability-avoidance benefits), (3) sensor and material-robust reduction (or, for GLQ, elimination) of integral truncation error, (4) robustly stable modeling of scattered fields and/or fields radiated from current sources modeled as spatially distributed (10 to 1000-fold compute-speed acceleration also realized for distributed-source computations), and (5) numerically stable modeling of fields radiated from sources within NBAM layers. Having addressed these limitations, are PWE algorithms applicable to modeling EM waves in tilted planar-layered geometries too? 
This question is explored in Chapter 7 using a Transformation Optics-based approach, allowing one to model wave propagation through layered media that (in the sensor's vicinity) possess tilted planar interfaces. The technique leads to spurious wave scattering however, whose induced computation accuracy degradation requires analysis. Mathematical exhibition, and exhaustive simulation-based study and analysis of the limitations of, this novel tilted-layer modeling formulation is Chapter 7's main contribution.
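One of the tail-integration strategies mentioned above, Gauss-Laguerre quadrature, evaluates semi-infinite integrals of exponentially decaying integrands with only a handful of nodes. The sketch below applies it to a scalar test integrand with a known answer, not to the actual spectral-domain integrand.

```python
import numpy as np

# Gauss-Laguerre rule: integral_0^inf e^(-x) f(x) dx ~= sum_i w_i f(x_i)
nodes, weights = np.polynomial.laguerre.laggauss(16)

# Toy "tail" integrand with exponential decay and mild oscillation:
#   integral_0^inf e^(-x) cos(0.5 x) dx = 1 / (1 + 0.5^2) = 0.8
def f(x):
    return np.cos(0.5 * x)

approx = np.sum(weights * f(nodes))
print(approx, abs(approx - 0.8))   # the 16-point rule is already very accurate here
```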
Microscale Obstacle Resolving Air Quality Model Evaluation with the Michelstadt Case
Rakai, Anikó; Kristóf, Gergely
2013-01-01
Modelling pollutant dispersion in cities is challenging for air quality models as the urban obstacles have an important effect on the flow field and thus the dispersion. Computational Fluid Dynamics (CFD) models with an additional scalar dispersion transport equation are a possible way to resolve the flow field in the urban canopy and model dispersion while taking the effect of the buildings into consideration explicitly. These models need detailed evaluation through verification and validation to gain confidence in their reliability and to use them as a regulatory tool in complex urban geometries. This paper shows the performance of an open-source general-purpose CFD code, OpenFOAM, for a complex urban geometry, Michelstadt, which has both flow field and dispersion measurement data. Continuous release dispersion results are discussed to show the strengths and weaknesses of the modelling approach, focusing on the value of the turbulent Schmidt number, which was found to give the best statistical metric results with a value of 0.7. PMID:24027450
Condor-COPASI: high-throughput computing for biochemical networks
2012-01-01
Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage. PMID:22834945
NASA Astrophysics Data System (ADS)
Donges, Jonathan F.; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik V.; Marwan, Norbert; Dijkstra, Henk A.; Kurths, Jürgen
2015-11-01
We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks such as climate networks in climatology or functional brain networks in neuroscience representing the structure of statistical interrelationships in large data sets of time series and, subsequently, investigating this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology.
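The recurrence-analysis part of such toolchains can be illustrated with plain NumPy; the sketch below (not the pyunicorn API) embeds a noisy scalar series, thresholds pairwise distances into a recurrence matrix, and reports the recurrence rate. The embedding parameters and threshold are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 20.0 * np.pi, 500)
x = np.sin(t) + 0.1 * rng.standard_normal(t.size)   # noisy periodic series

# Time-delay embedding (dimension 2, delay 10 samples).
delay, dim = 10, 2
emb = np.column_stack(
    [x[i * delay: len(x) - (dim - 1 - i) * delay] for i in range(dim)]
)

# Recurrence matrix: pairs of states closer than a threshold are "recurrent".
dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
eps = 0.2 * dists.max()
R = (dists < eps).astype(int)

print(emb.shape, R.mean())   # recurrence rate = fraction of recurrent pairs
```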
Verification of Space Station Secondary Power System Stability Using Design of Experiment
NASA Technical Reports Server (NTRS)
Karimi, Kamiar J.; Booker, Andrew J.; Mong, Alvin C.; Manners, Bruce
1998-01-01
This paper describes analytical methods used in the verification of large DC power systems, with applications to the International Space Station (ISS). Large DC power systems contain many switching power converters with negative resistance characteristics. The ISS power system presents numerous challenges with respect to system stability, such as complex sources and undefined loads. The Space Station program has developed impedance specifications for sources and loads. The overall approach to system stability consists of specific hardware requirements coupled with extensive system analysis and testing. Testing of large complex distributed power systems is not practical due to the size and complexity of the system. Computer modeling has been used extensively to develop hardware specifications as well as to identify system configurations for lab testing. The statistical method of Design of Experiments (DoE) is used as an analysis tool for verification of these large systems. DoE reduces the number of computer runs necessary to analyze the performance of a complex power system consisting of hundreds of DC/DC converters. DoE also provides valuable information about the effect of changes in system parameters on system performance, about various operating scenarios, and about identification of the scenarios with potential for instability. In this paper we describe how we have used computer modeling to analyze a large DC power system. A brief description of DoE is given. Examples using applications of DoE to analysis and verification of the ISS power system are provided.
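A two-level full factorial design of the kind DoE builds on can be generated and screened in a few lines. The factor names, coded levels, and the response function standing in for the power-system simulation below are hypothetical.

```python
import itertools
import numpy as np

# Hypothetical screening factors for converter stability, coded as -1/+1 levels.
factors = ["source_impedance", "load_power", "filter_capacitance"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))   # 2^3 runs

def simulated_margin(run):
    """Stand-in for the power-system simulation: returns a stability margin (dB)."""
    x1, x2, x3 = run
    return 6.0 - 2.0 * x1 - 1.5 * x2 + 1.0 * x3 + 0.5 * x1 * x2

responses = np.array([simulated_margin(run) for run in design])

# Main effect of each factor: mean response at +1 minus mean response at -1.
for j, name in enumerate(factors):
    effect = responses[design[:, j] == 1].mean() - responses[design[:, j] == -1].mean()
    print(f"{name:20s} main effect = {effect:+.2f} dB")
```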
Cannon, Robert C; Gleeson, Padraig; Crook, Sharon; Ganapathy, Gautham; Marin, Boris; Piasini, Eugenio; Silver, R Angus
2014-01-01
Computational models are increasingly important for studying complex neurophysiological systems. As scientific tools, it is essential that such models can be reproduced and critically evaluated by a range of scientists. However, published models are currently implemented using a diverse set of modeling approaches, simulation tools, and computer languages making them inaccessible and difficult to reproduce. Models also typically contain concepts that are tightly linked to domain-specific simulators, or depend on knowledge that is described exclusively in text-based documentation. To address these issues we have developed a compact, hierarchical, XML-based language called LEMS (Low Entropy Model Specification), that can define the structure and dynamics of a wide range of biological models in a fully machine readable format. We describe how LEMS underpins the latest version of NeuroML and show that this framework can define models of ion channels, synapses, neurons and networks. Unit handling, often a source of error when reusing models, is built into the core of the language by specifying physical quantities in models in terms of the base dimensions. We show how LEMS, together with the open source Java and Python based libraries we have developed, facilitates the generation of scripts for multiple neuronal simulators and provides a route for simulator free code generation. We establish that LEMS can be used to define models from systems biology and map them to neuroscience-domain specific simulators, enabling models to be shared between these traditionally separate disciplines. LEMS and NeuroML 2 provide a new, comprehensive framework for defining computational models of neuronal and other biological systems in a machine readable format, making them more reproducible and increasing the transparency and accessibility of their underlying structure and properties.
A data-driven modeling approach to stochastic computation for low-energy biomedical devices.
Lee, Kyong Ho; Jang, Kuk Jin; Shoeb, Ali; Verma, Naveen
2011-01-01
Low-power devices that can detect clinically relevant correlations in physiologically-complex patient signals can enable systems capable of closed-loop response (e.g., controlled actuation of therapeutic stimulators, continuous recording of disease states, etc.). In ultra-low-power platforms, however, hardware error sources are becoming increasingly limiting. In this paper, we present how data-driven methods, which allow us to accurately model physiological signals, also allow us to effectively model and overcome prominent hardware error sources with nearly no additional overhead. Two applications, EEG-based seizure detection and ECG-based arrhythmia-beat classification, are synthesized to a logic-gate implementation, and two prominent error sources are introduced: (1) SRAM bit-cell errors and (2) logic-gate switching errors ('stuck-at' faults). Using patient data from the CHB-MIT and MIT-BIH databases, performance similar to error-free hardware is achieved even for very high fault rates (up to 0.5 for SRAMs and 7 × 10⁻² for logic) that cause computational bit error rates as high as 50%.
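The style of fault injection used in such evaluations can be sketched by flipping random bits in stored 8-bit classifier weights and comparing decisions before and after. The tiny linear classifier, data, and bit-error rates below are illustrative assumptions, not the paper's detectors or databases.

```python
import numpy as np

rng = np.random.default_rng(4)

# A tiny linear classifier with weights stored as signed 8-bit integers (as in SRAM).
w = rng.integers(-60, 60, size=64, dtype=np.int8)
x = rng.integers(-5, 5, size=(1000, 64))
clean = x @ w.astype(np.int32) > 0

def inject_bit_errors(weights, ber):
    """Flip each stored bit independently with probability ber."""
    bits = np.unpackbits(weights.view(np.uint8)[:, None], axis=1)
    flips = (rng.random(bits.shape) < ber).astype(np.uint8)
    return np.packbits(bits ^ flips, axis=1).ravel().view(np.int8)

for ber in (1e-3, 1e-2, 1e-1):
    faulty = x @ inject_bit_errors(w, ber).astype(np.int32) > 0
    print(f"bit error rate {ber:g}: decision agreement {(faulty == clean).mean():.3f}")
```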
NASA Astrophysics Data System (ADS)
Robinson, Mitchell; Butcher, Ryan; Coté, Gerard L.
2017-02-01
Monte Carlo modeling of photon propagation has been used in the examination of particular areas of the body to further enhance the understanding of light propagation through tissue. This work seeks to improve upon established simulation methods through more accurate representations of the simulated tissues in the wrist as well as of the characteristics of the light source. The Monte Carlo simulation program was developed using Matlab. Generation of the different tissue domains, such as muscle, vasculature, and bone, was performed in Solidworks, where each domain was saved as a separate .stl file that was read into the program. The light source model was altered to give consideration to both the viewing angle of the simulated LED and the nominal diameter of the source. It is believed that the use of these more accurate models generates results that more closely match those seen in vivo, and can be used to better guide the design of optical wrist-worn measurement devices.
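A single propagation step of such a photon Monte Carlo (exponential step-length sampling plus Henyey-Greenstein scattering) can be sketched as below. The optical properties are hypothetical tissue-like values, and the sketch is in Python rather than the Matlab implementation described above; boundaries, tissue domains, and detection are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
mu_a, mu_s, g = 0.3, 10.0, 0.9   # hypothetical absorption, scattering (1/mm), anisotropy
mu_t = mu_a + mu_s

def propagate_step(pos, w_dir, weight):
    step = -np.log(rng.random()) / mu_t           # Beer-Lambert step length
    pos = pos + step * w_dir
    weight *= mu_s / mu_t                         # partial absorption per interaction

    # Henyey-Greenstein sampling of the scattering deflection angle.
    if g != 0.0:
        tmp = (1 - g * g) / (1 - g + 2 * g * rng.random())
        cos_t = (1 + g * g - tmp * tmp) / (2 * g)
    else:
        cos_t = 2 * rng.random() - 1
    sin_t = np.sqrt(1 - cos_t ** 2)
    phi = 2 * np.pi * rng.random()

    # Rotate into a new unit direction using a local orthonormal frame.
    u = np.cross(w_dir, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-8:                  # old direction nearly along z
        u = np.array([1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(w_dir, u)
    new_dir = sin_t * (np.cos(phi) * u + np.sin(phi) * v) + cos_t * w_dir
    return pos, new_dir / np.linalg.norm(new_dir), weight

pos, direction, weight = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
for _ in range(50):
    pos, direction, weight = propagate_step(pos, direction, weight)
print(pos, weight)
```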
An adaptable architecture for patient cohort identification from diverse data sources
Bache, Richard; Miles, Simon; Taweel, Adel
2013-01-01
Objective We define and validate an architecture for systems that identify patient cohorts for clinical trials from multiple heterogeneous data sources. This architecture has an explicit query model capable of supporting temporal reasoning and expressing eligibility criteria independently of the representation of the data used to evaluate them. Method The architecture has the key feature that queries defined according to the query model are both pre- and post-processed, and this is used to address both structural and semantic heterogeneity. The process of extracting the relevant clinical facts is separated from the process of reasoning about them. A specific instance of the query model is then defined and implemented. Results We show that the specific instance of the query model has wide applicability. We then describe how it is used to access three diverse data warehouses to determine patient counts. Discussion Although the proposed architecture requires greater effort to implement the query model than would be the case for using just SQL and accessing a database management system directly, this effort is justified because it supports both temporal reasoning and heterogeneous data sources. The query model only needs to be implemented once no matter how many data sources are accessed. Each additional source requires only the implementation of a lightweight adaptor. Conclusions The architecture has been used to implement a specific query model that can express complex eligibility criteria and access three diverse data warehouses, thus demonstrating the feasibility of this approach in dealing with temporal reasoning and data heterogeneity. PMID:24064442
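The separation between a representation-independent query model and lightweight per-source adaptors can be sketched as follows; every class, field, and criterion here is a hypothetical illustration of the pattern, not the architecture's actual interfaces.

```python
from dataclasses import dataclass
from typing import List, Optional, Protocol, Set, Tuple


@dataclass
class Criterion:
    """One eligibility criterion against an abstract clinical concept,
    independent of how any particular warehouse stores the data."""
    concept: str                 # e.g. "HbA1c"
    operator: str                # ">=" or "<"
    value: float
    within_days: Optional[int] = None   # optional temporal constraint


class SourceAdaptor(Protocol):
    """Lightweight per-source adaptor: yields the clinical facts for one concept."""
    def facts_for(self, concept: str) -> List[Tuple[str, float, int]]:
        """Return (patient_id, value, days_before_now) tuples."""
        ...


def eligible_patients(criteria: List[Criterion], adaptor: SourceAdaptor) -> Set[str]:
    """Evaluate all criteria against one source; the reasoning stays source-agnostic."""
    matched: Optional[Set[str]] = None
    for c in criteria:
        hits = {
            pid
            for pid, value, age_days in adaptor.facts_for(c.concept)
            if (c.within_days is None or age_days <= c.within_days)
            and (value >= c.value if c.operator == ">=" else value < c.value)
        }
        matched = hits if matched is None else matched & hits
    return matched or set()
```

Adding a new warehouse then only requires another object implementing facts_for; the temporal and logical reasoning above is written once.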
Dynamical model for the toroidal sporadic meteors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokorný, Petr; Vokrouhlický, David; Nesvorný, David
More than a decade of radar operations by the Canadian Meteor Orbit Radar have allowed both young and moderately old streams to be distinguished from the dispersed sporadic background component. The latter has been categorized according to broad radiant regions visible to Earth-based observers into three broad classes: the helion and anti-helion source, the north and south apex sources, and the north and south toroidal sources (and a related arc structure). The first two are populated mainly by dust released from Jupiter-family comets and new comets. Proper modeling of the toroidal sources has not to date been accomplished. Here, we develop a steady-state model for the toroidal source of the sporadic meteoroid complex, compare our model with the available radar measurements, and investigate a contribution of dust particles from our model to the whole population of sporadic meteoroids. We find that the long-term stable part of the toroidal particles is mainly fed by dust released by Halley type (long period) comets (HTCs). Our synthetic model reproduces most of the observed features of the toroidal particles, including the most troublesome low-eccentricity component, which is due to a combination of two effects: particles' ability to decouple from Jupiter and circularize by the Poynting-Robertson effect, and large collision probability for orbits similar to that of the Earth. Our calibrated model also allows us to estimate the total mass of the HTC-released dust in space and check the flux necessary to maintain the cloud in a steady state.
A Novel Approach for Determining Source-Receptor Relationships of Aerosols in Model Simulations
NASA Astrophysics Data System (ADS)
Ma, P.; Gattiker, J.; Liu, X.; Rasch, P. J.
2013-12-01
The climate modeling community usually performs sensitivity studies in a 'one-factor-at-a-time' fashion. However, owing to the a priori unknown complexity and nonlinearity of the climate system and simulation response, it is computationally expensive to systematically identify the cause-and-effect relationships of multiple factors in climate models. In this study, we use a Gaussian Process emulator, based on a small number of Community Atmosphere Model Version 5.1 (CAM5) simulations (constrained by meteorological reanalyses) using a Latin Hypercube experimental design, to demonstrate that it is possible to characterize model behavior accurately and very efficiently without any modifications to the model itself. We use the emulator to characterize the source-receptor relationships of black carbon (BC), focusing specifically on describing the constituent burden and surface deposition rates from emissions in various regions. Our results show that the emulator is capable of quantifying the contribution of aerosol burden and surface deposition from different source regions, finding that most of the current Arctic BC comes from remote sources. We also demonstrate that the sensitivity of the BC burdens to emission perturbations differs among source regions. For example, emission growth in Africa, where dry convection is strong, results in a moderate increase of the BC burden over the globe, while the same emission growth in the Arctic leads to a significant increase of local BC burdens and surface deposition rates. These results provide insights into the dynamical, physical, and chemical processes of the climate model, and the conclusions may have policy implications for designing cost-effective global and regional pollution management strategies.
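A minimal sketch of the emulation workflow, assuming a two-parameter toy response in place of CAM5 output: a Latin Hypercube design samples the input space, a Gaussian Process is fit to the small ensemble, and the emulator then predicts the response (with uncertainty) anywhere in the design space. The parameter ranges, the analytic response, and the library choices are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Latin Hypercube design over two hypothetical emission scaling factors
# (stand-ins for perturbed source-region emissions in the real study).
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(n=30), l_bounds=[0.5, 0.5], u_bounds=[2.0, 2.0])

# A cheap analytic stand-in for the simulated "Arctic BC burden" response.
y = 0.7 * X[:, 0] + 0.2 * X[:, 0] * X[:, 1] + 0.05 * np.random.default_rng(0).normal(size=30)

# Fit the Gaussian Process emulator to the small ensemble of runs.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[1.0, 1.0]),
                              normalize_y=True)
gp.fit(X, y)

# The emulator can now predict the response, with uncertainty, anywhere in the design space.
mean, std = gp.predict([[1.5, 1.0]], return_std=True)
print(mean, std)
```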
A multi-model approach to monitor emissions of CO2 and CO from an urban-industrial complex
NASA Astrophysics Data System (ADS)
Super, Ingrid; Denier van der Gon, Hugo A. C.; van der Molen, Michiel K.; Sterk, Hendrika A. M.; Hensen, Arjan; Peters, Wouter
2017-11-01
Monitoring urban-industrial emissions is often challenging because observations are scarce and regional atmospheric transport models are too coarse to represent the high spatiotemporal variability in the resulting concentrations. In this paper we apply a new combination of an Eulerian model (Weather Research and Forecast, WRF, with chemistry) and a Gaussian plume model (Operational Priority Substances - OPS). The modelled mixing ratios are compared to observed CO2 and CO mole fractions at four sites along a transect from an urban-industrial complex (Rotterdam, the Netherlands) towards rural conditions for October-December 2014. Urban plumes are well-mixed at our semi-urban location, making this location suited for an integrated emission estimate over the whole study area. The signals at our urban measurement site (with average enhancements of 11 ppm CO2 and 40 ppb CO over the baseline) are highly variable due to the presence of distinct source areas dominated by road traffic/residential heating emissions or industrial activities. This causes different emission signatures that are translated into a large variability in observed ΔCO : ΔCO2 ratios, which can be used to identify dominant source types. We find that WRF-Chem is able to represent synoptic variability in CO2 and CO (e.g. the median CO2 mixing ratio is 9.7 ppm, observed, against 8.8 ppm, modelled), but it fails to reproduce the hourly variability of daytime urban plumes at the urban site (R² up to 0.05). For the urban site, adding a plume model to the model framework is beneficial to adequately represent plume transport especially from stack emissions. The explained variance in hourly, daytime CO2 enhancements from point source emissions increases from 30 % with WRF-Chem to 52 % with WRF-Chem in combination with the most detailed OPS simulation. The simulated variability in ΔCO : ΔCO2 ratios decreases drastically from 1.5 to 0.6 ppb ppm⁻¹, which agrees better with the observed standard deviation of 0.4 ppb ppm⁻¹. This is partly due to improved wind fields (increase in R² of 0.10) but also due to improved point source representation (increase in R² of 0.05) and dilution (increase in R² of 0.07). Based on our analysis we conclude that a plume model with detailed and accurate dispersion parameters adds substantially to top-down monitoring of greenhouse gas emissions in urban environments with large point source contributions within a ~10 km radius from the observation sites.
Sonsthagen, Sarah A.; McClaren, Erica L.; Doyle, Frank I.; Titus, K.; Sage, George K.; Wilson, Robert E.; Gust, Judy R.; Talbot, Sandra L.
2012-01-01
Northern Goshawks occupying the Alexander Archipelago, Alaska, and coastal British Columbia nest primarily in old-growth and mature forest, which results in spatial heterogeneity in the distribution of individuals across the landscape. We used microsatellite and mitochondrial data to infer genetic structure, gene flow, and fluctuations in population demography through evolutionary time. Patterns in the genetic signatures were used to assess predictions associated with the three population models: panmixia, metapopulation, and isolated populations. Population genetic structure was observed along with asymmetry in gene flow estimates that changed directionality at different temporal scales, consistent with metapopulation model predictions. Therefore, Northern Goshawk assemblages located in the Alexander Archipelago and coastal British Columbia interact through a metapopulation framework, though they may not fit the classic model of a metapopulation. Long-term population sources (coastal mainland British Columbia) and sinks (Revillagigedo and Vancouver islands) were identified. However, there was no trend through evolutionary time in the directionality of dispersal among the remaining assemblages, suggestive of a rescue-effect dynamic. Admiralty, Douglas, and Chichagof island complex appears to be an evolutionarily recent source population in the Alexander Archipelago. In addition, Kupreanof island complex and Kispiox Forest District populations have high dispersal rates to populations in close geographic proximity and potentially serve as local source populations. Metapopulation dynamics occurring in the Alexander Archipelago and coastal British Columbia by Northern Goshawks highlight the importance of both occupied and unoccupied habitats to long-term population persistence of goshawks in this region.
The decay of hot nuclei formed in La-induced reactions at E/A=45 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Libby, Bruce
1993-01-01
The decay of hot nuclei formed in the reactions 139La + 27Al, 51V, natCu, and 139La were studied by the coincident detection of up to four complex fragments (Z > 3) emitted in these reactions. Fragments were characterized as to their atomic number, energy and in- and out-of-plane angles. The probability of the decay by an event of a given complex fragment multiplicity as a function of excitation energy per nucleon of the source is nearly independent of the system studied. Additionally, there is no large increase in the proportion of multiple fragment events as the excitation energy of the source increases past 5 MeV/nucleon. This is at odds with many prompt multifragmentation models of nuclear decay. The reactions 139La + 27Al, 51V, natCu were also studied by combining a dynamical model calculation that simulates the early stages of nuclear reactions with a statistical model calculation for the latter stages of the reactions. For the reaction 139La + 27Al, these calculations reproduced many of the experimental features, but other features were not reproduced. For the reaction 139La + 51V, the calculation failed to reproduce somewhat more of the experimental features. The calculation failed to reproduce any of the experimental features of the reaction 139La + natCu, with the exception of the source velocity distributions.
The decay of hot nuclei formed in La-induced reactions at E/A=45 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Libby, B.
1993-01-01
The decay of hot nuclei formed in the reactions 139La + 27Al, 51V, natCu, and 139La were studied by the coincident detection of up to four complex fragments (Z > 3) emitted in these reactions. Fragments were characterized as to their atomic number, energy and in- and out-of-plane angles. The probability of the decay by an event of a given complex fragment multiplicity as a function of excitation energy per nucleon of the source is nearly independent of the system studied. Additionally, there is no large increase in the proportion of multiple fragment events as the excitation energy of the source increases past 5 MeV/nucleon. This is at odds with many prompt multifragmentation models of nuclear decay. The reactions 139La + 27Al, 51V, natCu were also studied by combining a dynamical model calculation that simulates the early stages of nuclear reactions with a statistical model calculation for the latter stages of the reactions. For the reaction 139La + 27Al, these calculations reproduced many of the experimental features, but other features were not reproduced. For the reaction 139La + 51V, the calculation failed to reproduce somewhat more of the experimental features. The calculation failed to reproduce any of the experimental features of the reaction 139La + natCu, with the exception of the source velocity distributions.
The Complex Outgassing of Comets and the Resulting Coma, a Direct Simulation Monte-Carlo Approach
NASA Astrophysics Data System (ADS)
Fougere, Nicolas
During its journey, when a comet gets within a few astronomical units of the Sun, solar heating liberates gases and dust from its icy nucleus forming a rarefied cometary atmosphere, the so-called coma. This tenuous atmosphere can expand to distances of up to millions of kilometers, orders of magnitude larger than the nucleus size. Most of the practical cases of coma studies involve the consideration of rarefied gas flows under non-LTE conditions where the hydrodynamics approach is not valid. Then, the use of kinetic methods is required to properly study the physics of the cometary coma. The Direct Simulation Monte-Carlo (DSMC) method is the method of choice to solve the Boltzmann equation, giving the opportunity to study the cometary atmosphere from the inner coma, where collisions dominate and the gas is in thermodynamic equilibrium, to the outer coma, where densities are lower and free-flow conditions prevail. While previous studies of the coma used direct sublimation from the nucleus for spherically symmetric 1D models, or 2D models with a day/night asymmetry, recent observations of comets showed the existence of small local source areas such as jets, and of extended sources via sublimating icy grains, which must be included into cometary models for a realistic representation of the physics of the coma. In this work, we present, for the first time, 1D, 2D, and 3D models that can take into account the full effects of conditions with more complex sources of gas with jets and/or icy grains. Moreover, an innovative, fully 3D description of the cometary coma using a kinetic method with a realistic nucleus and outgassing is demonstrated. While most of the physical models used in this study had already been developed, they are included in one self-consistent coma model for the first time. The inclusion of complex cometary outgassing processes represents the state-of-the-art of cometary coma modeling. This provides invaluable information about the coma by refining the understanding of the material that constitutes comets. This helps us to comprehend the process of Solar System formation, one of the top priority questions in the 2013-2022 Planetary Science Decadal survey.
NASA Astrophysics Data System (ADS)
Trugman, Daniel Taylor
The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process and that the relation between the dynamical source properties of small and large earthquakes obeys self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada. Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of southern California seismicity. Chapter 6 builds upon these results and applies the same spectral decomposition technique to examine the source properties of several thousand recent earthquakes in southern Kansas that are likely human-induced by massive oil and gas operations in the region. Chapter 7 studies the connection between source spectral properties and earthquake hazard, focusing on spatial variations in dynamic stress drop and its influence on ground motion amplitudes. Finally, Chapter 8 provides a summary of the key findings of and relations between these studies, and outlines potential avenues of future research.
NoSQL data model for semi-automatic integration of ethnomedicinal plant data from multiple sources.
Ningthoujam, Sanjoy Singh; Choudhury, Manabendra Dutta; Potsangbam, Kumar Singh; Chetia, Pankaj; Nahar, Lutfun; Sarker, Satyajit D; Basar, Norazah; Das Talukdar, Anupam
2014-01-01
Sharing traditional knowledge with the scientific community could refine scientific approaches to phytochemical investigation and conservation of ethnomedicinal plants. As such, integration of traditional knowledge with scientific data using a single platform for sharing is greatly needed. However, ethnomedicinal data are available in heterogeneous formats, which depend on cultural aspects, survey methodology and focus of the study. Phytochemical and bioassay data are also available from many open sources in various standards and customised formats. To design a flexible data model that could integrate both primary and curated ethnomedicinal plant data from multiple sources. The current model is based on MongoDB, one of the Not only Structured Query Language (NoSQL) databases. Although it does not contain schema, modifications were made so that the model could incorporate both standard and customised ethnomedicinal plant data format from different sources. The model presented can integrate both primary and secondary data related to ethnomedicinal plants. Accommodation of disparate data was accomplished by a feature of this database that supported a different set of fields for each document. It also allowed storage of similar data having different properties. The model presented is scalable to a highly complex level with continuing maturation of the database, and is applicable for storing, retrieving and sharing ethnomedicinal plant data. It can also serve as a flexible alternative to a relational and normalised database. Copyright © 2014 John Wiley & Sons, Ltd.
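As a rough illustration of the schema-flexible approach, the sketch below stores a primary field-survey record and a curated phytochemistry record for the same species side by side in MongoDB using pymongo; the database name, field names, and values are hypothetical.

```python
from pymongo import MongoClient

# Connect to a local MongoDB instance (connection details are illustrative).
client = MongoClient("mongodb://localhost:27017/")
plants = client["ethnomed"]["plants"]

# Documents from different sources may carry different fields; MongoDB
# accepts them side by side without a fixed schema.
plants.insert_one({
    "species": "Azadirachta indica",
    "local_name": "Neem",
    "ailments": ["fever", "skin infection"],
    "survey": {"district": "Cachar", "year": 2012},           # primary field-survey record
})
plants.insert_one({
    "species": "Azadirachta indica",
    "phytochemicals": ["nimbin", "azadirachtin"],
    "bioassay": {"target": "antimalarial", "ic50_uM": 12.4},  # curated open-source record
})

# A single query then pulls together both kinds of records for one species.
for doc in plants.find({"species": "Azadirachta indica"}):
    print(doc)
```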
A power-efficient communication system between brain-implantable devices and external computers.
Yao, Ning; Lee, Heung-No; Chang, Cheng-Chun; Sclabassi, Robert J; Sun, Mingui
2007-01-01
In this paper, we propose a power-efficient communication system for linking a brain-implantable device to an external system. For battery-powered implantable devices, the processor and transmitter power should be reduced in order to both conserve battery power and reduce the health risks associated with transmission. To accomplish this, a joint source-channel coding/decoding system is devised. Low-density generator matrix (LDGM) codes are used in our system due to their low encoding complexity. The power cost for signal processing within the implantable device is greatly reduced by avoiding explicit source encoding. Raw data, which is highly correlated, is transmitted. At the receiver, a Markov chain source correlation model is utilized to approximate and capture the correlation of the raw data. A turbo iterative receiver algorithm is designed which connects the Markov chain source model to the LDGM decoder in a turbo-iterative way. Simulation results show that the proposed system can save 1 to 2.5 dB of transmission power.
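A first-order Markov chain is the simplest way to capture the bit-to-bit correlation of such raw data. The sketch below generates a correlated binary stream and estimates its "stay" probability, the kind of statistic a receiver-side source model would exploit; the parameter values and helper names are illustrative, not the paper's actual source model.

```python
import numpy as np

rng = np.random.default_rng(2)

def generate_markov_bits(n, p_stay=0.95):
    """First-order Markov binary source: the next bit repeats the current one
    with probability p_stay, giving a highly correlated raw data stream."""
    bits = np.empty(n, dtype=np.uint8)
    bits[0] = rng.integers(0, 2)
    stay = rng.random(n - 1) < p_stay
    for i in range(1, n):
        bits[i] = bits[i - 1] if stay[i - 1] else 1 - bits[i - 1]
    return bits

def estimate_transition(bits):
    """Estimate P(stay) from observed raw data, as a receiver-side model might."""
    return np.mean(bits[1:] == bits[:-1])

raw = generate_markov_bits(10_000)
print("estimated P(stay):", estimate_transition(raw))
```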
NASA Astrophysics Data System (ADS)
Cassan, Arnaud
2017-07-01
The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the Spitzer or Kepler/K2 satellites. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that allows one to compute the quadrupole and hexadecapole approximations of the finite-source magnification with more efficiency than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source python codes.
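For orientation, the sketch below gives the standard single-lens point-source magnification and a brute-force average over a uniform source disk, which is the integral that the quadrupole and hexadecapole expansions approximate at a tiny fraction of the cost. It is a naive reference implementation under simple assumptions, not the optimized routines described in the abstract.

```python
import numpy as np

def point_source_magnification(u):
    """Standard single-lens point-source magnification at separation u (Einstein radii)."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def finite_source_magnification(u, rho, n=200):
    """Brute-force area average of the point-source magnification over a uniform
    source disk of radius rho, centred at separation u."""
    r = rho * np.sqrt(np.linspace(0.0, 1.0, n, endpoint=False) + 0.5 / n)  # area-uniform radii
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    rr, tt = np.meshgrid(r, theta)
    u_grid = np.sqrt((u + rr * np.cos(tt))**2 + (rr * np.sin(tt))**2)
    return point_source_magnification(u_grid).mean()

print(point_source_magnification(0.1))          # point-source value
print(finite_source_magnification(0.1, 0.05))   # finite-source value for rho = 0.05
```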
Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models
NASA Astrophysics Data System (ADS)
Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan
2017-04-01
Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced order modeling tools, which couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are a part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
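The core of the source-separation step can be sketched with off-the-shelf tools: factor a non-negative concentration matrix with NMF several times and cluster the recovered signatures with k-means to check that the mixing end-members are stable. The synthetic data, the three-source assumption, and the use of scikit-learn are illustrative stand-ins for the customized NMF/k-means algorithm described above.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Synthetic "observed" geochemistry: 50 wells x 8 analytes, built from 3 hidden sources.
true_mixing = rng.random((50, 3))
true_sources = rng.random((3, 8))
observations = true_mixing @ true_sources + 0.01 * rng.random((50, 8))

# Run NMF from several random starts; cluster the recovered source signatures
# to check that the factorization is robust across restarts.
candidates = []
for seed in range(10):
    model = NMF(n_components=3, init="random", random_state=seed, max_iter=2000)
    model.fit(observations)
    candidates.append(model.components_)

stacked = np.vstack(candidates)                      # 10 runs x 3 signatures each
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(stacked)
source_signatures = np.array([stacked[labels == k].mean(axis=0) for k in range(3)])
print(source_signatures.round(2))
```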
A study on locating the sonic source of sinusoidal magneto-acoustic signals using a vector method.
Zhang, Shunqi; Zhou, Xiaoqing; Ma, Ren; Yin, Tao; Liu, Zhipeng
2015-01-01
Methods based on the magnetic-acoustic effect are of great significance in studying the electrical imaging properties of biological tissues and currents. The continuous wave method, which is commonly used, can only detect the current amplitude without the sound source position. Although the pulse mode adopted in magneto-acoustic imaging can locate the sonic source, the low measuring accuracy and low SNR have limited its application. In this study, a vector method was used to solve and analyze the magnetic-acoustic signal based on the continuous sine wave mode. This study includes theoretical modeling of the vector method, simulations of the line model, and experiments with wire samples to analyze magneto-acoustic (MA) signal characteristics. The results showed that the amplitude and phase of the MA signal contained the location information of the sonic source. The amplitude and phase obeyed the vector theory in the complex plane. This study sets a foundation for a new technique to locate sonic sources for biomedical imaging of tissue conductivity. It also aids in studying biological current detection and reconstruction based on the magneto-acoustic effect.
Joint Blind Source Separation by Multi-set Canonical Correlation Analysis
Li, Yi-Ou; Adalı, Tülay; Wang, Wei; Calhoun, Vince D
2009-01-01
In this work, we introduce a simple and effective scheme to achieve joint blind source separation (BSS) of multiple datasets using multi-set canonical correlation analysis (M-CCA) [1]. We first propose a generative model of joint BSS based on the correlation of latent sources within and between datasets. We specify source separability conditions, and show that, when the conditions are satisfied, the group of corresponding sources from each dataset can be jointly extracted by M-CCA through maximization of correlation among the extracted sources. We compare source separation performance of the M-CCA scheme with other joint BSS methods and demonstrate the superior performance of the M-CCA scheme in achieving joint BSS for a large number of datasets, group of corresponding sources with heterogeneous correlation values, and complex-valued sources with circular and non-circular distributions. We apply M-CCA to analysis of functional magnetic resonance imaging (fMRI) data from multiple subjects and show its utility in estimating meaningful brain activations from a visuomotor task. PMID:20221319
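As a two-dataset illustration of the underlying idea (classical CCA, which M-CCA generalizes to many datasets), the sketch below builds two synthetic datasets sharing two latent sources and extracts maximally correlated component pairs. The data generation and the use of scikit-learn's CCA are assumptions for illustration only, not the M-CCA implementation used in the paper.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)

# Two synthetic datasets sharing two latent sources, mixed differently in each.
n, k = 500, 2
latent = rng.standard_normal((n, k))
X = latent @ rng.standard_normal((k, 6)) + 0.1 * rng.standard_normal((n, 6))
Y = latent @ rng.standard_normal((k, 8)) + 0.1 * rng.standard_normal((n, 8))

# Classical CCA extracts maximally correlated component pairs from two datasets;
# M-CCA extends the same objective to correlation across many datasets at once.
cca = CCA(n_components=2)
Xc, Yc = cca.fit_transform(X, Y)
for j in range(2):
    r = np.corrcoef(Xc[:, j], Yc[:, j])[0, 1]
    print(f"canonical correlation of component {j}: {r:.3f}")
```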
Development of improved wildfire smoke exposure estimates for health studies in the western U.S.
NASA Astrophysics Data System (ADS)
Ivey, C.; Holmes, H.; Loria Salazar, S. M.; Pierce, A.; Liu, C.
2016-12-01
Wildfire smoke exposure is a significant health concern in the western U.S. because large wildfires have increased in size and frequency over the past four years due to drought conditions. The transport phenomena in complex terrain and timing of the wildfire emissions make the smoke plumes difficult to simulate using conventional air quality models. Monitoring data can be used to estimate exposure metrics, but in rural areas the monitoring networks are too sparse to calculate wildfire exposure metrics for the entire population in a region. Satellite retrievals provide global, spatiotemporal air quality information and are used to track pollution plumes, estimate human exposures, model emissions, and determine sources (i.e., natural versus anthropogenic) in regulatory applications. Particulate matter (PM) exposures can be estimated using columnar aerosol optical depth (AOD), where satellite AOD retrievals serve as a spatial surrogate to simulate surface PM gradients. These exposure models have been successfully used in health effects studies in the eastern U.S. where complex mountainous terrain and surface reflectance do not limit AOD retrieval from satellites. Using results from a chemical transport model (CTM) is another effective method to determine spatial gradients of pollutants. However, the CTM does not adequately capture the temporal and spatial distribution of wildfire smoke plumes. By combining the spatiotemporal pollutant fields from both satellite retrievals and CTM results with ground-based pollutant observations, the spatial wildfire smoke exposure model can be improved. This work will address the challenge of understanding the spatiotemporal distributions of pollutant concentrations to model human exposures to wildfire smoke in regions with complex terrain, where meteorological conditions as well as emission sources significantly influence the spatial distribution of pollutants. The focus will be on developing models to enhance exposure estimates of elevated PM and ozone concentrations from wildfire smoke plumes in the western U.S.
Meteoroid Environment Modeling: The Meteoroid Engineering Model and Shower Forecasting
NASA Technical Reports Server (NTRS)
Moorhead, Althea V.
2017-01-01
The meteoroid environment is often divided conceptually into meteor showers and the sporadic meteor background. It is commonly but incorrectly assumed that meteoroid impacts primarily occur during meteor showers; instead, the vast majority of hazardous meteoroids belong to the sporadic complex. Unlike meteor showers, which persist for a few hours to a few weeks, sporadic meteoroids impact the Earth's atmosphere and spacecraft throughout the year. The Meteoroid Environment Office (MEO) has produced two environment models to handle these cases: the Meteoroid Engineering Model (MEM) and an annual meteor shower forecast. The sporadic complex, despite its year-round activity, is not isotropic in its directionality. Instead, their apparent points of origin, or radiants, are organized into groups called "sources". The speed, directionality, and size distribution of these sporadic sources are modeled by the Meteoroid Engineering Model (MEM), which is currently in its second major release version (MEMR2) [Moorhead et al., 2015]. MEM provides the meteoroid flux relative to a user-provided spacecraft trajectory; it provides the total flux as well as the flux per angular bin, speed interval, and on specific surfaces (ram, wake, etc.). Because the sporadic complex dominates the meteoroid flux, MEM is the most appropriate model to use in spacecraft design. Although showers make up a small fraction of the meteoroid environment, they can produce significant short-term enhancements of the meteoroid flux. Thus, it can be valuable to consider showers when assessing risks associated with vehicle operations that are brief in duration. To assist with such assessments, the MEO issues an annual forecast that reports meteor shower fluxes as a function of time and compares showers with the time-averaged total meteoroid flux. This permits missions to do quick assessments of the increase in risk posed by meteor showers. Section II describes MEM in more detail and describes our current efforts to improve its characteristics for a future release. Section III describes the annual shower forecast and highlights recent improvements made to its algorithm and inputs.
Brenner, Claire; Thiem, Christina Elisabeth; Wizemann, Hans-Dieter; Bernhardt, Matthias; Schulz, Karsten
2017-01-01
In this study, high-resolution thermal imagery acquired with a small unmanned aerial vehicle (UAV) is used to map evapotranspiration (ET) at a grassland site in Luxembourg. The land surface temperature (LST) information from the thermal imagery is the key input to a one-source and two-source energy balance model. While the one-source model treats the surface as a single uniform layer, the two-source model partitions the surface temperature and fluxes into soil and vegetation components. It thus explicitly accounts for the different contributions of both components to surface temperature as well as turbulent flux exchange with the atmosphere. Contrary to the two-source model, the one-source model requires an empirical adjustment parameter in order to account for the effect of the two components. Turbulent heat flux estimates of both modelling approaches are compared to eddy covariance (EC) measurements using the high-resolution input imagery UAVs provide. In this comparison, the effect of different methods for energy balance closure of the EC data on the agreement between modelled and measured fluxes is also analysed. Additionally, the sensitivity of the one-source model to the derivation of the empirical adjustment parameter is tested. Due to the very dry and hot conditions during the experiment, pronounced thermal patterns developed over the grassland site. These patterns result in spatially variable turbulent heat fluxes. The model comparison indicates that both models are able to derive ET estimates that compare well with EC measurements under these conditions. However, the two-source model, with a more complex treatment of the energy and surface temperature partitioning between the soil and vegetation, outperformed the simpler one-source model in estimating sensible and latent heat fluxes. This is consistent with findings from prior studies. For the one-source model, a time-variant expression of the adjustment parameter (to account for the difference between aerodynamic and radiometric temperature) that depends on the surface-to-air temperature gradient yielded the best agreement with EC measurements. This study showed that the applied UAV system equipped with a dual-camera set-up allows for the acquisition of thermal imagery with high spatial and temporal resolution that illustrates the small-scale heterogeneity of thermal surface properties. The UAV-based thermal imagery therefore provides the means for analysing patterns of LST and other surface properties with a high level of detail that cannot be obtained by traditional remote sensing methods. PMID:28515537
Brenner, Claire; Thiem, Christina Elisabeth; Wizemann, Hans-Dieter; Bernhardt, Matthias; Schulz, Karsten
2017-05-19
In this study, high-resolution thermal imagery acquired with a small unmanned aerial vehicle (UAV) is used to map evapotranspiration (ET) at a grassland site in Luxembourg. The land surface temperature (LST) information from the thermal imagery is the key input to a one-source and two-source energy balance model. While the one-source model treats the surface as a single uniform layer, the two-source model partitions the surface temperature and fluxes into soil and vegetation components. It thus explicitly accounts for the different contributions of both components to surface temperature as well as turbulent flux exchange with the atmosphere. Contrary to the two-source model, the one-source model requires an empirical adjustment parameter in order to account for the effect of the two components. Turbulent heat flux estimates of both modelling approaches are compared to eddy covariance (EC) measurements using the high-resolution input imagery UAVs provide. In this comparison, the effect of different methods for energy balance closure of the EC data on the agreement between modelled and measured fluxes is also analysed. Additionally, the sensitivity of the one-source model to the derivation of the empirical adjustment parameter is tested. Due to the very dry and hot conditions during the experiment, pronounced thermal patterns developed over the grassland site. These patterns result in spatially variable turbulent heat fluxes. The model comparison indicates that both models are able to derive ET estimates that compare well with EC measurements under these conditions. However, the two-source model, with a more complex treatment of the energy and surface temperature partitioning between the soil and vegetation, outperformed the simpler one-source model in estimating sensible and latent heat fluxes. This is consistent with findings from prior studies. For the one-source model, a time-variant expression of the adjustment parameter (to account for the difference between aerodynamic and radiometric temperature) that depends on the surface-to-air temperature gradient yielded the best agreement with EC measurements. This study showed that the applied UAV system equipped with a dual-camera set-up allows for the acquisition of thermal imagery with high spatial and temporal resolution that illustrates the small-scale heterogeneity of thermal surface properties. The UAV-based thermal imagery therefore provides the means for analysing patterns of LST and other surface properties with a high level of detail that cannot be obtained by traditional remote sensing methods.
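A minimal sketch of the one-source bulk formulation, assuming illustrative mid-day grassland values: sensible heat flux follows from the radiometric-to-air temperature difference and an aerodynamic resistance (plus an empirical excess resistance standing in for the adjustment parameter), and latent heat is taken as the energy-balance residual. This is not the exact parameterization used in the study.

```python
def one_source_fluxes(t_rad, t_air, r_a, rn, g, excess_resistance=0.0,
                      rho=1.2, cp=1005.0):
    """Minimal one-source energy-balance sketch (illustrative, simplified).

    t_rad : radiometric surface temperature from the thermal camera [K]
    t_air : air temperature [K]
    r_a   : aerodynamic resistance [s m-1]
    rn, g : net radiation and soil heat flux [W m-2]
    excess_resistance : empirical adjustment for the difference between
                        radiometric and aerodynamic temperature [s m-1]
    """
    h = rho * cp * (t_rad - t_air) / (r_a + excess_resistance)  # sensible heat flux
    le = rn - g - h                                              # latent heat as the residual
    return h, le

# Example with plausible mid-day grassland values (illustrative only).
h, le = one_source_fluxes(t_rad=305.0, t_air=299.0, r_a=40.0, rn=550.0, g=80.0,
                          excess_resistance=10.0)
print(h, le)   # ~145 W m-2 sensible, ~325 W m-2 latent
```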
Molecular modeling of biomolecules by paramagnetic NMR and computational hybrid methods.
Pilla, Kala Bharath; Gaalswyk, Kari; MacCallum, Justin L
2017-11-01
The 3D atomic structures of biomolecules and their complexes are key to our understanding of biomolecular function, recognition, and mechanism. However, it is often difficult to obtain structures, particularly for systems that are complex, dynamic, disordered, or exist in environments like cell membranes. In such cases sparse data from a variety of paramagnetic NMR experiments offers one possible source of structural information. These restraints can be incorporated in computer modeling algorithms that can accurately translate the sparse experimental data into full 3D atomic structures. In this review, we discuss various types of paramagnetic NMR/computational hybrid modeling techniques that can be applied to successful modeling of not only the atomic structure of proteins but also their interacting partners. This article is part of a Special Issue entitled: Biophysics in Canada, edited by Lewis Kay, John Baenziger, Albert Berghuis and Peter Tieleman. Copyright © 2017 Elsevier B.V. All rights reserved.
Thermo-hydraulics of the Peruvian accretionary complex at 12°S
Kukowski, Nina; Pecher, Ingo
1999-01-01
The models were constrained by the thermal gradient obtained from the depth of bottom-simulating reflectors (BSRs) at the lower slope and some conventional measurements. We found that significant frictional heating is required to explain the observed strong landward increase of heat flux. This is consistent with results from sandbox modelling which predict strong basal friction at this margin. A significantly higher heat source is needed to match the observed thermal gradient in the southern line.
Seismic Acoustic Ratio Estimates Using a Moving Vehicle Source
1999-08-01
airwave coupling. Thus, it is likely that the high SAR values are due to acoustic-to-seismic coupling in a shallow, air-filled poroelastic layer (e.g., Sabatier et al., 1986b). More complex models for the earth, such as those incorporating layering and poroelastic material (e.g., Albert, 1993; Attenborough, 1985),
Solomon Islands 2007 Tsunami Near-Field Modeling and Source Earthquake Deformation
NASA Astrophysics Data System (ADS)
Uslu, B.; Wei, Y.; Fritz, H.; Titov, V.; Chamberlin, C.
2008-12-01
The earthquake of 1 April 2007 left behind momentous footage of crust rupture and tsunami impact along the coastline of Solomon Islands (Fritz and Kalligeris, 2008; Taylor et al., 2008; McAdoo et al., 2008; PARI, 2008), while the undisturbed tsunami signals were also recorded at nearby deep-ocean tsunameters and coastal tide stations. These multi-dimensional measurements provide valuable datasets to tackle the challenging aspects at the tsunami source directly by inversion from tsunameter records in real time (available in a time frame of minutes), and its relationship with the seismic source derived either from the seismometer records (available in a time frame of hours or days) or from the crust rupture measurements (available in a time frame of months or years). The tsunami measurements in the near field, including the complex vertical crust motion and tsunami runup, are particularly critical to help interpret the tsunami source. This study develops high-resolution inundation models for the Solomon Islands to compute the near-field tsunami impact. Using these models, this research compares the tsunameter-derived tsunami source with the seismic-derived earthquake sources from comprehensive perspectives, including vertical uplift and subsidence, tsunami runup heights and their distributional pattern among the islands, deep-ocean tsunameter measurements, and near- and far-field tide gauge records. The present study stresses the significance of the tsunami magnitude, source location, bathymetry and topography in accurately modeling the generation, propagation and inundation of the tsunami waves. This study highlights the accuracy and efficiency of the tsunameter-derived tsunami source in modeling the near-field tsunami impact. As the high-resolution models developed in this study will become part of NOAA's tsunami forecast system, these results also suggest expanding the system for potential applications in tsunami hazard assessment, search and rescue operations, as well as event and post-event planning in the Solomon Islands.
Robinson, James L.
2004-01-01
Water from wells and springs accounts for more than 90 percent of the public water supply in Calhoun County, Alabama. Springs associated with the Jacksonville Thrust Fault Complex are used for public water supply for the cities of Anniston and Jacksonville. The largest ground-water supply is Coldwater Spring, the primary source of water for Anniston, Alabama. The average discharge of Coldwater Spring is about 32 million gallons per day, and the variability of discharge is about 75 percent. Water-quality samples were collected from 6 springs and 15 wells in Calhoun County from November 2001 to January 2003. The pH of the ground water typically was greater than 6.0, and specific conductance was less than 300 microsiemens per centimeter. The water chemistry was dominated by calcium, carbonate, and bicarbonate ions. The hydrogen and oxygen isotopic composition of the water samples indicates the occurrence of a low-temperature, water-rock weathering reaction known as silicate hydrolysis. The residence time of the ground water, or ground-water age, was estimated by using analysis of chlorofluorocarbon, sulfur hexafluoride, and regression modeling. Estimated ground-water ages ranged from less than 10 to approximately 40 years, with a median age of about 18 years. The Spearman rho test was used to identify statistically significant covariance among selected physical properties and constituents in the ground water. The alkalinity, specific conductance, and dissolved solids increased as age increased; these correlations reflect common changes in ground-water quality that occur with increasing residence time and support the accuracy of the age estimates. The concentration of sodium and chloride increased as age increased; the correlation of these constituents is interpreted to indicate natural sources for chloride and sodium. The concentration of silica increased as the concentration of potassium increased; this correlation, in addition to the isotopic data, is evidence that silicate hydrolysis of clay minerals occurred. The geochemical modeling program NETPATH was used to investigate possible mixing scenarios that could yield the chemical composition of water collected from springs associated with the Jacksonville Thrust Fault Complex. The results of NETPATH modeling suggest that the primary source of water in Coldwater Spring is a deep aquifer, and only small amounts of rainwater from nearby sources are discharged from the spring. Starting with Piedmont Sports Spring and moving southwest along a conceptual ground-water flow path that parallels the Jacksonville Thrust Fault Complex, NETPATH simulated the observed water quality of each spring, in succession, by mixing rainwater and water from the spring just to the northeast of the spring being modeled. The percentage of rainwater and ground water needed to simulate the quality of water flowing from the springs ranged from 1 to 25 percent rainwater and 75 to 99 percent ground water.
Battaglia, Maurizio; Gottsmann, J.; Carbone, D.; Fernandez, J.
2008-01-01
Time-dependent gravimetric measurements can detect subsurface processes long before magma flow leads to earthquakes or other eruption precursors. The ability of gravity measurements to detect subsurface mass flow is greatly enhanced if gravity measurements are analyzed and modeled with ground-deformation data. Obtaining the maximum information from microgravity studies requires careful evaluation of the layout of network benchmarks, the gravity environmental signal, and the coupling between gravity changes and crustal deformation. When changes in the system under study are fast (hours to weeks), as in hydrothermal systems and restless volcanoes, continuous gravity observations at selected sites can help to capture many details of the dynamics of the intrusive sources. Despite the instrumental effects, mainly caused by atmospheric temperature, results from monitoring at Mt. Etna volcano show that continuous measurements are a powerful tool for monitoring and studying volcanoes. Several analytical and numerical mathematical models can be used to fit gravity and deformation data. Analytical models offer a closed-form description of the volcanic source. In principle, this allows one to readily infer the relative importance of the source parameters. In active volcanic sites such as Long Valley caldera (California, U.S.A.) and Campi Flegrei (Italy), careful use of analytical models and high-quality data sets has produced good results. However, the simplifications that make analytical models tractable might result in misleading volcanological interpretations, particularly when the real crust surrounding the source is far from the homogeneous/isotropic assumption. Using numerical models allows consideration of more realistic descriptions of the sources and of the crust where they are located (e.g., vertical and lateral mechanical discontinuities, complex source geometries, and topography). Applications at Teide volcano (Tenerife) and Campi Flegrei demonstrate the importance of this more realistic description in gravity calculations. © 2008 Society of Exploration Geophysicists. All rights reserved.
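As an example of the simplest analytical description mentioned above, the sketch below evaluates a point pressure source ("Mogi"-type) surface uplift and a first-order gravity change combining the free-air effect of that uplift with the Newtonian attraction of the intruded mass. The source depth, volume change, mass, and homogeneous half-space assumptions are all illustrative, and this is not the modeling workflow of the cited studies.

```python
import numpy as np

G = 6.674e-11          # gravitational constant [m3 kg-1 s-2]
FREE_AIR = 308.6e-8    # free-air gravity gradient [m s-2 per m of uplift] (~308.6 microGal/m)

def mogi_uplift(r, depth, dvol, nu=0.25):
    """Surface uplift of a point pressure source in a homogeneous half-space
    for a source volume change dvol [m3], at radial distance r [m]."""
    R3 = (depth**2 + r**2) ** 1.5
    return (1.0 - nu) * dvol * depth / (np.pi * R3)

def gravity_change(r, depth, dvol, dmass, nu=0.25):
    """First-order gravity change: free-air effect of the uplift plus the
    Newtonian attraction of the intruded mass dmass [kg] (point-mass approximation)."""
    uz = mogi_uplift(r, depth, dvol, nu)
    attraction = G * dmass * depth / (depth**2 + r**2) ** 1.5
    return -FREE_AIR * uz + attraction

r = np.array([0.0, 1000.0, 3000.0])     # radial distance from the source axis [m]
dg = gravity_change(r, depth=3000.0, dvol=1e6, dmass=2.5e9)
print(dg * 1e8)                          # gravity change in microGal
```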
NASA Technical Reports Server (NTRS)
Nieten, Joseph L.; Seraphine, Kathleen M.
1991-01-01
Procedural modeling systems, rule-based modeling systems, and a method for converting a procedural model to a rule-based model are described. Simulation models are used to represent real-time engineering systems. A real-time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule-based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule-based system can be generated by a knowledge acquisition tool or a source-level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule-based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.
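The reactive idea behind such a knowledge network can be sketched as follows: each procedural assignment becomes a rule that fires whenever its inputs change, and values propagate until the network settles. The class, rule names, and example equations are hypothetical illustrations, not the described modeling system.

```python
# A toy "knowledge network": each rule recomputes one variable from its inputs and
# fires whenever an input changes, giving reactive, rule-based behaviour.
class Network:
    def __init__(self):
        self.values, self.rules = {}, []   # rules: (output, inputs, function)

    def add_rule(self, output, inputs, fn):
        self.rules.append((output, inputs, fn))

    def set(self, name, value):
        self.values[name] = value
        self._propagate()

    def _propagate(self):
        changed = True
        while changed:                      # keep firing rules until values settle
            changed = False
            for output, inputs, fn in self.rules:
                if all(i in self.values for i in inputs):
                    new = fn(*(self.values[i] for i in inputs))
                    if self.values.get(output) != new:
                        self.values[output] = new
                        changed = True

# The procedural statements "thrust = mass * accel; margin = limit - thrust"
# become two declarative rules in the network.
net = Network()
net.add_rule("thrust", ["mass", "accel"], lambda m, a: m * a)
net.add_rule("margin", ["limit", "thrust"], lambda l, t: l - t)
net.set("limit", 5000.0)
net.set("mass", 120.0)
net.set("accel", 30.0)
print(net.values["thrust"], net.values["margin"])   # 3600.0 1400.0
```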
Dual-Source Linear Energy Prediction (LINE-P) Model in the Context of WSNs.
Ahmed, Faisal; Tamberg, Gert; Le Moullec, Yannick; Annus, Paul
2017-07-20
Energy harvesting technologies such as miniature power solar panels and micro wind turbines are increasingly used to help power wireless sensor network nodes. However, a major drawback of energy harvesting is its varying and intermittent characteristic, which can negatively affect the quality of service. This calls for careful design and operation of the nodes, possibly by means of, e.g., dynamic duty cycling and/or dynamic frequency and voltage scaling. In this context, various energy prediction models have been proposed in the literature; however, they are typically compute-intensive or only suitable for a single type of energy source. In this paper, we propose Linear Energy Prediction "LINE-P", a lightweight, yet relatively accurate model based on approximation and sampling theory; LINE-P is suitable for dual-source energy harvesting. Simulations and comparisons against existing similar models have been conducted with low and medium resolutions (i.e., 60 and 22 min intervals/24 h) for the solar energy source (low variations) and with high resolutions (15 min intervals/24 h) for the wind energy source. The results show that the accuracy of the solar-based and wind-based predictions is up to approximately 98% and 96%, respectively, while requiring a lower complexity and memory than the other models. For the cases where LINE-P's accuracy is lower than that of other approaches, it still has the advantage of lower computing requirements, making it more suitable for embedded implementation, e.g., in wireless sensor network coordinator nodes or gateways.
NASA Astrophysics Data System (ADS)
Georgiou, K.; Abramoff, R. Z.; Harte, J.; Riley, W. J.; Torn, M. S.
2016-12-01
As global temperatures and atmospheric CO2 concentrations continue to increase, soil microbial activity and decomposition of soil organic matter (SOM) are expected to follow suit, potentially limiting soil carbon storage. Traditional global- and ecosystem-scale models simulate SOM decomposition using linear kinetics, which are inherently unable to reproduce carbon-concentration feedbacks, such as priming of native SOM at elevated CO2 concentrations. Recent studies using nonlinear microbial models of SOM decomposition seek to capture these interactions, and several groups are currently integrating these microbial models into Earth System Models (ESMs). However, despite their widespread ability to exhibit nonlinear responses, these models vary tremendously in complexity and, consequently, dynamics. In this study, we explore, both analytically and numerically, the emergent oscillatory behavior and insensitivity of SOM stocks to carbon inputs that have been deemed `unrealistic' in recent microbial models. We discuss the sources of instability in four models of varying complexity, by sequentially reducing complexity of a detailed model that includes microbial physiology, a mineral sorption isotherm, and enzyme dynamics. We also present an alternative representation of microbial turnover that limits population sizes and, thus, reduces oscillations. We compare these models to several long-term carbon input manipulations, including the Detritus Input and Removal Treatment (DIRT) experiments, to show that there are clear metrics that can be used to distinguish and validate the inherent dynamics of each model structure. We find that traditional linear and nonlinear models cannot readily capture the range of long-term responses observed across the DIRT experiments as a direct consequence of their model structures, and that modifying microbial turnover results in more realistic predictions. Finally, we discuss our findings in the context of improving microbial model behavior for inclusion in ESMs.
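To make the structural point concrete, the sketch below integrates a generic two-pool microbial-explicit model (substrate and biomass with forward Michaelis-Menten uptake); doubling the carbon inputs produces damped oscillations and a steady-state substrate pool that is independent of the input rate, the two behaviors discussed above. The parameter values and model form are illustrative, not any specific model evaluated in the study.

```python
from scipy.integrate import solve_ivp

def microbial_model(t, y, inputs=1.0, vmax=2.0, km=200.0, eps=0.4, kd=0.02):
    """Generic two-pool microbial model (substrate S, biomass B) with
    forward Michaelis-Menten uptake; parameter values are illustrative."""
    s, b = y
    uptake = vmax * b * s / (km + s)
    dsdt = inputs - uptake
    dbdt = eps * uptake - kd * b
    return [dsdt, dbdt]

# Start from the analytical steady state for inputs = 1.0, then double the inputs.
s0 = 200.0 * 0.02 / (2.0 * 0.4 - 0.02)    # steady-state substrate, independent of inputs
b0 = 0.4 * 1.0 / 0.02                     # steady-state biomass for inputs = 1.0
sol = solve_ivp(microbial_model, (0.0, 2000.0), [s0, b0], args=(2.0,), max_step=1.0)

# After damped oscillations, biomass roughly doubles while the substrate pool
# approaches the same value as before: the input-insensitivity noted above.
print("substrate, biomass at end of run:", sol.y[:, -1].round(2))
```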
Tervo, Outi M; Christoffersen, Mads F; Simon, Malene; Miller, Lee A; Jensen, Frants H; Parks, Susan E; Madsen, Peter T
2012-01-01
The low-frequency, powerful vocalizations of blue and fin whales may potentially be detected by conspecifics across entire ocean basins. In contrast, humpback and bowhead whales produce equally powerful, but more complex broadband vocalizations composed of higher frequencies that suffer from higher attenuation. Here we evaluate the active space of high frequency song notes of bowhead whales (Balaena mysticetus) in Western Greenland using measurements of song source levels and ambient noise. Four independent, GPS-synchronized hydrophones were deployed through holes in the ice to localize vocalizing bowhead whales, estimate source levels and measure ambient noise. The song had a mean apparent source level of 185±2 dB rms re 1 µPa @ 1 m and a high mean centroid frequency of 444±48 Hz. Using measured ambient noise levels in the area and Arctic sound spreading models, the estimated active space of these song notes is between 40 and 130 km, an order of magnitude smaller than the estimated active space of low frequency blue and fin whale songs produced at similar source levels and for similar noise conditions. We propose that bowhead whales spatially compensate for their smaller communication range through mating aggregations that co-evolved with broadband song to form a complex and dynamic acoustically mediated sexual display.
Tervo, Outi M.; Christoffersen, Mads F.; Simon, Malene; Miller, Lee A.; Jensen, Frants H.; Parks, Susan E.; Madsen, Peter T.
2012-01-01
The low-frequency, powerful vocalizations of blue and fin whales may potentially be detected by conspecifics across entire ocean basins. In contrast, humpback and bowhead whales produce equally powerful, but more complex broadband vocalizations composed of higher frequencies that suffer from higher attenuation. Here we evaluate the active space of high frequency song notes of bowhead whales (Balaena mysticetus) in Western Greenland using measurements of song source levels and ambient noise. Four independent, GPS-synchronized hydrophones were deployed through holes in the ice to localize vocalizing bowhead whales, estimate source levels and measure ambient noise. The song had a mean apparent source level of 185±2 dB rms re 1 µPa @ 1 m and a high mean centroid frequency of 444±48 Hz. Using measured ambient noise levels in the area and Arctic sound spreading models, the estimated active space of these song notes is between 40 and 130 km, an order of magnitude smaller than the estimated active space of low frequency blue and fin whale songs produced at similar source levels and for similar noise conditions. We propose that bowhead whales spatially compensate for their smaller communication range through mating aggregations that co-evolved with broadband song to form a complex and dynamic acoustically mediated sexual display. PMID:23300591
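The active-space estimate boils down to a passive sonar budget: find the range at which the received level (source level minus transmission loss) drops to the ambient noise plus a detection threshold. The sketch below does this with a simple spreading-plus-absorption loss law; the spreading exponent, absorption, noise level, and threshold are illustrative assumptions rather than the Arctic propagation models used in the study.

```python
import numpy as np

def received_level(r, source_level, spreading=15.0, absorption_db_per_km=0.02):
    """Received level [dB re 1 uPa] at range r [m] under simple geometric
    spreading plus absorption; real Arctic propagation is more complicated."""
    return source_level - spreading * np.log10(r) - absorption_db_per_km * r / 1000.0

def active_space(source_level, noise_level, detection_threshold=10.0,
                 ranges=np.logspace(1, 6, 5000)):
    """Largest range at which the signal still exceeds noise by the detection threshold."""
    audible = received_level(ranges, source_level) >= noise_level + detection_threshold
    return ranges[audible].max() if audible.any() else 0.0

# Illustrative numbers: a 185 dB rms source over assumed in-band noise of ~100 dB.
print(f"active space ~ {active_space(185.0, 100.0) / 1000.0:.0f} km")
```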
A cross-validation package driving Netica with python
Fienen, Michael N.; Plant, Nathaniel G.
2014-01-01
Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross-validation is a technique to avoid overfitting resulting from overly complex BNs. Overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and read, rebuild, and learn BNs from data. Insights gained from cross-validation and its implications for prediction versus description are illustrated with a data-driven oceanographic application and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than the supporting data allow, and that overfitting incurs computational costs as well as a reduction in prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).
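The complexity-versus-skill trade-off that CVNetica measures for Bayesian networks can be illustrated generically. The sketch below uses scikit-learn (not Netica or CVNetica), with tree depth standing in for the level of discretization, to show training skill rising while cross-validated skill degrades as the model overfits; the data and model choice are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(200)

# Increasing model complexity eventually hurts cross-validated skill,
# even though the fit to the training data keeps improving.
for depth in (2, 5, 20):
    model = DecisionTreeRegressor(max_depth=depth, random_state=0)
    cv_skill = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    train_skill = model.fit(X, y).score(X, y)
    print(f"depth={depth:2d}  train R2={train_skill:.2f}  cross-validated R2={cv_skill:.2f}")
```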
Thilak Krishna, Thilakam Vimal; Creusere, Charles D; Voelz, David G
2011-01-01
Polarization, a property of light that conveys information about the transverse electric field orientation, complements other attributes of electromagnetic radiation such as intensity and frequency. Using multiple passive polarimetric images, we develop an iterative, model-based approach to estimate the complex index of refraction and apply it to target classification.
Due to complex population dynamics and source-sink metapopulation processes, animal fitness sometimes varies across landscapes in ways that cannot be deduced from simple density patterns. In this study, we examine spatial patterns in fitness using a combination of intensive fiel...
Making ResourceFULL™ Decisions: A Process Model for Civic Engagement
ERIC Educational Resources Information Center
Radke, Barbara; Chazdon, Scott
2015-01-01
Many public issues are becoming more complex and interconnected, and cannot be resolved by a single individual or entity. Research shows that an informed decision alone is not enough. Addressing these issues requires authentic civic engagement (deliberative dialogue) with the public to reach resourceFULL™ decisions--decisions based on diverse sources of information…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-08
... chemical reactions from precursor gases (e.g., secondary particles). Secondary particles, such as sulfates, nitrates, and complex carbon compounds, are formed from reactions with oxides of sulfur (SO X ), oxides of... nonattainment new source review (nonattainment NSR) permit programs; provisions for air pollution modeling; and...
Toward New Data and Information Management Solutions for Data-Intensive Ecological Research
ERIC Educational Resources Information Center
Laney, Christine Marie
2013-01-01
Ecosystem health is deteriorating in many parts of the world due to direct and indirect anthropogenic pressures. Generating accurate, useful, and impactful models of past, current, and future states of ecosystem structure and function is a complex endeavor that often requires vast amounts of data from multiple sources and knowledge from…
A simulated approach to estimating PM10 and PM2.5 concentrations downwind from cotton gins
USDA-ARS?s Scientific Manuscript database
Cotton gins are required to obtain operating permits from state air pollution regulatory agencies (SAPRA), which regulate the amount of particulate matter that can be emitted. Industrial Source Complex Short Term version 3 (ISCST3) is the Gaussian dispersion model currently used by some SAPRAs to pr...
Galle, J; Hoffmann, M; Aust, G
2009-01-01
Collective phenomena in multi-cellular assemblies can be approached on different levels of complexity. Here, we discuss a number of mathematical models which consider the dynamics of each individual cell, so-called agent-based or individual-based models (IBMs). As a special feature, these models allow one to account for intracellular decision processes that are triggered by biomechanical cell-cell or cell-matrix interactions. We discuss their impact on the growth and homeostasis of multi-cellular systems as simulated by lattice-free models. Our results demonstrate that cell polarisation subsequent to cell-cell contact formation can be a source of stability in epithelial monolayers. Stroma contact-dependent regulation of tumour cell proliferation and migration is shown to result in invasion dynamics consistent with the migrating cancer stem cell hypothesis. However, we demonstrate that different regulation mechanisms can comply equally well with present experimental results. Thus, we suggest a panel of experimental studies for the in-depth validation of the model assumptions.
Nitric oxide bioavailability in the microcirculation: insights from mathematical models.
Tsoukias, Nikolaos M
2008-11-01
Over the last 30 years, nitric oxide (NO) has emerged as a key signaling molecule involved in a number of physiological functions, including the regulation of microcirculatory tone. Despite significant scientific contributions, fundamental questions about NO's role in the microcirculation remain unanswered. Mathematical modeling can assist in investigations of microcirculatory NO physiology and address experimental limitations in quantifying vascular NO concentrations. The number of mathematical models investigating the fate of NO in the vasculature has increased over the last few years, and new models are continuously emerging, incorporating an increasing level of complexity and detail. Models investigate mechanisms that affect NO availability in health and disease. They examine the significance of NO release from nonendothelial sources, the effect of transient release, and the complex interaction of NO with other substances, such as heme-containing proteins and reactive oxygen species. Models are utilized to test and generate hypotheses for the mechanisms that regulate NO-dependent signaling in the microcirculation.
Ground motion in the presence of complex Topography II: Earthquake sources and 3D simulations
Hartzell, Stephen; Ramirez-Guzman, Leonardo; Meremonte, Mark; Leeds, Alena L.
2017-01-01
Eight seismic stations were placed in a linear array with a topographic relief of 222 m over Mission Peak in the east San Francisco Bay region for a period of one year to study topographic effects. Seventy‐two well‐recorded local earthquakes are used to calculate spectral amplitude ratios relative to a reference site. A well‐defined fundamental resonance peak is observed with individual station amplitudes following the theoretically predicted progression of larger amplitudes in the upslope direction. Favored directions of vibration are also seen that are related to the trapping of shear waves within the primary ridge dimensions. Spectral peaks above the fundamental one are also related to topographic effects but follow a more complex pattern. Theoretical predictions using a 3D velocity model and accurate topography reproduce many of the general frequency and time‐domain features of the data. Shifts in spectral frequencies and amplitude differences, however, are related to deficiencies of the model and point out the importance of contributing factors, including the shear‐wave velocity under the topographic feature, near‐surface velocity gradients, and source parameters.
fMRI activation patterns in an analytic reasoning task: consistency with EEG source localization
NASA Astrophysics Data System (ADS)
Li, Bian; Vasanta, Kalyana C.; O'Boyle, Michael; Baker, Mary C.; Nutter, Brian; Mitra, Sunanda
2010-03-01
Functional magnetic resonance imaging (fMRI) is used to model brain activation patterns associated with various perceptual and cognitive processes as reflected by the hemodynamic (BOLD) response. While many sensory and motor tasks are associated with relatively simple activation patterns in localized regions, higher-order cognitive tasks may produce activity in many different brain areas involving complex neural circuitry. We applied a recently proposed probabilistic independent component analysis technique (PICA) to determine the true dimensionality of the fMRI data and used EEG localization to identify the common activated patterns (mapped as Brodmann areas) associated with a complex cognitive task like analytic reasoning. Our preliminary study suggests that a hybrid GLM/PICA analysis may reveal additional regions of activation (beyond simple GLM) that are consistent with electroencephalography (EEG) source localization patterns.
NASA Astrophysics Data System (ADS)
Garg, S.; Sinha, B.; Sinha, V.; Chandra, P.; Sarda Esteve, R.; Gros, V.
2015-12-01
Determining the contribution of different sources to total black carbon (BC) is necessary for targeted mitigation. Absorption Ångström exponent (αabs) measurements have recently been introduced as a novel tool to apportion the contribution of biomass burning sources to BC. The two-component Aethalometer model, which uses αabs as a generic indicator of source type, is widely used to apportion BC between biomass burning and fossil fuel combustion sources. Our work studies BC emissions in the highly populated, anthropogenic-emissions-dominated Indo-Gangetic Plain and demonstrates that αabs cannot be used as a generic tracer for biomass burning emissions in a complex environment. Simultaneously collected high-time-resolution data from a 7-wavelength Aethalometer (AE 42, Magee Scientific, USA) and a high-sensitivity Proton Transfer Reaction-Quadrupole Mass Spectrometer (PTR-MS) installed at a sub-urban site in Mohali (Punjab), India, were used to identify a number of biomass combustion plumes during which BC enhancements correlated strongly with an increase in acetonitrile (a well-established biomass burning tracer) mixing ratio. Each type of biomass combustion is classified and characterized by distinct emission ratios of aromatic compounds and oxygenated VOCs to acetonitrile. The identified types of biomass combustion include two different types of crop residue burning (paddy and wheat), burning of leaf litter, and garbage burning. Traffic (fossil-fuel burning) plumes were also selected for comparison. We find that the two-component Aethalometer source-apportionment method cannot be extrapolated to all types of biomass combustion, and that the αabs of traffic plumes can be >1 in developing countries like India, where the use of adulterated fuel in vehicles is common. Thus, in a complex environment, where multiple anthropogenic BC sources and air masses of variable photochemical age impact a receptor site, the Ångström exponent is not representative of the combustion type and therefore cannot be used as a generic tracer to constrain source contributions.
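The two-component model the authors test is a simple algebraic apportionment: total absorption at each wavelength is taken as the sum of a fossil-fuel and a biomass-burning component, each following a power law with its own Ångström exponent. The sketch below shows that standard calculation; the assumed exponents (αff = 1.0, αbb = 2.0) and the absorption values are illustrative placeholders, and the abstract's point is precisely that such fixed exponents can misattribute sources in a complex environment.

```python
def bb_fraction(b_abs_470, b_abs_950, alpha_ff=1.0, alpha_bb=2.0):
    """Biomass-burning fraction of absorption at 950 nm from the standard
    two-component model: each component scales as wavelength**(-alpha)."""
    scale_ff = (470.0 / 950.0) ** (-alpha_ff)   # 470/950 nm ratio for pure fossil fuel
    scale_bb = (470.0 / 950.0) ** (-alpha_bb)   # 470/950 nm ratio for pure biomass burning
    # Solve b470 = ff950*scale_ff + bb950*scale_bb with b950 = ff950 + bb950
    bb_950 = (b_abs_470 - b_abs_950 * scale_ff) / (scale_bb - scale_ff)
    return max(0.0, min(1.0, bb_950 / b_abs_950))

# Illustrative absorption coefficients (Mm^-1) at 470 and 950 nm
print(f"biomass-burning fraction ~ {bb_fraction(55.0, 20.0):.2f}")
```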
ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.
The recent Nevada Earthquake (M=6) produced an extraordinary set of crustal guided waves. In this study, we examine the three-component data at all the USArray stations in terms of how well existing models perform in predicting the various phases: Rayleigh waves, Love waves, and Pnl waves. To establish the source parameters, we applied the Cut and Paste Code up to a distance of 5° for an average local crustal model, which produced a normal mechanism (strike=35°, dip=41°, rake=-85°) at a depth of 9 km and Mw=5.9. Assuming this mechanism, we generated synthetics at all distances for a number of 1D and 3D models. The Pnl observations fit the synthetics for the simple models well, both in timing (VPn=7.9 km/s) and in waveform, out to a distance of about 5°. Beyond this distance a great deal of complexity can be seen to the northwest, apparently caused by shallow subducted slab material. These paths require considerable crustal thinning and higher P-velocities. Small delays and advances outline the various tectonic provinces to the south (Colorado Plateau, etc.) with velocities compatible with those reported by Song et al. (1996). Five-second Rayleigh waves (Airy phase) can be observed throughout the whole array and show a great deal of variation (up to 30 s). In general, the Love waves are better behaved than the Rayleigh waves. We are presently adding higher frequencies to the source description by including source complexity. Preliminary inversions suggest rupture to the northeast with a shallow asperity. We are also inverting the aftershocks to extend the frequencies to 2 Hz and beyond, following the calibration method outlined in Tan and Helmberger (2007). This will allow accurate directivity measurements for events with magnitude larger than 3.5. Thus, we will address the energy decay with distance as a function of frequency band for the various source types.
Locating multiple diffusion sources in time varying networks from sparse observations.
Hu, Zhao-Long; Shen, Zhesi; Cao, Shinan; Podobnik, Boris; Yang, Huijie; Wang, Wen-Xu; Lai, Ying-Cheng
2018-02-08
Data-based source localization in complex networks has a broad range of applications. Despite recent progress, locating multiple diffusion sources in time-varying networks remains an outstanding problem. Bridging structural observability and sparse signal reconstruction theories, we develop a general framework to locate diffusion sources in time-varying networks based solely on sparse data from a small set of messenger nodes. A general finding is that large-degree nodes produce more valuable information than small-degree nodes, a result that contrasts with the situation for static networks. Choosing large-degree nodes as the messengers, we find that sparse observations from a few such nodes are often sufficient for any number of diffusion sources to be located for a variety of model and empirical networks. Counterintuitively, sources in more rapidly varying networks can be identified more readily with fewer required messenger nodes.
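A toy sketch of the sparse-reconstruction step: observations at a few messenger nodes are written as a linear propagation of an unknown, sparse initial source vector, which is then recovered with an L1-regularized (LASSO) fit. This is a hedged illustration on a static random network with a linear spreading operator; the time-varying network machinery of the paper is not reproduced, and exact recovery is not guaranteed in this toy setting.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n = 60
A = (rng.random((n, n)) < 0.08).astype(float)
A = np.triu(A, 1)
A = A + A.T                                          # random undirected contact network
P = A / np.maximum(A.sum(axis=1, keepdims=True), 1)  # row-stochastic spreading operator

x0 = np.zeros(n)
x0[[5, 37]] = 1.0                                    # two hidden diffusion sources
M = np.linalg.matrix_power(P.T, 4)                   # linear propagation over 4 steps
x_t = M @ x0                                         # network state after spreading

messengers = np.argsort(A.sum(axis=1))[-12:]         # observe only 12 large-degree nodes
y = x_t[messengers]                                  # sparse observations

# Sparse reconstruction: y = M[messengers] @ x0 with x0 sparse and non-negative
est = Lasso(alpha=1e-4, positive=True, max_iter=100000).fit(M[messengers, :], y).coef_
print("top candidate sources:", np.argsort(est)[-2:])
```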
Low-frequency source parameters of twelve large earthquakes. M.S. Thesis
NASA Technical Reports Server (NTRS)
Harabaglia, Paolo
1993-01-01
A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low-frequency energy and in events with a short-term precursor. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method, and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event, and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes; a single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursive event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.
The Solar Wind Source Cycle: Relationship to Dynamo Behavior
NASA Astrophysics Data System (ADS)
Luhmann, J. G.; Li, Y.; Lee, C. O.; Jian, L. K.; Petrie, G. J. D.; Arge, C. N.
2017-12-01
Solar cycle trends of interest include the evolving properties of the solar wind, the heliospheric medium through which the Sun's plasmas and fields interact with Earth and the planets, including the evolution of CMEs/ICMEs en route. Solar wind sources include the coronal holes, the open-field regions that constantly evolve with solar magnetic fields as the cycle progresses, and the streamers between them. The recent cycle has been notably important in demonstrating that not all solar cycles are alike when it comes to contributions from these sources, including in the case of ecliptic solar wind. In particular, it has modified our appreciation of the low-latitude coronal hole and streamer sources because of their relative prevalence. One way to understand the basic relationship between these source differences and what is happening inside the Sun and on its surface is to use observation-based models like the PFSS model to evaluate the evolution of the coronal field geometry. Although the accuracy of these models is compromised around solar maximum by the lack of global surface field information and the sometimes non-potential evolution of the field related to more frequent and widespread emergence of active regions, they still approximate the character of the coronal field state. We use these models to compare the inferred recent-cycle coronal hole and streamer belt sources of solar wind with past-cycle counterparts. The results illustrate how (still) hemispherically asymmetric weak polar fields maintain a complex mix of low-to-mid latitude solar wind sources throughout the latest cycle, with a related marked asymmetry in the hemispheric distribution of the ecliptic wind sources. This is likely to be repeated until the polar field strength significantly increases relative to the fields at low latitudes, and the latter symmetrize.
Solar radio continuum storms and a breathing magnetic field model
NASA Technical Reports Server (NTRS)
1975-01-01
Radio noise continuum emissions observed in metric and decametric wave frequencies are, in general, associated with actively varying sunspot groups accompanied by the S-component of microwave radio emissions. These continuum emission sources, often called type I storm sources, are often associated with type III burst storm activity from metric to hectometric wave frequencies. This storm activity is, therefore, closely connected with the development of these continuum emission sources. It is shown that the S-component emission in microwave frequencies generally precedes, by several days, the emission of these noise continuum storms of lower frequencies. In order for these storms to develop, the growth of sunspot groups into complex types is very important in addition to the increase of the average magnetic field intensity and area of these groups. After giving a review on the theory of these noise continuum storm emissions, a model is briefly considered to explain the relation of the emissions to the storms.
Birmingham, Wendy C; Holt-Lunstad, Julianne
2018-04-05
There is a rich literature on social support and physical health, but research has focused primarily on the protective effects of social relationships. The stress-buffering model asserts that relationships may be protective by serving as a source of support when coping with stress, thereby blunting health-relevant physiological responses. Research also indicates that relationships can be a source of stress, which likewise influences health. In other words, the social buffering influence may have a counterpart: a social aggravating influence with an opposite or opposing effect. Drawing upon existing conceptual models, we expand these to delineate how social relationships may influence stress processes and ultimately health. This review summarizes the existing literature that points to the potential deleterious physiological effects of our relationships when they are sources of stress or exacerbate stress. Copyright © 2018 Elsevier B.V. All rights reserved.
Equivalent radiation source of 3D package for electromagnetic characteristics analysis
NASA Astrophysics Data System (ADS)
Li, Jun; Wei, Xingchang; Shu, Yufei
2017-10-01
In this paper, an equivalent radiation source method is proposed to characterize the electromagnetic emission and interference of complex three-dimensional (3D) integrated circuits (ICs). The method utilizes amplitude-only near-field scanning data to reconstruct an equivalent magnetic dipole array, and the differential evolution optimization algorithm is used to extract the locations, orientations and moments of those dipoles. By importing the equivalent dipole model into a 3D full-wave simulator together with the victim circuit model, electromagnetic interference issues in mixed RF/digital systems can be well predicted. A commercial IC is used to validate the accuracy and efficiency of the proposed method. The coupled power at the victim antenna port calculated from the equivalent radiation source is compared with the measured data; good consistency is obtained, which confirms the validity and efficiency of the method. Project supported by the National Nature Science Foundation of China (No. 61274110).
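As a hedged illustration of the reconstruction step, the sketch below fits a single equivalent magnetic dipole to synthetic amplitude-only near-field data with SciPy's differential evolution optimizer; the real method extracts an array of dipoles and feeds them into a full-wave simulator, which is not reproduced here. The quasi-static 1/r³ amplitude model, the scan-plane geometry, and the dipole parameters are assumptions made for brevity.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Scan plane: 11 x 11 grid of probe points 5 mm above the package
xs, ys = np.meshgrid(np.linspace(-10e-3, 10e-3, 11), np.linspace(-10e-3, 10e-3, 11))
probes = np.column_stack([xs.ravel(), ys.ravel(), np.full(xs.size, 5e-3)])

def field_amplitude(params, pts):
    """|H| of a single z-directed magnetic dipole, quasi-static 1/r^3 model."""
    x0, y0, m = params
    r = np.linalg.norm(pts - np.array([x0, y0, 0.0]), axis=1)
    return np.abs(m) / (4 * np.pi * r**3)

true = (2e-3, -1e-3, 1e-6)                    # hidden dipole (x, y, moment)
measured = field_amplitude(true, probes)      # stand-in for scanned |H| data

def cost(p):
    return np.sum((field_amplitude(p, probes) - measured) ** 2)

bounds = [(-10e-3, 10e-3), (-10e-3, 10e-3), (1e-8, 1e-4)]
result = differential_evolution(cost, bounds, seed=0, tol=1e-12)
print("recovered (x, y, moment):", result.x)
```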
Principal process analysis of biological models.
Casagranda, Stefano; Touzeau, Suzanne; Ropers, Delphine; Gouzé, Jean-Luc
2018-06-14
Understanding the dynamical behaviour of biological systems is challenged by their large number of components and interactions. While efforts have been made in this direction to reduce model complexity, they often prove insufficient to grasp which and when model processes play a crucial role. Answering these questions is fundamental to unravel the functioning of living organisms. We design a method for dealing with model complexity, based on the analysis of dynamical models by means of Principal Process Analysis. We apply the method to a well-known model of circadian rhythms in mammals. The knowledge of the system trajectories allows us to decompose the system dynamics into processes that are active or inactive with respect to a certain threshold value. Process activities are graphically represented by Boolean and Dynamical Process Maps. We detect model processes that are always inactive, or inactive on some time interval. Eliminating these processes reduces the complex dynamics of the original model to the much simpler dynamics of the core processes, in a succession of sub-models that are easier to analyse. We quantify by means of global relative errors the extent to which the simplified models reproduce the main features of the original system dynamics and apply global sensitivity analysis to test the influence of model parameters on the errors. The results obtained prove the robustness of the method. The analysis of the sub-model dynamics allows us to identify the source of circadian oscillations. We find that the negative feedback loop involving proteins PER, CRY, CLOCK-BMAL1 is the main oscillator, in agreement with previous modelling and experimental studies. In conclusion, Principal Process Analysis is a simple-to-use method, which constitutes an additional and useful tool for analysing the complex dynamical behaviour of biological systems.
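The core operation of Principal Process Analysis, evaluating each process term of the ODE model along a simulated trajectory and declaring it inactive when its magnitude falls below a fraction delta of the dominant process acting on the same variable, can be sketched compactly. The example below applies the idea to a toy two-variable negative-feedback oscillator, not the mammalian circadian model of the paper; rate constants and the threshold value are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k1=1.0, k2=0.4, k3=1.2, k4=0.5):
    m, p = y
    processes = np.array([
        k1 / (1 + p**4),   # transcription, repressed by protein p
        -k2 * m,           # mRNA degradation
        k3 * m,            # translation
        -k4 * p,           # protein degradation
    ])
    return [processes[0] + processes[1], processes[2] + processes[3]]

sol = solve_ivp(rhs, (0, 50), [0.1, 0.1], dense_output=True, max_step=0.1)

# Principal-process-style activity: a process is "active" at time t if its
# magnitude exceeds a fraction delta of the dominant process of that variable.
delta = 0.1
for t in np.linspace(0, 50, 6):
    m, p = sol.sol(t)
    procs = np.abs([1.0 / (1 + p**4), 0.4 * m, 1.2 * m, 0.5 * p])
    groups = [procs[:2], procs[2:]]              # process terms per state variable
    active = [list(g >= delta * g.max()) for g in groups]
    print(f"t={t:5.1f}  active processes per variable: {active}")
```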
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Delorey, A.; Rougier, E.; Knight, E. E.; Steedman, D. W.; Bradley, C. R.
2017-12-01
This presentation reports numerical modeling efforts to improve knowledge of the processes that affect seismic wave generation and propagation from underground explosions, with a focus on Rg waves. The numerical model is based on the coupling of hydrodynamic simulation codes (Abaqus, CASH and HOSS) with a 3D full-waveform propagation code, SPECFEM3D. Validation datasets are provided by the Source Physics Experiment (SPE), a series of highly instrumented chemical explosions at the Nevada National Security Site with yields from 100 kg to 5000 kg. A first series of explosions in a granite emplacement has just been completed and a second series in an alluvium emplacement is planned for 2018. The long-term goal of this research is to review and improve current seismic source models (e.g. Mueller & Murphy, 1971; Denny & Johnson, 1991) using first-principles calculations provided by the coupled-code capability. The hydrodynamic codes, Abaqus, CASH and HOSS, model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming and jointed/weathered granite. A new material model for unconsolidated alluvium has been developed and validated against past nuclear explosions, including the 10 kT 1965 Merlin event (Perret, 1971; Perret and Bass, 1975). We use the efficient spectral element method code SPECFEM3D (e.g. Komatitsch, 1998; 2002) and Geologic Framework Models to model the evolution of the wavefield as it propagates across 3D complex structures. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. We will present validation tests and waveforms modeled for several SPE tests, which provide evidence that the damage processes happening in the vicinity of the explosions create secondary seismic sources. These sources interfere with the original explosion moment and reduce the apparent seismic moment at the origin of Rg waves by up to 20%.
OpenFLUID: an open-source software environment for modelling fluxes in landscapes
NASA Astrophysics Data System (ADS)
Fabre, Jean-Christophe; Rabotin, Michaël; Crevoisier, David; Libres, Aline; Dagès, Cécile; Moussa, Roger; Lagacherie, Philippe; Raclot, Damien; Voltz, Marc
2013-04-01
Integrative landscape functioning has become a common concept in environmental management. Landscapes are complex systems where many processes interact in time and space. In agro-ecosystems, these processes are mainly physical processes, including hydrological processes, biological processes and human activities. Modelling such systems requires an interdisciplinary approach, coupling models coming from different disciplines and developed by different teams. In order to support collaborative work involving many models coupled in time and space for integrative simulations, an open software modelling platform is a relevant answer. OpenFLUID is an open-source software platform for modelling landscape functioning, mainly focused on spatial fluxes. It provides an advanced object-oriented architecture allowing users to i) couple models developed de novo or from existing source code, which are dynamically plugged into the platform, ii) represent landscapes as hierarchical graphs, taking into account multiple scales, spatial heterogeneities and the connectivity of landscape objects, iii) run and explore simulations in many ways: using the OpenFLUID software interfaces (command line interface, graphical user interface), or using external applications such as GNU R through the provided ROpenFLUID package. OpenFLUID is developed in C++ and relies on open source libraries only (Boost, libXML2, GLib/GTK, OGR/GDAL, …). For modelers and developers, OpenFLUID provides a dedicated environment for model development, based on an open source toolchain including the Eclipse editor, the GCC compiler and the CMake build system. OpenFLUID is distributed under the GPLv3 open source license, with a special exception allowing existing models licensed under any license to be plugged in. It is clearly in the spirit of sharing knowledge and favouring collaboration in a community of modelers. OpenFLUID has been involved in many research applications, such as modelling of hydrological network transfer, diagnosis and prediction of water quality taking into account human activities, study of the effect of spatial organization on hydrological fluxes, and modelling of surface-subsurface water exchanges. At the LISAH research unit, OpenFLUID is the supporting development platform of the MHYDAS model, a distributed model for agrosystems (Moussa et al., 2002, Hydrological Processes, 16, 393-412). OpenFLUID web site: http://www.openfluid-project.org
Viscoelastic modeling of deformation and gravity changes induced by pressurized magmatic sources
NASA Astrophysics Data System (ADS)
Currenti, Gilda
2018-05-01
Gravity and height changes, which reflect magma accumulation in subsurface chambers, are evaluated using analytical and numerical models in order to investigate their relationships and temporal evolutions. The analysis focuses mainly on the exploration of the time-dependent response of gravity and height changes to the pressurization of ellipsoidal magmatic chambers in viscoelastic media. Firstly, the validation of the numerical Finite Element results is performed by comparison with analytical solutions, which are devised for a simple spherical source embedded in a homogeneous viscoelastic half-space medium. Then, the effect of several model parameters on time-dependent height and gravity changes is investigated thanks to the flexibility of the numerical method in handling complex configurations. Both homogeneous and viscoelastic shell models reveal significantly different amplitudes in the ratio between gravity and height changes depending on geometry factors and medium rheology. The results show that these factors also influence the relaxation characteristic times of the investigated geophysical changes. Overall, these temporal patterns are compatible with time-dependent height and gravity changes observed on Etna volcano during the 1994-1997 inflation period. By modeling the viscoelastic response of a pressurized prolate magmatic source, a general agreement between computed and observed geophysical variations is achieved.
NASA Astrophysics Data System (ADS)
Wang, Kunpeng; Tan, Handong
2017-11-01
Controlled-source audio-frequency magnetotellurics (CSAMT) has developed rapidly in recent years and is widely used in mineral and oil resource exploration as well as other fields. Current theory, numerical simulation, and inversion research are based on the assumption that the underground media have isotropic resistivity. However, a large number of rock and mineral physical-property tests show that the resistivity of underground media is generally anisotropic. With the increasing application of CSAMT, the demand for accuracy in the practical exploration of complex targets continues to increase, and the question of how to evaluate the influence of anisotropic resistivity on the CSAMT response is becoming important. To meet the demand for CSAMT response research in media with anisotropic resistivity, this paper examines the CSAMT electric field equations and derives and implements a three-dimensional (3D) staggered-grid finite difference numerical simulation method for CSAMT with axially anisotropic resistivity. By building a two-dimensional (2D) resistivity-anisotropy geoelectric model, we validate the 3D computation result by comparing it with the result of a controlled-source electromagnetic method (CSEM) resistivity-anisotropy 2D finite element program. By simulating a 3D axially anisotropic resistivity geoelectric model, we compare and analyze the responses of the equatorial configuration, the axial configuration, two oblique sources and a tensor source. The research shows that the tensor source is suitable for CSAMT to recognize the anisotropic effect of underground structure.
Rubin, Daniel L; Hewett, Micheal; Oliver, Diane E; Klein, Teri E; Altman, Russ B
2002-01-01
Ontologies are useful for organizing large numbers of concepts having complex relationships, such as the breadth of genetic and clinical knowledge in pharmacogenomics. But because ontologies change and knowledge evolves, it is time-consuming to maintain stable mappings to external data sources that are in relational format. We propose a method for interfacing ontology models with data acquisition from external relational data sources. This method uses a declarative interface between the ontology and the data source; this interface is modeled in the ontology and implemented using XML Schema. Data are imported from the relational source into the ontology using XML, and data integrity is checked by validating the XML submission against an XML schema. We have implemented this approach in PharmGKB (http://www.pharmgkb.org/), a pharmacogenetics knowledge base. Our goals were to (1) import genetic sequence data, collected in relational format, into the pharmacogenetics ontology, and (2) automate the process of updating the links between the ontology and data acquisition when the ontology changes. We tested our approach by linking PharmGKB with data acquisition from a relational model of genetic sequence information. The ontology subsequently evolved, and we were able to rapidly update our interface with the external data and continue acquiring the data. Similar approaches may be helpful for integrating other heterogeneous information sources in order to make the diversity of pharmacogenetics data amenable to computational analysis.
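The declarative interface described (relational records exported as XML and checked against an XML Schema before entering the ontology) can be illustrated with a small sketch using lxml; the schema and the record below are invented placeholders, not PharmGKB's actual formats.

```python
from lxml import etree

# Hypothetical schema for a sequence record coming from a relational source
schema_doc = etree.XML(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="sequenceRecord">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="geneSymbol" type="xs:string"/>
        <xs:element name="position" type="xs:integer"/>
        <xs:element name="allele" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>""")
schema = etree.XMLSchema(schema_doc)

# Hypothetical submission generated from the relational database
submission = etree.XML(b"""
<sequenceRecord>
  <geneSymbol>CYP2D6</geneSymbol>
  <position>42130692</position>
  <allele>T</allele>
</sequenceRecord>""")

# Validate the submission before loading the record into the ontology
if schema.validate(submission):
    record = {child.tag: child.text for child in submission}
    print("accepted:", record)
else:
    print("rejected:", schema.error_log)
```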
Deformation of Copahue volcano: Inversion of InSAR data using a genetic algorithm
NASA Astrophysics Data System (ADS)
Velez, Maria Laura; Euillades, Pablo; Caselli, Alberto; Blanco, Mauro; Díaz, Jose Martínez
2011-04-01
The Copahue volcano is one of the most active volcanoes in Argentina, with eruptions reported as recently as 1992, 1995 and 2000. A deformation analysis using the Differential Synthetic Aperture Radar technique (DInSAR) was performed on the Copahue-Caviahue Volcanic Complex (CCVC) from Envisat radar images acquired between 2002 and 2007. A deformation rate of approximately 2 cm/yr was calculated, located mostly on the north-eastern flank of Copahue volcano, and assumed to be constant during the period of the interferograms. The geometry of the source responsible for the deformation was evaluated from an inversion of the mean velocity deformation measurements using two different models based on pressure sources embedded in an elastic homogeneous half-space. A genetic algorithm was applied as an optimization tool to find the best-fit source. Results from inverse modelling indicate that a source located beneath the volcano edifice at a mean depth of 4 km is producing a volume change of approximately 0.0015 km³/yr. This source was analysed considering the available studies of the area, and a conceptual model of the volcanic-hydrothermal system was designed. The source of deformation is related to a depressurisation of the system that results from the release of magmatic fluids across the boundary between the brittle and plastic domains. These leakages are considered to be responsible for the weak phreatic eruptions recently registered at the Copahue volcano.
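A common building block for this kind of inversion is the Mogi point pressure source, whose surface displacements depend on source position, depth and volume change; a global search over that parameter space then finds the best fit to the InSAR velocities. The sketch below implements the Mogi vertical-displacement formula and a coarse grid search standing in for the paper's genetic algorithm; source coordinates, noise level and parameter ranges are illustrative.

```python
import numpy as np

def mogi_uz(x, y, xs, ys, depth, dV, nu=0.25):
    """Vertical surface displacement (m) of a Mogi point source at (xs, ys, depth)
    with volume change dV (m^3), in an elastic homogeneous half-space."""
    r2 = (x - xs) ** 2 + (y - ys) ** 2
    return (1 - nu) * dV * depth / (np.pi * (r2 + depth**2) ** 1.5)

# Synthetic "observed" mean velocity field (m/yr) on a 21 x 21 grid
x, y = np.meshgrid(np.linspace(-10e3, 10e3, 21), np.linspace(-10e3, 10e3, 21))
obs = mogi_uz(x, y, 1500.0, -800.0, 4000.0, 1.5e6)
obs += 1e-4 * np.random.default_rng(0).standard_normal(x.shape)   # 0.1 mm noise

# Coarse global search over depth and volume change (stand-in for the GA)
best = None
for depth in np.linspace(2e3, 8e3, 25):
    for dV in np.linspace(0.5e6, 3e6, 26):
        misfit = np.sum((mogi_uz(x, y, 1500.0, -800.0, depth, dV) - obs) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, depth, dV)
print(f"best-fit depth ~ {best[1]:.0f} m, volume change ~ {best[2]:.2e} m^3/yr")
```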
Multi-decadal Dynamics of Mercury in a Complex Ecosystem
NASA Astrophysics Data System (ADS)
Levin, L.
2016-12-01
A suite of air quality and watershed models was applied to track the ecosystem contributions of mercury (Hg), as well as arsenic (As) and selenium (Se), from local and global sources to the San Juan River basin in the Four Corners region of the American Southwest. Long-term changes in surface water and fish tissue mercury concentrations were also simulated, out to the year 2074. Atmospheric mercury was modeled using a nested, spatial-scale modeling system comprising the GEOS-Chem (global scale) and CMAQ-APT (national and regional) models. Four emission scenarios were modeled, including two growth scenarios for Asian mercury emissions. Results showed that the average mercury deposition over the San Juan basin was 21 µg/m²-yr. Source contributions to mercury deposition range from 2% to 9% of total deposition prior to post-2016 U.S. controls for air toxics regulatory compliance. Most of the contributions to mercury deposition in the basin are from non-U.S. sources. Watershed simulations showed that power plant contributions to fish tissue mercury never exceeded 0.035% during the 85-year model simulation period, even with the long-term growth in fish tissue mercury over that period. Local coal-fired power plants contributed relatively small fractions (less than 4%) to mercury deposition in the basin; background and non-U.S. anthropogenic sources dominated. Fish tissue mercury levels are projected to increase through 2074 due to growth projections for non-U.S. emission sources. The most important contributor to methylmercury in the lower reaches of the watershed was advection of MeHg produced in situ at upstream headwater locations.
Simulating water-quality trends in public-supply wells in transient flow systems
Starn, J. Jeffrey; Green, Christopher T.; Hinkle, Stephen R.; Bagtzoglou, Amvrossios C.; Stolp, Bernard J.
2014-01-01
Models need not be complex to be useful. An existing groundwater-flow model of Salt Lake Valley, Utah, was adapted for use with convolution-based advective particle tracking to explain broad spatial trends in dissolved solids. This model supports the hypothesis that water produced from wells is increasingly younger with higher proportions of surface sources as pumping changes in the basin over time. At individual wells, however, predicting specific water-quality changes remains challenging. The influence of pumping-induced transient groundwater flow on changes in mean age and source areas is significant. Mean age and source areas were mapped across the model domain to extend the results from observation wells to the entire aquifer to see where changes in concentrations of dissolved solids are expected to occur. The timing of these changes depends on accurate estimates of groundwater velocity. Calibration to tritium concentrations was used to estimate effective porosity and improve correlation between source area changes, age changes, and measured dissolved solids trends. Uncertainty in the model is due in part to spatial and temporal variations in tracer inputs, estimated tracer transport parameters, and in pumping stresses at sampling points. For tracers such as tritium, the presence of two-limbed input curves can be problematic because a single concentration can be associated with multiple disparate travel times. These shortcomings can be ameliorated by adding hydrologic and geologic detail to the model and by adding additional calibration data. However, the Salt Lake Valley model is useful even without such small-scale detail.
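The convolution idea treats the concentration at a well as the tracer input history convolved with the well's travel-time (age) distribution, with radioactive decay applied along each travel time; effective porosity enters by rescaling the travel times. Below is a minimal sketch assuming an exponential age distribution and an invented tritium input history; none of the numbers correspond to the Salt Lake Valley model.

```python
import numpy as np

HALF_LIFE = 12.32                      # tritium half-life, years
years = np.arange(1950, 2015)

# Invented tritium input history (TU) peaking in the early 1960s (bomb pulse)
c_in = 5 + 1000 * np.exp(-0.5 * ((years - 1963) / 4.0) ** 2)

def well_concentration(mean_age, years, c_in):
    """Convolve the input history with an exponential travel-time distribution
    and apply radioactive decay along each travel time."""
    out = np.zeros_like(c_in)
    for i, yr in enumerate(years):
        taus = yr - years[: i + 1]                  # travel times back to each input year
        g = np.exp(-taus / mean_age) / mean_age     # exponential age distribution
        decay = 0.5 ** (taus / HALF_LIFE)
        w = g / g.sum()                             # normalise over the available history
        out[i] = np.sum(c_in[: i + 1] * decay * w)
    return out

for mean_age in (5, 20, 60):                        # mean age scales with effective porosity
    c_2014 = well_concentration(mean_age, years, c_in)[-1]
    print(f"mean age {mean_age:3d} yr -> simulated 2014 tritium ~ {c_2014:6.1f} TU")
```

Comparing such simulated concentrations against measured tritium is, in essence, how effective porosity is calibrated in this style of model.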
Kenow, Kevin P.; Ge, Zhongfu; Fara, Luke J.; Houdek, Steven C.; Lubinski, Brian R.
2016-01-01
Avian botulism type E is responsible for extensive waterbird mortality on the Great Lakes, yet the actual site of toxin exposure remains unclear. Beached carcasses are often used to describe the spatial aspects of botulism mortality outbreaks, but lack specificity of offshore toxin source locations. We detail methodology for developing a neural network model used for predicting waterbird carcass motions in response to wind, wave, and current forcing, in lieu of a complex analytical relationship. This empirically trained model uses current velocity, wind velocity, significant wave height, and wave peak period in Lake Michigan simulated by the Great Lakes Coastal Forecasting System. A detailed procedure is further developed to use the model for back-tracing waterbird carcasses found on beaches in various parts of Lake Michigan, which was validated using drift data for radiomarked common loon (Gavia immer) carcasses deployed at a variety of locations in northern Lake Michigan during September and October of 2013. The back-tracing model was further used on 22 non-radiomarked common loon carcasses found along the shoreline of northern Lake Michigan in October and November of 2012. The model-estimated origins of those cases pointed to some common source locations offshore that coincide with concentrations of common loons observed during aerial surveys. The neural network source tracking model provides a promising approach for identifying locations of botulinum neurotoxin type E intoxication and, in turn, contributes to developing an understanding of the dynamics of toxin production and possible trophic transfer pathways.
Distributed watershed modeling of design storms to identify nonpoint source loading areas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endreny, T.A.; Wood, E.F.
1999-03-01
Watershed areas that generate nonpoint source (NPS) polluted runoff need to be identified prior to the design of basin-wide water quality projects. Current watershed-scale NPS models lack a variable source area (VSA) hydrology routine, and are therefore unable to identify spatially dynamic runoff zones. The TOPLATS model used a watertable-driven VSA hydrology routine to identify runoff zones in a 17.5 km² agricultural watershed in central Oklahoma. Runoff areas were identified in a static modeling framework as a function of prestorm watertable depth and also in a dynamic modeling framework by simulating basin response to 2, 10, and 25 yr return period 6 h design storms. Variable source area expansion occurred throughout the duration of each 6 h storm and total runoff area increased with design storm intensity. Basin-average runoff rates of 1 mm h⁻¹ provided little insight into runoff extremes while the spatially distributed analysis identified saturation excess zones with runoff rates equaling effective precipitation. The intersection of agricultural landcover areas with these saturation excess runoff zones targeted the priority potential NPS runoff zones that should be validated with field visits. These intersected areas, labeled as potential NPS runoff zones, were mapped within the watershed to demonstrate spatial analysis options available in TOPLATS for managing complex distributions of watershed runoff. TOPLATS concepts in spatial saturation excess runoff modeling should be incorporated into NPS management models.
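The variable source area mechanism can be illustrated with a TOPMODEL-style wetness index: cells with a small local storage deficit saturate first, and the saturated (runoff-generating) fraction expands through the storm as effective precipitation reduces the basin-average deficit. This is a toy sketch on a synthetic index field standing in for the TOPLATS watertable routine; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
twi = rng.gamma(shape=4.0, scale=2.0, size=(100, 100))   # synthetic wetness index field

def saturated_fraction(twi, mean_deficit_mm, m_mm=20.0):
    """Cells saturate where the local deficit (smaller for high-index cells) reaches zero."""
    local_deficit = mean_deficit_mm + m_mm * (twi.mean() - twi)
    return np.mean(local_deficit <= 0.0)

# Storm: effective precipitation eats into the basin-average deficit hour by hour
deficit = 200.0                        # mm, pre-storm basin-average deficit
for hour, p_eff in enumerate([5, 10, 20, 25, 15, 5], start=1):   # 6 h design storm (mm/h)
    deficit -= p_eff
    frac = saturated_fraction(twi, deficit)
    print(f"hour {hour}: saturated (runoff-generating) area = {100 * frac:4.1f}% of basin")
```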
NASA Technical Reports Server (NTRS)
Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.
2001-01-01
Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to the distributed nature of the electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source, and is implemented by minimising the chi-square per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with sizes between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
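The inverse step (finding the single point dipole whose potentials best reproduce the electrode potentials generated by the distributed belt source) can be sketched with an unbounded-medium potential model and a nonlinear least-squares fit; the bounded spherical torso, the chi-square weighting, and the geometry of the study are simplified away, and all values below are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

SIGMA = 0.2                                     # conductivity (S/m), illustrative

def dipole_potentials(params, electrodes):
    """Potential of a point dipole (position r0, moment p) at each electrode,
    infinite homogeneous medium approximation."""
    r0, p = params[:3], params[3:]
    d = electrodes - r0
    return (d @ p) / (4 * np.pi * SIGMA * np.linalg.norm(d, axis=1) ** 3)

# 32 electrodes on a 0.1 m sphere (stand-in for the torso surface)
rng = np.random.default_rng(3)
pts = rng.standard_normal((32, 3))
electrodes = 0.1 * pts / np.linalg.norm(pts, axis=1, keepdims=True)

# Distributed source: a belt of 20 small dipoles, all pointing along +x (simplification)
angles = np.linspace(0, 2 * np.pi, 20, endpoint=False)
belt = np.column_stack([np.full(20, 0.01), 0.02 * np.cos(angles), 0.02 * np.sin(angles)])
moment_each = np.array([1e-6, 0.0, 0.0])
v_obs = sum(dipole_potentials(np.concatenate([b, moment_each]), electrodes) for b in belt)

# Fit one equivalent dipole and compare its location with the belt centroid
fit = least_squares(lambda q: dipole_potentials(q, electrodes) - v_obs,
                    x0=np.array([0.0, 0.0, 0.0, 1e-6, 0.0, 0.0]))
print("belt centroid:          ", belt.mean(axis=0))
print("equivalent dipole found:", fit.x[:3])
```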
Bouse, R.M.; Ruiz, J.; Titley, S.R.; Tosdal, R.M.; Wooden, J.L.
1999-01-01
Porphyry copper deposits in Arizona are genetically associated with Late Cretaceous and early Tertiary igneous complexes that consist of older intermediate volcanic rocks and younger intermediate to felsic intrusions. The igneous complexes and their associated porphyry copper deposits were emplaced into an Early Proterozoic basement characterized by different rocks, geologic histories, and isotopic compositions. Lead isotope compositions of the Proterozoic basement rocks define, from northwest to southeast, the Mojave, central Arizona, and southeastern Arizona provinces. Porphyry copper deposits are present in each Pb isotope province. Lead isotope compositions of Late Cretaceous and early Tertiary plutons, together with those of sulfide minerals in porphyry copper deposits and of Proterozoic country rocks, place important constraints on genesis of the magmatic suites and the porphyry copper deposits themselves. The range of age-corrected Pb isotope compositions of plutons in 12 Late Cretaceous and early Tertiary igneous complexes is 206Pb/204Pb = 17.34 to 22.66, 207Pb/204Pb = 15.43 to 15.96, and 208Pb/204Pb = 37.19 to 40.33. These Pb isotope compositions and calculated model Th/U are similar to those of the Proterozoic rocks in which the plutons were emplaced, thereby indicating that Pb in the younger rocks and ore deposits was inherited from the basement rocks and their sources. No Pb isotope differences distinguish Late Cretaceous and early Tertiary igneous complexes that contain large economic porphyry copper deposits from less rich or smaller deposits that have not been considered economic for mining. Lead isotope compositions of Late Cretaceous and early Tertiary plutons and sulfide minerals from 30 metallic mineral districts, furthermore, require that the southeastern Arizona Pb province be divided into two subprovinces. The northern subprovince has generally lower 206Pb/204Pb and higher model Th/U, and the southern subprovince has higher 206Pb/204Pb and lower model Th/U. These Pb isotope differences are inferred to result from differences in their respective post-1.7 Ga magmatic histories. Throughout Arizona, Pb isotope compositions of Late Cretaceous and early Tertiary plutons and associated sulfide minerals are distinct from those of Jurassic plutons and also middle Tertiary igneous rocks and sulfide minerals. These differences most likely reflect changes in tectonic setting and magmatic sources. Within Late Cretaceous and early Tertiary igneous complexes that host economic porphyry copper deposits, there is commonly a decrease in Pb isotope composition from older to younger plutons. This decrease in Pb isotope values with time suggests an increasing involvement of crust with lower U/Pb than average crust in the source(s) of Late Cretaceous and early Tertiary magmas. Lead isotope compositions of the youngest porphyries in the igneous complexes are similar to those in most sulfide minerals within the associated porphyry copper deposit. This Pb isotope similarity argues for a genetic link between them. However, not all Pb in the sulfide minerals in porphyry copper deposits is magmatically derived. Some sulfide minerals, particularly those that are late stage, or distal to the main orebody, or in Proterozoic or Paleozoic rocks, have elevated Pb isotope compositions displaced toward the gross average Pb isotope composition of the local country rocks. The more radiogenic isotopic compositions argue for a contribution of Pb from those rocks at the site of ore deposition. 
Combining the Pb isotope data with available geochemical, isotopic, and petrologic data suggests derivation of the young porphyry copper-related plutons, most of their Pb, and other metals from a hybridized lower continental crustal source. Because of the likely involvement of subduction-related mantle-derived basaltic magma in the hybridized lower crustal source, an indiscernible mantle contribution is probable in the porphyry magmas. Clearly, in addition
Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J.
2016-01-01
Information from various public and private data sources of extremely large sample sizes are now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an “internal” study while utilizing summary-level information, such as information on parameters for reduced models, from an “external” big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature. PMID:27570323
NASA Astrophysics Data System (ADS)
Difilippo, E. L.; Hammond, D. E.; Douglas, R.; Clark, J. F.; Avisar, D.; Dunker, R.
2004-12-01
The Abalone Cove landslide occupies 80 acres of an ancient landslide complex on the Palos Verdes peninsula, and was re-activated in 1979. The uphill portion of the ancient landslide complex has remained stable in historic times. Water infiltration into the slide is a short term catalyst for mass movement in the area, so it is important to determine the sources of groundwater throughout the slide mass. Water may enter the slide mass through direct percolation of recent precipitation, inflow along the head scarp of the ancient landslide or by rising through the slide plane from a deeper aquifer. The objective of this contribution is to use geochemical tracers (tritium and CFC-12) in combination with numerical modeling to constrain the importance of each of these sources. Numerical models were constructed to predict geochemical tracer concentrations throughout the basin, assuming that the only source of water to the slide mass is percolation of recent precipitation. Predicted concentrations were then compared to measured tracer values. In the ancient landslide, predicted and measured tracer concentrations are in good agreement, indicating that most of the water in this area is recent precipitation falling within the basin. Groundwater recharged uphill of the ancient landslide contributes minor flow into the complex through the head scarp, with the majority of this water flowing beneath the ancient slide plane. However, predicted tracer concentrations in the toe of the Abalone Cove landslide are not consistent with measured values. Both CFC-12 and tritium concentrations indicate that water is older than predicted and communication between the slide mass and the aquifer beneath the slide plane must occur in this area. Infiltration of this deep circulating water may exert upward hydraulic pressure on the landslide slip surface, increasing the potential for movement. This hypothesis is consistent with the observation that current movement is only occurring in the area in which tracers indicate communication with the deeper aquifer.
NASA Astrophysics Data System (ADS)
Kumar, J.; Jain, A.; Srivastava, R.
2005-12-01
The identification of pollution sources in aquifers is an important area of research not only for hydrologists but also for local and federal agencies and defense organizations. Once pollutant concentration measurements at observation wells become available, it is important to identify the polluting industry in order to implement punitive or remedial measures. Traditionally, hydrologists have relied on conceptual methods for the identification of groundwater pollution sources. Identifying groundwater pollution sources with conceptual methods requires a thorough understanding of groundwater flow and contaminant transport processes and inverse modeling procedures, which are highly complex and difficult to implement. Recently, soft computing techniques, such as artificial neural networks (ANNs) and genetic algorithms, have provided an attractive and easy-to-implement alternative for solving complex problems efficiently. Some researchers have used ANNs for the identification of pollution sources in aquifers. A major problem with most previous studies using ANNs has been the large size of the neural networks needed to model the inverse problem. The breakthrough curves at an observation well may consist of hundreds of concentration measurements, and presenting all of them to the input layer of an ANN not only results in very large networks but also requires large amounts of training and testing data to develop the ANN models. This paper presents the results of a study aimed at using certain characteristics of the breakthrough curves and ANNs for determining the distance of the pollution source from a given observation well. Two different neural network models are developed that differ in the manner of characterizing the breakthrough curves. The first ANN model uses five parameters, similar to the synthetic unit hydrograph parameters, to characterize the breakthrough curves. The five parameters employed are peak concentration, time to peak concentration, the widths of the breakthrough curve at 50% and 75% of the peak concentration, and the time base of the breakthrough curve. The second ANN model employs only the first four parameters, leaving out the time base. The measurement of a breakthrough curve at an observation well involves very high costs in sample collection at suitable time intervals and analysis for various contaminants. The receding portions of breakthrough curves are normally very long, and excluding the time base from modeling would result in considerable cost savings. Feed-forward multi-layer perceptron (MLP) neural networks, trained using the back-propagation algorithm, are employed in this study. A new approach for ANN training using back-propagation is employed that considers two different error statistics to prevent over-training and under-training of the ANNs. The ANN models for the two approaches were developed using simulated data generated for conservative pollutant transport through a homogeneous aquifer. The preliminary results indicate that the ANNs are able to identify the location of the pollution source very efficiently with both methods of breakthrough curve characterization.
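The five breakthrough-curve descriptors fed to the first ANN (peak concentration, time to peak, widths at 50% and 75% of the peak, and time base) can be extracted directly from a sampled curve; the sketch below computes them for a synthetic pulse. The curve shape, sampling, and the small threshold used here to define the time base are invented, and the ANN itself is omitted.

```python
import numpy as np

t = np.linspace(0, 200, 401)                                                  # days
c = 12.0 * np.exp(-((np.log(np.maximum(t, 1e-6)) - np.log(60)) ** 2) / 0.18)  # synthetic curve

def width_at(t, c, level):
    """Total duration during which concentration exceeds `level`."""
    above = c >= level
    return t[above][-1] - t[above][0] if above.any() else 0.0

def breakthrough_features(t, c, base_frac=0.01):
    peak = c.max()
    return {
        "peak_concentration": peak,
        "time_to_peak": t[np.argmax(c)],
        "width_50pct": width_at(t, c, 0.50 * peak),
        "width_75pct": width_at(t, c, 0.75 * peak),
        "time_base": width_at(t, c, base_frac * peak),   # duration above a small threshold
    }

for name, value in breakthrough_features(t, c).items():
    print(f"{name:20s}: {value:8.2f}")
```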
NASA Astrophysics Data System (ADS)
Wyche, K. P.; Monks, P. S.; Smallbone, K. L.; Hamilton, J. F.; Alfarra, M. R.; Rickard, A. R.; McFiggans, G. B.; Jenkin, M. E.; Bloss, W. J.; Ryan, A. C.; Hewitt, C. N.; MacKenzie, A. R.
2015-07-01
Highly non-linear dynamical systems, such as those found in atmospheric chemistry, necessitate hierarchical approaches to both experiment and modelling in order, ultimately, to identify and achieve fundamental process-understanding in the full open system. Atmospheric simulation chambers comprise an intermediate in complexity, between a classical laboratory experiment and the full, ambient system. As such, they can generate large volumes of difficult-to-interpret data. Here we describe and implement a chemometric dimension reduction methodology for the deconvolution and interpretation of complex gas- and particle-phase composition spectra. The methodology comprises principal component analysis (PCA), hierarchical cluster analysis (HCA) and partial least-squares discriminant analysis (PLS-DA). These methods are, for the first time, applied to simultaneous gas- and particle-phase composition data obtained from a comprehensive series of environmental simulation chamber experiments focused on biogenic volatile organic compound (BVOC) photooxidation and associated secondary organic aerosol (SOA) formation. We primarily investigated the biogenic SOA precursors isoprene, α-pinene, limonene, myrcene, linalool and β-caryophyllene. The chemometric analysis is used to classify the oxidation systems and resultant SOA according to the controlling chemistry and the products formed. Results show that "model" biogenic oxidative systems can be successfully separated and classified according to their oxidation products. Furthermore, a holistic view of results obtained across both the gas- and particle-phases shows that the different SOA formation chemistry, initiating in the gas phase, governs the differences between the various BVOC SOA compositions. The results obtained are used to describe the particle composition in the context of the oxidised gas-phase matrix. An extension of the technique, which incorporates into the statistical models data from anthropogenic (i.e. toluene) oxidation and "more realistic" plant mesocosm systems, demonstrates that such an ensemble of chemometric mapping has the potential to be used for the classification of more complex spectra of unknown origin. More specifically, the addition of mesocosm data from fig and birch tree experiments shows that isoprene- and monoterpene-emitting sources, respectively, can be mapped onto the statistical model structure, and their positional vectors can provide insight into their biological sources and controlling oxidative chemistry. The potential to extend the methodology to the analysis of ambient air is discussed using results obtained from a zero-dimensional box model incorporating mechanistic data obtained from the Master Chemical Mechanism (MCMv3.2). Such an extension to analysing ambient air would prove a powerful asset in assisting with the identification of SOA sources and the elucidation of the underlying chemical mechanisms involved.
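The chemometric workflow (dimension reduction by PCA followed by hierarchical clustering, with a supervised discriminant step on top) maps onto standard scikit-learn and SciPy calls. Below is a hedged sketch on a random stand-in for the gas- and particle-phase composition matrix; the real analysis works on mass-spectral intensities per chamber experiment and adds PLS-DA, which is not shown here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
# Stand-in composition matrix: 18 chamber experiments x 120 spectral features,
# drawn from three precursor classes with different underlying signatures
centres = rng.normal(size=(3, 120))
X = np.vstack([centres[i] + 0.3 * rng.normal(size=(6, 120)) for i in range(3)])

Xs = StandardScaler().fit_transform(X)          # scale features before PCA
scores = PCA(n_components=3).fit_transform(Xs)  # reduce to a few principal components

# Hierarchical cluster analysis on the PC scores (Ward linkage, three clusters)
clusters = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
print("experiment -> cluster:", clusters)
```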
Li, Weifeng; Cao, Qiwen; Lang, Kun; Wu, Jiansheng
2017-05-15
Rapid urbanization has significantly contributed to the development of urban heat island (UHI). Regulating landscape composition and configuration would help mitigate the UHI in megacities. Taking Shenzhen, China, as a case study area, we defined heat source and heat sink and identified strong and weak sources as well as strong and weak sinks according to the natural and socioeconomic factors influencing land surface temperature (LST). Thus, the potential thermal contributions of heat source and heat sink patches were differentiated. Then, the heterogeneous effects of landscape pattern on LST were examined by using semiparametric geographically weighted regression (SGWR) models. The results showed that landscape composition has more significant effects on thermal environment than configuration. For a strong source, the percentage of patches has a positive impact on LST. Additionally, when mosaicked with some heat sink, even a small improvement in the degree of dispersion of a strong source helps to alleviate UHI. For a weak source, the percentage and density of patches have positive impacts on LST. For a strong sink, the percentage, density, and degree of aggregation of patches have negative impacts on LST. The effects of edge density and patch shape complexity vary spatially with the fragmentation of a strong sink. Similarly, the impacts of a weak sink are mainly exerted via the characteristics of percent, density, and shape complexity of patches. Copyright © 2017 Elsevier B.V. All rights reserved.
Dinov, Ivo D.; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W.; Price, Nathan D.; Van Horn, John D.; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M.; Dauer, William; Toga, Arthur W.
2016-01-01
Background A unique archive of Big Data on Parkinson’s Disease is collected, managed and disseminated by the Parkinson’s Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson’s disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data–large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources–all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Methods and Findings Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson’s disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Conclusions Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson’s disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. 
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer’s, Huntington’s, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications. PMID:27494614
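A hedged sketch of the model-free classification step: minority-class oversampling followed by cross-validated AdaBoost and SVM classifiers in scikit-learn. The synthetic feature table stands in for the PPMI clinical, genetic and imaging variables; in practice the rebalancing should be performed inside each training fold rather than once up front.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.utils import resample

# Synthetic stand-in for the multi-source feature table (clinical + genetic + imaging).
X, y = make_classification(n_samples=600, n_features=30, weights=[0.8, 0.2],
                           random_state=0)

# Simple rebalancing: oversample the minority class to match the majority.
# (Done globally here for brevity; doing it per training fold avoids leakage.)
Xmin, ymin = X[y == 1], y[y == 1]
Xmaj, ymaj = X[y == 0], y[y == 0]
Xmin_up, ymin_up = resample(Xmin, ymin, n_samples=len(ymaj), random_state=0)
Xb = np.vstack([Xmaj, Xmin_up])
yb = np.concatenate([ymaj, ymin_up])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in [("AdaBoost", AdaBoostClassifier(n_estimators=200, random_state=0)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    acc = cross_val_score(clf, Xb, yb, cv=cv, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {acc.mean():.3f}")
```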
Luck, Margaux; Bertho, Gildas; Bateson, Mathilde; Karras, Alexandre; Yartseva, Anastasia; Thervet, Eric
2016-01-01
1H Nuclear Magnetic Resonance (NMR)-based metabolic profiling is very promising for the diagnosis of the stages of chronic kidney disease (CKD). Because of the high dimension of NMR spectra datasets and the complex mixture of metabolites in biological samples, the identification of discriminant biomarkers of a disease is challenging. None of the widely used chemometric methods in NMR metabolomics performs a local exhaustive exploration of the data. We developed a descriptive and easily understandable approach that searches for discriminant local phenomena using an original exhaustive rule-mining algorithm in order to predict two groups of patients: 1) patients having low to mild CKD stages with no renal failure and 2) patients having moderate to established CKD stages with renal failure. Our predictive algorithm explores the m-dimensional variable space to capture the local overdensities of the two groups of patients in the form of easily interpretable rules. Afterwards, an L2-penalized logistic regression on the discriminant rules was used to build predictive models of the CKD stages. We explored a complex multi-source dataset that included the clinical, demographic, clinical chemistry, renal pathology and urine metabolomic data of a cohort of 110 patients. Given this multi-source dataset and the complex nature of metabolomic data, we analyzed 1- and 2-dimensional rules in order to integrate the information carried by the interactions between the variables. The results indicated that our local algorithm is a valuable analytical method for the precise characterization of multivariate CKD stage profiles and is as efficient as the classical global model using chi-squared variable selection, with approximately 70% correct classification. The resulting predictive models predominantly identify urinary metabolites (such as 3-hydroxyisovalerate, carnitine, citrate, dimethylsulfone, creatinine and N-methylnicotinamide) as relevant variables, indicating that CKD significantly affects the urinary metabolome. In addition, knowledge of the urinary metabolite concentrations alone classifies the CKD stage of the patients correctly. PMID:27861591
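The following toy sketch illustrates the general idea of rule-based prediction, here with simple 1-D quantile-interval rules rather than the original exhaustive 1- and 2-dimensional rule miner, followed by an L2-penalized logistic regression on the binary rule matrix; the data and rule construction are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(110, 20))                  # e.g. binned NMR intensities + clinical variables
y = (X[:, 3] + 0.8 * X[:, 7] + rng.normal(0, 1, 110) > 0).astype(int)

def interval_rules(X, n_bins=4):
    """1-D rules: indicator of 'variable j falls in quantile bin b'."""
    feats, names = [], []
    for j in range(X.shape[1]):
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))
        for b in range(n_bins):
            lo, hi = edges[b], edges[b + 1]
            feats.append((X[:, j] >= lo) & (X[:, j] <= hi))
            names.append(f"x{j} in [{lo:.2f}, {hi:.2f}]")
    return np.array(feats, dtype=float).T, names

R, rule_names = interval_rules(X)

# L2-penalized logistic regression on the binary rule matrix
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=2000)
print("CV accuracy:", cross_val_score(clf, R, y, cv=5).mean())

# Rules with the largest absolute weights are the most discriminant
clf.fit(R, y)
top = np.argsort(np.abs(clf.coef_[0]))[::-1][:5]
print([rule_names[i] for i in top])
```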
Zevin, Jason D; Miller, Brett
Reading research is increasingly a multi-disciplinary endeavor involving more complex, team-based science approaches. These approaches offer the potential of capturing the complexity of reading development, the emergence of individual differences in reading performance over time, how these differences relate to the development of reading difficulties and disability, and more fully understanding the nature of skilled reading in adults. This special issue focuses on the potential opportunities and insights that early and richly integrated advanced statistical and computational modeling approaches can provide to our foundational (and translational) understanding of reading. The issue explores how computational and statistical modeling, using both observed and simulated data, can serve as a contact point among research domains and topics, complement other data sources and critically provide analytic advantages over current approaches.
Geochemistry and geodynamics of the Mawat mafic complex in the Zagros Suture zone, northeast Iraq
NASA Astrophysics Data System (ADS)
Azizi, Hossein; Hadi, Ayten; Asahara, Yoshihiro; Mohammad, Youssef Osman
2013-12-01
The Iraqi Zagros Orogenic Belt includes two separate ophiolite belts, which extend along a northwest-southeast trend near the Iranian border. The outer belt shows ophiolite sequences and originated at an oceanic ridge or in a supra-subduction zone. The inner belt includes the Mawat complex, which is parallel to the outer belt and is separated from it by the Biston Avoraman block. The Mawat complex, with its zoned structure, includes sedimentary rocks interbedded with mafic lava and tuff, together with thick mafic and ultramafic rocks. This complex does not show a typical ophiolite sequence such as those in Penjween and Bulfat. The Mawat complex shows evidence of dynamic deformation during the Late Cretaceous. Geochemical data suggest that the basic rocks have high MgO and are significantly depleted in LREE relative to HREE. In addition, they show positive εNd values (+5 to +8) and low 87Sr/86Sr ratios. The occurrence of some OIB-type rocks, high-Mg basaltic rocks and some intermediate compositions between these two indicates the evolution of the Mawat complex from a primary, depleted mantle source. The absence of a typical ophiolite sequence and the good compatibility of the source magma with magma extracted from a mantle plume suggest that a plume rising from the D″ layer is a more consistent source for this complex than oceanic ridge or supra-subduction zone settings. In our proposed model, the Mawat basin represents an extensional basin that formed from the Late Paleozoic onwards along the Arabian passive margin, oriented parallel to the Neo-Tethys oceanic ridge or spreading center. The Mawat extensional basin formed without the creation of new oceanic basement. During the extension, huge volumes of mafic lava were intruded into this basin. This basin was squeezed between the Arabian Plate and the Biston Avoraman block during the Late Cretaceous.
Pursiainen, S; Vorwerk, J; Wolters, C H
2016-12-21
The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. While conducting an EEG evaluation, the placement of source currents to the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence conforming H(div) basis functions. Both linear and quadratic functions are used while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position based optimization (PBO) and the mean position/orientation (MPO) method. These results demonstrate that the present dipolar approach can reach or even surpass, at least in some respects, the accuracy of two classical reference methods, the partial integration (PI) and St. Venant (SV) approach which utilize monopolar loads instead of dipolar currents.
The Top 10 List of Gravitational Lens Candidates from the HUBBLE SPACE TELESCOPE Medium Deep Survey
NASA Astrophysics Data System (ADS)
Ratnatunga, Kavan U.; Griffiths, Richard E.; Ostrander, Eric J.
1999-05-01
A total of 10 good candidates for gravitational lensing have been discovered in the WFPC2 images from the Hubble Space Telescope (HST) Medium Deep Survey (MDS) and archival primary observations. These candidate lenses are unique HST discoveries, i.e., they are faint systems with subarcsecond separations between the lensing objects and the lensed source images. Most of them are difficult objects for ground-based spectroscopic confirmation or for measurement of the lens and source redshifts. Seven are ``strong lens'' candidates that appear to have multiple images of the source. Three are cases in which the single image of the source galaxy has been significantly distorted into an arc. The first two quadruply lensed candidates were reported by Ratnatunga et al. We report on the subsequent eight candidates and describe them with simple models based on the assumption of singular isothermal potentials. Residuals from the simple models for some of the candidates indicate that a more complex model for the potential will probably be required to explain the full structural detail of the observations once they are confirmed to be lenses. We also discuss the effective survey area that was searched for these candidate lens objects.
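A short worked example of the singular isothermal sphere (SIS) assumption behind the simple models: the Einstein radius and the two image positions for a source offset by less than the Einstein radius. The velocity dispersion and distance ratio are illustrative values, not fits to any MDS candidate.

```python
import numpy as np

# Singular isothermal sphere (SIS): Einstein radius and image positions.
c = 299792.458            # speed of light, km/s
sigma_v = 200.0           # lens velocity dispersion, km/s (illustrative)
D_ls_over_Ds = 0.5        # ratio of lens-source to source angular-diameter distances (illustrative)

theta_E = 4 * np.pi * (sigma_v / c) ** 2 * D_ls_over_Ds    # radians
theta_E_arcsec = np.degrees(theta_E) * 3600
print(f"Einstein radius: {theta_E_arcsec:.2f} arcsec")      # subarcsecond, as for the MDS lenses

# For a source at angular offset beta < theta_E, an SIS produces two images
# at beta + theta_E and beta - theta_E along the source-lens axis:
beta = 0.3 * theta_E_arcsec
print("image positions (arcsec):", beta + theta_E_arcsec, beta - theta_E_arcsec)
```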
Hiatt, Jessica R; Davis, Stephen D; Rivard, Mark J
2015-06-01
The model S700 Axxent electronic brachytherapy source by Xoft, Inc., was characterized by Rivard et al. in 2006. Since then, the source design was modified to include a new insert at the source tip. Current study objectives were to establish an accurate source model for simulation purposes, dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and determine dose differences between the original simulation model and the current model S700 source design. Design information from measurements of dissected model S700 sources and from vendor-supplied CAD drawings was used to aid establishment of an updated Monte Carlo source model, which included the complex-shaped plastic source-centering insert intended to promote water flow for cooling the source anode. These data were used to create a model for subsequent radiation transport simulations in a water phantom. Compared to the 2006 simulation geometry, the influence of volume averaging close to the source was substantially reduced. A track-length estimator was used to evaluate collision kerma as a function of radial distance and polar angle for determination of TG-43 dosimetry parameters. Results for the 50 kV source were determined every 0.1 cm from 0.3 to 15 cm and every 1° from 0° to 180°. Photon spectra in water with 0.1 keV resolution were also obtained from 0.5 to 15 cm and polar angles from 0° to 165°. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.04% at r = 1 cm and 0.06% at r = 5 cm. The dose-rate distribution ratio for the model S700 source as compared to the 2006 model exceeded unity by more than 5% for roughly one quarter of the solid angle surrounding the source, i.e., θ ≥ 120°. The radial dose function diminished in a similar manner as for an 125I seed, with values of 1.434, 0.636, 0.283, and 0.0975 at 0.5, 2, 5, and 10 cm, respectively. The radial dose function ratio between the current and the 2006 model had a minimum of 0.980 at 0.4 cm, close to the source sheath and for large distances approached 1.014. 2D anisotropy function ratios were close to unity for 50° ≤ θ ≤ 110°, but exceeded 5% for θ < 40° at close distances to the sheath and exceeded 15% for θ > 140°, even at large distances. Photon energy fluence of the updated model as compared to the 2006 model showed a decrease in output with increasing distance; this effect was pronounced at the lowest energies. A decrease in photon fluence with increase in polar angle was also observed and was attributed to the silver epoxy component. Changes in source design influenced the overall dose rate and distribution by more than 2% in several regions. This discrepancy is greater than the dose calculation acceptance criteria as recommended in the AAPM TG-56 report. The effect of the design change on the TG-43 parameters would likely not result in dose differences outside of patient applicators. Adoption of this new dataset is suggested for accurate depiction of model S700 source dose distributions.
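As a hedged illustration of how such tabulated TG-43 quantities are used, the sketch below interpolates the radial dose function values quoted above and evaluates the 1D (point-source) TG-43 dose-rate expression; the air-kerma strength and dose-rate constant are placeholders, and the anisotropy factor is omitted.

```python
import numpy as np
from scipy.interpolate import interp1d

# Radial dose function values quoted in the abstract for the model S700 source
r_cm = np.array([0.5, 2.0, 5.0, 10.0])
g_r  = np.array([1.434, 0.636, 0.283, 0.0975])
g = interp1d(r_cm, g_r, kind="cubic")

# Placeholder values: the air-kerma strength S_K and dose-rate constant Lambda
# are NOT taken from the paper and must come from the actual source calibration.
S_K = 1.0          # U (arbitrary here)
Lambda = 1.0       # cGy h^-1 U^-1 (placeholder)

def dose_rate_1d(r):
    """1D (point-source) TG-43 approximation: D(r) = S_K * Lambda * (r0/r)^2 * g(r).
    The anisotropy factor is omitted for brevity."""
    r0 = 1.0
    return S_K * Lambda * (r0 / r) ** 2 * g(r)

for r in [0.5, 1.0, 2.0, 5.0]:
    print(f"r = {r} cm: relative dose rate = {dose_rate_1d(r):.3f}")
```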
Slope tomography based on eikonal solvers and the adjoint-state method
NASA Astrophysics Data System (ADS)
Tavakoli F., B.; Operto, S.; Ribodetti, A.; Virieux, J.
2017-06-01
Velocity macromodel building is a crucial step in the seismic imaging workflow as it provides the necessary background model for migration or full waveform inversion. In this study, we present a new formulation of stereotomography that can handle more efficiently long-offset acquisition, complex geological structures and large-scale data sets. Stereotomography is a slope tomographic method based upon a semi-automatic picking of local coherent events. Each local coherent event, characterized by its two-way traveltime and two slopes in common-shot and common-receiver gathers, is tied to a scatterer or a reflector segment in the subsurface. Ray tracing provides a natural forward engine to compute traveltimes and slopes but can suffer from non-uniform ray sampling in the presence of complex media and long-offset acquisitions. Moreover, most implementations of stereotomography explicitly build a sensitivity matrix, leading to the resolution of large systems of linear equations, which can be cumbersome when large-scale data sets are considered. These issues are overcome with a new matrix-free formulation of stereotomography: a factored eikonal solver based on the fast sweeping method to compute first-arrival traveltimes and an adjoint-state formulation to compute the gradient of the misfit function. By solving the eikonal equation from sources and receivers, we make the computational cost proportional to the number of sources and receivers, while it is independent of the density of picked events in each shot and receiver gather. The model space involves the subsurface velocities and the scatterer coordinates, while the dips of the reflector segments are implicitly represented by the spatial support of the adjoint sources and are updated through the joint localization of nearby scatterers. We present an application on the complex Marmousi model for a towed-streamer acquisition and a realistic distribution of local events. We show that the estimated model, built without any prior knowledge of the velocities, provides a reliable initial model for frequency-domain FWI of long-offset data for a starting frequency of 4 Hz, although some artefacts at the reservoir level result from a deficit of illumination. This formulation of slope tomography provides a computationally efficient alternative to waveform inversion methods such as reflection waveform inversion or differential-semblance optimization for building an initial model for pre-stack depth migration and conventional FWI.
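A minimal sketch of the forward-engine idea: a plain (unfactored) fast sweeping solver for the eikonal equation |grad T| = 1/v on a regular grid. It is written for clarity rather than speed and omits the factorization and adjoint-state machinery of the actual method.

```python
import numpy as np

def fast_sweep_eikonal(v, src, h=1.0, n_sweeps=8):
    """First-arrival traveltimes for |grad T| = 1/v by the fast sweeping method
    (plain, not the factored variant used in the paper)."""
    ny, nx = v.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for sweep in range(n_sweeps):
        iy, ix = orders[sweep % 4]
        for i in iy:
            for j in ix:
                a = min(T[i - 1, j] if i > 0 else np.inf,
                        T[i + 1, j] if i < ny - 1 else np.inf)
                b = min(T[i, j - 1] if j > 0 else np.inf,
                        T[i, j + 1] if j < nx - 1 else np.inf)
                if np.isinf(a) and np.isinf(b):
                    continue                          # no upwind information yet
                f = h / v[i, j]                       # local slowness times grid step
                if abs(a - b) >= f:
                    t_new = min(a, b) + f             # one-sided (causal) update
                else:
                    t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                T[i, j] = min(T[i, j], t_new)
    return T

# Homogeneous test: traveltimes should approximate distance / velocity
v = np.full((101, 101), 2000.0)                      # m/s
T = fast_sweep_eikonal(v, src=(50, 50), h=10.0)      # 10 m grid step
print(T[50, 100], 50 * 10.0 / 2000.0)                # numerical vs analytical (0.25 s)
```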
Yang, Xiaoying; Tan, Lit; He, Ruimin; Fu, Guangtao; Ye, Jinyin; Liu, Qun; Wang, Guoqing
2017-12-01
It is increasingly recognized that climate change could impose both direct and indirect impacts on the quality of the water environment. Previous studies have mostly concentrated on evaluating the impacts of climate change on non-point source pollution in agricultural watersheds. Few studies have assessed the impacts of climate change on the water quality of river basins with complex point and non-point pollution sources. In view of the gap, this paper aims to establish a framework for stochastic assessment of the sensitivity of water quality to future climate change in a river basin with complex pollution sources. A sub-daily soil and water assessment tool (SWAT) model was developed to simulate the discharge, transport, and transformation of nitrogen from multiple point and non-point pollution sources in the upper Huai River basin of China. A weather generator was used to produce 50 years of synthetic daily weather data series for all 25 combinations of precipitation change (-10, 0, 10, 20, and 30%) and temperature change (increases of 0, 1, 2, 3, and 4 °C) scenarios. The generated daily rainfall series was disaggregated to the hourly scale and then used to drive the sub-daily SWAT model to simulate the nitrogen cycle under the different climate change scenarios. Our results in the study region have indicated that (1) both total nitrogen (TN) loads and concentrations are insensitive to temperature change; (2) TN loads are highly sensitive to precipitation change, while TN concentrations are moderately sensitive; (3) the impacts of climate change on TN concentrations are more spatiotemporally variable than its impacts on TN loads; and (4) the wide distributions of TN loads and TN concentrations under each individual climate change scenario illustrate the important role of climatic variability in affecting water quality conditions. In summary, the large variability in SWAT simulation results within and between climate change scenarios highlights the uncertainty of the impacts of climate change and the need to incorporate extreme conditions in managing the water environment and developing climate change adaptation and mitigation strategies.
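A hedged sketch of the scenario construction: applying the 25 delta-change combinations to a baseline weather series before it is disaggregated and fed to the hydrological model. The simple multiplicative/additive perturbation below stands in for the stochastic weather generator used in the study, and the baseline series is synthetic.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
# Placeholder baseline series (daily precipitation in mm, temperature in deg C)
n_days = 365 * 50
baseline_precip = rng.gamma(shape=0.4, scale=8.0, size=n_days)
baseline_temp = 15 + 10 * np.sin(np.linspace(0, 50 * 2 * np.pi, n_days)) \
                + rng.normal(0, 2, n_days)

precip_changes = [-10, 0, 10, 20, 30]      # percent
temp_changes = [0, 1, 2, 3, 4]             # deg C

scenarios = {}
for dp, dt in product(precip_changes, temp_changes):     # 25 combinations
    scenarios[(dp, dt)] = (baseline_precip * (1 + dp / 100.0),
                           baseline_temp + dt)

# Each perturbed series would then be disaggregated to hourly values and used
# to drive the sub-daily SWAT model; here we just summarize the forcing.
for (dp, dt), (p, t) in list(scenarios.items())[:3]:
    print(f"dP={dp:+d}%, dT={dt:+d}C -> mean precip {p.mean():.2f} mm/day, mean T {t.mean():.1f} C")
```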
Tao, Shu; Li, Xinrong; Yang, Yu; Coveney, Raymond M; Lu, Xiaoxia; Chen, Haitao; Shen, Weiran
2006-08-01
A USEPA procedure, ISCLT3 (Industrial Source Complex Long-Term), was applied to model the spatial distribution of polycyclic aromatic hydrocarbons (PAHs) emitted from various sources, including coal, petroleum, natural gas, and biomass, into the atmosphere of Tianjin, China. Benzo[a]pyrene equivalent concentrations (BaPeq) were calculated for risk assessment. Model results were provisionally validated for concentrations and profiles based on the observed data at two monitoring stations. The dominant emission sources in the area were domestic coal combustion, coke production, and biomass burning. Mainly because of differences in emission heights, the contributions of various sources to the average concentrations at receptors differ from the proportions emitted. The share of domestic coal increased from approximately 43% at the sources to 56% at the receptors, while the contribution of the coking industry decreased from approximately 23% at the sources to 7% at the receptors. The spatial distributions of gaseous and particulate PAHs were similar, with higher concentrations occurring within urban districts because of domestic coal combustion. With relatively smaller contributions, the other minor sources had limited influence on the overall spatial distribution. The calculated average BaPeq value in air was 2.54 +/- 2.87 ng/m3 on an annual basis. Although only 2.3% of the area in Tianjin exceeded the national standard of 10 ng/m3, 41% of the entire population lives within this area.
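For context, BaPeq is a toxic-equivalency-weighted sum over individual PAH species. The sketch below uses TEFs in the style of the commonly cited Nisbet and LaGoy scheme and placeholder concentrations, not the Tianjin measurements.

```python
# Benzo[a]pyrene-equivalent concentration: BaPeq = sum_i C_i * TEF_i
# TEFs below follow the commonly used Nisbet and LaGoy style of scheme; the
# concentrations are placeholders, not measurements from the study.
tef = {"BaP": 1.0, "DahA": 1.0, "BaA": 0.1, "BbF": 0.1, "BkF": 0.1,
       "InP": 0.1, "Chr": 0.01, "Flt": 0.001, "Pyr": 0.001}

conc_ng_m3 = {"BaP": 1.8, "DahA": 0.3, "BaA": 3.5, "BbF": 4.1, "BkF": 1.9,
              "InP": 2.6, "Chr": 5.2, "Flt": 12.0, "Pyr": 9.5}

bap_eq = sum(conc_ng_m3[k] * tef[k] for k in conc_ng_m3)
print(f"BaPeq = {bap_eq:.2f} ng/m3")
```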
Electromagnetic Modeling of Human Body Using High Performance Computing
NASA Astrophysics Data System (ADS)
Ng, Cho-Kuen; Beall, Mark; Ge, Lixin; Kim, Sanghoek; Klaas, Ottmar; Poon, Ada
Realistic simulation of electromagnetic wave propagation in the actual human body can expedite the investigation of wireless power harvesting by implanted devices coupled to external sources. The parallel electromagnetics code suite ACE3P, developed at SLAC National Accelerator Laboratory, is based on the finite element method for high-fidelity accelerator simulation and can be enhanced to model electromagnetic wave propagation in the human body. Starting with a CAD model of a human phantom that is characterized by a number of tissues, a finite element mesh representing the complex geometries of the individual tissues is built for simulation. Employing an optimal power source with a specific pattern of field distribution, the propagation and focusing of electromagnetic waves in the phantom have been demonstrated. Substantial speedup of the simulation is achieved by using multiple compute cores on supercomputers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kent Simmons, J.A.; Knap, A.H.
1991-04-01
The computer model Industrial Source Complex Short Term (ISCST) was used to study the stack emissions from a refuse incinerator proposed for the island of Bermuda. The model predicts that the highest ground-level pollutant concentrations will occur near Prospect, 800 m to 1,000 m due south of the stack. The authors installed a portable laboratory and instruments at Prospect to begin making air quality baseline measurements. By comparing the model's estimates of the incinerator contribution with the background levels measured at the site, they predicted that stack emissions would not cause an increase in TSP or SO2. The incinerator will, however, be a significant source of HCl in Bermuda air, with ambient levels approaching air quality guidelines.
A Composite Source Model With Fractal Subevent Size Distribution
NASA Astrophysics Data System (ADS)
Burjanek, J.; Zahradnik, J.
A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g. mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function, with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model on a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model on a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. Strong ground motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
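A minimal sketch, under the stated N(>R) proportional to R^-2 assumption, of drawing subevent radii from a truncated power-law distribution until the summed areas fill the target rupture, with constant-stress-drop slip scaling via the circular-crack relation; the overlap check and the random placement on the fault are omitted, and all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_subevents(target_area_km2, r_min=0.5, r_max=10.0):
    """Draw circular subevent radii from N(>R) ~ R^-2 (truncated Pareto)
    until their summed areas fill the target rupture area."""
    radii, total = [], 0.0
    while total < target_area_km2:
        u = rng.uniform()
        # inverse-CDF sample of a Pareto (exponent 2) truncated to [r_min, r_max]
        r = r_min / np.sqrt(1 - u * (1 - (r_min / r_max) ** 2))
        radii.append(r)
        total += np.pi * r ** 2
    return np.array(radii)

radii = draw_subevents(target_area_km2=300.0)

# Constant stress-drop scaling: slip on each circular subevent proportional to its radius
stress_drop = 3e6            # Pa, illustrative
shear_modulus = 3.3e10       # Pa
slip = (16.0 / (7.0 * np.pi)) * stress_drop * (radii * 1e3) / shear_modulus   # circular-crack relation, m

print(f"{len(radii)} subevents, largest R = {radii.max():.1f} km, max slip = {slip.max():.2f} m")
```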
Modeling the complex activity of sickle cell and thalassemia specialist nurses in England.
Leary, Alison; Anionwu, Elizabeth N
2014-01-01
Specialist advanced practice nursing in hemoglobinopathies has a rich historical and descriptive literature. Subsequent work has shown that the role is valued by patients and families and also by other professionals. However, there is little empirical research on the complexity of activity of these services in terms of interventions offered. In addition, the work of clinical nurse specialists in England has been devalued through a perception of oversimplification. The purpose of this study was to understand the complexity of expert nursing practice in sickle cell and thalassemia. The approach taken to modeling complexity drew on common methods in mathematical modeling and computational mathematics. Knowledge discovery in data was the underpinning framework of this study, applied to a priori mined data. This allowed categorization of activity and articulation of complexity. In total, 8966 nursing events were captured over 1639 hours from a total of 22.8 whole-time equivalents, and several data sources were mined. The work of specialist nurses in this area is complex in terms of the physical and psychosocial care they provide. The nurses also undertook case management activity, such as utilizing a very large network of professionals; others participated in admission-avoidance work and in educating patients' families and other staff. The work of nurses specializing in hemoglobinopathy care is complex and multidimensional and is likely to contribute to the quality of care in a cost-effective way. An understanding of this complexity can be used as an underpinning for establishing key performance indicators, optimum caseload calculations, and economic evaluation.
Thermal Image Sensing Model for Robotic Planning and Search
Castro Jiménez, Lídice E.; Martínez-García, Edgar A.
2016-01-01
This work presents a search planning system for a rolling robot to find a source of infra-red (IR) radiation at an unknown location. Heat emissions are observed by a low-cost, home-made passive IR visual sensor. The sensor's capability for detection of radiation spectra was experimentally characterized. The sensor data were modeled by an exponential model to estimate distance as a function of the IR image's intensity, and by a polynomial model to estimate temperature as a function of IR intensity. Both models are combined to deduce an exact nonlinear distance-temperature relation. A planning system obtains feedback from the IR camera (position, intensity, and temperature) to lead the robot to find the heat source. The planner is a system of nonlinear equations recursively solved by a Newton-based approach to estimate the IR source position in global coordinates. The planning system assists an autonomous navigation control in order to reach the goal and avoid collisions. Trigonometric partial differential equations were established to control the robot's course towards the heat emission. A sine function produces attractive accelerations toward the IR source. A cosine function produces repulsive accelerations against the obstacles observed by an RGB-D sensor. Simulations and real experiments in complex indoor environments are presented to illustrate the convenience and efficacy of the proposed approach. PMID:27509510
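A toy version of the sensing and steering models described above; the exponential and polynomial coefficients and the gain constants are invented for illustration and are not the experimentally calibrated values from the paper.

```python
import numpy as np

# Illustrative calibration models (all coefficients are placeholders).
def distance_from_intensity(I, a=5.0, b=0.004):
    """Exponential model: estimated range decays with IR image intensity."""
    return a * np.exp(-b * I)

def temperature_from_intensity(I, coeffs=(20.0, 0.15, 2e-4)):
    """Quadratic polynomial model mapping intensity to temperature."""
    c0, c1, c2 = coeffs
    return c0 + c1 * I + c2 * I ** 2

def heading_acceleration(theta_to_source, theta_to_obstacle,
                         k_attract=0.8, k_repel=0.5):
    """Sine term attracts toward the heat source, cosine term repels from
    the nearest obstacle seen by the RGB-D sensor."""
    return k_attract * np.sin(theta_to_source) - k_repel * np.cos(theta_to_obstacle)

I = 180.0                                   # example pixel intensity
print(distance_from_intensity(I), temperature_from_intensity(I))
print(heading_acceleration(np.radians(30), np.radians(75)))
```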
Data Mining and Complex Problems: Case Study in Composite Materials
NASA Technical Reports Server (NTRS)
Rabelo, Luis; Marin, Mario
2009-01-01
Data mining is defined as the discovery of useful, possibly unexpected, patterns and relationships in data using statistical and non-statistical techniques in order to develop schemes for decision and policy making. Data mining can be used to discover the sources and causes of problems in complex systems. In addition, data mining can support simulation strategies by finding the different constants and parameters to be used in the development of simulation models. This paper introduces a framework for data mining and its application to complex problems. To further explain some of the concepts outlined in this paper, the potential application to the NASA Shuttle Reinforced Carbon-Carbon structures and genetic programming is used as an illustration.
Ionospheric scintillation studies
NASA Technical Reports Server (NTRS)
Rino, C. L.; Freemouw, E. J.
1973-01-01
The diffracted field of a monochromatic plane wave was characterized by two complex correlation functions. For a Gaussian complex field, these quantities suffice to completely define the statistics of the field. Thus, one can in principle calculate the statistics of any measurable quantity in terms of the model parameters. The best data fits were achieved for intensity statistics derived under the Gaussian statistics hypothesis. The signal structure that achieved the best fit was nearly invariant with scintillation level and irregularity source (ionosphere or solar wind). It was characterized by the fact that more than 80% of the scattered signal power is in phase quadrature with the undeviated or coherent signal component. Thus, the Gaussian-statistics hypothesis is both convenient and accurate for channel modeling work.
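A hedged Monte Carlo check of the Gaussian-statistics channel model: a constant coherent component plus a complex Gaussian scattered field with most of its power in phase quadrature, from which an intensity scintillation index can be computed. The power split and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Coherent (undeviated) component, taken as real with unit amplitude
E0 = 1.0
scattered_power = 0.2                 # total scattered power relative to coherent (illustrative)
quad_fraction = 0.8                   # fraction of scattered power in phase quadrature

sig_q = np.sqrt(scattered_power * quad_fraction)
sig_i = np.sqrt(scattered_power * (1 - quad_fraction))

# Complex Gaussian scattered field with unequal in-phase / quadrature variances
E = E0 + rng.normal(0, sig_i, n) + 1j * rng.normal(0, sig_q, n)
I = np.abs(E) ** 2

S4 = np.sqrt(I.var() / I.mean() ** 2)  # intensity scintillation index
print(f"S4 = {S4:.3f}")
```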
NASA Astrophysics Data System (ADS)
Azzaro, Raffaele; Barberi, Graziella; D'Amico, Salvatore; Pace, Bruno; Peruzza, Laura; Tuvè, Tiziana
2017-11-01
The volcanic region of Mt. Etna (Sicily, Italy) represents a perfect laboratory for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of various geophysical monitoring data and, in particular, the rapid geodynamics that clearly express some of the seismotectonic processes. We present here the model components and the procedures adopted for defining the seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA), the first results and maps of which are presented in a companion paper, Peruzza et al. (2017). The sources include, with increasing complexity, seismic zones, individual faults and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades). The analysis of the frequency-magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults by using a historical approach and a purely geologic method. We derive a magnitude-size scaling relationship specifically for this volcanic area, which has been implemented into a recently developed software tool - FiSH (Pace et al., 2016) - that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area the traditional assumptions of uniform and Poissonian seismicity can be relaxed; time-dependent, fault-based modeling, joined with 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be implemented in PSHA maps. These maps can be relevant for the retrofitting of the existing building stock and for driving risk reduction interventions. These analyses do not account for regional M > 6 seismogenic sources, which dominate the hazard over long return times (≥ 500 years).
Estimating uncertainties in complex joint inverse problems
NASA Astrophysics Data System (ADS)
Afonso, Juan Carlos
2016-04-01
Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should be always conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related to the forward and statistical models, I will also address other uncertainties associated with data and uncertainty propagation.
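To make the model-dependence point concrete, here is a toy Metropolis sampler for a two-parameter linear forward model: the posterior spread it reports is conditional on the assumed forward model and noise level, which is exactly the caveat raised above. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: d = m0 + m1 * x, with Gaussian data noise
x = np.linspace(0, 10, 25)
m_true = np.array([2.0, 0.7])
sigma_d = 0.3
d_obs = m_true[0] + m_true[1] * x + rng.normal(0, sigma_d, x.size)

def log_likelihood(m):
    resid = d_obs - (m[0] + m[1] * x)
    return -0.5 * np.sum((resid / sigma_d) ** 2)

# Metropolis random walk; flat prior for brevity
n_steps, step = 20_000, 0.05
samples = np.empty((n_steps, 2))
m = np.array([0.0, 0.0])
logL = log_likelihood(m)
for k in range(n_steps):
    m_prop = m + rng.normal(0, step, 2)
    logL_prop = log_likelihood(m_prop)
    if np.log(rng.uniform()) < logL_prop - logL:
        m, logL = m_prop, logL_prop
    samples[k] = m

post = samples[5000:]                      # discard burn-in
print("posterior mean:", post.mean(axis=0))
print("posterior std (model-dependent uncertainty):", post.std(axis=0))
```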
Building an open-source simulation platform of acoustic radiation force-based breast elastography
NASA Astrophysics Data System (ADS)
Wang, Yu; Peng, Bo; Jiang, Jingfeng
2017-03-01
Ultrasound-based elastography including strain elastography, acoustic radiation force impulse (ARFI) imaging, point shear wave elastography and supersonic shear imaging (SSI) have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. ‘ground truth’) in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity—one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data, were used to demonstrate capabilities of this extended virtual platform. Overall, simulation results were compared with the published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% compared to the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported. The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. In summary, our initial results were consistent with our expectations and what have been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments.
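A minimal sketch of one common post-processing step in such simulations: estimating the group shear wave speed by regressing lateral position against time-to-peak displacement. The tracking data below are synthetic and do not come from the platform described.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic tracking data: lateral positions (mm) and particle-displacement
# time profiles; the assumed true shear wave speed is 2.5 m/s.
lateral_mm = np.arange(2.0, 10.0, 0.5)
t = np.linspace(0, 10e-3, 2000)                      # s
true_sws = 2.5                                       # m/s
arrivals = lateral_mm * 1e-3 / true_sws
disp = np.exp(-((t[None, :] - arrivals[:, None]) / 0.3e-3) ** 2)   # Gaussian pulses
disp += rng.normal(0, 0.05, disp.shape)              # tracking noise

# Time-to-peak for each lateral position; the slope of x vs t gives the speed
ttp = t[np.argmax(disp, axis=1)]
slope, intercept = np.polyfit(ttp, lateral_mm * 1e-3, 1)
print(f"estimated shear wave speed: {slope:.2f} m/s")
```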
Spectral studies of cosmic X-ray sources
NASA Astrophysics Data System (ADS)
Blissett, R. J.
1980-01-01
The conventional "indirect" method of reduction and data analysis of spectral data from non-dispersive X-ray detectors, by the fitting of assumed spectral models, is examined. The limitations of this procedure are presented, and alternative schemes are considered in which the derived spectra are not biased to an astrophysical source model. A new method is developed in detail to directly restore incident photon spectra from the detected count histograms. This Spectral Restoration Technique allows an increase in resolution, to a degree dependent on the statistical precision of the data. This is illustrated by numerical simulations. Proportional counter data from Ariel 5 are analysed using this technique. The results obtained for the sources Cas A and the Crab Nebula are consistent with previous analyses and show that increases in resolution of up to a factor three are possible in practice. The source Cyg X-3 is closely examined. Complex spectral variability is found, with the continuum and iron-line emission modulated with the 4.8 hour period of the source. The data suggest multi-component emission in the source. Comparing separate Ariel 5 observations and published data from other experiments, a correlation between the spectral shape and source intensity is evident. The source behaviour is discussed with reference to proposed source models. Data acquired by the low-energy detectors on-board HEAO-1 are analysed using the Spectral Restoration Technique. This treatment explicitly demonstrates the existence of oxygen K-absorption edges in the soft X-ray spectra of the Crab Nebula and Sco X-1. These results are considered with reference to current theories of the interstellar medium. The thesis commences with a review of cosmic X-ray sources and the mechanisms responsible for their spectral signatures, and continues with a discussion of the instruments appropriate for spectral studies in X-ray astronomy.
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Rougier, E.; Delorey, A.; Steedman, D. W.; Bradley, C. R.
2016-12-01
The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. For this, the SPE program includes a strong modeling effort based on first-principles calculations, with the challenge of capturing both the source and near-source processes and those taking place later in time as seismic waves propagate within complex 3D geologic environments. In this paper, we report on results of modeling that uses hydrodynamic simulation codes (Abaqus and CASH) coupled with a 3D full waveform propagation code, SPECFEM3D. For modeling the near-source region, we employ a fully coupled Euler-Lagrange (CEL) modeling capability with a new continuum-based visco-plastic fracture model for simulation of damage processes, called AZ_Frac. These capabilities produce high-fidelity models of various factors believed to be key in the generation of seismic waves: the explosion dynamics, a weak grout-filled borehole, the surrounding jointed rock, and damage creation and deformations happening around the source and the free surface. SPECFEM3D, based on the Spectral Element Method (SEM), is a direct numerical method for full wave modeling with mathematical accuracy. The coupling interface consists of a series of grid points of the SEM mesh situated inside the hydrodynamic code's domain. Displacement time series at these points are computed using output data from CASH or Abaqus (by interpolation if needed) and fed into the time-marching scheme of SPECFEM3D. We will present validation tests with Sharpe's model and comparisons of modeled waveforms with Rg waves (2-8 Hz) recorded up to 2 km away for SPE. We especially show the effects of local topography, velocity structure and spallation. Our models predict smaller amplitudes of Rg waves for the first five SPE shots compared to purely elastic models such as Denny & Johnson (1991).