Sample records for realistic source parameters

  1. An evaluation of differences due to changing source directivity in room acoustic computer modeling

    NASA Astrophysics Data System (ADS)

    Vigeant, Michelle C.; Wang, Lily M.

    2004-05-01

    This project examines the effects of changing source directivity in room acoustic computer models on objective parameters and subjective perception. Acoustic parameters and auralizations calculated from omnidirectional versus directional sources were compared. Three realistic directional sources were used, measured in a limited number of octave bands from a piano, singing voice, and violin. A highly directional source that beams only within one sixteenth of a sphere was also tested. Objectively, there were differences of 5% or more in reverberation time (RT) between the realistic directional and omnidirectional sources. Between the beaming directional and omnidirectional sources, differences in clarity were close to the just-noticeable-difference (jnd) criterion of 1 dB. Subjectively, participants had great difficulty distinguishing between the realistic and omnidirectional sources; very few could discern the differences in RTs. However, a larger percentage (32% vs 20%) could differentiate between the beaming and omnidirectional sources, as well as the respective differences in clarity. Further studies of the objective results from different beaming sources have been pursued, varying both the direction of the beaming source in the room and the beamwidth. The objective results are analyzed to determine if differences fall within the jnd of sound-pressure level, RT, and clarity.
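
    For reference, the clarity parameter compared against the 1 dB jnd above is conventionally the early-to-late arriving energy ratio (shown here with the 80 ms limit used for music, per ISO 3382):

    $$ C_{80} = 10\,\log_{10}\frac{\int_0^{80\,\mathrm{ms}} p^2(t)\,dt}{\int_{80\,\mathrm{ms}}^{\infty} p^2(t)\,dt}\quad[\mathrm{dB}] $$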

  2. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.
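
    As a hedged illustration of the global-search requirement noted above, the sketch below scans a coarse grid of candidate dipole locations, solves for the best-fitting moment at each location by linear least squares, and keeps the location with the smallest residual. The infinite-homogeneous-medium forward model and all names are illustrative stand-ins, not the article's four-sphere or realistic head models.

    ```python
    import numpy as np

    def lead_field(src, electrodes, sigma=0.33):
        """Lead field of a dipole at `src` in an infinite homogeneous medium
        (illustrative stand-in for a real head model):
        phi = q . (r - src) / (4 pi sigma |r - src|^3)."""
        d = electrodes - src                               # (n_elec, 3)
        r3 = np.linalg.norm(d, axis=1) ** 3
        return d / (4 * np.pi * sigma * r3[:, None])       # (n_elec, 3)

    def best_dipole(v, electrodes, grid):
        """Global grid search: the optimal moment at each candidate location
        is a linear least-squares solution, so only the location needs a
        nonlinear (here: exhaustive) search."""
        best = (np.inf, None, None)
        for src in grid:
            L = lead_field(src, electrodes)
            q, *_ = np.linalg.lstsq(L, v, rcond=None)
            res = np.linalg.norm(v - L @ q)
            if res < best[0]:
                best = (res, src, q)
        return best  # (residual, location, moment)

    # Toy usage: 32 random electrodes on a unit sphere, one true dipole.
    rng = np.random.default_rng(0)
    elec = rng.normal(size=(32, 3))
    elec /= np.linalg.norm(elec, axis=1)[:, None]
    true_src, true_q = np.array([0.2, 0.1, 0.4]), np.array([0.0, 0.0, 1.0])
    v = lead_field(true_src, elec) @ true_q
    grid = np.mgrid[-0.5:0.5:11j, -0.5:0.5:11j, 0.0:0.6:7j].reshape(3, -1).T
    res, loc, q = best_dipole(v, elec, grid)
    print(res, loc)   # residual ~0 at the true location
    ```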

  3. Pseudo-dynamic source characterization accounting for rough-fault effects

    NASA Astrophysics Data System (ADS)

    Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin

    2016-04-01

    Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering, for example in seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wavefield in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications. Therefore, simplified synthetic kinematic models are often used. They are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data. However, due to the limited resolution of the inversions, these models are valid only for the low-frequency range. In addition to the slip, parameters such as rupture-onset time, rise time and source time functions are needed for complete spatiotemporal characterization of the earthquake rupture. But these parameters are poorly resolved in the source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures, capturing the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model that is physically consistent with dynamic ruptures on rough faults.
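
    To make the roughness ingredient concrete, here is a minimal, hedged sketch of how self-affine fault profiles of the kind used in such studies are commonly synthesized: shape white noise with a power-law spectrum, P(k) ∝ k^(-1-2H) for a 1-D profile with Hurst exponent H, then rescale to a target amplitude-to-wavelength ratio. All parameter values are illustrative, not those of this study.

    ```python
    import numpy as np

    def rough_fault_profile(n=1024, dx=10.0, hurst=1.0, rms_ratio=1e-2, seed=0):
        """Synthesize a 1-D self-affine fault profile by spectral shaping:
        amplitude spectrum ~ k^-(0.5 + H), so power spectrum ~ k^-(1 + 2H)."""
        rng = np.random.default_rng(seed)
        k = np.fft.rfftfreq(n, d=dx)               # spatial frequencies
        amp = np.zeros_like(k)
        amp[1:] = k[1:] ** -(0.5 + hurst)          # power-law shaping, skip k=0
        phase = rng.uniform(0, 2 * np.pi, k.size)  # random phases
        z = np.fft.irfft(amp * np.exp(1j * phase), n=n)
        # rescale so rms(z) over the profile length matches the target ratio
        z *= rms_ratio * (n * dx) / z.std()
        return np.arange(n) * dx, z

    x, z = rough_fault_profile()
    print(z.std() / x[-1])   # ~1e-2 amplitude-to-wavelength ratio
    ```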

  4. The Mock LISA Data Challenge Round 3: New and Improved Sources

    NASA Technical Reports Server (NTRS)

    Baker, John

    2008-01-01

    The Mock LISA Data Challenges are a program to demonstrate and encourage the development of data-analysis capabilities for LISA. Each round of challenges consists of several data sets containing simulated instrument noise and gravitational waves from sources of undisclosed parameters. Participants are asked to analyze the data sets and report the maximum information they can infer about the source parameters. The challenges are being released in rounds of increasing complexity and realism. Challenge 3, currently in progress, brings new source classes, now including cosmic-string cusps and primordial stochastic backgrounds, and more realistic signal models for supermassive black-hole inspirals and galactic double white dwarf binaries.

  5. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using elastic dislocation theory, for which the displacement field depends on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. Because the physics that describes tsunamis from generation through run-up is complex, a parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis. Analysis of the source parameters of various tsunamigenic earthquakes has indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.
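
    The nonlinear long-wave (shallow-water) theory invoked above reduces, in one dimension with free-surface elevation η, still-water depth h, and depth-averaged velocity u, to:

    $$ \frac{\partial \eta}{\partial t} + \frac{\partial}{\partial x}\bigl[(h+\eta)\,u\bigr] = 0, \qquad \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} + g\,\frac{\partial \eta}{\partial x} = 0 $$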

  6. Invariant models in the inversion of gravity and magnetic fields and their derivatives

    NASA Astrophysics Data System (ADS)

    Ialongo, Simone; Fedi, Maurizio; Florio, Giovanni

    2014-11-01

    In potential field inversion problems we usually solve underdetermined systems, and realistic solutions may be obtained by introducing a depth-weighting function in the objective function. The choice of the exponent of such a power law is crucial. It has been suggested that it be determined from the field decay due to a single source block; alternatively, it has been defined as the structural index of the investigated source distribution. In both cases, when k-order derivatives of the potential field are considered, the depth-weighting exponent has to be increased by k with respect to that of the potential field itself, in order to obtain consistent source model distributions. We show instead that invariant and realistic source-distribution models are obtained using the same depth-weighting exponent for the magnetic field and for its k-order derivatives. A similar behavior also occurs in the gravity case. In practice, we found that the depth-weighting exponent is invariant for a given source model and equal to that of the corresponding magnetic field, in the magnetic case, and of the 1st derivative of the gravity field, in the gravity case. In the case of the regularized inverse problem, with depth weighting and general constraints, the mathematical demonstration of such invariance is difficult, because of its non-linearity and of its variable form due to the different constraints used. However, tests performed on a variety of synthetic cases seem to confirm the invariance of the depth-weighting exponent. A final consideration regards the role of the regularization parameter: we show that regularization can severely affect the depth to the source, because the estimated depth tends to increase proportionally with the size of the regularization parameter. Hence, some care is needed in handling the combined effect of the regularization parameter and depth weighting.
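
    One standard formulation of the depth-weighted objective discussed here (hedged as a common form, not necessarily the authors' exact one) is

    $$ \min_{\mathbf m}\; \|\mathbf W_d(\mathbf d - G\mathbf m)\|_2^2 + \lambda\,\|\mathbf W_z \mathbf m\|_2^2, \qquad \mathbf W_z = \operatorname{diag}\bigl[(z_j + z_0)^{\beta/2}\bigr], $$

    so the depth-weighting exponent β and the regularization parameter λ are exactly the two knobs whose combined effect the abstract warns about.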

  7. Biased three-intensity decoy-state scheme on the measurement-device-independent quantum key distribution using heralded single-photon sources.

    PubMed

    Zhang, Chun-Hui; Zhang, Chun-Mei; Guo, Guang-Can; Wang, Qin

    2018-02-19

    At present, most measurement-device-independent quantum key distribution (MDI-QKD) schemes are based on weak coherent sources and are limited in transmission distance under realistic experimental conditions, e.g., considering finite-size-key effects. Hence, in this paper we propose a new biased decoy-state scheme using heralded single-photon sources for three-intensity MDI-QKD, where we prepare the decoy pulses only in the X basis and adopt both collective constraints and joint parameter estimation techniques. Compared with former schemes based on weak coherent sources (WCS) or heralded single-photon sources (HSPS), after implementing full parameter optimization our scheme gives a distinctly reduced quantum bit error rate in the X basis and thus shows excellent performance, especially when the data size is relatively small.

  8. Quantum information transfer and entanglement with SQUID qubits in cavity QED: a dark-state scheme with tolerance for nonuniform device parameter.

    PubMed

    Yang, Chui-Ping; Chu, Shih-I; Han, Siyuan

    2004-03-19

    We investigate the experimental feasibility of realizing quantum information transfer (QIT) and entanglement with SQUID qubits in a microwave cavity via dark states. Realistic system parameters are presented. Our results show that QIT and entanglement with two SQUID qubits can be achieved with high fidelity. The present scheme is tolerant of device parameter nonuniformity. We also show that the strong coupling limit can be achieved with SQUID qubits in a microwave cavity. Thus, cavity-SQUID systems provide a new way to produce nonclassical microwave sources and to realize quantum communication.

  9. Magnetic and velocity fields in a dynamo operating at extremely small Ekman and magnetic Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Šimkanin, Ján; Kyselica, Juraj

    2017-12-01

    Numerical simulations of the geodynamo are becoming more realistic because of advances in computer technology. Here, the geodynamo model is investigated numerically at extremely low Ekman and magnetic Prandtl numbers using the PARODY dynamo code. These parameters are more realistic than those used in previous numerical studies of the geodynamo. Our model is based on the Boussinesq approximation, and the temperature gradient between the upper and lower boundaries is the source of convection. This study attempts to answer the question of how realistic the geodynamo models are. Numerical results show that our dynamo belongs to the strong-field dynamos. The generated magnetic field is dipolar and large-scale, while convection is small-scale and sheet-like flows (plumes) are preferred to columnar convection. The scales of the magnetic and velocity fields are separated, which enables hydromagnetic dynamos to maintain the magnetic field at low magnetic Prandtl numbers. The inner core rotation rate is lower than that in previous geodynamo models. On the other hand, the dimensional magnitudes of the velocity and magnetic fields, and those of the magnetic and viscous dissipation, are larger than those expected in the Earth's core, due to the chosen parameter range.
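
    The two control parameters named above have the standard definitions (ν kinematic viscosity, Ω rotation rate, D shell thickness, η magnetic diffusivity):

    $$ E = \frac{\nu}{\Omega D^2}, \qquad Pm = \frac{\nu}{\eta}, $$

    with Earth-like values (E ~ 10⁻¹⁵, Pm ~ 10⁻⁶) still far below what simulations reach, which is why every decade gained counts as "more realistic."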

  10. System parameters for erythropoiesis control model: Comparison of normal values in human and mouse model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer model for erythropoietic control was adapted to the mouse system by altering system parameters originally given for the human to values that more realistically represent the mouse. Parameter values were obtained from a variety of literature sources. The mouse model was then used to evaluate the mouse as a potential experimental model for spaceflight. Simulation studies of dehydration and hypoxia were performed. A comparison of system parameters for the mouse and human models is presented. Aside from the obvious differences expected in fluid volumes, blood flows and metabolic rates, larger differences were observed in the following: erythrocyte life span, erythropoietin half-life, and normal arterial pO₂.

  11. Virtual-source diffusion approximation for enhanced near-field modeling of photon-migration in low-albedo medium.

    PubMed

    Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng

    2015-01-26

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we herein report an improved explicit model for a semi-infinite geometry, referred to as the "Virtual Source" (VS) diffusion approximation (DA), suited to low-albedo media and short source-detector separations. In this model, the collimated light in the standard DA is approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. This parameterized scheme is proved to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. The superiority of the proposed VS-DA method over established ones is demonstrated in comparison with Monte Carlo simulations over wide ranges of the source-detector separation and the medium optical properties.
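
    A minimal sketch of the virtual-source idea, under textbook diffusion-theory assumptions for a semi-infinite medium (extrapolated-boundary image sources); the two source depths and weights below are hypothetical placeholders, not the paper's optimized values.

    ```python
    import numpy as np

    def da_fluence(rho, z_s, mua, musp):
        """Diffusion-approximation fluence at the surface, lateral distance rho
        from an isotropic point source at depth z_s, using a negative image
        source across the extrapolated boundary z_b (semi-infinite medium)."""
        D = 1.0 / (3.0 * (mua + musp))     # diffusion coefficient
        mueff = np.sqrt(mua / D)
        A = 2.0                            # boundary mismatch factor (illustrative)
        z_b = 2.0 * A * D                  # extrapolated boundary depth
        r1 = np.sqrt(rho**2 + z_s**2)                  # real source
        r2 = np.sqrt(rho**2 + (z_s + 2.0 * z_b)**2)    # image source
        return (np.exp(-mueff * r1) / r1
                - np.exp(-mueff * r2) / r2) / (4 * np.pi * D)

    def vs_da_reflectance(rho, mua, musp, depths, weights):
        """'Virtual source' DA: replace the collimated beam by several
        weighted isotropic point sources along the incident direction."""
        return sum(w * da_fluence(rho, z, mua, musp)
                   for z, w in zip(depths, weights))

    # Hypothetical 2-VS configuration (weights/depths would come from fitting).
    rho = np.linspace(0.05, 1.0, 20)        # cm
    ltr = 1.0 / (0.5 + 5.0)                 # transport mean free path, cm
    R = vs_da_reflectance(rho, mua=0.5, musp=5.0,
                          depths=[0.5 * ltr, 1.5 * ltr], weights=[0.6, 0.4])
    print(R[:3])
    ```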

  12. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    NASA Astrophysics Data System (ADS)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported an improved explicit model, referred to as the "Virtual Source" (VS) diffusion approximation (DA), which inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light in the standard DA is approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and further applied to image reconstruction for a Laminar Optical Tomography system.

  13. A Gaia DR2 Mock Stellar Catalog

    NASA Astrophysics Data System (ADS)

    Rybizki, Jan; Demleitner, Markus; Fouesneau, Morgan; Bailer-Jones, Coryn; Rix, Hans-Walter; Andrae, René

    2018-07-01

    We present a mock catalog of Milky Way stars, matching in volume and depth the content of the Gaia data release 2 (GDR2). We generated our catalog using Galaxia, a tool to sample stars from a Besançon Galactic model, together with a realistic 3D dust extinction map. The catalog mimics the complete GDR2 data model and contains most of the entries in the Gaia source catalog: five-parameter astrometry, three-band photometry, radial velocities, stellar parameters, and associated scaled nominal uncertainty estimates. In addition, we supplemented the catalog with extinctions and photometry for non-Gaia bands. This catalog can be used to prepare GDR2 queries in a realistic runtime environment, and it can serve as a Galactic model against which to compare the actual GDR2 data in the space of observables. The catalog is hosted through the virtual observatory GAVO’s Heidelberg data center (http://dc.g-vo.org/tableinfo/gdr2mock.main) service, and thus can be queried using ADQL as for GDR2 data.
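
    A query against the hosting data center might look like the hedged sketch below. The TAP endpoint is assumed from the tableinfo URL given above, and the column names are assumptions based on the GDR2 data model the catalog is stated to mimic; verify both against the service before relying on them.

    ```python
    import pyvo

    # GAVO Heidelberg data center TAP service (endpoint assumed, see above).
    tap = pyvo.dal.TAPService("http://dc.g-vo.org/tap")

    # Bright mock stars; column names follow the GDR2 data model
    # (hypothetical until checked against the table metadata).
    result = tap.search("""
        SELECT TOP 10 source_id, ra, dec, parallax, phot_g_mean_mag
        FROM gdr2mock.main
        WHERE phot_g_mean_mag < 12
    """)
    print(result.to_table())
    ```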

  14. Building Better Planet Populations for EXOSIMS

    NASA Astrophysics Data System (ADS)

    Garrett, Daniel; Savransky, Dmitry

    2018-01-01

    The Exoplanet Open-Source Imaging Mission Simulator (EXOSIMS) software package simulates ensembles of space-based direct imaging surveys to provide a variety of science and engineering yield distributions for proposed mission designs. These mission simulations rely heavily on assumed distributions of planetary population parameters including semi-major axis, planetary radius, eccentricity, albedo, and orbital orientation to provide heuristics for target selection and to simulate planetary systems for detection and characterization. The distributions are encoded in PlanetPopulation modules within EXOSIMS which are selected by the user in the input JSON script when a simulation is run. The earliest written PlanetPopulation modules available in EXOSIMS are based on planet population models where the planetary parameters are considered to be independent from one another. While independent parameters allow for quick computation of heuristics and sampling for simulated planetary systems, results from planet-finding surveys have shown that many parameters (e.g., semi-major axis/orbital period and planetary radius) are not independent. We present new PlanetPopulation modules for EXOSIMS which are built on models based on planet-finding survey results where semi-major axis and planetary radius are not independent and provide methods for sampling their joint distribution. These new modules enhance the ability of EXOSIMS to simulate realistic planetary systems and give more realistic science yield distributions.
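
    The modeling change can be sketched as follows: instead of drawing semi-major axis and planetary radius independently, draw the radius conditionally on the axis. The toy power laws and break point below are illustrative placeholders, not the fitted occurrence-rate models the new modules encode.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_sma(n, a_min=0.1, a_max=30.0):
        """Log-uniform semi-major axis draw (toy marginal), in AU."""
        return np.exp(rng.uniform(np.log(a_min), np.log(a_max), n))

    def sample_radius_given_sma(a, r_min=0.5, r_max=16.0, alpha=-1.5):
        """Toy conditional: close-in planets skew small, distant ones allow
        giants -- a stand-in for a fitted joint occurrence-rate model."""
        upper = np.where(a < 0.5, 4.0, r_max)    # hypothetical break at 0.5 AU
        u = rng.uniform(size=a.size)
        # inverse-transform sampling of p(r) ~ r^alpha on [r_min, upper]
        lo, hi = r_min ** (alpha + 1), upper ** (alpha + 1)
        return (lo + u * (hi - lo)) ** (1.0 / (alpha + 1))

    a = sample_sma(10_000)
    r = sample_radius_given_sma(a)
    print(np.corrcoef(np.log(a), np.log(r))[0, 1])  # nonzero: joint, not independent
    ```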

  15. Inter-Individual Variability in High-Throughput Risk ...

    EPA Pesticide Factsheets

    We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which have little or no existing TK data. Chemicals are prioritized based on model estimates of hazard and exposure, to decide which chemicals should be first in line for further study. Hazard may be estimated with in vitro HT screening assays, e.g., U.S. EPA's ToxCast program. Bioactive ToxCast concentrations can be extrapolated to doses that produce equivalent concentrations in body tissues using a reverse TK approach in which generic TK models are parameterized with 1) chemical-specific parameters derived from in vitro measurements and predicted from chemical structure; and 2) physiological parameters for a virtual population. Here we draw physiological parameters from realistic estimates of distributions of demographic and anthropometric quantities in the modern U.S. population, based on the most recent CDC NHANES data. A Monte Carlo approach, accounting for the correlation structure in physiological parameters, is used to estimate ToxCast equivalent doses for the most sensitive portion of the population. To quantify risk, ToxCast equivalent doses are compared to estimates of exposure rates based on Bayesian inferences drawn from NHANES urinary analyte biomonitoring data. The inclusion
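
    The correlated Monte Carlo step can be sketched as below: draw physiological parameters with a specified correlation structure (Gaussian draws via a Cholesky factor) and convert a bioactive in vitro concentration into an equivalent dose for a sensitive population quantile. The parameter names, values, and the linear steady-state model are illustrative assumptions, not the framework's actual implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy physiological parameters: body weight (kg), liver blood flow (L/h),
    # glomerular filtration rate (L/h) -- means, SDs, and correlations assumed.
    mean = np.array([80.0, 90.0, 7.0])
    sd   = np.array([20.0, 15.0, 2.0])
    corr = np.array([[1.0, 0.6, 0.4],
                     [0.6, 1.0, 0.3],
                     [0.4, 0.3, 1.0]])

    n = 100_000
    z = rng.standard_normal((n, 3)) @ np.linalg.cholesky(corr).T
    params = np.clip(mean + sd * z, 1e-3, None)   # correlated, positive draws

    # Toy steady-state model: plasma concentration per unit dose rate
    # falls with total clearance (coefficients illustrative).
    clearance = 0.02 * params[:, 1] + 0.8 * params[:, 2]       # L/h
    css_per_dose = 1.0 / (clearance * 24.0 / params[:, 0])     # (mg/L)/(mg/kg/day)

    bioactive_conc = 5.0                 # hypothetical bioactive level, mg/L
    equiv_dose = bioactive_conc / css_per_dose                 # mg/kg/day
    print(np.percentile(equiv_dose, 5))  # dose protective of the sensitive 5%
    ```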

  16. Analysis of the Impact of Realistic Wind Size Parameter on the Delft3D Model

    NASA Astrophysics Data System (ADS)

    Washington, M. H.; Kumar, S.

    2017-12-01

    The wind size parameter, which is the distance from the center of the storm to the location of the maximum winds, is currently a constant in the Delft3D model. As a result, the Delft3D model's output prediction of water levels during a storm surge is inaccurate compared to the observed data. To address this issue, an algorithm to calculate a realistic wind size parameter for a given hurricane was designed and implemented using the observed water-level data for Hurricane Matthew. A performance evaluation experiment was conducted to demonstrate the accuracy of the model's prediction of water levels using the realistic wind size input parameter compared to the default constant wind size parameter for Hurricane Matthew, with water level data observed from October 4, 2016 to October 9, 2016 from the National Oceanic and Atmospheric Administration (NOAA) as a baseline. The experimental results demonstrate that the Delft3D water level output for the realistic wind size parameter, compared to the default constant size parameter, matches the NOAA reference water level data more accurately.
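
    For context, parametric hurricane wind forcing of this kind typically follows a Holland-type gradient wind profile, in which the wind size parameter R_max (radius of maximum winds) enters directly; a hedged sketch of the standard form, neglecting the Coriolis term, is

    $$ V(r) = \sqrt{\frac{B\,\Delta p}{\rho}\left(\frac{R_{max}}{r}\right)^{B} \exp\!\left[-\left(\frac{R_{max}}{r}\right)^{B}\right]}, $$

    so an unrealistic constant R_max distorts the entire wind field, and hence the surge. Whether Delft3D uses exactly this form is not stated in the abstract.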

  17. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory in combination with near-shore synthetic waveforms is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplified bathymetric domains that resemble realistic near-shore features. We investigate the sensitivity of the analytical runup formulae to variations in fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model using a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.
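
    As one example of the analytical estimates being tested, the classical solitary-wave runup law for a plane beach of slope β (Synolakis, 1987) reads

    $$ \frac{R}{d} = 2.831\,\sqrt{\cot\beta}\,\left(\frac{H}{d}\right)^{5/4}, $$

    with H the offshore wave height and d the depth; whether this exact formula, or a generalization of it, is the one used here is not stated in the abstract.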

  18. Impact of heat source/sink on radiative heat transfer to Maxwell nanofluid subject to revised mass flux condition

    NASA Astrophysics Data System (ADS)

    Khan, M.; Irfan, M.; Khan, W. A.

    2018-06-01

    Nanofluids have noteworthy properties that have attracted numerous investigators because of their applications in nanotechnology and nanoscience. In this study, a mathematical model of the 2D flow of a Maxwell nanofluid over a stretched cylinder is established. Heat transfer is analyzed in the presence of thermal radiation and a heat source/sink. Moreover, the revised nanoparticle mass flux condition is adopted in this analysis. This approach is more realistic: it assumes that the nanoparticle flux at the surface is zero, so the nanoparticle fraction adjusts itself accordingly. Using suitable transformations, the governing PDEs are converted into ODEs and then solved analytically via the homotopy analysis method (HAM). The results are plotted and discussed for the relevant physical parameters. It is observed that an increase in the Deborah number β decreases the fluid temperature, whereas the temperature increases with the radiation parameter Rd. Furthermore, the concentration of the Maxwell fluid responds in opposite ways to the Brownian motion parameter Nb and the thermophoresis parameter Nt.
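
    The "revised mass flux condition" referred to above is, in its standard (Kuznetsov-Nield) form, a zero net nanoparticle flux at the wall:

    $$ D_B\,\frac{\partial C}{\partial r} + \frac{D_T}{T_\infty}\,\frac{\partial T}{\partial r} = 0 \;\; \text{at the surface}, \qquad \text{i.e.} \qquad N_b\,\phi'(0) + N_t\,\theta'(0) = 0 $$

    in similarity variables, which is why the concentration responds oppositely to Nb and Nt.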

  19. Assessing the Uncertainties on Seismic Source Parameters: Towards Realistic Estimates of Moment Tensor Determinations

    NASA Astrophysics Data System (ADS)

    Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.

    2014-12-01

    The seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Moment tensor catalogues are routinely used by geoscientists; however, few attempts have been made to assess the possible impact of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia mainshock is a representative event, since it is reported in the literature with moment magnitude (Mw) values spanning between 5.63 and 6.12. An uncertainty of ~0.5 magnitude units leads to a controversial knowledge of the real size of the event. The uncertainty associated with this estimate could be critical for the inference of other seismological parameters, suggesting caution in seismic hazard assessment, Coulomb stress transfer determination and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effect of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions depending on the number, epicentral distance and azimuth of the stations used. We stress that the estimate of seismic moment from moment tensor solutions, as well as the estimates of the other kinematic source parameters, cannot be considered absolute values and must be reported with their related uncertainties, within a reproducible framework characterized by disclosed assumptions and explicit processing workflows.

  20. 3D numerical simulations of negative hydrogen ion extraction using realistic plasma parameters, geometry of the extraction aperture and full 3D magnetic field map

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Franzen, P.; Fantz, U.; Minea, T.

    2014-02-01

    Decreasing the co-extracted electron current while simultaneously keeping the negative ion (NI) current sufficiently high is a crucial issue in the development of the plasma source system for the ITER Neutral Beam Injector. To support finding the best extraction conditions, the 3D Particle-in-Cell Monte Carlo Collision electrostatic code ONIX (Orsay Negative Ion eXtraction) has been developed. Close collaboration with experiments and other numerical models allows performing realistic simulations with relevant input parameters: plasma properties, geometry of the extraction aperture, full 3D magnetic field map, etc. For the first time, ONIX has been benchmarked against the commercial positive-ion tracing code KOBRA3D, and very good agreement in terms of the meniscus position and depth has been found. Simulations of NI extraction with different e/NI ratios in the bulk plasma show that direct extraction of surface-produced NIs is highly relevant for obtaining extracted NI currents comparable to the experimental results from the BATMAN testbed.

  1. Improving Forecasts Through Realistic Uncertainty Estimates: A Novel Data Driven Method for Model Uncertainty Quantification in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.

    2016-12-01

    Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(x_t | x_{t-1}). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.

  2. Joint probabilistic determination of earthquake location and velocity structure: application to local and regional events

    NASA Astrophysics Data System (ADS)

    Beucler, E.; Haugmard, M.; Mocquet, A.

    2016-12-01

    The most widely used inversion schemes to locate earthquakes are based on iterative linearized least-squares algorithms and use a priori knowledge of the propagation medium. When only a small number of observations is available, for moderate events for instance, these methods may lead to large trade-offs between the outputs and both the velocity model and the initial set of hypocentral parameters. We present a joint structure-source determination approach using Bayesian inference. Continuous Monte Carlo sampling, using Markov chains, generates models within a broad range of parameters, distributed according to the unknown posterior distributions. The non-linear exploration of both the seismic structure (velocity and thickness) and the source parameters relies on a fast forward problem using 1-D travel time computations. The a posteriori covariances between parameters (hypocentre depth, origin time and seismic structure among others) are computed and explicitly documented. This method decreases the influence of the surrounding seismic network geometry (sparse and/or azimuthally inhomogeneous) and of an overly constrained velocity structure by inferring realistic distributions of the hypocentral parameters. Our algorithm is successfully used to accurately locate events of the Armorican Massif (western France), which is characterized by moderate and apparently diffuse local seismicity.
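
    A minimal sketch of such a joint Metropolis sampler, with a deliberately crude forward model (one uniform crustal P velocity, straight rays); all priors, step sizes, and the forward model are illustrative assumptions, not those of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def travel_times(m, stations):
        """Toy forward model: hypocentre (x, y, z), origin time t0, and a
        single crustal P velocity v -- straight-ray travel times."""
        x, y, z, t0, v = m
        d = np.sqrt((stations[:, 0] - x)**2 + (stations[:, 1] - y)**2 + z**2)
        return t0 + d / v

    def log_post(m, t_obs, stations, sigma=0.1):
        x, y, z, t0, v = m
        if not (0.0 < z < 50.0 and 4.0 < v < 8.0):   # broad priors
            return -np.inf
        r = t_obs - travel_times(m, stations)
        return -0.5 * np.sum((r / sigma) ** 2)

    # Synthetic data from a "true" model, then Metropolis sampling of
    # hypocentre AND velocity jointly.
    stations = rng.uniform(-50, 50, size=(8, 2))      # sparse network (km)
    m_true = np.array([5.0, -3.0, 12.0, 0.0, 6.0])
    t_obs = travel_times(m_true, stations) + rng.normal(0, 0.1, 8)

    m = np.array([0.0, 0.0, 10.0, 0.0, 5.5])
    step = np.array([1.0, 1.0, 1.5, 0.05, 0.1])
    lp, samples = log_post(m, t_obs, stations), []
    for _ in range(20_000):
        prop = m + step * rng.standard_normal(5)
        lp_prop = log_post(prop, t_obs, stations)
        if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject
            m, lp = prop, lp_prop
        samples.append(m.copy())
    depth = np.array(samples)[5000:, 2]
    print(depth.mean(), depth.std())   # posterior on hypocentral depth
    ```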

  3. Aeolus End-To-End Simulator and Wind Retrieval Algorithms up to Level 1B

    NASA Astrophysics Data System (ADS)

    Reitebuch, Oliver; Marksteiner, Uwe; Rompel, Marc; Meringer, Markus; Schmidt, Karsten; Huber, Dorit; Nikolaus, Ines; Dabas, Alain; Marshall, Jonathan; de Bruin, Frank; Kanitz, Thomas; Straume, Anne-Grete

    2018-04-01

    The first wind lidar in space, ALADIN, will be deployed on ESA's Aeolus mission. In order to assess the performance of ALADIN and to optimize the wind retrieval and calibration algorithms, an end-to-end simulator was developed. This allows realistic simulations of the data downlinked by Aeolus. Together with the operational processors, this setup is used to assess random and systematic error sources and to perform sensitivity studies on the influence of atmospheric and instrument parameters.

  4. X-rays from supernova 1987A

    NASA Technical Reports Server (NTRS)

    Xu, Yueming; Sutherland, Peter; Mccray, Richard; Ross, Randy R.

    1988-01-01

    Detailed calculations of the development of the X-ray spectrum of 1987A are presented using more realistic models for the supernova composition and density structure provided by Woosley. It is shown how the emergence of the X-ray spectrum depends on the parameters of the model and the nature of its central energy source. It is shown that the soft X-ray spectrum should be dominated by a 6.4 keV Fe K(alpha) emission line that could be observed by a sensitive X-ray telescope.

  5. Material impacts and heat flux characterization of an electrothermal plasma source with an applied magnetic field

    NASA Astrophysics Data System (ADS)

    Gebhart, T. E.; Martinez-Rodriguez, R. A.; Baylor, L. R.; Rapp, J.; Winfrey, A. L.

    2017-08-01

    To produce a realistic tokamak-like plasma environment in a linear plasma device, a transient source is needed to deliver heat and particle fluxes similar to those seen in an edge localized mode (ELM). ELMs in future large tokamaks will deliver heat fluxes of ~1 GW/m² to the divertor plasma facing components at a few Hz. An electrothermal plasma source can deliver heat fluxes of this magnitude. These sources operate in an ablative arc regime driven by a DC capacitive discharge. An electrothermal source was configured with two pulse lengths and tested under a solenoidal magnetic field to determine the resulting impact on liner ablation, plasma parameters, and delivered heat flux. The arc travels through and ablates a boron nitride liner and strikes a tungsten plate. The tungsten target plate is analyzed for surface damage using a scanning electron microscope.

  6. On the relation between correlation dimension, approximate entropy and sample entropy parameters, and a fast algorithm for their calculation

    NASA Astrophysics Data System (ADS)

    Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw

    2012-12-01

    We explore the relation between correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To get an exact algorithmic relation between the three parameters we construct a very fast algorithm for simultaneous calculations of the above, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10⁴ points within minutes with the use of an average notebook computer.
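
    For concreteness, here is a compact (vectorized, but not the authors' optimized) sample-entropy implementation that, like the algorithm described, uses every point of the series as a template rather than the usual 10%.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r_frac=0.2):
        """SampEn(m, r) = -ln(A/B): B counts template pairs matching at
        length m, A those still matching at length m+1 (Chebyshev distance
        <= r). All N-m templates are used -- the 'full series' choice."""
        x = np.asarray(x, dtype=float)
        r = r_frac * x.std()

        def match_count(mm):
            n = x.size - m   # same template count for lengths m and m+1
            emb = np.lib.stride_tricks.sliding_window_view(x, mm)[:n]
            # pairwise Chebyshev distances between all templates
            d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
            return np.triu(d <= r, k=1).sum()   # pairs, self-matches excluded

        B, A = match_count(m), match_count(m + 1)
        return -np.log(A / B)

    rng = np.random.default_rng(0)
    print(sample_entropy(rng.standard_normal(1000)))   # ~2.2 for white noise
    ```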

  7. ON THE MAGNETIC FIELD OF PULSARS WITH REALISTIC NEUTRON STAR CONFIGURATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belvedere, R.; Rueda, Jorge A.; Ruffini, R., E-mail: riccardo.belvedere@icra.it, E-mail: jorge.rueda@icra.it, E-mail: ruffini@icra.it

    2015-01-20

    We have recently developed a neutron star model fulfilling global and not local charge neutrality, both in the static and in the uniformly rotating cases. The model is described by the coupled Einstein-Maxwell-Thomas-Fermi equations, in which all fundamental interactions are accounted for in the framework of general relativity and relativistic mean field theory. Uniform rotation is introduced following Hartle's formalism. We show that the use of realistic parameters of rotating neutron stars, obtained from numerical integration of the self-consistent axisymmetric general relativistic equations of equilibrium, leads to values of the magnetic field and radiation efficiency of pulsars that are very different from estimates based on fiducial parameters that assume a neutron star mass M = 1.4 M⊙, radius R = 10 km, and moment of inertia I = 10⁴⁵ g cm². In addition, we compare and contrast the magnetic field inferred from the traditional Newtonian rotating magnetic dipole model with respect to the one obtained from its general relativistic analog, which takes into account the effect of the finite size of the source. We apply these considerations to the specific high-magnetic field pulsar class and show that, indeed, all of these sources can be described as canonical pulsars driven by the rotational energy of the neutron star, and have magnetic fields lower than the quantum critical field for any value of the neutron star mass.

  8. The rotation-powered nature of some soft gamma-ray repeaters and anomalous X-ray pulsars

    NASA Astrophysics Data System (ADS)

    Coelho, Jaziel G.; Cáceres, D. L.; de Lima, R. C. R.; Malheiro, M.; Rueda, J. A.; Ruffini, R.

    2017-03-01

    Context. Soft gamma-ray repeaters (SGRs) and anomalous X-ray pulsars (AXPs) are slowly rotating isolated pulsars whose energy reservoir is still a matter of debate. Adopting fiducial neutron star (NS) parameters, mass M = 1.4 M⊙, radius R = 10 km, and moment of inertia I = 10⁴⁵ g cm², the rotational energy loss, Ė_rot, is lower than the observed luminosity (dominated by the X-rays), L_X, for many of the sources. Aims: We investigate the possibility that some members of this family could be canonical rotation-powered pulsars using realistic NS structure parameters instead of fiducial values. Methods: We compute the NS mass, radius, moment of inertia and angular momentum from numerical integration of the axisymmetric general relativistic equations of equilibrium. We then compute the entire range of allowed values of the rotational energy loss, Ė_rot, for the observed values of rotation period P and spin-down rate Ṗ. We also estimate the surface magnetic field using a general relativistic model of a rotating magnetic dipole. Results: We show that realistic NS parameters lower the estimated values of the magnetic field and radiation efficiency, L_X/Ė_rot, with respect to estimates based on fiducial NS parameters. We show that nine SGRs/AXPs can be described as canonical pulsars driven by the NS rotational energy, for L_X computed in the soft (2-10 keV) X-ray band. We compute the range of NS masses for which L_X/Ė_rot < 1. We discuss the observed hard X-ray emission in three sources of the group of nine potentially rotation-powered NSs. This additional hard X-ray component dominates over the soft one, leading to L_X/Ė_rot > 1 in two of them. Conclusions: We show that nine SGRs/AXPs can be rotation-powered NSs if we analyze their X-ray luminosity in the soft 2-10 keV band. Interestingly, four of them show radio emission and six have been associated with supernova remnants (including Swift J1834.9-0846, the first SGR observed with a surrounding wind nebula). These observations give additional support to a natural explanation of these sources in terms of ordinary pulsars. Including the hard X-ray emission observed in three sources of the group of potential rotation-powered NSs, the number of sources with L_X/Ė_rot < 1 becomes seven. It remains open to verification: 1) the accuracy of the estimated distances, and 2) the possible contribution of the associated supernova remnants to the hard X-ray emission.
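
    The two quantities being compared rest on the standard relations, given here in their Newtonian point-dipole form (Gaussian units, orthogonal rotator); the paper's point is precisely that realistic I and R, and finite-size general relativistic corrections, change the numbers:

    $$ \dot E_{rot} = 4\pi^2 I\,\frac{\dot P}{P^3}, \qquad B \simeq \left(\frac{3c^3 I}{8\pi^2 R^6}\,P\dot P\right)^{1/2} $$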

  9. Simulating a transmon implementation of the surface code, Part II

    NASA Astrophysics Data System (ADS)

    O'Brien, Thomas; Tarasinski, Brian; Rol, Adriaan; Bultink, Niels; Fu, Xiang; Criger, Ben; Dicarlo, Leonardo

    The majority of quantum error correcting circuit simulations use Pauli error channels, as they can be efficiently calculated. This raises two questions: what is the effect of more complicated physical errors on the logical qubit error rate, and how much more efficient can decoders become when accounting for realistic noise? To answer these questions, we design a minimum-weight perfect matching decoder parametrized by a physically motivated noise model and test it on the full density matrix simulation of Surface-17, a distance-3 surface code. We compare performance against other decoders for a range of physical parameters. Particular attention is paid to realistic sources of error for transmon qubits in a circuit QED architecture, and to the requirements for real-time decoding via an FPGA. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
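
    The decoder family referred to can be illustrated with a minimal matching step: pair up syndrome defects with minimum total weight. This toy uses plain distance-based weights via networkx, whereas the decoder described parametrizes the weights with a physically motivated noise model; the coordinates and weights here are placeholders.

    ```python
    import itertools
    import networkx as nx

    def match_defects(defects):
        """Minimum-weight perfect matching of syndrome defects.
        networkx maximizes, so use negated Manhattan distance as the weight
        (a stand-in for -log(error probability) in a real decoder)."""
        g = nx.Graph()
        for (i, p), (j, q) in itertools.combinations(enumerate(defects), 2):
            dist = abs(p[0] - q[0]) + abs(p[1] - q[1])
            g.add_edge(i, j, weight=-dist)
        return nx.max_weight_matching(g, maxcardinality=True)

    # Four defects on a small grid; matched pairs define correction chains.
    defects = [(0, 1), (0, 2), (2, 0), (2, 2)]
    print(match_defects(defects))   # e.g. {(0, 1), (2, 3)}
    ```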

  10. FDTD Modeling of LEMP Propagation in the Earth-Ionosphere Waveguide With Emphasis on Realistic Representation of Lightning Source

    NASA Astrophysics Data System (ADS)

    Tran, Thang H.; Baba, Yoshihiro; Somu, Vijaya B.; Rakov, Vladimir A.

    2017-12-01

    The finite difference time domain (FDTD) method in the 2-D cylindrical coordinate system was used to compute the nearly full-frequency-bandwidth vertical electric field and azimuthal magnetic field waveforms produced on the ground surface by lightning return strokes. The lightning source was represented by the modified transmission-line model with linear current decay with height, which was implemented in the FDTD computations as an appropriate vertical phased-current-source array. The conductivity of atmosphere was assumed to increase exponentially with height, with different conductivity profiles being used for daytime and nighttime conditions. The fields were computed at distances ranging from 50 to 500 km. Sky waves (reflections from the ionosphere) were identified in computed waveforms and used for estimation of apparent ionospheric reflection heights. It was found that our model reproduces reasonably well the daytime electric field waveforms measured at different distances and simulated (using a more sophisticated propagation model) by Qin et al. (2017). Sensitivity of model predictions to changes in the parameters of atmospheric conductivity profile, as well as influences of the lightning source characteristics (current waveshape parameters, return-stroke speed, and channel length) and ground conductivity were examined.
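
    The source representation named above, the modified transmission-line model with linear current decay (MTLL), specifies the channel current at height z' in terms of the channel-base current:

    $$ I(z', t) = \left(1 - \frac{z'}{H}\right) I\!\left(0,\; t - \frac{z'}{v}\right), \qquad t \ge z'/v,\;\; 0 \le z' \le H, $$

    with v the return-stroke speed and H the channel length, both of which were varied in the sensitivity study.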

  11. LENSED: a code for the forward reconstruction of lenses and sources from strong lensing observations

    NASA Astrophysics Data System (ADS)

    Tessore, Nicolas; Bellagamba, Fabio; Metcalf, R. Benton

    2016-12-01

    Robust modelling of strong lensing systems is fundamental to exploit the information they contain about the distribution of matter in galaxies and clusters. In this work, we present LENSED, a new code which performs forward parametric modelling of strong lenses. LENSED takes advantage of a massively parallel ray-tracing kernel to perform the necessary calculations on a modern graphics processing unit (GPU). This makes the precise rendering of the background lensed sources much faster, and allows the simultaneous optimization of tens of parameters for the selected model. With a single run, the code is able to obtain the full posterior probability distribution for the lens light, the mass distribution and the background source at the same time. LENSED is first tested on mock images which reproduce realistic space-based observations of lensing systems. In this way, we show that it is able to recover unbiased estimates of the lens parameters, even when the sources do not follow exactly the assumed model. Then, we apply it to a subsample of the Sloan Lens ACS Survey lenses, in order to demonstrate its use on real data. The results generally agree with the literature, and highlight the flexibility and robustness of the algorithm.
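
    The forward model at the heart of such codes is the lens equation, mapping image-plane positions θ to source-plane positions β through the deflection field of the parametrized mass model:

    $$ \boldsymbol\beta = \boldsymbol\theta - \boldsymbol\alpha(\boldsymbol\theta), $$

    evaluated per pixel, which is the part that parallelizes naturally on the GPU; the source model rendered at β is then compared with the data to form the likelihood.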

  12. Toxicity of aged gasoline exhaust particles to normal and diseased airway epithelia

    NASA Astrophysics Data System (ADS)

    Künzi, Lisa; Krapf, Manuel; Daher, Nancy; Dommen, Josef; Jeannet, Natalie; Schneider, Sarah; Platt, Stephen; Slowik, Jay G.; Baumlin, Nathalie; Salathe, Matthias; Prévôt, André S. H.; Kalberer, Markus; Strähl, Christof; Dümbgen, Lutz; Sioutas, Constantinos; Baltensperger, Urs; Geiser, Marianne

    2015-06-01

    Particulate matter (PM) pollution is a leading cause of premature death, particularly in those with pre-existing lung disease. A causative link between particle properties and adverse health effects remains unestablished mainly due to complex and variable physico-chemical PM parameters. Controlled laboratory experiments are required. Generating atmospherically realistic aerosols and performing cell-exposure studies at relevant particle-doses are challenging. Here we examine gasoline-exhaust particle toxicity from a Euro-5 passenger car in a uniquely realistic exposure scenario, combining a smog chamber simulating atmospheric ageing, an aerosol enrichment system varying particle number concentration independent of particle chemistry, and an aerosol deposition chamber physiologically delivering particles on air-liquid interface (ALI) cultures reproducing normal and susceptible health status. Gasoline-exhaust is an important PM source with largely unknown health effects. We investigated acute responses of fully-differentiated normal, distressed (antibiotics-treated) normal, and cystic fibrosis human bronchial epithelia (HBE), and a proliferating, single-cell type bronchial epithelial cell-line (BEAS-2B). We show that a single, short-term exposure to realistic doses of atmospherically-aged gasoline-exhaust particles impairs epithelial key-defence mechanisms, rendering it more vulnerable to subsequent hazards. We establish dose-response curves at realistic particle-concentration levels. Significant differences between cell models suggest the use of fully-differentiated HBE is most appropriate in future toxicity studies.

  13. Toxicity of aged gasoline exhaust particles to normal and diseased airway epithelia

    PubMed Central

    Künzi, Lisa; Krapf, Manuel; Daher, Nancy; Dommen, Josef; Jeannet, Natalie; Schneider, Sarah; Platt, Stephen; Slowik, Jay G.; Baumlin, Nathalie; Salathe, Matthias; Prévôt, André S. H.; Kalberer, Markus; Strähl, Christof; Dümbgen, Lutz; Sioutas, Constantinos; Baltensperger, Urs; Geiser, Marianne

    2015-01-01

    Particulate matter (PM) pollution is a leading cause of premature death, particularly in those with pre-existing lung disease. A causative link between particle properties and adverse health effects remains unestablished mainly due to complex and variable physico-chemical PM parameters. Controlled laboratory experiments are required. Generating atmospherically realistic aerosols and performing cell-exposure studies at relevant particle-doses are challenging. Here we examine gasoline-exhaust particle toxicity from a Euro-5 passenger car in a uniquely realistic exposure scenario, combining a smog chamber simulating atmospheric ageing, an aerosol enrichment system varying particle number concentration independent of particle chemistry, and an aerosol deposition chamber physiologically delivering particles on air-liquid interface (ALI) cultures reproducing normal and susceptible health status. Gasoline-exhaust is an important PM source with largely unknown health effects. We investigated acute responses of fully-differentiated normal, distressed (antibiotics-treated) normal, and cystic fibrosis human bronchial epithelia (HBE), and a proliferating, single-cell type bronchial epithelial cell-line (BEAS-2B). We show that a single, short-term exposure to realistic doses of atmospherically-aged gasoline-exhaust particles impairs epithelial key-defence mechanisms, rendering it more vulnerable to subsequent hazards. We establish dose-response curves at realistic particle-concentration levels. Significant differences between cell models suggest the use of fully-differentiated HBE is most appropriate in future toxicity studies. PMID:26119831

  14. Toxicity of aged gasoline exhaust particles to normal and diseased airway epithelia.

    PubMed

    Künzi, Lisa; Krapf, Manuel; Daher, Nancy; Dommen, Josef; Jeannet, Natalie; Schneider, Sarah; Platt, Stephen; Slowik, Jay G; Baumlin, Nathalie; Salathe, Matthias; Prévôt, André S H; Kalberer, Markus; Strähl, Christof; Dümbgen, Lutz; Sioutas, Constantinos; Baltensperger, Urs; Geiser, Marianne

    2015-06-29

    Particulate matter (PM) pollution is a leading cause of premature death, particularly in those with pre-existing lung disease. A causative link between particle properties and adverse health effects remains unestablished mainly due to complex and variable physico-chemical PM parameters. Controlled laboratory experiments are required. Generating atmospherically realistic aerosols and performing cell-exposure studies at relevant particle-doses are challenging. Here we examine gasoline-exhaust particle toxicity from a Euro-5 passenger car in a uniquely realistic exposure scenario, combining a smog chamber simulating atmospheric ageing, an aerosol enrichment system varying particle number concentration independent of particle chemistry, and an aerosol deposition chamber physiologically delivering particles on air-liquid interface (ALI) cultures reproducing normal and susceptible health status. Gasoline-exhaust is an important PM source with largely unknown health effects. We investigated acute responses of fully-differentiated normal, distressed (antibiotics-treated) normal, and cystic fibrosis human bronchial epithelia (HBE), and a proliferating, single-cell type bronchial epithelial cell-line (BEAS-2B). We show that a single, short-term exposure to realistic doses of atmospherically-aged gasoline-exhaust particles impairs epithelial key-defence mechanisms, rendering it more vulnerable to subsequent hazards. We establish dose-response curves at realistic particle-concentration levels. Significant differences between cell models suggest the use of fully-differentiated HBE is most appropriate in future toxicity studies.

  15. Instruction-level performance modeling and characterization of multimedia applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Cameron, K.W.

    1999-06-01

    One of the challenges in characterizing and modeling realistic multimedia applications is the lack of access to source code. On-chip performance counters effectively resolve this problem by monitoring run-time behavior at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. The technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI₀, the CPI without memory effect, and they quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. The results show promise for code characterization and empirical/analytical modeling.
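
    The decomposition underlying CPI₀ is, in a standard form consistent with the definition above (hedged as the textbook decomposition, not necessarily the authors' exact formula):

    $$ CPI = CPI_0 + CPI_{mem}, \qquad CPI_{mem} = \frac{\text{memory accesses}}{\text{instruction}} \times \text{miss rate} \times \text{miss penalty}, $$

    so that CPI₀ isolates the architectural (non-memory) component the authors model from the counters.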

  16. Leptogenesis from gravity waves in models of inflation.

    PubMed

    Alexander, Stephon H S; Peskin, Michael E; Sheikh-Jabbari, M M

    2006-03-03

    We present a new mechanism for creating the observed cosmic matter-antimatter asymmetry which satisfies all three Sakharov conditions from one common thread, gravitational waves. We generate lepton number through the gravitational anomaly in the lepton number current. The source term comes from elliptically polarized gravity waves that are produced during inflation if the inflaton field contains a CP-odd component. The amount of matter asymmetry generated in our model can be of realistic size for the parameters within the range of some inflationary scenarios and grand unified theories.

  17. Binaural Processing of Multiple Sound Sources

    DTIC Science & Technology

    2016-08-18

    …Sound Source Localization, Identification, and Sound Source Localization When Listeners Move. The CI research was also supported by an NIH grant ("Cochlear Implant Performance in Realistic Listening Environments," Dr. Michael Dorman, Principal Investigator; Dr. William Yost, unpaid advisor).

  18. Evaluation of the communications impact of a low power arcjet thruster

    NASA Technical Reports Server (NTRS)

    Carney, Lynnette M.

    1988-01-01

    The interaction of a 1 kW arcjet thruster plume with a communications signal is evaluated. A two-parameter source flow equation is used to represent the far flow-field distribution of the arcjet plume in a realistic spacecraft configuration. With the plume modelled as a plasma slab, its interaction with a 4 GHz communications signal is then evaluated in terms of signal attenuation and phase shift between transmitting and receiving antennas. Except for propagation paths which pass very near the arcjet source, the impact on transmission appears to be negligible. The dominant signal loss mechanism is refraction of the beam rather than absorption losses due to collisions. However, significant reflection of the signal at the sharp vacuum-plasma boundary may also occur for propagation paths which pass near the source.

  19. Open-source LCA tool for estimating greenhouse gas emissions from crude oil production using field characteristics.

    PubMed

    El-Houjeiri, Hassan M; Brandt, Adam R; Duffy, James E

    2013-06-04

    Existing transportation fuel cycle emissions models are either general and calculate nonspecific values of greenhouse gas (GHG) emissions from crude oil production, or are not available for public review and auditing. We have developed the Oil Production Greenhouse Gas Emissions Estimator (OPGEE) to provide open-source, transparent, rigorous GHG assessments for use in scientific assessment, regulatory processes, and analysis of GHG mitigation options by producers. OPGEE uses petroleum engineering fundamentals to model emissions from oil and gas production operations. We introduce OPGEE and explain the methods and assumptions used in its construction. We run OPGEE on a small set of fictional oil fields and explore model sensitivity to selected input parameters. Results show that upstream emissions from petroleum production operations can vary from 3 gCO2/MJ to over 30 gCO2/MJ using realistic ranges of input parameters. Significant drivers of emissions variation are steam injection rates, water handling requirements, and rates of flaring of associated gas.

  20. 4D volcano gravimetry

    USGS Publications Warehouse

    Battaglia, Maurizio; Gottsmann, J.; Carbone, D.; Fernandez, J.

    2008-01-01

    Time-dependent gravimetric measurements can detect subsurface processes long before magma flow leads to earthquakes or other eruption precursors. The ability of gravity measurements to detect subsurface mass flow is greatly enhanced if gravity measurements are analyzed and modeled with ground-deformation data. Obtaining the maximum information from microgravity studies requires careful evaluation of the layout of network benchmarks, the gravity environmental signal, and the coupling between gravity changes and crustal deformation. When changes in the system under study are fast (hours to weeks), as in hydrothermal systems and restless volcanoes, continuous gravity observations at selected sites can help to capture many details of the dynamics of the intrusive sources. Despite the instrumental effects, mainly caused by atmospheric temperature, results from monitoring at Mt. Etna volcano show that continuous measurements are a powerful tool for monitoring and studying volcanoes. Several analytical and numerical mathematical models can be used to fit gravity and deformation data. Analytical models offer a closed-form description of the volcanic source. In principle, this allows one to readily infer the relative importance of the source parameters. In active volcanic sites such as Long Valley caldera (California, U.S.A.) and Campi Flegrei (Italy), careful use of analytical models and high-quality data sets has produced good results. However, the simplifications that make analytical models tractable might result in misleading volcanological interpretations, particularly when the real crust surrounding the source is far from the homogeneous/isotropic assumption. Using numerical models allows consideration of more realistic descriptions of the sources and of the crust where they are located (e.g., vertical and lateral mechanical discontinuities, complex source geometries, and topography). Applications at Teide volcano (Tenerife) and Campi Flegrei demonstrate the importance of this more realistic description in gravity calculations. © 2008 Society of Exploration Geophysicists. All rights reserved.
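
    The gravity-deformation coupling mentioned above is usually removed to first order with the free-air correction before interpreting mass changes (a standard step in 4D gravimetry, stated here in its simplest form):

    $$ \Delta g_{res} = \Delta g_{obs} - \gamma_{FA}\,\Delta h, \qquad \gamma_{FA} \approx -308.6\ \mu\mathrm{Gal\,m^{-1}}, $$

    where Δh is the observed uplift; it is this residual that the source models are then fit to.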

  1. Bayesian Modal Estimation of the Four-Parameter Item Response Model in Real, Realistic, and Idealized Data Sets.

    PubMed

    Waller, Niels G; Feuerstahler, Leah

    2017-01-01

In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler Item Response Theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in mirt (Chalmers, 2012).
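
    For reference, the four-parameter logistic item response function itself is simple to evaluate; a minimal sketch (with arbitrary item parameters, not values estimated in the study) follows.

    ```python
    import numpy as np

    def irf_4pm(theta, a, b, c, d):
        """Four-parameter logistic item response function:
        P(correct | theta) = c + (d - c) / (1 + exp(-a * (theta - b))),
        where c is the lower (guessing) and d the upper (slipping) asymptote."""
        return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

    theta = np.linspace(-4, 4, 9)
    print(np.round(irf_4pm(theta, a=1.5, b=0.0, c=0.15, d=0.95), 3))
    ```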

  2. Material impacts and heat flux characterization of an electrothermal plasma source with an applied magnetic field

    DOE PAGES

    Gebhart, T. E.; Martinez-Rodriguez, R. A.; Baylor, L. R.; ...

    2017-08-11

To produce a realistic tokamak-like plasma environment in a linear plasma device, a transient source is needed to deliver heat and particle fluxes similar to those seen in an edge localized mode (ELM). ELMs in future large tokamaks will deliver heat fluxes of ~1 GW/m² to the divertor plasma-facing components at a few Hz. An electrothermal plasma source can deliver heat fluxes of this magnitude. These sources operate in an ablative arc regime driven by a DC capacitive discharge. In this work, an electrothermal source was configured with two pulse lengths and tested under a solenoidal magnetic field to determine the resulting impact on liner ablation, plasma parameters, and delivered heat flux. The arc travels through and ablates a boron nitride liner and strikes a tungsten plate. Finally, the tungsten target plate is analyzed for surface damage using a scanning electron microscope.

  3. Material impacts and heat flux characterization of an electrothermal plasma source with an applied magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gebhart, T. E.; Martinez-Rodriguez, R. A.; Baylor, L. R.

To produce a realistic tokamak-like plasma environment in a linear plasma device, a transient source is needed to deliver heat and particle fluxes similar to those seen in an edge localized mode (ELM). ELMs in future large tokamaks will deliver heat fluxes of ~1 GW/m² to the divertor plasma-facing components at a few Hz. An electrothermal plasma source can deliver heat fluxes of this magnitude. These sources operate in an ablative arc regime driven by a DC capacitive discharge. In this work, an electrothermal source was configured with two pulse lengths and tested under a solenoidal magnetic field to determine the resulting impact on liner ablation, plasma parameters, and delivered heat flux. The arc travels through and ablates a boron nitride liner and strikes a tungsten plate. Finally, the tungsten target plate is analyzed for surface damage using a scanning electron microscope.

  4. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power spectrum and underestimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  5. Searches for millisecond pulsations in low-mass X-ray binaries

    NASA Technical Reports Server (NTRS)

    Wood, K. S.; Hertz, P.; Norris, J. P.; Vaughan, B. A.; Michelson, P. F.; Mitsuda, K.; Lewin, W. H. G.; Van Paradijs, J.; Penninx, W.; Van Der Klis, M.

    1991-01-01

High-sensitivity search techniques for millisecond periods are presented and applied to data from the Japanese satellite Ginga and HEAO 1. The search is optimized for pulsed signals whose period, drift rate, and amplitude conform with what is expected for low-mass X-ray binary (LMXB) sources. Consideration is given to how the current understanding of LMXBs guides the search strategy and sets these parameter limits. An optimized one-parameter coherence recovery technique (CRT) developed for recovery of phase coherence is presented. This technique provides a large increase in sensitivity over the method of incoherent summation of Fourier power spectra. The range of spin periods expected from LMXB phenomenology is discussed, the necessary constraints on the application of CRT are described in terms of integration time and orbital parameters, and the residual power unrecovered by the quadratic approximation for realistic cases is estimated.

  6. Surface Current Density Mapping for Identification of Gastric Slow Wave Propagation

    PubMed Central

    Bradshaw, L. A.; Cheng, L. K.; Richards, W. O.; Pullan, A. J.

    2009-01-01

    The magnetogastrogram records clinically relevant parameters of the electrical slow wave of the stomach noninvasively. Besides slow wave frequency, gastric slow wave propagation velocity is a potentially useful clinical indicator of the state of health of gastric tissue, but it is a difficult parameter to determine from noninvasive bioelectric or biomagnetic measurements. We present a method for computing the surface current density (SCD) from multichannel magnetogastrogram recordings that allows computation of the propagation velocity of the gastric slow wave. A moving dipole source model with hypothetical as well as realistic biomagnetometer parameters demonstrates that while a relatively sparse array of magnetometer sensors is sufficient to compute a single average propagation velocity, more detailed information about spatial variations in propagation velocity requires higher density magnetometer arrays. Finally, the method is validated with simultaneous MGG and serosal EMG measurements in a porcine subject. PMID:19403355
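
    A minimal sketch of the final step, estimating propagation velocity by tracking the SCD maximum across time frames, is shown below; the tracked positions are hypothetical stand-ins for values that would be derived from multichannel MGG surface-current maps.

    ```python
    import numpy as np

    # Hypothetical tracked positions (cm) of the SCD maximum over time (s).
    t = np.arange(0.0, 50.0, 10.0)   # one frame every 10 s
    xy = np.array([[0.0, 0.0], [0.9, 0.2], [1.8, 0.5], [2.6, 0.9], [3.5, 1.2]])

    steps = np.diff(xy, axis=0)      # displacement of the peak between frames
    speeds = np.linalg.norm(steps, axis=1) / np.diff(t)
    print(f"mean propagation velocity: {speeds.mean():.3f} cm/s")
    ```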

  7. A Critical Approach to School Mathematical Knowledge: The Case of "Realistic" Problems in Greek Primary School Textbooks for Seven-Year-Old Pupils

    ERIC Educational Resources Information Center

    Zacharos, Konstantinos; Koustourakis, Gerassimos

    2011-01-01

    The reference contexts that accompany the "realistic" problems chosen for teaching mathematical concepts in the first school grades play a major educational role. However, choosing "realistic" problems in teaching is a complex process that must take into account various pedagogical, sociological and psychological parameters.…

  8. Effect of conductor geometry on source localization: Implications for epilepsy studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlitt, H.; Heller, L.; Best, E.

    1994-07-01

We shall discuss the effects of conductor geometry on source localization for applications in epilepsy studies. The most popular conductor model for clinical MEG studies is a homogeneous sphere. However, several studies have indicated that a sphere is a poor model for the head when the sources are deep, as is the case for epileptic foci in the mesial temporal lobe. We believe that replacing the spherical model with a more realistic one in the inverse fitting procedure will improve the accuracy of localizing epileptic sources. In order to include a realistic head model in the inverse problem, we must first solve the forward problem for the realistic conductor geometry. We create a conductor geometry model from MR images, and then solve the forward problem via a boundary integral equation for the electric potential due to a specified primary source. Once the electric potential is known, the magnetic field can be calculated directly. The most time-intensive part of the problem is generating the conductor model; fortunately, this needs to be done only once for each patient. It takes little time to change the primary current and calculate a new magnetic field for use in the inverse fitting procedure. We present the results of a series of computer simulations in which we investigate the localization accuracy due to replacing the spherical model with the realistic head model in the inverse fitting procedure. The data to be fit consist of a computer-generated magnetic field due to a known current dipole in a realistic head model, with added noise. We compare the localization errors when this field is fit using a spherical model to the fit using a realistic head model. Using a spherical model is comparable to what is usually done when localizing epileptic sources in humans, where the conductor model used in the inverse fitting procedure does not correspond to the actual head.

  9. Hemodynamic Changes Caused by Flow Diverters in Rabbit Aneurysm Models: Comparison of Virtual and Realistic FD Deployments Based on Micro-CT Reconstruction

    PubMed Central

    Fang, Yibin; Yu, Ying; Cheng, Jiyong; Wang, Shengzhang; Wang, Kuizhong; Liu, Jian-Min; Huang, Qinghai

    2013-01-01

    Adjusting hemodynamics via flow diverter (FD) implantation is emerging as a novel method of treating cerebral aneurysms. However, most previous FD-related hemodynamic studies were based on virtual FD deployment, which may produce different hemodynamic outcomes than realistic (in vivo) FD deployment. We compared hemodynamics between virtual FD and realistic FD deployments in rabbit aneurysm models using computational fluid dynamics (CFD) simulations. FDs were implanted for aneurysms in 14 rabbits. Vascular models based on rabbit-specific angiograms were reconstructed for CFD studies. Real FD configurations were reconstructed based on micro-CT scans after sacrifice, while virtual FD configurations were constructed with SolidWorks software. Hemodynamic parameters before and after FD deployment were analyzed. According to the metal coverage (MC) of implanted FDs calculated based on micro-CT reconstruction, 14 rabbits were divided into two groups (A, MC >35%; B, MC <35%). Normalized mean wall shear stress (WSS), relative residence time (RRT), inflow velocity, and inflow volume in Group A were significantly different (P<0.05) from virtual FD deployment, but pressure was not (P>0.05). The normalized mean WSS in Group A after realistic FD implantation was significantly lower than that of Group B. All parameters in Group B exhibited no significant difference between realistic and virtual FDs. This study confirmed MC-correlated differences in hemodynamic parameters between realistic and virtual FD deployment. PMID:23823503

  10. Minerva exoplanet detection sensitivity from simulated observations

    NASA Astrophysics Data System (ADS)

    McCrady, Nate; Nava, C.

    2014-01-01

Small rocky planets induce radial velocity signals that are difficult to detect in the presence of stellar noise sources of comparable or larger amplitude. Minerva is a dedicated, robotic observatory that will attain 1 meter per second precision to detect these rocky planets in the habitable zone around nearby stars. We present results of an ongoing project investigating Minerva’s planet detection sensitivity as a function of observational cadence, planet mass, and orbital parameters (period, eccentricity, and argument of periastron). Radial velocity data are simulated with realistic observing cadence, accounting for weather patterns at Mt. Hopkins, Arizona. Instrumental and stellar noise are added to the simulated observations, including effects of oscillation, jitter, starspots and rotation. We extract orbital parameters from the simulated RV data using the RVLIN code. A Monte Carlo analysis is used to explore the parameter space and evaluate planet detection completeness. Our results will inform the Minerva observing strategy by providing a quantitative measure of planet detection sensitivity as a function of orbital parameters and cadence.
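
    A sketch of the kind of signal being simulated: a standard Keplerian radial-velocity curve sampled at irregular cadence with additive Gaussian noise. All orbital parameters and the noise level here are arbitrary placeholders, not Minerva values.

    ```python
    import numpy as np

    def kepler_E(M, e, tol=1e-10):
        """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
        E = M.copy()
        for _ in range(50):
            dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
            E -= dE
            if np.max(np.abs(dE)) < tol:
                break
        return E

    def rv_curve(t, P, K, e, omega, tp):
        """Stellar radial velocity (m/s) for one planet: period P [d],
        semi-amplitude K [m/s], eccentricity e, arg. of periastron omega [rad]."""
        M = 2.0 * np.pi * (t - tp) / P
        E = kepler_E(np.mod(M, 2.0 * np.pi), e)
        nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                              np.sqrt(1 - e) * np.cos(E / 2))
        return K * (np.cos(nu + omega) + e * np.cos(omega))

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 60.0, 40))          # irregular cadence, days
    rv = rv_curve(t, P=12.3, K=1.4, e=0.1, omega=0.5, tp=0.0)
    obs = rv + rng.normal(0.0, 1.0, t.size)          # 1 m/s instrument noise
    ```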

  11. Prediction of Breakthrough Curves for Conservative and Reactive Transport from the Structural Parameters of Highly Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Hansen, S. K.; Haslauer, C. P.; Cirpka, O. A.; Vesselinov, V. V.

    2016-12-01

    It is desirable to predict the shape of breakthrough curves downgradient of a solute source from subsurface structural parameters (as in the small-perturbation macrodispersion theory) both for realistically heterogeneous fields, and at early time, before any sort of Fickian model is applicable. Using a combination of a priori knowledge, large-scale Monte Carlo simulation, and regression techniques, we have developed closed-form predictive expressions for pre- and post-Fickian flux-weighted solute breakthrough curves as a function of distance from the source (in integral scales) and variance of the log hydraulic conductivity field. Using the ensemble of Monte Carlo realizations, we have simultaneously computed error envelopes for the estimated flux-weighted breakthrough, and for the divergence of point breakthrough curves from the flux-weighted average, as functions of the predictive parameters. We have also obtained implied late-time macrodispersion coefficients for highly heterogeneous environments from the breakthrough statistics. This analysis is relevant for the modelling of reactive as well as conservative transport, since for many kinetic sorption and decay reactions, Laplace-domain modification of the breakthrough curve for conservative solute produces the correct curve for the reactive system.
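
    In the Fickian limit, the flux-weighted breakthrough curve reduces to an inverse-Gaussian first-passage density, which makes a useful reference shape against which pre-Fickian behavior can be compared; a minimal sketch with hypothetical velocity and dispersion values follows.

    ```python
    import numpy as np

    def btc_fickian(t, x, v, D):
        """Flux-weighted breakthrough curve (first-passage density) for 1D
        advection-dispersion at distance x: inverse-Gaussian in time."""
        return x / (2.0 * np.sqrt(np.pi * D * t**3)) * \
            np.exp(-(x - v * t) ** 2 / (4.0 * D * t))

    t = np.linspace(0.1, 200.0, 400)            # days
    f = btc_fickian(t, x=50.0, v=0.5, D=2.0)    # hypothetical field values
    print(f"peak arrival ~ {t[np.argmax(f)]:.1f} d; "
          f"mean arrival ~ {np.trapz(f * t, t):.1f} d")  # mean ~ x/v = 100 d
    ```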

  12. VALIDATION OF THE CORONAL THICK TARGET SOURCE MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleishman, Gregory D.; Xu, Yan; Nita, Gelu N.

    2016-01-10

We present detailed 3D modeling of a dense, coronal thick-target X-ray flare using the GX Simulator tool, photospheric magnetic measurements, and microwave imaging and spectroscopy data. The developed model offers a remarkable agreement between the synthesized and observed spectra and images in both X-ray and microwave domains, which validates the entire model. The flaring loop parameters are chosen to reproduce the emission measure, temperature, and the nonthermal electron distribution at low energies derived from the X-ray spectral fit, while the remaining parameters, unconstrained by the X-ray data, are selected so as to match the microwave images and total power spectra. The modeling suggests that the accelerated electrons are trapped in the coronal part of the flaring loop, but away from where the magnetic field is minimal, and, thus, demonstrates that the data are clearly inconsistent with electron magnetic trapping in the weak diffusion regime mediated by the Coulomb collisions. Thus, the modeling supports the interpretation of the coronal thick-target sources as sites of electron acceleration in flares and supplies us with a realistic 3D model with physical parameters of the acceleration region and flaring loop.

  13. Consistency relations for sharp features in the primordial spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mooij, Sander; Palma, Gonzalo A.; Panotopoulos, Grigoris

We study the generation of sharp features in the primordial spectra within the framework of effective field theory of inflation, wherein curvature perturbations are the consequence of the dynamics of a single scalar degree of freedom. We identify two sources in the generation of features: rapid variations of the sound speed c_s (at which curvature fluctuations propagate) and rapid variations of the expansion rate H during inflation. With this in mind, we propose a non-trivial relation linking these two quantities that allows us to study the generation of sharp features in realistic scenarios where features are the result of the simultaneous occurrence of these two sources. This relation depends on a single parameter with a value determined by the particular model (and its numerical input) responsible for the rapidly varying background. As a consequence, we find a one-parameter consistency relation between the shape and size of features in the bispectrum and features in the power spectrum. To substantiate this result, we discuss several examples of models for which this one-parameter relation (between c_s and H) holds, including models in which features in the spectra are both sudden and resonant.

  14. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato

    2017-12-01

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. If accurate volcanic flow parameters are known, application of this technique could be broadly applied to enable near real-time calculation of eruption mass flow rates and total masses. These critical input parameters for volcanic eruption modeling and monitoring are not currently available.
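
    The essence of the monopole assumption can be sketched compactly: for a simple acoustic monopole the far-field pressure is the time derivative of the source mass flow rate divided by 4πr, so the flow rate follows by time-integrating the recorded pressure. The pulse amplitude, range, and duration below are illustrative only, and real applications (as in this study) use full numerical Green's functions rather than this free-field shortcut.

    ```python
    import numpy as np

    fs = 100.0                                  # sampling rate, Hz
    t = np.arange(0.0, 30.0, 1.0 / fs)
    r = 4000.0                                  # source-receiver range, m
    p = 20.0 * np.exp(-((t - 5.0) ** 2) / 0.5)  # hypothetical explosion pulse, Pa

    # p(r, t) = qdot(t - r/c) / (4 pi r)  =>  q(t) = 4 pi r * integral of p dt
    q = 4.0 * np.pi * r * np.cumsum(p) / fs     # mass flow rate, kg/s
    print(f"peak mass flow rate ~ {q.max():.2e} kg/s")
    ```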

  15. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power spectrum and underestimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
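
    As an illustration of the first extension, the sketch below draws source flux densities from a (here singly) broken power-law differential count dN/dS; the break position and slopes are arbitrary choices for illustration, not fitted values from the paper.

    ```python
    import numpy as np

    def sample_broken_powerlaw(n, s_min, s_break, s_max, a1, a2, rng):
        """Draw n flux densities S from a broken power-law count:
        dN/dS ~ S**-a1 below s_break and S**-a2 above (a1, a2 > 1),
        with the two segments matched continuously at the break."""
        def seg(lo, hi, a, m):
            # inverse-CDF sampling within one power-law segment
            u = rng.random(m)
            return (lo**(1 - a) + u * (hi**(1 - a) - lo**(1 - a))) ** (1 / (1 - a))
        # expected number in each segment (up to a common normalization)
        n1 = (s_min**(1 - a1) - s_break**(1 - a1)) / (a1 - 1)
        n2 = s_break**(a2 - a1) * (s_break**(1 - a2) - s_max**(1 - a2)) / (a2 - 1)
        m1 = rng.binomial(n, n1 / (n1 + n2))
        return np.concatenate([seg(s_min, s_break, a1, m1),
                               seg(s_break, s_max, a2, n - m1)])

    rng = np.random.default_rng(1)
    S = sample_broken_powerlaw(10000, 1e-3, 0.1, 10.0, 1.6, 2.5, rng)  # Jy
    print(f"{S.size} sources, median flux {np.median(S) * 1e3:.2f} mJy")
    ```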

  16. Bayesian inference on EMRI signals using low frequency approximations

    NASA Astrophysics Data System (ADS)

    Ali, Asad; Christensen, Nelson; Meyer, Renate; Röver, Christian

    2012-07-01

Extreme mass ratio inspirals (EMRIs) are thought to be one of the most exciting gravitational wave sources to be detected with LISA. Due to their complicated nature and weak amplitudes, the detection and parameter estimation of such sources is a challenging task. In this paper we present a statistical methodology based on Bayesian inference in which the estimation of parameters is carried out by advanced Markov chain Monte Carlo (MCMC) algorithms such as parallel tempering MCMC. We analysed high and medium mass EMRI systems that fall well inside the low frequency range of LISA. In the context of the Mock LISA Data Challenges, our investigation and results are also the first instance in which a fully Markovian algorithm is applied for EMRI searches. Results show that our algorithm worked well in recovering EMRI signals from different (simulated) LISA data sets having single and multiple EMRI sources and holds great promise for posterior computation under more realistic conditions. The search and estimation methods presented in this paper are general in their nature, and can be applied in any other scenario such as AdLIGO, AdVIRGO and Einstein Telescope with their respective response functions.

  17. Creating an anthropomorphic digital MR phantom—an extensible tool for comparing and evaluating quantitative imaging algorithms

    NASA Astrophysics Data System (ADS)

    Bosca, Ryan J.; Jackson, Edward F.

    2016-01-01

    Assessing and mitigating the various sources of bias and variance associated with image quantification algorithms is essential to the use of such algorithms in clinical research and practice. Assessment is usually accomplished with grid-based digital reference objects (DRO) or, more recently, digital anthropomorphic phantoms based on normal human anatomy. Publicly available digital anthropomorphic phantoms can provide a basis for generating realistic model-based DROs that incorporate the heterogeneity commonly found in pathology. Using a publicly available vascular input function (VIF) and digital anthropomorphic phantom of a normal human brain, a methodology was developed to generate a DRO based on the general kinetic model (GKM) that represented realistic and heterogeneously enhancing pathology. GKM parameters were estimated from a deidentified clinical dynamic contrast-enhanced (DCE) MRI exam. This clinical imaging volume was co-registered with a discrete tissue model, and model parameters estimated from clinical images were used to synthesize a DCE-MRI exam that consisted of normal brain tissues and a heterogeneously enhancing brain tumor. An example application of spatial smoothing was used to illustrate potential applications in assessing quantitative imaging algorithms. A voxel-wise Bland-Altman analysis demonstrated negligible differences between the parameters estimated with and without spatial smoothing (using a small radius Gaussian kernel). In this work, we reported an extensible methodology for generating model-based anthropomorphic DROs containing normal and pathological tissue that can be used to assess quantitative imaging algorithms.
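
    A minimal sketch of the GKM (standard Tofts) forward model used to synthesize enhancement curves: tissue concentration is the plasma input convolved with an exponential kernel. The toy input function and parameter values below are placeholders for the published VIF and the estimates taken from clinical data.

    ```python
    import numpy as np

    def gkm_ct(t, cp, ktrans, ve):
        """General kinetic (Tofts) model: Ct(t) is the plasma input cp
        convolved with Ktrans * exp(-(Ktrans / ve) * t)."""
        dt = t[1] - t[0]
        kernel = ktrans * np.exp(-(ktrans / ve) * t)
        return np.convolve(cp, kernel)[: t.size] * dt

    t = np.arange(0.0, 5.0, 1.0 / 60.0)                           # minutes
    cp = 5.0 * (t - 0.5) * np.exp(-(t - 0.5) / 0.25) * (t > 0.5)  # toy VIF, mM
    ct_tumor = gkm_ct(t, cp, ktrans=0.25, ve=0.30)   # hypothetical voxel values
    ct_normal = gkm_ct(t, cp, ktrans=0.02, ve=0.10)
    ```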

  18. Rupture Dynamics and Seismic Radiation on Rough Faults for Simulation-Based PSHA

    NASA Astrophysics Data System (ADS)

    Mai, P. M.; Galis, M.; Thingbaijam, K. K. S.; Vyas, J. C.; Dunham, E. M.

    2017-12-01

Simulation-based ground-motion predictions may augment PSHA studies in data-poor regions or provide additional shaking estimations, including seismic waveforms, for critical facilities. Validation and calibration of such simulation approaches, based on observations and GMPEs, is important for engineering applications, while seismologists push to include the precise physics of the earthquake rupture process and seismic wave propagation in 3D heterogeneous Earth. Geological faults comprise both large-scale segmentation and small-scale roughness that determine the dynamics of the earthquake rupture process and its radiated seismic wavefield. We investigate how different parameterizations of fractal fault roughness affect the rupture evolution and resulting near-fault ground motions. Rupture incoherence induced by fault roughness generates realistic ω^-2 decay for high-frequency displacement amplitude spectra. Waveform characteristics and GMPE-based comparisons corroborate that these rough-fault rupture simulations generate realistic synthetic seismograms for subsequent engineering application. Since dynamic rupture simulations are computationally expensive, we develop kinematic approximations that emulate the observed dynamics. Simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. The dynamic rake angle variations are anti-correlated with local dip angles. Based on a dynamically consistent Yoffe source-time function, we show that the seismic wavefield of the approximated kinematic rupture well reproduces the seismic radiation of the full dynamic source process. Our findings provide an innovative pseudo-dynamic source characterization that captures fault roughness effects on rupture dynamics. Including the correlations between kinematic source parameters, we present a new pseudo-dynamic rupture modeling approach for computing broadband ground-motion time-histories for simulation-based PSHA.
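
    For concreteness, the original (non-regularized) Yoffe slip-rate function is sketched below; the study uses a regularized, dynamically consistent variant, but the basic shape and the unit-slip normalization are the same idea.

    ```python
    import numpy as np

    def yoffe_stf(t, tau_r):
        """Original Yoffe slip-rate function with rise time tau_r,
        normalized to unit final slip:
        s_dot(t) = 2 / (pi * tau_r) * sqrt((tau_r - t) / t), 0 < t < tau_r."""
        s = np.zeros_like(t)
        inside = (t > 0) & (t < tau_r)
        s[inside] = 2.0 / (np.pi * tau_r) * \
            np.sqrt((tau_r - t[inside]) / t[inside])
        return s

    t = np.linspace(0.0, 2.0, 2001)
    rate = yoffe_stf(t, tau_r=0.8)
    print(f"total slip ~ {np.trapz(rate, t):.3f}")   # ~1 by construction
    ```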

  19. Mspire-Simulator: LC-MS shotgun proteomic simulator for creating realistic gold standard data.

    PubMed

    Noyce, Andrew B; Smith, Rob; Dalgleish, James; Taylor, Ryan M; Erb, K C; Okuda, Nozomu; Prince, John T

    2013-12-06

    The most important step in any quantitative proteomic pipeline is feature detection (aka peak picking). However, generating quality hand-annotated data sets to validate the algorithms, especially for lower abundance peaks, is nearly impossible. An alternative for creating gold standard data is to simulate it with features closely mimicking real data. We present Mspire-Simulator, a free, open-source shotgun proteomic simulator that goes beyond previous simulation attempts by generating LC-MS features with realistic m/z and intensity variance along with other noise components. It also includes machine-learned models for retention time and peak intensity prediction and a genetic algorithm to custom fit model parameters for experimental data sets. We show that these methods are applicable to data from three different mass spectrometers, including two fundamentally different types, and show visually and analytically that simulated peaks are nearly indistinguishable from actual data. Researchers can use simulated data to rigorously test quantitation software, and proteomic researchers may benefit from overlaying simulated data on actual data sets.

  20. Capillary test specimen, system, and methods for in-situ visualization of capillary flow and fillet formation

    DOEpatents

Hall, Aaron C.; Hosking, F. Michael; Reece, Mark

    2003-06-24

    A capillary test specimen, method, and system for visualizing and quantifying capillary flow of liquids under realistic conditions, including polymer underfilling, injection molding, soldering, brazing, and casting. The capillary test specimen simulates complex joint geometries and has an open cross-section to permit easy visual access from the side. A high-speed, high-magnification camera system records the location and shape of the moving liquid front in real-time, in-situ as it flows out of a source cavity, through an open capillary channel between two surfaces having a controlled capillary gap, and into an open fillet cavity, where it subsequently forms a fillet on free surfaces that have been configured to simulate realistic joint geometries. Electric resistance heating rapidly heats the test specimen, without using a furnace. Image-processing software analyzes the recorded images and calculates the velocity of the moving liquid front, fillet contact angles, and shape of the fillet's meniscus, among other parameters.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stroeer, Alexander; Veitch, John

The Laser Interferometer Space Antenna (LISA) defines new demands on data analysis efforts in its all-sky gravitational wave survey, recording simultaneously thousands of galactic compact object binary foreground sources and tens to hundreds of background sources like binary black hole mergers and extreme-mass ratio inspirals. We approach this problem with an adaptive and fully automatic Reversible Jump Markov Chain Monte Carlo sampler, able to sample from the joint posterior density function (as established by Bayes' theorem) for a given mixture of signals "out of the box", handling the total number of signals as an additional unknown parameter besides the unknown parameters of each individual source and the noise floor. We show in examples from the LISA Mock Data Challenge, implementing the full response of LISA in its TDI description, that this sampler is able to extract monochromatic Double White Dwarf signals out of colored instrumental noise and additional foreground and background noise successfully in a global fitting approach. We introduce two examples with a fixed number of signals (MCMC sampling) and one example with an unknown number of signals (RJ-MCMC), the latter further promoting the idea behind an experimental adaptation of the model indicator proposal densities in the main sampling stage. We note that the experienced runtimes and degeneracies in parameter extraction limit the shown examples to the extraction of a low but realistic number of signals.

  2. Mapping Curie temperature depth in the western United States with a fractal model for crustal magnetization

    USGS Publications Warehouse

    Bouligand, C.; Glen, J.M.G.; Blakely, R.J.

    2009-01-01

    We have revisited the problem of mapping depth to the Curie temperature isotherm from magnetic anomalies in an attempt to provide a measure of crustal temperatures in the western United States. Such methods are based on the estimation of the depth to the bottom of magnetic sources, which is assumed to correspond to the temperature at which rocks lose their spontaneous magnetization. In this study, we test and apply a method based on the spectral analysis of magnetic anomalies. Early spectral analysis methods assumed that crustal magnetization is a completely uncorrelated function of position. Our method incorporates a more realistic representation where magnetization has a fractal distribution defined by three independent parameters: the depths to the top and bottom of magnetic sources and a fractal parameter related to the geology. The predictions of this model are compatible with radial power spectra obtained from aeromagnetic data in the western United States. Model parameters are mapped by estimating their value within a sliding window swept over the study area. The method works well on synthetic data sets when one of the three parameters is specified in advance. The application of this method to western United States magnetic compilations, assuming a constant fractal parameter, allowed us to detect robust long-wavelength variations in the depth to the bottom of magnetic sources. Depending on the geologic and geophysical context, these features may result from variations in depth to the Curie temperature isotherm, depth to the mantle, depth to the base of volcanic rocks, or geologic settings that affect the value of the fractal parameter. Depth to the bottom of magnetic sources shows several features correlated with prominent heat flow anomalies. It also shows some features absent in the map of heat flow. Independent geophysical and geologic data sets are examined to determine their origin, thereby providing new insights on the thermal and geologic crustal structure of the western United States.
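
    For contrast with the fractal formulation used here, the classic random-magnetization spectral estimate can be sketched in a few lines: depth to the top of sources from the high-wavenumber slope of ln √P, centroid depth from the low-wavenumber slope of ln(√P/k), and bottom depth from their combination. The wavenumber bands and synthetic spectrum below are illustrative, and the centroid approximation is known to bias the bottom-depth estimate at finite wavenumber.

    ```python
    import numpy as np

    def linfit(k, y):
        """Least-squares slope and intercept of y against k."""
        A = np.vstack([k, np.ones_like(k)]).T
        (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
        return slope, intercept

    def curie_depths(k, P, hi_band, lo_band):
        """Classic (non-fractal) estimate: z_t from the high-k slope of
        ln sqrt(P), centroid z_0 from the low-k slope of ln(sqrt(P)/k),
        bottom z_b = 2*z_0 - z_t. k is radial wavenumber in rad/km."""
        hi = (k >= hi_band[0]) & (k <= hi_band[1])
        lo = (k >= lo_band[0]) & (k <= lo_band[1])
        zt = -linfit(k[hi], 0.5 * np.log(P[hi]))[0]
        z0 = -linfit(k[lo], 0.5 * np.log(P[lo]) - np.log(k[lo]))[0]
        return zt, 2.0 * z0 - zt

    # synthetic spectrum for sources between 2 and 20 km depth
    k = np.linspace(0.01, 1.0, 200)
    P = np.exp(-2.0 * k * 2.0) * (1.0 - np.exp(-k * 18.0)) ** 2
    zt, zb = curie_depths(k, P, hi_band=(0.4, 1.0), lo_band=(0.01, 0.05))
    print(f"z_top ~ {zt:.1f} km, z_bottom ~ {zb:.1f} km")  # true: 2 and 20 km
    ```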

  3. Effects of wind waves on horizontal array performance in shallow-water conditions

    NASA Astrophysics Data System (ADS)

    Zavol'skii, N. A.; Malekhanov, A. I.; Raevskii, M. A.; Smirnov, A. V.

    2017-09-01

We analyze the influence of statistical effects of the propagation of an acoustic signal excited by a tone source in a shallow-water channel with a rough sea surface on the efficiency of a horizontal phased array. As the array characteristics, we consider the angular function of the array response for a given direction to the source and the coefficient of amplification of the signal-to-noise ratio (array gain). Numerical simulation was conducted for the winter hydrological conditions of the Barents Sea in a wide range of parameters determining the spatial signal coherence. The results show the main physical effects of the influence of wind waves on the array characteristics and make it possible to quantitatively predict the efficiency of a large horizontal array in realistic shallow-water channels.
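
    The core quantity, array gain under partial spatial coherence, can be sketched with a simple exponential coherence model; the element count, spacing, and coherence lengths below are arbitrary examples, not the Barents Sea values used in the study.

    ```python
    import numpy as np

    def array_gain_db(n, d, coherence_length):
        """Gain of an n-element line array with spacing d [m], for a signal
        whose spatial coherence decays as exp(-|xi - xj| / L_c), against
        spatially white noise, using uniform (delay-and-sum) weights."""
        x = np.arange(n) * d
        R = np.exp(-np.abs(x[:, None] - x[None, :]) / coherence_length)
        w = np.ones(n) / np.sqrt(n)          # unit-norm steering weights
        return 10.0 * np.log10(w @ R @ w)

    wl = 15.0                          # 100 Hz tone in 1500 m/s water
    for Lc in (1e9, 500.0, 100.0):     # fully coherent to short coherence
        print(f"L_c = {Lc:.0f} m: gain = {array_gain_db(64, wl / 2, Lc):.1f} dB")
    ```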

  4. HIGH-ENERGY NEUTRINOS FROM SOURCES IN CLUSTERS OF GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Ke; Olinto, Angela V.

    2016-09-01

High-energy cosmic rays can be accelerated in clusters of galaxies, by mega-parsec scale shocks induced by the accretion of gas during the formation of large-scale structures, or by powerful sources harbored in clusters. Once accelerated, the highest energy particles leave the cluster via almost rectilinear trajectories, while lower energy ones can be confined by the cluster magnetic field up to cosmological time and interact with the intracluster gas. Using a realistic model of the baryon distribution and the turbulent magnetic field in clusters, we studied the propagation and hadronic interaction of high-energy protons in the intracluster medium. We report the cumulative cosmic-ray and neutrino spectra generated by galaxy clusters, including embedded sources, and demonstrate that clusters can contribute a significant fraction of the observed IceCube neutrinos above 30 TeV while remaining undetected in high-energy cosmic rays and γ rays for reasonable choices of parameters and source scenarios.

  5. SMSynth: An Imagery Synthesis System for Soil Moisture Retrieval

    NASA Astrophysics Data System (ADS)

    Cao, Y.; Xu, L.; Peng, J.

    2018-04-01

Soil moisture (SM) is an important variable in various research areas, such as weather and climate forecasting, agriculture, drought and flood monitoring and prediction, and human health. An ongoing challenge in estimating SM via synthetic aperture radar (SAR) is the development of SM retrieval methods; in particular, empirical models require large numbers of SM and soil-roughness measurements as training samples, which are very difficult to acquire. It is therefore difficult to develop empirical models from real SAR imagery alone, and methods to synthesize SAR imagery are needed. To tackle this issue, an SM-based SAR imagery synthesis system named SMSynth is presented, which simulates radar signals that are as realistic as possible relative to real SAR imagery. In SMSynth, SAR backscatter coefficients for each soil type are simulated via the Oh model under a Bayesian framework, where spatial correlation is modeled by a Markov random field (MRF) model. Backscattering coefficients simulated from the designed soil and sensor parameters enter the Bayesian framework through the data likelihood; the soil and sensor parameters are chosen to be as realistic as possible for conditions on the ground and to lie within the validity range of the Oh model. In this way, a complete and coherent Bayesian probabilistic framework is established. Experimental results show that SMSynth is capable of generating realistic SAR images that meet the need for large numbers of training samples for empirical models.

  6. Biological parameters used in setting captive-breeding quotas for Indonesia's breeding facilities.

    PubMed

    Janssen, Jordi; Chng, Serene C L

    2018-02-01

The commercial captive breeding of wildlife is often seen as a potential conservation tool to relieve pressure on wild populations, but laundering of wild-sourced specimens as captive bred can seriously undermine conservation efforts and provide a false sense of sustainability. Indonesia is at the center of such controversy; therefore, we examined Indonesia's captive-breeding production plan (CBPP) for 2016. We compared the biological parameters used in the CBPP with parameters in the literature and with parameters suggested by experts on each species, and identified shortcomings of the CBPP. Production quotas for 99 of 129 species were based on inaccurate or unrealistic biological parameters, deviating by more than 10% from what published parameters allow. For 38 species, the quota exceeded the number of animals that can be bred based on the biological parameters (range 100-540%) calculated with equations in the CBPP. We calculated a lower reproductive output for 88 species based on published biological parameters compared with the parameters used in the CBPP. The equations used in the production plan did not appear to account for other factors (e.g., a different survival rate for juveniles compared with adult animals) involved in breeding the proposed large numbers of specimens. We recommend the CBPP be adjusted so that realistic published biological parameters are applied and captive-breeding quotas are not allocated to species if their captive breeding is unlikely to be successful or no breeding stock is available. The shortcomings in the current CBPP create loopholes that mean mammals, reptiles, and amphibians from Indonesia declared captive bred may have been sourced from the wild. © 2017 Society for Conservation Biology.

  7. AGN neutrino flux estimates for a realistic hybrid model

    NASA Astrophysics Data System (ADS)

    Richter, S.; Spanier, F.

    2018-07-01

Recent reports of possible correlations between high energy neutrinos observed by IceCube and Active Galactic Nuclei (AGN) activity sparked a burst of publications that attempt to predict the neutrino flux of these sources. However, often rather crude estimates are used to derive the neutrino rate from the observed photon spectra. In this work, neutrino fluxes were computed in a wide parameter space. The starting point of the model was a representation of the full spectral energy distribution (SED) of 3C 279. The time-dependent hybrid model used for this study takes into account the full pγ reaction chain as well as proton synchrotron radiation, electron-positron pair cascades and the full SSC scheme. We compare our results to estimates frequently used in the literature. This allows us to identify regions in the parameter space for which such estimates are still valid and those in which they can produce significant errors. Furthermore, if estimates for the Doppler factor, magnetic field, and proton and electron densities of a source exist, the expected IceCube detection rate is readily available.

  8. Heated probe diagnostic inside of the gas aggregation nanocluster source

    NASA Astrophysics Data System (ADS)

    Kolpakova, Anna; Shelemin, Artem; Kousal, Jaroslav; Kudrna, Pavel; Tichy, Milan; Biederman, Hynek; Surface; Plasma Science Team

    2016-09-01

Gas aggregation cluster sources (GAS) usually operate outside the common working conditions of most magnetrons, and the size of nanoparticles created in a GAS is below that commonly studied in dusty plasmas. Therefore, experimental data obtained inside the GAS are important for better understanding of the process of nanoparticle formation. In order to study the conditions inside the gas aggregation chamber, a special "diagnostic GAS" has been constructed. It allows simultaneous monitoring (or spatial profiling) by means of optical emission spectroscopy, mass spectrometry and probe diagnostics. Data obtained from Langmuir and heated probes map the plasma parameters in two dimensions, radial and axial. Titanium was studied as an example of a metal for which reactive gas in the chamber initiates nanoparticle production. Three basic situations were investigated: sputtering from a clean titanium target in argon, sputtering from a partially pre-oxidized target, and sputtering with oxygen introduced into the discharge. It was found that during formation of nanoparticles the plasma parameters differ strongly from the situation without nanoparticles. These experimental data will support efforts toward more realistic modeling of the process. Czech Science Foundation 15-00863S.

  9. Studies of the acoustic transmission characteristics of coaxial nozzles with inverted velocity profiles, volume 1. [jet engine noise radiation through coannular exhaust nozzles

    NASA Technical Reports Server (NTRS)

    Dean, P. D.; Salikuddin, M.; Ahuja, K. K.; Plumblee, H. E.; Mungur, P.

    1979-01-01

The efficiency of internal noise radiation through coannular exhaust nozzles with an inverted velocity profile was studied. A preliminary investigation was first undertaken to: (1) define the test parameters which influence the internal noise radiation; (2) develop a test methodology which could realistically be used to examine the effects of the test parameters; and (3) validate this methodology. The result was the choice of an acoustic impulse as the internal noise source in the jet nozzles. Noise transmission characteristics of a nozzle system were then investigated. In particular, the effects of fan nozzle convergence angle, core extension length to annulus height ratio, and flow Mach number and temperatures were studied. The results are presented as normalized directivity plots.

  10. Charting the parameter space of the global 21-cm signal

    NASA Astrophysics Data System (ADS)

    Cohen, Aviad; Fialkov, Anastasia; Barkana, Rennan; Lotem, Matan

    2017-12-01

    The early star-forming Universe is still poorly constrained, with the properties of high-redshift stars, the first heating sources and reionization highly uncertain. This leaves observers planning 21-cm experiments with little theoretical guidance. In this work, we explore the possible range of high-redshift parameters including the star formation efficiency and the minimal mass of star-forming haloes; the efficiency, spectral energy distribution and redshift evolution of the first X-ray sources; and the history of reionization. These parameters are only weakly constrained by available observations, mainly the optical depth to the cosmic microwave background. We use realistic semi-numerical simulations to produce the global 21-cm signal over the redshift range z = 6-40 for each of 193 different combinations of the astrophysical parameters spanning the allowed range. We show that the expected signal fills a large parameter space, but with a fixed general shape for the global 21-cm curve. Even with our wide selection of models, we still find clear correlations between the key features of the global 21-cm signal and underlying astrophysical properties of the high-redshift Universe, namely the Ly α intensity, the X-ray heating rate and the production rate of ionizing photons. These correlations can be used to directly link future measurements of the global 21-cm signal to astrophysical quantities in a mostly model-independent way. We identify additional correlations that can be used as consistency checks.

  11. A multidisciplinary effort to assign realistic source parameters to models of volcanic ash-cloud transport and dispersion during eruptions

    USGS Publications Warehouse

    Mastin, Larry G.; Guffanti, Marianne C.; Servranckx, R.; Webley, P.; Barsotti, S.; Dean, K.; Durant, A.; Ewert, John W.; Neri, A.; Rose, W.I.; Schneider, David J.; Siebert, L.; Stunder, B.; Swanson, G.; Tupper, A.; Volentik, A.; Waythomas, Christopher F.

    2009-01-01

During volcanic eruptions, volcanic ash transport and dispersion models (VATDs) are used to forecast the location and movement of ash clouds over hours to days in order to define hazards to aircraft and to communities downwind. Those models use input parameters, called “eruption source parameters”, such as plume height H, mass eruption rate Ṁ, duration D, and the mass fraction m63 of erupted debris finer than about 4ϕ or 63 μm, which can remain in the cloud for many hours or days. Observational constraints on the value of such parameters are frequently unavailable in the first minutes or hours after an eruption is detected. Moreover, observed plume height may change during an eruption, requiring rapid assignment of new parameters. This paper reports on a group effort to improve the accuracy of source parameters used by VATDs in the early hours of an eruption. We do so by first compiling a list of eruptions for which these parameters are well constrained, and then using these data to review and update previously studied parameter relationships. We find that the existing scatter in plots of H versus Ṁ yields an uncertainty within the 50% confidence interval of plus or minus a factor of four in eruption rate for a given plume height. This scatter is not clearly attributable to biases in measurement techniques or to well-recognized processes such as elutriation from pyroclastic flows. Sparse data on total grain-size distribution suggest that the mass fraction of fine debris m63 could vary by nearly two orders of magnitude between small basaltic eruptions (∼ 0.01) and large silicic ones (> 0.5). We classify eleven eruption types: four types each for different sizes of silicic and mafic eruptions; submarine eruptions; “brief” or Vulcanian eruptions; and eruptions that generate co-ignimbrite or co-pyroclastic flow plumes. For each eruption type we assign source parameters. We then assign a characteristic eruption type to each of the world's ∼ 1500 Holocene volcanoes. These eruption types and associated parameters can be used for ash-cloud modeling in the event of an eruption, when no observational constraints on these parameters are available.
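
    The commonly cited fit from this compilation, H ≈ 2.00 V^0.241 (plume height H in km above the vent, volumetric DRE eruption rate V in m³/s), can be inverted for a quick mass-eruption-rate estimate. The sketch below assumes a DRE magma density of 2500 kg/m³ and carries the roughly factor-of-four uncertainty noted above.

    ```python
    def mass_eruption_rate(H_km, rho_dre=2500.0):
        """Invert the empirical plume-height relation H = 2.00 * V**0.241
        (H in km above the vent, V in m^3/s dense-rock equivalent) for a
        mass eruption rate in kg/s; uncertainty is roughly a factor of four."""
        V = (H_km / 2.00) ** (1.0 / 0.241)
        return rho_dre * V

    for H in (5.0, 10.0, 20.0):
        print(f"H = {H:4.1f} km  ->  M ~ {mass_eruption_rate(H):.2e} kg/s")
    ```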

  12. Earth Global Reference Atmospheric Model (GRAM99): Short Course

    NASA Technical Reports Server (NTRS)

    Leslie, Fred W.; Justus, C. G.

    2007-01-01

Earth-GRAM is a FORTRAN software package that can run on a variety of platforms including PCs. For any time and location in the Earth's atmosphere, Earth-GRAM provides values of atmospheric quantities such as temperature, pressure, density, winds, and constituents. Dispersions (perturbations) of these parameters are also provided and have realistic correlations, means, and variances, which is useful for Monte Carlo analysis. Earth-GRAM is driven by observations including a tropospheric database available from the National Climatic Data Center. Although Earth-GRAM can be run in a "stand-alone" mode, many users incorporate it into their trajectory codes. The source code is distributed free-of-charge to eligible recipients.

  13. Refraction tomography mapping of near-surface dipping layers using landstreamer data at East Canyon Dam, Utah

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Markiewicz, R.D.; Xia, J.

    2008-01-01

We apply the P-wave refraction-tomography method to seismic data collected with a landstreamer. Refraction-tomography inversion solutions were determined using regularization parameters that provided the most realistic near-surface solutions, those that best matched the dipping-layer structure of nearby outcrops. A reasonably well-matched solution was obtained using an unusual set of optimal regularization parameters. In comparison, conventional regularization parameters did not provide results as realistic. Thus, we consider that even when only qualitative (i.e., visual) a priori information about a site is available, as at East Canyon Dam, Utah, it may be possible to minimize refraction nonuniqueness by estimating the most appropriate regularization parameters.

  14. Functional modeling of the human auditory brainstem response to broadband stimulation

    PubMed Central

    Verhulst, Sarah; Bharadwaj, Hari M.; Mehraei, Golbarg; Shera, Christopher A.; Shinn-Cunningham, Barbara G.

    2015-01-01

    Population responses such as the auditory brainstem response (ABR) are commonly used for hearing screening, but the relationship between single-unit physiology and scalp-recorded population responses are not well understood. Computational models that integrate physiologically realistic models of single-unit auditory-nerve (AN), cochlear nucleus (CN) and inferior colliculus (IC) cells with models of broadband peripheral excitation can be used to simulate ABRs and thereby link detailed knowledge of animal physiology to human applications. Existing functional ABR models fail to capture the empirically observed 1.2–2 ms ABR wave-V latency-vs-intensity decrease that is thought to arise from level-dependent changes in cochlear excitation and firing synchrony across different tonotopic sections. This paper proposes an approach where level-dependent cochlear excitation patterns, which reflect human cochlear filter tuning parameters, drive AN fibers to yield realistic level-dependent properties of the ABR wave-V. The number of free model parameters is minimal, producing a model in which various sources of hearing-impairment can easily be simulated on an individualized and frequency-dependent basis. The model fits latency-vs-intensity functions observed in human ABRs and otoacoustic emissions while maintaining rate-level and threshold characteristics of single-unit AN fibers. The simulations help to reveal which tonotopic regions dominate ABR waveform peaks at different stimulus intensities. PMID:26428802

  15. Zoonotic Transmission of Waterborne Disease: A Mathematical Model.

    PubMed

    Waters, Edward K; Hamilton, Andrew J; Sidhu, Harvinder S; Sidhu, Leesa A; Dunbar, Michelle

    2016-01-01

    Waterborne parasites that infect both humans and animals are common causes of diarrhoeal illness, but the relative importance of transmission between humans and animals and vice versa remains poorly understood. Transmission of infection from animals to humans via environmental reservoirs, such as water sources, has attracted attention as a potential source of endemic and epidemic infections, but existing mathematical models of waterborne disease transmission have limitations for studying this phenomenon, as they only consider contamination of environmental reservoirs by humans. This paper develops a mathematical model that represents the transmission of waterborne parasites within and between both animal and human populations. It also improves upon existing models by including animal contamination of water sources explicitly. Linear stability analysis and simulation results, using realistic parameter values to describe Giardia transmission in rural Australia, show that endemic infection of an animal host with zoonotic protozoa can result in endemic infection in human hosts, even in the absence of person-to-person transmission. These results imply that zoonotic transmission via environmental reservoirs is important.
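
    A minimal sketch of such a model: SIS-type human and animal host populations coupled only through a shared water compartment that both hosts contaminate, with no person-to-person term at all. All rate constants below are illustrative, not the Giardia values calibrated in the paper; the sketch uses SciPy for integration.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    def siwr(y, t, p):
        """Human (h) and animal (a) SIS hosts coupled via a shared water
        reservoir W; both hosts shed into W and are infected only by W."""
        Sh, Ih, Sa, Ia, W = y
        lam_h = p["bw_h"] * W            # waterborne force of infection
        lam_a = p["bw_a"] * W
        dSh = -lam_h * Sh + p["g_h"] * Ih
        dIh = lam_h * Sh - p["g_h"] * Ih
        dSa = -lam_a * Sa + p["g_a"] * Ia
        dIa = lam_a * Sa - p["g_a"] * Ia
        dW = p["sh_h"] * Ih + p["sh_a"] * Ia - p["mu_w"] * W
        return [dSh, dIh, dSa, dIa, dW]

    p = dict(bw_h=0.4, bw_a=0.6, g_h=0.1, g_a=0.05,
             sh_h=0.05, sh_a=0.2, mu_w=0.5)        # per-day toy rates
    y0 = [0.999, 0.001, 0.99, 0.01, 0.0]           # proportions; W scaled
    y = odeint(siwr, y0, np.linspace(0.0, 365.0, 1000), args=(p,))
    print(f"endemic human prevalence ~ {y[-1, 1]:.3f}")
    ```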

  16. Climate drift of AMOC, North Atlantic salinity and arctic sea ice in CFSv2 decadal predictions

    NASA Astrophysics Data System (ADS)

    Huang, Bohua; Zhu, Jieshun; Marx, Lawrence; Wu, Xingren; Kumar, Arun; Hu, Zeng-Zhen; Balmaseda, Magdalena A.; Zhang, Shaoqing; Lu, Jian; Schneider, Edwin K.; Kinter, James L., III

    2015-01-01

    There are potential advantages to extending operational seasonal forecast models to predict decadal variability but major efforts are required to assess the model fidelity for this task. In this study, we examine the North Atlantic climate simulated by the NCEP Climate Forecast System, version 2 (CFSv2), using a set of ensemble decadal hindcasts and several 30-year simulations initialized from realistic ocean-atmosphere states. It is found that a substantial climate drift occurs in the first few years of the CFSv2 hindcasts, which represents a major systematic bias and may seriously affect the model's fidelity for decadal prediction. In particular, it is noted that a major reduction of the upper ocean salinity in the northern North Atlantic weakens the Atlantic meridional overturning circulation (AMOC) significantly. This freshening is likely caused by the excessive freshwater transport from the Arctic Ocean and weakened subtropical water transport by the North Atlantic Current. A potential source of the excessive freshwater is the quick melting of sea ice, which also causes unrealistically thin ice cover in the Arctic Ocean. Our sensitivity experiments with adjusted sea ice albedo parameters produce a sustainable ice cover with realistic thickness distribution. It also leads to a moderate increase of the AMOC strength. This study suggests that a realistic freshwater balance, including a proper sea ice feedback, is crucial for simulating the North Atlantic climate and its variability.

  17. Dissipative dark matter halos: The steady state solution

    NASA Astrophysics Data System (ADS)

    Foot, R.

    2018-02-01

Dissipative dark matter, where dark matter particle properties closely resemble familiar baryonic matter, is considered. Mirror dark matter, which arises from an isomorphic hidden sector, is a specific and theoretically constrained scenario. Other possibilities include models with more generic hidden sectors that contain massless dark photons [unbroken U(1) gauge interactions]. Such dark matter not only features dissipative cooling processes but also is assumed to have nontrivial heating sourced by ordinary supernovae (facilitated by the kinetic mixing interaction). The dynamics of dissipative dark matter halos around rotationally supported galaxies, influenced by heating as well as cooling processes, can be modeled by fluid equations. For a sufficiently isolated galaxy with a stable star formation rate, the dissipative dark matter halos are expected to evolve to a steady state configuration which is in hydrostatic equilibrium and where heating and cooling rates locally balance. Here, we take into account the major cooling and heating processes, and numerically solve for the steady state solution under the assumptions of spherical symmetry, negligible dark magnetic fields, and that supernova sourced energy is transported to the halo via dark radiation. For the parameters considered, and assumptions made, we were unable to find a physically realistic solution for the constrained case of mirror dark matter halos. Halo cooling generally exceeds heating at realistic halo mass densities. This problem can be rectified in more generic dissipative dark matter models, and we discuss a specific example in some detail.

  18. Bayesian calibration of mechanistic aquatic biogeochemical models and benefits for environmental management

    NASA Astrophysics Data System (ADS)

    Arhonditsis, George B.; Papantou, Dimitra; Zhang, Weitao; Perhar, Gurbir; Massos, Evangelia; Shi, Molu

    2008-09-01

    Aquatic biogeochemical models have been an indispensable tool for addressing pressing environmental issues, e.g., understanding the oceanic response to climate change, elucidating the interplay between plankton dynamics and atmospheric CO2 levels, and examining alternative management schemes for eutrophication control. Their ability to form the scientific basis for environmental management decisions can be undermined by the underlying structural and parametric uncertainty. In this study, we outline how we can attain realistic predictive links between management actions and ecosystem response through a probabilistic framework that accommodates rigorous uncertainty analysis of a variety of error sources, i.e., measurement error, parameter uncertainty, and the discrepancy between the model and the natural system. Because model uncertainty analysis essentially aims to quantify the joint probability distribution of model parameters and to make inference about this distribution, we believe that the iterative nature of Bayes' Theorem is a logical means to incorporate existing knowledge and update the joint distribution as new information becomes available. The statistical methodology begins with the characterization of parameter uncertainty in the form of probability distributions; water quality data are then used to update the distributions and yield posterior parameter estimates along with predictive uncertainty bounds. Our illustration is based on a six-state-variable (nitrate, ammonium, dissolved organic nitrogen, phytoplankton, zooplankton, and bacteria) ecological model developed for gaining insight into the mechanisms that drive plankton dynamics in a coastal embayment: the Gulf of Gera on the island of Lesvos, Greece. The lack of analytical expressions for the posterior parameter distributions was overcome using Markov chain Monte Carlo simulations, a convenient way to obtain representative samples of parameter values. The Bayesian calibration resulted in realistic reproduction of the key temporal patterns of the system, offered insights into the degree of information the data contain about model inputs, and allowed the quantification of the dependence structure among the parameter estimates. Finally, our study uses two synthetic datasets to examine the ability of the updated model to provide estimates of predictive uncertainty for water quality variables of environmental management interest.
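
    For illustration, the core of such a calibration can be a random-walk Metropolis sampler over the model parameters. The sketch below assumes a Gaussian likelihood around the model predictions; `model`, `log_prior`, and `sigma_obs` are placeholders for the user's own biogeochemical model, prior knowledge, and observation errors, not the authors' code:

    ```python
    import numpy as np

    def log_posterior(theta, data, model, log_prior, sigma_obs):
        """Log-prior plus Gaussian log-likelihood around model predictions."""
        lp = log_prior(theta)
        if not np.isfinite(lp):
            return -np.inf
        resid = data - model(theta)
        return lp - 0.5 * np.sum((resid / sigma_obs) ** 2)

    def metropolis(theta0, data, model, log_prior, sigma_obs,
                   step=0.05, n_samples=50000, seed=0):
        """Random-walk Metropolis sampling of the parameter posterior."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        logp = log_posterior(theta, data, model, log_prior, sigma_obs)
        chain = np.empty((n_samples, theta.size))
        for i in range(n_samples):
            prop = theta + step * rng.standard_normal(theta.size)
            logp_prop = log_posterior(prop, data, model, log_prior, sigma_obs)
            if np.log(rng.random()) < logp_prop - logp:  # accept/reject
                theta, logp = prop, logp_prop
            chain[i] = theta
        return chain
    ```

    The samples in `chain` then yield both posterior parameter estimates and the predictive uncertainty bounds referred to above.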

  19. Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises.

    PubMed

    Cannavò, Flavio; Camacho, Antonio G; González, Pablo J; Mattia, Mario; Puglisi, Giuseppe; Fernández, José

    2015-06-09

    Volcano observatories provide near real-time information and, ultimately, forecasts about volcano activity. For this reason, multiple physical and chemical parameters are continuously monitored. Here, we present a new method to efficiently estimate the location and evolution of magmatic sources based on a stream of real-time surface deformation data, such as High-Rate GPS, and a free-geometry magmatic source model. The tool allows tracking inflation and deflation sources in time, providing estimates of where a volcano might erupt, which is important in understanding an on-going crisis. We show a successful simulated application to the pre-eruptive period of May 2008, at Mount Etna (Italy). The proposed methodology is able to track the fast dynamics of the magma migration by inverting the real-time data within seconds. This general method is suitable for integration in any volcano observatory. The method provides first order unsupervised and realistic estimates of the locations of magmatic sources and of potential eruption sites, information that is especially important for civil protection purposes.
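
    As a toy illustration of deformation-based source estimation (a single Mogi point source rather than the authors' free-geometry model; names and starting values are hypothetical), vertical GPS displacements can be fit by nonlinear least squares:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def mogi_uz(x, y, xs, ys, depth, dV, nu=0.25):
        """Vertical surface displacement of a Mogi point source of volume
        change dV [m^3] at (xs, ys, depth) in an elastic half-space."""
        R2 = (x - xs) ** 2 + (y - ys) ** 2 + depth ** 2
        return (1.0 - nu) * dV * depth / (np.pi * R2 ** 1.5)

    def invert_source(x, y, uz_obs, p0=(0.0, 0.0, 3e3, 1e6)):
        """Least-squares fit of source position, depth, and volume change."""
        resid = lambda p: mogi_uz(x, y, *p) - uz_obs
        sol = least_squares(resid, p0,
                            bounds=([-1e4, -1e4, 1e2, -1e9],
                                    [1e4, 1e4, 2e4, 1e9]))
        return sol.x  # xs [m], ys [m], depth [m], dV [m^3]
    ```

    Re-running such an inversion as each new epoch of data arrives is what makes real-time tracking of inflation and deflation sources feasible.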

  20. Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises

    PubMed Central

    Cannavò, Flavio; Camacho, Antonio G.; González, Pablo J.; Mattia, Mario; Puglisi, Giuseppe; Fernández, José

    2015-01-01

    Volcano observatories provide near real-time information and, ultimately, forecasts about volcano activity. For this reason, multiple physical and chemical parameters are continuously monitored. Here, we present a new method to efficiently estimate the location and evolution of magmatic sources based on a stream of real-time surface deformation data, such as High-Rate GPS, and a free-geometry magmatic source model. The tool allows tracking inflation and deflation sources in time, providing estimates of where a volcano might erupt, which is important in understanding an on-going crisis. We show a successful simulated application to the pre-eruptive period of May 2008, at Mount Etna (Italy). The proposed methodology is able to track the fast dynamics of the magma migration by inverting the real-time data within seconds. This general method is suitable for integration in any volcano observatory. The method provides first order unsupervised and realistic estimates of the locations of magmatic sources and of potential eruption sites, information that is especially important for civil protection purposes. PMID:26055494

  1. Design of sub-Angstrom compact free-electron laser source

    NASA Astrophysics Data System (ADS)

    Bonifacio, Rodolfo; Fares, Hesham; Ferrario, Massimo; McNeil, Brian W. J.; Robb, Gordon R. M.

    2017-01-01

    In this paper, we propose, for the first time, practical parameters to construct a compact sub-Angstrom Free Electron Laser (FEL) based on Compton backscattering. Our recipe is based on using a picocoulomb electron bunch, enabling a very-low-emittance, ultracold electron beam. We assume the FEL is operating in a quantum regime of Self Amplified Spontaneous Emission (SASE). The fundamental quantum feature is a significantly narrower spectrum of the emitted radiation relative to classical SASE. The quantum regime of the SASE FEL is reached when the momentum spread of the electron beam is smaller than the photon recoil momentum. Following the formulae describing SASE FEL operation, realistic designs for quantum FEL experiments are proposed. We discuss the practical constraints that influence the experimental parameters. Numerical simulations of power spectra and intensities are presented, and attractive radiation characteristics such as high flux, narrow linewidth, and short pulse structure are demonstrated.
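
    Written out, the quantum-regime criterion quoted above reads (λ_r being the resonant radiation wavelength):

    ```latex
    \sigma_p < \hbar k_r , \qquad k_r = \frac{2\pi}{\lambda_r},
    ```

    i.e., SASE operates in the quantum regime when the beam momentum spread σ_p lies below the single-photon recoil ħk_r.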

  2. Solution of the weighted symmetric similarity transformations based on quaternions

    NASA Astrophysics Data System (ADS)

    Mercan, H.; Akyilmaz, O.; Aydin, C.

    2017-12-01

    A new method based on the Gauss-Helmert model of adjustment is presented for the solution of similarity transformations, either 3D or 2D, in the frame of the errors-in-variables (EIV) model. The EIV model assumes that all the variables in the mathematical model are contaminated by random errors. The total least squares estimation technique may be used to solve the EIV model. Accounting for heteroscedastic uncertainty in both the target and the source coordinates, which is the more common and general case in practice, leads to a more realistic estimation of the transformation parameters. The presented algorithm can handle heteroscedastic transformation problems, i.e., the positions of both the target and the source points may have full covariance matrices. Therefore, there is no limitation such as isotropic or homogeneous accuracy for the reference point coordinates. The developed algorithm takes advantage of the quaternion definition, which uniquely represents a 3D rotation matrix. The transformation parameters, i.e., scale, translations, and the quaternion (and hence the rotation matrix), along with their covariances, are iteratively estimated with rapid convergence. Moreover, a prior least squares (LS) estimation of the unknown transformation parameters is not required to start the iterations. We also show that the developed method can be used to estimate the 2D similarity transformation parameters by simply treating the problem as a 3D transformation problem with zero values assigned to the z-components of both target and source points. The efficiency of the new algorithm is demonstrated with numerical examples and comparisons with the results of previous studies that use the same data set. Simulation experiments for the evaluation and comparison of the proposed and the conventional weighted LS (WLS) methods are also presented.
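
    A minimal sketch of the quaternion parametrization at the heart of the method (the unit quaternion uniquely encodes the 3-D rotation matrix; the full Gauss-Helmert/total-least-squares estimation with full covariance matrices is beyond this fragment):

    ```python
    import numpy as np

    def quat_to_rot(q):
        """Rotation matrix from a unit quaternion q = (w, x, y, z)."""
        w, x, y, z = q / np.linalg.norm(q)
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    def similarity_transform(points, q, scale, t):
        """7-parameter similarity transform: s * R(q) @ p + t per point."""
        return scale * points @ quat_to_rot(q).T + np.asarray(t)
    ```

    Treating a 2-D problem as a 3-D one, as suggested above, simply means appending zero z-components to `points` before applying the transform.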

  3. Quantification of uncertainties in the tsunami hazard for Cascadia using statistical emulation

    NASA Astrophysics Data System (ADS)

    Guillas, S.; Day, S. J.; Joakim, B.

    2016-12-01

    We present new high-resolution simulations of tsunami wave propagation and coastal inundation for the Cascadia region in the Pacific Northwest. The coseismic representation in this analysis is novel, and more realistic than in previous studies, as we jointly parametrize multiple aspects of the seabed deformation. Due to the large computational cost of such simulators, statistical emulation is required in order to carry out uncertainty quantification tasks, as emulators efficiently approximate simulators. The emulator replaces the tsunami model VOLNA by a fast surrogate, so we are able to efficiently propagate uncertainties from the source characteristics to wave heights, in order to probabilistically assess tsunami hazard for Cascadia. We employ a new method for the design of the computer experiments in order to reduce the number of runs while maintaining good approximation properties of the emulator. Out of the initial nine parameters, mostly describing the geometry and time variation of the seabed deformation, we drop two parameters since they turn out to have no influence on the resulting tsunami waves at the coast. We model the impact of another parameter linearly, as its influence on the wave heights is identified as linear. We combine this screening approach with the sequential design algorithm MICE (Mutual Information for Computer Experiments), which adaptively selects the input values at which to run the computer simulator, in order to maximize the expected information gain (mutual information) over the input space. As a result, the emulation is made possible and accurate. Starting from distributions of the source parameters that encapsulate geophysical knowledge of the possible source characteristics, we derive distributions of the tsunami wave heights along the coastline.
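
    A generic emulation step in the same spirit can be sketched with a Gaussian-process surrogate via scikit-learn (standing in for both the VOLNA simulator and the MICE sequential design, which are more sophisticated; all sizes and the toy output are illustrative):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(1)

    # Training runs: inputs are the (scaled) source parameters,
    # outputs stand in for simulated coastal wave heights.
    X_train = rng.uniform(0.0, 1.0, size=(60, 7))
    y_train = np.sin(X_train @ np.arange(1, 8))  # placeholder for VOLNA output

    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(7))
    emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    emulator.fit(X_train, y_train)

    # Cheap probabilistic predictions in place of new simulator runs.
    X_new = rng.uniform(0.0, 1.0, size=(10000, 7))
    mean, std = emulator.predict(X_new, return_std=True)
    ```

    Propagating distributions of the source parameters through `emulator.predict` is then orders of magnitude cheaper than running the simulator itself.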

  4. Prediction of broadband ground-motion time histories: Hybrid low/high-frequency method with correlated random source parameters

    USGS Publications Warehouse

    Liu, P.; Archuleta, R.J.; Hartzell, S.H.

    2006-01-01

    We present a new method for calculating broadband time histories of ground motion based on a hybrid low-frequency/high-frequency approach with correlated source parameters. Using a finite-difference method we calculate low-frequency synthetics (< ∼1 Hz) in a 3D velocity structure. We also compute broadband synthetics in a 1D velocity model using a frequency-wavenumber method. The low frequencies from the 3D calculation are combined with the high frequencies from the 1D calculation by using matched filtering at a crossover frequency of 1 Hz. The source description, common to both the 1D and 3D synthetics, is based on correlated random distributions for the slip amplitude, rupture velocity, and rise time on the fault. This source description allows for the specification of source parameters independent of any a priori inversion results. In our broadband modeling we include correlation between slip amplitude, rupture velocity, and rise time, as suggested by dynamic fault modeling. The method of using correlated random source parameters is flexible and can be easily modified to adjust to our changing understanding of earthquake ruptures. A realistic attenuation model is common to both the 3D and 1D calculations that form the low- and high-frequency components of the broadband synthetics. The value of Q is a function of the local shear-wave velocity. To produce more accurate high-frequency amplitudes and durations, the 1D synthetics are corrected with a randomized, frequency-dependent radiation pattern. The 1D synthetics are further corrected for local site and nonlinear soil effects by using a 1D nonlinear propagation code and generic velocity structure appropriate for the site’s National Earthquake Hazards Reduction Program (NEHRP) site classification. The entire procedure is validated by comparison with the 1994 Northridge, California, strong ground motion data set. The bias and error found here for response spectral acceleration are similar to the best results that have been published by others for the Northridge rupture.
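
    A minimal sketch of the crossover step (complementary low-/high-pass Butterworth filters at 1 Hz are assumed here; the matched filtering used in the paper may differ in detail):

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def hybrid_broadband(lf_syn, hf_syn, dt, f_cross=1.0, order=4):
        """Merge 3D low-frequency and 1D high-frequency synthetics at a
        crossover frequency using complementary zero-phase filters."""
        nyquist = 0.5 / dt
        b_lo, a_lo = butter(order, f_cross / nyquist, btype="low")
        b_hi, a_hi = butter(order, f_cross / nyquist, btype="high")
        return filtfilt(b_lo, a_lo, lf_syn) + filtfilt(b_hi, a_hi, hf_syn)
    ```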

  5. Resonance treatment using pin-based pointwise energy slowing-down method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sooyoung, E-mail: csy0321@unist.ac.kr; Lee, Changho, E-mail: clee@anl.gov; Lee, Deokjung, E-mail: deokjung@unist.ac.kr

    A new resonance self-shielding method using a pointwise energy solution has been developed to overcome the drawbacks of the equivalence theory. The equivalence theory uses a crude resonance scattering source approximation, and assumes a spatially constant scattering source distribution inside a fuel pellet. These two assumptions cause a significant error, in that they overestimate the multi-group effective cross sections, especially for ²³⁸U. The new resonance self-shielding method solves pointwise energy slowing-down equations with a sub-divided fuel rod. The method adopts a shadowing effect correction factor and fictitious moderator material to model a realistic pointwise energy solution. The slowing-down solution is used to generate the multi-group cross sections. With various light water reactor problems, it was demonstrated that the new resonance self-shielding method significantly improved accuracy in the reactor parameter calculation with no compromise in computation time, compared to the equivalence theory.

  6. Low resolution brain electromagnetic tomography in a realistic geometry head model: a simulation study

    NASA Astrophysics Data System (ADS)

    Ding, Lei; Lai, Yuan; He, Bin

    2005-01-01

    It is of importance to localize neural sources from scalp-recorded EEG. Low resolution brain electromagnetic tomography (LORETA) has received considerable attention for localizing brain electrical sources. However, most such efforts have used spherical head models in representing the head volume conductor. Investigation of the performance of LORETA in a realistic geometry head model, as compared with the spherical model, will provide useful information guiding interpretation of data obtained by using the spherical head model. The performance of LORETA was evaluated by means of computer simulations. The boundary element method was used to solve the forward problem. A three-shell realistic geometry (RG) head model was constructed from MRI scans of a human subject. Source configurations consisting of a single dipole located in different regions of the brain at varying depths were used to assess the performance of LORETA in different regions of the brain. A three-sphere head model was also used to approximate the RG head model, similar simulations were performed, and the results were compared with those of RG-LORETA with reference to the locations of the simulated sources. Multi-source localizations were discussed and examples given in the RG head model. Localization errors employing the spherical LORETA, with reference to the source locations within the realistic geometry head, were about 20-30 mm for the four brain regions evaluated: frontal, parietal, temporal and occipital. Localization errors employing the RG head model were about 10 mm over the same four brain regions. The present simulation results suggest that the use of the RG head model reduces the localization error of LORETA, and that the RG-head-model-based LORETA is desirable if high localization accuracy is needed.

  7. Towards a Numerical Description of Volcano Aeroacoustic Source Processes using Lattice Boltzmann Strategies

    NASA Astrophysics Data System (ADS)

    Brogi, F.; Malaspinas, O.; Bonadonna, C.; Chopard, B.; Ripepe, M.

    2015-12-01

    Low frequency (< 20 Hz) acoustic measurements have great potential for the real-time characterization of volcanic plume source parameters. Using classical source theory, acoustic data can be related to the exit velocity of the volcanic jet and to the mass eruption rate, based on the geometric constraint of the vent and the mixture density. However, the application of classical acoustic source models to volcanic explosive eruptions has proven to be challenging, and a better knowledge of the link between the acoustic radiation and the actual volcanic fluid dynamics processes is required. New insights into this subject could be given by the study of realistic aeroacoustic numerical simulations of a volcanic jet. Lattice Boltzmann strategies (LBS) provide the opportunity to develop an accurate, computationally fast, 3D physical model for a volcanic jet. In the field of aeroacoustic applications, dedicated LBS have been proven to have the low dissipative properties needed for capturing the weak acoustic pressure fluctuations. However, due to the large disparity in magnitude between the flow and the acoustic disturbances, even weak spurious noise sources in simulations can ruin the accuracy of the acoustic predictions. Reflected waves from artificial boundaries defined around the flow region can have significant influence on the flow field and overwhelm the acoustic field of interest. In addition, for highly multiscale turbulent flows, such as volcanic plumes, the number of grid points needed to represent the smallest scales might become intractable, and the most complicated physics happen only in small portions of the computational domain. The implementation of grid refinement in our model allows us to insert finer local grids only where they are actually needed and to increase the size of the computational domain to run more realistic simulations. 3D LBS model simulations for turbulent jet aeroacoustics have been accurately validated. Both mean flow and acoustic results are in good agreement with theory and experimental data available in the literature.
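
    The workhorse of such solvers is the single-relaxation-time (BGK) lattice Boltzmann update, shown here in its standard form (production codes may use more elaborate collision models):

    ```latex
    f_i(\mathbf{x} + \mathbf{c}_i\,\Delta t,\; t + \Delta t) - f_i(\mathbf{x}, t)
      = -\frac{\Delta t}{\tau}\left[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \right],
    ```

    where the f_i are particle populations streaming along the lattice directions c_i and the relaxation time τ sets the fluid viscosity. The acoustic field emerges directly from this update, which is why low numerical dissipation is essential.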

  8. Performance of today’s dual energy CT and future multi energy CT in virtual non-contrast imaging and in iodine quantification: A simulation study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faby, Sebastian, E-mail: sebastian.faby@dkfz.de; Kuchenbecker, Stefan; Sawall, Stefan

    2015-07-15

    Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors’ image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today’s DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again above the level of a single realistic photon counting detector and also above the level of dual source DECT. Conclusions: Substantial differences in the performance of today’s DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.
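
    At its core, image-based two-material decomposition solves a small linear system per pixel; the sketch below uses hypothetical calibration values and omits the statistical weighting that makes the authors' method optimal:

    ```python
    import numpy as np

    # Hypothetical calibrated responses of water and iodine at the two
    # spectra (rows: low/high kV; columns: water/iodine), arbitrary units.
    A = np.array([[1.00, 25.0],
                  [1.00, 12.0]])

    def decompose(img_low, img_high):
        """Per-pixel two-material decomposition: solve A @ c = measurement."""
        meas = np.stack([img_low.ravel(), img_high.ravel()])  # (2, npix)
        coeffs = np.linalg.solve(A, meas)                     # (2, npix)
        water, iodine = (c.reshape(img_low.shape) for c in coeffs)
        return water, iodine  # water ~ virtual non-contrast; iodine map
    ```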

  9. Impact of signal scattering and parametric uncertainties on receiver operating characteristics

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Breton, Daniel J.; Hart, Carl R.; Pettit, Chris L.

    2017-05-01

    The receiver operating characteristic (ROC curve), which is a plot of the probability of detection as a function of the probability of false alarm, plays a key role in the classical analysis of detector performance. However, meaningful characterization of the ROC curve is challenging when practically important complications such as variations in source emissions, environmental impacts on the signal propagation, uncertainties in the sensor response, and multiple sources of interference are considered. In this paper, a relatively simple but realistic model for scattered signals is employed to explore how parametric uncertainties impact the ROC curve. In particular, we show that parametric uncertainties in the mean signal and noise power substantially raise the tails of the distributions; since receiver operation with a very low probability of false alarm and a high probability of detection is normally desired, these tails lead to severely degraded performance. Because full a priori knowledge of such parametric uncertainties is rarely available in practice, analyses must typically be based on a finite sample of environmental states, which only partially characterize the range of parameter variations. We show how this effect can lead to misleading assessments of system performance. For the cases considered, approximately 64 or more statistically independent samples of the uncertain parameters are needed to accurately predict the probabilities of detection and false alarm. A connection is also described between selection of suitable distributions for the uncertain parameters, and Bayesian adaptive methods for inferring the parameters.
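
    A minimal numerical illustration of this effect (an energy-style detector with Gaussian amplitudes and lognormal uncertainty on the mean powers; all values hypothetical) is sketched below. The heavy tails induced by the power uncertainty are what degrade the low-false-alarm end of the ROC curve:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    # Uncertain mean powers: lognormal spread about nominal values.
    noise_pow = np.exp(rng.normal(0.0, 0.5, n))        # nominal ~1
    sig_pow = np.exp(rng.normal(np.log(4.0), 0.5, n))  # nominal ~4

    h0 = rng.normal(0.0, np.sqrt(noise_pow))            # noise only
    h1 = rng.normal(0.0, np.sqrt(noise_pow + sig_pow))  # signal + noise

    thresholds = np.quantile(np.abs(h0), 1.0 - np.logspace(-5, 0, 50))
    pfa = np.array([(np.abs(h0) > t).mean() for t in thresholds])
    pd = np.array([(np.abs(h1) > t).mean() for t in thresholds])
    # (pfa, pd) pairs trace the ROC curve under parametric uncertainty.
    ```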

  10. Realistic simplified gaugino-higgsino models in the MSSM

    NASA Astrophysics Data System (ADS)

    Fuks, Benjamin; Klasen, Michael; Schmiemann, Saskia; Sunder, Marthijn

    2018-03-01

    We present simplified MSSM models for light neutralinos and charginos with realistic mass spectra and realistic gaugino-higgsino mixing that can be used in experimental searches at the LHC. The formerly used naive approach of defining mass spectra and mixing matrix elements manually and independently of each other does not yield genuine MSSM benchmarks. We suggest the use of less simplified, but realistic MSSM models, whose mass spectra and mixing matrix elements are the result of a proper matrix diagonalisation. We propose a novel strategy targeting the design of such benchmark scenarios, accounting for user-defined constraints in terms of masses and particle mixing. We apply it to the higgsino case and implement a scan in the four relevant underlying parameters {μ, tan β, M1, M2} for a given set of light neutralino and chargino masses. We define a measure for the quality of the obtained benchmarks that also includes criteria to assess the higgsino content of the resulting charginos and neutralinos. We finally discuss the distribution of the resulting models in the MSSM parameter space as well as their implications for supersymmetric dark matter phenomenology.
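
    A minimal sketch of the "proper matrix diagonalisation" step for the neutralino sector, assuming the standard tree-level MSSM mass matrix in the (bino, wino, down- and up-type higgsino) basis (sign conventions vary between references; physical masses are the absolute values of the eigenvalues):

    ```python
    import numpy as np

    def neutralino_masses(M1, M2, mu, tan_beta, mZ=91.19, sw2=0.231):
        """Eigen-decomposition of the tree-level neutralino mass matrix
        in the (B~, W~, H~d, H~u) basis; returns sorted |masses| in GeV."""
        sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)
        beta = np.arctan(tan_beta)
        sb, cb = np.sin(beta), np.cos(beta)
        M = np.array([
            [M1,         0.0,        -cb*sw*mZ,  sb*sw*mZ],
            [0.0,        M2,          cb*cw*mZ, -sb*cw*mZ],
            [-cb*sw*mZ,  cb*cw*mZ,    0.0,      -mu],
            [sb*sw*mZ,  -sb*cw*mZ,   -mu,        0.0],
        ])
        return np.sort(np.abs(np.linalg.eigvalsh(M)))

    print(neutralino_masses(M1=500.0, M2=1000.0, mu=150.0, tan_beta=10.0))
    ```

    Scanning {μ, tan β, M1, M2} for a target light spectrum then amounts to numerically inverting this map, with the mixing-matrix elements coming from the eigenvectors.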

  11. Collective behaviour in vertebrates: a sensory perspective

    PubMed Central

    Collignon, Bertrand; Fernández-Juricic, Esteban

    2016-01-01

    Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions in terms of the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and constrain unrealistically their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616

  12. Hippocampal effective synchronization values are not pre-seizure indicator without considering the state of the onset channels

    PubMed Central

    Shayegh, Farzaneh; Sadri, Saeed; Amirfattahi, Rassoul; Ansari-Asl, Karim; Bellanger, Jean-Jacques; Senhadji, Lotfi

    2014-01-01

    In this paper, a model-based approach is presented to quantify the effective synchrony between hippocampal areas from depth-EEG signals. This approach is based on the parameter identification procedure of a realistic Multi-Source/Multi-Channel (MSMC) hippocampal model that simulates the function of different areas of the hippocampus. In the model it is supposed that the observed signals recorded using intracranial electrodes are generated by some hidden neuronal sources, according to some parameters. An algorithm is proposed to extract the intrinsic (solely relative to one hippocampal area) and extrinsic (coupling coefficients between two areas) model parameters simultaneously, by a Maximum Likelihood (ML) method. Coupling coefficients are considered as the measure of effective synchronization. This work can be considered as an application of Dynamic Causal Modeling (DCM) that enables us to understand effective synchronization changes during the transition from the inter-ictal to the pre-ictal state. The algorithm is first validated by using some synthetic datasets. Then, by extracting the coupling coefficients of real depth-EEG signals with the proposed approach, it is observed that the coupling values show no significant difference between ictal, pre-ictal and inter-ictal states, i.e., both increases and decreases of coupling coefficients have been observed in all states. However, taking the values of the intrinsic parameters into account, the pre-seizure state can be distinguished from the inter-ictal state. It is claimed that seizures start to appear when there are seizure-related physiological parameters on the onset channel and its coupling coefficient toward other channels increases simultaneously. As a result of considering both intrinsic and extrinsic parameters as the feature vector, inter-ictal, pre-ictal and ictal activities are discriminated from each other with an accuracy of 91.33%. PMID:25061815

  13. Interactions and triggering in a 3D rate and state asperity model

    NASA Astrophysics Data System (ADS)

    Dublanchet, P.; Bernard, P.

    2012-12-01

    Precise relocation of micro-seismicity and careful analysis of seismic source parameters have progressively imposed the concept of seismic asperities embedded in a creeping fault segment as one of the most important aspects that should appear in a realistic representation of micro-seismic sources. Another important issue concerning micro-seismic activity is the existence of robust empirical laws describing the temporal and magnitude distribution of earthquakes, such as the Omori law, the distribution of inter-event times and the Gutenberg-Richter law. In this framework, this study aims at understanding statistical properties of earthquakes by generating synthetic catalogs with a 3D, quasi-dynamic, continuous rate-and-state asperity model that takes into account a realistic geometry of asperities. Our approach contrasts with ETAS models (Kagan and Knopoff, 1981) usually implemented to produce earthquake catalogs, in the sense that the nonlinearity observed in rock friction experiments (Dieterich, 1979) is fully taken into account by the use of a rate-and-state friction law. Furthermore, our model differs from discrete models of faults (Ziv and Cochard, 2006) because the continuity allows us to define realistic geometries and distributions of asperities by assembling sub-critical computational cells that always fail in a single event. Moreover, this model allows us to address the question of the influence of barriers and of the distribution of asperities on the event statistics. After recalling the main observations of asperities in the specific case of the Parkfield segment of the San Andreas Fault, we analyse earthquake statistical properties computed for this area. Then, we present synthetic statistics obtained by our model that allow us to discuss the role of barriers in clustering and triggering phenomena among a population of sources. It appears that an effective barrier size, which depends on its frictional strength, controls the presence or absence, in the synthetic catalog, of statistical laws that are similar to what is observed for real earthquakes. As an application, we attempt to draw a comparison between synthetic statistics and the observed statistics of Parkfield in order to characterize what could be a realistic frictional model of the Parkfield area. More generally, we obtained synthetic statistical properties that are in agreement with power-law decays characterized by exponents that match the observations at a global scale, showing that our mechanical model is able to provide new insights into the understanding of earthquake interaction processes in general.
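
    For reference, the rate-and-state friction law used in such models is commonly taken in the Dieterich-Ruina form with the aging law for the state variable (this specific form is assumed here; the abstract does not spell it out):

    ```latex
    \mu = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c},
    \qquad
    \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c},
    ```

    with V the slip rate, θ the state variable, and D_c the characteristic slip distance; asperities are then velocity-weakening patches (a − b < 0) embedded in a velocity-strengthening creeping segment.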

  14. A framework for fast probabilistic centroid-moment-tensor determination—inversion of regional static displacement measurements

    NASA Astrophysics Data System (ADS)

    Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot

    2014-03-01

    The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples, using Mixture Density Networks (MDNs), a class of neural networks that output the parameters of a Gaussian mixture model. By combining multiple networks as 'committees', we are able to obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 Mw 7.2 El Mayor-Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location and depth and focal mechanism. We investigate the extent to which we can constrain moment tensor point sources with static displacement observations under realistic conditions. Our inversion results agree well with published point source solutions for this event, once the uncertainty bounds of each are taken into account.
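
    As a schematic illustration (hypothetical numbers, not the trained networks of this study), each committee member outputs the parameters of a Gaussian mixture approximating p(m|d), and the committee density is simply their average:

    ```python
    import numpy as np

    def gm_pdf(m, weights, means, sigmas):
        """Density of a 1-D Gaussian mixture at source-parameter values m."""
        m = np.atleast_1d(m)[:, None]
        comps = np.exp(-0.5 * ((m - means) / sigmas) ** 2) \
                / (np.sqrt(2.0 * np.pi) * sigmas)
        return comps @ weights

    def committee_pdf(m, members):
        """Average the mixture densities of several trained MDNs."""
        return np.mean([gm_pdf(m, **mem) for mem in members], axis=0)

    # Two hypothetical members approximating p(magnitude | d):
    members = [
        dict(weights=np.array([0.7, 0.3]), means=np.array([7.2, 7.0]),
             sigmas=np.array([0.10, 0.25])),
        dict(weights=np.array([1.0]), means=np.array([7.15]),
             sigmas=np.array([0.12])),
    ]
    posterior = committee_pdf(np.linspace(6.5, 7.8, 200), members)
    ```

    Because evaluating the mixture parameters is a single forward pass per network, the full posterior approximation is available within milliseconds of new data arriving.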

  15. Path integrals with higher order actions: Application to realistic chemical systems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.

    2018-02-01

    Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3, corresponding to an equal weighting of all force terms in the thermal density matrix, and similar to previous studies, the optimal α parameter in the SCA was ~0.31. Importantly, poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth order terms, with deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.
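
    As one concrete example of a higher order action, the Takahashi-Imada correction can be written as an effective potential for a single particle of mass m (a standard form, assumed here; the SCA and CA involve analogous force-squared terms with the tunable weights discussed above):

    ```latex
    V_{\mathrm{TI}}(\mathbf{x}) = V(\mathbf{x})
      + \frac{\hbar^{2}\tau^{2}}{24\,m}\,\bigl|\nabla V(\mathbf{x})\bigr|^{2},
    \qquad \tau = \beta / P,
    ```

    where β is the inverse temperature and P the number of path-integral beads. The extra force term is what makes the higher order methods costlier per time slice but far more accurate per bead, which is the trade-off quantified above.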

  16. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-06-01

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
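
    The conditioning step rests on standard Gaussian algebra applied per mixture component; a self-contained sketch (independent of the XDGMM code itself) is:

    ```python
    import numpy as np

    def condition_gmm(weights, means, covs, idx_known, x_known):
        """Condition a GMM on dimensions idx_known having values x_known.
        Returns weights, means, covariances over the remaining dims."""
        idx_u = np.setdiff1d(np.arange(means.shape[1]), idx_known)
        new_w, new_mu, new_cov = [], [], []
        for w, mu, S in zip(weights, means, covs):
            S_kk = S[np.ix_(idx_known, idx_known)]
            S_uk = S[np.ix_(idx_u, idx_known)]
            S_uu = S[np.ix_(idx_u, idx_u)]
            d = x_known - mu[idx_known]
            gain = S_uk @ np.linalg.inv(S_kk)
            new_mu.append(mu[idx_u] + gain @ d)
            new_cov.append(S_uu - gain @ S_uk.T)
            # Reweight by the likelihood of the known values per component.
            norm = 1.0 / np.sqrt(np.linalg.det(2.0 * np.pi * S_kk))
            new_w.append(w * norm * np.exp(-0.5 * d @ np.linalg.solve(S_kk, d)))
        new_w = np.asarray(new_w)
        return new_w / new_w.sum(), np.asarray(new_mu), np.asarray(new_cov)
    ```

    Sampling from the re-weighted conditional mixture then yields supernova parameters consistent with the observed host properties.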

  17. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holoien, Thomas W. -S.; Marshall, Philip J.; Wechsler, Risa H.

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.

  18. Macroscopic Source Properties from Dynamic Rupture Styles in Plastic Media

    NASA Astrophysics Data System (ADS)

    Gabriel, A.; Ampuero, J. P.; Dalguer, L. A.; Mai, P. M.

    2011-12-01

    High stress concentrations at earthquake rupture fronts may generate an inelastic off-fault response at the rupture tip, leading to increased energy absorption in the damage zone. Furthermore, the induced asymmetric plastic strain field in in-plane rupture modes may produce bimaterial interfaces that can increase radiation efficiency and reduce frictional dissipation. Off-fault inelasticity thus plays an important role for realistic predictions of near-fault ground motion. Guided by our previous studies in the 2D elastic case, we perform rupture dynamics simulations including rate-and-state friction and off-fault plasticity to investigate the effects on the rupture properties. We quantitatively analyze macroscopic source properties for different rupture styles, ranging from cracks to pulses and subshear to supershear ruptures, and their transitional mechanisms. The energy dissipation due to off-fault inelasticity modifies the conditions to obtain each rupture style and alters macroscopic source properties. We examine apparent fracture energy, rupture and healing front speed, peak slip and peak slip velocity, dynamic stress drop and size of the process and plastic zones, slip and plastic seismic moment, and their connection to ground motion. This presentation focuses on the effects of rupture style and off-fault plasticity on the resulting ground motion patterns, especially on characteristic slip velocity function signatures and resulting seismic moments. We aim at developing scaling rules for equivalent elastic models, as function of background stress and frictional parameters, that may lead to improved "pseudo-dynamic" source parameterizations for ground-motion calculation. Moreover, our simulations provide quantitative relations between off-fault energy dissipation and macroscopic source properties. These relations might provide a self-consistent theoretical framework for the study of the earthquake energy balance based on observable earthquake source parameters.

  19. 3D MHD Models of Active Region Loops

    NASA Technical Reports Server (NTRS)

    Ofman, Leon

    2004-01-01

    Present imaging and spectroscopic observations of active region loops allow one to determine many physical parameters of the coronal loops, such as the density, temperature, velocity of flows in loops, and the magnetic field. However, due to projection effects many of these parameters remain ambiguous. Three-dimensional imaging in EUV by the STEREO spacecraft will help to resolve the projection ambiguities, and the observations could be used to set up 3D MHD models of active region loops to study the dynamics and stability of active regions. Here the results of 3D MHD models of active region loops are presented, along with progress toward more realistic 3D MHD models of active regions. In particular, the effects of impulsive events on the excitation of active region loop oscillations, and the generation, propagation, and reflection of EIT waves are shown. It is shown how 3D MHD models together with 3D EUV observations can be used as a diagnostic tool for active region loop physical parameters, and to advance the science of the sources of solar coronal activity.

  20. Biochemical transport modeling, estimation, and detection in realistic environments

    NASA Astrophysics Data System (ADS)

    Ortner, Mathias; Nehorai, Arye

    2006-05-01

    Early detection and estimation of the spread of a biochemical contaminant are major issues for homeland security applications. We present an integrated approach combining the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of the contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion using the Feynman-Kac formula. We consider arbitrary complex geometries and account for wind turbulence. Localizing the dispersive sources is useful for decontamination purposes and estimation of the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources with small amounts of measurements. We also develop a sequential detector using the numerical transport model we propose. Sequential detection allows on-line analysis and detecting whether a change has occurred. We first focus on the formulation of a suitable sequential detector that overcomes the presence of unknown parameters (e.g. release time, intensity and location). We compute a bound on the expected delay before false detection in order to decide the threshold of the test. For a fixed false-alarm rate, we obtain the detection probability of a substance release as a function of its location and initial concentration. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct.
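
    Schematically, for a transport equation of the form ∂u/∂t = Lu + cu with initial data f, the Feynman-Kac formula represents the solution as an average over paths of the diffusion X generated by L (reflected at the domain boundaries in this application):

    ```latex
    u(\mathbf{x}, t) = \mathbb{E}_{\mathbf{x}}\!\left[
      \exp\!\left( \int_0^t c(\mathbf{X}_s)\, ds \right) f(\mathbf{X}_t)
    \right],
    ```

    so that concentrations can be estimated by simulating reflected random walks through the complex geometry rather than by gridding it.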

  1. Passive simulation of the nonlinear port-Hamiltonian modeling of a Rhodes Piano

    NASA Astrophysics Data System (ADS)

    Falaize, Antoine; Hélie, Thomas

    2017-03-01

    This paper deals with the time-domain simulation of an electro-mechanical piano: the Fender Rhodes. A simplified description of this multi-physical system is considered. It is composed of a hammer (nonlinear mechanical component), a cantilever beam (linear damped vibrating component) and a pickup (nonlinear magneto-electronic transducer). The approach is to propose a power-balanced formulation of the complete system, from which a guaranteed-passive simulation is derived to generate physically-based realistic sound synthesis. These issues are addressed in four steps. First, a class of Port-Hamiltonian Systems is introduced: these input-to-output systems fulfill a power balance that can be decomposed into conservative, dissipative and source parts. Second, physical models are proposed for each component and are recast in the port-Hamiltonian formulation. In particular, a finite-dimensional model of the cantilever beam is derived, based on a standard modal decomposition applied to the Euler-Bernoulli model. Third, these systems are interconnected, providing a nonlinear finite-dimensional Port-Hamiltonian System of the piano. Fourth, a passive-guaranteed numerical method is proposed. This method is built to preserve the power balance in the discrete-time domain, and more precisely, its decomposition structured into conservative, dissipative and source parts. Finally, simulations are performed for a set of physical parameters, based on empirical but realistic values. They provide a variety of audio signals which are perceptively relevant and qualitatively similar to some signals measured on a real instrument.
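
    For reference, a generic input-state-output port-Hamiltonian system, with Hamiltonian H (stored energy), skew-symmetric interconnection matrix J, positive semi-definite dissipation matrix R, and input matrix G, reads:

    ```latex
    \dot{\mathbf{x}} = \bigl[ J(\mathbf{x}) - R(\mathbf{x}) \bigr] \nabla H(\mathbf{x}) + G(\mathbf{x})\,\mathbf{u},
    \qquad
    \mathbf{y} = G(\mathbf{x})^{\mathsf{T}} \nabla H(\mathbf{x}),
    \qquad
    \frac{dH}{dt} = -\nabla H^{\mathsf{T}} R\, \nabla H + \mathbf{y}^{\mathsf{T}} \mathbf{u}.
    ```

    The power balance splits the change of stored energy into a dissipated part and the power supplied at the ports; it is exactly this decomposition that the discrete-time scheme is built to preserve.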

  2. Effects of realistic topography on the ground motion of the Colombian Andes - A case study at the Aburrá Valley, Antioquia

    NASA Astrophysics Data System (ADS)

    Restrepo, Doriam; Bielak, Jacobo; Serrano, Ricardo; Gómez, Juan; Jaramillo, Juan

    2016-03-01

    This paper presents a set of deterministic 3-D ground motion simulations for the greater metropolitan area of Medellín in the Aburrá Valley, an earthquake-prone region of the Colombian Andes that exhibits moderate-to-strong topographic irregularities. We created the velocity model of the Aburrá Valley region (version 1) using the geological structures as a basis for determining the shear wave velocity. The irregular surficial topography is considered by means of a fictitious domain strategy. The simulations cover a 50 × 50 × 25 km³ volume, and four Mw = 5 rupture scenarios along a segment of the Romeral fault, a significant source of seismic activity in Colombia. In order to examine the sensitivity of ground motion to the irregular topography and the 3-D effects of the valley, each earthquake scenario was simulated with three different models: (i) realistic 3-D velocity structure plus realistic topography, (ii) realistic 3-D velocity structure without topography, and (iii) homogeneous half-space with realistic topography. Our results show how surface topography affects the ground response. In particular, our findings highlight the importance of the combined interaction between source effects, source directivity, focusing, soft-soil conditions, and 3-D topography. We provide quantitative evidence of this interaction and show that topographic amplification factors can be as high as 500 per cent at some locations. In other areas within the valley, the topographic effects result in relative reductions, but these lie in the 0-150 per cent range.

  3. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons.

    PubMed

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.
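
    A minimal sketch of one ingredient of such networks, an uncoupled population of Izhikevich neurons with heterogeneous adaptation parameters (synaptic coupling and the mean-field reduction itself are omitted; parameter spreads are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, dt = 500, 1000.0, 0.5               # neurons, run time [ms], step [ms]

    # Heterogeneity: spread the adaptation parameters across the population.
    a = rng.normal(0.02, 0.004, N)            # adaptation rate
    b = rng.normal(0.2, 0.02, N)              # subthreshold coupling
    c, d, I = -65.0, 8.0, 10.0                # reset, adaptation jump, drive

    v = np.full(N, -65.0)                     # membrane potential [mV]
    u = b * v                                 # adaptation variable
    for _ in range(int(T / dt)):
        fired = v >= 30.0                     # spike detection
        v[fired] = c
        u[fired] += d
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)  # Izhikevich model
        u += dt * a * (b * v - u)
    ```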

  4. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons

    PubMed Central

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons. PMID:24416013

  5. Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and comparison of Paracousti-UQ results with MC simulations using simple models.

  6. Excitation of high-frequency electromagnetic waves by energetic electrons with a loss cone distribution in a field-aligned potential drop

    NASA Technical Reports Server (NTRS)

    Fung, Shing F.; Vinas, Adolfo F.

    1994-01-01

    The electron cyclotron maser instability (CMI) driven by momentum space anisotropy (∂f/∂p⊥ > 0) has been invoked to explain many aspects, such as the modes of propagation, harmonic emissions, and the source characteristics of the auroral kilometric radiation (AKR). Recent satellite observations of AKR sources indicate that the source regions are often imbedded within the auroral acceleration region characterized by the presence of a field-aligned potential drop. In this paper we investigate the excitation of the fundamental extraordinary mode radiation due to the accelerated electrons. The momentum space distribution of these energetic electrons is modeled by a realistic upward loss cone as modified by the presence of a parallel potential drop below the observation point. On the basis of linear growth rate calculations we present the emission characteristics, such as the frequency spectrum and the emission angular distribution as functions of the plasma parameters. We will discuss the implications of our results on the generation of the AKR from the edges of the auroral density cavities.

  7. Analysis of the attainable efficiency of a direct-bandgap betavoltaic element

    NASA Astrophysics Data System (ADS)

    Sachenko, A. V.; Shkrebtii, A. I.; Korkishko, R. M.; Kostylyov, V. P.; Kulish, M. R.; Sokolovskyi, I. O.; Evstigneev, M.

    2015-11-01

    Conversion of energy of beta-particles into electric energy in a p-n junction based on direct-bandgap semiconductors, such as GaAs, is analyzed considering realistic semiconductor system parameters. An expression for the collection coefficient, Q, of the electron-hole pairs generated by beta-electrons is derived taking into account the existence of the dead layer. We show that the collection coefficient of beta-electrons emitted by a ³H source into a GaAs p-n junction is close to 1 over a broad range of electron lifetimes in the junction, from 10⁻⁹ to 10⁻⁷ s. For the combination ¹⁴⁷Pm/GaAs, Q is relatively large (≥ 0.4) only for quite long lifetimes (about 10⁻⁷ s) and large thicknesses (about 100 μm) of GaAs p-n junctions. For realistic lifetimes of minority carriers and their diffusion coefficients, the open-circuit voltage realized due to the irradiation of a GaAs p-n junction by beta-particles is obtained. The attainable beta-conversion efficiency η in the case of the ³H/GaAs combination is found to exceed that of the ¹⁴⁷Pm/GaAs combination.

  8. Building the Case for SNAP: Creation of Multi-Band, Simulated Images With Shapelets

    NASA Technical Reports Server (NTRS)

    Ferry, Matthew A.

    2005-01-01

    Dark energy has simultaneously been the most elusive and most important phenomenon in the shaping of the universe. A case for a proposed space telescope called SNAP (SuperNova Acceleration Probe) is being built, a crucial component of which is image simulations. One method for this is "Shapelets," developed at Caltech. Shapelets form an orthonormal basis and are uniquely able to represent realistic space images and create new images based on real ones. Previously, simulations were created using the Hubble Deep Field (HDF) as a basis set in one band. In this project, image simulations are created using the 4 bands of the Hubble Ultra Deep Field (UDF) as a basis set. This provides a better basis for simulations because (1) the survey is deeper, (2) it has higher resolution, and (3) this is a step closer to simulating the 9 bands of SNAP. Image simulations are achieved by detecting sources in the UDF, decomposing them into shapelets, tweaking their parameters in realistic ways, and recomposing them into new images. Morphological tests were also run to verify the realism of the simulations. They have a wide variety of uses, including the ability to create weak gravitational lensing simulations.
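
    For reference, a 1-D Gauss-Hermite shapelet basis function can be sketched as follows (the normalization follows the standard shapelet convention; the scale β is a free parameter):

    ```python
    import numpy as np
    from numpy.polynomial.hermite import hermval
    from math import factorial, pi, sqrt

    def shapelet_1d(n, x, beta=1.0):
        """Order-n 1-D Gauss-Hermite shapelet basis function B_n(x; beta)."""
        coeffs = np.zeros(n + 1)
        coeffs[n] = 1.0                              # select H_n
        norm = (2**n * sqrt(pi) * factorial(n) * beta) ** -0.5
        return norm * hermval(x / beta, coeffs) * np.exp(-x**2 / (2 * beta**2))

    # A source profile is decomposed as sum_n c_n * B_n; perturbing the
    # coefficients c_n is one way to "tweak" sources realistically.
    x = np.linspace(-5.0, 5.0, 256)
    basis = np.stack([shapelet_1d(n, x) for n in range(8)])
    ```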

  9. Broadband light sources based on InAs/InGaAs metamorphic quantum dots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seravalli, L.; Trevisi, G.; Frigeri, P.

    We propose a design for a semiconductor structure emitting broadband light in the infrared, based on InAs quantum dots (QDs) embedded into a metamorphic step-graded InxGa1−xAs buffer. We developed a model to calculate the metamorphic QD energy levels based on the realistic QD parameters and on the strain-dependent material properties; we validated the results of simulations by comparison with the experimental values. On this basis, we designed a p-i-n heterostructure with a graded index profile toward the realization of an electrically pumped guided wave device. This has been done by adding layers where QDs are embedded in InxAlyGa1−x−yAs layers, to obtain a symmetric structure from a band profile point of view. To assess the room temperature electro-luminescence emission spectrum under realistic electrical injection conditions, we performed device-level simulations based on a coupled drift-diffusion and QD rate equation model. On the basis of the device simulation results, we conclude that the present proposal is a viable option to realize broadband light-emitting devices.

  10. Double β-decay nuclear matrix elements for the A=48 and A=58 systems

    NASA Astrophysics Data System (ADS)

    Skouras, L. D.; Vergados, J. D.

    1983-11-01

    The nuclear matrix elements entering the double β decays of the 48Ca-48Ti and 58Ni-58Fe systems have been calculated using a realistic two nucleon interaction and realistic shell model spaces. Effective transition operators corresponding to a variety of gauge theory models have been considered. The stability of such matrix elements against variations of the nuclear parameters is examined. Appropriate lepton violating parameters are extracted from the A=48 data and predictions are made for the lifetimes of the positron decays of the A=58 system. Keywords: radioactivity; double β decay; gauge theories; lepton nonconservation; neutrino mass; shell model calculations.

  11. Primeval galaxies in the sub-mm and mm

    NASA Technical Reports Server (NTRS)

    Bond, J. Richard; Myers, Steven T.

    1993-01-01

    Although the results of COBE's FIRAS experiment constrain the deviation in energy from the CMB blackbody in the 500-5000 micron range to be ΔE/E_CMB < 0.005, primeval galaxies can still lead to a brilliant sub-mm sky of non-Gaussian sources that are detectable at 10 arcsec resolution from planned arrays such as SCUBA on the James Clerk Maxwell Telescope and, quite plausibly, at sub-arcsecond resolution in planned mm and sub-mm interferometers. Here, we apply our hierarchical peaks method to a CDM model to construct sub-mm and mm maps of bursting PGs appropriate for these instruments, with minimum contours chosen to correspond to realistic observational parameters for them and which pass the FIRAS limits.

  12. One-sided measurement-device-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Cao, Wen-Fei; Zhen, Yi-Zheng; Zheng, Yu-Lin; Li, Li; Chen, Zeng-Bing; Liu, Nai-Le; Chen, Kai

    2018-01-01

    The measurement-device-independent quantum key distribution (MDI-QKD) protocol was proposed to remove all detector side channel attacks, while its security relies on trusted encoding systems. Here we propose a one-sided MDI-QKD (1SMDI-QKD) protocol, which enjoys the detection-loophole-free advantage and at the same time weakens the state preparation assumption of MDI-QKD. The 1SMDI-QKD can be regarded as a modified MDI-QKD in which Bob's encoding system is trusted, while Alice's is uncharacterized. For the practical implementation, we also provide a scheme utilizing a coherent light source with an analytical two-decoy-state estimation method. Simulation with realistic experimental parameters shows that the protocol has a promising performance and can thus be applied in practical QKD systems.

  13. Random sampling and validation of covariance matrices of resonance parameters

    NASA Astrophysics Data System (ADS)

    Plevnik, Lucijan; Zerovnik, Gašper

    2017-09-01

    Analytically exact methods for random sampling of arbitrary correlated parameters are presented. Emphasis is given, on the one hand, to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and consistent sampling of correlated inherently positive parameters, and, on the other hand, to optimization of the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which, starting from a nuclear data library file of a chosen isotope in ENDF-6 format, produces an arbitrary number of new ENDF-6 files containing random samples of the resonance parameters (in accordance with the corresponding covariance matrices) in place of the original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in the covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to sampling and validation of any nuclear data.
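
    As a minimal illustration of the sampling step (not ENDSAM's actual Fortran implementation), the sketch below draws correlated Gaussian samples after symmetrizing the covariance and clipping negative eigenvalues, one simple repair when covariance data fail positive semi-definiteness; inherently positive parameters could be handled the same way in log space.

    ```python
    import numpy as np

    def sample_correlated(mean, cov, n_samples, rng=None):
        """Draw n_samples vectors from N(mean, cov) with a PSD-repaired cov."""
        rng = np.random.default_rng(rng)
        cov = 0.5 * (cov + np.asarray(cov).T)   # enforce symmetry
        w, v = np.linalg.eigh(cov)
        w = np.clip(w, 0.0, None)               # clip negative eigenvalues (PSD repair)
        L = v * np.sqrt(w)                      # L @ L.T reproduces the repaired cov
        z = rng.standard_normal((n_samples, len(mean)))
        return np.asarray(mean) + z @ L.T
    ```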

  14. Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; Sigloch, Karin

    2016-11-01

    Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than the more commonly used ℓp norms, which measure misfit sample by sample. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on the signal decorrelation D = 1 - CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. By identifying and quantifying this likelihood function, we make D and thus waveform cross-correlation measurements usable for fully probabilistic sampling strategies, in source inversion and related applications such as seismic tomography.
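
    The decorrelation misfit and a log-normal likelihood for it can be written down compactly; the sketch below is a schematic, univariate version (the paper's empirical likelihood is multivariate and depends on SNR and station geometry).

    ```python
    import numpy as np

    def decorrelation(obs, syn):
        """D = 1 - CC, with CC the zero-lag normalized cross-correlation."""
        obs = obs - obs.mean()
        syn = syn - syn.mean()
        cc = np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))
        return 1.0 - cc

    def lognormal_loglike(D, mu, sigma):
        """Log-density of a log-normal distribution evaluated at D > 0."""
        lnD = np.log(D)
        return (-0.5 * ((lnD - mu) / sigma) ** 2
                - lnD - np.log(sigma * np.sqrt(2.0 * np.pi)))
    ```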

  15. Inhomogeneous ensembles of radical pairs in chemical compasses

    PubMed Central

    Procopio, Maria; Ritz, Thorsten

    2016-01-01

    The biophysical basis for the ability of animals to detect the geomagnetic field and to use it for finding directions remains a mystery of sensory biology. One much debated hypothesis suggests that an ensemble of specialized light-induced radical pair reactions can provide the primary signal for a magnetic compass sensor. The question arises what features of such a radical pair ensemble could be optimized by evolution so as to improve the detection of the direction of weak magnetic fields. Here, we focus on the overlooked aspect of the noise arising from inhomogeneity of copies of biomolecules in a realistic biological environment. Such inhomogeneity leads to variations of the radical pair parameters, thereby deteriorating the signal arising from an ensemble and providing a source of noise. We investigate the effect of variations in hyperfine interactions between different copies of simple radical pairs on the directional response of a compass system. We find that the choice of radical pair parameters greatly influences how strongly the directional response of an ensemble is affected by inhomogeneity. PMID:27804956

  16. Solving the relativistic inverse stellar problem through gravitational waves observation of binary neutron stars

    NASA Astrophysics Data System (ADS)

    Abdelsalhin, Tiziano; Maselli, Andrea; Ferrari, Valeria

    2018-04-01

    The LIGO/Virgo Collaboration has recently announced the direct detection of gravitational waves emitted in the coalescence of a neutron star binary. This discovery allows us, for the first time, to set new constraints on the behavior of matter at supranuclear density, complementary to those coming from astrophysical observations in the electromagnetic band. In this paper we demonstrate the feasibility of using gravitational signals to solve the relativistic inverse stellar problem, i.e., to reconstruct the parameters of the equation of state (EoS) from measurements of the stellar mass and tidal Love number. We perform Bayesian inference on mock data, based on different models of the star's internal composition, modeled through piecewise polytropes. Our analysis shows that the detection of a small number of sources by a network of advanced interferometers would allow us to put accurate bounds on the EoS parameters and to perform a model selection among the realistic equations of state proposed in the literature.
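
    A piecewise-polytropic EoS is simple to evaluate once continuity of pressure at the dividing densities fixes the constants K_i; the sketch below is a generic implementation with illustrative inputs, not the parametrization used in the paper.

    ```python
    import numpy as np

    def piecewise_polytrope(rho, rho_bounds, K0, gammas):
        """P(rho) = K_i * rho**Gamma_i on each density segment.
        rho_bounds: increasing dividing densities (len = len(gammas) - 1);
        K_i for i > 0 follow from continuity of P across rho_bounds."""
        Ks = [K0]
        for i in range(1, len(gammas)):
            Ks.append(Ks[-1] * rho_bounds[i - 1] ** (gammas[i - 1] - gammas[i]))
        i = int(np.searchsorted(rho_bounds, rho))
        return Ks[i] * rho ** gammas[i]

    # illustrative numbers in arbitrary consistent units
    print(piecewise_polytrope(2.0, [1.0, 3.0], K0=1.0, gammas=[2.0, 2.5, 3.0]))
    ```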

  17. Omnibus experiment: CPT and CP violation with sterile neutrinos

    NASA Astrophysics Data System (ADS)

    Loo, K. K.; Novikov, Yu N.; Smirnov, M. V.; Trzaska, W. H.; Wurm, M.

    2017-09-01

    The verification of the sterile neutrino hypothesis and, if confirmed, the determination of the relevant oscillation parameters is one of the goals of neutrino physics in the near future. We propose to search for sterile neutrinos with a high-statistics measurement utilizing radioactive sources and an oscillometric approach with a large liquid scintillator detector such as LENA, JUNO, or RENO-50. Our calculations indicate that such an experiment is realistic and could be performed in parallel to the main research plan for JUNO, LENA, or RENO-50. Assuming as the starting point the values of the oscillation parameters indicated by the current global fit (in the 3+1 scenario) and requiring at least 5σ confidence level, we estimate that we would be able to detect differences in the mass squared differences Δm²_41 of electron neutrinos and electron antineutrinos of the order of 1% or larger. That would allow us to probe the CPT symmetry with neutrinos with unprecedented accuracy.
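
    In the short-baseline 3+1 approximation, the electron (anti)neutrino survival probability driven by Δm²_41 takes the standard two-flavor form; the sketch below encodes this textbook formula with the usual 1.27 factor for L in meters and E in MeV (a generic illustration, not the authors' full oscillometric calculation).

    ```python
    import numpy as np

    def p_ee_survival(L_m, E_MeV, sin2_2theta_ee, dm41_sq_eV2):
        """P_ee = 1 - sin^2(2 theta_ee) * sin^2(1.27 * dm41^2 * L / E)."""
        phase = 1.27 * dm41_sq_eV2 * L_m / E_MeV
        return 1.0 - sin2_2theta_ee * np.sin(phase) ** 2

    # e.g. a 2 MeV antineutrino 5 m from the source, with dm41^2 = 1 eV^2
    print(p_ee_survival(5.0, 2.0, 0.1, 1.0))
    ```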

  18. Inhomogeneous ensembles of radical pairs in chemical compasses

    NASA Astrophysics Data System (ADS)

    Procopio, Maria; Ritz, Thorsten

    2016-11-01

    The biophysical basis for the ability of animals to detect the geomagnetic field and to use it for finding directions remains a mystery of sensory biology. One much debated hypothesis suggests that an ensemble of specialized light-induced radical pair reactions can provide the primary signal for a magnetic compass sensor. The question arises what features of such a radical pair ensemble could be optimized by evolution so as to improve the detection of the direction of weak magnetic fields. Here, we focus on the overlooked aspect of the noise arising from inhomogeneity of copies of biomolecules in a realistic biological environment. Such inhomogeneity leads to variations of the radical pair parameters, thereby deteriorating the signal arising from an ensemble and providing a source of noise. We investigate the effect of variations in hyperfine interactions between different copies of simple radical pairs on the directional response of a compass system. We find that the choice of radical pair parameters greatly influences how strongly the directional response of an ensemble is affected by inhomogeneity.

  19. Simulations of X-ray diffraction of shock-compressed single-crystal tantalum with synchrotron undulator sources.

    PubMed

    Tang, M X; Zhang, Y Y; E, J C; Luo, S N

    2018-05-01

    Polychromatic synchrotron undulator X-ray sources are useful for ultrafast single-crystal diffraction under shock compression. Here, simulations of X-ray diffraction of shock-compressed single-crystal tantalum with realistic undulator sources are reported, based on large-scale molecular dynamics simulations. Purely elastic deformation, elastic-plastic two-wave structure, and severe plastic deformation under different impact velocities are explored, as well as an edge release case. Transmission-mode diffraction simulations consider crystallographic orientation, loading direction, incident beam direction, X-ray spectrum bandwidth and realistic detector size. Diffraction patterns and reciprocal space nodes are obtained from atomic configurations for different loading (elastic and plastic) and detection conditions, and interpretation of the diffraction patterns is discussed.

  20. Simulations of X-ray diffraction of shock-compressed single-crystal tantalum with synchrotron undulator sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, M. X.; Zhang, Y. Y.; E, J. C.

    Polychromatic synchrotron undulator X-ray sources are useful for ultrafast single-crystal diffraction under shock compression. Here, simulations of X-ray diffraction of shock-compressed single-crystal tantalum with realistic undulator sources are reported, based on large-scale molecular dynamics simulations. Purely elastic deformation, elastic–plastic two-wave structure, and severe plastic deformation under different impact velocities are explored, as well as an edge release case. Transmission-mode diffraction simulations consider crystallographic orientation, loading direction, incident beam direction, X-ray spectrum bandwidth and realistic detector size. Diffraction patterns and reciprocal space nodes are obtained from atomic configurations for different loading (elastic and plastic) and detection conditions, and interpretation of the diffraction patterns is discussed.

  1. Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers.

    PubMed

    Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew

    2014-12-26

    Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems. In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation.

  2. Optimal antibunching in passive photonic devices based on coupled nonlinear resonators

    NASA Astrophysics Data System (ADS)

    Ferretti, S.; Savona, V.; Gerace, D.

    2013-02-01

    We propose the use of weakly nonlinear passive materials for prospective applications in integrated quantum photonics. It is shown that strong enhancement of native optical nonlinearities by electromagnetic field confinement in photonic crystal resonators can lead to single-photon generation only exploiting the quantum interference of two coupled modes and the effect of photon blockade under resonant coherent driving. For realistic system parameters in state of the art microcavities, the efficiency of such a single-photon source is theoretically characterized by means of the second-order correlation function at zero-time delay as the main figure of merit, where major sources of loss and decoherence are taken into account within a standard master equation treatment. These results could stimulate the realization of integrated quantum photonic devices based on non-resonant material media, fully integrable with current semiconductor technology and matching the relevant telecom band operational wavelengths, as an alternative to single-photon nonlinear devices based on cavity quantum electrodynamics with artificial atoms or single atomic-like emitters.
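
    The figure of merit g2(0) is straightforward to compute for a driven dissipative resonator; the sketch below does so for a single weakly nonlinear (Kerr) mode with QuTiP, which only illustrates the quantity itself. The paper's scheme relies on two coupled modes and quantum interference ("unconventional" blockade), and all rates below are illustrative.

    ```python
    import numpy as np
    import qutip as qt

    N = 15                     # Fock-space truncation
    a = qt.destroy(N)
    kappa = 1.0                # loss rate (sets the unit)
    U = 0.05 * kappa           # weak Kerr nonlinearity (illustrative)
    F = 0.1 * kappa            # coherent drive amplitude (illustrative)

    H = 0.5 * U * a.dag() * a.dag() * a * a + F * (a + a.dag())
    rho = qt.steadystate(H, [np.sqrt(kappa) * a])
    n = qt.expect(a.dag() * a, rho)
    g2_0 = qt.expect(a.dag() * a.dag() * a * a, rho) / n**2
    print(f"g2(0) = {g2_0:.3f}")   # g2(0) < 1 signals antibunching
    ```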

  3. Modeling the Hyperdistribution of Item Parameters To Improve the Accuracy of Recovery in Estimation Procedures.

    ERIC Educational Resources Information Center

    Matthews-Lopez, Joy L.; Hombo, Catherine M.

    The purpose of this study was to examine the recovery of item parameters in simulated Automatic Item Generation (AIG) conditions, using Markov chain Monte Carlo (MCMC) estimation methods to attempt to recover the generating distributions. To do this, variability in item and ability parameters was manipulated. Realistic AIG conditions were…

  4. Diffusive deposition of aerosols in Phebus containment during FPT-2 test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontautas, A.; Urbonavicius, E.

    2012-07-01

    At present, lumped-parameter codes are the main tools to investigate the complex response of the containment of a Nuclear Power Plant in case of an accident. Continuous development and validation of the codes is required to perform realistic investigation of the processes that determine the possible source term of radioactive products to the environment. Validation of the codes is based on the comparison of the calculated results with the measurements performed in experimental facilities. The most extensive experimental program to investigate fission product release from the molten fuel, transport through the cooling circuit and deposition in the containment is performed in the PHEBUS test facility. Test FPT-2 performed in this facility is considered for analysis of processes taking place in the containment. Earlier investigations using the COCOSYS code showed that the code could be successfully used for analysis of thermal-hydraulic processes and deposition of aerosols, but it was also noticed that diffusive deposition on the vertical walls did not fit well with the measured results. A different model for diffusive deposition is implemented in the CPA module of the ASTEC code; therefore, the PHEBUS containment model was transferred from COCOSYS to ASTEC-CPA to investigate the influence of the diffusive deposition modelling. The analysis was performed using a PHEBUS containment model of 16 nodes. The calculated thermal-hydraulic parameters are in good agreement with measured results, which gives a basis for realistic simulation of aerosol transport and deposition processes. The performed investigations showed that the diffusive deposition model has an influence on the aerosol deposition distribution on different surfaces in the test facility. (authors)

  5. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA.

    PubMed

    Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M

    2017-10-01

    Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  6. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    NASA Astrophysics Data System (ADS)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.

  7. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    NASA Astrophysics Data System (ADS)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

    The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted collection of paleomagnetic data from continuous long-core samples. The output of a pass-through measurement is smoothed and distorted due to convolution of the magnetization with the magnetometer sensor response. Although several studies could restore high-resolution paleomagnetic signal through deconvolution of pass-through measurements, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response of an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving a "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization and successfully restores fine-scale magnetization variations, including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimate and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of the error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.

  8. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    NASA Astrophysics Data System (ADS)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.

  9. A Model-Based Investigation of Charge-Generation According to the Relative Diffusional Growth Rate Theory

    NASA Astrophysics Data System (ADS)

    Glassmeier, F.; Arnold, L.; Lohmann, U.; Dietlicher, R.; Paukert, M.

    2016-12-01

    Our current understanding of charge generation in thunderclouds is based on collisional charge transfer between graupel and ice crystals in the presence of liquid water droplets as the dominant mechanism. The physical process of charge transfer and the sign of the net charge generated on graupel and ice crystals under different cloud conditions are not yet understood. The Relative-Diffusional-Growth-Rate (RDGR) theory (Baker et al. 1987) suggests that the particle with the faster diffusional radius growth is charged positively. In this contribution, we use simulations of idealized thunderclouds with two-moment warm and cold cloud microphysics to generate realistic combinations of RDGR parameters. We find that these realistic parameter combinations result in a relationship between the sign of charge, cloud temperature and effective water content that deviates from previous theoretical and laboratory studies. This deviation indicates that the RDGR theory is sensitive to correlations between parameters that occur in clouds but are not captured in studies that vary temperature and water content while keeping other parameters at fixed values. In addition, our results suggest that diffusional growth from the riming-related local water vapor field, a key component of the RDGR theory, is negligible for realistic parameter combinations. Nevertheless, we confirm that the RDGR theory results in positive or negative charging of particles under different cloud conditions. Under specific conditions, charge generation via the RDGR theory alone might thus be sufficient to explain tripolar charge structures in thunderclouds. In general, however, additional charge generation mechanisms and adaptations to the RDGR theory that consider riming other than via local vapor deposition seem necessary.

  10. Modeling the Normal and Neoplastic Cell Cycle with 'Realistic Boolean Genetic Networks': Their Application for Understanding Carcinogenesis and Assessing Therapeutic Strategies

    NASA Technical Reports Server (NTRS)

    Szallasi, Zoltan; Liang, Shoudan

    2000-01-01

    In this paper we show how Boolean genetic networks could be used to address complex problems in cancer biology. First, we describe a general strategy to generate Boolean genetic networks that incorporate all relevant biochemical and physiological parameters and cover all of their regulatory interactions in a deterministic manner. Second, we introduce 'realistic Boolean genetic networks' that produce time series measurements very similar to those detected in actual biological systems. Third, we outline a series of essential questions related to cancer biology and cancer therapy that could be addressed by the use of 'realistic Boolean genetic network' modeling.
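
    A deterministic Boolean network is just a set of update rules iterated synchronously; the toy sketch below (with invented regulatory wiring, loosely cell-cycle flavored) shows the mechanics, not the networks of the paper.

    ```python
    # Illustrative Boolean genetic network; the wiring is made up for the example.
    rules = {
        "growth_signal": lambda s: s["growth_signal"],      # held constant
        "cyclinD": lambda s: s["growth_signal"],
        "Rb":      lambda s: not s["cyclinD"],
        "E2F":     lambda s: not s["Rb"],
        "cyclinE": lambda s: s["E2F"],
    }

    state = {"growth_signal": True, "cyclinD": False, "Rb": True,
             "E2F": False, "cyclinE": False}

    for step in range(4):
        # synchronous update: every gene reads the previous state
        state = {gene: f(state) for gene, f in rules.items()}
        print(step, state)
    ```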

  11. Effects of damping on mode shapes, volume 1

    NASA Technical Reports Server (NTRS)

    Gates, R. M.

    1977-01-01

    Displacement, velocity, and acceleration admittances were calculated for a realistic NASTRAN structural model of the space shuttle for three conditions: liftoff, maximum dynamic pressure, and end of solid rocket booster burn. The realistic model of the orbiter, external tank, and solid rocket motors included the representation of structural joint transmissibilities by finite stiffness and damping elements. Methods were developed to incorporate structural joints and their damping characteristics into a finite element model of the space shuttle, to determine the point damping parameters required to produce realistic damping in the primary modes, and to calculate the effect of distributed damping on structural resonances through the calculation of admittances.

  12. X-ray reflection from cold white dwarfs in magnetic cataclysmic variables

    NASA Astrophysics Data System (ADS)

    Hayashi, Takayuki; Kitaguchi, Takao; Ishida, Manabu

    2018-02-01

    We model X-ray reflection from white dwarfs (WDs) in magnetic cataclysmic variables (mCVs) using a Monte Carlo simulation. A point source with a power-law spectrum or a realistic post-shock accretion column (PSAC) source irradiates a cool and spherical WD. The PSAC source emits thermal spectra of various temperatures stratified along the column according to the PSAC model. In the point-source simulation, we confirm the following: a source harder and nearer to the WD enhances the reflection; higher iron abundance enhances the equivalent widths (EWs) of fluorescent iron Kα1, 2 lines and their Compton shoulder, and increases the cut-off energy of a Compton hump; significant reflection appears from an area that is more than 90° apart from the position right under the point X-ray source because of the WD curvature. The PSAC simulation reveals the following: a more massive WD basically enhances the intensities of the fluorescent iron Kα1, 2 lines and the Compton hump, except for some specific accretion rate, because the more massive WD makes a hotter PSAC from which higher-energy X-rays are preferentially emitted; a larger specific accretion rate monotonically enhances the reflection because it makes a hotter and shorter PSAC; the intrinsic thermal component hardens by occultation of the cool base of the PSAC by the WD. We quantitatively estimate the influences of the parameters on the EWs and the Compton hump with both types of source. We also calculate X-ray modulation profiles brought about by the WD spin. These depend on the angles of the spin axis from the line of sight and from the PSAC, and on whether the two PSACs can be seen. The reflection spectral model and the modulation model involve the fluorescent lines and the Compton hump and can directly be compared to the data, which allows us to estimate these geometrical parameters with unprecedented accuracy.

  13. Continental-scale river flow in climate models

    NASA Technical Reports Server (NTRS)

    Miller, James R.; Russell, Gary L.; Caliri, Guilherme

    1994-01-01

    The hydrologic cycle is a major part of the global climate system. There is an atmospheric flux of water from the ocean surface to the continents. The cycle is closed by return flow in rivers. In this paper a river routing model is developed for use with grid box climate models for the whole earth. The routing model needs an algorithm for the river mass flow and a river direction file, which has been compiled for 4 deg x 5 deg and 2 deg x 2.5 deg resolutions. River basins are defined by the direction files. The river flow leaving each grid box depends on river and lake mass, downstream distance, and an effective flow speed that depends on topography. As input the routing model uses monthly land source runoff from a 5-yr simulation of the NASA/GISS atmospheric climate model (Hansen et al.). The land source runoff from the 4 deg x 5 deg resolution model is quartered onto a 2 deg x 2.5 deg grid, and the effect of grid resolution is examined. Monthly flow at the mouth of the world's major rivers is compared with observations, and a global error function for river flow is used to evaluate the routing model and its sensitivity to physical parameters. Three basinwide parameters are introduced: the river length weighted by source runoff, the turnover rate, and the basinwide speed. Although the values of these parameters depend on the resolution at which the rivers are defined, the values should converge as the grid resolution becomes finer. When the routing scheme described here is coupled with a climate model's source runoff, it provides the basis for closing the hydrologic cycle in coupled atmosphere-ocean models by realistically allowing water to return to the ocean at the correct location and with the proper magnitude and timing.
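
    The flow rule described above (outflow proportional to stored mass and an effective speed, inversely proportional to downstream distance) can be sketched in a few lines; the direction bookkeeping and units here are illustrative assumptions, not the GISS routing code.

    ```python
    import numpy as np

    def route_step(mass, downstream, speed, distance, dt):
        """One explicit routing step on a flattened grid.
        downstream[i]: index of the box receiving box i's outflow (-1 = ocean)."""
        outflow = speed * mass / distance * dt      # mass leaving each box
        outflow = np.minimum(outflow, mass)         # cannot export more than stored
        new_mass = mass - outflow
        for i, j in enumerate(downstream):
            if j >= 0:
                new_mass[j] += outflow[i]           # flow into the ocean leaves the grid
        return new_mass
    ```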

  14. Toward a probabilistic acoustic emission source location algorithm: A Bayesian approach

    NASA Astrophysics Data System (ADS)

    Schumacher, Thomas; Straub, Daniel; Higgins, Christopher

    2012-09-01

    Acoustic emissions (AE) are stress waves initiated by sudden strain releases within a solid body. These can be caused by internal mechanisms such as crack opening or propagation, crushing, or rubbing of crack surfaces. One application for the AE technique in the field of Structural Engineering is Structural Health Monitoring (SHM). With piezo-electric sensors mounted to the surface of the structure, stress waves can be detected, recorded, and stored for later analysis. An important step in quantitative AE analysis is the estimation of the stress wave source locations. Commonly, source location results are presented in a rather deterministic manner as spatial and temporal points, excluding information about uncertainties and errors. Due to variability in the material properties and uncertainty in the mathematical model, measures of uncertainty are needed beyond best-fit point solutions for source locations. This paper introduces a novel holistic framework for the development of a probabilistic source location algorithm. Bayesian analysis methods with Markov Chain Monte Carlo (MCMC) simulation are employed where all source location parameters are described with posterior probability density functions (PDFs). The proposed methodology is applied to an example employing data collected from a realistic section of a reinforced concrete bridge column. The selected approach is general and has the advantage that it can be extended and refined efficiently. Results are discussed and future steps to improve the algorithm are suggested.
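
    To make the idea concrete, the sketch below runs a Metropolis sampler over source position and origin time given noisy first-arrival times at four sensors, with flat priors and a Gaussian arrival-time likelihood; the geometry, wave speed, and noise level are invented for illustration, and the paper's framework is considerably richer.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # m
    v, sigma = 4000.0, 2e-6            # wave speed (m/s), arrival-time noise (s)
    true_src, true_t0 = np.array([0.3, 0.6]), 0.0
    t_obs = true_t0 + np.linalg.norm(sensors - true_src, axis=1) / v
    t_obs = t_obs + rng.normal(0.0, sigma, len(sensors))

    def log_post(theta):               # flat priors -> log-likelihood only
        src, t0 = theta[:2], theta[2]
        t_pred = t0 + np.linalg.norm(sensors - src, axis=1) / v
        return -0.5 * np.sum(((t_obs - t_pred) / sigma) ** 2)

    theta, samples = np.array([0.5, 0.5, 0.0]), []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, [0.02, 0.02, 1e-6])   # random-walk proposal
        if np.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop
        samples.append(theta.copy())
    print(np.mean(samples[5000:], axis=0))   # posterior mean of (x, y, t0)
    ```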

  15. The effect of a realistic thermal diffusivity on numerical model of a subducting slab

    NASA Astrophysics Data System (ADS)

    Maierova, P.; Steinle-Neumann, G.; Cadek, O.

    2010-12-01

    A number of numerical studies of subducting slabs assume simplified (constant or only depth-dependent) models of thermal conductivity. The available mineral physics data indicate, however, that thermal diffusivity is strongly temperature- and pressure-dependent and may also vary among different mantle materials. In the present study, we examine the influence of realistic thermal properties of mantle materials on the thermal state of the upper mantle and the dynamics of subducting slabs. On the basis of the data published in the mineral physics literature we compile analytical relationships that approximate the pressure and temperature dependence of thermal diffusivity for major mineral phases of the mantle (olivine, wadsleyite, ringwoodite, garnet, clinopyroxenes, stishovite and perovskite). We propose a simplified composition of the mineral assemblages predominating in the subducting slab and the surrounding mantle (pyrolite, mid-ocean ridge basalt, harzburgite) and we estimate their thermal diffusivity using the Hashin-Shtrikman bounds. The resulting complex formula for the diffusivity of each aggregate is then approximated by a simpler analytical relationship that is used in our numerical model as an input parameter. For the numerical modeling we use the Elmer software (open-source finite element software for multiphysical problems, see http://www.csc.fi/english/pages/elmer). We set up a 2D Cartesian thermo-mechanical steady-state model of a subducting slab. The model is partly kinematic, as the flow is driven by a boundary condition on velocity that is prescribed on the top of the subducting lithospheric plate. The rheology of the material is non-linear and is coupled with the thermal equation. Using the realistic relationship for the thermal diffusivity of mantle materials, we compute the thermal and flow fields for different input velocities and ages of the subducting plate, and we compare the results against models assuming a constant thermal diffusivity. The importance of a realistic description of thermal properties in models of subducted slabs is discussed.
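
    For reference, the Hashin-Shtrikman bounds for a two-phase isotropic aggregate in 3D can be evaluated directly; the sketch below is the textbook formula with invented conductivities, not the paper's multi-phase fits.

    ```python
    def hs_bounds(k1, f1, k2, f2):
        """Hashin-Shtrikman (lower, upper) bounds on effective conductivity.
        Requires k1 < k2 and f1 + f2 == 1."""
        lower = k1 + f2 / (1.0 / (k2 - k1) + f1 / (3.0 * k1))
        upper = k2 + f1 / (1.0 / (k1 - k2) + f2 / (3.0 * k2))
        return lower, upper

    # e.g. a 60/40 mixture of phases with k = 3 and 5 W/m/K (illustrative values)
    print(hs_bounds(3.0, 0.6, 5.0, 0.4))
    ```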

  16. Review on solving the forward problem in EEG source analysis

    PubMed Central

    Hallez, Hans; Vanrumste, Bart; Grech, Roberta; Muscat, Joseph; De Clercq, Wim; Vergult, Anneleen; D'Asseler, Yves; Camilleri, Kenneth P; Fabri, Simon G; Van Huffel, Sabine; Lemahieu, Ignace

    2007-01-01

    Background. The aim of electroencephalogram (EEG) source localization is to find the brain areas responsible for EEG waves of interest. It consists of solving forward and inverse problems. The forward problem is solved by starting from a given electrical source and calculating the potentials at the electrodes. These evaluations are necessary to solve the inverse problem, which is defined as finding the brain sources responsible for the measured potentials at the EEG electrodes. Methods. While other reviews give an extensive summary of both the forward and inverse problems, this review article focuses on different aspects of solving the forward problem and is intended for newcomers to this research field. Results. It starts by focusing on the generators of the EEG: the post-synaptic potentials in the apical dendrites of pyramidal neurons. These cells generate an extracellular current which can be modeled by Poisson's differential equation with Neumann and Dirichlet boundary conditions. The compartments in which these currents flow can be anisotropic (e.g. skull and white matter). In a three-shell spherical head model an analytical expression exists to solve the forward problem. During the last two decades researchers have tried to solve Poisson's equation in a realistically shaped head model obtained from 3D medical images, which requires numerical methods. The following methods are compared with each other: the boundary element method (BEM), the finite element method (FEM) and the finite difference method (FDM). In the last two methods anisotropic conducting compartments can conveniently be introduced. The focus is then set on the use of reciprocity in EEG source localization. It is introduced to speed up the forward calculations, which are then performed for each electrode position rather than for each dipole position. Solving Poisson's equation utilizing FEM or FDM corresponds to solving a large sparse linear system, which requires iterative methods. The following iterative methods are discussed: successive over-relaxation, the conjugate gradient method and the algebraic multigrid method. Conclusion. Solving the forward problem has been well documented in the past decades. In the past, simplified spherical head models were used, whereas nowadays a combination of imaging modalities is used to accurately describe the geometry of the head model. Efforts have been made to realistically describe the shape of the head model, as well as the heterogeneity of the tissue types, and to realistically determine the conductivity. However, the determination and validation of the in vivo conductivity values is still an important topic in this field. In addition, more studies have to be done on the influence of all the parameters of the head model and of the numerical techniques on the solution of the forward problem. PMID:18053144
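
    As a minimal illustration of the iterative solvers mentioned above, the sketch below applies successive over-relaxation to a 2D Poisson problem with Dirichlet boundaries; a real FDM head model would instead solve a large sparse system with tissue-dependent (possibly anisotropic) conductivities.

    ```python
    import numpy as np

    n, omega = 50, 1.8                   # grid size, relaxation factor (0 < omega < 2)
    h = 1.0 / (n - 1)
    u = np.zeros((n, n))                 # Dirichlet boundaries: u = 0 on the edges
    f = np.zeros((n, n))
    f[n // 2, n // 2] = 1.0 / h**2       # crude point-source term

    for sweep in range(500):             # SOR sweeps
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                             + h**2 * f[i, j])     # Gauss-Seidel value
                u[i, j] += omega * (gs - u[i, j])  # over-relaxed update
    print(u.max())
    ```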

  17. Parametric Studies for Scenario Earthquakes: Site Effects and Differential Motion

    NASA Astrophysics Data System (ADS)

    Panza, G. F.; Panza, G. F.; Romanelli, F.

    2001-12-01

    In the presence of strong lateral heterogeneities, the generation of local surface waves and local resonance can give rise to a complicated pattern in the spatial ground-shaking scenario. For any object of the built environment with dimensions greater than the characteristic length of the ground motion, different parts of its foundations can experience severe non-synchronous seismic input. In order to perform an accurate estimate of the site effects, and of differential motion, in realistic geometries, it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models, allows us to construct damage scenarios that are out of reach of stochastic models. Synthetic signals, to be used as seismic input in a subsequent engineering analysis, e.g. for the design of earthquake-resistant structures or for the estimation of differential motion, can be produced at a very low cost/benefit ratio. We illustrate the work done in the framework of a large international cooperation following the guidelines of the UNESCO IUGS IGCP Project 414 "Realistic Modeling of Seismic Input for Megacities and Large Urban Areas" and show the very recent numerical experiments carried out within the EC project "Advanced methods for assessing the seismic vulnerability of existing motorway bridges" (VAB) to assess the importance of non-synchronous seismic excitation of long structures. http://www.ictp.trieste.it/www_users/sand/projects.html

  18. Moment tensor inversion with three-dimensional sensor configuration of mining induced seismicity (Kiruna mine, Sweden)

    NASA Astrophysics Data System (ADS)

    Ma, Ju; Dineva, Savka; Cesca, Simone; Heimann, Sebastian

    2018-06-01

    Mining induced seismicity is an undesired consequence of mining operations, which poses significant hazard to miners and infrastructures and requires an accurate analysis of the rupture process. Seismic moment tensors of mining-induced events help to understand the nature of mining-induced seismicity by providing information about the relationship between the mining, stress redistribution and instabilities in the rock mass. In this work, we adapt and test a waveform-based inversion method on high frequency data recorded by a dense underground seismic system in one of the largest underground mines in the world (Kiruna mine, Sweden). Developing a stable algorithm for moment tensor inversion of comparatively small mining-induced earthquakes, resolving both the double-couple and full moment tensor with high frequency data, is very challenging. Moreover, the application to an underground mining system requires accounting for the 3-D geometry of the monitoring system. We construct a Green's function database using a homogeneous velocity model, but assuming a 3-D distribution of potential sources and receivers. We first perform a set of moment tensor inversions using synthetic data to test the effects of different factors on moment tensor inversion stability and source parameter accuracy, including the network spatial coverage, the number of sensors and the signal-to-noise ratio. The influence of the accuracy of the input source parameters on the inversion results is also tested. Those tests show that an accurate selection of the inversion parameters allows resolving the moment tensor also in the presence of realistic seismic noise conditions. Finally, the moment tensor inversion methodology is applied to eight events chosen from mining block #33/34 at Kiruna mine. Source parameters including scalar moment, magnitude, double-couple, compensated linear vector dipole and isotropic contributions, as well as the strike, dip and rake configurations of the double-couple term, were obtained. The orientations of the nodal planes of the double-couple component in most cases vary from NNW to NNE with a dip along the ore body or in the opposite direction.
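
    Once a moment tensor is obtained, its decomposition into isotropic, double-couple and CLVD parts follows a standard convention; the sketch below implements one common variant (Jost & Herrmann-style) and is independent of the waveform inversion itself.

    ```python
    import numpy as np

    def decompose(M):
        """Scalar moment, isotropic part, and DC/CLVD percentages of a 3x3
        moment tensor (assumes a non-zero deviatoric part)."""
        M = 0.5 * (M + M.T)
        m0 = np.sqrt(np.sum(M * M) / 2.0)       # scalar moment
        iso = np.trace(M) / 3.0
        dev = M - iso * np.eye(3)
        lam = np.linalg.eigvalsh(dev)           # deviatoric eigenvalues
        eps = lam[np.argmin(np.abs(lam))] / np.max(np.abs(lam))
        return m0, iso, (1 - 2 * abs(eps)) * 100.0, 2 * abs(eps) * 100.0

    # pure strike-slip double couple: expect 100% DC, 0% CLVD
    print(decompose(np.array([[0.0, 1.0, 0.0],
                              [1.0, 0.0, 0.0],
                              [0.0, 0.0, 0.0]])))
    ```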

  19. AxonPacking: An Open-Source Software to Simulate Arrangements of Axons in White Matter

    PubMed Central

    Mingasson, Tom; Duval, Tanguy; Stikov, Nikola; Cohen-Adad, Julien

    2017-01-01

    HIGHLIGHTS: AxonPacking, an open-source software for simulating white matter microstructure; validation on a theoretical disk packing problem; reproducible and stable for various densities and diameter distributions; can be used to study the interplay between myelin/fiber density and restricted fraction. Quantitative Magnetic Resonance Imaging (MRI) can provide parameters that describe white matter microstructure, such as the fiber volume fraction (FVF), the myelin volume fraction (MVF) or the axon volume fraction (AVF) via the fraction of restricted water (fr). While already being used for clinical applications, the complex interplay between these parameters requires thorough validation via simulations. These simulations require a realistic, controlled and adaptable model of the white matter axons with the surrounding myelin sheath. While useful algorithms already exist to perform this task, none of them combine optimisation of axon packing, presence of the myelin sheath and availability as free and open-source software. Here, we introduce a novel disk packing algorithm that addresses these issues. The performance of the algorithm is tested in terms of reproducibility over 50 runs, resulting density, and stability over iterations. This tool was then used to derive multiple values of FVF and to study the impact of this parameter on fr and MVF in light of the known microstructure based on histology samples. The standard deviation of the axon density over runs was lower than 10−3 and the expected hexagonal packing for monodisperse disks was obtained with a density close to the optimal density (obtained: 0.892, theoretical: 0.907). Using an FVF ranging within [0.58, 0.82] and a mean inter-axon gap ranging within [0.1, 1.1] μm, MVF ranged within [0.32, 0.44] and fr ranged within [0.39, 0.71], which is consistent with the histology. The proposed algorithm is implemented in the open-source software AxonPacking (https://github.com/neuropoly/axonpacking) and can be useful for validating diffusion models as well as for enabling researchers to study the interplay between microstructure parameters when evaluating qMRI methods. PMID:28197091

  20. Moment Tensor Inversion with 3D sensor configuration of Mining Induced Seismicity (Kiruna mine, Sweden)

    NASA Astrophysics Data System (ADS)

    Ma, Ju; Dineva, Savka; Cesca, Simone; Heimann, Sebastian

    2018-03-01

    Mining induced seismicity is an undesired consequence of mining operations, which poses significant hazard to miners and infrastructures and requires an accurate analysis of the rupture process. Seismic moment tensors of mining-induced events help to understand the nature of mining-induced seismicity by providing information about the relationship between the mining, stress redistribution and instabilities in the rock mass. In this work, we adapt and test a waveform-based inversion method on high frequency data recorded by a dense underground seismic system in one of the largest underground mines in the world (Kiruna mine, Sweden). Developing a stable algorithm for moment tensor inversion of comparatively small mining induced earthquakes, resolving both the double couple and full moment tensor with high frequency data, is very challenging. Moreover, the application to an underground mining system requires accounting for the 3D geometry of the monitoring system. We construct a Green's function database using a homogeneous velocity model, but assuming a 3D distribution of potential sources and receivers. We first perform a set of moment tensor inversions using synthetic data to test the effects of different factors on moment tensor inversion stability and source parameter accuracy, including the network spatial coverage, the number of sensors and the signal-to-noise ratio. The influence of the accuracy of the input source parameters on the inversion results is also tested. Those tests show that an accurate selection of the inversion parameters allows resolving the moment tensor also in the presence of realistic seismic noise conditions. Finally, the moment tensor inversion methodology is applied to 8 events chosen from mining block #33/34 at Kiruna mine. Source parameters including scalar moment, magnitude, double couple, compensated linear vector dipole and isotropic contributions, as well as the strike, dip and rake configurations of the double couple term, were obtained. The orientations of the nodal planes of the double-couple component in most cases vary from NNW to NNE with a dip along the ore body or in the opposite direction.

  1. Determination of Earth rotation by the combination of data from different space geodetic systems

    NASA Technical Reports Server (NTRS)

    Archinal, Brent Allen

    1987-01-01

    Formerly, Earth Rotation Parameters (ERP), i.e., polar motion and UT1-UTC values, have been determined using data from only one observational system at a time, or by the combination of parameters previously obtained in such determinations. The question arises as to whether a simultaneous solution using data from several sources would provide an improved determination of such parameters. To pursue this question, fifteen days of observations have been simulated using realistic networks of Lunar Laser Ranging (LLR), Satellite Laser Ranging (SLR) to Lageos, and Very Long Baseline Interferometry (VLBI) stations. A comparison has been made of the accuracy and precision of the ERP obtained from: (1) the individual system solutions, (2) the weighted means of those values, (3) all of the data by means of the combination of the normal equations obtained in (1), and (4) a grand solution with all the data. These simulations show that solutions done by the normal equation combination and grand solution methods provide the best or nearly the best ERP for all the periods considered, but that weighted mean solutions provide nearly the same accuracy and precision. VLBI solutions also provide similar accuracies.

  2. An Innovative Software Tool Suite for Power Plant Model Validation and Parameter Calibration using PMU Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yuanyuan; Diao, Ruisheng; Huang, Renke

    Maintaining good quality of power plant stability models is of critical importance to ensure the secure and economic operation and planning of today's power grid with its increasing stochastic and dynamic behavior. According to North American Electric Reliability (NERC) standards, all generators in North America with capacities larger than 10 MVA are required to validate their models every five years. Validation is quite costly and can significantly affect the revenue of generator owners, because the traditional staged testing requires generators to be taken offline. Over the past few years, validating and calibrating parameters using online measurements, including phasor measurement units (PMUs) and digital fault recorders (DFRs), has been proven to be a cost-effective approach. In this paper, an innovative open-source tool suite is presented for validating power plant models using the PPMV tool, identifying bad parameters with trajectory sensitivity analysis, and finally calibrating parameters using an ensemble Kalman filter (EnKF) based algorithm. The architectural design and the detailed procedures to run the tool suite are presented, with results of tests on a realistic hydro power plant using PMU measurements for 12 different events. The calibrated parameters of the machine, exciter, governor and PSS models demonstrate much better performance than the original models for all the events and show the robustness of the proposed calibration algorithm.
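
    A single EnKF analysis step for parameter calibration is compact to write down; the sketch below uses the standard perturbed-observation update with ensemble covariances. It is a generic illustration, not the tool suite's implementation, and all shapes and names are assumptions.

    ```python
    import numpy as np

    def enkf_update(X, Y, d, obs_var, rng=None):
        """X: (n_par, n_ens) parameter ensemble; Y: (n_obs, n_ens) simulated
        responses; d: (n_obs,) measured response; obs_var: obs error variance."""
        rng = np.random.default_rng(rng)
        n_ens = X.shape[1]
        Xa = X - X.mean(axis=1, keepdims=True)          # parameter anomalies
        Ya = Y - Y.mean(axis=1, keepdims=True)          # response anomalies
        Cxy = Xa @ Ya.T / (n_ens - 1)                   # cross-covariance
        Cyy = Ya @ Ya.T / (n_ens - 1) + obs_var * np.eye(len(d))
        K = Cxy @ np.linalg.inv(Cyy)                    # Kalman gain
        D = d[:, None] + rng.normal(0.0, np.sqrt(obs_var), Y.shape)  # perturbed obs
        return X + K @ (D - Y)                          # updated parameter ensemble
    ```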

  3. Deriving realistic source boundary conditions for a CFD simulation of concentrations in workroom air.

    PubMed

    Feigley, Charles E; Do, Thanh H; Khan, Jamil; Lee, Emily; Schnaufer, Nicholas D; Salzberg, Deborah C

    2011-05-01

    Computational fluid dynamics (CFD) is used increasingly to simulate the distribution of airborne contaminants in enclosed spaces for exposure assessment and control, but the importance of realistic boundary conditions is often not fully appreciated. In a workroom for manufacturing capacitors, full-shift samples for isoamyl acetate (IAA) were collected for 3 days at 16 locations, and velocities were measured at supply grills and at various points near the source. Then, velocity and concentration fields were simulated by 3-dimensional steady-state CFD using 295 000 tetrahedral cells, the k-ε turbulence model, the standard wall function, and convergence criteria of 10^-6 for all scalars. Here, we demonstrate the need to represent boundary conditions accurately, especially the emission characteristics at the contaminant source, in order to obtain good agreement between observations and CFD results. Emission rates for each day were determined from six concentrations measured in the near field and one upwind using an IAA mass balance. The emission was initially represented as undiluted IAA vapor, but the concentrations estimated using CFD differed greatly from the measured concentrations. A second set of simulations was performed using the same IAA emission rates but a more realistic representation of the source. This yielded good agreement with measured values. Paying particular attention to the region with the highest worker exposure potential--within 1.3 m of the source center--the air speed and IAA concentrations estimated by CFD were not significantly different from the measured values (P = 0.92 and P = 0.67, respectively). Thus, careful consideration of source boundary conditions greatly improved agreement with the measured values.

  4. Laser acceleration of electrons to giga-electron-volt energies using highly charged ions.

    PubMed

    Hu, S X; Starace, Anthony F

    2006-06-01

    The recent proposal to use highly charged ions as sources of electrons for laser acceleration [S. X. Hu and A. F. Starace, Phys. Rev. Lett. 88, 245003 (2002)] is investigated here in detail by means of three-dimensional, relativistic Monte Carlo simulations for a variety of system parameters, such as laser pulse duration, ionic charge state, and laser focusing spot size. Realistic laser focusing effects--e.g., the existence of longitudinal laser field components--are taken into account. Results of spatial averaging over the laser focus are also presented. These numerical simulations show that the proposed scheme for laser acceleration of electrons from highly charged ions is feasible with current or near-future experimental conditions and that electrons with GeV energies can be obtained in such experiments.

  5. Computational Difficulties in the Identification and Optimization of Control Systems.

    DTIC Science & Technology

    1980-01-01

    As more realistic models for resource management are developed, the need for efficient computational techniques for parameter ... optimization (optimal control) in "state" models ... This research was supported in part by the National Science Foundation under grant NSF-MCS 79-05774.

  6. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical bayesian approaches.

    PubMed

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detected against background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources, ranging from 3 cm² to 30 cm², whatever the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.

  7. MEG Source Localization of Spatially Extended Generators of Epileptic Activity: Comparing Entropic and Hierarchical Bayesian Approaches

    PubMed Central

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detected against background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources, ranging from 3 cm² to 30 cm², whatever the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered. PMID:23418485

  8. The Electrostatic Instability for Realistic Pair Distributions in Blazar/EBL Cascades

    NASA Astrophysics Data System (ADS)

    Vafin, S.; Rafighi, I.; Pohl, M.; Niemiec, J.

    2018-04-01

    This work revisits the electrostatic instability for blazar-induced pair beams propagating through the intergalactic medium (IGM) using linear analysis and PIC simulations. We study the impact of the realistic distribution function of pairs resulting from the interaction of high-energy gamma-rays with the extragalactic background light. We present analytical and numerical calculations of the linear growth rate of the instability for arbitrary orientation of the wave vectors. Our results explicitly demonstrate that the finite angular spread of the beam dramatically affects the growth rate of the waves, leading to the fastest growth for wave vectors quasi-parallel to the beam direction and a growth rate at oblique directions that is only a factor of 2–4 smaller than the maximum. To study the nonlinear beam relaxation, we performed PIC simulations that take into account a realistic wide-energy distribution of beam particles. The parameters of the simulated beam-plasma system provide an adequate physical picture that can be extrapolated to realistic blazar-induced pairs. In our simulations, the beam loses only 1% of its energy, and we analytically estimate that the beam would lose its total energy over about 100 simulation times. An analytical scaling is then used to extrapolate the parameters of realistic blazar-induced pair beams. We find that they can dissipate their energy slightly faster by the electrostatic instability than through inverse-Compton scattering. The uncertainties arising from, e.g., details of the primary gamma-ray spectrum are too large to make firm statements for individual blazars, and an analysis based on their specific properties is required.

  9. SU-F-I-14: 3D Breast Digital Phantom for XACT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, S; Laaroussi, R; Chen, J

    Purpose: X-ray induced acoustic computed tomography (XACT) is a new imaging modality which combines X-ray contrast and high ultrasonic resolution in a single modality. Using XACT in breast imaging, a 3D breast volume can be imaged with a single pulsed X-ray exposure, which could dramatically reduce the imaging dose for patients undergoing breast cancer screening and diagnosis. A 3D digital phantom that contains both the X-ray properties and the acoustic properties of different tissue types is needed for developing and optimizing the XACT system. The purpose of this study is to offer a realistic breast digital phantom as a valuable tool for improving breast XACT imaging techniques and potentially leading to better diagnostic outcomes. Methods: A series of breast CT images along the coronal plane from a patient who has breast calcifications are used as the source images. An HU-value-based segmentation algorithm is employed to identify breast tissues in five categories, namely skin tissue, fat tissue, glandular tissue, chest bone and calcifications. For each pixel, the dose-related parameters, such as material components and density, and the acoustic-related parameters, such as frequency-dependent acoustic attenuation coefficient and bandwidth, are assigned based on tissue type. Meanwhile, other parameters which are used in sound propagation, including the sound speed, thermal expansion coefficient, and heat capacity, are also assigned to each tissue. Results: A series of 2D tissue-type images is acquired first and the 3D digital breast phantom is obtained by using commercial 3D reconstruction software. Given specific settings, including dose deposition and ultrasound center frequency, the X-ray induced initial pressure rise can be calculated accordingly. Conclusion: The proposed 3D breast digital phantom represents a realistic breast anatomic structure and provides a valuable tool for developing and evaluating the system performance for XACT.
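
    As a sketch of the segmentation step, the thresholds below map Hounsfield units to the five tissue categories named in the abstract and attach per-tissue properties to each voxel. The HU bin edges and the property values are illustrative assumptions, not those used in the study.

      import numpy as np

      # Class order: fat, glandular, skin, chest bone, calcification.
      HU_EDGES = [-30.0, 80.0, 200.0, 600.0]        # hypothetical class boundaries
      # Per-class (density [kg/m^3], sound speed [m/s]); placeholder values.
      PROPS = np.array([[ 950.0, 1450.0],
                        [1020.0, 1510.0],
                        [1100.0, 1610.0],
                        [1900.0, 3200.0],
                        [2200.0, 3800.0]])

      def segment(hu_volume):
          """Label each voxel with a tissue class index (0..4) by thresholding."""
          return np.digitize(hu_volume, HU_EDGES).astype(np.uint8)

      def voxel_properties(labels):
          """Attach (density, sound speed) to every voxel via table lookup."""
          return PROPS[labels]                      # shape (..., 2)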

  10. Modeling the response of small myelinated axons in a compound nerve to kilohertz frequency signals

    NASA Astrophysics Data System (ADS)

    Pelot, N. A.; Behrend, C. E.; Grill, W. M.

    2017-08-01

    Objective. There is growing interest in electrical neuromodulation of peripheral nerves, particularly autonomic nerves, to treat various diseases. Electrical signals in the kilohertz frequency (KHF) range can produce different responses, including conduction block. For example, EnteroMedics’ vBloc® therapy for obesity delivers 5 kHz stimulation to block the abdominal vagus nerves, but the mechanisms of action are unclear. Approach. We developed a two-part computational model, coupling a 3D finite element model of a cuff electrode around the human abdominal vagus nerve with biophysically-realistic electrical circuit equivalent (cable) model axons (1, 2, and 5.7 µm in diameter). We developed an automated algorithm to classify conduction responses as subthreshold (transmission), KHF-evoked activity (excitation), or block. We quantified neural responses across kilohertz frequencies (5-20 kHz), amplitudes (1-8 mA), and electrode designs. Main results. We found heterogeneous conduction responses across the modeled nerve trunk, both for a given parameter set and across parameter sets, although most suprathreshold responses were excitation, rather than block. The firing patterns were irregular near transmission and block boundaries, but otherwise regular, and mean firing rates varied with electrode-fibre distance. Further, we identified excitation responses at amplitudes above block threshold, termed ‘re-excitation’, arising from action potentials initiated at virtual cathodes. Excitation and block thresholds decreased with smaller electrode-fibre distances, larger fibre diameters, and lower kilohertz frequencies. A point source model predicted a larger fraction of blocked fibres and greater change of threshold with distance as compared to the realistic cuff and nerve model. Significance. Our findings of widespread asynchronous KHF-evoked activity suggest that conduction block in the abdominal vagus nerves is unlikely with current clinical parameters. Our results indicate that compound neural or downstream muscle force recordings may be unreliable as quantitative measures of neural activity for in vivo studies or as biomarkers in closed-loop clinical devices.

  11. Modeling the response of small myelinated axons in a compound nerve to kilohertz frequency signals.

    PubMed

    Pelot, N A; Behrend, C E; Grill, W M

    2017-08-01

    There is growing interest in electrical neuromodulation of peripheral nerves, particularly autonomic nerves, to treat various diseases. Electrical signals in the kilohertz frequency (KHF) range can produce different responses, including conduction block. For example, EnteroMedics' vBloc® therapy for obesity delivers 5 kHz stimulation to block the abdominal vagus nerves, but the mechanisms of action are unclear. We developed a two-part computational model, coupling a 3D finite element model of a cuff electrode around the human abdominal vagus nerve with biophysically-realistic electrical circuit equivalent (cable) model axons (1, 2, and 5.7 µm in diameter). We developed an automated algorithm to classify conduction responses as subthreshold (transmission), KHF-evoked activity (excitation), or block. We quantified neural responses across kilohertz frequencies (5-20 kHz), amplitudes (1-8 mA), and electrode designs. We found heterogeneous conduction responses across the modeled nerve trunk, both for a given parameter set and across parameter sets, although most suprathreshold responses were excitation, rather than block. The firing patterns were irregular near transmission and block boundaries, but otherwise regular, and mean firing rates varied with electrode-fibre distance. Further, we identified excitation responses at amplitudes above block threshold, termed 're-excitation', arising from action potentials initiated at virtual cathodes. Excitation and block thresholds decreased with smaller electrode-fibre distances, larger fibre diameters, and lower kilohertz frequencies. A point source model predicted a larger fraction of blocked fibres and greater change of threshold with distance as compared to the realistic cuff and nerve model. Our findings of widespread asynchronous KHF-evoked activity suggest that conduction block in the abdominal vagus nerves is unlikely with current clinical parameters. Our results indicate that compound neural or downstream muscle force recordings may be unreliable as quantitative measures of neural activity for in vivo studies or as biomarkers in closed-loop clinical devices.

  12. Sulfates as chromophores for multiwavelength photoacoustic imaging phantoms

    NASA Astrophysics Data System (ADS)

    Fonseca, Martina; An, Lu; Beard, Paul; Cox, Ben

    2017-12-01

    As multiwavelength photoacoustic imaging becomes increasingly widely used to obtain quantitative estimates, the need for validation studies conducted on well-characterized experimental phantoms becomes ever more pressing. One challenge that such studies face is the design of stable, well-characterized phantoms and absorbers with properties in a physiologically realistic range. This paper performs a full experimental characterization of aqueous solutions of copper and nickel sulfate, whose properties make them close to ideal as chromophores in multiwavelength photoacoustic imaging phantoms. Their absorption varies linearly with concentration, and they mix linearly. The concentrations needed to yield absorption values within the physiological range are below the saturation limit. The shape of their absorption spectra makes them useful analogs for oxy- and deoxyhemoglobin. They display long-term photostability (no indication of bleaching) as well as resistance to transient effects (no saturable absorption phenomena), and are therefore suitable for exposure to typical pulsed photoacoustic light sources, even when exposed to the high number of pulses required in scanning photoacoustic imaging systems. In addition, solutions with tissue-realistic, predictable, and stable scattering can be prepared by mixing sulfates and Intralipid, as long as an appropriate emulsifier is used. Finally, the Grüneisen parameter of the sulfates was found to be larger than that of water and increased linearly with concentration.

  13. Deformation data modeling through numerical models: an efficient method for tracking magma transport

    NASA Astrophysics Data System (ADS)

    Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.

    2017-12-01

    Nowadays, multivariate collected data and robust physical models at volcano observatories are becoming crucial for providing effective volcano monitoring. Nevertheless, the forecast of volcanic eruptions is notoriously difficult. Within this framework, one of the most promising methods for evaluating volcanic hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic media features such as topography and crustal heterogeneities to be included, although it is still very time-consuming to solve the inverse problem for near-real-time interpretations. Here, we present a method that can be efficiently used to estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose an FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually will lead to a better description of the status of the volcanic area. The number of Green functions is reduced here to the number of observational points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a Genetic Algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool to magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up source parameter estimation for a volcano showing signs of unrest.

  14. Analysis of a decision model in the context of equilibrium pricing and order book pricing

    NASA Astrophysics Data System (ADS)

    Wagner, D. C.; Schmitt, T. A.; Schäfer, R.; Guhr, T.; Wolf, D. E.

    2014-12-01

    An agent-based model for financial markets has to incorporate two aspects: decision making and price formation. We introduce a simple decision model and consider its implications in two different pricing schemes. First, we study its parameter dependence within a supply-demand balance setting. We find realistic behavior in a wide parameter range. Second, we embed our decision model in an order book setting. Here, we observe interesting features which are not present in the equilibrium pricing scheme. In particular, we find a nontrivial behavior of the order book volumes which is reminiscent of a trend-switching phenomenon. Thus, the decision-making model alone does not realistically represent the trading and the stylized facts. The order book mechanism is crucial.

  15. Accounting for uncertainty in model-based prevalence estimation: paratuberculosis control in dairy herds.

    PubMed

    Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R

    2012-09-10

    A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
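
    The sampling-and-reweighting scheme described above can be sketched compactly. Here run_model(), the Gaussian weighting kernel, and its width are illustrative assumptions rather than the authors' exact implementation.

      import numpy as np

      def latin_hypercube(n_samples, bounds, rng):
          """Stratified LHS draw; bounds is an (n_par, 2) array of [low, high]."""
          n_par = bounds.shape[0]
          strata = np.tile(np.arange(n_samples), (n_par, 1))
          u = rng.permuted(strata, axis=1).T + rng.random((n_samples, n_par))
          u /= n_samples                            # one sample per stratum, per axis
          return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

      def reweight(samples, run_model, observed_prevalence, sigma=0.05):
          """Weight each parameter set by its ability to reproduce the data."""
          w = np.empty(len(samples))
          for i, theta in enumerate(samples):
              predicted = run_model(theta)          # simulated herd prevalence
              w[i] = np.exp(-0.5 * ((predicted - observed_prevalence) / sigma) ** 2)
          return w / w.sum()                        # normalised weights

    The same weights can then be reused to produce weighted predictions for other scenarios, such as the test-and-cull control options compared in the study.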

  16. Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process

    NASA Astrophysics Data System (ADS)

    Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.

    2016-12-01

    Spatially distributed continuous-simulation hydrologic models have a large number of parameters available for adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness, measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, or RMSE, can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter-set solutions. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions that evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but with parameter sets that maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) method within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being calibrated simultaneously. As a result, high degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.
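
    One way to realise this is to expose the expert knowledge as an explicit objective alongside the skill score, so the evolutionary search trades fit against plausibility. The penalty form and the preferred ranges below are illustrative assumptions, not the authors' formulation.

      import numpy as np

      def nash_sutcliffe(sim, obs):
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def expert_penalty(params, preferred):
          """preferred: dict name -> (low, high) expert-plausible range."""
          penalty = 0.0
          for name, (lo, hi) in preferred.items():
              x = params[name]
              if x < lo:
                  penalty += (lo - x) / (hi - lo)   # scaled distance below range
              elif x > hi:
                  penalty += (x - hi) / (hi - lo)   # scaled distance above range
          return penalty

      def objectives(params, sim, obs, preferred):
          """Two objectives to minimise: misfit and inconsistency with expertise."""
          return 1.0 - nash_sutcliffe(sim, obs), expert_penalty(params, preferred)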

  17. Realistic Library Research Methods: Bibliographic Sources Annotated.

    ERIC Educational Resources Information Center

    Kushon, Susan G.; Wells, Bernice

    This guide gives an overview of basic library research methods with emphasis upon developing an understanding of library organization and professional services. Commonly used bibliographic techniques are described for various published and unpublished, print and nonprint materials. Standard reference sources (bibliographies, encyclopedias, annual…

  18. Phase 3 experiments of the JAERI/USDOE collaborative program on fusion blanket neutronics. Volume 1: Experiment

    NASA Astrophysics Data System (ADS)

    Oyama, Yukio; Konno, Chikara; Ikeda, Yujiro; Maekawa, Fujio; Kosako, Kazuaki; Nakamura, Tomoo; Maekawa, Hiroshi; Youssef, Mahmoud Z.; Kumar, Anil; Abdou, Mohamed A.

    1994-02-01

    A pseudo-line source has been realized by using an accelerator-based D-T point neutron source. The pseudo-line source is obtained by time averaging of a continuously moving point source or by superposition of finely distributed point sources. The line source is utilized for fusion blanket neutronics experiments with an annular geometry so as to simulate a part of a tokamak reactor. The source neutron characteristics were measured for the two operational modes of the line source, continuous and step-wise, with activation foils and NE213 detectors, respectively. In order to provide a source condition for the subsequent calculational analysis of the annular blanket experiment, the neutron source characteristics were calculated with a Monte Carlo code. The reliability of the Monte Carlo calculation was confirmed by comparison with the measured source characteristics. The annular blanket system was rectangular in shape with an inner cavity. The annular blanket consisted of a 15-mm-thick first wall (SS304) and a 406-mm-thick breeder zone, with Li2O on the inside and Li2CO3 on the outside. The line source was produced at the center of the inner cavity by moving the annular blanket system over a span of 2 m. Three annular blanket configurations were examined: the reference blanket, the blanket covered with 25-mm-thick graphite armor, and the armored blanket with a large opening. The neutronics parameters of tritium production rate, neutron spectrum and activation reaction rate were measured with specially developed techniques such as a multi-detector data acquisition system, a spectrum weighting function method, and a ramp-controlled high-voltage system. The present experiment provides unique data for a more demanding benchmark to test the reliability of neutronics design calculations for a realistic tokamak reactor.

  19. Comparison between two photovoltaic module models based on transistors

    NASA Astrophysics Data System (ADS)

    Saint-Eve, Frédéric; Sawicki, Jean-Paul; Petit, Pierre; Maufay, Fabrice; Aillerie, Michel

    2018-05-01

    The main objective of this paper is to verify whether the behavior of an un-shaded photovoltaic (PV) module can be simulated by a simple electronic circuit with very few components. In particular, two models based on well-tried elementary structures are analyzed: the Darlington structure in the first model and voltage regulation with a programmable Zener diode in the second. Specifications extracted from the behavior of a real I-V characteristic of a panel are considered and the principal electrical variables are deduced. The two models are expected to match the open-circuit voltage, maximum power point (MPP) and short-circuit current, while also reproducing realistic current slopes on both sides of the MPP. Robustness under varying irradiance is considered an additional fundamental property. For both models, two simulations are done to identify the influence of some parameters. In the first model, a parameter that adjusts the current slope on the left side of the MPP also proves important for the calculation of the open-circuit voltage. Moreover, this model does not allow a complete adjustment of the I-V characteristic, and the MPP moves significantly away from the real value when irradiance increases. In contrast, the second model appears to have only advantages: the open-circuit voltage is easy to calculate, the current slopes are realistic, and robustness appears good when irradiance variations are simulated by adjusting the short-circuit current of the PV module. We show that these two simplified models should enable reliable and easier simulation of complex PV architectures integrating many different devices, such as PV modules, other renewable energy sources, and storage capacities coupled in parallel.

  20. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
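
    The reweighted minimum-norm recursion at the core of the algorithm can be sketched in a few lines. The lead-field matrix L, the data vector b, and the initial estimate s0 (e.g. from sLORETA) are assumed inputs; the standardization and source-space shrinking steps described above are omitted for brevity.

      import numpy as np

      def focuss(L, b, s0, n_iter=10, eps=1e-12):
          """FOCUSS-style iteratively reweighted minimum-norm estimate."""
          s = s0.copy()
          for _ in range(n_iter):
              W = np.diag(np.abs(s) + eps)          # weights from previous estimate
              # Weighted minimum-norm solution: s = W (L W)^+ b
              s = W @ np.linalg.pinv(L @ W) @ b
          return s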

  1. Experimental and modeling uncertainties in the validation of lower hybrid current drive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poli, F. M.; Bonoli, P. T.; Chilenski, M.

    Our work discusses sources of uncertainty in the validation of lower hybrid wave current drive simulations against experiments, by evolving self-consistently the magnetic equilibrium and the heating and current drive profiles, calculated with a combined toroidal ray tracing code and 3D Fokker–Planck solver. The simulations indicate a complex interplay of elements, where uncertainties in the input plasma parameters, in the models and in the transport solver combine and compensate each other, at times. It is concluded that ray-tracing calculations should include a realistic representation of the density and temperature in the region between the confined plasma and the wall, which is especially important in regimes where the LH waves are weakly damped and undergo multiple reflections from the plasma boundary. Uncertainties introduced in the processing of diagnostic data as well as uncertainties introduced by model approximations are assessed. We show that, by comparing the evolution of the plasma parameters in self-consistent simulations with available data, inconsistencies can be identified and limitations in the models or in the experimental data assessed.

  2. Spectroscopy Made Easy: Evolution

    NASA Astrophysics Data System (ADS)

    Piskunov, Nikolai; Valenti, Jeff A.

    2017-01-01

    Context. The Spectroscopy Made Easy (SME) package has become a popular tool for analyzing stellar spectra, often in connection with large surveys or exoplanet research. SME has evolved significantly since it was first described in 1996, but many of the original caveats and potholes still haunt users. The main drivers for this paper are complexity of the modeling task, the large user community, and the massive effort that has gone into SME. Aims: We do not intend to give a comprehensive introduction to stellar atmospheres, but will describe changes to key components of SME: the equation of state, opacities, and radiative transfer. We will describe the analysis and fitting procedure and investigate various error sources that affect inferred parameters. Methods: We review the current status of SME, emphasizing new algorithms and methods. We describe some best practices for using the package, based on lessons learned over two decades of SME usage. We present a new way to assess uncertainties in derived stellar parameters. Results: Improvements made to SME, better line data, and new model atmospheres yield more realistic stellar spectra, but in many cases systematic errors still dominate over measurement uncertainty. Future enhancements are outlined.

  3. Experimental and modeling uncertainties in the validation of lower hybrid current drive

    DOE PAGES

    Poli, F. M.; Bonoli, P. T.; Chilenski, M.; ...

    2016-07-28

    Our work discusses sources of uncertainty in the validation of lower hybrid wave current drive simulations against experiments, by evolving self-consistently the magnetic equilibrium and the heating and current drive profiles, calculated with a combined toroidal ray tracing code and 3D Fokker–Planck solver. The simulations indicate a complex interplay of elements, where uncertainties in the input plasma parameters, in the models and in the transport solver combine and compensate each other, at times. It is concluded that ray-tracing calculations should include a realistic representation of the density and temperature in the region between the confined plasma and the wall, which is especially important in regimes where the LH waves are weakly damped and undergo multiple reflections from the plasma boundary. Uncertainties introduced in the processing of diagnostic data as well as uncertainties introduced by model approximations are assessed. We show that, by comparing the evolution of the plasma parameters in self-consistent simulations with available data, inconsistencies can be identified and limitations in the models or in the experimental data assessed.

  4. Isotropic source terms of San Jacinto fault zone earthquakes based on waveform inversions with a generalized CAP method

    NASA Astrophysics Data System (ADS)

    Ross, Z. E.; Ben-Zion, Y.; Zhu, L.

    2015-02-01

    We analyse source tensor properties of seven Mw > 4.2 earthquakes in the complex trifurcation area of the San Jacinto Fault Zone, CA, with a focus on isotropic radiation that may be produced by rock damage in the source volumes. The earthquake mechanisms are derived with generalized `Cut and Paste' (gCAP) inversions of three-component waveforms typically recorded by >70 stations at regional distances. The gCAP method includes parameters ζ and χ representing, respectively, the relative strength of the isotropic and CLVD source terms. The possible errors in the isotropic and CLVD components due to station variability are quantified with bootstrap resampling for each event. The results indicate statistically significant explosive isotropic components for at least six of the events, corresponding to ~0.4-8 per cent of the total potency/moment of the sources. In contrast, the CLVD components for most events are not found to be statistically significant. Trade-off and correlation between the isotropic and CLVD components are studied using synthetic tests with realistic station configurations. The associated uncertainties are found to be generally smaller than the observed isotropic components. Two different tests with velocity model perturbation are conducted to quantify the uncertainty due to inaccuracies in the Green's functions. Applications of the Mann-Whitney U test indicate statistically significant explosive isotropic terms for most events, consistent with brittle damage production at the source.

  5. Period and amplitude of non-volcanic tremors and repeaters: a dimensional analysis

    NASA Astrophysics Data System (ADS)

    Nielsen, Stefan

    2017-04-01

    Since its relatively recent discovery, the origin of non-volcanic tremor has been a source of great curiosity and debate. Two main interpretations have been proposed, one based on fluid migration, the other relating to slow slip events on a plate boundary (the latter hypothesis has recently gained considerable ground). Here I define the conditions of slip of one or more small asperities embedded within a larger creeping fault patch. The radiation-damping equation coupled with rate-and-state friction evolution equations results in a system of ordinary differential equations. For a finite-size asperity, the system equates to a peculiar non-linear damped oscillator, converging to a limit cycle. Dimensional analysis shows that the period and amplitude of the oscillations depend on dimensional parameter combinations formed from a limited set of parameters: asperity dimension Γ, rate-and-state friction parameters (a, b, L), shear stiffness of the medium G, mass density ρ, background creep rate V̇ and normal stress σ. Under realistic parameter ranges, the asperity may show (1) tremor-like short-period oscillations, accelerating to radiate sufficient energy to be barely detectable, with a periodicity of the order of one to ten hertz, as observed for non-volcanic tremor activity at the base of large inter-plate faults; or (2) isolated stick-slip events with intervals of the order of days to months, as observed in repeater events of modest magnitude within creeping fault sections.
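
    A quasi-dynamic spring-slider version of this system can be written down directly. The sketch below integrates rate-and-state friction with the ageing state law and a radiation-damping term; all parameter values are chosen for illustration only.

      import numpy as np
      from scipy.integrate import solve_ivp

      a, b, L = 0.010, 0.015, 1e-4    # rate-and-state parameters (a < b: unstable)
      sigma, Vpl = 50e6, 1e-9         # normal stress [Pa], background creep rate [m/s]
      G, cs = 30e9, 3000.0            # shear modulus [Pa], shear wave speed [m/s]
      k = 1.5e9                       # spring stiffness ~ G / asperity size [Pa/m]
      eta = G / (2.0 * cs)            # radiation-damping coefficient [Pa s/m]

      def rhs(t, y):
          V, theta = np.exp(y[0]), y[1]             # integrate ln(V) for stability
          dtheta = 1.0 - V * theta / L              # ageing (Dieterich) state law
          # Force balance: k*(Vpl-V) = eta*dV/dt + sigma*(a/V*dV/dt + b/theta*dtheta/dt)
          dV = (k * (Vpl - V) - sigma * b * dtheta / theta) / (sigma * a / V + eta)
          return [dV / V, dtheta]

      sol = solve_ivp(rhs, [0.0, 3e8], [np.log(Vpl), L / Vpl],
                      method="LSODA", rtol=1e-8, atol=1e-10)
      # With k below the critical stiffness sigma*(b-a)/L, the solution converges
      # to the limit-cycle (stick-slip) oscillations discussed above.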

  6. Cosmological signatures of ultralight dark matter with an axionlike potential

    NASA Astrophysics Data System (ADS)

    Cedeño, Francisco X. Linares; González-Morales, Alma X.; Ureña-López, L. Arturo

    2017-09-01

    Nonlinearities in a realistic axion field potential may play an important role in the cosmological dynamics. In this paper we use the Boltzmann code class to solve the background and linear perturbation evolution of an axion field and contrast our results with those of CDM and the free axion case. We conclude that there is a slight delay in the onset of the axion field oscillations when nonlinearities in the axion potential are taken into account. Besides, we identify a tachyonic instability of linear modes resulting in the presence of a bump in the power spectrum at small scales. We also comment on the true source of the tachyonic instability, on how the parameters of the axionlike potential can be constrained by Ly-α observations, and on the consequences for the stability of self-gravitating objects made of axions.

  7. Systematic cavity design approach for a multi-frequency gyrotron for DEMO and study of its RF behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalaria, P. C., E-mail: parth.kalaria@partner.kit.edu; Avramidis, K. A.; Franck, J.

    High frequency (>230 GHz) megawatt-class gyrotrons are planned as RF sources for electron cyclotron resonance heating and current drive in DEMOnstration fusion power plants (DEMOs). In this paper, for the first time, a feasibility study of a 236 GHz DEMO gyrotron is presented by considering all relevant design goals and the possible technical limitations. A mode-selection procedure is proposed in order to satisfy the multi-frequency and frequency-step tunability requirements. An effective systematic design approach for the optimal design of a gradually tapered cavity is presented. The RF behavior of the proposed cavity is verified rigorously, supporting 920 kW of stable output power with an interaction efficiency of 36%, including the considerations of realistic beam parameters.

  8. Laser acceleration of electrons to giga-electron-volt energies using highly charged ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, S. X.; Starace, Anthony F.

    2006-06-15

    The recent proposal to use highly charged ions as sources of electrons for laser acceleration [S. X. Hu and A. F. Starace, Phys. Rev. Lett. 88, 245003 (2002)] is investigated here in detail by means of three-dimensional, relativistic Monte Carlo simulations for a variety of system parameters, such as laser pulse duration, ionic charge state, and laser focusing spot size. Realistic laser focusing effects (e.g., the existence of longitudinal laser field components) are taken into account. Results of spatial averaging over the laser focus are also presented. These numerical simulations show that the proposed scheme for laser acceleration of electrons from highly charged ions is feasible with current or near-future experimental conditions and that electrons with GeV energies can be obtained in such experiments.

  9. Fully probabilistic earthquake source inversion on teleseismic scales

    NASA Astrophysics Data System (ADS)

    Stähler, Simon; Sigloch, Karin

    2017-04-01

    Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.
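
    The empirical-orthogonal-function parameterisation of the source time function amounts to a principal component analysis of an STF catalogue. In the sketch below, stf_catalogue is an assumed (n_events x n_samples) array of normalised STFs standing in for the >1000-member catalogue used by the authors.

      import numpy as np

      def stf_basis(stf_catalogue, n_components=5):
          """Leading principal components (empirical orthogonal functions)."""
          mean_stf = stf_catalogue.mean(axis=0)
          X = stf_catalogue - mean_stf                        # centre the catalogue
          _, _, Vt = np.linalg.svd(X, full_matrices=False)    # PCA via SVD
          return mean_stf, Vt[:n_components]                  # (n_comp, n_samples)

      def reconstruct_stf(weights, mean_stf, basis):
          """An STF expressed as a weighted sum of a few basis functions."""
          return mean_stf + weights @ basis

    The inversion then samples the low-dimensional weight vector rather than the full waveform, which is what makes the fully Bayesian treatment affordable.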

  10. Evaluation of the local dose enhancement in the combination of proton therapy and nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez-Rovira, I., E-mail: immamartinez@gmail.com; Prezado, Y.

    Purpose: The outcome of radiotherapy can be further improved by combining irradiation with dose enhancers such as high-Z nanoparticles. Since 2004, spectacular results have been obtained when low-energy x-ray irradiations have been combined with nanoparticles. Recently, the same combination has been explored in hadron therapy. In vitro studies have shown a significant amplification of the biological damage in tumor cells charged with nanoparticles and irradiated with fast ions. This has been attributed to the increase in the ionizations and electron emissions induced by the incident ions or the electrons in the secondary tracks on the high-Z atoms, resulting in a local energy deposition enhancement. However, this subject is still a matter of controversy. Within this context, the main goal of the authors’ work was to provide new insights into the dose enhancement effects of nanoparticles in proton therapy. Methods: For this purpose, Monte Carlo calculations (GATE/GEANT4 code) were performed. In particular, the GEANT4-DNA toolkit, which allows the modeling of early biological damages induced by ionizing radiation at the DNA scale, was used. The nanometric radial energy distributions around the nanoparticle were studied, and the processes (such as Auger deexcitation or dissociative electron attachment) participating in the dose deposition of proton therapy treatments in the presence of nanoparticles were evaluated. It has been reported that the architecture of Monte Carlo calculations plays a crucial role in the assessment of nanoparticle dose enhancement and that it may introduce a bias in the results or amplify the possible final dose enhancement. Thus, a dosimetric study of different cases was performed, considering Au and Gd nanoparticles, several nanoparticle sizes (from 4 to 50 nm), and several beam configurations (source-nanoparticle distances and source sizes). Results: This Monte Carlo study shows the influence of the simulation parameters on the local dose enhancement and how more realistic configurations lead to a negligible increase of local energy deposition. The obtained dose enhancement factor was up to 1.7 when the source was located at the nanoparticle surface. This dose enhancement was reduced when the source was located at further distances (i.e., in more realistic situations). Additionally, no significant increase in the dissociative electron attachment processes was observed. Conclusions: The authors’ results indicate that physical effects play a minor role in the amplification of damage, as a very low dose enhancement or increase of dissociative electron attachment processes is observed when the authors get closer to more realistic simulations. Thus, other effects, such as biological or chemical processes, may be mainly responsible for the enhanced radiosensitization observed in biological studies. However, more biological studies are needed to verify this hypothesis.

  11. Dark matter vs. astrophysics in the interpretation of AMS-02 electron and positron data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mauro, Mattia Di; Donato, Fiorenza; Fornengo, Nicolao

    We perform a detailed quantitative analysis of the recent AMS-02 electron and positron data. We investigate the interplay between the emission from primary astrophysical sources, namely Supernova Remnants and Pulsar Wind Nebulae, and the contribution from a dark matter annihilation or decay signal. Our aim is to assess the information that can be derived on dark matter properties when both dark matter and primary astrophysical sources are assumed to jointly contribute to the leptonic observables measured by the AMS-02 experiment. We investigate both the possibility to set robust constraints on the dark matter annihilation/decay rate and the possibility to look for dark matter signals within realistic models that take into account the full complexity of the astrophysical background. Our results show that the AMS-02 data make it possible to probe efficiently vast regions of the dark matter parameter space and, in some cases, to set constraints on the dark matter annihilation/decay rate that are comparable to or even stronger than the ones derived from other indirect detection channels.

  12. Optimizing a synchrotron based x-ray lithography system for IC manufacturing

    NASA Astrophysics Data System (ADS)

    Kovacs, Stephen; Speiser, Kenneth; Thaw, Winston; Heese, Richard N.

    1990-05-01

    The electron storage ring is a realistic solution as a radiation source for production-grade, industrial X-ray lithography systems. Today several large-scale plans are in motion to design and implement synchrotron storage rings of different types for this purpose in the USA and abroad. Most of the scientific and technological problems related to the physics, design and manufacturing engineering, and commissioning of these systems for microlithography have been resolved or are under extensive study. However, investigation of issues connected to the application of Synchrotron Orbit Radiation (SOR) in a chip production environment has been somewhat neglected. In this paper we fill this gap by pointing out the direct effects of some basic synchrotron design parameters and associated subsystems (injector, X-ray beam line) on the operation and cost of lithography in production. The following factors were considered: synchrotron configuration, injection energy, beam intensity variability, number of beam lines, and wafer exposure concept. A cost model has been worked out and applied to three different X-ray Lithography Source (XLS) systems. The results of these applications are compared and conclusions drawn.

  13. PyCoTools: A Python Toolbox for COPASI.

    PubMed

    Welsh, Ciaran M; Fullard, Nicola; Proctor, Carole J; Martinez-Guimera, Alvaro; Isfort, Robert J; Bascom, Charles C; Tasseff, Ryan; Przyborski, Stefan A; Shanley, Daryl P

    2018-05-22

    COPASI is an open source software package for constructing, simulating and analysing dynamic models of biochemical networks. COPASI is primarily intended to be used with a graphical user interface, but often it is desirable to access COPASI features programmatically through a high-level interface. PyCoTools is a Python package aimed at providing a high-level interface to COPASI tasks with an emphasis on model calibration. PyCoTools enables the construction of COPASI models and the execution of a subset of COPASI tasks including time courses, parameter scans and parameter estimations. Additional 'composite' tasks which use COPASI tasks as building blocks are available for increasing parameter estimation throughput, performing identifiability analysis and performing model selection. PyCoTools supports exploratory data analysis on parameter estimation data to assist with troubleshooting model calibrations. We demonstrate PyCoTools by posing a model selection problem designed to showcase PyCoTools within a realistic scenario. The aim of the model selection problem is to test the feasibility of three alternative hypotheses in explaining experimental data derived from neonatal dermal fibroblasts in response to TGF-β over time. PyCoTools is used to critically analyse the parameter estimations and propose strategies for model improvement. PyCoTools can be downloaded from the Python Package Index (PyPI) using the command 'pip install pycotools' or directly from GitHub (https://github.com/CiaranWelsh/pycotools). Documentation at http://pycotools.readthedocs.io. Supplementary data are available at Bioinformatics.

  14. Improving the Fitness of High-Dimensional Biomechanical Models via Data-Driven Stochastic Exploration

    PubMed Central

    Bustamante, Carlos D.; Valero-Cuevas, Francisco J.

    2010-01-01

    The field of complex biomechanical modeling has begun to rely on Monte Carlo techniques to investigate the effects of parameter variability and measurement uncertainty on model outputs, search for optimal parameter combinations, and define model limitations. However, advanced stochastic methods to perform data-driven explorations, such as Markov chain Monte Carlo (MCMC), become necessary as the number of model parameters increases. Here, we demonstrate the feasibility of, and what is to our knowledge the first use of, an MCMC approach to improve the fitness of realistically large biomechanical models. We used a Metropolis–Hastings algorithm to search increasingly complex parameter landscapes (3, 8, 24, and 36 dimensions) to uncover underlying distributions of anatomical parameters of a “truth model” of the human thumb on the basis of simulated kinematic data (thumbnail location, orientation, and linear and angular velocities) polluted by zero-mean, uncorrelated multivariate Gaussian “measurement noise.” Driven by these data, ten Markov chains searched each model parameter space for the subspace that best fit the data (posterior distribution). As expected, the convergence time increased, more local minima were found, and marginal distributions broadened as the parameter space complexity increased. In the 36-D scenario, some chains found local minima but the majority of chains converged to the true posterior distribution (confirmed using a cross-validation dataset), thus demonstrating the feasibility and utility of these methods for realistically large biomechanical problems. PMID:19272906
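
    A minimal random-walk Metropolis-Hastings sampler of the kind used here fits in a few lines. The simulate_kinematics() callable, the Gaussian noise model, and the flat prior are assumptions standing in for the study's specific forward model.

      import numpy as np

      def metropolis_hastings(log_post, theta0, n_steps, step, rng):
          """Random-walk MH over a parameter vector theta."""
          theta, lp = theta0.copy(), log_post(theta0)
          chain = np.empty((n_steps, theta0.size))
          for i in range(n_steps):
              proposal = theta + step * rng.standard_normal(theta0.size)
              lp_new = log_post(proposal)
              if np.log(rng.random()) < lp_new - lp:    # accept w.p. min(1, ratio)
                  theta, lp = proposal, lp_new
              chain[i] = theta
          return chain

      def make_log_post(simulate_kinematics, data, noise_std):
          def log_post(theta):
              residual = simulate_kinematics(theta) - data
              return -0.5 * np.sum((residual / noise_std) ** 2)  # flat prior
          return log_post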

  15. Technical Note: Artificial coral reef mesocosms for ocean acidification investigations

    NASA Astrophysics Data System (ADS)

    Leblud, J.; Moulin, L.; Batigny, A.; Dubois, P.; Grosjean, P.

    2014-11-01

    The design and evaluation of replicated artificial mesocosms are presented in the context of a thirteen-month experiment on the effects of ocean acidification on tropical coral reefs. They are defined here as (semi-)closed (i.e. with or without water exchange with the reef) laboratory mesocosms with a more realistic physico-chemical environment than microcosms. Important physico-chemical parameters (i.e. pH, pO2, pCO2, total alkalinity, temperature, salinity, total alkaline earth metals and nutrient availability) were successfully monitored and controlled. Daily variations of irradiance and pH were applied to approach field conditions. Results highlighted that it was possible to maintain realistic physico-chemical parameters, including daily changes, in artificial mesocosms. On the other hand, the two identical artificial mesocosms evolved differently in terms of global community oxygen budgets, although the initial biological communities and physico-chemical parameters were comparable. Artificial reef mesocosms seem to leave enough degrees of freedom to the enclosed community of living organisms to organize and change along possibly diverging pathways.

  16. A baroclinic quasigeostrophic open ocean model

    NASA Technical Reports Server (NTRS)

    Miller, R. N.; Robinson, A. R.; Haidvogel, D. B.

    1983-01-01

    A baroclinic quasigeostrophic open ocean model is presented, calibrated by a series of test problems, and demonstrated to be feasible and efficient for application to realistic mid-oceanic mesoscale eddy flow regimes. Two methods of treating the depth dependence of the flow, a finite difference method and a collocation method, are tested and intercompared. Sample Rossby wave calculations with and without advection are performed with constant stratification and two levels of nonlinearity, one weaker than and one typical of real ocean flows. Using exact analytical solutions for comparison, the accuracy and efficiency of the model is tabulated as a function of the computational parameters and stability limits set; typically, errors were controlled between 1 percent and 10 percent RMS after two wave periods. Further Rossby wave tests with realistic stratification and wave parameters chosen to mimic real ocean conditions were performed to determine computational parameters for use with real and simulated data. Finally, a prototype calculation with quasiturbulent simulated data was performed successfully, which demonstrates the practicality of the model for scientific use.

  17. Effect of Anatomically Realistic Full-Head Model on Activation of Cortical Neurons in Subdural Cortical Stimulation—A Computational Study

    NASA Astrophysics Data System (ADS)

    Seo, Hyeon; Kim, Donghyeon; Jun, Sung Chan

    2016-06-01

    Electrical brain stimulation (EBS) is an emerging therapy for the treatment of neurological disorders, and computational modeling studies of EBS have been used to determine the optimal parameters for highly cost-effective electrotherapy. Recent notable growth in computing capability has enabled researchers to consider an anatomically realistic head model that represents the full head and complex geometry of the brain rather than the previous simplified partial head model (extruded slab) that represents only the precentral gyrus. In this work, subdural cortical stimulation (SuCS) was found to offer a better understanding of the differential activation of cortical neurons in the anatomically realistic full-head model than in the simplified partial-head models. We observed that layer 3 pyramidal neurons had comparable stimulation thresholds in both head models, while layer 5 pyramidal neurons showed a notable discrepancy between the models; in particular, layer 5 pyramidal neurons demonstrated asymmetry in the thresholds and action potential initiation sites in the anatomically realistic full-head model. Overall, the anatomically realistic full-head model may offer a better understanding of layer 5 pyramidal neuronal responses. Accordingly, the effects of using the realistic full-head model in SuCS are compelling in computational modeling studies, even though this modeling requires substantially more effort.

  18. An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates

    NASA Astrophysics Data System (ADS)

    Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin

    2014-03-01

    The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters and to characterise the speed and performance of the program.

  19. Acoustic propagation and atmosphere characteristics derived from infrasonic waves generated by the Concorde.

    PubMed

    Le Pichon, Alexis; Garcés, Milton; Blanc, Elisabeth; Barthélémy, Maud; Drob, Doug P

    2002-01-01

    Infrasonic signals generated by daily supersonic Concorde flights between North America and Europe have been consistently recorded by an array of microbarographs in France. These signals are used to investigate the effects of atmospheric variability on long-range sound propagation. Statistical analysis of wave parameters shows seasonal and daily variations associated with changes in the wind structure of the atmosphere. The measurements are compared to the predictions obtained by tracing rays through realistic atmospheric models. Theoretical ray paths allow a consistent interpretation of the observed wave parameters. Variations in the reflection level, travel time, azimuth deviation and propagation range are explained by the source and propagation models. The angular deviation of a ray's azimuth direction, due to the seasonal and diurnal fluctuations of the transverse wind component, is found to be approximately 5 degrees from the initial launch direction. One application of the seasonal and diurnal variations of the observed phase parameters is the use of ground measurements to estimate fluctuations in the wind velocity at the reflection heights. The simulations point out that care must be taken when ascribing a phase velocity to a turning height. Ray path simulations which allow the correct computation of reflection heights are essential for accurate phase identifications.
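
    For intuition on the ~5 degree figure, the one-liner below gives the first-order azimuth deviation of a ray crossing a transverse wind. The 30 m/s wind speed is an assumed, representative value, and this simple estimate ignores the height dependence that the paper's full ray tracing accounts for.

    ```python
    import numpy as np

    def azimuth_deviation_deg(w_transverse, c_eff=340.0):
        # First-order deviation of the ray azimuth caused by a transverse
        # wind component (m/s) relative to the effective sound speed.
        return np.degrees(np.arctan2(w_transverse, c_eff))

    # An assumed ~30 m/s cross-wind reproduces the ~5 degree deviation
    # reported for the Concorde signals.
    print(f"{azimuth_deviation_deg(30.0):.1f} deg")
    ```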

  20. All Sky Cloud Coverage Monitoring for SONG-China Project

    NASA Astrophysics Data System (ADS)

    Tian, J. F.; Deng, L. C.; Yan, Z. Z.; Wang, K.; Wu, Y.

    2016-05-01

    In order to monitor cloud distributions at Qinghai station, a site selected for the SONG (Stellar Observations Network Group)-China node, the design of the prototype all sky camera (ASC) used at Xinglong station was adopted. Both hardware and software improvements have been made to increase precision and deliver quantitative measurements. An ARM (Advanced Reduced Instruction Set Computer Machine) MCU (Microcontroller Unit) is used instead of a PC to control the upgraded version of the ASC, and a much higher reliability has been realized in the current scheme. Because weather conditions change constantly, independent of the positions of the Sun and Moon, it is difficult to derive proper exposure parameters using only the temporal information of the major light sources. Realistic exposure parameters for the ASC can instead be defined using a real-time sky brightness monitor installed at the same site. The night sky brightness is a very sensitive function of the cloud coverage, and can be accurately measured by the sky quality monitor. We study the correlation between the exposure parameter and the night sky brightness, and give the mathematical relation. The images of the all sky camera are inserted directly into a database. All-sky images are archived in FITS format and can be used for further analysis.
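
    The exposure-versus-brightness relation described above can be sketched as a log-linear fit: exposure time scales inversely with sky flux, so log10 of the exposure should be linear in the sky brightness in magnitudes with a slope near 0.4. The calibration pairs below are invented for illustration, not measured values from the site.

    ```python
    import numpy as np

    # Hypothetical calibration pairs: SQM sky brightness (mag/arcsec^2)
    # versus exposure time (s) that gave a well-exposed all-sky frame.
    sky_mag = np.array([17.0, 18.0, 19.0, 20.0, 21.0])
    exposure = np.array([0.8, 2.0, 5.1, 12.7, 32.0])

    # Fit log10(t) as a linear function of magnitude (expected slope ~0.4).
    slope, intercept = np.polyfit(sky_mag, np.log10(exposure), 1)

    def auto_exposure(mag):
        # Exposure time (s) predicted from the sky quality monitor reading.
        return 10.0 ** (slope * mag + intercept)

    print(f"slope = {slope:.2f} (theory: 0.4), "
          f"t(19.5 mag) = {auto_exposure(19.5):.1f} s")
    ```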

  1. Modeling the direction-continuous time-of-arrival in head-related transfer functions

    PubMed Central

    Ziegelwanger, Harald; Majdak, Piotr

    2015-01-01

    Head-related transfer functions (HRTFs) describe the filtering of the incoming sound by the torso, head, and pinna. As a consequence of the propagation path from the source to the ear, each HRTF contains a direction-dependent, broadband time-of-arrival (TOA). TOAs are usually estimated independently for each direction from HRTFs, a method prone to artifacts and limited by the spatial sampling. In this study, a continuous-direction TOA model combined with an outlier-removal algorithm is proposed. The model is based on a simplified geometric representation of the listener, and his/her arbitrary position within the HRTF measurement. The outlier-removal procedure uses the extreme studentized deviation test to remove implausible TOAs. The model was evaluated for numerically calculated HRTFs of sphere, torso, and pinna under various conditions. The accuracy of estimated parameters was within the resolution given by the sampling rate. Applied to acoustically measured HRTFs of 172 listeners, the estimated parameters were consistent with realistic listener geometry. The outlier removal further improved the goodness-of-fit, particularly for some problematic fits. The comparison with a simpler model that fixed the listener position to the center of the measurement geometry showed a clear advantage of listener position as an additional free model parameter. PMID:24606268
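
    The outlier-removal step is based on the (generalized) extreme studentized deviate test; a minimal sketch of that test, applied to TOA residuals, is given below. The significance level and maximum outlier count are assumptions, and the real procedure operates on the residuals of the continuous-direction model fit rather than on raw values.

    ```python
    import numpy as np
    from scipy import stats

    def esd_outliers(x, max_outliers=10, alpha=0.05):
        # Generalized ESD test (assumes approximately normal residuals).
        # Returns indices of detected outliers.
        x = np.asarray(x, dtype=float)
        work, widx = x.copy(), np.arange(x.size)
        n = x.size
        R_stats, crit, removed = [], [], []
        for i in range(1, max_outliers + 1):
            m, s = work.mean(), work.std(ddof=1)
            j = int(np.argmax(np.abs(work - m)))
            R_stats.append(abs(work[j] - m) / s)
            p = 1.0 - alpha / (2.0 * (n - i + 1))
            t = stats.t.ppf(p, n - i - 1)
            crit.append((n - i) * t
                        / np.sqrt((n - i - 1 + t**2) * (n - i + 1)))
            removed.append(int(widx[j]))
            work, widx = np.delete(work, j), np.delete(widx, j)
        # The largest i with R_i > lambda_i fixes the outlier count.
        n_out = 0
        for i in range(max_outliers):
            if R_stats[i] > crit[i]:
                n_out = i + 1
        return removed[:n_out]

    # Hypothetical TOA residuals (ms) with two implausible estimates.
    rng = np.random.default_rng(3)
    resid = rng.normal(0.0, 0.02, 200)
    resid[[10, 57]] = [0.4, -0.35]
    print("outlier indices:", esd_outliers(resid))
    ```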

  2. Force Limited Vibration Testing: Computation C2 for Real Load and Probabilistic Source

    NASA Astrophysics Data System (ADS)

    Wijker, J. J.; de Boer, A.; Ellenbroek, M. H. M.

    2014-06-01

    To prevent over-testing of the test item during random vibration testing, Scharton proposed and discussed force limited random vibration testing (FLVT) in a number of publications, in which the factor C2 is, besides the random vibration specification, the total mass and the turnover frequency of the load (test item), a very important parameter. A number of computational methods to estimate C2 are described in the literature, i.e. the simple and the complex two-degrees-of-freedom systems, STDFS and CTDFS, respectively. Both the STDFS and the CTDFS describe in a very reduced (simplified) manner the load and the source (the adjacent structure transferring the excitation forces to the test item, e.g. a spacecraft supporting an instrument). The motivation of this work is to establish a method for the computation of a realistic value of C2 to perform a representative random vibration test based on force limitation, when the description of the adjacent structure (source) is more or less unknown. Marchand formulated a conservative estimation of C2 based on the maximum modal effective mass and damping of the test item (load), when no description of the supporting structure (source) is available [13]. Marchand discussed the formal description of obtaining C2, using the maximum PSD of the acceleration and the maximum PSD of the force, both at the interface between load and source, in combination with the apparent mass and total mass of the load. This method is very convenient for computing the factor C2; however, finite element models are needed to compute the spectra of the PSD of both the acceleration and the force at the interface between load and source. Stevens presented the coupled systems modal approach (CSMA), in which simplified asparagus patch models (parallel-oscillator representations) of load and source are connected, consisting of modal effective masses and the spring stiffnesses associated with the natural frequencies. When the random acceleration vibration specification is given, the CSMA method is suitable for computing the value of the parameter C2. When no mathematical model of the source can be made available, estimates of the value of C2 can be found in the literature. In this paper a probabilistic mathematical representation of the unknown source is proposed, such that the asparagus patch model of the source can be approximated. The computation of the value of C2 can then be done in conjunction with the CSMA method, knowing the apparent mass of the load and the random acceleration specification at the interface between load and source. Strength and stiffness design rules for spacecraft, instrumentation, units, etc., as mentioned in ECSS Standards and Handbooks, Launch Vehicle User's Manuals, papers, books, etc., are applied, and a probabilistic description of the design parameters is foreseen. As an example, a simple experiment has been worked out.
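
    Marchand's formal description, reduced to its simplest form, is a ratio of interface PSD maxima. The sketch below evaluates that ratio; the unit conventions (force PSD in N^2/Hz, acceleration PSD converted from g^2/Hz to SI) and all numerical values are assumptions for illustration, and the paper's probabilistic source model enters through how the force PSD is obtained.

    ```python
    import numpy as np

    def c2_from_interface_psds(S_ff_max, S_aa_max, m_total):
        # Force-limit factor from the peak interface force PSD (N^2/Hz),
        # the peak interface acceleration PSD ((m/s^2)^2/Hz) and the
        # total load mass (kg), using S_ff = C2 * m^2 * S_aa at the peaks.
        return S_ff_max / (m_total ** 2 * S_aa_max)

    # Hypothetical numbers: a 50 kg load, a 0.1 g^2/Hz acceleration peak
    # (converted to SI) and a 2.4e4 N^2/Hz force peak give C2 near 1.
    g = 9.81
    print(f"C2 = {c2_from_interface_psds(2.4e4, 0.1 * g**2, 50.0):.2f}")
    ```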

  3. J&K Fitness Supply Company: Auditing Inventory

    ERIC Educational Resources Information Center

    Clikeman, Paul M.

    2012-01-01

    This case provides auditing students with an opportunity to perform substantive tests of inventory using realistic-looking source documents. The learning objectives are to help students understand: (1) the procedures auditors perform in order to test inventory; (2) the source documents used in auditing inventory; and (3) the types of misstatements…

  4. Historical Literature and Democratic Education. Occasional Paper.

    ERIC Educational Resources Information Center

    Scott, John Anthony

    This document discusses the movement to bring original historical sources into the classroom. Because students of history need access to sources of information that provide direct or primary evidence about reality, teachers must show that realistic alternatives to the traditional history text exist. In the first section of this paper, efforts to…

  5. Application of empirical and dynamical closure methods to simple climate models

    NASA Astrophysics Data System (ADS)

    Padilla, Lauren Elizabeth

    This dissertation applies empirically- and physically-based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide-ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally-averaged energy balance model. Uncertainty in climate forcing and historical temperature makes TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborate ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in closure models were dominant sources of error. Using a reduced order modeling approach to maximize energy content allowed reduction of the closure model dimension by up to 75% without loss in accuracy. MMF and a comparable alternative model performed equally well compared to direct numerical simulation.
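
    The empirical-closure idea, reduced to its simplest form, is sequential Bayesian updating of TCS from a temperature record. The sketch below uses a grid posterior in place of the dissertation's UKF and a linearized response in place of the full energy balance model; every number (forcing ramp, noise level, true TCS) is invented for illustration.

    ```python
    import numpy as np

    F2x = 3.7                                 # W/m^2 per CO2 doubling
    years = np.arange(1960, 2011)
    forcing = 0.02 * (years - 1960)           # toy forcing ramp (W/m^2)
    tcs_true = 1.8                            # assumed "true" TCS (K)
    rng = np.random.default_rng(0)
    t_obs = tcs_true * forcing / F2x + rng.normal(0.0, 0.1, years.size)

    tcs_grid = np.linspace(0.5, 4.5, 401)     # prior support (K), flat prior
    log_post = np.zeros_like(tcs_grid)
    for f, t in zip(forcing, t_obs):
        pred = tcs_grid * f / F2x             # predicted anomaly per TCS value
        log_post += -0.5 * ((t - pred) / 0.1) ** 2
    post = np.exp(log_post - log_post.max())
    post /= np.trapz(post, tcs_grid)
    print(f"posterior mean TCS = {np.trapz(tcs_grid * post, tcs_grid):.2f} K")
    ```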

  6. Modeling the Performance Limitations and Prospects of Perovskite/Si Tandem Solar Cells under Realistic Operating Conditions

    PubMed Central

    2017-01-01

    Perovskite/Si tandem solar cells have the potential to considerably out-perform conventional solar cells. Under standard test conditions, perovskite/Si tandem solar cells already outperform the Si single junction. Under realistic conditions, however, as we show, tandem solar cells made from current record cells are hardly more efficient than the Si cell alone. We model the performance of realistic perovskite/Si tandem solar cells under real-world climate conditions, by incorporating parasitic cell resistances, nonradiative recombination, and optical losses into the detailed-balance limit. We show quantitatively that when optimizing these parameters in the perovskite top cell, perovskite/Si tandem solar cells could reach efficiencies above 38% under realistic conditions, even while leaving the Si cell untouched. Despite the rapid efficiency increase of perovskite solar cells, our results emphasize the need for further material development, careful device design, and light management strategies, all necessary for highly efficient perovskite/Si tandem solar cells. PMID:28920081

  7. Modeling the Performance Limitations and Prospects of Perovskite/Si Tandem Solar Cells under Realistic Operating Conditions.

    PubMed

    Futscher, Moritz H; Ehrler, Bruno

    2017-09-08

    Perovskite/Si tandem solar cells have the potential to considerably out-perform conventional solar cells. Under standard test conditions, perovskite/Si tandem solar cells already outperform the Si single junction. Under realistic conditions, however, as we show, tandem solar cells made from current record cells are hardly more efficient than the Si cell alone. We model the performance of realistic perovskite/Si tandem solar cells under real-world climate conditions, by incorporating parasitic cell resistances, nonradiative recombination, and optical losses into the detailed-balance limit. We show quantitatively that when optimizing these parameters in the perovskite top cell, perovskite/Si tandem solar cells could reach efficiencies above 38% under realistic conditions, even while leaving the Si cell untouched. Despite the rapid efficiency increase of perovskite solar cells, our results emphasize the need for further material development, careful device design, and light management strategies, all necessary for highly efficient perovskite/Si tandem solar cells.
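
    The loss terms named in the abstract (parasitic series and shunt resistances added on top of the detailed-balance limit) can be illustrated with a single-diode model whose implicit I-V relation is solved numerically; a minimal sketch follows. All parameter values are illustrative placeholders, not the record-cell parameters used in the paper.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def current_density(V, J_ph, J0, n, Rs, Rsh, T=298.15):
        # Single-diode model with series (Rs) and shunt (Rsh) resistances:
        # J = J_ph - J0*(exp((V + J*Rs)/(n*Vt)) - 1) - (V + J*Rs)/Rsh,
        # solved implicitly for J at a given voltage V.
        Vt = 1.380649e-23 * T / 1.602176634e-19      # thermal voltage (V)
        def residual(J):
            return (J_ph - J0 * (np.exp((V + J * Rs) / (n * Vt)) - 1.0)
                    - (V + J * Rs) / Rsh - J)
        return brentq(residual, -2.0 * J_ph, 2.0 * J_ph)

    # Illustrative, area-normalized parameters (A/m^2, ohm m^2):
    V = np.linspace(0.0, 1.12, 150)
    J = np.array([current_density(v, J_ph=200.0, J0=1e-15, n=1.1,
                                  Rs=1e-4, Rsh=10.0) for v in V])
    print(f"max power density: {(V * J).max():.0f} W/m^2")
    ```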

  8. Engineering applications of strong ground motion simulation

    NASA Astrophysics Data System (ADS)

    Somerville, Paul

    1993-02-01

    The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip of the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. We then show examples of the application of the simulation procedure to the estimation of design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.

  9. Seismic source inversion using Green's reciprocity and a 3-D structural model for the Japanese Islands

    NASA Astrophysics Data System (ADS)

    Simutė, S.; Fichtner, A.

    2015-12-01

    We present a feasibility study for seismic source inversions using a 3-D velocity model for the Japanese Islands. The approach involves numerically calculating 3-D Green's tensors, which is made efficient by exploiting Green's reciprocity. The rationale for 3-D seismic source inversion has several aspects. For structurally complex regions, such as the Japan area, it is necessary to account for 3-D Earth heterogeneities to prevent unknown structure polluting source solutions. In addition, earthquake source characterisation can serve as a means to delineate existing faults. Source parameters obtained for more realistic Earth models can then facilitate improvements in seismic tomography and early warning systems, which are particularly important for seismically active areas, such as Japan. We have created a database of numerically computed 3-D Green's reciprocals for a 40° × 40° × 600 km area around the Japanese Archipelago for >150 broadband stations. For this we used a regional 3-D velocity model, recently obtained from full waveform inversion. The model includes attenuation and radial anisotropy and explains seismic waveform data for periods between 10-80 s generally well. The aim is to perform source inversions using the database of 3-D Green's tensors. As preliminary steps, we present initial concepts to address issues that are at the basis of our approach. We first investigate to which extent Green's reciprocity works in a discrete domain. Considering the substantial amounts of computed Green's tensors, we address storage requirements and file formatting. We discuss the importance of the initial source model, as an intelligent choice can substantially reduce the search volume. Possibilities to perform a Bayesian inversion and ways to move to finite source inversion are also explored.

  10. Parametric Sensitivity Analysis for the Asian Summer Monsoon Precipitation Simulation in the Beijing Climate Center AGCM Version 2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Ben; Zhang, Yaocun; Qian, Yun

    In this study, we apply an efficient sampling approach and conduct a large number of simulations to explore the sensitivity of the simulated Asian summer monsoon (ASM) precipitation, including the climatological state and interannual variability, to eight parameters related to the cloud and precipitation processes in the Beijing Climate Center AGCM version 2.1 (BCC_AGCM2.1). Our results show that BCC_AGCM2.1 has large biases in simulating the ASM precipitation. The precipitation efficiency and the evaporation coefficient for deep convection are the most sensitive parameters in simulating the ASM precipitation. With optimal parameter values, the simulated precipitation climatology could be remarkably improved, e.g. increased precipitation over the equatorial Indian Ocean, suppressed precipitation over the Philippine Sea, and a more realistic Meiyu distribution over Eastern China. The ASM precipitation interannual variability is further analyzed, with a focus on the ENSO impacts. It shows that simulations with better ASM precipitation climatology can also produce more realistic precipitation anomalies during El Niño decaying summer. In the low-skill experiments for precipitation climatology, the ENSO-induced precipitation anomalies are most significant over continents (vs. over ocean in observation) in the South Asian monsoon region. More realistic results are derived from the higher-skill experiments, with stronger anomalies over the Indian Ocean and weaker anomalies over India and the western Pacific, favoring more evident easterly anomalies forced by the tropical Indian Ocean warming and a stronger Indian Ocean-western Pacific teleconnection, as observed. Our model results reveal a strong connection between the simulated ASM precipitation climatological state and interannual variability in BCC_AGCM2.1 when key parameters are perturbed.
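
    An efficient sampling design over an eight-dimensional parameter space can be sketched with a Latin hypercube, one common choice for this kind of perturbed-parameter experiment (the paper's exact scheme may differ). The parameter names and ranges below are placeholders, not the actual BCC_AGCM2.1 values.

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Placeholder names and ranges for the eight perturbed parameters.
    names = ["precip_efficiency", "evap_coeff_deep", "p3", "p4",
             "p5", "p6", "p7", "p8"]
    lower = np.array([0.0, 0.0, 0.1, 0.1, 0.0, 0.0, 0.5, 0.5])
    upper = np.array([1.0, 1.0, 2.0, 2.0, 1.0, 1.0, 5.0, 5.0])

    sampler = qmc.LatinHypercube(d=8, seed=42)
    unit = sampler.random(n=64)                  # 64 model runs
    samples = qmc.scale(unit, lower, upper)      # map to physical ranges
    print(samples.shape)                         # (64, 8)
    ```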

  11. The Thermal Regulation of Gravitational Instabilities in Protoplanetary Disks. III. Simulations with Radiative Cooling and Realistic Opacities

    NASA Astrophysics Data System (ADS)

    Boley, Aaron C.; Mejía, Annie C.; Durisen, Richard H.; Cai, Kai; Pickett, Megan K.; D'Alessio, Paola

    2006-11-01

    This paper presents a fully three-dimensional radiative hydrodynamics simulation with realistic opacities for a gravitationally unstable 0.07 Msolar disk around a 0.5 Msolar star. We address the following aspects of disk evolution: the strength of gravitational instabilities under realistic cooling, mass transport in the disk that arises from GIs, comparisons between the gravitational and Reynolds stresses measured in the disk and those expected in an α-disk, and comparisons between the SED derived for the disk and SEDs derived from observationally determined parameters. The mass transport in this disk is dominated by global modes, and the cooling times are too long to permit fragmentation at all radii. Moreover, our results suggest a plausible explanation for the FU Ori outburst phenomenon.

  12. Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting

    NASA Astrophysics Data System (ADS)

    Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang

    In this paper we studied the geometry of a three-dimensional tableau from a single realist painting - Scott Fraser’s Three way vanitas (2006). The tableau contains a carefully chosen complex arrangement of objects including a moth, egg, cup, and strand of string, glass of water, bone, and hand mirror. Each of the three plane mirrors presents a different view of the tableau from a virtual camera behind each mirror and symmetric to the artist’s viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and the images in three plane mirrors depicted within the painting.

  13. Python scripting in the nengo simulator.

    PubMed

    Stewart, Terrence C; Tripp, Bryan; Eliasmith, Chris

    2009-01-01

    Nengo (http://nengo.ca) is an open-source neural simulator that has been greatly enhanced by the recent addition of a Python script interface. Nengo provides a wide range of features that are useful for physiological simulations, including unique features that facilitate development of population-coding models using the neural engineering framework (NEF). This framework uses information theory, signal processing, and control theory to formalize the development of large-scale neural circuit models. Notably, it can also be used to determine the synaptic weights that underlie observed network dynamics and transformations of represented variables. Nengo provides rich NEF support, and includes customizable models of spike generation, muscle dynamics, synaptic plasticity, and synaptic integration, as well as an intuitive graphical user interface. All aspects of Nengo models are accessible via the Python interface, allowing for programmatic creation of models, inspection and modification of neural parameters, and automation of model evaluation. Since Nengo combines Python and Java, it can also be integrated with any existing Java or 100% Python code libraries. Current work includes connecting neural models in Nengo with existing symbolic cognitive models, creating hybrid systems that combine detailed neural models of specific brain regions with higher-level models of remaining brain areas. Such hybrid models can provide (1) more realistic boundary conditions for the neural components, and (2) more realistic sub-components for the larger cognitive models.

  14. Interactions Between Energetic Electrons and Realistic Whistler Mode Waves in the Jovian Magnetosphere

    NASA Astrophysics Data System (ADS)

    de Soria-Santacruz Pich, M.; Drozdov, A.; Menietti, J. D.; Garrett, H. B.; Kellerman, A. C.; Shprits, Y. Y.

    2016-12-01

    The radiation belts of Jupiter are the most intense of all the planets in the solar system. Their source is not well understood but they are believed to be the result of inward radial transport beyond the orbit of Io. In the case of Earth, the radiation belts are the result of local acceleration and radial diffusion from whistler waves, and it has been suggested that this type of acceleration may also be significant in the magnetosphere of Jupiter. Multiple diffusion codes have been developed to study the dynamics of the Earth's magnetosphere and characterize the interaction between relativistic electrons and whistler waves; in the present paper we adapt one of these codes, the two-dimensional version of the Versatile Electron Radiation Belt (VERB) computer code, to the case of the Jovian magnetosphere. We use realistic parameters to determine the importance of whistler emissions in the acceleration and loss of electrons in the Jovian magnetosphere. More specifically, we use an extensive wave survey from the Galileo spacecraft and initial conditions derived from the Galileo Interim Radiation Electron Model version 2 (GIRE2) to estimate the pitch angle and energy diffusion of the electron population due to lower and upper band whistlers as a function of latitude and radial distance from the planet, and we calculate the decay rates that result from this interaction.

  15. Python Scripting in the Nengo Simulator

    PubMed Central

    Stewart, Terrence C.; Tripp, Bryan; Eliasmith, Chris

    2008-01-01

    Nengo (http://nengo.ca) is an open-source neural simulator that has been greatly enhanced by the recent addition of a Python script interface. Nengo provides a wide range of features that are useful for physiological simulations, including unique features that facilitate development of population-coding models using the neural engineering framework (NEF). This framework uses information theory, signal processing, and control theory to formalize the development of large-scale neural circuit models. Notably, it can also be used to determine the synaptic weights that underlie observed network dynamics and transformations of represented variables. Nengo provides rich NEF support, and includes customizable models of spike generation, muscle dynamics, synaptic plasticity, and synaptic integration, as well as an intuitive graphical user interface. All aspects of Nengo models are accessible via the Python interface, allowing for programmatic creation of models, inspection and modification of neural parameters, and automation of model evaluation. Since Nengo combines Python and Java, it can also be integrated with any existing Java or 100% Python code libraries. Current work includes connecting neural models in Nengo with existing symbolic cognitive models, creating hybrid systems that combine detailed neural models of specific brain regions with higher-level models of remaining brain areas. Such hybrid models can provide (1) more realistic boundary conditions for the neural components, and (2) more realistic sub-components for the larger cognitive models. PMID:19352442

  16. Assessing and reporting uncertainties in dietary exposure analysis: Mapping of uncertainties in a tiered approach.

    PubMed

    Kettler, Susanne; Kennedy, Marc; McNamara, Cronan; Oberdörfer, Regina; O'Mahony, Cian; Schnabel, Jürgen; Smith, Benjamin; Sprong, Corinne; Faludi, Roland; Tennant, David

    2015-08-01

    Uncertainty analysis is an important component of dietary exposure assessments, needed to correctly understand the strengths and limits of their results. Often, standard screening procedures are applied in a first step, which results in conservative estimates. If those screening procedures indicate a potential exceedance of health-based guidance values, more refined models are applied within the tiered approach. However, the sources and types of uncertainties in deterministic and probabilistic models can vary or differ. A key objective of this work has been the mapping of different sources and types of uncertainties, to better understand how best to use uncertainty analysis to generate a more realistic understanding of dietary exposure. In dietary exposure assessments, uncertainties can be introduced by knowledge gaps about the exposure scenario, the parameters, and the model itself. With this mapping, general and model-independent uncertainties have been identified and described, as well as those which can be introduced and influenced by the specific model during the tiered approach. This analysis identifies that there are general uncertainties common to point estimates (screening or deterministic methods) and probabilistic exposure assessment methods. To provide further clarity, general sources of uncertainty affecting many dietary exposure assessments should be separated from model-specific uncertainties.
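
    The contrast between a conservative screening tier and a refined probabilistic tier can be made concrete with a small Monte Carlo sketch; all distributions and values below are invented for illustration and carry no regulatory meaning.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Illustrative input distributions (all values invented):
    consumption = rng.lognormal(np.log(50.0), 0.6, n)      # g/day
    concentration = rng.lognormal(np.log(0.02), 0.8, n)    # mg/g
    body_weight = rng.normal(70.0, 12.0, n).clip(min=30.0) # kg

    # Screening tier: high-percentile consumption times the maximum
    # concentration over a default 70 kg body weight (conservative).
    screening = np.percentile(consumption, 97.5) * concentration.max() / 70.0

    # Refined tier: full exposure distribution, report its P97.5.
    exposure = consumption * concentration / body_weight   # mg/kg bw/day
    print(f"screening estimate:  {screening:.2f} mg/kg bw/day")
    print(f"probabilistic P97.5: {np.percentile(exposure, 97.5):.3f} mg/kg bw/day")
    ```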

  17. Effect of flow velocity and temperature on ignition characteristics in laser ignition of natural gas and air mixtures

    NASA Astrophysics Data System (ADS)

    Griffiths, J.; Riley, M. J. W.; Borman, A.; Dowding, C.; Kirk, A.; Bickerton, R.

    2015-03-01

    Laser-induced spark ignition offers the potential for greater reliability and consistency in the ignition of lean air/fuel mixtures. This increased reliability is essential for the application of gas turbines as primary or secondary reserve energy sources in smart grid systems, enabling the integration of renewable energy sources whose output is prone to fluctuation over time. This work details a study into the effect of flow velocity and temperature on minimum ignition energies for laser-induced spark ignition in an atmospheric combustion test rig, representative of a sub-15 MW industrial gas turbine (Siemens Industrial Turbomachinery Ltd., Lincoln, UK). Determination of the minimum ignition energies required over a range of temperatures and flow velocities is essential for establishing an operating window in which laser-induced spark ignition can operate under realistic, engine-like start conditions. Ignition of a natural gas and air mixture at atmospheric pressure was conducted using a laser ignition system with a Q-switched Nd:YAG laser source operating at 532 nm wavelength and 4 ns pulse length. Analysis of the influence of flow velocity and temperature on ignition characteristics is presented in terms of the required photon flux density, a useful parameter to consider during the development of laser ignition systems.
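
    The photon flux density follows directly from the pulse energy, wavelength, pulse length, and focal spot area. The wavelength and pulse length below match the paper; the pulse energy and spot diameter are assumed values for illustration.

    ```python
    import numpy as np

    def photon_flux_density(pulse_energy_mj, wavelength_nm=532.0,
                            pulse_length_ns=4.0, spot_diameter_um=50.0):
        # Photons per unit area per unit time in the laser focus.
        h, c = 6.62607015e-34, 2.99792458e8
        e_photon = h * c / (wavelength_nm * 1e-9)          # J per photon
        n_photons = pulse_energy_mj * 1e-3 / e_photon
        area = np.pi * (spot_diameter_um * 1e-6 / 2.0) ** 2  # m^2
        return n_photons / (area * pulse_length_ns * 1e-9)

    # Assumed 10 mJ pulse focused to a 50 um spot:
    print(f"{photon_flux_density(10.0):.2e} photons m^-2 s^-1")
    ```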

  18. MEASURING NEUTRON STAR RADII VIA PULSE PROFILE MODELING WITH NICER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Özel, Feryal; Psaltis, Dimitrios; Bauböck, Michi

    2016-11-20

    The Neutron-star Interior Composition Explorer is an X-ray astrophysics payload that will be placed on the International Space Station. Its primary science goal is to measure with high accuracy the pulse profiles that arise from the non-uniform thermal surface emission of rotation-powered pulsars. Modeling general relativistic effects on the profiles will lead to measuring the radii of these neutron stars and to constraining their equation of state. Achieving this goal will depend, among other things, on accurate knowledge of the source, sky, and instrument backgrounds. We use here simple analytic estimates to quantify the level at which these backgrounds need to be known in order for the upcoming measurements to provide significant constraints on the properties of neutron stars. We show that, even in the minimal-information scenario, knowledge of the background at a few percent level for a background-to-source count-rate ratio of 0.2 allows for a measurement of the neutron star compactness to better than 10% uncertainty for most of the parameter space. These constraints improve further when more realistic assumptions are made about the neutron star emission and spin, and when additional information about the source itself, such as its mass or distance, is incorporated.

  19. The Importance of Electron Source Population to the Remarkable Enhancement of Radiation belt Electrons during the October 2012 Storm

    NASA Astrophysics Data System (ADS)

    Tu, W.; Cunningham, G.; Reeves, G. D.; Chen, Y.; Henderson, M. G.; Blake, J. B.; Baker, D. N.; Spence, H.

    2013-12-01

    During the October 8-9, 2012 storm, the MeV electron fluxes in the heart of the outer radiation belt were first wiped out and then exhibited a three-orders-of-magnitude increase on a timescale of hours, as observed by the MagEIS and REPT instruments aboard the Van Allen Probes. There is strong observational evidence that the remarkable enhancement was due to local acceleration by chorus waves, as shown in the recent Science paper by Reeves et al. [1]. However, the importance of the dynamic electron source population transported in from the plasma sheet to the observed remarkable enhancement has not been studied. We illustrate the importance of the source population with our simulation of the event using the DREAM 3D diffusion model. Three new modifications have been implemented in the model: 1) incorporating a realistic and time-dependent low-energy boundary condition at 100 keV obtained from the MagEIS data; 2) utilizing event-specific chorus wave distributions derived from the low-energy electron precipitation observed by POES and validated against the in situ wave data from EMFISIS; and 3) using an 'open' boundary condition at L* = 11 and implementing electron lifetimes on the order of the drift period outside the solar-wind-driven last closed drift shell. The model quantitatively reproduces the MeV electron dynamics during this event, including the fast dropout at the start of October 8, the low electron flux during the first Dst dip, and the remarkable enhancement peaked at L* = 4.2 during the second Dst dip. By comparing the model results with a realistic source population against those with a constant low-energy boundary (see figure), we find that the realistic electron source population is critical to reproducing the observed fast and significant increase of MeV electrons. [1] Reeves, G. D., et al. (2013), Electron Acceleration in the Heart of the Van Allen Radiation Belts, Science, doi:10.1126/science.1237743. Figure caption: comparison between data and model results during the October 2012 storm for electrons at μ = 3168 MeV/G and K = 0.1 G^(1/2) R_E; the top panel shows the electron phase space density measured by the two Van Allen Probes, the middle panel shows results from the DREAM 3D diffusion model with a realistic electron source population derived from MagEIS data, and the bottom panel shows model results with a constant source population.

  20. PROBING DYNAMICS OF ELECTRON ACCELERATION WITH RADIO AND X-RAY SPECTROSCOPY, IMAGING, AND TIMING IN THE 2002 APRIL 11 SOLAR FLARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleishman, Gregory D.; Nita, Gelu M.; Gary, Dale E.

    Based on detailed analysis of radio and X-ray observations of a flare on 2002 April 11, augmented by realistic three-dimensional modeling, we have identified a radio emission component produced directly at the flare acceleration region. This acceleration region radio component has distinctly different (1) spectrum, (2) light curves, (3) spatial location, and, thus, (4) physical parameters from those of the separately identified trapped or precipitating electron components. To derive the evolution of physical parameters of the radio sources we apply forward fitting of the radio spectrum time sequence with the gyrosynchrotron source function with five to six free parameters. At the stage when the contribution from the acceleration region dominates the radio spectrum, the X-ray- and radio-derived electron energy spectral indices agree well with each other. During this time the maximum energy of the accelerated electron spectrum displays a monotonic increase with time from ~300 keV to ~2 MeV over roughly one minute, indicative of an acceleration process in the form of growth of the power-law tail; the fast electron residence time in the acceleration region is about 2-4 s, which is much longer than the time of flight and so requires a strong diffusion mode there to inhibit free-streaming propagation. The acceleration region has a relatively strong magnetic field, B ≈ 120 G, and a low thermal density, n_e ≲ 2 × 10^9 cm^-3. These acceleration region properties are consistent with a stochastic acceleration mechanism.

  1. Transmission parameters estimated for Salmonella typhimurium in swine using susceptible-infectious-resistant models and a Bayesian approach

    PubMed Central

    2014-01-01

    Background Transmission models can aid understanding of disease dynamics and are useful in testing the efficiency of control measures. The aim of this study was to formulate an appropriate stochastic Susceptible-Infectious-Resistant/Carrier (SIR) model for Salmonella Typhimurium in pigs and thus estimate the transmission parameters between states. Results The transmission parameters were estimated using data from a longitudinal study of three Danish farrow-to-finish pig herds known to be infected. A Bayesian model framework was proposed, which comprised Binomial components for the transitions from susceptible to infectious and from infectious to carrier, and a Poisson component for carrier to infectious. Cohort random effects were incorporated into these models to allow for unobserved cohort-specific variables as well as unobserved sources of transmission, thus enabling a more realistic estimation of the transmission parameters. In the case of the transition from susceptible to infectious, the cohort random effects were also time varying. The number of infectious pigs not detected by the parallel testing was treated as unknown, and the probability of non-detection was estimated using information about the sensitivity and specificity of the bacteriological and serological tests. The estimate of the transmission rate from susceptible to infectious was 0.33 [0.06, 1.52], from infectious to carrier was 0.18 [0.14, 0.23], and from carrier to infectious was 0.01 [0.0001, 0.04]. The estimate for the basic reproduction ratio (R0) was 1.91 [0.78, 5.24]. The probability of non-detection was estimated to be 0.18 [0.12, 0.25]. Conclusions The proposed framework for stochastic SIR models was successfully implemented to estimate transmission rate parameters for Salmonella Typhimurium in swine field data. R0 was 1.91, implying that there was dissemination of the infection within pigs of the same cohort. There was significant temporal-cohort variability, especially at the susceptible to infectious stage. The model adequately fitted the data, allowing for both observed and unobserved sources of uncertainty (cohort effects, diagnostic test sensitivity), thus leading to more reliable estimates of transmission parameters. PMID:24774444
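
    The fitted rates lend themselves to forward simulation; below is a chain-binomial sketch of the Susceptible-Infectious-Carrier dynamics using the posterior-mean rates quoted above (0.33, 0.18, 0.01), interpreted here as daily rates. Herd size, duration, and the frequency-dependent transmission term are assumptions, and the paper's Bayesian framework with cohort random effects is considerably richer.

    ```python
    import numpy as np

    def simulate_sic(n_pigs=100, beta=0.33, gamma=0.18, delta=0.01,
                     days=120, seed=7):
        # Discrete-day chain-binomial S -> I -> C model with C -> I
        # reversion; returns an array of daily (S, I, C) counts.
        rng = np.random.default_rng(seed)
        S, I, C = n_pigs - 1, 1, 0
        out = [(S, I, C)]
        for _ in range(days):
            p_inf = 1.0 - np.exp(-beta * I / n_pigs)   # S -> I
            p_car = 1.0 - np.exp(-gamma)               # I -> C
            p_rev = 1.0 - np.exp(-delta)               # C -> I
            new_inf = rng.binomial(S, p_inf)
            new_car = rng.binomial(I, p_car)
            new_rev = rng.binomial(C, p_rev)
            S -= new_inf
            I += new_inf - new_car + new_rev
            C += new_car - new_rev
            out.append((S, I, C))
        return np.array(out)

    print("final S, I, C:", simulate_sic()[-1])
    ```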

  2. Evaluations on the potential productivity of winter wheat based on agro-ecological zone in the world

    NASA Astrophysics Data System (ADS)

    Wang, H.; Li, Q.; Du, X.; Zhao, L.; Lu, Y.; Li, D.; Liu, J.

    2015-04-01

    Wheat is the most widely grown crop globally and an essential source of calories in human diets; maintaining and increasing global wheat production is therefore strongly linked to food security. In this paper, an evaluation model of winter wheat potential productivity is proposed based on agro-ecological zones and historical winter wheat yield data from the past 30 years (1983-2011) obtained from FAO, and the potential productivity of winter wheat around the world is investigated. The results showed that the realistic potential productivity of winter wheat was highest in Western Europe, at more than 7500 kg/hm2. The realistic potential productivity in the North China Plain was also high, at about 6000 kg/hm2. However, the realistic potential productivity in the United States, a main winter-wheat-producing country, was not high, at only about 3000 kg/hm2. Outside these main winter wheat producing areas, the realistic potential productivity was very low, mainly less than 1500 kg/hm2, as in the southwest region of Russia. The gaps between potential productivity and realistic productivity of winter wheat were largest in Kazakhstan and India, where they exceeded 40% of realistic productivity. In Russia, the gap between potential and realistic productivity was smallest, at only 10% of realistic productivity.

  3. Source characterization of underground explosions from hydrodynamic-to-elastic coupling simulations

    NASA Astrophysics Data System (ADS)

    Chiang, A.; Pitarka, A.; Ford, S. R.; Ezzedine, S. M.; Vorobiev, O.

    2017-12-01

    A major improvement in ground motion simulation capabilities for underground explosion monitoring during the first phase of the Source Physics Experiment (SPE) is the development of a wave propagation solver that can propagate explosion-generated non-linear near-field ground motions to the far field. The calculation is done using a hybrid modeling approach with one-way hydrodynamic-to-elastic coupling in three dimensions, where near-field motions are computed using GEODYN-L, a Lagrangian hydrodynamics code, and then passed to WPP, an elastic finite-difference code for seismic waveform modeling. The advancement in ground motion simulation capabilities gives us the opportunity to assess moment tensor inversion of a realistic volumetric source with near-field effects in a controlled setting, where we can evaluate the recovered source properties as a function of modeling parameters (i.e. the velocity model) and can provide insights into previous source studies on SPE Phase I chemical shots and other historical nuclear explosions. For example, moment tensor inversion of far-field SPE seismic data demonstrated that, while vertical motions are well modeled using existing velocity models, large misfits still persist in predicting tangential shear wave motions from explosions. One possible explanation we can explore is errors and uncertainties in the underlying Earth model. Here we investigate the recovered moment tensor solution, particularly the non-volumetric component, by inverting far-field ground motions simulated from physics-based explosion source models in fractured material, where the physics-based source models are based on the modeling of SPE-4P, SPE-5 and SPE-6 near-field data. The hybrid modeling approach provides new prospects for modeling explosion sources and understanding the uncertainties associated with them.

  4. Calibration of infiltration parameters on hydrological tank model using runoff coefficient of rational method

    NASA Astrophysics Data System (ADS)

    Suryoputro, Nugroho; Suhardjono, Soetopo, Widandi; Suhartanto, Ery

    2017-09-01

    In calibrating hydrological models, there are generally two stages of activity: 1) determining realistic initial model parameters that represent the physical processes of the natural components, and 2) entering the initial parameter values, which are then refined by trial and error or automatically to obtain optimal values. Determining realistic initial values takes experience and user knowledge of the model, which is a problem for novice model users. This paper presents another approach to estimating the infiltration parameters in the tank model: the parameters are approximated using the runoff coefficient of the rational method. The infiltration parameter value is simply described as the percentage of total rainfall minus the percentage of runoff. It is expected that the results of this research will accelerate the calibration of tank model parameters. The research was conducted on the Kali Bango sub-watershed in Malang Regency, with an area of 239.71 km2. Infiltration measurements were carried out from January 2017 to March 2017. Soil samples were analysed at the Soil Physics Laboratory, Department of Soil Science, Faculty of Agriculture, Universitas Brawijaya. Rainfall and discharge data were obtained from UPT PSAWS Bango Gedangan in Malang; temperature, evaporation, relative humidity, and wind speed data were obtained from the BMKG station of Karang Ploso, Malang. The results showed that the initial value of the infiltration coefficient at the top tank outlet can be determined using the rational-method runoff coefficient approach, with good results.
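
    Read literally, the approximation described above is a one-line computation; in the sketch below the runoff coefficient is a placeholder value, not the calibrated Kali Bango figure.

    ```python
    # Initial infiltration coefficient approximated as the complement of
    # the rational-method runoff coefficient C (placeholder value, not
    # the calibrated Kali Bango figure).
    C = 0.45                                  # runoff coefficient
    infiltration_coefficient = 1.0 - C        # fraction of rainfall infiltrating
    print(f"initial infiltration coefficient ~ {infiltration_coefficient:.2f}")
    ```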

  5. Open Source Projects in Software Engineering Education: A Mapping Study

    ERIC Educational Resources Information Center

    Nascimento, Debora M. C.; Almeida Bittencourt, Roberto; Chavez, Christina

    2015-01-01

    Context: It is common practice in academia to have students work with "toy" projects in software engineering (SE) courses. One way to make such courses more realistic and reduce the gap between academic courses and industry needs is getting students involved in open source projects (OSP) with faculty supervision. Objective: This study…

  6. PHOTOCHEMICAL SIMULATIONS OF POINT SOURCE EMISSIONS WITH THE MODELS-3 CMAQ PLUME-IN-GRID APPROACH

    EPA Science Inventory

    A plume-in-grid (PinG) approach has been designed to provide a realistic treatment for the simulation the dynamic and chemical processes impacting pollutant species in major point source plumes during a subgrid scale phase within an Eulerian grid modeling framework. The PinG sci...

  7. Spin dynamics modeling in the AGS based on a stepwise ray-tracing method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dutheil, Yann

    The AGS provides a polarized proton beam to RHIC. The beam is accelerated in the AGS from Gγ = 4.5 to Gγ = 45.5, and the polarization transmission is critical to the RHIC spin program. In recent years, various systems were implemented to improve the AGS polarization transmission. These upgrades include the double partial snake configuration and the tune jump system. However, 100% polarization transmission through the AGS acceleration cycle has not yet been reached; the current efficiency of the polarization transmission is estimated to be around 85% in typical running conditions. Understanding the sources of depolarization in the AGS is critical to improving the AGS polarized proton performance. The complexity of beam and spin dynamics, in part due to the specialized Siberian snake magnets, drove a strong interest in original methods of simulation. For that, the Zgoubi code, capable of direct particle and spin tracking through field maps, was used here to model the AGS. A model of the AGS using the Zgoubi code was developed and interfaced with the control system through a simple command, AgsFromSnapRampCmd; interfacing with the machine control system allows for fast modeling using actual machine parameters. These developments allowed the model to realistically reproduce the optics of the AGS along the acceleration ramp. Additional developments of the Zgoubi code, as well as of post-processing and pre-processing tools, granted long-term multiturn beam tracking capabilities: the tracking of realistic beams along the complete AGS acceleration cycle. Beam multiturn tracking simulations in the AGS, using realistic beam and machine parameters, provided a unique insight into the mechanisms behind the evolution of the beam emittance and polarization during the acceleration cycle. Post-processing software was developed to allow the representation of the relevant quantities from the Zgoubi simulation data. The Zgoubi simulations proved particularly useful for better understanding the polarization losses through horizontal intrinsic spin resonances. The Zgoubi model and the tools developed were also used for some direct applications. For instance, beam experiment simulations allowed an accurate estimation of the expected polarization gains from machine changes. In particular, the simulations involving the tune jump system provided an accurate estimation of polarization gains and the optimum settings to improve the performance of the AGS.

  8. The pulsar planet production process

    NASA Technical Reports Server (NTRS)

    Phinney, E. S.; Hansen, B. M. S.

    1993-01-01

    Most plausible scenarios for the formation of planets around pulsars end with a disk of gas around the pulsar. The supplicant author then points to the solar system to bolster faith in the miraculous transfiguration of gas into planets. We here investigate this process of transfiguration. We derive analytic sequences of quasi-static disks which give good approximations to exact solutions of the disk diffusion equation with realistic opacity tables. These allow quick and efficient surveys of parameter space. We discuss the outward transfer of mass in accretion disks and the resulting timescale constraints, the effects of illumination by the central source on the disk and dust within it, and the effects of the widely different elemental compositions of the disks in the various scenarios, and their extensions to globular clusters. We point out where significant uncertainties exist in the appropriate grain opacities, and in the effect of illumination and winds from the neutron star.

  9. Potential release of fibers from burning carbon composites. [aircraft fires

    NASA Technical Reports Server (NTRS)

    Bell, V. L.

    1980-01-01

    A comprehensive experimental carbon fiber source program was conducted to determine the potential for the release of conductive carbon fibers from burning composites. Laboratory testing determined the relative importance of several parameters influencing the amounts of single fibers released, while large-scale aviation jet fuel pool fires provided realistic confirmation of the laboratory data. The dimensions and size distributions of fire-released carbon fibers were determined, not only for those of concern in an electrical sense, but also for those of potential interest from a health and environmental standpoint. Fire plume and chemistry studies were performed with large pool fires to provide an experimental input into an analytical modelling of simulated aircraft crash fires. A study of a high voltage spark system resulted in a promising device for the detection, counting, and sizing of electrically conductive fibers, for both active and passive modes of operation.

  10. Analysis of the Vibration Propagation in the Subsoil

    NASA Astrophysics Data System (ADS)

    Jastrzębska, Małgorzata; Łupieżowiec, Marian; Uliniarz, Rafał; Jaroń, Artur

    2015-02-01

    The paper presents, in a comprehensive way, issues related to the propagation through the subsoil of vibrations originating during vibratory driving of sheet piles. The considerations comprised an FEM analysis of the initial-boundary behaviour of the subsoil under the impacts accompanying the works performed. The analysis used the authors' RU+MCC constitutive model, which can realistically describe the complex deformation characteristics of soils at the small strains that accompany the phenomenon of shock propagation. The basis for creating the model and specifying its material parameters consisted of high-quality tests performed in a triaxial apparatus using proximity detectors, which guarantee proper measurement of strains ranging from 10^-1 to 10^-3%, and bender elements. Results obtained from the numerical analyses were confronted with results of field tests, consisting of measurements of the acceleration amplitudes generated on the ground surface by technological impacts versus the distance from the vibration source.

  11. A Class of Exact Solutions of the Boussinesq Equation for Horizontal and Sloping Aquifers

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S.; Porporato, A.

    2018-02-01

    The nonlinear equation of Boussinesq (1877) is a foundational approach for studying groundwater flow through an unconfined aquifer, but solving the full nonlinear version of the Boussinesq equation remains a challenge. Here, we present an exact solution to the full nonlinear Boussinesq equation that not only applies to sloping aquifers but also accounts for source and sink terms such as bedrock seepage, an often significant flux in headwater catchments. This new solution captures the hysteretic relationship (a loop rating curve) between the groundwater flow rate and the water table height, which may be used to provide a more realistic representation of streamflow and groundwater dynamics in hillslopes. In addition, the solution provides an expression where the flow recession varies based on hillslope parameters such as bedrock slope, bedrock seepage, aquifer recharge, plant transpiration, and other factors that vary across landscape types.
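
    For readers who want a numerical reference against which closed-form solutions of this kind can be checked, a minimal explicit finite-difference integration of the 1-D sloping-aquifer Boussinesq equation with a lumped recharge/seepage source term is sketched below. All parameter values are illustrative, and the scheme is first-order and only conditionally stable.

    ```python
    import numpy as np

    k, f = 1e-4, 0.3      # hydraulic conductivity (m/s), drainable porosity
    slope = 0.05          # bedrock slope
    N = 1e-8              # net recharge minus bedrock seepage (m/s)
    L, nx = 100.0, 201
    dx = L / (nx - 1)
    h = np.full(nx, 0.5)  # initial water-table height above bedrock (m)
    dt = 100.0            # s; satisfies dt < 0.5*f*dx^2/(k*h) for these values

    for _ in range(50_000):
        # Darcy flux per unit width, q = -k h (dh/dx + slope)
        q = -k * h * (np.gradient(h, dx) + slope)
        # Mass balance: f dh/dt = -dq/dx + N (interior nodes only)
        h[1:-1] += dt / f * (-np.gradient(q, dx)[1:-1] + N)
        h[0] = 0.0                                  # stream at the outlet
        h[-1] = h[-2]                               # no-flow divide upslope
    print(f"outlet discharge ~ {abs(q[1]):.2e} m^2/s per unit width")
    ```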

  12. Gas Core Reactor Numerical Simulation Using a Coupled MHD-MCNP Model

    NASA Technical Reports Server (NTRS)

    Kazeminezhad, F.; Anghaie, S.

    2008-01-01

    Analysis is provided in this report of using two head-on magnetohydrodynamic (MHD) shocks to achieve supercritical nuclear fission in an axially elongated cylinder filled with UF4 gas as an energy source for deep space missions. The motivation for each aspect of the design is explained and supported by theory and numerical simulations. A subsequent report will provide detail on relevant experimental work to validate the concept. Here the focus is on the theory of and simulations for the proposed gas core reactor conceptual design from the onset of shock generations to the supercritical state achieved when the shocks collide. The MHD model is coupled to a standard nuclear code (MCNP) to observe the neutron flux and fission power attributed to the supercritical state brought about by the shock collisions. Throughout the modeling, realistic parameters are used for the initial ambient gaseous state and currents to ensure a resulting supercritical state upon shock collisions.

  13. Nonparametric variational optimization of reaction coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banushkina, Polina V.; Krivov, Sergei V., E-mail: s.krivov@leeds.ac.uk

    State-of-the-art realistic simulations of complex atomic processes commonly produce trajectories of large size, making the development of automated analysis tools very important. A popular approach aimed at extracting dynamical information consists of projecting these trajectories onto optimally selected reaction coordinates or collective variables. For equilibrium dynamics between any two boundary states, the committor function, also known as the folding probability in protein folding studies, is often considered the optimal coordinate. To determine it, one selects a functional form with many parameters and trains it on the trajectories using various criteria. A major problem with such an approach is that a poor initial choice of the functional form may lead to sub-optimal results. Here, we describe an approach which allows one to optimize the reaction coordinate without selecting its functional form, thus avoiding this source of error.

  14. Magnetism and thermal evolution of the terrestrial planets

    NASA Technical Reports Server (NTRS)

    Stevenson, D. J.; Spohn, T.; Schubert, G.

    1983-01-01

    The absence in the cases of Venus and Mars of the substantial intrinsic magnetic fields of the earth and Mercury is considered, in light of thermal history calculations which suggest that, while the cores of Mercury and the earth are continuing to freeze, the cores of Venus and Mars may still be completely liquid. It is noted that completely fluid cores, lacking intrinsic heat sources, are not likely to sustain thermal convection for the age of the solar system, but instead cool to a subadiabatic, conductive state that cannot maintain a dynamo, since it lacks the gravitational energy release and the chemically driven convection that accompany inner core growth. The models presented include realistic pressure- and composition-dependent freezing curves for the core, and material parameters are chosen so that correct present-day values of heat outflow, upper mantle temperature and viscosity, and inner core radius are obtained for the earth.

  15. Thermospheric temperature measurement technique.

    NASA Technical Reports Server (NTRS)

    Hueser, J. E.; Fowler, P.

    1972-01-01

    A method for measuring temperature in the earth's lower thermosphere from a high-velocity probe is described. An undisturbed atmospheric sample is admitted to the instrument by means of a free molecular flow inlet system of skimmers, which avoids surface collisions of the molecules prior to detection. Measurement of the time-of-flight distribution of an initially well-localized group of nitrogen metastable molecular states, produced in an open, crossed electron-molecular beam source, yields information on the atmospheric temperature. It is shown that for high vehicle velocities, the time-of-flight distribution of the metastable flux is a sensitive indicator of atmospheric temperature. The temperature measurement precision should be greater than 94% at the 99% confidence level over the range of altitudes from 120-170 km. These precision and altitude range estimates are based on statistical consideration of the counting rates achieved with a multichannel analyzer using realistic values for system parameters.
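
    As a hedged illustration of why the time-of-flight distribution encodes temperature: for a drifting Maxwellian gas at temperature T, metastables released at t = 0 and detected at distance L arrive with a distribution of roughly the form

      \[ f(t) \;\propto\; t^{-n} \exp\!\left[ -\frac{m \left( L/t - v_0 \right)^2}{2 k_B T} \right], \]

    where v_0 is the drift (vehicle) velocity, m the molecular mass, and the prefactor exponent n (typically 4-5) depends on the source and detector geometry. The width of f(t) about its peak is governed directly by T, which is the basis of the measurement. This is a generic textbook form, not the instrument-specific expression from the paper.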

  16. Shear waves in inhomogeneous, compressible fluids in a gravity field.

    PubMed

    Godin, Oleg A

    2014-03-01

    While elastic solids support compressional and shear waves, waves in ideal compressible fluids are usually thought of as compressional waves. Here, a class of acoustic-gravity waves is studied in which the dilatation is identically zero, and the pressure and density remain constant in each fluid particle. These shear waves are described by an exact analytic solution of the linearized hydrodynamics equations in inhomogeneous, quiescent, inviscid, compressible fluids with piecewise continuous parameters in a uniform gravity field. It is demonstrated that the shear acoustic-gravity waves can also be supported by moving fluids, as well as by quiescent, viscous fluids with and without thermal conductivity. Excitation of a shear-wave normal mode by a point source and the normal mode distortion in realistic environmental models are considered. The shear acoustic-gravity waves are likely to play a significant role in coupling wave processes in the ocean and atmosphere.
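
    Restating the defining property of this wave class in symbols (illustrative notation): the fluid-particle velocity u satisfies

      \[ \nabla \cdot \mathbf{u} \equiv 0, \qquad \frac{Dp}{Dt} = \frac{D\rho}{Dt} = 0, \]

    i.e. the motion is divergence-free and each fluid particle carries its pressure and density unchanged, in contrast to ordinary acoustic waves, for which the dilatation is the restoring mechanism.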

  17. The Propagation of Cosmic Rays from the Galactic Wind Termination Shock: Back to the Galaxy?

    NASA Astrophysics Data System (ADS)

    Merten, Lukas; Bustard, Chad; Zweibel, Ellen G.; Becker Tjus, Julia

    2018-05-01

    Although several theories exist for the origin of cosmic rays (CRs) in the region between the spectral “knee” and “ankle,” this problem is still unsolved. A variety of observations suggest that the transition from Galactic to extragalactic sources occurs in this energy range. In this work, we examine whether a Galactic wind that eventually forms a termination shock far outside the Galactic plane can contribute as a possible source to the observed flux in the region of interest. Previous work by Bustard et al. estimated that particles can be accelerated to energies above the “knee” up to R_max = 10^16 eV for parameters drawn from a model of a Milky Way wind. A remaining question is whether the accelerated CRs can propagate back into the Galaxy. To answer this crucial question, we simulate the propagation of the CRs using the low-energy extension of the CRPropa framework, based on the solution of the transport equation via stochastic differential equations. The setup accounts for all relevant processes, including three-dimensional anisotropic spatial diffusion, advection, and corresponding adiabatic cooling. We find that, assuming realistic parameters for the shock evolution, a possible Galactic termination shock can contribute significantly to the energy budget in the “knee” region and above. We estimate the resulting neutrino fluxes and find them to be below measurements from IceCube and limits by KM3NeT.
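
    The stochastic-differential-equation propagation mentioned above can be illustrated with a minimal Euler-Maruyama sketch for a single pseudo-particle. This is an illustrative toy with a constant, isotropic diffusion coefficient; it is not the CRPropa implementation, which additionally handles anisotropic, position-dependent diffusion tensors and adiabatic cooling:

      import numpy as np

      def propagate_pseudo_particle(x0, u, kappa, dt, n_steps, rng):
          """Euler-Maruyama solution of dX = u dt + sqrt(2*kappa) dW.

          x0    : initial position (3-vector)
          u     : advection (wind) velocity (3-vector) -- illustrative
          kappa : scalar spatial diffusion coefficient -- assumed constant
          """
          x = np.asarray(x0, dtype=float).copy()
          trajectory = [x.copy()]
          for _ in range(n_steps):
              dW = rng.normal(0.0, np.sqrt(dt), size=3)   # Wiener increments
              x += np.asarray(u) * dt + np.sqrt(2.0 * kappa) * dW
              trajectory.append(x.copy())
          return np.array(trajectory)

      # Example: one particle advected away from the disk while diffusing
      rng = np.random.default_rng(42)
      path = propagate_pseudo_particle(
          x0=[0.0, 0.0, 0.0], u=[0.0, 0.0, 1.0], kappa=0.01,
          dt=0.1, n_steps=1000, rng=rng)

    In the real transport problem, kappa becomes a position- and rigidity-dependent tensor, and an additional term proportional to the divergence of the wind velocity implements adiabatic energy losses.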

  18. How Well Does Fracture Set Characterization Reduce Uncertainty in Capture Zone Size for Wells Situated in Sedimentary Bedrock Aquifers?

    NASA Astrophysics Data System (ADS)

    West, A. C.; Novakowski, K. S.

    2005-12-01

    Regional groundwater flow models are rife with uncertainty. The three-dimensional flux vector fields must generally be inferred using inverse modelling from sparse measurements of hydraulic head, from measurements of hydraulic parameters at a scale that is minuscule in comparison to that of the domain, and from few, if any, measurements of recharge or discharge rate. Despite the inherent uncertainty in these models they are routinely used to delineate steady-state or time-of-travel capture zones for the purpose of wellhead protection. The latter are defined as the volume of the aquifer within which released particles will arrive at the well within the specified time, and their delineation requires the additional step of dividing the magnitudes of the flux vectors by the assumed porosity to arrive at the 'average linear groundwater velocity' vector field. Since the porosity is usually assumed constant over the domain, one could be forgiven for thinking that the uncertainty introduced at this step is minor in comparison to the flow model calibration step. We consider this question when the porosity in question is fracture porosity in flat-lying sedimentary bedrock. We also consider whether or not the diffusive uptake of solute into the rock matrix which lies between the source and the production well reduces or enhances the uncertainty. To evaluate the uncertainty, an aquifer cross section is conceptualized as an array of horizontal, randomly-spaced, parallel-plate fractures of random aperture, with adjacent horizontal fractures connected by vertical fractures, again of random spacing and aperture. The source is assumed to be a continuous concentration (i.e. a Dirichlet boundary condition) representing a leaking tank or a DNAPL pool, and the receptor is a fully penetrating well located in the down-gradient direction. In this context the time-of-travel capture zone is defined as the separation distance required such that the source does not contaminate the well beyond a threshold concentration within the specified time. Aquifers are simulated by drawing the random spacings and apertures from specified distributions. Predictions are made of capture zone size assuming various degrees of knowledge of these distributions, with the parameters of the horizontal fractures being estimated using simulated hydraulic tests and a maximum likelihood estimator. The uncertainty is evaluated by calculating the variance in the capture zone size estimated in multiple realizations. The results show that despite good strategies to estimate the parameters of the horizontal fractures, the uncertainty in capture zone size is enormous, mostly due to the lack of available information on vertical fractures. Also, at realistic distances (less than ten kilometers) and using realistic transmissivity distributions for the horizontal fractures, the uptake of solute from fractures into matrix cannot be relied upon to protect the production well from contamination.
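
    A minimal sketch of the kind of Monte Carlo aquifer generation described above. The distributions, parameter values, and function name are illustrative assumptions; only the parallel-plate cubic law, T = rho*g*b^3/(12*mu), is standard:

      import numpy as np

      RHO_G = 1000.0 * 9.81   # water density * gravity  [kg m^-2 s^-2]
      MU = 1.0e-3             # dynamic viscosity of water [Pa s]

      def generate_fracture_set(n, mean_spacing, aperture_median,
                                aperture_log_sigma, rng):
          """Draw one realization of a horizontal fracture set.

          Spacings are exponential and apertures lognormal -- both
          illustrative choices, not the paper's calibrated distributions.
          """
          spacing = rng.exponential(mean_spacing, size=n)           # m
          aperture = rng.lognormal(np.log(aperture_median),
                                   aperture_log_sigma, size=n)      # m
          # Parallel-plate (cubic) law for fracture transmissivity
          transmissivity = RHO_G * aperture**3 / (12.0 * MU)        # m^2/s
          return spacing, aperture, transmissivity

      rng = np.random.default_rng(7)
      spacing, b, T = generate_fracture_set(
          n=200, mean_spacing=2.0, aperture_median=2e-4,
          aperture_log_sigma=0.5, rng=rng)

    Repeating this over many realizations and running the flow and transport model on each gives the variance in capture-zone size used to quantify the uncertainty.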

  19. Realistic Subsurface Anomaly Discrimination Using Electromagnetic Induction and an SVM Classifier

    DTIC Science & Technology

    2010-01-01

    proposed by Pasion and Oldenburg [25]: Q(t) = k t^(-β) e^(-γt). (10) Various combinations of these fitting parameters can be used as inputs to classifier... Pasion-Oldenburg parameters k, β, and γ for each anomaly by a direct nonlinear least-squares fit of (10) and by linear (pseudo)inversion of its... combinations of the Pasion-Oldenburg parameters. Combining k and γ yields results similar to those of k and R, as Figure 7 and Table 2 show. Figure 8 and...

  20. Forecasts of health care utilization related to pandemic A(H1N1)2009 influenza in the Nord-Pas-de-Calais region, France.

    PubMed

    Giovannelli, J; Loury, P; Lainé, M; Spaccaferri, G; Hubert, B; Chaud, P

    2015-05-01

    To describe and evaluate the forecasts of the load that pandemic A(H1N1)2009 influenza would have on the general practitioner (GP) and hospital care systems, especially during its peak, in the Nord-Pas-de-Calais (NPDC) region, France. Modelling study. The epidemic curve was modelled using an assumption of normal distribution of cases. The values for the forecast parameters were estimated from a literature review of observed data from the Southern hemisphere and French Overseas Territories, where the pandemic had already occurred. Two scenarios were considered, one realistic, the other pessimistic, enabling the authors to evaluate the 'reasonable worst case'. Forecasts were then assessed by comparing them with observed data in the NPDC region (population 4 million). The realistic scenario's forecasts estimated 300,000 cases, 1500 hospitalizations, and 225 intensive care unit (ICU) admissions for the pandemic wave; 115 hospital beds and 45 ICU beds would be required per day during the peak. The pessimistic scenario's forecasts were 2-3 times higher than the realistic scenario's forecasts. Observed data were: 235,000 cases, 1585 hospitalizations, 58 ICU admissions; and a maximum of 11.6 ICU beds per day. The realistic scenario correctly estimated the temporal distribution of GP and hospitalized cases but overestimated the number of cases admitted to ICU. Obtaining more robust data for parameter estimation--particularly for the ICU admission rate in the population, which the authors recommend using--may provide better forecasts. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
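
    A minimal sketch of the normal-curve forecasting step described above. The function and all parameter values here are illustrative assumptions (the paper's actual inputs came from Southern-hemisphere and overseas-territory observations); note that the hypothetical 0.5% rate times 300,000 cases reproduces the quoted 1500 hospitalizations:

      import numpy as np
      from scipy.stats import norm

      def weekly_cases(total_cases, peak_week, sd_weeks, n_weeks=20):
          """Distribute the epidemic's total cases over weeks under the
          assumption of a normally shaped epidemic curve."""
          weeks = np.arange(n_weeks)
          # probability mass falling in each one-week interval
          p = (norm.cdf(weeks + 1, loc=peak_week, scale=sd_weeks)
               - norm.cdf(weeks, loc=peak_week, scale=sd_weeks))
          return total_cases * p

      cases = weekly_cases(total_cases=300_000, peak_week=8, sd_weeks=3)
      hospitalizations = 0.005 * cases   # hypothetical hospitalization rate
      peak_weekly_load = hospitalizations.max()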

  1. Studies on the use of helicopters for oil spill clearance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinelli, F.N.

    A program of work was undertaken to assess the use of a commercially available underslung cropspraying bucket for spraying oil spill dispersants. The study consisted of land-based trials to measure relevant parameters of the spray and the effect on these parameters of spray height and dispersant viscosity. A sea trial was undertaken to observe the system under realistic conditions. (Copyright (c) Crown Copyright.)

  2. Universal dynamical properties preclude standard clustering in a large class of biochemical data.

    PubMed

    Gomez, Florian; Stoop, Ralph L; Stoop, Ruedi

    2014-09-01

    Clustering of chemical and biochemical data based on observed features is a central cognitive step in the analysis of chemical substances, in particular in combinatorial chemistry, or of complex biochemical reaction networks. Often, for reasons unknown to the researcher, this step produces disappointing results. Once the sources of the problem are known, improved clustering methods might revitalize the statistical approach of compound and reaction search and analysis. Here, we present a generic mechanism that may be at the origin of many clustering difficulties. The variety of dynamical behaviors that complex biochemical reactions can exhibit on variation of the system parameters is a fundamental system fingerprint. In parameter space, shrimp-like or swallow-tail structures separate parameter sets that lead to stable periodic dynamical behavior from those leading to irregular behavior. We work out the genericity of this phenomenon and demonstrate novel examples of its occurrence in realistic models of biophysics. Although we elucidate the phenomenon by considering the emergence of periodicity in dependence on system parameters in a low-dimensional parameter space, the conclusions from our simple setting are shown to continue to be valid for features in a higher-dimensional feature space, as long as the feature-generating mechanism is not too extreme and the dimension of this space is not too high compared with the amount of available data. For online versions of super-paramagnetic clustering see http://stoop.ini.uzh.ch/research/clustering. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. Waveform inversion of acoustic waves for explosion yield estimation

    DOE PAGES

    Kim, K.; Rodgers, A. J.

    2016-07-08

    We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at greater distances, provided proper meteorological specifications are available.
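
    The inversion step can be sketched as damped linear least squares: the recorded pressure d(t) is modeled as the convolution of the numerical Green's function g(t) with the unknown source time function s(t), so d = G s with G a Toeplitz convolution matrix. The function below is an illustrative sketch under these assumptions, not the authors' code:

      import numpy as np
      from scipy.linalg import toeplitz

      def invert_source_time_function(record, green, damping=1e-3):
          """Damped least-squares estimate of s in d = G s.

          record  : observed waveform d(t), length n
          green   : modeled Green's function g(t), length <= n
          damping : Tikhonov regularization weight (illustrative value)
          """
          n = len(record)
          first_col = np.zeros(n)
          first_col[:len(green)] = green
          G = toeplitz(first_col, np.zeros(n))  # lower-triangular convolution
          A = G.T @ G + damping * np.identity(n)
          return np.linalg.solve(A, G.T @ record)

    The amplitude (or integral) of the recovered source time function is then mapped to yield through the air-blast scaling model.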

  5. Shear stress along the conduit wall as a plausible source of tilt at Soufrière Hills volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Green, D. N.; Neuberg, J.; Cayol, V.

    2006-05-01

    Surface deformations recorded in close proximity to the active lava dome at Soufrière Hills volcano, Montserrat, can be used to infer stresses within the uppermost 1000 m of the conduit system. Most deformation source models consider only isotropic pressurisation of the conduit. We show that tilt recorded during rapid magma extrusion in 1997 could have also been generated by shear stresses sustained along the conduit wall; these stresses are a consequence of pressure gradients that develop along the conduit. Numerical modelling, incorporating realistic topography, can reproduce both the morphology and half the amplitude of the measured deformation field using a realistic shear stress amplitude, equivalent to a pressure gradient of 3.5 × 10^4 Pa m^-1 along a 1000 m long conduit with a 15 m radius. This shear stress model has advantages over the isotropic pressure models because it does not require either physically unattainable overpressures or source radii larger than 200 m to explain the same deformation.

  6. TOXICOLOGICAL EVALUATION OF REALISTIC EMISSIONS OF SOURCE AEROSOLS (TERESA): APPLICATION TO POWER PLANT-DERIVED PM2.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annette C. Rohr; Petros Koutrakis; John Godleski

    Determining the health impacts of different sources and components of fine particulate matter (PM2.5) is an important scientific goal, because PM is a complex mixture of both inorganic and organic constituents that likely differ in their potential to cause adverse health outcomes. The TERESA (Toxicological Evaluation of Realistic Emissions of Source Aerosols) study focused on two PM sources - coal-fired power plants and mobile sources - and sought to investigate the toxicological effects of exposure to realistic emissions from these sources. The DOE-EPRI Cooperative Agreement covered the performance and analysis of field experiments at three power plants. The mobile source component consisted of experiments conducted at a traffic tunnel in Boston; these activities were funded through the Harvard-EPA Particulate Matter Research Center and will be reported separately in the peer-reviewed literature. TERESA attempted to delineate health effects of primary particles, secondary (aged) particles, and mixtures of these with common atmospheric constituents. The study involved withdrawal of emissions directly from power plant stacks, followed by aging and atmospheric transformation of emissions in a mobile laboratory in a manner that simulated downwind power plant plume processing. Secondary organic aerosol (SOA) derived from the biogenic volatile organic compound α-pinene was added in some experiments, and in others ammonia was added to neutralize strong acidity. Specifically, four scenarios were studied at each plant: primary particles (P); secondary (oxidized) particles (PO); oxidized particles + secondary organic aerosol (SOA) (POS); and oxidized and neutralized particles + SOA (PONS). Extensive exposure characterization was carried out, including gas-phase and particulate species. Male Sprague Dawley rats were exposed for 6 hours to filtered air or different atmospheric mixtures. Toxicological endpoints included (1) breathing pattern; (2) bronchoalveolar lavage (BAL) fluid cytology and biochemistry; (3) blood cytology; (4) in vivo oxidative stress in heart and lung tissue; and (5) heart and lung histopathology. In addition, at one plant, cardiac arrhythmias and heart rate variability (HRV) were evaluated in a rat model of myocardial infarction. Statistical analyses included analyses of variance (ANOVA) to determine differences between exposed and control animals in response to different scenario/plant combinations; univariate analyses to link individual scenario components to responses; and multivariate analyses (Random Forest analyses) to evaluate component effects in a multipollutant setting. Results from the power plant studies indicated some biological responses to some plant/scenario combinations. A number of significant breathing pattern changes were observed; however, significant clinical changes such as specific irritant effects were not readily apparent, and effects tended to be isolated changes in certain respiratory parameters. Some individual exposure scenario components appeared to be more strongly and consistently related to respiratory parameter changes; however, the specific scenario investigated remained a better predictor of response than individual components of that scenario. Bronchoalveolar lavage indicated some changes in cellularity of BAL fluid in response to the POS and PONS scenarios; these responses were considered toxicologically mild in magnitude. No changes in blood cytology were observed at any plant or scenario. Lung oxidative stress was increased with the POS scenario at one plant, and cardiac oxidative stress was increased with the PONS scenario also at one plant, suggesting limited oxidative stress in response to power plant emissions with added atmospheric constituents. There were some mild histological findings in lung tissue in response to the P and PONS scenarios. Finally, the MI model experiments indicated that premature ventricular beat frequency was increased at the plant studied, while no changes in heart rate, HRV, or electrocardiographic intervals were observed. Overall, the TERESA results should be interpreted as indicating toxicologically mild adverse responses to some scenarios. The varied responses among the three plants indicate heterogeneity in emissions. Ongoing studies using the TERESA approach to evaluate the toxicity of traffic-related pollution will yield valuable data for comparative toxicity assessment and will give us a better understanding of the contribution of different sources to the morbidity and mortality associated with exposure to air pollution.

  7. Prediction of the 21-cm signal from reionization: comparison between 3D and 1D radiative transfer schemes

    NASA Astrophysics Data System (ADS)

    Ghara, Raghunath; Mellema, Garrelt; Giri, Sambit K.; Choudhury, T. Roy; Datta, Kanan K.; Majumdar, Suman

    2018-05-01

    Three-dimensional radiative transfer simulations of the epoch of reionization can produce realistic results, but are computationally expensive. On the other hand, simulations relying on one-dimensional radiative transfer solutions are faster but limited in accuracy due to their more approximate nature. Here, we compare the performance of the reionization simulation codes GRIZZLY and C2-RAY, which use 1D and 3D radiative transfer schemes, respectively. The comparison is performed using the same cosmological density fields, halo catalogues, and source properties. We find that the ionization maps, as well as the 21-cm signal maps, from these two simulations are very similar even for complex scenarios which include thermal feedback on low-mass haloes. In terms of statistical quantities such as the power spectrum of the brightness-temperature fluctuations, the two schemes agree within 10 per cent throughout the entire reionization history. GRIZZLY seems to perform slightly better than the seminumerical approaches considered in Majumdar et al., which are based on the excursion set principle. We argue that GRIZZLY can be efficiently used for exploring parameter space, establishing observation strategies, and estimating parameters from 21-cm observations.
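
    For orientation, the differential brightness temperature of the 21-cm signal that both codes predict is commonly written (a standard approximation from the reionization literature, not a formula quoted from this record) as

      \[ \delta T_b \;\approx\; 27\, x_{\mathrm{HI}} (1+\delta) \left( \frac{\Omega_b h^2}{0.023} \right) \left( \frac{0.15}{\Omega_m h^2} \, \frac{1+z}{10} \right)^{1/2} \left( 1 - \frac{T_{\mathrm{CMB}}}{T_s} \right) \mathrm{mK}, \]

    where x_HI is the neutral hydrogen fraction, δ the matter overdensity, and T_s the spin temperature; thermal feedback on low-mass haloes enters through its effect on x_HI and T_s.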

  8. Spherically symmetric Einstein-aether perfect fluid models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coley, Alan A.; Latta, Joey; Leon, Genly

    We investigate spherically symmetric cosmological models in Einstein-aether theory with a tilted (non-comoving) perfect fluid source. We use a 1+3 frame formalism and adopt the comoving aether gauge to derive the evolution equations, which form a well-posed system of first order partial differential equations in two variables. We then introduce normalized variables. The formalism is particularly well-suited for numerical computations and the study of the qualitative properties of the models, which are also solutions of Horava gravity. We study the local stability of the equilibrium points of the resulting dynamical system corresponding to physically realistic inhomogeneous cosmological models and astrophysical objects with values for the parameters which are consistent with current constraints. In particular, we consider dust models in β-normalized variables and derive a reduced (closed) evolution system, and we obtain the general evolution equations for the spatially homogeneous Kantowski-Sachs models using appropriate bounded normalized variables. We then analyse these models, with special emphasis on the future asymptotic behaviour for different values of the parameters. Finally, we investigate static models for a mixture of a (necessarily non-tilted) perfect fluid with a barotropic equation of state and a scalar field.

  9. Hierarchical statistical modeling of xylem vulnerability to cavitation.

    PubMed

    Ogle, Kiona; Barber, Jarrett J; Willson, Cynthia; Thompson, Brenda

    2009-01-01

    Cavitation of xylem elements diminishes the water transport capacity of plants, and quantifying xylem vulnerability to cavitation is important to understanding plant function. Current approaches to analyzing hydraulic conductivity (K) data to infer vulnerability to cavitation suffer from problems such as the use of potentially unrealistic vulnerability curves, difficulty interpreting parameters in these curves, a statistical framework that ignores sampling design, and an overly simplistic view of uncertainty. This study illustrates how two common curves (exponential-sigmoid and Weibull) can be reparameterized in terms of meaningful parameters: maximum conductivity (k_sat), water potential (-P) at which percentage loss of conductivity (PLC) = X% (P_X), and the slope of the PLC curve at P_X (S_X), a 'sensitivity' index. We provide a hierarchical Bayesian method for fitting the reparameterized curves to K_h data. We illustrate the method using data for roots and stems of two populations of Juniperus scopulorum and test for differences in k_sat, P_X, and S_X between different groups. Two important results emerge from this study. First, the Weibull model is preferred because it produces biologically realistic estimates of PLC near P = 0 MPa. Second, stochastic embolisms contribute an important source of uncertainty that should be included in such analyses.
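
    For orientation, the Weibull vulnerability curve named above is commonly written with shape and scale parameters c and b (a standard form; the paper's contribution is to re-express it through k_sat, P_X, and S_X):

      \[ \mathrm{PLC}(P) = 100 \left\{ 1 - \exp\!\left[ -\left( \frac{P}{b} \right)^{c} \right] \right\}, \qquad K(P) = k_{\mathrm{sat}} \left( 1 - \frac{\mathrm{PLC}(P)}{100} \right), \]

    so that P_X is the water potential at which PLC = X%, and S_X is the slope dPLC/dP evaluated at P_X.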

  10. X-ray detectability of accreting isolated black holes in our Galaxy

    NASA Astrophysics Data System (ADS)

    Tsuna, Daichi; Kawanaka, Norita; Totani, Tomonori

    2018-06-01

    Detectability of isolated black holes (IBHs) without a companion star but emitting X-rays by accretion from dense interstellar medium (ISM) or molecular cloud gas is investigated. We calculate orbits of IBHs in the Galaxy to derive a realistic spatial distribution of IBHs for various mean values of kick velocity at their birth, υ_avg. X-ray luminosities of these IBHs are then calculated considering various phases of ISM and molecular clouds for a wide range of the accretion efficiency λ (a ratio of the actual accretion rate to the Bondi rate) that is rather uncertain. It is found that detectable IBHs mostly reside near the Galactic Centre (GC), and hence taking the Galactic structure into account is essential. In the hard X-ray band, where identification of IBHs from other contaminating X-ray sources may be easier, the expected number of IBHs detectable by the past survey by NuSTAR towards GC is at most order unity. However, 30-100 IBHs may be detected by the future survey by FORCE with an optimistic parameter set of υ_avg = 50 km s^-1 and λ = 0.1, implying that it may be possible to detect IBHs or constrain the model parameters.
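
    The Bondi rate referenced through the efficiency λ has the standard form (illustrative notation, not quoted from the paper):

      \[ \dot{M}_{\mathrm{Bondi}} = \frac{4\pi G^2 M^2 \rho}{\left( c_s^2 + v^2 \right)^{3/2}}, \qquad \dot{M} = \lambda\, \dot{M}_{\mathrm{Bondi}}, \]

    where M is the IBH mass, ρ and c_s the density and sound speed of the ambient gas, and v the velocity of the IBH relative to it; the X-ray luminosity then follows from the accretion rate through a radiative-efficiency prescription.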

  11. Generation of anatomically realistic numerical phantoms for photoacoustic and ultrasonic breast imaging

    NASA Astrophysics Data System (ADS)

    Lou, Yang; Zhou, Weimin; Matthews, Thomas P.; Appleton, Catherine M.; Anastasio, Mark A.

    2017-04-01

    Photoacoustic computed tomography (PACT) and ultrasound computed tomography (USCT) are emerging modalities for breast imaging. As in all emerging imaging technologies, computer-simulation studies play a critically important role in developing and optimizing the designs of hardware and image reconstruction methods for PACT and USCT. Using computer-simulations, the parameters of an imaging system can be systematically and comprehensively explored in a way that is generally not possible through experimentation. When conducting such studies, numerical phantoms are employed to represent the physical properties of the patient or object to-be-imaged that influence the measured image data. It is highly desirable to utilize numerical phantoms that are realistic, especially when task-based measures of image quality are to be utilized to guide system design. However, most reported computer-simulation studies of PACT and USCT breast imaging employ simple numerical phantoms that oversimplify the complex anatomical structures in the human female breast. We develop and implement a methodology for generating anatomically realistic numerical breast phantoms from clinical contrast-enhanced magnetic resonance imaging data. The phantoms will depict vascular structures and the volumetric distribution of different tissue types in the breast. By assigning optical and acoustic parameters to different tissue structures, both optical and acoustic breast phantoms will be established for use in PACT and USCT studies.

  12. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models

    PubMed Central

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, leading usually to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but uses the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical. The ELASSO, by contrast, has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and the iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website. PMID:29200994
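
    For orientation, the penalized-regression forms of the two priors can be written as follows (standard formulations, not the paper's hierarchical Bayesian version; X denotes the lead field and β the source amplitudes):

      \[ \hat{\boldsymbol{\beta}}_{\mathrm{ENET}} = \arg\min_{\boldsymbol{\beta}} \; \| \mathbf{y} - \mathbf{X}\boldsymbol{\beta} \|_2^2 + \lambda_1 \| \boldsymbol{\beta} \|_1 + \lambda_2 \| \boldsymbol{\beta} \|_2^2, \qquad \Omega_{\mathrm{ELASSO}}(\boldsymbol{\beta}) = \lambda \sum_{g} \Big( \sum_{i \in g} |\beta_i| \Big)^{2}, \]

    where the ELASSO penalty is a mixed L(1,2) norm that enforces sparsity within each group g of coefficients; the exact grouping across the spatial and temporal dimensions is specified in the paper.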

  14. Predicting Residential Exposure to Phthalate Plasticizer Emitted from Vinyl Flooring: Sensitivity, Uncertainty, and Implications for Biomonitoring

    PubMed Central

    Xu, Ying; Cohen Hubal, Elaine A.; Little, John C.

    2010-01-01

    Background: Because of the ubiquitous nature of phthalates in the environment and the potential for adverse human health effects, an urgent need exists to identify the most important sources and pathways of exposure. Objectives: Using emissions of di(2-ethylhexyl) phthalate (DEHP) from vinyl flooring (VF) as an illustrative example, we describe a fundamental approach that can be used to identify the important sources and pathways of exposure associated with phthalates in indoor material. Methods: We used a three-compartment model to estimate the emission rate of DEHP from VF and the evolving exposures via inhalation, dermal absorption, and oral ingestion of dust in a realistic indoor setting. Results: A sensitivity analysis indicates that the VF source characteristics (surface area and material-phase concentration of DEHP), as well as the external mass-transfer coefficient and ventilation rate, are important variables that influence the steady-state DEHP concentration and the resulting exposure. In addition, DEHP is sorbed by interior surfaces, and the associated surface area and surface/air partition coefficients strongly influence the time to steady state. The roughly 40-fold range in predicted exposure reveals the inherent difficulty in using biomonitoring to identify specific sources of exposure to phthalates in the general population. Conclusions: The relatively simple dependence on source and chemical-specific transport parameters suggests that the mechanistic modeling approach could be extended to predict exposures arising from other sources of phthalates as well as additional sources of other semivolatile organic compounds (SVOCs) such as biocides and flame retardants. This modeling approach could also provide a relatively inexpensive way to quantify exposure to many of the SVOCs used in indoor materials and consumer products. PMID:20123613

  16. Modelling the performance of interferometric gravitational-wave detectors with realistically imperfect optics

    NASA Astrophysics Data System (ADS)

    Bochner, Brett

    1998-12-01

    The LIGO project is part of a world-wide effort to detect the influx of Gravitational Waves upon the earth from astrophysical sources, via their interaction with laser beams in interferometric detectors that are designed for extraordinarily high sensitivity. Central to the successful performance of LIGO detectors is the quality of their optical components, and the efficient optimization of interferometer configuration parameters. To predict LIGO performance with optics possessing realistic imperfections, we have developed a numerical simulation program to compute the steady-state electric fields of a complete, coupled-cavity LIGO interferometer. The program can model a wide variety of deformations, including laser beam mismatch and/or misalignment, finite mirror size, mirror tilts, curvature distortions, mirror surface roughness, and substrate inhomogeneities. Important interferometer parameters are automatically optimized during program execution to achieve the best possible sensitivity for each new set of perturbed mirrors. This thesis includes investigations of two interferometer designs: the initial LIGO system, and an advanced LIGO configuration called Dual Recycling. For Initial-LIGO simulations, the program models carrier and sideband frequency beams to compute the explicit shot-noise-limited gravitational wave sensitivity of the interferometer. It is demonstrated that optics of exceptional quality (root-mean-square deformations of less than ~1 nm in the central mirror regions) are necessary to meet Initial-LIGO performance requirements, but that they can be feasibly met. It is also shown that improvements in mirror quality can substantially increase LIGO's sensitivity to selected astrophysical sources. For Dual Recycling, the program models gravitational-wave-induced sidebands over a range of frequencies to demonstrate that the tuned and narrow-banded signal responses predicted for this configuration can be achieved with imperfect optics. Dual Recycling has lower losses at the interferometer signal port than the Initial-LIGO system, though not significantly improved tolerance to mirror roughness deformations in terms of maintaining high signals. Finally, it is shown that 'Wavefront Healing', the claim that losses can be re-injected into the system to feed the gravitational wave signals, is successful in theory, but limited in practice for optics which cause large scattering losses. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  18. Benefits of detailed models of muscle activation and mechanics

    NASA Technical Reports Server (NTRS)

    Lehman, S. L.; Stark, L.

    1981-01-01

    Recent biophysical and physiological studies identified some of the detailed mechanisms involved in excitation-contraction coupling, muscle contraction, and deactivation. Mathematical models incorporating these mechanisms allow independent estimates of key parameters, direct interplay between basic muscle research and the study of motor control, and realistic model behaviors, some of which are not accessible to previous, simpler models. The existence of previously unmodeled behaviors has important implications for strategies of motor control and identification of neural signals. New developments in the analysis of differential equations make the more detailed models feasible for simulation in realistic experimental situations.

  19. Model parameters for representative wetland plant functional groups

    USDA-ARS?s Scientific Manuscript database

    Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and...

  20. Assessing the monitoring performance using a synthetic microseismic catalogue for hydraulic fracturing

    NASA Astrophysics Data System (ADS)

    Ángel López Comino, José; Kriegerowski, Marius; Cesca, Simone; Dahm, Torsten; Mirek, Janusz; Lasocki, Stanislaw

    2016-04-01

    Hydraulic fracturing is considered among the human operations which could induce or trigger seismicity or microseismic activity. The influence of hydraulic fracturing operations is typically expected in the form of weak-magnitude events. However, the sensitivity of the rock mass to trigger seismicity varies significantly for different sites and cannot be easily predicted prior to operations. In order to assess the sensitivity of microseismicity to hydraulic fracturing operations, we perform seismic monitoring at a shale gas exploration/exploitation site in the central-western part of the Peribaltic synclise in Pomerania (Poland). The monitoring will be continued before, during and after the termination of hydraulic fracturing operations. The fracking operations are planned for April 2016 at a depth of 4000 m. A specific network setup has been installed since summer 2015, including a distributed network of broadband stations and three small-scale arrays. The network covers a region of 60 km^2. The aperture of the small-scale arrays is between 450 and 950 m. So far no fracturing operations have been performed, but seismic data can already be used to assess the seismic noise and background microseismicity, and to investigate and assess the detection performance of our monitoring setup. Here we adopt a recently developed tool to generate a synthetic catalogue and waveform dataset, which realistically account for the expected microseismicity. Synthetic waveforms are generated for a local crustal model, considering a realistic distribution of hypocenters, magnitudes, moment tensors, and source durations. Noise-free synthetic seismograms are superposed on real noise traces, to reproduce true monitoring conditions at the different station locations. We estimate the detection probability for different magnitudes, source-receiver distances, and noise conditions. This information is used to estimate the magnitude of completeness at the depth of the hydraulic fracturing horizontal wells. Our technique is useful to evaluate the efficiency of the seismic network and validate detection and location algorithms, taking into account the signal-to-noise ratio. The same dataset may be used at a later time, to assess the performance of other seismological analyses, such as hypocentral location, magnitude estimation and source parameter inversion. This work is funded by the EU H2020 SHEER project.
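
    A minimal sketch of the detection-probability estimate described above. The amplitude-attenuation law and the lognormal noise model below are illustrative placeholders, not the project's calibrated relations:

      import numpy as np

      def detection_probability(mag, dist_km, median_noise,
                                snr_min=3.0, noise_spread=0.5,
                                n_trials=1000, rng=None):
          """Monte Carlo estimate of the probability that an event of
          magnitude `mag` at source-receiver distance `dist_km` exceeds
          the SNR detection threshold at one station."""
          rng = rng or np.random.default_rng()
          # Toy attenuation law: log10(A) = mag - 2*log10(r) + const
          amplitude = 10.0 ** (mag - 2.0 * np.log10(dist_km) - 1.0)
          # Noise level fluctuates from window to window (lognormal spread)
          noise = median_noise * rng.lognormal(0.0, noise_spread, n_trials)
          return float(np.mean(amplitude / noise >= snr_min))

      # Example: sweep magnitudes to bracket the magnitude of completeness
      for m in (-1.0, 0.0, 1.0):
          print(m, detection_probability(m, dist_km=5.0, median_noise=1e-3))

    Sweeping magnitude and distance in this way, with noise statistics taken from the recorded traces, yields detection-probability maps and hence the magnitude of completeness at the depth of the horizontal wells.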

  1. Toxicity of biosolids-derived triclosan and triclocarban to six crop species.

    PubMed

    Prosser, Ryan S; Lissemore, Linda; Solomon, Keith R; Sibley, Paul K

    2014-08-01

    Biosolids are an important source of nutrients and organic matter, which are necessary for the productive cultivation of crop plants. Biosolids have been found to contain the personal care products triclosan and triclocarban at high concentrations relative to other pharmaceuticals and personal care products. The present study investigates whether exposure of 6 plant species (radish, carrot, soybean, lettuce, spring wheat, and corn) to triclosan or triclocarban derived from biosolids has an adverse effect on seed emergence and/or plant growth parameters. Plants were grown in soil amended with biosolids at a realistic agronomic rate. Biosolids were spiked with triclosan or triclocarban to produce increasing environmentally relevant exposures. The concentration of triclosan and triclocarban in biosolids-amended soil declined by up to 97% and 57%, respectively, over the course of the experiments. Amendment with biosolids had a positive effect on the majority of growth parameters in radish, carrot, soybean, lettuce, and wheat plants. No consistent triclosan- or triclocarban-dependent trends in seed emergence and plant growth parameters were observed in 5 of 6 plant species. A significant negative trend in shoot mass was observed for lettuce plants exposed to increasing concentrations of triclocarban (p<0.001). If best management practices are followed for biosolids amendment, triclosan and triclocarban pose a negligible risk to seed emergence and growth of crop plants. © 2014 SETAC.

  2. An Empirical Mass Function Distribution

    NASA Astrophysics Data System (ADS)

    Murray, S. G.; Robotham, A. S. G.; Power, C.

    2018-03-01

    The halo mass function, encoding the comoving number density of dark matter halos of a given mass, plays a key role in understanding the formation and evolution of galaxies. As such, it is a key goal of current and future deep optical surveys to constrain the mass function down to mass scales that typically host L⋆ galaxies. Motivated by the proven accuracy of Press–Schechter-type mass functions, we introduce a related but purely empirical form consistent with standard formulae to better than 4% in the medium-mass regime, 10^10-10^13 h^-1 M_⊙. In particular, our form consists of four parameters, each of which has a simple interpretation, and can be directly related to parameters of the galaxy distribution, such as L⋆. Using this form within a hierarchical Bayesian likelihood model, we show how individual mass-measurement errors can be successfully included in a typical analysis, while accounting for Eddington bias. We apply our form to a question of survey design in the context of a semi-realistic data model, illustrating how it can be used to obtain optimal balance between survey depth and angular coverage for constraints on mass function parameters. Open-source Python and R codes to apply our new form are provided at http://mrpy.readthedocs.org and https://cran.r-project.org/web/packages/tggd/index.html respectively.
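
    The record does not spell out the functional form, but the linked tggd package name suggests a truncated generalized gamma distribution; a hedged sketch of such a four-parameter form is

      \[ \frac{dn}{dm} \;=\; A \left( \frac{m}{\mathcal{H}_s} \right)^{\alpha} \exp\!\left[ -\left( \frac{m}{\mathcal{H}_s} \right)^{\beta} \right], \]

    with normalization A, characteristic mass scale H_s, low-mass power-law slope α, and high-mass cutoff steepness β. Consult the linked mrpy documentation for the exact definition used by the authors.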

  3. Conceptual design study of the moderate size superconducting spherical tokamak power plant

    NASA Astrophysics Data System (ADS)

    Gi, Keii; Ono, Yasushi; Nakamura, Makoto; Someya, Youji; Utoh, Hiroyasu; Tobita, Kenji; Ono, Masayuki

    2015-06-01

    A new conceptual design of the superconducting spherical tokamak (ST) power plant was proposed as an attractive choice for tokamak fusion reactors. We reassessed the possibility of the ST as a power plant using the conservative reactor engineering constraints often used for conventional tokamak reactor design. An extensive parameter scan covering the full range of feasible superconducting ST reactors was completed, and five constraints, which include plasma magnetohydrodynamic (MHD) and confinement parameters already achieved in ST experiments, were established for the purpose of choosing the optimum operation point. Based on comparison with the estimated future costs of electricity (COEs) in Japan, cost-effective ST reactors can be designed if their COEs are smaller than 120 mills kW^-1 h^-1 (2013). We selected the optimized design point, A = 2.0 and R_p = 5.4 m, after considering the maintenance scheme and TF ripple. A self-consistent free-boundary MHD equilibrium and poloidal field coil configuration of the ST reactor were designed by modifying the neutral beam injection system and plasma profiles. The MHD stability of the equilibrium was analysed and a ramp-up scenario was considered for ensuring the new ST design. The optimized moderate-size ST power plant conceptual design realizes realistic plasma and fusion engineering parameters while keeping its economic competitiveness against existing energy sources in Japan.

  4. Performance of CMOS imager as sensing element for a Real-time Active Pixel Dosimeter for Interventional Radiology procedures

    NASA Astrophysics Data System (ADS)

    Magalotti, D.; Bissi, L.; Conti, E.; Paolucci, M.; Placidi, P.; Scorzoni, A.; Servoli, L.

    2014-01-01

    Staff members performing Interventional Radiology procedures are exposed to ionizing radiation, which can induce detrimental effects in the human body and calls for improved radiation protection. This paper focuses on the study of the sensor element for a wireless real-time dosimeter to be worn by the medical staff during interventional radiology procedures, in the framework of the Real-Time Active PIxel Dosimetry (RAPID) INFN project. We characterize a CMOS imager to be used as the detection element for the photons scattered by the patient body. The CMOS imager was first characterized in the laboratory using fluorescence X-ray sources; then a PMMA phantom was used to diffuse the X-ray photons from an angiography system. Different operating conditions have been used to test the detector response in realistic situations, by varying the X-ray tube parameters (continuous/pulsed mode, tube voltage and current, pulse parameters), the sensor parameters (gain, integration time) and the relative distance between sensor and phantom. The sensor response has been compared with measurements performed using passive dosimeters (TLD) and also with a certified beam, in an accredited calibration centre, in order to obtain an absolute calibration. The results are very encouraging, with dose and dose rate measurement uncertainties below the 10% level even for the most demanding Interventional Radiology protocols.

  5. The effect of collagen fibril orientation on the biphasic mechanics of articular cartilage.

    PubMed

    Meng, Qingen; An, Shuqiang; Damion, Robin A; Jin, Zhongmin; Wilcox, Ruth; Fisher, John; Jones, Alison

    2017-01-01

    The highly inhomogeneous distribution of collagen fibrils may have important effects on the biphasic mechanics of articular cartilage. However, the effect of the inhomogeneity of collagen fibrils has mainly been investigated using simplified three-layered models, which may have underestimated the effect of collagen fibrils by neglecting their realistic orientation. The aim of this study was to investigate the effect of the realistic orientation of collagen fibrils on the biphasic mechanics of articular cartilage. Five biphasic material models, each of which included a different level of complexity of fibril reinforcement, were solved using two different finite element software packages (Abaqus and FEBio). Model 1 considered the realistic orientation of fibrils, which was derived from diffusion tensor magnetic resonance images. The simplified three-layered orientation was used for Model 2. Models 3-5 were three control models. The realistic collagen orientations obtained in this study were consistent with the literature. Results from the two finite element implementations were in agreement for each of the conditions modelled. The comparison between the control models confirmed some functions of collagen fibrils. The comparison between Models 1 and 2 showed that the widely-used three-layered inhomogeneous model can produce similar fluid load support to the model including the realistic fibril orientation; however, an accurate prediction of the other mechanical parameters requires the inclusion of the realistic orientation of collagen fibrils. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  6. The development and modeling of devices and paradigms for transcranial magnetic stimulation

    PubMed Central

    Goetz, Stefan M.; Deng, Zhi-De

    2017-01-01

    Magnetic stimulation is a noninvasive neurostimulation technique that can evoke action potentials and modulate neural circuits through induced electric fields. Biophysical models of magnetic stimulation have become a major driver for technological developments and the understanding of the mechanisms of magnetic neurostimulation and neuromodulation. Major technological developments involve stimulation coils with different spatial characteristics and pulse sources to control the pulse waveform. While early technological developments were the result of manual design and invention processes, there is a trend in both stimulation coil and pulse source design to mathematically optimize parameters with the help of computational models. To date, macroscopically highly realistic spatial models of the brain as well as peripheral targets, and user-friendly software packages enable researchers and practitioners to simulate the treatment-specific and induced electric field distribution in the brains of individual subjects and patients. Neuron models further introduce the microscopic level of neural activation to understand the influence of activation dynamics in response to different pulse shapes. A number of models that were designed for online calibration to extract otherwise covert information and biomarkers from the neural system recently form a third branch of modeling. PMID:28443696
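
As a rough quantitative anchor for the induced electric fields these models compute, the sketch below applies Faraday's law for an idealized axisymmetric case: a spatially uniform dB/dt over a circular region gives |E| = (r/2) dB/dt at radius r. The dB/dt value and radii are generic textbook-scale assumptions, not numbers from the review.

```python
import numpy as np

dB_dt = 3.0e4                          # T/s, typical peak rate for a TMS pulse
radii = np.linspace(0.005, 0.05, 10)   # m, distance from the symmetry axis
E = 0.5 * radii * dB_dt                # V/m, induced field magnitude

for r, e in zip(radii, E):
    print(f"r = {100 * r:4.1f} cm  ->  |E| ~ {e:6.1f} V/m")
```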

  7. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 3; Aero-Acoustic Analyses and Experimental Validation

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.; Graham, Jason S.; McVay, Greg P.; Langford, Lester L.

    2008-01-01

A unique assessment of acoustic similarity scaling laws and acoustic analogy methodologies in predicting the far-field acoustic signature from a sub-scale altitude rocket test facility at the NASA Stennis Space Center was performed. A directional, point-source similarity analysis was implemented for predicting the acoustic far-field. In this approach, experimental acoustic data obtained from "similar" rocket engine tests were appropriately scaled using key geometric and dynamic parameters. The accuracy of this engineering-level method is discussed by comparing the predictions with measured far-field acoustic data. In addition, a CFD solver was coupled with Lilley's acoustic analogy formulation to determine the improvement of using a physics-based methodology over an experimental correlation approach. In the current work, steady-state Reynolds-averaged Navier-Stokes calculations were used to model the internal flow of the rocket engine and altitude diffuser. These internal flow simulations provided the necessary realistic input conditions for external plume simulations. The CFD plume simulations were then used to provide the spatial turbulent noise source distributions in the acoustic analogy calculations. Preliminary findings of these studies will be discussed.
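
A hedged sketch of the similarity-scaling idea: spectra measured on a sub-scale test are mapped to full scale via Strouhal-number similarity and spherical spreading. The function and its inputs are illustrative; practical scalings add further geometric and dynamic correction terms (e.g., source power scaling) that are omitted here for brevity.

```python
import numpy as np

def scale_to_full(f_model_hz, spl_model_db, scale, r_model_m, r_full_m):
    """Map sub-scale jet-noise spectra to full scale, assuming the same
    exhaust velocity at both scales (Strouhal similarity, St = f D / U)
    and spherical spreading. 'scale' is the geometric model scale,
    e.g. 0.1 for a 1/10th-scale nozzle."""
    f_full = f_model_hz * scale    # D grows by 1/scale, so f drops by scale
    spl_full = spl_model_db + 20.0 * np.log10(r_model_m / r_full_m)
    return f_full, spl_full

# Illustrative 1/10th-scale band centers and levels (not measured data).
f_m = np.array([2000.0, 4000.0, 8000.0])
spl_m = np.array([110.0, 108.0, 104.0])
print(scale_to_full(f_m, spl_m, 0.1, r_model_m=10.0, r_full_m=300.0))
```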

  8. Development of a 3D numerical code to calculate the trajectories of the blow off electrons emitted by a vacuum surface discharge: Application to the study of the electromagnetic interference induced on a spacecraft

    NASA Astrophysics Data System (ADS)

    Froger, Etienne

    1993-05-01

A description of the electromagnetic behavior of a satellite subjected to an electric discharge is given using a specially developed numerical code. One of the particularities of vacuum discharges, obtained by irradiation of polymers, is the intense emission of electrons into the spacecraft environment. Electromagnetic radiation, associated with the trajectories of the particles around the spacecraft, is considered the main source of the interference observed. In the absence of accurate orbital data and realistic ground tests, the assessment of these effects requires numerical simulation of the interaction between this electron source and the spacecraft. This is done with the GEODE particle code, which is applied to characteristic configurations in order to estimate the spacecraft response to a discharge, simulated from a vacuum discharge model developed in the laboratory. The spacecraft response to a current injection is simulated by the ALICE three-dimensional numerical code. The comparison between discharge and injection effects, from the results given by the two codes, illustrates the representativeness of electromagnetic susceptibility tests and identifies the main parameters for their definition.
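
The core of a particle code like the one described is the integration of charged-particle trajectories in the spacecraft's field. Below is a minimal leapfrog (kick-drift-kick) integrator for an electron in the electrostatic field of a spacecraft idealized as a point charge; the charge, time step, and initial conditions are illustrative assumptions, not GEODE parameters.

```python
import numpy as np

QE, ME = -1.602e-19, 9.109e-31   # electron charge (C) and mass (kg)
K = 8.988e9                       # Coulomb constant (N m^2 / C^2)
Q_SC = 1e-6                       # hypothetical spacecraft charge (C)

def efield(r):
    """Point-charge electric field (V/m) at position r (m)."""
    d = np.linalg.norm(r)
    return K * Q_SC * r / d**3

def trace(r0, v0, dt=1e-10, n=5000):
    """Leapfrog trajectory integration: half kick, then drift/kick steps."""
    r, v = np.array(r0, float), np.array(v0, float)
    path = [r.copy()]
    v += 0.5 * dt * (QE / ME) * efield(r)
    for _ in range(n):
        r += dt * v
        v += dt * (QE / ME) * efield(r)
        path.append(r.copy())
    return np.array(path)

path = trace(r0=[1.0, 0.0, 0.0], v0=[0.0, 5e5, 0.0])
print(path[-1])   # final electron position after 0.5 microseconds
```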

  9. Laboratory Measurement of the Brighter-fatter Effect in an H2RG Infrared Detector

    NASA Astrophysics Data System (ADS)

    Plazas, A. A.; Shapiro, C.; Smith, R.; Huff, E.; Rhodes, J.

    2018-06-01

The “brighter-fatter” (BF) effect is a phenomenon, originally discovered in charge-coupled devices, in which the size of the detector point-spread function (PSF) increases with brightness. We present, for the first time, laboratory measurements demonstrating the existence of the effect in a Hawaii-2RG HgCdTe near-infrared (NIR) detector. We use JPL’s Precision Projector Laboratory, a facility for emulating astronomical observations with UV/VIS/NIR detectors, to project about 17,000 point sources onto the detector to stimulate the effect. After calibrating the detector for nonlinearity with flat-fields, we find evidence that charge is nonlinearly shifted from bright pixels to neighboring pixels during exposures of point sources, consistent with the existence of a BF-type effect. NASA's Wide Field Infrared Survey Telescope (WFIRST) will use similar detectors to measure weak gravitational lensing from the shapes of hundreds of millions of galaxies in the NIR. The WFIRST PSF size must be calibrated to ≈0.1% to avoid biased inferences of dark matter and dark energy parameters; therefore further study and calibration of the BF effect in realistic images will be crucial.
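
One common way to quantify a BF-type effect is to regress a PSF size metric against source flux across many spots; a positive slope is the signature. The sketch below does this on synthetic measurements with an injected slope, purely to illustrate the analysis, not to reproduce the paper's numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
flux = rng.uniform(1e3, 8e4, 500)                        # spot fluxes (e-)
sigma2 = 1.50 + 4e-7 * flux + rng.normal(0, 0.01, 500)   # second moment (pix^2)
                                                         # with injected BF slope
slope, intercept = np.polyfit(flux, sigma2, 1)
print(f"PSF second moment grows by {100 * slope * 8e4 / intercept:.2f}% "
      f"over the flux range (slope = {slope:.2e} pix^2 per e-)")
```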

  10. The development and modelling of devices and paradigms for transcranial magnetic stimulation.

    PubMed

    Goetz, Stefan M; Deng, Zhi-De

    2017-04-01

    Magnetic stimulation is a non-invasive neurostimulation technique that can evoke action potentials and modulate neural circuits through induced electric fields. Biophysical models of magnetic stimulation have become a major driver for technological developments and the understanding of the mechanisms of magnetic neurostimulation and neuromodulation. Major technological developments involve stimulation coils with different spatial characteristics and pulse sources to control the pulse waveform. While early technological developments were the result of manual design and invention processes, there is a trend in both stimulation coil and pulse source design to mathematically optimize parameters with the help of computational models. To date, macroscopically highly realistic spatial models of the brain, as well as peripheral targets, and user-friendly software packages enable researchers and practitioners to simulate the treatment-specific and induced electric field distribution in the brains of individual subjects and patients. Neuron models further introduce the microscopic level of neural activation to understand the influence of activation dynamics in response to different pulse shapes. A number of models that were designed for online calibration to extract otherwise covert information and biomarkers from the neural system recently form a third branch of modelling.

  11. TESTING WIND AS AN EXPLANATION FOR THE SPIN PROBLEM IN THE CONTINUUM-FITTING METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Bei; Czerny, Bożena; Sobolewska, Małgosia

    2016-04-20

The continuum-fitting method is one of the two most advanced methods of determining the black hole spin in accreting X-ray binary systems. There are, however, still some unresolved issues with the underlying disk models. One of these issues manifests as an apparent decrease in spin for increasing source luminosity. Here, we perform a few simple tests to establish whether outflows from the disk close to the inner radius can address this problem. We employ four different parametric models to describe the wind and compare these to the apparent decrease in spin with luminosity measured in the sources LMC X-3 and GRS 1915+105. Wind models in which parameters do not explicitly depend on the accretion rate cannot reproduce the spin measurements. Models with mass accretion rate dependent outflows, however, have spectra that emulate the observed ones. The assumption of a wind thus effectively removes the artifact of spin decrease. This solution is not unique; the same conclusion can be obtained using a truncated inner disk model. To distinguish among the valid models, we will need high-resolution X-ray data and a realistic description of the Comptonization in the wind.

  12. On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods

    NASA Astrophysics Data System (ADS)

    Gallegos, A. C.; Xie, J.; Suarez Salas, L.

    2017-12-01

The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under a fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examination of differences in event size and frequency content of the seismograms, a rigorous justification is often lacking. In practice, the small event may have a finite duration, in which case the retrieved RSTF is a biased estimate of the large event's STF. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain matrix deconvolution. We find that when the STFs of smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply Tikhonov smoothing to obtain a single-pulse RSTF, but its duration is dependent on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017), which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix depends on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. Based on the results so far, we find that the rank-deficiency makes it improbable to solve for both STFs. To solve for the larger STF we need to assume the shape of the small STF to be known a priori. Thus, the reliability of the estimated large STF depends on the difference between the assumed and true shapes of the small STF. We will show how the reliability varies with realistic scenarios.
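
A minimal sketch of the time-domain deconvolution with Tikhonov regularization discussed above: the small event's record defines a convolution matrix A, and the RSTF is the damped least-squares solution. The synthetic Green's function, triangular STF, and damping value are illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz

def deconvolve_tikhonov(big, small, lam):
    """Estimate an RSTF by time-domain deconvolution: solve
    min ||A x - big||^2 + lam^2 ||x||^2, where A is the (truncated)
    convolution matrix of the small event's seismogram."""
    n = len(big)
    col = np.r_[small, np.zeros(n - len(small))]
    A = toeplitz(col, np.r_[small[0], np.zeros(n - 1)])
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ big)

# Synthetic test: 'large' record = stand-in Green's function * triangular STF.
g = np.random.default_rng(1).normal(size=200)
stf = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
big = np.convolve(g, stf)[:200]
rstf = deconvolve_tikhonov(big, g, lam=0.5)
print(rstf[:6].round(3))   # should approximate the triangular STF
```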

  13. Characterization of electrophysiological propagation by multichannel sensors

    PubMed Central

    Bradshaw, L. Alan; Kim, Juliana H.; Somarajan, Suseela; Richards, William O.; Cheng, Leo K.

    2016-01-01

    Objective The propagation of electrophysiological activity measured by multichannel devices could have significant clinical implications. Gastric slow waves normally propagate along longitudinal paths that are evident in recordings of serosal potentials and transcutaneous magnetic fields. We employed a realistic model of gastric slow wave activity to simulate the transabdominal magnetogastrogram (MGG) recorded in a multichannel biomagnetometer and to determine characteristics of electrophysiological propagation from MGG measurements. Methods Using MGG simulations of slow wave sources in a realistic abdomen (both superficial and deep sources) and in a horizontally-layered volume conductor, we compared two analytic methods (Second Order Blind Identification, SOBI and Surface Current Density, SCD) that allow quantitative characterization of slow wave propagation. We also evaluated the performance of the methods with simulated experimental noise. The methods were also validated in an experimental animal model. Results Mean square errors in position estimates were within 2 cm of the correct position, and average propagation velocities within 2 mm/s of the actual velocities. SOBI propagation analysis outperformed the SCD method for dipoles in the superficial and horizontal layer models with and without additive noise. The SCD method gave better estimates for deep sources, but did not handle additive noise as well as SOBI. Conclusion SOBI-MGG and SCD-MGG were used to quantify slow wave propagation in a realistic abdomen model of gastric electrical activity. Significance These methods could be generalized to any propagating electrophysiological activity detected by multichannel sensor arrays. PMID:26595907

  14. Evolutionary algorithm optimization of biological learning parameters in a biomimetic neuroprosthesis

    PubMed Central

    Dura-Bernal, S.; Neymotin, S. A.; Kerr, C. C.; Sivagnanam, S.; Majumdar, A.; Francis, J. T.; Lytton, W. W.

    2017-01-01

    Biomimetic simulation permits neuroscientists to better understand the complex neuronal dynamics of the brain. Embedding a biomimetic simulation in a closed-loop neuroprosthesis, which can read and write signals from the brain, will permit applications for amelioration of motor, psychiatric, and memory-related brain disorders. Biomimetic neuroprostheses require real-time adaptation to changes in the external environment, thus constituting an example of a dynamic data-driven application system. As model fidelity increases, so does the number of parameters and the complexity of finding appropriate parameter configurations. Instead of adapting synaptic weights via machine learning, we employed major biological learning methods: spike-timing dependent plasticity and reinforcement learning. We optimized the learning metaparameters using evolutionary algorithms, which were implemented in parallel and which used an island model approach to obtain sufficient speed. We employed these methods to train a cortical spiking model to utilize macaque brain activity, indicating a selected target, to drive a virtual musculoskeletal arm with realistic anatomical and biomechanical properties to reach to that target. The optimized system was able to reproduce macaque data from a comparable experimental motor task. These techniques can be used to efficiently tune the parameters of multiscale systems, linking realistic neuronal dynamics to behavior, and thus providing a useful tool for neuroscience and neuroprosthetics. PMID:29200477
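
For illustration, here is a minimal island-model evolutionary optimizer of the kind described: independent subpopulations evolve in parallel and periodically migrate their best individual around a ring. The toy quadratic fitness stands in for the expensive closed-loop simulation; population sizes and mutation scale are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(x):                    # stand-in for a simulation-based score
    return -np.sum((x - 0.3)**2)

def evolve(n_islands=4, pop=20, dim=5, gens=50, migrate_every=10):
    islands = [rng.uniform(0, 1, (pop, dim)) for _ in range(n_islands)]
    for g in range(gens):
        for i, P in enumerate(islands):
            f = np.array([fitness(ind) for ind in P])
            parents = P[np.argsort(f)[-pop // 2:]]               # selection
            kids = parents + rng.normal(0, 0.05, parents.shape)  # mutation
            islands[i] = np.vstack([parents, kids])
        if (g + 1) % migrate_every == 0:                         # ring migration
            best = [P[np.argmax([fitness(x) for x in P])] for P in islands]
            for i in range(n_islands):
                islands[i][0] = best[(i - 1) % n_islands]
    allp = np.vstack(islands)
    return allp[np.argmax([fitness(x) for x in allp])]

print(evolve().round(3))   # should approach the optimum at 0.3 in each dimension
```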

  15. Towards adjoint-based inversion of time-dependent mantle convection with nonlinear viscosity

    NASA Astrophysics Data System (ADS)

    Li, Dunzhu; Gurnis, Michael; Stadler, Georg

    2017-04-01

    We develop and study an adjoint-based inversion method for the simultaneous recovery of initial temperature conditions and viscosity parameters in time-dependent mantle convection from the current mantle temperature and historic plate motion. Based on a realistic rheological model with temperature-dependent and strain-rate-dependent viscosity, we formulate the inversion as a PDE-constrained optimization problem. The objective functional includes the misfit of surface velocity (plate motion) history, the misfit of the current mantle temperature, and a regularization for the uncertain initial condition. The gradient of this functional with respect to the initial temperature and the uncertain viscosity parameters is computed by solving the adjoint of the mantle convection equations. This gradient is used in a pre-conditioned quasi-Newton minimization algorithm. We study the prospects and limitations of the inversion, as well as the computational performance of the method using two synthetic problems, a sinking cylinder and a realistic subduction model. The subduction model is characterized by the migration of a ridge toward a trench whereby both plate motions and subduction evolve. The results demonstrate: (1) for known viscosity parameters, the initial temperature can be well recovered, as in previous initial condition-only inversions where the effective viscosity was given; (2) for known initial temperature, viscosity parameters can be recovered accurately, despite the existence of trade-offs due to ill-conditioning; (3) for the joint inversion of initial condition and viscosity parameters, initial condition and effective viscosity can be reasonably recovered, but the high dimension of the parameter space and the resulting ill-posedness may limit recovery of viscosity parameters.

  16. Imaging the redshifted 21 cm pattern around the first sources during the cosmic dawn using the SKA

    NASA Astrophysics Data System (ADS)

    Ghara, Raghunath; Choudhury, T. Roy; Datta, Kanan K.; Choudhuri, Samir

    2017-01-01

Understanding properties of the first sources in the Universe using the redshifted H I 21 cm signal is one of the major aims of present and upcoming low-frequency experiments. We investigate the possibility of imaging the redshifted 21 cm pattern around the first sources during the cosmic dawn using the SKA1-low. We model the H I 21 cm image maps, appropriate for the SKA1-low, around the first sources consisting of stars and X-ray sources within galaxies. In addition to the system noise, we also account for the astrophysical foregrounds by adding them to the signal maps. We find that after subtracting the foregrounds using a polynomial fit and suppressing the noise by smoothing the maps over 10-30 arcmin angular scale, the isolated sources at z ~ 15 are detectable with the ~4σ-9σ confidence level in 2000 h of observation with the SKA1-low. Although the 21 cm profiles around the sources get altered because of the Gaussian smoothing, the images can still be used to extract some of the source properties. We account for overlaps in the patterns of the individual sources by generating realistic H I 21 cm maps of the cosmic dawn that are based on N-body simulations and a one-dimensional radiative transfer code. We find that these sources should be detectable in the SKA1-low images at z = 15 with a signal-to-noise ratio (SNR) of ~14(4) in 2000 (200) h of observations. One possible observational strategy thus could be to observe multiple fields for shorter observation times, identify fields with SNR ≳ 3 and observe these fields for much longer duration. Such observations are expected to be useful in constraining the parameters related to the first sources.
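
The two post-processing steps named above, polynomial foreground subtraction along frequency followed by angular smoothing, can be sketched on a synthetic image cube as follows. Cube dimensions, the foreground power law, and the smoothing scale (in pixels rather than arcmin) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
nx = ny = 64; nf = 32
freq = np.linspace(0.0, 1.0, nf)                 # normalized frequency axis
cube = (rng.normal(0, 1.0, (nx, ny, nf))                      # system noise
        + 50.0 * (1.0 + freq)[None, None, :] ** -2.5          # smooth foreground
        + 0.5 * np.exp(-((freq - 0.5) / 0.05) ** 2))          # 21 cm-like feature

# Fit and subtract a 3rd-order polynomial in frequency along each sightline.
coeffs = np.polynomial.polynomial.polyfit(freq, cube.reshape(-1, nf).T, 3)
fg = np.polynomial.polynomial.polyval(freq, coeffs)           # (nsight, nf)
resid = cube - fg.reshape(nx, ny, nf)

# Suppress noise by smoothing each channel map over a few pixels.
smoothed = gaussian_filter(resid, sigma=(3, 3, 0))
print(smoothed.shape, float(smoothed.std()))
```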

  17. Reductive dechlorination of trichloroethene DNAPL source zones: source zone architecture versus electron donor availability

    NASA Astrophysics Data System (ADS)

    Krol, M.; Kokkinaki, A.; Sleep, B.

    2014-12-01

The persistence of dense non-aqueous phase liquids (DNAPLs) in the subsurface has led practitioners and regulatory agencies to turn towards low-maintenance, low-cost remediation methods. Biological degradation has been suggested as a possible solution, based on the well-proven ability of certain microbial species to break down dissolved chlorinated ethenes under favorable conditions. However, the biodegradation of pure-phase chlorinated ethenes is subject to additional constraints: the continuous release of electron acceptor at a rate governed by mass transfer kinetics, and the temporal and spatial heterogeneity of DNAPL source zones, which leads to spatially and temporally variable availability of the reactants for reductive dechlorination. In this work, we investigate the relationship between various DNAPL source zone characteristics and reaction kinetics using COMPSIM, a multiphase groundwater model that considers non-equilibrium mass transfer and Monod-type kinetics for reductive dechlorination. Numerical simulations are performed for simple, homogeneous trichloroethene DNAPL source zones to demonstrate the effect of single source zone characteristics, as well as for larger, more realistic heterogeneous source zones. It is shown that source zone size and mass transfer kinetics may have a decisive effect on the predicted bio-enhancement. Finally, we evaluate the performance of DNAPL bioremediation for realistic, thermodynamically constrained concentrations of electron donor. Our results indicate that the latter may be the most important limitation for the success of DNAPL bioremediation, leading to reduced bio-enhancement and, in many cases, comparable performance with water flooding.
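
To make the Monod-type rate limitation concrete, the sketch below integrates a dual-Monod model in which the dechlorination rate saturates in both the dissolved TCE and the electron donor. The rate constants, yield, donor-use ratio, and decay term are illustrative placeholders, not COMPSIM parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

qmax, K_tce, K_don, Y = 1.0, 0.5, 0.1, 0.4   # 1/d, mg/L, mg/L, biomass yield

def rhs(t, y):
    tce, donor, X = y
    # Rate limited by both electron acceptor (TCE) and electron donor.
    r = qmax * X * (tce / (K_tce + tce)) * (donor / (K_don + donor))
    return [-r,               # TCE consumption
            -0.35 * r,        # donor consumption (assumed stoichiometric ratio)
            Y * r - 0.05 * X] # biomass growth minus first-order decay

sol = solve_ivp(rhs, [0, 60], [10.0, 5.0, 0.1])
print(sol.y[:, -1].round(3))   # final TCE, donor, and biomass concentrations
```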

  18. Source characterization and modeling development for monoenergetic-proton radiography experiments on OMEGA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manuel, M. J.-E.; Zylstra, A. B.; Rinderknecht, H. G.

    2012-06-15

A monoenergetic proton source has been characterized and a modeling tool developed for proton radiography experiments at the OMEGA [T. R. Boehly et al., Opt. Comm. 133, 495 (1997)] laser facility. Multiple diagnostics were fielded to measure global isotropy levels in proton fluence, and images of the proton source itself provided information on local uniformity relevant to proton radiography experiments. Global fluence uniformity was assessed by multiple yield diagnostics and deviations were calculated to be ~16% and ~26% of the mean for DD and D³He fusion protons, respectively. From individual fluence images, it was found that the angular frequencies of ≳50 rad⁻¹ contributed less than a few percent to local nonuniformity levels. A model was constructed using the Geant4 [S. Agostinelli et al., Nuc. Inst. Meth. A 506, 250 (2003)] framework to simulate proton radiography experiments. The simulation implements realistic source parameters and various target geometries. The model was benchmarked with the radiographs of cold-matter targets to within experimental accuracy. To validate the use of this code, the cold-matter approximation for the scattering of fusion protons in plasma is discussed using a typical laser-foil experiment as an example case. It is shown that an analytic cold-matter approximation is accurate to within ≲10% of the analytic plasma model in the example scenario.

  19. Assessing and optimizing infrasound network performance: application to remote volcano monitoring

    NASA Astrophysics Data System (ADS)

Tailpied, D.; Le Pichon, A.; Marchetti, E.; Kallel, M.; Ceranna, L.

    2014-12-01

Infrasound is an efficient monitoring technique to remotely detect and characterize explosive sources such as volcanoes. Simulation methods incorporating realistic source and propagation effects have been developed to quantify the detection capability of any network. These methods can also be used to optimize the network configuration (number of stations, geographical location) in order to reduce the detection thresholds, taking into account seasonal effects in infrasound propagation. Recent studies have shown that remote infrasound observations can provide useful information about the eruption chronology and the released acoustic energy. Comparisons with near-field recordings allow evaluating the potential of these observations to better constrain source parameters when other monitoring techniques (satellite, seismic, gas) are not available or cannot be used. Because of its regular activity, the well-instrumented Mount Etna is a unique natural repetitive source in Europe for testing and optimizing detection and simulation methods. The closest infrasound station of the International Monitoring System is IS48, located in Tunisia. In summer, during the downwind season, it allows an unambiguous identification of signals associated with Etna eruptions. Under the European ARISE project (Atmospheric dynamics InfraStructure in Europe, FP7/2007-2013), experimental arrays have been installed in order to characterize infrasound propagation in different ranges of distance and direction. In addition, a small-aperture array, set up on the flank of the volcano by the University of Firenze, has been operating since 2007. Such an experimental setting offers an opportunity to address the societal benefits that can be achieved through routine infrasound monitoring.

  20. Temporal Characterization of Aircraft Noise Sources

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.; Sullivan, Brenda M.; Rizzi, Stephen A.

    2004-01-01

Current aircraft source noise prediction tools yield time-independent frequency spectra as functions of directivity angle. Realistic evaluation and human assessment of aircraft fly-over noise require the temporal characteristics of the noise signature. The purpose of the current study is to analyze empirical data from broadband jet and tonal fan noise sources and to provide the temporal information required for prediction-based synthesis. Noise sources included a one-tenth-scale engine exhaust nozzle and a one-fifth-scale turbofan engine. A methodology was developed to characterize the low-frequency fluctuations employing the Short Time Fourier Transform in a MATLAB computing environment. It was shown that a trade-off is necessary between frequency and time resolution in the acoustic spectrogram. The procedure requires careful evaluation and selection of the data analysis parameters, including the data sampling frequency, Fourier Transform window size, associated time period and frequency resolution, and time period window overlap. Low-frequency fluctuations were applied to the synthesis of broadband noise, with the resulting records sounding virtually indistinguishable from the measured data in initial subjective evaluations. Amplitude fluctuations of blade passage frequency (BPF) harmonics were successfully characterized for conditions equivalent to take-off and approach. Data demonstrated that the fifth harmonic of the BPF varied more in frequency than the BPF itself and exhibited larger amplitude fluctuations over the duration of the time record. Frequency fluctuations were found to be not perceptible in the current characterization of tonal components.
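
The time-frequency trade-off mentioned above is easy to demonstrate with a short-time Fourier transform; the sketch below (in Python rather than the paper's MATLAB environment) compares two window sizes on a synthetic amplitude-modulated fan tone. The signal parameters are illustrative.

```python
import numpy as np
from scipy.signal import stft

fs = 44100.0
t = np.arange(0, 2.0, 1 / fs)
# Synthetic 'fan tone': a 2 kHz carrier with a slow amplitude fluctuation.
x = (1 + 0.3 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 2000 * t)

for nperseg in (512, 4096):
    f, tt, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    # Longer window: finer frequency grid (df), coarser time steps (dt).
    print(f"window {nperseg:4d}: df = {f[1] - f[0]:6.1f} Hz, "
          f"dt = {tt[1] - tt[0]:.4f} s")
```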

  1. Systematic Construction of Kinetic Models from Genome-Scale Metabolic Networks

    PubMed Central

    Smallbone, Kieran; Klipp, Edda; Mendes, Pedro; Liebermeister, Wolfram

    2013-01-01

The quantitative effects of environmental and genetic perturbations on metabolism can be studied in silico using kinetic models. We present a strategy for large-scale model construction based on a logical layering of data such as reaction fluxes, metabolite concentrations, and kinetic constants. The resulting models contain realistic standard rate laws and plausible parameters, adhere to the laws of thermodynamics, and reproduce a predefined steady state. These features have not been simultaneously achieved by previous workflows. We demonstrate the advantages and limitations of the workflow by translating the yeast consensus metabolic network into a kinetic model. Despite crudely selected data, the model shows realistic control behaviour, stable dynamics, and a realistic response to perturbations in extracellular glucose concentrations. The paper concludes by outlining how new data can continuously be fed into the workflow and how iterative model building can assist in directing experiments. PMID:24324546
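
A toy version of the parameterization step, assuming a single metabolite fed by a constant flux and consumed by a Michaelis-Menten reaction: the Vmax values are back-calculated so the model reproduces a predefined steady state, mirroring the layering idea. All constants are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

Km2 = 0.8                    # mM, assumed Michaelis constant
S_ss, J_ss = 1.0, 2.0        # predefined steady state: [S] = 1 mM, flux = 2 mM/min

Vmax1 = J_ss                              # constant input flux feeding S
Vmax2 = J_ss * (Km2 + S_ss) / S_ss        # consumption tuned to balance at S_ss

def rhs(t, y):
    S = y[0]
    return [Vmax1 - Vmax2 * S / (Km2 + S)]

sol = solve_ivp(rhs, [0, 50], [0.2])
print(f"S(t_end) = {sol.y[0, -1]:.3f} mM (target {S_ss})")
```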

  2. Dynamics of entanglement and uncertainty relation in coupled harmonic oscillator system: exact results

    NASA Astrophysics Data System (ADS)

    Park, DaeKil

    2018-06-01

The dynamics of entanglement and the uncertainty relation are explored by analytically solving the time-dependent Schrödinger equation for a coupled harmonic oscillator system when the angular frequencies and coupling constant are arbitrarily time dependent. We derive the spectral and Schmidt decompositions for the vacuum solution. Using the decompositions, we derive the analytical expressions for the von Neumann and Rényi entropies. Making use of the Wigner distribution function defined in phase space, we derive the time dependence of the position-momentum uncertainty relations. To show the dynamics of entanglement and the uncertainty relation graphically, we introduce two toy models and one realistic quenched model. While the dynamics can be conjectured by simple considerations in the toy models, the dynamics in the realistic quenched model is somewhat different from that in the toy models. In particular, the dynamics of entanglement exhibits a pattern similar to that of the uncertainty parameter in the realistic quenched model.

  3. The management challenge for household waste in emerging economies like Brazil: realistic source separation and activation of reverse logistics.

    PubMed

    Fehr, M

    2014-09-01

Business opportunities in the household waste sector in emerging economies still revolve around the activities of bulk collection and tipping with an open material balance. This research, conducted in Brazil, pursued the objective of shifting opportunities from tipping to reverse logistics in order to close the balance. To do this, it illustrated how specific knowledge of sorted waste composition and reverse logistics operations can be used to determine realistic temporal and quantitative landfill diversion targets in an emerging economy context. Experimentation constructed and confirmed the recycling trilogy that consists of source separation, collection infrastructure and reverse logistics. The study on source separation demonstrated the vital difference between raw and sorted waste compositions. Raw waste contained 70% biodegradable and 30% inert matter. Source separation produced 47% biodegradable, 20% inert and 33% mixed material. The study on collection infrastructure developed the necessary receiving facilities. The study on reverse logistics identified private operators capable of collecting and processing all separated inert items. Recycling activities for biodegradable material were scarce and erratic. Only farmers would take the material as animal feed. No composting initiatives existed. The management challenge was identified as stimulating these activities in order to complete the trilogy and divert the 47% source-separated biodegradable discards from the landfills. © The Author(s) 2014.

  4. More physics in the laundromat

    NASA Astrophysics Data System (ADS)

    Denny, Mark

    2010-12-01

    The physics of a washing machine spin cycle is extended to include the spin-up and spin-down phases. We show that, for realistic parameters, an adiabatic approximation applies, and thus the familiar forced, damped harmonic oscillator analysis can be applied to these phases.
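
As a companion to the adiabatic argument, the sketch below evaluates the familiar steady-state amplitude of a forced, damped oscillator as the drum's drive frequency sweeps slowly through the suspension resonance. The machine mass, stiffness, damping, and forcing are invented, order-of-magnitude values, and the imbalance forcing is held fixed rather than growing with the square of the spin rate.

```python
import numpy as np

m, k, c = 40.0, 8.0e3, 250.0   # kg, N/m, N s/m (illustrative machine values)
F0 = 30.0                       # N, imbalance forcing amplitude (held fixed)
w0 = np.sqrt(k / m)             # undamped natural frequency of the suspension

for w in np.linspace(0.2 * w0, 3 * w0, 8):
    # Steady-state amplitude of x'' + (c/m) x' + (k/m) x = (F0/m) cos(w t).
    A = F0 / np.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)
    print(f"drive {w:6.1f} rad/s -> amplitude {1e3 * A:6.2f} mm")
```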

  5. Flowing Hot or Cold: User-Friendly Computational Models of Terrestrial and Planetary Lava Channels and Lakes

    NASA Astrophysics Data System (ADS)

    Sakimoto, S. E. H.

    2016-12-01

Planetary volcanism has redefined what is considered volcanism. "Magma" now may be considered to be anything from the molten rock familiar at terrestrial volcanoes to cryovolcanic ammonia-water mixes erupted on an outer solar system moon. However, even with unfamiliar compositions and source mechanisms, we find familiar landforms such as volcanic channels, lakes, flows, and domes, and thus a multitude of possibilities for modeling. As on Earth, these landforms lend themselves to analysis for estimating storage, eruption and/or flow rates. This has potential pitfalls, as extension of the simplified analytic models we often use for terrestrial features into unfamiliar parameter space might yield misleading results. Our most commonly used tools for estimating flow and cooling have tended to lag significantly behind the state-of-the-art; the easiest methods to use are neither realistic nor accurate, but the more realistic and accurate computational methods are not simple to use. Since the latter computational tools tend to be expensive and require a significant learning curve, there is a need for a user-friendly approach that still takes advantage of their accuracy. One method is use of the computational package for generation of a server-based tool that allows less computationally inclined users to get accurate results over their range of input parameters for a given problem geometry. A second method is to use the computational package for the generation of a polynomial empirical solution for each class of flow geometry that can be fairly easily solved by anyone with a spreadsheet. In this study, we demonstrate both approaches for several channel flow and lava lake geometries with terrestrial and extraterrestrial examples and compare their results. Specifically, we model cooling rectangular channel flow with a yield strength material, with applications to Mauna Loa, Kilauea, Venus, and Mars. This approach also shows promise with model applications to lava lakes, magma flow through cracks, and volcanic dome formation.
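
For context, one of the simple analytic end-members such tools generalize is the depth-averaged velocity of a yield-strength (Bingham) lava sheet on an incline; a sketch follows, with generic basalt-like property values. The formula is the standard laminar Bingham film solution, not the paper's rectangular-channel model.

```python
import numpy as np

rho, g = 2600.0, 9.81        # kg/m^3, m/s^2
mu, tau0 = 1.0e3, 2.0e3      # Pa s (plastic viscosity), Pa (yield strength)
h, slope = 2.0, np.radians(5.0)   # flow depth (m) and surface slope

tau_b = rho * g * h * np.sin(slope)      # basal shear stress
xi = min(tau0 / tau_b, 1.0)              # xi = 1 means the flow has stopped
u_mean = (rho * g * np.sin(slope) * h**2 / (3 * mu)) * (1 - 1.5 * xi + 0.5 * xi**3)
print(f"tau_b = {tau_b:.0f} Pa, mean velocity = {u_mean:.2f} m/s")
```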

  6. Realistic and efficient 2D crack simulation

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing; Singh, Abhishek

    2010-04-01

Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive, and efficient runtime 2D crack/fracture simulation system by applying the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level-of-detail. The generated binary decomposition tree also provides an efficient neighbor retrieval mechanism used for mesh element splitting and merging with minimal memory requirements, essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors including impact angle, impact energy, and material properties are all taken into account to produce the criteria of crack initialization, propagation, and termination, leading to realistic fractal-like rubble/fragment formation. The aforementioned parameters are used as variables of probabilistic models of crack/shard formation, making the proposed solution highly adaptive by allowing machine learning mechanisms to learn the optimal values for the variables/parameters based on prior benchmark data generated by off-line physics-based simulation solutions that produce accurate fractures/shards, though at a highly non-real-time pace. Crack/fracture simulation has been conducted on various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system has the capability to realistically and efficiently simulate 2D crack phenomena (such as window shattering and shard generation) with diverse potential in military and civil M&S applications such as training and mission planning.

  7. Method for decreasing CT simulation time of complex phantoms and systems through separation of material specific projection data

    NASA Astrophysics Data System (ADS)

    Divel, Sarah E.; Christensen, Soren; Wintermark, Max; Lansberg, Maarten G.; Pelc, Norbert J.

    2017-03-01

Computer simulation is a powerful tool in CT; however, long simulation times of complex phantoms and systems, especially when modeling many physical aspects (e.g., spectrum, finite detector and source size), hinder the ability to realistically and efficiently evaluate and optimize CT techniques. Long simulation times primarily result from the tracing of hundreds of line integrals through each of the hundreds of geometrical shapes defined within the phantom. However, when the goal is to perform dynamic simulations or test many scan protocols using a particular phantom, traditional simulation methods inefficiently and repeatedly calculate line integrals through the same set of structures although only a few parameters change in each new case. In this work, we have developed a new simulation framework that overcomes such inefficiencies by dividing the phantom into material-specific regions with the same time attenuation profiles, acquiring and storing monoenergetic projections of the regions, and subsequently scaling and combining the projections to create equivalent polyenergetic sinograms. The simulation framework is especially efficient for the validation and optimization of CT perfusion, which requires analysis of many stroke cases and testing hundreds of scan protocols on a realistic and complex numerical brain phantom. Using this updated framework to conduct a 31-time point simulation with 80 mm of z-coverage of a brain phantom on two 16-core Linux servers, we have reduced the simulation time from 62 hours to under 2.6 hours, a 95% reduction.
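
The central trick, computing material-specific line integrals once and then recombining them for any spectrum, can be sketched as follows. The random path lengths, toy attenuation curves, and Gaussian spectrum are placeholders; a real implementation would use traced path lengths and tabulated attenuation coefficients.

```python
import numpy as np

n_views, n_dets, n_E = 360, 512, 40
rng = np.random.default_rng(7)
# Stored once: material-specific path lengths (cm) per ray.
L = {"soft": rng.uniform(0, 20, (n_views, n_dets)),
     "bone": rng.uniform(0, 3,  (n_views, n_dets))}

E = np.linspace(20, 120, n_E)                          # keV
spectrum = np.exp(-((E - 60) / 25) ** 2)
spectrum /= spectrum.sum()
mu = {"soft": 0.25 * (60 / E) ** 1.5,                  # 1/cm, toy energy dependence
      "bone": 0.60 * (60 / E) ** 2.0}

# I/I0 = sum_E S(E) exp(-sum_m mu_m(E) L_m): no re-tracing per protocol.
atten = sum(mu[m][None, None, :] * L[m][..., None] for m in L)
sino = -np.log(np.einsum("vde,e->vd", np.exp(-atten), spectrum))
print(sino.shape, float(sino.mean()))
```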

  8. Forecasting and visualization of wildfires in a 3D geographical information system

    NASA Astrophysics Data System (ADS)

    Castrillón, M.; Jorge, P. A.; López, I. J.; Macías, A.; Martín, D.; Nebot, R. J.; Sabbagh, I.; Quintana, F. M.; Sánchez, J.; Sánchez, A. J.; Suárez, J. P.; Trujillo, A.

    2011-03-01

This paper describes a wildfire forecasting application based on a 3D virtual environment and a fire simulation engine. A novel open-source framework is presented for the development of 3D graphics applications over large geographic areas, offering high performance 3D visualization and powerful interaction tools for the Geographic Information Systems (GIS) community. The application includes a remote module that allows simultaneous connections of several users for monitoring a real wildfire event. The system is able to make a realistic composition of what is really happening in the area of the wildfire with dynamic 3D objects and the location of human and material resources in real time, providing a new perspective for analyzing the wildfire information. The user can simulate and visualize the propagation of a fire on the terrain, integrating spatial information on topography and vegetation types with weather and wind data. The application communicates with a remote web service that is in charge of the simulation task. The user may specify several parameters through a friendly interface before the application sends the information to the remote server responsible for carrying out the wildfire forecasting using the FARSITE simulation model. During the process, the server connects to different external resources to obtain up-to-date meteorological data. The client application implements a realistic 3D visualization of the fire evolution on the landscape. A Level Of Detail (LOD) strategy contributes to improve the performance of the visualization system.

  9. Evaluation of realistic layouts for next generation on-scalp MEG: spatial information density maps.

    PubMed

    Riaz, Bushra; Pfeiffer, Christoph; Schneiderman, Justin F

    2017-08-01

While commercial magnetoencephalography (MEG) systems are the functional neuroimaging state-of-the-art in terms of spatio-temporal resolution, MEG sensors have not changed significantly since the 1990s. Interest in newer sensors that operate at less extreme temperatures, e.g., high critical temperature (high-Tc) SQUIDs, optically-pumped magnetometers, etc., is growing because they enable significant reductions in head-to-sensor standoff (on-scalp MEG). Various metrics quantify the advantages of on-scalp MEG, but a single straightforward one is lacking. Previous works have furthermore been limited to arbitrary and/or unrealistic sensor layouts. We introduce spatial information density (SID) maps for quantitative and qualitative evaluations of sensor arrays. SID-maps present the spatial distribution of information a sensor array extracts from a source space while accounting for relevant source and sensor parameters. We use it in a systematic comparison of three practical on-scalp MEG sensor array layouts (based on high-Tc SQUIDs) and the standard Elekta Neuromag TRIUX magnetometer array. Results strengthen the case for on-scalp and specifically high-Tc SQUID-based MEG while providing a path for the practical design of future MEG systems. SID-maps are furthermore general to arbitrary magnetic sensor technologies and source spaces and can thus be used for design and evaluation of sensor arrays for magnetocardiography, magnetic particle imaging, etc.

  10. Strong motion simulation by the composite source modeling: A case study of 1679 M8.0 Sanhe-Pinggu earthquake

    NASA Astrophysics Data System (ADS)

    Liu, Bo-Yan; Shi, Bao-Ping; Zhang, Jian

    2007-05-01

In this study, a composite source model has been used to calculate realistic strong ground motions in the Beijing area caused by the 1679 MS 8.0 Sanhe-Pinggu earthquake. The results could provide useful physical parameters for future seismic hazard analysis in this area. Considering the regional geological/geophysical background, we simulated the scenario earthquake and the associated ground motions in the area ranging from 39.3°N to 41.1°N in latitude and from 115.35°E to 117.55°E in longitude. Some of the key factors which could influence the characteristics of strong ground motion have been discussed, and the resultant peak ground acceleration (PGA) and peak ground velocity (PGV) distributions around the Beijing area have been mapped as well. A comparison of the simulated results with the results derived from the attenuation relation has been made, and the advantages and disadvantages of the composite source model are also discussed in this study. The numerical results, such as the PGA, PGV, peak ground displacement (PGD), and the three-component time histories developed for the Beijing area, have potential applications in the earthquake engineering field and building code design, especially for the evaluation of critical constructions, government decision making and seismic hazard assessment by financial/insurance companies.

  11. Combining EEG and MEG for the Reconstruction of Epileptic Activity Using a Calibrated Realistic Volume Conductor Model

    PubMed Central

    Aydin, Ümit; Vorwerk, Johannes; Küpper, Philipp; Heers, Marcel; Kugel, Harald; Galka, Andreas; Hamid, Laith; Wellmer, Jörg; Kellinghaus, Christoph; Rampp, Stefan; Wolters, Carsten Hermann

    2014-01-01

To increase the reliability of the non-invasive determination of the irritative zone in presurgical epilepsy diagnosis, we introduce here a new experimental and methodological source analysis pipeline that combines the complementary information in EEG and MEG, and apply it to data from a patient suffering from refractory focal epilepsy. Skull conductivity parameters in a six-compartment finite element head model with brain anisotropy, constructed from individual MRI data, are estimated in a calibration procedure using somatosensory evoked potential (SEP) and field (SEF) data. These data are measured in a single run before acquisition of further runs of spontaneous epileptic activity. Our results show that even for single interictal spikes, volume conduction effects dominate over noise and need to be taken into account for accurate source analysis. While cerebrospinal fluid and brain anisotropy influence both modalities, only EEG is sensitive to skull conductivity, and conductivity calibration significantly reduces the difference in especially depth localization of both modalities, emphasizing its importance for combining EEG and MEG source analysis. On the other hand, localization differences which are due to the distinct sensitivity profiles of EEG and MEG persist. In the case of a moderate error in skull conductivity, combined source analysis results can still profit from the different sensitivity profiles of EEG and MEG to accurately determine the location, orientation and strength of the underlying sources. Conversely, significant errors in skull modeling are reflected in EEG reconstruction errors and could reduce the goodness of fit to combined datasets. For combined EEG and MEG source analysis, we therefore recommend calibrating skull conductivity using additionally acquired SEP/SEF data. PMID:24671208

  12. Estimating uncertainties in complex joint inverse problems

    NASA Astrophysics Data System (ADS)

    Afonso, Juan Carlos

    2016-04-01

    Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should be always conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related to the forward and statistical models, I will also address other uncertainties associated with data and uncertainty propagation.

  13. Combined loading criterial influence on structural performance

    NASA Technical Reports Server (NTRS)

    Kuchta, B. J.; Sealey, D. M.; Howell, L. J.

    1972-01-01

    An investigation was conducted to determine the influence of combined loading criteria on the space shuttle structural performance. The study consisted of four primary phases: Phase (1) The determination of the sensitivity of structural weight to various loading parameters associated with the space shuttle. Phase (2) The determination of the sensitivity of structural weight to various levels of loading parameter variability and probability. Phase (3) The determination of shuttle mission loading parameters variability and probability as a function of design evolution and the identification of those loading parameters where inadequate data exists. Phase (4) The determination of rational methods of combining both deterministic time varying and probabilistic loading parameters to provide realistic design criteria. The study results are presented.

  14. Parameter estimation and sensitivity analysis for a mathematical model with time delays of leukemia

    NASA Astrophysics Data System (ADS)

    Cândea, Doina; Halanay, Andrei; Rǎdulescu, Rodica; Tǎlmaci, Rodica

    2017-01-01

    We consider a system of nonlinear delay differential equations that describes the interaction between three competing cell populations: healthy, leukemic and anti-leukemia T cells involved in Chronic Myeloid Leukemia (CML) under treatment with Imatinib. The aim of this work is to establish which model parameters are the most important in the success or failure of leukemia remission under treatment using a sensitivity analysis of the model parameters. For the most significant parameters of the model which affect the evolution of CML disease during Imatinib treatment we try to estimate the realistic values using some experimental data. For these parameters, steady states are calculated and their stability is analyzed and biologically interpreted.
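
Models of this type are systems of delay differential equations; the minimal numerical ingredient is an integrator that keeps a history buffer so the delayed state can be looked up. The sketch below integrates a generic single-delay feedback population equation with fixed-step Euler; the equation and parameters are illustrative, not the paper's CML model.

```python
# Generic delayed-feedback population equation (illustrative only):
#   x'(t) = b * x(t - tau) / (1 + x(t - tau)^2) - d * x(t)
b, d, tau = 2.0, 1.0, 5.0
dt, T = 0.01, 100.0
n_hist = int(tau / dt)

x = [0.5] * (n_hist + 1)          # constant history on [-tau, 0]
for _ in range(int(T / dt)):
    x_lag = x[-1 - n_hist]        # look up x(t - tau) in the history buffer
    x.append(x[-1] + dt * (b * x_lag / (1 + x_lag**2) - d * x[-1]))

print(f"x(T) = {x[-1]:.4f}")
```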

  15. Effects of two-temperature parameter and thermal nonlocal parameter on transient responses of a half-space subjected to ramp-type heating

    NASA Astrophysics Data System (ADS)

    Xue, Zhang-Na; Yu, Ya-Jun; Tian, Xiao-Geng

    2017-07-01

Based upon coupled thermoelasticity and the Green and Lindsay theory, new governing equations of two-temperature thermoelastic theory with a thermal nonlocal parameter are formulated. To more realistically model the thermal loading of a half-space surface, a linear temperature ramping function is adopted. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and inverse Laplace transforms based on Fourier expansion techniques are numerically implemented to obtain the numerical solutions in the time domain. Specific attention is paid to the effects of the thermal nonlocal parameter, ramping time, and two-temperature parameter on the distributions of temperature, displacement, and stress.
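
For reference, a minimal sketch of the Fourier-expansion (Durbin-type) numerical Laplace inversion named above, checked against F(s) = 1/(s + 1), whose inverse is exp(-t). The contour shift a, half-period T, and truncation N follow common rules of thumb and would need tuning for the stiff transforms that arise in thermoelasticity.

```python
import numpy as np

def invert_laplace(F, t, T=20.0, a=0.3, N=2000):
    """Fourier-series approximation of the Bromwich integral, valid for
    0 < t < 2T:  f(t) ~ (e^{at}/T) [ F(a)/2 + sum_k Re(F(a+ik*pi/T) e^{ik*pi*t/T}) ]."""
    k = np.arange(1, N + 1)
    s = a + 1j * k * np.pi / T
    terms = np.real(F(s)[None, :] * np.exp(1j * np.outer(t, k) * np.pi / T))
    return (np.exp(a * t) / T) * (0.5 * np.real(F(np.array([a + 0j])))[0]
                                  + terms.sum(axis=1))

t = np.array([0.5, 1.0, 2.0, 4.0])
print(invert_laplace(lambda s: 1.0 / (s + 1.0), t).round(3))
print(np.exp(-t).round(3))   # exact values for comparison
```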

  16. Automated dynamic analytical model improvement for damped structures

    NASA Technical Reports Server (NTRS)

    Fuh, J. S.; Berman, A.

    1985-01-01

    A method is described to improve a linear nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) ability to properly treat complex valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computationally efficiency without involving eigensolutions and inversion of a large matrix.

  17. Brownian motion model with stochastic parameters for asset prices

    NASA Astrophysics Data System (ADS)

    Ching, Soo Huei; Hin, Pooi Ah

    2013-09-01

The Brownian motion model may not be a completely realistic model for asset prices because in real asset prices the drift μ and volatility σ may change over time. Presently we consider a model in which the parameter x = (μ, σ) is such that its value x(t + Δt) at a short time Δt ahead of the present time t depends on the value of the asset price at time t + Δt as well as the present parameter value x(t) and m-1 other parameter values before time t via a conditional distribution. The Malaysian stock prices are used to compare the performance of the Brownian motion model with fixed parameters with that of the model with stochastic parameters.
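
A hedged sketch of the general idea, simulating geometric Brownian motion while letting (μ, σ) evolve stochastically between steps. The mean-reverting update rule below is an invented stand-in for the paper's conditional-distribution model.

```python
import numpy as np

rng = np.random.default_rng(11)
dt, n = 1 / 252, 252              # daily steps over one trading year
S, mu, sigma = 1.0, 0.08, 0.20    # initial price, drift, volatility

prices = [S]
for _ in range(n):
    # One GBM step with the current parameter values.
    S *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal())
    # Illustrative mean-reverting stochastic updates of (mu, sigma).
    mu = 0.95 * mu + 0.05 * 0.08 + 0.01 * rng.normal()
    sigma = abs(0.95 * sigma + 0.05 * 0.20 + 0.01 * rng.normal())
    prices.append(S)

print(f"S(1 yr) = {prices[-1]:.3f}")
```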

  18. Material and shape optimization for multi-layered vocal fold models using transient loadings.

    PubMed

    Schmidt, Bastian; Leugering, Günter; Stingl, Michael; Hüttner, Björn; Agaimy, Abbas; Döllinger, Michael

    2013-08-01

    Commonly applied models to study vocal fold vibrations in combination with air flow distributions are self-sustained physical models of the larynx consisting of artificial silicone vocal folds. Choosing appropriate mechanical parameters and layer geometries for these vocal fold models while considering simplifications due to manufacturing restrictions is difficult but crucial for achieving realistic behavior. In earlier work by Schmidt et al. [J. Acoust. Soc. Am. 129, 2168-2180 (2011)], the authors presented an approach in which material parameters of a static numerical vocal fold model were optimized to achieve an agreement of the displacement field with data retrieved from hemilarynx experiments. This method is now generalized to a fully transient setting. Moreover in addition to the material parameters, the extended approach is capable of finding optimized layer geometries. Depending on chosen material restriction, significant modifications of the reference geometry are predicted. The additional flexibility in the design space leads to a significantly more realistic deformation behavior. At the same time, the predicted biomechanical and geometrical results are still feasible for manufacturing physical vocal fold models consisting of several silicone layers. As a consequence, the proposed combined experimental and numerical method is suited to guide the construction of physical vocal fold models.

  19. Using Heat Pulses for Quantifying 3d Seepage Velocity in Groundwater-Surface Water Interactions, Considering Source Size, Regime, and Dispersion

    NASA Astrophysics Data System (ADS)

    Zlotnik, V. A.; Tartakovsky, D. M.

    2017-12-01

The study is motivated by the rapid proliferation of field methods for measuring seepage velocity using heat tracing, and is directed at broadening their potential for studies of groundwater-surface water interactions, and the hyporheic zone in particular. In the vast majority of cases, existing methods assume a vertical or horizontal, uniform, 1D seepage velocity. Often, 1D transport is assumed as well, and the Suzuki-Stallman analytical models of heat transport are heavily used to infer seepage velocity. However, both of these assumptions (1D flow and 1D transport) are violated due to the flow geometry, media heterogeneity, and localized heat sources. Attempts to apply more realistic conceptual models still lack a full 3D view, and known 2D examples are treated numerically, or by making additional simplifying assumptions about velocity orientation. Heat pulse instruments and sensors already offer an opportunity to collect data sufficient for 3D seepage velocity identification at the appropriate scale, but interpretation tools for groundwater-surface water interactions in 3D have not been developed yet. We propose an approach that can substantially improve the capabilities of already existing field instruments without additional measurements. The proposed closed-form analytical solutions are simple and well suited for use in inverse modeling. Field applications and ramifications, including data analysis, are discussed. The approach simplifies data collection, determines the 3D seepage velocity, and facilitates interpretation of relations between heat transport parameters, fluid flow, and media properties. Results are obtained using tensor properties of transport parameters, Green's functions, and rotational coordinate transformations using the Euler angles.
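
A minimal sketch of the kind of closed-form building block involved: the temperature perturbation from an instantaneous point heat pulse advected by a uniform seepage velocity in a homogeneous medium with isotropic effective diffusivity. The isotropic simplification and all parameter values are assumptions; the paper's solutions additionally handle tensor dispersion via the rotations mentioned above.

```python
import numpy as np

def dT(x, t, v, D=1e-6, A=1e-4):
    """Temperature perturbation (K) at positions x (m) and time t (s) from an
    instantaneous point pulse at the origin, advected by velocity v (m/s):
    dT = A / (4 pi D t)^{3/2} * exp(-||x - v t||^2 / (4 D t))."""
    x = np.atleast_2d(x) - np.asarray(v) * t
    r2 = np.sum(x**2, axis=1)
    return A / (4 * np.pi * D * t) ** 1.5 * np.exp(-r2 / (4 * D * t))

sensors = np.array([[0.05, 0.0, 0.0], [0.0, 0.05, 0.0], [0.0, 0.0, 0.05]])
v_true = np.array([1e-5, 5e-6, 0.0])      # m/s seepage velocity
for t in (600.0, 3600.0, 7200.0):
    # Fitting peak timing/asymmetry across sensors constrains v in 3D.
    print(t, dT(sensors, t, v_true).round(6))
```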

  20. Fully-Coupled Dynamical Jitter Modeling of Momentum Exchange Devices

    NASA Astrophysics Data System (ADS)

    Alcorn, John

A primary source of spacecraft jitter is mass imbalance within momentum exchange devices (MEDs) used for fine pointing, such as reaction wheels (RWs) and variable-speed control moment gyroscopes (VSCMGs). Although these effects are often characterized through experimentation in order to validate pointing stability requirements, it is of interest to include jitter in a computer simulation of the spacecraft in the early stages of spacecraft development. An estimate of jitter amplitude may be found by modeling MED imbalance torques as external disturbance forces and torques on the spacecraft. In this case, MED mass imbalances are lumped into static and dynamic imbalance parameters, allowing jitter force and torque to be simply proportional to wheel speed squared. A physically realistic dynamic model may be obtained by defining mass imbalances in terms of a wheel center-of-mass location and inertia tensor. The fully-coupled dynamic model allows for momentum and energy validation of the system. This is often critical when modeling additional complex dynamical behavior such as flexible dynamics and fuel slosh. Furthermore, it is necessary to use the fully-coupled model in instances where the relative mass properties of the spacecraft with respect to the RWs cause the simplified jitter model to be inaccurate. This thesis presents a generalized approach to MED imbalance modeling of a rigid spacecraft hub with N RWs or VSCMGs. A discussion is included to convert from manufacturer specifications of RW imbalances to the parameters introduced within each model. Implementations of the fully-coupled RW and VSCMG models derived within this thesis are released open-source as part of the Basilisk astrodynamics software.
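
The simplified model described above reduces to two proportionalities, F = Us ω² and T = Ud ω², with the disturbance rotating at the wheel spin rate. The sketch below evaluates their magnitudes across wheel speeds; the imbalance values are typical-order placeholders in manufacturer-style units, not data for any specific wheel.

```python
import numpy as np

Us = 1.0e-6      # kg m,   static imbalance (placeholder value)
Ud = 1.0e-7      # kg m^2, dynamic imbalance (placeholder value)

for rpm in (1000.0, 3000.0, 6000.0):
    omega = rpm * 2 * np.pi / 60.0       # wheel speed in rad/s
    F = Us * omega**2                    # N, radial disturbance force
    T = Ud * omega**2                    # N m, disturbance torque
    print(f"{rpm:6.0f} rpm: |F| = {F:7.4f} N, |T| = {T:8.5f} N m")
```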

  1. Sensitivity analysis on the performances of a closed-loop Ground Source Heat Pump

    NASA Astrophysics Data System (ADS)

    Casasso, Alessandro; Sethi, Rajandrea

    2014-05-01

    Ground Source Heat Pumps (GSHP) permit a significant reduction of greenhouse gas emissions, and the margins for economic saving of this technology are strongly correlated to the long-term sustainability of the exploitation of the heat stored in the soil. The operation of a GSHP over its lifetime should therefore be modelled considering realistic conditions, and a thorough characterization of the physical properties of the soil is essential to avoid large prediction errors. In this work, a BHE modelling procedure with the finite-element code FEFLOW is presented. Starting from the governing equations of heat transport in the soil around a GSHP and inside the BHE, the most important parameters are identified and the adopted program settings are explained. A sensitivity analysis is then carried out both on the design parameters of the heat exchanger, in order to understand the margins of improvement of a careful design and installation, and on the physical properties of the soil, with the aim of quantifying the uncertainty induced by their variability. The relative importance of each parameter is assessed by comparing the statistical distributions of the fluid temperatures and estimating the energy consumption of the heat pump, and practical conclusions are drawn from these results about site characterization, design and installation of a BHE. References: Casasso A., Sethi R., 2014. Efficiency of closed loop geothermal heat pumps: A sensitivity analysis, Renewable Energy 62, pp. 737-746. Chiasson A.C., Rees S.J., Spitler J.D., 2000. A preliminary assessment of the effects of groundwater flow on closed-loop ground-source heat pump systems, ASHRAE Transactions 106, pp. 380-393. Delaleux F., Py X., Olives R., Dominguez A., 2012. Enhancement of geothermal borehole heat exchangers performances by improvement of bentonite grouts conductivity, Applied Thermal Engineering 33-34, pp. 92-99. Diao N., Li Q., Fang Z., 2004. Heat transfer in ground heat exchangers with groundwater advection, International Journal of Thermal Sciences 43, pp. 1203-1211. Michopoulos A., Kyriakis N., 2010. The influence of a vertical ground heat exchanger length on the electricity consumption of the heat pumps, Renewable Energy 35, pp. 1403-1407.
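    For readers who want a feel for the soil parameters being varied, here is a minimal sketch of the classical infinite line-source approximation for a borehole heat exchanger (assuming SciPy; this is a textbook simplification, not the FEFLOW finite-element model used in the study, and all numerical values are hypothetical):

```python
import numpy as np
from scipy.special import exp1

def line_source_dT(q, lam, alpha, r, t):
    """Infinite line-source temperature change around a BHE.

    q     heat extraction rate per unit borehole length [W/m]
    lam   soil thermal conductivity [W/(m K)]
    alpha soil thermal diffusivity [m^2/s]
    r     radial distance from the borehole axis [m]
    t     elapsed time [s]
    """
    return q / (4.0 * np.pi * lam) * exp1(r**2 / (4.0 * alpha * t))

# One-at-a-time sensitivity to soil conductivity after 30 days of operation.
for lam in (1.5, 2.0, 2.5):
    dT = line_source_dT(q=30.0, lam=lam, alpha=1.0e-6, r=0.06, t=30 * 24 * 3600.0)
    print(f"lambda = {lam} W/(m K): dT = {dT:.1f} K at the borehole wall")
```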

  2. CONVECTION THEORY AND SUB-PHOTOSPHERIC STRATIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnett, David; Meakin, Casey; Young, Patrick A., E-mail: darnett@as.arizona.edu, E-mail: casey.meakin@gmail.com, E-mail: patrick.young.1@asu.edu

    2010-02-20

    As a preliminary step toward a complete theoretical integration of three-dimensional compressible hydrodynamic simulations into stellar evolution, convection at the surface and sub-surface layers of the Sun is re-examined, from a restricted point of view, in the language of mixing-length theory (MLT). Requiring that MLT use a hydrodynamically realistic dissipation length gives a new constraint on solar models. While the stellar structure which results is similar to that obtained by the Yale Rotational Evolution Code (Guenther et al.; Bahcall and Pinsonneault) and Garching models (Schlattl et al.), the theoretical picture differs. A new quantitative connection is made between macro-turbulence, micro-turbulence, and the convective velocity scale at the photosphere, which has finite values. The 'geometric parameter' in MLT is found to correspond more reasonably with the thickness of the superadiabatic region (SAR), as it must for consistency in MLT, and its integrated effect may correspond to that of the strong downward plumes which drive convection (Stein and Nordlund), and thus has a physical interpretation even in MLT. If we crudely require the thickness of the SAR to be consistent with the 'geometric factor' used in MLT, there is no longer a free parameter, at least in principle. Use of three-dimensional simulations of both adiabatic convection and stellar atmospheres will allow the determination of the dissipation length and the geometric parameter (i.e., the entropy jump) more realistically, and with no astronomical calibration. A physically realistic treatment of convection in stellar evolution will require substantial additional modifications beyond MLT, including nonlocal effects of kinetic energy flux, entrainment (the most dramatic difference from MLT found by Meakin and Arnett), rotation, and magnetic fields.

  3. The Direct Lighting Computation in Global Illumination Methods

    NASA Astrophysics Data System (ADS)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
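    A minimal sketch of a direct-lighting estimator of the kind discussed here (assuming NumPy; the scene, the rectangular light, and the `visible` callback are hypothetical stand-ins, and plain uniform area sampling is used rather than the paper's new sample generation method):

```python
import numpy as np

rng = np.random.default_rng(0)

def direct_lighting(x, n, corner, edge_u, edge_v, Le, brdf, visible, N=64):
    """Monte Carlo estimate of direct lighting at surface point x.

    Uniformly samples points y on a rectangular area light and averages
    Le * brdf * G(x, y) / pdf, where G is the geometry term and the pdf
    of uniform area sampling is 1 / area.
    """
    normal_vec = np.cross(edge_u, edge_v)
    area = np.linalg.norm(normal_vec)
    n_l = normal_vec / area                    # light normal
    total = 0.0
    for _ in range(N):
        y = corner + rng.random() * edge_u + rng.random() * edge_v
        d = y - x
        r2 = d @ d
        w = d / np.sqrt(r2)                    # direction from x toward y
        cos_x = max(n @ w, 0.0)                # cosine at the receiver
        cos_y = max(-(n_l @ w), 0.0)           # cosine at the light
        if cos_x > 0.0 and cos_y > 0.0 and visible(x, y):
            total += Le * brdf * cos_x * cos_y / r2 * area   # /pdf == *area
    return total / N

# Usage: Lambertian point below a unit-square light (edges ordered so the
# light normal faces downward); visibility is trivially true in this scene.
est = direct_lighting(
    x=np.array([0.0, 0.0, 0.0]), n=np.array([0.0, 0.0, 1.0]),
    corner=np.array([-0.5, -0.5, 2.0]),
    edge_u=np.array([0.0, 1.0, 0.0]), edge_v=np.array([1.0, 0.0, 0.0]),
    Le=10.0, brdf=1.0 / np.pi, visible=lambda x, y: True)
print(f"estimated direct radiance: {est:.3f}")
```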

  4. Informed spectral analysis: audio signal parameter estimation using side information

    NASA Astrophysics Data System (ADS)

    Fourer, Dominique; Marchand, Sylvain

    2013-12-01

    Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound shows the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach, which consists in directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by estimation from the signal, and may require a larger bitrate and a loss of compatibility with existing file formats. The purpose of this article is to propose a compromise approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, since the audio signal is known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling, a well-known model with practical applications for which theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.

  5. Uncertainty analysis of an inflow forecasting model: extension of the UNEEC machine learning-based method

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca; Lal Shrestha, Durga; Solomatine, Dimitri

    2010-05-01

    This research presents an extension of the UNEEC (Uncertainty Estimation based on Local Errors and Clustering; Shrestha and Solomatine, 2006, 2008; Solomatine and Shrestha, 2009) method in the direction of explicit inclusion of parameter uncertainty. The UNEEC method assumes that there is an optimal model and that the residuals of the model can be used to assess the uncertainty of the model prediction. It is assumed that all sources of uncertainty, including input, parameter and model structure uncertainty, are explicitly manifested in the model residuals. In this research, these assumptions are relaxed, and the UNEEC method is extended to consider parameter uncertainty as well (abbreviated as UNEEC-P). In UNEEC-P, we first use Monte Carlo (MC) sampling in parameter space to generate N model realizations (each of which is a time series), estimate the prediction quantiles based on the empirical distribution functions of the model residuals considering all the residual realizations, and only then apply the standard UNEEC method, which encapsulates the uncertainty of a hydrologic model (expressed by quantiles of the error distribution) in a machine learning model (e.g., an ANN). UNEEC-P is applied first to a linear regression model of synthetic data, and then to a real case study of forecasting inflow to Lake Lugano in northern Italy. The inflow forecasting model is a stochastic heteroscedastic model (Pianosi and Soncini-Sessa, 2009). The preliminary results show that the UNEEC-P method produces wider uncertainty bounds, which is consistent with the fact that the method also considers parameter uncertainty of the optimal model. In the future, the UNEEC method will be further extended to consider input and structure uncertainty, which will provide more realistic estimation of model predictions.
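    The parameter-uncertainty step of UNEEC-P can be sketched on a toy linear model (assuming NumPy; the priors and data are hypothetical, and the final machine-learning encapsulation step is omitted): parameters are Monte Carlo-sampled, residuals are pooled across all realizations, and empirical quantiles of the pooled residuals give the prediction bounds.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "optimal" model y = a*x + b with synthetic observations.
x = np.linspace(0.0, 10.0, 50)
y_obs = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)

# Monte Carlo sampling in parameter space around the optimal values.
N = 500
a = rng.normal(2.0, 0.1, N)
b = rng.normal(1.0, 0.2, N)

# Residuals pooled over all parameter realizations: in UNEEC-P, parameter
# uncertainty is manifested directly in this residual ensemble.
residuals = y_obs[None, :] - (a[:, None] * x[None, :] + b[:, None])

# Empirical 5%/95% prediction quantiles of the pooled error distribution;
# UNEEC proper would next encapsulate these quantiles in an ML model (ANN).
lo, hi = np.percentile(residuals, [5.0, 95.0])
print(f"90% uncertainty band: [{lo:.2f}, {hi:.2f}] around the prediction")
```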

  6. Bivalves: From individual to population modelling

    NASA Astrophysics Data System (ADS)

    Saraiva, S.; van der Meer, J.; Kooijman, S. A. L. M.; Ruardij, P.

    2014-11-01

    An individual based population model for bivalves was designed, built and tested in a 0D approach to simulate the population dynamics of a mussel bed located in an intertidal area. The processes at the individual level were simulated following the dynamic energy budget theory, whereas initial egg mortality, background mortality, food competition, and predation (including cannibalism) were additional population processes. Model properties were studied through the analysis of theoretical scenarios and by simulation of different mortality parameter combinations in a realistic setup, imposing measured environmental conditions. Realistic criteria were applied to narrow down the possible combinations of parameter values. Field observations obtained in a long-term, multi-station monitoring program were compared with the model scenarios. The realistically selected modeling scenarios were able to reasonably reproduce the timing of some peaks in individual abundances in the mussel bed and its size distribution, but the number of individuals was not well predicted. The results suggest that mortality in the early life stages (egg and larvae) plays an important role in population dynamics, whether through initial egg mortality, larval dispersion, settlement failure or shrimp predation. Future steps include coupling the population model with a hydrodynamic and biogeochemical model to improve the simulation of egg/larvae dispersion, settlement probability and food transport, and also to simulate the feedback of the organisms' activity on the water column properties, which will result in an improved characterization of food quantity and quality.

  7. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    PubMed

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design allows these sources to share the free parameters of the filter shape and to be related to each other through the photon interaction in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4 × 4, 10 × 10, and 20 × 20 cm² fields at multiple depths. For the 2D dose distributions calculated in the heterogeneous lung phantom, the 2D gamma pass rate was 100% for 6 and 15 MV beams. The model optimization time was less than 4 min. The proposed source model optimization process accurately produces photon fluence spectra from a linac using valid physical properties, without detailed knowledge of the geometry of the linac head, and with minimal optimization time. © 2018 American Association of Physicists in Medicine.
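    The coupling between the two sources can be illustrated with a minimal sketch (hypothetical spectrum, attenuation coefficients, and filter thickness; not the authors' parameterization): the primary fluence is the filter-attenuated beam, and the secondary source is fed by exactly the decrement removed in the attenuation process, so the two share the filter-shape parameters.

```python
import numpy as np

def filter_sources(spectrum, mu, thickness):
    """Split a bremsstrahlung spectrum into primary and secondary parts.

    spectrum   photon fluence per energy bin (arbitrary units)
    mu         linear attenuation coefficient per bin [1/cm]
    thickness  flattening-filter path length for this ray [cm]

    The primary source is the attenuated beam; the secondary source is
    fed by the decrement removed during attenuation, so both depend on
    the same filter-shape parameters.
    """
    transmitted = spectrum * np.exp(-mu * thickness)
    removed = spectrum - transmitted      # available for scatter
    return transmitted, removed

# Hypothetical 3-bin spectrum and attenuation coefficients.
spec = np.array([1.0, 0.8, 0.5])
mu = np.array([0.30, 0.20, 0.15])
primary, secondary_feed = filter_sources(spec, mu, thickness=2.0)
print(primary, secondary_feed)
```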

  8. A statistical survey of heat input parameters into the cusp thermosphere

    NASA Astrophysics Data System (ADS)

    Moen, J. I.; Skjaeveland, A.; Carlson, H. C.

    2017-12-01

    Based on three winters of observational data, we present those ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction, in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements, up to doublings. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te and Ti, with consecutive 2-3 minute windshield-wipe scans of 1000 × 500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection foot point. Use of our data-based parameter inputs can make order-of-magnitude corrections to input data driving thermosphere models, enabling removal of previous twofold drag errors.

  9. The gravitational wave background from massive black hole binaries in Illustris: spectral features and time to detection with pulsar timing arrays

    NASA Astrophysics Data System (ADS)

    Kelley, Luke Zoltan; Blecha, Laura; Hernquist, Lars; Sesana, Alberto; Taylor, Stephen R.

    2017-11-01

    Pulsar timing arrays (PTAs) around the world are using the incredible consistency of millisecond pulsars to measure low-frequency gravitational waves from (super)massive black hole (MBH) binaries. We use comprehensive MBH merger models based on cosmological hydrodynamic simulations to predict the spectrum of the stochastic gravitational wave background (GWB). We use real time-of-arrival specifications from the European, NANOGrav, Parkes, and International PTA (IPTA) to calculate realistic times to detection of the GWB across a wide range of model parameters. In addition to exploring the parameter space of environmental hardening processes (in particular: stellar scattering efficiencies), we have expanded our models to include eccentric binary evolution which can have a strong effect on the GWB spectrum. Our models show that strong stellar scattering and high characteristic eccentricities enhance the GWB strain amplitude near the PTA-sensitive `sweet-spot' (near the frequency f = 1 yr-1), slightly improving detection prospects in these cases. While the GWB amplitude is degenerate between cosmological and environmental parameters, the location of a spectral turnover at low frequencies (f ≲ 0.1 yr-1) is strongly indicative of environmental coupling. At high frequencies (f ≳ 1 yr-1), the GWB spectral index can be used to infer the number density of sources and possibly their eccentricity distribution. Even with merger models that use pessimistic environmental and eccentricity parameters, if the current rate of PTA expansion continues, we find that the IPTA is highly likely to make a detection within about 10 yr.
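    For orientation, one common broken-power-law parameterization of such a GWB spectrum with a low-frequency turnover is sketched below (assuming NumPy; the amplitude, bend frequency, and steepness values are hypothetical placeholders, not the fitted models of this work):

```python
import numpy as np

F_YR = 1.0 / (365.25 * 24 * 3600)   # frequency of 1/yr in Hz

def gwb_strain(f, A=1.0e-15, f_bend=0.3 * F_YR, kappa=2.5):
    """Characteristic strain with a low-frequency turnover.

    GW-driven inspiral alone gives h_c = A (f / f_yr)^(-2/3); coupling
    to the environment (e.g., stellar scattering) depletes low-frequency
    binaries, which this broken power law mimics below f_bend.
    """
    power_law = A * (f / F_YR) ** (-2.0 / 3.0)
    turnover = (1.0 + (f_bend / f) ** kappa) ** -0.5
    return power_law * turnover

f = np.logspace(-9.5, -7.0, 6)      # Hz, roughly the PTA band
print(gwb_strain(f))
```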

  10. Relating Data and Models to Characterize Parameter and Prediction Uncertainty

    EPA Science Inventory

    Applying PBPK models in risk analysis requires that we realistically assess the uncertainty of relevant model predictions in as quantitative a way as possible. The reality of human variability may add a confusing feature to the overall uncertainty assessment, as uncertainty and variability...

  11. Localization Accuracy of Distributed Inverse Solutions for Electric and Magnetic Source Imaging of Interictal Epileptic Discharges in Patients with Focal Epilepsy.

    PubMed

    Heers, Marcel; Chowdhury, Rasheda A; Hedrich, Tanguy; Dubeau, François; Hall, Jeffery A; Lina, Jean-Marc; Grova, Christophe; Kobayashi, Eliane

    2016-01-01

    Distributed inverse solutions aim to realistically reconstruct the origin of interictal epileptic discharges (IEDs) from noninvasively recorded electroencephalography (EEG) and magnetoencephalography (MEG) signals. Our aim was to compare the performance of different distributed inverse solutions in localizing IEDs: coherent maximum entropy on the mean (cMEM), hierarchical Bayesian implementations of independent identically distributed sources (IID, minimum norm prior) and spatially coherent sources (COH, spatial smoothness prior). Source maxima (i.e., the vertex with the maximum source amplitude) of IEDs in 14 EEG and 19 MEG studies from 15 patients with focal epilepsy were analyzed. We visually compared their concordance with intracranial EEG (iEEG) based on 17 cortical regions of interest and their spatial dispersion around source maxima. Magnetic source imaging (MSI) maxima from cMEM were most often confirmed by iEEG (cMEM: 14/19, COH: 9/19, IID: 8/19 studies). COH electric source imaging (ESI) maxima co-localized best with iEEG (cMEM: 8/14, COH: 11/14, IID: 10/14 studies). In addition, cMEM was less spatially spread than COH and IID for ESI and MSI (p < 0.001 Bonferroni-corrected post hoc t test). Highest positive predictive values for cortical regions with IEDs in iEEG could be obtained with cMEM for MSI and with COH for ESI. Additional realistic EEG/MEG simulations confirmed our findings. Accurate spatially extended sources, as found in cMEM (ESI and MSI) and COH (ESI) are desirable for source imaging of IEDs because this might influence surgical decision. Our simulations suggest that COH and IID overestimate the spatial extent of the generators compared to cMEM.

  12. Environmentally realistic concentrations of the antibiotic Trimethoprim affect haemocyte parameters but not antioxidant enzyme activities in the clam Ruditapes philippinarum.

    PubMed

    Matozzo, Valerio; De Notaris, Chiara; Finos, Livio; Filippini, Raffaella; Piovan, Anna

    2015-11-01

    Several biomarkers were measured to evaluate the effects of Trimethoprim (TMP; 300, 600 and 900 ng/L) in the clam Ruditapes philippinarum after exposure for 1, 3 and 7 days. The actual TMP concentrations were also measured in the experimental tanks. The total haemocyte count increased significantly in clams exposed for 7 days, whereas alterations in haemocyte volume were observed after 1 and 3 days of exposure. Haemocyte proliferation increased significantly in animals exposed for 1 and 7 days, whereas haemocyte lysate lysozyme activity decreased significantly after 1 and 3 days. In addition, TMP significantly increased haemolymph lactate dehydrogenase activity after 3 and 7 days. Regarding antioxidant enzymes, only a significant time-dependent effect on CAT activity was recorded. This study demonstrated that environmentally realistic concentrations of TMP affect haemocyte parameters in clams, suggesting that haemocytes are a useful cellular model for the assessment of the impact of TMP on bivalves. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Radio weak lensing shear measurement in the visibility domain - II. Source extraction

    NASA Astrophysics Data System (ADS)

    Rivi, M.; Miller, L.

    2018-05-01

    This paper extends the method introduced in Rivi et al. (2016b) to measure galaxy ellipticities in the visibility domain for radio weak lensing surveys. In that paper, we focused on the development and testing of the method for the simple case of individual galaxies located at the phase centre, and proposed to extend it to the realistic case of many sources in the field of view by isolating the visibilities of each source with a faceting technique. In this second paper, we present a detailed algorithm for source extraction in the visibility domain and show its effectiveness as a function of the source number density by running simulations of SKA1-MID observations in the band 950-1150 MHz and comparing original and measured values of galaxies' ellipticities. Shear measurements from a realistic population of 10⁴ galaxies randomly located in a field of view of 1 deg² (i.e. the source density expected for the current radio weak lensing survey proposal with SKA1) are also performed. At SNR ≥ 10, the multiplicative bias is only a factor of 1.5 worse than that found when analysing individual sources, and is still comparable to the bias values reported for similar measurement methods at optical wavelengths. The additive bias is unchanged from the case of individual sources, but it is significantly larger than typically found in optical surveys. This bias depends on the shape of the uv coverage and we suggest that a uv-plane weighting scheme to produce a more isotropic shape could reduce and control the additive bias.

  14. Developing a framework for a novel multi-disciplinary, multi-agency intervention(s), to improve medication management in community-dwelling older people on complex medication regimens (MEMORABLE)--a realist synthesis.

    PubMed

    Maidment, Ian; Booth, Andrew; Mullan, Judy; McKeown, Jane; Bailey, Sylvia; Wong, Geoffrey

    2017-07-03

    Medication-related adverse events have been estimated to be responsible for 5700 deaths and cost the UK £750 million annually. This burden falls disproportionately on older people. Outcomes from interventions to optimise medication management are caused by multiple context-sensitive mechanisms. The MEdication Management in Older people: REalist Approaches BAsed on Literature and Evaluation (MEMORABLE) project uses realist synthesis to understand how, why, for whom and in what context interventions to improve medication management in older people on complex medication regimens residing in the community work. This realist synthesis uses secondary data and primary data from interviews to develop the programme theory. A realist logic of analysis will synthesise data both within and across the two data sources to inform the design of a complex intervention(s) to help improve medication management in older people. (1) Literature review: the review (using realist synthesis) contains five stages to develop an initial programme theory to understand why processes are more or less successful and under which situations: focussing of the research question; developing the initial programme theory; developing the search strategy; selection and appraisal based on relevance and rigour; and data analysis/synthesis to develop and refine the programme theory and context, intervention and mechanism configurations. (2) Realist interviews: realist interviews will explore and refine our understanding of the programme theory developed from the realist synthesis. Up to 30 older people and their informal carers (15 older people with multi-morbidity, 10 informal carers and 5 older people with dementia), and 20 care staff will be interviewed. (3) Developing a framework for the intervention(s): data from the realist synthesis and interviews will be used to refine the programme theory for the intervention(s) to identify the mechanisms that need to be 'triggered' and the contexts related to these mechanisms. Intervention strategies that change the contexts so that the mechanisms are triggered to produce desired outcomes will be developed. Feedback on these strategies will be obtained. This realist synthesis aims to develop a framework (underpinned by our programme theory) for a novel multi-disciplinary, multi-agency intervention(s) to improve medication management in community-dwelling older people on complex medication regimens. PROSPERO CRD42016043506.

  15. NEDE: an open-source scripting suite for developing experiments in 3D virtual environments.

    PubMed

    Jangraw, David C; Johri, Ansh; Gribetz, Meron; Sajda, Paul

    2014-09-30

    As neuroscientists endeavor to understand the brain's response to ecologically valid scenarios, many are leaving behind hyper-controlled paradigms in favor of more realistic ones. This movement has made the use of 3D rendering software an increasingly compelling option. However, mastering such software and scripting rigorous experiments requires a daunting amount of time and effort. To reduce these startup costs and make virtual environment studies more accessible to researchers, we demonstrate a naturalistic experimental design environment (NEDE) that allows experimenters to present realistic virtual stimuli while still providing tight control over the subject's experience. NEDE is a suite of open-source scripts built on the widely used Unity3D game development software, giving experimenters access to powerful rendering tools while interfacing with eye tracking and EEG, randomizing stimuli, and providing custom task prompts. Researchers using NEDE can present a dynamic 3D virtual environment in which randomized stimulus objects can be placed, allowing subjects to explore in search of these objects. NEDE interfaces with a research-grade eye tracker in real-time to maintain precise timing records and sync with EEG or other recording modalities. Python offers an alternative for experienced programmers who feel comfortable mastering and integrating the various toolboxes available. NEDE combines many of these capabilities with an easy-to-use interface and, through Unity's extensive user base, a much more substantial body of assets and tutorials. Our flexible, open-source experimental design system lowers the barrier to entry for neuroscientists interested in developing experiments in realistic virtual environments. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Searching for gravitational waves from neutron stars

    NASA Astrophysics Data System (ADS)

    Idrisy, Ashikuzzaman

    In this dissertation we discuss gravitational waves (GWs) and their neutron star (NS) sources. We begin with a general discussion of the motivation for searching for GWs and the indirect experimental evidence of their existence. Then we discuss the various mechanisms through which NSs can emit GWs, paying special attention to the r-mode oscillations. Finally, we end with a discussion of GW detection. In Chapter 2 we describe research into the frequencies of r-mode oscillations. Knowing these frequencies can be useful for guiding and interpreting gravitational wave and electromagnetic observations. The frequencies of slowly rotating, barotropic, and non-magnetic Newtonian stars are well known, but subject to various corrections. After making simple estimates of the relative strengths of these corrections, we conclude that relativistic corrections are the most important. For this reason we extend the formalism of K. H. Lockitch, J. L. Friedman, and N. Andersson [Phys. Rev. D 68, 124010 (2003)], who consider relativistic polytropes, to the case of realistic equations of state. This formulation results in perturbation equations which are solved using a spectral method. We find that for realistic equations of state the r-mode frequency ranges from 1.39 to 1.57 times the spin frequency of the star when the relativistic compactness parameter (M/R) is varied over the astrophysically motivated interval 0.110-0.310. Following a successful r-mode detection, our results can help constrain the high-density equation of state. In Chapter 3 we present a technical introduction to the data analysis tools used in GW searches. Starting from the plane-wave solutions derived in Chapter 1, we develop the F-statistic used in the matched filtering technique. This technique relies on coherently integrating the GW detector's data stream with a theoretically modeled wave signal. The statistic is used to test the null hypothesis that the data contain no signal. In this chapter we also discuss how to generate the parameter space of a GW search so as to cover the largest physical range of parameters while keeping the search computationally feasible. Finally, we discuss the time-domain solar system barycentered resampling algorithm as a way to reduce the computational cost of the analysis. In Chapter 4 we discuss a search for GWs from two supernova remnants, G65.7 and G330.2. The searches were conducted using data from the 6th science run of the LIGO detectors. Since the searches were modeled on the Cassiopeia A search paper, Abadie et al. [Astrophys. J. 722, 1504-1513, 2010], we also used the frequency and the first and second derivatives of the frequency as the parameter space of the search. There are two main differences from the previous search. The first is the use of the resampling algorithm, which sped up the calculation of the F-statistic by a factor of 3 and thus allowed longer stretches of data to be coherently integrated. Being able to integrate more data meant that we could beat the indirect limit on GWs expected from these sources. We used a 51 day integration time for G65.7 and 24 days for G330.2. The second difference is that the analysis pipeline is now more automated. This allows for a more efficient data analysis process. We did not find a credible source of GWs and so we placed upper limits on the gravitational wave strain, ellipticity, and r-mode amplitude of the sources. The best upper limit for the strain was 3.0 × 10⁻²⁵; for ellipticity it was 7.0 × 10⁻⁶; and for r-mode amplitude it was 2.2 × 10⁻⁴.
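    The heart of the matched-filtering technique described in Chapter 3 can be sketched in a few lines (white Gaussian noise and a known template; the real searches analytically maximize the F-statistic over amplitude parameters and include barycentric resampling, both omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)

def matched_filter_snr(data, template, sigma):
    """Matched-filter SNR of a known template in white Gaussian noise.

    rho = <d, h> / (sigma * sqrt(<h, h>)). The F-statistic used in
    continuous-wave searches generalizes this by maximizing analytically
    over the unknown amplitude and polarization parameters.
    """
    return (data @ template) / (sigma * np.sqrt(template @ template))

# Toy continuous-wave-like signal buried well below the noise floor.
t = np.linspace(0.0, 100.0, 10000)
template = np.sin(2.0 * np.pi * 0.5 * t)
sigma = 1.0
data = 0.05 * template + rng.normal(0.0, sigma, t.size)
print(f"recovered SNR ~ {matched_filter_snr(data, template, sigma):.1f}")
```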

  17. Realistic molecular model of kerogen's nanostructure

    NASA Astrophysics Data System (ADS)

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E.; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J.-M.; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp²/sp³ hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  18. Realistic molecular model of kerogen's nanostructure.

    PubMed

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J-M; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp²/sp³ hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  19. Identifiability and estimation of multiple transmission pathways in cholera and waterborne disease.

    PubMed

    Eisenberg, Marisa C; Robertson, Suzanne L; Tien, Joseph H

    2013-05-07

    Cholera and many waterborne diseases exhibit multiple characteristic timescales or pathways of infection, which can be modeled as direct and indirect transmission. A major public health issue for waterborne diseases involves understanding the modes of transmission in order to improve control and prevention strategies. An important epidemiological question is: given data for an outbreak, can we determine the role and relative importance of direct vs. environmental/waterborne routes of transmission? We examine whether parameters for a differential equation model of waterborne disease transmission dynamics can be identified, both in the ideal setting of noise-free data (structural identifiability) and in the more realistic setting in the presence of noise (practical identifiability). We used a differential algebra approach together with several numerical approaches, with a particular emphasis on identifiability of the transmission rates. To examine these issues in a practical public health context, we apply the model to a recent cholera outbreak in Angola (2006). Our results show that the model parameters, including both water and person-to-person transmission routes, are globally structurally identifiable, although they become unidentifiable when the environmental transmission timescale is fast. Even for water dynamics within the identifiable range, when noisy data are considered, only a combination of the water transmission parameters can practically be estimated. This makes the waterborne transmission parameters difficult to estimate, leading to inaccurate estimates of important epidemiological parameters such as the basic reproduction number (R0). However, measurements of pathogen persistence time in environmental water sources or measurements of pathogen concentration in the water can improve model identifiability and allow for more accurate estimation of waterborne transmission pathway parameters as well as R0. Parameter estimates for the Angola outbreak suggest that both transmission pathways are needed to explain the observed cholera dynamics. These results highlight the importance of incorporating environmental data when examining waterborne disease. Copyright © 2013 Elsevier Ltd. All rights reserved.
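    A minimal SIWR-type sketch (assuming SciPy; the parameter values are illustrative, not the Angola fits) shows the two transmission routes whose identifiability is at issue:

```python
from scipy.integrate import solve_ivp

def siwr(t, y, beta_i, beta_w, gamma, xi):
    """SIWR model: direct (S-I) and waterborne (S-W) transmission.

    With W scaled so that pathogen shedding and decay share the rate xi,
    the water compartment relaxes toward I on the timescale 1/xi; when
    xi is large (fast water dynamics), beta_w and beta_i become hard to
    disentangle, as discussed above.
    """
    S, I, W, R = y
    infection = beta_i * S * I + beta_w * S * W
    return [-infection,
            infection - gamma * I,
            xi * (I - W),
            gamma * I]

# Illustrative parameters: beta_i, beta_w, gamma, xi.
sol = solve_ivp(siwr, (0.0, 120.0), [0.999, 0.001, 0.0, 0.0],
                args=(0.25, 0.25, 0.2, 0.05))
print("peak prevalence:", sol.y[1].max())
```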

  20. Assessment of spatial distribution of porosity and aquifer geohydraulic parameters in parts of the Tertiary - Quaternary hydrogeoresource of south-eastern Nigeria

    NASA Astrophysics Data System (ADS)

    George, N. J.; Akpan, A. E.; Akpan, F. S.

    2017-12-01

    An integrated study exploring information deduced from an extensive surface resistivity survey in three Local Government Areas of Akwa Ibom State, Nigeria, together with data from hydrogeological sources obtained from water boreholes, has been carried out to economically estimate porosity and the coefficient of permeability/hydraulic conductivity in parts of the clastic Tertiary-Quaternary sediments of the Niger Delta region. Generally, these parameters are estimated from empirical analysis of core samples and pumping test data generated from boreholes in the laboratory. However, this analysis is not only costly and time consuming, but also limited in areal coverage. The chosen technique employs surface resistivity data, core samples and pumping test data in order to estimate porosity and aquifer hydraulic parameters (transverse resistance, hydraulic conductivity and transmissivity). In correlating the two sets of results, porosity and hydraulic conductivity were observed to be more elevated near the riverbanks. Empirical models utilising the Archie, Waxman-Smits and Kozeny-Carman-Bear relations were employed to characterise the formation parameters, with good fits. The effect of surface conduction occasioned by clay, usually disregarded in Archie's model, was estimated to be 2.58 × 10⁻⁵ siemens. This conductance can be used as a corrective factor to the conduction values obtained from Archie's equation. Interpretation aids such as graphs, mathematical models and maps, geared towards realistic conclusions about the interrelationship between porosity and other aquifer parameters, were generated. The hydraulic conductivity estimated from the Waxman-Smits model was approximately 9.6 × 10⁻⁵ m/s everywhere. This indicates that there is no pronounced change in the quality of the saturating fluid or in the geological formations that serve as aquifers, even though the porosities vary. The derived parameter relations can be used to estimate geohydraulic parameters in other locations with little or no borehole data.
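    As an illustration of how such empirical relations chain together, the sketch below estimates porosity from Archie's law and feeds it into a Kozeny-Carman hydraulic conductivity relation (assuming NumPy; the resistivities, grain size, and Archie constants are hypothetical, and the clay surface-conduction correction discussed above is ignored):

```python
def archie_porosity(rho_bulk, rho_water, a=1.0, m=2.0):
    """Porosity from Archie's law: rho_bulk = a * rho_water * phi^(-m)."""
    return (a * rho_water / rho_bulk) ** (1.0 / m)

def kozeny_carman_K(phi, d10, g=9.81, nu=1.0e-6):
    """Hydraulic conductivity [m/s] from a Kozeny-Carman relation,
    using the effective grain size d10 [m] and water viscosity nu."""
    return (g / nu) * (d10**2 / 180.0) * phi**3 / (1.0 - phi) ** 2

# Hypothetical sounding: bulk resistivity 120 ohm-m, pore water 15 ohm-m.
phi = archie_porosity(120.0, 15.0)
print(f"phi = {phi:.2f}, K = {kozeny_carman_K(phi, d10=2.0e-4):.2e} m/s")
```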

  1. A General Formulation of the Source Confusion Statistics and Application to Infrared Galaxy Surveys

    NASA Astrophysics Data System (ADS)

    Takeuchi, Tsutomu T.; Ishii, Takako T.

    2004-03-01

    Source confusion has been a long-standing problem in astronomical history. In previous formulations of the confusion problem, sources are assumed to be distributed homogeneously on the sky. This fundamental assumption is, however, not realistic in many applications. In this work, by making use of the point field theory, we derive general analytic formulae for the confusion problem with arbitrary distribution and correlation functions. As a typical example, we apply these new formulae to the source confusion of infrared galaxies. We first calculate the confusion statistics for power-law galaxy number counts as a test case. When the slope of the differential number counts, γ, is steep, the confusion limits become much brighter and the probability distribution function (PDF) of the fluctuation field is strongly distorted. Then we estimate the PDF and confusion limits based on a realistic number count model for infrared galaxies. The gradual flattening of the slope of the source counts makes the clustering effect rather mild. Clustering effects result in an increase of the limiting flux density by ~10%. In this case, the peak probability of the PDF decreases by up to ~15% and its tail becomes heavier. Although the effects are relatively small, they will be strong enough to affect the estimation of galaxy evolution from number counts or fluctuation statistics. We also comment on future submillimeter observations.
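    For reference, the homogeneous (Poisson) baseline that this work generalizes has a closed form: for power-law differential counts dN/dS = k S^(-γ) with a cutoff at q times the confusion noise σ, one gets σ² = Ω_e k (qσ)^(3-γ)/(3-γ), which can be solved for σ directly. A sketch with hypothetical values:

```python
# Homogeneous-counts confusion noise for dN/dS = k * S^(-gamma), cutoff
# at x_c = q * sigma: sigma^2 = Omega_e * k * x_c^(3-gamma) / (3-gamma),
# solved in closed form below (clustering modifies this baseline).
def confusion_sigma(k, omega_e, gamma, q=5.0):
    return (omega_e * k * q ** (3.0 - gamma) / (3.0 - gamma)) ** (1.0 / (gamma - 1.0))

# Hypothetical counts normalization [per sr per flux unit] and beam [sr].
sigma = confusion_sigma(k=1.0e3, omega_e=1.0e-7, gamma=2.5, q=5.0)
print(f"confusion noise ~ {sigma:.3e}, confusion limit ~ {5.0 * sigma:.3e} (flux units)")
```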

  2. Relating Vegetation Aerodynamic Roughness Length to Interferometric SAR Measurements

    NASA Technical Reports Server (NTRS)

    Saatchi, Sassan; Rodriquez, Ernesto

    1998-01-01

    In this paper, we investigate the feasibility of estimating the aerodynamic roughness parameter from interferometric SAR (INSAR) measurements. The relation between the interferometric correlation and the rms height of the surface is presented analytically. Model simulations performed over realistic canopy parameters obtained from field measurements in a boreal forest environment demonstrate the capability of INSAR measurements for estimating and mapping surface roughness lengths over forests and/or other vegetation types. The procedure for estimating this parameter over boreal forests using INSAR data is discussed, and the possibility of extending the methodology to tropical forests is examined.

  3. [Development of indicators for evaluating public dental healthcare services].

    PubMed

    Bueno, Vera Lucia Ribeiro de Carvalho; Cordoni Júnior, Luiz; Mesas, Arthur Eumann

    2011-07-01

    The objective of this article is to describe and analyze the development of indicators used to identify strengths and deficiencies in public dental healthcare services in the municipality of Cambé, Paraná. The methodology employed was a historical-organizational case study. A theoretical model of the service was developed for evaluation planning. To achieve this, information was collected through triangulation of methods (interviews, document analysis and observation). A matrix was then developed which presents analysis dimensions, criteria, indicators, scoring, parameters and sources of information. Three workshops were staged during the process with local service professionals in order to verify whether both the logical model and the matrix represented the service adequately. The period for collecting data was from November 2006 through July 2007. As a result, a flowchart of the organization of the public dental health service and a matrix with two analysis dimensions, twelve criteria and twenty-four indicators were developed. The development of indicators with the participation of people involved in the practice has enabled more comprehensive and realistic evaluation planning.

  4. The effect of inlet boundary conditions in image-based CFD modeling of aortic flow

    NASA Astrophysics Data System (ADS)

    Madhavan, Sudharsan; Kemmerling, Erica Cherry

    2016-11-01

    CFD of cardiovascular flow is a growing and useful field, but simulations are subject to a number of sources of uncertainty which must be quantified. Our work focuses on the uncertainty introduced by the selection of inlet boundary conditions in an image-based, patient-specific model of the aorta. Specifically, we examined the differences between plug flow, fully developed parabolic flow, linear shear flows, skewed parabolic flow profiles, and Womersley flow. Only the shape of the inlet velocity profile was varied; all other parameters were held constant between simulations, including the physiologically realistic inlet flow rate waveform and outlet flow resistance. We found that flow solutions with different inlet conditions did not exhibit significant differences beyond 1.75 inlet diameters from the aortic root. Time averaged wall shear stress (TAWSS) was also calculated. The linear shear velocity boundary condition solution exhibited the highest spatially averaged TAWSS, about 2.5% higher than the fully developed parabolic velocity boundary condition, which had the lowest spatially averaged TAWSS.

  5. Using a Magnetic Flux Transport Model to Predict the Solar Cycle

    NASA Technical Reports Server (NTRS)

    Lyatskaya, S.; Hathaway, D.; Winebarger, A.

    2007-01-01

    We present the results of an investigation into the use of a magnetic flux transport model to predict the amplitude of future solar cycles. Recently Dikpati, de Toma, & Gilman (2006) showed how their dynamo model could be used to accurately predict the amplitudes of the last eight solar cycles and offered a prediction for the next solar cycle - a large amplitude cycle. Cameron & Schussler (2007) found that they could reproduce this predictive skill with a simple 1-dimensional surface flux transport model - provided they used the same parameters and data as Dikpati, de Toma, & Gilman. However, when they tried incorporating the data in what they argued was a more realistic manner, they found that the predictive skill dropped dramatically. We have written our own code for examining this problem and have incorporated updated and corrected data for the source terms - the emergence of magnetic flux in active regions. We present both the model itself and our results from it - in particular our tests of its effectiveness at predicting solar cycles.

  6. Simulation and Optimization of an Airfoil with Leading Edge Slat

    NASA Astrophysics Data System (ADS)

    Schramm, Matthias; Stoevesandt, Bernhard; Peinke, Joachim

    2016-09-01

    A gradient-based optimization is used in order to improve the shape of a leading edge slat upstream of a DU 91-W2-250 airfoil. The simulations are performed by solving the Reynolds-Averaged Navier-Stokes equations (RANS) using the open source CFD code OpenFOAM. Gradients are computed via the adjoint approach, which is suitable for dealing with many design parameters while keeping the computational costs low. The implementation is verified by comparing the gradients from the adjoint method with gradients obtained by finite differences for a NACA 0012 airfoil. The simulations of the leading edge slat are validated against measurements from the acoustic wind tunnel of Oldenburg University at a Reynolds number of Re = 6 × 10⁵. The shape of the slat is optimized using the adjoint approach, resulting in a drag reduction of 2%. Although the optimization is done for Re = 6 × 10⁵, the improvements also hold for a higher Reynolds number of Re = 7.9 × 10⁶, which is more realistic for modern wind turbines.

  7. Hypersonic research engine project. Phase 2: Preliminary report on the performance of the HRE/AIM at Mach 6

    NASA Technical Reports Server (NTRS)

    Sun, Y. H.; Sainio, W. C.

    1975-01-01

    Test results of the Aerothermodynamic Integration Model are presented. A program was initiated to develop a hydrogen-fueled research-oriented scramjet for operation between Mach 3 and 8. The primary objectives were to investigate the internal aerothermodynamic characteristics of the engine, to provide realistic design parameters for future hypersonic engine development as well as to evaluate the ground test facility and testing techniques. The engine was tested at the NASA hypersonic tunnel facility with synthetic air at Mach 5, 6, and 7. The hydrogen fuel was heated up to 1500 R prior to injection to simulate a regeneratively cooled system. The engine and component performance at Mach 6 is reported. Inlet performance compared very well both with theory and with subscale model tests. Combustor efficiencies up to 95 percent were attained at an equivalence ratio of unity. Nozzle performance was lower than expected. The overall engine performance was computed using two different methods. The performance was also compared with test data from other sources.

  8. Autonomous rotor heat engine

    NASA Astrophysics Data System (ADS)

    Roulet, Alexandre; Nimmrichter, Stefan; Arrazola, Juan Miguel; Seah, Stella; Scarani, Valerio

    2017-06-01

    The triumph of heat engines is their ability to convert the disordered energy of thermal sources into useful mechanical motion. In recent years, much effort has been devoted to generalizing thermodynamic notions to the quantum regime, partly motivated by the promise of surpassing classical heat engines. Here, we instead adopt a bottom-up approach: we propose a realistic autonomous heat engine that can serve as a test bed for quantum effects in the context of thermodynamics. Our model draws inspiration from actual piston engines and is built from closed-system Hamiltonians and weak bath coupling terms. We analytically derive the performance of the engine in the classical regime via a set of nonlinear Langevin equations. In the quantum case, we perform numerical simulations of the master equation. Finally, we perform a dynamic and thermodynamic analysis of the engine's behavior for several parameter regimes in both the classical and quantum case and find that the latter exhibits a consistently lower efficiency due to additional noise.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, Benjamin R.; Baldridge, W. Scott; Gable, Carl W.

    Finite volume calculations of the flow of rhyolite are presented to investigate the fate of viscous magmas flowing in planar fractures with realistic length to width ratios of up to 2500:1. Heat and mass transfer for a melt with a temperature dependent viscosity and the potential to undergo phase change are considered. Magma driving pressures and dike widths are chosen to satisfy simple elastic considerations. These models are applied within a parameter space relevant to the Banco Bonito rhyolite flow, Valles caldera, New Mexico. We estimate a maximum eruption duration for the event of ~200 days, realized at a minimum possible dike width of 5-6 m and driving pressure of 7-8 MPa. Simplifications in the current model may warrant scaling of these results. However, we demonstrate the applicability of our model to magma dynamics issues and suggest that such models may be used to infer information about both the timing of an eruption and the evolution of the associated magma source.

  10. Graded meshes in bio-thermal problems with transmission-line modeling method.

    PubMed

    Milan, Hugo F M; Carvalho, Carlos A T; Maia, Alex S C; Gebremedhin, Kifle G

    2014-10-01

    In this study, the transmission-line modeling (TLM) method applied to bio-thermal problems was improved by incorporating several novel computational techniques, including graded meshes, which made computations 9 times faster while using only a fraction (16%) of the computational resources required by regular meshes when analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that considers thermal properties, and thus results in more realistic modeling of complex problems, is introduced. A new way of calculating an error parameter is also introduced. The calculated temperatures between nodes were compared against results from the literature and agreed to within less than 1%. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential for heat transfer modeling of biological systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
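    The benefit of mesh grading can be illustrated independently of TLM. The sketch below is a generic steady-state finite-volume conduction solver, not the authors' TLM scheme; it concentrates nodes near one boundary with a geometric spacing ratio, the same idea that lets graded meshes resolve heterogeneous media cheaply (the tissue properties are hypothetical).

```python
import numpy as np

def graded_nodes(L, n, ratio=1.2):
    """Node positions on [0, L] with geometrically growing spacing."""
    dx = ratio ** np.arange(n - 1)
    x = np.concatenate(([0.0], np.cumsum(dx)))
    return x * (L / x[-1])

def steady_temperature(x, k_of_x, T_left, T_right):
    """Solve (k T')' = 0 on a nonuniform grid by flux balance."""
    n = x.size
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0       # Dirichlet boundary rows
    b[0], b[-1] = T_left, T_right
    for i in range(1, n - 1):
        kw = k_of_x(0.5 * (x[i - 1] + x[i])) / (x[i] - x[i - 1])
        ke = k_of_x(0.5 * (x[i] + x[i + 1])) / (x[i + 1] - x[i])
        A[i, i - 1], A[i, i], A[i, i + 1] = kw, -(kw + ke), ke
    return np.linalg.solve(A, b)

x = graded_nodes(0.02, 41, ratio=1.15)          # 2 cm slab, fine near x = 0
k = lambda xi: 0.5 if xi < 0.005 else 0.2       # W/(m K), two tissue layers
T = steady_temperature(x, k, 37.0, 33.0)
print(T[:5])
```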

  11. Trapped-ion quantum simulation of excitation transport: Disordered, noisy, and long-range connected quantum networks

    NASA Astrophysics Data System (ADS)

    Trautmann, N.; Hauke, P.

    2018-02-01

    The transport of excitations governs fundamental properties of matter. Particularly rich physics emerges in the interplay between disorder and environmental noise, even in small systems such as photosynthetic biomolecules. Counterintuitively, noise can enhance coherent quantum transport, which has been proposed as a mechanism behind the high transport efficiencies observed in photosynthetic complexes. This effect has been called "environment-assisted quantum transport". Here, we propose a quantum simulation of the excitation transport in an open quantum network, taking advantage of the high controllability of current trapped-ion experiments. Our scheme allows for the controlled study of various different aspects of the excitation transfer, ranging from the influence of static disorder and interaction range, over the effect of Markovian and non-Markovian dephasing, to the impact of a continuous insertion of excitations. Our paper discusses experimental error sources and realistic parameters, showing that it can be implemented in state-of-the-art ion-chain experiments.

  12. Studying light-harvesting models with superconducting circuits.

    PubMed

    Potočnik, Anton; Bargerbos, Arno; Schröder, Florian A Y N; Khan, Saeed A; Collodo, Michele C; Gasparinetti, Simone; Salathé, Yves; Creatore, Celestino; Eichler, Christopher; Türeci, Hakan E; Chin, Alex W; Wallraff, Andreas

    2018-03-02

    The process of photosynthesis, the main source of energy in the living world, converts sunlight into chemical energy. The high efficiency of this process is believed to be enabled by an interplay between the quantum nature of molecular structures in photosynthetic complexes and their interaction with the environment. Investigating these effects in biological samples is challenging due to their complex and disordered structure. Here we experimentally demonstrate a technique for studying photosynthetic models based on superconducting quantum circuits, which complements existing experimental, theoretical, and computational approaches. We demonstrate a high degree of freedom in design and experimental control of our approach based on a simplified three-site model of a pigment protein complex with realistic parameters scaled down in energy by a factor of 10⁵. We show that the excitation transport between quantum-coherent sites disordered in energy can be enabled through the interaction with environmental noise. We also show that the efficiency of the process is maximized for structured noise resembling intramolecular phononic environments found in photosynthetic complexes.
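    The noise-enabled transport described here can be reproduced qualitatively in a few lines (assuming the QuTiP package; the site energies, coupling, and dephasing rates are illustrative, not the experimentally scaled values): pure dephasing lets the excitation escape a detuned site and raises the population reaching the final site.

```python
import numpy as np
from qutip import Qobj, basis, mesolve   # assumes QuTiP is installed

# Three-site single-excitation network with static energetic disorder;
# pure dephasing can enhance transport to site 3 (noise-assisted transport).
eps = [0.0, 4.0, 1.0]   # disordered site energies (illustrative units)
J = 1.0                 # nearest-neighbour coupling
H = Qobj(np.array([[eps[0], J,      0.0],
                   [J,      eps[1], J],
                   [0.0,    J,      eps[2]]]))

psi0 = basis(3, 0)                       # excitation starts on site 1
proj3 = basis(3, 2) * basis(3, 2).dag()  # population on site 3
tlist = np.linspace(0.0, 20.0, 400)

for gamma in (0.0, 0.5):                 # dephasing off / on
    c_ops = ([np.sqrt(gamma) * basis(3, i) * basis(3, i).dag() for i in range(3)]
             if gamma > 0 else [])
    result = mesolve(H, psi0, tlist, c_ops, e_ops=[proj3])
    print(f"gamma = {gamma}: max population on site 3 = {max(result.expect[0]):.3f}")
```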

  13. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU

    NASA Astrophysics Data System (ADS)

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

    Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ~600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ~0.25 s per excitation source.

  14. Impact of one-layer assumption on diffuse reflectance spectroscopy of skin

    NASA Astrophysics Data System (ADS)

    Hennessy, Ricky; Markey, Mia K.; Tunnell, James W.

    2015-02-01

    Diffuse reflectance spectroscopy (DRS) can be used to noninvasively measure skin properties. To extract skin properties from DRS spectra, a model is needed that relates the reflectance to the tissue properties. Most models are based on the assumption that skin is homogeneous. In reality, skin is composed of multiple layers, and the homogeneity assumption can lead to errors. In this study, we analyze the errors caused by the homogeneity assumption. This is accomplished by creating realistic skin spectra using a computational model, then extracting properties from those spectra using a one-layer model. The extracted parameters are then compared to the parameters used to create the modeled spectra. We used a wavelength range of 400 to 750 nm and a source-detector separation of 250 μm. Our results show that use of a one-layer skin model causes underestimation of hemoglobin concentration [Hb] and melanin concentration [mel]. Additionally, the magnitude of the error is dependent on epidermal thickness. The one-layer assumption also causes [Hb] and [mel] to be correlated. Oxygen saturation is overestimated when it is below 50% and underestimated when it is above 50%. We also found that the vessel radius factor used to account for pigment packaging is correlated with epidermal thickness.
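
    The thickness-dependent bias can be illustrated with a deliberately crude toy, nothing like the study's computational model: a melanin-filtering epidermis sits on a hemoglobin-absorbing dermis, and a one-layer model is fitted to the result. The absorption shapes and all concentrations below are invented.

```python
# Deliberately crude toy, not the study's Monte Carlo/diffusion models: a
# melanin-filtering "epidermis" on a hemoglobin-absorbing "dermis", with a
# one-layer model fitted to the composite spectrum. All values are invented.
import numpy as np
from scipy.optimize import curve_fit

wl = np.linspace(400, 750, 200)                    # wavelength grid, nm
mu_mel = (wl / 400.0) ** -3.5                      # melanin-like power law
mu_hb = np.exp(-0.5 * ((wl - 420.0) / 25.0) ** 2)  # crude Soret-band stand-in

def one_layer(_wl, mel, hb):
    return np.exp(-mel * mu_mel - hb * mu_hb)

mel_true, hb_true = 1.0, 1.5
for d_epi in (0.5, 1.0, 2.0):                      # epidermal thickness factor
    # two-layer spectrum: light crosses the epidermis twice (in and out)
    R = np.exp(-2.0 * d_epi * mel_true * mu_mel) * np.exp(-hb_true * mu_hb)
    (mel_fit, hb_fit), _ = curve_fit(one_layer, wl, R, p0=[0.5, 0.5])
    print(f"d_epi = {d_epi}:  mel = {mel_fit:.2f}  hb = {hb_fit:.2f}")
```

    In this toy the unmodeled epidermal thickness is folded entirely into the melanin estimate; in the study's diffusion-based setting the coupling also biases [Hb] and oxygen saturation.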

  15. Inversion of ocean-bottom seismometer (OBS) waveforms for oceanic crust structure: a synthetic study

    NASA Astrophysics Data System (ADS)

    Li, Xueyan; Wang, Yanbin; Chen, Yongshun John

    2016-08-01

    The waveform inversion method is applied—using synthetic ocean-bottom seismometer (OBS) data—to study oceanic crust structure. A niching genetic algorithm (NGA) is used to implement the inversion for the thickness and P-wave velocity of each layer, and to update the model by minimizing the objective function, which consists of the misfit and cross-correlation of observed and synthetic waveforms. The influence of specific NGA method parameters is discussed, and suitable values are presented. The NGA method works well for various observation systems, such as those with irregular and sparse distribution of receivers as well as single receiver systems. A strategy is proposed to accelerate the convergence rate by a factor of five with no increase in computational complexity; this is achieved by running a first inversion for several generations to restrict the preset range of each parameter and then conducting a second inversion with the new range. Despite these successes, the method has limitations. A shallow water layer is not favored because the direct wave in water will suppress the useful reflection signals from the crust. A more precise calculation of the air-gun source signal should be considered in order to better simulate waveforms generated in realistic situations; further studies are required to investigate this issue.
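
    The two-stage acceleration strategy is easy to sketch with a generic genetic algorithm in place of the NGA and a toy misfit in place of waveform comparison; the bounds, mutation scale, and population sizes below are assumptions.

```python
# Minimal sketch of the two-stage range-narrowing strategy: a short first GA
# run shrinks each parameter's preset range, and a second run searches the
# narrowed range. Plain GA and toy misfit stand in for the paper's niching
# GA and waveform objective; all numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([2.3, 5.1, 1.7])          # stand-in "true" layer parameters

def misfit(pop):
    return np.sum((pop - target) ** 2, axis=1)

def ga_run(lo, hi, n_gen=30, n_pop=60, elite=10):
    pop = rng.uniform(lo, hi, size=(n_pop, len(lo)))
    for _ in range(n_gen):
        best = pop[np.argsort(misfit(pop))[:elite]]
        # breed a new population around the elites with Gaussian mutation
        parents = best[rng.integers(0, elite, n_pop)]
        pop = np.clip(parents + rng.normal(0, 0.05 * (hi - lo), pop.shape),
                      lo, hi)
    return pop[np.argsort(misfit(pop))[:elite]]

lo, hi = np.zeros(3), np.full(3, 10.0)
elite1 = ga_run(lo, hi, n_gen=5)                    # stage 1: few generations
lo2, hi2 = elite1.min(axis=0), elite1.max(axis=0)   # narrowed preset ranges
elite2 = ga_run(lo2, hi2, n_gen=30)                 # stage 2: refined search
print("narrowed range:", lo2.round(2), hi2.round(2))
print("best model:", elite2[0].round(3))
```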

  16. Jet Noise Source Localization Using Linear Phased Array

    NASA Technical Reports Server (NTRS)

    Agboola, Ferni A.; Bridges, James

    2004-01-01

    A study was conducted to further clarify the interpretation and application of linear phased array microphone results for localizing aeroacoustic sources in aircraft exhaust jets. Two model engine nozzles were tested at varying power cycles with the array set up parallel to the jet axis. The array position was varied as well to determine the best location for the array. The results showed that it is possible to resolve jet noise sources, separating the bypass and other components. The results also showed that a focused near-field image provides more realistic noise source localization at low to mid frequencies.

  17. An Optically Implemented Kalman Filter Algorithm.

    DTIC Science & Technology

    1983-12-01

    ...are completely specified for the correlation stage to perform the required correlation in real time, and the filter stage to perform the linear... performance analyses indicated an enhanced ability of the nonadaptive filter to track a realistic distant point source target with an error standard...

  18. A methodological approach to a realistic evaluation of skin absorbed doses during manipulation of radioactive sources by means of GAMOS Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Italiano, Antonio; Amato, Ernesto; Auditore, Lucrezia; Baldari, Sergio

    2018-05-01

    The accurate evaluation of the radiation burden associated with radiation absorbed doses to the skin of the extremities during the manipulation of radioactive sources is a critical issue in operational radiological protection, deserving the most accurate calculation approaches available. Monte Carlo simulation of the radiation transport and interaction is the gold standard for the calculation of dose distributions in complex geometries and in the presence of extended spectra of multi-radiation sources. We propose the use of Monte Carlo simulations in GAMOS in order to accurately estimate the dose to the extremities during manipulation of radioactive sources. We report the results of these simulations for 90Y, 131I, 18F and 111In nuclides in water solutions enclosed in glass or plastic receptacles, such as vials or syringes. Skin equivalent doses at 70 μm of depth and dose-depth profiles are reported for different configurations, highlighting the importance of adopting a realistic geometrical configuration in order to obtain accurate dosimetric estimates. Due to the ease of implementation of GAMOS simulations, case-specific geometries and nuclides can be adopted, and results can be obtained in less than about ten minutes of computation time with a common workstation.

  19. Grain size dependent magnetic discrimination of Iceland and South Greenland terrestrial sediments in the northern North Atlantic sediment record

    NASA Astrophysics Data System (ADS)

    Hatfield, Robert G.; Stoner, Joseph S.; Reilly, Brendan T.; Tepley, Frank J.; Wheeler, Benjamin H.; Housen, Bernard A.

    2017-09-01

    We use isothermal and temperature dependent in-field and magnetic remanence methods together with electron microscopy to characterize different sieved size fractions from terrestrial sediments collected in Iceland and southern Greenland. The magnetic fraction of Greenland silts (3-63 μm) and sands (>63 μm) is primarily composed of near-stoichiometric magnetite that may be oxidized in the finer clay (<3 μm) fractions. In contrast, all Icelandic fractions dominantly contain titanomagnetite of a range of compositions. Ferrimagnetic minerals preferentially reside in the silt-size fraction and exist as fine single-domain (SD) and pseudo-single-domain (PSD) size inclusions in Iceland samples, in contrast to coarser PSD and multi-domain (MD) discrete magnetites from southern Greenland. We demonstrate the potential of using magnetic properties of the silt fraction for source unmixing by creating known endmember mixtures and by using naturally mixed marine sediments from the Eirik Ridge south of Greenland. We develop a novel approach to ferrimagnetic source unmixing by using low temperature magnetic susceptibility curves that are sensitive to the different crystallinity and cation substitution characteristics of the different source regions. Covariation of these properties with hysteresis parameters suggests sediment source changes have driven the magnetic mineral variations observed in Eirik Ridge sediments since the last glacial maximum. These observations assist the development of a routine method and interpretative framework to quantitatively determine provenance in a geologically realistic and meaningful way and assess how different processes combine to drive magnetic variation in the North Atlantic sediment record.

  20. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    NASA Astrophysics Data System (ADS)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, efficiency of memory usage, and its capability to handle modern FORTRAN 90-95 elements such as structures and pointers, which either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing even large geophysical data assimilation problems to be handled. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
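
    For readers unfamiliar with the operator-overloading approach compared above, a minimal forward-mode example is sketched below with dual numbers; the study's tools operate on Fortran and include adjoint (reverse) modes, which are substantially more involved.

```python
# Minimal operator-overloading forward-mode AD: a dual number carries a
# value and a directional derivative, and overloaded operators apply the
# chain rule. Only the overloading idea is illustrated here.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot        # value and derivative
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)   # product rule
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx [ x * sin(x) + 2x ] at x = 1.0, seeded with dx/dx = 1
x = Dual(1.0, 1.0)
y = x * sin(x) + 2 * x
print(y.val, y.dot)   # value (~2.84) and derivative sin(1)+cos(1)+2 (~3.38)
```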

  1. Interactive Web-based Floodplain Simulation System for Realistic Experiments of Flooding and Flood Damage

    NASA Astrophysics Data System (ADS)

    Demir, I.

    2013-12-01

    Recent developments in web technologies make it easy to manage and visualize large data sets for the general public. Novel visualization techniques and dynamic user interfaces allow users to create realistic environments and interact with data to gain insight from simulations and environmental observations. The floodplain simulation system is a web-based 3D interactive flood simulation environment for creating real-world flooding scenarios. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create and modify predefined scenarios, control environmental parameters, and evaluate flood mitigation techniques. The web-based simulation system provides an environment for children and adults to learn about flooding, flood damage, and the effects of development and human activity in the floodplain. The system provides various scenarios customized to fit the age and education level of the users. This presentation provides an overview of the web-based flood simulation system and demonstrates the capabilities of the system for various flooding and land use scenarios.

  2. End-to-end simulation and verification of GNC and robotic systems considering both space segment and ground segment

    NASA Astrophysics Data System (ADS)

    Benninghoff, Heike; Rems, Florian; Risse, Eicke; Brunner, Bernhard; Stelzer, Martin; Krenn, Rainer; Reiner, Matthias; Stangl, Christian; Gnat, Marcin

    2018-01-01

    In the framework of a project called on-orbit servicing end-to-end simulation, the final approach and capture of a tumbling client satellite in an on-orbit servicing mission are simulated. The necessary components are developed and the entire end-to-end chain is tested and verified. This involves both on-board and on-ground systems. The space segment comprises a passive client satellite, and an active service satellite with its rendezvous and berthing payload. The space segment is simulated using a software satellite simulator and two robotic, hardware-in-the-loop test beds, the European Proximity Operations Simulator (EPOS) 2.0 and the OOS-Sim. The ground segment is established as for a real servicing mission, such that realistic operations can be performed from the different consoles in the control room. During the simulation of the telerobotic operation, it is important to provide a realistic communication environment with parameters as they occur in the real world (realistic delay and jitter, for example).

  3. User's instructions for the 41-node thermoregulatory model (steady state version)

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A user's guide for the steady-state thermoregulatory model is presented. The model was modified to provide conversational interaction on a remote terminal, greater flexibility for parameter estimation, increased efficiency of convergence, a greater choice of output variables, and more realistic equations for respiratory and skin diffusion water losses.

  4. All-optical nanomechanical heat engine.

    PubMed

    Dechant, Andreas; Kiesel, Nikolai; Lutz, Eric

    2015-05-08

    We propose and theoretically investigate a nanomechanical heat engine. We show how a levitated nanoparticle in an optical trap inside a cavity can be used to realize a Stirling cycle in the underdamped regime. The all-optical approach enables fast and flexible control of all thermodynamical parameters and the efficient optimization of the performance of the engine. We develop a systematic optimization procedure to determine optimal driving protocols. Further, we perform numerical simulations with realistic parameters and evaluate the maximum power and the corresponding efficiency.

  5. All-Optical Nanomechanical Heat Engine

    NASA Astrophysics Data System (ADS)

    Dechant, Andreas; Kiesel, Nikolai; Lutz, Eric

    2015-05-01

    We propose and theoretically investigate a nanomechanical heat engine. We show how a levitated nanoparticle in an optical trap inside a cavity can be used to realize a Stirling cycle in the underdamped regime. The all-optical approach enables fast and flexible control of all thermodynamical parameters and the efficient optimization of the performance of the engine. We develop a systematic optimization procedure to determine optimal driving protocols. Further, we perform numerical simulations with realistic parameters and evaluate the maximum power and the corresponding efficiency.

  6. Digital Simulation Of Precise Sensor Degradations Including Non-Linearities And Shift Variance

    NASA Astrophysics Data System (ADS)

    Kornfeld, Gertrude H.

    1987-09-01

    Realistic atmospheric and Forward Looking Infrared Radiometer (FLIR) degradations were digitally simulated. Inputs to the routine are environmental observables and the FLIR specifications. It was possible to achieve realism in the thermal domain within acceptable computer time and random access memory (RAM) requirements because a shift-variant recursive convolution algorithm that well describes thermal properties was invented and because each picture element (pixel) carries radiative temperature, a materials parameter, and range and altitude information. The computer generation steps start with the image synthesis of an undegraded scene. Atmospheric and sensor degradation follow. The final result is a realistic representation of an image seen on the display of a specific FLIR.

  7. Alpha effect of Alfvén waves and current drive in reversed-field pinches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litwin, C.; Prager, S.C.

    Circularly polarized Alfvén waves give rise to an α-dynamo effect that can be exploited to drive parallel current. In a "laminar" magnetic field the effect is weak and does not give rise to significant currents for realistic parameters (e.g., in tokamaks). However, in reversed-field pinches (RFPs) in which the magnetic field in the plasma core is stochastic, a significant enhancement of the α effect occurs. Estimates of this effect show that it may be a realistic method of current generation in present-day RFP experiments and possibly also in future RFP-based fusion reactors.

  8. Optimal Search for an Astrophysical Gravitational-Wave Background

    NASA Astrophysics Data System (ADS)

    Smith, Rory; Thrane, Eric

    2018-04-01

    Roughly every 2-10 min, a pair of stellar-mass black holes merge somewhere in the Universe. A small fraction of these mergers are detected as individually resolvable gravitational-wave events by advanced detectors such as LIGO and Virgo. The rest contribute to a stochastic background. We derive the statistically optimal search strategy (producing minimum credible intervals) for a background of unresolved binaries. Our method applies Bayesian parameter estimation to all available data. Using Monte Carlo simulations, we demonstrate that the search is both "safe" and effective: it is not fooled by instrumental artifacts such as glitches and it recovers simulated stochastic signals without bias. Given realistic assumptions, we estimate that the search can detect the binary black hole background with about 1 day of design sensitivity data versus ≈40 months using the traditional cross-correlation search. This framework independently constrains the merger rate and black hole mass distribution, breaking a degeneracy present in the cross-correlation approach. The search provides a unified framework for population studies of compact binaries, which is cast in terms of hyperparameter estimation. We discuss a number of extensions and generalizations, including application to other sources (such as binary neutron stars and continuous-wave sources), simultaneous estimation of a continuous Gaussian background, and applications to pulsar timing.

  9. Evolving a Neural Olfactorimotor System in Virtual and Real Olfactory Environments

    PubMed Central

    Rhodes, Paul A.; Anderson, Todd O.

    2012-01-01

    To provide a platform to enable the study of simulated olfactory circuitry in context, we have integrated a simulated neural olfactorimotor system with a virtual world which simulates both computational fluid dynamics as well as a robotic agent capable of exploring the simulated plumes. A number of the elements which we developed for this purpose have not, to our knowledge, been previously assembled into an integrated system, including: control of a simulated agent by a neural olfactorimotor system; continuous interaction between the simulated robot and the virtual plume; the inclusion of multiple distinct odorant plumes and background odor; the systematic use of artificial evolution driven by olfactorimotor performance (e.g., time to locate a plume source) to specify parameter values; the incorporation of the realities of an imperfect physical robot using a hybrid model where a physical robot encounters a simulated plume. We close by describing ongoing work toward engineering a high dimensional, reversible, low power electronic olfactory sensor which will allow olfactorimotor neural circuitry evolved in the virtual world to control an autonomous olfactory robot in the physical world. The platform described here is intended to better test theories of olfactory circuit function, as well as provide robust odor source localization in realistic environments. PMID:23112772

  10. Strong ground motion simulation of the 2016 Kumamoto earthquake of April 16 using multiple point sources

    NASA Astrophysics Data System (ADS)

    Nagasaka, Yosuke; Nozu, Atsushi

    2017-02-01

    The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with M_JMA 7.3. Many strong motions were recorded at stations around the source region. Some records were considered to be affected by the rupture directivity effect. This earthquake was suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. It was found that the synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to the following two reasons. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second reason is that the current version of the pseudo point-source model does not consider the rupture directivity effect. Consequently, strong pulses were not sufficiently reproduced at stations northeast of Subevent 3 such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most of the other stations. This result indicates the necessity of improving the pseudo point-source model, by introducing an azimuth-dependent corner frequency for example, so that it can incorporate the effect of rupture directivity.
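
    The spectral building block named above can be sketched in a few lines: an omega-square source spectrum for each point subevent, summed over subevents. Path attenuation, the empirical site factor, and the phase model of the actual method are omitted, and every number below is invented for illustration.

```python
# Hedged sketch of the omega-square ingredient: S(f) = M0 / (1 + (f/fc)^2)
# per subevent, summed over subevents. Moments and corner frequencies are
# hypothetical; path, site, and phase models are deliberately left out.
import numpy as np

f = np.logspace(-1, 1.3, 200)               # frequency band, Hz

def omega_square(m0, fc):
    return m0 / (1.0 + (f / fc) ** 2)

# three hypothetical subevents: (seismic moment, corner frequency in Hz)
subevents = [(5.0e18, 0.3), (2.0e18, 0.5), (1.0e18, 0.8)]
total = sum(omega_square(m0, fc) for m0, fc in subevents)

for fi in (0.1, 1.0, 10.0):
    i = np.argmin(np.abs(f - fi))
    print(f"f = {fi:5.1f} Hz  |S| = {total[i]:.3e}")
```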

  11. Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.

    PubMed

    Fischmeister, Florian Ph S; Bauer, Herbert

    2006-10-01

    Functional imaging studies investigating the perception of depth have relied solely on one type of depth cue, based on non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues, natural stereoscopic images were used in this study. Using slow cortical potentials and source localization, we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends functional imaging studies, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility of separating the processing of different depth cues.

  12. Realistic Simulations of Coronagraphic Observations with WFIRST

    NASA Astrophysics Data System (ADS)

    Rizzo, Maxime; Zimmerman, Neil; Roberge, Aki; Lincowski, Andrew; Arney, Giada; Stark, Chris; Jansen, Tiffany; Turnbull, Margaret; WFIRST Science Investigation Team (Turnbull)

    2018-01-01

    We present a framework to simulate observing scenarios with the WFIRST Coronagraphic Instrument (CGI). The Coronagraph and Rapid Imaging Spectrograph in Python (crispy) is an open-source package that can be used to create CGI data products for analysis and development of post-processing routines. The software convolves time-varying coronagraphic PSFs with realistic astrophysical scenes which contain a planetary architecture, a consistent dust structure, and a background field composed of stars and galaxies. The focal plane can be read out by a WFIRST electron-multiplying CCD model directly, or passed through a WFIRST integral field spectrograph model first. Several elementary post-processing routines are provided as part of the package.

  13. Toward Millimagnitude Photometric Calibration (Abstract)

    NASA Astrophysics Data System (ADS)

    Dose, E.

    2014-12-01

    (Abstract only) Asteroid rotation, exoplanet transits, and similar measurements will increasingly call for photometric precisions better than about 10 millimagnitudes, often between nights and ideally between distant observers. The present work applies detailed spectral simulations to test popular photometric calibration practices, and to test new extensions of these practices. Using 107 synthetic spectra of stars of diverse colors, detailed atmospheric transmission spectra computed by solar-energy software, realistic spectra of popular astronomy gear, and the option of three sources of noise added at realistic millimagnitude levels, we find that certain adjustments to current calibration practices can help remove small systematic errors, especially for imperfect filters, high airmasses, and possibly passing thin cirrus clouds.

  14. Inter-Individual Variability in High-Throughput Risk Prioritization of Environmental Chemicals (Sot)

    EPA Science Inventory

    We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which have...

  15. Validation of an explanatory tool for data-fused displays for high-technology future aircraft

    NASA Astrophysics Data System (ADS)

    Fletcher, Georgina C. L.; Shanks, Craig R.; Selcon, Stephen J.

    1996-05-01

    As the number of sensor and data sources in the military cockpit increases, pilots will suffer high levels of workload, which could result in reduced performance and the loss of situational awareness. A DRA research program has been investigating the use of data-fused displays in decision support and has developed and laboratory-tested an explanatory tool for displaying information in air combat scenarios. The tool has been designed to provide pictorial explanations of data that maintain situational awareness by involving the pilot in the hostile aircraft threat assessment task. This paper reports a study carried out to validate the success of the explanatory tool in a realistic flight simulation facility. Aircrew were asked to perform a threat assessment task, either with or without the explanatory tool providing information in the form of missile launch success zone envelopes, while concurrently flying a waypoint course within set flight parameters. The results showed that there was a significant improvement (p < 0.01) in threat assessment accuracy of 30% when using the explanatory tool. This threat assessment performance advantage was achieved without a trade-off with flying task performance. Situational awareness measures showed no general differences between the explanatory and control conditions, but significant learning effects suggested that the explanatory tool makes the task initially more intuitive and hence less demanding on the pilots' attentional resources. The paper concludes that DRA's data-fused explanatory tool is successful at improving threat assessment accuracy in a realistic simulated flying environment, and briefly discusses the requirements for further research in the area.

  16. Synthetic earthquake catalogs simulating seismic activity in the Corinth Gulf, Greece, fault system

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Carluccio, Roberto; Papadimitriou, Eleftheria; Karakostas, Vassilis

    2015-01-01

    The characteristic earthquake hypothesis is the basis of time-dependent modeling of earthquake recurrence on major faults. However, the characteristic earthquake hypothesis is not strongly supported by observational data. Few fault segments have long historical or paleoseismic records of individually dated ruptures, and when data and parameter uncertainties are allowed for, the form of the recurrence distribution is difficult to establish. This is the case, for instance, of the Corinth Gulf Fault System (CGFS), for which documents about strong earthquakes exist for at least 2000 years, although they can be considered complete for M ≥ 6.0 only for the latest 300 years, during which only few characteristic earthquakes are reported for individual fault segments. The use of a physics-based earthquake simulator has allowed the production of catalogs lasting 100,000 years and containing more than 500,000 events of magnitudes ≥ 4.0. The main features of our simulation algorithm are (1) an average slip rate released by earthquakes for every single segment in the investigated fault system, (2) heuristic procedures for rupture growth and stop, leading to a self-organized earthquake magnitude distribution, (3) the interaction between earthquake sources, and (4) the effect of minor earthquakes in redistributing stress. The application of our simulation algorithm to the CGFS has shown realistic features in time, space, and magnitude behavior of the seismicity. These features include long-term periodicity of strong earthquakes, short-term clustering of both strong and smaller events, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the higher-magnitude range.
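
    For contrast with the simulator's departures from Gutenberg-Richter behavior, the following sketch draws a purely Gutenberg-Richter synthetic catalog by inverse-CDF sampling; the b-value and magnitude bounds are assumptions, and none of the simulator's physics (slip rates, rupture growth, stress transfer) is represented.

```python
# Illustrative sketch (not the authors' physics-based simulator): sample a
# synthetic catalog from a doubly truncated Gutenberg-Richter distribution,
# the baseline the simulated catalogs are said to depart from at high
# magnitudes. b-value and bounds are invented.
import numpy as np

rng = np.random.default_rng(1)
b, m_min, m_max = 1.0, 4.0, 7.0
u = rng.random(500_000)
# inverse-CDF sampling of the truncated G-R magnitude distribution
mags = m_min - np.log10(1.0 - u * (1.0 - 10 ** (-b * (m_max - m_min)))) / b

for m in (4.0, 5.0, 6.0, 6.5):
    n = np.sum(mags >= m)
    print(f"N(M >= {m}) = {n}  (log10 N = {np.log10(n):.2f})")
```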

  17. Dosimetry applications in GATE Monte Carlo toolkit.

    PubMed

    Papadimitroulas, Panagiotis

    2017-09-01

    Monte Carlo (MC) simulations are a well-established method for studying physical processes in medical physics. The purpose of this review is to present GATE dosimetry applications on diagnostic and therapeutic simulated protocols. There is a significant need for accurate quantification of the absorbed dose in several specific applications such as preclinical and pediatric studies. GATE is an open-source MC toolkit for simulating imaging, radiotherapy (RT) and dosimetry applications in a user-friendly environment, which is well validated and widely accepted by the scientific community. In RT applications, during treatment planning, it is essential to accurately assess the deposited energy and the absorbed dose per tissue/organ of interest, as well as the local statistical uncertainty. Several types of realistic dosimetric applications are described including: molecular imaging, radio-immunotherapy, radiotherapy and brachytherapy. GATE has been efficiently used in several applications, such as Dose Point Kernels, S-values, Brachytherapy parameters, and has been compared against various MC codes which are considered as standard tools for decades. Furthermore, the presented studies show reliable modeling of particle beams when comparing experimental with simulated data. Examples of different dosimetric protocols are reported for individualized dosimetry and simulations combining imaging and therapy dose monitoring, with the use of modern computational phantoms. Personalization of medical protocols can be achieved by combining GATE MC simulations with anthropomorphic computational models and clinical anatomical data. This is a review study, covering several dosimetric applications of GATE, and the different tools used for modeling realistic clinical acquisitions with accurate dose assessment.

  18. Identifying Crucial Parameter Correlations Maintaining Bursting Activity

    PubMed Central

    Doloc-Mihu, Anca; Calabrese, Ronald L.

    2014-01-01

    Recent experimental and computational studies suggest that linearly correlated sets of parameters (intrinsic and synaptic properties of neurons) allow central pattern-generating networks to produce and maintain their rhythmic activity regardless of changing internal and external conditions. To determine the role of correlated conductances in the robust maintenance of functional bursting activity, we used our existing database of half-center oscillator (HCO) model instances of the leech heartbeat CPG. From the database, we identified functional activity groups of burster (isolated neuron) and half-center oscillator model instances and realistic subgroups of each that showed burst characteristics (principally period and spike frequency) similar to the animal. To find linear correlations among the conductance parameters maintaining functional leech bursting activity, we applied Principal Component Analysis (PCA) to each of these four groups. PCA identified a set of three maximal conductances (leak current, Leak; a persistent K current, K2; and a persistent Na+ current, P) that correlate linearly for the two groups of burster instances but not for the HCO groups. Visualizations of HCO instances in a reduced space suggested that there might be non-linear relationships between these parameters for these instances. Experimental studies have shown that period is a key attribute influenced by modulatory inputs and temperature variations in heart interneurons. Thus, we explored the sensitivity of period to changes in maximal conductances of Leak, K2, and P, and we found that for our realistic bursters the effect of these parameters on period could not be assessed because, when varied individually, bursting activity was not maintained. PMID:24945358
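
    The PCA step described above can be sketched as follows on synthetic stand-in data, where three conductances are constructed to co-vary linearly; names such as g_Leak are illustrative, not the database's actual parameterization.

```python
# Sketch of PCA on sets of maximal conductances to expose linearly
# correlated parameter combinations. The data are synthetic stand-ins for
# the HCO database instances; all scales are invented.
import numpy as np

rng = np.random.default_rng(2)
n = 300
# fabricate "functional" instances in which Leak, K2 and P co-vary linearly
base = rng.normal(1.0, 0.3, n)
g = np.column_stack([
    8.0 * base + rng.normal(0, 0.2, n),    # g_Leak (hypothetical units)
    80.0 * base + rng.normal(0, 2.0, n),   # g_K2
    5.0 * base + rng.normal(0, 0.1, n),    # g_P
    rng.normal(150.0, 30.0, n),            # an uncorrelated conductance
])

X = (g - g.mean(axis=0)) / g.std(axis=0)   # z-score before PCA
evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
order = np.argsort(evals)[::-1]
print("variance explained:", (evals[order] / evals.sum()).round(3))
print("leading component loadings:", evecs[:, order[0]].round(2))
```

    A single dominant component loading heavily on the three co-varying conductances, with the uncorrelated one left to a separate component, is the pattern the study reports for the burster groups.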

  19. Magnetic resonance fingerprinting based on realistic vasculature in mice

    PubMed Central

    Pouliot, Philippe; Gagnon, Louis; Lam, Tina; Avti, Pramod K.; Bowen, Chris; Desjardins, Michèle; Kakkar, Ashok K.; Thorin, E.; Sakadzic, Sava; Boas, David A.; Lesage, Frédéric

    2017-01-01

    Magnetic resonance fingerprinting (MRF) was recently proposed as a novel strategy for MR data acquisition and analysis. A variant of MRF called vascular MRF (vMRF) followed, extracting maps of three parameters of physiological importance: cerebral oxygen saturation (SatO2), mean vessel radius and cerebral blood volume (CBV). However, this estimation was based on idealized 2-dimensional simulations of vascular networks using random cylinders and the empirical Bloch equations convolved with a diffusion kernel. Here we focus on studying the vascular MR fingerprint using real mouse angiograms and physiological values as the substrate for the MR simulations. The MR signal is calculated ab initio with a Monte Carlo approximation, by tracking the accumulated phase from a large number of protons diffusing within the angiogram. We first study the identifiability of parameters in simulations, showing that parameters are fully estimable at realistically high signal-to-noise ratios (SNR) when the same angiogram is used for dictionary generation and parameter estimation, but that large biases in the estimates persist when the angiograms are different. Despite these biases, simulations show that differences in parameters remain estimable. We then applied this methodology to data acquired using the GESFIDE sequence with SPIONs injected into 9 young wild-type and 9 old atherosclerotic mice. Both the pre-injection signal and the ratio of post-to-pre-injection signals were modeled, using 5-dimensional dictionaries. The vMRF methodology extracted significant differences in SatO2, mean vessel radius and CBV between the two groups, consistent across brain regions and dictionaries. Further validation work is essential before vMRF can gain wider application. PMID:28043909
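
    A heavily simplified sketch of the ab initio signal calculation: random-walking protons accumulate phase from a local frequency-offset map, and the net transverse magnetization is averaged. A random offset map stands in for the real angiogram, so this shows the bookkeeping only; all sizes and rates are invented.

```python
# Highly simplified Monte Carlo phase-accumulation sketch: a random
# frequency-offset map replaces the vascular angiogram, so only the
# mechanics of the signal model are illustrated.
import numpy as np

rng = np.random.default_rng(3)
L, n_protons, n_steps, dt = 64, 5000, 200, 1e-4   # grid, walkers, steps, s
dw = rng.normal(0.0, 50.0, (L, L)) * 2 * np.pi    # offset map, rad/s

pos = rng.uniform(0, L, (n_protons, 2))
phase = np.zeros(n_protons)
signal = []
for _ in range(n_steps):
    pos = (pos + rng.normal(0, 0.5, pos.shape)) % L   # diffusion step
    i, j = pos.astype(int).T
    phase += dw[i, j] * dt                            # accumulate local phase
    signal.append(np.abs(np.exp(1j * phase).mean()))  # net transverse signal

print("signal at t =", n_steps * dt, "s :", round(signal[-1], 4))
```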

  20. Imperfection sensitivity of pressured buckling of biopolymer spherical shells

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Ru, C. Q.

    2016-06-01

    Imperfection sensitivity is essential for mechanical behavior of biopolymer shells [such as ultrasound contrast agents (UCAs) and spherical viruses] characterized by high geometric heterogeneity. In this work, an imperfection sensitivity analysis is conducted based on a refined shell model recently developed for spherical biopolymer shells of high structural heterogeneity and thickness nonuniformity. The influence of related parameters (including the ratio of radius to average shell thickness, the ratio of transverse shear modulus to in-plane shear modulus, and the ratio of effective bending thickness to average shell thickness) on imperfection sensitivity is examined for pressured buckling. Our results show that the ratio of effective bending thickness to average shell thickness has a major effect on the imperfection sensitivity, while the effect of the ratio of transverse shear modulus to in-plane shear modulus is usually negligible. For example, with physically realistic parameters for typical imperfect spherical biopolymer shells, the present model predicts that the actual maximum external pressure could be reduced to as low as 60% of that of a perfect UCA spherical shell or 55%-65% of that of a perfect spherical virus shell, respectively. The moderate imperfection sensitivity of spherical biopolymer shells with physically realistic imperfections is largely attributed to the fact that biopolymer shells are relatively thick (with a smaller radius-to-thickness ratio), so the practically realistic imperfection amplitude, normalized by thickness, is very small compared to that of classical elastic thin shells, which have a much larger radius-to-thickness ratio.

  1. A DTI-based model for TMS using the independent impedance method with frequency-dependent tissue parameters

    NASA Astrophysics Data System (ADS)

    De Geeter, N.; Crevecoeur, G.; Dupré, L.; Van Hecke, W.; Leemans, A.

    2012-04-01

    Accurate simulations on detailed realistic head models are necessary to gain a better understanding of the response to transcranial magnetic stimulation (TMS). Hitherto, head models with simplified geometries and constant isotropic material properties have often been used, whereas some biological tissues have anisotropic characteristics which vary naturally with frequency. Moreover, most computational methods do not take the tissue permittivity into account. Therefore, we calculate the electromagnetic behaviour due to TMS in a head model with realistic geometry and where realistic dispersive anisotropic tissue properties are incorporated, based on T1-weighted and diffusion-weighted magnetic resonance images. This paper studies the impact of tissue anisotropy, permittivity and frequency dependence, using the anisotropic independent impedance method. The results show that anisotropy yields differences up to 32% and 19% of the maximum induced currents and electric field, respectively. Neglecting the permittivity values leads to a decrease of about 72% and 24% of the maximum currents and field, respectively. Implementing the dispersive effects of biological tissues results in a difference of 6% of the maximum currents. The cerebral voxels show limited sensitivity of the induced electric field to changes in conductivity and permittivity, whereas the field varies approximately linearly with frequency. These findings illustrate the importance of including each of the above parameters in the model and confirm the need for accuracy in the applied patient-specific method, which can be used in computer-assisted TMS.

  2. Overflow Simulations using MPAS-Ocean in Idealized and Realistic Domains

    NASA Astrophysics Data System (ADS)

    Reckinger, S.; Petersen, M. R.; Reckinger, S. J.

    2016-02-01

    MPAS-Ocean is used to simulate an idealized, density-driven overflow using the dynamics of overflow mixing and entrainment (DOME) setup. Numerical simulations are benchmarked against other models, including the MITgcm's z-coordinate model and HIM's isopycnal coordinate model. A full parameter study is presented that looks at how sensitive overflow simulations are to vertical grid type, resolution, and viscosity. Runs with 50 km horizontal grid cells are under-resolved and produce poor results, regardless of other parameter settings. Vertical grids ranging in thickness from 15 m to 120 m were tested. A horizontal resolution of 10 km and a vertical resolution of 60 m are sufficient to resolve the mesoscale dynamics of the DOME configuration, which mimics real-world overflow parameters. Mixing and final buoyancy are least sensitive to horizontal viscosity, but strongly sensitive to vertical viscosity. This suggests that vertical viscosity could be adjusted in overflow water formation regions to influence mixing and product water characteristics. Also, the study shows that sigma coordinates produce much less mixing than z-type coordinates, resulting in heavier plumes that go further down slope. Sigma coordinates are less sensitive to changes in resolution but just as sensitive to vertical viscosity as z-coordinates. Additionally, preliminary measurements of overflow diagnostics on global simulations using a realistic oceanic domain are presented.

  3. Monitoring of deep brain temperature in infants using multi-frequency microwave radiometry and thermal modelling.

    PubMed

    Han, J W; Van Leeuwen, G M; Mizushina, S; Van de Kamer, J B; Maruyama, K; Sugiura, T; Azzopardi, D V; Edwards, A D

    2001-07-01

    In this study we present a design for a multi-frequency microwave radiometer aimed at prolonged monitoring of deep brain temperature in newborn infants and suitable for use during hypothermic neural rescue therapy. We identify appropriate hardware to measure brightness temperature and evaluate the accuracy of the measurements. We describe a method to estimate the tissue temperature distribution from measured brightness temperatures which uses the results of numerical simulations of the tissue temperature as well as the propagation of the microwaves in a realistic detailed three-dimensional infant head model. The temperature retrieval method is then used to evaluate how the statistical fluctuations in the measured brightness temperatures limit the confidence interval for the estimated temperature: for an 18 degrees C temperature differential between cooled surface and deep brain we found a standard error in the estimated central brain temperature of 0.75 degrees C. Evaluation of the systematic errors arising from inaccuracies in model parameters showed that realistic deviations in tissue parameters have little impact compared to uncertainty in the thickness of the bolus between the receiving antenna and the infant's head or in the skull thickness. This highlights the need to pay particular attention to these latter parameters in future practical implementation of the technique.
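
    The retrieval idea can be illustrated with a linear toy: each channel's brightness temperature is a depth-weighted average of the tissue profile, and a regularized least-squares inversion recovers a smooth profile. The weighting functions, noise level, and profile below are invented, not the paper's simulated ones.

```python
# Linear toy of multi-frequency radiometric retrieval: T_B = W @ T, with
# deeper sensitivity at lower frequencies, inverted with a Tikhonov
# smoothness prior. All kernels and values are invented.
import numpy as np

z = np.linspace(0, 5, 50)                        # depth into head, cm
depths = np.array([0.8, 1.5, 2.5, 3.5])          # per-channel e-folding depth
W = np.exp(-z[None, :] / depths[:, None])
W /= W.sum(axis=1, keepdims=True)                # rows integrate to 1

T_true = 33.0 + 4.0 * (1 - np.exp(-z / 1.5))     # cooled surface, warm core
Tb = W @ T_true + np.random.default_rng(4).normal(0, 0.1, 4)  # noisy channels

# regularized inversion with a second-difference (smoothness) prior
D = np.diff(np.eye(len(z)), n=2, axis=0)
A = np.vstack([W, 1e-2 * D])
rhs = np.concatenate([Tb, np.zeros(D.shape[0])])
T_est, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print("deep estimate:", T_est[-1].round(2), " true:", T_true[-1].round(2))
```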

  4. Variability of pulsed energy outputs from three dermatology lasers during multiple simulated treatments.

    PubMed

    Britton, Jason

    2018-01-20

    Dermatology laser treatments are undertaken at regional departments using lasers of different powers and wavelengths. In order to achieve good outcomes, there needs to be good consistency of laser output across different weeks as it is custom and practice to break down the treatments into individual fractions. Departments will also collect information from test patches to help decide on the most appropriate treatment parameters for individual patients. The objective of these experiments is to assess the variability of the energy outputs from a small number of lasers across multiple weeks at realistic parameters. The energy outputs from 3 lasers were measured at realistic treatment parameters using a thermopile detector across a period of 6 weeks. All lasers fired in single-pulse mode demonstrated good repeatability of energy output. In spite of one of the lasers being scheduled for a dye canister change in the next 2 weeks, there was good energy matching between the two devices with only a 4%-5% variation in measured energies. Based on the results presented, clinical outcomes should not be influenced by variability in the energy outputs of the dermatology lasers used as part of the treatment procedure.

  5. Environmental assessment of mining industry solid pollution in the mercurial district of Azzaba, northeast Algeria.

    PubMed

    Seklaoui, M'hamed; Boutaleb, Abdelhak; Benali, Hanafi; Alligui, Fadila; Prochaska, Walter

    2016-11-01

    To date, there have been few detailed studies regarding the impact of mining and metallogenic activities on solid fractions in the Azzaba mercurial district (northeast Algeria) despite its importance and global similarity with large Hg mines. To assess the degree, distribution, and sources of pollution, a physical inventory of apparent pollution was developed, and several samples of mining waste, process waste, sediment, and soil were collected on regional and local scales to determine the concentration of Hg and other metals according to their existing mineralogical association. Several physico-chemical parameters known to influence the pollution distribution were also determined. The extremely high concentrations of all metals exceed all norms and predominantly characterize the metallurgic and mining areas; the metal concentrations decrease significantly within short distances of these sources. The geo-accumulation index, which is the most realistic assessment method, demonstrates that soils and sediments near waste dumps and abandoned Hg mines are extremely polluted by all analyzed metals. The pollution by these metals decreases significantly with distance, which indicates a limited dispersion. The results of a clustering analysis and an integrated pollution index suggest that waste dumps, which are composed of calcine and condensation wastes, are the main source of pollution. Correlations and principal component analysis reveal the important role of hosting carbonate rocks in limiting pollution and differentiating calcine wastes from condensation waste, which has an extremely high Hg concentration (>1 %).

  6. Numerical simulation of electromagnetic fields and impedance of CERN LINAC4 H(-) source taking into account the effect of the plasma.

    PubMed

    Grudiev, A; Lettry, J; Mattei, S; Paoluzzi, M; Scrivens, R

    2014-02-01

    Numerical simulation of the CERN LINAC4 H(-) source 2 MHz RF system has been performed, taking into account a realistic geometry from a 3D Computer Aided Design model, using a commercial FEM high-frequency simulation code. The effect of the plasma has been added to the model by the approximation of a homogeneous electrically conducting medium. Electric and magnetic fields, RF power losses, and the impedance of the circuit have been calculated for different values of the plasma conductivity. Three different regimes have been found depending on the plasma conductivity: (1) zero or low plasma conductivity results in the RF electric field induced by the RF antenna being mainly capacitive, with axial direction; (2) intermediate conductivity results in the expulsion of the capacitive electric field from the plasma, and the RF power coupling, which increases linearly with the plasma conductivity, is mainly dominated by the inductive azimuthal electric field; (3) high conductivity results in the shielding of both the electric and magnetic fields from the plasma due to the skin effect, which reduces RF power coupling to the plasma. From these simulations and measurements of the RF power coupling on the CERN source, a value of the plasma conductivity has been derived. It agrees well with an analytical estimate calculated from the measured plasma parameters. In addition, the simulated and measured impedances with and without plasma show very good agreement as well, demonstrating the validity of the plasma model used in the RF simulations.
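
    A back-of-envelope sketch of the three regimes, using the classical skin depth delta = sqrt(2 / (mu0 * sigma * omega)) at the 2 MHz drive; the conductivity values and plasma radius below are illustrative assumptions, not the paper's.

```python
# Back-of-envelope regime classification via the collisional skin depth
# at a 2 MHz drive. Conductivities and the plasma radius are invented.
import numpy as np

mu0 = 4e-7 * np.pi
omega = 2 * np.pi * 2e6                     # 2 MHz RF drive
plasma_radius = 0.02                        # assumed plasma radius, m

for sigma in (1e-2, 1e0, 1e2, 1e4):         # S/m, spanning the regimes
    delta = np.sqrt(2.0 / (mu0 * sigma * omega))
    regime = ("fields penetrate fully" if delta > 10 * plasma_radius else
              "inductive coupling" if delta > plasma_radius else
              "skin effect shields the core")
    print(f"sigma = {sigma:8.0e} S/m  delta = {delta:8.3e} m  -> {regime}")
```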

  7. Hybrid Monte Carlo/deterministic methods for radiation shielding problems

    NASA Astrophysics Data System (ADS)

    Becker, Troy L.

    For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods can be used to achieve user-specified Monte Carlo distributions. Overall, the Transform approach performed more efficiently than the weight window methods, but it performed much more efficiently for source-detector problems than for global problems.
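
    The weight-window mechanics can be sketched on a one-dimensional slab attenuation toy: particles whose weight exceeds the window are split, and underweight particles play Russian roulette, with window centers loosely following an assumed importance profile. This is only a schematic of the idea, not the thesis' multigroup implementation.

```python
# Toy 1-D slab transmission with weight windows: splitting above the window,
# Russian roulette below it. Window centers follow a made-up importance
# profile; cross sections and counts are invented.
import numpy as np

rng = np.random.default_rng(5)
n_cells, sigma_t, n_hist = 10, 1.0, 20_000
# window center ~ inverse of an assumed adjoint importance per cell
center = np.exp(-np.arange(n_cells) * sigma_t)

tally = 0.0
for _ in range(n_hist):
    stack = [(0, 1.0)]                               # (cell, weight)
    while stack:
        cell, w = stack.pop()
        while cell < n_cells:
            if rng.random() > np.exp(-sigma_t):      # absorbed in this cell
                break
            cell += 1
            if cell >= n_cells:
                tally += w                           # reached the detector
                break
            lo, hi = 0.5 * center[cell], 2.0 * center[cell]
            if w > hi:                               # split heavy particles
                n_split = int(w / center[cell])
                stack.extend([(cell, w / n_split)] * (n_split - 1))
                w /= n_split
            elif w < lo:                             # Russian roulette
                if rng.random() < w / center[cell]:
                    w = center[cell]
                else:
                    break

print("estimated transmission:", tally / n_hist,
      " analytic:", np.exp(-sigma_t * n_cells))
```

    Both moves conserve expected weight, so the estimator stays unbiased while many more (lighter) particles reach the deep detector, which is the variance-reduction point of the window.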

  8. Toward seismic source imaging using seismo-ionospheric data

    NASA Astrophysics Data System (ADS)

    Rolland, L.; Larmat, C. S.; Mikesell, D.; Sladen, A.; Khelfi, K.; Astafyeva, E.; Lognonne, P. H.

    2014-12-01

    The worldwide coverage offered by global navigation space systems (GNSS) such as GPS, GLONASS or Galileo allows seismological measurements of a new kind. GNSS-derived total electron content (TEC) measurements can be especially useful to image seismically active zones that are not covered by conventional instruments. For instance, it has been shown that the Japanese dense GPS network GEONET was able to record images of the ionosphere response to the initial coseismic sea-surface motion induced by the great Mw 9.0 2011 Tohoku-Oki earthquake less than 10 minutes after the rupture initiation (Astafyeva et al., 2013). But earthquakes of lower magnitude, down to about 6.5, would also induce measurable ionospheric perturbations when GNSS stations are located less than 250 km away from the epicenter. In order to make use of these new data, ionospheric seismology needs to develop accurate forward models so that we can invert for quantitative seismic source parameters. We will present our current understanding of the coupling mechanisms between the solid Earth, the ocean, the atmosphere and the ionosphere. We will also present the state-of-the-art in the modeling of coseismic ionospheric disturbances using acoustic ray theory and a new 3D modeling method based on the Spectral Element Method (SEM). This latter numerical tool will allow us to incorporate lateral variations in the solid Earth properties, the bathymetry and the atmosphere as well as realistic seismic source parameters. Furthermore, seismo-acoustic waves propagate in the atmosphere at a much slower speed (from 0.3 to ~1 km/s) than seismic waves propagate in the solid Earth. We are exploring the application of back-projection and time-reversal methods to TEC observations in order to retrieve the time and space characteristics of the acoustic emission in the seismic source area. We will first show modeling and inversion results with synthetic data. Finally, we will illustrate the imaging capability of our approach with, among other possible examples, the 2011 Mw 9.0 Tohoku-Oki earthquake, Japan, the 2012 Mw 7.8 Haida Gwaii earthquake, Canada, and the 2011 Mw 7.1 Van earthquake, Eastern Turkey.

  9. Non-Spherical Source-Surface Model of the Corona and Heliosphere for a Quadrupolar Main Field of the Sun

    NASA Astrophysics Data System (ADS)

    Schulz, M.

    2008-05-01

    Different methods of modeling the coronal and heliospheric magnetic field are conveniently visualized and intercompared by applying them to ideally axisymmetric field models. Thus, for example, a dipolar main B field with its moment parallel to the Sun's rotation axis leads to a flat heliospheric current sheet. More general solar main B fields (still axisymmetric about the solar rotation axis for simplicity) typically lead to cone-shaped current sheets beyond the source surface (and presumably also in MHD models). As in the dipolar case [Schulz et al., Solar Phys., 60, 83-104, 1978], such conical current sheets can be made realistically thin by taking the source surface to be non-spherical in a way that reflects the underlying structure of the Sun's main B field. A source surface that seems to work well in this respect [Schulz, Ann. Geophysicae, 15, 1379-1387, 1997] is a surface of constant F = (1/r)^k B, where B is the scalar strength of the Sun's main magnetic field and k (~ 1.4) is a shape parameter. This construction tends to flatten the source surface in regions where B is relatively weak. Thus, for example, the source surface for a dipolar B field is shaped somewhat like a Rugby football, whereas the source surface for an axisymmetric quadrupolar B field is similarly elongated but somewhat flattened (as if stuffed into a pair of co-axial cones) at mid-latitudes. A linear combination of co-axial dipolar and quadrupolar B fields generates a somewhat apple-shaped source surface. If the region surrounded by the source surface is regarded as current-free, then the source surface itself should be (as nearly as possible) an equipotential surface for the corresponding magnetic scalar potential (expanded, for example, in spherical harmonics). More generally, the mean-square tangential component of the coronal magnetic field over the source surface should be minimized with respect to any adjustable parameters of the field model. The solar wind should then flow not quite radially, but rather in a straight line along the outward normal to the source surface, and the heliospheric B field should follow a corresponding generalization of Parker's spiral [Levine et al., Solar Phys., 77, 363-392, 1982]. In this work the above program is implemented for a Sun with an axisymmetric but purely quadrupolar main magnetic field. Two heliospheric current sheets emanate from circular neutral lines at mid-latitudes on the corresponding source surface. However, because the source surface is relatively flattened in regions where these neutral lines appear, the radial component of the heliospheric B field at r ~ 1 AU and beyond is much more nearly latitude-independent in absolute value than one would expect from a model based on a spherical source surface.
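
    The constant-F construction lends itself to a short numerical sketch: for each colatitude, solve F(r, theta) = F0 for the source-surface radius by bisection. A dipole field is used here for brevity (the quadrupole case is analogous), and the anchor radius is an assumption.

```python
# Numerical sketch of the constant-F source surface: solve
# (1/r)^k * B(r, theta) = F0 by bisection for a dipole field.
# k ~ 1.4 is taken from the text; the anchor radius is invented.
import numpy as np

k = 1.4

def B_dipole(r, theta):
    return np.sqrt(1.0 + 3.0 * np.cos(theta) ** 2) / r ** 3

def F(r, theta):
    return (1.0 / r) ** k * B_dipole(r, theta)

def surface_radius(theta, F0, lo=1.0, hi=50.0, iters=60):
    for _ in range(iters):                 # bisection: F decreases with r
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid, theta) > F0 else (lo, mid)
    return 0.5 * (lo + hi)

F0 = F(2.5, np.pi / 2)                     # anchor: r = 2.5 R_sun at equator
for deg in (0, 30, 60, 90):
    th = np.radians(deg)
    print(f"colatitude {deg:2d} deg: r_ss = {surface_radius(th, F0):.2f} R_sun")
```

    Because the dipole field is stronger over the poles, the solved radius is larger there, reproducing the elongated "Rugby football" shape described in the abstract.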

  10. Probing light sterile neutrino signatures at reactor and Spallation Neutron Source neutrino experiments

    NASA Astrophysics Data System (ADS)

    Kosmas, T. S.; Papoulias, D. K.; Tórtola, M.; Valle, J. W. F.

    2017-09-01

    We investigate the impact of a fourth sterile neutrino at reactor and Spallation Neutron Source neutrino detectors. Specifically, we explore the discovery potential of the TEXONO and COHERENT experiments to subleading sterile neutrino effects through the measurement of the coherent elastic neutrino-nucleus scattering event rate. Our dedicated χ2-sensitivity analysis employs realistic nuclear structure calculations adequate for high purity sub-keV threshold Germanium detectors.

  11. Do absorption and realistic distraction influence performance of component task surgical procedure?

    PubMed

    Pluyter, Jon R; Buzink, Sonja N; Rutkowski, Anne-F; Jakimowicz, Jack J

    2010-04-01

    Surgeons perform complex tasks while exposed to multiple distracting sources that may increase stress in the operating room (e.g., music, conversation, and unadapted use of sophisticated technologies). This study aimed to examine whether such realistic social and technological distracting conditions may influence surgical performance. Twelve medical interns performed a laparoscopic cholecystectomy task with the Xitact LC 3.0 virtual reality simulator under distracting conditions (exposure to music, conversation, and nonoptimal handling of the laparoscope) versus nondistracting conditions (control condition) as part of a 2 × 2 within-subject experimental design. Under distracting conditions, the medical interns showed a significant decline in task performance (overall task score, task errors, and operating time) and significantly increased levels of irritation toward both the assistant handling the laparoscope in a nonoptimal way and the sources of social distraction. Furthermore, individual differences in cognitive style (i.e., cognitive absorption and need for cognition) significantly influenced the levels of irritation experienced by the medical interns. The results suggest careful evaluation of the social and technological sources of distraction in the operation room to reduce irritation for the surgeon and provision of proper preclinical laparoscope navigation training to increase security for the patient.

  12. LEO-to-ground polarization measurements aiming for space QKD using Small Optical TrAnsponder (SOTA).

    PubMed

    Carrasco-Casado, Alberto; Kunimori, Hiroo; Takenaka, Hideki; Kubo-Oka, Toshihiro; Akioka, Maki; Fuse, Tetsuharu; Koyama, Yoshisada; Kolev, Dimitar; Munemasa, Yasushi; Toyoshima, Morio

    2016-05-30

    Quantum communication, and more specifically Quantum Key Distribution (QKD), enables the transmission of information in a theoretically secure way, guaranteed by the laws of quantum physics. Although fiber-based QKD has been readily available for several years, a global quantum communication network will require the development of space links, which remains to be demonstrated. NICT launched a LEO satellite in 2014 carrying a lasercom terminal (SOTA), designed for in-orbit technological demonstrations. In this paper, we present the results of the campaign to measure the polarization characteristics of the SOTA laser sources after propagating from LEO to ground. The most widely used property for encoding information in free-space QKD is the polarization, and especially the linear polarization. Therefore, studying its behavior in a realistic link is a fundamental step for proving the feasibility of space quantum communications. The results of the polarization preservation of two highly polarized lasers are presented here, including the first-time measurement of a linearly polarized source at λ = 976 nm and a circularly polarized source at λ = 1549 nm from space using a realistic QKD-like receiver, installed in the Optical Ground Station at the NICT Headquarters, in Tokyo, Japan.

  13. Assessing methane emission estimation methods based on atmospheric measurements from oil and gas production using LES simulations

    NASA Astrophysics Data System (ADS)

    Saide, P. E.; Steinhoff, D.; Kosovic, B.; Weil, J.; Smith, N.; Blewitt, D.; Delle Monache, L.

    2017-12-01

    There are a wide variety of methods that have been proposed and used to estimate methane emissions from oil and gas production by using air composition and meteorology observations in conjunction with dispersion models. Although there has been some verification of these methodologies using controlled releases and concurrent atmospheric measurements, it is difficult to assess the accuracy of these methods for more realistic scenarios considering factors such as terrain, emissions from multiple components within a well pad, and time-varying emissions representative of typical operations. In this work we use a large-eddy simulation (LES) to generate controlled but realistic synthetic observations, which can be used to test multiple source term estimation methods, also known as an Observing System Simulation Experiment (OSSE). The LES is based on idealized simulations of the Weather Research & Forecasting (WRF) model at 10 m horizontal grid-spacing covering an 8 km by 7 km domain with terrain representative of a region located in the Barnett shale. Well pads are set up in the domain following a realistic distribution and emissions are prescribed every second for the components of each well pad (e.g., chemical injection pump, pneumatics, compressor, tanks, and dehydrator) using a simulator driven by oil and gas production volume, composition and realistic operational conditions. The system is set up to allow assessments under different scenarios such as normal operations, during liquids unloading events, or during other prescribed operational upset events. Methane and meteorology model output are sampled following the specifications of the emission estimation methodologies and considering typical instrument uncertainties, resulting in realistic observations (see Figure 1). We will show the evaluation of several emission estimation methods including the EPA Other Test Method 33A and estimates using the EPA AERMOD regulatory model. We will also show source estimation results from advanced methods such as variational inverse modeling, and Bayesian inference and stochastic sampling techniques. Future directions including other types of observations, other hydrocarbons being considered, and assessment of additional emission estimation methods will be discussed.
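
    The core of such an OSSE is an observation operator that samples the model state and perturbs it with instrument noise. A minimal sketch follows, with a Gaussian plume standing in for the LES methane field; the sensor positions, noise level, and all other numbers are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for LES output: a methane enhancement field [ppm] on a 10 m grid.
    nx, ny = 80, 70
    x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    plume = 5.0 * np.exp(-((x - 20)**2 / 200.0 + (y - 35)**2 / 50.0))

    def synthetic_obs(field, ix, iy, noise_ppm=0.1):
        """Observation operator: sample the model field at the sensor indices
        and add Gaussian instrument noise."""
        truth = field[ix, iy]
        return truth + rng.normal(0.0, noise_ppm, size=truth.shape)

    # A downwind transect of sensors, as in a mobile-survey estimation method
    ix = np.full(10, 40)
    iy = np.arange(20, 50, 3)
    print(np.round(synthetic_obs(plume, ix, iy), 2))
    ```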

  14. A theoretical investigation of chirp insonification of ultrasound contrast agents.

    PubMed

    Barlow, Euan; Mulholland, Anthony J; Gachagan, Anthony; Nordon, Alison

    2011-08-01

    A theoretical investigation of second harmonic imaging of an Ultrasound Contrast Agent (UCA) under chirp insonification is considered. By solving the UCA's dynamical equation analytically, the effects that the chirp signal parameters and the UCA shell parameters have on the amplitude of the second harmonic are examined. This allows optimal parameter values to be identified which maximise the UCA's second harmonic response. A relationship is found for the chirp parameters which ensures that a signal can be designed to resonate a UCA for a given set of shell parameters. It is also shown that the shell thickness, shell viscosity and shell elasticity parameter should be as small as realistically possible in order to maximise the second harmonic amplitude. Keywords: Keller-Herring, second harmonic, chirp, ultrasound contrast agent. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Truncated disc surface brightness profiles produced by flares

    NASA Astrophysics Data System (ADS)

    Borlaff, Alejandro; Eliche-Moral, M. Carmen; Beckman, John; Font, Joan

    2017-03-01

    Previous studies have discarded the possibility that flares in galactic discs may explain the truncations frequently observed in highly inclined galaxies (Kregel et al. 2002). However, no study has systematically analysed this hypothesis using realistic models for the disc, the flare and the bulge. We derive edge-on and face-on surface brightness profiles for a series of realistic galaxy models with flared discs that sample a wide range of structural and photometric parameters across the Hubble Sequence, in accordance with observations. The surface brightness profile of each galaxy model has been simulated for edge-on and face-on views to find out whether or not the flared disc produces a significant truncation in the edge-on view compared to the face-on view. In order to simulate realistic images of disc galaxies, we have considered the observational distribution of the photometric parameters as a function of morphological type for three mass bins (10 < log10(M/M⊙) < 10.7, 10.7 < log10(M/M⊙) < 11 and log10(M/M⊙) > 11) and four morphological type bins (S0-Sa, Sb-Sbc, Sc-Scd and Sd-Sdm). For each mass bin, we have restricted the photometric and structural parameters of each modelled galaxy (μ_0,disc, μ_eff,bulge, B/T, M_abs, r_eff, n_bulge, h_R,disc) and the flare in the disc (h_z,disc/h_R,disc, ∂h_z,disc/∂R) to their characteristic observational ranges (see de Grijs & Peletier 1997, Graham 2001, López-Corredoira et al. 2002, Yoachim & Dalcanton 2006, Bizyaev et al. 2014, Mosenkov et al. 2015). Contrary to previous claims, the simulations show that realistic flared discs can be responsible for the truncations observed in many edge-on systems, while preserving the profile of the non-flared analogous model in the face-on view. These breaks reproduce the properties of the weak-to-intermediate breaks observed in many real Type-II galaxies in the diagram relating the radial location of the break (R_brkII), in units of the inner disc scale-length, with the break strength S (Laine et al. 2014). Radial variation of the scale-height of the disc (flaring) can thus explain the existence of many breaks in edge-on galaxies, especially those with low break strengths, S = log10(h_o/h_i) ~ [-0.3, -0.1].

  16. Vacuum stress energy density and its gravitational implications

    NASA Astrophysics Data System (ADS)

    Estrada, Ricardo; Fulling, Stephen A.; Kaplan, Lev; Kirsten, Klaus; Liu, Zhonghai; Milton, Kimball A.

    2008-04-01

    In nongravitational physics the local density of energy is often regarded as merely a bookkeeping device; only total energy has an experimental meaning—and it is only modulo a constant term. But in general relativity the local stress-energy tensor is the source term in Einstein's equation. In closed universes, and those with Kaluza-Klein dimensions, theoretical consistency demands that quantum vacuum energy should exist and have gravitational effects, although there are no boundary materials giving rise to that energy by van der Waals interactions. In the lab there are boundaries, and in general the energy density has a nonintegrable singularity as a boundary is approached (for idealized boundary conditions). As pointed out long ago by Candelas and Deutsch, in this situation there is doubt about the viability of the semiclassical Einstein equation. Our goal is to show that the divergences in the linearized Einstein equation can be renormalized to yield a plausible approximation to the finite theory that presumably exists for realistic boundary conditions. For a scalar field with Dirichlet or Neumann boundary conditions inside a rectangular parallelepiped, we have calculated by the method of images all components of the stress tensor, for all values of the conformal coupling parameter and an exponential ultraviolet cutoff parameter. The qualitative features of contributions from various classes of closed classical paths are noted. Then the Estrada-Kanwal distributional theory of asymptotics, particularly the moment expansion, is used to show that the linearized Einstein equation with the stress-energy near a plane boundary as source converges to a consistent theory when the cutoff is removed. This paper reports work in progress on a project combining researchers in Texas, Louisiana and Oklahoma. It is supported by NSF Grants PHY-0554849 and PHY-0554926.

  17. Projections of health care expenditures as a share of the GDP: actuarial and macroeconomic approaches.

    PubMed Central

    Warshawsky, M J

    1994-01-01

    STUDY QUESTION. Can the steady increases in health care expenditures as a share of GDP projected by widely cited actuarial models be rationalized by a macroeconomic model with sensible parameters and specification? DATA SOURCES. National Income and Product Accounts, and Social Security and Health Care Financing Administration are the data sources used in parameter estimates. STUDY DESIGN. Health care expenditures as a share of gross domestic product (GDP) are projected using two methodological approaches--actuarial and macroeconomic--and under various assumptions. The general equilibrium macroeconomic approach has the advantage of allowing an investigation of the causes of growth in the health care sector and its consequences for the overall economy. DATA COLLECTION METHODS. Simulations are used. PRINCIPAL FINDINGS. Both models unanimously project a continued increase in the ratio of health care expenditures to GDP. Under the most conservative assumptions, that is, robust economic growth, improved demographic trends, or a significant moderation in the rate of health care price inflation, the health care sector will consume more than a quarter of national output by 2065. Under other (perhaps more realistic) assumptions, including a continuation of current trends, both approaches predict that health care expenditures will comprise between a third and a half of national output. In the macroeconomic model, the increasing use of capital goods in the health care sector explains the observed rise in relative prices. Moreover, this "capital deepening" implies that a relatively modest fraction of the labor force is employed in health care and that the rest of the economy is increasingly starved for capital, resulting in a declining standard of living. PMID:8063567

  18. Fast repurposing of high-resolution stereo video content for mobile use

    NASA Astrophysics Data System (ADS)

    Karaoglu, Ali; Lee, Bong Ho; Boev, Atanas; Cheong, Won-Sik; Gotchev, Atanas

    2012-06-01

    3D video content is captured and created mainly in high resolution targeting big cinema or home TV screens. For 3D mobile devices, equipped with small-size auto-stereoscopic displays, such content has to be properly repurposed, preferably in real-time. The repurposing requires not only spatial resizing but also properly maintaining the output stereo disparity, as it should deliver realistic, pleasant and harmless 3D perception. In this paper, we propose an approach to adapt the disparity range of the source video to the comfort disparity zone of the target display. To achieve this, we adapt the scale and the aspect ratio of the source video. We aim at maximizing the disparity range of the retargeted content within the comfort zone, and minimizing the letterboxing of the cropped content. The proposed algorithm consists of five stages. First, we analyse the display profile, which characterises what 3D content can be comfortably observed in the target display. Then, we perform fast disparity analysis of the input stereoscopic content. Instead of returning the dense disparity map, it returns an estimate of the disparity statistics (min, max, mean, and variance) per frame. Additionally, we detect scene cuts, where sharp transitions in disparities occur. Based on the estimated input, and desired output disparity ranges, we derive the optimal cropping parameters and scale of the cropping window, which would yield the targeted disparity range and minimize the area of cropped and letterboxed content. Once the rescaling and cropping parameters are known, we perform the resampling procedure using spline-based and perceptually optimized resampling (anti-aliasing) kernels, which also have a very efficient computational structure. Perceptual optimization is achieved through adjusting the cut-off frequency of the anti-aliasing filter with the throughput of the target display.
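
    The per-frame disparity statistics and comfort-zone mapping lend themselves to a compact sketch. The functions below are illustrative stand-ins, not the paper's exact algorithm: the scene-cut threshold and the uniform comfort-zone scaling are assumptions.

    ```python
    import numpy as np

    def disparity_stats(disp):
        """Per-frame disparity statistics (min, max, mean, variance), used
        in place of a dense disparity map to keep the analysis real-time."""
        return disp.min(), disp.max(), disp.mean(), disp.var()

    def detect_scene_cuts(per_frame_stats, jump=10.0):
        """Flag frames whose mean disparity jumps sharply relative to the
        previous frame (a crude stand-in for the scene-cut detector)."""
        means = np.array([s[2] for s in per_frame_stats])
        return np.flatnonzero(np.abs(np.diff(means)) > jump) + 1

    def comfort_scale(d_min, d_max, c_min, c_max):
        """Largest uniform disparity scale keeping [d_min, d_max] inside the
        display comfort zone [c_min, c_max] (with c_min < 0 < c_max)."""
        s = np.inf
        if d_max > 0:
            s = min(s, c_max / d_max)
        if d_min < 0:
            s = min(s, c_min / d_min)
        return min(s, 1.0)   # never stretch the content, only compress
    ```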

  19. Dynamic Modeling from Flight Data with Unknown Time Skews

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2016-01-01

    A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.
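
    As a much simplified stand-in for the frequency-domain optimization described above, the relative time skew between two channels can be illustrated with a plain cross-correlation estimate. The test signal, the 0.12 s skew, and the noise level below are invented for the demo.

    ```python
    import numpy as np

    def estimate_skew(ref, sig, dt):
        """Estimate the relative time skew of `sig` vs. `ref` from the peak
        of their cross-correlation (a simplified stand-in for the combined
        reconstruction/optimization approach of the paper)."""
        ref = ref - ref.mean()
        sig = sig - sig.mean()
        xc = np.correlate(sig, ref, mode="full")
        lag = np.argmax(xc) - (len(ref) - 1)
        return lag * dt

    # Demo: a 0.12 s skew applied to a noisy test signal
    dt = 0.01
    t = np.arange(0.0, 20.0, dt)
    ref = np.sin(0.8 * t) + 0.3 * np.sin(2.5 * t)
    rng = np.random.default_rng(1)
    skewed = np.interp(t - 0.12, t, ref) + 0.02 * rng.normal(size=t.size)
    print(f"estimated skew: {estimate_skew(ref, skewed, dt):.3f} s")
    ```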

  20. Comment on "Symmetry and structure of quantized vortices in superfluid ³He-B"

    NASA Astrophysics Data System (ADS)

    Sauls, J. A.; Serene, J. W.

    1985-10-01

    Recent theoretical attempts to explain the observed vortex-core phase transition in superfluid ³He-B yield conflicting results. Variational calculations by Fetter and Theodorakis, based on realistic strong-coupling parameters, yield a phase transition in the Ginzburg-Landau region that is in qualitative agreement with the phase diagram. Numerically precise calculations by Salomaa and Volovik (SV), based on the Brinkman-Serene-Anderson (BSA) parameters, do not yield a phase transition between axially symmetric vortices. The ambiguity of these results is in part due to the large differences between the β parameters, which are inputs to the vortex free-energy functional. We comment on the relative merits of the β parameters based on recent improvements in the quasiparticle scattering amplitude and the BSA parameters used by SV.

  1. Exemplifying the Effects of Parameterization Shortcomings in the Numerical Simulation of Geological Energy and Mass Storage

    NASA Astrophysics Data System (ADS)

    Dethlefsen, Frank; Tilmann Pfeiffer, Wolf; Schäfer, Dirk

    2016-04-01

    Numerical simulations of hydraulic, thermal, geomechanical, or geochemical (THMC) processes in the subsurface have been conducted for decades. Often, such simulations begin with a parameter set that is as realistic as possible. Then, a base scenario is calibrated against field observations. Finally, scenario simulations can be performed, for instance to forecast the system behavior after varying input data. In the context of subsurface energy and mass storage, however, such model calibrations based on field data are often not available, as these storage operations have not yet been carried out. Consequently, the numerical models rely merely on the parameter set initially selected, and uncertainties arising from a lack of parameter values or process understanding may be neither perceivable nor quantifiable. Therefore, conducting THMC simulations in the context of energy and mass storage calls for a particular review of the model parameterization and its input data, and such a review hardly exists so far to the required extent. Variability or aleatory uncertainty exists for geoscientific parameter values in general, and parameters for which numerous data points are available, such as aquifer permeabilities, may be described statistically, thereby exhibiting statistical uncertainty. In this case, sensitivity analyses can be conducted to quantify the uncertainty in the simulation resulting from varying this parameter. For other parameters, the lack of data quantity and quality implies a fundamental change in the ongoing processes when such a parameter value is varied in numerical scenario simulations. As an example of such a scenario uncertainty, varying the capillary entry pressure as one of the multiphase flow parameters can either allow or completely inhibit the penetration of an aquitard by gas. As a last example, the uncertainty of cap-rock fault permeabilities, and consequently of potential leakage rates of stored gases into shallow compartments, is regarded as recognized ignorance by the authors of this study, as no realistic approach exists to determine this parameter and values are best guesses only. In addition to these aleatory uncertainties, an equivalent classification is possible for rating epistemic uncertainties describing the degree of understanding of processes such as the geochemical and hydraulic effects following potential gas intrusions from deeper reservoirs into shallow aquifers. As an outcome of this grouping of uncertainties, prediction errors of scenario simulations can be calculated by sensitivity analyses if the uncertainties are identified as statistical. However, if scenario uncertainties exist, or recognized ignorance has to be attested to a parameter or a process in question, the outcomes of simulations depend mainly on the decisions of the modeler in choosing parameter values or interpreting which processes occur. In that case, the informative value of numerical simulations is limited by ambiguous simulation results, which cannot be refined without improving the geoscientific database through laboratory or field studies on a longer-term basis, so that the effects of the subsurface use may be predicted realistically. This discussion, supplemented by a compilation of available geoscientific data to parameterize such simulations, will be presented in this study.

  2. Charcoal as an alternative energy source. sub-project: briquetting of charcoal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enstad, G.G.

    1982-02-02

    Charcoal briquettes have been studied both theoretically and experimentally. It appears most realistic to use binders in solution. Binders of this kind have been examined and the briquettes' mechanical properties measured. Most promising are borresperse, gum arabic, dynolex, and wood tar.

  3. Critical Perspectives on Methodology in Pedagogic Research

    ERIC Educational Resources Information Center

    Kahn, Peter

    2015-01-01

    The emancipatory dimension to higher education represents one of the sector's most compelling characteristics, but it remains important to develop understanding of the sources of determination that shape practice. Drawing on critical realist perspectives, we explore generative mechanisms by which methodology in pedagogic research affects the…

  4. Coping with Stress in the Special Education Classroom.

    ERIC Educational Resources Information Center

    Brownell, Mary

    1997-01-01

    Discusses the stress that special education teachers may feel by role overload and lack of autonomy. Stress relieving strategies are described, including setting realistic expectations, making distinctions between the job and personal life, increasing autonomy, looking for alternative sources of reinforcement, increasing efficacy, and developing…

  5. Shutterless non-uniformity correction for the long-term stability of an uncooled long-wave infrared camera

    NASA Astrophysics Data System (ADS)

    Liu, Chengwei; Sui, Xiubao; Gu, Guohua; Chen, Qian

    2018-02-01

    For the uncooled long-wave infrared (LWIR) camera, the infrared (IR) irradiation the focal plane array (FPA) receives is a crucial factor that affects the image quality. Ambient temperature fluctuation as well as system power consumption can result in changes of FPA temperature and radiation characteristics inside the IR camera; these will further degrade the imaging performance. In this paper, we present a novel shutterless non-uniformity correction method to compensate for non-uniformity derived from the variation of ambient temperature. Our method combines a calibration-based method and the properties of a scene-based method to obtain correction parameters at different ambient temperature conditions, so that the IR camera performance can be less influenced by ambient temperature fluctuation or system power consumption. The calibration process is carried out in a temperature chamber with slowly changing ambient temperature and a black body as uniform radiation source. Enough uniform images are captured and the gain coefficients are calculated during this period. Then in practical application, the offset parameters are calculated via the least squares method based on the gain coefficients, the captured uniform images and the actual scene. Thus we can get a corrected output through the gain coefficients and offset parameters. The performance of our proposed method is evaluated on realistic IR images and compared with two existing methods. The images used in the experiments were obtained by a 384 × 288 pixel uncooled LWIR camera. Results show that our proposed method can adaptively update correction parameters as the actual target scene changes and is more stable against temperature fluctuation than the other two methods.
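
    A minimal sketch of the two-parameter (gain/offset) correction is given below. The gain step mirrors a generic two-point blackbody calibration; the offset step is a simplified recursive scene-based update, not the paper's exact least-squares formulation, and the smoothing size and learning rate are assumed values.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def calibrate_gain(low_imgs, high_imgs):
        """Per-pixel gain from uniform blackbody frame stacks recorded at two
        radiance levels during the temperature-chamber calibration."""
        low, high = low_imgs.mean(axis=0), high_imgs.mean(axis=0)
        return (high.mean() - low.mean()) / (high - low)

    def update_offset(raw, gain, offset, alpha=0.01, smooth=9):
        """Recursive scene-based offset update: nudge each pixel toward a
        local spatial mean so fixed-pattern noise is removed while real
        scene detail averages out over motion."""
        corrected = gain * raw + offset
        residual = uniform_filter(corrected, size=smooth) - corrected
        return offset + alpha * residual

    def correct(raw, gain, offset):
        """Apply the two-parameter non-uniformity correction to one frame."""
        return gain * raw + offset
    ```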

  6. Linking the Weather Generator with Regional Climate Model: Effect of Higher Resolution

    NASA Astrophysics Data System (ADS)

    Dubrovsky, Martin; Huth, Radan; Farda, Ales; Skalak, Petr

    2014-05-01

    This contribution builds on our last year's EGU contribution, which followed two aims: (i) validation of the simulations of the present climate made by the ALADIN-Climate Regional Climate Model (RCM) at 25 km resolution, and (ii) presenting a methodology for linking the parametric weather generator (WG) with RCM output (aiming to calibrate a gridded WG capable of producing realistic synthetic multivariate weather series for weather-ungauged locations). We now have available new higher-resolution (6.25 km) simulations with the same RCM. The main topic of this contribution is an answer to the following question: What is the effect of using a higher spatial resolution on the quality of the simulated surface weather characteristics? In the first part, the high-resolution RCM simulation of the present climate will be validated in terms of selected WG parameters, which are derived from the RCM-simulated surface weather series and compared to those derived from weather series observed at 125 Czech meteorological stations. The set of WG parameters will include statistics of the surface temperature and precipitation series. When comparing the WG parameters from the two sources (RCM vs observations), we interpolate the RCM-based parameters to the station locations while accounting for the effect of altitude. In the second part, we will discuss the effect of using the higher resolution: the results of the validation tests will be compared with those obtained with the lower-resolution RCM. Acknowledgements: The present experiment is made within the frame of projects ALARO-Climate (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES 1102 action).

  7. Simplifying the complexity of a coupled carbon turnover and pesticide degradation model

    NASA Astrophysics Data System (ADS)

    Marschmann, Gianna; Erhardt, André H.; Pagel, Holger; Kügler, Philipp; Streck, Thilo

    2016-04-01

    The mechanistic one-dimensional model PECCAD (PEsticide degradation Coupled to CArbon turnover in the Detritusphere; Pagel et al. 2014, Biogeochemistry 117, 185-204) has been developed as a tool to elucidate regulation mechanisms of pesticide degradation in soil. A feature of this model is that it integrates functional traits of microorganisms, identifiable by molecular tools, and physicochemical processes such as transport and sorption that control substrate availability. Predicting the behavior of microbially active interfaces demands a fundamental understanding of factors controlling their dynamics. Concepts from dynamical systems theory allow us to study general properties of the model such as its qualitative behavior, intrinsic timescales and dynamic stability: Using a Latin hypercube method we sampled the parameter space for physically realistic steady states of the PECCAD ODE system and set up a numerical continuation and bifurcation problem with the open-source toolbox MatCont in order to obtain a complete classification of the dynamical system's behaviour. Bifurcation analysis reveals an equilibrium state of the system entirely controlled by fungal kinetic parameters. The equilibrium is generally unstable in response to small perturbations except for a small band in parameter space where the pesticide pool is stable. Time scale separation is a phenomenon that occurs in almost every complex open physical system. Motivated by the notion of "initial-stage" and "late-stage" decomposers and the concept of r-, K- or L-selected microbial life strategies, we test the applicability of geometric singular perturbation theory to identify fast and slow time scales of PECCAD. Revealing a generic fast-slow structure would greatly simplify the analysis of complex models of organic matter turnover by reducing the number of unknowns and parameters and providing a systematic mathematical framework for studying their properties.
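
    Latin hypercube screening for physically realistic steady states can be illustrated compactly. The sketch below uses SciPy's qmc module and a toy two-pool substrate/biomass system as a stand-in for the full PECCAD ODEs; the parameter names, bounds, and initial guess are all invented for the example.

    ```python
    import numpy as np
    from scipy.stats import qmc
    from scipy.optimize import fsolve

    def rhs(y, p):
        """Toy two-pool stand-in for the PECCAD ODEs: substrate S feeding
        microbial biomass B via Monod uptake, with linear biomass decay."""
        S, B = y
        k, K, Y, d, inp = p
        uptake = k * B * S / (K + S)
        return np.array([inp - uptake, Y * uptake - d * B])

    # Latin hypercube sample of the (assumed) kinetic parameter space
    sampler = qmc.LatinHypercube(d=5, seed=42)
    lo = [0.1, 0.1, 0.1, 0.001, 0.01]
    hi = [5.0, 10.0, 0.6, 0.1, 1.0]
    params = qmc.scale(sampler.random(n=200), lo, hi)

    steady = []
    for p in params:
        y, info, ok, _ = fsolve(rhs, x0=[1.0, 1.0], args=(p,), full_output=True)
        if ok == 1 and np.all(y > 0):      # keep physically realistic roots
            steady.append((p, y))
    print(f"{len(steady)} of {len(params)} samples give a positive steady state")
    ```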

  8. Mass-loss from advective accretion disc around rotating black holes

    NASA Astrophysics Data System (ADS)

    Aktar, Ramiz; Das, Santabrata; Nandi, Anuj

    2015-11-01

    We examine the properties of the outflowing matter from an advective accretion disc around a spinning black hole. During accretion, rotating matter experiences a centrifugal pressure-supported shock transition that effectively produces a virtual barrier around the black hole in the form of a post-shock corona (hereafter PSC). Due to shock compression, the PSC becomes hot and dense and eventually deflects a part of the inflowing matter as bipolar outflows because of the presence of an extra thermal gradient force. In our approach, we study the outflow properties in terms of the inflow parameters, namely the specific energy (E) and specific angular momentum (λ), considering a realistic outflow geometry around rotating black holes. We find that the spin of the black hole (a_k) plays an important role in deciding the outflow rate $R_{\dot{m}}$ (the ratio of the outflow to inflow mass flux); in particular, $R_{\dot{m}}$ is directly correlated with a_k for the same set of inflow parameters. It is found that a large range of the inflow parameters allows global accretion-ejection solutions, and the effective area of the parameter space (E, λ) with and without outflow decreases with black hole spin (a_k). We compute the maximum outflow rate ($R_{\dot{m}}^{\max}$) as a function of black hole spin and observe that $R_{\dot{m}}^{\max}$ depends only weakly on a_k, lying in the range ~10-18 per cent of the inflow rate for adiabatic indices 4/3 ≤ γ ≤ 1.5. We present the observational implications of our approach while studying steady/persistent jet activity based on the accretion states of black holes. We argue that our formalism has the potential to explain the observed jet kinetic power of several Galactic black hole sources and active galactic nuclei.

  9. A Facial Control Method Using Emotional Parameters in Sensibility Robot

    NASA Astrophysics Data System (ADS)

    Shibata, Hiroshi; Kanoh, Masayoshi; Kato, Shohei; Kunitachi, Tsutomu; Itoh, Hidenori

    The “Ifbot” robot communicates with people by considering its own “emotions”. Ifbot has many facial expressions with which to communicate enjoyment. These are used to express its internal emotions, purposes, reactions caused by external stimuli, and entertainment such as singing songs. All these facial expressions were developed manually by designers. With this approach, every facial motion we want Ifbot to express must be designed by hand, which is not realistic. We have therefore developed a system which converts Ifbot's emotions to its facial expressions automatically. In this paper, we propose a method for creating Ifbot's facial expressions from emotional parameters, which represent its internal emotions computationally.

  10. Directive sources in acoustic discrete-time domain simulations based on directivity diagrams.

    PubMed

    Escolano, José; López, José J; Pueo, Basilio

    2007-06-01

    Discrete-time domain methods provide a simple and flexible way to solve initial boundary value problems. With regard to the sources in such methods, only monopoles or dipoles can be considered. However, in many problems such as room acoustics, the radiation of realistic sources is direction-dependent and their directivity patterns have a clear influence on the total sound field. In this letter, a method to synthesize the directivity of sources is proposed, especially for cases where knowledge of the source is based only on discrete values of the directivity diagram. Some examples have been carried out in order to show the behavior and accuracy of the proposed method.

  11. Experimental testing of the noise-canceling processor.

    PubMed

    Collins, Michael D; Baer, Ralph N; Simpson, Harry J

    2011-09-01

    Signal-processing techniques for localizing an acoustic source buried in noise are tested in a tank experiment. Noise is generated using a discrete source, a bubble generator, and a sprinkler. The experiment has essential elements of a realistic scenario in matched-field processing, including complex source and noise time series in a waveguide with water, sediment, and multipath propagation. The noise-canceling processor is found to outperform the Bartlett processor and provide the correct source range for signal-to-noise ratios below -10 dB. The multivalued Bartlett processor is found to outperform the Bartlett processor but not the noise-canceling processor. © 2011 Acoustical Society of America

  12. Quantum energy teleportation in a quantum Hall system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yusa, Go; Izumida, Wataru; Hotta, Masahiro

    2011-09-15

    We propose an experimental method for a quantum protocol termed quantum energy teleportation (QET), which allows energy transportation to a remote location without physical carriers. Using a quantum Hall system as a realistic model, we discuss the physical significance of QET and estimate the order of energy gain using reasonable experimental parameters.

  13. Interdisciplinary Modeling and Dynamics of Archipelago Straits

    DTIC Science & Technology

    2009-01-01

    modeling, tidal modeling, multi-dynamics nested domains and non-hydrostatic modeling. WORK COMPLETED: Realistic Multiscale Simulations, Real-time... Six state variables (chlorophyll, nitrate, ammonium, detritus, phytoplankton, and zooplankton) were needed to initialize simulations. Using biological... parameters from the literature, climatology from World Ocean Atlas data for nitrate, and chlorophyll profiles extracted from satellite data, a first...

  14. Extensional channel flow revisited: a dynamical systems perspective

    PubMed Central

    Meseguer, Alvaro; Mellibovsky, Fernando; Weidman, Patrick D.

    2017-01-01

    Extensional self-similar flows in a channel are explored numerically for arbitrary stretching–shrinking rates of the confining parallel walls. The present analysis embraces time integrations and continuations of steady and periodic solutions unfolded in the parameter space. Previous studies focused on the analysis of branches of steady solutions for particular stretching–shrinking rates, while recent studies have also addressed the dynamical aspects of the problem. We have adopted a dynamical systems perspective, analysing the instabilities and bifurcations the base state undergoes as the Reynolds number is increased. It has been found that the base state becomes unstable for small Reynolds numbers, and a transitional region including complex dynamics takes place at intermediate Reynolds numbers, depending on the wall acceleration values. The base flow instabilities are constitutive parts of different codimension-two bifurcations that control the dynamics in parameter space. For large Reynolds numbers, the restriction to self-similarity results in simple flows with no realistic behaviour, but the flows obtained in the transition region can be a valuable tool for the understanding of the dynamics of realistic Navier–Stokes solutions. PMID:28690413

  15. Modern Perspectives on Numerical Modeling of Cardiac Pacemaker Cell

    PubMed Central

    Maltsev, Victor A.; Yaniv, Yael; Maltsev, Anna V.; Stern, Michael D.; Lakatta, Edward G.

    2015-01-01

    Cardiac pacemaking is a complex phenomenon that is still not completely understood. Together with experimental studies, numerical modeling has been traditionally used to acquire mechanistic insights in this research area. This review summarizes the present state of numerical modeling of the cardiac pacemaker, including approaches to resolve present paradoxes and controversies. Specifically we discuss the requirement for realistic modeling to consider symmetrical importance of both intracellular and cell membrane processes (within a recent “coupled-clock” theory). Promising future developments of the complex pacemaker system models include the introduction of local calcium control, mitochondria function, and biochemical regulation of protein phosphorylation and cAMP production. Modern numerical and theoretical methods such as multi-parameter sensitivity analyses within extended populations of models and bifurcation analyses are also important for the definition of the most realistic parameters that describe a robust, yet simultaneously flexible operation of the coupled-clock pacemaker cell system. The systems approach to exploring cardiac pacemaker function will guide development of new therapies, such as biological pacemakers for treating insufficient cardiac pacemaker function that becomes especially prevalent with advancing age. PMID:24748434

  16. Kinetics of devolatilization and oxidation of a pulverized biomass in an entrained flow reactor under realistic combustion conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jimenez, Santiago; Remacha, Pilar; Ballester, Javier

    2008-03-15

    In this paper the results of a complete set of devolatilization and combustion experiments performed with pulverized (∼500 μm) biomass in an entrained flow reactor under realistic combustion conditions are presented. The data obtained are used to derive the kinetic parameters that best fit the observed behaviors, according to a simple model of particle combustion (one-step devolatilization, apparent oxidation kinetics, thermally thin particles). The model is found to adequately reproduce the experimental trends regarding both volatile release and char oxidation rates for the range of particle sizes and combustion conditions explored. The experimental and numerical procedures, similar to those recently proposed for the combustion of pulverized coal [J. Ballester, S. Jimenez, Combust. Flame 142 (2005) 210-222], have been designed to derive the parameters required for the analysis of biomass combustion in practical pulverized fuel configurations and allow a reliable characterization of any finely pulverized biomass. Additionally, the results of a limited study on the release rate of nitrogen from the biomass particle during combustion are shown. (author)
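
    A minimal sketch of the one-step, thermally thin particle model named above: a single Arrhenius rate releasing volatiles toward an ultimate yield under a prescribed heating history. The kinetic constants, yield, and heating ramp below are placeholders, not the fitted values from the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # One-step devolatilization of a thermally thin particle:
    # dV/dt = A * exp(-E / (R * T(t))) * (V_inf - V)
    A, E, R = 5.0e3, 6.0e4, 8.314      # 1/s, J/mol, J/(mol K) (placeholders)
    V_inf = 0.75                       # ultimate volatile yield [kg/kg]

    def particle_T(t):
        """Prescribed particle heating history (placeholder 1000 K/s ramp)."""
        return np.minimum(300.0 + 1.0e3 * t, 1400.0)   # K

    def dVdt(t, V):
        return [A * np.exp(-E / (R * particle_T(t))) * (V_inf - V[0])]

    sol = solve_ivp(dVdt, (0.0, 2.0), [0.0], max_step=1e-3)
    print(f"volatiles released after 2 s: {sol.y[0, -1]:.3f} kg/kg")
    ```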

  17. Comparison of actual and seismologically inferred stress drops in dynamic models of microseismicity

    NASA Astrophysics Data System (ADS)

    Lin, Y. Y.; Lapusta, N.

    2017-12-01

    Estimating source parameters for small earthquakes is commonly based on either Brune or Madariaga source models. These models assume circular rupture that starts from the center of a fault and spreads axisymmetrically with a constant rupture speed. The resulting stress drops are moment-independent, with large scatter. However, more complex source behaviors are commonly discovered by finite-fault inversions for both large and small earthquakes, including directivity, heterogeneous slip, and non-circular shapes. Recent studies (Noda, Lapusta, and Kanamori, GJI, 2013; Kaneko and Shearer, GJI, 2014; JGR, 2015) have shown that slip heterogeneity and directivity can result in large discrepancies between the actual and estimated stress drops. We explore the relation between the actual and seismologically estimated stress drops for several types of numerically produced microearthquakes. For example, an asperity-type circular fault patch with increasing normal stress towards the middle of the patch, surrounded by a creeping region, is a potentially common microseismicity source. In such models, a number of events rupture the portion of the patch near its circumference, producing ring-like ruptures, before a patch-spanning event occurs. We calculate the far-field synthetic waveforms for our simulated sources and estimate their spectral properties. The distribution of corner frequencies over the focal sphere is markedly different for the ring-like sources compared to the Madariaga model. Furthermore, most waveforms for the ring-like sources are better fitted by a high-frequency fall-off rate different from the commonly assumed value of 2 (from the so-called omega-squared model), with the average value over the focal sphere being 1.5. The application of Brune- or Madariaga-type analysis to these sources results in stress-drop estimates that differ from the actual stress drops by a factor of up to 125 in the models we considered. We will report on our current studies of other types of seismic sources, such as repeating earthquakes and foreshock-like events, and whether the potentially realistic and common sources different from the standard Brune and Madariaga models can be identified from their focal spectral signatures and studied using a more tailored seismological analysis.
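
    Estimating a corner frequency and high-frequency fall-off rate from a far-field spectrum can be sketched as a simple least-squares fit of a generalized Brune-type model. The synthetic spectrum below, with fall-off rate 1.5 as found for the ring-like sources, is invented for the demo; the function names and starting values are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def source_spectrum(f, omega0, fc, n):
        """Generalized Brune-type spectrum: flat level omega0, corner fc,
        high-frequency fall-off rate n (n = 2 is the omega-squared model)."""
        return omega0 / (1.0 + (f / fc)**n)

    def fit_spectrum(f, amp):
        """Fit log-amplitudes so the corner region is weighted sensibly."""
        logmodel = lambda f, lw0, fc, n: np.log(source_spectrum(f, np.exp(lw0), fc, n))
        p, _ = curve_fit(logmodel, f, np.log(amp), p0=[np.log(amp[0]), 5.0, 2.0])
        return np.exp(p[0]), p[1], p[2]        # omega0, fc, n

    # Demo on a synthetic spectrum with n = 1.5 (ring-like source behavior)
    f = np.logspace(-1, 2, 200)
    rng = np.random.default_rng(3)
    amp = source_spectrum(f, 1.0, 8.0, 1.5) * np.exp(0.05 * rng.normal(size=f.size))
    w0, fc, n = fit_spectrum(f, amp)
    print(f"omega0={w0:.2f}  fc={fc:.2f} Hz  n={n:.2f}")
    ```

    A stress-drop estimate then follows from fc and the seismic moment via the usual Δσ ∝ M0 fc³ scaling, which is exactly where the model dependence discussed above enters.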

  18. An investigation of the role of current and future remote sensing data systems in numerical meteorology

    NASA Technical Reports Server (NTRS)

    Diak, George R.; Smith, William L.

    1993-01-01

    The goals of this research endeavor have been to develop a flexible and relatively complete framework for the investigation of current and future satellite data sources in numerical meteorology. In order to realistically model how satellite information might be used for these purposes, it is necessary that Observing System Simulation Experiments (OSSEs) be as complete as possible. It is therefore desirable that these experiments simulate in entirety the sequence of steps involved in bringing satellite information from the radiance level through product retrieval to a realistic analysis and forecast sequence. In this project we have worked to make this sequence realistic by synthesizing raw satellite data from surrogate atmospheres, deriving satellite products from these data and subsequently producing analyses and forecasts using the retrieved products. The accomplishments made in 1991 are presented. The emphasis was on examining atmospheric soundings and microphysical products which we expect to produce with the launch of the Advanced Microwave Sounding Unit (AMSU), slated for flight in mid 1994.

  19. Realistic Affective Forecasting: The Role of Personality

    PubMed Central

    Hoerger, Michael; Chapman, Ben; Duberstein, Paul

    2016-01-01

    Affective forecasting often drives decision making. Although affective forecasting research has often focused on identifying sources of error at the event level, the present investigation draws upon the ‘realistic paradigm’ in seeking to identify factors that similarly influence predicted and actual emotions, explaining their concordance across individuals. We hypothesized that the personality traits neuroticism and extraversion would account for variation in both predicted and actual emotional reactions to a wide array of stimuli and events (football games, an election, Valentine’s Day, birthdays, happy/sad film clips, and an intrusive interview). As hypothesized, individuals who were more introverted and neurotic anticipated, correctly, that they would experience relatively more unpleasant emotional reactions, and those who were more extraverted and less neurotic anticipated, correctly, that they would experience relatively more pleasant emotional reactions. Personality explained 30% of the concordance between predicted and actual emotional reactions. Findings suggest three purported personality processes implicated in affective forecasting, highlight the importance of individual-differences research in this domain, and call for more research on realistic affective forecasts. PMID:26212463

  1. Nonlinearly driven harmonics of Alfvén modes

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Breizman, B. N.; Zheng, L. J.; Berk, H. L.

    2014-01-01

    In order to study the leading order nonlinear magneto-hydrodynamic (MHD) harmonic response of a plasma in realistic geometry, the AEGIS code has been generalized to account for inhomogeneous source terms. These source terms are expressed in terms of the quadratic corrections that depend on the functional form of a linear MHD eigenmode, such as the Toroidal Alfvén Eigenmode. The solution of the resultant equation gives the second order harmonic response. Preliminary results are presented here.

  2. Monte Carlo simulation of ferroelectric domain growth

    NASA Astrophysics Data System (ADS)

    Li, B. L.; Liu, X. P.; Fang, F.; Zhu, J. L.; Liu, J.-M.

    2006-01-01

    The kinetics of two-dimensional isothermal domain growth in a quenched ferroelectric system is investigated using Monte Carlo simulation based on a realistic Ginzburg-Landau ferroelectric model with cubic-tetragonal (square-rectangle) phase transitions. The evolution of the domain pattern and domain size with annealing time is simulated, and the stability of trijunctions and tetrajunctions of domain walls is analyzed. It is found that in this more realistic model with strong dipole alignment anisotropy and long-range Coulomb interaction, the power law for normal domain growth still applies. Towards the late stage of domain growth, both the average domain area and the reciprocal density of domain wall junctions increase linearly with time, and one-parameter dynamic scaling of the domain growth is demonstrated.

  3. Realistic Solar Surface Convection Simulations

    NASA Technical Reports Server (NTRS)

    Stein, Robert F.; Nordlund, Ake

    2000-01-01

    We perform essentially parameter-free simulations with realistic physics of convection near the solar surface. We summarize the physics that is included and compare the simulation results with observations. Excellent agreement is obtained for the depth of the convection zone, the p-mode frequencies, the p-mode excitation rate, the distribution of the emergent continuum intensity, and the profiles of weak photospheric lines. We describe how solar convection is nonlocal. It is driven from a thin surface thermal boundary layer where radiative cooling produces low entropy gas which forms the cores of the downdrafts in which most of the buoyancy work occurs. We show that turbulence and vorticity are mostly confined to the intergranular lanes and underlying downdrafts. Finally, we illustrate our current work on magneto-convection.

  4. Realistic facial animation generation based on facial expression mapping

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Garrod, Oliver; Jack, Rachael; Schyns, Philippe

    2014-01-01

    Facial expressions reflect internal emotional states of a character or responses to social communications. Though much effort has been devoted to generating realistic facial expressions, it remains a challenging topic due to human sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation which reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames based on FACS to the conformed target face. Dynamic parameters derived using a psychophysics method are then integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.

  5. A statistical approach to quasi-extinction forecasting.

    PubMed

    Holmes, Elizabeth Eli; Sabo, John L; Viscido, Steven Vincent; Fagan, William Fredric

    2007-12-01

    Forecasting population decline to a certain critical threshold (the quasi-extinction risk) is one of the central objectives of population viability analysis (PVA), and such predictions figure prominently in the decisions of major conservation organizations. In this paper, we argue that accurate forecasting of a population's quasi-extinction risk does not necessarily require knowledge of the underlying biological mechanisms. Because of the stochastic and multiplicative nature of population growth, the ensemble behaviour of population trajectories converges to common statistical forms across a wide variety of stochastic population processes. This paper provides a theoretical basis for this argument. We show that the quasi-extinction surfaces of a variety of complex stochastic population processes (including age-structured, density-dependent and spatially structured populations) can be modelled by a simple stochastic approximation: the stochastic exponential growth process overlaid with Gaussian errors. Using simulated and real data, we show that this model can be estimated with 20-30 years of data and can provide relatively unbiased quasi-extinction risk estimates with confidence intervals considerably smaller than (0,1). This was found to be true even for simulated data derived from some of the noisiest population processes (density-dependent feedback, species interactions and strong age-structure cycling). A key advantage of statistical models is that their parameters and the uncertainty of those parameters can be estimated from time series data using standard statistical methods. In contrast, for most species of conservation concern, biologically realistic models must often be specified rather than estimated because of the limited data available for all the various parameters. Biologically realistic models will always have a prominent place in PVA for evaluating specific management options which affect a single segment of a population, a single demographic rate, or different geographic areas. However, for forecasting quasi-extinction risk, statistical models that are based on the convergent statistical properties of population processes offer many advantages over biologically realistic models.
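
    The stochastic-exponential-growth approximation referred to above has a convenient closed form for the quasi-extinction probability. A minimal sketch, assuming annual censuses and using the standard first-passage formula for Brownian motion with drift (as in diffusion-approximation PVA); the demo numbers are invented.

    ```python
    import numpy as np
    from scipy.stats import norm

    def fit_diffusion(log_counts):
        """Estimate drift mu and variance sigma^2 of the stochastic
        exponential growth model from annual log-census data."""
        d = np.diff(log_counts)
        return d.mean(), d.var(ddof=1)

    def quasi_extinction_prob(n0, n_crit, mu, sigma2, T):
        """Probability that the population first falls below n_crit within
        T years (first-passage formula for Brownian motion with drift)."""
        d = np.log(n0 / n_crit)               # log-distance to threshold
        s = np.sqrt(sigma2 * T)
        return (norm.cdf((-d - mu * T) / s)
                + np.exp(-2.0 * mu * d / sigma2) * norm.cdf((-d + mu * T) / s))

    # Demo: 30 years of simulated censuses from a slowly declining population
    rng = np.random.default_rng(7)
    log_n = np.log(1000.0) + np.cumsum(-0.02 + 0.2 * rng.normal(size=30))
    log_n = np.concatenate(([np.log(1000.0)], log_n))
    mu, s2 = fit_diffusion(log_n)
    print(f"mu = {mu:+.3f}, sigma^2 = {s2:.3f}, "
          f"P(N < 100 within 50 yr) = {quasi_extinction_prob(1000, 100, mu, s2, 50):.2f}")
    ```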

  6. EGG: Empirical Galaxy Generator

    NASA Astrophysics Data System (ADS)

    Schreiber, C.; Elbaz, D.; Pannella, M.; Merlin, E.; Castellano, M.; Fontana, A.; Bourne, N.; Boutsia, K.; Cullen, F.; Dunlop, J.; Ferguson, H. C.; Michałowski, M. J.; Okumura, K.; Santini, P.; Shu, X. W.; Wang, T.; White, C.

    2018-04-01

    The Empirical Galaxy Generator (EGG) generates fake galaxy catalogs and images with realistic positions, morphologies and fluxes from the far-ultraviolet to the far-infrared. The catalogs are generated by egg-gencat and stored in binary FITS tables (column oriented). Another program, egg-2skymaker, is used to convert the generated catalog into ASCII tables suitable for ingestion by SkyMaker (ascl:1010.066) to produce realistic high resolution images (e.g., Hubble-like), while egg-gennoise and egg-genmap can be used to generate the low resolution images (e.g., Herschel-like). These tools can be used to test source extraction codes, or to evaluate the reliability of any map-based science (stacking, dropout identification, etc.).

  7. Environments for online maritime simulators with cloud computing capabilities

    NASA Astrophysics Data System (ADS)

    Raicu, Gabriel; Raicu, Alexandra

    2016-12-01

    This paper presents cloud computing environments, network principles and methods for graphical development in realistic naval simulation, naval robotics and virtual interactions. The aim of this approach is to achieve good simulation quality in large networked environments using open-source solutions designed for educational purposes. Realistic rendering of maritime environments requires near real-time frameworks with enhanced computing capabilities during distance interactions. E-Navigation concepts coupled with the latest achievements in virtual and augmented reality will enhance the overall experience, leading to new developments and innovations. We must handle multiprocessing using advanced technologies and distributed applications, with remote ship scenarios and automation of ship operations.

  8. Operational VGOS Scheduling

    NASA Astrophysics Data System (ADS)

    Searle, Anthony; Petrachenko, Bill

    2016-12-01

    The VLBI Global Observing System (VGOS) has been designed to take advantage of advances in data recording speeds and storage capacity, allowing for smaller and faster antennas, wider bandwidths, and shorter observation durations. Here, schedules for a "realistic" VGOS network, frequency sequences, and expanded source lists are presented using a new source-based scheduling algorithm. The VGOS aim for continuous observations presents new operational challenges. As the source-based strategy is independent of the observing network, there are operational advantages which allow for more flexible scheduling of continuous VLBI observations. Using VieVS, simulations of several schedules are presented and compared with previous VGOS studies.

  9. Quasineutral plasma expansion into infinite vacuum as a model for parallel ELM transport

    NASA Astrophysics Data System (ADS)

    Moulton, D.; Ghendrih, Ph; Fundamenski, W.; Manfredi, G.; Tskhakaya, D.

    2013-08-01

    An analytic solution for the expansion of a plasma into vacuum is assessed for its relevance to the parallel transport of edge localized mode (ELM) filaments along field lines. This solution solves the 1D1V Vlasov-Poisson equations for the adiabatic (instantaneous source), collisionless expansion of a Gaussian plasma bunch into an infinite space in the quasineutral limit. The quasineutral assumption is found to hold as long as λD0/σ0 ≲ 0.01 (where λD0 is the initial Debye length at peak density and σ0 is the parallel length of the Gaussian filament), a condition that is physically realistic. The inclusion of a boundary at x = L and consequent formation of a target sheath is found to have a negligible effect when L/σ0 ≳ 5, a condition that is physically plausible. Under the same condition, the target flux densities predicted by the analytic solution are well approximated by the ‘free-streaming’ equations used in previous experimental studies, strengthening the notion that these simple equations are physically reasonable. Importantly, the analytic solution predicts a zero heat flux density so that a fluid approach to the problem can be used equally well, at least when the source is instantaneous. It is found that, even for JET-like pedestal parameters, collisions can affect the expansion dynamics via electron temperature isotropization, although this is probably a secondary effect. Finally, the effect of a finite duration, τsrc, for the plasma source is investigated. As is found for an instantaneous source, when L/σ0 ≳ 5 the presence of a target sheath has a negligible effect, at least up to the explored range of τsrc = L/cs (where cs is the sound speed at the initial temperature).

  10. Dynamics of nonreactive solute transport in the permafrost environment

    NASA Astrophysics Data System (ADS)

    Svyatskiy, D.; Coon, E. T.; Moulton, J. D.

    2017-12-01

    As part of the DOE Office of Science Next Generation Ecosystem Experiment, NGEE-Arctic, researchers are developing process-rich models to understand and predict the evolution of water sources and hydrologic flow pathways resulting from degrading permafrost. The sources and interaction of surface and subsurface water and flow paths are complex in space and time due to strong interplay between heterogeneous subsurface parameters, the seasonal to decadal evolution of the flow domain, climate-driven melting and release of permafrost ice as a liquid water source, evolving surface topography and highly variable meteorological data. In this study, we seek to characterize the magnitude of vertical and lateral subsurface flows in a cold, wet tundra, polygonal landscape characteristic of the Barrow Peninsula, AK. To better understand the factors controlling water flux partitioning in these low-gradient landscapes, NGEE researchers developed and are applying the Advanced Terrestrial Simulator (ATS), which fully couples surface and subsurface flow and energy processes, snow distribution and atmospheric forcing. Here we demonstrate the integration of a new solute transport model within the ATS, which enables the interpretation of applied and natural tracer experiments and observations aimed at quantifying water sources and flux partitioning. We examine the role of ice wedge polygon structure, freeze-thaw processes and soil properties on the seasonal transport of water within and through polygon features, and compare results to tracer experiments on 2D low-centered and high-centered transects corresponding to artificial as well as realistic topographical data from sites in polygonal tundra. These simulations demonstrate significant differences in flow patterns between permafrost and non-permafrost environments due to active-layer freeze-thaw processes.

  11. Does the finite size of the proto-neutron star preclude supernova neutrino flavor scintillation due to turbulence?

    DOE PAGES

    Kneller, James P.; Mauney, Alex W.

    2013-08-23

    Here, the transition probabilities describing the evolution of a neutrino with a given energy along some ray through a turbulent supernova profile are random variates unique to each ray. If the proto-neutron-star source of the neutrinos were a point, then one might expect the evolution of the turbulence would cause the flavor composition of the neutrinos to vary in time, i.e., the flavor would scintillate. But in reality the proto-neutron star is not a point source: it has a size of order ~10 km, so the neutrinos emitted from different points at the source will each have seen different turbulence. The finite source size will reduce the correlation of the flavor transition probabilities along different trajectories and reduce the magnitude of the flavor scintillation. To determine whether the finite size of the proto-neutron star will preclude flavor scintillation, we calculate the correlation of the neutrino flavor transition probabilities through turbulent supernova profiles as a function of the separation δx between the emission points. The correlation will depend upon the power spectrum used for the turbulence, and we consider two cases: when the power spectrum is isotropic, and the more realistic case of a power spectrum which is anisotropic on large scales and isotropic on small scales. Although it is dependent on a number of uncalibrated parameters, we show the supernova neutrino source is not of sufficient size to significantly blur flavor scintillation in all mixing channels when using an isotropic spectrum, and this same result holds when using an anisotropic spectrum, except when we greatly reduce the similarity of the turbulence along parallel trajectories separated by ~10 km or less.

  12. Geodetic Measurements and Numerical Modeling of the Deformation Cycle for Okmok Volcano, Alaska: 1993-2008

    NASA Astrophysics Data System (ADS)

    Ohlendorf, S. J.; Feigl, K.; Thurber, C. H.; Lu, Z.; Masterlark, T.

    2011-12-01

    Okmok Volcano is an active caldera located on Umnak Island in the Aleutian Island arc. Okmok, having recently erupted in 1997 and 2008, is well suited for multidisciplinary studies of magma migration and storage because it hosts a good seismic network and has been the subject of synthetic aperture radar (SAR) images that span the recent eruption cycle. Interferometric SAR can characterize surface deformation in space and time, while data from the seismic network provides important information about the interior processes and structure of the volcano. We conduct a complete time series analysis of deformation of Okmok with images collected by the ERS and Envisat satellites on more than 100 distinct epochs between 1993 and 2008. We look for changes in inter-eruption inflation rates, which may indicate inelastic rheologic effects. For the time series analysis, we analyze the gradient of phase directly, without unwrapping, using the General Inversion of Phase Technique (GIPhT) [Feigl and Thurber, 2009]. This approach accounts for orbital and atmospheric effects and provides realistic estimates of the uncertainties of the model parameters. We consider several models for the source, including the prolate spheroid model and the Mogi model, to explain the observed deformation. Using a medium that is a homogeneous half space, we estimate the source depth to be centered at about 4 km below sea level, consistent with the findings of Masterlark et al. [2010]. As in several other geodetic studies, we find the source to be approximately centered beneath the caldera. To account for rheologic complexity, we next apply the Finite Element Method to simulate a pressurized cavity embedded in a medium with material properties derived from body wave seismic tomography. This approach allows us to address the problem of unreasonably large pressure values implied by a Mogi source with a radius of about 1 km by experimenting with larger sources. We also compare the time dependence of the source to published results that used GPS data.
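
    For readers unfamiliar with the Mogi model invoked above, the sketch below evaluates one common form of its forward prediction: the vertical surface displacement of a point pressure source in a homogeneous elastic half-space, u_z(r) = (1−ν)·ΔV·d / (π·(r² + d²)^(3/2)). The depth and volume change are illustrative, not the study's estimates.

        import numpy as np

        def mogi_uz(r, depth, dV, nu=0.25):
            """Vertical surface displacement [m] of a Mogi point source.
            r: radial distance [m]; depth: source depth [m]; dV: volume change [m^3]."""
            return (1 - nu) * dV * depth / (np.pi * (r**2 + depth**2) ** 1.5)

        r = np.linspace(0.0, 15e3, 4)            # radial distances from the source axis [m]
        print(mogi_uz(r, depth=4e3, dV=1e7))     # illustrative 0.01 km^3 inflation episode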

  13. Magnetic resonance fingerprinting based on realistic vasculature in mice.

    PubMed

    Pouliot, Philippe; Gagnon, Louis; Lam, Tina; Avti, Pramod K; Bowen, Chris; Desjardins, Michèle; Kakkar, Ashok K; Thorin, Eric; Sakadzic, Sava; Boas, David A; Lesage, Frédéric

    2017-04-01

    Magnetic resonance fingerprinting (MRF) was recently proposed as a novel strategy for MR data acquisition and analysis. A variant of MRF called vascular MRF (vMRF) followed, which extracts maps of three parameters of physiological importance: cerebral oxygen saturation (SatO2), mean vessel radius and cerebral blood volume (CBV). However, this estimation was based on idealized 2-dimensional simulations of vascular networks using random cylinders and the empirical Bloch equations convolved with a diffusion kernel. Here we focus on studying the vascular MR fingerprint using real mouse angiograms and physiological values as the substrate for the MR simulations. The MR signal is calculated ab initio with a Monte Carlo approximation, by tracking the accumulated phase from a large number of protons diffusing within the angiogram. We first study the identifiability of parameters in simulations, showing that parameters are fully estimable at realistically high signal-to-noise ratios (SNR) when the same angiogram is used for dictionary generation and parameter estimation, but that large biases in the estimates persist when the angiograms are different. Despite these biases, simulations show that differences in parameters remain estimable. We then applied this methodology to data acquired using the GESFIDE sequence with SPIONs injected into 9 young wild-type and 9 old atherosclerotic mice. Both the pre-injection signal and the ratio of post-to-pre-injection signals were modeled, using 5-dimensional dictionaries. The vMRF methodology extracted significant differences in SatO2, mean vessel radius and CBV between the two groups, consistent across brain regions and dictionaries. Further validation work is essential before vMRF can gain wider application. Copyright © 2017 Elsevier Inc. All rights reserved.
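
    The ab initio Monte Carlo signal calculation described above can be sketched schematically: spins random-walk through a frozen field-perturbation map, accumulate phase φ += γ·ΔB·dt, and the signal is the magnitude of the ensemble average of exp(iφ). The toy analytic field below merely stands in for the vascular susceptibility map derived from an angiogram; all numbers are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        GAMMA = 2.675e8                    # proton gyromagnetic ratio [rad/s/T]
        D = 1e-9                           # water diffusion coefficient [m^2/s]
        dt, nstep, nspin = 2e-4, 250, 20000

        def dB(pos):                       # toy field perturbation [T]; stand-in for the vessel map
            return 1e-8 * np.sin(pos[:, 0] / 5e-6) * np.cos(pos[:, 1] / 7e-6)

        pos = rng.uniform(0.0, 1e-4, size=(nspin, 3))
        phase = np.zeros(nspin)
        for _ in range(nstep):
            pos += rng.normal(0.0, np.sqrt(2 * D * dt), size=pos.shape)   # diffusion step
            phase += GAMMA * dB(pos) * dt                                 # accumulated phase
        print(f"|S| after {nstep * dt * 1e3:.0f} ms: {abs(np.mean(np.exp(1j * phase))):.4f}")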

  14. 10 CFR 960.3-1-5 - Basis for site evaluations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... comparative evaluations of sites in terms of the capabilities of the natural barriers for waste isolation and.... Comparative site evaluations shall place primary importance on the natural barriers of the site. In such... only to the extent necessary to obtain realistic source terms for comparative site evaluations based on...

  15. 10 CFR 960.3-1-5 - Basis for site evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... comparative evaluations of sites in terms of the capabilities of the natural barriers for waste isolation and.... Comparative site evaluations shall place primary importance on the natural barriers of the site. In such... only to the extent necessary to obtain realistic source terms for comparative site evaluations based on...

  16. 10 CFR 960.3-1-5 - Basis for site evaluations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... comparative evaluations of sites in terms of the capabilities of the natural barriers for waste isolation and.... Comparative site evaluations shall place primary importance on the natural barriers of the site. In such... only to the extent necessary to obtain realistic source terms for comparative site evaluations based on...

  17. 10 CFR 960.3-1-5 - Basis for site evaluations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... comparative evaluations of sites in terms of the capabilities of the natural barriers for waste isolation and.... Comparative site evaluations shall place primary importance on the natural barriers of the site. In such... only to the extent necessary to obtain realistic source terms for comparative site evaluations based on...

  18. 10 CFR 960.3-1-5 - Basis for site evaluations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... comparative evaluations of sites in terms of the capabilities of the natural barriers for waste isolation and.... Comparative site evaluations shall place primary importance on the natural barriers of the site. In such... only to the extent necessary to obtain realistic source terms for comparative site evaluations based on...

  19. Network Centric Warfare: A Realistic Defense Alternative for Smaller Nations?

    DTIC Science & Technology

    2004-06-01

    organic information sources. The degree to which force entities are networked will determine the quality of information that is available to various...control processes will determine the extent that information is shared, as well as the nature and quality of the interactions that occur between and...

  20. Sex Differences in Cardiovascular and Subjective Stress Reactions: Prospective Evidence in a Realistic Military Setting

    DTIC Science & Technology

    2014-01-01

    procedures were held constant). After the period of quiet rest, the finger pulse oximeter (MedSource International, Mound, MN) was applied to the left...temperature were then recorded with pulse oximeter (Medline Industries, Inc., Mundelein, IL). Following standard guide- lines (Pickering et al., 2005

  1. SIMULATIONS OF AEROSOLS AND PHOTOCHEMICAL SPECIES WITH THE CMAQ PLUME-IN-GRID MODELING SYSTEM

    EPA Science Inventory

    A plume-in-grid (PinG) method has been an integral component of the CMAQ modeling system and has been designed in order to realistically simulate the relevant processes impacting pollutant concentrations in plumes released from major point sources. In particular, considerable di...

  2. Environmentally Realistic Mixtures of the Five Regulated Haloacetic Acids Exhibit Concentration-Dependent Departures from Dose Additivity

    EPA Science Inventory

    Disinfection of water decreases waterborne disease. Disinfection byproducts (DBPs) are formed by the reaction of oxidizing disinfectants with inorganic and organic materials in the source water. The U.S. EPA regulates five haloacetic acid (HAA) DBPs as a mixture. The objective ...

  3. Electron percolation in realistic models of carbon nanotube networks

    NASA Astrophysics Data System (ADS)

    Simoneau, Louis-Philippe; Villeneuve, Jérémie; Rochefort, Alain

    2015-09-01

    The influence of penetrable and curved carbon nanotubes (CNT) on charge percolation in three-dimensional disordered CNT networks has been studied with Monte Carlo simulations. By considering carbon nanotubes as solid objects whose electron-cloud overlap can be controlled, we observed that the structural characteristics of networks containing lower-aspect-ratio CNT are highly sensitive to the degree of penetration between crossed nanotubes. Following our efficient strategy of displacing CNT to different positions to create more realistic statistical models, we conclude that the connectivity between objects increases with the hard-core/soft-shell radii ratio. In contrast, the presence of curved CNT in the random networks leads to an increasing percolation threshold and to a decreasing electrical conductivity at saturation. The waviness of CNT decreases the effective distance between the nanotube extremities, hence reducing their connectivity and degrading their electrical properties. We present the results of our simulations in terms of the thickness of the CNT network, from which simple structural parameters such as the volume fraction or the carbon nanotube density can be accurately evaluated with our more realistic models.
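
    A minimal version of the stick-percolation test underlying such simulations is sketched below: straight tubes are declared connected when the closest distance between their axes falls below a contact radius (a stand-in for the hard-core/soft-shell criterion), and clusters are merged with union-find. Curvature, the spanning test and the conductivity calculation are omitted, and all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        def seg_dist(p1, q1, p2, q2):
            # Closest distance between two 3D segments (Ericson, Real-Time Collision Detection)
            d1, d2, r = q1 - p1, q2 - p2, p1 - p2
            a, e, f = d1 @ d1, d2 @ d2, d2 @ r
            c, b = d1 @ r, d1 @ d2
            denom = a * e - b * b
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
            elif t > 1.0:
                t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
            return np.linalg.norm(p1 + s * d1 - (p2 + t * d2))

        # Hypothetical network: N straight tubes of length L in a unit box
        N, L, r_contact = 300, 0.25, 0.01
        P = rng.uniform(0.0, 1.0, (N, 3))
        U = rng.normal(size=(N, 3)); U /= np.linalg.norm(U, axis=1, keepdims=True)
        Q = P + L * U

        parent = list(range(N))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]; i = parent[i]
            return i
        for i in range(N):
            for j in range(i + 1, N):
                if seg_dist(P[i], Q[i], P[j], Q[j]) < r_contact:
                    parent[find(i)] = find(j)             # union the two clusters
        largest = np.bincount([find(i) for i in range(N)]).max()
        print(f"largest cluster: {largest}/{N} tubes")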

  4. Convective dynamics and chemical disequilibrium in the atmospheres of substellar objects

    NASA Astrophysics Data System (ADS)

    Bordwell, Baylee; Brown, Benjamin P.; Oishi, Jeffrey S.

    2017-11-01

    The thousands of substellar objects now known provide a unique opportunity to test our understanding of atmospheric dynamics across a range of environments. The chemical timescales of certain species transition from being much shorter than the dynamical timescales to being much longer than them at a point in the atmosphere known as the quench point. This transition leads to a state of dynamical disequilibrium, the effects of which can be used to probe the atmospheric dynamics of these objects. Unfortunately, due to computational constraints, models that inform the interpretation of these observations are run at dynamical parameters which are far from realistic values. In this study, we explore the behavior of a disequilibrium chemical process with increasingly realistic planetary conditions, to quantify the effects of the approximations used in current models. We simulate convection in 2-D, plane-parallel, polytropically-stratified atmospheres, into which we add reactive passive tracers that explore disequilibrium behavior. We find that as we increase the Rayleigh number, and thus achieve more realistic planetary conditions, the behavior of these tracers does not conform to the classical predictions of disequilibrium chemistry.

  5. An Investigation of the Impact of Guessing on Coefficient α and Reliability

    PubMed Central

    2014-01-01

    Guessing is known to influence the test reliability of multiple-choice tests. Although there are many studies that have examined the impact of guessing, they used rather restrictive assumptions (e.g., parallel test assumptions, homogeneous inter-item correlations, homogeneous item difficulty, and homogeneous guessing levels across items) to evaluate the relation between guessing and test reliability. Based on the item response theory (IRT) framework, this study investigated the extent of the impact of guessing on reliability under more realistic conditions where item difficulty, item discrimination, and guessing levels actually vary across items with three different test lengths (TL). By accommodating multiple item characteristics simultaneously, this study also focused on examining interaction effects between guessing and other variables entered in the simulation to be more realistic. The simulation of the more realistic conditions and calculations of reliability and classical test theory (CTT) item statistics were facilitated by expressing CTT item statistics, coefficient α, and reliability in terms of IRT model parameters. In addition to the general negative impact of guessing on reliability, results showed interaction effects between TL and guessing and between guessing and test difficulty.
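
    The simulation design described above can be illustrated with a short sketch: responses are drawn from a three-parameter logistic (3PL) IRT model, P(θ) = c + (1−c)/(1 + exp(−a(θ−b))), with item-varying difficulty, discrimination and guessing, and coefficient α is computed from the classical formula α = k/(k−1)·(1 − Σσᵢ²/σ_X²). The item-parameter distributions are assumptions for illustration, not those of the study.

        import numpy as np

        rng = np.random.default_rng(2)
        n_person, n_item = 5000, 40
        theta = rng.normal(0, 1, n_person)        # abilities
        a = rng.lognormal(0, 0.3, n_item)         # discriminations (vary across items)
        b = rng.normal(0, 1, n_item)              # difficulties
        c = rng.uniform(0.0, 0.3, n_item)         # guessing levels

        def sim_alpha(c):
            p = c + (1 - c) / (1 + np.exp(-a * (theta[:, None] - b)))   # 3PL probabilities
            x = (rng.uniform(size=p.shape) < p).astype(float)           # scored 0/1 responses
            k = x.shape[1]
            return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

        print(f"alpha with guessing:    {sim_alpha(c):.3f}")
        print(f"alpha without guessing: {sim_alpha(np.zeros(n_item)):.3f}")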

  6. Nucleon decay in non-minimal supersymmetric SO(10)

    NASA Astrophysics Data System (ADS)

    Macpherson, Alick L.

    1996-02-01

    Evaluation of nucleon decay modes and branching ratios in a non-minimal supersymmetric SO(10) grand unified theory is presented. The non-minimal GUT considered is the supersymmetrised version of the 'realistic' SO(10) model originally proposed by Harvey, Reiss and Ramond, which is realistic in that it gives acceptable charged fermion and neutrino masses within the context of a phenomenological fit to the low-energy standard model inputs. Despite a complicated Higgs sector, the SO(10) 10 Higgs superfield mass insertion is found to be the sole contribution to the tree-level F-term governing nucleon decay. The resulting dimension-5 operators that mediate nucleon decay give branching ratio predictions parameterised by a single parameter, the ratio of the Yukawa couplings of the 10 to the fermion generations. For parameter values corresponding to a lack of dominance of the third family self-coupling, the dominant nucleon decay modes are p → K⁺ + ν̄μ and n → K⁰ + ν̄μ, as expected. Further, the charged muon decay modes are enhanced by two orders of magnitude over the standard minimal SUSY SU(5) predictions, thus predicting a distinct spectrum of 'visible' modes. These charged muon decay modes, along with p → π⁺ + ν̄μ and n → π⁰ + ν̄μ, which are moderately enhanced over the SUSY SU(5) prediction, suggest a distinguishing fingerprint of this particular GUT model, and if nucleon decay is observed at Super-KAMIOKANDE the predicted branching ratio spectrum can be used to determine the validity of this 'realistic' SO(10) SUSY GUT model.

  7. Comparison of Pore-scale CO2-water-glass System Wettability and Conventional Wettability Measurement on a Flat Plate for Geological CO2 Sequestration

    NASA Astrophysics Data System (ADS)

    Jafari, M.; Cao, S. C.; Jung, J.

    2017-12-01

    Geological CO2 sequestration (GCS) has recently been introduced as an effective method to mitigate carbon dioxide emissions. CO2 collected from major producer sources is injected into underground formation layers to be stored for thousands to millions of years. A safe and economical storage project depends on insight into trapping mechanisms, fluid dynamics, and fluid-rock interactions. Among the forces governing fluid mobility and distribution under GCS conditions, capillary pressure is of particular importance, and wettability (measured by the contact angle (CA)) is the most controversial parameter affecting it. To explore the sources of discrepancy in the literature for CA measurement, we conducted a series of conventional captive-bubble tests on glass plates under high-pressure conditions. By introducing a shape factor, we concluded that surface imperfection can distort the results of such tests. Since conventional methods of measuring the CA are affected by gravity and scale effects, we introduced a different technique to measure pore-scale CA inside a transparent glass microchip. Our method can account for pore sizes and capture static and dynamic CA during dewetting and imbibition. Glass plates show water-wet behavior (CA 30°-45°) in the conventional experiment, consistent with the literature. However, miniature bubbles inside the micromodel can exhibit weaker water-wet behavior (CA 55°-69°). In a more realistic pore-scale condition, the water-CO2 interface covers the whole width of a pore throat. Under this condition, the receding CA, which governs injectability and capillary breakthrough pressure, increases with decreasing pore size. On the other hand, the advancing CA, which is important for residual or capillary trapping, shows no correlation with throat size. The static CA measured in pores during dewetting is lower than the static CA on a flat plate, but it is much higher when measured during imbibition, implying weaker water-wet behavior. Pore-scale CA, which more realistically represents rock wettability, indicates weaker water-wet behavior than conventional measurement methods; this must be considered for the safety of geological storage.
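
    The link between the receding contact angle and the capillary breakthrough pressure mentioned above follows from the Young-Laplace relation for a cylindrical throat, Pc = 2γ·cosθ/r. The sketch below contrasts a conventional plate-derived angle with a weaker water-wet pore-scale angle; the interfacial tension and throat radius are illustrative assumptions.

        import numpy as np

        def breakthrough_pressure(gamma, theta_deg, r_throat):
            """Capillary entry pressure [Pa] of a cylindrical throat (Young-Laplace)."""
            return 2 * gamma * np.cos(np.radians(theta_deg)) / r_throat

        gamma = 0.028                       # water-CO2 interfacial tension [N/m], illustrative
        for theta in (35, 60):              # plate-derived vs pore-scale receding angle [deg]
            pc = breakthrough_pressure(gamma, theta, r_throat=1e-6)
            print(f"theta = {theta:2d} deg -> Pc = {pc / 1e3:.0f} kPa")

    The weaker water-wet pore-scale angle lowers the predicted entry pressure, which is exactly why the choice of measurement method matters for storage-safety estimates.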

  8. Pulsar Timing Array Based Search for Supermassive Black Hole Binaries in the Square Kilometer Array Era

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Mohanty, Soumya D.

    2017-04-01

    The advent of next generation radio telescope facilities, such as the Square Kilometer Array (SKA), will usher in an era where a pulsar timing array (PTA) based search for gravitational waves (GWs) will be able to use hundreds of well timed millisecond pulsars rather than the few dozens in existing PTAs. A realistic assessment of the performance of such an extremely large PTA must take into account the data analysis challenge posed by an exponential increase in the parameter space volume due to the large number of so-called pulsar phase parameters. We address this problem and present such an assessment for isolated supermassive black hole binary (SMBHB) searches using a SKA era PTA containing 10³ pulsars. We find that an all-sky search will be able to confidently detect nonevolving sources with a redshifted chirp mass of 10¹⁰ M⊙ out to a redshift of about 28 (corresponding to a rest-frame chirp mass of 3.4 × 10⁸ M⊙). We discuss the important implications that the large distance reach of a SKA era PTA has on GW observations from optically identified SMBHB candidates. If no SMBHB detections occur, a highly unlikely scenario in the light of our results, the sky-averaged upper limit on strain amplitude will be improved by about 3 orders of magnitude over existing limits.
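
    The quoted pair of masses is simply the cosmological redshifting of the chirp mass, M_rest = M_z/(1+z), as the snippet verifies:

        # Redshifted vs rest-frame chirp mass: M_z = (1 + z) * M_rest
        Mz, z = 1e10, 28.0
        print(f"rest-frame chirp mass ~ {Mz / (1 + z):.2e} Msun")   # ~3.4e8, as quoted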

  9. Using biophysical models to manage nitrogen pollution from agricultural sources: Utopic or realistic approach for non-scientist users? Case study of a drinking water catchment area in Lorraine, France.

    PubMed

    Bernard, Pierre-Yves; Benoît, Marc; Roger-Estrade, Jean; Plantureux, Sylvain

    2016-12-01

    The objectives of this comparison of two biophysical models of nitrogen losses were to evaluate first whether results were similar and second whether both were equally practical for use by non-scientist users. Results were obtained with the crop model STICS and the environmental model AGRIFLUX based on nitrogen loss simulations across a small groundwater catchment area (<1 km(2)) located in the Lorraine region in France. Both models simulate the influences of leaching and cropping systems on nitrogen losses in a relevant manner. The authors conclude that limiting the simulations to areas where soils with a greater risk of leaching cover a significant spatial extent would likely yield acceptable results because those soils have more predictable leaching of nitrogen. In addition, the choice of an environmental model such as AGRIFLUX, which requires fewer parameters and input variables, seems more user-friendly for agro-environmental assessment. The authors then discuss additional challenges for non-scientists, such as the lack of parameter optimization, which is essential for accurately assessing nitrogen fluxes and, indirectly, for preserving the diversity of uses of the simulated results. Despite current restrictions, with some improvement, biophysical models could become useful environmental assessment tools for non-scientists. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Analytical optical scattering in clouds

    NASA Technical Reports Server (NTRS)

    Phanord, Dieudonne D.

    1989-01-01

    An analytical optical model for scattering of light due to lightning by clouds of different geometry is being developed. The self-consistent approach and the equivalent medium concept of Twersky were used to treat the case corresponding to outside illumination. Thus, the resulting multiple scattering problem is transformed, with knowledge of the bulk parameters, into scattering by a single obstacle in isolation. Based on the size parameter of a typical water droplet as compared to the incident wavelength, the problem for the single scatterer equivalent to the distribution of cloud particles can be solved either by Mie or Rayleigh scattering theory. The supercomputing code of Wiscombe can be used immediately to produce results that can be compared to the Monte Carlo computer simulation for outside incidence. A fairly reasonable inverse approach using the solution of the outside illumination case was proposed to model analytically the situation for point sources located inside an optically thick cloud. Its mathematical details are still being investigated. When finished, it will provide scientists with an enhanced capability to study more realistic clouds. For testing purposes, the direct approach to the inside illumination of clouds by lightning is under consideration. Presently, an analytical solution for the cubic cloud will soon be obtained. For cylindrical or spherical clouds, preliminary results are needed for scattering by bounded obstacles above or below a penetrable surface interface.
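
    The Mie-versus-Rayleigh choice mentioned above is governed by the size parameter x = 2πr/λ: Rayleigh theory applies for x ≪ 1, Mie theory otherwise. A quick check with illustrative droplet sizes:

        import numpy as np

        def size_parameter(radius, wavelength):
            return 2 * np.pi * radius / wavelength

        lam = 0.5e-6                        # visible light [m]
        for r in (0.05e-6, 10e-6):          # fine haze droplet vs typical cloud droplet [m]
            x = size_parameter(r, lam)
            regime = "Rayleigh (x << 1)" if x < 0.1 else "Mie"
            print(f"r = {r * 1e6:5.2f} um -> x = {x:7.2f}  ({regime})")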

  11. Pulsar Timing Array Based Search for Supermassive Black Hole Binaries in the Square Kilometer Array Era.

    PubMed

    Wang, Yan; Mohanty, Soumya D

    2017-04-14

    The advent of next generation radio telescope facilities, such as the Square Kilometer Array (SKA), will usher in an era where a pulsar timing array (PTA) based search for gravitational waves (GWs) will be able to use hundreds of well timed millisecond pulsars rather than the few dozens in existing PTAs. A realistic assessment of the performance of such an extremely large PTA must take into account the data analysis challenge posed by an exponential increase in the parameter space volume due to the large number of so-called pulsar phase parameters. We address this problem and present such an assessment for isolated supermassive black hole binary (SMBHB) searches using a SKA era PTA containing 10³ pulsars. We find that an all-sky search will be able to confidently detect nonevolving sources with a redshifted chirp mass of 10¹⁰ M⊙ out to a redshift of about 28 (corresponding to a rest-frame chirp mass of 3.4 × 10⁸ M⊙). We discuss the important implications that the large distance reach of a SKA era PTA has on GW observations from optically identified SMBHB candidates. If no SMBHB detections occur, a highly unlikely scenario in the light of our results, the sky-averaged upper limit on strain amplitude will be improved by about 3 orders of magnitude over existing limits.

  12. Exploratory modeling of forest disturbance scenarios in central Oregon using computational experiments in GIS

    Treesearch

    Deana D. Pennington

    2007-01-01

    Exploratory modeling is an approach used when process and/or parameter uncertainties are such that modeling attempts at realistic prediction are not appropriate. Exploratory modeling makes use of computational experimentation to test how varying model scenarios drive model outcome. The goal of exploratory modeling is to better understand the system of interest through...

  13. NLC Luminosity as a Function of Beam Parameters

    NASA Astrophysics Data System (ADS)

    Nosochkov, Y.

    2002-06-01

    Realistic calculation of NLC luminosity has been performed using particle tracking in DIMAD and beam-beam simulations in GUINEA-PIG code for various values of beam emittance, energy and beta functions at the Interaction Point (IP). Results of the simulations are compared with analytic luminosity calculations. The optimum range of IP beta functions for high luminosity was identified.

  14. [Mathematical models and epidemiological analysis].

    PubMed

    Gerasimov, A N

    2010-01-01

    The limited use of mathematical simulation in epidemiology is due not only to the difficulty of monitoring the epidemic process and identifying its parameters but also to the application of oversimplified models. It is shown that realistic reproduction of actual morbidity dynamics requires taking into account heterogeneity and finiteness of the population and seasonal character of pathogen transmission mechanism.
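
    A minimal example of the structure argued for above is a discrete-time SIR model with seasonally forced transmission; population heterogeneity and demographic stochasticity (e.g., binomial draws for a finite population) would be layered on top. All rates below are illustrative assumptions.

        import numpy as np

        def seasonal_sir(beta0=0.4, eps=0.3, gamma=0.2, N=1e6, days=3 * 365):
            """Daily-step SIR with sinusoidal seasonal transmission; returns prevalence I(t)."""
            S, I = N - 10.0, 10.0
            out = []
            for t in range(days):
                beta = beta0 * (1 + eps * np.cos(2 * np.pi * t / 365))   # seasonal forcing
                new_inf = beta * S * I / N
                S, I = S - new_inf, I + new_inf - gamma * I
                out.append(I)
            return np.array(out)

        I = seasonal_sir()
        print(f"peak prevalence {I.max():.0f} at day {I.argmax()}")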

  15. An Eight-Parameter Function for Simulating Model Rocket Engine Thrust Curves

    ERIC Educational Resources Information Center

    Dooling, Thomas A.

    2007-01-01

    The toy model rocket is used extensively as an example of a realistic physical system. Teachers from grade school to the university level use them. Many teachers and students write computer programs to investigate rocket physics since the problem involves nonlinear functions related to air resistance and mass loss. This paper describes a nonlinear…
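
    A minimal sketch of such a student program is given below: a one-dimensional vertical flight integration with linear propellant mass loss and quadratic air drag. The motor and airframe numbers are hypothetical, and a fuller exercise would replace the constant average thrust with a fitted thrust-curve function such as the eight-parameter one discussed in the paper.

        # 1-D model rocket flight with mass loss and quadratic drag (illustrative values)
        T_avg, t_burn = 5.0, 1.8            # average thrust [N], burn time [s]
        m_dry, m_prop = 0.080, 0.012        # dry and propellant mass [kg]
        Cd, A, rho, g = 0.75, 1.0e-3, 1.225, 9.81

        dt, v, h, t = 1e-3, 0.0, 0.0, 0.0
        while v >= 0.0 or t < t_burn:       # integrate until apogee
            m = m_dry + m_prop * max(0.0, 1.0 - t / t_burn)   # linear mass loss
            thrust = T_avg if t < t_burn else 0.0
            drag = 0.5 * rho * Cd * A * v * abs(v)            # opposes motion
            a = (thrust - drag) / m - g
            v += a * dt; h += v * dt; t += dt
        print(f"apogee ~ {h:.0f} m at t = {t:.2f} s")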

  16. Evolution of a mini-scale biphasic dissolution model: Impact of model parameters on partitioning of dissolved API and modelling of in vivo-relevant kinetics.

    PubMed

    Locher, Kathrin; Borghardt, Jens M; Frank, Kerstin J; Kloft, Charlotte; Wagner, Karl G

    2016-08-01

    Biphasic dissolution models are proposed to have good predictive power for the in vivo absorption. The aim of this study was to improve our previously introduced mini-scale dissolution model to mimic in vivo situations more realistically and to increase the robustness of the experimental model. Six dissolved APIs (BCS II) were tested applying the improved mini-scale biphasic dissolution model (miBIdi-pH-II). The influence of experimental model parameters including various excipients, API concentrations, dual paddle and its rotation speed was investigated. The kinetics in the biphasic model was described applying a one- and four-compartment pharmacokinetic (PK) model. The improved biphasic dissolution model was robust related to differing APIs and excipient concentrations. The dual paddle guaranteed homogenous mixing in both phases; the optimal rotation speed was 25 and 75rpm for the aqueous and the octanol phase, respectively. A one-compartment PK model adequately characterised the data of fully dissolved APIs. A four-compartment PK model best quantified dissolution, precipitation, and partitioning also of undissolved amounts due to realistic pH profiles. The improved dissolution model is a powerful tool for investigating the interplay between dissolution, precipitation and partitioning of various poorly soluble APIs (BCS II). In vivo-relevant PK parameters could be estimated applying respective PK models. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Degrees of reality: airway anatomy of high-fidelity human patient simulators and airway trainers.

    PubMed

    Schebesta, Karl; Hüpfl, Michael; Rössler, Bernhard; Ringl, Helmut; Müller, Michael P; Kimberger, Oliver

    2012-06-01

    Human patient simulators and airway training manikins are widely used to teach airway management skills to medical professionals. Furthermore, these patient simulators are employed as standardized "patients" to evaluate airway devices. However, little is known about how realistic these patient simulators and airway-training manikins really are. This trial aimed to evaluate the upper airway anatomy of four high-fidelity patient simulators and two airway trainers in comparison with actual patients by means of radiographic measurements. The volume of the pharyngeal airspace was the primary outcome parameter. Computed tomography scans of 20 adult trauma patients without head or neck injuries were compared with computed tomography scans of four high-fidelity patient simulators and two airway trainers. By using 14 predefined distances, two cross-sectional areas and three volume parameters of the upper airway, the manikins' similarity to a human patient was assessed. The pharyngeal airspace of all manikins differed significantly from the patients' pharyngeal airspace. The HPS Human Patient Simulator (METI®, Sarasota, FL) was the most realistic high-fidelity patient simulator (6/19 [32%] of all parameters were within the 95% CI of human airway measurements). The airway anatomy of four high-fidelity patient simulators and two airway trainers does not reflect the upper airway anatomy of actual patients. This finding may impact airway training and confound comparative airway device studies.

  18. Systematic comparison of jet energy-loss schemes in a realistic hydrodynamic medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bass, Steffen A.; Majumder, Abhijit; Gale, Charles

    2009-02-15

    We perform a systematic comparison of three different jet energy-loss approaches. These include the Armesto-Salgado-Wiedemann scheme based on the approach of Baier-Dokshitzer-Mueller-Peigne-Schiff and Zakharov (BDMPS-Z/ASW), the higher twist (HT) approach and a scheme based on the Arnold-Moore-Yaffe (AMY) approach. In this comparison, an identical medium evolution will be utilized for all three approaches: this entails not only the use of the same realistic three-dimensional relativistic fluid dynamics (RFD) simulation, but also the use of identical initial parton-distribution functions and final fragmentation functions. We are, thus, in a unique position to not only isolate fundamental differences between the various approaches but also make rigorous calculations for different experimental measurements using state of the art components. All three approaches are reduced to versions containing only one free tunable parameter, which is then related to the well-known transport parameter q̂. We find that the parameters of all three calculations can be adjusted to provide a good description of inclusive data on R_AA vs transverse momentum. However, we do observe slight differences in their predictions for the centrality and azimuthal angular dependence of R_AA vs p_T. We also note that the values of the transport coefficient q̂ required by the three approaches to describe the data differ significantly.

  19. Implementation of an Integrated On-Board Aircraft Engine Diagnostic Architecture

    NASA Technical Reports Server (NTRS)

    Armstrong, Jeffrey B.; Simon, Donald L.

    2012-01-01

    An on-board diagnostic architecture for aircraft turbofan engine performance trending, parameter estimation, and gas-path fault detection and isolation has been developed and evaluated in a simulation environment. The architecture incorporates two independent models: a real-time self-tuning performance model providing parameter estimates and a performance baseline model for diagnostic purposes reflecting long-term engine degradation trends. This architecture was evaluated using flight profiles generated from a nonlinear model with realistic fleet engine health degradation distributions and sensor noise. The architecture was found to produce acceptable estimates of engine health and unmeasured parameters, and the integrated diagnostic algorithms were able to perform correct fault isolation in approximately 70 percent of the tested cases.

  20. Estimation of the sea surface's two-scale backscatter parameters

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1978-01-01

    The relationship between the sea-surface normalized radar cross section and the friction velocity vector is determined using a parametric two-scale scattering model. The model parameters are found from a nonlinear maximum likelihood estimation. The estimation is based on aircraft scatterometer measurements and the sea-surface anemometer measurements collected during the JONSWAP '75 experiment. The estimates of the ten model parameters converge to realistic values that are in good agreement with the available oceanographic data. The rms discrepancy between the model and the cross section measurements is 0.7 dB, which is the rms sum of a 0.3 dB average measurement error and a 0.6 dB modeling error.

  1. Use of groundwater lifetime expectancy for the performance assessment of a deep geologic radioactive waste repository: 2. Application to a Canadian Shield environment

    NASA Astrophysics Data System (ADS)

    Park, Y.-J.; Cornaton, F. J.; Normani, S. D.; Sykes, J. F.; Sudicky, E. A.

    2008-04-01

    F. J. Cornaton et al. (2008) introduced the concept of lifetime expectancy as a performance measure of the safety of subsurface repositories, on the basis of the travel time for contaminants released at a certain point in the subsurface to reach the biosphere or compliance area. The methodologies are applied to a hypothetical but realistic Canadian Shield crystalline rock environment, which is considered to be one of the most geologically stable areas on Earth. In an approximately 10 × 10 × 1.5 km³ hypothetical study area, up to 1000 major and intermediate fracture zones are generated from surface lineament analyses and subsurface surveys. In the study area, mean and probability density of lifetime expectancy are analyzed with realistic geologic and hydrologic shield settings in order to demonstrate the applicability of the theory and the numerical model for optimally locating a deep subsurface repository for the safe storage of spent nuclear fuel. The results demonstrate that, in general, groundwater lifetime expectancy increases with depth and it is greatest inside major matrix blocks. Various sources and aspects of uncertainty are considered, specifically geometric and hydraulic parameters of permeable fracture zones. Sensitivity analyses indicate that the existence and location of permeable fracture zones and the relationship between fracture zone permeability and depth from ground surface are the most significant factors for lifetime expectancy distribution in such a crystalline rock environment. As a consequence, it is successfully demonstrated that the concept of lifetime expectancy can be applied to siting and performance assessment studies for deep geologic repositories in crystalline fractured rock settings.

  2. Neo-deterministic seismic hazard scenarios for India—a preventive tool for disaster mitigation

    NASA Astrophysics Data System (ADS)

    Parvez, Imtiyaz A.; Magrin, Andrea; Vaccari, Franco; Ashish; Mir, Ramees R.; Peresan, Antonella; Panza, Giuliano Francesco

    2017-11-01

    Current computational resources and physical knowledge of the seismic wave generation and propagation processes allow for reliable numerical and analytical models of waveform generation and propagation. From the simulation of ground motion, it is easy to extract the desired earthquake hazard parameters. Accordingly, a scenario-based approach to seismic hazard assessment has been developed, namely the neo-deterministic seismic hazard assessment (NDSHA), which allows for a wide range of possible seismic sources to be used in the definition of reliable scenarios by means of realistic waveforms modelling. Such reliable and comprehensive characterization of expected earthquake ground motion is essential to improve building codes, particularly for the protection of critical infrastructures and for land use planning. Parvez et al. (Geophys J Int 155:489-508, 2003) published the first ever neo-deterministic seismic hazard map of India by computing synthetic seismograms with input data set consisting of structural models, seismogenic zones, focal mechanisms and earthquake catalogues. As described in Panza et al. (Adv Geophys 53:93-165, 2012), the NDSHA methodology evolved with respect to the original formulation used by Parvez et al. (Geophys J Int 155:489-508, 2003): the computer codes were improved to better fit the need of producing realistic ground shaking maps and ground shaking scenarios, at different scale levels, exploiting the most significant pertinent progresses in data acquisition and modelling. Accordingly, the present study supplies a revised NDSHA map for India. The seismic hazard, expressed in terms of maximum displacement (Dmax), maximum velocity (Vmax) and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid over the studied territory.

  3. SF-FDTD analysis of a predictive physical model for parallel aligned liquid crystal devices

    NASA Astrophysics Data System (ADS)

    Márquez, Andrés.; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Alvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto

    2017-08-01

    Recently we demonstrated a novel, simplified model for calculating the voltage-dependent retardance provided by parallel-aligned liquid crystal on silicon (PA-LCoS) devices for a very wide range of incidence angles and any wavelength in the visible. To our knowledge it represents the most simplified approach still showing predictive capability. Deeper insight into the physics behind the simplified model is necessary to understand whether the parameters in the model are physically meaningful. Since the PA-LCoS is a black box where we do not have information about the physical parameters of the device, we cannot perform this kind of analysis using the experimental retardance measurements. In this work we develop realistic simulations of the non-linear tilt of the liquid crystal director across the thickness of the liquid crystal layer in the PA devices. We consider these profiles to have a sine-like shape, which is a good approximation for typical ranges of applied voltage in commercial PA-LCoS microdisplays. For these simulations we develop a rigorous method based on the split-field finite difference time domain (SF-FDTD) technique which provides realistic retardance values. These values are used as the experimental measurements to which the simplified model is fitted. From this analysis we learn that the simplified model is very robust, providing unambiguous solutions when fitting its parameters. We also learn that two of the parameters in the model are physically meaningful, providing a useful reverse-engineering approach, with predictive capability, to probe into internal characteristics of the PA-LCoS device.
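
    The retardance such a model must reproduce can be computed directly from an assumed sine-like tilt profile: for light at normal incidence, a director tilt θ(z) gives an effective index n_eff(θ) = nₑn₀/√(n₀²cos²θ + nₑ²sin²θ), and the retardance is the integral of n_eff − n₀ across the cell. The indices and cell gap below are illustrative, not those of a real PA-LCoS device.

        import numpy as np

        no, ne, d, lam = 1.5, 1.7, 3e-6, 633e-9   # ordinary/extraordinary index, cell gap, wavelength

        def n_eff(theta):
            """Effective index at normal incidence; theta = director tilt out of the plane."""
            return ne * no / np.sqrt(no**2 * np.cos(theta)**2 + ne**2 * np.sin(theta)**2)

        def retardance(theta_max, nz=2000):
            z = np.linspace(0.0, d, nz)
            theta = theta_max * np.sin(np.pi * z / d)                 # sine-like tilt profile
            return 2 * np.pi / lam * np.mean(n_eff(theta) - no) * d   # uniform-grid integral

        for tmax in np.radians([0.0, 30.0, 60.0, 90.0]):
            print(f"theta_max = {np.degrees(tmax):3.0f} deg -> retardance = {retardance(tmax):5.2f} rad")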

  4. A radiological assessment of nuclear power and propulsion operations near Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Bolch, Wesley E.; Thomas, J. Kelly; Peddicord, K. Lee; Nelson, Paul; Marshall, David T.; Busche, Donna M.

    1990-01-01

    Scenarios were identified which involve the use of nuclear power systems in the vicinity of Space Station Freedom (SSF) and their radiological impact on the SSF crew was quantified. Several of the developed scenarios relate to the use of SSF as an evolutionary transportation node for lunar and Mars missions. In particular, radiation doses delivered to SSF crew were calculated for both the launch and subsequent return of a Nuclear Electric Propulsion (NEP) cargo vehicle and a Nuclear Thermal Rocket (NTR) personnel vehicle to low earth orbit. The use of nuclear power on co-orbiting platforms and the storage and handling issues associated with radioisotope power systems were also explored as they relate to SSF. A central philosophy in these analyses was the utilization of a radiation dose budget, defined as the difference between recommended dose limits from all radiation sources and estimated doses received by crew members from natural space radiations. Consequently, for each scenario examined, the dose budget concept was used to identify and quantify constraints on operational parameters such as launch separation distances, returned vehicle parking distances, and reactor shutdown times prior to vehicle approach. The results indicate that realistic scenarios do not exist which would preclude the use of nuclear power sources in the vicinity of SSF. The radiation dose to the SSF crew can be maintained at safe levels solely by implementing proper and reasonable operating procedures.

  5. Monte Carlo simulations of the impact of troposphere, clock and measurement errors on the repeatability of VLBI positions

    NASA Astrophysics Data System (ADS)

    Pany, A.; Böhm, J.; MacMillan, D.; Schuh, H.; Nilsson, T.; Wresnik, J.

    2011-01-01

    Within the International VLBI Service for Geodesy and Astrometry (IVS) Monte Carlo simulations have been carried out to design the next generation VLBI system ("VLBI2010"). Simulated VLBI observables were generated taking into account the three most important stochastic error sources in VLBI, i.e. wet troposphere delay, station clock, and measurement error. Based on realistic physical properties of the troposphere and clocks we ran simulations to investigate the influence of the troposphere on VLBI analyses, and to gain information about the role of clock performance and measurement errors of the receiving system in the process of reaching VLBI2010's goal of mm position accuracy on a global scale. Our simulations confirm that the wet troposphere delay is the most important of these three error sources. We did not observe significant improvement of geodetic parameters if the clocks were simulated with an Allan standard deviation better than 1 × 10⁻¹⁴ at 50 min and found the impact of measurement errors to be relatively small compared with the impact of the troposphere. Along with simulations to test different network sizes, scheduling strategies, and antenna slew rates these studies were used as a basis for the definition and specification of VLBI2010 antennas and recording system and might also be an example for other space geodetic techniques.

  6. Error analysis of satellite attitude determination using a vision-based approach

    NASA Astrophysics Data System (ADS)

    Carozza, Ludovico; Bevilacqua, Alessandro

    2013-09-01

    Improvements in communication and processing technologies have opened the doors to exploiting on-board cameras to compute objects' spatial attitude using only the visual information from sequences of remotely sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, with reference to vision-based approaches. The method in particular focuses on the analysis of the image registration algorithm, carried out through on-purpose simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at proof-of-concept design and planning levels, since they emphasize the main sources of error for vision-based approaches employed for satellite attitude estimation. Nevertheless, the approach we present is also of general interest for all related application domains that require an accurate estimation of three-dimensional orientation parameters (i.e., robotics, airborne stabilization).

  7. The automatic neutron guide optimizer guide_bot

    NASA Astrophysics Data System (ADS)

    Bertelsen, Mads

    2017-09-01

    The guide optimization software guide_bot is introduced, the main purpose of which is to reduce the time spent programming when performing numerical optimization of neutron guides. A limited amount of information on the overall guide geometry and a figure of merit describing the desired beam is used to generate the code necessary to solve the problem. A generated McStas instrument file performs the Monte Carlo ray-tracing, which is controlled by iFit optimization scripts. The resulting optimal guide is thoroughly characterized, both in terms of brilliance transfer from an idealized source and on a more realistic source such as the ESS Butterfly moderator. Basic MATLAB knowledge is required from the user, but no experience with McStas or iFit is necessary. This paper briefly describes how guide_bot is used and some important aspects of the code. A short validation against earlier work is performed which shows the expected agreement. In addition a scan over the vertical divergence requirement, where individual guide optimizations are performed for each corresponding figure of merit, provides valuable data on the consequences of this parameter. The guide_bot software package is best suited for the start of an instrument design project as it excels at comparing a large amount of different guide alternatives for a specific set of instrument requirements, but is still applicable in later stages as constraints can be used to optimize more specific guides.

  8. Geothermal heat flux in the Amundsen Sea sector of West Antarctica: New insights from temperature measurements, depth to the bottom of the magnetic source estimation, and thermal modeling

    NASA Astrophysics Data System (ADS)

    Dziadek, R.; Gohl, K.; Diehl, A.; Kaul, N.

    2017-07-01

    Focused research on the Pine Island and Thwaites glaciers, which drain the West Antarctic Ice Sheet (WAIS) into the Amundsen Sea Embayment (ASE), revealed strong signs of instability in recent decades resulting from a variety of causes, such as the inflow of warmer ocean currents and reverse-sloping bedrock topography; this has been formalized as the Marine Ice Sheet Instability hypothesis. Geothermal heat flux (GHF) is a poorly constrained parameter in Antarctica and is suspected to affect the basal conditions of ice sheets, i.e., basal melting and subglacial hydrology. Thermomechanical models demonstrate the influence of the geothermal heat flux boundary condition on (paleo) ice sheet stability. Due to the complex tectonic and magmatic history of West Antarctica, the region is suspected to exhibit strongly heterogeneous geothermal heat flux variations. We present an approach to investigate ranges of realistic heat fluxes in the ASE by different methods, discuss direct observations, and present 3-D numerical models that incorporate boundary conditions derived from various geophysical studies, including our new Depth to the Bottom of the Magnetic Source (DBMS) estimates. Our in situ temperature measurements at 26 sites in the ASE more than triple the number of direct GHF observations in West Antarctica. We demonstrate with our numerical 3-D models that GHF spatially varies from 68 up to 110 mW m⁻².
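
    In situ GHF determinations of this kind rest on Fourier's law, q = k·dT/dz, applied to the measured near-surface temperature gradient. The snippet below shows the arithmetic with assumed (not measured) values, which happen to land inside the reported 68-110 mW m⁻² range.

        # Conductive heat flux from a measured gradient (Fourier's law); values assumed
        k = 1.1                  # sediment thermal conductivity [W/(m K)]
        dT, dz = 0.35, 5.0       # temperature increase over probe depth [K], [m]
        q = k * dT / dz
        print(f"GHF ~ {q * 1e3:.0f} mW/m^2")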

  9. Optical and X-ray luminosities of expanding nebulae around ultraluminous X-ray sources

    NASA Astrophysics Data System (ADS)

    Siwek, Magdalena; Sądowski, Aleksander; Narayan, Ramesh; Roberts, Timothy P.; Soria, Roberto

    2017-09-01

    We have performed a set of simulations of expanding, spherically symmetric nebulae inflated by winds from accreting black holes in ultraluminous X-ray sources (ULXs). We implemented a realistic cooling function to account for free-free and bound-free cooling. For all model parameters we considered, the forward shock in the interstellar medium becomes radiative at a radius ~100 pc. The emission is primarily in optical and UV, and the radiative luminosity is about 50 per cent of the total kinetic luminosity of the wind. In contrast, the reverse shock in the wind is adiabatic so long as the terminal outflow velocity of the wind vw ≳ 0.003c. The shocked wind in these models radiates in X-rays, but with a luminosity of only ~10³⁵ erg s⁻¹. For wind velocities vw ≲ 0.001c, the shocked wind becomes radiative, but it is no longer hot enough to produce X-rays. Instead it emits in optical and UV, and the radiative luminosity is comparable to 100 per cent of the wind kinetic luminosity. We suggest that measuring the optical luminosities and putting limits on the X-ray and radio emission from shock-ionized ULX bubbles may help in estimating the mass outflow rate of the central accretion disc and the velocity of the outflow.

  10. Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach

    NASA Astrophysics Data System (ADS)

    Denolle, M.; Van Houtte, C.

    2017-12-01

    Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.
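
    The bias from fixing the falloff rate can be reproduced with a toy fit of the generalized Brune spectrum, Ω(f) = Ω₀/(1 + (f/fc)ⁿ). The sketch below generates a synthetic spectrum with n = 1.6 and compares the corner frequency recovered with n free against n fixed at 2; it is a schematic of the effect only, not the paper's Bayesian hierarchical method, and all numbers are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def spectrum(f, omega0, fc, n):
            return omega0 / (1 + (f / fc) ** n)

        rng = np.random.default_rng(3)
        f = np.logspace(-1, 1.3, 200)
        obs = spectrum(f, 1.0, 0.8, 1.6) * rng.lognormal(0.0, 0.1, f.size)   # synthetic data

        (_, fc_free, n_free), _ = curve_fit(spectrum, f, obs, p0=[1.0, 1.0, 2.0])
        (_, fc_fixed), _ = curve_fit(lambda f, o, fc: spectrum(f, o, fc, 2.0), f, obs, p0=[1.0, 1.0])
        print(f"n free : fc = {fc_free:.2f} (n = {n_free:.2f})")
        print(f"n = 2  : fc = {fc_fixed:.2f}")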

  11. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to the distributed nature of the electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the χ² per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
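
    The forward/inverse setup can be sketched in a few lines for an unbounded homogeneous conductor, where a current dipole at r₀ produces the potential V(r) = p·(r−r₀)/(4πσ|r−r₀|³): a ring of dipoles stands in for one belt source, and a single equivalent dipole (position plus moment) is fitted by least squares. The geometry and conductivity are illustrative, and the bounded-medium correction used in the study is omitted.

        import numpy as np
        from scipy.optimize import least_squares

        sigma = 0.2                                   # conductivity [S/m], illustrative

        def potentials(elec, dip_pos, dip_mom):
            """Infinite-medium potentials of current dipoles at the electrode positions."""
            d = elec[:, None, :] - dip_pos[None, :, :]            # (n_elec, n_dip, 3)
            r3 = np.linalg.norm(d, axis=-1) ** 3
            return ((d * dip_mom[None, :, :]).sum(-1) / (4 * np.pi * sigma * r3)).sum(-1)

        rng = np.random.default_rng(4)
        u = rng.normal(size=(64, 3))
        elec = 0.1 * u / np.linalg.norm(u, axis=1, keepdims=True)  # electrodes on r = 0.1 m sphere

        phi = np.linspace(0, 2 * np.pi, 20, endpoint=False)        # 20-dipole "belt" source
        ring = np.c_[0.02 * np.cos(phi), 0.02 * np.sin(phi), np.full(20, 0.03)]
        mom = np.tile([0.0, 0.0, 5e-9], (20, 1))                   # common moment direction
        v_obs = potentials(elec, ring, mom)

        res = lambda x: potentials(elec, x[None, :3], x[None, 3:]) - v_obs
        fit = least_squares(res, x0=[0.0, 0.0, 0.01, 0.0, 0.0, 1e-8])
        print("equivalent dipole position [m]:", np.round(fit.x[:3], 4))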

  12. Test methods for environment-assisted cracking

    NASA Astrophysics Data System (ADS)

    Turnbull, A.

    1992-03-01

    The test methods for assessing environment-assisted cracking of metals in aqueous solution are described. The advantages and disadvantages are examined and the interrelationship between results from different test methods is discussed. The differences in cracking susceptibility occasionally observed between the various mechanical test methods often arise from variation in environmental parameters across the different test conditions and from the lack of adequate specification, monitoring, and control of environmental variables. Time is also a significant factor when comparing results from short-term tests with long-exposure tests. In addition to these factors, the intrinsic difference in the important mechanical variables, such as strain rate, associated with the various mechanical test methods can change the apparent sensitivity of the material to stress corrosion cracking. The increasing economic pressure for more accelerated testing is in conflict with the characteristic time dependence of corrosion processes. Unreliable results may be inevitable in some cases, but improved understanding of mechanisms and the development of mechanistically based models of environment-assisted cracking which incorporate the key mechanical, material, and environmental variables can provide the framework for a more realistic interpretation of short-term data.

  13. Solar Energetic Particle Transport Near a Heliospheric Current Sheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Battarbee, Markus; Dalla, Silvia; Marsh, Mike S., E-mail: mbattarbee@uclan.ac.uk

    2017-02-10

    Solar energetic particles (SEPs), a major component of space weather, propagate through the interplanetary medium strongly guided by the interplanetary magnetic field (IMF). In this work, we analyze the implications that a flat Heliospheric Current Sheet (HCS) has on proton propagation from SEP release sites to the Earth. We simulate proton propagation by integrating fully 3D trajectories near an analytically defined flat current sheet, collecting comprehensive statistics into histograms, fluence maps, and virtual observer time profiles within an energy range of 1–800 MeV. We show that protons experience significant current sheet drift to distant longitudes, causing time profiles to exhibit multiple components, which are a potential source of confusing interpretations of observations. We find that variation of the current sheet thickness within a realistic parameter range has little effect on particle propagation. We show that the IMF configuration strongly affects the deceleration of protons. We show that in our model, the presence of a flat equatorial HCS in the inner heliosphere limits the crossing of protons into the opposite hemisphere.

  14. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU.

    PubMed

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

    Parameter recovery in diffuse optical tomography relies on a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, and the lack of a real-time solution impedes practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems, with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ∼600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ∼0.25 s/excitation source. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
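    The computational pattern described here (one sparse finite-element system solved for many excitation sources) is exactly what parallelization attacks. The toy sketch below is an assumed stand-in for the authors' solver: it factorizes a sparse matrix once and back-substitutes for all sources; on a GPU the same repeated solve would be handed to a device sparse library.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Toy stand-in for the assembled FEM diffusion system A @ phi = q,
    # with one right-hand-side column per excitation source.
    n = 5000                                   # nodes (real head meshes: ~6e5)
    rng = np.random.default_rng(0)
    main = 4.0 + rng.random(n)
    A = sp.diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1],
                 format="csc")

    q = np.zeros((n, 8))                       # 8 excitation sources
    q[rng.choice(n, 8), np.arange(8)] = 1.0

    # Factorize once, back-substitute for every source; the reported
    # speedups come from parallelizing exactly this kind of repeated solve.
    lu = spla.splu(A)
    phi = lu.solve(q)
    print(phi.shape)                           # (n, 8) photon-density fields
    ```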

  15. Sensitivities of seismic velocities to temperature, pressure and composition in the lower mantle

    NASA Astrophysics Data System (ADS)

    Trampert, Jeannot; Vacher, Pierre; Vlaar, Nico

    2001-08-01

    We calculated temperature, pressure and compositional sensitivities of seismic velocities in the lower mantle using the latest mineral physics data. The compositional variable refers to the volume proportion of perovskite in a simplified perovskite-magnesiowüstite mantle assemblage. The novelty of our approach is the exploration of a reasonable range of input parameters which enter the lower mantle extrapolations. This leads to realistic error bars on the sensitivities. Temperature variations can be inferred throughout the lower mantle with a good degree of precision. In contrast to the uppermost mantle, modest compositional changes in the lower mantle can be detected by seismic tomography, though with larger uncertainty. A likely trade-off between temperature and composition will be largely determined by uncertainties in tomography itself. Given the current sources of uncertainty in recent data, anelastic contributions to the temperature sensitivities (calculated using Karato's approach) appear less significant than previously thought. Recent seismological determinations of the ratio of relative S to P velocity heterogeneity can be entirely explained by thermal effects, although isolated spots beneath Africa and the Central Pacific in the lowermost mantle may call for a compositional origin.

  16. Mars Tumbleweed Simulation Using Singular Perturbation Theory

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Calhoun, Phillip

    2005-01-01

    The Mars Tumbleweed is a new surface rover concept that utilizes Martian winds as the primary source of mobility. Several designs have been proposed for the Mars Tumbleweed, all using aerodynamic drag to generate force for traveling about the surface. The Mars Tumbleweed, in its deployed configuration, must be large and lightweight to provide the ratio of drag force to rolling resistance necessary to initiate motion from the Martian surface. This paper discusses the dynamic simulation details of a candidate Tumbleweed design. The dynamic simulation model must properly evaluate and characterize the motion of the tumbleweed rover to support proper selection of system design parameters. Several factors, such as model flexibility, simulation run times, and model accuracy needed to be considered in modeling assumptions. The simulation was required to address the flexibility of the rover and its interaction with the ground, and properly evaluate its mobility. Proper assumptions needed to be made such that the simulated dynamic motion is accurate and realistic while not overly burdened by long simulation run times. This paper also shows results that provided reasonable correlation between the simulation and a drop/roll test of a tumbleweed prototype.

  17. Non-minimal quartic inflation in supersymmetric SO(10)

    DOE PAGES

    Leontaris, George K.; Okada, Nobuchika; Shafi, Qaisar

    2016-12-16

    Here, we describe how quartic (λφ^4) inflation with non-minimal coupling to gravity is realized in realistic supersymmetric SO(10) models. In a well-motivated example the 16–$\overline{16}$ Higgs multiplets, which break SO(10) to SU(5) and yield masses for the right-handed neutrinos, provide the inflaton field φ. Thus, leptogenesis is a natural outcome in this class of SO(10) models. Moreover, the adjoint (45-plet) Higgs also acquires a GUT scale value during inflation so that the monopole problem is evaded. The scalar spectral index n_s is in good agreement with the observations, and r, the tensor-to-scalar ratio, is predicted for realistic values of GUT parameters to be of order 10^-3 to 10^-2.

  18. Investigation of large-area multicoil inductively coupled plasma sources using three-dimensional fluid model

    NASA Astrophysics Data System (ADS)

    Brcka, Jozef

    2016-07-01

    A multi-coil inductively coupled plasma (ICP) system can be used to maintain plasma uniformity and increase the area processed by a high-density plasma. This article presents a source in two different configurations. The distributed planar multi ICP (DM-ICP) source comprises individual ICP sources that are not overlapped and produce plasma independently. Mutual coupling of the ICPs may affect the distribution of the produced plasma. The integrated multicoil ICP (IMC-ICP) source consists of four low-inductance ICP antennas that are superimposed in an azimuthal manner. Identical geometry of the ICP coils was assumed in this work. Both configurations have highly asymmetric components. A three-dimensional (3D) plasma model of the multicoil ICP configurations with asymmetric features is used to investigate the plasma characteristics in a large chamber and the operation of the sources in inert and reactive gases. The feasibility, speed, and computational resource requirements of the coupled multiphysics solver are investigated in the framework of a large realistic geometry and complex reaction processes. It was determined that additional variables can be used to control large-area plasmas. Both configurations can form a plasma that moves azimuthally in a controlled manner, in the so-called “sweeping mode” (SM) or “polyphase mode” (PPM), and thus they have potential for large-area and high-density plasma applications. Operation in the azimuthal mode has the potential to adjust the plasma distribution and the reaction chemistry, and to increase or modulate the production of radicals. The intrinsic asymmetry of the individual coils and their combined operation were investigated within a source assembly, primarily in argon and CO gases. Limited investigations were also performed on operation in CH4 gas. The plasma parameters and the resulting chemistry are affected by the geometrical relation between the individual antennas. The aim of this work is to incorporate the technological, computational, dimensional-scaling, and reaction chemistry aspects of the plasma under one computational framework. The 3D simulation is utilized to geometrically scale up the reactive plasma produced by multiple ICP sources.

  19. ZASPE: A Code to Measure Stellar Atmospheric Parameters and their Covariance from Spectra

    NASA Astrophysics Data System (ADS)

    Brahm, Rafael; Jordán, Andrés; Hartman, Joel; Bakos, Gáspár

    2017-05-01

    We describe the Zonal Atmospheric Stellar Parameters Estimator (zaspe), a new algorithm, and its associated code, for determining precise stellar atmospheric parameters and their uncertainties from high-resolution echelle spectra of FGK-type stars. zaspe estimates the stellar atmospheric parameters by comparing the observed spectrum against a grid of synthetic spectra, using only the spectral zones that are most sensitive to changes in the atmospheric parameters. Realistic uncertainties in the parameters are computed from the data itself, by taking into account the systematic mismatches between the observed spectrum and the best-fitting synthetic one. The covariances between the parameters are also estimated in the process. zaspe can in principle use any pre-calculated grid of synthetic spectra, but unbiased grids are required to obtain accurate parameters. We tested the performance of two existing libraries and concluded that neither is suitable for computing precise atmospheric parameters. We describe a process to synthesize a new library of synthetic spectra that was found to generate consistent results when compared with parameters obtained with different methods (interferometry, asteroseismology, equivalent widths).
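    The core selection step, a chi-squared comparison restricted to parameter-sensitive wavelength zones, can be sketched in a few lines. This is a schematic reading of the method with hypothetical array shapes; the real code also iterates the sensitive-zone mask, handles line broadening, and propagates the systematic mismatch into the quoted uncertainties.

    ```python
    import numpy as np

    def best_grid_match(obs_flux, grid_fluxes, grid_params, mask):
        """Pick the library spectrum minimizing chi^2 over sensitive zones only.

        obs_flux    : (npix,) continuum-normalized observed spectrum
        grid_fluxes : (nmodels, npix) synthetic library
        grid_params : (nmodels, k) e.g. columns Teff, logg, [Fe/H], vsini
        mask        : (npix,) bool, True where the flux reacts strongly to
                      changes in the atmospheric parameters
        """
        chi2 = ((grid_fluxes[:, mask] - obs_flux[mask]) ** 2).sum(axis=1)
        best = np.argmin(chi2)
        return grid_params[best], chi2[best]
    ```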

  20. Mechanisms controlling primary and new production in a global ecosystem model - Part I: Validation of the biological simulation

    NASA Astrophysics Data System (ADS)

    Popova, E. E.; Coward, A. C.; Nurser, G. A.; de Cuevas, B.; Fasham, M. J. R.; Anderson, T. R.

    2006-12-01

    A global general circulation model coupled to a simple six-compartment ecosystem model is used to study the extent to which global variability in primary and export production can be realistically predicted on the basis of advanced parameterizations of upper mixed layer physics, without recourse to introducing extra complexity in model biology. The "K profile parameterization" (KPP) scheme employed, combined with 6-hourly external forcing, is able to capture short-term periodic and episodic events such as diurnal cycling and storm-induced deepening. The model realistically reproduces various features of global ecosystem dynamics that have been problematic in previous global modelling studies, using a single generic parameter set. The realistic simulation of deep convection in the North Atlantic, and lack of it in the North Pacific and Southern Oceans, leads to good predictions of chlorophyll and primary production in these contrasting areas. Realistic levels of primary production are predicted in the oligotrophic gyres due to high frequency external forcing of the upper mixed layer (accompanying paper Popova et al., 2006) and novel parameterizations of zooplankton excretion. Good agreement is shown between model and observations at various JGOFS time series sites: BATS, KERFIX, Papa and HOT. One exception is the northern North Atlantic where lower grazing rates are needed, perhaps related to the dominance of mesozooplankton there. The model is therefore not globally robust in the sense that additional parameterizations are needed to realistically simulate ecosystem dynamics in the North Atlantic. Nevertheless, the work emphasises the need to pay particular attention to the parameterization of mixed layer physics in global ocean ecosystem modelling as a prerequisite to increasing the complexity of ecosystem models.

  1. Dynamics of a distributed drill string system: Characteristic parameters and stability maps

    NASA Astrophysics Data System (ADS)

    Aarsnes, Ulf Jakob F.; van de Wouw, Nathan

    2018-03-01

    This paper involves the dynamic (stability) analysis of distributed drill-string systems. A minimal set of parameters characterizing the linearized, axial-torsional dynamics of a distributed drill string coupled through the bit-rock interaction is derived. This is found to correspond to five parameters for a simple drill string and eight parameters for a two-sectioned drill-string (e.g., corresponding to the pipe and collar sections of a drilling system). These dynamic characterizations are used to plot the inverse gain margin of the system, parametrized in the non-dimensional parameters, effectively creating a stability map covering the full range of realistic physical parameters. This analysis reveals a complex spectrum of dynamics not evident in stability analysis with lumped models, thus indicating the importance of analysis using distributed models. Moreover, it reveals trends concerning stability properties depending on key system parameters useful in the context of system and control design aiming at the mitigation of vibrations.

  2. Exploring the Uncanny Valley to Find the Edge of Play

    ERIC Educational Resources Information Center

    Eberle, Scott G.

    2009-01-01

    Play often rewards us with a thrill or a sense of wonder. But, just over the edge of play, uncanny objects like dolls, automata, robots, and realistic animations may become monstrous rather than marvelous. Drawing from diverse sources, literary evidence, psychological and psychoanalytic theory, new insights in neuroscience, marketing literature,…

  3. System Measures Thermal Noise In A Microphone

    NASA Technical Reports Server (NTRS)

    Zuckerwar, Allan J.; Ngo, Kim Chi T.

    1994-01-01

    Vacuum provides acoustic isolation from the environment. A system for measuring the thermal noise of a microphone and its preamplifier eliminates some sources of error found in older systems. It includes an isolation vessel and an exterior suspension that, acting together, enable measurement of thermal noise under realistic conditions while providing superior vibrational and acoustical isolation. The system yields more accurate measurements of thermal noise.

  4. Renewable Energy Can Help Reduce Oil Dependency

    ScienceCinema

    Arvizu, Dan

    2017-12-21

    In a speech to the Economic Club of Kansas City on June 23, 2010, NREL Director Dan Arvizu takes a realistic look at how renewable energy can help reduce America's dependence on oil, pointing out that the country gets as much energy from renewable sources now as it does from offshore oil production.

  5. Integration of Geodata in Documenting Castle Ruins

    NASA Astrophysics Data System (ADS)

    Delis, P.; Wojtkowska, M.; Nerc, P.; Ewiak, I.; Lada, A.

    2016-06-01

    Textured three-dimensional models are currently one of the standard ways of representing the results of photogrammetric work. A realistic 3D model combines the geometrical relations between the structure's elements with realistic textures of each of its elements. Data used to create 3D models of structures can be derived from many different sources. The most commonly used tools for documentation purposes are the digital camera and, increasingly, terrestrial laser scanning (TLS). Integration of data acquired from different sources allows the modelling and visualization of 3D models of historical structures. An additional benefit of data integration is the possibility of filling in missing points, for example in point clouds. The paper shows the possibility of integrating data from terrestrial laser scanning with digital imagery and presents an analysis of the accuracy of the described methods. The paper describes results obtained from raw data consisting of a point cloud measured using terrestrial laser scanning acquired from a Leica ScanStation2 and digital imagery taken using a Kodak DCS Pro 14N camera. The studied structure is the ruins of the Ilza castle in Poland.

  6. A preliminary study of head-up display assessment techniques. 2: HUD symbology and panel information search time

    NASA Technical Reports Server (NTRS)

    Guercio, J. G.; Haines, R. F.

    1978-01-01

    Twelve commercial pilots were shown 50 high-fidelity slides of a standard aircraft instrument panel with the airspeed, altitude, ADI, VSI, and RMI needles in various realistic orientations. Fifty slides showing an integrated head-up display (HUD) symbology containing an equivalent number of flight parameters as above (with flight path replacing VSI) were also shown. Each subject was told what flight parameter to search for just before each slide was exposed and was given as long as needed (12 sec maximum) to respond by verbalizing the parameter's displayed value. The results for the 100-percent correct data indicated that: there was no significant difference in mean reaction time (averaged across all five flight parameters) between the instrument panel and HUD slides; and a statistically significant difference in mean reaction time was found in responding to different flight parameters.

  7. A Space-Time-Frequency Dictionary for Sparse Cortical Source Localization.

    PubMed

    Korats, Gundars; Le Cam, Steven; Ranta, Radu; Louis-Dorr, Valerie

    2016-09-01

    Cortical source imaging aims at identifying activated cortical areas on the surface of the cortex from raw electroencephalogram (EEG) data. This problem is ill posed, the number of channels being very low compared to the number of possible source positions. In some realistic physiological situations, the active areas are sparse in space and of short time duration, and the amount of spatio-temporal data to carry out the inversion is then limited. In this study, we propose an original data-driven space-time-frequency (STF) dictionary which takes into account both spatial and time-frequency sparseness simultaneously while preserving smoothness in the time-frequency domain (i.e., nonstationary smooth time courses in sparse locations). Based on these assumptions, we take advantage of the matching pursuit (MP) framework for selecting the most relevant atoms in this highly redundant dictionary. We apply two recent MP algorithms, single best replacement (SBR) and source deflated matching pursuit, and we compare the results using a spatial dictionary and the proposed STF dictionary to demonstrate the improvements of our multidimensional approach. We also provide comparisons with well-established inversion methods, FOCUSS and RAP-MUSIC, analyzing performance under different degrees of nonstationarity and signal-to-noise ratio. Our STF dictionary combined with the SBR approach provides robust performance on realistic simulations. From a computational point of view, the algorithm is embedded in the wavelet domain, ensuring high efficiency in terms of computation time. The proposed approach ensures fast and accurate sparse cortical localization on highly nonstationary and noisy data.
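    For readers unfamiliar with the greedy selection underlying SBR and related algorithms, plain matching pursuit is only a few lines: project the residual on every atom, keep the best one, subtract, repeat. The sketch below assumes a generic unit-norm dictionary matrix and ignores the wavelet-domain embedding that gives the paper its speed.

    ```python
    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms):
        """Greedy MP: repeatedly select the atom most correlated with the
        residual.

        dictionary : (n_features, n_atoms_total) with unit-norm columns.
        Returns selected atom indices and their coefficients.
        """
        residual = signal.copy()
        idx, coef = [], []
        for _ in range(n_atoms):
            corr = dictionary.T @ residual
            k = np.argmax(np.abs(corr))
            idx.append(k)
            coef.append(corr[k])
            residual = residual - corr[k] * dictionary[:, k]
        return np.array(idx), np.array(coef)
    ```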

  8. Keno-21: Fundamental Issues in the Design of Geophysical Simulation Experiments and Resource Allocation in Climate Modelling

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2001-05-01

    Many sources of uncertainty come into play when modelling geophysical systems by simulation. These include uncertainty in the initial condition, uncertainty in model parameter values (and the parameterisations themselves) and error in the model class from which the model(s) was selected. In recent decades, climate simulations have focused resources on reducing the last of these by including more and more details in the model. One can question when this ``kitchen sink'' approach should be complemented with realistic estimates of the impact of the other uncertainties noted above. Indeed, while the impact of model error can never be fully quantified, as all simulation experiments are interpreted under the rosy scenario which assumes a priori that nothing crucial is missing, the impact of the other uncertainties can be quantified at only the cost of computational power, as illustrated, for example, in ensemble climate modelling experiments like Casino-21. This talk illustrates the interplay of these uncertainties in the context of a trivial nonlinear system and an ensemble of models. The simple systems considered in this small scale experiment, Keno-21, are meant to illustrate issues of experimental design; they are not intended to provide true climate simulations. The use of simulation models with huge numbers of parameters given limited data is usually justified by an appeal to the Laws of Physics: the number of free degrees of freedom is much smaller than the number of variables; the variables, parameterisations, and parameter values are constrained by ``the physics''; and the resulting simulation yields a realistic reproduction of the entire planet's climate system to within reasonable bounds. But what bounds, exactly? In a single model run under a transient forcing scenario, there are good statistical grounds for considering only large space and time averages; most of these reasons vanish if an ensemble of runs is made. Ensemble runs can quantify the (in)ability of a model to provide insight on regional changes: if a model cannot capture regional variations in the data on which the model was constructed (that is, in-sample), claims that out-of-sample predictions of those same regional averages should be used in policy making are vacuous. While motivated by climate modelling and illustrated on a trivial nonlinear system, these issues have implications across the range of geophysical modelling, including implications for appropriate resource allocation, for the making of science policy, and for the public understanding of science and the role of uncertainty in decision making.

  9. Response of Electrical Activity in an Improved Neuron Model under Electromagnetic Radiation and Noise

    PubMed Central

    Zhan, Feibiao; Liu, Shenquan

    2017-01-01

    Electrical activities are ubiquitous neuronal bioelectric phenomena, which have many different modes to encode the expression of biological information, and they constitute the whole process of signal propagation between neurons. We therefore focus on the electrical activities of neurons, a topic that is attracting widespread attention among neuroscientists. In this paper, we mainly investigate the electrical activities of the Morris-Lecar (M-L) model with electromagnetic radiation or Gaussian white noise, which can restore the authenticity of neurons in a realistic neural network. First, we explore the dynamical response of the whole system with electromagnetic induction (EMI) and Gaussian white noise. We find that there are slight differences in the discharge behaviors when comparing the response of the original system with that of the improved system, and that electromagnetic induction can transform a bursting or spiking state to a quiescent state and vice versa. Furthermore, we study the bursting transition mode and the corresponding periodic solution mechanism for the isolated neuron model with electromagnetic induction by using one-parameter and two-parameter bifurcation analysis. Finally, we analyze the effects of Gaussian white noise on the original system and the coupled system, which is conducive to understanding the actual discharge properties of realistic neurons. PMID:29209192
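    A common way electromagnetic induction is coupled to a neuron model in this literature is through a magnetic-flux variable whose memductance feeds back on the membrane voltage. The sketch below integrates a Morris-Lecar model with such a flux term; the parameter values and the cubic memductance ρ(φ) = α + 3βφ² are illustrative assumptions, not necessarily the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative Morris-Lecar parameters (assumed, not the paper's values).
    C, gL, gCa, gK = 20.0, 2.0, 4.0, 8.0
    VL, VCa, VK = -60.0, 120.0, -84.0
    V1, V2, V3, V4, lam = -1.2, 18.0, 12.0, 17.4, 0.067
    k1, k2, k3, alpha, beta = 0.1, 0.9, 0.5, 0.4, 0.02  # EMI coupling (assumed)

    def ml_emi(t, y, I_ext):
        V, w, phi = y
        m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
        w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
        tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
        rho = alpha + 3 * beta * phi ** 2      # memductance of the flux variable
        dV = (I_ext - gL * (V - VL) - gCa * m_inf * (V - VCa)
              - gK * w * (V - VK) - k1 * rho * V) / C
        dw = lam * (w_inf - w) / tau_w
        dphi = k2 * V - k3 * phi
        return [dV, dw, dphi]

    sol = solve_ivp(ml_emi, (0.0, 1000.0), [-40.0, 0.0, 0.0], args=(90.0,),
                    max_step=0.5)
    print(sol.y[0].min(), sol.y[0].max())      # membrane-voltage excursion
    ```

    Sweeping k1 (or the external current) in such a sketch is the kind of one-parameter bifurcation scan the abstract refers to.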

  11. A gridded global description of the ionosphere and thermosphere for 1996 - 2000

    NASA Astrophysics Data System (ADS)

    Ridley, A.; Kihn, E.; Kroehl, H.

    The modeling and simulation community has asked for a realistic representation of the near-Earth space environment covering a significant number of years to be used in scientific and engineering applications. The data, data management systems, assimilation techniques, physical models, and computer resources are now available to construct a realistic description of the ionosphere and thermosphere over a 5 year period. DMSP and NOAA POES satellite data and solar emissions were used to compute Hall and Pedersen conductances in the ionosphere. Interplanetary magnetic field measurements on the ACE satellite define average electrostatic potential patterns over the northern and southern Polar Regions. These conductances, electric field patterns, and ground-based magnetometer data were input to the Assimilative Mapping of Ionospheric Electrodynamics model to compute the distribution of electric fields and currents in the ionosphere. The Global Thermosphere Ionosphere Model (GITM) used the ionospheric electrodynamic parameters to compute the distribution of particles and fields in the ionosphere and thermosphere. GITM uses a general circulation approach to solve the fundamental equations. Model results offer a unique opportunity to assess the relative importance of different forcing terms under a variety of conditions as well as the accuracies of different estimates of ionospheric electrodynamic parameters.

  12. Shells, orbit bifurcations, and symmetry restorations in Fermi systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magner, A. G., E-mail: magner@kinr.kiev.ua; Koliesnik, M. V.; Arita, K.

    The periodic-orbit theory based on the improved stationary-phase method within the phase-space path integral approach is presented for the semiclassical description of the nuclear shell structure, concerning the main topics of the fruitful activity of V.G. Soloviev. We apply this theory to study bifurcations and symmetry breaking phenomena in a radial power-law potential which is close to the realistic Woods–Saxon one up to about the Fermi energy. Using the realistic parametrization of nuclear shapes we explain the origin of the double-humped fission barrier and the asymmetry in the fission isomer shapes by the bifurcations of periodic orbits. The semiclassical origin of the oblate–prolate shape asymmetry and tetrahedral shapes is also suggested within the improved periodic-orbit approach. The enhancement of shell structures at certain surface diffuseness and deformation parameters of such shapes is explained by the existence of simple local bifurcations and new non-local bridge-orbit bifurcations in integrable and partially integrable Fermi systems. We obtained good agreement between the semiclassical and quantum shell-structure components of the level density and energy for several surface diffuseness and deformation parameters of the potentials, including their symmetry breaking and bifurcation values.

  13. Fully 3D modeling of tokamak vertical displacement events with realistic parameters

    NASA Astrophysics Data System (ADS)

    Pfefferle, David; Ferraro, Nathaniel; Jardin, Stephen; Bhattacharjee, Amitava

    2016-10-01

    In this work, we model the complex multi-domain and highly non-linear physics of Vertical Displacement Events (VDEs), one of the most damaging off-normal events in tokamaks, with the implicit 3D extended MHD code M3D-C1. The code has recently acquired the capability to include finite-thickness conducting structures within the computational domain. By exploiting the possibility of running a linear 3D calculation on top of a non-linear 2D simulation, we monitor the non-axisymmetric stability and assess the eigen-structure of kink modes as the simulation proceeds. Once a stability boundary is crossed, a fully 3D non-linear calculation is launched for the remainder of the simulation, starting from an earlier time of the 2D run. This procedure, along with adaptive zoning, greatly increases the efficiency of the calculation and allows VDE simulations to be performed with realistic parameters and high resolution. Simulations are being validated with NSTX data, where both axisymmetric (toroidally averaged) and non-axisymmetric induced and conductive (halo) currents have been measured. This work is supported by US DOE Grant DE-AC02-09CH11466.

  14. Calibration of phoswich-based lung counting system using realistic chest phantom.

    PubMed

    Manohari, M; Mathiyarasu, R; Rajagopal, V; Meenakshisundaram, V; Indira, R

    2011-03-01

    A phoswich detector, housed inside a low background steel room and coupled with state-of-the-art pulse shape discrimination (PSD) electronics, was recently established at the Radiological Safety Division of IGCAR for in vivo monitoring of actinides. The various parameters of the PSD electronics were optimised to achieve efficient background reduction in the low-energy regions. The PSD with optimised parameters reduced the steel room background from 9.5 to 0.28 cps in the 17 keV region and from 5.8 to 0.3 cps in the 60 keV region. The Figure of Merit for the timing spectrum of the system is 3.0. The true signal loss due to PSD was found to be less than 2%. The phoswich system was calibrated with the Lawrence Livermore National Laboratory realistic chest phantom loaded with a (241)Am activity-tagged lung set. Calibration factors for varying chest wall composition and chest wall thickness, in terms of muscle-equivalent chest wall thickness, were established. The (241)Am activity in the JAERI phantom, which was received as part of an IAEA inter-comparison exercise, was estimated. This paper presents the optimisation of the PSD electronics and the salient results of the calibration.

  15. anyFish 2.0: An open-source software platform to generate and share animated fish models to study behavior

    NASA Astrophysics Data System (ADS)

    Ingley, Spencer J.; Rahmani Asl, Mohammad; Wu, Chengde; Cui, Rongfeng; Gadelhak, Mahmoud; Li, Wen; Zhang, Ji; Simpson, Jon; Hash, Chelsea; Butkowski, Trisha; Veen, Thor; Johnson, Jerald B.; Yan, Wei; Rosenthal, Gil G.

    2015-12-01

    Experimental approaches to studying behaviors based on visual signals are ubiquitous, yet these studies are limited by the difficulty of combining realistic models with the manipulation of signals in isolation. Computer animations are a promising way to break this trade-off. However, animations are often prohibitively expensive and difficult to program, thus limiting their utility in behavioral research. We present anyFish 2.0, a user-friendly platform for creating realistic animated 3D fish. anyFish 2.0 dramatically expands anyFish's utility by allowing users to create animations of members of several groups of fish from model systems in ecology and evolution (e.g., sticklebacks, Poeciliids, and zebrafish). The visual appearance and behaviors of the model can easily be modified. We have added several features that facilitate more rapid creation of realistic behavioral sequences. anyFish 2.0 provides a powerful tool that will be of broad use in animal behavior and evolution and serves as a model for transparency, repeatability, and collaboration.

  16. Primary combination of phase-field and discrete dislocation dynamics methods for investigating athermal plastic deformation in various realistic Ni-base single crystal superalloy microstructures

    NASA Astrophysics Data System (ADS)

    Gao, Siwen; Rajendran, Mohan Kumar; Fivel, Marc; Ma, Anxin; Shchyglo, Oleg; Hartmaier, Alexander; Steinbach, Ingo

    2015-10-01

    Three-dimensional discrete dislocation dynamics (DDD) simulations in combination with the phase-field method are performed to investigate the influence of different realistic Ni-base single crystal superalloy microstructures with the same volume fraction of γ′ precipitates on plastic deformation at room temperature. The phase-field method is used to generate realistic microstructures as the boundary conditions for DDD simulations in which a constant high uniaxial tensile load is applied along different crystallographic directions. In addition, the lattice mismatch between the γ and γ′ phases is taken into account as a source of internal stresses. Due to the high antiphase boundary energy and the rare formation of superdislocations, precipitate cutting is not observed in the present simulations. Therefore, the plastic deformation is mainly caused by dislocation motion in γ matrix channels. From a comparison of the macroscopic mechanical response and the dislocation evolution for different microstructures in each loading direction, we found that, for a given γ′ phase volume fraction, the optimal microstructure should possess narrow and homogeneous γ matrix channels.

  17. Three-dimensional skyrmions in spin-2 Bose–Einstein condensates

    NASA Astrophysics Data System (ADS)

    Tiurev, Konstantin; Ollikainen, Tuomas; Kuopanportti, Pekko; Nakahara, Mikio; Hall, David S.; Möttönen, Mikko

    2018-05-01

    We introduce topologically stable three-dimensional skyrmions in the cyclic and biaxial nematic phases of a spin-2 Bose–Einstein condensate. These skyrmions exhibit exceptionally high mapping degrees resulting from the versatile symmetries of the corresponding order parameters. We show how these structures can be created in existing experimental setups and study their temporal evolution and lifetime by numerically solving the three-dimensional Gross–Pitaevskii equations for realistic parameter values. Although the biaxial nematic and cyclic phases are observed to be unstable against transition towards the ferromagnetic phase, their lifetimes are long enough for the skyrmions to be imprinted and detected experimentally.

  18. How well do mean field theories of spiking quadratic-integrate-and-fire networks work in realistic parameter regimes?

    PubMed

    Grabska-Barwińska, Agnieszka; Latham, Peter E

    2014-06-01

    We use mean field techniques to compute the distribution of excitatory and inhibitory firing rates in large networks of randomly connected spiking quadratic integrate and fire neurons. These techniques are based on the assumption that activity is asynchronous and Poisson. For most parameter settings these assumptions are strongly violated; nevertheless, so long as the networks are not too synchronous, we find good agreement between mean field prediction and network simulations. Thus, much of the intuition developed for randomly connected networks in the asynchronous regime applies to mildly synchronous networks.
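    As a point of reference for the mean-field comparison, a quadratic integrate-and-fire neuron is simple to simulate directly: the voltage obeys τ dV/dt = V² + I with a spike-and-reset rule. The single-neuron sketch below uses assumed parameter values; the paper's networks couple many such units through random connectivity.

    ```python
    def simulate_qif(I, tau=10.0, v_peak=10.0, v_reset=-10.0, dt=0.1, T=1000.0):
        """Euler integration of one quadratic integrate-and-fire neuron.
        Returns the firing rate in spikes per unit time (toy parameters)."""
        v, spikes = v_reset, 0
        for _ in range(int(T / dt)):
            v += dt * (v * v + I) / tau
            if v >= v_peak:        # spike: reset and count
                v = v_reset
                spikes += 1
        return spikes / T

    print(simulate_qif(I=1.0))
    ```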

  19. Numerical experiments on short-term meteorological effects on solar variability

    NASA Technical Reports Server (NTRS)

    Somerville, R. C. J.; Hansen, J. E.; Stone, P. H.; Quirk, W. J.; Lacis, A. A.

    1975-01-01

    A set of numerical experiments was conducted to test the short-range sensitivity of a large atmospheric general circulation model to changes in solar constant and ozone amount. On the basis of the results of 12-day sets of integrations with very large variations in these parameters, it is concluded that realistic variations would produce insignificant meteorological effects. Any causal relationships between solar variability and weather, for time scales of two weeks or less, rely upon changes in parameters other than solar constant or ozone amounts, or upon mechanisms not yet incorporated in the model.

  20. A new technology for determining transport parameters in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conca, J.L.; Wright, J.

    The UFA Method can directly and rapidly measure transport parameters for any porous medium over a wide range of water contents and conditions. UFA results for subsurface sediments at a mixed-waste disposal site at the Hanford Site in Washington State provided the data necessary for detailed hydrostratigraphic mapping, subsurface flux and recharge distributions, and subsurface chemical mapping. Seven hundred unsaturated conductivity measurements along with pristine pore water extractions were obtained in only six months using the UFA. These data are used to provide realistic information to conceptual models, predictive models and restoration strategies.

  1. Using Perturbative Least Action to Reconstruct Redshift-Space Distortions

    NASA Astrophysics Data System (ADS)

    Goldberg, David M.

    2001-05-01

    In this paper, we present a redshift-space reconstruction scheme that is analogous to and extends the perturbative least action (PLA) method described by Goldberg & Spergel. We first show that this scheme is effective in reconstructing even nonlinear observations. We then suggest that by varying the cosmology to minimize the quadrupole moment of a reconstructed density field, it may be possible to lower the error bars on the redshift distortion parameter, β, as well as to break the degeneracy between the linear bias parameter, b, and ΩM. Finally, we discuss how PLA might be applied to realistic redshift surveys.

  2. Depigmented skin and phantom color measurements for realistic prostheses.

    PubMed

    Tanner, Paul; Leachman, Sancy; Boucher, Kenneth; Ozçelik, Tunçer Burak

    2014-02-01

    The purpose of this study was to test the hypothesis that regardless of human skin phototype, areas of depigmented skin, as seen in vitiligo, are optically indistinguishable among skin phototypes. The average of the depigmented skin measurements can be used to develop the base color of realistic prostheses. Data was analyzed from 20 of 32 recruited vitiligo study participants. Diffuse reflectance spectroscopy measurements were made from depigmented skin and adjacent pigmented skin, then compared with 66 pigmented polydimethylsiloxane phantoms to determine pigment concentrations in turbid media for making realistic facial prostheses. The Area Under spectral intensity Curve (AUC) was calculated for average spectroscopy measurements of pigmented sites in relation to skin phototype (P = 0.0505) and depigmented skin in relation to skin phototype (P = 0.59). No significant relationship exists between skin phototypes and depigmented skin spectroscopy measurements. The average of the depigmented skin measurements (AUC 19,129) was the closest match to phantom 6.4 (AUC 19,162). Areas of depigmented skin are visibly indistinguishable per skin phototype, yet spectrometry shows that depigmented skin measurements varied and were unrelated to skin phototype. Possible sources of optical variation of depigmented skin include age, body site, blood flow, quantity/quality of collagen, and other chromophores. The average of all depigmented skin measurements can be used to derive the pigment composition and concentration for realistic facial prostheses. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
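    The AUC comparison used to match skin to phantoms amounts to integrating each reflectance spectrum and picking the nearest phantom. The sketch below uses fabricated spectra purely to show the mechanics; the phantom labels echo the "6.4" naming in the abstract, but the numbers are made up.

    ```python
    import numpy as np

    def spectral_auc(wl, refl):
        """Trapezoid-rule area under a diffuse-reflectance spectrum."""
        return np.sum(0.5 * (refl[1:] + refl[:-1]) * np.diff(wl))

    # Hypothetical comparison: pick the phantom whose AUC is closest.
    wl = np.linspace(400.0, 700.0, 151)            # wavelength grid, nm
    skin = 0.4 + 0.2 * (wl - 400.0) / 300.0        # fabricated skin spectrum
    phantoms = {"6.3": skin * 0.95, "6.4": skin * 1.001, "6.5": skin * 1.08}
    best = min(phantoms, key=lambda k: abs(spectral_auc(wl, phantoms[k])
                                           - spectral_auc(wl, skin)))
    print(best)                                    # -> "6.4"
    ```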

  3. Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework

    PubMed Central

    Kroes, Thomas; Post, Frits H.; Botha, Charl P.

    2012-01-01

    The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, the more realistic lighting in particular has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques also to direct volume rendering (DVR). With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made our framework available under a permissive open source license. PMID:22768292

  4. Powerful model for the point source sky: Far-ultraviolet and enhanced midinfrared performance

    NASA Technical Reports Server (NTRS)

    Cohen, Martin

    1994-01-01

    I report further developments of the Wainscoat et al. (1992) model originally created for the point source infrared sky. The already detailed and realistic representation of the Galaxy (disk, spiral arms and local spur, molecular ring, bulge, spheroid) has been improved, guided by CO surveys of local molecular clouds, and by the inclusion of a component to represent Gould's Belt. The newest version of the model is very well validated by Infrared Astronomical Satellite (IRAS) source counts. A major new aspect is the extension of the same model down to the far ultraviolet. I compare predicted and observed far-ultraviolet source counts from the Apollo 16 'S201' experiment (1400 A) and the TD1 satellite (for the 1565 A band).

  5. The foodscape: classification and field validation of secondary data sources.

    PubMed

    Lake, Amelia A; Burgoine, Thomas; Greenhalgh, Fiona; Stamp, Elaine; Tyrrell, Rachel

    2010-07-01

    The aims were to develop a food environment classification tool and to test the acceptability and validity of three secondary sources of food environment data within a defined urban area of Newcastle-Upon-Tyne, using a field validation method. A 21-point classification tool (with 77 sub-categories) was developed. The fieldwork recorded 617 establishments selling food and/or food products. In the sensitivity analysis of the secondary sources against fieldwork, the Newcastle City Council data performed well (83.6%), while Yell.com and the Yellow Pages were low (51.2% and 50.9%, respectively). To improve the quality of secondary data, multiple sources should be used in order to achieve a realistic picture of the foodscape. 2010 Elsevier Ltd. All rights reserved.

  6. An infrared sky model based on the IRAS point source data

    NASA Technical Reports Server (NTRS)

    Cohen, Martin; Walker, Russell; Wainscoat, Richard; Volk, Kevin; Walker, Helen; Schwartz, Deborah

    1990-01-01

    A detailed model for the infrared point source sky is presented that comprises geometrically and physically realistic representations of the galactic disk, bulge, spheroid, spiral arms, molecular ring, and absolute magnitudes. The model was guided by a parallel Monte Carlo simulation of the Galaxy. The content of the galactic source table constitutes an excellent match to the 12 micrometer luminosity function in the simulation, as well as the luminosity functions at V and K. Models are given for predicting the density of asteroids to be observed, and the diffuse background radiance of the Zodiacal cloud. The model can be used to predict the character of the point source sky expected for observations from future infrared space experiments.

  7. Neurobiologically realistic determinants of self-organized criticality in networks of spiking neurons.

    PubMed

    Rubinov, Mikail; Sporns, Olaf; Thivierge, Jean-Philippe; Breakspear, Michael

    2011-06-01

    Self-organized criticality refers to the spontaneous emergence of self-similar dynamics in complex systems poised between order and randomness. The presence of self-organized critical dynamics in the brain is theoretically appealing and is supported by recent neurophysiological studies. Despite this, the neurobiological determinants of these dynamics have not been previously sought. Here, we systematically examined the influence of such determinants in hierarchically modular networks of leaky integrate-and-fire neurons with spike-timing-dependent synaptic plasticity and axonal conduction delays. We characterized emergent dynamics in our networks by distributions of active neuronal ensemble modules (neuronal avalanches) and rigorously assessed these distributions for power-law scaling. We found that spike-timing-dependent synaptic plasticity enabled a rapid phase transition from random subcritical dynamics to ordered supercritical dynamics. Importantly, modular connectivity and low wiring cost broadened this transition, and enabled a regime indicative of self-organized criticality. The regime only occurred when modular connectivity, low wiring cost and synaptic plasticity were simultaneously present, and the regime was most evident when between-module connection density scaled as a power-law. The regime was robust to variations in other neurobiologically relevant parameters and favored systems with low external drive and strong internal interactions. Increases in system size and connectivity facilitated internal interactions, permitting reductions in external drive and facilitating convergence of postsynaptic-response magnitude and synaptic-plasticity learning rate parameter values towards neurobiologically realistic levels. We hence infer a novel association between self-organized critical neuronal dynamics and several neurobiologically realistic features of structural connectivity. The central role of these features in our model may reflect their importance for neuronal information processing.
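    Rigorously assessing avalanche-size distributions for power-law scaling typically starts from the maximum-likelihood exponent estimate of Clauset et al. (2009). The sketch below implements the continuous-data estimator and checks it on synthetic sizes drawn from a known exponent; the goodness-of-fit testing against alternative distributions that a rigorous assessment requires is omitted.

    ```python
    import numpy as np

    def powerlaw_mle_alpha(sizes, s_min=1.0):
        """Continuous-data MLE for a power-law exponent (Clauset et al. 2009):
        alpha = 1 + n / sum(ln(s / s_min)) over sizes s >= s_min."""
        s = np.asarray([x for x in sizes if x >= s_min], dtype=float)
        return 1.0 + len(s) / np.log(s / s_min).sum()

    # Avalanche sizes drawn from a true alpha = 1.5 power law (toy check):
    # for P(>s) = s**-(alpha-1), inverse-CDF sampling gives s = U**(-1/(alpha-1)).
    rng = np.random.default_rng(3)
    sizes = (1.0 - rng.random(5000)) ** (-1.0 / 0.5)
    print(powerlaw_mle_alpha(sizes))               # close to 1.5
    ```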

  8. Confirmation of saturation equilibrium conditions in crater populations

    NASA Technical Reports Server (NTRS)

    Hartmann, William K.; Gaskell, Robert W.

    1993-01-01

    We have continued work on realistic numerical models of cratered surfaces, as first reported at last year's LPSC. We confirm the saturation equilibrium level with a new, independent test. One of us has developed a realistic computer simulation of a cratered surface. The model starts with a smooth surface or fractal topography, and adds primary craters according to the cumulative power law with exponent -1.83, as observed on lunar maria and Martian plains. Each crater has an ejecta blanket with the volume of the crater, feathering out to a distance of 4 crater radii. We use the model to test the levels of saturation equilibrium reached in naturally occurring systems, by increasing crater density and observing its dependence on various parameters. In particular, we have tested to see if these artificial systems reach the level found by Hartmann on heavily cratered planetary surfaces, hypothesized to be the natural saturation equilibrium level. This year's work gives the first results of a crater population that includes secondaries. Our model 'Gaskell-4' (September, 1992) includes primaries as described above, but also includes a secondary population, defined by exponent -4. We allowed the largest secondary from each primary to be 0.10 times the size of the primary. These parameters will be changed to test their effects in future models. The model gives realistic images of a cratered surface although it appears richer in secondaries than real surfaces are. The effect of running the model toward saturation gives interesting results for the diameter distribution. Our most heavily cratered surface had the input number of primary craters reach about 0.65 times the hypothesized saturation equilibrium, but the input number rises to more than 100 times that level for secondaries below 1.4 km in size.
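    Populating a synthetic surface with craters that follow a cumulative power law with exponent -1.83 is a single inverse-transform draw. The sketch below shows that sampling step with assumed size limits; the ejecta-blanket bookkeeping, the secondary population (exponent -4, largest secondary 0.10 of its primary), and the topography updates are the additional work the full model does.

    ```python
    import numpy as np

    def sample_crater_diameters(n, d_min=1.0, slope=1.83, rng=None):
        """Draw diameters from the cumulative power law N(>D) ~ D**-slope
        via inverse-transform sampling (d_min and n are assumed inputs)."""
        rng = rng or np.random.default_rng()
        return d_min * rng.random(n) ** (-1.0 / slope)

    rng = np.random.default_rng(1)
    d = sample_crater_diameters(10_000, rng=rng)
    xy = rng.random((10_000, 2)) * 100.0       # random centers on a 100x100 map
    print(d.min(), d.max())
    ```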

  9. A phantom design for assessment of detectability in PET imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollenweber, Scott D., E-mail: scott.wollenweber@g

    2016-09-15

    Purpose: The primary clinical role of positron emission tomography (PET) imaging is the detection of anomalous regions of (18)F-FDG uptake, which are often indicative of malignant lesions. The goal of this work was to create a task-configurable fillable phantom for realistic measurements of detectability in PET imaging. Design goals included simplicity, adjustable feature size, realistic size and contrast levels, and inclusion of a lumpy (i.e., heterogeneous) background. Methods: The detection targets were hollow 3D-printed dodecahedral nylon features. The exostructure sphere-like features created voids in a background of small, solid non-porous plastic (acrylic) spheres inside a fillable tank. The features filled at full concentration while the background concentration was reduced due to filling only between the solid spheres. Results: Multiple iterations of feature size and phantom construction were used to determine a configuration at the limit of detectability for a PET/CT system. A full-scale design used a 20 cm uniform cylinder (head-size) filled with a fixed pattern of features at a contrast of approximately 3:1. Known signal-present and signal-absent PET sub-images were extracted from multiple scans of the same phantom, with detectability in a challenging (i.e., useful) range. These images enabled calculation and comparison of quantitative observer detectability metrics between scanner designs and image reconstruction methods. The phantom design has several advantages, including filling simplicity, wall-less contrast features, control of the detectability range via feature size, and a clinically realistic lumpy background. Conclusions: This phantom provides a practical method for testing and comparison of lesion detectability as a function of imaging system, acquisition parameters, and image reconstruction methods and parameters.

  10. Chemically Realistic Tetrahedral Lattice Models for Polymer Chains: Application to Polyethylene Oxide.

    PubMed

    Dietschreit, Johannes C B; Diestler, Dennis J; Knapp, Ernst W

    2016-05-10

    To speed up the generation of an ensemble of poly(ethylene oxide) (PEO) polymer chains in solution, a tetrahedral lattice model possessing the appropriate bond angles is used. The distance between noncovalently bonded atoms is maintained at realistic values by generating chains with an enhanced degree of self-avoidance by a very efficient Monte Carlo (MC) algorithm. Potential energy parameters characterizing this lattice model are adjusted so as to mimic realistic PEO polymer chains in water simulated by molecular dynamics (MD), which serves as a benchmark. The MD data show that PEO chains have a fractal dimension of about two, in contrast to self-avoiding walk lattice models, which exhibit the fractal dimension of 1.7. The potential energy accounts for a mild hydrophobic effect (HYEF) of PEO and for a proper setting of the distribution between trans and gauche conformers. The potential energy parameters are determined by matching the Flory radius, the radius of gyration, and the fraction of trans torsion angles in the chain. A gratifying result is the excellent agreement of the pair distribution function and the angular correlation for the lattice model with the benchmark distribution. The lattice model allows for the precise computation of the torsional entropy of the chain. The generation of polymer conformations of the adjusted lattice model is at least 2 orders of magnitude more efficient than MD simulations of the PEO chain in explicit water. This method of generating chain conformations on a tetrahedral lattice can also be applied to other types of polymers with appropriate adjustment of the potential energy function. The efficient MC algorithm for generating chain conformations on a tetrahedral lattice is available for download at https://github.com/Roulattice/Roulattice .
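    The geometric core of the model, chains on a tetrahedral (diamond) lattice with self-avoidance, can be illustrated with a simple growth algorithm: bond directions alternate between the two tetrahedral direction sets, which fixes the bond angle, and occupied sites are rejected. This naive retry-on-dead-end sketch is far less efficient than the enhanced MC algorithm the paper describes, and it omits the potential energy entirely.

    ```python
    import numpy as np

    # Tetrahedral (diamond) lattice: bonds alternate between two direction
    # sets, which automatically enforces the ~109.5 degree bond angle.
    DIRS_A = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
    DIRS_B = [tuple(-c for c in d) for d in DIRS_A]

    def grow_chain(n_bonds, rng):
        """Grow one self-avoiding chain; returns None on a dead end (retry)."""
        pos = (0, 0, 0)
        visited = {pos}
        chain = [pos]
        for i in range(n_bonds):
            dirs = DIRS_A if i % 2 == 0 else DIRS_B
            options = [tuple(p + d for p, d in zip(pos, v)) for v in dirs]
            options = [q for q in options if q not in visited]
            if not options:
                return None
            pos = options[rng.integers(len(options))]
            visited.add(pos)
            chain.append(pos)
        return np.array(chain)

    rng = np.random.default_rng(2)
    chain = None
    while chain is None:
        chain = grow_chain(60, rng)
    r_end = np.linalg.norm(chain[-1] - chain[0])   # end-to-end (Flory) distance
    print(r_end)
    ```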

  11. Analysis of the surface heat balance over the world ocean

    NASA Technical Reports Server (NTRS)

    Esbenson, S. K.

    1981-01-01

    The net surface heat fluxes over the global ocean for all calendar months were evaluated to obtain a formula of the form Qs = Q2(T*A - Ts), where Qs is the net surface heat flux, Ts is the sea surface temperature, T*A is the apparent atmospheric equilibrium temperature, and Q2 is the proportionality constant. Here T*A and Q2, derived from the original heat flux formulas, are functions of the surface meteorological parameters (e.g., surface wind speed, air temperature, dew point, etc.) and the surface radiation parameters. This formulation of the net surface heat flux together with climatological atmospheric parameters provides a realistic and computationally efficient upper boundary condition for oceanic climate modeling.
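    In code the parameterization itself is one line; the modeling effort lies entirely in deriving Q2 and T*A from the surface meteorological and radiation parameters. A minimal sketch with an assumed midlatitude value of Q2:

    ```python
    def net_surface_heat_flux(q2, t_a_star, t_s):
        """Qs = Q2 * (T*A - Ts): net heat into the ocean is proportional to
        the gap between the apparent atmospheric equilibrium temperature
        and the sea surface temperature."""
        return q2 * (t_a_star - t_s)

    # Illustrative numbers only: Q2 ~ 45 W m^-2 K^-1 is an assumed value.
    print(net_surface_heat_flux(45.0, 26.0, 25.0))   # -> 45.0 W m^-2
    ```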

  12. Exploring Neutrino Oscillation Parameter Space with a Monte Carlo Algorithm

    NASA Astrophysics Data System (ADS)

    Espejel, Hugo; Ernst, David; Cogswell, Bernadette; Latimer, David

    2015-04-01

    The χ2 (or likelihood) function for a global analysis of neutrino oscillation data is first calculated as a function of the neutrino mixing parameters. A computational challenge is to obtain the minima or the allowed regions for the mixing parameters. The conventional approach is to calculate the χ2 (or likelihood) function on a grid for a large number of points, and then marginalize over the likelihood function. As the number of parameters increases with the number of neutrinos, making the calculation numerically efficient becomes necessary. We implement a new Monte Carlo algorithm (D. Foreman-Mackey, D. W. Hogg, D. Lang and J. Goodman, Publications of the Astronomical Society of the Pacific, 125 306 (2013)) to determine its computational efficiency at finding the minima and allowed regions. We examine a realistic example to compare the historical and the new methods.
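    The cited algorithm is the affine-invariant ensemble sampler implemented in the emcee package, and its use follows a standard pattern: define a log-probability, initialize walkers, run, and marginalize by flattening the chain. The two-parameter log-probability below stands in for a real oscillation chi-squared surface; it is fabricated purely to make the snippet runnable.

    ```python
    import numpy as np
    import emcee

    def log_prob(theta):
        """Toy -chi^2/2 surface with two minima, mimicking the degenerate
        allowed regions of oscillation fits (not real data)."""
        s2t, dm2 = theta
        if not (0.0 < s2t < 1.0 and 0.0 < dm2 < 1.0):
            return -np.inf
        chi2 = 50 * (s2t - 0.3) ** 2 * (s2t - 0.7) ** 2 + 200 * (dm2 - 0.5) ** 2
        return -0.5 * chi2

    ndim, nwalkers = 2, 32
    p0 = np.random.rand(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 2000)
    samples = sampler.get_chain(discard=500, flat=True)
    print(np.percentile(samples, [16, 50, 84], axis=0))   # allowed regions
    ```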

  13. Global and Regional Impacts of HONO on the Chemical Composition of Clouds and Aerosols

    NASA Technical Reports Server (NTRS)

    Elshorbany, Y. F.; Crutzen, P. J.; Steil, B.; Pozzer, A.; Tost, H.; Lelieveld, J.

    2014-01-01

    Recently, realistic simulation of nitrous acid (HONO) based on the HONO/NOx ratio of 0.02 was found to have a significant impact on the global budgets of HOx (OH + HO2) and gas phase oxidation products in polluted regions, especially in winter when other photolytic sources are of minor importance. It has been reported that chemistry-transport models underestimate sulphate concentrations, mostly during winter. Here we show that simulating realistic HONO levels can significantly enhance aerosol sulphate (S(VI)) due to the increased formation of H2SO4. Even though in-cloud aqueous phase oxidation of dissolved SO2 (S(IV)) is the main source of S(VI), it appears that HONO-related enhancement of H2O2 does not significantly affect sulphate because of the predominantly S(IV)-limited conditions, except over eastern Asia. Nitrate is also increased via enhanced gaseous HNO3 formation and N2O5 hydrolysis on aerosol particles. Ammonium nitrate is enhanced in ammonia-rich regions but not under ammonia-limited conditions. Furthermore, particle number concentrations are also higher, accompanied by the transfer from hydrophobic to hydrophilic aerosol modes. This implies a significant impact on particle lifetime and cloud nucleating properties. The HONO-induced enhancements of all species studied are relatively strong in winter though negligible in summer. Simulating realistic HONO levels is found to improve the model-measurement agreement for sulphate aerosols, most apparently over the US. Our results underscore the importance of HONO for the atmospheric oxidizing capacity and corroborate the central role of cloud chemical processing of S(IV).

  14. Measuring neutron star tidal deformability with Advanced LIGO: A Bayesian analysis of neutron star-black hole binary observations

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Pürrer, Michael; Pfeiffer, Harald P.

    2017-02-01

    The pioneering discovery of gravitational waves (GWs) by Advanced LIGO has ushered us into an era of observational GW astrophysics. Compact binaries remain the primary target sources for GW observation, of which neutron star-black hole (NSBH) binaries form an important subset. GWs from NSBH sources carry signatures of (a) the tidal distortion of the neutron star by its companion black hole during inspiral, and (b) its potential tidal disruption near merger. In this paper, we present a Bayesian study of the measurability of neutron star tidal deformability Λ_NS ∝ (R/M)_NS^5 using observation(s) of inspiral-merger GW signals from disruptive NSBH coalescences, taking into account the crucial effect of black hole spins. First, we find that if nontidal templates are used to estimate source parameters for an NSBH signal, the bias introduced in the estimation of nontidal physical parameters will only be significant for loud signals with signal-to-noise ratios greater than ≃30. For similarly loud signals, we also find that we can begin to put interesting constraints on Λ_NS (factor of 1-2) with individual observations. Next, we study how a population of realistic NSBH detections will improve our measurement of neutron star tidal deformability. For an astrophysically likely population of disruptive NSBH coalescences, we find that 20-35 events are sufficient to constrain Λ_NS within ±25%-50%, depending on the neutron star equation of state. For these calculations we assume that LIGO will detect black holes with masses within the astrophysical mass gap. In case the mass gap remains preserved in NSBHs detected by LIGO, we estimate that approximately 25% additional detections will furnish comparable Λ_NS measurement accuracy. In both cases, we find that it is the loudest 5-10 events that provide most of the tidal information, and not the combination of tens of low-SNR events, thereby facilitating targeted numerical-GR follow-ups of NSBHs. We find these results encouraging, and recommend that an effort to measure Λ_NS be planned for upcoming NSBH observations with the LIGO-Virgo instruments.

  15. Simulation of radiofrequency ablation in real human anatomy.

    PubMed

    Zorbas, George; Samaras, Theodoros

    2014-12-01

    The objective of the current work was to simulate radiofrequency ablation treatment in computational models with realistic human anatomy, in order to investigate the effect of realistic geometry on the treatment outcome. The body sites considered in the study were liver, lung and kidney. One numerical model for each body site was obtained from Duke, a member of the IT'IS Virtual Family. A spherical tumour was embedded in each model and a single electrode was inserted into the tumour. The same excitation voltage was used in all cases to underline the differences in the resulting temperature rise due to the different anatomy at each body site investigated. The same numerical calculations were performed for a two-compartment model of the tissue geometry, as well as with the use of an analytical approximation for a single tissue compartment. Radiofrequency ablation (RFA) therapy appears efficient for tumours in liver and lung, but less efficient in kidney. Moreover, the time evolution of temperature for a realistic geometry differs from that for a two-compartment model, and even more so from that for an infinite homogeneous tissue model. However, it appears that the most critical parameters of computational models for RFA treatment planning are tissue properties rather than tissue geometry. Computational simulations of realistic anatomy models show that the conventional technique of a single electrode inside the tumour volume requires a careful choice of both the excitation voltage and treatment time in order to achieve effective treatment, since the ablation zone differs considerably for various body sites.
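
    Although the paper's own solver is not reproduced here, the temperature rise in such RFA simulations is commonly governed by the Pennes bioheat equation. A minimal 1D explicit finite-difference sketch, with generic tissue properties assumed purely for illustration:

        import numpy as np

        nx, dx, dt, steps = 101, 1e-3, 0.05, 1200        # 10 cm domain, 60 s
        k, rho, cp = 0.5, 1060.0, 3600.0                 # W/m/K, kg/m^3, J/kg/K
        w_b, rho_b, c_b, T_a = 6.4e-3, 1000.0, 4180.0, 37.0  # perfusion terms
        T = np.full(nx, 37.0)                            # start at body temperature
        q_rf = np.zeros(nx)
        q_rf[48:53] = 5e5                                # RF heat source, W/m^3

        alpha = k / (rho * cp)
        assert alpha * dt / dx**2 < 0.5                  # explicit stability limit
        for _ in range(steps):
            lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
            lap[0] = lap[-1] = 0.0                       # hold domain ends fixed
            perfusion = rho_b * c_b * w_b * (T_a - T)    # blood-perfusion heat sink
            T += dt * (alpha * lap + (perfusion + q_rf) / (rho * cp))
        print(f"peak temperature after 60 s: {T.max():.1f} degC")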

  16. Accurate Ray-tracing of Realistic Neutron Star Atmospheres for Constraining Their Parameters

    NASA Astrophysics Data System (ADS)

    Vincent, Frederic H.; Bejger, Michał; Różańska, Agata; Straub, Odele; Paumard, Thibaut; Fortin, Morgane; Madej, Jerzy; Majczyna, Agnieszka; Gourgoulhon, Eric; Haensel, Paweł; Zdunik, Leszek; Beldycki, Bartosz

    2018-03-01

    Thermal-dominated X-ray spectra of neutron stars in quiescent, transient X-ray binaries and neutron stars that undergo thermonuclear bursts are sensitive to mass and radius. The mass–radius relation of neutron stars depends on the equation of state (EoS) that governs their interior. Constraining this relation accurately is therefore of fundamental importance for understanding the nature of dense matter. In this context, we introduce a pipeline to calculate realistic model spectra of rotating neutron stars with hydrogen and helium atmospheres. An arbitrarily fast-rotating neutron star with a given EoS generates the spacetime in which the atmosphere emits radiation. We use the LORENE/NROTSTAR code to compute the spacetime numerically and the ATM24 code to solve the radiative transfer equations self-consistently. Emerging specific intensity spectra are then ray-traced through the neutron star's spacetime from the atmosphere to a distant observer with the GYOTO code. Here, we present and test our fully relativistic numerical pipeline. To discuss and illustrate the importance of realistic atmosphere models, we compare our model spectra to simpler models like the commonly used isotropic color-corrected blackbody emission. We highlight the importance of considering realistic model-atmosphere spectra together with relativistic ray-tracing to obtain accurate predictions. We also emphasize the crucial impact of the star's rotation on the observables. Finally, we close a controversy that has been ongoing in the literature in recent years, regarding the validity of the ATM24 code.

  17. Modeling Supermassive Black Holes in Cosmological Simulations

    NASA Astrophysics Data System (ADS)

    Tremmel, Michael

    My thesis work has focused on improving the implementation of supermassive black hole (SMBH) physics in cosmological hydrodynamic simulations. SMBHs are ubiquitous in massive galaxies, as well as bulge-less galaxies and dwarfs, and are thought to be a critical component of massive galaxy evolution. Still, much is unknown about how SMBHs form, grow, and affect their host galaxies. Cosmological simulations are an invaluable tool for understanding the formation of galaxies, self-consistently tracking their evolution with realistic merger and gas accretion histories. SMBHs are often modeled in these simulations (generally as a necessity to produce realistic massive galaxies), but their implementations are commonly simplified in ways that can limit what can be learned. Current and future observations are opening new windows into the lifecycle of SMBHs and their host galaxies, but require more detailed, physically motivated simulations. Within the novel framework I have developed, SMBHs 1) are seeded at early times without a priori assumptions of galaxy occupation, 2) grow in a way that accounts for the angular momentum of gas, and 3) experience realistic orbital evolution. I show how this model, properly tuned with a novel parameter optimization technique, results in realistic galaxies and SMBHs. Utilizing the unique ability of these simulations to capture the dynamical evolution of SMBHs, I present the first self-consistent prediction for the formation timescales of close SMBH pairs, precursors to SMBH binaries and merger events potentially detected by future gravitational wave experiments.

  18. Accelerator-based BNCT.

    PubMed

    Kreiner, A J; Baldo, M; Bergueiro, J R; Cartelli, D; Castell, W; Thatar Vento, V; Gomez Asoia, J; Mercuri, D; Padulo, J; Suarez Sandin, J C; Erhardt, J; Kesque, J M; Valda, A A; Debray, M E; Somacal, H R; Igarzabal, M; Minsky, D M; Herrera, M S; Capoulat, M E; Gonzalez, S J; del Grosso, M F; Gagetti, L; Suarez Anzorena, M; Gun, M; Carranza, O

    2014-06-01

    The activity in accelerator development for accelerator-based BNCT (AB-BNCT) both worldwide and in Argentina is described. Projects in Russia, UK, Italy, Japan, Israel, and Argentina to develop AB-BNCT around different types of accelerators are briefly presented. In particular, the present status and recent progress of the Argentine project will be reviewed. The topics will cover: intense ion sources, accelerator tubes, transport of intense beams, beam diagnostics, the (9)Be(d,n) reaction as a possible neutron source, Beam Shaping Assemblies (BSA), a treatment room, and treatment planning in realistic cases. © 2013 Elsevier Ltd. All rights reserved.

  19. Parameter calibration for synthesizing realistic-looking variability in offline handwriting

    NASA Astrophysics Data System (ADS)

    Cheng, Wen; Lopresti, Dan

    2011-01-01

    Motivated by the widely accepted principle that the more training data, the better a recognition system performs, we conducted experiments asking human subjects to evaluate a mixture of real English handwritten text lines and text lines altered from existing handwriting with varying degrees of distortion. The idea of generating synthetic handwriting is based on a perturbation method by T. Varga and H. Bunke that distorts an entire text line. There were two purposes to our experiments. First, we wanted to calibrate distortion parameter settings for Varga and Bunke's perturbation model. Second, we intended to compare the effects of parameter settings on different writing styles: block, cursive and mixed. From the preliminary experimental results, we determined appropriate ranges for the amplitude parameter, and found that parameter settings should be altered for different handwriting styles. With the proper parameter settings, it should be possible to generate large amounts of training and testing data for building better off-line handwriting recognition systems.
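
    As an illustration of the kind of text-line distortion being calibrated (not Varga and Bunke's exact model), one can shift pixel columns along a smooth cosine profile, with the amplitude playing the role of the calibrated distortion strength:

        import numpy as np

        def distort_line(img, amplitude=3.0, wavelength=120.0, phase=0.0):
            h, w = img.shape
            out = np.empty_like(img)
            for x in range(w):
                shift = int(round(amplitude * np.cos(2 * np.pi * x / wavelength + phase)))
                src = np.clip(np.arange(h) - shift, 0, h - 1)
                out[:, x] = img[src, x]        # shift this pixel column vertically
            return out

        line = np.full((60, 400), 255, dtype=np.uint8)   # white canvas
        line[28:32, 20:380] = 0                          # a fake "text line" stroke
        warped = distort_line(line, amplitude=4.0)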

  20. Atomistic analysis of valley-orbit hybrid states and inter-dot tunnel rates in a Si double quantum dot

    NASA Astrophysics Data System (ADS)

    Ferdous, Rifat; Rahman, Rajib; Klimeck, Gerhard

    2014-03-01

    Silicon quantum dots are promising candidates for solid-state quantum computing due to the long spin coherence times in silicon, arising from small spin-orbit interaction and a nearly spin-free host lattice. However, the conduction band valley degeneracy adds an additional degree of freedom to the electronic structure, complicating the encoding and operation of qubits. Although the valley and the orbital indices can be uniquely identified in an ideal silicon quantum dot, atomic-scale disorder mixes valley and orbital states in realistic dots. Such valley-orbit hybridization strongly influences the inter-dot tunnel rates. Using a full-band atomistic tight-binding method, we analyze the effect of atomic-scale interface disorder in a silicon double quantum dot. A Fourier transform of the tight-binding wavefunctions helps to analyze the effect of disorder on valley-orbit hybridization. We also calculate and compare inter-dot inter-valley and intra-valley tunneling in the presence of realistic disorder, such as interface tilt, surface roughness, alloy disorder, and interface charges. The method provides a useful way to compute electronic states in realistically disordered systems without any a posteriori fitting parameters.
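
    The Fourier-transform analysis mentioned above can be illustrated with a toy 1D "wavefunction" that mixes the two z-valleys at ±k0; the spectral power near each valley minimum gives the valley weights. All numbers below are illustrative, not from the paper.

        import numpy as np

        a0 = 0.543e-9                          # Si lattice constant, m
        k0 = 0.84 * 2 * np.pi / a0             # valley minimum along z
        z = np.linspace(-10e-9, 10e-9, 4096)
        envelope = np.exp(-z**2 / (2 * (2e-9) ** 2))
        amp_p, amp_m = 0.8, 0.6                # valley amplitudes (toy values)
        psi = envelope * (amp_p * np.exp(1j * k0 * z) + amp_m * np.exp(-1j * k0 * z))

        psi_k = np.fft.fftshift(np.fft.fft(psi))
        k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(z.size, d=z[1] - z[0]))
        power = np.abs(psi_k) ** 2
        w_plus = power[np.abs(k - k0) < 0.1 * k0].sum()
        w_minus = power[np.abs(k + k0) < 0.1 * k0].sum()
        print(w_plus / (w_plus + w_minus))     # -> amp_p^2/(amp_p^2+amp_m^2) = 0.64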

  1. A real-time photo-realistic rendering algorithm of ocean color based on bio-optical model

    NASA Astrophysics Data System (ADS)

    Ma, Chunyong; Xu, Shu; Wang, Hongsong; Tian, Fenglin; Chen, Ge

    2016-12-01

    A real-time photo-realistic rendering algorithm for ocean color is introduced in this paper, which considers the impact of an ocean bio-optical model. The bio-optical model mainly involves phytoplankton, colored dissolved organic material (CDOM), inorganic suspended particles, etc., which make different contributions to the absorption and scattering of light. We decompose the emergent light of the ocean surface into the light reflected from the sun and the sky and the subsurface scattering light. We establish an ocean surface transmission model based on the ocean bidirectional reflectance distribution function (BRDF) and the Fresnel law, and this model's outputs serve as the incident light parameters for the subsurface scattering. Using an ocean subsurface scattering algorithm combined with the bio-optical model, we compute the emergent scattered radiation in different directions. Then, we blend in the reflection of sunlight and sky light to implement real-time ocean color rendering on the graphics processing unit (GPU). Finally, we use the radiance reflectances calculated by the Hydrolight radiative transfer model and by our algorithm to validate the physical realism of our method, and the results show that our algorithm can achieve highly realistic ocean color scenes in real time.
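
    The surface-reflection ingredient described above can be sketched with Schlick's approximation to the Fresnel reflectance, which is commonly used to blend sun and sky reflection against the subsurface term. The water refractive index of 1.34 is a standard assumption, not a value taken from the paper.

        import numpy as np

        def fresnel_schlick(cos_theta, n1=1.0, n2=1.34):
            r0 = ((n1 - n2) / (n1 + n2)) ** 2  # reflectance at normal incidence
            return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

        print(fresnel_schlick(np.cos(np.radians(0.0))))   # ~0.02 at normal incidence
        print(fresnel_schlick(np.cos(np.radians(80.0))))  # ~0.40 near grazing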

  2. Mapping algorithm for freeform construction using non-ideal light sources

    NASA Astrophysics Data System (ADS)

    Li, Chen; Michaelis, D.; Schreiber, P.; Dick, L.; Bräuer, A.

    2015-09-01

    Using conventional mapping algorithms for the construction of illumination freeform optics, arbitrary target patterns can be obtained for idealized sources, e.g. collimated light or point sources. Each freeform surface element generates an image point at the target, and the light intensity of an image point corresponds to the area of the freeform surface element that generates it. For sources with a pronounced extension and ray divergence, e.g. an LED at a small source-freeform distance, the image points are blurred, and the blurred patterns may differ from point to point. Moreover, due to Fresnel losses and vignetting, the relationship between the light intensity of image points and the area of freeform surface elements becomes complicated. These individual light distributions of each freeform element are taken into account in the mapping algorithm presented here. To this end, steepest-descent procedures are used to adapt the mapping goal: a structured target pattern for an optics system with an ideal source is computed by applying the corresponding linear optimization matrices. Special weighting and smoothing factors are included in the procedure to achieve certain edge conditions and to ensure the manufacturability of the freeform surface. The linear optimization matrices, which are the lighting distribution patterns of the individual freeform surface elements, are obtained by conventional ray tracing with a realistic source. Nontrivial source geometries, like LED irregularities due to bonding or source fine structures, and complex ray divergence behavior can easily be considered. Additionally, Fresnel losses, vignetting and even stray light are taken into account. After the optimization iterations, the initial mapping goal can be achieved with a realistic source by the optics system that provides the structured target pattern with an ideal source. The algorithm is applied to several design examples. A few simple tasks are presented to discuss the abilities and limitations of this method. A homogeneous LED-illumination system design is also presented in which, despite a strongly tilted incident direction, a homogeneous distribution is achieved with a rather compact optics system and a short working distance using a relatively large LED source. It is shown that the lighting distribution patterns of the individual freeform surface elements can differ significantly from one another. The generation of a structured target pattern, applying the weighting and smoothing factors, is discussed. Finally, freeform designs for much more complex sources, like clusters of LED sources, are presented.
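
    The optimization loop described above can be caricatured as steepest descent on a least-squares mapping objective, with a linear "pattern matrix" standing in for the ray-traced light distributions of the freeform elements. This is a schematic reading of the abstract, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        P = rng.uniform(0.0, 1.0, size=(50, 20))   # pattern of each freeform element
        target = rng.uniform(0.5, 1.0, size=50)    # desired target-plane distribution
        w = np.ones(20)                            # element weights (surface areas)

        for _ in range(500):
            residual = P @ w - target
            grad = 2 * P.T @ residual              # gradient of ||P w - target||^2
            w = np.maximum(w - 1e-3 * grad, 0.0)   # descent step, keep weights >= 0
        print(np.linalg.norm(P @ w - target))      # residual shrinks over iterations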

  3. Cortical sources of ERP in prosaccade and antisaccade eye movements using realistic source models

    PubMed Central

    Richards, John E.

    2013-01-01

    The cortical sources of event-related potentials (ERPs) using realistic source models were examined in a prosaccade and antisaccade procedure. College-age participants were presented with a preparatory interval and a target that indicated the direction of the eye movement that was to be made. In some blocks a cue was given in the peripheral location where the target was to be presented and in other blocks no cue was given. In Experiment 1 the prosaccade and antisaccade trials were presented randomly within a block; in Experiment 2 procedures were compared in which either prosaccade and antisaccade trials were mixed in the same block, or trials were presented in separate blocks with only one type of eye movement. There was a central negative slow wave occurring prior to the target, a slow positive wave over the parietal scalp prior to the saccade, and a parietal spike potential immediately prior to saccade onset. Cortical source analysis of these ERP components showed a common set of sources in the ventral anterior cingulate and orbital frontal gyrus for the presaccadic positive slow wave and the spike potential. In Experiment 2 the same cued and non-cued blocks were used, but prosaccade and antisaccade trials were presented in separate blocks. This resulted in a smaller difference in reaction time between prosaccade and antisaccade trials. Unlike the first experiment, the central negative slow wave was larger on antisaccade than on prosaccade trials, and this effect on the ERP component had its cortical source primarily in the parietal and mid-central cortical areas contralateral to the direction of the eye movement. These results suggest that blocking prosaccade and antisaccade trials results in preparatory or set effects that decrease reaction time, eliminate some cueing effects, and are based on contralateral parietal-central brain areas. PMID:23847476

  4. Training the EFL Teacher--An Illustrated Commentary.

    ERIC Educational Resources Information Center

    Rees, Alun L. W.

    1970-01-01

    Despite current interest in the field of teaching English as a foreign language, there is still cause for dissatisfaction with the training of EFL teachers, both in Britain and abroad. The author presents, in the form of a "duologue," some pertinent views from a variety of sources, and stresses the need for a more realistic approach to…

  5. Woody biomass outreach in the southern United States: A case study

    Treesearch

    Martha Monroe; Annie Oxarart

    2011-01-01

    Woody biomass is one potential renewable energy source that is technically feasible where environmental and economic factors are promising. It becomes a realistic option when it is also socially acceptable. Public acceptance and support of wood to energy proposals require community education and outreach. The Wood to Energy Outreach Program provides science-based...

  6. Make Your Own Paint Chart: A Realistic Context for Developing Proportional Reasoning with Ratios

    ERIC Educational Resources Information Center

    Beswick, Kim

    2011-01-01

    Proportional reasoning has been recognised as a crucial focus of mathematics in the middle years and also as a frequent source of difficulty for students (Lamon, 2007). Proportional reasoning concerns the equivalence of pairs of quantities that are related multiplicatively; that is, equivalent ratios including those expressed as fractions and…

  7. The Foreign Language Teacher as "Con Artist."

    ERIC Educational Resources Information Center

    Hughes, Jean S.

    A teacher's experiences in acquiring realia (mostly food-related) for her junior high school French classes are described. The collection of realistic props proved to be both a small adventure in itself and the source of a rewarding change in classroom instruction. The use of a simulated store, in which students bought and sold the imitation food,…

  8. Statistical techniques for sampling and monitoring natural resources

    Treesearch

    Hans T. Schreuder; Richard Ernst; Hugo Ramirez-Maldonado

    2004-01-01

    We present the statistical theory of inventory and monitoring from a probabilistic point of view. We start with the basics and show the interrelationships between designs and estimators illustrating the methods with a small artificial population as well as with a mapped realistic population. For such applications, useful open source software is given in Appendix 4....

  9. Fire, ice, water, and dirt: A simple climate model

    NASA Astrophysics Data System (ADS)

    Kroll, John

    2017-07-01

    A simple paleoclimate model was developed as a modeling exercise. The model is a lumped parameter system consisting of an ocean (water), land (dirt), glacier, and sea ice (ice) and driven by the sun (fire). In comparison with other such models, its uniqueness lies in its relative simplicity while still yielding good results. For nominal values of parameters, the system is very sensitive to small changes in the parameters, yielding equilibrium, steady oscillations, and catastrophes such as freezing or boiling oceans. However, stable solutions can be found, especially naturally oscillating solutions. For nominally realistic conditions, natural periods of order 100 kyr are obtained, and chaos ensues if the Milankovitch orbital forcing is applied. An analysis of a truncated system shows that the naturally oscillating solution is a limit cycle with the characteristics of a relaxation oscillation in the two major dependent variables, the ocean temperature and the glacier ice extent. The key to getting oscillations is having the effective emissivity decrease with temperature while, at the same time, the effective ocean albedo decreases with increasing glacier extent. Results of the original model compare favorably to the proxy data for ice mass variation, but not for temperature variation. However, modifications to the effective emissivity and albedo can be made to yield much more realistic results. The primary conclusion supports the view of Saltzman [Clim. Dyn. 5, 67-78 (1990)] that the external Milankovitch orbital forcing is not sufficient to explain the dominant 100 kyr period in the data.

  10. Fire, ice, water, and dirt: A simple climate model.

    PubMed

    Kroll, John

    2017-07-01

    A simple paleoclimate model was developed as a modeling exercise. The model is a lumped parameter system consisting of an ocean (water), land (dirt), glacier, and sea ice (ice) and driven by the sun (fire). In comparison with other such models, its uniqueness lies in its relative simplicity while still yielding good results. For nominal values of parameters, the system is very sensitive to small changes in the parameters, yielding equilibrium, steady oscillations, and catastrophes such as freezing or boiling oceans. However, stable solutions can be found, especially naturally oscillating solutions. For nominally realistic conditions, natural periods of order 100 kyr are obtained, and chaos ensues if the Milankovitch orbital forcing is applied. An analysis of a truncated system shows that the naturally oscillating solution is a limit cycle with the characteristics of a relaxation oscillation in the two major dependent variables, the ocean temperature and the glacier ice extent. The key to getting oscillations is having the effective emissivity decrease with temperature while, at the same time, the effective ocean albedo decreases with increasing glacier extent. Results of the original model compare favorably to the proxy data for ice mass variation, but not for temperature variation. However, modifications to the effective emissivity and albedo can be made to yield much more realistic results. The primary conclusion supports the view of Saltzman [Clim. Dyn. 5, 67-78 (1990)] that the external Milankovitch orbital forcing is not sufficient to explain the dominant 100 kyr period in the data.
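
    The feedback mechanism described in these two records can be illustrated with a toy two-variable energy-balance integration: effective emissivity falling with temperature and effective albedo falling with glacier extent. The equations and constants below are illustrative stand-ins, not the paper's model, and whether a run settles, oscillates, or diverges depends sensitively on them, echoing the sensitivity the abstract reports.

        import numpy as np

        S = 340.0                              # absorbed-scale insolation, W/m^2
        sigma = 5.67e-8                        # Stefan-Boltzmann constant
        C = 3.0e8                              # ocean heat capacity, J/m^2/K
        dt, years = 3.15e7, 100_000            # 1-year steps, ~100 kyr total

        def emissivity(T):                     # effective emissivity falls with T
            return np.clip(0.75 - 0.003 * (T - 273.0), 0.4, 0.9)

        def albedo(ice):                       # effective albedo falls with ice extent
            return np.clip(0.4 - 0.15 * ice, 0.1, 0.5)

        T, ice = 280.0, 0.5                    # ocean temperature (K), ice extent (0..1)
        history = []
        for _ in range(years):
            T += dt * (S * (1 - albedo(ice)) - emissivity(T) * sigma * T**4) / C
            ice = float(np.clip(ice + 0.002 * (275.0 - T), 0.0, 1.0))  # ice grows when cold
            history.append((T, ice))           # inspect for equilibria or cycles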

  11. Realistic loophole-free Bell test with atom-photon entanglement

    NASA Astrophysics Data System (ADS)

    Teo, C.; Araújo, M.; Quintino, M. T.; Minář, J.; Cavalcanti, D.; Scarani, V.; Terra Cunha, M.; França Santos, M.

    2013-07-01

    The establishment of nonlocal correlations, guaranteed through the violation of a Bell inequality, is not only important from a fundamental point of view but constitutes the basis for device-independent quantum information technologies. Although several nonlocality tests have been conducted so far, all of them suffered from either locality or detection loopholes. Among the proposals for overcoming these problems are the use of atom-photon entanglement and hybrid photonic measurements (for example, photodetection and homodyning). Recent studies have suggested that the use of atom-photon entanglement can lead to Bell inequality violations with moderate transmission and detection efficiencies. Here we combine these ideas and propose an experimental setup realizing a simple atom-photon entangled state that can be used to obtain nonlocality when considering realistic experimental parameters including detection efficiencies and losses due to required propagation distances.
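
    For context, the figure of merit behind such tests is the CHSH quantity, which any local model bounds by |S| <= 2. A minimal sketch for a maximally entangled state, where the quantum correlation is E(a, b) = -cos(a - b), evaluated at the standard optimal settings:

        import numpy as np

        def E(a, b):
            return -np.cos(a - b)              # singlet-state correlation

        a1, a2 = 0.0, np.pi / 2                # Alice's settings
        b1, b2 = np.pi / 4, -np.pi / 4         # Bob's settings
        S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
        print(abs(S))                          # 2*sqrt(2) ~ 2.828 > 2

    In a loophole-free test, finite detection efficiency and transmission losses eat into this margin, which is why the abstract's attention to realistic efficiencies and propagation distances matters.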

  12. Simulation of HLNC and NCC measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ming-Shih; Teichmann, T.; De Ridder, P.

    1994-03-01

    This report discusses an automatic method of simulating the results of High Level Neutron Coincidence Counting (HLNC) and Neutron Collar Coincidence Counting (NCC) measurements to facilitate the safeguards inspectors' understanding and use of these instruments under realistic conditions. This would otherwise be expensive and time-consuming, except at sites designed to handle radioactive materials and having the necessary variety of fuel elements and other samples. The simulation must thus include the behavior of the instruments for variably constituted and composed fuel elements (including poison rods and Gd loading), and must display the changes in the count rates as a function of these characteristics, as well as of various instrumental parameters. Such a simulation is an efficient way of accomplishing the required familiarization and training of the inspectors by providing a realistic reproduction of the results of such measurements.

  13. Model-based Bayesian signal extraction algorithm for peripheral nerves

    NASA Astrophysics Data System (ADS)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model-based method that operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared on these test data to previous algorithms: beamforming and a Bayesian spatial filtering method. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to threefold, and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of controlling a prosthetic limb.
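
    One of the reference methods named above, beamforming, can be sketched as a minimum-variance spatial filter on synthetic cuff channels. The lead-field vector, channel count, and noise level below are stand-ins, not values from the study.

        import numpy as np

        rng = np.random.default_rng(1)
        n_ch, n_t = 8, 2000
        s = rng.standard_normal(n_t)               # "fascicular" source signal
        a = rng.uniform(0.5, 1.5, n_ch)            # source-to-contact gain vector
        noise = 2.0 * rng.standard_normal((n_ch, n_t))   # external interference
        X = np.outer(a, s) + noise                 # recorded cuff channels

        R = X @ X.T / n_t                          # sample covariance
        w = np.linalg.solve(R, a)
        w /= a @ w                                 # unit gain toward the source
        s_hat = w @ X                              # extracted signal
        print(np.corrcoef(s, s_hat)[0, 1])         # correlation with the truth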

  14. 2.5D Modeling of TEM Data Applied to Hydrogeological Studies in the Paraná Basin, Brazil

    NASA Astrophysics Data System (ADS)

    Bortolozo, C. A.; Porsani, J. L.; Santos, F. M.

    2013-12-01

    The transient electromagnetic method (TEM) is used all over the world and has shown great potential in hydrogeological studies, hazardous waste site characterization, mineral exploration, general geological mapping, and geophysical reconnaissance. However, the behavior of TEM fields is very complex and not yet fully understood. Forward modeling is one of the most common and effective ways to understand the physical behavior and significance of the electromagnetic responses of a TEM sounding. Until now, there have been a limited number of solutions for the 2D forward problem for TEM. Rarer still are descriptions of the three-component response of a 3D source over a 2D earth, the so-called 2.5D case. The 2.5D approach is more realistic than the conventional 2D source previously used, since the source normally cannot be realistically represented by a 2D approximation (sources are typically square loops). At present the 2.5D model represents the only way of interpreting TEM data in terms of a complex earth, due to the prohibitive amount of computer time and storage required for a full 3D model. In this work we developed a TEM modeling program for understanding the different responses and how the magnetic and electric fields, produced by loop sources at the air-earth interface, behave in different geoelectrical distributions. The models used in the examples are proposed with a focus on hydrogeological studies, since the main objective of this work is to detect different kinds of aquifers in the Paraná sedimentary basin, São Paulo State, Brazil. The program was developed in MATLAB, a widespread language very common in the scientific community.

  15. Toward more realistic projections of soil carbon dynamics by Earth system models

    USGS Publications Warehouse

    Luo, Y.; Ahlström, Anders; Allison, Steven D.; Batjes, Niels H.; Brovkin, V.; Carvalhais, Nuno; Chappell, Adrian; Ciais, Philippe; Davidson, Eric A.; Finzi, Adien; Georgiou, Katerina; Guenet, Bertrand; Hararuk, Oleksandra; Harden, Jennifer; He, Yujie; Hopkins, Francesca; Jiang, L.; Koven, Charles; Jackson, Robert B.; Jones, Chris D.; Lara, M.; Liang, J.; McGuire, A. David; Parton, William; Peng, Changhui; Randerson, J.; Salazar, Alejandro; Sierra, Carlos A.; Smith, Matthew J.; Tian, Hanqin; Todd-Brown, Katherine E. O; Torn, Margaret S.; van Groenigen, Kees Jan; Wang, Ying; West, Tristram O.; Wei, Yaxing; Wieder, William R.; Xia, Jianyang; Xu, Xia; Xu, Xiaofeng; Zhou, T.

    2016-01-01

    Soil carbon (C) is a critical component of Earth system models (ESMs), and its diverse representations are a major source of the large spread across models in the terrestrial C sink from the third to fifth assessment reports of the Intergovernmental Panel on Climate Change (IPCC). Improving soil C projections is a high priority for Earth system modeling in the future IPCC and other assessments. To achieve this goal, we suggest that (1) model structures should reflect real-world processes, (2) parameters should be calibrated to match model outputs with observations, and (3) external forcing variables should accurately prescribe the environmental conditions that soils experience. First, most soil C cycle models simulate C input from litter production and C release through decomposition. The latter process has traditionally been represented by first-order decay functions, regulated primarily by temperature, moisture, litter quality, and soil texture. While this formulation captures macroscopic soil organic C (SOC) dynamics well, a better understanding is needed of the underlying mechanisms as related to microbial processes, depth-dependent environmental controls, and other processes that strongly affect soil C dynamics. Second, incomplete use of observations in model parameterization is a major cause of bias in soil C projections from ESMs. Optimal parameter calibration with both pool- and flux-based data sets through data assimilation is among the highest priorities for near-term research to reduce biases among ESMs. Third, external variables are represented inconsistently among ESMs, leading to differences in modeled soil C dynamics. We recommend the implementation of traceability analyses to identify how external variables and model parameterizations influence SOC dynamics in different ESMs. Overall, projections of the terrestrial C sink can be substantially improved when reliable data sets are available to select the most representative model structure, constrain parameters, and prescribe forcing fields.
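
    The traditional first-order formulation referred to above can be written as dC/dt = I - k f(T) f(W) C. A minimal sketch with a Q10-style temperature scalar and a simple moisture scalar; all constants are illustrative, not values from the paper.

        def soc_step(C, litter_in, k=0.05, T=15.0, W=0.6, dt=1.0):
            f_T = 2.0 ** ((T - 15.0) / 10.0)   # Q10 = 2 temperature response
            f_W = min(W / 0.6, 1.0)            # moisture limitation below optimum
            return C + dt * (litter_in - k * f_T * f_W * C)

        C = 5.0                                # initial SOC stock, kg C m^-2
        for _ in range(100):                   # 100 annual steps
            C = soc_step(C, litter_in=0.3)     # litter input, kg C m^-2 yr^-1
        print(C)                               # relaxes toward I/(k f_T f_W) = 6.0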

  16. How much expert knowledge is it worth to put in conceptual hydrological models?

    NASA Astrophysics Data System (ADS)

    Antonetti, Manuel; Zappa, Massimiliano

    2017-04-01

    Both modellers and experimentalists agree on using expert knowledge to improve conceptual hydrological simulations in ungauged basins. However, they use expert knowledge differently, both for hydrologically mapping the landscape and for parameterising a given hydrological model. Modellers generally use very simplified (e.g. topography-based) mapping approaches and put most of their knowledge into constraining the model by defining parameter and process relational rules. In contrast, experimentalists tend to invest all their detailed and qualitative knowledge about processes to obtain a spatial distribution of areas with different dominant runoff generation processes (DRPs) that is as realistic as possible, and to define plausible narrow value ranges for each model parameter. Since the modelling goal is usually just to simulate runoff at a specific site, even strongly simplified hydrological classifications can lead to satisfying results, owing to the equifinality of hydrological models, overfitting problems, and the numerous uncertainty sources affecting runoff simulations. Therefore, to test the extent to which expert knowledge can improve simulation results under uncertainty, we applied a typical modellers' framework, relying on parameter and process constraints defined from expert knowledge, to several catchments on the Swiss Plateau. To map the spatial distribution of the DRPs, mapping approaches with increasing involvement of expert knowledge were used. Simulation results highlighted the potential added value of using all the expert knowledge available on a catchment. Combinations of event types and landscapes where even a simplified mapping approach can lead to satisfying results were also identified. Finally, the uncertainty originating from the different mapping approaches was compared with that linked to meteorological input data and catchment initial conditions.

  17. Inferring the photometric and size evolution of galaxies from image simulations. I. Method

    NASA Astrophysics Data System (ADS)

    Carassou, Sébastien; de Lapparent, Valérie; Bertin, Emmanuel; Le Borgne, Damien

    2017-09-01

    Context. Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in non-trivial ways, and that can lead to contradictory predictions if not taken into account carefully. Aims: To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. Methods: We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified, and minimized with a custom sampling process based on adaptive Markov chain Monte Carlo methods. Results: Using synthetic data matching most of the properties of a Canada-France-Hawaii Telescope Legacy Survey Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolution for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach. Conclusions: Our pipeline efficiently infers the luminosity and size distribution and evolution parameters with a very limited number of observables (three photometric bands). When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from systematic biases.
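
    The sampling step described in the Methods can be caricatured as random-walk Metropolis with an adapted proposal scale. The discrepancy function below is a stand-in for the pBIL distance between observed and simulated summary statistics, not the authors' objective.

        import numpy as np

        rng = np.random.default_rng(2)

        def discrepancy(theta):                # stand-in distance, minimum at (1, -2)
            return np.sum((theta - np.array([1.0, -2.0])) ** 2)

        theta = np.zeros(2)
        scale, target_acc = 0.5, 0.234
        log_post = -discrepancy(theta)
        for i in range(1, 5001):
            prop = theta + scale * rng.standard_normal(2)
            log_post_prop = -discrepancy(prop)
            accepted = np.log(rng.uniform()) < log_post_prop - log_post
            if accepted:
                theta, log_post = prop, log_post_prop
            scale *= np.exp(((1.0 if accepted else 0.0) - target_acc) / np.sqrt(i))
        print(theta)                           # drifts toward the best-fit point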

  18. The role of interior watershed processes in improving parameter estimation and performance of watershed models.

    PubMed

    Yen, Haw; Bailey, Ryan T; Arabi, Mazdak; Ahmadi, Mehdi; White, Michael J; Arnold, Jeffrey G

    2014-09-01

    Watershed models typically are evaluated solely through comparison of in-stream water and nutrient fluxes with measured data using established performance criteria, whereas processes and responses within the interior of the watershed that govern these global fluxes often are neglected. Due to the large number of parameters at the disposal of these models, circumstances may arise in which excellent global results are achieved using inaccurate magnitudes of these "intra-watershed" responses. When used for scenario analysis, a given model hence may inaccurately predict the global, in-stream effect of implementing land-use practices at the interior of the watershed. In this study, data regarding internal watershed behavior are used to constrain parameter estimation to maintain realistic intra-watershed responses while also matching available in-stream monitoring data. The methodology is demonstrated for the Eagle Creek Watershed in central Indiana. Streamflow and nitrate (NO3) loading are used as global in-stream comparisons, with two process responses, the annual mass of denitrification and the ratio of NO3 losses from subsurface and surface flow, used to constrain parameter estimation. Results show that imposing these constraints not only yields realistic internal watershed behavior but also provides good in-stream comparisons. Results further demonstrate that in the absence of incorporating intra-watershed constraints, evaluation of nutrient abatement strategies could be misleading, even though typical performance criteria are satisfied. Incorporating intra-watershed responses yields a watershed model that more accurately represents the observed behavior of the system and hence a tool that can be used with confidence in scenario evaluation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  19. Model parameters for representative wetland plant functional groups

    USGS Publications Warehouse

    Williams, Amber S.; Kiniry, James R.; Mushet, David M.; Smith, Loren M.; McMurry, Scott T.; Attebury, Kelly; Lang, Megan; McCarty, Gregory W.; Shaffer, Jill A.; Effland, William R.; Johnson, Mari-Vaughn V.

    2017-01-01

    Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and realistic simulation of the upland and wetland plant growth cycles. Objectives of this study were to quantify leaf area index (LAI), light extinction coefficient (k), and plant nitrogen (N), phosphorus (P), and potassium (K) concentrations in natural stands of representative plant species for some major plant functional groups in the United States. Functional groups in this study were based on these parameters and plant growth types to enable process-based modeling. We collected data at four locations representing some of the main wetland regions of the United States. At each site, we collected on-the-ground measurements of fraction of light intercepted, LAI, and dry matter within the 2013–2015 growing seasons. Maximum LAI and k variables showed noticeable variations among sites and years, while overall averages and functional group averages give useful estimates for multisite simulation modeling. Variation within each species gives an indication of what can be expected in such natural ecosystems. For P and K, the concentrations from highest to lowest were spikerush (Eleocharis macrostachya), reed canary grass (Phalaris arundinacea), smartweed (Polygonum spp.), cattail (Typha spp.), and hardstem bulrush (Schoenoplectus acutus). Spikerush had the highest N concentration, followed by smartweed, bulrush, reed canary grass, and then cattail. These parameters will be useful for the actual wetland species measured and for the wetland plant functional groups they represent. These parameters and the associated process-based models offer promise as valuable tools for evaluating environmental benefits of wetlands and for evaluating impacts of various agronomic practices in adjacent areas as they affect wetlands.

  20. Usage of ensemble geothermal models to consider geological uncertainties

    NASA Astrophysics Data System (ADS)

    Rühaak, Wolfram; Steiner, Sarah; Welsch, Bastian; Sass, Ingo

    2015-04-01

    The usage of geothermal energy, for instance by borehole heat exchangers (BHE), is a promising concept for a sustainable supply of heat for buildings. BHE are closed pipe systems in which a fluid is circulating. Heat from the surrounding rocks is transferred to the fluid purely by conduction. The fluid carries the heat to the surface, where it can be utilized. Larger arrays of BHE typically require prior numerical modelling, motivated both by the design of the system (number and depth of the required BHE) and by regulatory requirements. Regulatory operating permits in particular often demand highly realistic models. Although such realistic models are possible in many cases with today's codes and computer resources, they are often expensive in terms of time and effort. A particular problem is knowing the accuracy of the achieved results. An issue often neglected when dealing with highly complex models is the quantification of parameter uncertainties arising from the natural heterogeneity of the geological subsurface. Experience has shown that these heterogeneities can lead to wrong forecasts. Variations in the technical realization, and especially in the operational parameters (which are mainly a consequence of the regional climate), can also lead to strong variations in the simulation results. Instead of one very detailed forecast model, one should consider modelling numerous simpler models. By varying parameters, both the presumed subsurface uncertainties and the uncertainties in the operational parameters can be reflected. Finally, not one single result should be reported, but rather the range of possible solutions and their respective probabilities. In meteorology such an approach is well known as ensemble modeling. The concept is demonstrated on a real-world data set and discussed.
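
    The ensemble idea can be sketched for a single BHE by sampling uncertain ground properties and evaluating the classical infinite-line-source solution, ΔT = q/(4πλ) E1(r²/(4αt)), reporting a spread instead of one deterministic value. The distributions and loads below are assumptions for illustration, not values from the study.

        import numpy as np
        from scipy.special import exp1

        rng = np.random.default_rng(3)
        n = 1000
        lam = rng.normal(2.4, 0.3, n)          # thermal conductivity, W/m/K
        rho_c = rng.normal(2.3e6, 0.2e6, n)    # volumetric heat capacity, J/m^3/K
        q, r, t = -35.0, 0.075, 3.15e7         # W/m extraction, borehole radius m, 1 yr

        alpha = lam / rho_c                    # thermal diffusivity, m^2/s
        dT = q / (4 * np.pi * lam) * exp1(r**2 / (4 * alpha * t))
        print(np.percentile(dT, [5, 50, 95]))  # ensemble spread of the cooling, K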
