Sample records for time-dependent source term

  1. Seismic hazard assessment over time: Modelling earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting

    2017-04-01

    To assess the seismic hazard with temporal change in Taiwan, we develop a new approach that combines the Brownian Passage Time (BPT) model with the Coulomb stress change, and implement the seismogenic source parameters of the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of specific fault sources, together with the time elapsed since the last fault rupture, to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering the above time-dependent factors, our combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed times since their last ruptures, and/or short-term elevated hazard levels immediately after the occurrence of large earthquakes due to the stress-triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased the rupture probabilities of several neighbouring seismogenic sources in southwestern Taiwan and raised the hazard level for the near future. Our approach draws on the advantages of incorporating both long- and short-term models to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than previously published models, and thus offers decision-makers and public officials an adequate basis for rapid evaluation of, and response to, future emergency scenarios such as victim relocation and sheltering.
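The long-term component described above can be sketched numerically: the BPT (inverse Gaussian) density with mean recurrence interval mu and aperiodicity alpha gives the conditional probability of rupture in a forecast window, given the elapsed time since the last event. A minimal sketch with illustrative parameter values (not TEM values):

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time (inverse Gaussian) density.
    mu: mean recurrence interval; alpha: aperiodicity."""
    if t <= 0.0:
        return 0.0
    return math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3)) * \
        math.exp(-(t - mu)**2 / (2.0 * alpha**2 * mu * t))

def integrate(f, a, b, n=20000):
    # composite trapezoid rule
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def conditional_probability(mu, alpha, elapsed, window):
    """P(rupture in [elapsed, elapsed+window] | no rupture up to elapsed)."""
    upper = mu * 20.0   # effectively infinity for the survivor integral
    num = integrate(lambda t: bpt_pdf(t, mu, alpha), elapsed, elapsed + window)
    den = integrate(lambda t: bpt_pdf(t, mu, alpha), elapsed, upper)
    return num / den

# Example: 300-yr mean recurrence, aperiodicity 0.5,
# 250 yr elapsed since the last rupture, 50-yr forecast window.
p = conditional_probability(mu=300.0, alpha=0.5, elapsed=250.0, window=50.0)
```

The conditional probability grows with elapsed time for a fault "overdue" relative to its recurrence interval, which is what drives the elevated long-term hazard along faults with long elapsed times.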

  2. High-Order Residual-Distribution Hyperbolic Advection-Diffusion Schemes: 3rd-, 4th-, and 6th-Order

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza R.; Nishikawa, Hiroaki

    2014-01-01

    In this paper, spatially high-order Residual-Distribution (RD) schemes based on the first-order hyperbolic system method are proposed for general time-dependent advection-diffusion problems. The corresponding second-order time-dependent hyperbolic advection-diffusion scheme was first introduced in [NASA/TM-2014-218175, 2014], where rapid convergence over each physical time step, typically in fewer than five Newton iterations, was demonstrated. In that method, the time-dependent hyperbolic advection-diffusion system (linear and nonlinear) was discretized by the second-order upwind RD scheme in a unified manner, and the system of implicit residual equations was solved efficiently by Newton's method over every physical time step. In this paper, two techniques for the source term discretization are proposed: (1) reformulation of the source terms in divergence form, and (2) a correction to the trapezoidal rule for the source term discretization. Third-, fourth-, and sixth-order RD schemes are then proposed with the above techniques that, relative to the second-order RD scheme, only cost the evaluation of either the first derivative or both the first and second derivatives of the source terms. A special fourth-order RD scheme is also proposed that is even less computationally expensive than the third-order RD schemes. The second-order Jacobian formulation was used for all the proposed high-order schemes. Numerical results are presented for both steady and time-dependent, linear and nonlinear advection-diffusion problems. These newly developed high-order RD schemes are shown to be remarkably efficient and capable of producing both the solutions and the gradients to the design order of accuracy, with rapid convergence over each physical time step, typically in fewer than ten Newton iterations.
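The first-order hyperbolic system idea underlying these schemes can be illustrated on pure diffusion: u_t = nu*u_xx is recast as a hyperbolic relaxation system whose extra variable p relaxes to the gradient u_x, and the system is marched to steady state. The sketch below uses a simple first-order local Lax-Friedrichs discretization rather than the paper's RD schemes, so it shows only the reformulation, not the high-order machinery; grid size, relaxation time, and boundary treatment are illustrative choices:

```python
import math

# Hyperbolic reformulation of the diffusion equation u_t = nu*u_xx:
#   u_t - nu*p_x = 0
#   p_t - u_x/Tr = -p/Tr        (p relaxes toward the gradient u_x)
# Dirichlet data u(0)=0, u(1)=1; the steady state is u = x, p = 1.
nu, Tr = 1.0, 0.1
a = math.sqrt(nu / Tr)          # characteristic speed of the system
N = 50
dx = 1.0 / N
dt = 0.4 * dx / a               # CFL-limited pseudo-time step

u = [0.0] * N
p = [0.0] * N

for step in range(5000):
    # ghost states: reflect u about the boundary values, extrapolate p
    ue = [-u[0]] + u + [2.0 - u[-1]]
    pe = [p[0]] + p + [p[-1]]
    # first-order local Lax-Friedrichs interface fluxes, F = (-nu*p, -u/Tr)
    F1 = [0.5 * (-nu * pe[i] - nu * pe[i + 1]) - 0.5 * a * (ue[i + 1] - ue[i])
          for i in range(N + 1)]
    F2 = [0.5 * (-ue[i] / Tr - ue[i + 1] / Tr) - 0.5 * a * (pe[i + 1] - pe[i])
          for i in range(N + 1)]
    for i in range(N):
        u[i] += dt * (-(F1[i + 1] - F1[i]) / dx)
        p[i] += dt * (-(F2[i + 1] - F2[i]) / dx - p[i] / Tr)
```

At convergence both u and its gradient p are available to the same accuracy, which is the practical payoff of the hyperbolic formulation.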

  3. A two-dimensional transient analytical solution for a ponded ditch drainage system under the influence of source/sink

    NASA Astrophysics Data System (ADS)

    Sarmah, Ratan; Tiwari, Shubham

    2018-03-01

    An analytical solution is developed for predicting two-dimensional transient seepage into ditch drainage network receiving water from a non-uniform steady ponding field from the surface of the soil under the influence of source/sink in the flow domain. The flow domain is assumed to be saturated, homogeneous and anisotropic in nature and have finite extends in horizontal and vertical directions. The drains are assumed to be standing vertical and penetrating up to impervious layer. The water levels in the drains are unequal and invariant with time. The flow field is also assumed to be under the continuous influence of time-space dependent arbitrary source/sink term. The correctness of the proposed model is checked by developing a numerical code and also with the existing analytical solution for the simplified case. The study highlights the significance of source/sink influence in the subsurface flow. With the imposition of the source and sink term in the flow domain, the pathline and travel time of water particles started deviating from their original position and above that the side and top discharge to the drains were also observed to have a strong influence of the source/sink terms. The travel time and pathline of water particles are also observed to have a dependency on the height of water in the ditches and on the location of source/sink activation area.

  4. Hydraulic transients: a seismic source in volcanoes and glaciers.

    PubMed

    Lawrence, W S; Qamar, A

    1979-02-16

    A source for certain low-frequency seismic waves is postulated in terms of the water hammer effect. The time-dependent displacement of a water-filled sub-glacial conduit is analyzed to demonstrate the nature of the source. Preliminary energy calculations and the observation of hydraulically generated seismic radiation from a dam indicate the plausibility of the proposed source.
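The water hammer mechanism invoked here is commonly estimated with the Joukowsky relation, delta_p = rho*c*delta_v, for a suddenly arrested flow. A minimal sketch with illustrative values (not the paper's data):

```python
# Joukowsky estimate of the pressure pulse from suddenly arrested flow,
# a standard first-order model of the water hammer effect.
rho = 1000.0   # water density, kg/m^3
c = 1400.0     # pressure-wave speed in a water-filled conduit, m/s (assumed)
dv = 1.0       # abrupt change in flow velocity, m/s (assumed)

dp = rho * c * dv   # pressure jump, Pa
# A 1 m/s arrest gives a jump of order 1 MPa, large enough to drive
# elastic (seismic) radiation from the conduit walls.
```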

  5. A Well-Balanced Path-Integral f-Wave Method for Hyperbolic Problems with Source Terms

    PubMed Central

    2014-01-01

    Systems of hyperbolic partial differential equations with source terms (balance laws) arise in many applications where it is important to compute accurate time-dependent solutions modeling small perturbations of equilibrium solutions in which the source terms balance the hyperbolic part. The f-wave version of the wave-propagation algorithm is one approach, but requires the use of a particular averaged value of the source terms at each cell interface in order to be “well balanced” and exactly maintain steady states. A general approach to choosing this average is developed using the theory of path conservative methods. A scalar advection equation with a decay or growth term is introduced as a model problem for numerical experiments. PMID:24563581
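For the model problem mentioned in the abstract, q_t + (u q)_x = -lambda*q with u > 0, the steady states satisfy u*q_x = -lambda*q, i.e. q proportional to exp(-lambda*x/u). The first-order f-wave sketch below uses a logarithmic-mean source average at each interface; this is one admissible choice of path average that makes this particular scheme exactly well balanced, offered as an illustration rather than the paper's general construction:

```python
import math

u, lam = 1.0, 0.7
N, L = 80, 4.0
dx = L / N
dt = 0.5 * dx / u

def steady(x):
    # exact steady state of u*q_x = -lam*q
    return math.exp(-lam * x / u)

def source_avg(ql, qr):
    # logarithmic-mean average of the source -lam*q across the interface;
    # with this "path" average the f-wave vanishes identically on the
    # exponential steady state, so the scheme is exactly well balanced.
    if abs(qr - ql) < 1e-14 * max(abs(ql), 1.0):
        qm = ql
    else:
        qm = (qr - ql) / math.log(qr / ql)
    return -lam * qm

xs = [(i + 0.5) * dx for i in range(N)]
q = [steady(x) for x in xs]    # start exactly on the steady state

for step in range(200):
    qold = q[:]
    for i in range(N):
        ql = qold[i - 1] if i > 0 else steady(-0.5 * dx)  # exact inflow ghost
        # f-wave at interface i-1/2: flux difference minus the source average
        Z = u * (qold[i] - ql) - dx * source_avg(ql, qold[i])
        q[i] = qold[i] - dt / dx * Z    # u > 0: the wave enters cell i
```

After 200 steps the discrete steady state is preserved to rounding error, which is the defining property of a well-balanced scheme.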

  6. Analysis of temporal decay of diffuse broadband sound fields in enclosures by decomposition in powers of an absorption parameter

    NASA Astrophysics Data System (ADS)

    Bliss, Donald; Franzoni, Linda; Rouse, Jerry; Manning, Ben

    2005-09-01

    An analysis method for time-dependent broadband diffuse sound fields in enclosures is described. Beginning with a formulation utilizing time-dependent broadband intensity boundary sources, the strength of these wall sources is expanded in a series in powers of an absorption parameter, thereby giving a separate boundary integral problem for each power. The temporal behavior is characterized by a Taylor expansion in the delay time for a source to influence an evaluation point. The lowest-order problem has a uniform interior field proportional to the reciprocal of the absorption parameter, as expected, and exhibits relatively slow exponential decay. The next-order problem gives a mean-square pressure distribution that is independent of the absorption parameter and is primarily responsible for the spatial variation of the reverberant field. This problem, which is driven by input sources and the lowest-order reverberant field, depends on source location and the spatial distribution of absorption. Additional problems proceed at integer powers of the absorption parameter, but are essentially higher-order corrections to the spatial variation. Temporal behavior is expressed in terms of an eigenvalue problem, with boundary source strength distributions expressed as eigenmodes. Solutions exhibit rapid short-time spatial redistribution followed by long-time decay of a predominant spatial mode.

  7. Solution of the equation of heat conduction with time dependent sources: Programmed application to planetary thermal history

    NASA Technical Reports Server (NTRS)

    Conel, J. E.

    1975-01-01

    A computer program (Program SPHERE) solving the inhomogeneous equation of heat conduction with a radiation boundary condition on a thermally homogeneous sphere is described. The source terms are taken to be exponential functions of time. Thermal properties are independent of temperature. The solutions are appropriate to studying certain classes of planetary thermal history. Special application to the moon is discussed.
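The kind of problem SPHERE solves analytically can be sketched with a crude explicit finite-difference model: conduction in a homogeneous sphere heated by an exponentially decaying internal source. All parameter values below are illustrative (not lunar), and the radiation boundary condition is simplified to a fixed surface temperature:

```python
import math

R = 1.0e3        # sphere radius, m (illustrative)
kappa = 1.0e-6   # thermal diffusivity, m^2/s
q0 = 1.0e-9      # initial source strength as a heating rate, K/s (assumed)
tau = 1.0e13     # source decay time, s
Ts = 0.0         # surface temperature, K (relative; radiation BC simplified)

N = 40
dr = R / N
dt = 0.2 * dr * dr / kappa   # explicit stability margin

T = [0.0] * (N + 1)          # nodes at r = i*dr; T[N] is the surface
t = 0.0
for step in range(2000):
    Tn = T[:]
    q = q0 * math.exp(-t / tau)
    # center node: symmetry gives dT/dt = 6*kappa*(T1 - T0)/dr^2 + q
    Tn[0] = T[0] + dt * (6.0 * kappa * (T[1] - T[0]) / dr**2 + q)
    for i in range(1, N):
        r = i * dr
        # radial Laplacian T_rr + (2/r) T_r
        lap = (T[i+1] - 2*T[i] + T[i-1]) / dr**2 \
            + (2.0 / r) * (T[i+1] - T[i-1]) / (2.0 * dr)
        Tn[i] = T[i] + dt * (kappa * lap + q)
    Tn[N] = Ts
    T = Tn
    t += dt
```

The interior heats almost adiabatically while a cooled boundary layer grows inward from the surface, the qualitative behavior the analytical solutions capture over planetary timescales.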

  8. Spurious Behavior of Shock-Capturing Methods: Problems Containing Stiff Source Terms and Discontinuities

    NASA Technical Reports Server (NTRS)

    Yee, Helen M. C.; Kotov, D. V.; Wang, Wei; Shu, Chi-Wang

    2013-01-01

    The goal of this paper is to relate the numerical dissipation inherent in high-order shock-capturing schemes to the onset of wrong propagation speeds of discontinuities. For pointwise evaluation of the source term, previous studies indicated that the phenomenon of wrong propagation speed of discontinuities is connected with the smearing of the discontinuity caused by the discretization of the advection term. The smearing introduces a nonequilibrium state into the calculation: as soon as a nonequilibrium value is introduced in this manner, the source term turns on and immediately restores equilibrium, while at the same time shifting the discontinuity to a cell boundary. The present study shows that the degree of wrong propagation speed of discontinuities is highly dependent on the accuracy of the numerical method; the manner in which the smearing of discontinuities is contained by the numerical method and the overall amount of numerical dissipation employed play major roles. Moreover, employing finite time steps and grid spacings below the standard Courant-Friedrichs-Lewy (CFL) limit in shock-capturing methods for compressible Euler and Navier-Stokes equations containing stiff reacting source terms and discontinuities reveals surprising, counter-intuitive results. Unlike non-reacting flows, for stiff reactions with discontinuities, employing a time step and grid spacing below the CFL limit (based on the homogeneous, non-reacting part of the governing equations) does not guarantee a correct solution of the chosen governing equations. Instead, depending on the numerical method, time step and grid spacing, the numerical simulation may lead to (a) the correct solution (within the truncation error of the scheme), (b) a divergent solution, (c) a solution with the wrong propagation speed of discontinuities, or (d) other spurious solutions that satisfy the discretized counterparts but are not solutions of the governing equations. The present investigation of three very different stiff systems confirms some of the findings of Lafon & Yee (1996) and LeVeque & Yee (1990) for a model scalar PDE. The findings might shed some light on reported difficulties in numerical combustion, and on problems with stiff nonlinear (homogeneous) source terms and discontinuities in general.
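The wrong-speed phenomenon is easy to reproduce on the scalar model problem of LeVeque & Yee (1990), u_t + u_x = -mu*u*(u-1)*(u-1/2). In the infinitely stiff limit the pointwise source drives any smeared value to the nearest stable equilibrium (0 or 1) within one step, modeled below by thresholding; with first-order upwind advection at CFL = 0.4 the smeared front value never exceeds 1/2, so the numerical discontinuity does not move at all, although the exact propagation speed is 1:

```python
# Infinitely stiff limit of u_t + u_x = -mu*u*(u-1)*(u-1/2):
# the source step is replaced by projection onto the stable equilibria.
N = 100
cfl = 0.4                      # dt/dx with advection speed 1
u = [1.0] * 30 + [0.0] * (N - 30)   # discontinuity initially at cell 30

def front(v):
    return sum(1 for x in v if x > 0.5)   # number of "burnt" cells

f0 = front(u)
for step in range(100):
    # upwind advection step (speed 1, inflow value 1 on the left)
    un = [0.0] * N
    un[0] = u[0] + cfl * (1.0 - u[0])
    for i in range(1, N):
        un[i] = u[i] + cfl * (u[i - 1] - u[i])
    # stiff source: any value below 1/2 collapses to 0, above 1/2 to 1
    u = [1.0 if x > 0.5 else 0.0 for x in un]

# the exact front has moved 100*dt = 40 cells, but the numerical front
# is frozen in place: a spurious zero propagation speed
```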

  9. Improved tomographic reconstructions using adaptive time-dependent intensity normalization.

    PubMed

    Titarenko, Valeriy; Titarenko, Sofya; Withers, Philip J; De Carlo, Francesco; Xiao, Xianghui

    2010-09-01

    The first processing step in synchrotron-based micro-tomography is the normalization of the projection images against the background, also referred to as a white field. Owing to time-dependent variations in illumination and defects in detection sensitivity, the white field is different from the projection background. In this case standard normalization methods introduce ring and wave artefacts into the resulting three-dimensional reconstruction. In this paper the authors propose a new adaptive technique accounting for these variations and allowing one to obtain cleaner normalized data and to suppress ring and wave artefacts. The background is modelled by the product of two time-dependent terms representing the illumination and detection stages. These terms are written as unknown functions, one scaled and shifted along a fixed direction (describing the illumination term) and one translated by an unknown two-dimensional vector (describing the detection term). The proposed method is applied to two sets (a stem Salix variegata and a zebrafish Danio rerio) acquired at the parallel beam of the micro-tomography station 2-BM at the Advanced Photon Source showing significant reductions in both ring and wave artefacts. In principle the method could be used to correct for time-dependent phenomena that affect other tomographic imaging geometries such as cone beam laboratory X-ray computed tomography.
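The standard (non-adaptive) starting point, and the effect of a time-dependent illumination scale, can be sketched as follows. Here the white field of each frame is modelled as a single scale factor s_t times a reference flat field, with s_t estimated from detector columns assumed never to be shadowed by the sample; this is a much cruder model than the paper's shifted and translated functions, and the margin columns are an assumption:

```python
# Flat-field normalization with a per-projection illumination scale.
flat = [[100.0, 110.0, 120.0, 110.0, 100.0]] * 3   # reference white field
sample = [[1.0, 1.0, 0.4, 0.5, 1.0]] * 3           # true transmission
s_true = 0.9                                       # beam decay this frame
proj = [[s_true * f * t for f, t in zip(fr, tr)]
        for fr, tr in zip(flat, sample)]

margin = [0, 4]    # columns assumed never shadowed by the sample (assumption)
num = sum(proj[r][c] for r in range(3) for c in margin)
den = sum(flat[r][c] for r in range(3) for c in margin)
s_est = num / den  # estimated time-dependent illumination scale

norm = [[proj[r][c] / (s_est * flat[r][c]) for c in range(5)]
        for r in range(3)]
```

Dividing by the unscaled flat field instead would bias every frame by s_t, producing exactly the kind of ring and wave artefacts the adaptive method suppresses.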

  10. Porous elastic system with nonlinear damping and source terms

    NASA Astrophysics Data System (ADS)

    Freitas, Mirelson M.; Santos, M. L.; Langa, José A.

    2018-02-01

    We study the long-time behavior of a porous-elastic system, focusing on the interplay between nonlinear damping and source terms. The sources may represent restoring forces, but may also be focusing, thus potentially amplifying the total energy, which is the primary scenario of interest. By employing nonlinear semigroups and the theory of monotone operators, we obtain several results on the existence of local and global weak solutions and the uniqueness of weak solutions. Moreover, we prove that such unique solutions depend continuously on the initial data. Under some restrictions on the parameters, we also prove that every weak solution to our system blows up in finite time, provided the initial energy is negative and the sources are more dominant than the damping in the system. Additional results are obtained via careful analysis involving the Nehari manifold. Specifically, we prove the existence of a unique global weak solution with initial data coming from the "good" part of the potential well. For such a global solution, we prove that the total energy of the system decays exponentially or algebraically, depending on the behavior of the dissipation in the system near the origin. We also prove the existence of a global attractor.

  11. A Computer Program for the Computation of Running Gear Temperatures Using Green's Function

    NASA Technical Reports Server (NTRS)

    Koshigoe, S.; Murdock, J. W.; Akin, L. S.; Townsend, D. P.

    1996-01-01

    A new technique has been developed to study two-dimensional heat transfer problems in gears. This technique consists of transforming the heat equation into a line integral equation with the use of Green's theorem. The equation is then expressed in terms of eigenfunctions that satisfy the Helmholtz equation, and their corresponding eigenvalues, for an arbitrarily shaped region of interest. The eigenfunctions are obtained by solving an integral equation. Once the eigenfunctions are found, the temperature is expanded in terms of the eigenfunctions with unknown time-dependent coefficients that can be solved for by using Runge-Kutta methods. The time integration is extremely efficient; therefore, any changes in the time-dependent coefficients or source terms in the boundary conditions do not impose a great computational burden on the user. The method is demonstrated by applying it to a sample gear tooth. Temperature histories at representative surface locations are given.
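The expansion-plus-ODE structure can be sketched in one dimension, where the Helmholtz eigenfunctions of a rod with fixed ends are known in closed form; the paper applies the same idea to an arbitrarily shaped 2-D gear-tooth region with numerically computed eigenfunctions:

```python
import math

# On a 1-D rod with fixed ends the Helmholtz eigenfunctions are sin(n*pi*x),
# and the expansion T(x,t) = sum_n c_n(t) sin(n*pi*x) reduces the heat
# equation to the decoupled ODEs
#   c_n'(t) = -kappa*(n*pi)^2 * c_n(t) + s_n(t),
# integrated here with classical RK4.
kappa = 0.1
modes = 3
lam = [(n * math.pi)**2 * kappa for n in range(1, modes + 1)]

def s_n(n, t):
    # projection of a time-dependent source onto mode n; zero for this test
    return 0.0

def rhs(c, t):
    return [-lam[n] * c[n] + s_n(n, t) for n in range(modes)]

def rk4_step(c, t, dt):
    k1 = rhs(c, t)
    k2 = rhs([c[i] + 0.5 * dt * k1[i] for i in range(modes)], t + 0.5 * dt)
    k3 = rhs([c[i] + 0.5 * dt * k2[i] for i in range(modes)], t + 0.5 * dt)
    k4 = rhs([c[i] + dt * k3[i] for i in range(modes)], t + dt)
    return [c[i] + dt / 6.0 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i])
            for i in range(modes)]

c = [1.0, 0.0, 0.0]    # initial condition T(x,0) = sin(pi*x)
t, dt = 0.0, 0.01
for _ in range(100):
    c = rk4_step(c, t, dt)
    t += dt
# the first coefficient should follow the exact decay exp(-kappa*pi^2*t)
```

Because the spatial problem is solved once (the eigenfunctions), any change to the source terms only alters the cheap ODE integration, which is the efficiency the abstract points to.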

  12. Space-Time Dependent Transport, Activation, and Dose Rates for Radioactivated Fluids.

    NASA Astrophysics Data System (ADS)

    Gavazza, Sergio

    Two methods are developed to calculate the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates generated by radioactivated fluids flowing through pipes. The work couples space- and time-dependent phenomena that are treated as only space- or time-dependent in the open literature. The transport and activation methodology (TAM) is used to numerically calculate the space- and time-dependent transport and activation of radionuclides in fluids flowing through pipes exposed to radiation fields, and the volumetric radioactive sources created by radionuclide motions. The computer program Radionuclide Activation and Transport in Pipe (RNATPA1) performs the numerical calculations required by TAM. The gamma ray dose methodology (GAM) is used to numerically calculate space- and time-dependent gamma ray dose equivalent rates from the volumetric radioactive sources determined by TAM. The computer program Gamma Ray Dose Equivalent Rate (GRDOSER) performs the numerical calculations required by GAM. The scope of conditions considered by TAM and GAM includes (a) laminar flow in a straight pipe, (b) recirculating flow schemes, (c) time-independent fluid velocity distributions, (d) space-dependent monoenergetic neutron flux distributions, (e) space- and time-dependent activation of a single parent nuclide and transport and decay of a single daughter radionuclide, and (f) assessment of space- and time-dependent gamma ray dose rates, outside the pipe, generated by the space- and time-dependent source term distributions inside it. The methodologies, however, can easily be extended to all the situations of interest for the phenomena addressed in this dissertation. A comparison is made between results obtained by the described calculational procedures and analytical expressions. The physics of the problems addressed by the new technique, and its increased accuracy relative to non-space- and time-dependent methods, are presented. The value of the methods is also discussed. It has been demonstrated that TAM and GAM can be used to enhance the understanding of the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates related to radioactivated fluids flowing through pipes.

  13. Annual Rates on Seismogenic Italian Sources with Models of Long-Term Predictability for the Time-Dependent Seismic Hazard Assessment In Italy

    NASA Astrophysics Data System (ADS)

    Murru, Maura; Falcone, Giuseppe; Console, Rodolfo

    2016-04-01

    The present study is carried out in the framework of the Center for Seismic Hazard (CPS) of INGV, under the agreement signed in 2015 with the Department of Civil Protection to develop a new seismic hazard model for the country that updates the current reference (MPS04-S1; zonesismiche.mi.ingv.it and esse1.mi.ingv.it) released between 2004 and 2006. In this initiative, we participate with the Long-Term Stress Transfer (LTST) model to provide the annual occurrence rate of a seismic event over the entire Italian territory, from a minimum magnitude of Mw 4.5, in bins of 0.1 magnitude units on geographical cells of 0.1° x 0.1°. Our methodology is based on the fusion of a statistical time-dependent renewal model (Brownian Passage Time, BPT; Matthews et al., 2002) with a physical model that accounts for the permanent stress change that a seismogenic source undergoes as a result of earthquakes occurring on surrounding sources. For each catalog considered (historical, instrumental and individual seismogenic sources) we determined a distinct rate value for each 0.1° x 0.1° cell for the next 50 yrs. If a cell falls within one of the sources in question, we adopted the respective rate value, which refers only to the magnitude of the characteristic event. This rate value is divided by the number of grid cells that fall within the horizontal projection of the source. If instead a cell falls outside any seismogenic source, we used the average rate obtained from the historical and instrumental catalogs, following the method of Frankel (1995). The annual occurrence rate was computed for each of the three considered distributions (Poisson, BPT, and BPT with inclusion of stress transfer).

  14. Two Coincidence Detectors for Spike Timing-Dependent Plasticity in Somatosensory Cortex

    PubMed Central

    Bender, Vanessa A.; Bender, Kevin J.; Brasier, Daniel J.; Feldman, Daniel E.

    2011-01-01

    Many cortical synapses exhibit spike timing-dependent plasticity (STDP) in which the precise timing of presynaptic and postsynaptic spikes induces synaptic strengthening [long-term potentiation (LTP)] or weakening [long-term depression (LTD)]. Standard models posit a single, postsynaptic, NMDA receptor-based coincidence detector for LTP and LTD components of STDP. We show instead that STDP at layer 4 to layer 2/3 synapses in somatosensory (S1) cortex involves separate calcium sources and coincidence detection mechanisms for LTP and LTD. LTP showed classical NMDA receptor dependence. LTD was independent of postsynaptic NMDA receptors and instead required group I metabotropic glutamate receptors and calcium from voltage-sensitive channels and IP3 receptor-gated stores. Downstream of postsynaptic calcium, LTD required retrograde endocannabinoid signaling, leading to presynaptic LTD expression, and also required activation of apparently presynaptic NMDA receptors. These LTP and LTD mechanisms detected firing coincidence on ~25 and ~125 ms time scales, respectively, and combined to implement the overall STDP rule. These findings indicate that STDP is not a unitary process and suggest that endocannabinoid-dependent LTD may be relevant to cortical map plasticity. PMID:16624937
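The ~25 ms and ~125 ms coincidence windows quoted above are often summarized by an asymmetric exponential STDP window; the sketch below uses those two time constants, with purely illustrative amplitudes (not fitted to the paper's data):

```python
import math

# Asymmetric exponential STDP window with the time scales from the abstract:
# ~25 ms for the LTP (pre-before-post) branch, ~125 ms for the LTD branch.
tau_ltp = 25.0    # ms
tau_ltd = 125.0   # ms
a_plus = 1.0      # LTP amplitude (illustrative)
a_minus = 0.5     # LTD amplitude (illustrative)

def dw(dt_ms):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ltp)    # LTP branch
    return -a_minus * math.exp(dt_ms / tau_ltd)       # LTD branch
```

Because the LTD time constant is five times longer, depression persists over spike-time differences where potentiation has already decayed away, consistent with the two branches being read out by separate coincidence detectors.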

  15. A study of numerical methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Yee, H. C.

    1988-01-01

    The proper modeling of nonequilibrium gas dynamics is required in certain regimes of hypersonic flow. For inviscid flow this gives a system of conservation laws coupled with source terms representing the chemistry. Often a wide range of time scales is present in the problem, leading to numerical difficulties as in stiff systems of ordinary differential equations. Stability can be achieved by using implicit methods, but other numerical difficulties are observed. The behavior of typical numerical methods on a simple advection equation with a parameter-dependent source term was studied. Two approaches to incorporate the source term were utilized: MacCormack type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. Various comparisons over a wide range of parameter values were made. In the stiff case where the solution contains discontinuities, incorrect numerical propagation speeds are observed with all of the methods considered. This phenomenon is studied and explained.

  16. Testing the Nanoparticle-Allostatic Cross Adaptation-Sensitization Model for Homeopathic Remedy Effects

    PubMed Central

    Bell, Iris R.; Koithan, Mary; Brooks, Audrey J.

    2012-01-01

    Key concepts of the Nanoparticle-Allostatic Cross-Adaptation-Sensitization (NPCAS) model for the action of homeopathic remedies in living systems include source nanoparticles as low-level environmental stressors, heterotypic hormesis, cross-adaptation, allostasis (the stress response network), time-dependent sensitization with endogenous amplification and bidirectional change, and self-organizing complex adaptive systems. The model accommodates the requirement for measurable physical agents in the remedy (source nanoparticles and/or source material adsorbed to silica nanoparticles); hormetic adaptive responses in the organism, triggered by nanoparticles; bipolar, metaplastic change, dependent on the history of the organism; clinical matching of the patient's symptom picture, including modalities, to the symptom pattern that the source material can cause (cross-adaptation and cross-sensitization); and evidence for nanoparticle-related quantum macro-entanglement in homeopathic pathogenetic trials. This paper examines research implications of the model, discussing the following hypotheses: variability in nanoparticle size, morphology, and aggregation affects remedy properties and reproducibility of findings; homeopathic remedies modulate adaptive allostatic responses, with multiple dynamic short- and long-term effects; and simillimum remedy nanoparticles, as novel mild stressors corresponding to the organism's dysfunction, initiate time-dependent cross-sensitization, reversing the direction of dysfunctional reactivity to environmental stressors. The NPCAS model suggests a way forward for systematic research on homeopathy. The central proposition is that homeopathic treatment is a form of nanomedicine acting by modulation of endogenous adaptation and metaplastic amplification processes in the organism to enhance long-term systemic resilience and health. PMID:23290882

  17. Spatially varying density dependence drives a shifting mosaic of survival in a recovering apex predator (Canis lupus).

    PubMed

    O'Neil, Shawn T; Bump, Joseph K; Beyer, Dean E

    2017-11-01

    Understanding landscape patterns in mortality risk is crucial for promoting recovery of threatened and endangered species. Humans affect mortality risk in large carnivores such as wolves (Canis lupus), but spatiotemporally varying density dependence can significantly influence the landscape of survival. This potentially occurs when density varies spatially and risk is unevenly distributed. We quantified spatiotemporal sources of variation in survival rates of gray wolves (C. lupus) during a 21-year period of population recovery in the Upper Peninsula of Michigan, USA. We focused on mapping risk across time using Cox proportional hazards (CPH) models with time-dependent covariates, thus exploring a shifting mosaic of survival. Extended CPH models and time-dependent covariates revealed influences of seasonality, density dependence and experience, as well as individual-level factors and landscape predictors of risk. We used the results to predict the shifting landscape of risk at the beginning, middle, and end of the wolf recovery time series. Survival rates varied spatially and declined over time. Long-term change was density-dependent, with landscape predictors such as agricultural land cover and edge densities contributing negatively to survival. Survival also varied seasonally and depended on individual experience, sex, and resident versus transient status. The shifting landscape of survival suggested that increasing density contributed to greater potential for human conflict and wolf mortality risk. Long-term spatial variation in key population vital rates is largely unquantified in many threatened, endangered, and recovering species. Variation in risk may indicate potential for source-sink population dynamics, especially where individuals preemptively occupy suitable territories, forcing new individuals into riskier habitat types as density increases. We encourage managers to explore relationships between adult survival and localized changes in population density. Density-dependent risk maps can identify increasing conflict areas or potential habitat sinks that may persist due to high recruitment in adjacent habitats.

  18. Spurious Solutions Of Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.; Griffiths, D. F.

    1992-01-01

    Report utilizes nonlinear-dynamics approach to investigate possible sources of errors and slow convergence and non-convergence of steady-state numerical solutions when using time-dependent approach for problems containing nonlinear source terms. Emphasizes implications for development of algorithms in CFD and computational sciences in general. Main fundamental conclusion of study is that qualitative features of nonlinear differential equations cannot be adequately represented by finite-difference method and vice versa.

  19. Wide localized solutions of the parity-time-symmetric nonautonomous nonlinear Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Meza, L. E. Arroyo; Dutra, A. de Souza; Hott, M. B.; Roy, P.

    2015-01-01

    By using canonical transformations we obtain localized (in space) exact solutions of the nonlinear Schrödinger equation (NLSE) with cubic and quintic space- and time-modulated nonlinearities and in the presence of time-dependent and inhomogeneous external potentials and amplification or absorption (source or drain) coefficients. We obtain a class of wide localized exact solutions of the NLSE in the presence of a number of non-Hermitian parity-time (PT)-symmetric external potentials, which are constituted by a mixture of external potentials and source or drain terms. The exact solutions found here can be applied to theoretical studies of ultrashort pulse propagation in optical fibers with focusing and defocusing nonlinearities. We show that, even in the presence of gain or loss terms, stable solutions can be found and that the PT symmetry is an important feature for guaranteeing the conservation of the average energy of the system.
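The class of equations studied can be written schematically as a cubic-quintic NLSE with space- and time-modulated coefficients and a complex (gain/loss) potential; the normalization and sign conventions below are generic rather than the paper's:

```latex
i\,\frac{\partial \psi}{\partial t}
  = -\frac{\partial^2 \psi}{\partial x^2}
  + \bigl[V(x,t) + i\,W(x,t)\bigr]\psi
  + g_3(x,t)\,|\psi|^2\psi
  + g_5(x,t)\,|\psi|^4\psi ,
```

where V is the real external potential, W the amplification or absorption (source or drain) coefficient, and g_3, g_5 the modulated cubic and quintic nonlinearities; the PT-symmetric case corresponds to an even real part and odd imaginary part, V(-x,t) = V(x,t) and W(-x,t) = -W(x,t).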

  20. Direct computation of turbulence and noise

    NASA Technical Reports Server (NTRS)

    Berman, C.; Gordon, G.; Karniadakis, G.; Batcho, P.; Jackson, E.; Orszag, S.

    1991-01-01

    Jet exhaust turbulence noise is computed using a time-dependent solution of the three-dimensional Navier-Stokes equations to supply the source terms for an acoustic computation based on the Phillips convected wave equation. An extrapolation procedure is then used to determine the far-field noise spectrum in terms of the near-field sound. This lays the groundwork for studies of more complex flows typical of noise-suppression nozzles.

  1. The influence of initial conditions on dispersion and reactions

    NASA Astrophysics Data System (ADS)

    Wood, B. D.

    2016-12-01

    In various generalizations of the reaction-dispersion problem, researchers have developed frameworks in which the apparent dispersion coefficient can be negative. Such dispersion coefficients raise several difficult questions. Most importantly, the presence of a negative dispersion coefficient at the macroscale leads to a macroscale representation that exhibits an apparent decrease in entropy with increasing time; this appears to violate basic thermodynamic principles. In addition, the proposition of a negative dispersion coefficient leads to an inherently ill-posed mathematical transport equation. The ill-posedness arises because there is no unique initial condition that corresponds to a later-time concentration distribution (assuming that discontinuous initial conditions are allowed). In this presentation, we explain how the phenomenon of negative dispersion coefficients actually arises because the governing differential equation for early times should, when derived correctly, incorporate a term that depends upon the initial and boundary conditions. The process of reaction introduces a similar phenomenon, where the structure of the initial and boundary conditions influences the form of the macroscopic balance equations. When upscaling is done properly, new equations are developed that include source terms that are not present in the classical (late-time) reaction-dispersion equation. These source terms depend upon the structure of the initial condition of the reacting species, and they decrease exponentially in time (thus, the equations converge to the conventional ones at asymptotic times). With this formulation, the resulting dispersion tensor is always positive-semi-definite, and the reaction terms directly incorporate information about the state of mixedness of the system. This formulation avoids many of the problems engendered by negative-definite dispersion tensors, and properly represents the effective rate of reaction at early times.
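A minimal numerical sketch of such an upscaled model, with a hypothetical initial-condition-dependent source s0(x)·exp(-t/τ) added to a classical 1D dispersion equation (all function names and parameter values here are invented for illustration, not taken from the work above):

```python
import numpy as np

def dispersion_step(c, D, dx, dt):
    """One explicit finite-difference step of dc/dt = D d2c/dx2 (periodic ends)."""
    lap = (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    return c + dt * D * lap

def solve_with_early_time_source(c0, s0, tau, D, dx, dt, nsteps):
    """Evolve c under classical dispersion plus an initial-condition-dependent
    source s0(x) * exp(-t / tau); for t >> tau the extra term vanishes and the
    classical (late-time) dispersion equation is recovered."""
    c = c0.copy()
    for n in range(nsteps):
        t = n * dt
        c = dispersion_step(c, D, dx, dt) + dt * s0 * np.exp(-t / tau)
    return c
```

Because the correction decays exponentially, runs with and without it agree at asymptotic times, which is the convergence property claimed above.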

  2. The prediction of the noise of supersonic propellers in time domain - New theoretical results

    NASA Technical Reports Server (NTRS)

    Farassat, F.

    1983-01-01

    In this paper, a new formula for the prediction of the noise of supersonic propellers is derived in the time domain which is superior to the previous formulations in several respects. The governing equation is based on the Ffowcs Williams-Hawkings (FW-H) equation with the thickness source term replaced by an equivalent loading source term derived by Isom (1975). Using some results of generalized function theory and simple four-dimensional space-time geometry, the formal solution of the governing equation is manipulated to a form requiring only the knowledge of blade surface pressure data and geometry. The final form of the main result of this paper consists of some surface and line integrals. The surface integrals depend on the surface pressure, time rate of change of surface pressure, and surface pressure gradient. These integrals also involve blade surface curvatures. The line integrals which depend on local surface pressure are along the trailing edge, the shock traces on the blade, and the perimeter of the airfoil section at the inner radius of the blade. The new formulation is for the full blade surface and does not involve any numerical observer time differentiation. The method of implementation on a computer for numerical work is also discussed.

  3. Class of self-limiting growth models in the presence of nonlinear diffusion

    NASA Astrophysics Data System (ADS)

    Kar, Sandip; Banik, Suman Kumar; Ray, Deb Shankar

    2002-06-01

    The source term in a reaction-diffusion system, in general, does not involve explicit time dependence. A class of self-limiting growth models dealing with animal and tumor growth and bacterial population in a culture, on the other hand, are described by kinetics with explicit functions of time. We analyze a reaction-diffusion system to study the propagation of spatial front for these models.

  4. The evolution of methods for noise prediction of high speed rotors and propellers in the time domain

    NASA Technical Reports Server (NTRS)

    Farassat, F.

    1986-01-01

    Linear wave equation models which have been used over the years at NASA Langley for describing noise emissions from high speed rotating blades are summarized. The noise sources are assumed to lie on a moving surface, and analysis of the situation has been based on the Ffowcs Williams-Hawkings (FW-H) equation. Although the equation accounts for two surface and one volume source, the NASA analyses have considered only the surface terms. Several variations on the FW-H model are delineated for various types of applications, noting the computational benefits of removing the frequency dependence of the calculations. Formulations are also provided for compact and noncompact sources, and features of Long's subsonic integral equation and Farassat's high speed integral equation are discussed. The selection of subsonic or high speed models is dependent on the Mach number of the blade surface where the source is located.

  5. Development of a Hard X-ray Beam Position Monitor for Insertion Device Beams at the APS

    NASA Astrophysics Data System (ADS)

    Decker, Glenn; Rosenbaum, Gerd; Singh, Om

    2006-11-01

    Long-term pointing stability requirements at the Advanced Photon Source (APS) are very stringent, at the level of 500 nanoradians peak-to-peak or better over a one-week time frame. Conventional rf beam position monitors (BPMs) close to the insertion device source points are incapable of assuring this level of stability, owing to mechanical, thermal, and electronic stability limitations. Insertion device gap-dependent systematic errors associated with the present ultraviolet photon beam position monitors similarly limit their ability to control long-term pointing stability. We report on the development of a new BPM design sensitive only to hard x-rays. Early experimental results will be presented.

  6. The long-term performance degradation of a radioisotope thermoelectric generator using silicon germanium

    NASA Technical Reports Server (NTRS)

    Stapfer, G.; Truscello, V. C.

    1976-01-01

    The successful utilization of a radioisotope thermoelectric generator (RTG) as the power source for spaceflight missions requires that the performance of such an RTG be predictable throughout the mission. Several mechanisms occur within the generator which tend to degrade the performance as a function of operating time. The impact which these mechanisms have on the available output power of an RTG depends primarily on such factors as time, temperature and self-limiting effects. The relative magnitudes, rates and temperature dependency of these various degradation mechanisms have been investigated separately by coupon experiments as well as 4-couple and 18-couple module experiments. This paper discusses the different individual mechanisms and summarizes their combined influence on the performance of an RTG. Also presented as part of the RTG long-term performance characteristics is the sensitivity of the available RTG output power to variations of the individual degradation mechanisms thus identifying the areas of greatest concern for a successful long-term mission.

  7. An interpretation of induced electric currents in long pipelines caused by natural geomagnetic sources of the upper atmosphere

    USGS Publications Warehouse

    Campbell, W.H.

    1986-01-01

    Electric currents in long pipelines can contribute to corrosion effects that limit the pipe's lifetime. One cause of such electric currents is the geomagnetic field variations that have sources in the Earth's upper atmosphere. Knowledge of the general behavior of the sources allows a prediction of the occurrence times, favorable locations for the pipeline effects, and long-term projections of corrosion contributions. The source spectral characteristics, the Earth's conductivity profile, and a corrosion-frequency dependence limit the period range of the natural field changes that affect the pipe. The corrosion contribution by induced currents from geomagnetic sources should be evaluated for pipelines that are located at high and at equatorial latitudes. At midlatitude locations, the times of these natural current maxima should be avoided for the necessary accurate monitoring of the pipe-to-soil potential. © 1986 D. Reidel Publishing Company.

  8. Inverse source problems in elastodynamics

    NASA Astrophysics Data System (ADS)

    Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao

    2018-04-01

    We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are demonstrated in both two and three dimensions.

  9. Estimation of the time-dependent radioactive source-term from the Fukushima nuclear power plant accident using atmospheric transport modelling

    NASA Astrophysics Data System (ADS)

    Schoeppner, M.; Plastino, W.; Budano, A.; De Vincenzi, M.; Ruggieri, F.

    2012-04-01

    Several nuclear reactors at the Fukushima Dai-ichi power plant were severely damaged by the Tōhoku earthquake and the subsequent tsunami in March 2011. Due to the extremely difficult on-site situation, it has not been possible to determine the emissions of radioactive material directly. However, during the following days and weeks radionuclides of caesium-137 and iodine-131 (amongst others) were detected at monitoring stations throughout the world. Atmospheric transport models are able to simulate the worldwide dispersion of particles according to location, time and meteorological conditions following the release. The Lagrangian atmospheric transport model Flexpart is used by many authorities and has been proven to make valid predictions in this regard. The Flexpart software was first ported to a local cluster computer at the Grid Lab of INFN and the Department of Physics of University of Roma Tre (Rome, Italy) and subsequently also to the European Mediterranean Grid (EUMEDGRID). With this computing power available, it has been possible to simulate the transport of particles originating from the Fukushima Dai-ichi plant site. Using the time series of the sampled concentration data and the assumption that the Fukushima accident was the only source of these radionuclides, it has been possible to estimate the time-dependent source-term for the fourteen days following the accident using the atmospheric transport model. A reasonable agreement has been obtained between the modelling results and the estimated radionuclide release rates from the Fukushima accident.

  10. Linear response of entanglement entropy from holography

    NASA Astrophysics Data System (ADS)

    Lokhande, Sagar F.; Oling, Gerben W. J.; Pedraza, Juan F.

    2017-10-01

    For time-independent excited states in conformal field theories, the entanglement entropy of small subsystems satisfies a `first law'-like relation, in which the change in entanglement is proportional to the energy within the entangling region. Such a law holds for time-dependent scenarios as long as the state is perturbatively close to the vacuum, but is not expected otherwise. In this paper we use holography to investigate the spread of entanglement entropy for unitary evolutions of special physical interest, the so-called global quenches. We model these using AdS-Vaidya geometries. We find that the first law of entanglement is replaced by a linear response relation, in which the energy density takes the role of the source and is integrated against a time-dependent kernel with compact support. For adiabatic quenches the standard first law is recovered, while for rapid quenches the linear response includes an extra term that encodes the process of thermalization. This extra term has properties that resemble a time-dependent `relative entropy'. We propose that this quantity serves as a useful order parameter to characterize far-from-equilibrium excited states. We illustrate our findings with concrete examples, including generic power-law and periodically driven quenches.
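The linear response relation described above, energy density integrated against a time-dependent kernel with compact support, can be sketched as a discrete convolution (the kernel and discretization below are illustrative assumptions, not the holographic result):

```python
import numpy as np

def entanglement_response(energy_density, kernel, dt):
    """delta S(t) = sum_{t'} K(t - t') * eps(t') * dt: a discrete linear-response
    convolution with a compact-support kernel, truncated to the input length."""
    return dt * np.convolve(energy_density, kernel)[: len(energy_density)]
```

When the energy density settles to a constant and the kernel support has passed, the response saturates at the kernel integral times the energy density, recovering the first-law-like proportionality in the adiabatic limit.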

  11. The generation of gravitational waves. I - Weak-field sources

    NASA Technical Reports Server (NTRS)

    Thorne, K. S.; Kovacs, S. J.

    1975-01-01

    This paper derives and summarizes a 'plug-in-and-grind' formalism for calculating the gravitational waves emitted by any system with weak internal gravitational fields. If the internal fields have negligible influence on the system's motions, the formalism reduces to standard 'linearized theory'. Independent of the effects of gravity on the motions, the formalism reduces to the standard 'quadrupole-moment formalism' if the motions are slow and internal stresses are weak. In the general case, the formalism expresses the radiation in terms of a retarded Green's function for slightly curved spacetime and breaks the Green's function integral into five easily understood pieces: direct radiation, produced directly by the motions of the source; whump radiation, produced by the 'gravitational stresses' of the source; transition radiation, produced by a time-changing time delay ('Shapiro effect') in the propagation of the nonradiative 1/r field of the source; focusing radiation, produced when one portion of the source focuses, in a time-dependent way, the nonradiative field of another portion of the source; and tail radiation, produced by 'back-scatter' of the nonradiative field in regions of focusing.

  12. The generation of gravitational waves. 1. Weak-field sources: A plug-in-and-grind formalism

    NASA Technical Reports Server (NTRS)

    Thorne, K. S.; Kovacs, S. J.

    1974-01-01

    A plug-in-and-grind formalism is derived for calculating the gravitational waves emitted by any system with weak internal gravitational fields. If the internal fields have negligible influence on the system's motions, then the formalism reduces to standard linearized theory. Whether or not gravity affects the motions, if the motions are slow and internal stresses are weak, then the new formalism reduces to the standard quadrupole-moment formalism. In the general case the new formalism expresses the radiation in terms of a retarded Green's function for slightly curved spacetime, and then breaks the Green's-function integral into five easily understood pieces: direct radiation, produced directly by the motions of the sources; whump radiation, produced by the gravitational stresses of the source; transition radiation, produced by a time-changing time delay (Shapiro effect) in the propagation of the nonradiative, 1/r field of the source; focussing radiation, produced when one portion of the source focusses, in a time-dependent way, the nonradiative field of another portion of the source; and tail radiation, produced by backscatter of the nonradiative field in regions of focussing.

  13. EXPERIENCES FROM THE SOURCE-TERM ANALYSIS OF A LOW AND INTERMEDIATE LEVEL RADWASTE DISPOSAL FACILITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jin Beak; Park, Joo-Wan; Lee, Eun-Young

    2003-02-27

    Enhancement of a computer code SAGE for evaluation of the Korean concept for a LILW waste disposal facility is discussed. Several features of source term analysis are embedded into SAGE to analyze: (1) effects of the degradation mode of an engineered barrier, (2) effects of dispersion phenomena in the unsaturated zone and (3) effects of a time-dependent sorption coefficient in the unsaturated zone. IAEA's Vault Safety Case (VSC) approach is used to demonstrate the ability of this assessment code. Results of MASCOT are used for comparison purposes. These enhancements of the safety assessment code, SAGE, can contribute to realistic evaluation of the Korean concept of the LILW disposal project in the near future.

  14. A time-dependent neutron transport method of characteristics formulation with time derivative propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Adam J., E-mail: adamhoff@umich.edu; Lee, John C., E-mail: jcl@umich.edu

    2016-02-15

    A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.
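The backward differentiation formula (BDF) used to approximate the source time derivatives can be sketched with the standard differentiation weights; the function below is a generic illustration, not code from DeCART:

```python
# Standard BDF weights for du/dt at the newest time level, applied to the
# current value and k previous values on a uniform grid of step dt.
BDF_COEFFS = {
    1: [1.0, -1.0],
    2: [1.5, -2.0, 0.5],
    3: [11.0 / 6.0, -3.0, 1.5, -1.0 / 3.0],
}

def bdf_time_derivative(history, dt, order):
    """Approximate du/dt from history = [u^n, u^(n-1), ...] (newest first)."""
    coeffs = BDF_COEFFS[order]
    return sum(c * u for c, u in zip(coeffs, history)) / dt
```

An order-k formula is exact for polynomials of degree up to k, which is what "high-order accuracy" buys over the simple backward difference.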

  15. Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2014-01-01

    Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released in the atmosphere during the accident of the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such critical context where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, the retrieved source term being very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq with an estimated standard deviation range of 15-20% depending on the method and the data sets. The “blind” time intervals of the source term have also been strongly mitigated compared to the first estimations with only activity concentration data.

  16. A new aerodynamic integral equation based on an acoustic formula in the time domain

    NASA Technical Reports Server (NTRS)

    Farassat, F.

    1984-01-01

    An aerodynamic integral equation for bodies moving at transonic and supersonic speeds is presented. Based on a time-dependent acoustic formula for calculating the noise emanating from the outer portion of a propeller blade travelling at high speed (the Ffowcs Williams-Hawkings formulation), the loading term and a conventional thickness source term are retained. Two surface and three line integrals are employed to solve an equation for the loading noise. The near-field term is regularized using the collapsing sphere approach to obtain semiconvergence on the blade surface. A singular integral equation is thereby derived for the unknown surface pressure, and is amenable to numerical solutions using Galerkin or collocation methods. The technique is useful for studying the nonuniform inflow to the propeller.

  17. Time Variations in Forecasts and Occurrences of Large Solar Energetic Particle Events

    NASA Astrophysics Data System (ADS)

    Kahler, S. W.

    2015-12-01

    The onsets and development of large solar energetic (E > 10 MeV) particle (SEP) events have been characterized in many studies. The statistics of SEP event onset delay times from associated solar flares and coronal mass ejections (CMEs), which depend on solar source longitudes, can be used to provide better predictions of whether a SEP event will occur following a large flare or fast CME. In addition, size distributions of peak SEP event intensities provide a means for a probabilistic forecast of peak intensities attained in observed SEP increases. SEP event peak intensities have been compared with their rise and decay times for insight into the acceleration and transport processes. These two time scales are generally treated as independent parameters describing the development of a SEP event, but we can invoke an alternative two-parameter description based on the assumption that decay times exceed rise times for all events. These two parameters, from the well known Weibull distribution, provide an event description in terms of its basic shape and duration. We apply this distribution to several large SEP events and ask what the characteristic parameters and their dependence on source longitudes can tell us about the origins of these important events.
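A small sketch of the two-parameter Weibull description, with shape and scale standing in for the event's "basic shape and duration" (the parameter values are invented, and the intensity profile is simply proportional to the Weibull density):

```python
import numpy as np

def weibull_profile(t, shape, scale):
    """SEP intensity profile proportional to the Weibull(shape, scale) density."""
    t = np.asarray(t, dtype=float)
    return (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-(t / scale) ** shape)

def peak_time(shape, scale):
    """Time of maximum intensity (valid for shape > 1)."""
    return scale * ((shape - 1) / shape) ** (1.0 / shape)
```

For shape > 1 the density rises to a single peak and decays more slowly than it rose, consistent with the assumption above that decay times exceed rise times.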

  18. Local time dependence of turbulent magnetic fields in Saturn's magnetodisc

    NASA Astrophysics Data System (ADS)

    Kaminker, V.; Delamere, P. A.; Ng, C. S.; Dennis, T.; Otto, A.; Ma, X.

    2017-04-01

    Net plasma transport in magnetodiscs around giant planets is outward. Observations of plasma temperature have shown that the expanding plasma is heating nonadiabatically during this process. Turbulence has been suggested as a source of heating. However, the mechanism and distribution of magnetic fluctuations in giant magnetospheres are poorly understood. In this study we attempt to quantify the radial and local time dependence of fluctuating magnetic field signatures that are suggestive of turbulence, quantifying the fluctuations in terms of a plasma heating rate density. In addition, the inferred heating rate density is correlated with magnetic field configurations that include azimuthal bend forward/back and magnitude of the equatorial normal component of magnetic field relative to the dipole. We find a significant local time dependence in magnetic fluctuations that is consistent with flux transport triggered in the subsolar and dusk sectors due to magnetodisc reconnection.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabrera-Palmer, Belkis

    Predicting the performance of radiation detection systems at field sites based on measured performance acquired under controlled conditions at test locations, e.g., the Nevada National Security Site (NNSS), remains an unsolved and standing issue within DNDO’s testing methodology. Detector performance can be defined in terms of the system’s ability to detect and/or identify a given source or set of sources, and depends on the signal generated by the detector for the given measurement configuration (i.e., source strength, distance, time, surrounding materials, etc.) and on the quality of the detection algorithm. Detector performance is usually evaluated in the performance and operational testing phases, where the measurement configurations are selected to represent radiation source and background configurations of interest to security applications.

  20. Tuning Into Brown Dwarfs: Long-Term Radio Monitoring of Two Very Low Mass Dwarfs

    NASA Astrophysics Data System (ADS)

    Van Linge, Russell; Burgasser, Adam J.; Melis, Carl; Williams, Peter K. G.

    2017-01-01

    The very lowest-mass (VLM) stars and brown dwarfs, with effective temperatures T < 3000 K, exhibit mixed magnetic activity trends, with H-alpha and X-ray emission that declines rapidly beyond type M7/M8, but persistent radio emission in roughly 10-20% of sources. The dozen or so VLM radio emitters known show a broad range of emission characteristics and time-dependent behavior, including steady persistent emission, periodic oscillations, periodic polarized bursts, and aperiodic flares. Understanding the evolution of these variability patterns, and in particular whether they undergo solar-like cycles, requires long-term monitoring. We report the results of a long-term JVLA monitoring program of two magnetically-active VLM dwarf binaries, the young M7 2MASS 1314+1320AB and older L5 2MASS 1315-2649AB. On the bi-weekly cadence, 2MASS 1314 continues to show variability by revealing regular flaring while 2MASS 1315 continues to be a quiescent emitter. On the daily time scale, both sources show a mean flux density that can vary significantly just over a few days. These results suggest long-term radio behavior in radio-emitting VLM dwarfs is just as diverse and complex as short-term behavior.

  1. Comparative analysis between saliva and buccal swabs as source of DNA: lesson from HLA-B*57:01 testing.

    PubMed

    Cascella, Raffaella; Stocchi, Laura; Strafella, Claudia; Mezzaroma, Ivano; Mannazzu, Marco; Vullo, Vincenzo; Montella, Francesco; Parruti, Giustino; Borgiani, Paola; Sangiuolo, Federica; Novelli, Giuseppe; Pirazzoli, Antonella; Zampatti, Stefania; Giardina, Emiliano

    2015-01-01

    Our work aimed to designate the optimal DNA source for pharmacogenetic assays, such as the screening for the HLA-B*57:01 allele. A saliva sample and four buccal swab samples were taken from 104 patients. All the samples were stored at different time and temperature conditions and then genotyped for the HLA-B*57:01 allele by SSP-PCR and classical/capillary electrophoresis. The genotyping analysis reported different performance rates depending on the storage conditions of the samples. Given our results, the buccal swab proved more resistant and more stable over time than saliva. Our investigation designates the buccal swab as the optimal DNA source for pharmacogenetic assays in terms of resistance, low infectivity, low invasiveness, easy sampling, and safe transport to centralized medical centers providing specialized pharmacogenetic tests.

  2. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
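As an illustration of the structure such a model must handle, here is a deliberately simplified finite-volume sketch of a 1D settler with a nonlinear hindered-settling flux and a singular feed source deposited into one cell. The Vesilind-type flux, the parameter values, and the plain upwind flux (adequate only where the flux function is increasing) are all illustrative assumptions, not the consistent numerical method of the paper:

```python
import numpy as np

def vesilind_flux(c, v0=2.0, q=0.5):
    """Hindered-settling batch flux f(c) = v0 * c * exp(-q * c) (a common choice)."""
    return v0 * c * np.exp(-q * c)

def settler_step(c, dx, dt, feed_cell, feed_rate):
    """One explicit step of dc/dt + df/dx = s * delta(x - x_feed) on a closed
    column: simple upwind fluxes for downward settling, with the singular feed
    term deposited into a single cell as feed_rate / dx."""
    f = vesilind_flux(c)
    flux = np.zeros(len(c) + 1)      # interface fluxes; closed top and bottom
    flux[1:-1] = f[:-1]              # material settles from cell i into cell i+1
    cn = c - dt / dx * (flux[1:] - flux[:-1])
    cn[feed_cell] += dt * feed_rate / dx
    return cn
```

Even this crude version conserves mass exactly (total sludge mass grows only by the feed), which is one of the consistency properties a reliable SST scheme must preserve.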

  3. Radiation equivalent dose simulations for long-term interplanetary flights

    NASA Astrophysics Data System (ADS)

    Dobynde, M. I.; Drozdov, A.; Shprits, Y. Y.

    2016-12-01

    Cosmic particle radiation is a limiting factor for human interplanetary flights. Unmanned flights within the heliosphere and human flights within the magnetosphere are becoming routine, whereas there have been only a few short-term human flights beyond the magnetosphere (the Apollo missions, 1969-1972), each lasting less than a month. Long-term human flights place much higher demands on radiation shielding, primarily because of the long exposure to cosmic radiation. Inside the heliosphere there are two main sources of cosmic radiation: galactic cosmic rays (GCR) and solar particle events (SPE). GCR arrive from outside the heliosphere, forming a background of radiation that affects the spacecraft. The intensity of GCR varies inversely with solar activity, with a modulation period (the time between successive maxima) of 11 years. SPE are short-term events compared to the GCR modulation time, but their particle fluxes are much higher. The probability of an SPE increases with increasing solar activity. The time dependence of these two components encourages a search for a flight window during which the combined intensity and effect of GCR and SPE are minimized. Combining GEANT4 Monte Carlo simulations with a time-dependent model of GCR spectra and data on SPE spectra, we show the time dependence of the radiation dose in an anthropomorphic human phantom inside the shielding capsule. Different particle types harm tissue to different degrees. We use quality factors to convert absorbed dose into biological equivalent dose, which gives more information about the risks to astronauts' health. Incident particles produce large numbers of secondary particles while propagating through the shielding capsule. We try to find an optimal combination of shielding material and thickness that effectively decreases the incident particle energy while minimizing the flux of induced secondary particles, particularly of the most harmful types.
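The quality-factor conversion from absorbed to equivalent dose mentioned above amounts to a weighted sum over particle types; the weighting values below are illustrative ICRP-style numbers (photons 1, protons 2, alpha particles 20), not those used in the study:

```python
# Illustrative radiation weighting (quality) factors; treat as examples only.
QUALITY_FACTORS = {"photon": 1.0, "proton": 2.0, "alpha": 20.0}

def equivalent_dose(absorbed_dose_gray):
    """Biological equivalent dose (Sv) = sum over particle types of the
    absorbed dose (Gy) weighted by that type's quality factor."""
    return sum(QUALITY_FACTORS[p] * d for p, d in absorbed_dose_gray.items())
```

The weighting is why a small absorbed dose of heavy secondaries can dominate the health risk over a larger dose of photons.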

  4. Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne

    2012-03-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10^16 Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10^17 Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.
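Maximum-likelihood estimation of prior error amplitudes can be sketched for a purely linear Gaussian model. This toy version ignores the positivity constraint in the likelihood itself (unlike the semi-Gaussian treatment in the paper) and merely clips the retrieved source; all dimensions and values are invented:

```python
import numpy as np

def marginal_neg_log_likelihood(y, H, r2, m2):
    """-log p(y) up to a constant, under y = H s + eps with prior s ~ N(0, m2 I)
    and observation error eps ~ N(0, r2 I); r2 and m2 are the error amplitudes
    one would select by maximum likelihood."""
    S = r2 * np.eye(len(y)) + m2 * H @ H.T
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * (logdet + y @ np.linalg.solve(S, y))

def estimate_source(y, H, r2, m2):
    """Posterior-mean source for the chosen amplitudes, clipped to s >= 0."""
    A = H.T @ H + (r2 / m2) * np.eye(H.shape[1])
    return np.clip(np.linalg.solve(A, H.T @ y), 0.0, None)
```

In practice the amplitudes (r2, m2) would be chosen by minimizing the negative log-likelihood, after which the retrieved source and its a posteriori uncertainty inherit that error model.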

  5. Dynamical initial-state model for relativistic heavy-ion collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Chun; Schenke, Bjorn

    We present a fully three-dimensional model providing initial conditions for energy and net-baryon density distributions in heavy ion collisions at arbitrary collision energy. The model includes the dynamical deceleration of participating nucleons or valence quarks, depending on the implementation. The duration of the deceleration continues until the string spanned between colliding participants is assumed to thermalize, which is either after a fixed proper time, or a fluctuating time depending on sampled final rapidities. Energy is deposited in space-time along the string, which in general will span a range of space-time rapidities and proper times. We study various observables obtained directly from the initial state model, including net-baryon rapidity distributions, 2-particle rapidity correlations, as well as the rapidity decorrelation of the transverse geometry. Their dependence on the model implementation and parameter values is investigated. Here, we also present the implementation of the model with 3+1 dimensional hydrodynamics, which involves the addition of source terms that deposit energy and net-baryon densities produced by the initial state model at proper times greater than the initial time for the hydrodynamic simulation.

  6. Dynamical initial-state model for relativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Shen, Chun; Schenke, Björn

    2018-02-01

    We present a fully three-dimensional model providing initial conditions for energy and net-baryon density distributions in heavy-ion collisions at arbitrary collision energy. The model includes the dynamical deceleration of participating nucleons or valence quarks, depending on the implementation. The duration of the deceleration continues until the string spanned between colliding participants is assumed to thermalize, which is either after a fixed proper time, or a fluctuating time depending on sampled final rapidities. Energy is deposited in space-time along the string, which in general will span a range of space-time rapidities and proper times. We study various observables obtained directly from the initial-state model, including net-baryon rapidity distributions, two-particle rapidity correlations, as well as the rapidity decorrelation of the transverse geometry. Their dependence on the model implementation and parameter values is investigated. We also present the implementation of the model with 3+1-dimensional hydrodynamics, which involves the addition of source terms that deposit energy and net-baryon densities produced by the initial-state model at proper times greater than the initial time for the hydrodynamic simulation.

  7. Dynamical initial-state model for relativistic heavy-ion collisions

    DOE PAGES

    Shen, Chun; Schenke, Bjorn

    2018-02-15

    We present a fully three-dimensional model providing initial conditions for energy and net-baryon density distributions in heavy-ion collisions at arbitrary collision energy. The model includes the dynamical deceleration of participating nucleons or valence quarks, depending on the implementation. The duration of the deceleration continues until the string spanned between colliding participants is assumed to thermalize, which is either after a fixed proper time, or a fluctuating time depending on sampled final rapidities. Energy is deposited in space-time along the string, which in general will span a range of space-time rapidities and proper times. We study various observables obtained directly from the initial-state model, including net-baryon rapidity distributions, two-particle rapidity correlations, as well as the rapidity decorrelation of the transverse geometry. Their dependence on the model implementation and parameter values is investigated. Here, we also present the implementation of the model with 3+1-dimensional hydrodynamics, which involves the addition of source terms that deposit energy and net-baryon densities produced by the initial-state model at proper times greater than the initial time for the hydrodynamic simulation.

  8. Neuroimaging Evidence for Agenda-Dependent Monitoring of Different Features during Short-Term Source Memory Tests

    ERIC Educational Resources Information Center

    Mitchell, Karen J.; Raye, Carol L.; McGuire, Joseph T.; Frankel, Hillary; Greene, Erich J.; Johnson, Marcia K.

    2008-01-01

    A short-term source monitoring procedure with functional magnetic resonance imaging assessed neural activity when participants made judgments about the format of 1 of 4 studied items (picture, word), the encoding task performed (cost, place), or whether an item was old or new. The results support findings from long-term memory studies showing that…

  9. SEARCHES FOR TIME-DEPENDENT NEUTRINO SOURCES WITH ICECUBE DATA FROM 2008 TO 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aartsen, M. G.; Ackermann, M.; Adams, J.

    2015-07-01

    In this paper searches for flaring astrophysical neutrino sources and sources with periodic emission with the IceCube neutrino telescope are presented. In contrast to time-integrated searches, where steady emission is assumed, the analyses presented here look for a time-dependent signal of neutrinos using the information from the neutrino arrival times to enhance the discovery potential. A search was performed for correlations between neutrino arrival times and directions, as well as neutrino emission following time-dependent light curves, sporadic emission, or periodicities of candidate sources. These include active galactic nuclei, soft γ-ray repeaters, supernova remnants hosting pulsars, microquasars, and X-ray binaries. The work presented here updates and extends previously published results to a longer period that covers 4 years of data from 2008 April 5 to 2012 May 16, including the first year of operation of the completed 86-string detector. The analyses did not find any significant time-dependent point sources of neutrinos, and the results were used to set upper limits on the neutrino flux from source candidates.

  10. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal and internal/external flow calculations are presented.
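The explicit MacCormack scheme mentioned above is a two-step predictor-corrector method. The following minimal sketch applies it to 1-D linear advection on a periodic grid; the grid, pulse, and Courant number are illustrative choices and are unrelated to VNAP2 itself.

```python
import numpy as np

# Illustrative sketch of the explicit MacCormack predictor-corrector scheme
# applied to the 1-D linear advection equation u_t + a u_x = 0 on a
# periodic grid.  Predictor uses a forward difference; corrector applies a
# backward difference to the predicted values.

nx, a, cfl = 200, 1.0, 0.8
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / a
nu = a * dt / dx                              # Courant number
u = np.exp(-200.0 * (x - 0.3) ** 2)           # initial Gaussian pulse

for _ in range(100):
    # predictor step: forward difference
    us = u - nu * (np.roll(u, -1) - u)
    # corrector step: backward difference on the predicted values
    u = 0.5 * (u + us - nu * (us - np.roll(us, 1)))

print(u.sum() * dx)   # total "mass", conserved exactly on the periodic grid
```

For the linear advection equation this scheme is equivalent to Lax-Wendroff and is stable for Courant numbers up to one; the pulse simply translates, with some dispersive wiggles near steep gradients.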

  11. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, was developed to calculate high Reynolds number, internal/external flows. The VNAP2 program solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  12. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, developed to calculate high Reynolds number internal/external flows is described. The program solves the two-dimensional, time-dependent Navier-Stokes equations. Turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  13. 3-D time-domain induced polarization tomography: a new approach based on a source current density formulation

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Revil, A.

    2018-04-01

    Induced polarization (IP) of porous rocks can be associated with a secondary source current density, which is proportional to both the intrinsic chargeability and the primary (applied) current density. This gives the possibility of reformulating the time domain induced polarization (TDIP) problem as a time-dependent self-potential-type problem. This new approach implies a change of strategy regarding data acquisition and inversion, allowing major time savings for both. For inverting TDIP data, we first retrieve the electrical resistivity distribution. Then, we use this electrical resistivity distribution to reconstruct the primary current density during the injection/retrieval of the (primary) current between the current electrodes A and B. The time-lapse secondary source current density distribution is determined given the primary source current density and a distribution of chargeability (forward modelling step). The inverse problem is linear between the secondary voltages (measured at all the electrodes) and the computed secondary source current density. A kernel matrix relating the secondary observed voltages data to the source current density model is computed once (using the electrical conductivity distribution), and then used throughout the inversion process. This recovered source current density model is in turn used to estimate the time-dependent chargeability (normalized voltages) in each cell of the domain of interest. Assuming a Cole-Cole model for simplicity, we can reconstruct the 3-D distributions of the relaxation time τ and the Cole-Cole exponent c by fitting the intrinsic chargeability decay curve to a Cole-Cole relaxation model for each cell. Two simple cases are studied in detail to explain this new approach. In the first case, we estimate the Cole-Cole parameters as well as the source current density field from a synthetic TDIP data set.
Our approach is successfully able to reveal the presence of the anomaly and to invert its Cole-Cole parameters. In the second case, we perform a laboratory sandbox experiment in which we mix a volume of burning coal and sand. The algorithm is able to localize the burning coal both in terms of electrical conductivity and chargeability.
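As a toy version of the Cole-Cole fitting step described above, the sketch below recovers the parameters (τ, c) of a relaxation decay η(t) = η₀ E_c(−(t/τ)^c), with the Mittag-Leffler function E_c evaluated by its power series, from synthetic noise-free data by grid search. All values and grids are invented for illustration and do not come from the paper.

```python
import numpy as np
from math import gamma

# Hedged sketch: fitting Cole-Cole relaxation parameters (tau, c) to an
# intrinsic chargeability decay eta(t) = eta0 * E_c(-(t/tau)^c), where E_c
# is the one-parameter Mittag-Leffler function, evaluated here by its power
# series (adequate for the moderate arguments used below).

def mittag_leffler(z, c, n_terms=80):
    """Power-series evaluation of E_c(z) = sum_k z^k / Gamma(1 + c k)."""
    return sum(z**k / gamma(1.0 + c * k) for k in range(n_terms))

def decay(t, eta0, tau, c):
    """Cole-Cole step-off chargeability decay curve."""
    return eta0 * np.array([mittag_leffler(-(ti / tau) ** c, c) for ti in t])

t = np.linspace(0.05, 2.0, 30)                # decay window (s), illustrative
eta_obs = decay(t, 0.1, 0.5, 0.6)             # synthetic, noise-free "data"

taus = np.linspace(0.1, 1.0, 19)
cs = np.linspace(0.3, 0.9, 13)
tau_hat, c_hat = min(((tau, c) for tau in taus for c in cs),
                     key=lambda p: np.sum((decay(t, 0.1, *p) - eta_obs) ** 2))
print(tau_hat, c_hat)
```

In a real inversion this fit would be repeated cell by cell on the recovered intrinsic chargeability decays, and the data would of course be noisy, which is why a least-squares misfit is used rather than exact matching.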

  14. Modelling of Dictyostelium discoideum movement in a linear gradient of chemoattractant.

    PubMed

    Eidi, Zahra; Mohammad-Rafiee, Farshid; Khorrami, Mohammad; Gholami, Azam

    2017-11-15

    Chemotaxis is a ubiquitous biological phenomenon in which cells detect a spatial gradient of chemoattractant, and then move towards the source. Here we present a position-dependent advection-diffusion model that quantitatively describes the statistical features of the chemotactic motion of the social amoeba Dictyostelium discoideum in a linear gradient of cAMP (cyclic adenosine monophosphate). We fit the model to experimental trajectories that are recorded in a microfluidic setup with stationary cAMP gradients and extract the diffusion and drift coefficients in the gradient direction. Our analysis shows that for the majority of gradients, both coefficients decrease over time and become negative as the cells crawl up the gradient. The extracted model parameters also show that besides the expected drift in the direction of the chemoattractant gradient, we observe a nonlinear dependency of the corresponding variance on time, which can be explained by the model. Furthermore, the results of the model show that the non-linear term in the mean squared displacement of the cell trajectories can dominate the linear term on large time scales.

  15. ADER schemes for scalar non-linear hyperbolic conservation laws with source terms in three-space dimensions

    NASA Astrophysics Data System (ADS)

    Toro, E. F.; Titarev, V. A.

    2005-01-01

    In this paper we develop non-linear ADER schemes for time-dependent scalar linear and non-linear conservation laws in one-, two- and three-space dimensions. Numerical results of schemes of up to fifth order of accuracy in both time and space illustrate that the designed order of accuracy is achieved in all space dimensions for a fixed Courant number and essentially non-oscillatory results are obtained for solutions with discontinuities. We also present preliminary results for two-dimensional non-linear systems.

  16. Wave packet dynamics for a non-linear Schrödinger equation describing continuous position measurements

    NASA Astrophysics Data System (ADS)

    Zander, C.; Plastino, A. R.; Díaz-Alonso, J.

    2015-11-01

    We investigate time-dependent solutions for a non-linear Schrödinger equation recently proposed by Nassar and Miret-Artés (NM) to describe the continuous measurement of the position of a quantum particle (Nassar, 2013; Nassar and Miret-Artés, 2013). Here we extend these previous studies in two different directions. On the one hand, we incorporate a potential energy term in the NM equation and explore the corresponding wave packet dynamics, while in the previous works the analysis was restricted to the free-particle case. On the other hand, we investigate time-dependent solutions while previous studies focused on a stationary one. We obtain exact wave packet solutions for linear and quadratic potentials, and approximate solutions for the Morse potential. The free-particle case is also revisited from a time-dependent point of view. Our analysis of time-dependent solutions allows us to determine the stability properties of the stationary solution considered in Nassar (2013), Nassar and Miret-Artés (2013). On the basis of these results we reconsider the Bohmian approach to the NM equation, taking into account the fact that the evolution equation for the probability density ρ = |ψ|^2 is not a continuity equation. We show that the effect of the source term appearing in the evolution equation for ρ has to be explicitly taken into account when interpreting the NM equation from a Bohmian point of view.

  17. Varying efficacy of superdisintegrants in orally disintegrating tablets among different manufacturers.

    PubMed

    Mittapalli, R K; Qhattal, H S Sha; Lockman, P R; Yamsani, M R

    2010-11-01

    The main objective of the present study was to develop an orally disintegrating tablet formulation of domperidone and to study the functionality differences of superdisintegrants each obtained from two different sources on the tablet properties. Domperidone tablets were formulated with different superdisintegrants by direct compression. The effect of the type of superdisintegrant, its concentration and source was studied by measuring the in-vitro disintegration time, wetting time, water absorption ratios, drug release by dissolution and in-vivo oral disintegration time. Tablets prepared with crospovidone had lower disintegration times than tablets prepared from sodium starch glycolate and croscarmellose sodium. Formulations prepared with Polyplasdone XL, Ac-Di-Sol, and Explotab (D series) were better than formulations prepared with superdisintegrants obtained from other sources (DL series), which had longer disintegration times and lower water uptake ratios. The in-vivo disintegration time of formulation D-106 containing Polyplasdone XL was significantly lower than that of the marketed formulation Domel-MT. The results from this study suggest that disintegration of orally disintegrating tablets is dependent on the nature of the superdisintegrant, its concentration in the formulation and its source. Even though a superdisintegrant meets USP standards, there can be variance among manufacturers in terms of performance. This is not only limited to in-vitro studies but carries over to disintegration times in the human population.

  18. Simulating the Heliosphere with Kinetic Hydrogen and Dynamic MHD Source Terms

    DOE PAGES

    Heerikhuisen, Jacob; Pogorelov, Nikolai; Zank, Gary

    2013-04-01

    The interaction between the ionized plasma of the solar wind (SW) emanating from the sun and the partially ionized plasma of the local interstellar medium (LISM) creates the heliosphere. The heliospheric interface is characterized by the tangential discontinuity known as the heliopause that separates the SW and LISM plasmas, and a termination shock on the SW side along with a possible bow shock on the LISM side. Neutral hydrogen of interstellar origin plays a critical role in shaping the heliospheric interface, since it freely traverses the heliopause. Charge-exchange between H-atoms and plasma protons couples the ions and neutrals, but the mean free paths are large, resulting in non-equilibrated energetic ion and neutral components. In our model, source terms for the MHD equations are generated using a kinetic approach for hydrogen, and the key computational challenge is to resolve these sources with sufficient statistics. For steady-state simulations, statistics can accumulate over arbitrarily long time intervals. In this paper we discuss an approach for improving the statistics in time-dependent calculations, and present results from simulations of the heliosphere where the SW conditions at the inner boundary of the computation vary according to an idealized solar cycle.

  19. Time-dependent radiation dose estimations during interplanetary space flights

    NASA Astrophysics Data System (ADS)

    Dobynde, M. I.; Shprits, Y.; Drozdov, A.

    2015-12-01

    Affiliations: Skolkovo Institute of Science and Technology, Moscow, Russia; University of California Los Angeles, Los Angeles, USA; Lomonosov Moscow State University, Skobeltsyn Institute of Nuclear Physics, Moscow, Russia; Massachusetts Institute of Technology, Cambridge, USA. Space radiation is the main restriction for long-term interplanetary space missions. It degrades external components and propagates inside, damaging the internal environment. Space radiation particles, and the secondary particle showers they induce, can harm astronauts in both the short and the long term. The contributions of the two main sources of space radiation, the Sun and out-of-heliosphere space, vary in time in opposite phase with the state of solar activity. Currently the only inhabited mission is the International Space Station, which flies in low Earth orbit. Besides the station shell, astronauts are protected by the Earth's magnetosphere, a natural shield that prevents significant damage. Current progress in space exploration tends to lead humanity beyond the bounds of the magnetosphere. In the current study we estimate spacecraft parameters and astronaut damage for long-term interplanetary flights. Applying a time-dependent model of GCR spectra and data on SEP spectra, we show the time dependence of the radiation dose in a human phantom inside the shielding capsule. We pay attention to the shielding capsule design, looking for optimal geometry parameters and materials. Different types of particles affect the human body differently, causing more or less harm to the tissues. Incident particles produce a large number of secondary particles while propagating through the shielding capsule. We attempt to find an optimal combination of shielding capsule parameters, namely material and thickness, that effectively decreases the incident particle energy while minimizing the flux of induced secondary particles and of the most harmful particle types.

  20. A GIS-based time-dependent seismic source modeling of Northern Iran

    NASA Astrophysics Data System (ADS)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 km by 400 km around Tehran. Previous studies and reports are reviewed to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop time-dependent probabilistic seismic hazard analysis of Northern Iran.
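As an illustration of the renewal approach mentioned above, the sketch below computes a conditional rupture probability under the Brownian Passage Time (inverse-Gaussian) model, a common choice for time-dependent fault recurrence (it is also the model used in the Taiwan study in entry 1). The fault parameters are invented and are not taken from the paper.

```python
from math import erf, exp, sqrt

# Hedged sketch of a time-dependent (renewal) recurrence model: conditional
# rupture probability under a Brownian Passage Time (inverse-Gaussian)
# distribution with mean recurrence interval mu and aperiodicity alpha.

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bpt_cdf(t, mu, alpha):
    """CDF of the Brownian Passage Time distribution (shape lambda = mu/alpha^2)."""
    if t <= 0.0:
        return 0.0
    s = alpha * sqrt(t / mu)
    return norm_cdf((t / mu - 1.0) / s) + \
        exp(2.0 / alpha**2) * norm_cdf(-(t / mu + 1.0) / s)

def conditional_probability(t_elapsed, dt, mu, alpha):
    """P(rupture within dt | no rupture during the elapsed time)."""
    f0 = bpt_cdf(t_elapsed, mu, alpha)
    return (bpt_cdf(t_elapsed + dt, mu, alpha) - f0) / (1.0 - f0)

# Illustrative fault: 250-year mean recurrence, aperiodicity 0.5,
# 150 years elapsed since the last rupture, 50-year forecast window.
p50 = conditional_probability(150.0, 50.0, 250.0, 0.5)
print(p50)
```

The conditional probability grows with elapsed time, which is exactly the property that distinguishes such renewal models from the time-independent Poisson rates used for the area sources.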

  1. Lagrangian descriptors in dissipative systems.

    PubMed

    Junginger, Andrej; Hernandez, Rigoberto

    2016-11-09

    The reaction dynamics of time-dependent systems can be resolved through a recrossing-free dividing surface associated with the transition state trajectory, that is, the unique trajectory which is bound to the barrier region for all time in response to a given time-dependent potential. A general procedure based on the minimization of Lagrangian descriptors has recently been developed by Craven and Hernandez [Phys. Rev. Lett., 2015, 115, 148301] to construct this particular trajectory without requiring perturbative expansions relative to the naive transition state point at the top of the barrier. The extension of the method to account for dissipation in the equations of motion requires additional considerations, established in this paper, because the calculation of the Lagrangian descriptor involves the integration of trajectories in forward and backward time. The two contributions are in general very different because the friction term can act as a source (in backward time) or sink (in forward time) of energy, leading to the possibility that information about the phase space structure may be lost due to the dominance of only one of the terms. To compensate for this effect, we introduce a weighting scheme within the Lagrangian descriptor and demonstrate that for thermal Langevin dynamics it preserves the essential phase space structures, while they are lost in the nonweighted case.

  2. The relationship between CDOM and salinity in estuaries: An analytical and graphical solution

    NASA Astrophysics Data System (ADS)

    Bowers, D. G.; Brett, H. L.

    2008-09-01

    The relationship between coloured dissolved organic matter (CDOM) and salinity in an estuary is explored using a simple box model in which the river discharge and concentration of CDOM in the river are allowed to vary with time. The results are presented as analytical and graphical solutions. The behaviour of the estuary depends upon the ratio, β, of the flushing time of the estuary to the timescale of the source variation. For small values of β, the variation in CDOM concentration in the estuary tracks that in the source, producing a linear relationship on a CDOM-salinity plot. As β increases, the estuary struggles to keep up with the changes in the source; and a curved CDOM-salinity plot results. For very large values of β, however, corresponding to estuaries with a long flushing time, the CDOM concentration in the estuary settles down to a mean value which again lies on a straight line on a CDOM-salinity plot (and extrapolates to the time-mean concentration in the source). The results are discussed in terms of the mapping of surface salinity in estuaries through the visible band remote sensing of CDOM.
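The box-model behaviour described above can be reproduced with a few lines of numerical integration. In this sketch (all parameter values invented) the estuary concentration C relaxes toward a sinusoidally varying river source C_r(t) on the flushing timescale T_f, and the peak-to-peak response collapses as β = T_f/T_s grows.

```python
import math

# Illustrative integration of the box model described above:
#   dC/dt = (C_r(t) - C) / T_f,   C_r(t) = 1 + 0.5 sin(2 pi t / T_s).
# beta = T_f / T_s controls whether the estuary tracks the source
# (small beta) or settles toward its time mean (large beta).

def run_box_model(beta, t_s=1.0, n_steps=20000, n_periods=10):
    t_f = beta * t_s
    dt = n_periods * t_s / n_steps
    c = 1.0                                   # start at the mean source value
    recorded = []
    for i in range(n_steps):
        c_river = 1.0 + 0.5 * math.sin(2.0 * math.pi * i * dt / t_s)
        c += dt * (c_river - c) / t_f         # forward-Euler relaxation step
        if i > n_steps // 2:                  # record after spin-up
            recorded.append(c)
    return max(recorded) - min(recorded)      # peak-to-peak response

print(run_box_model(0.05), run_box_model(20.0))
```

For small β the response amplitude nearly matches the source (a straight CDOM-salinity line); for large β it is strongly damped toward the time-mean source concentration, matching the limiting cases discussed in the abstract.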

  3. Time-dependent wave splitting and source separation

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nataf, Frédéric; Assous, Franck

    2017-02-01

    Starting from classical absorbing boundary conditions, we propose a method for the separation of time-dependent scattered wave fields due to multiple sources or obstacles. In contrast to previous techniques, our method is local in space and time, deterministic, and avoids a priori assumptions on the frequency spectrum of the signal. Numerical examples in two space dimensions illustrate the usefulness of wave splitting for time-dependent scattering problems.

  4. Solar Radiation Pressure Estimation and Analysis of a GEO Class of High Area-to-Mass Ratio Debris Objects

    NASA Technical Reports Server (NTRS)

    Kelecy, Tom; Payne, Tim; Thurston, Robin; Stansbery, Gene

    2007-01-01

    A population of deep space objects is thought to be high area-to-mass ratio (AMR) debris having origins from sources in the geosynchronous orbit (GEO) belt. The typical AMR values have been observed to range anywhere from ones to tens of m^2/kg, and hence, higher than average solar radiation pressure effects result in long-term migration of eccentricity (0.1-0.6) and inclination over time. However, the orientation-dependent nature of the debris dynamics also results in time-varying solar radiation forces about the average, which complicate the short-term orbit determination processing. The orbit determination results are presented for several of these debris objects, and highlight their unique and varied dynamic attributes. Estimation of the solar pressure dynamics over time scales suitable for resolving the shorter-term dynamics improves the orbit estimation, and hence, the orbit predictions needed to conduct follow-up observations.

  5. Classification of light sources and their interaction with active and passive environments

    NASA Astrophysics Data System (ADS)

    El-Dardiry, Ramy G. S.; Faez, Sanli; Lagendijk, Ad

    2011-03-01

    Emission from a molecular light source depends on its optical and chemical environment. This dependence is different for various sources. We present a general classification in terms of constant-amplitude and constant-power sources. Using this classification, we have described the response to both changes in the local density of states and stimulated emission. The unforeseen consequences of this classification are illustrated for photonic studies by random laser experiments and are in good agreement with our correspondingly developed theory. Our results require a revision of studies on sources in complex media.

  6. Boundary control of bidomain equations with state-dependent switching source functions in the ionic model

    NASA Astrophysics Data System (ADS)

    Chamakuri, Nagaiah; Engwer, Christian; Kunisch, Karl

    2014-09-01

    Optimal control for cardiac electrophysiology based on the bidomain equations in conjunction with the Fenton-Karma ionic model is considered. This generic ventricular model approximates well the restitution properties and spiral wave behavior of more complex ionic models of cardiac action potentials. However, it is challenging due to the appearance of state-dependent discontinuities in the source terms. A computational framework for the numerical realization of optimal control problems is presented. Essential ingredients are a shape calculus based treatment of the sensitivities of the discontinuous source terms and a marching cubes algorithm to track iso-surface of excitation wavefronts. Numerical results exhibit successful defibrillation by applying an optimally controlled extracellular stimulus.

  7. Source-Free Exchange-Correlation Magnetic Fields in Density Functional Theory.

    PubMed

    Sharma, S; Gross, E K U; Sanna, A; Dewhurst, J K

    2018-03-13

    Spin-dependent exchange-correlation energy functionals in use today depend on the charge density and the magnetization density: E_xc[ρ, m]. However, it is also correct to define the functional in terms of the curl of m for physical external fields: E_xc[ρ, ∇ × m]. The exchange-correlation magnetic field, B_xc, then becomes source-free. We study this variation of the theory by uniquely removing the source term from local and generalized gradient approximations to the functional. By doing so, the total Kohn-Sham moments are improved for a wide range of materials for both functionals. Significantly, the moments for the pnictides are now in good agreement with experiment. This source-free method is simple to implement in all existing density functional theory codes.
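One standard way to realize such a source-free construction is a Helmholtz-type projection; the equations below sketch this idea (the paper's exact procedure may differ in normalization):

```latex
% Removing the source term from an approximate exchange-correlation field:
% solve a Poisson equation for a scalar potential and subtract its gradient,
% leaving a divergence-free (source-free) field.
\nabla^{2}\phi = \nabla \cdot \mathbf{B}_{\mathrm{xc}}, \qquad
\mathbf{B}_{\mathrm{xc}}' = \mathbf{B}_{\mathrm{xc}} - \nabla \phi, \qquad
\nabla \cdot \mathbf{B}_{\mathrm{xc}}' = 0 .
```

Since only the divergence-carrying part is removed, the curl of B_xc, which couples to the physical magnetization, is unchanged.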

  8. MSW-resonant fermion mixing during reheating

    NASA Astrophysics Data System (ADS)

    Kanai, Tsuneto; Tsujikawa, Shinji

    2003-10-01

    We study the dynamics of reheating in which an inflaton field couples two flavor fermions through Yukawa-couplings. When two fermions have a mixing term with a constant coupling, we show that the Mikheyev-Smirnov-Wolfenstein (MSW)-type resonance emerges due to a time-dependent background in addition to the standard fermion creation via parametric resonance. This MSW resonance not only alters the number densities of fermions generated by a preheating process but also can lead to the larger energy transfer from the inflaton to fermions. Our mechanism can provide additional source terms for the creation of superheavy fermions which may be relevant for the leptogenesis scenario.

  9. Refined Source Terms in Wave Watch 3 with Wave Breaking and Sea Spray Forecasts

    DTIC Science & Technology

    2016-08-05

    Farmer at IOS Canada involved a novel scale analysis of breaking waves. This was motivated by the results of the model study of wave breaking onset by... timely development that needs careful examination. 4.11 Highlights of the SPANDEX study: SPANDEX, the Spray Production and Dynamics Experiment, is... speed alone. To accomplish this goal, a parallel laboratory study (SPANDEX II) was undertaken to parameterize sea spray flux dependences on breaking

  10. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Evangeliou, Nikolaos; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2015-04-01

    Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster at the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. Observations and dispersion modelling of the released radionuclides help to assess the regional impact of such nuclear accidents. Modelling the increase of regional radionuclide activity concentrations that results from nuclear accidents is subject to a multiplicity of uncertainties. One of the most significant is the estimation of the source term, that is, the time-dependent quantification of the released spectrum of radionuclides during the course of the nuclear accident. The quantification of the source term may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on estimates given by the operators of the nuclear power plant. Precise measurements are mostly missing due to practical limitations during the accident. The release rates of radionuclides at the accident site can be estimated using inverse modelling (Davoine and Bocquet, 2007). The accuracy of the method depends, among other factors, on the availability and reliability of the observations used and on their resolution in time and space. Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and at higher temporal resolution, and therefore provide a wider basis for inverse modelling (Saunier et al., 2013). We present a new inversion approach, which combines an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides.
The inversion method uses a Bayesian formulation considering uncertainties for the a priori source term and the observations (Eckhardt et al., 2008; Stohl et al., 2012). The a priori information on the source term is a first guess. The gamma dose rate observations are used to improve the first guess and to retrieve a reliable source term. The details of this method will be presented at the conference. This work is funded by the Bundesamt für Strahlenschutz BfS, Forschungsvorhaben 3612S60026. References: Davoine, X. and Bocquet, M., Atmos. Chem. Phys., 7, 1549-1564, 2007. Devell, L., et al., OCDE/GD(96)12, 1995. Eckhardt, S., et al., Atmos. Chem. Phys., 8, 3881-3897, 2008. Saunier, O., et al., Atmos. Chem. Phys., 13, 11403-11421, 2013. Stohl, A., et al., Atmos. Environ., 32, 4245-4264, 1998. Stohl, A., et al., Atmos. Chem. Phys., 5, 2461-2474, 2005. Stohl, A., et al., Atmos. Chem. Phys., 12, 2313-2343, 2012.

  11. About Block Dynamic Model of Earthquake Source.

    NASA Astrophysics Data System (ADS)

    Gusev, G. A.; Gufeld, I. L.

    Little progress has been made in earthquake prediction research. Short-term prediction (on a diurnal timescale, with the location also predicted) is what has practical meaning. The failures to date stem from inadequate notions of the geological medium, particularly its block structure, especially within faults. Geological and geophysical monitoring supports the view of the geological medium as an open, dissipative block system with limiting energy saturation. Variations of the volume stressed state close to critical states are associated with the interaction of an inhomogeneous ascending stream of light gases (helium and hydrogen) with the solid phase, an interaction that is more pronounced in faults. In the background state, small blocks of the fault medium accommodate the sliding of large blocks along the faults. Under considerable variations of the ascending gas streams, however, bound chains of small blocks can form, so that a bound state of the large blocks may result (an earthquake source). Building on these notions, we recently proposed a dynamical earthquake source model based on a generalized chain of nonlinearly coupled oscillators of Fermi-Pasta-Ulam (FPU) type. The generalization concerns the chain's inhomogeneity and different external actions that imitate physical processes in a real source. Earlier, a weakly inhomogeneous approximation without dissipation was considered. The latter permitted study of the FPU recurrence (return to the initial state), and probabilistic properties of the quasi-periodic motion were found. The problem of chain decay due to nonlinearity and external perturbations was posed, the thresholds and the dependence of the chain's lifetime were studied, and large fluctuations of the lifetimes were discovered. In the present paper a rigorous treatment of the inhomogeneous chain, including dissipation, is given. For the strongly dissipative case, in which oscillatory motion is suppressed, specific effects are discovered.
For noise forcing combined with steadily growing deformation, the dependence of the lifetime on the noise amplitude is investigated. For the case of an initial shock, we chose amplitudes such that the shock itself was the principal determinant of the lifetime. In this case the lifetime turned out to depend non-monotonically on the noise amplitude ("temperature"): there is a domain of "temperatures" where the lifetime reaches a maximum. Different dissipation intensities are also compared.
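The chain dynamics described above can be sketched numerically. The following is a minimal illustration, not the authors' code: a damped FPU-α chain with fixed ends, an initial displacement "shock" on the middle oscillator, and optional noise forcing. All parameter values are illustrative.

```python
import random

def simulate_fpu_chain(n=16, alpha=0.25, gamma=0.05, noise=0.0,
                       dt=0.01, steps=5000, seed=1):
    """Damped FPU-alpha chain with fixed ends; velocity-Verlet-style update.

    n      -- number of interior oscillators
    alpha  -- quadratic (FPU-alpha) nonlinearity coefficient
    gamma  -- dissipation coefficient
    noise  -- amplitude of random forcing ("temperature")
    Returns the final harmonic (kinetic + bond) energy of the chain.
    """
    rng = random.Random(seed)
    x = [0.0] * (n + 2)          # positions; x[0] and x[-1] are fixed walls
    v = [0.0] * (n + 2)
    x[n // 2] = 1.0              # initial 'shock': displace the middle mass

    def forces():
        f = [0.0] * (n + 2)
        for i in range(1, n + 1):
            dl = x[i] - x[i - 1]
            dr = x[i + 1] - x[i]
            # linear coupling plus quadratic FPU-alpha correction
            f[i] = (dr - dl) + alpha * (dr * dr - dl * dl)
        return f

    f = forces()
    for _ in range(steps):
        for i in range(1, n + 1):
            v[i] += 0.5 * dt * (f[i] - gamma * v[i]
                                + noise * rng.gauss(0.0, 1.0))
            x[i] += dt * v[i]
        f = forces()
        for i in range(1, n + 1):
            v[i] += 0.5 * dt * (f[i] - gamma * v[i])
    kinetic = sum(0.5 * v[i] ** 2 for i in range(1, n + 1))
    potential = sum(0.5 * (x[i + 1] - x[i]) ** 2 for i in range(n + 1))
    return kinetic + potential
```

With dissipation on and noise off, the initial shock energy decays monotonically; sweeping `noise` at fixed `gamma` is the kind of experiment in which a lifetime maximum versus "temperature" could be sought.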

  12. Effect of Americium-241 Content on Plutonium Radiation Source Terms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rainisch, R.

    1998-12-28

    The management of excess plutonium by the US Department of Energy includes a number of storage and disposition alternatives. The Savannah River Site (SRS) is supporting DOE with plutonium disposition efforts, including the immobilization of certain plutonium materials in a borosilicate glass matrix. Surplus plutonium inventories slated for vitrification include materials with elevated levels of Americium-241. The Am-241 content of plutonium materials generally reflects in-growth of the isotope due to decay of plutonium and is age-dependent. However, select plutonium inventories have Am-241 levels considerably above the age-based levels. Elevated levels of americium significantly impact the radiation source terms of plutonium materials and make handling of the materials more difficult. Plutonium materials are normally handled in shielded glove boxes, and the work entails both extremity and whole-body exposures. This paper reports results of an SRS analysis of plutonium material source terms versus the Americium-241 content of the materials. Data on the dependence and magnitude of source terms with respect to Am-241 levels are presented and discussed. The investigation encompasses both vitrified and un-vitrified plutonium oxide (PuO2) batches.
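The age dependence of Am-241 in-growth mentioned above follows from the two-member decay chain Pu-241 → Am-241, with half-lives of roughly 14.35 and 432.2 years. A minimal sketch of the Bateman solution, for illustration only (not the SRS analysis):

```python
import math

# Half-lives in years (standard values)
T_HALF_PU241 = 14.35
T_HALF_AM241 = 432.2

def am241_ingrowth_fraction(t_years):
    """Fraction of the initial Pu-241 atoms present as Am-241 after t years,
    from the two-member Bateman chain Pu-241 -> Am-241 -> (Np-237)."""
    lam_pu = math.log(2.0) / T_HALF_PU241
    lam_am = math.log(2.0) / T_HALF_AM241
    return (lam_pu / (lam_am - lam_pu)
            * (math.exp(-lam_pu * t_years) - math.exp(-lam_am * t_years)))
```

After one Pu-241 half-life (~14.35 y), roughly half the decayed Pu-241 atoms are present as Am-241, which is why material age so strongly drives the gamma source term.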

  13. Nonlinearly driven harmonics of Alfvén modes

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Breizman, B. N.; Zheng, L. J.; Berk, H. L.

    2014-01-01

    In order to study the leading order nonlinear magneto-hydrodynamic (MHD) harmonic response of a plasma in realistic geometry, the AEGIS code has been generalized to account for inhomogeneous source terms. These source terms are expressed in terms of the quadratic corrections that depend on the functional form of a linear MHD eigenmode, such as the Toroidal Alfvén Eigenmode. The solution of the resultant equation gives the second order harmonic response. Preliminary results are presented here.

  14. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2014-05-01

    Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster at the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. The hazardous consequences extend to a national and continental scale. Environmental measurements and methods to model the transport and dispersion of the released radionuclides serve as a platform to assess the regional impact of nuclear accidents, both for research purposes and, more importantly, to determine the immediate threat to the population. However, assessments of the regional radionuclide activity concentrations and of the individual exposure to radiation dose are subject to several uncertainties, for example the accurate model representation of wet and dry deposition. One of the most significant uncertainties, however, results from the estimation of the source term, that is, the time-dependent quantification of the spectrum of radionuclides released during the course of the nuclear accident. The source terms of severe nuclear accidents may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on rather rough estimates of released key radionuclides given by the operators. Precise measurements are mostly missing due to practical limitations during the accident. Inverse modelling can be used to obtain a feasible estimate of the source term (Davoine and Bocquet, 2007). Existing point measurements of radionuclide activity concentrations are combined with atmospheric transport models, and the release rates of radionuclides at the accident site are then obtained by improving the agreement between the modelled and observed concentrations (Stohl et al., 2012). The accuracy of the method, and hence of the resulting source term, depends among other factors on the availability, reliability, and spatio-temporal resolution of the observations.
Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and at higher temporal resolution. Gamma dose rate measurements contain no explicit information on the observed spectrum of radionuclides and have to be interpreted carefully. Nevertheless, they provide valuable information for the inverse evaluation of the source term due to their availability (Saunier et al., 2013). We present a new inversion approach combining an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides. The gamma dose rates are calculated from the modelled activity concentrations. The inversion method uses a Bayesian formulation considering uncertainties for the a priori source term and the observations (Eckhardt et al., 2008). The a priori information on the source term is a first guess. The gamma dose rate observations are used with inverse modelling to improve this first guess and to retrieve a reliable source term. The details of this method will be presented at the conference. This work is funded by the Bundesamt für Strahlenschutz BfS, Forschungsvorhaben 3612S60026. References: Davoine, X. and Bocquet, M., Atmos. Chem. Phys., 7, 1549-1564, 2007. Devell, L., et al., OCDE/GD(96)12, 1995. Eckhardt, S., et al., Atmos. Chem. Phys., 8, 3881-3897, 2008. Saunier, O., et al., Atmos. Chem. Phys., 13, 11403-11421, 2013. Stohl, A., et al., Atmos. Environ., 32, 4245-4264, 1998. Stohl, A., et al., Atmos. Chem. Phys., 5, 2461-2474, 2005. Stohl, A., et al., Atmos. Chem. Phys., 12, 2313-2343, 2012.
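A minimal sketch of such a Bayesian source-term inversion, under the simplifying assumptions of Gaussian errors and diagonal covariances, and with a known source-receptor matrix. The function names and the toy dimensions are ours for illustration; they are not from the cited work.

```python
def solve(a, b):
    """Solve the linear system a x = b by Gauss-Jordan elimination with pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[r][col]:
                factor = m[r][col] / m[col][col]
                for c in range(col, n + 1):
                    m[r][c] -= factor * m[col][c]
    return [m[i][n] / m[i][i] for i in range(n)]

def invert_source_term(M, y, x_prior, sigma_obs, sigma_prior):
    """Posterior-mode release rates x minimizing
       sum_j ((y_j - (Mx)_j) / sigma_obs_j)^2
       + sum_i ((x_i - x_prior_i) / sigma_prior_i)^2,
    where M maps release rates to observed quantities (source-receptor matrix).
    Solves the normal equations (M^T R^-1 M + B^-1) x = M^T R^-1 y + B^-1 x_prior."""
    n = len(x_prior)
    A = [[sum(M[j][i] * M[j][k] / sigma_obs[j] ** 2 for j in range(len(y)))
          for k in range(n)] for i in range(n)]
    rhs = [sum(M[j][i] * y[j] / sigma_obs[j] ** 2 for j in range(len(y)))
           for i in range(n)]
    for i in range(n):
        A[i][i] += 1.0 / sigma_prior[i] ** 2       # prior precision B^-1
        rhs[i] += x_prior[i] / sigma_prior[i] ** 2
    return solve(A, rhs)
```

Tight observation errors pull the solution toward the data; tight prior errors pull it toward the first guess, which is exactly the trade-off the abstract describes.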

  15. A NUMERICAL SCHEME FOR SPECIAL RELATIVISTIC RADIATION MAGNETOHYDRODYNAMICS BASED ON SOLVING THE TIME-DEPENDENT RADIATIVE TRANSFER EQUATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohsuga, Ken; Takahashi, Hiroyuki R.

    2016-02-20

    We develop a numerical scheme for solving the equations of fully special relativistic radiation magnetohydrodynamics (MHD), in which the frequency-integrated, time-dependent radiation transfer equation is solved to calculate the specific intensity. The radiation energy density, the radiation flux, and the radiation stress tensor are obtained by angular quadrature of the intensity. In the present method, conservation of total mass, momentum, and energy of the radiation magnetofluids is guaranteed. We treat not only isotropic scattering but also Thomson scattering. The numerical method for the MHD part is the same as in our previous work. The advection terms are explicitly solved, and the source terms, which describe the gas-radiation interaction, are implicitly integrated. Our code is suitable for massively parallel computing. We show that our code gives reasonable results in several numerical tests of propagating radiation and radiation hydrodynamics. In particular, the correct solution is obtained even in the optically very thin or moderately thin regimes, and the special relativistic effects are reproduced well.
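The split treatment described (explicit advection, implicit integration of the stiff interaction source terms) can be illustrated on a toy 1-D scalar problem. This sketch uses upwind advection and a backward-Euler relaxation source; it is a generic illustration of the operator splitting, not the authors' scheme.

```python
def step(u, a, dt, dx, k, u_eq):
    """One operator-split step for u_t + a u_x = -k (u - u_eq):
    explicit first-order upwind advection (periodic), then implicit
    (backward-Euler) integration of the stiff relaxation source."""
    n = len(u)
    adv = [u[i] - a * dt / dx * (u[i] - u[(i - 1) % n]) for i in range(n)]
    # Backward Euler: (u_new - adv) / dt = -k (u_new - u_eq)
    return [(adv[i] + dt * k * u_eq) / (1.0 + dt * k) for i in range(n)]
```

The implicit source update is unconditionally stable, so the time step is limited only by the advection CFL condition, even when the relaxation rate k is enormous; this is the usual motivation for treating gas-radiation coupling implicitly.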

  16. Fast in-database cross-matching of high-cadence, high-density source lists with an up-to-date sky model

    NASA Astrophysics Data System (ADS)

    Scheers, B.; Bloemen, S.; Mühleisen, H.; Schellart, P.; van Elteren, A.; Kersten, M.; Groot, P. J.

    2018-04-01

    Upcoming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a rich laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys that is accessible to the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques for indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross-matching algorithms. We monitored all query run times in long-term pipeline runs in which we processed a subset of IPHAS data with image source density peaks over 170,000 per field of view (500,000 deg^-2). Our analysis demonstrates that horizontal table partitions of one-degree declination width control the query run times. Use of an index strategy in which the partitions are densely sorted according to source declination yields a further improvement. Most queries run in sublinear time and a few (< 20%) run in linear time, because of dependencies on input source-list and result-set size. We observed that, for this logical database partitioning schema, the limiting cadence achieved by the pipeline when processing IPHAS data is 25 s.
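The declination-partitioned positional look-up can be sketched in plain Python as an in-memory analogue of the horizontal table partitions. The function names and the small-angle separation metric are our illustrative choices, not the pipeline's actual code.

```python
import math
from collections import defaultdict

def build_zones(catalog, zone_height_deg=1.0):
    """Partition catalogued (ra, dec) positions into declination zones,
    mirroring horizontal table partitions of one-degree declination width."""
    zones = defaultdict(list)
    for idx, (ra, dec) in enumerate(catalog):
        zones[int(math.floor(dec / zone_height_deg))].append((ra, dec, idx))
    return zones

def crossmatch(sources, zones, radius_deg=0.001, zone_height_deg=1.0):
    """Match each extracted source to catalogued counterparts within
    radius_deg. Only the source's own declination zone and its two
    neighbours are scanned, instead of the full catalogue."""
    matches = []
    for sidx, (ra, dec) in enumerate(sources):
        z = int(math.floor(dec / zone_height_deg))
        for zz in (z - 1, z, z + 1):
            for cra, cdec, cidx in zones.get(zz, ()):
                # small-angle separation with cos(dec) RA compression
                dra = (ra - cra) * math.cos(math.radians(dec))
                if dra * dra + (dec - cdec) ** 2 <= radius_deg ** 2:
                    matches.append((sidx, cidx))
    return matches
```

Restricting each look-up to three narrow zones is what makes the run time roughly independent of total catalogue size, the same effect the declination partitions have on the database queries.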

  17. Stable sulfur isotope hydrogeochemical studies using desert shrubs and tree rings, Death Valley, California, USA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Wenbo; Spencer, R.J.; Krouse, H.R.

    1996-08-01

    The δ³⁴S values of two dominant xerophytes, Atriplex hymenelytra and Larrea tridentata, in Death Valley, California, vary similarly from +7 to +18‰, corresponding isotopically to sulfate in the water supplies at a given location. Going radially outwards, tree ring data from a phreatophyte tree, Tamarix aphylla, show a distinct time dependence, with δ³⁴S values increasing from +13.5 to +18‰ for soluble sulfate and from +12 to +17‰ for total sulfur. These data are interpreted in terms of sulfur sources, water sources and flow paths, and tree root growth. 32 refs., 3 figs., 3 tabs.
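For reference, the δ³⁴S values quoted above are per mil deviations of the sample ³⁴S/³²S ratio from a reference standard. A minimal sketch of the definition; the V-CDT reference ratio used here is the commonly quoted literature value and is included only for illustration.

```python
# Commonly quoted 34S/32S ratio of the V-CDT sulfur reference standard
# (illustrative value, not taken from this paper).
R_STANDARD = 0.0441626

def delta34s_permil(r_sample):
    """delta-34S in per mil: relative deviation of the sample 34S/32S
    ratio from the standard ratio, times 1000."""
    return (r_sample / R_STANDARD - 1.0) * 1000.0
```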

  18. Gridded National Inventory of U.S. Methane Emissions

    NASA Technical Reports Server (NTRS)

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; Turner, Alexander J.; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; hide

    2016-01-01

    We present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  19. Gridded National Inventory of U.S. Methane Emissions.

    PubMed

    Maasakkers, Joannes D; Jacob, Daniel J; Sulprizio, Melissa P; Turner, Alexander J; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; Hockstad, Leif; Bloom, Anthony A; Bowman, Kevin W; Jeong, Seongeun; Fischer, Marc L

    2016-12-06

    We present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
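The disaggregation step, allocating a national total across grid cells in proportion to a spatial proxy such as well counts, livestock numbers, or road length, can be sketched as follows. This is a simplified illustration, not the inventory's actual allocation code.

```python
def disaggregate(national_total, proxy_weights):
    """Allocate a national emission total to grid cells in proportion to a
    spatial proxy (e.g. well counts, livestock numbers, road length).
    Returns per-cell emissions that sum exactly to the national total."""
    total_weight = sum(proxy_weights)
    if total_weight == 0:
        raise ValueError("proxy has no coverage")
    return [national_total * w / total_weight for w in proxy_weights]
```

The key property is conservation: however the proxy distributes the emissions spatially, the gridded cells always sum back to the reported national total for that source type.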

  20. Time-evolution of grain size distributions in random nucleation and growth crystallization processes

    NASA Astrophysics Data System (ADS)

    Teran, Anthony V.; Bill, Andreas; Bergmann, Ralf B.

    2010-02-01

    We study the time dependence of the grain size distribution N(r,t) during crystallization of a d -dimensional solid. A partial differential equation, including a source term for nuclei and a growth law for grains, is solved analytically for any dimension d . We discuss solutions obtained for processes described by the Kolmogorov-Avrami-Mehl-Johnson model for random nucleation and growth (RNG). Nucleation and growth are set on the same footing, which leads to a time-dependent decay of both effective rates. We analyze in detail how model parameters, the dimensionality of the crystallization process, and time influence the shape of the distribution. The calculations show that the dynamics of the effective nucleation and effective growth rates play an essential role in determining the final form of the distribution obtained at full crystallization. We demonstrate that for one class of nucleation and growth rates, the distribution evolves in time into the logarithmic-normal (lognormal) form discussed earlier by Bergmann and Bill [J. Cryst. Growth 310, 3135 (2008)]. We also obtain an analytical expression for the finite maximal grain size at all times. The theory allows for the description of a variety of RNG crystallization processes in thin films and bulk materials. Expressions useful for experimental data analysis are presented for the grain size distribution and the moments in terms of fundamental and measurable parameters of the model.
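Schematically, the population-balance equation solved in such studies takes the following form; the symbols and exact form are our paraphrase of the abstract, with N(r,t) the grain size distribution, v(r,t) the grain growth law, and s(r,t) the nucleation source term:

```latex
\frac{\partial N(r,t)}{\partial t}
  + \frac{\partial}{\partial r}\bigl[ v(r,t)\, N(r,t) \bigr]
  = s(r,t)
```

The time-dependent decay of the effective nucleation rate enters through s, and of the effective growth rate through v; their interplay sets the final shape of the distribution, including the lognormal limit discussed in the abstract.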

  1. Unequal-Strength Source zROC Slopes Reflect Criteria Placement and Not (Necessarily) Memory Processes

    ERIC Educational Resources Information Center

    Starns, Jeffrey J.; Pazzaglia, Angela M.; Rotello, Caren M.; Hautus, Michael J.; Macmillan, Neil A.

    2013-01-01

    Source memory zROC slopes change from below 1 to above 1 depending on which source gets the strongest learning. This effect has been attributed to memory processes, either in terms of a threshold source recollection process or changes in the variability of continuous source evidence. We propose 2 decision mechanisms that can produce the slope…

  2. Theory for source-responsive and free-surface film modeling of unsaturated flow

    USGS Publications Warehouse

    Nimmo, J.R.

    2010-01-01

    A new model explicitly incorporates the possibility of rapid response, across significant distance, to substantial water input. It is useful for unsaturated flow processes that are not inherently diffusive, or that do not progress through a series of equilibrium states. The term source-responsive is used to mean that flow responds sensitively to changing conditions at the source of water input (e.g., rainfall, irrigation, or ponded infiltration). The domain of preferential flow can be conceptualized as laminar flow in free-surface films along the walls of pores. These films may be considered to have uniform thickness, as suggested by field evidence that preferential flow moves at an approximately uniform rate when generated by a continuous and ample water supply. An effective facial area per unit volume quantitatively characterizes the medium with respect to source-responsive flow. A flow-intensity factor dependent on conditions within the medium represents the amount of source-responsive flow at a given time and position. Laminar flow theory provides relations for the velocity and thickness of flowing source-responsive films. Combination with the Darcy-Buckingham law and the continuity equation leads to expressions for both fluxes and dynamic water contents. Where preferential flow is sometimes or always significant, the interactive combination of source-responsive and diffuse flow has the potential to improve prediction of unsaturated-zone fluxes in response to hydraulic inputs and the evolving distribution of soil moisture. Examples for which this approach is efficient and physically plausible include (i) rainstorm-generated rapid fluctuations of a deep water table and (ii) space- and time-dependent soil water content response to infiltration in a macroporous soil. © Soil Science Society of America.
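As a rough quantitative illustration of laminar free-surface film flow, classical gravity-driven falling-film theory (not the paper's specific relations) gives a flux per unit film width that scales with the cube of the film thickness:

```python
def film_flux_per_width(h, rho=1000.0, g=9.81, mu=1.0e-3):
    """Volumetric flux per unit width (m^2/s) of a gravity-driven laminar
    water film of thickness h (m): q = rho*g*h^3 / (3*mu).
    The corresponding mean film velocity is rho*g*h^2 / (3*mu).
    Defaults are nominal properties of water at room temperature."""
    return rho * g * h ** 3 / (3.0 * mu)
```

The strong h³ sensitivity is one reason film flow can respond so sharply to changing water supply at the source, consistent with the source-responsive behavior described above.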

  3. Correlating non-linear properties with spectral states of RXTE data: possible observational evidences for four different accretion modes around compact objects

    NASA Astrophysics Data System (ADS)

    Adegoke, Oluwashina; Dhang, Prasun; Mukhopadhyay, Banibrata; Ramadevi, M. C.; Bhattacharya, Debbijoy

    2018-05-01

    By analysing the time series of RXTE/PCA data, the non-linear variabilities of compact sources have been repeatedly established. Depending on the variation in temporal classes, compact sources exhibit different non-linear features. Sometimes they show low correlation/fractal dimension, but in other classes or intervals of time they exhibit stochastic nature. This could be because the accretion flow around a compact object is a non-linear general relativistic system involving magnetohydrodynamics. However, the more conventional way of addressing a compact source is the analysis of its spectral state. Therefore, the question arises: What is the connection of non-linearity to the underlying spectral properties of the flow when the non-linear properties are related to the associated transport mechanisms describing the geometry of the flow? This work is aimed at addressing this question. Based on the connection between observed spectral and non-linear (time series) properties of two X-ray binaries: GRS 1915+105 and Sco X-1, we attempt to diagnose the underlying accretion modes of the sources in terms of known accretion classes, namely, Keplerian disc, slim disc, advection dominated accretion flow and general advective accretion flow. We explore the possible transition of the sources from one accretion mode to others with time. We further argue that the accretion rate must play an important role in transition between these modes.

  4. The Role of Soft Power in China’s Security Strategy: Case Studies on the South China Sea and Taiwan

    DTIC Science & Technology

    2017-06-09

    The study shows that the interplay between soft and hard power varies significantly depending on the context. Nye qualifies that the magnitude of soft power derived from these sources is situationally dependent; political values are one example.

  5. A Dynamical View of High School Attendance: An Assessment of Short-term and Long-term Dependencies in Five Urban Schools.

    PubMed

    Koopmans, Matthijs

    2015-01-01

    While school attendance is a critical mediator of academic achievement, its time-dependent characteristics are rarely investigated. To remedy this situation, this paper reports on the analysis of daily attendance rates in five urban high schools over a seven-year period. Traditional time series analyses were conducted to estimate short-range and cyclical dependencies in the data. An Autoregressive Fractionally Integrated Moving Average (ARFIMA) approach was used to address long-range correlational patterns and detect signs of self-organized criticality. The analysis reveals a strong weekly cyclical pattern in all five schools, and evidence for self-organized criticality in one of the five. These findings illustrate the insufficiency of traditional statistical summary measures for characterizing the distribution of daily attendance, and they suggest that daily attendance is not necessarily the stable and predictable feature of school effectiveness it is conventionally assumed to be. While educational practitioners can probably attest to many of the irregularities in attendance patterns as well as some of their sources, a systematic description of these temporal aspects needs to be included in our assessment of daily attendance behavior to inform policy decisions, if only to better align formal research in this area with existing local knowledge about those patterns.
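An ARFIMA model captures long-range dependence through the fractional differencing operator (1 - B)^d, whose filter weights follow a simple recursion. A minimal sketch of that filter (illustrative, not the study's code):

```python
def frac_diff_weights(d, n):
    """First n weights of the fractional differencing filter (1 - B)^d:
    w_0 = 1,  w_k = w_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

def frac_diff(series, d):
    """Apply (1 - B)^d to a series using an expanding window of weights."""
    w = frac_diff_weights(d, len(series))
    return [sum(w[k] * series[t - k] for k in range(t + 1))
            for t in range(len(series))]
```

For d = 1 the filter reduces to the ordinary first difference, while fractional 0 < d < 0.5 gives the slowly decaying weights that model long-memory (and potentially self-organized-critical) attendance dynamics.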

  6. Ultra-Sensitive Elemental Analysis Using Plasmas 4.Application of Inductively Coupled Plasma Mass Spectrometry to the Study of Environmental Radioactivity

    NASA Astrophysics Data System (ADS)

    Yoshida, Satoshi

    Applications of inductively coupled plasma mass spectrometry (ICP-MS) to the determination of long-lived radionuclides in environmental samples are summarized. In order to predict the long-term behavior of the radionuclides, related stable elements were also determined. Compared with radioactivity measurements, the ICP-MS method has advantages in terms of its simple analytical procedures, prompt measurement time, and capability of determining isotope ratios such as 240Pu/239Pu, which cannot be resolved by radiation measurement. Concentrations of U and Th in Japanese surface soils were determined in order to establish the background level of the natural radionuclides. The 235U/238U ratio was successfully used to detect the release of enriched U from reconversion facilities to the environment and to understand the source term. The 240Pu/239Pu ratios in environmental samples varied widely depending on the Pu sources. Applications of ICP-MS to the measurement of I and Tc isotopes are also described. The ratio between radiocesium and stable Cs is useful for judging the equilibrium of deposited radiocesium in a forest ecosystem.

  7. Observation-based source terms in the third-generation wave model WAVEWATCH

    NASA Astrophysics Data System (ADS)

    Zieger, Stefan; Babanin, Alexander V.; Erick Rogers, W.; Young, Ian R.

    2015-12-01

    Measurements collected during the AUSWEX field campaign at Lake George (Australia) provided new insights into the processes of wind-wave interaction and whitecapping dissipation, and consequently new parameterizations of the input and dissipation source terms. The new nonlinear wind input term accounts for the dependence of growth on wave steepness, for airflow separation, and for negative growth rates under adverse winds. The new dissipation terms feature an inherent breaking term, a cumulative dissipation term, and a term due to the production of turbulence by waves, which is particularly relevant for decaying seas and for swell. The latter is consistent with the observed decay rate of ocean swell. This paper describes these source terms as implemented in WAVEWATCH III® and evaluates their performance against existing source terms in academic duration-limited tests, against buoy measurements for windsea-dominated conditions, under conditions of extreme wind forcing (Hurricane Katrina), and against altimeter data in global hindcasts. Results show agreement in terms of growth curves as well as integral and spectral parameters in the simulations and hindcasts.

  8. Locating arbitrarily time-dependent sound sources in three dimensional space in real time.

    PubMed

    Wu, Sean F; Zhu, Na

    2010-08-01

    This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time using only four microphones. The method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three-dimensional (3D) space. Locations of acoustic sources are given in Cartesian coordinates. The underlying principle is a hybrid approach consisting of modeling of acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal-to-noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance, and frequency on the spatial resolution and accuracy of source localization. Based on these results, a simple device is fabricated, consisting of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moves in space, even passing behind the measurement microphones. Practical limitations of the method are discussed.
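With a reference microphone at the origin and three more on mutually orthogonal axes, as in the device described, time-difference-of-arrival (TDOA) triangulation can be sketched as follows. The closed-form reduction and the bisection on the unknown source range are our illustrative choices, not necessarily the paper's algorithm.

```python
import math

def locate(toa_diffs, d=0.2, c=343.0):
    """Locate a source from TDOA values measured against a reference
    microphone at the origin, with three more microphones at (d,0,0),
    (0,d,0), (0,0,d) on mutually orthogonal axes.

    toa_diffs -- (dt1, dt2, dt3): arrival time at mic i minus arrival at origin
    d         -- microphone spacing (m), c -- speed of sound (m/s)
    Returns (x, y, z) of the source.
    """
    delta = [c * dt for dt in toa_diffs]   # range differences r_i - r_0

    def position(r0):
        # From r_i^2 = r_0^2 + 2 r_0 delta_i + delta_i^2 and the axis
        # geometry, each coordinate is linear in the known quantities
        # once the source range r_0 is fixed.
        return [(d * d - delta[i] ** 2 - 2.0 * r0 * delta[i]) / (2.0 * d)
                for i in range(3)]

    def mismatch(r0):
        x, y, z = position(r0)
        return x * x + y * y + z * z - r0 * r0

    # Bisection on the unknown source range r_0 (sign change is bracketed
    # for sources inside the working volume).
    lo, hi = 1e-6, 1e4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mismatch(lo) * mismatch(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return tuple(position(0.5 * (lo + hi)))
```

Because only time differences are used, no knowledge of the (arbitrarily time-dependent) emission waveform or emission time is required, which is what allows the method to handle broadband, impulsive, and continuous signals alike.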

  9. Simulations of cold electroweak baryogenesis: dependence on the source of CP-violation

    NASA Astrophysics Data System (ADS)

    Mou, Zong-Gang; Saffin, Paul M.; Tranberg, Anders

    2018-05-01

    We compute the baryon asymmetry created in a tachyonic electroweak symmetry breaking transition, focusing on the dependence on the source of effective CP-violation. Earlier simulations of Cold Electroweak Baryogenesis have almost exclusively considered a very specific CP-violating term explicitly biasing Chern-Simons number. We compare four different dimension six, scalar-gauge CP-violating terms, involving both the Higgs field and another dynamical scalar coupled to SU(2) or U(1) gauge fields. We find that for sensible values of parameters, all implementations can generate a baryon asymmetry consistent with observations, showing that baryogenesis is a generic outcome of a fast tachyonic electroweak transition.

  10. Two-relaxation-time lattice Boltzmann method for the anisotropic dispersive Henry problem

    NASA Astrophysics Data System (ADS)

    Servan-Camas, Borja; Tsai, Frank T.-C.

    2010-02-01

    This study develops a lattice Boltzmann method (LBM) with a two-relaxation-time collision operator (TRT) to cope with anisotropic heterogeneous hydraulic conductivity and anisotropic velocity-dependent hydrodynamic dispersion in the saltwater intrusion problem. The directional-speed-of-sound technique is further developed to address anisotropic hydraulic conductivity and dispersion tensors. Forcing terms are introduced in the LBM to correct numerical errors that arise during the recovery procedure and to describe the sink/source terms in the flow and transport equations. In order to facilitate the LBM implementation, the forcing terms are combined with the equilibrium distribution functions (EDFs) to create pseudo-EDFs. This study performs linear stability analysis and derives LBM stability domains to solve the anisotropic advection-dispersion equation. The stability domains are used to select the time step at which the lattice Boltzmann method provides stable solutions to the numerical examples. The LBM was implemented for the anisotropic dispersive Henry problem with high ratios of longitudinal to transverse dispersivities, and the results compared well to the solutions in the work of Abarca et al. (2007).
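A minimal sketch of a lattice Boltzmann step with a source term folded into the collision, in the spirit of the forcing terms described above. This is a plain single-relaxation-time (BGK) D1Q3 toy for 1-D diffusion, not the paper's two-relaxation-time scheme for the anisotropic Henry problem.

```python
def lbm_diffusion_with_source(n=32, steps=50, tau=0.8, source=0.01):
    """D1Q3 BGK lattice Boltzmann solver for 1-D diffusion with a uniform
    source term injected into the populations each step. The diffusivity
    is (tau - 0.5)/3 in lattice units; boundaries are periodic.
    Returns the density field after the given number of steps."""
    w = (4.0 / 6.0, 1.0 / 6.0, 1.0 / 6.0)   # weights: rest, +1, -1
    f = [[w[q] for q in range(3)] for _ in range(n)]   # rho = 1 everywhere
    for _ in range(steps):
        rho = [sum(fi) for fi in f]
        # Collision: BGK relaxation toward equilibrium w_q * rho,
        # plus weight-distributed source injection (the forcing term).
        post = [[fi[q] + (w[q] * rho[i] - fi[q]) / tau + w[q] * source
                 for q in range(3)] for i, fi in enumerate(f)]
        # Streaming: q=1 moves right, q=2 moves left, q=0 stays put.
        f = [[post[i][0], post[(i - 1) % n][1], post[(i + 1) % n][2]]
             for i in range(n)]
    return [sum(fi) for fi in f]
```

Distributing the source with the lattice weights keeps the injected mass exactly equal to the intended sink/source strength per node per step, the same bookkeeping role the forcing terms play in the TRT formulation.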

  11. THE NANOGRAV NINE-YEAR DATA SET: OBSERVATIONS, ARRIVAL TIME MEASUREMENTS, AND ANALYSIS OF 37 MILLISECOND PULSARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arzoumanian, Zaven; Brazier, Adam; Chatterjee, Shami

    2015-11-01

    We present high-precision timing observations spanning up to nine years for 37 millisecond pulsars monitored with the Green Bank and Arecibo radio telescopes as part of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) project. We describe the observational and instrumental setups used to collect the data, and the methodology applied for calculating pulse times of arrival; these include novel methods for measuring instrumental offsets and characterizing low signal-to-noise ratio timing results. The time-of-arrival data are fit to a physical timing model for each source, including terms that characterize time-variable dispersion measure and frequency-dependent pulse shape evolution. In conjunction with the timing model fit, we have performed a Bayesian analysis of a parameterized timing noise model for each source, and detect evidence for excess low-frequency, or “red,” timing noise in 10 of the pulsars. For 5 of these cases this is likely due to interstellar medium propagation effects rather than intrinsic spin variations. Subsequent papers in this series will present further analysis of this data set aimed at detecting or limiting the presence of nanohertz-frequency gravitational wave signals.

  12. CO2 Flux From Antarctic Dry Valley Soils: Determining the Source and Environmental Controls

    NASA Astrophysics Data System (ADS)

    Risk, D. A.; Macintyre, C. M.; Shanhun, F.; Almond, P. C.; Lee, C.; Cary, C.

    2014-12-01

    Soils within the McMurdo Dry Valleys are known to respire carbon dioxide (CO2), but considerable debate surrounds the contributing sources and mechanisms that drive temporal variability. While some of the CO2 is of biological origin, other known contributors to variability include geochemical sources within, or beneath, the soil column. The relative contribution from each of these sources will depend on seasonal and environmental drivers such as temperature and wind that exert influence on temporal dynamics. To supplement a long-term CO2 surface-flux monitoring station that has now recorded fluxes over three full annual cycles, in January 2014 an automated flux and depth concentration monitoring system was installed in the Spaulding Pond area of Taylor Valley, along with standard meteorological sensors, to assist in defining source contributions through time. During two weeks of data collection we observed marked diel variability in CO2 concentrations within the profile (~100 ppm CO2 above or below atmospheric), and of CO2 moving across the soil surface. The pattern at many depths suggested an alternating diel-scale transition from source to sink that correlated clearly with temperature-driven changes in the solubility of CO2 in water films. This CO2 solution storage flux was tightly coupled to soil temperature. A small depth source of unknown origin also appeared to be present. A controlled laboratory soil experiment was conducted to confirm the magnitude of fluxes into and out of soil water films, and confirmed the field results and temperature dependence. Ultimately, this solution storage flux needs to be well understood if the small biological fluxes from these soils are to be properly quantified and monitored for change. Here, we present results from the 2013/2014 field season and these supplementary experiments, placed in the context of three years of continuous long-term measurement of soil CO2 flux within the Dry Valleys.

  13. Correcting STIS CCD Point-Source Spectra for CTE Loss

    NASA Technical Reports Server (NTRS)

    Goudfrooij, Paul; Bohlin, Ralph C.; Maiz-Apellaniz, Jesus

    2006-01-01

    We review the on-orbit spectroscopic observations that are being used to characterize the Charge Transfer Efficiency (CTE) of the STIS CCD in spectroscopic mode. We parameterize the CTE-related loss for spectrophotometry of point sources in terms of dependencies on the brightness of the source, the background level, the signal in the PSF outside the standard extraction box, and the time of observation. Primary constraints on our correction algorithm are provided by measurements of the CTE loss rates for simulated spectra (images of a tungsten lamp taken through slits oriented along the dispersion axis) combined with estimates of CTE losses for actual spectra of spectrophotometric standard stars in the first-order CCD modes. For point-source spectra at the standard reference position at the CCD center, CTE losses as large as 30% are corrected to within approximately 1% RMS after application of the algorithm presented here, leaving the Poisson noise associated with the source detection itself as the dominant contributor to the total flux calibration uncertainty.

  14. Open-Source Unionism: New Workers, New Strategies

    ERIC Educational Resources Information Center

    Schmid, Julie M.

    2004-01-01

    In "Open-Source Unionism: Beyond Exclusive Collective Bargaining," published in fall 2002 in the journal Working USA, labor scholars Richard B. Freeman and Joel Rogers use the term "open-source unionism" to describe a form of unionization that uses Web technology to organize in hard-to-unionize workplaces. Rather than depend on the traditional…

  15. PAGAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu, M.S.Y.

    1990-12-01

    The PAGAN code system is a part of the performance assessment methodology developed for use by the U.S. Nuclear Regulatory Commission in evaluating license applications for low-level waste disposal facilities. In this methodology, PAGAN is used as one candidate approach for analysis of the ground-water pathway. PAGAN, Version 1.1, has the capability to model the source term, vadose-zone transport, and aquifer transport of radionuclides from a waste disposal unit. It combines the two codes SURFACE and DISPERSE, which provide semi-analytical solutions to the convective-dispersion equation. The system uses menu-driven input/output for implementing a simple ground-water transport analysis and incorporates statistical uncertainty functions for handling data uncertainties. The output from PAGAN includes a time- and location-dependent radionuclide concentration at a well in the aquifer, or a time- and location-dependent radionuclide flux into a surface-water body.
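
    As a generic illustration of the semi-analytical style such codes build on (not PAGAN's actual SURFACE/DISPERSE kernels), the classical Ogata-Banks closed-form solution of the 1-D convective-dispersion equation for a constant-concentration boundary can be written down directly:

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Classical 1-D advection-dispersion solution (Ogata & Banks, 1961)
    for a constant concentration c0 held at x = 0:
      c(x,t) = (c0/2) [erfc((x-vt)/(2*sqrt(D*t)))
                       + exp(v*x/D) * erfc((x+vt)/(2*sqrt(D*t)))].
    v is pore velocity, D the dispersion coefficient."""
    if t <= 0:
        return 0.0
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    # crude overflow guard for the exp*erfc term in this sketch
    term2 = math.exp(v * x / D) * math.erfc(b) if v * x / D < 700 else 0.0
    return 0.5 * c0 * (math.erfc(a) + term2)
```

    The solution reproduces the expected behaviour: c = c0 at the boundary, a spreading front around x = vt, and near-zero concentration far downstream.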

  16. The value and limitations of global air-sampling networks for improving our understanding of trace gas behavior

    NASA Astrophysics Data System (ADS)

    Montzka, S. A.

    2016-12-01

    Measurements from global surface-based air sampling networks provide a fundamental understanding of how and why concentrations of long-lived trace gases are changing over time. Results from these networks are used to quantify trace-gas concentrations and their time-dependent changes on global and smaller scales, and thus provide a means to quantify emission rates, loss frequencies, and mixing processes. Substantial advances in measurement and sampling technologies and the ability of these programs to create and maintain reliable gas standards mean that spatial concentration gradients and time-dependent changes are often very reliably measured. The presence of multiple independent networks allows an assessment of this reliability. Furthermore, recent global `snap-shot' surveys (e.g., HIPPO and ATom) and ongoing atmospheric profiling programs help us assess the ability of surface-based data to describe concentration distributions throughout most of the atmosphere (~80% of its mass). In this overview talk, I'll explore the usefulness and limitations of existing long-term, ongoing sampling network programs and their advantages and disadvantages for characterizing concentrations on global and regional scales, and how recent advances (and short-term sampling programs) help us assess the accuracy of the surface networks to provide estimates of source and sink magnitudes, and inter-annual variability in both.

  17. Incorporating the eruptive history in a stochastic model for volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Bebbington, Mark

    2008-08-01

    We show how a stochastic version of a general load-and-discharge model for volcanic eruptions can be implemented. The model tracks the history of the volcano through a quantity proportional to stored magma volume. Thus large eruptions can influence the activity rate for a considerable time following, rather than only the next repose as in the time-predictable model. The model can be fitted to data using point-process methods. Applied to flank eruptions of Mount Etna, it exhibits possible long-term quasi-cyclic behavior, and to Mauna Loa, a long-term decrease in activity. An extension to multiple interacting sources is outlined, which may be different eruption styles or locations, or different volcanoes. This can be used to identify an 'average interaction' between the sources. We find significant evidence that summit eruptions of Mount Etna are dependent on preceding flank eruptions, with both flank and summit eruptions being triggered by the other type. Fitted to Mauna Loa and Kilauea, the model had a marginally significant relationship between eruptions of Mauna Loa and Kilauea, consistent with the invasion of the latter's plumbing system by magma from the former.

  18. Integral representations of solutions of the wave equation based on relativistic wavelets

    NASA Astrophysics Data System (ADS)

    Perel, Maria; Gorodnitskiy, Evgeny

    2012-09-01

    A representation of solutions of the wave equation with two spatial coordinates in terms of localized elementary ones is presented. Elementary solutions are constructed from four solutions with the help of transformations of the affine Poincaré group, i.e. with the help of translations, dilations in space and time and Lorentz transformations. The representation can be interpreted in terms of the initial-boundary value problem for the wave equation in a half-plane. It gives the solution as an integral representation of two types of solutions: propagating localized solutions running away from the boundary under different angles and packet-like surface waves running along the boundary and exponentially decreasing away from the boundary. Properties of elementary solutions are discussed. A numerical investigation of coefficients of the decomposition is carried out. An example of the decomposition of the field created by sources moving along a line with different speeds is considered, and the dependence of coefficients on speeds of sources is discussed.

  19. Natural and Induced Environment in Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Badavi, Francis F.; Kim, Myung-Hee Y.; Clowdsley, Martha S.; Heinbockel, John H.; Cucinotta, Francis A.; Badhwar, Gautam D.; Atwell, William; Huston, Stuart L.

    2002-01-01

    The long-term exposure of astronauts on the developing International Space Station (ISS) requires an accurate knowledge of the internal exposure environment for human risk assessment and other onboard processes. The natural environment is moderated by the solar wind which varies over the solar cycle. The neutron environment within the Shuttle in low Earth orbit has two sources. A time dependent model for the ambient environment is used to evaluate the natural and induced environment. The induced neutron environment is evaluated using measurements on STS-31 and STS-36 near the 1990 solar maximum.

  20. The Journal of Physical Chemistry A. Time-Dependent Quantum Molecular Dynamics Workshop, Brian Head, Utah, March 13-17, 1999. Volume 103, Number 47

    DTIC Science & Technology

    1999-11-25

    reactions the situation is more complicated since many of the modes are in the process of changing from free rotors to nearly harmonic bending motions ... are dihedral angles between the CH3 planes and the CC axis (see text). Heavy solid contours denote repulsive regions (energies higher than that of ... while vi is the source term describing the rate of formation of ethane in energy state i from the free methyl radicals. The effective bimolecular

  1. Finite Moment Tensors of Southern California Earthquakes

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Chen, P.; Zhao, L.

    2003-12-01

    We have developed procedures for inverting broadband waveforms for the finite moment tensors (FMTs) of regional earthquakes. The FMT is defined in terms of second-order polynomial moments of the source space-time function and provides the lowest-order representation of a finite fault rupture; it removes the fault-plane ambiguity of the centroid moment tensor (CMT) and yields several additional parameters of seismological interest: the characteristic length L_c, width W_c, and duration T_c of the faulting, as well as the directivity vector v_d of the fault slip. To formulate the inverse problem, we follow and extend the methods of McGuire et al. [2001, 2002], who have successfully recovered the second-order moments of large earthquakes using low-frequency teleseismic data. We express the Fourier spectra of a synthetic point-source waveform in its exponential (Rytov) form and represent the observed waveform relative to the synthetic in terms of two frequency-dependent differential times, a phase delay δτ_p(ω) and an amplitude-reduction time δτ_q(ω), which we measure using Gee and Jordan's [1992] isolation-filter technique. We numerically calculate the FMT partial derivatives in terms of second-order spatiotemporal gradients, which allows us to use 3D finite-difference seismograms as our isolation filters. We have applied our methodology to a set of small to medium-sized earthquakes in Southern California. Errors in the anelastic structure introduced perturbations larger than the signal caused by finite-source effects. We have therefore employed a joint inversion technique that recovers the CMT parameters of the aftershocks, as well as the CMT and FMT parameters of the mainshock, under the assumption that the source finiteness of the aftershocks can be ignored.
The joint system of equations relating the δτ_p and δτ_q data to the source parameters of the mainshock-aftershock cluster is denuisanced for path anomalies in both observables; this projection operation effectively corrects the mainshock data for path-related amplitude anomalies in a way similar to, but more flexible than, empirical Green function (EGF) techniques.
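
    The mapping from second-order space-time moments to the characteristic source dimensions can be sketched in one common convention (e.g., that of McGuire et al.): the length and width come from the eigenvalues of the spatial second central moment matrix, the duration from the temporal moment, and the directivity from the mixed moment. The moment values below are invented for illustration.

```python
import numpy as np

# Characteristic dimensions from second-order moments (one common convention):
#   L_c = 2*sqrt(lambda_max), W_c = 2*sqrt(lambda_mid), T_c = 2*sqrt(mu_tt),
#   directivity v_d = mu_xt / mu_tt.  Values below are illustrative only.
mu_xx = np.array([[25.0, 5.0, 0.0],   # spatial second central moments (km^2)
                  [ 5.0, 9.0, 0.0],
                  [ 0.0, 0.0, 1.0]])
mu_xt = np.array([3.0, 1.0, 0.0])     # mixed space-time moments (km*s)
mu_tt = 4.0                           # temporal second central moment (s^2)

lam = np.sort(np.linalg.eigvalsh(mu_xx))[::-1]  # eigenvalues, descending
L_c = 2.0 * np.sqrt(lam[0])           # characteristic length (km)
W_c = 2.0 * np.sqrt(lam[1])           # characteristic width (km)
T_c = 2.0 * np.sqrt(mu_tt)            # characteristic duration (s)
v_d = mu_xt / mu_tt                   # directivity vector (km/s)
```

    With these illustrative moments the rupture is elongated (L_c > W_c) with a 4 s duration and a directivity vector pointing mostly along the first axis.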

  2. Positive and negative sources of emotional arousal enhance long-term word-list retention when induced as long as 30 min after learning.

    PubMed

    Nielson, Kristy A; Powless, Mark

    2007-07-01

    The consolidation of newly formed memories occurs slowly, allowing memories to be altered by experience for some time after their formation. Various treatments, including arousal, can modulate memory consolidation when given soon after learning, but the degree of time-dependency of these treatments in humans has not been studied. Thus, 212 participants learned a word list, which was followed by either a positively or negatively valenced arousing video clip (i.e., comedy or surgery, respectively) after delays of 0, 10, 30 or 45 min. Arousal of either valence induced up to 30 min after learning, but not at 45 min, significantly enhanced one-week retrieval. The findings support (1) the time-dependency of memory modulation in humans and (2) other studies suggesting that it is the degree of arousal, rather than its valence, that modulates memory. Important implications for developing memory intervention strategies and for preserving and validating witness testimony are discussed.

  3. Multi-Scale Analysis of Trends in Northeastern Temperate Forest Springtime Phenology

    NASA Astrophysics Data System (ADS)

    Moon, M.; Melaas, E. K.; Sulla-menashe, D. J.; Friedl, M. A.

    2017-12-01

    The timing of spring leaf emergence is highly variable in many ecosystems, exerts first-order control on growing season length, and significantly modulates seasonally-integrated photosynthesis. Numerous studies have reported trends toward earlier spring phenology in temperate forests, with some papers indicating that this trend is also leading to increased carbon uptake. At broad spatial scales, however, most of these studies have used data from coarse spatial resolution instruments such as MODIS, which do not resolve ecologically important landscape-scale patterns in phenology. In this work, we examine how long-term trends in spring phenology differ across three data sources acquired at different scales of measurement at the Harvard Forest in central Massachusetts. Specifically, we compared trends in the timing of phenology based on long-term in-situ measurements of phenology, estimates based on eddy-covariance measurements of net carbon uptake transition dates, and two sources of satellite-based remote sensing (MODIS and Landsat) land surface phenology (LSP) data. Our analysis focused on the flux footprint surrounding the Harvard Forest Environmental Measurements (EMS) tower. Our results reveal clearly defined trends toward earlier springtime phenology in Landsat LSP and in the timing of tower-based net carbon uptake. However, we find no statistically significant trend in springtime phenology measured from MODIS LSP data products, possibly because the time series of MODIS observations is relatively short (13 years). The trend in the tower-based transition data exhibited a larger negative value than the trend derived from Landsat LSP data (-0.42 and -0.28 days per year over 21 and 28 years, respectively). More importantly, these results have two key implications regarding how changes in spring phenology are impacting carbon uptake at the landscape scale.
First, long-term trends in spring phenology can be quite different depending on which data source is used to estimate the trend; and second, the response of carbon uptake to climate change may be more sensitive than the response of land surface phenology itself.
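
    Trend values like the -0.42 and -0.28 days per year quoted above are ordinary least-squares slopes of onset day-of-year against year. A minimal sketch with synthetic data (the series and its -0.35 d/yr trend are made up for illustration):

```python
import numpy as np

# Least-squares trend in spring onset date (day of year) versus year.
# Synthetic series: a -0.35 d/yr trend plus Gaussian noise.
rng = np.random.default_rng(42)
years = np.arange(1990, 2018)
onset = 130.0 - 0.35 * (years - years[0]) + rng.normal(0.0, 2.0, years.size)

slope, intercept = np.polyfit(years, onset, 1)  # slope in days per year
print(round(slope, 2))  # negative slope -> earlier spring onset over time
```

    Comparing such slopes across records of different lengths (as the study does for tower, Landsat, and MODIS series) also requires attention to the record length, since the uncertainty of the slope shrinks with the span of the series.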

  4. Applying Metrological Techniques to Satellite Fundamental Climate Data Records

    NASA Astrophysics Data System (ADS)

    Woolliams, Emma R.; Mittaz, Jonathan PD; Merchant, Christopher J.; Hunt, Samuel E.; Harris, Peter M.

    2018-02-01

    Quantifying long-term environmental variability, including climatic trends, requires decadal-scale time series of observations. The reliability of such trend analysis depends on the long-term stability of the data record and on understanding the sources of uncertainty in historic, current and future sensors. We give a brief overview of how metrological techniques can be applied to historical satellite data sets. In particular, we discuss the implications of error correlation at different spatial and temporal scales and the forms of such correlation, and consider how uncertainty is propagated with partial correlation. We give a form of the Law of Propagation of Uncertainties that considers the propagation of uncertainties associated with common errors to give the covariance associated with Earth observations in different spectral channels.
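
    The role of common errors in the propagation law can be sketched with a simple two-component error model (one independent component per channel plus one fully common component shared across channels; the numbers are illustrative, not from any real sensor): the channel covariance is V = diag(u)^2 + s s^T, and a derived quantity y = a^T x has variance u_y^2 = a^T V a.

```python
import numpy as np

# Covariance for channels whose errors combine an independent part (u) and a
# fully correlated common part (s); then propagate through y = a^T x.
u = np.array([0.30, 0.25, 0.40])      # independent uncertainties per channel
s = np.array([0.10, 0.10, 0.10])      # common (fully correlated) errors
V = np.diag(u**2) + np.outer(s, s)    # channel error covariance

a = np.array([0.5, 0.3, 0.2])         # linear combination of channels
u_y = np.sqrt(a @ V @ a)              # propagated uncertainty with correlation
```

    Ignoring the common term (i.e., using only diag(u)^2) would understate u_y, which is exactly the failure mode the metrological treatment of correlated errors is meant to avoid.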

  5. Earthquake Forecasting System in Italy

    NASA Astrophysics Data System (ADS)

    Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.

    2017-12-01

    In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. This phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP, an international infrastructure for quantitatively evaluating earthquake prediction and forecast models through purely prospective and reproducible experiments). The OEF system uses the two most popular short-term models: the Epidemic-Type Aftershock Sequences (ETAS) model and the Short-Term Earthquake Probabilities (STEP) model. Here, we report results from OEF's 24-hour earthquake forecasting during the main phases of the 2016-2017 sequence that occurred in the Central Apennines (Italy).
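
    The time clustering that ETAS-type models capture is encoded in a conditional intensity: a background rate plus an Omori-law aftershock contribution from every past event. A minimal sketch (the parameter values are illustrative, not the calibrated Italian OEF values):

```python
import math

def etas_rate(t, catalog, mu=0.2, K=0.05, alpha=1.2, c=0.01, p=1.1, m0=3.0):
    """ETAS conditional intensity
      lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) * (t - t_i + c)**(-p)
    summed over past events (t_i, m_i) with t_i < t.  Times in days,
    magnitudes above the completeness threshold m0; values illustrative."""
    rate = mu
    for t_i, m_i in catalog:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate
```

    For example, with a single magnitude-5 event at t = 0 the rate is strongly elevated in the following days and decays back toward the background rate mu, which is the behaviour a 24-hour forecast exploits.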

  6. On the scale dependence of earthquake stress drop

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Tinti, Elisa; Cirella, Antonella

    2016-10-01

    We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale-dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite fault models. We obtain a picture of the earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < Mw < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of scale invariance of stress drop with source dimension to analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, of which only the latter is constrained by observations.
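
    Most point-source stress-drop estimates of the kind compiled in such studies rest on the circular-crack relations: Eshelby's result Δσ = (7/16) M0 / r^3, with the source radius inferred from the spectral corner frequency as r = k β / fc (k depends on the assumed rupture model, roughly 0.37 for the Brune model). A sketch with illustrative values:

```python
# Circular-crack stress drop from seismic moment M0 (N*m) and corner
# frequency fc (Hz); beta is shear-wave speed (m/s), k the model-dependent
# corner-frequency constant (~0.37 for the Brune model).  Illustrative only.
def stress_drop(m0, fc, beta=3500.0, k=0.37):
    r = k * beta / fc                    # source radius (m)
    return 7.0 / 16.0 * m0 / r**3        # stress drop (Pa)
```

    The sketch also makes the self-similarity statement concrete: if M0 scales as fc^-3, the inferred stress drop is identical for a small and a large event, which is exactly the scale-invariance hypothesis debated in the abstract.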

  7. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.

  8. Time-dependent source model of the Lusi mud volcano

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Rudolph, M. L.; Manga, M.

    2014-12-01

    The Lusi mud eruption, near Sidoarjo, East Java, Indonesia, began in May 2006 and continues to erupt today. Previous analyses of surface deformation data suggested an exponential decay of the pressure in the mud source, but did not constrain the geometry and evolution of the source(s) from which the erupting mud and fluids ascend. To understand the spatiotemporal evolution of the mud and fluid sources, we apply a time-dependent inversion scheme to a densely populated InSAR time series of the surface deformation at Lusi. The SAR data set includes 50 images acquired on 3 overlapping tracks of the ALOS L-band satellite between May 2006 and April 2011. Following multitemporal analysis of this data set, the obtained surface deformation time series is inverted in a time-dependent framework to solve for the volume changes of distributed point sources in the subsurface. The volume change distribution resulting from this modeling scheme shows two zones of high volume change underneath Lusi at 0.5-1.5 km and 4-5.5 km depth, as well as another shallow zone 7 km to the west of Lusi, underneath the Wunut gas field. The cumulative volume change within the shallow source beneath Lusi is ~2-4 times larger than that of the deep source, whilst the ratio of the Lusi shallow source volume change to that of the Wunut gas field is ~1. This observation and model suggest that the Lusi shallow source played a key role in the eruption process and mud supply, but that additional fluids do ascend from depths >4 km on eruptive timescales.
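
    The building block of such distributed point-source inversions is the forward model for a single volume change at depth. A common choice is the point ("Mogi") source in an elastic half-space, whose vertical surface displacement is u_z(r) = (1 - ν) ΔV d / (π (r^2 + d^2)^{3/2}). This is a generic sketch, not the authors' actual inversion code; the depths and volumes below are illustrative.

```python
import numpy as np

# Vertical surface displacement of a point volume source ("Mogi" source):
#   u_z(r) = (1 - nu)/pi * dV * d / (r**2 + d**2)**1.5
# with radial distance r, source depth d (m), volume change dV (m^3),
# and Poisson ratio nu.
def mogi_uz(r, depth, dV, nu=0.25):
    return (1.0 - nu) / np.pi * dV * depth / (r**2 + depth**2) ** 1.5

# Superpose a shallow (1 km) and a deep (5 km) source, as in a distributed
# point-source model; illustrative volume changes.
r = np.linspace(0.0, 10000.0, 101)
uz = mogi_uz(r, 1000.0, 1.0e6) + mogi_uz(r, 5000.0, 2.0e6)
```

    A shallow source produces a narrow, high-amplitude bull's-eye while a deep source produces a broad, subdued signal, which is what lets the InSAR time series separate the two depth ranges.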

  9. Time-frequency approach to underdetermined blind source separation.

    PubMed

    Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong

    2012-02-01

    This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on the Wigner-Ville distribution (WVD) and the Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, in which the negative values of the auto WVD of the sources are fully considered. Then, after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be determined exactly with the proposed approach, no matter how many active sources there are, as long as N ≤ 2M-1. Further discussion about the extraction of auto-term TF points is provided, and finally numerical simulation results are presented to show the superiority of the proposed algorithm by comparing it with existing ones.
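
    The Khatri-Rao (column-wise Kronecker) product is what makes the underdetermined case tractable: although an M x N mixing matrix A with N > M cannot be inverted, the columns of khatri_rao(A, A) can remain linearly independent up to N = 2M - 1 generic sources, so the source auto-WVD values can still be solved for. An illustrative sketch (random A, not the paper's estimator):

```python
import numpy as np

# Column-wise Kronecker (Khatri-Rao) product: column j is kron(a_j, b_j).
def khatri_rao(a, b):
    m, n = a.shape
    p, n2 = b.shape
    assert n == n2
    return (a[:, None, :] * b[None, :, :]).reshape(m * p, n)

M, N = 2, 3                           # 3 sources, 2 mixtures: underdetermined
rng = np.random.default_rng(1)
A = rng.standard_normal((M, N))       # illustrative mixing matrix
K = khatri_rao(A, A)                  # shape (M*M, N)
rank = np.linalg.matrix_rank(K)       # full column rank N for generic A
```

    Here A itself has rank only M = 2 < N, yet K has full column rank N = 3, matching the N ≤ 2M - 1 identifiability bound cited in the abstract.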

  10. History dependence in insect flight decisions during odor tracking.

    PubMed

    Pang, Rich; van Breugel, Floris; Dickinson, Michael; Riffell, Jeffrey A; Fairhall, Adrienne

    2018-02-01

    Natural decision-making often involves extended decision sequences in response to variable stimuli with complex structure. As an example, many animals follow odor plumes to locate food sources or mates, but turbulence breaks up the advected odor signal into intermittent filaments and puffs. This scenario provides an opportunity to ask how animals use sparse, instantaneous, and stochastic signal encounters to generate goal-oriented behavioral sequences. Here we examined the trajectories of flying fruit flies (Drosophila melanogaster) and mosquitoes (Aedes aegypti) navigating in controlled plumes of attractive odorants. While it is known that mean odor-triggered flight responses are dominated by upwind turns, individual responses are highly variable. We asked whether deviations from mean responses depended on specific features of odor encounters, and found that odor-triggered turns were slightly but significantly modulated by two features of odor encounters. First, encounters with higher concentrations triggered stronger upwind turns. Second, encounters occurring later in a sequence triggered weaker upwind turns. To contextualize the latter history dependence theoretically, we examined trajectories simulated from three normative tracking strategies. We found that neither a purely reactive strategy nor a strategy in which the tracker learned the plume centerline over time captured the observed history dependence. In contrast, "infotaxis", in which flight decisions maximized expected information gain about source location, exhibited a history dependence aligned in sign with the data, though much larger in magnitude. These findings suggest that while true plume tracking is dominated by a reactive odor response it might also involve a history-dependent modulation of responses consistent with the accumulation of information about a source over multi-encounter timescales. 
This suggests that short-term memory processes modulating decision sequences may play a role in natural plume tracking.

  11. History dependence in insect flight decisions during odor tracking

    PubMed Central

    van Breugel, Floris; Dickinson, Michael; Riffell, Jeffrey A.; Fairhall, Adrienne

    2018-01-01

    Natural decision-making often involves extended decision sequences in response to variable stimuli with complex structure. As an example, many animals follow odor plumes to locate food sources or mates, but turbulence breaks up the advected odor signal into intermittent filaments and puffs. This scenario provides an opportunity to ask how animals use sparse, instantaneous, and stochastic signal encounters to generate goal-oriented behavioral sequences. Here we examined the trajectories of flying fruit flies (Drosophila melanogaster) and mosquitoes (Aedes aegypti) navigating in controlled plumes of attractive odorants. While it is known that mean odor-triggered flight responses are dominated by upwind turns, individual responses are highly variable. We asked whether deviations from mean responses depended on specific features of odor encounters, and found that odor-triggered turns were slightly but significantly modulated by two features of odor encounters. First, encounters with higher concentrations triggered stronger upwind turns. Second, encounters occurring later in a sequence triggered weaker upwind turns. To contextualize the latter history dependence theoretically, we examined trajectories simulated from three normative tracking strategies. We found that neither a purely reactive strategy nor a strategy in which the tracker learned the plume centerline over time captured the observed history dependence. In contrast, “infotaxis”, in which flight decisions maximized expected information gain about source location, exhibited a history dependence aligned in sign with the data, though much larger in magnitude. These findings suggest that while true plume tracking is dominated by a reactive odor response it might also involve a history-dependent modulation of responses consistent with the accumulation of information about a source over multi-encounter timescales. 
This suggests that short-term memory processes modulating decision sequences may play a role in natural plume tracking. PMID:29432454

  12. OrChem - An open source chemistry search engine for Oracle(R).

    PubMed

    Rijnbeek, Mark; Steinbeck, Christoph

    2009-10-22

    Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net.
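
    The similarity-search operation described above (Tanimoto coefficient against a cut-off) can be sketched with plain Python sets standing in for fingerprint "on" bits; OrChem itself computes CDK fingerprints inside Oracle, and the compound names and bit sets below are invented for illustration.

```python
# Tanimoto similarity between two bit-set fingerprints:
#   T(a, b) = |a & b| / (|a| + |b| - |a & b|)
def tanimoto(fp_a, fp_b):
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

# Scan a {compound_id: fingerprint} store and keep hits above the cut-off,
# most similar first -- the shape of a similarity search, not OrChem's SQL API.
def similarity_search(query_fp, db, cutoff=0.7):
    hits = [(cid, tanimoto(query_fp, fp)) for cid, fp in db.items()]
    return sorted(((c, s) for c, s in hits if s >= cutoff),
                  key=lambda t: -t[1])
```

    Raising the cut-off shrinks the candidate set, which is why the response time reported for OrChem depends on the chosen similarity cut-off.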

  13. Nonlinear synthesis of infrasound propagation through an inhomogeneous, absorbing atmosphere.

    PubMed

    de Groot-Hedlin, C D

    2012-08-01

    An accurate and efficient method to predict infrasound amplitudes from large explosions in the atmosphere is required for diverse source types, including bolides, volcanic eruptions, and nuclear and chemical explosions. A finite-difference, time-domain approach is developed to solve a set of nonlinear fluid dynamic equations for total pressure, temperature, and density fields rather than acoustic perturbations. Three key features for the purpose of synthesizing nonlinear infrasound propagation in realistic media are that it includes gravitational terms, it allows for acoustic absorption, including molecular vibration losses at frequencies well below the molecular vibration frequencies, and the environmental models are constrained to have axial symmetry, allowing a three-dimensional simulation to be reduced to two dimensions. Numerical experiments are performed to assess the algorithm's accuracy and the effect of source amplitudes and atmospheric variability on infrasound waveforms and shock formation. Results show that infrasound waveforms steepen and their associated spectra are shifted to higher frequencies for nonlinear sources, leading to enhanced infrasound attenuation. Results also indicate that nonlinear infrasound amplitudes depend strongly on atmospheric temperature and pressure variations. The solution for total field variables and insertion of gravitational terms also allows for the computation of other disturbances generated by explosions, including gravity waves.

  14. Seven deadly sins in trauma outcomes research: an epidemiologic post mortem for major causes of bias.

    PubMed

    del Junco, Deborah J; Fox, Erin E; Camp, Elizabeth A; Rahbar, Mohammad H; Holcomb, John B

    2013-07-01

    Because randomized clinical trials in trauma outcomes research are expensive and complex, they have rarely been the basis for the clinical care of trauma patients. Most published findings are derived from retrospective and occasionally prospective observational studies that may be particularly susceptible to bias. The sources of bias include some common to other clinical domains, such as heterogeneous patient populations with competing and interdependent short- and long-term outcomes. Other sources of bias are unique to trauma, such as rapidly changing multisystem responses to injury that necessitate highly dynamic treatment regimens such as blood product transfusion. The standard research design and analysis strategies applied in published observational studies are often inadequate to address these biases. Drawing on recent experience in the design, data collection, monitoring, and analysis of the 10-site observational PRospective Observational Multicenter Major Trauma Transfusion (PROMMTT) study, 7 common and sometimes overlapping biases are described through examples and resolution strategies. Sources of bias in trauma research include ignoring (1) variation in patients' indications for treatment (indication bias), (2) the dependency of intervention delivery on patient survival (survival bias), (3) time-varying treatment, (4) time-dependent confounding, (5) nonuniform intervention effects over time, (6) nonrandom missing data mechanisms, and (7) imperfectly defined variables. This list is not exhaustive. The mitigation strategies to overcome these threats to validity require epidemiologic and statistical vigilance. Minimizing the highlighted types of bias in trauma research will facilitate clinical translation of more accurate and reproducible findings and improve the evidence-base that clinicians apply in their care of injured patients.

  15. Seven Deadly Sins in Trauma Outcomes Research: An Epidemiologic Post-Mortem for Major Causes of Bias

    PubMed Central

    del Junco, Deborah J.; Fox, Erin E.; Camp, Elizabeth A.; Rahbar, Mohammad H.; Holcomb, John B.

    2013-01-01

    Background Because randomized clinical trials (RCTs) in trauma outcomes research are expensive and complex, they have rarely been the basis for the clinical care of trauma patients. Most published findings are derived from retrospective and occasionally prospective observational studies that may be particularly susceptible to bias. The sources of bias include some common to other clinical domains, such as heterogeneous patient populations with competing and interdependent short- and long-term outcomes. Other sources of bias are unique to trauma, such as rapidly changing multi-system responses to injury that necessitate highly dynamic treatment regimes like blood product transfusion. The standard research design and analysis strategies applied in published observational studies are often inadequate to address these biases. Methods Drawing on recent experience in the design, data collection, monitoring and analysis of the 10-site observational PROMMTT study, seven common and sometimes overlapping biases are described through examples and resolution strategies. Results Sources of bias in trauma research include ignoring 1) variation in patients’ indications for treatment (indication bias), 2) the dependency of intervention delivery on patient survival (survival bias), 3) time-varying treatment, 4) time-dependent confounding, 5) non-uniform intervention effects over time, 6) non-random missing data mechanisms, and 7) imperfectly defined variables. This list is not exhaustive. Conclusion The mitigation strategies to overcome these threats to validity require epidemiologic and statistical vigilance. Minimizing the highlighted types of bias in trauma research will facilitate clinical translation of more accurate and reproducible findings and improve the evidence-base that clinicians apply in their care of injured patients. PMID:23778519

  16. Dependence on Excitation Density of Multiphonon Decay in Er-doped ZBLAN Glass

    NASA Astrophysics Data System (ADS)

    Bycenski, Kenneth; Collins, John

    2001-11-01

    The dependence of multiphonon decay of rare earth ions in solids on the intensity of the pump beam, first reported by Auzel et al., is examined for the 4S3/2 and 2H11/2 levels of Er-doped ZBLAN glass. Using a frequency-doubled, Q-switched Nd:YAG laser as a pump source, the kinetics of the 4S3/2 level were studied at different pump intensities and temperatures. Lifetime curves show a rise time, which represents the feeding of the 4S3/2 level by the 2H11/2, and a decay time, both of which vary with the intensity of the pump beam, i.e. with the concentration of excited centers. The measured decay times of the 4S3/2 are consistent with those previously reported [2]. In this poster we report on the temperature dependence of this process, and we look at the dependence of the feeding of the 4S3/2 level as the pump intensity changes. A rate equation model shows that the dependence of the rise time on pump intensity is due, in part, to a slowing down of the nonradiative decay from the 2H11/2 level as the pump intensity is increased. We discuss these results in terms of the phonon bottleneck mechanism proposed in reference 1. 1. F. Auzel and F. Pelle, Phys. Rev. B 55, 1106-09 (1997). 2. F. Auzel, private communication.

  17. Influence of heat conducting substrates on explosive crystallization in thin layers

    NASA Astrophysics Data System (ADS)

    Schneider, Wilhelm

    2017-09-01

    Crystallization in a thin, initially amorphous layer is considered. The layer is in thermal contact with a substrate of very large dimensions. The energy equation of the layer contains source and sink terms. The source term is due to liberation of latent heat in the crystallization process, while the sink term is due to conduction of heat into the substrate. To determine the latter, the heat diffusion equation for the substrate is solved by applying Duhamel's integral. Thus, the energy equation of the layer becomes a heat diffusion equation with a time integral as an additional term. The latter term indicates that the heat loss due to the substrate depends on the history of the process. To complete the set of equations, the crystallization process is described by a rate equation for the degree of crystallization. The governing equations are then transformed to a moving co-ordinate system in order to analyze crystallization waves that propagate with invariant properties. Dual solutions are found by an asymptotic expansion for large activation energies of molecular diffusion. By introducing suitable variables, the results can be presented in a universal form that comprises the influence of all non-dimensional parameters that govern the process. Of particular interest for applications is the prediction of a critical heat loss parameter for the existence of crystallization waves with invariant properties.
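The history-dependent heat-loss term described in this record can be illustrated numerically: the instantaneous flux into a semi-infinite substrate is a Duhamel-type convolution of past surface-temperature increments with the substrate's 1/√t step response. The discretization, kernel form, and parameter values below are illustrative assumptions, not the paper's formulation:

```python
import math

def substrate_heat_loss(temps, dt, k_sub=1.0, kappa=1.0):
    # Duhamel-type memory term (illustrative): each past temperature
    # increment contributes a flux that decays like 1/sqrt(elapsed time),
    # so the current heat loss depends on the whole thermal history.
    n = len(temps)
    flux = 0.0
    for j in range(1, n):
        d_temp = temps[j] - temps[j - 1]
        age = (n - j) * dt + 0.5 * dt  # time since that increment occurred
        flux += k_sub * d_temp / math.sqrt(math.pi * kappa * age)
    return flux
```

A constant history gives zero loss, and the flux from a temperature step fades as the step recedes into the past, which is the memory effect the sink term introduces.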

  18. Statistical analyses of soil properties on a Quaternary terrace sequence in the upper Sava River Valley, Slovenia, Yugoslavia

    USGS Publications Warehouse

    Vidic, N.; Pavich, M.; Lobnik, F.

    1991-01-01

    Alpine glaciations, climatic changes and tectonic movements have created a Quaternary sequence of gravelly carbonate sediments in the upper Sava River Valley, Slovenia, Yugoslavia. The names for the terraces, assigned in this model, Günz, Mindel, Riss and Würm in order of decreasing age, are used as morphostratigraphic terms. The soil chronosequence on the terraces was examined to evaluate which soil properties are time dependent and can be used to help constrain the ages of glaciofluvial sedimentation. Soil thickness, thickness of Bt horizons, amount and continuity of clay coatings, and amount of Fe and Mn concretions increase with soil age. The main source of variability consists of solution of carbonate, leaching of basic cations and acidification of soils, which are time dependent and increase with the age of soils. The second source of variability is the content of organic matter, which is less time dependent but varies more within soil profiles. Textural changes are significant, represented by solution of carbonate pebbles and sand and formation of a silt loam matrix, which with age becomes finer, with clay loam or clayey texture. The oldest, Günz, terrace shows slight deviation from the general progressive trends of changes of soil properties with time. The hypothesis of single versus multiple periods of gravel deposition was tested with one-way analysis of variance (ANOVA) on a staggered, nested hierarchical sampling design on the terrace of largest extent and greatest gravel volume, the Würm terrace. The variability of soil properties is generally higher within subareas than between areas of the terrace, except for soil thickness. Observed differences in soil thickness between the areas of the terrace could be due to multiple periods of gravel deposition, or to initial differences in the texture of the deposits.
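The one-way ANOVA used to compare within- and between-area variability can be sketched in a few lines. This is the generic one-way F statistic on made-up measurements, not the study's staggered nested design:

```python
def one_way_anova(groups):
    # groups: list of lists of measurements (e.g. soil thickness per area).
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around group means.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    f = (ss_between / df_b) / (ss_within / df_w)
    return f, df_b, df_w
```

A large F indicates between-area differences exceeding within-area scatter, which in the study holds only for soil thickness.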

  19. (U) An Analytic Study of Piezoelectric Ejecta Mass Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tregillis, Ian Lee

    2017-02-16

    We consider the piezoelectric measurement of the areal mass of an ejecta cloud, for the specific case where ejecta are created by a single shock at the free surface and fly ballistically through vacuum to the sensor. To do so, we define time- and velocity-dependent ejecta “areal mass functions” at the source and sensor in terms of typically unknown distribution functions for the ejecta particles. Next, we derive an equation governing the relationship between the areal mass function at the source (which resides in the rest frame of the free surface) and at the sensor (which resides in the laboratory frame). We also derive expressions for the analytic (“true”) accumulated ejecta mass at the sensor and the measured (“inferred”) value obtained via the standard method for analyzing piezoelectric voltage traces. This approach enables us to derive an exact expression for the error imposed upon a piezoelectric ejecta mass measurement (in a perfect system) by the assumption of instantaneous creation. We verify that when the ejecta are created instantaneously (i.e., when the time dependence is a delta function), the piezoelectric inference method exactly reproduces the correct result. When creation is not instantaneous, the standard piezo analysis will always overestimate the true mass. However, the error is generally quite small (less than several percent) for most reasonable velocity and time dependences. In some cases, errors exceeding 10-15% may require velocity distributions or ejecta production timescales inconsistent with experimental observations. These results are demonstrated rigorously with numerous analytic test problems.

  20. Prioritized packet video transmission over time-varying wireless channel using proactive FEC

    NASA Astrophysics Data System (ADS)

    Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay

    2000-12-01

    Quality of video transmitted over time-varying wireless channels relies heavily on the coordinated effort to cope with both channel and source variations dynamically. Given the priority of each source packet and the estimated channel condition, an adaptive protection scheme based on joint source-channel criteria is investigated via proactive forward error correction (FEC). With proactive FEC in Reed Solomon (RS)/Rate-compatible punctured convolutional (RCPC) codes, we study a practical algorithm to match the relative priority of source packets and instantaneous channel conditions. The channel condition is estimated to capture the long-term fading effect in terms of the averaged SNR over a preset window. Proactive protection is performed for each packet based on the joint source-channel criteria with special attention to the accuracy, time-scale match, and feedback delay of channel status estimation. The overall gain of the proposed protection mechanism is demonstrated in terms of the end-to-end wireless video performance.

  1. Time-dependent clustering analysis of the second BATSE gamma-ray burst catalog

    NASA Technical Reports Server (NTRS)

    Brainerd, J. J.; Meegan, C. A.; Briggs, Michael S.; Pendleton, G. N.; Brock, M. N.

    1995-01-01

    A time-dependent two-point correlation-function analysis of the Burst and Transient Source Experiment (BATSE) 2B catalog finds no evidence of burst repetition. As part of this analysis, we discuss the effects of sky exposure on the observability of burst repetition and present the equation describing the signature of burst repetition in the data. For a model of all burst repetition from a source occurring in less than five days we derive upper limits on the number of bursts in the catalog from repeaters and model-dependent upper limits on the fraction of burst sources that produce multiple outbursts.

  2. Localized Enzymatic Degradation of Polymers: Physics and Scaling Laws

    NASA Astrophysics Data System (ADS)

    Lalitha Sridhar, Shankar; Vernerey, Franck

    2018-03-01

    Biodegradable polymers are naturally abundant in living matter and have led to great advances in controlling environmental pollution due to synthetic polymer products, harnessing renewable energy from biofuels, and in the field of biomedicine. One of the most prevalent mechanisms of biodegradation involves enzyme-catalyzed depolymerization by biological agents. Despite numerous studies dedicated to understanding polymer biodegradation in different environments, a simple model that predicts the macroscopic behavior (mass and structural loss) in terms of microphysical processes (enzyme transport and reaction) is lacking. An interesting phenomenon occurs when an enzyme source (released by a biological agent) attacks a tight polymer mesh that restricts free diffusion. A fuzzy interface separating the intact and fully degraded polymer propagates away from the source and into the polymer as the enzymes diffuse and react in time. Understanding the characteristics of this interface will provide crucial insight into the biodegradation process and potential ways to precisely control it. In this work, we present a centrosymmetric model of biodegradation by characterizing the moving fuzzy interface in terms of its speed and width. The model predicts that the characteristics of this interface are governed by two time scales, namely the polymer degradation and enzyme transport times, which in turn depend on four main polymer and enzyme properties. A key finding of this work is simple scaling laws that can be used to guide biodegradation of polymers in different applications.

  3. BWR ASSEMBLY SOURCE TERMS FOR WASTE PACKAGE DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    T.L. Lotz

    1997-02-15

    This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development Department (WPDD) to provide boiling water reactor (BWR) assembly radiation source term data for use during Waste Package (WP) design. The BWR assembly radiation source terms are to be used for evaluation of radiolysis effects at the WP surface, and for personnel shielding requirements during assembly or WP handling operations. The objectives of this evaluation are to generate BWR assembly radiation source terms that bound selected groupings of BWR assemblies, with regard to assembly average burnup and cooling time, which comprise the anticipated MGDS BWR commercial spent nuclear fuel (SNF) waste stream. The source term data is to be provided in a form which can easily be utilized in subsequent shielding/radiation dose calculations. Since these calculations may also be used for Total System Performance Assessment (TSPA), with appropriate justification provided by TSPA, or radionuclide release rate analysis, the grams of each element and additional cooling times out to 25 years will also be calculated and the data included in the output files.

  4. GPS Block 2R Time Standard Assembly (TSA) architecture

    NASA Technical Reports Server (NTRS)

    Baker, Anthony P.

    1990-01-01

    The underlying philosophy of the Global Positioning System (GPS) 2R Time Standard Assembly (TSA) architecture is to utilize two frequency sources, one fixed frequency reference source and one system frequency source, and to couple the system frequency source to the reference frequency source via a sample data loop. The system source is used to provide the basic clock frequency and timing for the space vehicle (SV) and it uses a voltage controlled crystal oscillator (VCXO) with high short term stability. The reference source is an atomic frequency standard (AFS) with high long term stability. The architecture can support any type of frequency standard. In the system design rubidium, cesium, and H2 masers outputting a canonical frequency were accommodated. The architecture is software intensive. All VCXO adjustments are digital and are calculated by a processor. They are applied to the VCXO via a digital to analog converter.
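The sample data loop coupling the VCXO to the atomic reference can be illustrated with a toy digital steering loop: the phase error against the reference is sampled each period and a PI correction word steers the VCXO frequency. All gains, frequencies, and the unquantized correction (a real TSA applies it through a DAC) are invented for illustration:

```python
def steer_vcxo(f_ref=10.0, f_free=10.0005, steps=200, kp=0.4, ki=0.1, dt=1.0):
    # Toy disciplined-oscillator loop: each sample period, accumulate the
    # phase error between the steered VCXO and the reference, then apply a
    # digital PI correction to the VCXO's free-running frequency.
    phase_err, integ, f = 0.0, 0.0, f_free
    for _ in range(steps):
        phase_err += (f - f_ref) * dt               # phase grows with offset
        integ += phase_err * dt
        f = f_free - (kp * phase_err + ki * integ)  # digital steering word
    return f
```

The integral term absorbs the VCXO's fixed frequency offset, so the loop marries the VCXO's short-term stability to the reference's long-term stability.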

  5. The effect of a hot, spherical scattering cloud on quasi-periodic oscillation behavior

    NASA Astrophysics Data System (ADS)

    Bussard, R. W.; Weisskopf, M. C.; Elsner, R. F.; Shibazaki, N.

    1988-04-01

    A Monte Carlo technique is used to investigate the effects of a hot electron scattering cloud surrounding a time-dependent X-ray source. Results are presented for the time-averaged emergent energy spectra and the mean residence time in the cloud as a function of energy. Moreover, after Fourier transforming the scattering Green's function, it is shown how the cloud affects both the observed power spectrum of a time-dependent source and the cross spectrum (Fourier transform of a cross correlation between energy bands). It is found that the power spectra intrinsic to the source are related to those observed by a relatively simple frequency-dependent multiplicative factor (a transmission function). The cloud can severely attenuate high frequencies in the power spectra, depending on optical depth, and, at lower frequencies, the transmission function has roughly a Lorentzian shape. It is also found that if the intrinsic energy spectrum is constant in time, the phase of the cross spectrum is determined entirely by scattering. Finally, the implications of the results for studies of the X-ray quasi-periodic oscillators are discussed.
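The roughly Lorentzian transmission function described for the scattering cloud can be illustrated directly. The cutoff frequency and spectra below are invented, but multiplying the intrinsic power spectrum by a Lorentzian low-pass factor follows the abstract's description of how the cloud attenuates high-frequency variability:

```python
def lorentzian_transmission(freq, f_c):
    # Roughly Lorentzian low-pass factor (illustrative form): variability
    # above the cutoff f_c, set by the spread in photon residence times in
    # the cloud, is strongly attenuated.
    return 1.0 / (1.0 + (freq / f_c) ** 2)

def observed_power(intrinsic, freqs, f_c):
    # Observed power spectrum = intrinsic spectrum x transmission function.
    return [p * lorentzian_transmission(f, f_c)
            for p, f in zip(intrinsic, freqs)]
```

For a flat intrinsic spectrum, the observed power falls off above the cutoff, mimicking how optical depth suppresses high-frequency QPO power.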

  6. DaDyn-RS: a tool for the time-dependent simulation of damage, fluid pressure and long-term instability in alpine rock slopes

    NASA Astrophysics Data System (ADS)

    Riva, Federico; Agliardi, Federico; Amitrano, David; Crosta, Giovanni B.

    2017-04-01

    Large mountain slopes in alpine environments undergo a complex long-term evolution from glacial to postglacial environments, through a transient period of paraglacial readjustment. During and after this transition, the interplay among rock strength, topographic relief, and morpho-climatic drivers varying in space and time can lead to the development of different types of slope instability, from sudden catastrophic failures to large, slow, long-lasting yet potentially catastrophic rockslides. Understanding the long-term evolution of large rock slopes requires accounting for the time-dependence of deglaciation unloading, permeability and fluid pressure distribution, displacements and failure mechanisms. In turn, this is related to a convincing description of rock mass damage processes and to their transition from a sub-critical (progressive failure) to a critical (catastrophic failure) character. Although mechanisms of damage occurrence in rocks have been extensively studied in the laboratory, the description of time-dependent damage under gravitational load and variable external actions remains difficult. In this perspective, starting from a time-dependent model conceived for laboratory rock deformation, we developed DaDyn-RS, a tool to simulate the long-term evolution of real, large rock slopes. DaDyn-RS is a 2D FEM model programmed in Matlab that combines damage and time-to-failure laws to reproduce both diffuse damage and strain localization while tracking long-term slope displacements from primary to tertiary creep stages. We implemented in the model the ability to account for rock mass heterogeneity and property upscaling, time-dependent deglaciation, as well as damage-dependent fluid pressure occurrence and stress corrosion. We first tested DaDyn-RS performance on synthetic case studies, to investigate the effect of the different model parameters on the mechanisms and timing of long-term slope behavior.
The model reproduces complex interactions between topography, deglaciation rate, mechanical properties and fluid pressure occurrence, resulting in different kinematics, damage patterns and timing of slope instabilities. We assessed the role of groundwater in slope damage and deformation mechanisms by introducing time-dependent pressure cycling within simulations. Then, we applied DaDyn-RS to real slopes located in the Italian Central Alps, affected by an active rockslide and a Deep Seated Gravitational Slope Deformation, respectively. From the Last Glacial Maximum to present conditions, our model reproduces, in an explicitly time-dependent framework, the progressive development of damage-induced permeability, strain localization and shear band differentiation at different times between the Lateglacial period and the Mid-Holocene climatic transition. Different mechanisms and timings characterize different styles of slope deformation, consistent with available dating constraints. DaDyn-RS is thus able to account for different long-term slope dynamics, from slow creep to the delayed transition to fast-moving rockslides.

  7. Reading a 400,000-year record of earthquake frequency for an intraplate fault

    NASA Astrophysics Data System (ADS)

    Williams, Randolph T.; Goodwin, Laurel B.; Sharp, Warren D.; Mozley, Peter S.

    2017-05-01

    Our understanding of the frequency of large earthquakes at timescales longer than instrumental and historical records is based mostly on paleoseismic studies of fast-moving plate-boundary faults. Similar study of intraplate faults has been limited until now, because intraplate earthquake recurrence intervals are generally long (10s to 100s of thousands of years) relative to conventional paleoseismic records determined by trenching. Long-term variations in the earthquake recurrence intervals of intraplate faults therefore are poorly understood. Longer paleoseismic records for intraplate faults are required both to better quantify their earthquake recurrence intervals and to test competing models of earthquake frequency (e.g., time-dependent, time-independent, and clustered). We present the results of U-Th dating of calcite veins in the Loma Blanca normal fault zone, Rio Grande rift, New Mexico, United States, that constrain earthquake recurrence intervals over much of the past ˜550 ka—the longest direct record of seismic frequency documented for any fault to date. The 13 distinct seismic events delineated by this effort demonstrate that for >400 ka, the Loma Blanca fault produced periodic large earthquakes, consistent with a time-dependent model of earthquake recurrence. However, this time-dependent series was interrupted by a cluster of earthquakes at ˜430 ka. The carbon isotope composition of calcite formed during this seismic cluster records rapid degassing of CO2, suggesting an interval of anomalous fluid source. In concert with U-Th dates recording decreased recurrence intervals, we infer seismicity during this interval records fault-valve behavior. These data provide insight into the long-term seismic behavior of the Loma Blanca fault and, by inference, other intraplate faults.

  8. Emulsion chamber observations of primary cosmic-ray electrons in the energy range 30-1000 GeV

    NASA Technical Reports Server (NTRS)

    Nishimura, J.; Fujii, M.; Taira, T.; Aizu, E.; Hiraiwa, H.; Kobayashi, T.; Niu, K.; Ohta, I.; Golden, R. L.; Koss, T. A.

    1980-01-01

    The results of a series of emulsion exposures, beginning in Japan in 1968 and continued in the U.S. since 1975, which have yielded a total balloon-altitude exposure of 98,700 sq m sr s, are presented. The data are discussed in terms of several models of cosmic-ray propagation. Interpreted in terms of the energy-dependent leaky-box model, the spectrum results suggest a galactic electron residence time of 1.0(+2.0, -0.5) x 10 to the 7th yr, which is consistent with results from Be-10 observations. Finally, the possibility that departures from smooth power law behavior in the spectrum due to individual nearby sources will be observable in the energy range above 1 TeV is discussed.

  9. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    PubMed

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of the source terms that pose a hazard to operators and the public, and to the machine itself in terms of efficiency and integrity, in severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates currently available are not sufficiently accurate. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from a broad information-gathering stage. The wide number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.

  10. Evaluation and long-term monitoring of the time-dependent characteristics of self-consolidating concrete in an instrumented Kansas prestressed concrete bridge.

    DOT National Transportation Integrated Search

    2014-01-01

    Construction of a new prestressed bridge with Self-Consolidating Concrete (SCC) provided the opportunity to further study the time-dependent properties of SCC mix and its long-term performance; considering the results and recommendations of previous ...

  11. Evaluation and long-term monitoring of the time-dependent characteristics of self-consolidating concrete in an instrumented Kansas prestressed concrete bridge : [technical summary].

    DOT National Transportation Integrated Search

    2014-01-01

    Construction of a new prestressed bridge with Self-Consolidating Concrete (SCC) provided the opportunity to further study the time-dependent properties of SCC mix and its long-term performance; considering the results and recommendations of previ...

  12. Field quantization and squeezed states generation in resonators with time-dependent parameters

    NASA Technical Reports Server (NTRS)

    Dodonov, V. V.; Klimov, A. B.; Nikonov, D. E.

    1992-01-01

    The problem of electromagnetic field quantization is usually considered in textbooks under the assumption that the field occupies some empty box. The case when a nonuniform time-dependent dielectric medium is confined in some space region with time-dependent boundaries is studied. The basis of the subsequent consideration is the system of Maxwell's equations in linear passive time-dependent dielectric and magnetic medium without sources.

  13. Time dependent data, time independent models: challenges of updating Australia's National Seismic Hazard Assessment

    NASA Astrophysics Data System (ADS)

    Griffin, J.; Clark, D.; Allen, T.; Ghasemi, H.; Leonard, M.

    2017-12-01

    Standard probabilistic seismic hazard assessment (PSHA) simulates earthquake occurrence as a time-independent process. However paleoseismic studies in slowly deforming regions such as Australia show compelling evidence that large earthquakes on individual faults cluster within active periods, followed by long periods of quiescence. Therefore the instrumental earthquake catalog, which forms the basis of PSHA earthquake recurrence calculations, may only capture the state of the system over the period of the catalog. Together this means that data informing our PSHA may not be truly time-independent. This poses challenges in developing PSHAs for typical design probabilities (such as 10% in 50 years probability of exceedance): Is the present state observed through the instrumental catalog useful for estimating the next 50 years of earthquake hazard? Can paleo-earthquake data, that shows variations in earthquake frequency over time-scales of 10,000s of years or more, be robustly included in such PSHA models? Can a single PSHA logic tree be useful over a range of different probabilities of exceedance? In developing an updated PSHA for Australia, decadal-scale data based on instrumental earthquake catalogs (i.e. alternative area based source models and smoothed seismicity models) is integrated with paleo-earthquake data through inclusion of a fault source model. Use of time-dependent non-homogeneous Poisson models allows earthquake clustering to be modeled on fault sources with sufficient paleo-earthquake data. This study assesses the performance of alternative models by extracting decade-long segments of the instrumental catalog, developing earthquake probability models based on the remaining catalog, and testing performance against the extracted component of the catalog. Although this provides insights into model performance over the short-term, for longer timescales it is recognised that model choice is subject to considerable epistemic uncertainty. 
Therefore a formal expert elicitation process has been used to assign weights to alternative models for the 2018 update to Australia's national PSHA.
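    A minimal sketch of the time-dependent (non-homogeneous Poisson) probability calculation the abstract refers to, assuming a hypothetical clustered rate function; the rate model and parameter values below are illustrative, not taken from the Australian PSHA:

```python
import math

def prob_at_least_one(rate_fn, t0, t1, n=1000):
    """P(>= 1 event in [t0, t1]) for a non-homogeneous Poisson process,
    integrating the rate numerically with the trapezoidal rule."""
    h = (t1 - t0) / n
    total = 0.5 * (rate_fn(t0) + rate_fn(t1))
    for i in range(1, n):
        total += rate_fn(t0 + i * h)
    integral = total * h
    return 1.0 - math.exp(-integral)

# Hypothetical decaying cluster rate: elevated just after an active period.
rate = lambda t: 0.02 + 0.1 * math.exp(-t / 30.0)   # events per year
p50 = prob_at_least_one(rate, 0.0, 50.0)            # 50-year design window
```

For a constant rate the integral collapses to the familiar time-independent result 1 - exp(-lambda * T), which is a useful sanity check on the integration.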

  14. Diffraction-based optical correlator

    NASA Technical Reports Server (NTRS)

    Spremo, Stevan M. (Inventor); Fuhr, Peter L. (Inventor); Schipper, John F. (Inventor)

    2005-01-01

    Method and system for wavelength-based processing of a light beam. A light beam, produced at a chemical or physical reaction site and having at least first and second wavelengths, λ1 and λ2, is received and diffracted at a first diffraction grating to provide first and second diffracted beams, which are received and analyzed in terms of wavelength and/or time at two spaced apart light detectors. In a second embodiment, light from first and second sources is diffracted and compared in terms of wavelength and/or time to determine if the two beams arise from the same source. In a third embodiment, a light beam is split and diffracted and passed through first and second environments to study differential effects. In a fourth embodiment, diffracted light beam components, having first and second wavelengths, are received sequentially at a reaction site to determine whether a specified reaction is promoted, based on order of receipt of the beams. In a fifth embodiment, a cylindrically shaped diffraction grating (uniform or chirped) is rotated and translated to provide a sequence of diffracted beams with different wavelengths. In a sixth embodiment, incident light, representing one or more symbols, is successively diffracted from first and second diffraction gratings and is received at different light detectors, depending upon the wavelengths present in the incident light.

  15. Numerical simulation of two-dimensional flow over a heated carbon surface with coupled heterogeneous and homogeneous reactions

    NASA Astrophysics Data System (ADS)

    Johnson, Ryan Federick; Chelliah, Harsha Kumar

    2017-01-01

    For a range of flow and chemical timescales, numerical simulations of two-dimensional laminar flow over a reacting carbon surface were performed to further understand the complex coupling between heterogeneous and homogeneous reactions. An open-source computational package (OpenFOAM®) was used with previously developed lumped heterogeneous reaction models for carbon surfaces and a detailed homogeneous reaction model for CO oxidation. The influence of finite-rate chemical kinetics was explored by varying the surface temperature from 1800 to 2600 K, while flow residence time effects were explored by varying the free-stream velocity up to 50 m/s. The dependence of the reacting boundary-layer structure on residence time was analysed by extracting the ratio of the chemical source and species diffusion terms. The important contributions of radical species reactions to the overall carbon removal rate, which are often neglected in multi-dimensional simulations, are highlighted. The results provide a framework for future development and validation of lumped heterogeneous reaction models based on multi-dimensional reacting flow configurations.
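    As an illustrative sketch (not the authors' code; the 1-D profile shapes and variable names are assumptions), the source-to-diffusion ratio used to analyse the boundary-layer structure might be extracted from wall-normal profiles like this:

```python
import numpy as np

def source_diffusion_ratio(y, Y_co, rho, D, omega_co):
    """Ratio of the chemical source term to the species diffusion term for
    a 1-D wall-normal profile of CO mass fraction (all arrays on grid y)."""
    # Diffusion term: d/dy (rho * D * dY/dy)
    flux = rho * D * np.gradient(Y_co, y)
    diffusion = np.gradient(flux, y)
    # Guard against division by ~0 with a small floor
    return omega_co / np.where(np.abs(diffusion) > 1e-30, diffusion, 1e-30)
```

A ratio much larger than one indicates a reaction-dominated (diffusion-limited) region, while a ratio near one marks the balance zone of the reacting boundary layer.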

  16. Retrieval Can Increase or Decrease Suggestibility Depending on How Memory Is Tested: The Importance of Source Complexity

    ERIC Educational Resources Information Center

    Chan, Jason C. K.; Wilford, Miko M.; Hughes, Katharine L.

    2012-01-01

    Taking an intervening test between learning episodes can enhance later source recollection. Paradoxically, testing can also increase people's susceptibility to the misinformation effect--a finding termed retrieval-enhanced suggestibility (RES, Chan, Thomas, & Bulevich, 2009). We conducted three experiments to examine this apparent contradiction.…

  17. Extreme Unconditional Dependence Vs. Multivariate GARCH Effect in the Analysis of Dependence Between High Losses on Polish and German Stock Indexes

    NASA Astrophysics Data System (ADS)

    Rokita, Pawel

    Classical portfolio diversification methods do not take account of any dependence between extreme returns (losses). Many researchers provide, however, empirical evidence for various assets that extreme losses co-occur. If the co-occurrence is frequent enough to be statistically significant, it may seriously influence portfolio risk. Such effects may result from a few different properties of financial time series, for instance: (1) extreme dependence in a (long-term) unconditional distribution, (2) extreme dependence in subsequent conditional distributions, (3) time-varying conditional covariance, (4) time-varying (long-term) unconditional covariance, (5) market contagion. Moreover, a mix of these properties may be present in return time series. Modeling each of them requires different approaches. It seems reasonable to investigate whether distinguishing between the properties is highly significant for portfolio risk measurement. If it is, identifying the effect responsible for high loss co-occurrence would be of great importance. If it is not, the best solution would be selecting the easiest-to-apply model. This article concentrates on two of the aforementioned properties: extreme dependence (in a long-term unconditional distribution) and time-varying conditional covariance.

  18. Residence times in river basins as determined by analysis of long-term tritium records

    USGS Publications Warehouse

    Michel, R.L.

    1992-01-01

    The US Geological Survey has maintained a network of stations to collect samples for the measurement of tritium concentrations in precipitation and streamflow since the early 1960s. Tritium data from outflow waters of river basins draining 4500-75000 km2 are used to determine average residence times of water within the basins. The basins studied are the Colorado River above Cisco, Utah; the Kissimmee River above Lake Okeechobee, Florida; the Mississippi River above Anoka, Minnesota; the Neuse River above Streets Ferry Bridge near Vanceboro, North Carolina; the Potomac River above Point of Rocks, Maryland; the Sacramento River above Sacramento, California; and the Susquehanna River above Harrisburg, Pennsylvania. The basins are modeled with the assumption that the outflow in the river comes from two sources: prompt (within-year) runoff from precipitation and flow from the long-term reservoirs of the basin. Tritium concentration in the outflow water of the basin is dependent on three factors: (1) tritium concentration in runoff from the long-term reservoir, which depends on the residence time for the reservoir and historical tritium concentrations in precipitation; (2) tritium concentrations in precipitation (the within-year runoff component); (3) relative contributions of flow from the long-term and within-year components. Predicted tritium concentrations for the outflow water in the river basins were calculated for different residence times and for different relative contributions from the two reservoirs. A box model was used to calculate tritium concentrations in the long-term reservoir. Calculated values of outflow tritium concentrations for the basin were regressed against the measured data to obtain a slope as close as possible to 1. These regressions assumed an intercept of zero and were carried out for different values of residence time and reservoir contribution to maximize the fit of modeled versus actual data for all the above rivers.
The final slopes of the fitted regression lines ranged from 0.95 to 1.01 (correlation coefficient > 0.96) for the basins studied. Values for the residence time of waters within the basins and average relative contributions of the within-year and long-term reservoirs to outflow were obtained. Values for river basin residence times ranged from 2 years for the Kissimmee River basin to 20 years for the Potomac River basin. The residence times indicate the time scale in which the basin responds to anthropogenic inputs. The modeled tritium concentrations for the basins also furnish input data for urban and agricultural settings where these river waters are used. © 1992.
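    A minimal sketch of the two-component box model described above, assuming yearly time steps and a well-mixed long-term reservoir; the parameter names and the specific mixing rule are illustrative, not quoted from the paper:

```python
import math

T_HALF = 12.32                              # tritium half-life, years
DECAY = math.exp(-math.log(2) / T_HALF)     # one-year radioactive decay factor

def outflow_tritium(precip_tu, tau, f_prompt, c0=0.0):
    """Yearly outflow tritium concentration (TU) from a two-component basin:
    prompt within-year runoff (fraction f_prompt of flow) plus a well-mixed
    long-term reservoir with mean residence time tau (years)."""
    c_res = c0
    out = []
    for c_p in precip_tu:
        # Reservoir exchanges 1/tau of its water with precipitation each
        # year, and the whole inventory decays radioactively.
        c_res = (c_res * (1.0 - 1.0 / tau) + c_p / tau) * DECAY
        out.append(f_prompt * c_p + (1.0 - f_prompt) * c_res)
    return out
```

Regressing a series produced this way against measured outflow concentrations, as the study does, then selects the (tau, f_prompt) pair whose fitted slope is closest to 1.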

  19. How to Characterize the Reliability of Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    NASA Technical Reports Server (NTRS)

    Liu, David (Donhang)

    2015-01-01

    The reliability of an MLCC device is the product of a time-dependent part and a time-independent part: 1) the time-dependent part is a statistical distribution; 2) the time-independent part is the reliability at t=0, the initial reliability. Initial reliability depends only on how a BME MLCC is designed and processed. Similar to the way the minimum dielectric thickness ensured the long-term reliability of a PME MLCC, the initial reliability also ensures the long-term reliability of a BME MLCC. This presentation shows new discoveries regarding commonalities and differences between PME and BME capacitor technologies.
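    The product form can be sketched as follows; the two-parameter Weibull shape for the time-dependent part is an assumption for illustration, not taken from the presentation:

```python
import math

def mlcc_reliability(t, r0, eta, beta):
    """Reliability of an MLCC as the product of a time-independent initial
    reliability r0 (set by design and processing) and a time-dependent
    survival term, here sketched as a two-parameter Weibull distribution
    with characteristic life eta and shape beta."""
    return r0 * math.exp(-((t / eta) ** beta))
```

At t=0 the Weibull term equals 1, so the expression reduces to the initial reliability r0, matching the decomposition stated in the abstract.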

  20. How to Characterize the Reliability of Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2015-01-01

    The reliability of an MLCC device is the product of a time-dependent part and a time-independent part: 1) the time-dependent part is a statistical distribution; 2) the time-independent part is the reliability at t=0, the initial reliability. Initial reliability depends only on how a BME MLCC is designed and processed. Similar to the way the minimum dielectric thickness ensured the long-term reliability of a PME MLCC, the initial reliability also ensures the long-term reliability of a BME MLCC. This presentation shows new discoveries regarding commonalities and differences between PME and BME capacitor technologies.

  1. Time-Dependent Behavior of a Graphite/Thermoplastic Composite and the Effects of Stress and Physical Aging

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Feldman, Mark

    1995-01-01

    Experimental studies were performed to determine the effects of stress and physical aging on the matrix-dominated time-dependent properties of IM7/8320 composite. Isothermal tensile creep/aging test techniques developed for polymers were adapted for testing of the composite material. Time-dependent transverse and shear compliances for an orthotropic plate were found from short-term creep compliance measurements at constant, sub-Tg temperatures. These compliance terms were shown to be affected by physical aging. Aging time shift factors and shift rates were found to be a function of temperature and applied stress.

  2. Bile salt-stimulated lipase of human milk: characterization of the enzyme from preterm and term milk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freed, L.M.; Hamosh, P.; Hamosh, M.

    1986-03-01

    The bile salt-stimulated lipase (BSSL) of human milk is an important digestive enzyme in the newborn whose pancreatic function is immature. Milk from mothers delivering premature infants (preterm milk) has similar levels of BSSL activity to that of mothers of term infants (term milk). This study has determined whether the BSSL in preterm milk has the same characteristics as that in term milk. Milk samples were collected during the first 12 wk of lactation from seven mothers of infants born at 26-30 wk (very preterm, VPT), 31-37 wk (preterm, PT) and 37-42 wk (term, T) gestation. BSSL activity was measured using ³H-triolein emulsion as substrate. Time course, bile salt and enzyme concentration, pH and pH stability were studied, as well as inhibition of BSSL by eserine. The characteristics of BSSL from preterm and term milk were identical as were comparisons between colostrum and mature milk BSSL. BSSL from all milk sources had a neutral-to-alkaline pH optimum (pH 7.3-8.9), was stable at low pH for 60 min, and was 95-100% inhibited by eserine (greater than or equal to 0.6 mM). BSSL activity, regardless of enzyme source, was bile-salt dependent and was stimulated only by primary bile salts (taurocholate, glycocholate). The data indicate that the BSSL in milks of mothers delivering as early as 26 wk gestation is identical to that in term milk.

  3. Helicon plasma generator-assisted surface conversion ion source for the production of H- ion beams at the Los Alamos Neutron Science Center

    NASA Astrophysics Data System (ADS)

    Tarvainen, O.; Rouleau, G.; Keller, R.; Geros, E.; Stelzer, J.; Ferris, J.

    2008-02-01

    The converter-type negative ion source currently employed at the Los Alamos Neutron Science Center (LANSCE) is based on cesium enhanced surface production of H- ion beams in a filament-driven discharge. In this kind of ion source, the extracted H- beam current is limited by the achievable plasma density, which depends primarily on the electron emission current from the filaments. The emission current can be increased by increasing the filament temperature but, unfortunately, this leads not only to shorter filament lifetime but also to an increase in metal evaporation from the filament, which deposits on the H- converter surface and degrades its performance. Therefore, we have started an ion source development project focused on replacing these thermionic cathodes (filaments) of the converter source by a helicon plasma generator capable of producing high-density hydrogen plasmas with low electron energy. Our studies have so far shown that the plasma density of the surface conversion source can be increased significantly by exciting a helicon wave in the plasma, and we expect to improve the performance of the surface converter H- ion source in terms of beam brightness and time between services. The design of this new source and preliminary results are presented, along with a discussion of physical processes relevant for H- ion beam production with this novel design. Ultimately, we perceive this approach as an interim step towards our long-term goal, combining a helicon plasma generator with an SNS-type main discharge chamber, which will allow us to individually optimize the plasma properties of the plasma cathode (helicon) and H- production (main discharge) in order to further improve the brightness of extracted H- ion beams.

  4. Helicon plasma generator-assisted surface conversion ion source for the production of H(-) ion beams at the Los Alamos Neutron Science Center.

    PubMed

    Tarvainen, O; Rouleau, G; Keller, R; Geros, E; Stelzer, J; Ferris, J

    2008-02-01

    The converter-type negative ion source currently employed at the Los Alamos Neutron Science Center (LANSCE) is based on cesium enhanced surface production of H(-) ion beams in a filament-driven discharge. In this kind of ion source, the extracted H(-) beam current is limited by the achievable plasma density, which depends primarily on the electron emission current from the filaments. The emission current can be increased by increasing the filament temperature but, unfortunately, this leads not only to shorter filament lifetime but also to an increase in metal evaporation from the filament, which deposits on the H(-) converter surface and degrades its performance. Therefore, we have started an ion source development project focused on replacing these thermionic cathodes (filaments) of the converter source by a helicon plasma generator capable of producing high-density hydrogen plasmas with low electron energy. Our studies have so far shown that the plasma density of the surface conversion source can be increased significantly by exciting a helicon wave in the plasma, and we expect to improve the performance of the surface converter H(-) ion source in terms of beam brightness and time between services. The design of this new source and preliminary results are presented, along with a discussion of physical processes relevant for H(-) ion beam production with this novel design. Ultimately, we perceive this approach as an interim step towards our long-term goal, combining a helicon plasma generator with an SNS-type main discharge chamber, which will allow us to individually optimize the plasma properties of the plasma cathode (helicon) and H(-) production (main discharge) in order to further improve the brightness of extracted H(-) ion beams.

  5. Improvements of PKU PMECRIS for continuous hundred hours CW proton beam operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, S. X., E-mail: sxpeng@pku.edu.cn; Ren, H. T.; Zhang, T.

    2016-02-15

    In order to improve the source stability, a long term continuous wave (CW) proton beam experiment has been carried out with the Peking University compact permanent magnet 2.45 GHz ECR ion source (PKU PMECRIS). Before such an experiment a lot of improvements and modifications were completed on the source body, the Faraday cup and the PKU ion source test bench. At the beginning of 2015, a continuous operation of PKU PMECRIS for 306 h with more than 50 mA CW beam was carried out after the success of many short-term tests. No plasma generator failure or high voltage breakdown was observed during that running period and the proton source reliability is near 100%. Total beam availability, which is defined as 35-keV beam-on time divided by elapsed time, was higher than 99% [S. X. Peng et al., Chin. Phys. B 24(7), 075203 (2015)]. A re-inspection was performed after another additional 100 h operation (counting time) and no obvious sign of component failure was observed. Counting the previous source testing time together, the PMECRIS longevity is now demonstrated to be greater than 460 h. This paper is mainly concentrated on the improvements for this long term experiment.

  6. Code System for Performance Assessment Ground-water Analysis for Low-level Nuclear Waste.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    KOZAK, MATTHEW W.

    1994-02-09

    Version 00 The PAGAN code system is a part of the performance assessment methodology developed for use by the U. S. Nuclear Regulatory Commission in evaluating license applications for low-level waste disposal facilities. In this methodology, PAGAN is used as one candidate approach for analysis of the ground-water pathway. PAGAN, Version 1.1 has the capability to model the source term, vadose-zone transport, and aquifer transport of radionuclides from a waste disposal unit. It combines the two codes SURFACE and DISPERSE, which are used as semi-analytical solutions to the convective-dispersion equation. This system uses menu-driven input/output for implementing a simple ground-water transport analysis and incorporates statistical uncertainty functions for handling data uncertainties. The output from PAGAN includes a time- and location-dependent radionuclide concentration at a well in the aquifer, or a time- and location-dependent radionuclide flux into a surface-water body.

  7. Short-Term Solar Forecasting Performance of Popular Machine Learning Algorithms: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Florita, Anthony R; Elgindy, Tarek; Hodge, Brian S

    A framework for assessing the performance of short-term solar forecasting is presented in conjunction with a range of numerical results using global horizontal irradiation (GHI) from the open-source Surface Radiation Budget (SURFRAD) data network. A suite of popular machine learning algorithms is compared according to a set of statistically distinct metrics and benchmarked against the persistence-of-cloudiness forecast and a cloud motion forecast. Results show significant improvement compared to the benchmarks with trade-offs among the machine learning algorithms depending on the desired error metric. Training inputs include time series observations of GHI for a history of years, historical weather and atmospheric measurements, and corresponding date and time stamps such that training sensitivities might be inferred. Prediction outputs are GHI forecasts for 1, 2, 3, and 4 hours ahead of the issue time, and they are made for every month of the year for 7 locations. Photovoltaic power and energy outputs can then be made using the solar forecasts to better understand power system impacts.
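    A hedged sketch of the benchmarking idea: a naive persistence forecast and a relative skill metric. Note the paper's benchmark persists cloudiness (the clear-sky index) rather than raw GHI as done here for simplicity; function names are illustrative:

```python
import math

def persistence_forecast(ghi_history, horizon):
    """Naive persistence benchmark: the forecast h steps ahead is simply
    the last observed GHI value (simplified; the paper persists the
    clear-sky index instead of raw irradiance)."""
    return [ghi_history[-1]] * horizon

def rmse(pred, obs):
    """Root-mean-square error between forecasts and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def skill_score(model_rmse, persistence_rmse):
    """Skill relative to persistence: 1 is perfect, 0 merely matches the
    benchmark, negative is worse than persistence."""
    return 1.0 - model_rmse / persistence_rmse
```

Comparing each machine-learning forecast to such a benchmark, per horizon and per error metric, is what exposes the trade-offs the abstract mentions.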

  8. Learning complex temporal patterns with resource-dependent spike timing-dependent plasticity.

    PubMed

    Hunzinger, Jason F; Chan, Victor H; Froemke, Robert C

    2012-07-01

    Studies of spike timing-dependent plasticity (STDP) have revealed that long-term changes in the strength of a synapse may be modulated substantially by temporal relationships between multiple presynaptic and postsynaptic spikes. Whereas long-term potentiation (LTP) and long-term depression (LTD) of synaptic strength have been modeled as distinct or separate functional mechanisms, here, we propose a new shared resource model. A functional consequence of our model is fast, stable, and diverse unsupervised learning of temporal multispike patterns with a biologically consistent spiking neural network. Due to interdependencies between LTP and LTD, dendritic delays, and proactive homeostatic aspects of the model, neurons are equipped to learn to decode temporally coded information within spike bursts. Moreover, neurons learn spike timing with few exposures in substantial noise and jitter. Surprisingly, despite having only one parameter, the model also accurately predicts in vitro observations of STDP in more complex multispike trains, as well as rate-dependent effects. We discuss candidate commonalities in natural long-term plasticity mechanisms.
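    A simplified stand-in for the shared-resource idea (the paper's actual one-parameter model differs; the constants, time scales, and the exponential recovery rule below are illustrative assumptions):

```python
import math

def stdp_dw(dt, resource, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change scaled by a shared, depletable
    resource, so recent LTP events limit available LTD and vice versa.
    dt = t_post - t_pre in milliseconds."""
    if dt >= 0:                      # pre before post -> potentiation
        dw = a_plus * math.exp(-dt / tau_plus)
    else:                            # post before pre -> depression
        dw = -a_minus * math.exp(dt / tau_minus)
    return dw * resource

def recover(resource, elapsed_ms, tau_r=200.0):
    """Resource recovers exponentially toward 1 between plasticity events."""
    return 1.0 - (1.0 - resource) * math.exp(-elapsed_ms / tau_r)
```

Coupling LTP and LTD through one consumable quantity is what makes updates in dense multispike trains interdependent rather than independently additive.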

  9. Applications of Cosmological Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Christopherson, Adam J.

    2011-06-01

    Cosmological perturbation theory is crucial for our understanding of the universe. The linear theory has been well understood for some time, however developing and applying the theory beyond linear order is currently at the forefront of research in theoretical cosmology. This thesis studies the applications of perturbation theory to cosmology and, specifically, to the early universe. Starting with some background material introducing the well-tested 'standard model' of cosmology, we move on to develop the formalism for perturbation theory up to second order giving evolution equations for all types of scalar, vector and tensor perturbations, both in gauge dependent and gauge invariant form. We then move on to the main result of the thesis, showing that, at second order in perturbation theory, vorticity is sourced by a coupling term quadratic in energy density and entropy perturbations. This source term implies a qualitative difference to linear order. Thus, while at linear order vorticity decays with the expansion of the universe, the same is not true at higher orders. This will have important implications on future measurements of the polarisation of the Cosmic Microwave Background, and could give rise to the generation of a primordial seed magnetic field. Having derived this qualitative result, we then estimate the scale dependence and magnitude of the vorticity power spectrum, finding, for simple power law inputs a small, blue spectrum. The final part of this thesis concerns higher order perturbation theory, deriving, for the first time, the metric tensor, gauge transformation rules and governing equations for fully general third order perturbations. We close with a discussion of natural extensions to this work and other possible ideas for off-shooting projects in this continually growing field.
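    Schematically (prefactors, index placement, and gauge details suppressed; this is a hedged paraphrase of the stated result, not the thesis's exact equation), the second-order vorticity evolves as

```latex
\omega_{ij}^{(2)\,\prime} - 3 H c_{\mathrm{s}}^{2}\, \omega_{ij}^{(2)}
\;\propto\; \frac{1}{\rho + p}\, \nabla_{[i}\,\delta\rho \,\nabla_{j]}\,\delta S ,
```

    so vorticity is generated only when both energy-density and entropy gradients are present; with the source switched off one recovers the linear-order behaviour in which vorticity simply decays with the expansion.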

  10. A Wearable Inertial Measurement Unit for Long-Term Monitoring in the Dependency Care Area

    PubMed Central

    Rodríguez-Martín, Daniel; Pérez-López, Carlos; Samà, Albert; Cabestany, Joan; Català, Andreu

    2013-01-01

    Human movement analysis is a field of wide interest since it enables the assessment of a large variety of variables related to quality of life. Human movement can be accurately evaluated through Inertial Measurement Units (IMU), which are wearable and comfortable devices with long battery life. The IMU's movement signals might be, on the one hand, stored in a digital support, in which an analysis is performed a posteriori. On the other hand, the signal analysis might take place in the same IMU at the same time as the signal acquisition through online classifiers. The new sensor system presented in this paper is designed for both collecting movement signals and analyzing them in real-time. This system is a flexible platform useful for collecting data via a triaxial accelerometer, a gyroscope and a magnetometer, with the possibility to incorporate other information sources in real-time. A μSD card can store all inertial data and a Bluetooth module is able to send information to other external devices and receive data from other sources. The system presented is being used in the real-time detection and analysis of Parkinson's disease symptoms, in gait analysis, and in a fall detection system. PMID:24145917

  11. A wearable inertial measurement unit for long-term monitoring in the dependency care area.

    PubMed

    Rodríguez-Martín, Daniel; Pérez-López, Carlos; Samà, Albert; Cabestany, Joan; Català, Andreu

    2013-10-18

    Human movement analysis is a field of wide interest since it enables the assessment of a large variety of variables related to quality of life. Human movement can be accurately evaluated through Inertial Measurement Units (IMU), which are wearable and comfortable devices with long battery life. The IMU's movement signals might be, on the one hand, stored in a digital support, in which an analysis is performed a posteriori. On the other hand, the signal analysis might take place in the same IMU at the same time as the signal acquisition through online classifiers. The new sensor system presented in this paper is designed for both collecting movement signals and analyzing them in real-time. This system is a flexible platform useful for collecting data via a triaxial accelerometer, a gyroscope and a magnetometer, with the possibility to incorporate other information sources in real-time. A µSD card can store all inertial data and a Bluetooth module is able to send information to other external devices and receive data from other sources. The system presented is being used in the real-time detection and analysis of Parkinson's disease symptoms, in gait analysis, and in a fall detection system.

  12. Empirical Green's functions from small earthquakes: A waveform study of locally recorded aftershocks of the 1971 San Fernando earthquake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchings, L.; Wu, F.

    1990-02-10

    Seismograms from 52 aftershocks of the 1971 San Fernando earthquake recorded at 25 stations distributed across the San Fernando Valley are examined to identify empirical Green's functions, and characterize the dependence of their waveforms on moment, focal mechanism, source and recording site spatial variations, recording site geology, and recorded frequency band. Recording distances ranged from 3.0 to 33.0 km, hypocentral separations ranged from 0.22 to 28.4 km, and recording site separations ranged from 0.185 to 24.2 km. The recording site geologies are diorite gneiss, marine and nonmarine sediments, and alluvium of varying thicknesses. Waveforms of events with moment below about 1.5 × 10²¹ dyn cm are independent of the source-time function and are termed empirical Green's functions. Waveforms recorded at a particular station from events located within 1.0 to 3.0 km of each other, depending upon site geology, with very similar focal mechanism solutions are nearly identical for frequencies up to 10 Hz. There is no correlation to waveforms between recording sites at least 1.2 km apart, and waveforms are clearly distinctive for two sites 0.185 km apart. The geologic conditions of the recording site dominate the character of empirical Green's functions. Even for source separations of up to 20.0 km, the empirical Green's functions at a particular site are consistent in frequency content, amplification, and energy distribution. Therefore, it is shown that empirical Green's functions can be used to obtain site response functions. The observations of empirical Green's functions are used as a basis for developing the theory for using empirical Green's functions in deconvolution for source pulses and synthesis of seismograms of larger earthquakes.
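    The deconvolution step mentioned at the end can be sketched in the frequency domain with water-level regularization, a standard stabilization technique; the paper's exact procedure is not specified here, and the function name and water-level value are illustrative:

```python
import numpy as np

def waterlevel_deconvolve(record, green, level=0.01):
    """Estimate a relative source-time function by frequency-domain
    deconvolution of an empirical Green's function from a larger event's
    record. Spectral bins of the Green's function smaller than
    level * max|G| are raised to that floor (keeping their phase) so the
    division does not blow up where the denominator is tiny."""
    n = len(record)
    R = np.fft.rfft(record, n)
    G = np.fft.rfft(green, n)
    floor = level * np.max(np.abs(G))
    G_reg = np.where(np.abs(G) < floor,
                     floor * np.exp(1j * np.angle(G)), G)
    return np.fft.irfft(R / G_reg, n)
```

When the large-event record really is the small event convolved with a source pulse, the output recovers that pulse up to the regularized bands.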

  13. Constraining the Long-Term Average of Earthquake Recurrence Intervals From Paleo- and Historic Earthquakes by Assimilating Information From Instrumental Seismicity

    NASA Astrophysics Data System (ADS)

    Zoeller, G.

    2017-12-01

    Paleo- and historic earthquakes are the most important source of information for the estimation of long-term recurrence intervals in fault zones, because sequences of paleoearthquakes cover more than one seismic cycle. On the other hand, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. Taking these shortcomings into account, long-term recurrence intervals are usually unstable as long as no additional information is included. In the present study, we assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a 'clock-change' model that leads to a Brownian Passage Time distribution for recurrence intervals. We take advantage of an earlier finding that the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which is usually around one and can be estimated easily from instrumental seismicity in the region under consideration. This allows us to reduce the uncertainties in the estimation of the mean recurrence interval significantly, especially for short paleoearthquake sequences and high dating uncertainties. We present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times assuming a stationary Poisson process.
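    A sketch of the Brownian Passage Time conditional rupture probability used in such models (the mapping from the b-value to the aperiodicity is omitted; here alpha is simply taken as a given input, and the numerical integration is a convenience, not the paper's method):

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time density with mean recurrence interval mu
    and aperiodicity alpha."""
    return (math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3))
            * math.exp(-(t - mu) ** 2 / (2.0 * mu * alpha**2 * t)))

def bpt_cdf(t, mu, alpha, n=2000):
    """CDF by trapezoidal integration (adequate for a sketch)."""
    if t <= 0:
        return 0.0
    h = t / n
    s = 0.5 * bpt_pdf(t, mu, alpha)   # density -> 0 at t -> 0+
    for i in range(1, n):
        s += bpt_pdf(i * h, mu, alpha)
    return s * h

def conditional_prob(elapsed, window, mu, alpha):
    """P(rupture in (elapsed, elapsed + window] | quiet up to elapsed)."""
    f_e = bpt_cdf(elapsed, mu, alpha)
    f_w = bpt_cdf(elapsed + window, mu, alpha)
    return (f_w - f_e) / (1.0 - f_e)
```

Unlike the exponential (stationary Poisson) alternative, this conditional probability depends on the time elapsed since the last rupture.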

  14. Backward semi-linear parabolic equations with time-dependent coefficients and local Lipschitz source

    NASA Astrophysics Data System (ADS)

    Nho Hào, Dinh; Van Duc, Nguyen; Van Thang, Nguyen

    2018-05-01

    Let H be a Hilbert space with inner product ⟨·, ·⟩ and norm ‖·‖, and let A(t) be a positive self-adjoint unbounded time-dependent operator on H. We establish stability estimates of Hölder type and propose a regularization method with error estimates of Hölder type for the ill-posed backward semi-linear parabolic equation with the source function f satisfying a local Lipschitz condition.
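    As a hedged sketch of the problem class (the operator symbol A(t), the a-priori bound E, and the exact exponents are generic conventions for backward parabolic problems, not quoted from the paper):

```latex
u_t(t) + A(t)\,u(t) = f\bigl(t, u(t)\bigr), \quad 0 < t < T,
\qquad \|u(T) - \varphi\| \le \varepsilon,
```

    where one seeks u(t) for 0 ≤ t < T from the noisy final value φ, with f locally Lipschitz in u. A typical Hölder-type stability estimate for two solutions bounded by E then has the logarithmic-convexity form

```latex
\|u_1(t) - u_2(t)\| \le C\, E^{\,1 - t/T}\, \varepsilon^{\,t/T},
\qquad 0 \le t \le T ,
```

    so the reconstruction degrades gracefully as t moves away from the final time, losing Hölder continuity only at t = 0.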

  15. An Exact Form of Lilley's Equation with a Velocity Quadrupole/Temperature Dipole Source Term

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2001-01-01

    There have been several attempts to introduce approximations into the exact form of Lilley's equation in order to express the source term as the sum of a quadrupole whose strength is quadratic in the fluctuating velocities and a dipole whose strength is proportional to the temperature fluctuations. The purpose of this note is to show that it is possible to choose the dependent (i.e., the pressure) variable so that this type of result can be derived directly from the Euler equations without introducing any additional approximations.

  16. Mathematical Fluid Dynamics of Store and Stage Separation

    DTIC Science & Technology

    2005-05-01

    coordinates; r = stretched inner radius; S(x) = effective source strength; Re = transition Reynolds number; t = time; r = reflection coefficient; T = temperature ... wave drag due to lift integral has the same form as that due to thickness, the source strength of the equivalent body depends on streamwise derivatives ... revolution in which the source strength S(x) is proportional to the x rate of change of cross-sectional area, the source strength depends on the streamwise

  17. OrChem - An open source chemistry search engine for Oracle®

    PubMed Central

    2009-01-01

    Background Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Results Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. Availability OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net. PMID:20298521
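The similarity search described above can be sketched in a few lines. This is a toy illustration, not OrChem's implementation: real systems use hashed binary fingerprints (e.g. from the Chemistry Development Kit), while the compound names and bit sets below are invented.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints modelled as bit sets."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def similarity_search(query_fp, database, cutoff):
    """Return (id, score) pairs at or above the cutoff, best match first."""
    hits = [(mol_id, tanimoto(query_fp, fp)) for mol_id, fp in database.items()]
    return sorted([h for h in hits if h[1] >= cutoff], key=lambda h: -h[1])

# Invented fingerprints for illustration only.
db = {"aspirin": {1, 4, 7, 9}, "caffeine": {2, 4, 8}, "ibuprofen": {1, 4, 7, 11}}
hits = similarity_search({1, 4, 7, 9}, db, cutoff=0.5)
```

Raising the cut-off shrinks the candidate set, which is why the abstract notes that response time depends on the chosen similarity cut-off.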

  18. Large Torque Variations in Two Soft Gamma Repeaters

    NASA Technical Reports Server (NTRS)

    Woods, Peter M.; Kouveliotou, Chryssa; Gogus, Ersin; Finger, Mark H.; Swank, Jean; Markwardt, Craig B.; Hurley, Kevin; van der Klis, Michiel; Six, N. Frank (Technical Monitor)

    2001-01-01

    We have monitored the pulse frequencies of the two soft gamma repeaters SGR 1806-20 and SGR 1900+14 through the beginning of year 2001 using primarily Rossi X-ray Timing Explorer Proportional Counter Array observations. In both sources, we observe large changes in the spin-down torque up to a factor of approximately 4, which persist for several months. Using long baseline phase-connected timing solutions as well as the overall frequency histories, we construct torque noise power spectra for each SGR. The power spectrum of each source is very red (power-law slope approximately -3.5). These power spectra are consistent in normalization with some accreting systems, yet much steeper in slope than any known accreting source. To the best of our knowledge, torque noise power spectra with a comparably steep frequency dependence have only been seen in young, glitching radio pulsars (e.g. Vela). The observed changes in spin-down rate do not correlate with burst activity, therefore, the physical mechanisms behind each phenomenon are also likely unrelated. Within the context of the magnetar model, seismic activity cannot account for both the bursts and the long-term torque changes unless the seismically active regions are decoupled from one another.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oloff, L.-P., E-mail: oloff@physik.uni-kiel.de; Hanff, K.; Stange, A.

    With the advent of ultrashort-pulsed extreme ultraviolet sources, such as free-electron lasers or high-harmonic-generation (HHG) sources, a new research field for photoelectron spectroscopy has opened up in terms of femtosecond time-resolved pump-probe experiments. The impact of the high peak brilliance of these novel sources on photoemission spectra, so-called vacuum space-charge effects caused by the Coulomb interaction among the photoemitted probe electrons, has been studied extensively. However, possible distortions of the energy and momentum distributions of the probe photoelectrons caused by the low photon energy pump pulse due to the nonlinear emission of electrons have not yet been studied in detail. Here, we systematically investigate these pump laser-induced space-charge effects in a HHG-based experiment for the test case of highly oriented pyrolytic graphite. Specifically, we determine how the key parameters of the pump pulse—the excitation density, wavelength, spot size, and emitted electron energy distribution—affect the measured time-dependent energy and momentum distributions of the probe photoelectrons. The results are well reproduced by a simple mean-field model, which could open a path for the correction of pump laser-induced space-charge effects and thus toward probing ultrafast electron dynamics in strongly excited materials.

  20. X-ray time lags in PG 1211+143

    NASA Astrophysics Data System (ADS)

    Lobban, A. P.; Vaughan, S.; Pounds, K.; Reeves, J. N.

    2018-05-01

    We investigate the X-ray time lags of a recent ˜630 ks XMM-Newton observation of PG 1211+143. We find well-correlated variations across the XMM-Newton EPIC bandpass, with the first detection of a hard lag in this source with a mean time delay of up to ˜3 ks at the lowest frequencies. We find that the energy-dependence of the low-frequency hard lag scales approximately linearly with log(E) when averaged over all orbits, consistent with the propagating fluctuations model. However, we find that the low-frequency lag behaviour becomes more complex on time-scales longer than a single orbit, suggestive of additional modes of variability. We also detect a high-frequency soft lag at ˜10-4 Hz with the magnitude of the delay peaking at ≲ 0.8 ks, consistent with previous observations, which we discuss in terms of small-scale reverberation.
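The hard and soft lags described above are conventionally estimated from the phase of the cross-spectrum between two energy bands. The sketch below is a generic single-frequency version of that idea, not the authors' pipeline, and the light curves are synthetic: the "hard" band is the "soft" band delayed by exactly 3 s.

```python
import cmath
import math

def dft_bin(x, k):
    """Single-bin DFT: X_k = sum_n x[n] * exp(-2*pi*i*k*n/N)."""
    n_tot = len(x)
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_tot)
               for n in range(n_tot))

def time_lag_at_bin(soft, hard, k, dt):
    """Time lag from the cross-spectrum phase at frequency bin k.
    With C_k = conj(S_k) * H_k, a positive value means the hard band
    lags the soft band by that many seconds."""
    f_k = k / (len(soft) * dt)
    cross = dft_bin(soft, k).conjugate() * dft_bin(hard, k)
    return -cmath.phase(cross) / (2.0 * math.pi * f_k)

# Synthetic light curves: 1024 s at 1-s sampling, signal at bin k = 10.
n, dt = 1024, 1.0
f0 = 10.0 / (n * dt)
soft = [math.sin(2.0 * math.pi * f0 * t) for t in range(n)]
hard = [math.sin(2.0 * math.pi * f0 * (t - 3.0)) for t in range(n)]
lag = time_lag_at_bin(soft, hard, k=10, dt=dt)
```

In practice lags are averaged over frequency bands and many light-curve segments; this toy case recovers the injected 3-s delay at the signal frequency.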

  1. Three-dimensional nonhydrostatic simulations of summer thunderstorms in the humid subtropics versus High Plains

    NASA Astrophysics Data System (ADS)

    Lin, Hsin-mu; Wang, Pao K.; Schlesinger, Robert E.

    2005-11-01

    This article presents a detailed comparison of cloud microphysical evolution among six warm-season thunderstorm simulations using a time-dependent three-dimensional model WISCDYMM. The six thunderstorms chosen for this study consist of three apiece from two contrasting climate zones, the US High Plains (one supercell and two multicells) and the humid subtropics (two in Florida, US and one in Taipei, Taiwan, all multicells). The primary goal of this study is to investigate the differences among thunderstorms in different climate regimes in terms of their microphysical structures and how differently these structures evolve in time. A subtropical case is used as an example to illustrate the general contents of a simulated storm, and two examples of the simulated storms, one humid subtropical and one northern High Plains case, are used to describe in detail the microphysical histories. The simulation results are compared with the available observational data, and the agreement between the two is shown to be at least fairly close overall. The analysis, synthesis and implications of the simulation results are then presented. The microphysical histories of the six simulated storms in terms of the domain-integrated masses of all five hydrometeor classes (cloud water, cloud ice, rain, snow, graupel/hail), along with the individual sources (and sinks) of the three precipitating hydrometeor classes (rain, snow, graupel/hail) are analyzed in detail. These analyses encompass both the absolute magnitudes and their percentage contributions to the totals, for the condensate mass and their precipitation production (and depletion) rates, respectively. 
Comparisons between the hydrometeor mass partitionings for the High Plains versus subtropical thunderstorms show that, in a time-averaged sense, ice hydrometeors (cloud ice, snow, graupel/hail) account for ˜ 70-80% of the total hydrometeor mass for the High Plains storms but only ˜ 50% for the subtropical storms, after the systems have reached quasi-steady mature states. This demonstrates that ice processes are highly important even in thunderstorms occurring in warm climatic regimes. The dominant rain sources are two of the graupel/hail sinks, shedding and melting, in both High Plains and subtropical storms, while the main rain sinks are accretion by hail and evaporation. The dominant graupel/hail sources are accretion of rain, snow and cloud water, while its main sinks are shedding and melting. The dominant snow sources are the Bergeron-Findeisen process and accretion of cloud water, while the main sinks are accretion by graupel/hail and sublimation. However, the rankings of the leading production and depletion mechanisms differ somewhat in different storm cases, especially for graupel/hail. The model results indicate that the same hydrometeor types in the different climates have their favored microphysical sources and sinks. These findings not only prove that thunderstorm structure depends on local dynamic and thermodynamic atmospheric conditions that are generally climate-dependent, but also provide information about the partitioning of hydrometeors in the storms. Such information is potentially useful for convective parameterization in large-scale models.

  2. A Systematic Search for Short-term Variability of EGRET Sources

    NASA Technical Reports Server (NTRS)

    Wallace, P. M.; Griffis, N. J.; Bertsch, D. L.; Hartman, R. C.; Thompson, D. J.; Kniffen, D. A.; Bloom, S. D.

    2000-01-01

    The 3rd EGRET Catalog of High-energy Gamma-ray Sources contains 170 unidentified sources, and there is great interest in the nature of these sources. One means of determining source class is the study of flux variability on time scales of days; pulsars are believed to be stable on these time scales while blazars are known to be highly variable. In addition, previous work has demonstrated that 3EG J0241-6103 and 3EG J1837-0606 are candidates for a new gamma-ray source class. These sources near the Galactic plane display transient behavior but cannot be associated with any known blazars. Although many instances of flaring AGN have been reported, the EGRET database has not been systematically searched for occurrences of short-timescale (approximately 1 day) variability. These considerations have led us to conduct a systematic search for short-term variability in EGRET data, covering all viewing periods through proposal cycle 4. Six 3EG catalog sources are reported here to display variability on short time scales; four of them are unidentified. In addition, three non-catalog variable sources are discussed.
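A common building block for variability searches of this kind is a chi-square test against a constant-flux hypothesis. The function below is a generic version of that test, not the authors' specific method, and the daily fluxes are invented for illustration.

```python
def variability_chi2(fluxes, errors):
    """Chi-square statistic against a constant-source hypothesis, using
    the inverse-variance weighted mean flux. Large values (relative to a
    chi-square distribution with len(fluxes) - 1 degrees of freedom)
    flag likely variability."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * f for w, f in zip(weights, fluxes)) / sum(weights)
    return sum((f - mean) ** 2 / e**2 for f, e in zip(fluxes, errors))

# Hypothetical daily fluxes (arbitrary units) with 1-sigma errors:
steady = variability_chi2([1.0, 1.1, 0.9, 1.0], [0.3, 0.3, 0.3, 0.3])
flaring = variability_chi2([1.0, 1.1, 4.0, 1.0], [0.3, 0.3, 0.3, 0.3])
```

A steady pulsar-like source yields a statistic near the number of degrees of freedom, while a single strong flare drives it far into the tail.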

  3. Trends in Mortality of Tuberculosis Patients in the United States: The Long-term Perspective

    PubMed Central

    Barnes, Richard F.W.; Moore, Maria Luisa; Garfein, Richard S.; Brodine, Stephanie; Strathdee, Steffanie A.; Rodwell, Timothy C.

    2011-01-01

    PURPOSE To describe long-term trends in TB mortality and to compare trends estimated from two different sources of public health surveillance data. METHODS Trends and changes in trend were estimated by joinpoint regression. Comparisons between datasets were made by fitting a Poisson regression model. RESULTS Since 1900, TB mortality rates estimated from death certificates have declined steeply, except for a period of no change in the 1980s. This decade had long-term consequences resulting in more TB deaths in later years than would have occurred had there been no flattening of the trend. Recent trends in TB mortality estimated from National Tuberculosis Surveillance System (NTSS) data, which record all-cause mortality, differed from trends based on death certificates. In particular, NTSS data showed TB mortality rates flattening since 2002. CONCLUSIONS Estimates of trends in TB mortality vary by data source, and therefore interpretation of the success of control efforts will depend upon the surveillance dataset used. The datasets may be subject to different biases that vary with time. One dataset showed a sustained improvement in the control of TB since the early 1990s while the other indicated that the rate of TB mortality was no longer declining. PMID:21820320

  4. SINQ layout, operation, applications and R&D to high power

    NASA Astrophysics Data System (ADS)

    Bauer, G. S.; Dai, Y.; Wagner, W.

    2002-09-01

    Since 1997, the Paul Scherrer Institut (PSI) has been operating a 1 MW class research spallation neutron source, named SINQ. SINQ is driven by a cascade of three accelerators, the final stage being a 590 MeV isochronous ring cyclotron which delivers a beam current of 1.8 mA at an rf-frequency of 51 MHz. Since for neutron production this is essentially a dc-device, SINQ is a continuous neutron source and is optimized in its design for high time-average neutron flux. This makes the facility similar to a research reactor in terms of utilization, but, in terms of beam power, it is, by a large margin, the most powerful spallation neutron source currently in operation worldwide. As a consequence, target load levels prevail in SINQ which are beyond the realm of existing experience, demanding a careful approach to the design and operation of a high power target. While the best neutronic performance of the source is expected for a liquid lead-bismuth eutectic target, no experience with such systems exists. For this reason a staged approach has been embarked upon, starting with a heavy water cooled rod target of Zircaloy-2 and proceeding via steel clad lead rods towards the final goal of a target optimized in both neutronic performance and service lifetime. Experience currently accruing with a test target containing sample rods with different materials specimens will help to select the proper structural material and make dependable lifetime estimates accounting for the real operating conditions that prevail in the facility. In parallel, both theoretical and experimental work is going on within the MEGAPIE (MEGAwatt Pilot Experiment) project, a joint initiative by six European research institutions and JAERI (Japan), DOE (USA) and KAERI (Korea), to design, build, operate and explore a liquid lead-bismuth spallation target for 1 MW of beam power, taking advantage of the existing spallation neutron facility SINQ.

  5. Spatial variation and density-dependent dispersal in competitive coexistence.

    PubMed Central

    Amarasekare, Priyanga

    2004-01-01

    It is well known that dispersal from localities favourable to a species' growth and reproduction (sources) can prevent competitive exclusion in unfavourable localities (sinks). What is perhaps less well known is that too much emigration can undermine the viability of sources and cause regional competitive exclusion. Here, I investigate two biological mechanisms that reduce the cost of dispersal to source communities. The first involves increasing the spatial variation in the strength of competition such that sources can withstand high rates of emigration; the second involves reducing emigration from sources via density-dependent dispersal. I compare how different forms of spatial variation and modes of dispersal influence source viability, and hence source-sink coexistence, under dominance and pre-emptive competition. A key finding is that, while spatial variation substantially reduces dispersal costs under both types of competition, density-dependent dispersal does so only under dominance competition. For instance, when spatial variation in the strength of competition is high, coexistence is possible (regardless of the type of competition) even when sources experience high emigration rates; when spatial variation is low, coexistence is restricted even under low emigration rates. Under dominance competition, density-dependent dispersal has a strong effect on coexistence. For instance, when the emigration rate increases with density at an accelerating rate (Type III density-dependent dispersal), coexistence is possible even when spatial variation is quite low; when the emigration rate increases with density at a decelerating rate (Type II density-dependent dispersal), coexistence is restricted even when spatial variation is quite high. Under pre-emptive competition, density-dependent dispersal has only a marginal effect on coexistence. 
Thus, the diversity-reducing effects of high dispersal rates persist under pre-emptive competition even when dispersal is density dependent, but can be significantly mitigated under dominance competition if density-dependent dispersal is Type III rather than Type II. These results lead to testable predictions about source-sink coexistence under different regimes of competition, spatial variation and dispersal. They identify situations in which density-independent dispersal provides a reasonable approximation to species' dispersal patterns, and those under which consideration of density-dependent dispersal is crucial to predicting long-term coexistence. PMID:15306322
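The Type II versus Type III distinction above can be illustrated with simple saturating and sigmoidal per-capita emigration functions. These Hill-type forms are a common way to model such responses, but they are an assumption here, not the equations used in the paper.

```python
def emigration_type_ii(n, m_max, k):
    """Type II: per-capita emigration rises with density n but at a
    decelerating (saturating) rate; k is the half-saturation density."""
    return m_max * n / (k + n)

def emigration_type_iii(n, m_max, k, q=2.0):
    """Type III: sigmoidal; emigration accelerates with density at low n
    (exponent q > 1), so sparse source populations lose few emigrants."""
    return m_max * n**q / (k**q + n**q)
```

At low density the Type III rate sits well below the Type II rate, which is the mechanism by which Type III dispersal protects sources from being drained; both forms approach the same maximum at high density.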

  6. SALSA3D: A Tomographic Model of Compressional Wave Slowness in the Earth’s Mantle for Improved Travel-Time Prediction and Travel-Time Prediction Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballard, Sanford; Hipp, James R.; Begnaud, Michael L.

    The task of monitoring the Earth for nuclear explosions relies heavily on seismic data to detect, locate, and characterize suspected nuclear tests. In this study, motivated by the need to locate suspected explosions as accurately and precisely as possible, we developed a tomographic model of the compressional wave slowness in the Earth’s mantle with primary focus on the accuracy and precision of travel-time predictions for P and Pn ray paths through the model. Path-dependent travel-time prediction uncertainties are obtained by computing the full 3D model covariance matrix and then integrating slowness variance and covariance along ray paths from source to receiver. Path-dependent travel-time prediction uncertainties reflect the amount of seismic data that was used in tomography with very low values for paths represented by abundant data in the tomographic data set and very high values for paths through portions of the model that were poorly sampled by the tomography data set. The pattern of travel-time prediction uncertainty is a direct result of the off-diagonal terms of the model covariance matrix and underscores the importance of incorporating the full model covariance matrix in the determination of travel-time prediction uncertainty. In addition, the computed pattern of uncertainty differs significantly from that of 1D distance-dependent travel-time uncertainties computed using traditional methods, which are only appropriate for use with travel times computed through 1D velocity models.
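The role of the off-diagonal covariance terms can be seen in a minimal sketch. For a discretized model, a travel time is T = Σᵢ Lᵢ sᵢ (segment length times slowness per cell), so its variance is the quadratic form LᵀCL. The two-cell numbers below are invented for illustration and have nothing to do with SALSA3D itself.

```python
def travel_time_variance(path_lengths, cov):
    """Variance of predicted travel time T = sum_i L_i * s_i for ray-path
    segment lengths L through cells whose slownesses have covariance
    matrix C: var(T) = L^T C L. The double sum keeps off-diagonal terms."""
    n = len(path_lengths)
    return sum(path_lengths[i] * cov[i][j] * path_lengths[j]
               for i in range(n) for j in range(n))

# Toy 2-cell model (arbitrary units), positively correlated slownesses.
L = [10.0, 20.0]
C = [[0.010, 0.004],
     [0.004, 0.020]]
full = travel_time_variance(L, C)
diag_only = travel_time_variance(L, [[C[0][0], 0.0], [0.0, C[1][1]]])
```

With positive covariances, keeping only the diagonal underestimates the travel-time uncertainty, which is the abstract's point about needing the full covariance matrix.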

  7. SALSA3D: A Tomographic Model of Compressional Wave Slowness in the Earth’s Mantle for Improved Travel-Time Prediction and Travel-Time Prediction Uncertainty

    DOE PAGES

    Ballard, Sanford; Hipp, James R.; Begnaud, Michael L.; ...

    2016-10-11

    The task of monitoring the Earth for nuclear explosions relies heavily on seismic data to detect, locate, and characterize suspected nuclear tests. In this study, motivated by the need to locate suspected explosions as accurately and precisely as possible, we developed a tomographic model of the compressional wave slowness in the Earth’s mantle with primary focus on the accuracy and precision of travel-time predictions for P and Pn ray paths through the model. Path-dependent travel-time prediction uncertainties are obtained by computing the full 3D model covariance matrix and then integrating slowness variance and covariance along ray paths from source to receiver. Path-dependent travel-time prediction uncertainties reflect the amount of seismic data that was used in tomography with very low values for paths represented by abundant data in the tomographic data set and very high values for paths through portions of the model that were poorly sampled by the tomography data set. The pattern of travel-time prediction uncertainty is a direct result of the off-diagonal terms of the model covariance matrix and underscores the importance of incorporating the full model covariance matrix in the determination of travel-time prediction uncertainty. In addition, the computed pattern of uncertainty differs significantly from that of 1D distance-dependent travel-time uncertainties computed using traditional methods, which are only appropriate for use with travel times computed through 1D velocity models.

  8. Interlaboratory study of the ion source memory effect in 36Cl accelerator mass spectrometry

    NASA Astrophysics Data System (ADS)

    Pavetich, Stefan; Akhmadaliev, Shavkat; Arnold, Maurice; Aumaître, Georges; Bourlès, Didier; Buchriegler, Josef; Golser, Robin; Keddadouche, Karim; Martschini, Martin; Merchel, Silke; Rugel, Georg; Steier, Peter

    2014-06-01

    Understanding and minimization of contaminations in the ion source due to cross-contamination and long-term memory effect is one of the key issues for accurate accelerator mass spectrometry (AMS) measurements of volatile elements. The focus of this work is on the investigation of the long-term memory effect for the volatile element chlorine, and the minimization of this effect in the ion source of the Dresden accelerator mass spectrometry facility (DREAMS). For this purpose, one of the two original HVE ion sources at the DREAMS facility was modified, allowing the use of larger sample holders having individual target apertures. Additionally, a more open geometry was used to improve the vacuum level. To evaluate this improvement in comparison to other up-to-date ion sources, an interlaboratory comparison was initiated. The long-term memory effect of the four Cs sputter ion sources at DREAMS (two sources: original and modified), ASTER (Accélérateur pour les Sciences de la Terre, Environnement, Risques) and VERA (Vienna Environmental Research Accelerator) was investigated by measuring samples of natural 35Cl/37Cl ratio and samples highly enriched in 35Cl (35Cl/37Cl ∼ 999). Besides investigating and comparing the individual levels of long-term memory, recovery time constants could be calculated. The tests show that all four sources suffer from long-term memory, but the modified DREAMS ion source showed the lowest level of contamination. The recovery times of the four ion sources were widely spread between 61 and 1390 s, with the modified DREAMS ion source, at values between 156 and 262 s, showing the fastest recovery in 80% of the measurements.

  9. The development of the time dependence of the nuclear EMP electric field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eng, C

    The nuclear electromagnetic pulse (EMP) electric field calculated with the legacy code CHAP is compared with the field given by an integral solution of Maxwell's equations, also known as the Jefimenko equation, to aid our current understanding of the factors that affect the time dependence of the EMP. For a fair comparison the CHAP current density is used as a source in the Jefimenko equation. At first, the comparison is simplified by neglecting the conduction current and replacing the standard atmosphere with a constant density air slab. The simplicity of the resultant current density aids in determining the factors that affect the rise, peak and tail of the EMP electric field versus time. The three dimensional nature of the radiating source, i.e. sources off the line-of-sight, and the time dependence of the derivative of the current density with respect to time are found to play significant roles in shaping the EMP electric field time dependence. These results are found to hold even when the conduction current and the standard atmosphere are properly accounted for. Comparison of the CHAP electric field with the Jefimenko electric field offers a direct validation of the high-frequency/outgoing wave approximation.
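For reference, the integral solution mentioned above is Jefimenko's equation for the electric field (SI units, standard textbook form), with R = |r − r′|, unit vector R̂, and retarded time t_r:

```latex
\mathbf{E}(\mathbf{r},t) = \frac{1}{4\pi\varepsilon_0} \int
\left[
  \frac{\rho(\mathbf{r}',t_r)}{R^2}\,\hat{\mathbf{R}}
  + \frac{1}{cR}\,\frac{\partial \rho(\mathbf{r}',t_r)}{\partial t}\,\hat{\mathbf{R}}
  - \frac{1}{c^2 R}\,\frac{\partial \mathbf{J}(\mathbf{r}',t_r)}{\partial t}
\right] \mathrm{d}^3 r',
\qquad t_r = t - \frac{R}{c}
```

The last term, which depends on the time derivative of the current density, is the radiative contribution; this is why the abstract singles out the time dependence of ∂J/∂t as shaping the EMP waveform.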

  10. New energy Era: Short Term and Long Term.

    ERIC Educational Resources Information Center

    Beckwith, Robert

    This paper examines the causes and effects of the 1973 oil embargo imposed by OPEC. The author notes that since the embargo, little positive action has been taken to reduce American dependence upon a very limited and very expensive energy source. In order to achieve any degree of independence, it will be necessary to rapidly expand coal and…

  11. Sensitivity of new detection method for ultra-low frequency gravitational waves with pulsar spin-down rate statistics

    NASA Astrophysics Data System (ADS)

    Yonemaru, Naoyuki; Kumamoto, Hiroki; Takahashi, Keitaro; Kuroyanagi, Sachiko

    2018-04-01

    A new detection method for ultra-low frequency gravitational waves (GWs) with a frequency much lower than the observational range of pulsar timing arrays (PTAs) was suggested in Yonemaru et al. (2016). In the PTA analysis, ultra-low frequency GWs (≲ 10^{-10} Hz) which evolve just linearly during the observation time span are absorbed into the pulsar spin-down rates, since both have the same effect on the pulse arrival time. Therefore, such GWs cannot be detected by the conventional method of PTAs. However, the bias on the observed spin-down rates depends on the relative direction of a pulsar and the GW source and shows a quadrupole pattern in the sky. Thus, if we divide the pulsars according to their position in the sky and examine the difference in the statistics of the spin-down rates, ultra-low frequency GWs from a single source can be detected. In this paper, we evaluate the potential of this method by Monte-Carlo simulations and estimate the sensitivity, considering only the "Earth term", while the "pulsar term" acts like random noise for GW frequencies of 10^{-13} - 10^{-10} Hz. We find that with 3,000 millisecond pulsars, which are expected to be discovered by a future survey with the Square Kilometre Array, GWs with an amplitude derivative of about 3 × 10^{-19} s^{-1} can in principle be detected. Implications for possible supermassive binary black holes in Sgr A* and M87 are also given.

  12. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  13. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  14. The Funding of Long-Term Care in Canada: What Do We Know, What Should We Know?

    PubMed

    Grignon, Michel; Spencer, Byron G

    2018-06-01

    Long-term care is a growing component of health care spending, but how much is spent and who bears the cost are uncertain, and the measures vary depending on the source used. We drew on regularly published series and ad hoc publications to compile preferred estimates of the share of long-term care spending in total health care spending, the private share of long-term care spending, and the share of residential care within long-term care. For each series, we compared estimates obtainable from published sources (CIHI [Canadian Institute for Health Information] and OECD [Organization for Economic Cooperation and Development]) with our preferred estimates. We conclude that using published series without adjustment would lead to spurious conclusions on the level and evolution of spending on long-term care in Canada, as well as on the distribution of costs between private and public funders and between residential and home care.

  15. Gravitational perturbations and metric reconstruction: Method of extended homogeneous solutions applied to eccentric orbits on a Schwarzschild black hole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopper, Seth; Evans, Charles R.

    2010-10-15

    We calculate the gravitational perturbations produced by a small mass in eccentric orbit about a much more massive Schwarzschild black hole and use the numerically computed perturbations to solve for the metric. The calculations are initially made in the frequency domain and provide Fourier-harmonic modes for the gauge-invariant master functions that satisfy inhomogeneous versions of the Regge-Wheeler and Zerilli equations. These gravitational master equations have specific singular sources containing both delta function and derivative-of-delta function terms. We demonstrate in this paper successful application of the method of extended homogeneous solutions, developed recently by Barack, Ori, and Sago, to handle source terms of this type. The method allows transformation back to the time domain, with exponential convergence of the partial mode sums that represent the field. This rapid convergence holds even in the region of r traversed by the point mass and includes the time-dependent location of the point mass itself. We present numerical results of mode calculations for certain orbital parameters, including highly accurate energy and angular momentum fluxes at infinity and at the black hole event horizon. We then address the issue of reconstructing the metric perturbation amplitudes from the master functions, the latter being weak solutions of a particular form to the wave equations. The spherical harmonic amplitudes that represent the metric in Regge-Wheeler gauge can themselves be viewed as weak solutions. They are in general a combination of (1) two differentiable solutions that adjoin at the instantaneous location of the point mass (a result that has order of continuity C^{-1} typically) and (2) (in some cases) a delta function distribution term with a computable time-dependent amplitude.

  16. Bubble dynamics in viscoelastic soft tissue in high-intensity focal ultrasound thermal therapy.

    PubMed

    Zilonova, E; Solovchuk, M; Sheu, T W H

    2018-01-01

    The present study aims to investigate bubble dynamics in a soft tissue to which HIFU's continuous harmonic pulse is applied, by introducing a viscoelastic cavitation model. After a comparison of some existing cavitation models, we decided to employ the Gilmore-Akulichev model. This cavitation model is coupled with the Zener viscoelastic model in order to simulate soft tissue features such as elasticity and relaxation time. The proposed Gilmore-Akulichev-Zener model was investigated for exploring cavitation dynamics. The parametric study led us to the conclusion that elasticity and viscosity both damp bubble oscillations, whereas the relaxation effect depends mainly on the period of the ultrasound wave. A similar influence of elasticity, viscosity and relaxation time on the temperature inside the bubble can be observed. Cavitation heat source terms (corresponding to viscous damping and the pressure wave radiated by bubble collapse) were obtained based on the proposed model to examine the significance of cavitation during the treatment process. Their maximum values both dominate the acoustic ultrasound term in HIFU applications. Elasticity was found to damp a certain amount of deposited heat for both cavitation terms. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Gas-phase naphthalene concentration data recovery in ambient air and its relevance as a tracer of sources of volatile organic compounds

    NASA Astrophysics Data System (ADS)

    Uria-Tellaetxe, Iratxe; Navazo, Marino; de Blas, Maite; Durana, Nieves; Alonso, Lucio; Iza, Jon

    2016-04-01

Despite the toxicity of naphthalene and the fact that it is a precursor of atmospheric photooxidants and secondary aerosol, studies on ambient gas-phase naphthalene are generally scarce. Moreover, to our knowledge, this is the first published study using long-term hourly ambient gas-phase naphthalene concentrations. This work also demonstrates the usefulness of ambient gas-phase naphthalene for identifying major sources of volatile organic compounds (VOC) in complex scenarios. Initially, in order to identify the main benzene emission sources, hourly ambient measurements of 60 VOC were taken over a complete year, together with meteorological data, in an urban/industrial area. Later, owing to the observed co-linearity of some of the emissions, a procedure was developed to recover naphthalene concentration data from recorded chromatograms for use as a tracer of the combustion and distillation of petroleum products. The characteristic retention time of this compound was determined by comparing previous simultaneous GC-MS and GC-FID analyses by means of relative retention times, and its concentration was calculated using relative response factors. The obtained naphthalene concentrations correlated fairly well with ethene (r = 0.86) and benzene (r = 0.92). Moreover, the analysis of daily time series showed that these compounds followed a similar pattern, very different from that of other VOC, with minimum concentrations at day-time. This, together with the results of the assessment of meteorological dependence, pointed to a coke oven as the major naphthalene and benzene emission source in the study area.
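The relative-response-factor quantification mentioned above can be sketched in a few lines. The peak areas, the reference concentration, and the RRF value below are all hypothetical numbers chosen for illustration, not data from the study:

```python
def concentration_from_rrf(area_analyte, area_ref, conc_ref, rrf):
    # Quantify an analyte from chromatogram peak areas when no direct
    # calibration exists: the reference compound's response factor
    # (area per unit concentration) is scaled by the analyte's relative
    # response factor, RRF = RF_analyte / RF_ref.
    rf_ref = area_ref / conc_ref      # reference response factor
    rf_analyte = rrf * rf_ref         # analyte response factor via the RRF
    return area_analyte / rf_analyte

# Example: a calibrated reference compound, with naphthalene quantified via its RRF.
conc = concentration_from_rrf(area_analyte=1500.0, area_ref=4000.0,
                              conc_ref=2.0, rrf=0.75)   # -> 1.0 (same units as conc_ref)
```

The same pattern extends to any co-eluting compound once its relative retention time pins down the correct peak.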

  18. Feasibility of Active Monitoring for Plate Coupling Using ACROSS

    NASA Astrophysics Data System (ADS)

    Yamaoka, K.; Watanabe, T.; Ikuta, R.

    2004-12-01

The detectability of temporal changes in waves reflected from the boundary of the subducting plate in the Tokai district using active sources is studied. Rock experiments indicate that changes in the intensity of the reflected wave can be caused by changes in coupling between the subducting and overriding plates. ACROSS (Accurately-Controlled Routine-Operated Signal System), which consists of sinusoidal vibration sources and receivers, has been proved to provide data of excellent signal resolution. The following technical issues must be overcome to monitor the signal returned from the boundaries of subducting plates: (1) long-term operation of the source, (2) detection of temporal change, and (3) accurate estimation of the source functions and their temporal change. The first two issues have already been overcome. We have successfully completed a long-term operation experiment with the ACROSS system in Awaji, Japan. The operation was carried out for 15 months with only minor troubles, and continuous signals were obtained throughout the experiment. In the experiment we developed a technique to monitor the temporal change of travel time with a resolution of several tens of microseconds. The third issue is one of the most difficult problems for practical monitoring using artificial sources. In the 15-month experiment we corrected the source function using the records of seismometers deployed around the source. We also estimated the efficiency of reflected-wave detection using the ACROSS system, using data from a seismic exploration experiment with blasts carried out above the subducting plate in the Tokai district. A clear reflection from the surface of the Philippine Sea plate is observed in the waveform. Assuming that an ACROSS source is installed at the location of the blast source, the detectability of temporal variation of the reflected wave can be estimated.
Because we have measured the variation of signal amplitude with distance from an ACROSS source, the ground noise at seismic stations (receivers) provides the signal-to-noise ratio for the ACROSS signal, and the resolution can be estimated from the signal-to-noise ratio alone. We surveyed the noise level at locations where the reflection from the boundary of the subducting Philippine Sea Plate can be detected. The results show that the resolution will be better than 1% in amplitude and 0.1 millisecond in travel time for one week of stacking using a three-unit source and ten-element receiver arrays.
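The resolution-from-stacking argument above follows from the usual square-root law: stacking N repeated records suppresses incoherent noise by a factor of √N. A minimal sketch, with the per-record SNR and target value chosen as hypothetical illustrations:

```python
import math

def stacks_needed(snr_single, target_resolution):
    # Number of stacked records needed so that the amplitude resolution
    # (noise / signal after stacking) reaches the target, assuming
    # incoherent noise that averages down as 1/sqrt(N).
    # After N stacks: resolution = 1 / (snr_single * sqrt(N)).
    n = (1.0 / (target_resolution * snr_single)) ** 2
    return math.ceil(n)

# e.g. a per-record SNR of 5 needs 400 stacks to resolve a 1% amplitude change
n_stacks = stacks_needed(snr_single=5.0, target_resolution=0.01)   # -> 400
```

For a continuously transmitting source like ACROSS, the stack count translates directly into the required observation time, which is why one week of stacking appears as the unit above.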

  19. Comparing the contributions of ionospheric outflow and high-altitude production to O+ loss at Mars

    NASA Astrophysics Data System (ADS)

    Liemohn, Michael; Curry, Shannon; Fang, Xiaohua; Johnson, Blake; Fraenz, Markus; Ma, Yingjuan

    2013-04-01

The Mars total O+ escape rate is highly dependent on both the ionospheric and high-altitude source terms. Because of their different source locations, they appear in velocity space distributions as distinct populations. The Mars Test Particle model is used (with background parameters from the BATS-R-US magnetohydrodynamic code) to simulate the transport of ions in the near-Mars space environment. Because it is a collisionless model, the MTP's inner boundary is placed at 300 km altitude for this study. The MHD values at this altitude are used to define an ionospheric outflow source of ions for the MTP. The resulting loss distributions (in both real and velocity space) from this ionospheric source term are compared against those from high-altitude ionization mechanisms, in particular photoionization, charge exchange, and electron impact ionization, each of which has its own (albeit overlapping) source region. In subsequent simulations, the MHD values defining the ionospheric outflow are systematically varied to parametrically explore possible ionospheric outflow scenarios. For the nominal MHD ionospheric outflow settings, this source contributes only 10% to the total O+ loss rate, nearly all via the central tail region. There is very little dependence of this percentage on the initial temperature, but a change in the initial density or bulk velocity directly alters this loss through the central tail. However, a density or bulk velocity increase of a factor of 10 makes the ionospheric outflow loss comparable in magnitude to the loss from the combined high-altitude sources. The spatial and velocity space distributions of escaping O+ are examined and compared for the various source terms, identifying features specific to each ion source mechanism. These results are applied to a specific Mars Express orbit and used to interpret high-altitude observations from the ion mass analyzer onboard MEX.

  20. A finite-volume ELLAM for three-dimensional solute-transport modeling

    USGS Publications Warehouse

    Russell, T.F.; Heberton, C.I.; Konikow, Leonard F.; Hornberger, G.Z.

    2003-01-01

    A three-dimensional finite-volume ELLAM method has been developed, tested, and successfully implemented as part of the U.S. Geological Survey (USGS) MODFLOW-2000 ground water modeling package. It is included as a solver option for the Ground Water Transport process. The FVELLAM uses space-time finite volumes oriented along the streamlines of the flow field to solve an integral form of the solute-transport equation, thus combining local and global mass conservation with the advantages of Eulerian-Lagrangian characteristic methods. The USGS FVELLAM code simulates solute transport in flowing ground water for a single dissolved solute constituent and represents the processes of advective transport, hydrodynamic dispersion, mixing from fluid sources, retardation, and decay. Implicit time discretization of the dispersive and source/sink terms is combined with a Lagrangian treatment of advection, in which forward tracking moves mass to the new time level, distributing mass among destination cells using approximate indicator functions. This allows the use of large transport time increments (large Courant numbers) with accurate results, even for advection-dominated systems (large Peclet numbers). Four test cases, including comparisons with analytical solutions and benchmarking against other numerical codes, are presented that indicate that the FVELLAM can usually yield excellent results, even if relatively few transport time steps are used, although the quality of the results is problem-dependent.

  1. Comparing Time-Dependent Geomagnetic and Atmospheric Effects on Cosmogenic Nuclide Production Rate Scaling

    NASA Astrophysics Data System (ADS)

    Lifton, N. A.

    2014-12-01

A recently published cosmogenic nuclide production rate scaling model based on analytical fits to Monte Carlo simulations of atmospheric cosmic ray flux spectra (both of which agree well with measured spectra) (Lifton et al., 2014, Earth Planet. Sci. Lett. 386, 149-160: termed the LSD model) provides two main advantages over previous scaling models: identification and quantification of potential sources of bias in the earlier models, and the ability to generate nuclide-specific scaling factors easily for a wide range of input parameters. The new model also provides a flexible framework for exploring the implications of advances in model inputs. In this work, the scaling implications of two recent time-dependent spherical harmonic geomagnetic models spanning the Holocene will be explored. Korte and Constable (2011, Phys. Earth Planet. Int. 188, 247-259) and Korte et al. (2011, Earth Planet. Sci. Lett. 312, 497-505) recently updated earlier spherical harmonic paleomagnetic models used by Lifton et al. (2014) with paleomagnetic measurements from sediment cores in addition to archeomagnetic and volcanic data. These updated models offer improved accuracy over the previous versions, in part due to increased temporal and spatial data coverage. With the new models as input, trajectory-traced estimates of effective vertical cutoff rigidity (RC, the standard method for ordering cosmic-ray data) yield significantly different time-integrated scaling predictions when compared to the earlier models. These results will be compared to scaling predictions using another recent time-dependent spherical harmonic model of the Holocene geomagnetic field by Pavón-Carrasco et al. (2014, Earth Planet. Sci. Lett. 388, 98-109), based solely on archeomagnetic and volcanic paleomagnetic data, but extending to 14 ka. In addition, the potential effects of time-dependent atmospheric models on LSD scaling predictions will be presented.
Given the typical dominance of altitudinal over latitudinal scaling effects on cosmogenic nuclide production, incorporating transient global simulations of atmospheric structure (e.g., Liu et al., 2009, Science 325, 310-314) into scaling frameworks may contribute to improved understanding of long-term production rate variations.

  2. Amplification of terahertz pulses in gases beyond thermodynamic equilibrium

    NASA Astrophysics Data System (ADS)

    Schwaab, G. W.; Schroeck, K.; Havenith, M.

    2007-03-01

In Ebbinghaus et al. [Plasma Sources Sci. Technol. 15, 72 (2006)], where we reported terahertz time-domain spectroscopy of a low-pressure plasma, we observed a simultaneous absorption and amplification process within each single rotational transition. Here we show that this observation is a direct consequence of the short interaction time of the pulsed terahertz radiation with the plasma, which is shorter than the average collision time between the molecules. Thus, during the measurement time the molecular states may be considered entangled. Solution of the time-dependent Schrödinger equation yields a linear term that may be neglected for long observation times, large frequencies, or nonentangled states. We determine the restrictions for the observation of this effect and calculate the spectrum of a simple diatomic molecule. Using this model we are able to explain the spectral features showing a change from emission to absorption as observed previously. In addition we find that the amplification and absorption do not follow the typical Lambert-Beer exponential law but an approximate square law.

  3. OXIDATION OF INCONEL 718 IN AIR AT TEMPERATURES FROM 973K TO 1620K.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    GREENE,G.A.; FINFROCK,C.C.

    2000-10-01

As part of the APT project, it was necessary to quantify the release of tungsten from the APT spallation target during postulated accident conditions in order to develop accident source terms for accident consequence characterization. Experiments with tungsten rods at high temperatures in a flowing steam environment characteristic of postulated accidents revealed that considerable vaporization of the tungsten occurred as a result of reactions with the steam and that the aerosols which formed were readily transported away from the tungsten surfaces, thus exposing fresh tungsten to react with more steam. The resulting tungsten release fractions and source terms were undesirable, and it was decided to clad the tungsten target with Inconel 718 in order to protect it from contact with steam during an accident and mitigate the accident source term and the consequences. As part of the material selection criteria, experiments were conducted with Inconel 718 at high temperatures to evaluate the rate of oxidation of the proposed clad material over as wide a temperature range as possible, as well as to determine the high-temperature failure limit of the material. Samples of Inconel 718 were inserted into a preheated furnace at temperatures ranging from 973 K to 1620 K and oxidized in air for varying periods of time. After oxidizing in air at a constant temperature for the prescribed time and then being allowed to cool, the samples would be reweighed to determine their weight gain due to the uptake of oxygen.
From these weight gain measurements, it was possible to identify three regimes of oxidation for Inconel 718: a low-temperature regime in which the samples became passivated after the initial oxidation, an intermediate-temperature regime in which the rate of oxidation was limited by diffusion and exhibited a constant parabolic rate dependence, and a high-temperature regime in which material deformation and damage accompanied an accelerated oxidation rate above the parabolic regime. At temperatures below 1173 K, the rate of oxidation of the Inconel 718 surface was found to decrease markedly with time; the parabolic oxidation rate coefficient was not a constant but decreased with time. This was taken to indicate that the oxide film on the surface was having a passivating effect on oxygen transport through the oxide to the underlying metal. For temperatures in the range 1173 K to 1573 K, the time-dependent rate of oxidation as determined once again by weight-gain measurements was found to display the classical parabolic rate behavior, indicating that the rate of transport of reactants through the oxide was controlled by diffusion through the growing oxide layer. Parabolic rate coefficients were determined by least-squares analysis of time-dependent mass-gain data at 1173 K, 1273 K, 1373 K, 1473 K and 1573 K. At temperatures above 1540 K, post test examination of the oxidized samples revealed that the Inconel 718 began to lose strength and to deform. At 1540 K, samples which were suspended from their ends during testing began to demonstrate axial curvature as they lost strength and bowed under their own weight. As the temperatures of the tests were increased, rivulets were seen to appear on the surfaces of the test specimens; damage became severe at 1560 K. Although melting was never observed in any of these tests, even up to 1620 K, it was concluded from these data that the Inconel 718 clad should not be expected to protect the underlying tungsten at temperatures above 1540 K.
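The least-squares determination of parabolic rate coefficients described above amounts to fitting w² = kp·t through the origin. A minimal sketch with synthetic data; the kp value and units are hypothetical, not the measured Inconel 718 coefficients:

```python
def fit_parabolic_rate(times, weight_gains):
    # Least-squares estimate of the parabolic rate constant kp in
    # w^2 = kp * t, fitted through the origin: kp = sum(t * w^2) / sum(t^2).
    num = sum(t * w * w for t, w in zip(times, weight_gains))
    den = sum(t * t for t in times)
    return num / den

# Synthetic mass-gain data generated with kp = 4.0 (hypothetical units).
times = [1.0, 4.0, 9.0, 16.0]
gains = [(4.0 * t) ** 0.5 for t in times]   # w = sqrt(kp * t)
kp = fit_parabolic_rate(times, gains)        # recovers 4.0
```

A departure of the fitted kp from constancy across time windows is exactly the signature used above to separate the passivating and accelerated regimes from the diffusion-controlled one.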

  4. Local spectrum analysis of field propagation in an anisotropic medium. Part II. Time-dependent fields.

    PubMed

    Tinkelman, Igor; Melamed, Timor

    2005-06-01

    In Part I of this two-part investigation [J. Opt. Soc. Am. A 22, 1200 (2005)], we presented a theory for phase-space propagation of time-harmonic electromagnetic fields in an anisotropic medium characterized by a generic wave-number profile. In this Part II, these investigations are extended to transient fields, setting a general analytical framework for local analysis and modeling of radiation from time-dependent extended-source distributions. In this formulation the field is expressed as a superposition of pulsed-beam propagators that emanate from all space-time points in the source domain and in all directions. Using time-dependent quadratic-Lorentzian windows, we represent the field by a phase-space spectral distribution in which the propagating elements are pulsed beams, which are formulated by a transient plane-wave spectrum over the extended-source plane. By applying saddle-point asymptotics, we extract the beam phenomenology in the anisotropic environment resulting from short-pulsed processing. Finally, the general results are applied to the special case of uniaxial crystal and compared with a reference solution.

  5. Covered interest parity arbitrage and temporal long-term dependence between the US dollar and the Yen

    NASA Astrophysics Data System (ADS)

    Batten, Jonathan A.; Szilagyi, Peter G.

    2007-03-01

    Using a daily time series from 1983 to 2005 of currency prices in spot and forward USD/Yen markets and matching equivalent maturity short-term US and Japanese interest rates, we investigate the sensitivity of the difference between actual prices in forward markets to those calculated from differentials in short-term interest rates. According to a fundamental theorem in financial economics termed covered interest parity (CIP), the actual and estimated prices should be identical once transaction and other costs are accommodated. The paper presents three important findings: first, we find evidence of considerable variation in CIP deviations from equilibrium; second, these deviations have diminished significantly and by 2000 have been almost eliminated; third, an analysis of the CIP deviations using the local Hurst exponent finds episodes of time-varying dependence over the various sample periods, which appear to be linked to episodes of dollar decline/Yen appreciation, or vice versa. The finding of temporal long-term dependence in CIP deviations is consistent with recent evidence of temporal long-term dependence in the returns of currency, stock and commodity markets.
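The CIP comparison described above reduces to computing the forward price implied by the interest differential and differencing it against the quoted forward. A sketch under a simple-interest convention, with all rates and prices hypothetical (not the paper's data):

```python
def cip_forward(spot, r_domestic, r_foreign, tau):
    # Forward rate implied by covered interest parity, simple-interest
    # convention; quote is domestic currency per unit of foreign currency,
    # tau in years.
    return spot * (1.0 + r_domestic * tau) / (1.0 + r_foreign * tau)

def cip_deviation(observed_forward, spot, r_domestic, r_foreign, tau):
    # Deviation of the observed forward from its CIP-implied value; a
    # persistent nonzero value beyond transaction costs signals arbitrage.
    return observed_forward - cip_forward(spot, r_domestic, r_foreign, tau)

# Hypothetical yen-per-dollar quote: 1y yen rate 1%, 1y dollar rate 5%.
f_implied = cip_forward(spot=120.0, r_domestic=0.01, r_foreign=0.05, tau=1.0)
dev = cip_deviation(115.5, 120.0, 0.01, 0.05, 1.0)
```

The daily series of such deviations is the raw material for the local Hurst exponent analysis of long-term dependence.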

  6. Effects of topography and crustal heterogeneities on the source estimation of LP event at Kilauea volcano

    USGS Publications Warehouse

    Cesca, S.; Battaglia, J.; Dahm, T.; Tessmer, E.; Heimann, S.; Okubo, P.

    2008-01-01

The main goal of this study is to improve the modelling of the source mechanism associated with the generation of long period (LP) signals in volcanic areas. Our intent is to evaluate the effects that detailed structural features of the volcanic models play in the generation of LP signal and the consequent retrieval of LP source characteristics. In particular, effects associated with the presence of topography and crustal heterogeneities are here studied in detail. We focus our study on a LP event observed at Kilauea volcano, Hawaii, in 2001 May. A detailed analysis of this event and its source modelling is accompanied by a set of synthetic tests, which aim to evaluate the effects of topography and the presence of low velocity shallow layers in the source region. The forward problem of Green's function generation is solved numerically following a pseudo-spectral approach, assuming different 3-D models. The inversion is done in the frequency domain and the resulting source mechanism is represented by the sum of two time-dependent terms: a full moment tensor and a single force. Synthetic tests show how characteristic velocity structures, associated with shallow sources, may be partially responsible for the generation of the observed long-lasting ringing waveforms. When applying the inversion technique to the Kilauea LP data set, inversions carried out for different crustal models led to very similar source geometries, indicating subhorizontal cracks. On the other hand, the source time function and its duration are significantly different for different models. These results support the indication of a strong influence of crustal layering on the generation of the LP signal, while the assumption of a homogeneous velocity model may lead to misleading results. © 2008 The Authors. Journal compilation © 2008 RAS.

  7. Medication development of ibogaine as a pharmacotherapy for drug dependence.

    PubMed

    Mash, D C; Kovera, C A; Buck, B E; Norenberg, M D; Shapshak, P; Hearn, W L; Sanchez-Ramos, J

    1998-05-30

The potential for deriving new psychotherapeutic medications from natural sources has led to renewed interest in rain forest plants as a source of lead compounds for the development of antiaddiction medications. Ibogaine is an indole alkaloid found in the roots of Tabernanthe iboga (Apocynaceae family), a rain forest shrub that is native to equatorial Africa. Ibogaine is used by indigenous peoples in low doses to combat fatigue and hunger, and in higher doses as a sacrament in religious rituals. Members of American and European addict self-help groups have claimed that ibogaine promotes long-term drug abstinence from addictive substances, including psychostimulants and cocaine. Anecdotal reports attest that a single dose of ibogaine eliminates withdrawal symptoms and reduces drug cravings for extended periods of time. The purported antiaddictive properties of ibogaine require rigorous validation in humans. We have initiated a rising-tolerance study using single administrations to assess the safety of ibogaine for the treatment of cocaine dependency. The primary objectives of the study are to determine safety, pharmacokinetics and dose effects, and to identify relevant parameters of efficacy in cocaine-dependent patients. Pharmacokinetic and pharmacodynamic characteristics of ibogaine in humans are assessed by analyzing the concentration-time data of ibogaine and its desmethyl metabolite (noribogaine) from the Phase I trial, and by conducting in vitro experiments to elucidate the specific disposition processes involved in the metabolism of both parent drug and metabolite. The development of clinical safety studies of ibogaine in humans will help to determine whether there is a rationale for conducting efficacy trials in the future.

  8. Propagators for the Time-Dependent Kohn-Sham Equations: Multistep, Runge-Kutta, Exponential Runge-Kutta, and Commutator Free Magnus Methods.

    PubMed

    Gómez Pueyo, Adrián; Marques, Miguel A L; Rubio, Angel; Castro, Alberto

    2018-05-09

We examine various integration schemes for the time-dependent Kohn-Sham equations. Contrary to the time-dependent Schrödinger equation, this set of equations is nonlinear, due to the dependence of the Hamiltonian on the electronic density. We discuss some of their exact properties, and in particular their symplectic structure. Four different families of propagators are considered, specifically the linear multistep, Runge-Kutta, exponential Runge-Kutta, and the commutator-free Magnus schemes. These have been chosen because they have been largely ignored in the past for time-dependent electronic structure calculations. The performance is analyzed in terms of cost-versus-accuracy. The clear winner, in terms of robustness, simplicity, and efficiency is a simplified version of a fourth-order commutator-free Magnus integrator. However, in some specific cases, other propagators, such as some implicit versions of the multistep methods, may be useful.
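The building block of the commutator-free Magnus family can be sketched on a linear toy problem. Below is the second-order member (the exponential midpoint rule) applied to a hypothetical two-level Hamiltonian; the actual Kohn-Sham case is nonlinear in the density and the paper's winner is a fourth-order variant, neither of which this sketch attempts:

```python
import numpy as np

def expmidpoint_step(psi, H_mid, dt):
    # One exponential-midpoint step psi <- exp(-i * dt * H(t + dt/2)) psi,
    # the second-order member of the commutator-free Magnus family.  The
    # exponential of the Hermitian H_mid is built by eigendecomposition,
    # so the step is unitary to machine precision.
    w, V = np.linalg.eigh(H_mid)
    U = (V * np.exp(-1j * dt * w)) @ V.conj().T
    return U @ psi

# Illustrative two-level system with a time-dependent off-diagonal drive.
def H(t):
    return np.array([[1.0, 0.3 * np.cos(t)],
                     [0.3 * np.cos(t), -1.0]], dtype=complex)

dt = 0.01
psi = np.array([1.0, 0.0], dtype=complex)
for n in range(1000):
    psi = expmidpoint_step(psi, H((n + 0.5) * dt), dt)
norm = float(np.linalg.norm(psi))   # unitary propagation preserves the norm
```

Norm conservation is the discrete analogue of the structure-preservation properties the paper uses to compare propagator families.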

  9. Viscoelastic modeling of deformation and gravity changes induced by pressurized magmatic sources

    NASA Astrophysics Data System (ADS)

    Currenti, Gilda

    2018-05-01

    Gravity and height changes, which reflect magma accumulation in subsurface chambers, are evaluated using analytical and numerical models in order to investigate their relationships and temporal evolutions. The analysis focuses mainly on the exploration of the time-dependent response of gravity and height changes to the pressurization of ellipsoidal magmatic chambers in viscoelastic media. Firstly, the validation of the numerical Finite Element results is performed by comparison with analytical solutions, which are devised for a simple spherical source embedded in a homogeneous viscoelastic half-space medium. Then, the effect of several model parameters on time-dependent height and gravity changes is investigated thanks to the flexibility of the numerical method in handling complex configurations. Both homogeneous and viscoelastic shell models reveal significantly different amplitudes in the ratio between gravity and height changes depending on geometry factors and medium rheology. The results show that these factors also influence the relaxation characteristic times of the investigated geophysical changes. Overall, these temporal patterns are compatible with time-dependent height and gravity changes observed on Etna volcano during the 1994-1997 inflation period. By modeling the viscoelastic response of a pressurized prolate magmatic source, a general agreement between computed and observed geophysical variations is achieved.
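Before adding viscoelasticity, the elastic point-source (Mogi-type) case provides the baseline against which the paper's time-dependent ratios are measured: for an intruded point mass, the ratio of gravity change to uplift is spatially constant. The sketch below uses a common (1-ν) form of the Mogi uplift, omits the free-air and deformation terms of the gravity change for simplicity, and takes purely hypothetical source values:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
NU = 0.25            # Poisson ratio of the elastic half-space

def uplift(r, depth, dV):
    # Mogi-type surface uplift for a point pressure source with volume
    # change dV at the given depth in an elastic half-space.
    R = math.hypot(r, depth)
    return (1.0 - NU) * dV * depth / (math.pi * R ** 3)

def gravity_attraction(r, depth, dM):
    # Vertical attraction of the intruded point mass dM (free-air and
    # deformation contributions omitted for simplicity).
    R = math.hypot(r, depth)
    return G * dM * depth / R ** 3

# For this idealized elastic case the ratio dg/dh is the same at every
# radial distance -- the baseline the viscoelastic models depart from.
depth, dV, dM = 3000.0, 1e6, 2.7e9      # hypothetical source parameters
ratios = [gravity_attraction(r, depth, dM) / uplift(r, depth, dV)
          for r in (0.0, 1000.0, 5000.0)]
```

Any spatial or temporal variation of the observed Δg/Δh beyond this constant is then attributable to source geometry and medium rheology, which is the diagnostic the abstract exploits.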

  10. An asymptotic-preserving Lagrangian algorithm for the time-dependent anisotropic heat transport equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis; del-Castillo-Negrete, Diego; Hauck, Cory D.

    2014-09-01

We propose a Lagrangian numerical algorithm for a time-dependent, anisotropic temperature transport equation in magnetized plasmas in the large guide field regime. The approach is based on an analytical integral formal solution of the parallel (i.e., along the magnetic field) transport equation with sources, and it is able to accommodate both local and non-local parallel heat flux closures. The numerical implementation is based on an operator-split formulation, with two straightforward steps: a perpendicular transport step (including sources), and a Lagrangian (field-line integral) parallel transport step. Algorithmically, the first step is amenable to the use of modern iterative methods, while the second step has a fixed cost per degree of freedom (and is therefore scalable). Accuracy-wise, the approach is free from the numerical pollution introduced by the discrete parallel transport term when the perpendicular-to-parallel transport coefficient ratio X⊥/X∥ becomes arbitrarily small, and is shown to capture the correct limiting solution when ε = X⊥L∥²/(X∥L⊥²) → 0 (with L∥ and L⊥ the parallel and perpendicular diffusion length scales, respectively). Therefore, the approach is asymptotic-preserving. We demonstrate the capabilities of the scheme with several numerical experiments with varying magnetic field complexity in two dimensions, including the case of transport across a magnetic island.
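The operator-split structure (a parallel sweep followed by a perpendicular sweep) can be illustrated with a deliberately simple toy: explicit Lie splitting of anisotropic diffusion on a periodic grid with the field along x. This shows only the split structure and the X∥ ≫ X⊥ anisotropy, not the paper's Lagrangian field-line integration or asymptotic-preserving closure; all coefficients are illustrative:

```python
def lie_split_diffusion(T, chi_par, chi_perp, dt, steps):
    # Toy Lie operator splitting for dT/dt = chi_par*T_xx + chi_perp*T_yy
    # on an n x n periodic grid (field along x, grid spacing 1): each step
    # applies the parallel operator, then the perpendicular one.
    n = len(T)
    for _ in range(steps):
        # parallel (x) sweep
        T = [[T[i][j] + dt * chi_par *
              (T[i][(j + 1) % n] - 2 * T[i][j] + T[i][(j - 1) % n])
              for j in range(n)] for i in range(n)]
        # perpendicular (y) sweep
        T = [[T[i][j] + dt * chi_perp *
              (T[(i + 1) % n][j] - 2 * T[i][j] + T[(i - 1) % n][j])
              for j in range(n)] for i in range(n)]
    return T

n = 16
T0 = [[1.0 if (i, j) == (8, 8) else 0.0 for j in range(n)] for i in range(n)]
T1 = lie_split_diffusion(T0, chi_par=0.2, chi_perp=0.002, dt=1.0, steps=50)
total = sum(map(sum, T1))   # periodic diffusion conserves total heat
```

The heat spreads far along x and barely across it, mimicking the strongly anisotropic regime in which an unsplit discretization would pollute the perpendicular solution.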

  11. Management of time-dependent multimedia data

    NASA Astrophysics Data System (ADS)

    Little, Thomas D.; Gibbon, John F.

    1993-01-01

A number of approaches have been proposed for supporting high-bandwidth time-dependent multimedia data in a general purpose computing environment. Much of this work assumes the availability of ample resources such as CPU performance, bus, I/O, and communication bandwidth. However, many multimedia applications have large variations in instantaneous data presentation requirements (e.g., a dynamic range of order 100,000). By using a statistical scheduling approach these variations are effectively smoothed and, therefore, more applications are made viable. The result is a more efficient use of available bandwidth and the enabling of applications that have large short-term bandwidth requirements such as simultaneous video and still image retrieval. Statistical scheduling of multimedia traffic relies on accurate characterization or guarantee of channel bandwidth and delay. If guaranteed channel characteristics are not upheld due to spurious channel overload, buffer overflow and underflow can occur at the destination. The result is the loss of established source-destination synchronization and the introduction of intermedia skew. In this paper we present an overview of a proposed synchronization mechanism to limit the effects of such anomalous behavior. The proposed mechanism monitors buffer levels to detect impending low and high levels on a frame-by-frame basis and regulates the destination playout rate. Intermedia skew is controlled by a similar control algorithm. This mechanism is used in conjunction with a statistical source scheduling approach to provide an overall multimedia transmission and resynchronization system supporting graceful service degradation.
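The watermark-style buffer regulation described above can be sketched as a small simulation. The watermark thresholds, buffer capacity, and arrival pattern below are hypothetical illustrations, not the paper's mechanism or parameters:

```python
def simulate_playout(arrivals, low=4, high=12, capacity=16):
    # Watermark-based playout regulation: frames arrive with a jittery
    # rate; playout consumes 1 frame per tick at nominal speed, pausing
    # below the low watermark (to avoid underflow) and draining 2 frames
    # above the high watermark (to avoid overflow).  Returns the per-tick
    # buffer occupancy.
    buf, history = 8, []
    for arriving in arrivals:
        buf = min(capacity, buf + arriving)   # enqueue; drop on overflow
        if buf <= low:
            rate = 0
        elif buf >= high:
            rate = 2
        else:
            rate = 1
        buf = max(0, buf - rate)
        history.append(buf)
    return history

# Bursty arrival pattern: alternating silence and 3-frame bursts.
occupancy = simulate_playout([0, 0, 3, 0, 3, 3, 0, 0, 3, 3, 3, 0] * 4)
```

Despite the bursty input, the regulated playout keeps the buffer away from both underflow and overflow, which is the graceful-degradation behavior the mechanism targets.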

  12. Reduced order modelling in searches for continuous gravitational waves - I. Barycentring time delays

    NASA Astrophysics Data System (ADS)

    Pitkin, M.; Doolan, S.; McMenamin, L.; Wette, K.

    2018-06-01

The frequencies and phases of emission from extra-solar sources measured by Earth-bound observers are modulated by the motions of the observer with respect to the source, and through relativistic effects. These modulations depend critically on the source's sky-location. Precise knowledge of the modulations are required to coherently track the source's phase over long observations, for example, in pulsar timing, or searches for continuous gravitational waves. The modulations can be modelled as sky-location and time-dependent time delays that convert arrival times at the observer to the inertial frame of the source, which can often be the Solar system barycentre. We study the use of reduced order modelling for speeding up the calculation of this time delay for any sky-location. We find that the time delay model can be decomposed into just four basis vectors, and with these the delay for any sky-location can be reconstructed to sub-nanosecond accuracy. When compared to standard routines for time delay calculation in gravitational wave searches, using the reduced basis can lead to speed-ups of 30 times. We have also studied components of time delays for sources in binary systems. Assuming eccentricities <0.25, we can reconstruct the delays to within hundreds of nanoseconds, with best case speed-ups of a factor of 10, or factors of two when interpolating the basis for different orbital periods or time stamps. In long-duration phase-coherent searches for sources with sky-position uncertainties, or binary parameter uncertainties, these speed-ups could allow enhancements in their scopes without large additional computational burdens.

  13. A new time-independent formulation of fractional release

    NASA Astrophysics Data System (ADS)

    Ostermöller, Jennifer; Bönisch, Harald; Jöckel, Patrick; Engel, Andreas

    2017-03-01

The fractional release factor (FRF) gives information on the amount of a halocarbon that is released at some point into the stratosphere from its source form to the inorganic form, which can harm the ozone layer through catalytic reactions. The quantity is of major importance because it directly affects the calculation of the ozone depletion potential (ODP). In this context time-independent values are needed which, in particular, should be independent of the trends in the tropospheric mixing ratios (tropospheric trends) of the respective halogenated trace gases. For a given atmospheric situation, such FRF values would represent a molecular property. We analysed the temporal evolution of FRF from ECHAM/MESSy Atmospheric Chemistry (EMAC) model simulations for several halocarbons and nitrous oxide between 1965 and 2011 on different mean age levels and found that the widely used formulation of FRF yields highly time-dependent values. We show that this is caused by the way that the tropospheric trend is handled in the widely used calculation method of FRF. Taking into account chemical loss in the calculation of stratospheric mixing ratios reduces the time dependence in FRFs. Therefore we implemented a loss term in the formulation of the FRF and applied the parameterization of a mean arrival time to our data set. We find that the time dependence in the FRF can almost be compensated for by applying a new trend correction in the calculation of the FRF. We suggest that this new method should be used to calculate time-independent FRFs, which can then be used e.g. for the calculation of ODP.
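The conventional FRF definition the paper critiques can be stated in two lines. This sketches only that standard definition (with hypothetical mixing ratios), not the paper's new loss-term and trend-corrected formulation:

```python
def fractional_release(entry_mixing_ratio, stratospheric_mixing_ratio):
    # Conventional fractional release factor: the fraction of a halocarbon
    # already converted from its source form to inorganic form, relative to
    # the mixing ratio the air parcel had on entering the stratosphere.
    # The paper shows this form inherits the tropospheric trend and
    # proposes a trend-corrected alternative, not reproduced here.
    return 1.0 - stratospheric_mixing_ratio / entry_mixing_ratio

# An air parcel that entered with 500 ppt of a halocarbon and now holds
# 200 ppt in source form has released 60% of its halogen load.
frf = fractional_release(500.0, 200.0)   # -> 0.6
```

The time dependence arises because the entry mixing ratio itself follows the tropospheric trend, which is precisely what the proposed correction compensates for.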

  14. High-Energy, High-Pulse-Rate Light Sources for Enhanced Time-Resolved Tomographic PIV of Unsteady and Turbulent Flows

    DTIC Science & Technology

    2017-07-31

    Report: High-Energy, High-Pulse-Rate Light Sources for Enhanced Time-Resolved Tomographic PIV of Unsteady and Turbulent Flows.

  15. Downscaling near-surface soil moisture from field to plot scale: A comparative analysis under different environmental conditions

    NASA Astrophysics Data System (ADS)

    Nasta, Paolo; Penna, Daniele; Brocca, Luca; Zuecco, Giulia; Romano, Nunzio

    2018-02-01

    Indirect measurements of field-scale (hectometer grid-size) spatial-average near-surface soil moisture are becoming increasingly available by exploiting new-generation ground-based and satellite sensors. Nonetheless, modeling applications for water resources management require knowledge of plot-scale (1-5 m grid-size) soil moisture by using measurements through spatially-distributed sensor network systems. Since efforts to fulfill such requirements are not always possible due to time and budget constraints, alternative approaches are desirable. In this study, we explore the feasibility of determining spatial-average soil moisture and soil moisture patterns given the knowledge of long-term records of climate forcing data and topographic attributes. A downscaling approach is proposed that couples two different models: the Eco-Hydrological Bucket and Equilibrium Moisture from Topography. This approach helps identify the relative importance of two compound topographic indexes in explaining the spatial variation of soil moisture patterns, indicating valley- and hillslope-dependence controlled by lateral flow and radiative processes, respectively. The integrated model also detects temporal instability if the dominant type of topographic dependence changes with spatial-average soil moisture. Model application was carried out at three sites in different parts of Italy, each characterized by different environmental conditions. Prior calibration was performed by using sparse and sporadic soil moisture values measured by portable time domain reflectometry devices. Cross-site comparisons offer different interpretations in the explained spatial variation of soil moisture patterns, with time-invariant valley-dependence (site in northern Italy) and hillslope-dependence (site in southern Italy). 
The sources of soil moisture spatial variation at the site in central Italy are time-variant within the year and the seasonal change of topographic dependence can be conveniently correlated to a climate indicator such as the aridity index.

  16. The role of wellbore remediation on the evolution of groundwater quality from CO₂ and brine leakage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansoor, Kayyum; Carroll, Susan A.; Sun, Yunwei

    Long-term storage of CO₂ in underground reservoirs requires a careful assessment to evaluate risk to groundwater sources. The focus of this study is to assess time-frames required to restore water quality to pre-injection levels based on output from complex reactive transport simulations that exhibit plume retraction within a 200-year simulation period. We examined the relationship between plume volume, cumulative injected CO₂ mass, and permeability. The role of mitigation was assessed by projecting falloffs in plume volumes from their maximum peak levels with a Gaussian function to estimate plume recovery times to reach post-injection groundwater compositions. The results show a strong correlation between cumulative injected CO₂ mass and maximum plume pH volumes and a positive correlation between CO₂ flux, cumulative injected CO₂, and plume recovery times, with secondary dependence on permeability.
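
    The projection step can be sketched as follows. The plume-volume curve below is synthetic (a linear growth phase and a Gaussian falloff with assumed parameters), not simulator output; the sketch fits the falloff wing with a Gaussian and extrapolates to a small threshold volume to estimate a recovery time.

```python
import numpy as np

# Hypothetical plume-volume history (km^3): growth to a peak at t0, then retraction
t = np.linspace(0.0, 200.0, 401)           # years
t0, v_max, sigma_true = 60.0, 12.0, 35.0
v = np.where(t < t0,
             v_max * t / t0,                                     # growth phase
             v_max * np.exp(-((t - t0) ** 2) / (2 * sigma_true ** 2)))

# Project the falloff with a Gaussian: log(V) = log(Vmax) - (t - t0)^2 / (2 s^2)
fall = t > t0
s2 = np.mean(((t[fall] - t0) ** 2) / (2.0 * np.log(v_max / v[fall])))
sigma_fit = np.sqrt(s2)

# Recovery time: when the projected volume falls below a small threshold
threshold = 0.1  # km^3, taken here to represent pre-injection conditions
t_recover = t0 + sigma_fit * np.sqrt(2.0 * np.log(v_max / threshold))
print(f"fitted sigma = {sigma_fit:.1f} yr, recovery at ~{t_recover:.0f} yr")
```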

  17. The role of wellbore remediation on the evolution of groundwater quality from CO₂ and brine leakage

    DOE PAGES

    Mansoor, Kayyum; Carroll, Susan A.; Sun, Yunwei

    2014-12-31

    Long-term storage of CO₂ in underground reservoirs requires a careful assessment to evaluate risk to groundwater sources. The focus of this study is to assess time-frames required to restore water quality to pre-injection levels based on output from complex reactive transport simulations that exhibit plume retraction within a 200-year simulation period. We examined the relationship between plume volume, cumulative injected CO₂ mass, and permeability. The role of mitigation was assessed by projecting falloffs in plume volumes from their maximum peak levels with a Gaussian function to estimate plume recovery times to reach post-injection groundwater compositions. The results show a strong correlation between cumulative injected CO₂ mass and maximum plume pH volumes and a positive correlation between CO₂ flux, cumulative injected CO₂, and plume recovery times, with secondary dependence on permeability.

  18. Thermally driven advection for radioxenon transport from an underground nuclear explosion

    NASA Astrophysics Data System (ADS)

    Sun, Yunwei; Carrigan, Charles R.

    2016-05-01

    Barometric pumping is a ubiquitous process resulting in migration of gases in the subsurface that has been studied as the primary mechanism for noble gas transport from an underground nuclear explosion (UNE). However, at early times following a UNE, advection driven by explosion residual heat is relevant to noble gas transport. A rigorous measure is needed for demonstrating how, when, and where advection is important. In this paper three physical processes of uncertain magnitude (oscillatory advection, matrix diffusion, and thermally driven advection) are parameterized by using boundary conditions, system properties, and source term strength. Sobol' sensitivity analysis is conducted to evaluate the importance of all physical processes influencing the xenon signals. This study indicates that thermally driven advection plays a more important role in producing xenon signals than oscillatory advection and matrix diffusion at early times following a UNE, and xenon isotopic ratios are observed to have both temporal and spatial dependence.
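
    A minimal sketch of variance-based (Sobol') sensitivity analysis, using the Saltelli pick-and-freeze estimator on a placeholder response function: the three inputs stand in for thermally driven advection, oscillatory advection, and matrix diffusion, with assumed weights chosen so the thermal term dominates. The model is not the paper's transport simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 20000, 3

def model(x):
    """Toy xenon-signal response: dominant thermal-advection term, plus
    weaker oscillatory and diffusive contributions (assumed weights)."""
    thermal, osc, diff = x[:, 0], x[:, 1], x[:, 2]
    return 3.0 * thermal + np.sin(2 * np.pi * osc) + 0.5 * diff ** 2

# Two independent sample matrices on the unit cube
A = rng.uniform(0.0, 1.0, (n, d))
B = rng.uniform(0.0, 1.0, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

# First-order Sobol' indices via the Saltelli (2010) estimator
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]               # freeze all inputs but the i-th
    S.append(np.mean(fB * (model(ABi) - fA)) / var)

print([round(s, 2) for s in S])       # thermal term carries most of the variance
```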

  19. A Brownian model for recurrent earthquakes

    USGS Publications Warehouse

    Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.

    2002-01-01

    We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate according to whether the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
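
    The distribution and its properties are easy to evaluate numerically. The sketch below implements the Brownian passage-time (inverse Gaussian) density and CDF for an assumed mean recurrence of 100 yr and aperiodicity 0.5, and checks the listed properties: near-zero hazard immediately after an event, and a quasi-stationary hazard 1/(2μα²) that exceeds the mean rate when the coefficient of variation is below 1/√2.

```python
from math import erf, exp, pi, sqrt

def _phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bpt_pdf(t, mu, alpha):
    """Brownian passage-time (inverse Gaussian) density with mean recurrence
    mu and aperiodicity (coefficient of variation) alpha."""
    return sqrt(mu / (2.0 * pi * alpha**2 * t**3)) * \
        exp(-((t - mu) ** 2) / (2.0 * mu * alpha**2 * t))

def bpt_cdf(t, mu, alpha):
    """Inverse Gaussian CDF; shape parameter lam = mu / alpha**2."""
    lam = mu / alpha**2
    a = sqrt(lam / t)
    return _phi(a * (t / mu - 1.0)) + exp(2.0 * lam / mu) * _phi(-a * (t / mu + 1.0))

def hazard(t, mu, alpha):
    """Instantaneous failure (hazard) rate."""
    return bpt_pdf(t, mu, alpha) / (1.0 - bpt_cdf(t, mu, alpha))

mu, alpha = 100.0, 0.5                  # assumed mean recurrence (yr), aperiodicity
quasi = 1.0 / (2.0 * mu * alpha**2)     # quasi-stationary hazard level
print(hazard(10, mu, alpha))            # ~0: no immediate re-rupture
print(hazard(300, mu, alpha), quasi)    # hazard settles near the quasi-stationary level
```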

  20. Management of Ultimate Risk of Nuclear Power Plants by Source Terms - Lessons Learned from the Chernobyl Accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genn Saji

    2006-07-01

    The term 'ultimate risk' is used here to describe the probabilities and radiological consequences that should be incorporated in siting, containment design and accident management of nuclear power plants for hypothetical accidents. It is closely related to the source terms specified in siting criteria, which assure an adequate separation of the radioactive inventories of the plants from the public in the event of a hypothetical and severe accident situation. The author would like to point out that current source terms, which are based on the information from the Windscale accident (1957) through TID-14844, are very outdated and do not incorporate lessons learned from either the Three Mile Island (TMI, 1979) or the Chernobyl (1986) accident, two of the most severe accidents ever experienced. As a result of the observations of benign radionuclides released at TMI, the technical community in the US felt that a more realistic evaluation of severe reactor accident source terms was necessary. Against this background, the 'source term research project' was organized in 1984 to respond to these challenges. Unfortunately, soon after the final report from this project was released, the Chernobyl accident occurred. Due to the enormous consequences induced by that accident, the once-optimistic perspectives on establishing a more realistic source term were completely shattered. The Chernobyl accident, with its human death toll and dispersion of a large part of the fission fragment inventories into the environment, created a significant degradation in the public's acceptance of nuclear energy throughout the world. In spite of this, nuclear communities have been prudent in responding to the public's anxiety towards the ultimate safety of nuclear plants, since there still remained many unknown points revolving around the mechanism of the Chernobyl accident. In order to resolve some of these mysteries, the author has performed a scoping study of the dispersion and deposition mechanisms of fuel particles and fission fragments during the initial phase of the Chernobyl accident. Through this study, it is now possible to generally reconstruct the radiological consequences by using a dispersion calculation technique, combined with the meteorological data at the time of the accident and land contamination densities of ¹³⁷Cs measured and reported around the Chernobyl area. Although it is challenging to incorporate lessons learned from the Chernobyl accident into the source term issues, the author has already developed an example of safety goals by incorporating the radiological consequences of the accident. The example provides safety goals by specifying source term releases in a graded approach in combination with probabilities, i.e. risks. The author believes that the future source term specification should be directly linked with safety goals. (author)

  1. OpenNFT: An open-source Python/Matlab framework for real-time fMRI neurofeedback training based on activity, connectivity and multivariate pattern analysis.

    PubMed

    Koush, Yury; Ashburner, John; Prilepin, Evgeny; Sladky, Ronald; Zeidman, Peter; Bibikov, Sergei; Scharnowski, Frank; Nikonorov, Artem; De Ville, Dimitri Van

    2017-08-01

    Neurofeedback based on real-time functional magnetic resonance imaging (rt-fMRI) is a novel and rapidly developing research field. It allows for training of voluntary control over localized brain activity and connectivity and has demonstrated promising clinical applications. Because of the rapid technical development of MRI techniques and the availability of high-performance computing, new methodological advances in rt-fMRI neurofeedback become possible. Here we outline the core components of a novel open-source neurofeedback framework, termed Open NeuroFeedback Training (OpenNFT), which efficiently integrates these new developments. This framework is implemented using Python and Matlab source code to allow for diverse functionality, high modularity, and rapid extendibility of the software depending on the user's needs. In addition, it provides an easy interface to the functionality of Statistical Parametric Mapping (SPM), which is also open-source and one of the most widely used fMRI data analysis packages. We demonstrate the functionality of our new framework by describing case studies that include neurofeedback protocols based on brain activity levels, effective connectivity models, and pattern classification approaches. This open-source initiative provides a suitable framework to actively engage in the development of novel neurofeedback approaches, so that local methodological developments can be easily made accessible to a wider range of users. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Dynamic Radioactive Source for Evaluating and Demonstrating Time-dependent Performance of Continuous Air Monitors.

    PubMed

    McLean, Thomas D; Moore, Murray E; Justus, Alan L; Hudston, Jonathan A; Barbé, Benoît

    2016-11-01

    Evaluation of continuous air monitors in the presence of a plutonium aerosol is time intensive, expensive, and requires a specialized facility. The Radiation Protection Services Group at Los Alamos National Laboratory has designed a Dynamic Radioactive Source, intended to replace plutonium aerosol challenge testing. The Dynamic Radioactive Source is small enough to be inserted into the sampler filter chamber of a typical continuous air monitor. Time-dependent radioactivity is introduced from electroplated sources for real-time testing of a continuous air monitor where a mechanical wristwatch motor rotates a mask above an alpha-emitting electroplated disk source. The mask is attached to the watch's minute hand, and as it rotates, more of the underlying source is revealed. The measured alpha activity increases with time, simulating the arrival of airborne radioactive particulates at the air sampler inlet. The Dynamic Radioactive Source allows the temporal behavior of puff and chronic release conditions to be mimicked without the need for radioactive aerosols. The new system is configurable to different continuous air monitor designs and provides an in-house testing capability (benchtop compatible). It is a repeatable and reusable system and does not contaminate the tested air monitor. Test benefits include direct user control, realistic (plutonium) aerosol spectra, and iterative development of continuous air monitor alarm algorithms. Data obtained using the Dynamic Radioactive Source have been used to elucidate alarm algorithms and to compare the response times of two commercial continuous air monitors.

  3. Dynamic Radioactive Source for Evaluating and Demonstrating Time-dependent Performance of Continuous Air Monitors

    DOE PAGES

    McLean, Thomas D.; Moore, Murray E.; Justus, Alan L.; ...

    2016-01-01

    Evaluation of continuous air monitors in the presence of a plutonium aerosol is time intensive, expensive, and requires a specialized facility. The Radiation Protection Services Group at Los Alamos National Laboratory has designed a Dynamic Radioactive Source, intended to replace plutonium aerosol challenge testing. Furthermore, the Dynamic Radioactive Source is small enough to be inserted into the sampler filter chamber of a typical continuous air monitor. Time-dependent radioactivity is introduced from electroplated sources for real-time testing of a continuous air monitor where a mechanical wristwatch motor rotates a mask above an alpha-emitting electroplated disk source. The mask is attached to the watch’s minute hand, and as it rotates, more of the underlying source is revealed. The alpha activity we measured increases with time, simulating the arrival of airborne radioactive particulates at the air sampler inlet. The Dynamic Radioactive Source allows the temporal behavior of puff and chronic release conditions to be mimicked without the need for radioactive aerosols. The new system is configurable to different continuous air monitor designs and provides an in-house testing capability (benchtop compatible). It is a repeatable and reusable system and does not contaminate the tested air monitor. Test benefits include direct user control, realistic (plutonium) aerosol spectra, and iterative development of continuous air monitor alarm algorithms. We also used data obtained using the Dynamic Radioactive Source to elucidate alarm algorithms and to compare the response time of two commercial continuous air monitors.

  4. Temporal information partitioning: Characterizing synergy, uniqueness, and redundancy in interacting environmental variables

    NASA Astrophysics Data System (ADS)

    Goodwell, Allison E.; Kumar, Praveen

    2017-07-01

    Information theoretic measures can be used to identify nonlinear interactions between source and target variables through reductions in uncertainty. In information partitioning, multivariate mutual information is decomposed into synergistic, unique, and redundant components. Synergy is information shared only when sources influence a target together, uniqueness is information only provided by one source, and redundancy is overlapping shared information from multiple sources. While this partitioning has been applied to provide insights into complex dependencies, several proposed partitioning methods overestimate redundant information and omit a component of unique information because they do not account for source dependencies. Additionally, information partitioning has only been applied to time-series data in a limited context, using basic pdf estimation techniques or a Gaussian assumption. We develop a Rescaled Redundancy measure (Rs) to solve the source dependency issue, and present Gaussian, autoregressive, and chaotic test cases to demonstrate its advantages over existing techniques in the presence of noise, various source correlations, and different types of interactions. This study constitutes the first rigorous application of information partitioning to environmental time-series data, and addresses how noise, pdf estimation technique, or source dependencies can influence detected measures. We illustrate how our techniques can unravel the complex nature of forcing and feedback within an ecohydrologic system with an application to 1 min environmental signals of air temperature, relative humidity, and windspeed. The methods presented here are applicable to the study of a broad range of complex systems composed of interacting variables.
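
    The partitioning itself can be sketched with the classical Williams-Beer I_min redundancy measure (the paper's Rescaled Redundancy refines this to handle source dependencies; I_min is used here only because it is compact). The XOR target is the canonical purely synergistic case: neither source alone is informative, but together they determine the target.

```python
import numpy as np

def joint_table(a, b):
    """Joint probability table p(a, b) from integer-coded samples."""
    tab = np.zeros((int(a.max()) + 1, int(b.max()) + 1))
    for ai, bi in zip(a, b):
        tab[ai, bi] += 1.0
    return tab / len(a)

def mutual_info(tab):
    """Mutual information (bits) of a 2-D joint probability table."""
    px = tab.sum(axis=1, keepdims=True)
    py = tab.sum(axis=0, keepdims=True)
    nz = tab > 0
    return float(np.sum(tab[nz] * np.log2(tab[nz] / (px * py)[nz])))

def specific_info(tab, yv):
    """Specific information I(X; Y = yv) in bits."""
    px = tab.sum(axis=1)
    pxy = tab[:, yv] / tab[:, yv].sum()      # p(x | y = yv)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / px[nz])))

def partition(x1, x2, y):
    """Williams-Beer partition of I(X1,X2;Y) using the I_min redundancy."""
    t1, t2 = joint_table(x1, y), joint_table(x2, y)
    t12 = joint_table(x1 * (int(x2.max()) + 1) + x2, y)
    py = t1.sum(axis=0)
    R = sum(py[yv] * min(specific_info(t1, yv), specific_info(t2, yv))
            for yv in range(py.size))
    i1, i2, i12 = mutual_info(t1), mutual_info(t2), mutual_info(t12)
    return dict(redundant=R, unique1=i1 - R, unique2=i2 - R,
                synergy=i12 - i1 - i2 + R, total=i12)

# XOR target: purely synergistic interaction
rng = np.random.default_rng(1)
x1 = rng.integers(0, 2, 4000)
x2 = rng.integers(0, 2, 4000)
p = partition(x1, x2, x1 ^ x2)
print({k: round(v, 3) for k, v in p.items()})   # synergy ~1 bit, rest ~0
```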

  5. Bounds on Time Reversal Violation From Polarized Neutron Capture With Unpolarized Targets.

    PubMed

    Davis, E D; Gould, C R; Mitchell, G E; Sharapov, E I

    2005-01-01

    We have analyzed constraints on parity-odd time-reversal noninvariant interactions derived from measurements of the energy dependence of parity-violating polarized neutron capture on unpolarized targets. As previous authors found, a perturbation in energy dependence due to a parity (P)-odd time (T)-odd interaction is present. However, the perturbation competes with T-even terms which can obscure the T-odd signature. We estimate the magnitudes of these competing terms and suggest strategies for a practicable experiment.

  6. High Order Finite Difference Methods with Subcell Resolution for 2D Detonation Waves

    NASA Technical Reports Server (NTRS)

    Wang, W.; Shu, C. W.; Yee, H. C.; Sjogreen, B.

    2012-01-01

    In simulating hyperbolic conservation laws in conjunction with an inhomogeneous stiff source term, if the solution is discontinuous, spurious numerical results may be produced due to different time scales of the transport part and the source term. This numerical issue often arises in combustion and high speed chemical reacting flows.
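
    The failure mode is easy to reproduce in one dimension with a LeVeque-Yee style model problem, u_t + u_x = s(u), where the source relaxes u toward the stable states 0 and 1 much faster than the transport time scale. The sketch below takes the stiff limit of the source as an operator-split projection step (an assumption made for brevity): the first-order upwind step smears the discontinuity, the source snaps the smeared values back, and the front freezes instead of moving at the exact unit speed.

```python
import numpy as np

nx, cfl, t_end = 400, 0.4, 0.3
dx = 1.0 / nx
dt = cfl * dx
x = (np.arange(nx) + 0.5) * dx
u = np.where(x < 0.2, 1.0, 0.0)              # step initial data

for _ in range(int(round(t_end / dt))):
    u = u - cfl * (u - np.roll(u, 1))        # first-order upwind advection step
    u = np.where(u > 0.5, 1.0, 0.0)          # stiff-limit source: relax to 0 or 1

front_numeric = x[np.argmax(u < 0.5)]        # first cell below 0.5
front_exact = 0.2 + t_end                    # exact step advects at unit speed
print(front_numeric, front_exact)            # numerical front is stuck near x = 0.2
```

    Subcell-resolution methods avoid this by reconstructing the discontinuity location inside the cell before the source term is applied.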

  7. [Using the CAS (computer-assisted surgery) system in arthroscopic cruciate ligament surgery--adaptation and application in clinical practice].

    PubMed

    Bernsmann, K; Rosenthal, A; Sati, M; Ansari, B; Wiese, M

    2001-01-01

    The anterior cruciate ligament (ACL) is of great importance for knee joint function. In the case of a complete ligament injury there is hardly any chance of complete recovery. The clear advantages of an operative reconstruction by replacing the ACL have been shown in many trials. The accurate placement of the graft's insertions has a significant effect on the mid- and probably long-term outcome of this procedure. Reviewing the literature, there are poor long-term results of ACL replacement in 5 to 52% of all cases, depending on the score system. One of the main reasons for unacceptable results is graft misplacement. This led to the construction of a CAS system for ACL replacement. The system assists this surgical procedure by navigating the exact position of the drilling holes. The potential deformation of the graft can be controlled by this system in real time. 40 computer-assisted ACL replacements have been performed under active use of the CAS system. The short-term results are encouraging; no special complications have been seen so far. Prospective long-term follow-up studies are ongoing. ACL reconstruction by manual devices has many sources of error. The CAS system is able to give the surgeon reasonable views that are unachievable by conventional surgery. The surgeon is therefore able to control a source of error and to optimise the results. The feasibility of this device in routine clinical use has been proven.

  8. Data collection as the first step in program development: the experience of a chronic care palliative unit.

    PubMed

    Munn, B; Worobec, F

    1997-01-01

    This retrospective descriptive study of 73 patients who died in St. Peter's Hospital examines and contrasts the patient profiles and referral sources of a palliative care unit in a chronic care hospital over two six-month periods during 1994 and 1995. Shortened length of stay (83.8 and 43.2 days respectively), documentation issues, CPR practices (CPR was desired by seven patients up to the time of death), and lack of referrals from long-term care facilities have led St. Peter's Hospital to ask further questions of its palliative care program, e.g. given the lack of referrals from long-term care facilities, how is palliative care being managed in this sector? In Ontario, palliative care has been placed under the domain of chronic care, and program development depends in part on knowledge of the population it serves. This study is a first step.

  9. Rate/state Coulomb stress transfer model for the CSEP Japan seismicity forecast

    NASA Astrophysics Data System (ADS)

    Toda, Shinji; Enescu, Bogdan

    2011-03-01

    Numerous studies have retrospectively found that the seismicity rate jumps (drops) in response to coseismic Coulomb stress increases (decreases). The Collaboratory for the Study of Earthquake Predictability (CSEP) instead provides an opportunity for prospective testing of the Coulomb hypothesis. Here we adapt our stress transfer model, incorporating the rate- and state-dependent friction law, to the CSEP Japan seismicity forecast. We demonstrate how to compute the forecast rates of large shocks in 2009 using the large earthquakes during the past 120 years. The time-dependent impact of the coseismic stress perturbations explains qualitatively well the occurrence of the recent moderate-size shocks. This ability is partly similar to that of statistical earthquake clustering models. However, our model differs from them as follows: the off-fault aftershock zones can be simulated using finite fault sources; the regional areal patterns of triggered seismicity are modified by the dominant mechanisms of the potential sources; and the stresses imparted by large earthquakes produce stress shadows that lead to a reduction of the forecasted number of earthquakes. Although the model relies on several unknown parameters, it is the first physics-based model submitted to the CSEP Japan test center and has the potential to be tuned for short-term earthquake forecasts.
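
    The core of such a stress transfer model is Dieterich's (1994) rate/state response to a Coulomb stress step, which the sketch below evaluates with placeholder parameters (the background rate, Aσ, and aftershock duration are assumptions, not the paper's calibrated values): a positive step produces an Omori-like transient rate increase, a negative step a seismicity shadow, and both relax to the background rate.

```python
import numpy as np

def dieterich_rate(t, dcff, r=1.0, a_sigma=0.02, t_a=10.0):
    """Seismicity rate at time t (yr) after a Coulomb stress step dcff (MPa),
    following Dieterich's (1994) rate/state formulation: background rate r,
    constitutive parameter A*sigma (MPa), aftershock duration t_a (yr).
    All parameter values here are placeholders."""
    gamma = (np.exp(-dcff / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0
    return r / gamma

t = np.linspace(0.01, 50.0, 500)
promoted = dieterich_rate(t, +0.1)    # stress increase: jump, then Omori-like decay
shadowed = dieterich_rate(t, -0.1)    # stress decrease: seismicity shadow
print(promoted[0], shadowed[0])       # strong transient enhancement / suppression
```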

  10. A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok

    1998-01-01

    An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable to a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they include the sources of each conserved variable and time-dependent terms. The source term of the entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solutions of several benchmark problems.
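
    A miniature version of the solution step: mass and momentum residuals for a two-pipe series network (friction law ΔP = K·Q|Q|, with made-up coefficients) solved by Newton-Raphson. This only sketches the Newton component of the hybrid scheme, not the paper's solver or its entropy equation.

```python
import numpy as np

# Unknowns: flow rates Q1, Q2 and the internal node pressure p
K1, K2 = 2.0, 3.0                      # made-up friction coefficients
P_IN, P_OUT = 10.0, 0.0                # boundary pressures

def residual(v):
    q1, q2, p = v
    return np.array([
        P_IN - p - K1 * q1 * abs(q1),  # momentum balance, pipe 1
        p - P_OUT - K2 * q2 * abs(q2), # momentum balance, pipe 2
        q1 - q2,                       # mass conservation at the node
    ])

def jacobian(v):
    q1, q2, p = v
    return np.array([
        [-2.0 * K1 * abs(q1), 0.0, -1.0],
        [0.0, -2.0 * K2 * abs(q2), 1.0],
        [1.0, -1.0, 0.0],
    ])

v = np.array([1.0, 1.0, 5.0])          # initial guess
for _ in range(50):
    dv = np.linalg.solve(jacobian(v), -residual(v))
    v += dv
    if np.linalg.norm(dv) < 1e-12:
        break

q, p_node = v[0], v[2]
print(q, p_node)
```

    For this series network the exact answer is Q = sqrt((P_IN - P_OUT) / (K1 + K2)) = sqrt(2) with node pressure 6, which the iteration recovers quadratically.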

  11. A statistical evaluation of effective time constants of random telegraph noise with various operation timings of in-pixel source follower transistors

    NASA Astrophysics Data System (ADS)

    Yonezawa, A.; Kuroda, R.; Teramoto, A.; Obara, T.; Sugawa, S.

    2014-03-01

    We evaluated effective time constants of random telegraph noise (RTN) with various operation timings of in-pixel source follower transistors statistically, and discuss the dependency of RTN time constants on the duty ratio (on/off ratio) of MOSFET which is controlled by the gate to source voltage (VGS). Under a general readout operation of CMOS image sensor (CIS), the row selected pixel-source followers (SFs) turn on and not selected pixel-SFs operate at different bias conditions depending on the select switch position; when select switch locate in between the SF driver and column output line, SF drivers nearly turn off. The duty ratio and cyclic period of selected time of SF driver depends on the operation timing determined by the column read out sequence. By changing the duty ratio from 1 to 7.6 x 10-3, time constant ratio of RTN (time to capture <τc<)/(time to emission <τe<) of a part of MOSFETs increased while RTN amplitudes were almost the same regardless of the duty ratio. In these MOSFETs, <τc< increased and the majority of <τe< decreased and the minority of <τe< increased by decreasing the duty ratio. The same tendencies of behaviors of <τc< and <τe< were obtained when VGS was decreased. This indicates that the effective <τc< and <τe< converge to those under off state as duty ratio decreases. These results are important for the noise reduction, detection and analysis of in pixel-SF with RTN.

  12. A spatio-temporal model for probabilistic seismic hazard zonation of Tehran

    NASA Astrophysics Data System (ADS)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2013-08-01

    A precondition for all disaster management steps, building damage prediction, and construction code development is a hazard assessment that shows the exceedance probabilities of different ground motion levels at a site, considering different near- and far-field earthquake sources. The seismic sources are usually categorized as time-independent area sources and time-dependent fault sources. While the former incorporates the small and medium events, the latter takes into account only the large characteristic earthquakes. In this article, a probabilistic approach is proposed to aggregate the effects of time-dependent and time-independent sources on seismic hazard. The methodology is then applied to generate three probabilistic seismic hazard maps of Tehran for 10%, 5%, and 2% exceedance probabilities in 50 years. The results indicate an increase in peak ground acceleration (PGA) values toward the southeastern part of the study area, and the PGA variations are mostly controlled by the shear wave velocities across the city. In addition, the implementation of the methodology takes advantage of GIS capabilities, especially raster-based analyses and representations. During the estimation of the PGA exceedance rates, the emphasis has been placed on incorporating the effects of different attenuation relationships and seismic source models by using a logic tree.
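
    The aggregation step can be sketched as follows, with placeholder numbers (the rates and conditional probabilities below are illustrative, not Tehran values): the time-independent area sources contribute a Poisson exceedance probability, the time-dependent fault source a renewal-model conditional probability, and the two are combined assuming independence.

```python
import numpy as np

T = 50.0                                   # exposure time (yr)

# Time-independent area sources: Poissonian exceedance of a given PGA level
rate_area = 0.004                          # assumed annual exceedance rate
p_area = 1.0 - np.exp(-rate_area * T)

# Time-dependent fault source: conditional probability of the characteristic
# event in the next T years (from a renewal model), times the probability the
# resulting ground motion exceeds the PGA level
p_event_fault = 0.12
p_exceed_given_event = 0.6
p_fault = p_event_fault * p_exceed_given_event

# Aggregate assuming independence of the sources
p_total = 1.0 - (1.0 - p_area) * (1.0 - p_fault)
print(f"area: {p_area:.3f}, fault: {p_fault:.3f}, combined: {p_total:.3f}")
```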

  13. Inverse modelling-based reconstruction of the Chernobyl source term available for long-range transport

    NASA Astrophysics Data System (ADS)

    Davoine, X.; Bocquet, M.

    2007-03-01

    The reconstruction of the Chernobyl accident source term has previously been carried out using core inventories, but also iterative comparisons between model simulations and activity concentration or deposited activity measurements. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one is looking for a source term, available for long-range transport, that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters: a mass scale and the scale of the prior errors in the inversion. To overcome this difficulty, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that have been released are in good agreement with the latest reported estimations. Yet, a stronger apportionment of the total released activity is ascribed to the first period and less to the third one. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).
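
    The L-curve idea is easiest to see in the simpler Tikhonov setting (the paper's inversion uses maximum entropy on the mean; this synthetic least-squares example only illustrates how the corner of the residual-norm versus solution-norm curve balances data fit against regularization). The operator, truth, and noise level below are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)                    # rapidly decaying spectrum
A = U @ np.diag(s) @ V.T                     # ill-conditioned forward operator
x_true = V[:, 0] + 0.5 * V[:, 1]
b = A @ x_true + 1e-4 * rng.standard_normal(n)

lams = np.logspace(-5, 0, 60)
res, sol = [], []
for lam in lams:
    # Tikhonov solution via the regularized normal equations
    x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
    res.append(np.log(np.linalg.norm(A @ x - b)))
    sol.append(np.log(np.linalg.norm(x)))
res, sol = np.array(res), np.array(sol)

# The "corner" balances fit against regularity; a simple proxy is the
# point of maximum curvature of the (res, sol) curve
d1r, d1s = np.gradient(res), np.gradient(sol)
d2r, d2s = np.gradient(d1r), np.gradient(d1s)
curvature = np.abs(d1r * d2s - d1s * d2r) / (d1r**2 + d1s**2) ** 1.5
lam_corner = lams[np.argmax(curvature)]
print(f"L-curve corner near lam = {lam_corner:.1e}")
```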

  14. How Unique is Any Given Seismogram? - Exploring Correlation Methods to Identify Explosions

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Dodge, D. A.; Ford, S. R.; Pyle, M. L.; Hauk, T. F.

    2015-12-01

    As with conventional wisdom about snowflakes, we would expect it unlikely that any two broadband seismograms would ever be exactly identical. However, depending upon the resolution of our comparison metric, we do expect, and often find, bandpassed seismograms that correlate to very high levels (>0.99). In fact, regional (e.g. Schaff and Richards, 2011) and global investigations (e.g. Dodge and Walter, 2015) find large numbers of highly correlated seismograms. Decreasing computational costs are increasing the tremendous potential for correlation in lowering detection, location and identification thresholds for explosion monitoring (e.g. Schaff et al., 2012; Gibbons and Ringdal, 2012; Zhang and Wen, 2015). We have shown that, in the case of Source Physics Experiment (SPE) chemical explosions, templates at local and near-regional stations can detect, locate and identify very small explosions, which might be applied to monitoring active test sites (Ford and Walter, 2015). In terms of elastic theory, a seismogram is the convolution of source and Green's function terms. Thus high correlation implies similar, closely located sources. How do we quantify this physically? For example, it is well known that as the template and target events are increasingly separated spatially, their correlation diminishes, as the difference in the Green's function between the two events grows larger. This is related to the event separation in terms of wavelength, the heterogeneity of the Earth structure, and the time-bandwidth of the correlation parameters used, but it has not been well quantified. We are using the historic dataset of nuclear explosions in southern Nevada to explore empirically where and how well these events correlate as a function of location, depth, size, time-bandwidth and other parameters. A goal is to develop more meaningful and physical metrics that go beyond the correlation coefficient and can be applied to explosion monitoring problems, particularly event identification.

  15. An investigation on nuclear energy policy in Turkey and public perception

    NASA Astrophysics Data System (ADS)

    Coskun, Mehmet Burhanettin; Tanriover, Banu

    2016-11-01

    Turkey, which meets nearly 70 per cent of its energy demand through imports, faces problems of energy security and a current account deficit as a result of its dependence on foreign energy sources. It is also known that Turkey is experiencing environmental problems due to increases in CO2 emissions. Given these problems in the Turkish economy, where energy inputs are widely used, it is necessary to use energy sources efficiently and to provide alternative energy sources. Because renewable sources depend on meteorological conditions (the absence of sufficient sun, wind, and water), energy generation from these sources cannot be provided efficiently and permanently. At this point, nuclear energy maintains its importance as an alternative, sustainable energy source capable of providing energy 24 hours a day, 7 days a week. The main purpose of this study is to evaluate nuclear energy within the context of the negative public perceptions that emerged after the Chernobyl (1986) and Fukushima (2011) disasters and to investigate it in an economic framework.

  16. Fukushima Daiichi reactor source term attribution using cesium isotope ratios from contaminated environmental samples

    DOE PAGES

    Snow, Mathew S.; Snyder, Darin C.; Delmore, James E.

    2016-01-18

    Source term attribution of environmental contamination following the Fukushima Daiichi Nuclear Power Plant (FDNPP) disaster is complicated by a large number of possible similar emission source terms (e.g. FDNPP reactor cores 1–3 and spent fuel ponds 1–4). Cesium isotopic analyses can be utilized to discriminate between environmental contamination from different FDNPP source terms and, if samples are sufficiently temporally resolved, potentially provide insights into the extent of reactor core damage at a given time. Rice, soil, mushroom, and soybean samples taken 100–250 km from the FDNPP site were dissolved using microwave digestion. Radiocesium was extracted and purified using two sequential ammonium molybdophosphate-polyacrylonitrile columns, following which 135Cs/137Cs isotope ratios were measured using thermal ionization mass spectrometry (TIMS). Results were compared with data reported previously from locations to the northwest of FDNPP and 30 km to the south of FDNPP. 135Cs/137Cs isotope ratios from samples 100–250 km to the southwest of the FDNPP site show a consistent value of 0.376 ± 0.008. 135Cs/137Cs versus 134Cs/137Cs correlation plots suggest that radiocesium to the southwest is derived from a mixture of FDNPP reactor cores 1, 2, and 3. Conclusions from the cesium isotopic data are in agreement with those derived independently based upon the event chronology combined with meteorological conditions at the time of the disaster. In conclusion, cesium isotopic analyses provide a powerful tool for source term discrimination of environmental radiocesium contamination at the FDNPP site. For higher precision source term attribution and forensic determination of the FDNPP core conditions based upon cesium, analyses of a larger number of samples from locations to the north and south of the FDNPP site (particularly time-resolved air filter samples) are needed. Published in 2016. This article is a U.S. Government work and is in the public domain in the USA.

  17. Fukushima Daiichi reactor source term attribution using cesium isotope ratios from contaminated environmental samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snow, Mathew S.; Snyder, Darin C.; Delmore, James E.

    Source term attribution of environmental contamination following the Fukushima Daiichi Nuclear Power Plant (FDNPP) disaster is complicated by a large number of possible similar emission source terms (e.g. FDNPP reactor cores 1–3 and spent fuel ponds 1–4). Cesium isotopic analyses can be utilized to discriminate between environmental contamination from different FDNPP source terms and, if samples are sufficiently temporally resolved, potentially provide insights into the extent of reactor core damage at a given time. Rice, soil, mushroom, and soybean samples taken 100–250 km from the FDNPP site were dissolved using microwave digestion. Radiocesium was extracted and purified using two sequential ammonium molybdophosphate-polyacrylonitrile columns, following which 135Cs/137Cs isotope ratios were measured using thermal ionization mass spectrometry (TIMS). Results were compared with data reported previously from locations to the northwest of FDNPP and 30 km to the south of FDNPP. 135Cs/137Cs isotope ratios from samples 100–250 km to the southwest of the FDNPP site show a consistent value of 0.376 ± 0.008. 135Cs/137Cs versus 134Cs/137Cs correlation plots suggest that radiocesium to the southwest is derived from a mixture of FDNPP reactor cores 1, 2, and 3. Conclusions from the cesium isotopic data are in agreement with those derived independently based upon the event chronology combined with meteorological conditions at the time of the disaster. In conclusion, cesium isotopic analyses provide a powerful tool for source term discrimination of environmental radiocesium contamination at the FDNPP site. For higher precision source term attribution and forensic determination of the FDNPP core conditions based upon cesium, analyses of a larger number of samples from locations to the north and south of the FDNPP site (particularly time-resolved air filter samples) are needed. Published in 2016. This article is a U.S. Government work and is in the public domain in the USA.

  18. Fukushima Daiichi reactor source term attribution using cesium isotope ratios from contaminated environmental samples.

    PubMed

    Snow, Mathew S; Snyder, Darin C; Delmore, James E

    2016-02-28

    Source term attribution of environmental contamination following the Fukushima Daiichi Nuclear Power Plant (FDNPP) disaster is complicated by a large number of possible similar emission source terms (e.g. FDNPP reactor cores 1-3 and spent fuel ponds 1-4). Cesium isotopic analyses can be utilized to discriminate between environmental contamination from different FDNPP source terms and, if samples are sufficiently temporally resolved, potentially provide insights into the extent of reactor core damage at a given time. Rice, soil, mushroom, and soybean samples taken 100-250 km from the FDNPP site were dissolved using microwave digestion. Radiocesium was extracted and purified using two sequential ammonium molybdophosphate-polyacrylonitrile columns, following which (135)Cs/(137)Cs isotope ratios were measured using thermal ionization mass spectrometry (TIMS). Results were compared with data reported previously from locations to the northwest of FDNPP and 30 km to the south of FDNPP. (135)Cs/(137)Cs isotope ratios from samples 100-250 km to the southwest of the FDNPP site show a consistent value of 0.376 ± 0.008. (135)Cs/(137)Cs versus (134)Cs/(137)Cs correlation plots suggest that radiocesium to the southwest is derived from a mixture of FDNPP reactor cores 1, 2, and 3. Conclusions from the cesium isotopic data are in agreement with those derived independently based upon the event chronology combined with meteorological conditions at the time of the disaster. Cesium isotopic analyses provide a powerful tool for source term discrimination of environmental radiocesium contamination at the FDNPP site. For higher precision source term attribution and forensic determination of the FDNPP core conditions based upon cesium, analyses of a larger number of samples from locations to the north and south of the FDNPP site (particularly time-resolved air filter samples) are needed. Published in 2016. This article is a U.S. Government work and is in the public domain in the USA.
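
The way an isotope ratio discriminates between source terms can be seen with a simple two-endmember mixing calculation. The sketch below is our illustration only; the endmember ratios are hypothetical placeholders, not the measured core values, and f is interpreted as the fraction of 137Cs contributed by endmember A.

```python
# Hypothetical two-endmember mixing sketch: if a sample's 135Cs/137Cs atom
# ratio R_mix is a 137Cs-weighted blend of two endmembers,
#   R_mix = f * R_A + (1 - f) * R_B,
# then the 137Cs fraction contributed by endmember A is
#   f = (R_mix - R_B) / (R_A - R_B).
def mixing_fraction(r_mix, r_a, r_b):
    return (r_mix - r_b) / (r_a - r_b)

# placeholder ratios (NOT the measured core values from the paper)
f = mixing_fraction(r_mix=0.376, r_a=0.40, r_b=0.35)
print(round(f, 3))
```

With more than two candidate sources (cores 1-3, spent fuel ponds), the same idea generalizes to a least-squares unmixing over several isotope ratios, which is why the 135Cs/137Cs versus 134Cs/137Cs correlation plot is informative.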

  19. Granger causal time-dependent source connectivity in the somatosensory network

    NASA Astrophysics Data System (ADS)

    Gao, Lin; Sommerlade, Linda; Coffman, Brian; Zhang, Tongsheng; Stephen, Julia M.; Li, Dichen; Wang, Jue; Grebogi, Celso; Schelter, Bjoern

    2015-05-01

    Exploration of transient Granger causal interactions in neural sources of electrophysiological activities provides deeper insights into brain information processing mechanisms. However, the underlying neural patterns are confounded by time-dependent dynamics, non-stationarity and observational noise contamination. Here we investigate transient Granger causal interactions using source time-series of somatosensory evoked magnetoencephalographic (MEG) responses elicited by air-puff stimulation of the right index finger, recorded with a 306-channel MEG system from 21 healthy subjects. A new time-varying connectivity approach, combining renormalised partial directed coherence with state space modelling, is employed to estimate fast-changing information flow among the sources. Source analysis confirmed that the somatosensory evoked MEG response was mainly generated from the contralateral primary somatosensory cortex (SI) and bilateral secondary somatosensory cortices (SII). Transient Granger causality shows a serial processing of somatosensory information: 1) from contralateral SI to contralateral SII, 2) from contralateral SI to ipsilateral SII, 3) from contralateral SII to contralateral SI, and 4) from contralateral SII to ipsilateral SII. These results are consistent with established anatomical connectivity between somatosensory regions and previous source modeling results, thereby providing empirical validation of the time-varying connectivity analysis. We argue that the suggested approach provides novel information regarding transient cortical dynamic connectivity, which previous approaches could not assess.

  20. Scale-dependent climatic drivers of human epidemics in ancient China.

    PubMed

    Tian, Huidong; Yan, Chuan; Xu, Lei; Büntgen, Ulf; Stenseth, Nils C; Zhang, Zhibin

    2017-12-05

    A wide range of climate change-induced effects have been implicated in the prevalence of infectious diseases. Disentangling causes and consequences, however, remains particularly challenging at historical time scales, for which the quality and quantity of most of the available natural proxy archives and written documentary sources often decline. Here, we reconstruct the spatiotemporal occurrence patterns of human epidemics for large parts of China and most of the last two millennia. Cold and dry climate conditions indirectly increased the prevalence of epidemics through the influences of locusts and famines. Our results further reveal that low-frequency, long-term temperature trends mainly contributed to negative associations with epidemics, while positive associations of epidemics with droughts, floods, locusts, and famines mainly coincided with both higher and lower frequency temperature variations. Nevertheless, unstable relationships between human epidemics and temperature changes were observed on relatively smaller time scales. Our study suggests that an intertwined, direct, and indirect array of biological, ecological, and societal responses to different aspects of past climatic changes strongly depended on the frequency domain and study period chosen.

  1. A One Dimensional, Time Dependent Inlet/Engine Numerical Simulation for Aircraft Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Garrard, Doug; Davis, Milt, Jr.; Cole, Gary

    1999-01-01

    The NASA Lewis Research Center (LeRC) and the Arnold Engineering Development Center (AEDC) have developed a closely coupled computer simulation system that provides a one dimensional, high frequency inlet/engine numerical simulation for aircraft propulsion systems. The simulation system, operating under the LeRC-developed Application Portable Parallel Library (APPL), closely coupled a supersonic inlet with a gas turbine engine. The supersonic inlet was modeled using the Large Perturbation Inlet (LAPIN) computer code, and the gas turbine engine was modeled using the Aerodynamic Turbine Engine Code (ATEC). Both LAPIN and ATEC provide a one dimensional, compressible, time dependent flow solution by solving the one dimensional Euler equations for the conservation of mass, momentum, and energy. Source terms are used to model features such as bleed flows, turbomachinery component characteristics, and inlet subsonic spillage while unstarted. High frequency events, such as compressor surge and inlet unstart, can be simulated with a high degree of fidelity. The simulation system was exercised using a supersonic inlet with sixty percent of the supersonic area contraction occurring internally, and a GE J85-13 turbojet engine.
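
The role of a source term in such a 1D Euler solver can be sketched in a few lines. This is our own simplified illustration, not LAPIN or ATEC code: one Lax-Friedrichs step of the 1D Euler equations U_t + F(U)_x = S, where S models a mass bleed as a sink term. The grid, time step, and bleed rate are arbitrary illustrative values.

```python
import numpy as np

gamma = 1.4
nx, dx, dt = 200, 0.01, 1e-4

def flux(U):
    # U rows: density, momentum, total energy
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def step(U, bleed=0.0):
    F = flux(U)
    Un = U.copy()
    # Lax-Friedrichs update on interior points
    Un[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) - dt / (2 * dx) * (F[:, 2:] - F[:, :-2])
    # source term: a bleed removes mass at a prescribed rate
    Un[0] -= dt * bleed * Un[0]
    return Un

# uniform quiescent state: unchanged without bleed, drained with it
U = np.array([np.ones(nx), np.zeros(nx), np.full(nx, 2.5)])
U1 = step(U)
print(np.abs(U1[:, 1:-1] - U[:, 1:-1]).max())
```

In the real codes the source terms also carry turbomachinery characteristics and spillage models, but they enter the discretized conservation laws in this same additive way.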

  2. Charge state distributions of oxygen and carbon in the energy range 1 to 300 keV/e observed with AMPTE/CCE in the magnetosphere

    NASA Technical Reports Server (NTRS)

    Kremser, G.; Stuedemann, W.; Wilken, B.; Gloeckler, G.; Hamilton, D. C.

    1985-01-01

    Observations of charge state distributions of oxygen and carbon are presented that were obtained with the charge-energy-mass spectrometer onboard the AMPTE/CCE spacecraft. Data were selected for two different local time sectors (apogee at 1300 LT and 0300 LT, respectively), three L-ranges (4-6, 6-8, and greater than 8), and quiet to moderately disturbed days (Kp less than or equal to 4). The charge state distributions reveal the existence of all charge states of oxygen and carbon in the magnetosphere. The relative importance of the different charge states strongly depends on L and much less on local time. The observations confirm that the solar wind and the ionosphere contribute to the oxygen population, whereas carbon only originates from the solar wind. The L-dependence of the charge state distributions can be interpreted in terms of these different ion sources and of charge exchange and diffusion processes that largely influence the distribution of oxygen and carbon in the magnetosphere.

  3. KINETICS OF LOW SOURCE REACTOR STARTUPS. PART II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    hurwitz, H. Jr.; MacMillan, D.B.; Smith, J.H.

    1962-06-01

    A computational technique is described for obtaining the probability distribution of power level during a low source reactor startup. The technique uses a mathematical model for the time-dependent probability distribution of neutron and precursor concentrations, with a finite neutron lifetime, one group of delayed neutron precursors, and no spatial dependence. Results obtained with the technique are given. (auth)

  4. Test methods for environment-assisted cracking

    NASA Astrophysics Data System (ADS)

    Turnbull, A.

    1992-03-01

    Test methods for assessing environment-assisted cracking of metals in aqueous solution are described. Their advantages and disadvantages are examined, and the interrelationship between results from different test methods is discussed. The differences in susceptibility to cracking occasionally observed between the various mechanical test methods often arise from variation in environmental parameters between the different test conditions and from the lack of adequate specification, monitoring, and control of environmental variables. Time is also a significant factor when comparing results from short-term tests with long exposure tests. In addition, intrinsic differences in the important mechanical variables, such as strain rate, associated with the various mechanical test methods can change the apparent sensitivity of the material to stress corrosion cracking. The increasing economic pressure for more accelerated testing is in conflict with the characteristic time dependence of corrosion processes. Unreliable results may be inevitable in some cases, but improved understanding of mechanisms and the development of mechanistically based models of environment-assisted cracking that incorporate the key mechanical, material, and environmental variables can provide the framework for a more realistic interpretation of short-term data.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.

    In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.

  6. A High-Order Low-Order Algorithm with Exponentially Convergent Monte Carlo for Thermal Radiative Transfer

    DOE PAGES

    Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.

    2016-10-21

    In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.

  7. The Stochastic X-Ray Variability of the Accreting Millisecond Pulsar MAXI J0911-655

    NASA Technical Reports Server (NTRS)

    Bult, Peter

    2017-01-01

    In this work, I report on the stochastic X-ray variability of the 340 Hz accreting millisecond pulsar MAXI J0911-655. Analyzing pointed observations of the XMM-Newton and NuSTAR observatories, I find that the source shows broad band-limited stochastic variability in the 0.01-10 Hz range, with a total fractional variability of approximately 24 percent (rms) in the 0.4-3 keV energy band that increases to approximately 40 percent (rms) in the 3-10 keV band. Additionally, a pair of harmonically related quasi-periodic oscillations (QPOs) are discovered. The fundamental frequency of this harmonic pair is observed between 62 and 146 millihertz. Like the band-limited noise, the amplitudes of the QPOs show a steep increase as a function of energy; this suggests that they share a similar origin, likely the inner accretion flow. Based on their energy dependence and frequency relation with respect to the noise terms, the QPOs are identified as low-frequency oscillations and discussed in terms of the Lense-Thirring precession model.
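
A QPO of this kind shows up as a peak in the periodogram of the light curve. The sketch below is our toy illustration, not the paper's timing pipeline: the sample rate, QPO frequency, amplitudes, and noise level are all arbitrary made-up values.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 10.0                              # Hz, illustrative sampling rate
t = np.arange(0.0, 200.0, 1.0 / fs)
qpo_freq = 0.1                         # Hz, arbitrary illustrative QPO frequency
# synthetic light curve: mean rate + weak oscillation + white noise
counts = 1.0 + 0.3 * np.sin(2 * np.pi * qpo_freq * t) + 0.5 * rng.normal(size=t.size)

# periodogram of the mean-subtracted light curve
power = np.abs(np.fft.rfft(counts - counts.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak = freqs[np.argmax(power)]
print(peak)
```

Real analyses fit the band-limited noise continuum plus Lorentzian QPO components rather than reading off the raw peak, but the periodogram peak is where that modeling starts.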

  8. Large Torque Variations in Two Soft Gamma Repeaters

    NASA Technical Reports Server (NTRS)

    Woods, Peter M.; Kouveliotou, Chryssa; Gogus, Ersin; Finger, Mark H.; Swank, Jean; Markwardt, Craig B.; Hurley, Kevin; vanderKlis, Michiel

    2002-01-01

    We have monitored the pulse frequencies of the two soft gamma repeaters SGR 1806-20 and SGR 1900+14 through the beginning of year 2001 using primarily Rossi X-Ray Timing Explorer Proportional Counter Array observations. In both sources, we observe large changes in the spin-down torque, up to a factor of approximately 4, which persist for several months. Using long-baseline phase-connected timing solutions as well as the overall frequency histories, we construct torque noise power spectra for each SGR (Soft Gamma Repeater). The power spectrum of each source is very red (power-law slope approximately -3.5). The torque noise power levels are consistent with some accreting systems on timescales of approximately 1 yr, yet the full power spectrum is much steeper in frequency than any known accreting source. To the best of our knowledge, torque noise power spectra with a comparably steep frequency dependence have been seen only in young, glitching radio pulsars (e.g., Vela). The observed changes in spin-down rate do not correlate with burst activity; therefore, the physical mechanisms behind each phenomenon are also likely unrelated. Within the context of the magnetar model, seismic activity cannot account for both the bursts and the long-term torque changes unless the seismically active regions are decoupled from one another.

  9. Memory behaviors of entropy production rates in heat conduction

    NASA Astrophysics Data System (ADS)

    Li, Shu-Nan; Cao, Bing-Yang

    2018-02-01

    Based on the relaxation time approximation and a first-order expansion, memory behaviors in heat conduction are found between the macroscopic and Boltzmann-Gibbs-Shannon (BGS) entropy production rates, with exponentially decaying memory kernels. In the frameworks of classical irreversible thermodynamics (CIT) and BGS statistical mechanics, the memory dependence on the integrated history is unidirectional, while for the extended irreversible thermodynamics (EIT) and BGS entropy production rates, the memory dependences are bidirectional and coexist with the linear terms. When the macroscopic and microscopic relaxation times satisfy a specific relationship, the entropic memory dependences are eliminated. There also exist initial effects in entropic memory behaviors, which decay exponentially. The second-order term, which can be understood as the global non-equilibrium degree, is also discussed. Its effects consist of three parts: a memory dependence, an initial value, and a linear term. The corresponding memory kernels are still exponential, and the initial effects of the global non-equilibrium degree also decay exponentially.
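
The exponentially decaying memory kernel described above can be made concrete with a toy convolution. This is our own numerical illustration (not the paper's derivation): a memory-weighted response m(t) = (1/tau) * int_0^t exp(-(t-s)/tau) f(s) ds, which for a step input f = 1 gives m(t) = 1 - exp(-t/tau), so memory of the initial state decays exponentially. tau and the grid are arbitrary.

```python
import numpy as np

tau, dt = 0.5, 1e-3
t = np.arange(0.0, 5.0, dt)
f = np.ones_like(t)                        # step input history
kernel = np.exp(-t / tau) / tau            # normalized exponential memory kernel
# discrete approximation of the convolution integral
m = np.convolve(f, kernel)[: t.size] * dt
print(round(float(m[-1]), 3))
```

The numerical result tracks the analytic 1 - exp(-t/tau) to within the discretization error, and the deviation at early times is exactly the exponentially decaying "initial effect" the abstract refers to.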

  10. Skewness in large-scale structure and non-Gaussian initial conditions

    NASA Technical Reports Server (NTRS)

    Fry, J. N.; Scherrer, Robert J.

    1994-01-01

    We compute the skewness of the galaxy distribution arising from the nonlinear evolution of arbitrary non-Gaussian initial conditions to second order in perturbation theory, including the effects of nonlinear biasing. The result contains a term identical to that for a Gaussian initial distribution plus terms which depend on the skewness and kurtosis of the initial conditions. The results are model dependent; we present calculations for several toy models. At late times, the leading contribution from the initial skewness decays away relative to the other terms and becomes increasingly unimportant, but the contribution from initial kurtosis, previously overlooked, has the same time dependence as the Gaussian terms. Observations of a linear dependence of the normalized skewness on the rms density fluctuation therefore do not necessarily rule out initially non-Gaussian models. We also show that with non-Gaussian initial conditions the first correction to linear theory for the mean square density fluctuation is larger than for Gaussian models.

  11. An extension of the Lighthill theory of jet noise to encompass refraction and shielding

    NASA Technical Reports Server (NTRS)

    Ribner, Herbert S.

    1995-01-01

    A formalism for jet noise prediction is derived that includes the refractive 'cone of silence' and other effects; outside the cone it approximates the simple Lighthill format. A key step is deferral of the simplifying assumption of uniform density in the dominant 'source' term. The result is conversion to a convected wave equation retaining the basic Lighthill source term. The main effect is to amend the Lighthill solution to allow for refraction by mean flow gradients, achieved via a frequency-dependent directional factor. A general formula for power spectral density emitted from unit volume is developed as the Lighthill-based value multiplied by a squared 'normalized' Green's function (the directional factor), referred to a stationary point source. The convective motion of the sources, with its powerful amplifying effect, also directional, is already accounted for in the Lighthill format: wave convection and source convection are decoupled. The normalized Green's function appears to be near unity outside the refraction-dominated 'cone of silence'; this validates our long-term practice of using Lighthill-based approaches outside the cone, with extension inside via the Green's function. The function is obtained either experimentally (injected 'point' source) or numerically (computational aeroacoustics). Approximation by unity seems adequate except near the cone and except when there are shrouding jets: in that case the difference from unity quantifies the shielding effect. Further extension yields dipole and monopole source terms (cf. Morfey, Mani, and others) when the mean flow possesses density gradients (e.g., hot jets).

  12. The large discretization step method for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  13. Flowsheets and source terms for radioactive waste projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forsberg, C.W.

    1985-03-01

    Flowsheets and source terms used to generate radioactive waste projections in the Integrated Data Base (IDB) Program are given. Volumes of each waste type generated per unit product throughput have been determined for the following facilities: uranium mining, UF6 conversion, uranium enrichment, fuel fabrication, boiling-water reactors (BWRs), pressurized-water reactors (PWRs), and fuel reprocessing. Source terms for DOE/defense wastes have been developed. Expected wastes from typical decommissioning operations for each facility type have been determined. All wastes are also characterized by isotopic composition at time of generation and by general chemical composition. 70 references, 21 figures, 53 tables.

  14. Rainfall-runoff response informed by exact solutions of Boussinesq equation on hillslopes

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S., Jr.; Porporato, A. M.

    2017-12-01

    The Boussinesq equation offers a powerful approach for understanding the flow dynamics of unconfined aquifers. Though this nonlinear equation allows for a concise representation of both soil and geomorphological controls on groundwater flow, it has only been solved exactly for a limited number of initial and boundary conditions. These solutions do not include source/sink terms (evapotranspiration, recharge, and seepage to bedrock) and are typically limited to horizontal aquifers. Here we present a class of exact solutions that are general to sloping aquifers and a time-varying source/sink term. By incorporating the source/sink term, they may describe aquifers with time-varying recharge over seasonal or weekly time scales, as well as a loss of water from seepage to the bedrock interface, which is a common feature in hillslopes. These new solutions shed light on the hysteretic relationship between streamflow and groundwater and on the behavior of hydrograph recession curves, thus providing a robust basis for deriving runoff curves for the partition of rainfall into infiltration and runoff.
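
The role of the source/sink term in the Boussinesq equation can be illustrated numerically. The sketch below is our own toy explicit finite-difference scheme (not the paper's exact solutions, and restricted to a horizontal aquifer) for f * dh/dt = K * d/dx(h * dh/dx) + N(t), with no-flow boundaries; all parameter values are illustrative.

```python
import numpy as np

K, f = 1.0, 0.3            # hydraulic conductivity, drainable porosity (illustrative)
nx, dx, dt = 100, 1.0, 0.01

def step(h, N):
    # q = K * h * dh/dx evaluated at cell faces; no-flow at the two ends
    q = np.zeros(nx + 1)
    hm = 0.5 * (h[1:] + h[:-1])
    q[1:-1] = K * hm * (h[1:] - h[:-1]) / dx
    # f * dh/dt = dq/dx + N  (N > 0: recharge, N < 0: seepage loss)
    return h + dt / f * ((q[1:] - q[:-1]) / dx + N)

# initial groundwater mound on a uniform water table
h = 1.0 + 0.5 * np.exp(-((np.arange(nx) - 50.0) ** 2) / 50.0)
h1 = step(h, N=0.0)
# with no-flow boundaries and N = 0, stored water volume is conserved
print(abs(h1.mean() - h.mean()))
```

With N = 0 the mound simply diffuses (the peak drops while volume is conserved); a constant N raises the mean head by exactly dt*N/f per step, which is the bookkeeping that lets the exact solutions absorb time-varying recharge and bedrock seepage.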

  15. Fast SiPM Readout of the PANDA TOF Detector

    NASA Astrophysics Data System (ADS)

    Böhm, M.; Lehmann, A.; Motz, S.; Uhlig, F.

    2016-05-01

    For the identification of low-momentum charged particles and for event timing purposes, a barrel Time-of-Flight (TOF) detector surrounding the interaction point is planned for the PANDA experiment at FAIR. Since the boundary conditions in terms of available radial space and radiation length are quite strict, the favored layout is a hodoscope composed of several thousand small scintillating tiles (SciTils) read out by silicon photomultipliers (SiPMs). A time resolution of well below 100 ps is aimed for. With the originally proposed 30 × 30 × 5 mm3 SciTils read out by two single 3 × 3 mm2 SiPMs at the rims of the scintillator, the targeted time resolution can just be reached, but with a considerable position dependence across the scintillator surface. In this paper we discuss other design options to further improve the time resolution and its homogeneity. It will be shown that with wide scintillating rods (SciRods) with a size of, e.g., 50 × 30 × 5 mm3 or longer, read out on opposite sides by a chain of four serially connected SiPMs, a time resolution down to 50 ps can be reached without problems. In addition, the position dependence of the time resolution is negligible. These SciRods were tested in the laboratory with electrons from a 90Sr source and under real experimental conditions in a particle beam at CERN. The measured time resolutions using fast BC418 or BC420 plastic scintillators wrapped in aluminum foil were consistently between 45 and 75 ps, depending on the SciRod design. This is a significant improvement compared to the original SciTil layout.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.

    Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  17. Gridded national inventory of U.S. methane emissions

    DOE PAGES

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; ...

    2016-11-16

    Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
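    The disaggregation step described above can be sketched numerically. The following is a hypothetical illustration, not the study's actual procedure: a national annual total for one source type is allocated onto a grid using spatial proxy weights, then split across months (all values, grid dimensions, and the flat seasonality are made-up assumptions).

```python
import numpy as np

# Hypothetical sketch: allocate a national annual total for one source type
# onto a grid via spatial proxy weights, then split it across months.
national_total = 6.3                          # assumed national total, Tg CH4/yr
rng = np.random.default_rng(0)
proxy = rng.random((50, 60))                  # stand-in spatial proxy (e.g. well counts)
monthly_profile = np.full(12, 1.0 / 12.0)     # flat seasonality as a placeholder

spatial_weights = proxy / proxy.sum()         # weights sum to 1 over the grid
gridded_annual = national_total * spatial_weights                  # per-cell, Tg/yr
gridded_monthly = gridded_annual[None] * monthly_profile[:, None, None]

# Mass is conserved by construction: the gridded fields sum back to the total.
assert np.isclose(gridded_annual.sum(), national_total)
assert np.isclose(gridded_monthly.sum(), national_total)
```

    In the actual inventory, each source type would use its own proxy database and temporal profile, and the per-type grids would then be summed.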

  18. Transparent mediation-based access to multiple yeast data sources using an ontology driven interface.

    PubMed

    Briache, Abdelaali; Marrakchi, Kamar; Kerzazi, Amine; Navas-Delgado, Ismael; Rossi Hassani, Badr D; Lairini, Khalid; Aldana-Montes, José F

    2012-01-25

    Saccharomyces cerevisiae is recognized as a model system representing a simple eukaryote whose genome can be easily manipulated. Information solicited by scientists on its biological entities (Proteins, Genes, RNAs...) is scattered within several data sources like SGD, Yeastract, CYGD-MIPS, BioGrid, PhosphoGrid, etc. Because of the heterogeneity of these sources, querying them separately and then manually combining the returned results is a complex and time-consuming task for biologists, most of whom are not bioinformatics experts. It also reduces and limits the use that can be made of the available data. To provide transparent and simultaneous access to yeast sources, we have developed YeastMed: an XML and mediator-based system. In this paper, we present our approach in developing this system, which takes advantage of SB-KOM to perform the query transformation needed and a set of Data Services to reach the integrated data sources. The system is composed of a set of modules that depend heavily on XML and Semantic Web technologies. User queries are expressed in terms of a domain ontology through a simple form-based web interface. YeastMed is the first mediation-based system specific for integrating yeast data sources. It was conceived mainly to help biologists find relevant data simultaneously from multiple data sources. It has a biologist-friendly interface that is easy to use. The system is available at http://www.khaos.uma.es/yeastmed/.

  19. The roles of time and displacement in velocity-dependent volumetric strain of fault zones

    USGS Publications Warehouse

    Beeler, N.M.; Tullis, T.E.

    1997-01-01

    The relationship between measured friction μA and volumetric strain during frictional sliding was determined using a rate- and state-variable-dependent friction constitutive equation, a common work balance relating friction and volume change, and two types of experimental faults: initially bare surfaces of Westerly granite and rock surfaces separated by a 1 mm layer of < 90 μm Westerly granite gouge. The constitutive equation is the sum of a constant term representing the nominal resistance to sliding and two smaller terms: a rate-dependent term representing the shear viscosity of the fault surface (direct effect), and a term which represents variations in the area of contact (evolution effect). The work balance relationship requires that μA differs from the frictional resistance that leads to shear heating by the derivative of fault normal displacement with respect to shear displacement, dδn/dδs. An implication of this relationship is that the rate dependence of dδn/dδs contributes to the rate dependence of μA. Experiments show changes in sliding velocity lead to changes in both fault strength and volume. Analysis of data with the rate and state equations combined with the work balance relationship precludes the conventional interpretation of the direct effect in the rate and state variable constitutive equations. Consideration of a model bare surface fault consisting of an undeformable indentor sliding on a deformable surface reveals a serious flaw in the work balance relationship if volume change is time-dependent. For the model, at zero slip rate indentation creep under the normal load leads to time-dependent strengthening of the fault surface but, according to the work balance relationship, no work is done because compaction or dilatancy can only be induced by shearing. Additional tests on initially bare surfaces and gouges show that fault normal strain in experiments is time-dependent, consistent with the model. This time-dependent fault normal strain, which is not accounted for in the work balance relationship, explains the inconsistency between the constitutive equations and the work balance. For initially bare surface faults, all rate dependence of volume change is due to time dependence. Similar results are found for gouge. We conclude that μA reflects the frictional resistance that results in shear heating, and no correction needs to be made for the volume changes. The result that time-dependent volume changes do not contribute to μA is a general result and extends beyond these experiments, the simple indentor model, and the particular constitutive equations used to illustrate the principle.
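    The three-term constitutive equation described above corresponds to the standard Dieterich-Ruina rate-and-state friction law, which in common (assumed) notation reads

```latex
\mu = \mu_0 + a\,\ln\frac{V}{V_0} + b\,\ln\frac{V_0\,\theta}{D_c},
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}
```

    where μ0 is the nominal resistance to sliding at reference velocity V0, the a-term is the rate-dependent direct effect, the b-term is the evolution effect representing changes in contact area, θ is the state variable, and Dc is a characteristic slip distance.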

  20. Quadratic time dependent Hamiltonians and separation of variables

    NASA Astrophysics Data System (ADS)

    Anzaldo-Meneses, A.

    2017-06-01

    Time dependent quantum problems defined by quadratic Hamiltonians are solved using canonical transformations. The Green's function is obtained and a comparison with the classical Hamilton-Jacobi method leads to important geometrical insights like exterior differential systems, Monge cones and time dependent Gaussian metrics. The Wei-Norman approach is applied using unitary transformations defined in terms of generators of the associated Lie groups, here the semi-direct product of the Heisenberg group and the symplectic group. A new explicit relation for the unitary transformations is given in terms of a finite product of elementary transformations. The sequential application of adequate sets of unitary transformations leads naturally to a new separation of variables method for time dependent Hamiltonians, which is shown to be related to the Inönü-Wigner contraction of Lie groups. The new method also allows a better understanding of interacting particles or coupled modes and opens an alternative way to analyze topological phases in driven systems.
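    For concreteness, a generic one-dimensional quadratic time-dependent Hamiltonian of the type considered can be written (coefficient names are assumed for illustration) as

```latex
\hat H(t) = a(t)\,\hat p^{\,2} + b(t)\,\bigl(\hat q\hat p + \hat p\hat q\bigr)
          + c(t)\,\hat q^{\,2} + d(t)\,\hat p + e(t)\,\hat q + f(t)
```

    Because the Heisenberg equations of motion generated by such a Hamiltonian are linear in q̂ and p̂, canonical and unitary (Wei-Norman) transformations built from the Heisenberg and symplectic group generators can solve the dynamics exactly.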

  1. Time-dependent variational approach in terms of squeezed coherent states: Implication to semi-classical approximation

    NASA Technical Reports Server (NTRS)

    Tsue, Yasuhiko

    1994-01-01

    A general framework for time-dependent variational approach in terms of squeezed coherent states is constructed with the aim of describing quantal systems by means of classical mechanics including higher order quantal effects with the aid of canonicity conditions developed in the time-dependent Hartree-Fock theory. The Maslov phase occurring in a semi-classical quantization rule is investigated in this framework. In the limit of a semi-classical approximation in this approach, it is definitely shown that the Maslov phase has a geometric nature analogous to the Berry phase. It is also indicated that this squeezed coherent state approach is a possible way to go beyond the usual WKB approximation.

  2. Generalized analytical solutions to sequentially coupled multi-species advective-dispersive transport equations in a finite domain subject to an arbitrary time-dependent source boundary condition

    NASA Astrophysics Data System (ADS)

    Chen, Jui-Sheng; Liu, Chen-Wuing; Liang, Ching-Ping; Lai, Keng-Hsin

    2012-08-01

    Multi-species advective-dispersive transport equations sequentially coupled with first-order decay reactions are widely used to describe the transport and fate of decay chain contaminants such as radionuclides, chlorinated solvents, and nitrogen. Although researchers have presented various methods for analytically solving this transport equation system, the currently available solutions are mostly limited to an infinite or a semi-infinite domain. A generalized analytical solution for the coupled multi-species transport problem in a finite domain associated with an arbitrary time-dependent source boundary is not available in the published literature. In this study, we first derive generalized analytical solutions for this transport problem in a finite domain involving an arbitrary number of species subject to an arbitrary time-dependent source boundary. Subsequently, we adopt these derived generalized analytical solutions to obtain explicit analytical solutions for a special-case transport scenario involving an exponentially decaying Bateman type time-dependent source boundary. We test the derived special-case solutions against the previously published coupled 4-species transport solution and the corresponding numerical solution with coupled 10-species transport to conduct the solution verification. Finally, we compare the new analytical solutions derived for a finite domain against the published analytical solutions derived for a semi-infinite domain to illustrate the effect of the exit boundary condition on coupled multi-species transport with an exponentially decaying source boundary. The results show noticeable discrepancies between the breakthrough curves of all the species in the immediate vicinity of the exit boundary obtained from the analytical solutions for a finite domain and a semi-infinite domain for the dispersion-dominated condition.
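    The sequentially coupled transport system discussed here is typically written as follows (symbols assumed; retardation and stoichiometric yield factors are omitted for brevity):

```latex
\frac{\partial C_i}{\partial t}
  = D\,\frac{\partial^2 C_i}{\partial x^2}
  - v\,\frac{\partial C_i}{\partial x}
  - k_i C_i + k_{i-1} C_{i-1},
\qquad i = 1,\dots,N, \quad k_0 C_0 \equiv 0
```

    where C_i is the concentration of species i, D the dispersion coefficient, v the seepage velocity, and k_i the first-order decay constant; each species is produced by the decay of its parent. The time-dependent source enters through the inlet condition, e.g. a Bateman-type exponentially decaying boundary at x = 0, and the finite-domain problem adds an exit boundary condition at x = L.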

  3. Adopting Open Source Software to Address Software Risks during the Scientific Data Life Cycle

    NASA Astrophysics Data System (ADS)

    Vinay, S.; Downs, R. R.

    2012-12-01

    Software enables the creation, management, storage, distribution, discovery, and use of scientific data throughout the data lifecycle. However, the capabilities offered by software also present risks for the stewardship of scientific data, since future access to digital data is dependent on the use of software. From operating systems to applications for analyzing data, the dependence of data on software presents challenges for the stewardship of scientific data. Adopting open source software provides opportunities to address some of the proprietary risks of data dependence on software. For example, in some cases, open source software can be deployed to avoid licensing restrictions for using, modifying, and transferring proprietary software. The availability of the source code of open source software also enables the inclusion of modifications, which may be contributed by various community members who are addressing similar issues. Likewise, an active community that is maintaining open source software can be a valuable source of help, providing an opportunity to collaborate to address common issues facing adopters. As part of the effort to meet the challenges of software dependence for scientific data stewardship, risks from software dependence have been identified that exist at various stages of the data lifecycle. The identification of these risks should enable the development of plans for mitigating software dependencies, where applicable, using open source software, and improve understanding of software dependency risks for scientific data and how they can be reduced during the data life cycle.

  4. Structure of supersonic jet flow and its radiated sound

    NASA Technical Reports Server (NTRS)

    Mankbadi, Reda R.; Hayer, M. Ehtesham; Povinelli, Louis A.

    1994-01-01

    The present paper explores the use of large-eddy simulations as a tool for predicting noise from first principles. A high-order numerical scheme is used to perform large-eddy simulations of a supersonic jet flow with emphasis on capturing the time-dependent flow structure representing the sound source. The wavelike nature of this structure under random inflow disturbances is demonstrated. This wavelike structure is then enhanced by taking the inflow disturbances to be purely harmonic. Application of Lighthill's theory to calculate the far-field noise, with the sound source obtained from the calculated time-dependent near field, is demonstrated. Alternative approaches to coupling the near-field sound source to the far-field sound are discussed.
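    Lighthill's acoustic analogy, applied here with the computed time-dependent near field as the source, recasts the flow equations as a wave equation driven by a quadrupole source term:

```latex
\frac{\partial^2 \rho'}{\partial t^2} - c_0^2\,\nabla^2 \rho'
  = \frac{\partial^2 T_{ij}}{\partial x_i\,\partial x_j},
\qquad
T_{ij} = \rho u_i u_j + \bigl(p' - c_0^2 \rho'\bigr)\delta_{ij} - \tau_{ij}
```

    where ρ' and p' are the density and pressure fluctuations, c0 the ambient sound speed, and T_ij the Lighthill stress tensor, here supplied by the simulated turbulence; the far-field sound then follows from a retarded-time volume integral over the source region.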

  5. Effect of source location and listener location on ILD cues in a reverberant room

    NASA Astrophysics Data System (ADS)

    Ihlefeld, Antje; Shinn-Cunningham, Barbara G.

    2004-05-01

    Short-term interaural level differences (ILDs) were analyzed for simulations of the signals that would reach a listener in a reverberant room. White noise was convolved with manikin head-related impulse responses measured in a classroom to simulate different locations of the source relative to the manikin and different manikin positions in the room. The ILDs of the signals were computed within each third-octave band over a relatively short time window to investigate how reliably ILD cues encode source laterality. Overall, the mean of the ILD magnitude increases with lateral angle and decreases with distance, as expected. Increasing reverberation decreases the mean ILD magnitude and increases the variance of the short-term ILD, so that the spatial information carried by ILD cues is degraded by reverberation. These results suggest that the mean ILD is not a reliable cue for determining source laterality in a reverberant room. However, by taking into account both the mean and variance, the distribution of high-frequency short-term ILDs provides some spatial information. This analysis suggests that, in order to use ILDs to judge source direction in reverberant space, listeners must accumulate information about how the short-term ILD varies over time. [Work supported by NIDCD and AFOSR.]
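    The short-term ILD computation described above can be sketched as follows. This is a simplified assumption-laden illustration: it uses broadband window energies rather than third-octave bands, and synthetic attenuated noise in place of HRIR-convolved signals.

```python
import numpy as np

# Sketch of a short-term ILD analysis (assumptions: broadband energy rather
# than third-octave bands; synthetic ear signals; illustrative window length).
fs = 44100
rng = np.random.default_rng(0)
left = rng.standard_normal(fs)             # stand-in left-ear signal, 1 s
right = 0.5 * rng.standard_normal(fs)      # right ear attenuated by ~6 dB

win = int(0.005 * fs)                      # 5 ms analysis windows
n_win = len(left) // win
ild = np.empty(n_win)
for i in range(n_win):
    seg = slice(i * win, (i + 1) * win)
    e_left = np.sum(left[seg] ** 2)        # short-term energy at the left ear
    e_right = np.sum(right[seg] ** 2)      # short-term energy at the right ear
    ild[i] = 10.0 * np.log10(e_left / e_right)   # ILD in dB for this window

# Both the mean and the spread of the short-term ILD carry spatial information;
# reverberation would lower the mean and widen the spread.
print(ild.mean(), ild.std())
```

    In the study, the same per-window statistic would be computed separately in each third-octave band of the binaural room impulse-response-convolved signals.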

  6. Opposing roles for GABAA and GABAC receptors in short-term memory formation in young chicks.

    PubMed

    Gibbs, M E; Johnston, G A R

    2005-01-01

    The inhibitory neurotransmitter GABA has both inhibitory and enhancing effects on short-term memory for a bead discrimination task in the young chick. Low doses of GABA (1-3 pmol/hemisphere) injected into the multimodal association area of the chick forebrain inhibit strongly reinforced memory, whereas higher doses (30-100 pmol/hemisphere) enhance weakly reinforced memory. The effect of both high and low doses of GABA is clearly on short-term memory in terms of both the time of injection and the time at which the memory loss occurs. We argue on the basis of relative sensitivities to GABA and to selective GABA receptor antagonists that low doses of GABA act at GABAC receptors (EC50 approximately 1 microM) and the higher doses of GABA act via GABAA receptors (EC50 approximately 10 microM). The selective GABAA receptor antagonist bicuculline inhibited strongly reinforced memory in a dose- and time-dependent manner, whereas the selective GABAC receptor antagonists TPMPA and P4MPA enhanced weakly reinforced memory in a dose- and time-dependent manner. Confirmation that different levels of GABA affect different receptor subtypes was demonstrated by the shift in the GABA dose-response curves to the selective antagonists. It is clear that GABA is involved in the control of short-term memory formation and its action, enhancing or inhibiting, depends on the level of GABA released at the time of learning.

  7. The BGS magnetic field candidate models for the 12th generation IGRF

    NASA Astrophysics Data System (ADS)

    Hamilton, Brian; Ridley, Victoria A.; Beggan, Ciarán D.; Macmillan, Susan

    2015-05-01

    We describe the candidate models submitted by the British Geological Survey for the 12th generation International Geomagnetic Reference Field. These models are extracted from a spherical harmonic `parent model' derived from vector and scalar magnetic field data from satellite and observatory sources. These data cover the period 2009.0 to 2014.7 and include measurements from the recently launched European Space Agency (ESA) Swarm satellite constellation. The parent model's internal field time dependence for degrees 1 to 13 is represented by order 6 B-splines with knots at yearly intervals. The parent model's degree 1 external field time dependence is described by periodic functions for the annual and semi-annual signals and by dependence on the 20-min Vector Magnetic Disturbance index. Signals induced by these external fields are also parameterized. Satellite data are weighted by spatial density and by two different noise estimators: (a) by standard deviation along segments of the satellite track and (b) a larger-scale noise estimator defined in terms of a measure of vector activity at the geographically closest magnetic observatories to the sample point. Forecasting of the magnetic field secular variation beyond the span of data is by advection of the main field using core surface flows.

  8. Religion in SETI Communications

    NASA Astrophysics Data System (ADS)

    Pay, R.

    The prospect of millions of civilizations in the Galaxy raises the probability of receiving communications in the Search for Extraterrestrial Intelligence (SETI). However, much depends on the average lifetime of planetary civilizations. For a lifetime of 500 years, an optimistic forecast would predict about 65 civilizations in the Galaxy at any one time, separated by 5,000 light years. No prospect of communication. For a lifetime of 10 million years, over a million civilizations would be spaced 180 light years apart. Communication among them is feasible. This indicates that extraterrestrial communications depend on civilizations achieving long term stability, probably by evolving a global religion that removes sources of religious strife. Stability also requires an ethic supporting universal rights, nonviolence, empathy and cooperation. As this ethic will be expressed in the planet-wide religion, it will lead to offers of support to other civilizations struggling to gain stability. As stable civilizations will be much advanced scientifically, understanding the religious concepts that appear in their communications will depend on how quantum mechanics, biological evolution, and the creation of the universe at a point in time are incorporated into their religion. Such a religion will view creation as intentional rather than accidental (the atheistic alternative) and will find the basis for its natural theology in the intention revealed by the physical laws of the universe.

  9. Generalized reference fields and source interpolation for the difference formulation of radiation transport

    NASA Astrophysics Data System (ADS)

    Luu, Thomas; Brooks, Eugene D.; Szőke, Abraham

    2010-03-01

    In the difference formulation for the transport of thermally emitted photons the photon intensity is defined relative to a reference field, the black body at the local material temperature. This choice of reference field combines the separate emission and absorption terms that nearly cancel, thereby removing the dominant cause of noise in the Monte Carlo solution of thick systems, but introduces time and space derivative source terms that cannot be determined until the end of the time step. The space derivative source term can also lead to noise induced crashes under certain conditions where the real physical photon intensity differs strongly from a black body at the local material temperature. In this paper, we consider a difference formulation relative to the material temperature at the beginning of the time step, or in cases where an alternative temperature better describes the radiation field, that temperature. The result is a method where iterative solution of the material energy equation is efficient and noise induced crashes are avoided. We couple our generalized reference field scheme with an ad hoc interpolation of the space derivative source, resulting in an algorithm that produces the correct flux between zones as the physical system approaches the thick limit.

  10. Very slow lava extrusion continued for more than five years after the 2011 Shinmoedake eruption observed from SAR interferometry

    NASA Astrophysics Data System (ADS)

    Ozawa, T.; Miyagi, Y.

    2017-12-01

    Shinmoe-dake, located in SW Japan, erupted in January 2011 and lava accumulated in the crater (e.g., Ozawa and Kozono, EPS, 2013). The last Vulcanian eruption occurred in September 2011, and no eruption has occurred since then. Miyagi et al. (GRL, 2014) analyzed TerraSAR-X and Radarsat-2 SAR data acquired after the last eruption and found continuous inflation in the crater. The inflation decayed with time but had not terminated by May 2013. Since the time series of the inflation volume change rate fitted well to an exponential function with a constant term, we suggested that lava extrusion had continued over the long term, due to deflation of a shallow magma source and to magma supply from a deeper source. To investigate the deformation after that period, we applied InSAR to Sentinel-1 and ALOS-2 SAR data. Inflation decayed further and had almost terminated by the end of 2016, meaning that this deformation continued for more than five years after the last eruption. We have found that the time series of the inflation volume change rate fits better to a double-exponential function than to a single-exponential function with a constant term. The exponential component with the short time constant had almost settled within one year of the last eruption. Although the InSAR result from TerraSAR-X data of November 2011 and May 2013 indicated deflation of a shallow source under the crater, such deformation has not been obtained from recent SAR data. This suggests that this component was due to deflation of a shallow magma source with excess pressure. In this study, we found the possibility that the long-term component also decayed exponentially. This factor may be deflation of a deep source or delayed vesiculation.
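    The single- versus double-exponential model comparison described above can be sketched numerically. The following is an illustrative example on synthetic data, not the study's actual series: for fixed (assumed) time constants, the amplitudes of one or two decay terms are fit by linear least squares and the residuals compared.

```python
import numpy as np

# Synthetic volume-change-rate series with a fast and a slow decay component
# (all time constants and amplitudes are assumed for illustration).
t = np.linspace(0, 6, 60)                     # years since the last eruption
true_rate = 1.0 * np.exp(-t / 0.5) + 0.3 * np.exp(-t / 3.0)
rate = true_rate + 0.01 * np.random.default_rng(1).standard_normal(t.size)

def fit_rss(basis):
    """Least-squares amplitudes for the given exponential basis; return RSS."""
    A = np.column_stack(basis)
    coef, *_ = np.linalg.lstsq(A, rate, rcond=None)
    return np.sum((rate - A @ coef) ** 2)

rss_single = fit_rss([np.exp(-t / 1.0)])                    # one decay term
rss_double = fit_rss([np.exp(-t / 0.5), np.exp(-t / 3.0)])  # two decay terms

# The two-term basis captures both the fast (shallow-source) and slow
# (deep-source or vesiculation) components, so its residual is much smaller.
print(rss_double < rss_single)
```

    In practice the time constants would themselves be free parameters of a nonlinear fit; fixing them here keeps the sketch linear and self-contained.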

  11. Inverse analysis and regularisation in conditional source-term estimation modelling

    NASA Astrophysics Data System (ADS)

    Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.

    2014-05-01

    Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter and the uncertainty in the solution is determined based on the width of the credible intervals. The width of the credible intervals is significantly reduced with the inclusion of a smoothing prior and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals are not dependent upon a previous solution and better predict characteristics for higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
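    A minimal numerical sketch of the two Tikhonov variants compared above, applied to a generic discretised Fredholm problem A x = b; the kernel, noise level, and regularisation parameter are illustrative assumptions, not the CSE setup itself.

```python
import numpy as np

# Generic ill-posed discretised Fredholm problem: a smoothing kernel matrix A
# maps the unknown x onto noisy data b (all parameters are illustrative).
rng = np.random.default_rng(0)
n = 40
s = np.linspace(0, 1, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.02)   # smoothing kernel
x_true = np.sin(2 * np.pi * s)
b = A @ x_true + 1e-3 * rng.standard_normal(n)          # perturbed data

lam = 1e-2
I = np.eye(n)
L = np.diff(I, axis=0)                                  # first-difference operator

# Zeroth-order Tikhonov: penalise the solution norm ||x||.
x0 = np.linalg.solve(A.T @ A + lam**2 * I, A.T @ b)
# First-order Tikhonov: penalise the gradient norm ||L x|| (spatial smoothing).
x1 = np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

print(np.linalg.norm(x0 - x_true), np.linalg.norm(x1 - x_true))
```

    The first-order penalty favours spatially smooth solutions without shrinking their magnitude, which parallels the spatial-smoothing prior found advantageous in the study; in the Bayesian reading, each penalty corresponds to a Gaussian prior and the credible intervals quantify the posterior spread.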

  12. Hydro power flexibility for power systems with variable renewable energy sources: an IEA Task 25 collaboration: Hydro power flexibility for power systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huertas-Hernando, Daniel; Farahmand, Hossein; Holttinen, Hannele

    2016-06-20

    Hydro power is one of the most flexible sources of electricity production. Power systems with considerable amounts of flexible hydro power potentially offer easier integration of variable generation, e.g., wind and solar. However, there exist operational constraints to ensure mid-/long-term security of supply while keeping river flows and reservoir levels within permitted limits. In order to properly assess the effective available hydro power flexibility and its value for storage, a detailed assessment of hydro power is essential. Due to the inherent uncertainty of the weather-dependent hydrological cycle, regulation constraints on the hydro system, and uncertainty of internal load as well as variable generation (wind and solar), this assessment is complex. Hence, it requires proper modeling of all the underlying interactions between hydro power and the power system, with a large share of other variable renewables. A summary of existing experience of wind integration in hydro-dominated power systems clearly points to strict simulation methodologies. Recommendations include requirements for techno-economic models to correctly assess strategies for hydro power and pumped storage dispatch. These models are based not only on seasonal water inflow variations but also on variable generation, and all these are in time horizons from very short term up to multiple years, depending on the studied system. Another important recommendation is to include a geographically detailed description of hydro power systems, rivers' flows, and reservoirs as well as grid topology and congestion.

  13. Time-dependent jet flow and noise computations

    NASA Technical Reports Server (NTRS)

    Berman, C. H.; Ramos, J. I.; Karniadakis, G. E.; Orszag, S. A.

    1990-01-01

    Methods for computing jet turbulence noise based on the time-dependent solution of Lighthill's (1952) differential equation are demonstrated. A key element in this approach is a flow code for solving the time-dependent Navier-Stokes equations at relatively high Reynolds numbers. Jet flow results at Re = 10,000 are presented here. This code combines a computationally efficient spectral element technique and a new self-consistent turbulence subgrid model to supply values for Lighthill's turbulence noise source tensor.

  14. Multiple vesicle recycling pathways in central synapses and their impact on neurotransmission

    PubMed Central

    Kavalali, Ege T

    2007-01-01

    Short-term synaptic depression during repetitive activity is a common property of most synapses. Multiple mechanisms contribute to this rapid depression in neurotransmission including a decrease in vesicle fusion probability, inactivation of voltage-gated Ca2+ channels or use-dependent inhibition of release machinery by presynaptic receptors. In addition, synaptic depression can arise from a rapid reduction in the number of vesicles available for release. This reduction can be countered by two sources. One source is replenishment from a set of reserve vesicles. The second source is the reuse of vesicles that have undergone exocytosis and endocytosis. If the synaptic vesicle reuse is fast enough then it can replenish vesicles during a brief burst of action potentials and play a substantial role in regulating the rate of synaptic depression. In the last 5 years, we have examined the impact of synaptic vesicle reuse on neurotransmission using fluorescence imaging of synaptic vesicle trafficking in combination with electrophysiological detection of short-term synaptic plasticity. These studies have revealed that synaptic vesicle reuse shapes the kinetics of short-term synaptic depression in a frequency-dependent manner. In addition, synaptic vesicle recycling helps maintain the level of neurotransmission at steady state. Moreover, our studies showed that synaptic vesicle reuse is a highly plastic process as it varies widely among synapses and can adapt to changes in chronic activity levels. PMID:17690145

  15. Two tales of legacy effects on stream nutrient behaviour

    NASA Astrophysics Data System (ADS)

    Bieroza, M.; Heathwaite, A. L.

    2017-12-01

    Intensive agriculture has led to large-scale land use conversion, shortening of flow pathways and increased loads of nutrients in streams. This legacy results in gradual build-up of nutrients in agricultural catchments: in soil for phosphorus (biogeochemical legacy) and in the unsaturated zone for nitrate (hydrologic legacy), controlling the water quality in the long-term. Here we investigate these effects on phosphorus and nitrate stream concentrations using high-frequency (10-5 - 100 Hz) sampling with in situ wet-chemistry analysers and optical sensors. Based on our 5 year study, we observe that storm flow responses differ for both nutrients: phosphorus shows rapid increases (up to 3 orders of magnitude) in concentrations with stream flow, whereas nitrate shows both dilution and concentration effects with increasing flow. However, the range of nitrate concentrations change is narrow (up to 2 times the mean) and reflects chemostatic behaviour. We link these nutrient responses with their dominant sources and flow pathways in the catchment. Nitrate from agriculture (with the peak loading in 1983) is stored in the unsaturated zone of the Penrith Sandstone, which can reach up to 70 m depth. Thus nitrate legacy is related to a hydrologic time lag with long travel times in the unsaturated zone. Phosphorus is mainly sorbed to soil particles, therefore it is mobilised rapidly during rainfall events (biogeochemical legacy). The phosphorus stream response will however depend on how well connected is the stream to the catchment sources (driven by soil moisture distribution) and biogeochemical activity (driven by temperature), leading to both chemostatic and non-chemostatic responses, alternating on a storm-to-storm and seasonal basis. Our results also show that transient within-channel storage is playing an important role in delivery of phosphorus, providing an additional time lag component. 
These results show that a consistent agricultural legacy in the catchment (high historical nutrient loads) has different effects on nutrient stream responses, depending on their dominant sources and pathways. Both types of time lag, biogeochemical for phosphorus and hydrologic for nitrate, need to be taken into account when designing and evaluating the effectiveness of agri-environmental mitigation measures.
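The chemostatic versus mobilisation contrast described above is commonly summarised by the slope b of the power-law concentration-discharge relation C = a*Q**b. A minimal sketch of that diagnostic, with made-up values rather than the study's data:

```python
import numpy as np

def cq_slope(Q, C):
    """Slope b of the power-law concentration-discharge relation
    C = a * Q**b, fit in log-log space. |b| near 0 indicates
    chemostatic behaviour (narrow concentration range, as for nitrate
    above); b near 1 indicates strong mobilisation with flow (as for
    storm phosphorus). Illustrative diagnostic, not the paper's analysis."""
    b, _ = np.polyfit(np.log(Q), np.log(C), 1)
    return b

Q = np.array([0.1, 0.5, 1.0, 5.0, 10.0])   # discharge, m^3/s (synthetic)
print(cq_slope(Q, 2.0 * Q**0.05))          # ~0.05: chemostatic
print(cq_slope(Q, 0.1 * Q**0.9))           # ~0.9: mobilisation
```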

  16. Generalized Success-Breeds-Success Principle Leading to Time-Dependent Informetric Distributions.

    ERIC Educational Resources Information Center

    Egghe, Leo; Rousseau, Ronald

    1995-01-01

    Reformulates the success-breeds-success (SBS) principle in informetrics in order to generate a general theory of source-item relationships. Topics include a time-dependent probability, a new model for the expected probability that is compared with the SBS principle with exact combinatorial calculations, classical frequency distributions, and…
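The SBS principle can be sketched as a Pólya-urn-style process: each new item either founds a new source or attaches to an existing source with probability proportional to that source's current item count, which is what produces the skewed source-item distributions of informetrics. A minimal simulation, with illustrative parameter values:

```python
import random

def sbs_simulation(n_items, p_new=0.3, seed=1):
    """Success-breeds-success sketch: each new item starts a new source
    with probability p_new, otherwise it is assigned to an existing
    source with probability proportional to that source's current
    item count. Returns item counts per source, largest first."""
    rng = random.Random(seed)
    sources = []                      # item counts per source
    for _ in range(n_items):
        if not sources or rng.random() < p_new:
            sources.append(1)         # a new source with its first item
        else:
            r = rng.uniform(0, sum(sources))
            acc = 0.0
            for i, c in enumerate(sources):
                acc += c
                if r <= acc:
                    sources[i] += 1   # success breeds success
                    break
    return sorted(sources, reverse=True)

counts = sbs_simulation(10_000)
print(len(counts), counts[:5])        # few sources hold many items (skewed)
```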

  17. Time-local equation for exact time-dependent optimized effective potential in time-dependent density functional theory

    NASA Astrophysics Data System (ADS)

    Liao, Sheng-Lun; Ho, Tak-San; Rabitz, Herschel; Chu, Shih-I.

    2017-04-01

Solving and analyzing the exact time-dependent optimized effective potential (TDOEP) integral equation has been a longstanding challenge due to its highly nonlinear and nonlocal nature. To meet the challenge, we derive an exact time-local TDOEP equation that admits a unique real-time solution in terms of time-dependent Kohn-Sham orbitals and effective memory orbitals. For illustration, the dipole evolution dynamics of a one-dimensional model chain of hydrogen atoms is numerically evaluated and examined to demonstrate the utility of the proposed time-local formulation. Importantly, it is shown that the zero-force theorem, violated by the time-dependent Krieger-Li-Iafrate approximation, is fulfilled in the current TDOEP framework. This work was partially supported by DOE.

  18. GPS Imaging of Time-Dependent Seasonal Strain in Central California

    NASA Astrophysics Data System (ADS)

    Kraner, M.; Hammond, W. C.; Kreemer, C.; Borsa, A. A.; Blewitt, G.

    2016-12-01

Recent studies suggest that crustal deformation can be time-dependent and nontectonic. Continuous global positioning system (cGPS) measurements are now showing how steady long-term deformation can be influenced by factors such as fluctuations in loading and temperature variations. Here we model the seasonal time-dependent dilatational and shear strain in Central California, specifically surrounding the Parkfield region, and try to uncover the sources of these deformation patterns. We use 8 years of cGPS data (2008 - 2016) processed by the Nevada Geodetic Laboratory and carefully select the cGPS stations for our analysis based on the vertical position of the cGPS time series during the drought period. In building our strain model, we first detrend the selected station time series using a set of velocities from the robust MIDAS trend estimator, an algorithm that is insensitive to common problems such as step discontinuities, outliers, and seasonality. We use these detrended time series to estimate the median cGPS positions for each month of the 8-year period and filter displacement differences between these monthly median positions using a technique called "GPS Imaging," which improves the overall robustness and spatial resolution of the input displacements for the strain model. We then model our dilatational and shear strain fields for each month of the time series. We also test a variety of a priori constraints, which control the style of faulting within the strain model. Upon examining our strain maps, we find that a seasonal strain signal exists in Central California. We investigate how this signal compares to thermoelastic, hydrologic, and atmospheric loading models during the 8-year period. We additionally determine whether the drought played a role in influencing the seasonal signal.
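The MIDAS estimator referenced above forms a station velocity as the median of slopes between data pairs separated by about one year, so that annual cycles largely cancel and steps or outliers cannot dominate. A simplified sketch of that idea (not the published MIDAS algorithm):

```python
import math
import statistics

def midas_like_velocity(times, positions, pair_sep=1.0, tol=0.05):
    """Median-of-slopes trend estimate in the spirit of MIDAS: form
    slopes only from data pairs separated by ~pair_sep (one year), so
    an annual seasonal cycle largely cancels, then take the median,
    which resists steps and outliers. times in years, positions in mm."""
    slopes = []
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            dt = times[j] - times[i]
            if dt > pair_sep + tol:
                break                 # times are sorted; no later match
            if abs(dt - pair_sep) <= tol:
                slopes.append((positions[j] - positions[i]) / dt)
    return statistics.median(slopes)

# Daily positions over 3 years: 5 mm/yr trend + annual cycle + outliers
t = [i / 365.25 for i in range(1096)]
pos = [5.0 * ti + 2.0 * math.sin(2 * math.pi * ti) for ti in t]
for k in (100, 400, 700):
    pos[k] += 50.0                    # gross outliers
print(round(midas_like_velocity(t, pos), 2))
```

The estimate stays close to the 5 mm/yr trend despite the seasonal cycle and the 50 mm outliers.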

  19. ESPC Coupled Global Prediction System

    DTIC Science & Technology

    2014-09-30

active, and cloud-nucleating aerosols into NAVGEM for use in long-term simulations and forecasts and for use in the full coupled system. APPROACH... cloud-nucleating aerosols into NAVGEM for use in long-term simulations and forecasts for ESPC applications. We are relying on approaches, findings... function. For sea salt we follow NAAPS and use a source that depends on ocean surface winds and relative humidity. In lieu of the relevant

  20. Investigating the effects of the fixed and varying dispersion parameters of Poisson-gamma models on empirical Bayes estimates.

    PubMed

    Lord, Dominique; Park, Peter Young-Jin

    2008-07-01

Traditionally, transportation safety analysts have used the empirical Bayes (EB) method to improve the estimate of the long-term mean of individual sites; to correct for the regression-to-the-mean (RTM) bias in before-after studies; and to identify hotspots or high risk locations. The EB method combines two different sources of information: (1) the expected number of crashes estimated via crash prediction models, and (2) the observed number of crashes at individual sites. Crash prediction models have traditionally been estimated using a negative binomial (NB) (or Poisson-gamma) modeling framework due to the over-dispersion commonly found in crash data. A weight factor is used to assign the relative influence of each source of information on the EB estimate. This factor is estimated using the mean and variance functions of the NB model. Given recent findings that the dispersion parameter can depend upon the covariates of NB models, especially for traffic flow-only models, and can vary across time periods, there is a need to determine how such models may affect EB estimates. The objectives of this study are to examine how commonly used functional forms as well as fixed and time-varying dispersion parameters affect EB estimates. To accomplish the study objectives, several traffic flow-only crash prediction models were estimated using a sample of rural three-legged intersections located in California. Two types of aggregated and time-specific models were produced: (1) the traditional NB model with a fixed dispersion parameter and (2) the generalized NB model (GNB) with a time-varying dispersion parameter, which is also dependent upon the covariates of the model. Several statistical methods were used to compare the fitting performance of the various functional forms.
The results of the study show that the selection of the functional form of NB models has an important effect on EB estimates in terms of estimated values, weight factors, and dispersion parameters. Time-specific models with a varying dispersion parameter provide better statistical performance in terms of goodness-of-fit (GOF) than aggregated multi-year models. Furthermore, the identification of hazardous sites using the EB method can be significantly affected when a GNB model with a time-varying dispersion parameter is used. Thus, erroneously selecting a functional form may lead to selecting the wrong sites for treatment. The study concludes that transportation safety analysts should not automatically use an existing functional form for modeling motor vehicle crashes without conducting rigorous analyses to estimate the most appropriate functional form linking crashes with traffic flow.
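The weight factor described above can be made concrete. For an NB model with predicted mean mu and dispersion parameter alpha (variance mu + alpha*mu**2), a standard form of the EB estimate is a convex combination of model prediction and observed count with weight w = 1/(1 + alpha*mu), which is why a time-varying alpha shifts the estimates even when mu and the observed count are unchanged. A sketch with illustrative numbers:

```python
def eb_estimate(mu, y, alpha):
    """Empirical Bayes estimate of the long-term mean crash count.
    mu:    model-predicted mean (crash prediction model)
    y:     observed crash count at the site
    alpha: NB dispersion parameter (variance = mu + alpha * mu**2)
    The weight w falls as alpha*mu grows: the more over-dispersed the
    model, the more the EB estimate trusts the site's observed count."""
    w = 1.0 / (1.0 + alpha * mu)
    return w * mu + (1.0 - w) * y

# Same prediction and observation, different dispersion parameters:
print(eb_estimate(2.0, 6.0, 0.5))    # 4.0  (low weight on the model)
print(eb_estimate(2.0, 6.0, 0.05))   # ~2.36 (high weight on the model)
```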

  1. A New Unsteady Model for Dense Cloud Cavitation in Cryogenic Fluids

    NASA Technical Reports Server (NTRS)

    Hosangadi, Ashvin; Ahuja, Vineet

    2005-01-01

Contents include the following: Background on thermal effects in cavitation. Physical properties of hydrogen. Multi-phase cavitation with thermal effect. Solution procedure. Cavitation model overview. Cavitation source terms. New cavitation model. Source term for bubble growth. One-equation LES model. Unsteady ogive simulations: liquid nitrogen. Unsteady incompressible flow in a pipe. Time-averaged cavity length for NACA15 flowfield.

  2. Source Term Estimation of Radioxenon Released from the Fukushima Dai-ichi Nuclear Reactors Using Measured Air Concentrations and Atmospheric Transport Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Biegalski, S.; Bowyer, Ted W.

    2014-01-01

Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout from the Fukushima Daiichi nuclear accident in March 2011. Atmospheric transport modeling (ATM) of plumes of noble gases and particulates was performed soon after the accident to determine plausible detection locations of any radioactive releases to the atmosphere. We combine sampling data from multiple International Monitoring System (IMS) locations in a new way to estimate the magnitude and time sequence of the releases. Dilution factors from the modeled plume at five different detection locations were combined with 57 atmospheric concentration measurements of 133-Xe taken from March 18 to March 23 to estimate the source term. This approach estimates that 59% of the 1.24×10^19 Bq of 133-Xe present in the reactors at the time of the earthquake was released to the atmosphere over a three-day period. Source term estimates from combinations of detection sites have lower spread than estimates based on measurements at single detection sites. Sensitivity cases based on data from four or more detection locations bound the source term between 35% and 255% of the available xenon inventory.
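The inversion described above amounts to solving c ≈ M s for a nonnegative, time-binned release vector s, where M holds the ATM-derived dilution factors linking each release bin to each concentration measurement. A sketch with deliberately made-up numbers (not the Fukushima values), using projected gradient descent to enforce nonnegativity:

```python
import numpy as np

def estimate_source_term(M, c, n_iter=5000):
    """Nonnegative least-squares estimate of a time-binned release
    vector s from measured concentrations c ~= M @ s, where M holds
    dilution factors from atmospheric transport modeling.
    Projected gradient descent with a Lipschitz step size."""
    M = np.asarray(M, float)
    c = np.asarray(c, float)
    lr = 1.0 / np.linalg.norm(M.T @ M, 2)   # 1 / largest eigenvalue
    s = np.zeros(M.shape[1])
    for _ in range(n_iter):
        s = np.clip(s - lr * (M.T @ (M @ s - c)), 0.0, None)
    return s

# Synthetic check: 6 samples, 3 release bins (all values illustrative)
M = 1e-6 * np.array([[2, 1, 0], [1, 2, 1], [0, 1, 2],
                     [1, 0, 1], [2, 2, 1], [1, 1, 2]], float)
s_true = np.array([4e17, 1e18, 2e17])       # Bq released per time bin
c = M @ s_true                              # noise-free "measurements"
s_hat = estimate_source_term(M, c)
print(np.round(s_hat / s_true, 3))          # recovers ~[1, 1, 1]
```

Combining measurements from several detection sites, as the abstract describes, corresponds to stacking more rows into M, which tightens the solution.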

  3. Baseline-dependent sampling and windowing for radio interferometry: data compression, field-of-interest shaping, and outer field suppression

    NASA Astrophysics Data System (ADS)

    Atemkeng, M.; Smirnov, O.; Tasse, C.; Foster, G.; Keimpema, A.; Paragi, Z.; Jonas, J.

    2018-07-01

Traditional radio interferometric correlators produce regularly gridded samples in time and frequency by averaging the signal over constant, discrete time-frequency intervals. This regular sampling and averaging translates into irregularly gridded samples of the true uv-distribution, and results in a baseline-length-dependent loss of amplitude and phase coherence that grows with distance from the image phase centre. The effect is often referred to as `decorrelation' in the uv-space, which is equivalent in the source domain to `smearing'. This work discusses and implements a regular-gridded sampling scheme in the uv-space (baseline-dependent sampling) and windowing that allow for data compression, field-of-interest shaping, and source suppression. Baseline-dependent sampling requires irregularly gridded sampling in the time-frequency space, i.e. the time-frequency interval becomes baseline dependent. Analytic models and simulations are used to show that decorrelation remains constant across all baselines when applying baseline-dependent sampling and windowing. Simulations using the MeerKAT telescope and the European Very Long Baseline Interferometry Network show that data compression, field-of-interest shaping, and outer field-of-interest suppression are all achieved.
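The baseline-dependent time-frequency interval can be illustrated with a toy rule: hold decorrelation roughly constant by averaging short baselines for longer than long ones. This inverse-with-baseline-length scaling is a hypothetical sketch, not the paper's exact windowing scheme:

```python
def bd_averaging_interval(b_east, b_north, dt_ref=10.0, b_ref=100.0):
    """Toy baseline-dependent averaging interval: a baseline of length
    b_ref (metres, say) gets dt_ref seconds of averaging; longer
    baselines, which decorrelate faster, get proportionally less time,
    while shorter baselines are capped at dt_ref."""
    b = (b_east ** 2 + b_north ** 2) ** 0.5
    return dt_ref if b <= b_ref else dt_ref * b_ref / b

print(bd_averaging_interval(100.0, 0.0))    # 10.0 s
print(bd_averaging_interval(0.0, 1000.0))   # 1.0 s
```

Averaging short baselines ten times longer than baselines ten times their length is what yields the data compression the abstract reports.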

  4. Similarity solutions of reaction–diffusion equation with space- and time-dependent diffusion and reaction terms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, C.-L.; Lee, C.-C., E-mail: chieh.no27@gmail.com

    2016-01-15

    We consider solvability of the generalized reaction–diffusion equation with both space- and time-dependent diffusion and reaction terms by means of the similarity method. By introducing the similarity variable, the reaction–diffusion equation is reduced to an ordinary differential equation. Matching the resulting ordinary differential equation with known exactly solvable equations, one can obtain corresponding exactly solvable reaction–diffusion systems. Several representative examples of exactly solvable reaction–diffusion equations are presented.
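As a concrete (textbook) special case of the reduction described above, constant-coefficient diffusion with no reaction term already shows the mechanism: the similarity variable collapses the PDE to a solvable ODE.

```latex
\[
  u_t = D\,u_{xx}, \qquad \eta = \frac{x}{\sqrt{4Dt}}, \qquad u(x,t) = f(\eta).
\]
The chain rule gives $u_t = -\dfrac{\eta}{2t}\,f'(\eta)$ and
$u_{xx} = \dfrac{1}{4Dt}\,f''(\eta)$, so the PDE reduces to the ODE
\[
  f''(\eta) + 2\eta\,f'(\eta) = 0,
  \qquad
  f(\eta) = c_1\operatorname{erf}(\eta) + c_2 .
\]
```

Matching the reduced ODE against known solvable equations, as the abstract describes, generalizes this step to space- and time-dependent diffusion and reaction terms, which modify the similarity variable and the resulting ODE.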

  5. On the application of subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1989-01-01

    LeVeque and Yee recently investigated a one-dimensional scalar conservation law with stiff source terms modeling the reacting flow problems and discovered that for the very stiff case most of the current finite difference methods developed for non-reacting flows would produce wrong solutions when there is a propagating discontinuity. A numerical scheme, essentially nonoscillatory/subcell resolution - characteristic direction (ENO/SRCD), is proposed for solving conservation laws with stiff source terms. This scheme is a modification of Harten's ENO scheme with subcell resolution, ENO/SR. The locations of the discontinuities and the characteristic directions are essential in the design. Strang's time-splitting method is used and time evolutions are done by advancing along the characteristics. Numerical experiment using this scheme shows excellent results on the model problem of LeVeque and Yee. Comparisons of the results of ENO, ENO/SR, and ENO/SRCD are also presented.
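Strang's time-splitting, as used above, brackets a full advection step with two half-steps of the stiff source. A minimal sketch with a linear stiff sink standing in for the nonlinear LeVeque-Yee reaction term (all parameters illustrative):

```python
import numpy as np

def strang_step(u, a, mu, dx, dt):
    """One Strang-split step for u_t + a*u_x = -mu*u with mu stiff.
    The stiff source is integrated exactly over half steps, which keeps
    the splitting robust even when mu*dt is large; the advection part
    uses first-order upwind with periodic boundaries. A linear stand-in
    for the nonlinear reacting-flow source of LeVeque and Yee."""
    u = u * np.exp(-0.5 * mu * dt)              # half-step source (exact)
    u = u - a * dt / dx * (u - np.roll(u, 1))   # upwind advection
    u = u * np.exp(-0.5 * mu * dt)              # half-step source (exact)
    return u

nx, a, mu = 200, 1.0, 100.0
dx = 1.0 / nx
dt = 0.5 * dx / a                # advective CFL; not limited by the source
x = np.arange(nx) * dx
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square pulse
for _ in range(100):
    u = strang_step(u, a, mu, dx, dt)
```

Note the time step is set by the advection CFL condition alone; the exact source integration absorbs the stiffness, which is the point of the splitting.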

  6. Combined optical gain and degradation measurements in DCM2 doped Tris-(8-hydroxyquinoline)aluminum thin-films

    NASA Astrophysics Data System (ADS)

    Čehovski, Marko; Döring, Sebastian; Rabe, Torsten; Caspary, Reinhard; Kowalsky, Wolfgang

    2016-04-01

Organic laser sources offer the opportunity to integrate flexible and widely tunable lasers in polymer waveguide circuits, e.g. for Lab-on-Foil applications. It is therefore necessary to understand gain and degradation processes for long-term operation. In this paper we address the challenge of lifetime (degradation) measurements of photoluminescence (PL) and optical gain in thin-film lasers. The well-known guest-host system of the aluminum chelate Alq3 (Tris-(8-hydroxyquinoline)aluminum) as host material and the laser dye DCM2 (4-(Dicyanomethylene)-2-methyl-6-julolidyl-9-enyl-4H-pyran) as guest material is employed as the laser-active material. Sample layers have been built up by co-evaporation in an ultrahigh vacuum (UHV) chamber. 200 nm thick films of Alq3:DCM2 with different doping concentrations have been processed onto glass and thermally oxidized silicon substrates. The gain measurements have been performed by the variable stripe length (VSL) method, which allows one to determine the thin-film waveguide gain and loss, respectively. For the measurements the samples were excited with UV irradiation (λ = 355 nm) under nitrogen atmosphere by a passively Q-switched laser source. PL degradation measurements with regard to the optical gain have been done at the laser threshold (approximately 3 μJ/cm^2), at five times the threshold, and at ten times the threshold. A t50 PL lifetime of > 10^7 pulses could be measured at a maximum excitation energy density of 32 μJ/cm^2. This allows for a detailed analysis of the gain degradation mechanism and therefore of the stimulated emission cross section. Depending on the DCM2 doping concentration, the stimulated emission cross section was reduced by 35%. Nevertheless, the results emphasize the necessity of investigating degradation processes in organic laser sources for long-term applications.

  7. Dynamics of a Hogg-Huberman Model with Time Dependent Reevaluation Rates

    NASA Astrophysics Data System (ADS)

    Tanaka, Toshijiro; Kurihara, Tetsuya; Inoue, Masayoshi

    2006-05-01

The dynamical behavior of the Hogg-Huberman model with time-dependent reevaluation rates is studied. The time dependence of the reevaluation rate, at which agents using one of the resources decide to reconsider their resource choice, is obtained in terms of the states of the system. It is seen that the change in the fraction of agents using one resource is suppressed to be smaller than in the case of a fixed reevaluation rate, and that chaos control in the system with time-dependent reevaluation rates can be performed by the system itself.

  8. Modeling ecological traps for the control of feral pigs

    PubMed Central

    Dexter, Nick; McLeod, Steven R

    2015-01-01

Ecological traps are habitat sinks that are preferred by dispersing animals but have higher mortality or reduced fecundity compared to source habitats. Theory suggests that if mortality rates are sufficiently high, then ecological traps can result in extinction. An ecological trap may be created when pest animals are controlled in one area, but not in another area of equal habitat quality, and when there is density-dependent immigration from the high-density uncontrolled area to the low-density controlled area. We used a logistic population model to explore how varying the proportion of habitat controlled, the control mortality rate, and the strength of density-dependent immigration could affect the long-term abundance of feral pigs and their time to extinction. Increasing control mortality, the proportion of habitat controlled, and the strength of density-dependent immigration decreased abundance both within and outside the area controlled. At higher levels of these parameters, extinction was achieved for feral pigs. We extended the analysis with a more complex stochastic, interactive model of feral pig dynamics in the Australian rangelands to examine how the same variables as in the logistic model affected long-term abundance in the controlled and uncontrolled areas and time to extinction. Compared to the logistic model of feral pig dynamics, the stochastic interactive model predicted lower abundances and extinction at lower control mortalities and proportions of habitat controlled. To improve the realism of the stochastic interactive model, we substituted fixed mortality rates with a density-dependent control mortality function, empirically derived from helicopter shooting exercises in Australia. Compared to the stochastic interactive model with fixed mortality rates, the model with the density-dependent control mortality function did not predict as substantial a decline in abundance in controlled or uncontrolled areas, or extinction, for any combination of variables.
These models demonstrate that pest eradication is theoretically possible without the pest being controlled throughout its range because of density-dependent immigration into the area controlled. The stronger the density-dependent immigration, the better the overall control in controlled and uncontrolled habitat combined. However, the stronger the density-dependent immigration, the poorer the control in the area controlled. For feral pigs, incorporating environmental stochasticity improves the prospects for eradication, but adding a realistic density-dependent control function eliminates these prospects. PMID:26045954
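The logistic exploration described above can be sketched as a two-patch model in which extra harvest mortality is applied in one patch and immigration from the uncontrolled patch scales with the density difference. The parameter values and functional forms below are illustrative only, not those fitted in the study:

```python
def simulate(years=100, r=0.8, K=1.0, h=0.6, frac_controlled=0.5, m=0.3):
    """Two-patch logistic sketch of the ecological-trap idea.
    Patch C is controlled (extra harvest mortality h), patch U is not;
    immigration U -> C is density dependent, scaling with the density
    difference between the patches. Returns (Nc, Nu) after `years` steps."""
    Kc, Ku = K * frac_controlled, K * (1 - frac_controlled)
    Nc, Nu = Kc, Ku                   # start both patches at carrying capacity
    for _ in range(years):
        # density-dependent flow from the high- to the low-density patch
        mig = m * max(Nu / Ku - Nc / Kc, 0.0) * Nu
        Nc = max(Nc + r * Nc * (1 - Nc / Kc) - h * Nc + mig, 0.0)
        Nu = max(Nu + r * Nu * (1 - Nu / Ku) - mig, 0.0)
    return Nc, Nu

Nc, Nu = simulate(h=0.6)              # with control in patch C
Nc0, Nu0 = simulate(h=0.0)            # no control anywhere
print(round(Nc, 3), round(Nu, 3))     # both patches below their no-control levels
```

Even though harvesting happens only in patch C, the uncontrolled patch also ends below carrying capacity, because the density gradient continually drains it into the controlled "trap".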

  9. Time-reversal in geophysics: the key for imaging a seismic source, generating a virtual source or imaging with no source (Invited)

    NASA Astrophysics Data System (ADS)

    Tourin, A.; Fink, M.

    2010-12-01

The concept of time-reversal (TR) focusing was introduced in acoustics by Mathias Fink in the early nineties: a pulsed wave is sent from a source, propagates in an unknown medium and is captured at a transducer array termed a “Time Reversal Mirror” (TRM). The waveforms received at each transducer are then flipped in time and sent back, resulting in a wave converging at the original source regardless of the complexity of the propagation medium. TRMs have now been implemented in a variety of physical scenarios, from GHz microwaves to MHz ultrasonics and to hundreds of Hz in ocean acoustics. Common to this broad range of scales is a remarkable robustness exemplified by the observation that the more complex the medium (random or chaotic), the sharper the focus. A TRM acts as an antenna that uses complex environments to appear wider than it is, resulting, for a broadband pulse, in a refocusing quality that does not depend on the TRM aperture. We show that the time-reversal concept is also at the heart of very active research fields in seismology and applied geophysics: imaging of seismic sources, passive imaging based on noise correlations, seismic interferometry, and monitoring of CO2 storage using the virtual source method. All these methods can be viewed in a unified framework as applications of the so-called time-reversal cavity approach. That approach uses the fact that a wave field can be predicted at any location inside a volume (without a source) from the knowledge of both the field and its normal derivative on the surrounding surface S, which for acoustic scalar waves is mathematically expressed in the Helmholtz-Kirchhoff (HK) integral. Thus, in the first step of an ideal TR process, the field coming from a point-like source as well as its normal derivative should be measured on S. In a second step, the initial source is removed, and monopole and dipole sources re-emit the time reversal of the components measured in the first step.
Instead of directly computing the resulting HK integral along S, physical arguments can be used to predict straightforwardly that the time-reversed field in the cavity can be written as the difference of advanced and retarded Green’s functions centred on the initial source position. This result is in some way disappointing, because it means that reversing a field using a closed TRM is not enough to realize a perfect time-reversal experiment: in practical applications, the converging wave is always followed by a diverging one (see figure). However, we will show that this result is of great importance, since it furnishes the basis for imaging methods in media with no active source. We will focus especially on the virtual source method, showing that it can be used for implementing the DORT method (decomposition of the time-reversal operator) in a passive way. The passive DORT method could be interesting for monitoring changes in a complex scattering medium, for example in the context of CO2 storage. (Figure: time-reversal imaging applied to the giant Sumatra earthquake.)

  10. Local spectrum analysis of field propagation in an anisotropic medium. Part I. Time-harmonic fields.

    PubMed

    Tinkelman, Igor; Melamed, Timor

    2005-06-01

The phase-space beam summation is a general analytical framework for local analysis and modeling of radiation from extended source distributions. In this formulation, the field is expressed as a superposition of beam propagators that emanate from all points in the source domain and in all directions. In this Part I of a two-part investigation, the theory is extended to include propagation in an anisotropic medium characterized by a generic wave-number profile, for time-harmonic fields; in a companion paper [J. Opt. Soc. Am. A 22, 1208 (2005)], the theory is extended to time-dependent fields. The propagation characteristics of the beam propagators in a homogeneous anisotropic medium are considered. Using Gaussian windows for the local processing of either ordinary or extraordinary electromagnetic field distributions, the field is represented by a phase-space spectral distribution in which the propagating elements are Gaussian beams, formulated by using Gaussian plane-wave spectral distributions over the extended source plane. By applying saddle-point asymptotics, we extract the Gaussian beam phenomenology in the anisotropic environment. The resulting field is parameterized in terms of the spatial evolution of the beam curvature, beam width, etc., which are mapped to local geometrical properties of the generic wave-number profile. The general results are applied to the special case of a uniaxial crystal, and it is found that the asymptotics for the Gaussian beam propagators, as well as the attached physical phenomenology, perform remarkably well.

  11. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using elastic dislocation theory, for which the displacement field depends on the slip distribution, fault geometry, and the elastic response and properties of the medium. Nonlinear long-wave theory governs the propagation and run-up of tsunamis. Because the physics that describes tsunamis from generation through run-up is complex, a parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis. Analysis of the source parameters of various tsunamigenic earthquakes has indicated that the details of the earthquake source, namely the nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.

  12. Open source posturography.

    PubMed

    Rey-Martinez, Jorge; Pérez-Fernández, Nicolás

    2016-12-01

This study develops and validates posturography software, sharing its source code in open source terms. In a prospective non-randomized validation study, 20 consecutive adults underwent two balance assessment tests: six-condition posturography performed using clinically approved software with a clinically approved force platform, and the same six conditions measured using the newly developed open source software (RombergLab) with a low-cost force platform. The intra-class correlation coefficient of the sway area, obtained from the center of pressure variations on both devices for the six conditions, was the main variable used for validation. Excellent concordance between RombergLab and the clinically approved force platform was obtained (intra-class correlation coefficient = 0.94), reaching the proposed validation goal of 0.9; a Bland and Altman concordance plot was also obtained. With these results we consider the developed software (RombergLab) a validated balance assessment tool, whose reliability depends on the technical specifications of the force platform used. The source code used to develop RombergLab was published in open source terms.
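The validation criterion above rests on the intra-class correlation coefficient. The abstract does not state which ICC form was used, so as an illustration here is ICC(2,1) (two-way random effects, absolute agreement, single measurement), a common choice for device-agreement studies:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement, computed from ANOVA mean squares for an
    n-subjects x k-raters (here: k devices) array."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()   # devices
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

print(icc_2_1([[1, 1], [2, 2], [3, 3]]))   # 1.0 (perfect agreement)
print(icc_2_1([[1, 2], [2, 3], [3, 4]]))   # ~0.667 (constant offset)
```

Note that a constant offset between devices lowers this absolute-agreement ICC even when the readings are perfectly correlated, which is exactly the sensitivity a device-validation study needs.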

  13. Characterization and Remediation of Contaminated Sites:Modeling, Measurement and Assessment

    NASA Astrophysics Data System (ADS)

    Basu, N. B.; Rao, P. C.; Poyer, I. C.; Christ, J. A.; Zhang, C. Y.; Jawitz, J. W.; Werth, C. J.; Annable, M. D.; Hatfield, K.

    2008-05-01

The complexity of natural systems makes it impossible to estimate parameters at the required level of spatial and temporal detail. Thus, it becomes necessary to transition from spatially distributed parameters to spatially integrated parameters that are capable of adequately capturing the system dynamics, without always accounting for local process behavior. Contaminant flux across the source control plane is proposed as an integrated metric that captures source behavior and links it to plume dynamics. Contaminant fluxes were measured using an innovative technology, the passive flux meter, at field sites contaminated with dense non-aqueous phase liquids (DNAPLs) in the US and Australia. Flux distributions were observed to be positively or negatively correlated with the conductivity distribution, depending on the source characteristics of the site. The impact of partial source depletion on the mean contaminant flux and flux architecture was investigated in three-dimensional complex heterogeneous settings using the multiphase transport code UTCHEM and the reactive transport code ISCO3D. Source mass depletion reduced the mean contaminant flux approximately linearly, while the contaminant flux standard deviation reduced proportionally with the mean (i.e., the coefficient of variation of the flux distribution is constant with time). Similar analysis was performed using data from field sites, and the results confirmed the numerical simulations. The linearity of the mass depletion-flux reduction relationship indicates the ability to design remediation systems that deplete mass to achieve a target reduction in source strength. Stability of the flux distribution indicates the ability to characterize the distributions in time once the initial distribution is known. Lagrangian techniques were used to predict contaminant flux behavior during source depletion in terms of the statistics of the hydrodynamic and DNAPL distribution.
The advantage of the Lagrangian techniques lies in their small computation time and their inclusion of spatially integrated parameters that can be measured in the field using tracer tests. Analytical models that couple source depletion to plume transport were used for optimization of source and plume treatment. These models are being used for the development of decision and management tools (for DNAPL sites) that consider uncertainty assessments as an integral part of the decision-making process for contaminated site remediation.

  14. Correction of phase errors in quantitative water-fat imaging using a monopolar time-interleaved multi-echo gradient echo sequence.

    PubMed

    Ruschke, Stefan; Eggers, Holger; Kooijman, Hendrik; Diefenbach, Maximilian N; Baum, Thomas; Haase, Axel; Rummeny, Ernst J; Hu, Houchun H; Karampinos, Dimitrios C

    2017-09-01

To propose a phase error correction scheme for monopolar time-interleaved multi-echo gradient echo water-fat imaging that allows accurate and robust complex-based quantification of the proton density fat fraction (PDFF). A three-step phase correction scheme is proposed to address (a) a phase term induced by echo misalignments that can be measured with a reference scan using reversed readout polarity, (b) a phase term induced by the concomitant gradient field that can be predicted from the gradient waveforms, and (c) a phase offset between time-interleaved echo trains. Simulations were carried out to characterize the concomitant gradient field-induced PDFF bias and the performance of estimating the phase offset between time-interleaved echo trains. Phantom experiments and in vivo liver and thigh imaging were performed to study the relevance of each of the three phase correction steps for PDFF accuracy and robustness. The simulation, phantom, and in vivo results showed, in agreement with theory, an echo time-dependent PDFF bias introduced by the three phase error sources. The proposed phase correction scheme was found to provide accurate PDFF estimation independent of the employed echo time combination. Complex-based time-interleaved water-fat imaging was found to give accurate and robust PDFF measurements after applying the proposed phase error correction scheme. Magn Reson Med 78:984-996, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  15. Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2005-01-01

    A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared to the two-dimensional plate using a steady mass flow boundary condition, which was used to simulate a steady micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
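The idea of the source term model can be sketched in a few lines: the jet's mass flow and momentum are deposited directly into the discretized governing equations of the cell containing the orifice, so no jet-resolving grid is needed. The function and variable names below are hypothetical, not OVERFLOW's:

```python
def add_micro_jet_source(res_mass, res_mom_y, cell_id, mdot, v_jet,
                         cell_volume):
    """Deposit a steady micro jet into the discretized residuals of the
    cell containing the orifice, mirroring the source term idea above.
    mdot: jet mass flow [kg/s]; v_jet: jet exit velocity [m/s];
    contributions are per unit cell volume."""
    res_mass[cell_id] += mdot / cell_volume             # continuity source
    res_mom_y[cell_id] += mdot * v_jet / cell_volume    # momentum source
    return res_mass, res_mom_y

# Four cells along a wall; the jet orifice sits in cell 2 (toy numbers)
res_mass = [0.0] * 4
res_mom_y = [0.0] * 4
add_micro_jet_source(res_mass, res_mom_y, cell_id=2,
                     mdot=1e-4, v_jet=150.0, cell_volume=1e-6)
print(res_mass[2], res_mom_y[2])   # ~100.0 kg/(s*m^3), ~15000.0 N/m^3
```

Because only the source cell's residuals change, relocating the jet is a one-parameter edit rather than a regridding exercise, which is the "minimal grid generation" advantage the abstract describes.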

  16. Exact time-dependent solutions for a self-regulating gene.

    PubMed

    Ramos, A F; Innocentini, G C P; Hornos, J E M

    2011-06-01

    The exact time-dependent solution for the stochastic equations governing the behavior of a binary self-regulating gene is presented. Using the generating function technique to rephrase the master equations in terms of partial differential equations, we show that the model is totally integrable and the analytical solutions are the celebrated confluent Heun functions. Self-regulation plays a major role in the control of gene expression, and it is remarkable that such a microscopic model is completely integrable in terms of well-known complex functions.

  17. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further demonstrates the effectiveness of the proposed method.

  18. Gravitational wave source counts at high redshift and in models with extra dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    García-Bellido, Juan; Nesseris, Savvas; Trashorras, Manuel, E-mail: juan.garciabellido@uam.es, E-mail: savvas.nesseris@csic.es, E-mail: manuel.trashorras@csic.es

    2016-07-01

Gravitational wave (GW) source counts have recently been shown to be able to test how gravitational radiation propagates with distance from the source. Here, we extend this formalism to cosmological scales, i.e. the high redshift regime, and we discuss the complications of applying this methodology to high redshift sources. We also allow for models with compactified extra dimensions, as in the Kaluza-Klein model. Furthermore, we consider the case of intermediate redshifts, i.e. 0 < z ≲ 1, where we show it is possible to find an analytical approximation for the source counts dN/d(S/N). This can be done in terms of cosmological parameters, such as the matter density Ω_{m,0} of the cosmological constant model, or the cosmographic parameters for a general dark energy model. Our analysis is as general as possible, but it depends on two important factors: a source model for the black hole binary mergers and the GW source to galaxy bias. This methodology also allows us to obtain the higher order corrections of the source counts in terms of the signal-to-noise S/N. We then forecast the sensitivity of future observations in constraining both GW physics and the underlying cosmology by simulating sources distributed over a finite range of signal-to-noise, with the number of sources ranging from 10 to 500 as expected from future detectors. We find that with 500 events it will be possible to constrain the present matter density parameter Ω_{m,0} to within a few percent, with the precision growing fast with the number of events. In the case of extra dimensions we find that, depending on the degeneracies of the model, with 500 events it may be possible to place stringent limits on the existence of the extra dimensions if the aforementioned degeneracies can be broken.
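At leading (Euclidean) order, sources distributed uniformly in volume with amplitude falling as 1/distance give cumulative counts N(>S/N) ∝ (S/N)⁻³; the cosmological and extra-dimension effects discussed above appear as corrections to this slope. A quick Monte Carlo check of the leading-order scaling (a toy verification, not the paper's calculation):

```python
import numpy as np

# Euclidean source-count scaling: radii drawn uniformly in volume within a
# unit sphere, signal-to-noise ~ 1/distance, so N(>t) = t**-3 for t >= 1.
rng = np.random.default_rng(1)
r = rng.uniform(0.0, 1.0, 200_000) ** (1.0 / 3.0)  # uniform-in-volume radii
snr = 1.0 / r                                      # amplitude ~ 1/distance

thresholds = [2.0, 4.0]
counts = [(snr > t).sum() for t in thresholds]
ratio = counts[0] / counts[1]
print(ratio)   # ≈ (4/2)**3 = 8, up to Monte Carlo noise
```

Doubling the detection threshold cuts the counts by a factor of about eight, which is the baseline any propagation-modifying model must perturb.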

  19. Effect of time dependence on probabilistic seismic-hazard maps and deaggregation for the central Apennines, Italy

    USGS Publications Warehouse

    Akinci, A.; Galadini, F.; Pantosti, D.; Petersen, M.; Malagnini, L.; Perkins, D.

    2009-01-01

We produce probabilistic seismic-hazard assessments for the central Apennines, Italy, using time-dependent models that are characterized by a Brownian passage time recurrence model. Using aperiodicity parameters, α, of 0.3, 0.5, and 0.7, we examine the sensitivity of the probabilistic ground motion and its deaggregation to these parameters. For the seismic source model we incorporate both smoothed historical seismicity over the area and geological information on faults. We use the maximum magnitude model for the fault sources together with a uniform probability of rupture along the fault (floating fault model), with fictitious faults accounting for earthquakes that cannot be correlated with known geologic structural segmentation.
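The time-dependent ingredient here is the conditional rupture probability from the Brownian passage time (inverse Gaussian) distribution, given the mean recurrence interval, the aperiodicity α, and the time elapsed since the last rupture. A self-contained sketch using the standard closed-form BPT CDF (the fault numbers below are invented, not from the central Apennines model):

```python
from math import erf, exp, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bpt_cdf(t, T, alpha):
    """CDF of the Brownian passage time (inverse Gaussian) distribution
    with mean recurrence interval T and aperiodicity alpha."""
    u1 = (sqrt(t / T) - sqrt(T / t)) / alpha
    u2 = (sqrt(t / T) + sqrt(T / t)) / alpha
    return Phi(u1) + exp(2.0 / alpha**2) * Phi(-u2)

def conditional_prob(elapsed, horizon, T, alpha):
    """P(rupture within `horizon` yr | quiescent for `elapsed` yr)."""
    F_e = bpt_cdf(elapsed, T, alpha)
    return (bpt_cdf(elapsed + horizon, T, alpha) - F_e) / (1.0 - F_e)

# Illustrative fault: mean recurrence 200 yr, alpha = 0.5, 30-yr horizon.
p_young = conditional_prob(50.0, 30.0, 200.0, 0.5)   # short elapsed time
p_old = conditional_prob(250.0, 30.0, 200.0, 0.5)    # long elapsed time
print(p_young, p_old)
```

A fault that has been quiet longer than its mean recurrence interval carries a much higher conditional probability, which is exactly what elevates the hazard on long-quiescent faults in such models.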

  20. ANALYTICAL ASSESSMENT OF THE IMPACTS OF PARTIAL MASS DEPLETION IN DNAPL SOURCE ZONES (SAN FRANCISCO, CA)

    EPA Science Inventory

    Analytical solutions describing the time-dependent DNAPL source-zone mass and contaminant discharge rate are used as a flux-boundary condition in a semi-analytical contaminant transport model. These analytical solutions assume a power relationship between the flow-averaged sourc...
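The power relationship mentioned in the abstract is commonly written C/C₀ = (M/M₀)^Γ, with the source mass depleted by the contaminant discharge, dM/dt = −Q·C. A minimal numerical sketch with invented parameter values (for Γ = 1 this reduces to exponential decay, which gives a convenient check):

```python
# Power-law DNAPL source-depletion sketch: C/C0 = (M/M0)**gamma,
# dM/dt = -Q*C. All parameter values are hypothetical.
M0, C0 = 100.0, 50.0     # initial source mass [kg], discharge conc. [g/m^3]
Q = 10.0                 # water flux through the source zone [m^3/d]
gamma = 1.0              # depletion exponent (1 => exponential decay)
dt, steps = 0.1, 2000    # 200 days of simulation

M = M0
mass_history = [M]
for _ in range(steps):
    C = C0 * (M / M0) ** gamma              # flow-averaged concentration
    M = max(M - dt * Q * C / 1000.0, 0.0)   # g -> kg conversion
    mass_history.append(M)

print(mass_history[-1])
```

With Γ = 1 the decay rate is k = Q·C₀/(1000·M₀) = 0.005 per day, so after 200 days roughly e⁻¹ of the mass remains; Γ < 1 or Γ > 1 changes how quickly discharge responds to partial mass depletion, which is the question the analytical solutions address.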

  1. Sources of career disadvantage in nursing. A study of NHS Wales.

    PubMed

    Lane, N

    1999-01-01

    Despite the numerical predominance of women in nursing there is a marked concentration of women, especially those working part-time, in the lower echelons of the profession. The paper presents survey data and interview material from a study of qualified nurses in NHS Wales. By controlling for differences in education and experience in nursing work, it was found that comparable groups of female nurses received unequal employment opportunities. Women with dependent children were primarily located in the lower nurse grades irrespective of their qualifications and experience. Much of this was associated with inflexible working practices, and the low status of part-time work. Occupational downgrading for female returners was also a significant barrier to career advancement. However, these problems were not recognised by management. Management failed to evaluate the mechanics of their human resource policies in terms which matter to many nurses, in particular with regard to the management of diversity.

  2. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.

  3. Time dependence of breakdown in a global fiber-bundle model with continuous damage.

    PubMed

    Moral, L; Moreno, Y; Gómez, J B; Pacheco, A F

    2001-06-01

    A time-dependent global fiber-bundle model of fracture with continuous damage is formulated in terms of a set of coupled nonlinear differential equations. A first integral of this set is analytically obtained. The time evolution of the system is studied by applying a discrete probabilistic method. Several results are discussed emphasizing their differences with the standard time-dependent model. The results obtained show that with this simple model a variety of experimental observations can be qualitatively reproduced.

  4. Global biodiversity monitoring: from data sources to essential biodiversity variables

    USGS Publications Warehouse

    Proenca, Vania; Martin, Laura J.; Pereira, Henrique M.; Fernandez, Miguel; McRae, Louise; Belnap, Jayne; Böhm, Monika; Brummitt, Neil; Garcia-Moreno, Jaime; Gregory, Richard D.; Honrado, Joao P; Jürgens, Norbert; Opige, Michael; Schmeller, Dirk S.; Tiago, Patricia; van Sway, Chris A

    2016-01-01

    Essential Biodiversity Variables (EBVs) consolidate information from varied biodiversity observation sources. Here we demonstrate the links between data sources, EBVs and indicators and discuss how different sources of biodiversity observations can be harnessed to inform EBVs. We classify sources of primary observations into four types: extensive and intensive monitoring schemes, ecological field studies and satellite remote sensing. We characterize their geographic, taxonomic and temporal coverage. Ecological field studies and intensive monitoring schemes inform a wide range of EBVs, but the former tend to deliver short-term data, while the geographic coverage of the latter is limited. In contrast, extensive monitoring schemes mostly inform the population abundance EBV, but deliver long-term data across an extensive network of sites. Satellite remote sensing is particularly suited to providing information on ecosystem function and structure EBVs. Biases behind data sources may affect the representativeness of global biodiversity datasets. To improve them, researchers must assess data sources and then develop strategies to compensate for identified gaps. We draw on the population abundance dataset informing the Living Planet Index (LPI) to illustrate the effects of data sources on EBV representativeness. We find that long-term monitoring schemes informing the LPI are still scarce outside of Europe and North America and that ecological field studies play a key role in covering that gap. Achieving representative EBV datasets will depend both on the ability to integrate available data, through data harmonization and modeling efforts, and on the establishment of new monitoring programs to address critical data gaps.

  5. Effects of Aging-Time Reference on the Long Term Behavior of the IM7/K3B Composite

    NASA Technical Reports Server (NTRS)

    Veazie, David R.; Gates, Thomas S.

    1998-01-01

An analytical study was undertaken to investigate the effects of the time-based shift reference on the long term behavior of the graphite reinforced thermoplastic polyimide composite IM7/K3B at elevated temperature. Creep compliance and the effects of physical aging on the time dependent response were measured for uniaxial loading at several isothermal conditions below the glass transition temperature (T_g). Two matrix dominated loading modes, shear and transverse, were investigated in tension and compression. The momentary sequenced creep/aging curves were collapsed through a horizontal (time) shift using the shortest, middle and longest aging time curve as the reference curve. Linear viscoelasticity was used to characterize the creep/recovery behavior and superposition techniques were used to establish the physical aging related material constants. The use of effective time expressions in a laminated plate model allowed for the prediction of long term creep compliance. The effect of using different reference curves with time/aging-time superposition was most sensitive to the physical aging shift rate at lower test temperatures. Depending on the loading mode, the reference curve used can result in a more accurate long term prediction, especially at lower test temperatures.

  6. Effects of variable regolith depth, hydraulic properties, and rainfall on debris-flow initiation during the September 2013 northern Colorado Front Range rainstorm

    NASA Astrophysics Data System (ADS)

    Baum, R. L.; Coe, J. A.; Kean, J. W.; Jones, E. S.; Godt, J.

    2015-12-01

Heavy rainfall during 9-13 September 2013 induced about 1100 debris flows in the foothills and mountains of the northern Colorado Front Range. Weathered bedrock was partially exposed in the basal surfaces of many of the shallow source areas at depths ranging from 0.2 to 5 m. Typical values of saturated hydraulic conductivity of soils and regolith units mapped in the source areas range from about 10⁻⁴ to 10⁻⁶ m/s, with a median value of 2.8 × 10⁻⁵ m/s based on the number of source areas in each map unit. Rainfall intensities varied spatially and temporally, from 0 to 2.5 × 10⁻⁵ m/s (90 mm/hour), with two periods of relatively heavy rainfall on September 12-13. The distribution of debris flows appears to correlate with total storm rainfall, and reported times of greatest landslide activity coincide with times of heaviest rainfall. Process-based models of rainfall infiltration and slope stability (TRIGRS) representing the observed ranges of regolith depth, hydraulic conductivity, and rainfall intensity provide additional insights about the timing and distribution of debris flows from this storm. For example, small debris flows from shallower source areas (<2 m) occurred late on September 11 and in the early morning of September 12, whereas large debris flows from deeper (3-5 m) source areas in the western part of the affected area occurred late on September 12. Timing of these flows can be understood in terms of the time required for pore pressure rise, which depends on regolith depth and rainfall intensity. The variable hydraulic properties combined with variable regolith depth and slope angles account for much of the observed range in timing in areas of similar rainfall intensity and duration. Modeling indicates that the greatest and most rapid pore pressure rise likely occurred in areas of highest rainfall intensity and amount. This is consistent with the largest numbers of debris flows occurring on steep canyon walls in areas of high total storm rainfall.
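The depth dependence of failure timing follows from pore-pressure diffusion: the characteristic response time scales as depth squared over the hydraulic diffusivity. An order-of-magnitude sketch using the median conductivity quoted above (the specific storage is an assumed value, not from the study):

```python
# Order-of-magnitude sketch: pore-pressure diffusion time scale t_c = z^2/D,
# explaining why deeper source areas failed later than shallow ones.
Ks = 2.8e-5     # median saturated hydraulic conductivity [m/s] (from text)
Ss = 0.01       # specific storage [1/m] (assumed, for illustration only)
D = Ks / Ss     # hydraulic diffusivity [m^2/s]

t_hours = {depth: depth**2 / D / 3600.0 for depth in (1.0, 4.0)}
for depth, t in t_hours.items():
    print(f"{depth:.0f} m regolith: ~{t:.1f} h for pressure to reach the base")
```

The quadratic scaling alone gives a 16-fold difference between 1 m and 4 m regolith, consistent with shallow failures early in the storm and deep failures roughly a day later under sustained rain.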

  7. Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.

    PubMed

    Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2017-05-01

    Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
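The decomposition described above — a gradient descent step on the smooth data-fidelity term followed by a proximal update for the (possibly nonsmooth) regularizer — can be shown on a toy problem. The sketch below is a generic proximal-gradient iteration with an ℓ1 penalty and a random linear operator standing in for the wave-physics forward model; it illustrates the splitting idea, not the paper's dual averaging or source-encoding machinery:

```python
import numpy as np

# Proximal-gradient splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# A and b are synthetic stand-ins for the USCT forward operator and data.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20); x_true[[3, 7, 15]] = [2.0, -1.5, 1.0]
b = A @ x_true
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of grad

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1; never differentiates the penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(20)
for _ in range(500):
    grad = A.T @ (A @ x - b)                         # data-fidelity gradient
    x = soft_threshold(x - step * grad, step * lam)  # regularization prox

print(np.round(x[[3, 7, 15]], 2))
```

Note that the ℓ1 term enters only through its proximal operator (soft thresholding), which is why nonsmooth penalties pose no difficulty for this class of methods.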

  8. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
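The iterative hyperbolic (TDoA) approach can be sketched compactly: linearize the range-difference residuals and update the position estimate with Gauss-Newton, in the spirit of the Newton-Raphson-based HLS method compared in the survey. This is a synthetic 2-D example with invented sensor and source positions:

```python
import numpy as np

# Gauss-Newton solution of a TDoA (hyperbolic) localization problem.
c = 343.0                                        # speed of sound [m/s]
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
src = np.array([3.0, 6.0])                       # ground-truth emitter

d = np.linalg.norm(sensors - src, axis=1)
tdoa = (d - d[0]) / c                            # TDoAs relative to sensor 0

x = np.array([5.0, 5.0])                         # initial guess
for _ in range(20):
    r = np.linalg.norm(sensors - x, axis=1)
    res = (r - r[0]) / c - tdoa                  # residuals
    J = ((x - sensors) / r[:, None] - (x - sensors[0]) / r[0]) / c
    dx, *_ = np.linalg.lstsq(J[1:], -res[1:], rcond=None)  # skip reference
    x = x + dx

print(np.round(x, 3))
```

With noise-free TDoAs and a well-conditioned sensor layout the iteration converges to the true position; with sampling errors, the residual norm at convergence quantifies the localization uncertainty.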

  9. Survey on the Performance of Source Localization Algorithms

    PubMed Central

    2017-01-01

The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. PMID:29156565

  10. Retrofitting LID Practices into Existing Neighborhoods: Is It Worth It?

    NASA Astrophysics Data System (ADS)

    Wright, Timothy J.; Liu, Yaoze; Carroll, Natalie J.; Ahiablame, Laurent M.; Engel, Bernard A.

    2016-04-01

Low-impact development (LID) practices are gaining popularity as an approach to manage stormwater close to the source. LID practices reduce infrastructure requirements and help maintain hydrologic processes similar to predevelopment conditions. Studies have shown LID practices to be effective in reducing runoff and improving water quality. However, little has been done to aid decision makers in selecting the most effective practices for their needs and budgets. The long-term hydrologic impact assessment LID model was applied to four neighborhoods in Lafayette, Indiana using readily available data sources to compare LID practices by analyzing runoff volumes, implementation cost, and the approximate period needed to achieve payback on the investment. Depending on the LID practice and adoption level, 10-70 % reductions in runoff volumes could be achieved. The cost per cubic meter of runoff reduction was highly variable depending on the LID practice and the land use to which it was applied, ranging from around $3 to almost $600. In some cases the savings from reduced runoff volumes paid back the LID practice cost with interest in less than 3 years, while in other cases it was not possible to generate a payback. Decision makers need this information to establish realistic goals and make informed decisions regarding LID practices before moving into detailed designs, thereby saving time and resources.
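The payback-with-interest calculation behind such comparisons is a discounted cash-flow problem: find the first year in which the cumulative discounted savings cover the installation cost. A back-of-the-envelope sketch with entirely hypothetical numbers (not values from the Lafayette study):

```python
# Discounted payback period for a hypothetical LID practice.
install_cost = 12_000.0   # up-front cost of the practice [$]
runoff_reduction = 800.0  # runoff volume avoided per year [m^3]
fee_per_m3 = 1.5          # stormwater fee avoided per m^3 [$]
rate = 0.03               # annual discount rate

annual_saving = runoff_reduction * fee_per_m3
npv, year = -install_cost, 0
while npv < 0 and year < 100:
    year += 1
    npv += annual_saving / (1 + rate) ** year

print(year if npv >= 0 else "no payback within 100 years")
```

If the annual savings never exceed the financing cost (e.g., a practice at the $600-per-cubic-meter end of the range), the loop exits without a payback, which mirrors the "not possible to generate a payback" cases in the study.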

  11. Selenium mobility and distribution in irrigated and nonirrigated alluvial soils

    USGS Publications Warehouse

    Fio, John L.; Fujii, Roger; Deverel, S.J.

    1991-01-01

Dissolution and leaching of soil salts by irrigation water is a primary source of Se to shallow groundwater in the western San Joaquin Valley, California. In this study, the mobility and distribution of selenite and selenate in soils with different irrigation and drainage histories was evaluated using sorption experiments and an advection-dispersion model. The sorption studies showed that selenate (15–12400 µg Se L⁻¹) is not adsorbed to soil, whereas selenite (10–5000 µg Se L⁻¹) is rapidly adsorbed. The time lag between adsorption and desorption of selenite is considerable, indicating a dependence of reaction rate on reaction direction (hysteresis). Selenite adsorption and desorption isotherms were different, and both were described with the Freundlich equation. Model results and chemical analyses of extracts from the soil samples showed that selenite is resistant to leaching and therefore can represent a potential long-term source of Se to groundwater. In contrast, selenate behaves as a conservative constituent under alkaline and oxidized conditions and is easily leached from soil.
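The Freundlich equation mentioned above, q = K_f·Cⁿ, is typically fitted by linearizing in log space: log q = log K_f + n·log C. A sketch with synthetic selenite-like data (the K_f and n values below are invented, not the paper's fitted constants):

```python
import numpy as np

# Fitting the Freundlich isotherm q = Kf * C**n via log-log regression.
C = np.array([10.0, 50.0, 200.0, 1000.0, 5000.0])  # solution conc. [ug/L]
q = 2.0 * C ** 0.8                                 # synthetic sorbed conc.

n, logKf = np.polyfit(np.log10(C), np.log10(q), 1)
Kf = 10 ** logKf
print(Kf, n)
```

Fitting adsorption and desorption data separately, as in the study, yields two distinct (K_f, n) pairs; their difference is the quantitative signature of the hysteresis.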

  12. Integrated Microfluidic Membrane Transistor Utilizing Chemical Information for On-Chip Flow Control.

    PubMed

    Frank, Philipp; Schreiter, Joerg; Haefner, Sebastian; Paschew, Georgi; Voigt, Andreas; Richter, Andreas

    2016-01-01

Microfluidics is a great enabling technology for biology, biotechnology, chemistry and general life sciences. Despite many promising predictions of its progress, microfluidics has not reached its full potential yet. To unleash this potential, we propose the use of intrinsically active hydrogels, which work as sensors and actuators at the same time, in microfluidic channel networks. These materials transduce a chemical input signal, such as a substance concentration, into a mechanical output. This way chemical information is processed and analyzed on the spot without the need for an external control unit. Inspired by the development of electronics, our approach focuses on the development of single transistor-like components, which have the potential to be used in an integrated circuit technology. Here, we present the membrane-isolated chemical volume phase transition transistor (MIS-CVPT). The device is characterized in terms of the flow rate from source to drain, depending on the chemical concentration in the control channel, the source-drain pressure drop and the operating temperature.

  13. Virtual Plant Tissue: Building Blocks for Next-Generation Plant Growth Simulation

    PubMed Central

    De Vos, Dirk; Dzhurakhalov, Abdiravuf; Stijven, Sean; Klosiewicz, Przemyslaw; Beemster, Gerrit T. S.; Broeckhove, Jan

    2017-01-01

    Motivation: Computational modeling of plant developmental processes is becoming increasingly important. Cellular resolution plant tissue simulators have been developed, yet they are typically describing physiological processes in an isolated way, strongly delimited in space and time. Results: With plant systems biology moving toward an integrative perspective on development we have built the Virtual Plant Tissue (VPTissue) package to couple functional modules or models in the same framework and across different frameworks. Multiple levels of model integration and coordination enable combining existing and new models from different sources, with diverse options in terms of input/output. Besides the core simulator the toolset also comprises a tissue editor for manipulating tissue geometry and cell, wall, and node attributes in an interactive manner. A parameter exploration tool is available to study parameter dependence of simulation results by distributing calculations over multiple systems. Availability: Virtual Plant Tissue is available as open source (EUPL license) on Bitbucket (https://bitbucket.org/vptissue/vptissue). The project has a website https://vptissue.bitbucket.io. PMID:28523006

  14. The 2016 Al-Mishraq sulphur plant fire: Source and health risk area estimation

    NASA Astrophysics Data System (ADS)

    Björnham, Oscar; Grahn, Håkan; von Schoenberg, Pontus; Liljedahl, Birgitta; Waleij, Annica; Brännström, Niklas

    2017-11-01

On October 20, 2016, Daesh (Islamic State) set fire to the Al-Mishraq sulphur production site as the battle of Mosul in northern Iraq intensified. An extensive plume of toxic sulphur dioxide and hydrogen sulphide caused widespread casualties. The intensity of the SO2 release reached levels typical of minor volcanic eruptions, and the plume was observed by several satellites. By investigating measurement data from instruments on the MetOp-A, MetOp-B, Aura and Suomi satellites, we estimated the time-dependent source term at 161 kilotonnes of sulphur dioxide released into the atmosphere over seven days. A long-range dispersion model was used to simulate the atmospheric transport over the Middle East. The ground-level concentrations predicted by the simulation were compared with observations from the Turkey National Air Quality Monitoring Network. Finally, a probit analysis of the simulated data provided an estimate of the health risk area, which was compared to reported urgent medical treatments.

  15. Integrated Microfluidic Membrane Transistor Utilizing Chemical Information for On-Chip Flow Control

    PubMed Central

    Frank, Philipp; Schreiter, Joerg; Haefner, Sebastian; Paschew, Georgi; Voigt, Andreas; Richter, Andreas

    2016-01-01

Microfluidics is a great enabling technology for biology, biotechnology, chemistry and general life sciences. Despite many promising predictions of its progress, microfluidics has not reached its full potential yet. To unleash this potential, we propose the use of intrinsically active hydrogels, which work as sensors and actuators at the same time, in microfluidic channel networks. These materials transduce a chemical input signal, such as a substance concentration, into a mechanical output. This way chemical information is processed and analyzed on the spot without the need for an external control unit. Inspired by the development of electronics, our approach focuses on the development of single transistor-like components, which have the potential to be used in an integrated circuit technology. Here, we present the membrane-isolated chemical volume phase transition transistor (MIS-CVPT). The device is characterized in terms of the flow rate from source to drain, depending on the chemical concentration in the control channel, the source-drain pressure drop and the operating temperature. PMID:27571209

  16. Towards an accurate real-time locator of infrasonic sources

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.

    2017-11-01

    Infrasonic signals propagate from an atmospheric source through media with stochastic, rapidly space-varying conditions. Hence, their travel times, their amplitudes at sensor recordings, and even their manifestation in the so-called "shadow zones" are random. The traditional least-squares technique for locating infrasonic sources is therefore often ineffective, and the search for the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published on the Bayesian Infrasonic Source Localization (BISL) method, which is based on computing the posterior probability density function (PPDF) of the source location as a convolution of the a priori probability distribution function (APDF) of the propagation model parameters with the likelihood function (LF) of the observations. The present study develops BISL further, aiming at higher accuracy and stability of the source location results and a lower computational load. We critically analyse previous algorithms and propose several new ones. First, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm may be among the most accurate, provided that an adequate APDF and LF are used. We then suggest using summation instead of integration in the general PPDF calculation for increased robustness, which leads to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. The first, previously suggested but not yet properly exploited, is the so-called "celerity-range histograms" (CRHs). The second follows from earlier findings of a linear mean travel time for the first four infrasonic phases in overlapping consecutive distance ranges.
    This stochastic model is extended here to a regional distance of 1000 km, and the APDF introduced is a probabilistic combination of this travel-time model with range-dependent probability distributions of the phase arrival-time picks. To illustrate the improvements in both computation time and location accuracy, we compare location results for the new algorithms, previously published BISL-type algorithms and the least-squares location technique. This comparison is provided via a case study of different typical spatial data distributions and a statistical experiment using a database of 36 ground-truth explosions from the Utah Test and Training Range (UTTR), recorded during the US summer season at USArray transportable seismic stations deployed near the site between 2006 and 2008.
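
    The grid-summation variant described above can be sketched as follows. The station coordinates, arrival picks, and Gaussian celerity prior below are hypothetical stand-ins (the paper's APDF comes from celerity-range histograms and a piecewise-linear travel-time model); the sketch only illustrates replacing integration with a discrete search over space and origin time:

```python
import numpy as np

# Hypothetical station coordinates (km) and arrival-time picks (s); the
# Gaussian celerity prior is a stand-in for a CRH-based APDF.
stations = np.array([[0.0, 100.0], [120.0, -30.0], [-80.0, 60.0]])
picks = np.array([394.83, 476.53, 394.83])
c_mean, c_std = 0.29, 0.04            # km/s

xs = np.linspace(-150.0, 150.0, 61)   # 5-km search grid
ys = np.linspace(-150.0, 150.0, 61)
t0s = np.arange(0.0, 200.0, 5.0)      # candidate origin times (s)
X, Y = np.meshgrid(xs, ys)

best_ll, best_loc = -np.inf, None
for t0 in t0s:                        # summation over origin time, not integration
    tt = picks[:, None, None] - t0    # implied travel times, shape (3, 61, 61)
    r = np.hypot(stations[:, 0, None, None] - X,
                 stations[:, 1, None, None] - Y)
    c_hat = r / np.maximum(tt, 1e-3)  # implied celerity for each station
    ll = -0.5 * np.sum(((c_hat - c_mean) / c_std) ** 2, axis=0)
    k = np.unravel_index(np.argmax(ll), ll.shape)
    if ll[k] > best_ll:
        best_ll, best_loc = ll[k], (X[k], Y[k], t0)

x_hat, y_hat, t0_hat = best_loc       # MAP-style source estimate
```

    With the picks above generated from a source at the origin with a 50 s origin time and 0.29 km/s celerity, the search recovers that grid node.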

  17. RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, S.L.; Miller, L.A.; Monroe, D.K.

    1998-04-01

    This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users` guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers` guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in themore » quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.« less

  18. A Kinetic Study of Microwave Start-up of Tokamak Plasmas

    NASA Astrophysics Data System (ADS)

    du Toit, E. J.; O'Brien, M. R.; Vann, R. G. L.

    2017-07-01

    A kinetic model for studying the time evolution of the distribution function for microwave startup is presented. The model for the distribution function is two dimensional in momentum space, but, for simplicity and rapid calculations, has no spatial dependence. Experiments on the Mega Amp Spherical Tokamak have shown that the plasma current is carried mainly by electrons with energies greater than 70 keV, and effects thought to be important in these experiments are included, i.e. particle sources, orbital losses, the loop voltage and microwave heating, with suitable volume averaging where necessary to give terms independent of spatial dimensions. The model predicts current carried by electrons with the same energies as inferred from the experiments, though the current drive efficiency is smaller.

  19. High-dose-rate prostate brachytherapy inverse planning on dose-volume criteria by simulated annealing.

    PubMed

    Deist, T M; Gorissen, B L

    2016-02-07

    High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data.
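
    The dwell-time optimization loop can be sketched as below, under assumed random dose-rate matrices and a penalty form of the dose-volume constraint; the actual algorithm's objective, move set, and cooling schedule differ. The one-column incremental dose update reflects the paper's point about exploiting matrix-multiplication efficiency:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dose-rate matrices: dose[i] = sum_j D[i, j] * t[j]
n_tumor, n_oar, n_dwell = 200, 80, 30
D_tumor = rng.uniform(0.5, 2.0, (n_tumor, n_dwell))
D_oar = rng.uniform(0.1, 1.0, (n_oar, n_dwell))
presc, oar_limit, oar_frac = 100.0, 60.0, 0.10

def objective(d_t, d_o):
    cover = np.mean(d_t >= presc)              # tumor coverage (maximize)
    viol = max(0.0, np.mean(d_o > oar_limit) - oar_frac)
    return cover - 10.0 * viol                 # penalized dose-volume constraint

t = np.full(n_dwell, 2.0)                      # initial dwell times (s)
dose_t, dose_o = D_tumor @ t, D_oar @ t
best = cur = objective(dose_t, dose_o)
temp = 1.0
for step in range(20000):
    j = rng.integers(n_dwell)                  # neighbor: perturb one dwell time
    delta = rng.normal(0.0, 0.5)
    if t[j] + delta < 0:
        continue
    # incremental dose update: one column, not a full matrix product
    new_t = dose_t + D_tumor[:, j] * delta
    new_o = dose_o + D_oar[:, j] * delta
    cand = objective(new_t, new_o)
    if cand >= cur or rng.random() < np.exp((cand - cur) / temp):
        t[j] += delta
        dose_t, dose_o, cur = new_t, new_o, cand
        best = max(best, cur)
    temp *= 0.9997                             # geometric cooling
```
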

  20. Conformal Collineations of the Ricci and Energy-Momentum Tensors in Static Plane Symmetric Space-Times

    NASA Astrophysics Data System (ADS)

    Akhtar, S. S.; Hussain, T.; Bokhari, A. H.; Khan, F.

    2018-04-01

    We provide a complete classification of static plane symmetric space-times according to conformal Ricci collineations (CRCs) and conformal matter collineations (CMCs) in both the degenerate and nondegenerate cases. In the case of a nondegenerate Ricci tensor, we find a general form of the vector field generating CRCs in terms of unknown functions of t and x subject to some integrability conditions. We then solve the integrability conditions in different cases depending upon the nature of the Ricci tensor and conclude that the static plane symmetric space-times have a 7-, 10- or 15-dimensional Lie algebra of CRCs. Moreover, we find that these space-times admit an infinite number of CRCs if the Ricci tensor is degenerate. We use a similar procedure to study CMCs in the case of a degenerate or nondegenerate matter tensor. We obtain the exact form of some static plane symmetric space-time metrics that admit nontrivial CRCs and CMCs. Finally, we present some physical applications of our obtained results by considering a perfect fluid as a source of the energy-momentum tensor.

  1. Determining the Intensity of a Point-Like Source Observed on the Background of AN Extended Source

    NASA Astrophysics Data System (ADS)

    Kornienko, Y. V.; Skuratovskiy, S. I.

    2014-12-01

    The problem of determining the time dependence of intensity of a point-like source in case of atmospheric blur is formulated and solved by using the Bayesian statistical approach. A pointlike source is supposed to be observed on the background of an extended source with constant in time though unknown brightness. The equation system for optimal statistical estimation of the sequence of intensity values in observation moments is obtained. The problem is particularly relevant for studying gravitational mirages which appear while observing a quasar through the gravitational field of a far galaxy.

  2. MODULATING EMISSIONS FROM ELECTRIC GENERATING UNITS AS A FUNCTION OF METEOROLOGICAL VARIABLES

    EPA Science Inventory

    Electric Generating Units (EGUs) are an important source of emissions of nitrogen oxides (NOx), which react with volatile organic compounds (VOCs) in the presence of sunlight to form ozone. Emissions from EGUs are believed to vary depending on short-term demands for electricity;...

  3. On the geometry dependence of differential pathlength factor for near-infrared spectroscopy. I. Steady-state with homogeneous medium

    PubMed Central

    Piao, Daqing; Barbour, Randall L.; Graber, Harry L.; Lee, Daniel C.

    2015-01-01

    This work analytically examines the dependence of the differential pathlength factor (DPF) for steady-state photon diffusion in a homogeneous medium on the shape, dimension, and absorption and reduced scattering coefficients of the medium. The medium geometries considered include a semi-infinite geometry, an infinite-length cylinder evaluated along the azimuthal direction, and a sphere. The steady-state photon fluence rate in the cylinder and sphere geometries is represented by a form involving the physical source, its image with respect to the associated extrapolated half-plane, and a radius-dependent term, leading to simplified formulas for estimating the DPFs. With the source-detector distance and medium optical properties held fixed across all three geometries, and equal radii for the cylinder and sphere, the DPF is greatest in the semi-infinite geometry and smallest in the sphere geometry. When compared to results from the finite-element method, the DPFs analytically estimated for 10 to 25 mm source-detector separations on a sphere of 50 mm radius with μa = 0.01 mm⁻¹ and μs′ = 1.0 mm⁻¹ differ on average by less than 5%. The approximation for the sphere, generally valid for a diameter ≥ 20 times the effective attenuation pathlength, may be useful for rapid estimation of DPFs in near-infrared spectroscopy of an infant head and for short source-detector separations. PMID:26465613
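
    As a rough illustration of such closed-form DPF estimates, here is a commonly quoted diffusion-theory approximation for the semi-infinite geometry (not the paper's cylinder or sphere expressions, which involve the image source and radius-dependent term described above):

```python
import math

def dpf_semi_infinite(mua, musp, d):
    """Diffusion-theory DPF approximation for a semi-infinite medium.

    mua, musp in 1/mm, source-detector separation d in mm.
    """
    mueff = math.sqrt(3.0 * mua * musp)          # effective attenuation (1/mm)
    return 0.5 * math.sqrt(3.0 * musp / mua) * (1.0 - 1.0 / (1.0 + d * mueff))

# Optical properties quoted in the abstract: mua = 0.01 /mm, musp' = 1.0 /mm
for d in (10.0, 25.0):
    print(d, round(dpf_semi_infinite(0.01, 1.0, d), 2))  # ≈ 5.49 and 7.04
```

    The DPF grows with separation and saturates at large d, consistent with the abstract's observation that geometry and optical properties jointly set the pathlength scaling.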

  4. M≥7 Earthquake rupture forecast and time-dependent probability for the Sea of Marmara region, Turkey

    USGS Publications Warehouse

    Murru, Maura; Akinci, Aybige; Falcone, Guiseppe; Pucci, Stefano; Console, Rodolfo; Parsons, Thomas E.

    2016-01-01

    We forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years using a new fault-segmentation model. We also augment time-dependent Brownian Passage Time (BPT) probability with static Coulomb stress changes (ΔCFF) from interacting faults. We calculate Mw > 6.5 probability from 26 individual fault sources in the Marmara region. We also consider a multisegment rupture model that allows higher-magnitude ruptures over some segments of the Northern branch of the North Anatolian Fault Zone (NNAF) beneath the Marmara Sea. A total of 10 different Mw=7.0 to Mw=8.0 multisegment ruptures are combined with the other regional faults at rates that balance the overall moment accumulation. We use Gaussian random distributions to treat parameter uncertainties (e.g., aperiodicity, maximum expected magnitude, slip rate, and consequently mean recurrence time) of the statistical distributions associated with each fault source. We then estimate uncertainties of the 30-year probability values for the next characteristic event obtained from three different models (Poisson, BPT, and BPT+ΔCFF) using a Monte Carlo procedure. The Gerede fault segment located at the eastern end of the Marmara region shows the highest 30-yr probability, with a Poisson value of 29%, and a time-dependent interaction probability of 48%. We find an aggregated 30-yr Poisson probability of M >7.3 earthquakes at Istanbul of 35%, which increases to 47% if time dependence and stress transfer are considered. We calculate a 2-fold probability gain (ratio time-dependent to time-independent) on the southern strands of the North Anatolian Fault Zone.
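
    The BPT ingredient of such forecasts can be sketched with the inverse-Gaussian CDF. The fault parameters below (mean recurrence, aperiodicity, elapsed time) are hypothetical; the study additionally perturbs such parameters via Monte Carlo and adds Coulomb stress interaction:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bpt_cdf(t, mu, alpha):
    """CDF of the Brownian Passage Time (inverse Gaussian) distribution
    with mean recurrence mu and aperiodicity alpha."""
    if t <= 0:
        return 0.0
    a = math.sqrt(mu / t) / alpha
    return phi(a * (t / mu - 1.0)) + math.exp(2.0 / alpha**2) * phi(-a * (t / mu + 1.0))

def cond_prob(elapsed, window, mu, alpha):
    """P(rupture within `window` yr | quiet for `elapsed` yr since last event)."""
    f0 = bpt_cdf(elapsed, mu, alpha)
    return (bpt_cdf(elapsed + window, mu, alpha) - f0) / (1.0 - f0)

# Hypothetical fault: 250-yr mean recurrence, aperiodicity 0.5, 200 yr elapsed
p_bpt = cond_prob(200.0, 30.0, 250.0, 0.5)
p_poisson = 1.0 - math.exp(-30.0 / 250.0)   # time-independent reference
```

    For a fault late in its cycle the conditional BPT probability exceeds the Poisson value, which is the sense in which time dependence raises the 30-yr numbers quoted in the abstract.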

  5. Collecting Psycholinguistic Response Time Data Using Amazon Mechanical Turk

    PubMed Central

    Enochson, Kelly; Culbertson, Jennifer

    2015-01-01

    Researchers in linguistics and related fields have recently begun exploiting online crowd-sourcing tools, like Amazon Mechanical Turk (AMT), to gather behavioral data. While this method has been successfully validated for various offline measures—grammaticality judgment or other forced-choice tasks—its use for mainstream psycholinguistic research remains limited. This is because psycholinguistic effects are often dependent on relatively small differences in response times, and there remains some doubt as to whether precise timing measurements can be gathered over the web. Here we show that three classic psycholinguistic effects can in fact be replicated using AMT in combination with open-source software for gathering response times client-side. Specifically, we find reliable effects of subject definiteness, filler-gap dependency processing, and agreement attraction in self-paced reading tasks using approximately the same numbers of participants and/or trials as similar laboratory studies. Our results suggest that psycholinguists can and should be taking advantage of AMT and similar online crowd-sourcing marketplaces as a fast, low-resource alternative to traditional laboratory research. PMID:25822348

  6. Blind source separation based on time-frequency morphological characteristics for rigid acoustic scattering by underwater objects

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Li, Xiukun

    2016-06-01

    Separation of the components of rigid acoustic scattering by underwater objects is essential for obtaining the structural characteristics of such objects. To overcome the problem that rigid structures appear to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects in a time-frequency distribution is derived. Using a morphological filter, the distinct characteristics that auto-terms and cross-terms exhibit in a Wigner-Ville Distribution (WVD) can be exploited to remove cross-term interference. By selecting time-frequency points belonging to the auto-terms, the accuracy of BSS can be improved. To analyze the feasibility of this new method, a simulation was carried out in which the pulse width of the transmitted signal, the relative amplitude, and the time-delay parameter were varied. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic scattering and rigid scattering are present at the same time. Experimental results confirm that the new method can be used to separate the rigid scattering structure of underwater objects.
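
    A minimal sketch of the morphological-filtering idea, using a hand-rolled binary opening on a toy time-frequency mask (a real implementation would operate on a thresholded WVD):

```python
import numpy as np

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element."""
    p = k // 2
    pad = np.pad(mask, p, constant_values=False)
    out = np.ones_like(mask)
    for di in range(-p, p + 1):
        for dj in range(-p, p + 1):
            out &= pad[p + di:p + di + mask.shape[0],
                       p + dj:p + dj + mask.shape[1]]
    return out

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    p = k // 2
    pad = np.pad(mask, p, constant_values=False)
    out = np.zeros_like(mask)
    for di in range(-p, p + 1):
        for dj in range(-p, p + 1):
            out |= pad[p + di:p + di + mask.shape[0],
                       p + dj:p + dj + mask.shape[1]]
    return out

def opening(mask, k=3):
    """Erosion then dilation: removes features thinner than k pixels."""
    return dilate(erode(mask, k), k)

# Toy binary time-frequency mask: one compact auto-term region plus
# isolated pixels mimicking oscillatory cross-term residue.
tf = np.zeros((32, 32), dtype=bool)
tf[10:20, 10:20] = True            # auto-term: compact region
tf[5, 25] = tf[25, 5] = True       # cross-term residue: isolated points
cleaned = opening(tf)
```

    The opening keeps the compact auto-term region intact while the isolated cross-term pixels vanish, which is the selection step that feeds the BSS stage.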

  7. A near-infrared, optical, and ultraviolet polarimetric and timing investigation of complex equatorial dusty structures

    NASA Astrophysics Data System (ADS)

    Marin, F.; Rojas Lobos, P. A.; Hameury, J. M.; Goosmann, R. W.

    2018-05-01

    Context. From stars to active galactic nuclei, many astrophysical systems are surrounded by an equatorial distribution of dusty material that is, in a number of cases, spatially unresolved even with cutting-edge facilities. Aims: In this paper, we investigate whether and how one can determine the unresolved, heterogeneous morphology of the dust distribution around a central bright source using time-resolved polarimetric observations. Methods: We used polarized radiative transfer simulations to study a sample of circumnuclear dusty morphologies. We explored a grid of geometrically variable models that are uniform, fragmented, and density stratified in the near-infrared, optical, and ultraviolet bands, and we present their distinctive time-dependent polarimetric signatures. Results: As expected, varying the structure of the obscuring equatorial disk has a deep impact on the inclination-dependent flux, polarization degree and angle, and time lags we observe. We find that stratified media are distinguishable by time-resolved polarimetric observations, and that the expected polarization is much higher in the infrared band than in the ultraviolet. However, because of the physical scales imposed by dust sublimation, the average time lags between the total and polarized fluxes are substantial, from months to years; these lags lengthen the observational campaigns needed to discriminate between more sophisticated, and therefore more degenerate, models. In the ultraviolet band, time lags are slightly shorter than in the infrared or optical bands, and, coupled with lower diluting starlight fluxes, time-resolved polarimetry in the UV appears more promising for future campaigns. Conclusions: Equatorial dusty disks differ in their inclination-dependent photometric, polarimetric, and timing observables, but only the combination of these different markers can yield inclination-independent constraints on the unresolved structures. Even though it is complex and time-consuming, polarized reverberation mapping in the ultraviolet-blue band is probably the best technique to rely on in this field.

  8. Interaction of Inhibitory and Facilitatory Effects of Conditioning Trials on Long-Term Memory Formation

    ERIC Educational Resources Information Center

    Hosono, Shouhei; Matsumoto, Yukihisa; Mizunami, Makoto

    2016-01-01

    Animals learn through experience and consolidate the memories into long-term storage. Conditioning parameters that induce protein synthesis-dependent long-term memory (LTM) have been the subject of extensive studies in many animals. Here we found a case in which a conditioning trial inhibits or facilitates LTM formation depending on the intervals…

  9. Protein Synthesis-Dependent Long-Term Memory Induced by One Single Associative Training Trial in the Parasitic Wasp Lariophagus distinguendus

    ERIC Educational Resources Information Center

    Steidle, Johannes L. M.; Collatz, Jana; Muller, Caroline

    2006-01-01

    Protein synthesis-dependent long-term memory in Apis mellifera and Drosophila melanogaster is formed after multiple trainings that are spaced in time. The parasitic wasp Lariophagus distinguendus remarkably differs from these species. It significantly responds to the artificial odor furfurylheptanoate (FFH) in olfactometer experiments, when this…

  10. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at the source-code level is also presented.
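
    As a small illustration of an architecture-independent, source-level optimization, the following folds constant arithmetic on a Python syntax tree (an anachronistic stand-in for the survey's compilers, but the principle of transforming the program representation independently of any target machine is the same):

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Architecture-independent optimization: fold constant arithmetic
    at the level of the program's syntax tree."""
    def visit_BinOp(self, node):
        self.generic_visit(node)          # fold the operand subtrees first
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            ops = {ast.Add: lambda a, b: a + b,
                   ast.Sub: lambda a, b: a - b,
                   ast.Mult: lambda a, b: a * b}
            fn = ops.get(type(node.op))
            if fn is not None:
                folded = ast.Constant(fn(node.left.value, node.right.value))
                return ast.copy_location(folded, node)
        return node

src = "y = 2 * 3 + x * (4 - 1)"
tree = ast.fix_missing_locations(ConstantFolder().visit(ast.parse(src)))
print(ast.unparse(tree))   # y = 6 + x * 3
```
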

  11. Development of low-frequency kernel-function aerodynamics for comparison with time-dependent finite-difference methods

    NASA Technical Reports Server (NTRS)

    Bland, S. R.

    1982-01-01

    Finite difference methods for unsteady transonic flow frequently use simplified equations in which certain of the time-dependent terms are omitted from the governing equations. Kernel functions are derived for two-dimensional subsonic flow that provide accurate solutions of the linearized potential equation with the same time-dependent terms omitted. These solutions make possible a direct evaluation of the finite difference codes for the linear problem. Calculations with two of these low-frequency kernel functions verify the accuracy of the LTRAN2 and HYTRAN2 finite difference codes. Comparisons of the low-frequency kernel function results with the Possio kernel function solution of the complete linear equations indicate the adequacy of the HYTRAN approximation for frequencies in the range of interest for flutter calculations.

  12. Time dependent emission line profiles in the radially streaming particle model of Seyfert galaxy nuclei and quasi-stellar objects

    NASA Technical Reports Server (NTRS)

    Hubbard, R.

    1974-01-01

    The radially-streaming particle model for broad quasar and Seyfert galaxy emission features is modified to include sources of time dependence. The results are suggestive of reported observations of multiple components, variability, and transient features in the wings of Seyfert and quasi-stellar emission lines.

  13. Analysis and Synthesis of Tonal Aircraft Noise Sources

    NASA Technical Reports Server (NTRS)

    Allen, Matthew P.; Rizzi, Stephen A.; Burdisso, Ricardo; Okcu, Selen

    2012-01-01

    Fixed and rotary wing aircraft operations can have a significant impact on communities in proximity to airports. Simulation of predicted aircraft flyover noise, paired with listening tests, is useful to noise reduction efforts since it allows direct annoyance evaluation of aircraft or operations currently in the design phase. This paper describes efforts to improve the realism of synthesized source noise by including short-term fluctuations, specifically for inlet-radiated tones resulting from the fan stage of turbomachinery. It details analysis performed on an existing set of recorded turbofan data to isolate inlet-radiated tonal fan noise, then extract and model short-term tonal fluctuations using the analytic signal. Methodologies for synthesizing time-variant tonal and broadband turbofan noise sources using measured fluctuations are also described. Finally, subjective listening test results are discussed which indicate that time-variant synthesized source noise is perceived to be very similar to recordings.
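
    The analytic-signal extraction described above can be sketched on a synthetic tone; the carrier frequency and fluctuation rates below are invented, not taken from the measured turbofan data:

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
# Hypothetical fan tone: 1 kHz carrier with slow amplitude and phase wander
am = 1.0 + 0.2 * np.sin(2 * np.pi * 3 * t)
tone = am * np.cos(2 * np.pi * 1000 * t + 0.5 * np.sin(2 * np.pi * 2 * t))

z = hilbert(tone)                       # analytic signal
env = np.abs(z)                         # short-term amplitude fluctuation
phase = np.unwrap(np.angle(z))
inst_f = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)

# Resynthesize the time-variant tone from the extracted fluctuations
resynth = env * np.cos(phase)
```

    The extracted envelope and instantaneous frequency are the "short-term fluctuations" that a synthesizer can impose on clean tonal and broadband components.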

  14. Generalized quantum Fokker-Planck equation for photoinduced nonequilibrium processes with positive definiteness condition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, Seogjoo, E-mail: sjang@qc.cuny.edu

    2016-06-07

    This work provides a detailed derivation of a generalized quantum Fokker-Planck equation (GQFPE) appropriate for photo-induced quantum dynamical processes. The path integral method pioneered by Caldeira and Leggett (CL) [Physica A 121, 587 (1983)] is extended by utilizing a nonequilibrium influence functional applicable to different baths for the ground and the excited electronic states. Both nonequilibrium and non-Markovian effects are accounted for consistently by expanding the paths in the exponents of the influence functional up to the second order with respect to time. This procedure results in approximations involving only single time integrations for the exponents of the influence functional but with additional time dependent boundary terms that have been ignored in previous works. The boundary terms complicate the derivation of a time evolution equation but do not affect position dependent physical observables or the dynamics in the steady state limit. For an effective density operator with the boundary terms factored out, a time evolution equation is derived, through short time expansion of the effective action and Gaussian integration in analytically continued complex domain of space. This leads to a compact form of the GQFPE with time dependent kernels and additional terms, which renders the resulting equation to be in the Dekker form [Phys. Rep. 80, 1 (1981)]. Major terms of the equation are analyzed for the case of Ohmic spectral density with Drude cutoff, which shows that the new GQFPE satisfies the positive definiteness condition in medium to high temperature limit. Steady state limit of the GQFPE is shown to approach the well-known expression derived by CL in the high temperature and Markovian bath limit and also provides additional corrections due to quantum and non-Markovian effects of the bath.

  15. Generalized quantum Fokker-Planck equation for photoinduced nonequilibrium processes with positive definiteness condition

    NASA Astrophysics Data System (ADS)

    Jang, Seogjoo

    2016-06-01

    This work provides a detailed derivation of a generalized quantum Fokker-Planck equation (GQFPE) appropriate for photo-induced quantum dynamical processes. The path integral method pioneered by Caldeira and Leggett (CL) [Physica A 121, 587 (1983)] is extended by utilizing a nonequilibrium influence functional applicable to different baths for the ground and the excited electronic states. Both nonequilibrium and non-Markovian effects are accounted for consistently by expanding the paths in the exponents of the influence functional up to the second order with respect to time. This procedure results in approximations involving only single time integrations for the exponents of the influence functional but with additional time dependent boundary terms that have been ignored in previous works. The boundary terms complicate the derivation of a time evolution equation but do not affect position dependent physical observables or the dynamics in the steady state limit. For an effective density operator with the boundary terms factored out, a time evolution equation is derived, through short time expansion of the effective action and Gaussian integration in analytically continued complex domain of space. This leads to a compact form of the GQFPE with time dependent kernels and additional terms, which renders the resulting equation to be in the Dekker form [Phys. Rep. 80, 1 (1981)]. Major terms of the equation are analyzed for the case of Ohmic spectral density with Drude cutoff, which shows that the new GQFPE satisfies the positive definiteness condition in medium to high temperature limit. Steady state limit of the GQFPE is shown to approach the well-known expression derived by CL in the high temperature and Markovian bath limit and also provides additional corrections due to quantum and non-Markovian effects of the bath.

  16. VNAP2: A Computer Program for Computation of Two-dimensional, Time-dependent, Compressible, Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Cline, M. C.

    1981-01-01

    A computer program, VNAP2, for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow is presented. It solves the two dimensional, time dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing length model, a one equation model, or the Jones-Launder two equation model. The geometry may be a single or a dual flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference plane characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet powered afterbodies, airfoils, and free jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.
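
    The MacCormack predictor-corrector step used for the interior grid points can be illustrated on 1-D linear advection (a drastic simplification of the 2-D Navier-Stokes system VNAP2 solves; the grid size and CFL number are arbitrary):

```python
import numpy as np

# Unsplit MacCormack scheme for u_t + a u_x = 0 on a periodic domain.
n, a, cfl = 200, 1.0, 0.5
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / a
u = np.exp(-200.0 * (x - 0.3) ** 2)    # smooth initial pulse
u0 = u.copy()

steps = int(round(1.0 / (a * dt)))     # advect once around the domain
for _ in range(steps):
    # predictor: forward difference
    up = u - a * dt / dx * (np.roll(u, -1) - u)
    # corrector: backward difference, averaged with the predictor
    u = 0.5 * (u + up - a * dt / dx * (up - np.roll(up, 1)))

err = np.max(np.abs(u - u0))           # second-order accurate for smooth data
```

    After one full traversal the pulse returns nearly unchanged, and the scheme conserves the discrete integral of u exactly on the periodic grid.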

  17. Portable atomic frequency standard based on coherent population trapping

    NASA Astrophysics Data System (ADS)

    Shi, Fan; Yang, Renfu; Nian, Feng; Zhang, Zhenwei; Cui, Yongshun; Zhao, Huan; Wang, Nuanrang; Feng, Keming

    2015-05-01

    In this work, a portable atomic frequency standard based on coherent population trapping is designed and demonstrated. To achieve a portable prototype, a single transverse mode 795 nm VCSEL modulated by a 3.4 GHz RF source is used as the pump laser, generating coherent light fields. The pump beams pass through a vapor cell containing the atomic gas and a buffer gas. This vapor cell is surrounded by a magnetic shield and placed inside a solenoid which applies a longitudinal magnetic field to lift the degeneracy of the Zeeman energy levels and to separate the resonance signal, which has no first-order magnetic field dependence, from the field-dependent resonances. The electrical control system comprises two control loops: the first locks the laser wavelength to the minimum of the absorption spectrum; the second locks the modulation frequency and the output standard frequency. Furthermore, we designed the micro physical package and successfully realized the locking of a portable coherent-population-trapping atomic frequency standard prototype. The short-term frequency stability of the whole system is measured to be 6×10⁻¹¹ at an averaging time of 1 s, reaching 5×10⁻¹² at an averaging time of 1000 s.

  18. Long-term primary culture of mouse mammary tumor cells: production of virus.

    PubMed

    Young, L J; Cardiff, R D; Ashley, R L

    1975-05-01

    Long-term primary cultures of mouse mammary tumor cells proved an excellent source of mouse mammary tumor virus (MMTV). Virus purified from these primary cultures had the same morphologic, biochemical, immunologic, and biologic characteristics as MMTV. The MMTV-protein equivalents released into the medium were quantified by radioimmunoassay for MMTV. Peak production levels were 20-40 μg MMTV protein equivalents per 75-cm² flask per 24 hours. These cultures produced MMTV for as long as 90 days. MMTV cultivation depended on the initial cell-plating density and on hormones. Maximal MMTV release was obtained at a plating density of 1 × 10⁶ cells/cm² in the presence of insulin and hydrocortisone. Insulin alone gave basal levels of MMTV, and hydrocortisone alone increased MMTV release only threefold, but insulin and hydrocortisone together effected an eightfold increase in MMTV release. This suggests that hydrocortisone had a primary effect on MMTV release and that insulin acted synergistically with hydrocortisone to maximize it.

  19. A comparison of three-dimensional nonequilibrium solution algorithms applied to hypersonic flows with stiff chemical source terms

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Venkatapathy, Ethiraj

    1993-01-01

    Three solution algorithms, explicit underrelaxation, point implicit, and lower-upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight-order-of-magnitude drop in the L2 norm of the energy residual in 1/3 to 1/2 the Cray C-90 computer time of the point implicit and explicit underrelaxation methods. The explicit underrelaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40 the performance of the LUSGS algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated, and grid dependency questions are explored.
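
    The point-implicit idea, treating only the stiff local source term implicitly while the rest of the update stays explicit, can be illustrated on a single relaxation equation; the rate constant below is arbitrary:

```python
# Point-implicit treatment of a stiff relaxation source term,
# du/dt = S(u) = -k (u - u_eq), with an assumed rate k >> 1.
k, u_eq = 1.0e4, 0.2
dt, steps = 1.0e-2, 100          # dt * k = 100: far beyond explicit stability

def S(u):
    return -k * (u - u_eq)

u_expl = u_pimp = 1.0
for _ in range(steps):
    # explicit Euler: diverges once dt > 2 / k
    u_expl = u_expl + dt * S(u_expl)
    # point implicit: solve (1 - dt * dS/du) * du = dt * S(u) locally
    dSdu = -k
    u_pimp = u_pimp + dt * S(u_pimp) / (1.0 - dt * dSdu)
```

    The explicit iterate blows up while the point-implicit one relaxes to the equilibrium value, which is why such treatments tolerate chemically stiff source terms at large time steps.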

  20. See–saw relationship of the Holocene East Asian–Australian summer monsoon

    PubMed Central

    Eroglu, Deniz; McRobie, Fiona H.; Ozken, Ibrahim; Stemler, Thomas; Wyrwoll, Karl-Heinz; Breitenbach, Sebastian F. M.; Marwan, Norbert; Kurths, Jürgen

    2016-01-01

    The East Asian–Indonesian–Australian summer monsoon (EAIASM) links the Earth's hemispheres and provides a heat source that drives global circulation. At seasonal and inter-seasonal timescales, the summer monsoon of one hemisphere is linked via outflows from the winter monsoon of the opposing hemisphere. Long-term phase relationships between the East Asian summer monsoon (EASM) and the Indonesian–Australian summer monsoon (IASM) are poorly understood, raising questions of long-term adjustments to future greenhouse-triggered climate change and whether these changes could ‘lock in' possible IASM and EASM phase relationships in a region dependent on monsoonal rainfall. Here we show that a newly developed nonlinear time series analysis technique allows confident identification of strong versus weak monsoon phases at millennial to sub-centennial timescales. We find a see–saw relationship over the last 9,000 years—with strong and weak monsoons opposingly phased and triggered by solar variations. Our results provide insights into centennial- to millennial-scale relationships within the wider EAIASM regime. PMID:27666662

  2. Heavy metals in urban soils of East St. Louis, IL. Part II: Leaching characteristics and modeling.

    PubMed

    Kaminski, M D; Landsberger, S

    2000-09-01

    The city of East St. Louis, IL, has a history of abundant industrial activities including smelters of ferrous and non-ferrous metals, a coal-fired power plant, companies that produced organic and inorganic chemicals, and petroleum refineries. Following a gross assessment of heavy metals in the community soils (see Part I of this two-part series), leaching tests were performed on specific soils to elucidate heavy metal-associated mineral fractions and general leachability. Leaching experiments, including the Toxicity Characteristic Leaching Procedure (TCLP) and column tests, and sequential extractions illustrated the low leachability of metals in East St. Louis soils. The column leachate results were modeled using a formulation developed for fly ash leaching. The importance of instantaneous dissolution was evident from the model. By incorporating desorption/adsorption terms into the source term, the model reproduced the time-dependent heavy metal leachate concentrations very well. The results demonstrate the utility of a simple model to describe heavy metal leaching from contaminated soils.

  4. Sanitary protection zoning based on time-dependent vulnerability assessment model - case examples at two different type of aquifers

    NASA Astrophysics Data System (ADS)

    Živanović, Vladimir; Jemcov, Igor; Dragišić, Veselin; Atanacković, Nebojša

    2017-04-01

    Delineation of sanitary protection zones for a groundwater source is a comprehensive and multidisciplinary task, and no uniform zoning methodology has been established for the various aquifer types. Currently applied methods mostly rely on the horizontal groundwater travel time toward the tapping structure. Groundwater vulnerability assessment methods, on the other hand, evaluate the protective function of the unsaturated zone as an important part of groundwater source protection. In some cases surface flow is also important, because of the rapid transfer of contaminants toward zones with intense infiltration. For the delineation of sanitary protection zones, three major components should therefore be analysed: the vertical travel time through the unsaturated zone, the horizontal travel time through the saturated zone, and the surface water travel time toward intense infiltration zones. Integrating these components into one time-dependent model forms the basis of the presented method for delineating groundwater source protection zones in rocks and sediments of different porosity. The result obtained with the model represents groundwater vulnerability as the sum of the surface and groundwater travel times, and corresponds to the travel time of a potential contaminant from the ground surface to the tapping structure. This approach does not consider contaminant properties (it assesses intrinsic vulnerability), although it can easily be extended to evaluate specific groundwater vulnerability. The concept was applied to two different types of aquifers: the karst aquifer of the Blederija springs catchment and the "Beli Timok" source in a shallow intergranular aquifer. The first represents a typical karst hydrogeological system with part of the catchment under allogenic recharge; the second is a groundwater source in a shallow intergranular alluvial aquifer recharged dominantly by river-bank filtration. For the delineation of sanitary protection zones, the applied method has shown the importance of weighting all travel time components equally. In the case of the karst source, the importance of surface flow toward ponor zones is emphasized, as a consequence of the rapid travel of water compared with diffuse infiltration in the autogenic part of the catchment. In the shallow intergranular aquifer, the character of the unsaturated zone plays a more prominent role in source protection, as an important buffer against downward vertical movement. The proposed method is applicable regardless of aquifer type, and at the same time yields intelligible results that can be validated against various delineation methods. Key words: groundwater protection zoning, time-dependent model, karst aquifer, intergranular aquifer, groundwater source protection
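    A minimal numerical sketch of the time-dependent model described above, in which vulnerability is the sum of the surface, vertical (unsaturated-zone) and horizontal (saturated-zone) travel-time components. All values are invented for illustration; none come from the study:

```python
def total_travel_time(t_surface, t_vertical, t_horizontal):
    """Total travel time (days) of a potential contaminant from the
    ground surface to the tapping structure: sum of the surface-water,
    unsaturated-zone, and saturated-zone components."""
    return t_surface + t_vertical + t_horizontal

# Hypothetical cell near a ponor: fast surface flow, thin unsaturated zone
fast_cell = total_travel_time(t_surface=0.5, t_vertical=2.0, t_horizontal=7.5)

# Hypothetical alluvial cell with a thick, protective unsaturated zone
slow_cell = total_travel_time(t_surface=30.0, t_vertical=120.0, t_horizontal=60.0)

# Shorter total travel time implies higher vulnerability
assert fast_cell < slow_cell
```

    Protection zones then follow by thresholding the total travel time (e.g. all cells under some number of days fall in the innermost zone), which is why the method treats the three components on an equal footing.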

  5. Translation invariant time-dependent solutions to massive gravity II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mourad, J.; Steer, D.A., E-mail: mourad@apc.univ-paris7.fr, E-mail: steer@apc.univ-paris7.fr

    2014-06-01

    This paper is a sequel to JCAP 12 (2013) 004 and is also devoted to translation-invariant solutions of ghost-free massive gravity in its moving-frame formulation. Here we consider a mass term which is linear in the vielbein (corresponding to a β3 term in the 4D metric formulation) in addition to the cosmological constant. We determine explicitly the constraints, and from the initial value formulation show that the time-dependent solutions can have singularities at a finite time. Although the constraints give, as in the β1 case, the correct number of degrees of freedom for a massive spin-two field, we show that the lapse function can change sign at a finite time, causing a singular time evolution. This is very different from the β1 case, where time evolution is always well defined. We conclude that the β3 mass term can be pathological and should be treated with care.

  6. Time-dependent perpendicular fluctuations in the driven lattice Lorentz gas

    NASA Astrophysics Data System (ADS)

    Leitmann, Sebastian; Schwab, Thomas; Franosch, Thomas

    2018-02-01

    We present results for the fluctuations of the displacement of a tracer particle on a planar lattice pulled by a step force in the presence of impenetrable, immobile obstacles. The fluctuations perpendicular to the applied force are evaluated exactly in first order of the obstacle density for arbitrarily strong pulling and all times. The complex time-dependent behavior is analyzed in terms of the diffusion coefficient, local exponent, and the non-Skellam parameter, which quantifies deviations from the dynamics on the lattice in the absence of obstacles. The non-Skellam parameter along the force is analyzed in terms of an asymptotic model and reveals a power-law growth for intermediate times.

  7. Time-dependent spherically symmetric accretion onto compact X-ray sources

    NASA Technical Reports Server (NTRS)

    Cowie, L. L.; Ostriker, J. P.; Stark, A. A.

    1978-01-01

    Analytical arguments and a numerical hydrodynamic code are used to investigate spherically symmetric accretion onto a compact object, in an attempt to provide some insight into gas flows heated by an outgoing X-ray flux. It is shown that preheating of spherically symmetric accretion flows by energetic radiation from an X-ray source results in time-dependent behavior for a much wider range of source parameters than was determined previously and that there are two distinct types of instability. The results are compared with observations of X-ray bursters and transients as well as with theories on quasars and active galactic nuclei that involve quasi-spherically symmetric accretion onto massive black holes. Models based on spherically symmetric accretion are found to be inconsistent with observations of bursters and transients.

  8. Equivalent source modeling of the main field using MAGSAT data

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The software was considerably enhanced to accommodate a more comprehensive examination of data available for field modeling using the equivalent-sources method by (1) implementing a dynamic core allocation capability in the software system for automatic dimensioning of the normal matrix; (2) implementing a time-dependent model for the dipoles; (3) incorporating the capability to input specialized data formats in a fashion similar to models in spherical harmonics; and (4) implementing the optional ability to simultaneously estimate observatory anomaly biases where annual-means data are utilized. The time-dependence capability was demonstrated by estimating a component model of 21 deg resolution using the 14-day MAGSAT data set of Goddard's MGST (12/80). The equivalent-source model reproduced both the constant and the secular variation found in MGST (12/80).

  9. Assessment of infrasound signals recorded on seismic stations and infrasound arrays in the western United States using ground truth sources

    NASA Astrophysics Data System (ADS)

    Park, Junghyun; Hayward, Chris; Stump, Brian W.

    2018-06-01

    Ground truth sources in Utah during 2003-2013 are used to assess the contribution of temporal atmospheric conditions to infrasound detection and the predictive capabilities of atmospheric models. The ground truth sources consist of 28 long-duration static rocket motor burn tests and 28 impulsive rocket body detonations. Automated infrasound detections from a hybrid network of regional seismometers and infrasound arrays use a combination of short-term time average/long-term time average (STA/LTA) ratios and spectral analyses. These detections are grouped into station triads using a Delaunay triangulation network and then associated to estimate phase velocity and azimuth, to filter signals associated with a particular source location. The resulting range and azimuth distribution from sources to detecting stations varies seasonally and is consistent with predictions based on seasonal atmospheric models. Impulsive signals from rocket body detonations are observed at greater distances (>700 km) than the extended-duration signals generated by the rocket burn tests (up to 600 km). Infrasound energy attenuation associated with the two source types is quantified as a function of range and azimuth from infrasound amplitude measurements. Ray-tracing results using Ground-to-Space atmospheric specifications are compared to these observations and illustrate the degree to which the time variations in characteristics of the observations can be predicted over a multiple-year time period.
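    A minimal sketch of an STA/LTA onset detector of the kind used for the automated detections above. The window lengths and the synthetic signal are illustrative assumptions; the abstract does not give the actual processing parameters:

```python
import numpy as np

def sta_lta(signal, nsta, nlta):
    """Short-term-average / long-term-average ratio of signal energy.
    A transient raises the short window's average energy faster than
    the long window's, so the ratio spikes at the onset."""
    sq = signal.astype(float) ** 2                       # signal energy
    sta = np.convolve(sq, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(sq, np.ones(nlta) / nlta, mode="same")
    lta[lta == 0] = np.finfo(float).tiny                 # avoid divide-by-zero
    return sta / lta

# Synthetic noise trace with one embedded transient (illustrative only)
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 2000)
noise[1000:1100] += 8.0                                  # the "event"
ratio = sta_lta(noise, nsta=20, nlta=400)
# The ratio peaks inside the transient window; a detection is declared
# wherever the ratio exceeds a chosen trigger threshold.
```

    In practice the triggered picks from many stations would then be grouped into the Delaunay triads described in the abstract for phase-velocity and azimuth estimation.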

  10. Characteristics of Resources Represented in the OCLC CORC Database.

    ERIC Educational Resources Information Center

    Connell, Tschera Harkness; Prabha, Chandra

    2002-01-01

    Examines the characteristics of Web resources in Online Computer Library Center's (OCLC) Cooperative Online Resource Catalog (CORC) in terms of subject matter, source of content, publication patterns, and units of information chosen for representation in the database. Suggests that the ability to successfully use a database depends on…

  11. Surface Disposal of Waste Water Treatment Plant Biosludge--an Important Source of Perfluorinated Compound Contamination in the Environment

    EPA Science Inventory

    What are “Biosolids”?- “Biosolids” are what remains after WWTP processing Sewage sludge probably a more accurate term - Could contain anything that comes down the pipe to the WWTP, varies greatly depending on community type, industry effluents, plant desig...

  12. Phosphorus Import Dependency and Recycling Potential in the Global Phosphorus Mosaic

    NASA Astrophysics Data System (ADS)

    Powers, S. M.

    2017-12-01

    Nations differ widely in terms of recent P consumption trends and fertilizer trade dependencies, reflecting dynamic and globally uneven P fertilizer production, consumption, export, and import. Recovered P from urban and agricultural wastes can provide renewable sources that supplant the need to import P fertilizer, but to date, research on P recycling potential has been highly spatially segregated. Understanding of the global distribution of P recycling potential and options, and how these intersect with P import dependencies, could be used to guide long-term, spatially-prioritized planning for P, food, and water security. We integrated recent data on national P fertilizer flows, subnational P use, and landscape features within a global grid to understand how these constraints on future options for P use are distributed worldwide. This analysis illustrates several regions where combinations of high population density, cropland extent, and manure P production provide islands of opportunity for P recycling in mixed crop-livestock and populous agricultural areas. At the same time, nations with lower import ratios (net P import:consumption) contained a disproportionately large share of manure-rich croplands and populous croplands. As a further demonstration of the kinds of integrated comparisons that are possible using global land use data sets in combination with P, worldwide similarities and distinctions for P emerged from a cluster analysis. These kinds of socioeconomic-geographic patterns may foretell distinct P futures as societies address spatially uneven options for P, food, and water security.

  13. Global and Regional Temperature-change Potentials for Near-term Climate Forcers

    NASA Technical Reports Server (NTRS)

    Collins, W.J.; Fry, M. M.; Yu, H.; Fuglestvedt, J. S.; Shindell, D. T.; West, J. J.

    2013-01-01

    The emissions of reactive gases and aerosols can affect climate through the burdens of ozone, methane and aerosols, having both cooling and warming effects. These species are generally referred to as near-term climate forcers (NTCFs) or short-lived climate pollutants (SLCPs) because of their short atmospheric residence time. Their mitigation would be attractive for both air quality and climate on a 30-year timescale, provided it is not at the expense of CO2 mitigation. In this study we examine the climate effects of the emissions of NTCFs from 4 continental regions (East Asia, Europe, North America and South Asia) using results from the Task Force on Hemispheric Transport of Air Pollution Source-Receptor global chemical transport model simulations. We address 3 aerosol species (sulphate, particulate organic matter and black carbon - BC) and 4 ozone precursors (methane, reactive nitrogen oxides - NOx, volatile organic compounds - VOC, and carbon monoxide - CO). For the aerosols, the global warming potentials (GWPs) and global temperature change potentials (GTPs) are simply time-dependent scalings of the equilibrium radiative forcing, with the GTPs decreasing more rapidly with time than the GWPs. While the aerosol climate metrics have only a modest dependence on emission region, emissions of NOx and VOCs from South Asia have GWPs and GTPs of higher magnitude than those from the other northern-hemisphere regions. On a regional basis, the northern mid-latitude temperature response to northern mid-latitude emissions is approximately twice as large as the global average response for aerosol emissions, and about 20-30% larger than the global average for methane, VOC and CO emissions. We also find that temperatures in the Arctic latitudes appear to be particularly sensitive to black carbon emissions from South Asia.

  14. Solving transient acoustic boundary value problems with equivalent sources using a lumped parameter approach.

    PubMed

    Fahnline, John B

    2016-12-01

    An equivalent source method is developed for solving transient acoustic boundary value problems. The method assumes the boundary surface is discretized in terms of triangular or quadrilateral elements and that the solution is represented using the acoustic fields of discrete sources placed at the element centers. Also, the boundary condition is assumed to be specified for the normal component of the surface velocity as a function of time, and the source amplitudes are determined to match the known elemental volume velocity vector at a series of discrete time steps. Equations are given for marching-on-in-time schemes to solve for the source amplitudes at each time step for simple, dipole, and tripole source formulations. Several example problems are solved to illustrate the results and to validate the formulations, including problems with closed boundary surfaces where long-time numerical instabilities typically occur. A simple relationship between the simple and dipole source amplitudes in the tripole source formulation is derived so that the source radiates primarily in the direction of the outward surface normal. The tripole source formulation is shown to eliminate interior acoustic resonances and long-time numerical instabilities.

  15. Organic aerosol source apportionment in London 2013 with ME-2: exploring the solution space with annual and seasonal analysis

    NASA Astrophysics Data System (ADS)

    Reyes-Villegas, Ernesto; Green, David C.; Priestman, Max; Canonaco, Francesco; Coe, Hugh; Prévôt, André S. H.; Allan, James D.

    2016-12-01

    The multilinear engine (ME-2) factorization tool is being widely used following the recent development of the Source Finder (SoFi) interface at the Paul Scherrer Institute. However, the success of this tool, when using the a-value approach, largely depends on the inputs (i.e. target profiles) applied as well as the experience of the user. A strategy to explore the solution space is proposed, in which the solution that best describes the organic aerosol (OA) sources is determined according to the systematic application of predefined statistical tests. This includes trilinear regression, which proves to be a useful tool for comparing different ME-2 solutions. Aerosol Chemical Speciation Monitor (ACSM) measurements were carried out at the urban background site of North Kensington, London from March to December 2013, where for the first time the behaviour of OA sources and their possible environmental implications were studied using an ACSM. Five OA sources were identified: biomass burning OA (BBOA), hydrocarbon-like OA (HOA), cooking OA (COA), semivolatile oxygenated OA (SVOOA) and low-volatility oxygenated OA (LVOOA). ME-2 analysis of the seasonal data sets (spring, summer and autumn) showed a higher variability in the OA sources that was not detected in the combined March-December data set; this variability was explored with the triangle plots f44:f43 and f44:f60, in which a high variation of SVOOA relative to LVOOA was observed in the f44:f43 analysis. Hence, it was possible to conclude that, when performing source apportionment on long-term measurements, important information may be lost, and the analysis should instead be carried out over shorter periods, such as seasons. Further analysis of the atmospheric implications of these OA sources identified evidence of a possible contribution of heavy-duty diesel vehicles to air pollution during weekdays, compared with petrol-fuelled vehicles.

  16. Explicit symplectic algorithms based on generating functions for relativistic charged particle dynamics in time-dependent electromagnetic field

    NASA Astrophysics Data System (ADS)

    Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa

    2018-02-01

    Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term numerical integration. Explicit symplectic algorithms are therefore much preferable to non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. We then give explicit symplectic algorithms based on generating functions of orders 2 and 3 for the relativistic dynamics of a charged particle. The methodology itself is not new; it has been applied to the non-relativistic dynamics of charged particles. The algorithm for relativistic dynamics, however, has much significance for practical simulations, such as the secular simulation of runaway electrons in tokamaks.
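    The splitting idea above, in which each piece of the Hamiltonian is solved exactly so that the composed map remains symplectic, can be sketched in its simplest non-relativistic form. This is a generic second-order kick-drift-kick scheme for H = p²/2 + V(q), not the paper's relativistic generating-function construction; all names and values are illustrative:

```python
def strang_step(q, p, dt, grad_V):
    """One second-order splitting step for H(q, p) = p**2/2 + V(q).
    Each sub-Hamiltonian (potential 'kick', kinetic 'drift') is solved
    exactly, so the composed map is symplectic and the energy error
    stays bounded over long integrations."""
    p -= 0.5 * dt * grad_V(q)   # half kick: exact flow of V(q)
    q += dt * p                 # full drift: exact flow of p**2/2
    p -= 0.5 * dt * grad_V(q)   # half kick
    return q, p

# Harmonic oscillator V(q) = q**2/2, so grad_V(q) = q
q, p = 1.0, 0.0
for _ in range(100_000):
    q, p = strang_step(q, p, 0.05, lambda x: x)
energy = 0.5 * p * p + 0.5 * q * q   # stays near the initial value 0.5
```

    The bounded energy error over 100,000 steps is the property that makes such schemes attractive for the secular runaway-electron simulations mentioned in the abstract, where non-symplectic integrators would drift.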

  17. A time-dependent diffusion convection model for the long term modulation of cosmic rays

    NASA Technical Reports Server (NTRS)

    Gallagher, J. J.

    1974-01-01

    A model is developed which incorporates to first order the direct effects of the time dependent diffusive propagation of interstellar cosmic rays in a slowly changing interplanetary medium. The model provides a physical explanation for observed rigidity-dependent phase lags in modulated spectra (cosmic ray hysteresis). The average distance to the modulating boundary during the last solar cycle is estimated.

  18. Landau problem with time dependent mass in time dependent electric and harmonic background fields

    NASA Astrophysics Data System (ADS)

    Lawson, Latévi M.; Avossevou, Gabriel Y. H.

    2018-04-01

    The spectrum of a Hamiltonian describing the dynamics of a Landau particle with time-dependent mass and frequency undergoing the influence of a uniform time-dependent electric field is obtained. The configuration space wave function of the model is expressed in terms of the generalised Laguerre polynomials. To diagonalize the time-dependent Hamiltonian, we employ the Lewis-Riesenfeld method of invariants. To this end, we introduce a unitary transformation in the framework of the algebraic formalism to construct the invariant operator of the system and then to obtain the exact solution of the Hamiltonian. We recover the solutions of the ordinary Landau problem in the absence of the electric and harmonic fields for a constant particle mass.

  19. Ammonia concentrations at a site in Southern Scotland from 2 yr of continuous measurements

    NASA Astrophysics Data System (ADS)

    Burkhardt, J.; Sutton, M. A.; Milford, C.; Storeton-West, R. L.; Fowler, D.

    Atmospheric ammonia (NH3) concentrations were measured using a continuous-flow annular denuder over a period of 2 yr at a rural site near Edinburgh, Scotland. Meteorological parameters as well as sulphur dioxide (SO2) concentrations were also recorded. The overall arithmetic mean NH3 concentration was 1.4 μg m⁻³. Although an annual cycle with the largest NH3 concentrations in summer was apparent in the seasonal geometric mean concentrations, arithmetic mean concentrations were largest in spring and autumn, indicating the increased importance of occasional high-concentration events in these seasons. The NH3 concentrations were influenced by local sources as well as by background concentrations, dependent on wind direction, whereas the SO2 geometric standard deviations indicated more distant sources. The daily cycle of NH3 and SO2 concentrations was dependent on wind speed (u). At u < 1 m s⁻¹, NH3 concentrations were smallest and SO2 concentrations were largest around noon, whereas at u > 1 m s⁻¹ this cycle was less pronounced for both gases and NH3 concentrations were largest around 1800 hours. These opposite diurnal cycles may be explained by the interaction of boundary-layer mixing with local sources for NH3 and remote sources for SO2. Comparing the ammonia data with critical levels and critical loads shows that the critical level is not exceeded at this site over any averaging time. In contrast, the N critical load would probably be exceeded for moorland vegetation near this site, showing that the long-term contribution of atmospheric NH3 to nitrogen deposition is a more significant issue than exceedance of critical levels.
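    The spring/autumn contrast between arithmetic and geometric means, driven by occasional high-concentration events, is easy to illustrate with hypothetical concentrations (the numbers below are invented, not from this record):

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp of the mean log; requires strictly positive values
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical daily NH3 concentrations (ug m^-3): mostly ~1 with one
# high-concentration event, as described for spring and autumn
week = [0.8, 0.9, 1.0, 1.1, 0.9, 1.0, 6.0]

# The single spike pulls the arithmetic mean up far more than the
# geometric mean, which is why the two statistics diverge in seasons
# with episodic peaks.
assert arithmetic_mean(week) > geometric_mean(week)
```

    This is the AM-GM inequality at work: the geometric mean tracks the "typical" background level while the arithmetic mean is sensitive to rare peaks, which matters for deposition budgets that integrate total flux.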

  20. Black Carbon and Sulfate Aerosols in the Arctic: Long-term Trends, Radiative Impacts, and Source Attributions

    NASA Astrophysics Data System (ADS)

    Wang, H.; Zhang, R.; Yang, Y.; Smith, S.; Rasch, P. J.

    2017-12-01

    The Arctic has warmed dramatically in recent decades. As important short-lived climate forcers, aerosols affect the Arctic radiative budget directly by interacting with radiation and indirectly by modifying clouds. Light-absorbing particles (e.g., black carbon) in snow/ice can reduce the surface albedo. The direct radiative impact of aerosols on the Arctic climate can be either warming or cooling, depending on their composition and location, which can further alter the poleward heat transport. Anthropogenic emissions, especially of BC and SO2, have changed drastically in low/mid-latitude source regions in the past few decades. Arctic surface observations at some locations show that BC and sulfate aerosols had a decreasing trend over recent decades. In order to understand the impact of long-term emission changes on aerosols and their radiative effects, we use the Community Earth System Model (CESM), equipped with an explicit BC and sulfur source-tagging technique, to quantify the source-receptor relationships and decadal trends of Arctic sulfate and BC and to identify variations in their atmospheric transport pathways from lower latitudes. The simulation was conducted for 36 years (1979-2014) with prescribed sea surface temperatures and sea ice concentrations. To minimize potential biases in modeled large-scale circulations, wind fields in the simulation are nudged toward an atmospheric reanalysis dataset, while atmospheric constituents including water vapor, clouds, and aerosols are allowed to evolve according to the model physics. Both anthropogenic and open-fire emissions came from the newly released CMIP6 datasets, which show strong regional trends in BC and SO2 emissions during the simulation period. Results show that emissions from East Asia and South Asia together have the largest contributions to Arctic sulfate and BC concentrations in the upper troposphere, where concentrations have an increasing trend. The strong decrease in emissions from Europe, Russia and North America contributed significantly to the overall decreasing trend in Arctic BC and sulfate, especially in the lower troposphere. The long-term changes in the spatial distributions of aerosols, their radiative impacts and source attributions, along with implications for the Arctic warming trend, will be discussed.

  1. A Scalable, Reconfigurable, and Dependable Time-Triggered Architecture

    DTIC Science & Technology

    2003-07-01


  2. Gauge-invariant expectation values of the energy of a molecule in an electromagnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandal, Anirban; Hunt, Katharine L. C.

    In this paper, we show that the full Hamiltonian for a molecule in an electromagnetic field can be separated into a molecular Hamiltonian and a field Hamiltonian, both with gauge-invariant expectation values. The expectation value of the molecular Hamiltonian gives physically meaningful results for the energy of a molecule in a time-dependent applied field. In contrast, the usual partitioning of the full Hamiltonian into molecular and field terms introduces an arbitrary gauge-dependent potential into the molecular Hamiltonian and leaves a gauge-dependent form of the Hamiltonian for the field. With the usual partitioning of the Hamiltonian, this same problem of gauge dependence arises even in the absence of an applied field, as we show explicitly by considering a gauge transformation from zero applied field and zero external potentials to zero applied field but non-zero external vector and scalar potentials. We resolve this problem, and also remove the gauge dependence from the Hamiltonian for a molecule in a non-zero applied field and from the field Hamiltonian, by repartitioning the full Hamiltonian. It is possible to remove the gauge dependence because the interaction of the molecular charges with the gauge potential cancels identically with a gauge-dependent term in the usual form of the field Hamiltonian. We treat the electromagnetic field classically and treat the molecule quantum mechanically, but nonrelativistically. Our derivation starts from the Lagrangian for a set of charged particles and an electromagnetic field, with the particle coordinates, the vector potential, the scalar potential, and their time derivatives treated as the variables in the Lagrangian. We construct the full Hamiltonian using a Lagrange multiplier method originally suggested by Dirac, partition this Hamiltonian into a molecular term H_m and a field term H_f, and show that both H_m and H_f have gauge-independent expectation values. Any gauge may be chosen for the calculations; but following our partitioning, the expectation values of the molecular Hamiltonian are identical to those obtained directly in the Coulomb gauge. As a corollary of this result, the power absorbed by a molecule from a time-dependent applied electromagnetic field is equal to the time derivative of the non-adiabatic term in the molecular energy, in any gauge.

  3. Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.

    PubMed

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei

    2017-04-01

    Because the standard lattice Boltzmann (LB) method is formulated for the Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method to represent the axisymmetric effects. The accuracy and applicability of axisymmetric LB models therefore depend on the forcing schemes adopted for discretizing the source terms. In this study, three forcing schemes, namely, the trapezium-rule-based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations at the diffusive scale. In particular, the finite-difference interpretation of the standard LB method is extended to LB equations with source terms, and the accuracy of the different forcing schemes is then evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus do not affect the overall accuracy of the standard LB method with a general force term (i.e., when only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. In contrast, the trapezium-rule-based scheme and the semi-implicit centered scheme both avoid the discrete lattice effects and recover the correct macroscopic equations. Numerical tests conducted to validate the theoretical analysis show that both the numerical stability and the accuracy of axisymmetric LB simulations are degraded by the direct forcing scheme, indicating that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.
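As a toy illustration of the forcing-scheme choices discussed above, the sketch below implements a minimal D1Q3 lattice Boltzmann solver for the diffusion equation with a mass source, offering both a direct and a trapezium-rule (Guo-type, with redefined density) source discretization. The D1Q3 setup, parameter values, and scheme details are our assumptions for illustration, not the paper's axisymmetric models.

```python
import numpy as np

w = np.array([2 / 3, 1 / 6, 1 / 6])   # D1Q3 weights
c = [0, 1, -1]                        # lattice velocities
tau = 0.8                             # relaxation time

def step(f, S, scheme="direct"):
    # trapezium rule redefines the macroscopic density with half the source
    rho = f.sum(axis=0) + (0.5 * S if scheme == "trapezium" else 0.0)
    feq = w[:, None] * rho[None, :]
    f = f + (feq - f) / tau           # BGK collision
    # source discretization: direct forcing vs trapezium-rule prefactor
    pref = 1.0 if scheme == "direct" else 1.0 - 0.5 / tau
    f = f + pref * w[:, None] * S[None, :]
    # streaming with periodic boundaries
    return np.stack([np.roll(f[i], c[i]) for i in range(3)])

nx = 64
S = np.zeros(nx)
S[nx // 2] = 1e-3                     # point mass source at the center
f = w[:, None] * np.ones((1, nx))     # uniform unit initial density
m0 = float(f.sum())
for _ in range(100):
    f = step(f, S, scheme="trapezium")
rho = f.sum(axis=0) + 0.5 * S
```

With the trapezium-rule treatment, the collision and forcing contributions combine so that total mass grows by exactly the injected amount per step, which is the kind of discrete-lattice-effect bookkeeping the analysis above is concerned with.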

  4. The plant phenological online database (PPODB): an online database for long-term phenological data

    NASA Astrophysics Data System (ADS)

    Dierenbach, Jonas; Badeck, Franz-W.; Schaber, Jörg

    2013-09-01

    We present an online database that provides unrestricted and free access to over 16 million plant phenological observations from over 8,000 stations in Central Europe between the years 1880 and 2009. Unique features are (1) a flexible and unrestricted access to a full-fledged database, allowing for a wide range of individual queries and data retrieval, (2) historical data for Germany before 1951 ranging back to 1880, and (3) more than 480 curated long-term time series covering more than 100 years for individual phenological phases and plants combined over Natural Regions in Germany. Time series for single stations or Natural Regions can be accessed through a user-friendly graphical geo-referenced interface. The joint databases made available with the plant phenological database PPODB render accessible an important data source for further analyses of long-term changes in phenology. The database can be accessed via www.ppodb.de.

  5. Multiple Solutions of Real-time Tsunami Forecasting Using Short-term Inundation Forecasting for Tsunamis Tool

    NASA Astrophysics Data System (ADS)

    Gica, E.

    2016-12-01

    The Short-term Inundation Forecasting for Tsunamis (SIFT) tool, developed by the NOAA Center for Tsunami Research (NCTR) at the Pacific Marine Environmental Laboratory (PMEL), is used in forecast operations at the Tsunami Warning Centers in Alaska and Hawaii. The SIFT tool relies on a pre-computed tsunami propagation database, real-time DART buoy data, and an inversion algorithm to define the tsunami source. The tsunami propagation database is composed of 50 km × 100 km unit sources, simulated basin-wide for at least 24 hours. Different combinations of unit sources, DART buoys, and lengths of real-time DART buoy data can generate a wide range of results within the defined tsunami source. For an inexperienced SIFT user, the primary challenge is to determine which solution, among multiple solutions for a single tsunami event, would provide the best forecast in real time. This study investigates how the use of different tsunami sources affects simulated tsunamis at tide gauge locations. Using the tide gauge at Hilo, Hawaii, a total of 50 possible solutions for the 2011 Tohoku tsunami are considered. Maximum tsunami wave amplitude and root mean square error results are used to compare tide gauge data and the simulated tsunami time series. Results of this study will facilitate SIFT users' efforts to determine if the simulated tide gauge tsunami time series from a specific tsunami source solution would be within the range of possible solutions. This study will serve as the basis for investigating more historical tsunami events and tide gauge locations.
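The inversion idea behind this kind of tool can be caricatured as a linear least-squares fit of pre-computed unit-source synthetics to an observed buoy record, with candidate solutions ranked by RMSE. Everything below (waveforms, weights, noise level) is invented toy data, not NOAA's propagation database or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 3600.0, 200)     # one hour of samples (toy)

# four toy "unit source" responses: damped sinusoids of different periods
unit = np.stack([np.sin(2 * np.pi * t / (300.0 * (k + 1))) * np.exp(-t / 1800.0)
                 for k in range(4)])

true_slip = np.array([1.5, 0.0, 0.8, 0.0])   # "true" source combination
obs = true_slip @ unit + 0.01 * rng.standard_normal(t.size)  # noisy record

# least-squares inversion for the unit-source weights
coef, *_ = np.linalg.lstsq(unit.T, obs, rcond=None)
pred = coef @ unit
rmse = float(np.sqrt(np.mean((obs - pred) ** 2)))
```

Comparing RMSE across candidate source combinations is one simple way to decide, as in the study above, whether a given solution lies within the range of plausible ones.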

  6. Long-Term Stability of Radio Sources in VLBI Analysis

    NASA Technical Reports Server (NTRS)

    Engelhardt, Gerald; Thorandt, Volkmar

    2010-01-01

    Positional stability of radio sources is an important requirement for modeling a single source position over the complete span of VLBI data, presently more than 20 years. The stability of radio sources can be verified by analyzing time series of radio source coordinates. One approach is a statistical test for normal distribution of the residuals about the weighted mean for each radio source component of the time series. Systematic phenomena in the time series can thus be detected. Nevertheless, an inspection of rate estimates and of weighted root-mean-square (WRMS) variations about the mean is also necessary. On the basis of the time series computed by the BKG group in the frame of the ICRF2 working group, 226 stable radio sources with an axis stability of 10 as could be identified. They include 100 ICRF2 axes-defining sources, which were determined independently of the method applied in the ICRF2 working group. A further 29 stable radio sources with a source structure index of less than 3.0 can also be used to augment the 295 ICRF2 defining sources.
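A minimal sketch of this kind of screening for one coordinate component (weighted mean, WRMS about the mean, and a linear rate estimate) might look as follows; the thresholds and units are illustrative placeholders, not the BKG/ICRF2 criteria.

```python
import numpy as np

def wmean_wrms(x, sigma):
    # weighted mean and weighted RMS scatter about it
    w = 1.0 / sigma**2
    mean = np.sum(w * x) / np.sum(w)
    wrms = np.sqrt(np.sum(w * (x - mean) ** 2) / np.sum(w))
    return float(mean), float(wrms)

rng = np.random.default_rng(1)
epochs = np.linspace(2000.0, 2020.0, 80)       # decimal years
sigma = np.full(epochs.size, 0.05)             # formal errors (toy units)
ra = 0.02 * rng.standard_normal(epochs.size)   # stable source: scatter, no drift

mean, wrms = wmean_wrms(ra, sigma)
rate = float(np.polyfit(epochs, ra, 1)[0])     # drift per year (toy units)
stable = bool(wrms < 0.1 and abs(rate) < 0.01) # illustrative thresholds
```

A normality test on the residuals `ra - mean`, as mentioned in the abstract, would be the natural next step for flagging systematic behavior.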

  7. Thermal analysis of a Phase Change Material for a Solar Organic Rankine Cycle

    NASA Astrophysics Data System (ADS)

    Iasiello, M.; Braimakis, K.; Andreozzi, A.; Karellas, S.

    2017-11-01

    The Organic Rankine Cycle (ORC) is a promising technology for low-temperature power generation, for example the utilization of medium-temperature solar energy. Since the heat generated from a solar source varies throughout the day, implementing Thermal Energy Storage (TES) systems to guarantee the continuous operation of solar ORCs is a critical task, and Phase Change Materials (PCMs), which rely on latent heat, can store large amounts of energy. In the present study, a thermal analysis of a PCM for a solar ORC is carried out. Three different types of PCMs are analyzed. The energy equation for the PCM is modeled by using the heat capacity method and is solved by employing a 1D explicit finite-difference scheme. The solar source is modeled with a time-variable temperature boundary condition, with experimental data taken from the literature for two different solar collectors. Results are presented in terms of temperature profiles and stored energy. It is shown that the stored energy depends on the heat source temperature, on the employed PCM, and on the boundary conditions. It is also demonstrated that the use of a metal foam can drastically enhance the stored energy owing to the higher overall thermal conductivity.
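The heat capacity method with a 1D explicit finite-difference scheme can be sketched as below: latent heat is folded into an effective heat capacity over a narrow melting range. The material properties, melting range, and boundary conditions here are illustrative stand-ins, not the PCMs or collector data of the study (which uses a time-variable boundary temperature rather than the fixed one used here).

```python
import numpy as np

nx, L = 50, 0.1                    # nodes, slab thickness [m]
dx = L / (nx - 1)
k, rho, cp = 0.2, 800.0, 2000.0    # conductivity, density, sensible cp (toy)
Lh, Tm, dTm = 180e3, 330.0, 1.0    # latent heat [J/kg], melt point, half-range [K]

def c_eff(T):
    # effective heat capacity: cp plus a latent-heat spike near Tm
    melt = np.abs(T - Tm) < dTm
    return cp + melt * Lh / (2 * dTm)

T = np.full(nx, 300.0)
dt = 0.2 * rho * cp * dx**2 / k    # conservative explicit stability limit
for _ in range(20000):
    Tn = T.copy()
    Tn[0] = 350.0                  # hot-source boundary (fixed, for simplicity)
    Tn[1:-1] = T[1:-1] + k * dt / (rho * c_eff(T[1:-1]) * dx**2) * (
        T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[-1] = Tn[-2]                # insulated far side
    T = Tn
```

The stored energy would then follow by integrating `rho * c_eff` over the temperature rise, which is where the latent contribution of the PCM shows up.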

  8. Research in atmospheric chemistry and transport

    NASA Technical Reports Server (NTRS)

    Yung, Y. L.

    1982-01-01

    The carbon monoxide cycle was studied by incorporating the known CO sources and sinks in a tracer model which used the winds generated by a general circulation model. The photochemical production and loss terms, which depended on OH radical concentrations, were calculated in an interactive fashion. Comparison of the computed global distribution and seasonal variations of CO with observations was used to yield constraints on the distribution and magnitude of the sources and sinks of CO, and the abundance of OH radicals in the troposphere.

  9. On the inclusion of mass source terms in a single-relaxation-time lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Aursjø, Olav; Jettestuen, Espen; Vinningland, Jan Ludvig; Hiorth, Aksel

    2018-05-01

    We present a lattice Boltzmann algorithm for incorporating a mass source in a fluid flow system. The proposed mass source/sink term, included in the lattice Boltzmann equation, maintains the Galilean invariance and the accuracy of the overall method, while introducing a mass source/sink term in the fluid dynamical equations. The method can, for instance, be used to inject or withdraw fluid from any preferred lattice node in a system. This suggests that injection and withdrawal of fluid do not have to be introduced through cumbersome, and sometimes less accurate, boundary conditions. The method also suggests that, through a chosen equation of state relating mass density to pressure, the proposed mass source term makes it possible to set a preferred pressure at any lattice node in a system. We demonstrate how this model handles injection and withdrawal of a fluid, and we show how it can be used to incorporate pressure boundaries. The accuracy of the algorithm is established through a Chapman-Enskog expansion of the model and supported by the numerical simulations.

  10. Comparison of the behavior of normal factor IX and the factor IX Bm variant Hilo in the prothrombin time test using tissue factors from bovine, human, and rabbit sources.

    PubMed

    Lefkowitz, J B; Monroe, D M; Kasper, C K; Roberts, H R

    1993-07-01

    A subset of hemophilia B patients have a prolonged bovine-brain prothrombin time. These CRM+ patients are classified as having hemophilia Bm. The prolongation of the prothrombin time has been reported only with bovine brain (referred to as ox brain in some literature) as the source of thromboplastin; prothrombin times determined with thromboplastin from rabbit brain or human brain are not reported to be prolonged. Factor IX from a hemophilia Bm patient (factor IX Hilo) was isolated. The activity of factor IX Hilo was compared to that of normal factor IX in prothrombin time assays when the thromboplastin source was of bovine, rabbit, or human origin. Factor IX, either normal or Hilo, prolonged the prothrombin time regardless of the tissue factor source. However, unless the thromboplastin was from a bovine source, this prolongation required high concentrations of factor IX. Further, factor IX normal was as effective as factor IX Hilo in prolonging the prothrombin time when rabbit or human thromboplastin was used. With bovine thromboplastin, factor IX Hilo was significantly better than factor IX normal at prolonging the prothrombin time. The amount of prolongation was dependent on the amount of factor IX Hilo added. In addition, the prolongation was dependent on the concentration of factor X present in the sample. The prothrombin time changed by as much as 20 seconds when the factor X concentration was varied from 50% to 150% of normal (at a fixed concentration of factor IX Hilo). These results demonstrate the difficulty of classifying the severity of a hemophilia Bm patient based on the bovine brain prothrombin time unless both the factor IX and factor X concentrations are known.

  11. Organ doses from radionuclides on the ground. Part I. Simple time dependences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, P.; Paretzke, H.G.; Rosenbaum, H.

    1988-06-01

    Organ dose equivalents of the mathematical anthropomorphic phantoms ADAM and EVA for photon exposures from plane sources on the ground have been calculated by Monte Carlo photon transport codes and are tabulated in this article. The calculation takes into account the air-ground interface and a typical surface roughness, the energy and angular dependence of the photon fluence impinging on the phantom, and the time dependence of the contributions from daughter nuclides. Results are up to 35% higher than data reported in the literature for important radionuclides. This manuscript deals with radionuclides for which the time dependence of dose-equivalent rates and dose equivalents may be approximated by a simple exponential. A companion manuscript treats radionuclides with non-trivial time dependences.
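The "simple exponential" time dependence referred to above can be illustrated with a toy dose-rate integral; the effective decay constant and initial rate below are invented, not the paper's tabulated values.

```python
import numpy as np

lam = np.log(2) / 30.0    # effective decay constant for a 30-yr half-life, 1/yr (toy)
h0 = 1.0e-3               # initial dose-equivalent rate, Sv/yr (toy)

def integrated_dose(T):
    # closed-form time integral of h0 * exp(-lam * t) over [0, T]
    return h0 * (1.0 - np.exp(-lam * T)) / lam

# cross-check the closed form against a trapezoidal quadrature
t = np.linspace(0.0, 50.0, 100001)
y = h0 * np.exp(-lam * t)
numeric = float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)
```

The committed dose for an infinite residence time is simply `h0 / lam`, which is the limit the closed form approaches for large `T`.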

  12. Learning rules for spike timing-dependent plasticity depend on dendritic synapse location.

    PubMed

    Letzkus, Johannes J; Kampa, Björn M; Stuart, Greg J

    2006-10-11

    Previous studies focusing on the temporal rules governing changes in synaptic strength during spike timing-dependent synaptic plasticity (STDP) have paid little attention to the fact that synaptic inputs are distributed across complex dendritic trees. During STDP, propagation of action potentials (APs) back to the site of synaptic input is thought to trigger plasticity. However, in pyramidal neurons, backpropagation of single APs is decremental, whereas high-frequency bursts lead to generation of distal dendritic calcium spikes. This raises the question whether STDP learning rules depend on synapse location and firing mode. Here, we investigate this issue at synapses between layer 2/3 and layer 5 pyramidal neurons in somatosensory cortex. We find that low-frequency pairing of single APs at positive times leads to a distance-dependent shift to long-term depression (LTD) at distal inputs. At proximal sites, this LTD could be converted to long-term potentiation (LTP) by dendritic depolarizations suprathreshold for BAC-firing or by high-frequency AP bursts. During AP bursts, we observed a progressive, distance-dependent shift in the timing requirements for induction of LTP and LTD, such that distal synapses display novel timing rules: they potentiate when inputs are activated after burst onset (negative timing) but depress when activated before burst onset (positive timing). These findings could be explained by distance-dependent differences in the underlying dendritic voltage waveforms driving NMDA receptor activation during STDP induction. Our results suggest that synapse location within the dendritic tree is a crucial determinant of STDP, and that synapses undergo plasticity according to local rather than global learning rules.
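A hypothetical timing window of the general shape described above, with the LTP component fading and the LTD component growing with synapse distance from the soma, might be sketched as follows. All parameters and functional forms are invented for illustration; they are not fits to the reported data.

```python
import numpy as np

def stdp(dt_ms, distance_um, tau=20.0):
    # dt_ms > 0: presynaptic spike before postsynaptic (classically LTP);
    # dt_ms < 0: postsynaptic before presynaptic (classically LTD).
    a_ltp = max(0.0, 1.0 - distance_um / 300.0)  # LTP amplitude fades distally
    a_ltd = 0.5 + distance_um / 600.0            # LTD amplitude grows distally
    if dt_ms > 0:
        return a_ltp * np.exp(-dt_ms / tau) - 0.2 * a_ltd
    return -a_ltd * np.exp(dt_ms / tau)

prox = stdp(10.0, 50.0)     # proximal synapse, positive timing -> LTP
dist = stdp(10.0, 280.0)    # distal synapse, positive timing -> shifted to LTD
```

The sign flip at distal sites for positive timing mirrors the distance-dependent shift toward LTD reported above, though the real rules also depend on firing mode (single APs vs bursts), which this sketch omits.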

  13. Numerical investigation and electro-acoustic modeling of measurement methods for the in-duct acoustical source parameters.

    PubMed

    Jang, Seung-Ho; Ih, Jeong-Guon

    2003-02-01

    It is known that the direct method yields different results from the indirect (or load) method in measuring the in-duct acoustic source parameters of fluid machines. The load method usually comes up with a negative source resistance, although a fairly accurate prediction of radiated noise can be obtained from any method. This study is focused on the effect of the time-varying nature of fluid machines on the output results of two typical measurement methods. For this purpose, a simplified fluid machine consisting of a reservoir, a valve, and an exhaust pipe is considered as representing a typical periodic, time-varying system and the measurement situations are simulated by using the method of characteristics. The equivalent circuits for such simulations are also analyzed by considering the system as having a linear time-varying source. It is found that the results from the load method are quite sensitive to the change of cylinder pressure or valve profile, in contrast to those from the direct method. In the load method, the source admittance turns out to be predominantly dependent on the valve admittance at the calculation frequency as well as the valve and load admittances at other frequencies. In the direct method, however, the source resistance is always positive and the source admittance depends mainly upon the zeroth order of valve admittance.

  14. Medication Development of Ibogaine as a Pharmacotherapy for Drug Dependence.

    PubMed

    Mash, Deborah C; Kovera, Craig A; Buck, Billy E; Norenberg, Michael D; Shapshak, Paul; Hearn, W Lee; Sanchez-Ramos, Juan

    1998-05-01

    The potential for deriving new psychotherapeutic medications from natural sources has led to renewed interest in rain forest plants as a source of lead compounds for the development of antiaddiction medications. Ibogaine is an indole alkaloid found in the roots of Tabernanthe iboga (Apocynaceae family), a rain forest shrub that is native to equatorial Africa. Ibogaine is used by indigenous peoples in low doses to combat fatigue and hunger, and in higher doses as a sacrament in religious rituals. Members of American and European addict self-help groups have claimed that ibogaine promotes long-term drug abstinence from addictive substances, including psychostimulants and cocaine. Anecdotal reports attest that a single dose of ibogaine eliminates withdrawal symptoms and reduces drug cravings for extended periods of time. The purported antiaddictive properties of ibogaine require rigorous validation in humans. We have initiated a rising-tolerance study using single administration to assess the safety of ibogaine for the treatment of cocaine dependency. The primary objectives of the study are to determine safety, pharmacokinetics and dose effects, and to identify relevant parameters of efficacy in cocaine-dependent patients. Pharmacokinetic and pharmacodynamic characteristics of ibogaine in humans are assessed by analyzing the concentration-time data of ibogaine and its desmethyl metabolite (noribogaine) from the Phase I trial, and by conducting in vitro experiments to elucidate the specific disposition processes involved in the metabolism of both parent drug and metabolite. The development of clinical safety studies of ibogaine in humans will help to determine whether there is a rationale for conducting efficacy trials in the future.
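Parent-to-metabolite concentration-time analysis of the kind mentioned above is classically described by a one-compartment Bateman model. The sketch below uses invented rate constants, not ibogaine/noribogaine estimates.

```python
import numpy as np

kp, km = 0.35, 0.08   # parent and metabolite elimination constants, 1/h (toy)
C0 = 1.0              # normalized initial parent concentration

def parent(t):
    # parent drug declines monoexponentially
    return C0 * np.exp(-kp * t)

def metabolite(t):
    # Bateman solution for a metabolite formed from the parent
    return C0 * kp / (km - kp) * (np.exp(-kp * t) - np.exp(-km * t))

t = np.linspace(0.0, 48.0, 4801)
tpk = float(t[np.argmax(metabolite(t))])   # metabolite peak time [h]
```

With a slower-eliminated metabolite (`km < kp`), the metabolite peaks hours after dosing and outlasts the parent, the generic pattern exploited when fitting concentration-time data for a parent drug and its desmethyl metabolite.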

  15. Climate, economic, and environmental impacts of producing wood for bioenergy

    NASA Astrophysics Data System (ADS)

    Birdsey, Richard; Duffy, Philip; Smyth, Carolyn; Kurz, Werner A.; Dugan, Alexa J.; Houghton, Richard

    2018-05-01

    Increasing combustion of woody biomass for electricity has raised concerns and produced conflicting statements about impacts on atmospheric greenhouse gas (GHG) concentrations, climate, and other forest values such as timber supply and biodiversity. The purposes of this concise review of current literature are to (1) examine impacts on net GHG emissions and climate from increasing bioenergy production from forests and exporting wood pellets to Europe from North America, (2) develop a set of science-based recommendations about the circumstances that would result in GHG reductions or increases in the atmosphere, and (3) identify economic and environmental impacts of increasing bioenergy use of forests. We find that increasing bioenergy production and pellet exports often increase net emissions of GHGs for decades or longer, depending on source of feedstock and its alternate fate, time horizon of analysis, energy emissions associated with the supply chain and fuel substitution, and impacts on carbon cycling of forest ecosystems. Alternative uses of roundwood often offer larger reductions in GHGs, in particular long-lived wood products that store carbon for longer periods of time and can achieve greater substitution benefits than bioenergy. Other effects of using wood for bioenergy may be considerable including induced land-use change, changes in supplies of wood and other materials for construction, albedo and non-radiative effects of land-cover change on climate, and long-term impacts on soil productivity. Changes in biodiversity and other ecosystem attributes may be strongly affected by increasing biofuel production, depending on source of material and the projected scale of biofuel production increases.

  16. Development of efficient time-evolution method based on three-term recurrence relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akama, Tomoko, E-mail: a.tomo---s-b-l-r@suou.waseda.jp; Kobayashi, Osamu; Nanbu, Shinkoh, E-mail: shinkoh.nanbu@sophia.ac.jp

    The advantage of the real-time (RT) propagation method is the direct solution of the time-dependent Schrödinger equation, which describes frequency properties as well as the full dynamics of a molecular system composed of electrons and nuclei in quantum physics and chemistry. Its applications have been limited by computational feasibility, as the evaluation of the time-evolution operator is computationally demanding. In this article, a new efficient time-evolution method based on the three-term recurrence relation (3TRR) is proposed to reduce the time-consuming numerical procedure. The basic formula of this approach was derived by introducing a transformation of the operator using the arcsine function. Since this operator transformation causes a transformation of time, we derived the relation between the original and transformed time. The formula was adapted to assess the performance of the RT time-dependent Hartree-Fock (RT-TDHF) method and of time-dependent density functional theory. Compared to the commonly used fourth-order Runge-Kutta method, our new approach decreased the computational time of the RT-TDHF calculation by about a factor of four, showing the 3TRR formula to be an efficient time-evolution method for reducing computational cost.
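The authors' arcsine-transformed scheme is not reproduced here, but the machinery such propagators build on, a matrix three-term recurrence of Chebyshev type, can be sketched and checked against the closed form T_3(x) = 4x^3 - 3x:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2.0                 # a small Hermitian test "Hamiltonian"
A = A / np.linalg.norm(A, 2)        # scale the spectrum into [-1, 1]

def cheb_T(A, n):
    # matrix three-term recurrence: T_{k+1}(A) = 2 A T_k(A) - T_{k-1}(A)
    Tkm1 = np.eye(len(A))
    if n == 0:
        return Tkm1
    Tk = A.copy()
    for _ in range(n - 1):
        Tkm1, Tk = Tk, 2.0 * A @ Tk - Tkm1
    return Tk

T3 = cheb_T(A, 3)
```

The appeal of such recurrences for time evolution is that each term costs only one matrix-vector (or matrix-matrix) product, in contrast to the four evaluations per step of fourth-order Runge-Kutta.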

  17. Are Handheld Computers Dependable? A New Data Collection System for Classroom-Based Observations

    ERIC Educational Resources Information Center

    Adiguzel, Tufan; Vannest, Kimberly J.; Parker, Richard I.

    2009-01-01

    Very little research exists on the dependability of handheld computers used in public school classrooms. This study addresses four dependability criteria--reliability, maintainability, availability, and safety--to evaluate a data collection tool on a handheld computer. Data were collected from five sources: (1) time-use estimations by 19 special…

  18. Nonequilibrium itinerant-electron magnetism: A time-dependent mean-field theory

    NASA Astrophysics Data System (ADS)

    Secchi, A.; Lichtenstein, A. I.; Katsnelson, M. I.

    2016-08-01

    We study the dynamical magnetic susceptibility of a strongly correlated electronic system in the presence of a time-dependent hopping field, deriving a generalized Bethe-Salpeter equation that is valid also out of equilibrium. Focusing on the single-orbital Hubbard model within the time-dependent Hartree-Fock approximation, we solve the equation in the nonequilibrium adiabatic regime, obtaining a closed expression for the transverse magnetic susceptibility. From this, we provide a rigorous definition of nonequilibrium (time-dependent) magnon frequencies and exchange parameters, expressed in terms of nonequilibrium single-electron Green's functions and self-energies. In the particular case of equilibrium, we recover previously known results.

  19. Cosmic Ray Hysteresis as Evidence for Time-dependent Diffusive Processes in the Long Term Solar Modulation

    NASA Technical Reports Server (NTRS)

    Ogallagher, J. J.

    1973-01-01

    A simple one-dimensional, time-dependent diffusion-convection model for the modulation of cosmic rays is presented. This model predicts that the observed intensity at a given time is approximately equal to the intensity given by the time-independent diffusion-convection solution under the interplanetary conditions that existed a time tau in the past, U(t_0) = U_s(t_0 - tau), where tau is the average time spent by a particle inside the modulating cavity. Delay times in excess of several hundred days are possible with reasonable modulation parameters. Interpretation of the phase lags observed during the 1969-1970 solar maximum in terms of this model suggests that the modulating region is probably not less than 10 a.u., and may be as much as 35 a.u., in extent.
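The delayed-response relation U(t_0) = U_s(t_0 - tau) is what produces hysteresis: during the rising and declining phases of a solar cycle, the same activity level corresponds to different observed intensities. A toy illustration, with all numbers invented:

```python
import numpy as np

tau = 200.0                                   # lag in days (illustrative)
t = np.linspace(0.0, 4000.0, 4001)            # daily samples, one "cycle"
activity = np.sin(2 * np.pi * t / 4000.0)     # toy solar-activity proxy

def U_s(a):
    # steady-state modulated intensity for activity level a (toy form)
    return 1.0 - 0.3 * a

# observed intensity responds to conditions a time tau in the past
U = U_s(np.interp(t - tau, t, activity))

i_rise, i_fall = 500, 1500   # equal activity on the rising vs declining phase
```

Plotting `U` against `activity` would trace out a loop rather than a single curve, which is the hysteresis signature used above to infer the size of the modulating region.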

  20. Capstone Depleted Uranium Aerosol Biokinetics, Concentrations, and Doses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guilmette, Raymond A.; Miller, Guthrie; Parkhurst, MaryAnn

    2009-02-26

    One of the principal goals of the Capstone Depleted Uranium (DU) Aerosol Study was to quantify and characterize DU aerosols generated inside armored vehicles by perforation with a DU penetrator. This study consequently produced a database in which the DU aerosol source terms were specified both physically and chemically for a variety of penetrator-impact geometries and conditions. These source terms were used to calculate radiation doses and uranium concentrations for various scenarios as part of the Capstone DU Human Health Risk Assessment (HHRA). This paper describes the scenario-related biokinetics of uranium, and summarizes intakes, chemical concentrations in the organs, and E(50) and H_T(50) for organs and tissues based on exposure scenarios for personnel in vehicles at the time of perforation as well as for first responders. For a given exposure scenario (duration time and breathing rates), the range of DU intakes among the target vehicles and shots was not large, about a factor of 10, with the lowest being from a ventilated operational Abrams tank and the highest being for an unventilated Abrams with a DU penetrator perforating DU armor. The ranges of committed effective doses were more scenario-dependent than were intakes. For example, the largest range, a factor of 20, was shown for scenario A, a 1-min exposure, whereas the range was only a factor of two for the first-responder scenario (E). In general, the committed effective doses were found to be in the tens of mSv. The risks ascribed to these doses are discussed separately.

  1. Soundscapes

    DTIC Science & Technology

    2014-09-30

    Soundscapes ...global oceanographic models to provide hindcasts, nowcasts, and forecasts of the time-evolving soundscape. In terms of the types of sound sources, we...other types of sources. APPROACH The research has two principal thrusts: 1) the modeling of the soundscape, and 2) verification using datasets that

  2. Mapping water availability, projected use and cost in the western United States

    NASA Astrophysics Data System (ADS)

    Tidwell, Vincent C.; Moreland, Barbara D.; Zemlick, Katie M.; Roberts, Barry L.; Passell, Howard D.; Jensen, Daniel; Forsgren, Christopher; Sehlke, Gerald; Cook, Margaret A.; King, Carey W.; Larsen, Sara

    2014-05-01

    New demands for water can be satisfied through a variety of source options. In some basins surface and/or groundwater may be available through permitting with the state water management agency (termed unappropriated water), alternatively water might be purchased and transferred out of its current use to another (termed appropriated water), or non-traditional water sources can be captured and treated (e.g., wastewater). The relative availability and cost of each source are key factors in the development decision. Unfortunately, these measures are location dependent with no consistent or comparable set of data available for evaluating competing water sources. With the help of western water managers, water availability was mapped for over 1200 watersheds throughout the western US. Five water sources were individually examined, including unappropriated surface water, unappropriated groundwater, appropriated water, municipal wastewater and brackish groundwater. Also mapped was projected change in consumptive water use from 2010 to 2030. Associated costs to acquire, convey and treat the water, as necessary, for each of the five sources were estimated. These metrics were developed to support regional water planning and policy analysis with initial application to electric transmission planning in the western US.

  3. Electricity generation and health.

    PubMed

    Markandya, Anil; Wilkinson, Paul

    2007-09-15

    The provision of electricity has been a great benefit to society, particularly in health terms, but it also carries health costs. Comparison of different forms of commercial power generation by use of the fuel cycle methods developed in European studies shows the health burdens to be greatest for power stations that most pollute outdoor air (those based on lignite, coal, and oil). The health burdens are appreciably smaller for generation from natural gas, and lower still for nuclear power. This same ranking also applies in terms of greenhouse-gas emissions and thus, potentially, to long-term health, social, and economic effects arising from climate change. Nuclear power remains controversial, however, because of public concern about storage of nuclear waste, the potential for catastrophic accident or terrorist attack, and the diversion of fissionable material for weapons production. Health risks are smaller for nuclear fusion, but commercial exploitation will not be achieved in time to help the crucial near-term reduction in greenhouse-gas emissions. The negative effects on health of electricity generation from renewable sources have not been assessed as fully as those from conventional sources, but for solar, wind, and wave power, such effects seem to be small; those of biofuels depend on the type of fuel and the mode of combustion. Carbon dioxide (CO2) capture and storage is increasingly being considered for reduction of CO2 emissions from fossil fuel plants, but the health effects associated with this technology are largely unquantified and probably mixed: efficiency losses mean greater consumption of the primary fuel and accompanying increases in some waste products. This paper reviews the state of knowledge regarding the health effects of different methods of generating electricity.

  4. Repeat synoptic sampling reveals drivers of change in carbon and nutrient chemistry of Arctic catchments

    NASA Astrophysics Data System (ADS)

    Zarnetske, J. P.; Abbott, B. W.; Bowden, W. B.; Iannucci, F.; Griffin, N.; Parker, S.; Pinay, G.; Aanderud, Z.

    2017-12-01

    Dissolved organic carbon (DOC), nutrients, and other solute concentrations are increasing in rivers across the Arctic. Two hypotheses have been proposed to explain these trends: (1) distributed, top-down permafrost degradation, and (2) discrete, point-source delivery of DOC and nutrients from permafrost collapse features (thermokarst). While long-term monitoring at a single station cannot discriminate between these mechanisms, synoptic sampling of multiple points in the stream network can reveal the spatial structure of solute sources. In this context, we sampled carbon and nutrient chemistry three times over two years in 119 subcatchments of three distinct Arctic catchments (North Slope, Alaska). Subcatchments ranged from 0.1 to 80 km2 and included three distinct types of Arctic landscape: mountainous, tundra, and glacial-lake catchments. We quantified the stability of spatial patterns in synoptic water chemistry and analyzed high-frequency time series from the catchment outlets across the thaw season to identify source areas for DOC, nutrients, and major ions. We found that variance in solute concentrations between subcatchments collapsed at spatial scales between 1 and 20 km2, indicating a continuum of diffuse- and point-source dynamics, depending on solute and catchment characteristics (e.g., reactivity, topography, vegetation, surficial geology). Spatially distributed mass balance revealed conservative transport of DOC and nitrogen and indicated that there may be strong in-stream retention of phosphorus, providing a network-scale confirmation of previous reach-scale studies in these Arctic catchments. Overall, we present new approaches to analyzing synoptic data for change detection and quantification of ecohydrological mechanisms in Arctic ecosystems and beyond.
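The variance-collapse diagnostic described above can be sketched by binning subcatchment concentrations by catchment area and watching the between-site variance shrink with scale. The data below are synthetic (variance falling off as 1/area), not North Slope observations.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
# log-uniform catchment areas spanning 0.1 to 80 km^2, as in the study range
area = 10 ** rng.uniform(-1.0, np.log10(80.0), n)
# toy model: small catchments vary a lot; large ones integrate toward a mean
conc = 5.0 + 2.0 * rng.standard_normal(n) / np.sqrt(area)

edges = [0.1, 1.0, 10.0, 80.0]
var_by_bin = [float(np.var(conc[(area >= lo) & (area < hi)]))
              for lo, hi in zip(edges[:-1], edges[1:])]
```

The scale at which `var_by_bin` flattens out is one simple estimate of where point-source heterogeneity gives way to diffuse, integrated behavior.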

  5. A consistent framework to predict mass fluxes and depletion times for DNAPL contaminations in heterogeneous aquifers under uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Jonas; Nowak, Wolfgang

    2013-04-01

    At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once released, a DNAPL serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, forms a complex pattern of immobile DNAPL saturation, dissolves into the groundwater and forms a contaminant plume, and slowly depletes and biodegrades over the long term. In industrialized countries the number of such contaminated sites is so high that ranking them from most to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities, and the DNAPL phase saturations. These parameters depend on each other during DNAPL infiltration, dissolution, and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups in which we vary the physical and stochastic dependencies of the input parameters and simulated processes.
Under these changes, the probability density functions show strong shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters while neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model-based decision support for DNAPL-contaminated sites.
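The probabilistic idea above can be illustrated with a Monte Carlo sketch: sample mutually dependent parameters (here saturation coupled to permeability), propagate them through a source-depletion model, and read off a distribution of depletion times. The toy model, coupling, and all parameter values are illustrative assumptions, not the authors' framework.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

log_k = rng.normal(-11.0, 0.5, n)            # log10 permeability [m^2]
# Assumed coupling: DNAPL pools preferentially in low-permeability lenses,
# i.e. the mutual parameter dependency stressed in the abstract.
saturation = np.clip(0.15 - 0.05 * (log_k + 11.0) + rng.normal(0, 0.01, n),
                     0.01, 0.4)
velocity = 10 ** (log_k + 11.0) * 0.1        # toy groundwater velocity [m/d]

mass = saturation * 1.0e3                    # toy source mass [kg]
dissolution_rate = velocity * 2.0e-1 + 1e-4  # toy dissolved mass flux [kg/d]
depletion_time = mass / dissolution_rate     # days

print(f"median depletion time: {np.median(depletion_time):.0f} d")
print(f"90% interval: {np.percentile(depletion_time, [5, 95]).round(0)}")
```

Switching the coupled sampling of `saturation` to independent sampling is the kind of experiment the abstract describes: it widens the resulting probability density.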

  6. Towards the accounting of ocean wave noise in the real-time characterization of explosion detection capability

    NASA Astrophysics Data System (ADS)

    Walker, K. T.

    2013-12-01

    The detection of nuclear explosions depends on the signal-to-noise ratio of recorded signals. Cross-correlation-based array processing algorithms, such as that used by the International Data Center for nuclear monitoring, lock onto the dominant signal, masking weaker signals from potential sources of interest along different back azimuths. Microbaroms and microseisms are continuous sources of acoustic and seismic noise in the ~0.1 to 0.5 Hz range, radiated by areas of the ocean where two opposing wave sets with the same period coexist. These sources of energy travel tens of thousands of kilometers and routinely dominate the recorded spectra. At any given time, this noise may render useless several arrays that are otherwise well positioned to detect an explosion. It would therefore be useful to know in real time where such noise is expected to cause problems in event detection and location. In this presentation, I show that there is potential to use the NOAA Wave Watch 3 (NWW3) modeling program to routinely output, in real time, a prediction of the global distribution of microbarom and microseism sources. I do this by presenting a detailed analysis of 12 microphone arrays around the North Pacific that recorded microbaroms during 2010. For this analysis, a time-progressive, frequency-domain beamforming approach is implemented. It is assumed that microbarom sources are illuminated by a high density of intersecting dominant microbarom back azimuths. Common pelagic sources move around the North Pacific during the boreal winter. Summertime North Pacific sources are only observed by western Pacific arrays, presumably a result of weaker microbarom radiation and westward stratospheric winds. A well-defined source is resolved ˜2000 km off the coast of California in January 2011 that moves closer to land over several days.
The source locations are corrected for deflection by horizontal winds using acoustic ray trace modeling with range-dependent atmospheric specifications provided by publicly available NOAA/NASA models. The observed source locations do not correlate with anomalies in NWW3 model field data. However, application of the opposing-wave, microbarom source model of Waxler and Gilbert (2006) to the NWW3 directional wave height spectra output at buoy locations within 1100 km of the western North America coastline predicts microbarom radiation in locations that correlate with observed microbarom locations. Therefore, the availability of microbarom source strength maps as a real-time product from NWW3 is simply dependent on an additional, minor code revision. Success in such a revision has recently been reported by Drs. Fabrice Ardhuin and Justin Stopa.
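A minimal sketch of the frequency-domain beamforming step mentioned above: phase-align the FFTs of array elements for trial plane waves and pick the back azimuth that maximizes coherent power. The array geometry, signal, and noise level are synthetic assumptions, not the study's data or processing code.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 20.0, 4096
t = np.arange(n) / fs
xy = np.array([[0, 0], [1000, 0], [0, 1000], [700, 700]], float)  # sensors [m]

c = 340.0                                   # acoustic speed [m/s]
az_true = np.deg2rad(60.0)                  # back azimuth toward the source
s_vec = -np.array([np.sin(az_true), np.cos(az_true)]) / c  # slowness [s/m]

# Broadband test signal in the microbarom band (0.1-0.5 Hz).
sig = sum(np.sin(2 * np.pi * f0 * t + k) for k, f0 in
          enumerate((0.15, 0.25, 0.35, 0.45)))
data = np.array([np.interp(t - xy[i] @ s_vec, t, sig)
                 + 0.1 * rng.normal(size=n) for i in range(len(xy))])

freqs = np.fft.rfftfreq(n, 1 / fs)
band = (freqs > 0.1) & (freqs < 0.5)
spec = np.fft.rfft(data, axis=1)[:, band]
fb = freqs[band]

def beam_power(az):
    """Coherent power for a trial plane wave from back azimuth az."""
    trial = -np.array([np.sin(az), np.cos(az)]) / c
    delays = xy @ trial
    steer = np.exp(2j * np.pi * fb[None, :] * delays[:, None])
    return np.sum(np.abs(np.sum(spec * steer, axis=0)) ** 2)

grid = np.deg2rad(np.arange(0.0, 360.0, 1.0))
best = float(np.rad2deg(grid[np.argmax([beam_power(a) for a in grid])]))
print(best)   # should peak near 60 degrees
```

Intersecting such best-beam azimuths from many arrays is how source regions are illuminated in the study; wind corrections would be applied to the azimuths afterwards.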

  7. Vorticity Transfer in Shock Wave Interactions with Turbulence and Vortices

    NASA Astrophysics Data System (ADS)

    Agui, J. H.; Andreopoulos, J.

    1998-11-01

    Time-dependent, three-dimensional vorticity measurements of shock waves interacting with grid-generated turbulence and concentrated tip vortices were conducted in a large-diameter shock tube facility. Two grids of different mesh size and a NACA-0012 semi-span wing acting as a tip vortex generator were used to carry out interactions at different relative Mach numbers. The turbulence interactions produced a clear amplification of the lateral and spanwise vorticity rms, while the longitudinal component remained mostly unaffected. By comparison, the tip vortex/shock wave interactions produced a twofold increase in the rms of longitudinal vorticity. Considerable attention was given to the vorticity source terms. The mean and rms of the vorticity stretching terms dominated by 5 to 7 orders of magnitude over the dilatational compression terms in all the interactions. All three signals of the stretching terms exhibited very intermittent, large-amplitude peak events, indicating the bursting character of the stretching process. Distributions of these signals were characterized by extremely large levels of flatness with varying degrees of skewness. These distribution patterns were found to change only slightly through the turbulence interactions. However, the tip vortex/shock wave interactions brought about significant changes in these distributions, associated with the abrupt structural changes of the vortex after the interaction.
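The skewness and flatness statistics used above to characterize the stretching-term signals are simple normalized moments; an intermittent, bursty signal shows a flatness far above the Gaussian value of 3. The toy signals below are illustrative, not the measured data.

```python
import numpy as np

rng = np.random.default_rng(7)

def skewness(x):
    """Third standardized moment."""
    x = x - x.mean()
    return np.mean(x**3) / np.mean(x**2) ** 1.5

def flatness(x):
    """Fourth standardized moment (kurtosis); 3 for a Gaussian."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

gaussian = rng.normal(size=200_000)
# Bursty toy signal: rare large-amplitude events on a quiet background.
bursts = rng.normal(size=200_000) * (rng.random(200_000) < 0.01)

print(round(flatness(gaussian), 1))   # close to 3
print(flatness(bursts) > 50)          # True: strongly intermittent
```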

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diwaker, E-mail: diwakerphysics@gmail.com; Chakraborty, Aniruddha

    The Smoluchowski equation with a time-dependent sink term is solved exactly. In this method, knowledge of the probability distribution P(0, s) at the origin allows derivation of the probability distribution P(x, s) at all positions. Exact solutions of the Smoluchowski equation are also provided for cases where the sink term has linear, constant, inverse, or exponential variation in time.

  9. A Preclinical Population Pharmacokinetic Model for Anti-CD20/CD3 T-Cell-Dependent Bispecific Antibodies.

    PubMed

    Ferl, Gregory Z; Reyes, Arthur; Sun, Liping L; Cheu, Melissa; Oldendorp, Amy; Ramanujan, Saroja; Stefanich, Eric G

    2018-05-01

    CD20 is a cell-surface receptor expressed by healthy and neoplastic B cells and is a well-established target for biologics used to treat B-cell malignancies. Pharmacokinetic (PK) and pharmacodynamic (PD) data for the anti-CD20/CD3 T-cell-dependent bispecific antibody BTCT4465A were collected in transgenic mouse and nonhuman primate (NHP) studies. Pronounced nonlinearity in drug elimination was observed in the murine studies, and time-varying, nonlinear PK was observed in NHPs, where three empirical drug elimination terms were identified using a mixed-effects modeling approach: i) a constant nonsaturable linear clearance term (7 mL/day/kg); ii) a rapidly decaying time-varying, linear clearance term (t½ = 1.6 h); and iii) a slowly decaying time-varying, nonlinear clearance term (t½ = 4.8 days). The two time-varying drug elimination terms approximately track with the time scales of B-cell depletion and T-cell migration/expansion within the central blood compartment. The mixed-effects NHP model was scaled to human, and prospective clinical simulations were generated. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
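A minimal sketch of the empirical clearance structure described above: a constant linear term plus two exponentially decaying time-varying terms. Only the constant term (7 mL/day/kg) and the two half-lives come from the abstract; the decaying-term amplitudes are assumed, and the concentration dependence of the nonlinear term is omitted for simplicity.

```python
import numpy as np

t = np.linspace(0, 21, 500)                  # days after dose

cl_constant = 7.0                            # mL/day/kg (from the abstract)
t_half_fast, t_half_slow = 1.6 / 24, 4.8     # half-lives in days (abstract)
a_fast, a_slow = 40.0, 15.0                  # assumed amplitudes, mL/day/kg

# Total apparent clearance decays from its initial value toward the
# constant nonsaturable term as the two time-varying terms die away.
clearance = (cl_constant
             + a_fast * np.exp(-np.log(2) * t / t_half_fast)
             + a_slow * np.exp(-np.log(2) * t / t_half_slow))

print(round(clearance[0], 1))    # 62.0 mL/day/kg at t = 0 (7 + 40 + 15)
print(round(clearance[-1], 1))   # decays toward the constant 7.0
```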

  10. A time reversal algorithm in acoustic media with Dirac measure approximations

    NASA Astrophysics Data System (ADS)

    Bretin, Élie; Lucas, Carine; Privat, Yannick

    2018-04-01

    This article is devoted to the study of a photoacoustic tomography model in which one considers the solution of the acoustic wave equation with a source term written as a separable function of time and space, whose temporal component is in some sense close to the derivative of the Dirac distribution at t = 0. This models a continuous-wave laser illumination performed during a short interval of time. We introduce an algorithm for reconstructing the spatial component of the source term from measurements of the solution recorded by sensors during a time T along the boundary of a connected bounded domain. It rests on the introduction of an auxiliary equivalent Cauchy problem, from which an explicit reconstruction formula is derived, followed by a deconvolution procedure. Numerical simulations illustrate our approach. Finally, the algorithm is extended to elasticity wave systems.
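The deconvolution step can be sketched in one dimension: model the recording as the sought quantity convolved with a known temporal kernel (here a derivative-of-Gaussian pulse standing in for the near-Dirac illumination) and invert it by Wiener deconvolution. All shapes, sizes, and noise levels are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1024

truth = np.zeros(n)
truth[200], truth[450] = 1.0, 0.6            # two point-like sources

tau = np.arange(-50, 51)
kernel_small = -tau * np.exp(-tau**2 / 50.0)  # derivative-of-Gaussian pulse
kernel = np.zeros(n)
kernel[tau % n] = kernel_small                # wrap for circular convolution

recorded = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(kernel)))
recorded += 0.01 * rng.normal(size=n)         # measurement noise

K = np.fft.fft(kernel)
reg = 1e-4                                    # regularization (noise/signal)
wiener = np.conj(K) / (np.abs(K) ** 2 + reg * np.max(np.abs(K)) ** 2)
recovered = np.real(np.fft.ifft(np.fft.fft(recorded) * wiener))

print(int(np.argmax(recovered)))   # strongest source recovered near index 200
```

The regularization constant plays the role the noise level plays in the paper's stabilized inversion: too small and noise is amplified, too large and the recovered sources blur.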

  11. Angular dependence of source-target-detector in active mode standoff infrared detection

    NASA Astrophysics Data System (ADS)

    Pacheco-Londoño, Leonardo C.; Castro-Suarez, John R.; Aparicio-Bolaños, Joaquín. A.; Hernández-Rivera, Samuel P.

    2013-06-01

    Active-mode standoff measurements using infrared spectroscopy were carried out in which the angle between target and source was varied from 0° to 70° with respect to the surface normal of substrates containing traces of highly energetic materials (explosives). The experiments used three infrared sources: a modulated source (Mod-FTIR), an unmodulated source (UnMod-FTIR), and a scanning quantum cascade laser (QCL), part of a dispersive mid-infrared (MIR) spectrometer. The targets consisted of PETN at 200 μg/cm2 deposited on aluminum plates placed 1 m from the sources. The evaluation of the three modalities was aimed at verifying the influence of the highly collimated laser beam on detection in comparison with the other sources. The Mod-FTIR performed better than the QCL source in terms of the decrease of MIR signal intensity with increasing angle.

  12. Estimating tree species richness from forest inventory plot data

    Treesearch

    Ronald E. McRoberts; Dacia M. Meneguzzo

    2007-01-01

    Montreal Process Criterion 1, Conservation of Biological Diversity, expresses species diversity in terms of the number of forest-dependent species. Species richness, defined as the total number of species present, is a common metric for analyzing species diversity. A crucial difficulty in estimating species richness from sample data obtained from sources such as inventory...
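A standard richness estimator of the kind applied to plot-sample abundance data is Chao1; the truncated abstract does not name the authors' estimator, so this only illustrates the general approach of correcting observed richness for unseen species.

```python
# Chao1: observed richness plus a correction based on the counts of
# species seen exactly once (singletons) and exactly twice (doubletons).
def chao1(counts):
    """Chao1 species-richness estimate from per-species abundance counts."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)   # singletons
    f2 = sum(1 for c in counts if c == 2)   # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0  # fallback when no doubletons
    return s_obs + f1 * f1 / (2.0 * f2)

sample = [10, 4, 3, 2, 2, 1, 1, 1]          # abundances of 8 observed species
print(chao1(sample))                        # 8 + 3*3/(2*2) = 10.25
```

Many singletons relative to doubletons signal that the sample is still missing species, which is exactly the situation with sparse inventory plots.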

  13. Executive Summary: Professional Partners Supporting Family Caregivers

    ERIC Educational Resources Information Center

    Kelly, Kathleen; Reinhard, Susan C.; Brooks-Danso, Ashley

    2008-01-01

    Today, more than three-quarters of adults who live in the community and need long-term care depend on family and friends as their only source of assistance with activities of daily living (such as bathing, dressing, and eating) or instrumental activities of daily living (such as transportation and managing finances). Research suggests that the…

  14. Reducing DoD Fossil-Fuel Dependence

    DTIC Science & Technology

    2006-09-01

    hour: the amount of energy available from one gigawatt in one hour. HFCS High-fructose corn syrup HHV High-heat value HICE Hydrogen internal combustion...63 Ethanol derived from corn .................................................... 63...particular, alternate fuels and energy sources are to be assessed in terms of multiple parameters, to include (but not limited to) stability, high & low

  15. Using faults for PSHA in a volcanic context: the Etna case (Southern Italy)

    NASA Astrophysics Data System (ADS)

    Azzaro, Raffaele; D'Amico, Salvatore; Gee, Robin; Pace, Bruno; Peruzza, Laura

    2016-04-01

    At Mt. Etna volcano (Southern Italy), recurrent volcano-tectonic earthquakes affect urbanised areas with an overall population of about 400,000 and with important infrastructure and lifelines. For this reason, seismic hazard analyses undertaken in the last decade have focused on the capability of local faults to generate damaging earthquakes, especially in the short term (5-30 yrs); these results are intended to complement the regulatory seismic hazard maps and to establish priorities in the seismic retrofitting of the exposed municipalities. Building on past experience, in the framework of the V3 Project funded by the Italian Department of Civil Defense we performed a fully probabilistic seismic hazard assessment using an original definition of seismic sources and ground-motion prediction equations specifically derived for this volcanic area; calculations refer to a brand new topographic surface (Mt. Etna reaches more than 3,000 m in elevation less than 20 km from the coast) and to both Poissonian and time-dependent occurrence models. We first present the process of defining seismic sources, which includes individual faults, seismic zones, and gridded seismicity; they are obtained by integrating geological field data with long-term (the historical macroseismic catalogue) and short-term earthquake data (the instrumental catalogue). The analysis of the frequency-magnitude distribution identifies areas in the volcanic complex with a- and b-values of the Gutenberg-Richter relationship representative of different dynamic processes. We then discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults, estimated using a purely geologic approach. This analysis has been carried out with the FiSH code, a Matlab® tool developed to turn fault data representative of the seismogenic process into hazard models.
The use of a magnitude-size scaling relationship specific to volcanic areas is a key element: the FiSH code calculates the most probable values of the characteristic expected magnitude (Mchar) with the associated standard deviation σ, the corresponding mean recurrence times (Tmean), and the aperiodicity factor α for each fault. Finally, we show some results obtained with the OpenQuake-engine by considering a conceptual logic tree model organised in several branches (zone and zoneless, historical and geological rates, Poissonian and time-dependent assumptions). Maps refer to various exposure periods (10% probability of exceedance in 5-30 years) and different spectral accelerations. The volcanic region of Mt. Etna represents a perfect lab for fault-based PSHA; the large dataset of input parameters used in the calculations allows testing different methodological approaches and validating some conceptual procedures.
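The time-dependent ingredient of such a renewal-model PSHA can be sketched as the conditional rupture probability under a Brownian Passage Time model, given a fault's mean recurrence time and aperiodicity. The parameter values below are illustrative, not Etna fault data.

```python
import numpy as np

def bpt_pdf(t, mu, alpha):
    """BPT (inverse-Gaussian) recurrence-time density, mean mu, aperiodicity alpha."""
    return (np.sqrt(mu / (2 * np.pi * alpha**2 * t**3))
            * np.exp(-((t - mu) ** 2) / (2 * mu * alpha**2 * t)))

def conditional_probability(t_elapsed, dt, mu, alpha):
    """P(rupture in [t_elapsed, t_elapsed+dt] | quiescent up to t_elapsed)."""
    grid = np.linspace(1e-3, 20 * mu, 200_000)
    cdf = np.cumsum(bpt_pdf(grid, mu, alpha)) * (grid[1] - grid[0])
    F = lambda x: np.interp(x, grid, cdf)
    return (F(t_elapsed + dt) - F(t_elapsed)) / (1.0 - F(t_elapsed))

mu, alpha = 200.0, 0.5      # illustrative Tmean [yr] and aperiodicity factor
p_early = conditional_probability(50.0, 30.0, mu, alpha)
p_late = conditional_probability(250.0, 30.0, mu, alpha)
print(p_early < p_late)     # True: hazard grows as elapsed time nears Tmean
```

This elapsed-time dependence is precisely what distinguishes the time-dependent branches of the logic tree from the Poissonian ones, where the 30-year probability would be independent of the time since the last rupture.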

  16. Climate change and drinking water production in The Netherlands: a flexible approach.

    PubMed

    Ramaker, T A B; Meuleman, A F M; Bernhardi, L; Cirkel, G

    2005-01-01

    Climate change increases water system dynamics through temperature changes, changes in precipitation patterns, evaporation, water quality, and water storage in ice packs. Water-system-dependent economic stakeholders, such as drinking water companies in The Netherlands, have to cope with the consequences of climate change, e.g. floods and water shortages in river systems, upconing of brackish groundwater, salt water intrusion, increasing peak demands, and microbiological activity. In the past decades, however, both water systems and drinking water production have become more and more inflexible; water systems have been heavily regulated and the drinking water supply has grown into an inflexible, but cheap and reliable, system. Flexibility and adaptivity are solutions to overcome climate-change-related consequences. Flexible adaptive strategies for drinking water production comprise new sources for drinking water production and application of storage concepts in the short term, and a redesign of large centralised systems, including flexible treatment plants, in the long term. Transition to flexible concepts will take decades because investment depreciation periods of assets are long. This implies that long-term strategies within an indicated time path have to be developed. These strategies must be based on thorough knowledge of current assets to seize opportunities for change.

  17. Predicting Long-term Temperature Increase for Time-Dependent SAR Levels with a Single Short-term Temperature Response

    PubMed Central

    Carluccio, Giuseppe; Bruno, Mary; Collins, Christopher M.

    2015-01-01

    Purpose Present a novel method for rapid prediction of temperature in vivo for a series of pulse sequences with differing levels and distributions of specific energy absorption rate (SAR). Methods After the temperature response to a brief period of heating is characterized, a rapid estimate of temperature during a series of periods at different heating levels is made using a linear heat equation and Impulse-Response (IR) concepts. Here the initial characterization and long-term prediction for a complete spine exam are made with the Pennes’ bioheat equation where, at first, core body temperature is allowed to increase and local perfusion is not. Then corrections through time allowing variation in local perfusion are introduced. Results The fast IR-based method predicted maximum temperature increase within 1% of that with a full finite difference simulation, but required less than 3.5% of the computation time. Even higher accelerations are possible depending on the time step size chosen, with loss in temporal resolution. Correction for temperature-dependent perfusion requires negligible additional time, and can be adjusted to be more or less conservative than the corresponding finite difference simulation. Conclusion With appropriate methods, it is possible to rapidly predict temperature increase throughout the body for actual MR examinations. PMID:26096947
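The impulse-response idea above can be sketched directly: once the temperature response h(t) to a brief unit-SAR heating pulse is known, the temperature rise for any piecewise-constant SAR schedule follows by linear convolution. The exponential form of h(t), the time constant, and the SAR schedule below are all assumed stand-ins for the characterized response and a real exam.

```python
import numpy as np

dt = 1.0                                     # seconds per step
t = np.arange(0, 1800, dt)                   # a 30-minute exam segment

tau = 300.0                                  # assumed tissue time constant [s]
h = (dt / tau) * np.exp(-t / tau)            # assumed unit-SAR impulse response

sar = np.zeros_like(t)                       # SAR schedule across sequences
sar[(t >= 0) & (t < 600)] = 2.0              # W/kg, first pulse sequence
sar[(t >= 900) & (t < 1500)] = 3.2           # W/kg, second pulse sequence

# Linearity of the heat equation lets responses to each sequence superpose.
temp_rise = np.convolve(sar, h)[: t.size]

print(round(temp_rise.max(), 2))             # peak rise, toy units
```

The speed advantage reported in the abstract comes from exactly this structure: one characterization run, then cheap convolutions instead of a full finite-difference simulation per exam.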

  18. Predicting long-term temperature increase for time-dependent SAR levels with a single short-term temperature response.

    PubMed

    Carluccio, Giuseppe; Bruno, Mary; Collins, Christopher M

    2016-05-01

    Present a novel method for rapid prediction of temperature in vivo for a series of pulse sequences with differing levels and distributions of specific energy absorption rate (SAR). After the temperature response to a brief period of heating is characterized, a rapid estimate of temperature during a series of periods at different heating levels is made using a linear heat equation and impulse-response (IR) concepts. Here the initial characterization and long-term prediction for a complete spine exam are made with the Pennes' bioheat equation where, at first, core body temperature is allowed to increase and local perfusion is not. Then corrections through time allowing variation in local perfusion are introduced. The fast IR-based method predicted maximum temperature increase within 1% of that with a full finite difference simulation, but required less than 3.5% of the computation time. Even higher accelerations are possible depending on the time step size chosen, with loss in temporal resolution. Correction for temperature-dependent perfusion requires negligible additional time and can be adjusted to be more or less conservative than the corresponding finite difference simulation. With appropriate methods, it is possible to rapidly predict temperature increase throughout the body for actual MR examinations. © 2015 Wiley Periodicals, Inc.

  19. Time-Dependent Moment Tensors of the First Four Source Physics Experiments (SPE) Explosions

    NASA Astrophysics Data System (ADS)

    Yang, X.

    2015-12-01

    We use mainly vertical-component geophone data within 2 km of the epicenter to invert for time-dependent moment tensors of the first four SPE explosions: SPE-1, SPE-2, SPE-3, and SPE-4Prime. We employ a one-dimensional (1D) velocity model developed from P- and Rg-wave travel times for Green's function calculations. The attenuation structure of the model is developed from P- and Rg-wave amplitudes. We select data for the inversion based on the criterion that they show travel times and amplitude behavior consistent with those predicted by the 1D model. Due to limited azimuthal coverage of the sources and the mostly vertical-component-only nature of the dataset, only the long-period, diagonal components of the moment tensors are well constrained. Nevertheless, the moment tensors, particularly their isotropic components, provide reasonable estimates of the long-period source amplitudes as well as estimates of corner frequencies, albeit with larger uncertainties. The estimated corner frequencies are nonetheless consistent with estimates from ratios of seismogram spectra from different explosions. These long-period source amplitudes and corner frequencies cannot be fit by classical P-wave explosion source models. The results motivate the development of new P-wave source models suitable for these chemical explosions. To that end, we fit the inverted moment-tensor spectra by modifying the classical explosion model using regressions of estimated source parameters. Although the number of data points used in the regression is small, the approach suggests a way forward for developing the new model when more data are collected.
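Estimating a corner frequency from a source spectrum can be sketched as a grid-search fit of a simple omega-squared shape, S(f) = M0 / (1 + (f/fc)^2). This generic shape and the synthetic data below are illustrative; the study's point is precisely that such classical shapes need modification for these explosions.

```python
import numpy as np

rng = np.random.default_rng(5)
f = np.logspace(-1, 1.5, 200)                      # 0.1 to ~31.6 Hz
m0_true, fc_true = 1.0e13, 4.0
spec = m0_true / (1 + (f / fc_true) ** 2)
spec *= np.exp(0.1 * rng.normal(size=f.size))      # multiplicative noise

def misfit(fc):
    """Log-domain misfit, with the amplitude fit out analytically."""
    model = 1.0 / (1 + (f / fc) ** 2)
    m0 = np.exp(np.mean(np.log(spec) - np.log(model)))
    return np.sum((np.log(spec) - np.log(m0 * model)) ** 2)

fc_grid = np.linspace(0.5, 15, 400)
fc_best = fc_grid[np.argmin([misfit(fc) for fc in fc_grid])]
print(round(float(fc_best), 1))     # recovered corner frequency, near 4.0
```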

  20. Density and white light brightness in looplike coronal mass ejections - Temporal evolution

    NASA Technical Reports Server (NTRS)

    Steinolfson, R. S.; Hundhausen, A. J.

    1988-01-01

    Three ambient coronal models suitable for studies of time-dependent phenomena were used to investigate the propagation of coronal mass ejections initiated in each atmosphere by an identical energy source. These models included a static corona with a dipole magnetic field, developed by Dryer et al. (1979); a steady polytropic corona with an equatorial coronal streamer, developed by Steinolfson et al. (1982); and Steinolfson's (1988) model of a heated corona with an equatorial coronal streamer. The results indicated that the first model does not adequately represent the general characteristics of observed looplike mass ejections, and the second model simulated only some of the observed features. Only the third model, which included a heating term and a streamer, was found to yield an accurate simulation of the mass ejection observations.

  1. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory

    NASA Astrophysics Data System (ADS)

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-01

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  2. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory.

    PubMed

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-13

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  3. Orientation dependence of temporal and spectral properties of high-order harmonics in solids [Orientation dependence of high-harmonic temporal and spectral properties in solids

    DOE PAGES

    Wu, Mengxi; You, Yongsing; Ghimire, Shambhu; ...

    2017-12-18

    We investigate the connection between crystal symmetry and the temporal and spectral properties of high-order harmonics in solids. We calculate the orientation-dependent harmonic spectrum driven by an intense, linearly polarized infrared laser field, using a momentum-space description of the generation process in terms of strong-field-driven electron dynamics on the band structure. We show that the orientation dependence of both the spectral yield and the subcycle time profile of the harmonic radiation can be understood in terms of the coupling strengths and relative curvatures of the valence band and the low-lying conduction bands. In particular, we show that in some systems this gives rise to a rapid shift of a quarter optical cycle in the timing of harmonics in the secondary plateau as the crystal is rotated relative to the laser polarization. Here, we address recent experimental results in MgO and show that the observed change in orientation dependence for the highest harmonics can be interpreted in the momentum-space picture in terms of the contributions of several different conduction bands.

  4. Orientation dependence of temporal and spectral properties of high-order harmonics in solids [Orientation dependence of high-harmonic temporal and spectral properties in solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Mengxi; You, Yongsing; Ghimire, Shambhu

    We investigate the connection between crystal symmetry and the temporal and spectral properties of high-order harmonics in solids. We calculate the orientation-dependent harmonic spectrum driven by an intense, linearly polarized infrared laser field, using a momentum-space description of the generation process in terms of strong-field-driven electron dynamics on the band structure. We show that the orientation dependence of both the spectral yield and the subcycle time profile of the harmonic radiation can be understood in terms of the coupling strengths and relative curvatures of the valence band and the low-lying conduction bands. In particular, we show that in some systems this gives rise to a rapid shift of a quarter optical cycle in the timing of harmonics in the secondary plateau as the crystal is rotated relative to the laser polarization. Here, we address recent experimental results in MgO and show that the observed change in orientation dependence for the highest harmonics can be interpreted in the momentum-space picture in terms of the contributions of several different conduction bands.

  5. Study on changes in cognitive function of elderly individuals certified for long-term care insurance.

    PubMed

    Horiguchi, Minako; Kokubu, Keiko; Mori, Toru

    2017-01-01

    Objectives To elucidate the changes in cognitive function in elderly individuals as observed in the results of a long-term care certification survey. Methods The data were obtained from the long-term care insurance records of 121 subjects who applied for benefit renewal between 2010 and 2011 in a city in Japan. The subjects were grouped into one of three groups (improved, maintained, or worsened) according to the change in status of overall cognitive function. Analyses were completed with this grouping as the main dependent variable and with sex, age, degree of independence at the initial insurance application in 2006, and levels of seven categories of cognitive function as independent variables. Results There was a statistically significant association between age and deterioration of various cognitive functions. Sex had no significant effect on the rate of deterioration. The initial degree of independence was positively associated with the cognitive function change. Multivariate analysis (logistic regression analysis) incorporating age, sex, and initial degree of dependence as independent variables revealed that sex does not significantly influence the prognosis of cognitive function. Changes in the score of each of the seven cognitive functions were analyzed with ANOVA, with categories of functions and individuals as sources of variance. Both function category and individuals were significantly associated with deterioration. Among the seven categories of functions, "understanding daily activities" showed the greatest deterioration, while "calling him/herself by his/her own name" showed the least. Conclusion Cognitive function, as observed in the long-term care certification survey, is more likely to deteriorate in older individuals and in those at higher levels of the dependency index at the time of initial certification, and this effect is observed equally in men and women.
Our results suggest that, in providing long-term care for elderly people, it may be useful to call the clients by their names and ask them to name themselves, as well as to try to improve their understanding of daily activities by articulating the components of each activity.

  6. Rupture Complexities of Fluid Induced Microseismic Events at the Basel EGS Project

    NASA Astrophysics Data System (ADS)

    Folesky, Jonas; Kummerow, Jörn; Shapiro, Serge A.; Häring, Markus; Asanuma, Hiroshi

    2016-04-01

    Microseismic data sets of excellent quality, such as the seismicity recorded in the Basel-1 enhanced geothermal system, Switzerland, in 2006-2007, provide the opportunity to analyse induced seismic events in great detail. It is important to understand to what extent seismological insights on, e.g., source and rupture processes are scale dependent and how they can be transferred to fluid-induced microseismicity. We applied the empirical Green's function (EGF) method to reconstruct the relative source time functions of 195 suitable microseismic events from the Basel-1 reservoir. We found 93 solutions with a clear and consistent directivity pattern. The remaining events either display no measurable directivity, are unfavourably oriented, or exhibit inconsistent or complex relative source time functions. In this work we focus on selected events of M ˜ 1 which show possible rupture complexities. It is demonstrated that the EGF method allows complex rupture behaviour to be resolved even when it is not directly identifiable in the seismograms. We find clear evidence of rupture directivity and multi-phase rupturing in the analysed relative source time functions. The time delays between consecutive subevents lie on the order of 10 ms. Amplitudes of the relative source time functions of the subevents do not always show the same azimuthal dependence, indicating dissimilarity in the rupture directivity of the subevents. Our observations support the assumption that heterogeneity on fault surfaces persists down to small scales (a few tens of meters).

  7. Scaling behavior of EEG amplitude and frequency time series across sleep stages

    NASA Astrophysics Data System (ADS)

    Kantelhardt, Jan W.; Tismer, Sebastian; Gans, Fabian; Schumann, Aicko Y.; Penzel, Thomas

    2015-10-01

We study short-term and long-term persistence properties (related to autocorrelations) of amplitudes and frequencies of EEG oscillations in 176 healthy subjects and 40 patients during nocturnal sleep. The amplitudes show scaling from 2 to 500 seconds (depending on the considered band) with large fluctuation exponents during (nocturnal) wakefulness (0.73-0.83) and small ones during deep sleep (0.50-0.69). Light sleep is similar to deep sleep, while REM sleep (0.64-0.76) is closer to wakefulness except for the EEG γ band. Some of the frequency time series also show long-term scaling, depending on the selected bands and stages. Only minor deviations are seen for patients with depression, anxiety, or Parkinson's disease.
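Fluctuation exponents of this kind are commonly obtained with detrended fluctuation analysis (DFA); whether the authors used exactly this variant is an assumption here. A stdlib-only DFA1 sketch on synthetic white noise (which should yield an exponent near 0.5, the uncorrelated baseline against which the reported 0.73-0.83 values are judged):

```python
import math
import random

def linfit(x, y):
    """Least-squares slope and intercept of y against x."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

def dfa_exponent(series, scales):
    """First-order detrended fluctuation analysis (DFA1)."""
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for v in series:                       # integrated, mean-subtracted profile
        acc += v - mean
        profile.append(acc)
    log_s, log_f = [], []
    for s in scales:
        nwin = len(profile) // s
        t = list(range(s))
        sq = 0.0
        for w in range(nwin):              # detrend each window by a line
            seg = profile[w * s:(w + 1) * s]
            slope, inter = linfit(t, seg)
            sq += sum((seg[i] - (inter + slope * i)) ** 2 for i in range(s))
        log_s.append(math.log(s))
        log_f.append(0.5 * math.log(sq / (nwin * s)))
    alpha, _ = linfit(log_s, log_f)        # scaling exponent = log-log slope
    return alpha

random.seed(1)
white = [random.gauss(0.0, 1.0) for _ in range(4096)]
alpha = dfa_exponent(white, [8, 16, 32, 64, 128])   # ~0.5 for uncorrelated noise
```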

  8. Estimation of the Cesium-137 Source Term from the Fukushima Daiichi Power Plant Using Air Concentration and Deposition Data

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2013-04-01

A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativeness of the measurements, the instrumental errors, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, and especially in situations of sparse observability, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. In Winiarek et al. (2012), we proposed to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We applied the method to the estimation of the Fukushima Daiichi cesium-137 and iodine-131 source terms using activity concentrations in the air. The results were compared to an L-curve estimation technique, and to Desroziers's scheme. In addition to the estimated released activities, we provided the related uncertainties (12 PBq with a std. of 15 - 20 % for cesium-137 and 190 - 380 PBq with a std. of 5 - 10 % for iodine-131). We also showed that, because of the low number of available observations (a few hundred) and even though orders of magnitude were consistent, the reconstructed activities significantly depended on the method used to estimate the prior errors. In order to use more data, we propose to extend the methods to the use of several data types, such as activity concentrations in the air and fallout measurements. The idea is to simultaneously estimate the prior errors related to each dataset, in order to fully exploit the information content of each one. 
Using the activity concentration measurements, but also daily fallout data from prefectures and cumulated deposition data over a region lying approximately 150 km around the nuclear power plant, we can use a few thousand data points in our inverse modeling algorithm to reconstruct the cesium-137 source term. To improve the parameterization of removal processes, rainfall fields have also been corrected using outputs from the mesoscale meteorological model WRF and ground station rainfall data. As expected, the different methods yield closer results as the number of data increases. Reference: Winiarek, V., M. Bocquet, O. Saunier, A. Mathieu (2012), Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant, J. Geophys. Res., 117, D05122, doi:10.1029/2011JD016932.
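The positivity-constrained inversion at the heart of such source-term estimation can be sketched with a toy problem. This is not the authors' maximum-likelihood machinery for the error amplitudes; it is a minimal projected-gradient solution of a regularised least-squares problem y = Hx with x ≥ 0, where the source-receptor matrix H, the true source x_true and the regularisation weight are all invented for illustration:

```python
# Toy linear source-receptor system: in practice H comes from runs of a
# dispersion model; here it is a small assumed matrix.
H = [[1.0, 0.5, 0.2],
     [0.3, 1.0, 0.4],
     [0.1, 0.2, 1.0],
     [0.8, 0.1, 0.3],
     [0.2, 0.7, 0.5],
     [0.4, 0.3, 0.9]]
x_true = [2.0, 0.0, 1.0]          # true release rates (one entry is zero)
y = [sum(H[i][j] * x_true[j] for j in range(3)) for i in range(6)]

lam = 1e-3                        # prior (regularisation) weight
x = [0.0, 0.0, 0.0]
step = 0.05
for _ in range(5000):
    # residual of the forward model, then gradient of ||Hx-y||^2 + lam*||x||^2
    r = [sum(H[i][j] * x[j] for j in range(3)) - y[i] for i in range(6)]
    grad = [2.0 * sum(H[i][j] * r[i] for i in range(6)) + 2.0 * lam * x[j]
            for j in range(3)]
    # Projection onto x >= 0 enforces the positivity of the source.
    x = [max(0.0, x[j] - step * grad[j]) for j in range(3)]
```

The projection step is the simplest way to honour the positivity assumption the abstract emphasises; the semi-Gaussian likelihood treatment in the paper handles it without approximation.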

  9. Sources and Deposition of Polycyclic Aromatic Hydrocarbons to Western U.S. National Parks

    PubMed Central

    USENKO, SASCHA; MASSEY SIMONICH, STACI L.; HAGEMAN, KIMBERLY J.; SCHRLAU, JILL E.; GEISER, LINDA; CAMPBELL, DON H.; APPLEBY, PETER G.; LANDERS, DIXON H.

    2010-01-01

Seasonal snowpack, lichens, and lake sediment cores were collected from fourteen lake catchments in eight western U.S. National Parks and analyzed for sixteen polycyclic aromatic hydrocarbons (PAHs) in order to determine their current and historical deposition, as well as to identify their potential sources. Seasonal snowpack was measured to determine the current wintertime atmospheric PAH deposition; lichens were measured to determine the long-term, year-round deposition; and the temporal PAH deposition trends were reconstructed using lake sediment cores dated using 210Pb and 137Cs. The fourteen remote lake catchments ranged from low-latitude catchments (36.6° N) at high elevation (2900 masl) in Sequoia National Park, CA to high-latitude catchments (68.4° N) at low elevation (427 masl) in the Alaskan Arctic. Over 75% of the catchments demonstrated statistically significant temporal trends in ΣPAH sediment flux, depending on catchment proximity to source regions and topographic barriers. The ΣPAH concentrations and fluxes in seasonal snowpack, lichens, and surficial sediment were 3.6 to 60,000 times greater in the Snyder Lake catchment of Glacier National Park than the other 13 lake catchments. The PAH ratios measured in snow, lichen, and sediment were used to identify a local aluminum smelter as a major source of PAHs to the Snyder Lake catchment. These results suggest that topographic barriers influence the atmospheric transport and deposition of PAHs in high-elevation ecosystems and that PAH sources to these national park ecosystems range from local point sources to diffuse regional and global sources. PMID:20465303

  10. Recent skyshine calculations at Jefferson Lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degtyarenko, P.

    1997-12-01

New calculations of the skyshine dose distribution of neutrons and secondary photons have been performed at Jefferson Lab using the Monte Carlo method. The dose dependence on neutron energy, distance to the neutron source, polar angle of a source neutron, and azimuthal angle between the observation point and the momentum direction of a source neutron have been studied. The azimuthally asymmetric term in the skyshine dose distribution is shown to be important in the dose calculations around high-energy accelerator facilities. A parameterization formula and corresponding computer code have been developed which can be used for detailed calculations of the skyshine dose maps.
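The abstract does not reproduce the parameterization formula itself. Skyshine dose parameterizations in the literature commonly combine an inverse-square geometric fall-off with an exponential air-attenuation factor; the sketch below uses that generic functional form with placeholder coefficients, purely to illustrate how such a fitted formula would be evaluated:

```python
import math

def skyshine_dose(Q, d, a=1.0e-15, lam=600.0):
    """Illustrative skyshine parameterisation: inverse-square fall-off times
    an exponential air-attenuation term. Q is the neutron source strength,
    d the distance in metres; a (dose * m^2 per source neutron) and the
    attenuation length lam (m) are placeholders, not fitted values."""
    return a * Q / d ** 2 * math.exp(-d / lam)

# Dose map along a radial line from the source for a fixed source strength.
doses = [skyshine_dose(1e12, d) for d in (100.0, 300.0, 1000.0)]
```

A real parameterization of this kind would also carry the energy, polar-angle and azimuthal-angle dependences the paper studies.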

  11. Analog performance of vertical nanowire TFETs as a function of temperature and transport mechanism

    NASA Astrophysics Data System (ADS)

    Martino, Marcio Dalla Valle; Neves, Felipe; Ghedini Der Agopian, Paula; Martino, João Antonio; Vandooren, Anne; Rooyackers, Rita; Simoen, Eddy; Thean, Aaron; Claeys, Cor

    2015-10-01

The goal of this work is to study the analog performance of tunnel field effect transistors (TFETs) and its susceptibility to temperature variation and to different dominant transport mechanisms. The experimental input characteristics of nanowire TFETs with different source compositions (100% Si and Si1-xGex) are presented, leading to the extraction of the activation energy for each bias condition. These first results have been connected to the prevailing transport mechanism for each configuration, namely band-to-band tunneling (BTBT) or trap-assisted tunneling (TAT). Afterward, this work analyzes the analog behavior, with the intrinsic voltage gain calculated in terms of Early voltage, transistor efficiency, transconductance and output conductance. Comparing the results for devices with different source compositions, it is interesting to note how the analog trends vary depending on the source characteristics and the prevailing transport mechanisms. This behavior results in a different suitability analysis depending on the working temperature. In other words, devices with a full-silicon source and a non-abrupt junction profile present the worst intrinsic voltage gain at room temperature, but the best results at high temperatures. This is because, among the four studied devices, this configuration was the only one with a positive dependence of the intrinsic voltage gain on temperature.
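Activation-energy extraction of the kind described is conventionally done from the slope of an Arrhenius plot, ln(I) versus 1/T at fixed bias. A minimal sketch with synthetic currents (the device current prefactor, temperatures and Ea value are assumptions for illustration, not measured data):

```python
import math

K_B = 8.617e-5                  # Boltzmann constant in eV/K

def arrhenius_current(i0, ea, t):
    """Thermally activated current for an assumed activation energy ea (eV)."""
    return i0 * math.exp(-ea / (K_B * t))

# Synthetic currents at several temperatures for an assumed Ea of 0.35 eV.
temps = [250.0, 300.0, 350.0, 400.0]
ea_true = 0.35
currents = [arrhenius_current(1e-6, ea_true, t) for t in temps]

# Extract Ea from the slope of ln(I) versus 1/T (Arrhenius plot).
inv_t = [1.0 / t for t in temps]
log_i = [math.log(c) for c in currents]
n = len(temps)
sx, sy = sum(inv_t), sum(log_i)
sxx = sum(v * v for v in inv_t)
sxy = sum(a * b for a, b in zip(inv_t, log_i))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
ea_extracted = -slope * K_B     # recovers the assumed activation energy
```

In the paper, a bias-dependent Ea near the semiconductor band gap points to BTBT, while lower values indicate TAT; the extraction step itself is the fit above.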

  12. Ozone depletion and chlorine loading potentials

    NASA Technical Reports Server (NTRS)

    Pyle, John A.; Wuebbles, Donald J.; Solomon, Susan; Zvenigorodsky, Sergei; Connell, Peter; Ko, Malcolm K. W.; Fisher, Donald A.; Stordal, Frode; Weisenstein, Debra

    1991-01-01

The recognition of the roles of chlorine and bromine compounds in ozone depletion has led to the regulation of their source gases. Some source gases are expected to be more damaging to the ozone layer than others, so that scientific guidance regarding their relative impacts is needed for regulatory purposes. Parameters used for this purpose include the steady-state and time-dependent chlorine loading potential (CLP) and the ozone depletion potential (ODP). Chlorine loading potentials depend upon the estimated value and accuracy of atmospheric lifetimes and are subject to significant (approximately 20-50 percent) uncertainties for many gases. Ozone depletion potentials depend on the same factors, as well as the evaluation of the release of reactive chlorine and bromine from each source gas and corresponding ozone destruction within the stratosphere.
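The steady-state CLP of a gas x is conventionally defined relative to CFC-11 as (τ_x/τ_CFC-11) · (M_CFC-11/M_x) · (n_Cl,x/3), which makes the lifetime dependence noted above explicit. A sketch follows; the 45-year CFC-11 lifetime and the methyl-chloroform values are illustrative round numbers, not the figures assessed in this paper:

```python
def chlorine_loading_potential(tau, molar_mass, n_cl,
                               tau_ref=45.0, m_ref=137.37):
    """Steady-state CLP relative to CFC-11 (3 Cl atoms, 137.37 g/mol).
    tau: atmospheric lifetime in years; the reference lifetime is an
    illustrative value and is the main source of CLP uncertainty."""
    return (tau / tau_ref) * (m_ref / molar_mass) * (n_cl / 3.0)

clp_cfc11 = chlorine_loading_potential(45.0, 137.37, 3)   # 1.0 by construction
clp_mcf = chlorine_loading_potential(5.0, 133.40, 3)      # methyl chloroform, ~0.11
```

Because CLP scales linearly with τ_x/τ_CFC-11, a 20-50% lifetime uncertainty propagates directly into the CLP, as the abstract states.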

  13. Measuring and modeling of soil N2O emissions - How well are we doing?

    NASA Astrophysics Data System (ADS)

    Butterbach-Bahl, K.; Ralf, K.; Werner, C.; Wolf, B.

    2017-12-01

Microbial processes in soils are the primary source of atmospheric N2O. Fertilizer use to boost the food and feed production of agricultural systems, together with nitrogen deposition to natural and semi-natural ecosystems caused by emissions of NOx and NH3 from agriculture and energy production and their re-deposition to terrestrial ecosystems, has likely nearly doubled the pre-industrial source strength of soils for atmospheric N2O. Quantifying soil emissions and identifying mitigation options is becoming a major focus in the climate debate, as N2O emissions from agricultural soils are a major contributor to the greenhouse gas footprint of agricultural systems, with agriculture (incl. land use change) contributing up to 30% of total anthropogenic GHG emissions. The increasing number of annual datasets shows that soil emissions a) depend largely on soil N availability and thus, e.g., on fertilizer application, b) vary with management (e.g. timing of fertilization, residue management, tillage), c) depend on soil properties such as organic matter content and pH, d) are affected by plant N uptake, and e) are controlled by environmental factors such as moisture and temperature regimes. It is remarkable that the magnitude of annual emissions is largely controlled by short-term N2O pulses occurring due to fertilization, wetting and drying, or freezing and thawing of soils. All of this contributes to a notorious variability of soil N2O emissions in space and time. Overcoming this variability to quantify source strengths and identify tangible mitigation options requires targeted measurement approaches as well as the translation of our knowledge of the mechanisms underlying emissions into process-oriented models, which finally might be used for upscaling and scenario studies. 
This paper aims at reviewing current knowledge on measurements, modelling and upscaling of soil N2O emissions, thereby identifying shortcomings and uncertainties of the various approaches and fields for future research.

  14. Characterisation of exposure to non-ionising electromagnetic fields in the Spanish INMA birth cohort: study protocol.

    PubMed

    Gallastegi, Mara; Guxens, Mònica; Jiménez-Zabala, Ana; Calvente, Irene; Fernández, Marta; Birks, Laura; Struchen, Benjamin; Vrijheid, Martine; Estarlich, Marisa; Fernández, Mariana F; Torrent, Maties; Ballester, Ferrán; Aurrekoetxea, Juan J; Ibarluzea, Jesús; Guerra, David; González, Julián; Röösli, Martin; Santa-Marina, Loreto

    2016-02-18

Analysis of the association between exposure to electromagnetic fields of non-ionising radiation (EMF-NIR) and health in children and adolescents is hindered by the limited availability of data, mainly due to the difficulties of exposure assessment. This study protocol describes the methodologies used for characterising exposure of children to EMF-NIR in the INMA (INfancia y Medio Ambiente - Environment and Childhood) Project, a prospective cohort study. Indirect methods (proximity to emission sources, questionnaires on source use and geospatial propagation models) and direct methods (spot and fixed longer-term measurements and personal measurements) were used to assess the exposure levels of study participants aged between 7 and 18 years. The methodology used varies depending on the frequency of the EMF-NIR and the environment (homes, schools and parks). Questionnaires assessed the use of sources contributing to both Extremely Low Frequency (ELF) and Radiofrequency (RF) exposure levels. Geospatial propagation models (NISMap) are implemented and validated for environmental outdoor sources of RF using spot measurements. Spot and fixed longer-term ELF and RF measurements were done in the environments where children spend most of their time. Moreover, personal measurements were taken in order to assess individual exposure to RF. The exposure data are used to explore their relationships with proximity to and/or use of EMF-NIR sources. Characterisation of EMF-NIR exposure by this combination of methods is intended to overcome problems encountered in other research. The assessment of the exposure of INMA cohort children and adolescents living in different regions of Spain to the full frequency range of EMF-NIR extends the characterisation of environmental exposures in this cohort. 
Together with other data obtained in the project, on socioeconomic and family characteristics and the development of the children and adolescents, this will make it possible to evaluate the complex interaction between health outcomes in children and adolescents and the various environmental factors that surround them.

  15. Measuring functional service quality using SERVQUAL in a high-dependence health service relationship.

    PubMed

    Clark, W Randy; Clark, Leigh Anne

    2007-01-01

    Although there is a growing concern about health care quality, little research has focused on how to measure quality in long-term care settings. In this article, we make the following observations: (1) most users of the SERVQUAL instrument reassess customers' expectations each time they measure quality perceptions; (2) long-term care relationships are likely to be ongoing, dependent relationships; (3) because of this dependence, customers in the long-term care setting are likely to reduce their expectations when faced with poor service quality; (4) by using this "settled" expectations level, service providers may make biased conclusions of quality improvements. We recommend various methods for overcoming or minimizing this "settling" effect and propose modifications to the SERVQUAL gap 5 measure to assess quality in a long-term care setting.
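The SERVQUAL gap-5 measure discussed here is the item-wise difference between perception and expectation ratings, averaged over a dimension; negative values mean service fell short of expectations, and the article's point is that "settled" (lowered) expectations inflate this score. A minimal sketch with hypothetical 7-point ratings:

```python
# Hypothetical 7-point ratings for one service dimension from one resident:
expectations = [7.0, 6.5, 6.0, 6.8]   # expectation items (E)
perceptions = [5.5, 6.0, 5.0, 6.2]    # matching perception items (P)

# Gap-5 score: mean item-wise P - E difference. Negative values indicate
# perceived quality below expectations; if expectations "settle" downward,
# the same perceptions yield a less negative (biased) score.
gap5 = sum(p - e for p, e in zip(perceptions, expectations)) / len(expectations)
```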

  16. Use of commercial and social sources of alcohol by underage drinkers: the role of pubertal timing.

    PubMed

    Storvoll, Elisabet E; Pape, Hilde; Rossow, Ingeborg

    2008-01-01

    We have explored whether alcohol use and procurement of alcohol from commercial and social sources vary with pubertal timing. A sub-sample of 9291 Norwegian minors (13-17 year-olds) was extracted from a nationwide school survey (response rate: 92%). Adolescents who had matured early (early developers, EDs) reported higher consumption and more alcohol-related harm than those who had matured late (late developers, LDs) or at the "normal" time (on time developers, ODs). Purchases from on-premise and off-premise outlets were much more important sources of alcohol for EDs than for ODs and LDs - both in relative and absolute terms. Moreover, EDs were somewhat more likely to obtain alcohol from social sources. Taken together, the findings indicate that adolescents who mature early have access to a larger variety of sources of alcohol than adolescents who mature later - which in turn may explain their increased level of drinking.

  17. Reversible and Irreversible Time-Dependent Behavior of GRCop-84

    NASA Technical Reports Server (NTRS)

    Lerch, Bradley A.; Arnold, Steven M.; Ellis, David L.

    2017-01-01

    A series of mechanical tests were conducted on a high-conductivity copper alloy, GRCop-84, in order to understand the time dependent response of this material. Tensile, creep, and stress relaxation tests were performed over a wide range of temperatures, strain rates, and stress levels to excite various amounts of time-dependent behavior. At low applied stresses the deformation behavior was found to be fully reversible. Above a certain stress, termed the viscoelastic threshold, irreversible deformation was observed. At these higher stresses the deformation was observed to be viscoplastic. Both reversible and irreversible regions contained time dependent deformation. These experimental data are documented to enable characterization of constitutive models to aid in design of high temperature components.

  18. Measures for the Dynamics in a Few-Body Quantum System with Harmonic Interactions

    NASA Astrophysics Data System (ADS)

    Nagy, I.; Pipek, J.; Glasser, M. L.

    2018-01-01

    We determine the exact time-dependent non-idempotent one-particle reduced density matrix and its spectral decomposition for a harmonically confined two-particle correlated one-dimensional system when the interaction terms in the Schrödinger Hamiltonian are changed abruptly. Based on this matrix in coordinate space we derive a precise condition for the equivalence of the purity and the overlap-square of the correlated and non-correlated wave functions as the model system with harmonic interactions evolves in time. This equivalence holds only if the interparticle interactions are affected, while the confinement terms are unaffected within the stability range of the system. Under this condition we analyze various time-dependent measures of entanglement and demonstrate that, depending on the magnitude of the changes made in the Hamiltonian, periodic, logarithmically increasing or constant value behavior of the von Neumann entropy can occur.
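The two entanglement measures compared in the abstract, purity and von Neumann entropy, are both functionals of the occupation spectrum {λ_k} of the one-particle reduced density matrix (with Σλ_k = 1). A sketch with illustrative spectra (the spectra themselves are not taken from the paper):

```python
import math

def purity(spectrum):
    """Purity Tr(rho^2) = sum of squared occupation numbers."""
    return sum(l * l for l in spectrum)

def von_neumann_entropy(spectrum):
    """S = -sum lambda_k ln lambda_k over the occupation spectrum."""
    return -sum(l * math.log(l) for l in spectrum if l > 0.0)

pure = [1.0]            # idempotent (uncorrelated) case: P = 1, S = 0
mixed = [0.5, 0.5]      # maximally mixed two-level spectrum: P = 1/2, S = ln 2
```

For the non-idempotent, time-evolving matrix of the paper, the spectrum becomes time dependent and these measures trace out the periodic, logarithmic or constant behaviour described above.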

  19. Do cigarette prices motivate smokers to quit? New evidence from the ITC survey

    PubMed Central

    Ross, Hana; Blecher, Evan; Yan, Lili; Hyland, Andrew

    2015-01-01

Aims To examine the importance of cigarette prices in influencing smoking cessation and the motivation to quit. Design We use longitudinal data from three waves of the International Tobacco Control Policy Evaluation Survey (ITC). The study contrasts smoking cessation and motivation to quit among US and Canadian smokers and evaluates how this relationship is modified by cigarette prices, nicotine dependence and health knowledge. Different price measures are used to understand how the ability to purchase cheaper cigarettes may reduce the influence of prices. Our first model examines whether cigarette prices affect motivation to quit smoking, using Generalized Estimating Equations to predict cessation stage and a least squares model to predict the change in cessation stage. The second model evaluates quitting behavior over time. The probability of quitting is estimated with Generalized Estimating Equations and a transition model to account for the ‘left-truncation’ of the data. Settings US and Canada. Participants 4352 smokers at Wave 1, 2000 smokers completing all three waves. Measurements Motivation to quit, cigarette prices, nicotine dependence and health knowledge. Findings Smokers living in areas with higher cigarette prices are significantly more motivated to quit. There is limited evidence to suggest that price increases over time may also increase quit motivation. Higher cigarette prices increase the likelihood of actual quitting, with the caveat that results are statistically significant in one out of two models. Access to cheaper cigarette sources does not impede cessation, although smokers would respond more aggressively (in terms of cessation) to price increases if cheaper cigarette sources were not available. Conclusions This research provides a unique opportunity to study smoking cessation among adult smokers and their response to cigarette prices in a market where they are able to avoid tax increases by purchasing cigarettes from cheaper sources. 
Higher cigarette prices appear to be associated with greater motivation to stop smoking, an effect which does not appear to be mitigated by cheaper cigarette sources. The paper supports the use of higher prices as a means of encouraging smoking cessation and motivation to quit. PMID:21059183

  20. Do cigarette prices motivate smokers to quit? New evidence from the ITC survey.

    PubMed

    Ross, Hana; Blecher, Evan; Yan, Lili; Hyland, Andrew

    2011-03-01

    To examine the importance of cigarette prices in influencing smoking cessation and the motivation to quit. We use longitudinal data from three waves of the International Tobacco Control Policy Evaluation Survey (ITC). The study contrasts smoking cessation and motivation to quit among US and Canadian smokers and evaluates how this relationship is modified by cigarette prices, nicotine dependence and health knowledge. Different price measures are used to understand how the ability to purchase cheaper cigarettes may reduce the influence of prices. Our first model examines whether cigarette prices affect motivation to quit smoking using Generalized Estimating Equations to predict cessation stage and a least squares model to predict the change in cessation stage. The second model evaluates quitting behavior over time. The probability of quitting is estimated with Generalized Estimating Equations and a transition model to account for the 'left-truncation' of the data. US and Canada. 4352 smokers at Wave 1, 2000 smokers completing all three waves. Motivation to quit, cigarette prices, nicotine dependence and health knowledge. Smokers living in areas with higher cigarette prices are significantly more motivated to quit. There is limited evidence to suggest that price increases over time may also increase quit motivation. Higher cigarette prices increase the likelihood of actual quitting, with the caveat that results are statistically significant in one out of two models. Access to cheaper cigarette sources does not impede cessation although smokers would respond more aggressively (in terms of cessation) to price increases if cheaper cigarette sources were not available. This research provides a unique opportunity to study smoking cessation among adult smokers and their response to cigarette prices in a market where they are able to avoid tax increases by purchasing cigarettes from cheaper sources. 
Higher cigarette prices appear to be associated with greater motivation to stop smoking, an effect which does not appear to be mitigated by cheaper cigarette sources. The paper supports the use of higher prices as a means of encouraging smoking cessation and motivation to quit. © 2010 The Authors, Addiction © 2010 Society for the Study of Addiction.

  1. A higher-order Skyrme model

    NASA Astrophysics Data System (ADS)

    Gudnason, Sven Bjarke; Nitta, Muneto

    2017-09-01

We propose a higher-order Skyrme model with derivative terms of eighth, tenth and twelfth order. Our construction yields simple and easy-to-interpret higher-order Lagrangians. We first show that a Skyrmion with the higher-order terms proposed by Marleau has an instability in the form of a baby-Skyrmion string, while the static energies of our construction are positive definite, implying stability against time-independent perturbations. However, we also find that the Hamiltonians of our construction possess two kinds of dynamical instabilities, which may indicate instability with respect to time-dependent perturbations. Unlike the well-known Ostrogradsky instability, the instabilities that we find are intrinsically nonlinear in nature and arise because even powers of the inverse metric give a ghost-like higher-order kinetic-like term. The vacuum state is, however, stable. Finally, we show that at sufficiently low energies our Hamiltonians, in the simplest cases, are stable against time-dependent perturbations.

  2. Real-time monitoring of steady-state pulsed chemical beam epitaxy by p-polarized reflectance

    NASA Astrophysics Data System (ADS)

    Bachmann, K. J.; Sukidi, N.; Höpfner, C.; Harris, C.; Dietz, N.; Tran, H. T.; Beeler, S.; Ito, K.; Banks, H. T.

    1998-01-01

The structure in the p-polarized reflectance (PR) intensity Rp4(t) - observed under conditions of pulsed chemical beam epitaxy (PCBE) - is modeled on the basis of the four-layer stack: ambient/surface reaction layer (SRL)/epilayer/substrate. Linearization of the PR intensity with regard to the phase factor associated with the SRL results in a good approximation that can be expressed as Rp4 = Rp3 + ΔRp. Rp3 is the reflectivity of the three-layer stack ambient-epilayer-substrate. ΔRp describes the properties of the SRL. An explicit relation is derived between ΔRp(t) and the time-dependent surface concentrations ch(t) (h = 1, 2, …, N) of the constituents of the SRL, which holds for conditions of submonolayer coverage of the surface by source vapor molecules. Under conditions of low temperature PCBE at high flux, the SRL is expected to exhibit nonideal behavior, mandating replacement of the surface concentrations by activities. Also, in this case, the thickness of the SRL must be represented in terms of partial molar volumina Vh. Since the relation between ΔRp(t) and the activities of reactants, intermediates and products of the chemical reactions driving heteroepitaxial growth is non-linear, the extraction of kinetic parameters from the measured time dependence of the PR signal generally requires numerical modeling.
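The baseline term Rp3 of the decomposition Rp4 = Rp3 + ΔRp is the standard Airy (three-layer) p-polarised reflectance. A self-contained sketch follows; the refractive indices, thickness and wavelength in the example are assumptions for illustration, not values for an actual epilayer/substrate system:

```python
import cmath
import math

def rp_interface(n_i, n_j, cos_i, cos_j):
    """Fresnel p-polarised amplitude reflection coefficient at one interface."""
    return (n_j * cos_i - n_i * cos_j) / (n_j * cos_i + n_i * cos_j)

def rp3(n1, n2, n3, d, wavelength, theta1):
    """|r_p|^2 of the ambient/epilayer/substrate stack (Airy formula)."""
    s1 = math.sin(theta1)
    cos1 = math.cos(theta1)
    cos2 = cmath.sqrt(1 - (n1 * s1 / n2) ** 2)   # Snell's law, complex-safe
    cos3 = cmath.sqrt(1 - (n1 * s1 / n3) ** 2)
    r12 = rp_interface(n1, n2, cos1, cos2)
    r23 = rp_interface(n2, n3, cos2, cos3)
    beta = 2 * math.pi * n2 * d * cos2 / wavelength   # film phase thickness
    phase = cmath.exp(2j * beta)
    r = (r12 + r23 * phase) / (1 + r12 * r23 * phase)
    return abs(r) ** 2

# Sanity check: a quarter-wave layer with n2 = sqrt(n1*n3) nulls the reflectance.
R_ar = rp3(1.0, 1.5, 2.25, 100.0, 600.0, 0.0)
```

The SRL correction ΔRp would enter as a fourth, sub-monolayer film on top of this stack, which is why it can be treated as a small linearised perturbation of Rp3.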

  3. An Atmospheric Constraint on the NO2 Dependence of Daytime Near-Surface Nitrous Acid (HONO).

    PubMed

    Pusede, Sally E; VandenBoer, Trevor C; Murphy, Jennifer G; Markovic, Milos Z; Young, Cora J; Veres, Patrick R; Roberts, James M; Washenfelder, Rebecca A; Brown, Steven S; Ren, Xinrong; Tsai, Catalina; Stutz, Jochen; Brune, William H; Browne, Eleanor C; Wooldridge, Paul J; Graham, Ashley R; Weber, Robin; Goldstein, Allen H; Dusanter, Sebastien; Griffith, Stephen M; Stevens, Philip S; Lefer, Barry L; Cohen, Ronald C

    2015-11-03

    Recent observations suggest a large and unknown daytime source of nitrous acid (HONO) to the atmosphere. Multiple mechanisms have been proposed, many of which involve chemistry that reduces nitrogen dioxide (NO2) on some time scale. To examine the NO2 dependence of the daytime HONO source, we compare weekday and weekend measurements of NO2 and HONO in two U.S. cities. We find that daytime HONO does not increase proportionally to increases in same-day NO2, i.e., the local NO2 concentration at that time and several hours earlier. We discuss various published HONO formation pathways in the context of this constraint.

  4. Short-term seismic precursors to Icelandic eruptions 1973-2014.

    NASA Astrophysics Data System (ADS)

    Einarsson, Páll

    2018-05-01

    Networks of seismographs of high sensitivity have been in use in the vicinity of active volcanoes in Iceland since 1973. During this time 21 confirmed eruptions have occurred and several intrusions where magma did not reach the surface. All these events have been accompanied by characteristic seismic activity. Long-term precursory activity is characterised by low-level, persistent seismicity (months-years), clustered around an inflating magma body. Whether or not a magma accumulation is accompanied by seismicity depends on the tectonic setting, interplate or intraplate, the depth of magma accumulation, the previous history and the state of stress. All eruptions during the time of observation had a detectable short-term seismic precursor marking the time of dike propagation towards the surface. The precursor times varied between 15 minutes and 13 days. In half of the cases the precursor time was less than 2 hours. Three eruptions stand out for their long duration of the immediate precursory activity, Heimaey 1973 with 30 hours, Gjálp 1996 with 34 hours, and Bárðarbunga 2014 with 13 days. In the case of Heimaey the long time is most likely the consequence of the great depth of the magma source, 15-25 km. The Gjálp eruption had a prelude that was unusual in many respects. The long propagation time may have resulted from a complicated triggering scenario involving more than one magma chamber. The Bárðarbunga eruption at Holuhraun issued from the distal end of a dike that took 13 days to propagate laterally for 48 km before it opened to the surface. Out of the 21 detected precursors 14 were noticed soon enough to lead to a public warning of the coming eruption. In 4 additional cases the precursory signal was noticed before the eruption was seen. In only 3 cases was the eruption seen or detected before the seismic precursor was verified.

  5. Effect of basic physical parameters to control plasma meniscus and beam halo formation in negative ion sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyamoto, K.; Okuda, S.; Nishioka, S.

    2013-09-14

Our previous study shows that the curvature of the plasma meniscus causes the beam halo in negative ion sources: the negative ions extracted from the periphery of the meniscus are over-focused in the extractor due to the electrostatic lens effect, and consequently become the beam halo. In this article, the detailed physics of the plasma meniscus and beam halo formation is investigated with two-dimensional particle-in-cell simulation. It is shown that basic physical parameters such as the H{sup −} extraction voltage and the effective electron confinement time significantly affect the formation of the plasma meniscus and the resultant beam halo, since the penetration of the electric field for negative ion extraction depends on these physical parameters. In particular, the electron confinement time depends on the characteristic time of electron escape along the magnetic field as well as the characteristic time of electron diffusion across the magnetic field. The plasma meniscus penetrates deeply into the source plasma region when the effective electron confinement time is short. In this case, the curvature of the plasma meniscus becomes large, and consequently the fraction of the beam halo increases.

  6. Modeling magnetic field and TEC signatures of large-amplitude acoustic and gravity waves generated by natural hazard events

    NASA Astrophysics Data System (ADS)

    Zettergren, M. D.; Snively, J. B.; Inchin, P.; Komjathy, A.; Verkhoglyadova, O. P.

    2017-12-01

Ocean and solid earth responses during earthquakes are a significant source of large amplitude acoustic and gravity waves (AGWs) that perturb the overlying ionosphere-thermosphere (IT) system. IT disturbances are routinely detected following large earthquakes (M > 7.0) via GPS total electron content (TEC) observations, which often show acoustic wave (3-4 min periods) and gravity wave (10-15 min) signatures with amplitudes of 0.05-2 TECU. In cases of very large earthquakes (M > 8.0) the persisting acoustic waves are estimated to have 100-200 m/s compressional velocities in the conducting ionospheric E and F-regions and should generate significant dynamo currents and magnetic field signatures. Indeed, some recent reports (e.g. Hao et al., 2013, JGR, 118, 6) show evidence for magnetic fluctuations, which appear to be related to AGWs, following recent large earthquakes. However, very little quantitative information is available on: (1) the detailed spatial and temporal dependence of these magnetic fluctuations, which are usually observed at a small number of irregularly arranged stations, and (2) the relation of these signatures to TEC perturbations in terms of relative amplitudes, frequency, and timing for different events. This work investigates space- and time-dependent behavior of both TEC and magnetic fluctuations following recent large earthquakes, with the aim to improve physical understanding of these perturbations via detailed, high-resolution, two- and three-dimensional modeling case studies with a coupled neutral atmospheric and ionospheric model, MAGIC-GEMINI (Zettergren and Snively, 2015, JGR, 120, 9). We focus on cases inspired by the large Chilean earthquakes from the past decade (viz., the M > 8.0 earthquakes from 2010 and 2015) to constrain the sources for the model, i.e. size, frequency, amplitude, and timing, based on available information from ocean buoy and seismometer data. 
TEC data are used to validate source amplitudes and to constrain background ionospheric conditions. Preliminary comparisons against available magnetic field and TEC data from these events provide evidence, albeit limited and localized, that support the validity of the spatially-resolved simulation results.

  7. The need for harmonization of methods for finding locations and magnitudes of air pollution sources using observations of concentrations and wind fields

    NASA Astrophysics Data System (ADS)

    Hanna, Steven R.; Young, George S.

    2017-01-01

    What do the terms "top-down", "inverse", "backwards", "adjoint", "sensor data fusion", "receptor", "source term estimation (STE)", to name several appearing in the current literature, have in common? These varied terms are used by different disciplines to describe the same general methodology - the use of observations of air pollutant concentrations and knowledge of wind fields to identify air pollutant source locations and/or magnitudes. Academic journals are publishing increasing numbers of papers on this topic. Examples of scenarios related to this growing interest, ordered from small scale to large scale, are: use of real-time samplers to quickly estimate the location of a toxic gas release by a terrorist at a large public gathering (e.g., Haupt et al., 2009);

  8. The use of hierarchical clustering for the design of optimized monitoring networks

    NASA Astrophysics Data System (ADS)

    Soares, Joana; Makar, Paul Andrew; Aklilu, Yayne; Akingunola, Ayodeji

    2018-05-01

    Associativity analysis is a powerful tool to deal with large-scale datasets by clustering the data on the basis of (dis)similarity and can be used to assess the efficacy and design of air quality monitoring networks. We describe here our use of Kolmogorov-Zurbenko filtering and hierarchical clustering of NO2 and SO2 passive and continuous monitoring data to analyse and optimize air quality networks for these species in the province of Alberta, Canada. The methodology applied in this study assesses dissimilarity between monitoring station time series based on two metrics: 1 - R, R being the Pearson correlation coefficient, and the Euclidean distance; we find that both should be used in evaluating monitoring site similarity. We have combined the analytic power of hierarchical clustering with the spatial information provided by deterministic air quality model results, using the gridded time series of model output as potential station locations, as a proxy for assessing monitoring network design and for network optimization. We demonstrate that clustering results depend on the air contaminant analysed, reflecting the difference in the respective emission sources of SO2 and NO2 in the region under study. Our work shows that much of the signal identifying the sources of NO2 and SO2 emissions resides in shorter timescales (hourly to daily) due to short-term variation of concentrations and that longer-term averages in data collection may lose the information needed to identify local sources. However, the methodology identifies stations mainly influenced by seasonality, if larger timescales (weekly to monthly) are considered. We have performed the first dissimilarity analysis based on gridded air quality model output and have shown that the methodology is capable of generating maps of subregions within which a single station will represent the entire subregion, to a given level of dissimilarity. 
We have also shown that our approach is capable of identifying different sampling methodologies as well as outliers (stations' time series which are markedly different from all others in a given dataset).
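
    The 1 - R dissimilarity metric described in this record can be sketched in a few lines. The station names and synthetic series below are hypothetical stand-ins for the Alberta monitoring data, and a real analysis would feed the resulting dissimilarity matrix to an agglomerative clustering routine such as `scipy.cluster.hierarchy.linkage`:

```python
# Sketch of correlation-based station dissimilarity (1 - R, R = Pearson).
# Stations "A"-"C" are synthetic: A and B share a diurnal signal, C does not.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
t = np.arange(500)
base = np.sin(2 * np.pi * t / 24)                 # shared diurnal signal
stations = {
    "A": base + 0.1 * rng.standard_normal(t.size),
    "B": base + 0.1 * rng.standard_normal(t.size),
    "C": rng.standard_normal(t.size),             # unrelated "outlier" station
}

def dissimilarity(x, y):
    """1 - R metric: small for stations seeing the same sources."""
    r = np.corrcoef(x, y)[0, 1]
    return 1.0 - r

for a, b in combinations(stations, 2):
    print(a, b, round(dissimilarity(stations[a], stations[b]), 3))
```

    Stations A and B cluster tightly (dissimilarity near zero), while the outlier station C sits near 1, which is how markedly different time series are flagged.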

  9. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy - Part 2: Computational implementation and first results

    NASA Astrophysics Data System (ADS)

    Peruzza, Laura; Azzaro, Raffaele; Gee, Robin; D'Amico, Salvatore; Langer, Horst; Lombardo, Giuseppe; Pace, Bruno; Pagani, Marco; Panzera, Francesco; Ordaz, Mario; Suarez, Miguel Leonardo; Tusa, Giuseppina

    2017-11-01

    This paper describes the model implementation and presents results of a probabilistic seismic hazard assessment (PSHA) for the Mt. Etna volcanic region in Sicily, Italy, considering local volcano-tectonic earthquakes. Working in a volcanic region presents new challenges not typically faced in standard PSHA, which are broadly due to the nature of the local volcano-tectonic earthquakes, the cone shape of the volcano and the attenuation properties of seismic waves in the volcanic region. These have been accounted for through the development of a seismic source model that integrates data from different disciplines (historical and instrumental earthquake datasets, tectonic data, etc.; presented in Part 1, by Azzaro et al., 2017) and through the development and software implementation of original tools for the computation, such as a new ground-motion prediction equation and magnitude-scaling relationship specifically derived for this volcanic area, and the capability to account for the surficial topography in the hazard calculation, which influences source-to-site distances. Hazard calculations have been carried out after updating the most recent releases of two widely used PSHA software packages (CRISIS, as in Ordaz et al., 2013; the OpenQuake engine, as in Pagani et al., 2014). Results are computed for short- to mid-term exposure times (10 % probability of exceedance in 5 and 30 years, Poisson and time dependent) and spectral amplitudes of engineering interest. A preliminary exploration of the impact of site-specific response is also presented for the densely inhabited Etna's eastern flank, and the change in expected ground motion is finally commented on. These results do not account for M > 6 regional seismogenic sources which control the hazard at long return periods. 
However, by focusing on the impact of M < 6 local volcano-tectonic earthquakes, which dominate the hazard at the short- to mid-term exposure times considered in this study, we present a different viewpoint that, in our opinion, is relevant for retrofitting the existing buildings and for driving impending interventions of risk reduction.
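
    The time-dependent branch of such a hazard calculation typically rests on a renewal model. As a minimal sketch (not the Etna implementation), the Brownian Passage Time distribution coincides with an inverse Gaussian, so the conditional rupture probability can be computed with the standard-normal CDF alone; the mean recurrence, aperiodicity, and elapsed time below are hypothetical values:

```python
# Conditional rupture probability from a Brownian Passage Time (BPT) model.
# BPT(mu, alpha) equals an inverse Gaussian with mean mu, shape lam = mu/alpha**2.
from math import erf, exp, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bpt_cdf(t, mu, alpha):
    lam = mu / alpha ** 2
    a = sqrt(lam / t)
    return phi(a * (t / mu - 1.0)) + exp(2.0 * lam / mu) * phi(-a * (t / mu + 1.0))

def conditional_prob(elapsed, window, mu, alpha):
    """P(rupture within `window` yr | quiescent for `elapsed` yr)."""
    f0 = bpt_cdf(elapsed, mu, alpha)
    return (bpt_cdf(elapsed + window, mu, alpha) - f0) / (1.0 - f0)

mu, alpha = 150.0, 0.5            # hypothetical mean recurrence (yr), aperiodicity
p_bpt = conditional_prob(100.0, 30.0, mu, alpha)
p_poisson = 1.0 - exp(-30.0 / mu) # time-independent (Poisson) reference
print(p_bpt, p_poisson)
```

    For a quasi-periodic fault (small aperiodicity) late in its cycle, the conditional probability exceeds the Poisson value, which is exactly the effect a time-dependent PSHA is designed to capture.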

  10. YouTube as a patient-information source for root canal treatment.

    PubMed

    Nason, K; Donnelly, A; Duncan, H F

    2016-12-01

To assess the content and completeness of YouTube™ as an information source for patients undergoing root canal treatment procedures. YouTube™ (https://www.youtube.com/) was searched for information using three relevant treatment search terms ('endodontics', 'root canal' and 'root canal treatment'). After exclusions (language, no audio, >15 min, duplicates), 20 videos per search term were selected. General video assessment included duration, ownership, views, age, likes/dislikes, target audience and video/audio quality, whilst content was analysed under six categories ('aetiology', 'anatomy', 'symptoms', 'procedure', 'postoperative course' and 'prognosis'). Content was scored for completeness level and statistically analysed using ANOVA and post hoc Tukey's test (P < 0.05). To obtain 60 acceptable videos, 124 were assessed. Depending on the search term employed, the video content and ownership differed markedly. There was wide variation in both the number of video views and 'likes/dislikes'. The average video age was 788 days. In total, 46% of videos were 'posted' by a dentist/specialist source; however, this was search term specific, rising to 70% of uploads for the search 'endodontic', whilst laypersons contributed 18% of uploads for the search 'root canal treatment'. Every video lacked content in the designated six categories, although 'procedure' details were covered more frequently and in better detail than other categories. Videos posted by dental professional (P = 0.046) and commercial sources (P = 0.009) were significantly more complete than videos posted by laypeople. YouTube™ videos for endodontic search terms varied significantly by source and content and were generally incomplete. The danger of patient reliance on YouTube™ is highlighted, as is the need for endodontic professionals to play an active role in directing patients towards alternative high-quality information sources. © 2015 International Endodontic Journal.
Published by John Wiley & Sons Ltd.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christy, Brian; Anella, Ryan; Lommen, Andrea

Pulsar timing arrays (PTAs) are a collection of precisely timed millisecond pulsars (MSPs) that can search for gravitational waves (GWs) in the nanohertz frequency range by observing characteristic signatures in the timing residuals. The sensitivity of a PTA depends on the direction of the propagating GW source, the timing accuracy of the pulsars, and the allocation of the available observing time. The goal of this paper is to determine the optimal time allocation strategy among the MSPs in the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) for a single source of GW under a particular set of assumptions. We consider both an isotropic distribution of sources across the sky and a specific source in the Virgo cluster. This work improves on previous efforts by modeling the effect of intrinsic spin noise for each pulsar. We find that, in general, the array is optimized by maximizing time spent on the best-timed pulsars, with sensitivity improvements typically ranging from a factor of 1.5 to 4.

  12. Characterization of the Reverberation Chamber at the NASA Langley Structural Acoustics Loads and Transmission (SALT) Facility

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.

    2013-01-01

    In 2011 the noise generating capabilities in the reverberation chamber of the Structural Acoustic Loads and Transmission (SALT) facility at NASA Langley Research Center were enhanced with two fiberglass reinforced polyester resin exponential horns, each coupled to Wyle Acoustic Source WAS-3000 airstream modulators. This report describes the characterization of the reverberation chamber in terms of the background noise, diffusivity, sound pressure levels, the reverberation times and the related overall acoustic absorption in the empty chamber and with the acoustic horn(s) installed. The frequency range of interest includes the 80 Hz to 8000 Hz one-third octave bands. Reverberation time and sound pressure level measurements were conducted and standard deviations from the mean were computed. It was concluded that a diffuse field could be produced above the Schroeder frequency in the 400 Hz one-third octave band and higher for all applications. This frequency could be lowered by installing panel diffusers or moving vanes to improve the acoustic modal overlap in the chamber. In the 80 Hz to 400 Hz one-third octave bands a successful measurement will be dependent on the type of measurement, the test configuration, the source and microphone locations and the desired accuracy. It is recommended that qualification measurements endorsed in the International Standards be conducted for each particular application.
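
    The lower limit of the diffuse-field range quoted in this record is governed by the Schroeder frequency, which follows from the chamber volume and reverberation time via f_S = 2000·sqrt(T60/V). The volume and T60 below are hypothetical round numbers, not the measured SALT values:

```python
# Schroeder frequency: above this, modal overlap is high enough for a
# statistically diffuse field. T60 in seconds, volume in cubic metres.
from math import sqrt

def schroeder_frequency(t60, volume):
    return 2000.0 * sqrt(t60 / volume)

fs = schroeder_frequency(5.0, 280.0)   # hypothetical chamber values
print(round(fs))
```

    With these illustrative numbers the diffuse region starts in the mid-200 Hz range; installing diffusers or moving vanes raises modal overlap and effectively lowers this limit, as the report recommends.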

  13. Performance comparison of independent component analysis algorithms for fetal cardiac signal reconstruction: a study on synthetic fMCG data

    NASA Astrophysics Data System (ADS)

    Mantini, D.; Hild, K. E., II; Alleva, G.; Comani, S.

    2006-02-01

    Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performances of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratios (SIR) were measured. The first involves averaging over all estimated components and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times.
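
    The signal-to-interference ratio used to score separated components can be sketched as a projection of the estimated component onto the true source; the function and toy "fetal" trace below are illustrative, not the paper's implementation:

```python
# SIR in dB: power of the part of `estimate` aligned with `source`,
# divided by the power of everything else (the interference).
import numpy as np

def sir_db(estimate, source):
    source = source - source.mean()
    estimate = estimate - estimate.mean()
    gain = np.dot(estimate, source) / np.dot(source, source)
    aligned = gain * source               # component explained by the source
    residual = estimate - aligned         # interference + noise
    return 10.0 * np.log10(np.dot(aligned, aligned) / np.dot(residual, residual))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1000)
fetal = np.sin(2 * np.pi * 2.3 * t)       # toy stand-in for a fetal cardiac trace
estimate = fetal + 0.01 * rng.standard_normal(t.size)
print(sir_db(estimate, fetal))
```

    A near-perfect separation gives a high SIR; the 20 dB threshold used in the study corresponds to interference power 100 times below the recovered signal power.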

  14. A global time-dependent model of thunderstorm electricity. I - Mathematical properties of the physical and numerical models

    NASA Technical Reports Server (NTRS)

    Browning, G. L.; Tzur, I.; Roble, R. G.

    1987-01-01

    A time-dependent model is introduced that can be used to simulate the interaction of a thunderstorm with its global electrical environment. The model solves the continuity equation of the Maxwell current, which is assumed to be composed of the conduction, displacement, and source currents. Boundary conditions which can be used in conjunction with the continuity equation to form a well-posed initial-boundary value problem are determined. Properties of various components of solutions of the initial-boundary value problem are analytically determined. The results indicate that the problem has two time scales, one determined by the background electrical conductivity and the other by the time variation of the source function. A numerical method for obtaining quantitative results is introduced, and its properties are studied. Some simulation results on the evolution of the displacement and conduction currents during the electrification of a storm are presented.
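
    The two time scales identified in this record (conductivity-driven relaxation vs. source variation) show up already in a zero-dimensional caricature of the Maxwell current continuity equation, ε0 dE/dt + σE = J_s(t); the values below are normalized and purely illustrative:

```python
# Forward-Euler integration of eps0*dE/dt + sigma*E = Js (normalized units).
# The field relaxes on the conductivity time scale tau = eps0/sigma toward
# the quasi-static balance E = Js/sigma set by the source current.
eps0, sigma, js = 1.0, 2.0, 1.0    # tau = eps0/sigma = 0.5
dt, e_field = 1e-3, 0.0
for _ in range(10_000):            # integrate to t = 10 >> tau
    e_field += dt * (js - sigma * e_field) / eps0
print(e_field)                     # approaches Js/sigma = 0.5
```

    Early in the transient the displacement current ε0 dE/dt dominates; after a few relaxation times the conduction current σE carries essentially all of the source current, which is the behavior the full model resolves spatially.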

  15. On the effect of using the Shapiro filter to smooth winds on a sphere

    NASA Technical Reports Server (NTRS)

    Takacs, L. L.; Balgovind, R. C.

    1984-01-01

Spatial differencing schemes that are neither enstrophy conserving nor implicitly damping require global filtering of short waves to eliminate the build-up of energy in the shortest wavelengths due to aliasing. Takacs and Balgovind (1983) have shown that filtering on a sphere with a latitude-dependent damping function will cause spurious vorticity and divergence source terms to occur if care is not taken to ensure the irrotationality of the gradients of the stream function and velocity potential. Using a shallow water model with fourth-order energy-conserving spatial differencing, it is found that using a 16th-order Shapiro (1979) filter on the winds and heights to control nonlinear instability also creates spurious source terms when the winds are filtered in the meridional direction.
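
    A Shapiro filter of order 2n amounts to n repeated second differences. This periodic 1-D sketch (not the spherical, latitude-dependent case studied in the paper) shows why the 16th-order filter (n = 8) is attractive: it removes the 2Δx wave exactly while leaving well-resolved waves essentially untouched:

```python
import numpy as np

def shapiro(u, n):
    """Order-2n Shapiro filter on a periodic 1-D grid.

    Spectral response is 1 - sin^(2n)(k*dx/2): exactly zero for the 2*dx
    wave, and very close to one for long, well-resolved waves.
    """
    d = u.copy()
    for _ in range(n):
        d = np.roll(d, -1) - 2.0 * d + np.roll(d, 1)   # second difference
    return u - (-1.0) ** n * d / 4.0 ** n

x = np.arange(64)
two_dx_wave = (-1.0) ** x                  # shortest resolvable wave
long_wave = np.sin(2.0 * np.pi * x / 64)   # well-resolved wave
print(np.abs(shapiro(two_dx_wave, 8)).max())          # removed entirely
print(np.abs(shapiro(long_wave, 8) - long_wave).max())  # nearly unchanged
```

    The spurious source terms discussed above arise not from this 1-D response but from applying such a filter component-wise to vector winds on the sphere.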

  16. Numerical modeling of materials processing applications of a pulsed cold cathode electron gun

    NASA Astrophysics Data System (ADS)

    Etcheverry, J. I.; Martínez, O. E.; Mingolo, N.

    1998-04-01

    A numerical study of the application of a pulsed cold cathode electron gun to materials processing is performed. A simple semiempirical model of the discharge is used, together with backscattering and energy deposition profiles obtained by a Monte Carlo technique, in order to evaluate the energy source term inside the material. The numerical computation of the heat equation with the calculated source term is performed in order to obtain useful information on melting and vaporization thresholds, melted radius and depth, and on the dependence of these variables on processing parameters such as operating pressure, initial voltage of the discharge and cathode-sample distance. Numerical results for stainless steel are presented, which demonstrate the need for several modifications of the experimental design in order to achieve a better efficiency.

  17. Translation invariant time-dependent massive gravity: Hamiltonian analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Mourad, Jihad; Steer, Danièle A.; Noui, Karim, E-mail: mourad@apc.univ-paris7.fr, E-mail: steer@apc.univ-paris7.fr, E-mail: karim.noui@lmpt.univ-tours.fr

    2014-09-01

    The canonical structure of the massive gravity in the first order moving frame formalism is studied. We work in the simplified context of translation invariant fields, with mass terms given by general non-derivative interactions, invariant under the diagonal Lorentz group, depending on the moving frame as well as a fixed reference frame. We prove that the only mass terms which give 5 propagating degrees of freedom are the dRGT mass terms, namely those which are linear in the lapse. We also complete the Hamiltonian analysis with the dynamical evolution of the system.

  18. Bayesian Inference for Time Trends in Parameter Values using Weighted Evidence Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. L. Kelly; A. Malkhasyan

    2010-09-01

There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework were noted as important issues in NUREG/CR-6813, performed for the U. S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates an approach to incorporating multiple sources of data via applicability weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.
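
    A minimal version of time-trend Bayesian inference via MCMC can be sketched with a random-walk Metropolis sampler for a loglinear Poisson trend, k_t ~ Poisson(exp(a + b·t)). The synthetic data, flat priors, and tuning below are purely illustrative, not the Ageing PSA Network application:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(20.0)
counts = rng.poisson(np.exp(0.5 + 0.15 * years))   # synthetic trending counts

def log_post(a, b):
    """Log-posterior with flat priors: Poisson log-likelihood up to a constant."""
    lam = np.exp(a + b * years)
    return float(np.sum(counts * (a + b * years) - lam))

theta = np.array([0.0, 0.0])
lp = log_post(*theta)
samples = []
for i in range(5000):
    prop = theta + 0.05 * rng.standard_normal(2)   # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:        # Metropolis accept/reject
        theta, lp = prop, lp_prop
    if i >= 2000:                                  # discard burn-in
        samples.append(theta.copy())

b_mean = np.mean([s[1] for s in samples])
print(b_mean)   # posterior mean trend; positive for this upward-trending data
```

    A piecewise-constant analysis of the same counts would average the early and late years together; the trend parameter b is what the time-dependent framework recovers and the constant-parameter assumption masks.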

  19. Bayesian Inference for Time Trends in Parameter Values: Case Study for the Ageing PSA Network of the European Commission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana L. Kelly; Albert Malkhasyan

    2010-06-01

There is a nearly ubiquitous assumption in PSA that parameter values are at least piecewise-constant in time. As a result, Bayesian inference tends to incorporate many years of plant operation, over which there have been significant changes in plant operational and maintenance practices, plant management, etc. These changes can cause significant changes in parameter values over time; however, failure to perform Bayesian inference in the proper time-dependent framework can mask these changes. Failure to question the assumption of constant parameter values, and failure to perform Bayesian inference in the proper time-dependent framework were noted as important issues in NUREG/CR-6813, performed for the U. S. Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards in 2003. That report noted that “industry lacks tools to perform time-trend analysis with Bayesian updating.” This paper describes an application of time-dependent Bayesian inference methods developed for the European Commission Ageing PSA Network. These methods utilize open-source software, implementing Markov chain Monte Carlo sampling. The paper also illustrates the development of a generic prior distribution, which incorporates multiple sources of generic data via weighting factors that address differences in key influences, such as vendor, component boundaries, conditions of the operating environment, etc.

  20. Directive sources in acoustic discrete-time domain simulations based on directivity diagrams.

    PubMed

    Escolano, José; López, José J; Pueo, Basilio

    2007-06-01

    Discrete-time domain methods provide a simple and flexible way to solve initial boundary value problems. With regard to the sources in such methods, only monopoles or dipoles can be considered. However, in many problems such as room acoustics, the radiation of realistic sources is directional-dependent and their directivity patterns have a clear influence on the total sound field. In this letter, a method to synthesize the directivity of sources is proposed, especially in cases where the knowledge is only based on discrete values of the directivity diagram. Some examples have been carried out in order to show the behavior and accuracy of the proposed method.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.

Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. Here we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, and they solely depend on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with highest relative approximation error less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.
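
    The early-time/late-time split can be illustrated with the classical slab uptake solution (not the paper's fitted polynomial coefficients): near the quoted switchover window the square-root-of-time approximation and the exponential series agree closely. Here tau = D·t/L² is dimensionless time for a slab of half-thickness L:

```python
from math import exp, pi, sqrt

def late_time(tau, terms=200):
    """Exact exponential-series uptake fraction for a slab (Crank-type series)."""
    s = sum(exp(-(2 * n + 1) ** 2 * pi ** 2 * tau / 4.0) / (2 * n + 1) ** 2
            for n in range(terms))
    return 1.0 - (8.0 / pi ** 2) * s

def early_time(tau):
    """Leading square-root-of-time approximation, valid for small tau."""
    return 2.0 * sqrt(tau / pi)

tau_switch = 0.2   # inside the 0.157-0.229 switchover window quoted above
print(late_time(tau_switch), early_time(tau_switch))
```

    At tau = 0.2 the two expressions differ by well under 1%, so a single continuous composite of the early-time polynomial and a one-term late-time exponential covers the whole time range, which is the design of the unified-form solutions.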

  2. Long-term trends in California mobile source emissions and ambient concentrations of black carbon and organic aerosol.

    PubMed

    McDonald, Brian C; Goldstein, Allen H; Harley, Robert A

    2015-04-21

    A fuel-based approach is used to assess long-term trends (1970-2010) in mobile source emissions of black carbon (BC) and organic aerosol (OA, including both primary emissions and secondary formation). The main focus of this analysis is the Los Angeles Basin, where a long record of measurements is available to infer trends in ambient concentrations of BC and organic carbon (OC), with OC used here as a proxy for OA. Mobile source emissions and ambient concentrations have decreased similarly, reflecting the importance of on- and off-road engines as sources of BC and OA in urban areas. In 1970, the on-road sector accounted for ∼90% of total mobile source emissions of BC and OA (primary + secondary). Over time, as on-road engine emissions have been controlled, the relative importance of off-road sources has grown. By 2010, off-road engines were estimated to account for 37 ± 20% and 45 ± 16% of total mobile source contributions to BC and OA, respectively, in the Los Angeles area. This study highlights both the success of efforts to control on-road emission sources, and the importance of considering off-road engine and other VOC source contributions when assessing long-term emission and ambient air quality trends.

  3. Current Knowledge and Priorities for Future Research in Late Effects after Hematopoietic Stem Cell Transplantation (HCT) for Severe Combined Immunodeficiency (SCID) Patients: a Consensus Statement from the Second Pediatric Blood and Marrow Transplant Consortium International Conference on Late Effects after Pediatric HCT

    PubMed Central

    Heimall, J; Puck, J; Buckley, R H; Fleisher, T A; Gennery, A R; Neven, B; Slatter, M; Haddad, E; Notarangelo, L; Baker, KS; Dietz, A C; Duncan, C; Pulsipher, M A; Cowan, MJ

    2017-01-01

    Severe Combined Immunodeficiency (SCID) is one of the most common indications for pediatric hematopoietic cell transplantation (HCT) in patients with primary immunodeficiency (PID). Historically, SCID was diagnosed in infants who presented with opportunistic infections within the first year of life. With newborn screening (NBS) for SCID in most of the U.S., the majority of infants with SCID are now diagnosed and treated in the first 3.5 months of life, although in the rest of the world, the lack of NBS means that most infants with SCID still present with infections. The average survival for transplanted SCID patients currently is >70% at 3 years post-transplant, although this can vary significantly based on multiple factors including age and infection status at the time of transplantation, type of donor source utilized, manipulation of graft prior to transplant, GVHD prophylaxis, type of conditioning (if any) utilized and underlying genotype of SCID. In at least one study of SCID patients who received no conditioning, long-term survival was 77% at 8.7 years (range out to 26 years) post-transplantation. While a majority of patients with SCID will engraft T cells without any conditioning therapy, depending on genotype, donor source, HLA match and presence of circulating maternal cells a sizable percentage of these will fail to achieve full immune reconstitution. Without conditioning, T cell reconstitution typically occurs, although not always fully, while B cell engraftment does not—leaving some molecular types of SCID patients with intrinsically defective B cells in most cases dependent on regular infusions of immunoglobulin. Because of this, many centers have used conditioning with alkylating agents including busulfan or melphalan known to open marrow niches in attempts to achieve B cell reconstitution. Thus, it is imperative that we understand the potential late effects of these agents in this patient population. 
There are also non-immunologic risks associated with HCT for SCID that appear to be dependent upon the genotype of the patient. In this report we have evaluated the published data on late effects and attempted to summarize the known risks associated with conditioning and alternative donor sources. These data, while informative, are also a clear demonstration that there is still much to be learned from the SCID population in terms of their post-HCT outcomes. This paper will summarize current findings and recommend further research in areas considered high priority. Specific guidelines regarding a recommended approach to long-term follow up, including laboratory and clinical monitoring will be forthcoming in a subsequent paper. PMID:28068510

  4. Seismic hazard assessment of the Province of Murcia (SE Spain): analysis of source contribution to hazard

    NASA Astrophysics Data System (ADS)

    García-Mayordomo, J.; Gaspar-Escribano, J. M.; Benito, B.

    2007-10-01

A probabilistic seismic hazard assessment of the Province of Murcia in terms of peak ground acceleration (PGA) and spectral accelerations [SA(T)] is presented in this paper. In contrast to most of the previous studies in the region, which were performed for PGA making use of intensity-to-PGA relationships, hazard is here calculated in terms of magnitude and using European spectral ground-motion models. Moreover, we have considered the most important faults in the region as specific seismic sources, and also comprehensively reviewed the earthquake catalogue. Hazard calculations are performed following the Probabilistic Seismic Hazard Assessment (PSHA) methodology using a logic tree, which accounts for three different seismic source zonings and three different ground-motion models. Hazard maps in terms of PGA and SA(0.1, 0.2, 0.5, 1.0 and 2.0 s) and coefficient of variation (COV) for the 475-year return period are shown. Subsequent analysis is focused on three sites of the province, namely, the cities of Murcia, Lorca and Cartagena, which are important industrial and tourism centres. Results at these sites have been analysed to evaluate the influence of the different input options. The most important factor affecting the results is the choice of the attenuation relationship, whereas the influence of the selected seismic source zonings appears strongly site dependent. Finally, we have performed an analysis of source contribution to hazard at each of these cities to provide preliminary guidance in devising specific risk scenarios. We have found that local source zones control the hazard for PGA and SA(T ≤ 1.0 s), although contribution from specific fault sources and long-distance north Algerian sources becomes significant from SA(0.5 s) onwards.
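
    The 475-year return period and the logic-tree combination can be sketched as follows; the three branch hazard curves and their weights are hypothetical power laws, not the Murcia model:

```python
import numpy as np

# 10% probability of exceedance in 50 years under a Poisson model:
# P = 1 - exp(-T / RP)  =>  RP = -T / ln(1 - P)
rp = -50.0 / np.log(1.0 - 0.10)          # ~475-year return period

pga = np.logspace(-2, 0.5, 200)          # peak ground acceleration grid (g)
# Three hypothetical attenuation-branch hazard curves (annual exceedance rates)
branches = [0.01 * pga ** -1.8, 0.008 * pga ** -2.0, 0.012 * pga ** -1.6]
weights = [0.3, 0.4, 0.3]                # logic-tree branch weights (sum to 1)
rate = sum(w * b for w, b in zip(weights, branches))

# PGA whose weighted-mean exceedance rate matches the 475-year target
# (np.interp needs increasing abscissae, so reverse the decreasing curve)
pga475 = np.interp(1.0 / rp, rate[::-1], pga[::-1])
print(round(rp), pga475)
```

    The weighted mean over branches is the standard way a logic tree propagates ground-motion-model uncertainty into a single hazard curve; the COV maps in the paper quantify the spread the mean hides.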

  5. SPECTRAL SURVEY OF X-RAY BRIGHT ACTIVE GALACTIC NUCLEI FROM THE ROSSI X-RAY TIMING EXPLORER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivers, Elizabeth; Markowitz, Alex; Rothschild, Richard, E-mail: erivers@ucsd.edu

    2011-03-15

Using long-term monitoring data from the Rossi X-ray Timing Explorer (RXTE), we have selected 23 active galactic nuclei (AGNs) with sufficient brightness and overall observation time to derive broadband X-ray spectra from 3 to ≳100 keV. Our sample includes mainly radio-quiet Seyferts, as well as seven radio-loud sources. Given the longevity of the RXTE mission, the greater part of our data is spread out over more than a decade, providing truly long-term average spectra and eliminating inconsistencies arising from variability. We present long-term average values of absorption, Fe line parameters, Compton reflection strengths, and photon indices, as well as fluxes and luminosities for the hard and very hard energy bands, 2-10 keV and 20-100 keV, respectively. We find tentative evidence for high-energy rollovers in three of our objects. We improve upon previous surveys of the very hard X-ray energy band in terms of accuracy and sensitivity, particularly with respect to confirming and quantifying the Compton reflection component. This survey is meant to provide a baseline for future analysis with respect to the long-term averages for these sources and to cement the legacy of RXTE, and especially its High Energy X-ray Timing Experiment, as a contributor to AGN spectral science.

  6. Experimental study on the measurement of uranium casting enrichment by time-dependent coincidence method

    NASA Astrophysics Data System (ADS)

    Xie, Wen-Xiong; Li, Jian-Sheng; Gong, Jian; Zhu, Jian-Yu; Huang, Po

    2013-10-01

Based on the time-dependent coincidence method, a preliminary experiment has been performed on uranium metal castings with similar quality (about 8-10 kg) and shape (hemispherical shell) in different enrichments, using neutrons from a Cf fast fission chamber and a timing DT accelerator. Groups of related parameters can be obtained by analyzing the features of time-dependent coincidence counts between source-detector and two detectors to characterize the fission signal. These parameters have high sensitivity to the enrichment; the sensitivity coefficient (defined as (ΔR/Δm)/R̄) can reach 19.3% per kg of 235U. These results show that uranium castings of different enrichments can be distinguished, supporting nuclear weapon verification.

  7. Reaction front barriers in time aperiodic fluid flows

    NASA Astrophysics Data System (ADS)

    Locke, Rory; Mitchell, Kevin

    2016-11-01

    Many chemical and biological systems can be characterized by the propagation of a front that separates different phases or species. One approach to formalizing a general theory is to apply frameworks developed in nonlinear dynamics. It has been shown that invariant manifolds form barriers to passive transport in time-dependent or time-periodic fluid flows. More recently, analogous manifolds, termed burning invariant manifolds (BIMs), have been shown to form one-sided barriers to reaction fronts in advection-reaction-diffusion (ARD) systems. To model more realistic time-aperiodic systems, recent theoretical work has suggested that similar one-sided barriers, termed burning Lagrangian coherent structures (bLCSs), exist for fluid velocity data prescribed over a finite time interval. In this presentation, we use a stochastic "wind" to generate time dependence in a double-vortex channel flow and demonstrate that the (locally) most attracting or repelling curves are the bLCSs.

  8. Exact Magnetic Diffusion Solutions for Magnetohydrodynamic Code Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D S

    In this paper, the authors present several new exact analytic space- and time-dependent solutions to the problem of magnetic diffusion in R-Z geometry. These problems serve to verify several different elements of an MHD implementation: magnetic diffusion, external circuit time integration, current and voltage energy sources, spatially dependent conductivities, and ohmic heating. The exact solutions are shown in comparison with 2D simulation results from the Ares code.

  9. Creep and shrinkage effects on integral abutment bridges

    NASA Astrophysics Data System (ADS)

    Munuswamy, Sivakumar

    Integral abutment bridges provide bridge engineers an economical design alternative to traditional bridges with expansion joints, owing to the elimination of expensive joint installation and reduced maintenance costs. The superstructure of integral abutment bridges is cast integrally with the abutments. Time-dependent effects of creep, shrinkage of concrete, relaxation of prestressing steel, temperature gradients, restraints provided by the abutment foundation and backfill, and the statical indeterminacy of the structure introduce time-dependent variations in the redundant forces. An analytical model and numerical procedure to predict the instantaneous linear behavior and nonlinear time-dependent long-term behavior of continuous composite superstructures are developed, in which the redundant forces in the integral abutment bridges are derived considering the time-dependent effects. The redistribution of moments due to time-dependent effects has been considered in the analysis. The analysis includes nonlinearity due to cracking of the concrete, as well as the time-dependent deformations. American Concrete Institute (ACI) and American Association of State Highway and Transportation Officials (AASHTO) models for creep and shrinkage are considered in modeling the time-dependent material behavior. The variations in the material properties of the cross-section corresponding to the constituent materials are incorporated, and the age-adjusted effective modulus method with a relaxation procedure is followed to include the creep behavior of concrete. The partial restraint provided by the abutment-pile-soil system is modeled using discrete spring stiffnesses for the translational and rotational degrees of freedom. Numerical simulation of the behavior is carried out on continuous composite integral abutment bridges, and the deformations and stresses due to time-dependent effects under typical sustained loads are computed.
The results from the analytical model are compared with published laboratory experimental and field data. The behavior of the laterally loaded piles supporting the integral abutments is evaluated and presented in terms of the lateral deflection, bending moment, shear force and stress along the pile depth.
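
The age-adjusted effective modulus method mentioned above replaces the elastic modulus with E* = E(t0)/(1 + χφ), where φ is the creep coefficient and χ the aging coefficient; a minimal sketch with illustrative values (not the paper's actual ACI/AASHTO parameters):

```python
def age_adjusted_modulus(E0, phi, chi=0.8):
    """Age-adjusted effective modulus E* = E(t0) / (1 + chi * phi),
    where phi is the creep coefficient and chi the aging coefficient
    (typically in the range 0.6-0.9)."""
    return E0 / (1.0 + chi * phi)

# Example: 28-day modulus of 30 GPa, creep coefficient 2.0.
E_star = age_adjusted_modulus(30e9, 2.0, 0.8)
```

Using E* in place of E0 lets a linear-elastic analysis approximate the stress relaxation caused by creep under gradually applied restraint forces.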

  10. Nonparametric Stochastic Model for Uncertainty Quantification of Short-term Wind Speed Forecasts

    NASA Astrophysics Data System (ADS)

    AL-Shehhi, A. M.; Chaouch, M.; Ouarda, T.

    2014-12-01

    Wind energy is increasing in importance as a renewable energy source due to its potential role in reducing carbon emissions. It is a safe, clean, and inexhaustible source of energy. The amount of wind energy generated by wind turbines is closely related to the wind speed. Wind speed forecasting plays a vital role in the wind energy sector in terms of wind turbine optimal operation, wind energy dispatch and scheduling, efficient energy harvesting, etc. It is also considered during the planning, design, and assessment of any proposed wind project. Therefore, accurate prediction of wind speed carries particular importance and plays a significant role in the wind industry. Many methods have been proposed in the literature for short-term wind speed forecasting. These methods are usually based on modeling historical fixed time intervals of the wind speed data and using them for future prediction. The methods mainly include statistical models such as ARMA and ARIMA models, physical models such as numerical weather prediction, and artificial intelligence techniques such as support vector machines and neural networks. In this paper, we are interested in estimating hourly wind speed measures in the United Arab Emirates (UAE). More precisely, we predict hourly wind speed using a nonparametric kernel estimation of the regression and volatility functions of a nonlinear autoregressive model with an ARCH component, which includes the unknown nonlinear regression and volatility functions already discussed in the literature. The unknown nonlinear regression function describes the dependence between the value of the wind speed at time t and its historical data at times t - 1, t - 2, …, t - d. This function plays a key role in predicting the hourly wind speed process. The volatility function, i.e., the conditional variance given the past, measures the risk associated with this prediction.
Since the regression and volatility functions are assumed unknown, they are estimated using nonparametric kernel methods. In addition to the pointwise hourly wind speed forecasts, a confidence interval is also provided, which allows one to quantify the uncertainty around the forecasts.
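
A nonparametric kernel estimate of the regression and volatility functions can be sketched with a Nadaraya-Watson estimator; this lag-1, Gaussian-kernel toy version with an ad hoc bandwidth is an illustration of the general technique, not the authors' implementation:

```python
import numpy as np

def nw_forecast(series, x, h):
    """Nadaraya-Watson estimates of m(x) = E[X_t | X_{t-1} = x] and of the
    conditional variance, from a univariate series, Gaussian kernel, bandwidth h."""
    past, future = np.asarray(series[:-1]), np.asarray(series[1:])
    w = np.exp(-0.5 * ((past - x) / h) ** 2)        # kernel weights
    m = np.sum(w * future) / np.sum(w)              # regression function
    v = np.sum(w * (future - m) ** 2) / np.sum(w)   # volatility (ARCH-type risk)
    return m, v

# Toy AR(1)-like series standing in for hourly wind speeds; h chosen ad hoc.
rng = np.random.default_rng(0)
x = [0.0]
for _ in range(500):
    x.append(0.7 * x[-1] + rng.normal(scale=0.5))
m_hat, v_hat = nw_forecast(x, 1.0, h=0.3)
```

The volatility estimate v_hat is what turns the point forecast m_hat into a confidence interval, e.g. m_hat ± 1.96·sqrt(v_hat) under a Gaussian assumption.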

  11. Multiwavelength Challenges in the Fermi Era

    NASA Technical Reports Server (NTRS)

    Thompson, D. J.

    2010-01-01

    The gamma-ray surveys of the sky by AGILE and the Fermi Gamma-ray Space Telescope offer both opportunities and challenges for multiwavelength and multi-messenger studies. Gamma-ray bursts, pulsars, binary sources, flaring Active Galactic Nuclei, and Galactic transient sources are all phenomena that can best be studied with a wide variety of instruments simultaneously or contemporaneously. From the gamma-ray side, a principal challenge is the latency from the time of an astrophysical event to the recognition of this event in the data. Obtaining quick and complete multiwavelength coverage of gamma-ray sources of interest can be difficult both in terms of logistics and in terms of generating scientific interest.

  12. Space-based measurements of elemental abundances and their relation to solar abundances

    NASA Technical Reports Server (NTRS)

    Coplan, M. A.; Ogilvie, K. W.; Bochsler, P.; Geiss, J.

    1990-01-01

    The Ion Composition Instrument (ICI) aboard the ISEE-3/ICE spacecraft was in the solar wind continuously from August 1978 to December 1982. The results made it possible to establish long-term average solar wind abundance values for helium, oxygen, neon, silicon, and iron. The Charge-Energy-Mass instrument aboard the CCE spacecraft of the AMPTE mission has measured the abundance of these elements in the magnetosheath and has also added carbon, nitrogen, magnesium, and sulfur to the list. There is strong evidence that these magnetosheath abundances are representative of the solar wind. Other sources of solar wind abundances are Solar Energetic Particle experiments and Apollo lunar foils. When comparing the abundances from all of these sources with photospheric abundances, it is clear that helium is depleted in the solar wind while silicon and iron are enhanced. Solar wind abundances for carbon, nitrogen, oxygen, and neon correlate well with the photospheric values. The incorporation of minor ions into the solar wind appears to depend upon both the ionization times for the elements and the Coulomb drag exerted by the outflowing proton flux.

  13. Turbulent mass inhomogeneities induced by a point-source

    NASA Astrophysics Data System (ADS)

    Thalabard, Simon

    2018-03-01

    We describe how turbulence distributes tracers away from a localized source of injection, and analyze how the spatial inhomogeneities of the concentration field depend on the amount of randomness in the injection mechanism. For that purpose, we contrast the mass correlations induced by purely random injections with those induced by continuous injections in the environment. Using the Kraichnan model of turbulent advection, whereby the underlying velocity field is assumed to be short-correlated in time, we explicitly identify scaling regions for the statistics of the mass contained within a shell of radius r and located at a distance ρ away from the source. The two key parameters are found to be (i) the ratio s² between the absolute and the relative timescales of dispersion and (ii) the ratio Λ between the size of the cloud and its distance away from the source. When the injection is random, only the former is relevant, as previously shown by Celani et al (2007 J. Fluid Mech. 583 189–98) in the case of an incompressible fluid. It is argued that the space partition in terms of s² and Λ is a robust feature of the injection mechanism itself, which should remain relevant beyond the Kraichnan model. This is for instance the case in a generalized version of the model, where the absolute dispersion is prescribed to be ballistic rather than diffusive.

  14. QCD sum rules study of meson-baryon sigma terms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erkol, Gueray; Oka, Makoto; Turan, Guersevil

    2008-11-01

    The pion-baryon sigma terms and the strange-quark condensates of the octet and the decuplet baryons are calculated by employing the method of QCD sum rules. We evaluate the vacuum-to-vacuum transition matrix elements of two baryon interpolating fields in an external isoscalar-scalar field and use a Monte Carlo-based approach to systematically analyze the sum rules and the uncertainties in the results. We extract the ratios of the sigma terms, which have rather high accuracy and minimal dependence on QCD parameters. We discuss the sources of uncertainties and comment on possible strangeness content of the nucleon and the Delta.

  15. Long-Term Spectral and Timing Behavior of the Black Hole Candidate XTE J1908+094

    NASA Technical Reports Server (NTRS)

    Gogus, Ersin; Finger, Mark H.; Kouveliotou, Chryssa; Woods, Peter M.; Patel, Sandeep K.; Ruppen, Michael; Swank, Jean H.; Markwardt, Craig B.; VanDerKlis, Michiel

    2004-01-01

    We present the long-term X-ray light curves and detailed spectral and timing analyses of XTE J1908+094 using Rossi X-Ray Timing Explorer Proportional Counter Array observations covering two outbursts in 2002 and early 2003. At the onset of the first outburst, the source was found in a spectrally low/hard state lasting for approx. 40 days, followed by a 3 day long transition to the high/soft state. The source flux (in 2-10 keV) reached approx. 100 mCrab on 2002 April 6, then decayed rapidly. In power spectra, we detect strong band-limited noise and varying low-frequency quasi-periodic oscillations that evolved from approx. 0.5 to approx. 5 Hz during the initial low/hard state of the source. We find that the second outburst closely resembled the spectral evolution of the first. The X-ray transient's overall outburst characteristics led us to classify XTE J1908+094 as a black hole candidate. Here we also derive a precise X-ray position of the source using Chandra observations that were performed during the decay phase of the first outburst and following the second outburst.

  16. The Combined Effect of Periodic Signals and Noise on the Dilution of Precision of GNSS Station Velocity Uncertainties

    NASA Astrophysics Data System (ADS)

    Klos, Anna; Olivares, German; Teferle, Felix Norman; Bogusz, Janusz

    2016-04-01

    Station velocity uncertainties determined from a series of Global Navigation Satellite System (GNSS) position estimates depend on both the deterministic and stochastic models applied to the time series. While the deterministic model generally includes parameters for a linear trend and several periodic terms, the stochastic model is a representation of the noise character of the time series in the form of a power-law process. For both of these models the optimal choice may vary from one time series to another, while the models also depend, to some degree, on each other. In the past, various power-law processes have been shown to fit such time series, and the sources of the apparent temporally-correlated noise were attributed to, for example, mismodelling of satellite orbits, antenna phase centre variations, the troposphere, Earth Orientation Parameters, mass loading effects and monument instabilities. Blewitt and Lavallée (2002) demonstrated how improperly modelled seasonal signals affected the estimates of station velocity uncertainties. However, in their study they assumed that the time series followed a white noise process, with no consideration of additional temporally-correlated noise. Bos et al. (2010) empirically showed, for a small number of stations, that the noise character was much more important for the reliable estimation of station velocity uncertainties than the seasonal signals. In this presentation we pick up from Blewitt and Lavallée (2002) and Bos et al. (2010), and have derived formulas for the computation of the General Dilution of Precision (GDP) in the presence of periodic signals and temporally-correlated noise in the time series.
We show, based on simulated and real time series from globally distributed IGS (International GNSS Service) stations processed by the Jet Propulsion Laboratory (JPL), that periodic signals dominate the effect on the velocity uncertainties at short time scales, while beyond four years the type of noise becomes much more important. In other words, for time series long enough, the assumed periodic signals do not affect the velocity uncertainties as much as the assumed noise model. We calculated the GDP as the ratio between two velocity errors: without and with the inclusion of seasonal terms with periods of one year and its overtones up to the 3rd. To all these cases, power-law processes of white, flicker and random-walk noise were added separately. A few oscillations in GDP can be noticed at integer years, which arise from the periodic terms added. Their amplitudes in GDP increase with increasing spectral index. Strong oscillation peaks in GDP appear at short time scales, especially for random-walk processes. This means that poorly monumented stations are affected the most. Local minima and maxima in GDP are also enlarged as the noise approaches random walk. We noticed that the semi-annual signal increased the local GDP minimum for white noise. This suggests that adding power-law noise to a deterministic model with an annual term, or adding a semi-annual term to white noise, causes increased velocity uncertainty even at the points where the determined velocity is not biased.
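
The GDP described above is a ratio of formal velocity errors from two deterministic models; a minimal sketch for the simplest case of unit white noise (the power-law noise cases need a full covariance matrix and are omitted here):

```python
import numpy as np

def velocity_sigma(t, cols):
    """Formal velocity std from the least-squares covariance, assuming
    unit white noise. The velocity is the parameter of the 2nd column."""
    A = np.column_stack(cols)
    cov = np.linalg.inv(A.T @ A)
    return np.sqrt(cov[1, 1])

t = np.arange(0, 1.5, 1 / 365.25)        # 1.5 years of daily positions (yr)
base = [np.ones_like(t), t]              # offset + linear trend
seas = base + [np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)]  # + annual term
gdp = velocity_sigma(t, seas) / velocity_sigma(t, base)
```

Because the annual sinusoids are not orthogonal to the trend over a non-integer number of years, the ratio exceeds one, which is why the oscillations in GDP peak away from integer-year spans.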

  17. A modification of the Regional Nutrient Management model (ReNuMa) to identify long-term changes in riverine nitrogen sources

    NASA Astrophysics Data System (ADS)

    Hu, Minpeng; Liu, Yanmei; Wang, Jiahui; Dahlgren, Randy A.; Chen, Dingjiang

    2018-06-01

    Source apportionment is critical for guiding development of efficient watershed nitrogen (N) pollution control measures. The ReNuMa (Regional Nutrient Management) model, a semi-empirical, semi-process-oriented model with modest data requirements, has been widely used for riverine N source apportionment. However, the ReNuMa model contains limitations for addressing long-term N dynamics by ignoring temporal changes in atmospheric N deposition rates and N-leaching lag effects. This work modified the ReNuMa model by revising the source code to allow yearly changes in atmospheric N deposition and incorporation of N-leaching lag effects into N transport processes. The appropriate N-leaching lag time was determined from cross-correlation analysis between annual watershed individual N source inputs and riverine N export. Accuracy of the modified ReNuMa model was demonstrated through analysis of a 31-year water quality record (1980-2010) from the Yongan watershed in eastern China. The revisions considerably improved the accuracy (Nash-Sutcliffe coefficient increased by ∼0.2) of the modified ReNuMa model for predicting riverine N loads. The modified model explicitly identified annual and seasonal changes in contributions of various N sources (i.e., point vs. nonpoint source, surface runoff vs. groundwater) to riverine N loads as well as the fate of watershed anthropogenic N inputs. Model results were consistent with previously modeled or observed lag time length as well as changes in riverine chloride and nitrate concentrations during the low-flow regime and available N levels in agricultural soils of this watershed. The modified ReNuMa model is applicable for addressing long-term changes in riverine N sources, providing decision-makers with critical information for guiding watershed N pollution control strategies.
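
The Nash-Sutcliffe coefficient used above to quantify the improvement is one minus the ratio of residual variance to observational variance; a minimal sketch with made-up observed/simulated loads:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1 is a perfect fit; 0 means no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative annual N loads (observed vs. simulated).
nse = nash_sutcliffe([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

An increase of ∼0.2 in this statistic is substantial because the denominator is fixed by the observations: the residual sum of squares must shrink by 20% of the total observed variance.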

  18. The study of flow pattern and phase-change problem in die casting process

    NASA Technical Reports Server (NTRS)

    Wang, T. S.; Wei, H.; Chen, Y. S.; Shang, H. M.

    1996-01-01

    The flow pattern and solidification phenomena in the die casting process have been investigated in the first phase of this study. The flow pattern in the filling process is predicted using a VOF (volume of fluid) method. Good agreement with experimental observation is obtained for filling water into a die cavity with different gate geometries and with an obstacle in the cavity. An enthalpy method has been applied to solve the solidification problem. By treating the latent heat implicitly within the enthalpy instead of explicitly as a source term, the CPU time can be reduced at least 20-fold. The effect of material properties on solidification fronts is tested; the results indicate that the dependence of properties on temperature is significant. The influence of natural convection relative to diffusion has also been studied. The results show that liquid metal solidification is diffusion-dominant, while natural convection can affect the shape of the interface. In the second phase of the study, the filling and solidification processes will be considered simultaneously.
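
The idea of carrying latent heat implicitly in the enthalpy, rather than as an explicit source term, can be sketched in one dimension; parameters and geometry here are illustrative, not the paper's die-casting setup:

```python
import numpy as np

# Minimal 1-D explicit enthalpy-method step. Latent heat is carried
# implicitly in the enthalpy H, so no separate source term is needed
# when a cell crosses the melting point.
def temperature(H, c=1.0, L=10.0, Tm=0.0):
    """Invert H = c*T (solid), H = c*Tm..c*Tm+L (mushy), H = c*T + L (liquid)."""
    return np.where(H < c * Tm, H / c,                     # solid
           np.where(H > c * Tm + L, (H - L) / c, Tm))      # liquid / mushy at Tm

def step(H, k=1.0, dx=0.1, dt=1e-3):
    """One explicit diffusion step on the enthalpy field (fixed boundaries)."""
    T = temperature(H)
    H = H.copy()
    H[1:-1] += dt * k * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    return H

H = np.full(50, 15.0)   # start fully liquid (H above c*Tm + L)
H[0] = -5.0             # chilled boundary drives solidification inward
for _ in range(200):
    H = step(H)
```

Cells inside the plateau c·Tm ≤ H ≤ c·Tm + L sit at the melting temperature while they absorb or release latent heat, so the phase front is tracked for free, with no iteration on a source term.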

  19. Operational Earthquake Forecasting: Proposed Guidelines for Implementation (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2010-12-01

    The goal of operational earthquake forecasting (OEF) is to provide the public with authoritative information about how seismic hazards are changing with time. During periods of high seismic activity, short-term earthquake forecasts based on empirical statistical models can attain nominal probability gains in excess of 100 relative to the long-term forecasts used in probabilistic seismic hazard analysis (PSHA). Prospective experiments are underway by the Collaboratory for the Study of Earthquake Predictability (CSEP) to evaluate the reliability and skill of these seismicity-based forecasts in a variety of tectonic environments. How such information should be used for civil protection is by no means clear, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing formal procedures for OEF in this sort of “low-probability environment.” Nevertheless, the need to move more quickly towards OEF has been underscored by recent experiences, such as the 2009 L’Aquila earthquake sequence and other seismic crises in which an anxious public has been confused by informal, inconsistent earthquake forecasts. Whether scientists like it or not, rising public expectations for real-time information, accelerated by the use of social media, will require civil protection agencies to develop sources of authoritative information about the short-term earthquake probabilities. In this presentation, I will discuss guidelines for the implementation of OEF informed by my experience on the California Earthquake Prediction Evaluation Council, convened by CalEMA, and the International Commission on Earthquake Forecasting, convened by the Italian government following the L’Aquila disaster. 
(a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and timely, and they need to convey the epistemic uncertainties in the operational forecasts. (b) Earthquake probabilities should be based on operationally qualified, regularly updated forecasting systems. All operational procedures should be rigorously reviewed by experts in the creation, delivery, and utility of earthquake forecasts. (c) The quality of all operational models should be evaluated for reliability and skill by retrospective testing, and the models should be under continuous prospective testing in a CSEP-type environment against established long-term forecasts and a wide variety of alternative, time-dependent models. (d) Short-term models used in operational forecasting should be consistent with the long-term forecasts used in PSHA. (e) Alert procedures should be standardized to facilitate decisions at different levels of government and among the public, based in part on objective analysis of costs and benefits. (f) In establishing alert procedures, consideration should also be made of the less tangible aspects of value-of-information, such as gains in psychological preparedness and resilience. Authoritative statements of increased risk, even when the absolute probability is low, can provide a psychological benefit to the public by filling information vacuums that can lead to informal predictions and misinformation.
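
The probability gains discussed above compare short-term and long-term forecast probabilities over the same window; a minimal Poisson sketch with illustrative rates shows how a hundredfold gain can still leave the absolute probability at a few percent:

```python
import math

def prob(rate_per_yr, window_days):
    """Poisson probability of at least one event in a short forecast window."""
    return 1.0 - math.exp(-rate_per_yr * window_days / 365.25)

# Long-term background rate vs. a short-term rate elevated a hundredfold
# (e.g. during an aftershock sequence); numbers are illustrative.
p_long = prob(0.01, 7)    # background: 0.01 events/yr, 7-day window
p_short = prob(1.0, 7)    # elevated: 1.0 events/yr, same window
gain = p_short / p_long
```

Even with a gain near 100, p_short remains below a few percent, which is the "low-probability environment" that makes OEF alert thresholds so difficult to set.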

  20. Study on ion energy distribution in low-frequency oscillation time scale of Hall thrusters

    NASA Astrophysics Data System (ADS)

    Wei, Liqiu; Li, Wenbo; Ding, Yongjie; Han, Liang; Yu, Daren; Cao, Yong

    2017-11-01

    This paper reports on the dynamic characteristics of the ion energy distribution during Hall thruster discharge on the low-frequency oscillation time scale, based on experimental studies and a statistical analysis of the time-varying peak and width of the ion energy distribution and the ratio of high-energy ions during the low-frequency oscillation. The results show that the ion energy distribution exhibits a periodic change during the low-frequency oscillation. Moreover, the variation in the ion energy peak is opposite to that of the discharge current, while the variations in the width of the ion energy distribution and the ratio of high-energy ions are consistent with that of the discharge current. The variation characteristics of the ion density and discharge potential were simulated by one-dimensional hybrid-direct kinetic simulations; the simulation results and analysis indicate that the periodic change in the ion energy distribution during the low-frequency oscillation depends on the relationship between the ionization source term and the discharge potential distribution during ionization in the discharge channel.

  1. Characterizing the spatio-temporal and energy-dependent response of riometer absorption to particle precipitation

    NASA Astrophysics Data System (ADS)

    Kellerman, Adam; Makarevich, Roman; Spanswick, Emma; Donovan, Eric; Shprits, Yuri

    2016-07-01

    Energetic electrons in the tens-of-keV range precipitate into the upper D- and lower E-region ionosphere and are responsible for enhanced ionization. The same particles are important in the inner magnetosphere, as they provide a source of energy for waves and thus relate to relativistic electron enhancements in Earth's radiation belts. In situ observations of plasma populations and waves are usually limited to a single point, which complicates temporal and spatial analysis. Also, the lifespan of satellite missions is often limited to several years, which does not allow one to infer the long-term climatology of particle precipitation, important for ionospheric conditions at high latitudes. Multi-point remote sensing of the ionospheric plasma conditions can provide a global view of both ionospheric and magnetospheric conditions, and the coupling between magnetospheric and ionospheric phenomena can be examined on time scales that allow comprehensive statistical analysis. In this study we utilize multi-point riometer measurements in conjunction with in situ satellite data and physics-based modeling to investigate the spatio-temporal and energy-dependent response of riometer absorption. Quantifying this relationship may be a key to future advancements in our understanding of the complex D-region ionosphere, and may lead to enhanced specification of auroral precipitation both during individual events and over climatological time scales.

  2. Inactivation and injury of total coliform bacteria after primary disinfection of drinking water by TiO2 photocatalysis.

    PubMed

    Rizzo, Luigi

    2009-06-15

    In this study the potential application of TiO2 photocatalysis as a primary disinfection system for drinking water was investigated in terms of coliform bacteria inactivation and injury. The effluent of a biological denitrification unit for nitrate removal from groundwater, which is characterized by high organic matter and bacteria release, was used as model water. The injury inflicted by photocatalysis on coliform bacteria was characterized by means of selective (mEndo) and less selective (mT7) culture media. Different catalyst loadings, as well as photolysis and adsorption effects, were investigated. Photocatalysis was effective in coliform bacteria inactivation (91-99% after 60 min irradiation time, depending on both catalyst loading and the initial density of coliform bacteria detected by mEndo), although total removal was not achieved after 60 min irradiation. The contribution of the adsorption mechanism was significant (60-98% after 60 min, depending on catalyst loading) compared to previous investigations, probably owing to the nature of the source water, rich in particulate organic matter and biofilm. The photocatalysis process did not result in any irreversible injury (98.8% being the highest injury) under the investigated conditions; thus bacteria regrowth may take place under optimal environmental conditions if no final disinfection process (e.g., chlorine or chlorine dioxide) is used.

  3. Kinetic memory based on the enzyme-limited competition.

    PubMed

    Hatakeyama, Tetsuhiro S; Kaneko, Kunihiko

    2014-08-01

    Cellular memory, which allows cells to retain information from their environment, is important for a variety of cellular functions, such as adaptation to external stimuli, cell differentiation, and synaptic plasticity. Although posttranslational modifications have received much attention as a source of cellular memory, the mechanisms directing such alterations have not been fully uncovered. It may be possible to embed memory in multiple stable states in dynamical systems governing modifications. However, several experiments on modifications of proteins suggest long-term relaxation depending on experienced external conditions, without explicit switches over multi-stable states. As an alternative to a multistability memory scheme, we propose "kinetic memory" for epigenetic cellular memory, in which memory is stored as a slow-relaxation process far from a stable fixed state. Information from previous environmental exposure is retained as the long-term maintenance of a cellular state, rather than switches over fixed states. To demonstrate this kinetic memory, we study several models in which multimeric proteins undergo catalytic modifications (e.g., phosphorylation and methylation), and find that a slow relaxation process of the modification state, logarithmic in time, appears when the concentration of a catalyst (enzyme) involved in the modification reactions is lower than that of the substrates. Sharp transitions from a normal fast-relaxation phase into this slow-relaxation phase are revealed, and explained by enzyme-limited competition among modification reactions. The slow-relaxation process is confirmed by simulations of several models of catalytic reactions of protein modifications, and it enables the memorization of external stimuli, as its time course depends crucially on the history of the stimuli. This kinetic memory provides novel insight into a broad class of cellular memory and functions. 
In particular, applications for long-term potentiation are discussed, including dynamic modifications of calcium-calmodulin kinase II and cAMP-response element-binding protein essential for synaptic plasticity.

  4. Pressure evolution equation for the particulate phase in inhomogeneous compressible disperse multiphase flows

    NASA Astrophysics Data System (ADS)

    Annamalai, Subramanian; Balachandar, S.; Sridharan, P.; Jackson, T. L.

    2017-02-01

    An analytical expression describing the unsteady pressure evolution of the dispersed phase driven by variations in the carrier phase is presented. In this article, the term "dispersed phase" represents rigid particles, droplets, or bubbles. Letting both the dispersed and continuous phases be inhomogeneous, unsteady, and compressible, the developed pressure equation describes the particle response and its eventual equilibration with that of the carrier fluid. The study involves impingement of a plane traveling wave of a given frequency and subsequent volume-averaged particle pressure calculation due to a single wave. The ambient or continuous fluid's pressure and density-weighted normal velocity are identified as the source terms governing the particle pressure. Analogous to the generalized Faxén theorem, which is applicable to the particle equation of motion, the pressure expression is also written in terms of the surface average of time-varying incoming flow properties. The surface average allows the current formulation to be generalized for any complex incident flow, including situations where the particle size is comparable to that of the incoming flow. Further, the particle pressure is also found to depend on the dispersed-to-continuous fluid density ratio and speed of sound ratio in addition to dynamic viscosities of both fluids. The model is applied to predict the unsteady pressure variation inside an aluminum particle subjected to normal shock waves. The results are compared against numerical simulations and found to be in good agreement. Furthermore, it is shown that, although the analysis is conducted in the limit of negligible flow Reynolds and Mach numbers, it can be used to compute the density and volume of the dispersed phase to reasonable accuracy. Finally, analogous to the pressure evolution expression, an equation describing the time-dependent particle radius is deduced and is shown to reduce to the Rayleigh-Plesset equation in the linear limit.

  5. Import and export fluxes of macrozooplankton are taxa- and season-dependent at Jiuduansha marsh, Yangtze River estuary

    NASA Astrophysics Data System (ADS)

    Qin, Haiming; Sheng, Qiang; Chu, Tianjiang; Wang, Sikai; Wu, Jihua

    2015-09-01

    Macrozooplankton may play important roles in nutrient exchange between salt marshes and nearby estuarine ecosystems through predator-prey interactions and their transport by tidal flows. In this study, macrozooplankton transport was investigated through year-round monthly sampling in a salt marsh creek of the Yangtze River estuary. Twenty-one orders of macrozooplankton were captured. Calanoida and Decapoda were dominant, numerically comprising 59.59% and 37.59%, respectively, of the total captured macrozooplankton throughout the year. Decapoda mainly occurred in April, May and June; in the other months, Calanoida contributed over 90% of the total individuals. The annual Ferrari index (I) for the total individual number of macrozooplankton was 0.27, which generally supports the viewpoint that salt marshes are sources of zooplankton. The salt marsh was mainly a source for decapods and mysids, possibly because of larval release in their breeding seasons. The marsh was also a source for amphipods, probably because some benthic forms became transient planktonic forms during tidal flushing. Copepods and fish larvae exhibited net import into the salt marsh, which may result from predation by salt marsh settlers or from retention in the marsh. Monthly Ferrari index estimations revealed that the role of the salt marsh as a sink or source of macrozooplankton was time-dependent, reflecting the life histories of the animals. This study showed that whether salt marsh zooplankton act as energy importers or exporters is both taxa-dependent and time-dependent.

  6. Using Satellite Observations to Evaluate the AeroCOM Volcanic Emissions Inventory and the Dispersal of Volcanic SO2 Clouds in MERRA

    NASA Technical Reports Server (NTRS)

    Hughes, Eric J.; Krotkov, Nickolay; da Silva, Arlindo; Colarco, Peter

    2015-01-01

    Simulation of volcanic emissions in climate models requires information that describes the eruption of the emissions into the atmosphere. While the total amount of gases and aerosols released from a volcanic eruption can be readily estimated from satellite observations, source parameters such as injection altitude, eruption time and duration are often not directly known. The AeroCOM volcanic emissions inventory provides estimates of eruption source parameters and has been used to initialize volcanic emissions in reanalysis projects such as MERRA. The inventory provides an eruption's daily SO2 flux and plume-top altitude, yet an eruption can be very short-lived, lasting only a few hours, and can emit clouds at multiple altitudes. Case studies comparing the satellite-observed dispersal of volcanic SO2 clouds to simulations in MERRA have shown mixed results. Some cases, such as Okmok (2008), show good agreement with observations; for other eruptions, such as Sierra Negra (2005), the observed initial SO2 mass is half of that in the simulations; in still other cases, such as Soufriere Hills (2006), the initial SO2 amount agrees with the observations but disperses at a very different rate. In the aviation-hazards community, deriving accurate source terms is crucial for monitoring and short-term (24-h) forecasting of volcanic clouds. Back-trajectory methods have been developed that use satellite observations and transport models to estimate the injection altitude, eruption time, and eruption duration of observed volcanic clouds. These methods can provide eruption timing estimates at 2-hour temporal resolution and estimate the altitude and depth of a volcanic cloud. To better understand the differences between MERRA simulations and volcanic SO2 observations, back-trajectory methods are used to estimate the source-term parameters for a few volcanic eruptions, and the estimates are compared to the corresponding entries in the AeroCOM volcanic emission inventory. The nature of these mixed results is discussed with respect to the source-term estimates.
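    A back-trajectory estimate of the kind described can be sketched as stepping an observed cloud parcel backward through a wind field until it passes over the vent; the constant wind field and the coordinates below are illustrative assumptions, not data from the cited eruptions:

```python
def back_trajectory(cloud_pos, wind, dt_hours, max_steps, volcano_pos, tol_km):
    """Step a parcel backward in time through a (here: constant) wind field.

    cloud_pos, volcano_pos: (x_km, y_km); wind: (u_kmh, v_kmh).
    Returns the number of hours before observation at which the parcel
    was over the vent, or None if it never comes within tol_km of it.
    """
    x, y = cloud_pos
    for step in range(1, max_steps + 1):
        x -= wind[0] * dt_hours          # reverse the advection
        y -= wind[1] * dt_hours
        dx, dy = x - volcano_pos[0], y - volcano_pos[1]
        if (dx * dx + dy * dy) ** 0.5 <= tol_km:
            return step * dt_hours       # estimated eruption-time offset
    return None

# Parcel observed 200 km east of the vent in a 25 km/h westerly wind:
t = back_trajectory((200.0, 0.0), (25.0, 0.0), 1.0, 48, (0.0, 0.0), 5.0)
```

    Real implementations use time-varying reanalysis winds at multiple altitudes; trying several altitudes and picking the one whose trajectory intersects the vent is what yields the injection-height estimate.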

  7. Towards Guided Underwater Survey Using Light Visual Odometry

    NASA Astrophysics Data System (ADS)

    Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.

    2017-02-01

    A light distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on an image stream captured with a portable stereo rig attached to the embedded system. Captured images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit that computes the odometry. Building on a standard ego-motion estimation approach, we speed up point matching between image quadruplets using a low-level point-matching scheme based on the fast Harris operator and template matching that is invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate an a priori rough depth belief from the law of light divergence over distance. The rough depth is used to limit the point-correspondence search zone, as the search range depends directly on the expected disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in computation time w.r.t. approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer-vision techniques.
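    The depth-prior idea above (intensity falling off with the inverse square of distance, disparity varying inversely with depth) can be sketched as follows; the calibration intensity, focal length, and baseline are hypothetical values, not the parameters of the actual rig:

```python
import math

def rough_depth_from_intensity(I_obs, I_ref, d_ref):
    """Inverse-square falloff: I ~ 1/d^2, so d = d_ref * sqrt(I_ref / I_obs).
    I_ref is the intensity measured at a known calibration distance d_ref."""
    return d_ref * math.sqrt(I_ref / I_obs)

def disparity_window(depth_m, f_px, baseline_m, rel_tol=0.3):
    """Stereo disparity is inversely proportional to depth: disp = f*B/Z.
    Return a (min, max) search window around the rough-depth prediction."""
    d0 = f_px * baseline_m / depth_m
    return d0 * (1 - rel_tol), d0 * (1 + rel_tol)

# A patch four times dimmer than at the 1 m calibration distance -> ~2 m away
z = rough_depth_from_intensity(I_obs=50.0, I_ref=200.0, d_ref=1.0)
lo, hi = disparity_window(z, f_px=800.0, baseline_m=0.1)
```

    Restricting template matching to the `(lo, hi)` disparity window is what cuts the matching cost compared with an unconstrained search along the epipolar line.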

  8. Time-correlated neutron analysis of a multiplying HEU source

    NASA Astrophysics Data System (ADS)

    Miller, E. C.; Kalter, J. M.; Lavelle, C. M.; Watson, S. M.; Kinlaw, M. T.; Chichester, D. L.; Noonan, W. A.

    2015-06-01

    The ability to quickly identify and characterize special nuclear material remains a national security challenge. In counter-proliferation applications, identifying the neutron multiplication of a sample can be a good indication of the level of threat. Currently, neutron multiplicity measurements are performed with moderated 3He proportional counters. These systems rely on the detection of thermalized neutrons, a process which obscures both energy and time information from the source. Fast neutron detectors, such as liquid scintillators, can detect events on nanosecond time scales, providing more information on the temporal structure of the arriving signal, and offer an alternative method for extracting information from the source. To explore this possibility, a series of measurements were performed on the Idaho National Laboratory's MARVEL assembly, a configurable HEU source. The source assembly was measured in a variety of different HEU configurations and with different reflectors, covering a range of neutron multiplications from 2 to 8. The data were collected with liquid scintillator detectors and digitized for offline analysis. A gap-based approach was used to identify the bursts of detected neutrons associated with the same fission chain. Using this approach, we are able to study various statistical properties of individual fission chains. One of these properties is the distribution of neutron arrival times within a given burst. We have observed two interesting empirical trends. First, this distribution exhibits a weak, but definite, dependence on source multiplication. Second, there are distinctive differences in the distribution depending on the presence and type of reflector. Both of these phenomena might prove useful when assessing an unknown source. The physical origins of these phenomena can be illuminated with the help of MCNPX-PoliMi simulations.
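    The gap-based burst identification described above can be sketched as a simple clustering of sorted arrival times, where any inter-event gap larger than a threshold starts a new burst; the 200 ns threshold and the timestamps below are illustrative assumptions, not values from the MARVEL measurements:

```python
def find_bursts(arrival_times_ns, max_gap_ns):
    """Group detection times into bursts (candidate fission chains):
    a gap larger than max_gap_ns between consecutive sorted times
    starts a new burst."""
    bursts, current = [], []
    for t in sorted(arrival_times_ns):
        if current and t - current[-1] > max_gap_ns:
            bursts.append(current)
            current = []
        current.append(t)
    if current:
        bursts.append(current)
    return bursts

# Three detections clustered near t=0 and two near t=5000 ns:
bursts = find_bursts([0, 40, 90, 5000, 5030], max_gap_ns=200)
```

    Statistics such as the within-burst arrival-time distribution are then computed per burst, which is how multiplication- and reflector-dependent signatures become visible.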

  9. Limits on an energy dependence of the speed of light from a flare of the active galaxy PKS 2155-304.

    PubMed

    Aharonian, F; Akhperjanian, A G; Barres de Almeida, U; Bazer-Bachi, A R; Becherini, Y; Behera, B; Beilicke, M; Benbow, W; Bernlöhr, K; Boisson, C; Bochow, A; Borrel, V; Braun, I; Brion, E; Brucker, J; Brun, P; Bühler, R; Bulik, T; Büsching, I; Boutelier, T; Carrigan, S; Chadwick, P M; Charbonnier, A; Chaves, R C G; Chounet, L-M; Clapson, A C; Coignet, G; Costamante, L; Dalton, M; Degrange, B; Deil, C; Dickinson, H J; Djannati-Ataï, A; Domainko, W; Drury, L O'C; Dubois, F; Dubus, G; Dyks, J; Egberts, K; Emmanoulopoulos, D; Espigat, P; Farnier, C; Feinstein, F; Fiasson, A; Förster, A; Fontaine, G; Füssling, M; Gabici, S; Gallant, Y A; Gérard, L; Giebels, B; Glicenstein, J F; Glück, B; Goret, P; Hadjichristidis, C; Hauser, D; Hauser, M; Heinz, S; Heinzelmann, G; Henri, G; Hermann, G; Hinton, J A; Hoffmann, A; Hofmann, W; Holleran, M; Hoppe, S; Horns, D; Jacholkowska, A; de Jager, O C; Jung, I; Katarzyński, K; Kaufmann, S; Kendziorra, E; Kerschhaggl, M; Khangulyan, D; Khélifi, B; Keogh, D; Komin, Nu; Kosack, K; Lamanna, G; Lenain, J-P; Lohse, T; Marandon, V; Martin, J M; Martineau-Huynh, O; Marcowith, A; Maurin, D; McComb, T J L; Medina, C; Moderski, R; Moulin, E; Naumann-Godo, M; de Naurois, M; Nedbal, D; Nekrassov, D; Niemiec, J; Nolan, S J; Ohm, S; Olive, J-F; de Oña Wilhelmi, E; Orford, K J; Osborne, J L; Ostrowski, M; Panter, M; Pedaletti, G; Pelletier, G; Petrucci, P-O; Pita, S; Pühlhofer, G; Punch, M; Quirrenbach, A; Raubenheimer, B C; Raue, M; Rayner, S M; Renaud, M; Rieger, F; Ripken, J; Rob, L; Rosier-Lees, S; Rowell, G; Rudak, B; Ruppel, J; Sahakian, V; Santangelo, A; Schlickeiser, R; Schöck, F M; Schröder, R; Schwanke, U; Schwarzburg, S; Schwemmer, S; Shalchi, A; Skilton, J L; Sol, H; Spangler, D; Stawarz, Ł; Steenkamp, R; Stegmann, C; Superina, G; Tam, P H; Tavernet, J-P; Terrier, R; Tibolla, O; van Eldik, C; Vasileiadis, G; Venter, C; Vialle, J P; Vincent, P; Vivier, M; Völk, H J; Volpe, F; Wagner, S J; Ward, M; Zdziarski, A A; Zech, A

    2008-10-24

    In the past few decades, several models have predicted an energy dependence of the speed of light in the context of quantum gravity. For cosmological sources such as active galaxies, this minuscule effect can add up to measurable photon-energy dependent time lags. In this Letter a search for such time lags during the High Energy Stereoscopic System observations of the exceptional very high energy flare of the active galaxy PKS 2155-304 on 28 July 2006 is presented. Since no significant time lag is found, lower limits on the energy scale of speed of light modifications are derived.

  10. The Limited Role of Number of Nested Syntactic Dependencies in Accounting for Processing Cost: Evidence from German Simplex and Complex Verbal Clusters

    PubMed Central

    Bader, Markus

    2018-01-01

    This paper presents three acceptability experiments investigating German verb-final clauses in order to explore possible sources of sentence complexity during human parsing. The point of departure was De Vries et al.'s (2011) generalization that sentences with three or more crossed or nested dependencies are too complex to be processed by the human parsing mechanism without difficulty. This generalization is partially based on findings from Bach et al. (1986) concerning the acceptability of complex verb clusters in German and Dutch. The first experiment tests this generalization by comparing two sentence types: (i) sentences with three nested dependencies within a single clause that contains three verbs in a complex verb cluster; (ii) sentences with four nested dependencies distributed across two embedded clauses, one center-embedded within the other, each containing a two-verb cluster. The results show that sentences with four nested dependencies are judged to be as acceptable as control sentences with only two nested dependencies, whereas sentences with three nested dependencies are judged as only marginally acceptable. This argues against De Vries et al.'s (2011) claim that the human parser can process no more than two nested dependencies. The results are used to refine the Verb-Cluster Complexity Hypothesis of Bader and Schmid (2009a). The second and third experiments investigate sentences with four nested dependencies in more detail in order to explore alternative sources of sentence complexity: the number of predicted heads to be held in working memory (storage cost in terms of the Dependency Locality Theory [DLT], Gibson, 2000) and the length of the involved dependencies (integration cost in terms of the DLT). Experiment 2 investigates sentences for which storage cost and integration cost make conflicting predictions. The results show that storage cost outweighs integration cost. 
Experiment 3 shows that increasing integration cost in sentences with two degrees of center embedding leads to decreased acceptability. Taken together, the results argue in favor of a multifactorial account of the limitations on center embedding in natural languages. PMID:29410633

  11. The Limited Role of Number of Nested Syntactic Dependencies in Accounting for Processing Cost: Evidence from German Simplex and Complex Verbal Clusters.

    PubMed

    Bader, Markus

    2017-01-01

    This paper presents three acceptability experiments investigating German verb-final clauses in order to explore possible sources of sentence complexity during human parsing. The point of departure was De Vries et al.'s (2011) generalization that sentences with three or more crossed or nested dependencies are too complex to be processed by the human parsing mechanism without difficulty. This generalization is partially based on findings from Bach et al. (1986) concerning the acceptability of complex verb clusters in German and Dutch. The first experiment tests this generalization by comparing two sentence types: (i) sentences with three nested dependencies within a single clause that contains three verbs in a complex verb cluster; (ii) sentences with four nested dependencies distributed across two embedded clauses, one center-embedded within the other, each containing a two-verb cluster. The results show that sentences with four nested dependencies are judged to be as acceptable as control sentences with only two nested dependencies, whereas sentences with three nested dependencies are judged as only marginally acceptable. This argues against De Vries et al.'s (2011) claim that the human parser can process no more than two nested dependencies. The results are used to refine the Verb-Cluster Complexity Hypothesis of Bader and Schmid (2009a). The second and third experiments investigate sentences with four nested dependencies in more detail in order to explore alternative sources of sentence complexity: the number of predicted heads to be held in working memory (storage cost in terms of the Dependency Locality Theory [DLT], Gibson, 2000) and the length of the involved dependencies (integration cost in terms of the DLT). Experiment 2 investigates sentences for which storage cost and integration cost make conflicting predictions. The results show that storage cost outweighs integration cost. 
Experiment 3 shows that increasing integration cost in sentences with two degrees of center embedding leads to decreased acceptability. Taken together, the results argue in favor of a multifactorial account of the limitations on center embedding in natural languages.

  12. The development of the room temperature LWIR HgCdTe detectors for free space optics communication systems

    NASA Astrophysics Data System (ADS)

    Martyniuk, Piotr; Gawron, Waldemar; Mikołajczyk, Janusz

    2017-10-01

    Many room-temperature applications, including free-space optics (FSO) communication systems combining quantum cascade laser sources, call for long-wave infrared (LWIR, 8-12 micrometer) HgCdTe detectors that reach ultrafast response times (< 1 ns) under nearly background-limited infrared photodetection (BLIP) conditions. Nearly-BLIP detectivity and ultrafast response time, however, are conflicting requirements in the detector optimization process. That conflict can be circumvented by integrating a hyperhemispherical GaAs immersion lens into the structure to increase the optical-to-electrical area ratio, giving flexibility in the response-time optimization. The optimization approach depends on the bias condition: the generation-recombination (GR) mechanism within the active layer was found to be important under forward and weak reverse bias, while photogenerated carrier transport is significant at higher reverse bias. Besides the applied voltage, the drift time depends strongly on the thickness of the absorption region. Reducing the thickness of the active region shortens the drift time, but significantly reduces the quantum efficiency and lowers the detectivity. Taking this into consideration, special multilayer heterostructure designs are developed. A p-type absorber is promising owing to both its high ambipolar mobility and its low thermal GR rate, driven by the Auger 7 mechanism. Theoretical simulations indicate that, depending on bias conditions at T = 300 K, the multilayer barrier LWIR HgCdTe structure could reach response times below 100 ps when biased and <= 1 ns when unbiased. Immersed detectivity reaches > 10^9 cm·Hz^(1/2)/W. Since commercially available FSO systems can operate separately in the SWIR, MWIR and LWIR ranges, dual-band detectors should be implemented in FSO. This paper presents the theoretical performance of a dual-band back-to-back MWIR/LWIR HgCdTe detector operating at 300 K, pointing out the influence of the MWIR active layer on the LWIR operating regime.

  13. Cramer-Rao bound analysis of wideband source localization and DOA estimation

    NASA Astrophysics Data System (ADS)

    Yip, Lean; Chen, Joe C.; Hudson, Ralph E.; Yao, Kung

    2002-12-01

    In this paper, we derive the Cramér-Rao bound (CRB) for wideband source localization and DOA estimation. The resulting CRB formula can be decomposed into two terms: one that depends on the signal characteristics and one that depends on the array geometry. For a uniformly spaced circular array (UCA), a concise analytical form of the CRB can be obtained by algebraic approximation. We further define a DOA beamwidth based on the resulting CRB formula. The DOA beamwidth can be used to design the angular sampling spacing for the maximum-likelihood (ML) algorithm. For a randomly distributed array, we use an elliptical model to determine the largest and smallest effective beamwidths. The effective beamwidth and the CRB analysis of source localization allow us to design an efficient algorithm for the ML estimator. Finally, our simulation results for the Approximated Maximum Likelihood (AML) algorithm are shown to match the CRB analysis well at high SNR.

  14. Operational source receptor calculations for large agglomerations

    NASA Astrophysics Data System (ADS)

    Gauss, Michael; Shamsudheen, Semeena V.; Valdebenito, Alvaro; Pommier, Matthieu; Schulz, Michael

    2016-04-01

    For air quality policy, an important question is how much of the air pollution within an urbanized region can be attributed to local sources and how much is imported through long-range transport. This is critical information for a correct assessment of the effectiveness of potential emission measures. The ratio between indigenous and long-range-transported air pollution for a given region depends on its geographic location, the size of its area, the strength and spatial distribution of emission sources, and the time of the year, but also - very strongly - on the current meteorological conditions, which change from day to day and thus make it important to provide such calculations in near-real time to support short-term legislation. Similarly, long-term analysis over longer periods (e.g. one year), or of specific air-quality episodes in the past, can help to scientifically underpin multi-regional agreements and long-term legislation. Within the European MACC projects (Monitoring Atmospheric Composition and Climate) and the transition to the operational CAMS service (Copernicus Atmosphere Monitoring Service), the computationally efficient EMEP MSC-W air quality model has been applied with detailed emission data and comprehensive calculations of chemistry and microphysics, driven by high-quality meteorological forecast data (up to 96-hour forecasts), to provide source-receptor calculations on a regular basis in forecast mode. In its current state, the product lets the user choose among regions and regulatory pollutants (e.g. ozone and PM) to assess the effectiveness of hypothetical, immediately implemented reductions in air pollutant emissions, either within the agglomeration or outside it. The effects are visualized as bar charts showing the resulting changes in air pollution levels within the agglomeration as a function of time (hourly resolution, 0 to 4 days into the future). The bar charts not only allow assessing the effects of emission reduction measures but also indicate the relative importance of indigenous versus imported air pollution. The calculations are currently performed weekly by MET Norway for the Paris, London, Berlin, Oslo, Po Valley and Rhine-Ruhr regions, and the results are provided free of charge at the MACC website (http://www.gmes-atmosphere.eu/services/aqac/policy_interface/regional_sr/). A proposal to extend this service to all EU capitals on a daily basis within the Copernicus Atmosphere Monitoring Service is currently under review. The tool is an important example of the increasing application of scientific tools to operational services that support air quality policy. This paper describes the tool in more detail, focusing on the experimental setup, underlying assumptions, uncertainties, computational demand, and usefulness for air quality policy. Options to apply the tool to agglomerations outside the EU are also discussed (with reference to, e.g., PANDA, a European-Chinese collaboration project).

  15. Isotopic composition and neutronics of the Okelobondo natural reactor

    NASA Astrophysics Data System (ADS)

    Palenik, Christopher Samuel

    The Oklo-Okelobondo and Bangombe uranium deposits in Gabon, Africa, host Earth's only known natural nuclear fission reactors. These 2-billion-year-old reactors represent a unique opportunity to study used nuclear fuel over geologic periods of time. The reactors in these deposits have been studied as a means to constrain the source term of fission-product concentrations produced during reactor operation. The source term depends on the neutronic parameters, which include the reactor operation duration, the neutron flux and the neutron energy spectrum. Reactor operation has been modeled using a point-source computer simulation (the Oak Ridge Isotope Generation and Depletion, ORIGEN, code) for a light water reactor. Model results have been constrained using secondary ion mass spectrometry (SIMS) isotopic measurements of the fission products Nd and Te, as well as U, in uraninite from samples collected in the Okelobondo reactor zone. Based upon these constraints on the operating conditions, the pre-reactor concentrations of Nd (150 +/- 75 ppm) and Te (<1 ppm) in uraninite were estimated. From the burnup measured in Okelobondo samples (0.7 to 13.8 GWd/MTU), the final fission-product inventories of Nd (90 to 1200 ppm) and Te (10 to 110 ppm) were calculated. By the same means, the ranges of all other fission products and actinides produced during reactor operation were calculated as a function of burnup. These results provide a source term against which the present elemental and decay abundances at the fission reactor can be compared. Furthermore, they provide new insights into the extent to which a "fossil" nuclear reactor can be characterized on the basis of its isotopic signatures. In addition, results from the study of two other natural systems related to radionuclide and fission-product transport are included. A detailed mineralogical characterization of the uranyl mineralogy at the Bangombe uranium deposit was completed to improve geochemical models of the solubility-limiting phase, and a study of the competing effects of radiation damage and annealing in a U-bearing zircon crystal shows that low-temperature annealing in actinide-bearing phases contributes significantly to the recovery of radiation damage.

  16. A Geomorphologic Synthesis of Nonlinearity in Surface Runoff

    NASA Astrophysics Data System (ADS)

    Wang, C. T.; Gupta, Vijay K.; Waymire, Ed

    1981-06-01

    The geomorphic approach leading to a representation of an instantaneous unit hydrograph (iuh), which we developed earlier, is generalized to incorporate nonlinear effects in the rainfall-runoff transformation. It is demonstrated that the nonlinearity enters in part through the dependence of the mean holding time on the rainfall intensity. Under the first approximation that this dependence is the sole source of nonlinearity, an explicit quasi-linear representation of the rainfall-runoff transformation results. The kernel function of this transformation can be termed the instantaneous response function (irf), in contradistinction to the notion of an iuh for the case of a linear rainfall-runoff transformation. The predictions from the quasi-linear theory agree very well with predictions from the kinematic wave approach for the one small basin that is analyzed. Also, for two large basins in Illinois having areas of about 1100 mi², the predictions from the quasi-linear approach compare very well with the observed flows. A measure of nonlinearity, α, arises naturally through the dependence of the mean holding time K_B(i_0) on the rainfall intensity i_0 via K_B(i_0) ~ i_0^(-α). Computations of α for four basins show that α approaches ⅔ as basin size decreases and approaches zero as basin size increases. A semilog plot of α versus the square root of the basin area gives a straight line. Confirmation of this relationship for other basins would be of basic importance in predicting flows from ungaged basins.
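    Since K_B(i_0) ~ i_0^(-α) implies that α is the negative slope of log K_B versus log i_0, a least-squares estimate of α can be sketched as follows (the data points are synthetic, generated with α = 2/3; they are not the basins analyzed in the paper):

```python
import math

def fit_alpha(intensities, holding_times):
    """Fit K_B(i0) ~ i0**(-alpha) by least squares in log-log space;
    alpha is the negative of the fitted slope."""
    xs = [math.log(i) for i in intensities]
    ys = [math.log(k) for k in holding_times]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return -slope

# Synthetic (i0, K_B) pairs following an exact alpha = 2/3 power law:
i0 = [1.0, 2.0, 4.0, 8.0]
kb = [i ** (-2.0 / 3.0) for i in i0]
alpha = fit_alpha(i0, kb)
```

    With field data the fit would of course scatter about the power law, and α would carry a confidence interval rather than being recovered exactly.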

  17. Are the early predictors of long-term work absence following injury time dependent? Results from the Prospective Outcomes of Injury Study

    PubMed Central

    Lilley, Rebbecca; Davie, Gabrielle; Derrett, Sarah

    2017-01-01

    Objectives Few studies examine the influence of early predictors of work absence beyond 12 months following injury, or the time-dependent relative importance of these factors. This study aimed to identify the most important sociodemographic, occupational, health, lifestyle and injury predictors of work absence at 12 and 24 months following injury, and to examine changes in their relative importance over time. Design Prospective cohort study. Setting The Prospective Outcomes of Injury Study, New Zealand. Participants 2626 injured New Zealand workers aged 18-64 years were recruited from the injury claims register of New Zealand's monopoly injury compensation provider: 2092 completed the 12-month interview (80% follow-up) and 2082 completed the 24-month interview (79% follow-up). Primary and secondary outcome measures The primary outcome of interest was absence from work at the time of the 12-month and 24-month follow-up interviews. Results Using modified Poisson regression to estimate relative risks, groups of workers at increased risk of work absence at both 12 and 24 months were identified: males, low-income workers, trade/manual workers, temporary employees, those reporting two or more comorbidities and those experiencing a work-related injury. Factors unique to predicting work absence at 12 months included financial insecurity, fixed-term employment and long weekly hours worked; unique factors at 24 months included job dissatisfaction, long weekly days worked, a prior injury and sustaining an injury perceived to be a threat to life. Conclusions Important early predictors of work absence at 12 or 24 months following injury are multidimensional and follow a time-dependent pattern. A consistent set of predictors was, however, present at both time periods, and these are prime targets for early intervention. Understanding the multidimensional, time-dependent patterns of early predictors of long-term disability is important for targeting timely interventions to prevent long-term work disability. PMID:29150466
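    The relative risks reported were estimated with modified Poisson regression; as a simpler illustration of the quantity being estimated, a crude relative risk with a Wald confidence interval can be computed from a 2×2 cohort table (the counts below are invented for illustration and are not from this study):

```python
import math

def relative_risk(a, b, c, d, z=1.96):
    """Crude relative risk for a cohort 2x2 table with a Wald 95% CI.
    a/b: exposed with/without the outcome; c/d: unexposed with/without."""
    risk_exp = a / (a + b)
    risk_unexp = c / (c + d)
    rr = risk_exp / risk_unexp
    # Standard error of log(RR) for a cohort design
    se_log = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical: 40/200 exposed vs 20/200 unexposed absent at follow-up
rr, lo, hi = relative_risk(40, 160, 20, 180)
```

    Modified Poisson regression generalizes this to multiple covariates by fitting a Poisson model with a log link and robust standard errors, which is why it reports relative risks rather than odds ratios.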

  18. Time dependent behavior of a graphite/thermoplastic composite and the effects of stress and physical aging

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Feldman, Mark

    1993-01-01

    Two complementary studies were performed to determine the effects of stress and physical aging on the matrix-dominated time-dependent properties of IM7/8320 composite. The first study, experimental in nature, used isothermal tensile creep/aging test techniques developed for polymers and adapted them for testing of the composite material. From these tests, the time-dependent transverse (S22) and shear (S66) compliances of an orthotropic plate were found from short-term creep compliance measurements at constant, sub-T_g temperatures. These compliance terms were shown to be affected by physical aging. Aging-time shift factors and shift rates were found to be functions of temperature and applied stress. The second part of the study relied upon isothermal uniaxial tension tests of IM7/8320 to determine the effects of physical aging on the nonlinear material behavior at elevated temperature. An elastic/viscoplastic constitutive model was used to quantify the effects of aging on the rate-independent plastic and rate-dependent viscoplastic response. The sensitivity of the material constants required by the model to aging time was determined for aging times up to 65 hours. Verification of the analytical model indicated that the effects of prior aging on the nonlinear stress/strain/time behavior of matrix-dominated laminates can be predicted.

  19. Using the example of Istanbul to outline general aspects of protecting reservoirs, rivers and lakes used for drinking water abstraction.

    PubMed

    Tanik, A

    2000-01-01

    The six main drinking water reservoirs of Istanbul are under threat of pollution due to rapid population increase, unplanned urbanisation and insufficient infrastructure. In contrast to the present land use profile, the environmental evaluation of the catchment areas reveals that point sources of pollutants, especially of domestic origin, dominate over diffuse sources. The water quality studies also support these findings, emphasising that if no substantial precautions are taken, it will no longer be possible to obtain drinking water from the reservoirs. In this paper, in light of the present status of the reservoirs, possible short- and long-term protective measures are outlined for reducing the impact of point sources. Immediate precautions mostly depend on reducing the pollution arising from the existing settlements. Long-term measures mainly emphasise the preparation of new land use plans that take into consideration the protection of unoccupied lands. Recommendations on protection and control of the reservoirs are stated.

  20. High-power, high-repetition-rate performance characteristics of β-BaB₂O₄ for single-pass picosecond ultraviolet generation at 266 nm.

    PubMed

    Kumar, S Chaitanya; Casals, J Canals; Wei, Junxiong; Ebrahim-Zadeh, M

    2015-10-19

    We report a systematic study of the performance characteristics of a high-power, high-repetition-rate picosecond ultraviolet (UV) source at 266 nm based on β-BaB2O4 (BBO). The source, based on single-pass fourth-harmonic generation (FHG) of a compact Yb-fiber laser in a two-crystal spatial walk-off-compensation scheme, generates up to 2.9 W of average power at 266 nm at a pulse repetition rate of ~80 MHz, with a single-pass FHG efficiency of 35% from the green to the UV. Detrimental issues such as thermal effects have been studied and confirmed by relevant measurements. Angular and temperature acceptance bandwidths in BBO for FHG to 266 nm are experimentally determined, indicating that the effective interaction length is limited by spatial walk-off and thermal gradients under high-power operation. The origin of dynamic color-center formation due to two-photon absorption in BBO is investigated through measurements of intensity-dependent transmission at 266 nm. Using a suitable theoretical model, two-photon absorption coefficients as well as color-center densities have been estimated at different temperatures. The measurements show that the two-photon absorption coefficient in BBO at 266 nm is ~3.5 times lower at 200°C than at room temperature. The long-term power stability as well as the beam-pointing stability are analyzed at different output power levels and focusing conditions. Using cylindrical optics, we have circularized the generated elliptical UV beam to a circularity of >90%. To our knowledge, this is the first time such high average powers and temperature-dependent two-photon absorption measurements at 266 nm have been reported at repetition rates as high as ~80 MHz.
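    Intensity-dependent transmission with two-photon absorption is commonly modeled by dI/dz = -αI - βI²; its closed-form solution underlies the extraction of the two-photon absorption coefficient β from such measurements. A sketch with illustrative (not measured) parameter values:

```python
import math

def tpa_transmitted_intensity(I0, alpha, beta, L):
    """Closed-form solution of dI/dz = -alpha*I - beta*I**2 over length L.
    alpha: linear absorption (1/cm), beta: two-photon coefficient (cm/W),
    I0: input intensity (W/cm^2). Requires alpha > 0."""
    lin = math.exp(-alpha * L)
    return I0 * lin / (1.0 + (beta * I0 / alpha) * (1.0 - lin))

# With beta = 0 the expression reduces to the Beer-Lambert law:
I_linear = tpa_transmitted_intensity(1e6, 0.1, 0.0, 1.0)
# Nonzero beta lowers the transmission at high intensity:
I_tpa = tpa_transmitted_intensity(1e6, 0.1, 1e-9, 1.0)
```

    In practice β is obtained by fitting this expression (often extended with color-center absorption terms) to transmission measured over a range of input intensities.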

  1. Sediment Connectivity and Transport Pathways in Tidal Inlets: a Conceptual Framework with Application to Ameland Inlet

    NASA Astrophysics Data System (ADS)

    Pearson, S.; van Prooijen, B. C.; Zheng Bing, W.; Bak, J.

    2017-12-01

    Predicting the response of tidal inlets and adjacent coastlines to sea level rise and anthropogenic interventions (e.g. sand nourishments) requires understanding of sediment transport pathways. These pathways are strongly dependent on hydrodynamic forcing, grain size, underlying morphology, and the timescale considered. To map and describe these pathways, we considered the concept of sediment connectivity, which quantifies the degree to which sediment transport pathways link sources to receptors. In this study we established a framework for understanding sediment transport pathways in coastal environments, using Ameland Inlet in the Dutch Wadden Sea as a basis. We used the Delft3D morphodynamic model to assess the fate of sediment as it moved between specific morphological units defined in the model domain. Simulation data were synthesized into a graphical network, and graph theory was then used to analyze connectivity at different space and time scales. At decadal time scales, fine and very fine sand (<250 μm) have greater connectivity with receptor areas further away from their sources. Conversely, medium sand (>250 μm) shows lower connectivity, even in more energetic areas. Greater sediment connectivity was found under the influence of wind and waves than under purely tidal forcing. Connectivity shows considerable spatial variation in the cross-shore and alongshore directions, depending on proximity to the inlet and the dominant wave direction. Furthermore, connectivity generally increases at longer timescales. Asymmetries in connectivity (i.e. unidirectional transport) can be used to explain long-term erosional or depositional trends. As such, an understanding of sediment connectivity as a function of grain size could yield useful insights for resolving sediment transport pathways and the fate of a nourishment in coastal environments.
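    The graph representation described above can be sketched as an adjacency matrix of source-to-receptor fluxes, from which connectivity and its asymmetry (net transport) follow directly. The three morphological units and all flux values below are hypothetical, chosen only to illustrate the bookkeeping:

```python
import numpy as np

# Hypothetical sediment fluxes (arbitrary units) between three morphological
# units: 0 = ebb-tidal delta, 1 = inlet channel, 2 = back-barrier basin.
# A[i, j] is sediment mass moving from unit i to unit j over the simulation.
A = np.array([
    [0.0, 4.0, 1.0],
    [1.0, 0.0, 5.0],
    [0.5, 0.5, 0.0],
])

# Connectivity: fraction of all transported sediment carried by each pathway.
C = A / A.sum()

# Asymmetry matrix: positive entries indicate net i -> j transport, which the
# abstract links to long-term erosional or depositional trends.
asym = A - A.T

# Net import into the basin = inflow minus outflow for unit 2.
net_into_basin = A[:, 2].sum() - A[2, :].sum()
```

    Repeating this per grain-size class and per forcing scenario reproduces the kind of comparison (fine vs. medium sand, tidal vs. wind-and-wave forcing) described in the abstract.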

  2. A rapid form of activity-dependent recovery from short-term synaptic depression in the intensity pathway of the auditory brainstem

    PubMed Central

    Horiuchi, Timothy K.

    2011-01-01

    Short-term synaptic plasticity acts as a time- and firing rate-dependent filter that mediates the transmission of information across synapses. In the avian auditory brainstem, specific forms of plasticity are expressed at different terminals of the same auditory nerve fibers and contribute to the divergence of acoustic timing and intensity information. To identify key differences in the plasticity properties, we made patch-clamp recordings from neurons in the cochlear nucleus responsible for intensity coding, nucleus angularis, and measured the time course of the recovery of excitatory postsynaptic currents following short-term synaptic depression. These synaptic responses showed a very rapid recovery, following a bi-exponential time course with a fast time constant of ~40 ms and a dependence on the presynaptic activity levels, resulting in a crossing over of the recovery trajectories following high-rate versus low-rate stimulation trains. We also show that the recorded recovery in the intensity pathway differs from similar recordings in the timing pathway, specifically the cochlear nucleus magnocellularis, in two ways: (1) a fast recovery that was not due to recovery from postsynaptic receptor desensitization and (2) a recovery trajectory that was characterized by a non-monotonic bump that may be due in part to facilitation mechanisms more prevalent in the intensity pathway. We tested whether a previously proposed model of synaptic transmission based on vesicle depletion and sequential steps of vesicle replenishment could account for the recovery responses, and found it was insufficient, suggesting an activity-dependent feedback mechanism is present. We propose that the rapid recovery following depression allows improved coding of natural auditory signals that often consist of sound bursts separated by short gaps. PMID:21409439
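    Recovery time courses of the kind reported above (bi-exponential, with a fast component of ~40 ms) are typically quantified by fitting a two-component exponential recovery function. The sketch below fits synthetic data; the amplitudes, the slow time constant, and the noise level are assumed for illustration and are not taken from the paper's recordings:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def biexp_recovery(t, a_fast, tau_fast, a_slow, tau_slow):
    # Fraction of baseline EPSC amplitude recovered t ms after the train.
    return 1.0 - a_fast * np.exp(-t / tau_fast) - a_slow * np.exp(-t / tau_slow)

# Synthetic recovery data with a fast ~40 ms component (illustrative values).
t = np.array([10.0, 25.0, 50.0, 100.0, 200.0, 400.0, 800.0, 1600.0])  # ms
data = biexp_recovery(t, 0.35, 40.0, 0.25, 600.0)
data = data + rng.normal(0.0, 0.005, t.size)

popt, _ = curve_fit(
    biexp_recovery, t, data,
    p0=(0.3, 30.0, 0.3, 500.0),
    bounds=([0.0, 1.0, 0.0, 100.0], [1.0, 100.0, 1.0, 5000.0]),
)
a_fast, tau_fast, a_slow, tau_slow = popt
```

    The crossing recovery trajectories and the non-monotonic "bump" described in the abstract are exactly the features such a simple fit fails to capture, motivating the activity-dependent feedback mechanism the authors propose.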

  3. A fully-neoclassical finite-orbit-width version of the CQL3D Fokker–Planck code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrov, Yu V.; Harvey, R. W.

    The time-dependent bounce-averaged CQL3D flux-conservative finite-difference Fokker–Planck equation (FPE) solver has been upgraded to include finite-orbit-width (FOW) capabilities which are necessary for an accurate description of neoclassical transport, losses to the walls, and transfer of particles, momentum, and heat to the scrape-off layer. The FOW modifications are implemented in the formulation of the neutral beam source, collision operator, RF quasilinear diffusion operator, and in synthetic particle diagnostics. The collisional neoclassical radial transport appears naturally in the FOW version due to the orbit-averaging of local collision coefficients coupled with transformation coefficients from local (R, Z) coordinates along each guiding-center orbit to the corresponding midplane computational coordinates, where the FPE is solved. In a similar way, the local quasilinear RF diffusion terms give rise to additional radial transport of orbits. We note that the neoclassical results are obtained for 'full' orbits, not dependent on a common small orbit-width approximation. Results of validation tests for the FOW version are also presented.

  4. A fully-neoclassical finite-orbit-width version of the CQL3D Fokker–Planck code

    DOE PAGES

    Petrov, Yu V.; Harvey, R. W.

    2016-09-08

    The time-dependent bounce-averaged CQL3D flux-conservative finite-difference Fokker–Planck equation (FPE) solver has been upgraded to include finite-orbit-width (FOW) capabilities which are necessary for an accurate description of neoclassical transport, losses to the walls, and transfer of particles, momentum, and heat to the scrape-off layer. The FOW modifications are implemented in the formulation of the neutral beam source, collision operator, RF quasilinear diffusion operator, and in synthetic particle diagnostics. The collisional neoclassical radial transport appears naturally in the FOW version due to the orbit-averaging of local collision coefficients coupled with transformation coefficients from local (R, Z) coordinates along each guiding-center orbit to the corresponding midplane computational coordinates, where the FPE is solved. In a similar way, the local quasilinear RF diffusion terms give rise to additional radial transport of orbits. We note that the neoclassical results are obtained for 'full' orbits, not dependent on a common small orbit-width approximation. Results of validation tests for the FOW version are also presented.

  5. Understanding the acoustics of Papal Basilicas in Rome by means of a coupled-volumes approach

    NASA Astrophysics Data System (ADS)

    Martellotta, Francesco

    2016-11-01

    The paper investigates the acoustics of the four World-famous Papal Basilicas in Rome, namely Saint Peter's, St. John Lateran's, St. Paul's outside the Walls, and Saint Mary's Major. They are characterized by different dimensions, materials, and architectural features, as well as by a certain number of similarities. In addition, their complexity determines significant variation in their acoustics depending on the relative position of source and receivers. A detailed set of acoustic measurements was carried out in each church, using both spatial (B-format) and binaural microphones, and determining the standard ISO 3382 descriptors. The results are analyzed in relation to the architectural features, pointing out the differences observed in terms of listening experience. Finally, in order to explain some of the results found in energy-based parameters, the churches were analyzed as a system of acoustically coupled volumes. The latter explained most of the anomalies observed in the distribution of acoustic parameters, while showing at the same time that secondary spaces (aisles, chapels) play a different role depending on the amount of sound absorption located in the main nave.

  6. A mathematical model of the heat and fluid flows in direct-chill casting of aluminum sheet ingots and billets

    NASA Astrophysics Data System (ADS)

    Mortensen, Dag

    1999-02-01

    A finite-element method model for the time-dependent heat and fluid flows that develop during direct-chill (DC) semicontinuous casting of aluminium ingots is presented. Thermal convection and turbulence are included in the model formulation and, in the mushy zone, the momentum equations are modified with a Darcy-type source term dependent on the liquid fraction. The boundary conditions involve calculations of the air gap along the mold wall as well as the heat transfer to the falling water film with forced convection, nucleate boiling, and film boiling. The mold wall and the starting block are included in the computational domain. In the start-up period of the casting, the ingot domain expands over the starting-block level. The numerical method applies a fractional-step method for the dynamic Navier-Stokes equations and the “streamline upwind Petrov-Galerkin” (SUPG) method for mixed diffusion and convection in the momentum and energy equations. The modeling of the start-up period of the casting is demonstrated and compared to temperature measurements in an AA1050 200×600 mm sheet ingot.
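    The abstract states that the momentum equations carry a Darcy-type source term dependent on the liquid fraction in the mushy zone. A common choice in this class of solidification model (an assumption here, not stated in the abstract) is a Carman-Kozeny-type term:

```latex
\mathbf{S} \;=\; -\,\frac{\mu}{K}\left(\mathbf{u}-\mathbf{u}_{\mathrm{cast}}\right),
\qquad
K \;=\; K_{0}\,\frac{f_{l}^{3}}{\left(1-f_{l}\right)^{2}},
```

    where f_l is the liquid fraction, μ the dynamic viscosity, u_cast the casting velocity, and K_0 a permeability constant tied to the dendrite arm spacing; the term vanishes in the fully liquid region (f_l = 1) and progressively suppresses flow as f_l → 0.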

  7. GPU-accelerated Tersoff potentials for massively parallel Molecular Dynamics simulations

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung Dac

    2017-03-01

    The Tersoff potential is one of the empirical many-body potentials that have been widely used in simulation studies at atomic scales. Unlike pair-wise potentials, the Tersoff potential involves three-body terms, which require many more arithmetic operations and involve greater data dependency. In this contribution, we have implemented a GPU-accelerated version of several variants of the Tersoff potential for LAMMPS, an open-source massively parallel Molecular Dynamics code. Compared to the existing MPI implementation in LAMMPS, the GPU implementation exhibits better scalability and offers a speedup of 2.2X when run on 1000 compute nodes on the Titan supercomputer. On a single node, the speedup ranges from 2.0 to 8.0 times, depending on the number of atoms per GPU and the hardware configuration. The most notable features of our GPU-accelerated version include its design for MPI/accelerator heterogeneous parallelism, its compatibility with other functionalities in LAMMPS, and its ability to give deterministic results and to support both NVIDIA CUDA- and OpenCL-enabled accelerators. Our implementation is now part of the GPU package in LAMMPS and accessible for public use.

  8. A continuous time random walk (CTRW) integro-differential equation with chemical interaction

    NASA Astrophysics Data System (ADS)

    Ben-Zvi, Rami; Nissan, Alon; Scher, Harvey; Berkowitz, Brian

    2018-01-01

    A nonlocal-in-time integro-differential equation is introduced that accounts for close coupling between transport and chemical reaction terms. The structure of the equation contains these terms in a single convolution with a memory function M(t), which includes the source of non-Fickian (anomalous) behavior, within the framework of a continuous time random walk (CTRW). The interaction is non-linear and second-order, relevant for a bimolecular reaction A + B → C. The interaction term ΓP_A(s, t)P_B(s, t) is symmetric in the concentrations of A and B (i.e. P_A and P_B); thus the source terms in the equations for A, B and C are similar, but with a change in sign for that of C. Here, the chemical rate coefficient, Γ, is constant. The fully coupled equations are solved numerically using a finite element method (FEM) with a judicious representation of M(t) that eschews the need for the entire time history, instead using only values at the former time step. To begin to validate the equations, the FEM solution is compared, in lieu of experimental data, to a particle tracking method (CTRW-PT); the results from the two approaches, particularly for the C profiles, are in agreement. The FEM solution, for a range of initial and boundary conditions, can provide a good model for reactive transport in disordered media.
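    One plausible reading of the equation structure described above (a sketch consistent with the text, not the authors' exact notation) is, for species A:

```latex
\frac{\partial P_{A}(s,t)}{\partial t}
\;=\;
\int_{0}^{t} M(t-t')\,
\Big[\,\mathcal{L}\,P_{A}(s,t')
\;-\;\Gamma\,P_{A}(s,t')\,P_{B}(s,t')\,\Big]\,dt',
```

    with an identical equation for P_B, and the reaction term entering with a positive sign in the equation for P_C; here \mathcal{L} denotes the spatial transport (advection-dispersion) operator and M(t) the CTRW memory function, which carries the non-Fickian behavior.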

  9. On the structure of pressure fluctuations in simulated turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Kim, John

    1989-01-01

    Pressure fluctuations in a turbulent channel flow are investigated by analyzing a database obtained from a direct numerical simulation. Detailed statistics associated with the pressure fluctuations are presented. Characteristics associated with the rapid (linear) and slow (nonlinear) pressure are discussed. It is found that the slow pressure fluctuations are larger than the rapid pressure fluctuations throughout the channel except very near the wall, where they are about the same magnitude. This is contrary to the common belief that the nonlinear source terms are negligible compared to the linear source terms. Probability density distributions, power spectra, and two-point correlations are examined to reveal the characteristics of the pressure fluctuations. The global dependence of the pressure fluctuations and pressure-strain correlations is also examined by evaluating the integrals associated with their Green's function representations. In the wall region where the pressure-strain terms are large, most contributions to the pressure-strain terms are from the wall region (i.e., local), whereas away from the wall where the pressure-strain terms are small, contributions are global. Structures of instantaneous pressure and pressure gradients at the wall and the corresponding vorticity field are examined.

  10. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).

  11. Indispensable finite time corrections for Fokker-Planck equations from time series data.

    PubMed

    Ragwitz, M; Kantz, H

    2001-12-17

    The reconstruction of Fokker-Planck equations from observed time series data suffers strongly from finite sampling rates. We show that previously published results are degraded considerably by such effects. We present correction terms which yield a robust estimation of the diffusion terms, together with a novel method for one-dimensional problems. We apply these methods to time series data of local surface wind velocities, where the dependence of the diffusion constant on the state variable shows a different behavior than previously suggested.
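    The drift and diffusion terms of a Fokker-Planck model are commonly estimated from data via finite-time conditional moments. The sketch below shows the naive (uncorrected) estimator on a simulated Ornstein-Uhlenbeck process standing in for the wind-velocity data, and illustrates the finite-sampling-rate degradation the paper addresses; it does not implement the authors' correction terms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an Ornstein-Uhlenbeck process dx = -gamma*x*dt + sqrt(2*D)*dW.
# gamma and D are illustrative values, not fitted to any wind data.
gamma, D, dt, n = 1.0, 0.5, 1.0e-3, 500_000
x = np.empty(n)
x[0] = 0.0
kicks = rng.standard_normal(n - 1) * np.sqrt(2.0 * D * dt)
for i in range(n - 1):
    x[i + 1] = x[i] - gamma * x[i] * dt + kicks[i]

def km_estimates(series, tau_steps, dt):
    """Naive finite-time Kramers-Moyal estimates of drift (D1) and diffusion
    (D2), averaged over all states; no finite-time correction applied."""
    dx = series[tau_steps:] - series[:-tau_steps]
    tau = tau_steps * dt
    return dx.mean() / tau, (dx ** 2).mean() / (2.0 * tau)

d1_fine, d2_fine = km_estimates(x, 1, dt)        # sampling interval = dt
d1_coarse, d2_coarse = km_estimates(x, 200, dt)  # 200x coarser sampling
```

    At the coarse sampling interval the naive diffusion estimate is visibly biased low relative to the true D, which is the kind of degradation the paper's correction terms are designed to remove.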

  12. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin, E-mail: nzcho@kaist.ac.kr

    2015-12-31

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used and a linear approximation of fission source distributions during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.

  13. Dissolved air flotation as a potential treatment process to remove Giardia cysts from anaerobically treated sewage.

    PubMed

    Santos, Priscila Ribeiro Dos; Daniel, Luiz Antonio

    2017-10-01

    Controlling Giardia cysts in sewage is an essential barrier for public health protection, reducing possible routes of protozoa transmission. The aim of this study was to evaluate the capability of dissolved air flotation (DAF), on a bench scale, to remove Giardia cysts from anaerobic effluent. Moreover, removals of indicator microorganisms and physical variables were also investigated. Flocculation conditions were studied, associating different flocculation times with different mean velocity gradients. DAF treatment achieved mean log removals in the range of 2.52-2.62 for Giardia cysts, depending on the flocculation condition. No statistical differences were observed among the flocculation conditions in terms of cyst removal. Low levels of turbidity and apparent color obtained from the treated effluent may indicate good treatment conditions for the DAF process in cyst removal. Indicator microorganisms were not able to predict the parasitological quality of the wastewater treated by flotation in terms of cyst concentrations. The DAF process provided an effective barrier to control cysts from sewage, which is an important parasite source.

  14. A critical reappraisal of bilateral adrenalectomy for ACTH-dependent Cushing's syndrome.

    PubMed

    Reincke, Martin; Ritzel, Katrin; Oßwald, Andrea; Berr, Christina; Stalla, Günter; Hallfeldt, Klaus; Reisch, Nicole; Schopohl, Jochen; Beuschlein, Felix

    2015-10-01

    Our aim was to review short- and long-term outcomes of patients treated with bilateral adrenalectomy (BADx) in ACTH-dependent Cushing's syndrome. We reviewed the literature and analysed our experience with 53 patients treated with BADx since 1990 in our institution. BADx is considered if ACTH-dependent Cushing's syndrome is refractory to other treatment modalities. In Cushing's disease (CD), BADx is mainly used as an ultima ratio after transsphenoidal surgery and medical therapies have failed. In these cases, the time span between the first diagnosis of CD and treatment with BADx is relatively long (median 44 months). In ectopic Cushing's syndrome, the time from diagnosis to BADx is shorter (median 2 months), and BADx is often performed as an emergency procedure because of life-threatening complications of severe hypercortisolism. In both situations, BADx is relatively safe (median surgical morbidity 15%; median surgical mortality 3%) and provides excellent control of hypercortisolism; Cushing's-associated signs and symptoms are rapidly corrected, and co-morbidities are stabilised. In CD, the quality of life following BADx is rapidly improving, and long-term mortality is low. Specific long-term complications include the development of adrenal crisis and Nelson's syndrome. In ectopic Cushing's syndrome, long-term mortality is high but is mostly dependent on the prognosis of the underlying malignant neuroendocrine tumour. BADx is a relatively safe and highly effective treatment, and it provides adequate control of long-term co-morbidities associated with hypercortisolism. © 2015 European Society of Endocrinology.

  15. Exploring external time-dependent sources of H2O into Titan's atmosphere

    NASA Astrophysics Data System (ADS)

    Lara, Luisa-Maria; Lellouch, Emmanuel; González, Marta; Moreno, Raphael; Rengel, Miriam

    2014-05-01

    Recent observations (Cottini et al., 2012, and Moreno et al., 2012) and steady-state photochemical modelling (Moreno et al., 2012; Dobrijevic et al., 2014) indicate that the amounts of CO2 and H2O in Titan's stratosphere imply relatively inconsistent values of the OH/H2O input flux. Moreno et al. (2012) proposed that the oxygen source is time-variable, whereas Dobrijevic et al. (2014) arrived at the same conclusion as Moreno et al. (2012), namely that the H2O profile measured by the HSO (Herschel Space Observatory) is "inconsistent" with the CO2 abundance. Furthermore, Dobrijevic et al. (2014) also found that reconciliation was possible if the abundances reported by Cottini et al. (2012) are correct instead, though in this situation and for an Enceladus source, their model tended to overpredict the thermospheric abundance of H2O compared to the upper limit by Cui et al. (2009). We attempt to reconcile the observed H2O and CO2 profiles in Titan's atmosphere by considering several time-dependent scenarios for the influx/evolution of oxygen species. To explore this, we use a time-dependent photochemical model of Titan's atmosphere to calculate effective lifetimes and the response of Titan's oxygen compounds to changes in the oxygen input flux. We consider a time-variable Enceladus source, as well as the evolution of material delivered by a cometary impact. We will show results on effective H2O and CO2 lifetimes, on the feasibility of a time-variable Enceladus source, and on an additional H2O loss to the haze. Regarding CO2, we will analyse its production following a cometary impact. A summary of viable scenarios to explain the H2O/CO2 puzzle will be given. References: Moreno, R., Lellouch, E., Lara, L. M., et al. 2012, Icarus, 221, 753. Cottini, V., Nixon, C. A., Jennings, D. E., et al. 2012, Icarus, 220, 855. Cui, J., Yelle, R. V., Vuitton, V., et al. 2009, Icarus, 200, 581. Dobrijevic, M., Hébrard, E., Loison, J., and Hickson, K. 2014, Icarus, 228, 324.

  16. Functional requirements for reward-modulated spike-timing-dependent plasticity.

    PubMed

    Frémaux, Nicolas; Sprekeler, Henning; Gerstner, Wulfram

    2010-10-06

    Recent experiments have shown that spike-timing-dependent plasticity is influenced by neuromodulation. We derive theoretical conditions for successful learning of reward-related behavior for a large class of learning rules where Hebbian synaptic plasticity is conditioned on a global modulatory factor signaling reward. We show that all learning rules in this class can be separated into a term that captures the covariance of neuronal firing and reward and a second term that presents the influence of unsupervised learning. The unsupervised term, which is, in general, detrimental for reward-based learning, can be suppressed if the neuromodulatory signal encodes the difference between the reward and the expected reward-but only if the expected reward is calculated for each task and stimulus separately. If several tasks are to be learned simultaneously, the nervous system needs an internal critic that is able to predict the expected reward for arbitrary stimuli. We show that, with a critic, reward-modulated spike-timing-dependent plasticity is capable of learning motor trajectories with a temporal resolution of tens of milliseconds. The relation to temporal difference learning, the relevance of block-based learning paradigms, and the limitations of learning with a critic are discussed.
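    The rule class analysed above (a Hebbian eligibility term gated by reward minus expected reward, with the expected reward tracked per stimulus by an internal critic) can be sketched in a toy single-synapse setting. Everything below is illustrative, not the authors' spiking model: one stimulus is rewarded, the other is not, and the per-stimulus critic suppresses the unsupervised term:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy reward-modulated plasticity: weight update = eta * (R - R_expected) *
# eligibility, where eligibility plays the role of the Hebbian (STDP)
# covariance term, and R_expected is tracked per stimulus by a critic.
eta = 0.1          # learning rate
w = 0.0            # synaptic weight
r_expected = {"A": 0.0, "B": 0.0}   # per-stimulus critic, as the text requires
for trial in range(500):
    stim = "A" if rng.random() < 0.5 else "B"
    pre = 1.0
    p_fire = 1.0 / (1.0 + np.exp(-(w * pre - 0.5)))
    post = float(rng.random() < p_fire)          # stochastic postsynaptic spike
    eligibility = pre * (post - p_fire)          # mean-subtracted co-activity
    reward = post if stim == "A" else 0.0        # only stimulus A is rewarded
    w += eta * (reward - r_expected[stim]) * eligibility
    r_expected[stim] += 0.2 * (reward - r_expected[stim])  # running critic
```

    Because the critic is computed separately for each stimulus, updates on the unrewarded stimulus average to zero while the weight grows for the rewarded one; sharing a single expected reward across both stimuli would reintroduce the detrimental unsupervised term the paper describes.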

  17. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    NASA Astrophysics Data System (ADS)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for the assessment of a site. While this may be acceptable for the assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms.
The parameterization method combined with the analytical solutions for long-term mean dispersion are shown to produce results several orders of magnitude more efficiently with a loss of accuracy small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow for additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
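    The line-source decomposition the abstract refers to can be sketched as follows. The Gaussian ground-level line-source solution and the power-law vertical spread sigma_z = a*x**b are illustrative assumptions, not the AERMOD/ADMS parameterizations or the paper's hypergeometric solutions:

```python
import numpy as np

def line_source_conc(x, q_per_m=1.0, U=5.0, a=0.08, b=0.9):
    """Ground-level concentration a distance x downwind of an infinite
    crosswind ground-level line source: the analytical Gaussian solution
    with full ground reflection (factor 2)."""
    sigma_z = a * np.asarray(x, dtype=float) ** b
    return 2.0 * q_per_m / (np.sqrt(2.0 * np.pi) * sigma_z * U)

def area_source_conc(x_receptor, width, n_lines=200, **kwargs):
    """Approximate an area source as many crosswind line sources, mirroring
    the numerical decomposition that the analytical solutions replace."""
    strip = width / n_lines
    xs = np.linspace(0.0, width, n_lines, endpoint=False) + strip / 2.0
    downwind = x_receptor - xs
    valid = downwind > 0.0                 # only strips upwind of the receptor
    # each strip carries an equal share of a unit total emission rate
    return float(np.sum(line_source_conc(downwind[valid],
                                         q_per_m=1.0 / n_lines, **kwargs)))

c_200 = area_source_conc(500.0, 100.0, n_lines=200)
c_400 = area_source_conc(500.0, 100.0, n_lines=400)
```

    The cost of this scheme grows with the number of strips and must be repeated for every hour of meteorology, which is exactly the expense the paper's closed-form long-term-mean solutions avoid.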

  18. Spectrum of Quantized Energy for a Lengthening Pendulum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Jeong Ryeol; Song, Jinny; Hong, Seong Ju

    We considered a quantum system consisting of a simple pendulum whose string length increases at a steady rate. Since the string length is a function of time, this system is described by a time-dependent Hamiltonian. The invariant operator method is very useful for obtaining quantum solutions of time-dependent Hamiltonian systems such as this. The invariant operator of the system is represented in terms of the lowering operator a(t) and the raising operator a†(t). The Schrödinger solutions ψₙ(θ, t), whose spectrum is discrete, are obtained by means of the invariant operator. The expectation value of the Hamiltonian in the ψₙ(θ, t) state is the same as the quantum energy. At first, we considered only the θ² term in the Hamiltonian in order to evaluate the quantized energy. A numerical study of the quantum energy correction is also made by considering the angle variable not only up to the θ⁴ term but also up to the θ⁶ term in the Hamiltonian, using perturbation theory.

  19. The 2009-2010 MU radar head echo observation programme for sporadic and shower meteors: radiant densities and diurnal rates

    NASA Astrophysics Data System (ADS)

    Kero, J.; Szasz, C.; Nakamura, T.; Meisel, D. D.; Ueda, M.; Fujiwara, Y.; Terasawa, T.; Nishimura, K.; Watanabe, J.

    2012-09-01

    The aim of this paper is to give an overview of the monthly meteor head echo observations (528.8 h) conducted between 2009 June and 2010 December using the Shigaraki Middle and Upper atmosphere radar in Japan (34°.85 N, 136°.10 E). We present diurnal detection rates and radiant density plots from 18 separate observational campaigns, each lasting for at least one diurnal cycle. Our data comprise more than 106 000 meteors. All six recognized apparent sporadic meteor sources are discernible, and their average orbital distributions are presented in terms of geocentric velocity, semimajor axis, inclination and eccentricity. The north and south apex have radiant densities an order of magnitude higher than other apparent source regions. The diurnal detection rates show clear seasonal dependence. The main cause of the seasonal variation is the tilt of the Earth's axis, causing the elevation of the Earth's apex above the local horizon to change as the Earth revolves around the Sun. Yet, the meteor rate variation is not symmetric with respect to the equinoxes. When comparing the radiant density at different times of the year, and thus at different solar longitudes along the Earth's orbit, we have found that the north and south apex source regions fluctuate in strength.

  20. Ensemble hydrological forecast efficiency evolution over various issue dates and lead-time: case study for the Cheboksary reservoir (Volga River)

    NASA Astrophysics Data System (ADS)

    Gelfan, Alexander; Moreido, Vsevolod

    2017-04-01

    Ensemble hydrological forecasting allows for describing the uncertainty caused by variability of meteorological conditions in the river basin over the forecast lead-time. At the same time, in snowmelt-dependent river basins another significant source of uncertainty relates to variability of the initial conditions of the basin (snow water equivalent, soil moisture content, etc.) prior to the forecast issue. Accurate long-term hydrological forecasts are most crucial for large water management systems, such as the Cheboksary reservoir (catchment area 374 000 sq. km) located on the Middle Volga River in Russia. Accurate forecasts of water inflow volume, maximum discharge and other flow characteristics are of great value for this basin, especially before the beginning of the spring freshet season, which lasts here from April to June. The semi-distributed hydrological model ECOMAG was used to develop a long-term ensemble forecast of daily water inflow into the Cheboksary reservoir. To describe the variability of the meteorological conditions and construct an ensemble of possible weather scenarios for the lead-time of the forecast, two approaches were applied. The first utilizes the 50 weather scenarios observed in the previous years (similar to the ensemble streamflow prediction (ESP) procedure); the second uses 1000 synthetic scenarios simulated by a stochastic weather generator. We investigated the evolution of forecast uncertainty reduction, expressed as forecast efficiency, over various consecutive forecast issue dates and lead-times. We analyzed the Nash-Sutcliffe efficiency of inflow hindcasts for the period 1982 to 2016, issued from 1 March at 15-day intervals for lead-times of 1 to 6 months. This resulted in a matrix of forecast efficiency for issue dates versus lead-time that allows the predictability of the basin to be identified. The matrix was constructed separately for the observed and synthetic weather ensembles.
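    The skill score used above is the Nash-Sutcliffe efficiency, which compares forecast errors against the spread of the observations around their mean. A minimal sketch (with made-up inflow numbers, scoring the ensemble mean):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 for a perfect forecast, 0 for a forecast
    no better than the observed climatological mean, negative if worse."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative observed inflows and a two-member forecast ensemble.
obs = np.array([3.0, 5.0, 9.0, 6.0, 4.0])
ens = np.array([[2.5, 4.0, 8.0, 6.5, 4.5],
                [3.5, 6.0, 9.5, 5.5, 3.5]])
score = nse(obs, ens.mean(axis=0))
```

    Evaluating this score for each issue date and lead-time yields exactly the kind of efficiency matrix the abstract describes.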

  1. Understanding the amplitudes of noise correlation measurements

    USGS Publications Warehouse

    Tsai, Victor C.

    2011-01-01

    Cross correlation of ambient seismic noise is known to result in time series from which station-station travel-time measurements can be made. Part of the reason that these cross-correlation travel-time measurements are reliable is that there exists a theoretical framework that quantifies how these travel times depend on the features of the ambient noise. However, corresponding theoretical results do not currently exist to describe how the amplitudes of the cross correlation depend on such features. For example, currently it is not possible to take a given distribution of noise sources and calculate the cross correlation amplitudes one would expect from such a distribution. Here, we provide a ray-theoretical framework for calculating cross correlations. This framework differs from previous work in that it explicitly accounts for attenuation as well as the spatial distribution of sources and therefore can address the issue of quantifying amplitudes in noise correlation measurements. After introducing the general framework, we apply it to two specific problems. First, we show that we can quantify the amplitudes of coherency measurements, and find that the decay of coherency with station-station spacing depends crucially on the distribution of noise sources. We suggest that researchers interested in performing attenuation measurements from noise coherency should first determine how the dominant sources of noise are distributed. Second, we show that we can quantify the signal-to-noise ratio of noise correlations more precisely than previous work, and that these signal-to-noise ratios can be estimated for given situations prior to the deployment of seismometers. It is expected that there are applications of the theoretical framework beyond the two specific cases considered, but these applications await future work.
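
As a toy illustration of the travel-time side of such measurements (not the amplitude theory developed in the paper), the sketch below cross-correlates two synthetic noise records that share a delayed common source and recovers the inter-station delay from the correlation peak; all lengths, noise levels, and the delay are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_delay = 4096, 37                  # record length and true delay, in samples

source = rng.normal(size=n + true_delay)  # common ambient "noise" wavefield
sta_a = source[true_delay:] + 0.5 * rng.normal(size=n)  # station A (nearer the source)
sta_b = source[:n] + 0.5 * rng.normal(size=n)           # station B: same signal, delayed

# Full cross-correlation; the lag of the peak estimates the travel time A -> B.
cc = np.correlate(sta_b, sta_a, mode="full")
est_delay = int(np.argmax(cc)) - (n - 1)

# A crude signal-to-noise ratio: peak height over the spread of off-peak lags.
off_peak = np.abs(np.arange(cc.size) - (n - 1) - est_delay) > 100
snr = cc.max() / np.std(cc[off_peak])
```

The paper's point is that while this travel-time estimate is robust, the peak's *amplitude* (and hence the coherency decay with distance) depends on attenuation and on how the noise sources are distributed, which this toy setup does not model.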

  2. Dynamic Response of a Magnetized Plasma to AN External Source: Application to Space and Solid State Plasmas

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-Bei

    This dissertation examines the dynamic response of a magnetoplasma to an external time-dependent current source. To achieve this goal, a new method combining analytic and numerical techniques was developed to study the dynamic response of a 3-D magnetoplasma to a time-dependent current source imposed across the magnetic field. The set of cold electron and/or ion plasma equations and Maxwell's equations are first solved analytically in (k, omega) space; inverse Laplace and 3-D complex Fast Fourier Transform (FFT) techniques are subsequently used to numerically transform the radiation fields and plasma currents from the (k, omega) space to the (r, t) space. The dynamic responses of the electron plasma and of the compensated two-component plasma to external current sources are studied separately. The results show that the electron plasma responds to a time-varying current source imposed across the magnetic field by exciting whistler/helicon waves and forming an expanding local current loop, induced by field-aligned plasma currents. The current loop consists of two anti-parallel field-aligned current channels concentrated at the ends of the imposed current and a cross-field current region connecting these channels. The latter is driven by an electron Hall drift. A compensated two-component plasma responds to the same current source as follows: (a) for slow time scales tau > Omega_i^{-1}, it generates Alfven waves and forms a non-local current loop in which the ion polarization currents dominate the cross-field current; (b) for fast time scales tau < Omega_i^{-1}, the dynamic response of the compensated two-component plasma is the same as that of the electron plasma. The characteristics of the current closure region are determined by the background plasma density, the magnetic field and the time scale of the current source. This study has applications to a diverse range of space and solid state plasma problems.
    These problems include current closure in emf-inducing tethered satellite systems (TSS), generation of ELF/VLF waves by ionospheric heating, current closure and quasineutrality in thin magnetopause transitions, and short electromagnetic pulse generation in solid state plasmas. The cross-field current in TSS builds up on a time scale corresponding to the whistler waves and results in local current closure. Amplitude-modulated HF ionospheric heating generates ELF/VLF waves by forming a horizontal magnetic dipole. The dipole is formed by the current closure in the modified region. For a thin transition, the time-dependent cross-field polarization field at the magnetopause could be neutralized by the formation of field-aligned current loops that close by a cross-field electron Hall current. A moving current source in a solid state plasma results in microwave emission if the speed of the source exceeds the local phase velocity of the helicon or Alfven waves. Detailed analysis of the above problems is presented in the thesis.

  3. On a two-phase Hele-Shaw problem with a time-dependent gap and distributions of sinks and sources

    NASA Astrophysics Data System (ADS)

    Savina, Tatiana; Akinyemi, Lanre; Savin, Avital

    2018-01-01

    A two-phase Hele-Shaw problem with a time-dependent gap describes the evolution of the interface separating two fluids of different viscosities sandwiched between two plates. In addition to the change in the gap width of the Hele-Shaw cell, the interface is driven by the presence of special distributions of sinks and sources located in both the interior and exterior domains. The effect of surface tension is neglected. Using the Schwarz function approach, we give examples of exact solutions when the interface belongs to a certain family of algebraic curves and the curves do not form cusps. The family of curves is defined by the initial shape of the free boundary.

  4. Study of the effect of electron irradiation on the density of the activated sludge in aqueous solution

    NASA Astrophysics Data System (ADS)

    Kupchishin, A. I.; Niyazov, M. N.; Taipova, B. G.; Voronova, N. A.; Khodarina, N. N.

    2018-01-01

    Complex experimental studies of the effect of electron irradiation on the deposition rate of activated sludge in aqueous systems have been carried out by the optical method. The obtained dependences of density (ρ) on time (t) are of the same nature for different radiation sources. The experimental curves of the dependence of the activated sludge density on time are satisfactorily described by an exponential model.
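
For illustration, an exponential settling model of the kind the abstract refers to, ρ(t) = ρ∞ + (ρ0 − ρ∞)·exp(−t/τ), can be fitted with a log-linear least-squares step when the asymptotic density is known. All parameter values and units below are invented for the sketch, not the published measurements.

```python
import numpy as np

rho0, rho_inf, tau = 1.20, 1.02, 35.0        # illustrative values (g/cm^3, minutes)
t = np.linspace(0.0, 120.0, 25)
rho = rho_inf + (rho0 - rho_inf) * np.exp(-t / tau)

# With rho_inf known, ln(rho - rho_inf) is linear in t with slope -1/tau,
# so an ordinary least-squares line fit recovers the settling time scale.
slope, intercept = np.polyfit(t, np.log(rho - rho_inf), 1)
tau_est = -1.0 / slope
```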

  5. Near real time inverse source modeling and stress field assessment: the requirement of a volcano fast response system

    NASA Astrophysics Data System (ADS)

    Shirzaei, Manoochehr; Walter, Thomas

    2010-05-01

    Volcanic unrest and eruptions are among the major natural hazards, next to earthquakes, floods, and storms. It has been shown that many volcanic and tectonic unrest episodes are triggered by changes in the stress field induced by nearby seismic and magmatic activities. In this study, as part of a mobile volcano fast response system called "Exupery" (www.exupery-vfrs.de), we present an arrangement for semi-real-time assessment of the stress field excited by volcanic activity. This system includes: (1) an approach called "WabInSAR" dedicated to advanced processing of satellite data and providing an accurate time series of the surface deformation [1, 2]; (2) a time-dependent inverse source modeling method to investigate the source of volcanic unrest using observed surface deformation data [3, 4]; (3) the assessment of changes in the stress field induced by magmatic activity at nearby volcanic and tectonic systems. This system is implemented in a recursive manner that allows handling large 3-D data sets in an efficient and robust way, which is a requirement of an early warning system. We have applied and validated this arrangement on Mauna Loa volcano, Hawaii Island, to assess the influence of the time-dependent activities of Mauna Loa on earthquake occurrence at the Kaoiki seismic zone. References: [1] M. Shirzaei and T. R. Walter, "Wavelet based InSAR (WabInSAR): a new advanced time series approach for accurate spatiotemporal surface deformation monitoring," IEEE, submitted, 2010. [2] M. Shirzaei and T. R. Walter, "Deformation interplay at Hawaii Island through InSAR time series and modeling," J. Geophys. Res., submitted, 2009. [3] M. Shirzaei and T. R. Walter, "Randomly Iterated Search and Statistical Competency (RISC) as powerful inversion tools for deformation source modeling: application to volcano InSAR data," J. Geophys. Res., vol. 114, B10401, doi:10.1029/2008JB006071, 2009. [4] M. Shirzaei and T. R. Walter, "Genetic algorithm combined with Kalman filter as powerful tool for nonlinear time dependent inverse modelling: Application to volcanic deformation time series," J. Geophys. Res., submitted, 2010.

  6. A model for homeopathic remedy effects: low dose nanoparticles, allostatic cross-adaptation, and time-dependent sensitization in a complex adaptive system

    PubMed Central

    2012-01-01

    Background This paper proposes a novel model for homeopathic remedy action on living systems. Research indicates that homeopathic remedies (a) contain measurable source and silica nanoparticles heterogeneously dispersed in colloidal solution; (b) act by modulating biological function of the allostatic stress response network; (c) evoke biphasic actions on living systems via organism-dependent adaptive and endogenously amplified effects; (d) improve systemic resilience. Discussion The proposed active components of homeopathic remedies are nanoparticles of source substance in water-based colloidal solution, not bulk-form drugs. Nanoparticles have unique biological and physico-chemical properties, including increased catalytic reactivity, protein and DNA adsorption, bioavailability, dose-sparing, electromagnetic, and quantum effects different from bulk-form materials. Trituration and/or liquid succussions during classical remedy preparation create “top-down” nanostructures. Plants can biosynthesize remedy-templated silica nanostructures. Nanoparticles stimulate hormesis, a beneficial low-dose adaptive response. Homeopathic remedies prescribed in low doses spaced intermittently over time act as biological signals that stimulate the organism’s allostatic biological stress response network, evoking nonlinear modulatory, self-organizing change. Potential mechanisms include time-dependent sensitization (TDS), a type of adaptive plasticity/metaplasticity involving progressive amplification of host responses, which reverse direction and oscillate at physiological limits. To mobilize hormesis and TDS, the remedy must be appraised as a salient, but low level, novel threat, stressor, or homeostatic disruption for the whole organism. Silica nanoparticles adsorb remedy source and amplify effects. 
Properly-timed remedy dosing elicits disease-primed compensatory reversal in direction of maladaptive dynamics of the allostatic network, thus promoting resilience and recovery from disease. Summary Homeopathic remedies are proposed as source nanoparticles that mobilize hormesis and time-dependent sensitization via non-pharmacological effects on specific biological adaptive and amplification mechanisms. The nanoparticle nature of remedies would distinguish them from conventional bulk drugs in structure, morphology, and functional properties. Outcomes would depend upon the ability of the organism to respond to the remedy as a novel stressor or heterotypic biological threat, initiating reversals of cumulative, cross-adapted biological maladaptations underlying disease in the allostatic stress response network. Systemic resilience would improve. This model provides a foundation for theory-driven research on the role of nanomaterials in living systems, mechanisms of homeopathic remedy actions and translational uses in nanomedicine. PMID:23088629

  7. A model for homeopathic remedy effects: low dose nanoparticles, allostatic cross-adaptation, and time-dependent sensitization in a complex adaptive system.

    PubMed

    Bell, Iris R; Koithan, Mary

    2012-10-22

    This paper proposes a novel model for homeopathic remedy action on living systems. Research indicates that homeopathic remedies (a) contain measurable source and silica nanoparticles heterogeneously dispersed in colloidal solution; (b) act by modulating biological function of the allostatic stress response network; (c) evoke biphasic actions on living systems via organism-dependent adaptive and endogenously amplified effects; (d) improve systemic resilience. The proposed active components of homeopathic remedies are nanoparticles of source substance in water-based colloidal solution, not bulk-form drugs. Nanoparticles have unique biological and physico-chemical properties, including increased catalytic reactivity, protein and DNA adsorption, bioavailability, dose-sparing, electromagnetic, and quantum effects different from bulk-form materials. Trituration and/or liquid succussions during classical remedy preparation create "top-down" nanostructures. Plants can biosynthesize remedy-templated silica nanostructures. Nanoparticles stimulate hormesis, a beneficial low-dose adaptive response. Homeopathic remedies prescribed in low doses spaced intermittently over time act as biological signals that stimulate the organism's allostatic biological stress response network, evoking nonlinear modulatory, self-organizing change. Potential mechanisms include time-dependent sensitization (TDS), a type of adaptive plasticity/metaplasticity involving progressive amplification of host responses, which reverse direction and oscillate at physiological limits. To mobilize hormesis and TDS, the remedy must be appraised as a salient, but low level, novel threat, stressor, or homeostatic disruption for the whole organism. Silica nanoparticles adsorb remedy source and amplify effects. Properly-timed remedy dosing elicits disease-primed compensatory reversal in direction of maladaptive dynamics of the allostatic network, thus promoting resilience and recovery from disease. 
Homeopathic remedies are proposed as source nanoparticles that mobilize hormesis and time-dependent sensitization via non-pharmacological effects on specific biological adaptive and amplification mechanisms. The nanoparticle nature of remedies would distinguish them from conventional bulk drugs in structure, morphology, and functional properties. Outcomes would depend upon the ability of the organism to respond to the remedy as a novel stressor or heterotypic biological threat, initiating reversals of cumulative, cross-adapted biological maladaptations underlying disease in the allostatic stress response network. Systemic resilience would improve. This model provides a foundation for theory-driven research on the role of nanomaterials in living systems, mechanisms of homeopathic remedy actions and translational uses in nanomedicine.

  8. Alignment of leading-edge and peak-picking time of arrival methods to obtain accurate source locations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roussel-Dupre, R.; Symbalisty, E.; Fox, C.

    2009-08-01

    The location of a radiating source can be determined by time-tagging the arrival of the radiated signal at a network of spatially distributed sensors. The accuracy of this approach depends strongly on the particular time-tagging algorithm employed at each of the sensors. If different techniques are used across the network, then the time tags must be referenced to a common fiducial for maximum location accuracy. In this report we derive the time corrections needed to temporally align leading-edge time-tagging techniques with peak-picking algorithms. We focus on broadband radio frequency (RF) sources, an ionospheric propagation channel, and narrowband receivers, but the final results can be generalized to apply to any source, propagation environment, and sensor. Our analytic results are checked against numerical simulations for a number of representative cases and agree with the specific leading-edge algorithm studied independently by Kim and Eng (1995) and Pongratz (2005 and 2007).
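
The kind of correction derived in the report can be illustrated on a synthetic pulse: a leading-edge tag (first threshold crossing) systematically precedes a peak-picking tag, and the offset between the two is the alignment term to add to the leading-edge tags. The pulse shape, units, and 10% threshold below are assumptions of this sketch, not the report's receiver model.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 10001)           # time axis, arbitrary units
t0, w = 5.0, 0.8
pulse = np.exp(-((t - t0) ** 2) / (2.0 * w ** 2))   # synthetic received waveform

threshold = 0.1 * pulse.max()               # leading-edge trigger at 10% of peak
t_leading = t[np.argmax(pulse >= threshold)]  # first sample above threshold
t_peak = t[np.argmax(pulse)]                  # peak-picking time tag

correction = t_peak - t_leading             # add to leading-edge tags to align methods
```

For this Gaussian pulse the offset is w·sqrt(2·ln 10) ≈ 1.72 time units; on real data the correction depends on pulse shape and receiver bandwidth, which is why the report derives it for a specific signal chain.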

  9. Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Hou, Zhangshuan; Meng, Da

    2016-07-17

    In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches focus mainly on maintaining the inter-dependency between multiple geographically related areas. They are applied to cross-correlated load time series as well as to their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.
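
The principal-component step is easy to demonstrate: for several geographically related areas that share a daily cycle, most of the cross-area variance collapses onto the first component. The four-area series below are simulated for illustration, not the study's load data.

```python
import numpy as np

rng = np.random.default_rng(2)
hours = np.arange(24 * 14)                                 # two weeks, hourly
daily = 100.0 + 20.0 * np.sin(2.0 * np.pi * hours / 24.0)  # shared daily cycle

# Four correlated areas: the common cycle at different scales plus local noise.
scales = (1.0, 0.8, 1.2, 0.9)
loads = np.stack([s * daily + rng.normal(0.0, 2.0, hours.size) for s in scales])

# PCA via SVD of the mean-removed area-by-hour matrix.
centered = loads - loads.mean(axis=1, keepdims=True)
_, sing, _ = np.linalg.svd(centered, full_matrices=False)
explained = sing ** 2 / np.sum(sing ** 2)   # variance fraction per component
```

Drawing forecast-error realizations in this reduced space (simulating only the leading components) preserves the inter-area correlation that independent per-area sampling would destroy.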

  10. 40 CFR 35.3555 - Intended Use Plan (IUP).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... description of the financial planning process undertaken for the Fund and the impact of funding decisions on the long-term financial health of the Fund. (4) Financial status. The IUP must describe the sources... project; the expected terms of financial assistance based on the best information available at the time...

  11. Two-stage unified stretched-exponential model for time-dependence of threshold voltage shift under positive-bias-stresses in amorphous indium-gallium-zinc oxide thin-film transistors

    NASA Astrophysics Data System (ADS)

    Jeong, Chan-Yong; Kim, Hee-Joong; Hong, Sae-Young; Song, Sang-Hun; Kwon, Hyuck-In

    2017-08-01

    In this study, we show that the two-stage unified stretched-exponential model can describe the time-dependence of the threshold voltage shift (ΔV_TH) under long-term positive-bias-stress more exactly than the traditional stretched-exponential model in amorphous indium-gallium-zinc oxide (a-IGZO) thin-film transistors (TFTs). ΔV_TH is dominated by electron trapping at short stress times, and the contribution of trap-state generation becomes significant as the stress time increases. The two-stage unified stretched-exponential model can provide useful information not only for evaluating the long-term electrical stability and lifetime of the a-IGZO TFT but also for understanding the stress-induced degradation mechanism in a-IGZO TFTs.
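
A minimal numerical sketch of a two-stage model of this type: the total shift is taken as the sum of two stretched-exponential terms, a fast charge-trapping stage and a slow trap-generation stage. The functional form is the standard stretched exponential; all parameter values are illustrative, not fitted to the paper's devices.

```python
import numpy as np

def stretched_exp(t, dv_sat, tau, beta):
    """Stretched-exponential shift: dv_sat * (1 - exp(-(t/tau)^beta))."""
    return dv_sat * (1.0 - np.exp(-(t / tau) ** beta))

t = np.logspace(0, 5, 200)                      # stress time, s

trapping = stretched_exp(t, 1.5, 5.0e2, 0.45)   # stage 1: electron trapping (fast)
creation = stretched_exp(t, 2.0, 5.0e4, 0.60)   # stage 2: trap-state generation (slow)
dvth = trapping + creation                      # two-stage total shift, V
```

Early in the stress the first term dominates, while the second takes over at long times, mirroring the trapping-then-generation picture in the abstract.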

  12. Orientation dependence of temporal and spectral properties of high-order harmonics in solids

    NASA Astrophysics Data System (ADS)

    Wu, Mengxi; You, Yongsing; Ghimire, Shambhu; Reis, David A.; Browne, Dana A.; Schafer, Kenneth J.; Gaarde, Mette B.

    2017-12-01

    We investigate the connection between crystal symmetry and temporal and spectral properties of high-order harmonics in solids. We calculate the orientation-dependent harmonic spectrum driven by an intense, linearly polarized infrared laser field, using a momentum-space description of the generation process in terms of strong-field-driven electron dynamics on the band structure. We show that the orientation dependence of both the spectral yield and the subcycle time profile of the harmonic radiation can be understood in terms of the coupling strengths and relative curvatures of the valence band and the low-lying conduction bands. In particular, we show that in some systems this gives rise to a rapid shift of a quarter optical cycle in the timing of harmonics in the secondary plateau as the crystal is rotated relative to the laser polarization. We address recent experimental results in MgO [Y. S. You et al., Nat. Phys. 13, 345 (2017)., 10.1038/nphys3955] and show that the observed change in orientation dependence for the highest harmonics can be interpreted in the momentum space picture in terms of the contributions of several different conduction bands.

  13. Experiment on search for neutron-antineutron oscillations using a projected UCN source at the WWR-M reactor

    NASA Astrophysics Data System (ADS)

    Fomin, A. K.; Serebrov, A. P.; Zherebtsov, O. M.; Leonova, E. N.; Chaikovskii, M. E.

    2017-01-01

    We propose an experiment to search for neutron-antineutron oscillations based on the storage of ultracold neutrons (UCN) in a material trap. The sensitivity of the experiment depends mostly on the trap size and the amount of UCN in it. At the Petersburg Nuclear Physics Institute (PNPI), a high-intensity UCN source is planned at the WWR-M reactor, which is expected to provide a UCN density 2-3 orders of magnitude higher than existing sources. The results of simulations of the designed experimental scheme show that the sensitivity can be increased by ˜10-40 times compared to the sensitivity of the previous experiment, depending on the model of neutron reflection from the walls.

  14. Reducing mortality risk by targeting specific air pollution sources: Suva, Fiji.

    PubMed

    Isley, C F; Nelson, P F; Taylor, M P; Stelcer, E; Atanacio, A J; Cohen, D D; Mani, F S; Maata, M

    2018-01-15

    The health implications of air pollution vary depending on the pollutant source. This work determines the value, in terms of reduced mortality, of reducing the ambient particulate matter (PM2.5: effective aerodynamic diameter 2.5 μm or less) concentration due to different emission sources. Suva, a Pacific Island city with substantial input from combustion sources, is used as a case study. Elemental concentrations were determined, by ion beam analysis, for PM2.5 samples from Suva spanning one year. Sources of PM2.5 were quantified by positive matrix factorisation. A review of recent literature was carried out to delineate the mortality risk associated with these sources. Risk factors were then applied to Suva to calculate the possible mortality reduction that may be achieved through reduction in pollutant levels. Higher risk ratios for black carbon and sulphur resulted in mortality predictions for PM2.5 from fossil fuel combustion, road vehicle emissions and waste burning that surpass predictions for these sources based on the health risk of PM2.5 mass alone. Predicted mortality for Suva from fossil fuel smoke exceeds the national toll from road accidents in Fiji. The greatest benefit for Suva, in terms of reduced mortality, is likely to be accomplished by reducing emissions from fossil fuel combustion (diesel), vehicles and waste burning. Copyright © 2017. Published by Elsevier B.V.
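
Calculations of this kind typically use a log-linear concentration-response function: a relative risk RR = exp(β·ΔC) and an attributable fraction (RR − 1)/RR applied to baseline mortality. The sketch below uses that standard form with invented inputs; the coefficient, concentration change, and baseline are hypothetical and are not the paper's values.

```python
import math

def avoided_deaths(delta_c, beta, baseline):
    """Log-linear concentration-response: RR = exp(beta * delta_c);
    attributable fraction (RR - 1)/RR applied to baseline deaths."""
    rr = math.exp(beta * delta_c)
    return (rr - 1.0) / rr * baseline

# Illustrative inputs only: removing 5 ug/m3 of combustion-related PM2.5 with a
# hypothetical source-specific coefficient beta = 0.012 per ug/m3, against
# 1000 baseline deaths per year.
avoided = avoided_deaths(5.0, 0.012, 1000.0)
```

Source-specific coefficients (e.g., for black carbon) are larger than the all-PM2.5 mass coefficient, which is why the paper's source-resolved mortality estimates exceed the mass-based ones.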

  15. Transmodal comparison of auditory, motor, and visual post-processing with and without intentional short-term memory maintenance.

    PubMed

    Bender, Stephan; Behringer, Stephanie; Freitag, Christine M; Resch, Franz; Weisbrod, Matthias

    2010-12-01

    To elucidate the contributions of modality-dependent post-processing in auditory, motor and visual cortical areas to short-term memory, we compared late negative waves (N700) during the post-processing of single lateralized stimuli which were separated by long intertrial intervals across the auditory, motor and visual modalities. Tasks either required or competed with attention to the post-processing of preceding events, i.e. active short-term memory maintenance. The N700 indicated that cortical post-processing outlasted short movements, as well as short auditory or visual stimuli, by over half a second without intentional short-term memory maintenance. Modality-specific topographies pointed towards sensory (respectively motor) generators with comparable time courses across the different modalities. The lateralization and amplitude of the auditory/motor/visual N700 were enhanced by active short-term memory maintenance compared to attention to current perceptions or passive stimulation. The memory-related N700 increase followed the characteristic time course and modality-specific topography of the N700 without intentional memory maintenance. Memory-maintenance-related lateralized negative potentials may be related to a less lateralized modality-dependent post-processing N700 component which occurs also without intentional memory maintenance (automatic memory trace or effortless attraction of attention). Encoding to short-term memory may involve controlled attention to modality-dependent post-processing. Similar short-term memory processes may exist in the auditory, motor and visual systems. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  16. Stress-strain state of the lithosphere in the southern Baikal region and northern Mongolia from data on seismic moments of earthquakes

    NASA Astrophysics Data System (ADS)

    Klyuchevskii, A. V.; Dem'yanovich, V. M.

    2006-05-01

    Investigation and understanding of the present-day geodynamic situation are of key importance for elucidating the laws and evolution of the seismic process in a seismically active region. In this work, seismic moments of nearly 26,000 earthquakes with K_p ≥ 7 (M_LH ≥ 2) that occurred in the southern Baikal region and northern Mongolia (SBNM) (48°-54°N, 96°-108°E) from 1968 through 1994 are determined from the amplitudes and periods of maximum displacements in transverse body waves. The resulting set of seismic moments is used for a spatial-temporal analysis of the stress-strain state of the SBNM lithosphere. The stress fields of the Baikal rift and the India-Asia collision zone are supposed to interact in the region studied. Since the seismic moment of a tectonic earthquake depends on the type of motion in the source, seismic moments and focal mechanisms of earthquakes belonging to four long-term aftershock and swarm clusters of shocks in the Baikal region were used to “calibrate” average seismic moments in accordance with the source faulting type. The study showed that the stress-strain state of the SBNM lithosphere is spatially inhomogeneous and nonstationary. A space-time discrepancy is observed in the formation of faulting types in the sources of weak (K_p = 7 and 8) and stronger (K_p ≥ 9) earthquakes. This discrepancy is interpreted in terms of rock fracture at various hierarchical levels of ruptures on differently oriented general, regional, and local faults. A gradual increase and an abrupt, nearly pulsed, decrease in the vertical component of the stress field S_v is a characteristic feature of the time variations. The zones where the stress S_v prevails are localized at “singular points” of the lithosphere. Shocks of various energy classes in these zones are dominated by the normal-fault slip mechanism.
    For earthquakes with K_p = 9, the source faulting changes with depth from the strike-slip type to the normal-strike-slip and normal types, suggesting an increase in S_v. On the whole, the results of this study are consistent with the synergetics of open unstable dissipative systems and can be used to interpret the main observable variations in the stress-strain state of the lithosphere in terms of spatiotemporal variations in the vertical component of the stress field S_v. This suggests the influence of rifting on the present-day geodynamic processes in the SBNM lithosphere.

  17. Associative spike timing-dependent potentiation of the basal dendritic excitatory synapses in the hippocampus in vivo.

    PubMed

    Fung, Thomas K; Law, Clayton S; Leung, L Stan

    2016-06-01

    Spike timing-dependent plasticity in the hippocampus has rarely been studied in vivo. Using extracellular potential and current source density analysis in urethane-anesthetized adult rats, we studied synaptic plasticity at the basal dendritic excitatory synapse in CA1 after excitation-spike (ES) pairing; E was a weak basal dendritic excitation evoked by stratum oriens stimulation, and S was a population spike evoked by stratum radiatum apical dendritic excitation. We hypothesize that positive ES pairing-generating synaptic excitation before a spike-results in long-term potentiation (LTP) while negative ES pairing results in long-term depression (LTD). Pairing (50 pairs at 5 Hz) at ES intervals of -10 to 0 ms resulted in significant input-specific LTP of the basal dendritic excitatory sink, lasting 60-120 min. Pairing at +10- to +20-ms ES intervals, or unpaired 5-Hz stimulation, did not induce significant basal dendritic or apical dendritic LTP or LTD. No basal dendritic LTD was found after stimulation of stratum oriens with 200 pairs of high-intensity pulses at 25-ms interval. Pairing-induced LTP was abolished by pretreatment with an N-methyl-d-aspartate receptor antagonist, 3-(2-carboxypiperazin-4-yl)-propyl-1-phosphonic acid (CPP), which also reduced spike bursting during 5-Hz pairing. Pairing at 0.5 Hz did not induce spike bursts or basal dendritic LTP. In conclusion, ES pairing at 5 Hz resulted in input-specific basal dendritic LTP at ES intervals of -10 ms to 0 ms but no LTD at ES intervals of -20 to +20 ms. Associative LTP likely occurred because of theta-rhythmic coincidence of subthreshold excitation with a backpropagated spike burst, which are conditions that can occur naturally in the hippocampus. Copyright © 2016 the American Physiological Society.

  18. Associative spike timing-dependent potentiation of the basal dendritic excitatory synapses in the hippocampus in vivo

    PubMed Central

    Fung, Thomas K.; Law, Clayton S.

    2016-01-01

    Spike timing-dependent plasticity in the hippocampus has rarely been studied in vivo. Using extracellular potential and current source density analysis in urethane-anesthetized adult rats, we studied synaptic plasticity at the basal dendritic excitatory synapse in CA1 after excitation-spike (ES) pairing; E was a weak basal dendritic excitation evoked by stratum oriens stimulation, and S was a population spike evoked by stratum radiatum apical dendritic excitation. We hypothesize that positive ES pairing—generating synaptic excitation before a spike—results in long-term potentiation (LTP) while negative ES pairing results in long-term depression (LTD). Pairing (50 pairs at 5 Hz) at ES intervals of −10 to 0 ms resulted in significant input-specific LTP of the basal dendritic excitatory sink, lasting 60–120 min. Pairing at +10- to +20-ms ES intervals, or unpaired 5-Hz stimulation, did not induce significant basal dendritic or apical dendritic LTP or LTD. No basal dendritic LTD was found after stimulation of stratum oriens with 200 pairs of high-intensity pulses at 25-ms interval. Pairing-induced LTP was abolished by pretreatment with an N-methyl-d-aspartate receptor antagonist, 3-(2-carboxypiperazin-4-yl)-propyl-1-phosphonic acid (CPP), which also reduced spike bursting during 5-Hz pairing. Pairing at 0.5 Hz did not induce spike bursts or basal dendritic LTP. In conclusion, ES pairing at 5 Hz resulted in input-specific basal dendritic LTP at ES intervals of −10 ms to 0 ms but no LTD at ES intervals of −20 to +20 ms. Associative LTP likely occurred because of theta-rhythmic coincidence of subthreshold excitation with a backpropagated spike burst, which are conditions that can occur naturally in the hippocampus. PMID:27052581

  19. An analytic solution for the minimal bathtub toy model: challenges in the star formation history of high-z galaxies

    NASA Astrophysics Data System (ADS)

    Dekel, Avishai; Mandelker, Nir

    2014-11-01

    We study the minimal `bathtub' toy model as a tool for capturing key processes of galaxy evolution and identifying robust successes and challenges in reproducing high-z observations. The source and sink terms of the continuity equations for gas and stars are expressed in simple terms from first principles. The assumed dependence of star formation rate (SFR) on gas mass self-regulates the system into a unique asymptotic behaviour, which is approximated by an analytic quasi-steady-state (QSS) solution. We address the validity of the QSS at different epochs independent of earlier conditions. At high z, where the accretion is gaseous, the specific SFR (sSFR) is predicted to be sSFR ≃ [(1 + z)/3]^{5/2} Gyr^{-1}, slightly above the cosmological specific accretion rate, as observed at z = 3-8. The gas fraction is expected to decline slowly, and the observations constrain the SFR efficiency per dynamical time to ε ≃ 0.02. The stellar-to-virial mass ratio f_sv is predicted to be constant in time, and the observed value requires an outflow mass-loading factor η ≃ 1-3, depending on the penetration efficiency of gas into the galaxy. However, at z ˜ 2, where stars are also accreted through mergers, there is a conflict between model and observations. The model that maximizes the sSFR, with the outflows fully recycled, underestimates the sSFR by a factor of ˜3 and overestimates f_sv. With strong outflows, the model can match the observed f_sv but then it underestimates the sSFR by an order of magnitude. We discuss potential remedies including a bias due to the exclusion of quenched galaxies.
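
The QSS scaling quoted in the abstract is simple enough to evaluate directly; the helper below just encodes sSFR ≃ [(1 + z)/3]^{5/2} Gyr^{-1}.

```python
def ssfr_qss(z):
    """Quasi-steady-state bathtub prediction for the specific SFR, in Gyr^-1."""
    return ((1.0 + z) / 3.0) ** 2.5

# Evaluate at a few representative redshifts.
predictions = {z: ssfr_qss(z) for z in (2.0, 4.0, 6.0, 8.0)}
```

The abstract's point is that this curve sits slightly above the cosmological specific accretion rate at z = 3-8, while at z ~ 2 the same scaling underpredicts the observed sSFR.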

  20. Global and Regional Temperature-change Potentials for Near-term Climate Forcers

    NASA Technical Reports Server (NTRS)

    Collins, W.J.; Fry, M.M.; Yu, H.; Fuglestvedt, J. S.; Shindell, D. T.; West, J. J.

    2013-01-01

    We examine the climate effects of the emissions of near-term climate forcers (NTCFs) from 4 continental regions (East Asia, Europe, North America and South Asia) using results from the Task Force on Hemispheric Transport of Air Pollution Source-Receptor global chemical transport model simulations. We address 3 aerosol species (sulphate, particulate organic matter and black carbon (BC)) and 4 ozone precursors (methane, reactive nitrogen oxides (NOx), volatile organic compounds and carbon monoxide). We calculate the global climate metrics: global warming potentials (GWPs) and global temperature change potentials (GTPs). For the aerosols these metrics are simply time-dependent scalings of the equilibrium radiative forcings. The GTPs decrease more rapidly with time than the GWPs. The aerosol forcings and hence climate metrics have only a modest dependence on emission region. The metrics for ozone precursors include the effects on the methane lifetime. The impacts via methane are particularly important for the 20-yr GTPs. Emissions of NOx and VOCs from South Asia have GWPs and GTPs of higher magnitude than from the other Northern Hemisphere regions. The analysis is further extended by examining the temperature-change impacts in 4 latitude bands, and calculating absolute regional temperature-change potentials (ARTPs). The latitudinal pattern of the temperature response does not directly follow the pattern of the diagnosed radiative forcing. We find that temperatures in the Arctic latitudes appear to be particularly sensitive to BC emissions from South Asia. The northern mid-latitude temperature response to northern mid-latitude emissions is approximately twice as large as the global average response for aerosol emission, and about 20-30% larger than the global average for methane, VOC and CO emissions.
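    The remark that aerosol metrics are time-dependent scalings of the equilibrium forcing can be illustrated with the standard absolute GWP for a species that decays with a single e-folding lifetime. A minimal sketch, assuming the textbook form AGWP(H) = A·τ·(1 − e^(−H/τ)) for radiative efficiency A and lifetime τ; the function name and the numbers below are illustrative placeholders, not values from the HTAP simulations:

    ```python
    import math

    # Hedged sketch: absolute global warming potential (AGWP) of a pulse
    # emission of a forcer with radiative efficiency A (forcing per unit mass)
    # and e-folding lifetime tau (years), integrated to horizon H (years):
    #     AGWP(H) = A * tau * (1 - exp(-H / tau))

    def agwp(radiative_efficiency: float, lifetime_yr: float, horizon_yr: float) -> float:
        """Time-integrated radiative forcing of a unit pulse emission."""
        return radiative_efficiency * lifetime_yr * (1.0 - math.exp(-horizon_yr / lifetime_yr))

    # For an aerosol-like lifetime (~1 week) much shorter than the horizon,
    # the exponential term vanishes and the metric collapses to A * tau,
    # i.e. a fixed scaling of the forcing, as the abstract notes.
    print(agwp(1.0, 0.02, 20.0))    # ~0.02 at a 20-yr horizon
    print(agwp(1.0, 0.02, 100.0))   # ~0.02 at a 100-yr horizon as well
    ```

    This also shows why short-lived forcers lose relative weight at longer horizons: their AGWP saturates at A·τ while the CO2 denominator keeps growing, so GWPs (and, more steeply, GTPs) decline with the chosen time horizon.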
