Distributed optimization system and method
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2003-06-10
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Distributed Optimization System
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2004-11-30
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Determining the Intensity of a Point-Like Source Observed on the Background of an Extended Source
NASA Astrophysics Data System (ADS)
Kornienko, Y. V.; Skuratovskiy, S. I.
2014-12-01
The problem of determining the time dependence of the intensity of a point-like source in the presence of atmospheric blur is formulated and solved using the Bayesian statistical approach. The point-like source is assumed to be observed against the background of an extended source whose brightness is constant in time though unknown. The equation system for the optimal statistical estimation of the sequence of intensity values at the observation moments is obtained. The problem is particularly relevant for studying gravitational mirages, which appear when a quasar is observed through the gravitational field of a distant galaxy.
SEARCHES FOR TIME-DEPENDENT NEUTRINO SOURCES WITH ICECUBE DATA FROM 2008 TO 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M. G.; Ackermann, M.; Adams, J.
2015-07-01
In this paper, searches for flaring astrophysical neutrino sources and sources with periodic emission with the IceCube neutrino telescope are presented. In contrast to time-integrated searches, where steady emission is assumed, the analyses presented here look for a time-dependent signal of neutrinos, using the information from the neutrino arrival times to enhance the discovery potential. A search was performed for correlations between neutrino arrival times and directions, as well as neutrino emission following time-dependent light curves, sporadic emission, or periodicities of candidate sources. These include active galactic nuclei, soft γ-ray repeaters, supernova remnants hosting pulsars, microquasars, and X-ray binaries. The work presented here updates and extends previously published results to a longer period that covers 4 years of data from 2008 April 5 to 2012 May 16, including the first year of operation of the completed 86-string detector. The analyses did not find any significant time-dependent point sources of neutrinos, and the results were used to set upper limits on the neutrino flux from source candidates.
Tinkelman, Igor; Melamed, Timor
2005-06-01
In Part I of this two-part investigation [J. Opt. Soc. Am. A 22, 1200 (2005)], we presented a theory for phase-space propagation of time-harmonic electromagnetic fields in an anisotropic medium characterized by a generic wave-number profile. In this Part II, these investigations are extended to transient fields, setting a general analytical framework for local analysis and modeling of radiation from time-dependent extended-source distributions. In this formulation the field is expressed as a superposition of pulsed-beam propagators that emanate from all space-time points in the source domain and in all directions. Using time-dependent quadratic-Lorentzian windows, we represent the field by a phase-space spectral distribution in which the propagating elements are pulsed beams, which are formulated by a transient plane-wave spectrum over the extended-source plane. By applying saddle-point asymptotics, we extract the beam phenomenology in the anisotropic environment resulting from short-pulsed processing. Finally, the general results are applied to the special case of uniaxial crystal and compared with a reference solution.
Time-dependent clustering analysis of the second BATSE gamma-ray burst catalog
NASA Technical Reports Server (NTRS)
Brainerd, J. J.; Meegan, C. A.; Briggs, Michael S.; Pendleton, G. N.; Brock, M. N.
1995-01-01
A time-dependent two-point correlation-function analysis of the Burst and Transient Source Experiment (BATSE) 2B catalog finds no evidence of burst repetition. As part of this analysis, we discuss the effects of sky exposure on the observability of burst repetition and present the equation describing the signature of burst repetition in the data. For a model of all burst repetition from a source occurring in less than five days we derive upper limits on the number of bursts in the catalog from repeaters and model-dependent upper limits on the fraction of burst sources that produce multiple outbursts.
Locating arbitrarily time-dependent sound sources in three dimensional space in real time.
Wu, Sean F; Zhu, Na
2010-08-01
This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time by using only four microphones. This method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three-dimensional (3D) space. Locations of acoustic sources are indicated by their Cartesian coordinates. The underlying principle of this method is a hybrid approach that consists of modeling of acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal-to-noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance, and frequency on the spatial resolution and accuracy of source localizations. Based on these results, a simple device is fabricated that consists of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moving in space, even to a position behind the measurement microphones. Practical limitations of this method are discussed.
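The paper's full de-noising pipeline is not reproduced here, but the core geometric step can be sketched: given time differences of arrival (TDOAs) at four microphones placed on three mutually orthogonal axes, search for the point whose predicted TDOAs best match the measurements. The microphone spacing, grid extent, and the brute-force grid solver below are illustrative assumptions, not the paper's optimized design.

```python
import math

# Hypothetical four-microphone array: a reference microphone at the origin
# and one microphone on each of three mutually orthogonal axes. The 0.5 m
# spacing is an illustrative assumption.
C = 343.0  # speed of sound, m/s
MICS = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (0.0, 0.5, 0.0), (0.0, 0.0, 0.5)]

def tdoas(src):
    """Time differences of arrival relative to the reference microphone."""
    d = [math.dist(src, m) for m in MICS]
    return [(di - d[0]) / C for di in d[1:]]

def locate(meas, lo=-3.0, hi=3.0, step=0.5):
    """Coarse grid search for the point minimizing squared TDOA residuals."""
    best, best_err = None, float("inf")
    n = int(round((hi - lo) / step)) + 1
    grid = [lo + i * step for i in range(n)]
    for x in grid:
        for y in grid:
            for z in grid:
                cand = (x, y, z)
                err = sum((a - b) ** 2 for a, b in zip(tdoas(cand), meas))
                if err < best_err:
                    best, best_err = cand, err
    return best

true_src = (1.0, 2.0, 0.5)
print(locate(tdoas(true_src)))  # recovers the grid point at the source
```

A practical system would refine the coarse grid estimate with a local nonlinear least-squares step and de-noise the measured TDOAs first, as the abstract describes.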
Dynamic Control of Particle Deposition in Evaporating Droplets by an External Point Source of Vapor.
Malinowski, Robert; Volpe, Giovanni; Parkin, Ivan P; Volpe, Giorgio
2018-02-01
The deposition of particles on a surface by an evaporating sessile droplet is important for phenomena as diverse as printing, thin-film deposition, and self-assembly. The shape of the final deposit depends on the flows within the droplet during evaporation. These flows are typically determined at the onset of the process by the intrinsic physical, chemical, and geometrical properties of the droplet and its environment. Here, we demonstrate deterministic emergence and real-time control of Marangoni flows within the evaporating droplet by an external point source of vapor. By varying the source location, we can modulate these flows in space and time to pattern colloids on surfaces in a controllable manner.
New theory on the reverberation of rooms [considering sound wave travel time]
NASA Technical Reports Server (NTRS)
Pujolle, J.
1974-01-01
The inadequacy of the various theories which have been proposed for finding the reverberation time of rooms can be explained by an attempt to examine what might occur at a listening point when image sources of determined acoustic power are added to the actual source. The number and locations of the image sources are stipulated. The intensity of sound at the listening point can be calculated by means of approximations whose conditions for validity are given. This leads to the proposal of a new expression for the reverberation time, yielding results which fall between those obtained through use of the Eyring and Millington formulae; these results are made to depend on the shape of the room by means of a new definition of the mean free path.
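The abstract places the proposed reverberation time between the Eyring and Millington results. For orientation, the classical Sabine and Eyring estimates (standard textbook formulas, not the paper's new expression) can be sketched:

```python
import math

def sabine_rt60(volume, surface, alpha_bar):
    """Sabine reverberation time: T = 0.161 V / (S * alpha_bar)."""
    return 0.161 * volume / (surface * alpha_bar)

def eyring_rt60(volume, surface, alpha_bar):
    """Eyring reverberation time: T = 0.161 V / (-S * ln(1 - alpha_bar))."""
    return 0.161 * volume / (-surface * math.log(1.0 - alpha_bar))

# Illustrative room: 200 m^3 volume, 210 m^2 surface, mean absorption 0.3.
V, S, a = 200.0, 210.0, 0.3
print(sabine_rt60(V, S, a), eyring_rt60(V, S, a))
```

Because -ln(1 - ᾱ) ≥ ᾱ, the Eyring estimate is always the shorter of the two; the paper's formula, like Millington's, additionally depends on how absorption is distributed over the room's surfaces.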
NASA Astrophysics Data System (ADS)
Bliss, Donald; Franzoni, Linda; Rouse, Jerry; Manning, Ben
2005-09-01
An analysis method for time-dependent broadband diffuse sound fields in enclosures is described. Beginning with a formulation utilizing time-dependent broadband intensity boundary sources, the strength of these wall sources is expanded in a series in powers of an absorption parameter, thereby giving a separate boundary integral problem for each power. The temporal behavior is characterized by a Taylor expansion in the delay time for a source to influence an evaluation point. The lowest-order problem has a uniform interior field proportional to the reciprocal of the absorption parameter, as expected, and exhibits relatively slow exponential decay. The next-order problem gives a mean-square pressure distribution that is independent of the absorption parameter and is primarily responsible for the spatial variation of the reverberant field. This problem, which is driven by input sources and the lowest-order reverberant field, depends on source location and the spatial distribution of absorption. Additional problems proceed at integer powers of the absorption parameter, but are essentially higher-order corrections to the spatial variation. Temporal behavior is expressed in terms of an eigenvalue problem, with boundary source strength distributions expressed as eigenmodes. Solutions exhibit rapid short-time spatial redistribution followed by long-time decay of a predominant spatial mode.
DEEP ATTRACTOR NETWORK FOR SINGLE-MICROPHONE SPEAKER SEPARATION.
Chen, Zhuo; Luo, Yi; Mesgarani, Nima
2017-03-01
Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and the unknown number of sources in the mixture. We propose a novel deep learning framework for single-channel speech separation by creating attractor points in a high-dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model is different from prior works in that it implements end-to-end training, and it does not depend on the number of sources in the mixture. Two strategies are explored at test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real time. We evaluated our system on the Wall Street Journal dataset and show a 5.49% improvement over the previous state-of-the-art methods.
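A toy, non-neural sketch of the attractor idea, assuming hand-made 2-D embeddings in place of learned ones: each attractor is the centroid of one source's time-frequency bins, and every bin is assigned to its nearest attractor.

```python
# Toy sketch of the attractor mechanism (not the trained network). In a
# real deep attractor network the embeddings are learned from the mixture
# spectrogram; here they are hand-made 2-D points so the geometry is visible.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def nearest(p, attractors):
    """Index of the attractor closest to embedding p (squared distance)."""
    return min(range(len(attractors)),
               key=lambda k: sum((p[i] - attractors[k][i]) ** 2
                                 for i in range(len(p))))

# Two "sources": bins of source 0 cluster near (0, 0), source 1 near (5, 5).
bins0 = [(0.1, -0.2), (-0.3, 0.1), (0.2, 0.2)]
bins1 = [(5.1, 4.8), (4.9, 5.2), (5.0, 5.1)]

# Training-time attractors: centroids of each true source's embeddings.
attractors = [centroid(bins0), centroid(bins1)]

labels = [nearest(p, attractors) for p in bins0 + bins1]
print(labels)  # bins separate cleanly into the two sources
```

The "fixed attractor" test-time strategy mentioned in the abstract replaces the centroid computation with precomputed attractor locations, which is what makes real-time use possible.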
Time-dependent source model of the Lusi mud volcano
NASA Astrophysics Data System (ADS)
Shirzaei, M.; Rudolph, M. L.; Manga, M.
2014-12-01
The Lusi mud eruption, near Sidoarjo, East Java, Indonesia, began erupting in May 2006 and continues to erupt today. Previous analyses of surface deformation data suggested an exponential decay of the pressure in the mud source, but did not constrain the geometry and evolution of the source(s) from which the erupting mud and fluids ascend. To understand the spatiotemporal evolution of the mud and fluid sources, we apply a time-dependent inversion scheme to a densely populated InSAR time series of the surface deformation at Lusi. The SAR data set includes 50 images acquired on 3 overlapping tracks of the ALOS L-band satellite between May 2006 and April 2011. Following multitemporal analysis of this data set, the obtained surface deformation time series is inverted in a time-dependent framework to solve for the volume changes of distributed point sources in the subsurface. The volume change distribution resulting from this modeling scheme shows two zones of high volume change underneath Lusi at 0.5-1.5 km and 4-5.5 km depth, as well as another shallow zone 7 km to the west of Lusi, underneath the Wunut gas field. The cumulative volume change within the shallow source beneath Lusi is ~2-4 times larger than that of the deep source, whilst the ratio of the Lusi shallow source volume change to that of the Wunut gas field is ~1. This observation and model suggest that the Lusi shallow source played a key role in the eruption process and mud supply, but that additional fluids do ascend from depths >4 km on eruptive timescales.
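The inversion solves for volume changes of distributed subsurface point sources. As an illustration only, the classical Mogi point-source formula for vertical surface displacement in an elastic half-space (a standard model of this kind, not necessarily the exact kernel used in the paper, and with made-up volume changes) shows why a shallow source dominates the surface deformation signal:

```python
import math

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source with volume
    change dV at the given depth, observed at horizontal distance r,
    in a homogeneous elastic half-space with Poisson ratio nu:
        uz = (1 - nu) * dV * depth / (pi * (r**2 + depth**2)**1.5)
    """
    return (1.0 - nu) * dV * depth / (math.pi * (r ** 2 + depth ** 2) ** 1.5)

# Two illustrative sources loosely echoing the abstract's depths: a shallow
# one at 1 km and a deep one at 5 km, with equal (made-up) volume changes.
shallow = mogi_uz(r=0.0, depth=1000.0, dV=1.0e6)
deep = mogi_uz(r=0.0, depth=5000.0, dV=1.0e6)
print(shallow, deep)  # the shallow source deforms the surface far more
```

Equal volume changes at 1 km and 5 km depth differ by a factor of 25 in peak surface uplift, which is why time-dependent InSAR inversions can separate shallow and deep contributions.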
Envelope of coda waves for a double couple source due to non-linear elasticity
NASA Astrophysics Data System (ADS)
Calisto, Ignacia; Bataille, Klaus
2014-10-01
Non-linear elasticity has recently been considered as a source of scattering, therefore contributing to the coda of seismic waves, in particular for the case of explosive sources. This idea is analysed further here, theoretically solving the expression for the envelope of coda waves generated by a point moment tensor in order to compare with earthquake data. For weak non-linearities, one can consider each point of the non-linear medium as a source of scattering within a homogeneous and linear medium, for which Green's functions can be used to compute the total displacement of scattered waves. These sources of scattering have specific radiation patterns depending on the incident and scattered P or S waves, respectively. In this approach, the coda envelope depends on three scalar parameters related to the specific non-linearity of the medium; however these parameters only change the scale of the coda envelope. The shape of the coda envelope is sensitive to both the source time function and the intrinsic attenuation. We compare simulations using this model with data from earthquakes in Taiwan, with a good fit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, G.T.
1987-08-01
The 3DFEMWATER model is designed to treat heterogeneous and anisotropic media consisting of as many geologic formations as desired; consider both distributed and point sources/sinks that are spatially and temporally dependent; accept prescribed initial conditions or obtain them by simulating a steady-state version of the system under consideration; deal with a transient head distributed over the Dirichlet boundary; handle time-dependent fluxes due to a pressure gradient varying along the Neumann boundary; treat time-dependent total fluxes distributed over the Cauchy boundary; automatically determine variable boundary conditions of evaporation, infiltration, or seepage on the soil-air interface; include the off-diagonal hydraulic conductivity components in the modified Richards equation for dealing with cases when the coordinate system does not coincide with the principal directions of the hydraulic conductivity tensor; give three options for estimating the nonlinear matrix; include two options (successive subregion block iterations and successive point iterations) for solving the linearized matrix equations; automatically reset the time step size when boundary conditions or sources/sinks change abruptly; and check the mass balance computation over the entire region for every time step. The model is verified with analytical solutions or other numerical models for three examples.
Development of a Hard X-ray Beam Position Monitor for Insertion Device Beams at the APS
NASA Astrophysics Data System (ADS)
Decker, Glenn; Rosenbaum, Gerd; Singh, Om
2006-11-01
Long-term pointing stability requirements at the Advanced Photon Source (APS) are very stringent, at the level of 500 nanoradians peak-to-peak or better over a one-week time frame. Conventional rf beam position monitors (BPMs) close to the insertion device source points are incapable of assuring this level of stability, owing to mechanical, thermal, and electronic stability limitations. Insertion device gap-dependent systematic errors associated with the present ultraviolet photon beam position monitors similarly limit their ability to control long-term pointing stability. We report on the development of a new BPM design sensitive only to hard x-rays. Early experimental results will be presented.
The VLITE Post-Processing Pipeline
NASA Astrophysics Data System (ADS)
Richards, Emily E.; Clarke, Tracy; Peters, Wendy; Polisensky, Emil; Kassim, Namir E.
2018-01-01
A post-processing pipeline to adaptively extract and catalog point sources is being developed to enhance the scientific value and accessibility of data products generated by the VLA Low-band Ionosphere and Transient Experiment (VLITE;
Correcting STIS CCD Point-Source Spectra for CTE Loss
NASA Technical Reports Server (NTRS)
Goudfrooij, Paul; Bohlin, Ralph C.; Maiz-Apellaniz, Jesus
2006-01-01
We review the on-orbit spectroscopic observations that are being used to characterize the Charge Transfer Efficiency (CTE) of the STIS CCD in spectroscopic mode. We parameterize the CTE-related loss for spectrophotometry of point sources in terms of dependencies on the brightness of the source, the background level, the signal in the PSF outside the standard extraction box, and the time of observation. Primary constraints on our correction algorithm are provided by measurements of the CTE loss rates for simulated spectra (images of a tungsten lamp taken through slits oriented along the dispersion axis) combined with estimates of CTE losses for actual spectra of spectrophotometric standard stars in the first-order CCD modes. For point-source spectra at the standard reference position at the CCD center, CTE losses as large as 30% are corrected to within approximately 1% RMS after application of the algorithm presented here, rendering the Poisson noise associated with the source detection itself the dominant contributor to the total flux calibration uncertainty.
Knee point search using cascading top-k sorting with minimized time complexity.
Wang, Zheng; Tseng, Shian-Shyong
2013-01-01
Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
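The paper's probabilistic cascading top-k algorithm is not reproduced here. As a minimal sketch of the underlying notion, one common geometric definition of a knee (the point of maximum perpendicular distance to the chord joining the curve's endpoints) can be computed directly on a sorted curve:

```python
import math

def knee_index(values):
    """Index of the 'knee' of a descending sorted curve, taken here as the
    point of maximum perpendicular distance to the chord joining the first
    and last points. This is a common geometric definition used for
    illustration; the paper's algorithm instead locates the knee
    probabilistically via cascading top-k sorts."""
    n = len(values)
    x0, y0, x1, y1 = 0.0, values[0], float(n - 1), values[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy)
    # Perpendicular distance from point (i, values[i]) to the chord.
    dists = [abs(dy * (i - x0) - dx * (values[i] - y0)) / norm
             for i in range(n)]
    return max(range(n), key=dists.__getitem__)

curve = sorted([10, 9, 8, 2, 1.5, 1, 0.5], reverse=True)
print(knee_index(curve))  # index of the sharpest bend in the sorted curve
```

The point of the paper is that when a prior on the knee's position is known, one need not fully sort the data first: cascading top-k sorts expose just enough of the curve to find the knee with lower expected cost.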
NASA Astrophysics Data System (ADS)
Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.
2018-06-01
The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached with parametric fitting routines that use separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added, which are then removed using the GAN and, for comparison, with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant of poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easier to use than parametric methods, as it requires no input parameters.
Induced Voltage in an Open Wire
NASA Astrophysics Data System (ADS)
Morawetz, K.; Gilbert, M.; Trupp, A.
2017-07-01
A puzzle arising from Faraday's law has been considered and solved, concerning the question of which voltage will be induced in an open wire by a time-varying homogeneous magnetic field. In contrast to closed wires, where the voltage is determined by the time variance of the magnetic field and the enclosed area, in an open wire we have to integrate the electric field along the wire. It is found that the longitudinal electric field with respect to the wave vector contributes 1/3 and the transverse field 2/3 of the induced voltage. In order to find the electric fields, the sources of the magnetic fields must be known. The representation of a spatially homogeneous and time-varying magnetic field unavoidably implies a certain symmetry point or symmetry line, which depends on the geometry of the source. As a consequence, the induced voltage of an open wire is found to be determined by the area covered with respect to this symmetry line or point perpendicular to the magnetic field. This in turn makes it possible to find the symmetry points of a magnetic field source by measuring the voltage of an open wire placed at different angles in the magnetic field. We present exactly solvable models of the Maxwell equations for a symmetry point and for a symmetry line, respectively. The results are applicable to open-circuit problems like corrosion and to astrophysical applications.
Sources of spurious force oscillations from an immersed boundary method for moving-body problems
NASA Astrophysics Data System (ADS)
Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo
2011-04-01
When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on a solid body. In the present study, we identify two sources of these force oscillations. One source is from the spatial discontinuity in the pressure across the immersed boundary when a grid point located inside a solid body becomes that of fluid with a body motion. The addition of mass source/sink together with momentum forcing proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150] reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is from the temporal discontinuity in the velocity at the grid points where fluid becomes solid with a body motion. The magnitude of velocity discontinuity decreases with decreasing the grid spacing near the immersed boundary. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing the grid spacing and increasing the computational time step size, but they depend more on the grid spacing than on the computational time step size.
Spatial distribution of pollutants in the area of the former CHP plant
NASA Astrophysics Data System (ADS)
Cichowicz, Robert
2018-01-01
The quality of atmospheric air and the level of its pollution are now among the most important issues connected with life on Earth. The frequent nuisances and exceedances of pollution standards often described in the media are generated both by low-emission sources and by mobile sources. Local organized energy emission sources, such as local boiler houses or CHP plants, also have an impact on air pollution. At the same time, it is important to remember that the role of local power stations in shaping air pollution immission fields depends on the height of the emitters and the functioning of waste gas treatment installations. Analysis of the air pollution distribution was carried out in 2 series/dates, i.e. 2 and 10 weeks after closure of the CHP plant. As a reference point for the analysis, the largest street intersection located in the immediate vicinity of the plant was selected; virtual circles were drawn around it every 50 meters, along which 31 measuring points were located. As a result, carbon dioxide, hydrogen sulfide, and ammonia levels could be observed and analyzed depending on the distance from the street intersection.
NASA Astrophysics Data System (ADS)
Jeffery, David J.; Mazzali, Paolo A.
2007-08-01
Giant steps is a technique to accelerate Monte Carlo radiative transfer in optically-thick cells (which are isotropic and homogeneous in matter properties and into which astrophysical atmospheres are divided) by greatly reducing the number of Monte Carlo steps needed to propagate photon packets through such cells. In an optically-thick cell, packets starting from any point (which can be regarded a point source) well away from the cell wall act essentially as packets diffusing from the point source in an infinite, isotropic, homogeneous atmosphere. One can replace many ordinary Monte Carlo steps that a packet diffusing from the point source takes by a randomly directed giant step whose length is slightly less than the distance to the nearest cell wall point from the point source. The giant step is assigned a time duration equal to the time for the RMS radius for a burst of packets diffusing from the point source to have reached the giant step length. We call assigning giant-step time durations this way RMS-radius (RMSR) synchronization. Propagating packets by series of giant steps in giant-steps random walks in the interiors of optically-thick cells constitutes the technique of giant steps. Giant steps effectively replaces the exact diffusion treatment of ordinary Monte Carlo radiative transfer in optically-thick cells by an approximate diffusion treatment. In this paper, we describe the basic idea of giant steps and report demonstration giant-steps flux calculations for the grey atmosphere. Speed-up factors of order 100 are obtained relative to ordinary Monte Carlo radiative transfer. In practical applications, speed-up factors of order ten and perhaps more are possible. The speed-up factor is likely to be significantly application-dependent and there is a trade-off between speed-up and accuracy. 
This paper and past work suggest that giant-steps error can probably be kept to a few percent by using sufficiently large boundary-layer optical depths while still maintaining large speed-up factors. Thus, giant steps can be characterized as a moderate accuracy radiative transfer technique. For many applications, the loss of some accuracy may be a tolerable price to pay for the speed-ups gained by using giant steps.
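A back-of-envelope sketch of the RMS-radius (RMSR) synchronization described above, assuming a packet with mean free path mfp and signal speed c: diffusing out to an RMS radius L takes about (L/mfp)**2 scatterings, so a single giant step of length L is assigned the duration of that many ordinary steps.

```python
# Illustrative accounting for the giant-steps technique, not the authors'
# implementation: how many ordinary Monte Carlo steps one giant step
# replaces, and the time duration assigned to it under RMSR-style
# synchronization (random-walk scaling r_rms = mfp * sqrt(N)).
def ordinary_steps(L, mfp):
    """Approximate number of scatterings for the RMS radius to reach L."""
    return (L / mfp) ** 2

def giant_step_time(L, mfp, c=1.0):
    """Duration of one giant step of length L: total path length of the
    replaced random walk, N * mfp, divided by the signal speed c."""
    return ordinary_steps(L, mfp) * mfp / c

# A giant step spanning most of an optically thick cell (optical depth ~100):
print(ordinary_steps(10.0, 0.1))   # ~10^4 small steps replaced by one
print(giant_step_time(10.0, 0.1))
```

This scaling is why the speed-up grows with the optical thickness of the cell: the number of replaced steps goes as the square of the cell's optical depth, while the accuracy trade-off is controlled by how close to the cell wall the giant step is allowed to reach.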
Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; ...
2015-09-01
The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient's tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.
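A minimal sketch of the kind of point-source estimate the study benchmarks against, assuming simple inverse-square geometry and the 8.02-day half-life of 131I; the dose-rate constant GAMMA below is an illustrative placeholder, not a value taken from the report.

```python
import math

HALF_LIFE_DAYS = 8.02  # physical half-life of 131I
GAMMA = 2.2e-3         # hypothetical dose-rate constant (illustrative units)

def point_source_dose_rate(activity0, days, distance_cm):
    """Unshielded point-source estimate: activity decays exponentially with
    the 131I half-life, and the dose rate falls off as 1/distance**2.
    (The realistic phantom simulations in the study additionally account
    for the time-dependent 131I biodistribution and tissue attenuation.)"""
    activity = activity0 * math.exp(-math.log(2.0) * days / HALF_LIFE_DAYS)
    return GAMMA * activity / distance_cm ** 2

near = point_source_dose_rate(1.0e9, days=0.0, distance_cm=100.0)
far = point_source_dose_rate(1.0e9, days=0.0, distance_cm=300.0)
print(near / far)  # inverse-square: a factor of 9 between 100 cm and 300 cm
```

The study's conclusion can be read directly off this model's limits: close to the patient the body is an extended, attenuating source and the bare 1/d² estimate overpredicts, while at 300 cm or more the patient is effectively point-like and the simple estimate is broadly accurate.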
NASA Astrophysics Data System (ADS)
Mauzerall, D. L.; Sultan, B.; Kim, N.; Bradford, D.
2004-12-01
We present a proof-of-concept analysis of the measurement of the health damage of ozone (O3) produced from nitrogen oxides (NOx = NO + NO2) emitted by individual large point sources in the eastern United States. We use a regional atmospheric model of the eastern United States, the Comprehensive Air Quality Model with Extensions (CAMx), to quantify the variable impact that a fixed quantity of NOx emitted from individual sources can have on the downwind concentration of surface O3, depending on temperature and local biogenic hydrocarbon emissions. We also examine the dependence of resulting ozone-related health damages on the size of the exposed population. The investigation is relevant to the increasingly widely used "cap and trade" approach to NOx regulation, which presumes that shifts of emissions over time and space, holding the total fixed over the course of the summer O3 season, will have minimal effect on the environmental outcome. By contrast, we show that a shift of a unit of NOx emissions from one place or time to another could result in large changes in the health effects due to ozone formation and exposure. We indicate how the type of modeling carried out here might be used to attach externality-correcting prices to emissions. Charging emitters fees that are commensurate with the damage caused by their NOx emissions would create an incentive for emitters to reduce emissions at times and in locations where they cause the largest damage.
Acoustic field in unsteady moving media
NASA Technical Reports Server (NTRS)
Bauer, F.; Maestrello, L.; Ting, L.
1995-01-01
In the interaction of an acoustic field with a moving airframe the authors encounter a canonical initial value problem for an acoustic field induced by an unsteady source distribution q(t,x), with q ≡ 0 for t ≤ 0, in a medium moving with a uniform unsteady velocity U(t)i in the coordinate system x fixed on the airframe. Signals issued from a source point S in the domain of dependence D of an observation point P at time t will arrive at point P more than once, corresponding to different retarded times τ in the interval (0, t). The number of arrivals is called the multiplicity of the point S. The multiplicity equals 1 if the velocity U remains subsonic and can be greater when U becomes supersonic. For an unsteady uniform flow U(t)i, rules are formulated for defining the smallest number I of subdomains V_i of D, with the union of the V_i equal to D. Each subdomain has multiplicity 1 and a formula for the corresponding retarded time. The number of subdomains V_i with nonempty intersection is the multiplicity m of the intersection; the multiplicity is at most I. Examples demonstrating these rules are presented for media at accelerating and/or decelerating supersonic speed.
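The multiplicity rules above can be illustrated numerically for the simplest case of a constant convection speed U: count the retarded times τ in (0, t) at which a signal emitted at a source point reaches the observer. The sketch below is ours (a constant U rather than the unsteady U(t) the authors treat), and all function and parameter names are illustrative:

```python
import numpy as np

def arrival_times(xP, xS, U, c, t, n=200_000):
    """Retarded times tau in (0, t) at which a signal emitted at xS
    reaches xP, for a medium convecting at constant speed U along x.
    A wavefront emitted at time tau is a sphere of radius c*(t - tau)
    whose center drifts with the medium."""
    tau = np.linspace(0.0, t, n)
    s = t - tau                                   # elapsed time since emission
    centers = xS + np.outer(s, np.array([U, 0.0, 0.0]))
    g = np.linalg.norm(xP - centers, axis=1) - c * s
    return tau[:-1][np.sign(g[:-1]) != np.sign(g[1:])]   # sign changes ~ roots

# Subsonic flow: the source point is heard once (multiplicity 1).
sub = arrival_times(np.zeros(3), np.array([1.0, 0.0, 0.0]), U=0.5, c=1.0, t=5.0)
# Supersonic flow, source upstream: heard twice (multiplicity 2).
sup = arrival_times(np.zeros(3), np.array([-1.0, 0.0, 0.0]), U=2.0, c=1.0, t=5.0)
```

With these parameters the subsonic case yields a single arrival and the supersonic case two, matching the multiplicity behavior described in the abstract.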
NASA Astrophysics Data System (ADS)
He, H.-Q.; Zhou, G.; Wan, W.
2017-06-01
A functional form I_max(R) = kR^(-α), where R is the radial distance of a spacecraft, has commonly been used to model the radial dependence of peak intensities I_max(R) of solar energetic particles (SEPs). In this work, the five-dimensional Fokker-Planck transport equation incorporating perpendicular diffusion is numerically solved to investigate the radial dependence of SEP peak intensities. We consider two different scenarios for the distribution of a spacecraft fleet: (1) along the radial direction line and (2) along the Parker magnetic field line. We find that the index α in the above expression varies in a wide range, primarily depending on the properties (e.g., location and coverage) of SEP sources and on the longitudinal and latitudinal separations between the sources and the magnetic foot points of the observers. In particular, whether the magnetic foot point of the observer is located inside or outside the SEP source is a crucial factor determining the values of the index α. A two-phase phenomenon is found in the radial dependence of peak intensities. The position of the break point (transition point/critical point) is determined by the magnetic connection status of the observers. This finding suggests that the magnetic connection between the SEP source and each spacecraft should be examined very carefully in observational studies. We obtain a lower limit of R^(-1.7 ± 0.1) for empirically modeling the radial dependence of SEP peak intensities. Our findings can explain the majority of previous multispacecraft survey results, and especially reconcile the different or conflicting empirical values of the index α in the literature.
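As an illustration, the empirical power law I_max(R) = kR^(-α) can be fit to multi-spacecraft peak intensities by linear regression in log-log space. This is a generic sketch; the function name and the synthetic data are ours, not from the study:

```python
import numpy as np

def fit_power_law(R, I):
    """Least-squares fit of I_max(R) = k * R**(-alpha) in log-log space.
    R: radial distances (au); I: peak intensities (arbitrary units).
    Returns (k, alpha)."""
    slope, intercept = np.polyfit(np.log(R), np.log(I), 1)
    return np.exp(intercept), -slope

# Synthetic peak intensities generated with k = 2.0, alpha = 1.7
R = np.array([0.3, 0.5, 1.0, 2.0, 5.0])
I = 2.0 * R**-1.7
k, alpha = fit_power_law(R, I)
```

In practice the two radial phases on either side of the break point would be fit separately, since the abstract argues that a single index does not hold across it.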
Rounds, Stewart A.
2007-01-01
Water temperature is an important factor influencing the migration, rearing, and spawning of several important fish species in rivers of the Pacific Northwest. To protect these fish populations and to fulfill its responsibilities under the Federal Clean Water Act, the Oregon Department of Environmental Quality set a water temperature Total Maximum Daily Load (TMDL) in 2006 for the Willamette River and the lower reaches of its largest tributaries in northwestern Oregon. As a result, the thermal discharges of the largest point sources of heat to the Willamette River now are limited at certain times of the year, riparian vegetation has been targeted for restoration, and upstream dams are recognized as important influences on downstream temperatures. Many of the prescribed point-source heat-load allocations are sufficiently restrictive that management agencies may need to expend considerable resources to meet those allocations. Trading heat allocations among point-source dischargers may be a more economical and efficient means of meeting the cumulative point-source temperature limits set by the TMDL. The cumulative nature of these limits, however, precludes simple one-to-one trades of heat from one point source to another; a more detailed spatial analysis is needed. In this investigation, the flow and temperature models that formed the basis of the Willamette temperature TMDL were used to determine a spatially indexed 'heating signature' for each of the modeled point sources, and those signatures then were combined into a user-friendly, spreadsheet-based screening tool. The Willamette River Point-Source Heat-Trading Tool allows the user to increase or decrease the heating signature of each source and thereby evaluate the effects of a wide range of potential point-source heat trades. 
The predictions of the Trading Tool were verified by running the Willamette flow and temperature models under four different trading scenarios, and the predictions typically were accurate to within about 0.005 degrees Celsius (°C). In addition to assessing the effects of point-source heat trades, the models were used to evaluate the temperature effects of several shade-restoration scenarios. Restoration of riparian shade along the entire Long Tom River, from its mouth to Fern Ridge Dam, was calculated to have a small but significant effect on daily maximum temperatures in the main-stem Willamette River, on the order of 0.03°C where the Long Tom River enters the Willamette River, and diminishing downstream. Model scenarios also were run to assess the effects of restoring selected 5-mile reaches of riparian vegetation along the main-stem Willamette River from river mile (RM) 176.80, just upstream of the point where the McKenzie River joins the Willamette River, to RM 116.87 near Albany, which is one location where cumulative point-source heating effects are at a maximum. Restoration of riparian vegetation along the main-stem Willamette River was shown by model runs to have a significant local effect on daily maximum river temperatures (0.046 to 0.194°C) at the site of restoration. The magnitude of the cooling depends on many factors including river width, flow, time of year, and the difference in vegetation characteristics between current and restored conditions. Downstream of the restored reach, the cooling effects are complex and have a nodal nature: at one-half day of travel time downstream, shade restoration has little effect on daily maximum temperature because water passes the restoration site at night; at 1 full day of travel time downstream, cooling effects increase to a second, diminished maximum. Such spatial complexities may complicate the trading of heat allocations between point and nonpoint sources.
Upstream dams have an important effect on water temperature in the Willamette River system as a result of augmented flows as well as modified temperature releases over the course of the summer and autumn. The TMDL was formulated prior t
Strategies for satellite-based monitoring of CO2 from distributed area and point sources
NASA Astrophysics Data System (ADS)
Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David
2014-05-01
Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. 
However, no single combination of orbit and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al, 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes with daily to weekly temporal revisits and moderate to high spatial resolution. Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period. 
These solutions typically have longer times between revisits, limiting their ability to resolve temporal variations. Geostationary and non-sun-synchronous low-Earth-orbits (precessing local solar time, diurnal information possible) with agile pointing have the potential to provide comprehensive mapping of distributed area sources such as megacities with longer stare times and multiple revisits per day, at the expense of global access and spatial coverage. An ad hoc CO2 remote sensing constellation is emerging. NASA's OCO-2 satellite (launch July 2014) joins JAXA's GOSAT satellite in orbit. These will be followed by GOSAT-2 and NASA's OCO-3 on the International Space Station as early as 2017. Additional polar orbiting satellites (e.g., CarbonSat, under consideration at ESA) and geostationary platforms may also become available. However, the individual assets have been designed with independent science goals and requirements, and limited consideration of coordinated observing strategies. Every effort must be made to maximize the science return from this constellation. We discuss the opportunities to exploit the complementary spatial and temporal coverage provided by these assets as well as the crucial gaps in the capabilities of this constellation. References: Burton, M.R., Sawyer, G.M., and Granieri, D. (2013). Deep carbon emissions from volcanoes. Rev. Mineral. Geochem. 75: 323-354. Duren, R.M., Miller, C.E. (2012). Measuring the carbon emissions of megacities. Nature Climate Change 2, 560-562. Schwandner, F.M., Oda, T., Duren, R., Carn, S.A., Maksyutov, S., Crisp, D., Miller, C.E. (2013). Scientific Opportunities from Target-Mode Capabilities of GOSAT-2. NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA, White Paper, 6p., March 2013.
Advanced sensor-simulation capability
NASA Astrophysics Data System (ADS)
Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.
1990-09-01
This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS) combines classical image processing techniques with detailed sensor models to produce static and time dependent simulations of a variety of sensor systems including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors which can be used for either imaging or point source detection.
NASA Astrophysics Data System (ADS)
Zhang, S.; Tang, L.
2007-05-01
Panjiakou Reservoir is an important drinking water resource in Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. In recent years, water pollution in the reservoir has become more serious owing to non-point source pollution as well as point source pollution in the upstream watershed. To effectively manage the reservoir and watershed and develop a plan to reduce pollutant loads, the loading of non-point and point source pollution and their distribution on the upstream watershed must be understood fully. The SWAT model is used to simulate the production and transport of non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years, and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are derived from the DEM using ArcGIS software. The soil and land use data are reclassified, and the soil physical properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area. The calibration results show that the model performs fairly well. The calibrated model was then used to calculate the loadings of non-point source pollutants for a wet year, a normal year and a dry year. The time and space distributions of flow, sediment and non-point source pollution were analyzed from the simulated results. The calculated loads differ dramatically among the hydrologic years: the loading of non-point source pollution is relatively large in the wet year and small in the dry year, since the non-point source pollutants are mainly transported by runoff. The pollution loading within a year is mainly produced in the flood season.
Because SWAT is a distributed model, it is possible to view model output as it varies across the basin, so the critical areas and reaches can be identified in the study area. According to the simulation results, different land uses yield different results, and fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution for Panjiakou Reservoir are presented based on the analysis of the model results.
NASA Technical Reports Server (NTRS)
Miles, J. H.; Wasserbauer, C. A.; Krejsa, E. A.
1983-01-01
Pressure temperature cross spectra are necessary in predicting noise propagation in regions of velocity gradients downstream of combustors if the effect of convective entropy disturbances is included. Pressure temperature cross spectra and coherences were measured at spatially separated points in a combustion rig fueled with hydrogen. Temperature-temperature and pressure-pressure cross spectra and coherences between the spatially separated points as well as temperature and pressure autospectra were measured. These test results were compared with previous results obtained in the same combustion rig using Jet A fuel in order to investigate their dependence on the type of combustion process. The phase relationships are not consistent with a simple source model that assumes that pressure and temperature are in phase at a point in the combustor and at all other points downstream are related to one another by only a time delay due to convection of temperature disturbances. Thus these test results indicate that a more complex model of the source is required.
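The quantities discussed here, cross-spectra and coherence between spatially separated probes, can be estimated with Welch-type averaging. The sketch below uses synthetic records with a pure convection delay; all names, sample rates and parameters are illustrative, not from the rig measurements:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1024.0          # sample rate (Hz)
n = 1 << 14
delay = 32           # convection delay between the two probes (samples)

# Common fluctuating source seen by both probes, plus independent noise
src = rng.standard_normal(n + delay)
p = src[delay:] + 0.1 * rng.standard_normal(n)   # downstream "pressure"
T = src[:-delay] + 0.1 * rng.standard_normal(n)  # upstream "temperature"

f, Ppt = signal.csd(p, T, fs=fs, nperseg=1024)        # cross-spectrum
f, coh2 = signal.coherence(p, T, fs=fs, nperseg=1024)  # magnitude-squared coherence
phase = np.unwrap(np.angle(Ppt))   # linear in f for a pure time delay
```

For a disturbance that is purely convected between the two points, the cross-spectral phase is linear in frequency with a slope set by the delay; departures from that line are what the abstract uses to argue for a more complex source model.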
Kneller, James P.; Mauney, Alex W.
2013-08-23
Here, the transition probabilities describing the evolution of a neutrino with a given energy along some ray through a turbulent supernova profile are random variates unique to each ray. If the proto-neutron-star source of the neutrinos were a point, then one might expect the evolution of the turbulence would cause the flavor composition of the neutrinos to vary in time, i.e., the flavor would scintillate. But in reality the proto-neutron star is not a point source: it has a size of order ~10 km, so the neutrinos emitted from different points at the source will each have seen different turbulence. The finite source size will reduce the correlation of the flavor transition probabilities along different trajectories and reduce the magnitude of the flavor scintillation. To determine whether the finite size of the proto-neutron star will preclude flavor scintillation, we calculate the correlation of the neutrino flavor transition probabilities through turbulent supernova profiles as a function of the separation δx between the emission points. The correlation will depend upon the power spectrum used for the turbulence, and we consider two cases: when the power spectrum is isotropic, and the more realistic case of a power spectrum which is anisotropic on large scales and isotropic on small. Although it is dependent on a number of uncalibrated parameters, we show the supernova neutrino source is not of sufficient size to significantly blur flavor scintillation in all mixing channels when using an isotropic spectrum, and this same result holds when using an anisotropic spectrum, except when we greatly reduce the similarity of the turbulence along parallel trajectories separated by ~10 km or less.
Time-Dependent Moment Tensors of the First Four Source Physics Experiments (SPE) Explosions
NASA Astrophysics Data System (ADS)
Yang, X.
2015-12-01
We use mainly vertical-component geophone data within 2 km of the epicenter to invert for time-dependent moment tensors of the first four SPE explosions: SPE-1, SPE-2, SPE-3 and SPE-4Prime. We employ a one-dimensional (1D) velocity model developed from P- and Rg-wave travel times for Green's function calculations. The attenuation structure of the model is developed from P- and Rg-wave amplitudes. We select data for the inversion based on the criterion that their travel times and amplitude behavior are consistent with those predicted by the 1D model. Due to limited azimuthal coverage of the sources and the mostly vertical-component-only nature of the dataset, only the long-period, diagonal components of the moment tensors are well constrained. Nevertheless, the moment tensors, particularly their isotropic components, provide reasonable estimates of the long-period source amplitudes as well as estimates of corner frequencies, albeit with larger uncertainties. The estimated corner frequencies, however, are consistent with estimates from ratios of seismogram spectra from different explosions. These long-period source amplitudes and corner frequencies cannot be fit by classical P-wave explosion source models. The results motivate the development of new P-wave source models suitable for these chemical explosions. To that end, we fit the inverted moment-tensor spectra by modifying the classical explosion model using regressions of estimated source parameters. Although the number of data points used in the regression is small, the approach suggests a way forward for new-model development as more data are collected.
Martelli, Fabrizio; Sassaroli, Angelo; Pifferi, Antonio; Torricelli, Alessandro; Spinelli, Lorenzo; Zaccanti, Giovanni
2007-12-24
The Green's function of the time-dependent radiative transfer equation for the semi-infinite medium is derived for the first time by a heuristic approach based on the extrapolated boundary condition and on an almost exact solution for the infinite medium. Monte Carlo simulations, performed both in the simple case of isotropic scattering with an isotropic point-like source and in the more realistic case of anisotropic scattering with a pencil beam source, are used to validate the heuristic Green's function. Except at very early times, the proposed solution has excellent accuracy (> 98% for the isotropic case, and > 97% for the anisotropic case), significantly better than the diffusion equation. This solution could be extremely useful in the biomedical optics field, where it can be directly employed in conditions where the use of the diffusion equation is limited, e.g., small volume samples, high absorption and/or low scattering media, short source-receiver distances and early times. It also represents a first step toward deriving tools for other geometries (e.g., the slab, and the slab with inhomogeneities inside) of practical interest for noninvasive spectroscopy and diffuse optical imaging. Moreover, the proposed solution can be useful in several research fields where the study of a transport process is fundamental.
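The extrapolated-boundary idea is easy to state at the diffusion level, which is the approximation the authors improve upon: the semi-infinite fluence is built from the infinite-medium Green's function plus a negative image source mirrored about the extrapolated boundary. The sketch below is the standard diffusion construction, not the RTE solution of the paper; the symbols (D, z0, ze, A) follow common biomedical-optics conventions and the numerical values are illustrative:

```python
import numpy as np

def g_inf(r, t, mua, musp, v=2.2e10):
    """Infinite-medium time-dependent diffusion Green's function,
    with D = 1/(3*musp) in cm and v the speed of light in the medium (cm/s)."""
    D = 1.0 / (3.0 * musp)
    return v * (4.0*np.pi*D*v*t)**-1.5 * np.exp(-r**2/(4.0*D*v*t) - mua*v*t)

def phi_semi_inf(rho, z, t, mua, musp, A=1.0, v=2.2e10):
    """Semi-infinite-medium fluence via a negative image source mirrored
    about the extrapolated boundary z = -ze (diffusion-level sketch)."""
    D = 1.0 / (3.0 * musp)
    z0 = 1.0 / musp              # depth of the equivalent isotropic source
    ze = 2.0 * A * D             # extrapolated-boundary distance
    r_src = np.sqrt(rho**2 + (z - z0)**2)
    r_img = np.sqrt(rho**2 + (z + z0 + 2.0*ze)**2)
    return g_inf(r_src, t, mua, musp, v) - g_inf(r_img, t, mua, musp, v)

# Fluence 1 ns after an impulse, 1 cm off-axis, 0.5 cm below the surface
phi = phi_semi_inf(rho=1.0, z=0.5, t=1e-9, mua=0.1, musp=10.0)
```

By construction the fluence vanishes on the extrapolated plane z = -ze; the abstract's point is that this diffusion result degrades at early times and short distances, where the heuristic RTE Green's function remains accurate.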
Wang, Jiabiao; Zhao, Jianshi; Lei, Xiaohui; Wang, Hao
2018-06-13
Pollution risk from the discharge of industrial waste or accidental spills during transportation poses a considerable threat to the security of rivers. The ability to quickly identify the pollution source is extremely important to enable emergency disposal of pollutants. This study proposes a new approach for point source identification of sudden water pollution in rivers, which aims to determine where (source location), when (release time) and how much pollutant (released mass) was introduced into the river. Based on the backward probability method (BPM) and the linear regression model (LR), the proposed LR-BPM converts the ill-posed problem of source identification into an optimization model, which is solved using a Differential Evolution Algorithm (DEA). The decoupled parameters of released mass are not dependent on prior information, which improves the identification efficiency. A hypothetical case study with a different number of pollution sources was conducted to test the proposed approach, and the largest relative errors for identified location, release time, and released mass in all tests were not greater than 10%. Uncertainty in the LR-BPM is mainly due to a problem with model equifinality, but averaging the results of repeated tests greatly reduces errors. Furthermore, increasing the gauging sections further improves identification results. A real-world case study examines the applicability of the LR-BPM in practice, where it is demonstrated to be more accurate and time-saving than two existing approaches, Bayesian-MCMC and basic DEA.
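The inverse problem has the flavor sketched below: assume a 1-D advection-dispersion response to an instantaneous release and let a Differential Evolution search recover (location, release time, mass) from downstream observations. This is a generic stand-in, not the paper's LR-BPM; every parameter value and name here is invented for illustration:

```python
import numpy as np
from scipy.optimize import differential_evolution

u, D, A = 0.5, 1.0, 20.0   # flow speed (m/s), dispersion (m^2/s), area (m^2)

def conc(x, t, xs, t0, M):
    """Concentration from an instantaneous point release of mass M at
    (xs, t0) in a uniform 1-D river (advection-dispersion solution)."""
    te = np.maximum(t - t0, 1e-9)          # elapsed time, guarded
    c = M / (A * np.sqrt(4.0*np.pi*D*te)) * \
        np.exp(-(x - xs - u*te)**2 / (4.0*D*te))
    return np.where(t > t0, c, 0.0)

x_obs = np.array([2000.0, 3000.0])           # gauging sections (m)
t_obs = np.arange(1000.0, 20000.0, 500.0)    # sampling times (s)
X, T = np.meshgrid(x_obs, t_obs)
truth = (100.0, 600.0, 5000.0)               # xs (m), t0 (s), M (g)
data = conc(X, T, *truth)                    # noiseless synthetic record

def sse(p):                                  # misfit to be minimized
    return np.sum((conc(X, T, *p) - data)**2)

fit = differential_evolution(sse, bounds=[(0.0, 1000.0), (0.0, 2000.0),
                                          (100.0, 20000.0)], seed=3, tol=1e-12)
xs_hat, t0_hat, M_hat = fit.x
```

Averaging repeated runs with different seeds, as the study does for its repeated tests, damps the equifinality between source location and release time that the authors note.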
Time's arrow: A numerical experiment
NASA Astrophysics Data System (ADS)
Fowles, G. Richard
1994-04-01
The dependence of time's arrow on initial conditions is illustrated by a numerical example in which plane waves produced by an initial pressure pulse are followed as they are multiply reflected at internal interfaces of a layered medium. Wave interactions at interfaces are shown to be analogous to the retarded and advanced waves of point sources. The model is linear and the calculation is exact and demonstrably time reversible; nevertheless the results show most of the features expected of a macroscopically irreversible system, including the approach to the Maxwell-Boltzmann distribution, ergodicity, and concomitant entropy increase.
NASA Astrophysics Data System (ADS)
Makoveeva, Eugenya V.; Alexandrov, Dmitri V.
2018-01-01
This article is concerned with a new analytical description of nucleation and growth of crystals in a metastable mushy layer (supercooled liquid or supersaturated solution) at the intermediate stage of a phase transition. The model under consideration, consisting of a non-stationary integro-differential system of governing equations for the distribution function and the metastability level, is solved analytically by means of the saddle-point technique for the Laplace-type integral in the case of arbitrary nucleation kinetics and time-dependent heat or mass sources in the balance equation. We demonstrate that the time-dependent distribution function approaches the stationary profile in the course of time. This article is part of the theme issue 'From atomistic interfaces to dendritic patterns'.
NASA Astrophysics Data System (ADS)
Tenkès, Lucille-Marie; Hollerbach, Rainer; Kim, Eun-jin
2017-12-01
A probabilistic description is essential for understanding growth processes in non-stationary states. In this paper, we compute time-dependent probability density functions (PDFs) in order to investigate stochastic logistic and Gompertz models, which are two of the most popular growth models. We consider different types of short-correlated multiplicative and additive noise sources and compare the time-dependent PDFs in the two models, elucidating the effects of the additive and multiplicative noises on the form of the PDFs. We demonstrate an interesting transition from a unimodal to a bimodal PDF as the multiplicative noise increases for a fixed value of the additive noise. A much weaker (leaky) attractor in the Gompertz model leads to significant (singular) growth of populations of very small size. We point out the limitation of using stationary PDFs, mean value and variance in understanding statistical properties of growth in non-stationary states, highlighting the importance of time-dependent PDFs. We further compare these two models from the perspective of the information change that occurs during the growth process. Specifically, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart and sum these distances in time. The total distance along the trajectory quantifies the total number of different states that the system undergoes in time, and is called the information length. We show that the time evolution of the two models becomes more similar when measured in units of the information length, and point out the merit of using the information length in unifying and understanding the dynamic evolution of different growth processes.
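A minimal way to reproduce such time-dependent PDFs is to integrate the stochastic logistic model with the Euler-Maruyama scheme and histogram many sample paths. The parameter names below are ours, the noise enters in the Itô sense, and the values are illustrative rather than those of the paper:

```python
import numpy as np

def logistic_paths(n_paths=20_000, n_steps=2_000, dt=5e-3, gamma=1.0, K=1.0,
                   sig_m=0.0, sig_a=0.0, x0=0.1, seed=0):
    """Euler-Maruyama integration of
       dx = gamma*x*(1 - x/K) dt + sig_m*x dW1 + sig_a dW2,
    returning the ensemble of x values at the final time."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
        dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + gamma*x*(1.0 - x/K)*dt + sig_m*x*dw1 + sig_a*dw2
        x = np.maximum(x, 0.0)       # a population cannot go negative
    return x

# Time-dependent PDF at t = 10 under multiplicative noise only
ensemble = logistic_paths(sig_m=0.5)
pdf, edges = np.histogram(ensemble, bins=60, density=True)
```

For multiplicative noise only, the stationary Itô PDF of this model is a gamma distribution with mean K(1 - sig_m^2/(2*gamma)), which gives a quick consistency check on the histogram; increasing sig_m at fixed sig_a is then the natural way to look for the unimodal-to-bimodal transition reported above.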
Density and fluence dependence of lithium cell damage and recovery characteristics
NASA Technical Reports Server (NTRS)
Faith, T. J.
1971-01-01
Experimental results on lithium-containing solar cells point toward the lithium donor density gradient dN_L/dw as being the crucial parameter in the prediction of cell behavior after irradiation by electrons. Recovery measurements on a large number of oxygen-rich and oxygen-lean lithium cells have confirmed that cell recovery speed is directly proportional to the value of the lithium gradient for electron fluences. Gradient measurements have also been correlated with lithium diffusion schedules. Results have shown that long diffusion times (25 h) with a paint-on source result in large cell-to-cell variations in gradient, probably due to a loss of the lithium source with time.
Local time asymmetries and toroidal field line resonances: Global magnetospheric modeling in SWMF
NASA Astrophysics Data System (ADS)
Ellington, S. M.; Moldwin, M. B.; Liemohn, M. W.
2016-03-01
We present evidence of resonant wave-wave coupling via toroidal field line resonance (FLR) signatures in the Space Weather Modeling Framework's (SWMF) global, terrestrial magnetospheric model in one simulation driven by a synthetic upstream solar wind with embedded broadband dynamic pressure fluctuations. Using in situ, stationary point measurements of the radial electric field along the 1500 LT meridian, we show that SWMF reproduces a multiharmonic, continuous distribution of FLRs exemplified by 180° phase reversals and amplitude peaks across the resonant L shells. By linearly increasing the amplitude of the dynamic pressure fluctuations in time, we observe a commensurate increase in the amplitude of the radial electric and azimuthal magnetic field fluctuations, which is consistent with the solar wind driver being the dominant source of the fast mode energy. While we find no discernible local time changes in the FLR frequencies despite large-scale, monotonic variations in the dayside equatorial mass density, in selectively sampling resonant points and examining spectral resonance widths, we observe significant radial, harmonic, and time-dependent local time asymmetries in the radial electric field amplitudes. A weak but persistent local time asymmetry exists in measures of the estimated coupling efficiency between the fast mode and toroidal wave fields, which exhibits a radial dependence consistent with the coupling strength examined by Mann et al. (1999) and Zhu and Kivelson (1988). We discuss internal structural mechanisms and additional external energy sources that may account for these asymmetries as we find that local time variations in the strength of the compressional driver are not the predominant source of the FLR amplitude asymmetries. 
These include resonant mode coupling of observed Kelvin-Helmholtz surface-wave-generated Pc5-band ultralow-frequency pulsations, local time differences in ionospheric damping rates, and variations in azimuthal mode number, which may impact the partitioning of spectral energy between the toroidal and poloidal wave modes.
Comparative Studies for the Sodium and Potassium Atmospheres of the Moon and Mercury
NASA Technical Reports Server (NTRS)
Smyth, William H.
1999-01-01
A summary discussion of recent sodium and potassium observations for the atmospheres of the Moon and Mercury is presented, with primary emphasis on new full-disk images that have become available for sodium. For the sodium atmosphere, image observations for both the Moon and Mercury are fitted with model calculations (1) that have the same source speed distribution, one recently measured for electron-stimulated desorption and thought to apply equally well to photon-stimulated desorption, (2) that have similar average surface sodium fluxes, about 2.8 x 10^5 to 8.9 x 10^5 atoms cm^-2 s^-1 for the Moon and approximately 3.5 x 10^5 to 1.4 x 10^6 atoms cm^-2 s^-1 for Mercury, but (3) that have very different distributions for the source surface area. For the Moon, a sunlit hemispherical surface source of approximately 5.3 x 10^22 to 1.2 x 10^23 atoms/s is required, with a spatial dependence at least as sharp as the square of the cosine of the solar zenith angle. For Mercury, a time-dependent source that varies from 1.5 x 10^22 to 5.8 x 10^22 atoms/s is required, which is confined to a small surface area located at, but asymmetrically distributed about, the subsolar point. The nature of the Mercury source suggests that the planetary magnetopause near the subsolar point acts as a time-varying and partially protective shield through which charged particles may pass to interact with and liberate gas from the planetary surface. Suggested directions for future research activities are discussed.
NASA Astrophysics Data System (ADS)
Mauzerall, Denise L.; Sultan, Babar; Kim, Namsoug; Bradford, David F.
We present a proof-of-concept analysis of the measurement of the health damage of ozone (O3) produced from nitrogen oxides (NOx = NO + NO2) emitted by individual large point sources in the eastern United States. We use a regional atmospheric model of the eastern United States, the Comprehensive Air quality Model with Extensions (CAMx), to quantify the variable impact that a fixed quantity of NOx emitted from individual sources can have on the downwind concentration of surface O3, depending on temperature and local biogenic hydrocarbon emissions. We also examine the dependence of resulting O3-related health damages on the size of the exposed population. The investigation is relevant to the increasingly widely used "cap and trade" approach to NOx regulation, which presumes that shifts of emissions over time and space, holding the total fixed over the course of the summer O3 season, will have minimal effect on the environmental outcome. By contrast, we show that a shift of a unit of NOx emissions from one place or time to another could result in large changes in resulting health effects due to O3 formation and exposure. We indicate how the type of modeling carried out here might be used to attach externality-correcting prices to emissions. Charging emitters fees that are commensurate with the damage caused by their NOx emissions would create an incentive for emitters to reduce emissions at times and in locations where they cause the largest damage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Adam J., E-mail: adamhoff@umich.edu; Lee, John C., E-mail: jcl@umich.edu
2016-02-15
A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.
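As an aside on the BDF step: a minimal sketch of how backward-differentiation weights turn stored history values into a time derivative. The coefficients are the standard BDF1-BDF3 differentiation weights for a uniform step; function and variable names are illustrative, not DeCART's.

```python
# Standard backward-differentiation weights for du/dt at the newest time level,
# applied to equally spaced history values [u_n, u_{n-1}, ...] (newest first).
BDF_WEIGHTS = {
    1: [1.0, -1.0],
    2: [1.5, -2.0, 0.5],
    3: [11.0 / 6.0, -3.0, 1.5, -1.0 / 3.0],
}

def bdf_derivative(history, dt, order=2):
    """Approximate du/dt at the newest point; history = [u_n, u_{n-1}, ...]."""
    w = BDF_WEIGHTS[order]
    if len(history) < len(w):
        raise ValueError("not enough history for requested BDF order")
    return sum(c * u for c, u in zip(w, history)) / dt

# BDF2 differentiates a quadratic exactly: for u(t) = t^2, du/dt at t = 2 is 4.
dt = 0.1
hist = [(2.0 - k * dt) ** 2 for k in range(3)]
print(bdf_derivative(hist, dt, order=2))
```

The exactness on quadratics is the second-order accuracy the abstract relies on when propagating source derivatives instead of full angular-flux history.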
Primary Beam Air Kerma Dependence on Distance from Cargo and People Scanners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strom, Daniel J.; Cerra, Frank
2016-06-01
The distance dependence of air kerma or dose rate of the primary radiation beam is not obvious for security scanners of cargo and people in which there is relative motion between a collimated source and the person or object being imaged. To study this problem, one fixed line source and three moving-source scan-geometry cases are considered, each characterized by radiation emanating perpendicular to an axis. The cases are 1) a stationary line source of radioactive material, e.g., contaminated solution in a pipe; 2) a moving, uncollimated point source of radiation that is shuttered or off when it is stationary; 3) a moving, collimated point source of radiation that is shuttered or off when it is stationary; and 4) a translating, narrow "pencil" beam emanating in a flying-spot, raster pattern. Each case is considered for short and long distances compared to the line source length or path traversed by a moving source. The short distance model pertains mostly to dose to objects being scanned and personnel associated with the screening operation. The long distance model pertains mostly to potential dose to bystanders. For radionuclide sources, the number of nuclear transitions that occur a) per unit length of a line source, or b) during the traversal of a point source, is a unifying concept. The "universal source strength" of air kerma rate at 1 m from the source can be used to describe x-ray machine or radionuclide sources. For many cargo and people scanners with highly collimated fan or pencil beams, dose varies as the inverse of the distance from the source in the near field and with the inverse square of the distance beyond a critical radius. Ignoring the inverse square dependence and using inverse distance dependence is conservative in the sense of tending to overestimate dose.
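The stated distance dependence can be condensed into a piecewise model; `k_1m` (the air kerma rate at 1 m) and the critical radius below are illustrative parameters, not values from the paper:

```python
def air_kerma_rate(k_1m, r, r_crit):
    """Piecewise distance model sketched from the abstract: kerma falls as 1/r
    in the near field and as 1/r^2 beyond a critical radius r_crit, with the
    two branches matched at r_crit. k_1m is the air kerma rate at 1 m (the
    'universal source strength'); units follow whatever k_1m carries."""
    if r <= 0:
        raise ValueError("distance must be positive")
    if r <= r_crit:
        return k_1m / r                  # near field: inverse distance
    return k_1m * r_crit / r ** 2        # far field: inverse square, continuous at r_crit

# Using the conservative inverse-distance law everywhere overestimates far-field dose:
k, rc = 10.0, 5.0
print(air_kerma_rate(k, 20.0, rc), k / 20.0)   # 0.125 vs 0.5
```

Matching the branches at `r_crit` keeps the model continuous, and the comparison in the last line illustrates why ignoring the inverse-square regime is conservative.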
NASA Astrophysics Data System (ADS)
Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.
2017-07-01
Methods for the determination of the efficiency of an aged high-purity germanium (HPGe) detector for gaseous sources have been presented in the paper. X-ray radiography of the detector has been performed to get detector dimensions for computational purposes. The dead layer thickness of the HPGe detector has been ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries has been undertaken to obtain the energy-dependent efficiency. Monte Carlo simulations have been performed for computing efficiencies for point, liquid and gaseous sources. Self-absorption correction factors have been obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point source methods have been used to estimate the efficiency for gaseous sources. The efficiencies determined from the present work have been used to estimate the activity of a cover gas sample of a fast reactor.
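The role of a self-absorption correction can be illustrated with one standard closed form, the transmission average for a uniform slab; the paper's actual volume-source equations and MCNP-derived factors may differ:

```python
import math

def slab_self_absorption(mu, thickness):
    """Average gamma transmission through a uniform self-absorbing slab:
    C = (1 - exp(-mu*t)) / (mu*t). A textbook closed form, offered here as an
    illustrative stand-in for the paper's volume-source corrections."""
    x = mu * thickness
    if x == 0.0:
        return 1.0
    return (1.0 - math.exp(-x)) / x

# The correction tends to 1 for thin or transparent sources and falls as the
# source becomes optically thick:
print(slab_self_absorption(0.001, 4.0))  # close to 1
print(slab_self_absorption(0.5, 4.0))    # noticeably below 1
```

Dividing a measured efficiency by such a factor converts a point-source calibration into an estimate for a distributed (e.g. gaseous) source.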
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobulnicky, Henry A.; Alexander, Michael J.; Babler, Brian L.
We characterize the completeness of point source lists from Spitzer Space Telescope surveys in the four Infrared Array Camera (IRAC) bandpasses, emphasizing the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE) programs (GLIMPSE I, II, 3D, 360; Deep GLIMPSE) and their resulting point source Catalogs and Archives. The analysis separately addresses effects of incompleteness resulting from high diffuse background emission and incompleteness resulting from point source confusion (i.e., crowding). An artificial star addition and extraction analysis demonstrates that completeness is strongly dependent on local background brightness and structure, with high-surface-brightness regions suffering up to five magnitudes of reduced sensitivity to point sources. This effect is most pronounced at the IRAC 5.8 and 8.0 μm bands where UV-excited polycyclic aromatic hydrocarbon emission produces bright, complex structures (photodissociation regions). With regard to diffuse background effects, we provide the completeness as a function of stellar magnitude and diffuse background level in graphical and tabular formats. These data are suitable for estimating completeness in the low-source-density limit in any of the four IRAC bands in GLIMPSE Catalogs and Archives and some other Spitzer IRAC programs that employ similar observational strategies and are processed by the GLIMPSE pipeline. By performing the same analysis on smoothed images we show that the point source incompleteness is primarily a consequence of structure in the diffuse background emission rather than photon noise. With regard to source confusion in the high-source-density regions of the Galactic Plane, we provide figures illustrating the 90% completeness levels as a function of point source density at each band.
We caution that completeness of the GLIMPSE 360/Deep GLIMPSE Catalogs is suppressed relative to the corresponding Archives as a consequence of rejecting stars that lie in the point-spread function wings of saturated sources. This effect is minor in regions of low saturated-star density, such as toward the Outer Galaxy, but significant along sightlines having a high density of saturated sources, especially for Deep GLIMPSE and other programs observing closer to the Galactic center using 12 s or longer exposure times.
NASA Astrophysics Data System (ADS)
Ajitanand, N. N.; Phenix Collaboration
2014-11-01
Two-pion interferometry measurements in d+Au and Au+Au collisions at √(s_NN) = 200 GeV are used to extract and compare the Gaussian source radii Rout, Rside and Rlong, which characterize the space-time extent of the emission sources. The comparisons, which are performed as a function of collision centrality and the mean transverse momentum for pion pairs, indicate strikingly similar patterns for the d+Au and Au+Au systems. They also indicate a linear dependence of Rside on the initial transverse geometric size R̄, as well as a smaller freeze-out size for the d+Au system. These patterns point to the important role of final-state rescattering effects in the reaction dynamics of d+Au collisions.
Hansman, Jan; Mrdja, Dusan; Slivka, Jaroslav; Krmar, Miodrag; Bikit, Istvan
2015-05-01
The activity of environmental samples is usually measured by high-resolution HPGe gamma spectrometers. In this work a set-up with a 9 in. × 9 in. NaI well detector of 3 in. thickness and a 3 in. × 3 in. plug detector in a 15-cm-thick lead shielding is considered as an alternative (Hansman, 2014). In spite of its much poorer resolution, it requires shorter measurement times and may possibly give better detection limits. In order to determine the U-238, Th-232, and K-40 content of the samples with this NaI(Tl) detector, the corresponding photopeak efficiencies must be known. These efficiencies can be found for a given source matrix and geometry by Geant4 simulation. We found discrepancies between simulated and experimental efficiencies of 5-50%, mainly attributable to light-collection effects within the detector volume, which were not taken into account in the simulations. The influence of random coincidence summing on detection efficiency for radionuclide activities in the range 130-4000 Bq was negligible. This paper also describes how the detection efficiency depends on the position of the radioactive point source. To avoid large dead time, relatively weak Mn-54, Co-60 and Na-22 point sources of a few kBq were used. Results for single gamma lines and also for coincidence-summing gamma lines are presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Congdon, Arthur B.; Keeton, Charles R.; Nordgren, C. Erik
2008-09-01
Gravitational lensing provides a unique and powerful probe of the mass distributions of distant galaxies. Four-image lens systems with fold and cusp configurations have two or three bright images near a critical point. Within the framework of singularity theory, we derive analytic relations that are satisfied for a light source that lies a small but finite distance from the astroid caustic of a four-image lens. Using a perturbative expansion of the image positions, we show that the time delay between the close pair of images in a fold lens scales with the cube of the image separation, with a constant of proportionality that depends on a particular third derivative of the lens potential. We also apply our formalism to cusp lenses, where we develop perturbative expressions for the image positions, magnifications and time delays of the images in a cusp triplet. Some of these results were derived previously for a source asymptotically close to a cusp point, but using a simplified form of the lens equation whose validity may be in doubt for sources that lie at astrophysically relevant distances from the caustic. Along with the work of Keeton, Gaudi & Petters, this paper demonstrates that perturbation theory plays an important role in theoretical lensing studies.
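The cubic scaling of the fold time delay can be checked numerically with a toy one-dimensional fold potential (our own construction, not the paper's lens models):

```python
import math

# Toy 1D fold lens: Fermat potential T(x; y) = (x - y)^2 / 2 - psi(x) with
# psi(x) = x^2/2 + b*x^3. Illustrative only; images satisfy T'(x) = 0, i.e.
# x = +/- sqrt(-y / (3b)) for source position y < 0 and b > 0.

def fermat(x, y, b):
    return 0.5 * (x - y) ** 2 - (0.5 * x ** 2 + b * x ** 3)

def pair_delay_and_separation(y, b):
    x_plus = math.sqrt(-y / (3.0 * b))
    x_minus = -x_plus
    d = x_plus - x_minus                       # image separation
    dt = fermat(x_plus, y, b) - fermat(x_minus, y, b)   # relative time delay
    return dt, d

b = 0.2
dt1, d1 = pair_delay_and_separation(-0.01, b)
dt2, d2 = pair_delay_and_separation(-0.0025, b)
# Halving the image separation cuts the delay by ~8x: dt scales with d^3
print(dt1 / dt2, (d1 / d2) ** 3)   # both ~8.0
```

In this toy model the delay works out to dt = b·d³/2 exactly, with the cubic coefficient set by the third derivative of the potential, mirroring the abstract's statement.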
NASA Astrophysics Data System (ADS)
Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.
2017-12-01
During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred km for large landslides). The recorded signals depend on the landslide seismic source and the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus can be used to get information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography only weakly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular-flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.
NASA Technical Reports Server (NTRS)
Krasowski, Michael J. (Inventor); Prokop, Norman F. (Inventor)
2017-01-01
A current-source logic gate built from depletion-mode field-effect transistors ("FETs") and resistors may include a current source, a current-steering switch input stage, and a resistor-divider level-shifting output stage. The current source may include a transistor and a current source resistor. The current-steering switch input stage may include a transistor to steer current to set an output stage bias point depending on an input logic signal state. The resistor-divider level-shifting output stage may include a first resistor and a second resistor to set the output stage bias point and produce valid output logic signal states. The transistor of the current-steering switch input stage may function as a switch to provide at least two operating points.
NASA Astrophysics Data System (ADS)
Zander, C.; Plastino, A. R.; Díaz-Alonso, J.
2015-11-01
We investigate time-dependent solutions for a non-linear Schrödinger equation recently proposed by Nassar and Miret-Artés (NM) to describe the continuous measurement of the position of a quantum particle (Nassar, 2013; Nassar and Miret-Artés, 2013). Here we extend these previous studies in two different directions. On the one hand, we incorporate a potential energy term in the NM equation and explore the corresponding wave packet dynamics, while in the previous works the analysis was restricted to the free-particle case. On the other hand, we investigate time-dependent solutions while previous studies focused on a stationary one. We obtain exact wave packet solutions for linear and quadratic potentials, and approximate solutions for the Morse potential. The free-particle case is also revisited from a time-dependent point of view. Our analysis of time-dependent solutions allows us to determine the stability properties of the stationary solution considered in Nassar (2013) and Nassar and Miret-Artés (2013). On the basis of these results we reconsider the Bohmian approach to the NM equation, taking into account the fact that the evolution equation for the probability density ρ = |ψ|^2 is not a continuity equation. We show that the effect of the source term appearing in the evolution equation for ρ has to be explicitly taken into account when interpreting the NM equation from a Bohmian point of view.
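The Bohmian point made at the end can be written schematically; the NM equation's exact source term is not reproduced here, so the form below is only a generic illustration of a density evolution that is not a bare continuity equation:

```latex
% Schematic only: a probability density transported by a Bohmian velocity
% field v(x,t) but with a measurement-induced source term S(x,t).
\frac{\partial \rho}{\partial t}
  + \frac{\partial}{\partial x}\bigl(\rho\, v\bigr) = S(x,t),
\qquad \rho = |\psi|^{2}.
% For S \neq 0 the norm \int \rho\,\mathrm{d}x is not conserved, which is why
% the source term must enter any Bohmian reading of the NM equation.
```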
Sound source localization on an axial fan at different operating points
NASA Astrophysics Data System (ADS)
Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes
2016-08-01
A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.
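The virtual-rotating-array step can be sketched as interpolation of the stationary ring samples to a microphone that co-rotates with the fan; linear interpolation in angle is a simplification of the published method, and all parameters below are illustrative:

```python
import math

def virtual_rotating_sample(stationary, mic_angle, t, omega):
    """Interpolate a ring of stationary microphone samples to one virtual
    microphone co-rotating at angular rate omega. `stationary` holds the
    instantaneous samples p_j at angles 2*pi*j/M around the ring."""
    M = len(stationary)
    angle = (mic_angle + omega * t) % (2.0 * math.pi)
    pos = angle / (2.0 * math.pi) * M
    j = int(pos) % M
    frac = pos - int(pos)
    return (1.0 - frac) * stationary[j] + frac * stationary[(j + 1) % M]

# A pressure pattern rotating with the fan, p(phi, t) = cos(phi - omega*t),
# looks steady to the co-rotating virtual microphone:
M, omega = 64, 2.0 * math.pi * 25.0          # 64 mics, 25 rev/s (illustrative)
readings = []
for t in (0.0, 0.013, 0.021):
    field = [math.cos(2.0 * math.pi * j / M - omega * t) for j in range(M)]
    readings.append(virtual_rotating_sample(field, 0.0, t, omega))
print(readings)   # all close to cos(0) = 1.0
```

Once the rotating frame is synthesized this way, standard beamformers such as CLEAN-SC can treat the fan as a stationary source distribution.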
Evaluating tsunami hazards from debris flows
Watts, P.; Walder, J.S.
2003-01-01
Debris flows that enter water bodies may have significant kinetic energy, some of which is transferred to water motion or waves that can impact shorelines and structures. The associated hazards depend on the location of the affected area relative to the point at which the debris flow enters the water. Three distinct regions (splash zone, near field, and far field) may be identified. Experiments demonstrate that characteristics of the near-field water wave, which is the only coherent wave to emerge from the splash zone, depend primarily on debris flow volume, debris flow submerged time of motion, and water depth at the point where debris flow motion stops. Near-field wave characteristics may commonly be used as a proxy source for computational tsunami propagation. This result is used to assess hazards associated with potential debris flows entering a reservoir in the northwestern USA. © 2003 Millpress.
Strategies for lidar characterization of particulates from point and area sources
NASA Astrophysics Data System (ADS)
Wojcik, Michael D.; Moore, Kori D.; Martin, Randal S.; Hatfield, Jerry
2010-10-01
Use of ground-based remote sensing technologies such as scanning lidar systems (light detection and ranging) has gained traction in characterizing ambient aerosols due to some key advantages such as wide area of regard (10 km2), fast response time, high spatial resolution (<10 m) and high sensitivity. Energy Dynamics Laboratory and Utah State University, in conjunction with the USDA-ARS, have developed a three-wavelength scanning lidar system called Aglite that has been successfully deployed to characterize particle motion, concentration, and size distribution at both point and diffuse area sources in agricultural and industrial settings. A suite of mass-based and size-distribution point sensors is used to locally calibrate the lidar. Generating meaningful particle size distribution, mass concentration, and emission rate results based on lidar data is dependent on strategic onsite deployment of these point sensors with successful local meteorological measurements. Deployment strategies learned from five years of field use of this measurement system include characterization of the local meteorology and its predictability prior to deployment, placement of point sensors to prevent contamination and overloading, positioning of the lidar and beam plane to avoid hard-target interferences, and the usefulness of photographic and written observational data.
Goede, Simon L; Leow, Melvin Khee-Shing
2013-01-01
This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainty are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by the timing of thyroid medications, (3) error sensitivity in ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4], which amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range, and (5) memory effects (rate-independent hysteresis). When the main uncertainties in TFTs are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.
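Error source (4), rounding of [FT4] decimals, can be illustrated with a negative-exponential HP characteristic; the model form TSH = S·exp(−φ·FT4) is a common choice for such curves, and the parameter values below are invented, not the paper's:

```python
import math

# Illustrative negative-exponential HP characteristic TSH = S * exp(-phi * FT4).
# S and phi are made-up values, not fitted clinical parameters.
S, phi = 1000.0, 0.4   # TSH in mU/L for FT4 in pmol/L (illustrative units)

def tsh(ft4):
    return S * math.exp(-phi * ft4)

def rounding_error(ft4, decimals=1):
    """Absolute TSH error caused by rounding the FT4 measurement."""
    return abs(tsh(round(ft4, decimals)) - tsh(ft4))

# The same rounding of FT4 distorts TSH tens of times more at the low-FT4 end,
# where the exponential curve is steep:
print(rounding_error(8.04), rounding_error(18.04))
```

This is the mechanism behind the abstract's warning that decimal truncation of [FT4] amplifies curve-fitting errors in the lower [FT4] range.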
Auger, E.; D'Auria, L.; Martini, M.; Chouet, B.; Dawson, P.
2006-01-01
We present a comprehensive processing tool for the real-time analysis of the source mechanism of very long period (VLP) seismic data based on waveform inversions performed in the frequency domain for a point source. A search for the source providing the best-fitting solution is conducted over a three-dimensional grid of assumed source locations, in which the Green's functions associated with each point source are calculated by finite differences using the reciprocal relation between source and receiver. Tests performed on 62 nodes of a Linux cluster indicate that the waveform inversion and search for the best-fitting signal over 100,000 point sources require roughly 30 s of processing time for a 2-min-long record. The procedure is applied to post-processing of a data archive and to continuous automatic inversion of real-time data at Stromboli, providing insights into different modes of degassing at this volcano. Copyright 2006 by the American Geophysical Union.
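A toy version of the grid search: for each candidate source, fit the recorded spectra with that source's Green's functions by least squares and keep the smallest misfit. The complex "spectra" below are made up, not finite-difference Green's functions:

```python
# Toy frequency-domain grid search in the spirit of the abstract: for each
# candidate point source, fit the data with that source's Green's functions
# by least squares and keep the location with the smallest residual.
def misfit(data, green):
    # one-parameter least squares: best scalar source amplitude, then residual
    num = sum(d * g.conjugate() for d, g in zip(data, green))
    den = sum(abs(g) ** 2 for g in green)
    a = num / den
    return sum(abs(d - a * g) ** 2 for d, g in zip(data, green))

candidates = {
    "node_A": [1 + 1j, 2 - 1j, 0.5 + 0j],
    "node_B": [0.3 + 0.1j, -1 + 2j, 1 + 1j],
    "node_C": [2 + 0j, 1 + 1j, -0.5 + 0.5j],
}
true_amp = 3.0 - 2.0j
data = [true_amp * g for g in candidates["node_B"]]   # synthesized from node_B

best = min(candidates, key=lambda k: misfit(data, candidates[k]))
print(best)   # node_B
```

The real system does this over ~100,000 grid nodes with reciprocity-computed Green's functions; the toy keeps only the fit-and-rank logic.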
Thick-target bremsstrahlung interpretation of short time-scale solar hard X-ray features
NASA Technical Reports Server (NTRS)
Emslie, A. G.
1983-01-01
Steady-state analyses of bremsstrahlung hard X-ray production in solar flares are appropriate only if the lifetime of the high energy electrons in the X-ray source is much shorter than the duration of the observed X-ray burst. For a thick-target nonthermal model, this implies that a full time-dependent analysis is required when the duration of the burst is comparable to the collisional lifetime of the injected electrons, in turn set by the lengths and densities of the flaring region. In this paper we present the results of such a time-dependent analysis, and we point out that the intrinsic temporal signature of the thick-target production mechanism, caused by the finite travel time of the electrons through the target, may indeed rule out such a mechanism for extremely short duration hard X-ray events.
NASA Astrophysics Data System (ADS)
Zarnetske, J. P.; Abbott, B. W.; Bowden, W. B.; Iannucci, F.; Griffin, N.; Parker, S.; Pinay, G.; Aanderud, Z.
2017-12-01
Dissolved organic carbon (DOC), nutrients, and other solute concentrations are increasing in rivers across the Arctic. Two hypotheses have been proposed to explain these trends: (1) distributed, top-down permafrost degradation, and (2) discrete, point-source delivery of DOC and nutrients from permafrost collapse features (thermokarst). While long-term monitoring at a single station cannot discriminate between these mechanisms, synoptic sampling of multiple points in the stream network could reveal the spatial structure of solute sources. In this context, we sampled carbon and nutrient chemistry three times over two years in 119 subcatchments of three distinct Arctic catchments (North Slope, Alaska). Subcatchments ranged from 0.1 to 80 km2, and included three distinct types of Arctic landscapes - mountainous, tundra, and glacial-lake catchments. We quantified the stability of spatial patterns in synoptic water chemistry and analyzed high-frequency time series from the catchment outlets across the thaw season to identify source areas for DOC, nutrients, and major ions. We found that variance in solute concentrations between subcatchments collapsed at spatial scales between 1 to 20 km2, indicating a continuum of diffuse- and point-source dynamics, depending on solute and catchment characteristics (e.g. reactivity, topography, vegetation, surficial geology). Spatially-distributed mass balance revealed conservative transport of DOC and nitrogen and indicated possibly strong in-stream retention of phosphorus, providing a network-scale confirmation of previous reach-scale studies in these Arctic catchments. Overall, we present new approaches to analyzing synoptic data for change detection and quantification of ecohydrological mechanisms in ecosystems in the Arctic and beyond.
Speech-Message Extraction from Interference Introduced by External Distributed Sources
NASA Astrophysics Data System (ADS)
Kanakov, V. A.; Mironov, N. A.
2017-08-01
The problem addressed in this study is the extraction of a speech signal originating from a certain spatial point and the calculation of the intelligibility of the extracted voice message. It is solved by a method that decreases the influence of interfering speech-message sources on the extracted signal. The method is based on introducing time delays, which depend on the spatial coordinates, into the recording channels. Audio recordings of the voices of eight different people were used as test objects during the studies. It is shown that an increase in the number of microphones improves the intelligibility of the extracted speech message.
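A minimal delay-and-sum sketch of the extraction idea; the integer-sample delays and synthetic channels are illustrative, and a real system would derive (generally fractional) delays from the chosen spatial point:

```python
# Minimal delay-and-sum sketch: each channel is advanced by the propagation
# delay from the chosen spatial point, so the wanted talker adds coherently
# while sources at other points arrive misaligned and are attenuated.
def delay_and_sum(channels, delays):
    """channels: list of sample lists; delays: per-channel delays in samples."""
    n = min(len(ch) - d for ch, d in zip(channels, delays))
    return [sum(ch[d + i] for ch, d in zip(channels, delays)) / len(channels)
            for i in range(n)]

# A test signal arriving with different delays on three microphones:
signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
delays = [0, 2, 3]
channels = [[0.0] * d + signal + [0.0] * (3 - d) for d in delays]

recovered = delay_and_sum(channels, delays)
print(recovered)   # matches the original signal
```

Adding microphones increases the coherent gain of the aligned source relative to misaligned interference, which is the mechanism behind the reported intelligibility improvement.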
NASA Astrophysics Data System (ADS)
Horstmann, T.; Harrington, R. M.; Cochran, E. S.
2012-12-01
Frequently, the lack of distinctive phase arrivals makes locating tectonic tremor more challenging than locating earthquakes. Classic location algorithms based on travel times cannot be directly applied because impulsive phase arrivals are often difficult to recognize. Traditional location algorithms are often modified to use phase arrivals identified from stacks of recurring low-frequency events (LFEs) observed within tremor episodes, rather than single events. Stacking the LFE waveforms improves the signal-to-noise ratio for the otherwise non-distinct phase arrivals. In this study, we apply a different method to locate tectonic tremor: a modified time-reversal imaging approach that potentially exploits the information from the entire tremor waveform instead of phase arrivals from individual LFEs. Time reversal imaging uses the waveforms of a given seismic source recorded by multiple seismometers at discrete points on the surface and a 3D velocity model to rebroadcast the waveforms back into the medium to identify the seismic source location. In practice, the method works by reversing the seismograms recorded at each of the stations in time, and back-propagating them from the receiver location individually into the sub-surface as a new source time function. We use a staggered-grid, finite-difference code with 2.5 ms time steps and a grid node spacing of 50 m to compute the rebroadcast wavefield. We calculate the time-dependent curl field at each grid point of the model volume for each back-propagated seismogram. To locate the tremor, we assume that the source time function back-propagated from each individual station produces a similar curl field at the source position. We then cross-correlate the time dependent curl field functions and calculate a median cross-correlation coefficient at each grid point. The highest median cross-correlation coefficient in the model volume is expected to represent the source location. 
For our analysis, we use the velocity model of Thurber et al. (2006) interpolated to a grid spacing of 50 m. Such grid spacing corresponds to frequencies of up to 8 Hz, which is suitable to calculate the wave propagation of tremor. Our dataset contains continuous broadband data from 13 STS-2 seismometers deployed from May 2010 to July 2011 along the Cholame segment of the San Andreas Fault as well as data from the HRSN and PBO networks. Initial synthetic results from tests on a 2D plane using a line of 15 receivers suggest that we are able to recover accurate event locations to within 100 m horizontally and 300 m in depth. We conduct additional synthetic tests to determine the influence of signal-to-noise ratio, number of stations used, and the uncertainty in the velocity model on the location result by adding noise to the seismograms and perturbations to the velocity model. Preliminary results show accurate location results to within 400 m with a median signal-to-noise ratio of 3.5 and 5% perturbations in the velocity model. The next steps will entail performing the synthetic tests on the 3D velocity model, and applying the method to tremor waveforms. Furthermore, we will determine the spatial and temporal distribution of the source locations and compare our results to those by Sumy and others.
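The location principle described above, back-propagated wavefields aligning only at the true source, can be caricatured in one dimension. This sketch replaces the 3D finite-difference curl-field correlation with simple travel-time shifts of delta-like records, so all names and values are illustrative:

```python
# Caricature of the location principle: records are time-shifted by the travel
# time from each trial grid point; only at the true source do they align.
# Constant velocity and delta-like pulses are gross simplifications of the
# study's 3D elastic back-propagation and curl-field correlation.
V = 2.0                       # medium velocity (km/s, illustrative)
stations = [0.0, 3.0, 7.0]    # station coordinates (km)
true_src = 4.0
DT = 0.01                     # sample interval (s)

def pulse_record(station, src, nsamp=2000):
    rec = [0.0] * nsamp
    rec[round(abs(station - src) / V / DT)] = 1.0   # delta pulse at arrival time
    return rec

records = [pulse_record(s, true_src) for s in stations]

def stack_power(trial):
    """Shift each record by the trial point's travel time and stack; the stack
    peaks only where all shifted pulses coincide."""
    shifts = [round(abs(s - trial) / V / DT) for s in stations]
    n = min(len(r) - sh for r, sh in zip(records, shifts))
    return max(sum(r[sh + i] for r, sh in zip(records, shifts)) for i in range(n))

grid = [x * 0.5 for x in range(0, 15)]
best = max(grid, key=stack_power)
print(best)   # 4.0
```

Replacing the stack maximum with a median cross-correlation of back-propagated fields gives the flavor of the actual method, at vastly greater cost.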
NASA Astrophysics Data System (ADS)
Yang, Huizhen; Ma, Liang; Wang, Bin
2018-01-01
In contrast to the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system doesn't need a WFS to measure the wavefront aberrations. It is simpler than conventional AO in system architecture and can be applied in complex conditions. The model-based WFSless system has great potential for real-time correction applications because of its fast convergence. The control algorithm of the model-based WFSless system rests on an important theoretical result: the linear relation between the mean-square gradient (MSG) magnitude of the wavefront aberration and the second moment of the masked intensity distribution in the focal plane (also called the masked detector signal, MDS). The linear dependence between MSG and MDS for point-source imaging with a CCD sensor is discussed from theory and simulation in this paper. The theoretical relationship between MSG and MDS is given based on our previous work. To verify the linear relation for the point source, we set up an imaging model under atmospheric turbulence. Additionally, the value of MDS will deviate from the theoretical one because of detector noise, and this deviation will affect the correction quality. The theoretical results under noise are obtained through derivation, and the linear relation between MSG and MDS under noise is then examined with the imaging model. Results show that the linear relation between MSG and MDS under noise is also maintained well, which provides theoretical support for applications of the model-based WFSless system.
Frapbot: An open-source application for FRAP data.
Kohze, Robin; Dieteren, Cindy E J; Koopman, Werner J H; Brock, Roland; Schmidt, Samuel
2017-08-01
We introduce Frapbot, a free-of-charge open source software web application written in R, which provides manual and automated analyses of fluorescence recovery after photobleaching (FRAP) datasets. For automated operation, starting from data tables containing columns of time-dependent intensity values for various regions of interest within the images, a pattern recognition algorithm recognizes the relevant columns and identifies the presence or absence of prebleach values and the time point of photobleaching. Raw data, residuals, normalization, and boxplots indicating the distribution of half times of recovery (t1/2) of all uploaded files are visualized instantly in a batch-wise manner using a variety of user-definable fitting options. The fitted results are provided as a .zip file, which contains .csv formatted output tables. Alternatively, the user can manually control any of the options described earlier. © 2017 International Society for Advancement of Cytometry.
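The single-exponential recovery model is a common choice for extracting the half time of recovery t1/2 from normalized FRAP curves. The sketch below is a minimal illustration of that idea in Python (not Frapbot's R code, and not its actual fitting options): it assumes I(t) = I_inf(1 - exp(-k t)), log-linearizes, fits k by least squares through the origin, and returns t1/2 = ln 2 / k.

```python
import math

def fit_t_half(times, intensities, i_inf):
    # Assume single-exponential recovery I(t) = i_inf * (1 - exp(-k t)).
    # Log-linearize: ln(1 - I/i_inf) = -k t, then fit k by least squares
    # through the origin.
    num = den = 0.0
    for t, i in zip(times, intensities):
        frac = 1.0 - i / i_inf
        if frac <= 0:
            continue  # fully recovered points carry no information here
        y = math.log(frac)
        num += t * y
        den += t * t
    k = -num / den
    return math.log(2) / k  # half time of recovery

# Synthetic noiseless example with k = 0.1 /s, so t1/2 = ln 2 / 0.1
ts = [i * 0.5 for i in range(1, 40)]
ys = [1.0 * (1 - math.exp(-0.1 * t)) for t in ts]
t_half = fit_t_half(ts, ys, 1.0)
```

On noisy data one would typically use a nonlinear least-squares fit instead of the log transform, since the transform amplifies noise near full recovery.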
Peng, Nie; Bang-Fa, Ni; Wei-Zhi, Tian
2013-02-01
Application of the effective interaction depth (EID) principle for parametric normalization of full energy peak efficiencies at different counting positions, originally developed for quasi-point sources, has been extended to bulky sources (within ∅30 mm×40 mm) with arbitrary matrices. It is also proved that the EID function for a quasi-point source can be used directly for cylindrical bulky sources (within ∅30 mm×40 mm), with the geometric center as the effective point source, for low atomic number (Z) and low density (D) media and high energy γ-rays. In general, however, the EID for bulky sources depends on the Z and D of the medium and the energy of the γ-rays in question. In addition, the EID principle was theoretically verified by MCNP calculations. Copyright © 2012 Elsevier Ltd. All rights reserved.
Time-integrated Searches for Point-like Sources of Neutrinos with the 40-string IceCube Detector
NASA Astrophysics Data System (ADS)
Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; BenZvi, S.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Braun, J.; Brown, A. M.; Buitink, S.; Carson, M.; Chirkin, D.; Christy, B.; Clem, J.; Clevermann, F.; Cohen, S.; Colnard, C.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Daughhetee, J.; Davis, J. C.; De Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; DeYoung, T.; Díaz-Vélez, J. C.; Dierckxsens, M.; Dreyer, J.; Dumm, J. P.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Geisler, M.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Goodman, J. A.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Homeier, A.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kemming, N.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Krings, T.; Kroll, G.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Larson, M. 
J.; Lauer, R.; Lehmann, R.; Lünemann, J.; Madsen, J.; Majumdar, P.; Marotta, A.; Maruyama, R.; Mase, K.; Matis, H. S.; Matusik, M.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Middell, E.; Milke, N.; Miller, J.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Naumann, U.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; O'Murchadha, A.; Ono, M.; Panknin, S.; Paul, L.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Porrata, R.; Posselt, J.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Ruhe, T.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Santander, M.; Sarkar, S.; Schatto, K.; Schlenstedt, S.; Schmidt, T.; Schukraft, A.; Schultes, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Singh, K.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sullivan, G. W.; Swillens, Q.; Taavola, H.; Taboada, I.; Tamburro, A.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Toscano, S.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; van Santen, J.; Vehring, M.; Voge, M.; Voigt, B.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Wolf, M.; Woschnagg, K.; Xu, C.; Xu, X. W.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; IceCube Collaboration
2011-05-01
We present the results of time-integrated searches for astrophysical neutrino sources in both the northern and southern skies. Data were collected using the partially completed IceCube detector in the 40-string configuration recorded between 2008 April 5 and 2009 May 20, totaling 375.5 days livetime. An unbinned maximum likelihood ratio method is used to search for astrophysical signals. The data sample contains 36,900 events: 14,121 from the northern sky, mostly muons induced by atmospheric neutrinos, and 22,779 from the southern sky, mostly high-energy atmospheric muons. The analysis includes searches for individual point sources and stacked searches for sources in a common class, sometimes including a spatial extent. While this analysis is sensitive to TeV-PeV energy neutrinos in the northern sky, it is primarily sensitive to neutrinos with energy greater than about 1 PeV in the southern sky. No evidence for a signal is found in any of the searches. Limits are set for neutrino fluxes from astrophysical sources over the entire sky and compared to predictions. The sensitivity is at least a factor of two better than previous searches (depending on declination), with 90% confidence level muon neutrino flux upper limits being between E^2 dΦ/dE ~ 2-200 × 10^-12 TeV cm^-2 s^-1 in the northern sky and between 3-700 × 10^-12 TeV cm^-2 s^-1 in the southern sky. The stacked source searches provide the best limits for specific source classes. The full IceCube detector is expected to improve the sensitivity to dΦ/dE ∝ E^-2 sources by another factor of two in the first year of operation.
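The unbinned maximum likelihood ratio method mentioned above can be illustrated on a toy one-dimensional dataset. The mixture likelihood and the scan over the signal strength n_s below follow the standard textbook form; the Gaussian signal PDF, the uniform background, the source position, and all numbers are illustrative assumptions, not IceCube's implementation.

```python
import math, random

def log_likelihood(n_s, sig_pdf, bkg_pdf):
    # Mixture likelihood: each event is signal with weight n_s/N and
    # background with weight (1 - n_s/N).
    N = len(sig_pdf)
    return sum(math.log(n_s / N * s + (1 - n_s / N) * b)
               for s, b in zip(sig_pdf, bkg_pdf))

random.seed(1)
src, sigma = 0.0, 0.1   # hypothetical source position and angular resolution
events = ([random.uniform(-1, 1) for _ in range(980)] +
          [random.gauss(src, sigma) for _ in range(20)])  # injected signal

sig = [math.exp(-(x - src) ** 2 / (2 * sigma ** 2)) /
       (sigma * math.sqrt(2 * math.pi)) for x in events]
bkg = [0.5] * len(events)  # uniform background PDF on [-1, 1]

# Scan n_s, keep the maximum-likelihood value, and form the test statistic
best_ns = max(range(100), key=lambda n: log_likelihood(n, sig, bkg))
test_stat = 2 * (log_likelihood(best_ns, sig, bkg) - log_likelihood(0, sig, bkg))
```

In a real analysis the signal PDF also carries an energy term and the test statistic is calibrated against scrambled datasets; here it only demonstrates the likelihood-ratio structure.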
NASA Astrophysics Data System (ADS)
Dupas, Rémi; Tittel, Jörg; Jordan, Phil; Musolff, Andreas; Rode, Michael
2018-05-01
A common assumption in phosphorus (P) load apportionment studies is that P loads in rivers consist of flow-independent point source emissions (mainly of domestic and industrial origin) and flow-dependent diffuse source emissions (mainly of agricultural origin). Hence, rivers dominated by point sources will exhibit their highest P concentration during low flow, when flow dilution capacity is minimal, whereas rivers dominated by diffuse sources will exhibit their highest P concentration during high flow, when land-to-river hydrological connectivity is maximal. Here, we show that Soluble Reactive P (SRP) concentrations in three forested catchments free of point sources exhibited seasonal maxima during the summer low-flow period, i.e. a pattern expected in point source dominated areas. A load apportionment model (LAM) is used to show how point source contributions may have been overestimated in previous studies because of a biogeochemical process mimicking a point source signal. Almost twenty-two years (March 1995-September 2016) of monthly monitoring data of SRP, dissolved iron (Fe) and nitrate-N (NO3) were used to investigate the underlying mechanisms: SRP and Fe exhibited similar seasonal patterns, opposite to that of NO3. We hypothesise that Fe oxyhydroxide reductive dissolution might be the cause of SRP release during the summer period, and that NO3 might act as a redox buffer, controlling the seasonality of SRP release. We conclude that LAMs may overestimate the contribution of P point sources, especially during the summer low-flow period, when eutrophication risk is maximal.
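Load apportionment models are often written as a two-term power law of discharge Q, with one exponent below 1 (a flow-diluted, point-source-like term) and one above 1 (a flow-driven, diffuse term). The sketch below uses that generic form with invented coefficients to show why a low-flow concentration maximum is normally read as a point source signature; it is not the specific LAM fitted in this study.

```python
def lam_concentration(q, a, b, e, f):
    # Generic two-term load apportionment model:
    #   C(Q) = A*Q**(B-1) + E*Q**(F-1)
    # B < 1 gives a term diluted at high flow (point-source-like),
    # F > 1 a term that grows with flow (diffuse-like).
    point = a * q ** (b - 1)
    diffuse = e * q ** (f - 1)
    return point, diffuse

# Illustrative coefficients only: the point-like term dominates at low flow,
# the diffuse term at high flow.
low = lam_concentration(0.1, a=1.0, b=0.9, e=1.0, f=1.5)
high = lam_concentration(10.0, a=1.0, b=0.9, e=1.0, f=1.5)
```

The study's point is precisely that a redox-driven release process can mimic the B < 1 term, so a good LAM fit alone does not prove a point source.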
Hybrid Skyshine Calculations for Complex Neutron and Gamma-Ray Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J. Kenneth
2000-10-15
A two-step hybrid method is described for computationally efficient estimation of neutron and gamma-ray skyshine doses far from a shielded source. First, the energy and angular dependence of radiation escaping into the atmosphere from a source containment is determined by a detailed transport model such as MCNP. Then, an effective point source with this energy and angular dependence is used in the integral line-beam method to transport the radiation through the atmosphere up to 2500 m from the source. An example spent-fuel storage cask is analyzed with this hybrid method and compared to detailed MCNP skyshine calculations.
FIRST-ORDER COSMOLOGICAL PERTURBATIONS ENGENDERED BY POINT-LIKE MASSES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eingorn, Maxim, E-mail: maxim.eingorn@gmail.com
2016-07-10
In the framework of the concordance cosmological model, the first-order scalar and vector perturbations of the homogeneous background are derived in the weak gravitational field limit without any supplementary approximations. The sources of these perturbations (inhomogeneities) are presented in the discrete form of a system of separate point-like gravitating masses. The expressions found for the metric corrections are valid at all (sub-horizon and super-horizon) scales and converge at all points except at the locations of the sources. The average values of these metric corrections are zero (thus, first-order backreaction effects are absent). Both the Minkowski background limit and the Newtonian cosmological approximation are reached under certain well-defined conditions. An important feature of the velocity-independent part of the scalar perturbation is revealed: up to an additive constant, this part represents a sum of Yukawa potentials produced by inhomogeneities with the same finite time-dependent Yukawa interaction range. The suggested connection between this range and the homogeneity scale is briefly discussed along with other possible physical implications.
Effect of distance-related heterogeneity on population size estimates from point counts
Efford, Murray G.; Dawson, Deanna K.
2009-01-01
Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; values of sigma inferred from published studies were often 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
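The simulation setup described above, birds distributed around a point and detected per occasion with a half-normal function of distance and intercept g(0) = 1, can be sketched as follows. The bird number, radius, sigma, and seed are illustrative values, not those of the study.

```python
import math, random

def simulate_point_count(n_birds, w, sigma, occasions=4, seed=0):
    # Detection probability per occasion is half-normal in distance r:
    # g(r) = exp(-r**2 / (2 * sigma**2)), so the intercept g(0) = 1.
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_birds):
        # Uniform position in a disc of radius w: r = w * sqrt(U)
        r = w * math.sqrt(rng.random())
        p = math.exp(-r ** 2 / (2 * sigma ** 2))
        # A bird is counted if detected on at least one of the occasions
        if any(rng.random() < p for _ in range(occasions)):
            detected += 1
    return detected

# With sigma = 50 m and w = 100 m, birds near the plot edge are often
# missed, so the raw count underestimates the 1000 birds present.
count = simulate_point_count(1000, w=100.0, sigma=50.0)
```

Estimators that ignore the distance-related heterogeneity treat all undetected birds as sharing one detection probability, which is exactly the bias mechanism the abstract examines.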
NASA Astrophysics Data System (ADS)
Kellerman, Adam; Makarevich, Roman; Spanswick, Emma; Donovan, Eric; Shprits, Yuri
2016-07-01
Energetic electrons in the tens of keV range precipitate into the upper D- and lower E-region ionosphere and are responsible for enhanced ionization. The same particles are important in the inner magnetosphere, as they provide a source of energy for waves and thus relate to relativistic electron enhancements in Earth's radiation belts. In situ observations of plasma populations and waves are usually limited to a single point, which complicates temporal and spatial analysis. Also, the lifespan of satellite missions is often limited to several years, which does not allow one to infer the long-term climatology of particle precipitation important for ionospheric conditions at high latitudes. Multi-point remote sensing of ionospheric plasma conditions can provide a global view of both ionospheric and magnetospheric conditions, and the coupling between magnetospheric and ionospheric phenomena can be examined on time scales that allow comprehensive statistical analysis. In this study we utilize multi-point riometer measurements in conjunction with in situ satellite data and physics-based modeling to investigate the spatio-temporal and energy-dependent response of riometer absorption. Quantifying this relationship may be a key to future advancements in our understanding of the complex D-region ionosphere, and may lead to enhanced specification of auroral precipitation both during individual events and over climatological time scales.
A time-dependent search for high-energy neutrinos from bright GRBs with ANTARES
NASA Astrophysics Data System (ADS)
Celli, Silvia
2017-03-01
Astrophysical point-like neutrino sources, like Gamma-Ray Bursts (GRBs), are one of the main targets for neutrino telescopes, since they are among the best candidates for Ultra-High-Energy Cosmic Ray (UHECR) acceleration. From the interaction between the accelerated protons and the intense radiation fields of the source jet, charged mesons are produced, which then decay into neutrinos. The methods and results of a search for high-energy neutrinos in spatial and temporal correlation with the detected gamma-ray emission are presented for four bright GRBs observed between 2008 and 2013: a time-dependent analysis, optimised for each flare of the selected bursts, is performed using detailed predicted neutrino spectra. The internal shock scenario of the fireball model is investigated, relying on the neutrino spectra computed with the numerical code NeuCosmA. The analysis is optimised on a per-burst basis through maximisation of the signal discovery probability. Since no events in the ANTARES data passed the optimised cuts, 90% C.L. upper limits are derived on the expected neutrino fluences.
Generalized Fluid System Simulation Program (GFSSP) - Version 6
NASA Technical Reports Server (NTRS)
Majumdar, Alok; LeClair, Andre; Moore, Ric; Schallhorn, Paul
2015-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors, flow control valves, and external body forces such as gravity and centrifugal force. The thermo-fluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids, and 24 different resistance/source options are provided for modeling momentum sources or sinks in the branches. Users can introduce new physics and non-linear, time-dependent boundary conditions through a user subroutine.
Real-time correction of tsunami site effect by frequency-dependent tsunami-amplification factor
NASA Astrophysics Data System (ADS)
Tsushima, H.
2017-12-01
For tsunami early warning, I developed a frequency-dependent tsunami-amplification factor and used it to design a recursive digital filter applicable to real-time correction of the tsunami site response. In this study, I assumed that a tsunami waveform at an observing point can be modeled by the convolution of source, path and site effects in the time domain. Under this assumption, the spectral ratio between offshore and the nearby coast can be regarded as the site response (i.e. a frequency-dependent amplification factor). If the amplification factor can be prepared before tsunamigenic earthquakes, its temporal convolution with an offshore tsunami waveform provides a tsunami prediction at the coast in real time. In this study, tsunami waveforms calculated by numerical simulations were used to develop the frequency-dependent tsunami-amplification factor. Firstly, I performed numerical tsunami simulations based on nonlinear shallow-water theory for many tsunamigenic earthquake scenarios, varying the seismic magnitudes and locations. The resulting tsunami waveforms at offshore and nearby coastal observing points were then used in a spectral-ratio analysis. The average of the resulting spectral ratios over the tsunamigenic-earthquake scenarios is regarded as the frequency-dependent amplification factor. Finally, the estimated amplification factor is used to design a recursive digital filter that is applicable in the time domain. The above procedure is applied to Miyako bay on the Pacific coast of northeastern Japan. The averaged tsunami-height spectral ratio (i.e. amplification factor) between the location at the center of the bay and the outside shows a peak at a wave period of 20 min. A recursive digital filter based on the estimated amplification factor shows good performance in real-time correction of the tsunami-height amplification due to the site effect. This study is supported by Japan Society for the Promotion of Science (JSPS) KAKENHI grant 15K16309.
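The final step, turning a frequency-dependent amplification factor into a time-domain recursive filter, can be illustrated with a simple two-pole resonator tuned to the 20-min spectral peak reported for Miyako bay. The study's actual filter is designed from modeled spectral ratios, so the filter below, its pole radius, and the sampling interval are stand-in assumptions.

```python
import math

def resonator(x, period, dt, r=0.95):
    # Two-pole recursive filter y[n] = g*x[n] + a1*y[n-1] + a2*y[n-2]
    # with a gain peak near the given wave period; the gain scaling is
    # only roughly normalized.
    w0 = 2 * math.pi * dt / period
    a1, a2 = 2 * r * math.cos(w0), -r * r
    g = 1 - r
    y = [0.0, 0.0]
    for n in range(2, len(x) + 2):
        y.append(g * x[n - 2] + a1 * y[n - 1] + a2 * y[n - 2])
    return y[2:]

dt = 10.0                                  # sampling interval, seconds
t = [n * dt for n in range(2000)]
on = [math.sin(2 * math.pi * ti / 1200) for ti in t]   # 20-min wave
off = [math.sin(2 * math.pi * ti / 200) for ti in t]   # 200-s wave
# Steady-state output amplitude, skipping the filter transient
gain_on = max(abs(v) for v in resonator(on, 1200, dt)[1000:])
gain_off = max(abs(v) for v in resonator(off, 1200, dt)[1000:])
```

A wave at the 20-min resonance emerges strongly amplified relative to an off-resonance wave, which is the behavior the site-response filter needs to reproduce in real time.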
Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes
NASA Astrophysics Data System (ADS)
Yamada, M.; Mori, J. J.
2009-12-01
Current earthquake early warning systems assume point source models for the rupture. However, for large earthquakes, the fault rupture length can be of the order of tens to hundreds of kilometers, and the prediction of ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point source model may underestimate the ground motion at a site if a station is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong motion records (Yamada et al., 2007). Here, we defined the near-source region as an area with a fault rupture distance less than 10 km. If we have ground motion records at a station, the probability that the station is located in the near-source region is P = 1/(1 + exp(-f)), where f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091, and Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that the station is located in the near-source region, so the resolution of the proposed method depends on the station density. The information on the fault rupture location is a group of points where the stations are located. However, for practical purposes, the 2-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the methodology of NS/FS classification to characterize 2-dimensional fault geometries and apply it to strong motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations, and convert the point fault locations to 2-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1.
Furthermore, we illustrate our method with strong motion data of the 2007 Noto-hanto earthquake, 2008 Iwate-Miyagi earthquake, and 2008 Wenchuan earthquake. The on-going rupture extent can be estimated for all datasets as the rupture propagates. For earthquakes with magnitude about 7.0, the determination of the fault parameters converges to the final geometry within 10 seconds.
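The NS/FS discriminant quoted above is simple to implement directly. The function below codes the published formula; the input units are assumed to follow the original Yamada et al. (2007) study (a detail not restated here), and the example values are invented for illustration.

```python
import math

def near_source_probability(za, hv):
    # Probability that a station lies in the near-source region
    # (fault rupture distance < 10 km), per Yamada et al. (2007):
    #   f = 6.046*log10(Za) + 7.885*log10(Hv) - 27.091
    #   P = 1 / (1 + exp(-f))
    # za: peak vertical acceleration, hv: peak horizontal velocity,
    # in the units of the original study (assumed here).
    f = 6.046 * math.log10(za) + 7.885 * math.log10(hv) - 27.091
    return 1.0 / (1.0 + math.exp(-f))

# Strong shaking should map to a probability near 1, weak shaking near 0
p_strong = near_source_probability(500.0, 50.0)
p_weak = near_source_probability(10.0, 1.0)
```

Because each station yields only a probability, the 2-D rupture geometry still has to be recovered by smoothing these point probabilities over the network, as the abstract describes.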
X-Pinch And Its Applications In X-ray Radiograph
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou Xiaobing; Wang Xinxin; Liu Rui
2009-07-07
An X-pinch device and the related diagnostics of x-ray emission from the X-pinch are briefly described. Time-resolved x-ray measurements with photoconducting diodes show that the x-ray pulse usually consists of two subnanosecond peaks with a time interval of about 0.5 ns. Consistent with these two peaks of the x-ray pulse, two point x-ray sources with sizes ranging from 100 μm down to 5 μm, depending on the cut-off x-ray photon energy, were usually observed in the pinhole pictures. The X-pinch was used as an x-ray source for backlighting of the electrical explosion of a single wire and the evolution of the X-pinch, and for phase-contrast imaging of soft biological objects such as a small shrimp and a mosquito.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hep, J.; Konecna, A.; Krysl, V.
2011-07-01
This paper describes the application of the effective source method in forward calculations and the adjoint method to the solution of fast neutron fluence and activation detector activities in the reactor pressure vessel (RPV) and RPV cavity of a VVER-440 reactor. Its objective is the demonstration of both methods on a practical task. The effective source method applies the Boltzmann transport operator to time-integrated source data in order to obtain neutron fluence and detector activities. By weighting the source data by the time-dependent decay of the detector activity, the result of the calculation is the detector activity. Alternatively, if the weighting is uniform with respect to time, the result is the fluence. The approach works because of the inherent linearity of radiation transport in non-multiplying, time-invariant media. Integrated in this way, the source data are referred to as the effective source. The effective source in forward calculations thereby enables the analyst to replace numerous intensive transport calculations with a single transport calculation in which the time dependence and magnitude of the source are correctly represented. In this work, the effective source method has been expanded slightly in the following way: the neutron source data were computed with a few-group calculation using the active core calculation code MOBY-DICK, and the follow-up neutron transport calculation was performed in multigroup form using the neutron transport code TORT. For comparison, an alternative method of calculation has been used based upon adjoint functions of the Boltzmann transport equation. The three-dimensional (3-D) adjoint function for each required computational outcome has been calculated using the deterministic code TORT and the cross section library BGL440.
Adjoint functions appropriate to the required fast neutron flux density and neutron reaction rates have been calculated for several significant points within the RPV and RPV cavity of the VVER-440 reactor, located axially at the position of maximum power and at the position of the weld. Both of these methods (the effective source and the adjoint function) are briefly described in the present paper. The paper also describes their application to the solution of fast neutron fluence and detector activities for the VVER-440 reactor. (authors)
NASA Astrophysics Data System (ADS)
Aharonian, F. A.; Akhperjanian, A. G.; Beilicke, M.; Bernloehr, K.; Bojahr, H.; Bolz, O.; Boerst, H.; Coarasa, T.; Contreras, J. L.; Cortina, J.; Denninghoff, S.; Fonseca, V.; Girma, M.; Goetting, N.; Heinzelmann, G.; Hermann, G.; Heusler, A.; Hofmann, W.; Horns, D.; Jung, I.; Kankanyan, R.; Kestel, M.; Kettler, J.; Kohnle, A.; Konopelko, A.; Kornmeyer, H.; Kranich, D.; Krawczynski, H.; Lampeitl, H.; Lopez, M.; Lorenz, E.; Lucarelli, F.; Mang, O.; Meyer, H.; Mirzoyan, R.; Moralejo, A.; Ona, E.; Panter, M.; Plyasheshnikov, A.; Puehlhofer, G.; Rauterberg, G.; Reyes, R.; Rhode, W.; Ripken, J.; Roehring, A.; Rowell, G. P.; Sahakian, V.; Samorski, M.; Schilling, M.; Siems, M.; Sobzynska, D.; Stamm, W.; Tluczykont, M.; Voelk, H. J.; Wiedner, C. A.; Wittek, W.
2002-12-01
Using the HEGRA system of imaging atmospheric Cherenkov telescopes, one quarter of the Galactic plane (-2° < l < 85°) was surveyed for TeV gamma-ray emission from point sources and moderately extended sources (φ ≤ 0.8°). The region covered includes 86 known pulsars (PSR), 63 known supernova remnants (SNR), and nine GeV sources, representing a significant fraction of the known populations. No evidence for emission of TeV gamma radiation was detected, and upper limits range from 0.15 Crab units up to several Crab units, depending on the observation time and zenith angles covered. The ensemble sums over selected SNR and pulsar subsamples and over the GeV sources yield no indication of emission from these potential sources. The upper limit for the SNR population is 6.7% of the Crab flux and for the pulsar ensemble is 3.6% of the Crab flux.
Madhavan, Sangeetha; Collinson, Mark; Gómez-Olivé, F. Xavier; Ralston, Margaret
2015-01-01
South Africa’s population is aging. Most of the older Black South Africans continue to live in extended household structures with children, grandchildren, and other kin. They also constitute a source of income through a means-tested noncontributory state-funded pension available at age 60. Using census data from the Agincourt Health and Demographic Surveillance System in 2000, 2005, and 2010, we develop a typology of living arrangements that is reflective of the social positioning of elderly persons as dependent or productive household members and analyze changes in the distribution over time. Older persons, in general, live in large, complex, and multigenerational households. Multigenerational households with “productive” older persons are increasing in proportion over the period, although there are few differences by gender or pension eligibility at any time point. PMID:25651584
NASA Astrophysics Data System (ADS)
Gallovič, F.
2017-09-01
Strong ground motion simulations require a physically plausible earthquake source model. Here, I present the application of such a kinematic model, introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay over the full frequency range. The source is composed of randomly distributed overlapping subsources with a fractal number-size distribution. The positions of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions), or can be completely random. From the earthquake physics point of view, the model includes a positive correlation between slip and rise time, as found in dynamic source simulations. Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip rate functions, not requiring any stochastic Green's functions. The source model has been previously validated against the observed data from the very shallow unilateral 2014 Mw6 South Napa, California, earthquake; the model reproduces the observed data well, including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is shown here on scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability.
I propose a simple model reproducing the azimuthal variations of the between-event ground motion variability, providing an insight into possible refinement of GMPEs' functional forms.
Gridded National Inventory of U.S. Methane Emissions
NASA Technical Reports Server (NTRS)
Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; Turner, Alexander J.; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel;
2016-01-01
We present a gridded inventory of US anthropogenic methane emissions with 0.1 deg x 0.1 deg spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
Gridded National Inventory of U.S. Methane Emissions.
Maasakkers, Joannes D; Jacob, Daniel J; Sulprizio, Melissa P; Turner, Alexander J; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; Hockstad, Leif; Bloom, Anthony A; Bowman, Kevin W; Jeong, Seongeun; Fischer, Marc L
2016-12-06
We present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
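The disaggregation described in the abstract (national source-type totals allocated across grid cells using state-, county-, and point-level databases) reduces, at its core, to a proportional allocation. A minimal sketch under that assumption, with a made-up proxy array standing in for the real activity data:

```python
import numpy as np

def disaggregate(national_total, proxy):
    """Hedged sketch of the allocation step: spread a national
    source-type total over grid cells in proportion to a spatial
    proxy (e.g., facility counts or activity data).  The real
    inventory combines many state, county, local, and point-source
    databases; `proxy` here is a single illustrative array."""
    weights = proxy / proxy.sum()
    return national_total * weights

# Hypothetical: allocate a 10 Tg/yr source total over a tiny 2x2 grid
proxy = np.array([[1.0, 3.0],
                  [0.0, 6.0]])
gridded = disaggregate(10.0, proxy)
# → cell totals [[1, 3], [0, 6]] Tg/yr; mass is conserved
```

In the actual inventory each source type would use its own proxy and the result would also be resolved monthly.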
Mercury Contaminated Sediment Sites: A Review Of Remedial Solutions
Mercury (Hg) can accumulate in sediment from point and non-point sources, depending on a number of physical, chemical, biological, geological and anthropogenic environmental processes. It is believed that the associated Hg contamination in aquatic systems can be decreased by imp...
The Prediction of Scattered Broadband Shock-Associated Noise
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2015-01-01
A mathematical model is developed for the prediction of scattered broadband shock-associated noise. Model arguments are dependent on the vector Green's function of the linearized Euler equations, steady Reynolds-averaged Navier-Stokes solutions, and the two-point cross-correlation of the equivalent source. The equivalent source is dependent on steady Reynolds-averaged Navier-Stokes solutions of the jet flow, that capture the nozzle geometry and airframe surface. Contours of the time-averaged streamwise velocity component and turbulent kinetic energy are examined with varying airframe position relative to the nozzle exit. Propagation effects are incorporated by approximating the vector Green's function of the linearized Euler equations. This approximation involves the use of ray theory and an assumption that broadband shock-associated noise is relatively unaffected by the refraction of the jet shear layer. A non-dimensional parameter is proposed that quantifies the changes of the broadband shock-associated noise source with varying jet operating condition and airframe position. Scattered broadband shock-associated noise possesses a second set of broadband lobes that are due to the effect of scattering. Presented predictions demonstrate relatively good agreement compared to a wide variety of measurements.
Skyshine at neutron energies less than or equal to 400 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.
1980-10-01
The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field point distances is also presented. These importance functions may be used to obtain skyshine dose equivalent estimates for any known source energy-angle distribution.
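The double integral described above can be sketched numerically. The source strength S and importance function I below are toy stand-ins, not the tabulated DOT/GRTUNCL output:

```python
import numpy as np

def skyshine_dose(S, I, energies, cosines, distance):
    """Hedged sketch of the double integral: dose equivalent =
    integral over energy E and polar-angle cosine mu of the source
    strength S(E, mu) weighted by the adjoint importance function
    I(E, mu, d).  A simple rectangle-rule quadrature stands in for
    the discrete-ordinates machinery of DOT/GRTUNCL."""
    E, MU = np.meshgrid(energies, cosines, indexing="ij")
    integrand = S(E, MU) * I(E, MU, distance)
    dE = energies[1] - energies[0]
    dmu = cosines[1] - cosines[0]
    return integrand.sum() * dE * dmu

# Illustrative toy functions only -- NOT the tabulated importance data
S = lambda E, mu: np.exp(-E / 100.0)                   # softens with energy
I = lambda E, mu, d: (E / 400.0) * np.exp(-d / 500.0)  # decays with distance
energies = np.linspace(1.0, 400.0, 200)   # MeV
cosines = np.linspace(0.0, 1.0, 50)       # upper hemisphere, mu in [0, 1]
dose = skyshine_dose(S, I, energies, cosines, distance=100.0)
```

With real data, S would come from the known source energy-angle distribution and I from the published importance tables, interpolated in cosine interval and source-to-field distance.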
Theory of Parabolic Arcs in Interstellar Scintillation Spectra
NASA Astrophysics Data System (ADS)
Cordes, James M.; Rickett, Barney J.; Stinebring, Daniel R.; Coles, William A.
2006-01-01
Interstellar scintillation (ISS), observed as time variation in the intensity of a compact radio source, is caused by small-scale structure in the electron density of the interstellar plasma. Dynamic spectra of ISS show modulation in radio frequency and time. Here we relate the (two-dimensional) power spectrum of the dynamic spectrum-the secondary spectrum-to the scattered image of the source. Recent work has identified remarkable parabolic arcs in secondary spectra. Each point in a secondary spectrum corresponds to interference between points in the scattered image with a certain Doppler shift and a certain delay. The parabolic arc corresponds to the quadratic relation between differential Doppler shift and delay through their common dependence on scattering angle. We show that arcs will occur in all media that scatter significant power at angles larger than the rms angle. Thus, effects such as source diameter, steep spectra, and dissipation scales, which truncate high angle scattering, also truncate arcs. Arcs are equally visible in simulations of nondispersive scattering. They are enhanced by anisotropic scattering when the spatial structure is elongated perpendicular to the velocity. In weak scattering the secondary spectrum is directly mapped from the scattered image, and this mapping can be inverted. We discuss additional observed phenomena including multiple arcs and reverse arclets oriented oppositely to the main arc. These phenomena persist for many refractive scattering times, suggesting that they are due to large-scale density structures, rather than low-frequency components of Kolmogorov turbulence.
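The quadratic Doppler-delay relation behind the arcs can be made concrete in a few lines. All physical values below are illustrative, not drawn from the paper:

```python
import numpy as np

# Hedged sketch of the parabolic-arc relation: for a ray scattered at
# angle theta interfering with the undeflected ray, the differential
# delay grows as theta^2 while the differential Doppler shift grows
# as theta, so delay = eta * doppler^2 traces a parabola in the
# secondary spectrum.  D (effective distance), V (effective velocity)
# and lam (wavelength) are illustrative values only.
c = 3.0e8                      # speed of light (m/s)
D = 3.0e19                     # ~1 kpc effective distance (m)
V = 1.0e5                      # effective transverse velocity (m/s)
lam = 0.3                      # ~1 GHz observing wavelength (m)

theta = np.linspace(-1e-8, 1e-8, 101)   # scattering angle (rad)
delay = D * theta**2 / (2.0 * c)        # differential delay (s)
doppler = V * theta / lam               # differential Doppler (Hz)

# arc curvature parameter: delay = eta * doppler**2
eta = D * lam**2 / (2.0 * c * V**2)
```

The same algebra is why truncating high-angle scattering (source diameter, dissipation scales) truncates the arc: it simply limits the range of theta.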
Local spectrum analysis of field propagation in an anisotropic medium. Part I. Time-harmonic fields.
Tinkelman, Igor; Melamed, Timor
2005-06-01
The phase-space beam summation is a general analytical framework for local analysis and modeling of radiation from extended source distributions. In this formulation, the field is expressed as a superposition of beam propagators that emanate from all points in the source domain and in all directions. In this Part I of a two-part investigation, the theory is extended to include propagation in anisotropic medium characterized by a generic wave-number profile for time-harmonic fields; in a companion paper [J. Opt. Soc. Am. A 22, 1208 (2005)], the theory is extended to time-dependent fields. The propagation characteristics of the beam propagators in a homogeneous anisotropic medium are considered. With use of Gaussian windows for the local processing of either ordinary or extraordinary electromagnetic field distributions, the field is represented by a phase-space spectral distribution in which the propagating elements are Gaussian beams that are formulated by using Gaussian plane-wave spectral distributions over the extended source plane. By applying saddle-point asymptotics, we extract the Gaussian beam phenomenology in the anisotropic environment. The resulting field is parameterized in terms of the spatial evolution of the beam curvature, beam width, etc., which are mapped to local geometrical properties of the generic wave-number profile. The general results are applied to the special case of uniaxial crystal, and it is found that the asymptotics for the Gaussian beam propagators, as well as the physical phenomenology attached, perform remarkably well.
Waveform inversion of volcano-seismic signals for an extended source
Nakano, M.; Kumagai, H.; Chouet, B.; Dawson, P.
2007-01-01
We propose a method to investigate the dimensions and oscillation characteristics of the source of volcano-seismic signals based on waveform inversion for an extended source. An extended source is realized by a set of point sources distributed on a grid surrounding the centroid of the source in accordance with the source geometry and orientation. The source-time functions for all point sources are estimated simultaneously by waveform inversion carried out in the frequency domain. We apply a smoothing constraint to suppress short-scale noisy fluctuations of source-time functions between adjacent sources. The strength of the smoothing constraint we select is that which minimizes the Akaike Bayesian Information Criterion (ABIC). We perform a series of numerical tests to investigate the capability of our method to recover the dimensions of the source and reconstruct its oscillation characteristics. First, we use synthesized waveforms radiated by a kinematic source model that mimics the radiation from an oscillating crack. Our results demonstrate almost complete recovery of the input source dimensions and source-time function of each point source, but also point to a weaker resolution of the higher modes of crack oscillation. Second, we use synthetic waveforms generated by the acoustic resonance of a fluid-filled crack, and consider two sets of waveforms dominated by the modes with wavelengths 2L/3 and 2W/3, or L and 2L/5, where W and L are the crack width and length, respectively. Results from these tests indicate that the oscillating signature of the 2L/3 and 2W/3 modes are successfully reconstructed. The oscillating signature of the L mode is also well recovered, in contrast to results obtained for a point source for which the moment tensor description is inadequate. However, the oscillating signature of the 2L/5 mode is poorly recovered owing to weaker resolution of short-scale crack wall motions. 
The triggering excitations of the oscillating cracks are successfully reconstructed. Copyright 2007 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Stamm, C.; Tamburini, F.; Hahn, C.; Stadelmann, F.; Bernasconi, S. M.; Frossard, E.
2011-12-01
Phosphorus is a limiting nutrient in many ecosystems. However, freshwater systems are experiencing nutrient overload and consequent eutrophication, caused mainly by a poor use of resources and in many situations by the input of a surplus of nutrients from agriculture. The sources of nutrient pollution, together with the fate of the nutrients once in the water system, need to be identified and understood so that better management can be implemented. There are multiple agricultural P sources, such as mineral fertilizers, animal excreta, plant residues, and soils, and since P has only one stable isotope, no analytical method can directly distinguish P from its different possible sources. However, the isotopic signature of the oxygen associated with phosphate (δ18O-P) has been considered a promising tool for such source tracing in the environment. The main limitation of using this tool as a tracer is that biological activity can erase the original source signature, which is overprinted by a temperature-dependent equilibration with oxygen in water. We present data from the region of Lake Baldegg (Central Switzerland), which is characterized by a high animal density (dairy cows, pigs) and intensive grassland cultivation. P losses from the grasslands constitute the main source of P for the freshwater system. Using δ18O-P, we have first characterized animal manure, soil available P, and plant P, the three main possible Pi sources to the system, and we have determined the δ18O-P of three brooks at different time points. Phosphorus concentration, the oxygen isotopic composition of water, and temperature were also monitored. The three sources of P showed well-distinct signatures, with values from animal manures and plants being 12‰ and higher than 20‰, respectively. Depending on the time of sampling, the δ18O-P in the brooks showed deviations from the expected equilibrium, pointing to a contribution of P coming from animal manure.
Data from runoff experiments in the same region showed an inverse correlation between δ18O-P in runoff water and P concentration in the soil. This indicated that manure P contributed directly to P mobilized into the surface runoff. The presented results, together with the outcome of other recent studies, indicate the usefulness and potential of δ18O-P as a tracer for P in hydrological systems.
NASA Astrophysics Data System (ADS)
Ficaro, Edward Patrick
The ²⁵²Cf-source-driven noise analysis (CSDNA) method requires the measurement of the cross power spectral density (CPSD) G₂₃(ω) between a pair of neutron detectors (subscripts 2 and 3) located in or near the fissile assembly, and the CPSDs G₁₂(ω) and G₁₃(ω) between the neutron detectors and an ionization chamber 1 containing ²⁵²Cf, also located in or near the fissile assembly. The key advantage of this method is that the subcriticality of the assembly can be obtained from the ratio of spectral densities

G₁₂*(ω) G₁₃(ω) / [G₁₁(ω) G₂₃(ω)],

using a point kinetic model formulation that is independent of the detectors' properties and a reference measurement. The multigroup Monte Carlo code KENO-NR was developed to eliminate the dependence of the measurement on the point kinetic formulation. This code utilizes time-dependent, analog neutron tracking to simulate the experimental method, in addition to the underlying nuclear physics, as closely as possible. From a direct comparison of simulated and measured data, the calculational model and cross sections are validated for the calculation, and KENO-NR can then be rerun to provide a distributed-source k_eff calculation. Depending on the fissile assembly, a few hours to a couple of days of computation time are needed for a typical simulation executed on a desktop workstation. In this work, KENO-NR demonstrated the ability to accurately estimate the measured ratio of spectral densities from experiments using capture detectors performed on uranium metal cylinders, a cylindrical tank filled with aqueous uranyl nitrate, and arrays of safe storage bottles filled with uranyl nitrate. Good agreement was also seen between simulated and measured values of the prompt neutron decay constant from the fitted CPSDs.
Poor agreement was seen between simulated and measured results using composite ⁶Li-glass-plastic scintillators at large subcriticalities for the tank of uranyl nitrate. It is believed that the response of these detectors is not well known and is incorrectly modeled in KENO-NR. In addition to these tests, several benchmark calculations were also performed to provide insight into the properties of the point kinetic formulation.
Time-frequency approach to underdetermined blind source separation.
Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong
2012-02-01
This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on the Wigner-Ville distribution (WVD) and the Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, where the negative values of the auto WVD of the sources are fully considered. Then, after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be found exactly with the proposed approach, no matter how many active sources there are, as long as N ≤ 2M-1. Further discussion about the extraction of auto-term TF points is made, and finally numerical simulation results are presented to show the superiority of the proposed algorithm by comparing it with existing ones.
A GIS-based time-dependent seismic source modeling of Northern Iran
NASA Astrophysics Data System (ADS)
Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza
2017-01-01
The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 kilometers around Tehran. Previous researches and reports are studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
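The contrast between the time-independent (Poisson) and time-dependent (renewal) recurrence models can be sketched as follows; the lognormal inter-event model and all parameter values are illustrative assumptions, not taken from the study:

```python
import math
from statistics import NormalDist

def poisson_prob(rate, t):
    """Time-independent hazard: probability of at least one event in
    the next t years for a Poisson process with the given annual rate."""
    return 1.0 - math.exp(-rate * t)

def renewal_prob_lognormal(mu, sigma, elapsed, t):
    """Time-dependent hazard from a renewal model: conditional
    probability of an event within the next t years given `elapsed`
    years since the last one, assuming lognormal inter-event times
    (one common renewal choice; these parameters are illustrative,
    not the study's)."""
    F = lambda x: NormalDist().cdf((math.log(x) - mu) / sigma)
    return (F(elapsed + t) - F(elapsed)) / (1.0 - F(elapsed))

# Hypothetical fault: ~150 yr mean recurrence, 140 yr since last event
p_poisson = poisson_prob(1.0 / 150.0, 50.0)
p_renewal = renewal_prob_lognormal(math.log(150.0), 0.3, 140.0, 50.0)
# Late in the cycle the renewal model gives a higher 50 yr hazard
```

This is the practical difference between the two source classes in the paper: the area sources "forget" the time since the last event, while the fault sources do not.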
NASA Technical Reports Server (NTRS)
Harmon, B. A.; Wilson, C. A.; Fishman, G. J.; Connaughton, V.; Henze, W.; Paciesas, W. S.; Finger, M. H.; McCollough, M. L.; Sahi, M.; Peterson, B.
2003-01-01
The Burst and Transient Source Experiment (BATSE), aboard the Compton Gamma Ray Observatory (CGRO), provided a record of the low-energy gamma-ray sky (approx. 20-1000 keV) between 1991 April and 2000 May (9.1 y). BATSE monitored the high energy sky using the Earth occultation technique (EOT) for point sources whose emission extended for times on the order of the CGRO orbital period (approx. 92 m) or greater. Using the EOT to extract flux information, a catalog of sources using data from the BATSE large area detectors has been prepared. The first part of the catalog consists of results from the all-sky monitoring of 58 sources, mostly Galactic, with intrinsic variability on timescales of hours to years. For these sources, we have included tables of flux and spectral data, and outburst times for transients. Light curves (or flux histories) covering the entire nine-year mission are being placed on the world wide web. We then performed a deep sampling of these 58 objects, plus a selection of 121 more objects, combining data from the entire 9.1 y BATSE dataset. Source types considered were primarily accreting binaries, but a small number of representative active galaxies, X-ray-emitting stars, and supernova remnants were also included. The sample represents a compilation of sources monitored and/or discovered with BATSE and other high energy instruments between 1991 and 2000, plus known sources taken from the HEAO 1 A-4 (Levine et al. 1984) and Macomb and Gehrels (1999) catalogs. The deep sample results include definite detections of 82 objects and possible detections of 36 additional objects. The definite detections spanned three classes of sources: accreting black hole and neutron star binaries, active galaxies, and supernova remnants. The average fluxes measured for the fourth class, the X-ray-emitting stars, were below the confidence limit for definite detection. Flux data for the deep sample are presented in four energy bands: 20-40, 40-70, 70-160, and 160-430 keV.
The limiting average flux level (9.1 y) for the sample varies from 3.5 to 20 mCrab (5σ) between 20 and 430 keV, depending on systematic error, which in turn is primarily dependent on the sky location. To strengthen the credibility of detection of weaker sources (approx. 5-25 mCrab), we generated Earth occultation images, searched for periodic behavior using FFT and epoch folding methods, and critically evaluated the energy-dependent emission in the four flux bands. The deep sample results are intended for guidance in performing future all-sky surveys or pointed observations in the hard X-ray and low-energy gamma-ray band, as well as more detailed studies with the BATSE EOT.
NASA Astrophysics Data System (ADS)
Clark, D. M.; Eikenberry, S. S.; Brandl, B. R.; Wilson, J. C.; Carson, J. C.; Henderson, C. P.; Hayward, T. L.; Barry, D. J.; Ptak, A. F.; Colbert, E. J. M.
2008-05-01
We use the previously identified 15 infrared star cluster counterparts to X-ray point sources in the interacting galaxies NGC 4038/4039 (the Antennae) to study the relationship between total cluster mass and X-ray binary number. This significant population of X-ray/IR associations allows us to perform, for the first time, a statistical study of X-ray point sources and their environments. We define a quantity, η, relating the fraction of X-ray sources per unit mass as a function of cluster mass in the Antennae. We compute cluster mass by fitting spectral evolutionary models to Ks luminosity. Considering that this method depends on cluster age, we use four different age distributions to explore the effects of cluster age on the value of η and find it varies by less than a factor of 4. We find a mean value of η for these different distributions of η = 1.7 × 10⁻⁸ M⊙⁻¹ with σ_η = 1.2 × 10⁻⁸ M⊙⁻¹. Performing a χ² test, we demonstrate η could exhibit a positive slope, but that it depends on the assumed distribution in cluster ages. While the estimated uncertainties in η are factors of a few, we believe this is the first estimate made of this quantity to "order of magnitude" accuracy. We also compare our findings to theoretical models of open and globular cluster evolution, incorporating the X-ray binary fraction per cluster.
Point to point multispectral light projection applied to cultural heritage
NASA Astrophysics Data System (ADS)
Vázquez, D.; Alvarez, A.; Canabal, H.; Garcia, A.; Mayorga, S.; Muro, C.; Galan, T.
2017-09-01
Use of new light sources based on LED technology should allow the development of systems that combine conservation and exhibition requirements and make these works of art available to the next generations according to sustainability principles. The goal of this work is to develop light systems and sources with an optimized spectral distribution for each specific point of the art piece. This optimization process implies maximizing color fidelity while at the same time minimizing photochemical damage. Perceived color under these sources will be similar (metameric) to technical requirements given by the restoration team in charge of the conservation and exhibition of the works of art. Depending on the fragility of the exposed art objects (i.e., the spectral responsivity of the material), the irradiance must be kept under a critical level. Therefore, it is necessary to develop a mathematical model that simulates with enough accuracy both the visual effect of the illumination and the photochemical impact of the radiation. The mathematical model is based on a merit function that optimizes the individual intensities of the LED light sources, taking into account the damage function of the material and color space coordinates. Moreover, the algorithm uses weights for damage and color fidelity in order to adapt the model to a specific museum application. In this work we show a sample of this technology applied to a picture by Sorolla (1863-1923), an important Spanish painter, titled "Woman Walking at the Beach".
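The merit-function idea (trading off color fidelity against photochemical damage via weights) can be sketched as a weighted least-squares problem. Everything below, including the damage weighting, the toy spectra, and the unconstrained solver, is an illustrative assumption rather than the authors' actual model:

```python
import numpy as np

def led_weights(led_spectra, target_spd, damage, w_color=1.0, w_damage=0.1):
    """Hedged sketch of the merit-function idea: choose intensities
    for each LED channel so the mixed spectrum matches a target
    spectral power distribution (color fidelity) while penalizing
    radiant power at damage-sensitive wavelengths.  All arrays share
    one wavelength grid; names and weights are illustrative, and a
    real system would enforce non-negative intensities (e.g., with
    an NNLS solver)."""
    A = np.vstack([w_color * led_spectra.T,
                   w_damage * (damage[:, None] * led_spectra.T)])
    b = np.concatenate([w_color * target_spd, np.zeros(len(damage))])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Two hypothetical LED channels on a 5-point wavelength grid
leds = np.array([[1.0, 0.5, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.5, 1.0]])
target = 2.0 * leds[0] + 1.0 * leds[1]       # achievable target mix
weights = led_weights(leds, target, damage=np.zeros(5))
# → weights close to [2, 1]
```

Raising `w_damage` (or the damage function at UV-blue wavelengths) would pull the solution away from the exact color match, which is the conservation/exhibition trade-off the abstract describes.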
Extended- and Point-Source Radiometric Program
1962-08-08
aircraft of the U. S. Geological Survey (USGS). Because many sites involved in nuclear activities exist and more are coming into existence, the need of...GZ in Fig. 1.3 was the Ground Zero point of an old nuclear detonation and, unfortunately, was still highly radioactive. The detail of the source...measurements are the most dependable since the instrument was calibrated with Cs-137, Co-60, and radium at a distance that gave a scattering component
An improved DPSM technique for modelling ultrasonic fields in cracked solids
NASA Astrophysics Data System (ADS)
Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique
2007-04-01
In recent years the Distributed Point Source Method (DPSM) has been used for modelling various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM several point sources are placed near the transducer face, interface and anomaly boundaries. The ultrasonic or the electromagnetic field at any point is computed by superimposing the contributions of different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can take care of the shadow region problem to some extent. Complete removal of the shadow region problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, and by the proposed modified technique, which nullifies their contributions, are compared. One application of this research can be found in the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
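The superposition step of conventional DPSM (before any shadow-region correction) can be sketched as follows; the geometry, wavelength, and source strengths are illustrative:

```python
import numpy as np

def dpsm_field(sources, strengths, targets, k):
    """Hedged sketch of the DPSM superposition step: the field at
    each target point is the sum of spherical-wave (free-space
    Green's function) contributions exp(i*k*r) / (4*pi*r) from every
    point source.  The modified method in the paper would zero out
    contributions from sources in the shadow region; this sketch is
    the unmodified superposition."""
    # pairwise distances between every target and every source
    r = np.linalg.norm(targets[:, None, :] - sources[None, :, :], axis=2)
    G = np.exp(1j * k * r) / (4.0 * np.pi * r)
    return G @ strengths

# Two point sources standing in for a "layer" near a transducer face
sources = np.array([[0.0, 0.0, 0.0], [1.0e-3, 0.0, 0.0]])
strengths = np.array([1.0 + 0.0j, 1.0 + 0.0j])
targets = np.column_stack([np.linspace(-5e-3, 5e-3, 11),
                           np.zeros(11), np.full(11, 1.0e-2)])
field = dpsm_field(sources, strengths, targets, k=2 * np.pi / 1.5e-3)
```

In a full DPSM model the strengths themselves are first solved from boundary conditions at the transducer face and interfaces; here they are simply prescribed.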
Time-dependent wave splitting and source separation
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nataf, Frédéric; Assous, Franck
2017-02-01
Starting from classical absorbing boundary conditions, we propose a method for the separation of time-dependent scattered wave fields due to multiple sources or obstacles. In contrast to previous techniques, our method is local in space and time, deterministic, and avoids a priori assumptions on the frequency spectrum of the signal. Numerical examples in two space dimensions illustrate the usefulness of wave splitting for time-dependent scattering problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.
Compact range for variable-zone measurements
Burnside, Walter D.; Rudduck, Roger C.; Yu, Jiunn S.
1988-08-02
A compact range for testing antennas or radar targets includes a source for directing energy along a feedline toward a parabolic reflector. The reflected wave is a spherical wave with a radius dependent on the distance of the source from the focal point of the reflector.
Compact range for variable-zone measurements
Burnside, Walter D.; Rudduck, Roger C.; Yu, Jiunn S.
1988-01-01
A compact range for testing antennas or radar targets includes a source for directing energy along a feedline toward a parabolic reflector. The reflected wave is a spherical wave with a radius dependent on the distance of the source from the focal point of the reflector.
NASA Technical Reports Server (NTRS)
Bernstein, Ira B.; Brookshaw, Leigh; Fox, Peter A.
1992-01-01
The present numerical method for accurate and efficient solution of systems of linear equations proceeds by numerically developing a set of basis solutions characterized by slowly varying dependent variables. The solutions thus obtained are shown to have a computational overhead largely independent of the small size of the scale length which characterizes the solutions; in many cases, the technique obviates series solutions near singular points, and its known sources of error can be easily controlled without a substantial increase in computational time.
Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams
NASA Astrophysics Data System (ADS)
Zhong, Xu; Kealy, Allison; Duckham, Matt
2016-05-01
Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that these two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
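A single ordinary Kriging estimate, showing the O(n³) solve that the paper's incremental and recursive strategies try to avoid repeating from scratch at every stream iteration, can be sketched as follows (the variogram model and sensor layout are illustrative):

```python
import numpy as np

def ordinary_kriging(points, values, query, variogram):
    """Hedged sketch of one ordinary Kriging estimate.  Building and
    solving the (n+1) x (n+1) linear system below is the O(n^3) step;
    when most source points keep the same locations between stream
    iterations, much of this work is reusable."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)       # variogram of pairwise distances
    A[n, n] = 0.0                  # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(points - query, axis=1))
    w = np.linalg.solve(A, b)
    return w[:n] @ values          # weights w[:n] sum to 1

# Toy field observed at four static sensor locations
gamma = lambda h: 1.0 - np.exp(-h / 2.0)   # illustrative exponential model
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
est = ordinary_kriging(pts, vals, np.array([0.5, 0.5]), gamma)
# → by symmetry all four weights are 0.25, so est == 2.5
```

With static sensors, only `values` and the right-hand side `b` change between iterations, which is exactly the structure the incremental and recursive strategies exploit.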
NASA Technical Reports Server (NTRS)
Cline, M. C.
1981-01-01
A computer program, VNAP2, for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow is presented. It solves the two dimensional, time dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing length model, a one equation model, or the Jones-Launder two equation model. The geometry may be a single or a dual flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference plane characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet powered afterbodies, airfoils, and free jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.
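The interior-point update VNAP2 uses, the unsplit MacCormack predictor-corrector, is easiest to see on 1-D linear advection. This toy version (our simplification, not the program's 2-D Navier-Stokes implementation) happens to be exact at unit CFL:

```python
import numpy as np

def maccormack_advect(u, cfl, nsteps):
    """1-D linear advection with the MacCormack predictor-corrector,
    periodic boundaries.  cfl = c*dt/dx for advection speed c > 0."""
    u = u.astype(float).copy()
    for _ in range(nsteps):
        # predictor: forward difference
        up = u - cfl * (np.roll(u, -1) - u)
        # corrector: backward difference applied to the predicted field
        u = 0.5 * (u + up - cfl * (up - np.roll(up, 1)))
    return u
```

At CFL = 1 each step shifts the profile exactly one cell downstream, a standard check for this scheme on the linear problem.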
Interorganizational exchanges as performance markers in a community cancer network.
McKinney, M M; Morrissey, J P; Kaluzny, A D
1993-01-01
OBJECTIVE. This study examines how "strategic partnerships" between community-based consortia of oncologists and hospitals (CCOPs) and clinical cooperative groups emerge, develop, and influence patient accruals (i.e., the number of patients enrolled in clinical trials) over time. DATA SOURCES AND STUDY SETTING. Study analyses are based on 65 pairwise relationships that 38 CCOPs established with eight clinical cooperative groups in September 1983 and maintained through February 1989. Data are drawn from grantee applications and progress reports. STUDY DESIGN. The study examines how different types of CCOP-cooperative group exchange relate to one another and to CCOP patient accruals over six time points. Key independent variables include resource dependence, information exchange (i.e., meeting attendance and committee membership), and protocol exchange (i.e., the number of different protocols used). DATA COLLECTION METHODS. Data extracted from secondary sources were entered in a data base. PRINCIPAL FINDINGS. The number of CCOP physicians and support staff who attend cooperative group meetings during the first two years of a clinical research partnership has a significant influence on meeting attendance and protocol use in later years. Two-thirds or more of the variance in patient accruals at each time point can be explained by the number of different protocols used and the number of CCOP representatives serving on cooperative group committees (or attending cooperative group meetings). CONCLUSIONS. The findings highlight the importance of historical relationships and anticipated resource dependence in shaping initial exchange patterns. They also suggest that strategic partnerships need to emphasize structures and processes that encourage early involvement in collaborative activities and that reward participants for maintaining high levels of interaction. PMID:8407338
Inverse kinematic problem for a random gradient medium in geometric optics approximation
NASA Astrophysics Data System (ADS)
Petersen, N. V.
1990-03-01
Scattering at random inhomogeneities in a gradient medium results in systematic deviations of the rays and travel times of refracted body waves from those corresponding to the deterministic velocity component. The character of the difference depends on the parameters of the deterministic and random velocity components. However, at great distances from the source, independently of the velocity parameters (weakly or strongly inhomogeneous medium), the most probable depth of the ray turning point is smaller than that corresponding to the deterministic velocity component, the most probable travel times also being lower. The relative uncertainty in the deterministic velocity component, derived from the mean travel times using methods developed for laterally homogeneous media (for instance, the Herglotz-Wiechert method), is systematic in character, but does not exceed the contrast of the velocity inhomogeneities in magnitude. The gradient of the deterministic velocity component has a significant effect on the travel-time fluctuations. The variance at great distances from the source is mainly controlled by shallow inhomogeneities. The travel-time fluctuations are studied only for weakly inhomogeneous media.
Liu, Mei-bing; Chen, Xing-wei; Chen, Ying
2015-07-01
Identification of the critical source areas of non-point source pollution is an important means to control non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulated total nitrogen (TN) loss intensity of all 38 subbasins, the spatial distribution characteristics of nitrogen loss and the critical source areas were analyzed at three time scales: yearly average, monthly average, and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to quantify the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed significant spatial differences in TN loss in the Shanmei Reservoir watershed at different time scales, and the degree of spatial differentiation of nitrogen loss was in the order monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At all time scales, land use type (such as farmland and forest) was the dominant factor affecting the spatial distribution of nitrogen loss, while precipitation and runoff affected nitrogen loss only in months without fertilization and in several storm-flood events occurring on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.
[A Quality Assurance (QA) System with a Web Camera for High-dose-rate Brachytherapy].
Hirose, Asako; Ueda, Yoshihiro; Oohira, Shingo; Isono, Masaru; Tsujii, Katsutomo; Inui, Shouki; Masaoka, Akira; Taniguchi, Makoto; Miyazaki, Masayoshi; Teshima, Teruki
2016-03-01
The quality assurance (QA) system that simultaneously quantifies the position and duration of an (192)Ir source (dwell position and time) was developed, and the performance of this system was evaluated in high-dose-rate brachytherapy. This QA system has two functions to verify and quantify dwell position and time by using a web camera. The web camera records 30 images per second over a range from 1,425 mm to 1,505 mm. A user verifies the source position from the web camera in real time. The source position and duration were quantified from the movie using in-house software applying a template-matching technique. This QA system allowed verification of the absolute position in real time and simultaneous quantification of dwell position and time. Verification of the system showed that the mean step-size error was 0.31±0.1 mm and the mean dwell-time error 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points in three step sizes, and dwell-time errors with an accuracy of 0.1% for planned times longer than 10.0 s. This system provides quick verification and quantification of the dwell position and time with high accuracy at various dwell positions, independent of the step size.
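The template matching used to extract the dwell position can be sketched as a normalized cross-correlation search. This 1-D toy is our stand-in for the in-house software, not its actual algorithm:

```python
import numpy as np

def locate_source(frame, template):
    """Return the offset in `frame` where `template` best matches,
    scored by normalized cross-correlation (zero-mean per window)."""
    t = template - template.mean()
    n = len(t)
    best, best_score = 0, -np.inf
    for i in range(len(frame) - n + 1):
        w = frame[i:i + n] - frame[i:i + n].mean()
        denom = np.linalg.norm(w) * np.linalg.norm(t)
        score = (w @ t) / denom if denom > 0 else 0.0
        if score > best_score:
            best, best_score = i, score
    return best
```

In a real system the same search would run per video frame, converting pixel offsets to millimetres via the camera calibration.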
NASA Technical Reports Server (NTRS)
Williams, J. H., Jr.; Marques, E. R. C.; Lee, S. S.
1986-01-01
The far-field displacements in an infinite transversely isotropic elastic medium subjected to an oscillatory concentrated force are derived. The concepts of velocity surface, slowness surface and wave surface are used to describe the geometry of the wave propagation process. It is shown that the decay of the wave amplitudes depends not only on the distance from the source (as in isotropic media) but also on the direction of the point of interest from the source. As an example, the displacement field is computed for a laboratory-fabricated unidirectional fiberglass epoxy composite. The solution for the displacements is expressed as an amplitude distribution and is presented in polar diagrams. This analysis has potential usefulness in the acoustic emission (AE) and ultrasonic nondestructive evaluation of composite materials. For example, the transient localized disturbances which are generally associated with AE sources can be modeled via this analysis, in which case knowledge of the displacement field arriving at a receiving transducer allows inferences regarding the strength and orientation of the source, and consequently perhaps the degree of damage within the composite.
1. DEPENDENCY Both pointed and flat shingles appear to be ...
1. DEPENDENCY Both pointed and flat shingles appear to be original. Original purpose of this building was not recorded at the time of this survey. - Annandale Plantation, Dependency, State Routes 30 & 18 vicinity, Georgetown, Georgetown County, SC
NASA Astrophysics Data System (ADS)
Granade, Christopher; Combes, Joshua; Cory, D. G.
2016-03-01
In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we address all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby (2012 Phys. Rev. A 85 052120) and by Ferrie (2014 New J. Phys. 16 093035), to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first priors on quantum states and channels that allow for including useful experimental insight. Finally, we develop a method that allows tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.
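The numerical workhorse behind such tractable Bayesian estimation is a particle (sequential Monte Carlo) approximation of the posterior. A scalar caricature, assuming a Bernoulli likelihood in place of a quantum measurement model — the structure (reweight particles by the likelihood of each datum, then read off point estimates) is the part being illustrated:

```python
import numpy as np

def smc_posterior_mean(data, n_particles=2000):
    """Bayesian update of a single Bernoulli parameter with a fixed
    particle grid: weights are multiplied by the likelihood of each
    datum and renormalized; the posterior mean is the point estimate."""
    p = np.linspace(1e-3, 1 - 1e-3, n_particles)   # particle locations
    w = np.full(n_particles, 1.0 / n_particles)    # uniform prior weights
    for x in data:
        w *= p if x == 1 else (1.0 - p)            # Bayes rule, one datum at a time
        w /= w.sum()
    return (w * p).sum()
```

Real implementations add resampling when the effective sample size collapses, which this sketch omits.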
Schatz, Enid; Madhavan, Sangeetha; Collinson, Mark; Gómez-Olivé, F Xavier; Ralston, Margaret
2015-08-01
South Africa's population is aging. Most of the older Black South Africans continue to live in extended household structures with children, grandchildren, and other kin. They also constitute a source of income through a means-tested noncontributory state-funded pension available at age 60. Using census data from the Agincourt Health and Demographic Surveillance System in 2000, 2005, and 2010, we develop a typology of living arrangements that is reflective of the social positioning of elderly persons as dependent or productive household members and analyze changes in the distribution over time. Older persons, in general, live in large, complex, and multigenerational households. Multigenerational households with "productive" older persons are increasing in proportion over the period, although there are few differences by gender or pension eligibility at any time point. © The Author(s) 2014.
NASA Technical Reports Server (NTRS)
Palmer, Grant; Venkatapathy, Ethiraj
1993-01-01
Three solution algorithms, explicit underrelaxation, point implicit, and lower upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight-order-of-magnitude drop in the L2 norm of the energy residual in 1/3 to 1/2 the Cray C-90 computer time compared to the point implicit and explicit underrelaxation methods. The explicit underrelaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40 the performance of the LUSGS algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.
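The relaxation pattern underlying LUSGS — a forward sweep followed by a backward sweep — can be sketched on a plain linear system. This scalar version omits the lower-upper flux splitting and the flow physics of the actual scheme:

```python
import numpy as np

def sgs_solve(A, b, nsweeps=50):
    """Symmetric Gauss-Seidel: one forward and one backward sweep per
    iteration, updating unknowns in place as soon as they are computed."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(nsweeps):
        for i in range(n):                      # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        for i in range(n - 1, -1, -1):          # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

For a diagonally dominant system the residual drops by orders of magnitude per iteration, which is the behavior the paper benchmarks against explicit relaxation.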
Compact range for variable-zone measurements
Burnside, W.D.; Rudduck, R.C.; Yu, J.S.
1987-02-27
A compact range for testing antennas or radar targets includes a source for directing energy along a feedline toward a parabolic reflector. The reflected wave is a spherical wave with a radius dependent on the distance of the source from the focal point of the reflector. 2 figs.
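The dependence of the reflected wave's radius on the feed position can be illustrated with the paraxial mirror equation, a textbook idealization of the patent's geometry rather than its actual design equations:

```python
def reflected_wave_radius(s, f):
    """Radius of curvature of the wave reflected by a focusing mirror
    with focal length f when the feed sits a distance s away, via the
    paraxial mirror equation 1/s + 1/s' = 1/f.  A feed exactly at the
    focal point yields a collimated (plane) wave: infinite radius."""
    if s == f:
        return float('inf')
    return 1.0 / (1.0 / f - 1.0 / s)
```

Displacing the feed from the focus thus trades the usual plane-wave test zone for a spherical wave of controllable radius, which is the "variable-zone" idea.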
Occurrence of Surface Water Contaminations: An Overview
NASA Astrophysics Data System (ADS)
Shahabudin, M. M.; Musa, S.
2018-04-01
Water is a part of our lives and is needed by all organisms. Over time, growing human demand has degraded water quality. Surface water is contaminated in various ways, by both point sources and non-point sources. Point sources are discharges traceable to a distinguishable origin, such as a drain or a factory, whereas non-point pollution arrives as a mixture of pollutants from diffuse origins. This paper reviews the occurrence of these contaminations and the effects that arise around us. Pollutants of natural or anthropogenic origin, such as nutrients, pathogens, and chemical elements, contribute to contamination. Most of the effects of contaminated surface water fall on public health and the environment.
Asher, William E.; Bender, David A.; Zogorski, John S.; Bartholomay, Roy C.
2006-01-01
This report documents the construction and verification of the model, StreamVOC, that estimates (1) the time- and position-dependent concentrations of volatile organic compounds (VOCs) in rivers and streams as well as (2) the source apportionment (SA) of those concentrations. The model considers how different types of sources and loss processes can act together to yield a given observed VOC concentration. Reasons for interest in the relative and absolute contributions of different sources to contaminant concentrations include the need to apportion: (1) the origins for an observed contamination, and (2) the associated human and ecosystem risks. For VOCs, sources of interest include the atmosphere (by absorption), as well as point and nonpoint inflows of VOC-containing water. Loss processes of interest include volatilization to the atmosphere, degradation, and outflows of VOC-containing water from the stream to local ground water. This report presents the details of StreamVOC and compares model output with measured concentrations for eight VOCs found in the Aberjona River at Winchester, Massachusetts. Input data for the model were obtained during a synoptic study of the stream system conducted July 11-13, 2001, as part of the National Water-Quality Assessment (NAWQA) Program of the U.S. Geological Survey. The input data included a variety of basic stream characteristics (for example, flows, temperature, and VOC concentrations). The StreamVOC concentration results agreed moderately well with the measured concentration data for several VOCs and provided compound-dependent SA estimates as a function of longitudinal distance down the river. For many VOCs, the quality of the agreement between the model-simulated and measured concentrations could be improved by simple adjustments of the model input parameters. 
In general, this study illustrated: (1) the considerable difficulty of quantifying correctly the locations and magnitudes of ground-water-related sources of contamination in streams; and (2) that model-based estimates of stream VOC concentrations are likely to be most accurate when the major sources are point sources or tributaries where the spatial extent and magnitude of the sources are tightly constrained and easily determined.
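The kind of source apportionment StreamVOC performs can be caricatured by a steady plug-flow mass balance with first-order loss and one point source. All parameter names below are illustrative, not the model's actual formulation:

```python
import numpy as np

def voc_profile(x, u, k, c0, src_x, src_load, q):
    """Steady VOC concentration along a stream: plug flow at velocity u,
    first-order loss rate k (volatilization + degradation), upstream
    boundary concentration c0, and one point source adding src_load
    (mass/time) to discharge q at position src_x."""
    upstream = c0 * np.exp(-k * x / u)                 # decaying boundary signal
    plume = np.where(x >= src_x,
                     (src_load / q) * np.exp(-k * (x - src_x) / u),
                     0.0)                              # point-source contribution
    return upstream + plume
```

Because the two terms stay separate, the fraction of concentration attributable to each source at any x is immediate — the essence of a source-apportionment estimate.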
NASA Technical Reports Server (NTRS)
Stamatikos, M.
2012-01-01
This paper presents four searches for flaring sources of neutrinos using the IceCube neutrino telescope. For the first time, a search is performed over the entire parameter space of energy, direction and time with sensitivity to neutrino flares lasting between 20 microseconds and a year duration from astrophysical sources. Searches which integrate over time are less sensitive to flares because they are affected by a larger background of atmospheric neutrinos and muons that can be reduced by the use of additional timing information. Flaring sources considered here, such as active galactic nuclei, soft gamma ray repeaters and gamma-ray bursts, are promising candidate neutrino emitters. Two searches are untriggered in the sense that they look for any possible flare in the entire sky and from a predefined catalog of sources from which photon flares have been recorded. The other two searches are triggered by multi-wavelength information on flares from blazars and from a soft gamma-ray repeater. One triggered search uses lightcurves from Fermi-LAT which provides continuous monitoring. A second triggered search uses information where the flux states have been measured only for short periods of time near the flares. The untriggered searches use data taken by 40 strings of IceCube between Apr 5, 2008 and May 20, 2009. The triggered searches also use data taken by the 22-string configuration of IceCube operating between May 31, 2007 and Apr 5, 2008. The results from all four searches are compatible with a fluctuation of the background.
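The advantage of timing information can be sketched with a binned window scan for the largest excess of events over the expected background — a bare-bones caricature of an untriggered flare search (the real analyses use unbinned likelihoods with energy and angular terms):

```python
import numpy as np

def best_flare_window(event_times, widths, t0, t1, rate):
    """Scan overlapping time windows of the given widths over [t0, t1]
    and return (excess, start, width) for the window with the largest
    count above the background expectation rate*width."""
    t = np.sort(np.asarray(event_times, float))
    best = (0.0, None, None)
    for w in widths:
        for s in np.arange(t0, t1 - w, w / 4.0):      # windows overlapping by 3/4
            n_obs = np.count_nonzero((t >= s) & (t < s + w))
            excess = n_obs - rate * w
            if excess > best[0]:
                best = (excess, s, w)
    return best
```

Shrinking the window around a genuine cluster shrinks the background term rate*width, which is exactly why time-integrated searches are less sensitive to flares.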
NASA Astrophysics Data System (ADS)
Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; BenZvi, S.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Braun, J.; Brown, A. M.; Buitink, S.; Carson, M.; Chirkin, D.; Christy, B.; Clem, J.; Clevermann, F.; Cohen, S.; Colnard, C.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Daughhetee, J.; Davis, J. C.; De Clercq, C.; Demirörs, L.; Denger, T.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; DeYoung, T.; Díaz-Vélez, J. C.; Dierckxsens, M.; Dreyer, J.; Dumm, J. P.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feusels, T.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Geisler, M.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Goodman, J. A.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Heinen, D.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Homeier, A.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Krings, T.; Kroll, G.; Kuehn, K.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Larson, M. 
J.; Lauer, R.; Lünemann, J.; Madsen, J.; Majumdar, P.; Marotta, A.; Maruyama, R.; Mase, K.; Matis, H. S.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Middell, E.; Milke, N.; Miller, J.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Naumann, U.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; O'Murchadha, A.; Ono, M.; Panknin, S.; Paul, L.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Porrata, R.; Posselt, J.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Ruhe, T.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Santander, M.; Sarkar, S.; Schatto, K.; Schmidt, T.; Schönwald, A.; Schukraft, A.; Schultes, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stössl, A.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Stür, M.; Sullivan, G. W.; Swillens, Q.; Taavola, H.; Taboada, I.; Tamburro, A.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Toscano, S.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; van Santen, J.; Vehring, M.; Voge, M.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Wolf, M.; Woschnagg, K.; Xu, C.; Xu, X. W.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; IceCube Collaboration
2012-01-01
This paper presents four searches for flaring sources of neutrinos using the IceCube neutrino telescope. For the first time, a search is performed over the entire parameter space of energy, direction, and time with sensitivity to neutrino flares lasting between 20 μs and a year duration from astrophysical sources. Searches that integrate over time are less sensitive to flares because they are affected by a larger background of atmospheric neutrinos and muons that can be reduced by the use of additional timing information. Flaring sources considered here, such as active galactic nuclei, soft gamma-ray repeaters, and gamma-ray bursts, are promising candidate neutrino emitters. Two searches are "untriggered" in the sense that they look for any possible flare in the entire sky and from a predefined catalog of sources from which photon flares have been recorded. The other two searches are triggered by multi-wavelength information on flares from blazars and from a soft gamma-ray repeater. One triggered search uses lightcurves from Fermi-LAT which provides continuous monitoring. A second triggered search uses information where the flux states have been measured only for short periods of time near the flares. The untriggered searches use data taken by 40 strings of IceCube between 2008 April 5 and 2009 May 20. The triggered searches also use data taken by the 22-string configuration of IceCube operating between 2007 May 31 and 2008 April 5. The results from all four searches are compatible with a fluctuation of the background.
NASA Technical Reports Server (NTRS)
Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.;
2012-01-01
This paper presents four searches for flaring sources of neutrinos using the IceCube neutrino telescope. For the first time, a search is performed over the entire parameter space of energy, direction, and time with sensitivity to neutrino flares lasting between 20 micro-s and a year duration from astrophysical sources. Searches that integrate over time are less sensitive to flares because they are affected by a larger background of atmospheric neutrinos and muons that can be reduced by the use of additional timing information. Flaring sources considered here, such as active galactic nuclei, soft gamma-ray repeaters, and gamma-ray bursts, are promising candidate neutrino emitters. Two searches are "untriggered" in the sense that they look for any possible flare in the entire sky and from a predefined catalog of sources from which photon flares have been recorded. The other two searches are triggered by multi-wavelength information on flares from blazars and from a soft gamma-ray repeater. One triggered search uses lightcurves from Fermi-LAT which provides continuous monitoring. A second triggered search uses information where the flux states have been measured only for short periods of time near the flares. The untriggered searches use data taken by 40 strings of IceCube between 2008 April 5 and 2009 May 20. The triggered searches also use data taken by the 22-string configuration of IceCube operating between 2007 May 31 and 2008 April 5. The results from all four searches are compatible with a fluctuation of the background.
Multi-rate, real time image compression for images dominated by point sources
NASA Technical Reports Server (NTRS)
Huber, A. Kris; Budge, Scott E.; Harris, Richard W.
1993-01-01
An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.
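The encoding chain's key property — lossless transmission of peaks — can be sketched with minimum-mean removal plus threshold truncation. Vector quantization and the modified Huffman coder are omitted, and the function names are ours:

```python
import numpy as np

def encode_block(block, threshold):
    """Remove the block minimum (sent as side information), keep
    residual values at or above the threshold (candidate point sources)
    losslessly, and zero the rest before entropy coding."""
    base = block.min()
    residual = block - base
    kept = np.where(residual >= threshold, residual, 0)
    return base, kept

def decode_block(base, kept):
    return base + kept
```

Background pixels below the threshold come back flattened to the block base — that is the lossy part — while every peak survives bit-exactly, matching the behavior the simulations report for point sources.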
NASA Astrophysics Data System (ADS)
Nagasaka, Yosuke; Nozu, Atsushi
2017-02-01
The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with M JMA 7.3. Many strong motions were recorded at stations around the source region. Some records were considered to be affected by the rupture directivity effect. This earthquake was suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. It was found that the synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to the following two reasons. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second reason is that the current version of the pseudo point-source model does not consider the rupture directivity effect. Consequently, strong pulses were not reproduced enough at stations northeast of Subevent 3 such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most of the other stations. 
This result indicates the need to improve the pseudo point-source model, for example by introducing an azimuth-dependent corner frequency, so that it can incorporate the effect of rupture directivity.
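The simple source spectrum the model combines with empirical path and site terms follows the omega-square shape: flat below the corner frequency and falling as f⁻² above it. A one-line sketch:

```python
import numpy as np

def omega_square_spectrum(f, omega0, fc):
    """Omega-square source displacement spectrum: level omega0 below
    the corner frequency fc, rolling off as f**-2 well above it."""
    f = np.asarray(f, float)
    return omega0 / (1.0 + (f / fc) ** 2)
```

An azimuth-dependent correction of the kind the authors suggest would make fc a function of the station direction relative to rupture propagation.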
A numerical experiment on light pollution from distant sources
NASA Astrophysics Data System (ADS)
Kocifaj, M.
2011-08-01
To predict the light pollution of the night-time sky realistically over any location or measuring point on the ground is quite a difficult computational task. Light pollution of the local atmosphere is caused by stray light, light loss or reflection from artificially illuminated ground objects or surfaces such as streets, advertisement boards or building interiors. Thus it depends on the size, shape, spatial distribution, radiative pattern and spectral characteristics of many neighbouring light sources. The actual state of the atmospheric environment and the orography of the surrounding terrain are also relevant. All of these factors together influence the spectral sky radiance/luminance in a complex manner. Knowledge of the directional behaviour of light pollution is especially important for the correct interpretation of astronomical observations. From a mathematical point of view, the light noise or veil luminance of a specific sky element is given by a superposition of scattered light beams. Theoretical models that simulate light pollution typically take into account all ground-based light sources, thus imposing heavy demands on CPU time and memory. As shown in this paper, the contribution of distant sources to light pollution can be essential under specific conditions of low turbidity and/or Garstang-like radiative patterns. To evaluate the convergence of the theoretical model, numerical experiments are made for different light sources, spectral bands and atmospheric conditions. It is shown that in the worst case the integration limit is approximately 100 km, but it can be significantly shortened for light sources with cosine-like radiative patterns.
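The convergence question — how far out ground sources must be integrated before their cumulative contribution saturates — can be mimicked numerically. The exponential falloff below is a toy stand-in for the paper's extinction and radiative-pattern calculations, not its model:

```python
import numpy as np

def integration_limit(radii, weights, fraction=0.99):
    """Smallest radius whose cumulative share of the summed sky-glow
    contribution (sources binned by distance, with per-bin weights)
    reaches `fraction` of the total."""
    order = np.argsort(radii)
    r = np.asarray(radii, float)[order]
    w = np.asarray(weights, float)[order]
    cum = np.cumsum(w) / w.sum()
    return r[np.searchsorted(cum, fraction)]
```

With a steeper (e.g. cosine-like) falloff the same routine returns a much shorter limit, mirroring the paper's conclusion.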
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tregillis, I. L.
The Los Alamos Physics and Engineering Models (PEM) program has developed a model for Richtmyer-Meshkov instability (RMI) based ejecta production from shock-melted surfaces, along with a prescription for a self-similar velocity distribution (SSVD) of the resulting ejecta particles. We have undertaken an effort to validate this source model using data from explosively driven tin coupon experiments. The model's current formulation lacks a crucial piece of physics: a method for determining the duration of the ejecta production interval. Without a mechanism for terminating ejecta production, the model is not predictive. Furthermore, when the production interval is hand-tuned to match time-integrated mass data, the predicted time-dependent mass accumulation on a downstream sensor rises too sharply at early times and too slowly at late times because the SSVD overestimates the amount of mass stored in the fastest particles and underestimates the mass stored in the slowest particles. The functional form of the resulting m(t) is inconsistent with the available time-dependent data; numerical simulations and analytic studies agree on this point. Simulated mass tallies are highly sensitive to radial expansion of the ejecta cloud. It is not clear if the same effect is present in the experimental data, but if so, depending on the degree, this may challenge the model's compatibility with tin coupon data. The current implementation of the model in FLAG is sensitive to the detailed interaction between kinematics (hydrodynamic methods) and thermodynamics (material models); this sensitivity prohibits certain physics modeling choices. The appendices contain an extensive analytic study of piezoelectric ejecta mass measurements, along with test problems, excerpted from a longer work (LA-UR-17-21218).
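The step from a velocity distribution to time-dependent mass accumulation m(t) is pure kinematics: a particle launched at t = 0 reaches a sensor at distance d once t ≥ d/v, so the fastest particles dominate early times and the slowest dominate late times — which is why an SSVD that misallocates mass at the velocity extremes distorts the shape of m(t). A sketch with a generic discrete distribution (the SSVD itself is not reproduced here):

```python
import numpy as np

def mass_on_sensor(t, d, v_particles, m_particles):
    """Mass accumulated on a sensor at standoff d by time t, assuming
    all ejecta particles leave the surface at t = 0 and fly
    ballistically: a particle of velocity v has arrived iff v >= d/t."""
    if t <= 0:
        return 0.0
    arrived = v_particles >= d / t
    return float(m_particles[arrived].sum())
```

m(t) is a nondecreasing staircase that saturates at the total ejected mass; its early-time slope is set entirely by the high-velocity tail of the distribution.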
Jet Noise Physics and Modeling Using First-principles Simulations
NASA Technical Reports Server (NTRS)
Freund, Jonathan B.
2003-01-01
An extensive analysis of our jet DNS database has provided for the first time the complex correlations that are the core of many statistical jet noise models, including MGBK. We have also for the first time explicitly computed the noise from different components of a commonly used noise source as proposed in many modeling approaches. Key findings are: (1) While two-point (space and time) velocity statistics are well-fitted by decaying exponentials, even for our low-Reynolds-number jet, spatially integrated fourth-order space/retarded-time correlations, which constitute the noise "source" in MGBK, are instead well-fitted by Gaussians. The width of these Gaussians depends (by a factor of 2) on which components are considered. This is counter to current modeling practice, (2) A standard decomposition of the Lighthill source is shown by direct evaluation to be somewhat artificial since the noise from these nominally separate components is in fact highly correlated. We anticipate that the same will be the case for the Lilley source, and (3) The far-field sound is computed in a way that explicitly includes all quadrupole cancellations, yet evaluating the Lighthill integral for only a small part of the jet yields a far-field noise far louder than that from the whole jet due to missing nonquadrupole cancellations. Details of this study are discussed in a draft of a paper included as appendix A.
Computed narrow-band azimuthal time-reversing array retrofocusing in shallow water.
Dungan, M R; Dowling, D R
2001-10-01
The process of acoustic time reversal sends sound waves back to their point of origin in reciprocal acoustic environments even when the acoustic environment is unknown. The properties of the time-reversed field commonly depend on the frequency of the original signal, the characteristics of the acoustic environment, and the configuration of the time-reversing transducer array (TRA). In particular, vertical TRAs are predicted to produce horizontally confined foci in environments containing random volume refraction. This article validates and extends this prediction to shallow water environments via monochromatic Monte Carlo propagation simulations (based on parabolic equation computations using RAM). The computational results determine the azimuthal extent of a TRA's retrofocus in shallow-water sound channels either having random bottom roughness or containing random internal-wave-induced sound speed fluctuations. In both cases, randomness in the environment may reduce the predicted azimuthal angular width of the vertical TRA retrofocus to as little as several degrees (compared to 360 degrees for uniform environments) for source-array ranges from 5 to 20 km at frequencies from 500 Hz to 2 kHz. For both types of randomness, power law scalings are found to collapse the calculated azimuthal retrofocus widths for shallow sources over a variety of acoustic frequencies, source-array ranges, water column depths, and random fluctuation amplitudes and correlation scales. Comparisons are made between retrofocusing on shallow and deep sources, and in strongly and mildly absorbing environments.
Turning Noise into Signal: Utilizing Impressed Pipeline Currents for EM Exploration
NASA Astrophysics Data System (ADS)
Lindau, Tobias; Becken, Michael
2017-04-01
Impressed Current Cathodic Protection (ICCP) systems are extensively used to protect central Europe's dense network of oil, gas, and water pipelines against destruction by electrochemical corrosion. While ICCP systems usually provide protection by injecting a DC current into the pipeline, mandatory pipeline integrity surveys demand periodic switching of the current. Consequently, the resulting time-varying pipe currents induce secondary electric and magnetic fields in the surrounding earth. While these fields are usually considered unwanted cultural noise in electromagnetic exploration, this work aims at utilizing the fields generated by the ICCP system to determine the electrical resistivity of the subsurface. The fundamental period of the switching cycles typically amounts to 15 seconds in Germany and thereby roughly corresponds to the periods used in controlled-source EM (CSEM) applications. For detailed studies we chose an approximately 30 km long pipeline segment near Herford, Germany as a test site. The segment is located close to the southern margin of the Lower Saxony Basin (LSB) and is part of a larger gas pipeline composed of multiple segments. The current injected into the pipeline segment originates in a rectified 50 Hz AC signal which is periodically switched on and off. In contrast to the dipole sources usually used in CSEM surveys, the current distribution along the pipeline is unknown and expected to be non-uniform due to coating defects that cause current to leak into the surrounding soil. However, an accurate current distribution is needed to model the fields generated by the pipeline source. We measured the magnetic fields at several locations above the pipeline and used the Biot-Savart law to estimate the current's decay function. 
The resulting frequency-dependent current distribution shows a current decay away from the injection point as well as a frequency-dependent phase shift that increases with distance from the injection point. Electric field data were recorded at 45 stations located in an area of about 60 square kilometers in the vicinity of the pipeline. Additionally, the injected source current was recorded directly at the injection point. Transfer functions between the local electric fields and the injected source current are estimated for frequencies ranging from 0.03 Hz to 15 Hz using robust time series processing techniques. The resulting transfer functions are inverted for a 3D conductivity model of the subsurface using an elaborate pipeline model. We interpret the model with regard to the local geologic setting, demonstrating the method's capability to image the subsurface.
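The current estimate from above-pipe magnetic measurements follows from the infinite straight-wire limit of the Biot-Savart law, B = μ0 I / (2πh). A minimal sketch, with hypothetical measurement values:

```python
import math

MU0 = 4.0e-7 * math.pi   # vacuum permeability (T*m/A)

def pipe_current_a(B_tesla, height_m):
    """Current (A) in a long straight pipeline inferred from the transverse
    magnetic flux density measured at height_m above it, using the
    infinite-line limit of the Biot-Savart law: B = MU0 * I / (2*pi*h)."""
    return 2.0 * math.pi * height_m * B_tesla / MU0

# 100 nT measured 2 m above the pipe implies a pipe current of 1 A
I_est = pipe_current_a(100e-9, 2.0)
```

Repeating this estimate along the pipeline gives the decay of current with distance from the injection point described above.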
Sparsity-promoting inversion for modeling of irregular volcanic deformation source
NASA Astrophysics Data System (ADS)
Zhai, G.; Shirzaei, M.
2016-12-01
Kīlauea volcano, Hawai'i Island, has a complex magmatic system. Nonetheless, kinematic models of the summit reservoir have so far been limited to first-order analytical solutions with pre-determined geometry. To investigate the complex geometry and kinematics of the summit reservoir, we apply a multitrack multitemporal wavelet-based InSAR (Interferometric Synthetic Aperture Radar) algorithm and a geometry-free time-dependent modeling scheme considering a superposition of point centers of dilatation (PCDs). Applying Principal Component Analysis (PCA) to the time-dependent source model, six spatially independent deformation zones (i.e., reservoirs) are identified, whose locations are consistent with previous studies. The time dependence of the model also allows identifying periods of correlated or anti-correlated behavior between reservoirs. Hence, we suggest that the reservoirs are likely connected and form a complex magmatic system [Zhai and Shirzaei, 2016]. To obtain a physically meaningful representation of the complex reservoir, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations (i.e., outliers in the background crust). The major steps include inverting surface deformation data using a hybrid L1- and L2-norm regularization approach to solve for a sparse volume change distribution, and then implementing a BEM-based method to solve for the opening distribution on a triangular mesh representing the complex reservoir. Using this approach, we are able to constrain the internal excess pressure of a magma body with irregular geometry, satisfying a uniform-pressure boundary condition on the surface of the magma chamber. The inversion method with sparsity constraint is tested using five synthetic source geometries, including a torus, a prolate ellipsoid, and a sphere, as well as horizontal and vertical L-shaped bodies. The results show that source dimension, depth, and shape are well recovered. 
Afterward, we apply this modeling scheme to deformation observed at the Kīlauea summit to constrain the magmatic source geometry and revise the kinematics of Kīlauea's shallow plumbing system. Such a model is valuable for understanding the physical processes in a magmatic reservoir, and the method can readily be applied to other volcanic settings.
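A generic version of the hybrid L1/L2-norm inversion step can be sketched with proximal gradient descent (ISTA); the Green's function matrix G, regularization weights, and iteration count below are placeholders, and the authors' actual implementation and subsequent BEM step are not reproduced here.

```python
import numpy as np

def hybrid_l1_l2_invert(G, d, lam1=0.1, lam2=0.01, n_iter=500):
    """Solve min_m ||G m - d||^2 + lam1*||m||_1 + lam2*||m||^2 by proximal
    gradient descent (ISTA): a gradient step on the smooth terms followed
    by soft-thresholding, which drives most volume changes to exactly zero
    (the sparsity-promoting behavior described above)."""
    m = np.zeros(G.shape[1])
    lip = 2.0 * (np.linalg.norm(G, 2) ** 2 + lam2)  # Lipschitz const. of smooth part
    step = 1.0 / lip
    for _ in range(n_iter):
        grad = 2.0 * G.T @ (G @ m - d) + 2.0 * lam2 * m
        z = m - step * grad
        m = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)  # soft threshold
    return m
```

On a toy identity-operator problem with one active source, the recovered model keeps the single nonzero entry (slightly shrunk by the penalties) and zeros out the rest.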
The social ecology of water in a Mumbai slum: failures in water quality, quantity, and reliability.
Subbaraman, Ramnath; Shitole, Shrutika; Shitole, Tejal; Sawant, Kiran; O'Brien, Jennifer; Bloom, David E; Patil-Deshmukh, Anita
2013-02-26
Urban slums in developing countries that are not recognized by the government often lack legal access to municipal water supplies. This results in the creation of insecure "informal" water distribution systems (i.e., community-run or private systems outside of the government's purview) that may increase water-borne disease risk. We evaluate an informal water distribution system in a slum in Mumbai, India using commonly accepted health and social equity indicators. We also identify predictors of bacterial contamination of drinking water using logistic regression analysis. Data were collected through two studies: the 2008 Baseline Needs Assessment survey of 959 households and the 2011 Seasonal Water Assessment, in which 229 samples were collected for water quality testing over three seasons. Water samples were collected in each season from the following points along the distribution system: motors that directly tap the municipal supply (i.e., "point-of-source" water), hoses going to slum lanes, and storage and drinking water containers from 21 households. Depending on season, households spend an average of 52 to 206 times more than the standard municipal charge of Indian rupees 2.25 (US dollars 0.04) per 1000 liters for water, and, in some seasons, 95% use less than the WHO-recommended minimum of 50 liters per capita per day. During the monsoon season, 50% of point-of-source water samples were contaminated. Despite a lack of point-of-source water contamination in other seasons, stored drinking water was contaminated in all seasons, with rates as high as 43% for E. coli and 76% for coliform bacteria. In the multivariate logistic regression analysis, monsoon and summer seasons were associated with significantly increased odds of drinking water contamination. Our findings reveal severe deficiencies in water-related health and social equity indicators. 
All bacterial contamination of drinking water occurred due to post-source contamination during storage in the household, except during the monsoon season, when there was some point-of-source water contamination. This suggests that safe storage and household water treatment interventions may improve water quality in slums. Problems of exorbitant expense, inadequate quantity, and poor point-of-source quality can only be remedied by providing unrecognized slums with equitable access to municipal water supplies.
Energy resolution of pulsed neutron beam provided by the ANNRI beamline at the J-PARC/MLF
NASA Astrophysics Data System (ADS)
Kino, K.; Furusaka, M.; Hiraga, F.; Kamiyama, T.; Kiyanagi, Y.; Furutaka, K.; Goko, S.; Hara, K. Y.; Harada, H.; Harada, M.; Hirose, K.; Kai, T.; Kimura, A.; Kin, T.; Kitatani, F.; Koizumi, M.; Maekawa, F.; Meigo, S.; Nakamura, S.; Ooi, M.; Ohta, M.; Oshima, M.; Toh, Y.; Igashira, M.; Katabuchi, T.; Mizumoto, M.; Hori, J.
2014-02-01
We studied the energy resolution of the pulsed neutron beam of the Accurate Neutron-Nucleus Reaction Measurement Instrument (ANNRI) at the Japan Proton Accelerator Research Complex/Materials and Life Science Experimental Facility (J-PARC/MLF). A simulation covering the energy region from 0.7 meV to 1 MeV was performed, and measurements were made at thermal (0.76-62 meV) and epithermal (4.8-410 eV) energies. The neutron energy resolution of ANNRI determined by the time-of-flight technique depends on the time structure of the neutron pulse. We obtained the neutron energy resolution as a function of neutron energy by simulation in the two operation modes of the neutron source: double- and single-bunch modes. In double-bunch mode, the resolution deteriorates above about 10 eV because the time structure of the neutron pulse splits into two peaks. The time structures at 13 energy points measured in the thermal energy region agree with those of the simulation. In the epithermal energy region, the time structures at 17 energy points were obtained from measurements and agree with those of the simulation. The FWHM values of the time structures from the simulation and the measurements were found to be almost consistent. In single-bunch mode, the energy resolution is better than about 1% between 1 meV and 10 keV at a neutron source power of 17.5 kW. These results confirm the energy resolution of the pulsed neutron beam produced by the ANNRI beamline.
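The time-of-flight energy determination underlying these resolution figures follows from E = mv²/2 with v = L/t, and to first order the fractional energy resolution is twice the fractional timing width, ΔE/E ≈ 2Δt/t. A sketch with standard constants (the flight path and timing values below are illustrative, not ANNRI's):

```python
M_N = 1.674927e-27   # neutron mass (kg)
EV = 1.602177e-19    # joules per electronvolt

def tof_energy_ev(flight_path_m, tof_s):
    """Neutron kinetic energy E = m v^2 / 2 (in eV) from its time of flight."""
    v = flight_path_m / tof_s
    return 0.5 * M_N * v * v / EV

def fractional_energy_resolution(dt_s, tof_s):
    """First-order propagation of a timing width into energy: dE/E = 2 dt/t."""
    return 2.0 * dt_s / tof_s

# a neutron covering 22 m in 10 ms moves at 2200 m/s, i.e. about 25.3 meV
```

The 2Δt/t relation is why the split double-bunch time structure directly degrades the energy resolution above ~10 eV, where flight times are short.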
Answering questions at the point of care: do residents practice EBM or manage information sources?
McCord, Gary; Smucker, William D; Selius, Brian A; Hannan, Scott; Davidson, Elliot; Schrop, Susan Labuda; Rao, Vinod; Albrecht, Paula
2007-03-01
To determine the types of information sources that evidence-based medicine (EBM)-trained family medicine residents use to answer clinical questions at the point of care, to assess whether the sources are evidence-based, and to provide suggestions for more effective information-management strategies in residency training. In 2005, trained medical students directly observed (for two half-days per physician) how 25 third-year family medicine residents retrieved information to answer clinical questions arising at the point of care and documented the type and name of each source, the retrieval location, and the estimated time spent consulting the source. An end-of-study questionnaire asked 37 full-time faculty and the participating residents about the best information sources available, subscriptions owned, why they use a personal digital assistant (PDA) to practice medicine, and their experience in preventing medical errors using a PDA. Forty-four percent of questions were answered by attending physicians, 23% by consulting PDAs, and 20% from books. Seventy-two percent of questions were answered within two minutes. Residents rated UpToDate as the best source for evidence-based information, but they used this source only five times. PDAs were used because of ease of use, time factors, and accessibility. All examples of medical errors discovered or prevented with PDA programs were medication related. None of the participants' residencies required the use of a specific medical information resource. The results support the Agency for Healthcare Research and Quality's call for medical system improvements at the point of care. Additionally, it may be necessary to teach residents better information-management skills in addition to EBM skills.
Real-time volcano monitoring using GNSS single-frequency receivers
NASA Astrophysics Data System (ADS)
Lee, Seung-Woo; Yun, Sung-Hyo; Kim, Do Hyeong; Lee, Dukkee; Lee, Young J.; Schutz, Bob E.
2015-12-01
We present a real-time volcano monitoring strategy that uses the Global Navigation Satellite System (GNSS), and we examine the performance of the strategy by processing simulated and real data and comparing the results with published solutions. The cost of implementing the strategy is reduced greatly by using single-frequency GNSS receivers except for one dual-frequency receiver that serves as a base receiver. Positions of the single-frequency receivers are computed relative to the base receiver on an epoch-by-epoch basis using the high-rate double-difference (DD) GNSS technique, while the position of the base station is fixed to the values obtained with a deferred-time precise point positioning technique and updated on a regular basis. Since the performance of the single-frequency high-rate DD technique depends on the conditions of the ionosphere over the monitoring area, the ionospheric total electron content is monitored using the dual-frequency data from the base receiver. The surface deformation obtained with the high-rate DD technique is eventually processed by a real-time inversion filter based on the Mogi point source model. The performance of the real-time volcano monitoring strategy is assessed through a set of tests and case studies, in which the data recorded during the 2007 eruption of Kilauea and the 2005 eruption of Augustine are processed in a simulated real-time mode. The case studies show that the displacement time series obtained with the strategy seem to agree with those obtained with deferred-time, dual-frequency approaches at the level of 10-15 mm. Differences in the estimated volume change of the Mogi source between the real-time inversion filter and previously reported works were in the range of 11 to 13% of the maximum volume changes of the cases examined.
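The Mogi point-source model used by the real-time inversion filter relates a volume change ΔV at depth d to surface displacements through R = sqrt(r² + d²). A minimal sketch of the standard elastic half-space formulas (Poisson ratio 0.25 assumed):

```python
import math

def mogi_displacement(r, depth, dV, nu=0.25):
    """Surface displacement (u_r, u_z) in meters produced by a Mogi point
    pressure source of volume change dV (m^3) at the given depth (m),
    evaluated at horizontal distance r (m) from the source axis, in a
    uniform elastic half-space with Poisson ratio nu:
    u_r = (1-nu)*dV*r/(pi*R^3), u_z = (1-nu)*dV*depth/(pi*R^3)."""
    R3 = (r * r + depth * depth) ** 1.5
    c = (1.0 - nu) * dV / math.pi
    return c * r / R3, c * depth / R3

# 10^6 m^3 of inflation at 1 km depth uplifts the point directly above
# the source by roughly 24 cm
ur0, uz0 = mogi_displacement(0.0, 1000.0, 1.0e6)
```

Inverting this forward model for dV and source position from a network of GNSS displacements is, in essence, what the real-time filter described above does at each epoch.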
Directional Limits on Persistent Gravitational Waves from Advanced LIGO's First Observing Run
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Biscoveanu, A. S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Campbell, W.; Canepa, M.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, E.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. 
M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. 
R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lousto, C. O.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matas, A.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. 
E.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pace, A. E.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. 
M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schlassa, S.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strigin, S. 
E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tao, D.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. J.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration
2017-03-01
We employ gravitational-wave radiometry to map the stochastic gravitational wave background expected from a variety of contributing mechanisms and to test the assumption of isotropy using data from the Advanced Laser Interferometer Gravitational Wave Observatory's (aLIGO) first observing run. We also search for persistent gravitational waves from point sources with only minimal assumptions over the 20-1726 Hz frequency band. Finding no evidence of gravitational waves from either point sources or a stochastic background, we set limits at 90% confidence. For broadband point sources, we report upper limits on the gravitational wave energy flux per unit frequency in the range F_{α,Θ}(f) < (0.1-56)×10^-8 erg cm^-2 s^-1 Hz^-1 (f/25 Hz)^(α-1), depending on the sky location Θ and the spectral power index α. For extended sources, we report upper limits on the fractional gravitational wave energy density required to close the Universe of Ω(f,Θ) < (0.39-7.6)×10^-8 sr^-1 (f/25 Hz)^α, depending on Θ and α. Directed searches for narrowband gravitational waves from astrophysically interesting objects (Scorpius X-1, Supernova 1987A, and the Galactic Center) yield median frequency-dependent limits on strain amplitude of h_0 < (6.7, 5.5, and 7.0)×10^-25, respectively, at the most sensitive detector frequencies between 130 and 175 Hz. This represents a mean improvement of a factor of 2 across the band compared to previous searches of this kind for these sky locations, considering the different quantities of strain constrained in each case.
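The frequency scaling of the broadband point-source limit can be evaluated directly from its quoted form, F(f) = F_25 (f/25 Hz)^(α-1); the reference value below is the lower end of the quoted range, for one sky direction.

```python
def flux_limit(f_hz, F25=0.1e-8, alpha=0.0):
    """Broadband point-source energy-flux upper limit at frequency f_hz,
    given its value F25 at the 25 Hz reference frequency
    (erg cm^-2 s^-1 Hz^-1) and spectral power index alpha."""
    return F25 * (f_hz / 25.0) ** (alpha - 1.0)

# For alpha = 0 the limit tightens as 1/f with increasing frequency;
# for alpha = 1 it is frequency-independent.
```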
Valdes, Claudia P.; Varma, Hari M.; Kristoffersen, Anna K.; Dragojevic, Tanja; Culver, Joseph P.; Durduran, Turgut
2014-01-01
We introduce a new, non-invasive, diffuse optical technique, speckle contrast optical spectroscopy (SCOS), for probing deep tissue blood flow using the statistical properties of laser speckle contrast and the photon diffusion model for a point source. The feasibility of the method is tested using liquid phantoms which demonstrate that SCOS is capable of measuring the dynamic properties of turbid media non-invasively. We further present an in vivo measurement in a human forearm muscle using SCOS in two modalities: one with the dependence of the speckle contrast on the source-detector separation and another on the exposure time. In doing so, we also introduce crucial corrections to the speckle contrast that account for the variance of the shot and sensor dark noises. PMID:25136500
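The core SCOS observable is the speckle contrast K = σ(I)/⟨I⟩ over a pixel ensemble; the shot- and dark-noise variance corrections mentioned above can be sketched as subtractions from the raw intensity variance. The exact correction terms used by the authors may differ from this generic form.

```python
import numpy as np

def corrected_speckle_contrast_sq(I, gain=1.0, dark_var=0.0):
    """Squared speckle contrast K^2 = var(I)/mean(I)^2 over a pixel
    ensemble, with shot-noise (variance ~ gain * mean for a Poisson-limited
    sensor) and dark-noise variances subtracted from the raw variance."""
    I = np.asarray(I, dtype=float)
    mu = I.mean()
    var = I.var(ddof=1) - gain * mu - dark_var  # remove noise contributions
    return max(var, 0.0) / (mu * mu)
```

Measuring K² as a function of source-detector separation or exposure time, as in the two modalities above, then feeds a correlation-diffusion model to recover the flow index.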
NASA Astrophysics Data System (ADS)
Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.
2017-12-01
Atmospheric methane (CH4) plays an important role in global radiative forcing of climate, but its emission estimates have larger uncertainties than those of carbon dioxide (CO2). The area of anthropogenic emission sources is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the earth's surface. It has an agile pointing system, and its footprint can cover 87 km2 with a single detector. By specifying pointing angles and observation times for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over years. We selected a reference point that represents the CH4 background density before or after targeting a point source. By combining the satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here we selected two sites on the US West Coast, where clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement and its decrease with time since the initial blowout. We present a time series of flux estimates assuming the source is a single point without influx. The cattle feedlot in Chino, California has a weather station within the TANSO-FTS footprint. The wind speed is monitored continuously, and the wind direction is stable at the time of GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement below the noise level. Weak wind shows enhancements in CH4, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce uncertainty using a time series of satellite data. 
We propose that next-generation instruments for accurate anthropogenic CO2 and CH4 flux estimation should have improved spatial resolution (~1 km2) to further enhance column density changes. We also propose adding imaging capability to monitor plume orientation. We will present laboratory model results and a sampling pattern optimization study that combines local emission source and global survey observations.
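A point-source flux estimate of the kind described above can be sketched as a simple mass-balance calculation. The molar mass is a standard constant, but the function name and all input numbers below are hypothetical illustrations, not GOSAT retrievals.

```python
# Minimal mass-balance sketch of a point-source CH4 flux estimate:
# flux = column enhancement x molar mass x wind speed x cross-wind plume width.
# All input values below are hypothetical, for illustration only.

M_CH4 = 16.04e-3  # kg/mol, molar mass of methane

def flux_mass_balance(delta_column_mol_m2, wind_speed_m_s, plume_width_m):
    """Source flux in kg/s, assuming a single point source with no influx."""
    return delta_column_mol_m2 * M_CH4 * wind_speed_m_s * plume_width_m

# Hypothetical case: 0.02 mol/m^2 enhancement, 3 m/s wind, 5 km plume width
q_kg_s = flux_mass_balance(0.02, 3.0, 5000.0)
```

A time series of such estimates, one per overpass, is what allows a decaying source like the Aliso Canyon leak to be tracked despite the noise in individual samples.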
NASA Astrophysics Data System (ADS)
Gusev, A. A.; Pavlov, V. M.
1991-07-01
We consider the inverse problem of determining the short-period (high-frequency) radiator in an extended earthquake source. This radiator is assumed to be noncoherent (i.e., random); it can be described by its power flux or brightness (which depends on time and location over the extended source). To characterize this radiator we use the temporal intensity function (TIF) of a seismic waveform at a given receiver point, defined as the (time-varying) mean elastic wave energy flux through unit area. We suggest estimating it empirically from the velocity seismogram by squaring and smoothing it. We refer to this function as the “observed TIF”. We believe that the TIF produced by an extended radiator and recorded at some receiver point in the earth can be represented as the convolution of two components: (1) the “ideal” intensity function (ITIF), which would be recorded in an ideal nonscattering earth from the same radiator; and (2) the intensity function which would be recorded in the real earth from a unit point instant radiator (the “intensity Green's function”, IGF). This representation enables us to estimate the ITIF of a large earthquake by inverse filtering, or deconvolution, of the observed TIF of this event, using the observed TIF of a small event (in practice, a fore- or aftershock) as the empirical IGF. In this way, the effect of scattering is “stripped off”. Examples of the application of this procedure to real data are given. We also show that if one can determine the far-field ITIF for enough rays, one can extract from them information on the space-time structure of the radiator (that is, on the brightness function). We apply this theoretical approach to short-period P-wave records of the 1978 Miyagi-oki earthquake (M = 7.6). The spatial and temporal centroids of the short-period radiator are estimated.
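The inverse-filtering step, recovering the ITIF from the observed TIF with a small event's TIF serving as the empirical IGF, can be sketched as a frequency-domain deconvolution. The water-level (regularized) division below is a standard stabilization choice, not necessarily the authors' exact scheme.

```python
import numpy as np

def estimate_itif(observed_tif, empirical_igf, eps=1e-3):
    """Deconvolve the observed TIF by the empirical IGF:
    ITIF ~ F^-1[ F[TIF] conj(F[IGF]) / (|F[IGF]|^2 + water level) ]."""
    n = len(observed_tif)
    tif_f = np.fft.rfft(observed_tif, n)
    igf_f = np.fft.rfft(empirical_igf, n)
    water = eps * np.max(np.abs(igf_f)) ** 2  # guards against spectral zeros
    itif_f = tif_f * np.conj(igf_f) / (np.abs(igf_f) ** 2 + water)
    return np.fft.irfft(itif_f, n)
```

In practice the observed TIF would first be formed by squaring and smoothing the velocity seismogram, as described above.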
Song, Min-Ho; Choi, Jung-Woo; Kim, Yang-Hann
2012-02-01
A focused source can provide an auditory illusion of a virtual source placed between the loudspeaker array and the listener. When a focused source is generated by a time-reversed acoustic focusing solution, its use as a virtual source is limited by artifacts caused by convergent waves traveling towards the focusing point. This paper proposes an array activation method to reduce these artifacts for a selected listening point inside an array of arbitrary shape. Results show that the energy of convergent waves can be reduced by up to 60 dB for a large region including the selected listening point. © 2012 Acoustical Society of America
Sampayan, Stephen E.
2016-11-22
Apparatus, systems, and methods that provide an X-ray interrogation system having a plurality of stationary X-ray point sources arranged to substantially encircle an area or space to be interrogated. A plurality of stationary detectors are arranged to substantially encircle the area or space to be interrogated. A controller is adapted to control the stationary X-ray point sources to emit X-rays one at a time, and to control the stationary detectors to detect the X-rays emitted by the stationary X-ray point sources.
A Novel Effect of Scattered-Light Interference in Misted Mirrors
ERIC Educational Resources Information Center
Bridge, N. James
2005-01-01
Interference rings can be observed in mirrors clouded by condensation, even in diffuse lighting. The effect depends on individual droplets acting as point sources by refracting light into the mirror, so producing coherent wave-trains which are reflected and then scattered again by diffraction round the same source droplet. The secondary wave-train…
Temporal Dependence of Chromosomal Aberration on Radiation Quality and Cellular Genetic Background
NASA Technical Reports Server (NTRS)
Lu, Tao; Zhang, Ye; Krieger, Stephanie; Yeshitla, Samrawit; Goss, Rosalin; Bowler, Deborah; Kadhim, Munira; Wilson, Bobby; Wu, Honglu
2017-01-01
Radiation-induced cancer risks are driven by genetic instability. It is not well understood how different radiation sources induce genetic instability in cells with different genetic backgrounds. Here we report our studies on genetic instability, particularly chromosome instability using fluorescence in situ hybridization (FISH), in human primary lymphocytes, normal human fibroblasts, and transformed human mammary epithelial cells in a temporal manner after exposure to high energy protons and Fe ions. The chromosome spread was prepared 48 hours, 1 week, 2 weeks, and 1 month after radiation exposure. Chromosome aberrations were analyzed with whole chromosome specific probes (chr. 3 and chr. 6). After exposure to protons and Fe ions of similar cumulative energy, Fe ions induced more chromosomal aberrations at the early time point (48 hours) in all three types of cells. Over time (after 1 month), more chromosome aberrations were observed in cells exposed to Fe ions than in the same type of cells exposed to protons. While the mammary epithelial cells have higher intrinsic genetic instability and a higher rate of initial chromosome aberrations than the fibroblasts, the fibroblasts retained more chromosomal aberrations after long-term cell culture (1 month) in comparison to their initial frequency of chromosome aberration. In lymphocytes, the chromosome aberration frequency 1 month after exposure to Fe ions was close to the unexposed background, while the chromosome aberration frequency 1 month after exposure to protons was much higher. In addition to human cells, mouse bone marrow cells isolated from strains CBA/CaH and C57BL/6 were irradiated with protons or Fe ions and analyzed for chromosome aberration at different time points. Cells from CBA mice showed similar frequencies of chromosome aberration at early and late time points, while cells from C57 mice showed very different chromosome aberration rates at early and late time points.
Our results suggest that the relative biological effectiveness (RBE) of radiation is different for different radiation sources, for different cell types, and for the same cell type with different genetic backgrounds at different times after radiation exposure. Caution must be taken in using RBE values to estimate biological effects from radiation exposure.
NASA Technical Reports Server (NTRS)
Kapahi, Vijay K.; Kulkarni, Vasant K.
1990-01-01
VLA observations of a complete subset of the Leiden-Berkeley Deep Survey sources that have S(1.4 GHz) greater than 10 mJy and are not optically identified down to F=22 mag are reported. By comparing the spectral and structural properties of the sources with samples from the literature, an attempt was made to disentangle the luminosity and redshift dependence of the spectral indices of extended emission in radio galaxies and of the incidence of compact steep-spectrum sources. It is found that the fraction of compact sources among those with a steep spectrum is related primarily to redshift, being much larger at high redshifts for sources of similar radio luminosity. Only a weak and marginally significant dependence of the spectral indices of the extended sources on luminosity and redshift is found in samples selected at 1.4 and 2.7 GHz. It is pointed out that the much stronger correlation of spectral indices with luminosity may arise partly from spectral curvature and partly from the preferential inclusion of very steep-spectrum sources at high redshift in low-frequency surveys.
Computation of high Reynolds number internal/external flows
NASA Technical Reports Server (NTRS)
Cline, M. C.; Wilmoth, R. G.
1981-01-01
A general, user oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.
The size effects upon shock plastic compression of nanocrystals
NASA Astrophysics Data System (ADS)
Malygin, G. A.; Klyavin, O. V.
2017-10-01
For the first time, a theoretical analysis of size effects upon the shock plastic compression of nanocrystals is implemented in the context of a dislocation kinetic approach based on the equations and relationships of dislocation kinetics. The yield point of crystals τy is established as a quantitative function of their cross-section size D and the shock deformation rate ε̇ as τy ∼ ε̇^(2/3)D. This dependence is valid in the case of elastic stress relaxation on account of the emission of dislocations from single-pole Frank-Read sources near the crystal surface.
Adriani, O; Barbarino, G C; Bazilevskaya, G A; Bellotti, R; Boezio, M; Bogomolov, E A; Bongi, M; Bonvicini, V; Bottai, S; Bruno, A; Cafagna, F; Campana, D; Carlson, P; Casolino, M; Castellini, G; De Santis, C; Di Felice, V; Galper, A M; Karelin, A V; Koldashov, S V; Koldobskiy, S A; Krutkov, S Y; Kvashnin, A N; Leonov, A; Malakhov, V; Marcelli, L; Martucci, M; Mayorov, A G; Menn, W; Mergé, M; Mikhailov, V V; Mocchiutti, E; Monaco, A; Mori, N; Munini, R; Osteria, G; Panico, B; Papini, P; Pearce, M; Picozza, P; Ricci, M; Ricciarini, S B; Simon, M; Sparvoli, R; Spillantini, P; Stozhkov, Y I; Vacchi, A; Vannuccini, E; Vasilyev, G I; Voronov, S A; Yurkin, Y T; Zampa, G; Zampa, N; Potgieter, M S; Vos, E E
2016-06-17
Cosmic-ray electrons and positrons are a unique probe of the propagation of cosmic rays as well as of the nature and distribution of particle sources in our Galaxy. Recent measurements of these particles are challenging our basic understanding of the mechanisms of production, acceleration, and propagation of cosmic rays. Particularly striking are the differences between the low energy results collected by the space-borne PAMELA and AMS-02 experiments and older measurements pointing to a sign-charge dependence of the solar modulation of cosmic-ray spectra. The PAMELA experiment has been measuring the time variation of the positron and electron intensity at Earth from July 2006 to December 2015, covering the period from the minimum of solar cycle 23 (2006-2009) to the middle of the maximum of solar cycle 24, through the polarity reversal of the heliospheric magnetic field which took place between 2013 and 2014. The positron to electron ratio measured in this time period clearly shows a sign-charge dependence of the solar modulation introduced by particle drifts. These results provide the first clear and continuous observation of how drift effects on solar modulation have unfolded with time from solar minimum to solar maximum and their dependence on the particle rigidity and the cyclic polarity of the solar magnetic field.
Astefanoaei, Corina; Daye, Pierre M.; FitzGibbon, Edmond J.; Creanga, Dorina-Emilia; Rufa, Alessandra; Optican, Lance M.
2015-01-01
We move our eyes to explore the world, but visual areas determining where to look next (action) are different from those determining what we are seeing (perception). Whether, or how, action and perception are temporally coordinated is not known. The preparation time course of an action (e.g., a saccade) has been widely studied with the gap/overlap paradigm with temporal asynchronies (TA) between peripheral target onset and fixation point offset (gap, synchronous, or overlap). However, whether the subjects perceive the gap or overlap, and when they perceive it, has not been studied. We adapted the gap/overlap paradigm to study the temporal coupling of action and perception. Human subjects made saccades to targets with different TAs with respect to fixation point offset and reported whether they perceived the stimuli as separated by a gap or overlapped in time. Both saccadic and perceptual report reaction times changed in the same way as a function of TA. The TA dependencies of the time change for action and perception were very similar, suggesting a common neural substrate. Unexpectedly, in the perceptual task, subjects misperceived lights overlapping by less than ∼100 ms as separated in time (overlap seen as gap). We present an attention-perception model with a map of prominence in the superior colliculus that modulates the stimulus signal's effectiveness in the action and perception pathways. This common source of modulation determines how competition between stimuli is resolved, causes the TA dependence of action and perception to be the same, and causes the misperception. PMID:25632126
Background/Question/Methods Bacterial pathogens in surface water present disease risks to aquatic communities and for human recreational activities. Sources of these pathogens include runoff from urban, suburban, and agricultural point and non-point sources, but hazardous micr...
NASA Astrophysics Data System (ADS)
Jang, Jungkyu; Choi, Sungju; Kim, Jungmok; Park, Tae Jung; Park, Byung-Gook; Kim, Dong Myong; Choi, Sung-Jin; Lee, Seung Min; Kim, Dae Hwan; Mo, Hyun-Sun
2018-02-01
In this study, we investigate the effect of rising time (TR) of liquid gate bias (VLG) on transient responses in pH sensors based on Si nanowire ion-sensitive field-effect transistors (ISFETs). As TR becomes shorter and pH values decrease, the ISFET current takes a longer time to saturate to the pH-dependent steady-state value. By correlating VLG with the internal gate-to-source voltage of the ISFET, we found that this effect occurs when the drift/diffusion of mobile ions in analytes in response to VLG is delayed. This gives us useful insight on the design of ISFET-based point-of-care circuits and systems, particularly with respect to determining an appropriate rising time for the liquid gate bias.
Mellow, Tim; Kärkkäinen, Leo
2014-03-01
An acoustic curtain is an array of microphones used for recording sound which is subsequently reproduced through an array of loudspeakers in which each loudspeaker reproduces the signal from its corresponding microphone. Here the sound originates from a point source on the axis of symmetry of the circular array. The Kirchhoff-Helmholtz integral for a plane circular curtain is solved analytically as fast-converging expansions, assuming an ideal continuous array, to speed up computations and provide insight. By reversing the time sequence of the recording (or reversing the direction of propagation of the incident wave so that the point source becomes an "ideal" point sink), the curtain becomes a time reversal mirror and the analytical solution for this is given simultaneously. In the case of an infinite planar array, it is demonstrated that either a monopole or dipole curtain will reproduce the diverging sound field of the point source on the far side. However, although the real part of the sound field of the infinite time-reversal mirror is reproduced, the imaginary part is an approximation due to the missing singularity. It is shown that the approximation may be improved by using the appropriate combination of monopole and dipole sources in the mirror.
NASA Astrophysics Data System (ADS)
Vink, Rona; Behrendt, Horst
2002-11-01
Pollutant transport and management in the Rhine and Elbe basins is still of international concern, since certain target levels set by the international committees for protection of both rivers have not been reached. The analysis of the chain of emissions of point and diffuse sources to river loads will provide policy makers with a tool for effective management of river basins. The analysis of large river basins such as the Elbe and Rhine requires information on the spatial and temporal characteristics of both emissions and physical information of the entire river basin. In this paper, an analysis has been made of heavy metal emissions from various point and diffuse sources in the Rhine and Elbe drainage areas. Different point and diffuse pathways are considered in the model, such as inputs from industry, wastewater treatment plants, urban areas, erosion, groundwater, atmospheric deposition, tile drainage, and runoff. In most cases the measured heavy metal loads at monitoring stations are lower than the sum of the heavy metal emissions. This behaviour in large river systems can largely be explained by retention processes (e.g. sedimentation) and is dependent on the specific runoff of a catchment. Independent of the method used to estimate emissions, the source apportionment analysis of observed loads was used to determine the share of point and diffuse sources in the heavy metal load at a monitoring station by establishing a discharge dependency. The results from both the emission analysis and the source apportionment analysis of observed loads were compared and gave similar results. Between 51% (for Hg) and 74% (for Pb) of the total transport in the Elbe basin is supplied by inputs from diffuse sources. In the Rhine basin diffuse source inputs dominate the total transport and deliver more than 70% of the total transport. The diffuse hydrological pathways with the highest share are erosion and urban areas.
A deeper look at the X-ray point source population of NGC 4472
NASA Astrophysics Data System (ADS)
Joseph, T. D.; Maccarone, T. J.; Kraft, R. P.; Sivakoff, G. R.
2017-10-01
In this paper we discuss the X-ray point source population of NGC 4472, an elliptical galaxy in the Virgo cluster. We used recent deep Chandra data combined with archival Chandra data to obtain a 380 ks exposure time. We find 238 X-ray point sources within 3.7 arcmin of the galaxy centre, with a completeness flux F_X, 0.5-2 keV = 6.3 × 10^-16 erg s^-1 cm^-2. Most of these sources are expected to be low-mass X-ray binaries. We find that, using data from a single galaxy which is both complete and has a large number of objects (~100) below 10^38 erg s^-1, the X-ray luminosity function is well fitted with a single power-law model. By cross-matching our X-ray data with both space-based and ground-based optical data for NGC 4472, we find that 80 of the 238 sources are in globular clusters. We compare the red and blue globular cluster subpopulations and find red clusters are nearly six times more likely to host an X-ray source than blue clusters. We show that there is evidence that these two subpopulations have significantly different X-ray luminosity distributions. Source catalogues for all X-ray point sources, as well as any corresponding optical data for globular cluster sources, are also presented here.
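A single power-law fit to a luminosity function above a completeness limit is often done with the standard maximum-likelihood slope estimator rather than by binning. A minimal sketch follows; the sample is synthetic, not the NGC 4472 catalogue, and the estimator shown is the generic one, not necessarily the authors' exact fitting procedure.

```python
import numpy as np

def powerlaw_slope_mle(luminosities, l_min):
    """MLE slope for dN/dL ~ L^-alpha above a completeness limit l_min:
    alpha_hat = 1 + n / sum(ln(L_i / l_min))."""
    l = np.asarray(luminosities, dtype=float)
    l = l[l >= l_min]
    return 1.0 + len(l) / np.sum(np.log(l / l_min))

# Synthetic sample drawn from a pure alpha = 2 power law above 1e37 erg/s
rng = np.random.default_rng(0)
sample = 1e37 * (1.0 - rng.random(5000)) ** -1.0
alpha_hat = powerlaw_slope_mle(sample, 1e37)  # should recover roughly 2
```

Working above the completeness flux is what makes the estimator unbiased here; including sources below the limit would artificially flatten the inferred slope.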
Renewable Energy Can Help Reduce Oil Dependency
Arvizu, Dan
2017-12-21
In a speech to the Economic Club of Kansas City on June 23, 2010, NREL Director Dan Arvizu takes a realistic look at how renewable energy can help reduce America's dependence on oil, pointing out that the country gets as much energy from renewable sources now as it does from offshore oil production.
A comparison of speciated atmospheric mercury at an urban center and an upwind rural location
Rutter, A.P.; Schauer, J.J.; Lough, G.C.; Snyder, D.C.; Kolb, C.J.; Von Klooster, S.; Rudolf, T.; Manolopoulos, H.; Olson, M.L.
2008-01-01
Gaseous elemental mercury (GEM), particulate mercury (PHg) and reactive gaseous mercury (RGM) were measured every other hour at a rural location in south central Wisconsin (Devil's Lake State Park, WI, USA) between April 2003 and March 2004, and at a predominantly downwind urban site in southeastern Wisconsin (Milwaukee, WI, USA) between June 2004 and May 2005. Annual averages of GEM, PHg, and RGM at the urban site were statistically higher than those measured at the rural site. Pollution roses of GEM and reactive mercury (RM; the sum of PHg and RGM) at the rural and urban sites revealed the influences of point source emissions in surrounding counties that were consistent with the US EPA 1999 National Emission Inventory and the 2003-2005 US EPA Toxics Release Inventory. Source-receptor relationships at both sites were studied by quantifying the impacts of point sources on mercury concentrations. Time series of GEM, PHg, and RGM concentrations were sorted into two categories: time periods dominated by impacts from point sources, and time periods dominated by mercury from non-point sources. The analysis revealed average point source contributions to GEM, PHg, and RGM concentration measurements to be significant over the year-long studies. At the rural site, contributions to annual average concentrations were: GEM (2%; 0.04 ng m^-3) and RM (48%; 5.7 pg m^-3). At the urban site, contributions to annual average concentrations were: GEM (33%; 0.81 ng m^-3) and RM (64%; 13.8 pg m^-3). © The Royal Society of Chemistry.
INFLUENCE OF THE GALACTIC GRAVITATIONAL FIELD ON THE POSITIONAL ACCURACY OF EXTRAGALACTIC SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larchenkova, Tatiana I.; Lutovinov, Alexander A.; Lyskova, Natalya S.
We investigate the influence of random variations of the Galactic gravitational field on the apparent celestial positions of extragalactic sources. The basic statistical characteristics of a stochastic process (first-order moments, an autocorrelation function and a power spectral density) are used to describe a light ray deflection in a gravitational field of randomly moving point masses as a function of the source coordinates. We map a 2D distribution of the standard deviation of the angular shifts in positions of distant sources (including reference sources of the International Celestial Reference Frame) with respect to their true positions. For different Galactic matter distributions the standard deviation of the offset angle can reach several tens of μas (microarcseconds) toward the Galactic center, decreasing down to 4–6 μas at high galactic latitudes. The conditional standard deviation (“jitter”) of 2.5 μas is reached within 10 years at high galactic latitudes and within a few months toward the inner part of the Galaxy. The photometric microlensing events are not expected to be disturbed by astrometric random variations anywhere except the inner part of the Galaxy, as the Einstein–Chvolson times are typically much shorter than the jittering timescale. While the jitter of a single reference source can be up to dozens of μas over some reasonable observational time, using a sample of reference sources would reduce the error in relative astrometry. The obtained results can be used for estimating the physical upper limits on the time-dependent accuracy of astrometric measurements.
The effect of baryons in the cosmological lensing PDFs
NASA Astrophysics Data System (ADS)
Castro, Tiago; Quartin, Miguel; Giocoli, Carlo; Borgani, Stefano; Dolag, Klaus
2018-07-01
Observational cosmology is passing through a unique moment of grandeur with the amount of quality data growing fast. However, in order to better take advantage of this moment, data analysis tools have to keep up the pace. Understanding the effect of baryonic matter on the large-scale structure is one of the challenges to be faced in cosmology. In this work, we have thoroughly studied the effect of baryonic physics on different lensing statistics. Making use of the Magneticum Pathfinder suite of simulations, we show that the influence of luminous matter on the 1-point lensing statistics of point sources is significant, enhancing the probability of magnified objects with μ > 3 by a factor of 2 and the occurrence of multiple images by a factor of 5-500, depending on the source redshift and size. We also discuss the dependence of the lensing statistics on the angular resolution of sources. Our results and methodology were carefully tested to guarantee that our uncertainties are much smaller than the effects here presented.
SEARCHES FOR HIGH-ENERGY NEUTRINO EMISSION IN THE GALAXY WITH THE COMBINED ICECUBE-AMANDA DETECTOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbasi, R.; Ahlers, M.; Andeen, K.
2013-01-20
We report on searches for neutrino sources at energies above 200 GeV in the Northern sky of the Galactic plane, using the data collected by the South Pole neutrino telescope, IceCube, and AMANDA. The Galactic region considered in this work includes the local arm toward the Cygnus region and our closest approach to the Perseus Arm. The searches are based on the data collected between 2007 and 2009. During this time AMANDA was an integrated part of IceCube, which was still under construction and operated with 22 strings (2007-2008) and 40 strings (2008-2009) of optical modules deployed in the ice. By combining the advantages of the larger IceCube detector with the lower energy threshold of the more compact AMANDA detector, we obtain an improved sensitivity at energies below ~10 TeV with respect to previous searches. The analyses presented here are a scan for point sources within the Galactic plane, a search optimized for multiple and extended sources in the Cygnus region, which might be below the sensitivity of the point source scan, and studies of seven pre-selected neutrino source candidates. For one of them, Cygnus X-3, a time-dependent search for neutrino emission in coincidence with observed radio and X-ray flares has been performed. No evidence of a signal is found, and upper limits are reported for each of the searches. We investigate neutrino spectra proportional to E^-2 and E^-3 in order to cover the entire range of possible neutrino spectra. The steeply falling E^-3 neutrino spectrum can also be used to approximate neutrino energy spectra with energy cutoffs below 50 TeV since these result in a similar energy distribution of events in the detector.
For the region of the Galactic plane visible in the Northern sky, the 90% confidence level muon neutrino flux upper limits are in the range E^3 dN/dE ≈ 5.4-19.5 × 10^-11 TeV^2 cm^-2 s^-1 for point-like neutrino sources in the energy region 180.0 GeV-20.5 TeV. These represent the most stringent upper limits for soft-spectra neutrino sources within the Galaxy reported to date.
A Possible Magnetar Nature for IGR J16358-4726
NASA Technical Reports Server (NTRS)
Patel, S.; Zurita, J.; DelSanto, M.; Finger, M.; Koueliotou, C.; Eichler, D.; Gogus, E.; Ubertini, P.; Walter, R.; Woods, P.
2006-01-01
We present detailed spectral and timing analysis of the hard X-ray transient IGR J16358-4726 using multi-satellite archival observations. A study of the source flux time history over 6 years suggests that this transient's outbursts can occur at intervals of at most 1 year. Joint spectral fits using simultaneous Chandra/ACIS and INTEGRAL/ISGRI data reveal a spectrum well described by an absorbed cut-off power law model plus an Fe line. We detected the pulsations initially reported using Chandra/ACIS also in the INTEGRAL/ISGRI light curve and in subsequent XMM-Newton observations. Using the INTEGRAL data we identified a pulse spin up of 94 s (Ṗ = 1.6 × 10^-4), which strongly points to a neutron star nature for IGR J16358-4726. Assuming that the spin up is due to disc accretion, we estimate that the source magnetic field ranges between 10^13 and 10^15, depending on its distance, possibly supporting a magnetar nature for IGR J16358-4726.
NASA Technical Reports Server (NTRS)
Schlegel, E.; Norris, Jay P. (Technical Monitor)
2002-01-01
This project was awarded funding from the CGRO program to support ROSAT and ground-based observations of unidentified sources from data obtained by the EGRET instrument on the Compton Gamma-Ray Observatory. The critical items in the project are the individual ROSAT observations used to cover the 99% error circle of each unidentified EGRET source. Each error circle is a degree or larger in diameter. Each ROSAT field is about 30 arcmin in diameter. Hence, a number (>4) of ROSAT pointings must be obtained for each EGRET source to cover the field. The scheduling of ROSAT observations is carried out to maximize the efficiency of the total schedule. As a result, each pointing is broken into one or more sub-pointings of various exposure times. This project was awarded ROSAT observing time for four unidentified EGRET sources, summarized in the table. The column headings are defined as follows: 'Coverings' = number of observations to cover the error circle; 'SubPtg' = total number of sub-pointings to observe all of the coverings; 'Rec'd' = number of individual sub-pointings received to date; 'CompFlds' = number of individual coverings for which the requested complete exposure has been received. Processing of the data cannot occur until a complete exposure has been accumulated for each covering.
Time-dependent friction and the mechanics of stick-slip
Dieterich, J.H.
1978-01-01
Time-dependent increase of static friction is characteristic of rock friction under a variety of experimental circumstances. Data presented here show an analogous velocity-dependent effect. A theory of friction is proposed that establishes a common basis for static and sliding friction. Creep at points of contact causes increases in friction that are proportional to the logarithm of the time that the population of points of contact exists. For static friction that time is the time of stationary contact. For sliding friction the time of contact is determined by the critical displacement required to change the population of contacts and the slip velocity. An analysis of a one-dimensional spring and slider system shows that experimental observations establishing the transition from stable sliding to stick-slip to be a function of normal stress, stiffness and surface finish are a consequence of time-dependent friction. © 1978 Birkhäuser Verlag.
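The logarithmic contact-age dependence described above can be sketched in a few lines; the constants below are illustrative placeholders, not fitted values from the paper.

```python
import math

MU0, A, T_C = 0.6, 0.01, 1.0   # baseline friction, log slope, cutoff time (illustrative)
D_C = 5e-6                      # critical slip distance in meters (assumed)

def static_friction(t_hold_s):
    """Static friction grows with the log of stationary-contact time."""
    return MU0 + A * math.log(1.0 + t_hold_s / T_C)

def sliding_friction(v_m_per_s):
    """For sliding, the contact age is set by the critical displacement
    divided by the slip velocity, so slower sliding means older,
    stronger contacts."""
    return static_friction(D_C / v_m_per_s)
```

With these definitions the velocity dependence follows directly from the static law, which is the unifying point of the theory.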
Relationship between mass-flux reduction and source-zone mass removal: analysis of field data.
Difilippo, Erica L; Brusseau, Mark L
2008-05-26
The magnitude of contaminant mass-flux reduction associated with a specific amount of contaminant mass removed is a key consideration for evaluating the effectiveness of a source-zone remediation effort. Thus, there is great interest in characterizing, estimating, and predicting relationships between mass-flux reduction and mass removal. Published data collected for several field studies were examined to evaluate relationships between mass-flux reduction and source-zone mass removal. The studies analyzed herein represent a variety of source-zone architectures, immiscible-liquid compositions, and implemented remediation technologies. There are two general approaches to characterizing the mass-flux-reduction/mass-removal relationship: end-point analysis and time-continuous analysis. End-point analysis, based on comparing masses and mass fluxes measured before and after a source-zone remediation effort, was conducted for 21 remediation projects. Mass removals were greater than 60% for all but three of the studies. Mass-flux reductions ranging from slightly less than to slightly greater than one-to-one were observed for the majority of the sites. However, these single-snapshot characterizations are limited in that the antecedent behavior is indeterminate. Time-continuous analysis, based on continuous monitoring of mass removal and mass flux, was performed for two sites, for both of which data were obtained under water-flushing conditions. The reductions in mass flux were significantly different for the two sites (90% vs. approximately 8%) for similar mass removals (approximately 40%). These results illustrate the dependence of the mass-flux-reduction/mass-removal relationship on source-zone architecture and associated mass-transfer processes. Minimal mass-flux reduction was observed for a system wherein mass removal was relatively efficient (ideal mass-transfer and displacement).
Conversely, a significant degree of mass-flux reduction was observed for a site wherein mass removal was inefficient (non-ideal mass-transfer and displacement). The mass-flux-reduction/mass-removal relationship for the latter site exhibited a multi-step behavior, which cannot be predicted using some of the available simple estimation functions.
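One of the simple estimation functions alluded to above is a one-parameter power law relating fractional flux reduction to fractional mass removal. This is a sketch under that assumption; the exponent is a site-specific fitting parameter, not a value from the studies analyzed.

```python
def flux_reduction(mass_removed_frac, gamma=1.0):
    """Power-law estimator of fractional mass-flux reduction for a given
    fractional mass removal. gamma > 1 mimics inefficient removal (large
    early flux drop); gamma < 1 mimics efficient removal (little flux
    change until most of the mass is gone)."""
    return 1.0 - (1.0 - mass_removed_frac) ** gamma

# gamma = 1 reproduces the roughly one-to-one behavior seen at many sites
```

A single exponent cannot reproduce the multi-step behavior reported for the second site, which is the authors' point about the limits of such functions.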
Investigation of Finite Sources through Time Reversal
NASA Astrophysics Data System (ADS)
Kremers, Simon; Brietzke, Gilbert; Igel, Heiner; Larmat, Carene; Fichtner, Andreas; Johnson, Paul A.; Huang, Lianjie
2010-05-01
Under certain conditions time reversal is a promising method to determine earthquake source characteristics without any a-priori information (except the earth model and the data). It consists of injecting flipped-in-time records from seismic stations within the model to create an approximate reverse movie of wave propagation from which the location of the hypocenter and other information might be inferred. In this study, the backward propagation is performed numerically using a parallel cartesian spectral element code. Initial tests using point source moment tensors serve as control for the adaptability of the used wave propagation algorithm. After that we investigated the potential of time reversal to recover finite source characteristics (e.g., size of ruptured area, rupture velocity etc.). We used synthetic data from the SPICE kinematic source inversion blind test initiated to investigate the performance of current kinematic source inversion approaches (http://www.spice-rtn.org/library/valid). The synthetic data set attempts to reproduce the 2000 Tottori earthquake with 33 records close to the fault. We discuss the influence of various assumptions made on the source (e.g., origin time, hypocenter, fault location, etc.), adjoint source weighting (e.g., correct for epicentral distance) and structure (uncertainty in the velocity model) on the results of the time reversal process. We give an overview about the quality of focussing of the different wavefield properties (i.e., displacements, strains, rotations, energies). Additionally, the potential to recover source properties of multiple point sources at the same time is discussed.
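The core operation described above, flipping recorded traces in time before re-injection, plus a simple delay-and-stack focusing check, can be sketched as follows. The array layout and the use of precomputed travel-time delays are assumptions for illustration, not the spectral element machinery actually used.

```python
import numpy as np

def time_reverse(records):
    """Flip each recorded trace in time (records: n_stations x n_samples)."""
    return records[:, ::-1]

def backproject(records, delays_s, dt_s):
    """Stack time-reversed traces after shifting each by its travel-time
    delay to a trial source point; a coherent peak in the stack suggests
    focusing at that point."""
    rev = time_reverse(records)
    shifts = np.rint(np.asarray(delays_s) / dt_s).astype(int)
    stacked = np.zeros(rev.shape[1])
    for trace, s in zip(rev, shifts):
        stacked += np.roll(trace, -s)
    return stacked
```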
A Factorial Data Rate and Dwell Time Experiment in the National Transonic Facility
NASA Technical Reports Server (NTRS)
DeLoach, R.
2000-01-01
This report is an introductory tutorial on the application of formal experiment design methods to wind tunnel testing, for the benefit of aeronautical engineers with little formal experiment design training. It also describes the results of a study to determine whether increases in the sample rate and dwell time of the National Transonic Facility data system would result in significant changes in force and moment data. Increases in sample rate from 10 samples per second to 50 samples per second were examined, as were changes in dwell time from one second per data point to two seconds. These changes were examined for a representative aircraft model in a range of tunnel operating conditions defined by angles of attack from 0 to 3.8 degrees, total pressure from 15.0 psi to 24.1 psi, and Mach numbers from 0.52 to 0.82. No statistically significant effect was associated with the change in sample rate. The change in dwell time from one second to two seconds affected axial force measurements, and to a lesser degree normal force measurements. This dwell effect comprises a "rectification error" caused by incomplete cancellation of the positive and negative elements of certain low frequency dynamic components that are not rejected by the one-Hz low-pass filters of the data system. These low frequency effects may be due to tunnel circuit phenomena and other sources. The magnitude of the dwell effect depends on dynamic pressure, with angle of attack and Mach number influencing the strength of this dependence. An analysis is presented which suggests that the magnitude of the rectification error depends on the ratio of measurement dwell time to the period of the low-frequency dynamics, as well as the amplitude of the dynamics. The essential conclusion of this analysis is that extending the dwell time (or, equivalently, replicating short-dwell data points) reduces the rectification error.
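The rectification effect described above can be reproduced numerically: the mean of a slow sinusoid over a finite dwell does not cancel exactly, and lengthening the dwell shrinks the residual. The frequency and sample rate below are illustrative, not the facility's values.

```python
import numpy as np

def rectification_error(dwell_s, f_low_hz, fs=1000.0):
    """Absolute mean of a unit-amplitude low-frequency sinusoid averaged
    over a finite dwell; incomplete cancellation of positive and negative
    half-cycles leaves a bias that shrinks as dwell/period grows."""
    t = np.arange(0.0, dwell_s, 1.0 / fs)
    return abs(np.mean(np.sin(2.0 * np.pi * f_low_hz * t)))
```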
Reassessment of data used in setting exposure limits for hot particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baum, J.W.; Kaurin, D.G.
1991-05-01
A critical review and a reassessment of data reviewed in NCRP Report 106 on effects of "hot particles" on the skin of pigs, monkeys, and humans were made. Our analysis of the data of Forbes and Mikhail on effects from activated UC2 particles, ranging in diameter from 144 μm to 328 μm, led to the formulation of a new model for prediction of both the threshold for acute ulceration and for ulcer diameter. A dose of 27 Gy at a depth of 1.33 mm in tissue in this model will result in an acute ulcer with a diameter determined by the radius over which this dose (at 1.33-mm depth) extends. Application of the model to the Forbes-Mikhail data yielded a "threshold" (5% probability) of 6 × 10^9 beta particles from a point source on skin of mixed fission product beta particles, or about 10^10 beta particles from Sr-Y-90, since few of the Sr-90 beta particles reach this depth. The data of Hopewell et al. for their 1 mm Sr-Y-90 exposures were also analyzed with the above model and yielded a predicted threshold of 2 × 10^10 Sr-Y-90 beta particles for a point source on skin. Dosimetry values were employed in this latter analysis that are 3.3 times higher than previously reported for this source. An alternate interpretation of the Forbes and Mikhail data, derived from linear plots of the data, is that the threshold depends strongly on particle size, with the smaller particles yielding a much lower threshold and smaller minimum-size ulcer. Additional animal exposures are planned to distinguish between the above explanations. 17 refs., 3 figs., 3 tabs.
The social ecology of water in a Mumbai slum: failures in water quality, quantity, and reliability
2013-01-01
Background Urban slums in developing countries that are not recognized by the government often lack legal access to municipal water supplies. This results in the creation of insecure “informal” water distribution systems (i.e., community-run or private systems outside of the government’s purview) that may increase water-borne disease risk. We evaluate an informal water distribution system in a slum in Mumbai, India using commonly accepted health and social equity indicators. We also identify predictors of bacterial contamination of drinking water using logistic regression analysis. Methods Data were collected through two studies: the 2008 Baseline Needs Assessment survey of 959 households and the 2011 Seasonal Water Assessment, in which 229 samples were collected for water quality testing over three seasons. Water samples were collected in each season from the following points along the distribution system: motors that directly tap the municipal supply (i.e., “point-of-source” water), hoses going to slum lanes, and storage and drinking water containers from 21 households. Results Depending on season, households spend an average of 52 to 206 times more than the standard municipal charge of Indian rupees 2.25 (US dollars 0.04) per 1000 liters for water, and, in some seasons, 95% use less than the WHO-recommended minimum of 50 liters per capita per day. During the monsoon season, 50% of point-of-source water samples were contaminated. Despite a lack of point-of-source water contamination in other seasons, stored drinking water was contaminated in all seasons, with rates as high as 43% for E. coli and 76% for coliform bacteria. In the multivariate logistic regression analysis, monsoon and summer seasons were associated with significantly increased odds of drinking water contamination. Conclusions Our findings reveal severe deficiencies in water-related health and social equity indicators. 
All bacterial contamination of drinking water occurred due to post-source contamination during storage in the household, except during the monsoon season, when there was some point-of-source water contamination. This suggests that safe storage and household water treatment interventions may improve water quality in slums. Problems of exorbitant expense, inadequate quantity, and poor point-of-source quality can only be remedied by providing unrecognized slums with equitable access to municipal water supplies. PMID:23442300
Directional Limits on Persistent Gravitational Waves from Advanced LIGO's First Observing Run.
Abbott, B P; Abbott, R; Abbott, T D; Abernathy, M R; Acernese, F; Ackley, K; Adams, C; Adams, T; Addesso, P; Adhikari, R X; Adya, V B; Affeldt, C; Agathos, M; Agatsuma, K; Aggarwal, N; Aguiar, O D; Aiello, L; Ain, A; Ajith, P; Allen, B; Allocca, A; Altin, P A; Ananyeva, A; Anderson, S B; Anderson, W G; Appert, S; Arai, K; Araya, M C; Areeda, J S; Arnaud, N; Arun, K G; Ascenzi, S; Ashton, G; Ast, M; Aston, S M; Astone, P; Aufmuth, P; Aulbert, C; Avila-Alvarez, A; Babak, S; Bacon, P; Bader, M K M; Baker, P T; Baldaccini, F; Ballardin, G; Ballmer, S W; Barayoga, J C; Barclay, S E; Barish, B C; Barker, D; Barone, F; Barr, B; Barsotti, L; Barsuglia, M; Barta, D; Bartlett, J; Bartos, I; Bassiri, R; Basti, A; Batch, J C; Baune, C; Bavigadda, V; Bazzan, M; Beer, C; Bejger, M; Belahcene, I; Belgin, M; Bell, A S; Berger, B K; Bergmann, G; Berry, C P L; Bersanetti, D; Bertolini, A; Betzwieser, J; Bhagwat, S; Bhandare, R; Bilenko, I A; Billingsley, G; Billman, C R; Birch, J; Birney, R; Birnholtz, O; Biscans, S; Biscoveanu, A S; Bisht, A; Bitossi, M; Biwer, C; Bizouard, M A; Blackburn, J K; Blackman, J; Blair, C D; Blair, D G; Blair, R M; Bloemen, S; Bock, O; Boer, M; Bogaert, G; Bohe, A; Bondu, F; Bonnand, R; Boom, B A; Bork, R; Boschi, V; Bose, S; Bouffanais, Y; Bozzi, A; Bradaschia, C; Brady, P R; Braginsky, V B; Branchesi, M; Brau, J E; Briant, T; Brillet, A; Brinkmann, M; Brisson, V; Brockill, P; Broida, J E; Brooks, A F; Brown, D A; Brown, D D; Brown, N M; Brunett, S; Buchanan, C C; Buikema, A; Bulik, T; Bulten, H J; Buonanno, A; Buskulic, D; Buy, C; Byer, R L; Cabero, M; Cadonati, L; Cagnoli, G; Cahillane, C; Calderón Bustillo, J; Callister, T A; Calloni, E; Camp, J B; Campbell, W; Canepa, M; Cannon, K C; Cao, H; Cao, J; Capano, C D; Capocasa, E; Carbognani, F; Caride, S; Casanueva Diaz, J; Casentini, C; Caudill, S; Cavaglià, M; Cavalier, F; Cavalieri, R; Cella, G; Cepeda, C B; Cerboni Baiardi, L; Cerretani, G; Cesarini, E; Chamberlin, S J; Chan, M; Chao, S; Charlton, P; 
Chassande-Mottin, E; Cheeseboro, B D; Chen, H Y; Chen, Y; Cheng, H-P; Chincarini, A; Chiummo, A; Chmiel, T; Cho, H S; Cho, M; Chow, J H; Christensen, N; Chu, Q; Chua, A J K; Chua, S; Chung, S; Ciani, G; Clara, F; Clark, J A; Cleva, F; Cocchieri, C; Coccia, E; Cohadon, P-F; Colla, A; Collette, C G; Cominsky, L; Constancio, M; Conti, L; Cooper, S J; Corbitt, T R; Cornish, N; Corsi, A; Cortese, S; Costa, C A; Coughlin, E; Coughlin, M W; Coughlin, S B; Coulon, J-P; Countryman, S T; Couvares, P; Covas, P B; Cowan, E E; Coward, D M; Cowart, M J; Coyne, D C; Coyne, R; Creighton, J D E; Creighton, T D; Cripe, J; Crowder, S G; Cullen, T J; Cumming, A; Cunningham, L; Cuoco, E; Dal Canton, T; Danilishin, S L; D'Antonio, S; Danzmann, K; Dasgupta, A; Da Silva Costa, C F; Dattilo, V; Dave, I; Davier, M; Davies, G S; Davis, D; Daw, E J; Day, B; Day, R; De, S; DeBra, D; Debreczeni, G; Degallaix, J; De Laurentis, M; Deléglise, S; Del Pozzo, W; Denker, T; Dent, T; Dergachev, V; De Rosa, R; DeRosa, R T; DeSalvo, R; Devenson, J; Devine, R C; Dhurandhar, S; Díaz, M C; Di Fiore, L; Di Giovanni, M; Di Girolamo, T; Di Lieto, A; Di Pace, S; Di Palma, I; Di Virgilio, A; Doctor, Z; Dolique, V; Donovan, F; Dooley, K L; Doravari, S; Dorrington, I; Douglas, R; Dovale Álvarez, M; Downes, T P; Drago, M; Drever, R W P; Driggers, J C; Du, Z; Ducrot, M; Dwyer, S E; Edo, T B; Edwards, M C; Effler, A; Eggenstein, H-B; Ehrens, P; Eichholz, J; Eikenberry, S S; Essick, R C; Etienne, Z; Etzel, T; Evans, M; Evans, T M; Everett, R; Factourovich, M; Fafone, V; Fair, H; Fairhurst, S; Fan, X; Farinon, S; Farr, B; Farr, W M; Fauchon-Jones, E J; Favata, M; Fays, M; Fehrmann, H; Fejer, M M; Fernández Galiana, A; Ferrante, I; Ferreira, E C; Ferrini, F; Fidecaro, F; Fiori, I; Fiorucci, D; Fisher, R P; Flaminio, R; Fletcher, M; Fong, H; Forsyth, S S; Fournier, J-D; Frasca, S; Frasconi, F; Frei, Z; Freise, A; Frey, R; Frey, V; Fries, E M; Fritschel, P; Frolov, V V; Fulda, P; Fyffe, M; Gabbard, H; Gadre, B U; Gaebel, 
S M; Gair, J R; Gammaitoni, L; Gaonkar, S G; Garufi, F; Gaur, G; Gayathri, V; Gehrels, N; Gemme, G; Genin, E; Gennai, A; George, J; Gergely, L; Germain, V; Ghonge, S; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S; Giaime, J A; Giardina, K D; Giazotto, A; Gill, K; Glaefke, A; Goetz, E; Goetz, R; Gondan, L; González, G; Gonzalez Castro, J M; Gopakumar, A; Gorodetsky, M L; Gossan, S E; Gosselin, M; Gouaty, R; Grado, A; Graef, C; Granata, M; Grant, A; Gras, S; Gray, C; Greco, G; Green, A C; Groot, P; Grote, H; Grunewald, S; Guidi, G M; Guo, X; Gupta, A; Gupta, M K; Gushwa, K E; Gustafson, E K; Gustafson, R; Hacker, J J; Hall, B R; Hall, E D; Hammond, G; Haney, M; Hanke, M M; Hanks, J; Hanna, C; Hannam, M D; Hanson, J; Hardwick, T; Harms, J; Harry, G M; Harry, I W; Hart, M J; Hartman, M T; Haster, C-J; Haughian, K; Healy, J; Heidmann, A; Heintze, M C; Heitmann, H; Hello, P; Hemming, G; Hendry, M; Heng, I S; Hennig, J; Henry, J; Heptonstall, A W; Heurs, M; Hild, S; Hoak, D; Hofman, D; Holt, K; Holz, D E; Hopkins, P; Hough, J; Houston, E A; Howell, E J; Hu, Y M; Huerta, E A; Huet, D; Hughey, B; Husa, S; Huttner, S H; Huynh-Dinh, T; Indik, N; Ingram, D R; Inta, R; Isa, H N; Isac, J-M; Isi, M; Isogai, T; Iyer, B R; Izumi, K; Jacqmin, T; Jani, K; Jaranowski, P; Jawahar, S; Jiménez-Forteza, F; Johnson, W W; Jones, D I; Jones, R; Jonker, R J G; Ju, L; Junker, J; Kalaghatgi, C V; Kalogera, V; Kandhasamy, S; Kang, G; Kanner, J B; Karki, S; Karvinen, K S; Kasprzack, M; Katsavounidis, E; Katzman, W; Kaufer, S; Kaur, T; Kawabe, K; Kéfélian, F; Keitel, D; Kelley, D B; Kennedy, R; Key, J S; Khalili, F Y; Khan, I; Khan, S; Khan, Z; Khazanov, E A; Kijbunchoo, N; Kim, Chunglee; Kim, J C; Kim, Whansun; Kim, W; Kim, Y-M; Kimbrell, S J; King, E J; King, P J; Kirchhoff, R; Kissel, J S; Klein, B; Kleybolte, L; Klimenko, S; Koch, P; Koehlenbeck, S M; Koley, S; Kondrashov, V; Kontos, A; Korobko, M; Korth, W Z; Kowalska, I; Kozak, D B; Krämer, C; Kringel, V; Królak, A; Kuehn, G; Kumar, P; Kumar, 
R; Kuo, L; Kutynia, A; Lackey, B D; Landry, M; Lang, R N; Lange, J; Lantz, B; Lanza, R K; Lartaux-Vollard, A; Lasky, P D; Laxen, M; Lazzarini, A; Lazzaro, C; Leaci, P; Leavey, S; Lebigot, E O; Lee, C H; Lee, H K; Lee, H M; Lee, K; Lehmann, J; Lenon, A; Leonardi, M; Leong, J R; Leroy, N; Letendre, N; Levin, Y; Li, T G F; Libson, A; Littenberg, T B; Liu, J; Lockerbie, N A; Lombardi, A L; London, L T; Lord, J E; Lorenzini, M; Loriette, V; Lormand, M; Losurdo, G; Lough, J D; Lousto, C O; Lovelace, G; Lück, H; Lundgren, A P; Lynch, R; Ma, Y; Macfoy, S; Machenschalk, B; MacInnis, M; Macleod, D M; Magaña-Sandoval, F; Majorana, E; Maksimovic, I; Malvezzi, V; Man, N; Mandic, V; Mangano, V; Mansell, G L; Manske, M; Mantovani, M; Marchesoni, F; Marion, F; Márka, S; Márka, Z; Markosyan, A S; Maros, E; Martelli, F; Martellini, L; Martin, I W; Martynov, D V; Mason, K; Masserot, A; Massinger, T J; Masso-Reid, M; Mastrogiovanni, S; Matas, A; Matichard, F; Matone, L; Mavalvala, N; Mazumder, N; McCarthy, R; McClelland, D E; McCormick, S; McGrath, C; McGuire, S C; McIntyre, G; McIver, J; McManus, D J; McRae, T; McWilliams, S T; Meacher, D; Meadors, G D; Meidam, J; Melatos, A; Mendell, G; Mendoza-Gandara, D; Mercer, R A; Merilh, E L; Merzougui, M; Meshkov, S; Messenger, C; Messick, C; Metzdorff, R; Meyers, P M; Mezzani, F; Miao, H; Michel, C; Middleton, H; Mikhailov, E E; Milano, L; Miller, A L; Miller, A; Miller, B B; Miller, J; Millhouse, M; Minenkov, Y; Ming, J; Mirshekari, S; Mishra, C; Mitra, S; Mitrofanov, V P; Mitselmakher, G; Mittleman, R; Moggi, A; Mohan, M; Mohapatra, S R P; Montani, M; Moore, B C; Moore, C J; Moraru, D; Moreno, G; Morriss, S R; Mours, B; Mow-Lowry, C M; Mueller, G; Muir, A W; Mukherjee, Arunava; Mukherjee, D; Mukherjee, S; Mukund, N; Mullavey, A; Munch, J; Muniz, E A M; Murray, P G; Mytidis, A; Napier, K; Nardecchia, I; Naticchioni, L; Nelemans, G; Nelson, T J N; Neri, M; Nery, M; Neunzert, A; Newport, J M; Newton, G; Nguyen, T T; Nielsen, A B; Nissanke, S; 
Nitz, A; Noack, A; Nocera, F; Nolting, D; Normandin, M E N; Nuttall, L K; Oberling, J; Ochsner, E; Oelker, E; Ogin, G H; Oh, J J; Oh, S H; Ohme, F; Oliver, M; Oppermann, P; Oram, Richard J; O'Reilly, B; O'Shaughnessy, R; Ottaway, D J; Overmier, H; Owen, B J; Pace, A E; Page, J; Pai, A; Pai, S A; Palamos, J R; Palashov, O; Palomba, C; Pal-Singh, A; Pan, H; Pankow, C; Pannarale, F; Pant, B C; Paoletti, F; Paoli, A; Papa, M A; Paris, H R; Parker, W; Pascucci, D; Pasqualetti, A; Passaquieti, R; Passuello, D; Patricelli, B; Pearlstone, B L; Pedraza, M; Pedurand, R; Pekowsky, L; Pele, A; Penn, S; Perez, C J; Perreca, A; Perri, L M; Pfeiffer, H P; Phelps, M; Piccinni, O J; Pichot, M; Piergiovanni, F; Pierro, V; Pillant, G; Pinard, L; Pinto, I M; Pitkin, M; Poe, M; Poggiani, R; Popolizio, P; Post, A; Powell, J; Prasad, J; Pratt, J W W; Predoi, V; Prestegard, T; Prijatelj, M; Principe, M; Privitera, S; Prodi, G A; Prokhorov, L G; Puncken, O; Punturo, M; Puppo, P; Pürrer, M; Qi, H; Qin, J; Qiu, S; Quetschke, V; Quintero, E A; Quitzow-James, R; Raab, F J; Rabeling, D S; Radkins, H; Raffai, P; Raja, S; Rajan, C; Rakhmanov, M; Rapagnani, P; Raymond, V; Razzano, M; Re, V; Read, J; Regimbau, T; Rei, L; Reid, S; Reitze, D H; Rew, H; Reyes, S D; Rhoades, E; Ricci, F; Riles, K; Rizzo, M; Robertson, N A; Robie, R; Robinet, F; Rocchi, A; Rolland, L; Rollins, J G; Roma, V J; Romano, J D; Romano, R; Romie, J H; Rosińska, D; Rowan, S; Rüdiger, A; Ruggi, P; Ryan, K; Sachdev, S; Sadecki, T; Sadeghian, L; Sakellariadou, M; Salconi, L; Saleem, M; Salemi, F; Samajdar, A; Sammut, L; Sampson, L M; Sanchez, E J; Sandberg, V; Sanders, J R; Sassolas, B; Sathyaprakash, B S; Saulson, P R; Sauter, O; Savage, R L; Sawadsky, A; Schale, P; Scheuer, J; Schlassa, S; Schmidt, E; Schmidt, J; Schmidt, P; Schnabel, R; Schofield, R M S; Schönbeck, A; Schreiber, E; Schuette, D; Schutz, B F; Schwalbe, S G; Scott, J; Scott, S M; Sellers, D; Sengupta, A S; Sentenac, D; Sequino, V; Sergeev, A; Setyawati, Y; 
Shaddock, D A; Shaffer, T J; Shahriar, M S; Shapiro, B; Shawhan, P; Sheperd, A; Shoemaker, D H; Shoemaker, D M; Siellez, K; Siemens, X; Sieniawska, M; Sigg, D; Silva, A D; Singer, A; Singer, L P; Singh, A; Singh, R; Singhal, A; Sintes, A M; Slagmolen, B J J; Smith, B; Smith, J R; Smith, R J E; Son, E J; Sorazu, B; Sorrentino, F; Souradeep, T; Spencer, A P; Srivastava, A K; Staley, A; Steinke, M; Steinlechner, J; Steinlechner, S; Steinmeyer, D; Stephens, B C; Stevenson, S P; Stone, R; Strain, K A; Straniero, N; Stratta, G; Strigin, S E; Sturani, R; Stuver, A L; Summerscales, T Z; Sun, L; Sunil, S; Sutton, P J; Swinkels, B L; Szczepańczyk, M J; Tacca, M; Talukder, D; Tanner, D B; Tao, D; Tápai, M; Taracchini, A; Taylor, R; Theeg, T; Thomas, E G; Thomas, M; Thomas, P; Thorne, K A; Thrane, E; Tippens, T; Tiwari, S; Tiwari, V; Tokmakov, K V; Toland, K; Tomlinson, C; Tonelli, M; Tornasi, Z; Torrie, C I; Töyrä, D; Travasso, F; Traylor, G; Trifirò, D; Trinastic, J; Tringali, M C; Trozzo, L; Tse, M; Tso, R; Turconi, M; Tuyenbayev, D; Ugolini, D; Unnikrishnan, C S; Urban, A L; Usman, S A; Vahlbruch, H; Vajente, G; Valdes, G; van Bakel, N; van Beuzekom, M; van den Brand, J F J; Van Den Broeck, C; Vander-Hyde, D C; van der Schaaf, L; van Heijningen, J V; van Veggel, A A; Vardaro, M; Varma, V; Vass, S; Vasúth, M; Vecchio, A; Vedovato, G; Veitch, J; Veitch, P J; Venkateswara, K; Venugopalan, G; Verkindt, D; Vetrano, F; Viceré, A; Viets, A D; Vinciguerra, S; Vine, D J; Vinet, J-Y; Vitale, S; Vo, T; Vocca, H; Vorvick, C; Voss, D V; Vousden, W D; Vyatchanin, S P; Wade, A R; Wade, L E; Wade, M; Walker, M; Wallace, L; Walsh, S; Wang, G; Wang, H; Wang, M; Wang, Y; Ward, R L; Warner, J; Was, M; Watchi, J; Weaver, B; Wei, L-W; Weinert, M; Weinstein, A J; Weiss, R; Wen, L; Weßels, P; Westphal, T; Wette, K; Whelan, J T; Whiting, B F; Whittle, C; Williams, D; Williams, R D; Williamson, A R; Willis, J L; Willke, B; Wimmer, M H; Winkler, W; Wipf, C C; Wittel, H; Woan, G; Woehler, J; Worden, 
J; Wright, J L; Wu, D S; Wu, G; Yam, W; Yamamoto, H; Yancey, C C; Yap, M J; Yu, Hang; Yu, Haocun; Yvert, M; Zadrożny, A; Zangrando, L; Zanolin, M; Zendri, J-P; Zevin, M; Zhang, L; Zhang, M; Zhang, T; Zhang, Y; Zhao, C; Zhou, M; Zhou, Z; Zhu, S J; Zhu, X J; Zucker, M E; Zweizig, J
2017-03-24
We employ gravitational-wave radiometry to map the stochastic gravitational wave background expected from a variety of contributing mechanisms and test the assumption of isotropy using data from the Advanced Laser Interferometer Gravitational Wave Observatory's (aLIGO) first observing run. We also search for persistent gravitational waves from point sources with only minimal assumptions over the 20-1726 Hz frequency band. Finding no evidence of gravitational waves from either point sources or a stochastic background, we set limits at 90% confidence. For broadband point sources, we report upper limits on the gravitational wave energy flux per unit frequency in the range F_{α,Θ}(f)<(0.1-56)×10^{-8} erg cm^{-2} s^{-1} Hz^{-1}(f/25 Hz)^{α-1} depending on the sky location Θ and the spectral power index α. For extended sources, we report upper limits on the fractional gravitational wave energy density required to close the Universe of Ω(f,Θ)<(0.39-7.6)×10^{-8} sr^{-1}(f/25 Hz)^{α} depending on Θ and α. Directed searches for narrowband gravitational waves from astrophysically interesting objects (Scorpius X-1, Supernova 1987 A, and the Galactic Center) yield median frequency-dependent limits on strain amplitude of h_{0}<(6.7,5.5, and 7.0)×10^{-25}, respectively, at the most sensitive detector frequencies between 130-175 Hz. This represents a mean improvement of a factor of 2 across the band compared to previous searches of this kind for these sky locations, considering the different quantities of strain constrained in each case.
X-ray Point Source Populations in Spiral and Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Weaver, K.; Ptak, A.; Strickland, D.
2001-12-01
In the years of the Einstein and ASCA satellites, it was known that the total hard X-ray luminosity from non-AGN galaxies was fairly well correlated with the total blue luminosity. However, the origin of this hard component was not well understood. Some possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10^7 K gas (especially in elliptical galaxies), or even an active nucleus. Now, for the first time, we know from Chandra images that a significant amount of the total hard X-ray emission comes from individual X-ray point sources. We present here spatial and spectral analyses of Chandra data for X-ray point sources in a sample of ~40 galaxies, including both spiral galaxies (starbursts and non-starbursts) and elliptical galaxies. We shall discuss the relationship between the X-ray point source population and the properties of the host galaxies. We show that the slopes of the point-source X-ray luminosity functions are different for different host galaxy types and discuss possible reasons why. We also present detailed X-ray spectral analyses of several of the most luminous X-ray point sources (i.e., IXOs, a.k.a. ULXs), and discuss various scenarios for the origin of the X-ray point sources.
NASA Astrophysics Data System (ADS)
Reisenfeld, D. B.; Bzowski, M.; Funsten, H. O.; Janzen, P. H.; Kubiak, M. A.; McComas, D. J.; Schwadron, N.; Sokol, J. M.
2017-12-01
The IBEX mission has shown that variations in the ENA flux from the outer heliosphere are associated with the solar cycle. In particular, there is a good correlation between the dynamic pressure of the outbound solar wind and variations in the observed IBEX ENA flux (McComas et al, 2017; Reisenfeld et al., 2016). There is, of course, a time difference between observations of the outbound SW and the heliospheric ENAs with which they correlate, ranging from approximately two to four years, depending on ENA energy and look direction. In this study, we use this time difference as a means of "sounding" the heliosheath, that is, finding the average distance to the ENA source region in a particular direction. We use data from the first seven years of the IBEX mission. As each point in the sky is sampled once every six months, this gives us a time series of 14 points per look direction on which to time correlate. Fluxes are transformed from the spacecraft frame into a heliospheric inertial frame to remove the effects of spacecraft/Earth motion. Fluxes are also corrected for ENA extinction due to charge exchange. To improve statistics, we divide the sky into "macropixels" spanning 30 degrees in longitude and varying ranges of latitude to maintain comparable counting statistics per pixel. In calculating the response time, we account for the varying speed of the outbound solar wind by using a time and latitude dependent set of solar wind speeds derived from interplanetary scintillation data (Sokol et al. 2015). Consistent with heliospheric models, we determine the shortest distance to the heliopause is in the nose direction, with a flaring toward the flanks and poles.
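The "sounding" idea above reduces to a two-leg travel-time relation: the observed delay is the sum of the outbound solar-wind transit and the inbound ENA transit, so the delay fixes the distance. A sketch with constant, illustrative speeds (the study actually uses time- and latitude-dependent solar wind speeds from interplanetary scintillation data):

```python
AU_KM = 1.496e8    # kilometers per astronomical unit
YEAR_S = 3.156e7   # seconds per year

def source_distance_au(delay_years, v_sw_kms=450.0, v_ena_kms=100.0):
    """Average distance to the ENA source region from the SW-to-ENA time
    delay, assuming constant speeds on both legs (values illustrative;
    the ENA speed depends on the energy passband)."""
    delay_s = delay_years * YEAR_S
    d_km = delay_s / (1.0 / v_sw_kms + 1.0 / v_ena_kms)
    return d_km / AU_KM
```

With these placeholder speeds a two-to-four-year delay maps to tens of AU, consistent with a heliosheath source region.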
NASA Astrophysics Data System (ADS)
Rau, U.; Bhatnagar, S.; Owen, F. N.
2016-11-01
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
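For reference, the spectral index that the wideband imaging methods above try to recover is defined from intensities at two frequencies by I ∝ ν^α; a minimal two-point estimator is:

```python
import math

def spectral_index(i1, i2, nu1, nu2):
    """Two-point spectral index alpha, assuming I proportional to nu**alpha."""
    return math.log(i1 / i2) / math.log(nu1 / nu2)
```

Methods such as MT-MFS fit this index jointly across the band rather than from two snapshots, which is why their errors on faint sources are systematically lower.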
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopper, Seth; Evans, Charles R.
2010-10-15
We calculate the gravitational perturbations produced by a small mass in eccentric orbit about a much more massive Schwarzschild black hole and use the numerically computed perturbations to solve for the metric. The calculations are initially made in the frequency domain and provide Fourier-harmonic modes for the gauge-invariant master functions that satisfy inhomogeneous versions of the Regge-Wheeler and Zerilli equations. These gravitational master equations have specific singular sources containing both delta function and derivative-of-delta function terms. We demonstrate in this paper successful application of the method of extended homogeneous solutions, developed recently by Barack, Ori, and Sago, to handle source terms of this type. The method allows transformation back to the time domain, with exponential convergence of the partial mode sums that represent the field. This rapid convergence holds even in the region of r traversed by the point mass and includes the time-dependent location of the point mass itself. We present numerical results of mode calculations for certain orbital parameters, including highly accurate energy and angular momentum fluxes at infinity and at the black hole event horizon. We then address the issue of reconstructing the metric perturbation amplitudes from the master functions, the latter being weak solutions of a particular form to the wave equations. The spherical harmonic amplitudes that represent the metric in Regge-Wheeler gauge can themselves be viewed as weak solutions. They are in general a combination of (1) two differentiable solutions that adjoin at the instantaneous location of the point mass (a result that has order of continuity C^-1 typically) and (2) (in some cases) a delta function distribution term with a computable time-dependent amplitude.
Interpreting the Dependence of Mutation Rates on Age and Time
Gao, Ziyue; Wyman, Minyoung J.; Sella, Guy; Przeworski, Molly
2016-01-01
Mutations can originate from the chance misincorporation of nucleotides during DNA replication or from DNA lesions that arise between replication cycles and are not repaired correctly. We introduce a model that relates the source of mutations to their accumulation with cell divisions, providing a framework for understanding how mutation rates depend on sex, age, and cell division rate. We show that the accrual of mutations should track cell divisions not only when mutations are replicative in origin but also when they are non-replicative and repaired efficiently. One implication is that observations from diverse fields that to date have been interpreted as pointing to a replicative origin of most mutations could instead reflect the accumulation of mutations arising from endogenous reactions or exogenous mutagens. We further find that only mutations that arise from inefficiently repaired lesions will accrue according to absolute time; thus, unless life history traits co-vary, the phylogenetic “molecular clock” should not be expected to run steadily across species. PMID:26761240
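The model's central distinction, between mutations that track cell divisions and mutations that track absolute time, can be sketched as a toy linear accrual model (the parameter names and values below are hypothetical illustrations, not quantities from the paper):

```python
def expected_mutations(divisions, years, mu_repl, mu_lesion_fast, mu_lesion_slow):
    # Replicative errors (mu_repl per division) and efficiently repaired
    # lesions (mu_lesion_fast per division) both scale with the number of
    # cell divisions; only inefficiently repaired lesions (mu_lesion_slow
    # per year) accrue with absolute time -- the abstract's caveat about
    # the phylogenetic "molecular clock".
    return (mu_repl + mu_lesion_fast) * divisions + mu_lesion_slow * years

# Illustrative lineage: 100 divisions over 30 years
m = expected_mutations(divisions=100, years=30,
                       mu_repl=0.1, mu_lesion_fast=0.05, mu_lesion_slow=0.2)
```

Under this sketch, a division-count observable cannot separate the first two terms, which is why division-tracking data alone cannot establish a replicative origin.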
NASA Astrophysics Data System (ADS)
Quarles, C. A.; Sheffield, Thomas; Stacy, Scott; Yang, Chun
2009-03-01
The uniformity of rubber-carbon black composite materials has been investigated with positron Doppler Broadening Spectroscopy (DBS). The number of grams of carbon black (CB) mixed into one hundred grams of rubber, phr, is used to characterize a sample. A typical concentration for rubber in tires is 50 phr. The S parameter measured by DBS has been found to depend on the phr of the sample as well as the type of rubber and carbon black. The variation in carbon black concentration within a surface area of about 5 mm diameter can be measured by moving a standard Na-22 or Ge-68 positron source over an extended sample. The precision of the concentration measurement depends on the dwell time at a point on the sample. The time required to determine uniformity over an extended sample can be reduced by running with much higher counting rate than is typical in DBS and correcting for the systematic variation of S parameter with counting rate. Variation in CB concentration with mixing time at the level of about 0.5% has been observed.
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
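The model-selection step described above can be sketched with the standard least-squares form of the Akaike information criterion, AIC = 2k + n·ln(RSS/n), where the model with the lower value is preferred. The residuals and parameter counts below are illustrative only, not values from the W-phase implementation:

```python
import math

def aic_least_squares(rss, n_obs, k_params):
    # AIC for a Gaussian least-squares fit: 2k + n * ln(RSS / n).
    # Lower is better; the 2k term penalizes extra free parameters.
    return 2 * k_params + n_obs * math.log(rss / n_obs)

# Hypothetical misfits: the double-source model halves the residual but
# spends extra parameters on the second point source.
n = 500
aic_single = aic_least_squares(rss=40.0, n_obs=n, k_params=6)
aic_double = aic_least_squares(rss=20.0, n_obs=n, k_params=16)
prefer_double = aic_double < aic_single
```

The penalty term is what keeps the two-source model from being selected automatically: a second source must reduce the misfit enough to pay for its added parameters.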
Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B; Tasnimuzzaman, Md; Nordland, Andreas; Begum, Anowara; Jensen, Peter K M
2018-01-01
Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae ( V. cholerae ) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profile using molecular methods of a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from "point-of-drinking" and "source" in 477 study households in routine visits at 6 week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources in a 7 day interval showed significantly higher odds ( P < 0.05) of V. cholerae presence in point-of-drinking compared to source [OR = 17.24 (95% CI = 7.14-42.89)] water. Based on the 7 day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds ( p < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85-29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19-18.79)] in source water samples than in point-of-drinking water samples. 
Contamination of water at the point-of-drinking is thus less likely to depend on contamination at the water source. Hygiene education interventions and programs should therefore focus on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera.
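The reported odds ratios with 95% confidence intervals have the standard Wald form, OR = ad/bc with CI = exp(ln OR ± 1.96·SE). A minimal sketch follows; the 2×2 cell counts are invented for illustration, since the paper's raw cells are not reproduced here:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table: a = exposed positive, b = exposed negative,
    #            c = unexposed positive, d = unexposed negative.
    or_ = (a * d) / (b * c)
    # Wald standard error of ln(OR), then back-transform the CI limits.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts only (the study reports OR = 17.24, CI 7.14-42.89
# for point-of-drinking vs. source positivity in the 7-day-interval data).
or_, lo, hi = odds_ratio_ci(30, 70, 5, 95)
```

An interval whose lower limit stays above 1, as in the study's results, is what supports the claim of significantly higher odds.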
LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources
NASA Astrophysics Data System (ADS)
Pan, Hanjie; Simeoni, Matthieu; Hurley, Paul; Blu, Thierry; Vetterli, Martin
2017-12-01
Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point-sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims: The aims were (i) to adapt FRI to radio astronomy, (ii) verify it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) show that sources can be found using less data than would otherwise be required to find them, and (iv) show that FRI does not lead to an augmented rate of false positives. Methods: We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results: We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. 
In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a reconstruction quality comparable to a conventional method. The achieved angular resolution is higher than the perceived instrument resolution, and very close sources can be reliably distinguished. The proposed approach has cubic complexity in the total number (typically around a few thousand) of uniform Fourier data of the sky image estimated from the reconstruction. It is also demonstrated that the method is robust to the presence of extended sources, and that false positives can be addressed by choosing an adequate model order to match the noise level.
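The gridless principle behind FRI is visible in its simplest K = 1 case: a single point source's off-grid location sits in the phase ratio of consecutive Fourier samples, with no pixel grid involved. The sketch below is a toy illustration of that principle, not the LEAP algorithm:

```python
import cmath
import math

def locate_single_source(samples):
    # FRI toy case, K = 1: samples x[m] = a * exp(-2j*pi*m*t0) at
    # consecutive integer frequencies m. The ratio of consecutive samples
    # is exp(-2j*pi*t0), so t0 is recovered exactly, off any grid.
    u = samples[1] / samples[0]
    t0 = -cmath.phase(u) / (2 * math.pi)
    return t0 % 1.0, abs(samples[0])

# Off-grid source at t0 = 0.3141 (no pixel center there) with amplitude 2.5
t0_true, a_true = 0.3141, 2.5
x = [a_true * cmath.exp(-2j * math.pi * m * t0_true) for m in range(3)]
t0_est, a_est = locate_single_source(x)
```

For K sources the same idea generalizes to an annihilating filter whose roots encode the K locations, which is what makes recovery depend only on the number of measurements and sources rather than on a grid.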
VizieR Online Data Catalog: First Fermi-LAT Inner Galaxy point source catalog (Ajello+, 2016)
NASA Astrophysics Data System (ADS)
Ajello, M.; Albert, A.; Atwood, W. B.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bissaldi, E.; Blandford, R. D.; Bloom, E. D.; Bonino, R.; Bottacini, E.; Brandt, T. J.; Bregeon, J.; Bruel, P.; Buehler, R.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caputo, R.; Caragiulo, M.; Caraveo, P. A.; Cecchi, C.; Chekhtman, A.; Chiang, J.; Chiaro, G.; Ciprini, S.; Cohen-Tanugi, J.; Cominsky, L. R.; Conrad, J.; Cutini, S.; D'Ammando, F.; de Angelis, A.; de Palma, F.; Desiante, R.; di Venere, L.; Drell, P. S.; Favuzzi, C.; Ferrara, E. C.; Fusco, P.; Gargano, F.; Gasparrini, D.; Giglietto, N.; Giommi, P.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Gomez-Vargas, G. A.; Grenier, I. A.; Guiriec, S.; Gustafsson, M.; Harding, A. K.; Hewitt, J. W.; Hill, A. B.; Horan, D.; Jogler, T.; Johannesson, G.; Johnson, A. S.; Kamae, T.; Karwin, C.; Knodlseder, J.; Kuss, M.; Larsson, S.; Latronico, L.; Li, J.; Li, L.; Longo, F.; Loparco, F.; Lovellette, M. N.; Lubrano, P.; Magill, J.; Maldera, S.; Malyshev, D.; Manfreda, A.; Mayer, M.; Mazziotta, M. N.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Moiseev, A. A.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Nuss, E.; Ohno, M.; Ohsugi, T.; Omodei, N.; Orlando, E.; Ormes, J. F.; Paneque, D.; Pesce-Rollins, M.; Piron, F.; Pivato, G.; Porter, T. A.; Raino, S.; Rando, R.; Razzano, M.; Reimer, A.; Reimer, O.; Ritz, S.; Sanchez-Conde, M.; Parkinson, P. M. S.; Sgro, C.; Siskind, E. J.; Smith, D. A.; Spada, F.; Spandre, G.; Spinelli, P.; Suson, D. J.; Tajima, H.; Takahashi, H.; Thayer, J. B.; Torres, D. F.; Tosti, G.; Troja, E.; Uchiyama, Y.; Vianello, G.; Winer, B. L.; Wood, K. S.; Zaharijas, G.; Zimmer, S.
2018-01-01
The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission toward the Galactic center (GC) in high-energy γ-rays. This paper describes the analysis of data taken during the first 62 months of the mission in the energy range 1-100GeV from a 15°x15° region about the direction of the GC. Specialized interstellar emission models (IEMs) are constructed to enable the separation of the γ-ray emissions produced by cosmic ray particles interacting with the interstellar gas and radiation fields in the Milky Way into that from the inner ~1kpc surrounding the GC, and that from the rest of the Galaxy. A catalog of point sources for the 15°x15° region is self-consistently constructed using these IEMs: the First Fermi-LAT Inner Galaxy Point Source Catalog (1FIG). The spatial locations, fluxes, and spectral properties of the 1FIG sources are presented, and compared with γ-ray point sources over the same region taken from existing catalogs. After subtracting the interstellar emission and point-source contributions a residual is found. If templates that peak toward the GC are used to model the positive residual the agreement with the data improves, but none of the additional templates tried account for all of its spatial structure. The spectrum of the positive residual modeled with these templates has a strong dependence on the choice of IEM. (2 data files).
Backscattering of sound from targets in an Airy caustic formed by a curved reflecting surface
NASA Astrophysics Data System (ADS)
Dzikowicz, Benjamin Robert
The focusing of a caustic associated with the reflection of a locally curved sea floor or surface affects the scattering of sound by underwater targets. The most elementary caustic formed when sound reflects off a naturally curved surface is an Airy caustic. The case of a spherical target is examined here. With a point source acting also as a receiver, a point target lying in a shadow region returns only one echo directly from the target. When the target is on the Airy caustic, there are two echoes: one path is directly to the target and the other focuses off the curved surface. Echoes may be focused in both directions, the doubly focused case being the largest and the latest echo. With the target in the lit region, these different paths produce multiple echoes. For a finite-sized sphere near an Airy caustic, all these echoes are manifest, but they occur at shifted target positions. Echoes of tone bursts reflecting only once overlap and interfere with each other, as do those reflecting twice. Catastrophe theory is used to analyze the echo amplitudes arising from these overlaps. The echo pressure for single reflections is shown to have a dependence on target position described by an Airy function for both a point and a finite target. With double focusing, this dependence is the square of an Airy function for a point target. With a finite-sized target (as in the experiment), this becomes a hyperbolic umbilic catastrophe integral with symmetric arguments. The arguments of each of these functions are derived from only the relative echo times of a transient pulse. Transient echo times are calculated using a numerical ray finding technique. Experiment confirms the predicted merging of transient echoes in the time domain, as well as the Airy and hyperbolic umbilic diffraction integral amplitudes for a tone burst. This method allows targets to be observed at greater distances in the presence of a focusing surface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kapahi, V.K.; Kulkarni, V.K.
1990-05-01
VLA observations of a complete subset of the Leiden-Berkeley Deep Survey sources that have S(1.4 GHz) greater than 10 mJy and are not optically identified down to F=22 mag are reported. By comparing the spectral and structural properties of the sources with samples from the literature, an attempt was made to disentangle the luminosity and redshift dependence of the spectral indices of extended emission in radio galaxies and of the incidence of compact steep-spectrum sources. It is found that the fraction of compact sources among those with a steep spectrum is related primarily to redshift, being much larger at high redshifts for sources of similar radio luminosity. Only a weak and marginally significant dependence of spectral indices of the extended sources on luminosity and redshift is found in samples selected at 1.4 and 2.7 GHz. It is pointed out that the much stronger correlation of spectral indices with luminosity may arise partly from spectral curvature, and partly from the preferential inclusion of very steep-spectrum sources at high redshift in low-frequency surveys. 54 refs.
USDA-ARS?s Scientific Manuscript database
AnnAGNPS (Annualized Agricultural Non-Point Source Pollution Model) is a system of computer models developed to predict non-point source pollutant loadings within agricultural watersheds. It contains a daily time step distributed parameter continuous simulation surface runoff model designed to assis...
This paper presents a technique for determining the trace gas emission rate from a point source. The technique was tested using data from controlled methane release experiments and from measurement downwind of a natural gas production facility in Wyoming. Concentration measuremen...
Temperature dependence of long coherence times of oxide charge qubits.
Dey, A; Yarlagadda, S
2018-02-22
The ability to maintain coherence and control in a qubit is a major requirement for quantum computation. We show theoretically that long coherence times can be achieved at easily accessible temperatures (such as the boiling point of liquid helium) in small (i.e., ~10 nanometer) charge qubits of oxide double quantum dots when only optical phonons are the source of decoherence. In the regime of strong electron-phonon coupling and in the non-adiabatic region, we employ a duality transformation to make the problem tractable and analyze the dynamics through a non-Markovian quantum master equation. We find that the system decoheres after a long time, despite the fact that no energy is exchanged with the bath. Detuning the dots to a fraction of the optical phonon energy, increasing the electron-phonon coupling, reducing the adiabaticity, or decreasing the temperature enhances the coherence time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied to two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction.
Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters, contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and demonstrates the AEDA's capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
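The data-driven matching idea can be sketched as follows; this is a simplified stand-in, not the published AEDA, and the candidate positions, dose rates, and tolerance are all hypothetical. The best-matching candidate dosimeter position explains a false error, while a persistent mismatch at every candidate signals a true error:

```python
def classify_error(measured, candidates, tol=0.10):
    # Find the candidate dosimeter position whose predicted dose rates
    # best match the measurement (relative RMS deviation). If even the
    # best match deviates by more than `tol`, no dosimeter position
    # explains the data -> flag a true treatment error.
    def rel_rms(pred):
        return (sum(((m - p) / p) ** 2 for m, p in zip(measured, pred))
                / len(measured)) ** 0.5
    best_pos, best_err = min(((pos, rel_rms(pred))
                              for pos, pred in candidates.items()),
                             key=lambda t: t[1])
    return best_pos, ("true_error" if best_err > tol else "ok")

# Hypothetical dose rates (arbitrary units) at three dwell positions,
# predicted for the planned and two shifted dosimeter positions.
candidates = {"planned": [1.00, 2.00, 4.00],
              "shifted_3mm": [1.10, 2.15, 4.30],
              "shifted_6mm": [1.25, 2.40, 4.80]}
measured = [1.11, 2.13, 4.32]
pos, verdict = classify_error(measured, candidates)
```

Here the measurement disagrees with the planned position but is well explained by a slightly shifted dosimeter, i.e. the kind of false error the AEDA is designed to absorb.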
An automated workflow for parallel processing of large multiview SPIM recordings
Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel
2016-01-01
Summary: Selective Plane Illumination Microscopy (SPIM) allows to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data requires extensive processing interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on a HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585
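The per-time-point independence that the workflow exploits can be sketched in plain Python; this is a stand-in for the snakemake rules, and the function body and file names are placeholders, not the actual pipeline steps:

```python
from concurrent.futures import ThreadPoolExecutor

def process_time_point(t):
    # Stand-in for one registration/fusion job. Because each time point
    # is processed independently, the jobs can run in any order and in
    # parallel -- the property that makes the pipeline trivially
    # parallelizable on an HPC cluster.
    return t, f"fused_tp{t:04d}.tif"

def run_pipeline(time_points, workers=4):
    # Fan the independent jobs out over a worker pool and collect the
    # per-time-point outputs.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(process_time_point, time_points))

results = run_pipeline(range(8))
```

snakemake generalizes this pattern by also resolving dependencies between consecutive steps, so only truly independent jobs are scheduled concurrently.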
Tong, Yindong; Bu, Xiaoge; Chen, Junyue; Zhou, Feng; Chen, Long; Liu, Maodian; Tan, Xin; Yu, Tao; Zhang, Wei; Mi, Zhaorong; Ma, Lekuan; Wang, Xuejun; Ni, Jing
2017-01-05
Based on a time-series dataset and the mass balance method, the contributions of various sources to the nutrient discharges from the Yangtze River to the East China Sea are identified. The results indicate that the nutrient concentrations vary considerably among different sections of the Yangtze River. Non-point sources are an important source of nutrients to the Yangtze River, contributing about 36% and 63% of the nitrogen and phosphorus discharged into the East China Sea, respectively. Nutrient inputs from non-point sources vary among the sections of the Yangtze River, and the contributions of non-point sources increase from upstream to downstream. Considering the rice growing patterns in the Yangtze River Basin, the synchrony of rice tillering and the wet seasons might be an important cause of the high nutrient discharge from the non-point sources. Based on our calculations, a reduction of 0.99 Tg per year in total nitrogen discharges from the Yangtze River would be needed to limit the occurrences of harmful algal blooms in the East China Sea to 15 times per year. The extensive construction of sewage treatment plants in urban areas may have only a limited effect on reducing the occurrences of harmful algal blooms in the future.
Osterndorff-Kahanek, Elizabeth A.; Becker, Howard C.; Lopez, Marcelo F.; Farris, Sean P.; Tiwari, Gayatri R.; Nunez, Yury O.; Harris, R. Adron; Mayfield, R. Dayne
2015-01-01
Repeated ethanol exposure and withdrawal in mice increases voluntary drinking and represents an animal model of physical dependence. We examined time- and brain region-dependent changes in gene coexpression networks in amygdala (AMY), nucleus accumbens (NAC), prefrontal cortex (PFC), and liver after four weekly cycles of chronic intermittent ethanol (CIE) vapor exposure in C57BL/6J mice. Microarrays were used to compare gene expression profiles at 0-, 8-, and 120-hours following the last ethanol exposure. Each brain region exhibited a large number of differentially expressed genes (2,000-3,000) at the 0- and 8-hour time points, but fewer changes were detected at the 120-hour time point (400-600). Within each region, there was little gene overlap across time (~20%). All brain regions were significantly enriched with differentially expressed immune-related genes at the 8-hour time point. Weighted gene correlation network analysis identified modules that were highly enriched with differentially expressed genes at the 0- and 8-hour time points with virtually no enrichment at 120 hours. Modules enriched for both ethanol-responsive and cell-specific genes were identified in each brain region. These results indicate that chronic alcohol exposure causes global 'rewiring' of coexpression systems involving glial and immune signaling as well as neuronal genes. PMID:25803291
Extending the Search for Neutrino Point Sources with IceCube above the Horizon
NASA Astrophysics Data System (ADS)
Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Alba, J. L. Bazo; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Botner, O.; Bradley, L.; Braun, J.; Breder, D.; Carson, M.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cohen, S.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Day, C. T.; de Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; Deyoung, T.; Díaz-Vélez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Gerhardt, L.; Gladstone, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hasegawa, Y.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Homeier, A.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Imlay, R. L.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kemming, N.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Knops, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Krings, T.; Kroll, G.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Lauer, R.; Lehmann, R.; Lennarz, D.; Lundberg, J.; Lünemann, J.; Madsen, J.; Majumdar, P.; Maruyama, R.; Mase, K.; Matis, H. 
S.; McParland, C. P.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Middell, E.; Milke, N.; Miyamoto, H.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Patton, S.; Paul, L.; de Los Heros, C. Pérez; Petrovic, J.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Potthoff, N.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Schatto, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schukraft, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tamburro, A.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Terranova, C.; Tilav, S.; Toale, P. A.; Tooker, J.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; van Overloop, A.; van Santen, J.; Voigt, B.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Wiedemann, A.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, C.; Xu, X. W.; Yodh, G.; Yoshida, S.
2009-11-01
Point source searches with the IceCube neutrino telescope have been restricted to one hemisphere, due to the exclusive selection of upward going events as a way of rejecting the atmospheric muon background. We show that the region above the horizon can be included by suppressing the background through energy-sensitive cuts. This improves the sensitivity above PeV energies, previously not accessible for declinations of more than a few degrees below the horizon due to the absorption of neutrinos in Earth. We present results based on data collected with 22 strings of IceCube, extending its field of view and energy reach for point source searches. No significant excess above the atmospheric background is observed in a sky scan and in tests of source candidates. Upper limits are reported, which for the first time cover point sources in the southern sky up to EeV energies.
Metastable Distributions of Markov Chains with Rare Transitions
NASA Astrophysics Data System (ADS)
Freidlin, M.; Koralov, L.
2017-06-01
In this paper we consider Markov chains X_t^ε with transition rates that depend on a small parameter ε. We are interested in the long time behavior of X_t^ε at various ε-dependent time scales t = t(ε). The asymptotic behavior depends on how the point (1/ε, t(ε)) approaches infinity. We introduce a general notion of complete asymptotic regularity (a certain asymptotic relation between the ratios of transition rates), which ensures the existence of the metastable distribution for each initial point and a given time scale t(ε). The technique of i-graphs allows one to describe the metastable distribution explicitly. The result may be viewed as a generalization of the ergodic theorem to the case of parameter-dependent Markov chains.
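The role of ε-dependent time scales can be illustrated with a toy two-state chain (invented here, not taken from the paper) in which one transition is rare (probability ε per step) and the reverse is rarer (probability ε²): at times short compared to 1/ε the chain is metastable near its initial state, while at times long compared to 1/ε² it reaches the stationary law.

```python
def mat_mul(A, B):
    # Plain 2x2 (or general) matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(A, n):
    # Binary exponentiation of the one-step transition matrix.
    R = [[float(i == j) for j in range(len(A))] for i in range(len(A))]
    while n:
        if n & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        n >>= 1
    return R

eps = 1e-3
# Two-state chain: 0 -> 1 with probability eps, 1 -> 0 with probability eps**2.
P = [[1 - eps, eps], [eps**2, 1 - eps**2]]

short = mat_pow(P, 10)       # t << 1/eps: still concentrated on the start state
long_ = mat_pow(P, 10**7)    # t >> 1/eps**2: close to the stationary distribution
```

Starting from state 0, `short[0]` is still nearly the point mass at 0, while `long_[0]` is close to the stationary law (π₀ = ε/(1+ε) ≈ 10⁻³); at intermediate scales such as t ≈ 1/ε the limiting distribution depends on how (1/ε, t(ε)) goes to infinity, which is the phenomenon the paper formalizes.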
A very deep IRAS survey at l(II) = 97 deg, b(II) = +30 deg
NASA Technical Reports Server (NTRS)
Hacking, Perry; Houck, James R.
1987-01-01
A deep far-infrared survey is presented, based on over 1000 IRAS scans of a 4-6 sq. deg. field at the north ecliptic pole. Point sources from this survey are up to 100 times fainter than those in the IRAS Point Source Catalog at 12 and 25 micrometers, and up to 10 times fainter at 60 and 100 micrometers. The 12 and 25 micrometer maps are instrumental noise-limited, and the 60 and 100 micrometer maps are confusion noise-limited. The majority of the 12 micrometer point sources are stars within the Milky Way. The 25 micrometer sources are composed almost equally of stars and galaxies. About 80% of the 60 micrometer sources correspond to galaxies on Palomar Observatory Sky Survey (POSS) enlargements; the remaining 20% are probably galaxies below the POSS detection limit. The differential source counts are presented and compared with those predicted by the Bahcall and Soneira standard galaxy model using the B-V-12 micrometer colors of stars without circumstellar dust shells given by Waters, Cote, and Aumann. The 60 micrometer source counts are inconsistent with those predicted for a uniformly distributed, nonevolving universe. The implications are briefly discussed.
Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B.; Tasnimuzzaman, Md.; Nordland, Andreas; Begum, Anowara; Jensen, Peter K. M.
2018-01-01
Bangladesh is a cholera-endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profiles using molecular methods, in a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from “point-of-drinking” and “source” in 477 study households during routine visits at 6-week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae by polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources within a 7-day interval showed significantly higher odds (p < 0.05) of V. cholerae presence in point-of-drinking water compared to source water [OR = 17.24 (95% CI = 7.14–42.89)]. Based on the 7-day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (p < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85–29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19–18.79)] in source water samples than in point-of-drinking water samples.
Contamination of water at the point-of-drinking is less likely to depend on the contamination at the water source. Hygiene education interventions and programs should focus on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera. PMID:29616005
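The odds ratios and 95% confidence intervals quoted above can be reproduced from a 2×2 contingency table with the standard Woolf (log-odds) interval. A minimal sketch follows; the counts are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
               exposed  unexposed
    cases         a         b
    controls      c         d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(30, 10, 15, 45)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 9.0 3.57 22.67
```

Note how the interval is built on the log scale and exponentiated back, which is why the CI around a large OR (such as the 17.24 above) is strongly asymmetric.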
State-Level Point-of-Sale Tobacco News Coverage and Policy Progression Over a 2-Year Period.
Myers, Allison E; Southwell, Brian G; Ribisl, Kurt M; Moreland-Russell, Sarah; Bowling, J Michael; Lytle, Leslie A
2018-01-01
Mass media content may play an important role in policy change. However, the empirical relationship between media advocacy efforts and tobacco control policy success has rarely been studied. We examined the extent to which newspaper content characteristics (volume, slant, frame, source, use of evidence, and degree of localization) that have been identified as important in past descriptive studies were associated with policy progression over a 2-year period in the context of point-of-sale (POS) tobacco control. We used regression analyses to test the relationships between newspaper content and policy progression from 2012 to 2014. The dependent variable was the level of implementation of state-level POS tobacco control policies at Time 2. Independent variables were newspaper article characteristics (volume, slant, frame, source, use of evidence, and degree of localization) and were collected via content analysis of the articles. State-level policy environment contextual variables were examined as confounders. Positive, significant bivariate relationships exist between characteristics of news content (e.g., high overall volume, public health source present, local quote and local angle present, and pro-tobacco control slant present) and Time 2 POS score. However, in a multivariate model controlling for other factors, significant relationships did not hold. Newspaper coverage can be a marker of POS policy progression. Whether media can influence policy implementation remains an important question. Future work should continue to tease out and confirm the unique characteristics of media content that are most associated with subsequent policy progression, in order to inform media advocacy efforts.
Reversing the pump dependence of a laser at an exceptional point
Brandstetter, M.; Liertzer, M.; Deutsch, C.; Klang, P.; Schöberl, J.; Türeci, H. E.; Strasser, G.; Unterrainer, K.; Rotter, S.
2014-01-01
When two resonant modes in a system with gain or loss coalesce in both their resonance position and their width, a so-called exceptional point occurs, which acts as a source of non-trivial physics in a diverse range of systems. Lasers provide a natural setting to study such non-Hermitian degeneracies, as they feature resonant modes and a gain material as their basic constituents. Here we show that exceptional points can be conveniently induced in a photonic molecule laser by a suitable variation of the applied pump. Using a pair of coupled microdisk quantum cascade lasers, we demonstrate that in the vicinity of these exceptional points the coupled laser shows a characteristic reversal of its pump dependence, including a strongly decreasing intensity of the emitted laser light for increasing pump power. PMID:24925314
High Resolution Geological Site Characterization Utilizing Ground Motion Data
1992-06-26
[OCR-damaged excerpt; recoverable text follows] Acquisition: The source characterization array was composed of 28 stations evenly distributed on the circumference of a circle. Because analog anti-alias filters were used, no prefiltering was applied during acquisition. Results: Nine different sources were deployed within the source area. Spectra were calculated using a 1024-point Hamming window applied to the original 1000-point detrended and padded time series; these were then contoured.
Performance Evaluation of 98 CZT Sensors for Their Use in Gamma-Ray Imaging
NASA Astrophysics Data System (ADS)
Dedek, Nicolas; Speller, Robert D.; Spendley, Paul; Horrocks, Julie A.
2008-10-01
98 SPEAR sensors from eV Products have been evaluated for their use in a portable Compton camera. The sensors have a 5 mm × 5 mm × 5 mm CdZnTe crystal and are provided together with a preamplifier. The energy resolution was studied in detail for all sensors and was found to be 6% on average at 59.5 keV and 3% on average at 662 keV. The standard deviations of the corresponding energy resolution distributions are remarkably small (0.6% at 59.5 keV, 0.7% at 662 keV) and reflect the uniformity of the sensor characteristics. For possible outdoor use, the temperature dependence of the sensor performance was investigated for temperatures between 15 and 45 deg Celsius. A linear shift in calibration with temperature was observed. The energy resolution at low energies (81 keV) was found to deteriorate exponentially with temperature, while it stayed constant at higher energies (356 keV). A Compton camera built from these sensors was simulated. To obtain realistic energy spectra, a suitable detector response function was implemented. To investigate the angular resolution of the camera, a 137Cs point source was simulated. Reconstructed images of the point source were compared for perfect and realistic energy and position resolutions. The angular resolution of the camera was found to be better than 10 deg.
NASA Astrophysics Data System (ADS)
Nousratpour, A.
2011-12-01
The annual CO2 emission from soils corresponds to a large portion of the global carbon cycle and equals 10 percent of the total atmospheric carbon pool. The total forest soil CO2 loss equals the sum of contributions from autotrophic and heterotrophic organisms. The autotrophic respiration is derived from recent photosynthates from the forest canopy and exudates via the roots. The heterotrophic respiration is less directly dependent on root presence and recently assimilated photosynthates, which points to the possibility of separate mechanisms governing the CO2 emissions. The variation of the CO2 flux from these somewhat overlapping sources in the soil, i.e. rhizospheric and non-rhizospheric, is still not fully understood. Soil temperature and water availability in particular have often been used to explain the variation of soil CO2 efflux by using regression methods. In this experiment, around 1000 hours of soil CO2 emission rates from a drained spruce forest were collected from 6 plots, 3 of which had previously been root excluded. The emission rates were collected during 5 campaigns throughout the growing season, along with continuous above-ground and below-ground temperature and water properties such as precipitation and VPD (vapor pressure deficit). The resulting matrix was analyzed using the multivariate statistical model PLSr (Partial Least Squares regression). This operation reduces the dimensionality of large datasets with probable multicollinearity and helps clarify the dependence of a response factor on x-variables. In addition, a time series analysis was applied to the dataset to address the time lag between below-ground temperature and water properties and the above-ground weather conditions such as VPD and air temperature. Mean carbon emission from the control plots (428 mg Carbon m-2 hr-1) was significantly larger than that from the root excluded plots (136 mg Carbon m-2 hr-1).
During the growing season, more than 2/3 of the total CO2 release was estimated to be root contribution. The results show that the activity in the rhizosphere increased with rising soil temperature, VPD and ground water depletion up to a certain point. When the ground water depth exceeded about 0.5 m, the dependence was reversed. This effect was either opposite or absent in the root excluded plots, which reflects the involvement of the tree roots and the separate factors controlling the different sources of CO2.
Observations of the rupture development process from source time functions
NASA Astrophysics Data System (ADS)
Renou, Julien; Vallée, Martin
2017-04-01
The mechanisms governing seismic rupture expansion and leading to earthquakes of very different magnitudes are still under debate. In the cascade model, the rupture starts from a very small patch, whose size is undetectable by seismological investigation. The rupture then grows in a self-similar way, implying that no clues about the earthquake magnitude can be found before the rupture starts declining. However, dependencies between early phases of the rupture process and final magnitude have also been proposed, which can be explained if an earthquake is more likely to be a big one when its start and early development occur in rupture-prone areas. Here, the analysis of the early phases of the seismic rupture is achieved from an observational point of view using the SCARDEC database, a global catalog containing more than 3000 Source Time Functions (STFs) of earthquakes with magnitude larger than 5.7. This dataset is theoretically very suitable to investigate the initial phase, because STFs directly describe the seismic moment rate released over time, giving access to the rupture growth behavior. As several studies have already shown that deep earthquakes tend to have a specific signature of short duration with respect to magnitude (implying a quicker rupture growth than superficial events), only shallow events (depths < 70 km) are analyzed here. Our method consists of computing the STF slope, i.e. the seismic moment acceleration, at several prescribed moment rates. In order to ensure that the chosen moment rate intersects the growth phase of the STF, its value must be high enough to avoid the very beginning of the signal (not well constrained in the deconvolution process), and low enough to avoid the proximity of the peak moment rate. This approach does not use any rupture time information, which is interesting because (1) the exact hypocentral time can be uncertain and (2) the real rupture expansion can be delayed compared to the origin time.
If any magnitude-dependent signal exists, the average or median value of the slope should vary with the magnitude of the events, despite the intrinsic variability of the STFs. The preliminary results from the SCARDEC dataset seem to exhibit only a weak dependence of the slope on magnitude, in the magnitude domain where the chosen moment rate value crosses most of the STF onsets. In addition, our results point out that slope values gradually increase with the moment rate. These findings will be discussed in the framework of existing models of seismic rupture expansion.
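The core measurement described above, the slope (moment acceleration) where the moment-rate function first crosses a prescribed level on its rising branch, can be sketched as follows. The triangular STF is a hypothetical stand-in, not a SCARDEC function:

```python
def slope_at_rate(times, rates, target):
    """Return d(rate)/dt where the moment-rate function first reaches
    `target` on its rising branch (finite difference between samples)."""
    for i in range(1, len(rates)):
        if rates[i - 1] < target <= rates[i]:
            return (rates[i] - rates[i - 1]) / (times[i] - times[i - 1])
    return None  # target never crossed on a rising segment

# Hypothetical triangular STF: rises to a peak at t = 5, falls to 0 at t = 10
times = [i * 0.5 for i in range(21)]
rates = [min(t, 10 - t) for t in times]
print(slope_at_rate(times, rates, 2.0))  # → 1.0
```

Picking the target level well above the noisy onset and well below the peak, as the abstract prescribes, keeps the crossing on the well-constrained part of the growth phase.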
Gaspar, Ludmila; Howald, Cedric; Popadin, Konstantin; Maier, Bert; Mauvoisin, Daniel; Moriggi, Ermanno; Gutierrez-Arcelus, Maria; Falconnet, Emilie; Borel, Christelle; Kunz, Dieter; Kramer, Achim; Gachon, Frederic; Dermitzakis, Emmanouil T; Antonarakis, Stylianos E
2017-01-01
The importance of natural gene expression variation for human behavior is undisputed, but its impact on circadian physiology remains mostly unexplored. Using umbilical cord fibroblasts, we have determined by genome-wide association how common genetic variation impacts upon cellular circadian function. Gene set enrichment points to differences in protein catabolism as one major source of clock variation in humans. The two most significant alleles regulated expression of COPS7B, a subunit of the COP9 signalosome. We further show that the signalosome complex is imported into the nucleus in a timed fashion to stabilize the essential circadian protein BMAL1, a novel mechanism to oppose its proteasome-mediated degradation. Thus, circadian clock properties depend in part upon a genetically encoded competition between stabilizing and destabilizing forces, and genetic alterations in these mechanisms provide one explanation for human chronotype. PMID:28869038
McCarthy, Kathleen A.; Alvarez, David A.
2014-01-01
The Eugene Water & Electric Board (EWEB) supplies drinking water to approximately 200,000 people in Eugene, Oregon. The sole source of this water is the McKenzie River, which has consistently excellent water quality relative to established drinking-water standards. To ensure that this quality is maintained as land use in the source basin changes and water demands increase, EWEB has developed a proactive management strategy that includes a combination of conventional point-in-time discrete water sampling and time-integrated passive sampling with a combination of chemical analyses and bioassays to explore water quality and identify where vulnerabilities may lie. In this report, we present the results from six passive-sampling deployments at six sites in the basin, including the intake and outflow from the EWEB drinking-water treatment plant (DWTP). This is the first known use of passive samplers to investigate both the source and finished water of a municipal DWTP. Results indicate that low concentrations of several polycyclic aromatic hydrocarbons and organohalogen compounds are consistently present in source waters, and that many of these compounds are also present in finished drinking water. The nature and patterns of compounds detected suggest that land-surface runoff and atmospheric deposition act as ongoing sources of polycyclic aromatic hydrocarbons, some currently used pesticides, and several legacy organochlorine pesticides. Comparison of results from point-in-time and time-integrated sampling indicate that these two methods are complementary and, when used together, provide a clearer understanding of contaminant sources than either method alone.
Comparing stochastic point-source and finite-source ground-motion simulations: SMSIM and EXSIM
Boore, D.M.
2009-01-01
Comparisons of ground motions from two widely used point-source and finite-source ground-motion simulation programs (SMSIM and EXSIM) show that the following simple modifications in EXSIM will produce agreement in the motions from a small earthquake at a large distance for the two programs: (1) base the scaling of high frequencies on the integral of the squared Fourier acceleration spectrum; (2) do not truncate the time series from each subfault; (3) use the inverse of the subfault corner frequency for the duration of motions from each subfault; and (4) use a filter function to boost spectral amplitudes at frequencies near and less than the subfault corner frequencies. In addition, for SMSIM an effective distance is defined that accounts for geometrical spreading and anelastic attenuation from various parts of a finite fault. With these modifications, the Fourier and response spectra from SMSIM and EXSIM are similar to one another, even close to a large earthquake (M 7), when the motions are averaged over a random distribution of hypocenters. The modifications to EXSIM remove most of the differences in the Fourier spectra from simulations using pulsing and static subfaults; they also essentially eliminate any dependence of the EXSIM simulations on the number of subfaults. Simulations with the revised programs suggest that the results of Atkinson and Boore (2006), computed using an average stress parameter of 140 bars and the original version of EXSIM, are consistent with the revised EXSIM with a stress parameter near 250 bars.
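Both SMSIM and EXSIM build on the stochastic point-source model, whose core is the ω-squared source spectrum with a Brune corner frequency set by the stress parameter. As a hedged illustration of why the stress parameter matters (this is not code from either program), the standard Brune relations can be sketched:

```python
import math

def brune_corner_freq(m0_dyne_cm, stress_bars, beta_km_s=3.5):
    """Brune corner frequency: fc = 4.9e6 * beta * (stress / M0)^(1/3),
    with M0 in dyne-cm, stress in bars, beta in km/s."""
    return 4.9e6 * beta_km_s * (stress_bars / m0_dyne_cm) ** (1.0 / 3.0)

def omega_squared_accel(f, m0, fc):
    """Acceleration source spectrum shape (constant factors omitted):
    (2*pi*f)^2 * M0 / (1 + (f/fc)^2)."""
    return (2 * math.pi * f) ** 2 * m0 / (1.0 + (f / fc) ** 2)

m0 = 10 ** (1.5 * 7.0 + 16.05)        # seismic moment of an M 7 event, dyne-cm
fc140 = brune_corner_freq(m0, 140.0)  # stress parameter used by Atkinson and Boore (2006)
fc250 = brune_corner_freq(m0, 250.0)  # stress parameter suggested for the revised EXSIM
print(fc250 > fc140)
```

A higher stress parameter raises the corner frequency and therefore the high-frequency spectral level, which is why the revision of EXSIM shifted the stress parameter consistent with earlier results.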
NASA Astrophysics Data System (ADS)
Ryzhenkov, V.; Ivashchenko, V.; Vinuesa, R.; Mullyadzhanov, R.
2016-10-01
We use the open-source code nek5000 to assess the accuracy of high-order spectral element large-eddy simulations (LES) of a turbulent channel flow depending on the spatial resolution, compared to direct numerical simulation (DNS). The Reynolds number Re = 6800 is considered, based on the bulk velocity and half-width of the channel. The filtered governing equations are closed with the dynamic Smagorinsky model for subgrid stresses and heat flux. The results show very good agreement between LES and DNS for time-averaged velocity and temperature profiles and their fluctuations. Even the coarse LES grid, which contains around 30 times fewer points than the DNS grid, provided predictions of the friction velocity within a 2.0% accuracy interval.
The evolving interaction of low-frequency earthquakes during transient slip.
Frank, William B; Shapiro, Nikolaï M; Husker, Allen L; Kostoglodov, Vladimir; Gusev, Alexander A; Campillo, Michel
2016-04-01
Observed along the roots of seismogenic faults where the locked interface transitions to a stably sliding one, low-frequency earthquakes (LFEs) primarily occur as event bursts during slow slip. Using an event catalog from Guerrero, Mexico, we employ a statistical analysis to consider the sequence of LFEs at a single asperity as a point process, and deduce the level of time clustering from the shape of its autocorrelation function. We show that while the plate interface remains locked, LFEs behave as a simple Poisson process, whereas they become strongly clustered in time during even the smallest slow slip, consistent with interaction between different LFE sources. Our results demonstrate that bursts of LFEs can result from the collective behavior of asperities whose interaction depends on the state of the fault interface.
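The diagnostic described above, treating the LFE sequence at one asperity as a point process and reading the degree of time clustering off the autocorrelation of binned event counts, can be sketched with a synthetic catalog. The catalog and bin sizes here are illustrative assumptions, not the Guerrero data:

```python
import random

def binned_counts(event_times, t_max, n_bins):
    """Count events falling in each of n_bins equal time bins on [0, t_max)."""
    counts = [0] * n_bins
    for t in event_times:
        counts[min(int(t / t_max * n_bins), n_bins - 1)] += 1
    return counts

def autocorr(x, lag):
    """Normalized autocorrelation of a sequence at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag)) / n
    return cov / var

random.seed(0)
# A homogeneous Poisson process has i.i.d. uniform event times on [0, T);
# counts in disjoint bins are then independent, so the autocorrelation at
# non-zero lag should hover near zero. A clustered catalog would not.
events = sorted(random.uniform(0, 1000) for _ in range(2000))
counts = binned_counts(events, 1000, 200)
print(round(autocorr(counts, 1), 3))
```

A strongly positive lag-1 value for the same statistic during slow-slip episodes is what the abstract interprets as interaction between LFE sources.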
NASA Astrophysics Data System (ADS)
Wu, J.; Yao, W.; Zhang, J.; Li, Y.
2018-04-01
Labeling 3D point cloud data with traditional supervised learning methods requires considerable labelled samples, the collection of which is costly and time-consuming. This work focuses on adopting the domain adaptation concept to transfer existing trained random forest classifiers (based on a source domain) to new data scenes (target domain), which aims at reducing the dependence of accurate 3D semantic labeling in point clouds on training samples from the new data scene. Firstly, two random forest classifiers were trained with existing samples previously collected for other data. They differed in their decision tree construction algorithms: C4.5 with information gain ratio and CART with the Gini index. Secondly, four random forest classifiers adapted to the target domain were derived by transferring each tree in the source random forest models with two types of operations: structure expansion and reduction (SER) and structure transfer (STRUT). Finally, points in the target domain are labelled by fusing the four newly derived random forest classifiers using a weights-of-evidence based fusion model. To validate our method, experimental analysis was conducted using 3 datasets: one was used as the source domain data (Vaihingen data for 3D Semantic Labelling); the other two were used as the target domain data, from two cities in China (Jinmen city and Dunhuang city). Overall accuracies of 85.5% and 83.3% for 3D labelling were achieved for the Jinmen city and Dunhuang city data respectively, using only 1/3 of the newly labelled samples that would be required without domain adaptation.
MacBurn's cylinder test problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shestakov, Aleksei I.
2016-02-29
This note describes a test problem for MacBurn which illustrates its performance. The source is centered inside a cylinder with an axial-extent-to-radius ratio such that each end receives 1/4 of the thermal energy. The source (fireball) is modeled either as a point or as a disk of finite radius, as described by Marrs et al. For the latter, the disk is divided into 13 equal-area segments, each approximated as a point source, and models a partially occluded fireball. If the source is modeled as a single point, one obtains very nearly the expected deposition, e.g., 1/4 of the flux on each end, and energy is conserved. If the source is modeled as a disk, both conservation and energy fraction degrade. However, errors decrease if the ratio of source radius to domain size decreases. Modeling the source as a disk increases run-times.
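For an isotropic point source at the center of a cylinder, the fraction of energy reaching one end cap follows from the solid angle the cap subtends. The sketch below is an assumption about the geometry behind the quoted 1/4 figure, not MacBurn's actual code; it shows the radius-to-half-length ratio at which each end receives exactly 1/4:

```python
import math

def end_fraction(radius, half_length):
    """Fraction of an isotropic point source's energy hitting one end cap
    of a cylinder, for a source at the cylinder's center:
    solid angle / (4*pi) = (1 - cos(theta)) / 2."""
    theta = math.atan2(radius, half_length)  # half-angle subtended by the cap
    return (1.0 - math.cos(theta)) / 2.0

# Each end receives exactly 1/4 when radius / half_length = sqrt(3),
# i.e. cos(theta) = 1/2 (theta = 60 degrees)
print(round(end_fraction(math.sqrt(3.0), 1.0), 6))  # → 0.25
```

This is the "expected deposition" the single-point model reproduces; the disk model approximates the same integral with 13 point sources, which is where the small conservation errors enter.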
Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...
2014-08-05
A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.
Lagrangian descriptors in dissipative systems.
Junginger, Andrej; Hernandez, Rigoberto
2016-11-09
The reaction dynamics of time-dependent systems can be resolved through a recrossing-free dividing surface associated with the transition state trajectory, that is, the unique trajectory which is bound to the barrier region for all time in response to a given time-dependent potential. A general procedure based on the minimization of Lagrangian descriptors has recently been developed by Craven and Hernandez [Phys. Rev. Lett., 2015, 115, 148301] to construct this particular trajectory without requiring perturbative expansions relative to the naive transition state point at the top of the barrier. The extension of the method to account for dissipation in the equations of motion requires additional considerations established in this paper, because the calculation of the Lagrangian descriptor involves the integration of trajectories in forward and backward time. The two contributions are in general very different because the friction term can act as a source (in backward time) or sink (in forward time) of energy, leading to the possibility that information about the phase space structure may be lost due to the dominance of only one of the terms. To compensate for this effect, we introduce a weighting scheme within the Lagrangian descriptor and demonstrate that for thermal Langevin dynamics it preserves the essential phase space structures, while they are lost in the nonweighted case.
Correcting the extended-source calibration for the Herschel-SPIRE Fourier-transform spectrometer
NASA Astrophysics Data System (ADS)
Valtchanov, I.; Hopwood, R.; Bendo, G.; Benson, C.; Conversi, L.; Fulton, T.; Griffin, M. J.; Joubaud, T.; Lim, T.; Lu, N.; Marchili, N.; Makiwa, G.; Meyer, R. A.; Naylor, D. A.; North, C.; Papageorgiou, A.; Pearson, C.; Polehampton, E. T.; Scott, J.; Schulz, B.; Spencer, L. D.; van der Wiel, M. H. D.; Wu, R.
2018-03-01
We describe an update to the Herschel-Spectral and Photometric Imaging Receiver (SPIRE) Fourier-transform spectrometer (FTS) calibration for extended sources, which incorporates a correction for the frequency-dependent far-field feedhorn efficiency, ηff. This significant correction affects all FTS extended-source calibrated spectra in sparse or mapping mode, regardless of the spectral resolution. Line fluxes and continuum levels are underestimated by factors of 1.3-2 in the spectrometer long wavelength band (447-1018 GHz; 671-294 μm) and 1.4-1.5 in the spectrometer short wavelength band (944-1568 GHz; 318-191 μm). The correction was implemented in the FTS pipeline version 14.1 and has also been described in the SPIRE Handbook since 2017 February. Studies based on extended-source calibrated spectra produced prior to this pipeline version should be critically reconsidered using the current products available in the Herschel Science Archive. Once the extended-source calibrated spectra are corrected for ηff, the synthetic photometry and the broad-band intensities from SPIRE photometer maps agree within 2-4 per cent, similar to the level of agreement between point-source calibrated spectra and photometry from point-source calibrated maps. The two calibration schemes for the FTS are now self-consistent: the conversion between the corrected extended-source and point-source calibrated spectra can be achieved with the beam solid angle and a gain correction that accounts for the diffraction loss.
Determination of flash point in air and pure oxygen using an equilibrium closed bomb apparatus.
Kong, Dehong; am Ende, David J; Brenek, Steven J; Weston, Neil P
2003-08-29
The standard closed testers for flash point measurements may not be feasible for measuring flash point in special atmospheres like oxygen, because the test atmosphere cannot be maintained due to leakage and laboratory safety can be compromised. To address these limitations, we developed a new "equilibrium closed bomb" (ECB). The ECB generally gives lower flash point values than standard closed cup testers, as shown by the results for six flammable liquids. The present results are generally in good agreement with the values calculated from the reported lower flammability limits and the vapor pressures. Our measurements show that increased oxygen concentration had little effect on the flash points of the tested flammable liquids. While generally regarded as non-flammable because of the lack of an observed flash point in standard closed cup flash point testers, dichloromethane is known to form flammable mixtures. The flash point of dichloromethane in oxygen measured in the ECB is -7.1 degrees C. The flash point of dichloromethane in air is dependent on the type and energy of the ignition source. Further research is being carried out to establish the relationship between the flash point of dichloromethane and the energy of the ignition source.
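The calculated flash points mentioned above come from the condition that the vapor pressure reaches the lower flammability limit times the ambient pressure. A sketch of that estimate using the Antoine equation follows; the coefficients are roughly ethanol-like and the LFL is an assumed illustrative value, not data from the paper:

```python
import math

def antoine_pressure_mmhg(t_celsius, a, b, c):
    """Antoine equation: log10(P [mmHg]) = A - B / (C + T [degC])."""
    return 10.0 ** (a - b / (c + t_celsius))

def estimated_flash_point(lfl_vol_frac, a, b, c, lo=-100.0, hi=200.0):
    """Bisect for the temperature at which the vapor pressure equals
    LFL * 760 mmHg (vapor pressure increases monotonically with T here)."""
    target = lfl_vol_frac * 760.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if antoine_pressure_mmhg(mid, a, b, c) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Assumed inputs: ethanol-like Antoine coefficients (mmHg, degC basis)
# and an LFL of 3.3 vol%
t_fp = estimated_flash_point(0.033, 8.20417, 1642.89, 230.300)
print(round(t_fp, 1))
```

For these assumed inputs the estimate lands near 11 °C, in the neighborhood of ethanol's measured closed-cup flash point, which illustrates the "good agreement" comparison the abstract describes.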
Ahlfors, Seppo P.; Jones, Stephanie R.; Ahveninen, Jyrki; Hämäläinen, Matti S.; Belliveau, John W.; Bar, Moshe
2014-01-01
Identifying inter-area communication in terms of the hierarchical organization of functional brain areas is of considerable interest in human neuroimaging. Previous studies have suggested that the direction of magneto- and electroencephalography (MEG, EEG) source currents depends on the layer-specific input patterns into a cortical area. We examined the direction in MEG source currents in a visual object recognition experiment in which there were specific expectations of activation in the fusiform region being driven by either feedforward or feedback inputs. The source for the early non-specific visual evoked response, presumably corresponding to feedforward driven activity, pointed outward, i.e., away from the white matter. In contrast, the source for the later, object-recognition related signals, expected to be driven by feedback inputs, pointed inward, toward the white matter. Associating specific features of the MEG/EEG source waveforms to feedforward and feedback inputs could provide unique information about the activation patterns within hierarchically organized cortical areas. PMID:25445356
Transition and mixing in axisymmetric jets and vortex rings
NASA Technical Reports Server (NTRS)
Allen, G. A., Jr.; Cantwell, B. J.
1986-01-01
A class of impulsively started, axisymmetric, laminar jets produced by a time-dependent point source of momentum is considered. These jets are distinct flows, each starting from rest in an unbounded fluid. The study is conducted at three levels of detail. First, a generalized set of analytic creeping flow solutions is derived, together with a method of flow classification. Second, from this set, three specific creeping flow solutions are studied in detail: the vortex ring, the round jet, and the ramp jet. This study involves derivation of the vorticity, stream function, and entrainment diagrams, and the evolution of time lines through computer animation. From the entrainment diagrams, critical points are derived and analyzed. The flow geometry is dictated by the properties and location of critical points, which undergo bifurcation and topological transformation (a form of transition) with changing Reynolds number. Transition Reynolds numbers were calculated. A state space trajectory was derived describing the topological behavior of these critical points. This state space derivation yielded three states of motion which are universal for all axisymmetric jets. Third, the axisymmetric round jet is solved numerically using the unsteady laminar Navier-Stokes equations. These equations were shown to be self-similar for the round jet. Numerical calculations were performed up to a Reynolds number of 30 on a 60 × 60 point mesh. Animations generated from the numerical solution showed each of the three states of motion for the round jet, including the Re = 30 case.
Lambert, Amaury; Stadler, Tanja
2013-12-01
Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
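The CPP construction described above, where each new vertical edge starts at an i.i.d. speciation time and ends at the present, is straightforward to simulate. The sketch below uses an exponential depth distribution as an illustrative choice, not one prescribed by the paper:

```python
import random

def simulate_cpp_depths(n_tips, rate, rng):
    """Node depths of a CPP reconstructed tree with n_tips tips: the tree
    grows "horizontally" by adding n_tips - 1 vertical edges, each starting
    at an i.i.d. random depth and ending at the present time."""
    return [rng.expovariate(rate) for _ in range(n_tips - 1)]

rng = random.Random(42)
depths = simulate_cpp_depths(10, 0.5, rng)
print(len(depths), all(d > 0 for d in depths))  # → 9 True
```

Because the depths are i.i.d., simulation cost is linear in the number of tips, which is the source of the "extremely fast" simulation and likelihood computation the abstract highlights; the ranked topology is then automatically uniform (URT).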
Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model.
Musekiwa, Alfred; Manda, Samuel O M; Mwambi, Henry G; Chen, Ding-Geng
2016-01-01
Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes, contrasting different covariance structures for the dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and illustrate them with a practical example involving a meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18 and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results.
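The multivariate pooling step that accounts for dependence between effect sizes at several time points can be sketched with generalized least squares. The sketch below assumes a known compound-symmetry within-study covariance, identical across studies, and ignores between-study heterogeneity; all numbers are simulated for illustration, not the trial data of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 4, 17                       # 4 time points, 17 hypothetical trials
beta_true = np.array([0.10, 0.15, 0.18, 0.20])

def cs_cov(var, rho, t):
    """Compound-symmetry covariance: equal variance, equal correlation."""
    return var * (rho * np.ones((t, t)) + (1 - rho) * np.eye(t))

V = cs_cov(0.02, 0.6, T)           # assumed known sampling covariance
y = np.vstack([rng.multivariate_normal(beta_true, V) for _ in range(k)])

# GLS pooled estimate of the time-point-specific effects. With identical V
# across studies this reduces to the simple mean over studies, but the
# general form below extends to study-specific covariances.
Vinv = np.linalg.inv(V)
beta_hat = np.linalg.solve(k * Vinv, Vinv @ y.sum(axis=0))
se = np.sqrt(np.diag(np.linalg.inv(k * Vinv)))
```

Replacing `V` with study-specific matrices (or adding a between-study covariance term) changes only the two linear-algebra lines.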
Perturbations of the seismic reflectivity of a fluid-saturated depth-dependent poroelastic medium.
de Barros, Louis; Dietrich, Michel
2008-03-01
Analytical formulas are derived to compute the first-order effects produced by plane inhomogeneities on the point source seismic response of a fluid-filled stratified porous medium. The derivation is achieved by a perturbation analysis of the poroelastic wave equations in the plane-wave domain using the Born approximation. This approach yields the Frechet derivatives of the P-SV- and SH-wave responses in terms of the Green's functions of the unperturbed medium. The accuracy and stability of the derived operators are checked by comparing, in the time-distance domain, differential seismograms computed from these analytical expressions with complete solutions obtained by introducing discrete perturbations into the model properties. For vertical and horizontal point forces, it is found that the Frechet derivative approach is remarkably accurate for small and localized perturbations of the medium properties which are consistent with the Born approximation requirements. Furthermore, the first-order formulation appears to be stable at all source-receiver offsets. The porosity, consolidation parameter, solid density, and mineral shear modulus emerge as the most sensitive parameters in forward and inverse modeling problems. Finally, the amplitude-versus-angle response of a thin layer shows strong coupling effects between several model parameters.
Ellis, William L.; Kibler, J.D.
1983-01-01
Explosion-induced compressive stress increases near an underground nuclear explosion are believed to contribute significantly to the containment of high-pressure gases within the explosion-produced cavity. These induced compressive stresses are predicted by computer calculations, but have never been adequately confirmed by field measurements, owing primarily to the unique difficulties of obtaining such field data. Vibrating-wire stressmeter measurements made near the Mighty Epic nuclear detonation, however, qualitatively indicate that within 150 meters of the working point, permanent compressive stress increases of several megapascals were present 15 weeks after the event. Additionally, stress-change magnitudes interpreted from the stressmeter data between the 75- and 260-meter range from the working point compare favorably with calculational predictions of the stress changes believed to be present shortly after detonation of the event. The measurements and calculations differ, however, with regard to the pattern of stress change radial and transverse to the explosion source. For the range of the field measurements from the working point, computer models predict the largest compressive-stress increase to be radial to the explosion source, while the field data indicate the transverse component of stress change to be the most compressive. The significance of time-dependent modification of the initial explosion-induced stress distribution is, however, uncertain with regard to the comparison of the field measurements and theoretical predictions.
Patterns and age distribution of ground-water flow to streams
Modica, E.; Reilly, T.E.; Pollock, D.W.
1997-01-01
Simulations of ground-water flow in a generic aquifer system were made to characterize the topology of ground-water flow in the stream subsystem and to evaluate its relation to deeper ground-water flow. The flow models are patterned after hydraulic characteristics of aquifers of the Atlantic Coastal Plain and are based on numerical solutions to three-dimensional, steady-state, unconfined flow. The models were used to evaluate the effects of aquifer horizontal-to-vertical hydraulic conductivity ratios, aquifer thickness, and areal recharge rates on flow in the stream subsystem. A particle tracker was used to determine flow paths in a stream subsystem, to establish the relation between ground-water seepage to points along a simulated stream and its source area of flow, and to determine ground-water residence time in stream subsystems. In a geometrically simple aquifer system with accretion, the source area of flow to streams resembles an elongated ellipse that tapers in the downgradient direction. Increased recharge causes an expansion of the stream subsystem. The source area of flow to the stream expands predominantly toward the stream headwaters. Baseflow gain is also increased along the reach of the stream. A thin aquifer restricts ground-water flow and causes the source area of flow to expand near stream headwaters and also shifts the start-of-flow to the drainage basin divide. Increased aquifer anisotropy causes a lateral expansion of the source area of flow to streams. Ground-water seepage to the stream channel originates both from near- and far-recharge locations. The range in the lengths of flow paths that terminate at a point on a stream increases in the downstream direction. Consequently, the age distribution of ground water that seeps into the stream is skewed progressively older with distance downstream. Base flow is an integration of ground water with varying age and potentially different water quality, depending on the source within the drainage basin.
The quantitative results presented indicate that this integration can have a wide and complex residence time range and source distribution.
NASA Astrophysics Data System (ADS)
Mahanthesh, B.; Gireesha, B. J.; Shashikumar, N. S.; Hayat, T.; Alsaedi, A.
2018-06-01
The present work investigates the features of an exponential space-dependent heat source (ESHS) and cross-diffusion effects in Marangoni convective heat and mass transfer flow due to an infinite disk. The flow analysis incorporates magnetohydrodynamic (MHD) effects. The effects of Joule heating, viscous dissipation and solar radiation are also included. The thermal and solute fields on the disk surface vary in a quadratic manner. The ordinary differential equations are obtained by using the Von Kármán transformations. The resulting problem is solved numerically via a Runge-Kutta-Fehlberg based shooting scheme. The effects of the pertinent flow parameters are explored through graphical illustrations. The results point out that the ESHS effect dominates the thermal-dependent heat source effect on thermal boundary layer growth. The concentration and temperature distributions and their associated layer thicknesses are enhanced by the Marangoni effect.
NASA Technical Reports Server (NTRS)
Dewitt, K. J.; Baliga, G.
1982-01-01
A numerical simulation was developed to investigate the one-dimensional heat transfer occurring in a system composed of a layered aircraft blade having an ice deposit on its surface. The heat conduction equations were discretized using the Crank-Nicolson implicit finite difference formulation. The simulation considers uniform or time-dependent heat sources, from heaters which can be either point sources or of finite thickness. For the ice-water phase change, a numerical method which approximates the latent heat effect by a large heat capacity over a small temperature interval was applied. The simulation describes the temperature profiles within the various layers of the de-icer pad, as well as the movement of the ice-water interface. The simulation could also be used to predict the one-dimensional temperature profiles in any composite slab having different boundary conditions.
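The apparent-heat-capacity trick described above can be sketched directly: a Crank-Nicolson step for 1-D conduction in which the heat capacity is enlarged over a small interval around 0 °C to absorb the latent heat. All property values and boundary conditions below are illustrative, not the de-icer pad data of the paper.

```python
import numpy as np

# 1-D Crank-Nicolson conduction with the ice/water latent heat approximated
# by a large apparent heat capacity over [-dT, dT] around 0 deg C.
nx, slab = 51, 0.01                  # grid points, slab thickness [m]
dx = slab / (nx - 1)
dt = 0.05                            # time step [s]
k_th, rho, cp = 2.2, 917.0, 2100.0   # conductivity, density, heat capacity
L_f, dT = 3.34e5, 0.5                # latent heat [J/kg], smearing half-width [K]

T = np.full(nx, -10.0)               # initial temperature [deg C]

def apparent_cp(temp):
    """Heat capacity with the latent-heat spike spread over [-dT, dT]."""
    c = np.full_like(temp, cp)
    c[np.abs(temp) < dT] += L_f / (2.0 * dT)
    return c

for _ in range(200):
    r = k_th * dt / (rho * apparent_cp(T) * dx**2)
    off = -r / 2.0
    A = np.diag(1.0 + r) + np.diag(off[:-1], 1) + np.diag(off[1:], -1)
    b = (1.0 - r) * T
    b[1:-1] += (r[1:-1] / 2.0) * (T[2:] + T[:-2])
    A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 20.0       # heated surface (heater on)
    A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = -10.0  # far side held cold
    T = np.linalg.solve(A, b)

# locate the ice/water interface as the 0 deg C crossing of the profile
interface = float(np.interp(0.0, -T, np.arange(nx) * dx))
```

A layered blade would use per-layer properties inside `apparent_cp` and in `r`; the tridiagonal structure is unchanged.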
Safety and control of accelerator-driven subcritical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rief, H.; Takahashi, H.
1995-10-01
To study control and safety of accelerator-driven nuclear systems, a one-point kinetic model was developed and programmed. It deals with fast transients as a function of reactivity insertion, Doppler feedback, and the intensity of an external neutron source. The model allows for a simultaneous calculation of an equivalent critical reactor. It was validated by a comparison with a benchmark specified by the Nuclear Energy Agency Committee of Reactor Physics. Additional features are the possibility of inserting a linear or quadratic time-dependent reactivity ramp, which may account for gravity-induced accidents like earthquakes; the possibility of shutting down the external neutron source by an exponential decay law of the form exp(-t/τ); and a graphical display of the power and reactivity changes. The calculations revealed that such boosters behave quite benignly even if they are only slightly subcritical.
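A one-point kinetic model of this kind can be sketched as a small ODE system: neutron density, one delayed-neutron precursor group, a crude Doppler feedback, and an external source with optional exponential shutdown. The constants below are illustrative, not the NEA benchmark values used in the paper.

```python
import numpy as np

# One-point kinetics of a slightly subcritical, source-driven core.
beta, Lam, lam = 0.0065, 1e-5, 0.08   # delayed fraction, generation time, decay
rho0 = -0.005                         # static subcriticality
alpha_D = -1e-5                       # Doppler coefficient [1/K] (toy feedback)
S0 = 1e5                              # external source strength
kappa = 1e-6                          # adiabatic heating per unit power (toy)

def rhs(t, y, tau_src):
    n, c, temp = y
    src = S0 * (np.exp(-t / tau_src) if tau_src else 1.0)  # optional shutdown
    rho = rho0 + alpha_D * temp
    return np.array([(rho - beta) / Lam * n + lam * c + src,
                     beta / Lam * n - lam * c,
                     kappa * n])

def integrate(t_end, dt=1e-5, tau_src=None):
    """Classical RK4; dt resolves the prompt time scale Lam/(beta - rho0)."""
    y, t = np.zeros(3), 0.0
    while t < t_end:
        k1 = rhs(t, y, tau_src)
        k2 = rhs(t + dt / 2, y + dt / 2 * k1, tau_src)
        k3 = rhs(t + dt / 2, y + dt / 2 * k2, tau_src)
        k4 = rhs(t + dt, y + dt * k3, tau_src)
        y, t = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + dt
    return y

n_on = integrate(0.02)[0]                 # steady source: n approaches a plateau
n_off = integrate(0.02, tau_src=2e-3)[0]  # source shut down as exp(-t/tau)
```

The benign behavior noted in the abstract shows up here as a bounded source-driven level, roughly S0·Λ/(β - ρ0) after the prompt jump, rather than a divergent transient.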
NASA Astrophysics Data System (ADS)
Ulfah, S.; Awalludin, S. A.; Wahidin
2018-01-01
The advection-diffusion model is one of the mathematical models that can be used to understand the distribution of air pollutants in the atmosphere. This study uses a time-dependent 2D advection-diffusion model to simulate the distribution of air pollution in order to find out whether pollutants are more concentrated at ground level or near the emission source under particular atmospheric conditions such as stable, unstable, and neutral conditions. Wind profile, eddy diffusivity, and temperature are considered as parameters in the model. The model is solved by using an explicit finite difference method, which is then visualized by a computer program developed using the Lazarus programming software. The results show that atmospheric conditions alone do not conclusively determine the level of pollutant concentration, as the parameters in the model have their own effects under each atmospheric condition.
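The explicit finite-difference scheme can be sketched as follows for a continuous elevated point source. A constant wind and constant eddy diffusivities stand in for the stability-dependent profiles considered in the paper; all values are invented for illustration.

```python
import numpy as np

# Explicit scheme for dc/dt + u dc/dx = Kx d2c/dx2 + Kz d2c/dz2 + Q.
nx, nz = 80, 40
dx = dz = 10.0                     # grid spacing [m]
u, Kx, Kz = 2.0, 5.0, 1.0          # wind [m/s], eddy diffusivities [m^2/s]
dt = 0.4                           # obeys the diffusive and CFL stability limits
c = np.zeros((nz, nx))             # concentration field, indexed c[z, x]
k_src, i_src, Q = 10, 5, 1.0       # source height/x indices, emission rate

for _ in range(500):
    new = c.copy()
    new[1:-1, 1:-1] += dt * (
        Kx * (c[1:-1, 2:] - 2 * c[1:-1, 1:-1] + c[1:-1, :-2]) / dx**2
        + Kz * (c[2:, 1:-1] - 2 * c[1:-1, 1:-1] + c[:-2, 1:-1]) / dz**2
        - u * (c[1:-1, 1:-1] - c[1:-1, :-2]) / dx)   # first-order upwind advection
    new[k_src, i_src] += dt * Q                      # continuous point emission
    new[0, :] = new[1, :]                            # zero-flux ground boundary
    c = new

ground_peak = c[0].max()           # ground-level maximum
source_peak = c[k_src].max()       # maximum at source height
```

Comparing `ground_peak` against `source_peak` for different wind and diffusivity profiles is exactly the ground-level-versus-source question posed in the abstract.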
Simulating water-quality trends in public-supply wells in transient flow systems
Starn, J. Jeffrey; Green, Christopher T.; Hinkle, Stephen R.; Bagtzoglou, Amvrossios C.; Stolp, Bernard J.
2014-01-01
Models need not be complex to be useful. An existing groundwater-flow model of Salt Lake Valley, Utah, was adapted for use with convolution-based advective particle tracking to explain broad spatial trends in dissolved solids. This model supports the hypothesis that water produced from wells is increasingly younger with higher proportions of surface sources as pumping changes in the basin over time. At individual wells, however, predicting specific water-quality changes remains challenging. The influence of pumping-induced transient groundwater flow on changes in mean age and source areas is significant. Mean age and source areas were mapped across the model domain to extend the results from observation wells to the entire aquifer to see where changes in concentrations of dissolved solids are expected to occur. The timing of these changes depends on accurate estimates of groundwater velocity. Calibration to tritium concentrations was used to estimate effective porosity and improve correlation between source area changes, age changes, and measured dissolved solids trends. Uncertainty in the model is due in part to spatial and temporal variations in tracer inputs, estimated tracer transport parameters, and in pumping stresses at sampling points. For tracers such as tritium, the presence of two-limbed input curves can be problematic because a single concentration can be associated with multiple disparate travel times. These shortcomings can be ameliorated by adding hydrologic and geologic detail to the model and by adding additional calibration data. However, the Salt Lake Valley model is useful even without such small-scale detail.
Point-source stochastic-method simulations of ground motions for the PEER NGA-East Project
Boore, David
2015-01-01
Ground motions for the PEER NGA-East project were simulated using a point-source stochastic method. The simulated motions are provided for distances between 0 and 1200 km, M from 4 to 8, and 25 ground-motion intensity measures: peak ground velocity (PGV), peak ground acceleration (PGA), and 5%-damped pseudo-absolute response spectral acceleration (PSA) for 23 periods ranging from 0.01 s to 10.0 s. Tables of motions are provided for each of six attenuation models. The attenuation-model-dependent stress parameters used in the stochastic-method simulations were derived from inversion of PSA data from eight earthquakes in eastern North America.
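The essence of a point-source stochastic simulation can be sketched in a few lines: windowed Gaussian noise is given a target Fourier amplitude shape (here a Brune omega-squared source spectrum with crude spreading and kappa terms), then inverse-transformed to a time series from which an intensity measure such as PGA is read. The constants below are illustrative and are not the NGA-East attenuation-model parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 5.0
M0 = 10 ** (1.5 * M + 16.05)             # seismic moment [dyne-cm]
stress_bar, beta_kms = 100.0, 3.5        # stress parameter [bar], shear speed [km/s]
fc = 4.9e6 * beta_kms * (stress_bar / M0) ** (1 / 3)   # Brune corner frequency [Hz]

n, dt, R, kappa0 = 4096, 0.005, 20.0, 0.04   # samples, dt [s], distance [km], kappa [s]
f = np.fft.rfftfreq(n, dt)
f[0] = 1e-6                              # avoid 0/0 at DC
# Target acceleration amplitude spectrum: omega-squared source, 1/R geometric
# spreading, kappa high-frequency diminution (site/path constants invented).
target = (2 * np.pi * f) ** 2 / (1 + (f / fc) ** 2) * np.exp(-np.pi * kappa0 * f) / R

noise = rng.standard_normal(n) * np.exp(-np.arange(n) * dt / 2.0)  # shaped window
spec = np.fft.rfft(noise)
spec = spec / np.abs(spec).mean() * target   # impose the target spectral shape
acc = np.fft.irfft(spec, n)                  # stochastic acceleration trace
pga = float(np.abs(acc).max())
```

Averaging `pga` (or response-spectrum ordinates) over many noise realizations gives the tabulated median motions; the attenuation-model dependence enters through the stress parameter and path terms.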
NASA Astrophysics Data System (ADS)
Uria-Tellaetxe, Iratxe; Navazo, Marino; de Blas, Maite; Durana, Nieves; Alonso, Lucio; Iza, Jon
2016-04-01
Despite the toxicity of naphthalene and the fact that it is a precursor of atmospheric photooxidants and secondary aerosol, studies on ambient gas-phase naphthalene are generally scarce. Moreover, to the best of our knowledge, this is the first published study using long-term hourly ambient gas-phase naphthalene concentrations. This work also demonstrates the usefulness of ambient gas-phase naphthalene for identifying major sources of volatile organic compounds (VOC) in complex scenarios. Initially, in order to identify the main benzene emission sources, hourly ambient measurements of 60 VOC were taken during a complete year, together with meteorological data, in an urban/industrial area. Later, due to the observed co-linearity of some of the emissions, a procedure was developed to recover naphthalene concentration data from recorded chromatograms for use as a tracer of the combustion and distillation of petroleum products. The characteristic retention time of this compound was determined by comparing previous simultaneous GC-MS and GC-FID analyses by means of relative retention times, and its concentration was calculated by using relative response factors. The obtained naphthalene concentrations correlated fairly well with ethene (r = 0.86) and benzene (r = 0.92). In addition, the analysis of daily time series showed that these compounds followed a similar pattern, very different from that of other VOC, with minimum concentrations at day-time. This, together with the results from the assessment of the meteorological dependence, pointed to a coke oven as the major naphthalene and benzene emission source in the study area.
Space-Time Dependent Transport, Activation, and Dose Rates for Radioactivated Fluids.
NASA Astrophysics Data System (ADS)
Gavazza, Sergio
Two methods are developed to calculate the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates generated from radioactivated fluids flowing through pipes. The work couples space- and time-dependent phenomena, treated as only space- or time-dependent in the open literature. The transport and activation methodology (TAM) is used to numerically calculate space- and time-dependent transport and activation of radionuclides in fluids flowing through pipes exposed to radiation fields, and the volumetric radioactive sources created by radionuclide motions. The computer program Radionuclide Activation and Transport in Pipe (RNATPA1) performs the numerical calculations required in TAM. The gamma ray dose methodology (GAM) is used to numerically calculate space- and time-dependent gamma ray dose equivalent rates from the volumetric radioactive sources determined by TAM. The computer program Gamma Ray Dose Equivalent Rate (GRDOSER) performs the numerical calculations required in GAM. The scope of conditions considered by TAM and GAM herein includes (a) laminar flow in straight pipe, (b) recirculating flow schemes, (c) time-independent fluid velocity distributions, (d) space-dependent monoenergetic neutron flux distributions, (e) space- and time-dependent activation of a single parent nuclide and transport and decay of a single daughter radionuclide, and (f) assessment of space- and time-dependent gamma ray dose rates, outside the pipe, generated by the space- and time-dependent source term distributions inside of it. The methodologies, however, can be easily extended to include all the situations of interest for solving the phenomena addressed in this dissertation. Results obtained by the described calculational procedures are compared with analytical expressions.
The physics of the problems addressed by the new technique, and its increased accuracy relative to methods that are not space- and time-dependent, are presented. The value of the methods is also discussed. It has been demonstrated that TAM and GAM can be used to enhance the understanding of the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates related to radioactivated fluids flowing through pipes.
NASA Astrophysics Data System (ADS)
Kucherov, A. N.; Makashev, N. K.; Ustinov, E. V.
1994-02-01
A procedure is proposed for numerical modeling of instantaneous and averaged (over various time intervals) distant-point-source images perturbed by a turbulent atmosphere that moves relative to the radiation receiver. Examples of image calculations under conditions of the significant effect of atmospheric turbulence in an approximation of geometrical optics are presented and analyzed.
Effect of bulk Lorentz violation on anisotropic brane cosmologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heydari-Fard, Malihe, E-mail: heydarifard@qom.ac.ir
2012-04-01
The effect of Lorentz invariance violation in cosmology has attracted a considerable amount of attention. By using a dynamical vector field assumed to point in the bulk direction, with Lorentz invariance holding on the brane, we extend the notion of Lorentz violation in four dimensions due to Jacobson to a five-dimensional brane-world. We obtain the general solution of the field equations in an exact parametric form for Bianchi type I space-time, with a perfect fluid as the matter source. We show that the brane universe evolves from an isotropic/anisotropic state to an isotropic de Sitter inflationary phase at late times. The early-time behavior of the anisotropic brane universe depends largely on the Lorentz-violating parameters β_i, i = 1, 2, 3, and the equation of state of the matter, while its late-time behavior is independent of these parameters.
Peeling Away Timing Error in NetFlow Data
NASA Astrophysics Data System (ADS)
Trammell, Brian; Tellenbach, Bernhard; Schatzmann, Dominik; Burkhart, Martin
In this paper, we characterize, quantify, and correct timing errors introduced into network flow data by collection and export via Cisco NetFlow version 9. We find that while some of these sources of error (clock skew, export delay) are generally implementation-dependent and known in the literature, there is an additional cyclic error of up to one second that is inherent to the design of the export protocol. We present a method for correcting this cyclic error in the presence of clock skew and export delay. In an evaluation using traffic with known timing collected from a national-scale network, we show that this method can successfully correct the cyclic error. However, there can also be other implementation-specific errors for which insufficient information remains for correction. On the routers we have deployed in our network, this limits the accuracy to about 70ms, reinforcing the point that implementation matters when conducting research on network measurement data.
NASA Astrophysics Data System (ADS)
Warneke, C.; Geiger, F.; Edwards, P. M.; Dube, W.; Pétron, G.; Kofler, J.; Zahn, A.; Brown, S. S.; Graus, M.; Gilman, J.; Lerner, B.; Peischl, J.; Ryerson, T. B.; de Gouw, J. A.; Roberts, J. M.
2014-05-01
The emissions of volatile organic compounds (VOCs) associated with oil and natural gas production in the Uinta Basin, Utah were measured at a ground site in Horse Pool and from a NOAA mobile laboratory with PTR-MS instruments. The VOC compositions in the vicinity of individual gas and oil wells and other point sources such as evaporation ponds, compressor stations and injection wells are compared to the measurements at Horse Pool. High mixing ratios of aromatics, alkanes, cycloalkanes and methanol were observed for extended periods of time and short-term spikes caused by local point sources. The mixing ratios during the time the mobile laboratory spent on the well pads were averaged. High mixing ratios were found close to all point sources, but gas wells using dry-gas collection, which means dehydration happens at the well, were clearly associated with higher mixing ratios than other wells. Another large source was the flowback pond near a recently hydraulically re-fractured gas well. The comparison of the VOC composition of the emissions from the oil and natural gas wells showed that wet gas collection wells compared well with the majority of the data at Horse Pool and that oil wells compared well with the rest of the ground site data. Oil wells on average emit heavier compounds than gas wells. The mobile laboratory measurements confirm the results from an emissions inventory: the main VOC source categories from individual point sources are dehydrators, oil and condensate tank flashing and pneumatic devices and pumps. Raw natural gas is emitted from the pneumatic devices and pumps and heavier VOC mixes from the tank flashings.
NASA Technical Reports Server (NTRS)
Hu, Fang Q.; Pizzo, Michelle E.; Nark, Douglas M.
2016-01-01
Based on the time domain boundary integral equation formulation of the linear convective wave equation, a computational tool dubbed Time Domain Fast Acoustic Scattering Toolkit (TD-FAST) has recently been under development. The time domain approach has a distinct advantage that the solutions at all frequencies are obtained in a single computation. In this paper, the formulation of the integral equation, as well as its stabilization by the Burton-Miller type reformulation, is extended to cases of a constant mean flow in an arbitrary direction. In addition, a "Source Surface" is also introduced in the formulation that can be employed to encapsulate regions of noise sources and to facilitate coupling with CFD simulations. This is particularly useful for applications where the noise sources are not easily described by analytical source terms. Numerical examples are presented to assess the accuracy of the formulation, including a computation of noise shielding by a thin barrier motivated by recent Historical Baseline F31A31 open rotor noise shielding experiments. Furthermore, spatial resolution requirements of the time domain boundary element method are also assessed using point per wavelength metrics. It is found that, using only constant basis functions and high-order quadrature for surface integration, relative errors of less than 2% may be obtained when the surface spatial resolution is 5 points-per-wavelength (PPW) or 25 points-per-wavelength squared (PPW2).
Searching for minimum in dependence of squared speed-of-sound on collision energy
Liu, Fu -Hu; Gao, Li -Na; Lacey, Roy A.
2016-01-01
Experimental results for the rapidity distributions of negatively charged pions produced in proton-proton (p-p) and beryllium-beryllium (Be-Be) collisions at different beam momenta, measured by the NA61/SHINE Collaboration at the Super Proton Synchrotron (SPS), are described by a revised (three-source) Landau hydrodynamic model. The squared speed-of-sound parameter c_s^2 is then extracted from the width of the rapidity distribution. There is a local minimum (knee point), indicating a softest point in the equation of state (EoS), appearing at about 40A GeV/c (or 8.8 GeV) in the c_s^2 excitation function (the dependence of c_s^2 on incident beam momentum, or center-of-mass energy). This knee point should be related to the search for the onset of quark deconfinement and the critical point of the quark-gluon plasma (QGP) phase transition.
NASA Astrophysics Data System (ADS)
Tarpin, Malo; Canet, Léonie; Wschebor, Nicolás
2018-05-01
In this paper, we present theoretical results on the statistical properties of stationary, homogeneous, and isotropic turbulence in incompressible flows in three dimensions. Within the framework of the non-perturbative renormalization group, we derive a closed renormalization flow equation for a generic n-point correlation (and response) function for large wave-numbers with respect to the inverse integral scale. The closure is obtained from a controlled expansion and relies on extended symmetries of the Navier-Stokes field theory. It yields the exact leading behavior of the flow equation at large wave-numbers |p_i| and for arbitrary time differences t_i in the stationary state. Furthermore, we obtain the form of the general solution of the corresponding fixed point equation, which yields the analytical form of the leading wave-number and time dependence of n-point correlation functions, for large wave-numbers and both for small t_i and in the limit t_i → ∞. At small t_i, the leading contribution at large wave-numbers is logarithmically equivalent to -α (εL)^{2/3} |Σ_i t_i p_i|^2, where α is a non-universal constant, L is the integral scale, and ε is the mean energy injection rate. For the 2-point function, the (tp)^2 dependence is known to originate from the sweeping effect. The derived formula embodies the generalization of the effect of sweeping to n-point correlation functions. At large wave-numbers and large t_i, we show that the t_i^2 dependence in the leading-order contribution crosses over to a |t_i| dependence. The expression of the correlation functions in this regime was not derived before, even for the 2-point function. Both predictions can be tested in direct numerical simulations and in experiments.
Trabelsi, H; Gantri, M; Sediki, E
2010-01-01
We present a numerical model for the study of a general, two-dimensional, time-dependent, laser radiation transfer problem in a biological tissue. The model is suitable for many situations, especially when the external laser source is pulsed or continuous. We used a control volume discrete-ordinate method associated with an implicit, three-level, second-order, time-differencing scheme. In medical imaging by laser techniques, this could serve as an optical tomography forward model. We considered a very thin rectangular biological tissue-like medium submitted to a visible or a near-infrared laser source. Different cases were treated numerically. The source was assumed to be monochromatic and collimated. We used either a continuous source or a short-pulsed source. The transmitted radiance was computed at detector points on the boundaries. Also, the distribution of the internal radiation intensity at different instants is presented. According to the source type, we examined either the steady-state response or the transient response of the medium. First, our model was validated by experimental results from the literature for a homogeneous biological tissue. The space and angular grid independence of our results is shown. Next, the proposed model was used to study changes in transmitted radiation for a homogeneous background medium in which two heterogeneous objects were embedded. As a last investigation, we studied a multilayered biological tissue. We simulated near-infrared radiation in human skin, fat and muscle. Some results concerning the effects of fat thickness and of the positions of the detector and source on the reflected radiation are presented.
Time-Domain Filtering for Spatial Large-Eddy Simulation
NASA Technical Reports Server (NTRS)
Pruett, C. David
1997-01-01
An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
Another Look at the Great Area-Coverage Controversy of the 1950's
NASA Astrophysics Data System (ADS)
Blanchard, Walter
2005-09-01
In the immediate aftermath of WW2 there sprang up an international argument over the relative merits for aerial navigation of area-coverage radio navaids versus point-source systems. The United States was in favour of point-source, whereas the UK proposed area-coverage, systems for which had been successfully demonstrated under very adverse conditions during the war. It rumbled on for many years, not being finally settled until the ICAO Montreal Conference of 1959 decided for point-source. Since then, VOR/DME/ADF/ILS have been the standard aviation radio navaids and there seems little likelihood of any change in the near future, GNSS notwithstanding, if one discounts the phasing-out of ADF. The matter now seems sufficiently far in the past to allow a dispassionate evaluation of the technical arguments used at the time; the political ones can be left to another place and time.
NASA Astrophysics Data System (ADS)
Fang, Huaiyang; Lu, Qingshui; Gao, Zhiqiang; Shi, Runhe; Gao, Wei
2013-09-01
China's economy has grown rapidly since 1978. Rapid economic growth has led to fast growth in fertilizer and pesticide consumption. A significant portion of these fertilizers and pesticides entered the water and caused water quality degradation. At the same time, rapid economic growth also caused more and more point-source pollution to be discharged into the water. Eutrophication has become a major threat to the water bodies. Worsening environmental problems forced governments to take measures to control water pollution. We extracted land cover from Landsat TM images and calculated point-source pollution with the export coefficient method; the SWAT model was then run to simulate non-point-source pollution. We found that the annual TP load from industrial pollution into rivers is 115.0 t in the entire watershed. The average annual TP load from each sub-basin ranged from 0 to 189.4 tons. Higher TP loads from livestock and human habitation mainly occur in areas that are far from large towns or cities and where the TP loads from industry are relatively low. The mean annual TP load delivered to the streams was 246.4 tons; the highest TP loads occurred in the northern part of this area, and the lowest TP loads are mainly distributed in the middle part. Therefore, point-source pollution accounts for a high proportion of the total in this area, and governments should take measures to control it.
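The export coefficient calculation used above amounts to a weighted sum: the annual load is each land-use area (or animal/population count) times a per-unit export coefficient. The coefficients and sub-basin inventory below are invented for illustration, not the study's values.

```python
# Export-coefficient sketch for an annual TP load [kg/yr] in one sub-basin.
export_coeff = {          # kg TP per unit per year (hypothetical values)
    "cropland_ha": 0.9,
    "urban_ha": 1.1,
    "forest_ha": 0.1,
    "livestock_head": 0.25,
    "people": 0.15,
}
subbasin = {"cropland_ha": 1200, "urban_ha": 300, "forest_ha": 2500,
            "livestock_head": 800, "people": 5000}

def tp_load_kg(units, coeffs):
    """Annual load = sum over sources of (count or area) x export coefficient."""
    return sum(coeffs[k] * v for k, v in units.items())

load = tp_load_kg(subbasin, export_coeff)   # kg TP per year for this sub-basin
```

Summing `tp_load_kg` over all sub-basins, and adding the SWAT-simulated non-point loads, gives the watershed totals compared in the abstract.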
Phadnis, Milind A.; Shireman, Theresa I.; Wetmore, James B.; Rigler, Sally K.; Zhou, Xinhua; Spertus, John A.; Ellerbeck, Edward F.; Mahnken, Jonathan D.
2014-01-01
In a population of chronic dialysis patients with an extensive burden of cardiovascular disease, estimation of the effectiveness of cardioprotective medication in literature is based on calculation of a hazard ratio comparing hazard of mortality for two groups (with or without drug exposure) measured at a single point in time or through the cumulative metric of proportion of days covered (PDC) on medication. Though both approaches can be modeled in a time-dependent manner using a Cox regression model, we propose a more complete time-dependent metric for evaluating cardioprotective medication efficacy. We consider that drug effectiveness is potentially the result of interactions between three time-dependent covariate measures, current drug usage status (ON versus OFF), proportion of cumulative exposure to drug at a given point in time, and the patient’s switching behavior between taking and not taking the medication. We show that modeling of all three of these time-dependent measures illustrates more clearly how varying patterns of drug exposure affect drug effectiveness, which could remain obscured when modeled by the more standard single time-dependent covariate approaches. We propose that understanding the nature and directionality of these interactions will help the biopharmaceutical industry in better estimating drug efficacy. PMID:25343005
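The three time-dependent exposure measures described above can be computed directly from a daily ON/OFF medication record: current status, cumulative proportion of days covered to date, and the number of ON/OFF switches to date. The record below is invented for illustration; these columns would then enter a Cox model as time-dependent covariates, optionally with interaction terms.

```python
import numpy as np

# Daily exposure record: 1 = ON medication, 0 = OFF (hypothetical patient).
on = np.array([1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1], dtype=int)

days = np.arange(1, len(on) + 1)
status = on                                    # current ON/OFF status
cum_prop = np.cumsum(on) / days                # proportion of days covered so far
switches = np.concatenate([[0], np.cumsum(np.abs(np.diff(on)))])  # switches to date
```

Note that `cum_prop[-1]` is the conventional end-of-follow-up PDC, while the full vectors retain the exposure history that the single-time-point and cumulative metrics discard.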
The effect of a hot, spherical scattering cloud on quasi-periodic oscillation behavior
NASA Astrophysics Data System (ADS)
Bussard, R. W.; Weisskopf, M. C.; Elsner, R. F.; Shibazaki, N.
1988-04-01
A Monte Carlo technique is used to investigate the effects of a hot electron scattering cloud surrounding a time-dependent X-ray source. Results are presented for the time-averaged emergent energy spectra and the mean residence time in the cloud as a function of energy. Moreover, after Fourier transforming the scattering Green's function, it is shown how the cloud affects both the observed power spectrum of a time-dependent source and the cross spectrum (Fourier transform of a cross correlation between energy bands). It is found that the power spectra intrinsic to the source are related to those observed by a relatively simple frequency-dependent multiplicative factor (a transmission function). The cloud can severely attenuate high frequencies in the power spectra, depending on optical depth, and, at lower frequencies, the transmission function has roughly a Lorentzian shape. It is also found that if the intrinsic energy spectrum is constant in time, the phase of the cross spectrum is determined entirely by scattering. Finally, the implications of the results for studies of the X-ray quasi-periodic oscillators are discussed.
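The central result above, that the observed power spectrum equals the intrinsic spectrum multiplied by a transmission function that is roughly Lorentzian at lower frequencies, can be illustrated numerically. The cutoff frequency and flat intrinsic spectrum below are arbitrary placeholders, not fitted values from the paper:

```python
import numpy as np

def lorentzian_transmission(f, f0):
    """Illustrative Lorentzian-shaped transmission factor: unity at f = 0,
    rolling off above the cutoff f0 (set by the cloud's optical depth)."""
    return 1.0 / (1.0 + (f / f0) ** 2)

f = np.linspace(0.0, 100.0, 1001)        # frequency grid (arbitrary units)
intrinsic_power = np.full_like(f, 2.0)   # flat intrinsic power spectrum
observed_power = intrinsic_power * lorentzian_transmission(f, f0=10.0)
```

High frequencies are strongly attenuated while the spectrum is passed nearly unchanged well below the cutoff, mirroring the attenuation behavior described in the abstract.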
Kurzeja, R. J.; Buckley, R. L.; Werth, D. W.; Chiswell, S. R.
2017-12-28
A method is outlined and tested to detect low-level nuclear or chemical sources from time series of concentration measurements. The method uses a mesoscale atmospheric model to simulate the concentration signature from a known or suspected source at a receptor, which is then regressed successively against segments of the measurement series to create time series of metrics that measure the goodness of fit between the signatures and the measurement segments. The method was applied to radioxenon data from the Comprehensive Test Ban Treaty (CTBT) collection site in Ussuriysk, Russia (RN58) after the Democratic People's Republic of Korea (North Korea) underground nuclear test on February 12, 2013 near Punggye. The metrics were found to be a good screening tool to locate data segments with a strong likelihood of origin from Punggye, especially when multiplied together to determine the joint probability. Metrics from RN58 were also used to find the probability that activity measured in February and April of 2013 originated from the February 12 test. A detailed analysis of an RN58 data segment from April 3/4, 2013 was also carried out for a grid of source locations around Punggye and identified Punggye as the most likely point of origin. Thus, the results support the strong possibility that radioxenon was emitted from the test site at various times in April and was detected intermittently at RN58, depending on the wind direction. The method does not locate unsuspected sources, but instead evaluates the probability of a source at a specified location. However, it can be extended to include a set of suspected sources. Extension of the method to higher resolution data sets, arbitrary sampling, and time-varying sources is discussed along with a path to evaluate uncertainty in the calculated probabilities.
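The screening step, regressing a simulated concentration signature against successive segments of the measurement series to obtain a time series of goodness-of-fit metrics, can be sketched with a squared correlation coefficient as the metric. The abstract does not specify the exact metric used, so the choice below is an assumption:

```python
import numpy as np

def goodness_of_fit_series(signature, measurements):
    """Slide a modeled concentration signature along a measurement series;
    return one goodness-of-fit metric (squared Pearson r) per position."""
    n = len(signature)
    metrics = []
    for i in range(len(measurements) - n + 1):
        segment = measurements[i:i + n]
        r = np.corrcoef(signature, segment)[0, 1]
        metrics.append(r * r)
    return np.array(metrics)

signature = np.array([1.0, 3.0, 2.0])                     # modeled pulse shape
measurements = np.array([0.0, 0.5, 1.0, 3.0, 2.0, 0.0])   # observed series
metrics = goodness_of_fit_series(signature, measurements)  # peaks where shapes match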
Towards Guided Underwater Survey Using Light Visual Odometry
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.
2017-02-01
A light distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on an image stream captured by a portable stereo rig attached to the embedded system. Captured images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit that computes the odometry. Relying on a standard ego-motion estimation approach, we speed up point matching between image quadruplets using a low-level point matching scheme based on the fast Harris operator and template matching that is invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate an a priori rough depth belief following the law of light divergence over distance. The rough depth is used to limit the point correspondence search zone, as the zone linearly depends on disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in computation time with respect to other approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer vision techniques.
Wang, Chong
2018-03-01
In the case of a point source in front of a panel, the wavefront of the incident wave is spherical. This paper discusses spherical sound waves transmitting through a finite sized panel. The focus is the forced sound transmission performance that predominates in the frequency range below the coincidence frequency. With the point source located along the centerline of the panel, the forced sound transmission coefficient is derived by introducing the sound radiation impedance for spherical incident waves. It is found that in addition to the panel mass, the forced sound transmission loss also depends on the distance from the source to the panel, as determined by the radiation impedance. Unlike the case of plane incident waves, the sound transmission performance of a finite sized panel does not necessarily converge to that of an infinite panel, especially when the source is away from the panel. For practical applications, the normal incidence sound transmission loss expression for plane incident waves can be used if the distance between the source and panel d and the panel surface area S satisfy d/S>0.5. When d/S ≈0.1, the diffuse field sound transmission loss expression may be a good approximation. An empirical expression for d/S=0 is also given.
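The practical guidance at the end of the abstract amounts to a regime selection on the ratio d/S. A sketch taking the quoted thresholds at face value; the tolerance around d/S ≈ 0.1 is an assumption of this sketch, not a value from the paper:

```python
def transmission_loss_regime(d, S):
    """Pick which classical TL expression approximates a point source in
    front of a finite panel, per the abstract's d/S guidelines
    (d: source-panel distance, S: panel surface area)."""
    ratio = d / S
    if ratio > 0.5:
        return "normal-incidence plane-wave TL"
    if abs(ratio - 0.1) < 0.05:  # "d/S ~ 0.1"; tolerance is illustrative
        return "diffuse-field TL"
    return "forced transmission with spherical-wave radiation impedance"
```

Between the two quoted thresholds, no simple closed-form shortcut is given, so the full forced-transmission derivation with the spherical-wave radiation impedance applies.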
Pointing error analysis of Risley-prism-based beam steering system.
Zhou, Yuan; Lu, Yafei; Hei, Mo; Liu, Guangcan; Fan, Dapeng
2014-09-01
Based on the vector form Snell's law, ray tracing is performed to quantify the pointing errors of Risley-prism-based beam steering systems, induced by component errors, prism orientation errors, and assembly errors. Case examples are given to elucidate the pointing error distributions in the field of regard and evaluate the allowances of the error sources for a given pointing accuracy. It is found that the assembly errors of the second prism will result in more remarkable pointing errors in contrast with the first one. The pointing errors induced by prism tilt depend on the tilt direction. The allowances of bearing tilt and prism tilt are almost identical if the same pointing accuracy is planned. All conclusions can provide a theoretical foundation for practical works.
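Ray tracing through Risley prisms starts from the vector form of Snell's law cited above. A minimal refraction routine under that formulation; the prism geometry and the component, orientation, and assembly error terms of the paper are not modeled here:

```python
import numpy as np

def refract(incident, normal, n1, n2):
    """Vector form of Snell's law: refract a unit direction `incident`
    at a surface with unit `normal` pointing against the incident ray,
    going from refractive index n1 into n2.
    Returns None on total internal reflection."""
    mu = n1 / n2
    cos_i = -np.dot(incident, normal)
    sin2_t = mu * mu * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return mu * incident + (mu * cos_i - cos_t) * normal
```

Tracing a beam through both prisms means applying this routine at each of the four surfaces with the appropriate (possibly error-perturbed) surface normals, which is how pointing errors propagate from the error sources to the exit beam.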
A Possible Magnetar Nature for IGR J16358-4726
NASA Technical Reports Server (NTRS)
Patel, S. K.; Zurita, J.; DelSanto, M.; Finger, M.; Kouveliotou, C.; Eichler, D.; Gogus, E.; Ubertini, P.; Walter, R.; Woods, P.;
2007-01-01
We present detailed spectral and timing analysis of the hard X-ray transient IGR J16358-4726 using multisatellite archival observations. A study of the source flux time history over 6 yr suggests that lower luminosity transient outbursts can be occurring in intervals of at most 1 yr. Joint spectral fits of the higher luminosity outburst using simultaneous Chandra ACIS and INTEGRAL ISGRI data reveal a spectrum well described by an absorbed power-law model with a high-energy cutoff plus an Fe line. We detected the 1.6 hr pulsations initially reported using Chandra ACIS also in the INTEGRAL ISGRI light curve and in subsequent XMM-Newton observations. Using the INTEGRAL data, we identified a spin-up of 94 s (Ṗ = 1.6 × 10^-4), which strongly points to a neutron star nature for IGR J16358-4726. Assuming that the spin-up is due to disk accretion, we estimate that the source magnetic field ranges between 10^13 and 10^15 G, depending on its distance, possibly supporting a magnetar nature for IGR J16358-4726.
Field quantization and squeezed states generation in resonators with time-dependent parameters
NASA Technical Reports Server (NTRS)
Dodonov, V. V.; Klimov, A. B.; Nikonov, D. E.
1992-01-01
The problem of electromagnetic field quantization is usually considered in textbooks under the assumption that the field occupies some empty box. The case when a nonuniform time-dependent dielectric medium is confined in some space region with time-dependent boundaries is studied. The basis of the subsequent consideration is the system of Maxwell's equations in linear passive time-dependent dielectric and magnetic medium without sources.
A new time-independent formulation of fractional release
NASA Astrophysics Data System (ADS)
Ostermöller, Jennifer; Bönisch, Harald; Jöckel, Patrick; Engel, Andreas
2017-03-01
The fractional release factor (FRF) gives information on the amount of a halocarbon that is released at some point into the stratosphere from its source form to the inorganic form, which can harm the ozone layer through catalytic reactions. The quantity is of major importance because it directly affects the calculation of the ozone depletion potential (ODP). In this context time-independent values are needed which, in particular, should be independent of the trends in the tropospheric mixing ratios (tropospheric trends) of the respective halogenated trace gases. For a given atmospheric situation, such FRF values would represent a molecular property. We analysed the temporal evolution of FRF from ECHAM/MESSy Atmospheric Chemistry (EMAC) model simulations for several halocarbons and nitrous oxide between 1965 and 2011 on different mean age levels and found that the widely used formulation of FRF yields highly time-dependent values. We show that this is caused by the way that the tropospheric trend is handled in the widely used calculation method of FRF. Taking into account chemical loss in the calculation of stratospheric mixing ratios reduces the time dependence in FRFs. Therefore we implemented a loss term in the formulation of the FRF and applied the parameterization of a mean arrival time to our data set. We find that the time dependence in the FRF can almost be compensated for by applying a new trend correction in the calculation of the FRF. We suggest that this new method should be used to calculate time-independent FRFs, which can then be used e.g. for the calculation of ODP.
Hršak, Hrvoje; Majer, Marija; Grego, Timor; Bibić, Juraj; Heinrich, Zdravko
2014-12-01
Dosimetry for Gamma-Knife requires detectors with high spatial resolution and minimal angular dependence of response. Angular dependence and end effect time for p-type silicon detectors (PTW Diode P and Diode E) and a PTW PinPoint ionization chamber were measured with Gamma-Knife beams. Weighted angular dependence correction factors were calculated for each detector, and the Gamma-Knife output factors were corrected for angular dependence and end effect time. Over the Gamma-Knife beam angle range of 84°-54°, Diode P shows a considerable angular dependence of 9% and 8% for the 18 mm and the 14, 8 and 4 mm collimators, respectively. For Diode E this dependence is about 4% for all collimators. The PinPoint ionization chamber shows an angular dependence of less than 3% for the 18, 14 and 8 mm helmets and 10% for the 4 mm collimator, due to the volumetric averaging effect in a small photon beam. Corrected output factors for the 14 mm helmet are in very good agreement (within ±0.3%) with published data and values recommended by the vendor (Elekta AB, Stockholm, Sweden). For the 8 mm collimator the diodes are still in good agreement with recommended values (within ±0.6%), while the PinPoint gives a 3% lower value. For the 4 mm helmet, Diodes P and E show an over-response of 2.8% and 1.8%, respectively. For the PinPoint chamber, the output factor of the 4 mm collimator is 25% lower than the Elekta value, which is generally not a consequence of angular dependence but of the volumetric averaging effect and the lack of lateral electronic equilibrium. Diodes P and E represent a good choice for Gamma-Knife dosimetry. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Wang, Ce; Bi, Jun; Zhang, Xu-Xiang; Fang, Qiang; Qi, Yi
2018-05-25
An influent river carrying cumulative watershed load plays a significant role in promoting nuisance algal blooms in a river-fed lake. It is therefore important to discern in-stream water quality exceedances and evaluate the spatial relationship between a risk location and potential pollution sources. However, no comprehensive studies of grid-based source tracking in watersheds have been conducted for refined water quality management, particularly for plain terrain with a complex river network. In this study, field investigations were implemented during 2014 in the Taige Canal watershed of Taihu Lake Basin. A Geographical Information System (GIS)-based spatial relationship model was established to characterize the spatial relationships of "point (point-source location and monitoring site)-line (river segment)-plane (catchment)." As a practical exemplification, in-time source tracking was triggered on April 15, 2015 at Huangnianqiao station, where TN and TP concentrations violated the water quality standard (TN 4.0 mg/L, TP 0.15 mg/L). Of the target grid cells, 53 and 46 were identified as crucial areas with high pollution intensity for TN and TP pollution, respectively. The estimated non-point source load in each grid cell could be apportioned into different source types based on spatially pollution-related entity objects. We found that the non-point source loads derived from rural sewage and from livestock and poultry breeding accounted for more than 80% of the total TN or TP load, far exceeding the remaining source type, crop farming. The approach in this study would be of great benefit to local authorities for identifying seriously polluted regions and efficiently making environmental policies to reduce watershed load.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nitao, J J
The goal of the Event Reconstruction Project is to find the location and strength of atmospheric release points, both stationary and moving. Source inversion relies on observational data as input. The methodology is sufficiently general to allow various forms of data. In this report, the authors will focus primarily on concentration measurements obtained at point monitoring locations at various times. The algorithms being investigated in the Project are the MCMC (Markov Chain Monte Carlo) and SMC (Sequential Monte Carlo) methods, classical inversion methods, and hybrids of these. They refer the reader to the report by Johannesson et al. (2004) for explanations of these methods. These methods require computing the concentrations at all monitoring locations for a given "proposed" source characteristic (locations and strength history). It is anticipated that the largest portion of the CPU time will be spent performing this computation. MCMC and SMC will require this computation to be done at least tens of thousands of times. Therefore, an efficient means of computing forward model predictions is important to making the inversion practical. In this report they show how Green's functions and reciprocal Green's functions can significantly accelerate forward model computations. First, instead of computing a plume for each possible source strength history, they can compute plumes from unit impulse sources only. By using linear superposition, they can obtain the response for any strength history. This response is given by the forward Green's function. Second, they may use the law of reciprocity. Suppose that they require the concentration at a single monitoring point x_m due to a potential (unit impulse) source located at x_s. Instead of computing a plume with source location x_s, they compute a "reciprocal plume" whose (unit impulse) source is at the monitoring location x_m.
The reciprocal plume is computed using a reversed-direction wind field. The wind field and transport coefficients must also be appropriately time-reversed. Reciprocity says that the concentration of the reciprocal plume at x_s is related to the desired concentration at x_m. Since there are far fewer monitoring points than potential source locations, the number of forward model computations is drastically reduced.
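The superposition argument above, that the response to any source strength history is the discrete convolution of that history with the unit-impulse (Green's function) response, can be sketched directly; the time series below are illustrative:

```python
import numpy as np

def concentration_at_monitor(green, strength_history):
    """Linear superposition: the concentration time series at a monitor is
    the discrete convolution of the unit-impulse response (Green's function)
    with the source strength history, truncated to the observation window."""
    return np.convolve(strength_history, green)[:len(strength_history)]

green = np.array([1.0, 0.5])            # unit-impulse response (made up)
strength = np.array([2.0, 0.0, 0.0])    # proposed source strength history
conc = concentration_at_monitor(green, strength)
```

This is why only the unit-impulse plumes need to be simulated with the atmospheric model: any proposed strength history is then evaluated by a cheap convolution rather than a new forward run.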
Oil and the American Way of Life: Don't Ask, Don't Tell
Kaufmann, Robert [Boston University, Boston, Massachusetts, United States]
2018-04-19
In the coming decades, US consumers will face a series of important decisions about oil. To make effective decisions, consumers must confront some disturbing answers to questions they would rather not ask. These questions include: is the US running out of oil, is the world running out of oil, is OPEC increasing its grip on prices, is the US economy reducing its dependence on energy, and will the competitive market address these issues in a timely fashion? Answers to these questions indicate that the market will not address these issues: the US has already run out of inexpensive sources of oil such that rising prices no longer elicit significant increases in supply. The US experience implies that within a couple of decades, the world oil market will change from increasing supply at low prices to decreasing supply at higher prices. As the world approaches this important turning point, OPEC will strengthen its grip on world oil prices. Contrary to popular belief, the US economy continues to be highly dependent on energy, especially inexpensive sources of energy. Together, these trends threaten to undermine the basic way in which the US economy generates a high standard of living.
Improved source inversion from joint measurements of translational and rotational ground motions
NASA Astrophysics Data System (ADS)
Donner, S.; Bernauer, M.; Reinwald, M.; Hadziioannou, C.; Igel, H.
2017-12-01
Waveform inversion for seismic point (moment tensor) and kinematic sources is a standard procedure. However, especially at local and regional distances, a lack of appropriate velocity models, the sparsity of station networks, or a low signal-to-noise ratio combined with more complex waveforms hampers the successful retrieval of reliable source solutions. We assess the potential of rotational ground motion recordings to increase the resolution power and reduce non-uniqueness for point and kinematic source solutions. Based on synthetic waveform data, we perform a Bayesian (i.e. probabilistic) inversion. Thus, we avoid the subjective selection of the most reliable solution according to the lowest misfit or some other constructed criterion. In addition, we obtain unbiased measures of resolution and possible trade-offs. Testing different earthquake mechanisms and scenarios, we show that the resolution of the source solutions can be improved significantly; depth-dependent components in particular show significant improvement. In addition to synthetic data for station networks, we also tested sparse-network and single-station cases.
Processing challenges in the XMM-Newton slew survey
NASA Astrophysics Data System (ADS)
Saxton, Richard D.; Altieri, Bruno; Read, Andrew M.; Freyberg, Michael J.; Esquej, M. P.; Bermejo, Diego
2005-08-01
The great collecting area of the mirrors coupled with the high quantum efficiency of the EPIC detectors have made XMM-Newton the most sensitive X-ray observatory flown to date. This is particularly evident during slew exposures which, while giving only 15 seconds of on-source time, actually constitute a 2-10 keV survey ten times deeper than current "all-sky" catalogues. Here we report on progress towards making a catalogue of slew detections constructed from the full, 0.2-12 keV energy band and discuss the challenges associated with processing the slew data. The fast (90 degrees per hour) slew speed results in images which are smeared, by different amounts depending on the readout mode, effectively changing the form of the point spread function. The extremely low background in slew images changes the optimum source searching criteria such that searching a single image using the full energy band is seen to be more sensitive than splitting the data into discrete energy bands. False detections due to optical loading by bright stars, the wings of the PSF in very bright sources and single-frame detector flashes are considered and techniques for identifying and removing these spurious sources from the final catalogue are outlined. Finally, the attitude reconstruction of the satellite during the slewing maneuver is complex. We discuss the implications of this on the positional accuracy of the catalogue.
SEISRISK II; a computer program for seismic hazard estimation
Bender, Bernice; Perkins, D.M.
1982-01-01
The computer program SEISRISK II calculates probabilistic ground motion values for use in seismic hazard mapping. SEISRISK II employs a model that allows earthquakes to occur as points within source zones and as finite-length ruptures along faults. It assumes that earthquake occurrences have a Poisson distribution, that occurrence rates remain constant during the time period considered, that ground motion resulting from an earthquake is a known function of magnitude and distance, that seismically homogeneous source zones are defined, that fault locations are known, that fault rupture lengths depend on magnitude, and that earthquake rates as a function of magnitude are specified for each source. SEISRISK II calculates for each site on a grid of sites the level of ground motion that has a specified probability of being exceeded during a given time period. The program was designed to process a large (essentially unlimited) number of sites and sources efficiently and has been used to produce regional and national maps of seismic hazard. It is a substantial revision of an earlier program SEISRISK I, which has never been documented. SEISRISK II runs considerably faster and gives more accurate results than the earlier program and in addition includes rupture length and acceleration variability which were not contained in the original version. We describe the model and how it is implemented in the computer program and provide a flowchart and listing of the code.
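Under the Poisson occurrence assumption stated above, the probability that a ground-motion level is exceeded at least once in a given time period follows directly from the annual exceedance rate. A sketch of that standard relation; SEISRISK II computes the rate from source zones and attenuation, whereas here it is simply an input:

```python
import math

def exceedance_probability(annual_rate, years):
    """Poisson occurrence model: probability of at least one exceedance
    of a ground-motion level in `years`, given the mean annual rate at
    which that level is exceeded."""
    return 1.0 - math.exp(-annual_rate * years)
```

Hazard maps are typically stated the other way around, e.g. the ground-motion level with a 10% probability of exceedance in 50 years, which corresponds to inverting this relation for the rate.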
An investigation on nuclear energy policy in Turkey and public perception
NASA Astrophysics Data System (ADS)
Coskun, Mehmet Burhanettin; Tanriover, Banu
2016-11-01
Turkey, which meets nearly 70 per cent of its energy demand through imports, is facing problems of energy security and a current account deficit as a result of its dependence on foreign sources of energy inputs. It is also known that Turkey is having environmental problems due to increases in CO2 emissions. Considering these problems in the Turkish economy, where energy inputs are widely used, it is necessary to use energy sources efficiently and to provide alternative energy sources. Because renewable sources depend on meteorological conditions (the absence of sufficient sun, wind and water), energy generation from these sources cannot be provided efficiently and permanently. At this point, nuclear energy as an alternative maintains its importance as a sustainable source that provides energy 24 hours a day, 7 days a week. The main purpose of this study is to evaluate nuclear energy within the context of the negative public perceptions that emerged after the Chernobyl (1986) and Fukushima (2011) disasters and to investigate it in an economic framework.
Simulation of Solar Energy Use in Livelihood of Buildings
NASA Astrophysics Data System (ADS)
Lvocich, I. Ya; Preobrazhenskiy, A. P.; Choporov, O. N.
2017-11-01
Solar energy can be considered the most technological and economical type of renewable energy. The purpose of the paper is to increase the efficiency of solar energy utilization on the basis of mathematical simulation of a solar collector. A mathematical model of radiant heat transfer in a vacuum solar collector is clarified. The model is based on the process of radiative heat transfer between glass and copper walls with defined blackness degrees. A mathematical model of the ether phase transition point is developed. The dependence of the reservoir wall temperature on the ambient temperature over time is obtained. The results of the paper can be useful for the development of prospective sources using solar energy.
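Radiative exchange between two gray surfaces with given blackness degrees (emissivities) can be illustrated with the classical gray-surface formula. This parallel-plate idealization is an assumption of the sketch, not the paper's full glass/copper collector model:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_flux(t_hot, t_cold, eps_hot, eps_cold):
    """Net radiative heat flux (W/m^2) between two gray surfaces with the
    given emissivities, in the parallel-plate approximation:
    q = sigma (T_hot^4 - T_cold^4) / (1/eps_hot + 1/eps_cold - 1)."""
    return SIGMA * (t_hot ** 4 - t_cold ** 4) / (1.0 / eps_hot + 1.0 / eps_cold - 1.0)
```

Lower emissivities (smaller blackness degrees) reduce the exchanged flux, which is the lever such models use to predict collector wall temperatures over time.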
Advanced Optimal Extraction for the Spitzer/IRS
NASA Astrophysics Data System (ADS)
Lebouteiller, V.; Bernard-Salas, J.; Sloan, G. C.; Barry, D. J.
2010-02-01
We present new advances in the spectral extraction of pointlike sources adapted to the Infrared Spectrograph (IRS) on board the Spitzer Space Telescope. For the first time, we created a supersampled point-spread function of the low-resolution modules. We describe how to use the point-spread function to perform optimal extraction of a single source and of multiple sources within the slit. We also examine the case of the optimal extraction of one or several sources with a complex background. The new algorithms are gathered in a plug-in called AdOpt which is part of the SMART data analysis software.
NASA Astrophysics Data System (ADS)
Kuttruff, Heinrich; Mommertz, Eckard
The traditional task of room acoustics is to create or formulate conditions which ensure the best possible propagation of sound in a room from a sound source to a listener. Objects of room acoustics are thus in particular assembly halls of all kinds, such as auditoria and lecture halls, conference rooms, theaters, concert halls or churches. It must be pointed out at the outset that these conditions depend essentially on whether speech or music is to be transmitted: in the first case, the criterion for transmission quality is good speech intelligibility; in the second, the success of room-acoustical efforts depends on other factors that cannot be quantified as easily, not least the listening habits of the audience. In any case, absolutely "good acoustics" of a room do not exist.
Development of Additional Hazard Assessment Models
1977-03-01
globules, their trajectory (the distance from the spill point to the impact point on the river bed), and the time required for sinking. Established theories ... chemicals, the dissolution rate is estimated by using eddy diffusivity surface renewal theories. The validity of predictions of these theories has been ... theories and experimental data on aeration of rivers. * Describe dispersion in rivers with stationary area source and sources moving with the stream
A search for novae in M 31 globular clusters
NASA Astrophysics Data System (ADS)
Ciardullo, Robin; Tamblyn, Peter; Phillips, A. C.
1990-10-01
By combining a local sky-fitting algorithm with a Fourier point-spread-function matching technique, nova outbursts have been searched for inside 54 of the globular clusters contained on the Ciardullo et al. (1987 and 1990) H-alpha survey frames of M 31. Over a mean effective survey time of about 2.0 years, no cluster exhibited a magnitude increase indicative of a nova explosion. If the cataclysmic variables (CVs) contained within globular clusters are similar to those found in the field, then these data imply that the overdensity of CVs within globulars is at least several times less than that of the high-luminosity X-ray sources. If tidal capture is responsible for the high density of hard binaries within globulars, then the probability of capturing condensed objects inside globular clusters may depend strongly on the mass of the remnant.
Sparse electrocardiogram signals recovery based on solving a row echelon-like form of system.
Cai, Pingmei; Wang, Guinan; Yu, Shiwei; Zhang, Hongjuan; Ding, Shuxue; Wu, Zikai
2016-02-01
The study of biology and medicine in noisy environments is an evolving direction in biological data analysis. Among these studies, the analysis of electrocardiogram (ECG) signals in a noisy environment is a challenging direction in personalized medicine. Due to their periodic character, ECG signals can be roughly regarded as sparse biomedical signals. This study proposes a two-stage recovery algorithm for sparse biomedical signals in the time domain. In the first stage, the concentration subspaces are found in advance; by exploiting these subspaces, the mixing matrix is estimated accurately. In the second stage, the time points are divided into different layers based on the number of active sources at each time point. Next, by constructing transformation matrices, these time points form a row echelon-like system, after which the sources at each layer can be solved explicitly by the corresponding matrix operations. It is worth noting that all these operations are conducted under a weak sparsity condition: the number of active sources is less than the number of observations. Experimental results show that the proposed method performs better on the sparse ECG signal recovery problem.
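The second-stage idea, that once fewer sources are active than there are observations each time point can be solved explicitly on its active support, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the supports are assumed known here (in the paper they are inferred), ordinary least squares stands in for the row echelon-like transformation machinery, and all dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixing model: X = A @ S, with m observations and n sources (m < n).
m, n, T = 4, 6, 50
A = rng.standard_normal((m, n))

# Ground-truth sources: at each time point at most k < m sources are active.
k = 2
S = np.zeros((n, T))
for t in range(T):
    support = rng.choice(n, size=k, replace=False)
    S[support, t] = rng.standard_normal(k)
X = A @ S

def recover_sources(X, A, supports):
    """Recover sources column by column, assuming the active support at each
    time point is known and smaller than the number of observations."""
    n, T = A.shape[1], X.shape[1]
    S_hat = np.zeros((n, T))
    for t, support in enumerate(supports):
        # Restricting A to the active columns gives a system with at least
        # as many equations as unknowns, which least squares solves exactly
        # in this noiseless setting.
        sol, *_ = np.linalg.lstsq(A[:, support], X[:, t], rcond=None)
        S_hat[support, t] = sol
    return S_hat

supports = [np.flatnonzero(S[:, t]) for t in range(T)]
S_hat = recover_sources(X, A, supports)
```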
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hongfen, E-mail: wanghongfen11@163.com; Wang, Zhiqi; Chen, Shougang
Molybdenum carbides were prepared, with surfactants as carbon sources, by carbothermal reduction of the appropriate precursors (molybdenum oxides deposited on surfactant micelles) at 1023 K under hydrogen gas. The carburized products were characterized using scanning electron microscopy (SEM), X-ray diffraction, and BET surface area measurements. The SEM images revealed hollow microspherical and rod-like molybdenum carbides. X-ray diffraction patterns showed that the annealing time of carburization had a large effect on the conversion of molybdenum oxides to molybdenum carbides, and BET measurements indicated that the choice of carbon source produced a large difference in the specific surface areas of the molybdenum carbides. - Graphical abstract: Molybdenum carbides having hollow microspherical and hollow rod-like morphologies that differ from the conventional monodispersed platelet-like morphologies. Highlights: • Molybdenum carbides were prepared using surfactants as carbon sources. • The kinds of surfactants affected the morphologies of the molybdenum carbides. • The time of heat preservation at 1023 K affected the carburization process. • Molybdenum carbides with hollow structures had larger specific surface areas.
NASA Astrophysics Data System (ADS)
Jin, Chichuan; Ponti, Gabriele; Haberl, Frank; Smith, Randall; Valencic, Lynne
2018-07-01
AX J1745.6-2901 is an eclipsing low-mass X-ray binary in the Galactic Centre (GC). It shows significant X-ray excess emission during the eclipse phase, and its eclipse light curve shows an asymmetric shape. We use archival XMM-Newton and Chandra observations to study the origin of these peculiar X-ray eclipsing phenomena. We find that the shape of the observed X-ray eclipse light curves depends on both photon energy and the shape of the source extraction region, and also shows differences between the two instruments. By performing detailed simulations of the time-dependent X-ray dust-scattering halo, as well as directly modelling the observed eclipse and non-eclipse halo profiles of AX J1745.6-2901, we obtained solid evidence that its peculiar eclipse phenomena are indeed caused by X-ray dust scattering in multiple foreground dust layers along the line of sight (LOS). The apparent dependence on the instruments is caused by the different instrumental point spread functions. Our results can be used to assess the influence of dust scattering in other eclipsing X-ray sources, and highlight the importance of considering the timing effects of the dust-scattering halo when studying the variability of other X-ray sources in the GC, such as Sgr A⋆. Moreover, our study of the halo eclipse reinforces the existence of a dust layer local to AX J1745.6-2901 as reported by Jin et al. (2017), and identifies another dust layer within a few hundred parsecs of the Earth, containing up to several tens of percent of the LOS dust, which is likely to be associated with the molecular clouds in the Solar neighbourhood. The remaining LOS dust is likely to be associated with the molecular clouds located in the Galactic disc in between.
COST-EFFECTIVE ALLOCATION OF WATERSHED MANAGEMENT PRACTICES USING A GENETIC ALGORITHM
Implementation of conservation programs is perceived as crucial for restoring and protecting waters and watersheds from non-point source pollution. The success of these programs depends to a great extent on planning tools that can assist the watershed management process. Here-...
NASA Astrophysics Data System (ADS)
Cassan, Arnaud
2017-07-01
The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the satellites Spitzer and Kepler/K2. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method to compute the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six and four times faster, respectively. The quadrupole takes only about twice the time of a point-source evaluation, which argues for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source Python codes.
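For context, the point-source magnification that the quadrupole and hexadecapole formulas correct is itself cheap to evaluate. A minimal sketch (not the author's code; the rectilinear-trajectory parametrization is the standard textbook form and all parameter values are generic):

```python
import numpy as np

def point_source_magnification(u):
    """Point-source, point-lens magnification as a function of the
    lens-source separation u (in Einstein-radius units)."""
    u = np.asarray(u, dtype=float)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def separation(t, t0, u0, tE):
    """Rectilinear source trajectory: impact parameter u0, peak time t0,
    Einstein-radius crossing time tE."""
    return np.sqrt(u0**2 + ((t - t0) / tE) ** 2)

# Example light curve: a u0 = 0.1 event peaks at a magnification near 10.
t = np.linspace(-30.0, 30.0, 601)   # days
A = point_source_magnification(separation(t, t0=0.0, u0=0.1, tE=20.0))
```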
Representations and uses of light distribution functions
NASA Astrophysics Data System (ADS)
Lalonde, Paul Albert
1998-11-01
At their lowest level, all rendering algorithms depend on models of local illumination to define the interplay of light with the surfaces being rendered. These models depend both on the representation of light scattering at a surface due to reflection and, to an equal extent, on the representation of light sources and light fields. Emission and reflection have in common that they describe how light leaves a surface as a function of direction. Reflection also depends on an incident light direction, and emission can depend on the position on the light source. We call the functions representing emission and reflection light distribution functions (LDFs). There are some difficulties in using measured light distribution functions. The data sets are very large: the size of the data grows with the fourth power of the sampling resolution. For example, a bidirectional reflectance distribution function (BRDF) sampled at five degrees angular resolution, which is arguably insufficient to capture highlights and other high-frequency effects in the reflection, can easily require one and a half million samples. Once acquired, the data require some form of interpolation to be usable. Any compression method used must be efficient, both in space and in the time required to evaluate the function at a point or over a range of points. This dissertation examines a wavelet representation of light distribution functions that addresses these issues. A data structure is presented that allows efficient reconstruction of LDFs for a given set of parameters, making the wavelet representation feasible for rendering tasks. Texture mapping methods that take advantage of our LDF representations are examined, as well as techniques for filtering LDFs and methods for using wavelet-compressed bidirectional reflectance distribution functions (BRDFs) and light sources with Monte Carlo path tracing algorithms.
The wavelet representation effectively compresses BRDF and emission data while inducing only a small error in the reconstructed signal. The representation can be used to efficiently evaluate some integrals that appear in shading computations, which allows fast, accurate computation of local shading. It can be used to represent light fields and to reconstruct views of environments interactively from a precomputed set of views. The representation of the BRDF also allows the efficient generation of reflected directions for Monte Carlo ray tracing applications. The method can be integrated into many different global illumination algorithms, including ray tracers and wavelet radiosity systems.
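The compress-by-thresholding idea can be illustrated with a one-dimensional Haar transform. This is a minimal stand-in for the dissertation's LDF representation (the actual work uses higher-dimensional wavelets over several angular parameters), applied to an arbitrary smooth test signal:

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal 1-D Haar transform."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # averages
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # details
    return np.concatenate([s, d])

def ihaar_1d(y):
    half = len(y) // 2
    s, d = y[:half], y[half:]
    x = np.empty_like(y)
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

def haar(x):
    """Full multilevel Haar transform (length must be a power of two)."""
    y = np.asarray(x, dtype=float).copy()
    n = len(y)
    while n > 1:
        y[:n] = haar_1d(y[:n])
        n //= 2
    return y

def ihaar(y):
    x = np.asarray(y, dtype=float).copy()
    n = 2
    while n <= len(x):
        x[:n] = ihaar_1d(x[:n])
        n *= 2
    return x

# Compress a smooth "reflectance-like" signal: transform, keep only the
# largest-magnitude coefficients, reconstruct.
n = 256
x = np.cos(np.linspace(0.0, np.pi, n)) ** 2
y = haar(x)
keep = 32                                    # 8:1 compression
thresh = np.sort(np.abs(y))[-keep]
y_compressed = np.where(np.abs(y) >= thresh, y, 0.0)
x_rec = ihaar(y_compressed)
rel_error = np.linalg.norm(x_rec - x) / np.linalg.norm(x)
```

Because the signal is smooth, most detail coefficients are tiny, so discarding seven-eighths of them changes the reconstruction only slightly.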
User's guide for RAM. Volume II. Data preparation and listings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, D.B.; Novak, J.H.
1978-11-01
The information presented in this user's guide is directed to air pollution scientists having an interest in applying air quality simulation models. RAM is a method of estimating short-term dispersion using the Gaussian steady-state model. These algorithms can be used for estimating air quality concentrations of relatively nonreactive pollutants for averaging times from an hour to a day from point and area sources. The algorithms are applicable for locations with level or gently rolling terrain where a single wind vector for each hour is a good approximation to the flow over the source area considered. Calculations are performed for each hour. Hourly meteorological data required are wind direction, wind speed, temperature, stability class, and mixing height. Emission information required of point sources consists of source coordinates, emission rate, physical height, stack diameter, stack gas exit velocity, and stack gas temperature. Emission information required of area sources consists of southwest corner coordinates, source side length, total area emission rate and effective area source-height. Computation time is kept to a minimum by the manner in which concentrations from area sources are estimated using a narrow plume hypothesis and using the area source squares as given rather than breaking down all sources into an area of uniform elements. Options are available to the user to allow use of three different types of receptor locations: (1) those whose coordinates are input by the user, (2) those whose coordinates are determined by the model and are downwind of significant point and area sources where maxima are likely to occur, and (3) those whose coordinates are determined by the model to give good area coverage of a specific portion of the region. Computation time is also decreased by keeping the number of receptors to a minimum. Volume II presents RAM example outputs, typical run streams, variable glossaries, and Fortran source codes.
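The Gaussian steady-state point-source formula at the core of such models can be sketched as follows. This is the generic textbook form with ground reflection, not RAM's Fortran; in RAM the dispersion parameters σy and σz are derived from stability class and downwind distance, whereas here they are simply supplied, and all numbers are illustrative:

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration at a receptor offset y (m)
    from the plume centreline at height z (m), for a point source of
    strength q (g/s) and effective height h (m) in a wind of speed u (m/s).
    The second vertical term models reflection at the ground."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centreline concentration for a 100 g/s source at 50 m
# effective height (sigma values here are stand-ins for the tabulated
# stability-class curves at some downwind distance).
c = gaussian_plume(q=100.0, u=5.0, y=0.0, z=0.0,
                   h=50.0, sigma_y=80.0, sigma_z=40.0)
```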
The gamma ray continuum spectrum from the galactic center disk and point sources
NASA Technical Reports Server (NTRS)
Gehrels, Neil; Tueller, Jack
1992-01-01
A light curve of gamma-ray continuum emission from point sources in the galactic center region is generated from balloon and satellite observations made over the past 25 years. The emphasis is on the wide field-of-view instruments which measure the combined flux from all sources within approximately 20 degrees of the center. These data have not been previously used for point-source analyses because of the unknown contribution from diffuse disk emission. In this study, the galactic disk component is estimated from observations made by the Gamma Ray Imaging Spectrometer (GRIS) instrument in Oct. 1988. Surprisingly, there are several times during the past 25 years when all gamma-ray sources (at 100 keV) within about 20 degrees of the galactic center are turned off or are in low emission states. This implies that the sources are all variable and few in number. The continuum gamma-ray emission below approximately 150 keV from the black hole candidate 1E1740.7-2942 is seen to turn off in May 1989 on a time scale of less than two weeks, significantly shorter than ever seen before. With the continuum below 150 keV turned off, the spectral shape derived from the HEXAGONE observation on 22 May 1989 is very peculiar with a peak near 200 keV. This source was probably in its normal state for more than half of all observations since the mid-1960's. There are only two observations (in 1977 and 1979) for which the sum flux from the point sources in the region significantly exceeds that from 1E1740.7-2942 in its normal state.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.
Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
Gridded national inventory of U.S. methane emissions
Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; ...
2016-11-16
Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
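The core disaggregation step, allocating a national total for one source type onto grid cells in proportion to a spatial proxy (such as well counts or livestock numbers), can be sketched as follows; the grid, proxy values, and total here are hypothetical, not actual GHGI data:

```python
import numpy as np

def disaggregate(national_total, proxy):
    """Allocate a national emission total onto a grid using a spatial proxy:
    each cell receives emissions in proportion to its proxy value, so the
    national total is conserved exactly."""
    proxy = np.asarray(proxy, dtype=float)
    weights = proxy / proxy.sum()
    return national_total * weights

# Toy example: 6.8 units of one source type spread over a 3 x 3 grid.
proxy = np.array([[0.0, 2.0, 1.0],
                  [4.0, 8.0, 3.0],
                  [0.0, 1.0, 1.0]])
grid = disaggregate(6.8, proxy)
```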
The Raptor Real-Time Processing Architecture
NASA Astrophysics Data System (ADS)
Galassi, M.; Starr, D.; Wozniak, P.; Brozdin, K.
The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback, etc.) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as ``open source'' software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.
Raptor -- Mining the Sky in Real Time
NASA Astrophysics Data System (ADS)
Galassi, M.; Borozdin, K.; Casperson, D.; McGowan, K.; Starr, D.; White, R.; Wozniak, P.; Wren, J.
2004-06-01
The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback...) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as ``open source'' software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.
Spatial and temporal dependence of the convective electric field in Saturn’s inner magnetosphere
NASA Astrophysics Data System (ADS)
Andriopoulou, M.; Roussos, E.; Krupp, N.; Paranicas, C.; Thomsen, M.; Krimigis, S.; Dougherty, M. K.; Glassmeier, K.-H.
2014-02-01
The recently established presence of a convective electric field in Saturn’s inner and middle magnetosphere, pointing on average approximately towards midnight with an intensity of less than 1 mV/m, is one of the most puzzling findings of the Cassini spacecraft. In order to better characterize the properties of this electric field, we augmented the original analysis method used to identify it (Andriopoulou et al., 2012) and applied it to an extended energetic electron microsignature dataset, constructed from observations in the vicinity of four saturnian moons. We study the average characteristics of the convective pattern as well as its temporal and spatial variations. In our updated dataset we include data from the recent Cassini orbits and also microsignatures from the two moons Rhea and Enceladus, allowing us to extend this analysis to cover a greater time period as well as larger radial distances within the saturnian magnetosphere. When data from the larger radial range and more recent orbits are included, we find that the originally inferred electric field pattern persists, and in fact penetrates at least as far in as the orbit of Enceladus, a region of particular interest due to the plasma loading that takes place there. We perform our electric field calculations by setting the orientation of the electric field as a free, time-dependent parameter, removing the pointing constraints of previous works. Both analytical and numerical techniques have been employed, which help us overcome possible errors that could have been introduced by the simplified assumptions used previously. We find that the average electric field pointing is not directed exactly at midnight, as we initially assumed, but is stably displaced by approximately 12-32° from midnight, towards dawn.
The fact, however, that the field’s pointing is much more variable on short time scales, in addition to our observations that it penetrates inside the orbit of Enceladus (∼4 Rs), may suggest that the convective pattern dominates all the way down to the main rings (2.2 Rs), when data from the Saturn Orbit Insertion are factored in. We also report changes of the electric field strength and pointing over the course of time, possibly related to seasonal effects, with the largest changes occurring during a period that envelopes the saturnian equinox. Finally, the average electric field strength seems to be sensitive to radial distance, exhibiting a drop as we move further out in the magnetosphere, confirming earlier results. This drop-off, however, appears to be more intense in the earlier years of the mission. Between 2010 and 2012 the electric field is quasi-uniform, at least between the L-shells of Tethys and Dione. These new findings provide constraints on the possible electric field sources that might cause such a convection pattern, which has not been observed before in other planetary magnetospheres. The very well defined values of the field’s average properties may suggest a periodic variation of the convective pattern, which can average out very effectively the much larger changes in both pointing and intensity over short time scales, although this period cannot be determined. The slight evidence of changes in the properties across the equinox (seasonal control) may also hint that the source of the electric field resides in the planet’s atmosphere/ionosphere system.
Backward semi-linear parabolic equations with time-dependent coefficients and local Lipschitz source
NASA Astrophysics Data System (ADS)
Nho Hào, Dinh; Van Duc, Nguyen; Van Thang, Nguyen
2018-05-01
Let H be a Hilbert space with inner product ⟨·, ·⟩ and norm ‖·‖, and let A(t) be a positive self-adjoint unbounded time-dependent operator on H. We establish stability estimates of Hölder type and propose a regularization method with error estimates of Hölder type for the ill-posed backward semi-linear parabolic equation with the source function f satisfying a local Lipschitz condition.
NASA Astrophysics Data System (ADS)
Dujardin, Alain; Courboulex, Françoise; Causse, Matthieu; Traversa, Paola; Monfret, Tony
2013-04-01
Ground motion decay with distance presents a clear magnitude dependence: PGA values of small events decrease faster than those of larger events. This observation is now widely accepted and often taken into account in recent ground motion prediction equations (Anderson 2005, Akkar & Bommer 2010). The aim of this study is to investigate the origin of this dependence, which has not yet been clearly identified. Two main hypotheses are considered: on one hand, the difference in ground motion decay is related to an attenuation effect; on the other hand, it is related to an effect of the extended fault (Anderson 2000). To study the role of attenuation, we performed synthetic tests using the stochastic simulation program SMSIM of Boore (2005). We built a set of simulations for several magnitudes and epicentral distances, and observe that the decay of PGA values is strongly dependent on the spectral shape of the Fourier spectra, which in turn strongly depends on the attenuation factor (Q(f) or kappa). We found that, for a point-source approximation and an infinite value of Q (no attenuation), there is no difference between small and large events, and that this difference increases when Q decreases. These results show that the influence of attenuation on spectral shape is different for earthquakes of different magnitudes. In fact the influence of attenuation, which is more important at higher frequencies, is larger for small earthquakes, whose Fourier acceleration spectra are dominated by higher frequencies. We then studied the effect of an extended source using complete waveform simulations in a 1D model. We find that when the duration of the source time function increases, there is a higher probability of obtaining large PGA values at equivalent distances. This effect could also play an important role in the PGA decay with magnitude and distance. Finally, we compare these results with real datasets from the Japanese accelerometric network KiK-net.
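The mechanism can be sketched with a generic omega-squared source spectrum and a kappa attenuation operator. This is a minimal illustration, not an SMSIM run; the corner frequencies and kappa value are arbitrary. Because the exponential attenuation acts at high frequencies, it removes a much larger fraction of the peak spectral amplitude of a small event (high corner frequency) than of a large one (low corner frequency):

```python
import numpy as np

def accel_spectrum(f, fc, kappa):
    """Omega-squared acceleration source spectrum (arbitrary moment scale),
    multiplied by a near-site attenuation factor exp(-pi * kappa * f)."""
    source = (2.0 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2)
    return source * np.exp(-np.pi * kappa * f)

f = np.linspace(0.1, 50.0, 2000)   # frequency grid, Hz
kappa = 0.04                       # s, an assumed near-site attenuation value

# Fraction of the spectral peak surviving attenuation: a small event
# (fc = 10 Hz) retains far less than a large event (fc = 0.5 Hz).
ratios = {}
for fc in (10.0, 0.5):
    ratios[fc] = (accel_spectrum(f, fc, kappa).max()
                  / accel_spectrum(f, fc, 0.0).max())
```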
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1977-01-01
Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
NASA Astrophysics Data System (ADS)
Petr, Rodney; Bykanov, Alexander; Freshman, Jay; Reilly, Dennis; Mangano, Joseph; Roche, Maureen; Dickenson, Jason; Burte, Mitchell; Heaton, John
2004-08-01
A high average power dense plasma focus (DPF) x-ray point source has been used to produce ~70 nm line features in AlGaAs-based monolithic millimeter-wave integrated circuits (MMICs). The DPF source has produced up to 12 J per pulse of x-ray energy into 4π steradians at ~1 keV effective wavelength in ~2 Torr neon at pulse repetition rates up to 60 Hz, with an effective x-ray yield efficiency of ~0.8%. The plasma temperature and electron concentration are estimated from the x-ray spectrum to be ~170 eV and ~5×10^19 cm^-3, respectively. The x-ray point source utilizes solid-state pulse power technology to extend the operating lifetime of electrodes and insulators in the DPF discharge. By eliminating current reversals in the DPF head, an anode electrode has demonstrated a lifetime of more than 5 million shots. The x-ray point source has also been operated continuously for 8 h run times at 27 Hz average pulse recurrence frequency. Measurements of shock waves produced by the plasma discharge indicate that overpressure pulses must be attenuated before a collimator can be integrated with the DPF point source.
Mathematical Fluid Dynamics of Store and Stage Separation
2005-05-01
coordinates r = stretched inner radius S, (x) = effective source strength Re, = transition Reynolds number t = time r = reflection coefficient T = temperature...wave drag due to lift integral has the same form as that due to thickness, the source strength of the equivalent body depends on streamwise derivatives...revolution in which the source strength S, (x) is proportional to the x rate of change of cross sectional area, the source strength depends on the streamwise
NASA Astrophysics Data System (ADS)
Okuwaki, R.; Kasahara, A.; Yagi, Y.
2017-12-01
The backprojection (BP) method has been one of the most powerful tools for tracking the seismic-wave sources of large and mega earthquakes. The BP method projects waveforms onto a possible source point by stacking them with the theoretical travel-time shifts between the source point and the stations. Following the BP method, the hybrid backprojection (HBP) method was developed to enhance the depth resolution of the projected images and to mitigate the dummy imaging of the depth phases, which are shortcomings of the BP method, by stacking cross-correlation functions of the observed waveforms and theoretically calculated Green's functions (GFs). The signal intensity of the BP/HBP image at a source point is related to how much of the observed waveform was radiated from that point. Since the amplitude of the GF associated with the slip rate increases with depth (because the rigidity increases with depth), the intensity of the BP/HBP image inherently depends on depth. To make a direct comparison of the BP/HBP image with the corresponding slip distribution inferred from a waveform inversion, and to discuss the rupture properties along the fault drawn from the waveforms at high and low frequencies with the BP/HBP methods and the waveform inversion, respectively, it is desirable to have variants of the BP/HBP methods that directly image the potency-rate-density distribution. Here we propose new formulations of the BP/HBP methods, which image the distribution of the potency-rate density by introducing alternative normalizing factors into the conventional formulations. For the BP method, the observed waveform is normalized by the maximum amplitude of the P-phase of the corresponding GF. For the HBP method, we normalize the cross-correlation function by the squared sum of the GF. The normalized waveforms or cross-correlation functions are then stacked over all stations to enhance the signal-to-noise ratio.
We will present performance-tests of the new formulations by using synthetic waveforms and the real data of the Mw 8.3 2015 Illapel Chile earthquake, and further discuss the limitations of the new BP/HBP methods proposed in this study when they are used for exploring the rupture properties of the earthquakes.
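The delay-and-stack operation at the heart of the BP method can be sketched as follows; this is a toy illustration with made-up travel times and an impulsive synthetic source, not the authors' implementation or their normalization:

```python
import numpy as np

def backproject(waveforms, dt, travel_times):
    """Delay-and-stack backprojection.

    waveforms    : (n_stations, n_samples) observed traces
    dt           : sample interval (s)
    travel_times : (n_points, n_stations) theoretical travel times (s) from
                   each candidate source point to each station
    Returns the stacked-energy image over the candidate points."""
    n_points, n_stations = travel_times.shape
    n_samples = waveforms.shape[1]
    image = np.zeros(n_points)
    for p in range(n_points):
        stack = np.zeros(n_samples)
        for s in range(n_stations):
            shift = int(round(travel_times[p, s] / dt))
            # Align each trace so energy radiated from point p adds coherently.
            stack[:n_samples - shift] += waveforms[s, shift:]
        image[p] = np.sum(stack**2)
    return image

# Synthetic test: an impulsive source at candidate point 1 of 3.
dt = 0.01
tt = np.array([[0.10, 0.20, 0.15],    # point 0
               [0.30, 0.10, 0.20],    # point 1 (true source)
               [0.25, 0.30, 0.05]])   # point 2
n_samples = 100
waveforms = np.zeros((3, n_samples))
onset = 40                            # source origin sample
for s in range(3):
    waveforms[s, onset + int(round(tt[1, s] / dt))] = 1.0
image = backproject(waveforms, dt, tt)   # peaks at the true source point
```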
ICASE Semiannual Report, 1 April 1990 - 30 September 1990
1990-11-01
underlies parallel simulation protocols that synchronize based on logical time (all known approaches). This framework describes a suf- ficient set of...conducted primarily by visiting scientists from universities and from industry, who have resident appointments for limited periods of time , and by consultants...wave equation with point sources and semireflecting impedance boundary conditions. For sources that are piece- wise polynomial in time we get a finite
Method for discovering relationships in data by dynamic quantum clustering
Weinstein, Marvin; Horn, David
2017-05-09
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
Method for discovering relationships in data by dynamic quantum clustering
Weinstein, Marvin; Horn, David
2014-10-28
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
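The dynamical evolution in the patent requires propagating coherent states under a Hamiltonian; as a simplified stand-in, the closely related static gradient-ascent picture (replica points climbing a Gaussian Parzen-window density built from the data, so that "points converge into clusters") can be sketched below. The bandwidth sigma, the step size, and the toy 1-D data are illustrative assumptions, not values from the patent.

```python
import math

def parzen_log_grad(x, data, sigma):
    """Gradient of log P(x), where P(x) = sum_i exp(-(x - xi)^2 / (2 sigma^2))
    is a Gaussian Parzen-window density with kernels at the data points."""
    num, den = 0.0, 0.0
    for xi in data:
        w = math.exp(-(x - xi) ** 2 / (2 * sigma ** 2))
        num += w * (xi - x)
        den += w
    return num / (den * sigma ** 2)

def evolve(data, sigma=0.5, step=0.1, iters=200):
    """Move replica points uphill on log P; each converges to a density mode."""
    pts = list(data)
    for _ in range(iters):
        pts = [p + step * parzen_log_grad(p, data, sigma) for p in pts]
    return pts

# Two well-separated 1-D blobs: the six replicas converge to two centers.
data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
final = evolve(data)
centers = sorted({round(p, 1) for p in final})
```

The quantum-dynamic version replaces this static ascent with time evolution of Gaussian wave packets, but the qualitative behavior, convergence of nearby points toward shared attractors, is the same.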
Fast SiPM Readout of the PANDA TOF Detector
NASA Astrophysics Data System (ADS)
Böhm, M.; Lehmann, A.; Motz, S.; Uhlig, F.
2016-05-01
For the identification of low-momentum charged particles and for event timing purposes, a barrel Time-of-Flight (TOF) detector surrounding the interaction point is planned for the PANDA experiment at FAIR. Since the boundary conditions in terms of available radial space and radiation length are quite strict, the favored layout is a hodoscope composed of several thousand small scintillating tiles (SciTils) read out by silicon photomultipliers (SiPMs). A time resolution well below 100 ps is aimed for. With the originally proposed 30 × 30 × 5 mm3 SciTils read out by two single 3 × 3 mm2 SiPMs at the rims of the scintillator, the targeted time resolution can just be reached, but with a considerable position dependence across the scintillator surface. In this paper we discuss other design options to further improve the time resolution and its homogeneity. It will be shown that with wide scintillating rods (SciRods) of, e.g., 50 × 30 × 5 mm3 or longer, read out at opposite sides by chains of four serially connected SiPMs, a time resolution down to 50 ps can be reached without problems. In addition, the position dependence of the time resolution is negligible. These SciRods were tested in the laboratory with electrons from a 90Sr source and under real experimental conditions in a particle beam at CERN. The measured time resolutions using fast BC418 or BC420 plastic scintillators wrapped in aluminum foil were consistently between 45 and 75 ps, depending on the SciRod design. This is a significant improvement compared to the original SciTil layout.
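A toy calculation (not the PANDA software) shows why two-sided readout suppresses the position dependence of the timing: with an effective light speed v in the rod, the mean of the two arrival times equals L/(2v) regardless of the hit position, and averaging two uncorrelated ends improves the resolution by a factor of sqrt(2). The rod length, light speed, and single-end resolution below are illustrative assumptions.

```python
import math

def arrival_times(x, length, v):
    """Light arrival times at the two rod ends for a hit at position x
    (measured from the left end), with effective light speed v."""
    return x / v, (length - x) / v

L = 50.0          # rod length in mm (e.g. a SciRod)
v = 180.0         # assumed effective light speed in the scintillator, mm/ns
mean_times = []
for x in (5.0, 25.0, 45.0):
    t_left, t_right = arrival_times(x, L, v)
    mean_times.append(0.5 * (t_left + t_right))  # = L/(2v), independent of x

sigma_single = 70.0                        # ps, assumed single-end resolution
sigma_mean = sigma_single / math.sqrt(2)   # two uncorrelated ends averaged
```

The mean time is identical for hits at 5, 25, and 45 mm, which is the mechanism behind the negligible position dependence reported for the SciRods.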
Time course of effects of emotion on item memory and source memory for Chinese words.
Wang, Bo; Fu, Xiaolan
2011-05-01
Although many studies have investigated the effect of emotion on memory, it is unclear whether the effect of emotion extends to all aspects of an event. In addition, it is poorly understood how the effects of emotion on item memory and source memory change over time. This study examined the time course of the effects of emotion on item memory and source memory. Participants intentionally learned a list of neutral, positive, and negative Chinese words, which were presented twice, and then took a free recall test, followed by recognition and source memory tests, at one of eight delayed time points. The main findings (within the time frame of 2 weeks) are: (1) Negative emotion enhances free recall, whereas there is only a trend that positive emotion enhances free recall. In addition, negative and positive emotions differ in the time points at which their effects on free recall reach the greatest magnitude. (2) Negative emotion reduces recognition, whereas positive emotion has no effect on recognition. (3) Neither positive nor negative emotion has any effect on source memory. These findings indicate that the effect of emotion does not necessarily extend to all aspects of an event and that valence is a critical modulating factor in the effect of emotion on item memory. Furthermore, emotion does not affect the time course of item memory and source memory, at least within a time frame of 2 weeks. This study has implications for establishing a theoretical model of the effect of emotion on memory. Copyright © 2011 Elsevier Inc. All rights reserved.
Seeing "the Dress" in the Right Light: Perceived Colors and Inferred Light Sources.
Chetverikov, Andrey; Ivanchei, Ivan
2016-08-01
In the well-known "dress" photograph, people either see the dress as blue with black stripes or as white with golden stripes. We suggest that the perception of colors is guided by the scene interpretation and the inferred positions of light sources. We tested this hypothesis in two online studies using color matching to estimate the colors observers see, while controlling for individual differences in gray point bias and color discrimination. Study 1 demonstrates that the interpretation of the dress corresponds to differences in perceived colors. Moreover, people who perceive the dress as blue-and-black are two times more likely to consider the light source frontal than those who see the white-and-gold dress. The inferred light sources, in turn, depend on circadian changes in ambient light. The interpretation of the scene background as a wall or a mirror is consistent with the perceived colors as well. Study 2 shows that matching provides reliable results on differing devices and replicates the findings on scene interpretation and light sources. Additionally, we show that participants' environmental lighting conditions are an important cue for perceiving the dress colors. The exact mechanisms of how environmental lighting and circadian changes influence the perceived colors of the dress deserve further investigation.
Real-time determination of the worst tsunami scenario based on Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Furuya, Takashi; Koshimura, Shunichi; Hino, Ryota; Ohta, Yusaku; Inoue, Takuya
2016-04-01
In recent years, real-time tsunami inundation forecasting has been developed with the advances of dense seismic monitoring, GPS Earth observation, offshore tsunami observation networks, and high-performance computing infrastructure (Koshimura et al., 2014). Several uncertainties are involved in tsunami inundation modeling, and the tsunami generation model is believed to be one of the largest sources of uncertainty. An uncertain tsunami source model risks underestimating tsunami height, the extent of the inundation zone, and damage. Tsunami source inversion using observed seismic, geodetic, and tsunami data is the most effective way to avoid underestimating the tsunami, but acquiring the observed data takes time, and this limitation makes it difficult to complete real-time tsunami inundation forecasting soon enough. Rather than waiting for precise tsunami observations, we aim, from a disaster management point of view, to determine the worst tsunami source scenario for use in real-time tsunami inundation forecasting and mapping, using the seismic information of Earthquake Early Warning (EEW), which can be obtained immediately after an event is triggered. After an earthquake occurs, JMA's EEW estimates its magnitude and hypocenter. With the constraints of earthquake magnitude, hypocenter, and a scaling law, we generate multiple possible tsunami source scenarios and search for the worst one by superposition of pre-computed tsunami Green's functions, i.e., time series of tsunami height at offshore points corresponding to a 2-dimensional Gaussian unit source (e.g., Tsushima et al., 2014). The scenario analysis of our method consists of the following 2 steps. (1) Searching for the worst scenario range by calculating 90 scenarios with various strikes and fault positions. From the maximum tsunami heights of the 90 scenarios, we determine a narrower strike range that causes high tsunami heights in the area of concern.
(2) Calculating 900 scenarios with different strike, dip, length, width, depth, and fault position, where the strike is limited to the range obtained from the 90-scenario calculation. From the 900 scenarios, we determine the worst tsunami scenarios from a disaster management point of view, such as the one with the shortest travel time or the highest water level. The method was applied to a hypothetical earthquake and verified to see whether it can effectively find the worst tsunami source scenario in real time, to be used as an input to real-time tsunami inundation forecasting.
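The superposition step above can be sketched as follows. This is a toy stand-in, not the authors' system: each scenario is a set of slip weights on pre-computed unit-source tsunami time series, and the worst scenario is the one that maximizes the peak water level at a point of concern. The unit-source waveforms and scenario weights here are synthetic placeholders.

```python
import numpy as np

# Pre-computed "Green's functions": tsunami height time series at one
# offshore point, one per 2-D Gaussian unit source (synthetic placeholders).
t = np.linspace(0.0, 60.0, 601)                                    # minutes
unit_sources = [np.exp(-0.5 * ((t - t0) / 3.0) ** 2) for t0 in (15.0, 25.0, 35.0)]

def scenario_waveform(weights):
    """Linear superposition of unit-source responses (the Green's function
    summation that makes per-scenario evaluation fast)."""
    return sum(w * g for w, g in zip(weights, unit_sources))

# Candidate scenarios = slip weights on the unit sources.
scenarios = {"A": (1.0, 0.2, 0.0), "B": (0.5, 1.5, 0.5), "C": (0.0, 0.3, 1.0)}
peaks = {name: float(scenario_waveform(w).max()) for name, w in scenarios.items()}
worst = max(peaks, key=peaks.get)   # scenario with the highest peak water level
```

Because every scenario is just a weighted sum of stored waveforms, hundreds of scenarios (90, then 900) can be screened in real time without rerunning a tsunami propagation model.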
The BGS magnetic field candidate models for the 12th generation IGRF
NASA Astrophysics Data System (ADS)
Hamilton, Brian; Ridley, Victoria A.; Beggan, Ciarán D.; Macmillan, Susan
2015-05-01
We describe the candidate models submitted by the British Geological Survey for the 12th generation International Geomagnetic Reference Field. These models are extracted from a spherical harmonic `parent model' derived from vector and scalar magnetic field data from satellite and observatory sources. These data cover the period 2009.0 to 2014.7 and include measurements from the recently launched European Space Agency (ESA) Swarm satellite constellation. The parent model's internal field time dependence for degrees 1 to 13 is represented by order 6 B-splines with knots at yearly intervals. The parent model's degree 1 external field time dependence is described by periodic functions for the annual and semi-annual signals and by dependence on the 20-min Vector Magnetic Disturbance index. Signals induced by these external fields are also parameterized. Satellite data are weighted by spatial density and by two different noise estimators: (a) by standard deviation along segments of the satellite track and (b) a larger-scale noise estimator defined in terms of a measure of vector activity at the geographically closest magnetic observatories to the sample point. Forecasting of the magnetic field secular variation beyond the span of data is by advection of the main field using core surface flows.
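The temporal basis described above (order-6 B-splines with knots at yearly intervals) can be sketched with the Cox-de Boor recursion. This is an illustrative implementation, not BGS code; the clamped knot vector over 2009.0-2014.7 is an assumption about the spline setup.

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis function
    of order k (degree k-1) at time t, on the given knot vector."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k - 1] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    right = 0.0
    if knots[i + k] != knots[i + 1]:
        right = ((knots[i + k] - t) / (knots[i + k] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

order = 6                                   # order 6 = quintic, as in the model
# Yearly interior knots over the data span, with repeated (clamped) end knots.
interior = [2009.0 + y for y in range(6)] + [2014.7]
knots = [interior[0]] * (order - 1) + interior + [interior[-1]] * (order - 1)
n_basis = len(knots) - order
epoch = 2011.3                              # an epoch inside the data span
values = [bspline_basis(i, order, epoch, knots) for i in range(n_basis)]
```

Each Gauss coefficient of the internal field (degrees 1 to 13) would carry one weight per basis function; the basis values sum to 1 at any epoch inside the span, so the field varies smoothly in time.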
Religion in SETI Communications
NASA Astrophysics Data System (ADS)
Pay, R.
The prospect of millions of civilizations in the Galaxy raises the probability of receiving communications in the Search for Extraterrestrial Intelligence (SETI). However, much depends on the average lifetime of planetary civilizations. For a lifetime of 500 years, an optimistic forecast would predict about 65 civilizations in the Galaxy at any one time, separated by 5,000 light years, with no prospect of communication. For a lifetime of 10 million years, over a million civilizations would be spaced 180 light years apart, and communication among them is feasible. This indicates that extraterrestrial communications depend on civilizations achieving long-term stability, probably by evolving a global religion that removes sources of religious strife. Stability also requires an ethic supporting universal rights, nonviolence, empathy, and cooperation. As this ethic will be expressed in the planet-wide religion, it will lead to offers of support to other civilizations struggling to gain stability. As stable civilizations will be much advanced scientifically, understanding the religious concepts that appear in their communications will depend on how quantum mechanics, biological evolution, and the creation of the universe at a point in time are incorporated into their religion. Such a religion will view creation as intentional rather than accidental (the atheistic alternative) and will find the basis for its natural theology in the intention revealed by the physical laws of the universe.
Fermi-Lat Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center
NASA Technical Reports Server (NTRS)
Ajello, M.; Albert, A.; Atwood, W.B.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bissaldi, E.; Blandford, R. D.; Brandt, T. J.;
2016-01-01
The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission toward the Galactic center (GC) in high-energy gamma-rays. This paper describes the analysis of data taken during the first 62 months of the mission in the energy range 1-100 GeV from a 15 degrees x 15 degrees region about the direction of the GC. Specialized interstellar emission models (IEMs) are constructed to enable the separation of the gamma-ray emission produced by cosmic ray particles interacting with the interstellar gas and radiation fields in the Milky Way into that from the inner 1 kpc surrounding the GC and that from the rest of the Galaxy. A catalog of point sources for the 15 degrees x 15 degrees region is self-consistently constructed using these IEMs: the First Fermi-LAT Inner Galaxy Point Source Catalog (1FIG). The spatial locations, fluxes, and spectral properties of the 1FIG sources are presented and compared with gamma-ray point sources over the same region taken from existing catalogs. After subtracting the interstellar emission and point-source contributions, a residual is found. If templates that peak toward the GC are used to model the positive residual, the agreement with the data improves, but none of the additional templates tried account for all of its spatial structure. The spectrum of the positive residual modeled with these templates has a strong dependence on the choice of IEM.
40 CFR 461.33 - New source performance standards (NSPS).
Code of Federal Regulations, 2010 CFR
2010-07-01
... GUIDELINES AND STANDARDS BATTERY MANUFACTURING POINT SOURCE CATEGORY Lead Subcategory § 461.33 New source... times. (4) Subpart C—Battery Wash (Detergent)—NSPS. Pollutant or pollutant Property Maximum for any 1... day Maximum for monthly average Metric units—mg/kg of lead in trucked batteries English units—pounds...
40 CFR 461.33 - New source performance standards (NSPS).
Code of Federal Regulations, 2011 CFR
2011-07-01
... GUIDELINES AND STANDARDS BATTERY MANUFACTURING POINT SOURCE CATEGORY Lead Subcategory § 461.33 New source... times. (4) Subpart C—Battery Wash (Detergent)—NSPS. Pollutant or pollutant Property Maximum for any 1... day Maximum for monthly average Metric units—mg/kg of lead in trucked batteries English units—pounds...
Wiechman, Shelley A; McMullen, Kara; Carrougher, Gretchen J; Fauerbach, Jame A; Ryan, Colleen M; Herndon, David N; Holavanahalli, Radha; Gibran, Nicole S; Roaten, Kimberly
2017-12-16
To identify important sources of distress among burn survivors at discharge and 6, 12, and 24 months postinjury, and to examine whether the distress related to these sources changed over time. Exploratory. Outpatient burn clinics in 4 sites across the country. Participants who met preestablished criteria for having a major burn injury (N=1009) were enrolled in this multisite study. Participants were given a previously developed list of 12 sources of distress among burn survivors and asked to rate on a 10-point Likert-type scale (0=no distress to 10=high distress) how much distress each of the 12 issues was causing them at the time of each follow-up. The Medical Outcomes Study 12-Item Short-Form Health Survey was administered at each time point as a measure of health-related quality of life. The Satisfaction With Appearance Scale was used to understand the relation between sources of distress and body image. Finally, whether a person returned to work was used to determine the effect of sources of distress on returning to employment. It was encouraging that no symptoms were worsening at 2 years. However, financial concerns and long recovery time had 2 of the highest mean ratings at all time points. Pain and sleep disturbance had the biggest effect on the ability to return to work. These findings can be used to inform burn-specific interventions and to give survivors an understanding of the temporal trajectory of various causes of distress. In particular, it appears that interventions targeted at sleep disturbance and high pain levels can potentially affect distress over financial concerns by allowing a person to return to work more quickly. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Classification and Discrimination of Sources with Time-Varying Frequency and Spatial Spectra
2007-04-01
sensitivity enhancement by impulse noise excision," in Proc. IEEE Nat. Radar Conf., pp. 252-256, 1997. [7] M. Turley, "Impulse noise rejection in HF...specific time-frequency points or regions, where one or more signals reside, enhances signal-to-noise ratio (SNR) and allows source discrimination and...source separation. The proposed algorithm is developed assuming deterministic signals with additive white complex Gaussian noise. 6. Estimation of FM
1SXPS: A Deep Swift X-Ray Telescope Point Source Catalog with Light Curves and Spectra
NASA Technical Reports Server (NTRS)
Evans, P. A.; Osborne, J. P.; Beardmore, A. P.; Page, K. L.; Willingale, R.; Mountford, C. J.; Pagani, C.; Burrows, D. N.; Kennea, J. A.; Perri, M.;
2013-01-01
We present the 1SXPS (Swift-XRT point source) catalog of 151,524 X-ray point sources detected by the Swift-XRT in 8 yr of operation. The catalog covers 1905 sq deg distributed approximately uniformly on the sky. We analyze the data in two ways. First we consider all observations individually, for which we have a typical sensitivity of approximately 3 × 10(exp -13) erg cm(exp -2) s(exp -1) (0.3-10 keV). Then we co-add all data covering the same location on the sky: these images have a typical sensitivity of approximately 9 × 10(exp -14) erg cm(exp -2) s(exp -1) (0.3-10 keV). Our sky coverage is nearly 2.5 times that of 3XMM-DR4, although the catalog is a factor of approximately 1.5 less sensitive. The median position error is 5.5 arcsec (90% confidence), including systematics. Our source detection method improves on that used in previous X-ray Telescope (XRT) catalogs and we report greater than 68,000 new X-ray sources. The goals and observing strategy of the Swift satellite allow us to probe source variability on multiple timescales, and we find approximately 30,000 variable objects in our catalog. For every source we give positions, fluxes, time series (in four energy bands and two hardness ratios), estimates of the spectral properties, spectra and spectral fits for the brightest sources, and variability probabilities in multiple energy bands and timescales.
Source apportionment modeling of volatile organic compounds in streams
Pankow, J.F.; Asher, W.E.; Zogorski, J.S.
2006-01-01
It often is of interest to understand the relative importance of the different sources contributing to the concentration cw of a contaminant in a stream; the portions related to sources 1, 2, 3, etc. are denoted cw,1, cw,2, cw,3, etc. Like cw, the fractions α1 = cw,1/cw, α2 = cw,2/cw, α3 = cw,3/cw, etc. depend on location and time. Volatile organic compounds (VOCs) can undergo absorption from the atmosphere into stream water or loss from stream water to the atmosphere, causing complexities affecting the source apportionment (SA) of VOCs in streams. Two SA rules are elaborated. Rule 1: VOC entering a stream across the air/water interface is assigned exclusively to the atmospheric portion of cw. Rule 2: VOC loss by volatilization, flow loss to groundwater, in-stream degradation, etc. is distributed over cw,1, cw,2, cw,3, etc. in proportion to their corresponding α values. How the two SA rules are applied, as well as the nature of the SA output for a given case, will depend on whether transport across the air/water interface is handled using the net flux F convention or the individual fluxes J convention. Four hypothetical stream cases involving acetone, methyl-tert-butyl ether (MTBE), benzene, chloroform, and perchloroethylene (PCE) are considered. Acetone and MTBE are sufficiently water soluble from air for a domestic atmospheric source to be capable of yielding cw values approaching the common water quality guideline range of 1 to 10 µg/L. For most other VOCs, such levels cause net outgassing (F > 0). When F > 0 in a given section of stream, in the net flux convention, all of the αj for the compound remain unchanged over that section while cw decreases. A characteristic time τd can be calculated to predict when there will be differences between SA results obtained by the net flux convention versus the individual fluxes convention.
Source apportionment modeling provides the framework necessary for comparing different strategies for mitigating contamination at points of interest along a stream. © 2006 SETAC.
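Rule 2 can be illustrated numerically. This is a sketch under assumed numbers, not the authors' model: a loss distributed in proportion to the current fractions leaves the fractions themselves unchanged while the total concentration decreases, which is exactly the net-flux-convention behavior described above.

```python
def fractions(contribs):
    """Fractions alpha_j = c_w,j / c_w of each source's contribution."""
    total = sum(contribs)
    return [c / total for c in contribs]

def apply_loss(contribs, loss):
    """Rule 2: distribute a volatilization/degradation loss over the
    source contributions in proportion to their fractions."""
    f = fractions(contribs)
    return [c - loss * fi for c, fi in zip(contribs, f)]

# Assumed contributions (ug/L) from three sources, e.g. the atmosphere,
# a point discharge, and groundwater inflow; total c_w = 10 ug/L.
cw = [2.0, 5.0, 3.0]
before = fractions(cw)
cw_after = apply_loss(cw, loss=4.0)   # 4 ug/L lost by net outgassing
after = fractions(cw_after)
```

After the loss, the total drops from 10 to 6 ug/L but the alpha values are unchanged, matching the statement that all alpha_j remain constant over a section with F > 0.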
First Neutrino Point-Source Results from the 22 String Icecube Detector
NASA Astrophysics Data System (ADS)
Abbasi, R.; Abdou, Y.; Ackermann, M.; Adams, J.; Aguilar, J.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Bradley, L.; Braun, J.; Breder, D.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cohen, S.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Day, C. T.; De Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; De Young, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Gerhardt, L.; Gladstone, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hasegawa, Y.; Heise, J.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Imlay, R. L.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Klepser, S.; Knops, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Lauer, R.; Leich, H.; Lennarz, D.; Lucke, A.; Lundberg, J.; Lünemann, J.; Madsen, J.; Majumdar, P.; Maruyama, R.; Mase, K.; Matis, H. S.; McParland, C. 
P.; Meagher, K.; Merck, M.; Mészáros, P.; Middell, E.; Milke, N.; Miyamoto, H.; Mohr, A.; Montaruli, T.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Patton, S.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Potthoff, N.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Rutledge, D.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Satalecka, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schukraft, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K.-H.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Terranova, C.; Tilav, S.; Tluczykont, M.; Toale, P. A.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; Voigt, B.; Walck, C.; Waldenmaier, T.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebusch, C. H.; Wiedemann, A.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Ice Cube Collaboration
2009-08-01
We present new results of searches for neutrino point sources in the northern sky, using data recorded in 2007-2008 with 22 strings of the IceCube detector (approximately one-fourth of the planned total) and 275.7 days of live time. The final sample of 5114 neutrino candidate events agrees well with the expected background of atmospheric muon neutrinos and a small component of atmospheric muons. No evidence of a point source is found, with the most significant excess of events in the sky at 2.2σ after accounting for all trials. The average upper limit over the northern sky for point sources of muon neutrinos with an E^-2 spectrum is E^2 Φ_νμ < 1.4 × 10^-11 TeV cm^-2 s^-1, in the energy range from 3 TeV to 3 PeV, improving the previous best average upper limit, set by the AMANDA-II detector, by a factor of 2.
Efficient terrestrial laser scan segmentation exploiting data structure
NASA Astrophysics Data System (ADS)
Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa
2016-09-01
New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models, and consequently does not require setting parameters for them. Unlike common geometric point cloud segmentation methods, the proposed method employs colorimetric and intensity data as additional sources of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient than Random Sample Consensus (RANSAC), a common approach for point cloud segmentation.
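The mapping that underlies the panoramic representation can be sketched as follows. This is an illustration, not the authors' implementation: each 3D point is converted to spherical angles and binned into a pixel grid at the scanner's angular increment, and the pixel-to-point index map is what allows 2D image segments to be transferred back to the 3D cloud. The 0.5-degree increment and the sample points are assumptions.

```python
import math

def to_panorama_pixel(x, y, z, az_step_deg, el_step_deg):
    """Map a 3-D point (scanner at the origin) to (column, row) in a
    spherical panorama with fixed angular increments."""
    azimuth = math.degrees(math.atan2(y, x)) % 360.0
    r = math.sqrt(x * x + y * y + z * z)
    elevation = math.degrees(math.asin(z / r)) + 90.0  # shift [-90, 90] to [0, 180]
    col = int(azimuth // az_step_deg)
    row = int(elevation // el_step_deg)
    return col, row

# Bin a few synthetic points at 0.5-degree increments and keep the
# pixel -> point-index map used to transfer 2-D segments back to 3-D.
points = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 1.0)]
pixel_to_points = {}
for idx, p in enumerate(points):
    pix = to_panorama_pixel(*p, az_step_deg=0.5, el_step_deg=0.5)
    pixel_to_points.setdefault(pix, []).append(idx)
```

Because the scanner already samples on this angular grid, building the panorama is essentially a reindexing operation, which is why the representation can be created quickly.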
VizieR Online Data Catalog: ChaMP X-ray point source catalog (Kim+, 2007)
NASA Astrophysics Data System (ADS)
Kim, M.; Kim, D.-W.; Wilkes, B. J.; Green, P. J.; Kim, E.; Anderson, C. S.; Barkhouse, W. A.; Evans, N. R.; Ivezic, Z.; Karovska, M.; Kashyap, V. L.; Lee, M. G.; Maksym, P.; Mossman, A. E.; Silverman, J. D.; Tananbaum, H. D.
2009-01-01
We present the Chandra Multiwavelength Project (ChaMP) X-ray point source catalog with ~6800 X-ray sources detected in 149 Chandra observations covering ~10deg2. The full ChaMP catalog sample is 7 times larger than the initial published ChaMP catalog. The exposure time of the fields in our sample ranges from 0.9 to 124ks, corresponding to a deepest X-ray flux limit of f0.5-8.0=9x10-16ergs/cm2/s. The ChaMP X-ray data have been uniformly reduced and analyzed with ChaMP-specific pipelines and then carefully validated by visual inspection. The ChaMP catalog includes X-ray photometric data in eight different energy bands as well as X-ray spectral hardness ratios and colors. To best utilize the ChaMP catalog, we also present the source reliability, detection probability, and positional uncertainty. (10 data files).
HerMES: point source catalogues from Herschel-SPIRE observations II
NASA Astrophysics Data System (ADS)
Wang, L.; Viero, M.; Clarke, C.; Bock, J.; Buat, V.; Conley, A.; Farrah, D.; Guo, K.; Heinis, S.; Magdis, G.; Marchetti, L.; Marsden, G.; Norberg, P.; Oliver, S. J.; Page, M. J.; Roehlly, Y.; Roseboom, I. G.; Schulz, B.; Smith, A. J.; Vaccari, M.; Zemcov, M.
2014-11-01
The Herschel Multi-tiered Extragalactic Survey (HerMES) is the largest Guaranteed Time Key Programme on the Herschel Space Observatory. With a wedding cake survey strategy, it consists of nested fields with varying depth and area totalling ~380 deg2. In this paper, we present deep point source catalogues extracted from Herschel-Spectral and Photometric Imaging Receiver (SPIRE) observations of all HerMES fields, except for the later addition of the 270 deg2 HerMES Large-Mode Survey (HeLMS) field. These catalogues constitute the second Data Release (DR2) made in 2013 October. A sub-set of these catalogues, consisting of bright sources extracted from Herschel-SPIRE observations completed by 2010 May 1 (covering ~74 deg2), was released earlier in the first extensive data release in 2012 March. Two different methods are used to generate the point source catalogues: the SUSSEXTRACTOR point source extractor used in two earlier data releases (EDR and EDR2) and a new source detection and photometry method. The latter combines an iterative source detection algorithm, STARFINDER, and a De-blended SPIRE Photometry algorithm. We use end-to-end Herschel-SPIRE simulations with realistic number counts and clustering properties to characterize basic properties of the point source catalogues, such as the completeness, reliability, photometric and positional accuracy. Over 500 000 catalogue entries in HerMES fields (except HeLMS) are released to the public through the HeDAM (Herschel Database in Marseille) website (http://hedam.lam.fr/HerMES).
The development of the time dependence of the nuclear EMP electric field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eng, C
The nuclear electromagnetic pulse (EMP) electric field calculated with the legacy code CHAP is compared with the field given by an integral solution of Maxwell's equations, also known as the Jefimenko equation, to aid our current understanding of the factors that affect the time dependence of the EMP. For a fair comparison, the CHAP current density is used as a source in the Jefimenko equation. At first, the comparison is simplified by neglecting the conduction current and replacing the standard atmosphere with a constant-density air slab. The simplicity of the resulting current density aids in determining the factors that affect the rise, peak, and tail of the EMP electric field versus time. The three-dimensional nature of the radiating source, i.e. sources off the line of sight, and the time dependence of the derivative of the current density with respect to time are found to play significant roles in shaping the EMP electric field time dependence. These results are found to hold even when the conduction current and the standard atmosphere are properly accounted for. Comparison of the CHAP electric field with the Jefimenko electric field offers a direct validation of the high-frequency/outgoing-wave approximation.
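The role of the time derivative of the current density can be illustrated with the far-field (radiation) term of the Jefimenko solution for a small current element: E at distance r tracks dJ/dt evaluated at the retarded time t - r/c and falls off as 1/r. The Gaussian current pulse and the observer range below are arbitrary placeholders, not a CHAP source.

```python
import math

C = 3.0e8  # speed of light, m/s

def current(t):
    """Placeholder current moment (A*m): a smooth pulse peaking at 1 us."""
    return math.exp(-((t - 1.0e-6) / 2.0e-7) ** 2)

def dcurrent_dt(t, h=1.0e-10):
    """Central finite-difference time derivative of the current moment."""
    return (current(t + h) - current(t - h)) / (2.0 * h)

def far_field(t, r):
    """Radiation term of the Jefimenko/dipole field: E proportional to
    dJ/dt at the retarded time t - r/c, with 1/r fall-off."""
    mu0 = 4.0e-7 * math.pi
    return -(mu0 / (4.0 * math.pi * r)) * dcurrent_dt(t - r / C)

r = 3000.0                                # observer range, m; delay r/c = 10 us
ts = [i * 1.0e-8 for i in range(2000)]    # sample 0..20 us
peak_t = max(ts, key=lambda t: abs(far_field(t, r)))
```

The field extremum arrives roughly r/c after the extremum of dJ/dt, not of J itself, which is the sense in which the derivative of the current density shapes the rise and peak of the pulse.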
Independent evaluation of point source fossil fuel CO2 emissions to better than 10%
Turnbull, Jocelyn Christine; Keller, Elizabeth D.; Norris, Margaret W.; Wiltshire, Rachael M.
2016-01-01
Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 (14CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric 14CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions. PMID:27573818
Independent evaluation of point source fossil fuel CO2 emissions to better than 10%.
Turnbull, Jocelyn Christine; Keller, Elizabeth D; Norris, Margaret W; Wiltshire, Rachael M
2016-09-13
Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 ((14)CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric (14)CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions.
Singer, Michael Bliss; Sargeant, Christopher I; Piégay, Hervé; Riquier, Jérémie; Wilson, Rob J S; Evans, Cristina M
2014-01-01
Seasonal and annual partitioning of water within river floodplains has important implications for ecohydrologic links between the water cycle and tree growth. Climatic and hydrologic shifts alter water distribution between floodplain storage reservoirs (e.g., vadose, phreatic), affecting water availability to tree roots. Water partitioning is also dependent on the physical conditions that control tree rooting depth (e.g., gravel layers that impede root growth), the sources of contributing water, the rate of water drainage, and water residence times within particular storage reservoirs. We employ instrumental climate records alongside oxygen isotopes within tree rings and regional source waters, as well as topographic data and soil depth measurements, to infer the water sources used over several decades by two co-occurring tree species within a riparian floodplain along the Rhône River in France. We find that water partitioning to riparian trees is influenced by annual (wet versus dry years) and seasonal (spring snowmelt versus spring rainfall) fluctuations in climate. This influence depends strongly on local (tree level) conditions including floodplain surface elevation and subsurface gravel layer elevation. The latter represents the upper limit of the phreatic zone and therefore controls access to shallow groundwater. The difference between them, the thickness of the vadose zone, controls total soil moisture retention capacity. These factors thus modulate the climatic influence on tree ring isotopes. Additionally, we identified growth signatures and tree ring isotope changes associated with recent restoration of minimum streamflows in the Rhône, which made new phreatic water sources available to some trees in otherwise dry years. 
Key Points: Water shifts due to climatic fluctuations between floodplain storage reservoirs. Anthropogenic changes to hydrology directly impact water available to trees. Ecohydrologic approaches to integration of hydrology afford new possibilities. PMID:25506099
Dhingra, R. R.; Jacono, F. J.; Fishman, M.; Loparo, K. A.; Rybak, I. A.
2011-01-01
Physiological rhythms, including respiration, exhibit endogenous variability associated with health, and deviations from this are associated with disease. Specific changes in the linear and nonlinear sources of breathing variability have not been investigated. In this study, we used information theory-based techniques, combined with surrogate data testing, to quantify and characterize the vagal-dependent nonlinear pattern variability in urethane-anesthetized, spontaneously breathing adult rats. Surrogate data sets preserved the amplitude distribution and linear correlations of the original data set, but nonlinear correlation structure in the data was removed. Differences in mutual information and sample entropy between original and surrogate data sets indicated the presence of deterministic nonlinear or stochastic non-Gaussian variability. With vagi intact (n = 11), the respiratory cycle exhibited significant nonlinear behavior in templates of points separated by time delays ranging from one sample to one cycle length. After vagotomy (n = 6), even though nonlinear variability was reduced significantly, nonlinear properties were still evident at various time delays. Nonlinear deterministic variability did not change further after subsequent bilateral microinjection of MK-801, an N-methyl-d-aspartate receptor antagonist, in the Kölliker-Fuse nuclei. Reversing the sequence (n = 5), blocking N-methyl-d-aspartate receptors bilaterally in the dorsolateral pons significantly decreased nonlinear variability in the respiratory pattern, even with the vagi intact, and subsequent vagotomy did not change nonlinear variability. Thus both vagal and dorsolateral pontine influences contribute to nonlinear respiratory pattern variability. Furthermore, breathing dynamics of the intact system are mutually dependent on vagal and pontine sources of nonlinear complexity. 
Understanding the structure and modulation of variability provides insight into disease effects on respiratory patterning. PMID:21527661
Haughey, Aisling; Coalter, George; Mugabe, Koki
2011-09-01
The study aimed to assess the suitability of linear array metal oxide semiconductor field effect transistor (MOSFET) detectors as in vivo dosimeters to measure rectal dose in high dose rate (HDR) brachytherapy treatments. The MOSFET arrays were calibrated with an Ir192 source, and phantom measurements were performed to check agreement with the treatment planning system (TPS). The angular dependence, linearity and constancy of the detectors were evaluated. For in vivo measurements, two sites were investigated: transperineal needle implants for prostate cancer and Fletcher suites for cervical cancer. The MOSFETs were inserted into the patients' rectum in theatre inside a modified flatus tube. The patients were then CT scanned for treatment planning. Measured rectal doses during treatment were compared with point dose measurements predicted by the TPS. The MOSFETs were found to require individual calibration factors. The calibration was found to drift by approximately 1% ± 0.8% per 500 mV accumulated and to vary with distance from the source due to energy dependence. In vivo results for prostate patients found that only 33% of measured doses agreed with the TPS within ±10%. For cervix cases, 42% of measured doses agreed with the TPS within ±10%; however, among those not agreeing, variations of up to 70% were observed. One of the most limiting factors in this study was found to be the inability to prevent the MOSFET from moving internally between the time of CT and treatment. Due to the many uncertainties associated with MOSFETs, including calibration drift, angular dependence and the inability to know their exact position at the time of treatment, we consider them to be unsuitable for in vivo dosimetry in the rectum for HDR brachytherapy.
A Comparison of Crater-Size Scaling and Ejection-Speed Scaling During Experimental Impacts in Sand
NASA Technical Reports Server (NTRS)
Anderson, J. L. B.; Cintala, M. J.; Johnson, M. K.
2014-01-01
Non-dimensional scaling relationships are used to understand various cratering processes including final crater sizes and the excavation of material from a growing crater. The principal assumption behind these scaling relationships is that these processes depend on a combination of the projectile's characteristics, namely its diameter, density, and impact speed. This simplifies the impact event into a single point-source. So long as the process of interest is beyond a few projectile radii from the impact point, the point-source assumption holds. These assumptions can be tested through laboratory experiments in which the initial conditions of the impact are controlled and resulting processes measured directly. In this contribution, we continue our exploration of the congruence between crater-size scaling and ejection-speed scaling relationships. In particular, we examine a series of experimental suites in which the projectile diameter and average grain size of the target are varied.
A search for energy-dependence of the Kes 73/1E 1841-045 morphology in GeV
NASA Astrophysics Data System (ADS)
Yeung, P. K. H.
2017-10-01
While the Kes 73/1E 1841-045 system has been confirmed as an extended GeV source, whether its morphology depends on the photon energy deserves further investigation. Again adopting data collected by the Fermi Large Area Telescope (LAT), we look into the extension of this source in three energy bands individually: 0.3-1 GeV, 1-3 GeV and 3-200 GeV. We find that the 0.3-1 GeV morphology is point-like and quite different from those in the other two bands, although we cannot robustly reject a unified morphology for the whole LAT band.
A very deep IRAS survey at the north ecliptic pole
NASA Technical Reports Server (NTRS)
Houck, J. R.; Hacking, P. B.; Condon, J. J.
1987-01-01
The data from approximately 20 hours of observation of the 4- to 6-square-degree field surrounding the north ecliptic pole have been combined to produce a very deep IR survey in the four IRAS bands. Scans from both pointed and survey observations were included in the data analysis. At 12 and 25 microns the deep survey is limited by detector noise and is approximately 50 times deeper than the IRAS Point Source Catalog (PSC). At 60 microns the problems of source confusion and Galactic cirrus combine to limit the deep survey to approximately 12 times deeper than the PSC. These problems are so severe at 100 microns that flux values are only given for locations corresponding to sources selected at 60 microns. In all, 47 sources were detected at 12 microns, 37 at 25 microns, and 99 at 60 microns. The data-analysis procedures and the significance of the 12- and 60-micron source-count results are discussed.
Small catchments DEM creation using Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Gafurov, A. M.
2018-01-01
Digital elevation models (DEMs) are an important source of information on the terrain, allowing researchers to evaluate various exogenous processes. The higher the accuracy of the DEM, the more detailed the work it can support. An important source of data for the construction of DEMs is point clouds obtained with terrestrial laser scanning (TLS) and unmanned aerial vehicles (UAVs). In this paper, we present the results of constructing a DEM of small catchments using UAVs. Evaluation of the UAV DEM showed accuracy comparable with TLS when real-time kinematic Global Positioning System (RTK-GPS) ground control points (GCPs) and check points (CPs) were used. In this case, the main source of errors in the construction of DEMs is error in georeferencing the survey results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, William Scott
This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.
Investigating the Accuracy of Point Clouds Generated for Rock Surfaces
NASA Astrophysics Data System (ADS)
Seker, D. Z.; Incekara, A. H.
2016-12-01
Point clouds produced by different techniques are widely used to model rocks and to obtain properties of rock surfaces such as roughness, volume and area. These point clouds can be generated using laser scanning and close range photogrammetry. Laser scanning is the most common method of producing a point cloud: the laser scanner device produces a 3D point cloud at regular intervals. In close range photogrammetry, a point cloud can be produced from photographs taken in appropriate conditions, depending on developing hardware and software technology. Much photogrammetric software, open source or otherwise, currently supports point cloud generation. The two methods are close to each other in terms of accuracy: sufficient accuracy in the mm and cm range can be obtained with a qualified digital camera or laser scanner. In both methods, field work is completed in less time than with conventional techniques. In close range photogrammetry, any part of a rock surface can be completely represented owing to overlapping oblique photographs. Although the data they produce are comparable, the two methods are quite different in terms of cost. In this study, we investigate whether a point cloud produced from photographs can be used instead of a point cloud produced by a laser scanner device. For this purpose, rock surfaces with complex and irregular shapes located on the İstanbul Technical University Ayazaga Campus were selected as the study object. The selected object is a mixture of different rock types and consists of both partly weathered and fresh parts. The study was performed on a part of a 30 m x 10 m rock surface. 2D (area-based) and 3D (volume-based) analyses were performed for several regions selected from the point clouds of the surface models. The analyses showed that the point clouds from the two methods are similar and can be used as alternatives to each other. This demonstrates that a point cloud produced from photographs, which is both economical and quicker to acquire, can be used in several studies instead of a point cloud produced by a laser scanner.
An analysis of the adaptability of Loran-C to air navigation
NASA Technical Reports Server (NTRS)
Littlefield, J. A.
1981-01-01
The sources of position errors characteristic of the Loran-C navigation system were identified. Particular emphasis was given to their point of entry as well as their elimination. It is shown that the ratio of realized accuracy to theoretical accuracy of Loran-C is highly receiver dependent.
Improved moving source photometry with TRIPPy
NASA Astrophysics Data System (ADS)
Alexandersen, Mike; Fraser, Wesley Cristopher
2017-10-01
Photometry of moving sources is more complicated than for stationary sources, because the sources trail their signal out over more pixels than a point source of the same magnitude. Using a circular aperture of the same size as would be appropriate for point sources can cut out a large amount of flux if a moving source moves substantially relative to the size of the aperture during the exposure, resulting in underestimated fluxes. Using a large circular aperture can mitigate this issue at the cost of a significantly reduced signal-to-noise ratio compared to a point source, as a result of the inclusion of a larger background region within the aperture. Trailed Image Photometry in Python (TRIPPy) solves this problem by using a pill-shaped aperture: the traditional circular aperture is sliced in half perpendicular to the direction of motion and separated by a rectangle as long as the total motion of the source during the exposure. TRIPPy can also calculate the appropriate aperture correction (which depends on both the radius and trail length of the pill-shaped aperture), and has features for selecting good PSF stars, creating a PSF model (convolved Moffat profile + lookup table) and selecting a custom sky-background area in order to ensure no other sources contribute to the background estimate. In this poster, we present an overview of TRIPPy's features and demonstrate the improvements it provides over photometry obtained by other methods, with examples from real projects where TRIPPy has been implemented in order to obtain the best possible photometric measurements of Solar System objects. While TRIPPy has so far mainly been used for Trans-Neptunian Objects, the improvement from the pill-shaped aperture increases with source motion, making TRIPPy highly relevant for asteroid and centaur photometry as well.
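Geometrically, the pill-shaped aperture described in this abstract is just a circle split in half with a rectangle inserted between the halves. A minimal sketch of its area (the function name is illustrative, not TRIPPy's API):

```python
import math

def pill_aperture_area(radius, trail_length):
    """Area of a pill-shaped aperture: a circle of the given radius,
    sliced in half perpendicular to the direction of motion and
    separated by a rectangle whose length equals the source's total
    motion during the exposure (the trail length)."""
    return math.pi * radius ** 2 + 2.0 * radius * trail_length

# With no motion, the pill reduces to the ordinary circular aperture.
static_area = pill_aperture_area(5.0, 0.0)
```

For a trailed source, the extra `2 * radius * trail_length` rectangle captures the flux a same-radius circular aperture would miss, without growing the background area as much as an enlarged circle would.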
Choudhuri, Samir; Bharadwaj, Somnath; Roy, Nirupam; Ghosh, Abhik; Ali, Sk Saiyad
2016-06-11
It is important to correctly subtract point sources from radio-interferometric data in order to measure the power spectrum of diffuse radiation like the Galactic synchrotron or the Epoch of Reionization 21-cm signal. It is computationally very expensive and challenging to image a very large area and accurately subtract all the point sources from the image. The problem is particularly severe at the sidelobes and the outer parts of the main lobe, where the antenna response is highly frequency dependent and the calibration also differs from that of the phase centre. Here, we show that it is possible to overcome this problem by tapering the sky response. Using simulated 150 MHz observations, we demonstrate that it is possible to suppress the contribution due to point sources from the outer parts by using the Tapered Gridded Estimator to measure the angular power spectrum Cℓ of the sky signal. We also show from the simulation that this method can self-consistently compute the noise bias and accurately subtract it to provide an unbiased estimation of Cℓ.
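The tapering idea can be sketched with a Gaussian window that down-weights the sky response away from the phase centre; the functional form and the taper scale below are illustrative assumptions, not necessarily the estimator's exact window:

```python
import numpy as np

def sky_taper(theta_deg, theta_w_deg):
    """Illustrative sky taper W(theta) = exp(-(theta/theta_w)^2).
    It suppresses the sky response far from the phase centre, so
    residual point sources in the sidelobes and outer main lobe
    contribute less to the estimated angular power spectrum."""
    theta = np.asarray(theta_deg, dtype=float)
    return np.exp(-(theta / theta_w_deg) ** 2)

# Unity at the phase centre, strongly suppressed a few taper scales out.
w = sky_taper([0.0, 1.0, 3.0], theta_w_deg=1.0)
```

The design trade-off is that a narrower taper suppresses outer-field point sources more aggressively but also discards more of the wide-field signal.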
Outdoor air pollution in close proximity to a continuous point source
NASA Astrophysics Data System (ADS)
Klepeis, Neil E.; Gabel, Etienne B.; Ott, Wayne R.; Switzer, Paul
Data are lacking on human exposure to air pollutants occurring in ground-level outdoor environments within a few meters of point sources. To better understand outdoor exposure to tobacco smoke from cigarettes or cigars, and exposure to other types of outdoor point sources, we performed more than 100 controlled outdoor monitoring experiments on a backyard residential patio in which we released pure carbon monoxide (CO) as a tracer gas for continuous time periods lasting 0.5-2 h. The CO was emitted from a single outlet at a fixed per-experiment rate of 120-400 cc min^-1 (~140-450 mg min^-1). We measured CO concentrations every 15 s at up to 36 points around the source along orthogonal axes. The CO sensors were positioned at standing or sitting breathing heights of 2-5 ft (up to 1.5 ft above and below the source) and at horizontal distances of 0.25-2 m. We simultaneously measured real-time air speed, wind direction, relative humidity, and temperature at single points on the patio. The ground-level air speeds on the patio were similar to those we measured during a survey of 26 outdoor patio locations in 5 nearby towns. The CO data exhibited a well-defined proximity effect similar to the indoor proximity effect reported in the literature. Average concentrations were approximately inversely proportional to distance. Average CO levels were approximately proportional to source strength, supporting generalization of our results to different source strengths. For example, we predict a cigarette smoker would cause average fine particle levels of approximately 70-110 μg m^-3 at horizontal distances of 0.25-0.5 m. We also found that average CO concentrations rose significantly as average air speed decreased. We fit a multiplicative regression model to the empirical data that predicts outdoor concentrations as a function of source emission rate, source-receptor distance, air speed and wind direction.
The model described the data reasonably well, accounting for ~50% of the log-CO variability in 5-min CO concentrations.
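The multiplicative structure described in this abstract (concentration roughly proportional to source strength and inversely proportional to distance, falling with air speed) can be sketched as follows; the constant and exponents are placeholders, not the study's fitted values:

```python
def predicted_concentration(emission_rate, distance, air_speed,
                            k=1.0, alpha=1.0, beta=1.0, gamma=1.0):
    """Illustrative multiplicative proximity model of the form
    C = k * E^alpha / (d^beta * u^gamma): average concentration rises
    in proportion to source strength E and falls roughly inversely
    with source-receptor distance d and air speed u. k and the
    exponents are illustrative assumptions, not fitted parameters."""
    return k * emission_rate ** alpha / (distance ** beta * air_speed ** gamma)

# Halving the distance roughly doubles the predicted average level.
near = predicted_concentration(300.0, 0.25, 0.5)
far = predicted_concentration(300.0, 0.5, 0.5)
```

In the actual study the exponents would be fit by regression on log-transformed concentrations, which is why the model's fit is reported in terms of log-CO variability.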
NASA Astrophysics Data System (ADS)
Yogish, H.; Chandrashekara, K.; Pramod Kumar, M. R.
2012-11-01
India is looking at renewable alternative sources of energy to reduce its dependence on imported crude oil. As India imports 70% of its crude oil, the country has been greatly affected by increasing cost and uncertainty. Biodiesel derived by two-step acid transesterification of mixed non-edible oils from Jatropha curcas and Pongamia (karanja) can meet part of the requirement for diesel fuel in the coming years. In the present study, different proportions of methanol, sodium hydroxide and sulfuric acid, together with variations in reaction time and reaction temperature, were adopted in order to optimize the experimental conditions for maximum biodiesel yield. The preliminary studies revealed that the biodiesel yield varied widely, in the range of 75-95%, using the laboratory-scale reactor; an average yield of 95% was obtained. The fuel and chemical properties of the biodiesel, namely kinematic viscosity, specific gravity, density, flash point, fire point, calorific value, pH, acid value, iodine value, sulfur content, water content, glycerin content and sulfated ash, were found to be within the limits suggested by the Bureau of Indian Standards (BIS 15607: 2005). The optimum combination of methanol, sodium hydroxide, sulfuric acid, reaction time and reaction temperature is established.
Subjective study of preferred listening conditions in Italian Catholic churches
NASA Astrophysics Data System (ADS)
Martellotta, Francesco
2008-10-01
The paper describes the results of research aimed at investigating preferred subjective listening conditions inside churches. The effect of different musical motifs (ranging from Gregorian chant to symphonic music) was investigated, and regression analysis was performed in order to point out the relationship between subjective ratings and acoustical parameters. In order to present realistic listening conditions to the subjects, a small subset of nine churches was selected from a larger set of acoustic data collected in several Italian churches during a widespread on-site survey. The subset represented different architectural styles and shapes, and was characterized by average listening conditions. For each church a single source-receiver combination with fixed relative positions was chosen. Measured binaural impulse responses were cross-talk cancelled and then convolved with five anechoic motifs. Paired comparisons were finally performed, asking a trained panel of subjects for their preferences. Factor analysis pointed out a substantially common underlying pattern characterizing the subjective responses. The results show that preferred listening conditions vary as a function of the musical motif, depending on early decay time for choral music and on a combination of initial time delay and lateral energy for instrumental music.
40 CFR 430.125 - New source performance standards (NSPS).
Code of Federal Regulations, 2011 CFR
2011-07-01
... GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter, Non-Woven, and... of 5.0 to 9.0 at all times. Subpart L [NSPS for non-integrated mills where filter and non-woven...
40 CFR 430.125 - New source performance standards (NSPS).
Code of Federal Regulations, 2010 CFR
2010-07-01
... GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Tissue, Filter, Non-Woven, and... of 5.0 to 9.0 at all times. Subpart L [NSPS for non-integrated mills where filter and non-woven...
Theory of two-point correlations of jet noise
NASA Technical Reports Server (NTRS)
Ribner, H. S.
1976-01-01
A large body of careful experimental measurements of two-point correlations of far field jet noise was carried out. The model of jet-noise generation is an approximate version of an earlier work of Ribner, based on the foundations of Lighthill. The model incorporates isotropic turbulence superimposed on a specified mean shear flow, with assumed space-time velocity correlations, but with source convection neglected. The particular vehicle is the Proudman format, and the previous work (mean-square pressure) is extended to display the two-point space-time correlations of pressure. The shape of polar plots of correlation is found to derive from two main factors: (1) the noncompactness of the source region, which allows differences in travel times to the two microphones - the dominant effect; (2) the directivities of the constituent quadrupoles - a weak effect. The noncompactness effect causes the directional lobes in a polar plot to have pointed tips (cusps) and to be especially narrow in the plane of the jet axis. In these respects, and in the quantitative shapes of the normalized correlation curves, results of the theory show generally good agreement with Maestrello's experimental measurements.
The Influence of Gantry Geometry on Aliasing and Other Geometry Dependent Errors
NASA Astrophysics Data System (ADS)
Joseph, Peter M.
1980-06-01
At least three gantry geometries are widely used in medical CT scanners: (1) rotate-translate, (2) rotating detectors, (3) stationary detectors. There are significant geometrical differences between these designs, especially regarding (a) the region of space scanned by any given detector and (b) the sample density of rays which scan the patient. It is imperative to distinguish between "views" and "rays" in analyzing this situation. In particular, views are defined by the x-ray source in type 2 and by the detector in type 3 gantries. It is known that ray dependent errors are generally much more important than view dependent errors. It is shown that spatial resolution is primarily limited by the spacing between rays in any view, while the number of ray samples per beam width determines the extent of aliasing artifacts. Rotating detector gantries are especially susceptible to aliasing effects. It is shown that aliasing effects can distort the point spread function in a way that is highly dependent on the position of the point in the scanned field. Such effects can cause anomalies in the MTF functions as derived from points in machines with significant aliasing problems.
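The sampling argument in this abstract (spatial resolution limited by ray spacing, aliasing governed by samples per beam width) can be made concrete with a minimal sketch; the function names and the factor-of-two threshold are illustrative, following the usual Nyquist criterion:

```python
def samples_per_beam_width(ray_spacing, beam_width):
    """Number of ray samples falling within one beam width."""
    return beam_width / ray_spacing

def aliasing_prone(ray_spacing, beam_width):
    """A scan geometry is susceptible to aliasing artifacts when it
    provides fewer than ~2 samples per beam width (Nyquist-style
    criterion; the exact threshold is an illustrative assumption)."""
    return samples_per_beam_width(ray_spacing, beam_width) < 2.0

# Rays spaced a full beam width apart undersample the beam profile;
# halving the ray spacing restores adequate sampling.
undersampled = aliasing_prone(1.0, 1.0)
well_sampled = not aliasing_prone(0.5, 1.0)
```

This is why rotating-detector gantries, whose ray spacing is set by detector geometry rather than view density, are singled out as especially susceptible to aliasing.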
Malik, Azhar H; Shimazoe, Kenji; Takahashi, Hiroyuki
2013-01-01
In order to obtain the plasma time activity curve (PTAC), the input function for almost all quantitative PET studies, patient blood is sampled manually from an artery or vein, which has various drawbacks. Recently, a novel compact Time-over-Threshold (ToT) based Pr:LuAG-APD animal PET tomograph was developed in our laboratory, with 10% energy resolution, 4.2 ns time resolution and 1.76 mm spatial resolution. The measured spatial resolution shows much promise for imaging blood vessels, i.e. arteries of diameter 2.3-2.4 mm, and hence for measuring the PTAC for quantitative PET studies. To find the measurement time required to obtain reasonable counts for image reconstruction, the most important parameter is the sensitivity of the system. Usually, small animal PET systems are characterized using a point source in air. We used the Electron Gamma Shower 5 (EGS5) code to simulate a point source at different positions inside the sensitive volume of the tomograph, and the axial and radial variations in sensitivity were studied in air and in a phantom-equivalent water cylinder. An average sensitivity difference of 34% in the axial direction and 24.6% in the radial direction is observed when the point source is displaced inside the water cylinder instead of air.
Controlled dipole-dipole interactions between K Rydberg atoms in a laser-chopped effusive beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutteruf, M. R.; Jones, R. R.
2010-12-15
We explore pulsed-field control of resonant dipole-dipole interactions between K Rydberg atoms. A laser-based atomic beam chopper is used to reduce the relative velocities of Rydberg atoms excited from an effusive thermal source. Resonant energy transfer (RET) between pairs of atoms is controlled via Stark tuning of the relevant Rydberg energy levels. Resonance line shapes in the electric field dependence of the RET probability are used to determine the effective temperature of the sample. We demonstrate that the relative atom velocities can be reduced to the point where the duration of the electric-field tuning pulses, and not the motion of neighboring atoms, defines the interaction time for each pair within the ensemble. Coherent, transform-limited broadening of the resonance line shape is observed as the tuning pulse duration is reduced below the natural time scale for collisions.
Vant, W N
2001-01-01
The water quality of the Waikato River is currently much better than it was in the 1950s. Major improvements in the treatment of the sewage and industrial wastewaters which are discharged to the river mean that levels of indicator bacteria in the lower reaches of the river are now many times lower than in the past. Even so, conditions are still not suitable for swimming, and blue-green algal blooms occur at times. Non-point or diffuse sources of contaminants now dominate the nutrient and pathogen budgets. Progressively intensifying farming, particularly in lowland areas, is thought to contribute the majority of the contaminants found in the river. Future improvements in water quality will therefore depend more on activities like changes to farming practice, such as retiring the riparian margins of lowland tributaries of the river, than on further advances in wastewater treatment.
Updating visual memory across eye movements for ocular and arm motor control.
Thompson, Aidan A; Henriques, Denise Y P
2008-11-01
Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.
Performance analysis of phase-change material storage unit for both heating and cooling of buildings
NASA Astrophysics Data System (ADS)
Waqas, Adeel; Ali, Majid; Ud Din, Zia
2017-04-01
Utilisation of solar energy and of cool night-time ambient temperatures are passive ways of heating and cooling buildings. The intermittent and time-dependent nature of these sources makes thermal energy storage vital for efficient and continuous operation of these heating and cooling techniques. Latent heat thermal energy storage using phase-change materials (PCMs) is preferred over other storage techniques due to its high energy storage density and isothermal storage process. The current study aimed to evaluate the performance of an air-based PCM storage unit utilising solar energy and cool ambient night temperatures for comfort heating and cooling of a building in dry-cold and dry-hot climates. The performance of the studied PCM storage unit was maximised when the melting point of the PCM was ∼29°C in summer and 21°C during the winter season. The appropriate melting point for all-year-round performance was ∼27.5°C. At melting points lower than 27.5°C, the decline in the cooling capacity of the storage unit was more pronounced than the improvement in the heating capacity. It was also concluded that a melting point of the PCM that provides maximum cooling during the summer season can also be used for winter heating, but not vice versa.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez-Sprinberg, G; Piriz, G
Purpose: To optimize the dose in the bladder and rectum and show the different shapes of the isodose volumes in Co60-HDR brachytherapy, considering different uterine and vaginal source dwell time ratios (TU:TV). Methods: Besides Ir192-HDR, new Co60-HDR sources are being incorporated. We considered different TU:TV ratios and computed the dose in the bladder, in the rectum, and at the reference points of the Manchester system. We also calculated the isodose volume and shape in each case. We used an EZAG-BEBIG Co0.A86 model with the TPS HDRplus 3.0.4 and LCT42-7 and LCT42-2(R,L) applicators. A reference dose RA = 1.00 Gy was given to the A-right point. We considered the TU:TV dwell time ratios 1:0.25, 1:0.33, 1:0.5, 1:1, 1:2, 1:3, and 1:4. Given TU:TV, the stop time at each dwell position is fixed for each applicator. Results: Increasing TU:TV systematically decreases the dose in the bladder and rectum: e.g., 9% and 27% reductions were found for 1:0.25 with respect to 1:1, while 12% and 34% increases were found for 1:4 with respect to 1:1. The isodose volume parameters height (h), width (w), thickness (t), and volume (hwt) increased from the 1:0.25 case to the 1:4 case: hwt is 25% lower and 31% higher than the 1:1 reference volume in these cases. Also, w decreased for higher TU:TV and may compromise coverage of the tumoral volume, decreasing 17% in the 1:0.25 case compared to the 1:1 case. The shape of the isodose volume was obtained for each TU:TV considered. Conclusion: We obtained the shapes of the isodose volumes for different TU:TV values in gynecological Co60-HDR and studied the dose reduction in the bladder and rectum for different TU:TV ratios. The volume parameters and hwt are strongly dependent on this ratio. This information is useful for a quantitative check of the TPS and as a starting point towards optimization.
Zhang, Yuji
2015-01-01
Molecular networks act as the backbone of molecular activities within cells, offering a unique opportunity to better understand the mechanisms of disease. While network data usually constitute only static network maps, integrating them with time course gene expression information can provide clues to the dynamic features of these networks and unravel the mechanistic driver genes characterizing cellular responses. Time course gene expression data allow us to broadly "watch" the dynamics of the system. However, one challenge in the analysis of such data is to establish and characterize the interplay among genes that are altered at different time points in the context of a biological process or functional category. Integrative analysis of these data sources will lead us to a more complete understanding of how biological entities (e.g., genes and proteins) coordinately perform their biological functions in biological systems. In this paper, we introduce a novel network-based approach to extract functional knowledge from time-dependent biological processes at a system level using time course mRNA sequencing data in zebrafish embryo development. The proposed method was applied to investigate 1α,25(OH)2D3-altered mechanisms in zebrafish embryo development, using a public zebrafish time course mRNA-Seq dataset containing two different treatments along four time points. We constructed networks between gene ontology biological process categories that were enriched in differentially expressed genes between consecutive time points and different conditions. The temporal propagation of 1α,25-Dihydroxyvitamin D3-altered transcriptional changes started from a few genes altered at earlier stages and spread to large groups of biologically coherent genes at later stages. The most notable biological processes included neuronal and retinal development and a generalized stress response.
In addition, we investigated the relationships among biological processes enriched in co-expressed genes under different conditions. The enriched biological processes include translation elongation, nucleosome assembly, and retina development. These network dynamics provide new insights into the impact of 1α,25-Dihydroxyvitamin D3 treatment on bone and cartilage development. We developed a network-based approach to analyzing the DEGs at different time points by integrating molecular interactions and gene ontology information. These results demonstrate that the proposed approach can provide insight into the molecular mechanisms taking place in vertebrate embryo development upon treatment with 1α,25(OH)2D3. Our approach enables the monitoring of biological processes that can serve as a basis for generating new testable hypotheses. Such a network-based integration approach can easily be extended to any temporal- or condition-dependent genomic data analysis.
Non-contact local temperature measurement inside an object using an infrared point detector
NASA Astrophysics Data System (ADS)
Hisaka, Masaki
2017-04-01
Local temperature measurement in deep areas of objects is an important technique in biomedical measurement. We have investigated a non-contact method for measuring the temperature inside an object using a point detector for infrared (IR) light. An IR point detector with a pinhole was constructed, and the radiant IR light emitted from the local interior of the object is detected only at the position of the pinhole, which is placed in an imaging relation to the source point. We measured the thermal structure of the filament inside a miniature bulb using the IR point detector, and investigated the temperature dependence near human body temperature using a glass plate positioned in front of the heat source.
NASA Astrophysics Data System (ADS)
Rumpfhuber, E.; Keller, G. R.; Velasco, A. A.
2005-12-01
Many large-scale experiments conduct both controlled-source and passive deployments to investigate the lithospheric structure of a targeted region. Many of these studies utilize each data set independently, resulting in different images of the Earth depending on the data set investigated. In general, formal integration of these data sets, such as joint inversions, with other data has not been performed. The CD-ROM experiment, which included both 2-D controlled-source and passive recording along a profile extending from southern Wyoming to northern New Mexico, serves as an excellent data set with which to develop a formal integration strategy between controlled-source and passive experiments. These data are ideal for developing this strategy because: 1) the analysis of refraction/wide-angle reflection data yields the Vp structure, and sometimes the Vs structure, of the crust and uppermost mantle; 2) analysis of the PmP phase (Moho reflection) yields estimates of the average Vp of the crust; and 3) receiver functions contain full-crustal reverberations and yield the Vp/Vs ratio, but do not constrain the absolute P and S velocities. Thus, a simple form of integration involves using the Vp/Vs ratio from receiver functions and the average Vp from refraction measurements to solve for the average Vs of the crust. When refraction/wide-angle reflection data and several nearby receiver functions are available, an integrated 2-D model can be derived. In receiver functions, the PS conversion gives the S-wave travel time (ts) through the crust along the raypath traveled from the Moho to the surface. Since the receiver-function crustal reverberation gives the Vp/Vs ratio, it is also possible to use the arrival time of the converted phase, PS, to solve for the travel time of the direct teleseismic P-wave through the crust along the ray path. Raytracing can yield the point where the teleseismic wave intersects the Moho.
In this approach, the conversion point is essentially a pseudo-shotpoint; thus, the converted arrival at the surface can be jointly modeled with refraction data using a 3-D inversion code. Employing the combined CD-ROM data sets, we will investigate the joint inversion of controlled-source data and receiver functions.
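The simplest integration step described in the abstract above reduces to one line of arithmetic: the average crustal S velocity follows from the refraction-derived average Vp and the receiver-function Vp/Vs ratio. A minimal sketch, with illustrative values rather than CD-ROM results:

```python
# The simple integration step above in one function: given the average crustal
# Vp from refraction data and the Vp/Vs ratio from receiver-function
# reverberations, solve for the average crustal Vs. The numbers are
# illustrative, not values from the CD-ROM experiment.

def average_vs(vp_avg_km_s: float, vp_vs_ratio: float) -> float:
    """Average crustal S-wave velocity: Vs = Vp / (Vp/Vs)."""
    if vp_vs_ratio <= 1.0:
        raise ValueError("Vp/Vs should exceed 1 for crustal rocks")
    return vp_avg_km_s / vp_vs_ratio

print(round(average_vs(6.2, 1.75), 2))   # -> 3.54 km/s for typical crustal values
```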
NASA Astrophysics Data System (ADS)
Lauer, F.; Frede, H.-G.; Breuer, L.
2012-04-01
Spatially confined groundwater discharge can contribute significantly to stream discharge. Distributed fibre optic temperature sensing (DTS) of stream water has been successfully used to localize and quantify groundwater discharge from such "point sources" (PS) in small first-order streams. During periods when stream and groundwater temperatures differ, a PS appears as an abrupt step in the longitudinal stream-water temperature distribution. Based on stream temperature observations up- and downstream of a point source and an estimated or measured groundwater temperature, the proportion of groundwater inflow to stream discharge can be quantified using simple mixing models. So far, however, this method has not been quantitatively verified, nor has a detailed uncertainty analysis of the method been conducted. The relative accuracy of the method is expected to decrease nonlinearly with decreasing proportions of lateral inflow. Furthermore, it depends on the temperature difference (ΔT) between groundwater and surface water and on the accuracy of the temperature measurement itself. The latter can be affected by different sources of error: for example, it has been shown that the direct impact of solar radiation on fibre optic cables can lead to errors in temperature measurements in small streams due to low water depth. Considerable uncertainty might also be related to the determination of groundwater temperature through direct measurements or derived from the DTS signal. In order to directly validate the method and assess its uncertainty, we performed a set of artificial point-source experiments with controlled lateral inflow rates to a natural stream. The experiments were carried out at the Vollnkirchener Bach, a small headwater stream in Hessen, Germany, in November and December 2011 during a low-flow period. A DTS system was installed along a 1.2 km sub-reach of the stream. Stream discharge was measured using a gauging flume installed directly upstream of the artificial PS.
Lateral inflow was simulated using a pumping system connected to a 2 m3 water tank. Pumping rates were controlled using a magnetic inductive flowmeter and kept constant for a period of 30 minutes to 1.5 hours, depending on the simulated inflow rate. Different temperatures of lateral inflow were adjusted by heating the water in the tank (for summer experiments, cooling with ice could be realized). With this setup, different proportions of lateral inflow to stream flow, ranging from 2 to 20%, could be simulated for different ΔTs (2-7°C) between the stream and the inflowing water. Results indicate that the estimation of groundwater discharge through DTS works properly, but that the method is very sensitive to the determination of the PS groundwater temperature. The span of adjusted ΔTs and inflow rates of the artificial system is currently being used to perform a thorough uncertainty analysis of the DTS method and to derive detection-limit thresholds.
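The simple mixing model referred to above can be sketched as follows; the temperatures and the function name are illustrative, not values taken from the experiments:

```python
# The two-component mixing model in code: stream temperatures measured just
# up- and downstream of the point source (PS), plus the groundwater (inflow)
# temperature, give the fraction of downstream discharge contributed by the PS.
# The temperatures below are invented, not values from the experiments.

def inflow_fraction(t_up: float, t_down: float, t_gw: float) -> float:
    """q_gw / Q_down = (T_down - T_up) / (T_gw - T_up)."""
    if t_gw == t_up:
        raise ValueError("undefined when groundwater and stream temperatures match")
    return (t_down - t_up) / (t_gw - t_up)

# A 0.5 degree step at the PS with a 5 degree stream-groundwater contrast:
print(inflow_fraction(t_up=10.0, t_down=10.5, t_gw=15.0))   # -> 0.1 (10% inflow)
```

The denominator makes the sensitivity noted in the abstract explicit: as ΔT = T_gw - T_up shrinks, small errors in the assumed groundwater temperature are amplified in the estimated inflow fraction.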
Time delay of critical images in the vicinity of cusp point of gravitational-lens systems
NASA Astrophysics Data System (ADS)
Alexandrov, A.; Zhdanov, V.
2016-12-01
We consider approximate analytical formulas for the time delays of critical images of a point source in the neighborhood of a cusp caustic. We discuss the zero-, first-, and second-order approximations in powers of a parameter that defines the proximity of the source to the cusp. These formulas link the time delay with characteristics of the lens potential. The zero-order formula was obtained by Congdon, Keeton & Nordgren (MNRAS, 2008). For a general lens potential, we derive the first-order correction to it. If the potential is symmetric with respect to the cusp axis, this correction is identically zero; for that case, we obtain the second-order correction. The relations found are illustrated by a simple model example.
Developing a Near Real-time System for Earthquake Slip Distribution Inversion
NASA Astrophysics Data System (ADS)
Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen
2016-04-01
Advances in observational and computational seismology in the past two decades have enabled completely automatic, real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits a strong finite-source directivity effect, which is critically important for accurate ground motion estimation and earthquake damage assessment. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGTs) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculation of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw ~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fits with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determination of finite-source solutions for seismic hazard mitigation purposes.
First Near-infrared Imaging Polarimetry of Young Stellar Objects in the Circinus Molecular Cloud
NASA Astrophysics Data System (ADS)
Kwon, Jungmi; Nakagawa, Takao; Tamura, Motohide; Hough, James H.; Choi, Minho; Kandori, Ryo; Nagata, Tetsuya; Kang, Miju
2018-02-01
We present the results of near-infrared (NIR) linear imaging polarimetry in the J, H, and Ks bands of the low-mass star cluster-forming region in the Circinus Molecular Cloud Complex. Using aperture polarimetry of point-like sources, positive detections of 314, 421, and 164 sources in the J, H, and Ks bands, respectively, were obtained from among 749 sources whose photometric magnitudes were measured. A color-color diagram was used for the source classification of the 133 point-like sources whose polarization could be measured in all three bands. While most of the NIR polarizations of point-like sources are well aligned and can be explained by dichroic polarization produced by aligned interstellar dust grains in the cloud, 123 highly polarized sources have also been identified using several criteria. The projected direction on the sky of the magnetic field in the Cir-MMS region is indicated by the mean polarization position angle (70°) of the point-like sources in the observed region, corresponding to approximately 1.6 × 1.6 pc2. In addition, the magnetic field direction is compared with the outflow orientations associated with Infrared Astronomy Satellite sources, of which two were found to be aligned with the field and one was not. We also show prominent polarization nebulosities over the Cir-MMS region for the first time. Our polarization data have revealed one clear infrared reflection nebula (IRN) and several candidate IRNe in the Cir-MMS field. In addition, the illuminating sources of the IRNe are identified with near- and mid-infrared sources.
NASA Astrophysics Data System (ADS)
Spiridonov, I.; Shopova, M.; Boeva, R.; Nikolov, M.
2012-05-01
One of the biggest problems in color reproduction processes is the color shift that occurs when images are viewed under different illuminants. Process ink colors and their combinations that match under one light source will often appear different under another light source. This problem is referred to as color balance failure or color inconstancy. The main goals of the present study are to investigate and determine the color balance failure (color inconstancy) of offset-printed images, expressed by color differences and color gamut changes, for three of the illuminants most commonly used in practice: CIE D50, CIE F2, and CIE A. The results obtained are important from both a scientific and a practical point of view. For the first time, a methodology is suggested and implemented for the examination and estimation of color shifts by studying a large number of color and gamut changes in various ink combinations under different illuminants.
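As a minimal illustration of how such a shift can be quantified, the CIE76 color difference between Lab measurements of the same printed patch under two illuminants can be computed as below; the Lab coordinates are invented, and studies of this kind often also use newer formulas such as CIEDE2000:

```python
import math

# A minimal sketch of quantifying color inconstancy: the CIE76 colour
# difference dE*ab between Lab measurements of the same printed patch under
# two illuminants. The Lab coordinates are invented for illustration.

def delta_e76(lab1, lab2):
    """Euclidean distance in CIELAB space (CIE76 dE*ab)."""
    return math.dist(lab1, lab2)

patch_d50 = (52.0, 38.5, 27.0)   # hypothetical Lab under CIE D50
patch_a = (53.1, 42.0, 31.5)     # hypothetical Lab under CIE A
print(round(delta_e76(patch_d50, patch_a), 2))   # a clearly visible shift
```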
The 124Sb activity standardization by gamma spectrometry for medical applications
NASA Astrophysics Data System (ADS)
de Almeida, M. C. M.; Iwahara, A.; Delgado, J. U.; Poledna, R.; da Silva, R. L.
2010-07-01
This work describes a metrological activity determination of 124Sb, which can be used as a radiotracer, applying gamma spectrometry methods with a hyper-pure germanium detector and efficiency curves. This isotope, with adequate activity and high radionuclidic purity, is employed in the form of meglumine antimoniate (Glucantime) or sodium stibogluconate (Pentostam) to treat leishmaniasis. 124Sb is also applied in animal organ distribution studies to answer questions in pharmacology. 124Sb decays by β-emission and produces several photons (X and gamma rays) with energies varying from 27 to 2700 keV. Efficiency curves for measuring point-like solid 124Sb sources were obtained from a 166mHo standard, a multi-gamma reference source. These curves depend on radiation energy, sample geometry, photon attenuation, dead time, and sample-detector position. Results for the activity determination of 124Sb samples using the efficiency curves and a high-purity coaxial germanium detector were consistent across different counting geometries, with uncertainties of about 2% (k=2).
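A common way to use such efficiency curves is sketched below, under the assumption of a low-order polynomial fit in log(efficiency) vs log(energy); the abstract does not specify the actual parameterization, and the calibration pairs here are invented:

```python
import numpy as np

# Illustrative sketch of an HPGe full-energy-peak efficiency curve of the kind
# built from a 166mHo multi-gamma standard: here a quadratic fit in
# log(efficiency) vs log(energy). The calibration pairs are invented, and the
# paper's actual parameterization of its curves is an assumption of this sketch.

e_kev = np.array([81.0, 184.0, 280.0, 712.0, 810.0, 1380.0])
eff = np.array([0.020, 0.015, 0.011, 0.005, 0.0045, 0.0028])

coeffs = np.polyfit(np.log(e_kev), np.log(eff), 2)

def efficiency(energy_kev: float) -> float:
    """Interpolated full-energy-peak efficiency at a given energy (keV)."""
    return float(np.exp(np.polyval(coeffs, np.log(energy_kev))))

# Activity then follows as A = net_counts / (live_time * efficiency * emission_prob)
print(efficiency(602.7))   # 124Sb's main gamma line is near 602.7 keV
```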
Understanding the holobiont: the interdependence of plants and their microbiome.
Sánchez-Cañizares, Carmen; Jorrín, Beatriz; Poole, Philip S; Tkacz, Andrzej
2017-08-01
The holobiont is composed of the plant and its microbiome. In a similar way to ecological systems of higher organisms, the holobiont shows interdependent and complex dynamics [1,2]. While plants originate from seeds, the microbiome has a multitude of sources. The assemblage of these communities depends on the interaction between the emerging seedling and its surrounding environment, with soil being the main source. These microbial communities are controlled by the plant through different strategies, such as the specific profile of root exudates and its immune system. Despite this control, the microbiome is still able to adapt and thrive. The molecular knowledge behind these interactions and microbial '-omic' technologies are developing to the point of enabling holobiont engineering. For a long time microorganisms were in the background of plant biology, but new multidisciplinary approaches have led to an appreciation of the importance of the holobiont, where plants and microbes are interdependent. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Viscoelastic modeling of deformation and gravity changes induced by pressurized magmatic sources
NASA Astrophysics Data System (ADS)
Currenti, Gilda
2018-05-01
Gravity and height changes, which reflect magma accumulation in subsurface chambers, are evaluated using analytical and numerical models in order to investigate their relationships and temporal evolutions. The analysis focuses mainly on the exploration of the time-dependent response of gravity and height changes to the pressurization of ellipsoidal magmatic chambers in viscoelastic media. Firstly, the validation of the numerical Finite Element results is performed by comparison with analytical solutions, which are devised for a simple spherical source embedded in a homogeneous viscoelastic half-space medium. Then, the effect of several model parameters on time-dependent height and gravity changes is investigated thanks to the flexibility of the numerical method in handling complex configurations. Both homogeneous and viscoelastic shell models reveal significantly different amplitudes in the ratio between gravity and height changes depending on geometry factors and medium rheology. The results show that these factors also influence the relaxation characteristic times of the investigated geophysical changes. Overall, these temporal patterns are compatible with time-dependent height and gravity changes observed on Etna volcano during the 1994-1997 inflation period. By modeling the viscoelastic response of a pressurized prolate magmatic source, a general agreement between computed and observed geophysical variations is achieved.
Reduced order modelling in searches for continuous gravitational waves - I. Barycentring time delays
NASA Astrophysics Data System (ADS)
Pitkin, M.; Doolan, S.; McMenamin, L.; Wette, K.
2018-06-01
The frequencies and phases of emission from extra-solar sources measured by Earth-bound observers are modulated by the motion of the observer with respect to the source and by relativistic effects. These modulations depend critically on the source's sky location. Precise knowledge of the modulations is required to coherently track the source's phase over long observations, for example in pulsar timing or in searches for continuous gravitational waves. The modulations can be modelled as sky-location- and time-dependent time delays that convert arrival times at the observer to the inertial frame of the source, which can often be the Solar system barycentre. We study the use of reduced order modelling for speeding up the calculation of this time delay for any sky location. We find that the time delay model can be decomposed into just four basis vectors, and with these the delay for any sky location can be reconstructed to sub-nanosecond accuracy. When compared to standard routines for time delay calculation in gravitational wave searches, using the reduced basis can lead to speed-ups of 30 times. We have also studied components of time delays for sources in binary systems. Assuming eccentricities <0.25, we can reconstruct the delays to within hundreds of nanoseconds, with best-case speed-ups of a factor of 10, or factors of two when interpolating the basis for different orbital periods or time stamps. In long-duration phase-coherent searches for sources with sky-position or binary-parameter uncertainties, these speed-ups could enlarge the search scope without a large additional computational burden.
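The reduced-basis idea can be illustrated on a toy delay model: build training delay vectors over random sky locations, take an SVD to extract a small basis, and reconstruct an unseen location by projection. The delay function below is a toy stand-in, not the real barycentring routine:

```python
import numpy as np

# Toy illustration of the reduced-order-modelling idea: "training" time-delay
# vectors over random sky locations, an SVD to extract a small basis, and
# reconstruction of an unseen sky location by projection onto that basis.
# The delay function is an invented stand-in for the real barycentring code.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)          # observation epochs (arbitrary units)

def toy_delay(alpha, delta):
    # toy Roemer-like delay: projection of a circular orbit onto the source direction
    return (np.cos(alpha) * np.cos(delta) * np.cos(2 * np.pi * t)
            + np.sin(alpha) * np.cos(delta) * np.sin(2 * np.pi * t)
            + 0.4 * np.sin(delta))

alphas = rng.uniform(0.0, 2.0 * np.pi, 30)
deltas = rng.uniform(-np.pi / 2, np.pi / 2, 30)
train = np.array([toy_delay(a, d) for a, d in zip(alphas, deltas)])

_, s, vt = np.linalg.svd(train, full_matrices=False)
basis = vt[:3]                           # three vectors suffice for this toy model

target = toy_delay(1.0, 0.3)             # "new" sky location, not in training set
recon = (basis @ target) @ basis         # project, then reconstruct
print(float(np.max(np.abs(recon - target))))  # near machine precision
```

The singular-value spectrum makes the low rank explicit: past the third value, the spectrum drops to numerical noise, which is the toy analogue of the four-vector basis found in the paper.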
Modeling and Correcting the Time-Dependent ACS PSF
NASA Technical Reports Server (NTRS)
Rhodes, Jason; Massey, Richard; Albert, Justin; Taylor, James E.; Koekemoer, Anton M.; Leauthaud, Alexie
2006-01-01
The ability to accurately measure the shapes of faint objects in images taken with the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope (HST) depends upon detailed knowledge of the Point Spread Function (PSF). We show that thermal fluctuations cause the PSF of the ACS Wide Field Camera (WFC) to vary over time. We describe a modified version of the TinyTim PSF modeling software that creates artificial grids of stars across the ACS field of view at a range of telescope focus values. These models closely resemble the stars in real ACS images. Using 10 bright stars in a real image, we have been able to measure HST's apparent focus at the time of the exposure. TinyTim can then be used to model the PSF at any position on the ACS field of view. This obviates the need for images of dense stellar fields at different focus values, or interpolation between the few observed stars. We show that residual differences between our TinyTim models and real data are likely due to the effects of Charge Transfer Efficiency (CTE) degradation. Furthermore, we discuss stochastic noise that is added to the shape of point sources when distortion is removed, and we present MultiDrizzle parameters that are optimal for weak lensing science. Specifically, we find that reducing the MultiDrizzle output pixel scale and choosing a Gaussian kernel significantly stabilizes the resulting PSF after image combination, while still eliminating cosmic rays/bad pixels, and correcting the large geometric distortion in the ACS. We discuss future plans, which include more detailed study of the effects of CTE degradation on object shapes and releasing our TinyTim models to the astronomical community.
40 CFR 461.73 - New source performance standards (NSPS).
Code of Federal Regulations, 2010 CFR
2010-07-01
...) EFFLUENT GUIDELINES AND STANDARDS BATTERY MANUFACTURING POINT SOURCE CATEGORY Zinc Subcategory § 461.73 New... times. (b) There shall be no discharge allowance for process wastewater pollutants from any battery manufacturing operation other than those battery manufacturing operations listed above. ...
Therapists' and Clients' Perceptions of Bonding as Predictors of Outcome in Multisystemic Therapy®.
Glebova, Tatiana; Foster, Sharon L; Cunningham, Phillippe B; Brennan, Patricia A; Whitmore, Elizabeth A
2017-12-08
This longitudinal study examined whether the strength of and balance in self-reported caregiver, youth, and therapist emotional bonds in mid- and late treatment predicted outcomes in Multisystemic Therapy for adolescent behavior problems in a sample of 164 caregiver-youth dyads. Strength of and balance in bonds related to outcome in different ways, depending on the source of the report and the time point. Results showed a limited association between family members' emotional connection with the therapist and treatment outcome, whereas therapists' perceptions of their bond with the caregiver showed highly significant associations across time. Caregiver-therapist agreement on emotional connection at both time points predicted therapist evaluation of treatment success and successful termination, but this was largely explained by therapists' level of alliance. Balance in bonds with the therapist between caregiver and youth had no significant associations with any outcome. The study's major limitations, such as examining only one component of alliance, and possible implications are discussed. © 2017 Family Process Institute.
Ground-Water Age and its Water-Management Implications, Cook Inlet Basin, Alaska
Glass, Roy L.
2002-01-01
The Cook Inlet Basin encompasses 39,325 square miles in south-central Alaska. Approximately 350,000 people, more than half of Alaska's population, reside in the basin, mostly in the Anchorage area. However, rapid growth is occurring in the Matanuska-Susitna and Kenai Peninsula Boroughs to the north and south of Anchorage. Ground-water resources provide about one-third of the water used for domestic, commercial and industrial purposes in the Anchorage metropolitan area and are the sole sources of water for industries and residents outside Anchorage. In 1997, a study of the Cook Inlet Basin was begun as part of the U.S. Geological Survey's National Water-Quality Assessment Program. Samples of ground water were collected from 35 existing wells in unconsolidated glacial and alluvial aquifers during 1999 to determine the regional quality of ground water beneath about 790 mi2 of developed land and to gain a better understanding of the natural and human factors that affect the water quality (Glass, 2001). Of the 35 wells sampled, 31 had water analyzed for atmospherically derived substances to determine the ground water's travel time from its point of recharge to its point of use or discharge, also known as ground-water age. Ground water moves slowly from its point of recharge to its point of use or discharge. This water starts as rain and melting snow that soak into the ground as recharge. In the Matanuska-Susitna, Anchorage, and Kenai Peninsula areas, ground water generally moves from near the mountain fronts toward Cook Inlet or the major rivers. Much of the water pumped by domestic and public-supply wells may have traveled less than 10 miles, and the trip may have taken as short a time as a few days or as long as several decades.
This ground water is vulnerable to contamination from the land surface, and many contaminants in the water would follow the same paths and have similar travel times from recharge areas to points of use as the chemical substances analyzed in this study. The effects of contamination may not be seen for several years after a contaminant is introduced into the ground-water system. Many contaminants could make the water unsuitable for drinking for many years, even in concentrations too low to detect without expensive chemical tests. The travel time of a chemically conservative substance depends primarily on the velocity of ground water through the aquifer, which in turn depends on the hydrologic characteristics of the aquifer system.
Incorporating the eruptive history in a stochastic model for volcanic eruptions
NASA Astrophysics Data System (ADS)
Bebbington, Mark
2008-08-01
We show how a stochastic version of a general load-and-discharge model for volcanic eruptions can be implemented. The model tracks the history of the volcano through a quantity proportional to stored magma volume. Thus large eruptions can influence the activity rate for a considerable time following, rather than only the next repose as in the time-predictable model. The model can be fitted to data using point-process methods. Applied to flank eruptions of Mount Etna, it exhibits possible long-term quasi-cyclic behavior, and to Mauna Loa, a long-term decrease in activity. An extension to multiple interacting sources is outlined, which may be different eruption styles or locations, or different volcanoes. This can be used to identify an 'average interaction' between the sources. We find significant evidence that summit eruptions of Mount Etna are dependent on preceding flank eruptions, with both flank and summit eruptions being triggered by the other type. Fitted to Mauna Loa and Kilauea, the model had a marginally significant relationship between eruptions of Mauna Loa and Kilauea, consistent with the invasion of the latter's plumbing system by magma from the former.
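A hedged toy of the load-and-discharge idea described above: the stored volume loads steadily, the per-step eruption hazard grows with the store, and each eruption discharges a random fraction. All rates and functional forms are invented for illustration, not fitted to Etna or Mauna Loa data:

```python
import random

# Hedged toy of a stochastic load-and-discharge model: the stored "magma
# volume" x loads at a constant rate, the eruption hazard in each small time
# step grows in proportion to x, and an eruption discharges a random fraction
# of the store. All rates and functional forms are invented for illustration.

random.seed(1)
x = 1.0                  # stored volume (arbitrary units)
dt = 0.01                # time step
load_rate = 1.0          # steady loading rate
hazard_scale = 0.05      # eruption hazard per unit volume per unit time

eruption_times = []
t = 0.0
while t < 500.0:
    t += dt
    x += load_rate * dt
    if random.random() < hazard_scale * x * dt:   # volume-dependent hazard
        eruption_times.append(round(t, 2))
        x *= random.uniform(0.2, 0.8)             # keep 20-80% of the store
print(len(eruption_times))
```

Because a large discharge leaves the store depleted, the hazard stays low for several subsequent intervals, which is the toy analogue of large eruptions influencing the activity rate beyond the next repose.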
Uematsu, Mikio; Kurosawa, Masahiko
2005-01-01
A generalised and convenient skyshine dose analysis method has been developed based on a forward-adjoint folding technique. In this method, the air penetration data were prepared by performing an adjoint DOT3.5 calculation with a cylindrical air-over-ground geometry having an adjoint point source (the importance of unit flux to the dose rate at the detection point) at its centre. The accuracy of the present method was verified by comparison with a forward DOT3.5 calculation. The adjoint flux data can be used as generalised radiation skyshine data for all sorts of nuclear facilities. Moreover, the present method supplies a wealth of energy- and angle-dependent contribution flux data, which will be useful for the detailed shielding design of facilities.
Determine Earthquake Rupture Directivity Using Taiwan TSMIP Strong Motion Waveforms
NASA Astrophysics Data System (ADS)
Chang, Kaiwen; Chi, Wu-Cheng; Lai, Ying-Ju; Gung, YuanCheng
2013-04-01
Inverting seismic waveforms for finite-fault source parameters is important for studying the physics of earthquake rupture processes. It is also important for imaging seismogenic structures in urban areas. Here we analyze the finite-source process and test for the causative fault plane using the accelerograms recorded by the Taiwan Strong-Motion Instrumentation Program (TSMIP) stations. The point-source parameters for the mainshock and aftershocks were first obtained by complete-waveform moment tensor inversions. We then use the seismograms generated by the aftershocks as empirical Green's functions (EGFs) to retrieve the apparent source time functions (ASTFs) at near-field stations using a projected Landweber deconvolution approach. The method for identifying the fault plane relies on the spatial patterns of the apparent source time function durations, which depend on the angle between the rupture direction and the take-off angle and azimuth of the ray. These derived duration patterns are then compared with theoretical patterns, which are functions of the focal depth, epicentral distance, average crustal 1D velocity, fault plane attitude, and rupture direction on the fault plane. As a result, the ASTFs derived from EGFs can be used to infer the ruptured fault plane and the rupture direction. Finally, we used part of the catalog to study important seismogenic structures in the area near Chiayi, Taiwan, where a damaging earthquake occurred about a century ago. Preliminary results show that a strike-slip earthquake on 22 October 1999 (Mw 5.6) ruptured unilaterally toward the SSW on a sub-vertical fault. The procedure developed in this study can be applied to strong-motion waveforms recorded from other earthquakes to better understand their kinematic source parameters.
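The duration pattern the method exploits can be sketched with the classic unilateral (Haskell-type) directivity formula; the rupture length, rupture speed, and phase speed below are illustrative rather than taken from the study:

```python
import math

# Sketch of the directivity pattern the method exploits: for a unilateral
# (Haskell-type) rupture of length L and rupture speed vr, the apparent source
# duration seen along a ray making angle theta with the rupture direction is
# shortened ahead of the rupture and stretched behind it. Values are illustrative.

def apparent_duration(theta_deg, length_km=10.0, vr_km_s=2.5, c_km_s=3.5):
    """ASTF duration tau(theta) = L/vr - (L/c) cos(theta)."""
    theta = math.radians(theta_deg)
    return length_km / vr_km_s - (length_km / c_km_s) * math.cos(theta)

print(round(apparent_duration(0.0), 2))    # toward the rupture: 1.14 s
print(round(apparent_duration(180.0), 2))  # away from the rupture: 6.86 s
```

Comparing durations measured at stations spanning a range of azimuths and take-off angles against this pattern is what discriminates the true fault plane from its auxiliary plane.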
Improving Planck calibration by including frequency-dependent relativistic corrections
NASA Astrophysics Data System (ADS)
Quartin, Miguel; Notari, Alessio
2015-09-01
The Planck satellite detectors are calibrated in the 2015 release using the "orbital dipole", which is the time-dependent dipole generated by the Doppler effect due to the motion of the satellite around the Sun. This effect also has relativistic time-dependent corrections of relative magnitude 10^-3, due to coupling with the "solar dipole" (the motion of the Sun relative to the CMB rest frame), which are included in the data calibration by the Planck collaboration. We point out that such corrections are subject to a frequency-dependent multiplicative factor. This factor differs from unity especially at the highest frequencies, relevant for the HFI instrument. Since current Planck calibration errors are dominated by systematics, to the point that polarization data are currently unreliable at large scales, such a correction can in principle be highly relevant for future data releases.
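To illustrate why the correction matters most for HFI, here is a sketch of a frequency-dependent boost factor. The functional form q(x) = (x/2)coth(x/2), with x = h*nu/(k*T_CMB), is our assumption for illustration; the factor actually derived in the paper may differ in detail, but any such factor reduces to 1 in the Rayleigh-Jeans limit and grows at high frequencies:

```python
import math

H_OVER_K = 0.0479924  # h/k in kelvin per GHz
T_CMB = 2.7255        # CMB monopole temperature in K

def q_factor(nu_ghz):
    """Assumed frequency-dependent factor q(x) = (x/2) * coth(x/2)
    multiplying the quadratic Doppler coupling term."""
    x = H_OVER_K * nu_ghz / T_CMB
    return (x / 2.0) / math.tanh(x / 2.0)

# q stays near 1 for the LFI bands but grows severalfold at the highest
# HFI frequencies, which is where the correction becomes relevant.
factors = {nu: round(q_factor(nu), 2) for nu in (30, 100, 217, 353, 545, 857)}
```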
Time-dependent real space RG on the spin-1/2 XXZ chain
NASA Astrophysics Data System (ADS)
Mason, Peter; Zagoskin, Alexandre; Betouras, Joseph
In order to measure the spread of information in a system of interacting fermions with nearest-neighbour couplings and strong bond disorder, one could utilise a dynamical real space renormalisation group (RG) approach on the spin-1/2 XXZ chain. Under such a procedure, a many-body localised state is established as an infinite randomness fixed point and the entropy scales with time as log(log(t)). One interesting further question that results from such a study is the case when the Hamiltonian explicitly depends on time. Here we answer this question by considering a dynamical renormalisation group treatment of the strongly disordered random spin-1/2 XXZ chain where the couplings are time-dependent and chosen to reflect a (slow) evolution of the governing Hamiltonian. Under the condition that the renormalisation process occurs at fixed time, a set of coupled second order, nonlinear PDEs can be written down in terms of the random distributions of the bonds and fields. Solution of these flow equations at the relevant critical fixed points leads us to establish the dynamics of the flow as we sweep through the quantum critical point of the Hamiltonian. We will present these critical flows and discuss the issues of duality, entropy and many-body localisation.
Geolocation and Pointing Accuracy Analysis for the WindSat Sensor
NASA Technical Reports Server (NTRS)
Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.
2006-01-01
Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and are addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets, resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.
Terrestrial laser scanning in monitoring of anthropogenic objects
NASA Astrophysics Data System (ADS)
Zaczek-Peplinska, Janina; Kowalska, Maria
2017-12-01
The xyz coordinates registered as a point cloud by a terrestrial laser scanner, together with the intensity values (I) assigned to them, make it possible to perform geometric and spectral analyses. Comparison of point clouds registered in different time periods requires conversion of the data to a common coordinate system, and proper data selection is necessary. Factors like point distribution dependent on the distance between the scanner and the surveyed surface, angle of incidence, tasked scan density and intensity value have to be taken into consideration. A prerequisite for a correct analysis of point clouds registered during periodic measurements using a laser scanner is the ability to determine the quality and accuracy of the analysed data. The article presents a concept of spectral data adjustment based on geometric analysis of a surface, as well as examples of analyses integrating geometric and physical data in one cloud of points: point coordinates, recorded intensity values, and thermal images of an object. The experiments described here show multiple possibilities of usage of terrestrial laser scanning data and display the necessity of using multi-aspect and multi-source analyses in anthropogenic object monitoring. The article presents examples of multi-source data analyses with regard to intensity value correction due to the beam's incidence angle. The measurements were performed using a Leica Nova MS50 scanning total station, a Z+F Imager 5010 scanner and the integrated Z+F T-Cam thermal camera.
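The incidence-angle correction mentioned above can be sketched with a simple Lambertian model (an assumption on our part; operational TLS intensity corrections also account for range and surface material):

```python
import math

def corrected_intensity(i_raw, incidence_deg):
    """Illustrative incidence-angle correction (assumed Lambertian model):
    the recorded intensity falls roughly with cos(alpha), so dividing by
    cos(alpha) brings oblique returns onto a common scale."""
    if incidence_deg >= 90:
        raise ValueError("beam parallel to or behind the surface")
    return i_raw / math.cos(math.radians(incidence_deg))

# A return recorded at 60 degrees incidence is scaled up by a factor of 2,
# making it comparable with a normal-incidence return from the same surface.
```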
Effect of Local TOF Kernel Miscalibrations on Contrast-Noise in TOF PET
NASA Astrophysics Data System (ADS)
Clementel, Enrico; Mollet, Pieter; Vandenberghe, Stefaan
2013-06-01
TOF PET imaging requires specific calibrations: accurate characterization of the system timing resolution and timing offset is required to achieve the full potential image quality. Current system models used in image reconstruction assume a spatially uniform timing resolution kernel. Furthermore, although the timing offset errors are often pre-corrected, this correction becomes less accurate over time because, especially in older scanners, the timing offsets are often calibrated only at installation, as the procedure is time-consuming. In this study, we investigate and compare the effects of a local mismatch of timing resolution, when a uniform kernel is applied to systems with local variations in timing resolution, and the effects of uncorrected timing offset errors on image quality. A ring-like phantom was acquired on a Philips Gemini TF scanner and timing histograms were obtained from coincidence events to measure timing resolution along all sets of LORs crossing the scanner center. In addition, multiple acquisitions of a cylindrical phantom, 20 cm in diameter with spherical inserts, and a point source were simulated. A location-dependent timing resolution was simulated, with a median value of 500 ps and increasingly large local variations, and timing offset errors ranging from 0 to 350 ps were also simulated. Images were reconstructed with TOF MLEM with a uniform kernel corresponding to the effective timing resolution of the data, as well as with purposefully mismatched kernels. CRC vs. noise curves were measured over the simulated cylinder realizations, while the simulated point source was processed to generate timing histograms of the data. Results show that timing resolution is not uniform over the FOV of the considered scanner. The simulated phantom data indicate that CRC is moderately reduced in data sets with locally varying timing resolution reconstructed with a uniform kernel, while still performing better than non-TOF reconstruction.
On the other hand, uncorrected offset errors in our setup have a larger potential for decreasing image quality and can lead to a reduction of CRC of up to 15% and an increase in the measured timing resolution kernel of up to 40%. However, in realistic conditions in frequently calibrated systems, using a larger effective timing kernel in image reconstruction can compensate for uncorrected offset errors.
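The sensitivity to timing errors can be made concrete by converting timing resolution to spatial localization along the line of response via the standard TOF relation delta_x = c * delta_t / 2 (the 500 ps and 350 ps figures are from the simulation setup above):

```python
C_MM_PER_PS = 0.2998  # speed of light in mm per ps

def tof_localization_fwhm(timing_fwhm_ps):
    """Spatial localization FWHM along the LOR implied by a coincidence
    timing resolution: delta_x = c * delta_t / 2."""
    return C_MM_PER_PS * timing_fwhm_ps / 2.0

# A 500 ps kernel localizes the annihilation to about 75 mm FWHM, so a
# 350 ps uncorrected offset displaces events by a distance comparable to
# the kernel itself -- consistent with offsets degrading CRC more than
# modest local variations in resolution do.
loc_500 = tof_localization_fwhm(500.0)
shift_350 = C_MM_PER_PS * 350.0 / 2.0
```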
NASA Astrophysics Data System (ADS)
Shi, Chang-Sheng; Zhang, Shuang-Nan; Li, Xiang-Dong
2018-05-01
We recalculate the modes of the magnetohydrodynamic (MHD) waves in the MHD model (Shi, Zhang & Li 2014) of the kilohertz quasi-periodic oscillations (kHz QPOs) in neutron star low mass X-ray binaries (NS-LMXBs), in which the compressed magnetosphere is considered. A method of point-by-point scanning over every parameter of a normal LMXB is proposed to determine the wave number in a NS-LMXB. The dependence of the twin kHz QPO frequencies on accretion rate (\\dot{M}) is then obtained with the wave number and magnetic field (B*) determined by our method. Based on the MHD model, a new explanation of the parallel tracks is presented: the slowly varying effective magnetic field leads to the shift of parallel tracks in a source. In this study, we obtain a simple power-law relation between the kHz QPO frequencies and \\dot{M}/B_{\\ast }^2 in those sources. Finally, we study the dependence of the kHz QPO frequencies on the spin, mass and radius of a neutron star. We find that the effective magnetic field, together with the spin, mass and radius of the neutron star, leads to the parallel tracks in different sources.
Time-series analysis of foreign exchange rates using time-dependent pattern entropy
NASA Astrophysics Data System (ADS)
Ishizaki, Ryuji; Inoue, Masayoshi
2013-08-01
Time-dependent pattern entropy is a method that reduces variations to binary symbolic dynamics and considers the pattern of symbols in a sliding temporal window. We use this method to analyze the instability of daily variations in foreign exchange rates, in particular, the dollar-yen rate. The time-dependent pattern entropy of the dollar-yen rate was found to be high in the following periods: before and after the turning points of the yen from strong to weak or from weak to strong, and the period after the Lehman shock.
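A minimal sketch of the method as described, with our own parameter names and defaults (the paper's window length and pattern length may differ):

```python
import math
from collections import Counter

def pattern_entropy(series, m=3, window=50):
    """Time-dependent pattern entropy (illustrative sketch): reduce the
    series to binary symbols (1 = rise, 0 = fall or no change), then in
    each sliding window compute the Shannon entropy of the distribution
    of m-symbol patterns. High values flag unstable, disordered periods."""
    symbols = [1 if b > a else 0 for a, b in zip(series, series[1:])]
    out = []
    for start in range(len(symbols) - window + 1):
        w = symbols[start:start + window]
        pats = Counter(tuple(w[i:i + m]) for i in range(len(w) - m + 1))
        n = sum(pats.values())
        out.append(-sum(c / n * math.log2(c / n) for c in pats.values()))
    return out

# A monotone series has a single pattern (entropy 0); a strictly
# alternating series splits evenly between two patterns (entropy 1 bit).
```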
Limiting Magnitude, τ, t_eff, and Image Quality in DES Year 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
H. Neilsen, Jr.; Bernstein, Gary; Gruendl, Robert
The Dark Energy Survey (DES) is an astronomical imaging survey being completed with the DECam imager on the Blanco telescope at CTIO. After each night of observing, the DES data management (DM) group performs an initial processing of that night's data, and uses the results to determine which exposures are of acceptable quality, and which need to be repeated. The primary measure by which we declare an image of acceptable quality is $$\\tau$$, a scaling of the exposure time. This is the scale factor that needs to be applied to the open shutter time to reach the same photometric signal to noise ratio for faint point sources under a set of canonical good conditions. These conditions are defined to be seeing resulting in a PSF full width at half maximum (FWHM) of 0.9" and a pre-defined sky brightness which approximates the zenith sky brightness under fully dark conditions. Point source limiting magnitude and signal to noise should therefore vary with $$\\tau$$ in the same way they vary with exposure time. Measurements of point sources and $$\\tau$$ in the first year of DES data confirm that they do. In the context of DES, the symbol $$t_{eff}$$ and the expression "effective exposure time" usually refer to the scaling factor, $$\\tau$$, rather than the actual effective exposure time; the "effective exposure time" in this case refers to the effective duration of one second, rather than the effective duration of an exposure.
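The scaling can be sketched as follows (our reading of the definition: point-source signal-to-noise scales with transmission squared, inverse seeing area, and inverse sky brightness; the canonical dark-sky reference is band dependent in DES and is a placeholder here):

```python
def tau(fwhm_arcsec, sky_brightness, atm_transmission=1.0,
        fwhm_ref=0.9, sky_ref=1.0):
    """Illustrative form of the DES-style scale factor tau: the factor by
    which the open-shutter time would have to be multiplied under canonical
    conditions to match the achieved faint-point-source S/N, i.e.
    tau = eta^2 * (FWHM_ref / FWHM)^2 * (B_ref / B)."""
    return (atm_transmission ** 2
            * (fwhm_ref / fwhm_arcsec) ** 2
            * (sky_ref / sky_brightness))

# Canonical conditions give tau = 1; doubling the seeing FWHM quarters tau,
# and doubling the sky brightness halves it.
```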
Theoretical overview and modeling of the sodium and potassium atmospheres of mercury
NASA Technical Reports Server (NTRS)
Smyth, William H.; Marconi, M. L.
1995-01-01
A general theoretical overview for the sources, sinks, gas-surface interactions, and transport dynamics of sodium and potassium in the exospheric atmosphere of Mercury is given. Information for these four factors, which control the spatial distribution of these two alkali-group gases about the planet, is incorporated in numerical models. The spatial nature and relative importance of the initial source atom atmosphere and the ambient (ballistic hopping) atom atmosphere are then examined and are shown to be controlled and coupled to a great extent by the extremely large and variable solar radiation acceleration experienced by sodium and potassium as they resonantly scatter solar photons. The lateral (antisunward) transport rate of thermally accommodated sodium and potassium ambient atoms is shown to be driven by the solar radiation acceleration and, over a significant portion of Mercury's orbit about the Sun, is sufficiently rapid to be competitive with the short photoionization lifetimes for these atoms when they are located on the sunlit surface near or within about 30 deg of the terminator. The lateral transport rate is characterized by a migration time determined by model calculations for an ensemble of atoms initially starting at a point source on the surface (i.e., a numerical space-time dependent Green's function). Four animations for the space-time evolution of the sodium (or potassium) atmosphere produced by a point source on the surface are presented in videotape format. For extended surface sources of sodium and potassium, the local column density is determined by competition between the photoionization lifetimes and the lateral transport times of atoms originating from different surface source locations.
Sodium surface source fluxes (referenced to Mercury at perihelion) that are required on the sunlit hemisphere to reproduce the typically observed several megarayleighs of D2 emission-line brightness and the inferred column densities of 1-2 x 10(exp 11) atoms per sq cm range from approximately 2-5 x 10(exp 7) atoms/sq cm/sec. The sodium model is applied to study observational data that document an anticorrelation between the average sodium column density and solar radiation acceleration. Lateral transport driven by the solar radiation acceleration is shown to produce this behavior for combinations of different sources and surface accommodation coefficients. The best model fits to the observational data require a significant degree of thermal accommodation of the ambient sodium atoms to the surface and a source rate that decreases as an inverse power of 1.5 to 2 in heliocentric distance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adrián-Martínez, S.; Ardid, M.; Bou-Cabo, M.
2014-05-01
A search for cosmic neutrino sources using six years of data collected by the ANTARES neutrino telescope has been performed. Clusters of muon neutrinos over the expected atmospheric background have been looked for. No clear signal has been found. The most signal-like accumulation of events is located at equatorial coordinates R.A. = –46.°8 and decl. = –64.°9 and corresponds to a 2.2σ background fluctuation. In addition, upper limits on the flux normalization of an E {sup –2} muon neutrino energy spectrum have been set for 50 pre-selected astrophysical objects. Finally, motivated by an accumulation of seven events relatively close to the Galactic Center in the recently reported neutrino sample of the IceCube telescope, a search for point sources in a broad region around this accumulation has been carried out. No indication of a neutrino signal has been found in the ANTARES data and upper limits on the flux normalization of an E {sup –2} energy spectrum of neutrinos from point sources in that region have been set. The 90% confidence level upper limits on the muon neutrino flux normalization vary between 3.5 and 5.1 × 10{sup –8} GeV cm{sup –2} s{sup –1}, depending on the exact location of the source.
Origin of acoustic emission produced during single point machining
NASA Astrophysics Data System (ADS)
Heiple, C. R.; Carpenter, S. H.; Armentrout, D. L.
1991-05-01
Acoustic emission was monitored during single point, continuous machining of 4340 steel and Ti-6Al-4V as a function of heat treatment. Acoustic emission produced during tensile and compressive deformation of these alloys has been previously characterized as a function of heat treatment. Heat treatments which increase the strength of 4340 steel increase the amount of acoustic emission produced during deformation, while heat treatments which increase the strength of Ti-6Al-4V decrease the amount of acoustic emission produced during deformation. If chip deformation were the primary source of acoustic emission during single point machining, then opposite trends in the level of acoustic emission produced during machining as a function of material strength would be expected for these two alloys. Trends in rms acoustic emission level with increasing strength were similar for both alloys, demonstrating that chip deformation is not a major source of acoustic emission in single point machining. Acoustic emission has also been monitored as a function of machining parameters on 6061-T6 aluminum, 304 stainless steel, 17-4PH stainless steel, lead, and Teflon. The data suggest that sliding friction between the nose and/or flank of the tool and the newly machined surface is the primary source of acoustic emission. Changes in acoustic emission with tool wear were strongly material dependent.
Effective dynamics of a classical point charge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polonyi, Janos, E-mail: polonyi@iphc.cnrs.fr
2014-03-15
The effective Lagrangian of a point charge is derived by eliminating the electromagnetic field within the framework of the classical closed time path formalism. The short distance singularity of the electromagnetic field is regulated by an UV cutoff. The Abraham–Lorentz force is recovered and its similarity to quantum anomalies is underlined. The full cutoff-dependent linearized equation of motion is obtained, no runaway trajectories are found but the effective dynamics shows acausality if the cutoff is beyond the classical charge radius. The strength of the radiation reaction force displays a pole in its cutoff-dependence in a manner reminiscent of the Landau-pole of perturbative QED. Similarity between the dynamical breakdown of the time reversal invariance and dynamical symmetry breaking is pointed out. -- Highlights: •Extension of the classical action principle for dissipative systems. •New derivation of the Abraham–Lorentz force for a point charge. •Absence of a runaway solution of the Abraham–Lorentz force. •Acausality in classical electrodynamics. •Renormalization of classical electrodynamics of point charges.
High-Order Residual-Distribution Hyperbolic Advection-Diffusion Schemes: 3rd-, 4th-, and 6th-Order
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza R.; Nishikawa, Hiroaki
2014-01-01
In this paper, spatially high-order Residual-Distribution (RD) schemes using the first-order hyperbolic system method are proposed for general time-dependent advection-diffusion problems. The corresponding second-order time-dependent hyperbolic advection-diffusion scheme was first introduced in [NASA/TM-2014-218175, 2014], where rapid convergences over each physical time step, with typically less than five Newton iterations, were shown. In that method, the time-dependent hyperbolic advection-diffusion system (linear and nonlinear) was discretized by the second-order upwind RD scheme in a unified manner, and the system of implicit residual equations was solved efficiently by Newton's method over every physical time step. In this paper, two techniques for the source term discretization are proposed: (1) reformulation of the source terms with their divergence forms, and (2) correction to the trapezoidal rule for the source term discretization. Third-, fourth-, and sixth-order RD schemes are then proposed with the above techniques that, relative to the second-order RD scheme, only cost the evaluation of either the first derivative or both the first and the second derivatives of the source terms. A special fourth-order RD scheme is also proposed that is even less computationally expensive than the third-order RD schemes. The second-order Jacobian formulation was used for all the proposed high-order schemes. The numerical results are then presented for both steady and time-dependent linear and nonlinear advection-diffusion problems. It is shown that these newly developed high-order RD schemes are remarkably efficient and capable of producing the solutions and the gradients to the design order of accuracy with rapid convergence over each physical time step, typically less than ten Newton iterations.
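The second proposed technique, a derivative correction to the trapezoidal rule, can be illustrated on plain 1D quadrature (our example uses the classical Euler-Maclaurin end correction, not the paper's RD source-term discretization): adding a term built from first derivatives raises the trapezoidal rule from second to fourth order.

```python
import math

def corrected_trapezoid(f, df, a, b, n):
    """Composite trapezoidal rule plus the Euler-Maclaurin correction
    -h^2/12 * (f'(b) - f'(a)), which cancels the leading O(h^2) error
    at the cost of evaluating first derivatives at the end points."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s - h * h / 12.0 * (df(b) - df(a))

# Integral of sin on [0, pi] is exactly 2; with only 16 intervals the
# corrected rule is already accurate to a few parts in a million.
approx = corrected_trapezoid(math.sin, math.cos, 0.0, math.pi, 16)
```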
NASA Astrophysics Data System (ADS)
Tourin, A.; Fink, M.
2010-12-01
The concept of time-reversal (TR) focusing was introduced in acoustics by Mathias Fink in the early nineties: a pulsed wave is sent from a source, propagates in an unknown medium and is captured at a transducer array termed a "Time Reversal Mirror" (TRM). The waveforms received at each transducer are then flipped in time and sent back, resulting in a wave converging at the original source regardless of the complexity of the propagation medium. TRMs have now been implemented in a variety of physical scenarios from GHz microwaves to MHz ultrasonics and to hundreds of Hz in ocean acoustics. Common to this broad range of scales is a remarkable robustness exemplified by observations that the more complex the medium (random or chaotic), the sharper the focus. A TRM acts as an antenna that uses complex environments to appear wider than it is, resulting, for a broadband pulse, in a refocusing quality that does not depend on the TRM aperture. We show that the time-reversal concept is also at the heart of very active research fields in seismology and applied geophysics: imaging of seismic sources, passive imaging based on noise correlations, seismic interferometry, and monitoring of CO2 storage using the virtual source method. All these methods can indeed be viewed in a unified framework as an application of the so-called time-reversal cavity approach. That approach uses the fact that a wave field can be predicted at any location inside a volume (without source) from the knowledge of both the field and its normal derivative on the surrounding surface S, which for acoustic scalar waves is mathematically expressed in the Helmholtz-Kirchhoff (HK) integral. Thus in the first step of an ideal TR process, the field coming from a point-like source as well as its normal derivative should be measured on S. In a second step, the initial source is removed and monopole and dipole sources reemit the time reversal of the components measured in the first step.
Instead of directly computing the resulting HK integral along S, physical arguments can be used to predict straightforwardly that the time-reversed field in the cavity can be written as the difference of advanced and retarded Green's functions centred on the initial source position. This result is in some way disappointing because it means that reversing a field using a closed TRM is not enough to realize a perfect time-reversal experiment: in practical applications, the converging wave is always followed by a diverging one (see figure). However, we will show that this result is of great importance since it furnishes the basis for imaging methods in media with no active source. We will focus especially on the virtual source method, showing that it can be used to implement the DORT method (Decomposition of the Time Reversal Operator) in a passive way. The passive DORT method could be interesting for monitoring changes in a complex scattering medium, for example in the context of CO2 storage. We will conclude with time-reversal imaging applied to the giant Sumatra earthquake.
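The robustness of single-channel time reversal in a complex medium can be sketched numerically: re-propagating the time-reversed record through the same impulse response yields the autocorrelation of the medium response, which peaks sharply at the refocusing time however complicated the medium is (a toy 1D model with a hypothetical random medium):

```python
import random

def convolve(a, b):
    """Discrete linear convolution (stands in for propagation through a
    linear time-invariant medium)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

random.seed(0)
h = [random.uniform(-1, 1) for _ in range(64)]  # random multipath medium
s = [1.0]                                       # impulse source
received = convolve(s, h)                       # step 1: record at the TRM
refocused = convolve(received[::-1], h)         # step 2: time-reverse, re-emit
# The refocused field is the autocorrelation of h: its largest value sits
# at the zero-lag index len(h) - 1, i.e. the wave recompresses at the source.
peak = max(range(len(refocused)), key=lambda i: abs(refocused[i]))
```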
McLean, Thomas D; Moore, Murray E; Justus, Alan L; Hudston, Jonathan A; Barbé, Benoît
2016-11-01
Evaluation of continuous air monitors in the presence of a plutonium aerosol is time intensive, expensive, and requires a specialized facility. The Radiation Protection Services Group at Los Alamos National Laboratory has designed a Dynamic Radioactive Source, intended to replace plutonium aerosol challenge testing. The Dynamic Radioactive Source is small enough to be inserted into the sampler filter chamber of a typical continuous air monitor. Time-dependent radioactivity is introduced from electroplated sources for real-time testing of a continuous air monitor where a mechanical wristwatch motor rotates a mask above an alpha-emitting electroplated disk source. The mask is attached to the watch's minute hand, and as it rotates, more of the underlying source is revealed. The measured alpha activity increases with time, simulating the arrival of airborne radioactive particulates at the air sampler inlet. The Dynamic Radioactive Source allows the temporal behavior of puff and chronic release conditions to be mimicked without the need for radioactive aerosols. The new system is configurable to different continuous air monitor designs and provides an in-house testing capability (benchtop compatible). It is a repeatable and reusable system and does not contaminate the tested air monitor. Test benefits include direct user control, realistic (plutonium) aerosol spectra, and iterative development of continuous air monitor alarm algorithms. Data obtained using the Dynamic Radioactive Source has been used to elucidate alarm algorithms and to compare the response time of two commercial continuous air monitors.
McLean, Thomas D.; Moore, Murray E.; Justus, Alan L.; ...
2016-01-01
Evaluation of continuous air monitors in the presence of a plutonium aerosol is time intensive, expensive, and requires a specialized facility. The Radiation Protection Services Group at Los Alamos National Laboratory has designed a Dynamic Radioactive Source, intended to replace plutonium aerosol challenge testing. Furthermore, the Dynamic Radioactive Source is small enough to be inserted into the sampler filter chamber of a typical continuous air monitor. Time-dependent radioactivity is introduced from electroplated sources for real-time testing of a continuous air monitor where a mechanical wristwatch motor rotates a mask above an alpha-emitting electroplated disk source. The mask is attached to the watch's minute hand, and as it rotates, more of the underlying source is revealed. The alpha activity we measured increases with time, simulating the arrival of airborne radioactive particulates at the air sampler inlet. The Dynamic Radioactive Source allows the temporal behavior of puff and chronic release conditions to be mimicked without the need for radioactive aerosols. The new system is configurable to different continuous air monitor designs and provides an in-house testing capability (benchtop compatible). It is a repeatable and reusable system and does not contaminate the tested air monitor. Test benefits include direct user control, realistic (plutonium) aerosol spectra, and iterative development of continuous air monitor alarm algorithms. We also used data obtained using the Dynamic Radioactive Source to elucidate alarm algorithms and to compare the response time of two commercial continuous air monitors.
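The linear uncovering of the source by the minute-hand mask can be sketched as follows (an illustrative model of our own; the real mask geometry and sweep profile may differ):

```python
def revealed_activity(total_bq, elapsed_s, period_s=3600.0):
    """Activity exposed by the rotating mask: the minute hand sweeps one
    full turn per hour, uncovering the electroplated disk linearly, so the
    exposed activity ramps from zero to the full source strength over one
    revolution and then stays fully uncovered."""
    frac = min(max(elapsed_s / period_s, 0.0), 1.0)
    return total_bq * frac

# Half a revolution (30 min) exposes half the source activity; varying the
# sweep profile would mimic puff versus chronic release time histories.
```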
NASA Astrophysics Data System (ADS)
Goodwell, Allison E.; Kumar, Praveen
2017-07-01
Information theoretic measures can be used to identify nonlinear interactions between source and target variables through reductions in uncertainty. In information partitioning, multivariate mutual information is decomposed into synergistic, unique, and redundant components. Synergy is information shared only when sources influence a target together, uniqueness is information only provided by one source, and redundancy is overlapping shared information from multiple sources. While this partitioning has been applied to provide insights into complex dependencies, several proposed partitioning methods overestimate redundant information and omit a component of unique information because they do not account for source dependencies. Additionally, information partitioning has only been applied to time-series data in a limited context, using basic pdf estimation techniques or a Gaussian assumption. We develop a Rescaled Redundancy measure (Rs) to solve the source dependency issue, and present Gaussian, autoregressive, and chaotic test cases to demonstrate its advantages over existing techniques in the presence of noise, various source correlations, and different types of interactions. This study constitutes the first rigorous application of information partitioning to environmental time-series data, and addresses how noise, pdf estimation technique, or source dependencies can influence detected measures. We illustrate how our techniques can unravel the complex nature of forcing and feedback within an ecohydrologic system with an application to 1 min environmental signals of air temperature, relative humidity, and windspeed. The methods presented here are applicable to the study of a broad range of complex systems composed of interacting variables.
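The bookkeeping of information partitioning can be sketched using the crude minimum-MI redundancy of Williams and Beer as a stand-in for the paper's rescaled redundancy Rs (this is our simplification, chosen only to show how the four components fit together; the XOR target is the textbook purely synergistic case):

```python
import math
from collections import Counter

def mi(pairs):
    """Mutual information in bits estimated from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# Partition I(X1,X2 ; Y) into redundancy R, unique U1, U2 and synergy S:
# R = min(I1, I2), U1 = I1 - R, U2 = I2 - R, S = Itot - U1 - U2 - R.
samples = [(x1, x2, x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]  # XOR target
i1 = mi([(x1, y) for x1, _, y in samples])
i2 = mi([(x2, y) for _, x2, y in samples])
itot = mi([((x1, x2), y) for x1, x2, y in samples])
r = min(i1, i2)
u1, u2 = i1 - r, i2 - r
synergy = itot - u1 - u2 - r  # XOR: neither source alone informs, together 1 bit
```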
NASA Astrophysics Data System (ADS)
Riddick, S. N.; Blackall, T. D.; Dragosits, U.; Tang, Y. S.; Moring, A.; Daunt, F.; Wanless, S.; Hamer, K. C.; Sutton, M. A.
2017-07-01
Many studies in recent years have highlighted the ecological implications of adding reactive nitrogen (Nr) to terrestrial ecosystems. Seabird colonies represent a situation with concentrated sources of Nr, through excreted and accumulated guano, often occurring in otherwise nutrient-poor areas. To date, there has been little attention given to modelling N flows in this context, and particularly to quantifying the relationship between ammonia (NH3) emissions and meteorology. This paper presents a dynamic mass-flow model (GUANO) that simulates temporal variations in NH3 emissions from seabird guano. While the focus is on NH3 emissions, the model necessarily also treats the interaction with wash-off as far as this affects NH3. The model is validated using NH3 emissions measurements from seabird colonies across a range of climates, from sub-polar to tropical. In simulations for hourly time-resolved data, the model is able to capture the observed dependence of NH3 emission on environmental variables, with temperature and wind speed having the greatest effects on emission for the cases considered. In comparison with empirical data, the percentage of excreted nitrogen that volatilizes as NH3 is found to range from 2% to 67% (based on measurements), with the GUANO model providing a range of 2%-82%. The model provides a tool that can be used to investigate the meteorological dependence of NH3 emissions from seabird guano and provides a starting point to refine models of NH3 emissions from other sources.
NASA Astrophysics Data System (ADS)
Lo, K. W.; Ngan, K.
2015-12-01
The age of air, which measures the time elapsed between the emission of a chemical constituent and its arrival at a receptor location, has many applications in urban air quality. Typically it has been estimated for special cases, e.g. the local mean age of air for a spatially homogeneous source. An alternative approach uses the response to a point source to determine the distribution of transit times or tracer ages connecting the source and receptor. The distribution (age spectrum) and first moment (mean tracer age) have proven to be useful diagnostics in stratospheric modelling because they can be related to observations and do not require a priori assumptions. The tracer age and age spectrum are applied to the pollutant ventilation of street canyons in this work. Using large-eddy simulations of flow over a single isolated canyon and an uneven, non-uniform canyon array, it is shown that the structure of the tracer age is dominated by the central canyon 'vortex'; small variations in the building height have a significant influence on the structure of the tracer age and the pollutant ventilation. The age spectrum is broad, with a long exponential tail whose slope depends on the canyon geometry. The mean tracer age, which roughly characterises the ventilation strength, is much greater than the local mean age of air.
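The mean tracer age is the first moment of the age spectrum, Gamma = integral(t G(t) dt) / integral(G(t) dt). A minimal numerical sketch (our discretization; for a purely exponential spectrum with time scale T the mean age is T):

```python
import math

def mean_tracer_age(response, dt):
    """First moment of the age spectrum: given the tracer response G(t)
    at a receptor after a point-source pulse, return the mean transit time
    as a plain discrete approximation of integral(t*G)/integral(G)."""
    total = sum(response)
    return sum(i * dt * g for i, g in enumerate(response)) / total

# An exponential tail G(t) = exp(-t/T), like the broad spectra found in the
# canyon simulations, has mean age T.
T, dt = 5.0, 0.01
g = [math.exp(-i * dt / T) for i in range(5000)]
age = mean_tracer_age(g, dt)  # approximately T
```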
Effect of an overhead shield on gamma-ray skyshine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stedry, M.H.; Shultis, J.K.; Faw, R.E.
1996-06-01
A hybrid Monte Carlo and integral line-beam method is used to determine the effect of a horizontal slab shield above a gamma-ray source on the resulting skyshine doses. A simplified Monte Carlo procedure is used to determine the energy and angular distribution of photons escaping the source shield into the atmosphere. The escaping photons are then treated as a bare, point, skyshine source, and the integral line-beam method is used to estimate the skyshine dose at various distances from the source. From results for arbitrarily collimated and shielded sources, the skyshine dose is found to depend primarily on the mean-free-path thickness of the shield and only very weakly on the shield material.
A smart market for nutrient credit trading to incentivize wetland construction
NASA Astrophysics Data System (ADS)
Raffensperger, John F.; Prabodanie, R. A. Ranga; Kostel, Jill A.
2017-03-01
Nutrient trading and constructed wetlands are widely discussed solutions for reducing nutrient pollution. Nutrient markets usually include agricultural nonpoint sources and municipal and industrial point sources, but these markets rarely include investors who construct wetlands to sell nutrient reduction credits. We propose a new market design for trading nutrient credits, with both point source and nonpoint source traders, explicitly incorporating the option of landowners to build nutrient removal wetlands. The proposed trading program is designed as a smart market with centralized clearing, done with an optimization. The market design addresses the varying impacts of runoff over space and time, and the lumpiness of wetland investments. We simulated the market for the Big Bureau Creek watershed in north-central Illinois. We found that the proposed smart market would incentivize wetland construction by assuring reasonable payments for the ecosystem services provided. The proposed market mechanism selects wetland locations strategically, taking into account both cost and nutrient removal efficiency. The centralized market produces locational prices that would incentivize voluntary nutrient reductions by farmers. As we illustrate, wetland builders' participation in nutrient trading would enable point sources and environmental organizations to buy low-cost nutrient credits.
Dependence of Adaptive Cross-correlation Algorithm Performance on the Extended Scene Image Quality
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2008-01-01
Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy shifts as large as several pixels between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm, and works with both point-source spot images and extended-scene images. We have demonstrated previously, based on measured images, that the ACC algorithm can determine image shifts with an accuracy as high as 0.01 pixel for shifts as large as 3 pixels, and yields similar results for both point-source spot images and extended-scene images. The shift-estimate accuracy of the ACC algorithm depends on illumination level, background, and scene content in addition to the amount of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended-scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for ACC-algorithm-based extended-scene Shack-Hartmann sensors.
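The core of such FFT-based shift estimation is cross-correlation computed in the Fourier domain, with the correlation peak locating the displacement between two cells. A minimal sketch of the integer-pixel step (the ACC algorithm itself iterates to reach sub-pixel accuracy; this simplified version, its array sizes, and the synthetic texture are illustrative assumptions, not the published implementation):

```python
import numpy as np

def integer_shift(ref, img):
    """Estimate the integer-pixel shift of img relative to ref from the
    peak of the FFT-based cross-correlation."""
    # Cross-power conj(F(ref)) * F(img); its inverse FFT peaks at the shift
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))

# Illustrative extended-scene cell: random texture shifted by (3, -2) pixels
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(ref, (3, -2), axis=(0, 1))
print(integer_shift(ref, img))  # (3, -2)
```

Sub-pixel refinement (not shown) then interpolates around the peak or iteratively re-shifts one cell until the residual displacement vanishes.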
USDA-ARS?s Scientific Manuscript database
Continued public support for U.S. tax-payer funded programs aimed at reducing agricultural non-point source pollutants depends on clear demonstrations of water quality improvements. Effectiveness of structural BMPs, as well as watershed monitoring networks is an important information need to make f...
Phytoplankton: a significant trophic source for soft corals?
NASA Astrophysics Data System (ADS)
Widdig, Alexander; Schlichter, Dietrich
2001-08-01
Histological autoradiographs and biochemical analyses show that 14C-labelled microalgae (diatoms, chlorophytes and dinoflagellates) are used by the soft coral Dendronephthya sp. Digestion of the algae took place at the point of exit of the pharynx into the coelenteron. Ingestion and assimilation of the labelled algae depended on incubation time, cell density, and to a lesser extent on species-specificity. 14C incorporation into polysaccharides, proteins, lipids and compounds of low molecular weight was analysed. The 14C-labelling patterns of the four classes of substances varied depending on incubation time and cell density. 14C incorporation was highest into lipids and proteins. Dissolved labelled algal metabolites, released during incubation into the medium, contributed between 4% and 25% to the total 14C activity incorporated. The incorporated microalgae contributed a maximum of 26% (average of the four species studied) to the daily organic carbon demand, as calculated from assimilation rates at natural eucaryotic phytoplankton densities and a 1 h incubation period. The calculated contribution to the daily organic carbon demand decreased after prolonged incubation periods to about 5% after 3 h and to 1-3% after 9 h. Thus the main energetic demand of Dendronephthya sp. has to be complemented by other components of the seston.
ERIC Educational Resources Information Center
Heric, Matthew; Carter, Jenn
2011-01-01
Cognitive readiness (CR) and performance for operational time-critical environments are continuing points of focus for military and academic communities. In response to this need, we designed an open source interactive CR assessment application as a highly adaptive and efficient open source testing administration and analysis tool. It is capable…
NASA Astrophysics Data System (ADS)
Yonezawa, A.; Kuroda, R.; Teramoto, A.; Obara, T.; Sugawa, S.
2014-03-01
We statistically evaluated the effective time constants of random telegraph noise (RTN) for various operation timings of in-pixel source-follower transistors, and discuss the dependence of the RTN time constants on the duty ratio (on/off ratio) of the MOSFET, which is controlled by the gate-to-source voltage (VGS). Under a general readout operation of a CMOS image sensor (CIS), the row-selected pixel source followers (SFs) turn on, and the non-selected pixel SFs operate at different bias conditions depending on the select-switch position; when the select switch is located between the SF driver and the column output line, the SF drivers nearly turn off. The duty ratio and cyclic period of the selected time of the SF driver depend on the operation timing determined by the column readout sequence. By changing the duty ratio from 1 to 7.6 x 10-3, the time-constant ratio of RTN (time to capture ⟨τc⟩)/(time to emission ⟨τe⟩) of some MOSFETs increased, while the RTN amplitudes remained almost the same regardless of the duty ratio. In these MOSFETs, ⟨τc⟩ increased, the majority of ⟨τe⟩ decreased, and a minority of ⟨τe⟩ increased as the duty ratio decreased. The same tendencies in the behavior of ⟨τc⟩ and ⟨τe⟩ were obtained when VGS was decreased. This indicates that the effective ⟨τc⟩ and ⟨τe⟩ converge to their off-state values as the duty ratio decreases. These results are important for the noise reduction, detection, and analysis of in-pixel SFs with RTN.
A spatio-temporal model for probabilistic seismic hazard zonation of Tehran
NASA Astrophysics Data System (ADS)
Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza
2013-08-01
A precondition for all disaster management steps, building damage prediction, and construction code developments is a hazard assessment that shows the exceedance probabilities of different ground motion levels at a site considering different near- and far-field earthquake sources. The seismic sources are usually categorized as time-independent area sources and time-dependent fault sources. While the former incorporates the small and medium events, the latter takes into account only the large characteristic earthquakes. In this article, a probabilistic approach is proposed to aggregate the effects of time-dependent and time-independent sources on seismic hazard. The methodology is then applied to generate three probabilistic seismic hazard maps of Tehran for 10%, 5%, and 2% exceedance probabilities in 50 years. The results indicate an increase in peak ground acceleration (PGA) values toward the southeastern part of the study area, and the PGA variations are mostly controlled by the shear wave velocities across the city. In addition, the implementation of the methodology takes advantage of GIS capabilities, especially raster-based analyses and representations. During the estimation of the PGA exceedance rates, the emphasis has been placed on incorporating the effects of different attenuation relationships and seismic source models by using a logic tree.
NASA Astrophysics Data System (ADS)
Wang, Xu-yang; Zhdanov, Dmitry D.; Potemin, Igor S.; Wang, Ying; Cheng, Han
2016-10-01
One of the challenges of augmented reality is a seamless combination of objects of the real and virtual worlds, for example light sources. We suggest measurement and computation models for reconstruction of the light source position. The model is based on the dependence of the luminance of a small diffuse surface directly illuminated by a point-like source placed at a short distance from the observer or camera. The advantage of the computational model is the ability to eliminate the effects of indirect illumination. The paper presents a number of examples to illustrate the efficiency and accuracy of the proposed method.
A conceptual ground-water-quality monitoring network for San Fernando Valley, California
Setmire, J.G.
1985-01-01
A conceptual groundwater-quality monitoring network was developed for San Fernando Valley to provide the California State Water Resources Control Board with an integrated, basinwide control system to monitor the quality of groundwater. The geology, occurrence and movement of groundwater, land use, background water quality, and potential sources of pollution were described and then considered in designing the conceptual monitoring network. The network was designed to monitor major known and potential point and nonpoint sources of groundwater contamination over time. The network is composed of 291 sites where wells are needed to define the groundwater quality. The ideal network includes four specific-purpose networks to monitor (1) ambient water quality, (2) nonpoint sources of pollution, (3) point sources of pollution, and (4) line sources of pollution. (USGS)
Algorithm for astronomical, point source, signal to noise ratio calculations
NASA Technical Reports Server (NTRS)
Jayroe, R. R.; Schroeder, D. J.
1984-01-01
An algorithm was developed to simulate the expected signal to noise ratios as a function of observation time in the charge coupled device detector plane of an optical telescope located outside the Earth's atmosphere for a signal star, and an optional secondary star, embedded in a uniform cosmic background. By choosing the appropriate input values, the expected point source signal to noise ratio can be computed for the Hubble Space Telescope using the Wide Field/Planetary Camera science instrument.
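The signal-to-noise ratio such an algorithm computes can be sketched with the standard CCD point-source relation, SNR = S·t / sqrt(S·t + n_pix·(B·t + RN²)), where S is the source count rate, B the background rate per pixel, and RN the read noise. The rates and pixel counts below are illustrative assumptions, not values from the algorithm described:

```python
import math

def point_source_snr(t, src_rate, bkg_rate, read_noise, n_pix):
    """Standard CCD SNR for a point source observed for t seconds:
    signal electrons over the quadrature sum of shot, background,
    and read-noise terms."""
    signal = src_rate * t
    noise = math.sqrt(signal + n_pix * (bkg_rate * t + read_noise ** 2))
    return signal / noise

# Illustrative values: 50 e-/s star, 2 e-/s/pix background, 5 e- read
# noise, 25-pixel aperture
for t in (10.0, 100.0, 1000.0):
    print(t, point_source_snr(t, 50.0, 2.0, 5.0, 25))
```

In the background-dominated regime the SNR grows as sqrt(t), which is why the expected SNR is naturally reported as a function of observation time.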
An InSAR-based survey of volcanic deformation in the central Andes
NASA Astrophysics Data System (ADS)
Pritchard, M. E.; Simons, M.
2004-02-01
We extend an earlier interferometric synthetic aperture radar (InSAR) survey covering about 900 remote volcanoes of the central Andes (14°-27°S) between the years 1992 and 2002. Our survey reveals broad (10s of km), roughly axisymmetric deformation at 4 volcanic centers: two stratovolcanoes are inflating (Uturuncu, Bolivia, and Hualca Hualca, Peru); another source of inflation on the border between Chile and Argentina is not obviously associated with a volcanic edifice (here called Lazufre); and a caldera (Cerro Blanco, also called Robledo) in northwest Argentina is subsiding. We explore the range of source depths and volumes allowed by our observations, using spherical, ellipsoidal and crack-like source geometries. We further examine the effects of local topography upon the deformation field and invert for a spherical point-source in both elastic half-space and layered-space crustal models. We use a global search algorithm, with gradient search methods used to further constrain best-fitting models. Inferred source depths are model-dependent, with differences in the assumed source geometry generating a larger range of accepted depths than variations in elastic structure. Source depths relative to sea level are: 8-18 km at Hualca Hualca; 12-25 km for Uturuncu; 5-13 km for Lazufre, and 5-10 km at Cerro Blanco. Deformation at all four volcanoes seems to be time-dependent, and only Uturuncu and Cerro Blanco were deforming during the entire time period of observation. Inflation at Hualca Hualca stopped in 1997, perhaps related to a large eruption of nearby Sabancaya volcano in May 1997, although there is no obvious relation between the rate of deformation and the eruptions of Sabancaya. We do not observe any deformation associated with eruptions of Lascar, Chile; with 16 other volcanoes that had recent small eruptions or fumarolic activity; or with a short-lived thermal anomaly at Chiliques volcano.
We posit a hydrothermal system at Cerro Blanco to explain the rate of subsidence there. For the last decade, we calculate that the ratio of the volume of magma intruded to that extruded is between 1 and 10, and that the combined rate of intrusion and extrusion is within an order of magnitude of the inferred geologic rate.
NASA Astrophysics Data System (ADS)
Li, Weiyao; Huang, Guanhua; Xiong, Yunwu
2016-04-01
The complexity of the spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and the physical and chemical interactions between groundwater and porous media make solute transport in the medium more complicated still. An appropriate method to describe this complexity is essential when studying solute transport and conversion in porous media. Information entropy can measure uncertainty and disorder; we therefore used information entropy theory to investigate the connection between information entropy and the complexity of solute transport in heterogeneous porous media. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated by transition probability. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that the entropy increased as the complexity of the solute transport process increased. For the point source, the one-dimensional entropy of solute concentration increased at first and then decreased along the X and Y directions. As time increased, the entropy peak value remained essentially unchanged, while the peak position migrated along the flow direction (X direction) and approximately coincided with the centroid position. With increasing time, the spatial variability and complexity of the solute concentration increase, which results in increases of the second-order spatial moment and the two-dimensional entropy. The information entropy of the line sources was higher than that of the point sources, and the solute entropy obtained from continuous input was higher than that from instantaneous input.
As the average lithofacies length increased, media continuity increased, the complexity of flow and solute transport weakened, and the corresponding information entropy also decreased. Longitudinal macrodispersivity declined slightly at early times and then rose. The spatial and temporal distribution of the solute had a significant impact on the information entropy, and the information entropy could reflect changes in the solute distribution. Information entropy thus appears to be a tool to characterize the spatial and temporal complexity of solute migration and provides a reference for future research.
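The entropy measure used in this kind of analysis can be sketched as the Shannon entropy of the normalised concentration field, H = -Σ p ln p, with p the fraction of solute mass in each grid cell. A minimal illustration (the grid and the two fields below are illustrative, not from the study; a spread-out plume gives high entropy, a concentrated one gives low entropy):

```python
import numpy as np

def concentration_entropy(c):
    """Shannon entropy of a non-negative concentration field, treating
    normalised concentrations as a probability distribution."""
    p = np.asarray(c, dtype=float).ravel()
    p = p / p.sum()
    p = p[p > 0]                    # 0 * ln 0 is taken as 0
    return -np.sum(p * np.log(p))

n = 50
uniform = np.ones((n, n))               # solute spread evenly: maximal disorder
point = np.zeros((n, n))
point[25, 25] = 1.0                     # all solute in one cell: zero entropy

print(concentration_entropy(uniform))   # ln(2500) ≈ 7.824
print(concentration_entropy(point))     # 0.0
```

This matches the qualitative findings above: as a plume disperses over more cells the distribution flattens and the entropy rises toward its uniform-field maximum.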
Strategy for Texture Management in Metals Additive Manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirka, Michael M.; Lee, Yousub; Greeley, Duncan A.
Additive manufacturing (AM) technologies have long been recognized for their ability to fabricate complex geometric components directly from models conceptualized through computers, allowing for complicated designs and assemblies to be fabricated at lower costs, with shorter time to market, and improved function. Lagging behind the design-complexity aspect is the ability to fully exploit AM processes for control over texture within AM components. Currently, standard heat-fill strategies utilized in AM processes result in largely columnar grain structures. Here, we propose a point heat source fill for the electron beam melting (EBM) process through which the texture in AM materials can be controlled. Using this point heat source strategy, the ability to form either columnar or equiaxed grain structures upon solidification through changes in the process parameters associated with the point heat source fill is demonstrated for the nickel-base superalloy Inconel 718. Mechanically, the material is demonstrated to exhibit either anisotropic properties for the columnar-grained material fabricated using the standard raster scan of the EBM process or isotropic properties for the equiaxed material fabricated using the point heat source fill.
Strategy for Texture Management in Metals Additive Manufacturing
Kirka, Michael M.; Lee, Yousub; Greeley, Duncan A.; ...
2017-01-31
Additive manufacturing (AM) technologies have long been recognized for their ability to fabricate complex geometric components directly from models conceptualized through computers, allowing for complicated designs and assemblies to be fabricated at lower costs, with shorter time to market, and improved function. Lagging behind the design-complexity aspect is the ability to fully exploit AM processes for control over texture within AM components. Currently, standard heat-fill strategies utilized in AM processes result in largely columnar grain structures. Here, we propose a point heat source fill for the electron beam melting (EBM) process through which the texture in AM materials can be controlled. Using this point heat source strategy, the ability to form either columnar or equiaxed grain structures upon solidification through changes in the process parameters associated with the point heat source fill is demonstrated for the nickel-base superalloy Inconel 718. Mechanically, the material is demonstrated to exhibit either anisotropic properties for the columnar-grained material fabricated using the standard raster scan of the EBM process or isotropic properties for the equiaxed material fabricated using the point heat source fill.
Site-Dependent Fluorescence Decay of Malachite Green Doped in Onion Cell
NASA Astrophysics Data System (ADS)
Nakatsuka, Hiroki; Sekine, Masaya; Suzuki, Yuji; Hattori, Toshiaki
1999-03-01
Time-resolved fluorescence measurements of malachite green dye molecules doped in onion cells were carried out. The fluorescence decay time was dependent on the individual cell and on the position of the dye in a cell, which reflects the microscopic dynamics of each bound site. Upon cooling, the decay time increased, and this increase was accelerated at around the freezing point of the onion cell.
A new continuous light source for high-speed imaging
NASA Astrophysics Data System (ADS)
Paton, R. T.; Hall, R. E.; Skews, B. W.
2017-02-01
Xenon arc lamps have been identified as a suitable continuous light source for high-speed imaging, specifically high-speed Schlieren and shadowgraphy. One issue when setting up such systems is the time that it takes to reduce a finite source to the approximation of a point source for z-type schlieren. A preliminary design of a compact compound lens for use with a commercial Xenon arc lamp was tested for suitability. While it was found that there is some dimming of the illumination at the spot periphery, the overall spectral and luminance distribution of the compact source is quite acceptable, especially considering the time benefit that it represents.
Null geodesics and wave front singularities in the Gödel space-time
NASA Astrophysics Data System (ADS)
Kling, Thomas P.; Roebuck, Kevin; Grotzke, Eric
2018-01-01
We explore wave fronts of null geodesics in the Gödel metric emitted from point sources both at, and away from, the origin. For constant time wave fronts emitted by sources away from the origin, we find cusp ridges as well as blue sky metamorphoses where spatially disconnected portions of the wave front appear, connect to the main wave front, and then later break free and vanish. These blue sky metamorphoses in the constant time wave fronts highlight the non-causal features of the Gödel metric. We introduce a concept of physical distance along the null geodesics, and show that for wave fronts of constant physical distance, the reorganization of the points making up the wave front leads to the removal of cusp ridges.
Granger causal time-dependent source connectivity in the somatosensory network
NASA Astrophysics Data System (ADS)
Gao, Lin; Sommerlade, Linda; Coffman, Brian; Zhang, Tongsheng; Stephen, Julia M.; Li, Dichen; Wang, Jue; Grebogi, Celso; Schelter, Bjoern
2015-05-01
Exploration of transient Granger causal interactions in neural sources of electrophysiological activities provides deeper insights into brain information processing mechanisms. However, the underlying neural patterns are confounded by time-dependent dynamics, non-stationarity and observational noise contamination. Here we investigate transient Granger causal interactions using source time-series of somatosensory evoked magnetoencephalographic (MEG) responses elicited by air puff stimulation of the right index finger and recorded using 306-channel MEG from 21 healthy subjects. A new time-varying connectivity approach, combining renormalised partial directed coherence with state space modelling, is employed to estimate fast-changing information flow among the sources. Source analysis confirmed that the somatosensory evoked MEG responses were mainly generated from the contralateral primary somatosensory cortex (SI) and bilateral secondary somatosensory cortices (SII). Transient Granger causality shows serial processing of somatosensory information, 1) from contralateral SI to contralateral SII, 2) from contralateral SI to ipsilateral SII, 3) from contralateral SII to contralateral SI, and 4) from contralateral SII to ipsilateral SII. These results are consistent with established anatomical connectivity between somatosensory regions and previous source modeling results, thereby providing empirical validation of the time-varying connectivity analysis. We argue that the suggested approach provides novel information regarding transient cortical dynamic connectivity, which previous approaches could not assess.
Zhao, Chenhui; Zhang, Guangcheng; Wu, Yibo
2012-01-01
The resin flow behavior in the vacuum assisted resin infusion (VARI) molding process for foam sandwich composites was studied by both visualization flow experiments and computer simulation. Both experimental and simulation results show that the distribution medium (DM) leads to a shorter mold filling time in grooved foam sandwich composites via the VARI process, and that the mold filling time is linearly reduced with an increase in the DM/preform ratio. The pattern of the resin source has a significant influence on the resin filling time: the filling time of a center source is shorter than that of an edge source, a point pattern results in a longer filling time than a linear source, and short edge/center patterns need a longer time to fill the mould than long edge/center sources.
NASA Astrophysics Data System (ADS)
Calvert, Nick; Betcke, Marta M.; Cresswell, John R.; Deacon, Alick N.; Gleeson, Anthony J.; Judson, Daniel S.; Mason, Peter; McIntosh, Peter A.; Morton, Edward J.; Nolan, Paul J.; Ollier, James; Procter, Mark G.; Speller, Robert D.
2015-05-01
Using a short-pulse-width x-ray source and measuring the time-of-flight of photons that scatter from an object under inspection allows the point of interaction to be determined, and a profile of the object to be sampled along the path of the beam. A three-dimensional image can be formed by interrogating the entire object. Using high-energy x-rays enables the inspection of cargo containers with steel walls in the search for concealed items. A longer-pulse-width x-ray source can also be used with deconvolution techniques to determine the points of interaction. We present time-of-flight results from both short (picosecond) width and long (hundreds of nanoseconds) width x-ray sources, and show that the position of scatter can be localised with a resolution of 2 ns, equivalent to 30 cm, for a 3 cm thick plastic test object.
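The localisation here follows from simple time-of-flight geometry: for a photon that travels out to the scatter point and back to a detector near the source, the depth along the beam is d = c·t/2, so the quoted 2 ns timing resolution maps to roughly 30 cm. A sketch of that conversion (the detector-beside-source, straight-line geometry is a simplifying assumption):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def scatter_depth(tof_seconds):
    """Depth of the scatter point along the beam for a photon that
    travels out and back, given the round-trip time of flight."""
    return C * tof_seconds / 2.0

# A 2 ns timing resolution corresponds to ~0.3 m spatial resolution
print(scatter_depth(2e-9))  # ≈ 0.2998 m
```

The same relation, run in reverse, gives the timing resolution required for any target spatial resolution.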
Seismic hazard assessment over time: Modelling earthquakes in Taiwan
NASA Astrophysics Data System (ADS)
Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting
2017-04-01
To assess how the seismic hazard in Taiwan changes over time, we develop a new approach combining the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters of the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault rupture, to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering the above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed times since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress-triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased the rupture probabilities of several neighbouring seismogenic sources in southwestern Taiwan and raised the hazard level for the near future. Our approach draws on the advantage of incorporating long- and short-term models to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than previously published models, and thus offers decision-makers and public officials an adequate basis for rapid evaluation of and response to future emergency scenarios such as victim relocation and sheltering.
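The long-term component of such an assessment can be sketched with the BPT renewal model: given a mean recurrence interval μ and aperiodicity α, the conditional probability of rupture in the next ΔT years, after te quiet years, is P = [F(te+ΔT) − F(te)] / [1 − F(te)], with F the BPT cumulative distribution. A minimal numerical sketch (the fault parameters below are illustrative, not TEM values):

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time density with mean mu and aperiodicity alpha."""
    return math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3)) * \
        math.exp(-((t - mu) ** 2) / (2.0 * mu * alpha**2 * t))

def bpt_cdf(t, mu, alpha, n=20000):
    """CDF by midpoint-rule integration of the density from 0 to t
    (midpoints avoid the singularity of the prefactor at t = 0)."""
    if t <= 0.0:
        return 0.0
    dt = t / n
    return sum(bpt_pdf(dt * (i + 0.5), mu, alpha) for i in range(n)) * dt

def conditional_probability(elapsed, window, mu, alpha):
    """P(rupture within `window` years | quiet for `elapsed` years)."""
    F_e = bpt_cdf(elapsed, mu, alpha)
    F_w = bpt_cdf(elapsed + window, mu, alpha)
    return (F_w - F_e) / (1.0 - F_e)

# Illustrative fault: 200-yr mean recurrence, aperiodicity 0.5, 50-yr window
print(conditional_probability(180.0, 50.0, 200.0, 0.5))  # long quiet period
print(conditional_probability(20.0, 50.0, 200.0, 0.5))   # much lower
```

This captures the key time-dependent behaviour used above: a fault with a long elapsed time since its last rupture carries a higher conditional probability than one that ruptured recently.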
On the scale dependence of earthquake stress drop
NASA Astrophysics Data System (ADS)
Cocco, Massimo; Tinti, Elisa; Cirella, Antonella
2016-10-01
We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale-dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite fault models. We obtain a picture of earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < MW < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of scale invariance of stress drop with source dimension in order to analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model-dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, of which only the latter is constrained by observations.
TIME-DEPENDENT COROTATION RESONANCE IN BARRED GALAXIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Yu-Ting; Taam, Ronald E.; Pfenniger, Daniel, E-mail: ytwu@asiaa.sinica.edu.tw, E-mail: daniel.pfenniger@unige.ch, E-mail: taam@asiaa.sinica.edu.tw
2016-10-20
The effective potential neighboring the corotation resonance region in barred galaxies is shown to be strongly time-dependent in any rotating frame, due to the competition of nearby perturbations of similar strengths with differing rotation speeds. Contrary to the generally adopted assumption that in the bar rotating frame the corotation region should possess four stationary equilibrium points (Lagrange points), with high quality N-body simulations we localize the instantaneous equilibrium points (EPs) and find that they circulate or oscillate broadly in azimuth with respect to the pattern speeds of the inner or outer perturbations. This implies that at the particle level the Jacobi integral is not well conserved around the corotation radius. That is, angular momentum exchanges decouple from energy exchanges, enhancing the chaotic diffusion of stars through the corotation region.
KINETICS OF LOW SOURCE REACTOR STARTUPS. PART II
DOE Office of Scientific and Technical Information (OSTI.GOV)
hurwitz, H. Jr.; MacMillan, D.B.; Smith, J.H.
1962-06-01
A computational technique is described for computation of the probability distribution of power level for a low source reactor startup. The technique uses a mathematical model, for the time-dependent probability distribution of neutron and precursor concentration, having finite neutron lifetime, one group of delayed neutron precursors, and no spatial dependence. Results obtained by the technique are given. (auth)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chi, E-mail: chizheung@gmail.com; Xu, Yiqing; Wei, Xiaoming
2014-07-28
Time-stretch microscopy has emerged as an ultrafast optical imaging concept offering an unprecedented combination of imaging speed and sensitivity. However, a dedicated wideband, coherent optical pulse source with high shot-to-shot stability has been mandatory for time-wavelength mapping, the enabling process for ultrahigh-speed wavelength-encoded image retrieval. From a practical point of view, exploring methods to relax the stringent requirements (e.g., temporal stability and coherence) on the source for time-stretch microscopy is thus of great value. In this paper, we demonstrate time-stretch microscopy by reconstructing the time-wavelength mapping sequence from a wideband incoherent source. Utilizing the time-lens focusing mechanism mediated by a narrow-band pulse source, this approach allows generation of a wideband incoherent source with the spectral efficiency enhanced by a factor of 18. As a proof-of-principle demonstration, time-stretch imaging with a scan rate as high as MHz and diffraction-limited resolution is achieved based on the wideband incoherent source. We note that the concept of time-wavelength sequence reconstruction from a wideband incoherent source can also be generalized to any high-speed optical real-time measurement, where wavelength acts as the information carrier.
NASA Astrophysics Data System (ADS)
Zetterlind, Virgil E., III; Magee, Eric P.
2002-06-01
This study extends branch point tolerant phase reconstructor research to examine the effect of finite time delays and measurement error on system performance. Branch point tolerant phase reconstruction is particularly applicable to atmospheric laser weapon and communication systems, which operate in extended turbulence. We examine the relative performance of a least squares reconstructor, a least squares plus hidden phase reconstructor, and a Goldstein branch point reconstructor for various correction time delays and measurement noise scenarios. Performance is evaluated using a wave-optics simulation that models a 100 km atmospheric propagation of a point source beacon to a transmit/receive aperture. Phase-only corrections are then calculated using the various reconstructor algorithms and applied to an outgoing uniform field. Point Strehl is used as the performance metric. Results indicate that while time delays and measurement noise reduce the performance of branch point tolerant reconstructors, these reconstructors can still outperform least squares implementations in many cases. We also show that branch point detection becomes the limiting factor in measurement-noise-corrupted scenarios.
Sadeghi, Mohammad Hosein; Sina, Sedigheh; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani
2018-02-01
The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is to estimate the effects of the tandem and ovoid applicator on the dose distribution inside the phantom using MCNP5 Monte Carlo simulations. In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum were compared between the two cases. The exact dwell positions and times of the source, and the positions of the dosimetry points, were determined from the images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points; the difference at all important dosimetry points was found to be less than 5%. According to the results, applicator attenuation has no significant effect on the calculated point doses, and the superposition method, which adds the dose from each source obtained by MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy.
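The superposition step described here, summing per-dwell-position dose contributions, can be sketched in a few lines. This is an illustration only: the kernel below is a bare inverse-square point-source kernel with hypothetical dwell positions and times, not the TG-43 formalism or the paper's MC-derived dose rates.

```python
def point_dose(point, dwells, strength=1.0):
    """Superposition estimate of the dose at `point`: the sum over dwell
    positions of dwell_time * kernel. The inverse-square kernel used here
    is a placeholder that ignores scatter and attenuation entirely."""
    total = 0.0
    for (x, y, z, t) in dwells:
        r2 = (point[0] - x) ** 2 + (point[1] - y) ** 2 + (point[2] - z) ** 2
        total += strength * t / r2
    return total

# Hypothetical tandem dwell positions (cm) and dwell times (s)
dwells = [(0.0, 0.0, z, 10.0) for z in (0.0, 0.5, 1.0)]
dose_at_A = point_dose((2.0, 0.0, 0.5), dwells)
```

Comparing such a sum computed with and without an applicator model is exactly the superposition-1 versus superposition-2 contrast described in the abstract.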
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy but with fewer points actually calculated, greatly improving computational efficiency.
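As a concrete reference point, a minimal sketch of the full-memory Grünwald-Letnikov approximation that the adaptive method economizes might look as follows (every previous time point enters the sum; the weights follow the standard binomial recurrence). This is the textbook scheme, not the authors' adaptive-memory variant.

```python
import math

def gl_fractional_derivative(f_hist, alpha, h):
    """Full-memory Grunwald-Letnikov estimate of the order-alpha derivative
    at the latest time point, from equally spaced samples f_hist (oldest
    first). Weight recurrence: w_0 = 1, w_{k+1} = w_k * (k - alpha)/(k + 1),
    i.e. w_k = (-1)^k * binom(alpha, k)."""
    n = len(f_hist)
    w = 1.0
    total = 0.0
    for k in range(n):
        total += w * f_hist[n - 1 - k]     # f(t - k*h)
        w *= (k - alpha) / (k + 1)
    return total / h ** alpha

# Example: half-derivative of f(t) = t at t = 1; exact value is 2/sqrt(pi)
h = 1e-3
hist = [k * h for k in range(1001)]        # samples of f(t) = t on [0, 1]
approx = gl_fractional_derivative(hist, 0.5, h)
```

The cost of this direct form grows with the full history length at every step, which is precisely the burden the adaptive time step memory method reduces.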
The Mean Curvature of the Influence Surface of Wave Equation With Sources on a Moving Surface
NASA Technical Reports Server (NTRS)
Farassat, F.; Farris, Mark
1999-01-01
The mean curvature of the influence surface of the space-time point (x, t) appears in linear supersonic propeller noise theory and in the Kirchhoff formula for a supersonic surface. Both of these problems are governed by the linear wave equation with sources on a moving surface. The influence surface is also called the Sigma-surface in the aeroacoustic literature. This surface is the locus, in a frame fixed to the quiescent medium, of all the points of a radiating surface f(x, t) = 0 whose acoustic signals arrive simultaneously at an observer at position x and time t. Mathematically, the Sigma-surface is produced by the intersection of the characteristic conoid of the space-time point (x, t) and the moving surface. In this paper, we derive the expression for the local mean curvature of the Sigma-surface of the space-time point for a moving rigid or deformable surface f(x, t) = 0. This expression is a complicated function of the geometric and kinematic parameters of the surface f(x, t) = 0. Using the results of this paper, the solution of the governing wave equation of high-speed propeller noise radiation, as well as the Kirchhoff formula for a supersonic surface, can be written as very compact analytic expressions.
Platelets to rings: Influence of sodium dodecyl sulfate on Zn-Al layered double hydroxide morphology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilmaz, Ceren; Unal, Ugur; Koc University, Chemistry Department, Rumelifeneri yolu, Sariyer 34450, Istanbul
2012-03-15
In the current study, the influence of sodium dodecyl sulfate (SDS) on the crystallization of Zn-Al layered double hydroxide (LDH) was investigated. Depending on the SDS concentration, coral-like and, for the first time, ring-like morphologies were obtained in a urea-hydrolysis method. It was revealed that the surfactant level in the starting solution plays an important role in the morphology. A surfactant concentration equal to or above the anion exchange capacity of the LDH is influential in creating different morphologies. Another important parameter was the critical micelle concentration (CMC) of the surfactant. Surfactant concentrations well above the CMC value resulted in ring-like structures. The crystallization mechanism was discussed. Graphical abstract: dependence of ZnAl LDH morphology on SDS concentration. Highlights: In-situ intercalation of SDS in ZnAl LDH was achieved via the urea hydrolysis method. The morphology of ZnAl LDH intercalated with SDS depended on the SDS concentration. A ring-like morphology for SDS-intercalated ZnAl LDH was obtained for the first time. The growth mechanism was discussed. Template-assisted growth of ZnAl LDH was proposed.
Fermi-LAT Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center
Ajello, M.
2016-02-26
The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission towards the Galactic centre (GC) in high-energy γ-rays. This paper describes the analysis of data taken during the first 62 months of the mission in the energy range 1-100 GeV from a 15° × 15° region about the direction of the GC, and implications for the interstellar emissions produced by cosmic-ray (CR) particles interacting with the gas and radiation fields in the inner Galaxy and for the point sources detected. Specialised interstellar emission models (IEMs) are constructed that enable separation of the γ-ray emission from the inner ~1 kpc about the GC from the fore- and background emission from the Galaxy. Based on these models, the interstellar emission from CR electrons interacting with the interstellar radiation field via the inverse Compton (IC) process, and from CR nuclei inelastically scattering off the gas producing γ-rays via π⁰ decays, from the inner ~1 kpc is determined. The IC contribution is found to be dominant in the region and strongly enhanced compared to previous studies. A catalog of point sources for the 15° × 15° region is self-consistently constructed using these IEMs: the First Fermi-LAT Inner Galaxy point source Catalog (1FIG). The spatial locations, fluxes, and spectral properties of the 1FIG sources are presented and compared with γ-ray point sources over the same region taken from existing catalogs, including the Third Fermi-LAT Source Catalog (3FGL). In general, the spatial density of 1FIG sources differs from that of the 3FGL, which is attributed to the different treatments of the interstellar emission and the energy ranges used by the respective analyses. Three 1FIG sources are found to spatially overlap with supernova remnants (SNRs) listed in Green's SNR catalog; these SNRs have not previously been associated with high-energy γ-ray sources. Most 3FGL sources with known multi-wavelength counterparts are also found.
However, the majority of 1FIG point sources are unassociated. After subtracting the interstellar emission and point-source contributions from the data, a residual is found that is a sub-dominant fraction of the total flux. It is, however, brighter than the γ-ray emission associated with interstellar gas in the inner ~1 kpc derived for the IEMs used in this paper, and comparable to the integrated brightness of the point sources in the region for energies ≳ 3 GeV. If spatial templates that peak toward the GC are used to model the positive residual and included in the total model for the 15° × 15° region, the agreement with the data improves, but they do not account for all the residual structure. The spectrum of the positive residual modelled with these templates has a strong dependence on the choice of IEM.
Signal-to-noise ratio for the wide field-planetary camera of the Space Telescope
NASA Technical Reports Server (NTRS)
Zissa, D. E.
1984-01-01
Signal-to-noise ratios for the Wide Field Camera and Planetary Camera of the Space Telescope were calculated as a function of integration time. Models of the optical systems and CCD detector arrays were used with a 27th visual magnitude point source and a 25th visual magnitude per square arc-second extended source. A 23rd visual magnitude per square arc-second background was assumed. The models predicted a signal-to-noise ratio of 10 within 4 hours for the point source centered on a single pixel. Signal-to-noise ratios approaching 10 are estimated for approximately 0.25 x 0.25 arc-second areas within the extended source after 10 hours of integration.
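The scaling of such estimates with integration time follows the standard CCD signal-to-noise equation; a minimal sketch with purely hypothetical count rates (not the paper's instrument model) is:

```python
import math

def ccd_snr(src_rate, sky_rate, dark_rate, read_noise, n_pix, t):
    """Standard CCD point-source SNR: signal counts over the quadrature sum
    of source shot noise, sky + dark shot noise over n_pix pixels, and read
    noise. Rates are in electrons/s, t in seconds."""
    signal = src_rate * t
    noise = math.sqrt(signal + (sky_rate + dark_rate) * n_pix * t
                      + n_pix * read_noise ** 2)
    return signal / noise

# Hypothetical faint-source rates over a 4-pixel aperture
snrs = [ccd_snr(0.02, 0.005, 0.002, 13.0, 4, h * 3600) for h in (1, 2, 4, 10)]
```

In the background-limited regime the SNR grows only as the square root of t, which is why reaching SNR 10 on sources this faint demands multi-hour integrations.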
The effects of correlated noise in phased-array observations of radio sources
NASA Technical Reports Server (NTRS)
Dewey, Rachel J.
1994-01-01
Arrays of radio telescopes are now routinely used to provide increased signal-to-noise when observing faint point sources. However, calculation of the achievable sensitivity is complicated if there are sources in the field of view other than the target source. These additional sources not only increase the system temperatures of the individual antennas, but may also contribute significant 'correlated noise' to the effective system temperature of the array. This problem has been of particular interest in the context of tracking spacecraft in the vicinity of radio-bright planets (e.g., Galileo at Jupiter), but it has broader astronomical relevance as well. This paper presents a general formulation of the problem for the case of a point-like target source in the presence of an additional radio source of arbitrary brightness distribution. We re-derive the well-known result that, in the absence of any background sources, a phased array of N identical antennas is a factor of N more sensitive than a single antenna. We also show that an unphased array of N identical antennas is, on average, no more sensitive than a single antenna if the signals from the individual antennas are combined prior to detection. In the case where a background source is present, we show that the effects of correlated noise are highly geometry dependent, and for some astronomical observations may cause significant fluctuations in the array's effective system temperature.
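The factor-of-N result re-derived here can be illustrated numerically: with coherent (phased) summation the signal voltages add while independent noise powers add, giving an SNR gain of N; with random phases the mean signal power gain collapses to N as well, canceling the noise penalty. A toy sketch modeling random phases as random ±1 signs (an assumption for illustration, not the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 16, 20_000

# Phased sum: signal power gain N**2, noise power gain N -> SNR gain N.
snr_gain_phased = N ** 2 / N

# Unphased sum: each antenna contributes a random phase (a random +/-1 sign
# here). Mean signal power gain is E[(sum of signs)**2] = N, so the SNR
# gain is N/N = 1: on average no better than a single antenna.
sign_power = np.mean([rng.choice([-1.0, 1.0], N).sum() ** 2
                      for _ in range(trials)])
snr_gain_unphased = sign_power / N
```
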
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuba, J; Slaughter, D R; Fittinghoff, D N
We present a detailed comparison of the measured characteristics of Thomson backscattered x-rays produced at the PLEIADES (Picosecond Laser-Electron Interaction for the Dynamic Evaluation of Structures) facility at Lawrence Livermore National Laboratory to predicted results from a newly developed, fully three-dimensional time and frequency-domain code. Based on the relativistic differential cross section, this code has the capability to calculate time and space dependent spectra of the x-ray photons produced from linear Thomson scattering for both bandwidth-limited and chirped incident laser pulses. Spectral broadening of the scattered x-ray pulse resulting from the incident laser bandwidth, perpendicular wave vector components in the laser focus, and the transverse and longitudinal phase space of the electron beam are included. Electron beam energy, energy spread, and transverse phase space measurements of the electron beam at the interaction point are presented, and the corresponding predicted x-ray characteristics are determined. In addition, time-integrated measurements of the x-rays produced from the interaction are presented, and shown to agree well with the simulations.
Improving Planck calibration by including frequency-dependent relativistic corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quartin, Miguel; Notari, Alessio, E-mail: mquartin@if.ufrj.br, E-mail: notari@ffn.ub.es
2015-09-01
The Planck satellite detectors are calibrated in the 2015 release using the 'orbital dipole', which is the time-dependent dipole generated by the Doppler effect due to the motion of the satellite around the Sun. This effect also has relativistic time-dependent corrections of relative magnitude 10⁻³, due to coupling with the 'solar dipole' (the motion of the Sun relative to the CMB rest frame), which are included in the data calibration by the Planck collaboration. We point out that such corrections are subject to a frequency-dependent multiplicative factor. This factor differs from unity especially at the highest frequencies, relevant for the HFI instrument. Since Planck calibration errors are currently dominated by systematics, to the point that polarization data is unreliable at large scales, such a correction can in principle be highly relevant for future data releases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhury, Debashree, E-mail: debashreephys@gmail.com; Basu, B., E-mail: sribbasu@gmail.com
2013-02-15
We have studied the spin-dependent force and the associated momentum space Berry curvature in an accelerating system. The results are derived by taking the non-relativistic limit of a generally covariant Dirac equation with an electromagnetic field present, where the methodology of the Foldy-Wouthuysen transformation is applied to achieve the non-relativistic limit. Spin currents appear due to the combined action of the external electric field, the crystal field, and the induced inertial electric field via the total effective spin-orbit interaction. In an accelerating frame, the crucial role of the momentum space Berry curvature in the spin dynamics has also been addressed from the perspective of spin Hall conductivity. For time-dependent acceleration, the expression for the spin polarization has been derived. Highlights: We study the effect of acceleration on the Dirac electron in the presence of an electromagnetic field, where the acceleration induces an electric field. Spin currents appear due to the total effective electric field via the total spin-orbit interaction. We derive the expression for the spin-dependent force and the spin Hall current, which is zero for a particular acceleration. The role of the momentum space Berry curvature in an accelerating system is discussed. An expression for the spin polarization for time-dependent acceleration is derived.
Investigation of Finite Sources through Time Reversal
NASA Astrophysics Data System (ADS)
Kremers, S.; Brietzke, G.; Igel, H.; Larmat, C.; Fichtner, A.; Johnson, P. A.; Huang, L.
2008-12-01
Under certain conditions time reversal is a promising method to determine earthquake source characteristics without any a priori information (except the earth model and the data). It consists of injecting time-flipped records from seismic stations within the model to create an approximate reverse movie of wave propagation, from which the location of the source point and other information might be inferred. In this study, the backward propagation is performed numerically using a spectral element code. We investigate the potential of time reversal to recover finite source characteristics (e.g., size of the ruptured area, location of asperities, rupture velocity, etc.). We use synthetic data from the SPICE kinematic source inversion blind test initiated to investigate the performance of current kinematic source inversion approaches (http://www.spice-rtn.org/library/valid). The synthetic data set attempts to reproduce the 2000 Tottori earthquake with 33 records close to the fault. We discuss the influence of relaxing the assumed ignorance of prior source information (e.g., origin time, hypocenter, fault location, etc.) on the results of the time reversal process.
Hunchak-Kariouk, Kathryn; Buxton, Debra E.; Hickman, R. Edward
1999-01-01
Relations of water quality to streamflow were determined for 18 water-quality constituents at 28 surface-water-quality stations within the drainage area of the Atlantic Coastal, lower Delaware River, and Delaware Bay Basins for water years 1976-93. Surface-water-quality and streamflow data were evaluated for trends (through time) in constituent concentrations during high and low flows, and relations between constituent concentration and streamflow, and between constituent load and streamflow, were determined. Median concentrations were calculated for the entire period of study (water years 1976-93) and for the last 5 years of the period of study (water years 1989-93) to determine whether any large variation in concentration exists between the two periods. Medians also were used to determine the seasonal Kendall's tau statistic, which was then used to evaluate trends in concentrations during high and low flows. Trends in constituent concentrations during high and low flows were evaluated to determine whether the distribution of the observations changes through time for intermittent (nonpoint storm runoff) and constant (point sources and ground water) sources, respectively. High- and low-flow trends in concentrations were determined for some constituents at 26 of the 28 water-quality stations. Seasonal effects on the relations of concentration to streamflow are evident for 10 constituents at 14 or more stations. Dissolved oxygen shows seasonal dependency at all stations. Negative slopes of relations of concentration to streamflow, which indicate a decrease in concentration at high flows, predominate over positive slopes because of dilution of instream concentrations from storm runoff. The slopes of the regression lines of load to streamflow were determined in order to show the relative contributions to the instream load from constant (point sources and ground water) and intermittent sources (storm runoff).
Greater slope values indicate larger contributions from storm runoff to instream load, which most likely indicate an increased relative importance of nonpoint sources. Load-to-streamflow relations along a stream reach that tend to increase in a downstream direction indicate the increased relative importance of contributions from storm runoff. Likewise, load-to-streamflow relations along a stream reach that tend to decrease in a downstream direction indicate the increased relative importance of point sources and ground-water discharge. The magnitudes of the load slopes for five constituents increase in the downstream direction along the Great Egg Harbor River, indicating an increased relative importance of storm runoff for these constituents along the river. The magnitudes of the load slopes for 11 constituents decrease in the downstream direction along the Assunpink Creek and for 5 constituents along the Maurice River, indicating a decreased relative importance of storm runoff for these constituents along the rivers.
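The trend statistic underlying this analysis, Kendall's tau, counts concordant versus discordant pairs of observations; the seasonal variant computes the score season by season and combines the results. A bare-bones, tie-ignoring version (for orientation only, not the survey's exact procedure):

```python
def kendall_tau(x, y):
    """Kendall's tau: (concordant - discordant pairs) / total pairs.
    Tied pairs contribute zero here; production code should handle ties
    (e.g., the tau-b correction)."""
    n = len(x)
    sign = lambda v: (v > 0) - (v < 0)
    s = sum(sign(x[j] - x[i]) * sign(y[j] - y[i])
            for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)

# A monotonically increasing concentration series gives tau = 1.0,
# a decreasing one gives tau = -1.0 (hypothetical values).
tau_up = kendall_tau([1, 2, 3, 4, 5], [0.1, 0.2, 0.4, 0.5, 0.9])
tau_down = kendall_tau([1, 2, 3, 4, 5], [0.9, 0.5, 0.4, 0.2, 0.1])
```
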
A survey of volcano deformation in the central Andes using InSAR: Evidence for deep, slow inflation
NASA Astrophysics Data System (ADS)
Pritchard, M. E.; Simons, M.
2001-12-01
We use interferometric synthetic aperture radar (InSAR) to survey about 50 volcanos of the central Andes (15-27°S) for deformation during the 1992-2000 time interval. Because of the remote location of these volcanos, the activity of most is poorly constrained. Using the ERS-1/2 C-band radars (5.6 cm), we observe good interferometric correlation south of about 21°S, but poor correlation north of that latitude, especially in southern Peru. This variation is presumably related to regional climate variations. Our survey reveals broad (tens of km), roughly axisymmetric deformation at two volcanic centers with no previously documented deformation. At Uturuncu volcano, in southwestern Bolivia, the deformation rate can be constrained with radar data from several satellite tracks and is about 1 cm/year between 1992 and 2000. We find a second source of volcanic deformation located between Lastarria and Cordon del Azufre volcanos near the Chile/Argentina border. There is less radar data to constrain the deformation in this area, but the rate is also about 1 cm/yr between 1996 and 2000. While the spatial character of the deformation field appears to be affected by the atmosphere at both locations, we do not think that the entire signal is atmospheric, because the signal is observed in several interferograms and nearby edifices do not show similar patterns. The deformation signal appears to be time-variable, although it is difficult to determine whether this is due to real variations in the deformation source or to atmospheric effects. We model the deformation with both a uniform point source of inflation and a tri-axial point-source ellipsoid, and compare both elastic half-space and layered-space models. We also explore the effects of local topography on the deformation field using the method of Williams and Wadge (1998). We invert for source parameters using the global-search Neighborhood Algorithm of Sambridge (1998).
Preliminary results indicate that the sources at both Uturuncu and Lastarria/Cordon del Azufre are model-dependent, but are generally greater than 10 km deep. This depth suggests a potential relationship between the deformation source at Uturuncu and the large Altiplano-Puna Magmatic Complex that has been imaged seismically (e.g., Chmielowski et al., 1999), although the deformation at Lastarria/Cordon del Azufre lies outside the region of lowest seismic velocities (Yuan et al., 2000).
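A uniform point source of inflation in an elastic half-space is commonly modeled with the Mogi solution, whose vertical surface displacement is u_z = (1-ν)/π · ΔV · d/(d²+r²)^(3/2). The sketch below uses hypothetical numbers in the ballpark of the abstract (15 km depth, roughly 1 cm of uplift); it is the textbook half-space formula, not the layered or ellipsoidal models the authors compare.

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source of volume
    change dV at the given depth; r, depth in km and dV in km^3 give
    uz in km."""
    return (1.0 - nu) / np.pi * dV * depth / (depth ** 2 + r ** 2) ** 1.5

r = np.linspace(0.0, 50.0, 501)        # radial distance from source, km
uz = mogi_uz(r, depth=15.0, dV=0.01)   # 0.01 km^3 volume change (hypothetical)

peak_cm = uz[0] * 1e5                  # ~1 cm of uplift directly above source
```

Note how a 15 km deep source spreads roughly centimetre-level uplift over tens of kilometres, matching the broad, axisymmetric signals described above.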
NASA Astrophysics Data System (ADS)
Hu, Y.; Ji, Y.; Egbert, G. D.
2015-12-01
The fictitious time domain (FTD) method, based on the correspondence principle for wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems; the approach can reduce computation time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where sources are in the conductive sea water, we must model the EM fields in the air; to allow for topography, air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry, instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows: (1) The EM diffusion equation is transformed to a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite difference time-stepping scheme, with CPML (convolutional PML) boundary conditions for the whole computational domain including the air and earth, and with an FTD-domain source corresponding to the actual transmitter geometry. Resistivity of the air layers is kept as low as possible, as a compromise between efficiency (longer fictitious time step) and accuracy; we have generally found a host/air resistivity contrast of 10⁻³ to be sufficient. (3) A "modified" Fourier transform (MFT) allows us to recover the system's impulse response from the fictitious time domain in the diffusion (frequency) domain. (4) The result is multiplied by the Fourier transform (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to get the final full-waveform, full-time response of the system in the time domain.
In general, this method can be used to efficiently solve most time-domain EM simulation problems for non-point sources.
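Step (2), the explicit time stepping of the fictitious wave equation, has a very small 1-D analogue. The sketch below is illustrative only: it uses a uniform fictitious velocity, fixed (Dirichlet) boundaries instead of CPML, and a hypothetical Gaussian source pulse at a single cell.

```python
import numpy as np

nx, nt, dx = 400, 600, 1.0
c = np.ones(nx)                 # fictitious wave speed per cell
dt = 0.5 * dx / c.max()         # satisfies the explicit-scheme CFL limit

u_prev = np.zeros(nx)
u = np.zeros(nx)
src = nx // 2                   # point source cell (a loop would span cells)

for n in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]            # discrete Laplacian
    u_next = 2.0 * u - u_prev + (c * dt / dx) ** 2 * lap  # leapfrog update
    u_next[src] += np.exp(-(((n * dt) - 30.0) / 8.0) ** 2)  # Gaussian pulse
    u_prev, u = u, u_next
```

In the actual FTD workflow this fictitious-time field would then be mapped back to the diffusive frequency domain via the modified Fourier transform of step (3).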
Anomalous behavior of 1/f noise in graphene near the charge neutrality point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeshita, Shunpei; Tanaka, Takahiro; Arakawa, Tomonori
2016-03-07
We investigate the noise in single-layer graphene devices from equilibrium to far from equilibrium and find that the 1/f noise shows an anomalous dependence on the source-drain bias voltage (V_SD). While Hooge's relation does not hold around the charge neutrality point, we find that it is recovered in the very low V_SD region. We propose that depinning of the electron-hole puddles is induced at finite V_SD, which may explain this anomalous noise behavior.
Blind source separation problem in GPS time series
NASA Astrophysics Data System (ADS)
Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.
2016-04-01
A critical point in the analysis of ground displacement time series, such as those recorded by space geodetic techniques, is the development of data-driven methods that allow the different sources of deformation to be discerned and characterized in the space and time domains. Multivariate statistics includes several approaches that can be considered part of data-driven methods. A widely used technique is principal component analysis (PCA), which allows us to reduce the dimensionality of the data space while maintaining most of the explained variance of the dataset. However, PCA does not perform well in finding the solution to the so-called blind source separation (BSS) problem, i.e., in recovering and separating the original sources that generate the observed data. This is mainly due to the fact that PCA minimizes the misfit calculated using an L2 norm (χ²), looking for a new Euclidean space where the projected data are uncorrelated. Independent component analysis (ICA) is a popular technique adopted to approach the BSS problem. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we test the use of a modified variational Bayesian ICA (vbICA) method to recover the multiple sources of ground deformation even in the presence of missing data. The vbICA method models the probability density function (pdf) of each source signal using a mix of Gaussian distributions, allowing for more flexibility in the description of the pdf of the sources with respect to standard ICA, and giving a more reliable estimate of them. Here we present its application to synthetic global positioning system (GPS) position time series, generated by simulating deformation near an active fault, including inter-seismic, co-seismic, and post-seismic signals, plus seasonal signals and noise, and an additional time-dependent volcanic source.
We evaluate the ability of the PCA and ICA decomposition techniques to explain the data and to recover the original (known) sources. Using the same number of components, we find that the vbICA method fits the data almost as well as a PCA method, since the χ² increase is less than 10% of the value calculated using a PCA decomposition. Unlike PCA, the vbICA algorithm is found to correctly separate the sources if the correlation of the dataset is low (<0.67) and the geodetic network is sufficiently dense (ten continuous GPS stations within a box of side equal to two times the locking depth of a fault where an earthquake of Mw > 6 occurred). We also provide a cookbook for the use of the vbICA algorithm in analyses of position time series for tectonic and non-tectonic applications.
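The PCA-versus-ICA contrast described above can be reproduced in a few lines with standard FastICA (a stand-in here: the paper's method is the variational Bayesian vbICA, which additionally models source pdfs and missing data). The two sources and the mixing matrix below are synthetic stand-ins for deformation signals recorded on six station channels.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
s1 = np.sign(np.sin(2.0 * t))          # step-like (transient) source
s2 = np.sin(7.0 * t)                   # oscillatory (seasonal-like) source
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))
A = rng.standard_normal((2, 6))        # mixing onto 6 station channels
X = S @ A

S_ica = FastICA(n_components=2, random_state=0).fit_transform(X)
S_pca = PCA(n_components=2).fit_transform(X)   # decorrelated, not separated
```

ICA recovers each source (up to sign and scale) on a single component, whereas the PCA components typically remain mixtures of the two.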
Fast underdetermined BSS architecture design methodology for real time applications.
Mopuri, Suresh; Reddy, P Sreenivasa; Acharyya, Amit; Naik, Ganesh R
2015-01-01
In this paper, we propose a high-speed architecture design methodology for the underdetermined blind source separation (UBSS) algorithm using our recently proposed high-speed discrete Hilbert transform (DHT), targeting real-time applications. In the UBSS algorithm, unlike typical BSS, the number of sensors is less than the number of sources, which is of more interest in real-time applications. The DHT architecture has been implemented based on a sub-matrix multiplication method to compute an M-point DHT, which uses the N-point architecture recursively, where M is an integer multiple of N. The DHT architecture and the state-of-the-art architecture are coded in VHDL for a 16-bit word length, and ASIC implementation is carried out using UMC 90-nm technology at V_DD = 1 V and a 1 MHz clock frequency. The implementation and experimental comparison results show that the proposed DHT design is two times faster than the state-of-the-art architecture.
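What the DHT hardware computes can be cross-checked in software with the standard FFT-based discrete Hilbert transform. This illustrates the transform itself, not the sub-matrix multiplication architecture of the paper:

```python
import numpy as np

def discrete_hilbert(x):
    """Analytic signal via the FFT method: zero negative frequencies,
    double positive ones; the imaginary part of the result is the
    discrete Hilbert transform of x."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

n = np.arange(256)
x = np.cos(2 * np.pi * 8 * n / 256)
ht = discrete_hilbert(x).imag      # for a pure cosine this is the sine
```

For an on-bin cosine the FFT method is exact, which makes it a convenient golden reference when verifying a fixed-point VHDL implementation.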
Robust numerical electromagnetic eigenfunction expansion algorithms
NASA Astrophysics Data System (ADS)
Sainath, Kamalesh
This thesis summarizes developments in rigorous, full-wave, numerical spectral-domain (integral plane wave eigenfunction expansion [PWE]) evaluation algorithms concerning time-harmonic electromagnetic (EM) fields radiated by generally-oriented and positioned sources within planar and tilted-planar layered media exhibiting general anisotropy, thickness, layer number, and loss characteristics. The work is motivated by the need to accurately and rapidly model EM fields radiated by subsurface geophysical exploration sensors probing layered, conductive media, where complex geophysical and man-made processes can lead to micro-laminate and micro-fractured geophysical formations exhibiting, at the lower (sub-2 MHz) frequencies typically employed for deep EM wave penetration through conductive geophysical media, bulk-scale anisotropic (i.e., directional) electrical conductivity characteristics. When the planar-layered approximation (layers of piecewise-constant material variation and transversely-infinite spatial extent) is considered locally valid near the sensor region, numerical spectral-domain algorithms are suitable due to their strong low-frequency stability and their ability to numerically predict time-harmonic EM field propagation in media whose response is characterized by arbitrarily lossy and (diagonalizable) dense, anisotropic tensors. If certain practical limitations are addressed, PWE can robustly model sensors with general position and orientation that probe generally numerous, anisotropic, lossy, and thick layers.
The main thesis contributions, leading to a sensor and geophysical environment-robust numerical modeling algorithm, are as follows: (1) Simple, rapid estimator of the region (within the complex plane) containing poles, branch points, and branch cuts (critical points) (Chapter 2), (2) Sensor and material-adaptive azimuthal coordinate rotation, integration contour deformation, integration domain sub-region partition and sub-region-dependent integration order (Chapter 3), (3) Integration partition-extrapolation-based (Chapter 3) and Gauss-Laguerre Quadrature (GLQ)-based (Chapter 4) evaluations of the deformed, semi-infinite-length integration contour tails, (4) Robust in-situ-based (i.e., at the spectral-domain integrand level) direct/homogeneous-medium field contribution subtraction and analytical curbing of the source current spatial spectrum function's ill behavior (Chapter 5), and (5) Analytical re-casting of the direct-field expressions when the source is embedded within a NBAM, short for non-birefringent anisotropic medium (Chapter 6). The benefits of these contributions are, respectively, (1) Avoiding computationally intensive critical-point location and tracking (computation time savings), (2) Sensor and material-robust curbing of the integrand's oscillatory and slow decay behavior, as well as preventing undesirable critical-point migration within the complex plane (computation speed, precision, and instability-avoidance benefits), (3) sensor and material-robust reduction (or, for GLQ, elimination) of integral truncation error, (4) robustly stable modeling of scattered fields and/or fields radiated from current sources modeled as spatially distributed (10 to 1000-fold compute-speed acceleration also realized for distributed-source computations), and (5) numerically stable modeling of fields radiated from sources within NBAM layers. Having addressed these limitations, are PWE algorithms applicable to modeling EM waves in tilted planar-layered geometries too? 
This question is explored in Chapter 7 using a Transformation Optics-based approach, which allows one to model wave propagation through layered media that (in the sensor's vicinity) possess tilted planar interfaces. The technique, however, introduces spurious wave scattering, and the resulting degradation of computational accuracy requires analysis. Chapter 7's main contribution is the mathematical exposition of this novel tilted-layer modeling formulation, together with an exhaustive simulation-based study of its limitations.
A CMB foreground study in WMAP data: Extragalactic point sources and zodiacal light emission
NASA Astrophysics Data System (ADS)
Chen, Xi
The Cosmic Microwave Background (CMB) radiation is the remnant heat from the Big Bang. It serves as a primary tool to understand the global properties, content and evolution of the universe. Since 2001, NASA's Wilkinson Microwave Anisotropy Probe (WMAP) satellite has been mapping the full-sky anisotropy with unprecedented accuracy, precision and reliability. The CMB angular power spectrum calculated from the WMAP full-sky maps not only enables accurate testing of cosmological models, but also places significant constraints on model parameters. The CMB signal in the WMAP sky maps is contaminated by microwave emission from the Milky Way and from extragalactic sources. Therefore, in order to use the maps reliably for cosmological studies, the foreground signals must be well understood and removed from the maps. This thesis focuses on the separation of two foreground contaminants from the WMAP maps: extragalactic point sources and zodiacal light emission. Extragalactic point sources constitute the most important foreground on small angular scales. Various methods have been applied to the WMAP single-frequency maps to extract sources. However, due to the limited angular resolution of WMAP, it is possible to confuse positive CMB excursions with point sources or to miss sources that are embedded in negative CMB fluctuations. We present a novel CMB-free source-finding technique that utilizes the spectral difference between point sources and the CMB to form internal linear combinations of multifrequency maps that suppress the CMB and better reveal sources. When applied to the WMAP 41, 61 and 94 GHz maps, this technique has not only enabled detection of sources previously cataloged by independent methods, but has also revealed new sources. Without the noise contribution from the CMB, the sensitivity of this method improves rapidly with integration time.
The number of detections varies as t^0.72 in the two-band search and t^0.70 in the three-band search from one year to five years, respectively, in comparison to t^0.40 from the WMAP catalogs. Our source catalogs are a good supplement to the existing WMAP source catalogs, and the method itself is proven to be both complementary to and competitive with all the current source-finding techniques in WMAP maps. Scattered light and thermal emission from the interplanetary dust (IPD) within our Solar System are major contributors to the diffuse sky brightness at most infrared wavelengths. For wavelengths longer than 3.5 μm, the thermal emission of the IPD dominates over scattering, and this emission is often referred to as the Zodiacal Light Emission (ZLE). To set a limit on the ZLE contribution to the WMAP data, we have performed a simultaneous fit of the yearly WMAP time-ordered data to the time variation of the ZLE predicted by the DIRBE IPD model (Kelsall et al. 1998) evaluated at 240 μm, plus ℓ = 1-4 CMB components. It is found that although this fitting procedure can successfully recover the CMB dipole to 0.5% accuracy, it is not sensitive enough to determine the ZLE signal or the other multipole moments very accurately.
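The CMB-free combination described above exploits the fact that the CMB has the same amplitude in all bands (in thermodynamic temperature units) while point sources do not. A minimal sketch of the idea, with made-up map amplitudes and a hypothetical source spectral factor `alpha` between two bands:

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 10000
cmb = rng.normal(0.0, 70.0, npix)   # CMB fluctuations, identical in both bands
src = np.zeros(npix)
src[::500] = 400.0                  # a sparse set of point sources
alpha = 0.6                         # hypothetical source spectral factor between bands

map_a = cmb + src                   # band A map
map_b = cmb + alpha * src           # band B map: same CMB, scaled source amplitude

diff = map_a - map_b                # linear combination: the CMB cancels exactly,
                                    # leaving sources at amplitude (1 - alpha)
```

In practice the weights are chosen per band pair and noise remains, but the CMB term drops out of the combination by construction, which is why detections no longer compete with CMB fluctuations.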
NASA Astrophysics Data System (ADS)
Gudkova, T.; Lognonné, P.; Gagnepain-Beyneix, J.
2010-12-01
Let us consider the source excitation process for an impact. Following [1], we assume a simple model for the seismic source function, namely, a time-dependent force acting downward on the surface of the planet during the impact: f(t) = G g(t), where g(t) = 1 + cos ω1 t for t in the interval (-π/ω1, π/ω1) and g(t) = 0 otherwise; here g(t) is the time dependence of the source and G denotes the amplitude of the applied force. This takes into account the fact that part of the seismic force could be associated with ejecta material [2]. We introduce the time constant τ = 2π/ω1 to denote the duration of the excitation process. For the SIVB and LM impacts we have τ = 0.6 s and 0.45 s, respectively, yielding a very good fit that explains practically all the data with a very high quality factor. In contrast, modeling the seismic force as a point force (without ejecta generation) yields not only unrealistically low Q values but also a much lower variance reduction. The same fit was performed for large meteoroid impacts (the impacts of 13 and 25 January and 14 November 1976), giving τ = 0.7, 0.8 and 1.05 s, respectively. We again obtain a very good fit, explaining practically all the data with 98% variance reduction and a very high quality factor; in contrast, the results with the seismic force as a point source are not satisfactory. For all these impacts, we have determined the values of the seismic impulse by matching the energy in the observed and modeled waveforms. To obtain the mass of a meteoroid we must correct for ejecta effects, which make the mv product smaller than the seismic impulse by a factor of 1.5 to 1.7. This gave estimates of the mass and size of the meteoroids. Current estimates of the size of the meteoroids (diameters of 2-3 meters) indicate that they could create craters of about 50-70 meters in diameter; it might therefore be possible for the NASA Lunar Reconnaissance Orbiter mission to detect these craters.
These impacts were insufficient to generate surface waves above the detection threshold of the Apollo seismometers. Future seismometers must perform at least 10 times better than Apollo's in order to detect surface waves from comparable impacts. Such a resolution will also allow the detection of several impacts of low mass (1-10 kg) at a few tens to a hundred km from each station, which might be used to perform local studies of the crust. Acknowledgements. This work was supported by the Programme National de Planétologie from INSU, the French Space Agency (R&T program), and Grants No. 09-02-00128 and 09-05-91056 from the Russian Fund for Fundamental Research. References: [1] McGarr, A., Latham, G.V., and Gault, D.E. 1969. JGR, Vol. 74 (25), pp. 5981-5994. [2] Lognonné, Ph., Le Feuvre, M., Johnson, C.L., and Weber, R.C. 2009. JGR, Vol. 114, E12003. [3] Gagnepain-Beyneix, J., Lognonné, P., Chenet, H., Lombardi, D., and Spohn, T. 2006. PEPI, Vol. 159, pp. 140-166. [4] Gudkova, T.V., Lognonné, Ph., and Gagnepain-Beyneix, J. 2010. Submitted to Icarus.
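The impact source-time function f(t) = G g(t) above integrates to a seismic impulse of G·τ, since the cosine term averages to zero over the excitation window. A short numerical check, with G normalized to 1 and the SIVB value τ = 0.6 s:

```python
import numpy as np

tau = 0.6                  # excitation duration for the SIVB impact, s
w1 = 2.0 * np.pi / tau     # so the support (-pi/w1, pi/w1) has length tau
G = 1.0                    # force amplitude, normalized (the real value comes from the data fit)

t = np.linspace(-np.pi / w1, np.pi / w1, 20001)
g = 1.0 + np.cos(w1 * t)   # source time dependence; zero at both endpoints

dt = t[1] - t[0]
impulse = G * np.sum(g) * dt   # numerically approximates the analytic value G * tau
```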
A distributed grid-based watershed mercury loading model has been developed to characterize spatial and temporal dynamics of mercury from both point and non-point sources. The model simulates flow, sediment transport, and mercury dynamics on a daily time step across a diverse lan...
2002-12-01
applications, vibration sources are numerous, such as: launch loading; man-induced accelerations, like on the Shuttle or space station; solar ... However, the lack of significant tracking errors during times when other actuators were stationary, and the fact that the local maximum tracking...
NASA Astrophysics Data System (ADS)
Chen, Jui-Sheng; Liu, Chen-Wuing; Liang, Ching-Ping; Lai, Keng-Hsin
2012-08-01
Multi-species advective-dispersive transport equations sequentially coupled with first-order decay reactions are widely used to describe the transport and fate of decay-chain contaminants such as radionuclides, chlorinated solvents, and nitrogen species. Although researchers have presented various methods for analytically solving this transport equation system, the currently available solutions are mostly limited to an infinite or semi-infinite domain. A generalized analytical solution for the coupled multi-species transport problem in a finite domain with an arbitrary time-dependent source boundary is not available in the published literature. In this study, we first derive generalized analytical solutions for this transport problem in a finite domain involving an arbitrary number of species subject to an arbitrary time-dependent source boundary. Subsequently, we use these generalized solutions to obtain explicit analytical solutions for a special-case transport scenario involving an exponentially decaying Bateman-type time-dependent source boundary. We verify the derived special-case solutions against the previously published coupled 4-species transport solution and against a corresponding numerical solution for coupled 10-species transport. Finally, we compare the new analytical solutions derived for a finite domain against published analytical solutions derived for a semi-infinite domain to illustrate the effect of the exit boundary condition on coupled multi-species transport with an exponentially decaying source boundary. The results show noticeable discrepancies between the breakthrough curves of all species in the immediate vicinity of the exit boundary obtained from the finite-domain and semi-infinite-domain solutions under dispersion-dominated conditions.
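The first-order decay-chain coupling at the heart of these transport equations reduces, in a well-mixed batch setting with no advection or dispersion, to the classical Bateman solution. A minimal sketch for the first two species, with hypothetical decay constants:

```python
import numpy as np

# Reaction part of the coupled system only: sequential first-order decay
# chain species1 -> species2 (advection and dispersion omitted in this sketch).
k1, k2 = 0.5, 0.2   # hypothetical decay constants, 1/day
c0 = 1.0            # initial concentration of species 1; species 2 starts at zero

def bateman(t):
    """Bateman solution for the first two members of the chain."""
    c1 = c0 * np.exp(-k1 * t)
    c2 = c0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    return c1, c2

c1, c2 = bateman(5.0)   # concentrations after 5 days
```

The analytical solutions in the paper carry exactly this sequential coupling through the advection-dispersion operator, which is what makes the finite-domain derivation nontrivial.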
40 CFR 430.45 - New source performance standards (NSPS).
Code of Federal Regulations, 2012 CFR
2012-07-01
... GUIDELINES AND STANDARDS (CONTINUED) THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Dissolving Sulfite... biocides: Subpart D [NSPS for dissolving sulfite pulp facilities where nitration grade pulp is produced... all times. Subpart D [NSPS for dissolving sulfite pulp facilities where viscose grade pulp is produced...
Recent skyshine calculations at Jefferson Lab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degtyarenko, P.
1997-12-01
New calculations of the skyshine dose distribution of neutrons and secondary photons have been performed at Jefferson Lab using the Monte Carlo method. The dose dependence on neutron energy, distance to the neutron source, polar angle of a source neutron, and azimuthal angle between the observation point and the momentum direction of a source neutron has been studied. The azimuthally asymmetric term in the skyshine dose distribution is shown to be important in the dose calculations around high-energy accelerator facilities. A parameterization formula and corresponding computer code have been developed which can be used for detailed calculations of the skyshine dose maps.
Adopting Open Source Software to Address Software Risks during the Scientific Data Life Cycle
NASA Astrophysics Data System (ADS)
Vinay, S.; Downs, R. R.
2012-12-01
Software enables the creation, management, storage, distribution, discovery, and use of scientific data throughout the data lifecycle. However, the capabilities offered by software also present risks for the stewardship of scientific data, since future access to digital data is dependent on the use of software. From operating systems to applications for analyzing data, the dependence of data on software presents challenges for the stewardship of scientific data. Adopting open source software provides opportunities to address some of the proprietary risks of data dependence on software. For example, in some cases, open source software can be deployed to avoid licensing restrictions for using, modifying, and transferring proprietary software. The availability of the source code of open source software also enables the inclusion of modifications, which may be contributed by various community members who are addressing similar issues. Likewise, an active community that is maintaining open source software can be a valuable source of help, providing an opportunity to collaborate to address common issues facing adopters. As part of the effort to meet the challenges of software dependence for scientific data stewardship, risks from software dependence have been identified that exist during various times of the data lifecycle. The identification of these risks should enable the development of plans for mitigating software dependencies, where applicable, using open source software, and to improve understanding of software dependency risks for scientific data and how they can be reduced during the data life cycle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
2017-06-13
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
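In the simplest point-model picture underlying such multiplicity analyses (a sketch, not the paper's extended model), each neutron induces fission with some probability p, producing nubar new neutrons on average, so the total multiplication satisfies M_T = 1 + p·nubar·M_T. With illustrative, non-measured values:

```python
# Point-model sketch: each neutron induces fission with probability p and each
# fission yields nubar neutrons on average, so the total number of neutrons
# produced per source neutron satisfies M_T = 1 + p * nubar * M_T.
nubar = 2.6     # illustrative mean neutrons per induced fission
p = 0.25        # illustrative fission probability per neutron

M_T = 1.0 / (1.0 - p * nubar)   # total neutron multiplication
```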
Structure of supersonic jet flow and its radiated sound
NASA Technical Reports Server (NTRS)
Mankbadi, Reda R.; Hayer, M. Ehtesham; Povinelli, Louis A.
1994-01-01
The present paper explores the use of large-eddy simulations as a tool for predicting noise from first principles. A high-order numerical scheme is used to perform large-eddy simulations of a supersonic jet flow with emphasis on capturing the time-dependent flow structure representating the sound source. The wavelike nature of this structure under random inflow disturbances is demonstrated. This wavelike structure is then enhanced by taking the inflow disturbances to be purely harmonic. Application of Lighthill's theory to calculate the far-field noise, with the sound source obtained from the calculated time-dependent near field, is demonstrated. Alternative approaches to coupling the near-field sound source to the far-field sound are discussed.
NASA Astrophysics Data System (ADS)
Konca, A. O.; Ji, C.; Helmberger, D. V.
2004-12-01
We observed the effect of fault finiteness in the Pnl waveforms at regional distances (4° to 12°) for the Mw 6.5 San Simeon earthquake of 22 December 2003. We aimed to include more of the high frequencies (periods of 2 seconds and longer) than studies that use regional data for focal solutions (periods of 5 to 8 seconds and longer). We calculated 1-D synthetic seismograms for the Pnl portion for both a point source and a finite-fault solution. Comparison of the point-source and finite-fault waveforms with data shows that the first several seconds of the point-source synthetics have considerably higher amplitude than the data, while the finite fault does not have a similar problem. This can be explained by reversely polarized depth phases overlapping with the P waves from the later portion of the fault and reducing the amplitude of the beginning portion of the seismogram. This is clearly a finite-fault phenomenon and therefore cannot be explained by point-source calculations. Moreover, the point-source synthetics, calculated with a focal solution from a long-period regional inversion, overestimate the amplitude by three to four times relative to the data, while the finite-fault waveforms have amplitudes similar to the data. Hence, a moment estimate based only on the point-source solution of the regional data could have been wrong by half a magnitude unit. We have also calculated the shifts of the synthetics relative to the data required to fit the seismograms. Our results reveal that the paths from Central California to the south are faster than the paths to the east and north. The P wave arrival at the TUC station in Arizona is 4 seconds earlier than predicted by the Southern California model, while most stations to the east are delayed by around 1 second. The observed higher uppermost-mantle velocities to the south are consistent with some recent tomographic models. 
Synthetics generated with these models significantly improve the fits and the timing at most stations. This means that regional waveform data can be used to help locate and establish source complexities for future events.
Wartman, Brianne C.; Holahan, Matthew R.
2014-01-01
Consolidation processes, involving synaptic and systems level changes, are suggested to stabilize memories once they are formed. At the synaptic level, dendritic structural changes are associated with long-term memory storage. At the systems level, memory storage dynamics between the hippocampus and anterior cingulate cortex (ACC) may be influenced by the number of sequentially encoded memories. The present experiment utilized Golgi-Cox staining and neuron reconstruction to examine recent and remote structural changes in the hippocampus and ACC following training on three different behavioral procedures. Rats were trained on one hippocampal-dependent task only (a water maze task), two hippocampal-dependent tasks (a water maze task followed by a radial arm maze task), or one hippocampal-dependent and one non-hippocampal-dependent task (a water maze task followed by an operant conditioning task). Rats were euthanized at recent or remote time points after training. Brains underwent Golgi-Cox processing and neurons were reconstructed using Neurolucida software (MicroBrightField, Williston, VT, USA). Rats trained on two hippocampal-dependent tasks displayed increased dendritic complexity compared to control rats, in neurons examined in both the ACC and hippocampus at recent and remote time points. Importantly, this behavioral group showed consistent, significant structural differences in the ACC compared to the control group at the recent time point. These findings suggest that taxing the demand placed upon the hippocampus, by training rats on two hippocampal-dependent tasks, engages synaptic and systems consolidation processes in the ACC at an accelerated rate for recent and remote storage of spatial memories. PMID:24795581
NASA Astrophysics Data System (ADS)
Azzaro, Raffaele; Barberi, Graziella; D'Amico, Salvatore; Pace, Bruno; Peruzza, Laura; Tuvè, Tiziana
2017-11-01
The volcanic region of Mt. Etna (Sicily, Italy) represents a perfect lab for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of various geophysical monitoring networks, and particularly the rapid geodynamics, which clearly express several seismotectonic processes. We present here the model components and the procedures adopted for defining seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA), the first results and maps of which are presented in a companion paper (Peruzza et al., 2017). The sources include, with increasing complexity, seismic zones, individual faults and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades). The analysis of the frequency-magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults by using a historical approach and a purely geological method. We derive a magnitude-size scaling relationship specifically for this volcanic area, which has been implemented into a recently developed software tool, FiSH (Pace et al., 2016), that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area the traditional assumptions of uniform and Poissonian seismicity can be relaxed; a time-dependent fault-based modeling, joined with a 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be implemented in PSHA maps. 
They can be relevant for the retrofitting of the existing building stock and for driving risk reduction interventions. These analyses do not account for regional M > 6 seismogenic sources which dominate the hazard over long return times (≥ 500 years).
Concerns in Water Supply and Pollution Control: Legal, Social, and Economic.
ERIC Educational Resources Information Center
Burke, D. Barlow, Jr.; And Others
This bulletin contains three articles which focus on ground water's potential as a dependable supply source and some of the problems impeding the development of that potential. The authors' concerns are discussed from the vantage point of their areas of specialization: law, sociology, and economics. The first author states that water law abounds…
Evaluating emissions of HCHO, HONO, NO2, and SO2 from point sources using portable Imaging DOAS
NASA Astrophysics Data System (ADS)
Pikelnaya, O.; Tsai, C.; Herndon, S. C.; Wood, E. C.; Fu, D.; Lefer, B. L.; Flynn, J. H.; Stutz, J.
2011-12-01
Our ability to quantitatively describe urban air pollution to a large extent depends on an accurate understanding of anthropogenic emissions. In areas with a high density of individual point sources of pollution, such as petrochemical facilities with multiple flares or regions with active commercial ship traffic, this is particularly challenging as access to facilities and ships is often restricted. Direct formaldehyde emissions from flares may play an important role for ozone chemistry, acting as an initial radical precursor and enhancing the degradation of co-emitted hydrocarbons. HONO is also recognized as an important OH source throughout the day. However, very little is known about direct HCHO and HONO emissions. Imaging Differential Optical Absorption Spectroscopy (I-DOAS), a relatively new remote sensing technique, provides an opportunity to investigate emissions from these sources from a distance, making this technique attractive for fence-line monitoring. In this presentation, we will describe I-DOAS measurements during the FLAIR campaign in the spring/summer of 2009. We performed measurements outside of various industrial facilities in the larger Houston area as well as in the Houston Ship Channel to visualize and quantify the emissions of HCHO, NO2, HONO, and SO2 from flares of petrochemical facilities and ship smoke stacks. We will present the column density images of pollutant plumes as well as fluxes from individual flares calculated from I-DOAS observations. Fluxes from individual flares and smoke stacks determined from the I-DOAS measurements vary widely in time and by the emission sources. We will also present HONO/NOx ratios in ship smoke stacks derived from the combination of I-DOAS and in-situ measurements, and discuss other trace gas ratios in plumes derived from the I-DOAS observations. 
Finally, we will show images of HCHO, NO2 and SO2 plumes from control burn forest fires observed in November of 2009 at Vandenberg Air Force Base, Santa Maria, CA.
NASA Astrophysics Data System (ADS)
Gupta, I.; Chan, W.; Wagner, R.
2005-12-01
Several recent studies of the generation of low-frequency Lg from explosions indicate that the Lg wavetrain from explosions contains significant contributions from (1) the scattering of explosion-generated Rg into S and (2) direct S waves from the non-spherical spall source associated with a buried explosion. The pronounced spectral nulls observed in the Lg spectra of Yucca Flats (NTS) and Semipalatinsk explosions (Patton and Taylor, 1995; Gupta et al., 1997) are related to Rg excitation caused by spall-related block motions in a conical volume over the shot point, which may be approximately represented by a compensated linear vector dipole (CLVD) source (Patton et al., 2005). Frequency-dependent excitation of Rg waves should be imprinted on all scattered P, S and Lg waves. A spectrogram may be considered a three-dimensional matrix of numbers providing amplitude and frequency information for each point in the time series. We found difference spectrograms, derived from a normal explosion and a closely located over-buried shot recorded at the same common station, to be remarkably useful for understanding the origin and spectral content of various regional phases. This technique allows isolation of source characteristics, essentially free from path and recording-site effects, since the over-buried shot acts as an empirical Green's function. Application of this methodology to several pairs of closely located explosions shows that the scattering of explosion-generated Rg makes a significant contribution not only to Lg and its coda but also to two other regional phases, Pg (presumably by the scattering of Rg into P) and Sn. The scattered energy, identified by the presence of a spectral null at the appropriate frequency, generally appears to be more prominent in the somewhat later-arriving sections of Pg, Sn, and Lg than in the initial part. 
Difference spectrograms appear to provide a powerful new technique for understanding the mechanism of near-source scattering of explosion-generated Rg and its contribution to various regional phases.
Time-dependent jet flow and noise computations
NASA Technical Reports Server (NTRS)
Berman, C. H.; Ramos, J. I.; Karniadakis, G. E.; Orszag, S. A.
1990-01-01
Methods for computing jet turbulence noise based on the time-dependent solution of Lighthill's (1952) differential equation are demonstrated. A key element in this approach is a flow code for solving the time-dependent Navier-Stokes equations at relatively high Reynolds numbers. Jet flow results at Re = 10,000 are presented here. This code combines a computationally efficient spectral element technique and a new self-consistent turbulence subgrid model to supply values for Lighthill's turbulence noise source tensor.
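At high Reynolds number, Lighthill's noise source tensor is dominated by the Reynolds-stress term, T_ij ≈ ρ u_i u_j (viscous and entropy terms neglected). A minimal sketch of assembling that tensor from a velocity field; the grid size and amplitudes are arbitrary:

```python
import numpy as np

# Dominant Reynolds-stress part of Lighthill's source tensor, T_ij ~ rho*u_i*u_j
# (viscous and entropy terms neglected), assembled on an arbitrary toy grid.
rng = np.random.default_rng(1)
rho = 1.2                                        # constant density, kg/m^3
u = rng.normal(0.0, 10.0, size=(3, 16, 16, 16))  # 3 velocity components on a 16^3 grid

T = rho * np.einsum('i...,j...->ij...', u, u)    # shape (3, 3, 16, 16, 16)
```

In the approach the abstract describes, such a tensor field, evaluated from the time-dependent near-field simulation, is what enters the volume integral of Lighthill's acoustic analogy for the far-field sound.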
Revealing two radio active galactic nuclei extremely near PSR J0437-4715
NASA Astrophysics Data System (ADS)
Li, Zhixuan; Yang, Jun; An, Tao; Paragi, Zsolt; Deller, Adam; Reynolds, Cormac; Hong, Xiaoyu; Wang, Jiancheng; Ding, Hao; Xia, Bo; Yan, Zhen; Guo, Li
2018-05-01
Newton's gravitational constant G may vary with time at an extremely low level. The time variability of G will affect the orbital motion of a millisecond pulsar in a binary system and cause a tiny difference between the orbital period-dependent measurement of the kinematic distance and the direct measurement of the annual parallax distance. PSR J0437-4715 is the nearest millisecond pulsar and the brightest at radio wavelengths. To explore the feasibility of achieving a parallax distance accuracy of one light-year, comparable to the recent timing result, with the technique of differential astrometry, we searched for compact radio sources quite close to PSR J0437-4715. Using existing data from the Very Large Array and the Australia Telescope Compact Array, we detected two sources with flat spectra, relatively stable flux densities of 0.9 and 1.0 mJy at 8.4 GHz and separations of 13 and 45 arcsec. With a network consisting of the Long Baseline Array and the Kunming 40-m radio telescope, we found that both sources have a point-like structure and a brightness temperature of ≥107 K. According to these radio inputs and the absence of counterparts in other bands, we argue that they are most likely the compact radio cores of extragalactic active galactic nuclei, rather than Galactic radio stars. The finding of these two radio active galactic nuclei will enable us to achieve a sub-pc distance accuracy with in-beam phase-referencing very-long-baseline interferometric observations and provide one of the most stringent constraints on the time variability of G in the near future.
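The kinematic distance mentioned above comes from the Shklovskii effect: the proper motion μ contributes an apparent orbital period derivative Ṗb/Pb = μ²D/c, which can be inverted for the distance D. A sketch with illustrative values of roughly the magnitude reported for PSR J0437-4715 (not the published measurements):

```python
import math

# Kinematic (Shklovskii) distance from the observed orbital period derivative:
# Pb_dot / Pb = mu^2 * D / c  =>  D = c * Pb_dot / (mu^2 * Pb).
# All input values below are illustrative, not the published measurements.
c = 2.99792458e8                                     # speed of light, m/s
mas_yr = math.radians(1 / 3.6e6) / (365.25 * 86400)  # rad/s per mas/yr
pc = 3.0857e16                                       # meters per parsec

Pb = 5.741 * 86400       # orbital period, s
mu = 140.9 * mas_yr      # total proper motion, rad/s
Pb_dot = 3.7e-12         # apparent (dimensionless) orbital period derivative

D = c * Pb_dot / (mu**2 * Pb) / pc   # kinematic distance, parsecs
```

A time-varying G would add a term to Ṗb, so comparing this kinematic distance against an independent parallax distance of matching precision constrains Ġ/G; that is the motivation for the in-beam astrometric calibrators.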
Generalized Success-Breeds-Success Principle Leading to Time-Dependent Informetric Distributions.
ERIC Educational Resources Information Center
Egghe, Leo; Rousseau, Ronald
1995-01-01
Reformulates the success-breeds-success (SBS) principle in informetrics in order to generate a general theory of source-item relationships. Topics include a time-dependent probability, a new model for the expected probability that is compared with the SBS principle with exact combinatorial calculations, classical frequency distributions, and…
General eigenstates of Maxwell's equations in a two-constituent composite medium
NASA Astrophysics Data System (ADS)
Bergman, David J.; Farhi, Asaf
2016-11-01
Eigenstates of Maxwell's equations in the quasistatic regime were used recently to calculate the response of a Veselago Lens [1] to the field produced by a time dependent point electric charge [2, 3]. More recently, this approach was extended to calculate the non-quasistatic response of such a lens. This necessitated a calculation of the eigenstates of the full Maxwell equations in a flat slab structure where the electric permittivity ɛ1 of the slab differs from the electric permittivity ɛ2 of its surroundings while the magnetic permeability is equal to 1 everywhere [4]. These eigenstates were used to calculate the response of a Veselago Lens to an oscillating point electric dipole source of electromagnetic (EM) waves. A result of these calculations was that, although images with subwavelength resolution are achievable, as first predicted by John Pendry [5], those images appear not at the points predicted by geometric optics. They appear, instead, at points which lie upon the slab surfaces. This is strongly connected to the fact that when ɛ1/ɛ2 = -1 a strong singularity occurs in Maxwell's equations: this value of ɛ1/ɛ2 is a mathematical accumulation point for the EM eigenvalues [6]. Unfortunately, many physicists are unaware of this crucial mathematical property of Maxwell's equations. In this article we describe how the non-quasistatic eigenstates of Maxwell's equations in a composite microstructure can be calculated for general two-constituent microstructures, where both ɛ and μ have different values in the two constituents.
NASA Technical Reports Server (NTRS)
Anderson, A. F.
1974-01-01
Research questions were proposed to determine the relationship between independent variables (race, sex, and institution attended) and dependent variables (number of job offers received, salary received, and willingness to recommend source of employer contact). The control variables were academic major, grade point average, placement registration, nonemployment activity, employer, and source of employer contact. An analysis of the results revealed no statistical significance of the institution attended as a predictor of job offers or salary, although significant relationships were found between race and sex and number of job offers received. It was found that academic major, grade point average, and source of employer contact were more useful than race in the prediction of salary. Sex and nonemployment activity were found to be the most important variables in the model. The analysis also indicated that Black students received more job offers than non-Black students.
Tanik, A
2000-01-01
The six main drinking water reservoirs of Istanbul are under the threat of pollution due to rapid population increase, unplanned urbanisation and insufficient infrastructure. In contrast to the present land use profile, the environmental evaluation of the catchment areas reveals that point sources of pollutants, especially of domestic origin, dominate over diffuse sources. The water quality studies also support these findings, emphasising that if no substantial precautions are taken, there will be no possibility of obtaining drinking water from them. In this paper, in light of the present status of the reservoirs, possible and probable short- and long-term protective measures are outlined for reducing the impact of point sources. Immediate precautions mostly depend on reducing the pollution arising from the existing settlements. Long-term measures mainly emphasise the preparation of new land use plans taking into consideration the protection of unoccupied lands. Recommendations on protection and control of the reservoirs are stated.
Personal digital assistant-based drug information sources: potential to improve medication safety.
Galt, Kimberly A; Rule, Ann M; Houghton, Bruce; Young, Daniel O; Remington, Gina
2005-04-01
This study compared the potential for personal digital assistant (PDA)-based drug information sources to minimize potential medication errors dependent on accurate and complete drug information at the point of care. A quality and safety framework for drug information resources was developed to evaluate 11 PDA-based drug information sources. Three drug information sources met the criteria of the framework: Epocrates Rx Pro, Lexi-Drugs, and mobileMICROMEDEX. Medication error types related to drug information at the point of care were then determined. Forty-seven questions were developed to test the potential of the sources to prevent these error types. Pharmacist and physician experts from Creighton University created these questions based on the most common types of questions asked by primary care providers. Three physicians evaluated the drug information sources, rating each source for each question: 1 = no information available, 2 = some information available, or 3 = adequate amount of information available. The mean ratings for the drug information sources were: 2.0 (Epocrates Rx Pro), 2.5 (Lexi-Drugs), and 2.03 (mobileMICROMEDEX). Lexi-Drugs was significantly better than mobileMICROMEDEX (t test, P=0.05) and Epocrates Rx Pro (t test, P=0.01). Lexi-Drugs was found to be the most specific and complete PDA resource available to optimize medication safety by reducing potential errors associated with drug information. No resource was sufficient to address the patient safety information needs for all cases.
Developing Real-Time Emissions Estimates for Enhanced Air Quality Forecasting
Exploring the relationship between ambient temperature, energy demand, and electric generating unit point source emissions and potential techniques for incorporating real-time information on the modulating effects of these variables using the Mid-Atlantic/Northeast Visibility Uni...
Time-dependent modulation of galactic cosmic rays by merged interaction regions
NASA Technical Reports Server (NTRS)
Perko, J. S.
1993-01-01
Models that solve the one-dimensional solar modulation equation have reproduced the 11-year galactic cosmic ray cycle using functional representations of global merged interaction regions (MIRs). This study extends those results to the solution of the modulation equation with explicit time dependence. The magnetometers on Voyagers 1 and 2 provide local magnetic field intensities at regular intervals, from which one calculates the ratio of the field intensity to the average local field. These ratios in turn are inverted to form diffusion coefficients. Strung together in radius and time, these coefficients then fall and rise with the strength of the interplanetary magnetic field, becoming representations of MIRs. These diffusion coefficients, calculated locally, propagate unchanged from approx. 10 AU to the outer boundary (120 AU). Inside 10 AU, all parameters, including the diffusion coefficient, are assumed constant in time and space. The model reproduces the time-intensity profiles of Voyager 2 and Pioneer 10. Radial gradient data from 1982-1990 between Pioneer 10 and Voyager 2 are about the same magnitude as those calculated in the model. The model also shows agreement in rough magnitude with the radial gradient between Pioneer 10 and 1 AU. When coupled with enhanced, time-dependent solar wind speed at the probe's high latitude, as measured by independent observers, the model also follows Voyager 1's time-intensity profile reasonably well, providing a natural source for the observed negative latitudinal gradients. The model exhibits the 11-year cyclical cosmic ray intensity behavior at all radii, including 1 AU, not just at the location of the spacecraft where the magnetic fields are measured. In addition, the model's point of cosmic ray maximum correctly travels at the solar wind speed, illustrating the well-known propagation of modulation.
Finally, at least in the inner heliosphere this model accounts for the delay experienced by lower-rigidity protons in reaching their time-intensity peak. The actual delays in this model, however, are somewhat smaller than the data. In the outer heliosphere the model sees no delays, and the data are ambiguous as to their existence. It appears that strong magnetic field compression regions (merged interaction regions) that are 3-4 times the average field strength can, at least in a helioequatorial band, disrupt effects, such as drifts, that could dominate in quieter magnetic fields. The question remains: is the heliosphere ever quiet enough to allow such effects to be unambiguously measured, at least in the midlatitudes?
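The recipe the abstract describes — ratios of local field intensity to the running average, inverted to form diffusion coefficients — can be sketched as below. The 27-sample running-average window and the simple reciprocal form kappa ~ &lt;B&gt;/B are illustrative assumptions; the abstract gives only the qualitative procedure:

```python
import numpy as np

def relative_diffusion_coefficients(B, window=27):
    """Invert field-intensity ratios into relative diffusion
    coefficients, kappa ~ <B>/B: kappa falls when the interplanetary
    field strengthens and rises when it weakens."""
    B = np.asarray(B, dtype=float)
    # running average of the local field (solar-rotation-like window)
    kernel = np.ones(window) / window
    B_avg = np.convolve(B, kernel, mode="same")
    return B_avg / B

# a merged interaction region (3x the average field) locally suppresses kappa,
# which is what produces the modulation dips in the model
B = np.ones(100)
B[40:60] = 3.0
kappa = relative_diffusion_coefficients(B)
```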
NASA Astrophysics Data System (ADS)
Schäfer, M.; Groos, L.; Forbriger, T.; Bohlen, T.
2014-09-01
Full-waveform inversion (FWI) of shallow-seismic surface waves is able to reconstruct lateral variations of subsurface elastic properties. Line-source simulation for point-source data is required when applying algorithms of 2-D adjoint FWI to recorded shallow-seismic field data. The equivalent line-source response for point-source data can be obtained by convolving the waveforms with 1/√t (t: traveltime), which produces a phase shift of π/4. Subsequently an amplitude correction must be applied. In this work we recommend scaling the seismograms with √(2 r v_ph) at small receiver offsets r, where v_ph is the phase velocity, and gradually shifting to applying a 1/√t time-domain taper and scaling the waveforms with r√2 for larger receiver offsets r. We call this the hybrid transformation, which is adapted for direct body and Rayleigh waves, and demonstrate its outstanding performance on a 2-D heterogeneous structure. The fit of the phases as well as the amplitudes for all shot locations and components (vertical and radial) is excellent with respect to the reference line-source data. An approach for 1-D media based on Fourier-Bessel integral transformation generates strong artefacts for waves produced by 2-D structures. The theoretical background for both approaches is presented in a companion contribution. In the current contribution we study their performance when applied to waves propagating in a significantly 2-D-heterogeneous structure. We calculate synthetic seismograms for 2-D structure for line sources as well as point sources. Line-source simulations obtained from the point-source seismograms through different approaches are then compared to the corresponding line-source reference waveforms. Although derived by approximation, the hybrid transformation performs excellently except for explicitly back-scattered waves.
In reconstruction tests we further invert point-source synthetic seismograms by a 2-D FWI to subsurface structure and evaluate its ability to reproduce the original structural model in comparison to the inversion of line-source synthetic data. Even when applying no explicit correction to the point-source waveforms prior to inversion, only moderate artefacts appear in the results. However, the overall performance, in terms of model reproduction and ability to reproduce the original data in a 3-D simulation, is best if the inverted waveforms are obtained by the hybrid transformation.
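The near-offset branch of the point-to-line-source correction described above (convolution with 1/√t plus the √(2 r v_ph) scaling) can be sketched as follows. The discrete handling of the t = 0 singularity and the overall normalisation are assumptions not fixed by the abstract:

```python
import numpy as np

def line_source_near_offset(seis, dt, r, v_ph):
    """Approximate line-source seismogram from a point-source trace:
    convolve with t**(-1/2), which applies the pi/4 phase shift,
    then scale by sqrt(2 * r * v_ph)."""
    n = len(seis)
    t = (np.arange(n) + 0.5) * dt       # sample times, offset to avoid t = 0
    kernel = 1.0 / np.sqrt(t)
    line = np.convolve(seis, kernel)[:n] * dt
    return line * np.sqrt(2.0 * r * v_ph)

# an impulsive point-source arrival should come out with a decaying
# 1/sqrt(t) tail, the hallmark of a 2-D (line-source) Green's function
impulse = np.zeros(64)
impulse[0] = 1.0
tail = line_source_near_offset(impulse, dt=0.001, r=10.0, v_ph=300.0)
```

The hybrid transformation in the abstract tapers from this near-offset scaling to the r√2 far-offset scaling; only the near-offset limb is sketched here.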
INPUFF: A SINGLE SOURCE GAUSSIAN PUFF DISPERSION ALGORITHM. USER'S GUIDE
INPUFF is a Gaussian INtegrated PUFF model. The Gaussian puff diffusion equation is used to compute the contribution to the concentration at each receptor from each puff every time step. Computations in INPUFF can be made for a single point source at up to 25 receptor locations. ...
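A single puff's contribution at a receptor, summed over puffs at each time step as the description above says, follows the standard Gaussian puff kernel. This is the generic textbook form; INPUFF's specific dispersion-coefficient scheme is not reproduced here:

```python
import math

def puff_concentration(q, dx, dy, dz, sx, sy, sz):
    """Concentration contribution of one Gaussian puff of mass q at a
    receptor displaced (dx, dy, dz) from the puff centre, with
    dispersion parameters (sx, sy, sz)."""
    norm = q / ((2.0 * math.pi) ** 1.5 * sx * sy * sz)
    return norm * math.exp(
        -0.5 * ((dx / sx) ** 2 + (dy / sy) ** 2 + (dz / sz) ** 2)
    )

# receptor concentration = sum over the puffs released so far,
# each advected to its current centre position (illustrative values)
total = sum(
    puff_concentration(1.0, x - 50.0, 0.0, 0.0, 10.0, 10.0, 5.0)
    for x in (40.0, 50.0, 60.0)
)
```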
40 CFR 410.35 - New source performance standards (NSPS).
Code of Federal Regulations, 2011 CFR
2011-07-01
... GUIDELINES AND STANDARDS TEXTILE MILLS POINT SOURCE CATEGORY Low Water Use Processing Subcategory § 410.35... product BOD5 1.4 0.7 COD 2.8 1.4 TSS 1.4 0.7 pH (1) (1) 1 Within the range 6.0 to 9.0 at all times. Water...
40 CFR 410.35 - New source performance standards (NSPS).
Code of Federal Regulations, 2010 CFR
2010-07-01
... GUIDELINES AND STANDARDS TEXTILE MILLS POINT SOURCE CATEGORY Low Water Use Processing Subcategory § 410.35... product BOD5 1.4 0.7 COD 2.8 1.4 TSS 1.4 0.7 pH (1) (1) 1 Within the range 6.0 to 9.0 at all times. Water...
Earlier Violent Television Exposure and Later Drug Dependence
Brook, David W.; Katten, Naomi S.; Ning, Yuming; Brook, Judith S.
2013-01-01
This research examined the longitudinal pathways from earlier violent television exposure to later drug dependence. African American and Puerto Rican adolescents were interviewed at three points in time (N = 463). Violent television exposure in late adolescence predicted violent television exposure in young adulthood, which in turn was related to tobacco/marijuana use, nicotine dependence, and later drug dependence. The findings suggest several policy and clinical implications: a) regulating the times when violent television is broadcast; b) creating developmentally targeted prevention/treatment programs; and c) recognizing that watching violent television may serve as a cue regarding increased susceptibility to nicotine and drug dependence. PMID:18612881
Modeling hard clinical end-point data in economic analyses.
Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V
2013-11-01
The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states (<7). Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data is reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. 
When event risk is common, such as in high risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are more appropriate to accurately reflect the trial data.
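The constant-rate simplification the review recommends for low-risk settings amounts to converting an annual event rate into a per-cycle transition probability. The exponential form below is the usual conversion, and the drifting-hazard variant is an illustrative assumption standing in for the review's "explicitly time-dependent event rates":

```python
import math

def cycle_probability(annual_rate, cycle_years=1.0):
    """Per-cycle event probability from a constant annual rate:
    p = 1 - exp(-r * t)."""
    return 1.0 - math.exp(-annual_rate * cycle_years)

def cycle_probability_t(base_rate, hazard_ratio_per_year, t_years,
                        cycle_years=1.0):
    """A minimal time-dependent variant: the base rate scaled by a
    hazard ratio that compounds with model time."""
    rate = base_rate * hazard_ratio_per_year ** t_years
    return 1.0 - math.exp(-rate * cycle_years)
```

For rare events (say 1%/year) the two differ little at early cycles, which is exactly why the review finds simple constant rates adequate in models with few health states.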
The ISOPHOT far-infrared serendipity north ecliptic pole minisurvey
NASA Astrophysics Data System (ADS)
Stickel, M.; Bogun, S.; Lemke, D.; Klaas, U.; Toth, L. V.; Herbstmeier, U.; Richter, G.; Assendorp, R.; Laureijs, R.; Kessler, M. F.; Burgdorf, M.; Beichman, C. A.; Rowan-Robinson, M.; Efstathiou, A.
1998-08-01
The ISOPHOT Serendipity Survey fills the otherwise unused slew time between ISO's fine pointings with measurements in an unexplored wavelength regime near 200 microns. In order to test the point source extraction software, the completeness of the detected objects, as well as the astrophysical content, we investigate a 100 square degree field near the North ecliptic pole, dubbed the ISOPHOT Serendipity Minisurvey field. A total of 21 IRAS point sources were detected on the Serendipity slews crossing the field. 19 of these objects are galaxies, one is a planetary nebula and one is an empty field without a bright optical counterpart. The detection completeness is better than 90% for IRAS sources brighter than 2 Jy at 100 microns and better than 80% for sources brighter than 1.5 Jy. The source detection frequency is about 1 per 40° of slew length, in agreement with previous estimations based on galaxy number counts. After the end of the ISO mission, about 4000 point sources are expected to be found in the Serendipity slews. Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA. Members of the Consortium on the ISOPHOT Serendipity Survey (CISS) are MPIA Heidelberg, ESA ISO SOC Villafranca, AIP Potsdam, IPAC Pasadena, Imperial College London
Gravitational lensing of quasars as seen by the Hubble Space Telescope Snapshot Survey
NASA Technical Reports Server (NTRS)
Maoz, D.; Bahcall, J. N.; Doxsey, R.; Schneider, D. P.; Bahcall, N. A.; Lahav, O.; Yanny, B.
1992-01-01
Results from the ongoing HST Snapshot Survey are presented, with emphasis on 152 high-luminosity, z greater than 1 quasars. One quasar among those observed, 1208 + 1011, is a candidate lens system with subarcsecond image separation. Six other quasars have point sources within 6 arcsec. Ground-based observations of five of these cases show that the companion point sources are foreground Galactic stars. The predicted lensing frequency of the sample is calculated for a variety of cosmological models. The effect of uncertainties in some of the observational parameters upon the predictions is discussed. No correlation of the drift rate with time, right ascension, declination, or point error is found.
NASA Astrophysics Data System (ADS)
Atemkeng, M.; Smirnov, O.; Tasse, C.; Foster, G.; Keimpema, A.; Paragi, Z.; Jonas, J.
2018-07-01
Traditional radio interferometric correlators produce regular-gridded samples of the true uv-distribution by averaging the signal over constant, discrete time-frequency intervals. This regular sampling and averaging then translates to irregular-gridded samples in the uv-space, and results in a baseline-length-dependent loss of amplitude and phase coherence, which depends on the distance from the image phase centre. The effect is often referred to as `decorrelation' in the uv-space, which is equivalent in the source domain to `smearing'. This work discusses and implements a regular-gridded sampling scheme in the uv-space (baseline-dependent sampling) and windowing that allow for data compression, field-of-interest shaping, and source suppression. The baseline-dependent sampling requires irregular-gridded sampling in the time-frequency space, i.e. the time-frequency interval becomes baseline dependent. Analytic models and simulations are used to show that decorrelation remains constant across all the baselines when applying baseline-dependent sampling and windowing. Simulations using the MeerKAT telescope and the European Very Long Baseline Interferometry Network show that data compression, field-of-interest shaping, and outer field-of-interest suppression are all achieved.
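The core idea above — shrink the averaging interval on longer baselines so every baseline accumulates the same phase drift, hence the same decorrelation — can be illustrated with the standard sinc-type smearing estimate. The linear scaling of fringe rate with baseline length is an assumption for illustration, not the paper's full formalism:

```python
import math

def smearing_factor(phase_drift_rad):
    """Amplitude retained after averaging a linearly drifting fringe
    whose total phase changes by phase_drift_rad over the interval
    (the standard sinc-type decorrelation estimate)."""
    x = phase_drift_rad / 2.0
    return 1.0 if x == 0.0 else abs(math.sin(x) / x)

def averaging_interval(target_drift_rad, fringe_rate_per_km_s, baseline_km):
    """Baseline-dependent sampling: pick the interval (seconds) so every
    baseline accumulates the same phase drift."""
    return target_drift_rad / (fringe_rate_per_km_s * baseline_km)

# a 100x longer baseline gets a 100x shorter interval, so the
# decorrelation (smearing_factor of the same drift) is identical
short = averaging_interval(0.1, 1e-4, 1.0)
long_ = averaging_interval(0.1, 1e-4, 100.0)
```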
Search for Gamma-Ray Emission from Local Primordial Black Holes with the Fermi Large Area Telescope
Ackermann, M.; Atwood, W. B.; Baldini, L.; ...
2018-04-10
Black holes with masses below approximately 10^15 g are expected to emit gamma-rays with energies above a few tens of MeV, which can be detected by the Fermi Large Area Telescope (LAT). Although black holes with these masses cannot be formed as a result of stellar evolution, they may have formed in the early universe and are therefore called primordial black holes (PBHs). Previous searches for PBHs have focused on either short-timescale bursts or the contribution of PBHs to the isotropic gamma-ray emission. We show that, in cases of individual PBHs, the Fermi-LAT is most sensitive to PBHs with temperatures above approximately 16 GeV and masses of 6 × 10^11 g, which it can detect out to a distance of about 0.03 pc. These PBHs have a remaining lifetime of months to years at the start of the Fermi mission. They would appear as potentially moving point sources with gamma-ray emission that becomes spectrally harder and brighter with time until the PBH completely evaporates. In this paper, we develop a new algorithm to detect the proper motion of gamma-ray point sources, and apply it to 318 unassociated point sources at a high galactic latitude in the third Fermi-LAT source catalog. None of the unassociated point sources with spectra consistent with PBH evaporation show significant proper motion. Finally, using the nondetection of PBH candidates, we derive a 99% confidence limit on the PBH evaporation rate in the vicinity of Earth, ρ̇_PBH < 7.2 × 10^3 pc^-3 yr^-1. This limit is similar to the limits obtained with ground-based gamma-ray observatories.
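The temperature-mass pairing quoted above can be checked against the Hawking relation T = ħc³/(8πGMk_B). A sketch with standard constants (the kelvin-to-GeV conversion is the only extra ingredient; the abstract's ~16 GeV figure is an approximation of the same relation):

```python
import math

# physical constants (SI)
HBAR = 1.054571817e-34       # J s
C = 2.99792458e8             # m / s
G = 6.67430e-11              # m^3 / (kg s^2)
KB = 1.380649e-23            # J / K
K_PER_GEV = 1.160451812e13   # kelvin per GeV

def hawking_temperature_gev(mass_g):
    """Hawking temperature T = hbar * c**3 / (8 * pi * G * M * k_B),
    in GeV, for a black hole of the given mass in grams."""
    m_kg = mass_g * 1e-3
    t_kelvin = HBAR * C ** 3 / (8.0 * math.pi * G * m_kg * KB)
    return t_kelvin / K_PER_GEV
```

For M = 6 × 10^11 g this gives roughly 18 GeV, consistent with the ~16 GeV quoted above at the level of approximation used, and it recovers the inverse scaling: a 10^15 g hole sits far below the LAT's sensitive band.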
NASA Astrophysics Data System (ADS)
Warneke, C.; Geiger, F.; Edwards, P. M.; Dube, W.; Pétron, G.; Kofler, J.; Zahn, A.; Brown, S. S.; Graus, M.; Gilman, J. B.; Lerner, B. M.; Peischl, J.; Ryerson, T. B.; de Gouw, J. A.; Roberts, J. M.
2014-10-01
Emissions of volatile organic compounds (VOCs) associated with oil and natural gas production in the Uintah Basin, Utah were measured at a ground site in Horse Pool and from a NOAA mobile laboratory with PTR-MS instruments. The VOC compositions in the vicinity of individual gas and oil wells and other point sources such as evaporation ponds, compressor stations and injection wells are compared to the measurements at Horse Pool. High mixing ratios of aromatics, alkanes, cycloalkanes and methanol were observed for extended periods of time and for short-term spikes caused by local point sources. The mixing ratios during the time the mobile laboratory spent on the well pads were averaged. High mixing ratios were found close to all point sources, but gas well pads with collection and dehydration on the well pad were clearly associated with higher mixing ratios than other wells. The comparison of the VOC composition of the emissions from the oil and natural gas well pads showed that gas well pads without dehydration on the well pad compared well with the majority of the data at Horse Pool, and that oil well pads compared well with the rest of the ground site data. Oil well pads on average emit heavier compounds than gas well pads. The mobile laboratory measurements confirm the results from an emissions inventory: the main VOC source categories from individual point sources are dehydrators, oil and condensate tank flashing and pneumatic devices and pumps. Raw natural gas is emitted from the pneumatic devices and pumps and heavier VOC mixes from the tank flashings.
NASA Astrophysics Data System (ADS)
Unterberg, Ea; Donovan, D.; Barton, J.; Wampler, Wr; Abrams, T.; Thomas, Dm; Petrie, T.; Guo, Hy; Stangeby, Pg; Elder, Jd; Rudakov, D.; Grierson, B.; Victor, B.
2017-10-01
Experiments using metal inserts with novel isotopically-enriched tungsten coatings at the outer divertor strike point (OSP) have provided unique insight into the ELM-induced sourcing, main-SOL transport, and core accumulation control mechanisms of W for a range of operating conditions. This experimental approach has used a multi-head, dual-facing collector probe (CP) at the outboard midplane, as well as W-I and core W spectroscopy. Using the CP system, the total amount of W deposited relative to source measurements shows a clear dependence on ELM size, ELM frequency, and strike point location, with large ELMs depositing significantly more W on the CP from the far-SOL source. Additionally, high-spatial-resolution (~1 mm) and ELM-resolved spectroscopic measurements of W sourcing indicate shifts in the peak erosion rate. Furthermore, high performance discharges with rapid ELMs show core W concentrations of a few 10^-5, and the CP deposition profile indicates W is predominantly transported to the midplane from the OSP rather than from the far-SOL region. The low central W concentration is shown to be due to flattening of the main plasma density profile, presumably by on-axis electron cyclotron heating. Work supported under USDOE Cooperative Agreement DE-FC02-04ER54698.
Time- & Load-Dependence of Triboelectric Effect.
Pan, Shuaihang; Yin, Nian; Zhang, Zhinan
2018-02-06
Time- and load-dependent friction behavior has long been considered important, due to its time-evolution and force-driving characteristics. However, its electronic behavior, mainly manifested in the triboelectric effect, has almost never been given full attention and analysis from this point of view. In this paper, by experimenting with fcc-latticed aluminum and copper friction pairs, the mechanical and electronic behaviors of friction contacts are correlated through time and load analyses, and the underlying physical understanding is provided. Most importantly, the difference in "response lag" between force and electricity is discussed, the extreme points of the coefficient of friction with increasing normal load are observed and explained with the surface properties and dynamical behaviors (i.e. wear), and the micro and macro theories linking tribo-electricity to normal load and wear (i.e. the physical explanation of the coupled electrical and mechanical phenomena) are successfully developed and tested.
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
PRIMsrc is a novel implementation of a non-parametric bump hunting procedure, based on the Patient Rule Induction Method (PRIM), offering a unified treatment of outcome variables, including censored time-to-event (Survival), continuous (Regression) and discrete (Classification) responses. To fit the model, it uses a recursive peeling procedure with specific peeling criteria and stopping rules depending on the response. To validate the model, it provides an objective function based on prediction-error or other specific statistic, as well as two alternative cross-validation techniques, adapted to the task of decision-rule making and estimation in the three types of settings. PRIMsrc comes as an open source R package, including at this point: (i) a main function for fitting a Survival Bump Hunting model with various options allowing cross-validated model selection to control model size (#covariates) and model complexity (#peeling steps) and generation of cross-validated end-point estimates; (ii) parallel computing; (iii) various S3-generic and specific plotting functions for data visualization, diagnostic, prediction, summary and display of results. It is available on CRAN and GitHub. PMID:26798326
Latent Growth Modeling of nursing care dependency of acute neurological inpatients.
Piredda, M; Ghezzi, V; De Marinis, M G; Palese, A
2015-01-01
Longitudinal three-time point study, addressing how neurological adult patient care dependency varies from the admission time to the 3rd day of acute hospitalization. Nursing care dependency was measured with the Care Dependency Scale (CDS) and a Latent Growth Modeling approach was used to analyse the CDS trend in 124 neurosurgical and stroke inpatients. Care dependence followed a decreasing linear trend. Results can help nurse-managers planning an appropriate amount of nursing care for acute neurological patients during their initial stage of hospitalization. Further studies are needed aimed at investigating the determinants of nursing care dependence during the entire in-hospital stay.
Origin of acoustic emission produced during single point machining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heiple, C. R.; Carpenter, S. H.; Armentrout, D. L.
1991-01-01
Acoustic emission was monitored during single point, continuous machining of 4340 steel and Ti-6Al-4V as a function of heat treatment. Acoustic emission produced during tensile and compressive deformation of these alloys has been previously characterized as a function of heat treatment. Heat treatments which increase the strength of 4340 steel increase the amount of acoustic emission produced during deformation, while heat treatments which increase the strength of Ti-6Al-4V decrease the amount of acoustic emission produced during deformation. If chip deformation were the primary source of acoustic emission during single point machining, then opposite trends in the level of acoustic emission produced during machining as a function of material strength would be expected for these two alloys. Trends in rms acoustic emission level with increasing strength were similar for both alloys, demonstrating that chip deformation is not a major source of acoustic emission in single point machining. Acoustic emission has also been monitored as a function of machining parameters on 6061-T6 aluminum, 304 stainless steel, 17-4PH stainless steel, lead, and Teflon. The data suggest that sliding friction between the nose and/or flank of the tool and the newly machined surface is the primary source of acoustic emission. Changes in acoustic emission with tool wear were strongly material dependent. 21 refs., 19 figs., 4 tabs.
Sadeghi, Mohammad Hosein; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani
2018-01-01
Purpose: The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is to estimate the effects of the tandem and ovoid applicator on the dose distribution inside the phantom by MCNP5 Monte Carlo simulations. Material and methods: In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum were compared between the two cases. The exact dwell positions and times of the source and the positions of the dosimetry points were determined from the images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. Results: The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. Conclusions: According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, which adds the dose of each source obtained by the MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy. PMID:29619061
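The superposition idea described above (summing the dose contribution of each source dwell position) can be sketched in a few lines. This is an illustrative point-source approximation only: the air-kerma strength, dose-rate constant, and the omission of the radial dose and anisotropy functions are assumptions for the sketch, not values from the study.

```python
import math

def dose_rate(sk, dose_rate_const, r):
    # Point-source approximation: dose rate (cGy/h) at distance r (cm)
    # from a source of air-kerma strength sk and dose-rate constant
    # dose_rate_const.  The geometry function reduces to 1/r^2; the
    # radial dose and anisotropy functions are omitted for brevity.
    return sk * dose_rate_const / r**2

def superposition_dose(dwells, point):
    # Total dose at `point` = sum over dwell positions of
    # (dose rate at that distance) x (dwell time in hours).
    total = 0.0
    for (x, y, z, t_hours) in dwells:
        r = math.dist((x, y, z), point)
        total += dose_rate(sk=40.0, dose_rate_const=1.109, r=r) * t_hours
    return total
```

For instance, a single 1-hour dwell 1 cm from the calculation point contributes sk × Λ directly; adding applicator attenuation would multiply each term by a transmission factor.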
Radial Distribution of X-Ray Point Sources Near the Galactic Center
NASA Astrophysics Data System (ADS)
Hong, Jae Sub; van den Berg, Maureen; Grindlay, Jonathan E.; Laycock, Silas
2009-11-01
We present the log N-log S and spatial distributions of X-ray point sources in seven Galactic bulge (GB) fields within 4° from the Galactic center (GC). We compare the properties of 1159 X-ray point sources discovered in our deep (100 ks) Chandra observations of three low-extinction Window fields near the GC with the X-ray sources in the other GB fields centered around Sgr B2, Sgr C, the Arches Cluster, and Sgr A* using Chandra archival data. To reduce the systematic errors induced by the uncertain X-ray spectra of the sources coupled with field- and distance-dependent extinction, we classify the X-ray sources using quantile analysis and estimate their fluxes accordingly. The result indicates that the GB X-ray population is highly concentrated at the center, more heavily than the stellar distribution models predict. It extends out to more than 1.4° from the GC, and the projected density follows an empirical radial relation inversely proportional to the offset from the GC. We also compare the total X-ray and infrared surface brightness using the Chandra and Spitzer observations of the regions. The radial distribution of the total infrared surface brightness from the 3.6 μm band images appears to resemble the radial distribution of the X-ray point sources better than that predicted by the stellar distribution models. Assuming a simple power-law model for the X-ray spectra, the spectra appear intrinsically harder closer to the GC, but adding an iron emission line at 6.7 keV in the model allows the spectra of the GB X-ray sources to be largely consistent across the region. This implies that the majority of these GB X-ray sources can be of the same or similar type. Their X-ray luminosity and spectral properties support the idea that the most likely candidate is magnetic cataclysmic variables (CVs), primarily intermediate polars (IPs).
Their observed number density is also consistent with the majority being IPs, provided the relative CV to star density in the GB is not smaller than the value in the local solar neighborhood.
Improved surface-wave retrieval from ambient seismic noise by multi-dimensional deconvolution
NASA Astrophysics Data System (ADS)
Wapenaar, Kees; Ruigrok, Elmer; van der Neut, Joost; Draganov, Deyan
2011-01-01
The methodology of surface-wave retrieval from ambient seismic noise by cross-correlation relies on the assumption that the noise field is equipartitioned. Deviations from equipartitioning degrade the accuracy of the retrieved surface-wave Green's function. A point-spread function, derived from the same ambient noise field, quantifies the smearing in space and time of the virtual source of the Green's function. By multidimensionally deconvolving the retrieved Green's function by the point-spread function, the virtual source becomes better focused in space and time, and hence the accuracy of the retrieved surface-wave Green's function may improve significantly. We illustrate this with a numerical example and discuss the advantages and limitations of this new methodology.
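The deconvolution step can be illustrated in one dimension: the cross-correlation result is deconvolved by the point-spread function in the frequency domain with a small damping term. This is a minimal sketch of the idea, not the authors' full multi-station implementation; the 1-D circular setting and the damping parameter `eps` are assumptions.

```python
import numpy as np

def mdd_1d(correlation, psf, eps=1e-3):
    # Deconvolve the cross-correlation result by the point-spread function
    # in the frequency domain (damped least squares):
    #   G(w) = C(w) P*(w) / (|P(w)|^2 + eps)
    # so the smeared virtual source is refocused toward a band-limited spike.
    C = np.fft.fft(correlation)
    P = np.fft.fft(psf)
    G = C * np.conj(P) / (np.abs(P)**2 + eps)
    return np.real(np.fft.ifft(G))
```

With noise-free synthetic data and a point-spread function whose spectrum does not vanish, the deconvolution recovers the underlying response almost exactly; in practice `eps` trades resolution against noise amplification.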
NASA Astrophysics Data System (ADS)
Labenski, J. R.; Tew, W. L.; Benz, S. P.; Nam, S. W.; Dresselhaus, P.
2008-02-01
A Johnson-noise thermometer (JNT) has been used with a quantized voltage noise source (QVNS) as a calculable reference to determine the ratio of temperatures near the Zn freezing point to those near the Sn freezing point. The temperatures are derived in a series of separate measurements comparing the synthesized noise power from the QVNS with that of Johnson noise from a known resistance. The synthesized noise power is digitally programmed to match the thermal noise powers at both temperatures and provides the principal means of scaling the temperatures. This produces a relatively flat spectrum for the ratio of spectral noise densities, which is close to unity in the low-frequency limit. The data are analyzed as relative spectral ratios over the 4.8 to 450 kHz range, averaged over a 3.2 kHz bandwidth. A three-parameter model is used to account for differences in time constants that are inherently temperature dependent. A drift effect of approximately -6 μK·K-1 per day is observed in the results, and an empirical correction is applied to yield a relative difference in temperature ratios of -11.5 ± 43 μK·K-1 with respect to the ratio of temperatures assigned on the International Temperature Scale of 1990 (ITS-90). When these noise thermometry results are combined with results from acoustic gas thermometry at temperatures near the Sn freezing point, a value of T - T90 = 7 ± 30 mK for the Zn freezing point is derived.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salcedo, D.; Laskin, Alexander; Shutthanandan, V.
The feasibility of using an online thermal-desorption electron-ionization high-resolution aerosol mass spectrometer (AMS) for the detection of particulate trace elements was investigated by analyzing data from Mexico City obtained during the MILAGRO 2006 field campaign, where relatively high concentrations of trace elements have been reported. This potential application is of interest due to the real-time data provided by the AMS, its high sensitivity and time resolution, and the widespread availability and use of this instrument. High-resolution mass spectral analysis, isotopic ratios, and ratios of different ions containing the same elements are used to constrain the chemical identity of the measured ions. The detection of Cu, Zn, As, Se, Sn, and Sb is reported. There was no convincing evidence for the detection of other trace elements commonly reported in PM. The elements detected tend to be those with lower melting and boiling points, as expected given the use of a vaporizer at 600 °C in this instrument. Operation of the AMS vaporizer at higher temperatures is likely to improve trace element detection. The detection limit is estimated at approximately 0.3 ng m-3 for 5 min of data averaging. Concentration time series obtained from the AMS data were compared to concentration records determined from offline analysis of particle samples from the same times and locations by ICP (PM2.5) and PIXE (PM1.1 and PM0.3). The degree of correlation and agreement between the three instruments (AMS, ICP, and PIXE) varied depending on the element. The AMS shows promise for real-time detection of some trace elements, although additional work, including laboratory calibrations with different chemical forms of these elements, is needed to further develop this technique and to understand the differences with the ambient data from the other techniques.
The trace elements peaked in the morning, as expected for primary sources, and the many detected plumes suggest the presence of multiple point sources, probably industrial, in Mexico City which are variable in time and space, in agreement with previous studies.
NASA Astrophysics Data System (ADS)
Sarmah, Ratan; Tiwari, Shubham
2018-03-01
An analytical solution is developed for predicting two-dimensional transient seepage into a ditch drainage network receiving water from a non-uniform steady ponding field at the soil surface under the influence of a source/sink in the flow domain. The flow domain is assumed to be saturated, homogeneous, and anisotropic, and to have finite extents in the horizontal and vertical directions. The drains are assumed to be vertical and to penetrate to the impervious layer. The water levels in the drains are unequal and invariant with time. The flow field is also assumed to be under the continuous influence of a time- and space-dependent arbitrary source/sink term. The correctness of the proposed model is checked against a numerical code developed for the purpose and against an existing analytical solution for a simplified case. The study highlights the significance of the source/sink influence on subsurface flow. With the imposition of the source and sink terms in the flow domain, the pathlines and travel times of water particles deviate from their original positions, and the side and top discharges to the drains are also observed to be strongly influenced by the source/sink terms. The travel times and pathlines of water particles are also observed to depend on the height of water in the ditches and on the location of the source/sink activation area.
Discharge of stormwater runoff onto beaches is a major cause of beach closings and advisories in the United States. Prospective studies of recreational water quality and health have often been limited to two time points (baseline and follow-up). Little is known about the risk of ...
USDA-ARS?s Scientific Manuscript database
Accurate and timely spatial predictions of vegetation cover from remote imagery are an important data source for natural resource management. High-quality in situ data are needed to develop and validate these products. Point-intercept sampling techniques are a common method for obtaining quantitativ...
An adaptive grid scheme using the boundary element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munipalli, R.; Anderson, D.A.
1996-09-01
A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.
NASA Technical Reports Server (NTRS)
Panda, Jayanta; Seasholtz, Richard G.
2003-01-01
Noise sources in high-speed jets were identified by directly correlating flow density fluctuation (cause) to far-field sound pressure fluctuation (effect). The experimental study was performed in a nozzle facility at the NASA Glenn Research Center in support of NASA's initiative to reduce the noise emitted by commercial airplanes. Previous efforts to use this correlation method have failed because the tools for measuring jet turbulence were intrusive. In the present experiment, a molecular Rayleigh-scattering technique was used that depended on laser light scattering by gas molecules in air. The technique allowed accurate measurement of air density fluctuations at different points in the plume. The study was conducted in shock-free, unheated jets of Mach numbers 0.95, 1.4, and 1.8. The turbulent motion, as evident from the density fluctuation spectra, was remarkably similar in all three jets, whereas the noise sources were significantly different. The correlation study was conducted by keeping a microphone at a fixed location (at the peak noise emission angle of 30° to the jet axis and 50 nozzle diameters away) while moving the laser probe volume from point to point in the flow. The following figure shows maps of the nondimensional coherence value measured at different Strouhal frequencies (frequency × diameter/jet speed) in the supersonic Mach 1.8 and subsonic Mach 0.95 jets. The higher the coherence, the stronger the source.
Overview of the Texas Source Water Assessment Project
Ulery, Randy L.
2000-01-01
The 1996 Amendments to the Safe Drinking Water Act require, for the first time, that each state prepare a source water assessment for all public water systems (PWS). Previously, Federal regulations focused on sampling and enforcement, with emphasis on the quality of delivered water. These Amendments emphasize the importance of protecting the source water. States are required to determine the drinking-water source, the origin of contaminants monitored or the potential contaminants to be monitored, and the intrinsic susceptibility of the source water. Under the amendments to the Act, States must create Source Water Assessment Programs (SWAPs). The programs must include an individual source water assessment for each public water system regulated by the State. These assessments will determine whether an individual drinking water source is susceptible to contamination. During 1997-99, TNRCC and USGS staff met as subject-matter working groups to develop an approach to conducting Source Water Susceptibility Assessments (SWSA) and a draft workplan. The draft workplan was then presented to and reviewed by various stakeholder and technical advisory groups. Comments and suggestions from these groups were considered, and a final workplan was produced and presented to the EPA. After EPA approval, work formally began on the Texas SWAP Project. The project has an expected completion date of September 2002. At that time, initial SWSAs of all Texas public water supplies should be complete. Ground-water supplies can be considered susceptible if a possible source of contamination (PSOC) exists in the contributing area for the public-supply well field or spring, the contaminant travel time to the well field or spring is short, and the soil zone, vadose zone, and aquifer-matrix materials are unlikely to adequately attenuate the contaminants associated with the PSOC. In addition, particular types of land use/cover within the contributing area may cause the supply to be deemed more susceptible to contamination.
Finally, detection of various classes of constituents in water from wells in the vicinity of a public supply well may indicate susceptibility of the public-supply well even though there may be no identifiable PSOC or land use activity. Surface-water supplies are by nature susceptible to contamination from both point and non-point sources. The degree of susceptibility of a PWS to contamination can vary and is a function of the environmental setting, water and wastewater management practices, and land use/cover within a water supply's contributing watershed area. For example, a PWS intake downstream from extensive urban development may be more susceptible to non-point source contamination than a PWS intake downstream from a forested, relatively undeveloped watershed. Surface-water supplies are also susceptible to contamination from point sources, which may include permitted discharges, as well as accidental spills or other introduction of contaminants.
NASA Astrophysics Data System (ADS)
Koch, Jonas; Nowak, Wolfgang
2013-04-01
At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and biodegrades in the long term. In industrialized countries the number of such contaminated sites is so high that a ranking from most risky to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities, and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution, and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups in which we vary the physical and stochastic dependencies of the input parameters and simulated processes.
Under these changes, the probability density functions show strong shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model-based decision support for DNAPL-contaminated sites.
NASA Astrophysics Data System (ADS)
Nozu, A.
2013-12-01
A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which is itself a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and assumed to follow the omega-square model. By multiplying the source spectrum with the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with the Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely, longitude, latitude, depth, rupture time, seismic moment, and corner frequency of the subevent. The finite size of the subevent can be taken into account because the model includes the subevent corner frequency, which is inversely proportional to the subevent length. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz), and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model.
Then the results were compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered an example of characterized source models. Although the pseudo point-source model involves far fewer model parameters than the super-asperity model, the errors associated with the former were comparable to those of the latter for velocity waveforms and envelopes. Furthermore, the errors associated with the former were much smaller than those of the latter for Fourier spectra. This evidence indicates the usefulness of the pseudo point-source model. Figure: comparison of the observed (black) and synthetic (red) Fourier spectra; the spectra are the composition of two horizontal components, smoothed with a Parzen window with a bandwidth of 0.05 Hz.
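The omega-square source spectrum at the heart of the pseudo point-source model can be written down directly. The sketch below assumes a standard omega-square amplitude spectrum together with generic 1/r geometrical spreading and whole-path anelastic attenuation; the function names and the specific path/site parameterization are illustrative, not taken from the paper.

```python
import numpy as np

def omega_square_spectrum(f, m0, fc):
    # Moment-rate amplitude spectrum of the omega-square model:
    # flat at the seismic moment m0 below the corner frequency fc,
    # falling off as f**-2 well above it.
    return m0 / (1.0 + (f / fc)**2)

def site_amplitude(f, m0, fc, r, q, beta, site_amp):
    # Fourier amplitude at a site = source spectrum x path effect x site
    # amplification factor.  Path effect here: geometrical spreading 1/r
    # and anelastic attenuation exp(-pi f r / (Q beta)); both are generic
    # textbook choices, assumed for this sketch.
    path = np.exp(-np.pi * f * r / (q * beta)) / r
    return omega_square_spectrum(f, m0, fc) * path * site_amp
```

Summing such subevent amplitudes (with a smaller event's Fourier phase) over all subevents gives the ground motion of the entire rupture, as the abstract describes.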
Investigation on the pinch point position in heat exchangers
NASA Astrophysics Data System (ADS)
Pan, Lisheng; Shi, Weixiu
2016-06-01
The pinch point is important for analyzing heat transfer in thermodynamic cycles. With the aim of revealing the importance of determining the accurate pinch point, research on the pinch point position is carried out by a theoretical method. The results show that the pinch point position depends on the parameters of the heat transfer fluids and the major fluid properties. In most cases, the pinch point is located at the bubble point for the evaporator and the dew point for the condenser. However, the pinch point shifts to the supercooled liquid state in near-critical conditions for the evaporator. Similarly, it shifts to the superheated vapor state as the condensing temperature approaches the critical temperature for the condenser. It can even shift to the working fluid entrance of the evaporator or the supercritical heater when the heat source fluid temperature is very high compared with the heat absorption temperature. A wrong position for the pinch point may lead to serious error. In brief, the pinch point should be found by an iterative method in all conditions rather than taken for granted.
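The recommended search (rather than assuming the pinch sits at the bubble point) can be sketched by discretizing the cumulative heat duty and scanning for the minimum stream-to-stream temperature difference. The linear hot-stream cooling curve and the three-segment (preheat/evaporation/superheat) working-fluid curve below, including a unit superheat slope, are simplifying assumptions for illustration; a real implementation would evaluate fluid properties from an equation of state.

```python
import numpy as np

def pinch_point(q_total, t_hot_in, cp_hot_flow, t_cold_in, t_boil,
                q_preheat, q_evap, n=1000):
    # Scan the cumulative heat duty q from 0 (cold end) to q_total (hot end)
    # and locate the minimum temperature difference between the streams.
    # Hot stream: linear cooling curve with heat-capacity rate cp_hot_flow.
    # Working fluid: sensible preheat to t_boil, isothermal evaporation,
    # then superheat (assumed slope of 1 K per unit duty).
    q = np.linspace(0.0, q_total, n)
    t_hot = t_hot_in - (q_total - q) / cp_hot_flow
    t_cold = np.where(q < q_preheat,
                      t_cold_in + (t_boil - t_cold_in) * q / q_preheat,
                      np.where(q < q_preheat + q_evap,
                               t_boil,
                               t_boil + (q - q_preheat - q_evap)))
    dt = t_hot - t_cold
    i = int(np.argmin(dt))
    return q[i], dt[i]          # duty at the pinch, minimum delta-T
```

With subcritical parameters the scan recovers the bubble-point pinch the abstract describes as the usual case; changing the stream curves moves the pinch, which is exactly why the iterative search is needed.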
Time domain localization technique with sparsity constraint for imaging acoustic sources
NASA Astrophysics Data System (ADS)
Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain
2017-09-01
This paper addresses a source localization technique in the time domain for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross-correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem: orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi real-time generation of noise source maps. Finally, the technique is tested with real data.
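The generalized cross-correlation in the first stage can be sketched for a single microphone pair. PHAT weighting is one common whitening choice, used here as an assumption since the abstract does not specify the weighting; repeating this over all pairs of a spherical array and back-projecting the delays yields the initial source map.

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    # Generalized cross-correlation with phase transform (PHAT) weighting:
    # the cross-spectrum is whitened so the correlation peak sharply
    # locates the inter-microphone time delay of a broadband source.
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    S /= np.maximum(np.abs(S), 1e-12)       # PHAT whitening
    cc = np.fft.irfft(S, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs   # delay in seconds
```

A positive return value means `sig` lags `ref`; sub-sample refinement (e.g. parabolic interpolation around the peak) is omitted for brevity.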
Sources and Deposition of Polycyclic Aromatic Hydrocarbons to Western U.S. National Parks
USENKO, SASCHA; MASSEY SIMONICH, STACI L.; HAGEMAN, KIMBERLY J.; SCHRLAU, JILL E.; GEISER, LINDA; CAMPBELL, DON H.; APPLEBY, PETER G.; LANDERS, DIXON H.
2010-01-01
Seasonal snowpack, lichens, and lake sediment cores were collected from fourteen lake catchments in eight western U.S. National Parks and analyzed for sixteen polycyclic aromatic hydrocarbons (PAHs) in order to determine their current and historical deposition, as well as to identify their potential sources. Seasonal snowpack was measured to determine the current wintertime atmospheric PAH deposition; lichens were measured to determine the long-term, year-round deposition; and the temporal PAH deposition trends were reconstructed using lake sediment cores dated with 210Pb and 137Cs. The fourteen remote lake catchments ranged from low-latitude catchments (36.6° N) at high elevation (2900 masl) in Sequoia National Park, CA to high-latitude catchments (68.4° N) at low elevation (427 masl) in the Alaskan Arctic. Over 75% of the catchments demonstrated statistically significant temporal trends in ΣPAH sediment flux, depending on catchment proximity to source regions and topographic barriers. The ΣPAH concentrations and fluxes in seasonal snowpack, lichens, and surficial sediment were 3.6 to 60,000 times greater in the Snyder Lake catchment of Glacier National Park than in the other 13 lake catchments. The PAH ratios measured in snow, lichen, and sediment were used to identify a local aluminum smelter as a major source of PAHs to the Snyder Lake catchment. These results suggest that topographic barriers influence the atmospheric transport and deposition of PAHs in high-elevation ecosystems and that PAH sources to these national park ecosystems range from local point sources to diffuse regional and global sources. PMID:20465303
Staphylococcus xylosus fermentation of pork fatty waste: raw material for biodiesel production.
Marques, Roger Vasques; Paz, Matheus Francisco da; Duval, Eduarda Hallal; Corrêa, Luciara Bilhalva; Corrêa, Érico Kunde
2016-01-01
The need for cleaner sources of energy has stirred research into utilising alternative fuel sources with favourable emissions and sustainability, such as biodiesel. However, there are technical constraints that hinder the widespread use of some low-cost raw materials such as pork fatty wastes. Currently available technology permits the use of lipolytic microorganisms to sustainably produce energy from fat sources, and several microorganisms and their metabolites are being investigated as potential energy sources. Thus, the aim of this study was to characterise the process of Staphylococcus xylosus mediated fermentation of pork fatty waste. We also wanted to explore the possibility of fermentation effecting a modification in the lipid carbon chain to reduce its melting point and thereby act directly on one of the main technical barriers to obtaining biodiesel from this abundant source of lipids. Pork fatty waste was obtained from slaughterhouses in southern Brazil during evisceration of the carcasses, and the kidney casing of slaughtered animals was used as feedstock. Fermentation was performed in BHI broth with different concentrations of fatty waste and for different time periods, which enabled evaluation of the effect of fermentation time on the melting point of swine fat. The lowest melting point observed was around 46°C, indicating that these chemical and biological reactions can occur under milder conditions, and that such pre-treatment may further facilitate production of biodiesel from fatty animal waste. Copyright © 2016 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.
Temperature dependence of attitude sensor coalignments on the Solar Maximum Mission (SMM)
NASA Technical Reports Server (NTRS)
Pitone, D. S.; Eudell, A. H.; Patt, F. S.
1989-01-01
Results are presented on the temperature correlation of the relative coalignment between the fine pointing sun sensor (FPSS) and fixed-head star trackers (FHSTs) on the Solar Maximum Mission (SMM). This correlation can be caused by spacecraft electronic and mechanical effects. Routine daily measurements reveal a time-dependent sensor coalignment variation. The magnitude of the alignment variation is on the order of 120 arc seconds (arc sec), which greatly exceeds the prelaunch thermal structural analysis estimate of 15 arc sec. Differences between FPSS-only and FHST-only yaw solutions as a function of mission day are correlated with the relevant spacecraft temperature. If unaccounted for, the sensor misalignments due to thermal effects are a significant source of error in attitude determination accuracy. Prominent sources of temperature variation are identified and correlated with the temperature profile observed on the SMM. It was determined that even relatively small changes in spacecraft temperature can affect the coalignments between the attitude hardware on the SMM and the science instrument support plate, and that frequent recalibration of sensor alignments is necessary to compensate for this effect. An alternative to frequent recalibration is to model the variation of alignments as a function of temperature and use this to maintain accurate ground or onboard alignment estimates. These flight data analysis results may be important considerations for prelaunch analysis of future missions.
Cosmological rotating black holes in five-dimensional fake supergravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nozawa, Masato; Maeda, Kei-ichi; Waseda Research Institute for Science and Engineering, Okubo 3-4-1, Shinjuku, Tokyo 169-8555
2011-01-15
In a recent series of papers, we found an arbitrary dimensional, time-evolving, and spatially inhomogeneous solution in Einstein-Maxwell-dilaton gravity with particular couplings. Similar to the supersymmetric case, the solution can be arbitrarily superposed in spite of nontrivial time dependence, since the metric is specified by a set of harmonic functions. When each harmonic has a single point source at the center, the solution describes a spherically symmetric black hole with regular Killing horizons, and the spacetime approaches asymptotically the Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmology. We discuss in this paper that in 5 dimensions, this equilibrium condition traces back to the first-order 'Killing spinor' equation in 'fake supergravity' coupled to arbitrary U(1) gauge fields and scalars. We present a five-dimensional, asymptotically FLRW, rotating black-hole solution admitting a nontrivial 'Killing spinor', which is a spinning generalization of our previous solution. We argue that the solution admits nondegenerate and rotating Killing horizons, in contrast with the supersymmetric solutions. It is shown that the present pseudo-supersymmetric solution admits closed timelike curves around the central singularities. When only one harmonic is time-dependent, the solution oxidizes to 11 dimensions and realizes dynamically intersecting M2/M2/M2-branes in a rotating Kasner universe. The Kaluza-Klein-type black holes are also discussed.
Ecological change points: The strength of density dependence and the loss of history.
Ponciano, José M; Taper, Mark L; Dennis, Brian
2018-05-01
Change points in the dynamics of animal abundances have been recorded extensively in historical time series. Little attention has been paid to the theoretical dynamic consequences of such change points. Here we propose a change-point model of stochastic population dynamics. This investigation embodies a shift of attention from the problem of detecting when a change will occur to another non-trivial puzzle: using ecological theory to understand and predict the post-breakpoint behavior of the population dynamics. The proposed model and the explicit expressions derived here predict and quantify how density dependence modulates the influence of the pre-breakpoint parameters on the post-breakpoint dynamics. Time series transitioning from one stationary distribution to another contain information about where the process was before the change point, where it is heading, and how long the transition will take, and here this information is stated explicitly. Importantly, our results provide a direct connection between the strength of density dependence and theoretical properties of dynamic systems, such as the concept of resilience. Finally, we illustrate how to harness such information through maximum likelihood estimation for state-space models, and test the model robustness to widely different forms of compensatory dynamics. The model can be used to estimate important quantities in the theory and practice of population recovery. Copyright © 2018 Elsevier Inc. All rights reserved.
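The qualitative behavior described above can be sketched with a minimal change-point process; this is not the authors' exact model, and all parameter values are illustrative.

```python
import numpy as np

# Minimal sketch: a Gompertz-type AR(1) process on log-abundance whose growth
# parameter `a` shifts at time t0. The density-dependence coefficient |c| < 1
# controls how fast the process forgets the pre-breakpoint stationary
# distribution. Parameter values are illustrative.
rng = np.random.default_rng(0)

def simulate(a_pre, a_post, c, sigma, t0, n, x0=0.0):
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        a = a_pre if t < t0 else a_post
        x[t] = a + c * x[t - 1] + rng.normal(0.0, sigma)
    return x

# The stationary mean is a / (1 - c): here it moves from 1.0 to 2.0 at
# t0 = 500, and strong density dependence (small |c|) makes the transition fast.
x = simulate(a_pre=0.5, a_post=1.0, c=0.5, sigma=0.05, t0=500, n=1000)
```

The speed of the transition between the two stationary distributions is what ties the change-point behavior to the strength of density dependence.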
Time-dependent spectral renormalization method
NASA Astrophysics Data System (ADS)
Cole, Justin T.; Musslimani, Ziad H.
2017-11-01
The spectral renormalization method was introduced by Ablowitz and Musslimani (2005) as an effective way to numerically compute (time-independent) bound states for certain nonlinear boundary value problems. In this paper, we extend those ideas to the time domain and introduce a time-dependent spectral renormalization method as a numerical means to simulate linear and nonlinear evolution equations. The essence of the method is to convert the underlying evolution equation from its partial or ordinary differential form (using Duhamel's principle) into an integral equation. The solution sought is then viewed as a fixed point in both space and time. The resulting integral equation is solved numerically using a simple renormalized fixed-point iteration method. Convergence is achieved by introducing a time-dependent renormalization factor which is numerically computed from the physical properties of the governing evolution equation. The proposed method has the ability to incorporate physics into the simulations in the form of conservation laws or dissipation rates. This novel scheme is implemented on benchmark evolution equations: the classical nonlinear Schrödinger (NLS), integrable PT symmetric nonlocal NLS and the viscous Burgers' equations, prototypical examples of conservative and dissipative dynamical systems. Numerical implementation and algorithm performance are also discussed.
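The time-independent idea being extended here can be sketched with a Petviashvili-type renormalized fixed-point iteration for the NLS bound state; the grid, domain, and the standard exponent 3/2 below are conventional choices, not taken from this paper.

```python
import numpy as np

# Sketch of the (time-independent) spectral renormalization of Ablowitz and
# Musslimani (2005): a renormalized fixed-point iteration for the NLS bound
# state u'' - mu*u + u^3 = 0, whose exact solution is
# u(x) = sqrt(2*mu) * sech(sqrt(mu)*x). Discretization choices are standard,
# not the paper's.
N, L, mu = 256, 40.0, 1.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
lin = mu + k**2                       # linear operator in Fourier space

u = np.exp(-x**2)                     # initial guess
for _ in range(50):
    uh = np.fft.fft(u)
    nh = np.fft.fft(u**3)             # transform of the nonlinearity
    # Renormalization factor: keeps the iterates from drifting to zero or
    # blowing up; it equals 1 at the converged bound state.
    s = (np.sum(lin * np.abs(uh)**2) / np.sum(np.conj(uh) * nh)).real
    u = np.fft.ifft(s**1.5 * nh / lin).real

exact = np.sqrt(2.0 * mu) / np.cosh(np.sqrt(mu) * x)
```

The paper's contribution is to carry this renormalized fixed-point viewpoint into the time domain via Duhamel's principle.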
NASA Astrophysics Data System (ADS)
Martin, E. R.; Dou, S.; Lindsey, N.; Chang, J. P.; Biondi, B. C.; Ajo Franklin, J. B.; Wagner, A. M.; Bjella, K.; Daley, T. M.; Freifeld, B. M.; Robertson, M.; Ulrich, C.; Williams, E. F.
2016-12-01
Localized strong sources of noise in an array have been shown to cause artifacts in Green's function estimates obtained via cross-correlation. Their effect is often reduced through the use of cross-coherence. Beyond independent localized sources, temporally or spatially correlated sources of noise frequently occur in practice but violate basic assumptions of much of the theory behind ambient noise Green's function retrieval. These correlated noise sources can occur in urban environments due to transportation infrastructure, or in areas around industrial operations like pumps running at CO2 sequestration sites or oil and gas drilling sites. Better understanding of these artifacts should help us develop and justify methods for their automatic removal from Green's function estimates. We derive expected artifacts in cross-correlations from several distributions of correlated noise sources including point sources that are exact time-lagged repeats of each other and Gaussian-distributed in space and time with covariance that exponentially decays. Assuming the noise distribution stays stationary over time, the artifacts become more coherent as more ambient noise is included in the Green's function estimates. We support our results with simple computational models. We observed these artifacts in Green's function estimates from a 2015 ambient noise study in Fairbanks, AK where a trenched distributed acoustic sensing (DAS) array was deployed to collect ambient noise alongside a road with the goal of developing a permafrost thaw monitoring system. We found that joints in the road repeatedly being hit by cars travelling at roughly the speed limit led to artifacts similar to those expected when several points are time-lagged copies of each other. We also show test results of attenuating the effects of these sources during time-lapse monitoring of an active thaw test in the same location with noise detected by a 2D trenched DAS array.
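A toy numerical model of the artifact discussed above: two noise sources that are exact time-lagged copies of each other leave spurious peaks in the receiver cross-correlation at lags offset by the source lag, alongside the physical lag. The 1-D geometry, delays, and sample counts below are made up for illustration.

```python
import numpy as np

# Toy model: each receiver records a delayed copy of each of two sources,
# where source B is an exact repeat of source A delayed by lag_src samples.
rng = np.random.default_rng(1)
n = 4000
s = rng.normal(size=n)
lag_src = 300                        # source B repeats source A 300 samples later

rec1 = np.roll(s, 10) + np.roll(s, lag_src + 40)   # delays in samples
rec2 = np.roll(s, 25) + np.roll(s, lag_src + 55)

xcorr = np.correlate(rec1, rec2, mode="full")
lags = np.arange(-n + 1, n)

# Physical inter-receiver lags are 10-25 = 340-355 = -15; the repeating source
# adds artifact peaks at 340-25 = 315 and 10-355 = -345.
top3 = set(lags[np.argsort(np.abs(xcorr))[-3:]])
```

The physical lag carries contributions from both source pairs and so dominates, but the two artifact peaks persist and, as the abstract notes, become more coherent as more noise is stacked.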
MSWT-01, flood disaster water treatment solution from common ideas
NASA Astrophysics Data System (ADS)
Ananto, Gamawan; Setiawan, Albertus B.; Z, Darman M.
2013-06-01
Indonesia has many places at risk of flood disasters where clean water supply is a problem. Various relief programs are initiated by the government, corporate CSR programs, and sporadic community actions to provide clean water, each with its own advantages and disadvantages: one solution may be easy to operate but provide inadequate capacity, while another performs well but costs more. This situation inspired the development of a water treatment machine as an alternative. Many treatment methods are available, from simple to highly sophisticated, depending on the raw-water input and the required output quality. MSWT, Mobile Surface Water Treatment, is a design for treating raw water in flood areas, sized for a nominal 1 m3 per hour. The design combines existing technologies drawn from the related literature; its highlight is packaging the modular process train into a compact, mobile unit that is easy to operate. Prototype-level trials show that the machine can produce water suitable for sanitation and for cooking and drinking, even from contaminated input sources. From an investment point of view, the machine can also be treated as an asset used repeatedly whenever needed, rather than built for a single relief project.
Influence of different emission sources on atmospheric organochlorine patterns in Germany
NASA Astrophysics Data System (ADS)
Wenzel, Klaus-Dieter; Hubert, Andreas; Weissflog, Ludwig; Kühne, Ralph; Popp, Peter; Kindler, Annegret; Schüürmann, Gerrit
The concentrations of organochlorine parent substances such as p,p'-DDT (2,2-bis(chlorophenyl)-1,1,1-trichloroethane) and lindane (γ-hexachlorocyclohexane (HCH)), as well as of their metabolites and conversion products, chlorobenzenes (CBz) and polychlorinated biphenyl congeners (PCBs), were determined both in the gas phase and in the particle-bound fraction at 10 locations in Germany. The ratios between parent substances and possible degradation products were influenced by different gaseous point-source emissions. The site-related degradation-product factors depend on the emission source. Surprisingly, the highest degradation ratios of p,p'-DDT to DDE and DDD were calculated not at +20 °C but at -19 °C. This indicates that heavy metals, black carbon and other organic substances such as PAHs may catalyse degradation reactions on particles, because all of these substances condense more strongly at lower temperatures. Principal component analysis (PCA) was applied to detect hidden characteristics of the pollutant patterns that depend on the specific emission source and on typical degradation processes; the results suggest that the organochlorines are associated. Comparatively higher concentrations of DDX and HCH isomers at some sites also mean higher concentrations of CBz and PCBs, without an additional source being recognizable.
Estimation of Enterococci Input from Bathers and Animals on a Recreational Beach Using Camera Images
Wang, John D; Solo-Gabriele, Helena M; Abdelzaher, Amir M; Fleming, Lora E
2010-01-01
Enterococci are used nationwide as a water quality indicator for marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We estimated their respective source functions by developing a counting methodology for individuals in order to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of human and animal visitors. The developed method translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for average days of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent the largest source of enterococci relative to humans and birds. PMID:20381094
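A back-of-the-envelope version of the source-function idea above: the daily enterococci load of each visitor category is the camera-derived count times a per-individual contribution. Both the per-individual loads and the daily counts below are placeholders, not the study's measured values.

```python
# Hypothetical per-individual contributions (CFU/day) and camera-derived
# daily visitor counts; all numbers are assumptions for illustration only.
per_individual_cfu = {"human": 6e5, "dog": 2.5e9, "bird": 3e7}
daily_counts = {"human": 120, "dog": 4, "bird": 60}

loads = {k: per_individual_cfu[k] * daily_counts[k] for k in daily_counts}
dominant = max(loads, key=loads.get)   # category with the largest daily load
```

Even with few dog visits, the large per-animal contribution can make dogs the dominant source, consistent with the abstract's conclusion.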
Hydraulic transients: a seismic source in volcanoes and glaciers.
Lawrence, W S; Qamar, A
1979-02-16
A source for certain low-frequency seismic waves is postulated in terms of the water hammer effect. The time-dependent displacement of a water-filled sub-glacial conduit is analyzed to demonstrate the nature of the source. Preliminary energy calculations and the observation of hydraulically generated seismic radiation from a dam indicate the plausibility of the proposed source.
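The water-hammer mechanism invoked above can be quantified in one line with the Joukowsky relation for the pressure surge when flow in a conduit is arrested abruptly; the conduit values below are illustrative, not from the paper's glacier model.

```python
# Joukowsky relation: dP = rho * c * dv, the pressure surge produced by an
# abrupt velocity change in a fluid-filled conduit. Values are illustrative.
rho = 1000.0   # water density, kg/m^3
c = 1400.0     # pressure-wave speed in the water-filled conduit, m/s
dv = 2.0       # abrupt change in flow velocity, m/s

dP = rho * c * dv   # pressure surge, Pa (here 2.8 MPa)
```

Transient pressure pulses of this size acting on conduit walls are a plausible driver of the low-frequency seismic radiation described in the abstract.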
Finke, Stefan; Gulrajani, Ramesh M; Gotman, Jean; Savard, Pierre
2013-01-01
The non-invasive localization of the primary sensory hand area can be achieved by solving the inverse problem of electroencephalography (EEG) for N(20)-P(20) somatosensory evoked potentials (SEPs). This study compares two different mathematical approaches for the computation of transfer matrices used to solve the EEG inverse problem. Forward transfer matrices relating dipole sources to scalp potentials are determined via conventional and reciprocal approaches using individual, realistically shaped head models. The reciprocal approach entails calculating the electric field at the dipole position when scalp electrodes are reciprocally energized with unit current; scalp potentials are obtained from the scalar product of this electric field and the dipole moment. Median nerve stimulation is performed on three healthy subjects and single-dipole inverse solutions for the N(20)-P(20) SEPs are then obtained by simplex minimization and validated against the primary sensory hand area identified on magnetic resonance images. Solutions are presented for different time points, filtering strategies, boundary-element method discretizations, and skull conductivity values. Both approaches produce similarly small position errors for the N(20)-P(20) SEP. Position error for single-dipole inverse solutions is inherently robust to inaccuracies in forward transfer matrices but dependent on the overlapping activity of other neural sources. Significantly smaller time and storage requirements are the principal advantages of the reciprocal approach. Reduced computational requirements and similar dipole position accuracy support the use of reciprocal approaches over conventional approaches for N(20)-P(20) SEP source localization.
Development and application of a reactive plume-in-grid model: evaluation over Greater Paris
NASA Astrophysics Data System (ADS)
Korsakissok, I.; Mallet, V.
2010-09-01
Emissions from major point sources are poorly represented by classical Eulerian models. An overestimation of the horizontal plume dilution, a poor representation of vertical diffusion, and an incorrect estimate of the chemical reaction rates are the main limitations of such models in the vicinity of major point sources. The plume-in-grid method is a multiscale modeling technique that couples a local-scale Gaussian puff model with an Eulerian model in order to better represent these emissions. We present the plume-in-grid model developed in the air quality modeling system Polyphemus, with full gaseous chemistry. The model is evaluated on the metropolitan Île-de-France region, during six months (summer 2001). The subgrid-scale treatment is used for 89 major point sources, a selection based on the emission rates of NOx and SO2. Results with and without the subgrid treatment of point emissions are compared, and their performance is assessed by comparison with observations at measurement stations. A sensitivity study is also carried out, on several local-scale parameters as well as on the vertical diffusion within the urban area. Primary pollutants are shown to be the most impacted by the plume-in-grid treatment. SO2 is the most impacted pollutant, since the point sources account for an important part of the total SO2 emissions, whereas NOx emissions are mostly due to traffic. The spatial impact of the subgrid treatment is localized in the vicinity of the sources, especially for reactive species (NOx and O3). Ozone is mostly sensitive to the time step between two puff emissions, which influences the in-plume chemical reactions, whereas the almost-passive species SO2 is more sensitive to the injection time, which determines the duration of the subgrid-scale treatment. Future developments include an extension to handle aerosol chemistry, and an application to the modeling of line sources in order to use the subgrid treatment with road emissions.
The latter is expected to lead to more striking results, due to the importance of traffic emissions for the pollutants of interest.
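The local-scale building block of a plume-in-grid scheme can be sketched as a single Gaussian puff whose sigmas grow as it is advected; the function below is generic and does not reproduce Polyphemus's actual dispersion parameterizations.

```python
import numpy as np

# Generic 3-D Gaussian puff: concentration at (x, y, z) from a puff of mass q
# centered at (xc, yc, zc) with dispersion sigmas (sx, sy, sz). The sigma
# values would normally come from a dispersion parameterization.
def puff_concentration(x, y, z, xc, yc, zc, q, sx, sy, sz):
    norm = q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)
    return norm * np.exp(-0.5 * (((x - xc) / sx) ** 2
                                 + ((y - yc) / sy) ** 2
                                 + ((z - zc) / sz) ** 2))
```

A plume-in-grid model carries many such puffs at the local scale and injects their mass into the Eulerian grid once the puff size reaches the grid resolution.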
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng
2016-05-01
In a sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative process; then, the corresponding equivalent source strengths of the source of interest are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further confirms the effectiveness of the proposed method.
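The algebra underlying the separation can be sketched in a bare-bones linear model: measured pressures are a superposition of equivalent-source contributions, and recovering the source strengths lets either field be reconstructed alone. Here ordinary least squares stands in for the paper's iterative time-marching, and the transfer matrix is random purely to illustrate the structure.

```python
import numpy as np

# p = G @ q: microphone pressures as a superposition of equivalent-source
# contributions. G is random here, standing in for the physical transfer
# matrix built from source/microphone geometry.
rng = np.random.default_rng(2)
n_mics, n_src = 40, 10
G = rng.normal(size=(n_mics, n_src))   # transfer matrix: sources -> microphones
q_true = rng.normal(size=n_src)        # equivalent source strengths
p = G @ q_true                         # mixed pressure at the microphones

q = np.linalg.lstsq(G, p, rcond=None)[0]
p_source1 = G[:, :5] @ q[:5]           # pressure field of source 1 alone
```

In the actual method the strengths are solved time step by time step, so the separated fields are recovered in both time and space.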
Dependence of the source performance on plasma parameters at the BATMAN test facility
NASA Astrophysics Data System (ADS)
Wimmer, C.; Fantz, U.
2015-04-01
The investigation of the dependence of the source performance (high j_H-, low j_e) for optimum Cs conditions on the plasma parameters at the BATMAN (Bavarian Test MAchine for Negative hydrogen ions) test facility is desirable in order to find key parameters for the operation of the source as well as to deepen the physical understanding. The most relevant source physics takes place in the extended boundary layer, which is the plasma layer with a thickness of several cm in front of the plasma grid: the production of H-, its transport through the plasma and its extraction, inevitably accompanied by the co-extraction of electrons. Hence, a link of the source performance with the plasma parameters in the extended boundary layer is expected. In order to characterize electron and negative hydrogen ion fluxes in the extended boundary layer, Cavity Ring-Down Spectroscopy and Langmuir probes have been applied for the measurement of the H- density and the determination of the plasma density, the plasma potential and the electron temperature, respectively. The plasma potential is of particular importance as it determines the sheath potential profile at the plasma grid: depending on the plasma grid bias relative to the plasma potential, a transition in the plasma sheath from an electron repelling to an electron attracting sheath takes place, strongly influencing the electron fraction of the bias current and thus the amount of co-extracted electrons. Dependencies of the source performance on the determined plasma parameters are presented for the comparison of two source pressures (0.6 Pa, 0.45 Pa) in hydrogen operation. The higher source pressure of 0.6 Pa is a standard point of operation at BATMAN with external magnets, whereas the lower pressure of 0.45 Pa is closer to the ITER requirements (p ≤ 0.3 Pa).
NASA Astrophysics Data System (ADS)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids, which is determined by sensor pairs other than the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
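An illustrative stand-in for the pairwise construction behind the VFOM: each sensor pair, with its measured arrival-time difference, defines a hyperbola, and the source is sought where a smooth objective summed over all pairs is minimal. A coarse grid search replaces the paper's optimizer, and the sensor geometry, wave speed, and source position are made up.

```python
import numpy as np
from itertools import combinations

# 2-D TDOA toy: noiseless arrival times from a known source, then a grid
# search for the point minimizing the summed pairwise hyperbola residuals.
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
v = 5000.0                                      # wave speed, m/s (assumed)
src = np.array([36.0, 58.0])                    # true source, for the demo
t = np.linalg.norm(sensors - src, axis=1) / v   # noiseless arrival times

def objective(p):
    d = np.linalg.norm(sensors - p, axis=1)     # distances to all sensors
    return sum((d[i] - d[j] - v * (t[i] - t[j])) ** 2
               for i, j in combinations(range(len(sensors)), 2))

xs = np.linspace(0.0, 100.0, 101)
best = min(((objective(np.array([x, y])), x, y) for x in xs for y in xs))
```

Because the objective stays smooth and aggregates all pairs, a large picking error in one arrival degrades it gracefully instead of invalidating the whole residual system.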
Generalized Fluid System Simulation Program, Version 6.0
NASA Technical Reports Server (NTRS)
Majumdar, A. K.; LeClair, A. C.; Moore, A.; Schallhorn, P. A.
2013-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors and external body forces such as gravity and centrifugal force. The thermo-fluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids, and 24 different resistance/source options are provided for modeling momentum sources or sinks in the branches. This Technical Memorandum illustrates the application and verification of the code through 25 demonstrated example problems.
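The node-branch discretization described above can be sketched with a toy steady, incompressible network using linear branch conductances (flow = conductance × pressure drop); GFSSP itself handles far more physics (real fluids, energy and species equations), and the numbers here are illustrative.

```python
import numpy as np

# Network: boundary node 0 (100 kPa) -a- node 1 -b- node 2 -c- boundary node 3
# (0 kPa). Mass conservation at the internal nodes gives their pressures.
ga, gb, gc = 2.0, 1.0, 2.0           # branch conductances (assumed)
p0, p3 = 100.0, 0.0                  # boundary pressures, kPa

# Mass balance at nodes 1 and 2: sum of branch flows into each node = 0.
A = np.array([[ga + gb, -gb],
              [-gb, gb + gc]])
b = np.array([ga * p0, gc * p3])
p1, p2 = np.linalg.solve(A, b)
flow = ga * (p0 - p1)                # identical through every series branch
```

Real networks add nonlinear resistances, energy equations, and fluid properties at the nodes, so the solve becomes an iterative Newton-type loop rather than one linear system.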
Generalized Fluid System Simulation Program, Version 5.0-Educational
NASA Technical Reports Server (NTRS)
Majumdar, A. K.
2011-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors and external body forces such as gravity and centrifugal force. The thermofluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the point, drag and click method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids and 21 different resistance/source options are provided for modeling momentum sources or sinks in the branches. This Technical Memorandum illustrates the application and verification of the code through 12 demonstrated example problems.
AGN neutrino flux estimates for a realistic hybrid model
NASA Astrophysics Data System (ADS)
Richter, S.; Spanier, F.
2018-07-01
Recent reports of possible correlations between high energy neutrinos observed by IceCube and Active Galactic Nuclei (AGN) activity sparked a burst of publications attempting to predict the neutrino flux of these sources. However, rather crude estimates are often used to derive the neutrino rate from the observed photon spectra. In this work, neutrino fluxes were computed over a wide parameter space. The starting point of the model was a representation of the full spectral energy density (SED) of 3C 279. The time-dependent hybrid model used for this study takes into account the full pγ reaction chain as well as proton synchrotron radiation, electron-positron pair cascades and the full SSC scheme. We compare our results to estimates frequently used in the literature. This makes it possible to identify regions of the parameter space in which such estimates are still valid and those in which they can produce significant errors. Furthermore, if estimates for the Doppler factor, magnetic field, and proton and electron densities of a source exist, the expected IceCube detection rate is readily available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novascone, Stephen Rhead; Peterson, John William
This report documents the progress of simulating pore migration in ceramic (UO2 and mixed oxide, or MOX) fuel using BISON. The porosity field is treated as a function of space and time whose evolution is governed by a custom convection-diffusion-reaction equation (described here) which is coupled to the heat transfer equation via the temperature field. The porosity is initialized to a constant value at every point in the domain, and as the temperature (and its gradient) are increased by application of a heat source, the pores move up the thermal gradient and accumulate at the center of the fuel in a time-frame that is consistent with observations from experiments. There is an inverse dependence of the fuel's thermal conductivity on porosity (increasing porosity decreases thermal conductivity, and vice-versa) which is also accounted for, allowing the porosity equation to couple back into the heat transfer equation. Results from an example simulation are shown to demonstrate the new capability.
An Autonomous Distributed Fault-Tolerant Local Positioning System
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2017-01-01
We describe a fault-tolerant, GPS-independent (Global Positioning System) distributed autonomous positioning system for static/mobile objects and present solutions for providing highly accurate geo-location data for the static/mobile objects in dynamic environments. The reliability and accuracy of a positioning system fundamentally depend on two factors: its timeliness in broadcasting signals and the knowledge of its geometry, i.e., the locations of and distances between the beacons. Existing distributed positioning systems either synchronize to a common external source like GPS or establish their own time synchrony using a master-slave scheme, designating a particular beacon as the master to which the other beacons synchronize, resulting in a single point of failure. Another drawback of existing positioning systems is their lack of addressing various fault manifestations, in particular communication link failures, which increasingly dominate process failures in wireless networks and are typically transient and mobile, in the sense that they affect different messages to/from different processes over time.
Centroid Position as a Function of Total Counts in a Windowed CMOS Image of a Point Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wurtz, R E; Olivier, S; Riot, V
2010-05-27
We obtained 960,200 22-by-22-pixel windowed images of a pinhole spot using the Teledyne H2RG CMOS detector with un-cooled SIDECAR readout. We performed an analysis to determine the precision we might expect in the position error signals to a telescope's guider system. We find that, under non-optimized operating conditions, the error in the computed centroid is strongly dependent on the total counts in the point image only below a certain threshold, approximately 50,000 photo-electrons. The LSST guider camera specification currently requires a 0.04 arcsecond error at 10 Hertz. Given the performance measured here, this specification can be delivered with a single star at 14th to 18th magnitude, depending on the passband.
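The measurement above can be mimicked with a Monte-Carlo sketch: centroid a Poisson-noisy point-spread function in a 22-by-22-pixel window and watch the centroid scatter shrink as the total counts grow. The Gaussian PSF width, sub-pixel center, and count levels are illustrative, not the H2RG data.

```python
import numpy as np

# Poisson-noise-limited centroiding in a 22x22 window with a Gaussian PSF;
# PSF parameters and count levels are illustrative.
rng = np.random.default_rng(3)
n = 22
yy, xx = np.mgrid[0:n, 0:n]
psf = np.exp(-0.5 * (((xx - 10.3) / 2.0) ** 2 + ((yy - 11.1) / 2.0) ** 2))
psf /= psf.sum()

def centroid_scatter(total_counts, trials=200):
    """Std of the x-centroid over repeated noisy realizations."""
    cx = [(xx * img).sum() / img.sum()
          for img in (rng.poisson(total_counts * psf) for _ in range(trials))]
    return float(np.std(cx))

s_lo = centroid_scatter(5e3)   # fewer photo-electrons: noisier centroid
s_hi = centroid_scatter(5e5)   # more photo-electrons: much tighter centroid
```

In the photon-noise limit the centroid error scales roughly as 1/sqrt(N); in the real detector, read noise and fixed-pattern effects set the floor the abstract's ~50,000 photo-electron threshold reflects.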
NASA Astrophysics Data System (ADS)
Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin
2018-03-01
Dynamic optimisation problems with characteristic times arise widely in many areas and are a research frontier of dynamic optimisation. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, for which a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points is presented. The formula for the state at the terminal time of each subdomain is derived, which results in a linear combination of the state at the LG points in the subdomains so as to avoid the complex nonlinear integral. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic time dynamic optimisation problems are solved and compared in detail with the methods reported in the literature. The results show the effectiveness of the proposed method.
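The key device, expressing the state at the end of a subdomain as a linear combination of quantities at the Legendre-Gauss points, can be illustrated in stripped-down form: for x'(t) = f(t) on [0, 1], Gauss-Legendre quadrature gives the terminal state without an explicit integral. The dynamics f below is a toy example, not one of the paper's benchmark problems.

```python
import numpy as np

# Map the LG nodes and weights from [-1, 1] to [0, 1], then form the terminal
# state as a weighted linear combination of the dynamics at the LG points.
nodes, weights = np.polynomial.legendre.leggauss(8)
tau = 0.5 * (nodes + 1.0)    # LG points mapped to [0, 1]
w = 0.5 * weights            # correspondingly scaled quadrature weights

f = np.cos                   # toy dynamics: x'(t) = cos(t), so x(1) = x(0) + sin(1)
x0 = 0.0
x1 = x0 + np.sum(w * f(tau))
```

In the full method the state itself is unknown at the collocation points, so this linear-combination formula couples into the nonlinear program rather than being evaluated directly.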
40 CFR 461.33 - New source performance standards (NSPS).
Code of Federal Regulations, 2014 CFR
2014-07-01
... GUIDELINES AND STANDARDS (CONTINUED) BATTERY MANUFACTURING POINT SOURCE CATEGORY Lead Subcategory § 461.33... of 7.5 to 10.0 at all times. (4) Subpart C—Battery Wash (Detergent)—NSPS. Pollutant or pollutant... Maximum for any 1 day Maximum for monthly average Metric units—mg/kg of lead in trucked batteries English...
40 CFR 461.33 - New source performance standards (NSPS).
Code of Federal Regulations, 2013 CFR
2013-07-01
... GUIDELINES AND STANDARDS (CONTINUED) BATTERY MANUFACTURING POINT SOURCE CATEGORY Lead Subcategory § 461.33... of 7.5 to 10.0 at all times. (4) Subpart C—Battery Wash (Detergent)—NSPS. Pollutant or pollutant... Maximum for any 1 day Maximum for monthly average Metric units—mg/kg of lead in trucked batteries English...
40 CFR 461.33 - New source performance standards (NSPS).
Code of Federal Regulations, 2012 CFR
2012-07-01
... GUIDELINES AND STANDARDS (CONTINUED) BATTERY MANUFACTURING POINT SOURCE CATEGORY Lead Subcategory § 461.33... of 7.5 to 10.0 at all times. (4) Subpart C—Battery Wash (Detergent)—NSPS. Pollutant or pollutant... Maximum for any 1 day Maximum for monthly average Metric units—mg/kg of lead in trucked batteries English...
Eslinger, Paul W; Bowyer, Ted W; Cameron, Ian M; Hayes, James C; Miley, Harry S
2015-10-01
The radionuclide network of the International Monitoring System comprises up to 80 stations around the world that have aerosol and xenon monitoring systems designed to detect releases of radioactive materials to the atmosphere from nuclear explosions. A rule of thumb description of plume concentration and duration versus time and distance from the release point is useful when designing and deploying new sample collection systems. This paper uses plume development from atmospheric transport modeling to provide a power-law rule describing atmospheric dilution factors as a function of distance from the release point. Consider the plume center-line concentration seen by a ground-level sampler as a function of time based on a short-duration ground-level release of a nondepositing radioactive tracer. The concentration C (Bq m^(-3)) near the ground varies with distance from the source with the relationship C = R × A(D,C) × e^(-λ(-1.552+0.0405×D)) × 5.37×10^(-8) × D^(-2.35), where R is the release magnitude (Bq), D is the separation distance (km) from the ground level release to the measurement location, λ is the decay constant (h^(-1)) for the radionuclide of interest and A(D,C) is an attenuation factor that depends on the length of the sample collection period. This relationship is based on the median concentration for 10 release locations with different geographic characteristics and 365 days of releases at each location, and it has an R^2 of 0.99 for 32 distances from 100 to 3000 km. In addition, 90 percent of the modeled plumes fall within approximately one order of magnitude of this curve for all distances. Copyright © 2015 Elsevier Ltd. All rights reserved.
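The rule-of-thumb relationship can be evaluated directly; the attenuation factor is set to 1 here for simplicity, and the 1 PBq Xe-133 example numbers are illustrative.

```python
import math

# Ground-level concentration C (Bq/m^3) versus release magnitude R (Bq),
# distance D (km) and decay constant lam (1/h), per the power-law rule above.
# The collection-period attenuation factor A is set to 1 for simplicity.
def plume_concentration(R, D, lam, A=1.0):
    return R * A * math.exp(-lam * (-1.552 + 0.0405 * D)) * 5.37e-8 * D ** -2.35

# Illustrative example: a 1 PBq release observed 1000 km downwind, for Xe-133
# (half-life about 5.25 days).
lam_xe133 = math.log(2.0) / (5.25 * 24.0)   # decay constant, 1/h
c = plume_concentration(R=1e15, D=1000.0, lam=lam_xe133)
```

A quick evaluation like this is how the rule would be used when sizing the sensitivity of a new sample collection system.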
Three-dimensional Simulations of Pure Deflagration Models for Thermonuclear Supernovae
NASA Astrophysics Data System (ADS)
Long, Min; Jordan, George C., IV; van Rossum, Daniel R.; Diemer, Benedikt; Graziani, Carlo; Kessler, Richard; Meyer, Bradley; Rich, Paul; Lamb, Don Q.
2014-07-01
We present a systematic study of the pure deflagration model of Type Ia supernovae (SNe Ia) using three-dimensional, high-resolution, full-star hydrodynamical simulations, nucleosynthetic yields calculated using Lagrangian tracer particles, and light curves calculated using radiation transport. We evaluate the simulations by comparing their predicted light curves with many observed SNe Ia using the SALT2 data-driven model and find that the simulations may correspond to under-luminous SNe Iax. We explore the effects of the initial conditions on our results by varying the number of randomly selected ignition points from 63 to 3500, and the radius of the centered sphere they are confined in from 128 to 384 km. We find that the rate of nuclear burning depends on the number of ignition points at early times, the density of ignition points at intermediate times, and the radius of the confining sphere at late times. The results depend primarily on the number of ignition points, but we do not expect this to be the case in general. The simulations with few ignition points release more nuclear energy E_nuc, have larger kinetic energies E_K, and produce more 56Ni than those with many ignition points, and differ in the distribution of 56Ni, Si, and C/O in the ejecta. For these reasons, the simulations with few ignition points exhibit higher peak B-band absolute magnitudes M_B and light curves that rise and decline more quickly; their M_B and light curves resemble those of under-luminous SNe Iax, while those of the simulations with many ignition points do not.
Akinci, A.; Galadini, F.; Pantosti, D.; Petersen, M.; Malagnini, L.; Perkins, D.
2009-01-01
We produce probabilistic seismic-hazard assessments for the central Apennines, Italy, using time-dependent models that are characterized by a Brownian Passage Time recurrence model. Using aperiodicity parameters α of 0.3, 0.5, and 0.7, we examine the sensitivity of the probabilistic ground motion and its deaggregation to these parameters. For the seismic source model we incorporate both smoothed historical seismicity over the area and geological information on faults. We use the maximum magnitude model for the fault sources together with a uniform probability of rupture along the fault (floating fault model), and we model fictitious faults to account for earthquakes that cannot be correlated with known geologic structural segmentation.
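The Brownian Passage Time recurrence model referenced above is the inverse-Gaussian density parameterized by a mean recurrence interval and the aperiodicity α. As a hedged illustration of how such a model yields time-dependent hazard, the sketch below implements the standard BPT density and a conditional rupture probability via numerical integration; the parameter values in the example are hypothetical and do not come from the paper's fault catalog.

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time (inverse-Gaussian) recurrence density with
    mean recurrence interval mu and aperiodicity alpha."""
    if t <= 0:
        return 0.0
    coef = math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3))
    return coef * math.exp(-(t - mu)**2 / (2.0 * alpha**2 * mu * t))

def conditional_probability(elapsed, window, mu, alpha, n=10000):
    """P(rupture within `window` years | no rupture in the `elapsed` years
    since the last event), by trapezoidal integration of the density."""
    def integral(a, b):
        h = (b - a) / n
        s = 0.5 * (bpt_pdf(a, mu, alpha) + bpt_pdf(b, mu, alpha))
        s += sum(bpt_pdf(a + i * h, mu, alpha) for i in range(1, n))
        return s * h
    survival = 1.0 - integral(0.0, elapsed)   # probability of no event so far
    return integral(elapsed, elapsed + window) / survival

# Hypothetical fault: mean recurrence 1000 yr, alpha = 0.5,
# 700 yr elapsed since the last rupture, 50 yr exposure window
p = conditional_probability(700.0, 50.0, 1000.0, 0.5)
```

Varying alpha over the paper's values (0.3, 0.5, 0.7) in this sketch is one way to see the sensitivity the authors examine: smaller alpha makes recurrence more clock-like and sharpens the time dependence of the conditional probability.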
Analytical solutions describing the time-dependent DNAPL source-zone mass and contaminant discharge rate are used as a flux-boundary condition in a semi-analytical contaminant transport model. These analytical solutions assume a power relationship between the flow-averaged sourc...
EPA Office of Water (OW): SDWIS - HUC12 Densities for Public Surface Water and Groundwater Sources
Public Water System location points, based on information from the Safe Drinking Water Act Information System (SDWIS/Federal) for a 2010 third quarter (SDWIS_2010Q3) baseline period, were applied to relate system latitude and longitude coordinates (LatLongs) to Watershed Boundary Dataset subwatershed polygons (HUC12s). This HUC12 table can be mapped through setting up appropriate table relationships on the attribute HUC_12 with the HUC12 GIS layer that is part of EPA's Reach Address Database (RAD) Version 3. At the present time, the RAD Version 3 contains HUC12 polygons for the conterminous United States (CONUS), Hawaii, Puerto Rico, and the U.S. Virgin Islands (materials for Alaska or for other territories and dependencies are not available as of February 2010). The records in this table are based on a special QUERY created by the EPA Office of Ground Water and Drinking Water (OGWDW) from the primary SDWIS/FED information to provide a robust point representation for a PWS system. PWS points are selected based on the following prioritization:
1. If the system has a treatment plant with LatLongs and MAD codes;
2. If the system has a treatment plant with LatLongs but without MAD codes;
3. If the system has a well with LatLongs and MAD codes;
4. If the system has a well with LatLongs but without MAD codes;
5. If the system has an intake with LatLongs and MAD codes;
6. If the system has an intake with LatLongs but without MAD codes;
7. If the system has any source
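The prioritization above is a straightforward ordered fallback, and can be sketched as follows. The field names (`facility_type`, `has_latlong`, `has_mad`) are illustrative placeholders, not actual SDWIS/FED schema attributes.

```python
# Ordered (facility type, MAD codes required) pairs, mirroring priorities 1-6;
# priority 7 (any located source) is the final fallback.
PRIORITY = [
    ("treatment plant", True),
    ("treatment plant", False),
    ("well", True),
    ("well", False),
    ("intake", True),
    ("intake", False),
]

def select_pws_point(facilities):
    """Return the highest-priority located facility for a PWS, or None.

    `facilities` is a list of dicts with keys 'facility_type',
    'has_latlong', and 'has_mad' (hypothetical field names).
    """
    for ftype, need_mad in PRIORITY:
        for f in facilities:
            if (f["facility_type"] == ftype and f["has_latlong"]
                    and f["has_mad"] == need_mad):
                return f
    # Priority 7: fall back to any source with coordinates
    for f in facilities:
        if f["has_latlong"]:
            return f
    return None
```

For example, a system with both a located well (with MAD codes) and a located treatment plant (without MAD codes) would be represented by the treatment plant, since priority 2 outranks priority 3.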
Contaminant transport from point source on water surface in open channel flow with bed absorption
NASA Astrophysics Data System (ADS)
Guo, Jinlan; Wu, Xudong; Jiang, Weiquan; Chen, Guoqian
2018-06-01
Studying solute dispersion in channel flows is of significance for environmental and industrial applications. The two-dimensional concentration distribution for a most typical case, a point-source release on the free water surface of a channel flow with bed absorption, is presented by means of Chatwin's long-time asymptotic technique. The five basic characteristics of Taylor dispersion and the vertical mean concentration distribution with skewness and kurtosis modifications are also analyzed. The results reveal that bed absorption affects both the longitudinal and vertical concentration distributions and causes the contaminant cloud to concentrate in the upper layer. Additionally, the cross-sectional concentration distribution becomes asymptotically Gaussian at large time and is unaffected by the bed absorption. The vertical concentration distribution is found to be nonuniform even at large time. The obtained results are essential for practical applications subject to strict environmental standards.
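The large-time Gaussian behavior noted above is the leading-order result of Taylor dispersion: the cross-sectionally averaged concentration of an instantaneous release advects with the mean flow and spreads with an effective longitudinal dispersivity. The sketch below shows only that leading-order Gaussian; the skewness and kurtosis corrections and the absorption-modified coefficients derived in the paper are not reproduced here, so `u_mean` and `d_eff` are assumed inputs.

```python
import math

def gaussian_mean_concentration(x, t, mass_per_area, u_mean, d_eff):
    """Leading-order (large-time) Gaussian approximation to the
    cross-sectionally averaged concentration after an instantaneous
    release in a channel flow.

    u_mean: mean advection speed; d_eff: effective longitudinal
    dispersivity. For the absorbing-bed case both must come from the
    paper's asymptotic analysis; here they are free parameters.
    """
    sigma2 = 2.0 * d_eff * t                     # longitudinal variance
    return (mass_per_area / math.sqrt(2.0 * math.pi * sigma2)
            * math.exp(-(x - u_mean * t) ** 2 / (2.0 * sigma2)))
```

The profile peaks at the advected center x = u_mean × t and is symmetric about it; the skewness modification analyzed in the paper is precisely the first correction to this symmetry.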