Sample records for source time function

  1. Microseismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-07-01

At the heart of microseismic event measurement is the task of estimating the locations of microseismic events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. Conventional microseismic source-location methods often require manual picking of traveltime arrivals, which not only demands human effort and interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic, picking-free process that utilizes the full wavefield. However, FWI of microseismic events faces severe nonlinearity due to the unknown source locations (space) and source functions (time). We developed a source-function-independent FWI of microseismic events to invert for the source image, the source function and the velocity model. It is based on convolving reference traces with the observed and modelled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source-image, source-function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model, and angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, which are based on the Marmousi model and the SEG/EAGE overthrust model.
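
    A minimal numpy sketch of the convolution trick described above (function and variable names are my own assumptions, not the authors' code): convolving each observed trace with a modelled reference trace, and each modelled trace with the observed reference trace, gives two wavefields carrying the same effective wavelet, so their difference is insensitive to the unknown source time function.

    ```python
    import numpy as np

    def conv_misfit(d_obs, d_syn, ref_idx, dt):
        """Source-function-independent misfit: convolve each observed trace
        with a synthetic reference trace, and each synthetic trace with the
        observed reference trace, so the unknown source wavelet cancels.
        d_obs, d_syn: (n_traces, n_samples) arrays; ref_idx: reference trace."""
        r_obs, r_syn = d_obs[ref_idx], d_syn[ref_idx]
        resid = 0.0
        for i in range(d_obs.shape[0]):
            a = np.convolve(d_obs[i], r_syn)   # observed * synthetic reference
            b = np.convolve(d_syn[i], r_obs)   # synthetic * observed reference
            resid += np.sum((a - b) ** 2) * dt
        return 0.5 * resid
    ```

    The paper's adjoint-state gradients, trace windowing and normalization are omitted; this only illustrates the wavelet-cancelling objective.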

  2. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

The Bayesian Infrasonic Source Localization method (BISL), introduced by Modrak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for accurate estimation of the origin of atmospheric events at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-to-station infrasonic signal propagation time, the picking time and the azimuth estimate, merged with prior knowledge of the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions, multiplied by a prior probability density function of celerity, over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals can be evaluated exactly and expressed via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, implemented as PYTHON-FORTRAN code, demonstrates high performance on a set of synthetic and real data.
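
    For orientation, here is a brute-force numerical version of the kind of posterior BISL evaluates; the paper's contribution is replacing exactly this integration with closed-form expressions. All station geometry, picks, error models and grids below are invented for illustration, and the origin time is treated as known for brevity.

    ```python
    import numpy as np

    # Invented geometry and picks: 3 stations (km), observed travel times (s)
    stations = np.array([[0.0, 40.0], [60.0, 10.0], [30.0, -50.0]])
    t_obs = np.array([180.0, 149.0, 137.0])
    sigma_t = 5.0                               # Gaussian picking error (s)
    celerities = np.linspace(0.25, 0.35, 21)    # km/s grid for celerity prior
    prior_c = np.ones_like(celerities) / celerities.size

    x = np.linspace(-100.0, 100.0, 201)
    X, Y = np.meshgrid(x, x)
    post = np.zeros_like(X)
    for c, pc in zip(celerities, prior_c):      # numerical celerity integral
        loglik = np.zeros_like(X)
        for (sx, sy), t in zip(stations, t_obs):
            r = np.hypot(X - sx, Y - sy)
            loglik += -0.5 * ((t - r / c) / sigma_t) ** 2
        post += pc * np.exp(loglik)
    post /= post.sum()

    iy, ix = np.unravel_index(post.argmax(), post.shape)
    print("MAP location (km):", x[ix], x[iy])
    ```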

  4. Source pulse enhancement by deconvolution of an empirical Green's function.

    USGS Publications Warehouse

    Mueller, Charles S.

    1985-01-01

Observations of the earthquake source-time function are enhanced if path, recording-site, and instrument complexities can be removed from seismograms. Assuming that a small earthquake has a simple source, its seismogram can be treated as an empirical Green's function and deconvolved from the seismogram of a larger and/or more complex earthquake by spectral division. When the deconvolution is well posed, the quotient spectrum represents the apparent source-time function of the larger event. This study shows that with high-quality, locally recorded earthquake data it is feasible to Fourier transform the quotient and obtain a useful result in the time domain. In practice, the deconvolution can be stabilized by one of several simple techniques. An application of the method is given.
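
    One of the simple stabilization techniques alluded to above is a water-level scheme; the sketch below is my own minimal implementation, not Mueller's, and clips small denominator amplitudes before dividing.

    ```python
    import numpy as np

    def egf_deconvolve(big, small, water=0.01):
        """Apparent source time function of a large event by spectral
        division with an empirical Green's function (small event).
        Water-level stabilization: denominator amplitudes below
        `water` * max are clipped upward, keeping the phase."""
        n = len(big) + len(small)               # zero-padded FFT length
        B = np.fft.rfft(big, n)
        S = np.fft.rfft(small, n)
        amp = np.abs(S)
        floor = water * amp.max()
        S_stab = np.where(amp < floor, floor * np.exp(1j * np.angle(S)), S)
        return np.fft.irfft(B / S_stab, n)      # apparent source time function
    ```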

  5. Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling

    NASA Astrophysics Data System (ADS)

    Dȩbski, Wojciech

    2008-07-01

Many aspects of earthquake source dynamics, like dynamic stress drop, rupture velocity and directivity, are currently inferred from source time functions obtained by deconvolving propagation and recording effects from seismograms. The question of the accuracy of the obtained results remains open. In this paper we address this issue by considering two aspects of source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes the imposed physical constraints into account. Such a parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Second, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function, so statistical estimators of the a posteriori errors can be obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of the mining-induced seismic event of magnitude ML ≈ 3.1 that occurred at the Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green function technique to approximate Green functions. The obtained solutions seem to suggest some complexity of the rupture process, with double pulses of energy release. However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.
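
    A toy Metropolis sampler conveys the idea of sampling the a posteriori density of an STF. This is a simplified sketch with a nonnegative nodal parameterization of my own rather than the paper's pseudo-spectral one; the Green's function, noise level and step sizes are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def metropolis_stf(d_obs, green, n_par=20, n_iter=20000, sigma=0.02, step=0.1):
        """Metropolis sampling of a nonnegative source time function.
        The STF is parameterized by n_par nodal log-amplitudes (positivity
        is automatic); the forward model is convolution with a known
        Green's function; the likelihood is Gaussian with noise std sigma."""
        n_t = len(d_obs)
        nodes = np.linspace(0, n_t - 1, n_par)
        def forward(logamp):
            stf = np.interp(np.arange(n_t), nodes, np.exp(logamp))
            return np.convolve(green, stf)[:n_t]
        m = np.full(n_par, -2.0)
        logp = -0.5 * np.sum((d_obs - forward(m)) ** 2) / sigma ** 2
        samples = []
        for it in range(n_iter):
            m_try = m + step * rng.standard_normal(n_par)
            logp_try = -0.5 * np.sum((d_obs - forward(m_try)) ** 2) / sigma ** 2
            if np.log(rng.random()) < logp_try - logp:   # Metropolis accept
                m, logp = m_try, logp_try
            if it % 20 == 0:
                samples.append(np.exp(m))
        return np.array(samples)   # posterior samples of STF node amplitudes
    ```

    Percentiles of the returned samples give pointwise uncertainty bands on the STF, which is the kind of error analysis used to test the double-pulse hypothesis above.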

  6. An efficient algorithm for the retarded time equation for noise from rotating sources

    NASA Astrophysics Data System (ADS)

    Loiodice, S.; Drikakis, D.; Kokkalis, A.

    2018-01-01

This study concerns the modelling of noise emanating from rotating sources such as helicopter rotors. We present an accurate and efficient algorithm for the solution of the retarded time equation, which can be used in both subsonic and supersonic flow regimes. A novel approach to finding the roots of the retarded time function was developed, based on the kinematics of rotating sources and on a bifurcation analysis of the retarded time function. It is shown that the proposed algorithm is faster than the classical Newton and Brent methods, especially in the presence of sources rotating supersonically.
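
    For context, the classical baseline the authors compare against is Newton iteration on the retarded time equation tau - t + |x_obs - x_src(tau)|/c = 0. A minimal sketch follows (geometry and parameters invented); note that for supersonic rotation this equation can have multiple roots, which is precisely where the paper's bifurcation-based search matters.

    ```python
    import numpy as np

    def retarded_time(t, x_obs, src_pos, src_vel, c=340.0, tol=1e-10, itmax=50):
        """Newton iteration for the emission time tau solving
        g(tau) = tau - t + |x_obs - x_src(tau)| / c = 0, with
        g'(tau) = 1 - (d . v) / (R c), d = x_obs - x_src, R = |d|.
        Finds one root from the starting guess t - R/c."""
        tau = t - np.linalg.norm(x_obs - src_pos(t)) / c
        for _ in range(itmax):
            d = x_obs - src_pos(tau)
            R = np.linalg.norm(d)
            g = tau - t + R / c
            dg = 1.0 - np.dot(d, src_vel(tau)) / (R * c)
            tau_new = tau - g / dg
            if abs(tau_new - tau) < tol:
                return tau_new
            tau = tau_new
        return tau

    # Example: a point rotating at radius 1 m, 40 Hz (subsonic tip speed)
    omega = 2 * np.pi * 40.0
    pos = lambda tau: np.array([np.cos(omega * tau), np.sin(omega * tau), 0.0])
    vel = lambda tau: omega * np.array([-np.sin(omega * tau), np.cos(omega * tau), 0.0])
    print(retarded_time(0.1, np.array([10.0, 0.0, 0.0]), pos, vel))
    ```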

  7. Inverse source problems in elastodynamics

    NASA Astrophysics Data System (ADS)

    Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao

    2018-04-01

We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. A stability estimate for the temporal function from the data of one receiver, and a uniqueness result using partial boundary data, are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are presented in both two and three dimensions.
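
    The Landweber iteration mentioned above has a generic form that is easy to sketch for any discretized linear source problem A m = d. The operator below is an invented smoothing kernel, not the elastodynamic one from the paper.

    ```python
    import numpy as np

    def landweber(A, d, n_iter=500, omega=None):
        """Landweber iteration m <- m + omega * A^T (d - A m).
        Converges for 0 < omega < 2 / ||A||_2^2; early stopping
        acts as regularization for noisy data."""
        if omega is None:
            omega = 1.0 / np.linalg.norm(A, 2) ** 2
        m = np.zeros(A.shape[1])
        for _ in range(n_iter):
            m += omega * A.T @ (d - A @ m)
        return m

    # Tiny smoothing-kernel demo (assumed setup, not the paper's operator)
    n = 100
    x = np.linspace(0, 1, n)
    A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.002) / n
    m_true = np.exp(-((x - 0.4) ** 2) / 0.001)
    d = A @ m_true + 1e-4 * np.random.default_rng(1).standard_normal(n)
    print(np.linalg.norm(landweber(A, d) - m_true) / np.linalg.norm(m_true))
    ```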

  8. Influence of the noise sources motion on the estimated Green's functions from ambient noise cross-correlations.

    PubMed

    Sabra, Karim G

    2010-06-01

It has been demonstrated theoretically and experimentally that an estimate of the Green's function between two receivers can be obtained by cross-correlating acoustic (or elastic) ambient noise recorded at these two receivers. Coherent wavefronts emerge from the noise cross-correlation time function due to the accumulated contributions over time from noise sources whose propagation paths pass through both receivers. Previous theoretical studies of the performance of this passive imaging technique have assumed that no relative motion between noise sources and receivers occurs. In this article, the influence of noise source motion (e.g., aircraft or ship) on this passive imaging technique was investigated theoretically in free space, using a stationary phase approximation, for stationary receivers. The theoretical results were extended to more complex environments, in the high-frequency regime, using first-order expansions of the Green's function. Although source motion typically degrades the performance of wideband coherent processing schemes, such as time-delay beamforming, it was found that the Green's function estimated from ambient noise cross-correlations is not expected to be significantly affected by the Doppler effect, even for supersonic sources. Numerical Monte Carlo simulations were conducted to confirm these theoretical predictions for both subsonic and supersonic moving sources.
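
    A bare-bones version of the cross-correlation stacking that underlies this retrieval (shapes and names are mine; whitening, one-bit normalization and other standard preprocessing are omitted):

    ```python
    import numpy as np

    def noise_gf_estimate(u1, u2, n_seg, dt):
        """Stack of segment-wise cross-correlations of ambient noise at two
        receivers; the stack converges towards a band-limited version of the
        inter-receiver Green's function (up to amplitude and a time derivative)."""
        seg = len(u1) // n_seg
        nfft = 2 * seg
        stack = np.zeros(nfft)
        for k in range(n_seg):
            A = np.fft.rfft(u1[k * seg:(k + 1) * seg], nfft)
            B = np.fft.rfft(u2[k * seg:(k + 1) * seg], nfft)
            stack += np.fft.irfft(A * np.conj(B), nfft)   # one segment's CC
        lags = np.arange(-nfft // 2, nfft // 2) * dt
        return lags, np.fft.fftshift(stack) / n_seg
    ```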

  9. Analysis and attenuation of artifacts caused by spatially and temporally correlated noise sources in Green's function estimates

    NASA Astrophysics Data System (ADS)

    Martin, E. R.; Dou, S.; Lindsey, N.; Chang, J. P.; Biondi, B. C.; Ajo Franklin, J. B.; Wagner, A. M.; Bjella, K.; Daley, T. M.; Freifeld, B. M.; Robertson, M.; Ulrich, C.; Williams, E. F.

    2016-12-01

Localized strong sources of noise in an array have been shown to cause artifacts in Green's function estimates obtained via cross-correlation; their effect is often reduced through the use of cross-coherence. Beyond independent localized sources, temporally or spatially correlated noise sources frequently occur in practice but violate basic assumptions of much of the theory behind ambient noise Green's function retrieval. Such correlated noise sources can occur in urban environments due to transportation infrastructure, or in areas around industrial operations like pumps running at CO2 sequestration sites or oil and gas drilling sites. Better understanding of these artifacts should help us develop and justify methods for their automatic removal from Green's function estimates. We derive the expected artifacts in cross-correlations for several distributions of correlated noise sources, including point sources that are exact time-lagged repeats of each other, and sources Gaussian-distributed in space and time with exponentially decaying covariance. Assuming the noise distribution stays stationary over time, the artifacts become more coherent as more ambient noise is included in the Green's function estimates. We support our results with simple computational models. We observed these artifacts in Green's function estimates from a 2015 ambient noise study in Fairbanks, AK, where a trenched distributed acoustic sensing (DAS) array was deployed to collect ambient noise alongside a road, with the goal of developing a permafrost-thaw monitoring system. We found that joints in the road repeatedly being hit by cars travelling at roughly the speed limit led to artifacts similar to those expected when several points are time-lagged copies of each other. We also show test results of attenuating the effects of these sources during time-lapse monitoring of an active thaw test at the same location, with noise detected by a 2D trenched DAS array.

  10. Rupture Complexities of Fluid Induced Microseismic Events at the Basel EGS Project

    NASA Astrophysics Data System (ADS)

    Folesky, Jonas; Kummerow, Jörn; Shapiro, Serge A.; Häring, Markus; Asanuma, Hiroshi

    2016-04-01

Microseismic data sets of excellent quality, such as the seismicity recorded in the Basel-1 enhanced geothermal system, Switzerland, in 2006-2007, provide the opportunity to analyse induced seismic events in great detail. It is important to understand to what extent seismological insights on, e.g., source and rupture processes are scale dependent and how they can be transferred to fluid-induced micro-seismicity. We applied the empirical Green's function (EGF) method in order to reconstruct the relative source time functions of 195 suitable microseismic events from the Basel-1 reservoir. We found 93 solutions with a clear and consistent directivity pattern. The remaining events display either no measurable directivity, are unfavourably oriented, or exhibit inconsistent or complex relative source time functions. In this work we focus on selected events of M ≈ 1 which show possible rupture complexities. It is demonstrated that the EGF method makes it possible to resolve complex rupture behaviour even if it is not directly identifiable in the seismograms. We find clear evidence of rupture directivity and multi-phase rupturing in the analysed relative source time functions. The time delays between consecutive subevents lie on the order of 10 ms. Amplitudes of the relative source time functions of the subevents do not always show the same azimuthal dependence, indicating dissimilarity in the rupture directivity of the subevents. Our observations support the assumption that heterogeneity on fault surfaces persists down to small scales (a few tens of metres).

  11. An automated multi-scale network-based scheme for detection and location of seismic sources

    NASA Astrophysics Data System (ADS)

    Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.

    2017-12-01

We present a recently developed method - BackTrackBB (Poiata et al. 2016) - that images energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale, frequency-selective coherence in the wave field recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids; this imaging function is interpreted as the location likelihood of the seismic source (see the sketch after this abstract). A signal pre-processing step constructs a multi-band statistical representation of the nonstationary time series by means of higher-order statistics or energy-envelope characteristic functions. Such signal processing is designed to detect signal transients - of different scales and a priori unknown predominant frequency - potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremors), and to improve the performance and robustness of the detection-and-location step. The initial detection and location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme - exploiting the 3-component records - makes use of P- and S-phase characteristic functions, extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the year-long precursory phase of the 2014 Iquique earthquake in Chile; (2) detection and location of tectonic tremor sources and low-frequency earthquakes during multiple episodes of tectonic tremor activity in southwestern Japan.
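
    A much-simplified sketch of such a station-pair imaging function (array shapes, the cross-correlation-based likelihood, and the lag convention are my assumptions, not the BackTrackBB implementation):

    ```python
    import numpy as np

    def imaging_function(cf, tt, pairs, dt):
        """Stacked station-pair imaging function (simplified sketch).
        cf: (n_sta, n_t) characteristic functions of continuous records;
        tt: (n_sta, n_grid) theoretical travel times to each grid node;
        pairs: list of (i, j) station index pairs. For each pair, the
        normalized cross-correlation of the two characteristic functions
        is evaluated at the theoretical differential time of every grid
        node, and the results are stacked into a location likelihood."""
        n_sta, n_t = cf.shape
        img = np.zeros(tt.shape[1])
        for i, j in pairs:
            cc = np.correlate(cf[i], cf[j], mode='full')  # lags -(n_t-1)..n_t-1
            cc /= np.linalg.norm(cf[i]) * np.linalg.norm(cf[j]) + 1e-30
            lag_idx = np.rint((tt[i] - tt[j]) / dt).astype(int) + (n_t - 1)
            img += cc[np.clip(lag_idx, 0, 2 * n_t - 2)]
        return img / len(pairs)
    ```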

  12. MEG/EEG Source Reconstruction, Statistical Evaluation, and Visualization with NUTMEG

    PubMed Central

    Dalal, Sarang S.; Zumer, Johanna M.; Guggisberg, Adrian G.; Trumpis, Michael; Wong, Daniel D. E.; Sekihara, Kensuke; Nagarajan, Srikantan S.

    2011-01-01

    NUTMEG is a source analysis toolbox geared towards cognitive neuroscience researchers using MEG and EEG, including intracranial recordings. Evoked and unaveraged data can be imported to the toolbox for source analysis in either the time or time-frequency domains. NUTMEG offers several variants of adaptive beamformers, probabilistic reconstruction algorithms, as well as minimum-norm techniques to generate functional maps of spatiotemporal neural source activity. Lead fields can be calculated from single and overlapping sphere head models or imported from other software. Group averages and statistics can be calculated as well. In addition to data analysis tools, NUTMEG provides a unique and intuitive graphical interface for visualization of results. Source analyses can be superimposed onto a structural MRI or headshape to provide a convenient visual correspondence to anatomy. These results can also be navigated interactively, with the spatial maps and source time series or spectrogram linked accordingly. Animations can be generated to view the evolution of neural activity over time. NUTMEG can also display brain renderings and perform spatial normalization of functional maps using SPM's engine. As a MATLAB package, the end user may easily link with other toolboxes or add customized functions. PMID:21437174

  14. Coherency of seismic noise, Green functions and site effects

    NASA Astrophysics Data System (ADS)

    Prieto, G. A.; Beroza, G. C.

    2007-12-01

The newly rediscovered methodology of cross-correlating seismic noise (or seismic coda) to retrieve the Green function takes advantage of the coherency of the signals across a set of stations. Only coherent signals are expected to emerge after stacking over a long enough time. Cross-correlation has a significant disadvantage for this purpose, in that the Green function recovered is convolved with the source-time function of the noise source. For seismic waves, this can mean that the microseism peak dominates the signal. We show how the use of the transfer function between sensors provides a better-resolved Green function (after inverse Fourier transform), because the deconvolution process removes the effect of the noise source-time function. In addition, we compute the coherence of the seismic noise as a function of frequency and distance, providing information about the effective frequency band over which Green function retrieval is possible. The coherence may also be used in resolution analysis for time reversal, as a constraint on the de-coherence length (the distance between sensors over which the signals become uncorrelated). We use the information from the transfer function and the coherence to examine wave propagation effects (attenuation and site effects) for closely spaced stations compared to a reference station.
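
    These quantities map directly onto standard spectral estimators; a sketch with scipy follows. The synthetic records, the 0.5 coherence threshold and the csd conjugation convention are my assumptions, not the authors' processing.

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    fs, n = 100.0, 200_000
    noise = rng.standard_normal(n)                         # common noise source
    rec_a = np.convolve(noise, np.hanning(21), 'same')     # station A
    rec_b = np.roll(rec_a, 150) + 0.5 * rng.standard_normal(n)  # delayed + local noise

    f, Cab = signal.coherence(rec_a, rec_b, fs=fs, nperseg=4096)
    f, Pab = signal.csd(rec_a, rec_b, fs=fs, nperseg=4096)
    f, Paa = signal.welch(rec_a, fs=fs, nperseg=4096)
    H = Pab / Paa        # transfer function A -> B; its inverse FFT is the
                         # deconvolved (source-free) inter-station response
    usable = f[Cab > 0.5]   # band where Green function retrieval looks feasible
    print(usable.min(), usable.max())
    ```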

  15. A phase coherence approach to estimating the spatial extent of earthquakes

    NASA Astrophysics Data System (ADS)

    Hawthorne, Jessica C.; Ampuero, Jean-Paul

    2016-04-01

We present a new method for estimating the spatial extent of seismic sources. The approach takes advantage of an inter-station phase coherence computation that can identify co-located sources (Hawthorne and Ampuero, 2014). Here, however, we note that the phase coherence calculation can eliminate the Green's function and give high values only if both earthquakes are point sources, that is, if their dimensions are much smaller than the wavelengths of the propagating seismic waves. By examining the decrease in coherence at higher frequencies (shorter wavelengths), we can estimate the spatial extents of the earthquake ruptures. The approach can to some extent be seen as a simple way of identifying directivity or variations in the apparent source time functions recorded at various stations. We apply this method to a set of well-recorded earthquakes near Parkfield, CA. We show that when the signal-to-noise ratio is high, the phase coherence remains high well above 50 Hz for closely spaced M < 1.5 earthquakes. The high-frequency phase coherence is smaller for larger earthquakes, suggesting larger spatial extents. The implied radii scale roughly as expected from typical magnitude-corner frequency scalings. We also examine a second source of high-frequency decoherence: spatial variation in the shape of the Green's functions. This spatial decoherence appears to occur at wavelengths similar to the decoherence associated with the apparent source time functions. However, the variation in Green's functions can be normalized away to some extent by comparing observations at multiple components of a single station, which see the same apparent source time functions.
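
    A minimal sketch of the inter-station phase-coherence statistic described above (array layout and normalization are my assumptions, not the authors' exact definition):

    ```python
    import numpy as np

    def interstation_phase_coherence(eq1, eq2, nfft=512):
        """Simplified phase coherence of the Hawthorne & Ampuero type.
        eq1, eq2: (n_sta, n_samples) aligned records of two nearby events.
        At each station, the cross-spectrum phase removes the shared path
        (Green's function); averaging the unit phasors across stations
        gives values near 1 at frequencies where both events still act
        as point sources, and lower values where finite extent matters."""
        X1 = np.fft.rfft(eq1, nfft, axis=1)
        X2 = np.fft.rfft(eq2, nfft, axis=1)
        cross = X1 * np.conj(X2)
        phasors = cross / (np.abs(cross) + 1e-30)  # unit phasors per station
        return np.abs(phasors.mean(axis=0))        # coherence vs frequency
    ```

    The frequency at which this curve drops then bounds the rupture dimension through the wavelength of the propagating waves.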

  16. The fast neutron fluence and the activation detector activity calculations using the effective source method and the adjoint function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hep, J.; Konecna, A.; Krysl, V.

    2011-07-01

This paper describes the application of an effective source in forward calculations, and of the adjoint method, to the solution of fast neutron fluence and activation detector activities in the reactor pressure vessel (RPV) and RPV cavity of a VVER-440 reactor. Its objective is the demonstration of both methods on a practical task. The effective source method applies the Boltzmann transport operator to time-integrated source data in order to obtain neutron fluence and detector activities. By weighting the source data by the time-dependent decay of the detector activity, the result of the calculation is the detector activity. Alternatively, if the weighting is uniform with respect to time, the result is the fluence. The approach works because of the inherent linearity of radiation transport in non-multiplying, time-invariant media. Integrated in this way, the source data are referred to as the effective source. The effective source in forward calculations thereby enables the analyst to replace numerous intensive transport calculations with a single transport calculation in which the time dependence and magnitude of the source are correctly represented. In this work, the effective source method has been expanded slightly in the following way: the neutron source data were obtained with a few-group calculation using the active core calculation code MOBY-DICK, and the follow-up multigroup neutron transport calculation was performed using the transport code TORT. For comparison, an alternative method of calculation has been used, based upon adjoint functions of the Boltzmann transport equation. The three-dimensional (3-D) adjoint function for each required computational outcome has been calculated using the deterministic code TORT and the cross-section library BGL440. Adjoint functions appropriate to the required fast neutron flux density and neutron reaction rates have been calculated for several significant points within the RPV and RPV cavity of the VVER-440 reactor, located axially at the position of maximum power and at the position of the weld. Both of these methods (the effective source and the adjoint function) are briefly described in the present paper, together with their application to the solution of fast neutron fluence and detector activities for the VVER-440 reactor.

  17. Fully probabilistic earthquake source inversion on teleseismic scales

    NASA Astrophysics Data System (ADS)

    Stähler, Simon; Sigloch, Karin

    2017-04-01

Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing, as prior knowledge, solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than the more commonly used ℓp norms, which measure misfit sample by sample via distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on the signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.

  18. Stress drop variation of M > 4 earthquakes on the Blanco oceanic transform fault using a phase coherence method

    NASA Astrophysics Data System (ADS)

    Williams, J. R.; Hawthorne, J.; Rost, S.; Wright, T. J.

    2017-12-01

Earthquakes on oceanic transform faults often show unusual behaviour. They tend to occur in swarms, have large numbers of foreshocks, and have high stress drops. We estimate stress drops for approximately 60 M > 4 earthquakes along the Blanco oceanic transform fault, a right-lateral fault separating the Juan de Fuca and Pacific plates offshore of Oregon. We find stress drops with a median of 4.4 ± 19.3 MPa and examine how they vary with earthquake moment. We calculate stress drops using a recently developed method based on inter-station phase coherence. We compare seismic records of co-located earthquakes at a range of stations. At each station, we apply an empirical Green's function (eGf) approach to remove phase path effects and isolate the relative apparent source time functions. The apparent source time functions of each earthquake should vary among stations at periods shorter than a P wave's travel time across the earthquake rupture area. Therefore, we compute the rupture length of the larger earthquake by identifying the frequency at which the relative apparent source time functions start to vary among stations, leading to low inter-station phase coherence. We determine a stress drop from the rupture length and moment of the larger earthquake. Our initial stress drop estimates increase with increasing moment, suggesting that earthquakes on the Blanco fault are not self-similar. However, these stress drops may be biased by several factors, including depth phases, trace alignment, and source co-location. We find that the inclusion of depth phases (such as pP) in the analysis time window has a negligible effect on the phase coherence of our relative apparent source time functions. We find that trace alignment must be accurate to within 0.05 s to allow us to identify variations in the apparent source time functions at periods relevant for M > 4 earthquakes. We check that the alignments are accurate enough by comparing P wave arrival times across groups of earthquakes. Finally, we note that the eGf path-effect removal will be unsuccessful if earthquakes are too far apart. We therefore calculate relative earthquake locations from our estimated differential P wave arrival times, then examine how our stress drop estimates vary with inter-earthquake distance.

  19. The effect of a hot, spherical scattering cloud on quasi-periodic oscillation behavior

    NASA Astrophysics Data System (ADS)

    Bussard, R. W.; Weisskopf, M. C.; Elsner, R. F.; Shibazaki, N.

    1988-04-01

    A Monte Carlo technique is used to investigate the effects of a hot electron scattering cloud surrounding a time-dependent X-ray source. Results are presented for the time-averaged emergent energy spectra and the mean residence time in the cloud as a function of energy. Moreover, after Fourier transforming the scattering Green's function, it is shown how the cloud affects both the observed power spectrum of a time-dependent source and the cross spectrum (Fourier transform of a cross correlation between energy bands). It is found that the power spectra intrinsic to the source are related to those observed by a relatively simple frequency-dependent multiplicative factor (a transmission function). The cloud can severely attenuate high frequencies in the power spectra, depending on optical depth, and, at lower frequencies, the transmission function has roughly a Lorentzian shape. It is also found that if the intrinsic energy spectrum is constant in time, the phase of the cross spectrum is determined entirely by scattering. Finally, the implications of the results for studies of the X-ray quasi-periodic oscillators are discussed.

  20. Temporal evolution of the Green's function reconstruction in the seismic coda

    NASA Astrophysics Data System (ADS)

    Clerc, V.; Roux, P.; Campillo, M.

    2013-12-01

In the presence of multiple scattering, the wavefield evolves towards an equipartitioned state, equivalent to ambient noise. Campillo and Paul (2003) reconstructed the surface-wave part of the Green's function between three pairs of stations in Mexico. The data indicate that the time asymmetry between the causal and acausal parts of the Green's function is less pronounced when the correlation is performed in the later windows of the coda. These results on the correlation of diffuse waves provide another perspective on the reconstruction of the Green's function, one which is independent of the source distribution and which suggests that, if the time of observation is long enough, a single source could be sufficient. The paper by Roux et al. (2005) provides a theoretical frame for the reconstruction of the Green's function in a homogeneous medium. In a multiple-scattering medium with a single source, scatterers behave as secondary sources according to the Huygens principle. Coda waves are relevant to multiple scattering, a regime which can be approximated by diffusion for long lapse times. We express the temporal evolution of the correlation function between two receivers as a function of the secondary sources. We are able to predict the effect of the persistence of the net flux of energy observed by Campillo and Paul (2003) in numerical simulations. This method is also effective for retrieving the scattering mean free path. We perform a partial reconstruction of the Green's function in a strongly scattering medium in numerical simulations. The prediction of the flux asymmetry allows us to define the parts of the coda providing the same information as ambient noise cross-correlation.

  1. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  2. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  3. Analytical solutions to compartmental indoor air quality models with application to environmental tobacco smoke concentrations measured in a house.

    PubMed

    Ott, Wayne R; Klepeis, Neil E; Switzer, Paul

    2003-08-01

This paper derives the analytical solutions to multi-compartment indoor air quality models for predicting indoor air pollutant concentrations in the home, and evaluates the solutions using experimental measurements in the rooms of a single-story residence. The model uses Laplace transform methods to solve the mass balance equations for two interconnected compartments, obtaining analytical solutions that can be applied without a computer. Environmental tobacco smoke (ETS) sources such as cigarettes typically emit pollutants for relatively short times (7-11 min) and are represented mathematically by a "rectangular" source emission time function, or approximated by a short-duration source called an "impulse" time function. Other time-varying indoor sources can also be represented by Laplace transforms. The two-compartment model is more complicated than the single-compartment model and has more parameters, including the cigarette or combustion source emission rate as a function of time, room volumes, compartmental air change rates, and interzonal air flow factors expressed as dimensionless ratios. This paper provides analytical solutions for the impulse, step (Heaviside), and rectangular source emission time functions. It evaluates the indoor model in an unoccupied two-bedroom home using cigars and cigarettes as sources, with continuous measurements of carbon monoxide (CO), respirable suspended particles (RSP), and particulate polycyclic aromatic hydrocarbons (PPAH). Fine particle mass concentrations (RSP or PM3.5) are measured using real-time monitors. In our experiments, simultaneous measurements of concentrations at three heights in a bedroom confirm an important assumption of the model: spatial uniformity of mixing. The parameter values of the two-compartment model were obtained using a "grid search" optimization method, and the predicted solutions agreed well with the measured concentration time series in the rooms of the home. The door and window positions in each room had a considerable effect on the pollutant concentrations observed in the home. Because of the small volumes and low air change rates of most homes, indoor pollutant concentrations from smoking activity in a home can be very high and can persist at measurable levels indoors for many hours.
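
    The single-compartment special case makes the structure of these solutions concrete. The sketch below uses illustrative parameter values of my own; the paper's two-compartment solutions couple two such mass balances through interzonal flow factors.

    ```python
    import numpy as np

    def single_zone_concentration(t, E=0.02, T=9.0 * 60, V=30.0, a=0.5 / 3600):
        """Concentration from a 'rectangular' source in one well-mixed zone:
        V dC/dt = E - a V C while the source is on (0 <= t < T), pure decay
        afterwards. E: emission rate (mg/s), T: emission duration (s),
        V: room volume (m^3), a: air change rate (1/s). Values are invented."""
        t = np.asarray(t, dtype=float)
        c_on = E / (a * V) * (1.0 - np.exp(-a * t))           # during emission
        c_end = E / (a * V) * (1.0 - np.exp(-a * T))          # level at t = T
        c_off = c_end * np.exp(-a * (t - T))                  # exponential decay
        return np.where(t < T, c_on, c_off)

    hours = np.linspace(0, 4, 241) * 3600
    print("peak concentration (mg/m^3):", single_zone_concentration(hours).max())
    ```

    The slow decay constant a illustrates the paper's closing point: at typical residential air change rates, concentrations persist for many hours after the source stops.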

  4. An empirical method for estimating travel times for wet volcanic mass flows

    USGS Publications Warehouse

    Pierson, Thomas C.

    1998-01-01

    Travel times for wet volcanic mass flows (debris avalanches and lahars) can be forecast as a function of distance from source when the approximate flow rate (peak discharge near the source) can be estimated beforehand. The near-source flow rate is primarily a function of initial flow volume, which should be possible to estimate to an order of magnitude on the basis of geologic, geomorphic, and hydrologic factors at a particular volcano. Least-squares best fits to plots of flow-front travel time as a function of distance from source provide predictive second-degree polynomial equations with high coefficients of determination for four broad size classes of flow based on near-source flow rate: extremely large flows (>1 000 000 m3/s), very large flows (10 000–1 000 000 m3/s), large flows (1000–10 000 m3/s), and moderate flows (100–1000 m3/s). A strong nonlinear correlation that exists between initial total flow volume and flow rate for "instantaneously" generated debris flows can be used to estimate near-source flow rates in advance. Differences in geomorphic controlling factors among different flows in the data sets have relatively little effect on the strong nonlinear correlations between travel time and distance from source. Differences in flow type may be important, especially for extremely large flows, but this could not be evaluated here. At a given distance away from a volcano, travel times can vary by approximately an order of magnitude depending on flow rate. The method can provide emergency-management officials a means for estimating time windows for evacuation of communities located in hazard zones downstream from potentially hazardous volcanoes.
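
    As a worked illustration of the form of these predictive equations, one can fit a second-degree polynomial of travel time versus distance for a single flow-size class. The data pairs below are invented placeholders, not Pierson's data.

    ```python
    import numpy as np

    # Hypothetical (distance km, flow-front travel time min) pairs for one class;
    # the method fits t = a*x^2 + b*x + c per flow-size class by least squares.
    x = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 60.0, 80.0])
    t = np.array([4.0, 9.0, 21.0, 35.0, 51.0, 89.0, 133.0])

    coef = np.polyfit(x, t, 2)      # least-squares quadratic fit
    predict = np.poly1d(coef)
    print("predicted travel time to 50 km:", predict(50.0), "min")
    ```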

  5. BLUES function method in computational physics

    NASA Astrophysics Data System (ADS)

    Indekeu, Joseph O.; Müller-Nedebock, Kristian K.

    2018-04-01

    We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.

  6. A Comparison between Predicted and Observed Atmospheric States and their Effects on Infrasonic Source Time Function Inversion at Source Physics Experiment 6

    NASA Astrophysics Data System (ADS)

    Aur, K. A.; Poppeliers, C.; Preston, L. A.

    2017-12-01

    The Source Physics Experiment (SPE) consists of a series of underground chemical explosions at the Nevada National Security Site (NNSS) designed to gain an improved understanding of the generation and propagation of physical signals in the near and far field. Characterizing the acoustic and infrasound source mechanism from underground explosions is of great importance to underground explosion monitoring. To this end we perform full waveform source inversion of infrasound data collected from the SPE-6 experiment at distances from 300 m to 6 km and frequencies up to 20 Hz. Our method requires estimating the state of the atmosphere at the time of each experiment, computing Green's functions through these atmospheric models, and subsequently inverting the observed data in the frequency domain to obtain a source time function. To estimate the state of the atmosphere at the time of the experiment, we utilize the Weather Research and Forecasting - Data Assimilation (WRF-DA) modeling system to derive a unified atmospheric state model by combining Global Energy and Water Cycle Experiment (GEWEX) Continental-scale International Project (GCIP) data and locally obtained sonde and surface weather observations collected at the time of the experiment. We synthesize Green's functions through these atmospheric models using Sandia's moving media acoustic propagation simulation suite (TDAAPS). These models include 3-D variations in topography, temperature, pressure, and wind. We compare inversion results using the atmospheric models derived from the unified weather models versus previous modeling results and discuss how these differences affect computed source waveforms with respect to observed waveforms at various distances. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
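
    The core of such a full-waveform source estimate can be written as a per-frequency least-squares division; the damped sketch below is my own, not Sandia's implementation, and assumes Green's functions already computed through the atmospheric model.

    ```python
    import numpy as np

    def invert_stf(d_obs, greens, eps=1e-3):
        """Frequency-domain least-squares source time function estimate:
        s(w) = sum_r conj(G_r) d_r / (sum_r |G_r|^2 + damping), given
        observed records d_obs (n_rec, n_t) and Green's functions
        greens (n_rec, n_t) from source to each receiver."""
        n = d_obs.shape[1]
        D = np.fft.rfft(d_obs, axis=1)
        G = np.fft.rfft(greens, axis=1)
        num = np.sum(np.conj(G) * D, axis=0)
        den = np.sum(np.abs(G) ** 2, axis=0) + eps * np.max(np.abs(G) ** 2)
        return np.fft.irfft(num / den, n)   # estimated source time function
    ```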

  7. Improved surface-wave retrieval from ambient seismic noise by multi-dimensional deconvolution

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; Ruigrok, Elmer; van der Neut, Joost; Draganov, Deyan

    2011-01-01

The methodology of surface-wave retrieval from ambient seismic noise by cross-correlation relies on the assumption that the noise field is equipartitioned. Deviations from equipartitioning degrade the accuracy of the retrieved surface-wave Green's function. A point-spread function, derived from the same ambient-noise field, quantifies the smearing in space and time of the virtual source of the Green's function. By multidimensionally deconvolving the retrieved Green's function by the point-spread function, the virtual source becomes better focused in space and time, and hence the accuracy of the retrieved surface-wave Green's function may improve significantly. We illustrate this with a numerical example and discuss the advantages and limitations of this new methodology.

  8. A Study of Regional Wave Source Time Functions of Central Asian Earthquakes

    NASA Astrophysics Data System (ADS)

    Xie, J.; Perry, M. R.; Schult, F. R.; Wood, J.

    2014-12-01

Despite the extensive use of seismic regional waves in seismic event identification and attenuation tomography, very little is known about how seismic sources radiate energy into these waves. For example, whether the regional Lg wave has the same source spectrum as the local S wave was questioned by Haar et al. and Frankel et al. three decades ago; many current investigators assume that the source spectra in Lg, Sn, Pg, Pn and Lg coda waves have either the same or very similar corner frequencies, in contrast to local P and S spectra, whose corner frequencies differ. The most complete information on how finite source ruptures radiate energy into regional waves is contained in the time-domain source time functions (STFs). To estimate the STFs of regional waves using the empirical Green's function (EGF) method, we have been substantially modifying a semi-automatic computer procedure to cope with the increasingly diverse and inconsistent naming patterns of new data files from the IRIS DMC. We are applying the modified procedure to many earthquakes in central Asia to study the STFs of various regional waves, to see whether they have the same durations and pulse shapes, and how frequently source directivity occurs. When applicable, we also examine the differences between the STFs of local P and S waves and those of regional waves. The results of these analyses will be presented at the meeting.

  9. Studies of the Intrinsic Complexities of Magnetotail Ion Distributions: Theory and Observations

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, Maha

    1998-01-01

    This year we have studied the relationship between the structure seen in measured distribution functions and the detailed magnetospheric configuration. Results from our recent studies using time-dependent large-scale kinetic (LSK) calculations are used to infer the sources of the ions in the velocity distribution functions measured by a single spacecraft (Geotail). Our results strongly indicate that the different ion sources and acceleration mechanisms producing a measured distribution function can explain this structure. Moreover, individual structures within distribution functions were traced back to single sources. We also confirmed the fractal nature of ion distributions.

  10. Transient difference solutions of the inhomogeneous wave equation - Simulation of the Green's function

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.

    1983-01-01

A time-dependent finite difference formulation of the inhomogeneous wave equation is derived for plane wave propagation with harmonic noise sources. The difference equation and boundary conditions are developed, along with the techniques to simulate the Dirac delta function associated with a concentrated noise source. Example calculations are presented for the Green's function and distributed noise sources. For the example considered, the desired Fourier-transformed acoustic pressures are determined from the transient pressures by use of a ramping function and an integration technique, both of which eliminate the nonharmonic pressure associated with the initial transient.

  12. Waveform inversion of volcano-seismic signals for an extended source

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Chouet, B.; Dawson, P.

    2007-01-01

We propose a method to investigate the dimensions and oscillation characteristics of the source of volcano-seismic signals based on waveform inversion for an extended source. An extended source is realized by a set of point sources distributed on a grid surrounding the centroid of the source in accordance with the source geometry and orientation. The source-time functions for all point sources are estimated simultaneously by waveform inversion carried out in the frequency domain. We apply a smoothing constraint to suppress short-scale noisy fluctuations of source-time functions between adjacent sources. The strength of the smoothing constraint we select is that which minimizes the Akaike Bayesian Information Criterion (ABIC). We perform a series of numerical tests to investigate the capability of our method to recover the dimensions of the source and reconstruct its oscillation characteristics. First, we use synthesized waveforms radiated by a kinematic source model that mimics the radiation from an oscillating crack. Our results demonstrate almost complete recovery of the input source dimensions and of the source-time function of each point source, but also point to a weaker resolution of the higher modes of crack oscillation. Second, we use synthetic waveforms generated by the acoustic resonance of a fluid-filled crack, and consider two sets of waveforms dominated by the modes with wavelengths 2L/3 and 2W/3, or L and 2L/5, where W and L are the crack width and length, respectively. Results from these tests indicate that the oscillating signatures of the 2L/3 and 2W/3 modes are successfully reconstructed. The oscillating signature of the L mode is also well recovered, in contrast to results obtained for a point source, for which the moment tensor description is inadequate. However, the oscillating signature of the 2L/5 mode is poorly recovered owing to weaker resolution of short-scale crack wall motions. The triggering excitations of the oscillating cracks are successfully reconstructed.

  13. Sound source identification and sound radiation modeling in a moving medium using the time-domain equivalent source method.

    PubMed

    Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang

    2015-05-01

Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to the wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In this method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved for iteratively by utilizing the measured pressure and the known convective time-domain Green's function, with time averaging used to reduce the instability of the iterative solving process. Since the solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.

  14. Interference of Photons from a Weak Laser and a Quantum Dot

    NASA Astrophysics Data System (ADS)

    Ritchie, David; Bennett, Anthony; Patel, Raj; Nicoll, Christine; Shields, Andrew

    2010-03-01

We demonstrate two-photon interference from two unsynchronized sources operating via different physical processes [1]. One source is spontaneous emission from the X^- state of an electrically driven InAs/GaAs single quantum dot with a μeV linewidth; the other is stimulated emission from a laser with a neV linewidth. We mix the emission from these sources on a balanced non-polarising beam splitter and measure correlations in the photons that exit using Si avalanche photodiodes and a time-correlated counting card. By periodically switching the polarisation state of the weak laser, we simultaneously measure the correlation for parallel and orthogonally polarised sources, corresponding to maximum and minimum degrees of interference. When the two sources have the same intensity, a reduction in the correlation function at time zero for the case of parallel photon sources clearly indicates this interference effect. To quantify the degree of interference, we develop a theory that predicts the correlation function. Theory and experiment are then compared for a range of intensity ratios. Based on this analysis we infer a wave-function overlap of 91%, which is remarkable given the fundamental differences between the two sources. [1] Bennett, A. J., et al., Nature Physics 5, 715-717 (2009).

  15. Low-frequency source parameters of twelve large earthquakes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Harabaglia, Paolo

    1993-01-01

A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low frequencies and in events with a short-term precursor. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method, and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event, and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes. A single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursive event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  16. The generation of gravitational waves. I - Weak-field sources

    NASA Technical Reports Server (NTRS)

    Thorne, K. S.; Kovacs, S. J.

    1975-01-01

    This paper derives and summarizes a 'plug-in-and-grind' formalism for calculating the gravitational waves emitted by any system with weak internal gravitational fields. If the internal fields have negligible influence on the system's motions, the formalism reduces to standard 'linearized theory'. Independent of the effects of gravity on the motions, the formalism reduces to the standard 'quadrupole-moment formalism' if the motions are slow and internal stresses are weak. In the general case, the formalism expresses the radiation in terms of a retarded Green's function for slightly curved spacetime and breaks the Green's function integral into five easily understood pieces: direct radiation, produced directly by the motions of the source; whump radiation, produced by the 'gravitational stresses' of the source; transition radiation, produced by a time-changing time delay ('Shapiro effect') in the propagation of the nonradiative 1/r field of the source; focusing radiation, produced when one portion of the source focuses, in a time-dependent way, the nonradiative field of another portion of the source; and tail radiation, produced by 'back-scatter' of the nonradiative field in regions of focusing.

  17. The generation of gravitational waves. 1. Weak-field sources: A plug-in-and-grind formalism

    NASA Technical Reports Server (NTRS)

    Thorne, K. S.; Kovacs, S. J.

    1974-01-01

    A plug-in-and-grind formalism is derived for calculating the gravitational waves emitted by any system with weak internal gravitational fields. If the internal fields have negligible influence on the system's motions, then the formalism reduces to standard linearized theory. Whether or not gravity affects the motions, if the motions are slow and internal stresses are weak, then the new formalism reduces to the standard quadrupole-moment formalism. In the general case the new formalism expresses the radiation in terms of a retarded Green's function for slightly curved spacetime, and then breaks the Green's-function integral into five easily understood pieces: direct radiation, produced directly by the motions of the source; whump radiation, produced by the gravitational stresses of the source; transition radiation, produced by a time-changing time delay (Shapiro effect) in the propagation of the nonradiative 1/r field of the source; focussing radiation, produced when one portion of the source focusses, in a time-dependent way, the nonradiative field of another portion of the source; and tail radiation, produced by backscatter of the nonradiative field in regions of focussing.

  18. Using spatiotemporal source separation to identify prominent features in multichannel data without sinusoidal filters.

    PubMed

    Cohen, Michael X

    2017-09-27

    The number of simultaneously recorded electrodes in neuroscience is steadily increasing, providing new opportunities for understanding brain function, but also new challenges for appropriately dealing with the increase in dimensionality. Multivariate source separation analysis methods have been particularly effective at improving signal-to-noise ratio while reducing the dimensionality of the data and are widely used for cleaning, classifying and source-localizing multichannel neural time series data. Most source separation methods produce a spatial component (that is, a weighted combination of channels to produce one time series); here, this is extended to apply source separation to a time series, with the idea of obtaining a weighted combination of successive time points, such that the weights are optimized to satisfy some criteria. This is achieved via a two-stage source separation procedure, in which an optimal spatial filter is first constructed and then its optimal temporal basis function is computed. This second stage is achieved with a time-delay-embedding matrix, in which additional rows of a matrix are created from time-delayed versions of existing rows. The optimal spatial and temporal weights can be obtained by solving a generalized eigendecomposition of covariance matrices. The method is demonstrated in simulated data and in an empirical electroencephalogram study on theta-band activity during response conflict. Spatiotemporal source separation has several advantages, including defining empirical filters without the need to apply sinusoidal narrowband filters. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
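
    As a concrete illustration of the two-stage procedure described above, the following sketch performs both generalized eigendecompositions with NumPy/SciPy on synthetic data; the array sizes, time windows, and number of delay embeddings are illustrative assumptions, not the author's settings.

        # Two-stage spatiotemporal source separation via generalized
        # eigendecomposition (GED); synthetic data stand in for EEG.
        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(0)
        n_chan, n_time = 32, 5000
        data = rng.standard_normal((n_chan, n_time))

        # Stage 1: spatial filter. S = covariance of the "signal" window,
        # R = covariance of a reference window.
        S = np.cov(data[:, 2000:4000])
        R = np.cov(data[:, :2000])
        _, evecs = eigh(S, R)                    # generalized eigendecomposition
        w = evecs[:, -1]                         # spatial weights, largest eigenvalue
        component = w @ data                     # one component time series

        # Stage 2: temporal filter via time-delay embedding of the component.
        n_delays = 20
        emb = np.array([np.roll(component, k) for k in range(n_delays)])
        S_t = np.cov(emb[:, 2000:4000])
        R_t = np.cov(emb[:, :2000])
        _, evecs_t = eigh(S_t, R_t)
        h = evecs_t[:, -1]                       # empirical temporal basis function
        filtered = h @ emb                       # spatiotemporally filtered series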

  19. Core Noise Diagnostics of Turbofan Engine Noise Using Correlation and Coherence Functions

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey H.

    2009-01-01

    Cross-correlation and coherence functions are used to look for periodic acoustic components in turbofan engine combustor time histories, to investigate direct and indirect combustion noise source separation based on signal propagation time delays, and to provide information on combustor acoustics. Using the cross-correlation function, time delays were identified in all cases, clearly indicating that the combustor is the source of the noise. In addition, the unfiltered signals and the signals low-pass filtered at 400 Hz had a cross-correlation time delay near 90 ms, while signals low-pass filtered below 400 Hz had a cross-correlation time delay longer than 90 ms. Low-pass filtering at frequencies below 400 Hz partially removes the direct combustion noise signal; the remainder includes the indirect combustion noise signal, which travels more slowly because of its dependence on the entropy convection velocity in the combustor. Source separation of direct and indirect combustion noise is demonstrated by proper use of low-pass filters with the cross-correlation function for a range of operating conditions. The results may lead to a better understanding of combustor acoustics and may help develop and validate improved reduced-order, physics-based methods for predicting direct and indirect combustion noise.
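
    The delay-based separation lends itself to a short numerical check. The sketch below low-pass filters two synthetic signals and reads the propagation delay off the peak of their cross-correlation function; the sample rate, noise level, and the 90 ms delay are stand-ins, not the engine data.

        # Estimate a propagation time delay with the cross-correlation function.
        import numpy as np
        from scipy.signal import butter, filtfilt, correlate, correlation_lags

        fs = 4000.0                              # sample rate in Hz (assumed)
        rng = np.random.default_rng(1)
        combustor = rng.standard_normal(40000)   # combustor pressure history
        shift = int(0.090 * fs)                  # true delay: 90 ms
        farfield = np.roll(combustor, shift) + 0.5 * rng.standard_normal(40000)

        b, a = butter(4, 400.0, btype="low", fs=fs)   # low-pass at 400 Hz
        x, y = filtfilt(b, a, combustor), filtfilt(b, a, farfield)

        cc = correlate(y, x, mode="full")
        lags = correlation_lags(y.size, x.size, mode="full")
        print("estimated delay: %.1f ms" % (1000 * lags[np.argmax(cc)] / fs))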

  20. Acoustic Full Waveform Inversion to Characterize Near-surface Chemical Explosions

    NASA Astrophysics Data System (ADS)

    Kim, K.; Rodgers, A. J.

    2015-12-01

    Recent high-quality atmospheric overpressure data from chemical high-explosive experiments provide a unique opportunity to characterize near-surface explosions, specifically to estimate yield and source time function. Typically, yield is estimated from measured signal features, such as peak pressure, impulse, duration and/or arrival time of acoustic signals. However, the application of full waveform inversion to acoustic signals for yield estimation has not been fully explored. In this study, we apply a full waveform inversion method to local overpressure data to extract accurate pressure-time histories of acoustic sources during chemical explosions. A robust and accurate inversion technique for the acoustic source is investigated using numerical Green's functions that take into account atmospheric and topographic propagation effects. The inverted pressure-time history represents the pressure fluctuation at the source region associated with the explosion and thus provides valuable information about acoustic source mechanisms and characteristics in greater detail. We compare acoustic source properties (i.e., peak overpressure, duration, and non-isotropic shape) of a series of explosions having different emplacement conditions and investigate the relationship of the acoustic sources to the yields of the explosions. The time histories of acoustic sources may refine our knowledge of the sound-generation mechanisms of shallow explosions, and thereby allow for accurate yield estimation based on acoustic measurements. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  1. Studying Regional Wave Source Time Functions Using a Massive Automated EGF Deconvolution Procedure

    NASA Astrophysics Data System (ADS)

    Xie, J.; Schaff, D. P.

    2010-12-01

    Reliably estimated source time functions (STFs) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-offs in attenuation studies. The empirical Green's function (EGF) method can be used to estimate STFs, but it requires a strict recording condition: waveforms from pairs of events that are similar in focal mechanism but different in magnitude must be recorded on-scale at the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the “spikiness” of the deconvolutions by calculating their “sdc”, defined as the peak divided by the background value, where the background value is the mean absolute value of the deconvolution excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green’s functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these we found about 2700 deconvolutions with sdc greater than 9 which, if they have a sufficiently broad frequency band, can be used to estimate the STFs of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer the source scaling using the STFs. We will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real-time event-screening process.
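
    The “sdc” measure is simple enough to state in a few lines of code. Below is a minimal reading of the definition given above (peak over mean absolute background, with 10 s excluded around the source time function); centering the excluded window on the peak is our assumption.

        # Spikiness measure for a deconvolution trace: peak divided by the
        # mean absolute background, excluding 10 s around the peak.
        import numpy as np

        def sdc(decon, dt, exclude_s=10.0):
            ipk = np.argmax(np.abs(decon))
            half = int(0.5 * exclude_s / dt)     # 10 s window centered on the peak
            mask = np.ones(decon.size, dtype=bool)
            mask[max(0, ipk - half):ipk + half + 1] = False
            return np.abs(decon[ipk]) / np.mean(np.abs(decon[mask]))

        # A spiky trace (sdc of roughly 10 or more flags a good EGF pair).
        trace = 0.05 * np.random.default_rng(2).standard_normal(2000)
        trace[1000] = 1.0
        print(sdc(trace, dt=0.1))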

  2. Method and system using power modulation and velocity modulation producing sputtered thin films with sub-angstrom thickness uniformity or custom thickness gradients

    DOEpatents

    Montcalm, Claude [Livermore, CA; Folta, James Allen [Livermore, CA; Walton, Christopher Charles [Berkeley, CA

    2003-12-23

    A method and system for determining a source flux modulation recipe for achieving a selected thickness profile of a film to be deposited (e.g., with highly uniform or highly accurate custom graded thickness) over a flat or curved substrate (such as concave or convex optics) by exposing the substrate to a vapor deposition source operated with a time-varying flux distribution. Preferably, the source is operated with time-varying power applied to it during each sweep of the substrate to achieve the time-varying flux distribution. Preferably, the method includes the steps of measuring the source flux distribution (using a test piece held stationary while exposed to the source at each of a number of different applied power levels), calculating a set of predicted film thickness profiles, each assuming the measured flux distribution and a different one of a set of source flux modulation recipes, and determining from the predicted film thickness profiles a source flux modulation recipe adequate to achieve a predetermined thickness profile. Aspects of the invention include a computer-implemented method employing a graphical user interface to facilitate convenient selection of an optimal or nearly optimal source flux modulation recipe to achieve a desired thickness profile on a substrate. The method enables precise modulation of the deposition flux to which a substrate is exposed, providing a desired coating thickness distribution.

  3. Coherence and phase locking in the scalp EEG and between LORETA model sources, and microstates as putative mechanisms of brain temporo-spatial functional organization.

    PubMed

    Lehmann, Dietrich; Faber, Pascal L; Gianotti, Lorena R R; Kochi, Kieko; Pascual-Marqui, Roberto D

    2006-01-01

    Brain electric mechanisms of temporary, functional binding between brain regions are studied by computing scalp EEG coherence and phase locking, which are sensitive to time differences of a few milliseconds. However, such results are ambiguous when computed from scalp data, since the electric sources are spatially oriented. Non-ambiguous results can be obtained using calculated time series of the strength of intracerebral model sources. This is illustrated by applying LORETA modeling to EEG during resting and meditation. During meditation, time series of LORETA model sources revealed a tendency toward decreased left-right intracerebral coherence in the delta band and increased anterior-posterior intracerebral coherence in the theta band. An alternative conceptualization of functional binding is based on the observation that brain electric activity is discontinuous, i.e., that it occurs in chunks of up to about 100 ms duration that are detectable as quasi-stable scalp field configurations of brain electric activity, called microstates. Their functional significance is illustrated in spontaneous and event-related paradigms, where microstates associated with imagery- versus abstract-type mentation, or with reading positive versus negative emotion words, showed clearly different regions of cortical activation in LORETA tomography. These data support the concept that complete brain functions of higher order, such as a momentary thought, might be incorporated in temporal chunks of processing in the range of tens to about 100 ms as quasi-stable brain states; during these time windows, subprocesses would be accepted as members of the ongoing chunk of processing.

  4. Time-domain comparisons of power law attenuation in causal and noncausal time-fractional wave equations

    PubMed Central

    Zhao, Xiaofeng; McGough, Robert J.

    2016-01-01

    The attenuation of ultrasound propagating in human tissue follows a power law with respect to frequency that is modeled by several different causal and noncausal fractional partial differential equations. To demonstrate some of the similarities and differences that are observed in three related time-fractional partial differential equations, time-domain Green's functions are calculated numerically for the power law wave equation, the Szabo wave equation, and for the Caputo wave equation. These Green's functions are evaluated for water with a power law exponent of y = 2, breast with a power law exponent of y = 1.5, and liver with a power law exponent of y = 1.139. Simulation results show that the noncausal features of the numerically calculated time-domain response are only evident very close to the source and that these causal and noncausal time-domain Green's functions converge to the same result away from the source. When noncausal time-domain Green's functions are convolved with a short pulse, no evidence of noncausal behavior remains in the time-domain, which suggests that these causal and noncausal time-fractional models are equally effective for these numerical calculations. PMID:27250193
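
    A quick numeric consequence of the exponents quoted above, using only the relation that attenuation scales as |f|^y (the prefactors, which the abstract does not give, cancel out of the ratio):

        # Attenuation grows as |f|**y, so it rises by a factor of 10**y
        # over each frequency decade.
        for name, y in [("water", 2.0), ("breast", 1.5), ("liver", 1.139)]:
            print(f"{name}: x{10**y:.1f} per frequency decade")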

  5. Underground coal mining section data

    NASA Technical Reports Server (NTRS)

    Gabrill, C. P.; Urie, J. T.

    1981-01-01

    A set of tables which display the allocation of time for ten personnel and eight pieces of underground coal mining equipment to ten function categories is provided. Data from 125 full-shift time studies contained in the KETRON database were utilized as the primary source data. The KETRON activity and delay codes were mapped onto JPL equipment, personnel and function categories. Computer processing was then performed to aggregate the shift-level data and generate the matrices. Additional documented time-study data were analyzed and used to supplement the KETRON database. The source data, including the number of shifts, are described. Specific parameters of the mines from which these data were extracted are presented. The results of the data processing, including the required JPL matrices, are presented. A brief comparison with a time-study analysis of continuous mining systems is presented. The procedures used for processing the source data are described.

  6. astroplan: An Open Source Observation Planning Package in Python

    NASA Astrophysics Data System (ADS)

    Morris, Brett M.; Tollerud, Erik; Sipőcz, Brigitta; Deil, Christoph; Douglas, Stephanie T.; Berlanga Medina, Jazmin; Vyhmeister, Karl; Smith, Toby R.; Littlefair, Stuart; Price-Whelan, Adrian M.; Gee, Wilfred T.; Jeschke, Eric

    2018-03-01

    We present astroplan—an open source, open development, Astropy affiliated package for ground-based observation planning and scheduling in Python. astroplan is designed to provide efficient access to common observational quantities such as celestial rise, set, and meridian transit times and simple transformations from sky coordinates to altitude-azimuth coordinates without requiring a detailed understanding of astropy’s implementation of coordinate systems. astroplan provides convenience functions to generate common observational plots such as airmass and parallactic angle as a function of time, along with basic sky (finder) charts. Users can determine whether or not a target is observable given a variety of observing constraints, such as airmass limits, time ranges, Moon illumination/separation ranges, and more. A selection of observation schedulers are included that divide observing time among a list of targets, given observing constraints on those targets. Contributions to the source code from the community are welcome.
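
    A short usage sketch in the spirit of the description above; the site, target, dates, and airmass limit are arbitrary examples (target resolution by name requires network access).

        # Rise time and observability check with astroplan.
        from astropy.time import Time
        from astroplan import Observer, FixedTarget, AirmassConstraint, is_observable

        observer = Observer.at_site("subaru")         # named observatory site
        target = FixedTarget.from_name("Vega")        # coordinates resolved by name
        time = Time("2018-03-01 06:00:00")

        rise = observer.target_rise_time(time, target, which="next")
        ok = is_observable([AirmassConstraint(max=2.0)], observer, [target],
                           time_range=Time(["2018-03-01 04:00", "2018-03-01 10:00"]))
        print(rise.iso, ok)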

  7. Response functions for neutron skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gui, A.A.; Shultis, J.K.; Faw, R.E.

    1997-02-01

    Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analysis employing the integral line-beam method. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 deg, as measured from the source-to-detector axis. The neutron and associated secondary photon conical-beam response functions (CBRFs) for azimuthally symmetric neutron sources are also evaluated at 13 neutron source energies in the same energy range and at 13 polar angles of source collimation from 1 to 89 deg. The response functions are approximated by an empirical three-parameter function of the source-to-detector distance. These response function approximations are available for a source-to-detector distance up to 2,500 m and, for the first time, give dose equivalent responses that are required for modern radiological assessments. For the CBRFs, ground correction factors for neutrons and secondary photons are calculated and also approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, simple procedures are proposed for humidity and atmospheric density corrections.
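
    The abstract does not spell out the three-parameter form, so the sketch below fits a generic candidate, R(d) = a·d^b·exp(-c·d), to synthetic response data over the stated distance range; the functional form and all numerical values are assumptions for illustration only.

        # Fit an empirical three-parameter function of source-to-detector distance.
        import numpy as np
        from scipy.optimize import curve_fit

        def response(d, a, b, c):
            return a * d**b * np.exp(-c * d)      # assumed illustrative form

        d = np.linspace(50.0, 2500.0, 40)         # distances up to 2,500 m
        rng = np.random.default_rng(3)
        data = response(d, 5e-3, -1.2, 1.5e-3) * (1 + 0.02 * rng.standard_normal(d.size))
        popt, _ = curve_fit(response, d, data, p0=(1e-2, -1.0, 1e-3))
        print(popt)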

  8. Time Reversal Mirrors and Cross Correlation Functions in Acoustic Wave Propagation

    NASA Astrophysics Data System (ADS)

    Fishman, Louis; Jonsson, B. Lars G.; de Hoop, Maarten V.

    2009-03-01

    In time reversal acoustics (TRA), a signal is recorded by an array of transducers, time reversed, and then retransmitted into the configuration. The retransmitted signal propagates back through the same medium and retrofocuses on the source that generated the signal. If the transducer array is a single, planar (flat) surface, then this configuration is referred to as a planar, one-sided, time reversal mirror (TRM). In signal processing, for example, in active-source seismic interferometry, the measurement of the wave field at two distinct receivers, generated by a common source, is considered. Cross correlating these two observations and integrating the result over the sources yield the cross correlation function (CCF). Adopting the TRM experiments as the basic starting point and identifying the kinematically correct correspondences, it is established that the associated CCF signal processing constructions follow in a specific, infinite recording time limit. This perspective also provides for a natural rationale for selecting the Green's function components in the TRM and CCF expressions. For a planar, one-sided, TRM experiment and the corresponding CCF signal processing construction, in a three-dimensional homogeneous medium, the exact expressions are explicitly calculated, and the connecting limiting relationship verified. Finally, the TRM and CCF results are understood in terms of the underlying, governing, two-way wave equation, its corresponding time reversal invariance (TRI) symmetry, and the absence of TRI symmetry in the associated one-way wave equations, highlighting the role played by the evanescent modal contributions.

  9. The Relationship between Sources and Functions of Social Support and Dimensions of Child- and Parent-Related Stress

    ERIC Educational Resources Information Center

    Guralnick, M. J.; Hammond, M. A.; Neville, B.; Connor, R. T.

    2008-01-01

    Background: In this longitudinal study, we examined the relationship between the sources and functions of social support and dimensions of child- and parent-related stress for mothers of young children with mild developmental delays. Methods: Sixty-three mothers completed assessments of stress and support at two time points. Results: Multiple…

  10. Understanding handpump sustainability: Determinants of rural water source functionality in the Greater Afram Plains region of Ghana.

    PubMed

    Fisher, Michael B; Shields, Katherine F; Chan, Terence U; Christenson, Elizabeth; Cronk, Ryan D; Leker, Hannah; Samani, Destina; Apoya, Patrick; Lutz, Alexandra; Bartram, Jamie

    2015-10-01

    Safe drinking water is critical to human health and development. In rural sub-Saharan Africa, most improved water sources are boreholes with handpumps; studies suggest that up to one third of these handpumps are nonfunctional at any given time. This work presents findings from a secondary analysis of cross-sectional data from 1509 water sources in 570 communities in the rural Greater Afram Plains (GAP) region of Ghana, one of the largest studies of its kind. 79.4% of enumerated water sources were functional when visited; in multivariable regressions, functionality depended on source age, management, tariff collection, the number of other sources in the community, and the district. A Bayesian network (BN) model developed using the same data set found strong dependencies of functionality on implementer, pump type, management, and the availability of tools, with synergistic effects from management determinants on functionality, increasing the likelihood of a source being functional from a baseline of 72% to more than 97% with optimal management and available tools. We suggest that functionality may be a dynamic equilibrium between regular breakdowns and repairs, with management a key determinant of repair rate. Management variables may interact synergistically in ways better captured by BN analysis than by logistic regressions. These qualitative findings may prove generalizable beyond the study area, and may offer new approaches to understanding and increasing handpump functionality and safe water access.

  11. A single-sided homogeneous Green's function representation for holographic imaging, inverse scattering, time-reversal acoustics and interferometric Green's function retrieval

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; Thorbecke, Jan; van der Neut, Joost

    2016-04-01

    Green's theorem plays a fundamental role in a diverse range of wavefield imaging applications, such as holographic imaging, inverse scattering, time-reversal acoustics and interferometric Green's function retrieval. In many of those applications, the homogeneous Green's function (i.e. the Green's function of the wave equation without a singularity on the right-hand side) is represented by a closed boundary integral. In practical applications, sources and/or receivers are usually present only on an open surface, which implies that a significant part of the closed boundary integral is by necessity ignored. Here we derive a homogeneous Green's function representation for the common situation that sources and/or receivers are present on an open surface only. We modify the integrand in such a way that it vanishes on the part of the boundary where no sources and receivers are present. As a consequence, the remaining integral along the open surface is an accurate single-sided representation of the homogeneous Green's function. This single-sided representation accounts for all orders of multiple scattering. The new representation significantly improves the aforementioned wavefield imaging applications, particularly in situations where the first-order scattering approximation breaks down.

  12. Solving transient acoustic boundary value problems with equivalent sources using a lumped parameter approach.

    PubMed

    Fahnline, John B

    2016-12-01

    An equivalent source method is developed for solving transient acoustic boundary value problems. The method assumes the boundary surface is discretized in terms of triangular or quadrilateral elements and that the solution is represented using the acoustic fields of discrete sources placed at the element centers. Also, the boundary condition is assumed to be specified for the normal component of the surface velocity as a function of time, and the source amplitudes are determined to match the known elemental volume velocity vector at a series of discrete time steps. Equations are given for marching-on-in-time schemes to solve for the source amplitudes at each time step for simple, dipole, and tripole source formulations. Several example problems are solved to illustrate the results and to validate the formulations, including problems with closed boundary surfaces where long-time numerical instabilities typically occur. A simple relationship between the simple and dipole source amplitudes in the tripole source formulation is derived so that the source radiates primarily in the direction of the outward surface normal. The tripole source formulation is shown to eliminate interior acoustic resonances and long-time numerical instabilities.
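
    The marching-on-in-time idea can be sketched generically: at each step, the current source amplitudes are solved from the prescribed volume velocities after subtracting the contribution of past amplitudes. The influence matrices below are left abstract (they would come from the retarded fields of the simple, dipole, or tripole sources evaluated at the element centers), so this is a schematic of the time-stepping structure only, not the paper's formulation.

        # Schematic marching-on-in-time solve for equivalent source amplitudes.
        import numpy as np

        def march_source_amplitudes(G, u):
            """G: list of (n_elem, n_src) influence matrices; G[k] maps source
            amplitudes k steps in the past to the present volume velocity.
            u: (n_steps, n_elem) prescribed elemental volume velocities."""
            n_steps, _ = u.shape
            q = np.zeros((n_steps, G[0].shape[1]))
            for n in range(n_steps):
                rhs = u[n].copy()
                for k in range(1, min(n + 1, len(G))):
                    rhs -= G[k] @ q[n - k]        # subtract history terms
                q[n] = np.linalg.lstsq(G[0], rhs, rcond=None)[0]
            return q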

  13. Real-time color measurement using active illuminant

    NASA Astrophysics Data System (ADS)

    Tominaga, Shoji; Horiuchi, Takahiko; Yoshimura, Akihiko

    2010-01-01

    This paper proposes a method for real-time color measurement using an active illuminant. A synchronous measurement system is constructed by combining a high-speed active spectral light source and a high-speed monochrome camera. The light source is a programmable spectral source capable of emitting an arbitrary spectrum at high speed. The essential advantage of this system is that it captures spectral images at high frame rates without using filters. The new method of real-time colorimetry differs from traditional methods based on colorimeters or spectrometers. We project the color-matching functions onto an object surface as spectral illuminants. Then we can obtain the CIE XYZ tristimulus values directly from the camera outputs at every point on the surface. We describe the principle of our colorimetric technique based on projection of the color-matching functions and the procedure for realizing a real-time measurement system for a moving object. In an experiment, we examine the performance of real-time color measurement for a static object and a moving object.
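
    The core principle admits a small numeric demonstration: if the projected illuminant spectrum is shaped like a color-matching function, a monochrome camera's output is proportional to the corresponding tristimulus value. The Gaussian stand-ins for the CMFs, the reflectance, and the flat camera sensitivity below are all assumptions for illustration.

        # XYZ directly from camera outputs under CMF-shaped illuminants.
        import numpy as np

        lam = np.linspace(400.0, 700.0, 301)      # wavelength in nm
        g = lambda mu, s: np.exp(-0.5 * ((lam - mu) / s) ** 2)
        xbar = g(600, 40) + 0.35 * g(450, 20)     # crude stand-ins for CIE CMFs
        ybar, zbar = g(555, 45), g(445, 25)
        reflectance = 0.2 + 0.6 * g(520, 60)      # object surface
        sensitivity = np.ones_like(lam)           # idealized camera response

        # Camera output under each projected CMF illuminant ~ X, Y, Z.
        xyz = [np.trapz(cmf * reflectance * sensitivity, lam)
               for cmf in (xbar, ybar, zbar)]
        print(xyz)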

  14. 3-D acoustic waveform simulation and inversion at Yasur Volcano, Vanuatu

    NASA Astrophysics Data System (ADS)

    Iezzi, A. M.; Fee, D.; Matoza, R. S.; Austin, A.; Jolly, A. D.; Kim, K.; Christenson, B. W.; Johnson, R.; Kilgour, G.; Garaebiti, E.; Kennedy, B.; Fitzgerald, R.; Key, N.

    2016-12-01

    Acoustic waveform inversion shows promise for improved eruption characterization that may inform volcano monitoring. A well-constrained inversion can provide robust estimates of volume and mass flux, increasing our ability to monitor volcanic emissions (potentially in real time). Previous studies have made assumptions about the multipole source mechanism, which can be thought of as the combination of pressure fluctuations from a volume change, directionality, and turbulence. This infrasound source could not previously be well constrained because infrasound sensors were deployed only on Earth's surface, so the assumption of no vertical dipole component has been made. In this study we deploy a high-density seismo-acoustic network, including multiple acoustic sensors along a tethered balloon, around Yasur Volcano, Vanuatu. Yasur has frequent Strombolian eruptions from any one of its three active vents within a 400 m diameter crater. The third (vertical) dimension of pressure sensor coverage allows us to begin to constrain the acoustic source components, primarily the horizontal and vertical components and their previously uncharted contributions to volcano infrasound. The deployment also has geochemical and visual components, including FLIR, FTIR, two scanning FLYSPECs, and a variety of visual imagery. Our analysis employs Finite-Difference Time-Domain (FDTD) modeling to obtain the full 3-D Green's functions for each propagation path. This method, following Kim et al. (2015), takes into account realistic topographic scattering based on a digital elevation model created using structure-from-motion techniques. We then invert for the source location and source-time function, constraining the contribution of the vertical sound radiation to the source. The final outcome of this inversion is an infrasound-derived volume flux as a function of time, which we then compare to fluxes derived independently from geochemical techniques as well as from the inversion of seismic data. Kim, K., Fee, D., Yokoo, A., & Lees, J. M. (2015). Acoustic source inversion to estimate volume flux from volcanic explosions. Geophysical Research Letters, 42(13), 5243-5249.

  15. Locating Microseism Sources Using Spurious Arrivals in Intercontinental Noise Correlations

    NASA Astrophysics Data System (ADS)

    Retailleau, Lise; Boué, Pierre; Stehly, Laurent; Campillo, Michel

    2017-10-01

    The accuracy of Green's functions retrieved from seismic noise correlations in the microseism frequency band is limited by the uneven distribution of microseism sources at the surface of the Earth. As a result, correlation functions are often biased as compared to the expected Green's functions, and they can include spurious arrivals. These spurious arrivals are seismic arrivals that are visible on the correlation and do not belong to the theoretical impulse response. In this article, we propose to use Rayleigh wave spurious arrivals detected on correlation functions computed between European and United States seismic stations to locate microseism sources in the Atlantic Ocean. We perform a slant stack on a time distance gather of correlations obtained from an array of stations that comprises a regional deployment and a distant station. The arrival times and the apparent slowness of the spurious arrivals lead to the location of their source, which is obtained through a grid search procedure. We discuss improvements in the location through this methodology as compared to classical back projection of microseism energy. This method is interesting because it only requires an array and a distant station on each side of an ocean, conditions that can be met relatively easily.

  16. Real-time monitoring and massive inversion of source parameters of very long period seismic signals: An application to Stromboli Volcano, Italy

    USGS Publications Warehouse

    Auger, E.; D'Auria, L.; Martini, M.; Chouet, B.; Dawson, P.

    2006-01-01

    We present a comprehensive processing tool for the real-time analysis of the source mechanism of very long period (VLP) seismic data based on waveform inversions performed in the frequency domain for a point source. A search for the source providing the best-fitting solution is conducted over a three-dimensional grid of assumed source locations, in which the Green's functions associated with each point source are calculated by finite differences using the reciprocal relation between source and receiver. Tests performed on 62 nodes of a Linux cluster indicate that the waveform inversion and search for the best-fitting signal over 100,000 point sources require roughly 30 s of processing time for a 2-min-long record. The procedure is applied to post-processing of a data archive and to continuous automatic inversion of real-time data at Stromboli, providing insights into different modes of degassing at this volcano. Copyright 2006 by the American Geophysical Union.

  17. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    NASA Astrophysics Data System (ADS)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5,000 square degrees every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ~175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data at a high cadence, it is challenging to search for short-timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real time, and design a distributed shared-nothing database to manage the massive amount of archived data, which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop the HCHDLP based on the column-store DBMS MonetDB, taking advantage of MonetDB's high performance on massive data processing. To realize the real-time functionality of the HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantics) and inside (RANGE-JOIN implementation), as well as in its strategy for building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time-partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture achieves both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC for real-time data processing and management of massive data.

  18. Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.

    2017-12-01

    We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We implemented this technique in Green's function based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least-squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, a Green's function (GF) is computed using a basis function for the initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake was a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After applying the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides better results in terms of waveform fit and yields a source model that shows both sub-events, associated with normal and thrust faulting.

  19. Advanced Optimal Extraction for the Spitzer/IRS

    NASA Astrophysics Data System (ADS)

    Lebouteiller, V.; Bernard-Salas, J.; Sloan, G. C.; Barry, D. J.

    2010-02-01

    We present new advances in the spectral extraction of pointlike sources adapted to the Infrared Spectrograph (IRS) on board the Spitzer Space Telescope. For the first time, we created a supersampled point-spread function of the low-resolution modules. We describe how to use the point-spread function to perform optimal extraction of a single source and of multiple sources within the slit. We also examine the case of the optimal extraction of one or several sources with a complex background. The new algorithms are gathered in a plug-in called AdOpt which is part of the SMART data analysis software.

  20. Band-limited Green's Functions for Quantitative Evaluation of Acoustic Emission Using the Finite Element Method

    NASA Technical Reports Server (NTRS)

    Leser, William P.; Yuan, Fuh-Gwo; Leser, William P.

    2013-01-01

    A method of numerically estimating dynamic Green's functions using the finite element method is proposed. These Green's functions are accurate in a limited frequency range that depends on the mesh size used to generate them. This range can often match or exceed the frequency sensitivity of traditional acoustic emission sensors. An algorithm is also developed to characterize an acoustic emission source by obtaining information about its strength and temporal dependence. This information can then be used to reproduce the source in a finite element model for further analysis. Numerical examples are presented that demonstrate the ability of the band-limited Green's function approach to determine the moment tensor coefficients of several reference signals to within seven percent, as well as to accurately reproduce the source-time function.

  1. A time-frequency analysis of the dynamics of cortical networks of sleep spindles from MEG-EEG recordings

    PubMed Central

    Zerouali, Younes; Lina, Jean-Marc; Sekerovic, Zoran; Godbout, Jonathan; Dube, Jonathan; Jolicoeur, Pierre; Carrier, Julie

    2014-01-01

    Sleep spindles are a hallmark of NREM sleep. They result from a widespread thalamo-cortical loop and involve synchronous cortical networks that are still poorly understood. We investigated whether brain activity during spindles can be characterized by specific patterns of functional connectivity among cortical generators. For that purpose, we developed a wavelet-based approach aimed at imaging the synchronous oscillatory cortical networks from simultaneous MEG-EEG recordings. First, we detected spindles on the EEG and extracted the corresponding frequency-locked MEG activity in the form of an analytic ridge signal in the time-frequency plane (Zerouali et al., 2013). Second, we performed source reconstruction of the ridge signal within the Maximum Entropy on the Mean framework (Amblard et al., 2004), yielding a robust estimate of the cortical sources producing the observed oscillations. Last, we quantified functional connectivity among cortical sources using phase-locking values. The main innovations of this methodology are (1) to reveal the dynamic behavior of functional networks resolved in the time-frequency plane and (2) to characterize functional connectivity among MEG sources through phase interactions. We showed, for the first time, that the switch from fast to slow oscillatory mode during sleep spindles is required for the emergence of specific patterns of connectivity. Moreover, we show that earlier synchrony during spindles was associated with mainly intra-hemispheric connectivity, whereas later synchrony was associated with global long-range connectivity. We propose that our methodology can be a valuable tool for studying the connectivity underlying neural processes involving sleep spindles, such as memory, plasticity or aging. PMID:25389381
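
    The phase-locking value used in the last step has a compact standard form: the magnitude of the mean phase-difference vector between two analytic signals. The sketch below computes it for two synthetic time courses sharing a spindle-band rhythm; all signal parameters are stand-ins.

        # Phase-locking value (PLV) between two source time courses.
        import numpy as np
        from scipy.signal import hilbert

        rng = np.random.default_rng(4)
        t = np.linspace(0.0, 2.0, 1000)
        common = np.sin(2 * np.pi * 13 * t)       # shared 13 Hz spindle rhythm
        x = common + 0.5 * rng.standard_normal(t.size)
        y = common + 0.5 * rng.standard_normal(t.size)

        dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
        plv = np.abs(np.mean(np.exp(1j * dphi)))
        print("PLV: %.2f" % plv)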

  2. Understanding handpump sustainability: Determinants of rural water source functionality in the Greater Afram Plains region of Ghana†

    PubMed Central

    Shields, Katherine F.; Chan, Terence U.; Christenson, Elizabeth; Cronk, Ryan D.; Leker, Hannah; Samani, Destina; Apoya, Patrick; Lutz, Alexandra

    2015-01-01

    Safe drinking water is critical to human health and development. In rural sub-Saharan Africa, most improved water sources are boreholes with handpumps; studies suggest that up to one third of these handpumps are nonfunctional at any given time. This work presents findings from a secondary analysis of cross-sectional data from 1509 water sources in 570 communities in the rural Greater Afram Plains (GAP) region of Ghana, one of the largest studies of its kind. 79.4% of enumerated water sources were functional when visited; in multivariable regressions, functionality depended on source age, management, tariff collection, the number of other sources in the community, and the district. A Bayesian network (BN) model developed using the same data set found strong dependencies of functionality on implementer, pump type, management, and the availability of tools, with synergistic effects from management determinants on functionality, increasing the likelihood of a source being functional from a baseline of 72% to more than 97% with optimal management and available tools. We suggest that functionality may be a dynamic equilibrium between regular breakdowns and repairs, with management a key determinant of repair rate. Management variables may interact synergistically in ways better captured by BN analysis than by logistic regressions. These qualitative findings may prove generalizable beyond the study area, and may offer new approaches to understanding and increasing handpump functionality and safe water access. PMID:27667863

  3. Understanding handpump sustainability: Determinants of rural water source functionality in the Greater Afram Plains region of Ghana

    NASA Astrophysics Data System (ADS)

    Fisher, Michael B.; Shields, Katherine F.; Chan, Terence U.; Christenson, Elizabeth; Cronk, Ryan D.; Leker, Hannah; Samani, Destina; Apoya, Patrick; Lutz, Alexandra; Bartram, Jamie

    2015-10-01

    Safe drinking water is critical to human health and development. In rural sub-Saharan Africa, most improved water sources are boreholes with handpumps; studies suggest that up to one third of these handpumps are nonfunctional at any given time. This work presents findings from a secondary analysis of cross-sectional data from 1509 water sources in 570 communities in the rural Greater Afram Plains (GAP) region of Ghana, one of the largest studies of its kind. 79.4% of enumerated water sources were functional when visited; in multivariable regressions, functionality depended on source age, management, tariff collection, the number of other sources in the community, and the district. A Bayesian network (BN) model developed using the same data set found strong dependencies of functionality on implementer, pump type, management, and the availability of tools, with synergistic effects from management determinants on functionality, increasing the likelihood of a source being functional from a baseline of 72% to more than 97% with optimal management and available tools. We suggest that functionality may be a dynamic equilibrium between regular breakdowns and repairs, with management a key determinant of repair rate. Management variables may interact synergistically in ways better captured by BN analysis than by logistic regressions. These qualitative findings may prove generalizable beyond the study area, and may offer new approaches to understanding and increasing handpump functionality and safe water access. This article was corrected on 11 Nov 2015. See the end of the full text for details.

  4. Heuristic Green's function of the time dependent radiative transfer equation for a semi-infinite medium.

    PubMed

    Martelli, Fabrizio; Sassaroli, Angelo; Pifferi, Antonio; Torricelli, Alessandro; Spinelli, Lorenzo; Zaccanti, Giovanni

    2007-12-24

    The Green's function of the time-dependent radiative transfer equation for a semi-infinite medium is derived for the first time by a heuristic approach based on the extrapolated boundary condition and on an almost exact solution for the infinite medium. Monte Carlo simulations, performed both in the simple case of isotropic scattering with an isotropic point-like source and in the more realistic case of anisotropic scattering with a pencil beam source, are used to validate the heuristic Green's function. Except at very early times, the proposed solution has excellent accuracy (>98% for the isotropic case and >97% for the anisotropic case), significantly better than the diffusion equation. This solution could be extremely useful in the biomedical optics field, where it can be directly employed in conditions where the use of the diffusion equation is limited, e.g., small-volume samples, highly absorbing and/or weakly scattering media, short source-receiver distances, and early times. It also represents a first step toward deriving tools for other geometries (e.g., the slab, and the slab with inhomogeneities inside) of practical interest for noninvasive spectroscopy and diffuse optical imaging. Moreover, the proposed solution can be useful in several research fields where the study of a transport process is fundamental.
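
    For readers unfamiliar with the extrapolated boundary condition that the heuristic solution builds on, the sketch below shows the standard diffusion-equation analogue: the semi-infinite Green's function is the infinite-medium solution minus a negative image source placed beyond the extrapolated boundary. The optical parameters are illustrative, and this is the diffusion approximation, not the radiative transfer solution of the paper.

        # Semi-infinite diffusion Green's function via an extrapolated boundary.
        import numpy as np

        def G_inf(r, t, mua, D, v):
            """Infinite-medium, time-domain diffusion Green's function (fluence)."""
            return (v * np.exp(-r**2 / (4 * D * v * t) - mua * v * t)
                    / (4 * np.pi * D * v * t) ** 1.5)

        mua, musp, v = 0.01, 1.0, 0.214           # mm^-1, mm^-1, mm/ps (assumed)
        D = 1.0 / (3.0 * musp)
        z0 = 1.0 / musp                           # isotropic source depth
        zb = 2.0 * D                              # extrapolated boundary distance (A = 1)
        rho, z = 10.0, 0.0                        # detector on the surface
        t = np.linspace(50.0, 2000.0, 5)          # time in ps

        r1 = np.hypot(rho, z - z0)
        r2 = np.hypot(rho, z + z0 + 2 * zb)
        G_semi = G_inf(r1, t, mua, D, v) - G_inf(r2, t, mua, D, v)
        print(G_semi)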

  5. EEGNET: An Open Source Tool for Analyzing and Visualizing M/EEG Connectome.

    PubMed

    Hassan, Mahmoud; Shamas, Mohamad; Khalil, Mohamad; El Falou, Wassim; Wendling, Fabrice

    2015-01-01

    The brain is a large-scale complex network often referred to as the "connectome". Exploring the dynamic behavior of the connectome is a challenging issue, as both excellent time and space resolution are required. In this context, Magneto/Electroencephalography (M/EEG) are effective neuroimaging techniques allowing for analysis of the dynamics of functional brain networks at the scalp level and/or at reconstructed sources. However, a tool that can cover all the processing steps of identifying brain networks from M/EEG data is still missing. In this paper, we report a novel software package, called EEGNET, running under MATLAB (MathWorks, Inc.), that allows for analysis and visualization of functional brain networks from M/EEG recordings. EEGNET is developed to analyze networks either at the level of scalp electrodes or at the level of reconstructed cortical sources. It includes i) basic steps in preprocessing M/EEG signals, ii) solution of the inverse problem to localize/reconstruct the cortical sources, iii) computation of functional connectivity among signals collected at surface electrodes and/or time courses of reconstructed sources, and iv) computation of network measures based on graph theory analysis. EEGNET is the only tool that combines M/EEG functional connectivity analysis with the computation of network measures derived from graph theory. The first version of EEGNET is easy to use, flexible and user friendly. EEGNET is an open source tool and can be freely downloaded from this webpage: https://sites.google.com/site/eegnetworks/.

  6. EEGNET: An Open Source Tool for Analyzing and Visualizing M/EEG Connectome

    PubMed Central

    Hassan, Mahmoud; Shamas, Mohamad; Khalil, Mohamad; El Falou, Wassim; Wendling, Fabrice

    2015-01-01

    The brain is a large-scale complex network often referred to as the “connectome”. Exploring the dynamic behavior of the connectome is a challenging issue, as both excellent time and space resolution are required. In this context, Magneto/Electroencephalography (M/EEG) are effective neuroimaging techniques allowing for analysis of the dynamics of functional brain networks at the scalp level and/or at reconstructed sources. However, a tool that can cover all the processing steps of identifying brain networks from M/EEG data is still missing. In this paper, we report a novel software package, called EEGNET, running under MATLAB (MathWorks, Inc.), that allows for analysis and visualization of functional brain networks from M/EEG recordings. EEGNET is developed to analyze networks either at the level of scalp electrodes or at the level of reconstructed cortical sources. It includes i) basic steps in preprocessing M/EEG signals, ii) solution of the inverse problem to localize/reconstruct the cortical sources, iii) computation of functional connectivity among signals collected at surface electrodes and/or time courses of reconstructed sources, and iv) computation of network measures based on graph theory analysis. EEGNET is the only tool that combines M/EEG functional connectivity analysis with the computation of network measures derived from graph theory. The first version of EEGNET is easy to use, flexible and user friendly. EEGNET is an open source tool and can be freely downloaded from this webpage: https://sites.google.com/site/eegnetworks/. PMID:26379232

  7. Dynamic radioactive particle source

    DOEpatents

    Moore, Murray E; Gauss, Adam Benjamin; Justus, Alan Lawrence

    2012-06-26

    A method and apparatus for providing a timed, synchronized dynamic alpha or beta particle source for testing the response of continuous air monitors (CAMs) for airborne alpha or beta emitters is provided. The method includes providing a radioactive source; placing the radioactive source inside the detection volume of a CAM; and introducing an alpha or beta-emitting isotope while the CAM is in a normal functioning mode.

  8. VizieR Online Data Catalog: ynogkm: code for calculating time-like geodesics (Yang+, 2014)

    NASA Astrophysics Data System (ADS)

    Yang, X.-L.; Wang, J.-C.

    2013-11-01

    Here we present the source file for a new public code named ynogkm, aimed at fast calculation of time-like geodesics in a Kerr-Newman spacetime. In the code, the four Boyer-Lindquist coordinates and the proper time are expressed semi-analytically as functions of a parameter p, i.e., r(p), μ(p), φ(p), t(p), and σ(p), using Weierstrass' and Jacobi's elliptic functions and integrals. All of the elliptic integrals are computed by Carlson's elliptic integral method, which guarantees the fast speed of the code. The source Fortran file ynogkm.f90 contains four modules: constants, rootfind, ellfunction, and blcoordinates. (3 data files).

  9. Guidebook on preserving the functionality of state highways in Texas.

    DOT National Transportation Integrated Search

    2010-05-01

    The purpose of this project was to identify the sources of deterioration of state highway functionality that occur over time and what actions can be taken to preserve, recover, and enhance functionality. Congestion and operational problems slow t...

  10. Inverse identification of unknown finite-duration air pollutant release from a point source in urban environment

    NASA Astrophysics Data System (ADS)

    Kovalets, Ivan V.; Efthimiou, George C.; Andronopoulos, Spyros; Venetsanos, Alexander G.; Argyropoulos, Christos D.; Kakosimos, Konstantinos E.

    2018-05-01

    In this work, we present an inverse computational method for identifying the location, start time, duration and emitted quantity of an unknown air pollution source of finite duration in an urban environment. We considered a problem of transient pollutant dispersion under stationary meteorological fields, which is a reasonable assumption for the assimilation of concentration measurements available within 1 h of the start of an incident. We optimized the calculation of the source-receptor function by developing a method that requires integrating only as many backward adjoint equations as there are measurement stations, which results in high numerical efficiency. The source parameters are computed by maximizing the correlation function of the simulated and observed concentrations. The method has been integrated into the CFD code ADREA-HF and tested successfully by performing a series of source inversion runs using the data of 200 individual realizations of puff releases previously generated in a wind tunnel experiment.
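
    The final estimation step, maximizing the correlation between simulated and observed concentrations over candidate source parameters, can be sketched generically; the placeholder `simulate` below stands in for the adjoint-based forward model of the paper.

        # Grid search for the source parameters maximizing the correlation
        # between observed and simulated sensor concentrations.
        import numpy as np

        def correlation(obs, sim):
            o, s = obs - obs.mean(), sim - sim.mean()
            return (o @ s) / (np.linalg.norm(o) * np.linalg.norm(s) + 1e-30)

        def locate_source(candidates, observed, simulate):
            """candidates: list of source-parameter tuples; simulate(theta)
            returns modeled concentrations at the sensors (placeholder)."""
            scores = [correlation(observed, simulate(th)) for th in candidates]
            best = int(np.argmax(scores))
            return candidates[best], scores[best]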

  11. Theory of acoustic design of opera house and a design proposal

    NASA Astrophysics Data System (ADS)

    Ando, Yoichi

    2004-05-01

    First, the theory of subjective preference for sound fields, based on a model of the auditory-brain system, is briefly described. It consists of temporal factors and spatial factors, associated with the left and right cerebral hemispheres, respectively. The temporal criteria are the initial time-delay gap between the direct sound and the first reflection (Δt1) and the subsequent reverberation time (Tsub); their preferred values are related to the minimum value of the effective duration of the running autocorrelation function of the source signal, (τe)min. The spatial criteria are the binaural listening level (LL) and the IACC, which may be extracted from the interaural cross-correlation function. In an opera house there are two different kinds of sound sources: the vocal source on the stage, with relatively short values of (τe)min, and the orchestra music in the pit, with long values of (τe)min. For these sources, a design proposal is made here.
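
    One common operational definition of the effective duration τe, to which the preferred temporal criteria are tied, is the delay at which the envelope of the normalized autocorrelation function decays to 0.1 (-10 dB), often found by extrapolating the initial decay. The sketch below implements that reading on a toy signal; the fit window and signal parameters are assumptions.

        # Effective duration (tau_e) of the autocorrelation function of a frame.
        import numpy as np

        def effective_duration(frame, fs, fit_s=0.05):
            acf = np.correlate(frame, frame, mode="full")[frame.size - 1:]
            acf /= acf[0]                         # normalize so phi(0) = 1
            env_db = 10 * np.log10(np.maximum(np.abs(acf), 1e-12))
            tau = np.arange(acf.size) / fs
            early = slice(1, int(fit_s * fs))     # fit the initial decay only
            slope, intercept = np.polyfit(tau[early], env_db[early], 1)
            return (-10.0 - intercept) / slope    # delay where envelope hits -10 dB

        fs = 8000
        t = np.arange(0.0, 0.5, 1.0 / fs)
        frame = np.sin(2 * np.pi * 440 * t) * np.exp(-4 * t)   # toy source signal
        print(effective_duration(frame, fs))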

  12. Empirical Green's functions from small earthquakes: A waveform study of locally recorded aftershocks of the 1971 San Fernando earthquake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchings, L.; Wu, F.

    1990-02-10

    Seismograms from 52 aftershocks of the 1971 San Fernando earthquake recorded at 25 stations distributed across the San Fernando Valley are examined to identify empirical Green's functions, and characterize the dependence of their waveforms on moment, focal mechanism, source and recording site spatial variations, recording site geology, and recorded frequency band. Recording distances ranged from 3.0 to 33.0 km, hypocentral separations ranged from 0.22 to 28.4 km, and recording site separations ranged from 0.185 to 24.2 km. The recording site geologies are diorite gneiss, marine and nonmarine sediments, and alluvium of varying thicknesses. Waveforms of events with moment below about 1.5 × 10^21 dyn cm are independent of the source-time function and are termed empirical Green's functions. Waveforms recorded at a particular station from events located within 1.0 to 3.0 km of each other, depending upon site geology, with very similar focal mechanism solutions are nearly identical for frequencies up to 10 Hz. There is no correlation to waveforms between recording sites at least 1.2 km apart, and waveforms are clearly distinctive for two sites 0.185 km apart. The geologic conditions of the recording site dominate the character of empirical Green's functions. Even for source separations of up to 20.0 km, the empirical Green's functions at a particular site are consistent in frequency content, amplification, and energy distribution. Therefore, it is shown that empirical Green's functions can be used to obtain site response functions. The observations of empirical Green's functions are used as a basis for developing the theory for using empirical Green's functions in deconvolution for source pulses and synthesis of seismograms of larger earthquakes.

  13. Thunder-induced ground motions: 1. Observations

    NASA Astrophysics Data System (ADS)

    Lin, Ting-L.; Langston, Charles A.

    2009-04-01

    Acoustic pressure from thunder and its induced ground motions were investigated using a small array consisting of five three-component short-period surface seismometers, a three-component borehole seismometer, and five infrasound microphones. We used the array to constrain the wave parameters of the incident acoustic and seismic waves. The slowness differences between the incident acoustic pressure and the ground motions suggest that ground reverberations were first initiated somewhat away from the array. Slowness inferred from the ground motions is preferable for obtaining the seismic source parameters. We propose a source equalization procedure for acoustic/seismic deconvolution to generate the time domain transfer function, a procedure similar to that used to obtain teleseismic earthquake receiver functions. The time domain transfer function removes the incident pressure time history from the seismogram. An additional vertical-to-radial ground motion transfer function was used to identify the Rayleigh-wave propagation mode of the induced seismic waves, complementing that found using the particle motions and the amplitude variations in the borehole. The initial motions obtained from the time domain transfer functions suggest a low Poisson's ratio for the near-surface layer. The acoustic-to-seismic transfer functions show a consistent reverberation series at frequencies near 5 Hz. This gives an empirical measure of site resonance, which depends on the ratio of layer velocity to layer thickness for earthquake P and S waves. The time domain transfer function approach, by transferring a spectral division into the time domain, provides an alternative method for studying acoustic-to-seismic coupling.

  14. [Functional state of the visual analyzer in the conditions of the use of traditional and LED light sources].

    PubMed

    Kaptsov, V A; Sosunov, N N; Shishchenko, I I; Viktorov, V S; Tulushev, V N; Deynego, V N; Bukhareva, E A; Murashova, M A; Shishchenko, A A

    2014-01-01

    Experimental work was performed to study the feasibility of using LED lighting (LED light sources) in rail transport for traffic safety in related professions. Four series of studies involving 10 volunteers compared the functional state of the visual analyzer, the general functional state, and mental capacity during simulated operator activity under traditional light sources (incandescent and fluorescent lamps) and new LED light sources (LED lamp, LED panel). The results revealed changes in the negative direction: a decrease in the functional stability of color discrimination between green and red cone signals, an increase in response time in the complex visual-motor response, and a significant reduction in the examinees' readiness for emergency action.

  15. Electrophysiological Source Imaging: A Noninvasive Window to Brain Dynamics.

    PubMed

    He, Bin; Sohrabpour, Abbas; Brown, Emery; Liu, Zhongming

    2018-06-04

    Brain activity and connectivity are distributed in the three-dimensional space and evolve in time. It is important to image brain dynamics with high spatial and temporal resolution. Electroencephalography (EEG) and magnetoencephalography (MEG) are noninvasive measurements associated with complex neural activations and interactions that encode brain functions. Electrophysiological source imaging estimates the underlying brain electrical sources from EEG and MEG measurements. It offers increasingly improved spatial resolution and intrinsically high temporal resolution for imaging large-scale brain activity and connectivity on a wide range of timescales. Integration of electrophysiological source imaging and functional magnetic resonance imaging could further enhance spatiotemporal resolution and specificity to an extent that is not attainable with either technique alone. We review methodological developments in electrophysiological source imaging over the past three decades and envision its future advancement into a powerful functional neuroimaging technology for basic and clinical neuroscience applications.

  16. Source modeling and inversion with near real-time GPS: a GITEWS perspective for Indonesia

    NASA Astrophysics Data System (ADS)

    Babeyko, A. Y.; Hoechner, A.; Sobolev, S. V.

    2010-07-01

    We present the GITEWS approach to source modeling for tsunami early warning in Indonesia. Near-field tsunami warning places special requirements on both warning time and the detail of source characterization. To meet these requirements, we employ geophysical and geological information to predefine as many rupture parameters as possible. We discretize the tsunamigenic Sunda plate interface into an ordered grid of patches (150×25) and employ the concept of Green's functions for forward and inverse rupture modeling. The Rupture Generator, a forward modeling tool, additionally employs different scaling laws and slip shape functions to construct physically reasonable source models using basic seismic information only (magnitude and epicenter location). GITEWS runs a library of semi- and fully-synthetic scenarios that is extensively employed for system testing as well as for teaching and training warning center personnel. Near real-time GPS observations are a very valuable complement to the local tsunami warning system. Their inversion provides a quick (within a few minutes of an event) estimate of the earthquake magnitude and rupture position and, given sufficient station coverage, details of the slip distribution.
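
    Once the patch Green's functions are precomputed, inverting GPS offsets for slip is a linear problem. A minimal damped least-squares sketch follows, with a random matrix standing in for the elastic Green's functions and a tiny patch grid in place of the 150×25 grid above.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical linear problem: static GPS offsets d (3 components per
      # site) are a linear combination of Green's functions, one column per
      # fault patch.
      n_patch, n_obs = 30, 90          # 30 patches, 30 stations x 3 components
      G = rng.standard_normal((n_obs, n_patch))   # placeholder Green's matrix

      true_slip = np.zeros(n_patch)
      true_slip[12:18] = [0.5, 1.5, 3.0, 2.5, 1.0, 0.4]   # meters, one asperity
      d = G @ true_slip + 0.01 * rng.standard_normal(n_obs)

      # Damped least squares: minimize |G m - d|^2 + eps^2 |m|^2
      eps = 0.1
      A = np.vstack([G, eps * np.eye(n_patch)])
      b = np.concatenate([d, np.zeros(n_patch)])
      slip, *_ = np.linalg.lstsq(A, b, rcond=None)

      print("recovered peak slip %.2f m (true 3.00 m)" % slip.max())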

  17. Openly Published Environmental Sensing (OPEnS) | Advancing Open-Source Research, Instrumentation, and Dissemination

    NASA Astrophysics Data System (ADS)

    Udell, C.; Selker, J. S.

    2017-12-01

    The increasing availability and functionality of Open-Source software and hardware along with 3D printing, low-cost electronics, and proliferation of open-access resources for learning rapid prototyping are contributing to fundamental transformations and new technologies in environmental sensing. These tools invite reevaluation of time-tested methodologies and devices toward more efficient, reusable, and inexpensive alternatives. Building upon Open-Source design facilitates community engagement and invites a Do-It-Together (DIT) collaborative framework for research where solutions to complex problems may be crowd-sourced. However, barriers persist that prevent researchers from taking advantage of the capabilities afforded by open-source software, hardware, and rapid prototyping. Some of these include: requisite technical skillsets, knowledge of equipment capabilities, identifying inexpensive sources for materials, money, space, and time. A university MAKER space staffed by engineering students to assist researchers is one proposed solution to overcome many of these obstacles. This presentation investigates the unique capabilities the USDA-funded Openly Published Environmental Sensing (OPEnS) Lab affords researchers, within Oregon State and internationally, and the unique functions these types of initiatives support at the intersection of MAKER spaces, Open-Source academic research, and open-access dissemination.

  18. Wave Field Synthesis of moving sources with arbitrary trajectory and velocity profile.

    PubMed

    Firtha, Gergely; Fiala, Péter

    2017-08-01

    The sound field synthesis of moving sound sources is of great importance when dynamic virtual sound scenes are to be reconstructed. Previous solutions considered only virtual sources moving uniformly along a straight trajectory, synthesized employing a linear loudspeaker array. This article presents the synthesis of point sources following an arbitrary trajectory. Under high-frequency assumptions 2.5D Wave Field Synthesis driving functions are derived for arbitrary shaped secondary source contours by adapting the stationary phase approximation to the dynamic description of sources in motion. It is explained how a referencing function should be chosen in order to optimize the amplitude of synthesis on an arbitrary receiver curve. Finally, a finite difference implementation scheme is considered, making the presented approach suitable for real-time applications.

  19. Ocean acoustic interferometry.

    PubMed

    Brooks, Laura A; Gerstoft, Peter

    2007-06-01

    Ocean acoustic interferometry refers to an approach whereby signals recorded from a line of sources are used to infer the Green's function between two receivers. An approximation of the time domain Green's function is obtained by summing, over all source positions (stacking), the cross-correlations between the receivers. Within this paper a stationary phase argument is used to describe the relationship between the stacked cross-correlations from a line of vertical sources, located in the same vertical plane as two receivers, and the Green's function between the receivers. Theory and simulations demonstrate the approach and are in agreement with those of a modal based approach presented by others. Results indicate that the stacked cross-correlations can be directly related to the shaded Green's function, so long as the modal continuum of any sediment layers is negligible.
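
    The stacking step can be demonstrated with a delay-only toy medium: summing the cross-correlations of the two receivers' records over many sources concentrates energy at the interreceiver travel time. All geometry and sampling choices below are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      fs, n_t, n_src = 100.0, 512, 100

      # Each source k reaches receivers A and B with travel times t_a(k), t_b(k);
      # the interreceiver response should emerge at the differential time.
      t_a = 1.0 + 0.005 * np.arange(n_src)     # source line moving away
      t_b = t_a + 0.35                         # B is 0.35 s "behind" A

      stack = np.zeros(2 * n_t - 1)
      for k in range(n_src):
          src = rng.standard_normal(n_t)               # random source signature
          rec_a = np.roll(src, int(t_a[k] * fs))       # crude delay-only medium
          rec_b = np.roll(src, int(t_b[k] * fs))
          stack += np.correlate(rec_b, rec_a, mode="full")

      lags = (np.arange(2 * n_t - 1) - (n_t - 1)) / fs
      print("stack peaks at lag %.2f s (expected 0.35 s)" % lags[stack.argmax()])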

  20. Correction for the detector-dead-time effect on the second-order correlation of stationary sub-Poissonian light in a two-detector configuration

    NASA Astrophysics Data System (ADS)

    Ann, Byoung-moo; Song, Younghoon; Kim, Junki; Yang, Daeho; An, Kyungwon

    2015-08-01

    Exact measurement of the second-order correlation function g(2)(t) of a light source is essential when investigating the photon statistics and the light generation process of the source. For a stationary single-mode light source, the Mandel Q factor is directly related to g(2)(0). For a large mean photon number in the mode, the deviation of g(2)(0) from unity is so small that even a tiny error in measuring g(2)(0) would result in an inaccurate Mandel Q. In this work, we address the detector-dead-time effect on g(2)(0) of stationary sub-Poissonian light. It is then found that detector dead time can induce a serious error in g(2)(0), and thus in Mandel Q, in those cases even in a two-detector configuration. Utilizing the cavity-QED microlaser, a well-established sub-Poissonian light source, we measured g(2)(0) with two different types of photodetectors with different dead times. We also introduced prolonged dead time by intentionally deleting the photodetection events following a preceding one within a specified time interval. We found that the observed Q of the cavity-QED microlaser was underestimated by 19% with respect to the dead-time-free Q when its mean photon number was about 600. We derived an analytic formula which well explains the behavior of g(2)(0) as a function of the dead time.
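
    The event-deletion procedure and a two-detector g(2)(0) estimator can be sketched as below. The sketch uses coherent (Poisson) light, for which the two-detector estimate should stay near 1, so it illustrates the mechanics of dead-time filtering rather than the sub-Poissonian distortion measured in the paper; all rates and dead times are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)

      def apply_dead_time(times, tau):
          """Keep only events separated from the previous *kept* event by >= tau
          (the 'intentional deletion' used to prolong dead time)."""
          kept, last = [], -np.inf
          for t in times:
              if t - last >= tau:
                  kept.append(t)
                  last = t
          return np.asarray(kept)

      rate, T = 1e6, 0.05                       # 1 Mcps, 50 ms record
      arrivals = np.cumsum(rng.exponential(1 / rate, int(rate * T * 1.2)))
      arrivals = arrivals[arrivals < T]
      to_a = rng.random(arrivals.size) < 0.5    # 50/50 beam splitter
      det_a = apply_dead_time(arrivals[to_a], tau=50e-9)
      det_b = apply_dead_time(arrivals[~to_a], tau=50e-9)

      # g(2)(0) estimate: near-coincidences vs the accidental expectation
      bin_w = 10e-9
      idx = np.searchsorted(det_b, det_a)
      near = np.minimum(
          np.abs(det_a - det_b[np.clip(idx, 0, det_b.size - 1)]),
          np.abs(det_a - det_b[np.clip(idx - 1, 0, det_b.size - 1)]))
      coinc = np.count_nonzero(near < bin_w / 2)
      g2_0 = coinc / (det_a.size * det_b.size * bin_w / T)
      print("g2(0) with 50 ns dead time: %.3f (expected ~1 for Poisson light)" % g2_0)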

  1. Testing the seismology-based landquake monitoring system

    NASA Astrophysics Data System (ADS)

    Chao, Wei-An

    2016-04-01

    I have developed a real-time landquake monitoring (RLM) system, which monitors large-scale landquake activity in Taiwan using the real-time seismic network of the Broadband Array in Taiwan for Seismology (BATS). The RLM system applies a grid-based general source inversion (GSI) technique to obtain a preliminary source location and force mechanism. A 2-D virtual source grid on Taiwan Island is created with an interval of 0.2° in both latitude and longitude. The depth of each grid point is fixed on the free-surface topography. A database of synthetics is stored on disk; these are obtained using Green's functions, computed by the propagator matrix approach for a 1-D average velocity model, at all stations and for each virtual source-grid point, for nine elementary source components: six elementary moment tensors and three orthogonal (north, east and vertical) single forces. The offline RLM system was run on events detected in previous studies. An important aspect of the RLM system is the implementation of the GSI approach for different source types (e.g., full moment tensor, double-couple faulting, and explosion source) by grid search through the 2-D virtual sources, to automatically identify landquake events based on the improvement in waveform fitness and to evaluate the best-fit solution in the monitoring area. With this approach, not only the force mechanisms but also the event occurrence time and location can be obtained simultaneously, about 6-8 min after the occurrence of an event. To improve the limited accuracy of the GSI-determined location, I further apply a landquake epicenter determination (LED) method that maximizes the coherency of the high-frequency (1-3 Hz) horizontal envelope functions to determine the final source location. With good knowledge of the source location, I perform a landquake force history (LFH) inversion to investigate the source dynamics (e.g., trajectory) of relatively large landquake events. By providing the aforementioned source information in real time, the government and emergency response agencies gain sufficient reaction time for rapid assessment of and response to landquake hazards. The RLM system has operated online since 2016.
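
    A schematic of the grid-search fitness evaluation follows. Random waveforms stand in for the precomputed synthetics library, and a single scalar amplitude per grid point replaces the full nine-component mechanism inversion.

      import numpy as np

      rng = np.random.default_rng(4)
      n_grid, n_sta, n_t = 64, 8, 400

      # Placeholder library: waveforms at each station for a unit source at
      # each virtual grid point (stored on disk in the real system).
      library = rng.standard_normal((n_grid, n_sta, n_t))

      # "Observed" data: true source at grid point 23, amplitude 2, plus noise
      obs = 2.0 * library[23] + 0.3 * rng.standard_normal((n_sta, n_t))

      def variance_reduction(syn, obs):
          # best-fitting scalar amplitude for this grid point, then VR of the fit
          a = np.vdot(syn, obs) / np.vdot(syn, syn)
          res = obs - a * syn
          return 1.0 - np.vdot(res, res) / np.vdot(obs, obs)

      vr = np.array([variance_reduction(library[g], obs) for g in range(n_grid)])
      print("best grid point:", vr.argmax(), "VR = %.2f" % vr.max())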

  2. A GIS-based time-dependent seismic source modeling of Northern Iran

    NASA Astrophysics Data System (ADS)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear, or fault, sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 km by 400 km around Tehran. Previous research and reports are studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used for time-dependent probabilistic seismic hazard analysis of Northern Iran.

  3. Information Source Preference as a Function of Physical and Psychological Distance from the Information Object.

    ERIC Educational Resources Information Center

    Paisley, William J.

    In this study data were collected on "major" and "most helpful" sources used by high school students (incoming Syracuse University freshmen at the time of data collection) as they gathered information about Syracuse and other colleges. Sources were both interpersonal (family, friends, high school personnel, college representatives) and impersonal…

  4. Very slow lava extrusion continued for more than five years after the 2011 Shinmoedake eruption observed from SAR interferometry

    NASA Astrophysics Data System (ADS)

    Ozawa, T.; Miyagi, Y.

    2017-12-01

    Shinmoe-dake, located in SW Japan, erupted in January 2011, and lava accumulated in the crater (e.g., Ozawa and Kozono, EPS, 2013). The last Vulcanian eruption occurred in September 2011, and no eruption has occurred since. Miyagi et al. (GRL, 2014) analyzed TerraSAR-X and Radarsat-2 SAR data acquired after the last eruption and found continuous inflation in the crater. The inflation decayed with time but had not terminated by May 2013. Since the time series of the inflation volume change rate fitted well to an exponential function with a constant term, we suggested that lava extrusion had continued over the long term, owing to deflation of a shallow magma source and to magma supply from a deeper source. To investigate the subsequent deformation, we applied InSAR to Sentinel-1 and ALOS-2 SAR data. The inflation decayed further and had almost terminated by the end of 2016, meaning that this deformation continued for more than five years after the last eruption. We found that the time series of the inflation volume change rate fits a double-exponential function better than a single-exponential function with a constant term. The exponential component with the short time constant had almost settled within one year of the last eruption. Although the InSAR result from TerraSAR-X data of November 2011 and May 2013 indicated deflation of a shallow source under the crater, such deformation has not been observed in recent SAR data, suggesting that this component was due to deflation of a shallow magma source with excess pressure. We also found that the long-term component may have decayed exponentially; this factor may be deflation of a deep source or delayed vesiculation.
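
    The model comparison described above can be reproduced schematically by fitting both forms to a synthetic rate series; the time constants and amplitudes below are illustrative, not values inferred in the study.

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.linspace(0, 60, 120)                 # e.g. months since last eruption
      single = lambda t, a, tau, c: a * np.exp(-t / tau) + c
      double = lambda t, a1, t1, a2, t2: a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2)

      # Synthetic inflation-volume-rate data (arbitrary units)
      truth = double(t, 5.0, 3.0, 1.5, 24.0)
      data = truth + 0.1 * np.random.default_rng(5).standard_normal(t.size)

      p_s, _ = curve_fit(single, t, data, p0=(5, 5, 0.5))
      p_d, _ = curve_fit(double, t, data, p0=(4, 2, 1, 20))

      for name, model, p in [("single+const", single, p_s), ("double-exp", double, p_d)]:
          rms = np.sqrt(np.mean((data - model(t, *p)) ** 2))
          print(f"{name:12s} RMS misfit = {rms:.3f}")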

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nitao, J J

    The goal of the Event Reconstruction Project is to find the location and strength of atmospheric release points, both stationary and moving. Source inversion relies on observational data as input, and the methodology is sufficiently general to allow various forms of data. In this report, the authors focus primarily on concentration measurements obtained at point monitoring locations at various times. The algorithms being investigated in the Project are Markov Chain Monte Carlo (MCMC) and Sequential Monte Carlo (SMC) methods, classical inversion methods, and hybrids of these; the reader is referred to the report by Johannesson et al. (2004) for explanations of these methods. These methods require computing the concentrations at all monitoring locations for a given "proposed" source characteristic (location and strength history). It is anticipated that the largest portion of the CPU time will be spent performing this computation, and MCMC and SMC require it to be done at least tens of thousands of times. Therefore, an efficient means of computing forward model predictions is important to making the inversion practical. In this report they show how Green's functions and reciprocal Green's functions can significantly accelerate forward model computations. First, instead of computing a plume for each possible source strength history, they compute plumes from unit impulse sources only. By linear superposition, they can then obtain the response for any strength history; this response is given by the forward Green's function. Second, they may use the law of reciprocity. Suppose that they require the concentration at a single monitoring point x_m due to a potential (unit impulse) source located at x_s. Instead of computing a plume with source location x_s, they compute a "reciprocal plume" whose (unit impulse) source is at the monitoring location x_m. The reciprocal plume is computed using a reversed-direction wind field; the wind field and transport coefficients must also be appropriately time-reversed. Reciprocity says that the concentration of the reciprocal plume at x_s is related to the desired concentration at x_m. Since there are far fewer monitoring points than potential source locations, the number of forward model computations is drastically reduced.
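
    The superposition step is a one-line convolution once the unit-impulse response is in hand. A minimal sketch, with a made-up impulse response standing in for a dispersion-model run:

      import numpy as np

      # Unit-impulse plume response at one monitor (the forward Green's function);
      # in practice this comes from one dispersion run per monitor via reciprocity.
      dt = 60.0                                  # s
      g = np.exp(-np.arange(120) / 20.0)         # placeholder impulse response
      g /= g.sum() * dt

      # Any proposed strength history is then a convolution, not a new plume run:
      q = np.zeros(120)
      q[10:40] = 2.5                             # kg/s release between steps 10-40
      c = np.convolve(g, q)[:120] * dt           # concentration time series

      print("peak concentration %.3f at step %d" % (c.max(), c.argmax()))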

  6. OpenNFT: An open-source Python/Matlab framework for real-time fMRI neurofeedback training based on activity, connectivity and multivariate pattern analysis.

    PubMed

    Koush, Yury; Ashburner, John; Prilepin, Evgeny; Sladky, Ronald; Zeidman, Peter; Bibikov, Sergei; Scharnowski, Frank; Nikonorov, Artem; De Ville, Dimitri Van

    2017-08-01

    Neurofeedback based on real-time functional magnetic resonance imaging (rt-fMRI) is a novel and rapidly developing research field. It allows for training of voluntary control over localized brain activity and connectivity and has demonstrated promising clinical applications. Because of the rapid technical developments of MRI techniques and the availability of high-performance computing, new methodological advances in rt-fMRI neurofeedback become possible. Here we outline the core components of a novel open-source neurofeedback framework, termed Open NeuroFeedback Training (OpenNFT), which efficiently integrates these new developments. This framework is implemented using Python and Matlab source code to allow for diverse functionality, high modularity, and rapid extendibility of the software depending on the user's needs. In addition, it provides an easy interface to the functionality of Statistical Parametric Mapping (SPM) that is also open-source and one of the most widely used fMRI data analysis software. We demonstrate the functionality of our new framework by describing case studies that include neurofeedback protocols based on brain activity levels, effective connectivity models, and pattern classification approaches. This open-source initiative provides a suitable framework to actively engage in the development of novel neurofeedback approaches, so that local methodological developments can be easily made accessible to a wider range of users.

  7. Generalized slow roll in the unified effective field theory of inflation

    NASA Astrophysics Data System (ADS)

    Motohashi, Hayato; Hu, Wayne

    2017-07-01

    We provide a compact and unified treatment of power spectrum observables for the effective field theory (EFT) of inflation with the complete set of operators that lead to second-order equations of motion in metric perturbations in both space and time derivatives, including Horndeski and Gleyzes-Langlois-Piazza-Vernizzi theories. We relate the EFT operators in ADM form to the four additional free functions of time in the scalar and tensor equations. Using the generalized slow-roll formalism, we show that each power spectrum can be described by an integral over a single source that is a function of its respective sound horizon. With this correspondence, existing model independent constraints on the source function can be simply reinterpreted in the more general inflationary context. By expanding these sources around an optimized freeze-out epoch, we also provide characterizations of these spectra in terms of five slow-roll hierarchies whose leading-order forms are compact and accurate as long as EFT coefficients vary only on time scales greater than an e-fold. We also clarify the relationship between the unitary gauge observables employed in the EFT and the comoving gauge observables of the postinflationary universe.

  8. Range Safety for an Autonomous Flight Safety System

    NASA Technical Reports Server (NTRS)

    Lanzi, Raymond J.; Simpson, James C.

    2010-01-01

    The Range Safety Algorithm software encapsulates the various constructs and algorithms required to accomplish Time Space Position Information (TSPI) data management from multiple tracking sources, autonomous mission mode detection and management, and flight-termination mission rule evaluation. The software evaluates various user-configurable rule sets that govern the qualification of TSPI data sources, provides a prelaunch autonomous hold-launch function, performs the flight-monitoring-and-termination functions, and performs end-of-mission safing.

  9. A Decision Model for Merging Base Operations: Outsourcing Pest Management on Joint Base Anacostia-Bolling

    DTIC Science & Technology

    2011-11-30

    When the choice to in-source or outsource an installation function or service requirement exists, in these challenging economic times, it is now more... decision uncertainties.

  10. Determination of acoustical transfer functions using an impulse method

    NASA Astrophysics Data System (ADS)

    MacPherson, J.

    1985-02-01

    The transfer function of a system may be defined as the relationship of the output response to the input. While recent advances in digital processing systems have enabled impulse transfer functions to be determined by computation of the Fast Fourier Transform, little work has been done in applying these techniques to room acoustics. Acoustical transfer functions have been determined for auditoria using an impulse method. The technique is based on the computation of the Fast Fourier Transform (FFT) of a non-ideal impulsive source, both at the source and at the receiver point. The Impulse Transfer Function (ITF) is obtained by dividing the FFT at the receiver position by the FFT of the source. This quantity is presented both as linear-frequency-scale plots and as synthesized one-third octave band data. The technique enables a considerable quantity of data to be obtained from a small number of impulsive signals recorded in the field, thereby minimizing the time and effort required on site. As the characteristics of the source are taken into account in the calculation, the choice of impulsive source is non-critical. The digital analysis equipment required is readily available commercially.
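
    A minimal sketch of the FFT-division step, with a toy echo sequence in place of a measured room response. The small "water level" added to the denominator is a stabilizing assumption, not part of the original method description; it keeps the division well behaved where the source spectrum has little energy.

      import numpy as np

      rng = np.random.default_rng(6)
      fs, n = 8000, 4096

      # Non-ideal impulsive source and a toy "room": three delayed echoes
      src = np.zeros(n)
      src[:32] = rng.standard_normal(32) * np.hanning(32) * 2   # imperfect impulse
      h = np.zeros(n)
      for delay, amp in [(0.010, 1.0), (0.023, 0.6), (0.041, 0.35)]:
          h[int(delay * fs)] = amp
      rec = np.convolve(src, h)[:n]                             # receiver recording

      # ITF = FFT(receiver) / FFT(source), stabilized by a water level
      S, R = np.fft.rfft(src), np.fft.rfft(rec)
      eps = 1e-3 * np.abs(S).max()
      itf = np.fft.irfft(R * np.conj(S) / (np.abs(S) ** 2 + eps ** 2), n)

      peaks = np.argsort(itf)[-3:] / fs
      print("recovered echo times (s):", np.sort(peaks))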

  11. Azimuthal Dependence of the Ground Motion Variability from Scenario Modeling of the 2014 Mw6.0 South Napa, California, Earthquake Using an Advanced Kinematic Source Model

    NASA Astrophysics Data System (ADS)

    Gallovič, F.

    2017-09-01

    Strong ground motion simulations require a physically plausible earthquake source model. Here, I present the application of such a kinematic model, introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay over the full frequency range. The source is composed of randomly distributed overlapping subsources with a fractal number-size distribution. The positions of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions), or can be completely random. From the earthquake physics point of view, the model includes a positive correlation between slip and rise time, as found in dynamic source simulations. Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip rate functions, not requiring any stochastic Green's functions. The source model has previously been validated against the observed data from the very shallow unilateral 2014 Mw6 South Napa, California, earthquake; the model reproduces the observed data well, including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is shown here on scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability. I propose a simple model reproducing the azimuthal variations of the between-event ground motion variability, providing insight into a possible refinement of GMPEs' functional forms.
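
    The fractal number-size distribution can be sampled by inverse transform. The fractal dimension D = 2 and the radius bounds below are common choices for composite source models, assumed here rather than taken from the paper.

      import numpy as np

      rng = np.random.default_rng(7)

      def sample_subsource_radii(n, r_min, r_max, D=2.0):
          """Draw radii from N(>r) ~ r^-D truncated to [r_min, r_max]."""
          u = rng.random(n)
          a, b = r_min ** -D, r_max ** -D
          return (a - u * (a - b)) ** (-1.0 / D)

      radii = sample_subsource_radii(500, r_min=0.2, r_max=10.0)   # km
      print("largest subsources (km):", np.sort(radii)[-3:])
      print("fraction below 1 km   : %.2f" % np.mean(radii < 1.0))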

  12. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    NASA Astrophysics Data System (ADS)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source, in terms of its source characteristics, is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics, namely location, strength and release period, are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs, and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one, and hence reduces the complexity of the optimization model. The results show that the proposed linked ANN-Optimization model is able to predict the source parameters accurately for error-free data. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. The mean values predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.

  13. Software Modules for the Proximity-1 Space Link Interleaved Time Synchronization (PITS) Protocol

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Veregge, John R.; Gao, Jay L.; Clare, Loren P.; Mills, David

    2012-01-01

    The Proximity-1 Space Link Interleaved Time Synchronization (PITS) protocol provides time distribution and synchronization services for space systems. A software prototype implementation of the PITS algorithm has been developed that also provides the test harness to evaluate the key functionalities of PITS with a simulated data source and sink. PITS integrates time synchronization functionality into the link layer of the CCSDS Proximity-1 Space Link Protocol. The software prototype implements the network packet format, data structures, and transmit- and receive-timestamp functions for a time server and a client. The software also simulates the transmit- and receive-timestamp exchanges between a time server and a time client via UDP (User Datagram Protocol) sockets, and produces relative time offsets and delay estimates.
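
    The offset/delay arithmetic for a two-way timestamp exchange follows the standard NTP-style form sketched below; PITS's actual packet formats and interleaving are not reproduced, so the four-timestamp calculation is a generic stand-in.

      # Generic two-way time-transfer arithmetic (NTP-style).
      # t1: client send, t2: server receive, t3: server send, t4: client
      # receive -- all in seconds, each read on its own local clock.

      def offset_and_delay(t1, t2, t3, t4):
          offset = ((t2 - t1) + (t3 - t4)) / 2.0   # server clock minus client clock
          delay = (t4 - t1) - (t3 - t2)            # round-trip path delay
          return offset, delay

      # Example: client clock is 0.125 s behind the server, one-way delay 0.4 s
      t1 = 100.000               # client transmit (client clock)
      t2 = 100.525               # server receive  (server clock) = t1 + 0.4 + 0.125
      t3 = 100.625               # server transmit (server clock)
      t4 = 100.900               # client receive  (client clock) = t3 + 0.4 - 0.125
      print(offset_and_delay(t1, t2, t3, t4))      # -> (0.125, 0.8)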

  14. Transient pressure analysis of fractured well in bi-zonal gas reservoirs

    NASA Astrophysics Data System (ADS)

    Zhao, Yu-Long; Zhang, Lie-Hui; Liu, Yong-hui; Hu, Shu-Yong; Liu, Qi-Guo

    2015-05-01

    For a hydraulically fractured well, evaluating the properties of the fracture and the formation is always a tough job, and conventional methods are complex to apply, especially for partially penetrating fractured wells. Although the source function is a very powerful tool for analyzing the transient pressure of wells with complex structure, corresponding reports for gas reservoirs are rare. In this paper, the continuous point source functions in anisotropic reservoirs are derived on the basis of source function theory, the Laplace transform method and the Duhamel principle. Applying a construction method, the continuous point source functions in a bi-zonal gas reservoir with closed upper and lower boundaries are obtained. Subsequently, the physical models and transient pressure solutions are developed for fully and partially penetrating fractured vertical wells in this reservoir. Type curves of dimensionless pseudo-pressure and its derivative as functions of dimensionless time are plotted using a numerical inversion algorithm, and the flow periods and sensitive factors are analyzed. The source functions and solutions for fractured wells have both theoretical and practical application in well test interpretation for such gas reservoirs, especially for wells with a stimulated reservoir volume created by massive hydraulic fracturing in unconventional gas reservoirs, which can often be described with the composite model.
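
    The abstract does not name the numerical inversion algorithm; the Gaver-Stehfest method below is a common choice for inverting Laplace-space pressure solutions in well-test analysis, shown here with a known transform pair as a sanity check.

      import numpy as np
      from math import factorial, log

      def stehfest_coefficients(N=12):
          """Gaver-Stehfest weights V_k for even N."""
          M = N // 2
          V = []
          for k in range(1, N + 1):
              s = 0.0
              for j in range((k + 1) // 2, min(k, M) + 1):
                  s += (j ** M * factorial(2 * j) /
                        (factorial(M - j) * factorial(j) * factorial(j - 1) *
                         factorial(k - j) * factorial(2 * j - k)))
              V.append((-1) ** (M + k) * s)
          return np.array(V)

      def stehfest_invert(F, t, N=12):
          """f(t) ~ (ln 2 / t) * sum_k V_k F(k ln 2 / t)."""
          V = stehfest_coefficients(N)
          vals = np.array([F(k * log(2) / t) for k in range(1, N + 1)])
          return log(2) / t * np.sum(V * vals)

      # Sanity check with F(s) = 1/s^2  <->  f(t) = t
      for t in (0.5, 1.0, 2.0):
          print(t, stehfest_invert(lambda s: 1.0 / s ** 2, t))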

  15. A phase coherence approach to identifying co-located earthquakes and tremor

    NASA Astrophysics Data System (ADS)

    Hawthorne, J. C.; Ampuero, J.-P.

    2018-05-01

    We present and use a phase coherence approach to identify seismic signals that have similar path effects but different source time functions: co-located earthquakes and tremor. The method used is a phase coherence-based implementation of empirical matched field processing, modified to suit tremor analysis. It works by comparing the frequency-domain phases of waveforms generated by two sources recorded at multiple stations. We first cross-correlate the records of the two sources at a single station. If the sources are co-located, this cross-correlation eliminates the phases of the Green's function. It leaves the relative phases of the source time functions, which should be the same across all stations so long as the spatial extent of the sources are small compared with the seismic wavelength. We therefore search for cross-correlation phases that are consistent across stations as an indication of co-located sources. We also introduce a method to obtain relative locations between the two sources, based on back-projection of interstation phase coherence. We apply this technique to analyse two tremor-like signals that are thought to be composed of a number of earthquakes. First, we analyse a 20 s long seismic precursor to a M 3.9 earthquake in central Alaska. The analysis locates the precursor to within 2 km of the mainshock, and it identifies several bursts of energy—potentially foreshocks or groups of foreshocks—within the precursor. Second, we examine several minutes of volcanic tremor prior to an eruption at Redoubt Volcano. We confirm that the tremor source is located close to repeating earthquakes identified earlier in the tremor sequence. The amplitude of the tremor diminishes about 30 s before the eruption, but the phase coherence results suggest that the tremor may persist at some level through this final interval.
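
    The core of the method can be sketched numerically: a single-station cross-correlation of two co-located sources cancels the path phase, leaving a relative source phase that should be consistent across stations. Random source time functions and random station-specific path filters stand in for real records.

      import numpy as np

      rng = np.random.default_rng(8)
      n_sta, n_t = 10, 1024

      # Two co-located sources: every station applies the *same* path filter
      # to both source time functions.
      stf1, stf2 = rng.standard_normal(n_t), rng.standard_normal(n_t)
      records1, records2 = [], []
      for s in range(n_sta):
          path = rng.standard_normal(64)             # station-specific path filter
          records1.append(np.convolve(stf1, path)[:n_t])
          records2.append(np.convolve(stf2, path)[:n_t])

      # Single-station cross-correlation cancels the path phase; what remains
      # is the relative source phase, identical across stations.
      phases = []
      for u1, u2 in zip(records1, records2):
          C = np.fft.rfft(u1) * np.conj(np.fft.rfft(u2))
          phases.append(np.exp(1j * np.angle(C)))
      coherence = np.abs(np.mean(phases, axis=0))    # 1 = perfectly consistent

      print("median interstation phase coherence: %.2f" % np.median(coherence))

    For sources that are not co-located, the path phases differ from station to station and the coherence drops toward the random-phase level of about 1/sqrt(n_sta).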

  16. Uncertainties for seismic moment tensors and applications to nuclear explosions, volcanic events, and earthquakes

    NASA Astrophysics Data System (ADS)

    Tape, C.; Alvizuri, C. R.; Silwal, V.; Tape, W.

    2017-12-01

    When considered as a point source, a seismic source can be characterized in terms of its origin time, hypocenter, moment tensor, and source time function. The seismologist's task is to estimate these parameters--and their uncertainties--from three-component ground motion recorded at irregularly spaced stations. We will focus on one portion of this problem: the estimation of the moment tensor and its uncertainties. With magnitude estimated separately, we are left with five parameters describing the normalized moment tensor. A lune of normalized eigenvalue triples can be used to visualize the two parameters (lune longitude and lune latitude) describing the source type, while the conventional strike, dip, and rake angles can be used to characterize the orientation. Slight modifications of these five parameters lead to a uniform parameterization of moment tensors--uniform in the sense that equal volumes in the coordinate domain of the parameterization correspond to equal volumes of moment tensors. For a moment tensor m that we have inferred from seismic data for an earthquake, we define P(V) to be the probability that the true moment tensor for the earthquake lies in the neighborhood of m that has fractional volume V. The average value of P(V) is then a measure of our confidence in our inference of m. The calculation of P(V) requires knowing both the probability P(w) and the fractional volume V(w) of the set of moment tensors within a given angular radius w of m. We apply this approach to several different data sets, including nuclear explosions from the Nevada Test Site, volcanic events from Uturuncu (Bolivia), and earthquakes. Several challenges remain: choosing an appropriate misfit function, handling time shifts between data and synthetic waveforms, and extending the uncertainty estimation to include more source parameters (e.g., hypocenter and source time function).

  17. Temporal and spatial heterogeneity of rupture process application in shakemaps of Yushu Ms7.1 earthquake, China

    NASA Astrophysics Data System (ADS)

    Kun, C.

    2015-12-01

    Studies have shown that ground motion parameters estimated from ground motion attenuation relationships are often greater than the observed values, mainly because the multiple ruptures of a big earthquake reduce the pulse height of the source time function. In the absence of real-time station data after an earthquake, this paper attempts to impose constraints from the source to improve the accuracy of shakemaps. The causative fault of the Yushu Ms 7.1 earthquake is approximately vertical (dip 83°), and the source process was distinctly dispersed in time and space. The main shock of the Yushu Ms 7.1 earthquake can be divided into several sub-events based on the source process of the earthquake. The magnitude of each sub-event depends on the area under its pulse in the source time function, and its location is derived from the source process. We use the ShakeMap method, with site effects taken into account, to generate a shakemap for each sub-event. Finally, the shakemap of the mainshock is obtained by superposing the shakemaps of all sub-events in space. Shakemaps based on the surface rupture of the causative fault from field surveys can also be derived for the mainshock with a single magnitude. We compare the shakemaps of both methods with the intensity from field investigation. The comparisons show that the decomposition of the main shock more accurately reflects the shaking in the near field; in the far field, however, where the shaking is controlled by the weakening influence of the source, the estimated intensity VI area was smaller than that of the actual investigation. Seismic intensity in the far field may be related to the increased shaking duration of the two events. In general, the decomposition of the main shock based on the source process, with a shakemap for each sub-event, is feasible for disaster emergency response, decision-making and rapid disaster assessment after an earthquake.

  18. Backward semi-linear parabolic equations with time-dependent coefficients and local Lipschitz source

    NASA Astrophysics Data System (ADS)

    Nho Hào, Dinh; Van Duc, Nguyen; Van Thang, Nguyen

    2018-05-01

    Let H be a Hilbert space with inner product ⟨·,·⟩ and norm ‖·‖, and let A(t) be a positive self-adjoint unbounded time-dependent operator on H. We establish stability estimates of Hölder type and propose a regularization method with error estimates of Hölder type for the ill-posed backward semi-linear parabolic equation with a source function f satisfying a local Lipschitz condition.

  19. Solution of the equation of heat conduction with time dependent sources: Programmed application to planetary thermal history

    NASA Technical Reports Server (NTRS)

    Conel, J. E.

    1975-01-01

    A computer program (Program SPHERE) solving the inhomogeneous equation of heat conduction with a radiation boundary condition on a thermally homogeneous sphere is described. The source terms are taken to be exponential functions of time. Thermal properties are independent of temperature. The solutions are appropriate for studying certain classes of planetary thermal history. Special application to the moon is discussed.

  20. Emission source functions in heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Shapoval, V. M.; Sinyukov, Yu. M.; Karpenko, Iu. A.

    2013-12-01

    Three-dimensional pion and kaon emission source functions are extracted from hydrokinetic model (HKM) simulations of central Au+Au collisions at the top Relativistic Heavy Ion Collider (RHIC) energy √sNN = 200 GeV. The model describes well the experimental data, previously obtained by the PHENIX and STAR collaborations using the imaging technique. In particular, the HKM reproduces the non-Gaussian heavy tails of the source function in the pair transverse momentum (out) and beam (long) directions, observed in the pion case and practically absent for kaons. The role of rescatterings and long-lived resonance decays in forming the mentioned long-range tails is investigated. The particle rescattering contribution to the out tail seems to be dominating. The model calculations also show substantial relative emission times between pions (with mean value 13 fm/c in the longitudinally comoving system), including those coming from resonance decays and rescatterings. A prediction is made for the source functions in Large Hadron Collider (LHC) Pb+Pb collisions at √sNN = 2.76 TeV, which are still not extracted from the measured correlation functions.

  1. Green’s functions for a volume source in an elastic half-space

    PubMed Central

    Zabolotskaya, Evgenia A.; Ilinskii, Yurii A.; Hay, Todd A.; Hamilton, Mark F.

    2012-01-01

    Green’s functions are derived for elastic waves generated by a volume source in a homogeneous isotropic half-space. The context is sources at shallow burial depths, for which surface (Rayleigh) and bulk waves, both longitudinal and transverse, can be generated with comparable magnitudes. Two approaches are followed. First, the Green’s function is expanded with respect to eigenmodes that correspond to Rayleigh waves. While bulk waves are thus ignored, this approximation is valid on the surface far from the source, where the Rayleigh wave modes dominate. The second approach employs an angular spectrum that accounts for the bulk waves and yields a solution that may be separated into two terms. One is associated with bulk waves, the other with Rayleigh waves. The latter is proved to be identical to the Green’s function obtained following the first approach. The Green’s function obtained via angular spectrum decomposition is analyzed numerically in the time domain for different burial depths and distances to the receiver, and for parameters relevant to seismo-acoustic detection of land mines and other buried objects. PMID:22423682

  2. Optical frequency switching scheme for a high-speed broadband THz measurement system based on the photomixing technique.

    PubMed

    Song, Hajun; Hwang, Sejin; Song, Jong-In

    2017-05-15

    This study presents an optical frequency switching scheme for a high-speed broadband terahertz (THz) measurement system based on the photomixing technique. The proposed system can achieve high-speed broadband THz measurements using narrow optical frequency scanning of a tunable laser source combined with a wavelength-switchable laser source. In addition, this scheme can provide a larger output power of an individual THz signal compared with that of a multi-mode THz signal generated by multiple CW laser sources. A swept-source THz tomography system implemented with a two-channel wavelength-switchable laser source achieves a reduced time for acquisition of a point spread function and a higher depth resolution in the same amount of measurement time compared with a system with a single optical source.

  3. Towards an accurate real-time locator of infrasonic sources

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.

    2017-11-01

    Infrasonic signals propagate from an atmospheric source through media with stochastic, rapidly space-varying conditions. Hence, their travel times, their amplitudes at the sensors and even their manifestation in the so-called "shadow zones" are random. The traditional least-squares technique for locating infrasonic sources is therefore often ineffective, and the problem must be formulated in probabilistic terms. Recently, a series of papers has been published on the Bayesian Infrasonic Source Localization (BISL) method, based on the computation of the posterior probability density function (PPDF) of the source location as a convolution of the a priori probability distribution function (APDF) of the propagation model parameters with the likelihood function (LF) of the observations. The present study is devoted to the further development of BISL toward higher accuracy and stability of the source location results and a lower computational load. We critically analyse previous algorithms and propose several new ones. First, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm can be among the most accurate, provided adequate APDF and LF are used. Then, we suggest using summation instead of integration in the general PPDF calculation for increased robustness, which leads to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. One, previously suggested but not yet properly exploited, is the so-called "celerity-range histogram" (CRH). The other is the outcome of previous findings of a linear mean travel time for the first four infrasonic phases in overlapping consecutive distance ranges. This stochastic model is extended here to a regional distance of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and the range-dependent probability distributions of the phase arrival time picks. To illustrate the improvements achieved in both computation time and location accuracy, we compare location results for the new algorithms, previously published BISL-type algorithms and the least-squares location technique. This comparison is provided via a case study of different typical spatial data distributions and a statistical experiment using a database of 36 ground-truth explosions from the Utah Test and Training Range (UTTR), recorded during the US summer season at USArray transportable seismic stations located near the site between 2006 and 2008.
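
    A toy grid-based posterior of this kind, with summation over a discrete celerity prior standing in for the celerity-range histograms and a crude maximization over origin time, might look as follows; all geometry, weights and noise levels are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(9)

      # Stations (km) and arrival picks from a hidden source at (40, 25), t0 = 0
      sta = np.array([[0, 0], [100, 10], [30, 90], [80, 70.]])
      src_true, c_true = np.array([40.0, 25.0]), 0.30          # celerity km/s
      t_obs = np.linalg.norm(sta - src_true, axis=1) / c_true + \
              rng.normal(0, 2.0, len(sta))                     # 2 s pick noise

      # Discrete celerity prior (schematic weights)
      cel = np.array([0.34, 0.30, 0.25])
      w = np.array([0.5, 0.35, 0.15])

      xs = ys = np.arange(0, 101, 2.0)
      sigma = 3.0                                              # pick std (s)
      post = np.zeros((len(xs), len(ys)))
      for i, x in enumerate(xs):
          for j, y in enumerate(ys):
              d = np.hypot(sta[:, 0] - x, sta[:, 1] - y)
              best = -np.inf
              for t0 in np.arange(-20, 21, 2.0):
                  # likelihood = product over stations of a celerity mixture
                  lik = np.prod(np.sum(
                      w * np.exp(-0.5 * ((t_obs[:, None] - t0 - d[:, None] / cel)
                                         / sigma) ** 2), axis=1))
                  best = max(best, lik)
              post[i, j] = best
      i, j = np.unravel_index(post.argmax(), post.shape)
      print("MAP source location: (%.0f, %.0f) km, true (40, 25)" % (xs[i], ys[j]))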

  4. Studying Regional Wave Source Time Functions Using the Empirical Green's Function Method: Application to Central Asia

    NASA Astrophysics Data System (ADS)

    Xie, J.; Schaff, D. P.; Chen, Y.; Schult, F.

    2013-12-01

    Reliably estimated source time functions (STFs) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection and discrimination, and the minimization of parameter trade-offs in attenuation studies. We have searched for candidate pairs of larger and smaller earthquakes in and around China that share the same focal mechanism but differ significantly in magnitude, so that the empirical Green's function (EGF) method can be applied to study the STFs of the larger events. We conducted about a million deconvolutions using waveforms from 925 earthquakes, and screened the deconvolved traces to exclude those from event pairs that involved different mechanisms. Only 2,700 traces passed this screening and could be further analyzed using the EGF method. We have developed a series of codes for speeding up the final EGF analysis by implementing automation and graphical user interface procedures. The codes have been fully tested with a subset of the screened data, and we are currently applying them to all of it. We will present a large number of deconvolved STFs retrieved using various phases (Lg, Pn, Sn, Pg and coda), with information on any directivities, any possible dependence of pulse durations on wave type, scaling relations between pulse durations and event size, and estimated source static stress drops.

  5. Narrow band noise response of a Belleville spring resonator.

    PubMed

    Lyon, Richard H

    2013-09-01

    This study of nonlinear dynamics includes (i) an identification of quasi-steady states of response using equivalent linearization, (ii) the temporal simulation of the system using Heun's time step procedure on time domain analytic signals, and (iii) a laboratory experiment. An attempt has been made to select material and measurement parameters so that nearly the same systems are used and analyzed for all three parts of the study. This study illustrates important features of nonlinear response to narrow band excitation: (a) states of response that the system can acquire with transitions of the system between those states, (b) the interaction between the noise source and the vibrating load in which the source transmits energy to or draws energy from the load as transitions occur; (c) the lag or lead of the system response relative to the source as transitions occur that causes the average frequencies of source and response to differ; and (d) the determination of the state of response (mass or stiffness controlled) by observation of the instantaneous phase of the influence function. These analyses take advantage of the use of time domain analytic signals that have a complementary role to functions that are analytic in the frequency domain.

  6. Determine Earthquake Rupture Directivity Using Taiwan TSMIP Strong Motion Waveforms

    NASA Astrophysics Data System (ADS)

    Chang, Kaiwen; Chi, Wu-Cheng; Lai, Ying-Ju; Gung, YuanCheng

    2013-04-01

    Inverting seismic waveforms for finite-fault source parameters is important for studying the physics of earthquake rupture processes. It is also significant for imaging seismogenic structures in urban areas. Here we analyze the finite-source process and test for the causative fault plane using accelerograms recorded by Taiwan Strong-Motion Instrumentation Program (TSMIP) stations. The point source parameters of the mainshock and aftershocks were first obtained by complete-waveform moment tensor inversions. We then used the seismograms generated by the aftershocks as empirical Green's functions (EGFs) to retrieve the apparent source time functions (ASTFs) at near-field stations using the projected Landweber deconvolution approach. The method for identifying the fault plane relies on the spatial patterns of the apparent source time function durations, which depend on the angle between the rupture direction and the take-off angle and azimuth of the ray. These derived duration patterns are then compared with theoretical patterns, which are functions of the focal depth, epicentral distance, average crustal 1D velocity, fault plane attitude, and rupture direction on the fault plane. As a result, the ASTFs derived from EGFs can be used to infer the ruptured fault plane and the rupture direction. Finally, we used part of the catalog to study important seismogenic structures near Chiayi, Taiwan, where a damaging earthquake occurred about a century ago. Preliminary results show that a strike-slip earthquake on 22 October 1999 (Mw 5.6) ruptured unilaterally toward the SSW on a sub-vertical fault. The procedure developed in this study can be applied to strong motion waveforms recorded from other earthquakes to better understand their kinematic source parameters.

  7. Particle swarm optimization and its application in MEG source localization using single time sliced data

    NASA Astrophysics Data System (ADS)

    Lin, Juan; Liu, Chenglian; Guo, Yongning

    2014-10-01

    The estimation of neural active sources from magnetoencephalography (MEG) data is a critical issue for both clinical neurology and brain function research. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Source depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness in finding the global optimum at different depths in the brain when using the single equivalent current dipole (sECD) model and single time-sliced data. The results show that PSO is an effective global optimizer for MEG source localization when given one dipole at different depths.
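
    A minimal global-best PSO on a toy dipole-localization problem is sketched below; the ring of sensors and the 1/distance forward model are gross simplifications standing in for a real MEG leadfield.

      import numpy as np

      rng = np.random.default_rng(10)

      # Toy MEG-like forward model: sensors on a ring, field ~ 1 / distance
      # to a single equivalent current dipole position
      ang = np.linspace(0, 2 * np.pi, 32, endpoint=False)
      sensors = np.c_[10 * np.cos(ang), 10 * np.sin(ang), np.zeros(32)]
      forward = lambda p: 1.0 / np.linalg.norm(sensors - p, axis=1)

      true_dipole = np.array([2.0, -1.0, -4.0])       # a "deep" source
      data = forward(true_dipole)
      misfit = lambda p: np.sum((forward(p) - data) ** 2)

      # Standard global-best PSO (inertia w, cognitive c1, social c2)
      n, dim, w, c1, c2 = 30, 3, 0.72, 1.5, 1.5
      pos = rng.uniform(-8, 8, (n, dim))
      vel = np.zeros((n, dim))
      pbest, pbest_f = pos.copy(), np.array([misfit(p) for p in pos])
      gbest = pbest[pbest_f.argmin()].copy()

      for it in range(200):
          r1, r2 = rng.random((n, dim)), rng.random((n, dim))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          f = np.array([misfit(p) for p in pos])
          improved = f < pbest_f
          pbest[improved], pbest_f[improved] = pos[improved], f[improved]
          gbest = pbest[pbest_f.argmin()].copy()

      print("PSO dipole estimate:", np.round(gbest, 2), " true:", true_dipole)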

  8. Prenatal choline supplementation increases sensitivity to time by reducing non-scalar sources of variance in adult temporal processing

    PubMed Central

    Cheng, Ruey-Kuang; Meck, Warren H.

    2009-01-01

    Choline supplementation of the maternal diet has a long-term facilitative effect on timing and temporal memory of the offspring. To further delineate the impact of early nutritional status on interval timing, we examined effects of prenatal-choline supplementation on the temporal sensitivity of adult (6 mo) male rats. Rats that were given sufficient choline in their chow (CON: 1.1 g/kg) or supplemental choline added to their drinking water (SUP: 3.5 g/kg) during embryonic days (ED) 12–17 were trained with a peak-interval procedure that was shifted among 75%, 50%, and 25% probabilities of reinforcement with transitions from 18 s → 36 s → 72 s temporal criteria. Prenatal-choline supplementation systematically sharpened interval-timing functions by reducing the associative/non-temporal response enhancing effects of reinforcement probability on the Start response threshold, thereby reducing non-scalar sources of variance in the left-hand portion of the Gaussian-shaped response functions. No effect was observed for the Stop response threshold as a function of any of these manipulations. In addition, independence of peak time and peak rate was demonstrated as a function of reinforcement probability for both prenatal-choline supplemented and control rats. Overall, these results suggest that prenatal-choline supplementation facilitates timing by reducing impulsive responding early in the interval, thereby improving the superimposition of peak functions for different temporal criteria. PMID:17996223

  9. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory

    NASA Astrophysics Data System (ADS)

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-01

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  10. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory.

    PubMed

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-13

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  11. Model space exploration for determining landslide source history from long period seismic data

    NASA Astrophysics Data System (ADS)

    Zhao, Juan; Mangeney, Anne; Stutzmann, Eléonore; Capdeville, Yann; Moretti, Laurent; Calder, Eliza S.; Smith, Patrick J.; Cole, Paul; Le Friant, Anne

    2013-04-01

    The seismic signals generated by high-magnitude landslide events can be recorded at remote stations, which provides access to the landslide process. During the "Boxing Day" eruption at Montserrat in 1997, the long-period seismic signals generated by the debris avalanche were recorded by two stations at distances of 450 km and 1261 km. We investigate the landslide process under the assumption that the landslide source can be described by single forces. The period band 25-50 s is selected, for which the landslide signal is clearly visible at the two stations. We first use the transverse component of the closest station to determine the horizontal forces. We model the seismogram by normal-mode summation and investigate the model space. Two horizontal forces are found that best fit the data. These two horizontal forces have similar amplitude but opposite direction, and they are separated in time by 70 s. The radiation pattern of the transverse component does not allow us to determine the exact azimuth of these forces. We then model the vertical component of the seismograms, which allows us to retrieve both the vertical and horizontal forces. Using the parameters previously determined (amplitude ratio and time shift of the two horizontal forces), we further investigate the model space and show that a single vertical force together with the two horizontal forces fits the data. The complete source time function can be described as follows: a horizontal force directed opposite to the landslide flow is followed 40 s later by a vertical downward force, and 30 s after that by a horizontal force in the direction of the flow. Directly inverting the seismograms in the 25-50 s period band retrieves a source time function that is consistent with the three forces determined previously. The source time function in this narrow period band alone does not easily allow recovery of the corresponding single forces. This method can be used to determine the source parameters using only two distant stations. It is also successfully tested on the Mount St. Helens (1980) event, which was recorded by more broadband stations.
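
    The force balance behind such single-force models can be illustrated with a toy source time function. The sketch below builds the three-pulse history described above; the amplitudes, pulse widths, and the Gaussian pulse shape are assumed for illustration, not inverted values. It checks that the horizontal impulse integrates to approximately zero, as required when a sliding mass first accelerates and later decelerates on the Earth.

    ```python
    import numpy as np

    # Toy single-force history of the type described: a horizontal force opposite
    # to the flow, a vertical downward force 40 s later, and a horizontal force
    # toward the flow 30 s after that. All amplitudes/widths are assumed.
    t = np.linspace(0.0, 150.0, 3001)
    dt = t[1] - t[0]

    def pulse(t0, width, amp):
        """Smooth (Gaussian) force pulse centred at time t0."""
        return amp * np.exp(-0.5 * ((t - t0) / width) ** 2)

    f_h = pulse(30.0, 8.0, -1.0) + pulse(100.0, 8.0, +1.0)  # opposite, then along flow
    f_v = pulse(70.0, 8.0, -0.8)                            # vertical, downward

    # Horizontal momentum bookkeeping: the impulse imparted while the mass
    # accelerates is returned while it decelerates, so the integral is ~0.
    print("net horizontal impulse:", np.sum(f_h) * dt)
    ```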

  12. Gaussian process based independent analysis for temporal source separation in fMRI.

    PubMed

    Hald, Ditte Høvenhoff; Henao, Ricardo; Winther, Ole

    2017-05-15

    Functional Magnetic Resonance Imaging (fMRI) gives us a unique insight into the processes of the brain, and opens up the possibility of analyzing the functional activation patterns of the underlying sources. Task-inferred supervised learning with restrictive assumptions in the regression set-up restricts the exploratory nature of the analysis. Fully unsupervised independent component analysis (ICA) algorithms, on the other hand, can struggle to detect clear classifiable components on single-subject data. We attribute this shortcoming to inadequate modeling of the fMRI source signals by failing to incorporate their temporal nature. fMRI source signals, biological stimuli and non-stimuli-related artifacts are all smooth over a time-scale compatible with the sampling time (TR). We therefore propose Gaussian process ICA (GPICA), which facilitates temporal dependency by the use of Gaussian process source priors. On two fMRI data sets with different sampling frequency, we show that the GPICA-inferred temporal components and associated spatial maps allow for a more definite interpretation than standard temporal ICA methods. The temporal structures of the sources are controlled by the covariance of the Gaussian process, specified by a kernel function with an interpretable and controllable temporal length scale parameter. We propose a hierarchical model specification, considering both instantaneous and convolutive mixing, and we infer source spatial maps, temporal patterns and temporal length scale parameters by Markov Chain Monte Carlo. A companion implementation made as a plug-in for SPM can be downloaded from https://github.com/dittehald/GPICA. Copyright © 2017 Elsevier Inc. All rights reserved.
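
    The role of the kernel length scale is easy to demonstrate. The sketch below is not taken from the GPICA code; the squared-exponential kernel choice, the TR, and the length-scale values are assumptions for illustration. It draws smooth temporal sources from a zero-mean Gaussian process, showing how the length scale controls how slowly each source varies.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Time axis: 200 volumes at TR = 2 s (illustrative values).
    t = np.arange(200) * 2.0

    def rbf_kernel(t, length_scale, var=1.0):
        """Squared-exponential covariance; length_scale sets temporal smoothness."""
        d = t[:, None] - t[None, :]
        return var * np.exp(-0.5 * (d / length_scale) ** 2)

    # Draw one smooth "source" per length scale; jitter keeps Cholesky stable.
    for ell in (4.0, 20.0, 60.0):
        K = rbf_kernel(t, ell) + 1e-8 * np.eye(t.size)
        source = np.linalg.cholesky(K) @ rng.normal(size=t.size)
        print(f"length scale {ell:5.1f} s -> sample std {source.std():.2f}")
    ```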

  13. The GALEX Time Domain Survey. I. Selection and Classification of Over a Thousand Ultraviolet Variable Sources

    NASA Astrophysics Data System (ADS)

    Gezari, S.; Martin, D. C.; Forster, K.; Neill, J. D.; Huber, M.; Heckman, T.; Bianchi, L.; Morrissey, P.; Neff, S. G.; Seibert, M.; Schiminovich, D.; Wyder, T. K.; Burgett, W. S.; Chambers, K. C.; Kaiser, N.; Magnier, E. A.; Price, P. A.; Tonry, J. L.

    2013-03-01

    We present the selection and classification of over a thousand ultraviolet (UV) variable sources discovered in ~40 deg² of GALEX Time Domain Survey (TDS) NUV images observed with a cadence of 2 days and a baseline of observations of ~3 years. The GALEX TDS fields were designed to be in spatial and temporal coordination with the Pan-STARRS1 Medium Deep Survey, which provides deep optical imaging and simultaneous optical transient detections via image differencing. We characterize the GALEX photometric errors empirically as a function of mean magnitude, and select sources that vary at the 5σ level in at least one epoch. We measure the statistical properties of the UV variability, including the structure function on timescales of days and years. We report classifications for the GALEX TDS sample using a combination of optical host colors and morphology, UV light curve characteristics, and matches to archival X-ray and spectroscopy catalogs. We classify 62% of the sources as active galaxies (358 quasars and 305 active galactic nuclei), and 10% as variable stars (including 37 RR Lyrae, 53 M dwarf flare stars, and 2 cataclysmic variables). We detect a large-amplitude tail in the UV variability distribution for M-dwarf flare stars and RR Lyrae, reaching up to |Δm| = 4.6 mag and 2.9 mag, respectively. The mean amplitude of the structure function for quasars on year timescales is five times larger than observed at optical wavelengths. The remaining unclassified sources include UV-bright extragalactic transients, two of which have been spectroscopically confirmed to be a young core-collapse supernova and a flare from the tidal disruption of a star by a dormant supermassive black hole. We calculate a surface density for variable sources in the UV with NUV < 23 mag and |Δm| > 0.2 mag of ~8.0, 7.7, and 1.8 deg⁻² for quasars, active galactic nuclei, and RR Lyrae stars, respectively. We also calculate a surface density rate in the UV for transient sources, using the effective survey time at the cadence appropriate to each class, of ~15 and 52 deg⁻² yr⁻¹ for M dwarfs and extragalactic transients, respectively.
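
    A first-order structure function of the kind quoted above can be computed in a few lines. In this sketch the definition (mean absolute magnitude difference per logarithmic lag bin) and the random-walk test light curve are assumptions for illustration; conventions differ between papers.

    ```python
    import numpy as np

    def structure_function(t, m, bins):
        """First-order structure function: mean |m_i - m_j| binned by lag |t_i - t_j|."""
        i, j = np.triu_indices(t.size, k=1)
        lag = np.abs(t[i] - t[j])
        dm = np.abs(m[i] - m[j])
        idx = np.digitize(lag, bins)
        return np.array([dm[idx == k].mean() if np.any(idx == k) else np.nan
                         for k in range(1, bins.size)])

    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0, 1000, 300))          # epochs in days (illustrative)
    m = 0.2 * np.cumsum(rng.normal(size=t.size))    # random-walk "quasar" light curve
    m -= m.mean()
    bins = np.logspace(0, 3, 10)                    # lag bins: 1 day to ~3 years
    print(structure_function(t, m, bins))
    ```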

  14. Ambient noise correlations on a mobile, deformable array.

    PubMed

    Naughton, Perry; Roux, Philippe; Yeakle, Riley; Schurgers, Curt; Kastner, Ryan; Jaffe, Jules S; Roberts, Paul L D

    2016-12-01

    This paper presents a demonstration of ambient acoustic noise processing on a set of free floating oceanic receivers whose relative positions vary with time. It is shown that it is possible to retrieve information that is relevant to the travel time between the receivers. With thousands of short time cross-correlations (10 s) of varying distance, it is shown that on average, the decrease in amplitude of the noise correlation function with increased separation follows a power law. This suggests that there may be amplitude information that is embedded in the noise correlation function. An incoherent beamformer is developed, which shows that it is possible to determine a source direction using an array with moving elements and large element separation. This incoherent beamformer is used to verify cases when the distribution of noise sources in the ocean allows one to recover travel time information between pairs of mobile receivers.
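
    The basic operation behind such measurements, stacking short-time cross-correlations between two receivers, can be sketched as follows. The geometry is reduced to a single propagation delay, and all numbers (sampling rate, segment length, delay) are illustrative assumptions; real ocean data would also need spectral whitening and a distribution of noise sources.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(3)
    fs, seg_len, n_seg = 100.0, 1000, 200   # 100 Hz sampling, 10 s segments
    true_lag = 35                           # inter-receiver travel time [samples]

    noise_corr = np.zeros(2 * seg_len - 1)
    for _ in range(n_seg):
        s = rng.normal(size=seg_len + true_lag)   # one realization of ambient noise
        a = s[:seg_len]                           # receiver A (delayed copy)
        b = s[true_lag:true_lag + seg_len]        # receiver B hears the noise first
        # cross-correlate by convolving with the time-reversed trace, then stack
        noise_corr += fftconvolve(a, b[::-1], mode="full")

    lags = (np.arange(noise_corr.size) - (seg_len - 1)) / fs
    print("peak lag %.2f s (true travel time %.2f s)"
          % (lags[np.argmax(noise_corr)], true_lag / fs))
    ```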

  15. Advanced RF and microwave functions based on an integrated optical frequency comb source.

    PubMed

    Xu, Xingyuan; Wu, Jiayang; Nguyen, Thach G; Shoeiby, Mehrdad; Chu, Sai T; Little, Brent E; Morandotti, Roberto; Mitchell, Arnan; Moss, David J

    2018-02-05

    We demonstrate advanced transversal radio frequency (RF) and microwave functions based on a Kerr optical comb source generated by an integrated micro-ring resonator. We achieve extremely high performance for an optical true time delay aimed at tunable phased array antenna applications, as well as reconfigurable microwave photonic filters. Our results agree well with theory. We show that our true time delay would yield a phased array antenna with features that include high angular resolution and a wide range of beam steering angles, while the microwave photonic filters feature high Q factors, wideband tunability, and highly reconfigurable filtering shapes. These results show that our approach is a competitive solution to implementing reconfigurable, high performance and potentially low cost RF and microwave signal processing functions for applications including radar and communication systems.

  16. Nanoseismic sources made in the laboratory: source kinematics and time history

    NASA Astrophysics Data System (ADS)

    McLaskey, G.; Glaser, S. D.

    2009-12-01

    When studying seismic signals in the field, the analysis of source mechanisms is always obscured by propagation effects such as scattering and reflections due to the inhomogeneous nature of the earth. To get around this complication, we measure seismic waves (wavelengths from 2 mm to 300 mm) in laboratory-sized specimens of extremely homogeneous isotropic materials. We are able to study the focal mechanism and time history of nanoseismic sources produced by fracture, impact, and sliding friction, roughly six orders of magnitude smaller and more rapid than typical earthquakes. Using very sensitive broadband conical piezoelectric sensors, we are able to measure surface normal displacements down to a few pm (10^-12 m) in amplitude. Thick plate specimens of homogeneous materials such as glass, steel, gypsum, and polymethylmethacrylate (PMMA) are used as propagation media in the experiments. Recorded signals are in excellent agreement with theoretically determined Green’s functions obtained from a generalized ray theory code for an infinite plate geometry. Extremely precise estimates of the source time history are made via full waveform inversion from the displacement time histories recorded by an array of at least ten sensors. Each channel is sampled at a rate of 5 MHz. The system is absolutely calibrated using the normal impact of a tiny (~1 mm) ball on the surface of the specimen. The ball impact induces a force pulse into the specimen a few μs in duration. The amplitude, duration, and shape of the force pulse were found to be well approximated by Hertzian-derived impact theory, while the total change in momentum of the ball is independently measured from its incoming and rebound velocities. Another calibration source, the sudden fracture of a thin-walled glass capillary tube laid on its side and loaded against the surface of the specimen, produces a similar point force, this time with a source function very nearly a step in time with a rise time of less than 500 ns. The force at which the capillary breaks is recorded using a force sensor and is used for absolute calibration. A third set of nanoseismic sources was generated from frictional sliding. In this case, the location and spatial extent of the source along the cm-scale fault is not precisely known and must be determined. These sources are much more representative of earthquakes and the determination of their focal mechanisms is the subject of ongoing research. Sources of this type have been observed on a great range of time scales with rise times ranging from 500 ns to hundreds of ms. This study tests the generality of the seismic source representation theory. The unconventional scale, geometry, and experimental arrangement facilitate the discussion of issues such as the point source approximation, the origin of uncertainty in moment tensor inversions, the applicability of magnitude calculations for non-double-couple sources, and the relationship between momentum and seismic moment.
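
    Hertzian impact theory, mentioned above as the model for the calibration pulse, gives closed-form estimates for the contact time and peak force. The sketch below uses assumed material parameters (a ~1 mm steel ball dropped 5 cm onto glass); it is an order-of-magnitude illustration, not the calibration actually used in the study.

    ```python
    import numpy as np

    # Hertz impact of a small elastic sphere on a flat specimen (assumed values).
    R   = 0.5e-3                      # ball radius [m]
    rho = 7800.0                      # ball density [kg/m^3]
    v0  = np.sqrt(2 * 9.81 * 0.05)    # impact speed from a 5 cm drop [m/s]
    m   = rho * 4.0 / 3.0 * np.pi * R**3

    # Effective contact modulus: 1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2
    E1, nu1 = 200e9, 0.29             # steel ball
    E2, nu2 = 70e9, 0.22              # glass specimen
    Estar = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)

    k = 4.0 / 3.0 * Estar * np.sqrt(R)            # Hertz stiffness, F = k d^(3/2)
    d_max = (5.0 * m * v0**2 / (4.0 * k)) ** 0.4  # max indentation (energy balance)
    F_max = k * d_max**1.5                        # peak force
    t_c = 2.94 * d_max / v0                       # classic Hertz contact time

    print(f"contact time ~ {t_c*1e6:.2f} us, peak force ~ {F_max:.2f} N")
    ```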

  17. The energy release in earthquakes, and subduction zone seismicity and stress in slabs. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Vassiliou, M. S.

    1983-01-01

    Energy release in earthquakes is discussed. Dynamic energy from source time function, a simplified procedure for modeling deep focus events, static energy estimates, near source energy studies, and energy and magnitude are addressed. Subduction zone seismicity and stress in slabs are also discussed.

  18. Using Value-Focused Thinking as an Alternative Means of Opportunity Assessment for Strategic Sourcing Applications

    DTIC Science & Technology

    2013-03-01

    comparison between two objectives at a time. The decision maker develops a micro-version of the value equation using only the two objectives that... variety of different functional areas. Table 10. New Alternatives Identified — Alternative / Source: Base Recycling Services / AFCEC; Airfield Pavement Repair / ...

  19. On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods

    NASA Astrophysics Data System (ADS)

    Gallegos, A. C.; Xie, J.; Suarez Salas, L.

    2017-12-01

    The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under a fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examination of differences in event size and frequency content of the seismograms, it can lack rigorous justification. In practice, a small event may have a finite duration, in which case the retrieved RSTF is interpreted as the large event's STF with a bias. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain based matrix deconvolution. We find that when the STFs of smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply a Tikhonov smoothing to obtain a single-pulse RSTF, but its duration is dependent on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017), which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix is dependent on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. Based on the results so far, we find that the rank-deficiency makes it improbable to solve for both STFs. To solve for the larger STF, we need to assume the shape of the small STF to be known a priori. Thus, the reliability of the estimated large STF depends on the difference between the assumed and true shapes of the small STF. We will show how the reliability varies with realistic scenarios.
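
    The bias is easy to reproduce synthetically. The sketch below convolves one random "Green's function" with two finite-duration triangular STFs and deconvolves the pair; frequency-domain water-level regularization stands in for the time-domain matrix deconvolution used in the study, and all parameters are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def triangle(n, dur):
        """Unit-area triangular STF of duration dur samples."""
        s = np.zeros(n)
        half = dur // 2
        s[:dur] = np.concatenate([np.linspace(0, 1, half), np.linspace(1, 0, dur - half)])
        return s / s.sum()

    n = 512
    green = rng.normal(size=n) * np.exp(-np.arange(n) / 60.0)  # synthetic Green's fn

    big, small = triangle(n, 40), triangle(n, 12)              # both STFs are finite
    u_big = np.fft.ifft(np.fft.fft(green) * np.fft.fft(big)).real
    u_small = np.fft.ifft(np.fft.fft(green) * np.fft.fft(small)).real

    # Water-level deconvolution of the large event by the small one.
    G, S = np.fft.fft(u_big), np.fft.fft(u_small)
    wl = 1e-3 * np.max(np.abs(S)) ** 2
    rstf = np.fft.ifft(G * np.conj(S) / np.maximum(np.abs(S) ** 2, wl)).real

    # The RSTF is "big deconvolved by small", not the true big STF: it is
    # narrower than big and can carry non-physical side lobes/spikes.
    print("true duration ratio:", 40 / 12, " RSTF peak index:", np.argmax(rstf))
    ```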

  20. FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data.

    PubMed

    Oostenveld, Robert; Fries, Pascal; Maris, Eric; Schoffelen, Jan-Mathijs

    2011-01-01

    This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as a toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates its reuse in other software packages.

  1. Nonlinear optimal control policies for buoyancy-driven flows in the built environment

    NASA Astrophysics Data System (ADS)

    Nabi, Saleh; Grover, Piyush; Caulfield, Colm

    2017-11-01

    We consider optimal control of turbulent buoyancy-driven flows in the built environment, focusing on a model test case of displacement ventilation with a time-varying heat source. The flow is modeled using the unsteady Reynolds-averaged equations (URANS). To understand the stratification dynamics better, we derive a low-order partial-mixing ODE model extending the buoyancy-driven emptying filling box problem to the case where both the heat source and the (controlled) inlet flow are time-varying. In the limit of a single step change in the heat source strength, our model is consistent with that of Bower et al. Our model considers the dynamics of both `filling' and `intruding' added layers due to a time-varying source and inlet flow. A nonlinear direct-adjoint-looping optimal control formulation yields time-varying values of temperature and velocity of the inlet flow that lead to `optimal' time-averaged temperature relative to appropriate objective functionals in a region of interest.

  2. Poster — Thur Eve — 40: Automated Quality Assurance for Remote-Afterloading High Dose Rate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Anthony; Ravi, Ananth

    2014-08-15

    High dose rate (HDR) remote afterloading brachytherapy involves sending a small, high-activity radioactive source attached to a cable to different positions within a hollow applicator implanted in the patient. It is critical that the source position within the applicator and the dwell time of the source are accurate. Daily quality assurance (QA) tests of the positional and dwell time accuracy are essential to ensure that the accuracy of the remote afterloader is not compromised prior to patient treatment. Our centre has developed an automated, video-based QA system for HDR brachytherapy that is dramatically superior to existing diode or film QA solutions in terms of cost, objectivity, and positional accuracy, with additional functionalities such as being able to determine source dwell time and transit time of the source. In our system, a video is taken of the brachytherapy source as it is sent out through a position check ruler, with the source visible through a clear window. Using a proprietary image analysis algorithm, the source position is determined with respect to time as it moves to different positions along the check ruler. The total material cost of the video-based system was under $20, consisting of a commercial webcam and adjustable stand. The accuracy of the position measurement is ±0.2 mm, and the time resolution is 30 msec. Additionally, our system is capable of robustly verifying the source transit time and velocity (a test required by the AAPM and CPQR recommendations), which is currently difficult to perform accurately.
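
    The image analysis in the system described is proprietary; purely as a generic illustration of the measurement idea, the sketch below localizes a bright source in each synthetic frame by an intensity-weighted centroid, from which dwell positions and, via the frame rate and a mm-per-pixel scale, dwell and transit times could be derived. All names and values here are hypothetical.

    ```python
    import numpy as np

    def source_position(frame, thresh_frac=0.5):
        """Intensity-weighted centroid of pixels above a threshold (generic sketch;
        the system in the abstract uses a proprietary algorithm)."""
        mask = frame > thresh_frac * frame.max()
        ys, xs = np.nonzero(mask)
        w = frame[mask].astype(float)
        return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

    # Synthetic frames: a Gaussian "source" stepping between two dwell positions.
    rng = np.random.default_rng(5)
    yy, xx = np.mgrid[0:60, 0:400]
    positions = []
    for cx in (50.0, 50.0, 150.0, 150.0):
        frame = np.exp(-((xx - cx)**2 + (yy - 30)**2) / 50.0) \
                + 0.02 * rng.random((60, 400))
        positions.append(source_position(frame)[0])

    # Pixel positions per frame; dwell times follow from the frame rate.
    print(np.round(positions, 1))
    ```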

  3. Measurements of scalar released from point sources in a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Talluru, K. M.; Hernandez-Silva, C.; Philip, J.; Chauhan, K. A.

    2017-04-01

    Measurements of velocity and concentration fluctuations for a horizontal plume released at several wall-normal locations in a turbulent boundary layer (TBL) are discussed in this paper. The primary objective of this study is to establish a systematic procedure to acquire accurate single-point concentration measurements for a substantially long time so as to obtain converged statistics of the long tails of probability density functions of concentration. Details of the calibration procedure implemented for long measurements are presented, which include sensor drift compensation to eliminate the increase in average background concentration with time. While most previous studies reported measurements where the source height is limited to s_z/δ ≤ 0.2, where s_z is the wall-normal source height and δ is the boundary layer thickness, here results of concentration fluctuations when the plume is released in the outer layer are emphasised. Results of mean and root-mean-square (r.m.s.) profiles of concentration for elevated sources agree with the well-accepted reflected Gaussian model (Fackrell and Robins 1982, J. Fluid Mech. 117). However, there is clear deviation from the reflected Gaussian model for a source in the intermittent region of the TBL, particularly at locations higher than the source itself. Further, we find that the plume half-widths are different for the mean and r.m.s. concentration profiles. Long sampling times enabled us to calculate converged probability density functions at high concentrations, and these are found to exhibit an exponential distribution.
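
    The reflected (image-source) Gaussian model referred to above superposes the plume and its mirror image below the wall so that there is no scalar flux through the surface. A minimal sketch, with the plume spread and source heights as assumed values:

    ```python
    import numpy as np

    def reflected_gaussian(z, sz, sigma, c0=1.0):
        """Mean concentration for a source at height sz plus an image source at
        -sz, enforcing zero scalar flux through the wall (z = 0)."""
        return c0 * (np.exp(-(z - sz)**2 / (2 * sigma**2))
                     + np.exp(-(z + sz)**2 / (2 * sigma**2)))

    delta = 1.0                      # boundary layer thickness (normalized)
    z = np.linspace(0, delta, 11)
    for sz in (0.1, 0.3, 0.5):       # source heights s_z/delta (illustrative)
        print(f"s_z/delta = {sz}:", np.round(reflected_gaussian(z, sz, sigma=0.15), 3))
    ```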

  4. Temporal trend and source apportionment of water pollution in different functional zones of Qiantang River, China.

    PubMed

    Su, Shiliang; Li, Dan; Zhang, Qi; Xiao, Rui; Huang, Fang; Wu, Jiaping

    2011-02-01

    The increasingly serious river water pollution in developing countries poses a great threat to environmental health and human welfare. The assignment of river function to specific uses, known as zoning, is a useful tool to reveal variations of water environmental adaptability to human impact. Therefore, characterizing the temporal trend and identifying responsible pollution sources in different functional zones could greatly improve our knowledge about human impacts on the river water environment. The aim of this study is to obtain a deeper understanding of temporal trends and sources of water pollution in different functional zones with a case study of the Qiantang River, China. Measurement data were obtained and pretreated for 13 variables from 41 monitoring sites in four categories of functional zones during the period 1996-2004. An exploratory approach, which combines smoothing and non-parametric statistical tests, was applied to characterize trends of four significant parameters (permanganate index, ammonia nitrogen, total cadmium and fluoride), accounting for differences among the functional zones identified by discriminant analysis. Aided by GIS, the yearly pollution index (PI) for each monitoring site was further mapped to compare the within-group variations in temporal dynamics for different functional zones. Rotated principal component analysis and a receptor model (absolute principal component score-multiple linear regression, APCS-MLR) revealed that potential pollution sources and their corresponding contributions varied among the four functional zones. Variations of APCS values for each site of one functional zone, as well as their annual average values, highlighted the uncertainties associated with cross space-time effects in source apportionment. All these results reinforce the notion that the concept of zoning should be taken seriously in water pollution control. Being applicable to other rivers, the framework of management-oriented source apportionment is thus believed to have the potential to offer new insights into water management and advance the source apportionment framework as an operational basis for national and local governments. © 2010 Elsevier Ltd. All rights reserved.

  5. INFLUENCE OF THE GALACTIC GRAVITATIONAL FIELD ON THE POSITIONAL ACCURACY OF EXTRAGALACTIC SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larchenkova, Tatiana I.; Lutovinov, Alexander A.; Lyskova, Natalya S.

    We investigate the influence of random variations of the Galactic gravitational field on the apparent celestial positions of extragalactic sources. The basic statistical characteristics of a stochastic process (first-order moments, an autocorrelation function and a power spectral density) are used to describe a light ray deflection in a gravitational field of randomly moving point masses as a function of the source coordinates. We map a 2D distribution of the standard deviation of the angular shifts in positions of distant sources (including reference sources of the International Celestial Reference Frame) with respect to their true positions. For different Galactic matter distributions, the standard deviation of the offset angle can reach several tens of μas (microarcseconds) toward the Galactic center, decreasing down to 4–6 μas at high galactic latitudes. The conditional standard deviation (“jitter”) of 2.5 μas is reached within 10 years at high galactic latitudes and within a few months toward the inner part of the Galaxy. The photometric microlensing events are not expected to be disturbed by astrometric random variations anywhere except the inner part of the Galaxy, as the Einstein–Chvolson times are typically much shorter than the jittering timescale. While a jitter of a single reference source can be up to dozens of μas over some reasonable observational time, using a sample of reference sources would reduce the error in relative astrometry. The obtained results can be used for estimating the physical upper limits on the time-dependent accuracy of astrometric measurements.

  6. Identify source location and release time for pollutants undergoing super-diffusion and decay: Parameter analysis and model evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Sun, HongGuang; Lu, Bingqing; Garrard, Rhiannon; Neupauer, Roseanna M.

    2017-09-01

    Backward models have been applied for four decades by hydrologists to identify the source of pollutants undergoing Fickian diffusion, while analytical tools are not available for source identification of super-diffusive pollutants undergoing decay. This technical note evaluates analytical solutions for the source location and release time of a decaying contaminant undergoing super-diffusion using backward probability density functions (PDFs), where the forward model is the space fractional advection-dispersion equation with decay. A revisit of the well-known MADE-2 tracer test using parameter analysis shows that the peak backward location PDF can predict the tritium source location, while the peak backward travel time PDF underestimates the tracer release time due to the early arrival of tracer particles at the detection well in the maximally skewed, super-diffusive transport. In addition, the first-order decay adds additional skewness toward earlier arrival times in backward travel time PDFs, resulting in a younger release time, although this impact is minimized at the MADE-2 site due to tritium's half-life being relatively long compared to the monitoring period. The main conclusion is that, while non-trivial backward techniques are required to identify the pollutant source location, the pollutant release time can and should be directly estimated given the speed of the peak resident concentration for super-diffusive pollutants with or without decay.

  7. Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984

    USGS Publications Warehouse

    Sipkin, S.A.

    1987-01-01

    The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb > 5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step-function at the source (δ-function in the far-field). Four of the events were of special interest, and long-period P, SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, for many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures. © 1987.

  8. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

    Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general we expect to obtain a higher quality source image by improving the observational input data (e.g. using more, higher-quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low-frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Monelli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) is used for a single-window slip velocity function. We search for local rupture velocity directly in the inversion, and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one we searched for directly. By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.
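
    One way to realize such cross-correlated priors is to colour independent Gaussian fields with the Cholesky factor of a target correlation matrix. The sketch below is not the Song et al. prior (which also prescribes spatial auto-correlation along the fault, omitted here for brevity); the correlation values and physical scalings are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Assumed cross-correlation between slip, rupture velocity and peak slip
    # velocity at each fault node (illustrative values, not from Song et al.).
    C = np.array([[1.0, 0.6, 0.7],
                  [0.6, 1.0, 0.5],
                  [0.7, 0.5, 1.0]])
    L = np.linalg.cholesky(C)

    n_nodes = 500                                  # subfaults along the rupture
    z = rng.normal(size=(3, n_nodes))              # independent standard normals
    slip, vr, vpeak = L @ z                        # correlated standardized fields

    # Map standardized fields to physical ranges (illustrative scalings).
    slip  = 2.0 + 1.0 * slip                       # m
    vr    = 2.5 + 0.3 * vr                         # km/s
    vpeak = 1.5 + 0.5 * vpeak                      # m/s

    print("sample corr(slip, vr):", np.corrcoef(slip, vr)[0, 1])
    ```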

  9. Fitted Hanbury-Brown Twiss radii versus space-time variances in flow-dominated models

    NASA Astrophysics Data System (ADS)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-01

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.

  10. Fitted Hanbury-Brown-Twiss radii versus space-time variances in flow-dominated models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frodermann, Evan; Heinz, Ulrich; Lisa, Michael Annan

    2006-04-15

    The inability of otherwise successful dynamical models to reproduce the Hanbury-Brown-Twiss (HBT) radii extracted from two-particle correlations measured at the Relativistic Heavy Ion Collider (RHIC) is known as the RHIC HBT Puzzle. Most comparisons between models and experiment exploit the fact that for Gaussian sources the HBT radii agree with certain combinations of the space-time widths of the source that can be directly computed from the emission function without having to evaluate, at significant expense, the two-particle correlation function. We here study the validity of this approach for realistic emission function models, some of which exhibit significant deviations from simple Gaussian behavior. By Fourier transforming the emission function, we compute the two-particle correlation function, and fit it with a Gaussian to partially mimic the procedure used for measured correlation functions. We describe a novel algorithm to perform this Gaussian fit analytically. We find that for realistic hydrodynamic models the HBT radii extracted from this procedure agree better with the data than the values previously extracted from the space-time widths of the emission function. Although serious discrepancies between the calculated and the measured HBT radii remain, we show that a more apples-to-apples comparison of models with data can play an important role in any eventually successful theoretical description of RHIC HBT data.
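
    For a Gaussian source, the "certain combinations of space-time widths" referred to above are the familiar variance formulas, here written in the Bertsch-Pratt (out, side, long) frame with the pair velocity β_T along the out axis; the toy source widths below are assumed values, and conventions (frame, cross terms) vary in the literature.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Sample emission points (x_out, x_side, x_long, t) from a toy Gaussian source.
    n = 100_000
    x_out  = rng.normal(0.0, 4.0, n)   # fm
    x_side = rng.normal(0.0, 3.0, n)   # fm
    x_long = rng.normal(0.0, 5.0, n)   # fm
    t      = rng.normal(0.0, 2.0, n)   # fm/c
    beta_T = 0.6                       # transverse pair velocity (c = 1)

    def var(a):
        return np.mean(a**2) - np.mean(a)**2

    # Gaussian-source HBT radii from space-time variances:
    R_side2 = var(x_side)
    R_out2  = var(x_out - beta_T * t)  # mixes spatial and temporal widths
    R_long2 = var(x_long)
    print(np.sqrt([R_out2, R_side2, R_long2]))  # radii in fm
    ```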

  11. [The Performance Analysis for Lighting Sources in Highway Tunnel Based on Visual Function].

    PubMed

    Yang, Yong; Han, Wen-yuan; Yan, Ming; Jiang, Hai-feng; Zhu, Li-wei

    2015-10-01

    Under mesopic vision conditions, the spectral luminous efficiency function is a family of curves whose peak wavelength and intensity are affected by the light spectrum, background luminance and other factors. The effect of a light source on visibility therefore cannot be characterized by a single optical parameter. In this experiment, the reaction time of visual cognition is used as the evaluation index, and visual cognition is tested with the visual function method under different speeds and luminous environments. The light sources include high-pressure sodium, electrodeless fluorescent lamps and white LEDs with three color temperatures (ranging from 1958 to 5537 K). The background luminance values, between 1 and 5 cd·m⁻², are representative of the basic section of highway tunnel lighting and of general outdoor lighting, and all lie within the mesopic range. The test results show that, under the same speed and luminance, the visual-cognition reaction time for high-color-temperature sources is shorter than for low-color-temperature sources, and the reaction time for targets at high speed is shorter than at low speed. At the final moment, however, the visual angle subtended by the target in the observer's visual field at low speed was larger than at high speed. Based on the MOVE model, the equivalent mesopic luminance was calculated for the different emission spectra and background luminances produced by the test sources. Compared with the photopic result, the coefficient of variation (CV) of the reaction-time curve corresponding to equivalent mesopic luminance is smaller. Under mesopic conditions, the discrepancy between the equivalent luminance of different light sources and the photopic luminance is one of the main causes of the differences in visual recognition. The emission spectrum peak of the GaN chip is close to the peak wavelength of the photopic luminous efficiency function, so the lighting visual effect of white LEDs at high color temperature is better than at low color temperature and better than that of electrodeless fluorescent lamps. The lighting visual effect of high-pressure sodium is weak because its spectral peak lies around the Na characteristic lines.

  12. A VLA (Very Large Array) Search for 5 GHz Radio Transients and Variables at Low Galactic Latitudes

    NASA Technical Reports Server (NTRS)

    Ofek, E. O.; Frail, D. A.; Breslauer, B.; Kulkarni, S. R.; Chandra, P.; Gal-Yam, A.; Kasliwal, M. M.; Gehrels, N.

    2012-01-01

    We present the results of a 5 GHz survey with the Very Large Array (VLA) and the expanded VLA, designed to search for short-lived (≲1 day) transients and to characterize the variability of radio sources at milli-Jansky levels. A total sky area of 2.66 deg², spread over 141 fields at low Galactic latitudes (b ≈ 6°-8°), was observed 16 times with a cadence chosen to sample timescales of days, months and years. Most of the data were reduced, analyzed and searched for transients in near real time. Interesting candidates were followed up using visible-light telescopes (typical delays of 1-2 hr) and the X-Ray Telescope on board the Swift satellite. The final processing of the data revealed a single possible transient with a flux density of f_ν ≈ 2.4 mJy. This implies a transient sky surface density of κ(f_ν > 1.8 mJy) = 0.039 (+0.13/−0.032 at 1σ; +0.18/−0.038 at 2σ) deg⁻². This areal density is consistent with the sky surface density of transients from the Bower et al. survey extrapolated to 1.8 mJy. Our observed transient areal density is consistent with a neutron star (NS) origin for these events. Furthermore, we use the data to measure the source variability on timescales of days to years, and we present the variability structure function of 5 GHz sources. The mean structure function shows a fast increase on ~1 day timescales, followed by a slower increase on timescales of up to 10 days. On timescales between 10 and 60 days the structure function is roughly constant. We find that ≳30% of the unresolved sources brighter than 1.8 mJy are variable at the >4σ confidence level, presumably due mainly to refractive scintillation.

  13. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    NASA Astrophysics Data System (ADS)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids, which is determined by sensor pairs, rather than the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
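
    The pairwise-hyperboloid idea can be illustrated with a simplified misfit. This is not the actual virtual-field objective of the VFOM, just a smooth stand-in: each sensor pair constrains the source to a hyperboloid of constant differential distance, and an L1 sum over pairs damps the influence of one large picking error. Geometry, wave speed, and error size are assumed values.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(8)
    v = 5000.0                                    # wave speed [m/s] (assumed)
    sensors = rng.uniform(-100, 100, size=(8, 3)) # sensor coordinates [m]
    src = np.array([20.0, -35.0, 40.0])

    arrivals = np.linalg.norm(sensors - src, axis=1) / v
    arrivals[2] += 0.01                           # one large picking error (10 ms)

    def pairwise_misfit(x):
        """Sum over sensor pairs of hyperboloid residuals (illustrative only)."""
        d = np.linalg.norm(sensors - x, axis=1)
        i, j = np.triu_indices(len(sensors), k=1)
        res = (d[i] - d[j]) - v * (arrivals[i] - arrivals[j])
        return np.sum(np.abs(res))                # L1 norm damps the LPE pick

    sol = minimize(pairwise_misfit, x0=np.zeros(3), method="Nelder-Mead")
    print("estimated source:", np.round(sol.x, 1), "true:", src)
    ```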

  14. Modeling Finite Faults Using the Adjoint Wave Field

    NASA Astrophysics Data System (ADS)

    Hjörleifsdóttir, V.; Liu, Q.; Tromp, J.

    2004-12-01

    Time-reversal acoustics, a technique in which an acoustic signal is recorded by an array of transducers, time-reversed, and retransmitted, is used, e.g., in medical therapy to locate and destroy gallstones (for a review see Fink, 1997). As discussed by Tromp et al. (2004), time-reversal techniques for locating sources are closely linked to so-called `adjoint methods' (Talagrand and Courtier, 1987), which may be used to evaluate the gradient of a misfit function. Tromp et al. (2004) illustrate how a (finite) source inversion may be implemented based upon the adjoint wave field by writing the change in the misfit function, δχ, due to a change in the moment-density tensor, δm, as an integral of the adjoint strain field ε†(x, t) over the fault plane Σ: δχ = ∫₀ᵀ ∫_Σ ε†(x, T−t) : δm(x, t) d²x dt. We find that if the real fault plane is located at a distance δh in the direction of the fault normal n̂, then to first order an additional term ∫₀ᵀ ∫_Σ δh(x) ∂_n ε†(x, T−t) : m(x, t) d²x dt is added to the change in the misfit function. The adjoint strain is computed by using the time-reversed difference between data and synthetics recorded at all receivers as simultaneous sources and recording the resulting strain on the fault plane. In accordance with time-reversal acoustics, all the resulting waves will constructively interfere at the position of the original source in space and time. The level of convergence will be determined by factors such as the source-receiver geometry, the frequency content of the recorded data and synthetics, and the accuracy of the velocity structure used when back-propagating the wave field. The terms ε†(x, T−t) and ∂_n ε†(x, T−t) : m(x, t) can be viewed as sensitivity kernels for the moment density and the fault-plane location, respectively. By looking at these quantities we can make an educated choice of fault parametrization given the data in hand. The process can then be repeated to invert for the best source model, as demonstrated by Tromp et al. (2004) for the magnitude of a point force. In this presentation we explore the applicability of adjoint methods to estimating finite source parameters. Fink, M. (1997), Time reversed acoustics, Physics Today, 50(3), 34-40. Talagrand, O., and P. Courtier (1987), Variational assimilation of meteorological observations with the adjoint vorticity equation. I: Theory, Q. J. R. Meteorol. Soc., 113, 1311-1328. Tromp, J., C. Tape, and Q. Liu (2004), Waveform tomography, adjoint methods, time reversal, and banana-doughnut kernels, Geophys. J. Int., in press.

  15. Real-time Recovery Efficiencies and Performance of the Palomar Transient Factory’s Transient Discovery Pipeline

    NASA Astrophysics Data System (ADS)

    Frohmaier, C.; Sullivan, M.; Nugent, P. E.; Goldstein, D. A.; DeRose, J.

    2017-05-01

    We present the transient source detection efficiencies of the Palomar Transient Factory (PTF), parameterizing the number of transients that PTF found versus the number of similar transients that occurred over the same period in the survey search area but were missed. PTF was an optical sky survey carried out with the Palomar 48 inch telescope over 2009-2012, observing more than 8000 square degrees of sky with cadences of between one and five days, locating around 50,000 non-moving transient sources, and spectroscopically confirming around 1900 supernovae. We assess the effectiveness with which PTF detected transient sources, by inserting ≃ 7 million artificial point sources into real PTF data. We then study the efficiency with which the PTF real-time pipeline recovered these sources as a function of the source magnitude, host galaxy surface brightness, and various observing conditions (using proxies for seeing, sky brightness, and transparency). The product of this study is a multi-dimensional recovery efficiency grid appropriate for the range of observing conditions that PTF experienced and that can then be used for studies of the rates, environments, and luminosity functions of different transient types using detailed Monte Carlo simulations. We illustrate the technique using the observationally well-understood class of type Ia supernovae.
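
    The bookkeeping behind such an efficiency grid is straightforward to sketch. Below, a toy logistic detection model stands in for the real pipeline, and magnitude is the only axis retained (the study also bins by host surface brightness and observing-condition proxies, which would simply add dimensions to the grid); all numbers are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Inject fake point sources with known magnitudes (toy stand-in for placing
    # them in real images and running the detection pipeline on the result).
    n_fake = 100_000
    mag = rng.uniform(16.0, 23.0, n_fake)

    # Toy pipeline: detection probability falls off logistically near m50.
    m50, width = 20.5, 0.4
    recovered = rng.random(n_fake) < 1.0 / (1.0 + np.exp((mag - m50) / width))

    # Efficiency = recovered fraction per magnitude bin.
    bins = np.arange(16.0, 23.5, 0.5)
    idx = np.digitize(mag, bins)
    eff = np.array([recovered[idx == k].mean() for k in range(1, bins.size)])
    print(np.round(eff, 2))
    ```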

  16. Dynamics of the Wulong Landslide Revealed by Broadband Seismic Records

    NASA Astrophysics Data System (ADS)

    Huang, X.; Dan, Y.

    2016-12-01

    Long-period seismic signals are frequently used to trace the dynamic process of large-scale landslides. The catastrophic Wulong landslide occurred at 14:51 on 5 June 2009 (Beijing time, UTC+8) in Wulong Prefecture, Southwest China. The topography in the landslide area varies dramatically, adding complexity to its movement characteristics. The mass started sliding northward on the upper part of the cliff located upon the west slope of the Tiejianggou gully, and shifted its movement direction to northeastward after being blocked by stable bedrock in front, leaving a scratch zone. The sliding mass then moved downward along the west slope of the gully until it collided with the east slope, and broke up into small pieces after the collision, forming a debris flow along the gully. We use long-period seismic signals extracted from eight broadband seismic stations within 250 km of the landslide to estimate its source time functions. Combining these with topographic surveys done before and after the event, we can also resolve kinematic parameters of the sliding mass, i.e. velocities, displacements and trajectories, characterizing its movement features. The runout trajectory deduced from the source time functions is consistent with the sliding path, including two direction-changing processes, corresponding to scratching the western bedrock and collision with the east slope, respectively. Topographic variations are reflected in the estimated velocities. The maximum velocity of the sliding mass reaches 35 m/s before the collision with the east slope of the Tiejianggou gully, resulting from the height difference between the source zone and the deposition zone. Importantly, the dynamics of scratching and collision can be characterized by the source time functions. Our results confirm that long-period seismic signals are sufficient to characterize the dynamics and kinematics of large-scale landslides that occur in regions with complex topography.
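
    Once single-force source time functions are available, the bulk kinematics quoted above follow from Newton's second law: the force the landslide exerts on the Earth is opposite to the inertial force of the sliding mass, so its acceleration is a = −F/m. A minimal sketch, with an assumed mass and an idealized two-pulse force history standing in for the inverted one:

    ```python
    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    # Illustrative inverted force history (one horizontal component) and mass.
    t = np.linspace(0.0, 120.0, 1201)                    # s
    F = 5e9 * (np.exp(-0.5 * ((t - 20) / 8) ** 2)
               - np.exp(-0.5 * ((t - 80) / 8) ** 2))     # N (assumed)
    mass = 5e9                                           # kg (assumed)

    a = -F / mass                                        # acceleration of sliding mass
    vel = cumulative_trapezoid(a, t, initial=0.0)        # m/s
    disp = cumulative_trapezoid(vel, t, initial=0.0)     # m

    print(f"peak speed {np.abs(vel).max():.1f} m/s, "
          f"total displacement {abs(disp[-1]):.0f} m")
    ```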

  17. Linear diffusion into a Faraday cage.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warne, Larry Kevin; Lin, Yau Tang; Merewether, Kimball O.

    2011-11-01

    Linear lightning diffusion into a Faraday cage is studied. An early-time integral valid for large ratios of enclosure size to enclosure thickness and small relative permeability (μ/μ₀ ≤ 10) is used for this study. Existing solutions for nearby lightning impulse responses of electrically thick-wall enclosures are refined and extended to calculate the nearby lightning magnetic field (H) and time-derivative magnetic field (HDOT) inside enclosures of varying thickness caused by a decaying exponential excitation. For a direct strike scenario, the early-time integral for a worst-case line source outside the enclosure caused by an impulse is simplified and numerically integrated to give the interior H and HDOT at the location closest to the source as well as a function of distance from the source. H and HDOT enclosure response functions for decaying exponentials are considered for an enclosure wall of any thickness. Simple formulas are derived to provide a description of enclosure interior H and HDOT as well. Direct strike voltage and current bounds for a single-turn optimally-coupled loop for all three waveforms are also given.

  18. Influence of meat source, pH and production time on zinc protoporphyrin IX formation as natural colouring agent in nitrite-free dry fermented sausages.

    PubMed

    De Maere, Hannelore; Chollet, Sylvie; De Brabanter, Jos; Michiels, Chris; Paelinck, Hubert; Fraeye, Ilse

    2018-01-01

    Nitrite is commonly used in meat products due to its plural technological advantages. However, it is controversial because of its detrimental side effects on health. Within the context of nitrite reduction, zinc protoporphyrin IX (Zn(II)PPIX) formation in meat products as a natural red colouring agent has been suggested. This investigation presents the evaluation of naturally occurring pigments, namely Zn(II)PPIX, protoporphyrin IX (PPIX) and heme, in nitrite-free dry fermented sausages as a function of time, meat source (pork, horsemeat and a combination of both meat sources) and pH condition. Over time, Zn(II)PPIX and PPIX were formed and the heme content decreased. Higher pH conditions promoted Zn(II)PPIX and PPIX formation, whereas the influence of pH on heme was less clear. The use of horsemeat also promoted Zn(II)PPIX formation; moreover, similar amounts were formed even when it was combined with pork. Product redness, however, could not be related to Zn(II)PPIX formation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Source Characterization of Microseismic Events using Empirical Green's Functions at the Basel EGS Project

    NASA Astrophysics Data System (ADS)

    Folesky, Jonas; Kummerow, Jörn

    2015-04-01

    The Empirical Green's Function (EGF) method uses pairs of events of high waveform similarity and adjacent hypocenters to decompose the influences of source time function, ray path, instrument site, and instrument response. The seismogram of the smaller event is considered as the Green's function, which can then be deconvolved from the other seismogram. The result provides a reconstructed relative source time function (RSTF) of the larger event of that event pair. The comparison of the RSTFs at all stations of the observation system produces information on the rupture process of the event, based on an apparent directivity effect and possible changes in the RSTFs' complexities. The Basel EGS dataset of 2006-2007 consists of about 2800 localized events of magnitudes between 0.0 < ML < 3.5, with event pairs of adequate magnitude difference for EGF analysis. The data have sufficient quality to analyse events with magnitudes down to ML = 0.0 for an apparent directivity effect, even though the approximate rupture duration of those events is only a few milliseconds. The dataset shows a number of multiplets where fault plane solutions are known from earlier studies. Using the EGF method we compute rupture orientations for about 190 event pairs and compare them to the fault plane solutions of the multiplets. For the majority of events we observe a good consistency between the rupture direction found there and one of the previously determined nodal planes from fault plane solutions. In combination this resolves the fault plane ambiguity. Furthermore, the rupture direction fitting yields estimates for projections of the rupture velocity on the horizontal plane. They seem to vary between the multiplets in the reservoir from 0.3 to 0.7 times the S-wave velocity. To our knowledge, source characterization by EGF analysis has not yet been introduced to microseismic reservoirs with the data quality found in Basel. Our results show that EGF analysis can provide valuable additional insights into the distribution of rupture properties within the reservoir.
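
    The azimuthal pattern used in such rupture-direction fitting is the classic unilateral directivity relation τ(θ) = τ₀(1 − (v_r/c)·cos(θ − φ)), whose fit yields the rupture azimuth φ and the horizontal projection of the rupture velocity. A sketch with synthetic durations and assumed values (not Basel data):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def apparent_duration(theta, tau0, vr_over_c, phi):
        """Unilateral-rupture directivity model for RSTF durations vs. azimuth."""
        return tau0 * (1.0 - vr_over_c * np.cos(theta - phi))

    rng = np.random.default_rng(10)
    theta = np.deg2rad(np.arange(0, 360, 30))           # station azimuths
    true = (0.004, 0.5, np.deg2rad(130))                # 4 ms, v_r = 0.5 c, az 130 deg
    tau = apparent_duration(theta, *true) + rng.normal(0, 1e-4, theta.size)

    popt, _ = curve_fit(apparent_duration, theta, tau, p0=(0.005, 0.3, 0.0))
    print(f"v_r/c = {popt[1]:.2f}, rupture azimuth = {np.rad2deg(popt[2]) % 360:.0f} deg")
    ```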

  20. The source of electrostatic fluctuations in the solar-wind

    NASA Technical Reports Server (NTRS)

    Lemons, D. S.; Asbridge, J. R.; Bame, S. J.; Feldman, W. C.; Gary, S. P.; Gosling, J. T.

    1979-01-01

    Solar wind electron and ion distribution functions measured simultaneously with or close to times of intense electrostatic fluctuations are subjected to a linear Vlasov stability analysis. Although all distributions tested were found to be stable, the analysis suggests that the ion beam instability is the most likely source of the fluctuations.

  1. Resolution and quantification accuracy enhancement of functional delay and sum beamforming for three-dimensional acoustic source identification with solid spherical arrays

    NASA Astrophysics Data System (ADS)

    Chu, Zhigang; Yang, Yang; Shen, Linbang

    2017-05-01

    Functional delay and sum (FDAS) is a novel beamforming algorithm introduced for three-dimensional (3D) acoustic source identification with solid spherical microphone arrays. Capable of offering significantly attenuated sidelobes at high computational speed, the algorithm promises to play an important role in interior acoustic source identification. However, it presents some intrinsic imperfections, specifically poor spatial resolution and low quantification accuracy. This paper focuses on conquering these imperfections by ridge detection (RD) and the deconvolution approach for the mapping of acoustic sources (DAMAS). The suggested methods are referred to as FDAS+RD and FDAS+RD+DAMAS. Both computer simulations and experiments are utilized to validate their effects. Several interesting conclusions have emerged: (1) FDAS+RD and FDAS+RD+DAMAS can both dramatically improve FDAS's spatial resolution while inheriting its advantages. (2) Compared to the conventional DAMAS, FDAS+RD+DAMAS offers the same super spatial resolution and stronger sidelobe attenuation, and runs more than two hundred times faster. (3) FDAS+RD+DAMAS can effectively overcome FDAS's low quantification accuracy: whether or not the focus distance is equal to the distance from the source to the array center, it can quantify the source average pressure contribution accurately. This study will be of great significance to the accurate and quick localization and quantification of acoustic sources in cabin environments.
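
    For orientation, conventional frequency-domain delay and sum over a grid of candidate points looks as follows; FDAS additionally processes the beamforming output through a matrix-function step to attenuate sidelobes, which is not reproduced here. The sketch uses free-field steering vectors and an assumed geometry, whereas a solid spherical array requires scattering-aware steering.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    c, f = 343.0, 2000.0                         # sound speed [m/s], frequency [Hz]
    k = 2 * np.pi * f / c

    mics = rng.uniform(-0.1, 0.1, size=(32, 3))  # array positions [m] (free field)
    src = np.array([0.5, 0.2, 1.0])

    r_src = np.linalg.norm(mics - src, axis=1)
    p = np.exp(-1j * k * r_src) / r_src          # monopole pressure at the mics

    # Delay and sum: steer to each grid point and sum the phase-aligned signals.
    xs = np.linspace(-1, 1, 41)
    out = np.zeros((41, 41))
    for ix, x in enumerate(xs):
        for iy, y in enumerate(xs):
            r = np.linalg.norm(mics - np.array([x, y, 1.0]), axis=1)
            w = np.exp(1j * k * r)               # conjugate steering (delays)
            out[iy, ix] = np.abs(w @ p) ** 2

    iy, ix = np.unravel_index(out.argmax(), out.shape)
    print("peak at x=%.2f m, y=%.2f m (true 0.50, 0.20)" % (xs[ix], xs[iy]))
    ```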

  2. New Insights into the Explosion Source from SPE

    NASA Astrophysics Data System (ADS)

    Patton, H. J.

    2015-12-01

    Phase I of the Source Physics Experiments (SPE) is a series of chemical explosions at varying depths and yields detonated in the same emplacement hole on Climax stock, a granitic pluton located on the Nevada National Security Site. To date, four of the seven planned tests have been conducted, the last in May 2015, called SPE-4P, with a scaled depth of burial of 1549 m/kt^(1/3) in order to localize the source in time and space. Surface ground motions validated that the source medium did not undergo spallation, and a key experimental objective was achieved: SPE-4P is the closest of all tests in the series to a pure monopole source and will serve as an empirical Green's function for analysis against other SPE tests. A scientific objective of SPE is to understand mechanisms of rock damage for generating seismic waves, particularly surface and S waves, including prompt damage under compressive stresses and "late-time" damage under tensile stresses. Studies have shown that prompt damage can explain ~75% of the seismic moment for some SPE tests. Spallation is a form of late-time damage and a facilitator of damage mechanisms under tensile stresses, including inelastic brittle deformation and shear dilatancy on pre-existing faults or joints. As an empirical Green's function, SPE-4P allows the study of late-time damage mechanisms on other SPE tests that induce spallation and late-time damage, and I'll discuss these studies. The importance for nuclear monitoring cannot be overstated because new research shows that damage mechanisms can affect the surface wave magnitude Ms more than tectonic release, and are a likely factor related to anomalous mb-Ms behavior for North Korean tests.

  3. Robust real-time fault tracking for the 2011 Mw 9.0 Tohoku earthquake based on the phased-array-interference principle

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Rongjiang; Parolai, Stefano; Zschau, Jochen

    2013-04-01

    Based on the principle of phased-array interference, we have developed an Iterative Deconvolution Stacking (IDS) method for real-time kinematic source inversion using near-field strong-motion and GPS networks. In this method, the seismic and GPS stations work like an array radar. The whole potential fault area is scanned patch by patch by stacking the apparent source time functions, which are obtained through deconvolution between the recorded seismograms and synthetic Green's functions. Whenever and wherever significant source signals are detected, their signatures are removed from the observed seismograms. The procedure is repeated until the cumulative seismic moment converges and the residual seismograms fall below the noise level. The new approach does not need any of the artificial constraints used in conventional source parameterizations, such as fixing the hypocentre or restricting the rupture velocity and rise time, so it can be used for automatic real-time source inversion. In its application to the 2011 Tohoku earthquake, the IDS method proved robust and reliable for the fast estimation of moment magnitude, fault area, rupture direction, and maximum slip. At about 100 s after the rupture initiation, we can determine that the rupture propagates mainly along the up-dip direction and causes a maximum slip of 17 m, information sufficient to issue a tsunami early warning. About two minutes after the earthquake occurrence, the maximum slip is found to be 31 m, and the moment magnitude reaches Mw 8.9, very close to the final moment magnitude (Mw 9.0) of this earthquake.
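    The deconvolve-then-stack step at the core of the description above can be sketched as follows. This is a hedged illustration using simple water-level spectral division; the authors' IDS implementation, its regularization, and its alignment details are not specified in the abstract and may differ:

    ```python
    import numpy as np

    def apparent_stf_stack(seismograms, greens, water_level=1e-3):
        """Stack of apparent source time functions for one trial fault patch.

        seismograms, greens: (n_stations, n_samples) arrays, already aligned
        on the patch's predicted arrival times at each station.
        """
        n = seismograms.shape[1]
        S = np.fft.rfft(seismograms, axis=1)
        G = np.fft.rfft(greens, axis=1)
        # water-level deconvolution: S/G stabilized where |G| is small
        denom = np.abs(G) ** 2
        denom = np.maximum(denom, water_level * denom.max(axis=1, keepdims=True))
        astf = np.fft.irfft(S * np.conj(G) / denom, n=n, axis=1)
        return astf.mean(axis=0)   # coherent stack over the network
    ```

    A patch whose stack shows a significant pulse is a detected source signal; in the IDS scheme its predicted waveforms would then be subtracted from the observations and the scan repeated.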

  4. Modeling of Turbulence Generated Noise in Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2004-01-01

    A numerically calculated Green's function is used to predict the jet noise spectrum and its far-field directivity. A linearized form of Lilley's equation governs the non-causal Green's function of interest, with the non-linear terms on the right-hand side identified as the source. In this paper, contributions from the so-called self- and shear-noise source terms will be discussed. A Reynolds-averaged Navier-Stokes solution yields the required mean flow as well as the time and length scales of a noise-generating turbulent eddy. A non-compact source, with exponential temporal and spatial functions, is used to describe the turbulence velocity correlation tensors. It is shown that while an exact non-causal Green's function accurately predicts the observed shift in the location of the spectrum peak with angle, as well as the angularity of sound at moderate Mach numbers, at high subsonic and supersonic acoustic Mach numbers the polar directivity of radiated sound is not entirely captured by this Green's function. Results presented for Mach 0.5 and 0.9 isothermal jets, as well as a Mach 0.8 hot jet, indicate that near the peak radiation angle a different source/Green's function convolution integral may be required in order to capture the peak observed directivity of jet noise.

  5. RADIATION WAVE DETECTION

    DOEpatents

    Wouters, L.F.

    1960-08-30

    Radiation waves can be detected by simultaneously measuring radiation-wave intensities at a plurality of space-distributed points and producing therefrom a plot of the wave intensity as a function of time. To this end, a detector system is provided which includes a plurality of nuclear radiation intensity detectors spaced at equal radial increments of distance from a source of nuclear radiation. Means are provided to simultaneously sensitize the detectors at the instant a wave of radiation traverses their positions, the detectors producing electrical pulses indicative of wave intensity. The system further includes means for delaying the pulses from the detectors by amounts proportional to the distance of the detectors from the source to provide an indication of radiation-wave intensity as a function of time.
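    The patent's delay scheme can be illustrated with a short sketch: each detector's simultaneously captured pulse is placed on a common output trace at a time offset proportional to its distance from the source, so the spatial intensity profile reads out as intensity versus time. All names and the choice of 1/wave_speed as the proportionality constant are illustrative assumptions, not the patent's circuit:

    ```python
    import numpy as np

    def intensity_vs_time(amplitudes, distances, wave_speed, fs, duration):
        """Serialize simultaneous detector pulses into one intensity-vs-time trace.

        amplitudes: pulse heights from the radially spaced detectors.
        distances: detector distances from the source (m); fs: samples/s.
        """
        n = int(duration * fs)
        trace = np.zeros(n)
        for a, d in zip(amplitudes, distances):
            i = int(round(d / wave_speed * fs))   # delay proportional to distance
            if i < n:
                trace[i] += a
        return trace
    ```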

  6. Design of a 3D-printed, open-source wrist-driven orthosis for individuals with spinal cord injury

    PubMed Central

    Mukherjee, Gaurav; Peters, Keshia M.; Yamane, Ann; Steele, Katherine M.

    2018-01-01

    Assistive technology, such as wrist-driven orthoses (WDOs), can be used by individuals with spinal cord injury to improve hand function. A lack of innovation and challenges in obtaining WDOs have limited their use. These orthoses can be heavy and uncomfortable for users and also time-consuming for orthotists to fabricate. The goal of this research was to design a WDO with user (N = 3) and orthotist (N = 6) feedback to improve the accessibility, customizability, and function of WDOs by harnessing advancements in 3D-printing. The 3D-printed WDO reduced hands-on assembly time to approximately 1.5 hours and the material costs to $15 compared to current fabrication methods. Varying improvements in users' hand function were observed during functional tests, such as the Jebsen Taylor Hand Function Test. For example, one participant's ability on the small object task improved by 29 seconds with the WDO, while another participant took 25 seconds longer to complete this task with the WDO. Two users had a significant increase in grasp strength with the WDO (13–122% increase), while the other participant was able to perform a pinching grasp for the first time. The WDO designs are available open-source to increase accessibility and encourage future innovation. PMID:29470557

  7. Formal integration of controlled-source and passive seismic data: Utilization of the CD-ROM experiment

    NASA Astrophysics Data System (ADS)

    Rumpfhuber, E.; Keller, G. R.; Velasco, A. A.

    2005-12-01

    Many large-scale experiments conduct both controlled-source and passive deployments to investigate the lithospheric structure of a targeted region. Many of these studies utilize each data set independently, resulting in different images of the Earth depending on the data set investigated. In general, formal integration of these data sets with other data, such as joint inversion, has not been performed. The CD-ROM experiment, which included both 2-D controlled-source and passive recording along a profile extending from southern Wyoming to northern New Mexico, serves as an excellent data set for developing a formal integration strategy between controlled-source and passive experiments. These data are ideal for developing this strategy because: 1) the analysis of refraction/wide-angle reflection data yields the Vp structure, and sometimes the Vs structure, of the crust and uppermost mantle; 2) analysis of the PmP phase (Moho reflection) yields estimates of the average Vp of the crust; and 3) receiver functions contain full-crustal reverberations and yield the Vp/Vs ratio, but do not constrain the absolute P and S velocities. Thus, a simple form of integration involves using the Vp/Vs ratio from receiver functions and the average Vp from refraction measurements to solve for the average Vs of the crust. When refraction/wide-angle reflection data and several nearby receiver functions are available, an integrated 2-D model can be derived. In receiver functions, the PS conversion gives the S-wave travel time (ts) through the crust along the raypath traveled from the Moho to the surface. Since the receiver-function crustal reverberation gives the Vp/Vs ratio, it is also possible to use the arrival time of the converted phase, PS, to solve for the travel time of the direct teleseismic P-wave through the crust along the ray path. Raytracing can yield the point where the teleseismic wave intersects the Moho. In this approach, the conversion point is essentially a pseudo-shotpoint, so the converted arrival at the surface can be jointly modeled with refraction data using a 3-D inversion code. Employing the combined CD-ROM data sets, we will investigate the joint inversion of controlled-source data and receiver functions.
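    The "simple form of integration" described above reduces to one line of arithmetic: dividing the refraction-derived average crustal Vp by the receiver-function-derived Vp/Vs ratio gives the average crustal Vs. A hedged numerical illustration (the values are typical continental-crust numbers, not CD-ROM results):

    ```python
    # Combine refraction and receiver-function constraints on the crust.
    vp_avg = 6.4        # km/s, average crustal Vp from refraction/wide-angle data
    vp_vs_ratio = 1.78  # from receiver-function crustal reverberations
    vs_avg = vp_avg / vp_vs_ratio
    print(f"average crustal Vs ~ {vs_avg:.2f} km/s")  # ~3.60 km/s
    ```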

  8. Probabilistic source mechanism estimation based on body-wave waveforms through shift and stack algorithm

    NASA Astrophysics Data System (ADS)

    Massin, F.; Malcolm, A. E.

    2017-12-01

    Knowing earthquake source mechanisms gives valuable information for earthquake response planning and hazard mitigation. Earthquake source mechanisms can be analyzed using long-period waveform inversion (for moderate-size sources with sufficient signal-to-noise ratio) and body-wave first-motion polarity or amplitude-ratio inversion (for micro-earthquakes with sufficient data coverage). A robust approach that gives both source mechanisms and their associated probabilities across all source scales would greatly simplify the determination of source mechanisms and allow for more consistent interpretations of the results. Following previous work on shift and stack approaches, we develop such a probabilistic source mechanism analysis, using waveforms, which does not require polarity picking. For a given source mechanism, the first period of the observed body waves is selected for all stations, multiplied by their corresponding theoretical polarity and stacked together. (The first period is found from a manually picked travel time by measuring the central period where the signal power is concentrated, using the second moment of the power spectral density function.) As in other shift and stack approaches, our method is not based on the optimization of an objective function through an inversion. Instead, the power of the polarity-corrected stack is a proxy for the likelihood of the trial source mechanism, with the most powerful stack corresponding to the most likely source mechanism. Using synthetic data, we test our method for robustness to data coverage, coverage gaps, signal-to-noise ratio, travel-time picking errors and non-double-couple components. We then present results for field data in a volcano-tectonic context. Our results are reliable when constrained by 15 body wavelets, with an azimuthal gap below 150 degrees, a signal-to-noise ratio over 1 and an arrival-time error below a fifth of the period (0.2T) of the body wave. We demonstrate that the source scanning approach for source mechanism analysis has similar advantages to waveform inversion (full waveform data, no manual intervention, probabilistic approach) and similar applicability to polarity inversion (any source size, any instrument type).
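    The likelihood proxy described above (power of the polarity-corrected stack) can be sketched as below. The form of the stack is inferred from the abstract; windowing, weighting, and normalization choices in the authors' implementation may differ:

    ```python
    import numpy as np

    def stack_power(first_periods, predicted_polarities):
        """Stack power for one trial source mechanism (likelihood proxy).

        first_periods: (n_stations, n_samples) windows containing the first
        period of the observed body wave at each station.
        predicted_polarities: +/-1 per station from the trial mechanism's
        radiation pattern.
        """
        corrected = first_periods * predicted_polarities[:, None]
        stack = corrected.sum(axis=0)     # coherent if polarities are right
        return np.sum(stack ** 2)

    # A grid search over strike/dip/rake keeps the mechanism(s) maximizing
    # stack_power; relative power across the grid gives the probabilities.
    ```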

  9. 3-D acoustic waveform simulation and inversion supplemented by infrasound sensors on a tethered weather balloon at Yasur Volcano, Vanuatu

    NASA Astrophysics Data System (ADS)

    Iezzi, A. M.; Fee, D.; Matoza, R. S.; Jolly, A. D.; Kim, K.; Christenson, B. W.; Johnson, R.; Kilgour, G.; Garaebiti, E.; Austin, A.; Kennedy, B.; Fitzgerald, R.; Gomez, C.; Key, N.

    2017-12-01

    Well-constrained acoustic waveform inversion can provide robust estimates of erupted volume and mass flux, increasing our ability to monitor volcanic emissions (potentially in real time). Previous studies have made assumptions about the multipole source mechanism, which can be represented as the combination of pressure fluctuations from a volume change, directionality, and turbulence. The vertical dipole has not been addressed due to ground-based recording limitations. In this study we deployed a high-density seismo-acoustic network around Yasur Volcano, Vanuatu, including multiple acoustic sensors along a tethered balloon that was moved every 15-60 minutes. Yasur has frequent strombolian eruptions every 1-4 minutes from any one of three active vents within a 400 m diameter crater. Our experiment captured several explosions from each vent at 38 tether locations covering 200° in azimuth and a 50° range of take-off angles (Jolly et al., in review). Additionally, FLIR, FTIR, and a variety of visual imagery were collected during the deployment to aid the seismo-acoustic interpretations. The third (vertical) dimension of pressure-sensor coverage allows us to more completely constrain the acoustic source. Our analysis employs Finite-Difference Time-Domain (FDTD) modeling to obtain the full 3-D Green's functions for each propagation path. This method, following Kim et al. (2015), takes into account realistic topographic scattering based on a high-resolution digital elevation model created using structure-from-motion techniques. We then invert for the source location and multipole source-time function using a grid-search approach. We perform this inversion for multiple events from vents A and C to examine the source characteristics of the vents, including an infrasound-derived volume flux as a function of time. These volume fluxes are then compared to those derived independently from geochemical and seismic inversion techniques. Jolly, A., Matoza, R., Fee, D., Kennedy, B., Iezzi, A., Fitzgerald, R., Austin, A., & Johnson, R. (in review). Kim, K., Fee, D., Yokoo, A., & Lees, J. M. (2015). Acoustic source inversion to estimate volume flux from volcanic explosions. Geophysical Research Letters, 42(13), 5243-5249.

  10. Rapid Monte Carlo Simulation of Gravitational Wave Galaxies

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane L.

    2015-01-01

    With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.

  11. Response Functions for Neutron Skyshine Analyses

    NASA Astrophysics Data System (ADS)

    Gui, Ah Auu

    Neutron and associated secondary-photon line-beam response functions (LBRFs) for point monodirectional neutron sources and related conical line-beam response functions (CBRFs) for azimuthally symmetric neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analyses employing the integral line-beam and integral conical-beam methods. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 degrees. The CBRFs are evaluated at 13 neutron source energies in the same energy range and at 13 source polar angles (1 to 89 degrees). The response functions are approximated by a three-parameter formula that is continuous in source energy and angle through a double linear interpolation scheme. These response-function approximations are available for source-to-detector ranges up to 2450 m and, for the first time, give dose-equivalent responses, which are required for modern radiological assessments. For the CBRF, ground correction factors for neutrons and photons are calculated and approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, a simple correction procedure for humidity effects on the neutron skyshine dose is also proposed. The approximate LBRFs are used with the integral line-beam method to analyze four neutron skyshine problems with simple geometries: (1) an open silo, (2) an infinite wall, (3) a roofless rectangular building, and (4) an infinite air medium. In addition, two simple neutron skyshine problems involving an open source silo are analyzed using the integral conical-beam method. The results obtained using the LBRFs and the CBRFs are then compared with MCNP results and the results of previous studies.
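    The "double linear interpolation scheme" for evaluating fitted response-function parameters between tabulated source energies and emission angles is ordinary bilinear interpolation; a generic sketch follows (the report's actual three-parameter fit formula is not reproduced here, and all names are illustrative):

    ```python
    import numpy as np

    def bilinear(x, y, xs, ys, table):
        """Bilinear interpolation of one tabulated parameter at (x, y).

        xs, ys: ascending grids of source energies and emission angles.
        table: (len(xs), len(ys)) fitted parameter values.
        """
        i = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
        j = np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2)
        tx = (x - xs[i]) / (xs[i + 1] - xs[i])
        ty = (y - ys[j]) / (ys[j + 1] - ys[j])
        return ((1 - tx) * (1 - ty) * table[i, j]
                + tx * (1 - ty) * table[i + 1, j]
                + (1 - tx) * ty * table[i, j + 1]
                + tx * ty * table[i + 1, j + 1])
    ```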

  12. Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images

    PubMed Central

    Wang, John D.; Solo-Gabriele, Helena M.; Abdelzaher, Amir M.; Fleming, Lora E.

    2010-01-01

    Enterococci are used nationwide as a water quality indicator for marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We estimated the respective source functions by developing a methodology for counting individuals, to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of human and animal visitors. The developed method translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for an average day of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent the largest source of enterococci, relative to humans and birds. PMID:20381094
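    The source-function arithmetic is simple: the daily load from each visitor class is the counted number of unique individuals times a per-individual shedding estimate. The sketch below only illustrates the bookkeeping; every number in it is a hypothetical placeholder, not a value from the study:

    ```python
    # Hypothetical daily visitor counts from the camera-image method
    visitors = {"humans": 120, "dogs": 6, "birds": 40}
    # Hypothetical per-individual enterococci contributions (CFU per visit)
    cfu_per_visit = {"humans": 6e5, "dogs": 4.4e8, "birds": 1.6e6}

    daily_load = {k: visitors[k] * cfu_per_visit[k] for k in visitors}
    print(daily_load)  # with these placeholders, dogs dominate despite low counts
    ```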

  13. FieldTrip: Open Source Software for Advanced Analysis of MEG, EEG, and Invasive Electrophysiological Data

    PubMed Central

    Oostenveld, Robert; Fries, Pascal; Maris, Eric; Schoffelen, Jan-Mathijs

    2011-01-01

    This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as a MATLAB toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates reuse in other software packages. PMID:21253357

  14. Source Biases in Magnetotelluric Transfer Functions due to Pc3/Pc4 (~10-100 s) Geomagnetic Activity at Mid-Latitudes

    NASA Astrophysics Data System (ADS)

    Murphy, B. S.; Egbert, G. D.

    2017-12-01

    Discussion of possible bias in magnetotelluric (MT) transfer functions due to the finite spatial scale of external source fields has largely focused on long periods (>1000 s), where skin depths are large, and high latitudes (>60° N), where sources are dominated by narrow electrojets. However, a significant fraction (~15%) of the ~1000 EarthScope USArray apparent resistivity and phase curves exhibit nonphysical "humps" over a narrow period range (typically between 25-60 s) that are suggestive of narrow-band source effects. Maps of locations in the US where these biases are seen support this conclusion: they mostly occur in places where the Earth is highly resistive, such as cratonic regions, where skin depths are largest and hence where susceptibility to bias from short-wavelength sources would be greatest. We have analyzed EarthScope MT time series using cross-phase techniques developed in the space physics community to measure the period of local field line resonances associated with geomagnetic pulsations (Pc's). In most cases the biases occur near the periods of field line resonance determined from this analysis, suggesting that at mid-latitudes (~30°-50° N) Pc's can bias the time-averaged MT transfer functions. Because Pc's have short meridional wavelengths (hundreds of km), even at these relatively short periods the plane-wave assumption of the MT technique may be violated, at least in resistive domains with large skin depths. It is unclear if these biases (generally small) are problematic for MT data inversion, but their presence in the transfer functions is already a useful zeroth-order indicator of resistive regions of the Earth.

  15. Experimental and theoretical study of Rayleigh-Lamb waves in a plate containing a surface-breaking crack

    NASA Technical Reports Server (NTRS)

    Paffenholz, Joseph; Fox, Jon W.; Gu, Xiaobai; Jewett, Greg S.; Datta, Subhendu K.

    1990-01-01

    Scattering of Rayleigh-Lamb waves by a normal surface-breaking crack in a plate has been studied both theoretically and experimentally. The two-dimensionality of the far field, generated by a ball impact source, is exploited to characterize the source function using a direct integration technique. The scattering of waves generated by this impact source by the crack is subsequently solved by employing a Green's function integral expression for the scattered field coupled with a finite element representation of the near field. It is shown that theoretical results of plate response, both in frequency and time, are similar to those obtained experimentally. Additionally, implications for practical applications are discussed.

  16. Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Lalime, Aimee L.; Grosveld, Ferdinand W.; Rizzi, Stephen A.; Sullivan, Brenda M.

    2003-01-01

    Applying binaural simulation techniques to structural acoustic data can be very computationally intensive, as the number of discrete noise sources can be very large. Typically, Head-Related Transfer Functions (HRTFs) are used to individually filter the signals from each of the sources in the acoustic field; creating a binaural simulation therefore implies the use of potentially hundreds of real-time filters. This paper details two methods of reducing the number of real-time computations required: (i) using the singular value decomposition (SVD) to reduce the complexity of the HRTFs by breaking them into dominant singular values and vectors, and (ii) using equivalent source reduction (ESR) to reduce the number of sources to be analyzed in real time by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. The ESR and SVD reduction methods can be combined to provide an estimated computation-time reduction of 99.4% for the structural acoustic data tested. In addition, preliminary tests have shown a 97% correlation between the results of the combined reduction methods and the results found with the current binaural simulation techniques.
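    The SVD reduction of the HRTF bank can be sketched directly: factor the (directions x frequencies) HRTF matrix and keep only the dominant singular triplets, so that only a few filters run in real time while each source contributes scalar weights. A hedged sketch of the idea, not the authors' implementation:

    ```python
    import numpy as np

    def reduce_hrtfs(H, rank):
        """Truncated-SVD factorization of a one-ear HRTF bank.

        H: (n_directions, n_freqs) complex HRTF matrix.
        Returns (U_r, s_r, Vh_r) with H approximated by U_r @ diag(s_r) @ Vh_r:
        the `rank` rows of Vh_r are the only filters needed in real time,
        and each source/direction i supplies the scalar weights U_r[i] * s_r.
        """
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        return U[:, :rank], s[:rank], Vh[:rank, :]
    ```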

  17. SU-F-T-336: A Quick Auto-Planning (QAP) Method for Patient Intensity Modulated Radiotherapy (IMRT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, J; Zhang, Z; Wang, J

    2016-06-15

    Purpose: The aim of this study is to develop a quick auto-planning system that permits fast patient IMRT planning with conformal dose to the target, without manual field alignment or time-consuming dose distribution optimization. Methods: The planning target volumes (PTVs) of the source and the target patient were projected onto the iso-center plane along certain beam's-eye-view directions to derive 2D projected shapes. Assuming the target interior is isotropic, for each beam direction a boundary analysis in polar coordinates was performed to map the source shape boundary to the target shape boundary and derive the source-to-target shape mapping function. The derived shape mapping function was used to morph the source beam aperture to the target beam aperture over all segments in each beam direction. The target beam weights were re-calculated to deliver the same dose to the reference point (iso-center) as the source beam did in the source plan. The approach was tested on two rectum patients (one source patient and one target patient). Results: The IMRT planning time with QAP was 5 seconds on a laptop computer. The dose-volume histograms and the dose distribution showed that the target patient had similar PTV dose coverage and OAR dose sparing to the source patient. Conclusion: The QAP system can instantly and automatically complete IMRT planning without dose optimization.

  18. A high-speed, reconfigurable, channel- and time-tagged photon arrival recording system for intensity-interferometry and quantum optics experiments

    NASA Astrophysics Data System (ADS)

    Girish, B. S.; Pandey, Deepak; Ramachandran, Hema

    2017-08-01

    We present a compact, inexpensive multichannel module, APODAS (Avalanche Photodiode Output Data Acquisition System), capable of detecting 0.8 billion photons per second and providing real-time recording on a computer hard disk of channel- and time-tagged information on the arrival of up to 0.4 billion photons per second. Built around a Virtex-5 Field Programmable Gate Array (FPGA) unit, APODAS offers a temporal resolution of 5 nanoseconds with zero deadtime in data acquisition, utilising an efficient scheme for time and channel tagging and employing Gigabit ethernet for data transfer. Analysis tools have been developed on a Linux platform for multi-fold coincidence studies and time-delayed intensity interferometry. As illustrative examples, the second-order intensity correlation function (g2) of light from two commonly used sources in quantum optics is presented: a coherent laser source, and a dilute atomic vapour emitting spontaneously, which constitutes a thermal source. With easy reconfigurability and no restriction on the total record length, APODAS can readily be used for studies over various time scales. This is demonstrated by using APODAS to reveal Rabi oscillations on nanosecond time scales in the emission of ultracold atoms on the one hand, and, on the other, to measure the second-order correlation function on millisecond time scales from tailored light sources. The efficient and versatile performance of APODAS promises utility in diverse fields, such as quantum optics, quantum communication, nuclear physics, astrophysics and biology.
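    As a reference for the g2 measurements mentioned above, a histogram-based estimator of the second-order correlation from two channels of time tags can be sketched as follows (an offline, hedged illustration; APODAS's on-line FPGA implementation necessarily differs):

    ```python
    import numpy as np

    def g2_from_timetags(t1, t2, bin_width, max_lag):
        """Estimate g2(tau) from two sorted arrays of photon arrival times (s)."""
        edges = np.arange(-max_lag, max_lag + bin_width, bin_width)
        lags = []
        for t in t1:
            # channel-2 photons within +/- max_lag of each channel-1 photon
            lo = np.searchsorted(t2, t - max_lag)
            hi = np.searchsorted(t2, t + max_lag)
            lags.append(t2[lo:hi] - t)
        hist, _ = np.histogram(np.concatenate(lags), bins=edges)
        # normalize by the coincidence rate expected for uncorrelated arrivals
        T = max(t1[-1], t2[-1]) - min(t1[0], t2[0])
        expected = len(t1) * len(t2) * bin_width / T
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, hist / expected
    ```

    For a coherent laser, g2(tau) stays near 1, while a thermal source shows bunching with g2(0) approaching 2, which is the contrast the two example sources in the abstract illustrate.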

  19. Time-dependent clustering analysis of the second BATSE gamma-ray burst catalog

    NASA Technical Reports Server (NTRS)

    Brainerd, J. J.; Meegan, C. A.; Briggs, Michael S.; Pendleton, G. N.; Brock, M. N.

    1995-01-01

    A time-dependent two-point correlation-function analysis of the Burst and Transient Source Experiment (BATSE) 2B catalog finds no evidence of burst repetition. As part of this analysis, we discuss the effects of sky exposure on the observability of burst repetition and present the equation describing the signature of burst repetition in the data. For a model of all burst repetition from a source occurring in less than five days we derive upper limits on the number of bursts in the catalog from repeaters and model-dependent upper limits on the fraction of burst sources that produce multiple outbursts.

  20. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude [Livermore, CA]; Folta, James Allen [Livermore, CA]; Tan, Swie-In [San Jose, CA]; Reiss, Ira [New City, NY]

    2002-07-30

    A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  1. Retrieving rupture history using waveform inversions in time sequence

    NASA Astrophysics Data System (ADS)

    Yi, L.; Xu, C.; Zhang, X.

    2017-12-01

    The rupture history of a large earthquake is generally reconstructed by waveform inversion of seismological records. In the waveform inversion, based on the superposition principle, the rupture process is linearly parameterized. After discretizing the fault plane into sub-faults, the local source time function of each sub-fault is usually parameterized using the multi-time-window method, e.g., mutually overlapping triangular functions. The forward waveform of each sub-fault is then synthesized by convolving its source time function with its Green's function. According to the superposition principle, the forward waveforms generated across the fault plane sum to the recorded waveforms after aligning the arrival times, and the slip history is retrieved by inverting this superposition against each corresponding seismological record. Besides the isolation of the forward waveforms generated by each sub-fault, we also observe that these waveforms are gradually and sequentially superimposed in the recorded waveforms. We therefore propose the idea that the rupture model may be separable into sequential rupture times. Following the constrained-waveform-length method emphasized in our previous work, the length of the waveforms used in the inversion is objectively constrained by the rupture velocity and rise time. One essential prior condition is a predetermined fault plane that limits the rupture duration, which means the waveform inversion is restricted to a pre-set rupture duration time. We therefore propose a strategy to invert the rupture process sequentially, using progressively shifted rupture times as the rupture front expands across the fault plane. We designed a synthetic inversion test of the method's feasibility; the results show the promise of this idea, which requires further investigation.
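    The multi-time-window parameterization described above can be sketched concretely: each sub-fault's source time function is a weighted sum of overlapping triangles, and its forward waveform is that function convolved with the sub-fault's Green's function. Window count, overlap, and delays below are assumptions for illustration, not the authors' settings:

    ```python
    import numpy as np

    def triangle(n_half):
        """Unit-area triangle of half-width n_half samples."""
        tri = np.concatenate([np.arange(1, n_half + 1),
                              np.arange(n_half - 1, 0, -1)]).astype(float)
        return tri / tri.sum()

    def subfault_waveform(green, weights, n_half, rupture_delay):
        """Forward waveform of one sub-fault: (sum of triangles) * Green's fn.

        weights: amplitudes of the mutually overlapping triangle windows;
        rupture_delay: onset sample implied by the rupture-front arrival.
        """
        tri = triangle(n_half)
        stf = np.zeros(len(green))
        for k, w in enumerate(weights):
            start = rupture_delay + k * n_half    # 50%-overlapped windows
            seg = stf[start:start + len(tri)]
            seg += w * tri[:len(seg)]             # in-place add onto the STF
        return np.convolve(green, stf)[:len(green)]
    ```

    Summing `subfault_waveform` over all sub-faults (with the appropriate arrival-time alignment) gives the synthetic record that the inversion matches to the observations, window by window as the rupture front expands.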

  2. Diminishing returns: the influence of experience and environment on time-memory extinction in honey bee foragers.

    PubMed

    Moore, Darrell; Van Nest, Byron N; Seier, Edith

    2011-06-01

    Classical experiments demonstrated that honey bee foragers trained to collect food at virtually any time of day will return to that food source on subsequent days with a remarkable degree of temporal accuracy. This versatile time-memory, based on an endogenous circadian clock, presumably enables foragers to schedule their reconnaissance flights to best take advantage of the daily rhythms of nectar and pollen availability in different species of flowers. It is commonly believed that the time-memory rapidly extinguishes if not reinforced daily, thus enabling foragers to switch quickly from relatively poor sources to more productive ones. On the other hand, it is also commonly thought that extinction of the time-memory is slow enough to permit foragers to 'remember' the food source over a day or two of bad weather. What exactly is the time-course of time-memory extinction? In a series of field experiments, we determined that the level of food-anticipatory activity (FAA) directed at a food source is not rapidly extinguished and, furthermore, the time-course of extinction is dependent upon the amount of experience accumulated by the forager at that source. We also found that FAA is prolonged in response to inclement weather, indicating that time-memory extinction is not a simple decay function but is responsive to environmental changes. These results provide insights into the adaptability of FAA under natural conditions.

  3. Earthquake Directivity, Orientation, and Stress Drop Within the Subducting Plate at the Hikurangi Margin, New Zealand

    NASA Astrophysics Data System (ADS)

    Abercrombie, Rachel E.; Poli, Piero; Bannister, Stephen

    2017-12-01

    We develop an approach to calculate earthquake source directivity and rupture velocity for small earthquakes, using the whole source time function rather than just an estimate of the duration. We apply the method to an aftershock sequence within the subducting plate beneath North Island, New Zealand, and investigate its resolution. We use closely located, highly correlated empirical Green's function (EGF) events to obtain source time functions (STFs) for this well-recorded sequence. We stack the STFs from multiple EGFs at each station, to improve the stability of the STFs. Eleven earthquakes (M 3.3-4.5) have sufficient azimuthal coverage, and both P and S STFs, to investigate directivity. The time axis of each STF in turn is stretched to find the maximum correlation between all pairs of stations. We then invert for the orientation and rupture velocity of both unilateral and bilateral line sources that best match the observations. We determine whether they are distinguishable and investigate the effects of limited frequency bandwidth. Rupture orientations are resolvable for eight earthquakes, seven of which are predominantly unilateral, and all are consistent with rupture on planes similar to the main shock fault plane. Purely unilateral rupture is rarely distinguishable from asymmetric bilateral rupture, despite a good station distribution. Synthetic testing shows that rupture velocity is the least well-resolved parameter; estimates decrease with loss of high-frequency energy, and measurements are best considered minimum values. We see no correlation between rupture velocity and stress drop, and spatial stress drop variation cannot be explained as an artifact of varying rupture velocity.
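    The pairwise time-axis stretching step described above can be sketched as a one-dimensional grid search: resample one STF with trial stretch factors and keep the factor maximizing the correlation with the other. A hedged sketch (the grid of factors and the normalization are assumptions):

    ```python
    import numpy as np

    def best_stretch(stf_a, stf_b, factors):
        """Return (stretch factor, correlation) best matching stf_b to stf_a."""
        n = len(stf_a)
        t = np.arange(n, dtype=float)
        a = stf_a - stf_a.mean()
        best = (None, -np.inf)
        for f in factors:
            # stretched copy: b(t) = stf_b(t / f), zero outside the record
            b = np.interp(t, t * f, stf_b, left=0.0, right=0.0)
            b = b - b.mean()
            cc = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-30)
            if cc > best[1]:
                best = (f, cc)
        return best
    ```

    The azimuthal pattern of best-fitting stretch factors across stations is what the subsequent line-source inversion for orientation and rupture velocity fits.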

  4. Development of an Implantable Pudendal Nerve Stimulator To Restore Bladder Function in Humans After SCI

    DTIC Science & Technology

    2016-10-01

    A new version of the stimulator will be manufactured and tested again. This design-build-test cycle will be repeated multiple times during the second… (Award number: W81XWH-15-C-0066.)

  5. Skeletal Muscle Hypertrophy and Cardiometabolic Benefits after Spinal Cord Injury

    DTIC Science & Technology

    2016-10-01

    Keywords: body composition and metabolism, functional electrical stimulation, immunochemistry, skeletal muscles, inflammatory biomarkers, dual-energy X-ray… 1. INTRODUCTION: Forty-eight participants will be randomly assigned into neuromuscular electrical stimulation + functional electrical…

  6. Network Coding for Function Computation

    ERIC Educational Resources Information Center

    Appuswamy, Rathinakumar

    2011-01-01

    In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…

  7. Molar Functional Relations and Clinical Behavior Analysis: Implications for Assessment and Treatment

    ERIC Educational Resources Information Center

    Waltz, Thomas J.; Follette, William C.

    2009-01-01

    The experimental analysis of behavior has identified several molar functional relations that are highly relevant to clinical behavior analysis. These include matching, discounting, momentum, and variability. Matching provides a broader analysis of how multiple sources of reinforcement influence how individuals choose to allocate their time and…

  8. Model space exploration for determining landslide source history from long period seismic data

    NASA Astrophysics Data System (ADS)

    Zhao, J.; Mangeney, A.; Stutzmann, E.; Capdeville, Y.; Moretti, L.; Calder, E. S.; Smith, P. J.; Cole, P.; Le Friant, A.

    2012-12-01

    The seismic signals generated by high-magnitude landslide events can be recorded at remote stations, providing access to the landslide process. During the "Boxing Day" eruption at Montserrat in 1997, the long-period seismic signals generated by the debris avalanche were recorded by two stations at distances of 450 km and 1261 km. We investigate the landslide process considering that the landslide source can be described by single forces. The period band 25-50 s is selected, for which the landslide signal is clearly visible at the two stations. We first use the transverse component of the closest station to determine the horizontal forces. We model the seismogram by normal-mode summation and explore the model space. Two horizontal forces are found that best fit the data. These two horizontal forces have similar amplitude but opposite direction, and they are separated in time by 70 s. The radiation pattern of the transverse component does not allow the exact azimuth of these forces to be determined. We then model the vertical component of the seismograms, which allows us to retrieve both the vertical and horizontal forces. Using the parameters previously determined (amplitude ratio and time shift of the two horizontal forces), we further explore the model space and show that a single vertical force together with the two horizontal forces is enough to fit the data. The complete source time function can be described as follows: a horizontal force directed opposite to the landslide flow is followed 40 s later by a vertical downward force, and 30 s later still by a horizontal force directed along the flow. The volume of the landslide estimated from the force magnitude is compatible with the volume determined by field survey. Directly inverting the seismograms in the 25-50 s period band retrieves a source time function consistent with the three forces determined previously; the source time function in this narrow period band alone, however, does not easily allow the corresponding single forces to be recovered. This method can be used to determine the source parameters using only two distant stations. It has also been successfully tested on other landslides, such as the Mount St. Helens (1980) and Mount Steller (2005) events, which were recorded by more broadband stations.

  9. Transfer function analysis of thermospheric perturbations

    NASA Technical Reports Server (NTRS)

    Mayr, H. G.; Harris, I.; Varosi, F.; Herrero, F. A.; Spencer, N. W.

    1986-01-01

    Applying perturbation theory, a spectral model in terms of vector spherical harmonics (Legendre polynomials) is used to describe the short-term thermospheric perturbations originating in the auroral regions. The source may be Joule heating, particle precipitation or ExB ion drift-momentum coupling. A multiconstituent atmosphere is considered, allowing for the collisional momentum exchange between species including Ar, O2, N2, O, He and H. The coupled equations of energy, mass and momentum conservation are solved simultaneously for the major species N2 and O. Applying homogeneous boundary conditions, the integration is carried out from the Earth's surface up to 700 km. In the analysis, the spherical harmonics are treated as eigenfunctions, assuming that the Earth's rotation (and prevailing circulation) do not significantly affect perturbations with periods which are typically much less than one day. Under these simplifying assumptions, and given a particular source distribution in the vertical, a two-dimensional transfer function is constructed to describe the three-dimensional response of the atmosphere. In order of increasing horizontal wave number (order of the polynomials), this transfer function reveals five components. Compiling the transfer function is computationally very time consuming (about 100 hours on a VAX for one particular vertical source distribution). However, given the transfer function, the atmospheric response in space and time (using a Fourier integral representation) can be constructed in a few seconds of CPU time. This model is applied in a case study of wind and temperature measurements on Dynamics Explorer B, which show features characteristic of a ringlike excitation source in the auroral oval. The data can be interpreted as gravity waves which are focused (and amplified) in the polar region and then reflected to propagate toward lower latitudes.

  10. Transfer function analysis of thermospheric perturbations

    NASA Astrophysics Data System (ADS)

    Mayr, H. G.; Harris, I.; Varosi, F.; Herrero, F. A.; Spencer, N. W.

    1986-06-01

    Applying perturbation theory, a spectral model in terms of vector spherical harmonics (Legendre polynomials) is used to describe the short-term thermospheric perturbations originating in the auroral regions. The source may be Joule heating, particle precipitation or ExB ion drift-momentum coupling. A multiconstituent atmosphere is considered, allowing for the collisional momentum exchange between species including Ar, O2, N2, O, He and H. The coupled equations of energy, mass and momentum conservation are solved simultaneously for the major species N2 and O. Applying homogeneous boundary conditions, the integration is carried out from the Earth's surface up to 700 km. In the analysis, the spherical harmonics are treated as eigenfunctions, assuming that the Earth's rotation (and prevailing circulation) do not significantly affect perturbations with periods which are typically much less than one day. Under these simplifying assumptions, and given a particular source distribution in the vertical, a two-dimensional transfer function is constructed to describe the three-dimensional response of the atmosphere. In order of increasing horizontal wave number (order of the polynomials), this transfer function reveals five components. Compiling the transfer function is computationally very time consuming (about 100 hours on a VAX for one particular vertical source distribution). However, given the transfer function, the atmospheric response in space and time (using a Fourier integral representation) can be constructed in a few seconds of CPU time. This model is applied in a case study of wind and temperature measurements on Dynamics Explorer B, which show features characteristic of a ringlike excitation source in the auroral oval. The data can be interpreted as gravity waves which are focused (and amplified) in the polar region and then reflected to propagate toward lower latitudes.

  11. Real-time digital signal recovery for a multi-pole low-pass transfer function system.

    PubMed

    Lee, Jhinhwan

    2017-08-01

    In order to solve the problems of waveform distortion and signal delay by many physical and electrical systems with multi-pole linear low-pass transfer characteristics, a simple digital-signal-processing (DSP)-based method of real-time recovery of the original source waveform from the distorted output waveform is proposed. A mathematical analysis on the convolution kernel representation of the single-pole low-pass transfer function shows that the original source waveform can be accurately recovered in real time using a particular moving average algorithm applied on the input stream of the distorted waveform, which can also significantly reduce the overall delay time constant. This method is generalized for multi-pole low-pass systems and has noise characteristics of the inverse of the low-pass filter characteristics. This method can be applied to most sensors and amplifiers operating close to their frequency response limits to improve the overall performance of data acquisition systems and digital feedback control systems.
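    For the single-pole case, the recovery admits a compact closed form: discretizing y[n] = a*y[n-1] + (1-a)*x[n] with a = exp(-dt/tau) and solving for x gives a causal, sample-by-sample inverse. This is a hedged sketch of the idea (the paper's moving-average formulation and its multi-pole generalization may differ in detail), and, as the abstract notes, the inverse amplifies high-frequency noise:

    ```python
    import numpy as np

    def recover_single_pole(y, dt, tau):
        """Real-time recovery of the input to a single-pole low-pass system.

        y: measured (distorted) output samples; dt: sample interval (s);
        tau: the system's low-pass time constant (s).
        """
        a = np.exp(-dt / tau)
        x = np.empty_like(y, dtype=float)
        x[0] = y[0]
        # exact inverse of y[n] = a*y[n-1] + (1-a)*x[n]
        x[1:] = (y[1:] - a * y[:-1]) / (1.0 - a)
        return x
    ```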

  12. Infrasound Predictions Using the Weather Research and Forecasting Model: Atmospheric Green's Functions for the Source Physics Experiments 1-6.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poppeliers, Christian; Aur, Katherine Anderson; Preston, Leiph

    This report shows the results of constructing predictive atmospheric models for the Source Physics Experiments 1-6. Historic atmospheric data are combined with topography to construct an atmospheric model that corresponds to the predicted (or actual) time of a given SPE event. The models are ultimately used to construct atmospheric Green's functions for subsequent analysis. We present three atmospheric models for each SPE event: an average model based on ten one-hour snapshots of the atmosphere, and two extrema models corresponding to the warmest, coolest, windiest, etc., atmospheric snapshots. The atmospheric snapshots consist of wind, temperature, and pressure profiles of the atmosphere for a one-hour time window centered at the time of the predicted SPE event, as well as nine additional snapshots, one for each of the nine preceding years, centered at the time and day of the SPE event.

  13. The Galex Time Domain Survey. I. Selection And Classification of Over a Thousand Ultraviolet Variable Sources

    NASA Technical Reports Server (NTRS)

    Gezari, S.; Martin, D. C.; Forster, K.; Neill, J. D.; Huber, M.; Heckman, T.; Bianchi, L.; Morrissey, P.; Neff, S. G.; Seibert, M.; et al.

    2013-01-01

    We present the selection and classification of over a thousand ultraviolet (UV) variable sources discovered in approximately 40 deg^2 of GALEX Time Domain Survey (TDS) NUV images observed with a cadence of 2 days and a baseline of observations of approximately 3 years. The GALEX TDS fields were designed to be in spatial and temporal coordination with the Pan-STARRS1 Medium Deep Survey, which provides deep optical imaging and simultaneous optical transient detections via image differencing. We characterize the GALEX photometric errors empirically as a function of mean magnitude, and select sources that vary at the 5 sigma level in at least one epoch. We measure the statistical properties of the UV variability, including the structure function on timescales of days and years. We report classifications for the GALEX TDS sample using a combination of optical host colors and morphology, UV light curve characteristics, and matches to archival X-ray and spectroscopy catalogs. We classify 62% of the sources as active galaxies (358 quasars and 305 active galactic nuclei), and 10% as variable stars (including 37 RR Lyrae, 53 M dwarf flare stars, and 2 cataclysmic variables). We detect a large-amplitude tail in the UV variability distribution for M-dwarf flare stars and RR Lyrae, reaching up to |Δm| = 4.6 mag and 2.9 mag, respectively. The mean amplitude of the structure function for quasars on year timescales is five times larger than observed at optical wavelengths. The remaining unclassified sources include UV-bright extragalactic transients, two of which have been spectroscopically confirmed to be a young core-collapse supernova and a flare from the tidal disruption of a star by a dormant supermassive black hole. We calculate a surface density for variable sources in the UV with NUV < 23 mag and |Δm| > 0.2 mag of approximately 8.0, 7.7, and 1.8 deg^-2 for quasars, active galactic nuclei, and RR Lyrae stars, respectively. We also calculate a surface density rate in the UV for transient sources, using the effective survey time at the cadence appropriate to each class, of approximately 15 and 52 deg^-2 yr^-1 for M dwarfs and extragalactic transients, respectively.
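    The first-order structure function used above to characterize UV variability can be estimated directly from a light curve as the mean magnitude difference binned by time lag. A generic sketch (binning and statistic choices are assumptions, not the GALEX TDS pipeline's):

    ```python
    import numpy as np

    def structure_function(times, mags, lag_edges):
        """Mean |delta m| in each time-lag bin for one light curve."""
        dt = np.abs(times[:, None] - times[None, :])
        dm = np.abs(mags[:, None] - mags[None, :])
        iu = np.triu_indices(len(times), k=1)    # each epoch pair once
        dt, dm = dt[iu], dm[iu]
        idx = np.digitize(dt, lag_edges)
        return np.array([dm[idx == k].mean() if np.any(idx == k) else np.nan
                         for k in range(1, len(lag_edges))])
    ```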

  14. Photoacoustic effect generated by moving optical sources: Motion in one dimension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Wenyu; Diebold, Gerald J.

    2016-03-28

    Although the photoacoustic effect is typically generated by pulsed or amplitude-modulated optical beams, it is clear from examination of the wave equation for pressure that motion of an optical source in space will result in the production of sound as well. Here, the properties of the photoacoustic effect generated by moving sources in one dimension are investigated. The cases of a moving Gaussian beam, an oscillating delta-function source, and an accelerating Gaussian optical source are reported. The salient feature of one-dimensional sources in the linear acoustic limit is that the amplitude of the beam increases in time without bound.

  15. A Real-Time Linux for Multicore Platforms

    DTIC Science & Technology

    2013-12-20

    LITMUS-RT (LInux Testbed for MUltiprocessor Scheduling in Real-Time systems), developed in part under ARO support, is a fully-functional OS for supporting real-time workloads on multicore platforms. It allows different multiprocessor real-time scheduling and synchronization policies to be specified as plugin components. LITMUS-RT is open-source software.

  16. Oscillator Noise Analysis

    NASA Astrophysics Data System (ADS)

    Demir, Alper

    2005-08-01

    Oscillators are key components of many kinds of systems, particularly electronic and opto-electronic systems. Undesired perturbations, i.e. noise, that exist in practical systems adversely affect the spectral and timing properties of the signals generated by oscillators resulting in phase noise and timing jitter. These are key performance limiting factors, being major contributors to bit-error-rate (BER) of RF and optical communication systems, and creating synchronization problems in clocked and sampled-data electronic systems. In noise analysis for oscillators, the key is figuring out how the various disturbances and noise sources in the oscillator end up as phase fluctuations. In doing so, one first computes transfer functions from the noise sources to the oscillator phase, or the sensitivity of the oscillator phase to these noise sources. In this paper, we first provide a discussion explaining the origins and the proper definition of this transfer or sensitivity function, followed by a critical review of the various numerical techniques for its computation that have been proposed by various authors over the past fifteen years.

  17. Principles of time evolution in classical physics

    NASA Astrophysics Data System (ADS)

    Güémez, J.; Fiolhais, M.

    2018-07-01

    We address principles of time evolution in classical mechanical/thermodynamical systems in translational and rotational motion, in three cases: when there is conservation of mechanical energy, when there is energy dissipation and when there is mechanical energy production. In the first case, the time derivative of the Hamiltonian vanishes. In the second one, when dissipative forces are present, the time evolution is governed by the minimum potential energy principle, or, equivalently, maximum increase of the entropy of the universe. Finally, in the third situation, when internal sources of work are available to the system, it evolves in time according to the principle of minimum Gibbs function. We apply the Lagrangian formulation to the systems, dealing with the non-conservative forces using restriction functions such as the Rayleigh dissipative function.
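    For reference, the standard way dissipative forces enter this Lagrangian treatment is through a Rayleigh dissipation function; the textbook relations (not quoted from the paper) are:

    ```latex
    \frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\partial L}{\partial \dot{q}_i}\right)
      - \frac{\partial L}{\partial q_i}
      = -\frac{\partial \mathcal{F}}{\partial \dot{q}_i},
    \qquad
    \mathcal{F} = \tfrac{1}{2}\sum_i b_i\,\dot{q}_i^{\,2},
    ```

    so that for linear (viscous) damping the generalized friction force on coordinate q_i is -b_i dq_i/dt.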

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stinnett, Jacob; Vo, Duc Ta

    Significant peak shifts were noted in a laboratory LaBr3 detector. To investigate these issues, three LaBr3 detectors were used to collect spectra of Cs-137 with either Co-57, Co-60, or no secondary source included. The cobalt source locations were varied to control the deadtime, while the Cs-137 source remained in a fixed position relative to the detectors. Each setup was measured with a 0.8 μs and a 3.2 μs shaping time. All spectra were measured for a 100 second live time. All three LaBr3 detectors experienced peak shifting as a function of deadtime and gamma-ray energy. However, the first detector (Detector A, described below) had significantly more severe peak shifting, which was also affected by the shaping time.

  19. Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; Sigloch, Karin

    2016-11-01

    Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D = 1 - CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. By identifying and quantifying this likelihood function, we make D and thus waveform cross-correlation measurements usable for fully probabilistic sampling strategies, in source inversion and related applications such as seismic tomography.

  20. Real-time Recovery Efficiencies and Performance of the Palomar Transient Factory's Transient Discovery Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frohmaier, C.; Sullivan, M.; Nugent, P. E.

    In this paper, we present the transient source detection efficiencies of the Palomar Transient Factory (PTF), parameterizing the number of transients that PTF found versus the number of similar transients that occurred over the same period in the survey search area but were missed. PTF was an optical sky survey carried out with the Palomar 48 inch telescope over 2009–2012, observing more than 8000 square degrees of sky with cadences of between one and five days, locating around 50,000 non-moving transient sources, and spectroscopically confirming around 1900 supernovae. We assess the effectiveness with which PTF detected transient sources by inserting ≃7 million artificial point sources into real PTF data. We then study the efficiency with which the PTF real-time pipeline recovered these sources as a function of the source magnitude, host galaxy surface brightness, and various observing conditions (using proxies for seeing, sky brightness, and transparency). The product of this study is a multi-dimensional recovery efficiency grid appropriate for the range of observing conditions that PTF experienced, which can then be used for studies of the rates, environments, and luminosity functions of different transient types using detailed Monte Carlo simulations. Finally, we illustrate the technique using the observationally well-understood class of type Ia supernovae.

  1. Real-time Recovery Efficiencies and Performance of the Palomar Transient Factory's Transient Discovery Pipeline

    DOE PAGES

    Frohmaier, C.; Sullivan, M.; Nugent, P. E.; ...

    2017-05-09

    In this paper, we present the transient source detection efficiencies of the Palomar Transient Factory (PTF), parameterizing the number of transients that PTF found versus the number of similar transients that occurred over the same period in the survey search area but were missed. PTF was an optical sky survey carried out with the Palomar 48 inch telescope over 2009–2012, observing more than 8000 square degrees of sky with cadences of between one and five days, locating around 50,000 non-moving transient sources, and spectroscopically confirming around 1900 supernovae. We assess the effectiveness with which PTF detected transient sources, by inserting ≃7 million artificial point sources into real PTF data. We then study the efficiency with which the PTF real-time pipeline recovered these sources as a function of the source magnitude, host galaxy surface brightness, and various observing conditions (using proxies for seeing, sky brightness, and transparency). The product of this study is a multi-dimensional recovery efficiency grid appropriate for the range of observing conditions that PTF experienced and that can then be used for studies of the rates, environments, and luminosity functions of different transient types using detailed Monte Carlo simulations. Finally, we illustrate the technique using the observationally well-understood class of type Ia supernovae.

  2. Predicting vertically-nonsequential wetting patterns with a source-responsive model

    USGS Publications Warehouse

    Nimmo, John R.; Mitchell, Lara

    2013-01-01

    Water infiltrating into soil of natural structure often causes wetting patterns that do not develop in an orderly sequence. Because traditional unsaturated flow models represent a water advance that proceeds sequentially, they fail to predict irregular development of water distribution. In the source-responsive model, a diffuse domain (D) represents flow within soil matrix material following traditional formulations, and a source-responsive domain (S), characterized in terms of the capacity for preferential flow and its degree of activation, represents preferential flow as it responds to changing water-source conditions. In this paper we assume water undergoing rapid source-responsive transport at any particular time is of negligibly small volume; it becomes sensible at the time and depth where domain transfer occurs. A first-order transfer term represents abstraction from the S to the D domain which renders the water sensible. In tests with lab and field data, for some cases the model shows good quantitative agreement, and in all cases it captures the characteristic patterns of wetting that proceed nonsequentially in the vertical direction. In these tests we determined the values of the essential characterizing functions by inverse modeling. These functions relate directly to observable soil characteristics, rendering them amenable to evaluation and improvement through hydropedologic development.

  3. Evaluating screening effects and tsunami danger in bays

    NASA Astrophysics Data System (ADS)

    Ivanov, V. V.; Simonov, K. V.; Garder, O. I.

    1985-06-01

    In selecting sites for new construction in the Kuril Islands it is important to evaluate the tsunami danger of the pertinent parts of the coastline. Recommendations for the Kuril Islands have been published, but they are only preliminary. An effort has now been made to improve them by formulating a more adequate model of the source and defining those peculiarities of a bay's specific position that exert the most significant influence on the formation of the maximum tsunami wave in the analyzed coastal zone. The analysis was based on observational data for the Kamchatka tsunami of 1952, which was catastrophic for the shores of Kamchatka and the Kuril Islands. The data used were for Pearl Harbor, Honolulu and Hilo. The processing method involved breakdown of the record into the signal at the source and the impulse function for penetration of the wave into a bay. It was found that the record can be represented as the convolution of the source function, common to all the records of one tsunami, and the impulse function for the propagation path, specific to each bay. It was found that the signal at the tsunami source is a periodic process with beats of great duration and a relatively narrow spectrum. The impulse function for the paths for closed bays contains a small number of oscillations and varies on characteristic times of the order of 1 to 1.5 hours. The characteristic time of tsunami filling of a bay is important to know for shielding the bay against a tsunami wave.
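
    The decomposition described above is a convolutional model: each tide-gauge record is the common tsunami source signal convolved with a bay-specific impulse (path) function. A minimal forward-model sketch with synthetic stand-in waveforms:

        import numpy as np

        dt = 60.0                            # sample interval, s
        t = np.arange(0, 6 * 3600, dt)       # six hours of record

        # narrow-band source signal with long-duration beats
        source = np.sin(2 * np.pi * t / 900.0) * np.sin(2 * np.pi * t / 7200.0)

        # bay impulse function: a few oscillations decaying over ~1-1.5 h
        imp = (np.exp(-t / 3000.0) * np.sin(2 * np.pi * t / 1800.0))[: int(1.5 * 3600 / dt)]

        record = np.convolve(source, imp)[: t.size] * dt   # synthetic tide-gauge record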

  4. Synthesis procedure for linear time-varying feedback systems with large parameter ignorance

    NASA Technical Reports Server (NTRS)

    Mcdonald, T. E., Jr.

    1972-01-01

    The development of synthesis procedures for linear time-varying feedback systems is considered. It is assumed that the plant can be described by linear differential equations with time-varying coefficients; however, ignorance is associated with the plant in that only the ranges of the time variations are known rather than the exact functional relationships. As a result of this plant ignorance the use of time-varying compensation is ineffective, so only time-invariant compensation is employed. In addition, there is a noise source at the plant output which feeds noise through the feedback elements to the plant input. Because of this noise source the gain of the feedback elements must be as small as possible. No attempt is made to develop a stability criterion for time-varying systems in this work.

  5. Towards full waveform ambient noise inversion

    NASA Astrophysics Data System (ADS)

    Sager, Korbinian; Ermert, Laura; Boehm, Christian; Fichtner, Andreas

    2018-01-01

    In this work we investigate fundamentals of a method—referred to as full waveform ambient noise inversion—that improves the resolution of tomographic images by extracting waveform information from interstation correlation functions that cannot be used without knowing the distribution of noise sources. The fundamental idea is to drop the principle of Green function retrieval and to establish correlation functions as self-consistent observables in seismology. This involves the following steps: (1) We introduce an operator-based formulation of the forward problem of computing correlation functions. It is valid for arbitrary distributions of noise sources in both space and frequency, and for any type of medium, including 3-D elastic, heterogeneous and attenuating media. In addition, the formulation allows us to keep the derivations independent of time and frequency domain and it facilitates the application of adjoint techniques, which we use to derive efficient expressions to compute first and also second derivatives. The latter are essential for a resolution analysis that accounts for intra- and interparameter trade-offs. (2) In a forward modelling study we investigate the effect of noise sources and structure on different observables. Traveltimes are hardly affected by heterogeneous noise source distributions. On the other hand, the amplitude asymmetry of correlations is at least to first order insensitive to unmodelled Earth structure. Energy and waveform differences are sensitive to both structure and the distribution of noise sources. (3) We design and implement an appropriate inversion scheme, where the extraction of waveform information is successively increased. We demonstrate that full waveform ambient noise inversion has the potential to go beyond ambient noise tomography based on Green function retrieval and to refine noise source location, which is essential for a better understanding of noise generation. Inherent trade-offs between source and structure are quantified using Hessian-vector products.
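
    One of the observables above, the amplitude asymmetry of a correlation function, is simple to compute once a correlation has been formed. The sketch below uses two synthetic traces and takes the asymmetry as the log ratio of causal to acausal energy; both the traces and this exact definition are illustrative assumptions, not the paper's implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 4096
        u1 = rng.standard_normal(n)
        u2 = np.roll(u1, 25) + 0.5 * rng.standard_normal(n)   # delayed copy + noise

        corr = np.correlate(u1, u2, mode="full")              # lags -(n-1)..(n-1)
        lags = np.arange(-n + 1, n)

        causal, acausal = corr[lags > 0], corr[lags < 0]
        A = np.log(np.sum(causal ** 2) / np.sum(acausal ** 2))   # energy asymmetry
        print("asymmetry:", A)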

  6. Source spectra of the first four Source Physics Experiments (SPE) explosions from the frequency-domain moment-tensor inversion

    DOE PAGES

    Yang, Xiaoning

    2016-08-01

    In this study, I used seismic waveforms recorded within 2 km from the epicenter of the first four Source Physics Experiments (SPE) explosions to invert for the moment-tensor spectra of these explosions. I employed a one-dimensional (1D) Earth model for Green's function calculations. The model was developed from P- and Rg-wave travel times and amplitudes. I selected data for the inversion based on the criterion that they had travel times and amplitude behavior consistent with those predicted by the 1D model. Due to limited azimuthal coverage of the sources and the mostly vertical-component-only nature of the dataset, only long-period, volumetric components of the moment-tensor spectra were well constrained.

  7. Sound source localization identification accuracy: Envelope dependencies.

    PubMed

    Yost, William A

    2017-07-01

    Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.

  8. Computational methods for analyzing the transmission characteristics of a beta particle magnetic analysis system

    NASA Technical Reports Server (NTRS)

    Singh, J. J.

    1979-01-01

    Computational methods were developed to study the trajectories of beta particles (positrons) through a magnetic analysis system as a function of the spatial distribution of the radionuclides in the beta source, the size and shape of the source collimator, and the strength of the analyzer magnetic field. On the basis of these methods, the particle flux, energy spectrum, and source-to-target transit times have been calculated for Na-22 positrons as a function of the analyzer magnetic field and the size and location of the target. These data are used in studies requiring parallel beams of positrons of uniform energy, such as measurement of the moisture distribution in composite materials. Computer programs for obtaining various trajectories are included.

  9. Social Functioning and Adjustment in Chinese Children: The Imprint of Historical Time

    ERIC Educational Resources Information Center

    Chen, Xinyin; Cen, Guozhen; Li, Dan; He, Yunfeng

    2005-01-01

    This study examined, in 3 cohorts (1990, 1998, and 2002) of elementary school children (M age10 years), relations between social functioning and adjustment in different phases of the societal transition in China. Data were obtained from multiple sources. The results indicate that sociability-cooperation was associated with peer acceptance and…

  10. SNAP: A computer program for generating symbolic network functions

    NASA Technical Reports Server (NTRS)

    Lin, P. M.; Alderson, G. E.

    1970-01-01

    The computer program SNAP (symbolic network analysis program) generates symbolic network functions for networks containing R, L, and C type elements and all four types of controlled sources. The program is efficient with respect to program storage and execution time. A discussion of the basic algorithms is presented, together with user's and programmer's guides.
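
    As a modern illustration of what a symbolic network function is, the sketch below derives the transfer function of a one-pole RC network with sympy. This is only an analogy to SNAP's output; SNAP itself was a Fortran-era program and its algorithms are not reproduced here.

        import sympy as sp

        R, C, s = sp.symbols('R C s', positive=True)
        Zc = 1 / (s * C)                      # capacitor impedance
        H = sp.simplify(Zc / (R + Zc))        # voltage-divider network function
        print(H)                              # -> 1/(C*R*s + 1)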

  11. MultiElec: A MATLAB Based Application for MEA Data Analysis.

    PubMed

    Georgiadis, Vassilis; Stephanou, Anastasis; Townsend, Paul A; Jackson, Thomas R

    2015-01-01

    We present MultiElec, an open source MATLAB based application for data analysis of microelectrode array (MEA) recordings. MultiElec displays an extremely user-friendly graphic user interface (GUI) that allows the simultaneous display and analysis of voltage traces for 60 electrodes and includes functions for activation-time determination and the production of activation-time heat maps with isoline display. Furthermore, local conduction velocities are semi-automatically calculated along with their corresponding vector plots. MultiElec allows ad hoc signal suppression, enabling the user to easily and efficiently handle signal artefacts and to analyse incomplete data sets. Voltage traces and heat maps can be simply exported for figure production and presentation. In addition, our platform is able to produce 3D videos of signal progression over all 60 electrodes. Functions are controlled entirely by a single GUI with no need for command line input or any understanding of MATLAB code. MultiElec is open source under the terms of the GNU General Public License as published by the Free Software Foundation, version 3. Both the program and source code are available to download from http://www.cancer.manchester.ac.uk/MultiElec/.

  12. Functional source separation and hand cortical representation for a brain–computer interface feature extraction

    PubMed Central

    Tecchio, Franca; Porcaro, Camillo; Barbati, Giulia; Zappasodi, Filippo

    2007-01-01

    A brain–computer interface (BCI) can be defined as any system that can track a person's intent, which is embedded in his/her brain activity, and from it alone translate the intention into commands of a computer. Among the brain signal monitoring systems best suited for this challenging task, electroencephalography (EEG) and magnetoencephalography (MEG) are the most realistic, since both are non-invasive, EEG is portable, and MEG could provide more specific information that could later be exploited also through EEG signals. The first two BCI steps require setting up the appropriate experimental protocol while recording the brain signal and then extracting interesting features from the recorded cerebral activity. To provide information useful in these BCI stages, our aim is to give an overview of a new procedure we recently developed, named functional source separation (FSS). As it derives from blind source separation algorithms, it exploits the most valuable information provided by the electrophysiological techniques, i.e. the waveform signal properties, while remaining blind to the biophysical nature of the signal sources. FSS returns the single-trial source activity, estimates the time course of a neuronal pool across different experimental states on the basis of a specific functional requirement in a specific time period, and uses simulated annealing as the optimization procedure, allowing the use of non-differentiable functional constraints. Moreover, a minor section is included, devoted to information acquired by MEG in stroke patients, to guide BCI applications aiming at sustaining motor behaviour in these patients. Relevant BCI features – spatial and time-frequency properties – are in fact altered by a stroke in the regions devoted to hand control. Moreover, a method to investigate the relationship between sensory and motor hand cortical network activities is described, providing information useful to develop BCI feedback control systems. This review provides a description of the FSS technique, a promising tool for the BCI community for online electrophysiological feature extraction, and offers interesting information to develop BCI applications to sustain hand control in stroke patients. PMID:17331989

  13. The influences of delay time on the stability of a market model with stochastic volatility

    NASA Astrophysics Data System (ADS)

    Li, Jiang-Cheng; Mei, Dong-Cheng

    2013-02-01

    The effects of the delay time on the stability of a market model are investigated by using a modified Heston model with a cubic nonlinearity and cross-correlated noise sources. The results indicate that: (i) there is an optimal delay time τo which maximally enhances the stability of the stock price under strong demand elasticity of stock price, and maximally reduces it under weak demand elasticity; (ii) the cross correlation coefficient of the noises and the delay time play opposite roles in the stability for delay times below τo and the same role for delay times above τo. Moreover, the probability density function of the escape time of stock price returns, the probability density function of the returns, and the correlation function of the returns are compared with those reported in other studies.

  14. PyEEG: an open source Python module for EEG/MEG feature extraction.

    PubMed

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.

  15. Voltage controlled current source

    DOEpatents

    Casne, Gregory M.

    1992-01-01

    A seven decade, voltage controlled current source is described for use in testing intermediate range nuclear instruments that covers the entire test current range from 10 picoamperes to 100 microamperes. High accuracy is obtained throughout the entire seven decades of output current with circuitry that includes a coordinated switching scheme responsive to the input signal from a hybrid computer to control the input voltage to an antilog amplifier, and to selectively connect a resistance to the antilog amplifier output to provide a continuous output current source as a function of a preset range of input voltage. An operator controlled switch provides current adjustment for operation in either a real-time simulation test mode or a time response test mode.
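
    The antilog (exponential) stage is what lets a linear input voltage span seven decades of output current. A numeric sketch of the mapping, with a hypothetical scaling of one decade per volt and 10 pA at 0 V (the patent's actual constants are not stated here):

        I0 = 10e-12                     # 10 pA at 0 V input (assumed)

        def output_current(v):
            return I0 * 10.0 ** v       # one decade of current per volt (assumed)

        for v in range(8):
            print(f"{v} V -> {output_current(v):.1e} A")
        # 0 V -> 1.0e-11 A ... 7 V -> 1.0e-04 A: the full seven-decade range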

  16. PyEEG: An Open Source Python Module for EEG/MEG Feature Extraction

    PubMed Central

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction. PMID:21512582
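
    As a flavour of the kind of feature such a module computes, the sketch below extracts relative spectral power in the standard EEG bands using plain numpy. PyEEG's actual function names and signatures may differ; this is illustrative only.

        import numpy as np

        def band_powers(x, fs, bands=((0.5, 4), (4, 8), (8, 13), (13, 30))):
            """Relative power of each (lo, hi) frequency band in x sampled at fs."""
            spec = np.abs(np.fft.rfft(x)) ** 2
            freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
            p = np.array([spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])
            return p / p.sum()

        fs = 256.0
        t = np.arange(0, 4, 1 / fs)
        eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # alpha-dominated toy trace
        print(band_powers(eeg, fs))    # order: delta, theta, alpha, beta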

  17. Electrochemical Ionization and Analyte Charging in the Array of Micromachined UltraSonic Electrospray (AMUSE) Ion Source

    PubMed Central

    Forbes, Thomas P.; Degertekin, F. Levent; Fedorov, Andrei G.

    2010-01-01

    Electrochemistry and ion transport in a planar array of mechanically-driven, droplet-based ion sources are investigated using an approximate time scale analysis and in-depth computational simulations. The ion source is modeled as a controlled-current electrolytic cell, in which the piezoelectric transducer electrode, which mechanically drives the charged droplet generation using ultrasonic atomization, also acts as the oxidizing/corroding anode (positive mode). The interplay between advective and diffusive ion transport of electrochemically generated ions is analyzed as a function of the transducer duty cycle and electrode location. A time scale analysis of the relative importance of advective vs. diffusive ion transport provides valuable insight into optimality, from the ionization perspective, of alternative design and operation modes of the ion source. A computational model based on the solution of time-averaged, quasi-steady advection-diffusion equations for electroactive species transport is used to substantiate the conclusions of the time scale analysis. The results show that electrochemical ion generation at the piezoelectric transducer electrodes located at the back side of the ion source reservoir results in poor ionization efficiency, due to insufficient time for the charged analyte to diffuse away from the electrode surface to the ejection location, especially at near 100% duty cycle operation. Reducing the duty cycle of droplet/analyte ejection increases the analyte residence time and, in turn, improves ionization efficiency, but at the expense of reduced device throughput. For applications where this is undesirable, i.e., multiplexed and disposable device configurations, an alternative electrode location is incorporated. By moving the charging electrode to the nozzle surface, the diffusion length scale is greatly reduced, drastically improving ionization efficiency. The ionization efficiency of all operating conditions considered is expressed as a function of the dimensionless Peclet number, which defines the relative effect of advection as compared to diffusion. This analysis is general enough to elucidate an important role of electrochemistry in the ionization efficiency of any arrayed ion sources, be they mechanically-driven or electrosprays, and is vital for determining optimal design and operation conditions. PMID:20607111
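
    The time-scale argument above reduces to a Peclet-number comparison between advection and diffusion over the electrode-to-ejection distance. A sketch with hypothetical order-of-magnitude inputs (not the paper's parameters):

        D = 1e-9     # analyte diffusivity, m^2/s (assumed)
        L = 1e-3     # electrode-to-ejection distance, m (back-side electrode, assumed)
        U = 1e-3     # mean advective velocity toward the nozzle, m/s (assumed)

        t_diff = L ** 2 / D     # time to diffuse across L
        t_adv = L / U           # advective residence time
        Pe = U * L / D          # Peclet number: advection vs diffusion

        print(f"t_diff = {t_diff:.0f} s, t_adv = {t_adv:.0f} s, Pe = {Pe:.0f}")
        # Pe >> 1: diffusion is far too slow to carry electrochemically generated
        # charge to the ejection site within the advective residence time, the
        # poor-efficiency regime discussed above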

  18. Functional near-infrared spectroscopy at small source-detector distance by means of high dynamic-range fast-gated SPAD acquisitions: first in-vivo measurements

    NASA Astrophysics Data System (ADS)

    Di Sieno, L.; Contini, D.; Dalla Mora, A.; Torricelli, A.; Spinelli, L.; Cubeddu, R.; Tosi, A.; Boso, G.; Pifferi, A.

    2013-06-01

    In this article, we show experimental results of time-resolved optical spectroscopy performed with a small distance between launching and detecting fibers. It has already been demonstrated that depth discrimination is independent of source-detector separation and that measurements at small source-detector distance provide better contrast and spatial resolution. The main disadvantage is the huge increase in the peak of early photons (scarcely diffused by tissue), which can saturate the dynamic range of most detectors, hiding the information carried by late photons. Thanks to a fast-gated Single-Photon Avalanche Diode (SPAD) module, we are able to reject the peak of early photons and to obtain high-dynamic-range acquisitions. We exploit the fast-gated SPAD module to perform for the first time functional near-infrared spectroscopy (fNIRS) at small source-detector distance for in vivo measurements, and we demonstrate the possibility to detect non-invasively the dynamics of oxygenated and deoxygenated haemoglobin occurring in the motor cortex during a motor task. We also show the improvement in terms of signal amplitude and Signal-to-Noise Ratio (SNR) obtained exploiting fast-gated SPAD performance with respect to "non-gated" measurements.

  19. Effects of volcanic tremor on noise-based measurements of temporal velocity changes at Hawaiian volcanoes

    NASA Astrophysics Data System (ADS)

    Ballmer, S.; Wolfe, C. J.; Okubo, P.; Haney, M. M.; Thurber, C. H.

    2011-12-01

    Green's functions calculated with ambient seismic noise may aid in volcano research and monitoring. The continuous character of ambient seismic noise and hence of the reconstructed Green's functions has enabled measurements of short-term (~days) temporal perturbations in seismic velocities. Very small but clear velocity decreases prior to some volcanic eruptions have been documented and motivate our present study. We apply this method to Hawaiian volcanoes using data from the USGS Hawaiian Volcano Observatory (HVO) seismic network. In order to obtain geologically relevant and reliable results, stable Green's functions need to be recovered from the ambient noise. Station timing problems, changes in noise source directivity, as well as changes in the source's spectral content are known biases that critically affect the Green's functions' stability and hence need to be considered. Here we show that volcanic tremor is a potential additional bias. During the time period of our study (2007-present), we find that volcanic tremor is a common feature in the HVO seismic data. Pu'u O'o tremor is continuously present before a dike intrusion into Kilauea's east rift zone in June 2007 and Halema'uma'u tremor occurs before and during resumed Kilauea summit activity from early 2008 and onward. For the frequency band considered (0.1-0.9 Hz), we find that these active tremor sources can drastically modify the recovered Green's functions for station pairs on the entire island at higher (> 0.5 Hz) frequencies, although the effect of tremor appears diminished at lower frequencies. In this presentation, we perform measurements of temporal velocity changes using ambient noise Green's functions and explore how volcanic tremor affects the results. Careful quality assessment of reconstructed Green's functions appears to be essential for the desired high precision measurements.
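
    The velocity-change measurements referred to above are commonly made with the stretching technique: the current correlation function is stretched by trial factors eps = dt/t and compared with a reference, and the best-fitting stretch gives dv/v = -eps. A minimal sketch on synthetic waveforms (not the HVO processing chain):

        import numpy as np

        def dv_over_v(ref, cur, t, eps_grid=np.linspace(-0.01, 0.01, 201)):
            """Grid-search the stretch eps = dt/t maximizing correlation; dv/v = -eps."""
            ccs = [np.corrcoef(ref, np.interp(t * (1 + eps), t, cur))[0, 1]
                   for eps in eps_grid]
            best = int(np.argmax(ccs))
            return -eps_grid[best], ccs[best]

        t = np.linspace(0, 50, 2000)
        ref = np.sin(2 * np.pi * 0.5 * t) * np.exp(-t / 30)   # reference coda
        cur = np.interp(t / 1.002, t, ref)                    # arrivals delayed by 0.2%
        print(dv_over_v(ref, cur, t))                         # -> (-0.002, ~1.0): 0.2% velocity drop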

  20. Observations on Rupture Behaviour of Fluid Induced Events at the Basel EGS Based on Empirical Green's Function Analysis

    NASA Astrophysics Data System (ADS)

    Folesky, J.; Kummerow, J.; Shapiro, S. A.; Asanuma, H.; Häring, M. O.

    2015-12-01

    The Empirical Green's Function (EGF) method uses pairs of events with high waveform similarity and adjacent hypocenters to decompose the influences of source time function, ray path, instrument site, and instrument response. The seismogram of the smaller event is considered as the Green's function, which can then be deconvolved from the other seismogram. The result provides a reconstructed relative source time function (RSTF) of the larger event of that event pair. The comparison of the RSTFs at different stations of the observation system yields information on the rupture process of the larger event, based on the observation of the directivity effect and on changing RSTF complexities. The Basel EGS dataset of 2006-2007 consists of about 2800 localized events of magnitudes between 0.0
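
    A minimal sketch of the deconvolution step: water-level spectral division of the larger event's record by the smaller event's record (the assumed Green's function) recovers the RSTF. All traces are synthetic stand-ins.

        import numpy as np

        def rstf(big, small, water=0.01):
            """Water-level frequency-domain deconvolution of big by small."""
            n = len(big)
            B, S = np.fft.rfft(big, n), np.fft.rfft(small, n)
            denom = np.abs(S) ** 2
            denom = np.maximum(denom, water * denom.max())   # water-level stabilizer
            return np.fft.irfft(B * np.conj(S) / denom, n)

        dt = 0.01
        t = np.arange(0, 10, dt)
        green = np.exp(-t / 0.5) * np.sin(2 * np.pi * 3 * t)   # small event's seismogram
        stf = np.exp(-((t - 1.0) / 0.2) ** 2)                  # source pulse of the larger event
        big = np.convolve(stf, green)[: t.size] * dt           # larger event's seismogram
        print(np.argmax(rstf(big, green)) * dt)                # RSTF peak near 1.0 s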

  1. Cancellation of spurious arrivals in Green's function extraction and the generalized optical theorem

    USGS Publications Warehouse

    Snieder, R.; Van Wijk, K.; Haney, M.; Calvert, R.

    2008-01-01

    The extraction of the Green's function by cross correlation of waves recorded at two receivers nowadays finds much application. We show that for an arbitrary small scatterer, the cross terms of scattered waves give an unphysical wave with an arrival time that is independent of the source position. This constitutes an apparent inconsistency because theory predicts that such spurious arrivals do not arise, after integration over a complete source aperture. This puzzling inconsistency can be resolved for an arbitrary scatterer by integrating the contribution of all sources in the stationary phase approximation to show that the stationary phase contributions to the source integral cancel the spurious arrival by virtue of the generalized optical theorem. This work constitutes an alternative derivation of this theorem. When the source aperture is incomplete, the spurious arrival is not canceled and could be misinterpreted to be part of the Green's function. We give an example of how spurious arrivals provide information about the medium complementary to that given by the direct and scattered waves; the spurious waves can thus potentially be used to better constrain the medium. © 2008 The American Physical Society.

  2. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy.

    PubMed

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-07

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.

  3. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-01

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
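
    For orientation, the TG-43 point-source formalism that the radial dose function g(r) and anisotropy data feed into can be evaluated in a few lines. The g(r) coefficients and anisotropy factor below are hypothetical placeholders, not the SelectSeed consensus data.

        Sk = 1.0         # air-kerma strength, U
        Lam = 0.965      # dose-rate constant, cGy/(h*U) (typical I-125 magnitude)
        r0 = 1.0         # reference distance, cm

        def g(r):
            # radial dose function as a polynomial fit (coefficients assumed)
            return 1.0 + 0.05 * (r - r0) - 0.09 * (r - r0) ** 2

        def dose_rate(r, phi_an=0.94):
            """Point-source TG-43: D(r) = Sk * Lambda * (r0/r)^2 * g(r) * phi_an."""
            return Sk * Lam * (r0 / r) ** 2 * g(r) * phi_an

        for r in (0.5, 1.0, 2.0, 4.0):
            print(f"r = {r} cm: {dose_rate(r):.3f} cGy/h")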

  4. A FOURIER-TRANSFORMED BREMSSTRAHLUNG FLASH MODEL FOR THE PRODUCTION OF X-RAY TIME LAGS IN ACCRETING BLACK HOLE SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroon, John J.; Becker, Peter A., E-mail: jkroon@gmu.edu, E-mail: pbecker@gmu.edu

    Accreting black hole sources show a wide variety of rapid time variability, including the manifestation of time lags during X-ray transients, in which a delay (phase shift) is observed between the Fourier components of the hard and soft spectra. Despite a large body of observational evidence for time lags, no fundamental physical explanation for the origin of this phenomenon has been presented. We develop a new theoretical model for the production of X-ray time lags based on an exact analytical solution for the Fourier transform describing the diffusion and Comptonization of seed photons propagating through a spherical corona. The resulting Green's function can be convolved with any source distribution to compute the associated Fourier transform and time lags, hence allowing us to explore a wide variety of injection scenarios. We show that thermal Comptonization is able to self-consistently explain both the X-ray time lags and the steady-state (quiescent) X-ray spectrum observed in the low-hard state of Cyg X-1. The reprocessing of bremsstrahlung seed photons produces X-ray time lags that diminish with increasing Fourier frequency, in agreement with the observations for a wide range of sources.
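
    For reference, the standard measurement the model is compared against: the Fourier time lag is the phase of the cross spectrum between soft- and hard-band light curves divided by 2*pi*f. The light curves below are synthetic, with the hard band delayed by a fixed 0.1 s so the expected lag is obvious.

        import numpy as np

        dt, n = 1.0 / 128, 8192
        t = np.arange(n) * dt
        rng = np.random.default_rng(1)
        soft = np.sin(2 * np.pi * 2 * t) + 0.2 * rng.standard_normal(n)
        hard = np.sin(2 * np.pi * 2 * (t - 0.1)) + 0.2 * rng.standard_normal(n)

        S, H = np.fft.rfft(soft), np.fft.rfft(hard)
        f = np.fft.rfftfreq(n, dt)
        cross = S * np.conj(H)                            # cross spectrum
        lag = np.angle(cross[1:]) / (2 * np.pi * f[1:])   # positive: hard lags soft
        i = np.argmin(np.abs(f[1:] - 2.0))
        print(f"lag at 2 Hz: {lag[i]:.3f} s")             # ~0.1 s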

  5. An open-source framework for analyzing N-electron dynamics. II. Hybrid density functional theory/configuration interaction methodology.

    PubMed

    Hermann, Gunter; Pohl, Vincent; Tremblay, Jean Christophe

    2017-10-30

    In this contribution, we extend our framework for analyzing and visualizing correlated many-electron dynamics to non-variational, highly scalable electronic structure methods. Specifically, an explicitly time-dependent electronic wave packet is written as a linear combination of N-electron wave functions at the configuration interaction singles (CIS) level, which are obtained from a reference time-dependent density functional theory (TDDFT) calculation. The procedure is implemented in the open-source Python program detCI@ORBKIT, which extends the capabilities of our recently published post-processing toolbox (Hermann et al., J. Comput. Chem. 2016, 37, 1511). From the output of standard quantum chemistry packages using atom-centered Gaussian-type basis functions, the framework exploits the multideterminental structure of the hybrid TDDFT/CIS wave packet to compute fundamental one-electron quantities such as difference electronic densities, transient electronic flux densities, and transition dipole moments. The hybrid scheme is benchmarked against wave function data for the laser-driven state-selective excitation in LiH. It is shown that all features of the electron dynamics are in good quantitative agreement with the higher-level method provided a judicious choice of functional is made. Broadband excitation of a medium-sized organic chromophore further demonstrates the scalability of the method. In addition, the time-dependent flux densities unravel the mechanistic details of the simulated charge migration process at a glance. © 2017 Wiley Periodicals, Inc.

  6. The Swift-BAT Hard X-ray Transient Monitor

    NASA Technical Reports Server (NTRS)

    Krimm, Hans; Markwardt, C. B.; Sanwal, D.; Tueller, J.

    2006-01-01

    The Burst Alert Telescope (BAT) on the Swift satellite is a large field of view instrument that continually monitors the sky to provide the gamma-ray burst trigger for Swift. An average of more than 70% of the sky is observed on a daily basis. The survey mode data is processed on two sets of time scales: from one minute to one day as part of the transient monitor program, and from one spacecraft pointing (approx. 20 minutes) to the full mission duration for the hard X-ray survey program. The transient monitor has recently become public through the web site http://swift.gsfc.nasa.gov/docs/swift/results/transients/. Sky images are processed to detect astrophysical sources in the 15-50 keV energy band and the detected flux or upper limit is calculated for >100 sources on time scales up to one day. Light curves are updated each time that new BAT data becomes available (approx. 10 times daily). In addition, the monitor is sensitive to an outburst from a new or unknown source. Sensitivity as a function of time scale for catalog and unknown sources will be presented. The daily exposure for a typical source is approx. 1500-3000 seconds, with a 1-sigma sensitivity of approx. 4 mCrab. 90% of the sources are sampled at least every 16 days, but many sources are sampled daily. It is expected that the Swift-BAT transient monitor will become an important resource for the high energy astrophysics community.

  7. Solving the multi-frequency electromagnetic inverse source problem by the Fourier method

    NASA Astrophysics Data System (ADS)

    Wang, Guan; Ma, Fuming; Guo, Yukun; Li, Jingzhi

    2018-07-01

    This work is concerned with an inverse problem of identifying the current source distribution of the time-harmonic Maxwell's equations from multi-frequency measurements. Motivated by the Fourier method for the scalar Helmholtz equation and the polarization vector decomposition, we propose a novel method for determining the source function in the full vector Maxwell's system. Rigorous mathematical justifications of the method are given and numerical examples are provided to demonstrate the feasibility and effectiveness of the method.

  8. Deformation data modeling through numerical models: an efficient method for tracking magma transport

    NASA Astrophysics Data System (ADS)

    Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.

    2017-12-01

    Nowadays, multivariate collected data and robust physical models at volcano observatories are becoming crucial for providing effective volcano monitoring. Nevertheless, the forecast of volcanic eruptions is notoriously difficult. Within this frame, one of the most promising methods to evaluate volcano hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic media features such as topography and crustal heterogeneities to be included, although it is still very time-consuming to solve the inverse problem for near-real-time interpretations. Here, we present a method that can be efficiently used to estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose a FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually will lead to a better description of the status of the volcanic area. The number of Green functions is reduced here to the number of observation points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a Genetic Algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool to magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up the source parameter estimation for a given volcano showing signs of unrest.
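
    To make the inversion idea concrete, the sketch below replaces the FE Green's functions with the analytical Mogi point-source solution and the Genetic Algorithm with a plain grid search over depth; the volume change enters linearly and is solved by least squares at each trial depth. Everything here is an illustrative stand-in, not the paper's method.

        import numpy as np

        def mogi_uz(x, y, xs, ys, d, dV, nu=0.25):
            """Vertical surface displacement of a Mogi source at depth d with volume change dV."""
            R = np.sqrt((x - xs) ** 2 + (y - ys) ** 2 + d ** 2)
            return (1 - nu) / np.pi * dV * d / R ** 3

        # synthetic "observed" uplift from a source at 3 km depth, dV = 1e6 m^3
        x = np.linspace(-10e3, 10e3, 21)
        X, Y = np.meshgrid(x, x)
        obs = mogi_uz(X, Y, 0.0, 0.0, 3e3, 1e6)

        best = None
        for d in np.linspace(1e3, 6e3, 51):
            g = mogi_uz(X, Y, 0.0, 0.0, d, 1.0)        # unit-dV Green's function
            dV = np.sum(g * obs) / np.sum(g * g)       # least-squares amplitude
            misfit = np.sum((obs - dV * g) ** 2)
            if best is None or misfit < best[0]:
                best = (misfit, d, dV)
        print(f"depth = {best[1] / 1e3:.2f} km, dV = {best[2]:.2e} m^3")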

  9. Shot noise cross-correlation functions and cross spectra - Implications for models of QPO X-ray sources

    NASA Technical Reports Server (NTRS)

    Shibazaki, N.; Elsner, R. F.; Bussard, R. W.; Ebisuzaki, T.; Weisskopf, M. C.

    1988-01-01

    The cross-correlation functions (CCFs) and cross spectra expected for quasi-periodic oscillation (QPO) shot noise models are calculated under various assumptions, and the results are compared to observations. Effects due to possible coherence of the QPO oscillations are included. General formulas for the cross spectrum, the cross-phase spectrum, and the time-delay spectrum for QPO shot models are calculated and discussed. It is shown that the CCFs, cross spectra, and power spectra observed for Cyg X-2 imply that the spectrum of the shots evolves with time, with important implications for the interpretation of these functions as well as of observed average energy spectra. The possible origins for the observed hard lags are discussed, and some physical difficulties for the Comptonization model are described. Classes of physical models for QPO sources are briefly addressed, and it is concluded that models involving shot formation at the surface of neutron stars are favored by observation.
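
    A toy illustration of a shot-noise CCF: two light curves built from the same shot times, with the "hard" profile delayed relative to the "soft" one, produce a cross-correlation peak at a non-zero lag. Parameters are arbitrary toy values, not a fit to any source.

        import numpy as np

        rng = np.random.default_rng(2)
        dt, n = 0.01, 4096
        spikes = (rng.random(n) < 0.02).astype(float)    # Poisson-like shot onsets

        t_k = np.arange(0, 5, dt)
        k_soft = np.exp(-t_k / 0.5)                                        # soft-band shot profile
        k_hard = np.exp(-np.maximum(t_k - 0.1, 0) / 0.5) * (t_k >= 0.1)    # same, delayed 0.1 s

        soft = np.convolve(spikes, k_soft)[:n]
        hard = np.convolve(spikes, k_hard)[:n]
        soft -= soft.mean(); hard -= hard.mean()

        ccf = np.correlate(soft, hard, mode="full")
        lags = np.arange(-n + 1, n) * dt
        # negative peak lag in this convention: the hard band trails the soft band
        print(f"CCF peak at lag {lags[np.argmax(ccf)]:.2f} s")   # ~ -0.10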

  10. Consistent Simulation Framework for Efficient Mass Discharge and Source Depletion Time Predictions of DNAPL Contaminants in Heterogeneous Aquifers Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Koch, J.

    2014-12-01

    Predicting DNAPL fate and transport in heterogeneous aquifers is challenging and subject to an uncertainty that needs to be quantified. Models for this task need to be equipped with an accurate source zone description, i.e., the distribution of mass of all partitioning phases (DNAPL, water, and soil) in all possible states ((im)mobile, dissolved, and sorbed), mass-transfer algorithms, and the simulation of transport processes in the groundwater. Such detailed models tend to be computationally cumbersome when used for uncertainty quantification. Therefore, a selective choice of the relevant model states, processes, and scales is both sensitive and indispensable. We investigate the questions: what is a meaningful level of model complexity, and how can one obtain an efficient model framework that is still physically and statistically consistent? In our proposed model, aquifer parameters and the contaminant source architecture are conceptualized jointly as random space functions. The governing processes are simulated in a three-dimensional, highly-resolved, stochastic, and coupled model that can predict probability density functions of mass discharge and source depletion times. We apply a stochastic percolation approach as an emulator to simulate the contaminant source formation, a random walk particle tracking method to simulate DNAPL dissolution and solute transport within the aqueous phase, and a quasi-steady-state approach to solve for DNAPL depletion times. Using this novel model framework, we test whether and to which degree the desired model predictions are sensitive to simplifications often found in the literature. With this we identify that aquifer heterogeneity, groundwater flow irregularity, uncertain and physically-based contaminant source zones, and their mutual interlinkages are indispensable components of a sound model framework.

  11. The relationship between sources and functions of social support and dimensions of child- and parent-related stress.

    PubMed

    Guralnick, M J; Hammond, M A; Neville, B; Connor, R T

    2008-12-01

    In this longitudinal study, we examined the relationship between the sources and functions of social support and dimensions of child- and parent-related stress for mothers of young children with mild developmental delays. Sixty-three mothers completed assessments of stress and support at two time points. Multiple regression analyses revealed that parenting support during the early childhood period (i.e. advice on problems specific to their child and assistance with child care responsibilities), irrespective of source, consistently predicted most dimensions of parent stress assessed during the early elementary years and contributed unique variance. General support (i.e. primarily emotional support and validation) from various sources had other, less widespread effects on parental stress. The multidimensional perspective of the construct of social support that emerged suggested mechanisms mediating the relationship between support and stress and provided a framework for intervention.

  12. Time-resolved two-window measurement of Wigner functions for coherent backscatter from a turbid medium

    NASA Astrophysics Data System (ADS)

    Reil, Frank; Thomas, John E.

    2002-05-01

    For the first time we are able to observe the time-resolved Wigner function of enhanced backscatter from a random medium using a novel two-window technique. This technique enables us to directly verify the phase-conjugating properties of random media. An incident divergent beam displays a convergent enhanced backscatter cone. We measure the joint position and momentum (x, p) distributions of the light field as a function of propagation time in the medium. The two-window technique allows us to independently control the resolutions for position and momentum, thereby surpassing the uncertainty limit associated with Fourier transform pairs. By using a low-coherence light source in a heterodyne detection scheme, we observe enhanced backscattering resolved by path length in the random medium, providing information about the evolution of optical coherence as a function of penetration depth in the random medium.
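
    For reference, a direct numerical construction of the Wigner function W(x, p) of a one-dimensional complex field, the object that the two-window heterodyne scheme measures in smoothed form. The discretization below (circular indexing, momentum grid p = pi * fftfreq(n, dx)) is one common convention, not the experimental procedure.

        import numpy as np

        def wigner(E, x):
            """W[i, j]: Wigner function over position x[i] and momentum p[j]."""
            n = len(E)
            W = np.zeros((n, n))
            idx = np.arange(n)
            for i in range(n):
                # correlation E(x + s/2) * conj(E(x - s/2)) with circular indexing
                corr = E[(i + idx - n // 2) % n] * np.conj(E[(i - idx + n // 2) % n])
                W[i] = np.real(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(corr))))
            return W

        x = np.linspace(-8, 8, 256)
        E = np.exp(-x ** 2 / 2) * np.exp(1j * 1.5 * x)   # Gaussian beam with tilt p = 1.5
        W = wigner(E, x)
        p = np.pi * np.fft.fftshift(np.fft.fftfreq(len(x), d=x[1] - x[0]))
        i0 = np.argmin(np.abs(x))                        # column nearest x = 0
        print(p[np.argmax(W[i0])])                       # ~1.5, the beam's tilt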

  13. Carbon Nanotube Fiber Ionization Mass Spectrometry: A Fundamental Study of a Multi-Walled Carbon Nanotube Functionalized Corona Discharge Pin for Polycyclic Aromatic Hydrocarbons Analysis.

    PubMed

    Nahan, Keaton S; Alvarez, Noe; Shanov, Vesselin; Vonderheide, Anne

    2017-11-01

    Mass spectrometry continues to tackle many complicated tasks, and ongoing research seeks to simplify its instrumentation as well as sampling. The desorption electrospray ionization (DESI) source was the first ambient ionization source to function without extensive gas requirements and chromatography. Electrospray techniques generally have low efficiency for ionization of nonpolar analytes, and some researchers have resorted to methods such as direct analysis in real time (DART) or desorption atmospheric pressure chemical ionization (DAPCI) for their analysis. In this work, a carbon nanotube fiber ionization (nanoCFI) source was developed and was found to be capable of solid phase microextraction (SPME) of nonpolar analytes as well as ionization and sampling similar to that of direct probe atmospheric pressure chemical ionization (DP-APCI). Conductivity and adsorption were maintained by utilizing a corona pin functionalized with a multi-walled carbon nanotube (MWCNT) thread. Quantitative work with the nanoCFI source with a designed corona discharge pin insert demonstrated linearity up to 0.97 (R²) for three target PAHs with a phenanthrene internal standard.

  14. Carbon Nanotube Fiber Ionization Mass Spectrometry: A Fundamental Study of a Multi-Walled Carbon Nanotube Functionalized Corona Discharge Pin for Polycyclic Aromatic Hydrocarbons Analysis

    NASA Astrophysics Data System (ADS)

    Nahan, Keaton S.; Alvarez, Noe; Shanov, Vesselin; Vonderheide, Anne

    2017-09-01

    Mass spectrometry continues to tackle many complicated tasks, and ongoing research seeks to simplify its instrumentation as well as sampling. The desorption electrospray ionization (DESI) source was the first ambient ionization source to function without extensive gas requirements and chromatography. Electrospray techniques generally have low efficiency for ionization of nonpolar analytes, and some researchers have resorted to methods such as direct analysis in real time (DART) or desorption atmospheric pressure chemical ionization (DAPCI) for their analysis. In this work, a carbon nanotube fiber ionization (nanoCFI) source was developed and was found to be capable of solid phase microextraction (SPME) of nonpolar analytes as well as ionization and sampling similar to that of direct probe atmospheric pressure chemical ionization (DP-APCI). Conductivity and adsorption were maintained by utilizing a corona pin functionalized with a multi-walled carbon nanotube (MWCNT) thread. Quantitative work with the nanoCFI source with a designed corona discharge pin insert demonstrated linearity up to 0.97 (R²) for three target PAHs with a phenanthrene internal standard.

  15. Effects of topography and crustal heterogeneities on the source estimation of LP event at Kilauea volcano

    USGS Publications Warehouse

    Cesca, S.; Battaglia, J.; Dahm, T.; Tessmer, E.; Heimann, S.; Okubo, P.

    2008-01-01

    The main goal of this study is to improve the modelling of the source mechanism associated with the generation of long period (LP) signals in volcanic areas. Our intent is to evaluate the effects that detailed structural features of the volcanic models play in the generation of the LP signal and the consequent retrieval of LP source characteristics. In particular, effects associated with the presence of topography and crustal heterogeneities are here studied in detail. We focus our study on a LP event observed at Kilauea volcano, Hawaii, in 2001 May. A detailed analysis of this event and its source modelling is accompanied by a set of synthetic tests, which aim to evaluate the effects of topography and the presence of low velocity shallow layers in the source region. The forward problem of Green's function generation is solved numerically following a pseudo-spectral approach, assuming different 3-D models. The inversion is done in the frequency domain and the resulting source mechanism is represented by the sum of two time-dependent terms: a full moment tensor and a single force. Synthetic tests show how characteristic velocity structures, associated with shallow sources, may be partially responsible for the generation of the observed long-lasting ringing waveforms. When applying the inversion technique to the Kilauea LP data set, inversions carried out for different crustal models led to very similar source geometries, indicating subhorizontal cracks. On the other hand, the source time function and its duration are significantly different for different models. These results support the indication of a strong influence of crustal layering on the generation of the LP signal, while the assumption of a homogeneous velocity model may lead to misleading results. © 2008 The Authors. Journal compilation © 2008 RAS.

  16. Dancing bees tune both duration and rate of waggle-run production in relation to nectar-source profitability.

    PubMed

    Seeley, T D; Mikheyev, A S; Pagano, G J

    2000-09-01

    For more than 50 years, investigators of the honey bee's waggle dance have reported that richer food sources seem to elicit longer-lasting and livelier dances than do poorer sources. However, no one had measured both dance duration and liveliness as a function of food-source profitability. Using video analysis, we found that nectar foragers adjust both the duration (D) and the rate (R) of waggle-run production, thereby tuning the number of waggle runs produced per foraging trip (W, where W = DR) as a function of food-source profitability. Both duration and rate of waggle-run production increase with rising food-source profitability. Moreover, we found that a dancing bee adjusts the rate of waggle-run production (R) in relation to food-source profitability by adjusting the mean duration of the return-phase portion of her dance circuits. This finding raises the possibility that bees can use return-phase duration as an index of food-source profitability. Finally, dances having different levels of liveliness have different mean durations of the return phase, indicating that dance liveliness can be quantified in terms of the time interval between consecutive waggle runs.

  17. Spatiotemporal Modelling of Dust Storm Sources Emission in West Asia

    NASA Astrophysics Data System (ADS)

    Khodabandehloo, E.; Alimohamdadi, A.; Sadeghi-Niaraki, A.; Darvishi Boloorani, A.; Alesheikh, A. A.

    2013-09-01

    Dust aerosol is the largest contributor to aerosol mass concentrations in the troposphere and has considerable effects on air quality over a range of spatial and temporal scales. Arid and semi-arid areas of West Asia are among the most important regional dust sources in the world. These phenomena directly or indirectly affect almost all aspects of life in some 15 countries in the region, so an accurate estimate of dust emissions is crucial for building a common understanding and knowledge of the problem. Because of the spatial and temporal limits of ground-based observations, remote sensing methods have been found to be more efficient and useful for studying the West Asia dust sources. Vegetation cover limits dust emission by decelerating surface wind velocities and therefore reducing momentum transport. While all models explicitly take into account the change of wind speed and soil moisture in calculating dust emissions, they commonly employ "climatological" land cover data for identifying dust source locations and neglect the time variation of surface bareness. To build the aforementioned model, land surface features such as soil moisture, texture, type, and vegetation, together with wind speed as an atmospheric parameter, are used. Using NDVI data produces a significant change in modeled dust emission: the emissions computed with the NDVI-based (dynamic) and static source functions differ by 17.02% for June 2008 and by 8.91% for March 2007. Accordingly, we observe a significant improvement in the accuracy of dust forecasts during the months with the largest changes in soil vegetation (spring and winter) compared to outputs of the static model, in which NDVI data are neglected.

  18. Collective odor source estimation and search in time-variant airflow environments using mobile robots.

    PubMed

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots' search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method.

  19. Collective Odor Source Estimation and Search in Time-Variant Airflow Environments Using Mobile Robots

    PubMed Central

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and a multi-robot search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of the odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance-based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides prior knowledge for the search, while the search verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650

  20. Using time-dependent density functional theory in real time for calculating electronic transport

    NASA Astrophysics Data System (ADS)

    Schaffhauser, Philipp; Kümmel, Stephan

    2016-01-01

    We present a scheme for calculating electronic transport within the propagation approach to time-dependent density functional theory. Our scheme is based on solving the time-dependent Kohn-Sham equations on grids in real space and real time for a finite system. We use absorbing and antiabsorbing boundaries for simulating the coupling to a source and a drain. The boundaries are designed to minimize the effects of quantum-mechanical reflections and electrical polarization build-up, which are the major obstacles when calculating transport by applying an external bias to a finite system. We show that the scheme can readily be applied to real molecules by calculating the current through a conjugated molecule as a function of time. By comparing to literature results for the conjugated molecule and to analytic results for a one-dimensional model system, we demonstrate the reliability of the concept.
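
    The central numerical idea is that a finite simulation box can mimic open source and drain contacts if outgoing flux is absorbed at the grid edges. The sketch below shows the textbook variant of that idea, a complex absorbing potential inside a Crank-Nicolson propagator for a one-dimensional wire (hbar = m = 1); it is a toy illustration, not the authors' specific absorbing/antiabsorbing boundary construction.

        import numpy as np

        def propagate_with_cap(psi, v_real, dx, dt, w_cap, steps):
            """Crank-Nicolson propagation with a complex absorbing potential.

            psi    : complex wave function sampled on a 1-D grid
            v_real : real potential on the same grid
            w_cap  : non-negative absorber strength, non-zero near the edges
            """
            n = len(psi)
            v = v_real.astype(complex) - 1j * w_cap  # -i*W absorbs outgoing flux
            # Kinetic term by second-order finite differences
            main = 1.0 / dx ** 2 + v
            off = -0.5 / dx ** 2 * np.ones(n - 1)
            H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
            I = np.eye(n)
            A = I + 0.5j * dt * H
            B = I - 0.5j * dt * H
            for _ in range(steps):
                psi = np.linalg.solve(A, B @ psi)  # (I + iHdt/2) psi' = (I - iHdt/2) psi
            return psi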

  1. The Raptor Real-Time Processing Architecture

    NASA Astrophysics Data System (ADS)

    Galassi, M.; Starr, D.; Wozniak, P.; Brozdin, K.

    The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback, etc.) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as ``open source'' software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.

  2. Raptor -- Mining the Sky in Real Time

    NASA Astrophysics Data System (ADS)

    Galassi, M.; Borozdin, K.; Casperson, D.; McGowan, K.; Starr, D.; White, R.; Wozniak, P.; Wren, J.

    2004-06-01

    The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback...) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as ``open source'' software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.

  3. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action.

    PubMed

    Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter; Egger, Jan

    2018-01-01

    Computer-assisted technologies based on algorithmic software segmentation are a topic of increasing interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and the industry. The aim of this trial was therefore to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach for use in clinical practice. In this retrospective, randomized, controlled trial the accuracy and accordance of the open-source segmentation algorithm GrowCut were assessed through comparison to the manually generated ground truth of the same anatomy, using 10 CT lower-jaw data sets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice score and the Hausdorff distance. Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice score values of over 85% and Hausdorff distances below 33.5 voxels were achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Differences between the assessment parameters were not statistically significant (p < 0.05 threshold) and correlation coefficients were close to one (r > 0.94) for all comparisons between the two groups. Functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed with the presented interactive open-source approach. In the cranio-maxillofacial complex the method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Thanks to its open-source basis, the method can be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches and studies with larger data sets are areas of future work.
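
    For reference, the two agreement metrics reported above can be computed in a few lines of NumPy/SciPy. The sketch below is generic evaluation code under simplified assumptions (whole-mask point sets rather than extracted surfaces), not the study's own tooling.

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def dice_score(seg_a, seg_b):
            """Dice overlap between two binary segmentation masks."""
            a, b = seg_a.astype(bool), seg_b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def hausdorff_voxels(seg_a, seg_b):
            """Symmetric Hausdorff distance in voxels between two masks."""
            pts_a, pts_b = np.argwhere(seg_a), np.argwhere(seg_b)
            return max(directed_hausdorff(pts_a, pts_b)[0],
                       directed_hausdorff(pts_b, pts_a)[0])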

  4. Real-time Adaptive EEG Source Separation using Online Recursive Independent Component Analysis

    PubMed Central

    Hsu, Sheng-Hsiou; Mullen, Tim; Jung, Tzyy-Ping; Cauwenberghs, Gert

    2016-01-01

    Independent Component Analysis (ICA) has been widely applied to electroencephalographic (EEG) biosignal processing and brain-computer interfaces. The practical use of ICA, however, is limited by its computational complexity, data requirements for convergence, and assumption of data stationarity, especially for high-density data. Here we study and validate an optimized online recursive ICA algorithm (ORICA) with online recursive least squares (RLS) whitening for blind source separation of high-density EEG data, which offers instantaneous incremental convergence upon presentation of new data. Empirical results of this study demonstrate the algorithm's (a) suitability for accurate and efficient source identification in high-density (64-channel) realistically simulated EEG data; (b) capability to detect and adapt to non-stationarity in 64-channel simulated EEG data; and (c) utility for rapidly extracting principal brain and artifact sources in real 61-channel EEG data recorded by a dry and wearable EEG system in a cognitive experiment. ORICA was implemented as functions in BCILAB and EEGLAB and was integrated in an open-source Real-time EEG Source-mapping Toolbox (REST), supporting applications in ICA-based online artifact rejection, feature extraction for real-time biosignal monitoring in clinical environments, and adaptable classifications in brain-computer interfaces. PMID:26685257
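
    To make the flavor of such online updates concrete, here is a minimal stochastic natural-gradient ICA step of the kind that ORICA refines with recursive whitening and forgetting factors. It is an illustrative, generic rule (Amari-style), not the exact ORICA recursion implemented in REST.

        import numpy as np

        def online_ica_step(W, x, lr=1e-3):
            """One natural-gradient ICA update from a single whitened sample.

            W : (n_sources, n_channels) demixing matrix
            x : one whitened EEG sample, shape (n_channels,)
            """
            y = W @ x
            g = np.tanh(y)  # score function suited to super-Gaussian sources
            W += lr * (np.eye(W.shape[0]) - np.outer(g, y)) @ W
            return W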

  5. Using Network Theory to Understand Seismic Noise in Dense Arrays

    NASA Astrophysics Data System (ADS)

    Riahi, N.; Gerstoft, P.

    2015-12-01

    Dense seismic arrays offer an opportunity to study anthropogenic seismic noise sources with unprecedented detail. Man-made sources typically have high frequency and low intensity, and propagate as surface waves. As a result, attenuation restricts their measurable footprint to a small subset of sensors. Medium heterogeneities can further introduce wave front perturbations that limit processing based on travel time. We demonstrate a non-parametric technique that can reliably identify very local events within the array as a function of frequency and time without using travel times. The approach estimates the non-zero support of the array covariance matrix and then uses network analysis tools to identify clusters of sensors that are sensing a common source. We verify the method on simulated data and then apply it to the Long Beach (CA) geophone array. The method exposes a helicopter traversing the array, oil production facilities with different characteristics, and the fact that noise sources near roads tend to be around 10-20 Hz.
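
    A minimal sketch of the covariance-support idea is given below: threshold a correlation (covariance) estimate, treat it as a graph adjacency matrix, and read off connected components as candidate sensor clusters. The threshold and the use of plain correlation are illustrative simplifications of the paper's non-parametric support estimate.

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import connected_components

        def local_source_clusters(X, threshold):
            """Cluster sensors that appear to sense a common local source.

            X : (n_sensors, n_samples) band-passed array data
            Returns one component label per sensor; components with more
            than one member suggest a localized noise source.
            """
            C = np.corrcoef(X)               # proxy for covariance support
            A = np.abs(C) > threshold
            np.fill_diagonal(A, False)
            _, labels = connected_components(csr_matrix(A), directed=False)
            return labels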

  6. INTEGRAL/SPI data segmentation to retrieve source intensity variations

    NASA Astrophysics Data System (ADS)

    Bouchet, L.; Amestoy, P. R.; Buttari, A.; Rouet, F.-H.; Chauvin, M.

    2013-07-01

    Context. The INTEGRAL/SPI X/γ-ray spectrometer (20 keV-8 MeV) is an instrument for which recovering source intensity variations is not straightforward and can constitute a difficulty for data analysis. In most cases, determining the source intensity changes between exposures is largely based on a priori information. Aims: We propose techniques that help to overcome this difficulty and make this step more rational. In addition, the constructed "synthetic" light curves should permit us to obtain a sky model that describes the data better and optimizes the source signal-to-noise ratios. Methods: For this purpose, the time intensity variation of each source was modeled as a combination of piecewise segments of time during which a given source exhibits a constant intensity. To optimize the signal-to-noise ratios, the number of segments was minimized. We present a first method that takes advantage of previous time series that can be obtained from another instrument on board the INTEGRAL observatory. A data segmentation algorithm was then used to synthesize the time series into segments. The second method no longer needs external light curves, but solely SPI raw data. For this, we developed a specific algorithm that involves the SPI transfer function. Results: The time segmentation algorithms that were developed solve a difficulty inherent to the SPI instrument, namely the intensity variations of sources between exposures, and allow us to obtain more information about the sources' behavior. Based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Spain, and Switzerland), Czech Republic and Poland with participation of Russia and the USA.
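
    The piecewise-constant modeling step can be illustrated with a generic change-point dynamic program ("optimal partitioning"), which minimizes within-segment squared misfit plus a per-segment penalty; the penalty plays the role of the paper's minimization of the number of segments. This is standard segmentation code, not the SPI-specific algorithm.

        import numpy as np

        def segment_constant(y, penalty):
            """Partition y into constant-intensity segments by dynamic programming.

            Returns the list of segment start indices (0 is always included).
            """
            n = len(y)
            s1 = np.concatenate([[0.0], np.cumsum(y)])
            s2 = np.concatenate([[0.0], np.cumsum(np.square(y))])

            def sse(i, j):  # squared error of a constant fit on y[i:j]
                return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / (j - i)

            best = np.full(n + 1, np.inf)
            best[0] = -penalty
            last = np.zeros(n + 1, dtype=int)
            for j in range(1, n + 1):
                for i in range(j):
                    c = best[i] + penalty + sse(i, j)
                    if c < best[j]:
                        best[j], last[j] = c, i
            starts, j = [], n
            while j > 0:
                starts.append(last[j])
                j = last[j]
            return starts[::-1]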

  7. Updating the orbital ephemeris of the dipping source XB 1254-690 and the distance to the source

    NASA Astrophysics Data System (ADS)

    Gambino, Angelo F.; Iaria, Rosario; Di Salvo, Tiziana; Matranga, Marco; Burderi, Luciano; Pintore, Fabio; Riggio, Alessandro; Sanna, Andrea

    2017-09-01

    XB 1254-690 is a dipping low-mass X-ray binary system hosting a neutron star and showing type I X-ray bursts. We aim at obtaining a more accurate orbital ephemeris and at constraining the orbital period derivative of the system for the first time. In addition, we want to better constrain the distance to the source in order to locate the system in a well-defined evolutionary scenario. We apply, for the first time, an orbital timing technique to XB 1254-690, using the arrival times of the dips present in the light curves collected during 26 yr of pointed X-ray observations acquired from different space missions. We estimate the dip arrival times using a statistical method that weights the count rate inside the dip with respect to the level of persistent emission outside the dip. We fit the obtained delays as a function of the orbital cycle both with a linear and with a quadratic function. We infer the orbital ephemeris of XB 1254-690, improving the accuracy of the orbital period with respect to previous estimates. We infer a mass of M2 = 0.42 ± 0.04 M⊙ for the donor star, in agreement with estimates already present in the literature, assuming that the star is in thermal equilibrium while it transfers part of its mass via the inner Lagrangian point, and assuming a neutron star mass of 1.4 M⊙. Using these assumptions, we also constrain the distance to the source, finding a value of 7.6 ± 0.8 kpc. Finally, we discuss the evolution of the system, suggesting that it is compatible with a conservative mass transfer driven by magnetic braking.
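
    The ephemeris fit itself is a small calculation: with dip-arrival delays measured over cycle number N, a quadratic term 0.5*P*Pdot*N**2 encodes the period derivative. The sketch below uses invented delay values and an assumed reference period (~3.93 h) purely for illustration.

        import numpy as np

        # Hypothetical dip-timing results: cycle numbers, delays (s), errors (s)
        cycles = np.array([0.0, 1.5e4, 4.0e4, 7.5e4, 1.1e5, 1.4e5])
        delays = np.array([0.0, 5.2, 21.0, 62.3, 118.0, 182.5])  # illustrative
        sigma = np.array([2.0, 2.5, 3.0, 3.0, 4.0, 4.0])

        P = 0.1639 * 86400.0  # assumed reference orbital period in seconds
        a, b, c = np.polyfit(cycles, delays, deg=2, w=1.0 / sigma)
        # delay(N) = c + b*N + a*N**2, with b = dP and a = 0.5*P*Pdot
        print("period correction dP = %.3e s" % b)
        print("period derivative Pdot = %.3e s/s" % (2.0 * a / P))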

  8. Algorithm for removing scalp signals from functional near-infrared spectroscopy signals in real time using multidistance optodes.

    PubMed

    Kiguchi, Masashi; Funane, Tsukasa

    2014-11-01

    A real-time algorithm for removing scalp-blood signals from functional near-infrared spectroscopy signals is proposed. Scalp and deep signals have different dependencies on the source-detector distance. These signals were separated using this characteristic. The algorithm was validated through an experiment using a dynamic phantom in which shallow and deep absorptions were independently changed. The algorithm for measurement of oxygenated and deoxygenated hemoglobins using two wavelengths was explicitly obtained. This algorithm is potentially useful for real-time systems, e.g., brain-computer interfaces and neuro-feedback systems.
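
    The distance dependence can be exploited with a simple regression: a short source-detector channel is dominated by the scalp, so its least-squares projection onto a long channel estimates the shallow contribution. This is a generic multidistance sketch, not the exact two-wavelength algorithm of the paper.

        import numpy as np

        def remove_scalp_signal(long_ch, short_ch):
            """Subtract the scalp contribution estimated from a short channel.

            long_ch  : samples from a long (deep-sensitive) channel
            short_ch : samples from a short (scalp-dominated) channel
            """
            alpha = np.dot(short_ch, long_ch) / np.dot(short_ch, short_ch)
            return long_ch - alpha * short_ch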

  9. Varying efficacy of superdisintegrants in orally disintegrating tablets among different manufacturers.

    PubMed

    Mittapalli, R K; Qhattal, H S Sha; Lockman, P R; Yamsani, M R

    2010-11-01

    The main objective of the present study was to develop an orally disintegrating tablet formulation of domperidone and to study the functionality differences of superdisintegrants, each obtained from two different sources, on the tablet properties. Domperidone tablets were formulated with different superdisintegrants by direct compression. The effect of the type of superdisintegrant, its concentration and its source was studied by measuring the in-vitro disintegration time, wetting time, water absorption ratios, drug release by dissolution and in-vivo oral disintegration time. Tablets prepared with crospovidone had lower disintegration times than tablets prepared with sodium starch glycolate and croscarmellose sodium. Formulations prepared with Polyplasdone XL, Ac-Di-Sol, and Explotab (D series) were better than formulations prepared with superdisintegrants obtained from other sources (DL series), which had longer disintegration times and lower water uptake ratios. The in-vivo disintegration time of formulation D-106 containing Polyplasdone XL was significantly lower than that of the marketed formulation Domel-MT. The results from this study suggest that disintegration of orally disintegrating tablets is dependent on the nature of the superdisintegrant, its concentration in the formulation and its source. Even though a superdisintegrant meets USP standards, there can be variance among manufacturers in terms of performance. This is not only limited to in-vitro studies but carries over to disintegration times in the human population.

  10. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on the mean square error (MSE) between estimated and target visual parameters. This function is minimized to estimate the de-mixing vector/filters that separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and the output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to the existing GMM-based model, and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.

  11. Envelope of coda waves for a double couple source due to non-linear elasticity

    NASA Astrophysics Data System (ADS)

    Calisto, Ignacia; Bataille, Klaus

    2014-10-01

    Non-linear elasticity has recently been considered as a source of scattering, therefore contributing to the coda of seismic waves, in particular for the case of explosive sources. This idea is analysed further here by theoretically solving the expression for the envelope of coda waves generated by a point moment tensor, in order to compare with earthquake data. For weak non-linearities, one can consider each point of the non-linear medium as a source of scattering within a homogeneous and linear medium, for which Green's functions can be used to compute the total displacement of scattered waves. These sources of scattering have specific radiation patterns depending on the incident and scattered P or S waves, respectively. In this approach, the coda envelope depends on three scalar parameters related to the specific non-linearity of the medium; however, these parameters only change the scale of the coda envelope. The shape of the coda envelope is sensitive to both the source time function and the intrinsic attenuation. We compare simulations using this model with data from earthquakes in Taiwan, with a good fit.

  12. Disentangling the major source areas for an intense aerosol advection in the Central Mediterranean on the basis of Potential Source Contribution Function modeling of chemical and size distribution measurements

    NASA Astrophysics Data System (ADS)

    Petroselli, Chiara; Crocchianti, Stefano; Moroni, Beatrice; Castellini, Silvia; Selvaggi, Roberta; Nava, Silvia; Calzolai, Giulia; Lucarelli, Franco; Cappelletti, David

    2018-05-01

    In this paper, we combined a Potential Source Contribution Function (PSCF) analysis of daily chemical aerosol composition data with hourly aerosol size distributions, with the aim of disentangling the major source areas during a complex and fast-modulating advection event impacting Central Italy in 2013. Chemical data include an ample set of metals obtained by Proton Induced X-ray Emission (PIXE), main soluble ions from ion chromatography, and elemental and organic carbon (EC, OC) obtained by thermo-optical measurements. Size distributions were recorded with an optical particle counter for eight calibrated size classes in the 0.27-10 μm range. We demonstrated the usefulness of the approach by the positive identification of two very different source areas impacting during the transport event. In particular, biomass burning from Eastern Europe and desert dust from Saharan sources were discriminated based on both chemistry and the time evolution of the size distribution. Hourly back-trajectories (BT) provided the best results in comparison to 6-h or 24-h based calculations.
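
    The PSCF statistic itself is a simple ratio on a lat/lon grid: the number of "polluted" back-trajectory endpoints in a cell divided by all endpoints in that cell. A minimal sketch under that standard definition follows; grid resolution and the pollution criterion are the analyst's choices.

        import numpy as np

        def pscf(lat, lon, is_polluted, res=1.0):
            """PSCF(i,j) = m_ij / n_ij on a regular lat/lon grid.

            lat, lon    : back-trajectory endpoint coordinates (1-D arrays)
            is_polluted : bool per endpoint, True if its trajectory arrived
                          when the measured concentration exceeded a criterion
            """
            ii = np.floor((lat + 90.0) / res).astype(int)
            jj = np.floor((lon + 180.0) / res).astype(int)
            shape = (int(180 / res), int(360 / res))
            n = np.zeros(shape)
            m = np.zeros(shape)
            np.add.at(n, (ii, jj), 1.0)
            np.add.at(m, (ii, jj), is_polluted.astype(float))
            with np.errstate(invalid="ignore"):
                return np.where(n > 0, m / n, np.nan)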

  13. StarBase: A Firm Real-Time Database Manager for Time-Critical Applications

    DTIC Science & Technology

    1995-01-01

    Mellon University [10]. StarBase differs from previous RT-DBMS work [1, 2, 3] in that (a) it relies on a real-time operating system which provides... simulation studies, StarBase uses a real-time operating system to provide basic real-time functionality and deals with issues beyond transaction... resource scheduling provided by the underlying real-time operating system. Issues of data contention are dealt with by use of a priority...

  14. Antioxidant intake and cognitive function of elderly men and women: the Cache County Study.

    PubMed

    Wengreen, H J; Munger, R G; Corcoran, C D; Zandi, P; Hayden, K M; Fotuhi, M; Skoog, I; Norton, M C; Tschanz, J; Breitner, J C S; Welsh-Bohmer, K A

    2007-01-01

    We prospectively examined associations between intakes of antioxidants (vitamin C, vitamin E, and carotene) and cognitive function and decline among elderly men and women of the Cache County Study on Memory and Aging in Utah. In 1995, 3831 residents 65 years of age or older completed a baseline survey that included a food frequency questionnaire and cognitive assessment. Cognitive function was assessed using an adapted version of the Modified Mini-Mental State examination (3MS) at baseline and at three subsequent follow-up interviews spanning approximately 7 years. Multivariable mixed models were used to estimate antioxidant nutrient effects on average 3MS score over time. Increasing quartiles of vitamin C intake, alone and combined with vitamin E, were associated with higher baseline average 3MS scores (p-trend = 0.013 and 0.02, respectively); this association appeared stronger for food sources compared to supplement or combined food and supplement sources. Study participants with lower levels of intake of vitamin C, vitamin E and carotene had a greater acceleration of the rate of 3MS decline over time compared to those with higher levels of intake. High antioxidant intake from food and supplement sources of vitamin C, vitamin E, and carotene may delay cognitive decline in the elderly.

  15. CF NEUTRON TIME OF FLIGHT TRANSMISSION FOR MATERIAL IDENTIFICATION FOR WEAPONS TRAINERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mihalczo, John T; Valentine, Timothy E; Blakeman, Edward D

    2011-01-01

    The neutron transmission, elastic scattering, and non-elastic reactions can be used to distinguish various isotopes. Neutron transmission as a function of energy can be used in some cases to identify materials in unknown objects. A time-tagged californium source that provides a fission spectrum of neutrons is a useful source for neutron time-of-flight (TOF) transmission measurements. Many nuclear weapons trainer units for a particular weapons system (no fissile material, but of the same weight and center of gravity) in shipping containers were returned to the National Nuclear Security Administration Y-12 National Security Complex in the mid 1990s. Nuclear Materials Identification System (NMIS) measurements with a time-tagged californium neutron source were used to verify that these trainers did not contain fissile material. In these blind tests, the time distributions of neutrons through the containers were measured as a function of position to locate the approximate center of the trainer in the container. Measurements were also performed with an empty container. TOF template matching measurements were then performed at this location for a large number of units. In these measurements, the californium source was located on one end of the container and a proton recoil scintillator was located on the other end. The variations in the TOF transmission for times corresponding to 1 to 5 MeV were significantly larger than statistical. Further examination of the time distribution and the energy dependence revealed that these variations corresponded to the variations in the neutron cross section of aluminum averaged over the energy resolution of the californium TOF measurement with a flight path of about 90 cm. Measurements using different thicknesses of aluminum were also performed with the source and detector separated by the same distance as for the trainer measurements. These comparison measurements confirmed that the material in the trainers was aluminum, and the total thickness of aluminum through the trainers was determined. This is an example of how californium transmission TOF measurements can be used to identify materials.
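
    For a fission-spectrum source over a ~90 cm flight path, converting arrival time to neutron energy is the basic step behind the 1-5 MeV window mentioned above. A small non-relativistic sketch (adequate at these energies) is shown below.

        import numpy as np

        NEUTRON_MASS_MEV = 939.565  # neutron rest mass, MeV/c^2
        C_CM_PER_NS = 29.9792458    # speed of light, cm/ns

        def tof_to_energy_mev(flight_path_cm, tof_ns):
            """Neutron kinetic energy E = 0.5*m*(L/t)^2 from time of flight."""
            beta = (flight_path_cm / tof_ns) / C_CM_PER_NS
            return 0.5 * NEUTRON_MASS_MEV * beta ** 2

        # Example: a neutron arriving 30 ns after the time tag over 90 cm
        print(tof_to_energy_mev(90.0, 30.0))  # about 4.7 MeV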

  16. The Transfer Function Model as a Tool to Study and Describe Space Weather Phenomena

    NASA Technical Reports Server (NTRS)

    Porter, Hayden S.; Mayr, Hans G.; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    The Transfer Function Model (TFM) is a semi-analytical, linear model that is designed especially to describe thermospheric perturbations associated with magnetic storms and substorm activity. It is a multi-constituent model (N2, O, He, H, Ar) that accounts for wind-induced diffusion, which significantly affects not only the composition and mass density but also the temperature and wind fields. Because the TFM adopts a semi-analytic approach in which the geometry and temporal dependencies of the driving sources are removed through the use of height-integrated Green's functions, it provides physical insight into the essential properties of the processes being considered, uncluttered by the accidental complexities that arise from particular source geometries and time dependences. Extending from the ground to 700 km, the TFM eliminates spurious effects due to arbitrarily chosen boundary conditions. A database of transfer functions, computed only once, can be used to synthesize a wide range of spatial and temporal source dependencies. The response synthesis can be performed quickly in real time using only limited computing capabilities. These features make the TFM unique among global dynamical models. Given these desirable properties, a version of the TFM has been developed for personal computers (PC) using advanced platform-independent 3D visualization capabilities. We demonstrate the model capabilities with simulations for different auroral sources, including the response of ducted gravity wave modes that propagate around the globe. The thermospheric response is found to depend strongly on the spatial and temporal frequency spectra of the storm. Such varied behavior is difficult to describe in statistical empirical models. To improve the capability of space weather prediction, the TFM thus could be grafted naturally onto existing statistical models using data assimilation.

  17. Ambient seismic noise interferometry in Hawai'i reveals long-range observability of volcanic tremor

    USGS Publications Warehouse

    Ballmer, Silke; Wolfe, Cecily; Okubo, Paul G.; Haney, Matt; Thurber, Clifford H.

    2013-01-01

    The use of seismic noise interferometry to retrieve Green's functions and the analysis of volcanic tremor are both useful in studying volcano dynamics. Whereas seismic noise interferometry allows long-range extraction of interpretable signals from a relatively weak noise wavefield, the characterization of volcanic tremor often requires a dense seismic array close to the source. We here show that standard processing of seismic noise interferometry yields volcanic tremor signals observable over large distances exceeding 50 km. Our study comprises 2.5 yr of data from the U.S. Geological Survey Hawaiian Volcano Observatory short period seismic network. Examining more than 700 station pairs, we find anomalous and temporally coherent signals that obscure the Green's functions. The time windows and frequency bands of these anomalous signals correspond well with the characteristics of previously studied volcanic tremor sources at Pu'u 'Ō'ō and Halema'uma'u craters. We use the derived noise cross-correlation functions to perform a grid-search for source location, confirming that these signals are surface waves originating from the known tremor sources. A grid-search with only distant stations verifies that useful tremor signals can indeed be recovered far from the source. Our results suggest that the specific data processing in seismic noise interferometry—typically used for Green's function retrieval—can aid in the study of both the wavefield and source location of volcanic tremor over large distances. In view of using the derived Green's functions to image heterogeneity and study temporal velocity changes at volcanic regions, however, our results illustrate how care should be taken when contamination by tremor may be present.
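
    The "standard processing" referred to above typically means temporal normalization, spectral whitening, and cross-correlation of simultaneous records. A minimal sketch of that workflow (one-bit normalization chosen for brevity) is given below.

        import numpy as np

        def noise_crosscorrelation(u1, u2, fs, max_lag_s):
            """Whitened cross-correlation of two continuous noise records."""
            a = np.sign(u1 - np.mean(u1))  # one-bit time normalization
            b = np.sign(u2 - np.mean(u2))
            A, B = np.fft.rfft(a), np.fft.rfft(b)
            A /= np.abs(A) + 1e-12         # spectral whitening
            B /= np.abs(B) + 1e-12
            cc = np.fft.fftshift(np.fft.irfft(A * np.conj(B), n=len(a)))
            mid = len(cc) // 2             # zero lag after fftshift
            k = int(max_lag_s * fs)
            lags = np.arange(-k, k + 1) / fs
            return lags, cc[mid - k:mid + k + 1]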

  18. AIR POLLUTION EPIDEMIOLOGY: CAN INFORMATION BE OBTAINED FROM THE VARIATIONS IN SIGNIFICANCE AND RISK AS A FUNCTION OF DAYS AFTER EXPOSURE (LAG STRUCTURE)?

    EPA Science Inventory

    Determine if analysis of lag structure from time series epidemiology, using gases, particles, and source factor time series, can contribute to understanding the relationships among various air pollution indicators. Methods: Analyze lag structure from an epidemiologic study of ca...

  19. An ion source for radiofrequency-pulsed glow discharge time-of-flight mass spectrometry

    NASA Astrophysics Data System (ADS)

    González Gago, C.; Lobo, L.; Pisonero, J.; Bordel, N.; Pereiro, R.; Sanz-Medel, A.

    2012-10-01

    A Grimm-type glow discharge (GD) has been designed and constructed as an ion source for pulsed radiofrequency GD spectrometry coupled to an orthogonal time-of-flight mass spectrometer. Pulse shapes of argon species and analytes were studied as a function of the discharge conditions using a new in-house ion source (UNIOVI GD), and results were compared with a previous design (PROTOTYPE GD). Different behavior and shapes of the pulse profiles were observed for the two sources evaluated, particularly for the plasma gas ionic species detected. In the more analytically relevant region (the afterglow), signals for 40Ar+ with this new design were negligible, while maximum intensity was reached earlier in time for 41(ArH)+ than when using the PROTOTYPE GD. Moreover, while the maximum 40Ar+ signals measured along the pulse period were similar in both sources, 41(ArH)+ and 80(Ar2)+ signals tend to be noticeably higher using the PROTOTYPE chamber. The UNIOVI GD design was shown to be adequate for sensitive direct analysis of solid samples, offering linear calibration graphs and good crater shapes. Limits of detection (LODs) are of the same order of magnitude for both sources, although the UNIOVI source provides slightly better LODs for analytes with masses slightly higher than 41(ArH)+.

  20. [Research on Time-frequency Characteristics of Magneto-acoustic Signal of Different Thickness Medium Based on Wave Summing Method].

    PubMed

    Zhang, Shunqi; Yin, Tao; Ma, Ren; Liu, Zhipeng

    2015-08-01

    Functional imaging of biological electrical characteristics based on the magneto-acoustic effect provides valuable information about tissue in early tumor diagnosis, wherein the time and frequency characteristics of the magneto-acoustic signal are important for image reconstruction. This paper proposes a wave summing method based on the Green's function solution for the acoustic source of the magneto-acoustic effect. Simulations and analysis under a quasi-1D transmission condition are carried out on the time and frequency characteristics of magneto-acoustic signals of models with different thicknesses. The simulated magneto-acoustic signals were verified through experiments. Simulation results for different thicknesses showed that the time-frequency characteristics of the magneto-acoustic signal reflect the thickness of the sample. Thin samples (less than one wavelength of the pulse) and thick samples (larger than one wavelength) showed different summed waveforms and frequency characteristics, owing to the difference in summing thickness. Experimental results verified the theoretical analysis and simulation results. This research lays a foundation for acoustic source and conductivity reconstruction in media of different thicknesses in magneto-acoustic imaging.

  1. Indirect boundary element method to simulate elastic wave propagation in piecewise irregular and flat regions

    NASA Astrophysics Data System (ADS)

    Perton, Mathieu; Contreras-Zazueta, Marcial A.; Sánchez-Sesma, Francisco J.

    2016-06-01

    A new implementation of the indirect boundary element method allows simulating elastic wave propagation in complex configurations made of embedded regions that are homogeneous with irregular boundaries or flat layers. In an older implementation, each layer of a flat layered region would have been treated as a separate homogeneous region without taking the flat boundary information into account. For both types of regions, the scattered field results from fictitious sources positioned along their boundaries. For the homogeneous regions, the fictitious sources emit as in a full space and the wave field is given by analytical Green's functions. For flat layered regions, fictitious sources emit as in an unbounded flat layered region and the wave field is given by Green's functions obtained from the discrete wavenumber (DWN) method. The new implementation thus allows reducing the length of the discretized boundaries, but DWN Green's functions require much more computation time than the full-space Green's functions. Several optimization steps are therefore implemented and commented on. Validations are presented for 2-D and 3-D problems. Higher efficiency is achieved in 3-D.

  2. Bayesian source tracking via focalization and marginalization in an uncertain Mediterranean Sea environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L

    2010-07-01

    This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.

  3. Photoprotection in sequestered plastids of sea slugs and respective algal sources

    PubMed Central

    Cruz, Sónia; Cartaxana, Paulo; Newcomer, Rebecca; Dionísio, Gisela; Calado, Ricardo; Serôdio, João; Pelletreau, Karen N.; Rumpho, Mary E.

    2015-01-01

    Some sea slugs are capable of retaining functional sequestered chloroplasts (kleptoplasts) for variable periods of time. The mechanisms supporting the maintenance of these organelles in animal hosts are still largely unknown. Non-photochemical quenching (NPQ) and the occurrence of a xanthophyll cycle were investigated in the sea slugs Elysia viridis and E. chlorotica using chlorophyll fluorescence measurements and pigment analysis. The photoprotective capacity of kleptoplasts was compared to that observed in their respective algal sources, Codium tomentosum and Vaucheria litorea. A functional xanthophyll cycle and a rapidly reversible NPQ component were found in V. litorea and E. chlorotica but not in C. tomentosum and E. viridis. To our knowledge, this is the first report of the absence of a functional xanthophyll cycle in a green macroalga. The absence of a functional xanthophyll cycle in C. tomentosum could contribute to the premature loss of photosynthetic activity and the relatively short-term retention of kleptoplasts in E. viridis. On the contrary, E. chlorotica displays one of the longest known examples of functional kleptoplasty. We speculate that different efficiencies of the photoprotection and repair mechanisms of algal food sources play a role in the longevity of photosynthetic activity in kleptoplasts retained by sea slugs. PMID:25601025

  4. Earthquake source imaging by high-resolution array analysis at regional distances: the 2010 M7 Haiti earthquake as seen by the Venezuela National Seismic Network

    NASA Astrophysics Data System (ADS)

    Meng, L.; Ampuero, J. P.; Rendon, H.

    2010-12-01

    Back projection of teleseismic waves based on array processing has become a popular technique for earthquake source imaging, in particular to track the areas of the source that generate the strongest high-frequency radiation. The technique has been previously applied to study the rupture process of the Sumatra earthquake and the supershear rupture of the Kunlun earthquake. Here we attempt to image the Haiti earthquake using the data recorded by the Venezuela National Seismic Network (VNSN). The network is composed of 22 broad-band stations with an East-West oriented geometry, and is located approximately 10 degrees away from Haiti in the direction perpendicular to the Enriquillo fault strike. This is the first opportunity to exploit the privileged position of the VNSN to study large earthquake ruptures in the Caribbean region. It is also a great opportunity to explore the back projection scheme of the crustal Pn phase at regional distances, which provides unique complementary insights to teleseismic source inversions. The challenge in the analysis of the 2010 M7.0 Haiti earthquake is its very compact source region, possibly shorter than 30 km, which is below the resolution limit of standard back projection techniques based on beamforming. Results of back projection analysis using the teleseismic USArray data reveal little detail of the rupture process. To overcome the classical resolution limit we explored the Multiple Signal Classification (MUSIC) method, a high-resolution array processing technique based on the signal-noise orthogonality in the eigenspace of the data covariance, which achieves both enhanced resolution and a better ability to resolve closely spaced sources. We experiment with various synthetic earthquake scenarios to test the resolution. We find that MUSIC provides at least 3 times higher resolution than beamforming. We also study the inherent bias due to the interference of coherent Green's functions, which leads to a potential quantification of the bias uncertainty of the back projection. Preliminary results from the Venezuela data set show an East-to-West rupture propagation along the fault with sub-Rayleigh rupture speed, consistent with a compact source with two significant asperities, which are confirmed by the source time function obtained from Green's function deconvolution and by other source inversion results. These efforts could lead the Venezuela National Seismic Network to play a prominent role in the timely characterization of the rupture process of large earthquakes in the Caribbean, including future ruptures along the yet-unbroken segments of the Enriquillo fault system.
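
    The MUSIC step can be sketched compactly: eigendecompose the array covariance, keep the noise subspace, and scan steering vectors for directions nearly orthogonal to it. The sketch below is the generic narrow-band pseudospectrum, not the authors' regional-phase implementation.

        import numpy as np

        def music_spectrum(R, steering, n_sources):
            """MUSIC pseudospectrum over a grid of candidate directions.

            R         : (n_sta, n_sta) narrow-band data covariance matrix
            steering  : (n_grid, n_sta) complex steering vectors
            n_sources : assumed number of coherent sources
            """
            w, V = np.linalg.eigh(R)       # eigenvalues in ascending order
            En = V[:, :-n_sources]         # noise-subspace eigenvectors
            proj = np.abs(steering.conj() @ En) ** 2
            return 1.0 / proj.sum(axis=1)  # peaks where steering is orthogonal to noise space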

  5. Restoration of Bladder and Bowel Function Using Electrical Stimulation and Block after Spinal Cord Injury

    DTIC Science & Technology

    2016-10-01

    Annual report for award W81XWH-14-2-0132, reporting period 29 Sep 2015 - 28 Sep 2016, on the restoration of bladder and bowel function using electrical stimulation and block after spinal cord injury; only the standard report documentation page is available in this record.

  6. Reactivating Neural Circuits with Clinically Accessible Stimulation to Restore Hand Function in Persons with Tetraplegia

    DTIC Science & Technology

    2017-09-01

    Annual report for award W81XWH-16-1-0395 on reactivating neural circuits with clinically accessible stimulation to restore hand function in persons with tetraplegia; only the standard report documentation page is available in this record.

  7. Transport and solubility of Hetero-disperse dry deposition particulate matter subject to urban source area rainfall-runoff processes

    NASA Astrophysics Data System (ADS)

    Ying, G.; Sansalone, J.

    2010-03-01

    With respect to hydrologic processes, the impervious pavement interface significantly alters relationships between rainfall and runoff. Commensurate with the alteration of hydrologic processes, the pavement also facilitates transport and solubility of dry deposition particulate matter (PM) in runoff. This study examines dry depositional flux rates, granulometric modification by runoff transport, as well as generation of total dissolved solids (TDS), alkalinity and conductivity in source area runoff resulting from PM solubility. PM is collected from a paved source area transportation corridor (I-10) in Baton Rouge, Louisiana encompassing 17 dry deposition and 8 runoff events. The mass-based granulometric particle size distribution (PSD) is measured and modeled through a cumulative gamma function, while PM surface area distributions across the PSD follow a log-normal distribution. Dry deposition flux rates are modeled as separate first-order exponential functions of previous dry hours (PDH) for PM and its suspended, settleable and sediment fractions. When trans-located from dry deposition into runoff, PSDs are modified, with a d50m decreasing from 331 to 14 μm after transport and 60 min of settling. Solubility experiments as a function of pH, contact time and particle size using source area rainfall generate constitutive models that reproduce pH, alkalinity and TDS for historical events. Equilibrium pH, alkalinity and TDS are strongly influenced by particle size and contact time. The constitutive leaching models are combined with measured PSDs from a series of rainfall-runoff events to demonstrate that the model results replicate alkalinity and TDS in runoff from the subject watershed. Results illustrate the granulometry of dry deposition PM, the modification of PSDs along the drainage pathway, and the role of PM solubility in the generation of TDS, alkalinity and conductivity in urban source area rainfall-runoff.

  8. Waveform inversion in the frequency domain for the simultaneous determination of earthquake source mechanism and moment function

    NASA Astrophysics Data System (ADS)

    Nakano, M.; Kumagai, H.; Inoue, H.

    2008-06-01

    We propose a method of waveform inversion to rapidly and routinely estimate both the moment function and the centroid moment tensor (CMT) of an earthquake. In this method, waveform inversion is carried out in the frequency domain to obtain the moment function more rapidly than when solved in the time domain. We assume a pure double-couple source mechanism in order to stabilize the solution when using data from a small number of seismic stations. The fault and slip orientations are estimated by a grid search with respect to the strike, dip and rake angles. The moment function in the time domain is obtained from the inverse Fourier transform of the frequency components determined by the inversion. Since observed waveforms used for the inversion are limited in a particular frequency band, the estimated moment function is a bandpassed form. We develop a practical approach to estimate the deconvolved form of the moment function, from which we can reconstruct detailed rupture history and the seismic moment. The source location is determined by a spatial grid search using adaptive grid spacings, which are gradually decreased in each step of the search. We apply this method to two events that occurred in Indonesia by using data from a broad-band seismic network in Indonesia (JISNET): one northeast of Sulawesi (Mw = 7.5) on 2007 January 21, and the other south of Java (Mw = 7.5) on 2006 July 17. The source centroid locations and mechanisms we estimated for both events are consistent with those determined by the Global CMT Project and the National Earthquake Information Center of the U.S. Geological Survey. The estimated rupture duration of the Sulawesi event is 16 s, which is comparable to a typical duration for earthquakes of this magnitude, while that of the Java event is anomalously long (176 s), suggesting that this event was a tsunami earthquake. Our application demonstrates that this inversion method has great potential for rapid and routine estimations of both the CMT and the moment function, and may be useful for identification of tsunami earthquakes.
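
    The frequency-domain step has a compact least-squares form: at each frequency, the moment-function coefficient that best fits all stations follows from a scalar normal equation, and the band-limited time-domain moment function is recovered by an inverse FFT. The sketch below illustrates this under a fixed trial mechanism and location; the water-level regularization is an added stabilizing assumption, not part of the paper's formulation.

        import numpy as np

        def invert_moment_function(d_spec, g_spec, water=1e-3):
            """Least-squares moment function, one frequency at a time.

            d_spec : (n_freq, n_sta) observed displacement spectra
            g_spec : (n_freq, n_sta) Green's function spectra for the trial
                     double-couple mechanism and source location
            """
            num = np.sum(np.conj(g_spec) * d_spec, axis=1)
            den = np.sum(np.abs(g_spec) ** 2, axis=1)
            m_spec = num / (den + water * den.max())  # water-level damping
            return np.fft.irfft(m_spec)               # band-limited m(t)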

  9. Critical bounds on noise and SNR for robust estimation of real-time brain activity from functional near infra-red spectroscopy.

    PubMed

    Aqil, Muhammad; Jeong, Myung Yung

    2018-04-24

    The robust characterization of real-time brain activity carries potential for many applications. However, the contamination of measured signals by various instrumental, environmental, and physiological sources of noise introduces a substantial amount of signal variance and, consequently, challenges real-time estimation of contributions from underlying neuronal sources. Functional near infra-red spectroscopy (fNIRS) is an emerging imaging modality whose real-time potential is yet to be fully explored. The objectives of the current study are to (i) validate a time-dependent linear model of hemodynamic responses in fNIRS, and (ii) test the robustness of this approach against measurement noise (instrumental and physiological) and mis-specification of the hemodynamic response basis functions (amplitude, latency, and duration). We propose a linear hemodynamic model with time-varying parameters, which are estimated (adapted and tracked) using a dynamic recursive least squares algorithm. Owing to the linear nature of the activation model, the problem of achieving robust convergence to an accurate estimation of the model parameters is recast as a problem of parameter-error stability around the origin. We show that robust convergence of the proposed method is guaranteed in the presence of an acceptable degree of model misspecification, and we derive an upper bound on noise under which reliable parameters can still be inferred. We also derive a lower bound on the signal-to-noise ratio above which reliable parameters can still be inferred from a channel/voxel. Whilst here applied to fNIRS, the proposed methodology is applicable to other hemodynamic-based imaging technologies such as functional magnetic resonance imaging.
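
    The tracking machinery reduces to a compact recursion. Below is a generic recursive least squares update with a forgetting factor, the standard workhorse behind dynamic RLS estimators; variable roles are illustrative and not taken from the paper's notation.

        import numpy as np

        def rls_update(theta, P, x, y, lam=0.99):
            """One recursive least squares step with forgetting factor lam.

            theta : current parameter estimate (e.g., hemodynamic weights)
            P     : current inverse covariance of the regressors
            x     : regressor vector for this sample (design-matrix row)
            y     : measured fNIRS sample
            """
            Px = P @ x
            k = Px / (lam + x @ Px)              # gain vector
            theta = theta + k * (y - x @ theta)  # innovation-driven update
            P = (P - np.outer(k, Px)) / lam      # covariance update
            return theta, P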

  10. Biomass burning source characterization requirements in air quality models with and without data assimilation: challenges and opportunities

    NASA Astrophysics Data System (ADS)

    Hyer, E. J.; Zhang, J. L.; Reid, J. S.; Curtis, C. A.; Westphal, D. L.

    2007-12-01

    Quantitative models of the transport and evolution of atmospheric pollution have graduated from the laboratory to become a part of the operational activity of forecast centers. Scientists studying the composition and variability of the atmosphere put great effort into developing methods for accurately specifying sources of pollution, including natural and anthropogenic biomass burning. These methods must be adapted for use in operational contexts, which impose additional strictures on input data and methods. First, only input data sources available in near real-time are suitable for use in operational applications. Second, operational applications must make use of redundant data sources whenever possible. This is a shift in philosophy: in a research context, the most accurate and complete data set will be used, whereas in an operational context, the system must be designed with maximum redundancy. The goal in an operational context is to produce, to the extent possible, consistent and timely output, given sometimes inconsistent inputs. The Naval Aerosol Analysis and Prediction System (NAAPS), a global operational aerosol analysis and forecast system, recently began incorporating assimilation of satellite-derived aerosol optical depth. Assimilation of satellite AOD retrievals has dramatically improved aerosol analyses and forecasts from this system. The use of aerosol data assimilation also changes the strategy for improving the smoke source function. The absolute magnitude of emission events can be refined through feedback from the data assimilation system, both in real-time operations and in post-processing analysis of data assimilation results. In terms of the aerosol source functions, the largest gains in model performance are now to be made by reducing data latency and minimizing missed detections. In this presentation, recent model development work on the Fire Locating and Monitoring of Burning Emissions (FLAMBE) system that provides smoke aerosol boundary conditions for NAAPS is described, including redundant integration of multiple satellite platforms and development of feedback loops between the data assimilation system and the smoke source.

  11. Spatiotemporal-Thematic Data Processing for the Semantic Web

    NASA Astrophysics Data System (ADS)

    Hakimpour, Farshad; Aleman-Meza, Boanerges; Perry, Matthew; Sheth, Amit

    This chapter presents practical approaches to data processing in the space, time and theme dimensions using existing Semantic Web technologies. It describes how we obtain geographic and event data from Internet sources and also how we integrate them into an RDF store. We briefly introduce a set of functionalities in space, time and semantics. These functionalities are implemented based on our existing technology for main-memory-based RDF data processing developed at the LSDIS Lab. A number of these functionalities are exposed as REST Web services. We present two sample client-side applications that are developed using a combination of our services with Google Maps service.

  12. Detection and characterization of lightning-based sources using continuous wavelet transform: application to audio-magnetotellurics

    NASA Astrophysics Data System (ADS)

    Larnier, H.; Sailhac, P.; Chambodut, A.

    2018-01-01

    Atmospheric electromagnetic waves created by global lightning activity contain information about electrical processes of the inner and the outer Earth. Large signal-to-noise ratio events are particularly interesting because they convey information about electromagnetic properties along their path. We introduce a new methodology to automatically detect and characterize lightning-based waves using a time-frequency decomposition obtained through the application of the continuous wavelet transform. We focus specifically on three types of sources, namely atmospherics, slow tails and whistlers, which cover the frequency range 10 Hz to 10 kHz. Each wave has distinguishable characteristics in the time-frequency domain due to source shape and dispersion processes. Our methodology allows automatic detection of each type of event in the time-frequency decomposition thanks to their specific signatures. Horizontal polarization attributes are also recovered in the time-frequency domain. This procedure is first applied to synthetic extremely low frequency time-series with different signal-to-noise ratios to test for robustness. We then apply it to real data: three stations of audio-magnetotelluric data acquired in Guadeloupe, an overseas French territory. Most analysed atmospherics and slow tails display linear polarization, whereas analysed whistlers are elliptically polarized. The diversity of lightning activity is finally analysed in an audio-magnetotelluric data processing framework, as used in subsurface prospecting, through estimation of the impedance response functions. We show that audio-magnetotelluric processing results depend mainly on the frequency content of the electromagnetic waves observed in the processed time-series, with an emphasis on the difference between morning and afternoon acquisition. Our new methodology based on the time-frequency signature of lightning-induced electromagnetic waves allows automatic detection and characterization of events in audio-magnetotelluric time-series, providing the means to assess the quality of response functions obtained through processing.
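
    A continuous wavelet decomposition of the kind described is available off the shelf; the sketch below uses PyWavelets with a complex Morlet wavelet and a logarithmic frequency axis spanning the 10 Hz-10 kHz band of interest. The wavelet choice and scale spacing are illustrative, not the paper's settings.

        import numpy as np
        import pywt

        def tf_decomposition(signal, fs, fmin=10.0, fmax=1.0e4, n_scales=64):
            """Complex Morlet CWT scalogram of an ELF/VLF record."""
            freqs = np.geomspace(fmin, fmax, n_scales)
            fc = pywt.central_frequency("cmor1.5-1.0")
            scales = fc * fs / freqs  # convert target frequencies to scales
            coef, actual_freqs = pywt.cwt(signal, scales, "cmor1.5-1.0",
                                          sampling_period=1.0 / fs)
            return coef, actual_freqs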

  13. Time-reversal imaging techniques applied to tremor waveforms near Cholame, California to locate tectonic tremor

    NASA Astrophysics Data System (ADS)

    Horstmann, T.; Harrington, R. M.; Cochran, E. S.

    2012-12-01

    Frequently, the lack of distinctive phase arrivals makes locating tectonic tremor more challenging than locating earthquakes. Classic location algorithms based on travel times cannot be directly applied because impulsive phase arrivals are often difficult to recognize. Traditional location algorithms are often modified to use phase arrivals identified from stacks of recurring low-frequency events (LFEs) observed within tremor episodes, rather than single events. Stacking the LFE waveforms improves the signal-to-noise ratio for the otherwise non-distinct phase arrivals. In this study, we apply a different method to locate tectonic tremor: a modified time-reversal imaging approach that potentially exploits the information from the entire tremor waveform instead of phase arrivals from individual LFEs. Time reversal imaging uses the waveforms of a given seismic source recorded by multiple seismometers at discrete points on the surface and a 3D velocity model to rebroadcast the waveforms back into the medium to identify the seismic source location. In practice, the method works by reversing the seismograms recorded at each of the stations in time, and back-propagating them from the receiver location individually into the sub-surface as a new source time function. We use a staggered-grid, finite-difference code with 2.5 ms time steps and a grid node spacing of 50 m to compute the rebroadcast wavefield. We calculate the time-dependent curl field at each grid point of the model volume for each back-propagated seismogram. To locate the tremor, we assume that the source time function back-propagated from each individual station produces a similar curl field at the source position. We then cross-correlate the time-dependent curl field functions and calculate a median cross-correlation coefficient at each grid point. The highest median cross-correlation coefficient in the model volume is expected to represent the source location. For our analysis, we use the velocity model of Thurber et al. (2006) interpolated to a grid spacing of 50 m. Such grid spacing corresponds to frequencies of up to 8 Hz, which is suitable for calculating the wave propagation of tremor. Our dataset contains continuous broadband data from 13 STS-2 seismometers deployed from May 2010 to July 2011 along the Cholame segment of the San Andreas Fault as well as data from the HRSN and PBO networks. Initial synthetic results from tests on a 2D plane using a line of 15 receivers suggest that we are able to recover accurate event locations to within 100 m horizontally and 300 m in depth. We conduct additional synthetic tests to determine the influence of signal-to-noise ratio, number of stations used, and the uncertainty in the velocity model on the location result by adding noise to the seismograms and perturbations to the velocity model. Preliminary results show accurate locations to within 400 m with a median signal-to-noise ratio of 3.5 and 5% perturbations in the velocity model. The next steps will entail performing the synthetic tests on the 3D velocity model, and applying the method to tremor waveforms. Furthermore, we will determine the spatial and temporal distribution of the source locations and compare our results to those of Sumy and others.

  14. Effects of fourth-order dispersion in very high-speed optical time-division multiplexed transmission.

    PubMed

    Capmany, J; Pastor, D; Sales, S; Ortega, B

    2002-06-01

    We present a closed-form expression for computation of the output pulse's rms time width in an optical fiber link with up to fourth-order dispersion (FOD) by use of an optical source with arbitrary linewidth and chirp parameters. We then specialize the expression to analyze the effect of FOD on the transmission of very high-speed linear optical time-division multiplexing systems. By suitable source chirping, FOD can be compensated for to an upper link-length limit above which other techniques must be employed. Finally, a design formula to estimate the maximum attainable bit rate limited by FOD as a function of the link length is also presented.

  15. Dynamic surface acoustic response to a thermal expansion source on an anisotropic half space.

    PubMed

    Zhao, Peng; Zhao, Ji-Cheng; Weaver, Richard

    2013-05-01

    The surface displacement response to a distributed thermal expansion source is solved using the reciprocity principle. By convolving the strain Green's function with the thermal stress field created by an ultrafast laser illumination, the complete surface displacement on an anisotropic half space induced by laser absorption is calculated in the time domain. This solution applies to the near field surface displacement due to pulse laser absorption. The solution is validated by performing ultrafast laser pump-probe measurements and showing very good agreement between the measured time-dependent probe beam deflection and the computed surface displacement.

  16. Source locations for impulsive electric signals seen in the night ionosphere of Venus

    NASA Technical Reports Server (NTRS)

    Russell, C. T.; Von Dornum, M.; Scarf, F. L.

    1989-01-01

    A mapping, as a function of latitude and longitude, of the rate of occurrence of impulsive VLF noise bursts in Venus' dark low-altitude ionosphere (a rate which increases rapidly with decreasing altitude) indicates enhanced occurrence rates over Atla. In a 30-sec observing period, impulsive signals are present 70 percent of the time at 160 km in the region of maximum occurrence; the occurrence rates, moreover, increase with decreasing latitude, so that the equatorial rate is of the order of 1.6 times that at 30 deg latitude. These phenomena are consistent with lightning-generated wave sources.

  17. Assessment of infrasound signals recorded on seismic stations and infrasound arrays in the western United States using ground truth sources

    NASA Astrophysics Data System (ADS)

    Park, Junghyun; Hayward, Chris; Stump, Brian W.

    2018-06-01

    Ground truth sources in Utah during 2003-2013 are used to assess the contribution of temporal atmospheric conditions to infrasound detection and the predictive capabilities of atmospheric models. Ground truth sources consist of 28 long duration static rocket motor burn tests and 28 impulsive rocket body demolitions. Automated infrasound detections from a hybrid of regional seismometers and infrasound arrays use a combination of short-term time average/long-term time average ratios and spectral analyses. These detections are grouped into station triads using a Delaunay triangulation network and then associated to estimate phase velocity and azimuth to filter signals associated with a particular source location. The resulting range and azimuth distribution from sources to detecting stations varies seasonally and is consistent with predictions based on seasonal atmospheric models. Impulsive signals from rocket body detonations are observed at greater distances (>700 km) than the extended duration signals generated by the rocket burn test (up to 600 km). Infrasound energy attenuation associated with the two source types is quantified as a function of range and azimuth from infrasound amplitude measurements. Ray-tracing results using Ground-to-Space atmospheric specifications are compared to these observations and illustrate the degree to which the time variations in characteristics of the observations can be predicted over a multiple year time period.
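
    The STA/LTA detector mentioned here is a standard first stage in automated infrasound and seismic processing. Below is a minimal sketch of the ratio computation; the window lengths and the trigger threshold in the usage line are illustrative defaults, not values from the study.

```python
import numpy as np

def sta_lta(trace, fs, sta_win=1.0, lta_win=30.0):
    """Short-term over long-term average ratio of signal power.

    trace: 1-D record; fs: sampling rate (Hz). Window lengths are in
    seconds; detections are declared where the ratio exceeds a threshold.
    """
    power = np.asarray(trace, dtype=float) ** 2
    n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
    sta = np.convolve(power, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(power, np.ones(n_lta) / n_lta, mode="same")
    return sta / (lta + 1e-12)  # guard against division by zero

# triggers = np.flatnonzero(sta_lta(x, fs=100.0) > 3.0)
```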

  18. Small-scale seismic inversion using surface waves extracted from noise cross correlation.

    PubMed

    Gouédard, Pierre; Roux, Philippe; Campillo, Michel

    2008-03-01

    Green's functions can be retrieved between receivers from the correlation of ambient seismic noise or with an appropriate set of randomly distributed sources. This principle is demonstrated in small-scale geophysics using noise sources generated by human steps during a 10-min walk in the alignment of a 14-m-long accelerometer line array. The time-domain correlation of the records yields two surface wave modes extracted from the Green's function between each pair of accelerometers. A frequency-wave-number Fourier analysis yields each mode contribution and their dispersion curve. These dispersion curves are then inverted to provide the one-dimensional shear velocity of the near surface.
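
    The core operation, retrieving an empirical Green's function from the time-domain correlation of two noise records, can be sketched in a few lines. The FFT-based implementation below is a generic illustration, not the authors' processing chain.

```python
import numpy as np

def noise_crosscorrelation(a, b, max_lag):
    """Time-domain cross-correlation of two noise records, which yields
    the empirical inter-receiver Green's function up to an amplitude
    factor. FFT-based, with zero-padding to avoid circular wrap-around.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    nfft = 2 * len(a)
    spec = np.fft.rfft(a, nfft) * np.conj(np.fft.rfft(b, nfft))
    cc = np.fft.irfft(spec, nfft)
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))  # negative | positive lags
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, cc
```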

  19. Deconvolution of squared velocity waveform as applied to the study of a noncoherent short-period radiator in the earthquake source

    NASA Astrophysics Data System (ADS)

    Gusev, A. A.; Pavlov, V. M.

    1991-07-01

    We consider an inverse problem of determination of a short-period (high-frequency) radiator in an extended earthquake source. This radiator is assumed to be noncoherent (i.e., random); it can be described by its power flux or brightness (which depends on time and location over the extended source). To characterize this radiator we use the temporal intensity function (TIF) of a seismic waveform at a given receiver point. It is defined as the (time-varying) mean elastic wave energy flux through unit area. We suggest estimating it empirically from the velocity seismogram by squaring and smoothing it. We refer to this function as the “observed TIF”. We believe that one can represent the TIF produced by an extended radiator and recorded at some receiver point in the earth as the convolution of two components: (1) the “ideal” intensity function (ITIF) which would be recorded in an ideal nonscattering earth from the same radiator; and (2) the intensity function which would be recorded in the real earth from a unit point instant radiator (the “intensity Green's function”, IGF). This representation enables us to attempt to estimate the ITIF of a large earthquake by inverse filtering or deconvolution of the observed TIF of this event, using the observed TIF of a small event (actually, a fore- or aftershock) as the empirical IGF. Therefore, the effect of scattering is “stripped off”. Examples of the application of this procedure to real data are given. We also show that if one can determine the far-field ITIF for enough rays, one can extract from them the information on the space-time structure of the radiator (that is, of the brightness function). We apply this theoretical approach to short-period P-wave records of the 1978 Miyagi-oki earthquake ( M=7.6). Spatial and temporal centroids of a short-period radiator are estimated.
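
    The "observed TIF" and the inverse-filtering step translate directly into code. A minimal sketch, with a water-level spectral division standing in for the stabilized deconvolution (the smoothing window and stabilization level are assumptions, not the authors' choices):

```python
import numpy as np

def temporal_intensity_function(vel, fs, smooth_s=1.0):
    """'Observed TIF': square the velocity seismogram and smooth it
    with a boxcar of smooth_s seconds."""
    n = max(1, int(smooth_s * fs))
    return np.convolve(np.asarray(vel, float) ** 2, np.ones(n) / n, mode="same")

def deconvolve_itif(tif_large, tif_small, water_level=0.01):
    """Estimate the ITIF of a large event by deconvolving the observed
    TIF of a small event (the empirical IGF), stripping off scattering.
    """
    n = len(tif_large)
    F = np.fft.rfft(tif_large, 2 * n)
    G = np.fft.rfft(tif_small, 2 * n)
    g2 = np.abs(G) ** 2
    g2 = np.maximum(g2, water_level * g2.max())  # water-level stabilization
    return np.fft.irfft(F * np.conj(G) / g2, 2 * n)[:n]
```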

  20. Correlation functions from a unified variational principle: Trial Lie groups

    NASA Astrophysics Data System (ADS)

    Balian, R.; Vénéroni, M.

    2015-11-01

    Time-dependent expectation values and correlation functions for many-body quantum systems are evaluated by means of a unified variational principle. It optimizes a generating functional depending on sources associated with the observables of interest. It is built by imposing through Lagrange multipliers constraints that account for the initial state (at equilibrium or off equilibrium) and for the backward Heisenberg evolution of the observables. The trial objects are respectively akin to a density operator and to an operator involving the observables of interest and the sources. We work out here the case where trial spaces constitute Lie groups. This choice reduces the original degrees of freedom to those of the underlying Lie algebra, consisting of simple observables; the resulting objects are labeled by the indices of a basis of this algebra. Explicit results are obtained by expanding in powers of the sources. Zeroth and first orders provide thermodynamic quantities and expectation values in the form of mean-field approximations, with dynamical equations having a classical Lie-Poisson structure. At second order, the variational expression for two-time correlation functions separates, as does its exact counterpart, the approximate dynamics of the observables from the approximate correlations in the initial state. Two building blocks are involved: (i) a commutation matrix which stems from the structure constants of the Lie algebra; and (ii) the second-derivative matrix of a free-energy function. The diagonalization of both matrices, required for practical calculations, is worked out in a way analogous to the standard RPA. The ensuing structure of the variational formulae is the same as for a system of non-interacting bosons (or of harmonic oscillators) plus, at non-zero temperature, classical Gaussian variables. This property is explained by mapping the original Lie algebra onto a simpler Lie algebra. The results, valid for any trial Lie group, fulfill consistency properties and encompass several special cases: linear responses, static and time-dependent fluctuations, zero- and high-temperature limits, static and dynamic stability of small deviations.

  1. Explosion localization and characterization via infrasound using numerical modeling

    NASA Astrophysics Data System (ADS)

    Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.

    2017-12-01

    Numerous methods have been applied to locate, detect, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates TRM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu and Etna Volcano, Italy, both of which exhibit complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g. semblance), and suggest our method is preferred as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic and other anthropogenic sources at a wide variety of ranges and scenarios. Kim, K., Lees, J.M., 2015. Imaging volcanic infrasound sources using time reversal mirror algorithm. Geophysical Journal International 202, 1663-1676.
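
    The back-projection step lends itself to a compact sketch. The version below assumes hypothetical precomputed lookup tables travel_times and trans_loss (per station, per grid node) extracted from the 3-D FDTD Green's functions, and stacks travel-time-shifted, loss-corrected amplitude envelopes on the source grid.

```python
import numpy as np

def rtm_stack(waveforms, fs, travel_times, trans_loss):
    """Back-project and stack station records over a candidate source grid.

    waveforms: (n_sta, n_time) records; travel_times and trans_loss:
    (n_sta, n_grid) tables (names are hypothetical). Returns, per grid
    node, the peak of the time-aligned, loss-corrected stack; the
    maximum over nodes marks the likely source.
    """
    env = np.abs(waveforms)  # crude amplitude envelope as an energy proxy
    n_sta, n_time = env.shape
    n_grid = travel_times.shape[1]
    stack_max = np.zeros(n_grid)
    for g in range(n_grid):
        stack = np.zeros(n_time)
        for s in range(n_sta):
            k = min(int(round(travel_times[s, g] * fs)), n_time - 1)
            corrected = env[s] / max(trans_loss[s, g], 1e-12)
            stack[: n_time - k] += corrected[k:]  # undo the travel-time delay
        stack_max[g] = stack.max() / n_sta
    return stack_max
```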

  2. A Brief Review on the Use of Functional Near-Infrared Spectroscopy (fNIRS) for Language Imaging Studies in Human Newborns and Adults

    ERIC Educational Resources Information Center

    Quaresima, Valentina; Bisconti, Silvia; Ferrari, Marco

    2012-01-01

    Upon stimulation, real time maps of cortical hemodynamic responses can be obtained by non-invasive functional near-infrared spectroscopy (fNIRS) which measures changes in oxygenated and deoxygenated hemoglobin after positioning multiple sources and detectors over the human scalp. The current commercially available transportable fNIRS systems have…

  3. Time-Constrained Functional Connectivity Analysis of Cortical Networks Underlying Phonological Decoding in Typically Developing School-Aged Children: A Magnetoencephalography Study

    ERIC Educational Resources Information Center

    Simos, Panagiotis G.; Rezaie, Roozbeh; Fletcher, Jack M.; Papanicolaou, Andrew C.

    2013-01-01

    The study investigated functional associations between left hemisphere occipitotemporal, temporoparietal, and inferior frontal regions during oral pseudoword reading in 58 school-aged children with typical reading skills (aged 10.4 [plus or minus] 1.6, range 7.5-12.5 years). Event-related neuromagnetic data were used to compute source-current…

  4. Monolithic LED arrays, next generation smart lighting sources

    NASA Astrophysics Data System (ADS)

    Lagrange, Alexandre; Bono, Hubert; Templier, François

    2016-03-01

    LEDs have become the main light sources of the future as they open the path for intelligent use of light in time, intensity and color. In many applications, substantial energy savings are achieved by adjusting these properties. Smart lighting has three dimensions: energy efficiency brought by GaN blue-emitting LEDs; integration of electronics, sensors and microprocessors in the lighting system; and development of new functionalities and services provided by the light. Monolithic LED arrays allow two major innovations, the spatial control of light emission and the adjustment of the electrical properties of the source.

  5. Optimal strategies for observation of active galactic nuclei variability with Imaging Atmospheric Cherenkov Telescopes

    NASA Astrophysics Data System (ADS)

    Giomi, Matteo; Gerard, Lucie; Maier, Gernot

    2016-07-01

    Variable emission is one of the defining characteristics of active galactic nuclei (AGN). While providing precious information on the nature and physics of the sources, variability is often challenging to observe with time- and field-of-view-limited astronomical observatories such as Imaging Atmospheric Cherenkov Telescopes (IACTs). In this work, we address two questions relevant for the observation of sources characterized by AGN-like variability: what is the most time-efficient way to detect such sources, and what is the observational bias that can be introduced by the choice of the observing strategy when conducting blind surveys of the sky. Different observing strategies are evaluated using simulated light curves and realistic instrument response functions of the Cherenkov Telescope Array (CTA), a future gamma-ray observatory. We show that strategies that make use of very small observing windows, spread over large periods of time, allow for faster detection of the source and are less influenced by the variability properties of the sources, as compared to strategies that concentrate the observing time in a small number of large observing windows. Although derived using CTA as an example, our conclusions are conceptually valid for any IACT facility and, in general, for all observatories with a small field of view and limited duty cycle.

  6. Gpufit: An open-source toolkit for GPU-accelerated curve fitting.

    PubMed

    Przybylski, Adrian; Thiel, Björn; Keller-Findeisen, Jan; Stock, Bernd; Bates, Mark

    2017-11-16

    We present a general purpose, open-source software library for estimation of non-linear parameters by the Levenberg-Marquardt algorithm. The software, Gpufit, runs on a Graphics Processing Unit (GPU) and executes computations in parallel, resulting in a significant gain in performance. We measured a speed increase of up to 42 times when comparing Gpufit with an identical CPU-based algorithm, with no loss of precision or accuracy. Gpufit is designed such that it is easily incorporated into existing applications or adapted for new ones. Multiple software interfaces, including interfaces to C, Python, and Matlab, ensure that Gpufit is accessible from most programming environments. The full source code is published as an open source software repository, making its function transparent to the user and facilitating future improvements and extensions. As a demonstration, we used Gpufit to accelerate an existing scientific image analysis package, yielding significantly improved processing times for super-resolution fluorescence microscopy datasets.
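
    As a CPU-side reference for the kind of fit Gpufit parallelizes, here is a plain SciPy Levenberg-Marquardt fit of a 2-D Gaussian spot, the workhorse model in super-resolution localization. This is ordinary SciPy, not the Gpufit API.

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian2d(p, x, y):
    """2-D Gaussian spot: amplitude, centre, width, constant offset."""
    amp, x0, y0, sigma, offset = p
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset

def fit_spot(image):
    """Levenberg-Marquardt fit of a single fluorescent spot."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    p0 = [float(image.max() - image.min()), nx / 2.0, ny / 2.0, 1.5, float(image.min())]
    res = least_squares(lambda p: (gaussian2d(p, x, y) - image).ravel(),
                        p0, method="lm")
    return res.x  # fitted amplitude, centre, width, offset
```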

  7. High-Performance AC Power Source by Applying Robust Stability Control Technology for Precision Material Machining

    NASA Astrophysics Data System (ADS)

    Chang, En-Chih

    2018-02-01

    This paper presents a high-performance AC power source applying robust stability control technology for precision material machining (PMM). The proposed technology combines the benefits of a finite-time convergent sliding function (FTCSF) and the firefly optimization algorithm (FOA). The FTCSF maintains the robustness of the conventional sliding mode and simultaneously speeds up the convergence of the system state. Unfortunately, when a highly nonlinear load is applied, chattering occurs. Chattering results in high total harmonic distortion (THD) in the output voltage of the AC power source, and even degrades the stability of the PMM. The FOA is therefore used to remove the chattering, while the FTCSF still preserves finite system-state convergence time. By combining the FTCSF with the FOA, the AC power source of the PMM can yield good steady-state and transient performance. Experimental results are presented in support of the proposed technology.

  8. Global excitation of wave phenomena in a dissipative multiconstituent medium. III - Response characteristics for different sources in the earth's thermosphere

    NASA Technical Reports Server (NTRS)

    Mayr, H. G.; Harris, I.; Varosi, F.; Herrero, F. A.

    1987-01-01

    A linear transfer function model of the earth's thermosphere which includes the electric field momentum source is used to study the differences in the response characteristics for Joule heating and momentum coupling in the thermosphere. It is found that, for Joule/particle heating, the temperature and density perturbations contain a relatively large trapped component which has the property of a low-pass filter, with slow decay after the source is turned off. The decay time is sensitive to the altitude of energy deposition and is significantly reduced as the source peak moves from 125 to 150 km. For electric field momentum coupling, the trapped components in the temperature and density perturbations are relatively small. In the curl field of the velocity, however, the trapped component dominates, but compared with the temperature and density its decay time is much shorter. Outside the source region the form of excitation is of secondary importance for the generation of the various propagating gravity wave modes.

  9. Solute source depletion control of forward and back diffusion through low-permeability zones

    NASA Astrophysics Data System (ADS)

    Yang, Minjune; Annable, Michael D.; Jawitz, James W.

    2016-10-01

    Solute diffusive exchange between low-permeability aquitards and high-permeability aquifers acts as a significant mediator of long-term contaminant fate. Aquifer contaminants diffuse into aquitards, but as contaminant sources are depleted, aquifer concentrations decline, triggering back diffusion from aquitards. The dynamics of the contaminant source depletion, or the source strength function, controls the timing of the transition of aquitards from sinks to sources. Here, we experimentally evaluate three archetypical transient source depletion models (step-change, linear, and exponential), and we use novel analytical solutions to accurately account for dynamic aquitard-aquifer diffusive transfer. Laboratory diffusion experiments were conducted using a well-controlled flow chamber to assess solute exchange between sand aquifer and kaolinite aquitard layers. Solute concentration profiles in the aquitard were measured in situ using electrical conductivity. Back diffusion was shown to begin earlier and produce larger mass flux for rapidly depleting sources. The analytical models showed very good correspondence with measured aquifer breakthrough curves and aquitard concentration profiles. The modeling approach links source dissolution and back diffusion, enabling assessment of human exposure risk and calculation of the back diffusion initiation time, as well as the resulting plume persistence.
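
    The archetypal source histories can be plugged into the classical one-dimensional solution for diffusion into a semi-infinite aquitard. Below is a hedged sketch, not the authors' analytical solutions: the step-change case uses the textbook erfc profile, and the exponential case superposes step responses through a discrete Duhamel integral; the depletion rate k and quadrature resolution are illustrative.

```python
import numpy as np
from scipy.special import erfc

def aquitard_profile_step(z, t, D, c0=1.0):
    """Aquitard concentration for a step-change source held at c0 at the
    aquifer-aquitard interface: the textbook erfc diffusion profile."""
    return c0 * erfc(z / (2.0 * np.sqrt(D * t)))

def aquitard_profile_exponential(z, t, D, c0=1.0, k=0.05, n_tau=400):
    """Exponentially depleting source c(0,t) = c0*exp(-k*t), built by
    superposing step responses (discrete Duhamel integral)."""
    taus = np.linspace(0.0, t, n_tau, endpoint=False)
    dtau = taus[1] - taus[0]
    prof = c0 * erfc(z / (2.0 * np.sqrt(D * t)))  # initial step at t = 0
    dc = -k * c0 * np.exp(-k * taus) * dtau       # boundary decrements
    for tau, d in zip(taus, dc):
        prof = prof + d * erfc(z / (2.0 * np.sqrt(D * (t - tau))))
    return prof
```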

  10. Solute source depletion control of forward and back diffusion through low-permeability zones.

    PubMed

    Yang, Minjune; Annable, Michael D; Jawitz, James W

    2016-10-01

    Solute diffusive exchange between low-permeability aquitards and high-permeability aquifers acts as a significant mediator of long-term contaminant fate. Aquifer contaminants diffuse into aquitards, but as contaminant sources are depleted, aquifer concentrations decline, triggering back diffusion from aquitards. The dynamics of the contaminant source depletion, or the source strength function, controls the timing of the transition of aquitards from sinks to sources. Here, we experimentally evaluate three archetypical transient source depletion models (step-change, linear, and exponential), and we use novel analytical solutions to accurately account for dynamic aquitard-aquifer diffusive transfer. Laboratory diffusion experiments were conducted using a well-controlled flow chamber to assess solute exchange between sand aquifer and kaolinite aquitard layers. Solute concentration profiles in the aquitard were measured in situ using electrical conductivity. Back diffusion was shown to begin earlier and produce larger mass flux for rapidly depleting sources. The analytical models showed very good correspondence with measured aquifer breakthrough curves and aquitard concentration profiles. The modeling approach links source dissolution and back diffusion, enabling assessment of human exposure risk and calculation of the back diffusion initiation time, as well as the resulting plume persistence.

  11. Empirical and Theoretical Aspects of Generation and Transfer of Information in a Neuromagnetic Source Network

    PubMed Central

    Vakorin, Vasily A.; Mišić, Bratislav; Krakovska, Olga; McIntosh, Anthony Randal

    2011-01-01

    Variability in source dynamics across the sources in an activated network may be indicative of how the information is processed within a network. Information-theoretic tools allow one not only to characterize local brain dynamics but also to describe interactions between distributed brain activity. This study follows such a framework and explores the relations between signal variability and asymmetry in mutual interdependencies in a data-driven pipeline of non-linear analysis of neuromagnetic sources reconstructed from human magnetoencephalographic (MEG) data collected as a reaction to a face recognition task. Asymmetry in non-linear interdependencies in the network was analyzed using transfer entropy, which quantifies predictive information transfer between the sources. Variability of the source activity was estimated using multi-scale entropy, quantifying the rate of which information is generated. The empirical results are supported by an analysis of synthetic data based on the dynamics of coupled systems with time delay in coupling. We found that the amount of information transferred from one source to another was correlated with the difference in variability between the dynamics of these two sources, with the directionality of net information transfer depending on the time scale at which the sample entropy was computed. The results based on synthetic data suggest that both time delay and strength of coupling can contribute to the relations between variability of brain signals and information transfer between them. Our findings support the previous attempts to characterize functional organization of the activated brain, based on a combination of non-linear dynamics and temporal features of brain connectivity, such as time delay. PMID:22131968
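
    Of the two information-theoretic tools used here, the variability measure rests on sample entropy. A simplified brute-force sketch follows (O(N^2), adequate for short records); multiscale entropy applies it to coarse-grained copies of the signal, as the usage comment indicates.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy: -log of the conditional probability that template
    vectors matching for m points (within tolerance r) also match for
    m + 1 points.
    """
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def match_count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return np.sum(d <= r) - len(templ)  # drop self-matches

    b = match_count(m)
    a = match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Multiscale entropy evaluates this on coarse-grained copies, e.g. scale 2:
# sample_entropy(x[: len(x) // 2 * 2].reshape(-1, 2).mean(axis=1))
```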

  12. Searching and exploitation of distributed geospatial data sources via the Naval Research Lab's Geospatial Information Database (GIDB) Portal System

    NASA Astrophysics Data System (ADS)

    McCreedy, Frank P.; Sample, John T.; Ladd, William P.; Thomas, Michael L.; Shaw, Kevin B.

    2005-05-01

    The Naval Research Laboratory's Geospatial Information Database (GIDB™) Portal System has been extended to include extensive geospatial search functionality. The GIDB Portal System interconnects over 600 distributed geospatial data sources via the Internet with a thick client, thin client and a PDA client. As the GIDB Portal System has rapidly grown over the last two years (adding hundreds of geospatial sources), the obvious requirement has arisen to more effectively mine the interconnected sources in near real-time. How the GIDB Search addresses this issue is the prime focus of this paper.

  13. Cyber Victimization and Perceived Stress: Linkages to Late Adolescents' Cyber Aggression and Psychological Functioning

    ERIC Educational Resources Information Center

    Wright, Michelle F.

    2015-01-01

    The present study examined multiple sources of strain, particular cyber victimization, and perceived stress from parents, peers, and academics, in relation to late adolescents' (ages 16-18; N = 423) cyber aggression, anxiety, and depression, each assessed 1 year later (Time 2). Three-way interactions revealed that the relationship between Time 1…

  14. Legumes as Functional Ingredients in Gluten-Free Bakery and Pasta Products.

    PubMed

    Foschia, Martina; Horstmann, Stefan W; Arendt, Elke K; Zannini, Emanuele

    2017-02-28

    The increasing demand for gluten-free food products from consumers has triggered food technologists to investigate a wide range of gluten-free ingredients from different sources to reproduce the unique network structure developed by gluten in a wheat-dough system. In recent times, attention has been focused on novel applications of legume flours and ingredients. The interest in this crop category is mainly attributed to their functional properties, such as solubility and water-binding capacity, which play an important role in gluten-free food formulation and processing. Their nutritional profile may also counteract the lack of nutrients commonly highlighted in commercial gluten-free bakery and pasta products, providing valuable sources of protein, dietary fiber, vitamins, minerals, and complex carbohydrates, which in turn have a positive impact on human health. This review reports the main chemical and functional characteristics of legumes and their functional application in gluten-free products.

  15. Numerical simulations of Asian dust storms using a coupled climate-aerosol microphysical model

    NASA Astrophysics Data System (ADS)

    Su, Lin; Toon, Owen B.

    2009-07-01

    We have developed a three-dimensional coupled microphysical/climate model based on the National Center for Atmospheric Research Community Atmospheres Model and the University of Colorado/NASA Community Aerosol and Radiation Model for Atmospheres. We have used the model to investigate the sources, removal processes, transport, and optical properties of Asian dust aerosol and its impact on downwind regions. The model simulations are conducted primarily during the time frame of the Aerosol Characterization Experiment-Asia field experiment (March-May 2001) since considerable in situ data are available at that time. Our dust source function follows Ginoux et al. (2001). We modified the dust source function by using the friction velocity instead of the 10-m wind based on wind erosion theory, by adding a size-dependent threshold friction velocity following Marticorena and Bergametti (1995) and by adding a soil moisture correction. A Weibull distribution is implemented to estimate the subgrid-scale wind speed variability. We use eight size bins for mineral dust ranging from 0.1 to 10 μm radius. Generally, the model reproduced the aerosol optical depth retrieved by the ground-based Aerosol Robotic Network (AERONET) Sun photometers at six study sites ranging in location from near the Asian dust sources to the Eastern Pacific region. By constraining the dust complex refractive index from AERONET retrievals near the dust source, we also find the single-scattering albedo to be consistent with AERONET retrievals. However, large regional variations are observed due to local pollution. The timing of dust events is comparable to the National Institute for Environmental Studies (NIES) lidar data in Beijing and Nagasaki. However, the simulated dust aerosols are at higher altitudes than those observed by the NIES lidar.
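
    The emission scheme described here pairs a Ginoux-style flux law with a Weibull model of subgrid wind variability. The sketch below illustrates the idea only: it keeps the 10-m-wind form of the flux (the study substitutes friction velocity and adds a size-dependent threshold and a soil-moisture correction), and the shape parameter and dimensional constant are placeholders.

```python
import numpy as np
from scipy.special import gamma

def weibull_pdf(u, k, c):
    """Weibull wind-speed distribution with shape k and scale c."""
    return (k / c) * (u / c) ** (k - 1) * np.exp(-((u / c) ** k))

def dust_flux_weibull(u_mean, u_thresh, k_shape=2.0, const=1.0, n=2000):
    """Grid-box mean dust emission: flux ~ const * u^2 * (u - u_thresh)
    for u > u_thresh, averaged over a Weibull subgrid wind distribution
    whose mean matches the resolved wind u_mean.
    """
    c_scale = u_mean / gamma(1.0 + 1.0 / k_shape)  # scale set by the mean wind
    u = np.linspace(u_thresh, u_thresh + 10.0 * c_scale, n)
    flux = const * u ** 2 * (u - u_thresh)         # emission above threshold
    w = weibull_pdf(u, k_shape, c_scale)
    return float(np.sum(flux * w) * (u[1] - u[0])) # simple quadrature
```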

  16. On recovering distributed IP information from inductive source time domain electromagnetic data

    NASA Astrophysics Data System (ADS)

    Kang, Seogi; Oldenburg, Douglas W.

    2016-10-01

    We develop a procedure to invert time domain induced polarization (IP) data for inductive sources. Our approach is based upon the inversion methodology in conventional electrical IP (EIP), which uses a sensitivity function that is independent of time. However, significant modifications are required for inductive source IP (ISIP) because electric fields in the ground do not achieve a steady state. The time-history for these fields needs to be evaluated and then used to define approximate IP currents. The resultant data, either a magnetic field or its derivative, are evaluated through the Biot-Savart law. This forms the desired linear relationship between data and pseudo-chargeability. Our inversion procedure has three steps: (1) Obtain a 3-D background conductivity model. We advocate, where possible, that this be obtained by inverting early-time data that do not suffer significantly from IP effects. (2) Decouple IP responses embedded in the observations by forward modelling the TEM data due to a background conductivity and subtracting these from the observations. (3) Use the linearized sensitivity function to invert data at each time channel and recover pseudo-chargeability. Post-interpretation of the recovered pseudo-chargeabilities at multiple times allows recovery of intrinsic Cole-Cole parameters such as time constant and chargeability. The procedure is applicable to all inductive source survey geometries but we focus upon airborne time domain EM (ATEM) data with a coincident-loop configuration because of the distinctive negative IP signal that is observed over a chargeable body. Several assumptions are adopted to generate our linearized modelling but we systematically test the capability and accuracy of the linearization for ISIP responses arising from different conductivity structures. On test examples we show: (1) our decoupling procedure enhances the ability to extract information about existence and location of chargeable targets directly from the data maps; (2) the horizontal location of a target body can be well recovered through inversion; (3) the overall geometry of a target body might be recovered but for ATEM data a depth weighting is required in the inversion; (4) we can recover estimates of intrinsic τ and η that may be useful for distinguishing between two chargeable targets.
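
    Steps (2) and (3) reduce, per time channel, to a subtraction and a linear least-squares solve. The sketch below assumes a hypothetical precomputed sensitivity matrix J and uses plain Tikhonov regularization; the paper's actual inversion (including the depth weighting recommended for ATEM data) is more elaborate.

```python
import numpy as np

def invert_pseudo_chargeability(d_obs, d_bg, J, beta=1.0):
    """One time channel of a linearized ISIP-style inversion.

    d_obs: observed data; d_bg: forward-modelled background (IP-free) TEM
    response used to decouple the IP signal (step 2); J: linearized
    sensitivity mapping pseudo-chargeability to data, assumed precomputed
    from the background conductivity. beta is an illustrative trade-off
    parameter for the Tikhonov term (step 3).
    """
    d_ip = d_obs - d_bg                      # decoupled IP response
    A = J.T @ J + beta * np.eye(J.shape[1])  # regularized normal equations
    return np.linalg.solve(A, J.T @ d_ip)
```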

  17. Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

    NASA Technical Reports Server (NTRS)

    Lalime, Aimee L.; Johnson, Marty E.; Rizzi, Stephen A. (Technical Monitor)

    2002-01-01

    Binaural or "virtual acoustic" representation has been proposed as a method of analyzing acoustic and vibroacoustic data. Unfortunately, this binaural representation can require extensive computer power to apply the Head Related Transfer Functions (HRTFs) to a large number of sources, as with a vibrating structure. This work focuses on reducing the number of real-time computations required in this binaural analysis through the use of Singular Value Decomposition (SVD) and Equivalent Source Reduction (ESR). The SVD method reduces the complexity of the HRTF computations by breaking the HRTFs into dominant singular values (and vectors). The ESR method reduces the number of sources to be analyzed in real-time computation by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. It is shown that the effectiveness of the SVD and ESR methods improves as the complexity of the source increases. In addition, preliminary auralization tests have shown that the results from both the SVD and ESR methods are indistinguishable from the results found with the exhaustive method.
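
    The SVD reduction can be illustrated compactly: factor the bank of HRTFs and keep only the singular triplets that carry most of the energy, so that many per-source filters collapse into a few shared basis filters plus mixing weights. A minimal sketch, with an assumed matrix layout:

```python
import numpy as np

def reduce_hrtf_bank(H, energy=0.99):
    """Truncated SVD of a bank of HRTFs.

    H: (n_sources, n_freqs) complex matrix, one head-related transfer
    function per source direction (layout is an assumption). The result
    satisfies H ~= weights @ basis with only r shared basis filters.
    """
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(frac, energy)) + 1  # smallest rank reaching target
    weights = U[:, :r] * s[:r]                  # per-source mixing coefficients
    basis = Vh[:r]                              # shared basis filters
    return weights, basis
```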

  18. Rupture process of 2016, 25 January earthquake, Alboran Sea (South Spain, Mw= 6.4) and aftershocks series

    NASA Astrophysics Data System (ADS)

    Buforn, E.; Pro, C.; del Fresno, C.; Cantavella, J.; Sanz de Galdeano, C.; Udias, A.

    2016-12-01

    We have studied the rupture process of the 25 January 2016 earthquake (Mw = 6.4) that occurred in the Alboran Sea, south of Spain. The main shock, a foreshock and the largest aftershocks (Mw = 4.5) have been relocated using the NonLinLoc algorithm. The results show a NE-SW distribution of foci at shallow depth (less than 15 km). For the main shock, the focal mechanism has been obtained from slip inversion over the rupture plane of teleseismic data, corresponding to left-lateral strike-slip motion. The rupture starts at 7 km depth and propagates upward with a complex source time function. In order to obtain a more detailed source time function and to validate the results obtained from teleseismic data, we have used the Empirical Green's Functions (EGF) method at regional distances. Finally, results of the directivity effect from teleseismic Rayleigh waves and the EGF method are consistent with a rupture propagation to the NE. These results are interpreted in terms of the main geological features in the region.

  19. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach

    PubMed Central

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo

    2016-01-01

    Objective: Combined source imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a non-invasive fashion. Source imaging techniques have been used successfully to either determine the source of activity or to extract source time-courses for Granger causality analysis, previously. In this work, we utilize source imaging algorithms to both find the network nodes (regions of interest) and then extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Methods: Source imaging methods are used to identify network nodes and extract time-courses and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulations studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results: Localization errors of network nodes are less than 5 mm and normalized connectivity errors of ~20% in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion: Our study indicates that combined source imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network nodes location and internodal connectivity). Significance: The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473

  20. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach.

    PubMed

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A; Zhang, Wenbo; He, Bin

    2016-12-01

    Combined source-imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source-imaging techniques have been used successfully to either determine the source of activity or to extract source time-courses for Granger causality analysis, previously. In this work, we utilize source-imaging algorithms to both find the network nodes [regions of interest (ROI)] and then extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Source-imaging methods are used to identify network nodes and extract time-courses and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulations studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from interictal and ictal signals recorded by EEG and/or Magnetoencephalography (MEG). Localization errors of network nodes are less than 5 mm and normalized connectivity errors of ∼20% in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Our study indicates that combined source-imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network nodes location and internodal connectivity). The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions.

  1. A Composite Source Model With Fractal Subevent Size Distribution

    NASA Astrophysics Data System (ADS)

    Burjanek, J.; Zahradnik, J.

    A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R⁻². The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g. the mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function, with constant rise time on the subevent). The final slip of each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model in a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model in a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. Strong ground-motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
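
    The size statistics are easy to reproduce: the condition that N(>R) be proportional to R⁻² corresponds to drawing radii from a truncated Pareto distribution with exponent 2 until the summed subevent areas fill the target rupture area. A sketch, leaving the non-overlapping placement on the fault as a separate packing step:

```python
import numpy as np

def fractal_subevents(target_area, r_min, r_max, seed=None):
    """Draw subevent radii so that N(>R) ~ R^-2 (truncated Pareto with
    exponent 2) until the summed areas reach the target rupture area.
    """
    rng = np.random.default_rng(seed)
    radii, area = [], 0.0
    trunc = 1.0 - (r_min / r_max) ** 2
    while area < target_area:
        u = rng.uniform()
        r = r_min / np.sqrt(1.0 - u * trunc)  # inverse-transform sample
        radii.append(r)
        area += np.pi * r ** 2
    return np.array(radii)
```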

  2. Light-Cone Effect of Radiation Fields in Cosmological Radiative Transfer Simulations

    NASA Astrophysics Data System (ADS)

    Ahn, Kyungjin

    2015-02-01

    We present a novel method to implement time-delayed propagation of radiation fields in cosmological radiative transfer simulations. Time-delayed propagation of radiation fields requires construction of retarded-time fields by tracking the location and lifetime of radiation sources along the corresponding light-cones. Cosmological radiative transfer simulations have, until now, ignored this "light-cone effect" or implemented ray-tracing methods that are computationally demanding. We show that radiative transfer calculation of the time-delayed fields can be easily achieved in numerical simulations when periodic boundary conditions are used, by calculating the time-discretized retarded-time Green's function using the Fast Fourier Transform (FFT) method and convolving it with the source distribution. We also present a direct application of this method to the long-range radiation field of Lyman-Werner band photons, which is important in high-redshift astrophysics with the first stars.
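
    With periodic boundaries, the convolution of the retarded-time Green's function with the source distribution is a pointwise product in Fourier space. A minimal sketch of one retarded-time bin, assuming the kernel has been precomputed:

```python
import numpy as np

def time_delayed_field(source_grid, green_kernel):
    """Convolve a periodic source distribution with one retarded-time bin
    of the Green's function via FFT.

    source_grid, green_kernel: same-shape 3-D arrays; the kernel is the
    time-discretized retarded-time Green's function for this bin, assumed
    precomputed. Periodic boundary conditions make circular convolution
    the physically consistent choice.
    """
    S = np.fft.fftn(source_grid)
    G = np.fft.fftn(green_kernel)
    return np.real(np.fft.ifftn(S * G))

# Summing such products over retarded-time bins, pairing the sources alive
# in each bin with the matching kernel, assembles the time-delayed field.
```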

  3. Evaluation of light detector surface area for functional Near Infrared Spectroscopy.

    PubMed

    Wang, Lei; Ayaz, Hasan; Izzetoglu, Meltem; Onaral, Banu

    2017-10-01

    Functional Near Infrared Spectroscopy (fNIRS) is an emerging neuroimaging technique that utilizes near infrared light to detect cortical concentration changes of oxy-hemoglobin and deoxy-hemoglobin non-invasively. Using light sources and detectors over the scalp, multi-wavelength light intensities are recorded as time series and converted to concentration changes of hemoglobin via modified Beer-Lambert law. Here, we describe a potential source for systematic error in the calculation of hemoglobin changes and light intensity measurements. Previous system characterization and analysis studies looked into various fNIRS parameters such as type of light source, number and selection of wavelengths, distance between light source and detector. In this study, we have analyzed the contribution of light detector surface area to the overall outcome. Results from Monte Carlo based digital phantoms indicated that selection of detector area is a critical system parameter in minimizing the error in concentration calculations. The findings here can guide the design of future fNIRS sensors.
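
    For context, the modified Beer-Lambert law referred to here inverts a small linear system per channel: optical-density changes at two wavelengths, scaled by pathlength, are mapped to oxy- and deoxy-hemoglobin changes through the extinction-coefficient matrix. A sketch with placeholder numbers (real applications use tabulated extinction spectra and calibrated differential pathlength factors):

```python
import numpy as np

def mbll_concentrations(d_od, ext, distance, dpf):
    """Modified Beer-Lambert law at two wavelengths.

    d_od: optical-density changes [dOD(l1), dOD(l2)]; ext: 2x2 extinction
    matrix [[eHbO(l1), eHbR(l1)], [eHbO(l2), eHbR(l2)]]; distance:
    source-detector separation; dpf: differential pathlength factors
    [DPF(l1), DPF(l2)]. Units must be mutually consistent; the values in
    the usage line below are placeholders, not tabulated coefficients.
    """
    scaled = np.asarray(d_od, float) / (distance * np.asarray(dpf, float))
    d_hbo, d_hbr = np.linalg.solve(np.asarray(ext, float), scaled)
    return d_hbo, d_hbr

# mbll_concentrations([0.01, 0.02], [[0.9, 1.8], [1.2, 0.8]], 3.0, [6.0, 6.0])
```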

  4. 3D transient electromagnetic simulation using a modified correspondence principle for wave and diffusion fields

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Ji, Y.; Egbert, G. D.

    2015-12-01

    The fictitious time domain (FTD) method, based on the correspondence principle for wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems, which can reduce computation time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where sources are in the conductive sea water, we must model the EM fields in the air; to allow for topography, air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry, instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows: (1) The EM diffusion equation is transformed to a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite difference time-stepping scheme, with CPML (convolutional PML) boundary conditions for the whole computational domain including the air and earth, and with an FTD-domain source corresponding to the actual transmitter geometry. Resistivity of the air layers is kept as low as possible, to compromise between efficiency (longer fictitious time step) and accuracy. We have generally found a host/air resistivity contrast of 10⁻³ is sufficient. (3) A "modified" Fourier transform (MFT) allows us to recover the system's impulse response from the fictitious time domain to the diffusion (frequency) domain. (4) The result is multiplied by the Fourier transform (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to get the final full-waveform, full-time response of the system in the time domain. In general, this method can be used to efficiently solve most time-domain EM simulation problems for non-point sources.
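
    Steps (4) and (5) are an application of the convolution theorem. A minimal sketch, assuming the MFT has already delivered the impulse-response spectrum on the matching frequency grid (the scaling convention is an assumption, not the paper's):

```python
import numpy as np

def full_waveform_response(impulse_spectrum, source_current, dt):
    """Multiply the recovered frequency-domain impulse response by the
    Fourier transform of the real source current, then inverse-transform
    to the time domain, avoiding an explicit time-domain convolution.

    impulse_spectrum is assumed sampled on the rfft grid of the padded
    source; the dt scaling is one common convention, not a universal one.
    """
    n = 2 * (len(impulse_spectrum) - 1)
    current_spec = np.fft.rfft(source_current, n)
    return np.fft.irfft(impulse_spectrum * current_spec, n) * dt
```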

  5. Evaluation of hazard and integrity monitor functions for integrated alerting and notification using a sensor simulation framework

    NASA Astrophysics Data System (ADS)

    Bezawada, Rajesh; Uijt de Haag, Maarten

    2010-04-01

    This paper discusses the results of an initial evaluation study of hazard and integrity monitor functions for use with integrated alerting and notification. The Hazard and Integrity Monitor (HIM) (i) allocates information sources within the Integrated Intelligent Flight Deck (IIFD) to required functionality (like conflict detection and avoidance) and determines required performance of these information sources as part of that function; (ii) monitors or evaluates the required performance of the individual information sources and performs consistency checks among various information sources; (iii) integrates the information to establish tracks of potential hazards that can be used for the conflict probes or conflict prediction for various time horizons including the 10, 5, 3, and <3 minutes used in our scenario; (iv) detects and assesses the class of the hazard and provides possible resolutions. The HIM monitors the operation-dependent performance parameters related to the potential hazards in a manner similar to the Required Navigation Performance (RNP). Various HIM concepts have been implemented and evaluated using a previously developed sensor simulator/synthesizer. Within the simulation framework, various inputs to the IIFD and its subsystems are simulated, synthesized from actual collected data, or played back from actual flight test sensor data. The framework and HIM functions are implemented in Simulink®, a modeling language developed by The MathWorks™. This modeling language allows for test and evaluation of various sensor and communication link configurations as well as the inclusion of feedback from the pilot on the performance of the aircraft.

  6. The excitation and characteristic frequency of the long-period volcanic event: An approach based on an inhomogeneous autoregressive model of a linear dynamic system

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Kumazawa, M.; Yamaoka, K.; Chouet, B.A.

    1998-01-01

    We present a method to quantify the source excitation function and characteristic frequencies of long-period volcanic events. The method is based on an inhomogeneous autoregressive (AR) model of a linear dynamic system, in which the excitation is assumed to be a time-localized function applied at the beginning of the event. The tail of an exponentially decaying harmonic waveform is used to determine the characteristic complex frequencies of the event by the Sompi method. The excitation function is then derived by applying an AR filter constructed from the characteristic frequencies to the entire seismogram of the event, including the inhomogeneous part of the signal. We apply this method to three long-period events at Kusatsu-Shirane Volcano, central Japan, whose waveforms display simple decaying monochromatic oscillations except at the beginning of the events. We recover time-localized excitation functions lasting roughly 1 s at the start of each event and find that the estimated functions are very similar to each other at all the stations of the seismic network for each event. The phases of the characteristic oscillations referred to the estimated excitation function fall within a narrow range for almost all the stations. These results strongly suggest that the excitation and mode of oscillation are both dominated by volumetric change components. Each excitation function starts with a pronounced dilatation consistent with a sudden deflation of the volumetric source, which may be interpreted in terms of a choked-flow transport mechanism. The frequency and Q of the characteristic oscillation both display a temporal evolution from event to event. Assuming a crack filled with bubbly water as the seismic source for these events, we apply the Van Wijngaarden-Papanicolaou model to estimate the acoustic properties of the bubbly liquid and find that the observed changes in the frequencies and Q are consistently explained by a temporal change in the radii of the bubbles characterizing the bubbly water in the crack.
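
    The Sompi-style analysis of the decaying coda can be approximated with an ordinary AR fit: the roots of the characteristic polynomial map to the complex frequencies (frequency and Q) of the oscillation. The sketch below is a simplified stand-in for the method, not the authors' implementation.

```python
import numpy as np

def ar_complex_frequencies(x, order, dt):
    """Complex frequencies of a decaying harmonic coda from an AR fit.

    Fits x[t] = sum_k a_k x[t-k] by least squares, takes the roots z of
    the characteristic polynomial, and maps z = exp(s*dt), with
    s = -alpha + i*omega, to frequency f = omega/(2*pi) and quality
    factor Q = pi*f/alpha. Only decaying oscillatory modes are returned.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    A = np.column_stack([x[order - k - 1 : n - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    roots = np.roots(np.concatenate(([1.0], -a)))
    s = np.log(roots) / dt
    omega, alpha = s.imag, -s.real
    keep = (omega > 0) & (alpha > 0)       # decaying, positive-frequency modes
    freq = omega[keep] / (2.0 * np.pi)
    q = np.pi * freq / alpha[keep]
    return freq, q
```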

  7. Nonparametric Conditional Estimation

    DTIC Science & Technology

    1987-02-01

    the data because the statistician has complete control over the method. It is especially reasonable when there is a bona fide loss function to which... For example, the sample mean is m(Fn). Most calculations that statisticians perform on a set of data can be expressed as statistical functionals on...

  8. Generalized Boolean Functions as Combiners

    DTIC Science & Technology

    2017-06-01

    unable to find an analytical way of calculating a number for the complexity. Given the data we presented, there is not an obvious way to predict what... backbone of many computer functions. Cryptography drives online commerce and allows privileged information safe transit between two parties as well as many

  9. Frequency domain and full waveform time domain inversion of ground based magnetometer, electrometer and incoherent scattering radar arrays to image strongly heterogenous 3-D Earth structure, ionospheric structure, and to predict the intensity of GICs in the power grid

    NASA Astrophysics Data System (ADS)

    Schultz, A.; Imamura, N.; Bonner, L. R., IV; Cosgrove, R. B.

    2016-12-01

    Ground-based magnetometer and electrometer arrays provide the means to probe the structure of the Earth's interior, the interactions of space weather with the ionosphere, and to anticipate the intensity of geomagnetically induced currents (GICs) in power grids. We present a local-to-continental scale view of a heterogeneous 3-D crust and mantle as determined from magnetotelluric (MT) observations across arrays of ground-based electric and magnetic field sensors. MT impedance tensors describe the relationship between electric and magnetic fields at a given site; thus implicitly they contain all known information on the 3-D electrical resistivity structure beneath and surrounding that site. By using multivariate transfer functions to project real-time magnetic observatory network data to areas surrounding electric power grids, and by projecting those magnetic fields through MT impedance tensors, the projected magnetic field can be transformed into predictions of electric fields along the path of the transmission lines, an essential element of predicting the intensity of GICs in the grid. Finally, we explore Earth-ionosphere coupling, and hence GICs, directly in the time domain. We consider the fully coupled EM system, where we allow for a non-stationary ionospheric source field of arbitrary complexity above a 3-D Earth. We solve the simultaneous inverse problem for 3-D Earth conductivity and source field structure directly in the time domain. In the present work, we apply this method to magnetotelluric data obtained from a synchronously operating array of 25 MT stations that collected continuous MT waveform data in the interior of Alaska during the autumn and winter of 2015 under the footprint of the Poker Flat (Alaska) Incoherent Scattering Radar (PFISR). PFISR data yield functionals of the ionospheric electric field and ionospheric conductivity that constrain the MT source field. We show that in this region conventional robust MT processing methods struggle to produce reliable MT response functions at periods much greater than about 2,000 s, a consequence, we believe, of the complexity of the ionospheric source fields in this high-latitude setting. This provides impetus for direct waveform inversion methods that dispense with typical parametric assumptions made about the MT source fields.

  10. Deconvolution of post-adaptive optics images of faint circumstellar environments by means of the inexact Bregman procedure

    NASA Astrophysics Data System (ADS)

    Benfenati, A.; La Camera, A.; Carbillet, M.

    2016-02-01

    Aims: High-dynamic range images of astrophysical objects present some difficulties in their restoration because of the presence of very bright point-wise sources surrounded by faint and smooth structures. We propose a method that enables the restoration of this kind of image by taking such sources into account and, at the same time, improving the contrast enhancement in the final image. Moreover, the proposed approach can help to detect the position of the bright sources. Methods: The classical variational scheme in the presence of Poisson noise aims to find the minimum of a functional composed of the generalized Kullback-Leibler function and a regularization functional: the latter function is employed to preserve some characteristic in the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. This proposed scheme allows us to control the level of inexactness arising in the computed solution and permits us to employ an overestimation of the regularization parameter (which balances the trade-off between the Kullback-Leibler function and the Bregman distance). This aspect is fundamental, since the estimation of this kind of parameter is very difficult in the presence of Poisson noise. Results: The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources' position is exactly known, this scheme provides us with very satisfactory results. In case of inexact knowledge of the sources' position, it can in addition give some useful information on the true positions. Finally, the inexact Bregman scheme can also be used when information about the binary star's position concerns a connected region instead of isolated pixels.
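
    For orientation, the unregularized fixed point that such Kullback-Leibler schemes build on is the classical Richardson-Lucy iteration for Poisson data. The sketch below is that baseline only, not the authors' inexact Bregman procedure; the PSF is assumed origin-centered on a periodic grid.

```python
import numpy as np

def fft_convolve(a, k):
    """Circular convolution via FFT; the kernel is zero-padded to the
    image shape and assumed origin-centered."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(k, a.shape)))

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Classical Richardson-Lucy iteration: the multiplicative scheme for
    Kullback-Leibler (Poisson) deblurring, without regularization."""
    est = np.full(image.shape, float(image.mean()))
    psf_flip = psf[::-1, ::-1]  # correlation = convolution with flipped PSF
    for _ in range(n_iter):
        ratio = image / (fft_convolve(est, psf) + eps)
        est *= fft_convolve(ratio, psf_flip)
    return est
```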

  11. Stress Wave Source Characterization: Impact, Fracture, and Sliding Friction

    NASA Astrophysics Data System (ADS)

    McLaskey, Gregory Christofer

    Rapidly varying forces, such as those associated with impact, rapid crack propagation, and fault rupture, are sources of stress waves which propagate through a solid body. This dissertation investigates how properties of a stress wave source can be identified or constrained using measurements recorded at an array of sensor sites located far from the source. This methodology is often called the method of acoustic emission and is useful for structural health monitoring and the noninvasive study of material behavior such as friction and fracture. In this dissertation, laboratory measurements of 1-300 mm wavelength stress waves are obtained by means of piezoelectric sensors which detect high frequency (10 kHz-3 MHz) motions of a specimen's surface, picometers to nanometers in amplitude. Then, stress wave source characterization techniques are used to study ball impact, drying shrinkage cracking in concrete, and the micromechanics of stick-slip friction of Poly(methyl methacrylate) (PMMA) and rock/rock interfaces. In order to quantitatively relate recorded signals obtained with an array of sensors to a particular stress wave source, wave propagation effects and sensor distortions must be accounted for. This is achieved by modeling the physics of wave propagation and transduction as linear transfer functions. Wave propagation effects are precisely modeled by an elastodynamic Green's function, sensor distortion is characterized by an instrument response function, and the stress wave source is represented with a force moment tensor. These transfer function models are verified through calibration experiments which employ two different mechanical calibration sources: ball impact and glass capillary fracture. The suitability of the ball impact source model, based on Hertzian contact theory, is experimentally validated for small (~1 mm) balls impacting massive plates composed of four different materials: aluminum, steel, glass, and PMMA. Using this transfer function approach and the two mechanical calibration sources, four types of piezoelectric sensors were calibrated: three commercially available sensors and the Glaser-type conical piezoelectric sensor, which was developed in the Glaser laboratory. The distorting effects of each sensor are modeled using autoregressive-moving average (ARMA) models, and because vital phase information is robustly incorporated into these models, they are useful for simulating or removing sensor-induced distortions, so that a displacement time history can be retrieved from recorded signals. The Glaser-type sensor was found to be very well modeled as a unidirectional displacement sensor which detects stress wave disturbances down to about 1 picometer in amplitude. Finally, the merits of a fully calibrated experimental system are demonstrated in a study of stress wave sources arising from sliding friction, and the relationship between those sources and earthquakes. A laboratory friction apparatus was built for this work which allows the micro-mechanisms of friction to be studied with stress wave analysis. Using an array of 14 Glaser-type sensors, and precise models of wave propagation effects and the sensor distortions, the physical origins of the stress wave sources are explored. Force-time functions and focal mechanisms are determined for discrete events found amid the "noise" of friction. These localized events are interpreted to be the rupture of micrometer-sized contacts, known as asperities.
By comparing stress wave sources from stick-slip experiments on plastic/plastic and rock/rock interfaces, systematic differences were found. The rock interface produces very rapid (<1 microsecond) implosive forces indicative of brittle asperity failure and fault gouge formation, while rupture on the plastic interface releases only shear force and produces a source more similar to earthquakes commonly recorded in the field. The difference between the mechanisms is attributed to the vast differences in the hardness and melting temperatures of the two materials, which affect the distribution of asperities as well as their failure behavior. With proper scaling, the strong link between material properties and laboratory earthquakes will aid in our understanding of fault mechanics and the generation of earthquakes and seismic tremor.
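
    The ARMA transfer-function idea above lends itself to a compact numerical illustration. The following is a minimal sketch, assuming hypothetical ARMA coefficients rather than the dissertation's calibrated sensor models:

```python
# Hedged sketch: simulating and removing sensor distortion with an ARMA
# model, in the spirit of the transfer-function approach described above.
# The coefficients below are hypothetical placeholders, not calibrated values.
import numpy as np
from scipy import signal

# Hypothetical ARMA(2, 2) sensor model: voltage = (b/a) applied to displacement
b = np.array([1.0, -1.6, 0.64])   # moving-average (numerator) coefficients
a = np.array([1.0, -1.2, 0.52])   # autoregressive (denominator) coefficients

t = np.arange(0, 1e-4, 1e-8)                               # 10 ns sampling
displacement = 1e-12 * np.exp(-((t - 2e-5) / 2e-6) ** 2)   # ~1 pm pulse

# Forward model: simulate the sensor-distorted signal
voltage = signal.lfilter(b, a, displacement)

# Inverse filter: recover displacement (valid because b is minimum phase)
recovered = signal.lfilter(a, b, voltage)
print(np.max(np.abs(recovered - displacement)))            # negligible error
```

    Inverse filtering of this kind is only stable when the ARMA numerator is minimum phase, which is one reason the phase information mentioned in the abstract matters.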

  12. Analytical time-domain Green’s functions for power-law media

    PubMed Central

    Kelly, James F.; McGough, Robert J.; Meerschaert, Mark M.

    2008-01-01

    Frequency-dependent loss and dispersion are typically modeled with a power-law attenuation coefficient, where the power-law exponent ranges from 0 to 2. To facilitate analytical solution, a fractional partial differential equation is derived that exactly describes power-law attenuation and the Szabo wave equation [“Time domain wave-equations for lossy media obeying a frequency power-law,” J. Acoust. Soc. Am. 96, 491–500 (1994)] is an approximation to this equation. This paper derives analytical time-domain Green’s functions in power-law media for exponents in this range. To construct solutions, stable law probability distributions are utilized. For exponents equal to 0, 1∕3, 1∕2, 2∕3, 3∕2, and 2, the Green’s function is expressed in terms of Dirac delta, exponential, Airy, hypergeometric, and Gaussian functions. For exponents strictly less than 1, the Green’s functions are expressed as Fox functions and are causal. For exponents greater than or equal to 1, the Green’s functions are expressed as Fox and Wright functions and are noncausal. However, numerical computations demonstrate that for observation points only one wavelength from the radiating source, the Green’s function is effectively causal for power-law exponents greater than or equal to 1. The analytical time-domain Green’s function is numerically verified against the material impulse response function, and the results demonstrate excellent agreement. PMID:19045774
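
    The stable-law construction above can be probed numerically. Below is a sketch that evaluates a maximally skewed stable density as the lossy spreading kernel; the exponent, skewness and scale are illustrative, not the paper's parameter values:

```python
# Hedged sketch: evaluating a stable-law spreading kernel of the kind used
# to build the power-law Green's functions above.
import numpy as np
from scipy.stats import levy_stable

y = 0.5          # power-law exponent (0 < y < 1: the causal regime)
beta = 1.0       # maximal skewness gives a one-sided (causal) density
scale = 1e-6     # illustrative temporal spreading scale, in seconds

t = np.linspace(-5e-6, 5e-6, 201)    # retarded time around the arrival
kernel = levy_stable.pdf(t, y, beta, loc=0.0, scale=scale)

# For y < 1 and beta = 1 the stable density is one-sided: (numerically)
# zero for t < 0, consistent with the causality result quoted above.
print(kernel[t < -3 * scale].max())
```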

  13. The use of functional data analysis to study variability in children's speech: Further data

    NASA Astrophysics Data System (ADS)

    Koenig, Laura L.; Lucero, Jorge C.

    2002-05-01

    Much previous research has reported increased token-to-token variability in children relative to adults, but the sources and implications of this variability remain matters of debate. Recently, functional data analysis (FDA) has been used as a tool to gain greater insight into the nature of variability in children's and adults' speech data. In FDA, signals are time-normalized using a smooth function of time. The magnitude of the time-warping function provides an index of phasing (temporal) variability, and a separate index of amplitude variability is calculated from the time-normalized signal. Here, oral airflow data are analyzed from 5-year-olds, 10-year-olds, and adult women producing laryngeal and oral fricatives (/h, s, z/). The preliminary FDA results show that children generally have higher temporal and amplitude indices than adults, suggesting greater variability both in gestural timing and magnitude. However, individual patterns are evident in the relative magnitude of the two indices, and in which consonants show the highest values. The time-varying patterns of flow variability in /s/ are also explored as a method of inferring relative variability among laryngeal and oral gestures. [Work supported by NIH and CNPq, Brazil.]
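
    As a rough illustration of the two indices described above, the following numpy sketch computes a phase index from time-warping functions and an amplitude index from time-normalized curves. It assumes the registration step (estimating the warping functions) has already been done elsewhere; all arrays are synthetic placeholders:

```python
# Hedged numpy sketch of the two FDA variability indices described above.
import numpy as np

n_tokens, n_samples = 20, 200
t = np.linspace(0.0, 1.0, n_samples)            # normalized time
rng = np.random.default_rng(0)

# Placeholder warping functions h_i(t): identity plus a smooth perturbation
h = t + 0.02 * rng.standard_normal((n_tokens, 1)) * np.sin(np.pi * t)
# Placeholder time-normalized (registered) airflow curves
aligned = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal((n_tokens, n_samples))

# Phase (temporal) index: size of the deviation of warping from identity
phase_index = np.sqrt(np.mean((h - t) ** 2))
# Amplitude index: residual pointwise variability after alignment
amplitude_index = np.mean(np.var(aligned, axis=0))
print(phase_index, amplitude_index)
```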

  14. The biofilm-controlling functions of rechargeable antimicrobial N-halamine dental unit waterline tubing.

    PubMed

    Porteous, Nuala; Schoolfield, John; Luo, Jie; Sun, Yuyu

    2011-01-01

    A study was conducted to test the biofilm-controlling functions of N-halamine tubing over an eight-month period. A laboratory system, simulating a teaching dental clinic, was used to test rechargeable N-halamine tubing (T) compared to an untreated control (C) using the unit manufacturer's tubing. For the long-term study, a recharged tubing (RC) treated with bleach was used to compare with the test (T) and the control (C) tubing. Source tap water was cycled through the lines at 1.4 mL/minute, five minutes on and 25 minutes off, eight hours/day, five days/week. Every three weeks, samples of effluent, recovered adherent bacteria from inside tubing surfaces, and SEM images were examined for bacterial and biofilm growth. After sampling, a recharging solution of chlorine bleach (1:10 dilution) was run through T and RC lines, left overnight, and rinsed out the next morning. One-way ANOVAs and Spearman correlations were performed to detect significant differences for T, RC, and C, and to determine significance with time period and source water, respectively. Mean log CFU/mL for C effluent > T (p = 0.028), and C tubing > T (p = 0.035). Spearman correlations were significant between effluent and source water level for T (rho = 0.817), and T tubing (rho = 0.750); between RC tubing and source water level (rho = 0.836), and time (rho = 0.745); and between C and time (rho = 0.873). SEM imaging confirmed the presence of biofilm inside RC and C, but not inside T. N-halamine tubing completely inhibited biofilm formation without negatively affecting the physical properties of the effluent water. Further research on N-halamine tubing using a pure water source is recommended, as T effluent bacterial levels reflected the source tap water quality and proliferation of planktonic bacteria with no biofilm activity.

  15. The Biofilm-Controlling Functions of Rechargeable Antimicrobial N-halamine Dental Unit Waterline Tubing

    PubMed Central

    Porteous, Nuala; Schoolfield, John; Luo, Jie; Sun, Yuyu

    2015-01-01

    Objective A study was conducted to test the biofilm-controlling functions of N-halamine tubing over an eight-month period. Methods A laboratory system, simulating a teaching dental clinic, was used to test rechargeable N-halamine tubing (T) compared to an untreated control (C) using the unit manufacturer’s tubing. For the long-term study, a recharged tubing (RC) treated with bleach was used to compare with the test (T) and the control (C) tubing. Source tap water was cycled through the lines at 1.4 mL/minute, five minutes on and 25 minutes off, eight hours/day, five days/week. Every three weeks, samples of effluent, recovered adherent bacteria from inside tubing surfaces, and SEM images were examined for bacterial and biofilm growth. After sampling, a recharging solution of chlorine bleach (1:10 dilution) was run through T and RC lines, left overnight, and rinsed out the next morning. One-way ANOVAs and Spearman correlations were performed to detect significant differences for T, RC, and C, and to determine significance with time period and source water, respectively. Results Mean log CFU/mL for C effluent > T (p = 0.028), and C tubing > T (p = 0.035). Spearman correlations were significant between effluent and source water level for T (rho = 0.817), and T tubing (rho = 0.750); between RC tubing and source water level (rho = 0.836), and time (rho = 0.745); and between C and time (rho = 0.873). SEM imaging confirmed the presence of biofilm inside RC and C, but not inside T. Conclusion N-halamine tubing completely inhibited biofilm formation without negatively affecting the physical properties of the effluent water. Further research on N-halamine tubing using a pure water source is recommended, as T effluent bacterial levels reflected the source tap water quality and proliferation of planktonic bacteria with no biofilm activity. PMID:22403982

  16. Development and Characterization of a High-Energy Neutron Time-of-Flight Imaging System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madden, Amanda Christine; Schirato, Richard C.; Swift, Alicia L.

    Los Alamos National Laboratory has developed a prototype of a high-energy neutron time-of-flight imaging system for the non-destructive evaluation of dense, massive, and/or high atomic number objects. High-energy neutrons provide the penetrating power, and thus the high dynamic range, necessary to image internal features and defects of such objects. The addition of the time gating capability allows for scatter rejection when paired with a pulsed monoenergetic beam, or neutron energy selection when paired with a pulsed broad-spectrum neutron source. The Time Gating to Reject Scatter and Select Energy (TiGReSSE) system was tested at the Los Alamos Neutron Science Center’s (LANSCE) Weapons Nuclear Research (WNR) facility, a spallation neutron source, to provide proof of concept measurements and to characterize the instrument response. This paper shows results for several objects imaged during this run cycle. In addition, system performance metrics such as the Modulation Transfer Function and the Detective Quantum Efficiency, measured as a function of neutron energy, characterize the current system performance and inform the next generation of neutron imaging instruments.

  17. Development and Characterization of a High-Energy Neutron Time-of-Flight Imaging System

    DOE PAGES

    Madden, Amanda Christine; Schirato, Richard C.; Swift, Alicia L.; ...

    2017-02-09

    Los Alamos National Laboratory has developed a prototype of a high-energy neutron time-of-flight imaging system for the non-destructive evaluation of dense, massive, and/or high atomic number objects. High-energy neutrons provide the penetrating power, and thus the high dynamic range, necessary to image internal features and defects of such objects. The addition of the time gating capability allows for scatter rejection when paired with a pulsed monoenergetic beam, or neutron energy selection when paired with a pulsed broad-spectrum neutron source. The Time Gating to Reject Scatter and Select Energy (TiGReSSE) system was tested at the Los Alamos Neutron Science Center’s (LANSCE) Weapons Nuclear Research (WNR) facility, a spallation neutron source, to provide proof of concept measurements and to characterize the instrument response. This paper shows results for several objects imaged during this run cycle. In addition, system performance metrics such as the Modulation Transfer Function and the Detective Quantum Efficiency, measured as a function of neutron energy, characterize the current system performance and inform the next generation of neutron imaging instruments.
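
    As an illustration of one of the metrics named above, the sketch below computes a Modulation Transfer Function from a line spread function, one common route to this quantity. The Gaussian LSF and pixel pitch are invented placeholders; this is not the TiGReSSE analysis pipeline:

```python
# Hedged sketch: MTF as the normalized Fourier magnitude of a measured
# line spread function (LSF).
import numpy as np

pixel_pitch_mm = 0.1
x = (np.arange(256) - 128) * pixel_pitch_mm
lsf = np.exp(-0.5 * (x / 0.4) ** 2)          # illustrative 0.4 mm LSF

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                # normalize so MTF(0) = 1
freq = np.fft.rfftfreq(x.size, d=pixel_pitch_mm)   # cycles / mm
print(freq[np.argmax(mtf < 0.1)])            # frequency where MTF drops to 10%
```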

  18. Visualization of Green's Function Anomalies for Megathrust Source in Nankai Trough by Reciprocity Method

    NASA Astrophysics Data System (ADS)

    Petukhin, A.; Miyakoshi, K.; Tsurugi, M.; Kawase, H.; Kamae, K.

    2014-12-01

    The effect of various areas (asperities or SMGAs) in the source of a megathrust subduction zone earthquake on simulated long-period ground motions is studied. For this case study we employed a source fault model proposed by HERP (2012) for a future M9-class event in the Nankai trough. The velocity structure is the 3-D JIVSM model developed for long-period ground motion simulations. The target site OSKH02 "Konohana" is located in the center of the Osaka basin. Green's functions for a large number of sub-sources (>1000) were calculated by FDM using the reciprocity approach. Depths, strike and dip angles of the sub-sources are adjusted to the shape of the upper boundary of the Philippine Sea plate. The target period range is 4-20 s. A strongly nonuniform distribution of peak amplitudes of the Green's functions is observed, and two areas have anomalously large amplitudes: (1) a large along-strike elongated area just south of the Kii peninsula and (2) a similar area south of the Kii peninsula but shifted toward the Nankai trough. The elongation of the first anomaly fits well the 10-15 km isolines of the depth distribution of the Philippine Sea plate, while the target site is located in the direction perpendicular to these isolines. For this reason, we preliminarily suppose that the plate shape may have a critical effect on the simulated ground motions, via a cumulative effect of sub-source radiation patterns and the specific strike and dip angle distributions. Analysis of the time delay of the peak arrivals at OSKH02 demonstrates that Green's functions from the second anomaly, located in the shallow part of the plate boundary, are mostly composed of surface waves.
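
    The reciprocity shortcut used here can be stated compactly. Below is the standard elastodynamic source-receiver reciprocity identity, given as a generic statement rather than a formula quoted from the abstract:

```latex
% Source-receiver reciprocity for the elastodynamic Green's function: the
% n-component of displacement at x_r due to a point force in direction p
% at x_s equals the p-component at x_s due to a force in direction n at x_r,
G_{np}(\mathbf{x}_r, t; \mathbf{x}_s) = G_{pn}(\mathbf{x}_s, t; \mathbf{x}_r).
```

    A single finite-difference run with the source placed at the receiver site therefore yields Green's functions to all >1000 sub-sources at once, instead of one run per sub-source.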

  19. Changes in Family Cohesion and Acculturative Stress among Recent Latino Immigrants

    PubMed Central

    IBAÑEZ, GLADYS E.; DILLON, FRANK; SANCHEZ, MARIANA; DE LA ROSA, MARIO; LI, TAN; VILLAR, MARIA ELENA

    2016-01-01

    Family relationships can serve as an important source of support during the acculturation process; yet, how the stress related to acculturation, or acculturative stress, may impact family functioning across time is not clear. Participants (n = 479) between the ages of 18 and 34 were recruited using respondent-driven sampling methodology. Findings suggest family cohesion decreased over time; however, it decreased less for those reporting more acculturative stress. The implication is that for those Latino immigrants who struggle to adapt to their new host culture, family remains a source of support more so than for those who do not struggle as much. PMID:27087789

  20. Small signal measurement of Sc2O3 AlGaN/GaN MOS-HEMTs

    NASA Astrophysics Data System (ADS)

    Luo, B.; Mehandru, R.; Kang, B. S.; Kim, J.; Ren, F.; Gila, B. P.; Onstine, A. H.; Abernathy, C. R.; Pearton, S. J.; Gotthold, D.; Birkhahn, R.; Peres, B.; Fitch, R.; Gillespie, J. K.; Jenkins, T.; Sewell, J.; Via, D.; Crespo, A.

    2004-02-01

    The rf performance of 1 × 200 μm² AlGaN/GaN MOS-HEMTs with Sc2O3 used as both the gate dielectric and as a surface passivation layer is reported. A maximum fT of ~11 GHz and fMAX of 19 GHz were obtained. The equivalent device parameters were extracted by fitting these data to obtain the transconductance, drain resistance, drain-source resistance, transfer time and gate-drain and gate-source capacitance as a function of gate voltage. The transfer time is on the order of 0.5-1 ps and decreases with increasing gate voltage.
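
    The fT figure quoted above is conventionally obtained by extrapolating the short-circuit current gain |h21| at -20 dB/decade to unity gain. A minimal sketch with synthetic single-pole data, not the device's measurements:

```python
# Hedged sketch: estimating f_T from |h21| roll-off, a standard
# small-signal procedure. The gain data below are synthetic placeholders.
import numpy as np

freq = np.logspace(8, 10, 50)                 # 0.1-10 GHz sweep
h21_db = 20 * np.log10(11e9 / freq)           # synthetic single-pole gain

# Pick a point well into the -20 dB/decade roll-off and extrapolate to 0 dB
i = np.argmin(np.abs(freq - 2e9))
f_t = freq[i] * 10 ** (h21_db[i] / 20)        # |h21| * f is constant
print(f_t / 1e9, "GHz")                       # ~11 GHz, cf. the value above
```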

  1. Satellite Observations of Rapidly Varying Cosmic X-ray Sources. Ph.D. Thesis - Catholic Univ.

    NASA Technical Reports Server (NTRS)

    Maurer, G. S.

    1979-01-01

    The X-ray source data obtained with the high energy celestial X-ray detector on the Orbiting Solar Observatory-8 are presented. The results from the 1977 Crab observation show nonstatistical fluctuations in the pulsed emission and in the structure of the integrated pulse profile which cannot be attributed to any known systematic effect. The Hercules observations presented here provide information on three different aspects of the pulsed X-ray emission: the variation of pulsed flux as a function of the time from the beginning of the ON-state, the variation of pulsed flux as a function of binary phase, and the energy spectrum of the pulsed emission.

  2. Non-contact time-domain imaging of functional brain activation and heterogeneity of superficial signals

    NASA Astrophysics Data System (ADS)

    Wabnitz, H.; Mazurenka, M.; Di Sieno, L.; Contini, D.; Dalla Mora, A.; Farina, A.; Hoshi, Y.; Kirilina, E.; Macdonald, R.; Pifferi, A.

    2017-07-01

    Non-contact scanning at small source-detector separation enables imaging of cerebral and extracranial signals at high spatial resolution and their separation based on early and late photons accounting for the related spatio-temporal characteristics.

  3. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule-based modeling systems, and a method for converting a procedural model to a rule-based model are described. Simulation models are used to represent real-time engineering systems. A real-time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule-based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule-based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule-based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.
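
    The knowledge-network idea above can be illustrated with a toy forward-chaining evaluator: each elementary calculation becomes a rule that re-fires reactively when an input changes. The class and rule names below are invented for illustration:

```python
# Hedged toy sketch: a dependency ("knowledge") network of elementary
# calculations that re-fires rules until quiescent when an input changes.
class RuleNetwork:
    def __init__(self):
        self.values = {}
        self.rules = []                    # rule: (output, inputs, fn)

    def add_rule(self, output, inputs, fn):
        self.rules.append((output, inputs, fn))

    def set_value(self, name, value):
        self.values[name] = value
        changed = True
        while changed:                     # forward-chain until no rule fires
            changed = False
            for out, ins, fn in self.rules:
                if all(i in self.values for i in ins):
                    new = fn(*(self.values[i] for i in ins))
                    if self.values.get(out) != new:
                        self.values[out] = new
                        changed = True

net = RuleNetwork()
net.add_rule("area", ["w", "h"], lambda w, h: w * h)
net.add_rule("load", ["area", "p"], lambda a, p: a * p)
net.set_value("w", 2.0)
net.set_value("h", 3.0)
net.set_value("p", 10.0)                   # reactively updates "load"
print(net.values["load"])                  # 60.0
```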

  4. Time-Dependent Moment Tensors of the First Four Source Physics Experiments (SPE) Explosions

    NASA Astrophysics Data System (ADS)

    Yang, X.

    2015-12-01

    We use mainly vertical-component geophone data within 2 km of the epicenter to invert for time-dependent moment tensors of the first four SPE explosions: SPE-1, SPE-2, SPE-3 and SPE-4Prime. We employ a one-dimensional (1D) velocity model developed from P- and Rg-wave travel times for the Green's function calculations. The attenuation structure of the model is developed from P- and Rg-wave amplitudes. We select data for the inversion based on the criterion that they show travel times and amplitude behavior consistent with those predicted by the 1D model. Due to the limited azimuthal coverage of the sources and the mostly vertical-component-only nature of the dataset, only the long-period, diagonal components of the moment tensors are well constrained. Nevertheless, the moment tensors, particularly their isotropic components, provide reasonable estimates of the long-period source amplitudes as well as estimates of corner frequencies, albeit with larger uncertainties. The estimated corner frequencies, however, are consistent with estimates from ratios of seismogram spectra from different explosions. These long-period source amplitudes and corner frequencies cannot be fit by classical P-wave explosion source models. The results motivate the development of new P-wave source models suitable for these chemical explosions. To that end, we fit the inverted moment-tensor spectra by modifying the classical explosion model using regressions of estimated source parameters. Although the number of data points used in the regression is small, the approach suggests a path for new-model development as more data are collected.

  5. Functional principal component analysis of glomerular filtration rate curves after kidney transplant.

    PubMed

    Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo

    2017-01-01

    This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
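
    A minimal sketch of the FPCA workflow described above, using an SVD of mean-centered curves sampled on a common time grid; the eGFR-like data are synthetic placeholders, since the clinical data are not public:

```python
# Hedged sketch: functional PCA of curves via SVD of centered samples.
import numpy as np

rng = np.random.default_rng(1)
n_patients, n_times = 100, 12
t = np.linspace(0, 36, n_times)                       # months post-transplant

latent = rng.standard_normal((n_patients, 1))
curves = 55 - 0.2 * t + latent * (0.3 * t)            # two latent behaviors
curves += 2.0 * rng.standard_normal((n_patients, n_times))

mean_curve = curves.mean(axis=0)
U, s, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)

pc_functions = Vt                   # rows: principal component functions
pc_scores = U * s                   # per-patient FPC scores
explained = s**2 / np.sum(s**2)
print(explained[:3])                # share of variation per component
```

    The per-patient scores on the leading component can then be clustered or rank-ordered to flag abnormal trajectories, mirroring the article's use of FPC scores.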

  6. Failure time analysis with unobserved heterogeneity: Earthquake duration time of Turkey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ata, Nihal, E-mail: nihalata@hacettepe.edu.tr; Kadilar, Gamze Özel, E-mail: gamzeozl@hacettepe.edu.tr

    Failure time models assume that all units are subject to the same risks embodied in the hazard functions. In this paper, unobserved sources of heterogeneity that are not captured by covariates are included in the failure time models. Destructive earthquakes in Turkey since 1900 are used to illustrate the models, and the inter-event time between two consecutive earthquakes is defined as the failure time. The paper demonstrates how seismicity and tectonics/physics parameters can potentially influence the spatio-temporal variability of earthquakes, and the approach presents several advantages compared to more traditional ones.
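
    As a concrete illustration of how unobserved heterogeneity enters such models, the block below shows the widely used gamma-frailty formulation; it is a generic example, not necessarily the exact specification adopted in the paper:

```latex
% Conditional hazard with a gamma-distributed frailty Z (mean 1, variance \theta):
\lambda(t \mid Z) = Z \, \lambda_0(t) \, e^{x'\beta}.
% Integrating Z out gives the marginal (population) survival function
S(t) = \left( 1 + \theta \, H_0(t) \, e^{x'\beta} \right)^{-1/\theta},
% where H_0(t) = \int_0^t \lambda_0(u)\,du is the cumulative baseline hazard.
```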

  7. Content-specific evidence accumulation in inferior temporal cortex during perceptual decision-making

    PubMed Central

    Tremel, Joshua J.; Wheeler, Mark E.

    2015-01-01

    During a perceptual decision, neuronal activity can change as a function of time-integrated evidence. Such neurons may serve as decision variables, signaling a choice when activity reaches a boundary. Because the signals occur on a millisecond timescale, translating to human decision-making using functional neuroimaging has been challenging. Previous neuroimaging work in humans has identified patterns of neural activity consistent with an accumulation account. However, the degree to which the accumulating neuroimaging signals reflect specific sources of perceptual evidence is unknown. Using an extended face/house discrimination task in conjunction with cognitive modeling, we tested whether accumulation signals, as measured using functional magnetic resonance imaging (fMRI), are stimulus-specific. Accumulation signals were defined as a change in the slope of the rising edge of activation corresponding with response time (RT), with higher slopes associated with faster RTs. Consistent with an accumulation account, fMRI activity in face- and house-selective regions in the inferior temporal cortex increased at a rate proportional to decision time in favor of the preferred stimulus. This finding indicates that stimulus-specific regions perform an evidence integrative function during goal-directed behavior and that different sources of evidence accumulate separately. We also assessed the decision-related function of other regions throughout the brain and found that several regions were consistent with classifications from prior work, suggesting a degree of domain generality in decision processing. Taken together, these results provide support for an integration-to-boundary decision mechanism and highlight possible roles of both domain-specific and domain-general regions in decision evidence evaluation. PMID:25562821

  8. Short-term gas dispersion in idealised urban canopy in street parallel with flow direction

    NASA Astrophysics Data System (ADS)

    Chaloupecká, Hana; Jaňour, Zbyněk; Nosek, Štěpán

    2016-03-01

    Chemical attacks (e.g. Syria 2014-15 chlorine, 2013 sarin, or Iraq 2006-7 chlorine) as well as chemical plant disasters (e.g. Spain 2015 nitric oxide, ferric chloride; Texas 2014 methyl mercaptan) threaten mankind. In these crisis situations, gas clouds are released. Dispersion of gas clouds is the issue of interest investigated in this paper. The paper describes wind tunnel experiments of dispersion from a ground-level point gas source. The source is situated in a model of an idealised urban canopy. Short-duration releases of the passive contaminant ethane are created by an electromagnetic valve. The gas cloud concentrations are measured at individual places at the height of the human breathing zone, within a street parallel with the flow direction, by a fast-response ionisation detector. The simulations of the gas release for each measurement position are repeated many times under the same experimental set-up to obtain representative datasets. These datasets are analysed to compute puff characteristics (arrival time, leaving time and duration). The results indicate that the mean value of the dimensionless arrival time can be described as a growing linear function of the dimensionless coordinate in the street parallel with the flow direction where the gas source is situated. The same might be stated about the dimensionless leaving time as well as the dimensionless duration; however, these fits are worse. Utilising a linear function, we might also estimate statistical characteristics other than the dataset means (medians, trimeans). The datasets of the dimensionless arrival time, the dimensionless leaving time and the dimensionless duration can be fitted by the generalized extreme value (GEV) distribution in all sampling positions except one.
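
    Fitting a GEV to a set of puff characteristics, as done per sampling position above, can be sketched with scipy; the sample below is synthetic rather than wind tunnel data:

```python
# Hedged sketch: GEV fit to a sample of dimensionless arrival times.
# scipy's genextreme is the GEV family (its c is the negated shape).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
arrival_times = stats.genextreme.rvs(c=-0.1, loc=10.0, scale=2.0,
                                     size=200, random_state=rng)

shape, loc, scale = stats.genextreme.fit(arrival_times)
print(shape, loc, scale)

# Goodness of fit can be screened with a KS test against the fitted GEV
print(stats.kstest(arrival_times, "genextreme", args=(shape, loc, scale)))
```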

  9. Evaluating the impact of improvements to the FLAMBE smoke source model on forecasts of aerosol distribution from NAAPS

    NASA Astrophysics Data System (ADS)

    Hyer, E. J.; Reid, J. S.

    2006-12-01

    As more forecast models aim to include aerosol and chemical species, there is a need for source functions for biomass burning emissions that are accurate, robust, and operable in real-time. NAAPS is a global aerosol forecast model running every six hours and forecasting distributions of biomass burning, industrial sulfate, dust, and sea salt aerosols. This model is run operationally by the U.S. Navy as an aid to planning. The smoke emissions used as input to the model are calculated from the data collected by the FLAMBE system, driven by near-real-time active fire data from GOES WF_ABBA and MODIS Rapid Response. The smoke source function uses land cover data to predict properties of detected fires based on literature data from experimental burns. This scheme is very sensitive to the choice of land cover data sets. In areas of rapid land cover change, the use of static land cover data can produce artifactual changes in emissions unrelated to real changes in fire patterns. In South America, this change may be as large as 40% over five years. We demonstrate the impact of a modified land cover scheme on FLAMBE emissions and NAAPS forecasts, including a fire size algorithm developed using MODIS burned area data. We also describe the effects of corrections to emissions estimates for cloud and satellite coverage. We outline areas where existing data sources are incomplete and improvements are required to achieve accurate modeling of biomass burning emissions in real time.

  10. LUMINOSITY FUNCTIONS OF LMXBs IN CENTAURUS A: GLOBULAR CLUSTERS VERSUS THE FIELD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voss, Rasmus; Gilfanov, Marat; Sivakoff, Gregory R.

    2009-08-10

    We study the X-ray luminosity function (XLF) of low-mass X-ray binaries (LMXBs) in the nearby early-type galaxy Centaurus A, concentrating primarily on two aspects of binary populations: the XLF behavior at the low-luminosity limit and the comparison between globular cluster and field sources. The 800 ksec exposure of the deep Chandra VLP program allows us to reach a limiting luminosity of ~8 x 10^35 erg s^-1, about ~2-3 times deeper than previous investigations. We confirm the presence of the low-luminosity break of the overall LMXB XLF at log(L_X) ~ 37.2-37.6, below which the luminosity distribution follows a dN/d(ln L) ~ const law. Separating globular cluster and field sources, we find a statistically significant difference between the two luminosity distributions with a relative underabundance of faint sources in the globular cluster population. This demonstrates that the samples are drawn from distinct parent populations and may disprove the hypothesis that the entire LMXB population in early-type galaxies is created dynamically in globular clusters. As a plausible explanation for this difference in the XLFs, we suggest an enhanced fraction of helium-accreting systems in globular clusters, which are created in collisions between red giants and neutron stars. Due to the four times higher ionization temperature of He, such systems are subject to accretion disk instabilities at ~20 times higher mass accretion rates and, therefore, are not observed as persistent sources at low luminosities.

  11. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action

    PubMed Central

    Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter

    2018-01-01

    Introduction Computer assisted technologies based on algorithmic software segmentation are an increasing topic of interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and the industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach for use in clinical practice. Material and methods In this retrospective, randomized, controlled trial, the accuracy and accordance of the open-source segmentation algorithm GrowCut was assessed through comparison to the manually generated ground truth of the same anatomy, using 10 CT lower jaw datasets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice score and the Hausdorff distance. Results Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice score values of over 85% and Hausdorff distances below 33.5 voxels were achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Differences between the assessment parameters were not statistically significant, and correlation coefficients were close to one (r > 0.94) for all comparisons made between the two groups. Discussion Fully functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed by the presented interactive open-source approach. In the cranio-maxillofacial complex the method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Because of its open-source basis the method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches or with larger data amounts are areas of future work. PMID:29746490
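
    The two agreement metrics above are straightforward to compute for binary masks. A minimal sketch with random placeholder volumes, not the study's CT data:

```python
# Hedged sketch: Dice score and symmetric Hausdorff distance between
# two binary segmentation masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(3)
seg_a = rng.random((40, 40, 40)) > 0.99     # e.g., GrowCut segmentation
seg_b = rng.random((40, 40, 40)) > 0.99     # e.g., manual ground truth

# Dice score: 2|A ∩ B| / (|A| + |B|)
dice = 2 * np.logical_and(seg_a, seg_b).sum() / (seg_a.sum() + seg_b.sum())

# Symmetric Hausdorff distance (in voxels) between foreground point sets
pts_a = np.argwhere(seg_a)
pts_b = np.argwhere(seg_b)
hd = max(directed_hausdorff(pts_a, pts_b)[0],
         directed_hausdorff(pts_b, pts_a)[0])
print(dice, hd)
```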

  12. Beam shaping of light sources using circular photonic crystal funnel

    NASA Astrophysics Data System (ADS)

    Kumar, Mrityunjay; Kumar, Mithun; Dinesh Kumar, V.

    2012-10-01

    A novel two-dimensional circular photonic crystal (CPC) structure with a sectorial opening for shaping the beam of light sources was designed and investigated. When combined with light sources, the structure acts like an antenna emitting a directional beam which could be advantageously used in several nanophotonic applications. Using the two-dimensional finite-difference time-domain (2D FDTD) method, we examined the effects of geometrical parameters of the structure on the directional and transmission properties of emitted radiation. Further, we examined the transmitting and receiving properties of a pair of identical structures as a function of distance between them.

  13. Analytical solutions for tomato peeling with combined heat flux and convective boundary conditions

    NASA Astrophysics Data System (ADS)

    Cuccurullo, G.; Giordano, L.; Metallo, A.

    2017-11-01

    Peeling of tomatoes by radiative heating is a valid alternative to steam or lye, which are expensive and polluting methods. Suitable energy densities are required in order to realize short-time operations, thus involving only a thin layer under the tomato surface. This paper aims to predict the temperature field in rotating tomatoes exposed to the source irradiation. Therefore, a 1D unsteady analytical model is presented, which involves a semi-infinite slab subjected to time-dependent heating while convective heat transfer takes place on the exposed surface. In order to account for the tomato rotation, the heat source is described as the positive half-wave of a sinusoidal function. The problem being linear, the solution is derived following the Laplace transform method. In addition, an easy-to-handle solution is presented, which assumes a differentiable function for approximating the source while neglecting convective cooling, the latter contribution turning out to be negligible for the context at hand. A satisfying agreement between the two analytical solutions is found; therefore, an easy procedure for a proper design of the dry heating system can be set up, avoiding the use of numerical simulations.

  14. Dynamic Initiator Imaging at the Advanced Photon Source: Understanding the early stages of initiator function and subsequent explosive interactions

    NASA Astrophysics Data System (ADS)

    Sanchez, Nate; Neal, Will; Jensen, Brian; Gibson, John; Martinez, Mike; Jaramillo, Dennis; Iverson, Adam; Carlson, Carl

    2017-06-01

    Recent advances in diagnostics coupled with synchrotron sources have allowed the in-situ investigation of exploding foil initiators (EFIs) during flight. We present the first images of EFIs during flight utilizing x-ray phase contrast imaging at the Advanced Photon Source (APS) located at Argonne National Laboratory. These images have provided the DOE/DoD community with unprecedented views, resolving micron-scale details of the flyer formation, plasma instabilities and in-flight characteristics, along with the subsequent interaction with high explosives on the nanosecond time scale. Phase contrast imaging has enabled dynamic measurements on the length and time scales necessary to resolve initiator function and provide insight into key design parameters. These efforts have also probed the fundamental physics at ``burst'' to better understand what burst means in a physical sense, rather than the traditional understanding of burst as a peak in voltage and increase in resistance. This fundamental understanding has led to increased knowledge of the mechanisms of burst and has allowed us to improve our predictive capability through magnetohydrodynamic modeling. Results will be presented from several EFI designs along with a look to the future for upcoming work.

  15. The Contribution of Coseismic Displacements due to Splay Faults Into the Local Wavefield of the 1964 Alaska Tsunami

    NASA Astrophysics Data System (ADS)

    Suleimani, E.; Ruppert, N.; Fisher, M.; West, D.; Hansen, R.

    2008-12-01

    The Alaska Earthquake Information Center conducts tsunami inundation mapping for coastal communities in Alaska. For many locations in the Gulf of Alaska, the 1964 tsunami generated by the Mw9.2 Great Alaska earthquake may be the worst-case tsunami scenario. We use the 1964 tsunami observations to verify our numerical model of tsunami propagation and runup; therefore it is essential to use an adequate source function of the 1964 earthquake to reduce the level of uncertainty in the modeling results. It was shown that the 1964 co-seismic slip occurred both on the megathrust and on crustal splay faults (Plafker, 1969). Plafker (2006) suggested that crustal faults were a major contributor to the vertical displacements that generated local tsunami waves. Using eyewitness arrival times of the highest observed waves, he suggested that the initial tsunami wave was higher and closer to the shore than if it had been generated by slip on the megathrust. We conduct a numerical study of two different source functions of the 1964 tsunami to test whether the crustal splay faults had significant effects on local tsunami runup heights and arrival times. The first source function was developed by Johnson et al. (1996) through joint inversion of the far-field tsunami waveforms and geodetic data. The authors did not include crustal faults in the inversion, because the contribution of these faults to the far-field tsunami was negligible. The second is the new coseismic displacement model developed by Suito and Freymueller (2008, submitted). This model extends the Montague Island fault farther along the Kenai Peninsula coast and thus reduces slip on the megathrust in that region. We also use an improved geometry of the Patton Bay fault based on deep crustal seismic reflection and earthquake data. We propagate tsunami waves generated by both source models across the Pacific Ocean and record wave amplitudes at the locations of the tide gages that recorded the 1964 tsunami. As expected, the two sources produce very similar waveforms in the far field that are also in good agreement with the tide gage records. In order to study the near-field tsunami effects, we will construct embedded telescoping bathymetry grids around the tsunami generation area to calculate tsunami arrival times and sea surface heights for both source models of the 1964 earthquake, and use available observation data to verify the model results.

  16. A time reversal algorithm in acoustic media with Dirac measure approximations

    NASA Astrophysics Data System (ADS)

    Bretin, Élie; Lucas, Carine; Privat, Yannick

    2018-04-01

    This article is devoted to the study of a photoacoustic tomography model, where one is led to consider the solution of the acoustic wave equation with a source term written as a separable function of time and space, whose temporal component is in some sense close to the derivative of the Dirac distribution at t  =  0. This models a continuous wave laser illumination performed during a short interval of time. We introduce an algorithm for reconstructing the space component of the source term from the measurement of the solution recorded by sensors during a time T all along the boundary of a connected bounded domain. It is based on the introduction of an auxiliary equivalent Cauchy problem, which allows an explicit reconstruction formula to be derived, followed by a deconvolution procedure. Numerical simulations illustrate our approach. Finally, this algorithm is also extended to elasticity wave systems.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    CorAL is a software library designed to aid in the analysis of femtoscopic data. Femtoscopic data are a class of measured quantities used in heavy-ion collisions to characterize particle-emitting source sizes. The most common type of such data is two-particle correlations induced by the Hanbury Brown/Twiss (HBT) effect, but it can also include correlations induced by final-state interactions between pairs of emitted particles in a heavy-ion collision. Because heavy-ion collisions are complex many-particle systems, modeling them requires hydrodynamical models or hybrid techniques. Using the CRAB module, CorAL can turn the output from these models into something that can be directly compared to experimental data. CorAL can also take the raw experimentally measured correlation functions and image them by inverting the Koonin-Pratt equation to extract the space-time emission profile of the particle-emitting source. This source function can be further analyzed or directly compared to theoretical calculations.

  18. Development of surrogate models for the prediction of the flow around an aircraft propeller

    NASA Astrophysics Data System (ADS)

    Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros

    2018-05-01

    In the present work, the derivation of two surrogate models (SMs) for modelling the flow around a propeller for small aircraft is presented. Both methodologies use derived functions based on computations with the detailed propeller geometry. The computations were performed using the k-ω shear stress transport model for turbulence. In the SMs, the propeller was modelled in a computational domain of disk-like geometry, where source terms were introduced in the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence relative to the detailed model, while also providing results closer to the available operational data. The regression-based model was the most accurate and required less computational time for convergence.

  19. Absolute dose calibration of an X-ray system and dead time investigations of photon-counting techniques

    NASA Astrophysics Data System (ADS)

    Carpentieri, C.; Schwarz, C.; Ludwig, J.; Ashfaq, A.; Fiederle, M.

    2002-07-01

    High precision in the dose calibration of X-ray sources is required when counting and integrating methods are compared. The dose calibration for a dental X-ray tube was carried out with special dose calibration equipment (a dosimeter) as a function of exposure time and rate. Results were compared with a benchmark spectrum and agree within ±1.5%. Dead time investigations with the Medipix1 photon-counting chip (PCC) have been performed by rate variations. Two different types of dead time, paralysable and non-paralysable, will be discussed. The dead time depends on the settings of the front-end electronics and is a function of signal height, which might lead to systematic errors in such systems. Dead time losses in excess of 30% have been found for the PCC at 200 kHz absorbed photons per pixel.
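
    The two dead-time models mentioned above have standard closed forms. A short sketch, with an illustrative dead time rather than the Medipix1's actual value (so the loss figures differ from the >30% quoted above):

```python
# Hedged sketch of the two standard dead-time models. With true rate n,
# measured rate m, and dead time tau:
#   non-paralysable: m = n / (1 + n*tau)
#   paralysable:     m = n * exp(-n*tau)
import numpy as np
from scipy.optimize import brentq

tau = 1e-6            # illustrative 1 us dead time (placeholder)
n_true = 2e5          # true rate, 200 kHz per pixel (cf. the text)

m_nonpar = n_true / (1 + n_true * tau)
m_par = n_true * np.exp(-n_true * tau)
print(1 - m_nonpar / n_true, 1 - m_par / n_true)   # fractional losses

# Recover the true rate from a paralysable measurement; bracketing below
# the peak at n = 1/tau selects the lower-rate branch
n_rec = brentq(lambda n: n * np.exp(-n * tau) - m_par, 1e-3, 1.0 / tau)
print(n_rec)   # ~2e5
```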

  20. A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea

    PubMed Central

    Lee, Norman; Elias, Damian O.; Mason, Andrew C.

    2009-01-01

    Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794

  1. Focusing and steering through absorbing and aberrating layers: application to ultrasonic propagation through the skull.

    PubMed

    Tanter, M; Thomas, J L; Fink, M

    1998-05-01

    The time-reversal process is applied to focus pulsed ultrasonic waves through the human skull bone. The aim here is to treat brain tumors, which are difficult to reach with classical surgical means. Such a surgical application requires precise control of the size and location of the therapeutic focal beam. The severe ultrasonic attenuation in the skull reduces the efficiency of the time-reversal process. Nevertheless, an improvement of the time-reversal process in absorbing media has been investigated and applied to focusing through the skull [J.-L. Thomas and M. Fink, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 43, 1122-1129 (1996)]. Here an extension of this technique is presented in order to focus on a set of points surrounding an initial artificial source implanted in the tissue volume to be treated. From the knowledge of the Green's function matched to this initial source location, a new Green's function matched to various points of interest is deduced in order to treat the whole volume. In a homogeneous medium, conventional steering consists of tilting the wavefront focused on the acoustical source. In a heterogeneous medium, this process is only valid for small angles or when aberrations are located in a layer close to the array. It is shown here how to extend this method to aberrating and absorbing layers, like the skull bone, located at any distance from the array of transducers.

  2. SEISRISK II; a computer program for seismic hazard estimation

    USGS Publications Warehouse

    Bender, Bernice; Perkins, D.M.

    1982-01-01

    The computer program SEISRISK II calculates probabilistic ground motion values for use in seismic hazard mapping. SEISRISK II employs a model that allows earthquakes to occur as points within source zones and as finite-length ruptures along faults. It assumes that earthquake occurrences have a Poisson distribution, that occurrence rates remain constant during the time period considered, that ground motion resulting from an earthquake is a known function of magnitude and distance, that seismically homogeneous source zones are defined, that fault locations are known, that fault rupture lengths depend on magnitude, and that earthquake rates as a function of magnitude are specified for each source. SEISRISK II calculates for each site on a grid of sites the level of ground motion that has a specified probability of being exceeded during a given time period. The program was designed to process a large (essentially unlimited) number of sites and sources efficiently and has been used to produce regional and national maps of seismic hazard. It is a substantial revision of an earlier program, SEISRISK I, which has never been documented. SEISRISK II runs considerably faster and gives more accurate results than the earlier program and in addition includes rupture length and acceleration variability which were not contained in the original version. We describe the model and how it is implemented in the computer program and provide a flowchart and listing of the code.
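
    The Poisson occurrence assumption above leads directly to the exceedance probability that such programs map. As a worked illustration of the arithmetic (not taken from the report):

```latex
% If \nu(a) is the annual rate of events producing ground motion that
% exceeds level a at a site, the probability of at least one exceedance
% in T years under the Poisson assumption is
P = 1 - e^{-\nu(a)\,T}.
% Example: a 10% chance of exceedance in 50 years corresponds to
% \nu(a) = -\ln(0.9)/50 \approx 0.0021 per year (a ~475-year return period).
```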

  3. Orbital Parameters for Two "IGR" Sources

    NASA Astrophysics Data System (ADS)

    Thompson, Thomas; Tomsick, J.; Rothschild, R.; in't Zand, J.; Walter, R.

    2006-09-01

    With recent and archival Rossi X-ray Timing Explorer observations of the heavily absorbed X-ray pulsars IGR J17252-3616 (hereafter J17252) and IGR J16393-4643 (hereafter J16393), we carried out a pulse timing analysis to determine the orbital parameters of the two binary systems. We find that both INTEGRAL sources are High Mass X-ray Binary (HMXB) systems. The orbital solution to J17252 has a projected semi-major axis of 101 ± 3 lt-s and a period of 9.7403 ± 0.0004 days, implying a mass function of 11.7 ± 1.2 M_sun. The orbital solution to J16393, on the other hand, is not unambiguously known due to weaker and less-consistent pulsations. The most likely orbital solution has a projected semi-major axis of 43 ± 2 lt-s and an orbital period of 3.6875 ± 0.0006 days, yielding a mass function of 6.5 ± 1.1 M_sun. The orbits of both sources are consistent with circular, with e < 0.2-0.25 at the 90% confidence level. The orbital and pulse periods of each source place the systems in the region of the Corbet diagram populated by supergiant wind accretors. J17252 is an eclipsing binary system, and provides an exciting opportunity to obtain a neutron star mass measurement.
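
    The quoted mass function follows from the standard binary mass-function formula, f(M) = 4π²(a_x sin i)³ / (G P²). A quick check using only physical constants and the orbital elements quoted above:

```python
# Sketch: reproducing the J17252 mass function from its orbital elements.
import numpy as np

c = 2.998e8                    # speed of light, m/s
G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30               # solar mass, kg

a_sini = 101 * c               # projected semi-major axis: 101 lt-s -> m
P = 9.7403 * 86400             # orbital period: days -> s

f_M = 4 * np.pi**2 * a_sini**3 / (G * P**2)
print(f_M / M_sun)             # ~11.7, matching the value quoted above
```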

  4. Assessing climate change impact on complementarity between solar and hydro power in areas affected by glacier shrinkage

    NASA Astrophysics Data System (ADS)

    Diah Puspitarini, Handriyanti; François, Baptiste; Zoccatelli, Davide; Brown, Casey; Creutin, Jean-Dominique; Zaramella, Mattia; Borga, Marco

    2017-04-01

    Variable Renewable Energy (VRE) sources such as wind, solar and runoff are variable in time and space, following their driving weather variables. In this work we aim to analyse optimal mixes of energy sources, i.e. mixes of sources which minimize the deviation between energy load and generation, for a region in the Upper Adige river basin (Eastern Italian Alps) affected by glacier shrinkage. The study focuses on hydropower (run-of-the-river, RoR) and solar energy, and analyses the current situation as well as different climate change scenarios. Changes in glacier extent in response to climate warming and/or altered precipitation regimes have the potential to substantially alter the magnitude and timing, as well as the spatial variation, of watershed-scale hydrologic fluxes. This may change the complementarity with solar power as well. In this study, we analyse the climate change impact on the complementarity between RoR and solar using the Decision Scaling approach (Brown et al. 2012). With this approach, the system vulnerability is separated from the climatic hazard that can come from any set of past or future climate conditions. It departs from conventional top-down impact studies because it explores the sensitivity of the system response to a plausible range of climate variations rather than its sensitivity to the time-varying outcome of individual GCM projections. It mainly relies on the development of Climate Response Functions that bring together i) the sensitivity of some system success and/or failure indicators to key external drivers (i.e. mean features of regional climate) and ii) the future values of these drivers as simulated from climate simulation chains. The main VRE sources used in the study region are solar and hydro power (with an important fraction of run-of-the-river hydropower). The considered indicator of success is the 'energy penetration' coefficient, defined as the long-run percentage of energy demand naturally met by the VRE on an hourly basis. Climate response functions, developed in a 2D climate change space (change in mean temperature and precipitation), are built from multiple hydro-climatic scenarios obtained by perturbing the observed weather time series with the change factor method, and considering given glacier storage states. Climate experiments are further used for assessing these change factors from different emission scenarios, climate models and future prediction lead times. Their positioning on the Climate Response Function allows discussion of the risks and opportunities pertaining to changes in VRE penetration in the future. Results show i) the large impact of glacier shrinkage on the complementarity between solar and RoR energy sources and ii) that the impact decreases with time, with the main alterations to be expected in the coming 30 years. Brown, C., Ghile, Y., Laverty, M., Li, K. (2012). Decision scaling: Linking bottom-up vulnerability analysis with climate projections in the water sector. Water Resour. Res. 48, doi:10.1029/2011WR011212.

  5. Comparison of the space-time extent of the emission source in d +Au and Au + Au collisions at √{sNN} = 200 GeV

    NASA Astrophysics Data System (ADS)

    Ajitanand, N. N.; Phenix Collaboration

    2014-11-01

    Two-pion interferometry measurements in d +Au and Au + Au collisions at √{sNN} = 200 GeV are used to extract and compare the Gaussian source radii Rout, Rside and Rlong, which characterize the space-time extent of the emission sources. The comparisons, which are performed as a function of collision centrality and the mean transverse momentum for pion pairs, indicate strikingly similar patterns for the d +Au and Au + Au systems. They also indicate a linear dependence of Rside on the initial transverse geometric size R̄, as well as a smaller freeze-out size for the d +Au system. These patterns point to the important role of final-state re-scattering effects in the reaction dynamics of d +Au collisions.

  6. Neural source dynamics of brain responses to continuous stimuli: Speech processing from acoustics to comprehension.

    PubMed

    Brodbeck, Christian; Presacco, Alessandro; Simon, Jonathan Z

    2018-05-15

    Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli.
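
    The kernel-estimation step lends itself to a compact sketch. The paper uses the boosting algorithm with cross-validation; the ridge-regression estimator below is a common stand-in for the same linear convolution model, with synthetic data in place of MEG recordings:

```python
# Hedged sketch: estimating a response function (kernel) for the linear
# convolution model response[t] = sum_k kernel[k] * stimulus[t - k] + noise.
import numpy as np

rng = np.random.default_rng(4)
n, n_lags = 5000, 40
stim = rng.standard_normal(n)                 # e.g., an acoustic envelope

true_trf = np.exp(-np.arange(n_lags) / 8.0) * np.sin(np.arange(n_lags) / 3.0)
resp = np.convolve(stim, true_trf)[:n] + 0.5 * rng.standard_normal(n)

# Lagged design matrix: X[t, k] = stim[t - k]
X = np.stack([np.roll(stim, k) for k in range(n_lags)], axis=1)
X[:n_lags] = 0.0                              # zero out wrapped samples

lam = 10.0                                    # ridge regularization strength
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ resp)
print(np.corrcoef(trf_hat, true_trf)[0, 1])   # close to 1
```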

  7. Creating Impact Functions to Estimate the Domestic Effects of Global Climate Action

    EPA Science Inventory

    Quantifying and monetizing the impacts of climate change can be challenging due to the complexity of impacts, availability of data, variability across geographic and temporal time scales, sources of uncertainty, and computational constraints. Recent advancements in using consist...

  8. INVERSION OF SOURCE TIME FUNCTION USING BOREHOLE ARRAY SONIC WAVEFORMS. (R825225)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  9. Characterization of a tin-loaded liquid scintillator for gamma spectroscopy and neutron detection

    NASA Astrophysics Data System (ADS)

    Wen, Xianfei; Harvey, Taylor; Weinmann-Smith, Robert; Walker, James; Noh, Young; Farley, Richard; Enqvist, Andreas

    2018-07-01

    A tin-loaded liquid scintillator has been developed for gamma spectroscopy and neutron detection. The scintillator was characterized in regard to energy resolution, pulse shape discrimination, neutron light output function, and timing resolution. Loading tin into scintillators with low effective atomic number was demonstrated to provide photopeaks with acceptable energy resolution. The scintillator was shown to have reasonable neutron/gamma discrimination capability based on the charge comparison method. The effects of the total charge integration time and of the initial delay time for tail charge integration on the discrimination quality were studied. To obtain the neutron light output function, the time-of-flight technique was utilized with a 252Cf source. The light output function was validated with the MCNPX-PoliMi code by comparing the measured and simulated pulse height spectra. The timing resolution of the developed scintillator was also evaluated. The tin-loading was found to have negligible impact on the scintillation decay times. However, a relatively large degradation of timing resolution was observed due to the reduced light yield.
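
    The charge comparison method mentioned above reduces to two gated integrals per pulse. A minimal sketch follows (Python; the gate lengths and delay are hypothetical tuning parameters, standing in for the integration and delay times whose effect on discrimination quality the study examined).

        import numpy as np

        def psd_ratio(waveform, t0, total_gate, tail_delay):
            """Charge-comparison pulse shape discrimination.

            Integrates the digitized pulse over a total gate and over a
            delayed tail gate; neutron events deposit relatively more light
            in the slow scintillation component, so larger tail/total ratios
            indicate neutron-like pulses.
            """
            total = np.sum(waveform[t0:t0 + total_gate])
            tail = np.sum(waveform[t0 + tail_delay:t0 + total_gate])
            return tail / total

    Plotting this ratio against total charge then separates the neutron and gamma branches of the event population.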

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balian, R., E-mail: roger.balian@cea.fr; Vénéroni, M.

    Time-dependent expectation values and correlation functions for many-body quantum systems are evaluated by means of a unified variational principle. It optimizes a generating functional depending on sources associated with the observables of interest. It is built by imposing, through Lagrange multipliers, constraints that account for the initial state (at equilibrium or off equilibrium) and for the backward Heisenberg evolution of the observables. The trial objects are respectively akin to a density operator and to an operator involving the observables of interest and the sources. We work out here the case where trial spaces constitute Lie groups. This choice reduces the original degrees of freedom to those of the underlying Lie algebra, consisting of simple observables; the resulting objects are labeled by the indices of a basis of this algebra. Explicit results are obtained by expanding in powers of the sources. Zeroth and first orders provide thermodynamic quantities and expectation values in the form of mean-field approximations, with dynamical equations having a classical Lie-Poisson structure. At second order, the variational expression for two-time correlation functions separates, as does its exact counterpart, the approximate dynamics of the observables from the approximate correlations in the initial state. Two building blocks are involved: (i) a commutation matrix which stems from the structure constants of the Lie algebra; and (ii) the second-derivative matrix of a free-energy function. The diagonalization of both matrices, required for practical calculations, is worked out in a way analogous to the standard RPA. The ensuing structure of the variational formulae is the same as for a system of non-interacting bosons (or of harmonic oscillators) plus, at non-zero temperature, classical Gaussian variables. This property is explained by mapping the original Lie algebra onto a simpler Lie algebra. The results, valid for any trial Lie group, fulfill consistency properties and encompass several special cases: linear responses, static and time-dependent fluctuations, zero- and high-temperature limits, static and dynamic stability of small deviations.
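
    The expansion in powers of the sources has the familiar generating-functional structure; the schematic below (LaTeX, with illustrative notation not taken from the paper) indicates how the zeroth-, first- and second-order terms carry the thermodynamic quantities, expectation values and two-time correlation functions described above.

        \ln \mathcal{Z}\{\xi\} \;=\; \ln \mathcal{Z}\{0\}
          \;+\; \sum_{a} \xi_a \,\langle A_a(t_a) \rangle
          \;+\; \tfrac{1}{2} \sum_{a,b} \xi_a \xi_b \, C_{ab}(t_a, t_b)
          \;+\; O(\xi^3)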

  11. Handling times and saturating transmission functions in a snail-worm symbiosis.

    PubMed

    Hopkins, Skylar R; McGregor, Cari M; Belden, Lisa K; Wojdak, Jeremy M

    2018-06-16

    All dynamic species interaction models contain an assumption that describes how contact rates scale with population density. Choosing an appropriate contact-density function is important, because different functions have different implications for population dynamics and stability. However, this choice can be challenging, because there are many possible functions, and most are phenomenological and thus difficult to relate to underlying ecological processes. Using one such phenomenological function, we described a nonlinear relationship between field transmission rates and host density in a common snail-oligochaete symbiosis. We then used a well-known contact function from predator-prey models, the Holling Type II functional response, to describe and predict host snail contact rates in the laboratory. The Holling Type II functional response accurately described both the nonlinear contact-density relationship and the average contact duration that we observed. Therefore, we suggest that contact rates saturate with host density in this system because each snail contact requires a non-instantaneous handling time, and additional possible contacts do not occur during that handling time. Handling times and nonlinear contact rates might also explain the nonlinear relationship between symbiont transmission and snail density that we observed in the field, which could be confirmed by future work that controls for other potential sources of seasonal variation in transmission rates. Because most animal contacts are not instantaneous, the Holling Type II functional response might be broadly relevant to diverse host-symbiont systems.
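
    The Holling Type II functional response invoked here has a simple closed form, sketched below (Python; parameter names are illustrative). The rate rises almost linearly at low density and saturates at 1/handling_time, since time spent handling one contact cannot be spent making another.

        def holling_type2(density, attack_rate, handling_time):
            """Holling Type II contact rate: a*N / (1 + a*h*N).

            Saturates at 1/handling_time as density grows, which is the
            proposed explanation for the nonlinear transmission-density
            relationship observed in the snail-oligochaete system.
            """
            return attack_rate * density / (1.0 + attack_rate * handling_time * density)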

  12. Microbial biodiversity of the atmosphere

    NASA Astrophysics Data System (ADS)

    Klein, Ann Maureen

    Microorganisms are critical to the functioning of terrestrial and aquatic ecosystems and may also play a role in the functioning of the atmosphere. However, little is known about the diversity and function of microorganisms in the atmosphere. To investigate the forces driving the assembly of bacterial microbial communities in the atmosphere, I measured temporal variation in bacterial diversity and composition over diurnal and inter-day time scales. Results suggest that bacterial communities in the atmosphere markedly vary over diurnal time scales and are likely structured by inputs from both local terrestrial and long-distance sources. To assess the potential functions of bacteria and fungi in the atmosphere, I characterized total and potentially active communities using both RNA- and DNA-based data. Results suggest there are metabolically active microorganisms in the atmosphere that may affect atmospheric functions including precipitation development and carbon cycling. This dissertation includes previously published and unpublished co-authored material.

  13. Development of a noncompact source theory with applications to helicopter rotors

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Brown, T. J.

    1976-01-01

    A new formulation for determining the acoustic field of moving bodies, based on the acoustic analogy, is derived. The acoustic pressure is given as the sum of two integrals, one of which has a derivative with respect to time. The integrands are functions of the normal velocity and surface pressure of the body. A computer program based on this formulation was used to calculate acoustic pressure signatures for several helicopter rotors from experimental surface pressure data. Results are compared with those from compact source calculations. It is shown that noncompactness of steady sources on the rotor can account for the high harmonics of the pressure signature. Thickness noise is shown to be a significant source of sound, especially for blunt airfoils in regions where noncompact source theory should be applied.

  14. PM2.5-induced changes in cardiac function of hypertensive rats depend on wind direction and specific sources in Steubenville, Ohio.

    PubMed

    Kamal, Ali S; Rohr, Annette C; Mukherjee, Bhramar; Morishita, Masako; Keeler, Gerald J; Harkema, Jack R; Wagner, James G

    2011-06-01

    Increases in particulate matter smaller than 2.5 µm (PM2.5) in ambient air are linked to acute cardiovascular morbidity and mortality. The specific components and potential emission sources of PM2.5 responsible for adverse effects on cardiovascular function are unclear. Spontaneously hypertensive rats were implanted with radiotelemeters to record ECG responses during inhalation exposure to concentrated ambient particles (CAPs) for 13 consecutive days in Steubenville, OH. Changes in heart rate (HR) and its variability (HRV) were compared to PM2.5 trace elements in 30-min time frames to capture acute physiological responses with real-time fluctuations in PM2.5 composition. Using positive matrix factorization, six major source factors were identified: (i) coal/secondary, (ii) mobile sources, (iii) metal coating/processing, (iv) iron/steel manufacturing, (v) lead and (vi) incineration. Exposure-related changes in HR and HRV were dependent on winds predominantly from either the northeast (NE) or southwest (SW). During SW winds, the metal processing factor was associated with increased HR, whereas the incineration, lead and iron/steel factors with NE winds were associated with decreased HR. Decreased SDNN was dominated during NE winds by the incinerator factor, and with SW winds by the metal factor. Metals and mobile source factors also had minor impacts on decreased SDNN with NE winds. Individual elemental components loaded onto these factors generally showed significant associations, although there were some discrepancies. Acute cardiovascular changes in response to ambient PM2.5 exposure can be attributed to specific PM constituents and sources linked with incineration, metal processing, and iron/steel production.
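
    Positive matrix factorization decomposes the sample-by-element concentration matrix into nonnegative factor contributions and chemical profiles. Dedicated PMF software with uncertainty weighting is the standard tool; the sketch below (Python, with random placeholder data) uses scikit-learn's unweighted non-negative matrix factorization as a simplified analogue.

        import numpy as np
        from sklearn.decomposition import NMF

        # Placeholder data: rows = 30-min samples, columns = trace elements.
        rng = np.random.default_rng(1)
        X = np.abs(rng.standard_normal((500, 20)))

        # Six factors, matching the number of source factors in the study.
        model = NMF(n_components=6, init="nndsvda", max_iter=500)
        contributions = model.fit_transform(X)   # sample-by-factor time series
        profiles = model.components_             # factor-by-element profiles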

  15. Variation with interplanetary sector of the total magnetic field measured at the OGO 2, 4, and 6 satellites

    NASA Technical Reports Server (NTRS)

    Langel, R. A.

    1973-01-01

    Variations in the scalar magnetic field (delta B) from the polar orbiting OGO 2, 4, and 6 spacecraft are examined as a function of altitude for times when the interplanetary magnetic field is toward the sun and for times when the interplanetary magnetic field is away from the sun. This morphology is basically the same as that found when all data, irrespective of interplanetary magnetic sector, are averaged together. Differences in delta B occur, both between sectors and between seasons, which are similar in nature to variations in the surface delta Z found by Langel (1973c). The altitude variation of delta B at sunlit local times, together with delta Z at the earth's surface, demonstrates that the delta Z and delta B which vary with sector have an ionospheric source. Langel (1973b) showed that the positive delta B region in the dark portion of the hemisphere is due to at least two sources, the westward electrojet and an unidentified non-ionospheric source(s). Comparison of magnetic variations between season/sector at the surface and at the satellite, in the dark portion of the hemisphere, indicates that these variations are caused by variations in the latitudinally narrow electrojet currents and not by variations in the non-ionospheric source of delta B.

  16. Decoding of Ankle Flexion and Extension from Cortical Current Sources Estimated from Non-invasive Brain Activity Recording Methods.

    PubMed

    Mejia Tobar, Alejandra; Hyoudou, Rikiya; Kita, Kahori; Nakamura, Tatsuhiro; Kambara, Hiroyuki; Ogata, Yousuke; Hanakawa, Takashi; Koike, Yasuharu; Yoshimura, Natsue

    2017-01-01

    The classification of ankle movements from non-invasive brain recordings can be applied to a brain-computer interface (BCI) to control exoskeletons, prostheses, and functional electrical stimulators for the benefit of patients with walking impairments. In this research, ankle flexion and extension tasks at two force levels in both legs were classified from cortical current sources estimated by a hierarchical variational Bayesian method, using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) recordings. The hierarchical prior for the current source estimation from EEG was obtained from activated brain areas and their intensities from an fMRI group (second-level) analysis. The fMRI group analysis was performed on regions of interest defined over the primary motor cortex, the supplementary motor area, and the somatosensory area, which are well known to contribute to movement control. A sparse logistic regression method was applied for a nine-class classification (eight active tasks and a resting control task), obtaining a mean accuracy of 65.64% for time series of current sources, estimated from the EEG and fMRI signals using a variational Bayesian method, and a mean accuracy of 22.19% for the classification of the pre-processed EEG sensor signals, with a chance level of 11.11%. The higher classification accuracy of current sources, when compared to EEG classification accuracy, was attributed to the high number of sources and the different signal patterns obtained in the same vertex for different motor tasks. Since the inverse filter estimation for current sources can be done offline, the present method is applicable to real-time BCIs. Finally, due to the highly enhanced spatial distribution of current sources over the brain cortex, this method has the potential to identify activation patterns to design BCIs for the control of an affected limb in patients with stroke, or BCIs from motor imagery in patients with spinal cord injury.
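
    As a schematic of the nine-class decoding step, the sketch below (Python, with random placeholder features) uses scikit-learn's L1-penalized logistic regression as a stand-in for the sparse logistic regression method applied in the study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Placeholder features: trials x (current sources * time points);
        # labels: 9 classes (8 active tasks + rest), chance level 1/9 = 11.11%.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((180, 300))
        y = rng.integers(0, 9, size=180)

        # The L1 penalty drives most source weights to zero (sparsity).
        clf = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000)
        print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())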

  17. Cell source determines the immunological impact of biomimetic nanoparticles.

    PubMed

    Evangelopoulos, Michael; Parodi, Alessandro; Martinez, Jonathan O; Yazdi, Iman K; Cevenini, Armando; van de Ven, Anne L; Quattrocchi, Nicoletta; Boada, Christian; Taghipour, Nima; Corbo, Claudia; Brown, Brandon S; Scaria, Shilpa; Liu, Xuewu; Ferrari, Mauro; Tasciotti, Ennio

    2016-03-01

    Recently, engineering the surface of nanotherapeutics with biologics to provide them with superior biocompatibility and targeting towards pathological tissues has gained significant popularity. Although the functionalization of drug delivery vectors with cellular materials has been shown to provide synthetic particles with unique biological properties, these approaches may have undesirable immunological repercussions upon systemic administration. Herein, we comparatively analyzed unmodified multistage nanovectors and particles functionalized with murine and human leukocyte cellular membrane, dubbed Leukolike Vectors (LLV), and the immunological effects that may arise in vitro and in vivo. Previously, LLV demonstrated an avoidance of opsonization and phagocytosis, in addition to superior targeting of inflammation and prolonged circulation. In this work, we performed a comprehensive evaluation of the importance of the source of cellular membrane in increasing their systemic tolerance and minimizing an inflammatory response. Time-lapse microscopy revealed LLV developed using a cellular coating derived from a murine (i.e., syngeneic) source resulted in an active avoidance of uptake by macrophage cells. Additionally, LLV composed of a murine membrane were found to have decreased uptake in the liver with no significant effect on hepatic function. As biomimicry continues to develop, this work demonstrates the necessity to consider the source of biological material in the development of future drug delivery carriers. Copyright © 2015. Published by Elsevier Ltd.

  18. Swift-BAT: Transient Source Monitoring

    NASA Astrophysics Data System (ADS)

    Barbier, L. M.; Barthelmy, S.; Cummings, J.; Gehrels, N.; Krimm, H.; Markwardt, C.; Mushotzky, R.; Parsons, A.; Sakamoto, T.; Tueller, J.; Fenimore, E.; Palmer, D.; Skinner, G.; Swift-BAT Team

    2005-12-01

    The Burst Alert Telescope (BAT) on the Swift satellite is a large field of view instrument that continually monitors the sky to provide the gamma-ray burst trigger for Swift. An average of more than 70% of the sky is observed on a daily basis. The survey mode data is processed on two sets of time scales: from one minute to one day as part of the transient monitor program, and from one spacecraft pointing (~20 minutes) to the full mission duration for the hard X-ray survey program. In the transient monitor program, sky images are processed to detect astrophysical sources in six energy bands covering 15-350 keV. The detected flux or upper limit in each energy band is calculated for >300 objects on time scales up to one day. In addition, the monitor is sensitive to an outburst from a new or unknown source. Sensitivity as a function of time scale for catalog and unknown sources will be presented. The daily exposure for a typical source is ~1500-3000 seconds, with a 1-sigma sensitivity of ~4 mCrab. 90% of the sources are sampled at least every 16 days, but many sources are sampled daily. The BAT team will soon make the results of the transient monitor public to the astrophysical community through the Swift mission web page. It is expected that the Swift-BAT transient monitor will become an important resource for the high energy astrophysics community.

  19. Retrieving robust noise-based seismic velocity changes from sparse data sets: synthetic tests and application to Klyuchevskoy volcanic group (Kamchatka)

    NASA Astrophysics Data System (ADS)

    Gómez-García, C.; Brenguier, F.; Boué, P.; Shapiro, N. M.; Droznin, D. V.; Droznina, S. Ya; Senyukov, S. L.; Gordeev, E. I.

    2018-05-01

    Continuous noise-based monitoring of seismic velocity changes provides insights into volcanic unrest, earthquake mechanisms and fluid injection in the sub-surface. The standard monitoring approach relies on measuring travel time changes of late coda arrivals between daily and reference noise cross-correlations, usually chosen as stacks of daily cross-correlations. The main assumption of this method is that the shape of the noise correlations does not change over time or, in other terms, that the ambient-noise sources are stationary through time. These conditions are not fulfilled when a strong episodic source of noise, such as a volcanic tremor for example, perturbs the reconstructed Green's function. In this paper we propose a general formulation for retrieving continuous time series of noise-based seismic velocity changes without the requirement of any arbitrary reference cross-correlation function. Instead, we measure the changes between all possible pairs of daily cross-correlations and invert them using different smoothing parameters to obtain the final velocity change curve. We perform synthetic tests in order to establish a general framework for future applications of this technique. In particular, we study the reliability of velocity change measurements versus the stability of noise cross-correlation functions. We apply this approach to a complex dataset of noise cross-correlations at Klyuchevskoy volcanic group (Kamchatka), hampered by loss of data and the presence of highly non-stationary seismic tremors.
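
    The reference-free scheme above amounts to a simple linear inverse problem: each pairwise measurement is the difference between the velocity changes on the two days involved. A minimal sketch follows (Python; the first-difference smoothing is an illustrative regularization choice, not the paper's exact parameterization).

        import numpy as np

        def invert_dvv(pair_idx, delta, n_days, smooth=1.0):
            """Invert pairwise dv/v measurements for a daily time series.

            Each measurement delta[k], taken between days i and j with
            pair_idx[k] = (i, j), is modeled as m[j] - m[i]. A first-difference
            smoothing term regularizes the otherwise rank-deficient system.
            """
            G = np.zeros((len(delta), n_days))
            for k, (i, j) in enumerate(pair_idx):
                G[k, i], G[k, j] = -1.0, 1.0
            D = np.diff(np.eye(n_days), axis=0)      # first-difference operator
            A = np.vstack([G, smooth * D])
            b = np.concatenate([delta, np.zeros(n_days - 1)])
            m, *_ = np.linalg.lstsq(A, b, rcond=None)
            return m - m[0]                          # changes relative to day 0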

  20. Finite-fault source inversion using adjoint methods in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia

    2018-04-01

    Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively with a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.

  1. Finite-fault source inversion using adjoint methods in 3-D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia

    2018-07-01

    Accounting for lateral heterogeneities in the 3-D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1-D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3-D heterogeneity in source inversion involves pre-computing 3-D Green's functions, which requires a number of 3-D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense data sets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively with a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3-D heterogeneous velocity model. The velocity model comprises a uniform background and a 3-D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3-D velocity model are performed for two different station configurations, a dense and a sparse network with 1 and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak-slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3-D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3-D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.
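
    The practical advantage of the adjoint approach, one forward plus one adjoint simulation per iteration instead of per-station Green's functions, can be summarized in a schematic optimization loop (Python; forward and adjoint_gradient are hypothetical placeholders for the wave-propagation solvers, and plain gradient descent stands in for the paper's gradient-based minimizer).

        import numpy as np

        def invert_slip(forward, adjoint_gradient, m0, data, n_iter=50, step=1e-2):
            """Schematic adjoint-based inversion of slip velocity.

            forward(m): synthetic seismograms for slip-velocity model m.
            adjoint_gradient(m, residual): misfit gradient from one adjoint
            simulation; no Green's functions are pre-computed or stored.
            """
            m = m0.copy()
            for _ in range(n_iter):
                residual = forward(m) - data            # data misfit
                m -= step * adjoint_gradient(m, residual)
            return m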

  2. Taking potential probability function maps to the local scale and matching them with land use maps

    NASA Astrophysics Data System (ADS)

    Garg, Saryu; Sinha, Vinayak; Sinha, Baerbel

    2013-04-01

    Source-Receptor models have been developed using different methods. Residence-time weighted concentration back trajectory analysis and Potential Source Contribution Function (PSCF) are the two most popular techniques for identification of potential sources of a substance in a defined geographical area. Both techniques use back trajectories calculated using global models and assign values of probability/concentration to various locations in an area. These values represent the probability of threshold exceedances / the average concentration measured at the receptor in air masses with a certain residence time over a source area. Both techniques, however, have only been applied to regional and long-range transport phenomena due to inherent limitations in both the spatial accuracy and the temporal resolution of the back trajectory calculations. Employing the above mentioned concepts of residence time weighted concentration back-trajectory analysis and PSCF, we developed a source-receptor model capable of identifying local and regional sources of air pollutants like Particulate Matter (PM), NOx, SO2 and VOCs. We use 1 to 30 minute averages of concentration values and wind direction and speed from a single receptor site or from multiple receptor sites to trace the air mass back in time. The model code assumes all the atmospheric transport to be Lagrangian and linearly extrapolates air masses reaching the receptor location backwards in time for a fixed number of steps. We restrict the model run to the lifetime of the chemical species under consideration. For long-lived species the model run is limited to < 4 hrs, as spatial uncertainty increases the longer an air mass is linearly extrapolated back in time. The final model output is a map, which can be compared with the local land use map to pinpoint sources of different chemical substances and estimate their source strength. Our model has flexible space-time grid extrapolation steps of 1-5 minutes and 1-5 km grid resolution. By making use of high temporal resolution data, our model can produce maps for different times of the day, thus accounting for temporal changes and activity profiles of different sources. The main advantage of our approach compared to geostationary numerical methods that interpolate measured concentration values of multiple measurement sites to produce maps (gridding) is that the maps produced are more accurate in terms of spatial identification of sources. The model was applied to isoprene and meteorological data recorded during the clean post-monsoon season (1 October - 7 October, 2012) between 11 am and 4 pm at a receptor site in the North-West Indo-Gangetic Plains (IISER Mohali, 30.665° N, 76.729° E, 300 m asl), near the foothills of the Himalayan range. Considering the lifetime of isoprene, the model was run only 2 hours backward in time. The map shows the highest residence time weighted concentrations of isoprene (up to 3.5 ppbv) over agricultural land with a high number of trees (>180 trees/gridsquare) and moderate concentrations (1.5-2.5 ppbv) over agricultural land with low tree density, while values above 250 μg/m3 are observed for traffic hotspots in Chandigarh City. Based on the validation against the land use maps, the model appears to do an excellent job in source apportionment and identifying emission hotspots. Acknowledgement: We thank the IISER Mohali Atmospheric Chemistry Facility for data and the Ministry of Human Resource Development (MHRD), India and IISER Mohali for funding the facility. Chinmoy Sarkar is acknowledged for technical support, and Saryu Garg thanks the Max Planck-DST India Partner Group on Tropospheric OH reactivity and VOCs for funding the research.
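
    Under the Lagrangian assumption, the back-extrapolation step is elementary: the air mass is carried upwind in straight-line steps. A minimal sketch follows (Python; names are illustrative). Residence-time-weighted maps are then built by accumulating the measured concentration along these back-paths on the 1-5 km grid.

        import numpy as np

        def back_extrapolate(wd_deg, ws_ms, dt_s, n_steps):
            """Linearly extrapolate an air mass backwards from the receptor.

            wd_deg is the meteorological wind direction (degrees, direction
            the wind blows FROM), so stepping back in time moves the parcel
            toward that upwind bearing. Returns (east, north) offsets in
            metres relative to the receptor for each back step.
            """
            theta = np.deg2rad(wd_deg)
            ux, uy = np.sin(theta), np.cos(theta)    # unit vector pointing upwind
            steps = np.arange(1, n_steps + 1)
            return ux * ws_ms * dt_s * steps, uy * ws_ms * dt_s * steps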

  3. Cerebello-cortical network fingerprints differ between essential, Parkinson's and mimicked tremors.

    PubMed

    Muthuraman, Muthuraman; Raethjen, Jan; Koirala, Nabin; Anwar, Abdul Rauf; Mideksa, Kidist G; Elble, Rodger; Groppa, Sergiu; Deuschl, Günter

    2018-06-01

    Cerebello-thalamo-cortical loops play a major role in the emergence of pathological tremors and voluntary rhythmic movements. It is unclear whether these loops differ anatomically or functionally in different types of tremor. We compared age- and sex-matched groups of patients with Parkinson's disease or essential tremor and healthy controls (n = 34 per group). High-density 256-channel EEG and multi-channel EMG from extensor and flexor muscles of both wrists were recorded simultaneously while extending the hands against gravity with the forearms supported. Tremor was thereby recorded from patients, and voluntarily mimicked tremor was recorded from healthy controls. Tomographic maps of EEG-EMG coherence were constructed using a beamformer-based coherent source analysis. The direction and strength of information flow between different coherent sources were estimated using time-resolved partial-directed coherence analyses. Tremor severity and motor performance measures were correlated with connection strengths between coherent sources. The topography of oscillatory coherent sources in the cerebellum differed significantly among the three groups, but the cortical sources in the primary sensorimotor region and premotor cortex were not significantly different. The cerebellar and cortical source combinations matched well with known cerebello-thalamo-cortical connections derived from functional MRI resting state analyses according to the Buckner atlas. The cerebellar sources for Parkinson's tremor and essential tremor mapped primarily to primary sensorimotor cortex, but the cerebellar source for mimicked tremor mapped primarily to premotor cortex. Time-resolved partial-directed coherence analyses revealed activity flow mainly from cerebellum to sensorimotor cortex in Parkinson's tremor and essential tremor and mainly from cerebral cortex to cerebellum in mimicked tremor. EMG oscillation flowed mainly to the cerebellum in mimicked tremor, but oscillation flowed mainly from the cerebellum to EMG in Parkinson's and essential tremor. The topography of cerebellar involvement differed among Parkinson's, essential and mimicked tremors, suggesting different cerebellar mechanisms in tremorogenesis. Indistinguishable areas of sensorimotor cortex and premotor cerebral cortex were involved in all three tremors. Information flow analyses suggest that sensory feedback and cortical efferent copy input to cerebellum are needed to produce mimicked tremor, but tremor in Parkinson's disease and essential tremor do not depend on these mechanisms. Despite the subtle differences in cerebellar source topography, we found no evidence that the cerebellum is the source of oscillation in essential tremor or that the cortico-bulbo-cerebello-thalamocortical loop plays different tremorogenic roles in Parkinson's and essential tremor. Additional studies are needed to decipher the seemingly subtle differences in cerebellocortical function in Parkinson's and essential tremors.
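
    As a toy illustration of the EEG-EMG coherence underlying this analysis (the study additionally projects coherence into source space with a beamformer, which is beyond this sketch), the following Python snippet computes magnitude-squared coherence between surrogate cortical and muscle signals sharing a 5 Hz tremor drive.

        import numpy as np
        from scipy.signal import coherence

        fs = 1000.0                                      # sampling rate, Hz
        rng = np.random.default_rng(3)
        t = np.arange(0, 60, 1 / fs)
        drive = np.sin(2 * np.pi * 5 * t)                # shared 5 Hz tremor drive
        eeg = drive + rng.standard_normal(t.size)        # noisy cortical channel
        emg = 0.5 * drive + rng.standard_normal(t.size)  # noisy EMG envelope
        freqs, coh = coherence(eeg, emg, fs=fs, nperseg=2048)
        # coh peaks near 5 Hz, where the two signals share the tremor rhythm.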

  4. Extending compile-time reverse mode and exploiting partial separability in ADIFOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C.H.; El-Khadiri, M.

    1992-10-01

    The numerical methods employed in the solution of many scientific computing problems require the computation of the gradient of a function f: R^n → R. ADIFOR is a source translator that, given a collection of subroutines to compute f, generates Fortran 77 code for computing the derivative of this function. Using the so-called torsion problem from the MINPACK-2 test collection as an example, this paper explores two issues in automatic differentiation: the efficient computation of derivatives for partially separable functions and the use of the compile-time reverse mode for the generation of derivatives. We show that orders of magnitude of improvement are possible when exploiting partial separability and maximizing use of the reverse mode.
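
    Partial separability means f is a sum of element functions, each depending on only a few variables, so the full gradient can be assembled by scatter-adding cheap element gradients. A minimal sketch follows (Python rather than ADIFOR's Fortran; names are illustrative).

        import numpy as np

        def gradient_partially_separable(x, elements):
            """Assemble the gradient of f(x) = sum_i f_i(x[indices_i]).

            elements: list of (indices, grad_fn) pairs, where grad_fn maps
            the local variables x[indices] to the local gradient.
            """
            g = np.zeros_like(x)
            for idx, grad_fn in elements:
                g[idx] += grad_fn(x[idx])        # scatter-add element gradient
            return g

        # Toy example: f(x) = sum_i (x[i] - x[i+1])**2.
        n = 6
        elems = [(np.array([i, i + 1]),
                  lambda xl: 2.0 * (xl[0] - xl[1]) * np.array([1.0, -1.0]))
                 for i in range(n - 1)]
        g = gradient_partially_separable(np.arange(n, dtype=float), elems)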

  5. Functional Brain Networks Are Dominated by Stable Group and Individual Factors, Not Cognitive or Daily Variation.

    PubMed

    Gratton, Caterina; Laumann, Timothy O; Nielsen, Ashley N; Greene, Deanna J; Gordon, Evan M; Gilmore, Adrian W; Nelson, Steven M; Coalson, Rebecca S; Snyder, Abraham Z; Schlaggar, Bradley L; Dosenbach, Nico U F; Petersen, Steven E

    2018-04-18

    The organization of human brain networks can be measured by capturing correlated brain activity with fMRI. There is considerable interest in understanding how brain networks vary across individuals or neuropsychiatric populations or are altered during the performance of specific behaviors. However, the plausibility and validity of such measurements is dependent on the extent to which functional networks are stable over time or are state dependent. We analyzed data from nine high-quality, highly sampled individuals to parse the magnitude and anatomical distribution of network variability across subjects, sessions, and tasks. Critically, we find that functional networks are dominated by common organizational principles and stable individual features, with substantially more modest contributions from task-state and day-to-day variability. Sources of variation were differentially distributed across the brain and differentially linked to intrinsic and task-evoked sources. We conclude that functional networks are suited to measuring stable individual characteristics, suggesting utility in personalized medicine. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Determining Mass and Persistence of a Reactive Brominated-Solvent DNAPL Source Using Mass Depletion-Mass Flux Reduction Relationships During Pumping

    NASA Astrophysics Data System (ADS)

    Johnston, C. D.; Davis, G. B.; Bastow, T.; Annable, M. D.; Trefry, M. G.; Furness, A.; Geste, Y.; Woodbury, R.; Rhodes, S.

    2011-12-01

    Measures of the source mass and depletion characteristics of recalcitrant dense non-aqueous phase liquid (DNAPL) contaminants are critical elements for assessing performance of remediation efforts. This is in addition to understanding the relationships between source mass depletion and changes to dissolved contaminant concentration and mass flux in groundwater. Here we present results of applying analytical source-depletion concepts to pumping from within the DNAPL source zone of a 10-m thick heterogeneous layered aquifer to estimate the original source mass and characterise the time trajectory of source depletion and mass flux in groundwater. The multi-component, reactive DNAPL source consisted of the brominated solvent tetrabromoethane (TBA) and its transformation products (mostly tribromoethene - TriBE). Coring and multi-level groundwater sampling indicated the DNAPL to be mainly in lower-permeability layers, suggesting the source had already undergone appreciable depletion. Four simplified source dissolution models (exponential, power function, error function and rational mass) were able to describe the concentration history of the total molar concentration of brominated organics in extracted groundwater during 285 days of pumping. Approximately 152 kg of brominated compounds were extracted. The lack of significant kinetic mass transfer limitations in pumped concentrations was notable. This was despite the heterogeneous layering in the aquifer and distribution of DNAPL. There was little to choose between the model fits to pumped concentration time series. The variance of groundwater velocities in the aquifer determined during a partitioning inter-well tracer test (PITT) was used to parameterise the models. However, the models were found to be relatively insensitive to this parameter. All models indicated an initial source mass around 250 kg, which compared favourably to an estimate of 220 kg derived from the PITT. The extrapolated concentrations from the dissolution models diverged, showing disparate approaches to possible remediation objectives. However, it also showed that an appreciable proportion of the source would need to be removed to discriminate between the models. This may limit the utility of such modelling early in the history of a DNAPL source. A further limitation is the simplified approach of analysing the combined parent/daughter compounds with different solubilities as a total molar concentration. Although the fitted results gave confidence to this approach, there were appreciable changes in relative abundance. The dissolution and partitioning processes are discussed in relation to the lower-solubility TBA becoming dominant in pumped groundwater over time, despite its known rapid transformation to TriBE. These processes are also related to the architecture of the depleting source as revealed by multi-level groundwater sampling under reversed pumping/injection conditions.
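
    To make the model-fitting step concrete, the sketch below (Python) fits the exponential member of the dissolution-model family to a synthetic pumped-concentration series; the linear driving-force derivation, pumping rate and parameter values are illustrative assumptions, not the study's.

        import numpy as np
        from scipy.optimize import curve_fit

        def exponential_model(t, c0, m0):
            """Exponential depletion: dm/dt = -Q*c with c = c0*m/m0 gives
            c(t) = c0 * exp(-c0*Q*t/m0)."""
            Q = 50.0                            # assumed constant pumping rate
            return c0 * np.exp(-c0 * Q * t / m0)

        rng = np.random.default_rng(7)
        t_obs = np.linspace(0.0, 285.0, 30)     # days of pumping
        c_obs = exponential_model(t_obs, 2.0, 2.5e5) \
                * (1 + 0.05 * rng.standard_normal(30))  # noisy synthetic data
        (c0_fit, m0_fit), cov = curve_fit(exponential_model, t_obs, c_obs,
                                          p0=[1.0, 1.0e5])
        # m0_fit estimates the initial source mass; extrapolating the fitted
        # curve gives the projected time trajectory of source depletion.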

  7. EEG functional connectivity is partially predicted by underlying white matter connectivity

    PubMed Central

    Chu, CJ; Tanaka, N; Diaz, J; Edlow, BL; Wu, O; Hämäläinen, M; Stufflebeam, S; Cash, SS; Kramer, MA.

    2015-01-01

    Over the past decade, networks have become a leading model to illustrate both the anatomical relationships (structural networks) and the coupling of dynamic physiology (functional networks) linking separate brain regions. The relationship between these two levels of description remains incompletely understood and an area of intense research interest. In particular, it is unclear how cortical currents relate to underlying brain structural architecture. In addition, although theory suggests that brain communication is highly frequency dependent, how structural connections influence overlying functional connectivity in different frequency bands has not been previously explored. Here we relate functional networks inferred from statistical associations between source imaging of EEG activity and underlying cortico-cortical structural brain connectivity determined by probabilistic white matter tractography. We evaluate spontaneous fluctuating cortical brain activity over a long time scale (minutes) and relate inferred functional networks to underlying structural connectivity for broadband signals, as well as in seven distinct frequency bands. We find that cortical networks derived from source EEG estimates partially reflect both direct and indirect underlying white matter connectivity in all frequency bands evaluated. In addition, we find that when structural support is absent, functional connectivity is significantly reduced for high frequency bands compared to low frequency bands. The association between cortical currents and underlying white matter connectivity highlights the obligatory interdependence of functional and structural networks in the human brain. The increased dependence on structural support for the coupling of higher frequency brain rhythms provides new evidence for how underlying anatomy directly shapes emergent brain dynamics at fast time scales. PMID:25534110

  8. On the Assessment of Acoustic Scattering and Shielding by Time Domain Boundary Integral Equation Solutions

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.; Pizzo, Michelle E.; Nark, Douglas M.

    2016-01-01

    Based on the time domain boundary integral equation formulation of the linear convective wave equation, a computational tool dubbed Time Domain Fast Acoustic Scattering Toolkit (TD-FAST) has recently been under development. The time domain approach has a distinct advantage that the solutions at all frequencies are obtained in a single computation. In this paper, the formulation of the integral equation, as well as its stabilization by the Burton-Miller type reformulation, is extended to cases of a constant mean flow in an arbitrary direction. In addition, a "Source Surface" is also introduced in the formulation that can be employed to encapsulate regions of noise sources and to facilitate coupling with CFD simulations. This is particularly useful for applications where the noise sources are not easily described by analytical source terms. Numerical examples are presented to assess the accuracy of the formulation, including a computation of noise shielding by a thin barrier motivated by recent Historical Baseline F31A31 open rotor noise shielding experiments. Furthermore, spatial resolution requirements of the time domain boundary element method are also assessed using points-per-wavelength metrics. It is found that, using only constant basis functions and high-order quadrature for surface integration, relative errors of less than 2% may be obtained when the surface spatial resolution is 5 points-per-wavelength (PPW) or 25 points-per-wavelength squared (PPW2).

  9. A deeper look at the X-ray point source population of NGC 4472

    NASA Astrophysics Data System (ADS)

    Joseph, T. D.; Maccarone, T. J.; Kraft, R. P.; Sivakoff, G. R.

    2017-10-01

    In this paper we discuss the X-ray point source population of NGC 4472, an elliptical galaxy in the Virgo cluster. We used recent deep Chandra data combined with archival Chandra data to obtain a 380 ks exposure time. We find 238 X-ray point sources within 3.7 arcmin of the galaxy centre, with a completeness flux F_X(0.5-2 keV) = 6.3 × 10^-16 erg s^-1 cm^-2. Most of these sources are expected to be low-mass X-ray binaries. We find that, using data from a single galaxy which is both complete and has a large number of objects (~100) below 10^38 erg s^-1, the X-ray luminosity function is well fitted with a single power-law model. By cross-matching our X-ray data with both space-based and ground-based optical data for NGC 4472, we find that 80 of the 238 sources are in globular clusters. We compare the red and blue globular cluster subpopulations and find red clusters are nearly six times more likely to host an X-ray source than blue clusters. We show that there is evidence that these two subpopulations have significantly different X-ray luminosity distributions. Source catalogues for all X-ray point sources, as well as any corresponding optical data for globular cluster sources, are also presented here.

  10. The paradigm compiler: Mapping a functional language for the connection machine

    NASA Technical Reports Server (NTRS)

    Dennis, Jack B.

    1989-01-01

    The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.

  11. Virtual plane-wave imaging via Marchenko redatuming

    NASA Astrophysics Data System (ADS)

    Meles, Giovanni Angelo; Wapenaar, Kees; Thorbecke, Jan

    2018-04-01

    Marchenko redatuming is a novel scheme used to retrieve up- and down-going Green's functions in an unknown medium. Marchenko equations are based on reciprocity theorems and are derived on the assumption of the existence of functions exhibiting space-time focusing properties once injected in the subsurface. In contrast to interferometry but similarly to standard migration methods, Marchenko redatuming only requires an estimate of the direct wave from the virtual source (or to the virtual receiver), illumination from only one side of the medium, and no physical sources (or receivers) inside the medium. In this contribution we consider a different time-focusing condition within the frame of Marchenko redatuming that leads to the retrieval of virtual plane-wave responses. As a result, it allows multiple-free imaging using only a one-dimensional sampling of the targeted model at a fraction of the computational cost of standard Marchenko schemes. The potential of the new method is demonstrated on 2D synthetic models.

  12. The 2017 North Korea M6 seismic sequence: moment tensor, source time function, and aftershocks

    NASA Astrophysics Data System (ADS)

    Ni, S.; Zhan, Z.; Chu, R.; He, X.

    2017-12-01

    On September 3rd, 2017, an M6 seismic event occurred in North Korea, located near previous nuclear test sites. The event features strong P waves and short-period Rayleigh waves in contrast to weak S waves, suggesting a mostly explosive mechanism. We performed a joint inversion for moment tensor and depth with both local and teleseismic waveforms, and find that the event is shallow, with a mostly isotropic yet substantial non-isotropic component. Deconvolution of seismic waveforms of this event with respect to previous nuclear test events shows clues of complexity in the source time function. The event was followed by smaller earthquakes, beginning as early as 8.5 minutes after the event and continuing at least into October. The later events occurred in a compact region and show clear S waves, suggesting a double couple focal mechanism. Via analysis of Rayleigh wave spectra, these smaller events are found to be shallow. Relative locations and differences in the waveforms of the events are used to infer their possible links and generation mechanism.

  13. Calm Multi-Baryon Operators

    NASA Astrophysics Data System (ADS)

    Berkowitz, Evan; Nicholson, Amy; Chang, Chia Cheng; Rinaldi, Enrico; Clark, M. A.; Joó, Bálint; Kurth, Thorsten; Vranas, Pavlos; Walker-Loud, André

    2018-03-01

    There are many outstanding problems in nuclear physics which require input and guidance from lattice QCD calculations of few-baryon systems. However, these calculations suffer from an exponentially bad signal-to-noise problem which has prevented a controlled extrapolation to the physical point. The variational method has been applied very successfully to two-meson systems, allowing for the extraction of the two-meson states very early in Euclidean time through the use of improved single hadron operators. The sheer numerical cost of using the same techniques in two-baryon systems has so far been prohibitive. We present an alternate strategy which offers some of the same advantages as the variational method while being significantly less numerically expensive. We first use the Matrix Prony method to form an optimal linear combination of single baryon interpolating fields generated from the same source and different sink interpolating fields. Very early in Euclidean time this optimal linear combination is numerically free of excited state contamination, so we coin it a calm baryon. This calm baryon operator is then used in the construction of the two-baryon correlation functions. To test this method, we perform calculations on the WM/JLab iso-clover gauge configurations at the SU(3) flavor symmetric point with mπ ≈ 800 MeV — the same configurations we have previously used for the calculation of two-nucleon correlation functions. We observe the calm baryon significantly removes the excited state contamination from the two-nucleon correlation function to as early a time as the single-nucleon is improved, provided non-local (displaced nucleon) sources are used. For the local two-nucleon correlation function (where both nucleons are created from the same space-time location) there is still improvement, but there is significant excited state contamination in the region the single calm baryon displays no excited state contamination.
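
    A minimal sketch of forming an optimal single-baryon combination is given below (Python); it uses the generalized eigenvalue problem between correlator matrices at two time slices, a variational step closely related to, though not identical with, the Matrix Prony method employed here.

        import numpy as np
        from scipy.linalg import eig

        def optimal_combination(C, t0, t1):
            """Combination weights from a generalized eigenvalue problem.

            C[t] is the matrix of correlation functions between the different
            sink interpolators at time slice t. Solving C[t1] v = lam C[t0] v
            yields weights whose leading eigenvector suppresses excited-state
            contamination in the combined ("calm") correlator.
            """
            lam, v = eig(C[t1], C[t0])
            order = np.argsort(-lam.real)    # slowest-decaying state first
            return lam.real[order], v[:, order]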

  14. CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor

    2004-05-01

    Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and also in conjunction with other modalities like CT or MRI. Three dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, the current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, like Stradx or In-Vivo, exist today. Although these systems have been found to be useful in real clinical settings, it is difficult to augment their functionality and integrate them in versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool to the image guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system along with other functionalities (e.g., dual view visualization, registration, real-time tracking, segmentation, etc.) to rapidly create their medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is being fully integrated under the CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.

  15. Kinetic Analysis of Dynamic Positron Emission Tomography Data using Open-Source Image Processing and Statistical Inference Tools.

    PubMed

    Hawe, David; Hernández Fernández, Francisco R; O'Suilleabháin, Liam; Huang, Jian; Wolsztynski, Eric; O'Sullivan, Finbarr

    2012-05-01

    In dynamic mode, positron emission tomography (PET) can be used to track the evolution of injected radio-labelled molecules in living tissue. This is a powerful diagnostic imaging technique that provides a unique opportunity to probe the status of healthy and pathological tissue by examining how it processes substrates. The spatial aspect of PET is well established in the computational statistics literature. This article focuses on its temporal aspect. The interpretation of PET time-course data is complicated because the measured signal is a combination of vascular delivery and tissue retention effects. If the arterial time-course is known, the tissue time-course can typically be expressed in terms of a linear convolution between the arterial time-course and the tissue residue. In statistical terms, the residue function is essentially a survival function - a familiar life-time data construct. Kinetic analysis of PET data is concerned with estimation of the residue and associated functionals such as flow, flux, volume of distribution and transit time summaries. This review emphasises a nonparametric approach to the estimation of the residue based on a piecewise linear form. Rapid implementation of this by quadratic programming is described. The approach provides a reference for statistical assessment of widely used one- and two-compartmental model forms. We illustrate the method with data from two of the most well-established PET radiotracers, (15)O-H(2)O and (18)F-fluorodeoxyglucose, used for assessment of blood perfusion and glucose metabolism respectively. The presentation illustrates the use of two open-source tools, AMIDE and R, for PET scan manipulation and model inference.
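
    The convolution structure described above invites a direct numerical illustration. The sketch below (Python) recovers a nonnegative residue by non-negative least squares on a discretized convolution; the piecewise linear form, quadratic programming and monotonicity of the actual approach are simplified away here.

        import numpy as np
        from scipy.optimize import nnls

        def fit_residue(t, arterial, tissue):
            """Nonparametric residue estimate from uniformly sampled curves.

            Models tissue(t_i) ~ dt * sum_k arterial(t_{i-k}) * residue(t_k)
            and solves for nonnegative residue values.
            """
            dt = t[1] - t[0]
            n = len(t)
            A = np.zeros((n, n))
            for i in range(n):
                for k in range(i + 1):
                    A[i, k] = arterial[i - k] * dt   # discrete convolution matrix
            residue, _ = nnls(A, tissue)
            return residue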

  16. A strategy to unveil transient sources of ultra-high-energy cosmic rays

    NASA Astrophysics Data System (ADS)

    Takami, Hajime

    2013-06-01

    Transient generation of ultra-high-energy cosmic rays (UHECRs) has been motivated from promising candidates of UHECR sources such as gamma-ray bursts, flares of active galactic nuclei, and newly born neutron stars and magnetars. Here we propose a strategy to unveil transient sources of UHECRs from UHECR experiments. We demonstrate that the rate of UHECR bursts and/or flares is related to the apparent number density of UHECR sources, which is the number density estimated on the assumption of steady sources, and the time-profile spread of the bursts produced by cosmic magnetic fields. The apparent number density strongly depends on UHECR energies under a given rate of the bursts, which becomes observational evidence of transient sources. It is saturated at the number density of host galaxies of UHECR sources. We also derive constraints on the UHECR burst rate and/or energy budget of UHECRs per source as a function of the apparent source number density by using models of cosmic magnetic fields. In order to obtain a precise constraint on the UHECR burst rate, high event statistics above ~10^20 eV, for evaluating the apparent source number density at the highest energies, and better knowledge of cosmic magnetic fields from future observations and/or simulations, to better estimate the time-profile spread of UHECR bursts, are required. The estimated rate allows us to constrain transient UHECR sources through comparison with the occurrence rates of known energetic transient phenomena.

  17. Information-Theoretical Analysis of EEG Microstate Sequences in Python.

    PubMed

    von Wegner, Frederic; Laufs, Helmut

    2018-01-01

    We present an open-source Python package to compute information-theoretical quantities for electroencephalographic data. Electroencephalography (EEG) measures the electrical potential generated by the cerebral cortex and the set of spatial patterns projected by the brain's electrical potential on the scalp surface can be clustered into a set of representative maps called EEG microstates. Microstate time series are obtained by competitively fitting the microstate maps back into the EEG data set, i.e., by substituting the EEG data at a given time with the label of the microstate that has the highest similarity with the actual EEG topography. As microstate sequences consist of non-metric random variables, e.g., the letters A-D, we recently introduced information-theoretical measures to quantify these time series. In wakeful resting state EEG recordings, we found new characteristics of microstate sequences such as periodicities related to EEG frequency bands. The algorithms used are here provided as an open-source package and their use is explained in a tutorial style. The package is self-contained and the programming style is procedural, focusing on code intelligibility and easy portability. Using a sample EEG file, we demonstrate how to perform EEG microstate segmentation using the modified K-means approach, and how to compute and visualize the recently introduced information-theoretical tests and quantities. The time-lagged mutual information function is derived as a discrete symbolic alternative to the autocorrelation function for metric time series and confidence intervals are computed from Markov chain surrogate data. The software package provides an open-source extension to the existing implementations of the microstate transform and is specifically designed to analyze resting state EEG recordings.
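
    The time-lagged mutual information mentioned above is straightforward to estimate from a label sequence; a minimal sketch follows (plain NumPy, independent of the package's own implementation).

        import numpy as np

        def lagged_mutual_information(labels, lag):
            """I(X_t; X_{t+lag}) for a symbolic sequence, in nats.

            Estimated from empirical joint frequencies of microstate labels
            (e.g. 0..3 for maps A-D); the discrete analogue of the
            autocorrelation function for metric time series.
            """
            x, y = labels[:-lag], labels[lag:]
            mi = 0.0
            for a in np.unique(labels):
                pa = np.mean(x == a)
                for b in np.unique(labels):
                    pb = np.mean(y == b)
                    pab = np.mean((x == a) & (y == b))
                    if pab > 0:
                        mi += pab * np.log(pab / (pa * pb))
            return mi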

  18. Algorithm for astronomical, point source, signal to noise ratio calculations

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.; Schroeder, D. J.

    1984-01-01

    An algorithm was developed to simulate the expected signal to noise ratios as a function of observation time in the charge coupled device detector plane of an optical telescope located outside the Earth's atmosphere for a signal star, and an optional secondary star, embedded in a uniform cosmic background. By choosing the appropriate input values, the expected point source signal to noise ratio can be computed for the Hubble Space Telescope using the Wide Field/Planetary Camera science instrument.
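
    The generic form of such a calculation is the standard CCD signal-to-noise equation, sketched below (Python) with illustrative inputs; the actual algorithm adds an optional secondary star and instrument-specific parameters.

        import numpy as np

        def ccd_point_source_snr(signal_rate, sky_rate, dark_rate,
                                 read_noise, n_pixels, t):
            """CCD point-source SNR after integration time t (seconds).

            Rates are electrons/s summed over the n_pixels aperture;
            read_noise is electrons RMS per pixel, paid once per read.
            """
            S = signal_rate * t
            noise = np.sqrt(S + (sky_rate + dark_rate) * t
                            + n_pixels * read_noise ** 2)
            return S / noise            # grows roughly as sqrt(t)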

  19. Determination of stress glut moments of total degree 2 from teleseismic surface wave amplitude spectra

    NASA Astrophysics Data System (ADS)

    Bukchin, B. G.

    1995-08-01

    A special case of the seismic source, in which the stress glut tensor can be expressed as a product of a uniform moment tensor and a scalar function of spatial coordinates and time, is considered. For such a source, a technique for determining stress glut moments of total degree 2 from surface wave amplitude spectra is described. The results of applying this technique to the estimation of the spatio-temporal characteristics of the Georgian earthquake of 29 April 1991 are presented.

  20. Very-High-Energy γ-Ray Observations of the Blazar 1ES 2344+514 with VERITAS

    NASA Astrophysics Data System (ADS)

    Allen, C.; Archambault, S.; Archer, A.; Benbow, W.; Bird, R.; Bourbeau, E.; Brose, R.; Buchovecky, M.; Buckley, J. H.; Bugaev, V.; Cardenzana, J. V.; Cerruti, M.; Chen, X.; Christiansen, J. L.; Connolly, M. P.; Cui, W.; Daniel, M. K.; Eisch, J. D.; Falcone, A.; Feng, Q.; Fernandez-Alonso, M.; Finley, J. P.; Fleischhack, H.; Flinders, A.; Fortson, L.; Furniss, A.; Gillanders, G. H.; Griffin, S.; Grube, J.; Hütten, M.; Håkansson, N.; Hanna, D.; Hervet, O.; Holder, J.; Hughes, G.; Humensky, T. B.; Johnson, C. A.; Kaaret, P.; Kar, P.; Kelley-Hoskins, N.; Kertzman, M.; Kieda, D.; Krause, M.; Krennrich, F.; Kumar, S.; Lang, M. J.; Maier, G.; McArthur, S.; McCann, A.; Meagher, K.; Moriarty, P.; Mukherjee, R.; Nguyen, T.; Nieto, D.; O'Brien, S.; de Bhróithe, A. O'Faoláin; Ong, R. A.; Otte, A. N.; Park, N.; Petrashyk, A.; Pichel, A.; Pohl, M.; Popkow, A.; Pueschel, E.; Quinn, J.; Ragan, K.; Reynolds, P. T.; Richards, G. T.; Roache, E.; Rovero, A. C.; Rulten, C.; Sadeh, I.; Santander, M.; Sembroski, G. H.; Shahinyan, K.; Telezhinsky, I.; Tucci, J. V.; Tyler, J.; Wakely, S. P.; Weinstein, A.; Wilhelm, A.; Williams, D. A.

    2017-10-01

    We present very-high-energy γ-ray observations of the BL Lac object 1ES 2344+514 taken by the Very Energetic Radiation Imaging Telescope Array System between 2007 and 2015. 1ES 2344+514 is detected with a statistical significance above the background of 20.8σ in 47.2 h (livetime) of observations, making this the most comprehensive very-high-energy study of 1ES 2344+514 to date. Using these observations, the temporal properties of 1ES 2344+514 are studied on short and long time-scales. We fit a constant-flux model to nightly and seasonally binned light curves and apply a fractional variability test to determine the stability of the source on different time-scales. We reject the constant-flux model for the 2007-2008 and 2014-2015 nightly binned light curves and for the long-term seasonally binned light curve at the >3σ level. The spectra of the time-averaged emission before and after correction for attenuation by the extragalactic background light are obtained. The observed time-averaged spectrum above 200 GeV is satisfactorily fitted (χ2/NDF = 7.89/6) by a power-law function with an index Γ = 2.46 ± 0.06 (stat) ± 0.20 (sys) and extends to at least 8 TeV. The extragalactic-background-light-deabsorbed spectrum is adequately fitted (χ2/NDF = 6.73/6) by a power-law function with an index Γ = 2.15 ± 0.06 (stat) ± 0.20 (sys), while an F-test indicates that a power law with an exponential cut-off provides a marginally better fit (χ2/NDF = 2.56/5) at the 2.1σ level. The source location is found to be consistent with the published radio location, and its spatial extent is consistent with a point source.
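
    In outline, a fit of the kind quoted above reduces to a χ2 minimization over the power-law parameters; a minimal sketch (illustrative only; the binned points E, flux and flux_err are hypothetical, not VERITAS data):

        import numpy as np
        from scipy.optimize import curve_fit

        def power_law(E, N0, gamma):
            # differential flux dN/dE = N0 * (E / 1 TeV)^(-gamma), with E in TeV
            return N0 * E ** (-gamma)

        def fit_power_law(E, flux, flux_err):
            popt, pcov = curve_fit(power_law, E, flux, sigma=flux_err,
                                   absolute_sigma=True, p0=[1e-11, 2.5])
            chi2 = np.sum(((flux - power_law(E, *popt)) / flux_err) ** 2)
            ndf = len(E) - len(popt)
            return popt, chi2, ndf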

  1. Time-reversal in geophysics: the key for imaging a seismic source, generating a virtual source or imaging with no source (Invited)

    NASA Astrophysics Data System (ADS)

    Tourin, A.; Fink, M.

    2010-12-01

    The concept of time-reversal (TR) focusing was introduced in acoustics by Mathias Fink in the early nineties: a pulsed wave is sent from a source, propagates in an unknown medium and is captured at a transducer array termed a "Time Reversal Mirror" (TRM). The waveforms received at each transducer are then flipped in time and sent back, resulting in a wave converging at the original source regardless of the complexity of the propagation medium. TRMs have now been implemented in a variety of physical scenarios, from GHz microwaves to MHz ultrasonics and to hundreds of Hz in ocean acoustics. Common to this broad range of scales is a remarkable robustness exemplified by the observation that the more complex the medium (random or chaotic), the sharper the focus. A TRM acts as an antenna that uses complex environments to appear wider than it is, resulting, for a broadband pulse, in a refocusing quality that does not depend on the TRM aperture. We show that the time-reversal concept is also at the heart of very active research fields in seismology and applied geophysics: imaging of seismic sources, passive imaging based on noise correlations, seismic interferometry, and monitoring of CO2 storage using the virtual source method. All these methods can indeed be viewed in a unified framework as applications of the so-called time-reversal cavity approach. That approach uses the fact that a wave field can be predicted at any location inside a volume (without source) from the knowledge of both the field and its normal derivative on the surrounding surface S, which for acoustic scalar waves is expressed mathematically by the Helmholtz-Kirchhoff (HK) integral. Thus, in the first step of an ideal TR process, the field coming from a point-like source, as well as its normal derivative, is measured on S. In a second step, the initial source is removed and monopole and dipole sources re-emit the time reversal of the components measured in the first step. Instead of directly computing the resulting HK integral along S, physical arguments can be used to predict straightforwardly that the time-reversed field in the cavity can be written as the difference of advanced and retarded Green's functions centred on the initial source position. This result is in some way disappointing, because it means that reversing a field using a closed TRM is not enough to realize a perfect time-reversal experiment: in practical applications, the converging wave is always followed by a diverging one. However, we will show that this result is of great importance, since it furnishes the basis for imaging methods in media with no active source. We will focus especially on the virtual source method, showing that it can be used to implement the DORT method (decomposition of the time-reversal operator) in a passive way. The passive DORT method could be interesting for monitoring changes in a complex scattering medium, for example in the context of CO2 storage. Finally, we will discuss time-reversal imaging applied to the giant Sumatra earthquake.

  2. Discretized energy minimization in a wave guide with point sources

    NASA Technical Reports Server (NTRS)

    Propst, G.

    1994-01-01

    An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square-integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Well-posedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented, as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.

  3. Planetary geomorphology field studies: Washington and Alaska

    NASA Technical Reports Server (NTRS)

    Malin, M. C.

    1984-01-01

    Field studies of terrestrial landforms and the processes that shape them provide new directions for the study of planetary features. The investigations discussed here principally address mudflow phenomena and drainage development. At the Valley of 10,000 Smokes (Katmai, AK) and Mount St. Helens, WA, studies of the development of erosional landforms (in particular, drainage) on fresh, new surfaces permitted analysis of the result of competition between geomorphic processes. Of specific interest is the development of stream patterns as a function of the competition between perennial seepage overland flow (from glacial or groundwater sources), ephemeral overland flow (from pluvial or seasonal melt sources), and ephemeral/perennial groundwater sapping, as well as of the time since initial resurfacing, material properties, and seasonal/annual environmental conditions.

  4. Self-consistent current sheet structures in the quiet-time magnetotail

    NASA Technical Reports Server (NTRS)

    Holland, Daniel L.; Chen, James

    1993-01-01

    The structure of the quiet-time magnetotail is studied using a test particle simulation. Vlasov equilibria are obtained in the regime where v_D = cE_y/B_z is much less than the ion thermal velocity and are self-consistent in that the current and magnetic field satisfy Ampere's law. Force balance between the plasma and magnetic field is satisfied everywhere. The global structure of the current sheet is found to be critically dependent on the source distribution function. The pressure tensor is nondiagonal in the current sheet, with anisotropic temperature. A kinetic mechanism is proposed whereby changes in the source distribution result in a thinning of the current sheet.

  5. Seismic source functions from free-field ground motions recorded on SPE: Implications for source models of small, shallow explosions

    NASA Astrophysics Data System (ADS)

    Rougier, Esteban; Patton, Howard J.

    2015-05-01

    Reduced displacement potentials (RDPs) for chemical explosions of the Source Physics Experiments (SPE) in granite at the Nevada Nuclear Security Site are estimated from free-field ground motion recordings. Far-field P wave source functions are proportional to the time derivative of RDPs. Frequency-domain comparisons between measured source functions and model predictions show that high-frequency amplitudes roll off as ω^(-2), but models fail to predict the observed seismic moment, corner frequency, and spectral overshoot. All three features are fit satisfactorily for the SPE-2 test after the cavity radius Rc is reduced by 12%, the elastic radius is reduced by 58%, and the peak-to-static pressure ratio on the elastic radius is increased by 100%, all with respect to the Mueller-Murphy model modified with the Denny-Johnson Rc scaling law. A large discrepancy is found between the cavity volume inferred from RDPs and the volume estimated from laser scans of the emplacement hole. The measurements imply a scaled Rc of ~5 m/kt^(1/3), more than a factor of 2 smaller than for nuclear explosions. Less than 25% of the seismic moment can be attributed to cavity formation. A breakdown of the incompressibility assumption due to shear dilatancy of the source medium around the cavity is the likely explanation. New formulas are developed for volume changes due to medium bulking (or compaction). A 0.04% decrease of the average density inside the elastic radius accounts for the missing volumetric moment. Assuming incompressibility, established Rc scaling laws predicted the moment reasonably well, but this was only fortuitous, because dilation of the source medium compensated for the small cavity volume.

  6. [A Quality Assurance (QA) System with a Web Camera for High-dose-rate Brachytherapy].

    PubMed

    Hirose, Asako; Ueda, Yoshihiro; Oohira, Shingo; Isono, Masaru; Tsujii, Katsutomo; Inui, Shouki; Masaoka, Akira; Taniguchi, Makoto; Miyazaki, Masayoshi; Teshima, Teruki

    2016-03-01

    A quality assurance (QA) system that simultaneously quantifies the position and duration of an (192)Ir source (dwell position and time) was developed, and the performance of this system was evaluated in high-dose-rate brachytherapy. The QA system uses a web camera to verify and quantify the dwell position and dwell time. The web camera records 30 images per second over a range from 1,425 mm to 1,505 mm. A user verifies the source position from the web camera in real time. The source position and duration were quantified from the recorded movie using in-house software based on a template-matching technique. This QA system allowed verification of the absolute position in real time and simultaneous quantification of dwell position and time. Verification of the system showed that the mean step-size error was 0.31±0.1 mm and the mean dwell-time error 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points for three step sizes, and dwell-time errors with an accuracy of 0.1% for planned times longer than 10.0 s. The system thus provides quick verification and quantification of the dwell position and time with high accuracy at various dwell positions, independent of the step size.

  7. Dose rate calculations around 192Ir brachytherapy sources using a Sievert integration model

    NASA Astrophysics Data System (ADS)

    Karaiskos, P.; Angelopoulos, A.; Baras, P.; Rozaki-Mavrouli, H.; Sandilos, P.; Vlachos, L.; Sakelliou, L.

    2000-02-01

    The classical Sievert integral method is a valuable tool for dose rate calculations around brachytherapy sources, combining simplicity with reasonable computational times. However, its accuracy in predicting dose rate anisotropy around 192Ir brachytherapy sources has been repeatedly put into question. In this work, we used a primary and scatter separation technique to improve an existing modification of the Sievert integral (Williamson's isotropic scatter model) that determines dose rate anisotropy around commercially available 192Ir brachytherapy sources. The proposed Sievert formalism provides increased accuracy while maintaining the simplicity and computational time efficiency of the Sievert integral method. To describe transmission within the materials encountered, the formalism makes use of narrow-beam attenuation coefficients which can be directly and easily calculated from the initially emitted 192Ir spectrum. The other numerical parameters required for its implementation, once calculated with the aid of our home-made Monte Carlo simulation code, can be used for any 192Ir source design. Calculations of dose rate and anisotropy functions with the proposed Sievert expression, around commonly used 192Ir high dose rate sources and other 192Ir elongated source designs, are in good agreement with corresponding accurate Monte Carlo results which have been reported by our group and other authors.
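
    For orientation, the textbook form of the classical Sievert integral for a filtered line source (our own sketch with assumed symbols, not the modified formalism proposed in the paper) can be evaluated numerically as:

        import numpy as np
        from scipy.integrate import quad

        def sievert_line_source(h, L, mu, t_wall):
            # Classical Sievert integral for an active line source of length L (cm)
            # viewed through a wall of thickness t_wall (cm) with narrow-beam
            # attenuation coefficient mu (1/cm), at distance h (cm) from the source
            # axis on its perpendicular bisector plane; returns a quantity
            # proportional to dose rate per unit source strength.
            theta_max = np.arctan(L / (2.0 * h))
            integrand = lambda theta: np.exp(-mu * t_wall / np.cos(theta))
            value, _ = quad(integrand, -theta_max, theta_max)
            return value / (L * h)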

  8. Skyshine at neutron energies less than or equal to 400 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.

    1980-10-01

    The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle, weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field-point distances is also presented. These importance functions may be used to obtain skyshine dose equivalent estimates for any known source energy-angle distribution.

  9. Unbound motion on a Schwarzschild background: Practical approaches to frequency domain computations

    NASA Astrophysics Data System (ADS)

    Hopper, Seth

    2018-03-01

    Gravitational perturbations due to a point particle moving on a static black hole background are naturally described in Regge-Wheeler gauge. The first-order field equations reduce to a single master wave equation for each radiative mode. The master function satisfying this wave equation is a linear combination of the metric perturbation amplitudes with a source term arising from the stress-energy tensor of the point particle. The original master functions were found by Regge and Wheeler (odd parity) and Zerilli (even parity). Subsequent work by Moncrief and then Cunningham, Price and Moncrief introduced new master variables which allow time domain reconstruction of the metric perturbation amplitudes. Here, I explore the relationship between these different functions and develop a general procedure for deriving new higher-order master functions from ones already known. The benefit of higher-order functions is that their source terms always converge faster at large distance than their lower-order counterparts. This makes for a dramatic improvement in both the speed and accuracy of frequency domain codes when analyzing unbound motion.

  10. Signal-to-noise ratio for the wide field-planetary camera of the Space Telescope

    NASA Technical Reports Server (NTRS)

    Zissa, D. E.

    1984-01-01

    Signal-to-noise ratios for the Wide Field Camera and Planetary Camera of the Space Telescope were calculated as a function of integration time. Models of the optical systems and CCD detector arrays were used with a 27th visual magnitude point source and a 25th visual magnitude per square arc-second extended source. A 23rd visual magnitude per square arc-second background was assumed. The models predicted signal-to-noise ratios of 10 within 4 hours for the point source centered on a single pixel. Signal-to-noise ratios approaching 10 are estimated for approximately 0.25 x 0.25 arc-second areas within the extended source after 10 hours of integration.

  11. Temporal Characterization of Aircraft Noise Sources

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.; Sullivan, Brenda M.; Rizzi, Stephen A.

    2004-01-01

    Current aircraft source noise prediction tools yield time-independent frequency spectra as functions of directivity angle. Realistic evaluation and human assessment of aircraft fly-over noise require the temporal characteristics of the noise signature. The purpose of the current study is to analyze empirical data from broadband jet and tonal fan noise sources and to provide the temporal information required for prediction-based synthesis. Noise sources included a one-tenth-scale engine exhaust nozzle and a one-fifth-scale turbofan engine. A methodology was developed to characterize the low-frequency fluctuations employing the Short Time Fourier Transform in a MATLAB computing environment. It was shown that a trade-off is necessary between frequency and time resolution in the acoustic spectrogram. The procedure requires careful evaluation and selection of the data analysis parameters, including the data sampling frequency, Fourier Transform window size, associated time period and frequency resolution, and time-period window overlap. Low-frequency fluctuations were applied to the synthesis of broadband noise, with the resulting records sounding virtually indistinguishable from the measured data in initial subjective evaluations. Amplitude fluctuations of blade passage frequency (BPF) harmonics were successfully characterized for conditions equivalent to take-off and approach. Data demonstrated that the fifth harmonic of the BPF varied more in frequency than the BPF itself and exhibited larger amplitude fluctuations over the duration of the time record. Frequency fluctuations were found not to be perceptible in the current characterization of tonal components.
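
    The frequency/time trade-off is set by the STFT window length; a minimal Python sketch (the study used MATLAB; the amplitude-modulated test tone and parameter values here are illustrative assumptions):

        import numpy as np
        from scipy.signal import stft

        fs = 44100                                   # Hz, assumed sampling rate
        t = np.arange(0, 2.0, 1.0 / fs)
        x = np.sin(2 * np.pi * 1000 * t) * (1 + 0.3 * np.sin(2 * np.pi * 5 * t))

        # A long window resolves frequency finely; a short window resolves the
        # low-frequency amplitude fluctuations in time.
        for nperseg in (512, 4096):
            f, tt, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
            print(nperseg, "df = %.1f Hz" % (f[1] - f[0]),
                  "dt = %.1f ms" % ((tt[1] - tt[0]) * 1e3))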

  12. A global time-dependent model of thunderstorm electricity. I - Mathematical properties of the physical and numerical models

    NASA Technical Reports Server (NTRS)

    Browning, G. L.; Tzur, I.; Roble, R. G.

    1987-01-01

    A time-dependent model is introduced that can be used to simulate the interaction of a thunderstorm with its global electrical environment. The model solves the continuity equation of the Maxwell current, which is assumed to be composed of the conduction, displacement, and source currents. Boundary conditions which can be used in conjunction with the continuity equation to form a well-posed initial-boundary value problem are determined. Properties of various components of solutions of the initial-boundary value problem are analytically determined. The results indicate that the problem has two time scales, one determined by the background electrical conductivity and the other by the time variation of the source function. A numerical method for obtaining quantitative results is introduced, and its properties are studied. Some simulation results on the evolution of the displacement and conduction currents during the electrification of a storm are presented.

  13. Here Be Dragons: Effective (X-ray) Timing with the Cospectrum

    NASA Astrophysics Data System (ADS)

    Huppenkothen, Daniela; Bachetti, Matteo

    2018-01-01

    In recent years, the cross spectrum has received considerable attention as a means of characterizing the variability of astronomical sources as a function of wavelength. While much has been written about the statistics of time and phase lags, the cospectrum, the real part of the cross spectrum, has only recently been understood as a means of mitigating instrumental effects dependent on temporal frequency in astronomical detectors, as well as a method of characterizing the coherent variability in two wavelength ranges on different time scales. In this talk, I will present recent advances made in understanding the statistical properties of cospectra, leading to much improved inferences for periodic and quasi-periodic signals. I will also present a new method to reliably mitigate instrumental effects such as dead time in X-ray detectors, and show how we can use the cospectrum to model highly variable sources such as X-ray binaries or Active Galactic Nuclei.
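
    In outline (a minimal sketch with an arbitrary normalization, not the method presented in the talk), the cospectrum of two simultaneous, evenly sampled light curves is:

        import numpy as np

        def cospectrum(x, y, dt):
            # Real part of the cross spectrum of two simultaneously recorded,
            # evenly sampled series; noise that is independent between the two
            # detectors contributes zero to the cospectrum on average.
            x = np.asarray(x, float) - np.mean(x)
            y = np.asarray(y, float) - np.mean(y)
            fx, fy = np.fft.rfft(x), np.fft.rfft(y)
            freqs = np.fft.rfftfreq(len(x), d=dt)
            return freqs[1:], np.real(np.conj(fx) * fy)[1:]  # drop the DC bin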

  14. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

    Time series received at a short distance from the source allow the identification of distinct paths; four of these are the direct path, the surface and bottom reflections, and a sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed, along with linearization, for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the arrival-time densities through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because these densities express the uncertainty in the inversion for sediment properties.

  15. Effect of Field Spread on Resting-State Magnetoencephalography Functional Network Analysis: A Computational Modeling Study.

    PubMed

    Silva Pereira, Silvana; Hindriks, Rikkert; Mühlberg, Stefanie; Maris, Eric; van Ede, Freek; Griffa, Alessandra; Hagmann, Patric; Deco, Gustavo

    2017-11-01

    A popular way to analyze resting-state electroencephalography (EEG) and magnetoencephalography (MEG) data is to treat them as a functional network in which sensors are identified with nodes and interactions between channel time series with network connections. Although conceptually appealing, the network-theoretical approach to sensor-level EEG and MEG data is challenged by the fact that EEG and MEG time series are mixtures of source activity. It is, therefore, of interest to assess the relationship between functional networks of source activity and the ensuing sensor-level networks. Since network topological features are of high interest in experimental studies, we address the question of to what extent the network topology can be reconstructed from sensor-level functional connectivity (FC) measures in the case of MEG data. Simple simulations that consider only a small number of regions do not allow network properties to be assessed; therefore, we use a diffusion magnetic resonance imaging-constrained whole-brain computational model of resting-state activity. Our motivation stems from the fact that many contributions in the literature still perform network analysis at the sensor level, and we aim to show the discrepancies between source- and sensor-level network topologies by using realistic simulations of resting-state cortical activity. Our main findings are that the effect of field spread on network topology depends on the type of interaction (instantaneous or lagged); that field spread leads to an underestimation of lagged FC at the sensor level due to instantaneous mixing of cortical signals, with instantaneous interactions more sensitive to field spread than lagged interactions; and that discrepancies are reduced when using planar rather than axial gradiometers. We, therefore, recommend using lagged interaction measures on planar gradiometer data when investigating network properties of resting-state sensor-level MEG data.

  16. Aeromicrobiology/air quality

    USGS Publications Warehouse

    Andersen, Gary L.; Frisch, A.S.; Kellogg, Christina A.; Levetin, E.; Lighthart, Bruce; Paterno, D.

    2009-01-01

    The most prevalent microorganisms, viruses, bacteria, and fungi, are introduced into the atmosphere from many anthropogenic sources such as agricultural, industrial and urban activities, termed microbial air pollution (MAP), as well as from natural sources, including soil, vegetation, and ocean surfaces disturbed by atmospheric turbulence. Airborne concentrations range from negligible to very large numbers and change as functions of time of day, season, location, and upwind sources. While airborne, the organisms may settle out immediately or be transported great distances. Further, most viable airborne cells can be rendered nonviable by temperature effects, dehydration or rehydration, UV radiation, and/or air pollution effects. Mathematical microbial survival models that simulate these effects have been developed.

  17. A Near Real-Time Seismic Exploration and Monitoring (i.e., Ambient Seismic Noise Interferometry) Solution Based Upon a Novel "At the Edge" Approach that Leverages Commercially Available Digitizers, Embedded Systems, and an Open-Source Big Data Architecture

    NASA Astrophysics Data System (ADS)

    Sepulveda, F.; Thangraj, J. S.; Quiros, D.; Pulliam, J.; Queen, J. H.; Queen, M.; Iovenitti, J. L.

    2017-12-01

    Seismic interferometry that makes use of ambient noise requires that cross-correlations of data recorded at two or more stations be stacked over a "long enough" time interval that off-axis sources cancel and the estimated inter-station Green's function converges to the actual function. However, the optimal length of the recording period depends on the characteristics of ambient noise at the site, which vary over time and are therefore not known before data acquisition. Data acquisition parameters therefore cannot be planned in ways that ensure success while minimizing cost and effort, and experiment durations are typically either too long or too short. Automated, in-field processing can provide inter-station Green's functions in near-real time, allowing for the immediate evaluation of results and enabling operators to alter data acquisition parameters before demobilizing. We report on the design, system integration, and testing of a strategy for the automation of data acquisition, distribution, and processing of ambient noise using industry-standard, widely available instrumentation (Reftek 130-01 digitizers and 4.5 Hz geophones). Our solution utilizes an inexpensive embedded system (Raspberry Pi 3), which is configured to acquire data from the Reftek and insert it into a big data store called Apache Cassandra. Cassandra distributes and maintains up-to-date copies of the data through a WiFi network, as defined by tunable consistency levels and replication factors, thus allowing for efficient multi-station computations. At regular intervals, data are extracted from Cassandra and used to compute Green's functions for all receiver pairs. Results are reviewed and progress toward convergence can be assessed. We successfully tested a 20-node prototype of what we call the "Raspberry Pi-Enhanced Reftek" (RaPiER) array at the Soda Lake Geothermal Field in Nevada in June 2017. While intermittent problems with the WiFi network interfered with real-time data delivery from some stations, the system performed robustly overall and produced hourly sets of steadily improving virtual source gathers. Most importantly, the effects of data shortfalls on results can be assessed immediately, in the field, so the array's acquisition parameters can be modified and the deployment duration extended as necessary.
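
    The core computation in such a pipeline is the stacking of windowed cross-correlations; a minimal sketch (our illustration, not the RaPiER code; window and lag lengths are assumptions):

        import numpy as np

        def stack_cross_correlations(tr_a, tr_b, fs, window_s=600.0, max_lag_s=60.0):
            # Stack normalized cross-correlations of two continuous traces; the
            # stack converges toward the inter-station Green's function.
            win, max_lag = int(window_s * fs), int(max_lag_s * fs)
            n_win = min(len(tr_a), len(tr_b)) // win
            stack = np.zeros(2 * max_lag + 1)
            for i in range(n_win):
                a = tr_a[i * win:(i + 1) * win]
                b = tr_b[i * win:(i + 1) * win]
                a = (a - a.mean()) / (a.std() + 1e-12)
                b = (b - b.mean()) / (b.std() + 1e-12)
                cc = np.correlate(a, b, mode="full")  # FFT-based in practice
                mid = win - 1                          # zero-lag index
                stack += cc[mid - max_lag:mid + max_lag + 1]
            return stack / max(n_win, 1)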

  18. iQIST v0.7: An open source continuous-time quantum Monte Carlo impurity solver toolkit

    NASA Astrophysics Data System (ADS)

    Huang, Li

    2017-12-01

    In this paper, we present a new version of the iQIST software package, which is capable of solving various quantum impurity models by using the hybridization expansion (strong coupling expansion) continuous-time quantum Monte Carlo algorithm. In the revised version, the software architecture has been completely redesigned. A new basis (the intermediate representation, or singular value decomposition representation) for the single-particle and two-particle Green's functions is introduced. Many useful physical observables are added, such as the charge susceptibility, fidelity susceptibility, Binder cumulant, and autocorrelation time. In particular, the measurement of the two-particle Green's functions is optimized: both the particle-hole and particle-particle channels are supported, and the block structure of the two-particle Green's functions is exploited to accelerate the calculation. Finally, we fix some known bugs and limitations. The computational efficiency of the code is greatly enhanced.

  19. Prediction of Turbulence-Generated Noise in Unheated Jets. Part 1; JeNo Technical Manual (Version 1.0)

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James; Georgiadis, Nicholas

    2005-01-01

    The model-based approach used by the JeNo code to predict jet noise spectral directivity is described. A linearized form of Lilley's equation governs the non-causal Green's function of interest, with the non-linear terms on the right-hand side identified as the source. A Reynolds-averaged Navier-Stokes (RANS) solution yields the required mean flow for the solution of the propagation Green's function in a locally parallel flow. The RANS solution also produces the time- and length-scales needed to model the non-compact source, the turbulent velocity correlation tensor, with exponential temporal and spatial functions. It is shown that while an exact non-causal Green's function accurately predicts the observed shift in the location of the spectrum peak with angle, as well as the angularity of sound at low to moderate Mach numbers, the polar directivity of radiated sound is not entirely captured by this Green's function at high subsonic and supersonic acoustic Mach numbers. Results presented for unheated jets in the Mach number range of 0.51 to 1.8 suggest that near the peak radiation angle of high-speed jets, a different source/Green's function convolution integral may be required in order to capture the peak observed directivity of jet noise. A sample Mach 0.90 heated jet is also discussed that highlights the requirements for a comprehensive jet noise prediction model.

  20. A generalized formulation for noise-based seismic velocity change measurements

    NASA Astrophysics Data System (ADS)

    Gómez-García, C.; Brenguier, F.; Boué, P.; Shapiro, N.; Droznin, D.; Droznina, S.; Senyukov, S.; Gordeev, E.

    2017-12-01

    The observation of continuous seismic velocity changes is a powerful tool for detecting seasonal variations in crustal structure, volcanic unrest, co- and post-seismic evolution of stress in fault areas, and the effects of fluid injection. The standard approach for measuring such velocity changes relies on comparing travel times in the coda of a set of seismic signals, usually noise-based cross-correlations retrieved at different dates, against a reference trace, usually an average over dates. Good stability of the noise sources in both space and time is then the main assumption for reliable measurements. Unfortunately, these conditions are often not fulfilled, as happens when ambient-noise sources are non-stationary, such as the emissions of low-frequency volcanic tremors. We propose a generalized formulation for retrieving continuous time series of noise-based seismic velocity changes without any arbitrary reference cross-correlation function. We set up a general framework for future applications of this technique by performing synthetic tests. In particular, we study the reliability of the retrieved velocity changes in the case of seasonal-type trends, transient effects (similar to those produced as a result of an earthquake or a volcanic eruption), and sudden velocity drops and recoveries caused by transient local source emissions. Finally, we apply this approach to a real dataset of noise cross-correlations. We choose the Klyuchevskoy volcanic group (Kamchatka) as a case study, where the recorded wavefield is hampered by loss of data and dominated by strongly localized volcanic tremor sources. Despite these wavefield contaminations, we retrieve clear seismic velocity drops associated with the eruptions of the Klyuchevskoy and Tolbachik volcanoes in 2010 and 2012, respectively.
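
    For contrast, the standard reference-based measurement that this formulation generalizes can be sketched with the stretching technique (an illustration under assumed conventions; the sign convention for dv/v varies in the literature):

        import numpy as np

        def stretching_dvv(ref, cur, t, eps_max=0.01, n_eps=201):
            # Grid-search the stretching factor eps that best aligns the current
            # coda window cur(t) with the reference ref(t); for a homogeneous
            # velocity change, dv/v = -eps under the convention used here.
            eps_grid = np.linspace(-eps_max, eps_max, n_eps)
            def correlation(eps):
                stretched = np.interp(t * (1.0 + eps), t, cur)
                return np.corrcoef(stretched, ref)[0, 1]
            best_eps = max(eps_grid, key=correlation)
            return -best_eps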

  1. Development open source microcontroller based temperature data logger

    NASA Astrophysics Data System (ADS)

    Abdullah, M. H.; Che Ghani, S. A.; Zaulkafilai, Z.; Tajuddin, S. N.

    2017-10-01

    This article discusses the development stages in designing, prototyping, testing and deploying a portable, open-source, microcontroller-based temperature data logger for use in rough industrial environments. The 5 V prototype of the data logger is equipped with an open-source Arduino microcontroller integrating multiple thermocouple sensors with their modules, secure digital (SD) card storage, a liquid crystal display (LCD), a real-time clock and an electronic enclosure made of acrylic. The firmware is programmed so that eight readings from the thermocouples are acquired within a 3 s interval and displayed on the LCD simultaneously. The temperature readings recorded at four different points on both hydrodistillation units show similar profile patterns, and the highest yield of extracted oil, 0.004%, was achieved on hydrodistillation unit 2. From the obtained results, this study achieved the objective of developing an inexpensive, portable and robust eight-channel temperature measuring module with the capability to monitor and store real-time data.

  2. Infrasonic emissions from local meteorological events: A summary of data taken throughout 1984

    NASA Technical Reports Server (NTRS)

    Zuckerwar, A. J.

    1986-01-01

    Records of infrasonic signals, propagating through the Earth's atmosphere in the frequency band 2 to 16 Hz, were gathered on a three-microphone array at Langley Research Center throughout the year 1984. Digital processing of these records fulfilled three functions: time delay estimation, based on an adaptive filter; source location, determined from the time delay estimates; and source identification, based on spectral analysis. Meteorological support was provided by significant meteorological advisories, lightning locator plots, and daily reports from the Air Weather Service. The infrasonic data are organized into four characteristic signatures, one of which is believed to contain emissions from local meteorological sources. This class of signature prevailed only on those days when major global meteorological events appeared in or near the eastern United States. Eleven case histories are examined. Practical application of the infrasonic array in a low-level wind shear alert system is discussed.
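
    Locating a source from pairwise time delays can be sketched, under a plane-wave assumption, as a least-squares fit of the horizontal slowness vector (an illustration with assumed coordinates and names, not the processing actually used in the study):

        import numpy as np

        def plane_wave_fit(positions, pair_delays):
            # positions: (n, 2) sensor coordinates in metres (east, north);
            # pair_delays: {(i, j): tau} with tau = t_j - t_i in seconds.
            # A plane wave gives tau = s . (r_j - r_i) for slowness vector s.
            A = np.array([positions[j] - positions[i] for (i, j) in pair_delays])
            b = np.array(list(pair_delays.values()))
            s, *_ = np.linalg.lstsq(A, b, rcond=None)
            azimuth = np.degrees(np.arctan2(s[0], s[1])) % 360.0  # propagation direction
            speed = 1.0 / np.linalg.norm(s)
            return azimuth, speed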

  3. A complete analytical solution of the Fokker-Planck and balance equations for nucleation and growth of crystals

    NASA Astrophysics Data System (ADS)

    Makoveeva, Eugenya V.; Alexandrov, Dmitri V.

    2018-01-01

    This article is concerned with a new analytical description of nucleation and growth of crystals in a metastable mushy layer (supercooled liquid or supersaturated solution) at the intermediate stage of a phase transition. The model under consideration, consisting of a non-stationary integro-differential system of governing equations for the distribution function and the metastability level, is solved analytically by means of the saddle-point technique for a Laplace-type integral, in the case of arbitrary nucleation kinetics and time-dependent heat or mass sources in the balance equation. We demonstrate that the time-dependent distribution function approaches the stationary profile in the course of time. This article is part of the theme issue 'From atomistic interfaces to dendritic patterns'.
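
    For orientation, the leading-order saddle-point (Laplace) approximation invoked for integrals of this type has the standard textbook form (quoted for reference, not as the paper's specific result):

        \int_a^b f(t)\, e^{\lambda S(t)}\, \mathrm{d}t \;\approx\; f(t_0)\, e^{\lambda S(t_0)} \sqrt{\frac{2\pi}{\lambda\,|S''(t_0)|}}, \qquad \lambda \to \infty,

    where t_0 is an interior maximum of S(t) with S''(t_0) < 0.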

  4. Combined corona discharge and UV photoionization source for ion mobility spectrometry.

    PubMed

    Bahrami, Hamed; Tabrizchi, Mahmoud

    2012-08-15

    An ion mobility spectrometer is described which is equipped with two non-radioactive ion sources, namely an atmospheric pressure photoionization source and a corona discharge ionization source. The two sources can not only run individually but are also capable of operating simultaneously. For photoionization, a UV lamp was mounted parallel to the axis of the ion mobility cell. The corona discharge electrode was mounted perpendicular to the UV radiation. The total ion current from the photoionization source was measured as a function of lamp current, sample flow rate, and drift field. Simultaneous operation of the two ionization sources was investigated by recording ion mobility spectra of selected samples. The design allows one to observe peaks from either the corona discharge or photoionization individually or from both simultaneously, which makes it possible to accurately compare peaks in the ion mobility spectra from each individual source. Finally, the instrument's capability to discriminate two peaks appearing at approximately identical drift times using each individual ionization source is demonstrated.

  5. Development of High-Resolution Dynamic Dust Source Function - A Case Study with a Strong Dust Storm in a Regional Model

    NASA Technical Reports Server (NTRS)

    Kim, Dongchul; Chin, Mian; Kemp, Eric M.; Tao, Zhining; Peters-Lidard, Christa D.; Ginoux, Paul

    2017-01-01

    A high-resolution dynamic dust source has been developed in the NASA Unified-Weather Research and Forecasting (NU-WRF) model to improve on the existing coarse static dust source. In the new dust source map, topographic depression is resolved at 1-km resolution and surface bareness is derived using Normalized Difference Vegetation Index (NDVI) data from the Moderate Resolution Imaging Spectroradiometer (MODIS). The new dust source better resolves the complex topographic distribution over the western United States, where its magnitude is higher than in the existing, coarser-resolution static source. A case study is conducted with an extreme dust storm that occurred in Phoenix, Arizona, at 02-03 UTC on July 6, 2011. The NU-WRF model with the new high-resolution dynamic dust source successfully captures the dust storm, which was not achieved with the old source identification. However, the case study also reveals several challenges in reproducing the time evolution of short-lived, extreme dust storm events.

  6. Multi-scale comparison of source parameter estimation using empirical Green's function approach

    NASA Astrophysics Data System (ADS)

    Chen, X.; Cheng, Y.

    2015-12-01

    Analysis of earthquake source parameters requires correction for path effects, site response, and instrument response. The empirical Green's function (EGF) method is one of the most effective ways to remove path effects and station responses, by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high-quality estimates for strictly selected events; however, the number of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods have been proposed that exploit the redundancy of event-station pairs and use stacking to obtain systematic source parameter estimates for a large number of events at once. This facilitates analysis of spatial-temporal patterns and scaling relationships, but it is unclear how much resolution is sacrificed in the process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also leads to biases. Here, using two regionally focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, we compare results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods, within these completely different tectonic environments, in order to quantify the consistency and inconsistency of source parameter estimates and the associated problems.
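
    The spectral-ratio step can be sketched with an omega-square (Brune) source model (a minimal illustration with assumed parameter names, not the authors' fitting code):

        import numpy as np
        from scipy.optimize import curve_fit

        def brune_spectral_ratio(f, moment_ratio, fc_large, fc_small):
            # Ratio of two omega-square (Brune) displacement spectra, large event
            # over small event; path and site terms cancel in the ratio, leaving
            # the moment ratio and the two corner frequencies.
            return moment_ratio * (1.0 + (f / fc_small) ** 2) / (1.0 + (f / fc_large) ** 2)

        # Hypothetical usage on an observed spectral ratio:
        # popt, _ = curve_fit(brune_spectral_ratio, freqs, obs_ratio,
        #                     p0=[100.0, 1.0, 10.0])  # ratio, fc_large, fc_small (Hz)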

  7. Seasonal variations and source apportionment of atmospheric PM2.5-bound polycyclic aromatic hydrocarbons in a mixed multi-function area of Hangzhou, China.

    PubMed

    Lu, Hao; Wang, Shengsheng; Li, Yun; Gong, Hui; Han, Jingyi; Wu, Zuliang; Yao, Shuiliang; Zhang, Xuming; Tang, Xiujuan; Jiang, Boqiong

    2017-07-01

    To reveal the seasonal variations and sources of PM2.5-bound polycyclic aromatic hydrocarbons (PAHs) during haze and non-haze episodes, daily PM2.5 samples were collected from March 2015 to February 2016 in a mixed multi-function area in Hangzhou, China. Ambient concentrations of 16 priority-controlled PAHs were determined. The sums of PM2.5-bound PAH concentrations during the haze episodes were 4.52 ± 3.32 and 13.6 ± 6.29 ng m⁻³ in the warm and cold seasons, respectively, which were 1.99 and 1.49 times those during the non-haze episodes. Four PAH sources were identified using the positive matrix factorization model and the conditional probability function: vehicular emissions (45%), heavy oil combustion (23%), coal and natural gas combustion (22%), and biomass combustion (10%). All four source contributions to PAHs were consistently higher in the cold season than in the warm season. Vehicular emissions were the most important source of the increase in PM2.5-bound PAH levels during the haze episodes, and heavy oil combustion played an important role in the aggravation of haze pollution. The analysis of air mass back trajectories indicated that air mass transport influenced PM2.5-bound PAH pollution, especially through increased contributions from coal combustion and vehicular emissions in the cold season.

  8. Automated real-time software development

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.; Walker, Carrie K.; Turkovich, John J.

    1993-01-01

    A Computer-Aided Software Engineering (CASE) system has been developed at the Charles Stark Draper Laboratory (CSDL) under the direction of the NASA Langley Research Center. The CSDL CASE tool provides an automated method of generating source code and hard copy documentation from functional application engineering specifications. The goal is to significantly reduce the cost of developing and maintaining real-time scientific and engineering software while increasing system reliability. This paper describes CSDL CASE and discusses demonstrations that used the tool to automatically generate real-time application code.

  9. Class of self-limiting growth models in the presence of nonlinear diffusion

    NASA Astrophysics Data System (ADS)

    Kar, Sandip; Banik, Suman Kumar; Ray, Deb Shankar

    2002-06-01

    The source term in a reaction-diffusion system, in general, does not involve explicit time dependence. A class of self-limiting growth models dealing with animal and tumor growth and bacterial populations in a culture, on the other hand, is described by kinetics with explicit functions of time. We analyze a reaction-diffusion system to study the propagation of spatial fronts for these models.

  10. High-throughput automated home-cage mesoscopic functional imaging of mouse cortex

    PubMed Central

    Murphy, Timothy H.; Boyd, Jamie D.; Bolaños, Federico; Vanni, Matthieu P.; Silasi, Gergely; Haupt, Dirk; LeDue, Jeff M.

    2016-01-01

    Mouse head-fixed behaviour coupled with functional imaging has become a powerful technique in rodent systems neuroscience. However, training mice can be time consuming and is potentially stressful for animals. Here we report a fully automated, open source, self-initiated head-fixation system for mesoscopic functional imaging in mice. The system supports five mice at a time and requires minimal investigator intervention. Using genetically encoded calcium indicator transgenic mice, we longitudinally monitor cortical functional connectivity up to 24 h per day in >7,000 self-initiated and unsupervised imaging sessions up to 90 days. The procedure provides robust assessment of functional cortical maps on the basis of both spontaneous activity and brief sensory stimuli such as light flashes. The approach is scalable to a number of remotely controlled cages that can be assessed within the controlled conditions of dedicated animal facilities. We anticipate that home-cage brain imaging will permit flexible and chronic assessment of mesoscale cortical function. PMID:27291514

  11. The Chaotic Light Curves of Accreting Black Holes

    NASA Technical Reports Server (NTRS)

    Kazanas, Demosthenes

    2007-01-01

    We present model light curves for accreting Black Hole Candidates (BHC) based on a recently developed model of these sources. According to this model, the observed light curves and aperiodic variability of BHC are due to a series of soft photon injections at random (Poisson) intervals and the stochastic nature of the Comptonization process in converting these soft photons to the observed high-energy radiation. The additional assumption of our model is that the Comptonization process takes place in an extended but non-uniform hot plasma corona surrounding the compact object. We compute the corresponding power spectral densities (PSD), autocorrelation functions, time skewness of the light curves, and time lags between the light curves of the sources at different photon energies, and compare our results to observation. Our model reproduces the observed light curves well, in that it provides good fits to their overall morphology (as manifest in the autocorrelation and time skewness) and also to their PSDs and time lags, by producing most of the variability power at time scales of a few seconds or longer, while at the same time allowing for shots of a few msec in duration, in accordance with observation. We suggest that refinement of this type of model, along with spectral and phase lag information, can be used to probe the structure of this class of high-energy sources.
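
    A toy version of such a light curve, Poisson-distributed injections smeared by an exponential "Comptonization" shot profile, can be simulated as follows (our own illustrative sketch; the rates and time-scales are arbitrary):

        import numpy as np

        rng = np.random.default_rng(1)
        fs, T = 1000.0, 100.0                  # sampling rate (Hz), duration (s)
        n = int(fs * T)
        shot_rate = 5.0                        # mean soft-photon injections per second
        onsets = rng.uniform(0.0, T, rng.poisson(shot_rate * T))
        train = np.bincount((onsets * fs).astype(int), minlength=n)[:n].astype(float)
        kernel = np.exp(-np.arange(0.0, 2.0, 1.0 / fs) / 0.2)   # decaying shot profile
        lc = np.convolve(train, kernel)[:n]    # light curve: shots smeared in time
        psd = np.abs(np.fft.rfft(lc - lc.mean())) ** 2
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)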

  12. Finite-frequency tomography using adjoint methods-Methodology and examples using membrane surface waves

    NASA Astrophysics Data System (ADS)

    Tape, Carl; Liu, Qinya; Tromp, Jeroen

    2007-03-01

    We employ adjoint methods in a series of synthetic seismic tomography experiments to recover surface wave phase-speed models of southern California. Our approach involves computing the Fréchet derivative for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a 2-D spectral-element method (SEM) and a phase-speed model for southern California. A `target' phase-speed model is used to generate the `data' at the receivers. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the remaining differences between data and synthetics are time-reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. An event kernel may be thought of as a weighted sum of phase-specific (e.g. P) banana-doughnut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, that is, the Fréchet derivative. A non-linear conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. We illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions and joint source-structure inversions. Finally, we draw connections between classical Hessian-based tomography and gradient-based adjoint tomography.

  13. Mitigating artifacts in back-projection source imaging with implications for frequency-dependent properties of the Tohoku-Oki earthquake

    NASA Astrophysics Data System (ADS)

    Meng, Lingsen; Ampuero, Jean-Paul; Luo, Yingdi; Wu, Wenbo; Ni, Sidao

    2012-12-01

    Comparing teleseismic array back-projection source images of the 2011 Tohoku-Oki earthquake with results from static and kinematic finite source inversions has revealed little overlap between the regions of high- and low-frequency slip. Motivated by this interesting observation, back-projection studies extended to intermediate frequencies, down to about 0.1 Hz, have suggested that a progressive transition of rupture properties as a function of frequency is observable. Here, by adapting the concept of array response function to non-stationary signals, we demonstrate that the "swimming artifact", a systematic drift resulting from signal non-stationarity, induces significant bias on beamforming back-projection at low frequencies. We introduce a "reference window strategy" into the multitaper-MUSIC back-projection technique and significantly mitigate the "swimming artifact" at high frequencies (1 s to 4 s). At lower frequencies, this modification yields notable, but significantly smaller, artifacts than time-domain stacking. We perform extensive synthetic tests that include a 3D regional velocity model for Japan. We analyze the recordings of the Tohoku-Oki earthquake at the USArray and at the European array at periods from 1 s to 16 s. The migration of the source location as a function of period, regardless of the back-projection methods, has characteristics that are consistent with the expected effect of the "swimming artifact". In particular, the apparent up-dip migration as a function of frequency obtained with the USArray can be explained by the "swimming artifact". This indicates that the most substantial frequency-dependence of the Tohoku-Oki earthquake source occurs at periods longer than 16 s. Thus, low-frequency back-projection needs to be further tested and validated in order to contribute to the characterization of frequency-dependent rupture properties.

  14. Analysis of the load selection on the error of source characteristics identification for an engine exhaust system

    NASA Astrophysics Data System (ADS)

    Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin

    2015-05-01

    The linear time-invariant assumption for determining acoustic source characteristics, namely the source strength and the source impedance in the frequency domain, has been proved reasonable in the design of an exhaust system. Different methods have been proposed for this identification, and the multi-load method is widely used for its convenience, varying the number of loads and their impedances. Theoretical error analysis has rarely been addressed, although previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for the load selection. The relationships between the error in the identification of source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure, obtained by an inverse calculation, served as an indicator of the accuracy of the results. It was found that, for a given load length, load resistances at frequencies where the load is an odd multiple of a quarter wavelength produce peaks and the maximum error in source impedance identification. Therefore, load impedances in frequency ranges around these odd quarter-wavelength resonances should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., of the same order of magnitude), the identification error of the source impedance can be effectively reduced.
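
    Per frequency, the multi-load identification itself reduces to a linear least-squares problem; a minimal sketch (variable names are assumptions; pressures and impedances are complex-valued):

        import numpy as np

        def identify_source(p_meas, z_loads):
            # Source model at one frequency: p_i = p_s * Z_i / (Z_s + Z_i), which
            # rearranges to the linear form Z_i * p_s - p_i * Z_s = p_i * Z_i.
            # With N >= 2 loads (four in the four-load method) solve in least squares.
            p_meas = np.asarray(p_meas, dtype=complex)
            z_loads = np.asarray(z_loads, dtype=complex)
            A = np.column_stack([z_loads, -p_meas])
            b = p_meas * z_loads
            (p_s, z_s), *_ = np.linalg.lstsq(A, b, rcond=None)
            return p_s, z_s  # source strength and source impedance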

  17. Second generation stationary digital breast tomosynthesis system with faster scan time and wider angular span.

    PubMed

    Calliste, Jabari; Wu, Gongting; Laganis, Philip E; Spronk, Derrek; Jafari, Houman; Olson, Kyle; Gao, Bo; Lee, Yueh Z; Zhou, Otto; Lu, Jianping

    2017-09-01

    The aim of this study was to characterize a new-generation stationary digital breast tomosynthesis system with higher tube flux and an increased angular span over the first generation system. The linear CNT x-ray source was designed, built, and evaluated to determine its performance parameters. The second generation system was then constructed using the CNT x-ray source and a Hologic gantry. Upon construction, test objects and phantoms were used to characterize system resolution as measured by the modulation transfer function (MTF) and the artifact spread function (ASF). The results indicated that the linear CNT x-ray source was capable of stable operation at a tube potential of 49 kVp, and measured focal spot sizes showed source-to-source consistency with a nominal focal spot size of 1.1 mm. After construction, the second generation (Gen 2) system exhibited entrance surface air kerma rates two times greater than those of the previous s-DBT system. System in-plane resolution as measured by the MTF is 7.7 cycles/mm, compared to 6.7 cycles/mm for the Gen 1 system. As expected, an increase in z-axis depth resolution was observed, with a decrease in the ASF from 4.30 mm to 2.35 mm from the Gen 1 system to the Gen 2 system as a result of the increased angular span. The results indicate that the Gen 2 stationary digital breast tomosynthesis system, with its larger angular span, increased entrance surface air kerma, and faster image acquisition time relative to the Gen 1 s-DBT system, produces higher-resolution images. With the detector operating at full resolution, the Gen 2 s-DBT system can achieve an in-plane resolution of 7.7 cycles per mm, which is better than current commercial DBT systems and may potentially result in better patient diagnosis. © 2017 American Association of Physicists in Medicine.

  18. Imaging different components of a tectonic tremor sequence in southwestern Japan using an automatic statistical detection and location method

    NASA Astrophysics Data System (ADS)

    Poiata, Natalia; Vilotte, Jean-Pierre; Bernard, Pascal; Satriano, Claudio; Obara, Kazushige

    2018-06-01

    In this study, we demonstrate the capability of an automatic network-based detection and location method to extract and analyse different components of tectonic tremor activity, by analysing a 9-day energetic tectonic tremor sequence occurring at the downdip extension of the subducting slab in southwestern Japan. The applied method exploits the coherency of multiscale, frequency-selective characteristics of non-stationary signals recorded across the seismic network. The use of different characteristic functions in the signal-processing step of the method makes it possible to extract and locate the sources of short-duration impulsive signal transients associated with low-frequency earthquakes and of longer-duration energy transients during the tectonic tremor sequence. Frequency-dependent characteristic functions, based on higher-order statistics of the seismic signals, are used for the detection and location of low-frequency earthquakes. This yields a more complete (~6.5 times more events) and time-resolved catalogue of low-frequency earthquakes than the routine catalogue provided by the Japan Meteorological Agency. As such, this catalogue resolves the space-time evolution of low-frequency earthquake activity in great detail, unravelling spatial and temporal clustering, tidal modulation, and different scales of space-time migration patterns. In the second part of the study, the detection and source location of longer-duration signal energy transients within the tectonic tremor sequence is performed using characteristic functions built from smoothed frequency-dependent energy envelopes. This leads to a catalogue of longer-duration energy sources during the tectonic tremor sequence, characterized by their durations and 3-D spatial likelihood maps of the energy-release source regions. The summary 3-D likelihood map for the 9-day tectonic tremor sequence, built from this catalogue, exhibits an along-strike spatial segmentation of the long-duration energy-release regions, matching the large-scale clustering features evidenced by the low-frequency earthquake activity analysis. Further examination of the two catalogues showed that the extracted short-duration low-frequency earthquake activity coincides in space, within about 10-15 km, with the longer-duration energy sources during the tectonic tremor sequence. This observation provides a potential constraint on the size of the longer-duration energy-radiating source region in relation to the clustering of low-frequency earthquake activity during the analysed tectonic tremor sequence. We show that advanced statistical network-based methods offer new capabilities for automatic high-resolution detection, location and monitoring of different scale-components of tectonic tremor activity, enriching existing slow-earthquake catalogues. Systematic application of such methods to large continuous data sets will allow imaging of the slow transient seismic energy-release activity at higher resolution and, therefore, provide new insights into the underlying multiscale mechanisms of slow-earthquake generation.
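    One building block of such higher-order-statistics characteristic functions can be sketched with a sliding-window kurtosis, which responds sharply to impulsive low-frequency-earthquake onsets (a minimal single-band illustration; the method described above combines several frequency bands and network coherency, which this sketch omits):

```python
import numpy as np
from scipy.stats import kurtosis

def kurtosis_cf(trace, fs, win_sec=2.0):
    """Sliding-window kurtosis characteristic function.

    Impulsive transients make the signal locally heavy-tailed, so the
    kurtosis rises sharply at their onsets.
    """
    n = int(win_sec * fs)
    cf = np.zeros(len(trace))
    for i in range(n, len(trace)):
        cf[i] = kurtosis(trace[i - n:i])
    return cf

# Usage sketch: band-pass the trace into several frequency bands first,
# compute one CF per band, then normalise and combine the CFs across the
# network before the detection/location step.
```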

  19. Real-time determination of the worst tsunami scenario based on Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Furuya, Takashi; Koshimura, Shunichi; Hino, Ryota; Ohta, Yusaku; Inoue, Takuya

    2016-04-01

    In recent years, real-time tsunami inundation forecasting has been developed alongside advances in dense seismic monitoring, GPS Earth observation, offshore tsunami observation networks, and high-performance computing infrastructure (Koshimura et al., 2014). Several uncertainties are involved in tsunami inundation modeling, and the tsunami generation model is believed to be one of the largest sources of uncertainty. An uncertain tsunami source model risks underestimating the tsunami height, the extent of the inundation zone, and the damage. Tsunami source inversion using observed seismic, geodetic and tsunami data is the most effective way to avoid underestimation, but acquiring those observations takes time, and this delay makes it difficult to complete real-time tsunami inundation forecasting early enough. Rather than waiting for precise tsunami observations, we aim, from a disaster-management point of view, to determine the worst tsunami source scenario for use in real-time tsunami inundation forecasting and mapping, using the seismic information of Earthquake Early Warning (EEW), which is available immediately after an event is triggered. After an earthquake occurs, JMA's EEW estimates its magnitude and hypocenter. With the constraints of earthquake magnitude, hypocenter and a scaling law, we generate multiple possible tsunami source scenarios and search for the worst one by superposition of pre-computed tsunami Green's functions, i.e. time series of tsunami height at offshore points corresponding to 2-dimensional Gaussian unit sources (e.g. Tsushima et al., 2014). The scenario analysis consists of the following two steps. (1) Bounding the range of the worst scenario by calculating 90 scenarios with various strikes and fault positions; from the maximum tsunami heights of these 90 scenarios, we determine a narrower strike range that produces high tsunami heights in the area of concern. (2) Calculating 900 scenarios with different strike, dip, length, width, depth and fault position, with the strike limited to the range obtained in step (1). From these 900 scenarios, we determine the worst tsunami scenarios from a disaster-management point of view, such as the one with the shortest travel time and the one with the highest water level. The method was applied to a hypothetical earthquake and verified to confirm that it can effectively find the worst tsunami source scenario in real time, to be used as an input to real-time tsunami inundation forecasting.
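    The scenario screening rests on linear superposition of the pre-computed unit-source Green's functions; a minimal sketch of that step follows (function and variable names are illustrative assumptions, not the operational system's code):

```python
import numpy as np

def max_coastal_height(weights, greens):
    """Superpose pre-computed unit-source Green's functions.

    greens  : (n_sources, n_points, n_times) tsunami-height time series at
              offshore points for each 2-D Gaussian unit source
    weights : (n_sources,) initial sea-surface amplitudes of the unit
              sources implied by one fault scenario
    Returns the peak height at each offshore point for that scenario.
    """
    eta = np.tensordot(weights, greens, axes=1)   # (n_points, n_times)
    return eta.max(axis=1)

# Screening sketch: evaluate many candidate scenarios and keep the worst
# (highest water level) at the point of concern, index ipt:
# worst = max(scenarios, key=lambda w: max_coastal_height(w, greens)[ipt])
```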

  1. Simultaneous measurements of work function and H‒ density including caesiation of a converter surface

    NASA Astrophysics Data System (ADS)

    Cristofaro, S.; Friedl, R.; Fantz, U.

    2017-08-01

    Negative hydrogen ion sources rely on the surface conversion of neutral atomic hydrogen and positive hydrogen ions to H-. The efficiency of this process depends on the actual work function of the converter surface. By introducing caesium into the source, the work function decreases, enhancing the negative ion yield. In order to study the impact of the work function on H- surface production under conditions similar to those in ion sources for fusion devices like ITER and DEMO, fundamental investigations are performed in a flexible laboratory experiment. The work function of the converter surface can be measured absolutely via the photoelectric effect, while a newly installed cavity ring-down spectroscopy (CRDS) system measures the H- density. The CRDS is first tested and characterized through investigations of H- volume production. Caesiation of a stainless steel sample is then performed in vacuum, and the effect of the plasma on the Cs layer is investigated, including long plasma-on times. A minimum work function of (1.9±0.1) eV is reached after some minutes of plasma treatment, a reduction of 0.8 eV compared to the vacuum measurements. The H- density above the surface is (2.1±0.5)×1015 m-3. With further plasma exposure of the caesiated surface, the work function increases up to 3.75 eV, due to the impinging plasma particles which gradually remove the Cs layer. As a result, the H- density decreases by a factor of at least 2.

  2. ObsPy: Establishing and maintaining an open-source community package

    NASA Astrophysics Data System (ADS)

    Krischer, L.; Megies, T.; Barsch, R.

    2017-12-01

    Python's ecosystem has evolved into one of the most powerful and productive research environments across disciplines. ObsPy (https://obspy.org) is a fully community-driven, open-source project dedicated to providing a bridge for seismology into that ecosystem. It does so by offering read and write support for essentially every commonly used data format in seismology; integrated access to the largest data centers, web services, and real-time data streams; a powerful signal processing toolbox tuned to the specific needs of seismologists; and utility functionality such as travel time calculations, geodetic functions, and data visualization. ObsPy has been in constant unfunded development for more than eight years and is developed and used by scientists around the world, with successful applications in all branches of seismology. By now around 70 people have directly contributed code to ObsPy, and we aim to make it a self-sustaining community project. This contribution focuses on several meta-aspects of open-source software in science, in particular how we experienced them. During the panel we would like to discuss obvious questions like long-term sustainability with very limited to no funding, insufficient computer science training in many sciences, and gaining hard scientific credit for software development, but also the following questions: How to best deal with the fact that a lot of scientific software is very specialized, and thus usually solves a complex problem but can only ever reach a limited pool of developers and users by virtue of being so specialized? The "many eyes on the code" approach to developing and improving open-source software therefore only applies in a limited fashion. An initial publication for a significant new scientific software package is fairly straightforward, but how does one on-board and motivate potential new contributors when they can no longer be lured by a potential co-authorship? When is spending significant time and effort on reusable scientific open-source development a reasonable choice for young researchers? The effort required to go from purpose-tailored code for a single application resulting in a scientific publication to something generalised and engineered well enough to be used by others is significant.
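    As an illustration of the functionality listed above, a minimal ObsPy session combining data access, signal processing and a travel-time utility could look like this (the FDSN client, Stream filtering and TauPy calls are part of ObsPy's public API; the chosen network, station and times are arbitrary placeholders):

```python
from obspy import UTCDateTime
from obspy.clients.fdsn import Client
from obspy.taup import TauPyModel

# Integrated data access: fetch one hour of broadband data from IRIS
client = Client("IRIS")
t0 = UTCDateTime("2011-03-11T05:46:24")          # placeholder origin time
st = client.get_waveforms(network="IU", station="ANMO",
                          location="00", channel="BHZ",
                          starttime=t0, endtime=t0 + 3600)

# Signal processing toolbox: detrend and band-pass
st.detrend("linear")
st.filter("bandpass", freqmin=0.02, freqmax=0.5)

# Utility functionality: theoretical P arrival time
model = TauPyModel(model="iasp91")
print(model.get_travel_times(source_depth_in_km=30,
                             distance_in_degree=60,
                             phase_list=["P"]))
st.plot()
```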

  3. SU-F-T-12: Monte Carlo Dosimetry of the 60Co Bebig High Dose Rate Source for Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campos, L T; Almeida, C E V de

    Purpose: The purpose of this work is to obtain the dosimetry parameters, in accordance with the AAPM TG-43U1 formalism, with Monte Carlo calculations for the BEBIG 60Co high-dose-rate brachytherapy source. The geometric design and material details of the source were provided by the manufacturer and were used to define the Monte Carlo geometry. Methods: The dosimetry studies included the calculation of the air kerma strength Sk, the collision kerma in water along the transverse axis in an unbounded phantom, the dose rate constant and the radial dose function. The Monte Carlo code system used was EGSnrc with a new cavity code, which is a part of EGS++ and allows calculation of the radial dose function around the source. The XCOM photon cross-section library was used. Variance reduction techniques were used to speed up the calculation and to considerably reduce the computer time. To obtain the dose rate distributions of the source in an unbounded liquid water phantom, the source was immersed at the center of a cube phantom of 100 cm3. Results: The obtained dose rate constant for the BEBIG 60Co source was 1.108±0.001 cGyh-1U-1, which is consistent with the values in the literature. The radial dose functions were compared with the values of the consensus data set in the literature, and they are consistent with the published data for this energy range. Conclusion: The dose rate constant is consistent with the results of Granero et al. and Selvam and Bhola within 1%. Dose rate data are compared to the GEANT4 and DOSRZnrc Monte Carlo codes. However, the radial dose function differs by up to 10% for points very near the source on the transverse axis, because the high-energy photons from 60Co cause an electronic disequilibrium at the interface between the source capsule and the liquid water for distances up to 1 cm.
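    For reference, the quantities computed above feed into the standard AAPM TG-43U1 dose-rate equation (standard notation, line-source geometry function):

$$\dot{D}(r,\theta) \;=\; S_K \,\Lambda\, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta), \qquad r_0 = 1\,\mathrm{cm},\; \theta_0 = \pi/2,$$

    where $S_K$ is the air-kerma strength, $\Lambda$ the dose-rate constant, $g_L(r)$ the radial dose function, and $F(r,\theta)$ the 2-D anisotropy function.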

  4. Imaging strategies using focusing functions with applications to a North Sea field

    NASA Astrophysics Data System (ADS)

    da Costa Filho, C. A.; Meles, G. A.; Curtis, A.; Ravasi, M.; Kritski, A.

    2018-04-01

    Seismic methods are used in a wide variety of contexts to investigate subsurface Earth structures, and to explore and monitor resources and waste-storage reservoirs in the upper ~100 km of the Earth's subsurface. Reverse-time migration (RTM) is one widely used seismic method which constructs high-frequency images of subsurface structures. Unfortunately, RTM has certain disadvantages shared with other conventional single-scattering-based methods, such as not being able to correctly migrate multiply scattered arrivals. In principle, the recently developed Marchenko methods can be used to migrate all orders of multiples correctly. In practice, however, Marchenko methods are costlier to compute than RTM: for a single imaging location, the cost of performing the Marchenko method is several times that of standard RTM, and performing RTM itself requires dedicated use of some of the largest computers in the world for individual data sets. A different imaging strategy is therefore required. We propose a new set of imaging methods which use so-called focusing functions to obtain images with few artifacts from multiply scattered waves, while greatly reducing the number of points across the image at which the Marchenko method need be applied. Focusing functions are outputs of the Marchenko scheme: they are solutions of wave equations that focus in time and space at particular surface or subsurface locations. However, they are mathematical rather than physical entities, being defined only in reference media that are equal to the true Earth above their focusing depths but homogeneous below. Here, we use these focusing functions as virtual source/receiver surface seismic surveys, the upgoing focusing function being the virtual received wavefield that is created when the downgoing focusing function acts as a spatially distributed source. These source/receiver wavefields are used in three imaging schemes: one allows specific individual reflectors to be selected and imaged. The other two schemes provide either targeted or complete images with distinct advantages over current RTM methods, such as fewer artifacts, and artifacts that occur in different locations. The latter property allows the recently published `combined imaging' method to remove almost all artifacts. We show several examples to demonstrate the methods: acoustic 1-D and 2-D synthetic examples, and a 2-D line from an ocean-bottom-cable field data set. We discuss an extension to elastic media, which is illustrated by a 1.5-D elastic synthetic example.

  5. Development of type transfer functions for regional-scale nonpoint source groundwater vulnerability assessments

    NASA Astrophysics Data System (ADS)

    Stewart, Iris T.; Loague, Keith

    2003-12-01

    Groundwater vulnerability assessments of nonpoint source agrochemical contamination at regional scales are either qualitative in nature or require prohibitively costly computational efforts. By contrast, the type transfer function (TTF) modeling approach for vadose zone pesticide leaching presented here estimates solute concentrations at a depth of interest, only uses available soil survey, climatic, and irrigation information, and requires minimal computational cost for application. TTFs are soil-texture-based travel-time probability density functions that describe a characteristic leaching behavior for soil profiles with similar soil hydraulic properties. Seven sets of TTFs, representing different levels of upscaling, were developed for six loam soil textural classes with the aid of simulated breakthrough curves from synthetic data sets. For each TTF set, TTFs were determined from a group or subgroup of breakthrough curves for each soil texture by identifying the effective parameters of the function that described the average leaching behavior of the group. The grouping of the breakthrough curves was based on the TTF index, a measure of the magnitude of the peak concentration, the peak arrival time, and the concentration spread. Comparison to process-based simulations shows that the TTFs perform well with respect to mass balance, concentration magnitude, and the timing of concentration peaks. Sets of TTFs based on individual soil textures perform better for all the evaluation criteria than sets that span all textures. As prediction accuracy and computational cost increase with the number of TTFs in a set, the selection of a TTF set is determined by a given application.

  6. A goal-based angular adaptivity method for thermal radiation modelling in non grey media

    NASA Astrophysics Data System (ADS)

    Soucasse, Laurent; Dargaville, Steven; Buchan, Andrew G.; Pain, Christopher C.

    2017-10-01

    This paper investigates for the first time a goal-based angular adaptivity method for thermal radiation transport, suitable for non grey media when the radiation field is coupled with an unsteady flow field through an energy balance. Anisotropic angular adaptivity is achieved by using a Haar wavelet finite element expansion that forms a hierarchical angular basis with compact support and does not require any angular interpolation in space. The novelty of this work lies in (1) the definition of a target functional to compute the goal-based error measure equal to the radiative source term of the energy balance, which is the quantity of interest in the context of coupled flow-radiation calculations; (2) the use of different optimal angular resolutions for each absorption coefficient class, built from a global model of the radiative properties of the medium. The accuracy and efficiency of the goal-based angular adaptivity method are assessed in a coupled flow-radiation problem relevant for air pollution modelling in street canyons. Compared to a uniform Haar wavelet expansion, the adapted resolution uses 5 times fewer angular basis functions and is 6.5 times quicker, given the same accuracy in the radiative source term.

  7. Near-field observations of microearthquake source physics using dense array

    NASA Astrophysics Data System (ADS)

    Chen, X.; Nakata, N.; Abercrombie, R. E.

    2017-12-01

    The recorded waveform includes contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. This problem is especially significant for small earthquakes, whose corner frequencies fall within ranges similar to those of near-site attenuation effects. Fortunately, this problem can be remedied by dense near-field recordings at high frequency and large databases with a wide magnitude range. The 2016 IRIS wavefield experiment provides high-quality recordings of earthquake sequences in north-central Oklahoma, with about 400 sensors in a 15 km area. Preliminary processing of the IRIS wavefield array data resulted in about 20,000 microearthquakes ranging from M -1 to M 2, while only 2 earthquakes are listed in the catalog during the same time period. A preliminary examination of the catalog reveals three similar-magnitude earthquakes (M 2) that occurred at similar locations within 9 seconds of each other. Utilizing this catalog, we will combine individual empirical Green's function (EGF) analysis and stacking over multiple EGFs to examine whether there are any systematic variations of source time functions and spectral ratios across the array, which will provide constraints on rupture complexity, directivity and earthquake interactions. For example, this would help us to understand whether these three earthquakes ruptured overlapping fault patches in a cascading failure, or repeatedly ruptured the same slip patch due to external stress loading. Deciphering such interactions at smaller scales with near-field observations is important for a controlled earthquake experiment.
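    The core of the planned EGF analysis is a spectral ratio that cancels the shared path and site terms; a minimal sketch (single station pair, no stacking) might look like:

```python
import numpy as np

def spectral_ratio(target, egf, fs):
    """Spectral ratio of a target event to an empirical Green's function.

    Dividing the target spectrum by that of a co-located smaller event
    cancels (to first order) the shared path and site effects, leaving
    the ratio of the two source spectra.
    """
    n = max(len(target), len(egf))
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    num = np.abs(np.fft.rfft(target, n))
    den = np.abs(np.fft.rfft(egf, n)) + 1e-12   # guard against notches
    return f, num / den

# Stacking such ratios over many EGFs and stations suppresses noise before
# fitting a source model (e.g. an omega-squared spectrum) for corner
# frequency and, from there, rupture dimensions.
```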

  8. Estimation of source locations of total gaseous mercury measured in New York State using trajectory-based models

    NASA Astrophysics Data System (ADS)

    Han, Young-Ji; Holsen, Thomas M.; Hopke, Philip K.

    Ambient total gaseous mercury (TGM) concentrations were measured at three locations in New York State (Potsdam, Stockton, and Sterling) from May 2000 to March 2005. Using these data, three hybrid receptor models incorporating backward trajectories were used to identify source areas for TGM. The models used were the potential source contribution function (PSCF), residence time weighted concentration (RTWC), and simplified quantitative transport bias analysis (SQTBA). Each model was applied using multi-site measurements to resolve the locations of important mercury sources for New York State. PSCF results showed that southeastern New York, Ohio, Indiana, Tennessee, Louisiana, and Virginia were important TGM source areas for these sites. RTWC identified Canadian sources, including the metal production facilities in Ontario and Quebec, but US regional sources including the Ohio River Valley were also resolved. Sources in southeastern New York, Massachusetts, western Pennsylvania, Indiana, and northern Illinois were identified as significant by SQTBA. The three modeling results were combined to locate the most important probable source locations: Ohio, Indiana, Illinois, and Wisconsin. The Atlantic Ocean was suggested to be a possible source as well.
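    Of the three models, PSCF has the simplest form, PSCF_ij = m_ij / n_ij on a latitude-longitude grid; a minimal sketch follows (without the weighting usually applied to poorly sampled cells, which the operational analyses typically add):

```python
import numpy as np

def pscf(traj_lats, traj_lons, conc, threshold, bins=36):
    """Potential source contribution function on a lat/lon grid.

    traj_lats, traj_lons : lists of arrays, one back-trajectory each
    conc                 : receptor concentration tied to each trajectory
    PSCF_ij = m_ij / n_ij, where n_ij counts all trajectory endpoints in
    cell (i, j) and m_ij counts endpoints of 'high' trajectories whose
    receptor concentration exceeded `threshold`.
    """
    lats, lons = np.concatenate(traj_lats), np.concatenate(traj_lons)
    n, xe, ye = np.histogram2d(lats, lons, bins=bins)
    sel = [(la, lo) for la, lo, c in zip(traj_lats, traj_lons, conc)
           if c > threshold]
    hlat = np.concatenate([la for la, _ in sel]) if sel else np.empty(0)
    hlon = np.concatenate([lo for _, lo in sel]) if sel else np.empty(0)
    m, _, _ = np.histogram2d(hlat, hlon, bins=(xe, ye))
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(n > 0, m / n, np.nan)
```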

  9. Use of acoustic wave travel-time measurements to probe the near-surface layers of the Sun

    NASA Technical Reports Server (NTRS)

    Jefferies, S. M.; Osaki, Y.; Shibahashi, H.; Duvall, T. L., Jr.; Harvey, J. W.; Pomerantz, M. A.

    1994-01-01

    The variation of solar p-mode travel times with cyclic frequency ν is shown to provide information on both the radial variation of the acoustic potential and the depth of the effective source of the oscillations. Observed travel-time data for waves with frequencies lower than the acoustic cutoff frequency of the solar atmosphere (approximately 5.5 mHz) are inverted to yield the local acoustic cutoff frequency ν_c as a function of depth in the outer convection zone and lower atmosphere of the Sun. The data for waves with ν greater than 5.5 mHz are used to show that the source of the p-mode oscillations lies approximately 100 km beneath the base of the photosphere. This depth is deeper than that determined using a standard mixing-length calculation.

  10. DISTRIBUTION OF PESTICIDES AND POLYCYCLIC AROMATIC HYDROCARBONS IN HOUSE DUST AS A FUNCTION OF PARTICLE SIZE

    EPA Science Inventory

    House dust is a repository for environmental pollutants that may accumulate indoors from both internal and external sources over long periods of time. Dust and tracked-in soil accumulate most efficiently in carpets, and the pollutants associated with it may present an exposure...

  11. Time functions of deep earthquakes from broadband and short-period stacks

    USGS Publications Warehouse

    Houston, H.; Benz, H.M.; Vidale, J.E.

    1998-01-01

    To constrain dynamic source properties of deep earthquakes, we have systematically constructed broadband time functions of deep earthquakes by stacking and scaling teleseismic P waves from U.S. National Seismic Network, TERRAscope, and Berkeley Digital Seismic Network broadband stations. We examined 42 earthquakes with depths from 100 to 660 km that occurred between July 1, 1992 and July 31, 1995. To directly compare time functions, or to group them by size, depth, or region, it is essential to scale them to remove the effect of moment, which varies by more than 3 orders of magnitude for these events. For each event we also computed short-period stacks of P waves recorded by west coast regional arrays. The comparison of broadband with short-period stacks yields a considerable advantage, enabling more reliable measurement of event duration. A more accurate estimate of the duration better constrains the scaling procedure to remove the effect of moment, producing scaled time functions with both correct timing and amplitude. We find only subtle differences in the broadband time-function shape with moment, indicating successful scaling and minimal effects of attenuation at the periods considered here. The average shape of the envelopes of the short-period stacks is very similar to the average broadband time function. The main variations seen with depth are (1) a mild decrease in duration with increasing depth, (2) greater asymmetry in the time functions of intermediate events compared to deep ones, and (3) unexpected complexity and late moment release for events between 350 and 550 km, with seven of the eight events in that depth interval displaying markedly more complicated time functions with more moment release late in the rupture than most events above or below. The first two results are broadly consistent with our previous studies, while the third is reported here for the first time. The greater complexity between 350 and 550 km suggests greater heterogeneity in the failure process in that depth range. Copyright 1998 by the American Geophysical Union.

  12. Magma chamber cooling by episodic volatile expulsion as constrained by mineral vein distributions in the Butte, Montana Cu-Mo porphyry deposit

    NASA Astrophysics Data System (ADS)

    Daly, K.; Karlstrom, L.; Reed, M. H.

    2016-12-01

    The role of hydrothermal systems in the thermal evolution of magma chambers is poorly constrained yet likely significant. We analyze trends in mineral composition, vein thickness and overall volumetric fluid flux of the Butte, Montana porphyry Cu-Mo deposit to constrain the role of episodic volatile discharge in the crystallization of the source magma chamber (~300 km3 of silicic magma). An aqueous fluid sourced from injection of porphyritic dikes formed the Butte porphyry Cu network of veins. At least three separate pulses of fluid through the system are defined by alteration envelopes of [1] gray sericite (GS); [2] early-dark micaceous (EDM), pale-green sericite (PGS), and dark-green sericite (DGS); and [3] quartz-molybdenite (Qmb) and barren quartz. Previous research using geothermometers and geobarometers has found that vein mineral composition, inferred temperatures and inferred pressures vary systematically with depth. Later fluid pulses are characterized by lower temperatures, consistent with progressive cooling of the source. We have digitized previously unused structural data from Butte area drill cores, and applied thermomechanical modeling of fluid release from the source magma chamber through time. Vein number density and vein thickness increase with depth as a clear function of mineralogy and thus primary temperature and pressure. We identify structural trends in the three fluid pulses which seem to imply time evolution of average vein characteristics. Pulses of Qmb-barren quartz and EDM-PGS-DGS (1st and 2nd in time) exhibit increasing vein number density (157 & 95 veins/50 m, respectively) and thickness (300 mm & 120 mm, respectively) as a function of depth. EDM-PGS-DGS has a shallower peak in vein density (800 m) than Qmb-barren quartz (>1600 m). These data provide the basis for idealized mechanical models of hydrofractures, to predict driving pressures and to compare with existing source temperatures and total fluid volumes in order to estimate the total enthalpy of each fluid pulse. We then compare with models for conductive cooling and crystallization of the source magma chamber to estimate the importance of hydrothermal fluid expulsion in the total heat budget. Such models should also provide constraints on the timing and ultimately the origin of pulsed volatile release at Butte.

  13. Comparison of the Cut-and-Paste and Full Moment Tensor Methods for Estimating Earthquake Source Parameters

    NASA Astrophysics Data System (ADS)

    Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.

    2008-12-01

    Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies, as well as for the calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for an average plane-layered model. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise, due either to differences in the true average plane-layered structure or to laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical and radial component Pnl; vertical and radial component Rayleigh waves; and transverse component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to the surface wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. First, we will consider the northeastern China/Korean Peninsula region, where the average plane-layered structure is well known and relatively laterally homogeneous. Second, we will consider the Middle East, where crustal and upper mantle structure is laterally heterogeneous due to recent and ongoing tectonism. If time allows, we will investigate the efficacy of each method for retrieving source parameters from synthetic data generated using a three-dimensional model of the seismic structure of the Middle East, where phase delays are known to arise from path-dependent structure.
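    At its core, the TDMT step is a linear least-squares problem d = G m for the six independent moment tensor elements; a minimal sketch (omitting the per-station time shifts and the depth grid search mentioned above) is:

```python
import numpy as np

def invert_moment_tensor(d, G):
    """Linear least-squares moment tensor inversion (TDMT-style sketch).

    d : concatenated observed waveforms, shape (n_samples,)
    G : Green's function matrix, shape (n_samples, 6), one column per
        independent moment tensor element (Mxx, Myy, Mzz, Mxy, Mxz, Myz)
    """
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    synth = G @ m
    vr = 1.0 - np.sum((d - synth) ** 2) / np.sum(d ** 2)  # variance reduction
    return m, vr

# In practice the inversion is repeated over trial depths, and the depth
# with the highest variance reduction is reported.
```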

  14. An automated digital data collection and analysis system for the Charpy Impact Tester

    NASA Technical Reports Server (NTRS)

    Kohne, Glenn S.; Spiegel, F. Xavier

    1994-01-01

    The standard Charpy Impact Tester has been modified by the addition of a system of hardware and software to improve the accuracy and consistency of measurements made during specimen fracturing experiments. An optical disc, light source, and detector generate signals that indicate the pendulum position as a function of time. These signals are used by a computer to calculate the velocity and kinetic energy of the pendulum as a function of its position.

  15. Extracting Date/Time Expressions in Super-Function Based Japanese-English Machine Translation

    NASA Astrophysics Data System (ADS)

    Sasayama, Manabu; Kuroiwa, Shingo; Ren, Fuji

    Super-Function Based Machine Translation (SFBMT), a type of Example-Based Machine Translation, can expand the coverage of examples by changing nouns into variables; however, there were problems extracting entire date/time expressions containing parts of speech other than nouns, because only nouns/numbers were changed into variables. We describe a method for extracting date/time expressions for SFBMT. SFBMT uses noun determination rules to extract nouns and a bilingual dictionary to obtain the correspondence of the extracted nouns between the source and the target languages. In this method, we add a rule to extract date/time expressions and then extract date/time expressions from a Japanese-English bilingual corpus. The evaluation results show that the precision of this method is 96.7% for Japanese sentences, with a recall of 98.2%, and 94.7% for English sentences, with a recall of 92.7%.
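    A hypothetical, much-simplified version of such an extraction rule can be written as a regular expression (the pattern below is illustrative only, not the paper's rule set):

```python
import re

# Illustrative rule in the spirit of the date/time extraction described
# above: match Japanese date expressions such as "2005年3月15日" or
# "3月15日午後3時" so the whole expression becomes one variable.
DATE_TIME = re.compile(
    r"(?:(\d{1,4})年)?(\d{1,2})月(\d{1,2})日"        # optional year, month, day
    r"(?:(午前|午後)?(\d{1,2})時(?:(\d{1,2})分)?)?"   # optional time of day
)

for m in DATE_TIME.finditer("会議は2005年3月15日午後3時に始まる"):
    print(m.group(0))    # -> 2005年3月15日午後3時
```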

  16. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
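    The interpretation model referred to above is the advection-dispersion equation (ADE); breakthrough fitting of the kind described builds on closed-form solutions such as the 1-D instantaneous point release (a minimal sketch with illustrative parameter values):

```python
import numpy as np

def ade_point_source(x, t, M, v, D):
    """1-D advection-dispersion solution for an instantaneous point release.

    C(x, t) = M / sqrt(4 pi D t) * exp(-(x - v t)^2 / (4 D t)),
    with M the released mass per unit cross-sectional area, v the seepage
    velocity and D the longitudinal dispersion coefficient.
    """
    return M / np.sqrt(4 * np.pi * D * t) * np.exp(-(x - v * t) ** 2 / (4 * D * t))

# Breakthrough curve at a well 50 m downgradient of a unit release
t = np.linspace(1.0, 400.0, 400)              # days
c = ade_point_source(50.0, t, M=1.0, v=0.5, D=1.0)
print(t[np.argmax(c)])                        # arrival time of the peak
```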

  17. Extending compile-time reverse mode and exploiting partial separability in ADIFOR. ADIFOR Working Note No. 7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C.H.; El-Khadiri, M.

    1992-10-01

    The numerical methods employed in the solution of many scientific computing problems require the computation of the gradient of a function f: R^n -> R. ADIFOR is a source translator that, given a collection of subroutines to compute f, generates Fortran 77 code for computing the derivative of this function. Using the so-called torsion problem from the MINPACK-2 test collection as an example, this paper explores two issues in automatic differentiation: the efficient computation of derivatives for partially separable functions and the use of the compile-time reverse mode for the generation of derivatives. We show that orders of magnitude of improvement are possible when exploiting partial separability and maximizing use of the reverse mode.
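    For readers unfamiliar with the reverse mode, the following run-time toy shows the idea in miniature (ADIFOR itself performs this transformation on Fortran 77 source at compile time, which this Python sketch does not replicate):

```python
class Var:
    """Tiny reverse-mode AD node: forward evaluation records the local
    partial derivatives; one backward sweep then propagates adjoints,
    yielding the full gradient at roughly the cost of one extra pass."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0
    def __add__(self, o):
        o = o if isinstance(o, Var) else Var(o)
        return Var(self.value + o.value, [(self, 1.0), (o, 1.0)])
    def __mul__(self, o):
        o = o if isinstance(o, Var) else Var(o)
        return Var(self.value * o.value, [(self, o.value), (o, self.value)])
    def backward(self, adj=1.0):
        self.grad += adj
        for parent, local in self.parents:
            parent.backward(adj * local)

x, y = Var(2.0), Var(3.0)
f = x * y + x            # f(x, y) = x*y + x
f.backward()
print(x.grad, y.grad)    # df/dx = y + 1 = 4.0, df/dy = x = 2.0
```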

  18. Fuzzy Multi-Objective Transportation Planning with Modified S-Curve Membership Function

    NASA Astrophysics Data System (ADS)

    Peidro, D.; Vasant, P.

    2009-08-01

    In this paper, the S-curve membership function methodology is applied to a transportation planning decision (TPD) problem. An interactive method for solving multi-objective TPD problems with fuzzy goals, available supply and forecast demand is developed. The proposed method attempts to simultaneously minimize the total production and transportation costs and the total delivery time, with reference to budget constraints, available supply and machine capacities at each source, as well as forecast demand and warehouse space constraints at each destination. In an industrial case study, we compare the performance of S-curve membership functions, representing uncertain goals and constraints in TPD problems, with that of linear membership functions.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobulnicky, Henry A.; Alexander, Michael J.; Babler, Brian L.

    We characterize the completeness of point source lists from Spitzer Space Telescope surveys in the four Infrared Array Camera (IRAC) bandpasses, emphasizing the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE) programs (GLIMPSE I, II, 3D, 360; Deep GLIMPSE) and their resulting point source Catalogs and Archives. The analysis separately addresses effects of incompleteness resulting from high diffuse background emission and incompleteness resulting from point source confusion (i.e., crowding). An artificial star addition and extraction analysis demonstrates that completeness is strongly dependent on local background brightness and structure, with high-surface-brightness regions suffering up to five magnitudes of reduced sensitivity to point sources. This effect is most pronounced in the IRAC 5.8 and 8.0 μm bands, where UV-excited polycyclic aromatic hydrocarbon emission produces bright, complex structures (photodissociation regions). With regard to diffuse background effects, we provide the completeness as a function of stellar magnitude and diffuse background level in graphical and tabular formats. These data are suitable for estimating completeness in the low-source-density limit in any of the four IRAC bands in GLIMPSE Catalogs and Archives and some other Spitzer IRAC programs that employ similar observational strategies and are processed by the GLIMPSE pipeline. By performing the same analysis on smoothed images, we show that the point source incompleteness is primarily a consequence of structure in the diffuse background emission rather than photon noise. With regard to source confusion in the high-source-density regions of the Galactic Plane, we provide figures illustrating the 90% completeness levels as a function of point source density at each band. We caution that the completeness of the GLIMPSE 360/Deep GLIMPSE Catalogs is suppressed relative to the corresponding Archives as a consequence of rejecting stars that lie in the point-spread-function wings of saturated sources. This effect is minor in regions of low saturated-star density, such as toward the Outer Galaxy, but significant along sightlines having a high density of saturated sources, especially for Deep GLIMPSE and other programs observing closer to the Galactic center using 12 s or longer exposure times.

  20. The Bivariate Luminosity--HI Mass Distribution Function of Galaxies based on the NIBLES Survey

    NASA Astrophysics Data System (ADS)

    Butcher, Zhon; Schneider, Stephen E.; van Driel, Wim; Lehnert, Matt

    2016-01-01

    We use 21cm HI line observations for 2610 galaxies from the Nançay Interstellar Baryons Legacy Extragalactic Survey (NIBLES) to derive a bivariate luminosity--HI mass distribution function. Our HI survey was selected to randomly probe the local (900 < cz < 12,000 km/s) galaxy population in each 0.5 mag wide bin of the absolute z-band magnitude range -13.5 < Mz < -24, without regard to morphology or color. This targeted survey allowed more on-source integration time for weak and non-detected sources, enabling us to probe lower HI mass fractions and apply lower upper limits for non-detections than would be possible with the larger blind HI surveys. Additionally, we obtained follow-up observations at Arecibo, with a factor of four higher sensitivity, of 90 galaxies from our non-detected and marginally detected categories to quantify the underlying HI distribution of sources not detected at Nançay. Using the optical luminosity function and our higher-sensitivity follow-up observations as priors, we use a 2D stepwise maximum likelihood technique to derive the two-dimensional volume density distribution of luminosity and HI mass in each SDSS band.

  1. General error analysis in the relationship between free thyroxine and thyrotropin and its clinical relevance.

    PubMed

    Goede, Simon L; Leow, Melvin Khee-Shing

    2013-01-01

    This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainty are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by the timing of thyroid medications, (3) error sensitivity in the ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4], which in turn amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range, and (5) memory effects (rate-independent hysteresis). When the main uncertainties in TFTs are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.

  2. Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package

    NASA Astrophysics Data System (ADS)

    Donges, Jonathan F.; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik V.; Marwan, Norbert; Dijkstra, Henk A.; Kurths, Jürgen

    2015-11-01

    We introduce pyunicorn (Pythonic unified complex network and recurrence analysis toolbox), an open-source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in Python. It allows for the construction of functional networks, such as climate networks in climatology or functional brain networks in neuroscience, which represent the structure of statistical interrelationships in large data sets of time series, and, subsequently, for the investigation of this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective, by means of recurrence quantification analysis, recurrence networks, visibility graphs, and the construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology.
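    The recurrence-network idea at the heart of the package can be illustrated in a few lines of plain NumPy (this sketch deliberately does not use pyunicorn's own classes, which add network measures, surrogates and performance optimisations):

```python
import numpy as np

def recurrence_matrix(x, dim=3, tau=5, eps=0.2):
    """Binary recurrence matrix of a scalar time series.

    Time-delay embedding followed by epsilon-thresholding of pairwise
    distances; interpreting the result as an adjacency matrix yields a
    recurrence network whose topology reflects the underlying dynamics.
    """
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    R = (dist < eps).astype(int)
    np.fill_diagonal(R, 0)          # no self-loops
    return R

# Usage: R = recurrence_matrix(np.sin(np.linspace(0, 20, 500)))
```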

  3. Effects of Detrital Subsidies on Soft-Sediment Ecosystem Function Are Transient and Source-Dependent.

    PubMed

    Gladstone-Gallagher, Rebecca V; Lohrer, Andrew M; Lundquist, Carolyn J; Pilditch, Conrad A

    2016-01-01

    Detrital subsidies from marine macrophytes are prevalent in temperate estuaries, and their role in structuring benthic macrofaunal communities is well documented, but the resulting impact on ecosystem function is not understood. We conducted a field experiment to test the effects of detrital decay on soft-sediment primary production, community metabolism and nutrient regeneration (measures of ecosystem function). Twenty-four (2 m2) plots were established on an intertidal sandflat, to which we added 0 or 220 g DW m-2 of detritus from either mangroves (Avicennia marina), seagrass (Zostera muelleri), or kelp (Ecklonia radiata) (n = 6 plots per treatment). Then, after 4, 17 and 46 d we measured ecosystem function, macrofaunal community structure and sediment properties. We hypothesized that (1) detrital decay would stimulate benthic primary production either by supplying nutrients to the benthic macrophytes, or by altering the macrofaunal community; and (2) ecosystem responses would depend on the stage and rate of macrophyte decay (a function of source). Avicennia detritus decayed the slowest, with a half-life (t50) of 46 d, while Zostera and Ecklonia had t50 values of 28 and 2.6 d, respectively. However, ecosystem responses were not related to these differences. Instead, we found transient effects (up to 17 d) of Avicennia and Ecklonia detritus on benthic primary production, where initially (4 d) these detrital sources suppressed primary production, but after 17 d primary production was stimulated in Avicennia plots relative to controls. Other ecosystem function response variables and the macrofaunal community composition were not altered by the addition of detritus, but did vary with time. By sampling ecosystem function temporally, we were able to capture the in situ transient effects of detrital subsidies on important benthic ecosystem functions.

  4. Waveform inversion of acoustic waves for explosion yield estimation

    DOE PAGES

    Kim, K.; Rodgers, A. J.

    2016-07-08

    We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.
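    A single-trace frequency-domain version of this inversion is easy to sketch: with data = green * stf (convolution), the source time function follows from stabilised spectral division (water-level regularisation shown; the study's least-squares formulation generalises this to multiple stations and realistic Green's functions):

```python
import numpy as np

def estimate_stf(data, green, water=0.01):
    """Water-level deconvolution for the acoustic source time function.

    Solves data = green * stf by spectral division; the water level keeps
    spectral notches in the Green's function from blowing up the estimate.
    """
    n = len(data) + len(green)
    D, G = np.fft.rfft(data, n), np.fft.rfft(green, n)
    denom = np.maximum(np.abs(G) ** 2, water * np.max(np.abs(G) ** 2))
    S = D * np.conj(G) / denom
    return np.fft.irfft(S, n)[:len(data)]
```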

  5. A study of an advanced confined linear energy source

    NASA Technical Reports Server (NTRS)

    Anderson, M. C.; Heidemann, W. B.

    1971-01-01

    A literature survey and a test program to develop and evaluate an advanced confined linear energy source were conducted. The advanced confined linear energy source is an explosive or pyrotechnic X-Cord (mild detonating fuse) supported inside a confining tube capable of being hermetically sealed and retaining all products of combustion. The energy released by initiation of the X-Cord is transmitted through the support material to the walls of the confining tube, causing an appreciable change in cross-sectional configuration and expansion of the tube. When located in an assembly that can accept and use the energy of the tube expansion, useful work is accomplished through fracture of a structure, movement of a load, repositioning of a pin, release of a restraint, or similar action. The tube assembly imparts that energy without release of debris or gases from the device itself. This facet of the function is important to the protection of personnel or equipment located in close proximity to the system during the time of function.

  6. The analysis and interpretation of very-long-period seismic signals on volcanoes

    NASA Astrophysics Data System (ADS)

    Sindija, Dinko; Neuberg, Jurgen; Smith, Patrick

    2017-04-01

    The study of very-long-period (VLP) seismic signals became possible with the widespread use of broadband instruments. VLP seismic signals are caused by pressure transients in the volcanic edifice and have periods ranging from several seconds to several minutes. For the VLP events recorded in March 2012 and 2014 at Soufriere Hills Volcano, Montserrat, we model the ground displacement using several source time functions: a step function based on the Richards growth equation, a Küpper wavelet, and a damped sine wave; the instrument response is then applied. In this way we obtain a synthetic velocity seismogram that is directly comparable to the data. After the full vector field of ground displacement is determined, we model the source mechanism to determine the relationship between the source mechanism and the observed VLP waveforms. The emphasis of the research is on how different VLP waveforms are related to the volcanic environment and the instrumentation used, and on the processing steps needed in this low-frequency band to get the most out of broadband instruments.
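    Two of the three source time functions named above are straightforward to generate; a minimal sketch with illustrative parameter values follows (the Küpper wavelet is omitted here, and differentiating the displacement and convolving with the instrument response, e.g. via ObsPy, then yields the synthetic velocity seismogram described):

```python
import numpy as np

def richards_step(t, t0=5.0, k=2.0, nu=1.0):
    """Smooth step built from the Richards growth equation (generalised
    logistic); models a sustained pressure change at the VLP source."""
    return (1.0 + np.exp(-k * (t - t0))) ** (-1.0 / nu)

def damped_sine(t, f=0.2, gamma=0.1, t0=5.0):
    """Damped sinusoid: an oscillatory, decaying pressure transient."""
    return np.where(t >= t0,
                    np.exp(-gamma * (t - t0)) * np.sin(2 * np.pi * f * (t - t0)),
                    0.0)

# Illustrative composite displacement source time function
t = np.arange(0.0, 60.0, 0.01)
disp = richards_step(t) + 0.3 * damped_sine(t)
```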

  7. Design and Deployment of a General Purpose, Open Source LoRa to Wi-Fi Hub and Data Logger

    NASA Astrophysics Data System (ADS)

    DeBell, T. C.; Udell, C.; Kwon, M.; Selker, J. S.; Lopez Alcala, J. M.

    2017-12-01

    Methods and technologies facilitating internet connectivity and near-real-time status updates for in situ environmental sensor data are of increasing interest in Earth science. However, open-source, do-it-yourself technologies that enable plug-and-play functionality for web-connected sensors and devices remain largely inaccessible to typical researchers in our community. The Openly Published Environmental Sensing Lab at Oregon State University (OPEnS Lab) constructed an open-source 900 MHz Long Range Radio (LoRa) receiver hub with an SD card data logger, Ethernet and Wi-Fi shield, and 3D-printed enclosure that dynamically uploads transmissions from multiple wirelessly connected environmental sensing devices. Data transmissions may be received from devices up to 20 km away. The hub time-stamps, saves to SD card, and uploads all transmissions to a Google Drive spreadsheet to be accessed in near-real time by researchers and geovisualization applications (such as ArcGIS) for access, visualization, and analysis. This research expands the possibilities of scientific observation of our Earth, transforming the technology, methods, and culture by combining open-source development and cutting-edge technology. This poster details our methods and evaluates the application of 3D printing, the Arduino Integrated Development Environment (IDE), Adafruit's open-hardware Feather development boards, and the WIZNET5500 Ethernet shield in designing this open-source, general-purpose LoRa to Wi-Fi data logger.

  9. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
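
    As an illustration of the hyperbolic (TDoA) formulation described here, the sketch below solves an HLS-style problem with SciPy's iterative least-squares solver standing in for a hand-written Newton-Raphson loop; the sensor layout, propagation speed, and noise-free data are assumptions made for the example.

      import numpy as np
      from scipy.optimize import least_squares

      C = 343.0  # assumed propagation speed (m/s); acoustic example

      def hls_residuals(x, sensors, tdoa):
          # hyperbolic residuals: modelled minus measured TDoA w.r.t. sensor 0
          r = np.linalg.norm(sensors - x, axis=1)
          return (r[1:] - r[0]) / C - tdoa

      sensors = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.]])
      true_src = np.array([30., 60.])
      r = np.linalg.norm(sensors - true_src, axis=1)
      tdoa = (r[1:] - r[0]) / C            # synthetic, noise-free TDoAs

      est = least_squares(hls_residuals, x0=np.array([50., 50.]),
                          args=(sensors, tdoa)).x
      print(est)                            # approximately [30., 60.]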

  10. Survey on the Performance of Source Localization Algorithms

    PubMed Central

    2017-01-01

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. PMID:29156565

  11. Dynamics of the Wulong landslide revealed by broadband seismic records

    NASA Astrophysics Data System (ADS)

    Li, Zhengyuan; Huang, Xinghui; Xu, Qiang; Yu, Dan; Fan, Junyi; Qiao, Xuejun

    2017-02-01

    The catastrophic Wulong landslide occurred at 14:51 (Beijing time, UTC+8) on 5 June 2009, in Wulong Prefecture, Southwest China. This rockslide occurred in a complex topographic environment. Seismic signals generated by this event were recorded by the seismic network deployed in the surrounding area, and long-period signals were extracted from 8 broadband seismic stations within 250 km to obtain source time functions by inversion. The location of this event was simultaneously acquired using a stepwise refined grid search approach, with an error of 2.2 km. The estimated source time functions reveal that, according to the movement parameters, this landslide could be divided into three stages with different movement directions, velocities, and increasing inertial forces. The sliding mass moved northward, northeastward and northward in the three stages, with average velocities of 6.5, 20.3, and 13.8 m/s, respectively. The maximum movement velocity of the mass reached 35 m/s before the end of the second stage. The basal friction coefficient was relatively small and gradually increasing in the first stage; large, with the greatest variability, in the second stage; and oscillating and gradually decreasing to a stable value in the third stage. Analysis shows that the movement characteristics of these three stages are consistent with the topography of the sliding zone, corresponding to the northward initiation, eastward sliding after being stopped by the west wall, and northward debris flow after collision with the east slope of the Tiejianggou valley. The maximum movement velocity of the sliding mass results from the largest height difference of the west slope of the Tiejianggou valley. The basal friction coefficients of the three stages reflect the thin weak layer in the source zone, the dramatically varying topography of the west slope of the Tiejianggou valley, and the characteristics of the debris flow along the Tiejianggou valley. Based on the above results, it is recognized that the inverted source time functions are consistent with the topography of the sliding zone. Special geological and topographic conditions can have a focusing effect on landslides and are key factors in inducing the major disasters that may follow from them. This landslide was of an unusual nature, and it will be worthwhile to pursue research into its dynamic characteristics more deeply.

  12. Effectiveness of Interaural Delays Alone as Cues During Dynamic Sound Localization

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The contribution of interaural time differences (ITDs) to the localization of virtual sound sources with and without head motion was examined. Listeners estimated the apparent azimuth, elevation and distance of virtual sources presented over headphones. Stimuli (3 sec., white noise) were synthesized from minimum-phase representations of nonindividualized head-related transfer functions (HRTFs); binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; head position was tracked and stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. Two synthesis conditions were tested: (1) both interaural level differences (ILDs) and ITDs correctly correlated with source location and head motion, (2) ITDs correct, no ILDs (flat magnitude spectrum). Head movements reduced azimuth confusions primarily when interaural cues were correctly correlated, although a smaller effect was also seen for ITDs alone. Externalization was generally poor for ITD-only conditions and was enhanced by head motion only for normal HRTFs. Overall the data suggest that, while ITDs alone can provide a significant cue for azimuth, the errors most commonly associated with virtual sources are reduced by location-dependent magnitude cues.

  13. Performance evaluation of the Champagne source reconstruction algorithm on simulated and real M/EEG data.

    PubMed

    Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S

    2012-03-01

    In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data.

  14. Time reversal imaging and cross-correlations techniques by normal mode theory

    NASA Astrophysics Data System (ADS)

    Montagner, J.; Fink, M.; Capdeville, Y.; Phung, H.; Larmat, C.

    2007-12-01

    Time-reversal methods were successfully applied in the past to acoustic waves in many fields such as medical imaging, underwater acoustics, and nondestructive testing, and recently to seismic waves in seismology for earthquake imaging. The increasing power of computers and numerical methods (such as spectral element methods) enables one to simulate more and more accurately the propagation of seismic waves in heterogeneous media and to develop new applications, in particular time reversal in the three-dimensional Earth. Generalizing the scalar approach of Draeger and Fink (1999), the theoretical understanding of the time-reversal method can be addressed for the 3D elastic Earth by using normal mode theory. It is shown how to relate time-reversal methods, on one hand, to auto-correlation of seismograms for source imaging and, on the other hand, to cross-correlation between receivers for structural imaging and retrieval of the Green function. The loss of information will be discussed. In the case of source imaging, automatic location in time and space of earthquakes and unknown sources is obtained by the time-reversal technique. In the case of large earthquakes such as the Sumatra-Andaman earthquake of December 2004, we were able to reconstruct the spatio-temporal history of the rupture. We present here some new global-scale applications of these techniques on synthetic tests and on real data.

  15. Surgeon and type of anesthesia predict variability in surgical procedure times.

    PubMed

    Strum, D P; Sampson, A R; May, J H; Vargas, L G

    2000-05-01

    Variability in surgical procedure times increases the cost of healthcare delivery by increasing both the underutilization and overutilization of expensive surgical resources. To reduce variability in surgical procedure times, we must identify and study its sources. Our data set consisted of all surgeries performed over a 7-yr period at a large teaching hospital, resulting in 46,322 surgical cases. To study factors associated with variability in surgical procedure times, data mining techniques were used to segment and focus the data so that the analyses would be both technically and intellectually feasible. The data were subdivided into 40 representative segments of manageable size and variability based on headers adopted from the common procedural terminology classification. Each data segment was then analyzed using a main-effects linear model to identify and quantify specific sources of variability in surgical procedure times. The single most important source of variability in surgical procedure times was surgeon effect. Type of anesthesia, age, gender, and American Society of Anesthesiologists risk class were additional sources of variability. Intrinsic case-specific variability, unexplained by any of the preceding factors, was found to be highest for shorter surgeries relative to longer procedures. Variability in procedure times among surgeons was a multiplicative function (proportionate to time) of surgical time and total procedure time, such that as procedure times increased, variability in surgeons' surgical time increased proportionately. Surgeon-specific variability should be considered when building scheduling heuristics for longer surgeries. Results concerning variability in surgical procedure times due to factors such as type of anesthesia, age, gender, and American Society of Anesthesiologists risk class may be extrapolated to scheduling in other institutions, although specifics on individual surgeons may not. This research identifies factors associated with variability in surgical procedure times, knowledge of which may ultimately be used to improve surgical scheduling and operating room utilization.

  16. Sanitary protection zoning based on time-dependent vulnerability assessment model - case examples at two different type of aquifers

    NASA Astrophysics Data System (ADS)

    Živanović, Vladimir; Jemcov, Igor; Dragišić, Veselin; Atanacković, Nebojša

    2017-04-01

    Delineation of sanitary protection zones of a groundwater source is a comprehensive and multidisciplinary task. A uniform methodology for protection zoning across various types of aquifers has not been established. Currently applied methods mostly rely on horizontal groundwater travel time toward the tapping structure. On the other hand, groundwater vulnerability assessment methods evaluate the protective function of the unsaturated zone as an important part of groundwater source protection. In some particular cases surface flow might also be important, because of rapid transfer of contaminants toward zones with intense infiltration. For delineation of sanitary protection zones, three major components should be analysed: vertical travel time through the unsaturated zone, horizontal travel time through the saturated zone, and surface water travel time toward intense infiltration zones. Integrating these components into one time-dependent model forms the basis of the presented method for delineation of groundwater source protection zones in rocks and sediments of different porosity. The proposed model comprises the travel time components of surface water as well as groundwater (horizontal and vertical components). The results obtained using the model represent groundwater vulnerability as the sum of the surface water and groundwater travel times, and correspond to the travel time of potential contaminants from the ground surface to the tapping structure. This vulnerability assessment approach does not consider contaminant properties (intrinsic vulnerability), although it can easily be extended to evaluate specific groundwater vulnerability. This concept of sanitary protection zones was applied at two different types of aquifers: the karstic aquifer of the catchment area of the Blederija springs, and the "Beli Timok" source in a shallow intergranular aquifer. The first represents a typical karst hydrogeological system with allogenic recharge over part of the catchment; the second is a groundwater source within a shallow intergranular alluvial aquifer, dominantly recharged by river bank filtration. For sanitary protection zone delineation, the applied method has shown the importance of introducing all travel time components equally. In the case of the karstic source, the importance of surface flow toward ponor zones has been emphasized, as a consequence of the rapid travel time of water relative to diffuse infiltration from the autogenic part. For the shallow intergranular aquifer, the character of the unsaturated zone plays a more prominent role in source protection, as an important buffer to vertical downward movement. The applicability of the proposed method has been shown regardless of aquifer type, and at the same time the delineated sanitary protection zones yield intelligible results that can be validated with various methods. Key words: groundwater protection zoning, time-dependent model, karst aquifer, intergranular aquifer, groundwater source protection

  17. Lower Emittance Lattice for the Advanced Photon Source Upgrade Using Reverse Bending Magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borland, M.; Berenc, T.; Sun, Y.

    The Advanced Photon Source (APS) is pursuing an upgrade of the storage ring to a hybrid seven-bend-achromat design [1]. The nominal design provides a natural emittance of 67 pm [2]. By adding reverse dipole fields to several quadrupoles [3, 4] we can reduce the natural emittance to 41 pm while simultaneously providing more optimal beta functions in the insertion devices and increasing the dispersion function at the chromaticity sextupole magnets. The improved emittance results from a combination of increased energy loss per turn and a change in the damping partition. At the same time, the nonlinear dynamics performance is very similar, thanks in part to increased dispersion in the sextupoles. This paper describes the properties, optimization, and performance of the new lattice.

  18. ON THE CALIBRATION OF DK-02 AND KID DOSIMETERS (in Estonian)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ehvaert, H.

    1963-01-01

    For the periodic calibration of the DK-02 and WD dosimeters, the rotating stand method, which is more advantageous than the usual method, is recommended. The calibration can be accomplished in a strong gamma field, reducing considerably the time necessary for calibration. Using a point source, the dose becomes a simple function of time and geometrical parameters. The experimental values are in good agreement with theoretical values. (tr-auth)
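
    For a fixed point source, the dose indeed reduces to an inverse-square expression in exposure time and distance. A minimal sketch follows; the dose-rate constant is an illustrative assumption, since the record gives no numbers.

      GAMMA = 0.35   # assumed dose-rate constant, mSv*m^2/(h*GBq), for illustration

      def dose(activity_gbq, hours, radius_m):
          # point-source inverse-square law: dose as a simple function of
          # exposure time and geometry, as exploited by the rotating-stand method
          return GAMMA * activity_gbq * hours / radius_m**2

      print(dose(37.0, 0.5, 1.0))   # ~6.5 mSv for half an hour at 1 m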

  19. Activation Time of Cardiac Tissue In Response to a Linear Array of Spatial Alternating Bipolar Electrodes

    NASA Astrophysics Data System (ADS)

    Mashburn, David; Wikswo, John

    2007-11-01

    Prevailing theories about the response of the heart to high field shocks predict that local regions of high resistivity distributed throughout the heart create multiple small virtual electrodes that hyperpolarize or depolarize tissue and lead to widespread activation. This resetting of bulk tissue is responsible for the successful functioning of cardiac defibrillators. By activating cardiac tissue with regular linear arrays of spatially alternating bipolar currents, we can simulate these potentials locally. We have studied the activation time due to distributed currents in both a 1D Beeler-Reuter model and on the surface of the whole heart, varying the strength of each source and the separation between them. By comparison with activation time data from actual field shock of a whole heart in a bath, we hope to better understand these transient virtual electrodes. Our work was done on rabbit RV using fluorescent optical imaging and our Phased Array Stimulator for driving the 16 current sources. Our model shows that, for a given total absolute current delivered to a region of tissue, the entire region activates faster if above-threshold sources are more distributed.

  20. Dosimetric characterizations of GZP6 60Co high dose rate brachytherapy sources: application of superimposition method

    PubMed Central

    Bahreyni Toossi, Mohammad Taghi; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Meigooni, Ali Soleimani

    2012-01-01

    Background Dosimetric characteristics of a high dose rate (HDR) GZP6 Co-60 brachytherapy source have been evaluated following American Association of Physicists in Medicine Task Group 43U1 (AAPM TG-43U1) recommendations for their clinical applications. Materials and methods MCNP-4C and MCNPX Monte Carlo codes were utilized to calculate the dose rate constant, two dimensional (2D) dose distribution, radial dose function and 2D anisotropy function of the source. These parameters are compared with the available data for the Ralstron 60Co and microSelectron 192Ir sources. In addition, a superimposition method was developed to extend the results obtained for GZP6 source No. 3 to the other GZP6 sources. Results The simulated dose rate constant for the GZP6 source was 1.104 ± 0.03 cGy h-1 U-1. The graphical and tabulated radial dose function and 2D anisotropy function of this source are presented here. The results of these investigations show that the dosimetric parameters of the GZP6 source are comparable to those of the Ralstron source. While the dose rate constants for the two 60Co sources are similar to that of the microSelectron 192Ir source, there are differences between the radial dose functions and anisotropy functions. The radial dose function of the 192Ir source is less steep than that of both 60Co source models. In addition, the 60Co sources show a more isotropic dose distribution than the 192Ir source. Conclusions The superimposition method is applicable to produce dose distributions for other source arrangements from the dose distribution of a single source. The calculated dosimetric quantities of this new source can be introduced as input data to the GZP6 treatment planning system (TPS) and used to validate the performance of the TPS. PMID:23077455
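
    The superimposition idea can be shown in a few lines: shift the single-source dose grid to each source position and sum. The kernel below is a synthetic inverse-square stand-in, not the Monte Carlo output used in the paper, and the grid-aligned offsets are assumptions.

      import numpy as np

      def superimpose(single_dose, offsets_ij):
          # total dose of a multi-source arrangement as the sum of the
          # single-source grid shifted to each source position; edges are
          # assumed far enough away that np.roll wrap-around is negligible
          total = np.zeros_like(single_dose)
          for di, dj in offsets_ij:
              total += np.roll(np.roll(single_dose, di, axis=0), dj, axis=1)
          return total

      # illustrative single-source kernel on a 101x101 grid (arbitrary units)
      yy, xx = np.mgrid[-50:51, -50:51].astype(float)
      single = 1.0 / np.maximum(xx**2 + yy**2, 1.0)     # inverse-square falloff
      triple = superimpose(single, [(-10, 0), (0, 0), (10, 0)])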

  1. Source Finding in the Era of the SKA (Precursors): Aegean 2.0

    NASA Astrophysics Data System (ADS)

    Hancock, Paul J.; Trott, Cathryn M.; Hurley-Walker, Natasha

    2018-03-01

    In the era of the SKA precursors, telescopes are producing deeper, larger images of the sky on increasingly small time-scales. The greater size and volume of images place an increased demand on the software that we use to create catalogues, and so our source finding algorithms need to evolve accordingly. In this paper, we discuss some of the logistical and technical challenges that result from the increased size and volume of images that are to be analysed, and demonstrate how the Aegean source finding package has evolved to address these challenges. In particular, we address the issues of source finding on spatially correlated data, and on images in which the background, noise, and point spread function vary across the sky. We also introduce the concept of forced or prioritised fitting.
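
    Forced (prioritised) fitting can be sketched compactly: hold the catalogued position and shape fixed and fit only the amplitude. The Gaussian model and SciPy call below are illustrative assumptions, not Aegean's internals.

      import numpy as np
      from scipy.optimize import curve_fit

      def gauss2d(xy, amp, x0, y0, sx, sy):
          x, y = xy
          return amp * np.exp(-((x - x0)**2/(2*sx**2) + (y - y0)**2/(2*sy**2)))

      # synthetic image with one source plus noise
      yy, xx = np.mgrid[0:50, 0:50].astype(float)
      img = gauss2d((xx, yy), 3.0, 25.0, 25.0, 2.5, 2.5) \
            + np.random.normal(0.0, 0.1, xx.shape)

      # forced fit: position and width come from a prior catalogue; only the
      # flux (amplitude) is free
      x0, y0, sx, sy = 25.0, 25.0, 2.5, 2.5
      amp, _ = curve_fit(lambda xy, a: gauss2d(xy, a, x0, y0, sx, sy),
                         (xx.ravel(), yy.ravel()), img.ravel(), p0=[1.0])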

  2. Simultaneous transcranial direct current stimulation (tDCS) and whole-head magnetoencephalography (MEG): assessing the impact of tDCS on slow cortical magnetic fields.

    PubMed

    Garcia-Cossio, Eliana; Witkowski, Matthias; Robinson, Stephen E; Cohen, Leonardo G; Birbaumer, Niels; Soekadar, Surjo R

    2016-10-15

    Transcranial direct current stimulation (tDCS) can influence cognitive, affective or motor brain functions. Whereas previous imaging studies demonstrated widespread tDCS effects on brain metabolism, a direct impact of tDCS on electric or magnetic source activity in task-related brain areas could not be confirmed due to the difficulty of recording such activity simultaneously during tDCS. The aim of this proof-of-principle study was to demonstrate the feasibility of whole-head source localization and reconstruction of neuromagnetic brain activity during tDCS and to confirm the direct effect of tDCS on ongoing neuromagnetic activity in task-related brain areas. Here we show for the first time that tDCS has an immediate impact on slow cortical magnetic fields (SCF, 0-4 Hz) of task-related areas that are identical with brain regions previously described in metabolic neuroimaging studies. 14 healthy volunteers performed a choice reaction time (RT) task while whole-head magnetoencephalography (MEG) was recorded. Task-related source activity of SCFs was calculated using synthetic aperture magnetometry (SAM) in the absence of stimulation and while anodal, cathodal or sham tDCS was delivered over the right primary motor cortex (M1). Source reconstruction revealed task-related SCF modulations in brain regions that precisely matched prior metabolic neuroimaging studies. Anodal and cathodal tDCS had a polarity-dependent impact on RT and SCF in primary sensorimotor and medial centro-parietal cortices. Combining tDCS and whole-head MEG is a powerful approach to investigate the direct effects of transcranial electric currents on ongoing neuromagnetic source activity, brain function and behavior.

  3. Transient shifts in frontal and parietal circuits scale with enhanced visual feedback and changes in force variability and error

    PubMed Central

    Poon, Cynthia; Coombes, Stephen A.; Corcos, Daniel M.; Christou, Evangelos A.

    2013-01-01

    When subjects perform a learned motor task with increased visual gain, error and variability are reduced. Neuroimaging studies have identified a corresponding increase in activity in parietal cortex, premotor cortex, primary motor cortex, and extrastriate visual cortex. Much less is understood about the neural processes that underlie the immediate transition from low to high visual gain within a trial. This study used 128-channel electroencephalography to measure cortical activity during a visually guided precision grip task, in which the gain of the visual display was changed during the task. Force variability during the transition from low to high visual gain was characterized by an inverted U-shape, whereas force error decreased from low to high gain. Source analysis identified cortical activity in the same structures previously identified using functional magnetic resonance imaging. Source analysis also identified a time-varying shift in the strongest source activity. Superior regions of the motor and parietal cortex had stronger source activity from 300 to 600 ms after the transition, whereas inferior regions of the extrastriate visual cortex had stronger source activity from 500 to 700 ms after the transition. Force variability and electrical activity were linearly related, with a positive relation in the parietal cortex and a negative relation in the frontal cortex. Force error was nonlinearly related to electrical activity in the parietal cortex and frontal cortex by a quadratic function. This is the first evidence that force variability and force error are systematically related to a time-varying shift in cortical activity in frontal and parietal cortex in response to enhanced visual gain. PMID:23365186

  4. Simultaneous transcranial direct current stimulation (tDCS) and whole-head magnetoencephalography (MEG): assessing the impact of tDCS on slow cortical magnetic fields

    PubMed Central

    Garcia-Cossio, Eliana; Witkowski, Matthias; Robinson, Stephen E.; Cohen, Leonardo G.; Birbaumer, Niels; Soekadar, Surjo R.

    2016-01-01

    Transcranial direct current stimulation (tDCS) can influence cognitive, affective or motor brain functions. Whereas previous imaging studies demonstrated widespread tDCS effects on brain metabolism, a direct impact of tDCS on electric or magnetic source activity in task-related brain areas could not be confirmed due to the difficulty of recording such activity simultaneously during tDCS. The aim of this proof-of-principle study was to demonstrate the feasibility of whole-head source localization and reconstruction of neuromagnetic brain activity during tDCS and to confirm the direct effect of tDCS on ongoing neuromagnetic activity in task-related brain areas. Here we show for the first time that tDCS has an immediate impact on slow cortical magnetic fields (SCF, 0–4 Hz) of task-related areas that are identical with brain regions previously described in metabolic neuroimaging studies. 14 healthy volunteers performed a choice reaction time (RT) task while whole-head magnetoencephalography (MEG) was recorded. Task-related source activity of SCFs was calculated using synthetic aperture magnetometry (SAM) in the absence of stimulation and while anodal, cathodal or sham tDCS was delivered over the right primary motor cortex (M1). Source reconstruction revealed task-related SCF modulations in brain regions that precisely matched prior metabolic neuroimaging studies. Anodal and cathodal tDCS had a polarity-dependent impact on RT and SCF in primary sensorimotor and medial centro-parietal cortices. Combining tDCS and whole-head MEG is a powerful approach to investigate the direct effects of transcranial electric currents on ongoing neuromagnetic source activity, brain function and behavior. PMID:26455796

  5. On a two-phase Hele-Shaw problem with a time-dependent gap and distributions of sinks and sources

    NASA Astrophysics Data System (ADS)

    Savina, Tatiana; Akinyemi, Lanre; Savin, Avital

    2018-01-01

    A two-phase Hele-Shaw problem with a time-dependent gap describes the evolution of the interface which separates two fluids sandwiched between two plates. The fluids have different viscosities. In addition to the change in the gap width of the Hele-Shaw cell, the interface is driven by the presence of some special distributions of sinks and sources located in both the interior and exterior domains. The effect of surface tension is neglected. Using the Schwarz function approach, we give examples of exact solutions when the interface belongs to a certain family of algebraic curves and the curves do not form cusps. The family of curves is defined by the initial shape of the free boundary.

  6. Effect of source and particle size of supplemental phosphate on rumen function of steers fed high concentrate diets.

    PubMed

    Murphy, M R; Whetstone, H D; Davis, C L

    1983-12-01

    We examined the effects of source and particle size of supplemental defluorinated rock phosphate, fed to meet phosphorus requirements, on rumen function of 195-kg Holstein steers fed high concentrate diets. Two sources and two particle sizes of each source were evaluated in a 5 X 5 Latin square with 14-day periods. There was no effect of source on ruminal mH [-log(mean H+)]; however, ruminal mH was higher in animals fed supplements of larger particle size. This effect was also evident when rumen pH versus time curves were integrated below pH 6. Animals fed supplements of larger particle size had less area below pH 6 than those fed supplements of smaller size. Ruminal buffering capacity at pH 7 was affected by diet; however, orthogonal comparisons between treatment means were not significant. Neither source nor particle size of the supplement affected ruminal fluid osmolality, total volatile fatty acid concentration, or fecal starch. Water intake and ruminal dry matter increased on HyCal-supplemented diets; however, there was also a trend toward increasing rumen fluid volume. The net effect was little change in the dilution rate of ruminal fluid. This may explain why rumen fermentation was not affected greatly. Conventional phosphate supplements may have potential as rumen buffering agents, but higher levels of feeding should be studied.

  7. Revision of earthquake hypocentre locations in global bulletin data sets using source-specific station terms

    NASA Astrophysics Data System (ADS)

    Nooshiri, Nima; Saul, Joachim; Heimann, Sebastian; Tilmann, Frederik; Dahm, Torsten

    2017-02-01

    Global earthquake locations are often associated with very large systematic travel-time residuals even for clear arrivals, especially for regional and near-regional stations in subduction zones because of their strongly heterogeneous velocity structure. Travel-time corrections can drastically reduce travel-time residuals at regional stations and, in consequence, improve the relative location accuracy. We have extended the shrinking-box source-specific station terms technique to regional and teleseismic distances and adopted the algorithm for probabilistic, nonlinear, global-search location. We evaluated the potential of the method to compute precise relative hypocentre locations on a global scale. The method has been applied to two specific test regions using existing P- and pP-phase picks. The first data set consists of 3103 events along the Chilean margin and the second one comprises 1680 earthquakes in the Tonga-Fiji subduction zone. Pick data were obtained from the GEOFON earthquake bulletin, produced using data from all available, global station networks. A set of timing corrections varying as a function of source position was calculated for each seismic station. In this way, we could correct the systematic errors introduced into the locations by the inaccuracies in the assumed velocity structure without explicitly solving for a velocity model. Residual statistics show that the median absolute deviation of the travel-time residuals is reduced by 40-60 per cent at regional distances, where the velocity anomalies are strong. Moreover, the spread of the travel-time residuals decreased by ~20 per cent at teleseismic distances (>28°). Furthermore, strong variations in initial residuals as a function of recording distance are smoothed out in the final residuals. The relocated catalogues exhibit less scattered locations in depth and sharper images of the seismicity associated with the subducting slabs. Comparison with a high-resolution local catalogue reveals that our relocation process significantly improves the hypocentre locations compared to standard locations.
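
    A minimal sketch of the shrinking-box source-specific station terms idea follows; the array layout and radius schedule are assumptions, and the full method relocates the events and recomputes residuals between passes, which is only indicated by a comment here.

      import numpy as np

      def ssst_terms(event_xyz, resid, radii_km=(200.0, 100.0, 50.0, 25.0)):
          # event_xyz: (n_ev, 3) hypocentres in km; resid: (n_ev, n_sta)
          # travel-time residuals in seconds, NaN where a station has no pick
          corr = np.zeros_like(resid)
          for radius in radii_km:                  # the "shrinking box"
              d = np.linalg.norm(event_xyz[:, None, :] - event_xyz[None, :, :],
                                 axis=2)
              for i in range(len(event_xyz)):
                  # station terms for event i: mean residual of neighbouring events
                  corr[i] = np.nanmean(resid[d[i] < radius], axis=0)
              # the full method relocates every event with corrected picks here
              # and recomputes resid before the neighbourhood radius shrinks
          return corr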

  8. Feasibility of Active Monitoring for Plate Coupling Using ACROSS

    NASA Astrophysics Data System (ADS)

    Yamaoka, K.; Watanabe, T.; Ikuta, R.

    2004-12-01

    The detectability of temporal changes in waves reflected from the boundaries of subducting plates in the Tokai district using active sources is studied. Rock experiments indicate that changes in the intensity of the reflected wave can be caused by changes in coupling between the subducting and overriding plates. ACROSS (Accurately Controlled Routinely Operated Signal System), which consists of sinusoidal vibration sources and receivers, has been shown to provide data of excellent signal resolution. The following technical issues must be overcome to monitor the returned signal from the boundaries of subducting plates: (1) long-term operation of the source; (2) detection of temporal change; (3) accurate estimation of source functions and their temporal change. The first two issues have already been overcome. We have already carried out a long-term operation experiment with the ACROSS system in Awaji, Japan. The system was operated for 15 months with only minor troubles, and continuous signals were successfully obtained throughout the experiment. During the experiment we developed a technique to monitor the temporal change of travel time with a resolution of several tens of microseconds. The third issue is one of the most difficult problems for practical monitoring using artificial sources. In the 15-month experiment we corrected the source function using the records of seismometers deployed around the source. We also estimate the efficiency of reflected-wave detection using the ACROSS system. We use data from a seismic exploration experiment with blasts carried out above the subducting plate in the Tokai district. A clear reflection from the surface of the Philippine Sea plate is observed in the waveforms. Assuming that an ACROSS source is installed at the same place as the blast source, the detectability of temporal variation of the reflected wave can be estimated. As we have measured the variation of signal amplitude with distance from an ACROSS source, the ground noise at seismic stations (receivers) provides the signal-to-noise ratio for the ACROSS signal, and the resolution can be estimated from this ratio alone. We surveyed the noise level at locations where the reflection from the boundary of the subducting Philippine Sea Plate can be detected. The results show that the resolution will be better than 1% in amplitude and 0.1 millisecond in travel time for one week of stacking with a three-unit source and ten-element receiver arrays.

  9. Dissipative Intraplate Faulting During the 2016 Mw 6.2 Tottori, Japan Earthquake

    NASA Astrophysics Data System (ADS)

    Ross, Zachary E.; Kanamori, Hiroo; Hauksson, Egill; Aso, Naofumi

    2018-02-01

    The 2016 Mw 6.2 Tottori earthquake occurred on 21 October 2016 and produced thousands of aftershocks. Here we analyze high-resolution-relocated seismicity together with source properties of the mainshock to better understand the rupture process and energy budget. We use a matched-filter algorithm to detect and precisely locate >10,000 previously unidentified aftershocks, which delineate a network of sharp subparallel lineations exhibiting significant branching and segmentation. Seismicity below 8 km depth forms highly localized fault structures subparallel to the mainshock strike. Shallow seismicity near the main rupture plane forms more diffuse clusters and lineations that often are at a high angle (in map view) to the mainshock strike. An empirical Green's function technique is used to derive apparent source time functions for the mainshock, which show a large amplitude pulse 2-4 s long. We invert the apparent source time functions for a slip distribution and observe a 16 km^2 patch with average slip 3.2 m. 93% of the seismic moment is below 8 km depth, which is approximately the depth below which the seismicity becomes very localized. These observations suggest that the mainshock rupture area was entirely within the lower half of the seismogenic zone. The radiated seismic energy is estimated to be 5.7 × 10^13 J, while the static stress drop is estimated to be 18-27 MPa. These values yield a radiation efficiency of 5-7%, which indicates that the Tottori mainshock was extremely dissipative. We conclude that this inefficiency in energy radiation is likely a product of the immature intraplate environment and the underlying geometric complexity.
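
    The quoted efficiency can be reproduced from the numbers in the abstract, assuming the common definition eta_R = E_R / (delta_sigma * M0 / (2*mu)) and a crustal rigidity of 30 GPa (the rigidity is not stated in the abstract):

      # radiated energy over the available strain energy, a worked check
      mu = 30e9                      # assumed crustal rigidity, Pa
      M0 = 10**(1.5 * 6.2 + 9.1)     # ~2.5e18 N*m for Mw 6.2
      E_R = 5.7e13                   # radiated energy, J (from the study)

      for stress_drop in (18e6, 27e6):            # stated stress-drop range, Pa
          eta = E_R / (stress_drop * M0 / (2 * mu))
          print(f"{eta:.1%}")                     # ~7.6% and ~5.1%, i.e. the 5-7% range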

  10. Time Manager Software for a Flight Processor

    NASA Technical Reports Server (NTRS)

    Zoerne, Roger

    2012-01-01

    Data analysis is a process of inspecting, cleaning, transforming, and modeling data to highlight useful information and suggest conclusions. Accurate timestamps and a timeline of vehicle events are needed to analyze flight data. By moving the timekeeping to the flight processor, there is no longer a need for a redundant time source. If each flight processor is initially synchronized to GPS, it can freewheel and maintain a fairly accurate time throughout the flight with no additional GPS time messages received. However, additional GPS time messages will ensure an even greater accuracy. When a timestamp is required, a gettime function is called that immediately reads the time-base register.
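
    The freewheeling scheme amounts to extrapolating from the last GPS synchronization using the local time base. Below is a Python stand-in, not the flight code, in which monotonic_ns plays the role of the time-base register:

      import time

      class FlightClock:
          # freewheeling clock: synchronize once to GPS, extrapolate from the
          # processor's monotonic time base, resync when a GPS message arrives
          def __init__(self, gps_epoch_s):
              self.sync(gps_epoch_s)

          def sync(self, gps_epoch_s):
              self._epoch = gps_epoch_s
              self._tb0 = time.monotonic_ns()   # stand-in for the time-base register

          def gettime(self):
              return self._epoch + (time.monotonic_ns() - self._tb0) * 1e-9

      clock = FlightClock(gps_epoch_s=1_357_000_000.0)  # illustrative epoch
      timestamp = clock.gettime()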

  11. Localization of synchronous cortical neural sources.

    PubMed

    Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc

    2013-03-01

    Neural synchronization is a key mechanism in a wide variety of brain functions, such as cognition, perception, or memory. The high temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale, but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target synchronous brain regions. In this paper, we propose a novel algorithm aimed at localizing specifically synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method with that of a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.

  12. Tsunami Simulation Method Assimilating Ocean Bottom Pressure Data Near a Tsunami Source Region

    NASA Astrophysics Data System (ADS)

    Tanioka, Yuichiro

    2018-02-01

    A new method was developed to reproduce the tsunami height distribution in and around the source area, at a certain time, from a large number of ocean bottom pressure sensors, without information on an earthquake source. A dense cabled observation network called S-NET, which consists of 150 ocean bottom pressure sensors, was installed recently along a wide portion of the seafloor off Kanto, Tohoku, and Hokkaido in Japan. However, in the source area, the ocean bottom pressure sensors cannot observe directly an initial ocean surface displacement. Therefore, we developed the new method. The method was tested and functioned well for a synthetic tsunami from a simple rectangular fault with an ocean bottom pressure sensor network using 10 arc-min, or 20 km, intervals. For a test case that is more realistic, ocean bottom pressure sensors with 15 arc-min intervals along the north-south direction and sensors with 30 arc-min intervals along the east-west direction were used. In the test case, the method also functioned well enough to reproduce the tsunami height field in general. These results indicated that the method could be used for tsunami early warning by estimating the tsunami height field just after a great earthquake without the need for earthquake source information.

  13. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.

    Earthquake source parameters underpin several aspects of nuclear explosion monitoring: calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for, and broaden the applicability of, estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green's functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components) and Love (transverse component). Source parameters are estimated by grid search over strike, dip, rake and depth, and the seismic moment (equivalently, the moment magnitude MW) is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green's functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes (CAP+).
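
    The time-shift device at the heart of CAP can be sketched for a single window: slide the synthetic against the data, keep the best-correlating lag, and score the misfit there. This is an illustration under assumed array inputs, not the CAP code itself.

      import numpy as np

      def fit_window(data, synth, max_shift):
          # slide the synthetic within +/- max_shift samples to absorb 1D-model
          # path delays, keep the lag maximizing cross-correlation, and report
          # the L2 misfit at that lag
          lags = np.arange(-max_shift, max_shift + 1)
          cc = np.array([np.dot(data, np.roll(synth, k)) for k in lags])
          best = int(lags[np.argmax(cc)])
          misfit = np.sum((data - np.roll(synth, best))**2)
          return best, misfit

      # in a full CAP-style search, outer loops over strike/dip/rake/depth would
      # sum this misfit over the five Pnl/Rayleigh/Love windows at each station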

  14. Theory and Performance of AIMS for Active Interrogation

    NASA Astrophysics Data System (ADS)

    Walters, William J.; Royston, Katherine E. K.; Haghighat, Alireza

    2014-06-01

    A hybrid Monte Carlo and deterministic methodology has been developed for application to active interrogation systems. The methodology consists of four steps: i) determination of the neutron flux distribution due to neutron source transport and subcritical multiplication; ii) generation of the gamma source distribution from (n, γ) interactions; iii) determination of the gamma current at a detector window; iv) detection of gammas by the detector. This paper discusses the theory and results of the first three steps for the case of a cargo container with a sphere of HEU in third-density water. In the first step, a response-function formulation has been developed to calculate the subcritical multiplication and neutron flux distribution. Response coefficients are pre-calculated using the MCNP5 Monte Carlo code. The second step uses the calculated neutron flux distribution and Bugle-96 (n, γ) cross sections to find the resulting gamma source distribution. Finally, in the third step the gamma source distribution is coupled with a pre-calculated adjoint function to determine the gamma flux at a detector window. A code, AIMS (Active Interrogation for Monitoring Special-Nuclear-materials), has been written to output the gamma current for a source-detector assembly scanning across the cargo using the pre-calculated values; it takes significantly less time than a reference MCNP5 calculation.
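
    Step iii reduces to an inner product once the gamma source and the adjoint (importance) function are tabulated on a mesh. A schematic version, with array shapes assumed for illustration:

      import numpy as np

      def detector_response(gamma_source, adjoint_flux, cell_volumes):
          # gamma_source, adjoint_flux: (n_cells, n_groups) arrays;
          # cell_volumes: (n_cells,).  The gamma current at the detector window
          # is the inner product of the (n,gamma) source with the pre-calculated
          # adjoint function over space and energy.
          return np.sum(gamma_source * adjoint_flux * cell_volumes[:, None])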

  15. Microscopic Sources of Paramagnetic Noise on α-Al2O3 Substrates for Superconducting Qubits

    NASA Astrophysics Data System (ADS)

    Dubois, Jonathan; Lee, Donghwa; Lordi, Vince

    2014-03-01

    Superconducting qubits (SQs) represent a promising route to achieving a scalable quantum computer. However, the coupling between electro-dynamic qubits and (as yet largely unidentified) ambient parasitic noise sources has so far limited the functionality of current SQs by limiting coherence times of the quantum states below a practical threshold for measurement and manipulation. Further improvement can be enabled by a detailed understanding of the various noise sources afflicting SQs. In this work, first principles density functional theory (DFT) calculations are employed to identify the microscopic origins of magnetic noise sources in SQs on an α-Al2O3 substrate. The results indicate that it is unlikely that the existence of intrinsic point defects and defect complexes in the substrate are responsible for low frequency noise in these systems. Rather, a comprehensive analysis of extrinsic defects shows that surface aluminum ions interacting with ambient molecules will form a bath of magnetic moments that can couple to the SQ paramagnetically. The microscopic origin of this magnetic noise source is discussed and strategies for ameliorating the effects of these magnetic defects are proposed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  16. Improved Bayesian Infrasonic Source Localization for regional infrasound

    DOE PAGES

    Blom, Philip S.; Marcillo, Omar; Arrowsmith, Stephen J.

    2015-10-20

    The Bayesian Infrasonic Source Localization (BISL) methodology is examined and simplified, providing a generalized method of estimating the source location and time for an infrasonic event within the mathematical framework used therein. The likelihood function describing an infrasonic detection used in BISL has been redefined to include the von Mises distribution developed in directional statistics and propagation-based, physically derived celerity-range and azimuth deviation models. Frameworks for constructing propagation-based celerity-range and azimuth deviation statistics are presented to demonstrate how stochastic propagation modelling methods can be used to improve the precision and accuracy of the posterior probability density function describing the source localization. Infrasonic signals recorded at a number of arrays in the western United States, produced by rocket motor detonations at the Utah Test and Training Range, are used to demonstrate the application of the new mathematical framework and to quantify the improvement obtained by using the stochastic propagation modelling methods. Moreover, using propagation-based priors, the spatial and temporal confidence bounds of the source decreased by more than 40 per cent in all cases and by as much as 80 per cent in one case. Further, the accuracy of the estimates remained high, keeping the ground truth within the 99 per cent confidence bounds for all cases.
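
    The redefined likelihood can be sketched for a single detection: a von Mises factor on back-azimuth multiplied by a celerity factor, with a Gaussian standing in here for the propagation-based celerity-range statistics. All parameter values and the Gaussian choice are illustrative assumptions.

      import numpy as np
      from scipy.stats import vonmises, norm

      def bisl_likelihood(grid_xy, det_xy, det_baz_rad, det_time, t0,
                          kappa=20.0, cel_mu=0.30, cel_sd=0.04):
          # unnormalized likelihood over candidate source positions grid_xy
          # (n, 2), in km, for one array at det_xy with measured back-azimuth
          # det_baz_rad and arrival time det_time, given candidate origin t0;
          # celerity in km/s; multiply across detections for the posterior
          dx = grid_xy - det_xy
          r = np.hypot(dx[:, 0], dx[:, 1])        # range, km
          baz = np.arctan2(dx[:, 0], dx[:, 1])    # predicted back-azimuth
          cel = r / (det_time - t0)               # implied celerity
          return vonmises.pdf(det_baz_rad, kappa, loc=baz) \
                 * norm.pdf(cel, cel_mu, cel_sd)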

  17. Identifying sources of fugitive emissions in industrial facilities using trajectory statistical methods

    NASA Astrophysics Data System (ADS)

    Brereton, Carol A.; Johnson, Matthew R.

    2012-05-01

    Fugitive pollutant sources from the oil and gas industry are typically quite difficult to find within industrial plants and refineries, yet they are a significant contributor of global greenhouse gas emissions. A novel approach for locating fugitive emission sources using computationally efficient trajectory statistical methods (TSM) has been investigated in detailed proof-of-concept simulations. Four TSMs were examined in a variety of source emissions scenarios developed using transient CFD simulations on the simplified geometry of an actual gas plant: potential source contribution function (PSCF), concentration weighted trajectory (CWT), residence time weighted concentration (RTWC), and quantitative transport bias analysis (QTBA). Quantitative comparisons were made using a correlation measure based on search area from the source(s). PSCF, CWT and RTWC could all distinguish areas near major sources from the surroundings. QTBA successfully located sources in only some cases, even when provided with a large data set. RTWC, given sufficient domain trajectory coverage, distinguished source areas best, but otherwise could produce false source predictions. Using RTWC in conjunction with CWT could overcome this issue as well as reduce sensitivity to noise in the data. The results demonstrate that TSMs are a promising approach for identifying fugitive emissions sources within complex facility geometries.
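
    Of the four TSMs, PSCF is the simplest to illustrate: grid the domain and, per cell, divide the trajectory endpoints from polluted samples by all endpoints. A schematic version with assumed array shapes:

      import numpy as np

      def pscf(traj_x, traj_y, polluted, x_edges, y_edges):
          # traj_x, traj_y: (n_samples, n_steps) back-trajectory coordinates;
          # polluted: (n_samples,) bool flags for high-concentration samples
          n_ij = np.histogram2d(traj_x.ravel(), traj_y.ravel(),
                                bins=(x_edges, y_edges))[0]
          mask = np.repeat(polluted, traj_x.shape[1])
          m_ij = np.histogram2d(traj_x.ravel()[mask], traj_y.ravel()[mask],
                                bins=(x_edges, y_edges))[0]
          with np.errstate(invalid="ignore"):
              return m_ij / n_ij        # high values flag candidate source cells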

  18. Groundwater Source Identification Using Backward Fractional-Derivative Models

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Sun, H.; Zheng, C.

    2017-12-01

    The forward Fractional Advection Dispersion Equation (FADE) provides a useful model for non-Fickian transport in heterogeneous porous media. This presentation introduces the corresponding backward FADE model, to identify groundwater source location and release time. The backward method is developed from the theory of inverse problems, and the resultant backward FADE differs significantly from the traditional backward ADE because the fractional derivative is not self-adjoint and the probability density function for backward locations is highly skewed. Finally, the method is validated using tracer data from well-known field experiments.
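
    For orientation, one common one-dimensional form of the forward FADE is shown below, together with a sketch of the backward equation in which the drift reverses and the space-fractional derivative flips direction, which is why the operator is not self-adjoint and the backward density is skewed. The exact form of the presented model may differ.

      % one common 1-D forward FADE, 1 < \alpha \le 2:
      \frac{\partial C}{\partial t}
        = -v\,\frac{\partial C}{\partial x}
        + D\,\frac{\partial^{\alpha} C}{\partial x^{\alpha}}
      % backward-in-time equation for the source-location density f (sketch):
      % the drift reverses and the fractional derivative becomes right-sided
      \frac{\partial f}{\partial s}
        = v\,\frac{\partial f}{\partial x}
        + D\,\frac{\partial^{\alpha} f}{\partial (-x)^{\alpha}}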

  19. Time Delay and Accretion Disk Size Measurements in the Lensed Quasar SBS 0909+532 from Multiwavelength Microlensing Analysis

    DTIC Science & Technology

    2013-09-01

    of the cosmic microwave background dipole velocity onto the lens plane, as done by Kochanek (2004). We compare the simulated light curves to the...observer, the background source, the foreground lens galaxy, and its stars cause uncorrelated variations in the source magnification as a function of...hereafter SBS 0909; αJ2000 = 09h13m01.s05, δJ2000 = +52d59m28.s83) is a doubly-imaged quasar lens sys- tem in which the background quasar has redshift

  20. Development and evaluation of modified envelope correlation method for deep tectonic tremor

    NASA Astrophysics Data System (ADS)

    Mizuno, N.; Ide, S.

    2017-12-01

    We develop a new location method for deep tectonic tremors as an improvement of the widely used envelope correlation method, and apply it to construct a tremor catalog in western Japan. Using cross-correlation functions as objective functions and weighting data components by the inverse of their error variances, the envelope cross-correlation method is redefined as a maximum likelihood method. This method is also capable of multiple-source detection, because when several events occur almost simultaneously they appear as local maxima of the likelihood. The average of the weighted cross-correlation functions, defined as ACC, is a nonlinear function whose variable is the position of the deep tectonic tremor. The optimization method has two steps. First, we fix the source depth to 30 km and use a grid search with 0.2 degree intervals to find the maxima of ACC, which are candidate event locations. Then, using each of the candidate locations as an initial value, we apply a gradient method to determine the horizontal and vertical components of the hypocenter. Sometimes several source locations are determined in a time window of 5 minutes. We estimate the resolution, defined as the distance at which sources can be detected separately by the location method, to be about 100 km. The validity of this estimation is confirmed by a numerical test using synthetic waveforms. Applied to continuous seismograms in western Japan spanning over 10 years, the new method detected 27% more tremors than the previous method, owing to multiple-source detection and the improved accuracy of the weighting scheme.
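
    The ACC objective can be written down directly. The sketch below evaluates it for one candidate source given station-pair envelopes, predicted differential travel times, and inverse-variance weights; all data structures are assumed for illustration.

      import numpy as np

      def acc(env_pairs, pred_lag_s, weights, fs):
          # env_pairs: (n_pairs, 2, n_samples) envelope pairs; pred_lag_s:
          # differential travel times predicted from the candidate source;
          # weights ~ inverse error variance per pair; fs: sampling rate (Hz)
          total = 0.0
          for (a, b), lag, w in zip(env_pairs, pred_lag_s, weights):
              k = int(round(lag * fs))
              cc = np.dot(a, np.roll(b, k)) / (np.linalg.norm(a) * np.linalg.norm(b))
              total += w * cc
          return total / np.sum(weights)

      # a grid search at 30 km depth over 0.2-degree cells picks local maxima of
      # this function as candidates; a gradient step then refines position and depth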

  1. Fluoride exposure and indicators of thyroid functioning in the Canadian population: implications for community water fluoridation

    PubMed Central

    Barberio, Amanda M; Hosein, F Shaun; Quiñonez, Carlos; McLaren, Lindsay

    2017-01-01

    Background There are concerns that altered thyroid functioning could be the result of ingesting too much fluoride. Community water fluoridation (CWF) is an important source of fluoride exposure. Our objectives were to examine the association between fluoride exposure and (1) diagnosis of a thyroid condition and (2) indicators of thyroid functioning among a national population-based sample of Canadians. Methods We analysed data from Cycles 2 and 3 of the Canadian Health Measures Survey (CHMS). Logistic regression was used to assess associations between fluoride from urine and tap water samples and the diagnosis of a thyroid condition. Multinomial logistic regression was used to examine the relationship between fluoride exposure and thyroid-stimulating hormone (TSH) level (low/normal/high). Other available variables permitted additional exploratory analyses among the subset of participants for whom we could discern some fluoride exposure from drinking water and/or dental products. Results There was no evidence of a relationship between fluoride exposure (from urine and tap water) and the diagnosis of a thyroid condition. There was no statistically significant association between fluoride exposure and abnormal (low or high) TSH levels relative to normal TSH levels. Rerunning the models with the sample constrained to the subset of participants for whom we could discern some source(s) of fluoride exposure from drinking water and/or dental products revealed no significant associations. Conclusion These analyses suggest that, at the population level, fluoride exposure is not associated with impaired thyroid functioning in a time and place where multiple sources of fluoride exposure, including CWF, exist. PMID:28839078

  2. On the VHF Source Retrieval Errors Associated with Lightning Mapping Arrays (LMAs)

    NASA Technical Reports Server (NTRS)

    Koshak, W.

    2016-01-01

    This presentation examines in detail the standard retrieval method: that of retrieving the (x, y, z, t) parameters of a lightning VHF point source from multiple ground-based Lightning Mapping Array (LMA) time-of-arrival (TOA) observations. The solution is found by minimizing a chi-squared function via the Levenberg-Marquardt algorithm. The associated forward problem is examined to illustrate the importance of signal-to-noise ratio (SNR). Monte Carlo simulated retrievals are used to assess the benefits of changing various LMA network properties. A generalized retrieval method is also introduced that, in addition to TOA data, uses LMA electric field amplitude measurements to retrieve a transient VHF dipole moment source.
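
    The standard retrieval is a four-parameter (x, y, z, t) nonlinear least-squares problem. The sketch below sets it up on synthetic, noise-free TOA data and solves it with SciPy's Levenberg-Marquardt option; the station layout is an assumption made for the example.

      import numpy as np
      from scipy.optimize import least_squares

      C = 2.998e8   # speed of light, m/s

      def toa_residuals(p, stations, t_obs):
          # p = (x, y, z, t): observed minus predicted arrival times
          r = np.linalg.norm(stations - p[:3], axis=1)
          return t_obs - (p[3] + r / C)

      # synthetic ground-plane network and a source at 8 km altitude
      stations = np.random.uniform(-10e3, 10e3, (7, 3)) * np.array([1., 1., 0.])
      src = np.array([2e3, -3e3, 8e3, 0.0])
      t_obs = src[3] + np.linalg.norm(stations - src[:3], axis=1) / C

      fit = least_squares(toa_residuals, x0=[0., 0., 5e3, 0.],
                          args=(stations, t_obs),
                          method="lm")     # Levenberg-Marquardt, as in the abstract
      print(fit.x)                          # recovers (x, y, z, t)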

  3. Using special functions to model the propagation of airborne diseases

    NASA Astrophysics Data System (ADS)

    Bolaños, Daniela

    2014-06-01

    Some special functions of mathematical physics are used to obtain a mathematical model of the propagation of airborne diseases. In particular, we study the propagation of tuberculosis in closed rooms and model the propagation using the error function and the Bessel function. In the model, infected individuals emit pathogens into the environment, and these pathogens infect other individuals who absorb them. The evolution in time of the concentration of pathogens in the environment is computed in terms of error functions. The evolution in time of the number of susceptible individuals is expressed by a differential equation that contains the error function and is solved numerically for different parametric simulations. The evolution in time of the number of infected individuals is plotted for each numerical simulation. On the other hand, the spatial distribution of the pathogen around the source of infection is represented by the Bessel function K0. The spatial and temporal distribution of the number of infected individuals is computed and plotted for some numerical simulations. All computations were made using computer algebra software, specifically Maple. It is expected that the analytical results obtained will allow the design of treatment rooms and ventilation systems that reduce the risk of the spread of tuberculosis.
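
    The sketch below only illustrates the special-function building blocks named above (erf for the temporal build-up, K0 for the radial decay around the source); the functional forms are schematic and the parameter values are invented, not taken from the paper.

      # Schematic use of erf and K0 as in the model above (illustrative values).
      import numpy as np
      from scipy.special import erf, k0

      D, lam, q = 1.0e-3, 1.0e-4, 5.0      # diffusion, removal rate, emission (assumed)
      t = np.linspace(1.0, 3600.0, 100)    # seconds since emission begins
      r = np.linspace(0.1, 10.0, 100)      # metres from the infected individual

      conc_t = q * erf(np.sqrt(lam * t))           # temporal build-up (schematic)
      conc_r = q * k0(np.sqrt(lam / D) * r)        # radial profile ~ K0 (schematic)
      print(conc_t[-1], conc_r[0])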

  4. Source characteristics of the 3 September 2017, North Korean nuclear test (mb = 6.3) inferred from teleseismic forward modeling and regional waveform deconvolution of broadband P and Pn waves

    NASA Astrophysics Data System (ADS)

    Chaves, E. J.; Lay, T.; Voytan, D. P.

    2017-12-01

    On 3 September 2017, North Korea conducted its sixth and largest declared underground nuclear test at the Punggye-ri test site. Estimates of yield (W) based on magnitude-yield calibrations for other test sites result in a wide range of yield estimates for the North Korean tests, due to uncertainty in the effects of site-specific coupling, likely overburial of the events, and poorly constrained crustal and mantle attenuation for the test site. The event produced broadband (BB) teleseismic P wave recordings with good signal-to-noise ratios at hundreds of stations, along with high quality regional recordings. When using teleseismic data, robust estimation of W and depth of burial (DOB) must account for the biasing effects of laterally varying upper mantle attenuation (t*) on P waves, so we empirically determine a best choice of average t* by modeling remote observations. We assume a Mueller-Murphy source model for a granite medium to address the coupling issue. We compute synthetic Reduced Velocity Potential (RVP) seismograms for varying combinations of W and DOB for the 2017 event for a simple half-space case to account for possible overburial effects. RVPs are convolved with Futterman constant-t* attenuation operators, corrected for geometric spreading and receiver functions, and then compared with teleseismic P wave displacement records from 435 BB seismic stations, pre-stacked in 26 azimuth and distance bins to suppress station effects. Our preliminary results for half-space modeling give high average cross-correlations and low waveform misfit errors between synthetic and observed waveforms for W of 110-130 kt with DOB 700-800 m and a preferred t* = 0.98 s. For the Mueller-Murphy model we find that frequency-dependent absorption band models are not preferred for this test site. Ongoing analysis is exploring effects of receiver crustal layering. Furthermore, we characterize the explosion source time function using the vertical component Pn-waves from regional BB recordings. We correct for attenuation, site and path effects using the lower yield nuclear tests carried out in 2016, 2013 and 2009 as empirical Green's functions. The deconvolved relative source functions exhibit a complex time sequence, with a second peak possibly related to a deviatoric source activated during the large explosion.

  5. The rates and time-delay distribution of multiply imaged supernovae behind lensing clusters

    NASA Astrophysics Data System (ADS)

    Li, Xue; Hjorth, Jens; Richard, Johan

    2012-11-01

    Time delays of gravitationally lensed sources can be used to constrain the mass model of a deflector and determine cosmological parameters. We here present an analysis of the time-delay distribution of multiply imaged sources behind 17 strong lensing galaxy clusters with well-calibrated mass models. We find that for time delays less than 1000 days, at z = 3.0, their logarithmic probability distribution functions are well represented by P(log Δt) = 5.3 × 10^-4 Δt^β̃ / M250^(2β̃), with β̃ = 0.77, where M250 is the projected cluster mass inside 250 kpc (in units of 10^14 M⊙) and β̃ is the power-law slope of the distribution. The resultant probability distribution function enables us to estimate the time-delay distribution in a lensing cluster of known mass. For a cluster with M250 = 2 × 10^14 M⊙, the fraction of time delays less than 1000 days is approximately 3%. Taking Abell 1689 as an example, its dark halo and brightest galaxies, with central velocity dispersions σ ≥ 500 km s^-1, mainly produce large time delays, while galaxy-scale mass clumps are responsible for generating smaller time delays. We estimate the probability of observing multiple images of a supernova in the known images of Abell 1689. A two-component model for estimating the supernova rate is applied in this work. For a magnitude threshold of m_AB = 26.5, the yearly rate of Type Ia (core-collapse) supernovae with time delays less than 1000 days is 0.004 ± 0.002 (0.029 ± 0.001). If the magnitude threshold is lowered to m_AB ~ 27.0, the rate of core-collapse supernovae suitable for time-delay observation is 0.044 ± 0.015 per year.
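
    As a quick numerical check of the quoted fraction, one can integrate the fitted distribution up to 1000 days for M250 = 2 (in units of 10^14 M⊙); base-10 logarithms are assumed here, and the paper's exact normalization conventions may differ, so agreement to within a factor of order unity is all that should be expected.

      # Integrate P(log dt) up to dt = 1000 days for M250 = 2e14 Msun.
      import numpy as np

      beta, M250 = 0.77, 2.0
      P = lambda logdt: 5.3e-4 * (10.0**logdt)**beta / M250**(2*beta)

      logdt = np.linspace(-2, 3, 2001)               # up to 10^3 days
      frac = np.trapz(P(logdt), logdt)
      print(f"fraction with dt < 1000 days: {frac:.3f}")  # same order as the ~3% quoted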

  6. Uranyl adsorption kinetics within silica gel: dependence on flow velocity and concentration

    NASA Astrophysics Data System (ADS)

    Dodd, Brandon M.; Tepper, Gary

    2017-09-01

    Trace quantities of uranyl dissolved in water were measured using a simple optical method. A dilute solution of uranium nitrate dissolved in water was forced through nanoporous silica gel at fixed and controlled water flow rates. The uranyl ions deposited and accumulated within the silica gel, and the uranyl fluorescence within the silica gel was monitored as a function of time using a light emitting diode as the excitation source and a photomultiplier tube detector. It was shown that the response time of the fluorescence output signal at a particular volumetric flow rate or average liquid velocity through the silica gel can be used to quantify the concentration of uranium in water. The response time as a function of concentration decreased with increasing flow velocity.

  7. A two-step super-Gaussian independent component analysis approach for fMRI data.

    PubMed

    Ge, Ruiyang; Yao, Li; Zhang, Hang; Long, Zhiying

    2015-09-01

    Independent component analysis (ICA) has been widely applied to functional magnetic resonance imaging (fMRI) data analysis. Although ICA assumes that the sources underlying the data are statistically independent, it usually ignores the sources' additional properties, such as sparsity. In this study, we propose a two-step super-Gaussian ICA (2SGICA) method that incorporates the sparse prior of the sources into the ICA model. 2SGICA uses the super-Gaussian ICA (SGICA) algorithm, based on a simplified Lewicki-Sejnowski model, to obtain the initial source estimate in the first step. Using a kernel estimator technique, the source density is acquired and fitted to a Laplacian function based on the initial source estimates. The fitted Laplacian prior is used for each source in the second SGICA step. Moreover, the automatic target generation process for initial value generation is used in 2SGICA to guarantee the stability of the algorithm. An adaptive step size selection criterion is also implemented in the proposed algorithm. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of 2SGICA, and made a performance comparison between Infomax ICA, FastICA, mean field ICA (MFICA) with a Laplacian prior, sparse online dictionary learning (ODL), SGICA and 2SGICA. Both the simulated and real fMRI experiments showed that 2SGICA was the most robust to noise and had the best spatial detection power and time course estimation among the six methods. Copyright © 2015. Published by Elsevier Inc.
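
    The sketch below conveys only the two-step flavor of the approach and is not the authors' algorithm: a generic ICA provides initial sources, and a Laplacian density is then fitted to each estimated source, whose scale could serve as the sparse prior of a second pass.

      # Step 1: generic ICA for an initial source estimate.
      # Step 2: maximum-likelihood Laplacian fit to each estimated source.
      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      S = rng.laplace(size=(1000, 3))                 # sparse ground-truth sources
      X = S @ rng.normal(size=(3, 3))                 # mixed observations

      ica = FastICA(n_components=3, random_state=0)   # step 1: initial estimate
      S_hat = ica.fit_transform(X)

      # step 2: Laplacian fit p(s) = exp(-|s - mu| / b) / (2b), per source
      mu = np.median(S_hat, axis=0)
      b = np.mean(np.abs(S_hat - mu), axis=0)
      print("fitted Laplacian scales:", b)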

  8. Efficient source for the production of ultradense deuterium D(-1) for laser-induced fusion (ICF).

    PubMed

    Andersson, Patrik U; Lönn, Benny; Holmlid, Leif

    2011-01-01

    A novel source which simplifies the study of ultradense deuterium D(-1) is now described. This is a further step toward deuterium fusion energy production. The source uses an internal gas feed, and D(-1) can now be studied without time-of-flight spectral overlap from the related dense phase D(1). The main aim here is to understand the material production parameters, and thus a relatively weak laser with focused intensity ≤ 10^12 W cm^-2 is employed for analyzing the D(-1) material. The properties of the D(-1) material at the source are studied as a function of laser focus position outside the emitter, deuterium gas feed, laser pulse repetition frequency, laser power, and temperature of the source. These parameters influence the D(-1) cluster size, the ionization mode, and the laser fragmentation patterns.

  9. Reconfigurable Software for Mission Operations

    NASA Technical Reports Server (NTRS)

    Trimble, Jay

    2014-01-01

    We developed software that provides flexibility to mission organizations through modularity and composability. Modularity enables removal and addition of functionality through the installation of plug-ins. Composability enables users to assemble software from pre-built reusable objects, thus reducing or eliminating the walls associated with traditional application architectures and enabling unique combinations of functionality. We have used composable objects to reduce display build time, create workflows, and build scenarios to test concepts for lunar roving operations. The software is open source, and may be downloaded from https://github.com/nasa/mct.

  10. deFUME: Dynamic exploration of functional metagenomic sequencing data.

    PubMed

    van der Helm, Eric; Geertz-Hansen, Henrik Marcus; Genee, Hans Jasper; Malla, Sailesh; Sommer, Morten Otto Alexander

    2015-07-31

    Functional metagenomic selections represent a powerful technique that is widely applied for identification of novel genes from complex metagenomic sources. However, whereas hundreds to thousands of clones can be easily generated and sequenced over a few days of experiments, analyzing the data is time consuming and constitutes a major bottleneck for experimental researchers in the field. Here we present the deFUME web server, an easy-to-use web-based interface for processing, annotation and visualization of functional metagenomics sequencing data, tailored to meet the requirements of non-bioinformaticians. The web server integrates multiple analysis steps into one single workflow: read assembly, open reading frame prediction, and annotation with BLAST, InterPro and GO classifiers. Analysis results are visualized in an online dynamic web interface. The deFUME web server provides a fast track from raw sequence to a comprehensive visual data overview that facilitates effortless inspection of gene function, clustering and distribution. The web server is available at cbs.dtu.dk/services/deFUME/ and the source code is distributed at github.com/EvdH0/deFUME.

  11. Functional Analysis in Long-Term Operation of High Power UV-LEDs in Continuous Fluoro-Sensing Systems for Hydrocarbon Pollution

    PubMed Central

    Arques-Orobon, Francisco Jose; Nuñez, Neftali; Vazquez, Manuel; Gonzalez-Posadas, Vicente

    2016-01-01

    This work analyzes the long-term functionality of HP (High-Power) UV-LEDs (Ultraviolet Light Emitting Diodes) as the exciting light source in non-contact, continuous 24/7 real-time fluoro-sensing pollutant identification in inland water. Fluorescence is an effective alternative in the detection and identification of hydrocarbons. HP UV-LEDs are more advantageous than classical light sources (xenon and mercury lamps) and help in the development of a low-cost, non-contact, and compact system for continuous real-time fieldwork. This work analyzes the wavelength, the output optical power, the effects of viscosity and temperature of the water pollutants, and the functional consistency of long-term HP UV-LED operation. To accomplish the latter, the degradation of two types of 365 nm HP UV-LEDs under two continuous real-system working modes was analyzed by means of temperature Accelerated Life Tests (ALTs). These tests estimate a mean life of 6200 h under continuous working conditions and of 66,000 h under cycled working conditions (30 s ON & 30 s OFF), i.e., over 7 years of 24/7 operating life for hydrocarbon pollution monitoring. In addition, the durability against internal and external system parameter variations is evaluated. PMID:26927113

  12. Functional Analysis in Long-Term Operation of High Power UV-LEDs in Continuous Fluoro-Sensing Systems for Hydrocarbon Pollution.

    PubMed

    Arques-Orobon, Francisco Jose; Nuñez, Neftali; Vazquez, Manuel; Gonzalez-Posadas, Vicente

    2016-02-26

    This work analyzes the long-term functionality of HP (High-Power) UV-LEDs (Ultraviolet Light Emitting Diodes) as the exciting light source in non-contact, continuous 24/7 real-time fluoro-sensing pollutant identification in inland water. Fluorescence is an effective alternative in the detection and identification of hydrocarbons. HP UV-LEDs are more advantageous than classical light sources (xenon and mercury lamps) and help in the development of a low-cost, non-contact, and compact system for continuous real-time fieldwork. This work analyzes the wavelength, the output optical power, the effects of viscosity and temperature of the water pollutants, and the functional consistency of long-term HP UV-LED operation. To accomplish the latter, the degradation of two types of 365 nm HP UV-LEDs under two continuous real-system working modes was analyzed by means of temperature Accelerated Life Tests (ALTs). These tests estimate a mean life of 6200 h under continuous working conditions and of 66,000 h under cycled working conditions (30 s ON & 30 s OFF), i.e., over 7 years of 24/7 operating life for hydrocarbon pollution monitoring. In addition, the durability against internal and external system parameter variations is evaluated.

  13. Neutron Detector Signal Processing to Calculate the Effective Neutron Multiplication Factor of Subcritical Assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamo, Alberto; Gohar, Yousry

    2016-06-01

    This report describes different methodologies to calculate the effective neutron multiplication factor of subcritical assemblies by processing the neutron detector signals using MATLAB scripts. The subcritical assembly can be driven either by a spontaneous fission neutron source (e.g. californium) or by a neutron source generated from the interactions of accelerated particles with target materials. In the latter case, when the particle accelerator operates in a pulsed mode, the signals are typically stored into two files. One file contains the times when neutron reactions occur and the other contains the times when the neutron pulses start. In both files, the time is given by an integer representing the number of time bins since the start of the counting. These signal files are used to construct the neutron count distribution from a single neutron pulse. The built-in functions of MATLAB are used to calculate the effective neutron multiplication factor through the application of the prompt decay fitting or the area method to the neutron count distribution. If the subcritical assembly is driven by a spontaneous fission neutron source, then the effective multiplication factor can be evaluated either using the prompt neutron decay constant obtained from Rossi or Feynman distributions or the Modified Source Multiplication (MSM) method.
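
    As a sketch of the prompt-decay-fitting step (in Python rather than the report's MATLAB), the following fits N(t) = A exp(-αt) + C to a synthetic pulse-averaged count histogram; converting the fitted α to k_eff requires delayed-neutron and generation-time constants that are not shown here.

      # Fit the prompt decay constant alpha from a neutron count distribution.
      import numpy as np
      from scipy.optimize import curve_fit

      t = np.linspace(0, 5e-3, 200)                        # s after the pulse
      alpha_true = 1500.0                                  # 1/s (synthetic)
      counts = 1e4*np.exp(-alpha_true*t) + 50.0            # decay + background
      counts += np.random.default_rng(2).normal(0, 5, t.size)

      model = lambda t, A, alpha, C: A*np.exp(-alpha*t) + C
      (A, alpha, C), _ = curve_fit(model, t, counts, p0=(1e4, 1e3, 10.0))
      print(f"fitted prompt decay constant alpha = {alpha:.0f} 1/s")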

  14. Towards Seismic Tomography Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Liu, Q.; Tape, C.; Maggi, A.

    2006-12-01

    We outline the theory behind tomographic inversions based on 3D reference models, fully numerical 3D wave propagation, and adjoint methods. Our approach involves computing the Fréchet derivatives for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a spectral-element method (SEM) and a heterogeneous wave-speed model, and stored as synthetic seismograms at particular receivers for which there is data. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the differences between the data and the synthetics are time reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. These kernels may be thought of as weighted sums of measurement-specific banana-donut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of the event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, i.e., the Fréchet derivatives. A conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. Using 2D examples for Rayleigh wave phase-speed maps of southern California, we illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions, and joint source-structure inversions. We also illustrate the characteristics of these 3D finite-frequency kernels based upon adjoint simulations for a variety of global arrivals, e.g., Pdiff, P'P', and SKS, and we illustrate how the approach may be used to investigate body- and surface-wave anisotropy. In adjoint tomography, any time segment in which the data and synthetics match reasonably well is suitable for measurement, and this implies that a much greater number of phases per seismogram can be used compared to classical tomography, in which the sensitivity of the measurements is determined analytically for specific arrivals, e.g., P. We use an automated picking algorithm based upon short-term/long-term averages and strict phase and amplitude anomaly criteria to determine arrivals and time windows suitable for measurement. For shallow global events the algorithm typically identifies of the order of 1000 windows suitable for measurement, whereas for a deep event the number can reach 4000. For southern California earthquakes the number of phases is of the order of 100 for a magnitude 4.0 event and up to 450 for a magnitude 5.0 event. We will show examples of event kernels for both global and regional earthquakes. These event kernels form the basis of adjoint tomography.

  15. Model-driven development of covariances for spatiotemporal environmental health assessment.

    PubMed

    Kolovos, Alexander; Angulo, José Miguel; Modis, Konstantinos; Papantonopoulos, George; Wang, Jin-Feng; Christakos, George

    2013-01-01

    Known conceptual and technical limitations of mainstream environmental health data analysis have directed research to new avenues. The goal is to deal more efficiently with the inherent uncertainty and composite space-time heterogeneity of key attributes, account for multi-sourced knowledge bases (health models, survey data, empirical relationships etc.), and generate more accurate predictions across space-time. Based on a versatile, knowledge synthesis methodological framework, we introduce new space-time covariance functions built by integrating epidemic propagation models and we apply them in the analysis of existing flu datasets. Within the knowledge synthesis framework, the Bayesian maximum entropy theory is our method of choice for the spatiotemporal prediction of the ratio of new infectives (RNI) for a case study of flu in France. The space-time analysis is based on observations during a period of 15 weeks in 1998-1999. We present general features of the proposed covariance functions, and use these functions to explore the composite space-time RNI dependency. We then implement the findings to generate sufficiently detailed and informative maps of the RNI patterns across space and time. The predicted distributions of RNI suggest substantive relationships in accordance with the typical physiographic and climatologic features of the country.

  16. Rhythmic Components in Extracranial Brain Signals Reveal Multifaceted Task Modulation of Overlapping Neuronal Activity

    PubMed Central

    van Ede, Freek; Maris, Eric

    2016-01-01

    Oscillatory neuronal activity is implicated in many cognitive functions, and its phase coupling between sensors may reflect networks of communicating neuronal populations. Oscillatory activity is often studied using extracranial recordings and compared between experimental conditions. This is challenging, because there is overlap between sensor-level activity generated by different sources, and this can obscure differential experimental modulations of these sources. Additionally, in extracranial data, sensor-level phase coupling not only reflects communicating populations, but can also be generated by a current dipole, whose sensor-level phase coupling does not reflect source-level interactions. We present a novel method, which is capable of separating and characterizing sources on the basis of their phase coupling patterns as a function of space, frequency and time (trials). Importantly, this method depends on a plausible model of a neurobiological rhythm. We present this model and an accompanying analysis pipeline. Next, we demonstrate our approach, using magnetoencephalographic (MEG) recordings during a cued tactile detection task as a case study. We show that the extracted components have overlapping spatial maps and frequency content, which are difficult to resolve using conventional pairwise measures. Because our decomposition also provides trial loadings, components can be readily contrasted between experimental conditions. Strikingly, we observed heterogeneity in alpha and beta sources with respect to whether their activity was suppressed or enhanced as a function of attention and performance, and this happened both in task relevant and irrelevant regions. This heterogeneity contrasts with the common view that alpha and beta amplitude over sensory areas are always negatively related to attention and performance. PMID:27336159

  17. Persistent homology of time-dependent functional networks constructed from coupled time series

    NASA Astrophysics Data System (ADS)

    Stolz, Bernadette J.; Harrington, Heather A.; Porter, Mason A.

    2017-04-01

    We use topological data analysis to study "functional networks" that we construct from time-series data from both experimental and synthetic sources. We use persistent homology with a weight rank clique filtration to gain insights into these functional networks, and we use persistence landscapes to interpret our results. Our first example uses time-series output from networks of coupled Kuramoto oscillators. Our second example consists of biological data in the form of functional magnetic resonance imaging data that were acquired from human subjects during a simple motor-learning task in which subjects were monitored for three days during a five-day period. With these examples, we demonstrate that (1) using persistent homology to study functional networks provides fascinating insights into their properties and (2) the position of the features in a filtration can sometimes play a more vital role than persistence in the interpretation of topological features, even though conventionally the latter is used to distinguish between signal and noise. We find that persistent homology can detect differences in synchronization patterns in our data sets over time, giving insight both on changes in community structure in the networks and on increased synchronization between brain regions that form loops in a functional network during motor learning. For the motor-learning data, persistence landscapes also reveal that on average the majority of changes in the network loops take place on the second of the three days of the learning process.

  18. Spectral convergence in tapping and physiological fluctuations: coupling and independence of 1/f noise in the central and autonomic nervous systems

    PubMed Central

    Rigoli, Lillian M.; Holman, Daniel; Spivey, Michael J.; Kello, Christopher T.

    2014-01-01

    When humans perform a response task or timing task repeatedly, fluctuations in measures of timing from one action to the next exhibit long-range correlations known as 1/f noise. The origins of 1/f noise in timing have been debated for over 20 years, with one common explanation serving as a default: humans are composed of physiological processes throughout the brain and body that operate over a wide range of timescales, and these processes combine to be expressed as a general source of 1/f noise. To test this explanation, the present study investigated the coupling vs. independence of 1/f noise in timing deviations, key-press durations, pupil dilations, and heartbeat intervals while tapping to an audiovisual metronome. All four dependent measures exhibited clear 1/f noise, regardless of whether tapping was synchronized or syncopated. 1/f spectra for timing deviations were found to match those for key-press durations on an individual basis, and 1/f spectra for pupil dilations matched those in heartbeat intervals. Results indicate a complex, multiscale relationship among 1/f noises arising from common sources, such as those arising from timing functions vs. those arising from autonomic nervous system (ANS) functions. Results also provide further evidence against the default hypothesis that 1/f noise in human timing is just the additive combination of processes throughout the brain and body. Our findings are better accommodated by theories of complexity matching that begin to formalize multiscale coordination as a foundation of human behavior. PMID:25309389
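
    For concreteness, this is how a 1/f exponent is commonly estimated from such series: take a Welch power spectrum and fit a log-log slope. The series synthesized below is generic 1/f noise, not the study's tapping or physiological data.

      # Synthesize a 1/f series by spectral shaping, then estimate its exponent.
      import numpy as np
      from scipy.signal import welch

      rng = np.random.default_rng(3)
      n = 4096
      freqs = np.fft.rfftfreq(n)
      amp = np.zeros_like(freqs)
      amp[1:] = freqs[1:]**-0.5                   # amplitude ~ f^-1/2 -> power ~ 1/f
      phase = np.exp(2j*np.pi*rng.random(freqs.size))
      x = np.fft.irfft(amp*phase, n)              # synthetic 1/f time series

      f, p = welch(x, nperseg=1024)
      keep = f > 0
      slope = np.polyfit(np.log10(f[keep]), np.log10(p[keep]), 1)[0]
      print(f"estimated spectral exponent: {slope:.2f}")   # expect about -1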

  19. a long, long time ago...

    Treesearch

    Elliot West; Greg Ruark

    2004-01-01

    Riparian areas - land adjacent to a streambank or other water body - filter nonpoint source pollution. Unfortunately, the riparian areas of today include only narrow bands of forest, or no woody vegetation at all, which greatly diminishes their ecological function. In deciding how to manage these areas, knowing the natural riparian makeup before humans settled in the area...

  20. Tracking Poverty Reduction in Bhutan: Income Deprivation Alongside Deprivation in Other Sources of Happiness

    ERIC Educational Resources Information Center

    Santos, Maria Emma

    2013-01-01

    This paper analyses poverty reduction in Bhutan between two points in time--2003 and 2007--from a multidimensional perspective. The measures estimated include consumption expenditure as well as other indicators which are directly (when possible) or indirectly associated with valuable functionings, namely, health, education, access to electricity,…

  1. The X-Ray Background and the AGN Luminosity Function

    NASA Astrophysics Data System (ADS)

    Hasinger, G.

    The deepest X-ray surveys performed with ROSAT were able to resolve as much as 70-80% of the 1-2 keV X-ray background into resolved sources. Optical follow-up observations were able to identify the majority of faint X-ray sources as active galactic nuclei (AGN) out to redshifts of 4.5 as well as a sizeable fraction as groups of galaxies out to redshifts of 0.7. A new population of X-ray luminous, optically innocent narrow emission line galaxies (NELGs) at the faintest X-ray fluxes is still a matter of debate, most likely many of them are also connected to AGN. First deep surveys with the Japanese ASCA satellite give us a glimpse of the harder X-ray background where the bulk of the energy density resides. Future X-ray observatories (XMM and AXAF) will be able to resolve the harder X-ray background. For the first time we are now in a position to study the cosmological evolution of the X-ray luminosity function of AGN, groups of galaxies and galaxies and simultaneously constrain their total luminosity output over cosmic time.

  2. Main functions, recent updates, and applications of Synchrotron Radiation Workshop code

    NASA Astrophysics Data System (ADS)

    Chubar, Oleg; Rakitin, Maksim; Chen-Wiegart, Yu-Chen Karen; Chu, Yong S.; Fluerasu, Andrei; Hidas, Dean; Wiegart, Lutz

    2017-08-01

    The paper presents an overview of the main functions and new application examples of the "Synchrotron Radiation Workshop" (SRW) code. SRW supports high-accuracy calculations of different types of synchrotron radiation, and simulations of propagation of fully-coherent radiation wavefronts, partially-coherent radiation from a finite-emittance electron beam of a storage ring source, and time-/frequency-dependent radiation pulses of a free-electron laser, through X-ray optical elements of a beamline. An extended library of physical-optics "propagators" for different types of reflective, refractive and diffractive X-ray optics with their typical imperfections, implemented in SRW, enables simulation of practically any X-ray beamline in a modern light source facility. The high accuracy of the calculation methods used in SRW allows for multiple applications of this code, not only in the area of development of instruments and beamlines for new light source facilities, but also in areas such as electron beam diagnostics, commissioning and performance benchmarking of insertion devices and individual X-ray optical elements of beamlines. Applications of SRW in these areas, facilitating development and advanced commissioning of beamlines at the National Synchrotron Light Source II (NSLS-II), are described.

  3. Portable measurement system for real-time acquisition and analysis of in-vivo spatially resolved reflectance in the subdiffusive regime

    NASA Astrophysics Data System (ADS)

    Naglič, Peter; Ivančič, Matic; Pernuš, Franjo; Likar, Boštjan; Bürmen, Miran

    2018-02-01

    A measurement system was developed to acquire and analyze subdiffusive spatially resolved reflectance using an optical fiber probe with short source-detector separations. Since subdiffusive reflectance significantly depends on the scattering phase function, the analysis of the acquired reflectance is based on a novel inverse Monte Carlo model that allows estimation of phase function related parameters in addition to the absorption and reduced scattering coefficients. In conjunction with our measurement system, the model allowed real-time estimation of optical properties, which we demonstrate for a case of dynamically induced changes in human skin by applying pressure with an optical fiber probe.

  4. SU-F-BRB-16: A Spreadsheet Based Automatic Trajectory GEnerator (SAGE): An Open Source Tool for Automatic Creation of TrueBeam Developer Mode Robotic Trajectories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Etmektzoglou, A; Mishra, P; Svatos, M

    Purpose: To automate creation and delivery of robotic linac trajectories with TrueBeam Developer Mode, an open source spreadsheet-based trajectory generation tool has been developed, tested and made freely available. The computing power inherent in a spreadsheet environment plus additional functions programmed into the tool insulate users from the underlying schema tedium and allow easy calculation, parameterization, graphical visualization, validation and finally automatic generation of Developer Mode XML scripts which are directly loadable on a TrueBeam linac. Methods: The robotic control system platform that allows total coordination of potentially all linac moving axes with beam (continuous, step-and-shoot, or combination thereof) becomes available in TrueBeam Developer Mode. Many complex trajectories are either geometric or can be described in analytical form, making the computational power, graphing and programmability available in a spreadsheet environment an easy and ideal vehicle for automatic trajectory generation. The spreadsheet environment allows also for parameterization of trajectories, thus enabling the creation of entire families of trajectories using only a few variables. Standard spreadsheet functionality has been extended for powerful movie-like dynamic graphic visualization of the gantry, table, MLC, room, lasers, 3D observer placement and beam centerline, all as a function of MU or time, for analysis of the motions before requiring actual linac time. Results: We used the tool to generate and deliver extended SAD "virtual isocenter" trajectories of various shapes such as parameterized circles and ellipses. We also demonstrated use of the tool in generating linac couch motions that simulate respiratory motion using analytical parameterized functions. Conclusion: The SAGE tool is a valuable resource to experiment with families of complex geometric trajectories for a TrueBeam linac. It makes Developer Mode more accessible as a vehicle to quickly translate research ideas into machine readable scripts without programming knowledge. As an open source initiative, it also enables researcher collaboration on future developments. I am a full time employee at Varian Medical Systems, Palo Alto, California.
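
    A sketch of the kind of parameterized trajectory the tool emits, sampling gantry angle and an elliptical couch path as a function of fractional MU; the Developer Mode XML serialization and all axis limits are omitted, and the numbers below are assumptions for illustration.

      # Parameterized "virtual isocenter"-style trajectory as control points.
      import numpy as np

      n_cp = 90                                   # number of control points
      mu = np.linspace(0.0, 1.0, n_cp)            # fractional MU
      gantry = (181.0 + 358.0*mu) % 360.0         # full-arc gantry angle, degrees
      a, b = 3.0, 1.5                             # ellipse semi-axes, cm (assumed)
      couch_x = a*np.cos(2*np.pi*mu)              # lateral couch motion
      couch_y = b*np.sin(2*np.pi*mu)              # longitudinal couch motion

      for i in (0, n_cp//2, n_cp - 1):
          print(f"cp {i:3d}: gantry {gantry[i]:6.1f} deg, "
                f"couch ({couch_x[i]:+.2f}, {couch_y[i]:+.2f}) cm")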

  5. TOPEM: A PET-TOF endorectal probe, compatible with MRI for diagnosis and follow up of prostate cancer

    NASA Astrophysics Data System (ADS)

    Garibaldi, F.; Capuani, S.; Colilli, S.; Cosentino, L.; Cusanno, F.; De Leo, R.; Finocchiaro, P.; Foresta, M.; Giove, F.; Giuliani, F.; Gricia, M.; Loddo, F.; Lucentini, M.; Maraviglia, B.; Meddi, F.; Monno, E.; Musico, P.; Pappalardo, A.; Perrino, R.; Ranieri, A.; Rivetti, A.; Santavenere, F.; Tamma, C.

    2013-02-01

    Prostate cancer is the most common disease in men and the second leading cause of cancer death. Generic large-scale diagnostic instruments have sensitivity, spatial resolution, and contrast inferior to those of dedicated prostate imagers. Multimodality imaging can play a significant role by merging the anatomical and functional details coming from simultaneous PET and MRI. The TOPEM project has the goal of designing, building, and testing an endorectal PET-TOF MRI probe. The performance is dominated by the detector close to the source. Results from simulation show spatial resolution of ∼1.5 mm for source distances up to 80 mm. The efficiency is significantly improved with respect to external PET. Mini-detectors have been built and tested. We obtained, for the first time to the best of our knowledge, a timing resolution of <400 ps together with a Depth Of Interaction (DOI) resolution of 1 mm or less.

  6. Active Control of the Forced and Transient Response of a Finite Beam. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Post, John Theodore

    1989-01-01

    When studying structural vibrations resulting from a concentrated source, many structures may be modelled as a finite beam excited by a point source. The theoretical limit on cancelling the resulting beam vibrations by utilizing another point source as an active controller is explored. Three different types of excitation are considered: harmonic, random, and transient. In each case, a cost function is defined and minimized for numerous parameter variations. For the case of harmonic excitation, the cost function is obtained by integrating the mean squared displacement over a region of the beam in which control is desired. A controller is then found to minimize this cost function in the control interval. The control interval and controller location are continuously varied for several frequencies of excitation. The results show that control over the entire beam length is possible only when the excitation frequency is near a resonant frequency of the beam, but control over a subregion may be obtained even between resonant frequencies at the cost of increasing the vibration outside of the control region. For random excitation, the cost function is realized by integrating the expected value of the displacement squared over the interval of the beam in which control is desired. This is shown to yield a cost function identical to that obtained by integrating the cost function for harmonic excitation over all excitation frequencies. As a result, it is always possible to reduce the cost function for random excitation whether controlling the entire beam or just a subregion, without ever increasing the vibration outside the region in which control is desired. The last type of excitation considered is a single, transient pulse. A cost function representative of the beam vibration is obtained by integrating the transient displacement squared over a region of the beam and over all time. The form of the controller is chosen a priori as either one or two delayed pulses. Delays constrain the controller to be causal. The best possible control is then examined while varying the region of control and the controller location. It is found that control is always possible using either one or two control pulses. The two pulse controller gives better performance than a single pulse controller, but the effort required to find the optimal delay times for the additional controllers increases as the square of the number of control pulses.
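
    A sketch of the harmonic-excitation case described above: for a simply supported beam modelled by modal superposition, the complex amplitude of a secondary point force that minimizes the integrated squared displacement over a control region follows from a one-dimensional least-squares problem. All physical parameters below are illustrative, not taken from the thesis.

      # Optimal secondary-force amplitude for a modal beam model (sketch).
      import numpy as np

      L, n_modes, omega = 1.0, 20, 500.0           # beam length, modes kept, rad/s
      x = np.linspace(0.0, L, 400)                 # evaluation points along the beam
      ctrl = (x > 0.4) & (x < 0.9)                 # region where control is desired
      xp, xc = 0.3, 0.7                            # primary / control force positions
      n = np.arange(1, n_modes + 1)

      def response(xf):
          """Complex displacement field per unit harmonic force at xf."""
          wn2 = (n * np.pi / L)**4                 # modal stiffness term (EI = rho = 1)
          modal = np.sin(n * np.pi * xf / L) / (wn2 - omega**2 + 1j * 0.01 * wn2)
          return modal @ np.sin(np.outer(n, x) * np.pi / L)

      wp, wc = response(xp), response(xc)
      q = -np.vdot(wc[ctrl], wp[ctrl]) / np.vdot(wc[ctrl], wc[ctrl])  # optimal amplitude
      before = np.sum(np.abs(wp[ctrl])**2)
      after = np.sum(np.abs(wp[ctrl] + q * wc[ctrl])**2)
      print(f"control-region cost change: {10*np.log10(after/before):.1f} dB")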

  7. The development of data acquisition and processing application system for RF ion source

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaodan; Wang, Xiaoying; Hu, Chundong; Jiang, Caichao; Xie, Yahong; Zhao, Yuanzhe

    2017-07-01

    As the key ion source component of nuclear fusion auxiliary heating devices, the radio frequency (RF) ion source has been developed and gradually applied to provide a source plasma with the advantages of ease of control and high reliability; it also readily achieves long-pulse steady-state operation. During the development and testing of the RF ion source, a large amount of raw experimental data is generated. Therefore, it is necessary to develop a stable and reliable computer data acquisition and processing application system for data acquisition, storage, access, and real-time monitoring. In this paper, the development of a data acquisition and processing application system for the RF ion source is presented. The hardware platform is based on the PXI system and the software is programmed in the LabVIEW development environment. The key technologies used in the implementation of this software mainly include long-pulse data acquisition, multi-threaded processing, the transmission control communication protocol, and the Lempel-Ziv-Oberhumer data compression algorithm. The design has been tested and applied on the RF ion source; the test results show that it works reliably and steadily. With the help of this design, the stable plasma discharge data of the RF ion source are collected, stored, accessed, and monitored in real time, which is of practical significance for RF ion source experiments.

  8. Modified superposition: A simple time series approach to closed-loop manual controller identification

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.; Leban, F.; Mashiko, S.

    1986-01-01

    Single-channel pilot manual control output in closed-loop tracking tasks is modeled in terms of linear discrete transfer functions which are parsimonious and guaranteed stable. The transfer functions are found by applying a modified superposition time-series generation technique. A Levinson-Durbin algorithm is used to determine the filter which prewhitens the input, and a projective (least squares) fit of pulse response estimates is used to guarantee identified model stability. Results from two case studies are compared to previous findings, where the sources of data are relatively short records, approximately 25 seconds long. Time delay effects and pilot seasonalities are discussed and analyzed. It is concluded that single-channel time-series controller modeling is feasible on short records, and that it is important for the analyst to determine a criterion for best time domain fit which allows association of model parameter values, such as pure time delay, with actual physical and physiological constraints. The purpose of the modeling is thus paramount.
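
    A sketch of the Levinson-Durbin recursion used to design the prewhitening filter from the sample autocorrelation of the input record; the record here is synthetic white noise, so the fitted AR coefficients should come out near zero.

      # Levinson-Durbin recursion: AR prewhitening filter from autocorrelation.
      import numpy as np

      def levinson_durbin(r, order):
          """AR coefficients a (a[0] = 1) from autocorrelation lags r[0..order]."""
          a = np.zeros(order + 1); a[0] = 1.0
          err = r[0]
          for k in range(1, order + 1):
              acc = r[k] + a[1:k] @ r[k-1:0:-1]
              ref = -acc / err                      # reflection coefficient
              a[1:k] = a[1:k] + ref * a[k-1:0:-1]
              a[k] = ref
              err *= (1.0 - ref**2)                 # prediction error update
          return a, err

      x = np.random.default_rng(4).normal(size=2048)       # stand-in input record
      r = np.correlate(x, x, "full")[x.size-1:] / x.size   # biased autocorrelation
      a, err = levinson_durbin(r[:11], order=10)
      print("prewhitening AR coefficients:", np.round(a, 3))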

  9. Estimating degradation in real time and accelerated stability tests with random lot-to-lot variation: a simulation study.

    PubMed

    Magari, Robert T

    2002-03-01

    The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV ≥ 8%), the estimated confidence intervals do not represent the trend for individual lots; in such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association. J Pharm Sci 91: 893-899, 2002.
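
    A sketch of the real-time model with random lot effects, fitted by maximum likelihood; statsmodels' MixedLM is used here as a stand-in for the paper's estimation code, and all parameter values are invented.

      # Random-intercept, random-slope degradation model across lots.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(5)
      rows = []
      for lot in range(8):
          b0 = 100.0 + rng.normal(0, 1.0)          # lot-specific level at time zero
          b1 = -0.50 + rng.normal(0, 0.04)         # lot-specific degradation rate
          for t in np.arange(0, 25, 3):
              rows.append((lot, t, b0 + b1*t + rng.normal(0, 0.5)))
      df = pd.DataFrame(rows, columns=["lot", "time", "conc"])

      m = smf.mixedlm("conc ~ time", df, groups="lot", re_formula="~time")
      fit = m.fit(reml=False)                       # maximum likelihood, as in the paper
      print(fit.summary())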

  10. Attenuation - The Ugly Stepsister of Velocity in the Noise Correlation Family

    NASA Astrophysics Data System (ADS)

    Lawrence, J. F.; Prieto, G.; Denolle, M.; Seats, K. J.

    2012-12-01

    Noise correlation functions and noise transfer functions have been shown in practice to preserve relative amplitude information, despite the challenge of resolving it reliably compared with phase information. Yet amplitude contains important information about wavefield interactions with the subsurface structure, including focusing/defocusing and seismic attenuation. To focus on the anelastic effects, or attenuation, we measure amplitude decay with increasing station separation (distance). We present numerical results showing that noise correlation functions (NCFs) preserve the relative amplitude information and properly retrieve seismic attenuation for a sufficient noise source distribution and appropriate processing. Attenuation is only preserved through the relative decay of distinct waves from multiple simultaneous source locations. With appropriate whitening (and no time domain normalization), the coherency preserves correlation amplitudes proportional to the relative decay expected at all inter-station spacings. We present new attenuation results for the United States, and particularly the Yellowstone region, that illustrate lateral variations strongly correlated with known geological features such as sedimentary basins, crustal blocks and active volcanism.
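
    A sketch of the amplitude-decay measurement: fit A(r) = A0 exp(-αr)/√r to coherency amplitudes as a function of station separation. The data are synthetic, and the √r term assumes surface-wave geometrical spreading.

      # Fit an attenuation coefficient to inter-station amplitude decay.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(10)
      r = np.linspace(20e3, 400e3, 40)                     # station separations, m
      alpha_true = 5e-6                                    # 1/m (synthetic)
      A = 3.0*np.exp(-alpha_true*r)/np.sqrt(r) * (1 + 0.05*rng.normal(size=r.size))

      model = lambda r, A0, alpha: A0*np.exp(-alpha*r)/np.sqrt(r)
      (A0, alpha), _ = curve_fit(model, r, A, p0=(1.0, 1e-6))
      print(f"fitted attenuation coefficient alpha = {alpha:.2e} 1/m")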

  11. Application of independent component analysis for speech-music separation using an efficient score function estimation

    NASA Astrophysics Data System (ADS)

    Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza

    2012-12-01

    In this paper, speech-music separation using blind source separation is discussed. The separation algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. To do this, the score function must be estimated from samples of the observed signals (the combination of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. Experimental results of the presented algorithm on speech-music separation, compared with a separation algorithm based on the minimum mean square error estimator, indicate that it achieves better performance and less processing time.
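
    A sketch of the natural-gradient update at the core of such algorithms; a fixed tanh score function stands in here for the paper's Gaussian-mixture kernel density estimate of the score.

      # Natural-gradient ICA with a fixed tanh score-function stand-in.
      import numpy as np

      rng = np.random.default_rng(6)
      S = rng.laplace(size=(2, 5000))                  # two sparse sources
      A = np.array([[1.0, 0.6], [0.4, 1.0]])           # mixing matrix
      X = A @ S                                        # observed mixtures

      W, step = np.eye(2), 0.01
      for _ in range(300):
          Y = W @ X
          phi = np.tanh(Y)                             # score-function estimate
          W += step * (np.eye(2) - phi @ Y.T / Y.shape[1]) @ W  # natural gradient
      print("W @ A (should approach a scaled permutation):\n", W @ A)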

  12. Advanced functional network analysis in the geosciences: The pyunicorn package

    NASA Astrophysics Data System (ADS)

    Donges, Jonathan F.; Heitzig, Jobst; Runge, Jakob; Schultz, Hanna C. H.; Wiedermann, Marc; Zech, Alraune; Feldhoff, Jan; Rheinwalt, Aljoscha; Kutza, Hannes; Radebach, Alexander; Marwan, Norbert; Kurths, Jürgen

    2013-04-01

    Functional networks are a powerful tool for analyzing large geoscientific datasets such as global fields of climate time series originating from observations or model simulations. pyunicorn (pythonic unified complex network and recurrence analysis toolbox) is an open-source, fully object-oriented and easily parallelizable package written in the language Python. It allows for constructing functional networks (aka climate networks) representing the structure of statistical interrelationships in large datasets and, subsequently, investigating this structure using advanced methods of complex network theory, such as measures for networks of interacting networks, node-weighted statistics or network surrogates. Additionally, pyunicorn makes it possible to study the complex dynamics of geoscientific systems, as recorded by time series, by means of recurrence networks and visibility graphs. The range of possible applications of the package is outlined, drawing on several examples from climatology.
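
    To avoid guessing pyunicorn's exact API, here is the underlying functional-network construction in plain numpy: threshold the correlation matrix of a set of time series into an adjacency matrix. pyunicorn wraps this kind of construction together with the advanced network measures mentioned above.

      # Functional (climate) network from correlated time series (sketch).
      import numpy as np

      rng = np.random.default_rng(7)
      data = rng.normal(size=(50, 500))          # 50 grid points, 500 time steps
      data[1] += 0.7*data[0]                     # induce one interrelationship

      corr = np.corrcoef(data)
      np.fill_diagonal(corr, 0.0)
      adj = (np.abs(corr) > 0.5).astype(int)     # thresholded adjacency matrix
      degree = adj.sum(axis=0)
      print("edges:", adj.sum()//2, "max degree:", degree.max())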

  13. Wind-instrument reflection function measurements in the time domain.

    PubMed

    Keefe, D H

    1996-04-01

    Theoretical and computational analyses of wind-instrument sound production in the time domain have emerged as useful tools for understanding musical instrument acoustics, yet there exist few experimental measurements of the air-column response directly in the time domain. A new experimental, time-domain technique is proposed to measure the reflection function response of woodwind and brass-instrument air columns. This response is defined at the location of sound regeneration in the mouthpiece or double reed. A probe assembly comprising an acoustic source and microphone is inserted directly into the air column entryway using a foam plug to ensure a leak-free fit. An initial calibration phase involves measurements on a single cylindrical tube of known dimensions. Measurements are presented on an alto saxophone and euphonium. The technique has promise for testing any musical-instrument air column using a single probe assembly and foam plugs over a range of diameters typical of air-column entryways.

  14. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    NASA Astrophysics Data System (ADS)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  15. An evolutive real-time source inversion based on a linear inverse formulation

    NASA Astrophysics Data System (ADS)

    Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.

    2016-12-01

    Finite source inversion is a stepping stone to unveiling earthquake rupture. It is used in ground-motion prediction, and its results shed light on the seismic cycle for a better tectonic understanding. It is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made on approaches to earthquake imaging, thanks to new data acquisition and methodological advances. However, most of these techniques are a posteriori procedures applied once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source build-up while recording data. In order to move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, for which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station specifically. The selected unknowns are the spatio-temporal slip-rate distribution, which keeps the forward problem linear with respect to the unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build up the slip rate when adding new data by assuming rupture causality. The formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used to stabilize the inversion, we avoid strategies based on parameter reduction, which would lead to an unwanted non-linear relationship between parameters and seismograms in our progressive build-up. Rise time, rupture velocity and other quantities can be extracted later as attributes of the slip-rate inversion we perform. Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al., 2011). A real case application is currently being explored. Our specific formulation, combined with simple prior information, as well as the numerical results obtained so far, yields interesting perspectives for a real-time implementation.
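
    A sketch of the linear formulation's core: with Green's functions precomputed, seismograms are linear in the slip-rate unknowns, so each data update reduces to a (Tikhonov-regularized) linear least-squares solve. Sizes and values below are illustrative only.

      # Regularized linear inversion d = G m for a slip-rate model m.
      import numpy as np

      rng = np.random.default_rng(8)
      nt, ns = 200, 40                       # data samples, slip-rate unknowns
      G = rng.normal(size=(nt, ns))          # precomputed Green's-function matrix
      m_true = np.zeros(ns); m_true[10:18] = 1.0       # compact slip-rate pulse
      d = G @ m_true + 0.05*rng.normal(size=nt)        # noisy seismograms

      lam = 1.0                              # damping weight (regularization)
      A = G.T @ G + lam*np.eye(ns)
      m = np.linalg.solve(A, G.T @ d)        # normal-equations solve
      print("recovered peak slip-rate index:", int(np.argmax(m)))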

  16. Towards real-time regional earthquake simulation I: real-time moment tensor monitoring (RMT) for regional events in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liang, Wen-Tzong; Cheng, Hui-Wen; Tu, Feng-Shan; Ma, Kuo-Fong; Tsuruoka, Hiroshi; Kawakatsu, Hitoshi; Huang, Bor-Shouh; Liu, Chun-Chi

    2014-01-01

    We have developed a real-time moment tensor monitoring system (RMT) which takes advantage of a grid-based moment tensor inversion technique and real-time broad-band seismic recordings to automatically monitor earthquake activities in the vicinity of Taiwan. The centroid moment tensor (CMT) inversion technique and a grid search scheme are applied to obtain the earthquake source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism. All of these source parameters can be determined simultaneously within 117 s after the occurrence of an earthquake. The monitoring area involves the entire Taiwan Island and the offshore region, which covers the area of 119.3°E to 123.0°E and 21.0°N to 26.0°N, with a depth from 6 to 136 km. A 3-D grid system is implemented in the monitoring area with a uniform horizontal interval of 0.1° and a vertical interval of 10 km. The inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. We compare our results with the Central Weather Bureau (CWB) catalogue data for earthquakes that occurred between 2010 and 2012. The average differences in event origin time and hypocentral location are less than 2 s and 10 km, respectively. The focal mechanisms determined by RMT are also comparable with the Broadband Array in Taiwan for Seismology (BATS) CMT solutions. These results indicate that the RMT system is feasible and efficient for monitoring local seismic activity. In addition, the time needed to obtain all the point source parameters is reduced substantially compared to routine earthquake reports. By connecting RMT with a real-time online earthquake simulation (ROS) system, all the source parameters will be forwarded to the ROS to make real-time earthquake simulation feasible. The RMT has operated offline (2010-2011) and online (from January 2012 to the present) at the Institute of Earth Sciences (IES), Academia Sinica (http://rmt.earth.sinica.edu.tw). The long-term goal of this system is to provide real-time source information for rapid seismic hazard assessment during large earthquakes.
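
    A sketch of the grid-search logic: at each node of a precomputed Green's-function database the six moment-tensor components enter linearly, so one linear least-squares solve per node suffices, and the node with the highest variance reduction is kept. The database below is a random stand-in, not an fk-computed one.

      # Grid-based moment-tensor inversion by per-node least squares.
      import numpy as np

      rng = np.random.default_rng(9)
      n_grid, nt = 100, 600                       # trial source nodes, data samples
      Gdb = rng.normal(size=(n_grid, nt, 6))      # Green's-function database (stand-in)
      true_mt = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])
      data = Gdb[42] @ true_mt + 0.1*rng.normal(size=nt)

      best = (-np.inf, None, None)
      for i in range(n_grid):
          mt, *_ = np.linalg.lstsq(Gdb[i], data, rcond=None)
          syn = Gdb[i] @ mt
          vr = 1.0 - np.sum((data - syn)**2)/np.sum(data**2)   # variance reduction
          if vr > best[0]:
              best = (vr, i, mt)
      print(f"best node {best[1]} (VR = {best[0]:.2f}), MT: {np.round(best[2], 2)}")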

  17. Patient Health Record Systems Scope and Functionalities: Literature Review and Future Directions

    PubMed Central

    2017-01-01

    Background A new generation of user-centric information systems is emerging in health care as patient health record (PHR) systems. These systems create a platform supporting the new vision of health services that empowers patients and enables patient-provider communication, with the goal of improving health outcomes and reducing costs. This evolution has generated new sets of data and capabilities, providing opportunities and challenges at the user, system, and industry levels. Objective The objective of our study was to assess PHR data types and functionalities through a review of the literature to inform the health care informatics community, and to provide recommendations for PHR design, research, and practice. Methods We conducted a review of the literature to assess PHR data types and functionalities. We searched PubMed, Embase, and MEDLINE databases from 1966 to 2015 for studies of PHRs, resulting in 1822 articles, from which we selected a total of 106 articles for a detailed review of PHR data content. Results We present several key findings related to the scope and functionalities in PHR systems. We also present a functional taxonomy and chronological analysis of PHR data types and functionalities, to improve understanding and provide insights for future directions. Functional taxonomy analysis of the extracted data revealed the presence of new PHR data sources such as tracking devices and data types such as time-series data. Chronological data analysis showed an evolution of PHR system functionalities over time, from simple data access to data modification and, more recently, automated assessment, prediction, and recommendation. Conclusions Efforts are needed to improve (1) PHR data quality through patient-centered user interface design and standardized patient-generated data guidelines, (2) data integrity through consolidation of various types and sources, (3) PHR functionality through application of new data analytics methods, and (4) metrics to evaluate clinical outcomes associated with automated PHR system use, and costs associated with PHR data storage and analytics. PMID:29141839

  18. Gaia Data Release 1. Pre-processing and source list creation

    NASA Astrophysics Data System (ADS)

    Fabricius, C.; Bastian, U.; Portell, J.; Castañeda, J.; Davidson, M.; Hambly, N. C.; Clotet, M.; Biermann, M.; Mora, A.; Busonero, D.; Riva, A.; Brown, A. G. A.; Smart, R.; Lammers, U.; Torra, J.; Drimmel, R.; Gracia, G.; Löffler, W.; Spagna, A.; Lindegren, L.; Klioner, S.; Andrei, A.; Bach, N.; Bramante, L.; Brüsemeister, T.; Busso, G.; Carrasco, J. M.; Gai, M.; Garralda, N.; González-Vidal, J. J.; Guerra, R.; Hauser, M.; Jordan, S.; Jordi, C.; Lenhardt, H.; Mignard, F.; Messineo, R.; Mulone, A.; Serraller, I.; Stampa, U.; Tanga, P.; van Elteren, A.; van Reeven, W.; Voss, H.; Abbas, U.; Allasia, W.; Altmann, M.; Anton, S.; Barache, C.; Becciani, U.; Berthier, J.; Bianchi, L.; Bombrun, A.; Bouquillon, S.; Bourda, G.; Bucciarelli, B.; Butkevich, A.; Buzzi, R.; Cancelliere, R.; Carlucci, T.; Charlot, P.; Collins, R.; Comoretto, G.; Cross, N.; Crosta, M.; de Felice, F.; Fienga, A.; Figueras, F.; Fraile, E.; Geyer, R.; Hernandez, J.; Hobbs, D.; Hofmann, W.; Liao, S.; Licata, E.; Martino, M.; McMillan, P. J.; Michalik, D.; Morbidelli, R.; Parsons, P.; Pecoraro, M.; Ramos-Lerate, M.; Sarasso, M.; Siddiqui, H.; Steele, I.; Steidelmüller, H.; Taris, F.; Vecchiato, A.; Abreu, A.; Anglada, E.; Boudreault, S.; Cropper, M.; Holl, B.; Cheek, N.; Crowley, C.; Fleitas, J. M.; Hutton, A.; Osinde, J.; Rowell, N.; Salguero, E.; Utrilla, E.; Blagorodnova, N.; Soffel, M.; Osorio, J.; Vicente, D.; Cambras, J.; Bernstein, H.-H.

    2016-11-01

    Context. The first data release from the Gaia mission contains accurate positions and magnitudes for more than a billion sources, and proper motions and parallaxes for the majority of the 2.5 million Hipparcos and Tycho-2 stars. Aims: We describe three essential elements of the initial data treatment leading to this catalogue: the image analysis, the construction of a source list, and the near real-time monitoring of the payload health. We also discuss some weak points that set limitations for the attainable precision at the present stage of the mission. Methods: Image parameters for point sources are derived from one-dimensional scans using a maximum likelihood method, under the assumption of a line spread function that is constant in time, and with a complete modelling of bias and background. These conditions are, however, not completely fulfilled. The Gaia source list is built starting from a large ground-based catalogue, but even so a significant number of new entries have been added, and a large number have been removed. The autonomous onboard star image detection picks up many spurious images, especially around bright sources, and such unwanted detections must be identified. Another key step of the source list creation consists of arranging the more than 10^10 individual detections in spatially isolated groups that can be analysed individually. Results: Complete software systems have been built for the Gaia initial data treatment, which manage approximately 50 million focal plane transits daily, giving transit times and fluxes for 500 million individual CCD images to the astrometric and photometric processing chains. The software also carries out a successful and detailed daily monitoring of Gaia health.
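
    As an illustration of the image-parameter estimation described above, the following is a toy sketch of a one-dimensional maximum-likelihood fit for the flux and centroid of a point source, assuming a Gaussian line spread function, Poisson counting noise, and a known constant background. The function names and simulated numbers are hypothetical; the actual Gaia pipeline uses calibrated, time-dependent LSF models and a full bias/background treatment.

```python
import numpy as np
from scipy.optimize import minimize


def lsf(x, center, sigma=1.0):
    """Toy Gaussian line spread function sampled at pixel centers x."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))


def neg_log_likelihood(params, x, counts, background):
    """Poisson negative log-likelihood (up to a constant) for a 1-D image."""
    flux, center = params
    model = background + flux * lsf(x, center)
    return np.sum(model - counts * np.log(model))


# Simulate a one-dimensional scan: 20 pixels over a constant background.
rng = np.random.default_rng(1)
x = np.arange(20.0)
true_flux, true_center, background = 500.0, 9.3, 5.0
counts = rng.poisson(background + true_flux * lsf(x, true_center))

# Fit flux and centroid; initialise from the total counts and the peak pixel.
result = minimize(neg_log_likelihood,
                  x0=[counts.sum(), x[np.argmax(counts)]],
                  args=(x, counts, background),
                  method="Nelder-Mead")
flux_hat, center_hat = result.x
print(f"flux ~ {flux_hat:.1f}, centroid ~ {center_hat:.3f} px")
```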

  19. Microseismic Full Waveform Modeling in Anisotropic Media with Moment Tensor Implementation

    NASA Astrophysics Data System (ADS)

    Shi, Peidong; Angus, Doug; Nowacki, Andy; Yuan, Sanyi; Wang, Yanyan

    2018-03-01

    Seismic anisotropy, which is common in shale and fractured rocks, causes travel-time and amplitude discrepancies in different propagation directions. For microseismic monitoring, which is often carried out in shale or fractured rocks, seismic anisotropy therefore needs to be carefully accounted for in source location and mechanism determination. We have developed an efficient finite-difference full waveform modeling tool with an arbitrary moment tensor source, suitable for simulating wave propagation in anisotropic media for microseismic monitoring. As both dislocation and non-double-couple sources are often observed in microseismic monitoring, an arbitrary moment tensor source is implemented in our forward modeling tool. The increments of shear stress are distributed equally on the staggered grid to implement an accurate and symmetric moment tensor source. Our modeling tool provides an efficient way to obtain the Green's function in anisotropic media, which is the key to anisotropic moment tensor inversion and source mechanism characterization in microseismic monitoring. In our research, wavefields in anisotropic media have been carefully simulated and analyzed for both surface and downhole arrays. The travel-times and amplitudes of the direct P- and S-waves vary in distinctly different ways in vertical transverse isotropic media and horizontal transverse isotropic media, providing a feasible way to identify the anisotropy type of the subsurface. Analyzing the travel-times and amplitudes of microseismic data is thus a feasible way to estimate the orientation and density of the cracks induced by hydraulic fracturing. Our anisotropic modeling tool can be used to generate and analyze full microseismic wavefields with full moment tensor sources in anisotropic media, which can help promote the anisotropic interpretation and inversion of field data.
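
    The moment tensor injection described above can be sketched for a 2-D staggered grid as follows. This is a minimal illustration, not the authors' code: the function inject_moment_tensor, the grid layout, the Ricker source time function, and the scaling convention (stress decremented by M_ij * s(t) * dt per cell area) are all assumptions of this sketch; the published tool is a full anisotropic 3-D modeller.

```python
import numpy as np

# Illustrative 2-D staggered grid: txx/tzz live at cell centers,
# txz at cell corners (one common staggering convention).
nx, nz, dx, dt = 101, 101, 10.0, 1e-3
txx = np.zeros((nz, nx))
tzz = np.zeros((nz, nx))
txz = np.zeros((nz, nx))


def inject_moment_tensor(txx, tzz, txz, i, k, Mxx, Mzz, Mxz, s, dx, dt):
    """Add one time step of a moment tensor source at cell (k, i).

    The normal components act at the collocated txx/tzz node; the shear
    component is split equally over the four txz nodes surrounding the
    cell so the source remains symmetric on the staggered grid.
    """
    scale = s * dt / dx**2  # assumed 2-D scaling: moment density per cell area
    txx[k, i] -= Mxx * scale
    tzz[k, i] -= Mzz * scale
    # Neighbor offsets depend on the chosen staggering convention.
    for dk, di in ((0, 0), (0, -1), (-1, 0), (-1, -1)):
        txz[k + dk, i + di] -= 0.25 * Mxz * scale


def ricker(t, f0=30.0):
    """Hypothetical Ricker wavelet used as the source time function."""
    a = (np.pi * f0 * (t - 1.0 / f0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)


# Inject one time step of a non-double-couple source at the grid center.
inject_moment_tensor(txx, tzz, txz, i=50, k=50,
                     Mxx=1.0e6, Mzz=1.0e6, Mxz=0.5e6,
                     s=ricker(0.02), dx=dx, dt=dt)
```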
